Augmented reality and mixed reality for healthcare education beyond surgery: an integrative review. Objectives This study aimed to review and synthesize the current research and state of augmented reality (AR), mixed reality (MR) and the applications developed for healthcare education beyond surgery. Methods An integrative review was conducted on all relevant material, drawing on different data sources, including the databases of PubMed, PsycINFO, and ERIC from January 2013 until September 2018. Inductive content analysis and qualitative synthesis were performed. Additionally, the quality of the studies was assessed with different structured tools. Results Twenty-six studies were included. Studies based on both AR and MR involved established applications in 27% of all cases (n=6), the rest being prototypes. The most frequently studied subjects were related to anatomy and anesthesia (n=13). All studies showed several healthcare educational benefits of AR and MR, significantly outperforming traditional learning approaches in 11 studies examining various outcomes. Studies had a low-to-medium quality overall, with a MERSQI mean of 12.26 (SD=2.63), while the single qualitative study had high quality. Conclusions This review suggests progress of learning approaches based on AR and MR for various medical subjects, moving the research base away from feasibility studies on prototypes. Yet, the lacking validity of study conclusions, the heterogeneity of research designs and the widely varied reporting challenge the transferability of the findings of the included studies. Future studies should examine suitable research designs and instructional objectives achievable by AR and MR-based applications to strengthen the evidence base, making it relevant for medical educators and institutions to apply the technologies. Introduction The integration of digital strategies has brought healthcare education to a paradigm shift, now reflected in many educational curricula. 1 Modern teaching curricula aim to educate trainees efficiently in safe environments to establish transferability into the clinical context. Augmented reality (AR) and mixed reality (MR) have long been expected to be disruptive technologies, with potential uses in medical education, training, surgical planning and the guidance of complex procedures. 2 While virtual reality (VR) has mainly led the way for the implementation of the display technologies, it is criticized for several limitations. 3,4 The term display technologies will hereafter be used to refer to AR and MR, although it in principle also covers VR. The latter, however, is beyond the scope of this review. AR describes display-based systems that combine real and virtual imagery, are interactive in real time and register the real-world environment to be augmented by virtual imagery. 5 The visual display technology augments the physical environment mainly through two principal manifestations: see-through (transparent) head-mounted displays and non-immersive monitor-based video (window on the world). 6 AR systems are based on the combination of the physical and the virtual environment. In contrast, in VR systems the participant is totally immersed in a completely virtual one. MR is defined as the merging of real and virtual worlds and can be seen as a larger class of technologies covering the display environment of AR and augmented virtuality (AV). 
7 Where virtual information augments the real view in AR, real-world information augments the virtual scene in AV. The external inputs providing real-world context are also seen in VR but were classified as MR in this review. The term MR was included to embrace new technology labeled as MR, which attempts to draw a clear distinction between AR and MR even though there is none. 8 The ability to provide a situated and authentic experience connected with the real environment and to enhance interaction between physical and virtual content while preserving a feeling of presence explains the growing expectations that AR and MR may be suitable for healthcare education in various contexts. 9 Concerning healthcare education, the process of teaching, learning and training with an ongoing integration of knowledge, experience, skills and responsibility qualifies an individual to practice medicine. 10 Looking into medical education, several authors call for eliminating outdated, inefficient, and passive learning approaches and embracing these newer learning methodologies. 11 Surgeons have historically been quick to adopt new technology, developing new treatment and learning methodologies, while physicians have been slower to do so. 12 Today most studies on display technologies stem from surgery. In an integrative review on AR in healthcare education from 2014, surgical studies accounted for 64% (n=16) of the studies included. 13 A recent systematic review on AR for the surgeon highlights the current lack of systematic reviews for physicians and ultimately educators within the field of medicine. 14 Many internists and other medical specialists no longer diagnose and treat illnesses using only their knowledge of pathophysiology and pharmacology. 15 Today, many physicians have taken up procedures and surgical treatment initiatives by operation or manipulation, defined as the use of the hands to produce a desired movement or therapeutic effect in a part of the body. 16 Nevertheless, medicine consists essentially of non-surgical treatment, procedures and other approaches to diagnostics and prevention of disease that need to be taught, learned and trained with an ongoing evaluation of adaptations. AR and MR may help medical educators achieve such instructional objectives for medical education, just as they are being used for surgical training. According to the review by Zhu and colleagues, publications in the field of AR increased significantly in 2008. 13 Now, ten years after that surge in publications, a new review is warranted. To the best of our knowledge, current reviews on AR and MR have not specifically studied applications for medical subjects in healthcare education. Most reviews predominantly include surgical studies, and only a few have focused on AR in either otolaryngology or medical training. 1,3,4,9,13,17 Currently, no adequate reviews are available that uncover the educational profile of both AR and MR-based applications across different medical specialties, subjects and target groups. The aim of this integrative review was to investigate the current research and state of AR and MR-based applications for healthcare education beyond surgery, providing an overview of the findings, strengths and weaknesses of the reported studies. Methods We chose to conduct an integrative review, given that previous reviews showed only a few studies relevant for the current scope. 
3,4,13,17 This is thought to be the broadest type of review as it allows the inclusion of various research designs and information sources. 18 The method also integrates a process of quality assessment of the included studies that may qualify the integrative review for recommending practice and answering complex search questions. 19,20 The digital databases of PubMed, PsycINFO and ERIC were searched. The journal Medical Teacher was hand-searched. TED Talks and podcasts on the iTunes Podcast app were included, acknowledging the increasing importance of "new media". 21,22 Studies published between January 2013 and September 2018 were included. Relevant word groups, combinations and open-ended terms used for the search were: "Augmented reality OR mixed reality" AND "medicine OR medical OR healthcare" AND "educat* OR simulat* OR train* OR learn*". We did not implement any filter of 'NOT virtual reality OR surgery' in our search string, to avoid missing relevant studies that examined non-surgical elements despite being labeled as surgical studies. Eligibility criteria The selection process was done according to three overall criteria regarding research, focus on technology and content. According to the research criterion, studies were included if they described 1) a goal or research question, 2) an appropriate study design, 3) data collection and analysis methods and 4) the discussion of results. Research articles were excluded if they 1) described neither a goal nor a research question, 2) were review papers or 3) focused on system descriptions without evaluation or other data. Table 1 provides the inclusion and exclusion criteria for the study. Study selection All abstracts were read by JG, who assessed whether they met the inclusion criteria. In case of doubt, JG discussed the inclusion of studies with the other authors. All duplicates were removed. Data extraction and synthesis Study characteristics and information of all articles were extracted and described by JG. Characteristics were authors, study aim, subject of healthcare education, design, participants, outcome measures, results, application/technologies, training time and display system. Content analysis was used to describe the study designs and to inductively identify the strengths and weaknesses of AR and MR as described by the included studies. Quality assessment The methodological quality of quantitative and mixed methods studies was evaluated with the Medical Education Research Study Quality Instrument (MERSQI). 23 This 10-item instrument has been thoroughly assessed and evaluated for its correlation with other assessment tools for research quality. 24 MERSQI covers six domains of studies: study design, sampling, type of data, validity of the evaluation instrument, data analysis and outcome. All domains assign 0-3 points, giving a final score between 0 and 18, with a larger number indicating better study quality. The score will be presented as mean, standard deviation (SD) and range in parentheses. Each study was scored at the highest possible level. If a study reported more than one outcome, the rating for the highest outcome score was recorded, without differentiating between primary and secondary outcomes. The quality assessment of all studies was done by JG. In addition, to assess the quality of JG's evaluation, approximately 20% of the studies were randomly selected for assessment by co-authors and independently evaluated by at least two authors. 
We computed the intraclass correlation coefficient (ICC) to calculate the inter-rater reliability (IRR) between all authors. The methodological quality of qualitative studies was evaluated with a 12-item grid for Appraising Qualitative Research Articles in Medical Education that was converted into a quality assessment tool (AQRAME) by the authors of this review. 25 The instrument covers five domains: introduction, methods, results, discussion and conclusion. The methods domain assigns 0-5 points and the conclusion domain only 0-1 point, while the three remaining domains assign 0-2 points. This gives a score range between 0 and 12 points, with a larger number indicating better study quality. A score of 0.5 was given in case of an unclear answer of neither yes nor no. The score will be presented as mean, SD and range in parentheses. An overall quality assessment tool was developed for rating all included studies regardless of their methodological design, assigning a score of 1 to 7, with the larger number indicating better study quality. This was introduced to challenge the relative judgements of the MERSQI and AQRAME, acknowledging that different research questions inherently require different study designs. The appraisal was based on the need to be explicit about the role and assessment of the researcher in qualitative research. 26 For studies with mixed-method designs, we applied the MERSQI tool only, rating the quantitative parts of the study. Results Out of the 315 papers initially identified, four duplicates were removed, three articles in Chinese were excluded, and one article could not be retrieved. No reporting of research was found in 14 TED Talks and iTunes podcasts. Three hundred seven publications were screened and 281 were excluded as they did not meet the inclusion criteria. Study subjects related to nasogastric tube insertion, facet joint injection, catheterization or needle guidance were interpreted as clinically related to medicine as a practice of diagnosis, and these studies were therefore classified as fulfilling the inclusion criteria. One study focusing on resection planning was included and categorized as preoperative visualization. 27 However, needle insertion itself was interpreted as not producing a desired movement or therapeutic effect in a part of the body and was therefore not classified as a surgical procedure. This resulted in a total of 26 studies being included in the integrative review. The flow chart of publications selected for inclusion in this integrative review is displayed in Figure 1. Study characteristics The studies applied AR and MR primarily by integrating the display technologies into knowledge platforms and guidance systems for simulator practice. Some studies offered feedback in the endeavor to learn a skill or a field of knowledge, while others provided an immersion into scenarios and remote assessment-training for telemedicine. The display technologies showed the ability to stimulate the learning process and support the learner for several competencies: to understand spatial relationships and construct mental 3D models of anatomy with or without the help of 2D imaging; and to acquire cognitive-psychomotor abilities, prolong learning retention, experience student-centered motivation and obtain the flexibility to learn anytime and anywhere at their own pace and in their own style. Furthermore, the studies suggested that AR and MR could complement practice in safe simulation environments, contributing to patient safety and a higher degree of confidence (see Appendix 1, "Summary of results"). 
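The quality-assessment arithmetic described in the Methods can be illustrated with a short sketch in which MERSQI totals (0-18) are summarized as a mean, SD and range, and inter-rater reliability on the roughly 20% subsample is expressed as an intraclass correlation coefficient. The ratings below are hypothetical, and the one-way ICC(1,1) model is only one common choice; the review does not state which ICC model was used.

import numpy as np

def icc_oneway(scores):
    """One-way random-effects ICC(1,1) for an (n_studies x n_raters) matrix of scores."""
    n, k = scores.shape
    study_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    ms_between = k * np.sum((study_means - grand_mean) ** 2) / (n - 1)
    ms_within = np.sum((scores - study_means[:, None]) ** 2) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical MERSQI totals (0-18) from three raters for a ~20% subsample of studies
ratings = np.array([
    [12.5, 11.0, 12.0],
    [ 9.0, 10.5,  9.5],
    [15.0, 14.0, 15.5],
    [ 7.5,  9.0,  8.0],
    [13.0, 13.5, 12.0],
])

per_study = ratings.mean(axis=1)  # one consensus score per study
print(f"MERSQI mean={per_study.mean():.2f}, SD={per_study.std(ddof=1):.2f}, "
      f"range={per_study.min():.1f}-{per_study.max():.1f}")
print(f"Inter-rater reliability ICC(1,1)={icc_oneway(ratings):.2f}")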
Technical specifications The majority of studies (n=22) examined an actual application of AR. The rest (n=4) investigated an application based on MR. 27,50-52 Six applications developed by companies were reported in 10 studies. 30,31,37,39,40,43,47,48,50,51 The remaining studies (n=16) involved self-developed applications, primarily developed at universities and hospitals. Mobile device-based (tablet and smartphone) applications were used in nine studies. 33,35,37,39,41,42,47-49 Of these, two-thirds (n=6) involved camera- and marker-based recognition, and three studies did not report any further details on the applications developed. 41,47,48 Eight studies implemented head-mounted displays. 27,28,38,40,43-46 Two studies utilized the same head-mounted display. 40,43 The head-mounted display-integrated applications had marker-based recognition in four of the studies. 28,40,43,44 One study recognized the hands and gestures of a mentor, projecting these into the trainee's display. 46 Two studies implemented a foot pedal to interact with the application. 27,38 For one study this included toggling between AR and MR modes. 27 Computers were used in 11 studies. 30,31,34,36,38,40,43,46,50-52 These delivered the computing power for head-mounted display-based applications in four studies. 38,40,43,46 One computer-based application had marker-based recognition. 36 Seven studies were sensor-based. 30,31,34,46,50-52 Two studies recognized landmarks of the user's body. 30,31 Four studies recognized a virtual model registered with a phantom, characterized as MR. 27,50-52 Eleven studies reported using external cameras and tracking devices. 27,28,31,32,34,36,44,50-52 Two studies used applications based on projectors, one recognizing markers on a phantom, and one projecting images directly onto a phantom without using a tracking device. 29,51 Methodological quality Of the 26 included studies, nine were solely quantitative, 16 used mixed research methods and one was qualitative. Based on rating comparisons of the approximately 20% (n=5) randomly selected papers, the authors agreed to use the ratings by JG for MERSQI, AQRAME and the overall score for the remaining papers. The average total MERSQI score of the 25 quantitative and mixed methods studies was 12.26, SD=2.63 (range 7-15.5). The ICC between all raters was computed to IRR=.50 for the MERSQI overall score, which corresponds to moderate reliability. 53 Nearly one-third of all studies (n=8) either had no evaluation tool or did not report any validity of the instrument used. 28-35 The qualitative study involved semi-structured face-to-face interviews that explored the needs and challenges of applying AR for healthcare education. The study demonstrated detailed clarity and rigor according to the individual AQRAME scores of all three authors, corresponding to 12 (JG), 11.5 (CBS), and 12 (PD). As there was only one qualitative study, we did not report any IRR for the AQRAME overall score. The mean overall quality score of all studies was 4.08, SD=1.65 (range 1-7), with an adjusted ICC of IRR=.429, also corresponding to moderate reliability. 53 The scores of the individual studies and the study characteristics are reported in Appendix 1. Strengths and weaknesses of AR and MR Three themes were inductively identified indicating the strengths and weaknesses of AR and MR in healthcare education beyond surgery. 
Implemented across various subjects for learner types of all levels spanning different sectors The most frequently studied subjects of healthcare education were found within anatomy (n=6) and anesthesia (n=7), the latter represented by four studies focusing on central vein catheterization. 29,38,44,52 Study participants were divided into 12 different categories: pre-medical, medical, nursing, and health science students, novices, residents, fellows and established clinicians of different specialties, technicians, non-clinicians, non-specified participants and managers. The mean number of participants was 77.1, SD=170.6 (range 1-880); the sample size was set to one in a study that did not report or specify the study participants. 33 The distribution of studies across subjects of healthcare education in relation to the number of participants enrolled is described in Appendix 2. Growing evidence for improving learning In 11 studies, AR and MR were claimed to significantly improve the learning process or associated part-tasks in all or in the majority of outcome measures. 27,29,36,37,39,40,43,48-50,52 Four out of six studies examining the acquisition of anatomy knowledge reported significantly improved learning. 36,37,39,49 Significant positive findings were found in six of 11 studies concerning skill training of needle insertion, favoring both students and established clinicians. 27,29,40,43,50,52 Procedure time was significantly reduced in three of nine studies. 27,29,52 Examining different questionnaire-based aspects of the learning experience and user acceptance, four of 19 studies demonstrated significant positive findings supporting the usability of the display technologies. 36,37,39,48 Fifteen studies found no significant positive results, but all suggested that the AR and MR-based applications may outperform traditional learning approaches within the involved subjects of healthcare education. 28,30-35,38,41,42,44-47,51 Other promising learning factors facilitated by the display technologies were related to visualization, directing attention, intrinsic benefits of motivation, physical interaction activating kinesthetic schemes, patient safety, skill retention, simulation confidence related to transferability, mobile learning and using oneself as a learning object. 39,41,42,45,49,51 Weaknesses Shortcomings of the study designs for transferability Four studies were designed as single-group user studies only, making strong conclusions difficult. 31-33,35 Twenty-two studies used a group design or comparison, of which most (n=17) compared two groups. 27-30,34,36,38-40,42,44,45,47-51 Only two studies did not compare AR or MR with another medium, the comparison media corresponding to lectures, books, video, virtual reality, mobile devices, conventional training platforms, and a full telemedicine setup. 28,34 Two studies compared the medium of mobile devices after having provided AR content to one of the groups. 41,42 Five studies encompassed three groups. 37,41,43,46,52 Two of the two-group studies used a cross-over design. 29,30 No study involved patients in an authentic context, but two studies included patient data. 27,32 Lacking evidence for improving learning Eight studies reported descriptive frequencies of self-reported evaluations and measures without any statistical analysis of significance. 
28,30-35,47 Seven studies claimed the display technologies offered no significant impact for improving learning in all or in the majority of outcome measures. 38,41,42,44-46,51 The two studies that compared AR within the same medium of mobile devices found no significant difference in any of the outcome measures. 41,42 Only a single study presented a significant negative finding of prolonged completion time of an ultrasound examination in the AR group. 46 Potentially conflicting factors were addressed in terms of visual misperception, media- or technology-enthusiasm-based motivation, negation of patient discomfort related to patient safety, and missing translation of performance from simulation to the clinical setting. 27,41,50,51 Discussion Virtual augmentation and guidance by AR and MR are increasingly used in applications for medical subjects of healthcare education in recent years. The quality of the existing studies and applications, including the educational benefits of the display technologies, remains unclear at the moment. We reviewed the current research and state of AR and MR-based applications for healthcare education in medical disciplines beyond surgery. Our integrative review identified 26 original studies examining various applications of both display technologies. The applications were found to measure numerous outcomes related to the learning process, acquisition of knowledge and skill training, while providing feedback on patient care-related outcomes such as complication rates, insertion time and needle path related to tissue damage. This differs greatly from the findings of a systematic review by Barsom and colleagues on applications for medical training for professionals, in which none were developed to measure the prevention of errors in the interest of patient safety. 4 Our work revealed an increased emergence of established applications, corresponding to 27% (n=6) investigated in 10 studies, against 16 prototypes. A prior review by Zhu and colleagues only found one established application, for laparoscopic colorectal surgery. 13 In the same review, the authors found the application designs lacking guidance by learning theories and resting only on traditional learning strategies. We observed that the applications of AR and MR still have not exploited the integration of learning theories and strategies into their design. Still, the increased number of established applications is a step towards turning the research base away from feasibility studies examining prototypes. We conclude that the studies overall were of low-to-medium quality. This is consistent with the low to modest strength of evidence reported in previous systematic reviews. 4,17 The single qualitative study was found to be of high quality in terms of clarity and rigor, while the relative judgement of its overall quality was low-to-medium. The greatest limitation across the pool of studies, noted in nearly one-third of all studies (n=8), was either the complete lack or the poor reporting of the validity of the evaluation instruments that indirectly provide the evidence base for the study findings. Additionally, the statistical analyses reported incomplete results or were unclearly interpreted. Shortcomings of the reviewed studies further included heterogeneity of research designs, unstandardized outcome measures and wide variation in the details given. Widespread heterogeneity among studies is stated to be one of the greatest challenges of quantitatively synthesizing research evidence. 
54 At the same time, an outspoken concern argues that media-comparative studies in learning are virtually useless and not valid for comparison. 55 From this perspective, the studies failed to determine which media or technologies were best for healthcare education but rather informed practice with the specific application. These limitations are general for much education research but may be especially pronounced for research at the nexus of learning and technology. 56 Nevertheless, we did not exclude studies based on their quality, due to our aim of providing an overview of the strengths and weaknesses of all relevant research in AR and MR for healthcare education beyond surgery during the past half-decade. Limitations and recommendations for future studies To our knowledge, this is the first integrative review of AR and MR solely focusing on medical subjects of healthcare education. Three articles in Chinese were not included, meaning that we possibly excluded relevant knowledge. Moreover, we may have missed relevant research published in technical journals or not published at all, as our main focus was on databases for healthcare and education. Our finding that all included studies suggested or reported significant positive findings should be interpreted with caution, since publication bias cannot be excluded. We tried to minimize the drop-out of relevant material by including unpublished work from new online sources such as TED Talks and the podcast media of iTunes. A contentious issue was that the designs and presentations of these sources varied too extensively without enhancing the quality and usefulness of the review. Our study abstained from addressing the educational profile of AV compared to AR, both being encompassed by MR. This could not be done due to the low number of studies measuring AV-based learning, possibly related to the limited technological and conceptual understanding of MR across the research field and industry. The quality of the included studies was assessed with the MERSQI scale, which revealed inconsistencies across a few domains in the process of rating. This was mainly due to missing information in the reviewed studies as well as a lack of clarity in the MERSQI guidelines. Though moderate reliability was found between all raters in the MERSQI and the overall quality assessment tool, one could argue that the sample size of the rating, corresponding to approximately 20% (n=5) of the studies, hinders or even disallows reliable calculations beyond descriptive analysis. Finally, the self-developed assessment tool AQRAME has not been validated for quality scoring of qualitative research, despite relying on a known 12-item grid for quality appraisal. This tool was introduced since we were not aware of any validated evaluation instruments for quality assessment of qualitative research in healthcare education. A variety of applications for subjects of healthcare education beyond surgery have been developed, and their benefits were supported by this integrative review. We expect that more research will be done in the field as more institutions explore and apply applications based on AR and MR in the future. Randomized controlled trials should continuously be organized for evaluating clinical performance and patient care-related outcomes. Specifically, the actual effects on real patients and physician behaviors towards patients in a real context are yet to be elucidated. 
We recommend that future studies justify and validate metrics and report the reliability of measures for higher-quality evaluations. Established guidelines and recommendations for high-quality research, formulating joint standards, could promote the adoption of the display technologies and facilitate exchange among researchers, educators and developers with widely different experiences and approaches. 57 Similar to the words of David A. Cook, professor of medicine and medical education, we suggest placing more emphasis on 'how' and 'when' to use AR and MR-based learning and focusing less on 'whether'. 55 In answering these questions, researchers, educators and developers should share and evaluate instructional design and learning theory-based methods while looking into the effective use of simulation and the integration of the display technologies within and between institutions. Eventually, this could also provide an understanding of the learning concepts revealed in the included studies, involving intrinsic benefits of motivation, physical interaction activating kinesthetic schemes, skill retention, transferability of simulation confidence, mobile learning and using oneself as a learning object. By defining instructional objectives beforehand, the display technologies should be used only when they could refine or even replace training programs and curricula. That being said, partially immersive environments such as AR and MR may offer unique qualities specifically for assessment and for training procedural strategies integrating real patient data without breaching patient safety. By using non-invasive sensors for imaging, the display technologies could complement the established imaging technologies of MRI, CT scan and ultrasound for monitoring technical performance with an objective-comparative function, as observed in our review. 27,29,50 To tap the full potential of the display technologies, the study and application design must be based on a thorough investigation of the educational context, learner types and learning objectives, whether the latter are cognitive, technical, or non-technical, such as measuring situational awareness, communication, or stress coping. Conclusions This review reports the current state of AR and MR-based applications for healthcare education beyond surgery. Studies based on both display technologies across various specialties and subjects indicate an increased number of established applications, moving the research base away from feasibility studies on prototypes. All included studies suggested various healthcare educational benefits of the display technologies, which significantly outperformed traditional learning approaches in 11 studies, specifically regarding the acquisition of anatomy knowledge and needle insertion skills. Yet, this review identifies multiple shortcomings of the studies. Study quality was low-to-medium, especially due to the lacking validity of the evaluation instruments, the heterogeneity of research designs and the widely varied reporting. Future studies are thus needed for researchers, educators and developers to build an evidence base defining suitable research designs and instructional objectives achievable by AR and MR-based applications, for these to complement conventional learning and curricula and to drive a transformation in healthcare education. Appendix 1. Study characteristics including quality scores 
Two example entries from the appendix: In one study, peak values of the forces and the pattern of the force profile corresponded to related work, and the system was positively reviewed regarding functionality, visual feedback, and haptic feedback; the application was self-developed for a computer coupled to a haptic device with a stylus and a camera recognizing sensors attached to a dummy ultrasound probe and a phantom. In another study, performance of the AR group was not significantly improved (p=.534), and the AR group had a significantly prolonged completion time (p=.008); the AR group showed no significant difference, although they favored the utility of AR (p=.065) and reported a lower cognitive load (p=.28); the application was self-developed for a head-mounted display with an ultrasound probe connected to a computer and live-streamed to a mentor connected to a sensor-controller projecting the mentor's hands and gestures back into the AR space of the trainees.
A Task Allocation Algorithm Based on Score Incentive Mechanism for Wireless Sensor Networks A wireless sensor network (WSN) consists of many resource-constrained sensor nodes, which are always deployed in unattended environments. Therefore, the sensor nodes are vulnerable to failure and malicious attacks. Failed nodes have a strongly negative impact on WSNs' real-time services. Therefore, we propose a task allocation algorithm based on a score incentive mechanism (TASIM) for WSNs. In TASIM, a score is used to reward or punish sensor nodes' task execution in cluster-based WSNs, where cluster heads are responsible for task allocation and score calculation. Based on the task scores, cluster members can collaborate with each other to complete complex tasks. In addition, the uncompleted tasks on failed nodes can be migrated in a timely manner to other cluster members for further execution. Furthermore, the uncompleted tasks on death nodes can be reallocated by cluster heads. Simulation results demonstrate that TASIM is quite suitable for real-time task allocation. In addition, the performance of TASIM is clearly better than that of conventional task allocation algorithms in terms of both network load balance and energy consumption. Introduction WSNs consist of many tiny, light, and energy-limited sensor nodes, which are always deployed in unattended and hostile environments [1]. In recent years, WSNs have been widely used in many applications including military, industrial, household, medical, marine, and other fields, especially in natural disaster monitoring, early warning, rescuing, and other emergency situations [2]. In these applications, especially those such as image processing, data fusion, and multimedia data compression, the real-time demand is increasingly high. Meanwhile, the task complexity of the applications is also increasing. It is generally known that a single sensor node with limited resources is unable to complete a complex task. Therefore, it is especially important to find an appropriate task allocation scheme, which can efficiently assign a large number of complex tasks to several collaborative nodes for distributed processing. For some complex real-time applications, it is required that all complex tasks be successfully completed in time even if some sensor nodes fail or are attacked by malicious nodes. This puts forward a higher requirement for task allocation algorithm design in WSNs. On the one hand, the sensor nodes are vulnerable to failure and malicious attacks. These failures will reduce the efficiency of the original task allocation algorithm and eventually interrupt the execution of the algorithm. On the other hand, a sensor node often cannot finish a complex task in time due to its limited resources and processing capacity. If one sensor node runs out of energy, the task allocation algorithm needs to run again from the start, which wastes large amounts of energy and cannot satisfy the real-time requirement of the application. Many task allocation algorithms have been proposed for WSNs [3-7]. However, they are not suitable for real-time complex task execution. Based on the abovementioned problems, we propose a task allocation algorithm based on a score incentive mechanism (TASIM) for real-time complex task execution in WSNs. The considered WSN consists of many heterogeneous sensor nodes, which have different initial energy levels and different task processing speeds. 
The proposed TASIM can divide a complex task into several subtasks and allocate them to different sensor nodes for collaborative execution, which can efficiently reduce the energy consumption of a single sensor node and balance the energy consumption of the whole network. In addition, if a sensor node fails or dies, the uncompleted tasks on the sensor node can be migrated to another sensor node as soon as possible based on a task migration algorithm, which ensures that the tasks are completed in time and accordingly prolongs the network lifetime. The contributions of the paper are listed as follows. (i) A score incentive mechanism is proposed for task allocation in WSNs. A sensor node which successfully completes tasks can be rewarded with some scores, while a sensor node which is unable to finish tasks is punished by deducting some scores. Based on the scores, the uncompleted tasks on failed nodes can be migrated in a timely manner to other sensor nodes for further execution, which ensures that the complex tasks can be finished in time. (ii) The concept of ranking domain is adopted in the process of task allocation. The sensor nodes are divided into different ranking domains based on their different resource levels and service abilities. Cluster members in different ranking domains can communicate with each other to complete tasks cooperatively, which efficiently balances the sensor nodes' energy consumption, shortens the algorithm's running time, and prolongs the network lifetime. (iii) A heterogeneous network structure is adopted for task allocation. Heterogeneous networks are a new trend in the development of WSNs, where sensor nodes have different initial energy levels and different computation, communication, processing, and storage abilities. Therefore, in the process of task allocation, the heterogeneity of sensor nodes should be taken into account. The rest of this paper is organized as follows. Section 2 introduces related work about task allocation in WSNs. Section 3 gives the system models, including the network model and the task model. Section 4 presents the score incentive mechanism and the concept of ranking domain. Section 5 investigates the details of TASIM. Section 6 presents simulation results. Finally, Section 7 concludes the paper. Related Work In recent years, many task allocation and scheduling algorithms have been proposed for WSNs, which can be classified into three categories: energy-efficient, real-time, and hybrid task allocation algorithms. In the energy-efficient task allocation algorithms, the limited energy of sensor nodes is taken into consideration to process the tasks with less power, with the ultimate aim of reducing sensor nodes' energy consumption and prolonging network lifetime. In the real-time task allocation algorithms, task processing time is taken into consideration, which aims to process the tasks with less delay, while in the hybrid task allocation algorithms, both energy efficiency and task processing time are taken into account to improve the performance of task allocation. Energy Efficient Task Allocation Algorithms. In [8], Xie and Qin proposed a novel balanced energy-aware task allocation (BEATA) algorithm for heterogeneous networked embedded systems. The BEATA algorithm aims at making the best trade-off between energy saving and schedule length. The trade-off is achieved by the defined energy-adaptive window (EAW). In the EAW scheme, a sensor node with lower energy consumption and an earlier finish time is chosen for task processing. 
However, the residual energy of sensor nodes is not taken into account when choosing processing nodes, which ultimately causes some nodes to be chosen frequently and to die early. Therefore, BEATA cannot well balance the energy consumption of the whole network. In [9], an online task scheduling mechanism called CoRAl is proposed to allocate network resources of WSNs. Given the amount of available resources and the task set, CoRAl can determine the sampling frequency for all the tasks so that the frequencies of the tasks on each sensor are optimized subject to the previously evaluated upper-bound execution frequencies. However, CoRAl does not address mapping tasks to sensor nodes. In addition, CoRAl fails to explicitly discuss the energy consumption of sensor nodes. In [10], a novel task allocation algorithm based on the A* algorithm is proposed to allocate tasks. Considering the problem of sensor nodes' limited energy, a greedy A* algorithm is proposed to optimize the complexity of the A* algorithm. However, the sensor nodes conducting task allocation consume much more energy than ordinary sensor nodes. In addition, in the task allocation phase, it is not easy to ensure the parallel processing of tasks. In [11], a novel task grouping method is proposed to ensure the parallel processing of tasks. However, how to group a child subtask which has several parent tasks is not discussed in [11]. In [12], an energy-constrained task mapping and scheduling scheme called EcoMapS is proposed. First, a channel model is presented for single-hop WSNs. Then, based on this channel model, communication and computation are jointly scheduled. Furthermore, a quick recovery algorithm is executed in case of sensor node failures. EcoMapS efficiently minimizes the schedule length subject to the energy consumption constraint in single-hop clustered WSNs. However, EcoMapS does not provide a guarantee on the execution deadline; thus it is not suitable for real-time applications. Real-Time Task Allocation Algorithms. In [13], Tian and Ekici proposed a task allocation algorithm for multiple heterogeneous WSNs, which is a cross-layer collaborative processing scheme. The algorithm computes tasks in the application layer while scheduling communication tasks on the MAC and network layers. Computation tasks and communication tasks can be scheduled at the same time, which ensures the real-time performance of task allocation. In [14], Li et al. proposed a task auction algorithm based on the contract net. Another auction-based strategy for distributed task allocation is proposed in [15]. The real-time distributed task allocation problem is formulated as an incomplete-information reverse auction game. Then, based on the winner determination protocol (WDP), an energy- and delay-efficient distributed winner determination protocol (ED-WDP) is proposed to select winner nodes to perform tasks. To the best of our knowledge, this is the first work that considers a completely distributed framework for auction-based task allocation with an effective distributed winner protocol for WSN applications. In [16], a dynamic task allocation algorithm is proposed for multihop WSNs with low node mobility. The proposed algorithm incorporates a fast task reallocation algorithm to quickly recover from network disruptions, such as node or link failures. Since the algorithm runtime incurs considerable time delay while updating task assignments, an adaptive window size is introduced to limit the task processing delay. 
To the best of our knowledge, this is the first study for mobile multihop WSNs. Hybrid Task Allocation Algorithms. In [17], Okhovvat et al. proposed a novel task allocation algorithm based on the queuing theory, which considers both the system energy optimization and the task completion time. The proposed algorithm can efficiently shorten the task processing time. However, in WSNs, all the sensor nodes are energy limited and the energy is hard to supply. The residual energy of sensor nodes decreases along with the increase of number of processing tasks. The task processing performance will be directly affected, which could not remain unchanged, while in [18], Zhu proposed another task allocation algorithm to minimize energy consumption, which may cause serious processing delay. In [19], a self-adapted task scheduling strategy is proposed based on a discrete particle swarm algorithm. First, a multiagent-based architecture for WSNs is proposed and a mathematical model of dynamic alliance is constructed for task allocation. In the dynamic alliance, the task allocation is conducted based on the energy consumption of sensor nodes, the amount of network loads, and the execution time of tasks to achieve parallel processing of tasks. In [20], an energy efficient real-time dynamic task allocation algorithm is proposed to deal with the failure cluster members, which not only considers the characteristics of the cluster members, but also considers the communication overhead and the task processing time. The algorithm minimizes the energy consumption of the whole network by reducing the number of activated cluster members. But this algorithm completely runs on the cluster head, which requires a higher energy level and processing ability of the cluster head. In [21], Tian et al. developed an application-independent task mapping and scheduling solution, which not only provides real-time guarantee, but also implements dynamic voltage scaling mechanism to further optimize energy consumption. Using a novel multihop channel model and a communication scheduling algorithm, computation tasks and associated communication events are scheduled simultaneously with a dynamic critical-path scheduling algorithm. However, the proposed solution is not suitable for heterogeneous WSNs, where constituent components are not known a priori. Another similar task allocation algorithm which concentrates on both energy saving and time constraint is proposed in [22]. The proposed algorithm is a two-phase task allocation technique based on queuing theory. In the first phase, tasks are equally assigned to sensor nodes to measure the service capabilities of them. In the second phase, tasks are specifically allocated to some sensor nodes according to their measured capabilities in such a way to reduce the total completion times of all the tasks in network. However, the residual energy of sensor nodes dynamically changes with the number of processing tasks, which results in dynamic capabilities of sensor nodes. Therefore, how to dynamically update capabilities of sensor nodes needs further research. Since network dynamicity causes additional complexity in a task allocation system, in [23], a dynamic task allocation and scheduling (DTAS) framework is proposed, in which algorithm complexity, the corresponding runtime, and energy consumption are explicitly taken into account. First, a heuristic minimum hop count algorithm is designed which can effectively reduce problem complexity. 
Second, a self-learning process (SLP) based on a GA (Genetic Algorithm) is applied, so that multiple design objectives can be met. Intermediate results of the SLP can be provided as temporary suboptimal solutions to cope with changing network conditions. The fitness function in the SLP initially favors meeting the deadline requirement and then gradually leans towards a balanced solution between task execution time and network lifetime. Finally, to deal with sudden node or link failure events and to update the solutions in the SLP, a fast task recovery algorithm (FTRA) is designed to quickly reallocate faulty task assignments. Network Model. The WSN considered in this paper is a cluster-based wireless sensor network, where all the sensor nodes are heterogeneous. That is, all the sensor nodes have different initial energy levels and different task processing speeds. Adjacent sensor nodes form a cluster, which consists of two kinds of sensor nodes: cluster members and cluster heads. Each cluster head leads and controls several cluster members. The cluster heads can directly communicate with the base station. Also, the cluster heads can communicate with each other through multiple hops, while the cluster members can only directly communicate with their cluster heads or neighbor nodes. Compared with cluster members, the cluster head has a higher initial energy level and a higher task processing speed. In addition, the cluster heads are responsible for allocating tasks to their cluster members. Task Model. A complicated task consists of several subtasks and can be denoted by a directed acyclic graph (DAG) G = (V, E, W, C). As shown in Figure 1, a DAG consists of a set of vertices V representing the tasks that need to be executed and a set of directed edges E representing the dependencies among tasks. Each task v_i has an execution deadline dl_i. In order to ensure the smooth completion of the complex task, each subtask must be completed before its execution deadline. If there is a directed edge from a vertex v_i to another vertex v_j, the execution of subtask v_j needs the output of the execution of subtask v_i. Therefore, subtask v_j needs to be processed after the completion of v_i. The weight value of a vertex refers to the amount of computation of the task. The weight value of a directed edge refers to the communication traffic between two tasks. For the sake of simplicity, we assume that the amount of computation approximately equals the amount of transmitted data, which follows a normal distribution. For an edge (v_i, v_j), v_i is called an immediate predecessor of v_j, and v_j is called an immediate successor of v_i. A task without immediate predecessors is an entry task and a task without immediate successors is an exit task. If one task is scheduled on one node while its immediate predecessor is scheduled on another node, a communication between the two nodes is required. Therefore, v_j cannot start its execution until the communication is completed and the result of v_i is received. If both v_i and v_j are assigned to the same node, the communication latency is considered to be zero, and v_j can start execution as soon as v_i is finished. In addition, we assume that a subtask can only be assigned to one sensor node, but each sensor node can process several subtasks. Score Incentive Mechanism and Ranking Domain Many task allocation algorithms have been proposed for cluster-based WSNs [18,20]. We also proposed a distributed task allocation strategy for collaborative applications (DTAC) in cluster-based WSNs [24]. 
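To make the task model concrete, the following Python sketch represents a complex task as a DAG of subtasks with computation amounts, deadlines and edge weights, and derives a dependency-respecting allocation order by topological sorting. All names and values are illustrative; the paper provides no code, and the tuple notation G = (V, E, W, C) used above is an assumed reading of the model.

from dataclasses import dataclass, field
from graphlib import TopologicalSorter

@dataclass
class Subtask:
    name: str
    computation: float   # vertex weight: amount of computation
    deadline: float      # execution deadline dl_i
    predecessors: dict = field(default_factory=dict)  # predecessor name -> edge weight (communication traffic)

def allocation_order(subtasks):
    """Return the subtasks in an order that respects the DAG dependencies (entry tasks first)."""
    ts = TopologicalSorter({t.name: set(t.predecessors) for t in subtasks})
    return list(ts.static_order())

def communication_latency(task, pred_name, same_node):
    """Latency is considered zero when predecessor and successor are assigned to the same node."""
    return 0.0 if same_node else task.predecessors[pred_name]

# A tiny example DAG: t1 is the entry task and t3 is the exit task.
tasks = [
    Subtask("t1", computation=5.0, deadline=10.0),
    Subtask("t2", computation=3.0, deadline=15.0, predecessors={"t1": 2.0}),
    Subtask("t3", computation=4.0, deadline=20.0, predecessors={"t1": 1.0, "t2": 2.5}),
]
print(allocation_order(tasks))  # ['t1', 't2', 't3']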
In order to further improve the efficiency of task allocation, balance the energy consumption of sensor nodes, and prolong the network lifetime, the concepts of score and ranking domain are introduced in the paper. The score is used to quantitatively assess the task completion of sensor nodes by reward or punishment. Sensor nodes which successfully complete a task are rewarded with some scores, while those which fail to complete a task are punished by the deduction of scores. Based on the scores, the sensor nodes are classified into different ranking domains, which are used to help sensor nodes collaborate with each other. In addition, an unstable node can ask for other nodes' collaboration by exchanging its scores; thus its uncompleted tasks can be migrated to the collaborative nodes for further completion. Definition 1. Basic score BasicScore_rate: the basic score of a sensor node is the reward score obtained per joule of energy consumed. It is computed from the residual energy of the sensor node, the total score of the node, the task arrival time, the deadline of the task, the idle time of the sensor node, and an adjusting parameter. If the task is allocated by a cluster head, the basic score is calculated by the cluster head. If the task is migrated between cluster members, the basic score is calculated by the cluster member itself. Definition 2. Energy consumption of task completion E_total: the total energy consumed to complete a given task. Definition 3. Reward score Score_reward: the score with which a sensor node that successfully completes a task is rewarded. Definition 4. Penalty score Score_punish: the score deducted from a sensor node that fails to complete a task. Definition 5. Total score Score_total: over a period of time, all the scores that a sensor node has obtained, computed from the number of tasks allocated to the node, the number of successfully completed tasks, and the number of unsuccessfully completed tasks. The total score of a sensor node is always nonnegative; therefore, if the total score is zero, it is no longer deducted. Ranking Domain. In the process of task allocation, the sensor nodes are selected based on their reliability, historical participation, energy consumption, residual energy level, and so forth. Therefore, in this section, the ranking domain is defined to divide the sensor nodes into different levels. The sensor nodes with a higher level of ranking domain can be preferentially selected to execute tasks. Definition 6. Trust of a sensor node: computed from the sum of the reward scores Score_reward obtained by the node. Definition 7. Participation of a sensor node: computed from the total number of tasks released by the cluster head and the number of tasks allocated to the sensor node. Definition 8. Residual energy rate of a sensor node: computed from the initial energy and the residual energy of the node. Definition 9. Service capability of a sensor node: a weighted combination of trust, participation, and residual energy rate, where the three nonnegative weights sum to 1. Definition 10. Ranking domain: sensor nodes can be divided into different levels based on their service capabilities. A service capability level is named a ranking domain. The service capabilities of sensor nodes in the same ranking domain are almost the same. In this paper, the sensor nodes are divided into three ranking domains (R1, R2, R3). First, a threshold Th is defined for the rate of residual energy. If the residual energy rate of a sensor node is lower than the threshold (< Th), the sensor node is considered to have insufficient energy to complete the task. 
It is therefore assigned to the third ranking domain R3. Otherwise, the sensor node is considered to have enough service capacity for task completion. Based on specific environmental parameters, the ratio of sensor nodes in the first ranking domain can be obtained. If the number of sensor nodes which have enough service capacity is less than 2, they are all assigned to the first ranking domain. The cluster head is responsible for assigning the ranking domains of its cluster members. For each sensor node, there is a node list, in which the node ID, the residual energy level, the total score, the number of successfully completed tasks, the number of tasks waiting for completion, and the related ranking domain are all recorded. After each completed task, the residual energy level, the total score, the number of successfully completed tasks, and the number of tasks waiting for completion of the executing node differ from before. Therefore, the calculation of ranking domains is dynamic and is updated along with the dynamic task allocation. A Task Allocation Algorithm Based on Score Incentive Mechanism TASIM consists of three phases: (1) task allocation on the cluster head, (2) task migration on an unstable node, and (3) task reallocation on a death node. Task Allocation on Cluster Head. Task allocation on the cluster head is a kind of centralized allocation scheme. The allocation principle is to first select the sensor nodes in the first ranking domain. Based on the deadline and the energy needed to complete a task, the sensor nodes in the first ranking domain are first selectively chosen as candidate nodes. If there are not enough sensor nodes in the first ranking domain, the sensor nodes in the second ranking domain will be selected. Then, the basic scores of the candidate nodes are calculated. The candidate node with the least basic score is chosen as the working node for task execution. The details of task allocation are listed as follows. Step 1. Sort all the subtasks based on their dependencies on each other. The sorted subtasks are listed in a task allocation queue. The subtask at the front of the task allocation queue is preferentially allocated. In addition, all the subtasks in the allocation queue can be allocated and executed at the same time. Step 2. Allocate the first subtask in the task allocation queue. The cluster head first estimates the energy needed to complete the task. Then, the sensor nodes with enough residual energy for task completion are selected as candidate nodes. Step 3. Select the working node from the candidate nodes. First, judge whether the candidate nodes can complete the task within the deadline of the task. The sensor nodes which are not able to complete the task within the deadline are removed from the candidate nodes. Then, the candidate node with the least basic score is chosen as the working node. Step 4. If there are not enough sensor nodes in the first ranking domain, the sensor nodes in the second ranking domain will be further selected based on the abovementioned three steps. Step 5. After selecting the working node, the cluster head will send the subtasks and the related basic scores to the working node. Step 6. Task execution: the working node executes the allocated subtasks and waits for the allocation of the next subtasks. The rest of the subtasks will be allocated one by one until all the subtasks are successfully allocated. Step 7. If the working node successfully completes the subtask, it will be rewarded with the score Score_reward. Then, the information on the total score and the residual energy will be sent back to the cluster head together with the result of the task execution. Step 8. If the working node fails or is attacked by malicious nodes and is unable to complete the subtasks within the deadline, the node will be punished by deducting the score Score_punish, a fixed fraction of Score_reward: half of Score_reward if the uncompleted subtask can be successfully migrated to another node, and the full Score_reward otherwise. 
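As a rough illustration of the allocation phase just described, the Python sketch below filters candidate nodes by ranking domain, residual energy and deadline, picks the candidate with the least basic score as the working node, and applies the reward and penalty rule from Steps 7 and 8. All class and function names are illustrative, and the basic score, whose defining formula is not reproduced above, is treated as an already computed per-node value.

from dataclasses import dataclass

@dataclass
class Member:
    node_id: int
    ranking_domain: int      # 1, 2 or 3; domain 1 is preferred for allocation
    residual_energy: float   # joules
    basic_score: float       # assumed precomputed according to Definition 1
    total_score: float = 0.0
    speed: float = 1.0       # task processing speed (computation units per second)

def select_working_node(members, energy_needed, computation, deadline):
    """Cluster-head selection: try ranking domain 1 first, then domain 2 (Steps 2-4)."""
    for domain in (1, 2):
        candidates = [m for m in members
                      if m.ranking_domain == domain
                      and m.residual_energy >= energy_needed       # Step 2: enough residual energy
                      and computation / m.speed <= deadline]       # Step 3: can meet the deadline
        if candidates:
            return min(candidates, key=lambda m: m.basic_score)    # least basic score wins
    return None  # no suitable node in domains 1 and 2

def settle_subtask(node, energy_used, reward, completed, migrated=False):
    """Steps 7-8: reward on success; deduct 0.5x or 1x the reward on failure, never below zero."""
    node.residual_energy -= energy_used
    if completed:
        node.total_score += reward
    else:
        penalty = 0.5 * reward if migrated else reward
        node.total_score = max(0.0, node.total_score - penalty)

A real implementation would also transmit the subtasks and the related basic scores to the chosen node (Step 5) and update the node list kept by the cluster head.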
Task Migration on Unstable Node. Since WSNs are always deployed in hostile environments, sensor nodes are easily attacked and vulnerable to failure. When a sensor node detects its own failure, or detects that it is being attacked by a malicious node while executing tasks, it considers itself unstable. The unstable node cannot continue to complete its tasks; thus it asks for other nodes' collaboration to help with the tasks' further completion. In this paper, we propose task migration; that is, the uncompleted tasks can be migrated to other cluster members when a working node becomes unstable. In the task migration scheme, cluster members consult each other to complete the task migration, which does not need the cluster head's participation, thus effectively reducing the cluster head's working load and energy consumption. There are many kinds of methods to achieve collaboration between sensor nodes. In this section, an auction mechanism is proposed for sensor nodes' collaboration and task migration. The details of the task migration scheme are listed as follows. Step 1. Invitation for bids: when a sensor node detects its own failure or that it is being attacked by a malicious node and cannot continue to complete the allocated tasks, it initiates an auction to find a cluster member for task migration. The unstable node first identifies all of its uncompleted tasks. Then, it sends a Request for Proposals (RfP) to the cluster members in the same cluster. In the RfP, the uncompleted tasks are described in detail, including the deadlines of the tasks and the maximum budget Score_budget for completing them, which is set to the unstable node's total score. Step 2. Selective bidding: when a cluster member receives the RfP, the selective bidding scheme is employed. That is, according to the task descriptions in the RfP, each cluster member judges whether it is able to complete the tasks. If it cannot complete the tasks, it quits the bidding. Otherwise, it returns a quote to the tendering node. The details of selective bidding are listed as follows. Step 2.1. In the process of selective bidding, the deadlines of the tasks are considered first. If the expected task completion time is longer than the deadline, the bidding node quits the bidding. Step 2.2. Otherwise, the reward score for completing the tasks is calculated as Score_reward = BasicScore_rate × E_total, where BasicScore_rate is the bidding node's basic score and E_total is the total energy consumption for completing all the bidding tasks. If the reward score is larger than the budget stated in the RfP (Score_reward > Score_budget), the sensor node also quits the bidding. Step 2.3. If Score_reward < Score_budget, the sensor node accepts the bidding and submits the bid price Score_reward. Step 3. Time-interleaved bidding: based on the different ranking domains, different bid times are assigned to the bidding nodes. That is, a bidding node in the first ranking domain tenders at an earlier bid time than a bidding node in the second ranking domain. That is to say, the sensor nodes in a higher ranking domain have a higher bidding priority. Step 4. First bidder wins: in order to shorten the tender time, a first-bidder-wins scheme is adopted in TASIM. That is, the bidding node which first delivers its tender wins the bidding. Step 5. Task migration: the tendering (unstable) node then migrates the uncompleted tasks to the winning node. The scores of the tendering node are deducted by the sum of the winning bid Score_reward and the penalty Score_punish. Then, the updated scores, the residual energy, the winning node, and the successfully migrated tasks are reported to the cluster head. Step 6. Information update: after receiving the reported information, the cluster head updates the total score and the residual energy of the unstable node as soon as possible. Step 7. Task execution: the winning node executes the migrated tasks. After successful completion, it reports the results to the cluster head and is rewarded with the score Score_reward. Step 8. Task migration failure and task reallocation: if no winning node can be found by the end of the later bid time, all the uncompleted tasks are reported to the cluster head and wait for reallocation. 
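The selective-bidding test that a cluster member applies before quoting (Steps 2.1-2.3) can be sketched as follows, reusing the illustrative Member and Subtask structures from the earlier sketches. The completion-time and energy estimates are simplified, the explicit residual-energy check is an added assumption not spelled out in the steps above, and the time-interleaved, first-bidder-wins part of the auction is not shown.

def prepare_bid(bidder, rfp_tasks, score_budget):
    """Selective bidding from a cluster member's point of view; returns a bid price or None to quit."""
    total_time = sum(t.computation for t in rfp_tasks) / bidder.speed
    if any(total_time > t.deadline for t in rfp_tasks):
        return None                                   # Step 2.1: a deadline cannot be met
    total_energy = sum(t.computation for t in rfp_tasks)  # assumption: energy needed ~ amount of computation
    if total_energy > bidder.residual_energy:
        return None                                   # added assumption: node cannot afford the energy cost
    reward = bidder.basic_score * total_energy        # Step 2.2: Score_reward = BasicScore_rate x E_total
    if reward > score_budget:
        return None                                   # bid would exceed the budget announced in the RfP
    return reward                                     # Step 2.3: the bid price equals Score_reward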
Task Migration on Unstable Node. Since WSNs are often deployed in hostile environments, sensor nodes are easily attacked and vulnerable to failure. When a sensor node detects its own failure or detects that it is being attacked by a malicious node while executing tasks, it considers itself unstable. An unstable node cannot continue to complete its tasks; it therefore asks other nodes for collaboration to finish them. In this paper, we propose task migration; that is, the uncompleted tasks can be migrated to other cluster members when a working node becomes unstable. In the task migration scheme, cluster members negotiate with each other to complete the task migration without the cluster head's participation, which effectively reduces the cluster head's workload and energy consumption. There are many ways to achieve collaboration between sensor nodes. In this section, an auction mechanism is proposed for sensor nodes' collaboration and task migration. The details of the task migration scheme are listed as follows. Step 1. Invitation for bids: when a sensor node detects its own failure or an attack by a malicious node and cannot continue to complete the allocated tasks, it initiates an auction to find a cluster member for task migration. The unstable node first identifies all of its uncompleted tasks. Then, it sends a Request for Proposals (RfP) to the cluster members in the same cluster. In the RfP, the uncompleted tasks are described in detail, including the deadlines of the tasks and the maximum budget for completing them, which is set to the unstable node's total score. Step 2. Selective bidding: when a cluster member receives the RfP, the selective bidding scheme is employed. That is, according to the task descriptions in the RfP, each cluster member judges whether it is able to complete the tasks. If it cannot complete the tasks, it quits the bidding. Otherwise, it returns a quote to the tender node. The details of the selective bidding are listed as follows. Step 2.1. In the process of selective bidding, the deadlines of the tasks are considered first. If the expected task completion time is larger than the deadline, the bidding node quits the bidding. Step 2.2. Otherwise, the reward score for completing the migrated tasks is calculated as the basic score rate multiplied by the total energy consumption needed to complete all the bidding tasks. If this reward score is larger than the budget announced in the RfP, the sensor node also quits the bidding. Step 2.3. If the reward score is smaller than the budget, the sensor node accepts the bidding and submits the reward score as its bid price. Step 3. Time-interleaved bidding: different bid times are assigned to the bidding nodes according to their ranking domains. A bidding node in the first ranking domain may tender at an earlier time, while a bidding node in the second ranking domain may only tender at a later time. In other words, sensor nodes in a higher ranking domain have a higher bidding priority. Step 4. The first bidder wins: in order to shorten the tender time, a first-bidder-wins scheme is adopted in TASIM. That is, the bidding node that delivers its tender first wins the bidding. Step 5. Task migration: the tender node then migrates its uncompleted tasks to the winning node. The scores of the tender node are reduced by the reward score promised to the winning node plus the penalty score for the failed tasks. Then, the updated scores, the residual energy, the winning node, and the successfully migrated tasks are reported to the cluster head. Step 6. Information update: after receiving the reported information, the cluster head updates the total score and the residual energy of the unstable node as soon as possible. Step 7. Task execution: the winning node executes the migrated tasks. After successful completion, it reports the results to the cluster head and is rewarded with the corresponding reward score. Step 8. Task migration failure and task reallocation: if no winning node is found by the end of the later bidding slot, all the uncompleted tasks are reported to the cluster head and wait for reallocation. Task Reallocation on Death Node. If a cluster member dies, the uncompleted tasks on the death node are reallocated by the cluster head. The details of task reallocation are listed as follows. Step 1. Cluster members periodically report their status information to the cluster head, including their residual energy and their scores. The cluster head periodically updates the ranking domains. Step 2. If the cluster head cannot receive any information from a cluster member within the prescribed period of time, the cluster member is considered to be dead. Step 3. The cluster head checks whether there are uncompleted tasks on the death node. If there are none, the death node's information is directly deleted. Otherwise, the uncompleted tasks are reallocated to other cluster members. Simulation Results and Analysis. Our experiments are performed using Matlab. Two different sets of simulations are implemented. First, the performance of TASIM under different simulation parameters, for example, the number of cluster members and different values of the node ratio in the first ranking domain, is evaluated. Then, the network lifetime, the successful task execution rate, and the node failure rate of TASIM, EcoMapS [12], and DTAS [23] are compared. The deployment area is set to 500 m * 500 m * 500 m. The communication range of sensor nodes is set to 75 m. Other simulation parameters are shown in Table 1.
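Before turning to the results, here is a minimal sketch of the bidder-side decision in the selective-bidding steps described above (Steps 2.1-2.3); the parameter names are our own illustrative assumptions, not the paper's notation.

from typing import Optional

def bid_for_migrated_tasks(deadline: float,
                           score_budget: float,
                           expected_completion_time: float,
                           energy_to_complete: float,
                           basic_score_rate: float) -> Optional[float]:
    """Return a bid price, or None if this cluster member quits the bidding."""
    if expected_completion_time > deadline:
        return None                                         # Step 2.1: deadline cannot be met
    score_reward = basic_score_rate * energy_to_complete    # Step 2.2: required reward
    if score_reward > score_budget:
        return None                                         # reward would exceed the RfP budget
    return score_reward                                     # Step 2.3: bid exactly the reward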
Network Equilibrium and the Average Number of Cluster Members. Generally, the proposed TASIM can well balance the energy consumption of the whole network. As shown in Figure 2, the network equilibrium does not change with the number of cluster members, for the following reasons. (1) In the score incentive mechanism, the residual energy levels of sensor nodes are taken into account for task allocation. A sensor node that has successfully completed more tasks obtains a higher score but has less residual energy. In this case, the basic score of the sensor node is relatively small, which leads to a smaller possibility of being selected for task execution. Thus, the sensor nodes with more residual energy are more likely to be selected for task execution. (2) The ranking domain of a sensor node is dynamically updated. Therefore, the task allocation can be updated in a timely manner based on the sensor nodes' status information. (3) The proposed TASIM is a centralized algorithm. The cluster head knows the information of all the cluster members; therefore, the cluster head can efficiently allocate tasks to balance the energy consumption of sensor nodes. Energy Consumption and the Average Number of Cluster Members. As shown in Figure 3, the more tasks are executed, the more energy is consumed. However, the energy consumption is almost independent of the average number of cluster members. This is because of the following. (1) The energy consumption of the network depends on the size and the number of completed tasks. Thus, the more tasks are completed, the more energy is consumed. (2) The proposed TASIM chooses the most suitable sensor nodes for task allocation. The sensor nodes with higher residual energy and better service abilities are more likely to be chosen as working nodes, which is not related to the number of sensor nodes. Therefore, the average energy consumption of the network does not change with the average number of cluster members. Residual Energy and the Average Number of Cluster Members. As concluded from Figure 3, the more tasks are executed, the more energy is consumed. Figure 4 shows that the average residual energy of sensor nodes decreases as the number of tasks increases, because much more energy is exhausted. In addition, we find that, for the same number of tasks, a larger number of sensor nodes yields a higher average remaining energy. This is because the number of working nodes remains the same when the number of tasks is constant. In this case, a larger number of sensor nodes introduces more spare nodes. All the spare nodes have much more residual energy than the working nodes; thus the average residual energy is relatively higher. Algorithm Running Time and the Average Number of Cluster Members. As shown in Figure 5, the running time of TASIM increases with the number of tasks and cluster members. First, TASIM is a centralized algorithm. When the cluster head allocates each task, it needs to filter the sensor nodes to find the best working node. If there is no suitable node in the first ranking domain, the filtering continues in the second ranking domain. Each task allocation introduces a filtering process; therefore, more tasks lead to more filtering processes, and the running time of task allocation increases accordingly. In addition, when a sensor node fails or is attacked, the uncompleted tasks on that node are migrated to other cluster members. If there are more cluster members, the number of sensor nodes that participate in the auction increases, the time for task migration increases, and the running time of TASIM becomes longer. Network Equilibrium and the First-Ranking-Domain Node Ratio. As shown in Figure 6, the node ratio in the first ranking domain is varied from 0.1 to 0.7. This parameter does not have an obvious influence on the network equilibrium, because it only decides the number of sensor nodes in the first ranking domain but does not influence how a sensor node is selected for task allocation.
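A minimal sketch of how such a ratio-based split into ranking domains could be implemented is given below; the capacity estimate and all variable names are illustrative assumptions on our part (nodes already judged to lack sufficient service capacity would be placed in the third domain before this step).

from typing import Dict, List

def assign_ranking_domains(capacities: Dict[int, float],
                           first_domain_ratio: float) -> Dict[int, int]:
    """Split capable cluster members into ranking domains 1 and 2.

    capacities maps node_id -> an estimate of remaining service capacity;
    the best first_domain_ratio fraction of nodes goes to domain 1 and the
    rest to domain 2."""
    ranked: List[int] = sorted(capacities, key=capacities.get, reverse=True)
    n_first = max(1, round(first_domain_ratio * len(ranked)))
    return {node_id: (1 if i < n_first else 2) for i, node_id in enumerate(ranked)}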
Energy Consumption and the First-Ranking-Domain Node Ratio. As shown in Figure 7, the node ratio in the first ranking domain is set to 0.1, 0.3, 0.5, and 0.7, respectively. The different values of this ratio also do not have an obvious influence on the energy consumption of the network, because the energy consumption directly depends on the sizes and the number of tasks. Therefore, the energy consumption of the network does not change with the ratio. 6.1.7. Residual Energy and the First-Ranking-Domain Node Ratio. As concluded from Figure 8, the different values of the ratio also do not have an obvious influence on the energy consumption of the network; therefore, the residual energy is likewise insensitive to the ratio. 6.1.8. Algorithm Running Time and the First-Ranking-Domain Node Ratio. As shown in Figure 9, the running time of the algorithm increases with higher values of the ratio. The higher the ratio is, the more sensor nodes are divided into the first ranking domain. Therefore, much more time is spent on working-node selection, which ultimately leads to a longer running time of TASIM. 6.1.9. The Success Rate of Task Completion and the First-Ranking-Domain Node Ratio. As shown in Figure 10, the success rate of task completion increases with higher values of the ratio. This is because the higher the ratio is, the more sensor nodes are divided into the first ranking domain. When a sensor node fails or is attacked by malicious nodes, the sensor nodes in the first ranking domain are chosen first as candidate nodes for task migration. The sensor nodes in the first ranking domain are more likely to successfully complete the migrated task than the sensor nodes in the second ranking domain. Therefore, if there are more sensor nodes in the first ranking domain, the migrated tasks are more likely to be successfully finished, which ultimately yields a higher success rate of task completion. Network Lifetime and the Average Number of Cluster Members. In the simulation, the node failure rate is set between 0.01 and 0.11, and the node ratio in the first ranking domain is set to 0.7. As shown in Figure 11, more sensor nodes can prolong the network lifetime. That is because more sensor nodes can execute more complex tasks and the total energy of the network is increased; therefore, the network can work longer. In addition, it is found that, with the same number of sensor nodes, TASIM outperforms EcoMapS and DTAS in terms of prolonging network lifetime. In TASIM, task allocation is conducted by considering both node failure and node death. The uncompleted tasks on failed or dead nodes can be successfully migrated to other cluster members for further execution. However, EcoMapS and DTAS do not consider how to deal with the uncompleted tasks on failed or dead nodes. The Success Rate of Task Completion and the Average Number of Cluster Members. Figure 12 shows that more sensor nodes can efficiently improve the success rate of task completion. That is because more sensor nodes introduce more candidate nodes in the first ranking domain. In this case, it is more likely that other cluster members can be chosen to continue executing the uncompleted tasks on the unstable nodes. As shown in Figure 11, the number of tasks is set to 100, and only 25 sensor nodes are needed to finish all of them. In TASIM, a working node obtains the corresponding scores as a reward for successful task completion. A sensor node with a higher score is considered more trustworthy. In addition, even if the chosen node fails or is attacked, the tasks can be successfully migrated to a surrounding node with high credibility. This mechanism ensures that the tasks can be completed by fewer and more reliable nodes.
Network Lifetime and the Node Failure Rate. Figure 13 shows that a higher node failure rate leads to a shorter network lifetime. The robustness of TASIM against the node failure rate is better than that of EcoMapS and DTAS. However, when the node failure rate increases to 0.07, none of the three algorithms works well. Taking TASIM as an example, if the node failure rate is relatively high, the node chosen for task migration or reallocation may itself be an unstable node. In this case, the uncompleted tasks require further migration or reallocation. Repeated task migration or task reallocation can cause the subtasks to time out, so the whole task cannot be completed in time. In addition, repeated task migration or task reallocation can exhaust the network resources, which ultimately shortens the network lifetime. Algorithm Running Time and the Average Number of Cluster Members. The three compared algorithms are all centralized algorithms. Figure 14 shows that more sensor nodes need a longer time for task allocation. In addition, DTAS outperforms our proposed TASIM, because in DTAS the uncompleted tasks on unstable nodes can be directly migrated to the other working nodes with the maximal idle time, while in TASIM, task migration is achieved by auction, which introduces a longer auction time. Therefore, TASIM prolongs the execution time of the algorithm. Conclusions. Nowadays, task allocation is needed in many applications such as complex task execution, reliable image processing, and multimedia data compression. In this paper, a task allocation algorithm based on a score incentive mechanism (TASIM) is proposed for complex task execution in WSNs. TASIM divides sensor nodes into different ranking domains based on their residual resources and service capacities, which narrows the scope of working nodes and balances their energy consumption. In addition, a score incentive mechanism is proposed to assess sensor nodes' qualities for completing tasks. Based on the score, the uncompleted tasks on an unstable node can be migrated to other nodes for further completion, which ensures that all the tasks can be finished in time. Furthermore, simulation results demonstrate that the performance of TASIM is clearly better than that of conventional task allocation algorithms, for example, EcoMapS and DTAS, in terms of both network load balance and energy consumption.
9,195.2
2015-08-01T00:00:00.000
[ "Computer Science", "Engineering" ]
The Role of Entrepreneurship Education and University Environment on Entrepreneurial Interest of MBA Students in Saudi Arabia Entrepreneurship has been recognized as an economic panacea, which engenders employment generation and economic development. This becomes so important at this time when many countries including Saudi Arabia are facing the challenges of unemployment in their economies. Among the goals of Vision 2030 is to reduce unemployment and increase the participation of private sectors in Saudi Arabia. Thus, this paper investigates the role of entrepreneurship education and university environment on entrepreneurial interest among MBA students in Saudi Arabia. The data is obtained from the survey conducted among the MBA students in the College of Business at Imam Abdulrahman Bin Faisal University. Using ordered logistic regression model, the results reveal that variables ‘I have taken entrepreneurship course before (X1)’, ‘Entrepreneurship course has enhanced my practical managerial skills in order to start a new business (X3)’ and ‘The knowledge of entrepreneurship in my university has enabled me to know the actions I need to take to start my own business (X6)’ are statistically significant and have great likelihood of influencing students’ entrepreneurial interest. This study suggests that Saudi government should make entrepreneurship course compulsory for all fields of study as it has a significant impact on the entrepreneurial interest of students as well as challenge the university environment to fully use the entrepreneurship centres within the university to encourage students to engage in entrepreneurship activities right from school. Introduction The issue of unemployment is a major economic issue in any country, and the rate at which fresh graduate students continously search for jobs that are most times not available is becoming alarming. Furthermore, the universities do not always succeed in developing market demand for the traditional degrees as the gap between the graduates and the market demand is becoming increasingly wider (Brown, 2003). The stream of well-educated citizens that most universities all over the world provide has negatively affected the unemployment rate (Saks and Ashforth, 2002). According to Okorafor and Okorafor (2011), universities turn out graduates each year with different abilities in managerial skills who can find job opportunities in public and private sectors in the community whereas these sectors are not always capable of providing all the university graduates the jobs they needed. These sets of unemployed youths become frustrated as they are not able to secure themselves with the jobs they have been trained to do in the universities. Kingdom of Saudi Arabia (KSA) is not exempted from this issue as the rate of unemployment in the Kingdom is growing as large numbers of graduates are finding it difficult to secure employment unlike before. The unemployment rate for Saudis in the second quarter of 2017 was 12.8 percent (Saudi General Authority for Statistics, 2017). This comprises 7.4 percent year-on-year unemployment rate for men and 33.1 percent year-onyear for women. When taking the total population in Saudi including non-Saudis into consideration, the total rate of unemployment was 6 percent in the second quarter which includes 3.3 percent year-on-year for men and 22.9 percent year-on-year for women. 
The highest percentage (34.2 percent) of Saudi job-seekers was in the age group 25-29 years and approximately half of Saudi job-seekers are university graduates (General Authority for Statistics, 2017). There is definitely need to strategically address this issue if reduction of unemployment rate and increasing the participation of SMEs to GDP have to be realized as stated as part of the goals of Vision 2030 of the Kingdom of Saudi Arabia. That is why most developed, emerging and developing countries have recognized entrepreneurship as an economic panacea, which engenders employment generation and economic prosperity (Packham et al, 2010). Many studies have shown that there is a positive correlation between entrepreneurship and economic growth (Lena and Wong, 2003). In the recent years, the most obvious alternative solution to unemployment is self-employment and entrepreneurship. In order to manage this situation, universities in developing countries have included entrepreneurship courses in their curriculum (Harry Matlay and Dehghanpour Farashah, 2013). Nowadays, many universities provide entrepreneurship education for the reason that the university graduates will be better equipped with the skills needed to be entrepreneurs, which allows the graduate to be a job creator rather than a job seeker (Zamberi, 2013). In developing the local economy, there is evidence that entrepreneurs with academic education are more important than entrepreneurs with a lesser level of education (Taatila, 2010;Kwiek, 2012). Entrepreneurship and entrepreneurship education also have a greater market chance and advantages like promoting the start-ups (Holmgren et al., 2004). There is empirical evidence showing that university-level entrepreneurship education has the most critical role in developing entrepreneurial intention (Sánchez, 2011;Peterman and Kennedy, 2003). Building innovative talents and stimulating the entrepreneurial spirit is essential for fostering youth participation in the economic growth and development of any economy (Mahadea et al., 2011). Teaching entrepreneurship education in tertiary institutions is considered a strategic tool to enhance a nation's development (Muhammad et al., 2011). Intention to do a certain act is seen as a determinant of the actual behavior exhibited (Ajzen, 1991). Thus, entrepreneurial interest has been acknowledged to be a key predictor of entrepreneurial behaviour (Krueger et al., 2000). Consequently, exploring what influences entrepreneurial intent is a critical factor in entrepreneurship research. Many studies have examined various factors including entrepreneurship and university environment, however, there is dearth of such studies in Saudi Arabia as at the period the authors conducted this study. Based on the foregoing problems facing the country and the Vision 2030 of the Kingdom of Saudi Arabia as well as a dearth of studies in this area, there is a need to investigate the entrepreneurial interest among the university students in Saudi Arabia. Hence, this study seeks to contribute to the existing literature and specifically fill the gap of such study in KSA. Literature review and hypotheses formulation The significance of entrepreneurship education and university environment has been acknowledged as one of the crucial factors that help youths to understand and foster an entrepreneurial interest and attitude (Gorman et al., 1997;Kourilsky and Walstad, 1998;Wang and Wong, 2004;Jabeen et al., 2017). 
Entrepreneurship education at the university level provides the opportunity for students to be more aware of the latest developments, which allows them to have a more clear vision on how they can implement these developments into future business. The importance is in using the high-level skills in starting a new business and developing those skills so as to grow the business (Minniti and Lévesque, 2008). Therefore, large amounts of academically educated people are expected to pursue an entrepreneurial career. However, few research studies about entrepreneurship education focus on the university level (Raposo et al., 2008;Sánchez, 2009). Sine and Lee (2009) recognized that entrepreneurs are supporters of social and economic development. In most developed countries the number of Entrepreneur Education Programs (EEPs) has increased so much in the past three decades (Barak, 2012;Fayolle et al., 2006;Katz, 2008;Spiteri and Maringe, 2014;Varblane and Mets, 2010). These programs are intended for training students to be self-employed, and the students are learning about setting and starting their own business venture. As the number of EEPs is increasing, earlier research provided varied results on the affect EEPs have over entrepreneurial intention. Some studies found that EEPs have a positive impact on entrepreneurial intention (Krueger et al., 2000;Lüthje and Franke, 2003;Guerrero et al., 2008;Krueger, 2009;Lee and Wong, 2004;Liñán and Chen, 2009;Müller, 2011;Iakovleva et al., 2011). For example, many studies found that EEPs have a positive influence on the perceived attractiveness and feasibility of a new business (Fayolle et al., 2006;Müller, 2011;Zhang et al., 2014) and on the personal self-efficacy, pro-activeness, and the ability to take risk (Sánchez, 2013). Many other researchers found that there is a positive and direct relation between attending an EEP and the student's intention in starting a new business after graduating from the program (Karlan and Valdivia, 2006;Dickson et al., 2008;Pittaway and Cope, 2007;. On the opposite end, other researchers found that there is a negative relation between attending an EEP and entrepreneurial intention (Martin et al., 2012;Mentoor and Friedrich, 2007;Oosterbeek et al., 2010), while few other studies found no relation between attending an EEP and entrepreneurial intention (do Paço et al., 2015). While traditional education is transforming the knowledge and skills, entrepreneurship education is seen as a model of changing the motive and the attitude (Hansemark, 1998;Hansemark, 1998;Zamberi, 2013;Fayolle and Gailly, 2015). Entrepreneurship and entrepreneurship education have a greater market chance (Béchard and Grégoire, 2005;Holmgren et al., 2004). Desire and the ability to start a new business are two important basics for success; entrepreneurial attitude is highly required in not only entrepreneurial career but also in independent employment affairs (Korunka et al., 2010). Entrepreneurship education is not merely about educating students on how to run a venture (Cathy, 2005), it is rather more about students learning how to create and sustain a business (Burleson, 2005). The most critical aspects of entrepreneurship education are allowing the individual to identify the opportunities in their life, the ability to start a new venture and manage it, and the ability of the individual to be a more creative and critical thinker (Dahleez, 2009). 
Furthermore, entrepreneurship education is not only about the knowledge and skills in business, but it is mostly about developing beliefs, values, and attitudes, in order to make entrepreneurship more desirable to students than usual paid job or unemployment (Holmgren et al., 2004;Sánchez, 2011). Taking into consideration the spreading of entrepreneurship education, it is essential to establish entrepreneurship education framework at the university level, keeping in mind that not every individual studying entrepreneurship or receiving entrepreneurship education will have the desire to be an entrepreneur, promoting entrepreneurship education and comprehending the role of this education and also what students expect from these programs can support the idea that entrepreneurs are often made not born (Aruwa, 2004;Van der Sijde et al., 2008). Keat and Ahmad (2012) stated that having an excellent entrepreneurship educator and an educational institution; will transform the traditional way of teaching and transferring knowledge to student into encouraging them to be more active rather than the mere act of receiving knowledge passively. In entrepreneurship education, the teaching methods are supposed to be guided towards entrepreneurship taking into consideration the social interaction, student activation, and student orientation (Ollila and Williams-Middleton, 2011). To measure the role of entrepreneurship education in the formation of entrepreneurial intention research was based on the theory of planned behavior (TPB) (Ajzen, 1991), which has a steady theoretical foundation (Schlaegel and Koenig, 2014;Krueger and Carsrud, 1993). TPB states that an individual's behavior is based on the intention of that individual, the stronger the individual's intention to do a given behavior, the more likely it will happen. Moreover, the individual's intention to perform a given behavior is based on three things, the attitude toward behavior, subjective norms, and perceived behavioral control. Entrepreneurship education is viewed as a strong predictor of entrepreneurial intention. Business education is different from entrepreneurship education; entrepreneurship education is presumed to raise the awareness of entrepreneurship as an alternative career path to employment (Slavtchev, Laspita, & Patzelt, 2012) whereas business education is about educating students to work at established businesses (Grey, 2002). It is safe to say that entrepreneurship education is better related to entrepreneurial intentions than business education because entrepreneurship education is concentrated on the improvement and growth of the skills and knowledge needed for entrepreneurs, entrepreneurship education offers courses in new business planning for example, and that helps in increasing the student's appetite for risk-taking. Moreover, entrepreneurship education is more concentrated on the attitudes, intentions, and the firm creation process (Liñán, 2008) unlike business education which provides the knowledge of administrating a business and is not focusing on creating one. This makes entrepreneurship graduates three times more expected to start a business than non-entrepreneurship graduates (Charney & Libecap, 2000). Even though business education is related to the perceived knowledge, it does not affect entrepreneurial intentions; its objective is to educate students with skills and knowledge to be employed by firms (Davidsson, 1995). Packham et al. 
(2010) reveal in their study, conducted within European higher education institutions (HEIs) in France, Germany and Poland, that entrepreneurship education has a positive impact on the entrepreneurial attitude of French and Polish students, whereas the course had a negative impact on male German students. Their study further showed that while female students are more likely to perceive a greater benefit from the learning experience, the impact of entrepreneurship education on entrepreneurial attitude is actually more significant for male students. Siyanbola et al. (2012) also revealed that parents' educational qualification, entrepreneurship education and family entrepreneurial history, among others, influence students' entrepreneurial interest in Nigeria. Tshikovhi and Shambare (2015) also showed that high levels of entrepreneurship knowledge have a significant impact on entrepreneurship interest among South African Enactus students. There are more studies on the positive impacts of entrepreneurship education on students' entrepreneurship interest than on negative ones. There is therefore a need to find out what holds among MBA students in Saudi Arabia, hence the formulation of the first set of hypotheses: H1: The entrepreneurship course taken by the student has a positive and significant impact on the entrepreneurial interest of MBA students. H2: An entrepreneurship course which enhances the students' ability to identify an opportunity has a positive and significant impact on the entrepreneurial interest of MBA students. H3: An entrepreneurship course which enhances the students' practical managerial skills in order to start a new business has a positive and significant impact on the entrepreneurial interest of MBA students. The role of the university environment in influencing entrepreneurial interest towards starting a new business is also prominent in the recent literature. Lüthje and Franke (2003) identified factors in the university environment that could influence the creation of entrepreneurial behavior. They found that perceived entrepreneurship-related barriers and support factors have a direct influence on the student's entrepreneurial intention: the more favorable the perceived support for entrepreneurship, the greater the entrepreneurial intention, and vice versa. In another study, Franke and Lüthje (2004) found that students have a lower entrepreneurial intention when they perceive the university's activities in equipping them with the knowledge to start a new business unfavorably. Moreover, the effect of the university environment on entrepreneurial intention was larger than that of personality traits, attitudes, and socio-economic environmental factors. Zollo et al. (2017) found that the entrepreneurial intention of students is significantly affected by the university. It has also been identified by Kraaijenbrink et al. (2009) and Saeed et al. (2015) that there are three kinds of support a university can provide to its students, namely perceived educational support, perceived concept development support and perceived business development support, all of which are important for a supportive university environment. Recent studies such as Durst and Sedenka (2016) in Sweden, Jabeen et al. (2017) in the United Arab Emirates, Shahid et al. (2017) in Pakistan and Hasan et al.
(2017) in Bangladesh among others suggested that university environment where students learn plays a pivotal role in encouraging students to develop business ideas and start up their own business. This seems to help students mitigate any adverse impact that their negative perceptions that surrounds them might have on their entrepreneurial intentions (Shahid et el., 2017). The aforementioned lead to the second set of hypotheses: H4: The encouragement given to students to engage in entrepreneurial activities by the university environment has a positive and significant impact on the students' entrepreneurial interest H5: The inspiration given to students to develop ideas for new businesses by the university environment has a positive and significant impact on the students' entrepreneurial interest H6: Entrepreneurship knowledge which enable the students to know the actions required to start a new business learnt in the university environment has a positive and significant impact on the students' entrepreneurial interest The apriori expectations from the six hypotheses formulated above are positive and significant impact of the factors on students' entrepreneurial interest. Methodology of the study 3.1 Sample and data The data used in this study is part of the preliminary data obtained from the on-going Masters Research thesis. This data is collected between November, 2017 and January 2018 at Imam Abdulrahman bin Faisal University, Dammam, Saudi Arabia using questionnaire adapted from different studies such as Wang and Wong (2004) (2016) and Hasan et al. (2017). The questionnaire is designed to elicit information on the level of entrepreneurial interest of the students and the factors that could influence their interest. A representative sample was selected from the students that enrolled for Masters in Business Administration (MBA) for 2017/2018 session at the College of Business Administration, Imam Abdulrahman Bin Faisal University. MBA students are selected because they are expected to have studied entrepreneurship course at their undergraduate or during their MBA program and the entrepreneurship centre situated in this college is expected to have an impact on these students. The questionnaire was distributed to all the MBA students enrolled in that session, and 46 out of 89 students returned the properly filled questionnaire as at the period of this analysis. This represents the response rate of 51.7%. Dependent and independent variables The purpose of the study is to investigate the factors influencing entrepreneurial interests among MBA students, and specifically this study examines the role of entrepreneurial education and university environment on entrepreneurial interest. This becomes important since the major pointer of entrepreneurship education has always been to bring to fruition the knowledge and procedures required to establish and grow a successful enterprise (Packham et al., 2010). Some studies therefore argue that the three main objectives for effective entrepreneurship education are to: develop a wide understanding of entrepreneurship, acquire an entrepreneurial mindset and how to start and operate an enterprise effectively (Chen et al., 1998;Jack and Anderson, 1999;Solomon et al., 2002;Gibb, 2005). 
From the foregoing, the level of entrepreneurial interest of students is the dependent variable in the model; it is treated as an ordered categorical variable and assumes the following values: 1 in the case of 'very low level', 2 in the case of 'low level', 3 in the case of 'medium level', 4 in the case of 'high level', and 5 in the case of 'very high level'. The independent variables are measured as follows. 1. Entrepreneurship education - this is measured using three proxy variables: i. I have taken entrepreneurship course before (X1): this indicates whether the respondent has taken an entrepreneurship course and is measured by a binary response, with Yes coded as 2 and No coded as 1. ii. Entrepreneurship course has enhanced my ability to identify opportunities (X2): this captures the extent to which the respondents agree that the entrepreneurship course has enhanced their ability to easily identify opportunities. The extent is measured on an ordinal scale ranging from 'very low extent' to 'very high extent' (i.e., 1 to 5). iii. Entrepreneurship course has enhanced my practical managerial skills in order to start a new business (X3): this obtains information on the extent to which the entrepreneurship course that the respondent has studied has improved his/her managerial skills to start and operate his/her own business successfully. The extent is measured on an ordinal scale ranging from 'very low extent' to 'very high extent' (i.e., 1 to 5). 2. University environment - this is also measured using three proxy variables: i. My university environment has encouraged me to engage in entrepreneurial activities (X4): this variable asks the MBA students to what extent their university has encouraged them to engage in entrepreneurial activities, given the entrepreneurship centres available within the university. It is also ranked in an ordered Likert-scale format from 'very low extent' to 'very high extent' (i.e., 1 to 5). ii. The atmosphere at my university inspires me to develop ideas for new businesses (X5): this captures the perception of the MBA students regarding how their university environment has motivated them to generate new business ideas. Their perceptions are also ranked from 'very low extent' to 'very high extent' (i.e., 1 to 5). iii. The knowledge of entrepreneurship in my university has enabled me to know the actions I need to start my own business (X6): this captures the views of the MBA students on the extent to which the entrepreneurship knowledge gained in the university has prepared them to start their own business. Their views are obtained in a Likert-scale format from 'very low extent' to 'very high extent' (i.e., 1 to 5). All the aforementioned independent variables are expected to have a significant and positive impact on the entrepreneurial interest level of the MBA students. This implies that each of the variables is expected to contribute significantly to the likelihood of a higher entrepreneurial interest level among the respondents. Logistic regression. Logistic (also known as logit) and probit regressions are among the most suitable forms of regression when the outcome of the dependent variable has two or more possible categories. Since the observed entrepreneurial interest takes the form of a five-point Likert scale, logistic regression is adopted for this study.
Logistic regression is generally used for models in which the dependent variable is an indicator of a discrete choice, either binary, such as a "yes or no" decision, or an ordered or non-ordered decision, such as a Likert scale from "very low extent" to "very high extent" (Greene, 2003; Brooks, 2008; Akinwale et al., 2018). Logistic regression measures the relationship between a categorical dependent variable and one or more independent variables, which can be continuous or discrete. The ordered logistic regression model allows for the prediction of the likelihood of the outcome variable (entrepreneurial interest). The regression model, which predicts the logit, that is, the log of the odds of the entrepreneurial interest, is specified as follows: Log(odds of Yi) = α + β1X1 + β2X2 + β3X3 + β4X4 + β5X5 + β6X6, where Yi is the level of entrepreneurial interest, α is the intercept (threshold) term, and β1 to β6 are the coefficients of the independent variables from the first variable to the last one. Variables X1 to X3 are used to capture hypotheses 1 to 3 and are the proxies for entrepreneurship education, whereas variables X4 to X6 are used to capture hypotheses 4 to 6 and are the proxies for university environment. This model measures the significant impact of each of the factors considered on the level of entrepreneurial interest of MBA students. The coefficients of each variable in the logistic regression model, unlike the linear regression, denote the change in the logit for each unit change in the predictor (Akinwale and Surujlal, 2017). Given that the logit is not intuitive, this study emphasises an independent variable's effect on the exponential function of the regression coefficient, otherwise known as the odds ratio. When a coefficient is positive, it signifies that the factor increases the likelihood of a higher level of MBA students' entrepreneurial interest, keeping the other covariates at their mean. Furthermore, the McFadden R² (1 minus the ratio of the unrestricted and restricted log likelihoods) is used to measure goodness of fit in the logit model and is based on the log likelihood. Although the value of the McFadden R² ranges between 0 and 1, the value is usually small, as is often the case for limited dependent variable models, unlike ordinary least squares regression methods (Brooks, 2008). Descriptive Analysis. The survey shows that 38.1% of MBA students have a very high level of interest in starting their own business, 33.3% have a high level of interest, and 11.9% have a medium level of interest, while 9.5% and 7.1% have low and very low levels of interest in starting their own business, as shown in Table 1. This clearly shows that the majority of the MBA students have entrepreneurial interest. Table 1 also reveals that the majority of the MBA students at Imam Abdulrahman bin Faisal University (IAU) are aged between 20 and 30 years. While 43.5% and 37% of the respondents are in the age brackets of 20-25 and 26-30 years respectively, 19.5% of them are above 30 years old. This implies that the MBA students are still young and ambitious to achieve their goals. Female students accounted for 70% of the respondents, indicating the extent to which female students are preparing themselves for their future careers. Having higher degrees and becoming proficient in their chosen careers may give them a good chance to compete with their male counterparts in the labour market. Furthermore, most (67%) of the MBA students sampled are single, while 30% of them are married and 2% of them are divorced/separated/widowed.
Approximately 94% of the respondents are Saudis, which clearly signifies that the study reveals the perceptions of Saudi citizens. Moreover, the majority (85%) of the MBA students are from the Eastern Province, which might be a result of the location of the university where the study is conducted. Correlation Analysis. Correlation is used to measure the extent and direction of the relationship among various variables. In order to know whether some of the variables considered in this study are correlated with entrepreneurial interest, correlation analysis was conducted. Table 2 shows the correlation matrix between each of the variables and entrepreneurial interest. As can be seen from Table 2, all the variables are positively correlated with MBA students' entrepreneurial interest except the variable 'My university environment encourages me to engage in entrepreneurial activities', which is negatively related to entrepreneurial interest. Most of the variables in Table 2 are weakly related to entrepreneurial interest, as they have correlation coefficients less than or equal to 0.2, except four variables (viz: I have taken an Entrepreneurship course (EC) before; EC has enhanced my ability to identify an opportunity; EC has enhanced my practical managerial skills to start a new business; and the knowledge of entrepreneurship in my university has enabled me to know the actions I need to take to start my own business), which have a moderately strong relationship with entrepreneurial interest. Furthermore, only two variables have statistically significant correlations with entrepreneurial interest. The two variables are 'EC has enhanced my ability to identify an opportunity' and 'EC has enhanced my practical managerial skills to start a new business', which are statistically significant at 10% and 5% respectively. In order to further ascertain the relationship between the variables and test the hypotheses formulated, logistic regression is then conducted as shown in the next sub-section. 4.3 Role of entrepreneurship education and university environment on MBA students' entrepreneurial interest: Table 3 presents the ordered logistic regression results for MBA students' entrepreneurial interest (Y) as the dependent variable and the explanatory variables (Xn), which are the factors influencing entrepreneurial interest. The ordered logistic regression results in Table 3 show that 'I have taken entrepreneurship course before (X1)', 'Entrepreneurship course has enhanced my practical managerial skills in order to start a new business (X3)', and 'The knowledge of entrepreneurship in my university has enabled me to know the actions I need to take to start my own business (X6)' have significant impacts on the level of entrepreneurial interest (Yi) of MBA students at the 5% level of significance. Their probability values are less than the 5% level of significance (p-value < 0.05). This result is similar to the studies of Hasan et al. (2017) and Shahid et al. (2017). On the other hand, 'Entrepreneurship course has enhanced my ability to identify opportunities (X2)', 'My university environment encourages me to engage in entrepreneurial activities (X4)', and 'The atmosphere at my university inspires me to develop new business ideas (X5)' are not statistically significant in influencing the level of entrepreneurial interest (Yi) of MBA students at the 5% level of significance. This is in contrast to our expectations, but it clearly shows that there is room for improvement by the university environment.
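Before turning to the interpretation of the coefficients, the following is a minimal Python sketch of how such an ordered logit can be fitted and how coefficients are converted to odds ratios; the data file and column names are hypothetical placeholders rather than the study's actual dataset.

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical survey file with the 5-level interest outcome and predictors X1..X6.
df = pd.read_csv("mba_survey.csv")
predictors = ["X1", "X2", "X3", "X4", "X5", "X6"]

model = OrderedModel(df["interest"], df[predictors], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())

# Odds ratios: exp(coefficient) for each predictor (threshold terms excluded).
print(np.exp(result.params[predictors]))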
The interpretation of the coefficients of all the variables using their odds ratios shows that the independent variables have a great likelihood of influencing the entrepreneurial interest of MBA students positively, except X4 and X5, as the odds ratios of these two variables are less than 1. The outcomes of this model show that X1, X3 and X6 are the main factors that significantly influence the entrepreneurial interest of MBA students. The entrepreneurship course taken by the MBA students greatly impacts their level of interest in starting their own businesses. Also, the entrepreneurship course has enhanced the managerial skills of the sampled MBA students, which significantly influenced their level of interest in starting their own businesses. This implies that the sampled MBA students are perceived to have the requisite managerial skills and knowledge, learnt from the entrepreneurship course already taken, to manage their businesses successfully. Furthermore, the knowledge of entrepreneurship that the sampled MBA students learnt in the university has enabled them to know the actions they need to take to start their own businesses. This indicates that these MBA students now have knowledge of the requirements and activities needed to start a new business. These results also reveal that the entrepreneurship course undertaken by these MBA students has not been able to develop their ability to identify potential business opportunities. Moreover, the university environment has not been able to encourage and inspire the MBA students to develop new business ideas or engage in entrepreneurial activities. This is actually against the expectation of the study, as the authors expected that the presence of the entrepreneurship centre in the College of Business of the university would have stimulated the MBA students to churn out new business ideas and encouraged the development of practical entrepreneurial activities. Thus, we infer that the entrepreneurship centre is not harnessed to its full potential. The implication of this study is very important for academia, university administrators, policy makers and other stakeholders in the education sector. This study has revealed that offering an entrepreneurship course in the university has a great influence on the level of entrepreneurial interest of MBA students. This goes a long way in developing the students' practical managerial skills to start a new business. The entrepreneurship knowledge in the university further trains the students to know the actions and steps to take to start a business. All these would enable the university to produce wealth-creating graduates, as these students would enter the market with business ideas and start their own businesses instead of contributing to the unemployment rate in the economy. The results reveal that the university environment is not doing enough to encourage students to develop practicable business ideas and engage in real-life entrepreneurship activities while in the university. Thus, the entrepreneurship centres within the universities across the country should go beyond teaching entrepreneurship and also support the students in developing business ideas and nurturing entrepreneurship activities. Conclusion. This study investigates the role of entrepreneurship education and university environment on the entrepreneurial interest of MBA students in Saudi Arabia.
The majority (71.4%) of the MBA students claimed that they are highly interested in starting their own business in the future. The three variables that are used to proxy the entrepreneurship course indicate a high likelihood of the entrepreneurship course influencing the entrepreneurial interest of MBA students, though only two - 'I have taken entrepreneurship course before (X1)' and 'Entrepreneurship course has enhanced my practical managerial skills in order to start a new business (X3)' - out of the three are statistically significant. However, only one - 'The knowledge of entrepreneurship in my university has enabled me to know the actions I need to take to start my own business (X6)' - out of the three variables that are used to proxy the university environment is significant and also has a great likelihood of influencing the entrepreneurial interest of MBA students. This study therefore suggests that the Saudi government should make the entrepreneurship course compulsory for all fields of study, as it has a significant impact on the entrepreneurial interest of students. Moreover, universities, especially those that have entrepreneurship centres, should be challenged to ensure that students come up with bankable business ideas and opportunities which can be started while in school. These can be properly nurtured and monitored by the university, given the available resources at the entrepreneurship centre, so that the ideas become full-fledged businesses. The government should also support such entrepreneurship centres through funding and encourage private companies to contribute both financial and human resources to the growth of such centres in the universities. This study contributes to the existing literature in this field of study in Saudi Arabia. This study is limited to MBA students, and further study could cover both undergraduate and MBA students.
7,560
2019-02-01T00:00:00.000
[ "Education", "Business", "Economics" ]
Purified self-amplified spontaneous emission (pSASE) free-electron lasers with slippage-boosted filtering. We propose a simple method to significantly enhance the temporal coherence and spectral brightness of a self-amplified spontaneous emission (SASE) free-electron laser (FEL). In this purified SASE (pSASE) FEL, a few undulator sections (called the slippage-boosted section) resonant at a subharmonic of the FEL radiation are used in the middle stage of the exponential growth regime to amplify the radiation while simultaneously reducing the FEL bandwidth. In this slippage-boosted section, the average longitudinal velocity of electrons is reduced, which effectively increases the FEL slippage length and allows radiation fields initially far apart to establish a phase relation, leading to an n times increase in the FEL cooperation length, where n is the ratio of the resonant wavelength of the slippage-boosted section to that of the original FEL radiation. The purified radiation, as a seed with improved temporal coherence, is further amplified to saturation in the undulator sections tuned to the FEL wavelength. Using the Linac Coherent Light Source II (LCLS-II) parameters as an example, we show that with the proposed configuration the temporal coherence and spectral brightness of a SASE FEL can be significantly enhanced. This scheme may be applied to many SASE FEL light sources to enhance the FEL performance. I. INTRODUCTION High-gain free-electron lasers (FELs) working in the self-amplified spontaneous emission (SASE) mode [1,2] have been successfully operated at x-ray wavelengths [3-5], which marked the beginning of a new era of x-ray science. Starting from shot noise in the initial beam longitudinal density distribution, the output of a SASE FEL typically has rather limited temporal coherence with a spiky spectrum. Improving the FEL temporal coherence has been a topic of recent interest, and various seeding techniques in which a coherent seed is introduced to dominate over the shot noise have been proposed to reach this goal [6-12]. Seeding with external lasers typically suffers from limited frequency up-conversion efficiency, which, together with noise amplification (see, for example, [13-15]) in the harmonic generation process, makes it difficult to reach sub-nanometer wavelengths. While the self-seeding technique has been demonstrated at hard x-ray wavelengths [16], it requires roughly twice the undulators to reach saturation, and it appears to be a challenging task to reduce the FEL power fluctuations due to the intrinsic fluctuation of the monochromatized seed and electron beam energy jitter. It is well known that the temporal structure of a SASE FEL output consists of many spikes with a full temporal width of about 2πl_c, where l_c is the cooperation length that equals the slippage length within one gain length [17]. The number of temporal spikes is roughly N ≈ l_b/(2πl_c), with l_b being the full width of the electron beam. Accordingly, the spectrum of a SASE FEL is similarly noisy with ∼ N spikes, each having a frequency spread c/l_b, and the overall frequency spread of the FEL pulse is approximately c/(2πl_c), where c is the speed of light. This spiky output is a result of the fact that in an FEL the radiation only propagates through a fraction of the electron bunch, such that radiation fields separated by more than 2πl_c evolve independently and therefore are uncorrelated in phase.
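As a quick back-of-the-envelope illustration of the spike-count estimate N ≈ l_b/(2πl_c), the following Python snippet uses round numbers of the same order as the LCLS-II-style case simulated later in the paper; the cooperation-length value is our own illustrative assumption, not a number quoted by the authors.

import math

c = 299_792_458.0            # speed of light [m/s]
bunch_duration = 40e-15      # full electron-bunch duration [s] (illustrative)
l_b = c * bunch_duration     # full bunch length [m]
l_c = 50e-9                  # cooperation length [m] (illustrative assumption)

n_spikes = l_b / (2.0 * math.pi * l_c)
print(f"estimated SASE spikes: {n_spikes:.0f}")        # roughly 40 for these numbers

# In the pSASE scheme the cooperation length grows by the subharmonic number n,
# so the expected spike count drops by roughly the same factor.
n_sub = 7
print(f"after slippage boosting: {n_spikes / n_sub:.0f}")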
In this paper, we propose a simple method to increase the coherence length by speeding up the slippage in an undulator tuned to a sub-harmonic of the FEL radiation, i.e., λ_1 = nλ_0, with λ_0 being the FEL radiation wavelength and n an odd number larger than one (n = 3, 5, 7, 9, ...). Such an undulator, called the slippage-boosted section, is used to amplify the SASE radiation in the exponential growth regime while simultaneously reducing the radiation bandwidth to realize a purified SASE (pSASE) FEL. In this slippage-boosted section, the average longitudinal velocity of electrons is reduced, which effectively increases the FEL slippage length and allows the radiation fields initially far apart to establish a phase relation, leading to an n times increase in the FEL coherence length. Therefore, the number of spikes in the FEL temporal profile is reduced by a factor of n, leading to a significant enhancement of the FEL spectral brightness. Using the LCLS-II parameters as an example, we show that even with conservative parameter sets, the FEL bandwidth can be reduced by a factor of 5 with the proposed pSASE scheme. This method will also enable FEL single-spike operation [17-19] (namely reducing the number of spikes to one) with a relatively long bunch. We believe this pSASE configuration can be used in many future SASE FEL light sources to enhance the FEL performance. II. METHODS In an FEL with a planar undulator, the wavelength of the on-axis radiation that can resonantly interact with the electron beam is λ_n = λ_u (1 + K²/2)/(2nγ²), where K is the dimensionless undulator strength, related to the undulator period λ_u and undulator peak field B as K = eBλ_u/(2πm_e c), γ is the relativistic factor of the beam, and n is an odd number. Given the beam and undulator parameters, an FEL may operate either at the fundamental wavelength (n = 1) or in the harmonic lasing mode (n > 1) [20-24]. In the 1D limit where the beam transverse emittance and energy spread are neglected, the ratio of the power gain length for the radiation at the nth harmonic, L^(n)_1D, to that at the fundamental wavelength, L^(1)_1D, is determined by the coupling factor A_n for the nth harmonic, a combination of Bessel functions of the undulator strength K [20,22,24-26]. The gain length ratio for various undulator strengths is shown in Fig. 1, where one can clearly see that the ratio is always larger than 1. Because the interaction between the radiation and the electron beam is most efficient for n = 1, most FELs work in the fundamental mode. To access shorter radiation wavelengths with a beam limited in energy, an FEL may alternatively operate in the harmonic lasing mode, in which case the suppression of the growth of the fundamental radiation is required. This can be achieved by using phase shifters (e.g., mini-chicanes) to disrupt the interaction between the electron beam and the fundamental radiation while maintaining the interaction between the electron beam and the harmonic radiation field. For instance, the growth of the radiation power at the fundamental wavelength λ_1 can be suppressed by shifting the radiation by λ_1/3 after each undulator section. In contrast, the growth of the radiation power at the 3rd harmonic is unhindered, for the phase shift is just 2π [20,24].
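As a sanity check on the resonance condition above, here is a minimal Python sketch; the beam energy, undulator period and K values are the LCLS-II-style numbers quoted later in the paper, used here purely for illustration.

import math

def resonant_wavelength(lambda_u, K, gamma, n=1):
    """On-axis resonant wavelength of the n-th harmonic of a planar undulator."""
    return lambda_u * (1.0 + K**2 / 2.0) / (2.0 * n * gamma**2)

gamma = 6e9 / 0.511e6     # relativistic factor for a 6 GeV beam
lambda_u = 0.055          # 5.5 cm undulator period

# Fundamental lasing at K = 2 and 7th-harmonic lasing at K = 6.32 both
# land near the 0.6 nm target wavelength.
print(resonant_wavelength(lambda_u, 2.0, gamma, n=1))   # ~6.0e-10 m
print(resonant_wavelength(lambda_u, 6.32, gamma, n=7))  # ~6.0e-10 m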
While the main purpose of harmonic lasing is to extend FEL operation to a shorter wavelength regime, it has also been realized that harmonic lasing increases the FEL coherence length [27], compared to the case where the undulator K value is retuned to produce an FEL pulse of the same wavelength in the fundamental lasing mode, because the total slippage length is n times longer. However, in order to let the nth harmonic lase to saturation, all the interactions at longer wavelengths need to be suppressed, because their gain lengths are shorter, as can be seen in Fig. 1. For instance, to achieve lasing at the 7th harmonic, one has to suppress the lasing at the fundamental, 3rd and 5th harmonics, which might make harmonic lasing at very high harmonics difficult to implement. Furthermore, the saturation power of an FEL operating in the harmonic lasing mode is lower than in the nominal case where the undulator is retuned to provide the same wavelength in the fundamental lasing mode [20,22]. It is worth mentioning that one may also use mini-chicanes (if available) between undulator sections to shift the radiation field forward and thus increase the slippage length, which will also improve the SASE FEL temporal coherence [28-30]. In this paper, we study a new configuration in which neither the phase shifters to disrupt the lasing at the fundamental and lower harmonics nor the mini-chicanes to shift the radiation forward with respect to the electron beam are needed, yet it still provides an FEL pulse with a significantly purified spectrum, compared to an FEL working in the standard SASE mode. The schematic layout of this pSASE FEL is illustrated in Fig. 2. The proposed pSASE FEL consists of 3 undulator sections, U1, U2 and U3. The first undulator section U1, resonant at the target FEL wavelength (λ_1 = λ_0), is used to produce a standard SASE radiation pulse with central wavelength λ_0. The length of U1 is chosen in such a way that the FEL process is in the middle stage of the exponential growth regime, where the power of the SASE radiation is much higher than the spontaneous radiation while the energy spread growth from the FEL interaction is negligible. The SASE radiation and the electron beam then enter the second undulator section U2, which is resonant at the sub-harmonic of the target FEL wavelength (by increasing λ_u and/or K), i.e., λ_1 = nλ_0 and λ_n = λ_0. In U2 (called the slippage-boosted section) the SASE radiation is amplified through the harmonic interaction with its bandwidth simultaneously reduced. This is because the average longitudinal velocity of the electrons (v_z/c = 1 − λ_1/λ_u) in U2 is reduced, which effectively increases the FEL slippage length and allows the radiation fields initially far apart to establish a phase relation, leading to an n times increase in the FEL cooperation length. The purified radiation is then further amplified to saturation in the last undulator section U3, which is again resonant at the FEL radiation wavelength (λ_1 = λ_0). With this configuration a pSASE FEL reaches saturation at a power level similar to that of a standard SASE FEL, with significantly enhanced temporal coherence and spectral brightness.
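To make the retuning of U2 concrete, the following sketch inverts the resonance condition to find the K value that places the undulator fundamental at the nth subharmonic of the target wavelength; the function name and example numbers are our own, chosen to match the LCLS-II-style case discussed below.

import math

def undulator_K_for_subharmonic(lambda_0, n, lambda_u, gamma):
    """K value that makes the undulator fundamental equal to n * lambda_0,
    i.e. the n-th subharmonic of the target FEL wavelength lambda_0."""
    return math.sqrt(2.0 * (2.0 * n * gamma**2 * lambda_0 / lambda_u - 1.0))

gamma = 6e9 / 0.511e6
print(undulator_K_for_subharmonic(0.6e-9, 7, 0.055, gamma))   # ~6.3

# Per undulator period the radiation slips over the electrons by one resonant
# wavelength, so tuning U2 to n * lambda_0 makes the slippage n times faster.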
The length of U2 should be properly chosen to make sure the FEL power at the fundamental wavelength λ_1 = nλ_0 is much smaller than that at the harmonic wavelength λ_n = λ_0, such that phase shifters are not needed to suppress the lasing at the fundamental. This is made possible because the radiation at nλ_0 starts from shot noise while the radiation at λ_0 is seeded by the radiation produced in U1. Typically, after a few gain lengths the growth rate of the FEL power at λ_0 slows down in U2, because the gain length at the harmonic wavelength is more sensitive to energy spread growth.

Taking the LCLS-II under construction at SLAC as an example, we assume the beam peak current is 2.5 kA, the normalized transverse emittance is 0.6 µm and the rms energy spread is 1 MeV. LCLS-II uses variable-gap undulators with an undulator period of λ_u = 5.5 cm, and the K value can be tuned from about 1 up to about 10 [31]. Assuming the beam energy is 6 GeV and the FEL target wavelength is λ_0 = 0.6 nm, there are several options to produce intense radiation at 0.6 nm, i.e. through fundamental lasing with K = 2 (λ_1 = 0.6 nm), 3rd harmonic lasing with K = 4 (λ_3 = 0.6 nm), 5th harmonic lasing with K = 5.29 (λ_5 = 0.6 nm), 7th harmonic lasing with K = 6.32 (λ_7 = 0.6 nm), . . ., up to 17th harmonic lasing with K = 10 (λ_17 = 0.6 nm). The 3D gain lengths (assuming an average beta function of 10 m) at 0.6 nm for the various lasing scenarios and various beam energy spreads, found with Xie's fitting formula [24,32], are shown in Fig. 3. In the cold-beam limit where the beam energy spread is negligible, the harmonic lasing mode is more efficient than the fundamental lasing. This is because the effective momentum compaction of the undulator (R_56 = 2N_uλ_1, where N_u is the number of periods of the undulator) is larger for harmonic lasing, which speeds up the microbunching, similar to the distributed optical klystron technique (see, for example, [33]). However, as the beam energy spread increases, the debunching effect from the larger momentum compaction starts to degrade the FEL gain. As a result, the power gain length quickly grows for an FEL operating in the harmonic lasing mode as the beam energy spread increases, as shown in Fig. 3. This also leads to a reduced saturation power for an FEL working in the harmonic lasing mode, since the FEL power growth stops at a smaller energy spread [22]. To maintain the same saturation power while reducing the FEL bandwidth, in the proposed pSASE scheme the length of U2 is relatively short so that the energy spread growth is not significantly increased. The main purpose of U2 is to purify the SASE radiation generated in U1 to prepare a seed with improved temporal coherence for further amplification in U3. Operating U3 at the same wavelength as U1 makes the saturation power essentially the same as in the standard SASE configuration, which allows one to increase the spectral brightness of the FEL radiation.
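The undulator-strength options listed above follow directly from the resonance condition; the short check below recomputes them (beam energy, period and target wavelength as quoted in the text).

```python
# Sketch: undulator strength K that places the n-th harmonic at 0.6 nm for a 6 GeV beam
# and a 5.5 cm period undulator, from lambda_n = lambda_u (1 + K^2/2) / (2 n gamma^2).
import numpy as np

gamma = 6.0e9 / 0.511e6          # relativistic factor for a 6 GeV beam
lambda_u = 5.5e-2                # undulator period [m]
lambda_target = 0.6e-9           # target radiation wavelength [m]

for n in (1, 3, 5, 7, 17):
    K2 = 2.0 * (2.0 * n * gamma**2 * lambda_target / lambda_u - 1.0)
    print(f"n = {n:2d}: K = {np.sqrt(K2):.2f}")
# Expected: K ≈ 2.0, 4.0, 5.3, 6.3, 10.0, matching the options quoted above.
```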
III. SIMULATION

In this section we present simulation results for a typical pSASE FEL to support the analysis in the section above. In our simulation we assume a flat-top beam with a peak current of 2.5 kA, a full width of 40 fs (corresponding to a beam charge of 100 pC), a beam energy spread of 1 MeV, and a transverse emittance of 0.6 µm, similar to those obtained in start-to-end simulations for LCLS-II. With variable-gap undulators, the LCLS-II soft x-ray beam line will cover a broad range of x-ray energies from about 200 eV to 2 keV. In the baseline design of LCLS-II [31], the soft x-ray undulators have 18 sections with a break of 1 m between adjacent undulators. The number of periods per section is 61 and the undulator period is 5.5 cm.

In our simulation we focus on generating x-ray radiation at 0.6 nm in the LCLS-II soft x-ray beam line using a 6 GeV beam. Following Fig. 2, in our study the first undulator section U1 consists of 9 undulators with the fundamental wavelength of the SASE radiation at λ_1 = 0.6 nm (the corresponding K value is 2). The SASE radiation generated in U1 is purified in U2, which consists of 3 undulators resonant at λ_1 = 4.2 nm. The corresponding undulator strength for U2 is K = 6.32, which can be readily achieved by reducing the gap of the undulator. The SASE radiation is purified and amplified in U2 through the seventh harmonic interaction. Finally, the radiation with improved temporal coherence is further amplified to saturation in U3, which consists of 6 undulators resonant again at λ_1 = 0.6 nm (K = 2).

After 9 undulator sections, the FEL power and spectrum obtained with the GENESIS code [34] at the exit of U1 are shown in Fig. 4. The average power of the radiation is about 60 MW and the energy spread growth from the FEL interaction is negligible. The radiation power profile consists of ∼40 spikes, each having a full width of about 1 fs. The radiation field and particle distribution are first dumped at the exit of U1 and then imported into the GENESIS code for simulation of the 7th harmonic interaction in U2, which is resonant at 4.2 nm. The evolution of the radiation power profile and spectrum in U2 is shown in Fig. 5, where one can clearly see that the radiation bandwidth is gradually reduced in U2 through the slippage-boosted filtering effect. After 3 undulator sections, the number of spikes in the radiation power profile is reduced to ∼10 (Fig. 5e), and the bandwidth is accordingly reduced by about a factor of 4 (Fig. 5f). Sending the beam through two more undulator sections leads to a smoother temporal profile with only 7 spikes (Fig. 5g), but the spectrum (Fig. 5h) is quite similar to that after just 3 undulators (Fig. 5f). So in our simulation U2 has only 3 undulators resonant at 4.2 nm. Note that, since U2 is relatively short and the harmonic radiation is seeded by that produced in U1, it is not necessary to use phase shifters to suppress the growth of the fundamental radiation at 4.2 nm. In our simulation, the average power of the 0.6 nm radiation at the exit of U2 is about 300 MW, while that of the fundamental radiation (4.2 nm) at the exit of U2 is only about 1 MW. Once again the radiation field and particle distribution are dumped at the exit of U2, and finally they are imported for simulation in U3, which is resonant at 0.6 nm. With the FEL interaction in U2, the beam energy spread is increased to about 1.3 MeV at the exit of U2.
As can be seen in Fig. 3, at this energy spread level the power gain length of the 7th harmonic lasing mode exceeds that of the fundamental lasing mode. By sending the purified radiation and the electron beam to U3, which operates in the fundamental lasing mode, the efficiency of the FEL interaction is maximized and the saturation power of this pSASE FEL will be similar to that of a standard SASE FEL. The purified radiation is amplified to saturation in U3 after 6 undulators. The radiation power profiles and spectra for an FEL working in the standard SASE mode and in the pSASE mode are shown in Fig. 6. For a fair comparison, the same beam parameters and initial shot noise are used in the simulation. The only difference is that for the pSASE FEL the undulator K values of the 10th, 11th and 12th undulators are set at K ≈ 6.32. The average power for both modes is similar, but the bandwidth of the pSASE FEL is significantly smaller than that of the standard SASE FEL. Note that the bandwidth reduction factor in a pSASE FEL is approximately nL^(2)_3D/L^(1)_3D, where n is the harmonic number in the slippage-boosted section, and L^(2)_3D and L^(1)_3D are the 3D gain lengths of the radiation at the FEL wavelength λ_0 in U2 and U1, respectively. Therefore, one can either increase the harmonic number (e.g. by increasing the K value and/or λ_u) or increase the power gain length (e.g. by increasing the average beta function) in U2 to further reduce the FEL bandwidth.

To quantify the bandwidth reduction factor, we performed 10 simulations with different initial shot noise distributions, and the averaged radiation spectra are shown in Fig. 7. The relative FWHM bandwidth of the radiation produced in the standard SASE mode is about 1.5 × 10^-3, while that produced in the pSASE mode is about 3 × 10^-4. The bandwidth is reduced by approximately a factor of 5, in good agreement with the theory (nL^(2)_3D/L^(1)_3D ≈ 6 for our simulation parameters). It should be pointed out that in our simulation we conservatively chose K = 6.32 in the slippage-boosted section. For the LCLS-II undulator, the undulator strength can actually be tuned up to about K = 10, which in principle allows one to use the 17th harmonic interaction in U2 to further reduce the FEL bandwidth. However, at such a high harmonic number the FEL performance may be more sensitive to field errors, beam energy spread, etc. It is worth mentioning that reducing the FEL bandwidth also increases the taper efficiency of a saturated FEL [35]. Since the number of longitudinal modes in a pSASE FEL is reduced, one can eventually extract more power from the electron beam in a pSASE FEL compared to a standard SASE FEL. Therefore, adding tapered undulator sections to a pSASE FEL may lead to further enhancement in FEL performance.
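A small consistency check of the quoted bandwidth reduction. The gain-length ratio of U2 to U1 at λ_0 is not stated explicitly in the text, so the value below is an illustrative assumption chosen near unity; the simulated bandwidths are those quoted above.

```python
# Sketch: comparing the simulated bandwidth reduction with the approximate scaling
# n * L3D(U2) / L3D(U1). The gain-length ratio is an assumed illustrative value (~0.86);
# the FWHM bandwidths are those quoted in the text.
n = 7
L3D_ratio_U2_over_U1 = 0.86          # assumed, consistent with the quoted n*L ratio of ~6
reduction_theory = n * L3D_ratio_U2_over_U1

bw_sase, bw_psase = 1.5e-3, 3.0e-4   # relative FWHM bandwidths from the simulations
reduction_sim = bw_sase / bw_psase

print(f"theory ≈ {reduction_theory:.1f}x, simulation ≈ {reduction_sim:.1f}x")
```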
IV. SUMMARY AND DISCUSSIONS

We have studied a simple scheme to significantly enhance the temporal coherence and spectral brightness of a SASE FEL. In this pSASE FEL, a few undulator sections resonant at a sub-harmonic of the FEL radiation are used in the middle stage of the exponential growth regime to amplify the radiation while simultaneously reducing the FEL bandwidth. In this slippage-boosted section the FEL slippage length is increased, which allows radiation fields initially far apart to establish a phase relation, leading to a significant increase in the FEL cooperation length. The purified radiation is further amplified to saturation in the undulator sections tuned to the FEL radiation wavelength. With this configuration a pSASE FEL reaches saturation at a similar power level as that of a standard SASE FEL, with significantly enhanced temporal coherence and spectral brightness.

Using the LCLS-II parameters as an example, we show that even with conservative parameter sets the FEL bandwidth can be reduced by a factor of 5 with the proposed scheme. It is worth mentioning that the parameters used in our simulations and calculations are representative rather than fully optimized design sets. A more careful optimization might lead to further improvements of the scheme. Furthermore, a higher bandwidth reduction factor may be achieved with a larger n and a longer gain length in the slippage-boosted section. However, it is likely that the performance of a pSASE FEL with an extremely large n will be more sensitive to many unwanted effects, and at some point the disadvantages will outweigh the benefits. These concerns will be addressed in our future work.

For SASE FELs with variable-gap undulators (such as LCLS-II and the European XFEL [36]), it is straightforward to reconfigure them for the pSASE mode by increasing the K value of part of the undulators at no additional cost. For SASE FELs with fixed-gap undulators, since only a relatively short section (with a length comparable to ∼3 gain lengths) is needed to purify the radiation spectrum, one may replace several existing undulator sections with variable-gap undulators (at a moderate cost) to enable pSASE operation. Similarly, for SASE FELs with short-period undulators (such as SACLA [5] and SwissFEL [37]), where it may be difficult to increase the undulator K value enough to significantly increase the slippage length in the slippage-boosted section, one may replace several existing undulator sections with large-period variable-gap undulators to enable pSASE operation. The pSASE operation mode may be particularly suited for high-repetition-rate FELs, where the heat load associated with the high-repetition-rate beam may pose a damage risk to the monochromator required for self-seeding; the pSASE scheme may then be a very promising alternative for the generation of radiation with a narrow bandwidth. In general, this scheme may be applied to many SASE FEL light sources to enhance the FEL performance.

FIG. 1: Ratio of the power gain length L^(n)_1D to L^(1)_1D for various undulator K values.

FIG. 5: SASE radiation power and spectrum at the exit of the 1st undulator in U2 [(a) and (b)]; at the exit of the 2nd undulator in U2 [(c) and (d)]; at the exit of the 3rd undulator in U2 [(e) and (f)]; at the exit of the 5th undulator in U2 [(g) and (h)]. The spectral brightness P(λ) is normalized to the radiation peak spectral brightness at the exit of U1 (Fig. 4b).
FIG. 6: Representative radiation power profiles and spectra for a standard SASE FEL [(a) and (b)] and a pSASE FEL [(c) and (d)]. The average power for both cases is about 7 GW, and in the simulation the beam parameters, lattice functions and initial shot noise are all the same. The spectral brightness is normalized to the peak spectral brightness at the exit of U1.

FIG. 7: Spectrum of the FEL radiation produced in the standard SASE (a) and the pSASE mode (b). Thin lines refer to single-shot realizations and the bold line refers to the average over 10 realizations.
5,468.4
2013-01-11T00:00:00.000
[ "Physics" ]
Fraxicon for Optical Applications with Aperture ∼1 mm: Characterisation Study Emerging applications of optical technologies are driving the development of miniaturised light sources, which in turn require the fabrication of matching micro-optical elements with sub-1 mm cross-sections and high optical quality. This is particularly challenging for spatially constrained biomedical applications where reduced dimensionality is required, such as endoscopy, optogenetics, or optical implants. Planarisation of a lens by the Fresnel lens approach was adapted for a conical lens (axicon) and was made by direct femtosecond 780 nm/100 fs laser writing in the SZ2080™ polymer with a photo-initiator. Optical characterisation of the positive and negative fraxicons is presented. Numerical modelling of fraxicon optical performance under illumination by incoherent and spatially extended light sources is compared with the ideal case of plane-wave illumination. Considering the potential for rapid replication in soft polymers and resists, this approach holds great promise for the most demanding technological applications. Introduction Ultrafast laser-assisted 3D micro-/nano-fabrication (printing) using additive [1][2][3][4][5][6][7], subtractive [8][9][10], and patterning [11][12][13][14][15][16][17][18] modes of material structuring is becoming popular for a wide range of technological tasks and applications [19][20][21][22][23], with a good understanding of the underlying mechanisms of energy deposition and light-matter interactions [24][25][26].One of the most promising trends is the rapid prototyping and manufacturing of various micro-optical components merging the refractive, diffractive, waveguiding, and polarisation control, or even combined functionalities [27][28][29][30][31]. 
Another trend is the inscription of waveguides in glasses and crystals [32,33], as well as the formation of optical vortex generators and optical memory structures via the form birefringence of self-organised nanogratings [34]. Fs laser-fabricated optical elements are useful for beam collimation [35], shaping, imaging [36], telecommunications [37], and sensing [38], with an expanding range of functionalities and applications due to the possibility of miniaturisation and efficient fabrication by direct laser writing [39]. The applications of 3D polymerisation were recently reviewed for micro-mechanical applications triggered by different stimuli: light, temperature, and pH [40]. The use of specialised (undisclosed composition) two-photon absorbing photo-resists, hydrogels, and glass composites developed for commercial 3D printers based on fast scanning and high-repetition-rate fs oscillators is a fast-growing application field [41], with a vision of 3D printing applications of computer-designed complex optical elements [42]. A new pathway to 3D structures made of silica, with a resolution down to ∼120 nm, using a photo-polymerisable resist with 2 wt% of the Irgacure 369 photo-initiator and subsequent calcination, was demonstrated recently [43]. Similarly, 3D silica structures can be produced from hydrogen silsesquioxane (HSQ) without any photo-initiator by direct writing with an fs laser at very different exposure conditions: a low repetition rate (∼10 kHz) and long ∼300 fs pulses [44,45], as well as a high (∼80 MHz) repetition rate and short ∼120 fs pulses [46]. These demonstrations of 3D SiO_2 structuring down to nanoscale resolutions were recently extended to high-refractive-index ZrO_2 resists and, moreover, demonstrated at writing speeds approaching 10 m/s using fast polygon and stepping scanners [47]. Another strategy for high-throughput 3D printing is the use of multi-focus arrays [48,49].

High-resolution 3D printing over large areas remains a formidable challenge, especially for optical applications at shorter wavelengths. Maintaining a low surface roughness of one-tenth of the wavelength, λ/10, or less adds to the challenge. Making flat micro-optical elements for further miniaturisation and compaction of micro-optical solutions is currently trending, but this is even more challenging for the fabrication of optical micro-lenses and functional structures. For controlled phase patterns in diffractive optical elements this is of particular importance, and a phase step should be defined over the narrowest lateral width (a step-like height change).

Control of focusing from tight (<10λ) to loose (>10λ) depends on the curvature and diameter D of the lens and its focal length f, which defines the f-number F# = f/D, i.e., the numerical aperture NA ≈ 1/2F#, and the imaging resolution of the lens. More demanding precision is required for larger micro-optics with a large NA. For flat optical elements, e.g., Fresnel lenses, the 2π phase height (along the light propagation direction) is defined over the height of a wavelength ∼λ, which approaches a comparable lateral width for the most off-centre phase rings. This is a challenging 3D laser polymerisation task demanding the highest-resolution laser printing.
High-resolution structures can be made via photo-initiator-free laser writing at high pulse intensities I_p ∼ (1-10) × 10^12 W/cm², i.e., (1-10) TW/cm² [50]. At such high intensities, the photon energy hν ≈ 1.24/(λ [µm]) [eV] approaches the ponderomotive energy (potential) of an electron, i.e., the cycle-averaged quiver energy of a free electron in the electromagnetic field of light: U_p [eV] ≈ 9.33 × I_p [10^14 W/cm²] × (λ [µm])². For λ ∼ 1 µm and I_p ≈ 10 TW/cm², the electron quiver energy during one optical cycle reaches U_p ≈ 0.93 eV, which is comparable to the photon energy hν ≈ 1.24 eV. Hence, photo-ionisation of the polymer matrix (∼99 wt%) can take place without a photo-initiator, which is usually doped at below 1 wt% for the wavelength-specific two-photon absorption. Nonlinear or defect-based absorption provides free electrons, which promote further ionisation and chemical-bond breaking via the ponderomotive channel and avalanche ionisation.

Another effective polymerisation pathway is via high-megahertz-repetition-rate exposure of a photo-resist, which facilitates thermal accumulation and cross-linking, even with a very small initial temperature augmentation due to low sub-1 nJ pulse energies. The low thermal diffusivity of the glass substrate and resist, D_T = χ/(c_p ρ) ≈ 7 × 10^-7 m²/s [51], enhances the local temperature rise (here, χ ≈ 1 W m⁻¹ K⁻¹ is the thermal conductivity, ρ ≈ 2.2 g/cm³ is the mass density, and c_p ≈ 700 J/(kg·K) is the heat capacity at constant pressure). High-repetition-rate laser writing was, therefore, used in this study. Ionisation of the photo-resist modifies the real and imaginary parts of the refractive index ñ = n + iκ, i.e., the permittivity ε ≡ ñ², which defines the energy deposition. When the real part of the permittivity ε_re ≡ (n² − κ²) → 0 (epsilon-near-zero, ENZ) enters 0 < ε_re < 1, the most efficient energy deposition into the resist takes place [24]. It is noteworthy that the condition ε_re = 0 (or n = κ) defines a runaway process of dielectric breakdown at the focus, which should be avoided for high precision and resolution of 3D polymerised structures.

Designing and manufacturing 3D polymerised structures for beam modulation is more challenging in an integrated optics framework. In most research reports, including distributed Bragg reflector (DBR) laser-based optics, where the periodicity of the grating elements is parallel to the beam propagation [52], and cases where the periodicity is perpendicular to the beam propagation [53], binary elements are manufactured. This is because the short integration distance often demands short-period grating elements, and achieving multiple levels within such a short period is challenging.

Here, we demonstrate 3D laser printing of a 0.2 mm diameter fraxicon [54] (flat conical lens) with a triangular phase profile (2π over ∼1 µm height) at 5 µm width in an SZ2080 resist for integration with a micro light-emitting diode (micro-LED; see Figure 1). The lateral step width was defined within ∼1 µm. Characterisation of the fraxicon performance was carried out with optical microscopy and several optical numerical modelling methods: ray tracing, an analytical solution for Gaussian input, the Rayleigh-Sommerfeld (RS) diffraction integral, and holographic simulation.
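As a quick check of the intensity regime discussed above, the sketch below evaluates the cycle-averaged quiver energy from the quoted scaling and compares it with the photon energy; the 0.275 TW/cm² value is the average intensity used later in this study.

```python
# Sketch: photon energy vs. ponderomotive (quiver) energy, using the scaling quoted above,
# U_p [eV] ≈ 9.33 * I [1e14 W/cm^2] * (lambda [um])^2.
def photon_energy_eV(wavelength_um):
    return 1.24 / wavelength_um

def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
    return 9.33 * (intensity_W_cm2 / 1e14) * wavelength_um ** 2

lam_um = 0.78                                  # writing wavelength used in this study
print(f"photon energy: {photon_energy_eV(lam_um):.2f} eV")
for intensity in (0.275e12, 1e12, 1e13):       # W/cm^2: this study and the 1-10 TW/cm^2 regime
    print(f"I = {intensity:.2e} W/cm^2: U_p = {ponderomotive_energy_eV(intensity, lam_um):.3f} eV")
# At ~0.3 TW/cm^2 the quiver energy is far below the ~1.6 eV photon energy; at 10 TW/cm^2 it
# becomes a sizeable fraction of it (and reaches ~0.93 eV at lambda ~ 1 um, as quoted above).
```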
Resist SZ2080™

The popular hybrid organic-inorganic silica-zirconia composite SZ2080™ (IESL-FORTH, Crete, Greece) was used in this study. Its composition is open (in spite of it being a commercial product), with the identities and concentrations of the photo-initiators being known. This makes the determination of the linear and nonlinear portions of the absorbed energy contributing to the final polymerisation at a defined pulse intensity straightforward, as shown in Ref. [24]. The instantaneous permittivity, the square of the complex refractive index, ε(t) ≡ (ñ(t))², defines the amount of the absorbed pulse energy, as ε(t) changes during the pulse and follows the intensity envelope I_p(t).

SZ2080™ has several properties contributing to its wide use in very different applications: it has a solid (gel) state, high accuracy and resolution of 3D laser printing at the nanoscale, ultra-low shrinkage, a small change in refractive index during laser exposure, and high mechanical stability [55]. It is used for producing nano-photonic structures [55], cell scaffolds [56], biomedical applications [57], micro-optical elements [28,35,38], and functional structures for micro-fluidics [58]. Among other advantages, also useful for this study, are the glass-matching refractive index [55,59], mechanical stability [60], chemical inertness [61], high resilience to laser-induced damage [62], the possibility of chemical doping [63], and simple functionalisation of the fabricated 3D surfaces [64]. Recently, it has been used in thermal post-processing for down-scaling [65] and material-morphing [61]. Interestingly, in addition to all the aforementioned benefits, it also allows the possibility of converting the material into an inorganic substance, which enables the realisation of 3D printing of glass at the nanoscale [61].
Laser Printing

3D laser printing of fraxicons in the resist was performed with a 780 nm/100 fs laser (C-Fiber 780 Erbium Laser, MenloSystems, Martinsried, Germany) under tightly focused conditions, using an objective lens of numerical aperture NA = 1.4 with a beam diameter saturating the input aperture for optimal resolution. The radius at focus was r = 0.61λ/NA ≈ 340 nm. The pulse repetition rate was 100 MHz. A combination of fast galvanometric scanners and synchronised precision positioning stages was used, similar to the system described in [66]. The scan velocity varied along the structure and depended on the distance to the geometric centre. The writing strategy was to scan the structure concentrically in closed loops and to iterate each next loop by a displacement of the relative radius and height of Δr = 50 nm (see the discussion below on thermal accumulation) and Δz = 300 nm. In the centre, the beam's travel speed was v_sc = 10² µm/s, and it increased linearly towards the edge, reaching 10³ µm/s. Rapid-jump kinematic commands (G0) were not used, so that all movements were performed at the same accelerations. This strategy minimises the kinematic fabrication error, where smaller-radius loops result in greater centripetal acceleration and scan deviations. For simplicity, we did not account for the cumulative dose variation; therefore, the structure features a slight spherical ramp along the radial direction. Each loop had a ramp-up and ramp-down segment of 30 degrees. A previous study showed that the difference in exposure dose affects the final refractive index and optical performance of a micro-lens, which is better described by wave optics [67]; however, such intricate control of the index by polymerisation was not investigated for the fraxicon fabricated in this current study.
A commercial SZ2080™ resist was used for 3D printing, with 2-benzyl-2-dimethylamino-1-(4-morpholinophenyl)-butanone-1 (IRG369, Sigma Aldrich, Darmstadt, Germany) as the photo-initiator dissolved in the initial pre-polymer at 1 wt%. The peak of IRG369 absorbance (in SZ2080™) was at 390 nm with emission at 400 nm, as determined by photoluminescence excitation spectroscopy [50]; for pure SZ2080™, absorption and emission were at 350 nm and 400 nm, respectively. To improve the resolution, pure SZ2080™ is preferable; however, a resist with a photo-initiator was used for a larger laser processing window. The energy of a single pulse, with transmission losses accounted for at the focal point, was E_p ≈ 96 pJ, corresponding to a fluence per pulse of F_p = 0.0275 J/cm² and an (average) intensity of I_p = 0.275 TW/cm². Consequently, the pulses generated a negligible ponderomotive potential. The dwell time required for the beam to cross the focal diameter 2r was t_dw = 2r/v_sc = 6.8 ms and, at the repetition rate R, the number of accumulated pulses over the focal spot was large, N = t_dw R = 680 × 10³. The thermal spread (cooling) of the laser-heated focal volume was defined by the time t_th = (2r)²/D_T = 660 ns, while the time separation between pulses was only 1/R = 10 ns. Hence, a very strong thermal accumulation takes place with the direct laser writing used here. An average temperature drop at the arrival of the next pulse occurs due to heat transfer to the surrounding cold material. The temperature accumulation T_N can be explicitly calculated for N pulses, where the single-pulse temperature jump is T_1 [68]: T_N = T_1(1 − β^N)/(1 − β), where β = t_th/(t_th + 1/R) is the constant which defines the heat accumulation; β → 1 at a high repetition rate R → ∞. For the experimental conditions used, β ≈ 0.9925. The first N = 10 pulses cause a significant temperature jump, T_N = 9.67 T_1 (N = 10). Considering the exothermic character of polymerisation, a minute temperature rise at the focal region causes guided thermal polymerisation [69].

The samples were prepared by drop-casting the liquid resin on a standard microscope cover slip and pre-condensing at 50 °C for 24 h. After exposure, the samples were developed in methyl-isobutyl-ketone for 30 min, then rinsed with pure developer and air-dried under normal room conditions. The refractive index of the resist at visible wavelengths was approximately n_SZ ≈ 1.5. While the exact definition of the 2π phase height was experimentally challenging for smaller-period fraxicons, the height of the polymerised phase ramps normalised by the wavelength (or 2π in phase) is h × n_SZ/λ ∼ 2, which corresponds to second-order (2 × 2π) phase steps. This strategy was used for fabrication because of the more straightforward definition of the exact required geometry with a focused laser pulse, which occupies a defined volume.

For a more widespread practical implementation of the proposed 3D laser printing of micro-optical elements, the fabrication conditions were optimised to complete the entire laser writing step within 100 min for all 200 µm diameter fraxicons in this study.
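The overlap and heat-accumulation numbers above follow from a few one-line estimates; the sketch below reproduces them with the values quoted in the text (small differences from the quoted β and T_N/T_1 may arise from rounding or definitional details in the original).

```python
# Sketch: pulse-overlap and heat-accumulation estimates using the relations quoted above.
r_focus = 0.34e-6          # focal radius [m]
v_scan = 100e-6            # slowest scan speed [m/s]
rep_rate = 100e6           # repetition rate [Hz]
D_T = 7e-7                 # thermal diffusivity of substrate/resist [m^2/s]

t_dwell = 2 * r_focus / v_scan                # time for the beam to cross the focal diameter
n_pulses = t_dwell * rep_rate                 # pulses accumulated per focal spot
t_cool = (2 * r_focus) ** 2 / D_T             # cooling time of the heated focal volume
beta = t_cool / (t_cool + 1.0 / rep_rate)     # heat-accumulation constant

def accumulated_T(N, T1=1.0):
    # T_N = T_1 (1 - beta^N) / (1 - beta), in units of the single-pulse jump T_1
    return T1 * (1 - beta ** N) / (1 - beta)

print(f"dwell time {t_dwell * 1e3:.1f} ms, pulses per spot {n_pulses:.0f}, "
      f"cooling time {t_cool * 1e9:.0f} ns, beta = {beta:.4f}")
print(f"T_10 / T_1 = {accumulated_T(10):.2f}")
```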
Characterisation

Micro-LEDs (CREE C460TR2227, CREE, Durham, USA) were used for the design and prototyping of a low-profile implantable device. The footprint of the µLED was 0.27 × 0.22 mm², with an emitter area comparable to the D = 0.2 mm diameter fraxicon. The micro-LEDs were assembled on a 10 µm thick polyimide substrate [70,71], which was formed by first spin-coating a 5 µm thin polyimide layer on a silicon wafer (diameter 100 mm), followed by the sputter deposition of a metallic thin film. Interconnecting tracks were then patterned using lift-off technology, and the track thickness was increased by electroplating 1 µm of gold to reduce the electrical line resistance. A second polyimide layer was subsequently deposited to insulate the metal tracks.

To access the metal tracks, small openings were formed in the top polyimide layer by reactive-ion etching (RIE) with oxygen plasma. A second metallisation and electroplating step was used to define "bonding pads" for the micro-LED chips and zero-insertion-force (ZIF) connector pads for wire bonding the test structure to a printed circuit board. Finally, the shape of the polyimide substrate was defined by trenching the stack of polyimide layers down to the silicon substrate with a second RIE process step. The substrates could then be peeled from the silicon wafer using tweezers, and the micro-LED chips were assembled on the pads of the polyimide substrate by flip-chip bonding [70,71]. They were subsequently underfilled with a biocompatible adhesive (EPO-TEK 301-2, Epoxy Technology, Inc., Billerica, MA, USA) to electrically insulate the pads located at the interface between the micro-LED chips and the polyimide substrate. Structural characterisation of the fraxicon was carried out using optical microscopy, scanning electron microscopy (SEM), and atomic force microscopy (AFM). Typical results are shown in Figure 1 for structural and in Figures 2 and 3 for optical characterisation, respectively.

Fraxicon: Basic Properties and Design

The positive fraxicon was designed with a diameter of D = 0.2 mm and featured 20 blazed rings. The thickness profile t_fra of the fraxicon as a function of the radial coordinate r = √(x² + y²) is given by Equation (2): t_fra(r) = h[1 − mod(r, Λ)/Λ], where h is the height of the fraxicon corresponding to a 2π phase retardation (h = λ/(n_SZ − 1) ≈ 1 µm, λ is the incident wavelength, n_SZ ≈ 1.5 is the refractive index of SZ2080™), Λ = 5 µm is the period of the gratings (rings), and mod is the remainder after division (modulo operation).

The axial intensity distribution AI(z) (along the z-axis) of an axicon/fraxicon depends on the radial intensity at the input and can be found from a relation based on Snell's law, r = z(n − 1)α; here, n is the refractive index of the axicon and α is the base angle of the axicon (i.e., the angle complementing the full π angle at the tip of the axicon). For a Gaussian beam, the input intensity is I_in(r) = I_0 exp(−2r²/w_o²), where w_o is the beam waist. This relation can be generalised, and the input intensity profile I_in(r) can be expressed in terms of the r-to-z mapping as AI(z) ≈ C z I_in(z(n − 1)α), where C = 2πk(n − 1)²α² is the axicon geometry-defined constant. This is valid for the on-axis intensity in the 0 < z < DOF region, with the depth of focus DOF = w/[(n − 1)α] defined by the radius w of the beam.
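A small sketch of the design relations above (sawtooth profile, equivalent base angle, and depth of focus). The reconstructed profile form and the 0.5 µm visible design wavelength implicit in the base-angle estimate are assumptions used only for illustration.

```python
# Sketch: fraxicon design relations quoted above (profile, equivalent base angle, DOF).
import numpy as np

h = 1.0e-6            # 2*pi phase height [m]
Lam = 5.0e-6          # ring period [m]
n_sz = 1.5            # refractive index of SZ2080
D = 0.2e-3            # fraxicon diameter [m]

def thickness(r):
    """Sawtooth thickness profile (reconstructed form of Equation (2))."""
    return h * (1.0 - np.mod(r, Lam) / Lam)

alpha = np.arctan(h / Lam)                   # base angle of the equivalent bulk axicon
dof = (D / 2) / ((n_sz - 1) * alpha)         # depth of focus for a beam filling the aperture
print(f"base angle ≈ {np.degrees(alpha):.1f} deg, DOF ≈ {dof * 1e3:.2f} mm")
print(f"thickness at r = 12 um: {thickness(12e-6) * 1e6:.2f} um")
```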
Theory: Fraxicon Illuminated by an Incoherent and Extended Source

Gauss-Bessel beam generation by fraxicons is mostly discussed for spatially coherent illumination, such as laser beams. The timing of photon emission from an LED is, however, disorganised. To describe beam generation by fraxicons for spatially incoherent illumination, such as light from an LED, an incoherent imaging framework is needed. The LED needs to be considered as a collection of points, and the generated beam is formed by the summation of the intensities of the beams generated for every point. While it is difficult to discriminate between beams generated by coherent and incoherent illuminations, the generation approaches are quite different from one another. We consider a point in the LED as a Delta function emitting a spherical wavefront with amplitude √I_s, given as S(1/z_s) = exp[j(2π/λ)√(x² + y² + z_s²)]. The phase of the fraxicon is given as ϕ = exp[j(2π/λ)t_fra]. The complex amplitude after the fraxicon is, therefore, ψ = √I_s C_1 L(r_s/z_s) S(1/z_s) ϕ, where L is a linear phase and C_1 is a complex constant. The intensity distribution at a distance z_r for a Delta function is given as I_Delta = |ψ ⊗ Q(1/z_r)|², where Q(1/z_r) = exp[j(π/λz_r)(x² + y²)] and '⊗' is the 2D convolution operator. The intensity distribution obtained for the entirety of the LED's active area can be given as I_LED ≈ I_Delta ⊗ O, where O is the LED's active area in a square shape filled with ones, with zeros around it. It must be noted that the above summation is not a complex summation but an addition of intensities, as the phase information is not present. The above expression is an approximate one, as the Fresnel approximation was used for propagation between the fraxicon and the camera. To understand the beam generation more deeply, let us consider the Delta function with no linear phase attached to it, i.e., the one at the centre of the LED's active area. This Delta function generates a spherical wave that interacts with the fraxicon. We have already established that fraxicons and axicons consist of lens functions with different focal lengths multiplexed in the transverse direction [72]. At the camera plane, one of the lens functions satisfies the imaging condition, generating a sharp Delta-like function. The other lens functions cause ring patterns around this sharp Delta-like function, typical of a squared Bessel function. The other points in the LED's active area, with linear phases attached to them, create off-axis squared Bessel functions on the camera. The recorded intensity distribution is the sum of all the contributions from the points of the LED's active area. This is different from coherent illumination, which would depict the behaviour of light emission from a single point. Simulation results for the intensity obtained for a point in an LED, the intensity obtained for a circular region in an LED by incoherent superposition due to spatial incoherence, and the intensity obtained for the same circular region but with coherent superposition are shown in Figure 4a-c, respectively. The incoherent superposition does not generate distinct rings around the central maximum, as would be expected of a Bessel distribution, due to the lack of phase relations for the light emitted from every point. The temporal coherence length for a Gaussian fit of the LED emission spectrum is given by L_tc = (4 ln 2/π)(λ_0²/Δλ), where λ_0 is the central emission wavelength and Δλ is its width or full width at half maximum (FWHM), and it coincides well with the experimentally measured value L_tc ≈ 2 µm obtained using Mach-Zehnder interferometry [73].
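The incoherent-superposition argument above can be illustrated with a small numerical sketch: the single-point (squared-Bessel) response is convolved with the emitting area, so intensities add and the rings wash out. The transverse wavenumber and source radius below are illustrative assumptions, and the squared Bessel function is a stand-in for the full fraxicon point response.

```python
# Sketch: incoherent superposition for an extended source, I_LED ≈ I_Delta ⊗ O.
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import j0

x = np.linspace(-50e-6, 50e-6, 401)          # camera-plane coordinates [m]
X, Y = np.meshgrid(x, x)
R = np.hypot(X, Y)

k_perp = 2.4e5                               # assumed transverse wavenumber [1/m]
I_delta = j0(k_perp * R) ** 2                # squared-Bessel response of a single source point

O = (R <= 20e-6).astype(float)               # assumed emitting area (disk of 20 um radius)
I_led = fftconvolve(I_delta, O, mode="same") # intensities add; no phase relation between points

# compare the first Bessel minimum (r ≈ 10 um) with and without the extended source
i0, i_min = 200, 240
print(f"point source at first minimum: {I_delta[i0, i_min] / I_delta[i0, i0]:.3f}")
print(f"extended source at same radius: {I_led[i0, i_min] / I_led[i0, i0]:.3f}")
# The extended-source value stays close to 1: the rings are washed out, as in Figure 4b.
```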
Experimental Characterisation of Light Intensity

An axicon, or conical lens, is a very useful optical element that forms an elongated axial intensity distribution when illuminated by a Gaussian beam. The formed Bessel-Gaussian beam has a diameter defined by the first minimum of the Bessel function J_0(k_⊥ r), where r is the radial coordinate in the lateral plane, and the perpendicular component of the propagation wavevector k_⊥ is defined by k_⊥ ≡ k sin γ, with γ being the half-cone angle of the beam with respect to the optical axis, the wavevector k = ω/c ≡ 2π/λ, and k_⊥² + k_∥² = (ω/c)²; ω and c are the cyclic frequency and the speed of light, respectively. The diameter of the axially extended focus is d_B = 4.816/k_⊥, and its length depends on the diameter D of the incident beam (lens diameter) as Z_max = D/(2 sin γ). A large D and a small γ facilitate a long, so-called non-diffracting region of intensity on the optical axis. An axicon can be made flat in the same way as a Fresnel lens is made from concentric segments.

A fraxicon lens of D = 300 µm, consisting of 30 circular rings of ∼5 µm width and 0.7 µm height, was polymerised in SZ2080™ using fs laser direct writing (Figure 1). The step between adjacent phase ramps was within ∼1 µm. Fraxicons are especially promising for optical devices with strong spatial constraints. If fraxicons are used for optical focusing of micro light sources, e.g., LEDs, the aperture of the fraxicon will usually be illuminated by an extended, non-collimated light source with possible intensity inhomogeneities on a scale of tens of micrometers (see the powered µLED image in Figure 1a). Such a situation can be modelled using an optical microscope under condenser illumination of a fraxicon on a sample plane, as discussed next.

Confocal Intensity Mapping

Figure 2a shows the axial intensity distribution calculated from the axial stacks shown in Figure 2b, using a standard microscope (Optiphot, Nikon, Tokyo, Japan) under white light condenser illumination in transmission mode. Images at every Δz = 1 µm step were recorded, and the 3D intensity distribution was calculated using our own Matlab code. The resulting confocal axial intensity distribution was separated into the basic red, green, blue (RGB) colour channels. The width of the focal region was d_B ≈ 20 µm; hence, k sin γ = 4.816/d_B, or γ ≈ 1.1° for λ ≈ 0.5 µm. As expected, the width of the axial distribution for the red channel, d_B^(R), was larger than for the blue channel, d_B^(B). The strong widening of the intensity profile for the blue channel is most probably defined by the comparatively high NA of 0.9 of the imaging microscope lens, since the conical angle γ ≈ 1.1° is small.

The length of the most uniform intensity region is wavelength-dependent, since the height of the phase ramps is fixed. The longest non-diffracting region was for the blue wavelength. This is also clearly seen in the lateral cross-sections at z > 150 µm (Figure 2b).

Figure 3 shows the fraxicon and its axial intensity cross-sections under condenser illumination in the microscope. The period of the concentric phase ramps was 1.5 times larger, hence ∼7.5 µm, while the diameter and the height of the phase ramps were the same as for the fraxicon in Figures 1b and 2. This resulted in a longer non-diffracting region.
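For reference, the cone angle quoted above follows directly from the measured core width; a one-line check (wavelength and width as quoted in the text):

```python
# Sketch: Bessel-beam half-cone angle from the measured core width, d_B = 4.816 / k_perp.
import numpy as np

lam = 0.5e-6                     # representative wavelength [m]
d_B = 20e-6                      # measured width of the focal core [m]

k = 2 * np.pi / lam
k_perp = 4.816 / d_B
gamma = np.degrees(np.arcsin(k_perp / k))
print(f"half-cone angle gamma ≈ {gamma:.1f} deg")   # ≈ 1.1 deg, as quoted above
```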
The intensity decreased along the propagation direction as √z, and the RGB colours had different effective lengths. The exact axial intensity profile depends on the wavevector spread of the different colours in the illumination light. The ideal performance of fraxicons was modelled numerically and is discussed next.

Numerical Predictions from Ray and Wave Simulations

For the numerical simulations we used the ideal case of triangular phase steps forming 20 rings, each having a 5 µm width and a 1 µm height. The material of the phase steps had a refractive index of 1.5 (dispersion-free). This made a D = 0.2 mm diameter fraxicon, which was illuminated with a plane wave (a single wavenumber) at different RGB colours.

Wave Optics

The Rayleigh-Sommerfeld (RS) diffraction integral can be used for an exact prediction of the axial intensity profile, consistent with the wave-optical approach; it has been used for thin graphene micro-lenses [74]. The electric field distributions U_2(r_2, θ_2, z) in the axial plane at different axial positions z were calculated using a MATLAB program based on the RS diffraction integral, expressed as U_2(r_2, θ_2, z) = (1/jλ) ∫∫ U'_1(r_1, θ_1) [exp(jkr)/r] cos(n, r) r_1 dr_1 dθ_1, where λ is the incident light wavelength (R = 700 nm, G = 546.1 nm, and B = 435.8 nm), k = 2π/λ is the wave vector, (r_1, θ_1) and (r_2, θ_2) are the polar coordinates in the diffraction plane (the plane immediately behind the fraxicon) and the observation plane (the focal plane), respectively, r is the distance between the point (r_1, θ_1) and the point (r_2, θ_2, z), cos(n, r) is defined as the cosine of the angle between the unit normal vector n of the diffraction plane and the position vector r from point (r_1, θ_1) to point (r_2, θ_2), and U'_1(r_1, θ_1) is the E-field immediately behind the fraxicon. The incident wave U_1(r_1, θ_1) is diffracted by the fraxicon through amplitude and phase modulations, and the electric field modified by the fraxicon, U'_1(r_1, θ_1), can be expressed by Equation (5): U'_1(r_1, θ_1) = U_1(r_1, θ_1) T(r_1) exp[j(2π/λ)Φ(r_1)], where T(r_1) is the transmission distribution (amplitude modulation) of the fraxicon, and Φ(r_1) = n_SZ · t_fra (n_SZ is the refractive index of SZ2080™, and t_fra is the thickness profile of the fraxicon) is the phase modulation provided by the fraxicon. Consequently, the light intensity distributions in the axial plane can be calculated by squaring the electric field, I_2(r_2, z) = |U_2(r_2, z)|² (Figure 5). The apparent differences between the modelling of ideal plane-wave illumination of a fraxicon (Figure 5) and the experimental imaging using condenser illumination of the microscope (Figure 3) are due to the presence of different k components at the same wavelength. Such a situation is expected in real applications where the fraxicon is placed in front of a µLED (Figure 1a). However, the basic features are consistent: the oscillatory nature of the intensity along the propagation axis, its decay as ∝√z, and the sub-1 mm long extent of the high-intensity section. Exact intensity distributions can be well controlled using tailored illumination of an axicon [75], and hence of a fraxicon as well.
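For a radially symmetric element, the on-axis RS integral collapses to a one-dimensional radial integral, which is easy to evaluate numerically. The sketch below uses the design values quoted earlier and the standard thin-element phase (n_SZ − 1)t_fra; it is an illustration under those assumptions, not the authors' MATLAB implementation.

```python
# Sketch: on-axis Rayleigh-Sommerfeld integral for the ideal fraxicon under plane-wave
# illumination (green wavelength). Profile form and (n-1)*t phase are assumptions.
import numpy as np

lam = 546.1e-9                   # green wavelength [m]
n_sz, h, Lam, D = 1.5, 1.0e-6, 5.0e-6, 0.2e-3
k = 2 * np.pi / lam

r1 = np.linspace(0.0, D / 2, 4000)                    # radial grid over the aperture
t_fra = h * (1.0 - np.mod(r1, Lam) / Lam)             # sawtooth thickness profile
U1 = np.exp(1j * k * (n_sz - 1) * t_fra)              # plane wave phase-modulated by the element

def on_axis_intensity(z):
    dist = np.sqrt(r1**2 + z**2)                      # distance from aperture point to axis point
    obliquity = z / dist                              # cos(n, r)
    integrand = U1 * np.exp(1j * k * dist) / dist * obliquity * r1
    U2 = (k / 1j) * np.trapz(integrand, r1)           # 2*pi/(j*lam) prefactor after azimuthal integral
    return float(np.abs(U2) ** 2)

zs = np.linspace(50e-6, 1.5e-3, 60)
I = [on_axis_intensity(z) for z in zs]
print(f"on-axis intensity peaks near z ≈ {zs[int(np.argmax(I))] * 1e3:.2f} mm")
```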
Ray Optics The Optical Software (Version 6.3) for Layout and Optimisation (OSLO, Lambda Research Co., Westford, MA, USA) is a ray-tracing tool used to model light propagation through (fr)axicons.Figure 6 shows RGB beam propagation through positive and negative (fr)axicons and their pairs, which collimates the beam.Change from an axicon to a fraxicon is conveniently made by tab-selection in the OSLO input; the fraxicon surface appears flat in the viewer; however, it encodes a fraxicon layout of 2π phase ramps.As expected from the shape of the axicon, a conical prism, light dispersion is evident in the RGB ray tracing (Figure 6).It manifests as a colour aberration along the focal region.A slight colour appearance of the fraxicon when imaged with a microscope is evident in Figure 3a, with a red centre and blue outside the edges of the phase ramps.Figure 6 clearly shows the compactness of micro-optical constructions using fraxicons and less dispersion in a pair of the flat optical elements Figure 6c,d for the same diameter of input beam; the alignment in Figure 6 is at the plane of the screen. Such compact fraxicons have the potential for application in space telescope technologies.It is planned to use multiple-order diffractive-engineered surface (MODE) lenses, which comprise a front-surface multiple-order diffractive lens (MOD) and a rear-surface diffractive Fresnel lens (DFL), to aid in the search for Earth-like planets and exoplanets in the universe as part of the upcoming telescope array known as the Nautilus Observatory [76][77][78]. Discussion The circular grating pattern makes an efficient collection of illumination at different wavelengths onto the optical axis and is useful for light delivery and collection to spot sizes of tens of micrometres.The long focal extension is helpful for light coupling into optical fibres and laser machining using increasingly miniaturised laser sources.Photonic crystal (PhC) lasers now deliver output powers of tens of watts at near-IR wavelengths from sub-1 mm apertures at a very low divergence angle [79].Micro-optics based on a high-damage-threshold SZ2080™ photo-resist is a promising solution [80]. Gauss-Bessel beams formed by phase profiles closely matching those of a fraxicon were made with a spatial light modulator for record high ∼10 4 aspect-ratio modification of dielectrics and semiconductors for scribing and dicing with a nanoscale resolution of tens of nanometres [81].For such material modification, multiple ionisation locations along the non-diffracting intensity distribution are essential and their connection occurs via back-scattering under multi-pulse irradiation.Hence, the oscillatory nature of the Bessel beam intensity on the axis (Figure 5) is beneficial for such material modifications. The replication of polymerised fraxicons can be achieved using Ni-shim (plasma coating with subsequent electrochemical deposition of Ni), which replicates structures with 10 nm feature sizes, e.g., nano-needles of black Si [82]. 
The emergence of a new generation of optical cochlear implants has highlighted the need for better miniaturisation and integration of optical elements. It has been shown that optical neuromodulation with visible light, facilitated by optogenetics, can confer a higher spatial precision of neural activation compared to traditional electrical stimulation methods [83]. For cochlear implants, the broad current spread from electrical devices reduces the number of independent stimulating channels [84,85]. A higher spatial precision of activation could greatly increase the number of independent stimulating channels and enable simultaneous channel stimulation, which would greatly enrich the sound quality experienced by cochlear implant recipients [86,87]. Strategies to deliver focused electrical stimulation, including tripolar and focused multipolar stimulation strategies, have failed to deliver significant clinical benefit [88,89]. The emerging development of optical arrays could provide an alternative solution.

In rodents, such as mice, rats, and gerbils, auditory neurons were modified to express photosensitive ion channels. The spread of activation during optical stimulation was significantly lower compared to electrical stimulation, resulting in near-physiological spectral resolution when using light emitters that were in close proximity to the neural tissue [71,90,91]. Furthermore, during two-channel simultaneous optical stimulation in the mouse cochlea using micro-LEDs with a pitch of just 0.52 mm, channel interaction was 13-15-fold lower than for simultaneous electrical channel stimulation [92]. In the human cochlea, where there is a greater distance between the emitter and the neural tissue, optical arrays of micro-LEDs are still predicted to significantly reduce the spread of activation in the cochlea to 0.4-1.0 octaves, up to fourfold lower than electrical stimulation [93,94]. While modelling data suggest waveguides could provide even greater spectral selectivity [95], the fraxicon technology presented here could potentially be used to focus the emission cone and improve the spectral resolution provided by LEDs in the human cochlea.

An antireflection coating could be used to increase the transmittance of a fraxicon by coating it with a quarter-wave (λ/4) film with a refractive index of √(n_out n_frax) ≈ 1.22, where n_out = 1 (air) is the refractive index at the focal region and n_frax = 1.5. MgF_2 is a good candidate for the antireflection coating over the visible spectral range. Atomic layer deposition (ALD) is a candidate for conformal coating of 3D surfaces, as is magnetron sputtering.

Fraxicons with ∼1 µm tall phase ramps can also be made using a scanning thermal tip (nano-cantilever) method based on the AFM principle (NanoFrazor; Appendix A). By using a polyphthalaldehyde (PPA) 4% resist for a grey-scale mask, e.g., 3000 rpm spin coating for a 100 nm film, a sacrificial PPA mask can be made with high nanoscale control and resolution down to 10 nm. Such a mask is used for the transfer of the 3D pattern onto a substrate by reactive-ion etching (RIE). For a Si etch (Figure A1), the etching contrast was ∼2.6, which translates to 260 nm deep structures when the PPA resist is ∼100 nm thick. Different patterns and metasurface structures can easily be designed for patterning into the resist using the open-source toolbox [96]. This is based on the industry-standard GDSII, a binary database file format for electronic design automation data exchange of integrated circuit layouts.
Conclusions and Outlook

The 3D printing of 0.2 mm diameter flat fraxicon lenses with a 2π phase step defined over 1 and 5 µm lateral widths and ∼0.7 µm height (axial length) was performed in an SZ2080™ (with 1 wt% IRG369) resist. The 3D polymerisation was carried out using tightly focused 780 nm/100 fs/100 MHz laser irradiation. The high repetition rate and the strong overlap of the laser pulses over the focal diameter of 680 nm, with a linear scan step of 50 nm between adjacent pulses, resulted in a strong thermal accumulation during 3D laser printing/polymerisation with only ∼0.1 nJ pulses. The tight focusing used, with a depth of focus approximately 3-4 times longer than the focal diameter, i.e., 2.5-3 µm, was significantly shorter than the axial pulse length of ct_p ≈ 30 µm.

The entire fraxicon was printed within 1.5 h. It was tested under illumination with an extended light source (the condenser of a microscope) to simulate its performance for µLED illumination in endoscopy and opto-probes. The RGB colour analysis revealed an axial colour separation along the light propagation direction, which is significant for an extended white light source. Different axial intensity distributions are predicted from the analysis of a fraxicon illuminated by incoherent and coherent light sources. Fraxicons with wider sections of phase ramps have fewer diffraction-related effects compared with flat Fresnel lenses, which have increasingly narrow phase ramps at larger diameters and, consequently, stronger diffraction. These aspects of fraxicon use in micro-optical applications have to be considered.

Figure 1. (a) Blue micro-LED assembled on a polyimide substrate chip with 460 nm emission and concept design of a flat fraxicon for endoscopy applications made out of silicone (polydimethylsiloxane, PDMS). (b) Optical microscope image of a fraxicon made by direct laser writing at 780 nm/100 fs/100 MHz (C-Fiber 780 Erbium Laser, MenloSystems) in SZ2080™ resist. The fraxicon has a Λ = 1 µm period. (c) Structural characterisation of the fraxicon using SEM and AFM, showing blazed 2π steps with period Λ = 5 µm.

Figure 2. Optical characterisation of the fraxicon shown in Figure 1b using an optical microscope with white condenser illumination. (a) Confocal intensity distribution and its RGB colour content along the focus (a "non-diffracting" part of the axial intensity). (b) Optical images at different axial positions along the white light propagation direction (along the z-axis).

Figure 3. (a) Optical image of the fraxicon with a 1.5-times-larger width of the 2π steps. Image taken under white light condenser illumination. (b) Axial intensity profile calculated from the lateral image stacks (same as in Figure 2). The inset shows the confocal profile of intensity.

Figure 4. (a) Intensity distribution obtained for a single point of the LED. (b) Intensity distribution obtained for a circular region of the LED by incoherent superposition. (c) Intensity distribution obtained for the same circular region as (b) but with coherent superposition.

Figure 5. (a) Simulation using the Rayleigh-Sommerfeld (RS) diffraction integral for a non-polarised plane wave with RGB wavelengths: R = 700 nm, G = 546.1 nm, and B = 435.8 nm (from top to bottom). The calculated intensity cross-section is given along the propagation direction. (b) Central intensity cross-section for the RGB colours; the inset shows the geometry of the simulated positive fraxicon with Λ = 5 µm period.
Figure 6.Ray tracing through (fr)axicons and their pairs (OSLO, Lambda Research Co.).Positive (a) and negative (b) fraxicon and axicon.Pairs of (fr)axicons (i.e., collimating telescopes): two positive (c) and a negative-positive pair (d) for the bulk-and flat-optics realisations.Illumination with RGB colour light shown by arrow.Location of the planarised (fraxicon) region is shown by shaded markers.The base angle of the axicon is α and γ is the half-cone angle of the Bessel beam.The refractive index of glass was n = 1.52 and the base angle α = ±40 • , where the positive sign is for real focus (a) and negative for virtual focus (b); a large angle was chosen for visualisation and a strong angular dispersion of RGB rays. Figure A1 . Figure A1.Sacrificial mask etch of micro-optical elements into Si.(a) A tilted-view SEM image of a binary Fresnel lens ( f = 50 µm) after RIE plasma etching, optical image (yellow middle-inset), and AFM image of the pattern defined in a PPA resist using thermal tip nanolithography (NanoFrazor; top-left inset shows the writing tip).(b) Optical profilometer cross-section of an f = 100 µm lens; insets show optical and AFM images.(c) Microscope images of PPA masks of the cubic-phase structure (Airy beam generator; a phase 0-2π phase map in the inset); four-level fraxicon; binary grating with period of 5 µm.Optical profilometer cross-section of the grating after RIE.
8,694
2024-01-30T00:00:00.000
[ "Engineering", "Physics", "Medicine" ]
Microblog sentiment analysis method using BTCBMA model in Spark big data environment: Microblogs are currently one of the most well-liked social platforms in China, and sentiment analysis of microblog texts can help further analyze the realization of their media value; however, the current task of sentiment analysis based on microblog information suffers from low accuracy due to the large size and high redundancy of microblog data. Therefore, a microblog sentiment analysis method using a Bidirectional Encoder Representation from Transformers (BERT)-Text Convolutional Neural Network (TextCNN)-Bidirectional Gate Recurrent Unit (BiGRU)-Multihead-Attention model in the Spark big data environment is proposed. First, the Chinese pre-trained language model BERT is used to convert the input data into dynamic character-level word vectors; then, TextCNN is used to effectively obtain local features such as keywords and to pool the filtered features; then, BiGRU is introduced to quickly capture more comprehensive semantic information; finally, a multi-headed attention mechanism is implemented to emphasize the most significant features in order to accomplish the sentiment classification of microblog information precisely. Compared with existing advanced models, the proposed model demonstrates an improvement of at least 4.99% in accuracy and 0.05 in the F1-score evaluation index. This significantly enhances the accuracy of microblog sentiment analysis tasks and aids pertinent authorities in comprehending individuals' attitudes toward hot topics. Furthermore, it facilitates a prompt prediction of topic trends, enabling them to guide public opinion accordingly.

Introduction

Social media platforms have proliferated in tandem with the Internet's development and have become the primary means by which individuals communicate their opinions and emotions on the web due to their rapid dissemination, wide audience, and convenient use. A microblog has emerged as a significant platform for users to acquire and distribute information due to its rapid transmission rate and substantial social impact. Presently, it is also a representative online social media platform in China, with a huge user group [1][2][3][4]. Weibo, being an open social media platform, is accessible to all users. On Weibo, users have the ability to update their status through text or other means, as well as share their thoughts on various products, events, or individuals [5,6].

Weibo text data is complex and chaotic on the surface, but it contains subjective sentiment information of the masses in many fields. A small range of sentiment expression may affect a large range of users' sentiment preferences for different events, products, and characters [7,8]. In the case of emergencies, it is also easy to generate online public opinion. If we can mine and use this information in depth, it may have considerable value for both society and individuals. It can provide reference information for consumers and producers: conducting sentiment analysis on the relevant comments about a given product helps to understand the advantages and disadvantages of the product and to improve customer satisfaction.
By analyzing the user's comments, we can determine the user's daily preferences and provide personalized suggestions [9][10][11].In addition, it can also provide support for the government and other relevant departments, facilitate the government's public opinion monitoring, curb the spread of false information, and maintain social stability and prosperity.Conduct sentiment analysis on the content of microblog and monitor the sentiment tendencies contained therein, so as to understand the users' comments on the products or characters on microblog, or the degree of attention and sentiment changes to the events.It provides a real-time scientific theoretical basis for decision-makers to guide online public opinion in a timely and effective manner, and it can also timely control negative information when generating online public opinion to prevent further expansion of online public opinion [12][13][14].Therefore, the monitoring, analysis, and reasonable guidance of public opinion on the microblog network is of great significance and can create value for the country, collective, or individual life [15][16][17]. As a relatively large open-source online social media platform, microblog is rich in content and large in data.It is difficult to conduct sentiment orientation statistics by artificial means.It is necessary to use the method of sentiment analysis to explore the sentiment orientation of its content [18].Text sentiment analysis can excavate the opinions or evaluation information contained in the subjective text with subjective sentiment features or with commendatory and derogatory tendencies.By analyzing the content of the text, we can predict the sentiment tendency contained in the text and express it in a more intuitive way.Therefore, a fast and effective sentiment analysis of massive and unstructured microblog text data is a hot topic of current research [19]. The current methods exhibit drawbacks such as inadequate extraction of semantic information, insensitivity to multi-sense words, and overly simplistic model structures that fail to account for generalization.In response, this study proposes a microblog sentiment analysis method using the Bidirectional Encoder Representation from Transformers (BERT)-Text [20] Convolutional Neural Network (TextCNN) [21]-Bidirectional Gate Recurrent Unit (BiGRU)-Multihead-Attention (BTCBMA) model within the Spark big data environment.Multi-headed attention (M-HA) [22], BERT, TextCNN, and BiGRU [23] are its primary components.The following are the four primary contributions of this article: (1) For feature extraction, the BERT pre-trained language model is employed to map each sentence to an appropriate dimension and generate dynamic character-level word vectors that are characterized by high dimensionality and abundant semantic information.(2) By convolution, the TextCNN model extracts the local features of the text.The maximum pooling layer is added to derive the significant features in the text.(3) Using the BiGRU model, we use multiple clause rule recognition to solve the sentiment word disambiguation problem, which achieves more bidirectional acquisition of contextual feature information and greatly reduces the computational complexity.(4) In order to enhance the accuracy of sentiment analysis and capture the key information in sentences, M-HA is used to integrate multiple single-headed attention. 
The subsequent sections of this article are structured as follows: initially, the second section examined the pertinent research on deep learning as it relates to emotional analysis.Then, the recommended BTCBMA was introduced methodically in Part 3. A thorough experimental comparison was performed on the suggested model in Part 4. Part 5 concludes with a summary of the article and a discussion of potential future research. Related works At present, many relevant personnel have carried out corresponding work on the analysis and research of text sentiment [24].Dashtipour et al. [25] suggested an automatic feature engineering approach that leverages deep learning and combines long short-term memory (LSTM) and convolutional neural networks (CNN) models to classify the sentiment of Persian movie reviews.The suggested approach uses a composite model that combines CNN and LSTM.The outcomes of the simulation demonstrate that the suggested approach significantly enhances the precision of sentiment classification.Nevertheless, the approach lacks comprehensiveness with regard to the extraction of features for sentiment classification.Wang et al. [26] introduced a deep learning approach for sentiment-based sentiment classification.This approach employs weakly labeled data for model training, thereby mitigating the detrimental effects of noisy samples in the weakly labeled data and enhancing the overall performance of the sentiment classification model.The experimental findings demonstrate that the suggested approach exhibits superior classification performance in the sentiment classification task of online hotel reviews compared to the conventional depth model, all while maintaining the same labor cost.Nevertheless, the pace at which the approach converges models must be enhanced.Balakrishnan et al. [27] suggested a deep learning model with sentiment embedding for dynamic analysis of cancer patients' sentiment in online health communities, using a bi-directional LSTM (BiLSTM) model for sentiment dynamic analysis of user posts to measure changes in user satisfaction.The efficacy of the method is demonstrated through experimental results in comparison with other established methods.Nevertheless, the approach is deficient in its capacity to capture contextual semantic information.By combining the outputs of CNN, LSTM, BiLSTM, and Gated Recurrent Unit (GRU) models through stacked integration with logistic regression serving as the meta-learner, Mohammadi and Shaverizade [28] suggested a new method to aspect-based sentiment analysis using deep integration learning.In comparison with fundamental deep learning approaches, this approach enhances the accuracy of aspect-based predictions by 5-20%.However, this approach is too redundant.Cheng et al. 
[29] suggested a polymorphism-based CNN model.The CNN input matrix is generated by the model through the combination of word vector information, word sentiment information, and word location information.Throughout the training process, the model modifies the weight control to modify the significance of various feature information.Using a multi-objective sample dataset, the efficiency of the suggested model in the sentiment analysis assignment of relevant objects is evaluated in terms of classification effect and training performance.Nevertheless, this methodology is incapable of comprehensively capturing and employing contextual data for sentiment analysis.Elfaik and Nfaoui [30] examined an effective BiLSTM that encapsulated contextual information of Arabic feature sequences forward backward, which improved the results of Arabic sentiment analysis.The experimental outcomes derived from six benchmark datasets for sentiment analysis demonstrate that the suggested method outperforms both baseline traditional machine learning methods and state-of-the-art deep learning models.Nonetheless, this method fails to adequately represent the local features of the text. In summary, although certain improvements are obtained in some tasks in the aforementioned literature, there are still some limitations, which include low prediction accuracy, inadequate extraction of semantic information, inability to effectively handle multiple meaning words, and overly simple model structure lacking generalization.Consequently, a microblog sentiment analysis method using the BTCBMA model in Spark big data environment is proposed in this article.In order to assess the efficacy of the proposed method, we employed a publicly available benchmark dataset and obtained significant performance in sentiment analysis tasks in the microblogging domain, which effectively solved the problems in the aforementioned literature. Method First, the sentiment of microblog text is divided into six categories: positive, angry, sad, scared, surprised, and heartless.According to the features of short text, a fine-grained sentiment analysis BTCBMA model based on TextCNN is proposed.The BTCBMA model is summarized as multiple embedded vectors.It uses emoticons and sentiment labels of text to transform into vectors to make sentiment expression stronger and realizes the full use of the relationship between text and labels.BiGRU network is combined with the M-HA mechanism to obtain more comprehensive and deep sentiment features.The last part is the output module.The structure of the BTCBMA model is depicted in Figure 1. The BTCBMA is mainly divided into five parts: (1) Input layer.It is used to process the input text; convert the word vector, emoticon, and sentiment label into vector matrix through Word2vec model; and input them to the subsequent network layer.(5) Output layer.After splicing the results of the attention mechanism, the resulting matrix is input into the fully connected network, and finally, the sentiment classification results are output through the activation function. 
BERT pre-trained language models BERT is a dynamic pre-training model. Its fundamental component is the Transformer encoder, as illustrated in Figure 2 [21]. The structure of the BERT model mainly includes three layers: (1) Embedding (word vector coding) layer. Unlike previous word vector coding, the BERT model generates three types of embedding for each token of the input text: the token embedding carrying the meaning of the word itself, the segment embedding carrying sentence information and the order between sentences, and the position embedding carrying the order of the words within the sentence. Each token is represented by the sum of the three embeddings. In addition, a [CLS] mark is added at the beginning of each sentence, and a [SEP] mark is set between sentences to distinguish them. (2) Pre-training layer. Two tasks are defined. The first is an integrated two-way language model, called the Masked Language Model (MLM) task in BERT. This method can simultaneously train two Transformer directions, right-to-left and left-to-right. It can extract more comprehensive context features from the text, generate a more robust semantic representation, and so implement an integrated two-way language model. The other is the Next Sentence Prediction (NSP) task, which judges the correlation between sentence pairs when sentence pairs are the input of downstream tasks. The basic idea is to select sentence pairs; for example, a pair contains sentence A and sentence B, and the model judges whether the next sentence of sentence A is sentence B or another sentence in the corpus. This is a two-class problem. The key to this task is the [CLS] mark added in front of each sentence: after training, the relevant information is integrated into the vector at that position, which can represent the whole sentence, so the next sentence can be predicted from the previous one. TextCNN model CNN consists of three primary layers: convolution, pooling, and full connection. The convolution kernel in the core convolution layer extracts features. The pooling layer reduces the dimension of the features in order to mitigate overfitting and simplify the computations required by the Softmax classifier. As a result, CNN possesses weight sharing, dimensionality reduction, local feature extraction, and a multi-level structure, and it focuses on capturing local features. Some scholars have improved CNN and proposed TextCNN for sentiment analysis. The TextCNN model has a simpler structure and can be better applied to the field of text sentiment analysis; it is shown in Figure 3 [22].
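To illustrate how a TextCNN block of this kind operates on BERT's dynamic character-level vectors, a minimal PyTorch sketch follows. The hidden size, kernel widths, and filter count are illustrative assumptions rather than the configuration reported in this article (the article later tunes the number of convolution kernels to 128, which the sketch adopts for concreteness).

```python
# Minimal sketch of a TextCNN block applied to BERT token embeddings (PyTorch).
# Hidden size, kernel widths, and filter count are illustrative assumptions.
import torch
import torch.nn as nn

class TextCNNOverBERT(nn.Module):
    def __init__(self, hidden_size=768, num_filters=128, kernel_sizes=(3, 4, 5)):
        super().__init__()
        # One 1-D convolution per kernel width; each slides over the token axis.
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden_size, num_filters, k) for k in kernel_sizes]
        )

    def forward(self, bert_output):
        # bert_output: (batch, seq_len, hidden_size) from a frozen or fine-tuned BERT
        x = bert_output.transpose(1, 2)          # -> (batch, hidden, seq_len)
        features = []
        for conv in self.convs:
            c = torch.relu(conv(x))              # local n-gram features
            p = torch.max(c, dim=2).values       # max pooling keeps the strongest feature
            features.append(p)
        return torch.cat(features, dim=1)        # (batch, num_filters * len(kernel_sizes))

# Example usage with a dummy BERT output: batch of 2 sentences, 128 tokens each
dummy = torch.randn(2, 128, 768)
print(TextCNNOverBERT()(dummy).shape)            # torch.Size([2, 384])
```

The pooled vector can then be passed on to the recurrent and attention layers described next.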
BiGRU model Neural networks are frequently employed in sentiment analysis tasks to extract additional text features. To classify the sentiment of microblog text, the overall semantics of the text must be taken into account. RNN, LSTM, and other models are typically employed to acquire additional contextual feature information. While LSTM and GRU share comparable architectures and performance, the GRU network has lower computational complexity. Because text-related sentiment analysis tasks are generally computationally intensive, GRU is more suitable than LSTM for sentiment analysis of microblogs. Based on the semantic expression features of Chinese, the meaning of a character or word is usually related not only to the preceding text but also to the following text, so the BiGRU network structure is chosen here to extract features. BiGRU consists of a forward and a reverse GRU and extracts the underlying sentiment features of the text from its context. The internal architecture is illustrated in Figure 4 [23]. In Figure 4, the inputs are the word vectors generated by the vector representation model, N is the sentence length, and the BiGRU outputs are obtained by splicing the output of the forward GRU with the output of the reverse GRU. BiGRU inherits the simple and fast training of GRU and can use context to eliminate ambiguity and read the entire text more accurately. M-HA model The attention mechanism operates on the principle of mapping a Query onto Key-Value sets; its output is a weighted sum of the Values, with weights derived from the Query-Key pairs. M-HA does not calculate attention only once but performs multiple Scaled Dot-Product Attention (SD-PA) computations in parallel. Each head learns a feature representation in its own representation space, and all outputs are spliced after being computed independently. Because each representation subspace emphasizes different aspects, the extracted features differ, which gives M-HA a more powerful feature representation. SD-PA scales the result in order to prevent the gradient from vanishing due to excessively large inner products. The structure of M-HA is depicted in Figure 5 [20]. The calculation process of M-HA is as follows: (1) the Query, Key, and Value are first mapped linearly in different ways; (2) SD-PA is performed on the m different linear maps; (3) the outputs of step (2) are spliced and passed through a linear mapping layer to obtain the result. M-HA can be written as MultiHead(Q, K, V) = Concat(head_1, ..., head_m) W^O, where head_i = Attention(Q W_i^Q, K W_i^K, V W_i^V) and Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V. Maxpooling layer The feature map resulting from the convolution operation has a substantial dimension. Incorporating a pooling layer reduces the dimension of the parameter matrix, which effectively prevents overfitting while retaining the required features. The maximum pooling method and the average pooling method use, respectively, the maximum value and the average value in the pooling window; their principles are illustrated in Figure 6. After the processed features are transferred from the pooling layer to the fully connected layer, Softmax is applied to the feature vectors of the fully connected layer for classification. Average pooling easily blurs text information, whereas maximum pooling can inexpensively substitute for a convolution layer, and its translation invariance is another reason why it is so useful in CNN. Translation invariance means that the position of the object is inconsequential; it will be identified regardless. Incorporating translation invariance significantly enhances the model's predictive capability, as it becomes unnecessary to provide information about the precise position of the object. Consequently, the maximum pooling operation is chosen here.
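To make steps (1)-(3) of the M-HA computation concrete, here is a minimal PyTorch sketch of multi-head scaled dot-product attention. The model width and head count are illustrative assumptions, not the settings used in this article.

```python
# Minimal sketch of multi-head scaled dot-product attention (PyTorch).
# d_model and num_heads are illustrative assumptions.
import math
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model=256, num_heads=8):
        super().__init__()
        assert d_model % num_heads == 0
        self.h, self.d_k = num_heads, d_model // num_heads
        # Step 1: separate linear maps for Query, Key, and Value
        self.w_q, self.w_k, self.w_v = (nn.Linear(d_model, d_model) for _ in range(3))
        self.w_o = nn.Linear(d_model, d_model)   # Step 3: linear map after concatenation

    def forward(self, x):
        b, n, _ = x.shape
        def split(t):                             # (b, n, d_model) -> (b, h, n, d_k)
            return t.view(b, n, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(x)), split(self.w_k(x)), split(self.w_v(x))
        # Step 2: scaled dot-product attention per head; scaling by sqrt(d_k)
        # keeps the inner products from growing too large.
        scores = torch.softmax(q @ k.transpose(-2, -1) / math.sqrt(self.d_k), dim=-1)
        heads = (scores @ v).transpose(1, 2).reshape(b, n, -1)   # concatenate the heads
        return self.w_o(heads)

out = MultiHeadAttention()(torch.randn(2, 50, 256))
print(out.shape)  # torch.Size([2, 50, 256])
```

In the BTCBMA pipeline, such a layer would sit on top of the BiGRU outputs to reweight the contextual features before the fully connected output layer.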
Experiment and analysis 4.1 Experimental environment A computer provided by the laboratory is used as the experimental platform. The hardware configuration and system environment of the platform are detailed in Tables 1 and 2, respectively. Dataset The experimental data used in this study is the Weibo negative sentiment dataset, a newly self-labeled collection. The dataset comprises 15 sentiment labels, each containing 20,000 data points, for a total of 300,000 items. In light of the analysis of conventional textual multi-sentiment classification tasks, comparative experiments are performed on various dataset division ratios. The dataset is partitioned into a training set, validation set, and test set in the proportion 6:2:2. In order to verify the validity of the proposed model, we also use a publicly available benchmark dataset from the following source: https://github.com/SophonPlus/ChineseNlpCorpus/blob/master/datasets/simplifyweibo_4_moods/intro.ipynb. The simplifyweibo_4_moods data consists of more than 360,000 sentiment-labeled Sina Weibo messages covering four sentiments: joy, anger, disgust, and depression, with about 200,000 joy messages and 50,000 each of anger, disgust, and depression. The benchmark dataset is likewise split into training, validation, and test sets in the ratio 6:2:2. Evaluating indicator Here, accuracy (A) and the F1 value are chosen as the evaluation indicators of sentiment classification. By dividing the sentiment classification results into positive, neutral, and negative, the sentiment classification task becomes a three-category multi-class task. It can therefore be regarded as three dichotomous tasks, i.e., each category in turn is regarded as "positive" and the other two categories as "negative," and A is then calculated for each of the three categories. The accuracy of each category is computed as A = (TP + TN) / (TP + TN + FP + FN), (2) where TP is the number of samples of the current category correctly classified into that category, TN is the number of samples of the other categories correctly classified as not belonging to it, FP is the number of samples of the other categories wrongly classified into the current category, and FN is the number of samples of the current category wrongly classified into the other categories. In multi-class tasks, F1 can be calculated in two ways: Micro-F1 and Macro-F1. Micro-F1 is suitable for unbalanced data, and Macro-F1 is more suitable for general multi-class tasks. The sentiment classification task here has three categories. After preprocessing, the proportions of the three sentiment tendencies are positive:neutral:negative = 28.41%:50.32%:21.27%, about 1:2:1, so the data are not unbalanced. Therefore, Macro-F1 is selected as the evaluation index.
Macro-F1 divides the comment text of an N-class classification into N two-class problems and calculates the F1 of each two-class problem; after the N F1 values are obtained, they are averaged. The specific computation is Macro-F1 = (1/N) Σ_{k=1}^{N} 2 P_k R_k / (P_k + R_k), where P_k is the precision of category k, calculated for each category as P_k = TP_k / (TP_k + FP_k), and R_k is the recall rate, calculated for each category as R_k = TP_k / (TP_k + FN_k). Model training In order to evaluate the efficacy of the model during the experiment, the accuracy on the test set and the Macro-F1 were chosen. The loss value of the proposed BTCBMA model on the dataset, used for both training and sentiment classification, is illustrated in Figure 7. The results in Figure 7 show that the proposed model gradually converges after training for 33 epochs and converges completely after 100 epochs. The following is an analysis of the impact of the convolution kernels. During the experiment, all other parameters are kept consistent while only the number of convolution kernels is changed. Generally, when the number is a power of two, the GPU parallel computing space can be fully used. By varying it from 16 to 256, we can observe the most suitable number. The Macro-F1 values observed are shown in Figure 8. In Figure 8, as the number of convolution kernels increases from 16 to 128, Macro-F1 keeps rising while the rate of increase gradually decreases. When the number of convolution kernels increases further, Macro-F1 starts to decline; i.e., Macro-F1 is highest and the model performs at its peak with 128 convolution kernels. Therefore, the number of convolution kernels is set to 128 in the subsequent experiments. In the BTCBMA model, each epoch contains all the operations required for training. Theoretically, the more training rounds, the higher the accuracy of the model. However, once the number of training rounds exceeds a certain limit, over-fitting greatly reduces the accuracy and may also seriously affect computational efficiency. We therefore adjust the number of epochs to verify whether a reasonable number of training rounds has been used. The experimental results of accuracy with various numbers of epochs are depicted in Figure 9. In Figure 9, the accuracy decreases when the epoch count increases from 5 to 10; after 10 rounds it rises rapidly, begins to stabilize between 25 and 30 epochs, and peaks at 33 epochs. The epoch value of the model is therefore set to 33 in the experiments. In order to study the value of Dropout, comparative experiments were conducted with different Dropout values to observe the effect on the model. The experimental results of the Macro-F1 value versus the Dropout value are shown in Figure 10. The Macro-F1 value rises as the Dropout value increases from 0 to 0.2, as depicted in Figure 10, but begins to decline once Dropout exceeds 0.2. Therefore, 0.2 is chosen as the Dropout value for this experiment. Based on these hyperparameter experiments, the proposed BTCBMA model uses 128 convolutional kernels, 33 epochs, and a Dropout value of 0.2.
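Before turning to the comparative results, the per-class accuracy and Macro-F1 defined above can be computed with a short sketch like the following, treating the three-way task as three one-vs-rest binary problems. The label names and toy predictions are illustrative only.

```python
# Sketch of per-class accuracy and Macro-F1 for a three-class task,
# following the one-vs-rest decomposition described above.
def per_class_counts(y_true, y_pred, label):
    tp = sum(t == label and p == label for t, p in zip(y_true, y_pred))
    tn = sum(t != label and p != label for t, p in zip(y_true, y_pred))
    fp = sum(t != label and p == label for t, p in zip(y_true, y_pred))
    fn = sum(t == label and p != label for t, p in zip(y_true, y_pred))
    return tp, tn, fp, fn

def per_class_accuracy(y_true, y_pred, label):
    tp, tn, fp, fn = per_class_counts(y_true, y_pred, label)
    return (tp + tn) / (tp + tn + fp + fn)       # equation (2)

def macro_f1(y_true, y_pred, labels=("positive", "neutral", "negative")):
    f1s = []
    for label in labels:
        tp, _, fp, fn = per_class_counts(y_true, y_pred, label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall) if precision + recall else 0.0)
    return sum(f1s) / len(f1s)                   # average F1 over the N binary splits

y_true = ["positive", "neutral", "negative", "neutral"]
y_pred = ["positive", "neutral", "neutral", "negative"]
print(per_class_accuracy(y_true, y_pred, "positive"))   # 1.0
print(macro_f1(y_true, y_pred))                          # 0.5
```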
Comparative analysis First, in the word vector coding layer of the model, Word2Vec and BERT are used to encode the word vector input. The experimental results of using these two methods as pre-training models are presented in Table 3. In Table 3, when Word2Vec is used as the word vector encoder, the model with the best sentiment classification effect is Word2Vec + TextCNN + Attention, with an accuracy of 83.49% and a Macro-F1 of 0.8224. When BERT is used as the word vector encoder, the best sentiment classification model is the proposed BTCBMA model, whose accuracy is 93.45% and Macro-F1 is 0.9246, which are 9.96% and 0.1022 higher than Word2Vec + TextCNN + Attention, respectively. The superior capability of the BERT pre-training model to extract features from the input text is evident. BERT not only obtains more comprehensive information about the meaning of each word, between words and between sentences, but also dynamically adjusts the word vector with contextual information to obtain a more comprehensive word vector representation. To verify the effectiveness of the microblog sentiment analysis method using BTCBMA in the Spark big data environment, two comparative experiments are designed, and the method is compared and analyzed against those in the literature [28-30]. The sentiment classification accuracy and Macro-F1 obtained with the different methods on the different datasets are shown in Table 4 and Figure 11. Table 4: Sentiment classification results of different methods with different datasets. Self-built dataset: BTCBMA (ours) A = 93.45%, Macro-F1 = 0.9246; Ref. [28] 88.46%, 0.8746; Ref. [29] 82.31%, 0.8132; Ref. [30] 84.65%, 0.8359. Benchmark dataset: BTCBMA (ours) A = 95.34%, Macro-F1 = 0.9435; Ref. [28] 91.74%, 0.9046; Ref. [29] 90.65%, 0.8925; Ref. [30] 88.15%, 0.8776. In Figure 11, the bar graph represents accuracy and the line graph represents the Macro-F1 values. In both Table 4 and Figure 11, the proposed sentiment analysis method outperforms the other methods on both datasets. On the self-constructed dataset, accuracy and Macro-F1 reach 93.45% and 0.9246, respectively, an improvement of at least 4.99% and 0.05 over the other existing methods; on the benchmark dataset, accuracy and Macro-F1 reach 95.34% and 0.9435, respectively, improvements of at least 3.6% and 0.0389. The BTCBMA adopted in this study integrates BERT, TextCNN, BiGRU, and M-HA. Compared with the methods of Refs [28-30], it can extract semantic information from comment data more comprehensively. Using BERT remedies the inability of the comparison methods to resolve word polysemy. Using TextCNN with words as the minimum granularity makes it possible to extract local features in text data more effectively than with CNN. Additionally, the issue of BiLSTM operating too slowly is resolved by employing BiGRU, which not only allows the model to fully learn contextual semantic connections in the text data but also enhances its execution speed. Finally, M-HA is employed to allocate weights to distinct features, emphasizing critical features while disregarding irrelevant feature data. Together, these components enhance the precision of sentiment analysis of Weibo comments. In order to further explain the importance and difference of each part of the model, an ablation experiment is carried out for the proposed BTCBMA model. Simulations are carried out for the BERT-TextCNN (BTC), BERT-TextCNN-BiGRU (BTCB), BERT-TextCNN-Multihead-Attention (BTCMA), and BTCBMA models, respectively. The ablation results of the proposed BTCBMA are listed in Table 5.
The results presented in Table 5 demonstrate that the accuracy and Macro-F1 values of the single BTC model are the lowest, while the performance of the BTCMA and BTCB models surpasses that of BTC. These results suggest that introducing either BiGRU or M-HA can significantly enhance the accuracy of sentiment classification and the overall performance of the model. The results also indicate that the performance of the BTCMA model is marginally superior to that of the BTCB model, suggesting that the enhancement introduced by M-HA is slightly more substantial than that of BiGRU. When both are incorporated into the model simultaneously, the accuracy and Macro-F1 values of the BTCBMA model are the highest. This indicates that the model is the most effective at sentiment classification and is capable of harnessing the benefits of both BiGRU and M-HA to significantly enhance performance. Conclusions In view of the inaccuracy of current sentiment analysis methods for microblog texts, which cannot classify sentiment accurately, a microblog sentiment analysis method based on the BTCBMA model in a Spark big data environment is proposed. Compared with existing methods, it can extract semantic information from comment data more comprehensively. Using BERT remedies the inability of the comparison methods to resolve word polysemy. Using TextCNN with words as the minimum granularity extracts local features in text data better than CNN, and using BiGRU circumvents the issue that BiLSTM operates too slowly. In addition to acquiring a comprehensive understanding of the contextual semantic relationships within the textual data, this also enhances the model's execution speed. Finally, M-HA is employed to allocate weights to distinct features, emphasizing critical features while disregarding irrelevant feature data. These measures successfully enhance the accuracy of sentiment analysis of Weibo comments. There are still areas in which the research presented in this study could be improved. For instance, the BERT pre-training model used here is excessively large and its efficiency needs to be enhanced. In addition, the sentiment classification in this study focuses on a three-class task with coarse classification granularity, using the conventional "negative," "positive," and "neutral" labels; however, the range of emotions is rich and includes more detailed and diverse emotions under the "positive" and "negative" labels, such as "happy," "excited," "angry," and "sad." Future work will investigate how the coarse "negative," "positive," and "neutral" labels can be combined with fine-grained emotion labels such as "happy," "excited," "angry," and "sad." Subsequent research will concentrate on extending the proposed method to multi-category and fine-grained classification tasks, with the aim of more accurately representing the sentiments and inclinations of Internet users during sudden or trending events.
(2) TextCNN layer. The local information of the text is captured through a multi-channel CNN network, and more comprehensive features are captured through the different vectors of the three channels. (3) BiGRU layer. Captures the context information of the text. (4) M-HA layer. As a supplement to the BiGRU layer, it can fully obtain the global features in the text. (3) Fine-tuning layer (of BERT). The sentiment classification task in this article classifies a single sentence, so only a simple transformation of the output of the last Transformer layer of BERT is needed: a fully connected layer and a sigmoid or Softmax function are connected to the last-layer Transformer output corresponding to the [CLS] position of the starting symbol. Figure 7: Training loss of the BTCBMA model. Figure 9: A at different numbers of epochs. Figure 11: Sentiment classification results of different methods: (a) self-built dataset and (b) benchmark dataset. Table 1: Hardware configuration of the experimental platform. Table 2: Experimental system environment. Table 3: Comparison of experimental results with different pre-training models. Table 4: Sentiment classification results of different methods with different datasets.
6,911.8
2024-01-01T00:00:00.000
[ "Computer Science" ]
Double Allocation of Spreading Code Minimizing SI and PAPR for LP-OFDM UWB System Self-interference (SI) occurs when the transmission passes through a multipath channel: the channel distortions break the orthogonality between the spreading codes. In addition, the amplitude of linear precoded orthogonal frequency division multiplexing (LP-OFDM) signals exhibits strong fluctuations. Thus, it is necessary to mitigate these two problems by reducing the SI term and the amplitude fluctuations measured by the Peak-to-Average Power Ratio (PAPR). In this paper, a new method of spreading code allocation is proposed. It consists of a double selection of spreading codes minimizing jointly the SI and the PAPR. Simulation results show that the LP-OFDM system is optimized by this proposal compared to the conventional solution when the system does not operate at full load. Introduction To ensure a satisfactory quality of service, future communication systems require high spectral efficiency and flexibility. In 1993, several studies showed that combining spread spectrum techniques with multicarrier modulation is a solution which meets these criteria [1]. Multicarrier transmission is a solution widely exploited in communication networks, whether local, cellular, wireline, embedded, or television [2]. Resource allocation is a fundamental aspect in the design of multicarrier systems. With the development of powerline communications, this theme of resource allocation in orthogonal frequency division multiplexing (OFDM) systems remains relevant [3] for access networks, home networks, and embedded networks, because it can maximize either the throughput or the robustness of the system [4]. LP-OFDM systems can be described as multicarrier systems that apply a linear precoding to the data to be transmitted before modulation. LP-OFDM is based on conventional OFDM associated with a linear precoding component. The main aim of this system is to obtain the most flexible multicarrier system, with fewer limitations and better overall system performance, without raising its complexity [4]. In mobile radio, and especially in LP-OFDM systems, this precoding provides additional flexibility, simplifies (or even makes possible) the separation of received signals, and improves the performance of communication systems. One of the advantages of this precoding is to allow the operation of subcarriers with a low signal-to-noise ratio (SNR) [5]. In this paper, a new technique for dynamic resource allocation for LP-OFDM applications is proposed. The exploitation of dynamic allocation for ultra wideband (UWB) systems is a real advantage because the UWB indoor channel varies slowly over time, which reduces the system complexity.
Indeed, the proposed solution deals with the selection of the spreading sequences applied to LP-OFDM signals, based on several criteria such as correlation functions. The aim is to select the optimum spreading codes according to these criteria by minimizing jointly the SI and the PAPR. Furthermore, this optimized allocation procedure can significantly improve the mean BER performance of the system. This paper is organized as follows. In Section 2, the considered LP-OFDM system is described. Section 3 presents the different selection criteria. In Section 4, an allocation algorithm is proposed for LP-OFDM systems to reduce jointly the SI and the PAPR. In Section 5, simulation results are discussed and presented in terms of bit error rate. Conclusions are drawn in Section 6. System Description The insertion of a linear precoding function in the conventional OFDM system leads to an LP-OFDM system. In this particular form, LP-OFDM may also be referred to as spread spectrum multicarrier multiple access (SS-MC-MA) in mobile radio communications [6]. The system complexity is not increased significantly by the addition of the precoding function. The linear precoding component may be exploited to decrease the PAPR of OFDM systems [7]. It makes the communication more robust against channel selectivity, provides natural system robustness against narrowband interferers, and offers a finer granularity in the selection of transmission rates. In the studied LP-OFDM system, each sub-band occupied by a piconet is divided into several sub-blocks, each containing a number of subcarriers equal to the length L of the spreading codes (Figure 1). Expressions of LP-OFDM Signals The linear precoding operation is performed before the OFDM modulation. It can be written as S = F_N^(-1) D C X, where S is the matrix of Ns OFDM symbols, each composed of N time samples, X is the matrix of BICM symbols to be transmitted, C is the linear precoding matrix, D is a distribution matrix used to allocate the data on the frequency grid, called the chip mapping matrix, and F_N is the Fourier matrix. The precoding can be decomposed per sub-block, where B is the number of sub-blocks of subcarriers in the sub-band, with B × L = N; C_b is the matrix containing the N_L precoding sequences of sub-block b (b ∈ [1 … B]), and X_b is the matrix of N_L vectors of Ns complex symbols transmitted in sub-block b. Choice of Linear Precoding Matrix The Walsh-Hadamard codes are generated from the Sylvester-Hadamard transform matrix. They correspond to the rows or columns of the orthogonal (L × L) matrix constructed recursively as H_1 = [1] and H_2L = [[H_L, H_L], [H_L, -H_L]]. These codes are used in synchronous systems such as Multi-Carrier Code Division Multiple Access (MC-CDMA) or LP-OFDM because they are easy to generate, so they are used here in the precoding matrix of the LP-OFDM system. The precoding matrix C_b corresponds to N_L columns of H_L (N_L <= L). Self-interference The choice of orthogonal codes for the LP-OFDM system overcomes the SI. This is true if the data transmission is carried out over a Gaussian channel, but it is no longer the case if the transmission is carried out through a multipath channel. In that case, the orthogonality between the spreading codes is broken by the distortions introduced by the multipath channel. It is then necessary to try to minimize this SI term at reception by implementing single-user detection techniques.
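As an illustration of the Sylvester construction just described, the following sketch builds H_L and checks the orthogonality that the precoding relies on. The size L = 16 matches the code length used later in the simulations, but the code itself is a generic sketch, not the authors' implementation.

```python
# Minimal sketch of the recursive Sylvester construction of the Walsh-Hadamard
# matrix H_L used as the precoding matrix; L must be a power of two.
import numpy as np

def hadamard(L):
    H = np.array([[1]])
    while H.shape[0] < L:
        H = np.block([[H, H], [H, -H]])   # H_{2n} = [[H_n, H_n], [H_n, -H_n]]
    return H

H16 = hadamard(16)
# Orthogonality check: H * H^T = L * I, so any N_L <= L columns give mutually
# orthogonal spreading codes for the L subcarriers of one sub-block.
print(np.allclose(H16 @ H16.T, 16 * np.eye(16)))   # True
```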
The PAPR and the crest factor (CF) A multicarrier technique is a suitable solution for the transmission of signals over multipath channels. However, the amplitude of the LP-OFDM signal exhibits strong fluctuations. The PAPR estimates the ratio between the peak power and the average power of the generated signal. The amplitude of the fluctuations is evaluated by the CF, defined as the ratio of the maximum amplitude of the signal to its root-mean-square value, i.e., CF = sqrt(PAPR). In practice, it is desired to transmit a signal with maximum output power without the distortion that leads to a degradation of system performance. The technique of selecting sequences with a low crest factor can remedy this problem; hence the interest in searching, for OFDM systems, for spreading codes with a low PAPR value. Minimizing SI Let y_{n,j} denote the estimate of the nth complex symbol x_{n,j} of user j. Setting R(k - l) = E[h_k g_k h_l g_l], the power of the SI associated with complex symbol n can be written as a function of the terms w_l(n, p) = c_{l,n} c_{l,p}, defined as the chip-by-chip product of the spreading codes allocated to complex symbols n and p at the lth chip. The power of the SI is influenced by the spreading codes and by the equalization technique used. At full load (N_L = L), the terms β(i) are all negative and identical for each complex symbol. When N_L < L, the terms β(i) differ depending on the spreading codes used: some complex symbols are then favoured over others, for which the SI power is much larger. Therefore, minimizing the negative term β(1) can reduce the power of the SI. This reduction is achieved by judiciously selecting the N_L spreading codes to use. The use of Walsh-Hadamard codes implies that Nc = L. Let Ω denote a spreading code family composed of Nc codes, and Ω_NL a subset of Ω containing N_L codes with N_L < Nc. J_ΩNL is defined as the function taking into account the maximal degradation produced by two spreading sequences, i.e., the maximum over the subset of the interference I(n, p) produced by sequence p on sequence n. The interference term I(n, p) is defined from W(n, p), a vector of L elements W_l(n, p) = c_{l,n} c_{l,p} (l ∈ [1, …, L]) resulting from the chip-by-chip product of the n and p spreading sequences, and from T(v), the number of transitions between the elements of the vector v. Thus, minimizing J_ΩNL preserves a subset of N_L sequences for which the various vectors W(n, p) present a maximum number of transitions. The optimum subset of spreading sequences is the one minimizing J_ΩNL over all candidate subsets. This method therefore makes it possible to select the spreading codes so as to obtain an optimal LP-OFDM system. Depending on the family of spreading codes used, the selection process may lead to several optimal subsets having equal J_ΩNL values. Under these conditions, it is possible to choose one of the optimal subsets arbitrarily or to use a complementary selection criterion. Minimizing Joint SI and PAPR The algorithm for joint minimization of the SI power and the PAPR (or crest factor) proceeds in the following steps: • Step 1: Select the best complementary selection criterion, either the standard deviation (STD) or the second-order criterion. • Step 2: Apply the chosen complementary selection criterion; this may still yield several optimal subsets. • Step 3: Compute the PAPR for each subset. • Step 4: Select the optimal subset having the minimum PAPR. Simulation Results The performance is given for the LP-OFDM simulation chain with a coding rate R = 1/2 and minimum mean square error (MMSE) detection on the CM1 channel.
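Before discussing the simulation results, here is a minimal numerical sketch of the two quantities that drive the double selection described above: the transition count T(v) of the chip-by-chip product vectors W(n, p) and the PAPR of a candidate signal. The Hadamard matrix, the code subset, and the random symbols are illustrative; the exact interference function I(n, p) and scaling used by the authors are not reproduced here.

```python
# Sketch of the transition count T(v) over chip-by-chip products W(n, p) and
# of the PAPR of an OFDM-like time signal; values are illustrative only.
import numpy as np

def papr_db(time_signal):
    power = np.abs(time_signal) ** 2
    return 10 * np.log10(power.max() / power.mean())

def transitions(v):
    # T(v): number of sign changes between consecutive chips of v
    return int(np.sum(v[:-1] != v[1:]))

def interference_pairs(H, selected):
    # W(n, p) = c_n * c_p for every pair of selected codes; the selection keeps
    # subsets whose W(n, p) vectors have many transitions (lower SI degradation).
    return {(n, p): transitions(H[:, n] * H[:, p])
            for i, n in enumerate(selected) for p in selected[i + 1:]}

H = np.array([[1, 1, 1, 1], [1, -1, 1, -1], [1, 1, -1, -1], [1, -1, -1, 1]])
print(interference_pairs(H, [0, 1, 2]))      # {(0, 1): 3, (0, 2): 1, (1, 2): 2}
x = np.fft.ifft(np.random.choice([-1, 1], 64))
print(round(papr_db(x), 2))                  # PAPR in dB of one random symbol
```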
Figure 2 shows the performance obtained in the case of a bad allocation and in the case of an optimal allocation of spreading codes. The length of the Walsh-Hadamard codes used is L = 16, and the system is run with the loads N_L = 8 and N_L = 15. For N_L = 8, a bad selection of spreading codes can significantly degrade system performance, so it is important to select the codes correctly for optimal operation of the LP-OFDM system. For N_L = 15, by contrast, this degradation is very low. A good selection of spreading codes brings all the more benefit when the load is low. When N_L increases towards L, the performance difference between a good and a bad selection shrinks, down to the limiting case where N_L = L. In that situation, the complementary selection criterion is used to discriminate between several equally optimal subsets. The performance comparison of these criteria is illustrated in Figure 3 in order to extract the most optimal one. For a BER of 10^-3, the gain is Eb/No = 0.66 dB for the standard deviation and second-order criteria, but it is reduced to Eb/No = 0.33 dB for the mean criterion. According to Figure 3, the two more optimal criteria are therefore the standard deviation criterion and the second-order criterion. This dual minimization is applied to different loads N_L. It shows an improvement in performance compared to the conventional criterion. Indeed, Figure 4 shows an acceptable gain that varies with the load N_L. The best profit is obtained for low loads N_L < L; when N_L increases towards L, the performance difference decreases until the limiting case N_L = L, because the SI tends towards 0. Consequently, the optimal subsets become equal; in this case the application of the PAPR criterion has no influence, and the performance is already at its best for full load. Conclusion In this paper, a novel method has been proposed which consists of a double selection of spreading codes. It jointly reduces the SI power and the peak factor of the signal. This method proves more efficient in terms of performance than the conventional method, which uses three complementary criteria minimizing only the SI power, when the system does not operate at full load.
2,945.6
2016-08-16T00:00:00.000
[ "Engineering" ]
Use of Geochemical Fossils as Indicators of Thermal Maturation: An Example from the Anambra Basin, Southeastern Nigeria : Organic geochemical studies and fossil molecule distribution results have been employed in characterizing subsurface sediments from some sections of the Anambra Basin, southeastern Nigeria. The total organic carbon (TOC) and soluble organic matter (SOM) are in the range of 1.61 to 69.51 wt% and 250.1 to 4095.2 ppm, respectively, implying that the source rocks are moderately to fairly rich in organic matter. Based on the data of the paper, the organic matter is interpreted as Type III (gas prone) with little oil. The geochemical fossils and chemical compositions suggest an immature to marginally mature status for the sediments, with the methyl phenanthrene index (MPI-1) and methyl dibenzothiophene ratio (MDR) showing ranges of 0.14–0.76 and 0.99–4.21, respectively. The abundance of 1,2,5-TMN (trimethylnaphthalene) in the sediments suggests a significant land plant contribution to the organic matter. The pristane/phytane ratio values of 7.2–8.9 also point to terrestrial organic input under oxic conditions. However, the presence of C27 to C29 steranes and diasteranes indicates mixed sources (marine and terrigenous) with prospects to generate both oil and gas. Introduction The Anambra Basin is a Late Cretaceous-Paleocene delta complex located in the southern Benue Trough (Figure 1). It is characterized by enormous lithologic heterogeneity in both lateral and vertical extension, derived from a range of paleoenvironmental settings ranging from Campanian to Recent [1]. The search for commercial crude oil in the Anambra Basin has remained a real source of concern, especially to oil companies and research groups. Initial efforts were unrewarding, and this led to the neglect of this basin in favour of the Niger Delta, where hydrocarbon reserves have been reportedly put at 40 billion barrels of oil and about 170 trillion standard cubic feet of gas [2][3][4]. The Nigerian sedimentary basin was formed after the breakup of the South American and African continents in the Early Cretaceous [5,6]. Various lines of geomorphologic, structural, stratigraphic, and paleontological evidence have been presented to support a rift model [7][8][9][10]. The stratigraphic history of the region is characterised by three sedimentary phases [11], during which the axis of the sedimentary basin shifted. More than 3000 m of rocks, comprising those belonging to the Asu River Group and the Eze-Aku and Awgu Formations, were deposited during the first phase in the Abakaliki-Benue Basin and the Calabar Flank. The succession resulting from the second sedimentary phase comprises the Nkporo Group, Mamu Formation, Ajali Sandstone, Nsukka Formation, Imo Formation, and Ameki Group. The third phase, credited with the formation of the petroliferous Niger Delta, commenced in the Late Eocene as a result of a major earth movement that structurally inverted the Abakaliki region, displacing the depositional axis further to the south of the Anambra Basin [12].
Reports of various authors are valuable in the exploration activities in the Anambra Basin.Avbovbo and Ayoola [13] reviewed exploratory drilling result for the Anambra Basin and proposed that most parts of the basin probably contain gas-condensates due to abnormal geothermal gradient.Agagu and Ekweozor [14] concluded that the senonian shales in the Anambra syncline have good organic matter richness with maturity increasing significantly with depth.Unomah [15] evaluated the quality of organic matter in the Upper Cretaceous shales of the Lower Benue Trough as the basis for the reconstruction of the factors influencing organic sedimentation.He deduced that the organic matter and shales were deposited under a low rate of deposition.Specific references to the organic richness, quality, and thermal maturity in the Mamu Formation and Nkporo shales have been reported by Unomah and Ekweozor [16], Akaegbobi [1], and Ekweozor [17].They reported that the sediments are organic rich but of immature status.Iheanacho [18] investigated aspects of hydrocarbon source potential of the organic rich shales belonging to some parts of the Anambra basin.He indicated the source rocks as shales and coals, which present good prospects in terms of economic viability as typified by the quantity and quality of organic matter they contain. This study thereby aims at producing an extensive molecular fossil record of some parts of Enugu Shale and coal measures of the Mamu Formation. Location of Study Area and Geology The study area is located between latitude 6 ∘ 15 N-6 ∘ 45 N and longitude 7 ∘ 15 E-7 ∘ 30 E and falls within the Anambra Basin (Figure 1).The stratigraphic succession of the Anambra Basin, at the second sedimentary phase, comprises the Campanian-Maastrichtian Enugu/Nkporo/Owelli Formations (which are lateral equivalents).This is succeeded by the Maastrichtian Mamu Formation and Ajali Sandstone.The sequence is capped by the Tertiary Nsukka Formation and Imo Shale.These are discussed below. Nkporo-Enugu Shale Group.These units consist of dark grey fissile, soft shales, and mudstone with occasional thin beds of sandy shale, sandstone, and shelly limestone.A shallow marine shelf environment has been predicted due to the presence of foraminifera Milliamina, plant remains, poorly preserved molluscs, and algal spores [2,19,20].Nyong [21] inferred the Nkporo Shale to have been deposited in a variety of environments including shallow open marine to paralic and continental settings. North of Awgu, the Nkporo Shale shows a well-developed medium to coarse-grained sandstone facies referred to as Owelli Sandstone.The Owelli Sandstone member is about 600 metres thick [19]. Mamu Formation. This formation is also known as "Lower Coal Measures." It contains a distinctive assemblage of sandstone, sandy shale, shale, mudstone, and coal seams [19].Surface sections reveal that the Mamu Formation comprises mainly white, fine-grained and well-sorted sands.There are frequent interbeds of carbonaceous shales with sparse arenaceous microfauna and coal beds [20].The exposed thickness of this Formation ranges from 5 to 15 m.According to Reyment [19], the coals occurring in Enugu area are in five seams ranging from 30 cm to nearly 2 m.The middle seam-the thickest-outcrops along the Enugu Escarpment for 11 km.The coals of Enugu area form only a part of the total coal resources of Nigeria [19]. 
Ajali Sandstone.This is a Maastrichtian sandy unit overlying the Mamu Formation.It consists of white, thick, friable, poorly sorted cross-bedded sands with thin beds of white mudstone near the base [22].Studies have suggested that the Ajali Sandstone is a continental/fluviodeltaic sequence characterised by a regressive phase of a short-lived Maastrichtian transgression with sediments derived from Westerly areas of Abakaliki anticlinorium and the granitic basement units of Adamawa-Oban Massifs [23].The Formation, where exposed, is often overlain by red earth, formed by weathering and ferruginization of the Formation [24].According to Nwajide and Reijers [25], the coal-bearing Mamu Formation, and Ajali Sandstone accumulated during the regressive phase of the Nkporo Group with associated progradation.The authors characterised the Ajali Sandstones as tidal sands. Nsukka Formation. The Nsukka Formation is a Late Maastrichtian unit, lying conformably on the Ajali Sandstone. The unit consists of alternating succession of sandstone, dark shales, and sandy shales with thin coal seams at various horizons, hence termed the "Upper Coal Measures" [22].The Formation begins with coarse to medium-grained sandstones passing upward into well-bedded blue clays, fine-grained sandstones, and carbonaceous shales with thin bands of limestone [12,19].Agagu et al. [20] reported that the Formation has a thickness range of 200-300 m and consists of alternating succession of fine-grained sandstone/siltstones and greydark shale with coal seams at various horizons.A strand plain/marsh environment with occasional fluvial incursions similar to that of the Mamu Formation was inferred for this Formation. 2.5.Imo Shale.The Imo Shale overlies the Nsukka Formation in the Anambra Basin and consists of blue-grey clays and black shales with bands of calcareous sandstone, marl, and limestone [19].Ostracod and foraminifera recovered from the basal limestone unit indicate a Paleocene age for the Formation [26].Lithology and trace fossils of the basal sandstone unit reflect foreshore and shoreface or delta front sedimentation [27].The Imo Formation is the lateral equivalent of the Akata Formation in the subsurface Niger Delta [11].The Formation becomes sandier towards the top where it consists of alternations of sandstone and shale [26].Nwajide and Reijers [25] interpreted the Imo Shale to reflect product of shallow-marine shelf in which foreshore and shoreface are occasionally preserved. Weathering and Contamination of Rock Samples Borehole samples are preferred because they provide a continuity of vertical sections over tens or hundreds of metres. Even some of the best natural outcrops or exposures do not provide this coverage, because beds are weathered away [28]. The weathering of outcrop samples and contamination could give rise to false and pessimistic indications of hydrocarbon potential.Although well samples can be contaminated by drilling fluid additives (diesel contamination, e.g., can be recognised from gas chromatography by the high concentrations of -alkanes up to C 20 ), steranes and triterpenes should be unaffected.Borehole samples were therefore used for this study. 
Analytical Methods Borehole samples from Enugu 1325 and 1331 wells were obtained from Nigerian Geological Survey Agency (NGSA), Kaduna and used in this study.The borehole samples, Enugu 1325, range in depths from 165 to 177 m while Well 1331 range in depths from 219 to 233 m.Enugu well 1325 has a sequence beginning from shale, overlain by siltstone, coal, shale, and siltstone successively (Figure 2).The shales are dark grey and fissile; the siltstone is brown to light grey while the coal is blackish.Enugu well 1331 has a bottom to top sequence which begins from coal, shale, and siltstone successively.In the middle section is a siltstone-shale sequence which is overlain by another coal, shale, and siltstone succession (Figure 3).Thirteen (13) representative core samples made up of four (4) coal samples and nine (9) shale samples were subjected to organic geochemical analysis. Total Organic Carbon (TOC) Determination. Approximately 0.10 g of each pulverized sample was accurately weighed and then treated with concentrated hydrochloric acid (HCl) to remove carbonates.The samples were left in hydrochloric acid for a minimum of two (2) hours.The acid was separated from the sample with a filtration apparatus fitted with a glass microfiber filter.The filter was placed in a LECO crucible and dried at 110 ∘ C for a minimum of one hour.After drying, the sample was analysed with a LECO 600 Carbon Analyzer.The analysis was carried out at the Weatherford Geochemical Laboratory, Texas, USA. Rock Eval Pyrolysis. The thirteen samples were further characterised by rock eval pyrolysis to identify the type and maturity of organic matter and petroleum potential in the studied area.Rock-Eval II Pyroanalyzer was used for this analysis.Pulverised samples were heated in an inert environment to measure the yield of three groups of compounds (S 1 , S 2 , and S 3 ), measured as three peaks on a program.Sample heating at 300 ∘ C for 3 minutes produced the S 1 peak by vapourising the free hydrocarbons.High S 1 values indicate either large amounts of kerogen derived bitumen or the presence of migrated hydrocarbons.The oven temperature was increased by 25 ∘ C per minute to 600 ∘ C. The S 2 and S 3 peaks were measured from the pyrolytic degradation of the kerogen in the sample.The S 2 peak is proportional to the amount of hydrogen-rich kerogen in the rock, and the S 3 peak measures the carbon dioxide released providing an assessment of the oxygen content of the rock.The temperature at which S 2 peak reaches maximum- max -is a measure of the source rock maturity. Determination of Soluble Organic Matter (SOM). The soluble organic matter content of both shale and coal samples was carried out to estimate the free hydrogen content of the samples.This was done using the Soxhlet System HT2 Extraction Unit and Methylene Chloride/Methanol mixture (9 : 1) as the solvent.Each pulverised sample, after been weighed, was placed into labelled cellulose thimbles and plugged with glass wool and adapter.For shale sample, 20 g was taken while 2-4 g was taken for coal.The thimble, extraction cups and 100 mls of methylene chloride : methyl solution were placed inside a tecator system.The solvent was allowed to boil, and then the thimbles were lowered into the solvent and left for an hour.The stop cork was closed for faster evaporation.After evaporation, soluble matter were turned into preweighed, labeled 20 mL glass vials, and dried with nitrogen at 40 ∘ C. The dried extract was weighed at room temperature. 
The soluble organic matter was then calculated; thus, The extraction was carried out at Exxon Mobil Geochemical Laboratory, Que Iboe Terminal (QIT), Eket. Gas Chromatography of Whole Oil.The analyses were carried out in a Hewlett Packard 6890A gas chromatograph, equipped with dual flame ionization detectors.The chromatograph was fitted with HP-1 capillary column (30 m × 0.32 mm I.D × 0.52 microns) using helium as the carrier gas.The column temperature was programmed at 35 ∘ C to 300 ∘ C/min with a flow rate of 1.1 mls/min.The bitumen extract (SOM) was diluted with drops of carbon disulphide while agitating until sample is dissolved.A little volume was placed in a labeled auto-sampler vial which was transferred to the autosampler tray for the analysis to run.1.0 L of the diluted extract was rapidly injected to the gas chromatograph in split mode, using a graduated Hp 10 L injection syringe.This analysis was carried out at the Exxon Mobil Geochemical Laboratory (QIT), Eket, Nigeria. Gas Chromatography Mass Spectrometry. For GC/MS to be carried out on an extract (soluble organic matter), it must be separated into its fractions, that is, saturate, aromatic, asphaltene, and resin.The gravimetric column chromatography method was applied in the separation of extract into saturate, aromatic, resin, and asphaltene fractions (SARA).It is modified from the "SARA" procedure (Exxon Mobil operation manual). The saturate and aromatic fractions recovered from the liquid chromatography were analysed for their biomarker by gas chromatography/mass spectrometry (GC/MS) using the selected ion monitoring mode (SIM).Hexane was added to each sample vial containing the saturates and aromatic fractions to obtain concentrations of 25 g/L and 12.5 g/L, respectively.The samples were mixed with a vortex mixer to agitate and then transferred to an auto-sampler vial and capped.Vials were then placed on the auto-sampler to be run in an HP 6890 gas chromatograph silica capillary column (30 m × 0.25 mm ID, 0.25 m film thickness) coupled with HP 5973 Mass Selective Detector (MSD).The extract was rapidly injected into the gas chromatograph using a 10 L syringe.Helium was used as the carrier gas with oven temperature programmed from 80 ∘ C to 290 ∘ C. The mass spectrometer was operated at electron energy of 70 Ev, an ion source temperature of 250 ∘ C, and separation temperature of 250 ∘ C. The chromatographic data were acquired using Ms Chemstation software, version G1701BA for Microsoft NT.This analysis was carried out at Exxon Mobil Geochemical Laboratory, Eket. 4.6.Aromatic Biomarker Parameters.According to Radke et al., [29], MPI-1 (methyl phenanthrene index), DNR-1 (dimethyl naphthalene ratio), and MDR (methyl dibenzothiopene ratio) can be used as source and maturity parameters.The necessary calculations were made using the results obtained from peak identification and height of aromatic biomarkers of the studied wells (see Table 2). 
Organic Richness According to Conford [30], an adequate amount of organic matter, measured as percentage total organic carbon, is a prerequisite for a sediment to generate oil or gas. Shown in Table 1 are the results of total organic carbon content (TOC). The coal samples from both wells show a higher organic richness than the shales. Nevertheless, both wells have values above the threshold of 0.5 wt% considered the minimum for clastic source rocks to generate petroleum [31]. The soluble organic matter (SOM) of the samples generally exceeds 500 ppm, except for samples P3 (EN 1325) and V5 (EN 1331) with SOM values of 250.1 and 467.8 ppm, respectively. These show that the samples can be classified as fair to excellent source rocks. Based on the quality definition of Baker [32], the organic matter is adequate and indicates good hydrocarbon potential for the studied wells. Organic Matter Type The organic matter type in a sedimentary rock, among other conditions, influences to a large extent the type and quality of hydrocarbon generated, owing to the different convertibilities of the organic matter types [31]. The Hydrogen Index (HI) for the shale and coal samples ranges from 83 to 245 mgHC/gTOC, with an average value of 178 mgHC/gTOC. This can be interpreted as Type III (gas prone). The plot of hydrocarbon potential versus TOC (Figure 4) indicates Type II/III organic matter, which means a potential to generate oil and gas. The majority fall within Type III organic matter, indicating that gas will dominantly be generated, with little oil. Peters [33] suggested that at a thermal maturity equivalent to a vitrinite reflectance of 0.6% (Tmax 435 °C), rocks with HI > 300 mgHC/gTOC produce oil, those with HI between 150 and 300 mgHC/gTOC produce oil and gas, those with HI between 50 and 150 mgHC/gTOC produce gas, and those with HI < 50 mgHC/gTOC are inert. In this study, the HI ranges from 83 to 245 for the shales and coal, indicating oil- and gas-prone organic matter. The petroleum generating potential (GP) is the sum of the S1 and S2 values obtained from Rock-Eval pyrolysis (Table 1). The values obtained range from 2.34 to 177.36. According to Dyman et al. [34], values greater than 2 kgHC/ton of rock indicate a good source rock. This suggests oil and gas potential. Thermal Maturity The degree of thermal evolution of the sedimentary organic matter was derived from Rock-Eval Tmax and biomarker parameters. According to Peters et al. [35], biomarkers (geochemical fossils) can provide information on the organic source materials, the environmental conditions during deposition, the thermal maturity experienced by a rock or oil, and the degree of biodegradation. The Tmax values (Table 1) range from 425 to 435 °C. These indicate that the shales and coal range from immature to early peak mature (oil window) but on average are immature. This interpretation is in line with those given by Peters [33], Dow [36], and Miles [37], and is further highlighted by the plot of HI versus Tmax (Figure 5). Tm (C27 17α(H)-22,29,30-trisnorhopane) represents a biologically produced structure, while Ts (C27 18α(H)-22,29,30-trisnorneohopane) is generated in sediments and rocks by diagenetic or thermal processes or both. Ts/(Ts + Tm) is a ratio used as both a source and a maturity parameter. The m/z 191 (hopane) (Figure 6) and m/z 217 (sterane) (Figure 7) chromatograms of all the samples are similar. The C30 hopane (H30) is the most abundant peak in the m/z 191 chromatogram. The maturity and source parameters derived from the hopane distributions in the shales and coals are shown in Tables 2 and 4.
Also shown are the calculated aromatic biomarker parameters. Parameters such as MPI-1 (methyl phenanthrene index), DNR-1 (dimethyl naphthalene ratio), TMNR (trimethyl naphthalene ratio), and MDR (methyl dibenzothiophene ratio), with respective ranges of 0.14–0.76, 0.75–2.51, 0.17–0.50, and 0.99–4.21, all indicate that the samples are immature to marginally mature [29]. According to Sonibare et al. [38], the abundance of 1,2,5-TMN (trimethyl naphthalene) suggests a significant land plant contribution to the organic matter (Figure 8). Some n-alkane ratios can be used to estimate the thermal maturity of sediments [39]. Pristane/n-C17 (Pr/n-C17) and phytane/n-C18 (Ph/n-C18) can be used to assess thermal maturity. For the studied wells, the Pr/n-C17 values range between 0.8 and 3.91 (Table 3), which falls in the immature zone. Ph/n-C18 values range from 0.2 to 0.57, below the threshold value, again indicating immature organic matter. The carbon preference index (CPI) expresses the relative abundance of odd- versus even-carbon-numbered n-alkanes and can also be used to estimate the thermal maturity of organic matter [40]. In this study, the CPI values obtained range from 1.53 to 1.83 (Table 3). Hunt [41] has pointed out that CPI values considerably greater than 1.0 indicate thermally immature organic matter. A strong odd/even bias of the heavy n-alkanes is likewise indicative of sediment immaturity [43]. For this study, the odd-numbered n-alkanes are more abundant than the even-numbered ones, indicating that the sediments are immature. The odd-even predominance (OEP) values are less than 1.0, which is indicative of low maturity [43].

Palaeodepositional Environment. Moldowan et al. [44] have indicated that the presence of bisnorhopane and diasteranes is indicative of suboxic conditions. A plot of Ts/(Ts + Tm) versus diasteranes/(diasteranes + regular steranes) for the C27 steranes, shown in Figure 9, is indicative of a suboxic condition. The pristane/phytane (Pr/Ph) ratio of sediments can be used to infer the depositional environment [35]. Pr/Ph ratios < 1 indicate an anoxic depositional environment, while Pr/Ph > 1 indicates oxic conditions. Pr/Ph values between 1 and 2 indicate marine-sourced organic matter, and Pr/Ph > 3 indicates terrigenous organic matter input under oxic conditions. The values obtained from the studied wells range from 5.08 to 8.97, indicating that the samples contain terrigenous-sourced organic matter deposited in an oxidizing environment. Cross-plots of Pr/n-C17 versus Ph/n-C18 (Figure 10) reveal that the sediments were deposited in an oxidizing environment and derive from terrestrial and peat settings. This is consistent with the sample set, since some of the samples are coals. Dahl et al. [45] reported that a low homohopane index is characteristic of a suboxic environment (Table 4). On the other hand, Pr/Ph ratios tend to be high (>3) in more oxidizing environments such as swamps. The high Pr/Ph values obtained in this work indicate terrigenous input under oxic conditions. A large proportion of the results nevertheless point to suboxic conditions prevailing in the deposited sediments. These observations indicate that a significant portion of the facies was probably deposited in an offshore, shallow to intermediate marine environment under suboxic water conditions, which probably had no connection with the widespread Cretaceous anoxic events but is instead related to the Campanian-Maastrichtian transgression.
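For illustration, the sketch below computes the isoprenoid and n-alkane ratios discussed above from hypothetical peak areas. The CPI expression used is one common Bray-Evans-type formulation over the C24-C34 range and may differ from the exact carbon range used in this study; the Pr/Ph interpretation threshold follows the text.

```python
# Sketch of the n-alkane and isoprenoid ratios discussed above, using hypothetical
# peak areas (not data from the studied wells).

# hypothetical n-alkane peak areas keyed by carbon number
n_alkanes = {c: 100.0 - 2.0 * (c - 24) for c in range(24, 35)}
pristane, phytane = 320.0, 55.0
nC17, nC18 = 150.0, 160.0

def cpi(alk):
    """Carbon preference index, a Bray-Evans-type average over C24-C34."""
    odd = sum(alk[c] for c in range(25, 34, 2))
    even_lo = sum(alk[c] for c in range(24, 33, 2))
    even_hi = sum(alk[c] for c in range(26, 35, 2))
    return 0.5 * (odd / even_lo + odd / even_hi)

pr_ph = pristane / phytane
env = "oxic, terrigenous input" if pr_ph > 3 else "check other indicators"
print(f"Pr/Ph    = {pr_ph:.2f}  -> {env}")
print(f"Pr/nC17  = {pristane / nC17:.2f},  Ph/nC18 = {phytane / nC18:.2f}")
print(f"CPI      = {cpi(n_alkanes):.2f}")
```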
Summary and Conclusion. Detailed geochemical analysis of the coal and shale intervals obtained from the Anambra Basin, Nigeria, has been used to investigate their molecular fossils (biomarkers). The lithostratigraphic sequence penetrated by both wells (Enugu 1325 and 1331) consists of shales, coal, and siltstones. The shales are dark grey and fissile. The siltstones are brown to light grey in colour, while the coal is blackish. The organic richness of the samples, deduced from SOM and TOC, is fair to excellent. The organic matter type is predominantly terrestrial. This is based on the HI values, the HI–Tmax plot, the presence of oleanane, the abundance and predominance of C29 steranes, the C35 homohopane index, and the abundance of 1,2,5-trimethyl naphthalene.

Biomarker parameters were used to determine the degree of thermal evolution of the sedimentary organic matter. The presence of bisnorhopane and diasteranes, the plot of Ts/(Ts + Tm) against diasteranes/(diasteranes + regular steranes) for the C27 steranes, and the homohopane index all indicate suboxic and high-Eh conditions.

Discrepancies were observed in the results used to interpret the physicochemical conditions prevailing in the deposited sediments; these varied between oxic and suboxic conditions. It is therefore concluded that the lithologies from the core samples belong to the Mamu Formation and the Enugu Shale Group and were deposited in a partial to normal marine environment (suboxic to oxic water conditions). There is no strong evidence that the shales and coals have expelled petroleum, although they have the attributes of an economic source, largely for gas, and thus present a good prospect.

Figure 1: Geologic map of the Anambra Basin showing the study area.
Figure 4: A plot of hydrocarbon potential against TOC.
Figure 5: A plot of Hydrogen Index against Tmax for the studied wells.
Table 1: Data of TOC and Rock-Eval pyrolysis.
Table 2: Data of molecular parameters for the studied wells.
Table 3: Gas chromatographic data showing values of n-alkane ratios and their CPI.
Table 4: Results and interpretations of geochemical fossils.
5,362
2015-02-22T00:00:00.000
[ "Geology" ]
Holographic Complexity and Volume

The previously proposed "Complexity = Volume" or CV-duality is probed and developed in several directions. We show that the apparent lack of universality for large and small black holes is removed if the volume is measured in units of the maximal time from the horizon to the "final slice" (times Planck area). This also works for spinning black holes. We make use of the conserved "volume current", associated with a foliation of spacetime by maximal volume surfaces, whose flux measures their volume. This flux picture suggests that there is a transfer of the complexity from the UV to the IR in holographic CFTs, which is reminiscent of thermalization behavior deduced using holography. It also naturally gives a second law for the complexity when applied at a black hole horizon. We further establish a result supporting the conjecture that a boundary foliation determines a bulk maximal foliation without gaps, establish a global inequality on maximal volumes that can be used to deduce the monotonicity of the complexification rate on a boost-invariant background, and probe CV duality in the settings of multiple quenches, spinning black holes, and Rindler-AdS.

I. INTRODUCTION

The AdS/CFT correspondence [1][2][3] provides a satisfying duality between a black hole in asymptotically anti-de Sitter spacetime and a thermal state of a CFT, in which the entropy of the black hole is dual to ordinary thermal entropy. What remains obscure, however, is the relation between the black hole interior and the physics of the CFT. Aside from its (semiclassical) causal isolation, the interior has two qualitative features that one would like to understand from the viewpoint of the CFT: the curvature singularity, and the growth of space. The latter follows from the well known peculiar fact that the symmetry of time flow outside the horizon becomes a space translation symmetry inside. Under this symmetry flow, exterior time elapses, and the length of a spacelike curve at a fixed interior radius grows, with a rate that increases without bound as the fixed radius approaches the singularity. Susskind observed [4][5][6][7][8] that this growth should be reflected somehow in the CFT, because it can be captured by a gauge invariant observable of the bulk gravity theory. He proposed that it corresponds to the computational complexity of the state of the CFT, which continues to grow, after statistical equilibrium is reached, for a time that is exponential in the entropy. This proposal was quickly refined to "CV-duality", according to which the complexity at a given boundary/CFT time is proportional to the volume of the maximal slice enclosed within the corresponding "Wheeler-DeWitt patch," i.e., within the domain of dependence of a spacelike bulk hypersurface that asymptotes to the given boundary time slice. Shortly thereafter, the alternative postulate of "CA-duality" was introduced, according to which complexity is equal to the action of the Wheeler-DeWitt patch (see [9][10][11][12][13] for a selection of recent work on these two proposals). Both of these proposals predict a rate of growth of the complexity at late time that roughly agrees with general expectations. There is reason to expect that the rate of complexification for a CFT in equilibrium scales as TS/ℏ, the product of the temperature T and the entropy S [4]. The entropy counts the number of 'active' degrees of freedom, and ℏ/T is the timescale for thermal fluctuations.
If each such fluctuation counts as the execution of a quantum gate on active degrees of freedom, then the number of gates executed per unit time is ∼ TS/ℏ, which is thus the rate at which the complexity of the state increases. In order to match this rate, the complexity for black holes that are large compared to the AdS radius ℓ should be given in terms of the volume of the maximal slice by [5]

C ∼ V/(ℏGℓ).   (1)

In equilibrium, the maximal slice approaches a final maximal cylinder inside the horizon, with fixed cross-sectional area and a proper length that grows in proportion to Killing time. The above formula equates the complexity to this area, measured in Planck units, times the proper length of the cylinder, measured in AdS length units. For black holes small compared to ℓ, the complexity should instead be given by

C ∼ V/(ℏG r₊),   (2)

so that the proper length of the cylinder is measured in horizon radius units r₊ [5]. Unlike the case for large black holes, this depends upon the black hole size. This discrepancy is a principal reason for preferring CA over CV. The fact that the volume divisor in CV is ℓ for large black holes, but r₊ for small black holes, indicates an apparent lack of universality. However, in both cases this divisor actually corresponds to an intrinsic property of the black hole: the maximum time τ_f to fall from the horizon to the final maximal cylinder is ∼ r₊/c for spherical black holes with r₊ ≲ ℓ in D ≥ 4 dimensions, and ∼ ℓ/c for black holes with r₊ ≳ ℓ. Hence the complexity formulae (1) and (2) actually coincide, up to an order unity numerical factor, if the length in the denominator is understood as d_f := cτ_f. That is, in computing the complexity, the length of a section of the final cylinder ∆L_f should be measured in units of the maximal time to fall from the horizon to the cylinder. The universal expression for the late time complexity is thus

C ∼ A_f ∆L_f/(ℏG d_f),   (3)

where A_f is the cross-sectional area of the final slice. It turns out that A_f is equal to the horizon area A_H up to an order unity factor, so that A_f/ℏG ∼ S_BH can be identified with the black hole entropy, which is dual to the CFT entropy. The remaining factor in (3) is then ∆L_f/d_f. In Sec. III D we will show that, quite generally (in units with c = 1),

∆L_f/τ_f ≈ κ ∆t = 2π T_H ∆t/ℏ,   (4)

where κ is the surface gravity, ∆t is the elapsed Killing time, and T_H is the Hawking temperature of the black hole, which is dual to the CFT temperature. With these results, the expression (3) for complexity thus becomes

C ∼ T_H S_BH ∆t/ℏ,   (5)

yielding the black hole dual of the expected complexification rate. While the universality of the divisor τ_f is more satisfying than the previous ad hoc prescription, it should be admitted that we have no rationale for measuring the length of the final slice in units of τ_f, other than that it gives the desired result. Another potential drawback is that this prescription only applies to defining the complexity when the state at late times is thermal equilibrium, so that a 'final' maximal slice exists. In a general dynamical setting, this prescription is inapplicable (although as discussed in Sec. V D it can be applied in empty AdS, using the boost Killing field to define the notion of equilibrium). That said, as discussed in the next section, the notion of complexity itself is more ambiguous outside of a thermal setting, so it is not clear whether we should expect it to admit a universal holographic definition.
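To make the universality argument concrete, the following minimal Python sketch (our own illustration, not code from the paper) evaluates the late-time volume growth rate for D = 4 spherical Schwarzschild-AdS black holes, locating the final slice numerically and computing the free-fall time τ_f by direct integration. Dividing the growth rate by τ_f keeps its ratio to T_H S_BH of order unity for both small and large black holes, whereas dividing by the AdS radius ℓ does not.

```python
# Numerical sanity check of the universality claim above, for D = 4 spherical
# Schwarzschild-AdS black holes. Planck units (hbar = c = G = 1), AdS radius ell = 1.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

ell = 1.0  # AdS radius

def growth_ratios(rp):
    mu = rp * (1.0 + rp**2 / ell**2)              # f(r_+) = 0 fixes the mass parameter
    f = lambda r: 1.0 + r**2 / ell**2 - mu / r    # blackening factor, f < 0 inside the horizon
    # final slice: the constant-r surface inside the horizon maximizing r^2 * sqrt(-f(r))
    res = minimize_scalar(lambda r: -(r**2 * np.sqrt(-f(r))),
                          bounds=(1e-6 * rp, rp), method="bounded")
    rf = res.x
    alpha_f = np.sqrt(-f(rf))
    dVdt = 4.0 * np.pi * rf**2 * alpha_f          # one-sided late-time volume growth rate
    tau_f, _ = quad(lambda r: 1.0 / np.sqrt(-f(r)), rf, rp)   # maximal free-fall time, horizon to final slice
    T_H = (2.0 * rp / ell**2 + mu / rp**2) / (4.0 * np.pi)    # f'(r_+)/(4 pi)
    S_BH = np.pi * rp**2                                       # A/4 with A = 4 pi r_+^2
    return dVdt / (tau_f * T_H * S_BH), dVdt / (ell * T_H * S_BH)

for rp in (0.1, 0.5, 1.0, 5.0, 20.0):
    by_tau, by_ell = growth_ratios(rp)
    print(f"r+/l = {rp:5.1f}:  (dV/dt)/(tau_f T S) = {by_tau:5.1f},  (dV/dt)/(l T S) = {by_ell:5.1f}")
```

Running this shows the first ratio staying roughly constant from small to large black holes, while the second drifts with the black-hole size, which is the point of adopting τ_f as the divisor.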
The CV proposal thus remains interesting, as it passes the same checks as does the CA proposal, in some cases (regarding monotonicity on a stationary background) even better as discussed in Sec. IV 1. The purpose of this paper is to take a closer look at various aspects of the CV proposal, attempting to sharpen it and offer some interpretation of its definition and properties, as well as to extend the tests of it. For both conceptual and computational reasons, we shall make use of a volume current, whose flux through the bulk maximal slices anchored at a boundary foliation is equal to the volume of those slices. This volume current is a unit, timelike, divergence-free vector field orthogonal to the bulk maximal foliation. Our interest in the role of this current was inspired by recent work of Headrick and Hubeny (HH) [15], which established a "min-flow/max-cut" theorem relating volumes of maximal slices to minimal fluxes of timelike, divergencefree, vector fields with norm bounded below by unity ("HH flows"). In Ref. [15], it was remarked that it is natural to relate minimization of the number of gates in defining the complexity of a state, in a dual field theory, to minimization of the flux of an HH flow in the bulk spacetime, suggesting a "gate-line" picture of holographic complexity. In this picture, our volume current would correspond to a "gate current". Let us briefly describe here the HH theorem and its relation to our volume current. The theorem states roughly that, given boundary sub-region A, the maximal spatial volume of any slice homologous to A is equal to the minimal flux of an HH flow through the slice or (equivalently) through A. A given minimizing HH flow has unit norm on the corresponding maximal volume slice, and is orthogonal to that slice, but it is not otherwise uniquely determined. By contrast, the volume current we employ is a particular realization of an HH flow, determined by a boundary foliation, and its flux gives the volume of each slice of the corresponding maximal bulk foliation. That volume is not conserved, because there is flux through the cutoff boundary of the bulk region. Although the HH theorem assumes the spacetime is orientable and time-orientable, and assumes a maximal volume slice exists, its proof does not directly invoke any causality assumption or energy condition on the spacetime. By contrast, the volume current requires the existence of a foliation by maximal slices. We argue in Appendix A that such a foliation exists if i) maximal slices exist, ii) the spacetime satisfies a causality condition, and iii) the strong energy condition and Einstein equation hold. If the foliation is known to exist, then the HH theorem is a simple consequence: the flux of any HH flow is lower-bounded by the maximal volume (for a given boundary Cauchy slice), and the theorem asserts that this bound is actually saturated. The volume current, when it exists, saturates this bound, so our results can be viewed as providing a constructive proof of the HH theorem under certain additional assumptions. The remainder of this paper is structured as follows. Section II confronts the ambiguity in defining complexity. The notion seems most robust when applied to time evolution of thermal states, and we summarize several reasons for thinking the volume inside a black hole horizon captures the relevant quantity. Section III introduces the volume current, a useful tool for quantifying properties of maximal volumes and their evolution, and obtains several results using it. 
One of these is evidence for the flow of complexity from UV to IR in holographic CFTs. Section IV deduces a global inequality on maximal volumes, and uses this to establish the monotonic increase of the rate of volume growth on a boost invariant background. Section V probes CV duality in three settings: black hole formation with one or two shells of matter, spinning black holes, and empty AdS viewed as a pair of thermal Rindler wedges. Section VI is a brief conclusion and outlook. In Appendix A it is argued, assuming the existence of maximal slices, a causality condition and the strong energy condition, that a boundary foliation determines a maximal volume bulk foliation. The unit vector field normal to this foliation is the volume current. The remaining three appendices derive useful technical results. For the balance of this paper we use Planck units, withh = c = G = 1. II. VOLUME INSIDE AND OUTSIDE THE HORIZON The complexity of a pure quantum state is a measure of how many simple unitary operations, or "gates," it takes to produce it, starting with some reference state [16][17][18]. Hence, in general, complexity is defined only relative to the choice of reference state and the choice of gates. The original motivation for the proposal of CV duality pertained to time development of complexity at the thermal scale in a finite temperature pure state. In this context, the reference state could presumably be taken to be the thermal microstate at any fixed time, and the gates could be taken to be a fixed collection of gates that act at the thermal energy and length scales, so the rate of change of complexity is intrinsically defined without significant arbitrariness. However, the CV proposal encounters a divergence in asymptotically AdS spacetime, where the volume of a maximal slice diverges at spatial infinity. This divergence occurs for any state and, according to the usual UV-IR relation in AdS/CFT duality, it would presumably correspond, according to CV duality, to a divergent UV complexity of the CFT vacuum. While the vacuum is simply the ground state of the theory, it is complex if considered as a state to be prepared, starting with a spatially unentangled state, by the application of local quantum gates. Some analysis has suggested that this interpretation of the UV limit of CV duality might be sensible [12], although the volume-complexity relation could be infinitely sensitive to the somewhat arbitrary definition of the reference state and gates, and to sub-leading modifications of the short distance structure of the state [19,20]. The volume divergence has generally been dealt with in the literature by imposing a cutoff at some large radius, and focusing on the time dependence of the volume, which does not depend on the location of the cutoff. This corresponds, in effect, to taking the reference state to be the vacuum above the cutoff energy scale, and some "unentangled" state below that scale. The rate of change of the volume in a stationary, thermal state at late times is independent of the location of the cutoff, because the volume growth all happens inside the horizon of the black hole. For this reason, and several others, it makes a lot of sense to count only the volume behind the horizon: 4 • It is only the complexity at the thermal scale that appears to have a robust significance, independent of the arbitrary choices of reference state and gates. 
• The complexity divisor of the volume, as explained in the introduction, is universal when recognized as a free-fall time from the horizon to the final maximal slice. • The stationary state volume growth at late time occurs behind the horizon. This was explained in a picturesque way in [5], where it was referred to as "unspooling complexity" from the horizon. • If the reference state is the vacuum, then only black hole states have complexity that scales as O(N 2 ) in the CFT. This suggests that holographic complexity (with a vacuum reference state) should, at leading order in N , be associated only with black holes, and that the relevant volume in CV duality should be only that located behind a horizon. 4 It was also noted in Ref. [21] that the volume divergence can be regulated by counting only the volume behind the horizon. • For a two-sided black hole, a natural reference state is the thermofield double, which is a "Euclidean vacuum" for this topology. The maximal bulk slice corresponding to this state is a global time slice invariant under time reflection (like the t = 0 slice in Schwarzschild coordinates), which does not enter the (future or past) horizon, and therefore has zero volume behind the horizon. The volume behind the horizon thus gives the "right" result: the complexity vanishes, since the reference state by definition has zero complexity, but it grows if the time on one boundary is boosted relative to that on the other. • A "second law of complexity" [22,23] follows directly when the horizon is a causal barrier, as discussed in the next section. • The volume inside a white hole horizon can also contribute to the complexity, as in the shockwave scenario discussed below. This allows for decreasing complexity, when such behavior is expected. • In the extremal limit of rotating or charged black holes, the exterior of the horizon develops an infinitely long throat. Regularizing the volume near the boundary (or, for that matter, anywhere outside the horizon) would predict that the complexity of the thermofield double state diverges in the IR as extremality is approached. This questionable feature is avoided by regularizing at horizon. When applied in a general, time dependent setting, the proposal that complexity corresponds only to the volume behind the horizon suffers from a major drawback, however, if we use the event horizon, because the volume inside can grow before anything changes in the CFT, at the boundary of the maximal slice. This is illustrated by an example in Sec. V A We therefore propose to use an "apparent horizon" as the cutoff surface when there is time dependence. We follow the prescription, used previously in the literature, of measuring the volume on leaves of a foliation of the spacetime by spacelike hypersurfaces that maximize the volume inside an outer cutoff boundary. The apparent horizon is then defined as the boundary of the region containing trapped surfaces on each leaf of this foliation. In (quasi)stationary black hole spacetimes, this apparent horizon will (nearly) coincide with the event horizon. A component of the apparent horizon that asymptotes to the event horizon consists of points lying on marginally outer trapped surfaces [24]. When focusing on the volume inside the horizon, we are limited to discussing the growth of complexity in states dual to a spacetime with a horizon. This is not as restrictive as it might seem, since even empty AdS is a thermal state, when viewed as a pair of Rindler wedges. 
Indeed, the much studied account of complexity increase for the two-sided black hole can be adapted in a straightforward manner to the Rindler case, where the relevant volume is that inside the Rindler horizon. The interpretation in this case appears to be fully consistent with that for black holes, as we explain in Sec. V. So far we have been referring to the volume inside the black hole horizon, which is relevant for late time equilibrium states. However an important test for any proposed holographic dual of complexity is that it exhibit the switchback effect [6], which brings the white hole horizon into play. The switchback effect refers to a small time-deficit in the growth of complexity, of order twice the "scrambling time", when a state is evolved backwards in time, perturbed relative to the reference state, and then evolved forwards in time. The calculations in [6] demonstrated that the volume of maximal slices does holographically capture the switchback effect for the thermofield double state, and in particular the maximal volume slices can traverse the black hole region on one side of the shock, and the white hole region on the other side. In the late time approximation used in [6], the portion of the maximal slice outside the horizons does not contribute to the total volume, because it is null, as illustrated in Fig. 1. Hence the volume inside the black and white hole horizons suffices to capture the switchback effect. In general, therefore, our proposal must be taken to include the volume inside the white hole horizon. This appears somewhat natural, considering the fact that the derivation of the switchback effect involves reversed time evolution, and the time reverse of a black hole is a white hole. Finally, although it appears difficult to relate the volume outside the horizon to a definition of complexity of the state in a universal manner, the assumption that such a relation exists leads to the interesting picture of complexity flowing from UV to IR, as explained in the following section. III. VOLUME CURRENT While the volume of maximal slices is a nonlocal construct, there is an associated local object, the "volume current," which can be used to infer the volume growth behind the horizon and the second law of complexity, and which is suggestive the UV to IR flow of complexity. In this section we introduce the volume current, and use it to establish several important properties of the proposed CV duality. A volume current will be defined given a foliation of spacetime by spacelike hypersurfaces with maximal volume. In the present application, we are interested in asymptotically AdS spacetimes, in which a maximal foliation is determined by a Cauchy foliation of the boundary by slices orthogonal to an asymptotic Killing flow defining time translation. Provided that there is a unique bulk maximal slice that terminates on any fixed boundary Cauchy slice, and provided these bulk slices do not skip over a "gap" in the bulk, the boundary foliation induces a bulk foliation by maximal slices Σ t , labeled by a parameter t. We establish the existence of such a bulk foliation by a reasonably convincing-if not mathematically rigorous-series of arguments in Appendix A. To rule out the possibility of gaps we will need to assume that the timelike convergence condition (which is equivalent to the strong energy condition modulo the Einstein equation) holds. 
Whether or not a global foliation exists, our construction can be applied to the portion of spacetime prior to the final slice that is foliated without a gap. The divergence of the unit timelike vector field v orthogonal to the bulk foliation is the trace of the extrinsic curvature K of Σ t , which vanishes since the slices are assumed to be maximal. This vector field is thus a conserved current, which we dub the volume current associated with the maximal foliation. The volume of Σ t is the flux of this current through Σ t , (Here is the spacetime volume element, and the dot indicates contraction on the first index of .) The construction of the volume flow v, starting from a boundary foliation, is illustrated in Figure 2. As discussed in Sec. II, to obtain a finite volume, and hence a finite putative complexity, the integral must be cut off at some outer boundary ∂Σ t . We will continue to use the letter "V " for this truncated volume. A. Second law of complexity Since the divergence of v vanishes, the change ∆V from one time slice to another is entirely accounted for by the flux of v through the boundary, or boundaries, of that slice. If we restrict to the volume inside the horizon, then the change is accounted for by the flux of v through the horizon. Since v is a future pointing timelike vector, the flux through the future event horizon is positive, and it follows that the interior volume can only increase. When considering a spacelike portion of the apparent horizon forming a past boundary of the trapped region, again the flux is positive. CV duality then implies that in these situations, the complexity must increase, in accordance with the second law of complexity [22,23]. Note that this argument applies in arbitrary dynamical black hole spacetimes, such as a black hole formed by collapse. If, however, the apparent horizon has a timelike section, which can happen when a black hole evaporates, and even when positive energy conditions hold [24], then we cannot rule out a decrease in the volume enclosed. This seems natural: when this horizon is not a causal barrier, there is no reason to expect the associated complexity to irreversibly increase. Note that when the region behind the horizon includes the white hole, as with the two-sided black hole with a shockwave of Figure 1, the complexity can decrease as time increases on the side opposite to the shock [6]. Correspondingly, the volume of the maximal slice inside the white hole decreases, since the volume current can only exit the white hole horizon. B. Complexity flow from UV to IR The flux of the volume current inward across the horizon suggests a picture of complexity flowing from UV to IR, which is further corroborated by examination of the flux through other surfaces. Consider the section of a maximal slice Σ t stretching between the horizon of a black hole and the outer cutoff boundary in asymptotically AdS spacetime. The rate of complexification of the thermal degrees of freedom should not depend upon where the cutoff surface is placed, because that just changes the constant complexity assigned to the degrees of freedom that are in their ground state. Holographically, this works because at sufficiently large radius, as explained below, v is invariant under the asymptotic Killing flow. The volume between two large radii is thus independent of time, which implies that the flux of v through the boundary is independent of its (large) radius. 
Moreover, at sufficiently late times, as also explained below, v is invariant under the Killing flow everywhere, including on and inside the horizon. The flux of volume through the horizon is therefore equal to the flux through the outer boundary at the UV cutoff. According to CV duality, the complexity thus flows from the UV to the IR, and accumulates at the thermal scale. This conclusion may be related to the fact that, in a holographic CFT, thermalization proceeds from UV to IR [25,26]. On the other hand, it seems to be somewhat in tension with the fact that in a thermal state the UV degrees of freedom remain unexcited. If unexcited, how could they participate in the generation of complexity? Perhaps, since their excitation is not strictly zero but only exponentially suppressed, their dynamics could provide the source from which the complexity unfolds. Or is complexity generated purely from the thermal scale fluctuations? And if the latter is the case, then how can we understand the dual flow of the volume current from large to small radii? We leave these questions to be addressed in the future.

We turn now to the asymptotic invariance of v under the Killing flow at the boundary and as the final slice is approached. The late time limit of this flow can easily be found in closed form in spherical symmetry, where it is given in terms of v^r. At late times the components are independent of t, and the divergence-free condition implies v^r = −K/r, where K is some constant. K can be determined by the normalization condition g_rr (v^r)² = −1 on the t = 0 line in the middle of the black hole interior region since, by symmetry, v^t vanishes there. Because we have assumed the late time limit, this must be done at the "final slice" [5,6], which is the maximal slice at constant r in the black hole interior. For example, in the non-rotating case of the BTZ black hole treated in Sec. V we have

K = r_f α_f = r₊²/(2ℓ),   (7)

where α_f is the norm of the Killing vector ∂_t at r_f. The constant K gives the rate of volume flow, with respect to Killing time, per unit angle, through any surface of constant r coordinate, e.g. the horizon. To see this, note that the volume flux is given by the integral of v · ε pulled back to the constant r surface. In (t, r, φ) coordinates, ε = r dt ∧ dr ∧ dφ, so this pullback is −r v^r dt ∧ dφ = K dt ∧ dφ. This conclusion generalizes to spherical black holes in any spacetime dimension.

D. Asymptotic volume growth and complexity

For the BTZ black hole [27], the K written above can be expressed in terms of the surface gravity κ = r₊/ℓ² and the horizon area A = 2πr₊ as

K = κAℓ/(4π) = 2ℓ T_H S_BH,   (8)

where T_H and S_BH are the black hole temperature and entropy, respectively. In this way, we can see that the late-time rate of growth is 2ℓ times T_H S_BH. The factor ℓ is the "divisor," discussed in the introduction, that gives the ratio of volume to complexity. The fact that K ∝ T_H S_BH is not an accident. It could be anticipated from the first equality in (7). In fact, that equality generalizes to a D dimensional spherically symmetric spacetime, where

K = r_f^{D−2} α_f,   (9)

and to black holes of any size. This can be used to understand why, as mentioned in the Introduction, the ratio of volume to complexity should be the maximal proper time from the horizon to the final slice for black holes of any size. The factor r_f^{D−2} is the area per solid angle of a cross section of the final slice. It turns out that r^{D−2} α(r) reaches its maximum not far from the horizon, so we have r_f ∼ r₊. The first factor in (9) therefore scales as the horizon area per solid angle, times a numerical constant.
The factor α_f is the norm of the Killing vector ∂_t at the final slice. The Taylor expansion for α around the horizon is α = κτ + . . . , where τ is the proper time from the horizon, in the direction orthogonal to the Killing flow. Thus, for both large and small spherical black holes in any dimension, the volume grows at a rate

dV/dt ∼ T_H S_BH τ_f,   (10)

where the symbol ∼ denotes equality up to a numerical constant that depends on the spacetime dimension and is different for large and small black holes. Since complexity is expected to grow in the dual CFT at the rate ∼ TS, we conclude that the ratio of volume to complexity should be τ_f (i.e. ℏGτ_f). In section V C we show that this reasoning also applies to the Kerr metric (with vanishing cosmological constant).

E. Maximal time from horizon to final slice

In this subsection we first compute the maximal time τ_f from the horizon to the final slice for hyperbolic, planar, and spherical Schwarzschild-AdS black holes. We next give a general argument, analogous to that used in Hawking's cosmological singularity theorem, showing that for any black hole in a spacetime with negative cosmological constant Λ, and satisfying the strong energy condition for matter other than the cosmological constant, |Λ|^{−1/2} sets an upper bound for the value of τ_f.

Schwarzschild-AdS black holes. The value of τ_f for Schwarzschild-AdS black holes is given by the proper time from r₊ to r_f along a line of constant t and angular coordinates,

τ_f = ∫_{r_f}^{r₊} dr/√(−f(r)).   (11)

To estimate the value of this integral, we may use the Taylor expansion about the horizon. The line element is

ds² = −f(r) dt² + dr²/f(r) + r² dΣ²_k,  with  f(r) = k + r²/ℓ² − μ/r^{D−3},   (12)

where k = −1, 0, 1 for hyperbolic, planar, and spherical black holes, respectively. Thus for the BTZ black hole (D = 3) or planar black holes, or hyperbolic or spherical black holes with r₊ ≳ ℓ, we have τ_f ∼ ℓ. If instead r₊ ≪ ℓ, with D ≥ 4 and k = 1, then τ_f ∼ r₊. The case of small hyperbolic black holes should be treated separately: this case has an extremal limit, i.e. a lower bound for r₊ of the order of ℓ [28]. The estimate for τ_f above assumes that r₊ − r_f ∼ r₊ up to some order unity factor, but r₊ − r_f = 0 at extremality. A computation expanding around extremality (similar to that for the Kerr case treated below in section V C) shows that τ_f ∼ ℓ for the extremal hyperbolic black hole.

Upper bound to τ_f set by the AdS scale. It is interesting to note that an upper bound of the form τ_f ≲ ℓ follows from a more general result, analogous to the argument in Hawking's cosmological singularity theorem. To derive an upper bound for τ_f, we can apply this result to the case where the achronal surface S is a spacelike slice just inside the future horizon H of the black hole. As long as the final slice lies inside D⁺(H),⁶ we obtain an upper bound on τ_f set by the AdS scale. For D = 3 this becomes τ_f ≤ πℓ, which is consistent with the exact result τ_f = πℓ/4 obtained in section V B for the rotating BTZ black hole.

IV. GLOBAL VOLUME INEQUALITY AND COMPLEXIFICATION RATE MONOTONICITY

In this section we discuss a global inequality relating the volume on different slices, which leads to an inequality on mixed partial derivatives with respect to boundary time. On a boost symmetric background, this allows us to obtain an inequality for the second time derivative of the volume, which implies that the complexification rate grows monotonically on boost invariant black hole backgrounds. We thus recover from a general viewpoint this fact found previously using explicit computations with eternal black holes in AdS. To derive the global volume inequality, we need only use the definition of maximal slices; no energy condition or other additional ingredient is needed.
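Before turning to the global inequality, a quick numerical cross-check of the free-fall times quoted in Sec. III E is easy to set up. The sketch below (our own, not from the paper) verifies the exact rotating-BTZ value τ_f = πℓ/4, independent of r₊ and r₋; the lapse function and final-slice radius assumed here are the standard rotating-BTZ expressions used in Sec. V B.

```python
# Numerical cross-check of tau_f = pi*l/4 for the rotating BTZ black hole.
# Assumes the standard BTZ lapse f(r) = (r^2 - r+^2)(r^2 - r-^2)/(l^2 r^2)
# and the final-slice radius r_f^2 = (r+^2 + r-^2)/2.
import numpy as np
from scipy.integrate import quad

ell = 1.0  # AdS radius

def tau_f_btz(rp, rm):
    f = lambda r: (r**2 - rp**2) * (r**2 - rm**2) / (ell**2 * r**2)
    rf = np.sqrt((rp**2 + rm**2) / 2.0)            # final slice radius
    val, _ = quad(lambda r: 1.0 / np.sqrt(-f(r)), rf, rp)
    return val

for rp, rm in [(1.0, 0.0), (1.0, 0.5), (3.0, 2.9), (10.0, 1.0)]:
    print(f"r+ = {rp:4.1f}, r- = {rm:3.1f}:  tau_f = {tau_f_btz(rp, rm):.6f}  (pi*l/4 = {np.pi / 4:.6f})")
```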
Consider for concreteness a compact box in an eternal black hole spacetime ( Figure 4), with the vertical sides of the box taken to be some near-boundary cutoff. Let t 1 , t 2 be two times on the left cutoff (with t 1 < t 2 ), and t 3 , t 4 be two times on the right cutoff (with t 3 < t 4 ). The inequality then says that where Vol(t 1 , t 3 ) is the maximal volume between time t 1 on the left and time t 3 on the right, etc. Note that, even though each of the four maximal slice volumes diverges as the cutoff is sent to the boundary, the linear combination in (14) is UV-finite. To establish the inequality, observe that the two dashed orange slices in Fig. 4 intersect each other, and we can divide them into four segments, each connecting the intersection with one of the four boundary times. 6 If D + (H) does not contain everything inside the event horizon, there is a Cauchy horizon, which is presumably unstable to formation of a singularity, eliminating the Cauchy horizon. By maximality, we know that Vol(t 1 , t 3 ) is greater than the sum of the volumes of the two lower orange segments. Similarly, we know that Vol(t 2 , t 4 ) is greater than the sum of the volumes of the two upper orange segments. The sum of these two inequalities yields (14). This example of a global volume inequality can be generalized to a general bulk spacetime, with one or more boundary components. We illustrate this in Figure 5 for a spacetime with one boundary. Let σ 1 and σ 2 be two Cauchy slices of the boundary, and let Σ 1 and Σ 2 be the corresponding maximal slices. (As before, we regulate the volume by placing a cutoff surface in the asymptotic region.) Assuming the bulk is time orientable, it admits a foliation by timelike curves, which also extends to the boundary. Each of these curves intersects each of the Cauchy slices once. On the boundary define two new piecewise smooth Cauchy slices σ − and σ + , consisting of the first and second intersection points respectively, and similarly define two new bulk slices (which are also only piecewise smooth), Σ − and Σ + . Then the boundary of Σ ± is σ ± , and Σ ± is generally not the maximal volume slice with this boundary. Generalizing the previous notation, let Vol(σ) denote the maximal volume for a slice bounded by σ, and now let Vol(Σ) be the volume of the bulk slice Σ. Then we have Vol(σ ± ) ≥ Vol(Σ ± ), and addition of these inequalities yields To recover the previous case from this generalization, take σ 1 to be the two-boundary slice consisting of the union of the t 1 and t 4 slices, and take σ 2 to be that consisting of the union of the t 2 and t 3 slices. Monotonicity on a boost-symmetric background We next explain how inequality (14) The monotonicity property coming out of CV-duality is in agreement with this general expectation. By contrast, CA-duality was recently discovered to violate this monotonicity property [9,30], perhaps putting into question that particular proposal. We now take the the infinitesimal limit of inequality (14), setting t 1 = t L , t 2 = t L + δt L , t 3 = t R , To leading order in small quantities, we find: Therefore the global inequality (14) implies positivity of the mixed partial derivative: In terms of the new variables t ± = t L ± t R , this inequality becomes For an eternal black hole, the boost symmetry implies that the maximal volume cannot be a function of t − [6]. 
We thus end up with the simple statement: which implies that the first derivative of Vol with respect to t + (or equivalently, with respect to either t L ot t R with the other one kept fixed) is monotonic. Replacing the volume by the complexity C, this implies the monotonic increase of the complexification rate discussed above. Note that inequality (14) does not imply monotonicity of the complexification rate for a general bulk spacetime, since we used the boost symmetry of a 2-sided black hole to deduce it. Nevertheless, we can take the infinitesimal version of the inequality (15) for a generic spacetime, and derive a condition similar to the positivity of the mixed partial derivative (17). To this end, consider the case where the two boundary Cauchy slices σ 1 and σ 2 coincide except on two small disjoint bumps to the future, one on σ 1 and one on σ 2 . Expanding to leading order in the size of the bumps, we find that the "off-diagonal" part (since the bumps are disjoint) of the second functional derivative of the maximal volume Vol(σ) with respect to σ variations is nonnegative. The time dependence of holographic complexity has been studied for quenched systems, i.e. systems into which a finite energy density is injected, using the AdS-Vaidya solution [19,21,31,32]. In this subsection, we compare the growth of the volume inside the horizon for an AdS-Vaidya spacetime in the thin shell limit, using different definitions of the volume cutoff, and we extend existing studies to the case of two infalling shells. In Section II we discussed several reasons supporting the notion that volume inside the black hole horizon is perhaps a more robust measure of complexity of the thermal state than is the volume of a global maximal slice with a cutoff at large distances from the black hole. When the black hole forms from collapse, the degeneracy between different definitions of the horizon is lifted, hence we should examine which (if any) is more appropriate for CV duality. In particular, while the absolute event horizon remains a null hypersurface defined teleologically as the boundary of the past of future null infinity, we shall also consider the apparent horizon, defined here as the boundary of the region containing outer trapped surfaces on the leaves of the maximal foliation An apparent horizon defined this way is an example of a holographic screen [33,34], i.e. a hypersurface foliated by marginally trapped surfaces. Recent work [35,36] has shown that the area of a leaf of such a foliation is related to a certain coarse grained holographic entropy, which lends support to the idea that the volume inside such surfaces might be directly related to complexity [37]. When the spacetime is time dependent, the maximal time from the horizon to the final slice (the "complexity divisor" of the volume) in general becomes time dependent, and in that context it might well make more sense to measure the time from the apparent horizon rather than from the event horizon. We shall make no attempt here to determine which precise extension of the concept is more appropriate. Instead, we will just address the case where the black hole is either the BTZ black hole in D = 3 dimensions, or in higher dimensions is large enough so that the time ∼ is the always the relevant one. Single quench Consider, then, a black hole formed by an infalling shell in AdS. If the event horizon forms "at the same time" as the time on the boundary when the shell starts to fall in, i.e. 
the time at which an external agent injects some energy into the CFT ground state, then the maximal slice volume inside the horizon remains zero until the injection time, and starts growing after that. This would be consistent with the general expectation that the CFT state starts to complexify after the energy injection. However, the horizon forms before the injection time if the final horizon radius is greater than the AdS length scale, and after the injection time if it is less [38]. To illustrate this, let us work for simplicity in the in the thin-shell limit, and in three spacetime dimensions. The BTZ-Vaidya metric in (r, v) coordinates reads: where Θ is the unit step function. This metric describes a spherical shell at v = 0 collapsing to form a black hole. To draw the conformal diagram, we need to pass to conformally compactified coordinates (R, T ) (see [38] for the coordinate transformation). The metric becomes: Fig. 6 shows the conformal diagrams for three choices of horizon radius (larger than L, equal to L and smaller than L). The center of AdS is at R = 0, the boundary is at R = π 2 , and the singularity is at (1 − r 2 + ) sin R = (1 + r 2 + ) sin T . As illustrated in Fig. 6, the horizon forms at the same time T as the injection time on the boundary only in the special case when the horizon radius of the final black hole is exactly equal to the AdS length. where the geometry is locally AdS. This slice is invariant under the T reflection isometry, so it has vanishing extrinsic curvature, hence is maximal. We depict such a maximal slice in blue on the left panel of Fig. 6. Clearly the portion inside the horizon starts growing even before the energy injection occurs on the boundary. The apparent horizon for Vaidya-BTZ consists of two segments in the conformal diagram (illustrated in 7). One segment is the event horizon r = r + in the BTZ portion of the spacetime, and the other segment is the infalling shell itself. The interior of the apparent horizon (i.e. the trapped region) is shaded in light blue in Fig. 7. It is clear that the volume inside the apparent horizon can grow only after the injection occurs, with a delay because the maximal slice does not have any portion inside the light blue region of Figure 7 immediately after injection. In Fig. 8 we plot the volume inside the horizon as a function of the boundary time, and compare with the volume inside some large cutoff near the boundary, as well as with the volume inside the apparent horizon (see below). The plots on the left are for a large black hole with r + = 5 , while those on the right are for • The volume inside a near-boundary cutoff is shown as the blue curve. It starts to grow at the injection time. For a large black hole, the volume inside horizon, shown by the red curve, starts growing before the injection time. • For a large black hole, the volume inside a near-boundary cutoff (blue curve) grows at essentially the late-time rate as soon as the energy is injected. For a small black hole, the growth rate starts out higher, then decreases to to late time rate. • The growth rates all converge at late time both for large and small black holes. This verifies the expectation that all late time growth of volume occurs inside the horizon for a one-sided black hole, as it does for a two-sided one. 
• From a geometrical viewpoint, there are two distinct regimes for the volume inside the event horizon: it may be contained entirely inside the pure AdS part of the spacetime, or it may include a part in the AdS portion and a part in the BTZ portion. There is a critical boundary time at which the former regime transitions to the latter one, on the maximal slice, A priori it seems that the volume curve might have a kink at this transition, however inspection of the red curves suggests that the derivative is actually continuous at the transition. • The brown curves show the volume inside the apparent horizon, which is always the last to start growing. In particular, no growth occurs before injection, and in fact there is a delay between injection and the onset of growth of the volume inside the apparent horizon. This delay is perhaps related to a thermalization timescale. As shown in [25,26] thermalization takes a time on the order of the AdS length scale, with specific behavior depending on the scale at which thermalization is probed. • The rate of change of the volume inside the apparent horizon starts out higher than the late-time rate, and then approaches the late-time value from above. This is contrary to the monotonic increase expectation. This decreasing behavior was also observed in [31]. Moreover, the longest time to the final slice is the same from all points on the outer portion of the apparent horizon (which coincides with the event horizon), and is longer than any other time from inside the horizon, so there is presumably no additional time dependence coming from the divisor τ f in the complexity formula. Two quenches In this subsection we consider BTZ-Vaidya with two infalling shells. The field theory picture is that energy is injected twice into the CFT. After the first injection thermalizes, we expect a linear growth of complexity at a rate proportional to the energy injected, since T S ∝ E. After the second injection, the system now has more energy, so we expect the complexification rate to increase. If the second injection occurs sufficiently far to the future of the first one, so that the complexity has enough time to reach the linear growth regime before the second injection, CV duality would imply that the plot of maximal volume versus time will consist of three linear regimes (zero slope, a finite slope, and a bigger finite slope). The metric is still given by eq. (20), except that the function Θ(v) is no longer a unit step function, but a "double step function": where a is a real number between 0 and 1, b is a positive real number, and θ is the unit step function. The function f (r, v) becomes: with R 2 ≡ a(1 + r 2 + ) − 1. Thus, we have AdS for time v < 0, and the usual BTZ with horizon radius r + for b < v. In the intermediate regime, 0 < v < b, we have two qualitatively different cases depending on the sign of R 2 (it need not be positive): If R 2 > 0 we have a BTZ black hole with horizon radius R, while if −1 < R 2 < 0 we have a "conical defect" geometry. 8 FIG. 9. Eddington-Finkelstein diagram of the BTZ-Vaidya solution with two infalling shells. The abscissa and ordinate for the plot are ρ = arctan(r) and t = v − ρ, respectively. The two infalling shells, the center of AdS, and the boundary are in thick black, the singularity is in solid red, the constant radius portion of the apparent horizon between the shells is in dashed blue, the event horizon is in short-dashed red, and the maximal slice anchored at late boundary time is shown in continuous blue. 
The apparent horizon is the boundary of the grey shaded region. We have solved numerically for the maximal slices and their volume with the parameter choice a = 1/2, b = 1 and r + = 2 (see appendix D for the technical details). The shape of a maximal slice anchored at late time on the boundary is depicted using an Eddington-Finkelstein diagram in Fig. 9. The effect of the outer 8 This conical defect geometry is also a possibility in the single-shell case. This can be seen as follows: the stress-energy tensor of the single-shell AdS-Vaidya in three dimensions is Tvv = 1+r 2 + 2r Θ (v). If −1 < r 2 + < 0, we do not have a black hole final state, yet the null energy condition is still obeyed. This is the conical defect regime. shell is to push the slice further from the singularity, which can be explained intuitively as follows: the final slice for the BTZ-Vaidya should approach the final slice of the eternal BTZ black hole (with the same total mass) at late time, which is a constant radius slice with r ∝ r + . Since the effect of the outer shell is to increase the horizon radius, it also pushes the final slice further from the singularity. In Fig. 10, we plot the volume of the portion of the maximal slice lying inside the apparent horizon as a function of the boundary time t b at which the slice is anchored, with the same parameter choice used for Fig. 9. Note that for a brief period of time after the maximal slice crosses the point where the second shell meets the apparent horizon, the slice has two disconnected parts lying inside the apparent horizon, separated by an annular region falling outside the apparent horizon. It would be interesting to consider similar double quenches but in D ≥ 4 dimensions, with spherical black holes of different sizes, so that the time dependence of the time to final slice divisor τ f might come into play. B. Rotating BTZ black hole A further probe of CV duality is provided by considering rotating black holes. A rotating black hole is dual to a rotating thermal CFT state. The complexity in such a state should presumably grow, with respect to time in the rotating frame, as the entropy of the state times the temperature T rot in that frame, since that is the frame in which thermal equilibrium is established. That is, one would expect that dC/dt rot ∼ T rot S. While the CFT entropy is frame independent, and is thus equal to the dual black hole entropy, T rot is not equal to the dual black hole temperature T BH . Considering that the thermal frequency defines a clock in the rotating frame, there is a time dilation shift of the temperature, and we have T rot dt rot = T BH dt BH , where dt BH is the Killing time increment in the asymptotic rest frame of the black hole. It follows that the rate of complexity growth can equally be expressed as dC/dt BH ∼ T BH S BH . We will now show, for the case of a rotating BTZ black hole, that CV duality indeed predicts this complexity growth rate at late times. The metric for the rotating BTZ black hole can be written as with , and the surface gravity is κ = (r 2 + − r 2 − )/ 2 r + . The "final slice," i.e. the Killing-invariant maximal slice inside the horizon lies where (rα) = 0, which is at r f given by r 2 f = (r 2 + + r 2 − )/2. This is a cylindrical surface, with induced metric ds 2 = −α 2 f dt 2 + r 2 (dφ − Ω f dt) 2 , and volume (area) form r f α f dt ∧ dφ, where α f := −α(r f ) 2 . 
The volume of a dt section of the slice is thus dV = 2πr f α f dt, so the rate of change of the total volume inside the horizon, growing at both ends, is dV /dt = 4πr f α f . The longest time path from the outer horizon to the final slice has length which is the divisor we use in relating the complexity to the volume. (Note that, unlike in higher dimensions, this time is independent of the horizon radius, even for small black holes. In the next section we consider the Kerr black hole in four dimensions, and find that one still obtains T S even for small black holes, when the time to the final slice is used as the divisor.) The rate of change of the complexity with respect to t is thus As explained above, T BH S BH is equivalent to the rate T rot S rot , which is what one would expect from a thermal state [4]. This is a nontrivial check, since the rotating black hole possesses another dimensionless parameter, r − /r + , on which the result might have depended. We note also that, in the case of the rotating BTZ black hole, the this complexification rate from "complexity = volume" is interestingly the same as the one found for "complexity = action" [8]: the late-time rate of growth is proportional to r 2 + − r 2 − in both cases. Here we emphasize the proportionality of this result with T H S BH , whereas Ref. [8] emphasizes the proportionality with M − ΩJ, noting that the late-time rate of growth is slowed down compared to M , due to the presence of the conserved charge J. C. Kerr black hole Next, we discuss rotating black holes in higher dimensions. This case is substantially more complicated to study than the rotating BTZ one due to the lack of spherical symmetry, and this may be why it has not been studied at all in the literature in the context of CV-duality. 9 We will consider the case of the Kerr solution in four spacetime dimensions, since the asymptotically flat case is somewhat simpler, and since its maximal slices have already been studied to some extent in [40]. Our aim is to check whether the complexification rate (with the time to the final slice divisor taken into account) continues to be of the order of T H S BH . The inner/outer horizons r ± are the roots of ∆ = (r − r + )(r − r − ), A general axially symmetric maximal slice is described by some function r(t, θ). The final slice is a late time limit, so is t-independent due to t-translation symmetry of the background, and is therefore described by a function r(θ) which extremizes the volume. The volume element on such a slice between the inner and outer horizons (where ∆ < 0) is As argued in [40], r(θ) must lie between two values, r min and r max , which are the extrema of Σ∆ with respect to r at θ = 0 and π/2, respectively, and which are very close to each other for all values of the spin parameter a/M . The final slice therefore comes very close to being a slice of constant r. We will adopt r f ≡ r max for that approximate constant value of r, which is given by This radius is never parametrically different from the horizon radius r + : for a = 0 it is 3r + /4, while for extremal spin it is r + . The volume element on this cylinder is given by The volume of a dt section on the final slice is thus dV = dt α f A f , where the integral is over a constant t slice of the cylinder. Next let us evaluate the maximal time to fall from the horizon to the final slice, τ f . 
Since the coefficient of $dr^2$ in the line element (30) is negative, while those of the other three terms are positive, the longest time is clearly attained with $dt = d\theta = d\phi = 0$. Moreover, the maximum of these is attained at $\theta = 0$, so $\tau_f = \int_{r_f}^{r_+} \sqrt{(r^2 + a^2)/(-\Delta)}\;dr$. Our proposal for the $t$ derivative of the holographic complexity due to growth on one side of the final cylinder is thus $dC/dt \sim \frac{1}{\tau_f}\int \alpha_f A_f$ in Planck units, Eq. (36). Since $r_f \sim r_+$, the rate (36) will agree with $\sim T_H S_{\rm BH}$ provided $\alpha_f \sim \kappa\tau_f$, where $\kappa$ is the surface gravity. If $\alpha_f$ were the norm of the horizon generating Killing vector $\partial_t + \Omega_+\partial_\phi$ as before, then this last relation would again follow as the first order Taylor expansion, as explained in the introduction. However, in fact, $\alpha_f$ is the norm of $\partial_t + \Omega_f\partial_\phi$. Nevertheless, again, because $r_f \sim r_+$, these two vector fields are not so different, and so it is plausible that indeed $\alpha_f/\tau_f \sim \kappa$. To test this relation at the extreme, we define the parameter $\epsilon$ by $\epsilon^2 = 1 - a^2/M^2$, and expand around extremality, $\epsilon = 0$. Expanding to lowest order in $\epsilon$, using units with $M = 1$, we have $r_\pm = 1 \pm \epsilon$, $r_f = 1 + \epsilon^2$, and $\Delta = (r-1)^2 - \epsilon^2$, hence $\Delta_f = -\epsilon^2$. Thus, in computing the volume to lowest order, we may set $r = 1$ in all expressions other than $\Delta$. In particular, $\alpha_f A_f \to \epsilon\sqrt{1 + \cos^2\theta}\,\sin\theta\,d\theta \wedge d\phi$. At this lowest order in $\epsilon$ we therefore obtain the volume growth rate, and our proposal yields the complexification rate (39). On the other hand, evaluating the temperature and entropy of the Kerr black hole, they combine into the rate (41). The important thing is that (39) and (41) are both $O(\epsilon)$, so that their ratio approaches a nonzero pure number in the extremal limit. While the ratio depends on the spin parameter $a/M$, it does not go to zero or infinity as this parameter goes to zero. Finally, to exhibit the ratio over the full parameter range, we evaluated it numerically. As expected, the plot in Fig. 11 of the ratio of these quantities does not vary substantially over the whole range of $a/M$. It would be interesting to generalize this analysis to Kerr-AdS, in any spacetime dimension. D. Rindler wedge complexity growth It has previously been observed that many aspects of black hole thermodynamics and horizon entanglement apply to acceleration horizons, and specifically to Rindler horizons in flat or AdS spacetimes. In particular, in the AdS/CFT setting, the CFT can be partitioned into two equal halves, and in the ground state the corresponding bulk entanglement wedges are Rindler wedges, separated by a horizon and future and past "interior" regions analogous to the two-sided black hole interior. Each half of the CFT vacuum is a thermal state with respect to the Hamiltonian generating the conformal boost symmetry of its diamond-shaped domain of dependence [41,42]. The analogy with the two-sided black hole is close enough that we may expect the complexity of the thermal state to grow in time when boosting toward the future on both halves of the partition (as opposed to boosting one side to the future and the other to the past). Moreover, the expected growth rate would be $TS$, where $T$ is the conformal boost temperature and $S$ is the (entanglement) entropy. We now demonstrate that this is indeed the case for AdS$_3$/CFT$_2$. The metric of AdS in Rindler coordinates is actually just the BTZ metric (26), with $r_- = 0$ and $r_+ = L$ [28]. The maximal foliation of interest is defined by constant Rindler time slices of the boundary, and the final slice of this foliation therefore meets the boundary at the Rindler horizon. Fig.
12 displays a plot of this slice in global coordinates; it lies at $r_f = r_+/\sqrt{2}$, as seen in the previous subsection (V B). The results of that section also show that the rate of complexification at late times, measured with respect to the conformal boost time, is $\propto TS$, where $T = 1/2\pi$ is the Unruh temperature, and $S$ is the (infinite) Bekenstein-Hawking entropy of the Rindler horizon. The dual quantities in the CFT are the conformal boost temperature, and the entanglement entropy of the semicircle, as implied by the Ryu-Takayanagi formula. The state on the semicircles is also conformally equivalent to a thermal state on two-dimensional Minkowski space. In higher dimensions it would be conformally equivalent to a thermal state on a static hyperbolic space [43]. [Caption of Fig. 12: the slice is shown in global coordinates, in which the metric reads $ds^2 = L^2(-\cosh^2\rho\,d\tau^2 + d\rho^2 + \sinh^2\rho\,d\phi^2)$, except that we have compactified the radial coordinate by applying the arctan function (in other words, the boundary of the cylinder is at radius $\pi/2$). The line running across the cylinder is the bifurcation line of the Rindler horizon.] VI. CONCLUSION In this paper, we proposed that the apparent lack of universality in the CV duality for large and small black holes is removed if one identifies the complexity with the volume measured in units of the maximal (free-fall) time $\tau_f$ from the horizon to the final slice times the Planck area. The distance $c\tau_f$ is $\sim$ the AdS radius for spherical black holes large compared to the AdS radius, and it is $\sim$ the horizon radius for small black holes, thus accounting in both cases for the divisor that had been previously introduced by hand in order for the complexification rate to match the temperature-times-entropy expectation. We also checked that this prescription matches $TS$ for the rotating BTZ black hole, and the Kerr black hole in four dimensions, for all spin parameters. While this does seem an improvement over the previous ad hoc assignment, it should be admitted that we have no reason from first principles for thinking the time $\tau_f$ should be relevant, other than that it can be related to the surface gravity and redshift factor at the final slice, as explained in the Introduction and in Sec. III D. Moreover, $\tau_f$ is of course only defined when a horizon and final slice are present, so is of no use otherwise. In this respect, CA duality appears much more universal. However, it is not so clear whether the notion of complexity and its growth should be expected to have a universal meaning, outside of thermal states, because then the dependence on the arbitrary choice of reference state and gates with which to define the complexity may be more severe. We proposed that to capture complexity at the thermal scale one should count only the volume inside the horizon, and introduced the "volume current," orthogonal to a foliation of spacetime by maximal slices. This current is a divergence-free vector field, whose flux through the slices of the foliation measures their volume. This flux picture suggests that there is a transfer of the complexity from the UV to the IR in holographic CFTs, which is reminiscent of thermalization behavior deduced using holography. It also naturally gives a second law for the complexity when applied at a black hole horizon.
We further showed how the volume current is a useful tool for establishing various properties of the volumes of a maximal foliation, established a global inequality on maximal volumes that can be used to deduce the monotonicity of the complexification rate on a boost-invariant background, and probed CV duality in the settings of multiple quenches, spinning black holes, and Rindler-AdS. Finally, we established the existence of a maximal foliation without gaps (on which the existence of the volume current depends) provided that there exists a maximal slice anchored at each boundary slice, and assuming a causality condition, the strong energy condition, and the Einstein equation. Taken together, these results demonstrate the mathematical and physical utility of the notion of volume current associated to a maximal foliation. In the setting of CV duality it is tempting to think of the current as a "gate current" [15]. Perhaps this could be given a more concrete meaning in the context of tensor network models of bulk spacetime. We advertised in Section III that a foliation of the boundary of AdS induces a foliation by globally maximal volume slices in the bulk (assuming there exists such a slice terminating on each boundary slice). To establish this, we first show that, if two boundary slices do not intersect, then the corresponding bulk maximal slices 10 do not intersect. Next we argue that the (nonintersecting) bulk slices fail to be a foliation only if there are two distinct maximal slices anchored at the same boundary slice, and we prove, assuming the strong energy condition and the Einstein equation, that this cannot happen. In order to deal with finite volumes, we take the boundary to lie at a finite cutoff surface, which can be taken to infinity at the end. VII. ACKNOWLEDGMENT The argument works by contradiction. Suppose the maximal slice anchored at the upper boundary slice dips down sufficiently low in the bulk that it intersects the maximal slice anchored at the lower boundary slice (see Figure 13) and (bc) would also have to be maximal slices. But they cannot be maximal, since they have corners, and by rounding off the corners their volume can be increased. If the slices are tangent, rather than intersecting transversally, this "rounding the corners" argument is not applicable, but by moving the boundary slices slightly closer together, one would expect that the tangency generically becomes a transversal intersection, which would be ruled out by the argument already given. 11 Although not quite a rigorous argument, this seems adequate for our present purposes. Now if the bulk maximal slices do not intersect, then the boundary foliation will induce a bulk foliation unless there are gaps where the family of maximal volume slices jumps discontinuously across some spacetime region. Since the metric is assumed continuous, however, the maximal volume function itself cannot jump discontinuously as the boundary slice is pushed toward the future. Hence, if a gap does occur there must be two maximal slices with the same volume, anchored at the same boundary slice. We now argue that this cannot happen, given a causality assumption, the Einstein equation, and the strong energy condition. In fact, the argument will establish a stronger result: there cannot be two extremal bulk slices with the same boundary. Suppose there are two such slices, Σ 1 and Σ 2 , with Σ 2 to the future of Σ 1 , with the same, co-dimension-2 boundary, and both with TrK = 0. 
While the domains of dependence $D_1$ and $D_2$ of $\Sigma_1$ and $\Sigma_2$ are each automatically globally hyperbolic, we need to assume that $\Sigma_2 \subset D_1$ and $\Sigma_1 \subset D_2$, which amounts to assuming that the domains of dependence coincide, $D_1 = D_2$. This condition "obviously" holds for "normal" causal structures. Under this causality assumption, we can invoke Theorems 9.4.3 and 9.4.5 and Lemma 8.3.8 of Ref. [29] to infer that every point $p$ on $\Sigma_2$ lies on a geodesic that maximizes the proper time from $p$ to $\Sigma_1$, meets $\Sigma_1$ orthogonally, and has no conjugate points between $p$ and $\Sigma_1$. [Footnote 11: This argument is essentially an adaptation to Lorentzian signature of a similar argument presented in [44,45] for Euclidean signature in the context of holographic entanglement entropy, establishing the property of "entanglement wedge nesting" on a static slice.] The congruence of these geodesics maps (possibly a subset of) $\Sigma_1$ onto all of $\Sigma_2$. The expansion $\theta$ of the congruence at $\Sigma_1$ is equal to $\mathrm{Tr}\,K$, which vanishes by assumption. The Raychaudhuri equation together with the timelike convergence condition or, assuming the Einstein equation, the strong energy condition, then implies that $\theta$ is decreasing everywhere along the congruence.¹² Moreover, $\theta$ cannot go through $-\infty$ before reaching $\Sigma_2$ since, as stated above, the time-maximizing curve has no conjugate points between $p$ and $\Sigma_1$. It follows that $\theta$ is negative everywhere, which implies that the geodesic flow is volume-decreasing. That is, the volume of a small ball carried along by the flow will decrease, as measured in the local rest frame of the flow. Furthermore, since the geodesics do not generally meet $\Sigma_2$ orthogonally, the volume of a small patch of $\Sigma_2$ on which the flow lands will be less than the volume of the small ball carried by the flow. It follows that the volume of $\Sigma_2$ is less than the volume of the pre-image of $\Sigma_2$ in $\Sigma_1$ under this flow, and a fortiori the volume of $\Sigma_2$ is less than that of $\Sigma_1$. Similarly, we can argue the opposite, and thus we reach a contradiction, since the volume of $\Sigma_2$ cannot be both less than and greater than that of $\Sigma_1$. The initial assumption is therefore false: there cannot be two extremal slices with the same boundary. Together with the previous results, this implies that a boundary foliation determines a maximal bulk foliation without gaps. Note that the latter need not completely cover the bulk, however. For example, as discussed in the text, the maximal slices for a two-sided black hole do not extend beyond a final slice, located inside the event horizon. We established the uniqueness property of extremal slices with a given boundary using a global argument in which the existence of time-maximizing curves without conjugate points played a key role. However, it was briefly mentioned by Witten, in a conference talk [46], that uniqueness can be proved in a different fashion, namely, by (i) showing that the volume of any extremal slice is a local maximum with respect to small deformations, and (ii) arguing that if there were two local maxima, there would necessarily also be a saddle point of the volume, contradicting the fact that all extremal slices are local maxima of the volume. The reasoning for point (i) is simple and local: the expansion of the congruence of timelike geodesics orthogonal to any extremal slice starts out zero at the slice, and the strong energy condition (together with the Einstein equation) implies that it is negative and decreasing off the slice.
The transversal spatial volume therefore decreases along the congruence, and non-orthogonality of the congruence to the deformed slice implies that the latter has even smaller volume, so the extremal slice is a local maximum of volume. The reasoning for (ii), the existence of the saddle point, was not as explicit in the talk, but it was pointed out in a picture that there would necessarily be a local minimum along some one-parameter family of slices joining them. This is of course a necessary condition for the existence of a saddle point, but it is not clear to us that a saddle point is guaranteed to exist. We end this appendix with an example where the strong energy condition does not hold and, consequently, the uniqueness argument above does not apply. [Footnote 12: Strictly speaking, we need here to assume the generic condition, that $R_{ab}u^au^b \neq 0$ somewhere along each geodesic, where $u^a$ is the geodesic tangent. In the case with a (negative) cosmological constant, this is automatic.] In this appendix, we demonstrate the use of the flux picture of complexification as a technical tool for explicit computation. The techniques presented here complement existing studies in the literature such as [9], where the maximal volume was computed by maximizing the volume functional directly. In subsection (B 1), we evaluate the volume flux for the BTZ black hole. In subsection (B 2), we present a variation of this technique when the cutoff is null, in which case the flux density is given by the lapse function. Direct evaluation of flux In this appendix, we present the derivation of the volume current and the volume flux for the BTZ black hole. The boundary foliation is the symmetrical one $t_L = t_R$. Let us start with the BTZ black hole. We work in $(r, v)$ coordinates, which are regular across the horizon: $ds^2 = -f(r)\,dv^2 + 2\,dv\,dr + r^2 d\phi^2$, with $f(r) = (r^2 - r_+^2)/L^2$. The function $v(r)$ describing the shape of the maximal slices was essentially worked out in [9]. Its derivative $dv/dr$ is given by relation (B3), in which $C$ is a positive constant.¹³ The constant $C$ labels the particular maximal slice in the foliation, and it ranges from 0 (for the slice anchored at $t_L = t_R = 0$) to $r_+^2/(2L)$ (for the final slice).¹⁴ The unit normal 1-form to the slice labeled by $C$ follows from this relation. [Footnote 13: $C$ is the negative of the "energy" $E$ in [9]. From the viewpoint of that paper, the constant $C$ arises as a "conserved quantity" associated with the $v$-independence of the volume functional.] [Footnote 14: To see this, note that $dv/dr = 1/f$ on the slice $t_L = t_R = 0$ (since this slice is at $t = 0$). As for the final slice, symmetry dictates that it is a slice of constant $r$, and $dv/dr$ diverges. Both of these facts are verified by plugging in $C = 0$ and $C = r_+^2/(2L)$ respectively.] To get the volume current $v^\mu$, we glue together the unit normals to all the slices labeled by different values of $C$. This amounts to promoting $C$ to the function of $v$ and $r$ implicitly given by integrating (B3) from the midpoint (also called the "throat" in the numerical relativity literature) outward. The first term on the right-hand side of the resulting relation is the tortoise coordinate of the throat, and $r_C$ is the radius of the throat, determined by a second relation. The two equations together define $C(r, v)$. We then find the volume current $v^\mu$; it can be checked that both components of $v^\mu$ are regular at the horizon. The volume element is $\epsilon = r\,dv \wedge dr \wedge d\phi$. Computing the interior product $v \cdot \epsilon$ and restricting to the cutoff at constant $r = r_c$ yields the flux density. Note that $r_c$ is allowed to be the horizon since our formalism can handle null surfaces.
Evaluating the flux, we then find the change in the volume between v 1 and v 2 to be: In the usual near-boundary cutoff r c → ∞, the v coordinates becomes the boundary time coordinate t and the function C(r c , v) is nothing but the flux density, or the complexification rate. The main lesson from this computation is that the flux density coincides with a certain time function C for the maximal slicing. We also note that it is possible to work with Schwarzschild coordinates (r, t) instead of (r, v), despite the coordinate singularity at the horizon. In Schwarzschild coordinates, the shape of the maximal slice reads: To integrate across the horizon, we should understand the integral above in the sense of the Cauchy principal value [47]. In fact, the plot (3) of the volume flow was generated by working in Schwarzschild coordinates and using the Cauchy principal value to continue the maximal slice across the horizon. Finally, it might appear surprising that the flux of the volume flow yields a finite answer when the cutoff is taken to the boundary, especially if we think about the flow direction near the boundary. The maximal slice should become tangential to the constant Killing time slices near the boundary, and since the volume current is orthogonal to the maximal slices, it may seem that the volume flux across a constant-r cutoff is zero as r → ∞. That this is not the case can be understood as follows: at a finite but large cutoff in the bulk, the flow direction has a small component not orthogonal to the constant Killing time slice. As the cutoff is sent to the boundary, this small component tends to zero, but the volume element on the cutoff also diverges in the same time. This divergence cancels the vanishing of the subleading component in a way to yield a finite answer. Flux across a null surface In this appendix, we elaborate on the particular case of the volume flux across a null surface, and relate this flux to the lapse function for a time function τ defining the maximal foliation. Recall that to a time function τ we can associate a lapse function N defined by: where n is the unit normal 1-form to a constant τ slice, with sign chosen so that N > 0. Since the volume current vector v is the unit normal vector to the maximal slices, we have v · n = 1. Now let k be the null normal 1-form to the horizon, normalized so that v · k = 1, and let A be the area form of the intersection of the maximal slice with the horizon, so that = k ∧ n ∧ A. The volume current is then v · = n ∧ A − k ∧ A, whose pullback to the null surface is n ∧ A = N dτ ∧ A, since the pullback of k vanishes. The 2-form N A thus serves as the volume flux density. This fact can be seen more geometrically as depicted in Figure (14). We pick two slices in the foliation labelled by τ and τ + ∆τ , with ∆τ small. Let A be the intersection between slice τ and the horizon, and let C be the intersection between slice τ + ∆τ and the horizon. Moreover, consider a volume flow worldline passing through A, and let B be the intersection of that worldline with the slice τ + ∆τ . Since ∆τ is infinitesimal, the geometry of the triangle ABC is like in flat space. Since AB is orthogonal to BC, we conclude that AB = BC, where AB denotes the proper time elapsed along the worldline between A and B, and BC denotes the proper length of the segment of the slice τ + ∆τ between B and C. Now consider the increment in the volume ∆Vol between τ to τ + ∆τ . 
We have ∆Vol = 4πr 2 + BC = 4πr 2 + AB, where we work in 3+1 dimensions for concreteness, and in the second equality we used the relation derived in the previous paragraph. (There is also an identical contribution from the left-side of the Penrose diagram, which we ignored.) On the other hand, we have AB = N (A)∆τ . Thus, we can relate the volume increment to the lapse as follows: For a finite time difference, we integrate the lapse: This is the formula we are after: the volume flux across the horizon is also the integral of the lapse along the horizon. In other words, the lapse on the horizon serves as the volume flux density. relativity literature (see for example [47]). In writing (C3) we have imposed the boundary condition αγ → 1 as r → ∞, so that τ will agree asymptotically with the standard AdS global time coordinate. The function C(τ ) is determined by regularity of the maximal slice at the "middle". According to (C1), the metric induced on the maximal slice is γ 2 dr 2 + r 2 dΩ 2 2 , so in particular γdr is a unit 1-form on the slice. Since r reaches a minimum at the middle of each slice, the pullback of dr to the slice vanishes at the middle, so γ must diverge there. This implies where r m = r m (τ ) is the r coordinate at the middle of each constant τ slice. In the late time limit, r m (τ ) approaches a constant, namely r f , the radial coordinate of the "final slice". Therefore C(τ ) too approaches a constant. It follows that the metric functions α, β, and γ all become constant in the late τ limit, which implies that the coordinate vector field ∂ τ approaches the Schwarzschild time Killing field. The unit normal 1-form αdτ therefore becomes invariant under the Killing flow, as does the volume current which is minus the contravariant form of αdτ . where we write the slice as a function r(v). Since the functional is independent of v, we have a conserved energy: Similarly, there is a conserved energy in the AdS region, which can be shown to vanish by smoothness at the center (r = 0). The maximal slices consist of locally maximal slices apart from on the shells, where they satisfy a Weierstrass-Erdmann corner condition [48]. Since the shells are located at a constant value of v, the corner condition simplifies to the requirement that the "conjugate momentum" be continuous across the junction, which amounts to requiring that the jump in r is 1/2 the jump in f . Together with the fact that the portion of the maximal slice in the AdS region must be constant global time slice, this determines the derivative of r(v) at the junction on the BTZ side: where r 1 is the r-coordinate of this junction, and r 1,+ means an r value slightly larger. Similarly, for t b > b, the maximal slice crosses both shells and the junction condition has to be imposed at each junction. At the outer junction, located at r = r 2 , the discontinuity of the derivative of r(v) is found to be:
18,963
2018-07-05T00:00:00.000
[ "Physics" ]
Curves between Lipschitz and $C^1$ and their relation to geometric knot theory In this article we investigate regular curves whose derivatives have vanishing mean oscillation. We show that smoothing these curves using a standard mollifier yields regular curves again. We apply this result to solve a couple of open problems. We show that curves with finite M\"obius energy can be approximated by smooth curves in the energy space $W^{\frac 32,2}$ such that the energy converges, which answers a question of He. Furthermore, we extend the result of Scholtes on the $\Gamma$-convergence of the discrete M\"obius energies towards the M\"obius energy and prove conjectures of Ishizeki and Nagasawa on certain parts of a decomposition of the M\"obius energy. Finally, we extend a theorem of Wu on inscribed polygons to curves with derivatives of vanishing mean oscillation. Introduction Approximating functions by functions with better regularity properties was, is, and will certainly remain one of the most important techniques in analysis. In this short note we want to contribute to this topic. We consider regular closed curves with regularity somewhere between $C^1$ and mere Lipschitz continuity. One ends up looking at such curves if one assumes that the curve is parametrized by arc-length and lies in some critical fractional Sobolev space $W^{1+s,\frac{1}{s}}$, $s \in (0, 1)$, which is known not to embed into $C^1$. But still the fact that the curve is of class $W^{1+s,\frac{1}{s}}$ gives us some subtle new information on the derivative that we will use in this article. For example, the derivative of the curve $\gamma : \mathbb{R}/\mathbb{Z} \to \mathbb{R}^n$ then belongs to the space $VMO(\mathbb{R}/\mathbb{Z}, \mathbb{R}^n)$ of all functions with vanishing mean oscillation (the defining condition is recalled below). Here, $\gamma'_{B_r(x)}$ denotes the integral mean of the function $\gamma'$ over the ball $B_r(x)$. Let $\eta \in C^\infty(\mathbb{R}, [0,\infty))$ be such that $\eta \equiv 0$ on $\mathbb{R} \setminus (-1, 1)$ and $\int_{\mathbb{R}} \eta(x)\,dx = 1$. For $\varepsilon > 0$ we consider the smoothing kernels $\eta_\varepsilon(x) = \varepsilon^{-1}\eta(x/\varepsilon)$. Though for merely regular curves $\gamma \in C^{0,1}(\mathbb{R}/\mathbb{Z}, \mathbb{R}^n)$ we cannot expect that the smoothened functions $\gamma_\varepsilon$ are regular curves, the situation changes drastically if we assume that $\gamma'$ has vanishing mean oscillation. We will start with proving the following surprising theorem: Theorem 1.1. Let $\gamma \in C^{0,1}(\mathbb{R}/\mathbb{Z}, \mathbb{R}^n)$ be a curve parametrized by arc-length with $\gamma' \in VMO(\mathbb{R}/\mathbb{Z}, \mathbb{R}^n)$. For $\varepsilon > 0$ we consider the smoothened functions $\gamma_\varepsilon = \gamma * \eta_\varepsilon$. Then the absolute value of the derivative $|\gamma'_\varepsilon|$ converges uniformly to $|\gamma'| = 1$. In particular, the curves $\gamma_\varepsilon$ are regular if $\varepsilon$ is small enough. Sometimes one might need that also the approximating curves are parametrized by arc-length and have the same length as the original curve. In this case the following theorem can help. For this purpose, denote the length of a curve $\gamma$ by $L(\gamma)$. Though the proof of Theorem 1.1 is extraordinarily elementary and short, it is the impression of the author that this result and the techniques that lead to it are unknown to the community. In the last section, we will show how to apply the techniques of this article in order to answer some open questions in the literature and settle some conjectures in the context of knot energies. All the statements of the theorems are known for curves that possess more regularity than we can naturally assume. The approximation techniques above allow us to extend these statements to curves of bounded Möbius energy, which is the most natural assumption for these theorems. For regular curves $\gamma \in C^{0,1}(\mathbb{R}/\mathbb{Z}, \mathbb{R}^n)$, let $E_{\text{möb}}(\gamma)$ denote O'Hara's Möbius energy (a standard form of its defining integral is recalled below); it was the first geometric implementation of the concept of knot energy.
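For the reader's convenience, standard formulations of the two objects referred to above, the vanishing mean oscillation condition for $\gamma'$ and O'Hara's Möbius energy, are the following (recalled from the standard literature as background; the normalizations used by the author may differ):

```latex
\lim_{r \to 0}\; \sup_{x \in \mathbb{R}/\mathbb{Z}} \; \frac{1}{2r} \int_{B_r(x)} \left| \gamma'(y) - \gamma'_{B_r(x)} \right| dy \;=\; 0,
\qquad
E_{\text{m\"ob}}(\gamma) \;=\; \int_{\mathbb{R}/\mathbb{Z}} \int_{\mathbb{R}/\mathbb{Z}}
\left( \frac{1}{|\gamma(x)-\gamma(y)|^{2}} - \frac{1}{d_{\gamma}(x,y)^{2}} \right)
|\gamma'(x)|\,|\gamma'(y)|\; dx\, dy,
```

where $d_\gamma(x,y)$ denotes the intrinsic distance between $\gamma(x)$ and $\gamma(y)$ along the curve.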
In the influential paper [FHW94], Freedman, He, and Wang discussed many interesting properties of this energy including its invariance under Möbius transformations. In his article [He00], Zheng-Xu He asked whether any regular curve of bounded Möbius energy can be approximated by smooth curves such that the energy converges. We will use the above approximation result together with the characterization of curves of finite Möbius energy in [Bla12] to give the following answer: Theorem 1.3. Let γ ∈ C 0,1 (R/Z, R n ) be a curve parametrized by arc-length such that O'Hara's Möbius energy E möb (γ) is bounded. Then there is a constant ε 0 > 0 such that γ ε are smooth regular curves for all 0 < ε < ε 0 converging to γ in the fractional Sobolev space W 3 2 ,2 and in energy, i.e. E 2 (γ ε ) → E 2 (γ) for ε → 0. We hope that the list of applications, although far from being complete, convinces the reader that the results and techniques developed in this article are of great importance for the analysis of critical knot energies for curves. Approximation by as r → 0 since γ ′ has vanishing mean oscillation. We calculate using the triangle inequality and the estimate above where we used the definition of V MO and (2.1). Hence, |γ ′ ε | → |γ ′ | = 1 uniformly and especially γ is a regular curve for ε > 0 small enough. This completes the proof of Theorem 1.1. Proof of Theorem 1.2. Let us now consider the curvesγ ε which apparently converge to γ uniformly and hence especially in L 2 . We now show that the derivatives of these curves satisfy denotes the Douglas functional also known as Gagliardo semi-norm. We therefore consider the integrand )| 2 |x − y| 2 which converges pointwise almost everywhere to 0 and can be estimate from above by Let us now consider the bi-Lipschitz transformation . Applications We want to present several applications of Theorem 1.1. We will start with analyzing the convergence of the Möbius energy and the parts of its decomposition found by Ishikezi and Nagasawa if the original curve has bounded Möbius energy. Unfortunately, the smoothened curves γ ε in general do not converge in W 1,∞ -so we cannot apply the fact that the Möbius energy is C 1 in W 3 2 ,2 ∩ W 1,∞ [BRS15, Theorem II]. We will show how to use the convergence of |γ ′ ε | from Theorem 1.1 together with bi-Lipschitz estimates in order to prove convergence in energy. Fractional Sobolev Spaces. For the rest of the article we need the classification of curves of finite energy For an thorough discussion of the subject of fractional Sobolev space we point the reader to the monograph of Triebel [Tri83]. Chapter 7 of [AF03] and the very nicely written and easy accessible introduction to the subject [DNPV12]. The following result is a special case of Theorem 1.1 in [Bla12]: Theorem 3.1 (Classification of curves with finite Möbius energy). Let γ ∈ C 0,1 (R/Z, R n ) be a curve parametrized by arc-length. Then the Möbius energy E möb (γ) is finite if and only if γ is bi-Lipschitz and belongs to W 3 2 ,2 (R/Z, R n ). In the following, we will use the well-known fact, that f ∈ W s,p (R/Z, R n ), s ∈ (0, 1), p = 1 s , implies f ∈ V MO. This follows for example from the line of inequalities 1 2r Applying this to f = γ ′ , in view of Theorem 3.1 the velocity of a curve parametrized by arc-length of finite Möbius energy belongs to V MO. Hence, we can apply Theorem 1.1. Convergence of Some Critical Knot Energies. 3.2.1. The Möbius Energy. As a first application, we want to answer a question due to He [He00][Question 8 in Section 7]. 
Zhen-Xu He asked, whether a curve of bounded Möbius energy can be approximated by smooth curves such that the energies of these curves converge to the energy of the initial curve. Then following lemma shows that this is indeed the case and that one can just use the mollified curves γ ε . This lemma together with Theorem 1.1 obviously proves Theorem 1.3. Proof. We use Vitali's convergence theorem to prove this lemma. Setting As |γ ′ ε | converges pointwise to |γ ′ | by Theorem 1.1 and γ ε converges to γ pointwise, the integrand I γ ε (x, w) also converges to I γ (x, w) pointwise. Let us show that the integrands are uniformly integrable. For this purpose we only have to consider points close to the diagonal, i.e. we will only integrate over x, y ∈ R/Z with |x − y| ≤ 1 4 , since on the rest of the domain the bi-Lipschitz estimate gives us a uniform bound on the integrand. We calculate Using the definition of the convolution, we can estimate this by Clearly for all γ ∈ C ∞ the above integral converges to 0 for ε → 0, as we can use Taylor's approximation twice to estimate it further by For γ ∈ W 3 2 ,2 and δ > 0, we can findγ with Hence, for all δ > 0. We conclude thatĨ This shows that the family of functions I γ ε is uniformly integrable. Hence, Vitali's As in the proof of Lemma 3.2, we can show Lemma 3.3. Let γ ∈ C 0,1 (R/Z, R n ) be a curve of bounded Möbius energy. Then Proof. It is enough to show the convergence for E 1 , as the statement for E 2 follows from the decomposition E möb = E 1 + E 2 + 4 by Ishizeki and Nagasawa [IN15]. As γ has bounded Möbius energy, we know that γ ′ ∈ V MO. Theorem 1.1 shows that the integrand in the definition of E 1 converges pointwise. From the bi-Lipschitz estimate we furthermore get We have shown in the proof of Lemma 3.2 that the right-hand side in uniformly integrableand thus the integrands in the definition of E 1 are uniformly integrable and Vitali's theorem implies the assertion. Proof of a Conjecture of Ishizeki and Nagasawa. In [IN15], Ishizeki and Nagasawa proved that for all curves γ in C 1,1 we have E 1 (γ) ≥ 2π 2 and conjectured that the same is also true under the weaker but more natural condition γ ∈ W 3 2 ,2 . Using the techniques we developed so far, we can now prove this conjecture quite easily. In the same paper, Ishizeki and Nagasawa also showed the Möbius invariance of the energies E 1 and E 2 for curves of bounded Möbius energy except for one important case: the case of an inversion on a sphere centered on the curve. For applications this seems to be one of the most important cases. We can now prove also this last case -and thus obtain full Möbius invariance of the energies E 1 , E 2 for curves of bounded Möbius energy. Theorem 3.5. Let γ ∈ C 0,1 (R/Z) be a regular curve with bounded Möbius energy and I be an inversion on a sphere centered on γ. Then Proof. We will show how to deduce this theorem form the Möbius invariance for smooth curves and the invariance of the Möbius energy. We only have to show the statement for E 1 as due to a theorem of Ishizeki and Nagasawa the sum is known to be invariant under all Möbius transformations [IN14]. The proof now relies on the following Claim 3.6. We have lim denotes the Gagliardo semi-norm on R. Let us prove the statement for E 1 in our theorem using this claim. 
On the one hand, Lemma 3.3 and the Möbius invariance for smooth curves imply (3.3). On the other hand, we use the relevant estimate and follow the argument in the proof of Lemma 3.2 to see that the integrands in the definition of the energies $E_1(\gamma_\varepsilon)$ satisfy the assumptions of Vitali's theorem; this gives (3.4). But (3.3) and (3.4) imply the claimed invariance. Proof of Claim 3.6. We will show that the integrands appearing in the definition of the Gagliardo semi-norm converge pointwise to 0 and are uniformly integrable. Then the claim follows from Vitali's theorem. These integrands can be written down explicitly. As $\tilde\gamma_\varepsilon$ converges pointwise to $\tilde\gamma$, these integrands converge pointwise to 0 for all $x \neq y$. Let us now first deal with the point $\infty$ and show that for every $\delta > 0$ there is an $R > 0$ such that the energy outside $B_R(0)$ is small, cf. (3.5). For $\delta > 0$ we now choose $R > 0$ accordingly. Then $E_{B_R(0)}(\gamma_\varepsilon) \geq E_{\text{möb}}(\gamma) - 2\delta$ for $\varepsilon > 0$ small enough, since otherwise the lower semi-continuity of the Möbius energy would imply (3.6). In view of (3.6) we even obtain the corresponding bound for all $\varepsilon > 0$ sufficiently small, and hence (3.7). So the energy does not concentrate at the point infinity. Let us convert this into a statement for the Gagliardo semi-norm. We will now deduce the corresponding decay of the semi-norm, again using Vitali's theorem. As noted before, we know that the integrand converges pointwise almost everywhere to 0. To show uniform integrability of the integrands we use the estimate from the proof of Lemma 3.2; of course, one only has to show that the first summand is uniformly integrable. Let $s(x)$ be the re-parametrization that satisfies $\tilde\gamma_\varepsilon = I \circ \gamma_\varepsilon \circ s$, let $\delta > 0$ be such that $(-R, R) \subset (I \circ \gamma_\varepsilon)((\mathbb{R}/\mathbb{Z}) \setminus B_{\delta}(0))$, and set $\psi(x, y) := (s(x), s(y))$. As in the proof of Theorem 1.2 we get the corresponding change-of-variables estimate. Since the integrands $\frac{|\gamma'_\varepsilon(x) - \gamma'_\varepsilon(y)|^2}{|x - y|^2}$ are uniformly integrable, for every $\varepsilon_0 > 0$ there is a $\delta > 0$ such that the integral over any set of measure at most $\delta$ is small. But as $\psi^{-1}$ is a Lipschitz mapping, we get that there is a $\tilde\delta > 0$ such that $|E| \leq \tilde\delta$ implies $|\psi^{-1}(E)| \leq \delta$, and hence the uniform integrability carries over. Hence, Vitali's theorem implies the desired convergence. Let us now conclude the proof of the claim. For $\delta > 0$ we first use (3.5) to get an $R > 0$ such that the corresponding bound holds for all $\varepsilon > 0$ small enough. Then (3.9) implies the bound on the lim sup, and thus the limit, which proves the claim. With the help of Theorem 3.5, we can now also discuss the case of equality in Theorem 3.4 to get the following extension of Corollary 4.1 in [IN15]. We omit the proof of Theorem 3.7 as it is literally the same as the proof of Corollary 4.1 in [IN15], where one only uses Theorem 3.5 instead of Theorem 1.2 in [IN15]. 3.4. Γ-convergence of the Discrete Möbius Energies of Scholtes. Let us extend the Γ-convergence result of Scholtes in [Sch14]. Scholtes introduced the discretized Möbius energy of a polygon $p : \mathbb{R}/\mathbb{Z} \to \mathbb{R}^n$ with vertices $p(a_i)$, $a_i \in [0, 1)$, $i = 1, \ldots, m$. Scholtes proved this theorem (the Γ-convergence statement) for curves which are in $C^1$, which again is not implied by bounded Möbius energy. Proof. Since the lim inf inequality was already shown by Scholtes, we only have to prove the lim sup inequality. Scholtes has already shown that the lim sup inequality holds for $C^1$ curves. If now $\gamma$ is a regular curve with bounded Möbius energy, we can consider the smoothened curves $\gamma_\varepsilon = \gamma * \eta_\varepsilon$. By Lemma 3. Inscribing Equilateral polygons. With the tools we have at hand, we can also extend a result of Wu [Wu04] on inscribed regular polygons in the following way. Theorem 3.9. Let $\gamma \in C^{0,1}(\mathbb{R}/\mathbb{Z}, \mathbb{R}^d)$ be a regular curve with $\gamma' \in VMO$. Then for every $n \in \mathbb{N}$, $n \geq 2$, and any $x_0 \in \mathbb{R}/\mathbb{Z}$ there is an inscribed $n$-gon starting with the point $\gamma(x_0)$.
The proof is based on the fact that there is a lower bound $c_n > 0$ for the Gromov distortion of equilateral $n$-gons, as the infimum of the Gromov distortion is attained by an equilateral $n$-gon and thus cannot be 0. Proof. Let $\gamma_\varepsilon = \eta_\varepsilon * \gamma$ be the standard mollified curves and $p_k$ the inscribed, equilateral $n$-gon with starting point $\gamma_{1/k}(x_0)$, for $k$ so large that $\gamma_{1/k}$ is a regular curve. We first note that $\inf_{k\in\mathbb{N}} \operatorname{diam} p_k = 0$ would imply
3,833.2
2016-02-29T00:00:00.000
[ "Mathematics" ]
Power Flow Adjustment for Smart Microgrid Based on Edge Computing and Deep Reinforcement Learning As one of the core components that improves generation, transmission, delivery, and consumption of electricity in terms of protection and reliability, the smart grid can provide full visibility and universal control of power assets and services, offer resilience to system anomalies, and enable new ways to supply and trade resources in a coordinated manner. In current power grids, a large number of supply and demand components and of sensing and control devices generate many requirements, e.g., data perception, information transmission, business processing, and real-time control, while the existing centralized cloud computing paradigm struggles to address challenges such as rapid response and local autonomy. In particular, power flow control in microgrids is one of the key challenges in the smart grid: the grid contains many diverse, adjustable supply and demand components, so the optimization problem is large and difficult, whereas traditional manual, centralized methods depend heavily on expert experience and require substantial manpower. Furthermore, the application of edge intelligence to power flow adjustment in the smart grid is still in its infancy. To meet this challenge, we propose a power control framework combining edge computing and machine learning, which makes full use of edge nodes to sense the network state and control power, so as to achieve fast response and local autonomy. Furthermore, targeting the problem that power flow calculation often does not converge, we design and implement the state, action, and reward of a deep reinforcement learning agent to make intelligent control decisions. The simulation results demonstrate the effectiveness of our method, with successful dynamic power flow calculation and stable operation under various power conditions. Introduction With the continuous development of power grid construction, a large number of terminal devices will be connected to the power grid, generating a large volume of heterogeneous traffic. The demand for data analysis and processing places higher requirements on the data processing and business operation capabilities of the power system. In the face of demand response and the real-time interaction of power flow, information flow, and control flow in the power system, the previous fixed, redundant allocation of computing resources has a series of shortcomings in scalability, utilization efficiency, and deployment cost, and the centralized cloud computing model concentrates the pressure of acquisition, computation, and transmission. Edge computing can realize real-time and efficient perception and response, reduce the load on central resources, and support regional autonomy, which effectively matches the trend of the intelligent power grid. As an important research problem in the smart grid, power flow calculation determines the steady-state parameters of the power system from the given grid structure and operating parameters such as the power supply; it can be used to analyze the impact of supply and demand changes on the safe operation of the whole system, and it supports grid reconfiguration, fault processing, reactive power optimization, and loss-reduction analysis.
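As background for the power flow problem discussed here, the steady state of an AC network with $N$ buses is governed by the standard bus power balance equations (recalled from textbook material rather than reproduced from this paper):

```latex
P_i = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\cos\theta_{ij} + B_{ij}\sin\theta_{ij} \right),
\qquad
Q_i = V_i \sum_{j=1}^{N} V_j \left( G_{ij}\sin\theta_{ij} - B_{ij}\cos\theta_{ij} \right),
```

where $V_i$, $\theta_i$ are the bus voltage magnitudes and angles, $\theta_{ij} = \theta_i - \theta_j$, and $Y = G + jB$ is the bus admittance matrix. Failure of the Newton-Raphson iteration for this nonlinear system to converge is exactly the situation that the adjustment methods discussed below are meant to correct.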
Power flow calculation, however, often runs into convergence problems. The previous solution has been manual adjustment on top of the power flow calculation program; on the one hand, the many adjustable parameters in the system make such adjustment very inefficient, and on the other hand, the adjustment process relies heavily on expert experience and consumes a large amount of human effort. Further, with the rapid development of adjustable supply and demand equipment in the smart grid and the variety of adjustable supply and demand components, power flow control involving renewable energy and microgrids faces more constraints and uncertainties. An automatic adjustment and control framework with a convergent algorithm is therefore needed to support precise control of the smart grid. Combining edge computing and artificial intelligence technologies, some studies have made preliminary attempts to apply edge intelligence to the smart grid. In terms of edge computing, some works advocate enabling the smart grid with edge computing to overcome the bandwidth and latency defects of cloud computing, and they provide a solid application basis and design ideas. In terms of artificial intelligence, some studies focus on applying feature engineering or expert systems to manage power flow; however, some of them fall short in scalability and performance, and carrying out power flow calculation more effectively with edge intelligence is still in its infancy. In this paper, a power flow calculation adjustment framework based on deep reinforcement learning and edge computing is proposed for power flow control problems in microgrids. Firstly, the typical business scenario requirements of power balance in the microgrid are analyzed. Then, a microgrid power flow control framework is designed by combining edge computing with the grid system. Next, the state space, action space, and reward scheme of the power flow control task are defined on the basis of deep reinforcement learning. Finally, the proposed framework is evaluated by simulation on the IEEE 30-bus system using Pandapower, which verifies the feasibility of the framework for power flow control in microgrids and provides a reference for edge intelligence in power systems. The main contributions of the paper are summarized as follows: 1) We present an edge-computing-based comprehensive framework for smart grid management and control, which enables the data sensing, processing, and control of the smart grid to realize real-time response and local autonomy. 2) A learning-based algorithm is presented for power flow calculation, taking the grid's requirements and current state into account. Deep reinforcement learning is applied to improve the practicability and efficiency of the algorithm. 3) The simulation results demonstrate the effectiveness of our method, with successful dynamic power flow calculation and stable operation under various power conditions. The structure of the paper is as follows. After the introduction in Section I, we summarize related works in Section II and propose our framework and algorithm in Section III. Then, we present the configuration and evaluation results of simulation experiments in Section IV. Finally, the conclusions and further work are detailed in Section V. Related works Power flow adjustment is one of the most important problems in the smart microgrid.
This is due to the complexity of power supply and demand, since the balance of supply and demand in a power grid is determined by multiple adjustable power supply and demand devices, which mathematically is essentially the solution of nonlinear equations. Previous researchers have done a lot of research on power flow control, but how to apply edge intelligence to power flow calculation has a lot of problems. Smart grid based on edge computing has triggered an unprecedented upsurge in recent years, and changed the model of power management in the past. Trajano designs a mobile edge computing based system architecture for a smart grid communication network that allows smart grid application to run at the mobile network edge, which provides a stable and low latency communication network between customers and providers to manage electrical power efficiently [1]. With a hardwareimplemented architecture, Barik adopts fog computing in smart grid that offloads the cloud backend from multi-tasking, and improve efficacy in low power consumption, reduced storage requirement and overlay analysis capabilities [2]. Huang proposed an edge computing framework for real-time monitoring in smart grid with an efficient heuristic algorithm, which can increase the monitoring frame rate up to 10 times and reduce the detection delay up to 85% compared with cloud framework [3]. Similarly, awadi proposes a fog computing model to detect abnormal patterns in electricity consumption data in advance through the collaboration of distributed devices at the edge of smart grid, which the proposed model was tested reliable with low latency and cyber-resilient on real micro grid [4]. Chen proposes an edge computing system for IoT based smart grids, where electrical data is analyzed, processed and stored at the edge of the network [5]. Different strategies (privacy, data predication, preprocessing) are deployed on the system and simulation results shows that the proposed system supports connection and management of substantial terminals, real-time analysis and processing of massive data. The above work proposed a series of architectures and frameworks for applying edge computing to smart grid. However, they did not specifically consider the application of edge intelligence to microgrid. Hisham proposed a hybrid solution where edge computing is used for smart grid information processing, the cloud for power distributing, and a machine learning engine to establish the communication between different layers, which achieved higher power grid throughput and power utilization [6]. It's worth noting that this paper applies edge intelligence to distributed grids, but does not consider power flow calculations between microgrids. Along with power consumers' increasing demand for flexibility and autonomy of power service, the microgrid framework has become a crucial component for the modern smart grid. Shu proposed a real-time scheduling strategy based on deep reinforcement learning to economically dispatch microgrid energy storage considering operational uncertaintie [7]. The agent is tested on the actual data, and the results show that the algorithm can achieve smaller operating costs in complex situations. Fang proposed a multiagent reinforcement learning approach for auction-based microgrid power scheduling market [8]. It reaches utility balance and supply-demand balance of the whole microgrid. Bi proposed a learning-based control microgrid scheduling strategy for economic energy dispatching [9]. 
The proposed solution does not require an explicit model that requires predictors to estimate stochastic variables with uncertainties. Simulation results in real data environment demonstrate the effectiveness of the proposed method. Etemad used a reinforcement learning based charging strategy for microgrid battery and renewable energy to improve electrical stability, power quality and the peak power load [10]. The results show that the model improves the use of renewable energy and battery and reduces the annual payments and peak consumption times. Liu proposed a collaborative reinforcement learning algorithm to solve the distributed economic scheduling problem of microgrid [11]. The algorithm reduces the coupling of nodes in the microgrid and improves the efficiency of distributed economic scheduling. The validity of the approach is verified by experiments with read data. Jayaraj employed Q learning algorithm, a variant of reinforcement learning, to carry out economic scheduling of a microgrid with photovoltaic cells and accumulators [12]. The experiments results show that the proposed method is effective and can reduce net transaction cost. The above work proposed a series of strategies and approaches for economic energy management. However, they did not specifically consider the changing of microgrid configuration. Dabbaghjamanesh proposes a approach for finding the optimal switching of reconfigurable microgrids based on a deep learning algorithm. The algorithm learns the network topology characteristics that very with time and make real-time reconfiguration decisions [13]. Using the reconfiguration technique as a fast, reliable, and effective response can enhance the reliability and performance of the microgrid network. For the problem itself, Ma discusses the application difficulties of deep learning in power flow calculation, proposes the network structure and training process of deep neural network, as well as the method to solve the over-fitting problem [14]. To solve the problem of manpower and time cost consumption caused by strict nonconvergence of power flow in large-scale power grid calculation, Wang proposes an adjustment method of power flow convergence based on knowledge experience and deep reinforcement learning [15]. To quantifying the impact of the correlation among multi-dimensional wind farms on the power system, Zhu proposes a probabilistic power flow calculation framework with learning-based distribution estimation approach [16]. Yang et al. propose a model-based deep learning approach to quickly solve the power flow equations, with the main application of speeding up probabilistic power flow calculations [17]. Compared with the pure data-driven deep learning method, the proposed method can comprehensively improve the approximate accuracy and training speed. Compared with the current situation that traditional machine learning algorithms are mostly used for state identification and evaluation, Su et al. propose a power system control algorithm embedded in deep confidence network [18]. By combining NSGA-II algorithm and deep confidence network, the control optimization strategy can be solved quickly and stably. Some of the above work considers how to apply the deep learning method to the power flow calculation problem. However, the research on the application of edge intelligence to the problem of microgrid is still in the preliminary stage. 
From the point of view of all researchers in the literature, there is hardly research literature has considered how to apply edge intelligence to the power flow calculation of microgrid. The existing methods have poor adaptability to the edge computing framework and are unable to deal with local autonomy, or lead to the failure of power flow calculation convergence, thus leading to system instability. Different from the above work, our research proposes a flow control framework based on edge computing and deep reinforcement learning, while considering resource efficiency and workload arrangement. Considering the complexity of the work flow, we focus on the situation that the system does not converge, design the algorithm framework based on deep reinforcement learning, and fully consider how to deal with the problem of unbalanced power environment. The Framework of Power Flow Adjustment based on Edge Intelligence Edge Computing Due to the rapid increase in the number of mobile devices, conventional centralized cloud computing is struggling to satisfy the QoS for many applications. With 5G network technology on the horizon, edge computing will become the key solution to solving this issue. It is mainly required by some delay-sensitive applications, such as virtual reality, which has stringent delay requirements. Thus, the edge computing paradigm by pushing the cloud resources and services to the edge, enables mobility support, location awareness, and low latency. Generally speaking, the structure of edge computing can be divided into three levels: end device, edge server, and cloud. Figure 1 illustrates the basic architecture of edge computing. This hierarchy represents the computing capacity of edge computing elements and their characteristics. End devices (e.g., sensors, actuators) provide more interactive and better responsiveness for users. However, due to their limited capacity, resource requirements must be forwarded to the servers. Edge servers can support most of the traffic flows in networks as well as numerous resource requirements, such as real-time data processing, data caching, and computation offloading. Therefore, edge servers provide better performance for end users with a small increase in the latency. Cloud servers provide more powerful computing (e.g., big data processing) and more data storage with a transmission latency. The goal of this architecture is to execute the compute-intensive and delay-sensitive part of an application in the edge network, and some applications in the edge server communicate with the cloud for data synchronization. The hierarchical architecture of edge computing encompasses the following attributes. 1) Proximity and low latency: Near to the end of the edge computing both in a physical and a logical sense supports more efficient communication and information distribution than the far-away centralized cloud. 2) Intelligence and control: The performance of a modern edge node is sufficient for the high rate transmission, large data storage and sophisticated computing programs for a set of local users. 3) Less concentration and privacy: Many edge computing servers could be privateowned cloudlets and these less concentration of information shall ease the concern of information leakage in cloud computing caused by the separation of ownership and management of data. 4) Heterogeneous and scalability: Edge computing that scales to a large number of sites, is a cheaper way to achieve scalability than fortifying the servers. 
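As an illustration of how the hierarchy above can be operated, the following minimal sketch routes each task to the lowest tier that satisfies its latency budget, mirroring the device/edge/cloud split described in this section. It is our own toy example rather than part of the paper's system; the tier latencies and capacities are made-up numbers.

```python
# Toy illustration of latency-aware placement across the edge hierarchy.
# The numbers below are invented for demonstration only.
from dataclasses import dataclass


@dataclass
class Tier:
    name: str
    rtt_ms: float    # round-trip latency to reach this tier
    capacity: float  # remaining compute capacity (arbitrary units)


def place(task_cost, deadline_ms, tiers):
    # tiers are assumed ordered: device -> edge -> cloud
    # (increasing latency, increasing capacity)
    for tier in tiers:
        if tier.rtt_ms <= deadline_ms and tier.capacity >= task_cost:
            tier.capacity -= task_cost
            return tier.name
    return "rejected"  # no tier meets the deadline with spare capacity


tiers = [Tier("device", 1.0, 2.0), Tier("edge", 10.0, 50.0), Tier("cloud", 100.0, 1e6)]
print(place(task_cost=5.0, deadline_ms=20.0, tiers=tiers))   # -> "edge": too heavy for the device, too urgent for the cloud
print(place(task_cost=5.0, deadline_ms=500.0, tiers=tiers))  # -> "edge": the nearest tier with capacity still wins
```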
Deep Reinforcement Learning An RL task is defined by $M = (S, A, T, r)$. At each time-step $t$, the agent receives a state $s_t \in S$ and selects an action $a_t \in A$ according to its policy $\pi$: $a_t = \pi(s_t)$. The state transition distribution $T = p(s_{t+1}|s_t, a_t)$ is a mapping from state-action pairs $(s_t, a_t)$ to a probability distribution over the next states. After interacting with the environment, the agent reaches the next state $s_{t+1}$ and receives a reward $r_t = r(s_t, a_t)$. The expected discounted return at time $t$ is given by $R_t = \sum_{t'=t}^{\infty} \gamma^{t'-t} r_{t'}$, where $\gamma \in [0, 1]$ is the discount factor, and the goal of the RL agent is to maximize its expected return. The action-value function, or Q function, is defined as $Q^\pi(s, a) = \mathbb{E}[R_t | s_t = s, a_t = a, \pi]$, which represents the expected discounted return after observing the state $s$ and taking the action $a$ under the policy $\pi$. The optimal Q function $Q^*$ satisfies the Bellman equation $Q^*(s, a) = \mathbb{E}_{s'}\left[ r(s, a) + \gamma \max_{a'} Q^*(s', a') \right]$. DRL is composed of DNNs and RL. As illustrated in Figure 2, the goal of DRL is to create an intelligent agent that can perform efficient policies to maximize the rewards of long-term tasks with controllable actions. The DQN algorithm is a model-free approach for RL using DNNs in environments with discrete action spaces, which optimizes neural networks to approximate the optimal Q function $Q^*$. In DQN, the expected discounted future return of each possible action is predicted at time-step $t$ and the RL agent takes the action with the highest predicted return: $\pi_Q(s_t) = \arg\max_{a\in A} Q(s_t, a)$. During training the RL agent collects the tuples $(s, a, r, s')$ from its experience and stores them in an experience replay memory, which is a key technique to improve training performance in the DQN algorithm. The purpose of the replay memory is to remove correlations between samples experienced by the agent. The neural network approximating $Q^*(s, a)$ is trained using a minibatch gradient descent approach and minimizes the following loss using samples $(s, a, r, s')$ from the replay memory: $L = \mathbb{E}_{s,a,r,s'}\left[ (Q(s, a) - y)^2 \right]$, where $y = r + \gamma \max_{a'\in A} Q(s', a')$. In DQN, the RL agent uses a separate target Q-network, which has the same architecture as the original Q-network but with frozen parameters. The purpose of the target network is to temporarily fix the Q-value targets, because non-stationary targets make the training process unstable and reduce performance. The parameters $\theta^-$ of the target Q-network are updated with those of the original Q-network $\theta$ every fixed number of iterations. With the target Q-network, the loss function can be reformulated as $L = \mathbb{E}_{s,a,r,s'}\left[ \left( r + \gamma \max_{a'\in A} Q(s', a'; \theta^-) - Q(s, a; \theta) \right)^2 \right]$. Advantage Actor Critic (A2C) The Actor-Critic algorithm uses two neural networks. One is a neural network that approximates the policy, and an object that selects an action using this network is called an Actor; this neural network is called the policy network. The other is a neural network that judges whether the action selected by the Actor is good or bad. The object that uses this network to predict the value of the action the Actor selected is called the Critic, and the network is called the value network. The value network approximates a Q function that directly represents the value of an action that the Actor chooses in a specific state. Let the weights of the policy network at time $t$ be $\theta_t$, the state at time $t$ be $s$, the selected action be $a$, the learning rate be $\alpha$, and the policy with parameter $\theta$ be $\pi_\theta$.
Advantage Actor Critic (A2C) The Actor-Critic algorithm uses two neural networks. One network approximates the policy, and the object that selects an action using this network is called the Actor; the network itself is called the policy network. The other network judges whether the action selected by the Actor is good or bad: the object that predicts the value of the action the Actor selected is called the Critic, and its network is called the value network. The value network approximates a Q function that directly represents the value of an action that the Actor chooses in a specific state. Let the weights of the policy network at time t be $\theta_t$, the state at time t be s, the selected action be a, the learning rate be $\alpha$, and the policy with parameter $\theta$ be $\pi_\theta$. The update equation of the parameter $\theta$ of the policy network is $\theta_{t+1} = \theta_t + \alpha \nabla_\theta \log \pi_\theta(a \mid s)\, Q^{\pi}(s, a)$, where $Q^{\pi}(s, a)$ is the total value that can be obtained by continuing the action selection along the policy $\pi$ after selecting action a in the current state s. In the equation above, the Q function approximated by the value network is not normalized. Therefore, if the value of Q predicted by the Critic is too large, the parameter $\theta$ changes too much at a time; conversely, if the predicted value is too small, $\theta$ does not change much. Instead of the predicted Q value, the value obtained by subtracting the state value from the Q value is used, which is called the advantage. The advantage indicates the increment of value obtained by action a. If the value function at time-step t is $V(s_t) = \mathbb{E}[R_t \mid s_t = s]$, the advantage function is $\delta(s_t) = Q(s_t, a_t) - V(s_t)$. The gradient of the actor is $\nabla_\theta \log \pi_\theta(a \mid s)\,\delta(s_t)$, so the policy network is updated as $\theta_{t+1} = \theta_t + \alpha \nabla_\theta \log \pi_\theta(a \mid s)\,\delta(s_t)$. The loss function for updating the value network is given as $\delta(s_t)^2$. 
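A compact, hypothetical PyTorch sketch of the actor and critic losses built from the advantage $\delta(s_t)$ above; the networks, shapes, and the one-step TD estimate of the advantage are illustrative assumptions rather than the implementation used in this work.

```python
# Sketch of the A2C actor and critic losses based on the advantage delta(s_t).
# Shapes, networks and the advantage estimate are illustrative assumptions.
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 8, 4, 0.99
policy_net = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, n_actions))  # actor
value_net  = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(), nn.Linear(64, 1))          # critic

s, s_next = torch.randn(1, state_dim), torch.randn(1, state_dim)
r = torch.tensor([1.0])

dist = torch.distributions.Categorical(logits=policy_net(s))
a = dist.sample()

# Advantage delta(s_t), here estimated with the one-step TD error r + gamma*V(s') - V(s).
delta = (r + gamma * value_net(s_next).squeeze(-1) - value_net(s).squeeze(-1)).detach()

actor_loss  = -(dist.log_prob(a) * delta).mean()            # follows grad log pi_theta(a|s) * delta
critic_loss = (r + gamma * value_net(s_next).squeeze(-1).detach()
               - value_net(s).squeeze(-1)).pow(2).mean()    # delta(s_t)^2
(actor_loss + critic_loss).backward()
```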
Knowledge and experience of power flow convergence Analysis of unsolved grid power flow and generator active power output In the actual power grid, an unreasonable arrangement of generator sets may result in excessive active power transmission, exceeding the transmission capacity of the network [19]. In response to this situation, adding a reactive power compensator or changing the transformer ratio on the line can improve the transmission capacity of the network to a certain extent. But when faced with extremely unreasonable arrangements, these methods can hardly achieve a satisfactory power flow adjustment. Therefore, in order to ensure that the active power transmitted by each line in the power grid does not exceed the upper limit of its transmission capacity, the output of each generator in the generator set needs to be adjusted [20]. Analysis of unsolved grid power flow and line transmission limits A transmission line reaching its transmission capacity limit is the main factor affecting the static stability of the system. Under general conditions, the transmission power of the lines in the grid changes with the generator output and the active and reactive power of the load. There are two situations when the active power of a transmission line reaches its transmission power limit: (i) with a continuous increase of the injected power, the active power of the line that has reached the limit will not continue to increase (or increases very little), and the increase in injected power is transmitted through other transmission channels; (ii) the active power of the transmission line increases with the injected power, but the reactive power transmission of the line reaches the limit. A line reaching its transmission power limit is a necessary condition for the power system to lose static stability. In this case, the system power flow has no solution and the calculation does not converge. Based on the above analysis, the lines that reach their transmission limits can be identified and used as knowledge and experience, guiding where to add reactive power compensation and how to adjust the power injection. In this way, the distribution of the power flow in the network can be changed, and the purpose of power flow non-convergence adjustment can be achieved. Adjustment method of unsolved power flow a) Adjustment of generator output. For small-scale distribution networks with power supply path compensation and direct power supply without boosting, adjusting the generator output is a relatively economical power flow adjustment method. In this case, there is no need to add additional electrical equipment; simply changing the generator terminal voltage can achieve good results. For power supply systems with long lines and multiple voltage levels, the adjustment of generators alone cannot meet the requirements of power flow convergence. b) Adjustment of transformer ratio. Changing the transformer ratio can increase or decrease the voltage of the secondary winding. There are several taps to choose from on the high-voltage side winding of a double-winding transformer and on the high-voltage and medium-voltage side windings of a three-winding transformer; the one corresponding to the rated voltage is called the main tap. c) Reactive power compensation. The generation of reactive power basically does not consume energy, but the transmission of reactive power along the power grid causes active power loss and voltage loss. A reasonable configuration of reactive power compensation, changing the reactive power flow distribution of the network, can reduce the active power loss and voltage loss in the network. d) Line series capacitor. Changing line parameters for voltage regulation can target resistance or reactance, but it is not economical to reduce resistance by increasing the radius of the conductor. Therefore, capacitors can be connected in series on the line to offset the reactance and reduce voltage loss. Automatic Adjustment of Power Flow Calculation Convergence Based on DRL In some MADRL algorithms, each agent requires observation information, such as the opponents' policies, from other agents in addition to its own observation from the environment. However, in a microgrid it is unrealistic to obtain such global observations from other agents. Deep reinforcement learning process design In view of the above statements, a deep reinforcement learning method based on knowledge and experience of power flow is proposed to automatically adjust the cases in which the power flow does not converge, while the balance of active power and reactive power is considered simultaneously. For the agents in deep reinforcement learning, the state, action, and reward are designed as follows. State: for the agent, the state means the variables observed from the environment, which affect the exploration efficiency of the agent. In the selection of state variables, we mainly consider the output of each generator and the switching of the reactive power compensator on each bus. Thus, for the data of m samples, a state space of size m(n + p) is constructed, where n is the total number of generators and p is the total number of buses. Action: the action is the actual policy decision made by the agent in the process of exploration; it is the key factor that truly affects the convergence of the real-time power flow. We consider the regulation of both active power and reactive power, that is, the output of each generator and the number of capacitor switches on each heavy-load bus. Therefore, for the data of m samples, an action space of size m(n + q) is constructed, where n is the total number of generators and q is the total number of heavy-load buses. Reward: in order to make use of the knowledge related to power flow calculation and improve the efficiency of the agent's exploration, we set up multiple reward mechanisms. First of all, if the power flow of a sample converges, the highest positive reward value R1 is obtained, while the negative reward value R2 is added if the power flow does not converge. Then, the upper limit of generator output is considered. 
If the active power output of a generator is greater than its maximum active power limit, the negative reward value R3 is added. Similarly, if the reactive power output of a generator is greater than its maximum reactive power limit, the negative reward value R4 is added. The line loading is also a vital part when calculating the power flow: if a line's loading rate exceeds its maximum, the agent gets a negative reward R5. Additionally, we also consider the voltage level on each bus: if the bus voltage lies within its specified minimum and maximum limits, the positive reward value R6 is added. Finally, the reward value R for each step of the agent is equal to the sum of the above six types of rewards R1, R2, R3, R4, R5, and R6. Actor and Critic Networks Design The detailed DRL network and the process of the automatic adjustment policy are based on a deep policy gradient method [21]. We adopt A2C as our deep reinforcement learning algorithm, which consists of two deep neural networks, namely the actor network and the critic network. The actor network is used to explore the policy, and the critic network estimates the performance and provides the critic value, which helps the actor to learn the gradient of the policy. A2C is obtained by combining the value function approximation algorithm with the policy gradient method. To put it simply, it is composed of two networks: the actor network is the one that actually executes actions in the environment; it contains the strategy and its parameters and is responsible for selecting actions. The critic network does not take any action; it evaluates actions using value function approximation, assessing the actor network by replacing the real Q-values with approximate Q-values. According to the critic, the actor adjusts the parameters of its network to update in a better direction. Simulation Setting In the experimental part, based on a Python 3.7 environment, we use and modify Pandapower [22], an open-source third-party library, as the power flow calculation and analysis tool; it can not only determine the convergence of the power flow but also provide intermediate results of the power flow calculation process. For the power flow calculation algorithm, we choose the Newton-Raphson method with the Iwamoto multiplier. For the environment construction part of deep reinforcement learning, we used the Gym interface [23] provided by OpenAI to build the environment. For the implementation of the deep learning algorithm, we adopted the Stable Baselines [24] interface. In terms of parameters, the upper limit on the number of power flow iterations is set to 10, the total number of samples is 208, and the number of test episodes is 10. Data Preprocessing We choose the IEEE 30-node system as the object of our experiment. The system represents a 345 kV power grid in New England, USA, consisting of 10 generators, 12 double-winding transformers, and 34 lines, with a base power of 100 MVA. Based on the original convergent data of the system, 1000 sets of samples were regenerated by randomly scaling the generators and loads within the range of 0-2 times their original values. The Newton-Raphson method with the Iwamoto multiplier is used to calculate the power flow of the 1000 samples one by one, and 208 non-convergent samples are finally obtained as the data for non-convergent power flow adjustment. 
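To make the state/action/reward design and the toolchain above concrete, the sketch below wraps Pandapower's IEEE 30-bus case in a hypothetical Gym environment and trains it with Stable Baselines' A2C. The reward constants, the mapping of actions onto generator outputs only (capacitor switching and the reactive-power terms are omitted), the fixed voltage band, and the one-step episodes are simplifying assumptions, not the authors' implementation; column names assume pandapower 2.x.

```python
# Hedged sketch: Pandapower IEEE 30-bus case as a Gym environment, trained with A2C.
import gym
import numpy as np
import pandapower as pp
import pandapower.networks as pn
from pandapower.powerflow import LoadflowNotConverged

R1, R2, R3, R5, R6 = 10.0, -10.0, -1.0, -1.0, 1.0   # illustrative reward values only

class PowerFlowEnv(gym.Env):
    def __init__(self):
        self.net = pn.case30()
        n_gen = len(self.net.gen)
        # Action: a scaling factor in [0, 2] for each generator's active power output.
        self.action_space = gym.spaces.Box(0.0, 2.0, shape=(n_gen,), dtype=np.float32)
        # State: current generator outputs (capacitor states omitted in this sketch).
        self.observation_space = gym.spaces.Box(-np.inf, np.inf, shape=(n_gen,), dtype=np.float32)
        self.base_p = self.net.gen["p_mw"].values.copy()

    def reset(self):
        self.net.gen["p_mw"] = self.base_p
        return self.net.gen["p_mw"].values.astype(np.float32)

    def step(self, action):
        self.net.gen["p_mw"] = self.base_p * np.asarray(action)
        reward = 0.0
        try:
            pp.runpp(self.net, algorithm="iwamoto_nr")    # Newton-Raphson with Iwamoto multiplier
            reward += R1                                  # converged
            reward += R5 * (self.net.res_line["loading_percent"] > 100).sum()  # overloaded lines
            vm = self.net.res_bus["vm_pu"]
            reward += R6 * ((vm > 0.95) & (vm < 1.05)).sum()                   # voltage within an assumed band
        except LoadflowNotConverged:
            reward += R2                                  # non-convergent power flow
        # Generator active-power limit violations (assumes a max_p_mw column is present).
        reward += R3 * (self.net.gen["p_mw"] > self.net.gen["max_p_mw"]).sum()
        obs = self.net.gen["p_mw"].values.astype(np.float32)
        return obs, float(reward), True, {}               # one adjustment per episode in this sketch

if __name__ == "__main__":
    from stable_baselines import A2C
    from stable_baselines.common.policies import MlpPolicy
    model = A2C(MlpPolicy, PowerFlowEnv(), verbose=1)
    model.learn(total_timesteps=2000)
```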
Simulation Results As can be seen from the average reward convergence curve in Fig. 3, the reward value obtained by the agent's exploration in deep reinforcement learning increases rapidly as the episodes progress, and it converges to a relatively stable value at around episode 160. Thus, the reward design in deep reinforcement learning helps to continuously optimize the parameters in the power grid. The four graphs in Fig. 4 are taken from the last sample adjusted to convergence; the action convergence curves of two generators and two reactive compensators are selected. As can be seen from the figure, the actions of some devices, such as generator No. 2, converge to certain values very early in the episode, while the actions of others, such as capacitor No. 1, take longer to converge. We also compare our method with baseline methods, such as random exploration and A2C without knowledge and experience, over 100 samples. Conclusion In this article, we proposed an edge computing assisted comprehensive framework for smart grid management and control. It assists the smart grid in realizing real-time demand response and local autonomy in data sensing, processing, and control. In particular, we designed a power flow adjustment algorithm based on deep reinforcement learning that considers the grid knowledge and requirements of microgrids, which improves efficiency and flexibility compared with traditional adjustment methods. Finally, we use the IEEE 30-node system with Pandapower under various grid conditions to verify the effectiveness of the proposed algorithm.
7,306.6
2021-03-11T00:00:00.000
[ "Engineering", "Computer Science", "Environmental Science" ]
Synthesis, Characterization, and Thermal Study of Terpolymeric Resin Derived from m-cresol, Hexamine and Formaldehyde: A terpolymeric resin was prepared from m-cresol (0.1 M), hexamine (0.05 M) and formaldehyde (0.2 M) by an acid-catalyzed polycondensation method using 1 M HCl in the temperature range of 122-130 °C. The resin was abbreviated as m-CHF-I. The molecular weight of the terpolymer was determined by a non-aqueous conductometric titration technique. The structure of the resin was determined from its elemental analysis, UV-VIS, IR, and NMR data. The thermokinetic parameters were determined using the Freeman-Carroll (FC) and Sharp-Wentworth (SW) methods in the temperature range 410-485 °C. The values of the activation energy (Ea), entropy (ΔS), and free energy (ΔG) were in good agreement. The order of the degradation reaction determined by the FC method was confirmed by the SW method. Introduction In recent years, considerable interest has been shown in the synthesis and study of chelating resins containing nitrogen, sulphur and oxygen donor atoms on the polymeric interface. These polymeric resins have received much attention and importance, owing to their wide range of industrial applications. The terpolymers can be used as adhesives, retardants, surface coatings, dyes, fungicides for plants and living tissues, ion exchangers, semiconductors, rectifiers, rechargeable electrical cells, etc. Thermally stable polymers have recently become a boon to polymer chemists because of their applicability at elevated temperatures, despite the challenges posed by thermal instability and low processability. In this connection many co-workers have tried to improve the thermal stability by changing the monomer composition in polymer synthesis [1][2][3][4][5][6][7][8][9]. Thermogravimetric study of a polymer provides information about its degradation pattern during heating and its thermal stability. Phenolic resins have a large number of practical applications in electronic controls, insulating materials, the aerospace industry, machine parts, etc. because of their high thermal stability and chemical and heat resistance 10. Hiwase et al 11 have reported thermokinetic parameters of a resin derived from p-hydroxybenzaldehyde, resorcinol and formaldehyde. Gurnule et al 12 have reported thermodynamic parameters and the order of thermal stabilities of tercopolymers by using TGA. Aswar et al 13 have reported that the sequence of thermal stability of polymeric chelates, predicted on the basis of decomposition temperatures and activation energies, was found to be Ni > Mn > Cu > Co > Zn, whereas the kinetic and thermodynamic parameters were calculated from dynamic TGA by the use of the Sharp-Wentworth and Freeman-Carroll methods. Masram et al 14 reported a kinetic study of the thermal degradation of a resin derived from salicylaldehyde, ethylenediamine and formaldehyde. In the present work the thermokinetic parameters were determined by using the following methods. A) Freeman-Carroll Method (FC): the kinetic parameters are determined by the following expression [15,16]: $\frac{\Delta \log(dw/dt)}{\Delta \log W_r} = n - \frac{E_a}{2.303R}\cdot\frac{\Delta(1/T)}{\Delta \log W_r}$, where dw/dt = rate of change of weight with time, Wr = difference between the weight loss at completion of the reaction and that at time t, Ea = activation energy, and n = order of reaction. B) Sharp-Wentworth Method (SW): the following expression is used to evaluate the kinetic parameters [17][18][19]: $\log\left[\frac{d\alpha/dT}{(1-\alpha)^n}\right] = \log\frac{A}{\beta} - \frac{E_a}{2.303RT}$, where dα/dT is the rate of fractional weight loss, n is the order of reaction, A is the frequency factor, β is the linear heating rate, and α is the fraction of reactant decomposed. 
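If the TGA readings were available, the Freeman-Carroll linearization above could be evaluated with a short script such as the following; the temperature, weight, and time arrays are made-up placeholders rather than the paper's measurements, and the snippet only illustrates how Ea and n are read from the slope and intercept.

```python
# Illustrative Freeman-Carroll fit: plotting dlog(dw/dt)/dlog(Wr) against
# d(1/T)/dlog(Wr) gives a line with slope -Ea/(2.303 R) and intercept n.
# The TGA arrays below are hypothetical placeholders.
import numpy as np

R = 8.314  # J/(mol*K)

# Hypothetical TGA readings in the decomposition range (temperature K, weight mg, time min)
T = np.array([683.0, 693.0, 703.0, 713.0, 723.0, 733.0])
w = np.array([9.0, 8.2, 7.1, 5.8, 4.4, 3.2])
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])

w_final = 2.0                      # residual weight at completion of this step
Wr = w - w_final                   # weight still to be lost
dwdt = np.gradient(w, t)           # rate of weight change (negative during loss)

x = np.diff(1.0 / T) / np.diff(np.log10(Wr))
y = np.diff(np.log10(np.abs(dwdt))) / np.diff(np.log10(Wr))

slope, intercept = np.polyfit(x, y, 1)
Ea = -slope * 2.303 * R            # activation energy, J/mol
n = intercept                      # apparent order of degradation
print(f"Ea ~ {Ea/1000:.1f} kJ/mol, n ~ {n:.2f}")
```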
Preparation of m-CHF-I A mixture of m-cresol (0.1 M), hexamine (0.05 M) and formaldehyde (0.2 M) with 1 M HCl was refluxed in an oil bath at 122-130 °C for 6 h with stirring. The solid product so obtained was removed from the flask as soon as the reaction period was over. It was washed with hot water, dried and powdered. The powder was repeatedly washed with hot water to remove unreacted monomers. The air-dried product was extracted with ether to remove any copolymer that might be present along with the terpolymer. It was then dissolved in 1 M NaOH and reprecipitated using 1:1 HCl solution. The product was finally collected by filtration, washed with hot water, dried, powdered and kept under vacuum. The yield was found to be 74%. The synthetic details are shown in Table 1. Results and Discussion Elemental Analysis: The terpolymeric resin was analyzed for carbon, hydrogen, nitrogen and oxygen content. The elemental analysis was carried out at the Sophisticated Analytical Instrumental Facility (SAIF), Punjab University, Chandigarh. The details of the elemental analysis are incorporated in Table 2. The number average molecular weight of the m-CHF-I terpolymer was determined by the conductometric titration method in a non-aqueous medium, using standard potassium hydroxide (0.1 M) in absolute ethanol as the titrant. The conductance versus milliequivalents (meq.) of KOH per 100 g of resin was plotted, and a large number of breaks was observed in the plot. The average degree of polymerisation ($\overline{DP}$) was obtained from these breaks, and the number average molecular weight follows as $\overline{M}_n = \overline{DP} \times$ (molecular weight of the repeating unit) [20-25]. The molecular weight of the repeat unit was calculated using the elemental analysis data shown in Table 2. The average degree of polymerisation and the number average molecular weight of the terpolymer resin were found to be 15.0 and 5160, respectively. UV-VIS Spectrum: The UV-VIS spectrum of the m-CHF-I resin was recorded on a Shimadzu UV-VIS-NIR spectrophotometer, Model No. 1601. The spectrum so obtained is shown in fig. 1. The peak at 284.16 nm was assigned to the n-π* transition of the phenolic group. The absorption at 252.6 nm was assigned to the π-π* transition of the aromatic ring. The absorption at 236.86 nm was assigned to an n-σ* transition, which supports the presence of ether linkages in the resin structure shown in fig. 4. FTIR and NMR Data of m-CHF-I: The IR spectrum (fig. 2) of the m-CHF-I terpolymeric resin was recorded at the Pharmacy Department, Mahatma Jyotiba Phule Campus, R. T. M. Nagpur University, and the NMR spectrum (fig. 3) of the m-CHF-I resin was recorded at the Sophisticated Analytical Instrumental Facility (SAIF), Punjab University, Chandigarh; the data are presented in Table 3. According to the data obtained from these physicochemical methods, the tentative structure of the terpolymeric resin was assigned as shown in fig. 4. Thermogravimetric Analysis: The thermogram of the m-CHF-I terpolymer resin, shown in fig. 5, was recorded at the Dept. of Material Science, VNIT Nagpur, using a Perkin Elmer Diamond TGA/DTA analyzer in an argon environment. The polymeric sample was heated up to 950 °C. The thermogram reveals an initial weight loss up to 150 °C due to loss of water. The decomposition of the resin between 410 and 485 °C was studied. The FC and SW plots are shown in fig. 6a and fig. 6b, respectively. The order of decomposition was found to be zero as determined by the FC method, which was further confirmed by the SW method. The thermokinetic parameters are tabulated in Table 4. 
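As a quick arithmetic cross-check of the conductometric results reported above, assuming the relation $\overline{M}_n = \overline{DP} \times M(\text{repeat unit})$:

```python
# Consistency check on the reported values, assuming Mn = DP x M(repeat unit).
DP = 15.0          # average degree of polymerisation reported above
Mn = 5160.0        # number average molecular weight reported above
repeat_unit_mass = Mn / DP
print(repeat_unit_mass)   # ~344, the implied molecular weight of one repeating unit
```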
Conclusion The elemental analysis and the spectral data (UV-VIS, IR, NMR) are in good agreement with the assigned tentative structure of the m-CHF-I terpolymeric resin. The activation energy, entropy, and free energy of the zero-order degradation, determined by the Freeman-Carroll and Sharp-Wentworth methods, are in good agreement. The low values of the frequency factor and entropy indicate slow degradation of the resin. The high value of the activation energy relative to the thermal energy suggests that the m-CHF-I resin is thermally stable below 400 °C.
1,593.2
2012-01-01T00:00:00.000
[ "Materials Science", "Chemistry" ]
Deep forest Abstract Current deep-learning models are mostly built upon neural networks, i.e. multiple layers of parameterized differentiable non-linear modules that can be trained by backpropagation. In this paper, we explore the possibility of building deep models based on non-differentiable modules such as decision trees. After a discussion about the mystery behind deep neural networks, particularly by contrasting them with shallow neural networks and traditional machine-learning techniques such as decision trees and boosting machines, we conjecture that the success of deep neural networks owes much to three characteristics, i.e. layer-by-layer processing, in-model feature transformation and sufficient model complexity. On one hand, our conjecture may offer inspiration for theoretical understanding of deep learning; on the other hand, to verify the conjecture, we propose an approach that generates deep forest holding these characteristics. This is a decision-tree ensemble approach, with fewer hyper-parameters than deep neural networks, and its model complexity can be automatically determined in a data-dependent way. Experiments show that its performance is quite robust to hyper-parameter settings, such that in most cases, even across different data from different domains, it is able to achieve excellent performance by using the same default setting. This study opens the door to deep learning based on non-differentiable modules without gradient-based adjustment, and exhibits the possibility of constructing deep models without backpropagation. Introduction In recent years, deep neural networks have achieved great success in various applications, particularly in tasks involving visual and speech information, leading to the hot wave of deep learning [Goodfellow et al., 2016]. Though deep neural networks are powerful, they have apparent deficiencies. First, it is well known that a huge amount of training data is usually required for training, making deep neural networks difficult to apply directly to tasks with small-scale data. Note that even in the big data era, many real tasks still lack a sufficient amount of labeled data due to the high cost of labeling, leading to inferior performance of deep neural networks in those tasks. Second, deep neural networks are very complicated models and powerful computational facilities are usually required for the training process, which hinders individuals outside big companies from fully exploiting the learning ability. More importantly, deep neural networks have too many hyper-parameters, and the learning performance depends seriously on careful tuning of them. For example, even when several authors all use convolutional neural networks [LeCun et al., 1998; Simonyan and Zisserman, 2014], they are actually using different learning models due to the many different options such as the convolutional layer structures. This fact makes not only the training of deep neural networks very tricky, like an art rather than science/engineering, but also theoretical analysis of deep neural networks extremely difficult because of too many interfering factors with almost infinite configurational combinations. It is widely recognized that the representation learning ability is crucial for deep neural networks. It is also noteworthy that, to exploit large training data, the capacity of learning models should be large; this partially explains why deep neural networks are very complicated, much more complex than ordinary learning models such as support vector machines. 
We conjecture that if we can endow these properties to some other suitable forms of learning models, we may be able to achieve performance competitive to deep neural networks but with less aforementioned deficiencies. In this paper, we propose gcForest (multi-Grained Cascade Forest), a novel decision tree ensemble method. This method generates a deep forest ensemble, with a cascade structure which enables gcForest to do representation learning. Its representational learning ability can be further enhanced by multi-grained scanning when the inputs are with high dimensionality, potentially enabling gcForest to be contextual or structural aware. The number of cascade levels can be adaptively determined such that the model complexity can be automatically set, enabling gcForest to perform excellently even on small-scale data. Moreover, users can control training costs according to computational resources available. The gcForest has much fewer hyper-parameters than deep neural networks; even better news is that its performance is quite robust to hyper-parameter settings, such that in most cases, even across different data from different domains, it is able to get excellent performance by using the default setting. This makes not only the training of gcForest convenient, but also theoretical analysis, although beyond the scope of this paper, potentially easier than deep neural networks (needless to say that tree learners are typically easier to analyze than neural networks). In our experiments, gcForest achieves highly competitive performance to deep neural networks, whereas the training time cost of gcForest is smaller than that of deep neural networks. Figure 1: Illustration of the cascade forest structure. Suppose each level of the cascade consists of two random forests (black) and two completely-random tree forests (blue). Suppose there are three classes to predict; thus, each forest will output a three-dimensional class vector, which is then concatenated for re-representation of the original input. We believe that in order to tackle complicated learning tasks, it is likely that learning models have to go deep. Current deep models, however, are always neural networks, multiple layers of parameterized differentiable nonlinear modules that can be trained by backpropagation. It is interesting to consider whether deep learning can be realized with other modules, because they have their own advantages and may exhibit great potentials if being able to go deep. This paper devotes to addressing this fundamental question and illustrates how to construct deep forest; this may open a door towards alternative to deep neural networks for many tasks. In the next sections we will introduce gcForest and report on experiments, followed by related work and conclusion. The Proposed Approach In this section we will first introduce the cascade forest structure, and then the multi-grained scanning, followed by the overall architecture and remarks on hyper-parameters. Cascade Forest Structure Representation learning in deep neural networks mostly relies on the layer-by-layer processing of raw features. Inspired by this recognition, gcForest employs a cascade structure, as illustrated in Figure 1, where each level of cascade receives feature information processed by its preceding level, and outputs its processing result to the next level. Each level is an ensemble of decision tree forests, i.e., an ensemble of ensembles. 
Here, we include different types of forests to encourage diversity, as it is well known that diversity is crucial for ensemble construction [Zhou, 2012]. For simplicity, suppose that we use two completely-random tree forests and two random forests [Breiman, 2001]. Each completely-random tree forest contains 500 completely-random trees, generated by randomly selecting a feature for the split at each node of the tree, and growing the tree until each leaf node contains only instances of the same class. Similarly, each random forest contains 500 trees, generated by randomly selecting √d features as candidates (d is the number of input features) and choosing the one with the best Gini value for the split. The number of trees in each forest is a hyper-parameter, which will be discussed in Section 2.3. Given an instance, each forest will produce an estimate of the class distribution, by counting the percentage of the different classes of training examples at the leaf node where the concerned instance falls, and then averaging across all trees in the same forest, as illustrated in Figure 2, where red color highlights the paths along which the instance traverses to leaf nodes. The estimated class distribution forms a class vector, which is then concatenated with the original feature vector to be input to the next level of the cascade. For example, suppose there are three classes; then each of the four forests will produce a three-dimensional class vector, and thus the next level of the cascade will receive 12 (= 3 × 4) augmented features. To reduce the risk of overfitting, the class vector produced by each forest is generated by k-fold cross validation. In detail, each instance will be used as training data k − 1 times, resulting in k − 1 class vectors, which are then averaged to produce the final class vector as augmented features for the next level of the cascade. After expanding a new level, the performance of the whole cascade will be estimated on a validation set, and the training procedure will terminate if there is no significant performance gain; thus, the number of cascade levels is automatically determined. In contrast to most deep neural networks whose model complexity is fixed, gcForest adaptively decides its model complexity by terminating training when adequate. This enables it to be applicable to different scales of training data, not limited to large-scale ones. 
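As a hedged sketch of how one cascade level's augmented features could be produced with off-the-shelf tools: the forest types, tree counts, and cross-validated class vectors follow the description above, while ExtraTreesClassifier with a single candidate feature per split stands in for completely-random tree forests. This is an approximation for illustration, not the authors' implementation.

```python
# Sketch of one cascade level: each forest outputs cross-validated class probabilities,
# which are concatenated with the original features as input to the next level.
# ExtraTreesClassifier(max_features=1) approximates a completely-random tree forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier
from sklearn.model_selection import cross_val_predict

def cascade_level(X, y, n_trees=500, cv=3, random_state=0):
    forests = [
        RandomForestClassifier(n_estimators=n_trees, random_state=random_state),
        RandomForestClassifier(n_estimators=n_trees, random_state=random_state + 1),
        ExtraTreesClassifier(n_estimators=n_trees, max_features=1, random_state=random_state + 2),
        ExtraTreesClassifier(n_estimators=n_trees, max_features=1, random_state=random_state + 3),
    ]
    class_vectors = [
        cross_val_predict(f, X, y, cv=cv, method="predict_proba")  # out-of-fold class vector
        for f in forests
    ]
    # Augmented representation passed to the next level: original features + class vectors.
    return np.hstack([X] + class_vectors)

# Toy usage: 3 classes, 400 raw features -> 400 + 4*3 = 412 augmented features.
X = np.random.rand(90, 400)
y = np.repeat([0, 1, 2], 30)
print(cascade_level(X, y, n_trees=50).shape)   # (90, 412)
```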
Multi-Grained Scanning Deep neural networks are powerful in handling feature relationships; e.g., convolutional neural networks are effective on image data where spatial relationships among the raw pixels are critical [LeCun et al., 1998], and recurrent neural networks are effective on sequence data where sequential relationships are critical [Graves et al., 2013; Cho et al., 2014]. Inspired by this recognition, we enhance the cascade forest with a procedure of multi-grained scanning. As Figure 3 illustrates, sliding windows are used to scan the raw features. Suppose there are 400 raw features and a window size of 100 features is used. For sequence data, a 100-dimensional feature vector will be generated by sliding the window by one feature at a time; in total 301 feature vectors are produced. If the raw features have spatial relationships, such as a 20 × 20 panel of 400 image pixels, then a 10 × 10 window will produce 121 feature vectors (i.e., 121 10 × 10 panels). Figure 3: Illustration of feature re-representation using sliding window scanning. Suppose there are three classes, raw features are 400-dim, and the sliding window is 100-dim. All feature vectors extracted from positive/negative training examples are regarded as positive/negative instances, which will then be used to generate class vectors as in Section 2.1: the instances extracted from the same size of window will be used to train a completely-random tree forest and a random forest, and then the class vectors are generated and concatenated as transformed features. As Figure 3 illustrates, suppose that there are 3 classes and a 100-dimensional window is used; then 301 three-dimensional class vectors are produced by each forest, leading to a 1,806-dimensional transformed feature vector corresponding to the original 400-dimensional raw feature vector. Note that when the transformed feature vectors are too long to be accommodated, feature sampling can be performed, e.g., by subsampling the instances generated by sliding window scanning, since completely-random trees do not rely on feature split selection whereas random forests are quite insensitive to inaccurate feature split selection. Figure 3 shows only one size of sliding window. By using multiple sizes of sliding windows, differently grained feature vectors will be generated, as shown in Figure 4. Overall Procedure and Hyper-Parameters Figure 4 summarizes the overall procedure of gcForest. Suppose that the original input is of 400 raw features, and three window sizes are used for multi-grained scanning. For m training examples, a window with a size of 100 features will generate a data set of 301 × m 100-dimensional training examples. These data will be used to train a completely-random tree forest and a random forest, each containing 500 trees. If there are three classes to be predicted, a 1,806-dimensional feature vector will be obtained as described in Section 2.1. The transformed training set will then be used to train the 1st-grade of the cascade forest. Similarly, sliding windows with sizes of 200 and 300 features will generate 1,206-dimensional and 606-dimensional feature vectors, respectively, for each original training example. The transformed feature vectors, augmented with the class vector generated by the previous grade, will then be used to train the 2nd-grade and 3rd-grade of cascade forests, respectively. This procedure will be repeated till convergence of validation performance. In other words, the final model is actually a cascade of cascade forests, where each level in the cascade consists of multiple grades (of cascade forests), each corresponding to a grain of scanning, as shown in Figure 4. Note that for difficult tasks, users can try more grains if computational resources allow. Given a test instance, it will go through the multi-grained scanning procedure to get its corresponding transformed feature representation, and then go through the cascade till the last level. The final prediction will be obtained by aggregating the four 3-dimensional class vectors at the last level, and taking the class with the maximum aggregated value. 
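A minimal sketch of the sliding-window re-representation just described (sequence case). The window size and forest size follow the illustrative values in the text, but only a single scikit-learn random forest is used here; in the text both a completely-random tree forest and a random forest are trained, which doubles the class-vector length.

```python
# Hedged sketch of multi-grained scanning for sequence data: slide a window over the
# 400 raw features, give every window instance the label of its source example, and
# turn the per-window class probabilities into a long transformed feature vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sliding_instances(X, window):
    """(m, d) raw features -> (m, d - window + 1, window) window instances."""
    m, d = X.shape
    n_win = d - window + 1
    return np.stack([X[:, i:i + window] for i in range(n_win)], axis=1)

m, d, window, n_classes = 60, 400, 100, 3
X = np.random.rand(m, d)
y = np.random.randint(0, n_classes, size=m)

wins = sliding_instances(X, window)                    # (60, 301, 100)
flat_X = wins.reshape(-1, window)                      # every window becomes an instance
flat_y = np.repeat(y, wins.shape[1])                   # label inherited from the source example

forest = RandomForestClassifier(n_estimators=50).fit(flat_X, flat_y)
proba = forest.predict_proba(flat_X).reshape(m, -1)    # (60, 301 * 3) = (60, 903) per forest
print(proba.shape)   # with two forests this becomes the 1,806-dim vector from the text
```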
Configuration In this section we compare gcForest with deep neural networks and several other popular learning algorithms. The goal is to validate that gcForest can achieve performance highly competitive to deep neural networks, with easier parameter tuning even across a variety of tasks. Thus, in all experiments gcForest uses the same cascade structure: each level consists of 4 completely-random tree forests and 4 random forests, each containing 500 trees, as described in Section 2.1. Three-fold CV is used for class vector generation. The number of cascade levels is automatically determined. In detail, we split the training set into two parts, i.e., a growing set and an estimating set; then we use the growing set to grow the cascade, and the estimating set to estimate the performance. If growing a new level does not improve the performance, the growth of the cascade terminates and the estimated number of levels is obtained. Then, the cascade is retrained based on merging the growing and estimating sets. For all experiments we take 80% of the training data for the growing set and 20% for the estimating set. For multi-grained scanning, three window sizes are used. For d raw features, we use feature windows with sizes of ⌊d/16⌋, ⌊d/8⌋ and ⌊d/4⌋; if the raw features have a panel structure (such as images), the feature windows also have a panel structure as shown in Figure 3. Note that careful task-specific tuning may bring better performance; nevertheless, we find that even using the same parameter setting without fine-tuning, gcForest has already been able to achieve excellent performance across a broad range of tasks. For the deep neural network configurations, we use ReLU for the activation function, cross-entropy for the loss function, adadelta for optimization, and a dropout rate of 0.25 or 0.5 for hidden layers according to the scale of training data. The network structure hyper-parameters, however, could not be fixed across tasks, otherwise the performance would be embarrassingly unsatisfactory. For example, a network that attained 80% accuracy on the ADULT dataset achieved only 30% accuracy on YEAST with the same architecture (only the number of input/output nodes changed to suit the data). Therefore, for deep neural networks, we examine a variety of architectures on a validation set, pick the one with the best performance, then re-train the whole network on the training set and report the test accuracy. Image Categorization The MNIST dataset [LeCun et al., 1998] contains 60,000 images of size 28 by 28 for training (and validating), and 10,000 images for testing. We compare gcForest with a re-implementation of LeNet-5 (a modern version of LeNet with dropout and ReLUs), an SVM with rbf kernel, and a standard Random Forest with 2,000 trees. We also include the result of the Deep Belief Nets reported in [Hinton et al., 2006]. The test results show that gcForest, although simply using the default settings in Table 1, achieves highly competitive performance. Face Recognition The ORL dataset [Samaria and Harter, 1994] contains 400 gray-scale facial images taken from 40 persons. We compare gcForest with a CNN consisting of 2 conv-layers with 32 feature maps of 3 × 3 kernel, each conv-layer followed by a 2 × 2 max-pooling layer. A dense layer of 128 hidden units is fully connected with the convolutional layers, and finally a fully connected soft-max layer with 40 hidden units is appended at the end. Figure 4: The overall procedure of gcForest. Suppose there are three classes to predict, raw features are 400-dim, and three sizes of sliding windows are used. ReLU, cross-entropy loss, a dropout rate of 0.25 and adadelta are used for training. The batch size is set to 10, and 50 epochs are used. We have also tried other configurations of CNN, whereas this one gives the best performance. We randomly choose 5/7/9 images per person for training, and report the test performance on the remaining images. Note that a random guess will achieve 2.5% accuracy, since there are 40 possible outcomes. The kNN method here uses k = 3 for all cases. 
The test results show that gcForest runs well across all three cases even by using the same configuration as described in Table 1. Music Classification The GTZAN dataset [Tzanetakis and Cook, 2002] contains 10 genres of music clips, each represented by 100 tracks of 30 seconds long. We split the dataset into 700 clips for training and 300 clips for testing. In addition, we use MFCC features to represent each 30-second music clip, which transforms the original sound wave into a 1,280 × 13 feature matrix. Each frame is atomic according to its own nature; thus, the CNN uses a 13 × 8 kernel with 32 feature maps as the conv-layer, each followed by a pooling layer. Two fully connected layers with 1,024 and 512 units, respectively, are appended, and finally a soft-max layer is added at the end. We also compare with an MLP having two hidden layers, with 1,024 and 512 units, respectively. Both networks use ReLU as the activation function and categorical cross-entropy as the loss function. For Random Forest, Logistic Regression and SVM, each input is flattened into a 1,280 × 13 feature vector. Sentiment Classification The IMDB dataset [Maas et al., 2011] contains 25,000 movie reviews for training and 25,000 for testing. The reviews are represented by tf-idf features. This is not image data, and thus CNNs are not directly applicable. So, we compare gcForest with an MLP with structure input-1,024-1,024-512-256-output. We also include the result reported in [Kim, 2014], which uses CNNs facilitated with word embedding. Considering that tf-idf features do not convey spatial or sequential relationships, we skip multi-grained scanning for gcForest. For low-dimensional, small-scale data, MLP structures are hard to fix: for example, an MLP with input-16-8-8-output structure and ReLU activation achieves 76.37% accuracy on ADULT but just 33% on LETTER. We conclude that there is no way to pick one MLP structure which gives decent performance across all datasets. Therefore, we report different MLP structures with the best performance: for LETTER the structure is input-70-50-output, for ADULT it is input-30-20-output, and for YEAST it is input-50-30-output. In contrast, gcForest uses the same configuration as before, except that multi-grained scanning is abandoned considering that the features of these small-scale data do not hold spatial or sequential relationships. Influence of Multi-Grained Scanning To study the separate contributions of the cascade forest structure and multi-grained scanning, Table 8 compares gcForest with the cascade forest alone on the MNIST, GTZAN and sEMG datasets. It is evident that when there are spatial or sequential feature relationships, the multi-grained scanning process clearly helps improve performance. Running time Our experiments use a PC with 2 Intel E5 2695 v4 CPUs (18 cores), and the running efficiency of gcForest is good. For example, for the IMDB dataset (25,000 examples with 5,000 features), it takes 267.1 seconds per cascade level, and automatically terminates with 9 cascade levels, amounting to 2,404 seconds or 40 minutes. In contrast, the MLP compared on the same dataset requires 50 epochs for convergence and 93 seconds per epoch, amounting to 4,650 seconds or 77.5 minutes for training; 14 seconds per epoch (with a batch size of 32) if using a GPU (Nvidia Titan X pascal), amounting to 700 seconds or 11.6 minutes. Multi-grained scanning will increase the cost of gcForest; however, the different grains of scanning are inherently parallel. Also, both completely-random tree forests and random forests are parallel ensemble methods [Zhou, 2012]. 
Thus, the efficiency of gcForest can be improved further with an optimized parallel implementation. Note that the training cost is controllable because users can set the number of grains, forests, and trees by considering the computational cost available. It is also noteworthy that the above comparison is somewhat unfair to gcForest, because many different architectures have been tried for the neural networks to achieve the reported performance but these time costs are not included. Related Work The gcForest is a decision tree ensemble approach. Ensemble methods [Zhou, 2012] are a kind of powerful machine learning technique which combines multiple learners for the same task. Actually there are some studies showing that by using ensemble methods such as random forest facilitated with deep neural network features, the performance can be even better than simply using deep neural networks [Kontschieder et al., 2015]. Our purpose of using ensembles, however, is quite different. We are aiming at an alternative to deep neural networks rather than a combination with deep neural networks. In particular, by using the cascade forest structure, we hope not only to do representation learning, but also to decide a suitable model complexity automatically. The multi-grained scanning procedure uses different sizes of sliding windows to examine the data; this is somewhat related to wavelet and other multi-resolution examination procedures [Mallat, 1999]. For each window size, a set of instances is generated from one training example; this is related to bag generators [Wei and Zhou, 2016] of multi-instance learning [Dietterich et al., 1997]. In particular, the bottom part of Figure 3, if applied to images, can be regarded as the SB image bag generator [Maron and Lozano-Pérez, 1998; Wei and Zhou, 2016]. The cascade procedure is related to Boosting [Freund and Schapire, 1997], which is able to automatically decide the number of learners in the ensemble, and particularly, a cascade boosting procedure [Viola and Jones, 2001] has achieved great success in object detection tasks. Note that when multiple grains are used, each cascade level of gcForest consists of multiple grades; this is actually a cascade of cascades. Each grade can be regarded as an ensemble of ensembles; in contrast to previous studies such as using Bagging as base learners for Boosting [Webb, 2000], gcForest uses the ensembles in the same grade together for feature re-representation. Passing the output of one grade of learners as input to another grade of learners is related to stacking [Wolpert, 1992; Breiman, 1996]. Based on suggestions from studies about stacking [Ting and Witten, 1999; Zhou, 2012], we use a cross-validation procedure to generate inputs from one grade for the next. Note that stacking is easy to overfit with more than two grades, and could not enable a deep model by itself. To construct a good ensemble, it is well known that individual learners should be accurate and diverse, yet there is no well-accepted formal definition of diversity [Kuncheva and Whitaker, 2003; Zhou, 2012]. Thus, researchers usually try to enhance diversity heuristically, such as what we have done by using different types of forests in each grade. Actually, gcForest exploits all four major categories of diversity enhancement strategies [Zhou, 2012]. 
In particular, when assigning the label of the original instance to all instances generated by sliding windows, as shown in Figure 3, some label assignments are inherently incorrect; this is related to the Flipping Output method [Breiman, 2000], a representative of output representation manipulation for diversity enhancement. As a tree-based approach, gcForest could be potentially easier for theoretical analysis than deep neural networks, although this is beyond the scope of this paper. Indeed, some recent theoretical studies about deep learning, e.g., [Mhaskar et al., 2017], seem more closely related to tree-based models. Conclusion By recognizing that the key of deep learning lies in the representation learning and large model capacity, in this paper we attempt to endow such properties to tree ensembles and propose the gcForest method. Compared with deep neural networks, gcForest achieves highly competitive performance in experiments. More importantly, gcForest has much fewer hyper-parameters, and in our experiments excellent performance is obtained across various domains by using the same parameter setting. The code of gcForest is available. There are other possibilities to construct deep forest. As a seminal study, we have only explored a little in this direction. In order to tackle complicated tasks, it is likely that learning models have to go deep. Current deep models, however, are always neural networks. This paper illustrates how to construct deep forest, and we believe it may open a door towards an alternative to deep neural networks for many tasks. SUPPLEMENT TO Deep Forest: Towards an Alternative to Deep Neural Networks. Zhi-Hua Zhou and Ji Feng, National Key Lab for Novel Software Technology, Nanjing University, Nanjing 210023, China. More Experiments The goal of our experiments in the main body of the paper is to show that gcForest is applicable to various tasks with almost the same hyper-parameter settings; this is an apparent advantage in contrast to deep neural networks, which are quite sensitive to hyper-parameter settings. After submission we had time to try the CIFAR-10 dataset [Krizhevsky, 2009], which contains 50,000 colored 32 by 32 images of 10 classes for training and 10,000 images for testing. The test results are shown in Table 1, which also includes the results of several deep neural networks reported in the literature. The gcForest with the default setting, i.e., gcForest(default), is inferior to state-of-the-art DNNs; however, it is already the best among non-DNN approaches. The performance of gcForest can be further improved via task-specific tuning, e.g., by including more grains (i.e., using more sliding window sizes in multi-grained scanning) like gcForest(5grains), which uses five grains. It is also interesting to see that the performance gets a significant improvement with gcForest(gbdt), which simply replaces the final level with GBDT [Chen and Guestrin, 2016]. It cannot be ignored that DNNs have been investigated for many years by a huge crowd of researchers/engineers, and image tasks are killer applications of DNNs. Generally it is too ambitious to aim at beating powerful techniques on their killer applications; e.g., linear kernel SVMs are still state-of-the-art for text categorization although DNNs have been hot for many years. There should be plenty of other tasks where deep forests can offer help. 
Due to limitations of computational resources, we have not tried larger models with more grains, forests and trees, although our preliminary results suggest that larger models might tend to offer better performance, as shown in Figure 1. Note that computational facilities are crucial for enabling the training of larger models; e.g., GPUs for DNNs. On one hand, some new computational devices, such as the Intel KNL of the MIC (Many Integrated Core) architecture, might offer potential acceleration for gcForest like GPUs for DNNs. On the other hand, some components of gcForest, e.g., the multi-grained scanning, may be accelerated by exploiting GPUs. Moreover, there is plenty of room for improvement with distributed computing implementations. More About Forest Random forest, which has been widely applied to various tasks, is one of the most successful ensemble methods [Zhou, 2012]. Completely-random tree forests have been found useful during recent years, such as iForest for anomaly detection, sencForest [Mu et al., in press] for handling emerging new classes in streaming data, etc. The gcForest offers another example exhibiting the usefulness of completely-random tree forests. Many works try to connect random forests with neural networks, such as converting cascaded random forests to convolutional neural networks [Richmond et al., 2015], exploiting random forests to help initialize neural networks [Welbl, 2014], etc. These works are typically based on early studies connecting trees with neural networks, e.g., the mapping of trees to networks [Sethi, 1990] and tree-structured neural networks [Sanger, 1991], as reviewed in [Zhou and Chen, 2002]. Their goals are totally different from ours. Future Exploration As mentioned in the main body, feature sampling can be executed when the transformed feature vectors produced by multi-grained scanning are too long to be accommodated; this not only helps reduce storage, but also offers another channel to enhance the diversity of the ensembles. It is somewhat like combining random tree forests with random subspace [Ho, 1998], another powerful ensemble method [Zhou, 2012]. In such situations, it is usually helpful to increase the size of the ensemble. Besides random sampling, it is interesting to explore smarter sampling strategies, such as BLB [Kleiner et al., 2012], or feature hashing [Weinberger et al., 2009] when adequate. Moreover, the multi-grained scanning and cascade forest construction processes can be realized separately. Many issues of the feature re-representation process are worth further exploration. For example, we now take the simplest form of class vectors, i.e., the class distribution at the leaf nodes into which the concerned instance falls. It is apparent that more features may be incorporated, such as the class distribution of the parent nodes, which expresses the prior distribution, that of the sibling nodes, which expresses the complementary distribution, the decision path encoding, etc. Intuitively, more features may enable the encoding of more information, although this is not always necessarily helpful for generalization. Moreover, a longer class vector may enable a joint multi-grained scanning process, leading to more flexibility of re-representation. The hard negative mining strategy may help improve generalization performance, and efforts improving the efficiency of hard negative mining may also be found helpful for the multi-grained scanning process [Henriques et al., 2013]. 
The efficiency of gcForest may be further improved by reusing some components during the process of different grained scanning, class vector generation, forest training, completely-random tree generation, etc. The employment of completely-random tree forests not only helps enhance diversity, but also provides an opportunity to exploit unlabeled data. Note that the growth of completely-random trees does not require labels, whereas label information is only needed for annotating leaf nodes. Intuitively, each leaf node might require only one labeled example if the node is annotated according to its majority cluster, or one labeled example per cluster if all clusters in the node are non-negligible. This also offers gcForest the opportunity of incorporating active learning [Freund et al., 1997] and/or semi-supervised learning strategies [Zhou and Li, 2010; Li and Zhou, 2007]. In case the learned model is big, it may be possible to reduce it to a smaller one by using the twice-learning strategy [Zhou and Jiang, 2004]; this might be helpful not only to reduce storage but also to improve prediction efficiency.
6,956.6
2017-02-28T00:00:00.000
[ "Computer Science" ]
Perirenal Adipose Tissue—Current Knowledge and Future Opportunities The perirenal adipose tissue (PRAT), a component of visceral adipose tissue, has been recently recognized as an important factor that contributes to the maintenance of the cardiovascular system and kidney homeostasis. PRAT is a complex microenvironment consisting of a mixture of white adipocytes and dormant and active brown adipocytes, associated with preadipocytes, sympathetic nerve endings, vascular structures, and different types of inflammatory cells. In this review, we summarize the current knowledge about PRAT and discuss its role as a major contributing factor in the pathogenesis of hypertension, obesity, and chronic renal diseases, as well as its involvement in tumor progression. The new perspective of PRAT as an endocrine organ and recent knowledge regarding the possible activation of dormant brown adipocytes are nowadays considered new areas of research in obesity, in close correlation with renal and cardiovascular pathology. Additionally, the complex intervention of PRAT in tumor progression may reveal new pathways involved in carcinogenesis and, implicitly, may identify additional targets for tailored cancer therapy. Introduction Adipose tissue is a dynamic cellular complex that includes three distinct cell types: white, brown, and beige ("brite") adipocytes, each of them displaying a particular morphofunctional profile. These cells are organized in humans into four different types of fat tissue: white, brown, beige, and perivascular adipose tissue [1]. All of these fat deposits contain adipocytes, vascular and nerve structures, preadipocytes, pericytes, and immune cells (mostly connective tissue mast cells) [1]. These fat deposits are physiologically involved in the maintenance of local and general homeostasis via their endocrine and paracrine activity, but they may also intervene in the pathogenesis of some diseases. In this respect, the role of liver and muscle adipocytes in the development of diabetes mellitus [2], the intervention of epicardial fat in the development of atherosclerosis and ischemic coronary disease [3], and the involvement of perivascular adipose tissue (PVAT) in the pathogenesis of hypertension [4] have been recently identified. Human fat is predominantly represented by white adipose tissue, which is organized into subcutaneous and visceral adipose tissue. Visceral white adipose tissue consists of gonadal fat deposits, epicardial adipose tissue, retroperitoneal, mesenteric, and omental fat depots, and perirenal adipose tissue (PRAT) [5,6]. PRAT is included, according to some authors, in the so-called "ectopic fat", along with PVAT, pericardial adipose tissue, renal sinus fat, and adipose tissue located in different organs or tissues (e.g., muscle and liver) [7,8]. Currently, PRAT is considered a special visceral adipose deposit in terms of its specific anatomical features regarding vascularization and innervation, in the context of its location in the proximity of the kidney [9]. The most accurate imaging methods for PRAT size Origin and Structure PRAT is located around the kidney and the adrenal gland, in the retroperitoneal space, between the renal capsule and the renal fascia (Gerota's fascia) [9,12], while paranephric fat is adjacent to PRAT, the renal fascia being located between these two fat areas [9,12,13]. The renal sinus fat, a deposit of adipose tissue located at the medial border of the kidney, is associated with the calyces, renal vessels, nerve fibers, and lymphatic channels of this compartment. 
Due to its relationship with the renal vessels, renal sinus fat is considered to act as PVAT, being mainly involved in blood pressure control [14,15]. Morphologically, the paranephric fat is composed predominantly of cells exhibiting unilocular-type lipid inclusions, while PRAT consists mainly of brown adipocytes [12,16]. The arteries that supply PRAT with oxygen and nutrients derive from branches of the left colic, lower adrenal, renal, lumbar, and ovarian or testicular arteries and generate an abundant anastomosing capillary network [17]. The lymphatic vessels that drain PRAT open into the renal subcapsular lymphatics and into the para-aortic lymph nodes [18]. Although studies on the origin of PRAT are limited, recent observations on PRAT adipogenesis have revealed that adipocyte precursors (preadipocytes) from the perirenal area are negative for endothelial markers, like CD31, and for hematopoietic markers, such as CD45, but are positive for CD90 and CD166 [16]. Morphologically, PRAT consists of a mixture of white and brown adipocytes, with most brown adipocytes in a dormant status [6,16,21]. These white and brown cells are associated with mesenchymal stem cells, preadipocytes, and several inflammatory cells, along with many capillaries and nerve endings [9]. PRAT is considered a reservoir of mesenchymal stem cells (MSCs), which display the same phenotype as those obtained from other fat tissue depots and exhibit the capacity to differentiate into adipocyte, osteogenic, chondrogenic, and epithelial lineages [6], being a focus of research in regenerative medicine. These cells may show an immunoregulatory phenotype in response to inflammatory factors, such as IL-1β, IFNγ, and TNF-α, which may be produced by different immune cells, and this ability may be exploited in anti-inflammatory therapy [6]. Moreover, TNF-α stimulates the secretion of IL-6 and IL-8 by these MSCs, followed by angiogenesis stimulation in experimental models, while IL-6, IFNγ, and TNF-α increase their immunosuppressive abilities in vitro [6]. These recent findings may represent a potential immunomodulation mechanism which may be used to enhance therapeutic effectiveness in different types of inflammatory diseases, tissue injuries [6], and cardiovascular and renal diseases. In fetuses and babies (1-11 months of life), PRAT consists predominantly of brown adipocytes, while white adipocytes form only a thin cellular layer at its periphery [10]. Thereafter, a progressive conversion to unilocular white adipocytes takes place in PRAT, brown adipocytes remaining only as small cellular islands dispersed within the white adipose area. Most of PRAT is thought to be made up of dormant brown adipocytes, while active brown adipocytes are rare in adults, these being located in areas which contain a high number of sympathetic nerve endings [16]. An analysis of gene expression in various human fat depots revealed that PRAT is analogous to subcutaneous adipose tissue, being different from visceral adipose tissue [22]. The differences between PRAT and subcutaneous adipose tissue consist in the expression of RNA binding motif single stranded interacting protein 1 (RBMS1), ankyrin repeat domain 20 family member A1 (ANKRD20A1), and DnaJ heat shock protein family (Hsp40) member B1 (DNAJB1) [21,22]. The morphological variability is also gender-dependent, PRAT being much more developed in men compared to women, without a direct relationship between body mass index (BMI) and its volume [26]. 
These findings were reported after computed tomography (CT) measurements were carried out on 123 persons (58 women, with an average age of 59 years and mean BMI of 28.9 kg/m 2 and 65 men with an average age of 60.0 years and mean BMI of 28.9 kg/m 2 ) [26]. The same results have been obtained by Favre et al., in a study carried out on 40 patients (16 women and 24 men), with an average age of 57.6 +/− 18.1 years and mean BMI of 28.9 +/− 2.9/kg/m 2 [27]. Furthermore, they observed that men had a higher PRAT volume at a comparable waist circumference [26]. PRAT gender variability is also manifested by its thickness and volume correlation with the waist circumference, in men, and its negative correlation with the thickness of subcutaneous fat tissue, in women [27]. PRAT gender differences are equally reflected in its histologic pattern. Thus, brownlike adipocytes with an increased expression of UCP-1 mRNA represent 33% in female and only 7% in male PRAT [28]. Although the PRAT "browning" mechanism after cold exposure is partially explained, it has been already observed that the resulted heat is rapidly dispersed throughout the body, a finding easily attributed to the abundance of kidney blood flow, as the kidneys receive about 20% of the cardiac output [28]. This phenomenon may be partially attributable to the anatomical association with the adrenal gland, which induces an intense "browning" of PRAT, by production of catecholamines [23,29]. Additionally, the stronger female PRAT ability to induce "browning" seems to be more likely related to the specific characteristics of the sex-related MSCs of this area and less likely to the direct intervention of sex hormones [28]. These findings are based on the results of a study conducted on a murine model that showed that Y-chromosome suppresses brown adipose tissue (BAT) UCP-1 expression [30]. PRAT in Chronic Renal Pathology Due to its anatomical location, PRAT's size increase may lead to chronic kidney damage, with a direct correlation between the thickness of this adipose tissue deposit and the kidney damage [31]. According to this observation, the ultrasound evaluation of PRAT volume is nowadays proposed as a parameter for the assessment of early renal lesions associated with obesity [32]. This observation is also supported by the high occurrence of proteinuria (almost tripled in people with BMI > 25 kg/m 2 ) [33,34]. The mechanism of PRAT involvement in chronic kidney damage is not completely elucidated but it has been postulated that PRAT's increase may result in a direct obstruction of renal parenchyma and vessels, followed by an increase of sodium reabsorption and, as a consequence, a high blood pressure, with alterations of renal functions in obese patients [10]. The direct compression of PRAT on renal parenchyma results in intra-renal pressure increase, associated with reduced blood flow rates in vasa recta [35,36]. As a consequence, an increased Na+ absorption in Henle's loop, associated with a decreased NaCl delivery to the macula densa, results in low resistance in afferent arterioles, along with an increased glomerular filtration rate and activation of renin production by juxtaglomerular cells [35,36]. 
In addition, PRAT compression of the renal parenchyma causes an increased interstitial hydrostatic pressure and a reduced renal blood flow, which result in stimulation of renin secretion, glomerular filtration, and tubular sodium reabsorption, respectively, all these processes accelerating the kidney disease progression (Figure 1) [10,31,36]. In this context, a recent study on 296 patients with hypertensive disease has shown that a glomerular filtration rate reduction of <60 mL/minute per 1.73 m2 is correlated with the PRAT increase, not with that of visceral adiposity, regardless of gender [37]. Moreover, a direct link between PRAT size and patients' high serum uric acid and triglycerides, in chronic kidney disease [31], or with creatinine values, in hypertensive disease, has been registered [37] as a consequence of glomerular filtration rate reduction [10]. The increased volume of visceral fat and, more specifically, of PRAT is associated with overproduction of free fatty acids, showing a serum level directly correlated with albuminuria [32,38]. Fatty acid metabolites, such as ceramides, have a direct renal lipotoxic effect [32,38]. Moreover, the excessive release of fatty acids by PRAT induces an endothelial dysfunction, which is manifested by enhanced oxidation of tetrahydrobiopterin, followed by increased production of superoxides and decreased NO synthesis [10].
The cellular microenvironment of PRAT, characterized by the association between white and brown adipocytes, predipocytes, and macrophages, together with numerous nerve endings and blood vessels, is involved in insulin resistance, chronic kidney disease, hypertension, atherosclerosis, and, recently, in tumor progression. Additionally, the reduction of the inflammatory profile of perirenal adipocytes, expressed by decreased levels of inflammatory cytokines including IL-1β, IL-6, and TNF-α, due to stimulation of heme oxygenase system associated with a decreased macrophage infiltration, results in an improved renal activity [10,44]. An important recent study has demonstrated a direct correlation between age and inflammatory phenotype of donor-derived stromal vascular fraction of perirenal adipose tissue (PRAT-SVF), expressed by a local recruitment of natural killer (NK) cells, which display a CD45+CD3-CD56+ phenotype. The proportion of NK cells in PRAT-SVF is associated with NKG2D receptor activation and transcripts encoding INFγ, suggesting that NK cells may be actively involved in pro-inflammatory mechanisms leading to functional impairment in elderly transplanted patients [45]. Current data have shown that PRAT size potentiates the lesions produced by other renal metabolic factors, such as abnormal insulin serum levels and increased glucose resistance or high triglycerides and uric acid levels, all these features being observed in patients with chronic kidney disease [10,46]. Furthermore, according to the results of a recent study, an increase of PRAT in patients with calcium phosphate apatite or uric acidic nephrolithiasis has been noticed [47]. Since it could not be specified whether there is a direct relationship between the occurrence of these lesions and PRAT volume, further research is needed to clarify this finding [47]. PRAT in Metabolic and Cardiovascular Pathology Obesity is a pathological condition associated with cardiovascular risk, type II diabetes mellitus, dyslipidaemia, and high blood pressure. The deposition of triglycerides in ectopic adipose tissue, including PRAT, is attributed to an "exceedance" of the subcutaneous white adipose tissue storage capacity [48]. An increased waist circumference size is considered by some guidelines as a factor associated with the cardiovascular risk [49]. Moreover, its value is also providing general information about the size of the subcutaneous and perivisceral adipose areas [7,10]. During the last years, research has shown that the cardiovascular risk is more closely correlated with visceral fat tissue volume, including PRAT, in comparison to subcutaneous fat size [7, 31,50]. In this regard, a recent study, conducted on a group of 702 overweight prepubertal children, revealed a relationship between PRAT size and carotid intima-media thickness [51]. Furthermore, the ultrasound evaluation of the thickness of the perirenal and epicardial fat areas may be considered useful in cardiovascular risk assessment, considering the strong relationship between the epicardial and PRAT volume and the carotid intimamedia thickness detected in a group of healthy prepubertal children [39]. These data are supplemented by the study of Ricci et al. which reported a larger PRAT volume evaluated by ultrasound in morbidly obese male patients with BMI ≥ 40 or ≥35 kg/m 2 (mean values of 15.6 ± 4.9 mm), compared with female counterparts (11.6 ± 4 mm) [11]. 
Despite these findings, the intimate mechanisms of PRAT involvement in cardiovascular etiopathogeny remain incompletely explained. According to literature, PRAT directly regulates the activity of the cardiovascular system by an "adipose afferent reflex", which increases the blood pressure, as a result of enhanced renal sympathomimetic outflow induced by amplified afferent signals from fat deposits [52]. However, PRAT whitening in adult animals has been tested by administration of 6-hydroxydopamine (6-OHDA), a dopamine-derived sympathetic neurotoxin, and resulted in decreased sympathetic innervation and inhibition of adipose tissue browning, showing that PRAT development is a sympathetic-independent process [9]. Additionally, PRAT excess induces the activation of the renin-angiotensin-aldosterone system due to the compression of blood and lymphatic vessels, along with ureters, which may be responsible for the development of hypertensive disease, atherosclerosis, and insulin resistance (Figure 1) [7,53]. There is evidence that glomerular activity, which is important in homeostasis and normal blood pressure control, is influenced by PRAT size [7,53]. The association between hypertensive disease and PRAT size, regardless of other fat deposits indices, is supported by other studies, with direct correlation between excessive PRAT volume and the reduction of glomerular filtration rate [10,37]. Moreover, according to the study of Ricci et al., patients with hypertensive diseases have a larger PRAT thickness (average value of 13.6 mm), in comparison to normotensive patients (average value of 11.6 mm) [11], with additional correlation to age, anthropometric data (waist circumference and BMI), systolic blood pressure, insulin resistance, and glycated hemoglobin values [11]. These findings are supported also by the study performed on 102 uncomplicated overweight and obese patients, which demonstrated a close relationship between PRAT size and systolic and diastolic blood pressure, along with serum triglycerides values [46]. Moreover, UCP-1 protein low expression has been detected in PRAT of obese patients with hypertensive disease, compared to normal subjects [21,54]. Recent analyses performed on patients with high-grade obesity have also shown a correlation between serum creatinine levels and PRAT thickness, in a context of so-called "obesity-related-glomerulopathy" [55,56]. From a morphologic point of view, this condition is characterized by glomerular hypertrophy, with or without focal and segmental glomerulosclerosis and proliferation of glomerular mesangial cells, which are induced by a disruption of PRAT hormone and cytokine secretions [11,57,58]. Furthermore, adipokines and cytokines synthesized by dysfunctional perirenal adipocytes have paracrine or autocrine effects on the cardiovascular system [9,10]. Despite all these findings, there are relatively limited data regarding the correlation between PRAT volume regression and possible hypertensive disease remission [7]. However, hypertension remission after bariatric surgery attributed to PRAT's size decrease has been reported in patients with morbid obesity [11]. Thus, sleeve-gastrectomy performed on a group of 89 patients with morbid obesity and hypertensive disease led to a reduction of doses of antihypertensive drugs prescribed or a withdraw of drug administration in 16 patients that showed normal blood pressure after the surgery, and no need to start the administration of a drug-based therapeutic regimen in 48 patients [11]. 
Consequently, the ultrasound assessment of PRAT volume should be included in the evaluation of obese patients, in order to establish the risk of cardiovascular and chronic renal disease, using microalbuminuria as a useful indicator of microvascular lesions and early renal dysfunction [10,46]. Furthermore, excessive PRAT is associated with an increased insulin resistance and dyslipidaemia, which in turn lead to an increased cardiovascular risk and to an accelerated age-related decline of renal function [11,37,46,50]. Recently published data support the finding that plasminogen activator inhibitor-1 (PAI-1), a mediator of extracellular matrix (ECM) accumulation in diabetic nephropathy produced by perirenal adipocytes, is involved in the development of diabetic nephropathy and insulin resistance, by increasing the recruitment of immune cells in obese people [59]. It is well recognized that hypoxia due to PRAT enlargement induces lipolysis and acts as a local pro-inflammatory trigger [60,61]. As a consequence, a marked increase of the immune cell component, mainly composed of macrophages, along with mast cells, neutrophils, and lymphocytes, occurs in fat tissue areas [61]. All of these processes are associated with an increase of local synthesis of proinflammatory cytokines, such as leptin, chemerin, resistin, visfatin, retinol binding protein 4 (RBP4), and lipocalin 2 (LCN2) [62]. Their pro-inflammatory action is counterbalanced by adiponectin and omentin, both exhibiting an anti-atherogenic and anti-inflammatory capacity [61]. Although the activity of PRAT inflammatory cells is poorly understood, an analysis of these cells in pigs with obesity-related metabolic dysfunction showed an increase of local infiltration with macrophages, associated with an increased TNF-α expression [43]. Another study performed on a murine model revealed that direct injection of leptin into PRAT results in adipose afferent reflex activation, supporting its well-known capacity to induce renal vascular and endothelial damage [42]. Although important steps have been made in deciphering PRAT involvement in the development of cardiovascular and metabolic diseases, further investigations are necessary for the development of a new generation of therapeutic tools, based on adipocyte targets. In this regard, it has been recently observed that a fish oil-rich diet [63], short-time, high-frequency physical exercise [64], reduction of meal frequency to 1-2/day, or exposure to low temperatures [65] induce a decrease of PRAT volume [66]. In an attempt to analyze whether PRAT browning may combat obesity, it was revealed that brown adipocytes require a dense vascular network to support their high energy consumption [67]. In cases of vascular insufficiency, mitochondrial dysfunction occurs, leading to systemic insulin resistance [67], suggesting that promotion of browning may open promising perspectives in the therapy of renal pathology, hypertension, inflammation, or in the control of the general metabolic status. Moreover, the main research objectives in cardiovascular and metabolic diseases could be the identification of molecular factors actively involved in stimulation of PRAT dormant brown adipocytes in order to develop appropriate therapeutic and prevention approaches. PRAT in Tumor Pathology Several large-scale studies have confirmed a significant association between obesity and cancer [68,69].
The dysfunctional adipose tissue is one of the sources of growth factors, cytokines, adipokines, or extracellular matrix scaffolding, which support tumor cell growth in obese patients [69-71]. The current knowledge regarding the relationship between PRAT and tumor pathology is limited: few studies report the possible involvement of this adipocyte area in supporting local or general invasion of tumor cells. Considering the anatomical relationship between PRAT and the kidney, a correlation between PRAT size and clear cell renal carcinoma (conventional) local progression and life expectancy has been demonstrated in a study conducted on a group of 174 patients [72]. Consequently, the imaging evaluation of PRAT volume may be useful in assessing the prognosis in clear cell renal carcinoma [72,73]. Considering PRAT's specific morphological profile, consisting of a mixture of white and brown adipocytes, it expresses high levels of UCP-1. In this context, an increased UCP-1 expression of PRAT is considered a negative prognostic factor in patients with clear cell renal carcinoma (Figure 2) [74]. Moreover, a decreased expression of the HOXC8 and HOXC9 genes, a classical white adipocyte signature, has been detected in perirenal fat in patients with clear cell renal carcinoma when compared to healthy people, while TBX1, TMEM26, or CD137 expression was unchanged [74]. Furthermore, the results of the same study showed similar expression of white adipose cell markers, such as ADIPOQ and LEP, in both study groups [74]. According to these data, a mechanism of PRAT "browning" occurs in patients with clear cell renal carcinoma [74]. Nonetheless, further investigations are necessary to complete the knowledge about the mechanism involved in PRAT regulation of metabolism and stimulation of tumor development [74]. The spectrum of factors involved in PRAT-induced local tumor progression comprises overexpression of UCP-1 associated with perirenal adipocytes, underexpression of HOXC8 and HOXC9, and a possible added PRAT "browning" in clear cell renal carcinoma, promotion of tumor progression by adipokines and pro-inflammatory cytokines released by dysfunctional perirenal fat, cachexia due to brown adipocyte activation, and dedifferentiation of mature adipocytes at the invasive tumor front. There are relatively limited data regarding the involvement of perirenal adipocytes in promoting ovarian tumor cell adhesion, migration, and invasion [75,76]. In this regard, PRAT stromal cells have been detected as stimulators of tumor growth [75,76]. According to the results of a recently published study on a group of 258 patients with stage III and IV ovarian cancer, a PRAT thickness of more than 5 mm has been associated with a lower survival rate [77].
This finding supplements the observation that overexpression of adiponectin and leptin in patients with visceral obesity induces the progression of ovarian cancer and its recurrence, as a result of leptin-potentiated IL-6 synthesis and its contribution to the survival of dormant tumor cells [78,79]. Furthermore, the immunosuppressive cytokines produced by PRAT, such as IL-10, along with the suppression of IL-6, IL-12p40, and CD86, result in stimulation of ovarian cancer progression, while stimulation of IFNγ, by abrogating IL-12 inhibition, leads to a favorable prognosis in malignant ascites [77]. Another mechanism that contributes to a poor prognosis in ovarian cancer is the increased UCP-1 activity in the brown adipose tissue of PRAT, resulting in increased resting energy expenditure, which leads to tumor cachexia [77]. Beside ovarian and renal carcinoma, PRAT dysfunction has also been correlated with an increased risk and poor prognosis in colorectal cancer [80]. Furthermore, adipocytes located at the tumor invasive front gain a fibroblast-like phenotype, suggesting a preadipocyte population arising from dedifferentiated mature adipocytes, as a possible feedback loop resulting in PRAT dysfunction in abdominally metastasizing cancers [75]. PRAT is not only involved in tumor progression but, in addition, its large thickness is now considered a predictor of post-surgery complications [81]. Furthermore, PRAT is an active metabolic tissue that releases a panel of inflammatory cytokines, such as TNF-α and IL-6, as a result of the increased number of macrophages in the excessive perirenal fat area [27]. In the same context, a PRAT inflammatory profile associated with local fibrosis induced by the overexpression of genes encoding fibronectin and type I collagen has been noticed in patients with aldosterone-producing adenoma, due to enhanced aldosterone production [54]. Although important steps have been made in deciphering the specific PRAT roles in tumor progression, the intimate molecular mechanisms of its involvement in carcinogenesis and tumor invasion are far from elucidation. Conclusions The past two decades have been marked by changes in the traditional perception of the adipose tissue structure and pathophysiology. Currently, PRAT is a particular visceral adipose deposit, with anatomical and morphological specific features related to its proximity to the kidney.
The data concerning PRAT adipocytes origin and activity are limited, but recent research has been oriented towards several directions in order to exploit its potential in therapy and prevention of different diseases. Although PRAT displays a small size in comparison to that of subcutaneous or visceral fat deposits, the paracrine or autocrine mechanisms of action of the adipokines and proinflammatory cytokines which it produces maximize PRAT's effects in the maintenance of renal and general homeostasis. In this context, the main research objectives could be the identification of the regulation pathways of the molecular mechanisms involved in brown adipocyte lineage differentiation in PRAT, as the most promising therapeutic approach in cardiovascular diseases, chronic renal pathology, and tumor local progression.
6,493.2
2021-03-01T00:00:00.000
[ "Medicine", "Environmental Science", "Biology" ]
Improved AHP Model and Neural Network for Consumer Finance Credit Risk Assessment. With the rapid expansion of the consumer financial market, the credit risk problem in borrowing has become increasingly prominent. Based on the analytic hierarchy process (AHP) and the long short-term memory (LSTM) model, this paper evaluates individual credit risk through the improved AHP and the optimized LSTM model. Firstly, the characteristic information is extracted, and the financial credit risk assessment index system structure is established. The data are input into the AHP-LSTM neural network, and the index data are fused with the AHP so as to obtain the risk level and serve as the expected output of the LSTM neural network. The results of the prewarning model after training can be used for financial credit risk assessment and prewarning. Based on the LendingClub and PPDAI data sets, the experiment uses the AHP-LSTM model to classify and predict and compares it with other classification methods. Experimental results show that the performance of this method is superior to other comparison methods in both data sets, especially in the case of unbalanced data sets. Introduction Accompanied by the rapid expansion of the consumer finance industry and the continuous expansion of the consumer credit scale, various financial credit problems are relatively severe [1]. With the establishment of the public credit investigation system, the demand for personal consumption credit has become increasingly strong [2]. In order to adapt to the new changes, commercial institutions gradually began to expand the personal credit investigation business, and the personal credit investigation system gradually moved toward marketization [3]. The pattern of China's personal credit investigation market has shown a trend of diversification, and the design of the personal credit risk assessment model will be its core advantage and the key to lasting management [4]. Through the use of appropriate evaluation methods, borrowers who are likely to default can be identified accurately and efficiently, so as to reduce the bad debt losses of banks, consumer finance companies, and other lending institutions and to ensure the stable development of the social economy [5]. In view of different credit risk assessment problems, risk assessment methods are constantly updated and developed. The authors of [6] stated that the online loan borrowers' credit risk assessment method based on the AHP-LSTM model extracted features from personal information, constructed the AHP-LSTM model through multigranularity scanning and the forest module, and predicted default of borrowers. At the same time, the Gini index was used to calculate the importance score of random forest features, and the Borda counting method was used to sort and fuse the results [7]. However, there is still room for the model to better address the problem of unbalanced sample categories [8].
The authors of [9] adopted a personal credit assessment based on a heterogeneous integration algorithm model to solve the problem that it is difficult to assess customer personal credit in bank loan risk control. The AUC value of the proposed heterogeneous ensemble learning model reaches 0.916, which is an average increase of 7.38% compared with traditional machine learning models, and it has good generalization ability [10]. The method based on the synchronous processing of sample undersampling and feature selection by the gray wolf optimization algorithm uses the classifier as the heuristic information of the gray wolf optimization algorithm to conduct an intelligent search so as to obtain the combination of the optimal sample and feature set [11]. A tabu list strategy was introduced into the original gray wolf algorithm to avoid local optima [12]. Compared with other methods, the performance of this method on different data sets proves that it can effectively solve the problem of sample imbalance, reduce the dimension of the feature space, and improve the classification accuracy. Studies on the missing value filling method (QL-RF), based on Q-learning and random forest, and the integrated classification model (QXB), based on the bagging framework and fusing quantum particle swarm optimization (QPSO) and XGBoost, have also been further optimized [13]. Among them, QL-RF is superior to the traditional RF filling method under G-means, F1-measure, and AUC, and QXB is significantly superior to SMOTE-RF and SMOTE-XGBoost [14]. The proposed method can effectively deal with missing-value and classification problems under high-dimensional unbalanced data [15]. A personal credit evaluation model has been established using the support vector machine (SVM) [16]. A genetic algorithm is introduced to optimize the model's parameters, and validity analysis and extension analysis are performed for the samples of two P2P lending platforms. Based on the empirical results, that work discusses the potential risks of credit brushing; it can effectively solve the problem of personal credit evaluation of P2P lending platforms and has good robustness and wide applicability [17]. This paper uses the LSTM network to establish the personal credit evaluation model by improving the analytic hierarchy process. The final evaluation result of the traditional analytic hierarchy process is related to the subjective scale of the participants in the evaluation, which may lead to an inconsistent judgment matrix, requiring consistency testing and modification many times and resulting in a large workload in the evaluation process. The AHP-LSTM model can predict default in the case of unbalanced positive and negative samples, which improves the accuracy of credit risk assessment. The improved analytic hierarchy process can intuitively and comprehensively reflect the level of independent ability of credit risk evaluation, better reflects the comprehensive independent ability of credit risk evaluation, and can solve multiobjective complex problems. The concept of the optimal matrix is used to improve the traditional analytic hierarchy process. This method can make the evaluation results automatically meet the consistency requirements, simplify the consistency testing steps, and greatly reduce the workload of evaluation. This paper consists of four main parts. The first part is the related background introduction.
The second part is the methodology, which introduces the improvement of the analytic hierarchy process and the LSTM model and further establishes the credit risk assessment model. The third part is the result analysis and discussion. The fourth part is the conclusion. Analytic Hierarchy Process. The analytic hierarchy process (AHP) is an analytical and decision-making method for solving multiobjective complex problems. Firstly, the complex problem is decomposed into several evaluation factors and the corresponding index system is established. Then, the evaluation factors are divided into different hierarchical structures according to their subordinate relationships, and the hierarchical structure model is constructed. Using the degree of importance, the 1-9 scale theory is introduced to obtain the quantitative judgment matrix, where the primary and secondary properties of the 1-9 scale are defined in Tables 1 and 2, respectively. Finally, the relative weights of the factors at each level are calculated, and the consistency check is carried out. Improved Analytic Hierarchy Process. It can be seen from the algorithm steps of the traditional analytic hierarchy process that the final evaluation result of this method depends on the subjective scale of the participants. If there is a big difference between the participants' subjective cognition and objective reality, an inconsistent judgment matrix may result, which requires consistency testing and correction many times and causes a large workload in the evaluation process. To solve this problem, an improved analytic hierarchy process is applied, and the specific steps are as follows. (1) According to the principle of the analytic hierarchy process, we construct the judgment matrix G_i = (g_xy), where g_xy is the importance of factor x relative to factor y and g_xy > 0. (2) We add up each row to obtain the sum vector s_x = Σ_y g_xy. (3) We normalize this vector to obtain the weight vector w_x = s_x / Σ_x s_x. (4) We check consistency. (5) In order to coordinate the evaluation factors, the concept of the optimal matrix is used to improve the traditional analytic hierarchy process. This method can make the evaluation results automatically meet the consistency requirements, simplify the consistency testing steps, and greatly reduce the workload of evaluation. (6) We carry out the total hierarchy ranking. The importance of a factor at the bottom level relative to the factor at the top level can be obtained by calculating layer by layer along the hierarchy structure, and the total ranking of the hierarchy can be completed.
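To make the weight-derivation and consistency-check steps above concrete, the following is a minimal NumPy sketch; it is not code from the paper, the 3x3 judgment matrix values are invented for illustration, and the random-index table is the standard one commonly used with AHP.

```python
import numpy as np

# Illustrative 3x3 judgment matrix on the 1-9 scale (values are made up).
G = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Steps (2)-(3): row sums, then normalization, give the weight vector.
row_sums = G.sum(axis=1)
weights = row_sums / row_sums.sum()

# Step (4): consistency check via the consistency ratio CR = CI / RI.
n = G.shape[0]
lambda_max = np.mean((G @ weights) / weights)          # estimate of the principal eigenvalue
CI = (lambda_max - n) / (n - 1)                        # consistency index
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]    # random index (small-n table)
CR = CI / RI if RI > 0 else 0.0
print(weights, CR)   # CR < 0.1 is conventionally taken as acceptable consistency
```

A consistency ratio below 0.1 is the usual acceptance threshold; the improved AHP described above avoids repeated checking and correction by constructing a quasi-optimal consistent matrix, as detailed next.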
Quantitative Evaluation Analysis of the Credit Risk Assessment. The credit risk assessment adopts the improved analytic hierarchy process to carry out the independent capability assessment. The specific algorithm steps are as follows: (1) Construct the judgment matrix K. For the credit risk assessment, an expert survey table is first developed, and the relative importance of the factors at the ability level and the relative importance of the factors at the index level corresponding to different abilities are scored using the 1-9 scale theory. This paper takes the scoring results of the relative importance of the competency factors as an example and gives the judgment matrix of the competency factors with respect to the target factors. (2) Construct the antisymmetric matrix K_1 = lg K of K. (3) Solve the optimal transfer matrix K_2 of the antisymmetric matrix K_1. (4) According to k*_xy = 10^(K_2,xy), construct the quasi-optimal consistent matrix K*. (5) From K*, obtain the vector M and, after normalization, the weight vector M̄ between the evaluation indexes. At this point, the relative importance weights of the competency-layer factors with respect to the target layer are obtained. In this method, the comprehensive ability of credit risk assessment is decomposed to construct the lowest-level evaluation index set that can reflect the independent ability of the credit risk assessment. Adopting a bottom-up approach, the influence of different evaluation indicators on the comprehensive independent ability of the credit risk assessment is reflected in the form of weights. On this basis, the paper provides a method that can quantitatively evaluate the independent ability of the credit risk assessment. The method in this paper can fully reflect the level and comprehensive autonomy of the credit risk assessment. WT-LSTM Model. The wavelet transform itself has the ability to process nonstationary financial time series data. Trend information and fluctuation information can be separated through multilayer decomposition and reconstruction of the original signals. The decomposition process is as follows: (1) Decomposition by the fast binary orthogonal wavelet transform (Mallat algorithm) is expressed as G_t = B G_(t-1) and D_t = A G_(t-1), where B and A are the low-pass filter and the high-pass filter, respectively, t is the decomposition level, and G_0 is the initial time series. The original data are decomposed into D_1 and G_1 components at the first level, and the approximate signal G_1 is decomposed into G_2 and D_2 at the next level. This process continues for t levels until t + 1 signal sequences are obtained. (2) The data loss caused by dyadic sampling is recovered and the series is reconstructed by the interpolation method as G_(t-1) = B* G_t + A* D_t, where B* and A* are the dual operators of B and A, respectively, which makes the sum of the reconstructed sequences equal to the original sequence. In the selection of the wavelet, Daubechies 4, which has the largest applicable range, was selected, and the number of decomposition layers was 4. The wavelet transform is used to decompose the price time series of the credit risk assessment. Firstly, a low-frequency approximate sequence G_4 and high-frequency detail sequences D_1, D_2, D_3, and D_4 are obtained by decomposition. The interpolation method is used to reconstruct the approximate sequence G_4 and the detail sequences D_1, D_2, D_3, and D_4. An LSTM is used to predict the reconstructed subsequences, and the final prediction result is obtained by summing up the predicted subsequences. The prediction process is shown in Figure 1. The training parameters of the LSTM model are set as follows: the number of hidden cells of the LSTM layer is 200, the maximum number of training iterations is 200, the gradient threshold is set to 1, and the initial learning rate is 0.005. After 125 iterations, the learning rate is reduced by a multiplying factor of 0.2, and the prediction step size is 1.
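A small illustration of the db4, four-level decomposition and per-band reconstruction described above, using the third-party PyWavelets package; the helper function and the workflow around it are our own sketch, and the LSTM forecasting step is only indicated in comments rather than implemented.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_subseries(series, wavelet="db4", level=4):
    """Split a 1-D series into per-band sub-series (A4, D4, D3, D2, D1)
    by zeroing all but one coefficient band before reconstruction."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    subseries = []
    for i in range(len(coeffs)):
        keep = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        rec = pywt.waverec(keep, wavelet)[: len(series)]
        subseries.append(rec)
    return subseries  # summing these approximately recovers the input series

# Each sub-series would then be turned into supervised windows and fed to an
# LSTM (e.g., 200 hidden units, initial learning rate 0.005, as stated above);
# the final forecast is the sum of the per-sub-series forecasts.
```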
CEEMDAN-LSTM Model. Complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) is an improvement of empirical mode decomposition (EMD), which is also a method for analyzing nonlinear and nonstationary data by breaking the sequence down into a series of intrinsic mode function (IMF) components that represent data features at different time scales. However, the unimproved EMD has the defect of mode aliasing, so CEEMDAN, building on the white-noise approach of ensemble empirical mode decomposition (EEMD), introduced the improvement of adding independently distributed Gaussian white noise to the original data. The adaptive noise addition solves the problems of mode aliasing and excessive residual noise simultaneously and improves the decomposition efficiency. The CEEMDAN algorithm process is as follows. (1) We add Gaussian white noise with a normal distribution to the original time series, j_y(t) = j(t) + ε_0 ω_y(t), y = 1, ..., n, where j(t) is the original sequence, ω_y(t) is the Gaussian white noise, ε_0 is the standard deviation of the noise, and n is the number of noise additions. (2) The first-order modal component is obtained according to the EMD method, the mean value over the noisy realizations is taken as the first xwf component xwf_1(t), and the residual after the first stage is calculated as r_1(t) = j(t) − xwf_1(t). (3) Similarly, we take the residual term as the original time series, repeat steps (1) and (2), and add the adaptive Gaussian white noise for EMD decomposition to obtain the next component and the corresponding residual. We repeat the above steps until the residual can no longer be decomposed, that is, until the residual term has become a monotone function or a constant. When the amplitude is lower than the established threshold and the next modal function can no longer be extracted, the decomposition process ends. Finally, Z orthogonal xwf functions and the final trend term res_Z are obtained, such that j(t) = Σ_(x=1..Z) xwf_x(t) + res_Z(t). Based on the advantages of CEEMDAN in sequence decomposition, this paper constructs the CEEMDAN-LSTM model to predict the price of the credit risk assessment. Because the short-term fluctuation is reflected by the high-frequency components, it has little impact on the original sequence, and its average value is close to zero. Therefore, the high-frequency components can be screened through a t-test, and the new sequences can then be adjusted in combination with xwf_b and other factors; the trend term reflects the trend of the original sequence. The t-test, with a 0.05 significance level and a nonzero mean for xwf_x(t), is conducted successively; after the sequential test, the first component xwf_g(t) with a significantly nonzero mean is obtained. The high-frequency subsequence xwf_b is obtained by adding xwf_1(t) through xwf_(g−1)(t), and the low-frequency subsequence is obtained by adding xwf_g(t) through xwf_Z(t), with res_Z continuing as the trend term. Parameter settings refer to the Torres parameter settings: when CEEMDAN was used to decompose the original sequence, Gaussian white noise with a standard deviation of 0.2 was added, the number of additions was 500, and the maximum number of iterations was 2000.
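As a rough sketch of how such a decomposition could be produced in practice, the snippet below assumes the third-party PyEMD package (installed as "EMD-signal"), which, per its documentation, exposes a CEEMDAN class with trials and epsilon options; this is not code from the paper and the synthetic signal is invented.

```python
import numpy as np
from PyEMD import CEEMDAN  # third-party "EMD-signal" package; API assumed from its docs

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.2 * rng.standard_normal(512)

# Noise standard deviation 0.2 and 500 noise realizations, mirroring the settings above.
decomposer = CEEMDAN(trials=500, epsilon=0.2)
imfs = decomposer.ceemdan(signal)            # intrinsic mode function components
residue = signal - imfs.sum(axis=0)          # remaining trend term
print(imfs.shape, residue.shape)
```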
CEEMDAN-SE-LSTM Model. Based on the above models, sample entropy is introduced as the basis for IMF component reconstruction. Starting from time series complexity, the sample entropy quantitatively describes the complexity and regularity of the system so as to judge the probability of generating new patterns. The larger the calculated entropy value, the more complex the time series and the higher the probability of generating a new mode; conversely, the simpler the sequence, the lower the probability of generating a new mode. The sample entropy is calculated as follows: (1) For a given time series j(1), ..., j(n), a set of z-dimensional vectors j_z(1), ..., j_z(n − z + 1) is formed, where j_z(x) = {j(x), j(x + 1), ..., j(x + z − 1)}. (2) The distance between j_z(x) and j_z(y) is defined as the maximum absolute difference between the two corresponding elements, denoted d[j_z(x), j_z(y)]. (3) Given a threshold r, for each x we count the number of indices y for which d[j_z(x), j_z(y)] < r, denoted T_z(x). (4) We calculate the mean of all the values defined above, denoted H_z^r. (5) We repeat the above steps with dimension z + 1 to obtain H_(z+1)^r. When n is finite, the estimated value of the sample entropy is SampEn(z, r, n) = −ln(H_(z+1)^r / H_z^r). Based on the characteristics of sample entropy in judging sequence complexity and the probability of new patterns, this paper introduces the calculation of the sample entropy as the basis of reconstruction. Different from the previous two models, which take low frequency and high frequency as the basis for reconstruction, this model needs to calculate the sample entropy: the closer the sample entropy values, the more similar the corresponding components and the more consistent their fluctuations. The prediction process of this model is shown in Figure 2.
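A direct NumPy implementation of the sample entropy steps listed above; the function and variable names are ours, and the 0.2·std tolerance is a common default rather than a value taken from the paper.

```python
import numpy as np

def sample_entropy(y, m=2, r=None):
    """Sample entropy of a 1-D series: -log(A/B), where B counts template
    matches of length m and A counts matches of length m+1 (Chebyshev distance)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if r is None:
        r = 0.2 * y.std()  # common default tolerance

    def count_matches(dim):
        # All overlapping templates of length `dim`.
        templates = np.array([y[i:i + dim] for i in range(n - dim + 1)])
        count = 0
        for i in range(len(templates)):
            # Chebyshev (max-abs) distance to every later template (no self-matches).
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist < r)
        return count

    # Simplified: template counts for m and m+1 differ by one, negligible for long series.
    B = count_matches(m)
    A = count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Components with similar sample entropy values would be grouped together
# before being fed to the LSTM, as described above.
```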
Framework Design. Based on the hierarchical structure of the credit risk assessment public opinion index system, we use the improved AHP combined with LSTM to establish the AHP-LSTM credit risk early warning model and to carry out the credit risk assessment network public opinion early warning analysis. The framework of the early warning model is shown in Figure 3. The modeling process of the AHP-LSTM model proposed in this paper is as follows: (1) The AHP algorithm is used to analyze the training data set samples, the data feature components are obtained, and a new sample set is formed. (2) The LSTM network is built. We take the training set as the input of the LSTM network and the samples from step 1 as the expected output of the LSTM network. (3) We set the LSTM parameters and perform network training. (4) We take the test data as the input of the LSTM, build a network public opinion warning model according to the expected output, and carry out the credit risk assessment and warning. Model Training. Several credit risk data sets are selected as samples and trained with the AHP-LSTM model. Firstly, the AHP-LSTM model parameters are determined. For the adjustment of the number of hidden layer nodes and the learning efficiency, the method of control variables is adopted, and nonkey parameters are determined first. Then, the learning efficiency η is set and attenuated at a certain speed, and the results are normalized to [0, 1.0]. In order to avoid the unsatisfactory effect of random initialization, it is necessary to conduct multiple training runs and finally determine the optimal parameters. Data Sources and Data Preprocessing. The experiment used two data sets. The first, downloaded from the LendingClub website, contains 887,979 loans issued between 2007 and 2015; its specific information is listed in Table 3, in which the default samples account for 7.6% of the total samples. The PPDAI data set is mainly used for experimental verification; its specific information is listed in Table 4. Data preprocessing mainly includes two steps: data cleaning and feature preprocessing. The first step is to conduct data cleaning on the samples. Firstly, features with more than 95% missing values are screened out, after checking whether these features are closely related to default. Then, "MISSING" is used to fill in the vacant values of categorical features. After outliers are removed from numerical features, the corresponding feature mean value is used to fill the vacant values. In feature preprocessing, the original features are processed to generate derived variables. In order to reduce the amount of computation, the number of levels of categorical features with many categories is reduced. The LendingClub and PPDAI data after final processing have 62 and 58 dimensions, respectively. Parameter Debugging and Comparative Experiments. In order to ensure that the proportion of different samples in the training set and the test set is the same as that in the original data set, stratified sampling is used for the split, due to the large gap between the number of normal performance samples and default samples in the natural data. In the case of unbalanced positive and negative samples, the model is evaluated on the preprocessed data, and the methods of Literature [18], Literature [19], Literature [20], and Literature [21] were selected as comparison methods. The results are listed in Table 5. Due to the imbalance of positive and negative samples in the data set, the recall rate and F1 value of all methods are relatively low, but the recall rate of the AHP-LSTM model is still 15.51% higher than that of the suboptimal Literature [20] method. As for the accuracy and other indicators, except for the Literature [19] method, which has a slightly higher accuracy, the proposed method has higher accuracy than the other methods. In addition, the average accuracy of this method is the highest among all methods, and the standard deviation is relatively small. Experimental results show that this method has better performance and stronger stability. In addition, in order to balance the positive and negative samples, an undersampling operation is performed on the preprocessed data. The experimental results are listed in Table 6. All indexes of the AHP-LSTM model are higher than those of the other methods, except that the Literature [21] model has a slightly higher accuracy. The average accuracy of this method is the highest among all methods, and the standard deviation is relatively small. The above two experiments show that the AHP-LSTM model still has strong stability in the case of unbalanced positive and negative samples. In the LendingClub data set, the ROC curves of the different methods before and after undersampling of the normal performance samples are shown in Figures 4 and 5. The closer the curve is to the upper left corner (0,1) of the ROC plot, the better the performance. As can be seen from Figures 4 and 5, under the same FPR, the TPR of the AHP-LSTM model method is higher than that of the other compared methods, indicating that the AHP-LSTM model method has better performance. To verify the stability and universality of the method in this paper, the model is used to evaluate the credit risk of borrowers in the PPDAI data set. The experimental results of each method are listed in Table 7.
The indexes of the AHP-LSTM model method and of the compared methods are mostly above 95%; in particular, the accuracy and the other indexes of the AHP-LSTM model method reach 100%, which is higher than the other compared methods, although the gap between the experimental results of the different methods is small. Display and Analysis of the Feature Importance Score. Based on the LendingClub data set, this paper constructs the credit risk assessment model of P2P online loan borrowers based on the AHP-LSTM model and computes the feature importance scores of the model so as to explain the model to some extent. The top ten features by feature importance are selected here, and their normalized importance values are listed in Table 8. Among them, the first, "initial rating", refers to the user credit rating assessed by the letter of credit, which is divided into three levels: A, B, and C. Each level is divided into categories 1, 2, and 3. Among them, A1 borrowers have the best credit rating, and different credit ratings reflect the credit quality of borrowers. "Certified status" indicates whether LendingClub has verified the borrower's income; verified income indicates that the borrower's income is real and relatively reliable. The "home state" refers to the state where the borrower lives when applying for a loan. In dealing with this feature, this paper divides the 50 states of the United States into three categories according to their economic development level. States with high economic development levels have many borrowers and a large number of defaults. The purpose of the loan is mainly divided into debt consolidation, credit card repayment, house decoration, and other situations, and people with different borrowing purposes have different default rates. Finally, as for the loan interest rate, the number of repayments due this month, and other characteristics, the higher the loan interest rate and amount, the greater the borrower's probability of default. Similarly, in the PPDAI data set, this paper uses the credit model based on the AHP-LSTM model to predict. The normalized values of the top 10 features in the feature importance score are listed in Table 9. In both data sets, the "initial rating" takes first place, so it can be used as an important reference index for the lender to predict whether the borrower will default. The "loan type" can be divided into safety standard receivables, e-commerce, the ordinary standard, etc. The ordinary standard is the most common type. Safety standard receivable refers to the standard in which the lender's amount of safety standard receivables is greater than a certain value and the loan credit score is greater than a certain value. E-commerce means that the borrower has passed the e-commerce certification and the store runs well. It can be seen that the division of different populations has a certain influence on the prediction results of the model. In addition, mobile phone, household registration, and other certifications reflect the authenticity of the information filled in by borrowers, which is of certain importance to model prediction.
To sum up, the model can screen out the features that greatly impact the prediction of whether a borrower defaults. Conclusion With the continuous development of the financial industry, consumer financial risks greatly impact the market and individuals. The accuracy of personal credit risk assessment plays a positive role in reducing the losses of banks, consumer finance, and other lending institutions, which is conducive to the stability of the market. Based on the analytic hierarchy process (AHP) and the LSTM model, this paper evaluates individual credit risk through the improved AHP and the optimized AHP-LSTM model. Based on the LendingClub and PPDAI data sets, the experiment uses the AHP-LSTM model method to classify and predict. It is compared with the random forest and the wide and deep model. Experimental results show that the performance of this method is superior to other comparison methods in both data sets, especially in the case of unbalanced data sets. In addition, this paper explains the prediction results of the model through the measure of feature importance, which is in line with people's intuitive and objective understanding. In order to solve the problem of sample class imbalance, this paper simply uses undersampling technology to balance the model. In follow-up work, cost-sensitive learning or other more effective class imbalance learning methods can be combined to further improve the model performance. In addition, to enhance the practicability and stability of the model, it can be applied more fully to anti-cheating scenarios. However, when the dimension of the data features is high and sparse, the algorithm in this paper may not be able to find the optimal subspace, which is also a direction for further optimization.
Figure 3: Framework of the AHP-LSTM credit risk assessment network public opinion warning model.
Figure 4: ROC curve comparison of different methods in the original data set.
Figure 5: ROC curve comparison of different methods in the undersampled data set.
Table 1: Definition of the primary property of the 1-9 scale.
Table 2: Definition of the secondary property of the 1-9 scale.
Table 3: Information description of the LendingClub data set (amount of loan, amount of promised repayment, number of maturities, interest rate of loan, sum of interest so far, total amount of payment received recently, month of initiating loan, outstanding principal amount, etc.).
Table 4: Information description of the PPDAI data set.
Table 5: Performance comparison of methods in the original data set (unit: %).
Table 6: Performance comparison of methods in the undersampled data set (unit: %).
Table 7: Performance comparison of methods in the PPDAI data set (unit: %).
Table 8: Feature importance scores in the LendingClub data set.
Table 9: Feature importance scores in the PPDAI data set.
6,552
2022-07-16T00:00:00.000
[ "Computer Science", "Business", "Economics" ]
Heterogeneous Graph Contrastive Learning With Meta-Path Contexts and Adaptively Weighted Negative Samples Heterogeneous graph contrastive learning has received wide attention recently. Some existing methods use meta-paths, which are sequences of object types that capture semantic relationships between objects, to construct contrastive views. However, most of them ignore the rich meta-path context information that describes how two objects are connected by meta-paths. Further, they fail to distinguish negative samples, which could adversely affect the model performance. To address the problems, we propose MEOW, which considers both meta-path contexts and weighted negative samples. Specifically, MEOW constructs a coarse view and a fine-grained view for contrast. The former reflects which objects are connected by meta-paths, while the latter uses meta-path contexts and characterizes details on how the objects are connected. Then, we theoretically analyze the InfoNCE loss and recognize its limitations for computing gradients of negative samples. To better distinguish negative samples, we learn hard-valued weights for them based on node clustering and use prototypical contrastive learning to pull close embeddings of nodes in the same cluster. In addition, we propose a variant model AdaMEOW that adaptively learns soft-valued weights of negative samples to further improve node representation. Finally, we conduct extensive experiments to show the superiority of MEOW and AdaMEOW against other state-of-the-art methods. I. INTRODUCTION Heterogeneous information networks (HINs) are prevalent in the real world, such as social networks, citation networks, and knowledge graphs. In HINs, nodes (objects) are of different types to represent entities, and edges (links) are also of multiple types to characterize various relations between entities. For example, in Facebook, we have entities like users, posts, photos and groups; users can publish posts, upload photos and join groups. Compared with homogeneous graphs where all the nodes and edges are of a single type, HINs contain richer semantics and more complicated structural information. To further enrich the information of HINs, nodes are usually associated with labels. Since object labeling is costly, graph neural networks (GNNs) [1]-[3] have recently been applied for classifying nodes in HINs and have been shown to achieve superior performance. Despite the success, most existing heterogeneous graph neural network (HGNN) models require a large amount of training data, which is difficult to obtain. To address the problem, self-supervised learning, which is in essence unsupervised learning, has been applied in HINs [4], [5]. The core idea of self-supervised learning is to extract supervision from the data itself and learn high-quality representations with strong generalizability for downstream tasks. In particular, contrastive learning, as one of the main self-supervised learning types, has recently received significant attention. Contrastive learning aims to construct positive and negative pairs for contrast, following the principle of maximizing the mutual information (MI) [6] between positive pairs while minimizing that between negative pairs. Although some graph contrastive learning methods for HINs have already been proposed [4], [5], [7], most of them suffer from the following two main challenges: contrastive view construction and negative sample selection.
On the one hand, to construct contrastive views, some methods utilize meta-paths [8], [9]. A meta-path, which is a sequence of object types, captures the semantic relation between objects in HINs. For example, if we denote the object types User and Group in Facebook as "U" and "G", respectively, the meta-path User-Group-User (UGU) expresses the co-participation relation. Specifically, two users u_1 and u_2 are UGU-related if a path instance u_1 - g - u_2 exists, where g is a group object and describes the contextual information on how u_1 and u_2 are connected. The use of meta-paths can identify a set of path-based neighbors that are semantically related to a given object and provide different views for contrast. However, existing contrastive learning methods omit the contextual information in each meta-path view. For example, HeCo [9] takes meta-paths as views, but it only uses the fact that two objects are connected by meta-paths and discards the contexts of how they are semantically connected, which we call meta-path contexts and which can be very influential in the classification task. For example, a group can provide valuable hints on a user's topic interests. Therefore, contrasting meta-path views with rich contexts is a necessity.
On the other hand, negative sample selection is another challenge to be addressed. Note that most existing graph contrastive learning methods [10]-[12] are formulated in a sampled noise contrastive estimation framework. For each node in a view, random negative sampling from the rest of the intra-view and inter-view nodes is widely adopted. However, this could introduce many easy negative samples and false negative samples. Easy negative samples are less informative and easily lead to the vanishing gradient problem [13], while false negative samples can adversely affect the learning process by providing incorrect information. Recently, some works [13]-[15] have sought to identify hard negative samples to improve the discriminative power of encoders in HINs. Despite their success, most of them fail to distinguish hard negatives from false ones. While ASA [14] is proposed to solve the issue, it is specially designed for the link prediction task and can only generate negative samples for objects based on one type of relation in HINs, which restricts its wide applicability. Since there is no clear-cut boundary between false negatives and hard ones, how to balance the exploitation of hard negatives and false negatives remains to be investigated.
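The UGU example can be made concrete with a few lines of code. The sketch below is purely illustrative (the membership dictionary and variable names are invented, not from the paper): it enumerates UGU path instances in a toy user-group graph and records, for every pair of users, both the coarse fact that they are connected by the meta-path and the meta-path context, i.e., the shared group objects through which they are connected.

```python
from collections import defaultdict

# Hypothetical toy data: user -> set of groups the user has joined.
membership = {
    "u1": {"g1", "g2"},
    "u2": {"g1"},
    "u3": {"g2", "g3"},
    "u4": {"g3"},
}

ugu_neighbors = defaultdict(set)   # coarse information: which users are UGU-related
ugu_contexts = {}                  # meta-path contexts: how they are related

users = list(membership)
for i, ui in enumerate(users):
    for uj in users[i + 1:]:
        shared_groups = membership[ui] & membership[uj]
        if shared_groups:          # at least one path instance ui - g - uj exists
            ugu_neighbors[ui].add(uj)
            ugu_neighbors[uj].add(ui)
            ugu_contexts[(ui, uj)] = shared_groups

print(dict(ugu_neighbors))   # e.g. u1 is UGU-related to u2 (via g1) and u3 (via g2)
print(ugu_contexts)          # the intermediate group objects are the contexts
```

Methods such as HeCo would keep only the first dictionary; the point made here is that the second one also carries useful signal.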
In this paper, to solve the two challenges, we propose a heterogeneous graph contrastive learning method MEOW with meta-path contexts and weighted negative samples.Based on meta-paths, we construct two novel views for contrast: the coarse view and the fine-grained view.The coarse view expresses that two objects are connected by meta-paths, while the fine-grained view utilizes meta-path contexts and describes how they are connected.In the coarse view, we simply aggregate all the meta-paths and generate node embeddings that are taken as anchors.In the fine-grained view, we construct positive and negative samples for each anchor.Specifically, for each meta-path, we first generate nodes' embeddings based on the meta-path induced graph.To further improve the generalizability of the model, we introduce noise by performing graph perturbations, such as edge masking and feature masking, on the meta-path induced graph to derive an augmented one, based on which we also generate node embeddings.In this way, each meta-path generates two embedding vectors for each node.After that, for each node, we fuse different embeddings from various meta-paths to generate its final embedding vector.Then for each anchor, its embedding vector in the fine-grained view is taken as a positive sample while those of other nodes are considered as negative samples.Subsequently, based on theoretical analysis, we recognize that the InfoNCE loss lacks the ability to discriminate negative samples that have the same similarity with an anchor during training.Therefore, we perform node clustering and use the results to grade the weights of negative samples in MEOW to distinguish negative samples.To further boost the model performance, we employ prototypical contrastive learning [16], where the cluster centers, i.e., prototype vectors, are used as positive/negative samples.This helps learn compact embeddings for nodes in the same cluster by pushing nodes close to their corresponding prototype vectors and far away from other prototype vectors.In addition, since the weights are hard-valued in MEOW, we further propose a variant model called AdaMEOW that can adaptively learn the soft-valued weights of negative samples, making negative samples more personalized and improving the learning ability of node representations.Finally, we summarize our contributions as: • We propose a novel heterogeneous graph contrastive learning model MEOW, which constructs a coarse view and a fine-grained view for contrast based on meta-paths, respectively.The former shows objects are connected by meta-paths, while the latter employs meta-path contexts and expresses how objects are connected by meta-paths. • We recognize the limitation of the InfoNCE loss based on theoretical analysis and propose a contrastive loss function with weighted negative samples to better distinguish negative samples.• We distinguish negative samples by performing node clustering and using the results to grade their weights. Based on the clustering results, we also introduce prototypical contrastive learning to help learn compact embeddings of nodes in the same cluster.Further, we propose a variant model, namely, AdaMEOW, which adaptively learns soft-valued weights for negative samples. II. RELATED WORK A. 
Heterogeneous Graph Neural Network Heterogeneous graph neural network (HGNN) has recently received much attention and there have been some models proposed.For example, HetGNN [17] aggregates information from neighbors of the same type with bi-directional LSTM to obtain type-level neighbor representations, and then fuses these neighbor representations with the attention mechanism.HGT [18] designs Transformer-like attention architecture to calculate mutual attention of different neighbors.HAN [2] employs both node-level and semantic-level attention mechanisms to learn the importance of neighbors under each metapath and the importance of different meta-paths, respectively.Considering meta-path contexts information, MAGNN [3] improves HAN by employing a meta-path instance encoder to incorporate intermediate semantic nodes.Further, Graph Transformer Networks (GTNs) [19] are capable of generating new graph structures, which can identify useful connections between unconnected nodes in the original graph and learn effective node representation in the new graphs.Despite the success, most of these methods are semi-supervised, which heavily relies on labeled objects. B. Graph Contrastive Learning (GCL) Contrastive learning aims to construct positive and negative pairs for contrast, whose goal is to pull close positive pairs while pushing away negative ones.Recently, some works have applied contrastive learning to graphs [20], [21].In particular, most of these approaches use data augmentation to construct contrastive views and adopt the following three main contrast mechanisms: (1) node-node contrast [22]- [24]; (2) graphgraph contrast [10], [25]; (3) node-graph contrast [26], [27].For example, GRACE [11] treats two augmented graphs by node feature masking and edge removing as two contrastive views and then pulls the representation of the same nodes close while pushing the remaining nodes apart.Inspired by SimCLR [28] in the visual domain, GraphCL [29] further extends this idea to graph-structured data, which relies on node dropping and edge perturbation to generate two perturbed graphs and then maximizes the two graph-level mutual information (MI).Moreover, DGI [30] is the first approach to propose the contrast between node-level embeddings and graphlevel embeddings, which allows graph encoders to learn local and global semantic information.In heterogeneous graphs, HeCo [9] takes two views from network schema and metapaths to generate node representations and perform contrasts between nodes.HDGI [31] extends DGI to HINs and learns high-level node representations by maximizing MI between local and global representations.However, most of these methods select negative samples by random sampling, which will introduce false negatives.These samples will adversely affect the learning process, so we need to distinguish them from hard negatives. C. 
Hard Negative Sampling In contrastive learning, easy negative samples are easily distinguished from anchors, while hard negative ones are similar to anchors.Recent studies [32] have shown that contrastive learning can benefit from hard negatives, so there are some works that explore the construction of hard negatives.The most prominent method is based on mixup [33], a data augmentation strategy for creating convex linear combinations between samples.In the area of computer vision, Mochi [34] measures the distance between samples by inner product and randomly selects two samples from N nearest ones to be combined by mixup as synthetic negative samples.Further, CuCo [35] uses cosine similarity to measure the difference of nodes in homogeneous graphs.In heterogeneous graphs, STENCIL [15] uses meta-path-based Laplacian positional embeddings and personalized PageRank scores for modeling local structural patterns of the meta-path-induced view.However, these methods either fail to distinguish hard negative samples from false ones or are built on one type of relation in HINs, which restricts the wide applicability of these models. III. PRELIMINARY In this section, we formally define some related concepts used in this paper. Definition 1. Heterogeneous Information Network (HIN). An HIN is defined as a graph G = (V, E), where V is a set of nodes and E is a set of edges, each represents a binary relation between two nodes in V. Further, G is associated with two mappings: (1) node type mapping function ϕ : V → T and (2) edge type mapping function ψ : E → R, where T and R denote the sets of node and edge types, respectively. between nodes of types T 1 and T l+1 , where • denotes the composition operator on relations.If two nodes x i and x j are related by the composite relation R, then there exists a path that connects x i to x j in G, denoted by p xi⇝xj .Moreover, the sequence of nodes and edges in p xi⇝xj matches the sequence of types T 1 , ..., T l+1 and relations R 1 , ..., R l according to the node type mapping ϕ and the edge type mapping ψ, respectively.We say that p xi⇝xj is a path instance of P, denoted by p xi⇝xj ⊢ P. Definition 3. Meta-path Context [1].Given two objects x i and x j that are related by a meta-path P, the meta-path context is the set of path instances of P between x i and x j .Definition 4. Heterogeneous Graph Contrastive Learning.Given an HIN G, our task is to learn node representations by constructing positive and negative pairs for contrast.In this paper, we only focus on one type of nodes, which are considered as target nodes. IV. METHODOLOGY In this section, we introduce our method MEOW and the variant model AdaMEOW.The general model diagram is shown in Fig. 
1. We perform feature transformation and neighbor filtering as preprocessing steps. First, we map the feature vectors of the different types of nodes into the same dimension (Step ①) and identify a set of neighbors for each node based on each meta-path (Step ②). Then, we construct a coarse view by aggregating all meta-paths (Step ③), while constructing a fine-grained view with each meta-path's contextual semantic information (Step ④). After that, we fuse the different embeddings from the various meta-paths in the fine-grained view through the attention mechanism (Step ⑤). We take node embeddings in the coarse view as anchors and those in the fine-grained view as the positive and negative samples. To distinguish false negative samples from hard negative samples, we perform clustering and assign weights to the negative samples based on the clustering results (Step ⑥). Finally, to further boost the model performance, we use prototypical contrastive learning to calculate the contrastive loss and the prototypical loss based on the node embedding vectors under the coarse view and the fine-grained view and on the clustering results (Step ⑦). In addition, to obtain adaptive negative sample weights, we propose the variant AdaMEOW, which uses an MLP to learn the weights instead of clustering and then calculates the contrastive loss. Next, we describe each component in detail.

A. Node Feature Transformation
Since an HIN is composed of different types of nodes and each type has its own feature space, we first need to preprocess node features to transform them into the same space. Specifically, for each object x_i of type T, we use the type-specific mapping matrix W_T^(1) to transform the raw features x_i into

h_i = σ(W_T^(1) x_i + b_T),   (1)

where h_i ∈ R^d is the projected initial embedding vector of x_i, σ(·) is an activation function, and b_T denotes the bias vector.

B. Neighbor Filtering
Given an object x, meta-paths can be used to derive its multi-hop neighbors with specific semantics. When meta-paths are long, the number of neighbors related to x can be very large. Directly aggregating information from all these neighbors to generate x's embedding would be time-consuming. On the other hand, the irrelevant neighbors of x cannot provide useful information to predict x's label and could adversely affect the quality of the generated embedding of x. Therefore, we filter x's meta-path induced neighbors and select those most relevant to x. Inspired by [1], we adopt PathSim [36] to measure the similarity between objects. Specifically, given a meta-path P, the similarity between two objects x_i and x_j of the same type w.r.t. P is computed by

s(x_i, x_j) = 2 |{p_(x_i⇝x_j) : p ⊢ P}| / ( |{p_(x_i⇝x_i) : p ⊢ P}| + |{p_(x_j⇝x_j) : p ⊢ P}| ),   (2)

where p_(x_i⇝x_j) is a path instance between x_i and x_j. Based on the similarities, for each object we select its top-K neighbors with the largest similarity. The removal of irrelevant neighbors significantly reduces the number of neighbors for each object, which further improves the model efficiency. After neighbor filtering, the adjacency matrix induced by meta-path P is denoted as A_P.
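As a concrete, purely illustrative sketch of the neighbor-filtering step: for a symmetric meta-path such as PAP or UGU, the path-instance counts in Equation 2 are the entries of the commuting matrix B Bᵀ of the bipartite incidence matrix B between target and intermediate objects. The matrix B, the value of k, and the function name below are assumptions made for the example, not part of the paper's implementation.

```python
import numpy as np

def topk_pathsim_neighbors(B: np.ndarray, k: int) -> np.ndarray:
    """Top-k PathSim neighbors for a symmetric meta-path induced by incidence B."""
    M = B @ B.T                                 # M[i, j] = number of path instances i ~> j
    diag = np.diag(M)
    pathsim = 2.0 * M / (diag[:, None] + diag[None, :] + 1e-12)
    np.fill_diagonal(pathsim, -np.inf)          # do not select a node as its own neighbor
    return np.argsort(-pathsim, axis=1)[:, :k]  # indices of the k most relevant neighbors

# Toy membership matrix: 4 target objects x 3 intermediate objects.
B = np.array([[1, 1, 0],
              [1, 0, 0],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
print(topk_pathsim_neighbors(B, k=2))
```

Keeping only each node's top-K neighbors from this ranking yields the filtered adjacency matrix A_P used by the two views.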
C. Coarse View
We next construct the coarse view to describe which objects are connected by meta-paths. Given a set of meta-paths, each meta-path P can induce its own adjacency matrix A_P. To provide a coarse view of the connectivity between objects induced by meta-paths, we fuse the meta-path induced adjacency matrices into a single matrix A ∈ R^(|V|×|V|) over all m meta-paths, where m is the number of meta-paths and |V| is the number of target nodes. After that, we feed the node embeddings calculated by Equation 1 together with A into a two-layer GCN encoder (Equation 3) to obtain the representations of the nodes in the coarse view. Specifically, for node x_i, we obtain its coarse representation z_i^c.

D. Fine-grained View
The fine-grained view characterizes how two objects are connected by meta-paths, in contrast with the coarse view. Given a meta-path set PS = {P_1, ..., P_m}, for each meta-path P_u ∈ PS, let P_u = T_0 T_1 ... T_l, where the meta-path length is l + 1. The meta-path can link objects of type T_0 to those of type T_l via a series of intermediate object types. Since meta-path contexts are composed of path instances and capture details on how two objects are connected, we utilize meta-path contexts to learn fine-grained representations for objects. However, when l is large, directly handling each path instance as MAGNN [3] does could significantly degrade the model efficiency due to the numerous path instances between two objects, as pointed out in [1]. We instead use objects of the intermediate types of meta-path P_u to leverage the information of meta-path contexts. Specifically, given a meta-path P_u and an object x_i of type T_0, we denote N_i^(T_j) as x_i's j-hop neighbor set w.r.t. P_u. We then generate x_i's initial fine-grained embedding h_i^(P_u) by aggregating information from all its j-hop neighbors with j ≤ l (Equation 4), where the learnable parameter matrix W_uj^(2) corresponds to the j-hop neighbors w.r.t. P_u. After that, we feed the node embedding h_i^(P_u), which aggregates the meta-path context information, and the adjacency matrix A_(P_u) under the meta-path into a two-layer GCN encoder to generate x_i's fine-grained embedding z_i^(P_u) (Equation 5). Note that the encoder here is the same as that used in the coarse view (see Equation 3). Further, to improve the model generalizability, we introduce noise into the meta-path induced graph by performing graph augmentations such as edge masking and feature masking. After the perturbed graph is generated, we feed it into Equation 5 to generate the node embedding z̃_i^(P_u). In this way, for each meta-path P_u and object x_i, we generate two embeddings z_i^(P_u) and z̃_i^(P_u). Given the meta-path set PS = {P_1, ..., P_m}, we can thus generate Z_i = {z_i^(P_u), z̃_i^(P_u) | P_u ∈ PS} for node x_i from the various meta-paths. Finally, we fuse these embeddings by an attention mechanism: an attention score is computed for each embedding in Z_i from tanh(W_att z + b_att) and averaged over the set of target nodes V, and the scores are normalized into attention weights β_s, where W_att ∈ R^(d×d) is the weight matrix and b_att is the bias vector. We then generate x_i's fine-grained embedding vector z_i^f as the attention-weighted combination of the embeddings in Z_i.
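A compact PyTorch sketch of this two-view construction is given below. It is a rough illustration under explicit assumptions: the meta-path adjacencies are fused by simple averaging, the GCN uses standard symmetric normalization, and the attention fusion follows the common semantic-attention pattern; none of these choices, nor the class and variable names, are taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def normalize_adj(a: torch.Tensor) -> torch.Tensor:
    """Symmetrically normalized adjacency with self-loops, as in standard GCNs."""
    a_hat = a + torch.eye(a.size(0))
    d = a_hat.sum(dim=1).pow(-0.5)
    return d.unsqueeze(1) * a_hat * d.unsqueeze(0)

class TwoLayerGCN(nn.Module):
    """Shared encoder for both views (the paper reuses one encoder, cf. Equation 3)."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, a_norm, h):
        return a_norm @ self.w2(F.relu(a_norm @ self.w1(h)))

class AttentionFusion(nn.Module):
    """Fuse the per-meta-path embeddings of every node into one fine-grained vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.q = nn.Parameter(torch.randn(dim))

    def forward(self, z_list):                       # list of (N, d) tensors
        z = torch.stack(z_list, dim=0)               # (s, N, d), one slice per embedding
        scores = (torch.tanh(self.proj(z)) @ self.q).mean(dim=1)   # one score per slice
        beta = torch.softmax(scores, dim=0)          # attention weights over slices
        return (beta.view(-1, 1, 1) * z).sum(dim=0)  # (N, d) fused embedding

# Toy usage on a random 5-node graph with two meta-path induced adjacencies.
N, d = 5, 8
adjs = [torch.randint(0, 2, (N, N)).float() for _ in range(2)]
h = torch.randn(N, d)
encoder = TwoLayerGCN(d, 16, d)
z_coarse = encoder(normalize_adj(torch.stack(adjs).mean(dim=0)), h)   # coarse view
z_fine = AttentionFusion(d)([encoder(normalize_adj(a), h) for a in adjs])
print(z_coarse.shape, z_fine.shape)
```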
E. Theoretical Analysis on the InfoNCE Loss
In contrastive learning, different negative samples have different characteristics, so their impact should not be the same. For a given anchor, some negative samples are easy to distinguish, while some hard negative samples may have a certain degree of similarity to the anchor but belong to a different class. Therefore, in order to keep negative samples away from the anchor, it is necessary to distinguish the effects of different negative samples on the anchor. With this in mind, we first state Theorem 1.

Theorem 1. Consider the contrastive InfoNCE loss [6] that uses the dot product to measure node similarity, denoted as L. Let f(x) represent the learned embedding of node x. Given an anchor x_i, its positive sample x_k, and two of its negative samples x_t1 and x_t2, back propagation gives:
(1) if f(x_i)^T f(x_t1) ≤ f(x_i)^T f(x_t2), then ||∂L_i/∂f(x_t1)|| ≤ ||∂L_i/∂f(x_t2)||;
(2) for any negative sample x_t, ||∂L_i/∂f(x_t)|| ≤ ||∂L_i/∂f(x_k)||.

Proof. The InfoNCE contrastive loss function is defined as

L_i = -log [ exp(sim(f(x_i), f(x_k))/τ) / ( exp(sim(f(x_i), f(x_k))/τ) + Σ_{t=1}^{n} exp(sim(f(x_i), f(x_t))/τ) ) ],

where sim(f(x_i), f(x_j)) measures the similarity between the node embeddings f(x_i) and f(x_j), τ is a hyperparameter denoting the temperature, and n is the number of negative samples.

Typically, the dot product is used as the similarity function, and the InfoNCE loss can be further simplified as

L_i = -log [ exp(f(x_i)^T f(x_k)/τ) / ( exp(f(x_i)^T f(x_k)/τ) + Σ_{t=1}^{n} exp(f(x_i)^T f(x_t)/τ) ) ].   (9)

For a particular negative sample x_t, t = 1, 2, ..., n, the gradient w.r.t. its embedding is

∂L_i/∂f(x_t) = (1/τ) · [ exp(f(x_i)^T f(x_t)/τ) / ( exp(f(x_i)^T f(x_k)/τ) + Σ_{t'=1}^{n} exp(f(x_i)^T f(x_{t'})/τ) ) ] · f(x_i).   (10)

For all the negative samples of anchor x_i, the gradient magnitude depends only on f(x_i)^T f(x_t), because f(x_i) determines the direction of back propagation and τ is equal for all the negative samples. We can thus derive inequality (1) in Theorem 1, which states that a negative sample with a larger dot product with the anchor receives a gradient of larger magnitude.

In addition, we can also compute the gradient of the positive sample by taking the derivative w.r.t. f(x_k):

∂L_i/∂f(x_k) = -(1/τ) · [ Σ_{t'=1}^{n} exp(f(x_i)^T f(x_{t'})/τ) / ( exp(f(x_i)^T f(x_k)/τ) + Σ_{t'=1}^{n} exp(f(x_i)^T f(x_{t'})/τ) ) ] · f(x_i).   (11)

Compared with Equation 11, the gradient for a single negative sample in Equation 10 carries an additional softmax factor over the negative samples, whose value lies between 0 and 1, so we can derive inequality (2). Equality holds if and only if this softmax factor equals one, which is generally very difficult to satisfy.

From Theorem 1, easy negative samples that are less similar to the anchor lead to a smaller gradient magnitude, while hard negative samples yield a larger gradient magnitude. This is because easy negative samples are already far enough from the anchor and do not need much attention, whereas hard negative samples need larger gradients to push them apart. Further, the comparison between Equation 10 and Equation 11 shows that the gradient magnitude of the positive sample is generally much larger than that of the negative samples, due to the additional softmax term in Equation 10 that is generally smaller than 1. In summary, in each epoch, negative samples with higher similarity to the anchor are pushed away from the anchor more strongly than those with lower similarity, and the positive sample has a larger update magnitude than the negative samples, resulting in closer proximity to the anchor.
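The gradient behaviour established in Theorem 1 is easy to check numerically with automatic differentiation. The snippet below is only an illustrative sanity check on random vectors with an arbitrary τ, not part of the paper's pipeline: it confirms that, under the InfoNCE loss with dot-product similarity, the gradient norm at a negative sample grows with its similarity to the anchor and never exceeds the gradient norm at the positive sample.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
tau = 0.5
anchor = torch.randn(8)
pos = torch.randn(8, requires_grad=True)
negs = torch.randn(5, 8, requires_grad=True)

# InfoNCE with one positive (index 0) and five negatives.
logits = torch.cat([(anchor * pos).sum().view(1), negs @ anchor]) / tau
loss = -F.log_softmax(logits, dim=0)[0]
loss.backward()

sim = negs.detach() @ anchor          # dot-product similarity of each negative
grad_norm = negs.grad.norm(dim=1)     # gradient norm at each negative sample
order = torch.argsort(sim)
print(grad_norm[order])               # non-decreasing with similarity (inequality 1)
print(pos.grad.norm(), grad_norm.max())   # positive gradient dominates (inequality 2)
```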
We next randomly select a paper and an author as the anchor node from the ACM dataset [37] and the DBLP dataset [3], respectively, and study the relationship between node similarity and gradient magnitude of loss functions w.r.t.negative samples.As shown in Figure 2, the orange curves in both subfigures show that the gradient magnitude of the InfoNCE loss is proportionally to the similarity between negative samples with the anchor node, which is consistent with Equation 10.Although the InfoNCE loss can distinguish samples from different classes to some extent, the gradient magnitude only depends on node similarity, which lacks the flexibility to capture the variability in node embeddings.For example, suppose there are three samples whose representations are: x 1 (1, 1), x 2 (1, 0) and x 3 (0, 1), respectively.We take x 1 as the anchor.For the sample pairs (x 1 , x 2 ) and (x 1 , x 3 ), their similarity values are both 1, but the semantic information contained in x 2 and x 3 is completely different, and even opposite to each other.This should further lead to different gradient update directions.Therefore, using only node similarity to determine the gradient of a negative sample is insufficient.We thus need to introduce other metrics to capture the fine-grained information of embeddings of negative samples. F. The MEOW model In this section, we perform contrastive learning to learn node embeddings with the constructed coarse view and fine-grained view and propose our loss function with additional weights for negative sample pairs.Before contrast, we use a projection head (one-layer MLP) to map node embedding vectors to the space where contrastive loss can be applied.Specifically, for x i , we have: After that, we take representations in the coarse view as anchors and construct the positive and negative samples from the fine-grained view.For each node x i , we take z c i as the anchor, z f i as the corresponding positive sample, and all other node representations in the fine-grained view as negative samples.Further, to utilize hard negatives and mitigate the adverse effect of false negatives, we learn the importance of negative samples.In particular, we perform node clustering based on the fine-grained representations for M times, where the number of clusters are set as Then, we assign different weights to negative samples of a node based on the clustering results.Intuitively, when the number of clusters is set large, each cluster will become compact.Then compared with hard negatives, false negatives and easy negatives are more likely to be assigned in the same cluster and different clusters with the anchor node, respectively.Therefore, we use γ ij to denote the weight of node x j as a negative sample to node x i and set it as a function F of clustering results. 
For simplicity, we define the function F to count the number of times that the sample x_j and the anchor x_i are assigned to different clusters across the M clustering runs; that is, γ_ij is the number of clustering results C_r, r = 1, ..., M, in which x_i and x_j fall into different clusters, where C_r is the r-th clustering result. In particular, γ_ij can be understood as the push strength. For false negatives, γ_ij should be small to ensure that they are not pushed away from the anchor. For hard negatives, γ_ij is expected to be much larger, because in this way the anchor and the hard negatives can be discriminated. Since easy negatives are distant from the anchor, the model is insensitive to γ_ij over a wide range of values. Based on γ_ij, we formulate our contrastive loss function by weighting each negative term of the InfoNCE loss:

L_i^con = -log [ exp(z_i^c · z_i^f / τ) / ( exp(z_i^c · z_i^f / τ) + Σ_{j≠i} γ_ij exp(z_i^c · z_j^f / τ) ) ],   (13)

where τ is a temperature parameter. Similarly to Theorem 1, we also analyze our loss function from the perspective of the gradient in the back-propagation process.

Theorem 2. For the proposed loss function in Equation 13, we use the node embedding function f(·) to overload z. Then, given an anchor node x_i with positive sample x_k and one of its negative samples x_t, the gradient of L_i^con w.r.t. f(x) at x_t is

∂L_i^con/∂f(x_t) = (1/τ) · [ γ_it exp(f(x_i)^T f(x_t)/τ) / ( exp(f(x_i)^T f(x_k)/τ) + Σ_{t'} γ_{it'} exp(f(x_i)^T f(x_{t'})/τ) ) ] · f(x_i).

Proof. As with the InfoNCE loss, our proposed loss function L_i^con for anchor x_i can be written as

L_i^con = -log [ exp(f(x_i)^T f(x_k)/τ) / ( exp(f(x_i)^T f(x_k)/τ) + Σ_{t'} γ_{it'} exp(f(x_i)^T f(x_{t'})/τ) ) ].

The gradient of the positive sample has the same form as for the InfoNCE loss, and taking the derivative w.r.t. the representation of the negative sample x_t yields the expression above.

The gradient magnitude now has an additional learnable parameter γ_it for x_t, which assigns personalized weights to negative samples that share the same similarity with the anchor. Compared to the original InfoNCE loss, our proposed loss function relies not only on the similarity between the anchor and the negative samples, but also on the characterization of the anchor and the negative samples during the optimization process. This can be further combined with the characterization of node pairs to adaptively adjust the push strength in the hidden space, thus improving the quality of the representation. For example, the negative samples x_2 (1, 0) and x_3 (0, 1) have the same similarity value with the anchor x_1 (1, 1). The learnable weights γ_12 and γ_13 make them more distinguishable, and the gradients of the two are different during back propagation.

To make the embeddings of nodes in the same cluster more compactly distributed in the latent space, we introduce an additional prototypical contrastive learning loss. In the r-th clustering, we consider the prototype vector c_i^r, i.e., the cluster center corresponding to node x_i, as a positive sample and the other prototype vectors as negative samples, and define

L_i^proto = -(1/M) Σ_{r=1}^{M} log [ exp(z_i^f · c_i^r / θ_i^r) / Σ_{j=1}^{k_r} exp(z_i^f · c_j^r / θ_j^r) ],   (16)

where θ_i^r is a temperature parameter representing the concentration estimate of the cluster C_i^r that contains node x_i. Following [16], we calculate θ_i^r from the total distance between the embeddings of the cluster members and the prototype, normalized by Q log(Q + α), where Q is the number of nodes in the cluster and α is a smoothing parameter that ensures small clusters do not have an overly large θ. Finally, we formulate our objective function L as

L = Σ_{x_i ∈ V} ( L_i^con + λ L_i^proto ),   (17)

where V is the set of target nodes and λ controls the relative importance of the two terms. The loss function can be optimized by stochastic gradient descent. To prevent overfitting, we further regularize all the weight matrices W mentioned above. The whole training procedure of the MEOW model is summarized in Algorithm 1.
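To make the two weighting schemes concrete, the sketch below contrasts MEOW's hard-valued weights, obtained by counting how often a pair is separated across several K-means runs, with the adaptive soft-valued weights of the AdaMEOW variant described in the next subsection. The shapes, the use of scikit-learn's K-means, SUM pooling, and all names are assumptions made for illustration; this is not the released implementation, and the prototypical term is omitted.

```python
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

def hard_weights(z_fine: torch.Tensor, cluster_sizes) -> torch.Tensor:
    """gamma[i, j] = number of clusterings in which nodes i and j are separated."""
    z = z_fine.detach().cpu().numpy()
    n = z_fine.size(0)
    gamma = torch.zeros(n, n)
    for k in cluster_sizes:                       # e.g. U = {100, 300} in the paper
        labels = torch.as_tensor(KMeans(n_clusters=k, n_init=10).fit_predict(z))
        gamma += (labels.unsqueeze(0) != labels.unsqueeze(1)).float()
    return gamma

class SoftWeights(nn.Module):
    """AdaMEOW-style adaptive weights in (0, 1) for every (anchor, negative) pair."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                 nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, z_coarse, z_fine):
        pooled = z_coarse.unsqueeze(1) + z_fine.unsqueeze(0)   # SUM pooling, (N, N, d)
        return self.mlp(pooled).squeeze(-1)                    # (N, N) soft weights

def weighted_contrastive_loss(z_coarse, z_fine, gamma, tau=0.5):
    """InfoNCE with per-pair weights on the negative terms (diagonal = positives)."""
    sim = torch.exp(z_coarse @ z_fine.t() / tau)               # (N, N) similarities
    pos = sim.diagonal()
    off_diag = 1.0 - torch.eye(sim.size(0))
    neg = (gamma * sim * off_diag).sum(dim=1)
    return -torch.log(pos / (pos + neg)).mean()

# Toy usage (hard weights need at least max(cluster_sizes) nodes).
z_c, z_f = torch.randn(300, 64), torch.randn(300, 64, requires_grad=True)
gamma_hard = hard_weights(z_f, cluster_sizes=[20, 50])
loss_hard = weighted_contrastive_loss(z_c, z_f, gamma_hard)
gamma_soft = SoftWeights(64)(z_c, z_f)                         # adaptive variant
loss_soft = weighted_contrastive_loss(z_c, z_f, gamma_soft)
print(loss_hard.item(), loss_soft.item())
```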
The AdaMEOW model The weights we calculate in MEOW based on the clustering results can actually provide more fine-grained differentiations of negative samples.However, these weights are hard values and could limit node representation learning.Therefore, it is necessary to explore alternative approaches that can better facilitate flexible weight calculations, thereby enhancing the overall performance of the model.Instead of clustering, we propose an enhanced model AdaMEOW with an adaptive approach to learn soft-valued weights.Specifically, we apply a two-layer MLP to learn the weights according to the node representations under both the coarse and fined-grained views, which is formally formulated as: (1) Ada are the learnable parameter matrices and b Ada are the bias vectors.Note that σ (1) is the Tanh function and σ (2) is the sigmoid function, which ensures that the weights are soft values ranging from 0 and 1. H(z c i , z f j ) is a pooling function between z c i under the coarse view and z f i under the fine-grained view, and we use SUM as the pooling function.The overall objective is given by: where γ ij is a soft-valued weight between anchor x i and negative sample x j .We distinguish it from the hard-valued weight by using the tilde (∼) symbol.For each anchor node, the two-layer MLP can adaptively learn the weights of negative samples, and further lead to more informative gradients for them.As shown by the blue dots in Figure 2, optimizing Equation 19 allows for a more diverse set of gradient magnitudes for the same node similarity values.This further shows that γ ij can capture the individual characteristics of different negative samples and their corresponding gradients are not only determined by the similarity with the anchor.The contrastive part of the AdaMEOW model is summarized in Algorithm 2. H. Complexity analysis The major time complexity in our proposed model comes from GCN and MLP.Let d max be the maximum initial dimensions of different types of nodes and d A be the average number of non-zero entries in each row of the adjacency matrix for each meta-path induced graph.In Section IV-A, The time complexity for MLP is O(Bd max d) where B denotes the batch size and d is the dimension of the projected initial embedding vector.The GCN encoder used in the construction of the two views has a time complexity of O(Bd A d + Bd 2 ).After constructing the views, the time complexity of the contrastive loss function is O(B 2 d).For MEOW, node clustering requires O(t(k Compute the normalized adjacency matrix: Calculate temperature parameter θ r ; 21: end for 22: Construct L con , L proto , L using Eq.10-12; 23: Optimize L to update all parameters in the model.24: V. EXPERIMENTS A. Datasets To evaluate the performance of MEOW, we employ four real-world datasets: ACM [37], DBLP [3], Aminer [38] and Algorithm 2 The AdaMEOW model Input: The heterogeneous graph G = (V, E); the number of node type |T |; the number of target node |V |; the feature matrix X 1 , X 2 , • • • , X |T | ; a pre-defined meta-path set PS; Output: Target node embeddings for downstream tasks.1: ▷ The same steps as in Algorithm 1, from line 1 to line 16; 2: // Contrastive part 3: Calculate the weights of negative sample pairs γ ij using Eq.13; 4: Construct L based on weights using Eq.14; 5: Optimize L to update all parameters in the model. IMDB [39].The four datasets are benchmark HINs.We next define a classification task for each dataset. 
• ACM: ACM is an academic paper dataset.The dataset contains 4019 papers (P), 7167 authors (A), and 60 subjects (S).Links include P-A (an author publishes a paper) and P-S (a paper is based on a subject).We use PAP and PSP as meta-paths.Paper features are the bag-of-words representation of keywords.Our task is to classify papers into three areas: database, wireless communication, and data mining. • DBLP: DBLP is extracted from the computer science bibliography website.The dataset contains 4057 authors (A), 14328 papers (P), 20 conferences (C) and 7723 terms (T).Links include A-P (an author publishes a paper), P-T (a paper contains a term) and P-C (a paper is published on a conference).We consider the meta-path set {APA, APCPA, APTPA}.Each author is described by a bag-of-words vector of their paper keywords.Our task is to classify authors into four research areas: Database, Data Mining, Artificial Intelligence and Information Retrieval. • AMiner: Aminer is a bibliographic graphs.The dataset contains 6564 papers (P), 13329 authors (A) and 35890 references (R).Links include P-A (an author publishes a paper) and P-R (a reference for a paper).We consider the meta-path set {PAP, PRP}.Our task is to classify papers into four research areas. • IMDB: As a subset of Internet Movie Database, the dataset contains 4275 moives (M), 5432 actors (A), 2083 directios (D) and 7313 keywords (K).Links include M-A (an actor stars in a movie), M-D (a director directs a movie) and M-K (a movie contains a keyword).We consider the meta-path set {MAM, MDM, MKM}.Our task is to classify movies into three classes, i.e., Action, Comedy and Drama. B. Baselines We compare MEOW with 9 other state-of-the-art methods, which can be grouped into three categories: •[Methods specially designed for homogeneous graphs]: GraphSAGE [20] aggregates information from a fixed number of neighbors to generate nodes' embedding.GAE [40] is a generative method that generates representations by reconstructing the adjacency matrix.DGI [30] maximizes the agree- ment between node representations and a global summary vector. •[Semi-supervised learning methods in HINs]: HAN [2] is proposed to learn node representations using node-level and semantic-level attention mechanisms. •[Unsupervised learning methods in HINs]: HERec [41] utilizes the skip-gram model on each meta-path to embed induced graphs.HetGNN [42] aggregates information from different types of neighbors based on random walk with start.DMGI [27] constructs contrastive learning between the original network and a corrupted network on each meta-path and adds a consensus regularization to fuse node embeddings from different meta-paths.Mp2vec [43] generates nodes' embedding vectors by performing meta-path-based random walks.HeCo [9] constructs two views with meta-paths and network schema to perform contrastive learning across them. In particular, HeCo is the state-of-the-art heterogeneous contrastive learning model. C. Experimental Setup We implement MEOW with PyTorch and adopt the Adam optimizer to train the model.We fine-tune the learning rate from {5e-4, 6e-4, 7e-4}, the penalty weight on the l 2 -norm regularizer from {0, 1e-4, 1e-3} and the patience for early stopping from 10 to 40 with step size 5, i.e., we stop training if the total loss does not decrease for patience consecutive epochs.We set the dropout rate ranging from 0.0 to 0.9, and the temperature τ in Eq. 
13 from 0.1 to 1.0, both with step size 0.1.We set K in the neighbor filtering based on the average number of connections of all the objects under each metapath.For data augmentation, we fine-tune the masking rate for both features and edges from 0.0 to 0.6 with step size 0.1.We perform node clustering twice and set α = 5 in all datasets.Further, we set the number of clusters U to {100, 300}, {200, 700}, {500, 1200}, and {100, 500} in ACM, DBLP, Aminer and IMDB, respectively.We fine-tune the regularization weights λ in prototypical contrastive learning from {0.1, 1, 10}.For Aminer, since nodes are not associated with features, we first run metapath2vec with the default parameter settings from the original codes provided by the authors to construct nodes' initial feature vectors.For fair comparison, we set the embedding dimension as 64 and randomly run the experiments 10 times, and report the average results for all the methods.For other competitors, their results are directly reported from [9].We run all the experiments on a server with 32G memory and a single Tesla V100 GPU.We provide our code and data here: https://github.com/jianxiangyu/MEOW. D. Node Classification We use the learned node embeddings to train a linear classifier to evaluate our model.We randomly choose 20, 40, 60 labeled nodes per class as training set, and 1000 nodes as validation set and 1000 nodes for testing.We use Macro-F1, Micro-F1 and AUC as evaluation metrics.For all the metrics, the larger the value, the better the model performance.The results are reported in Table I.From the table, we see that MEOW achieves the suboptimal performance on ACM, DBLP and IMDB, and performs very well on Aminer in all the data splits.This shows the importance of meta-path contextual information and the validity of the contrastive views we designed.Compared with the state-of-the-art graph contrastive learning model Heco, MEOW achieves better performance on ACM, DBLP and IMDB.For example, the Macro-F1 score and the Micro-F1 score of Heco is 90.64% and 91.59% with 60 labeled nodes per class on DBLP, while MEOW is 93.49% and 94.13%.These results show the effectiveness of MEOW.While MEOW performs slightly worse than Heco in Macro-F1 and Micro-F1 on Aminer, it outperforms Heco in the AUC scores.This can be explained by the label imbalance on Aminer.Specifically, the number of objects in the label which has the maximum number of nodes is ∼ 7 times more than that in the label which has the minimum number of nodes.It is well known that when labeled objects are imbalanced, AUC is a more accurate metric than the other two.This further verifies that MEOW is effective. AdaMEOW achieves the best performance in most 36 cases.On the basis of MEOW, AdaMEOW has been improved on each dataset, especially IMDB.In the IMDB dataset, with 20 labeled nodes per class, the Micro-F1 score of AdaMEOW is 62.91% and the Macro-F1 score is 63.13% while the runnersup scores are only 56.89% and 57.01%.This can demonstrate that adaptive weights have stronger learning capability on datasets with more noise. E. 
Node Clustering We further perform K-means clustering to verify the quality of learned node embeddings.We adopt normalized mutual information (NMI) and adjusted rand index (ARI) as the evaluation metrics.For both metrics, the larger, the better.The results are reported in Table II.As we can see, on the ACM dataset, MEOW obtains about 16% improvements on NMI and 25% improvements on ARI compared to the best of the benchmark methods, demonstrating the superiority of our model.This is because the prototypical contrastive learning drives node representations to be more compact in the same cluster, which helps boost node clustering.AdaMEOW can further make the boundaries between classes more distinct, resulting in better performance than MEOW. F. Ablation Study We conduct an ablation study on MEOW and AdaMEOW to understand the characteristics of its main components.To show the importance of the prototypical contrastive learning regularization, we train the model with L con only and call this variant MEOW wp (without prototypical contrastive learning).To demonstrate the importance of distinguishing negative samples with different characteristics, another variant is to not learn the weights of negative samples.We call this MEOW ww (without weight).Moreover, we update nodes' embeddings by aggregating information without considering meta-path contexts in the fine-grained view and call this variant MEOW nc (no context).This helps us understand the importance of including meta-path contexts in heterogeneous graph contrastive learning.We report the results of 40 labeled nodes per class, which is shown in Fig. 4. From these figures, MEOW achieves better performance than MEOW wp.This is because the prototypical contrastive learning can drive nodes of the same label to be more compact in the latent space, which leads to better classification results.MEOW outperforms MEOW ww on three datasets.This further demonstrates the advantage of weighted negative samples.In addition, MEOW beats MEOW nc in all cases.This shows that when using meta-paths, the inclusion of meta-path contexts is essential for effective heterogeneous graph contrastive learning.The AdaMEOW model outperforms others in most cases.This demonstrates the significance of capturing the characteristics of negative samples and the effectiveness of learning their weights adaptively. G. Hyper-parameter Analysis We further perform a sensitivity analysis on the hyperparameters of our method.In particular, we study three main hyper-parameters in MEOW: the number of selected relevant neighbor in Pathsim , the relative importance λ of the two components of the loss function in Eq. 17 on the ACM dataset and the number of clusters mentioned in Eq. 13 and Eq.16.In our experiments, we vary one parameter each time with others fixed.Figure 5 illustrates the results of the first two hyperparameters with 20, 40, 60 labeled nodes per class w.r.t. the Micro-F1 scores.(Results on Macro-F1 and AUC scores exhibit similar trends, and thus are omitted for space limitation.) 
Figure 6 displays the Macro-f1, Micro-f1, and AUC scores with 40 labeled nodes per class for the number of clusters.The diagonal represents the results for clustering once, while the off-diagonal entries represent the results for clustering twice.From the figure, we see that 1) In the case of meta-path PAP (Paper-Author-Paper), the more neighbors selected, the better the performance of the model.However, for meta-path PSP (Paper-Subject-Paper), we find that the Micro-F1 score first rises and then drops, as the number of neighbors increases.This is because the co-authored papers are more likely to be in the same area, while papers in the same subject could be from different research domains.With the increase of neighbors, more noisy connections induced by PSP could degrade the model performance. 2) For the weight λ that controls the importance of the prototypical contrastive loss function, MEOW gives very stable performances over a wide range of parameter values.The Micro-F1 score largely decreases when λ is large enough.This is because a larger λ will encourage more compactness within each class.However, this may cause some hard samples to be assigned to the incorrect clusters and cannot be corrected during the training process, resulting in misclassification. 3) From the heat map, we can observe that when the clustering size is small, samples from different classes may mix within the same cluster, which is detrimental to both contrastive loss and prototypical loss, resulting in poor model performance.When the clustering size is too large, each node forms a cluster with its most similar node, or even each node forms an individual cluster.In this case, the prototypical loss and contrastive loss become similar, resulting in the weighted contrastive loss not having a significant impact and causing a slight decrease in performance. H. 
Case Study
In this section, we analyze the learned weights in Equation 19 through experiments. First, we evaluate the advantages of the weighted InfoNCE loss using three cases: NW (no weights), RW (random weights), and AW (adaptive weights). NW means all weights are set to 1, which is equivalent to the regular InfoNCE loss. RW means we randomly assign a weight between 0 and 1 to each node pair in each epoch. AW refers to our proposed variant model AdaMEOW. As can be seen in Table III, compared to NW, using random weights leads to some improvement. This is because different weight assignments for node pairs can influence the optimization direction of the model. However, compared to AW, RW lacks stability during training and does not consider the characteristics of the node pairs. Therefore, the results obtained with adaptive weights outperform the other two cases. Second, in Figure 7 we plot the maximum, mean, and minimum values of the weights over all node pairs to reflect the dynamic changes of the weights during training. We observe that, while there are some fluctuations, the weights exhibit an overall stable decreasing trend. This is because, as training progresses, the nodes in the latent space gradually acquire more discriminative representations, requiring only small gradient values for fine-tuning. Finally, after training on the ACM dataset for 500 epochs, we randomly select an anchor and show the learned weights for its negative samples in Figure 8. The overall trend of the weights is consistent with our expectations, as γ_ij adaptively adjusts its magnitude based on the characteristics of the samples. For false negative samples with high similarity, γ_ij is relatively small to ensure that they are not pushed away from the anchor. For hard negative samples, γ_ij is expected to be larger, so that the anchor and the hard negative samples can be distinguished. Further, based on Equation 15, easy negative samples that have small similarity with the anchor can be pushed farther away only when the weight γ_ij is set large. All these results help explain why our model performs better than the other baselines.

VI. CONCLUSION
We studied graph contrastive learning in HINs and proposed the MEOW model, which considers both meta-path contexts and weighted negative samples. Specifically, MEOW constructs a coarse view and a fine-grained view for contrast. In the coarse view, we took node embeddings derived by directly aggregating all the meta-paths as anchors, while in the fine-grained view, we utilized meta-path contexts and constructed positive and negative samples for the anchors. Afterwards, we conducted a theoretical analysis of the InfoNCE loss and recognized its limitations regarding the gradient magnitudes of negative samples. Therefore, we proposed a weighted loss function for negative samples. In MEOW, we distinguished hard negatives from false ones by performing node clustering and using the results to assign weights to negative samples. Additionally, we introduced prototypical contrastive learning, which helps learn compact embeddings of nodes in the same cluster. Further, we proposed a variant model called AdaMEOW, which adaptively learns soft-valued weights for negative samples instead of the hard-valued weights in MEOW. Finally, we conducted extensive experiments to show the superiority of MEOW and AdaMEOW against other state-of-the-art methods.
Fig. 1. The overall framework of the MEOW model. For details of each step, see Section IV.
Fig. 2. The relationship between node similarity sim(·) with a randomly selected anchor and the gradient magnitude of the loss functions w.r.t. negative samples, after training 500 epochs on (a) the ACM dataset and 800 epochs on (b) the DBLP dataset. The orange dots indicate the InfoNCE loss and the blue dots indicate the loss function adopted by AdaMEOW.
Fig. 4. The ablation study results with 40 labeled nodes per class.
Fig. 5. Hyper-parameter analysis on the ACM dataset. Here, K is the number of selected relevant neighbors w.r.t. a meta-path and λ controls the relative importance of the two components of the loss function in Eq. 17.
Fig. 7. Dynamic changes in weights during training on the ACM dataset.
Fig. 8. The relationship between similarity values and the learned weights γ_ij of negative samples in Equation 19, for a randomly selected anchor after training 500 epochs on the ACM dataset.
• We conduct extensive experiments comparing MEOW and AdaMEOW with 9 other state-of-the-art methods on node classification and node clustering tasks over four public HIN datasets. Our results show that MEOW achieves better performance than the other competitors, and AdaMEOW further improves on MEOW.
where t is the number of iterations and the number of node clustering runs is M ≤ 2 in our experiments. For AdaMEOW, there is an additional MLP with time complexity O(B^2 d^2).
Algorithm 1 (the MEOW model). Input: the heterogeneous graph G = (V, E); the number of node types |T|; the number of target nodes |V|; the feature matrices X_1, X_2, ..., X_|T|; a pre-defined meta-path set PS; the numbers of clusters U = {k_1, k_2, ..., k_M}. Output: target node embeddings for downstream tasks.
TABLE I. Quantitative results (% ± σ) on node classification. We highlight the best score on each dataset in bold and the runner-up score.
TABLE III. Case study on different types of weights on the ACM dataset.
Dict2vec : Learning Word Embeddings using Lexical Dictionaries Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task. Introduction Learning word embeddings usually relies on the distributional hypothesis -words appearing in similar contexts must have similar meanings, and thus close representations. Finding such representations for words and sentences has been one hot topic over the last few years in Natural Language Processing (NLP) (Mikolov et al., 2013;Pennington et al., 2014) and has led to many improvements in core NLP tasks such as Word Sense Disambiguation (Iacobacci et al., 2016), Machine Translation (Devlin et al., 2014), Machine Comprehension (Hewlett et al., 2016), and Semantic Role Labeling (Zhou and Xu, 2015;Collobert et al., 2011) -to name a few. These methods suffer from a classic drawback of unsupervised learning: the lack of supervision between a word and those appearing in the associated contexts. Indeed, it is likely that some terms of the context are not related to the considered word. On the other hand, the fact that two words do not appear together -or more likely, not often enough together -in any context of the training corpora is not a guarantee that these words are not semantically related. Recent approaches have proposed to tackle this issue using an attentive model for context selection (Ling et al., 2015), or by using external sources -like knowledge graphsin order to improve the embeddings . Similarities derived from such resources are part of the objective function during the learning phase (Yu and Dredze, 2014;Kiela et al., 2015) or used in a retrofitting scheme (Faruqui et al., 2015). These approaches tend to specialize the embeddings to the resource used and its associated similarity measures -while the construction and maintenance of these resources are a set of complex, time-consuming, and error-prone tasks. In this paper, we propose a novel word embedding learning strategy, called Dict2vec, that leverages existing online natural language dictionaries. We assume that dictionary entries (a definition of a word) contain latent word similarity and relatedness information that can improve language representations. Such entries provide, in essence, an additional context that conveys general semantic coverage for most words. Dict2vec adds new co-occurrences information based on the terms occurring in the definitions of a word. This information introduces weak supervision that can be used to improve the embeddings. 
We can indeed distinguish word pairs for which each word appears in the definition of the other (strong pairs) and pairs where only one appears in the definition of the other (weak pairs) -each having their own weight as two hyperparameters. Not only this information is useful at learning time to control words vectors to be close for such word pairs, but also it becomes possible to devise a controlled negative sampling. Controlled negative sampling as introduced in Dict2vec consists in filtering out random negative examples in conventional negative sampling that forms a (strong or weak) pair with the target word -they are obviously non-negative examples. Processing online dictionaries in Dict2vec does not require a human-in-the-loop -it is fully automated. The neural network architecture from Dict2vec (Section 3) extends Word2vec (Mikolov et al., 2013) approach which uses a Skip-gram model with negative sampling. Our main results are as follows : • Dict2vec exhibits a statistically significant improvement around 12.5% against state-ofthe-art solutions on eleven most common evaluation datasets for the word similarity task when embeddings are learned using the full Wikipedia dump. • This edge is even more significant for small training datasets (50 millions first tokens of Wikipedia) than using the full dataset, as the average improvement reaches 30%. • Since Dict2vec does significantly better than competitors for small dimensions (in the [20; 100] range) for small corpus, it can yield smaller yet efficient embeddings -even when trained on smaller corpus -which is one of the utmost practical interest for the working natural language processing practitioners. • We also show that the embeddings learned by Dict2vec perform similarly to other baselines on an extrinsic text classification task. Dict2vec software is an extension and an optimization from the original Word2vec framework leading to a more efficient learning. Source code to fetch dictionaries, train Dict2vec models and evaluate word embeddings are publicly availabe 1 and can be used by the community as a seed for future works. The paper is organized as follows. Section 2 presents related works, along with a special focus on Word2vec, which we later derive in our 1 https://github.com/tca19/dict2vec approach presented in Section 3. Our experimental setup and evaluation settings are introduced in Section 4 and we discuss the results in Section 5. Section 6 concludes the paper. The Neural Network Approach In the original model from Collobert and Weston (2008), a window approach was used to feed a neural network and learn word embeddings. Since there are long-range relations between words, the window-based approach was later extended to a sentence-based approach (Collobert et al., 2011) leading to capture more semantic similarities into word vectors. Recurrent neural networks are another way to exploit the context of a word by considering the sequence of words preceding it (Mikolov et al., 2010;Sutskever et al., 2011). Each neuron receives the current window as an input, but also its own output from the previous step. Mikolov et al. (2013) introduced the Skip-gram architecture built on a single hidden layer neural network to learn efficiently a vector representation for each word w of a vocabulary V from a large corpora of size C. Skip-gram iterates over all (target, context) pairs (w t ,w c ) from every window of the corpus and tries to predict w c knowing w t . 
The objective function is therefore to maximize the log-likelihood : where n represents the size of the window (composed of n words around the central word w t ) and the probability can be expressed as : with v t+k (resp. v t ) the vector associated to w t+k (resp. w t ). This model relies on the principle "You shall know a word by the company it keeps" -Firth (1957). Thus, words that are frequent within the context of the target word will tend to have close representations, as the model will update their vectors so that they will be closer. Two main drawbacks can be said about this approach. First, words within the same window are not always related. Consider the sentence "Turing is widely considered to be the father of theoretical computer science and artificial intelligence." 2 , the words (Turing,widely) and (father,theoretical) will be moved closer while they are not semantically related. Second, strong semantic relations between words (like synonymy or meronymy) happens rarely within the same window, so these relations will not be well embedded into vectors. fastText introduced in Bojanowski et al. (2016) uses internal additional information from the corpus to solve the latter drawback. They train a Skipgram architecture to predict a word w c given the central word w t and all the n-grams G wt (subwords of 3 up to 6 letters) of w t . The objective function becomes : Along learning one vector per word, fastText also learns one vector per n-gram. fastText is able to extract more semantic relations between words that share common n-gram(s) (like fish and fishing) which can also help to provide good embeddings for rare words since we can obtain a vector by summing vectors of its n-grams. In what follows, we report related works that leverage external resources in order to address the two raised issues about the window approach. Using External Resources Even with larger and larger text data available on the Web, extracting and encoding every linguistic relations into word embeddings directly from corpora is a difficult task. One way to add more relations into embeddings is to use external data. Lexical databases like WordNet or sets of synonyms like MyThes thesaurus can be used during learning or in a post-processing step to specialize word embeddings. For example, Yu and Dredze (2014) include prior knowledge about synonyms from WordNet and the Paraphrase Database in a joint model built upon Word2vec. Faruqui et al. (2015) introduce a graph-based retrofitting method where they post-process learned vectors with respect to semantic relationships extracted from additional lexical resources. Kiela et al. (2015) propose to specialize the embeddings either on similarity or relatedness relations in a Skip-gram joint learning approach by adding new contexts from external thesaurus or from a norm association base in the function to optimize. Bian et al. 2 https://en.wikipedia.org/wiki/Alan_Turing (2014) combine several sources (syllables, POS tags, antonyms/synonyms, Freebase relations) and incorporate them into a CBOW model. These approaches have generally the objective to improve tasks such as document classification, synonym detection or word similarity. They rely on additional resources whose construction is a timeconsuming and error-prone task and tend generally to specialize the embeddings to the external corpus used. 
Moreover, lexical databases contain less information than dictionaries (117k entries in WordNet, 200k in a dictionary) and less accurate content (different words in WordNet can belong to the same synset and thus have the same definition). Another type of external resource is knowledge bases, which contain triplets. Each triplet links two entities with a relation, for example Paris - is capital of - France. Several methods (Weston et al., 2013; Xu et al., 2014) have been proposed to use the information from knowledge bases to improve semantic relations in word embeddings and to extract relational facts from text more easily. These approaches are focused on knowledge-base-dependent tasks. Dict2vec The definition of a word is a group of words or sentences explaining its meaning. A dictionary is a set of tuples (word, definition) for several words. For example, one may find in a dictionary: car: A road vehicle, typically with four wheels, powered by an internal combustion engine and able to carry a small number of people. 3 The presence of words like "vehicle", "road" or "engine" in the definition of "car" illustrates the relevance of using word definitions as weak supervision to obtain semantically related pairs of words. Dict2vec models this information by building strong and weak pairs of words (§3.1), in order to provide both a novel positive sampling objective (§3.2) and a novel controlled negative sampling objective (§3.3). These objectives contribute to the global objective function of Dict2vec (§3.4). Strong pairs, weak pairs In a definition, each word does not have the same semantic relevance. In the definition of "car", the words "internal" or "number" are less relevant than "vehicle". We introduce the concept of strong and weak pairs in order to capture this relevance. If the word w_a is in the definition of the word w_b and w_b is in the definition of w_a, they form a strong pair; the K closest words to w_a (resp. w_b) also form a strong pair with w_b (resp. w_a). If the word w_a is in the definition of w_b but w_b is not in the definition of w_a, they form a weak pair. The word "vehicle" is in the definition of "car" and "car" is in the definition of "vehicle". Hence, (car, vehicle) is a strong pair. The word "road" is in the definition of "car", but "car" is not in the definition of "road". Therefore, (car, road) is a weak pair. Some weak pairs can be promoted to strong pairs if the two words are among the K closest neighbours of each other. We choose the K closest words according to the cosine distance in a pretrained word embedding and find that K = 5 is a good trade-off between the semantic and syntactic information extracted. Positive sampling We introduce the concept of positive sampling based on strong and weak pairs. We move closer the vectors of words forming either a strong or a weak pair, in addition to moving closer the vectors of words co-occurring within the same window. Let S(w) be the set of all words forming a strong pair with the word w and W(w) be the set of all words forming a weak pair with w. For each target w_t from the corpus, we build V_s(w_t), a random set of n_s words drawn with replacement from S(w_t), and V_w(w_t), a random set of n_w words drawn with replacement from W(w_t). We compute the cost of positive sampling J_pos for each target as follows:

$$J_{pos}(w_t) = \beta_s \sum_{w_i \in V_s(w_t)} \ell(v_t \cdot v_i) + \beta_w \sum_{w_j \in V_w(w_t)} \ell(v_t \cdot v_j)$$

where $\ell$ is the logistic loss function defined by $\ell(x) = \log(1 + e^{-x})$ and v_t (resp. v_i and v_j) is the vector associated with w_t (resp. w_i and w_j).
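To make this construction concrete, here is a small, illustrative Python sketch that builds strong and weak pairs from a toy word-to-definition mapping and evaluates the positive-sampling cost for one target vector. It omits the promotion of weak pairs via the K nearest neighbours of a pretrained embedding, and the β and n values simply reuse those reported later in the paper; everything else (names, data structures, the toy dictionary) is an assumption of this sketch, not the released code.

```python
import math
import random

def build_pairs(definitions):
    """definitions: dict word -> set of words appearing in its definition.
    Returns (strong_pairs, weak_pairs) as sets of frozensets of two words."""
    strong, weak = set(), set()
    for wa, def_a in definitions.items():
        for wb in def_a:
            if wb == wa or wb not in definitions:
                continue
            pair = frozenset((wa, wb))
            if wa in definitions[wb]:
                strong.add(pair)          # each word is in the other's definition
            else:
                weak.add(pair)
    return strong, weak - strong

def ell(x):
    """Logistic loss l(x) = log(1 + exp(-x))."""
    return math.log1p(math.exp(-x))

def positive_cost(v_t, strong_vecs, weak_vecs,
                  beta_s=0.8, beta_w=0.45, n_s=4, n_w=5, rng=random):
    """J_pos for one target: n_s strong and n_w weak partners drawn with replacement."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cost = 0.0
    if strong_vecs:
        cost += beta_s * sum(ell(dot(v_t, rng.choice(strong_vecs))) for _ in range(n_s))
    if weak_vecs:
        cost += beta_w * sum(ell(dot(v_t, rng.choice(weak_vecs))) for _ in range(n_w))
    return cost

# toy example: (car, vehicle) comes out as a strong pair, (car, road) as a weak pair
defs = {"car": {"road", "vehicle", "engine"},
        "vehicle": {"car", "machine"},
        "road": {"way", "surface"}}
strong, weak = build_pairs(defs)
```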
The objective is to minimize this cost for all targets, thus moving closer the vectors of words forming a strong or a weak pair. The coefficients β_s and β_w, as well as the numbers of drawn pairs n_s and n_w, tune the importance of strong and weak pairs during the learning phase. We discuss the choice of these hyperparameters in Section 5. When β_s = 0 and β_w = 0, our model is the Skip-gram model of Mikolov et al. (2013). Controlled negative sampling Negative sampling consists in considering two random words from the vocabulary V to be unrelated. For each word w_t from the vocabulary, we generate a set F(w_t) of k words randomly selected from the vocabulary. The model aims at separating the vectors of the words from F(w_t) and the vector of w_t. More formally, this is equivalent to minimizing the cost J_neg for each target word w_t as follows:

$$J_{neg}(w_t) = \sum_{w_i \in F(w_t)} \ell(-v_t \cdot v_i) \qquad (6)$$

where the notation ℓ, v_t and v_i is the same as described in the previous subsection. However, there is a non-zero probability that w_i and w_t are related. In that case the model will move their vectors further apart instead of moving them closer. With the strong/weak word pairs of Dict2vec, it becomes possible to make this less likely to occur: we prevent a negative example from being a word that forms a weak or strong pair with w_t. The negative sampling objective from Equation 6 becomes:

$$J_{neg}(w_t) = \sum_{\substack{w_i \in F(w_t) \\ w_i \notin S(w_t) \cup W(w_t)}} \ell(-v_t \cdot v_i)$$

In our experiments, we noticed that this method discards around 2% of the generated negative pairs. The influence on evaluation depends on the nature of the corpus and is discussed in Section 5.4. Global objective function Our objective function is derived from noise-contrastive estimation, which is a more efficient objective function than the log-likelihood in Equation 1 according to Mikolov et al. (2013). We add the positive sampling and the controlled negative sampling described before and compute the cost for each (target, context) pair (w_t, w_c) from the corpus as follows:

$$J(w_t, w_c) = \ell(v_t \cdot v_c) + J_{pos}(w_t) + J_{neg}(w_t)$$

The global objective is obtained by summing every pair's cost over the entire corpus:

$$J = \sum_{t=1}^{C} \sum_{c \,\in\, \mathrm{window}(t)} J(w_t, w_c)$$

4 Experimental setup Fetching online definitions We extract all unique words with more than 5 occurrences from a full Wikipedia dump, representing around 2.2M words. Since no dictionary contains a definition for all existing words (the word w might be in the dictionary D_i but not in D_j), we combine several dictionaries to get a definition for almost all of these words (some words are too rare to have a definition anyway). We use the same hyperparameters commonly found in the literature for all models. We use 5 negative samples, 5 epochs, a window size of 5, a vector size of 100 (resp. 200 and 300) for the 50M file (resp. 200M and full dump), and we remove the words with fewer than 5 occurrences. We follow the same evaluation protocol as Word2vec and fastText to provide the fairest comparison against competitors, so all other hyperparameters (K, β_s, β_w, n_s, n_w) are tuned using a grid search to maximize the weighted average score. For n_s and n_w, we go from 0 to 10 with a step of 1 and find the optimal values to be n_s = 4 and n_w = 5. For β_s and β_w we go from 0 to 2 with a step of 0.05 and find β_s = 0.8 and β_w = 0.45 to be the best values for our model. Table 1 reports training times for the three models (all experiments were run on an E3-1246 v3 processor).
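The controlled negative sampling introduced earlier in this section is easy to express in code: any randomly drawn candidate that forms a strong or weak pair with the target is rejected before it can serve as a negative example. The following sketch is illustrative only; the sampling distribution (uniform here, unigram-based in Word2vec-style implementations) and the data structures are assumptions.

```python
import random

def draw_negatives(target, vocab, strong_of, weak_of, k=5, rng=random):
    """Draw k negative words for `target`, skipping candidates that form a
    strong or weak pair with it (controlled negative sampling).

    vocab     : list of words
    strong_of : dict word -> set of strong-pair partners
    weak_of   : dict word -> set of weak-pair partners
    """
    related = strong_of.get(target, set()) | weak_of.get(target, set())
    negatives = []
    while len(negatives) < k:
        cand = rng.choice(vocab)
        if cand == target or cand in related:
            continue                      # not a true negative example, discard it
        negatives.append(cand)
    return negatives
```

As reported above, in practice only around 2% of the generated candidates end up being discarded this way.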
Word similarity evaluation We follow the standard method for word similarity evaluation by computing the Spearman rank correlation coefficient (Spearman, 1904) between the human similarity ratings of pairs of words and the cosine similarity of the corresponding word vectors. A score close to 1 indicates an embedding close to human judgement. We use the classic datasets MC-30 (Miller and Charles, 1991), MEN (Bruni et al., 2014), MTurk-287 (Radinsky et al., 2011), MTurk-771 (Halawi et al., 2012), RG-65 (Rubenstein and Goodenough, 1965), RW (Luong et al., 2013), SimVerb-3500 (Gerz et al., 2016), WordSim-353 (Finkelstein et al., 2001) and YP-130 (Yang and Powers, 2006). We follow the same protocol as Word2vec and fastText by discarding pairs which contain a word that is not in our embedding. Since all models are trained on the same corpora, the embeddings have the same words, therefore all competitors share the same OOV rates. We run each experiment 3 times and report in Table 2 the average score, to minimize the effect of the random initialization of the neural network. We compute the average by weighting each score by the number of pairs evaluated in its dataset, in the same way as Iacobacci et al. (2016). We multiply each score by 1,000 to improve readability. Text classification evaluation Our text classification task follows the same setup as the one used for fastText. We train a neural network composed of a single hidden layer, where the input layer corresponds to the bag of words of a document and the output layer is the probability of belonging to each label. The weights between the input and the hidden layer are initialized with the generated embeddings and are fixed during training, so that the evaluation score solely depends on the embedding. We update the weights of the neural network classifier with gradient descent. We use the datasets AG-News 6, DBpedia (Auer et al., 2007) and Yelp reviews (polarity and full) 7. We split each dataset into a training and a test file. We use the same training and test files for all models and report the classification accuracy obtained on the test file. Baselines We train Word2vec 8 and fastText 9 on the same 3 files and their 2 respective versions (A and B) described in Section 4.2, and use the same hyperparameters, also described in Section 4.2, for all models. We train Word2vec with the Skip-gram model since our method is based on the Skip-gram model. We also train GloVe with its respective hyperparameters described in Pennington et al. (2014), but the results are lower than all other baselines (the weighted average on the word similarity task is 350 on the 50M file, 389 on the 200M file and 454 on the full dump), so we do not report GloVe's results. We also retrofit the embeddings learned on corpus A with Faruqui's method, to compare against another method using additional resources. The retrofitting introduces external knowledge from the WordNet semantic lexicon (Miller, 1995). We use Faruqui's retrofitting 10 with the WN_all semantic lexicon from WordNet and 10 iterations, as advised in the paper of Faruqui et al. (2015). Furthermore, we compare the performance of our method when using WordNet additional resources instead of dictionaries. 5 Results and model analysis 5.1 Semantic similarity Table 2 (top) reports the Spearman rank correlation scores obtained with the method described in subsection 4.3.
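The word similarity protocol described above (cosine similarity versus human ratings, Spearman correlation, OOV pairs discarded, pair-count-weighted average) can be reproduced in a few lines of SciPy. The data layout and variable names below are assumptions of this sketch, not the paper's evaluation scripts.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def evaluate_dataset(pairs, embeddings):
    """pairs: list of (word1, word2, human_score); embeddings: dict word -> np.array.
    Returns (spearman_rho, number_of_pairs_evaluated), skipping OOV pairs."""
    human, model = [], []
    for w1, w2, score in pairs:
        if w1 in embeddings and w2 in embeddings:
            human.append(score)
            model.append(cosine(embeddings[w1], embeddings[w2]))
    rho, _ = spearmanr(human, model)
    return rho, len(human)

def weighted_average(results):
    """results: list of (rho, n_pairs); each dataset weighted by its pair count."""
    total = sum(n for _, n in results)
    return sum(rho * n for rho, n in results) / total
```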
We observe that our model outperforms state-of-the-art approaches on most of the datasets for the 50M and 200M tokens files, and on almost all datasets for the full dump (this is significant according to a two-sided Wilcoxon signed-rank test with α = 0.05). In terms of the weighted average score, our model improves fastText's performance on the raw corpus (column A) by 28.3% on the 50M file, by 17.7% on the 200M file and by 12.8% on the full dump. Even when we train fastText with the same additional knowledge as ours (column B), our model improves performance by 2.9% on the 50M file, by 5.1% on the 200M file and by 11.9% on the full dump. We notice that column B (corpus composed of Wikipedia and definitions) yields better results than column A for the 50M file (+24% on average) and the 200M file (+12% on average). This demonstrates the strong semantic relations one can find in definitions, and that simply incorporating definitions into a small training file can boost the performance of the embeddings. Moreover, when the training file is large (full dump), our supervised method with pairs is more efficient, as the boost brought by the concatenation of definitions is insignificant (+1.5% on average). We also note that the number of strong and weak pairs drawn must be set according to the size of the training file. For the 50M and 200M tokens files, we train our model with hyperparameters n_s = 4 and n_w = 5. For the full dump (20 times larger than the 200M tokens file), the number of windows in the corpus is largely increased, and so is the number of (target, context) pairs. Therefore, we need to adjust the influence of strong and weak pairs and decrease n_s and n_w. We set n_s = 2, n_w = 3 to train on the full dump.

Table 2: Spearman rank correlation coefficients between the vectors' cosine similarity and human judgement for several datasets (top) and accuracies on the text classification task (bottom). We train and evaluate each model 3 times and report the average score for each dataset, as well as the weighted average over all word similarity datasets.

Table 3: Percentage changes of word similarity scores for several datasets after Faruqui's retrofitting method is applied. We compare each model to its own non-retrofitted version (vs self) and to our non-retrofitted version (vs our). A positive percentage indicates the level of improvement brought by the retrofitting approach, while a negative percentage shows that the compared method is better without retrofitting. As an illustration: the +13.9% at the top left means that retrofitting Word2vec's vectors improves the initial vectors by 13.9%, while the -7.3% below indicates that our approach without retrofitting is better than Word2vec's retrofitted vectors.

Faruqui's retrofitting method improves the word similarity scores of all frameworks on all datasets, except on RW and WS353 (Table 3). But even when Word2vec and fastText are retrofitted, their scores are still worse than those of our non-retrofitted model (every percentage on the vs our line is negative). We also notice that our model is compatible with a retrofitting improvement method, as our scores are also increased by Faruqui's method.
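Faruqui's retrofitting, used here as a post-processing baseline, is often summarized by a simple iterative update in which each vector is pulled towards the average of its lexicon neighbours while staying close to its original value. The sketch below is a commonly used simplified form of that update (uniform neighbour weights, 10 iterations); it is offered only as an illustration and should not be read as the exact formulation or code of Faruqui et al. (2015).

```python
import numpy as np

def retrofit(embeddings, lexicon, n_iters=10):
    """embeddings: dict word -> np.array; lexicon: dict word -> set of related words.
    Returns retrofitted copies; words without lexicon neighbours stay unchanged."""
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(n_iters):
        for w, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new]
            if w not in new or not nbrs:
                continue
            # balance the original vector against the mean of its lexicon neighbours
            new[w] = (embeddings[w] + np.mean([new[n] for n in nbrs], axis=0)) / 2.0
    return new
```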
We also observe that, although our model is superior for each corpus size, our model trained on the 50M tokens file outperforms the other models trained on the full dump (an improvement of 17% compared to the results of fastText, our best competitor, trained on the full dump). This means that considering strong and weak pairs is more efficient than increasing the corpus size, and that using dictionaries is a good way to improve the quality of the embeddings when the training file is small. The models based on knowledge bases cited in §2.2 do not provide word similarity scores on all the datasets we used; however, for the reported scores, Dict2vec outperforms these models (e.g. Kiela et al. (2015)). Text classification accuracy Table 2 (bottom) reports the classification accuracy for the considered datasets. Our model achieves the same performance as Word2vec and fastText on the 50M file and slightly improves the results on the 200M file and the full dump. Using supervision with pairs during training does not make our model specific to the word similarity task, which shows that our embeddings can also be used in downstream extrinsic tasks. Note that for this experiment the embeddings were fixed and not updated during learning (we only learned the classifier parameters), since our objective was to evaluate the capability of the embeddings to be used for another task rather than to obtain the best possible models. It is nevertheless possible to obtain better results by updating the embeddings together with the classifier parameters with respect to the supervised information, so as to adapt the embeddings to the classification task at hand. We also trained Dict2vec with pairs from WordNet, as well as with no additional pairs during training (in this case, the model is the Skip-gram model from Word2vec). Results are reported in Table 5. Training with WordNet pairs increases the scores, showing that the supervision brought by positive sampling is beneficial to the model, but it lags behind training with dictionary pairs, demonstrating once again that dictionaries contain more semantic information than WordNet. Positive and negative sampling For the positive sampling, an empirical grid search shows that setting β_w to roughly half of β_s is a good rule of thumb for tuning these hyperparameters. We also notice that when these coefficients are too low (β_s ≤ 0.5 and β_w ≤ 0.2), results get worse because the model does not take into account the information from the strong and weak pairs. On the other side, when they are too high (β_s ≥ 1.2 and β_w ≥ 0.6), the model discards too much of the information from the context in favor of the information from the pairs. The behaviour is similar when the number of strong and weak pairs is too low or too high (n_s, n_w ≤ 2 or n_s, n_w ≥ 5). For the negative sampling, we notice that the control brought by the pairs increases the average weighted score by 0.7% compared to the uncontrolled version. We also observe that increasing the number of negative samples does not significantly improve the results, except for the RW dataset, where using 25 negative samples can boost performance by 10%. Indeed, this dataset is mostly composed of rare words, so the embeddings must learn to differentiate unrelated words rather than to move related ones closer. In Fig. 1, we observe that our model is still able to outperform state-of-the-art approaches when we reduce the dimension of the embeddings to 20 or 40. We also notice that increasing the vector size does increase the performance, but only up to a dimension around 100, which is the common dimension used by the related approaches reported here when training on the 50M tokens file. Conclusion In this paper, we presented Dict2vec, a new approach for learning word embeddings using lexical dictionaries.
It is based on a Skip-gram model whose objective function is extended by leveraging word pairs extracted from dictionary definitions, weighted differently according to the strength of the pairs. Our approach shows better results than state-of-the-art word embedding methods on the word similarity task, including methods based on retrofitting from external resources. We also provide the full source code to reproduce the experiments.
6,525.4
2017-09-07T00:00:00.000
[ "Computer Science" ]
Bmp8a deletion leads to obesity through regulation of lipid metabolism and adipocyte differentiation The role of bone morphogenetic proteins (BMPs) in regulating adipose tissue has recently become a field of interest. However, the underlying mechanism of this effect has not been elucidated. Here we show that the anti-fat effect of Bmp8a is mediated by promoting fatty acid oxidation and inhibiting adipocyte differentiation. Knocking out the bmp8a gene in zebrafish results in weight gain, fatty liver, and increased fat production. The bmp8a-/- zebrafish exhibits decreased phosphorylation levels of AMPK and ACC in the liver and adipose tissues, indicating reduced fatty acid oxidation. Also, Bmp8a inhibits the differentiation of 3T3-L1 preadipocytes into mature adipocytes by activating the Smad2/3 signaling pathway, in which Smad2/3 binds to the promoter of the central adipogenic factor PPARγ to inhibit its transcription. In addition, lentivirus-mediated overexpression of Bmp8a in 3T3-L1 cells significantly increases NOD-like receptor, TNF, and NF-κB signaling pathways. Furthermore, NF-κB interacts with PPARγ, blocking PPARγ's activation of its target gene Fabp4, thereby inhibiting adipocyte differentiation. These data provide a signaling bridge between immune regulation and adipocyte differentiation. Collectively, our findings indicate that Bmp8a plays a critical role in regulating lipid metabolism and adipogenesis, potentially providing a therapeutic approach for obesity and its comorbidities. Obesity and overweight have become a worldwide epidemic. Obesity is strongly associated with many metabolic and cardiovascular diseases, such as type 2 diabetes, dyslipidemia, hypertension, some types of cancer, and osteoarthritis [1][2][3]. Generally, obesity is characterized by a massive expansion of white adipose tissue due to an increase in the size or number of adipocytes and a decrease in lipolysis. Therefore, identifying the factors that regulate adipose tissue expansion and elucidating the underlying mechanisms are vital for public health and will help formulate therapeutic strategies and targets for the treatment of obesity and its associated comorbidities. Adipogenesis is regulated by a variety of signaling pathways [4][5][6][7], and bone morphogenetic proteins (BMPs) are a relatively recent addition to the adipose regulation field. BMPs belong to the transforming growth factor-β (TGF-β) superfamily, which is highly conserved in developing vertebrates ranging from humans to zebrafish 8,9. They were initially discovered as inducers of bone and cartilage 10, but are now known to be critical in morphogenetic activities and cell differentiation throughout the body, including the development of adipose tissue and adipogenic differentiation [11][12][13][14][15]. BMP2, BMP4, and BMP6 have been shown to promote white adipogenesis in mesenchymal stem cells [16][17][18]. BMP7 and BMP8B can induce the expression of UCP1, a marker gene of brown adipocytes, and promote brown adipogenesis or enhance the thermogenesis of brown adipose tissue [19][20][21][22][23]. In addition, BMP3B suppresses adipogenesis of 3T3-L1 cells 24,25. Overexpression of BMP9 in the mouse liver significantly alleviates hepatic steatosis and obesity-related metabolic syndrome 26.
BMP8A is almost absent from brown adipose tissue, whereas it is enriched in white adipose tissue 23. Our previous studies have shown that Bmp8a can accelerate the uptake of the yolk sac in zebrafish at 3 dpf (days post fertilization), indicating that Bmp8a may play a key role in fat metabolism 27. Nevertheless, the overall metabolic regulatory function of BMP8A is not fully understood. Here, we report that bmp8a-/- zebrafish display obesity and fatty liver. Deletion of bmp8a in zebrafish leads to the accumulation of liver TG by downregulating phosphorylation of AMP-activated protein kinase (AMPK) and acetyl-CoA carboxylase (ACC). Furthermore, Bmp8a inhibits the differentiation of 3T3-L1 preadipocytes into mature adipocytes through the Smad2/3 pathway. Interestingly, we also found that the interaction of NF-κB and PPARγ mediates the effect of Bmp8a on adipogenesis, providing a signaling bridge between immune regulation and adipocyte differentiation. We present a previously unrecognized insight into Bmp8a-mediated adipogenesis. Results Weight gain, fatty liver and increased fat production in bmp8a-/- zebrafish. We have found that recombinant Bmp8a protein is able to accelerate the absorption of the yolk sac in 3 dpf zebrafish, indicating a function of Bmp8a in regulating lipid metabolic processes 27. Here the bmp8a-/- zebrafish was used to perform further analysis 28. We monitored the body weight of wild-type (WT) and bmp8a-/- zebrafish fed a high-fat diet (HFD) and found that the body weight of bmp8a-/- zebrafish gradually became higher than that of WT zebrafish (Fig. 1a, b). Next, we analyzed the body weight of male and female zebrafish separately; bmp8a-/- zebrafish showed a significant increase in body weight compared to wild-type zebrafish, independent of sex (Fig. 1c). Furthermore, bmp8a deficiency in zebrafish induced a hyperplastic morphology of visceral adipose tissue (VAT) (Fig. 1d). Oil Red O staining of liver sections showed prominent fatty liver in bmp8a-/- zebrafish (Fig. 1e), which was confirmed by analysis of TG and TC levels in liver tissue (Fig. 1f, g). The zebrafish yolk sac is a quantifiable, limited energy source mainly consumed during the first week of larval development and has unique advantages for detecting changes in body lipid metabolism 29. Changes in fat content can be visually quantified in relative terms by Nile Red fluorescence microscopy of live zebrafish larvae. We found that bmp8a deletion resulted in a detectable increase in fat in zebrafish larvae at 3 and 7 days (Fig. 1h-k). Importantly, mutation of bmp8a in zebrafish leads to increased lipid droplets in the viscera and other sites (Fig. 1l). Taken together, HFD-induced obesity and hepatic steatosis are more severe in bmp8a-/- zebrafish than in WT zebrafish. Impaired glucose and fat metabolism in bmp8a-/- zebrafish. We further found that HFD-fed bmp8a-/- zebrafish had higher levels of blood GLU, TG, and TC than WT zebrafish (Fig. 2a). To determine the molecular basis of the metabolic changes in bmp8a-/- zebrafish, we performed gene expression analyses by quantitative real-time PCR (qRT-PCR). In the liver and adipose tissue of bmp8a-/- zebrafish fed a high-fat diet, genes for lipolytic enzymes (lpl and lipc), the insulin-sensitizing hormone (adiponectin), transcription activators and coactivators that induce fat metabolism (pgc-1α and pparα), mitochondrial proteins used to generate heat by thermogenesis (ucp1), and hunger suppressors (leptin) were all downregulated (Fig. 2b, c).
However, another lipase gene, bile salt-stimulated lipase (bssl), was upregulated in the liver and intestine of HFD-fed bmp8a-/- zebrafish (Fig. 2d-g). The upregulation of bssl1 and bssl2 in bmp8a-/- zebrafish is beneficial for lipid absorption, since Bssl is mainly involved in the hydrolysis of dietary fat. Hence, fat metabolism-related molecules are regulated by the bmp8a gene. AMPK is known as the key mediator of fatty acid oxidation, so we further examined whether the activation of AMPK is involved in the regulatory process mediated by Bmp8a. Notably, bmp8a knockout significantly decreased the AMPK phosphorylation level in adipose and liver tissues (Fig. 2h-j). Meanwhile, the phosphorylation level of ACC was also reduced in bmp8a-/- zebrafish (Fig. 2h-j). It is known that phosphorylation of ACC increases fatty acid oxidation by inhibiting the activity of ACC. Thus, bmp8a deletion can decrease fatty acid oxidation through reduced phosphorylation levels of AMPK and ACC. In addition, we found that the mRNA and protein levels of Pgc-1α and Ucp1 decrease in both adipose and liver tissues in bmp8a-/- zebrafish (Fig. 2b, c, h-j). AMPK has previously been reported to upregulate the abundance of PGC-1α, which can activate the transcription of UCP1 and other thermogenic genes 30. Therefore, Bmp8a also regulates fatty acid oxidation through the AMPK-Pgc-1α-Ucp1 pathway. We have previously shown that Bmp8a regulates immune responses through the p38 MAPK pathway 28. Combined with other reports that the p38 MAPK-Pgc-1α-Ucp1 pathway plays a vital role in lipid oxidation 31, we hypothesized that Bmp8a might activate fatty acid oxidation through the p38 MAPK-Pgc-1α-Ucp1 pathway. Indeed, compared to wild-type ZFL cells, the phosphorylation level of p38 MAPK and the expression of Pgc-1α and Ucp1 proteins were increased in bmp8a-overexpressing ZFL cells (Fig. 2k-m). Taken together, these results indicate that Bmp8a promotes fatty acid oxidation through the AMPK and p38 MAPK-Pgc-1α-Ucp1 pathways. Our data provide a mechanistic explanation for how Bmp8a regulates fatty acid oxidation (Fig. 2n). Bmp8a inhibits adipocyte differentiation of 3T3-L1 cells. There is no available information regarding the role of Bmp8a in adipocyte differentiation. By treating 3T3-L1 cells with a methylisobutylxanthine, dexamethasone, and insulin cocktail for two days (Day 0 to Day 2), followed by insulin treatment (Day 2 to Day 4), these cells can be induced to differentiate into adipocyte-like cells. Intracellular lipid accumulation can be observed and quantified after staining with Oil Red O, providing an effective in vitro model system for adipogenesis (Fig. 3a). First, we investigated the expression of Bmp8a and adipogenic marker genes (Pparγ and C/ebpα) during 3T3-L1 differentiation (Fig. 3b). Notably, the expression of Bmp8a was decreased at the later stage of adipocyte differentiation (Fig. 3b). Then, we examined lipid accumulation using Oil Red O staining to evaluate the effect of Bmp8a on 3T3-L1 adipocyte differentiation. Zebrafish bmp8a or mouse Bmp8a was successfully overexpressed or knocked down in 3T3-L1 cells (Fig. 3c, d). We found that overexpression of zebrafish bmp8a or mouse Bmp8a reduced lipid droplet formation (Fig. 3e, f). In contrast, lipid production increased when mouse Bmp8a was knocked down (Fig. 4a, b). Also, overexpression of zebrafish bmp8a or mouse Bmp8a decreased the mRNA expression of adipogenic markers, such as C/ebpα, Pparγ, and Fasn (Fig. 3g-j).
Protein levels of C/EBPα and PPARγ were also reduced in zebrafish bmp8a- or mouse Bmp8a-overexpressing 3T3-L1 cells (Fig. 3k-m). Furthermore, knockdown of mouse Bmp8a caused significantly increased mRNA expression of C/ebpα, Pparγ, and Fasn (Fig. 4c-f). Consistent with this, when Bmp8a was knocked down, PPARγ and C/EBPα protein levels increased (Fig. 4g-i). These results indicate that Bmp8a can inhibit adipocyte differentiation.

Fig. 2 Bmp8a promotes fatty acid oxidation through AMPK and p38 MAPK pathways. a The serum TG, TC and GLU levels in WT and bmp8a-/- zebrafish (n = 6). b, c qPCR analysis of genes related to fatty acid metabolism in the liver (b, n = 3) or adipose tissue (c, n = 3) of WT and bmp8a-/- zebrafish. d, e Expression analysis of the bssl1 (d) and bssl2 (e) genes in different zebrafish tissues (n = 3). f, g qPCR analysis of bssl1 and bssl2 mRNA levels in the liver (f, n = 3) and intestine (g, n = 3) from WT or bmp8a-/- zebrafish. h-j Validation and quantification of p-AMPK, p-ACC, Ucp1, and Pgc-1α expression in adipose tissue (i) and liver (j) from WT or bmp8a-/- zebrafish. Protein expression levels were quantified using ImageJ software and normalized to total protein or β-actin (n = 3). k-m Validation and quantification of p-p38 MAPK, p-AMPK, p-ACC, Ucp1, and Pgc-1α expression after overexpression of zebrafish bmp8a in ZFL cells. The cells were collected at 36 h (l) and 48 h (m) post-transfection for immunoblot analysis. Protein expression levels were quantified using ImageJ software and normalized to total protein or β-actin (n = 3). n Schematic overview. Data are representative of at least three independent experiments. Data were analyzed by two-tailed Student's t-test and presented as mean ± SD (**p < 0.01, ***p < 0.001).

Interestingly, the type I receptor Alk6 gene was not found in the 3T3-L1 cells (Fig. 5h). Next, we quantified the relative abundance of transcripts encoding these receptors in 3T3-L1 cells; Alk3, Alk4, and Alk5 showed the highest expression among the type I receptor genes, while Acvr2a, Bmpr2, and Tgfβr2 were the most highly expressed among the type II receptor genes (Fig. 5i). In parallel, we analyzed the expression patterns of these receptors during adipocyte differentiation (Supplementary Fig. 1a-i). The expression of the type I receptors (Alk2, Alk3, Alk4, and Alk5) and type II receptors (Acvr2a, Acvr2b, Bmpr2, and Tgfβr2) was initially elevated and then progressively decreased during adipocyte differentiation. Also, the expression of the Alk7 gene increased gradually along with adipocyte differentiation. To further examine the signal transduction pathway mediated by Bmp8a, BRE- and CAGA-driven luciferase reporter assays were performed. In this system, the BRE promoter was activated by signaling through Smad1/5/8, while the CAGA promoter was activated by signaling through Smad2/3 37. Bmp8a could activate Smad1/5/8 signaling through receptor complexes formed by the type I receptor ALK3 and the type II receptor BMPR2 or ACVR2A (Fig. 5j, k). Also, Bmp8a was capable of activating Smad2/3 signaling through receptor complexes formed by the type I receptor ALK4 or ALK5 and the type II receptor ACVR2A, ACVR2B, or TGFβR2 (Fig. 5l, m). Notably, the CAGA-driven luciferase reporter system exhibited higher potency than the BRE-driven luciferase reporter system, indicating that the activation of Smad2/3 signaling by Bmp8a plays a dominant role. These studies further support the conclusion that Bmp8a inhibits adipocyte differentiation through Smad2/3 signaling in 3T3-L1 cells.
An implication of a functional bridge between immune regulation and adipocyte differentiation. To further explore the molecular mechanism by which Bmp8a inhibits adipogenesis, we performed transcriptome analysis of LV-bmp8a, LV-Bmp8a, and LV-ZsGreen1 3T3-L1 cells. To identify the differentially expressed genes (DEGs) in the two cell types (LV-bmp8a or LV-Bmp8a), the cutoff values for the fold change and P value were set to 2.0 and 0.05, respectively. Among the DEGs, 2337 genes (1215 downregulated and 1122 upregulated) were significantly modulated in LV-bmp8a cells, while 2187 genes (1211 downregulated and 976 upregulated) were modulated in LV-Bmp8a cells. By combining the two data sets, we found that the two cell types shared 536 overlapping downregulated DEGs and 334 overlapping upregulated DEGs (Supplementary Fig. 2 and Supplementary Fig. 3). These results suggest that overexpression of zebrafish bmp8a or mouse Bmp8a in 3T3-L1 cell lines elicits a comparable transcriptional response.

(Figure legend fragment) Protein expression levels were quantified by using ImageJ software and normalized to the amount of total protein. h, i ALK3, ALK4, and ALK5 were knocked down in LV-bmp8a or LV-Bmp8a 3T3-L1 cells, which were then induced to differentiate. Lipid contents of the resulting adipocyte-like cells were stained and quantified (n = 3). j Schematic diagram of the Pparγ promoter region, with three predicted TF binding sites and sequences. k Schematic drawing of wild-type and predicted TF binding site mutation plasmids (pGL3-Pparγ-promoter-ΔR1, pGL3-Pparγ-promoter-ΔR2, pGL3-Pparγ-promoter-ΔR3). l Dual-luciferase reporter assay used to analyze the abilities of zebrafish bmp8a and mouse Bmp8a to activate the Pparγ promoter (n = 3). The pGL3-Pparγ-promoter, pGL3-Pparγ-promoter-ΔR1, pGL3-Pparγ-promoter-ΔR2 or pGL3-Pparγ-promoter-ΔR3 was transfected into HEK293T cells along with pCMV-bmp8a, pCMV-Bmp8a or empty vector. After 48 h, the transfected cells were collected for luciferase assays. Renilla luciferase was used as the internal control. Data are from three independent experiments, were analyzed by one-way ANOVA and are presented as mean ± SD (ns not significant, **p < 0.01, ***p < 0.001).

Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses revealed that, compared with LV-ZsGreen1 3T3-L1 cells, the downregulated genes were remarkably enriched in the PPAR signaling pathway in both LV-bmp8a and LV-Bmp8a 3T3-L1 cells (Fig. 7a, b). More intriguingly, KEGG analyses showed that the upregulated genes were remarkably enriched in the NOD-like receptor signaling and TNF signaling pathways, which are involved in immune processes, in LV-bmp8a or LV-Bmp8a 3T3-L1 cells compared with LV-ZsGreen1 3T3-L1 cells (Fig. 7c, d). Given that NF-κB molecules are downstream of the NOD-like receptor signaling and TNF signaling pathways, we wondered whether Bmp8a could activate NF-κB signaling. Indeed, the phosphorylation levels of IKKα/β and p65 increased in LV-bmp8a or LV-Bmp8a 3T3-L1 cells (Fig. 7e-h). We have shown that Bmp8a inhibits the expression of Pparγ, so we sought to understand the relationship between NF-κB and PPARγ in regulating adipogenesis. Previous studies have revealed by GST pull-down assay that the NF-κB components p50 and p65 bind PPARγ directly in vitro 38. Our co-immunoprecipitation (Co-IP) experiments confirmed that NF-κB (p65) interacts with PPARγ (Fig. 7i).
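The DEG selection described at the beginning of this section (fold change cutoff 2.0, P value cutoff 0.05, applied separately to the LV-bmp8a and LV-Bmp8a comparisons and then intersected) amounts to a simple table filter. The sketch below assumes a pandas data frame with hypothetical column names; it illustrates the criteria only and is not the BMKCloud pipeline actually used.

```python
import pandas as pd

def call_degs(df, fc_col="fold_change", p_col="p_value", fc_cut=2.0, p_cut=0.05):
    """Split genes into up- and down-regulated DEGs using the stated cutoffs.
    `df` is assumed to have one row per gene, indexed by gene name."""
    sig = df[p_col] < p_cut
    up = df[sig & (df[fc_col] >= fc_cut)]
    down = df[sig & (df[fc_col] <= 1.0 / fc_cut)]
    return up, down

# overlap between the two overexpression lines, as described in the text
# up_z, down_z = call_degs(table_lv_bmp8a)   # zebrafish bmp8a overexpression
# up_m, down_m = call_degs(table_lv_Bmp8a)   # mouse Bmp8a overexpression
# shared_up = set(up_z.index) & set(up_m.index)
```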
We suspect that the interaction between NF-κB and PPARγ blocks the activation of PPARγ target genes, leading to inhibition of adipogenesis. It is known that PPARγ regulates the expression of its target genes by binding to peroxisome proliferator response elements (PPRE) in their promoters 39. Fatty acid binding protein 4 (FABP4), a target gene of PPARγ, promotes adipocyte differentiation 40. We found that a putative PPRE is present in the promoter region of Fabp4 (Fig. 7j). Not surprisingly, overexpression of PPARγ stimulated FABP4 expression. However, overexpression of both PPARγ and p65 impaired the ability of PPARγ to activate FABP4 expression (Fig. 7l). Deletion of the PPRE in the Fabp4 promoter prevented PPARγ from activating FABP4 expression (Fig. 7m). Overall, these results indicate that Bmp8a regulates PPARγ activity through NF-κB signaling to inhibit adipocyte differentiation. Discussion BMPs are members of a large, highly conserved family of extracellular polypeptide signaling molecules of the TGF-β superfamily. In Mus musculus, there are two members of the Bmp8 gene, Bmp8a and Bmp8b, which arose from a recent duplication of a single gene 41. However, only a single bmp8a gene is present in D. rerio 27. BMP8B is induced by nutritional and thermogenic factors in mature brown adipose tissue (BAT), increasing the response to noradrenaline through enhanced p38 MAPK/CREB signaling and increased lipase activity. Bmp8b-/- mice exhibit impaired thermogenesis and a reduced metabolic rate, causing weight gain 23. BMP8A is almost absent from BAT but enriched in white adipose tissue (WAT) 23. There have been no reports on the role of BMP8A in adipose tissue. In this study, we showed that zebrafish Bmp8a or mouse BMP8A has an anti-fat effect by promoting fatty acid oxidation and reducing adipocyte differentiation. Zebrafish, a model organism in many fields of research, is becoming an increasingly powerful tool in lipid research, since lipid metabolic pathways are conserved between fish and mammals 29,[42][43][44]. We have previously shown that bmp8a mRNA expression in the intestine or brain is significantly upregulated in obese zebrafish induced by a high-fat diet 27. In this study, we found that, compared to WT, bmp8a-/- zebrafish exhibited higher body weight and increased fat production, confirming the link between Bmp8a and obesity. Furthermore, we showed that the expression of key genes associated with lipid metabolism (lpl, lipc, adiponectin, pgc-1α, and pparα), thermogenesis (ucp1), and appetite (leptin) was regulated by Bmp8a. Thus, Bmp8a appears to regulate obesity through multiple molecular pathways. Overall, we speculate that Bmp8a increases the activities of lipases such as Lpl and Lipc to hydrolyze triglycerides (TG) into free fatty acids (FFA). Subsequently, Bmp8a enhances FFA oxidation through the AMPK or p38 MAPK pathway, thereby reducing lipid accumulation (Fig. 2n). We also found fatty liver in bmp8a-/- zebrafish. The liver is a crucial player in regulating lipid metabolism throughout the body. Lipogenesis, lipolysis, and lipoprotein synthesis and secretion are mainly carried out in the liver 45. Meanwhile, dysregulation of lipid metabolism is increasingly recognized as a hallmark of obesity and non-alcoholic fatty liver disease (NAFLD) 46,47. However, we are unsure about the causal relationship between the fatty liver and the dysregulation of lipid metabolism in bmp8a-/- zebrafish. Most likely, these two aspects influence each other.
The role of BMP8A in regulating adipocyte differentiation is not yet clear. A series of transcriptional events coordinates the differentiation from preadipocytes to mature adipocytes 48. The adipogenic factors C/EBPα and PPARγ are two central players in white adipocyte differentiation 49. Here, we demonstrated that Bmp8a inhibits adipogenesis by decreasing the expression of the adipogenic markers C/EBPα and PPARγ. Also, the expression of pparγ was increased in both liver and adipose tissue of bmp8a-/- zebrafish compared to wild-type zebrafish (Supplementary Fig. 4). Therefore, it is conceivable that obesity could result if the inhibitory regulation of adipogenesis by Bmp8a is disrupted, which is consistent with our finding of significantly increased body weight in bmp8a-/- zebrafish. The upregulation of pparγ expression may seem difficult to reconcile with the reduced expression of adiponectin and lpl in the obese mutant zebrafish, but there are comparable reports. For example, it was found that the adiponectin level in obese patients was significantly lower than in non-obese people 50,51. Also, obesity can increase Bmp8a expression 27. BMP achieves its signaling activity by interacting with a heterotetrameric receptor complex of transmembrane serine/threonine kinase receptors, BMPR type I and BMPR type II. ALK2, ALK3, ALK4, ALK5, ALK6 and ALK7 have been identified as BMPR type I, while BMPR2, ACVR2A, ACVR2B and TGFβR2 have been identified as BMPR type II 35,36. The Alk6 gene was not found in 3T3-L1 cells, although ALK6 is involved in antiviral immunity in zebrafish 28. Therefore, different BMP functions are achieved by binding to different BMP receptors. The activated receptor kinases are well known to transmit signals through Smad-dependent pathways, including the Smad1/5/8 and Smad2/3 pathways, as well as Smad-independent pathways, including the ERK, JNK, and p38 MAPK pathways [32][33][34]. It has been reported that Smad1/5/8 signaling is fundamental for priming and driving the commitment of 3T3-L1 cells toward adipogenic fates, whereas Smad2/3 activation may blunt adipogenesis via a negative feedback loop that reduces Smad1/5/8 signaling 52. In this study, we found that Bmp8a inhibits adipogenesis by activating Smad2/3 signaling. We further showed that Smad2/3 can directly bind to the Pparγ promoter to inhibit its transcription. Puzzlingly, Bmp8a could increase the phosphorylation level of Smad1/5/8, but the Smad1/5/8 inhibitor DMH1 had no significant impact on the decrease in lipid content in Bmp8a-overexpressing 3T3-L1 cells. It would be interesting to gain a comprehensive understanding of the mechanism by which Bmp8a regulates preadipocyte differentiation.
Recently, studies have found that BMP6 activity in the liver has a positive immune function 53. Also, the NK cell-mediated cytotoxic signaling pathway in the liver of Bmp9-/- mice was affected 26. Here we found that Bmp8a could increase NOD-like receptor signaling and TNF signaling in 3T3-L1 cells, indicating a role for Bmp8a in regulating immune processes. Adipocytes have an innate antiviral system that regulates adipocyte function 54. Preadipocytes express antiviral pattern recognition receptors such as TLR3, MDA5, and RIG-I, which can respond to viruses by producing IL-6, TNFα and type I IFNs 54. Meanwhile, virus stimulation also inhibits the differentiation of preadipocytes into adipocytes 54. This finding can be explained from an evolutionary view of the host response; that is, under virus stimulation the host inhibits adipocyte differentiation to conserve energy against infection 55. However, the mechanism by which virus stimulation inhibits preadipocyte differentiation remains unclear. NF-κB is a downstream molecule of both the NOD-like receptor signaling and TNF signaling pathways. We further revealed that Bmp8a could activate NF-κB signaling. It has been reported that NF-κB inhibits the expression of PPARγ 56, but the mechanism by which it does so remains unclear. We confirmed the interaction of NF-κB and PPARγ, and showed that this binding blocks PPARγ from activating its target gene FABP4, thereby inhibiting adipocyte differentiation. The interaction of NF-κB and PPARγ provides a functional link between immune regulation and adipocyte differentiation (Fig. 8). In addition, it should be noted that, since NF-κB is an inflammatory signaling molecule, it would be interesting to check the inflammatory status of the adipose tissue and liver of mutant bmp8a-/- zebrafish. We found that the expression of the pro-inflammatory genes tnfα and il-1β was elevated in bmp8a-/- zebrafish liver and adipose tissue (Supplementary Fig. 5). Considering that pro-inflammatory cytokines are generally produced in obesity 57, these data are in line with our observation that a prominent fatty liver appeared in bmp8a-/- zebrafish (Fig. 1e). Thus, we provide evidence for an interesting effect of Bmp8a not only on adipogenesis, but also on inflammation. In conclusion, we reported that bmp8a-/- zebrafish display obesity and fatty liver caused by decreased fatty acid oxidation via downregulation of the phosphorylation of AMPK and ACC. Bmp8a suppresses the differentiation of 3T3-L1 preadipocytes into mature adipocytes by increasing Smad2/3 signaling. Bmp8a overexpression markedly increases the NOD-like receptor signaling and TNF signaling pathways in 3T3-L1 cells. Furthermore, NF-κB interacts with PPARγ, providing a signaling bridge between immune regulation and adipocyte differentiation. We bring a previously unidentified insight into Bmp8a-mediated adipogenesis. These findings provide a window into adipose development and metabolism and a basis for the development of strategies targeting obesity and metabolic imbalance. Methods Zebrafish. All animal experiments were performed in accordance with the Institutional Animal Care and Use Committee of the Ocean University of China (SD2007695). The zebrafish bmp8a homozygous mutant lines (bmp8a-/-) were established from the zebrafish AB line using TALEN technology 28. Embryos from natural matings were grown at 28 °C. Female zebrafish were used for the experiments unless otherwise specified.
Cell culture. Mouse 3T3-L1 preadipocytes and human HEK293T cells were obtained from ATCC. The zebrafish liver cells (ZFL) were acquired from the China Zebrafish Resource Center (CZRC). 3T3-L1 and HEK293T cells were maintained in Dulbecco's modified Eagle's medium (DMEM, VivaCell, #C3113-0500) supplemented with 10% fetal bovine serum (FBS, Gibco, #26140-097) and penicillin/streptomycin at 37 °C in a humidified incubator with 5% CO2. The ZFL cells were maintained in DMEM/F-12 medium supplemented with 10% FBS and penicillin/streptomycin at 37 °C in a humidified incubator with 5% CO2. All cell lines were examined for mycoplasma contamination and were cultured for no more than 1 month. The cell morphology was confirmed periodically to avoid cross-contamination or misuse of cell lines. Weight measurement. Weight (g) was measured by putting the fish into a small beaker of facility water on a scale and subtracting the non-fish weight. Anesthesia. Tricaine (Sigma-Aldrich, #E10521) was used at 0.02% in water at 28.5 °C. Fish were transferred to a beaker containing Tricaine and then monitored for signs that they had reached loss of reactivity. For anesthesia, loss of reactivity was typically reached within 60 s 58. Hepatic histopathology analysis. The tissue was fixed, dehydrated, embedded and Oil Red O stained according to standard procedures 59. In brief, fresh livers were washed two times with PBS to remove impurities such as blood and then fixed in 4% (w/v) paraformaldehyde (PFA, Beyotime, #P0099) for 2 h at 4 °C. Tissue blocks were embedded in optimum cutting temperature compound (OCT, Leica, #03803389). The 5 µm thick liver sections were prepared using a Cryostat Microtome (Leica, CM1950). Finally, samples were subjected to Oil Red O (Solarbio, #G1260) staining. The histopathological examination of the liver was performed under a Zeiss Axio Imager A1 microscope. Determination of serum levels of TC, TG, and GLU. The zebrafish were fasted for 12 h and then anesthetized. Blood was collected by holding a heparinized microcapillary tube (Kimble, #41B2501) after decapitation. The blood was allowed to settle for 1.5 h and then centrifuged at 4 °C, 12,000 × g, for 12 min. The whole blood levels of TG, TC, and GLU were measured by a fully automated biochemistry analyzer (SMT-120VP, Seamaty). Determination of liver and adipose tissue levels of TC, TG. The zebrafish were fasted for 12 h and then anesthetized. Fresh liver and adipose tissues were collected, rinsed with PBS (pH 7.4) at 4 °C, blotted onto filter paper, weighed, placed into a homogenization vessel, homogenized by adding isopropanol at a ratio of weight (g) : volume (ml) = 1:9 at 4 °C, and centrifuged at 10,000 × g for 10 min at 4 °C, and the supernatant was placed on ice until tested. The levels of TG and TC were analyzed using assay kits (Triglyceride Colorimetric Assay Kit, #E-BC-K261-M; Total Cholesterol Colorimetric Assay Kit, #E-BC-K109-M) purchased from Elabscience Biotechnology according to the manufacturer's instructions.
Immunoblot analysis and co-immunoprecipitation. Cultured cells were lysed in NP-40 buffer (Beyotime, #P0013F) for 20 min on ice. After centrifugation for 10 min at 13,000 × g and 4 °C, supernatants were incubated overnight with Protein A + G Agarose (Beyotime, #P2055) coupled to the indicated antibody. The sepharose beads were washed three times with 1 ml NP-40 buffer. For immunoblot analysis, immunoprecipitates or whole-cell lysates were separated by SDS-PAGE, electro-transferred to PVDF membranes and blocked for 4 h with 4% bovine serum albumin (BSA) in PBS-T (phosphate buffered saline supplemented with 0.1% Tween 20), followed by blotting with the appropriate antibodies and detection with the Omni-ECL™ Femto Light Chemiluminescence Kit (Epizyme, #SQ201). The membrane was visualized using a fluorescent Western blot imaging system (ChampChemi™ 610 plus, Sagecreation). The integrated absorbance (IA = mean grey value) of the protein bands was measured using ImageJ. The target protein expression level was presented as the ratio of the IA of the target protein to the IA of β-actin or total protein.

Fig. 8 Schematic illustration of the effect of Bmp8a on the regulation of lipid metabolism and adipocyte differentiation. Bmp8a is required for fatty acid oxidation and regulates adipogenesis, as evidenced by the weight gain and fatty liver of zebrafish lacking Bmp8a.

Drug treatment and analysis. The Smad1/5/8 inhibitor DMH1 (Selleck, #S7146) and the Smad2/3 inhibitor TP0427736 HCl (Selleck, #S8700) were dissolved in dimethyl sulfoxide. The cells were treated with each inhibitor diluted in culture medium at a concentration of 5 μM. Antibodies. Antibodies were obtained from Affinity. In vivo labeling and imaging of zebrafish adipocytes. Nile Red (Solarbio, #N8440) was dissolved in acetone at 1.25 mg/ml and stored in the dark at 20 °C. Vessels containing live, unanesthetized zebrafish were supplemented with Nile Red to a final working concentration of 0.5 μg/ml and then placed in the dark for 30 min. Zebrafish were anesthetized in Tricaine (Sigma-Aldrich, #E10521), mounted in 3% methylcellulose, and imaged using a Leica MZ16F fluorescence stereomicroscope. Luciferase assays. Luciferase activity levels were measured according to the manufacturer's instructions (Luc-Pair Duo-Luciferase Assay Kits 2.0, iGene Biotechnology, #LF002). Briefly, HEK293T cells were plated at 6 × 10^4 cells/well in 24-well plates and cotransfected with the various constructs at a ratio of 10:10:1 (BRE- or CAGA-driven luciferase reporter / expression plasmid / pRL-TK) using FuGENE HD Transfection Reagent. The luciferase reporter activity was measured using a Spark 20M multifunctional microplate reader. Data were normalized by calculating the ratio of Firefly to Renilla luciferase activity.
Oil Red O staining. After induction of adipogenic differentiation, 3T3-L1 cells were rinsed three times with PBS and then fixed for 20 min with 4% PFA. The cells were treated with 60% isopropanol in H2O for 2 min and then stained for 30 min in freshly diluted Oil Red O solution (Oil Red O saturated solution (Solarbio, #G1260) diluted with water (3:2) and filtered through a 0.45 µm filter). Cells were then washed with 60% isopropanol in H2O and twice with PBS. The cells were observed in H2O under a microscope and photographed. Oil Red O was extracted with 100% isopropanol, and the absorbance was read at OD 492 nm. RNA quantification. RNA was isolated using the Trizol Reagent (Invitrogen, #15596026). RNA was reverse-transcribed to cDNA with the PrimeScript™ RT reagent Kit with gDNA Eraser (TaKaRa, #RP047A). Samples without reverse transcriptase were also included as controls. Gene expression was determined by amplifying the cDNA with ChamQ SYBR Color qPCR Master Mix (Vazyme, #Q431-02) using an ABI 7500 Fast Real-Time PCR System (Applied Biosystems, USA). Gene expression levels were normalized to an internal control gene (zebrafish β-actin or mouse Gapdh). All qRT-PCR experiments were performed in triplicate and repeated three times. The primer sequences are described in Supplementary Table 2. RNA sequencing. RNA sequencing was performed by Beijing Baimaike Biotechnology Co., Ltd (Beijing, China). Total RNA was isolated from 3T3-L1 cells using the TRIzol reagent (Invitrogen, #15596026). For the RNA-sequencing analysis, three independent samples from each group, including group 1 (LV-ZsGreen1), group 2 (LV-bmp8a), and group 3 (LV-Bmp8a), were collected. Sequencing libraries were generated using the NEBNext Ultra Directional RNA Library Prep Kit for Illumina (NEB, #E7530L). Sequencing was performed on an Illumina NovaSeq 6000, and 150-nucleotide paired-end reads were generated. At least 6 GB of clean data, with >94% of bases above Q30, were produced for each sample. HISAT2 and StringTie were used to align the reads and analyze the transcripts 60,61. The DEGseq R package was used to identify differentially expressed genes. The whole analysis was performed on BMKCloud (www.biocloud.net). Statistics and reproducibility. All experiments were performed in triplicate with three independent repeats. Statistical analyses were performed using GraphPad Prism 8.0.2. Data are presented as mean ± SD. One-way ANOVA or a two-tailed Student's t-test was used to determine the p value. *p < 0.05, **p < 0.01, ***p < 0.001; ns, not significant, p > 0.05. Reporting summary. Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Fig. 4 Knockdown of Bmp8a promotes adipogenesis. a, b After induction of adipogenic differentiation, Bmp8a knocked-down 3T3-L1 cells (LV-shRNA-Bmp8a#1) and control cells (Mock and LV-shRNA-scrambled) were stained with Oil Red O and subjected to OD 492 quantification (n = 3). Scale bar = 20 µm. c-f On the days after induction as indicated, the expression of adipogenic markers (Cebpα, Pparγ, and Fasn) was examined at the mRNA level by qPCR (n = 3). g-i On the days after induction as indicated, the protein levels of PPARγ and C/EBPα were detected by immunoblot (g). Protein expression levels were quantified using ImageJ software and normalized to the amount of β-actin (h, i, n = 3). Data are representative of at least three independent experiments. Data were analyzed by one-way ANOVA and presented as mean ± SD (**p < 0.01, ***p < 0.001).
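The statistical analysis stated in the Methods (a two-tailed Student's t-test for two-group comparisons, one-way ANOVA for more than two groups, results reported as mean ± SD from triplicates) can be illustrated with SciPy as follows; the numerical values are placeholders, not data from this study, and the analysis in the paper was performed in GraphPad Prism.

```python
import numpy as np
from scipy import stats

# placeholder triplicate measurements, for illustration only
wt      = np.array([1.00, 1.05, 0.95])
mutant  = np.array([1.60, 1.72, 1.55])
control = np.array([1.20, 1.30, 1.25])

# two-tailed Student's t-test for a two-group comparison (e.g. WT vs bmp8a-/-)
t_stat, p_two_groups = stats.ttest_ind(wt, mutant)

# one-way ANOVA when comparing more than two groups
f_stat, p_anova = stats.f_oneway(wt, mutant, control)

mean, sd = mutant.mean(), mutant.std(ddof=1)   # reported as mean ± SD
```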
Fig. 7 The interaction of NF-κB and PPARγ mediates the effect of Bmp8a on adipogenesis. a, c After induction of adipogenic differentiation, the downregulated (a) and upregulated (c) KEGG pathways in zebrafish bmp8a-overexpressing 3T3-L1 cells. b, d After induction of adipogenic differentiation, the downregulated (b) and upregulated (d) KEGG pathways in mouse Bmp8a-overexpressing 3T3-L1 cells. e, f Immunoblot analysis and quantification of p-IKKα/β and p-p65 in Mock, LV-ZsGreen1, and LV-bmp8a 3T3-L1 cells (n = 3). g, h Immunoblot analysis and quantification of p-IKKα/β and p-p65 in Mock, LV-ZsGreen1, and LV-Bmp8a 3T3-L1 cells. Protein expression levels were quantified by ImageJ software and normalized to total protein (n = 3). i Co-immunoprecipitation and immunoblot analysis of cells co-transfected with PPARγ and p65 (n = 3). j Schematic drawing of the predicted PPRE site in the Fabp4 promoter region. k Schematic drawing of the WT and PPRE-site-mutated luciferase reporter plasmids. l, m Quantification of the activity of the Fabp4-promoter (l) and Fabp4-promoter-ΔPPRE (m) luciferase reporters in HEK293T cells transfected with Vector or pCMV-Pparγ, or co-transfected with pCMV-Pparγ and pCMV-p65, respectively. Renilla luciferase was used as the internal control (n = 3). Data are from three independent experiments, were analyzed by one-way ANOVA and are presented as mean ± SD (ns not significant, **p < 0.01, ***p < 0.001).
7,623
2023-08-08T00:00:00.000
[ "Biology" ]
Efficient Laser-Driven Proton Acceleration from a Cryogenic Solid Hydrogen Target We report on the successful implementation and characterization of a cryogenic solid hydrogen target in experiments on high-power laser-driven proton acceleration. When irradiating a solid hydrogen filament of 10 μm diameter with 10-Terawatt laser pulses of 2.5 J energy, protons with kinetic energies in excess of 20 MeV exhibiting non-thermal features in their spectrum were observed. The protons were emitted into a large solid angle, reaching a total conversion efficiency of several percent. Two-dimensional particle-in-cell simulations confirm our results, indicating that the spectral modulations are caused by collisionless shocks launched from the surface of the high-density filament into a low-density corona surrounding the target. The use of solid hydrogen targets may significantly improve the prospects of laser-accelerated proton pulses for future applications. Charged electrons in turn pull along the positive ions. RPA requires balancing the pressure from laser radiation with the pressure from charge separation in the target. This is primarily achievable with nm-thin solid targets 19,20. Yet another possible mechanism is the acceleration of ions by collisionless electrostatic shocks (Collisionless Shock Acceleration, CSA). Such shocks can, e.g., be generated at a sharp transition from a hot, dense plasma to a cooler plasma of lower density 21. A strong electric field spike is formed at the shock front, accelerating ions from the less dense plasma into which the shock propagates. In all of these mechanisms, a considerable fraction of the energy is imparted to heavier ions. Hence, there is good reason to consider pure hydrogen targets, which can readily be produced e.g. with gas jets, ensuring that protons are the only accelerated ion species. However, both RPA and shock acceleration demand that an over-critical plasma be generated. For regular-pressure gas jets, long-wavelength lasers such as CO2 lasers are required. Here, narrow-band proton beams have been observed 22,23, but with low conversion efficiencies (4 × 10^-4). If near-IR, high-power lasers are to be used, they require either ultra-high-pressure gas jets 24,25 or hydrogen targets at near-solid density [26][27][28]. Furthermore, a self-replenishing target, which is well suited for high-repetition-rate operation, would be beneficial. Over the last few years, there has been considerable research on the application of solid hydrogen as the target material. The results from these measurements show a high conversion efficiency from laser energy to protons when using 300 ps-long 26 or sub-ps pulses 28,29. In most of these experiments, temperature-like proton spectra following a Boltzmann distribution were detected, with cutoff energies in the range of 1 MeV 26 or up to 20 MeV 28. In the results presented by Gauthier et al. 27, a quasi-monoenergetic feature around 1 MeV was observed, but the exact origin of this feature has not yet been identified. Furthermore, Göde et al. found that, due to the presence of a preplasma on the target rear surface, Weibel-type instabilities affecting the formation of the hot-electron sheath on the target rear surface can strongly modulate the generated beam profile in the transverse direction 30, rendering such beams problematic for applications that require a smooth proton beam.
In this paper, we report the successful application of a cryogenic solid hydrogen target 31,32 for laser-driven proton acceleration. Using a Joule-class, 1/40-Hz laser system, we achieved both a high energy conversion efficiency from the laser pulse to the accelerated proton beam and a cutoff-energy in excess of 20 MeV while still exhibiting a rather smooth beam profile. Furthermore, we observed clear non-thermal features in the proton spectra, which can be explained as the result of an electro-static, collisionless shock occurring in the low-density corona surrounding the solid-hydrogen filaments, thereby offering a new explanation for our - and potentially also for other - experimental results. Experimental Setup In a cryogenic microjet source, 99.999% purity hydrogen was pressurized up to 30 bar and liquefied with a continuous-flow liquid helium cryostat at a working temperature of 14 K to 19 K. Through a (10 ± 0.5) μm diameter glass capillary nozzle the liquid was injected with a laminar flow into the evacuated main interaction chamber, emerging as a continuous cylindrical stream with a diameter set by the nozzle's cross section. The propagating liquid then rapidly cooled by surface evaporation until it froze, producing a continuously replenishing solid filament 31, cf. Fig. 1. While the hydrogen target solidifies before it is irradiated by the high-intensity laser pulse, the formation of a contamination layer on its surface, which might contain other ion species as commonly present in standard solid-target interactions, is nevertheless suppressed. The liquid hydrogen jet is emitted at a speed of ∼166 m/s from the nozzle and the filament is irradiated by a high-intensity laser pulse at a distance of 13 mm below the orifice (see next paragraph). The corresponding propagation time of ∼78 μs might be sufficient to adsorb, e.g., hydrocarbon contaminants from the rest-gas in the interaction chamber, in particular when the low temperature of the filament is considered. However, the vapor pressure of solid hydrogen at this temperature leads to the formation of a low-density corona of hydrogen gas surrounding the filament. During this evaporative process (which is responsible for the freezing of the liquid hydrogen in the first place) any adsorbed contaminants would immediately be blown away again. We can therefore assume that the target consists of pure hydrogen only. This assumption is also confirmed by the fact that with the Thomson parabola spectrometer, which was used for the detection of the accelerated ions (see below), only protons and no other ion species were measured. At a distance of 13 mm below the nozzle's orifice, the filament was irradiated by 1030 nm linearly polarized pulses of 2.5 J energy and 217 fs duration (intensity FWHM) from the POLARIS laser system 33. The pulses were focused by an off-axis parabolic mirror (F# = 2.5) onto the filament under normal incidence. In a focal spot area of 8.4 μm², 25.9% of the laser pulse energy was contained, resulting in an on-target intensity of 3.5 × 10^19 W cm^-2, corresponding to a normalized vector potential of a_0 = 0.855 λ[μm] (I/10^18 W cm^-2)^1/2 ≈ 5.2. Using an optical probe pulse of ns duration, the position of the hydrogen filament could be aligned and monitored with respect to the focal plane of the laser before the interaction, cf. Fig. 1b). The standard deviation of the filament's front surface from the focal plane was 4.0 μm.
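As a quick plausibility check of the quoted focal parameters, the sketch below recomputes the average on-target intensity and the normalized vector potential from the numbers given in the text. It is a minimal, hypothetical helper script: the 0.855·λ[μm]·√(I/10^18 W cm⁻²) expression is the standard estimate for linearly polarized light and is not taken from the paper itself.

```python
import math

E_pulse  = 2.5        # J, total pulse energy
f_enc    = 0.259      # fraction of the energy inside the 8.4 um^2 focal spot
tau_fwhm = 217e-15    # s, pulse duration (intensity FWHM)
area_cm2 = 8.4e-8     # 8.4 um^2 expressed in cm^2
lam_um   = 1.03       # laser wavelength in micrometres

# Average intensity during the FWHM of the pulse
intensity = f_enc * E_pulse / tau_fwhm / area_cm2          # W/cm^2

# Normalized vector potential a0, standard estimate for linear polarization
a0 = 0.855 * lam_um * math.sqrt(intensity / 1e18)

print(f"I  ~ {intensity:.2e} W/cm^2")   # ~ 3.5e19 W/cm^2, as quoted in the text
print(f"a0 ~ {a0:.1f}")                 # ~ 5, i.e. relativistic intensity
```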
In the laser's forward direction, a 5 mm thick, fast responding plastic scintillator [Saint Gobain BC-422Q, signal pulse width 360 ps (FWHM) and signal decay time 700 ps] detected the proton beam profile. Positioned 575 mm behind the target, it covered a solid angle of 72.1 msr, the half-opening angle in the horizontal direction was 9.4°. The scintillator was light-shielded by 30 μm of aluminum and imaged onto a gateable CCD with a minimal gate ≤1 ns. The time of flight (TOF) from target to scintillator was 1.9 ns for γ-rays and MeV electrons and ≥9.3 ns for ≤20 MeV protons. This difference was sufficient for a clear distinction between these particles on the scintillator. By an appropriate choice of the gate's width and delay with respect to the main pulse we could record the proton beam profile for energies between 3 and 20 MeV 34 . Through a hole in the scintillator aligned to the laser axis, protons could propagate towards a Thomson parabola spectrometer, covering a solid angle of 1.07 μsr. In this spectrometer, the protons were dispersed by parallel magnetic (B ≈ 600 mT) and electric fields (ℰ = 750 kVm −1 ) and then detected by a micro channel plate (MCP), which had been absolutely calibrated using CR39 nuclear track detectors 35 . At the lower cut-off energy of 3 MeV determined by the MCP's size, the energy resolution was ΔE = 200 keV. Experimental Results The description of the reported results is based on a set of 2197 shots. Influenced by the stability of the filament, proton spectra exceeding the lower cut-off energy of the spectrometer (3 MeV) were recorded in 65.4% of all shots. In 30.5% of our recorded spectra the low-energy part showed an exponential decay as expected by TNSA, but the high-energy part of the spectrum (i.e. above ≈8 MeV extending up to 21 MeV) exhibited clear non-thermal features. Three exemplary proton spectra showing such modulations are shown in Fig. 2a-c). No other ion species were detected. Simultaneously, we recorded the proton beam profiles with the scintillator. The profiles in Fig. 2g-i) correspond to the spectra a)-c). Here, no clear lateral decay of the signal could be detected in our field of view indicating a proton emission into a significantly larger opening angle than covered by the scintillator. Furthermore, we observed no or only low-amplitude high-frequency spatial modulations 36 , which would have hinted towards plasma instabilities occurring during the acceleration process or the subsequent propagation 30 . Such spatial instabilities would likely lead to spectral variations, too. We therefore assume that the proton spectrum emitted in different directions only slowly varies. The conversion efficiency from laser to protons with 3 MeV ≤ E ≤ 20 MeV emitted into the solid angle covered by the scintillator was between 0.3% and 0.9%. ℰarlier simulations 37 found that when using μm-thin foils as a target, non-thermal features in the proton spectra are only generated when the acceleration is limited to a spatially confined source containing another ion species with lower q/m. In comparison, our target was a virtually infinitely long cylinder of pure hydrogen that nevertheless produced spectral modulations. However, the solid hydrogen filament cannot be assumed to exhibit a step-like density profile at its surface. At the triple point of hydrogen (71.9 mbar and 13.947 K 38 ), the vapor pressure leads to the formation of a gas corona surrounding the filament. 
Close to the surface, the gas density is equivalent to n_e = 7.6 × 10^19 cm^-3 = 0.072 n_c (for a critical density n_c = 4π²ε_0 m_e c²/(e² λ_L²) with λ_L = 1030 nm). Therefore the target consists of a core of solid hydrogen with 10 μm diameter, which will be ionized to n_e = 49.3 n_c, surrounded by a corona with a density almost three orders of magnitude smaller. Assuming a stationary isothermal expansion of the evaporating gas and a vapor density n_0 at the filament's surface at r_0, the corona density scales with distance according to Fick's law of diffusion, where d_H2 is the molecular radius of hydrogen. This leads to a slow rarefaction of the corona close to the target surface. To investigate the possible influence of the laser prepulse on the acceleration process, the laser's temporal intensity contrast (TIC) due to amplified spontaneous emission (ASE) was modified. To accomplish that, two alternative frontends could be used 39, intrinsically generating seed pulses for the final amplifiers with different initial TIC. Furthermore, we could reduce the seed energy for one of the amplifiers in the POLARIS chain and simultaneously increase its gain. As a result of these measures, the TIC at a time 30 ps before the main pulse could be varied between I_ASE/I_0 = 2 × 10^-13 and 4 × 10^-8, while keeping the main pulse energy and duration constant 39,40. The generation of modulated proton spectra was observed over a wide range of TIC (2 × 10^-13 < I_ASE/I_0 < 2 × 10^-8). In this TIC range the achievable proton cut-off energies extended up to 14-21 MeV, cf. Fig. 2a-c) and Fig. 3. Nevertheless, shot-to-shot fluctuations of the cut-off energy occurred even for fixed laser parameters. These fluctuations can likely be attributed to the spatial instability of the filament.
Figure 1. (a) Here, laser pulses (red) from the POLARIS system irradiate the solid hydrogen filament (blue) vertically emitted from the cryogenic target source. Before the interaction, the filament's position with respect to the focal plane of the laser can be controlled with a sideview-imaging system using a frequency-doubled probe laser pulse from a Nd:YAG laser with ns duration (green). Protons emitted during the interaction (grey) are first detected by a plastic scintillator. A gateable CCD camera (not shown here), which is looking at this scintillator from the back, provides energy-resolved information about the proton beam's spatial profile. Through a hole in the scintillator and an ion beam guide aligned to the laser forward direction protons can propagate towards a Thomson parabola ion spectrometer with parallel electric and magnetic fields equipped with a micro-channel plate (MCP) as the detector. With this spectrometer, energy spectra of the protons and any other ion species could be detected. (b) Sideview image of the solid hydrogen filament around the laser focus position but without the main pulse.
A number of exemplary proton energy spectra as measured with the Thomson parabola for different values of the TIC are shown in Fig. 3. When worsening the pulse contrast further, the energy of the accelerated protons immediately dropped close to or below 3 MeV, the low-energy limit of the spectrometer, cf. Fig. 2d-f) and the small inset of Fig. 3, where the maximum proton energy depending on the TIC is shown. It is likely that in this case the ASE prepulse significantly changed the target characteristics before the main interaction, rendering the ion acceleration ineffective.
We therefore conclude that for the acceleration of protons from solid-hydrogen filaments showing non-thermal features in their energy spectrum, any prepulse-induced plasma expansion of the solid filament prior to the interaction with the main pulse plays a subordinate role only - as long as I_ASE/I_0 < 2 × 10^-8. It is more likely that the corona surrounding the filament is the reason for the observed spectral modulations.
Figure 2. Experimental results I. Proton energy spectra (a-f) and beam profiles (g-i). The spectra (a-c) and the corresponding beam profiles (g-i) were obtained with a temporal intensity contrast TIC = 3.6 × 10^-9 at a time 30 ps before the main pulse (for a definition of the TIC see text). While the low-energy part of the spectrum in (a-c) shows an exponential decay, modulations are visible at higher energies. The scintillator images (9.4° half-opening angle) show beam profiles with no clear intensity drop towards the edges of the field of view, indicating an emission of protons into a significantly larger opening angle. Note that the scintillator's rear side is imaged onto the CCD. The black shadow visible in the images of the beam profiles is due to the tube used as the ion beam guide towards the spectrometer, which blocks part of the image. The spectra (d-f) correspond to shots with TIC = 4 × 10^-8. Here, only 5 out of 81 shots produced protons with energies only slightly above the spectrometer's lower cut-off of 3 MeV.
Numerical Simulations To study the influence of the low-density corona surrounding the filament we performed two-dimensional particle-in-cell (2D-PIC) simulations using the Osiris code 41. These simulations were performed using a 12000 × 12000 grid with a total size of 400 × 400 (c/ω_L)². The target consisted of a circular filament of purely hydrogenic plasma centered at (100, 200) c/ω_L with a radius of 32 c/ω_L and an electron/proton density of 40 n_c. The filament in turn was surrounded by a circular low-density corona that extended out another 12.5 c/ω_L in radius. Due to the slow rarefaction of the evaporating hydrogen close to the filament surface, the corona was assumed in the simulation to have a uniform density of 0.08 n_c. A linearly polarized laser pulse was incident on this target from the left-hand boundary along the line y = 200 c/ω_L. The pulse had a triangular temporal profile with rise/fall times of 377 ω_L^-1. The resulting proton energy spectrum is shown by the solid black line in Fig. 4a. Only protons traveling at an angle of less than 1 mrad with respect to the laser axis contribute to this spectrum. It clearly contains non-thermal features that are not dissimilar to those observed in the experiment. In particular, the spectrum shows a deep cleft and a pronounced peak in what would otherwise be described as a 'thermal' distribution. It is important to point out that this only occurred in simulations including the low-density corona. In a simulation without the corona these features were no longer present, as can be seen by the dashed grey line in Fig. 4a. Hence our simulations support our interpretation that the corona is essential to the generation of the spectral modulations that are also seen in our experimental results. Discussion We characterize the process producing these spectral features as 'collisionless shock acceleration' 21,23,42. Overall, the protons are accelerated by TNSA from the filament's rear surface, i.e.
by a sheath field generated by hot electrons, hence the overall thermal energy spectrum. The exact shape and extension of the corona will slightly affect the amplitude of this sheath field, which is likely to explain the slight difference in peak energy between experiment and simulations. The presence of the corona additionally affects the electric field structure, cf. Fig. 4b-d. In addition to the sheath field at the corona-vacuum boundary, one also gets a second, initially higher electric field spike forming at the interface between the high-density core of the target and the low-density corona. This is a natural consequence of having a change in the ion density profile over a scale length which is much shorter than the Debye length λ_D = [ε_0 k_B T_e/(n_eh e²)]^1/2 associated with the hot electrons (which we estimate to be in excess of 0.3 μm). This spike leads to the formation of a collisionless electrostatic shock that propagates into the corona region. The spike moves together with the step-like change in the proton density, cf. Fig. 4b-d), which can also be seen in the evolution of the protons' phase space as shown in Fig. 4e-h). Since initially the strongest acceleration of protons occurs in the locality of this shock, protons will experience a 'pistoning' type of acceleration on encountering this shock. Piston-type acceleration like this will produce a monoenergetic bunch if the drive is uniform and constant in nature and if the bunch undergoes no further acceleration. In our case, the drive is neither strictly constant nor strictly uniform, and the protons accelerated by this undergo further acceleration. Nonetheless, this effect is sufficient to produce a distinct cleft in the spectrum and a slight peak at higher energies, as also observed in the experiment. From our cylindrical filament protons are emitted into a large solid angle, cf. Fig. 2g-i). In Fig. 5a), we show the proton phase-space density at t = 1131 ω_L^-1 from the simulation. The proton emission is rather uniform within a half-opening angle of 75° with respect to the laser direction, both in terms of proton numbers and spectrum. In particular, the spectral modulation described before is visible as the dark-blue half-ring, which is surrounded by a narrow light-green half-ring, both centered around the origin. In the experiment, the proton beam showed a rather uniform distribution over the area covered by the scintillator in both transverse directions. When assuming a similar opening angle as observed in the simulation, the total conversion efficiency from laser to protons is on the order of 10%, which is quite a high value when compared to results reported so far from similar laser systems but using different targets. Simulations performed at higher laser energies (15 J instead of 2.5 J), but with otherwise similar laser and target parameters, showed a shift of the maximum proton energy and the spectral dip to 45 MeV and 28 MeV, respectively. This is an increase by a factor of 4 compared to the low-energy simulation shown in Fig. 4a). Since in our experiment the protons are initially accelerated by TNSA, this scaling agrees with our interpretation. When assuming a similar scaling for the conversion efficiency with laser energy as reported by Robson et al. 12, the conversion efficiency will increase well beyond 20% for 10-J-class laser systems, significantly more than what is achievable with any other type of target so far.
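The density and Debye-length numbers used in the discussion above can be reproduced with a few lines of arithmetic. The sketch below evaluates the critical density for 1030 nm light, the corona-to-critical density ratio quoted in the text, and an order-of-magnitude Debye length; the hot-electron temperature and density in the last step are illustrative assumptions (roughly ponderomotive scaling for a_0 ~ 5), not values stated in the paper.

```python
import math

eps0, m_e, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8   # SI constants

lam_L = 1030e-9                          # laser wavelength (m)
omega = 2 * math.pi * c / lam_L          # laser angular frequency (rad/s)
n_c   = eps0 * m_e * omega**2 / e**2     # critical density (m^-3)
print(f"n_c ~ {n_c*1e-6:.2e} cm^-3")     # ~ 1.05e21 cm^-3

n_corona = 7.6e19 * 1e6                  # corona density from the text (m^-3)
print(f"n_corona/n_c ~ {n_corona/n_c:.3f}")   # ~ 0.072, as quoted

# Debye length of the hot-electron population (assumed, illustrative values)
T_e_eV = 1.4e6                           # ~ ponderomotive temperature, assumption
n_eh   = 0.1 * n_c                       # assumed hot-electron density
lam_D  = math.sqrt(eps0 * (T_e_eV * e) / (n_eh * e**2))
print(f"lambda_D ~ {lam_D*1e6:.2f} um")  # comfortably above the 0.3 um bound
```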
Additionally, we performed simulations with a rectangular target profile (1 × 20 μm²). While the proton spectra are comparable to the cylindrical case, their phase space is remarkably different, cf. Fig. 5b). The proton emission half angle is reduced to 14°. The proton energy spectra are compared in Fig. 5c) for the case of the cylindrical (solid black line) and the planar target (dashed red line). Even though the conversion efficiency reduces by 30% due to the larger plasma slab size, the proton numbers within this emission angle are higher by a factor of 8.6 as compared to the cylindrical filament. Such target cross sections could be realized using a rectangular nozzle. In conclusion, our results show that cryogenic solid hydrogen targets are very promising candidates for optimizing laser-based proton sources for future applications for which a high conversion efficiency is a requirement. While the protons are emitted into a large opening angle, the energy conversion efficiency for our solid hydrogen target in combination with high-repetition-rate lasers is higher than for any other solid or gaseous targets used so far. Using solid hydrogen targets with a rectangular cross-section should allow tailoring of the proton beam profile, making laser-accelerated protons from solid hydrogen targets a promising source for applications.
5,093.2
2019-11-11T00:00:00.000
[ "Physics", "Engineering" ]
Intelligent Mobile Wireless Network for Toxic Gas Cloud Monitoring and Tracking Intelligent wireless networks that comprise self-organizing autonomous vehicles equipped with punctual sensors and radio modules support many hostile and harsh environment monitoring systems. This work’s contribution shows the benefits of applying such networks to estimate clouds’ boundaries created by hazardous toxic substances heavier than air when accidentally released into the atmosphere. The paper addresses issues concerning sensing networks’ design, focussing on a computing scheme for online motion trajectory calculation and data exchange. A three-stage approach that incorporates three algorithms for sensing devices’ displacement calculation in a collaborative network according to the current task, namely exploration and gas cloud detection, boundary detection and estimation, and tracking the evolving cloud, is presented. A network connectivity-maintaining virtual force mobility model is used to calculate subsequent sensor positions, and multi-hop communication is used for data exchange. The main focus is on the efficient tracking of the cloud boundary. The proposed sensing scheme is sensitive to crucial mobility model parameters. The paper presents five procedures for calculating the optimal values of these parameters. In contrast to widely used techniques, the presented approach to gas cloud monitoring does not calculate sensors’ displacements based on exact values of gas concentration and concentration gradients. The sensor readings are reduced to two values: the gas concentration below or greater than the safe value. The utility and efficiency of the presented method were justified through extensive simulations, giving encouraging results. The test cases were carried out on several scenarios with regular and irregular shapes of clouds generated using a widely used box model that describes the heavy gas dispersion in the atmospheric air. The simulation results demonstrate that using only a rough measurement indicating that the threshold concentration value was exceeded can detect and efficiently track a gas cloud boundary. This makes the sensing system less sensitive to the quality of the gas concentration measurement. Thus, it can be easily used to detect real phenomena. Significant results are recommendations on selecting procedures for computing mobility model parameters while tracking clouds with different shapes and determining optimal values of these parameters in convex and nonconvex cloud boundaries. Introduction Modern industry produces substances whose volatilization causes a severe threat to the environment. Many of these hazardous substances form gas clouds heavier than air when accidentally released into the atmosphere. As a result, the gas cloud changes its shape, which is usually irregular, moves and covers a larger area with time. Toxic gases usually endanger humans [1]. The strategic step for conducting a situational assessment is to create a sensing system to determine a gas-covered area with a concentration greater than the safe value [2]. regularity of the distribution. Therefore, methods that determine the direction of motion based on the exact value of the measured concentration can give outstanding results in simulators and fail in the real world. In our system, sensor readings are reduced to two values, namely the gas concentration below or greater than the safe value, to decrease its sensitivity to the accuracy of gas concentration measurements. 
To sum up, this work's main contribution is to present a general overview of the system for detection of a heavy gas cloud in a working space W, detection of its boundary, and its estimation and tracking. The algorithms for motion planning and cooperation of the sensing devices used at each stage of the gas cloud exploration are presented and discussed. The system can be used for disaster management using unmanned vehicles or drones equipped with sensors. The data collected by the sensors and transmitted to the emergency management center will make it possible to create emergency awareness, support the evacuation of people from the endangered area, support the rescue teams conducting operational activities, and neutralize a toxic spill. The presented results extend the work described in [11]. The paper [11] addresses the problem of cloud boundary detection and estimation. In this paper, a brief overview of algorithms for cloud boundary detection and estimation is provided. Attention is paid to the most challenging task, namely tracking the boundary of a slowly moving cloud. The main contribution is a novel algorithm for convex and nonconvex gas cloud boundary tracking and five procedures for tuning the algorithm's parameters. An exhaustive simulation study that shows the effectiveness of these procedures depending on the considered scenario is presented. Finally, we provide recommendations for the choice of procedure for a given cloud shape. The rest of the paper is structured as follows. Section 2 presents the survey of the application of MANETs to monitor and track phenomena clouds. Section 3 provides a formulation of the problem to be solved and the model of a network composed of smart wireless sensing devices. Section 4 describes the concept of a three-stage strategy for heavy gas cloud monitoring and tracking and computing schemes for online motion trajectory calculation. A comparative study of a few variants of procedures for tuning model parameters is presented in Section 5. Finally, Section 6 concludes the paper and highlights future research directions. Appendix A provides a list of the notation used, which is standard across all sections. Related Work The problem of developing effective and valuable strategies for detecting, estimating and tracking boundaries using collaborating static and mobile sensing devices has been studied in recent years by many researchers. Various techniques have been investigated, implemented and tested through simulation and in testbeds constructed of real devices. Shu et al. in [12] discuss the research directions for existing and future gas leakage source detection and boundary estimation schemes with wireless sensor networks. The authors of [13] present a survey of selected approaches for monitoring phenomena clouds. These can be classified according to various criteria, such as the task to be performed, equipment used and expected network configuration, sensing and control strategies, communication and computing algorithms, and the accuracy of utilized gas dispersion models. A significant problem addressed in the literature is the resilience of the monitoring network to failures and insufficient energy resources. Imran and Ko in [14] present an energy-efficient method for detecting and estimating continuous objects spread over a large area by a failure-prone network. Improving the accuracy of boundary estimation while reducing the energy consumption is considered in [15].
This paper focuses on mobile sensing networks for phenomena-cloud boundary estimation and tracking. In general, we can distinguish simple systems built of a single mobile sensor or more complex systems comprised of a team of sensing devices. The simple systems for gas cloud boundary tracking are described in [16,17]. Wang et al. in [16] propose an algorithm for boundary estimation and tracking using a gas concentration gradient. A single moving platform equipped with a punctual sensor explores the sensing scene and constantly measures the gas concentration. The concentration gradient and consequently a new position of the platform are calculated based on these measurements. An algorithm for controlling the motion of a single unmanned underwater vehicle (UUV) equipped with sensors is presented and examined in [17]. Similarly to [16], both the initial detection of the cloud boundary and the tracking procedures are based on calculating the toxic substance concentration gradient. Moreover, the current speed measured by a dedicated sensor is taken into account in motion trajectory calculation. The proposed control algorithm was validated in a natural water reservoir with a dyeing substance imitating the toxic one. The paper [18] addresses the problem of boundary tracking for a mobile robot with uncertain dynamics and external disturbances. Sun et al. present and evaluate an adaptive control system using a radial basis function neural network to approximate a nonlinear function containing the uncertain model terms and the toxic substance concentration gradient. Simulation results illustrate the stability of the system. Another group of systems comprises networks built from cooperating or non-cooperating devices. The scheme for data gathering by multiple autonomous and non-cooperating underwater vehicles is described and discussed in [19]. In this approach, each device's target position is determined autonomously. The current positions of other team members do not influence this device's position calculation. A similar approach with multiple autonomous vehicles used to track boundaries is described in [20]. Many researchers address the problem of boundary tracking by a team of cooperating vehicles. Singh et al. in [21] describe a simple mechanism for unmanned vehicle cooperation. It is composed of two phases: in the first, the mobile devices that form the network roughly explore the whole area of interest and the initial shape of a boundary is discovered. In the second phase, each node follows the boundary to increase the accuracy of the initial estimation. Singh et al. demonstrate that their strategy can be easily adapted to a network consisting of multiple devices. In the case of a wide sensing area, it can be divided into subregions that are equally assigned to all nodes. Data collected by all nodes are gathered by a selected node that estimates the boundary's shape. Next, the estimated boundary is divided into parts that are assigned to nodes, respectively, for further investigation. This approach's main drawback is that each device has to cover a long distance in the case of a vast cloud. Triandaf and Schwartz in [22] present a method for calculating the motion path for a formation of communicating sensors. The aim of all sensors is to follow a time-dependent concentration gradient. The algorithm allows the sensors to move in space in a non-stationary environment. It can be used for finding and tracking the boundary of any stable surface.
The common feature of the solutions proposed in [23,24] is the concept of nominating a leader of the whole formation. This leader is selected from the group of devices located close to the cloud boundary. The objective of the sensing system described in [23] is to detect and track a wildfire border. It is assumed that network nodes measure the temperature of the air. The Rothermel model describing a spreading fire and the Kalman filter to detect a boundary in a sensing space are used to develop an algorithm for the motion control of all sensing devices. Unfortunately, because the Rothermel model is directly used to calculate all sensing devices' target positions, this computing scheme cannot be quickly adapted to other use cases. In [24], a team of four mobile robots is used to track the boundary of a phenomena cloud. A concentration gradient is used to calculate the target positions of all robots. At every time step, all devices equipped with punctual sensors measure the concentration of substances, and the concentration gradient is calculated. All devices' target positions are calculated by solving an optimization problem with a performance function dependent on the concentration gradient. The application of algorithms for motion planning based on the concentration gradient and the concept of a virtual potential field [7], which are widely used in mobile robotics, is presented in [25]. Both techniques were verified and validated through simulation of underwater oil spill tracking by unmanned underwater vehicles. An alternative approach to tracking heavy gas clouds is to apply accurate models of the phenomena and use computer simulation to predict cloud dynamics and environmental risk. The heavy gas dispersion simulators SLAB [26] and Fluidyn-PANACHE [27] are commonly used. In [1], rapid assessment of exposure to chlorine released from a train derailment is described. However, to estimate the area covered by the heavy gas cloud, additional data and measurements have to be provided, i.e., landform, wind direction and strength, and obstacle presence. The quality of the cloud's boundary estimation is usually sensitive to the accuracy of these data. A review of the literature shows that most of the proposed gas cloud tracking systems built from mobile sensing devices use accurate gas concentrations in a sensing space and centralized schemes for node position calculation. The disadvantages of such an approach are discussed in Section 1. Therefore, our research focused on developing a solution that does not utilize the concentration gradient and implements a distributed mobility model. Problem Formulation and Model of a Mobile Wireless Sensing Network The aim is to design a sensing network composed of intelligent mobile wireless devices to explore an unknown environment to detect the toxic gas cloud, detect its boundaries, and finally, monitor and track the boundary of the evolving and moving cloud. Consider a network comprised of a set V of mobile sensing devices (network nodes) D_i, i = 1, . . . , N, where E denotes the set of active direct connections between pairs of devices D_i and D_j, i ≠ j, and (D_i, D_j) is a bidirectional link. Each D_i is equipped with a radio transceiver with radio range r_t and a positioning system (GPS). Its position is described by a reference point x_i = [x_i1, x_i2, x_i3], which is the location of its antenna. d_ij = ||x_i − x_j|| is the Euclidean distance between D_i and D_j.
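The sketch below is a minimal, illustrative encoding of the network model just described: devices with positions, a radio range r_t that determines the active link set E, and the bivalent sensor reading introduced in the following paragraph. All names and the example threshold are assumptions for illustration, not identifiers from the paper.

```python
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Device:
    ident: int
    x: tuple          # reference point (antenna position), here 2-D for simplicity

    def distance(self, other: "Device") -> float:
        return math.dist(self.x, other.x)

def active_links(devices, r_t):
    """Bidirectional links (D_i, D_j) for every pair of devices within radio range r_t."""
    return {(a.ident, b.ident) for a, b in combinations(devices, 2)
            if a.distance(b) <= r_t}

def reading(concentration: float, g_threshold: float) -> int:
    """Bivalent sensor output: 1 if the safe concentration is exceeded, 0 otherwise."""
    return 1 if concentration > g_threshold else 0

devices = [Device(0, (0.0, 0.0)), Device(1, (100.0, 30.0)), Device(2, (400.0, 0.0))]
print(active_links(devices, r_t=162.86))   # {(0, 1)} for these example positions
print(reading(0.8, g_threshold=0.5))       # 1
```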
Assume that each node D_i can freely change both its position and role in a network according to its knowledge about the environment and the network topology. Each D_i uses a punctual sensor to measure the gas concentration g at a given point in a workspace W every regular time interval ∆t. For the sake of simplicity, it is assumed that each sensor at time t returns one of two values: • g(x, t) = 1: the gas concentration exceeds the threshold value ḡ, • g(x, t) = 0: otherwise. This approach does not use a concentration gradient. Therefore, local disturbances in gas concentration can be ignored. Each D_i can move with a speed v_i ∈ [v_i min, v_i max] in a desired direction. A common approach in motion planning is to apply an artificial potential function V that can be viewed as a landscape where the device moves from a high-value state to a low-value state. A value of this function can be viewed as energy and its gradient as a force. V can be constructed as a sum of repulsive and attractive potentials, i.e., V(d) = V⁻(d) + V⁺, where d denotes a distance between a given device and other devices in a network or obstacles in the working space. The meaning of V⁻(d) and V⁺ is straightforward; the obstacle repels the device, the target point attracts it. Hence, the sum of both these influences draws the device to the target position while deflecting it from obstacles. In our research, we have adopted the network connectivity-maintaining, virtual force mobility model described in [28] to calculate motion trajectories of platforms carrying sensors. In this model, a simple artificial potential function drawing on the Lennard-Jones potential used in liquid crystals is applied to model the interactions between moving devices and to calculate their displacements. The function V_ij models the interactions between two devices D_i and D_j and depends on the reference distance d̄_ij between D_i and D_j. The total potential between D_i and all other m objects in the working space, i.e., other sensing devices, obstacles in the working space and the target position, can be expressed as the weighted sum of the pairwise potentials, where the weighting factor (> 0) of each term determines the importance of the impact of the object j on D_i. Moreover, since the external communication system can be broken or suffer from congestion in the disaster area, it is assumed that the distance between selected, critical pairs of nodes should not be greater than the specified safe distance that guarantees permanent connectivity in a network. The task is to calculate the optimal positions of all D_i, i = 1, . . . , N at given time steps for a given application scenario. From Equation (4), it is obvious that in the reference positions of all D_i, namely such that d_ij = d̄_ij, an optimal network topology is obtained. An Overview of a Method for Heavy Gas Cloud Monitoring and Tracking The application scenario considered in this paper is heavy gas cloud detection and monitoring. Figure 1 shows a typical heavy gas cloud. Due to its dispersion characteristics, heavy gas forms a cloud with the largest cross-section at its base. In situational awareness tasks, determining the extent of contamination and delineating the area at risk is critical. Moreover, in most heavy gas propagation simulators, the cloud moves over a flat area, and local terrain and obstacles, which also cause changes in gas density, are ignored.
Therefore, although our virtual force motion model allows motion trajectories to be determined in three-dimensional space, the research presented in this paper focuses on a two-dimensional, obstacle-free workspace. Consider the self-organizing network defined in Equations (1)-(5). Assume the sensing devices (network nodes) D_i (i = 1, . . . , N) do not have any information about the environment, particularly the cloud location. However, they can exchange data with each other about their positions and the measured gas concentrations. Based on this information, they autonomously determine the direction and speed of movement in the workspace. Limiting the sensor readings g(x, t) to two values (0 and 1) allows fast detection of the cloud boundary: a change of the reading value indicates crossing the boundary. Our heavy gas cloud monitoring method is composed of three main steps executed sequentially.
• Stage 1: workspace exploration to search for a gas cloud.
• Stage 2: cloud exploration to detect its boundary.
• Stage 3: permanent exploration and tracking of the cloud boundary.
All network nodes perform gas concentration measurements at regular intervals ∆t and exchange information about their locations and current sensor readings. Then, based on the obtained data, an optimal new target position of each device is calculated. Stage 1: Gas Cloud Detection A team of N sensing devices D_i, i = 1, . . . , N is used to explore the sensing scene W to search for a gas cloud. The device D_1 is responsible for maintaining connectivity with the base station (BS), the system's central station. Other devices enable constant communication with D_1 using multi-hop transmission. Hence, all D_i, i = 1, . . . , N create a coherent searching network that maintains continuous connectivity with the BS. The first stage begins with setting a randomly selected target point c in the workspace. The device closest to the point c is elected to be a temporary leader D_leader of the team. D_leader is forced to move in the direction of c, while the other nodes are forced to follow D_leader. A new position of D_leader is calculated every ∆t by solving an optimization problem with a reference distance d̄_ic > 0 close to 0. The other devices follow D_leader; they move in formation. The process is repeated until the cloud is detected by at least one sensing device D_k ∈ V. Then D_k continues its movement, and the remaining devices are attracted to D_k. Their displacements are calculated analogously, with a positive reference distance d̄_ik ≈ 0. The cloud can move and change its shape over time. All D_i measure the gas concentration every ∆t and, in the case of g(x_i, t) = 0, they move towards the cloud. The first stage ends when the sensors of at least two devices detect a gas concentration higher than ḡ (Figure 2a). Then, the location Ψ of the gas cloud center is estimated based on data received from the devices located inside the cloud, i.e., the set V⁺. The number of devices with positive sensor readings increases in time, and Ψ is updated based on new data received from all these devices. Stage 2: Gas Cloud Boundary Detection The second stage aims to extract the cloud from the environment, namely, to detect its boundary, not to explore its inside. The boundary is defined as a curve dividing the workspace W into two regions, respectively, with a gas concentration greater and lower than the given concentration threshold ḡ.
The optimal solution of Stage 2 is an even coverage of the cloud boundary with sensors. Hence, the target positions of all devices should be on the cloud boundary. To solve this task, we have to push the sensing nodes apart, repulse them from the center of the cloud Ψ, and force them to move and take the target positions. The complexity of the problem grows in the case of vast clouds and an insufficient number of sensing devices. We developed and validated through simulations two computing schemes, a centralized and a distributed one. In the centralized method, one node in the network is nominated as the network head D_H ∈ V. Each device D_i, i = 1, . . . , N repetitively calculates its target position in W by solving an optimization problem in which the weighting factors (≥ 0) determine the importance of the impact of, respectively, the center of the cloud Ψ and the neighboring node D_j on the new position of D_i. Here d_iΨ and d_ij are the real Euclidean distances between x_i and, respectively, Ψ and x_j after a network transformation, while d̄_iΨ and d̄_ij are the reference distances between x_i and, respectively, Ψ and x_j. These reference distances are repetitively calculated by the leader of the network D_H. N_i is the set of neighboring nodes of the node D_i and r_t is the radio range. In contrast to the centralized method, a clustering-based technique is proposed, in which the network is divided into M clusters V_m, m = 1, . . . , M. In each cluster V_m one node is nominated as the cluster head D_Hm ∈ V_m. Moreover, we select one cluster head to be the head of the whole network, D_H ∈ {D_H1, . . . , D_HM}. All cluster heads are responsible for maintaining permanent connectivity with D_H. All members of a given cluster have to maintain permanent connectivity with their cluster head. Hence, each network node repetitively calculates its target position in W by solving an optimization problem in which a weighting factor determines the importance of the impact of the centroid of a cluster V_k, and d_ik is the actual Euclidean distance between x_i and the centroid c_k of the k-th cluster. IC_m denotes the set of indexes of the two clusters closest to the m-th cluster that contains D_i. d̄_ik is the average distance between the centroids of the two clusters with indexes from IC_m, increased by a slight distance margin w_2. The calculation scheme and the algorithm for selecting clusters for the set IC_m are described in detail in [11]. Moreover, the authors of [11] present the results of a comparative study of the two computing schemes: centralized and distributed. After numerous simulations, some improvements to the cluster-based algorithm that speed up the target position calculations and do not affect the final result were introduced. Since the task of the cluster heads D_Hm, m = 1, . . . , M is limited to maintaining communication, the optimization problem (12) that has to be solved by each D_Hm can be simplified. A sample result of the clustering-based algorithm is depicted in Figure 2b. The paper [29] defines criteria for sensing network topology assessment and describes the algorithm that can be used to detect a temporarily optimal topology and complete the second stage of the computing scheme. In general, Stage 2 is completed when the devices are almost evenly distributed on the boundary of the cloud (Figure 2c). The set of these devices' locations can be used to discover a given gas cloud boundary's shape.
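To make the virtual-force idea behind these target-position updates concrete, the sketch below performs one gradient-descent style displacement on a sum of pairwise potentials that are minimal at the reference distances. The (d̄/d)⁴ − 2(d̄/d)² form is an assumed Lennard-Jones-like stand-in for the potential of the cited mobility model, and the reference distances and weights are invented for the example only.

```python
import numpy as np

def pair_potential(d, d_ref):
    r = d_ref / max(d, 1e-9)
    return r**4 - 2 * r**2              # minimum at d == d_ref

def total_potential(x, anchors):
    """anchors: list of (position, reference distance, weight) triples."""
    return sum(w * pair_potential(np.linalg.norm(x - p), d_ref)
               for p, d_ref, w in anchors)

def step(x, anchors, lr=0.5, eps=1e-3):
    """One numeric gradient-descent step toward the potential minimum."""
    grad = np.array([(total_potential(x + eps * e, anchors)
                      - total_potential(x - eps * e, anchors)) / (2 * eps)
                     for e in np.eye(2)])
    return x - lr * grad

# Example: a device repelled from the cloud centre Psi and kept near a neighbour.
psi, neighbour = np.array([0.0, 0.0]), np.array([60.0, 0.0])
anchors = [(psi, 80.0, 1.0), (neighbour, 40.0, 1.0)]   # assumed reference distances
x = np.array([30.0, 10.0])
for _ in range(200):
    x = step(x, anchors)
print(np.round(x, 1))   # settles where both reference distances are roughly satisfied
```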
Stage 3: Gas Cloud Boundary Tracking The final aim is to explore and track the gas cloud and report changes to its boundary over time. Let us assume that both nodes and clusters are almost evenly dispersed over the area covered by the phenomenon (Figure 2c), and Stage 3 starts. Each cluster head D_Hm, m = 1, . . . , M selects one node from the m-th cluster's members. Let us denote it D_Pm, where D_Pm ∈ V_m and m is the cluster number. This node aims to move along the boundary over time to discover its current shape (Figure 2d). Other members of cluster m follow D_Pm, m = 1, . . . , M. They serve as a communication link to the network head D_H, which collects data from all network nodes. Node D_Pm Selection Let us consider the cluster V_m. One can distinguish two situations (Figure 3). In the first, some members of the m-th cluster are located on the boundary (Figure 3a). The set of nodes located on the boundary at time t is defined with respect to a fixed time period ∆t_out, and the node most advanced in the clockwise direction is selected to play the role of D_Pm. Otherwise, when all of the m-th cluster's members are located inside the cloud, the node furthest from the cluster head D_Hm is nominated for D_Pm (Figure 3b). The distance between a given node D_i ∈ V_m and D_Hm is calculated in hops, i.e., the number of relay nodes. Motion Trajectory Calculation Both D_Pm, m = 1, . . . , M and the other members of the clusters repetitively calculate their target positions in W by solving optimization problems with performance measures defined according to their roles in the sensing network. Let us start with the displacement calculation for the leaders of the tracking teams D_Pm, m = 1, . . . , M. Two points that influence these nodes' motion trajectories can be defined, i.e., the centroid of the cloud Ψ (9) and the point χ ∈ W that forces the node to follow the cloud boundary. Hence, we obtain two functions V_PmΨ and V_Pmχ that model the interactions between each D_Pm and, respectively, Ψ and χ. The method for determining the point χ is shown in Figure 4. Two cases are considered, the first with the current position of D_Pm outside the cloud (Figure 4a) and the second with D_Pm inside the cloud (Figure 4b). Finally, a new position of D_Pm is calculated every ∆t by solving an optimization problem in which d_PmΨ and d_Pmχ denote the actual Euclidean distances between D_Pm and, respectively, the estimated centroid of the cloud Ψ and the point χ, and d̄_PmΨ and d̄_Pmχ are the reference distances between D_Pm and the points Ψ and χ. Note that the cloud is in motion. Each node D_Pm, m = 1, . . . , M is forced to move inside or outside the cloud depending on the sensor reading. Hence, the motion trajectories oscillate around the cloud boundary. To force these oscillations, the reference distances d̄_PmΨ in Equation (16) are defined depending on whether D_Pm belongs to V⁺, the set of devices located inside the cloud. The other members of the clusters D_i ∈ V_m, m = 1, . . . , M (including D_Hm) aim to maintain connectivity between the nodes D_Pm and D_H; they are relay nodes in the communication between D_Pm and D_H. It is evident that each node D_i ∈ V_m, D_i ≠ D_Pm, is forced to follow D_Pm. Hence, its new position is calculated every ∆t by solving an optimization problem in which d_iPm denotes the Euclidean distance between D_i ∈ V_m and D_Pm, d̄_iPm is the reference distance between D_i and D_Pm, and r_t is the radio transmission range.
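The oscillating tracking motion described above can be illustrated with a very small sketch: the bivalent reading decides whether the tracking leader D_Pm is pulled toward or pushed away from the estimated centre Ψ, while the point χ drags it along the boundary. The step sizes and the perpendicular construction of χ below are illustrative assumptions, not the formulation used in the paper.

```python
import numpy as np

def next_position(x, psi, inside: int, step_radial=3.0, step_tangent=5.0):
    """One tracking step for D_Pm given its bivalent sensor reading."""
    radial = (psi - x) / np.linalg.norm(psi - x)        # unit vector toward Psi
    # Outside the cloud (reading 0): move inward; inside (reading 1): move outward.
    radial_move = step_radial * radial if inside == 0 else -step_radial * radial
    # chi is placed ahead of the node, perpendicular to the radial direction
    # (an illustrative choice so that the node keeps circulating the cloud).
    tangent = np.array([radial[1], -radial[0]])
    chi = x + step_tangent * tangent
    return x + radial_move + (chi - x)                  # combined displacement

x, psi = np.array([100.0, 0.0]), np.array([0.0, 0.0])
for reading in [1, 1, 0, 0, 1, 0]:                      # alternating boundary crossings
    x = next_position(x, psi, reading)
    print(np.round(x, 1))
```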
Weighting Factor Calculation Several parameters have to be identified in the optimization task (16). To simplify the model, we assumed without loss of generality d̄_Pm = d̄_Pmχ = d̄_PmΨ and set the weighting factor associated with Ψ to 1. Hence, the optimization problem (16) is reduced to a simpler one, and only the weighting factor associated with χ must be tuned. The impact of this factor on the recommended motion direction of the node D_Pm is depicted in Figure 5. The recommended motion direction should fit the shape of a given boundary. Figure 6 presents examples of various cloud boundary shapes and the corresponding directions of D_Pm displacements. Assume τ_z, z = 1, 2, . . . denote the time stamps of D_Pm detector state changes (g(x_Pm, τ_z) ≠ g(x_Pm, τ_z − ∆t)) and τ_z+1 − τ_z = k_z · ∆t, where k_z is the number of sensor readings taken since the last change of the detector state. Five methods for updating the weighting factor for χ in Equation (19) every ∆t, i.e., at time steps t = τ_z + ∆t, τ_z + 2∆t, . . . , τ_z + (k_z − 1)∆t, z = 1, 2, . . ., were proposed and tested. The input data contain a list of historical values of this factor and a list of historical bivalent sensor readings. The following items present the algorithms for adjusting the factor in the periods between subsequent sensor reading changes, i.e., in the intervals τ_z+1 − τ_z, z = 1, 2, . . ..
Variant I: A constant value of the weighting factor over the whole tracking horizon.
Variant II: Decreasing the value of the weighting factor over time (the closer the device is to the cloud boundary, the slower it moves). The new value of the factor is calculated from the previous one, starting from a predefined value, with a scaling coefficient l ∈ (0, 1).
Variant III: The initial value of the factor for time stamps τ_z, z ≥ 3, is calculated as a linear combination of a predefined constant value and the value of the factor at the time stamp of the same change of sensor state in the past, where α ∈ (0, 1) is an experimentally adjusted model parameter and the second term is the value of the factor calculated at time stamp τ_z−1 − ∆t. The factor is then decreased according to Equation (20).
Variant IV: The initial value of the factor equals the predefined constant at time stamps τ_z, z < 3. At time stamps τ_z, z ≥ 3, the initial value is calculated from the length of the previous interval, τ_z−1 − τ_z−2 = k_z−2 · ∆t, and a constant b > 0 used to increase the speed of node D_Pm. The factor is then decreased according to Equation (20).
Variant V: A modification of Variant IV in which the fixed b is replaced with a variable b. This variable is updated at every τ_z, z = 1, 2, . . ., from a fixed increment ∆b > 0 and the value of b calculated at τ_z−1 − ∆t. Finally, the initial value of the factor at time stamps τ_z, z ≥ 3, is calculated as in Variant IV, and the factor is then decreased according to Equation (20).
Nonconvex Boundary Tracking A problem arises when tracking a cloud boundary with concave sections. The possible target positions are limited to the quarter determined by the range of motion of D_Pm (Figure 7a). Therefore, it may take a long time to cover the border accurately, even if the value of the weighting factor is close to zero (Figure 7b). Hence, to improve the quality and speed up the tracking of concave sections, the following correction procedure is proposed. When the weighting factor drops below a predefined threshold value, the point χ is relocated.
Its position is replaced by a new one calculated by rotating the original one by the angle rπ/2, r ∈ {−3, −2, −1, 1, 2, 3}, in a clockwise direction relative to the position of D_Pm. Then, the optimization problem (19) is solved, and new positions of the leaders of the tracking teams D_Pm, m = 1, . . . , M are calculated. The parameter r is calculated according to Algorithm 1. Experimental Study The computing scheme for detecting and tracking heavy gas clouds was tested and evaluated through simulation. Many of the experiments were designed and conducted to tune the parameters of the developed algorithms for sensing devices' motion planning and to evaluate the efficiency of the developed sensing networks. The sensing networks comprising intelligent wireless mobile devices were implemented in the MobASim simulation platform [30]. MobASim is a software environment for prototyping and simulation of wireless ad hoc networks in two-dimensional space. For the experiments, a simple box model of an instantaneous heavy gas cloud [31], which hazard analysts widely adopt, was implemented. It provides a convenient and efficient method for atmospheric dispersion modeling. Quality and Performance Metrics Three metrics to measure the quality and efficiency of the developed algorithms were defined, i.e., (i) the accuracy of the boundary estimation, (ii) the time used to discover the boundary, and (iii) the frequency of boundary crossing. Compact coverage of the boundary of the cloud by the sensors is the most crucial requirement. To evaluate this requirement, the widely used accuracy measure acc = (TP + TN)/(TP + TN + FN + FP) (Equation (25)) was adopted. Let us assume that t_circle denotes the lap time of the gas cloud by the sensing device D_Pm. Subsequent target positions of D_Pm are calculated every ∆t; hence t_circle = L · ∆t, where L is the number of calculated positions. Let us use linear interpolation. The input data set consists of the L target positions of D_Pm. Finally, we can approximate the cloud boundary and use it to estimate the area covered by the cloud. Figure 8 shows the result for a circular gas cloud covering part of the given working space. Then, we can calculate the values of TP (true positive), TN (true negative), FN (false negative) and FP (false positive) in Equation (25). TP and TN denote the area correctly recognized by our sensing devices as, respectively, inside and outside the cloud, while FN and FP denote incorrectly recognized areas. It is evident that the greater the acc measure, the better the approximation of the boundary. In general, it is desirable for the sensing device to cross the boundary as frequently as possible. This means that the devices stay within a narrow margin of the boundary. Hence, the corresponding measure f_crossing, the frequency of boundary crossings, can be defined. The linear interpolation better approximates the boundary of the cloud when f_crossing is higher. Finally, the cloud boundary exploration time t_circle should be as short as possible. An indicator q that aggregates all described metrics, with parameters w_t > 0, w_f > 0 and w_acc > 0 indicating the importance of the given measures, has been introduced to simplify the sensing network performance evaluation. Scenario Description The testing scenario was a synthetic network comprised of 16 sensing devices designed to detect and track the chlorine cloud resulting from an instantaneous leak.
All devices were equipped with radio transceivers with a maximum transmit power equal to −10 dBm (transmission range r_t = 162.86 m), sensors measuring the gas concentration every ∆t = 1 s, and moving platforms with a maximum speed of 15 m/s. In all experiments, the network was divided into four autonomous clusters. In general, the subnetworks tracked the cloud independently, and the only binding element was the center of the cloud Ψ. The following values of the parameters in the indicator q (27) were fixed: w_t = 3, w_f = 1 and w_acc = 2. Five simulation scenarios were tested: Cloud 1, a gas cloud with a circle-shaped border; Cloud 2, a gas cloud with an ellipse-shaped border; Cloud 3, a gas cloud with a mixed border, partially circle-shaped and partially ellipse-shaped; and two gas clouds, Cloud 4 and Cloud 5, with nonconvex borders. The average values of the performance metrics acc, f_crossing, t_circle and q calculated by the leaders of all clusters are presented in the tables. All figures illustrate the motion trajectories of the device D_Pm from the same selected cluster. Consecutive points show successive D_Pm positions in the workspace (800 × 800 m). At these locations, D_Pm measured the gas concentration. Colors indicate the values of the weighting factor for χ used to calculate these points. Various shades of five colors are used: violet (factor in [0.1, 2.5)), blue ([2.5, 5.0)), green ([5.0, 7.5)), yellow ([7.5, 10)), red (≥ 10). Model Parameters Tuning - Circle and Ellipse Shaped Clouds The first set of experiments was designed to examine and compare the performance and efficiency of the developed sensing network for various values of the parameters (the predefined constant factor value, b, ∆b, l and α) that have to be arbitrarily selected by the user, with the weighting factor for χ in Equation (19) calculated according to variants I-V described in Section 4.3.3. The aim was to test the sensitivity of the developed boundary-tracking algorithm to these parameters and to determine the values of all mentioned parameters and the variant of the factor-updating procedure that guarantee high-quality boundary estimation. The results of simulations conducted for two different shapes of gas clouds, namely scenarios Cloud 1 and Cloud 2, are presented in Tables 1-4. Tables 2 and 4 show the values of all metrics described in Section 5.1; the best ones are marked in green. Figures 9 and 10 depict the motion trajectories (positions in time) of the selected device D_Pm calculated for various variants of the factor-updating procedure for circle-shaped and ellipse-shaped clouds. Table 1. Values of parameters used to calculate the optimal weighting factor for χ; indicator q, simulation scenario Cloud 1. Table 3. Values of parameters used to calculate the optimal weighting factor for χ; indicator q, simulation scenario Cloud 2. It can be seen that, in general, the best values of all metrics were obtained for two variants of the weighting factor updating, namely IV and V, for both simulation scenarios. However, variant IV was the best with respect to the indicator q aggregating all metrics, although the time used to circle the cloud was slightly longer than in the case of variant III. The optimal values of the factor are lower for the ellipse-shaped cloud boundary. A higher value of the factor increases the time the device needs to return to the cloud region when the radius of curvature of the cloud decreases rapidly. Moreover, it can be observed that each device needs more time to circle the ellipse-shaped cloud than the circle-shaped one.
In general, tracking an ellipse-shaped cloud is a more difficult task. Model Parameters Adjusting-Convex Clouds The second set of experiments was designed to check how the model parameters tuned for networks tracking gas clouds with circle- and ellipse-shaped boundaries can be adjusted to networks tracking gas clouds with a mixed border, partially circle-shaped and partially ellipse-shaped (scenario Cloud 3). Two series of experiments were conducted. The aim was to compare the quality of boundary tracking in two cases: (i) the weighting factor χ in Equation (19) calculated for the values of parameters collected in Table 1, and (ii) χ calculated for the values of parameters collected in Table 3. The results are presented in Tables 5 and 6. Table 6. Successive D P m positions during cloud boundary tracking calculated for two variants of the χ updating procedure and the two sets of parameters from Tables 1 and 3; simulation scenario Cloud 3 (panels (a)-(j)). As can be seen, all results of boundary tracking are, in general, acceptable. However, better accuracy of the boundary estimation was obtained using the values of parameters tuned for the ellipse-shaped cloud, as clearly illustrated in Table 6. Similar to the previous experiments described in Section 5.3, the highest value of the indicator q was obtained for variant IV of the χ updating procedure. The time taken to circle the gas cloud was the shortest in the case of variant III. However, it should be noted that much worse results than those presented in Section 5.3 were achieved for variant V. It can be concluded that the presented approach is sensitive to the model parameters. Hence, the number of parameters should not be too large, as it is difficult to identify them for various shapes of clouds. Variant V can be successfully applied to tracking clouds with a large radius of curvature. Model Parameters Adjusting-Nonconvex Clouds The experiments discussed in the two previous sections were conducted for convex clouds. Let us consider a cloud with an irregular, nonconvex boundary, as presented in Figure 11, Cloud 4. A series of simulations was conducted for three variants of the χ updating procedure and the parameters from Table 1. Table 7 shows the values of the metrics obtained for the three variants of χ updating. Worse results were obtained than in the scenarios with convex boundaries. It can be concluded that variant V of the χ updating procedure is suitable only when the boundary has a shape with a constant radius of curvature. In the case of the two other variants, the results are very similar, and the decision to choose one of them requires prioritizing the criteria. When fast detection of the boundary is crucial, variant III is recommended. In addition, simulations were performed in which χ was calculated for the parameters from Table 3. The results were slightly better, i.e., acc = 0.962 and q = 3.2. The last series of experiments aimed to test the efficiency of the correction procedure described in Section 4.4. The results of simulations for the application scenario Cloud 5, i.e., a cloud with multiple concave parts of its boundary, are presented in Table 8 and Figure 12. The weighting factor χ was calculated according to Equation (22). The result shown in Figure 12b can be compared with the one obtained when the correction for concave parts of the boundary is disabled (Figure 12a).
It can be seen that the correction procedure significantly improves the accuracy of cloud boundary estimation for very irregularly shaped clouds. Conclusions The paper summarizes the research results concerned with designing and developing distributed sensing systems for heavy gas cloud monitoring. This system comprises autonomous unmanned vehicles, equipped with punctual sensors, radio transceivers and GPS modules that spontaneously create a network of devices that adapt to achieve goals. The paper describes a three-stage strategy for boundary detection, estimation and tracking. In this approach, the optimal motion trajectories for all sensing devices' discovering the current shape of the boundary are calculated based on the data collected from sensing vehicles and the models for target position calculation that incorporates artificial potential functions. The effectiveness was tested through extensive simulations. The presented results of experiments show that the design of an efficient sensing system for phenomena clouds should account for trade-offs between detection accuracy and computational complexity, measurement quality and equipment cost, communication reliability and network load. The challenge for the designer of this type of network is to develop efficient algorithms for online calculating of motion trajectories of sensing nodes and determine the optimal values of model parameters. This paper presents such an algorithm and examines the quality of various computing schemes for its parameter tuning. Simulation experiments showed the sensitivity of the proposed system to the model parameters and the need to select a procedure for tuning the coefficients of the optimization task to a particular scenario. Nevertheless, even for nonconvex clouds with irregular shapes, satisfactory results were obtained for calculated values of model parameters. Moreover, the simulation results demonstrated that using only a rough measurement to indicate that the threshold concentration value was exceeded can detect and track a gas cloud boundary. Calculating new sensor positions only based on the information about exceeding the safe gas concentration value makes our sensing system less sensitive to the quality of the gas concentration measurement. Because of the inaccuracy of actual sensors, such an approach may be better suited to real-world applications. Moreover, it can be easily adapted to detect other phenomena clouds, namely oil spill and wildfires, etc. To sum up, the simulation results corroborate our analysis and confirm that mobile ad hoc networks can be successfully used to monitor dynamic phenomena, create situation awareness and support rescue teams. The research in that domain will be continued. The future work will focus on the experiments conducted on more complex and realistic scenarios with more detailed toxic gas dispersion models and workspace with obstacles. The goal is to test the sensitivity of our system on the speed of boundary evolvement and the impact of the obstacles on the quality of gas concentration measurement, radio communication and, consequently, the effectiveness of our monitoring system. Moreover, we plan to develop and investigate a version of the system to operate in a three-dimensional workspace. It will allow extending the possibilities of applications to monitor other phenomena such as oil spills, wildfire, moving groups of people. 
Notation: Ψ - the estimated gas cloud center and its weighting factor; χ - a point forcing a device to follow the cloud boundary and its weighting factor.
10,834.6
2021-05-23T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Analytic model of bunched beams for harmonic generation in the low-gain free electron laser regime One scheme for harmonic generation employs free electron lasers (FELs) with two undulators: the first uses a seed laser to modulate the energy of the electron beam; following a dispersive element which acts to bunch the beam, the second undulator radiates at a higher harmonic. These processes are currently evaluated using extensive calculations or simulation codes which can be slow to evaluate and difficult to set up. We describe a simple algorithm to predict the output of a harmonic generation beamline in the low-gain FEL regime, based on trial functions for the output radiation. Full three-dimensional effects are included. This method has been implemented as a Mathematica R (cid:13) package, named CAMPANILE, which runs rapidly and can be generalized to include effects such as asymmetric beams and misalignments. This method is compared with simulation results using the FEL code GENESIS, both for single stages of harmonic generation and for the LUX project, a design concept for an ultrafast X-ray facility, where multiple stages upshift the input laser frequency by factors of up to 200. I. INTRODUCTION Many proposed X-ray free electron lasers (FELs) are designed to produce radiation starting from the shot noise of an electron beam. This is the self-amplified spontaneous emission (SASE) mechanism. There is much interest in developing a practical method for using seeded electron beams to produce X-ray radiation, rather than relying on SASE, because seeded FELs offer more control over the timing and pulse structure. The seed can be a laser field which is then amplified by the FEL instability, or it can be an initial current variation (bunching) of the electron beam. The second method has the advantages that high output power can be produced in the low-gain regime, and that the output wavelength can be at a harmonic of the initial perturbation [1,2]. Through this harmonic generation technique, interactions of an electron beam with a visible or UV laser can be used to generate photons at much higher energies. The possible use of multiple stages of such harmonic generation is an area of active study, for example in the LUX [3] conceptual design for ultrafast X-ray production. Here, we present an analytic model for predicting and optimizing the FEL output from an idealized, prebunched electron beam, with emphasis on applications towards harmonic generation. While many previous examinations of seeded electron beams in an FEL either assume the laser field structure in advance [4,5], or rely on summations over single-particle radiation fields [6], this formalism uses a trial-function approach to obtain simple analytic prescriptions for determining the output laser field. These expressions only apply to FELs in the low-gain regime, but include three-dimensional dynam- * Also at University of California, Berkeley, Department of Physics. ics and physical effects. These calculations have been implemented using scripts in Mathematica R [7], as a package named CAMPANILE. This allows for the rapid calculation of the dominant mode produced by a seeded electron beam in a low-gain FEL; it is also a means of optimizing the FEL and beam parameters. Under certain circumstances, this method reduces to fairly simple algebraic expressions for the power produced by a single stage of harmonic generation, with a straightforward physical interpretation. 
The computation time for this method of analysis is kept low by iterating over calculations where, for each iteration, only a single laser mode is considered; this is in contrast with many numerical computations where three-dimensional effects are modeled by calculating the laser field on a grid [8]. This theory is benchmarked using the GENESIS [9] simulation code. The methodology which was implemented here for the simplest case can be extended to more general beam geometries and mechanisms for seeding. II. ANALYTIC MODEL We consider an electron beam that already has a seeded perturbation in the beam current and thus generates a radiation field as it passes through an undulator. A schematic is shown in Figure 1. The electric field which exits from the undulator is taken to be a simple Gaussian mode, but is otherwise kept arbitrary: where G(x, y, s) ≡ Z R Z R + i(s − s 0 ) exp − 1 2 k(x 2 + y 2 ) Z R + i(s − s 0 ) (2) characterizes the structure of the mode. The laser wavelength is λ = 2π/k, the frequency ω = ck, and Z R is the Rayleigh length. The longitudinal coordinate s represents the position along the undulator, and at s = s 0 the laser is at its waist with spot size (Z R /2k) 1/2 (in terms of laser power). It is possible to generalize Eq. (2) to include higher-order transverse modes. The quantities Z R and s 0 are set by the parameters of the FEL and do not vary with s. This field is intended to characterize only the output from the undulator, and so, in general, the mode structure must be chosen to correspond to a vacuum field solution. The temporal variation of the radiation envelope is assumed to be slow compared to other time scales, such as the total time shift between the laser and the electron beam through the undulator. Thus, neither phase noise nor the longitudinal shape of the envelope of the laser field are considered, and the radiation properties are taken to depend only on the local electron beam properties. where a beam, which has a longitudinal distribution that is modulated at one wavelength, then passes through an undulator where it radiates into a harmonic wavelength. Within a planar undulator, the change in energy of a particle is given by where the transverse velocity of electrons due to the undulator field is Here, the undulator period is λ u = 2π/k u , the normalized field strength is a u = eB 0 /mck u , and B 0 is the RMS value of the on-axis undulator field. The field on axis is taken to be B y = √ 2 B 0 cos(k u s). The forward motion of a single electron can be described as wherev z is the forward velocity averaged over an undulator period, and the last term arises from particle motion in the planar undulator. The simplification made here for the linear regime is that the total energy lost by the electron beam at the end of the undulator can be calculated properly even if only the radiation mode of Equation (2), corresponding to the actual output radiation, is considered. Interactions with all orthogonal modes will result in a net cancellation by the end of the undulator. It still remains to determine the proper coefficients to fully characterize the output mode; the method for accomplishing this will be shown in Section III. The corresponding equation for the evolution of energy is then where the normalized (complex-valued) laser field amplitude is Averaging over an undulator period yields [10] dγ where JJ(ξ) ≡ J 0 (ξ) − J 1 (ξ), ξ ≡ ka 2 u /4k u γ 2 , and θ ≡ ks − ωt + k u s is the phase of the electron relative to a plane wave at the beat wavelength. 
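As a quick numerical reference for the quantities just introduced, the sketch below evaluates the Gaussian mode G(x, y, s) of Eq. (2) and the planar-undulator coupling factor JJ(ξ), using for ξ the on-resonance form quoted later, ξ = a_u²/(2(1 + a_u²)). The numerical values (λ = 50 nm, Z_R = 1.12 m, s_0 = 1.20 m, a_u = 6.686) are taken from the first harmonic-generation stage discussed in Section V and serve only as an example.

```python
import numpy as np
from scipy.special import jv

def gaussian_mode(x, y, s, k, z_r, s0):
    # Fundamental Gaussian mode of Eq. (2):
    # G = Z_R / (Z_R + i(s - s0)) * exp(-k (x^2 + y^2) / (2 (Z_R + i(s - s0)))).
    q = z_r + 1j * (s - s0)
    return (z_r / q) * np.exp(-0.5 * k * (x**2 + y**2) / q)

def coupling_JJ(a_u):
    # Planar-undulator coupling factor JJ(xi) = J0(xi) - J1(xi), with the
    # on-resonance argument xi = a_u^2 / (2 (1 + a_u^2)) (a_u is RMS-normalized).
    xi = 0.5 * a_u**2 / (1.0 + a_u**2)
    return jv(0, xi) - jv(1, xi)

lam = 50e-9                       # 50 nm output radiation
k = 2 * np.pi / lam
z_r, s0 = 1.12, 1.20              # Rayleigh length and waist position (Section V)
print(abs(gaussian_mode(0.0, 0.0, s0, k, z_r, s0)))        # 1.0 at the waist
print(abs(gaussian_mode(0.0, 0.0, s0 + z_r, k, z_r, s0)))  # 1/sqrt(2) one Z_R away
print(coupling_JJ(6.686))         # coupling factor for the first-stage undulator
```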
The ponderomotive phase is usually defined as the sum of θ and the phase of the electric field, but in this paper we find it convenient to keep the components separate, because we are neglecting trapping due to self-fields. To leading order in 1/γ 2 , θ evolves according to wherev 2 ⊥ is the square of the transverse velocity averaged over an undulator period. Assuming the betatron period is much longer than the undulator period, the contributions tov 2 ⊥ from these two types of motion add in quadrature. While the displacements caused by the undulator field, of order a u /γk u , are negligible, the angles can be important for modifying the phase slippage, since they will be compared to 1/γ 2 . The betatron motion has several effects, because both the laser fields and the undulator fields vary with transverse position, and because of the change in path length. The angles due to betatron motion are typically smaller than those due to the undulator, because the betatron wavelength is much longer than the undulator period, but can also affect phase slippage. Thus, the electron beam emittance induces a spread in dθ/ds which can adversely affect the performance of the FEL. The undulator field increases with strength off-axis, which generates focusing of the electron beam. Here, we consider a planar wiggler with curved pole faces, so as to generate equal focusing in both planes, as described by E.T. Scharlemann [11]. The matched beta function for the undulator is then given by β u ≡ √ 2 γ/a u k u . The corresponding transverse actions for particle motion in the undulator, J x and J y , are given by: In the presence of external focusing, J x and J y will have a different functional form. In Reference [11], θ is shown to evolve according to where we define k = k r +δk, and the resonant wave vector is The detuning can be expressed, equivalently, in terms of δk or as a shift, δa u , in undulator strength. Using the resonance condition, the argument of the Bessel functions in Eq. (8) is ξ = (1/2)a 2 u /(1 + a 2 u ). Finally, there is the expression for the intensity of the laser field, assuming the power given up by the electron beam goes into a single mode. For the mode defined by Eq. (2), the power is where r e = e 2 /(4πǫ 0 mc 2 ). By conservation of energy, the change in power is given by where I is the electron beam current and the brackets indicate an average over the particle distribution: dγ/ds ≡ dX f (X)(dγ/ds). The termX is used as a shorthand to represent the full set of 6D phase space variables, and the distribution function f (X) is normalized so that dX f (X) = 1. The current, I, is smoothed out to average over perturbations on the time scale of the laser frequency, and is taken here to be a constant. Noting that P L scales as |a L | 2 , we have where I A ≡ ec/r e = 4πǫ 0 mc 3 /e ≃ 17 kA. The result in Eq. (15) is simply the electric field generated by the net bunching of the electron beam; we wish to generalize this to include the possibility of having no seed pulse, but having, instead, a prebunched beam. Using the identity e iΦ0 = a L /|a L |, and the relation d|a L |/ds = ℜe[(a L /|a L |)da * L /ds), Eq. (15) can be expanded to (16) This suggests that the two terms within the real part are equivalent as well; taking the complex conjugate of the resulting equation yields The above average is a generalization of the usual bunching parameter, b ≡ exp(−iθ) . 
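A short numerical check of the quantities defined above is given below. The matched beta function β_u = √2 γ/(a_u k_u) is taken directly from the text; for the resonant wave vector, whose explicit equation is omitted in this excerpt, the sketch assumes the standard planar-undulator resonance for an RMS-normalized undulator parameter, k_r = 2γ²k_u/(1 + a_u²). With the 50 nm stage parameters quoted in Section V, this reproduces both the 50 nm output wavelength and the matched β = 16.28 m.

```python
import numpy as np

def resonant_wavelength(lambda_u, a_u, gamma0):
    # Assumed standard resonance with an RMS-normalized a_u:
    # lambda_r = lambda_u (1 + a_u^2) / (2 gamma^2), i.e. k_r = 2 gamma^2 k_u / (1 + a_u^2).
    return lambda_u * (1.0 + a_u**2) / (2.0 * gamma0**2)

def matched_beta(a_u, k_u, gamma0):
    # Matched beta function for the curved-pole-face planar undulator,
    # beta_u = sqrt(2) gamma / (a_u k_u), as given in the text.
    return np.sqrt(2.0) * gamma0 / (a_u * k_u)

gamma0 = 6067.0
lambda_u = 0.08                    # 8 cm undulator period (first stage, Section V)
k_u = 2 * np.pi / lambda_u
a_u = 6.709                        # resonant undulator strength
print(resonant_wavelength(lambda_u, a_u, gamma0) * 1e9, "nm")   # ~50 nm
print(matched_beta(a_u, k_u, gamma0), "m")                      # ~16.3 m
```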
The generalized bunching parameter will be defined as The temporal variation of B(s) at fixed s is neglected, assuming that it is small at the scale of the relative shifts in time caused by phase slippage. To evaluate the output radiation, it is necessary to calculate the generalized bunching parameter, B(s). In the low-gain FEL regime, the radiation field produced by the beam overall is assumed to have a small effect on singleparticle orbits, and free-streaming particle dynamics can be used. Thus, the initial particle distribution is sufficient to perform this calculation. As an explicit example, we consider the case of harmonic generation, as in the LUX conceptual design. This configuration uses a seed laser to generate an energy modulation in one undulator, which is then converted into microbunching by means of a dispersive section, typically a chicane. The chromatic dispersion is characterized by the parameter R 56 , defined by c∆t = R 56 (γ − γ 0 )/γ 0 . The second undulator is tuned to a higher harmonic of the laser seed. Because the bunching in the beam includes Fourier components at harmonics of the initial laser seed, the beam radiates at a level well above that due to shot noise (which will be neglected). Here, we examine a specific case where the modulator applies an energy modulation which depends solely on the phase θ of the electrons. The distribution function is chosen to be a product of longitudinal and transverse terms. The transverse component of the distribution function f (X) is a function of the transverse action and is proportional to exp(−J x /ǫ x − J y /ǫ y ), where ǫ x is the normalized emittance in the x-plane, and similarly for ǫ y . The energy component of distribution after modulation takes the form We will consider both Gaussian and uniform energy profiles for the function H, where ∆ γ is equal to the RMS energy spread and maximum deviation, respectively. The energy modulation varies sinusoidally with θ M , which will have a length scale determined by the source of the modulation. Generally, the length scale for the seed will be chosen to be a subharmonic of the desired output radiation wavelength, so that θ = nθ M for some harmonic number, n. Thus, if the laser seed modifies the electron beam energy in an upstream modulator, we will want to evaluate the quantity exp(−iθ) = exp(−inθ M ), which is the bunching at the n th harmonic of the seed modulation. The energy distribution includes the possibility for κ x , κ y = 0, where κ x and κ y represent a correlation between energy and transverse amplitude. This includes the case of "conditioned beams" [12,13], which has been proposed as a means of improving performance in SASE FELs. After the modulator, the beam passes through a dispersive section with an R 56 that induces a phase shift ∆θ = kR 56 (γ − γ 0 )/γ 0 , where k is the wave vector for the output radiation. Within the radiating undulator, the phase can be written as where θ M is the phase of the initial energy modulation, and similarly for q y (s). Because the chicane yields a phase offset that is independent of transverse action, there will always be phase slippage between particles at different transverse amplitudes, even for 'fully con- . Any correlation between energy and transverse amplitude results in a phase shift that is correlated with transverse amplitude as well. This effect reduces the bunching produced by the chicane. For unconditioned beams, the terms q x (s) and q y (s) grow linearly with distance along the FEL. 
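The harmonic bunching produced by the modulator-chicane pair can be checked with a simple one-dimensional Monte Carlo average, neglecting the transverse q_x, q_y terms and detuning: each particle receives a sinusoidal energy modulation of amplitude γ_M, a Gaussian energy spread, and the chicane phase shift ∆θ = kR_56(γ − γ_0)/γ_0 defined above. The numbers below are those quoted for the 50 nm stage in Section V; the analytic comparison |J_n(kR_56 γ_M/γ_0)| is the standard 1D limit of the bunching for negligible energy spread.

```python
import numpy as np
from scipy.special import jv

rng = np.random.default_rng(1)

def bunching_mc(n, k, r56, gamma0, gamma_mod, sigma_gamma, n_part=200_000):
    # Monte Carlo estimate of |<exp(-i theta)>| at the n-th harmonic:
    # sinusoidal energy modulation, Gaussian energy spread, then the chicane
    # phase shift k R56 (gamma - gamma0) / gamma0 (1D, transverse terms neglected).
    theta_m = rng.uniform(0.0, 2.0 * np.pi, n_part)
    gamma = (gamma0 + sigma_gamma * rng.standard_normal(n_part)
             + gamma_mod * np.sin(theta_m))
    theta = n * theta_m + k * r56 * (gamma - gamma0) / gamma0
    return abs(np.mean(np.exp(-1j * theta)))

# 50 nm stage (Section V): 4th harmonic, gamma_M = 2.68, R56 = 92 um.
k = 2 * np.pi / 50e-9
b_mc = bunching_mc(4, k, 92e-6, 6067.0, 2.68, sigma_gamma=0.0)
b_th = abs(jv(4, k * 92e-6 * 2.68 / 6067.0))   # 1D analytic limit, zero spread
print(b_mc, b_th)                               # the two values should agree closely
```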
Note that if one is considering tuning the strength of the undulator field to optimize performance, the detuning term δk/k r can be replaced with 2a u δa u /(1 + a 2 u ). The generalized bunching parameter can be calculated by considering integrals over each phase space coordinate individually. Performing the energy integral first, and shifting by ( where the function F γ depends on the form of the energy distribution: The average over ponderomotive phase is taken over θ M , because this is the scale length for the initial energy modulation: The average over transverse coordinates includes a combination of phases remaining from exp(−iθ) and the mode structure defined by G * (x, y, s). The term in G * is the same for all particles, so the following integral over the x, p x variables remains to be calculated: and similarly in y, p y space. Here, action coordinates have been used so that x = (2β u J x /γ 0 ) 1/2 cos Φ x . Electrons are subject to betatron motion, where Φ x for a specific particle varies with s. However, because Eq. (25) is an average over all phases, and we assumed that the initial energy modulation had no transverse dependence, the value of the integral is independent of the betatron motion. It is simplest to evaluate this by performing the integral over J x first; then, one is left with the average of an expression having the form 1/(a + b cos 2 φ). The integral of this term is slightly complicated, but the average value simplifies to The integral in Eq. (25) then takes the form The final result for the generalized bunching at the higher harmonic is The laser field at the end of the undulator is determined by and the laser power is given by Eq. (13). The basic undulator equations given above can be applied to other configurations, for example, to predict the energy modulation given to a beam by an external laser. They can also be applied to the high-gain regime, but here we will only check the scaling for the gain length. The second derivative of Eq. (17) can be reduced to an equation for the FEL instability, where a L grows exponentially, using Eq. (8) and considering only the energy-dependent term of Eq. (11), where dΨ/ds ≈ 2k u (γ−γ r )/γ r . Assuming that the radius of the radiation field is comparable to the radius of the electron beam, we take Z R ≃ kǫ x β u /γ, and set |G 2 | ≃ 2/3, which yields the following expression for the gain length (expressed in terms of the power radiated): Here, the "FEL parameter" is defined by and n e = (I/2πec)(γ/ǫ x β u ) is the peak electron density. This is in fairly good agreement with the well-known onedimensional approximation [14], III. TRIAL FUNCTIONS The above results are still not fully defined, because Z R and s 0 are free parameters. In general, given a specific choice of Z R and s 0 , any radiation field can be described using a sum of normal modes, but here we are attempting to fit the radiation field to a single, Gaussian mode. In the low-gain regime, each normal mode evolves independently and can be calculated individually. Because the exact result will include the power contained within all these modes, the above analytic result, when only a single mode is considered, is expected to always fall below the correct value. This suggests varying the free parameters to maximize the output power, yielding a greatest lower bound to the total power. The resulting values for Z R and s 0 should serve as the best fit of the output radiation to a pure Gaussian mode. 
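The optimization over the free parameters Z_R and s_0 can be sketched numerically. Under the further simplifications of an on-axis line-charge beam and zero detuning (the case treated analytically in Section IV), the power coupled into the mode is taken here to be proportional to |∫_0^L G(0,0,s) ds|²/Z_R with all prefactors dropped; this is an assumed reduction of the full expression, not Eq. (36) itself, but maximizing it recovers the optimum quoted later (Z_R ≈ 0.36 L, s_0 = L/2).

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize

L = 1.0   # undulator length (arbitrary units; only the ratios matter here)

def mode_overlap_power(z_r, s0):
    # Relative power radiated into the Gaussian mode of Eq. (2) by an ideal
    # line-charge beam at zero detuning: |integral_0^L G(0,0,s) ds|^2 / Z_R,
    # with the overall prefactor dropped.
    re_int, _ = quad(lambda s: z_r**2 / (z_r**2 + (s - s0)**2), 0.0, L)
    im_int, _ = quad(lambda s: -z_r * (s - s0) / (z_r**2 + (s - s0)**2), 0.0, L)
    return (re_int**2 + im_int**2) / z_r

# Trial-function step: maximize the power over the free parameters Z_R and s0.
res = minimize(lambda p: -mode_overlap_power(p[0], p[1]),
               x0=[0.5 * L, 0.5 * L], bounds=[(1e-3, 5 * L), (-L, 2 * L)],
               method="L-BFGS-B")
z_r_opt, s0_opt = res.x
print(f"Z_R/L = {z_r_opt / L:.2f}, s0/L = {s0_opt / L:.2f}")   # ~0.36 and 0.50
```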
This method is essentially a trial function approach, and any trial function which is a valid vacuum laser field can be used. The closer the trial function is to the exact result, the more accurate this estimate for the power will be. Furthermore, the prediction for the laser power is expected to be second-order accurate compared to the optimized trial function; in other words, even a poor approximation to the laser field can result in a good estimate for the output power. In this paper, only a simplified FEL configuration is considered, where the prebunching is accomplished by a uniform energy modulation, followed by a linear chicane. The trial function method applies to more general cases as well, so long as the generalized bunching parameter B(s) can be calculated and the FEL is operating in the low-gain regime. In the configurations being considered, a pure Gaussian mode is expected to be a reasonable approximation to the FEL output, except in the emittancedominated regime, ǫ/γ 0 > ∼ λ/(4π). It is a feature of trial function methods that, even if the trial function does not accurately represent the radiation field produced by the FEL, the prediction for the output power may still serve as a good estimate. For any given set of trial functions, the analytic model predicts a lower bound on the total output power. The resulting integrals are simple enough to implement as a Mathematica R script, which allows for rapid optimization. Because the trial function procedure is to maximize the output power by varying Z R and s 0 , any additional design parameters -for example the undulator field, R 56 , or energy modulation -can be optimized, simultaneously, to obtain the largest possible output power. The computational time required to optimize these design parameters is greatly reduced, in this way, relative to full scale FEL simulation codes. IV. ANALYTIC SOLUTIONS In certain parameter ranges, the above methodology allows for simple analytic approximations to the radiation power and mode structure produced by an undulator. In the parameter range where the energy modulation is much larger than the energy spread ∆ γ , but not so large that the variations in phase slippage along the length of the undulator can compete with that caused by the chicane, the optimal value of R 56 will be close to the value which maximizes J n (kR 56 γ M /γ 0 ). The argument which maximizes this Bessel function will be referred to as j ′ n,1 , which is the first non-trivial zero of J ′ n . For a cylindrically symmetric beam, the resulting expression for the output power is: where δ k = δk/k r −2(γ 0 −γ r )/γ r is the relative detuning, and (34) Here, Z 0 ≡ 1/(ǫ 0 c) ≃ 377 Ω is the vacuum impedance, which enters through mc 3 /(r e I 2 A ) = Z 0 /4π. The number N u = k u L/2π is the number of undulator periods in the undulator. To continue this analytic approximation to an optimized harmonic generation section, we consider three cases. First, we neglect q(s) altogether, which implies that the effect of emittance is limited to the spot size of the electron beam. Secondly, we consider an ideally conditioned beam, so that q(s) is a constant. Finally, we consider more general cases, including the most typical example of an unconditioned beam, where q(s) = 0 at s = 0 and increases linearly with s. Neglecting q(s), The expression for the power becomes (36) It is still necessary to find Z R and s 0 by optimizing the predicted power. 
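Before turning to the optimization of Z_R and s_0, note that the condition stated above already fixes the chicane strength approximately: setting the Bessel argument to j'_{n,1} gives R_56 ≈ j'_{n,1} γ_0/(k γ_M). Evaluated for the two stages quoted in Section V, this lands close to, though somewhat above, the R_56 values actually used (92 µm and 3.2 µm), consistent with the caveat that the estimate holds only when the modulation dominates the energy spread.

```python
import numpy as np
from scipy.special import jnp_zeros

def r56_optimum(n, wavelength, gamma0, gamma_mod):
    # R56 maximizing J_n(k R56 gamma_M / gamma0): set the argument equal to
    # j'_{n,1}, the first non-trivial zero of the derivative J_n'.
    k = 2 * np.pi / wavelength
    j_prime = jnp_zeros(n, 1)[0]
    return j_prime * gamma0 / (k * gamma_mod)

# 50 nm stage: 4th harmonic, gamma_M = 2.68 (Section V).
print(r56_optimum(4, 50e-9, 6067.0, 2.68) * 1e6, "um")    # ~96 um vs 92 um used
# 1.04 nm stage: 3rd harmonic, gamma_M = 1.10 (Section V).
print(r56_optimum(3, 1.04e-9, 6067.0, 1.10) * 1e6, "um")  # ~3.8 um vs 3.2 um used
```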
The power is symmetric under s 0 → L − s 0 , and the optimum occurs at the central value of s 0 = L/2. The integral becomes simple to calculate if δ k = 0, in which case the power is (37) The emittance-related term has been rewritten in terms of the ratio between the geometric emittance, ǫ/γ 0 , and the minimum effective emittance of the laser field, λ/4π. When the emittance term is small (a line charge beam), the maximum power occurs for Z R ≃ 0.36 L with a value of 0.65 P 0 . The power only scales linearly with N u , in this case, because the distance along the undulator over which electrons can induce stimulated emission is limited by diffraction. If the detuning is allowed to vary, on the other hand, this allows for further optimization of the FEL. Using Eq. (36), the expected bandwidth of the FEL, in terms of the relative detuning, δ k , is 2π/(k u L) = 1/N u . In general, the value of the detuning parameter which maximizes the output power is negative, and must satisfy the following condition: (38) In the limit of very small ǫ, the power goes to 1.07 P 0 and Z R ≃ 0.18 L. At large values of (β u /L)(4πǫ/γ 0 λ), the power is roughly P 0 L/8Z R , and Z R ≃ kǫβ u /γ 0 . Note that (ǫβ u /γ 0 ) 1/2 is the spot size of the electron beam and, in this limit, the Rayleigh length is determined by the fact that this is also the spot size of the outgoing radiation. As a fit between these two limits, a good approximation for the Rayleigh length is Z R ≃ 0.18 L + kǫβ u /γ 0 . The power can be approximated as Thus, by optimizing the detuning parameter, rather than using the exact resonance condition, the output power can be significantly increased, by over 60% in the limit of small emittance. Also, in the small emittance limit, the Rayleigh length of the output radiation is reduced by a factor of two. When the emittance term is large, on the other hand, the optimal detuning is close to nominal resonance, in comparison with the bandwidth of the FEL. The expression for q(s) in Eq. (21) can be rewritten in terms of the conditioning parameter for a "fully conditioned" beam, κ 0 ≡ k/(2k u β u ), as When the conditioning parameter κ = κ 0 , q(s) = q κ ≡ −κ 0 kR 56 /γ 0 is a constant. In this case, the expression for the output power is almost identical to the case where q(s) is neglected, but with ǫ → ǫ/(1 + q 2 κ ǫ 2 ). Additionally, however, the power is reduced by a factor of (1 + q 2 κ ǫ 2 ), and the beam waist is shifted from L/2 to L/2 − (kǫβ u /γ 0 ) × q κ ǫ/(1 + q 2 κ ǫ 2 ). Because q κ < 0, this implies that the beam waist is shifted towards the end of the undulator. The Rayleigh length is unchanged. Thus, the effect of the constant, non-zero q(s), is to strongly reduce the output power, although this is partly compensated for by reducing the effective spot size of the electron beam. Now we consider more general conditioning parameters and optimize the output power. This is achieved by adjusting the conditioning parameter so that q(s) sweeps from negative to positive values, which keeps the magnitude of q(s) as small as possible throughout the undulator. The optimum condition is, thus, q(L/2) ≃ 0, implying that This optimum can be much smaller than κ 0 when 1 ≪ kR 56 /k u L = j ′ n,1 γ 0 /(2πN u γ M ). The parameter q(s) then varies within the range ±(κ 0 /γ 0 )kR 56 /(k u L+kR 56 ). For this value, the result is, again, symmetric under the transformation s 0 → L − s 0 , and the trial function method yields s 0 = L/2. 
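As a numerical check of the Rayleigh-length fit quoted above, Z_R ≈ 0.18 L + kǫβ_u/γ_0, the sketch below uses the 50 nm stage parameters listed later in Section V (L = 2.4 m, β = 16.28 m, normalized emittance 2 µm, γ_0 = 6067) and lands within a percent of the Z_R = 1.12 m obtained there by the full optimization.

```python
import numpy as np

def rayleigh_fit(L, wavelength, eps_n, beta_u, gamma0):
    # Approximate fit quoted above: Z_R ~ 0.18 L + k * (eps_n / gamma0) * beta_u,
    # where eps_n / gamma0 is the geometric emittance.
    k = 2 * np.pi / wavelength
    return 0.18 * L + k * (eps_n / gamma0) * beta_u

z_r = rayleigh_fit(L=2.4, wavelength=50e-9, eps_n=2e-6, beta_u=16.28, gamma0=6067.0)
print(f"Z_R ~ {z_r:.2f} m")   # ~1.11 m, close to the 1.12 m found by optimization
```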
An approximate fit for the resulting output power is kR 56 k u L + kR 56 A key parameter affecting FEL performance is the ratio of the geometric electron beam emittance, ǫ/γ 0 , to the nominal laser emittance, λ/4π. The two main corrections take the form of this ratio multiplied by either β u /L, or by L/β u . These terms are related to the electron beam spot size and phase slippage rate, respectively. There is an additional, higher order correction, which is only significant when the emittance ratio is of order unity or higher. Note that this additional term arises from the product of (2kǫ/γ 0 )(β u /L) and (2kǫ/γ 0 )(L/β u ). For an unconditioned beam, the final output power can be very similar to the optimized conditioning parameter given above. However, when the term q(s) has a significant effect, the output power is determined mainly by the range of values of q(s) for 0 < s < L. An unconditioned beam, with q(s) varying from 0 to 2k u Lκ 0 /γ 0 , will perform similarly to the optimized case above, with parameters chosen so that q(s) varies between ±2k u Lκ 0 /γ 0 . The output power for an unconditioned beam satisfies a similar approximate fit, but the sensitivity to phase slippage is effectively doubled: However, in contrast to the case where the conditioning parameter is made too large, for an unconditioned beam, the beam waist is shifted towards the beginning of the undulator. For large emittances, when ǫ/γ 0 > ∼ λ/4π, it is even possible to have s 0 < 0. Note that even when kR 56 ≫ k u L, appropriate beam conditioning can increase output power by up to a factor of 4 if the undulator performance is limited by emittance. The improvement is constrained by the mismatch between chicanes and conditioned beams, and also results from the fact that we are only considering radiation in the low-gain regime. In summary, the trial function method leads to a simplified numerical solution for certain examples, including the usual case of an unconditioned beam. The electron beam emittance is seen to affect the output power for an optimized system in two ways, related to the electron beam size and the relative phase slip of electrons having different transverse amplitude. These two terms imply that the undulator performs best when β u ≃ 0.4 L: for larger beta functions, the spot size is too large; for very small beta functions or for long undulator lengths, phase slippage reduces the output power. Constraints with similar underlying physics have been obtained as numerical fits [15] to analytic calculations of FEL radiation in the high-gain regime [16]. One important difference is that, in the high-gain regime, the most significant length scale is the gain length, rather than the total length of the undulator. V. SIMULATION RESULTS For the simplified description of a seeded electron beam, FEL simulations using the GENESIS code [9] have been compared with the analytic theory above. Two cases are considered: the first stage of a cascade which converts 200 nm wavelength to 50 nm, and the final stage, which converts 3.13 nm wavelength to 1.04 nm. All sections are assumed to use planar undulators. The electron beam is assumed to have equal emittances and equal focusing in both transverse planes. The results are summarized in Table I. The electron beam parameters are: γ 0 = 6067, ǫ x = ǫ y = 2 µm, I = 500 A. The transverse mode structure of the output radiation is characterized by the parameter M 2 , which is the ratio of the emittance of the FEL output to the minimum possible value, λ/4π. 
This parameter can also be described as the ratio of the idealized Rayleigh length for the given waist diameter to the observed Rayleigh length. In terms of power flux, the RMS width of the laser at the waist is (λM 2 Z R /4π) 1/2 . For the first stage, producing radiation at 50 nm by going to the fourth harmonic, the energy modulation is γ M = 2.68, and the idealized chicane uses R 56 = 92 µm. The undulator has an 8 cm period and is 2.4 m long. The electron beam is taken to be matched to the undulator, with β = 16.28 m. The resonant undulator strength is a u = 6.709, but optimal performance occurs at a u = 6.686. At this optimum, the theory predicts a total output power of 130.3 MW, characterized by Z R = 1.12 m and s 0 = 1.20 m. Numerical simulations for the simplified case yield an output power of 134.2 MW, characterized by Z R = 0.94 m and s 0 = 1.19 m, under the assumption that M 2 ≡ 1. On the other hand, a more general fit to the output radiation yields M 2 = 1.04, Z R = 0.97 m, and s 0 = 1.21 m. A detailed analysis reveals that 126.4 MW, or 94%, of the output radiation, lies within the predicted Gaussian mode. The analytic theory underestimates the total power by 3.9 MW, a relative error of 3%, which is of similar order to the power which resides in higher order modes. As a rough check, we note that, when M 2 is close to unity, an estimate for the fraction of power in higher-order modes is (M 2 −1)/2, or 2%, in this case. For this example, neglecting the effect of the FEL radiation field on the electrons themselves does not alter the simulation results. For the final stage, producing radiation at 1.04 nm, by going to the third harmonic, the energy modulation is γ M = 1.10, and the idealized chicane uses R 56 = 3.2 µm. In this stage, (ǫ/γ 0 )/(λ/4π) ≃ 4. The undulator has a 2.8 cm period and is 8.4 m long. The electron beam is taken to be matched to the undulator, with β = 29.00 m. The resonant undulator strength is a u = 1.3186, but optimal performance occurs at a u = 1.3181. At this optimum, the theory predicts a total output power of 35. On the other hand, a more general fit to the output radiation yields M 2 = 1.72, Z R = 33.0 m, and s 0 = 0.73 m. The analytic prediction is too low by 10%. By taking into account the reduced transverse coherence of the laser output, the waist position is shown to be located within the undulator, close to the upstream end. The prediction that the virtual waist of the radiation would be far away from the undulator itself is an artifact of the attempt to characterize the radiation in terms of a single, Gaussian mode. The Rayleigh lengths are also very different, reflecting the importance of higher-order modes. A detailed analysis reveals that 32.8 MW, or 93%, of the output radiation lies within the predicted Gaussian mode. The analytic theory underestimates the total power by 3.9 MW, a relative error of 10%, which is of similar order to the power which resides in higher order modes. Note that by selecting the values of Z R and s 0 in the "best fit" for the laser output, the analytic prediction may partially account for higher order transverse modes. A generalization to trial functions having two or more transverse modes would be desirable to obtain a more complete description of the output radiation. However, for typical parameters, even when performance is strongly impacted by emittance, the errors are comparable to other effects, such as statistical noise within the electron beam. 
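The two figures of merit used in this comparison are easy to reproduce from the quoted beam and radiation parameters: the emittance ratio (ǫ/γ_0)/(λ/4π) for each stage and the rough estimate (M² − 1)/2 for the fraction of power in higher-order modes, as below.

```python
import numpy as np

def emittance_ratio(eps_n, gamma0, wavelength):
    # Ratio of the geometric beam emittance to the minimum laser emittance lambda/4pi.
    return (eps_n / gamma0) / (wavelength / (4 * np.pi))

def higher_order_fraction(m2):
    # Rough estimate, valid for M^2 close to 1, of the power residing in
    # higher-order transverse modes.
    return (m2 - 1.0) / 2.0

print(emittance_ratio(2e-6, 6067.0, 50e-9))     # ~0.08 for the 50 nm stage
print(emittance_ratio(2e-6, 6067.0, 1.04e-9))   # ~4 for the 1.04 nm stage
print(higher_order_fraction(1.04))              # ~2% for the 50 nm stage
```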
The dependence of the output radiation power on the energy modulation, γ M , is shown in Figure 2, and also shows good agreement between the analytic model and numerical simulations. The value of R 56 is re-optimized for each value of γ M . For short wavelengths, FEL performance is more sensitive to the energy spread, as phase slippage along the length of the undulator leads to debunching of the electron beam. The optimal power of 60 MW can only be increased by using a longer undulator, by lowering the harmonic number, or by changing the electron beam parameters. The dependence of the output radiation on the energy spread is shown in Figure 3. Other FEL parameters are kept constant. The resulting variation in FEL power is consistent with Eq. (34). In particular, for a uniform energy distribution, the power falls off to nearly zero at kR 56 ∆ γ /γ 0 = π, because the energy spread appears in the term F γ (x) = sin(x)/x. The dependence of the output radiation on the beam conditioning parameter is shown in Figure 4. Other FEL parameters are kept constant. The resulting FEL power is consistent with the analytic theory, with the optimum value for the conditioning parameter given by Eq. (41). Typically, the ideal conditioning parameter is much smaller for this geometry than for the case of a long amplifying undulator with no chicane, labelled here as the "matched" value of κ. In the 50 nm example, the optimum is, essentially, an unconditioned beam. Even in the 1.04 nm example, optimizing the conditioning parameter yields only an 8% improvement in output power, as compared with the unconditioned case. Figures 5 and 6 show the dependence of FEL output on the strength of the undulator magnets, which determines the detuning. The agreement between theory and simulations only falters for the 1.04 nm case, when the magnetic fields are tuned below the resonant value, as shown in Figure 6. In this case, the simulations yield about 5 MW more power than the analytic theory predicts. In the optimally tuned case, this is a reasonable value for the power that is emitted into higher-order transverse modes. Far from resonance, the analytic theory predicts very lit-tle power while, in the simulations using GENESIS, there is still roughly 5 MW of power. However, this power is in the form of higher-order transverse modes, with values of M 2 ∼ 10. This radiation is generated by particles having large transverse amplitude, which also move forward more slowly. When the magnetic field is too high, these higher-order modes do not appear, because there are no particles moving fast enough to be in resonance. For earlier stages which are not emittance-limited, the analytic calculations are in much closer agreement with numerical simulations. Another source of error is the nonlinearity of the interaction, where the FEL instability, or trapping, may lead to an underestimate of the output power. The importance of the FEL instability can be checked by performing simulations with reduced electron beam current, thus assuring that the total length of the system is much less than an FEL gain length. For example, in the 1.04 nm case, simulations at low current would scale to a total output power of 38.9 MW at 500 A, demonstrating that the FEL gain is not a significant effect. However, for larger values of the applied energy modulation, nonlinear effects are very important for reducing phase slippage and maintaining a large bunching parameter. 
The low-gain approximation does not require that the electron beam be unaffected by the FEL interaction. Rather, there are three steps to the FEL instability: the radiation field modulates the electron beam, which then generates bunching as the energy modulation causes a variation in phase slippage, which, in turn, enhances the power channeled into the radiation field. Thus, even if the energy modulation of the electron beam at the end of the undulator is much larger than, for example, the original energy spread, these calculations can still be essentially valid. The energy modulation only need be taken into account when the phase slippage induced by the modulation alters the bunching parameter from what it would be in the free-streaming case; this can be a slow process, and the relevant scale is the gain length. For example, in Figure 7, plots of the longitudinal phase space of the beam are shown for the end of the 50 nm FEL example, both for the low-current limit and for the nominal current of 500 A. The energy modulation due to self-interactions drastically alters the phase space distri-bution, but, because this modulation yields only a small change in phase over the 2.4 m of the undulator, the radiation produced is not altered substantially by this effect. It should also be noted that the geometry considered here for each stage of harmonic generation is an oversimplification. A more typical geometry will alter the predicted output radiation in complex ways. For example, the modulation of the electron beam was assumed to be independent of transverse coordinates, while, in practice, the energy modulation will be less effective for particles that are located off-axis. VI. CONCLUSIONS In this paper, we have proposed and provided strong support for a trial function method which predicts the FEL radiation output in the low-gain regime. This method has been used to approximate the radiation output of a harmonic generation FEL system as a coherent Gaussian mode. Various assumptions have been made in order to perform the specific calculations presented in this paper. We approximate the laser seed and output as monochromatic beams. The electron beam has been taken to be matched to the undulator without external focusing, where the undulator is designed for equal focusing in both planes. The transverse emittances have also been taken to be equal. Shot noise in the current density has been neglected. The undulator is assumed to operate in the low-gain regime, specifically, the total length of the undulator must be smaller than a gain length; as long as this assumption is true, the method considered here is valid, even if the energy modulation generated through self-interactions is, itself, large. The power transferred to a given spatial mode is determined by Eq. (17), with G(x, y, s) being the structure of the expected laser output mode. This leads to the definition of a generalized bunching parameter. We find that, for expected parameter ranges, so long as the FEL is not operating far beyond the emittance limit, the output power can be described reasonably well as a single Gaussian mode, after optimizing the mode parameters for maximum output power. Analytic calculations show detailed quantitative agreement with time-independent simulations using GENE-SIS. Errors are related to the presence of higher-order modes and the corresponding reduction in transverse coherence. 
The apparent location of the laser waist for emittance-limited beams tends to lie outside of the beginning of the undulator, and this is shown to be due to the typical beam property that κ x = κ y = 0. Optimization of this energy-amplitude correlation would set the beam waist at the midpoint of undulator; however, generating such correlations would be challenging and the total output power is only slightly improved for typical parameters. When higher-order modes are taken into account, simulation results place the laser waist just inside of the undulator. We plan to extend this formalism to more general electron beam parameters including external focusing and elliptical beams, and to a more realistic model for the electron beam modulation process. This method can also be extended to calculate higher-order modes of the output radiation.
9,728.6
2006-02-20T00:00:00.000
[ "Physics" ]
A Framework for Automatic Video Surveillance Indexing and Retrieval The manual search through the surveillance video archives for a specific object or event is very time-consuming and tedious task due to the large volume of video data captured by many installed surveillance cameras. Therefore, the solution to accelerate and facilitate this process is to design an automatic video surveillance with the efficient and effective video indexing, video data model, query formulation and language, as well as visualization interface. There are many challenges, for developing a powerful query processing module, formulating complex queries and selecting suitable similarity matching strategy to detect any abnormality based on semantic content of the video using various query types. This study presents a novel video surveillance indexing and retrieval framework to cope with the above challenges. The proposed framework consists of three main modules i.e., pre-processing, query processing and retrieval processing. Moreover, it supports an efficient search and actively refines the retrieval result by formulating various query types including: query-by-text, query-by-example and query-by-region. INTRODUCTION Nowadays, many surveillance cameras have been installed in public places like banks, airports, parking lots, offices, hospitals and shops to increase the security by real time monitoring of human activities as well as capturing and recording this information for future analysis.Hence, video surveillances mainly supports two applications domain: • Real time monitoring of environment and generating alarm to prevent dangerous situations or threats by predicting recognized abnormal activities and events.• Investigating and retrieving specific events (action of object such as abandoned luggage or person entering the forbidden zone) or object (e.g., vehicle, person and luggage) of interest for the after-the-fact activities as evidence forensics (Le et al., 2010;Şaykol et al., 2010). Although, many research have been carried out in automatic event recognition (Benabbas et al., 2011;Hampapur et al., 2005), crowd analysis (Conte et al., 2010;Xu and Song, 2010), object detection and tracking on video surveillance (Kim et al., 2011), only few works have been dedicated to access the relevant video segment based on the user's intentions for the after-the-fact activities (Chamasemani and Affendey, 2013).In addition, these huge volumes of surveillance video contents bring us many challenges in managing and retrieving useful information efficiently and effectively.Moreover, manually searching the surveillance videos for specific events or objects by security staffs is a tedious and time consuming task which is almost becoming infeasible.Therefore, a practical solution to this problem is to quickly retrieve relevant segment of video based on user query by utilizing semi-automated or automated video retrieval and browsing application (Calderara et al., 2006;Hampapur et al., 2007;Hu et al., 2007).However, a robust video surveillance system should be equipped with powerful data modeling (to extract appropriate features, organizing and storing them in video archive) and retrieval techniques to provide sufficient facilities for detecting specific events or objects in video archives.In addition, developing an efficient query processing algorithm is also essential for accessing the video surveillance archives. 
This study focuses on the problem of indexing as well as retrieval of objects-of-interest or events within the stored content of the video surveillance archives.Therefore the main contribution of this study is a novel video surveillance indexing and retrieval framework for forensic investigation (after-the-fact activities retrieval).The three main modules of the proposed framework, namely, pre-processing, query processing and retrieval processing enable the user to formulate various query types (including: query-by-text, query-by-example, query-by-region) and allows active interactions with the Fig. 1: Architecture of proposed video surveillance indexing and retrieval framework retrieval model.Our framework is different from developed framework in (Le et al., 2009) since: • Ours is equipped with its own video analysis module. • Videos abstraction are used for indexing process so it decreased the processing time and operational cost during retrieval process as shown in Fig. 1. LITERATURE REVIEW Stringa andRegazzoni (1998, 2000) proposed a real time video shot detection, indexing and retrieval system.Their system is one of the first real time content-based video surveillance retrieval and indexing systems.It is used to retrieve the detected abandoned/lost luggage in subway station either from its related frame (where the detected lost luggage has been left by a person) or video shot (the last frame among 24 frames of each shot contains the detected lost luggage).Their system stored the frame of the detected lost luggage to use it in the future for retrieving the similar instances of the lost luggage based on features such as: color, shape, texture, 3-D position, movement and compactness.Therefore this system supported only textual query for pre-defined event (lost luggage). Video Content Analyzer (VCA) was developed by Lyons and his colleagues with five main components including background subtraction, object tracking, event reasoning, graphical user interface plus indexing and retrieval (Lyons et al., 2000).VCA extracts and classifies the content of video into people and objects.VCA are also able to recognize event such as person depositing/picking up object, person entering/leaving scene, merging and splitting.Although VCA's graphical user interface provides the facility for retrieving video sequences; nevertheless it is based on only given event queries.Lee et al. (2005) developed an object-based video surveillance retrieval system.Their system equipped with the specific user interface as a search/browse tools from indexed surveillance videos.This interface allows its user to search, browse, filter and retrieve simple events such as presence of persons in a given camera in a specific time or even from other cameras.In fact it can retrieve the presence of suspicious person which appeared in multiple camera viewpoints while this event has already been indexed.Jung et al. (2001) designed an efficient event retrieval system for traffic surveillance based on motion trajectory of the moving objects.Access to this moving object in the semantic level is possible by using a generated motion model as an index key which stored in database (Jung et al., 2001).The specific feature of object is used for indexing and searching purpose at different semantic level.Although, their searching module supports different queries (query by sketch, query by example and query by weighting parameter) based on the object trajectory information; it fails to process complex and textual queries. 
IBM smart surveillance was developed for video indexing and retrieving by focusing on video data model (Hampapur et al., 2005(Hampapur et al., , 2007)).Although, this system was successful to detect moving objects, track group of objects, classify objects and event of interest, only predefined events can be queried and processed.Hu et al. (2007) proposed a semantic-based video retrieval framework for video surveillance based on object trajectories.Objects trajectories are extracted from tracked object in the scene then object activity models (at both low level and semantic level retrieval) are hierarchical clustered and learnt from these object trajectories using their spatio-temporal information.Finally several descriptions are added to the activity model to properly index data.Their framework supports query by sketch-based trajectories, query by multiple objects and query by keyword.However, the semantic level retrieval of their framework supports only few activities such as turn left/right/south/north and normal/high/low speed. SURVIM is a developed data model for online video surveillance which supports different abstraction levels (Durak et al., 2007).The framework of SURVIM includes these four main modules: data extractor, video data model, query user interface and query processing.Users of SURVIM are able to retrieve video segmented based on different query types (query by semantic, spatial, size-based, trajectory and temporal).SURVIM did not extract low level feature of objects such as color, shape and velocity; therefore it suffers from inability in successful object classification.Their model also failed to process those queries with incomplete indexing. Visual Surveillance Querying Language (VSQL) was proposed by Şaykol et al. (2005) with the main focus on semantic and low-level features for surveillance video retrieval supports only query-by-text.They developed their query language to support scenario-based query processing system which provides a mechanism for an effective offline inspection (Şaykol et al., 2010).The two main drawbacks of their system are: firstly, they performed exact matching since during indexing phase the event are recognized and object are detected and tracked.Therefore, their system failed to process those queries with incomplete indexing.Secondly, the users are restricted to formulate their queries from limited set of predefined events and scenario (in their system scenario is specified as a sequence of events arranged temporally and enriched with object-based low level features).Le et al. (2008Le et al. ( , 2009Le et al. ( , 2010) developed a general framework for video surveillance indexing and retrieval.Their framework equipped with a Structured Query Language (SQL) to retrieve surveillance video at both event and object level.The retrieval process can be done based on query by text, query by example and query by region.In their system the simple specified events plus interval time of their relations construct the composed events.They developed their framework based on this assumption that the incoming videos are partially indexed.An external video analyzing module was responsible to index the content of video by performing object detection, object tracking and event recognition. 
Nam and his colleagues proposed a data model for human activity recognition that support complicated activity by combining a set of basic activities (Nam et al., 2013).They used activity labels to defined validity or invalidity of activity combinations and restricted the human activity into symmetric or asymmetric. Although several works have been dedicated for retrieving object and event on video surveillance archives, still there are many challenges need to be fulfilled.Designing a powerful query processing module and formulating complex query comprising the combination of information and spatial-temporal relations of objects and events is not easy task.Furthermore, developing a similarity matching strategy which enables to match various types of queries and video index is another open problem. OVERVIEW OF THE PROPOSED METHODOLOGY Figure 2 illustrates structure and functions of our proposed indexing and retrieval framework.This framework is based on these three main modules: preprocessing, query processing and retrieval processing.Furthermore, it is designed as an effective, efficient and convenient means for automatic object and event indexing as well as retrieval from video surveillance archives which works on both low and semantic levels.However, the proposed framework contains the entire processing steps needs to accomplish retrieval task.The following subsections give a quick review of each framework modules. Pre-processing module: Surveillance cameras captured raw video, then compressed and stored it into video database based on their location and time information.Then, these stored videos are abstracted in order to decrease processing time and computational cost for performing further tasks such as video indexing, query processing and retrieval. Video abstraction: Video abstraction is an automatic way for extracting important information from largescale video which speeds up video indexing and retrieval by avoiding from performing unnecessary and redundant information.Dynamic video skimming and static video summary are two types of video abstraction.Video skimming is constructed from a collection of image sequences with their related audio; in fact video skimming is a brief representation of the source video.While, static video summary is the simplest ways for providing an abstracted video by extracting and selecting the representative video frames which called keyframes.The common way for extracting these keyframes is based on semantic level of a video.However, determining the proper and informative keyframe is not easy tasks (Jiang and Qin, 2010;Sabbar et al., 2012).Moreover, most of moving objects in real video surveillance application are important since their presence can be referred as evidence in case of crime occurrence. 
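The static video summary described above keeps only representative keyframes. The authors' adapted shot segmentation algorithm (introduced at the end of the next subsection) is not specified in detail, so the following is purely an illustrative sketch of keyframe selection by clustering: frames are described by coarse color histograms, clustered with k-means, and the frame closest to each cluster centre is retained, preserving temporal order.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def frame_histogram(frame, bins=8):
    # Coarse per-channel color histogram used as a simple frame descriptor.
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0]
            for c in range(frame.shape[-1])]
    h = np.concatenate(hist).astype(float)
    return h / h.sum()

def select_keyframes(frames, n_keyframes=5):
    # Cluster frame descriptors and keep, for each cluster, the frame closest
    # to the cluster centre, as a static video summary in temporal order.
    descriptors = np.array([frame_histogram(f) for f in frames])
    centres, labels = kmeans2(descriptors, n_keyframes, minit="++")
    keyframes = []
    for c in range(n_keyframes):
        members = np.where(labels == c)[0]
        if members.size == 0:
            continue
        dists = np.linalg.norm(descriptors[members] - centres[c], axis=1)
        keyframes.append(int(members[np.argmin(dists)]))
    return sorted(keyframes)

# Toy example with random "frames" (H x W x 3 uint8 arrays).
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8) for _ in range(60)]
print(select_keyframes(frames, n_keyframes=4))
```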
In video surveillance, the moving objects that appear in the video frames are the most informative and representative content. An efficient video abstraction should therefore have three main characteristics: first, it should be sufficiently short and compact; second, it should include the necessary elements of the moving objects that appeared in the original video; and third, the temporal order of the moving objects should be preserved (Chiang and Yang, 2015). We developed an adapted shot segmentation algorithm that extracts representative keyframes with a clustering method to perform video abstraction.

Video indexing: Our video indexing is composed of three main sub-modules: video analysis, feature extraction, and data indexing. Video analysis is responsible for detecting stationary or mobile objects of interest, tracking them across successive frames, and recognizing events by analyzing the behavior of the detected objects. The results of this step are physical objects, events, and trajectories.

In this framework, objects of interest are detected using adaptive background modelling, chosen from the four common object detection approaches: background modelling, segmentation, supervised classifiers, and point detectors. The detected moving objects are categorized into persons, physical objects, and grouped objects using the feature vectors extracted during feature extraction. Feature extraction obtains low-level object features such as color, shape, velocity, bag of regions, and trajectory.

The results of the two previous sub-modules are used for data indexing according to our data model. The data model determines what kinds of features need to be extracted and how they should be organized and indexed in the database. Hence, in the pre-processing module, once objects are detected or events recognized, their information (including low-level features as well as the spatio-temporal relations among them) is stored with the corresponding frame according to our data model.

Query processing module: The query processing module is the essential part of the proposed framework. Query formulation, query parsing, and query matching together are responsible for retrieving accurate results even when an event or object is only partially indexed. Query processing starts by formulating a textual query, selecting a whole image as an example query, or selecting part of an image as a region query. A visual query-specification interface is provided to facilitate formulating queries or selecting an image/sub-image. The next step is to parse the user's query (for query-by-text) or to extract features after region segmentation (for query-by-example or query-by-region). To improve retrieval performance, a query can also include low-level object features such as shape, color, and size, and specific event attributes such as a time interval or the spatio-temporal relations of grouped objects. The query parser then checks the vocabulary of the words, analyzes the syntax of the query, and separates the textual queries. For visual queries, the selected region is represented using the proposed bag-of-regions segmentation algorithm, which is computed with respect to each region's dominant color and uses color distance to quantize regions.
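A minimal sketch of the object detection and low-level feature extraction steps is given below; it assumes OpenCV's MOG2 background subtractor as a stand-in for the adaptive background model, and it uses bounding-box geometry and a mean HSV color as stand-in low-level features rather than the full feature set (bag of regions, trajectory, classifier) described above.

import cv2
import numpy as np

bg_model = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                              detectShadows=False)

def detect_moving_objects(frame, min_area=400):
    """Return bounding boxes and simple low-level features (area, centroid,
    mean HSV color) for moving regions in one frame; a sketch only."""
    mask = bg_model.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    objects = []
    for cnt in contours:
        area = cv2.contourArea(cnt)
        if area < min_area:
            continue  # ignore small noise blobs
        x, y, w, h = cv2.boundingRect(cnt)
        roi = hsv[y:y + h, x:x + w]
        objects.append({
            "bbox": (x, y, w, h),
            "area": area,
            "centroid": (x + w / 2.0, y + h / 2.0),
            "mean_hsv": roi.reshape(-1, 3).mean(axis=0),  # crude color feature
        })
    return objects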
Retrieval processing module: The ability to efficiently retrieve and browse accurate results from the video archive, to the user's satisfaction and according to the submitted query, is the most critical aspect of any retrieval system. To this end, we employ matching techniques to retrieve the objects or events that satisfy the user's query, whether they are fully or partially indexed. Moreover, the proposed relevance feedback algorithm collects user feedback to refine the retrieval results through interaction between the system and the user. A learning algorithm learns from this feedback to further improve retrieval performance and to re-index the retrieved objects or recognized events in the video archive accordingly (a generic sketch of such an update appears below).

CONCLUSION

This research presented a novel and efficient framework for automatic video surveillance indexing and retrieval, motivated by the problems of the existing works discussed above. Successful design and development of the proposed framework will lead to accurate video retrieval with low processing time and operational cost. The three main components of the framework are the pre-processing, query processing, and retrieval processing modules, and we briefly described the important functionalities of each. We are currently implementing the video abstraction and indexing modules to show the feasibility of the proposed framework. Its credibility will be demonstrated by performing various experiments on benchmark datasets.

Fig. 2: Three modules of the proposed framework.
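The relevance feedback step can be illustrated with a standard Rocchio-style update of the query feature vector followed by re-ranking; this is a generic sketch under an assumed vector-space index, not the learning algorithm proposed for the framework.

import numpy as np

def rocchio_update(query_vec, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.25):
    """Refine a query feature vector from user feedback (Rocchio formula)."""
    q = alpha * np.asarray(query_vec, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(non_relevant):
        q -= gamma * np.mean(non_relevant, axis=0)
    return q

def rank_results(query_vec, index_vectors):
    """Rank indexed objects/events by cosine similarity to the query."""
    X = np.asarray(index_vectors, dtype=float)
    q = np.asarray(query_vec, dtype=float)
    sims = X @ q / (np.linalg.norm(X, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)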
3,579.8
2015-01-01T00:00:00.000
[ "Computer Science" ]
The AGEL Survey: Spectroscopic Confirmation of Strong Gravitational Lenses in the DES and DECaLS Fields Selected Using Convolutional Neural Networks

We present spectroscopic confirmation of candidate strong gravitational lenses using the Keck Observatory and Very Large Telescope as part of our ASTRO 3D Galaxy Evolution with Lenses (AGEL) survey. We confirm that 1) search methods using Convolutional Neural Networks (CNN) with visual inspection successfully identify strong gravitational lenses and 2) the lenses are at higher redshifts relative to existing surveys due to the combination of deeper and higher-resolution imaging from DECam and spectroscopy spanning optical to near-infrared wavelengths. We measure 104 redshifts in 77 systems selected from a catalog in the DES and DECaLS imaging fields (r < 22 mag). Combining our results with published redshifts, we present redshifts for 68 lenses and establish that CNN-based searches are highly effective for use in future imaging surveys with a success rate of 88% (defined as 68/77). We report 53 strong lenses with spectroscopic redshifts for both the deflector and source (z_src > z_defl), and 15 lenses with a spectroscopic redshift for either the deflector (z_defl > 0.21) or source (z_src > 1.34). For the 68 lenses, the deflectors and sources have average redshifts and standard deviations of 0.58 +/- 0.14 and 1.92 +/- 0.59, respectively, and corresponding redshift ranges of 0.21 <= z_defl <= 0.89 and 0.88 <= z_src <= 3.55. The sample includes 41 deflectors at z_defl > 0.5 that are ideal for follow-up studies to track how mass density profiles evolve with redshift. Our goal with AGEL is to spectroscopically confirm ~100 strong gravitational lenses that can be observed from both hemispheres throughout the year. The AGEL survey is a resource for refining automated all-sky searches and addressing a range of questions in astrophysics and cosmology.

Introduction

Gravitational lenses are powerful cosmic magnifying glasses that we now regularly use to explore a wide range of astrophysical phenomena. Strong gravitational lensing extends our observational reach to include objects that are too faint for even the most powerful telescopes, with the added bonus of spatially resolving the internal structures of distant objects at subkiloparsec scales. By tracing the total matter distribution, gravitational lensing also illuminates the dark matter halos of foreground deflectors that span the range from single galaxies to galaxy clusters up to z = 1.62 (Franx et al. 1997; Sonnenfeld et al. 2013; Wong et al. 2014). With high-resolution observations from the Hubble Space Telescope (HST) at optical/near-IR wavelengths and the Atacama Large Millimeter/submillimeter Array at longer wavelengths, strong gravitational lensing has enabled multiple imaging of a single supernova, discovery and analysis of galaxies with the highest redshift, mapping of dark matter distributions from the subkiloparsec to megaparsec regime, and measurement of the Hubble constant (e.g., Jones et al. 2013; Kelly et al. 2015; Yuan et al. 2015; Leethochawalit et al. 2016; Oesch et al. 2016; Meneghetti et al. 2017; Suyu et al. 2017).

Identifying strong gravitational lenses has been challenging due to the required combination of high-resolution imaging, wide-area surveys, and spectroscopic confirmation (Bolton et al. 2008; Gavazzi et al. 2012; Stark et al.
2013). Lenses have complex morphologies, and flux from the foreground deflector and background source is usually blended in ground-based observations. Subarcsecond imaging is key to detecting the distinctive visual signature of gravitational arcs and rings, and spectroscopic follow-up is needed to confirm the foreground lens and background source. Bright (r ≲ 22 mag) gravitational lenses that can be followed up with adaptive optics are ideal for multiwavelength observations at high spatial or spectral resolution. However, bright lenses are rare (∼0.1 per square degree; Jacobs et al. 2019a, 2019b; Huang et al. 2020), and identifying more than a handful requires imaging hundreds of square degrees (see also SL2S; Gavazzi et al. 2012).

The Sloan Digital Sky Survey (SDSS) made possible the first generation of wide-area searches for strong gravitational lenses, but the galaxy-scale lenses studied thus far are not representative of the broader population. Most lensing candidates in SDSS were identified using fiber spectroscopy that captured light from both the deflector and source (e.g., SLACS and BELLS; Bolton et al. 2008; Brownstein et al. 2012), and thus are limited to lenses with Einstein radii (r_EIN) of ≲1.″5 due to the fiber diameter of 3″. Fiber searches miss wide single-galaxy lenses like the Cosmic Horseshoe (r_EIN = 5″, z_lens = 0.44; Belokurov et al. 2007) and group/cluster-scale lenses. SDSS-based searches also have a magnitude limit of i < 20 mag, which means that most of the confirmed galaxy-scale (foreground) deflectors are at z ≲ 0.6 (Bolton et al. 2008; Brownstein et al. 2012; Stark et al. 2013). Complementary searches targeting larger lenses (r_EIN > 3″) in SDSS such as CASSOWARY (Belokurov et al. 2009; Stark et al. 2013) and RCS (Bayliss et al. 2011) are also limited to the SDSS depth and resolution.

Here we introduce our ASTRO 3D Galaxy Evolution with Lenses (AGEL) survey to spectroscopically confirm strong gravitational lenses selected from deep optical imaging with the Dark Energy Survey (DES; Abbott et al. 2018) and Dark Energy Camera Legacy Survey (DECaLS; Dey et al. 2019) using convolutional neural networks (CNNs; Jacobs et al. 2019a, 2019b, hereafter jointly J19ab). The DECam imaging available in these public surveys reaches fainter magnitudes and has better angular resolution than SDSS, thereby enabling the AGEL survey to push to higher-redshift volumes and to detect gravitational lenses with r_EIN > 1.″5. CNN-based methods can efficiently sift through increasingly large data sets like DES to search for the distinct visual signature of gravitational lensing, a process that would be virtually impossible with the human eye alone (Metcalf et al. 2019). We build on earlier searches that used human inspection (More et al. 2016; Diehl et al. 2017), lens modeling (Chan et al. 2015), or neural networks (Jacobs et al. 2017; Petrillo et al. 2017; Huang et al. 2020) to identify high-quality gravitational lenses and to increase the number of candidates from the hundreds to the thousands.
Developing neural networks (NNs) to produce high-fidelity and high-purity catalogs for different classes of objects is important because upcoming deep, wide-field surveys such as EUCLID and LSST will discover >10^4 lensing systems (Metcalf et al. 2019). NNs are critical for sifting through millions of objects to identify a few thousand candidates that can then be further inspected, e.g., visually and with follow-up observations. In addition to AGEL, which uses a CNN-based search, Huang et al. (2020) apply a residual NN to search through 9000 deg^2 from the Dark Energy Camera Legacy Survey (DECaLS) and find 335 strong lensing candidates. However, only with spectroscopic confirmation of the lensing candidates can we verify the sample purity and characteristics to further refine automated searches.

The AGEL survey aims to confirm ∼100 bright (r ≲ 22 mag) strong gravitational lenses to enable statistically robust studies of deflectors and magnified sources. Using the J19ab catalogs of lens candidates, we obtain spectroscopic follow-up to measure redshifts for the foreground deflector and background source for lenses that can be observed using telescopes in both hemispheres throughout the year. Most of the arcs and counterimages are at projected distances of r_proj ∼ 1″-10″ and require spatially resolved spectroscopy to measure redshifts separately for the lenses and the sources.

Building a sample of spectroscopically confirmed strong gravitational lenses opens a range of new discovery space spanning galaxy- to cluster-sized dark matter halos (e.g., Newman et al. 2015; Nord et al. 2016). Confirmed deflectors at z_defl > 0.5 are especially needed to test for the predicted evolution in mass density profiles with redshift (Sonnenfeld et al. 2013). AGEL also enables the first broad characterization of galaxy populations at source redshifts of z_src ∼ 1-4 at the resolution and signal-to-noise ratio afforded by lensing.

In this paper, we present our first results from the spectroscopic follow-up of the candidate gravitational lenses identified by J19ab in the DES fields and a subsequent search of the DECaLS fields using the same method. We summarize how J19ab develop and train the CNN and describe our spectroscopic follow-up with the Keck Observatory and Very Large Telescope (VLT) in Section 2. We discuss our completeness and success rate in confirming strong gravitational lenses in Section 3. We describe the AGEL survey in the context of previous lensing searches and ongoing science analysis in Section 4, and provide our conclusions in Section 5. Unless otherwise noted, we use the AB magnitude system.

Convolutional Neural Networks

With advances in computational power and algorithms, we can now expand the boundaries of earlier searches for strong gravitational lenses by applying convolutional neural networks to deep imaging taken by DECam from the Dark Energy Survey (Abbott et al. 2018) and DECaLS (Dey et al. 2019). The coadded DES imaging reaches r = 24.1 and has higher angular resolution than SDSS due to a combination of pixel scale (0.396″ pix−1 versus 0.263″ pix−1) and seeing. CNNs can deliver samples with the highest purity of nonspectroscopic lens-finding algorithms and circumvent a limitation of earlier lens surveys such as SLACS and BELLS that were based on spectroscopic selection with the SDSS fiber (radius of 1.″5; Bolton et al. 2008; Brownstein et al.
2012). Note that the survey by the Dark Energy Spectroscopic Instrument (DESI) is even more severely limited than SDSS, i.e., the DESI fibers have core diameters of 1.″5 (Flaugher & Bebek 2014; DESI Collaboration 2016) compared to the 3″ diameter SDSS fibers.

Our sample of lens candidates captures a broader sample of galaxy-scale lenses that includes systems with Einstein radii >1.″5 (see Figure 1). Here we summarize the approach used in J19ab to select gravitational lens candidates in the DES Year 3 and DECaLS DR7 fields and refer the reader to J19ab for a complete description of the CNN method and the resulting catalog of candidates.

2.1.2. Selecting Lens Candidates Using the CNN: DES Year 3 and DECaLS DR7

The trained CNN was applied to a catalog of approximately 8 million sources from the DES with gri photometry to select gravitational lensing candidates. J19ab applied color and magnitude cuts to ensure that the sample is not biased against the combined color of the foreground deflector and background source, where the latter tends to be blue at optical wavelengths.

J19ab identified a sample of ∼1300 lens candidates by combining legacy imaging from DES Year 3 and CNNs trained on artificially generated images of lenses (gri). All the candidates had been visually inspected and ranked on a 0-3 scale where 0 is not a lens, 1 is possible, 2 is probable, and 3 is definite. J19ab visually examined candidates with lower and lower scores until the purity was only ∼1%, i.e., candidates with lower scores had a likely contamination rate of >99%. The CNNs used by J19ab delivered samples with a purity as high as 20% for the highest scoring candidates, i.e., one in five examined images was a probable or definite lens. We refer the reader to J19ab for a more detailed description of the visual validation process.

To increase sky coverage and take advantage of the DECam Legacy Survey Data Release 7 (DECaLS DR7; Dey et al. 2019), we use the same method from J19ab to identify another ∼600 lens candidates in the DECaLS fields that were observed with DECam (DR7). The DES fields (5000 deg^2) include the South Galactic Cap, while DECaLS primarily targets the SDSS equatorial sky (−15° < δ < 34°). The AGEL catalog of candidate lenses is based on DR7, which includes imaging taken with DECam, MOSAIC-3 on the Mayall telescope, and the 90Prime camera on the Bok telescope. For uniformity of the imaging, note that only the observations from DECaLS taken with DECam are used to select candidate lenses for AGEL.
The CNNs from J19ab were retrained on grz imaging from DECaLS DR7 and run on 3.1 million sources. The subset of ∼20,000 most highly scored candidates were then visually inspected by three team experts (C.J., K.G., T.C.). The DECaLS lens candidates were generated separately from the DES candidates and are not published in J19ab. Given the common data sets used to search for gravitational lenses, we note that some of our candidates are in earlier catalogs as well, e.g., Huang et al. (2020, 2021) and Stark et al. (2013).

Figure 3 shows the combined distribution of ∼1900 high-quality candidate gravitational lenses identified in the DES and DECaLS DR7 fields. The candidate gravitational lenses span a range in photometric redshift (z_phot = 0.39-0.81; see Table 2 and Figure 8) and, due to the magnitude and color cuts, are brighter than r = 22 mag (see J19ab; Figure 4).

Literature Spectroscopic Redshifts

In selecting targets for spectroscopic follow-up, we prioritized AGEL candidates with published spectroscopic redshifts for the candidate foreground deflector. However, we did not exclude any lens candidates from spectroscopic follow-up because virtually none had spectroscopic confirmation of both the deflector and source, and independent confirmation is helpful.

Of the 79 AGEL spectroscopic targets, 37 have spectroscopic redshifts published in the literature (Table 2). The literature redshifts are from existing surveys that used SDSS observations to select lens candidates, including SLACS (131; Bolton et al. 2008), BELLS (45; Brownstein et al. 2012), and CASSOWARY (29; Stark et al. 2013). These surveys combined published redshifts from SDSS and BOSS (Eisenstein et al. 2011) with additional follow-up spectroscopy; we refer the reader to their papers for further detail.

Photometry from DESI Legacy Imaging Survey DR9

In the following analysis, we use updated magnitudes provided by the DESI Legacy Imaging Survey (https://www.legacysurvey.org; Dey et al. 2019), which consolidates photometry taken by multiple telescopes to access both hemispheres. The DESI DR9 provides updated photometry for earlier lens searches including SLACS, BELLS, and CASSOWARY, enabling direct comparison to AGEL. DR9 provides total r magnitudes (AB system) measured using TRACTOR (for details, see Dey et al. 2019). Every source is modeled using one of six morphological types that is convolved with the specific point-spread function for each exposure (description available on the DR9 website).

We use the total r-band magnitude for the DR9 object that is closest in projected distance to the position of the gravitational lens. The gravitational lens is usually centered on the brighter foreground deflector, and the deflector and fainter images of the lensed source are sufficiently separated for our lens candidates such that the reported flux corresponds to the deflector (see Figures 1 and 4). Note that using the ground-based imaging to train our CNN and for visual inspection means we are best able to identify lens candidates with r_EIN ≳ 1″ (Figure 1).
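For orientation, a small binary lens/non-lens classifier of the kind described by J19ab (four convolutional layers trained on 100 × 100 pixel simulated images, with training stopped when the validation loss improves by less than 10^-4 over six epochs; see the training subsection later in this text) could be sketched as follows. The layer widths, kernel sizes, and optimizer are illustrative assumptions, not the J19ab architecture.

import tensorflow as tf
from tensorflow.keras import layers, models, callbacks

def build_lens_cnn(input_shape=(100, 100, 3)):
    """Small binary classifier (lens / non-lens) with four convolutional
    layers; an illustrative stand-in, not the J19ab network."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 11, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 7, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 5, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping mirrors the quoted criterion: stop once the validation loss
# fails to improve by more than 1e-4 over six epochs.
stop = callbacks.EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=6)
# model = build_lens_cnn()
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=100, callbacks=[stop])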
Spectroscopy

The primary goal of our spectroscopic campaign is to confirm as many gravitational lens candidates from the CNN-selected catalogs as possible. To secure spectroscopic confirmation of the candidate gravitational lenses, we use observations from the Keck Observatory and Very Large Telescope for northern and southern targets, respectively. Observations were carried out over 13.5 nights from 2018 April to 2021 March under varying conditions, including telescope closures due to the 2020 pandemic (Table 1). Targets were selected to be visible during the awarded nights at optimal airmasses and to have Einstein radius r_EIN ≳ 1″ (Figures 1 and 2). Higher priority was given to targets that (1) were near suitable guide stars for future follow-up with adaptive optics, (2) have imaging with the Hubble Space Telescope, and/or (3) had previously known spectroscopic redshifts of the candidate deflector that enable efficient confirmation of background arcs. No other criteria were used to prioritize the lens targets. Our general strategy was to target the single brightest arc and the candidate deflector.

We focus on the spectroscopic redshifts for the analysis in this paper. We note that the spectra are of sufficient quality to measure velocity dispersions for the foreground deflectors and gas kinematics in the lensed sources (G. C. Vasan et al. 2022, in preparation).

Keck Spectroscopy

We use the Echellette Spectrograph and Imager (ESI; Sheinis et al. 2002) and NIRES (Wilson et al. 2004) instruments on the Keck telescopes to obtain optical and near-infrared spectroscopy, respectively, of the candidate gravitational lenses (Table 1). ESI was used primarily to measure redshifts for the foreground deflectors (z_defl ≲ 1). Depending on the redshift of the source, a spectroscopic redshift could be obtained with ESI via interstellar medium (ISM) absorption lines or Lyα emission, or with NIRES via emission lines.

With ESI in echelle mode (slit length of 20″), we obtain spectroscopy at 3900-10900 Å with a corresponding dispersion of 0.16-0.30 Å pix−1 from order 15 to 30. We use a slit width of 1.″0, providing a resolving power of R = 4000, and a typical total exposure time on target of 20-80 minutes depending on conditions. The ESI data are reduced using the ESIRedux (2019 runs) and makee (2020 and 2021 runs; see Table 1) pipelines provided by J. X. Prochaska and T. Barlow, respectively.

We use NIRES primarily to target the background sources at higher redshifts (z_src > 1) because the sources tend to be star-forming galaxies with emission lines. Using the fixed slit width of 0.″55, the wavelength coverage is 0.9-2.45 μm with a mean spectral resolution of 2700 and a spectrometer pixel scale of 0.″15 pix−1. The NIRES slit length is 18″ and typical dither steps are ±(3-7)″. The typical total exposure time on target was 20 minutes (ABBA dither pattern), and the data were reduced using the NSX pipeline written by T. Barlow.

The redshift precisions of ESI and NIRES are comparable given the pixel scales and spectral resolutions. Spectra from both instruments can be flux-calibrated using a standard star taken during the respective observing runs. However, flux calibration is not needed for the redshift confirmations that are the focus of this paper. For the same reason, the spectra have not been corrected for telluric absorption.

Very Large Telescope Spectroscopy

We use the ESO/VLT X-Shooter instrument (Vernet et al.
2011) to obtain spectroscopy at 3000-25000 Å (Table 1). We use slit widths of 1.″0, 0.″9, and 0.″9 with corresponding spectral resolutions of 5400, 8900, and 5600 for the UVB (300-560 nm), VIS (560-1024 nm), and NIR (1024-2480 nm) arms, respectively. The typical total exposure time is 10-40 minutes on the deflectors and 40-60 minutes on the lensed sources. In some cases a single slit is placed across the deflector and lensed sources, and in other cases separate slit positions are used because of the lens geometry. The data are reduced using the REFLEX pipeline provided by ESO (Modigliani et al. 2010) and publicly available 2D-to-1D extraction code from Corentin Schreiber. The X-Shooter spectra are flux-calibrated using a standard star taken during the respective observing runs. However, flux calibration is not needed for the redshift confirmations that are the focus of this paper. For the same reason, the spectra have not been corrected for telluric absorption.

Determining Spectroscopic Redshifts

Spectroscopy is essential for determining accurate redshifts of the targeted systems, especially for gravitational lenses where blended light from multiple objects makes obtaining photometric redshifts for the sources challenging. The spectra are reduced using their respective instrument pipelines, which perform bias, dark current, cosmic ray, and sky subtraction, and flat-field corrections. The 2D spectra are fit along the slit (spatial) axis with a Gaussian profile, and the 1D spectra are extracted from the 3σ region of the fitted Gaussian.

Precise spectroscopic redshifts are determined by using a custom Python script to fit Gaussians to the emission and absorption lines in the 1D spectra. A subset of the targets (30/79) have a photometric redshift for the foreground deflector from existing public catalogs, and we use z_phot as the initial guess to determine the spectroscopic redshift. Note that, like all ground-based spectroscopic surveys, we are incomplete at certain redshifts due to spectral features falling in optical/NIR bands of atmospheric absorption.

None of the background sources have photometric redshifts because the images of the lensed sources are faint and frequently blended, e.g., with the foreground galaxies. As we discuss in Section 3.3, the photometric redshifts for the foreground deflectors are remarkably reliable despite potentially blended photometry. However, follow-up spectroscopy of both the candidate deflector and source is essential to confirm whether the gravitational lens is real (see Section 3.1).

Depending on the redshift of the object and the wavelength coverage (optical versus NIR), we use different spectral features in the 1D spectra to measure redshifts (Figure 5). Source redshifts measured with NIRES and X-Shooter are almost exclusively determined using rest-frame optical emission lines. Source redshifts from ESI are mostly from interstellar absorption lines, except for sources at z_src < 1.7, where the redshifts are mostly from [O II]. Note that redshifts from ISM absorption lines are not systemic and are typically blueshifted by ∼200 km s−1 due to, e.g., large-scale outflows (Rakic et al. 2011).
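As a simplified illustration of this line-fitting step, the sketch below fits a single Gaussian to one spectral line near its expected observed wavelength and converts the fitted centroid to a redshift; the actual custom script, line lists, and weighting used for AGEL are not reproduced here, and the window size and initial guesses are assumptions.

import numpy as np
from scipy.optimize import curve_fit

def gaussian(wave, amp, center, sigma, cont):
    return cont + amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def line_redshift(wave, flux, rest_wave, z_guess, window=30.0):
    """Fit a Gaussian to one emission line near its expected observed
    wavelength and return the implied redshift (a simplified sketch)."""
    expected = rest_wave * (1.0 + z_guess)
    sel = (wave > expected - window) & (wave < expected + window)
    p0 = [flux[sel].max() - np.median(flux[sel]), expected, 3.0,
          np.median(flux[sel])]
    popt, _ = curve_fit(gaussian, wave[sel], flux[sel], p0=p0)
    return popt[1] / rest_wave - 1.0

# Several lines can be fit and the median taken, as done for Q_z = 3 redshifts:
# z = np.median([line_redshift(w, f, lam0, z_phot)
#                for lam0 in (3727.0, 4861.3, 5006.8)])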
For the foreground deflectors (z_defl < 1), spectral features measured by ESI are mostly absorption lines including Hβ, Hγ, the H & K calcium lines, and Mgb. The spectral features are easily identified by visual inspection to obtain an initial estimate for determining a spectroscopic redshift. For the background sources (z_src > 1), spectral features measured with ESI, NIRES, or X-Shooter are usually emission lines including [O II] λ3727, Hβ, Hα, and [O III] λ5007. For higher-redshift sources (z_src ≳ 2), we sometimes obtain UV absorption lines including C IV λ1550, Fe II λ1608, and Al II λ1670 with ESI.

Using multiple spectral features (see Figure 5) results in low redshift uncertainties of <0.00005. The spectral lines have a minimum signal-to-noise ratio (S/N) of 3, and we visually inspected all of the Gaussian fits. The spectral templates provided the initial redshift for fitting, and the spectroscopic redshift is the median redshift measured using multiple lines. For the sources, the spectral lines are boosted by the gravitational lensing.

The 106 objects targeted for spectroscopic follow-up are listed in Table 2. As described earlier in Section 2.4, candidate deflectors and sources are selected based on the instrument (optical and/or NIR coverage) with the goal of obtaining spectroscopic redshifts for the deflector and at least one lensed image of the source. To quantify the robustness of each spectroscopic redshift, we assign a redshift quality flag Q_z by inspecting and comparing the 1D and, where available, 2D spectra. Following Tran et al. (2015), a quality flag of:

1. Q_z = 3 denotes a robust measurement (multiple spectral lines); includes a resolved [O II] doublet with S/N ≥ 3.
2. Q_z = 2 is likely (single spectral line with a potential secondary line).
3. Q_z = 1 is a guess (single line and/or no strong spectral features).

The spectra shown in Figure 5 all have Q_z = 3. For comparison, spectra with Q_z < 3 are shown in Figure 6. We measure 104 redshifts, but in our analysis we consider only the 95 redshifts with Q_z ≥ 2 (Figure 7).

Spectroscopic Success Rate with CNN-based Search

During the observing runs listed in Table 1, we targeted 106 objects in 79 candidate gravitational lenses for spectroscopy and measure 104 redshifts. We are unable to measure a redshift for two of the targets due to a lack of spectral features (Table 3); in some cases, the spectral features may fall in windows of atmospheric absorption. We define the spectroscopic success rate as the ratio of 104 redshifts to 106 targets, which is 98% (Table 3).

Of the 79 candidate gravitational lenses that we targeted, we obtain redshifts in 77 systems. Our spectroscopy confirms that one object is a (red) Milky Way M star and three are galaxies at z_spec < 0.5 (Table 3). The three galaxies are a rotating ring galaxy (AGEL 215041+140248), a rotating ring galaxy where the "arc" is part of the ring (AGEL 211515+101153), and a system where the "arc" and "deflector" are at the same redshift (AGEL 224400+124540). Removing these four systems from our analysis leaves 73 gravitational lenses where we secure redshifts for either the foreground deflector, the background source, or both (Table 3). We then apply a redshift quality requirement of Q_z ≥ 2 that removes five candidate lenses.
In the following analysis, we use only the 68 strong lensing systems that satisfy these criteria: (1) not spectroscopically confirmed to be a star or a multicomponent galaxy at z_spec < 0.5; (2) spectroscopic redshifts for the foreground deflector and/or background source; (3) spectroscopic redshifts with Q_z ≥ 2; and (4) if both z_defl and z_src are measured, z_src > z_defl. Of the 68 strong lenses, 53 have z_defl and z_src from a combination of our spectroscopic follow-up and published values in the literature, and 15 have either z_defl or z_src (Tables 2 and 3; Figures 1 and 2).

For the seven systems where we have spectroscopic redshifts for deflectors from our own observations as well as values from the literature, we use our redshifts. The spectroscopic redshifts are consistent: the median absolute difference for these seven deflectors is 0.001 with a semi-interquartile range of 0.028. The two largest outliers are at z_spec ∼ 0.7 (see Table 2).

Our results establish that CNN-based search methods are highly effective at identifying strong gravitational lenses in imaging and strongly support using CNNs in future surveys by LSST and EUCLID. We confirm a high success rate of 88% for the CNN-selected candidates by taking the ratio of the 68 strong lenses to the total number of 77 systems with measured redshifts (Table 3). The 88% is likely a conservative lower limit: if we exclude only the four non-lenses and relax the spectroscopic quality flag to use the remaining 73, the confirmation rate of CNN-selected lens candidates is 95%.

Spectroscopic Redshifts of Gravitational Lenses

Of the 68 gravitational lenses with secure redshifts (Q_z ≥ 2), 53 have spectroscopic redshifts for both the foreground deflector and background source (Table 2). The spectroscopic redshifts for 25 of the deflectors are from our spectroscopy, and 28 are published redshifts from surveys including BOSS (Eisenstein et al. 2011) and CASSOWARY (Stark et al. 2013). For the 53 confirmed gravitational lenses, the average redshifts and standard deviations for the deflectors and sources are 0.55 ± 0.15 and 1.91 ± 0.47, respectively (Table 4).

We include 15 systems with a spectroscopic redshift for either the candidate foreground deflector or the background source (but not both; see Table 2), and we are continuing our spectroscopic follow-up of these 15 systems. We are confident that the 15 strong lenses are real given our statistics, existing high-resolution Hubble Space Telescope imaging for a subset, and their spectroscopic redshift distributions. We have obtained redshifts for eight deflectors (0.21 ≤ z_defl ≤ 0.79) in the 15 systems, and seven are at z_defl ≳ 0.5. For seven of the 15 systems, we have redshifts for the sources, confirming they are at z_spec = 1.336-3.388. For comparison, the four systems that we confirm to not be lenses are all at z_spec < 0.5.

Our results confirm that existing imaging surveys are able to detect strongly lensed sources at z_src ≳ 2. Included are 41 deflectors at z_defl > 0.5 that are especially useful for measuring how mass density profiles evolve with redshift (see Figure 7). Considering systems where we have secured a redshift for the deflector or source (Table 2), we have 35 deflectors and 60 sources with average redshifts of 0.58 ± 0.14 and 1.92 ± 0.59, respectively (Figure 7).
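The quoted rates follow directly from the tallies given in the text and in Table 3; the arithmetic can be checked in a few lines (the variable names are ours).

targets, redshifts = 106, 104
with_redshift = 77            # candidate systems with at least one measured redshift
confirmed_lenses, non_lenses = 68, 4

spectroscopic_success = redshifts / targets                       # 104/106 ~ 98%
cnn_success = confirmed_lenses / with_redshift                    # 68/77  ~ 88%
relaxed_success = (with_redshift - non_lenses) / with_redshift    # 73/77  ~ 95%
print(f"{spectroscopic_success:.0%} {cnn_success:.0%} {relaxed_success:.0%}")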
Precision of Photometric Redshifts

We have spectroscopic redshifts for 23 systems with photometric redshifts for the foreground deflectors determined using DES and DECaLS photometry. For the DES lens candidates, J19ab estimated z_phot for the deflectors using the BPZ code (Benítez 2000) with the 3″ aperture gri photometry (Abbott et al. 2018). For DECaLS systems, we used the BPZ code on the model grz photometry published in the DECaLS catalogs (Dey et al. 2019). Due to the spatial resolution of the ground-based DES and DECaLS imaging, our lens candidates tend to be "wide-angle" systems (r_EIN ≳ 1″; Figure 1). We find the photometric redshifts are remarkably consistent with the spectroscopic values (see Figure 8). There is no systematic offset in deflector redshift for lenses with spectroscopic redshifts from BOSS compared to those without.

Our results indicate that potentially blended light from the background source has minimal impact on determining a photometric redshift for the foreground deflector for our sample of lenses (r_EIN ≳ 1″). Because the images of the higher-redshift sources are fainter and can be blended with the foreground deflector, photometric redshifts are not available for the sources. Thus only with follow-up spectroscopy can we confirm whether a system is a true gravitational lens by obtaining spectroscopic redshifts for both the foreground deflector and the higher-redshift source.

Comparison to Previous Lensing Searches

AGEL has now confirmed more strong gravitational lenses than any single previous survey except for SLACS (Table 4). AGEL's key advantages for pushing to higher redshifts than previous searches are the deeper and higher-resolution imaging from DECam, and spectroscopy spanning optical to near-infrared wavelengths. Notably, the 68 AGEL systems have a higher average deflector redshift (0.58 ± 0.14) than many previous surveys including SLACS, BELLS, CASSOWARY, and SL2S (Figures 7 and 9; Table 4). The average spectroscopic redshifts for the foreground deflectors in the fiber searches by SLACS and BELLS are ⟨z_defl⟩ = 0.18 and 0.50, respectively (Bolton et al. 2008; Brownstein et al. 2012). For CASSOWARY and SL2S, which both used imaging to identify gravitational lenses (Sonnenfeld et al. 2013; Stark et al. 2013), the average deflector redshifts of ⟨z_spec⟩ = 0.42 and 0.49 are also lower than in AGEL (see Table 4).

With our combination of optical and near-infrared spectroscopy, we also confirm background sources spanning a higher redshift range than existing surveys. SLACS, BELLS, and CASSOWARY confirmed sources with average redshifts up to z_src = 1.2, 1.5, and 1.76, respectively. For comparison, the average redshift for the AGEL sources is ⟨z_src⟩ = 1.91 ± 0.47, with confirmed sources up to z_src = 3.549 (see Figures 7 and 9; Table 4). (Table 2 notes: typical uncertainty in the spectroscopic redshifts for the deflectors and the sources is δ(z) < 0.00005; spectroscopic redshifts in brackets denote systems that are not lenses; the redshift quality flag Q_z takes values of 3, 2, 1 corresponding to robust, probable, guess, and our analysis focuses on redshifts with Q_z ≥ 2.) The lensed sources in AGEL are identified by imaging and complement searches for z_src ≳ 2 galaxies based on fiber spectroscopy (e.g., Shu et al. 2016).
The AGEL survey is a useful resource for recent and ongoing searches that identify thousands of gravitational lens candidates and confirm a subset using spectroscopy. Imaging with the Hyper Suprime-Cam on Subaru has provided an especially rich data set, with the SuGOHI team publishing a series of papers identifying a total of ∼100 confirmed gravitational lenses and ∼1500 possible/probable lenses (Sonnenfeld et al. 2018, 2020; Wong et al. 2018; Jaelani et al. 2020). With the SILO survey, Talbot et al. (2021) identify ∼1500 lensing candidates that have BOSS redshifts for the candidate deflectors. With spectroscopic redshifts for deflectors and sources that span the range in redshift, the AGEL survey can be used to estimate contamination in these complementary searches.

Because the J19ab lensing candidates are not limited by fiber diameter and the Einstein radius is proportional to the halo velocity dispersion for an isothermal sphere, we capture a wide range of halo masses including galaxy groups and clusters (see also Huang et al. 2020). Among the confirmed lenses we have seven systems with r_EIN ∼ 2″-8″ at z_defl = 0.36-0.78 (see Figure 1). For comparison, Newman et al. (2015) study 10 strong lensing galaxy groups with r_EIN = 2.″5-5.″1 at z_defl = 0.21-0.45. Thus our sample extends studies of galaxy groups identified directly by their halo masses (M_200 ∼ 10^14 M_⊙) to higher redshifts for comparison to, e.g., cosmological simulations (McCarthy et al. 2017).

Future Science with AGEL Systems

With the AGEL survey, we will provide a rich legacy data set of ∼100 strong gravitational lensing systems that can be observed with telescopes in both hemispheres and throughout the year. Such a sample of high-magnification lens systems, like the 68 confirmed in this analysis, is ideal for a number of scientific investigations. The data already in hand are being used to study the foreground deflectors and background sources. The ground-based spectroscopy used to confirm the gravitational lenses provides emission-line diagnostics of magnified sources at a key epoch in galaxy formation (1 < z < 3; Madau & Dickinson 2014). The width and shape of the spectral lines trace the source kinematics to distinguish rotation- versus dispersion-dominated systems (Leethochawalit et al. 2016; Yuan et al. 2017; Girard et al. 2018; Newman et al. 2018) and to search for galactic winds (Jones et al. 2018; Vasan et al. 2022, in preparation). Line ratios such as [N II]/Hα and [O III]/Hβ measure gas-phase metallicities and ionization conditions as well as star formation rates and dust content at z ≳ 2 (e.g., Jones et al. 2012; Sanders et al. 2015, 2016; Tran et al. 2015; Alcorn et al. 2019; Kewley et al. 2019; Harshan et al.
2020). Gravitational lensing by single galaxies, especially at z > 0.5, is particularly effective at testing galaxy formation models. Current cosmological simulations predict that the slope of the mass density profile (γ′) is essentially flat at 0 < z ≲ 0.5 and steepens at z > 0.5, but observations of gravitational lenses suggest the opposite is true (Sonnenfeld et al. 2013; Dye et al. 2014). However, most galaxy-scale measurements are at z < 0.5, which means the 41 confirmed lenses with deflectors at z_defl ≳ 0.5 to date in AGEL provide a key test by increasing the number of systems at z_defl ≳ 0.5 (Sonnenfeld et al. 2013).

Increasing the number of confirmed gravitational lenses also enables an exciting range of discovery space, such as compound lenses for measuring the Hubble constant and time-variable phenomena for repeated observations via time delays (Suyu et al. 2013, 2017; Kelly et al. 2015). The arcs provide multiple sightlines to tomographically probe the circumgalactic medium of the intervening galaxies (Lopez et al. 2018; Mortensen et al. 2021), including the foreground deflectors. The ∼100 pc-scale measurements that are possible with diffraction-limited observations of lensed sources are particularly relevant for the current and next generation of adaptive optics instruments (Wizinowich et al. 2020) as well as for the James Webb Space Telescope, for extending galaxy scaling relations to even lower masses at z_src ≳ 2.

High-resolution Imaging with the Hubble Space Telescope

High-resolution imaging is critical for constructing lens models that precisely map the matter distribution of the foreground deflectors. To measure the matter density profiles of the gravitational lenses, we are acquiring high-resolution imaging with the Hubble Space Telescope (#16773; Cycle 29; led by K. Glazebrook) that builds on the existing HST imaging from SNAP program #15867 (Cycle 27; led by X. Huang). By combining the HST imaging with the spectroscopic redshifts measured by AGEL, we will map dark matter substructure and lensed source morphology.

Figure 6. Example of spectroscopic redshifts with redshift quality flag Q_z < 3. From top to bottom: probable Hα emission for a source with Q_z = 1 (single line); probable C IV absorption with Q_z = 2.5 (also weak C III]); probable calcium absorption with Q_z = 1 (lines not centered); probable calcium absorption with Q_z = 2. In our analysis, we use spectroscopic redshifts with Q_z ≥ 2.

For the HST #16773 observations scheduled through 2023, we target lens candidates at decl. ≲ +25° to enable follow-up observations by both northern and southern telescopes, and candidates that are distributed in R.A. to allow access throughout the year (Figure 3). Targets with existing or scheduled (through 2022) spectroscopic observations for sources and lenses are promoted to higher priority. We also prioritized lenses that have existing imaging with the PISCO instrument on Magellan. With the HST observations from the approved programs, we expect to have upwards of 50 gravitational lenses with HST imaging and spectroscopic redshifts for both deflectors and sources by the end of 2023.

Conclusions

We introduce the ASTRO 3D Galaxy Evolution with Lenses (AGEL) survey by presenting spectroscopically confirmed strong gravitational lenses in the DES and DECaLS fields that are brighter than r = 22 mag. In this paper, we report on 79 candidate gravitational lenses selected from a magnitude-limited catalog that were identified in imaging taken with DECam (Figures 1, 3, 4, 5, 6; Jacobs et al.
2019a, 2019b). The combination of deep, high-quality imaging and a search method using convolutional neural networks with human inspection is highly effective at identifying strong lensing systems within the large cosmic volume surveyed by DECam.

We targeted 106 objects for optical-NIR spectroscopy and obtained redshifts for 104 (a spectroscopic success rate of 98%). Combining our observations with spectroscopic redshifts published in the literature, we have redshifts for 77 candidate lensing systems (Table 3). For 53 lenses, we secure spectroscopic redshifts for both the deflector and source where z_src > z_defl. For 15 lenses, we additionally have eight with z_defl = 0.21-0.79 and seven with z_src = 1.34-3.39. Of the remaining nine systems, we identify four as non-lenses while five have inconclusive redshift quality. We define the success rate of the CNN-selected candidate lenses as the ratio 68/77, which is 88%.

The AGEL survey pushes to higher redshifts than previous lensing surveys, with deflectors reaching z_defl ∼ 0.9 and sources spanning a broad redshift range (Figures 5, 7, 9). For the 68 confirmed AGEL systems, the redshift ranges for the foreground deflectors and background sources are z_defl = 0.21-0.89 and z_src = 0.88-3.55, and the average redshifts are ⟨z_defl⟩ = 0.58 ± 0.14 and ⟨z_src⟩ = 1.92 ± 0.59. There are 41 strong lenses with deflectors at z_defl ≳ 0.5. The resulting sample is well suited for addressing a range of questions in astrophysics and cosmology, such as the current uncertainty over whether mass density profiles evolve with redshift.

The AGEL survey provides a useful training set to further refine automated all-sky searches for strong gravitational lenses, especially given the high purity of the CNN-selected sample. For the subset of 23 confirmed lenses with photometric redshifts from existing surveys (Figure 8), the photometric redshifts are remarkably consistent with the spectroscopic redshift of the deflector: the average absolute difference is 0.03 ± 0.02. However, spectroscopy of the candidate deflectors and sources remains critical to confirming whether the system is a strong gravitational lens.

Our goal is to spectroscopically confirm a statistically robust sample of ∼100 strong gravitational lenses that can be observed with adaptive optics using telescopes in both hemispheres throughout the year.

Figure 7. Distribution of our spectroscopic redshifts measured from follow-up with Keck and the VLT; here we show only our measured z_spec and exclude the literature redshifts. The deflectors (orange) and sources (blue) have higher average redshifts (solid arrows) relative to SLACS, BELLS, and CASSOWARY (dashed arrows; see Table 4; Bolton et al. 2008; Brownstein et al. 2012; Stark et al. 2013). The SL2S survey based on CFHT imaging (Sonnenfeld et al.
2013) has a higher average source redshift than AGEL, but the spectroscopic ranges for both the deflectors and sources are marginally lower (see Table 4). By combining our spectroscopic redshifts with literature redshifts, we secure redshifts for both z_defl and z_src for 53 gravitational lenses, and either z_defl or z_src for 15 lenses.

The optical/NIR spectroscopy combined with existing multiwavelength observations in the DES and DECaLS fields already enables a wide range of studies, such as measuring the total matter profiles of the foreground deflectors, using multiple sightlines to probe the circumgalactic medium, and searching for galactic-scale winds in the background sources. In order to more accurately model the lens mass distribution, spatially resolve subkiloparsec structure in the sources, and search for dark matter substructure in the deflectors and along the line of sight, we are also acquiring high-resolution imaging with the Hubble Space Telescope (#GO-16773) for a subset of AGEL systems.

We thank the referee for a detailed and constructive report. Data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Data include observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere under ESO program 0101.A-0577.

Facilities: W. M. Keck Observatory, European Southern Observatory, Cerro Tololo Inter-American Observatory.

2.1.1. Training the CNN

Training a CNN to separate lenses and non-lenses requires labeled examples. J19ab used the LensPop code described in Collett (2015) to generate training sets of up to 250,000 images split equally between positive and negative examples; images are each 100 × 100 pixels. J19ab trained a CNN with four convolutional layers with different kernel sizes. With each iteration, the loss and accuracy are measured and used to update the weights of the network. Training continued until the validation loss did not improve by more than 10^-4 over six epochs, where a single epoch constitutes one run over the entire training set.

Figure 1. Imaging taken by DECam (26″ × 26″) from DES and DECaLS for the gravitational lenses with spectroscopy from our ongoing follow-up campaign combined with published redshifts from the literature (see Section 2.2). Each system has a spectroscopic redshift (Q_z = 1, 2, or 3) for either the foreground deflector, background source, or both; the lenses are ordered by increasing deflector redshift with the lens candidates that have only source redshifts at the bottom. Considering only redshifts with Q_z ≥ 2, we present 53 confirmed lenses with both z_defl and z_src, and 15 lenses with either z_defl or z_src (see Table 3 for the AGEL redshift tally). Many systems have Einstein radii larger than the SDSS fiber (3″ diameter) and several are likely in clusters/groups.
Figure 2. The slit positions of our spectroscopic observations overlaid on the DECam imaging (26″ × 26″), where the observed lens candidates are in the same order as in Figure 1 (north up, east to the left). The bottom row of seven systems are confirmed either to not be a lens or to have Q_z ≤ 1 (Table 2). We select spectroscopic targets from the CNN-selected catalogs described in Section 2.1, where systems with spectroscopic redshifts from the literature are prioritized (see Section 2.2). Considering only spectroscopic redshifts with Q_z ≥ 2, the AGEL redshift tally (Table 3) includes 53 confirmed lenses with both z_defl and z_src, and 15 lenses with either z_defl or z_src (Table 2).

Figure 3. Spatial distribution of candidate gravitational lenses in the DES/DECaLS fields (gray circles) and the 77 spectroscopic redshifts from our AGEL survey (pink stars; Table 3), where the secured redshift is of the deflector (foreground) and/or the source (background). The confirmed gravitational lenses span a range in R.A., and most are at declinations near the equator and can be observed by telescopes in both hemispheres; the plane of the Milky Way is shown as the green curve. Several of the confirmed strong lenses are targeted in the HST SNAP program #15867 (open black circles), which provides the high-angular-resolution imaging needed to model the gravitational lenses; additional HST imaging of AGEL systems is ongoing in Cycle 29 (#16773).

Figure 4. Most of the AGEL systems targeted for spectroscopic follow-up are brighter than r = 21 mag (foreground deflector; see Figure 1). The total r-band magnitudes (AB system) are from the DESI Legacy Survey Data Release 9 and determined using TRACTOR to model the photometry (Dey et al. 2019). For AGEL, we focus mainly on candidate lenses from the DES and DECaLS fields but also include candidate lenses from existing surveys such as CASSOWARY (Stark et al. 2013).

Figure 5. By combining optical and near-infrared spectroscopy, we confirm candidate lenses by measuring redshifts for the foreground deflectors and/or background sources. Here are six examples of confirmed gravitational lenses with spectroscopic redshifts for the deflectors (left) and higher-redshift sources (middle); all have a redshift quality flag of Q_z = 3. The high signal-to-noise ratio (black/red) spectra show strong absorption features for the deflector and emission lines for the source. The RGB images (26″ × 26″; right) are generated from multiband optical imaging from DECam.

Figure 8. We compare spectroscopic redshifts for the deflectors to photometric redshifts for 23 systems and find that z_phot and z_spec are remarkably consistent. The average photometric redshift is ⟨z_phot⟩ = 0.63 ± 0.10.

The authors acknowledge support by the Australian Research Council Centre of Excellence for All-Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. S.L. is funded by FONDECYT grant number 1191232. T.J. and K.V.G.C. gratefully acknowledge funding support for this work from the Gordon and Betty Moore Foundation through Grant GBMF8549, and from a Dean's Faculty Fellowship. T.J. acknowledges support from the National Science Foundation through grant AST-2108515. T.E.C.
is funded by a Royal Society University Research Fellowship and by the European Research Council under the European Union's Horizon 2020 research and innovation program (LensEra: grant agreement No. 945536).

Table 1. AGEL Spectroscopic Observing Runs (2018 April-2021 March). Note: Whether sources or deflectors were targeted depended on the instrument: the key spectral features of deflectors (z_defl < 1) are captured with optical wavelength coverage by Keck/ESI, while the higher-redshift sources (z_src > 1) are better matched to the near-IR (NIR) wavelength coverage of Keck/NIRES. Both deflectors and sources can be confirmed with VLT/X-Shooter, which has continuous optical-NIR coverage.

Table 4. Comparison to Existing Gravitational Lensing Surveys with Spectroscopic Redshifts. Note: We provide statistics for all 68 strong lenses with a secure redshift for either the deflector or source, and for the subset of 53 with secure redshifts for both the deflector and source.
10,925.6
2022-05-11T00:00:00.000
[ "Physics" ]
Employing graphene acoustoelectric switch by dual surface acoustic wave transducers

We implement a logic switch by using a graphene acoustoelectric transducer at room temperature. We operate two pairs of inter-digital transducers (IDTs) to launch surface acoustic waves (SAWs) on a LiNbO3 substrate and utilize graphene as a channel material to sustain the acoustoelectric current I_ae induced by the SAWs. By cooperatively tuning the input power on the IDTs, we can manipulate the propagation direction of I_ae such that the measured I_ae can be deliberately controlled to be positive, negative, or even zero. We define the zero-crossing I_ae as I_ae^off, and then demonstrate that I_ae can be switched with a ratio I_ae^on / I_ae^off ~ 10^4 at a rate of up to a few tens of kHz. Our device, with an accessible operation scheme, provides a means to convert incoming acoustic waves modulated by a digitized data sequence into electric signals with a frequency band suitable for digital audio modulation. Consequently, it could potentially open a route for developing graphene-based logic devices in large-scale integration electronics.

Graphene, a two-dimensional (2D) sheet of carbon atoms arranged in a honeycomb lattice, exhibits various unique properties beneficial for post-silicon electronics 1,2. Recent developments in graphene field-effect transistors (GFETs) suggest that graphene holds great promise in radio frequency (RF) applications 3-5. For digital electronics, adopting new materials as a successor to Si requires excellent switching capabilities with a low off-state dissipation power and a high on/off current ratio 6,7. Nevertheless, graphene faces a serious hurdle for applications in logic circuits 2, because pristine graphene does not possess an energy bandgap 1. As a result, a GFET cannot be turned off efficiently, leading to a low on/off current ratio, typically less than 10^2. Subsequently, research efforts have been geared toward two different directions: engineering the graphene material to open a bandgap 8,9, or exploiting layered 2D semiconductors with a naturally occurring bandgap, e.g., transition metal dichalcogenides (TMDs) and black phosphorus (BP) 7.

In this work, we report a different approach to implementing graphene for logic devices by utilizing acoustoelectric effects. Here graphene is used as a channel material to convert a surface acoustic wave (SAW) into an acoustoelectric current I_ae. We will show that I_ae induced by dual SAWs can be modulated by discretizing RF signals. In this regard, a graphene acoustoelectric transducer (GAET) can function as a logic switch. The switching performance is demonstrated by the successful generation and detection of digital text carried by I_ae with a switching rate of up to a few tens of kHz.
A surface acoustic wave is an acoustic wave traveling along the surface of a piezoelectric material, with its displacement amplitude decaying exponentially into the material so that it is roughly confined within one wavelength beneath the surface 10 . A SAW can be induced by distributed comb-like metallic structures, such as interdigital transducers (IDTs), deposited on the surface of the piezoelectric substrate. Through the piezoelectric effect, the RF input signal at the transmitting IDT stimulates the SAW. In a typical SAW device, a second IDT is employed, serving as a signal processing unit and a transducer, to convert the acoustic waves back into an RF signal. SAW devices are now widely used in various RF signal processing techniques for telecommunications and sensors 11,12 . The propagation of a SAW is sensitively influenced by local changes in the host medium, which cause variations in the SAW velocity v s and the SAW attenuation factor Γ. For example, SAWs can interact with a two-dimensional electron gas (2DEG) placed nearby, and the corresponding changes in both v s and Γ have been used to probe the distinct electronic states of the 2DEG 13-16 . In addition, the interaction between the SAW and the charge carriers of the 2DEG can also induce a macroscopic direct current, the acoustoelectric current I ae , which is known as the acoustoelectric effect. The acoustoelectric properties of graphene have been extensively studied 17-25 . Owing to the linear energy dispersion and gapless nature of graphene, electrons in graphene can absorb sound waves over a wide frequency range 26 , and in theory Γ is strikingly diminished as the Fermi level E F is tuned across the charge neutral point (CNP) 17,20 . However, graphene does not possess piezoelectricity because of its centrosymmetric lattice structure, unlike GaAs 2DEGs. The major obstacles in studying and utilizing the acoustoelectric effects of graphene lie in how to generate the SAW and maintain its propagation under the control of E F . Early experiments revealed that acoustoelectric effects in graphene can be excited by placing graphene either on, or in close contact with, a substrate with high piezoelectricity, e.g. a LiNbO 3 substrate 18 . By incorporating an ionic liquid gate and IDTs, E F of graphene can be tuned across the CNP, and I ae exhibits an ambipolar effect: the sign of I ae is reversed as the charge carriers change from n- to p-type 22,23 . Furthermore, Γ of graphene is extremely weak, approximately ~0.4 to 6.8 m −1 depending on the carrier density n s , which is three orders of magnitude smaller than that of GaAs 2DEG systems 23 . These fascinating properties make graphene an ideal material for various acoustoelectric devices, ranging from acoustic tweezers and branch switches to flip-chip devices 22,27-29 . A theoretical model describing acoustodynamic effects in semiconductors was developed by G. Weinreich 30 . The acoustic current in a closed-circuit measurement (or voltage in an open-circuit measurement) is induced by a loss of wave energy associated with a proportional loss of SAW momentum, which is analogous to a force acting on the absorber (the charge carriers of graphene in this study). For a 2D system, we can assume that the acoustic current density j ae is proportional to Γ, with a coefficient Λ, and flows along the direction of SAW propagation 15,16,31 : j_ae^{x(y)} = Λ Γ I_{x(y)} (1), where I_{x(y)} is the intensity of the SAW propagating along the x(y)-direction.
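As a rough numerical illustration of Eq. (1), the short Python sketch below evaluates j_ae = ΛΓI for assumed, illustrative parameter values (the carrier density, conductivity, attenuation factor, and SAW intensity used here are placeholders, not the measured device parameters; the expression Λ = μ/v_s = σ/(n_s e v_s) is the one discussed in the next paragraph):

# Minimal numeric sketch of Eq. (1): j_ae = Lambda * Gamma * I_saw,
# with Lambda = mu / v_s and mu = sigma / (n_s * e).
# All parameter values are illustrative assumptions, not measured values.
e = 1.602e-19      # elementary charge (C)
v_s = 3795.0       # SAW velocity on LiNbO3 (m/s), quoted in the text
n_s = 1.0e16       # assumed sheet carrier density (1/m^2)
sigma = 4.0e-4     # assumed graphene sheet conductivity (S per square)
gamma = 1.0        # assumed SAW attenuation factor (1/m), within the quoted 0.4-6.8 1/m range
i_saw = 1.0e-3     # assumed SAW intensity per unit width (W/m)

mu = sigma / (n_s * e)        # carrier mobility (m^2/Vs)
lam = mu / v_s                # coefficient Lambda
j_ae = lam * gamma * i_saw    # acoustoelectric sheet current density (A/m)
print(f"mu = {mu:.3e} m^2/Vs, Lambda = {lam:.3e}, j_ae = {j_ae:.3e} A/m")
# The sign of j_ae follows the carrier type and reverses between the hole-
# and electron-rich regimes (the ambipolar effect described below).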
It is known that Λ can be expressed as Λ = μ/v s = σ/(n s e v s ) 16 , where σ is the DC conductivity of graphene, μ the carrier mobility, n s the carrier density, and e the elementary charge. We assume that both Γ and Λ are spatially uniform because graphene is an isotropic material. Note that one may need to treat Γ and Λ in tensor form when the SAW propagates on an anisotropic substrate or the carriers are in the presence of an external magnetic field 15,16,31 . Because of the ambipolar effect of graphene, the current density j ae through graphene in the electron- and hole-rich regimes flows in opposite directions and vanishes in the charge-neutral region due to cancellation 22,23 . Consequently, one can define a true zero-current state, or an "off" state, at the CNP, although the channel is not completely closed. On the other hand, a fair on/off ratio of ~20 has been reported by defining the on/off states away from the CNP in our earlier study 23 . In principle, if the off-state is set exactly at the CNP, one can get a much higher ratio (>10^7). There are competitive advantages to utilizing the GAET for logic devices. For commercial SAW filters used in the RF front-end, the device requires sufficiently high power durability. In general the SAW device can withstand power levels ≥30 dBm, which is high enough to generate I ae with a decent S/N ratio. Moreover, no quiescent power source is needed, because the GAET is activated by the energy of the RF input signals received by the IDT transceiver. Nevertheless, if one operated the GAET like a GFET, with the RF signal sent through the gate electrode 3,4 , the modulation speed of the GAET would be too slow for practical applications 23 . This is mainly because an ionic liquid is adopted for the gate electrode in the present GAET design 22,23 . Note that a gate electrode made of conducting materials would severely damp the propagation of SAWs. This is the key bottleneck for the GAET to be used in logic devices. Design Concept and Device Details Our design concept is illustrated in Fig. 1(b). Two IDTs, denoted as IDT1 and IDT2, are employed on a LiNbO 3 piezoelectric substrate in a nearly orthogonal arrangement. Each IDT comprises two sets of interleaved fingers, and the acoustic current densities induced by IDT1 and IDT2 are denoted j IDT1 and j IDT2 , respectively. Two current sensing leads are placed along the positive x-direction (cf. Fig. 1), and the measured acoustic current I ae is determined by the vector sum of +j_IDT1^x and −j_IDT2^x . The negative x-component of j IDT2 can be induced by deliberately adjusting the orientation of the IDTs or simply by imperfections of the device. Therefore, we can manipulate the flow of I ae by controlling the RF power applied separately to IDT1 and IDT2. As a result, the measured I ae can be positive, negative, or even zero. Our approach can be viewed as an application of an acoustic-based active mixing technique, which has been widely used in studies of microfluidic channels, with I ae playing the role of the acoustic streaming of the sample liquid 32 . In analogy with the operation of a conventional field-effect transistor (FET), IDT1 functions as the source contact injecting the channel current, and IDT2 serves as a gate electrode to turn the device on and off. We will demonstrate below that, by digitizing the RF signal applied to IDT2 or IDT1, the GAET can perform as a logic switch. Figure 2(a) shows a schematic diagram of the investigated GAET.
The device consists of two pairs of IDTs, denoted IDT1 to IDT4, on a LiNbO 3 substrate, graphene, four electrodes on graphene labeled leads 1 to 4, and a micro-bead of an ion-gel coated on the graphene 33 , which serves as a polymer-electrolyte gate electrode for applying the gate voltage V g . The two sets of opposite IDTs, IDT1-IDT3 and IDT2-IDT4, are separated by a distance L T = 1.4 mm and backed by metallic strips to damp reflected waves. In this study, we only operate IDT1 and IDT2, and use their counterparts IDT3 and IDT4 as passive receivers for checking the SAW properties. Each IDT comprises two sets of interleaved fingers with N IDT = 25 finger pairs made of 5 μm wide electrodes with 8/70 nm of Cr/Au. The acoustic aperture W T ~ 600 μm, the overlap between electrodes, is aligned between two opposite IDTs along the [011] direction of the z-cut single-crystal LiNbO 3 substrate. Optical micrographs of the device can be found in Fig. 2(d,e). The SAW wavelength λ SAW , determined by the pitch of the IDT electrodes, is 20 μm, and the SAW velocity v s is approximately 3795 m/s 34 . The central resonance frequency f = v s /λ SAW is estimated to be 190 MHz. Figure 2(b) shows the transmittance S 21 as a function of frequency measured by a network analyzer (RS ZVA24). It exhibits peaks at central frequencies f c = 191 and 187 MHz for IDT1 → IDT3 and IDT2 → IDT4, respectively, in fair agreement with the designed value. Graphene is prepared by chemical vapor deposition. We refer the readers to our previous publications for the details of graphene growth, characterization, and transfer procedures 35-37 . Graphene is gently placed between two IDTs and tailored to a rectangular shape of length L G = 600 μm and width W G = 400 μm. Caution must be taken to ensure that graphene residues do not short the Au electrodes of the IDTs. Four electrodes deposited along the side borders of the graphene are used for resistance and acoustoelectric current measurements. They are made of an Au/Cr bilayer 8/70 nm in thickness and 20 μm in width, with lengths of 450 μm for leads 1 and 4 and 250 μm for leads 2 and 3 (see Fig. 2(a)). Finally, a micro-bead of the solid polymer electrolyte, poly(ethylene oxide) (PEO) with LiClO 4 33 , is dropped onto the graphene surface, covering an area slightly larger than the graphene. Note that the geometry of the electrodes is designed such that the damping of the SAWs caused by their intrusion into the metallic electrodes is minimized, while I ae flowing along either the longitudinal or the transverse direction can be collected as much as possible. Figure 2(c) shows the representative resistance R (=V 24 /I 13 ) of graphene as a function of V g , where V 24 is the voltage measured across leads 2 and 4, and I 13 is the current passing from lead 1 to lead 3. The R versus V g trace of graphene reaches a maximum resistance of ~2.5 kΩ at the CNP, where V g ≡ V CNP = −0.549 V. We have measured five devices with the same structure and obtained consistent results. The data presented below are mainly obtained from one of the devices. Results and Discussion The acoustoelectric characteristics of the studied device at room temperature are shown in Fig. 3. Figure 3(a) displays the experimental setup of the I ae measurement. Here we use IDT1 and IDT2 to generate the SAWs and take leads 4 and 1 to sense I ae , while keeping the remaining IDTs and electrodes inactive and open.
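Before turning to the measurements, a quick arithmetic check of the designed resonance frequency quoted in the device details above (a trivial sketch using only the stated numbers):

# Check the designed SAW resonance frequency from the quoted values.
v_s = 3795.0        # SAW velocity on LiNbO3 (m/s)
lambda_saw = 20e-6  # SAW wavelength set by the IDT electrode pitch (m)
f_c = v_s / lambda_saw
print(f"f = v_s / lambda_SAW = {f_c / 1e6:.1f} MHz")  # ~189.8 MHz, i.e. ~190 MHz as stated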
We modulate the RF signal at a frequency of 10 kHz and employ a standard lock-in technique to measure I ae . Note that the propagation direction of the I ae measured by leads 4 and 1 aligns with that of the SAWs induced by IDT1. Figure 3(b) shows I ae as a function of bias voltage V g for various RF powers P IDT1 applied on IDT1 at the central frequency of 191 MHz, while keeping IDT2 inactive. With the present arrangement of the current leads displayed in Fig. 3(a), the measured I ae is positive, negative, or zero as graphene is biased in the hole-rich regime, the electron-rich regime, or at V g ~ V CNP , respectively. The gate bias dependence of I ae manifests the unique Dirac dispersion relation of graphene 22,23 . We note that an on/off ratio of I ae up to 10^7 can be achieved, for example, if one defines the on-state at V g − V CNP = 0.5 V and the off-state at V g = V CNP for P IDT1 = 10 dBm. Extracted from Fig. 3(b), the measured acoustoelectric current as a function of the SAW intensity is plotted in Fig. 3(d). For applied RF powers up to 10 dBm the acoustoelectric current is linearly proportional to the SAW intensity 18 , as indicated in Eq. (1). To present the performance of the device in the dual-SAW operation, we cooperatively activate IDT1 and IDT2 at a frequency of 190 MHz. First, we launch the SAWs from IDT1 with a fixed P IDT1 ~ −10 dBm to induce a steady positive I ae in graphene in the hole-rich regime, and then gradually increase the input RF power P IDT2 on IDT2 from −10 dBm to 10 dBm. Figure 3(c) shows I ae as a function of the applied bias V g for various P IDT2 . It is found that the value of I ae decreases/increases with increasing P IDT2 in the hole-/electron-rich regime. For P IDT2 > 2 dBm, I ae almost vanishes. As P IDT2 increases further, I ae changes sign, and its magnitude increases with P IDT2 . The detailed evolution of I ae with P IDT1 and P IDT2 is not fully captured by the simple mixing-flow picture of j ae described in Fig. 1(b). However, the experimental findings demonstrate that the dual-SAW operation can null the acoustoelectric current in a controllable manner, which provides an alternative route to turn "off" I ae . Next we will show that, by dynamically controlling the on/off state of I ae , the GAET can be effectively operated as a logic switch. Figure 4(a) displays a schematic diagram of the measurement circuit for real-time response measurements of I ae . We first bias graphene in the hole-rich regime at V g − V CNP = 0.5 V and then simultaneously apply a constant P IDT1 = −10 dBm on IDT1 and a modulated P IDT2 on IDT2 to generate a time-varying I ae . The open-circuit voltage V SAW associated with the induced I ae is amplified by a wide-band low-noise amplifier and directly recorded by a digital oscilloscope with a bandwidth of 100 MHz and a sampling rate up to 1 GHz. The output voltage V SAW corresponding to I ae ~ 1 μA is approximately 1 μV. The RF power P IDT2 is amplitude modulated by a square wave with a period T m (=1/f m = 100 μs) and a duty cycle D (=0.2). Oscilloscope traces of the applied RF signal on IDT2 are shown in Fig. 4(b) for reference. Figure 4(c) shows screenshots of the output waveforms of I ae taken from the oscilloscope at various P IDT2 . During the active time of the P IDT2 pulse, I ae exhibits a dip due to cancellation by the negative I ae contribution.
As P IDT2 increases up to 6 dBm, I ae nearly vanishes. Here the GAET functions as an active-High logic switch that processes information as either a "1" or a "0", depending on whether the switch is off (a finite acoustoelectric current I ae is measured) or on (I ae is measured to be zero). We estimate the on/off ratio of I ae to be approximately 10^4 based on the noise levels of the on and off states. To characterize the response time of switching I ae , we note that the maximum switching rate, the key parameter limiting the sampling rate in digital communications, is determined by the transition time of I ae in response to the modulated RF pulse. We can switch I ae simply by modulating the RF signal applied to a single IDT. Unlike the dual-SAW scheme discussed above, one can view such operation as an active-Low logic switch. In terms of the GFET comparison, graphene provides a natural 2D conducting channel such that a digital on/off state can be achieved simply by modulating the source-drain bias without applying a gate voltage, if signal gain is not a concern. On the other hand, an RF signal can be directly converted to an electric signal in the GAET. In this regard, the GAET has an advantage over the GFET as a logic switch. Figure 5(a) shows the circuit diagram used to switch I ae by operating IDT1 alone. Figure 5(b) shows the detailed profile of the I ae pulse waveform generated by a square-wave-modulated P IDT1 (=17 dBm). The on-time t on is set to about 20 μs. Based on the 90% and 10% threshold levels of the pulse amplitude, we determine the rise time t R and fall time t F to be about 6 μs. Figure 5(c) shows the evolution of I ae with different modulation frequencies. For comparison, we normalize I ae to its quiescent value I 0 (=1.6 μA), and time to the modulation period T m . As displayed in Fig. 5(c), the on-state remains stable for T m down to 20 μs, corresponding to a dynamic switch rate of 50 kHz. That is to say, the peak value of the I ae waveform with a pulse width of ~4 μs is within 90% of the full amplitude. The propagation delay time t p , a parameter used to evaluate jitter effects, is about 0.6 μs and is related to L/v s , where L is the separation between IDT1 and the center of the graphene channel. We estimate a digital modulation rate of ~10 KB/s for the GAET switch. The data shown in Figs 4(c) and 5(c) present a way to switch the channel current on and off by digitizing the RF power, without resorting to the gate voltage, as long as graphene is intentionally doped away from the CNP. The liquid gate is thus not necessary for the GAET switch. On the other hand, we have verified that the switch rate is affected neither by the presence of the ionic liquid nor by the instrumentation. We estimate the capacitance of a single IDT to be C IDT ~ 6.25 pF and the circuit input impedance to be around 448 Ω, giving an RC time constant of around 2.8 ns, which is much shorter than the measured t R and t F . [Recovered Figure 3 caption: (b) The quiescent acoustoelectric current I ae as a function of V g at different RF powers P IDT1 applied on IDT1 at 191 MHz, with IDT2 inactive; the majority carriers change from p- to n-type as the Fermi level is tuned across the CNP, causing a sign reversal of I ae . (c) I ae versus V g traces with various P IDT2 applied on IDT2; here P IDT1 is kept at −10 dBm and both IDTs are operated at 191 MHz; when P IDT2 increases beyond 2 dBm, the polarity of the measured I ae changes. (d) Acoustoelectric current as a function of P IDT1 at various V g , extracted from (b).]
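A quick check of the RC time-constant estimate quoted above (using only the stated capacitance and input impedance):

# Verify the quoted RC time constant and compare it with the measured rise time.
c_idt = 6.25e-12   # capacitance of a single IDT (F)
r_in = 448.0       # circuit input impedance (Ohm)
t_rise = 6e-6      # measured rise/fall time (s), quoted above
tau_rc = r_in * c_idt
print(f"RC time constant = {tau_rc * 1e9:.1f} ns")        # ~2.8 ns
print(f"measured t_R is ~{t_rise / tau_rc:.0f}x longer")  # RC is far too short to explain t_R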
Therefore, we argue that the measured delay is not set by the intrinsic RC constant and is more likely due to impedance mismatch. We expect that the switching rate can be raised immediately by shrinking the channel width of the graphene. To optimize the impedance matching, one may employ tapered IDTs functioning as impedance transformers 38-40 , which match a 50 Ω transmission line at one end and the characteristic impedance of the tailored graphene and the leads at the other end. To this end, one also needs to characterize the output impedance of the GAET and match it to that of the transmission line extended to the measurement ports. The dynamics of acoustoelectric effects in emerging post-graphene 2D materials, e.g. transition metal dichalcogenides (TMDs) and black phosphorus (BP), are much less explored 41 and would be interesting subjects for future studies. Finally, we wish to make a few comments on the future development of GAET logic devices. The ultimate response time of the GAET switch is limited by the SAW velocity v s and the channel width. There are several approaches to increasing the switching rate. One may try to fabricate the device on substrates with a relatively large electromechanical coupling coefficient, e.g. 42° Y-X LiTaO 3 or 64° Y-X LiTaO 3 , which have been widely applied in SAW devices for mobile communications. However, the tradeoff is that the larger leaky-wave loss may yield a lower I ae . In principle, a high slew rate of I ae can be obtained from a wide-band SAW device, which can be implemented by an apodized IDT design or simply by reducing the number of fingers in the IDT. A narrower channel width may give a shorter response time, but it in turn reduces I ae or requires a larger P IDT . This drawback makes the GAET unsuitable for latch operation. Regarding the operation scheme, a single IDT is sufficient for the active-Low switch. Using collinear dual IDTs, such as IDT1 and IDT3 (or IDT2 and IDT4), one can apply lower and balanced P IDT for the active-High switch. Nevertheless, the evident interference due to reflected waves should be taken into account. For the dual-SAW operation, the two SAWs can be excited at different frequencies; however, SAW attenuation becomes more pronounced at higher frequency. In addition, by properly utilizing the four leads and IDTs, we can directly measure I ae to make the GAET act as an acoustoelectric branch switch 28 . We note that recent studies reveal several intriguing interface elastic properties of van der Waals materials 42-44 , which may offer a means to speed up the switch rate of the GAET by engineering the interfacial acoustoelectric properties. Although the switch rate is slow (audio frequencies) and the design is somewhat complex, we think that the GAET opens a route for developing graphene-based logic switches. In this work, we only demonstrate the feasibility of the GAET as a logic switch and leave the aforementioned issues for future studies. Conclusion In conclusion, we present an accessible operation scheme of the GAET as a logic switch with a moderate on/off ratio of ~10^4 at room temperature. By manipulating the propagation direction of I ae , the measured value of I ae can be fine-tuned to zero, an ideal off state for a logic switch. We demonstrate that the dynamic switch rate of I ae can reach 50 kHz by modulating the amplitude of the input RF signal applied to the IDTs.
By deliberately controlling the digitized RF power applied to a pair of crossed IDTs or to a single IDT, the output I ae can be operated as an active-High or an active-Low switch, respectively. The digital modulation rate can reach ~10 KB/s. The performance of the GAET is suitable for processing digital audio signals. Even though the switch rate is slow, our work provides a means of integrating SAW devices and acoustoelectric effects for the future development of graphene-based logic devices.
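To make the switching scheme above concrete, the following toy model (a hypothetical sketch with assumed coupling coefficients, not the authors' analysis code) treats the measured current as a fixed positive contribution from IDT1 minus a contribution from IDT2 whose power is amplitude-modulated by a bit sequence; the output then toggles between a finite value and approximately zero, as in the active-High operation described above.

# Toy model of the dual-SAW (active-High) switching scheme.
# k1 and k2 are assumed coupling coefficients chosen so that the IDT2
# contribution cancels the IDT1 contribution near the quoted crossover
# (P_IDT1 ~ -10 dBm, P_IDT2 ~ +2 dBm). Units of I_ae are arbitrary.

def dbm_to_mw(p_dbm):
    """Convert RF power from dBm to mW."""
    return 10 ** (p_dbm / 10.0)

def i_ae(p_idt1_dbm, p_idt2_dbm=None, k1=1.0, k2=0.063):
    """Net acoustoelectric current: IDT1 contribution minus IDT2 contribution."""
    i1 = k1 * dbm_to_mw(p_idt1_dbm)
    i2 = k2 * dbm_to_mw(p_idt2_dbm) if p_idt2_dbm is not None else 0.0
    return i1 - i2

bits = [1, 0, 1, 1, 0, 1]   # data sequence modulating the power on IDT2
p1 = -10.0                  # constant power on IDT1 (dBm)
p2_on = 2.0                 # power on IDT2 when the bit is 1 (dBm)
for b in bits:
    out = i_ae(p1, p2_on if b else None)
    state = "on (I_ae ~ 0)" if abs(out) < 0.01 else "off (finite I_ae)"
    print(f"bit={b} -> I_ae = {out:+.4f} a.u. -> switch {state}")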
5,817
2019-06-03T00:00:00.000
[ "Physics" ]
Modelling the Predictors of Mobile Health (mHealth) Adoption among Healthcare Professionals in Low-Resource Environments This study was conducted with objectives to measure and validate the unified theory of the acceptance and use of technology (UTAUT) model as well as to identify the predictors of mobile health (mHealth) technology adoption among healthcare professionals in limited-resource settings. A cross-sectional survey was conducted at the six public and private hospitals in the two districts (Lodhran and Multan) of Punjab, Pakistan. The participants of the study comprised healthcare professionals (registered doctors and nurses) working in the participating hospitals. The findings of the seven-factor measurement model showed that behavioral intention (BI) to mHealth adoption is significantly influenced by performance expectancy (β = 0.504, CR = 5.064, p < 0.05) and self-concept (β = 0.860, CR = 5.968, p < 0.05) about mHealth technologies. The findings of the structural equation model (SEM) showed that the model is acceptable (χ2 (df = 259) = 3.207; p = 0.000; CFI = 0.891, IFI = 0.892, TLI = 0.874, RMSEA = 0.084). This study suggests that the adoption of mHealth can significantly help in improving people’s access to quality healthcare resources and services as well as help in reducing costs and improving healthcare services. This study is significant in terms of identifying the predictors that play a determining role in the adoption of mHealth among healthcare professionals. This study presents an evidence-based model that provides an insight to policymakers, health organizations, governments, and political leaders in terms of facilitating, promoting, and implementing mHealth adoption plans in low-resource settings, which can significantly reduce health disparities and have a direct impact on health promotion. Introduction Mobile health (mHealth), a term coined by Robert Istepanian, refers to the use of mobile technologies and communication networks for the delivery of healthcare [1].The World Health Organization defines mHealth as "the use of mobile devices, such as mobile phones, patient monitoring devices, personal digital assistants (PDAs), and other wireless devices, to support the practice of medicine and public health" [2].The application of information and communication technology (ICT), particularly mobile technology, has altered healthcare service delivery and made it more widely accessible and less expensive.Patients can use mobile devices to access their medical records, lab results, medical imaging, and prescriptions in order to be informed of their diagnosis, disease control, and monitoring as well as to make an appointment with a doctor [3]. 
mHealth solutions address healthcare challenges such as the increase in communicable or non-communicable diseases and the burden of healthcare cost, and it can also empower patients and families to take preventive measures against diseases and to self-manage their health by providing access to healthcare information and services wherever they are needed [4].As part of the e-health initiative, telemedicine services are accessible to the general public around the world.Some examples include Healthline in Bangladesh, HMRI in India, Teledoctor in Pakistan, Medical Home in Mexico, Fonemed in the United States, NHS Direct in the United Kingdom, Project REMOTE in Europe, and Project Masiluleke in South Africa [5].mHealth solutions have the potential to prevent deaths from diseases that can be cured or controlled, such as malaria, asthma, and diarrhea, which are responsible for the death of large numbers of children every year.Major causes of these largely preventable deaths include a lack of knowledge and understanding of the healthcare system, geographical distance from healthcare facilities, limited access to healthcare services, and poverty [6,7]. mHealth is a low-cost and efficient technology for sharing knowledge on disease prevention, self-management, and diagnostics that aids in educating people in underdeveloped countries.It provides a practical and convenient way to spread preventive and awareness health messages [8].Several studies have reported that the accessibility of mobile-based health information resources improve individuals' knowledge of health in terms of different diseases, such as AIDS and HIV systems [9][10][11], oral contraceptives [12] quitting smoking [13], and pregnancy in women [14].Additionally, the emergence of clinical decision support systems in underdeveloped countries is encouraging since they can help in diminishing the knowledge gap between patients and healthcare professionals.In Kenya, a malaria treatment adherence management trial system was implemented in 107 rural healthcare facilities.The system's findings revealed that treatment adherence improved by 31.7% in the short term and 28.6% in the long term [15].Clinical support systems through mHealth have been successfully implemented in several developing countries to monitor ECG [16] and child mortality [17].mHealth also supports remote monitoring, provides a new pathway to treating patients, and improves survival rates in low-resource settings, where access to hospitals is limited and healthcare facilities are inadequate [18]. The present study is significant in terms of identifying the predictors that play a determining role in the adoption of mHealth among healthcare professionals.This study plays a significant role in presenting an evidence-based model that can suggest mHealth adoption predictors to policy makers, health organizations, government organizations, and political leaders in terms of facilitating, promoting, and implementing mHealth adoption plans in low-resource settings, which can significantly help to reduce health disparities and will have a direct impact on health promotion.Moreover, this study will bridge the gap in the inadequate literature on the predictors of mHealth adoption among healthcare professionals in rural areas and in areas where there is inadequate health infrastructure and resources. 
Mobile Health Applications mHealth applications are valuable due to the pre-existing infrastructure for mobile phones and a user community that is already familiar with the technology. They can provide a means of communication with generation Z, who have embraced mobile technology far more quickly than any other generational group. mHealth apps can help with healthcare delivery in a number of different ways. However, some programs simply advocate for digital health [19]. Others provide a range of healthcare services, including information, reminders for treatments, and data collection [20]. The types of services that are included in different categories are shown in Table 1, along with how the categories overlap [9,19,21-25]. In developing countries like Pakistan, mHealth has revolutionized healthcare delivery. mHealth includes applications designed for users of all ages and with a wide range of demands (as shown in Table 2). Examples from Table 2 include Marham, a healthcare application that allows communication between patients and physicians, and Dawaai.pk, an app that allows users to buy prescription medicines, healthcare products, and supplements, along with additional information regarding medications and their substitutes; it has over one million users (https://play.google.com/store/apps/details?id=com.dawaai.app&pcampaignid=web_share, accessed on 16 November 2023). Theoretical Framework and the Process of Hypotheses Development Presently, in the scientific community, there is noteworthy academic and professional concern with the process of the implementation and utilization of information technology (IT) and information systems (IS). Researchers have established various theoretical frameworks and models from this particular standpoint during the last few decades, such as the theory of planned behavior [26], the diffusion of innovation theory [27], social cognitive theory [28], the motivational model [29], the model of personal computer utilization [30], innovation diffusion theory [31], the theory of reasoned action (TRA) [32], and the technology acceptance model (TAM). In 2003, Venkatesh et al. proposed the unified theory of acceptance and use of technology (UTAUT), which attempts to explain the process by which individuals adopt a particular technology [33]. The UTAUT model is a highly prevalent theoretical framework utilized in the field of technology adoption research and encompasses all relevant variables, taking into consideration the influence of eight prominent theories, including TAM, on the behavioral intention (BI) to adopt a technology. The UTAUT model is widely recognized as an efficient framework for understanding the adoption of technology in various fields [34]. Furthermore, in contrast to the eight models upon which it is established, researchers have demonstrated that the UTAUT model has a significantly higher level of efficacy, up to 70%, in terms of describing users' intentions [35,36]. Venkatesh et al. [33] analyzed the previously reported models of technology acceptance and identified the constructs that were significantly useful and/or integrated in the models or their extensions. The researchers eliminated overlapping concepts and finally developed a unified model with an overall inclusive explanatory power to conceptualize and predict individuals' attitudes toward technology. Chib et al.
[37] reviewed mHealth adoption studies in low-resource settings. Their analysis found a scarcity of research that provides theoretical insight into mHealth adoption. Thereby, the theoretical foundation of the present study is based on the unified theory of acceptance and use of technology (UTAUT) model [33]. Several researchers have used this integrated conceptual framework to theorize the adoption behavior of technology [33,38-40]. Our study's conceptualization (as shown in Figure 1) is based on the UTAUT framework, suggesting that the adoption of mHealth is influenced by the determinants of UTAUT, i.e., performance expectancy (PE), effort expectancy (EE), facilitating conditions (FCs), social influence (SI), and behavioral intention (BI). However, we added an extra construct, self-concept (SC), to the determinants of the UTAUT model. Performance Expectancy (PE) Venkatesh et al. [33] reported that performance expectancy (PE) is conceptualized as the extent to which a person considers that adopting a technology will enable him or her to increase their productivity at work. PE contains the characteristics of five theories, i.e., extrinsic motivation from the motivational model [29], perceived usefulness from TAM, task fit from the personal computer utilization theory, and relative advantage from social cognitive theory. In the context of mHealth, PE is conceptualized as the degree to which a healthcare professional believes that using mobile devices for healthcare services would be beneficial. Numerous research studies have identified PE as a significant predictor of healthcare technology adoption [33,39-43]. H1: Performance expectancy (PE) has a statistically significant influence on the behavioral intention to adopt mHealth. Effort Expectancy (EE) Effort expectancy (EE) is described as the level of ease a healthcare professional feels while using mHealth. Three constructs of other theories represent EE: perceived ease of use (TAM, TAM2), complexity (personal computer utilization theory), and ease of use (innovation diffusion theory). Numerous empirical studies have demonstrated that EE directly influences users' intentions to adopt new technologies [35,44]. EE was found to be a predictor of users' intent to use e-Health services, clinical decision support systems, and mHealth [44-46]. Therefore, it is hypothesized that: H2: Effort expectancy (EE) has a statistically significant influence on the behavioral intention to adopt mHealth. Social Influence (SI) Social influence (SI), a formative construct of behavioral intention, is defined as the degree to which a person perceives that others who are important to them believe that they should use the particular technology. Several studies have concluded that SI significantly influences technology adoption [44,47,48]. Therefore, we hypothesized that: H3: Social influence (SI) has a statistically significant influence on the behavioral intention to adopt mHealth.
Self-Concept (SC) Researchers have recognized an integrated effect of psychological phenomena on individuals' willingness to adopt any technology.However, individuals' personalities and internal self-perceptions regarding the significance of any product have an added appeal in conceptualizing another consumer preference component, which is called selfconcept [28,49].Therefore, it is hypothesized that: H4: Self-concept (SC) has a statistically significant influence on the behavioral intention to adopt mHealth. Facilitating Conditions (FCs) The concept of facilitating conditions (FCs) is defined as the degree to which an individual believes that the infrastructure, such as organizational and technical, exists for the utilization of mobile health [20].In the context of mHealth technologies, the factor FCs is considered to be a significant factor.The successful and effective use of mHealth service systems is heavily dependent on uninterrupted contact between the service provider and host, who are in two different locations.Several research studies have predicted FCs to be a significant predictor of technology adoption [42,[50][51][52][53]. Thus, we proposed the following hypothesis: H5: Facilitating conditions (FCs) is a significant predictor for adopting mHealth. Behavioral Intention (BI) The behavioral intention of an individual refers to their individualistic/subjective probability of carrying a specific behavior [26].Based on a theoretical perspective, behavioral intention has a significant impact on the use of technology [33,39,54].Therefore, it is hypothesized that: H6: Behavioral intention is a statistically significant predictor of the adoption of mHealth. Objectives of the Study Pakistan is the fifth most populated country in the world, with 64% of its population residing in rural areas.The rural population has inadequate access to healthcare services due to a lack of medical facilities and health infrastructure in the rural areas of the country [55].In developing countries such as Pakistan, the high cost of transportation for mobility to hospitals or medical emergencies due to the long distances and high cost of fuel is one of the major issues [56].Thus, the adoption of mHealth for the provision of healthcare resources and services is an achievable, cost-effective, convenient, and more efficient method in low-resource settings. Therefore, the present study was conducted with objectives to measure and validate the unified theory of the acceptance and use of technology (UTAUT) as well as to identify the predictors to mHealth technology adoption among healthcare professionals (doctors and nurses) in limited-resource settings. 
Participants and Procedure A cross-sectional survey was carried out at the six public and private hospitals in the two districts (Lodhran and Multan) of Punjab, Pakistan. The participants of the study comprised registered doctors and nurses working in the participating hospitals. Of these six hospitals, medical colleges are attached to three (two public and one private) for the provision of undergraduate and graduate medical education and training. The other three hospitals are classified as secondary healthcare centers. The population's characteristics may not vary based on the centers' characteristics (tertiary or secondary healthcare) due to the requirements of a basic degree (BSN for nursing and MBBS/FCPS for doctors/consultants) recognized by the Pakistan Medical and Dental Council (PMDC) and the Pakistan Nursing and Midwifery Council (PNMC) for appointments in medical centers/hospitals. Research Tool A two-part questionnaire was developed after reviewing the relevant literature for assessing the study's settings and the status of the participants in the healthcare facilities in terms of how health is currently delivered, the status of infrastructure and facilitating conditions, and the need for mHealth in the facilities. The first part of the questionnaire comprised demographics-related questions, such as the respondents' gender and age, the professionals' experience and profession (doctor or nurse), and the working unit (emergency, primary care, medical, or surgical units). The second part comprised seven sub-scales and 34 statements. The first sub-scale, performance expectancy (PE), contained six statements, the sub-scale on effort expectancy (EE) included five statements, and the sub-scales on facilitating conditions (FCs) and social influence (SI) contained four statements each. The sub-scales on self-concept (SC) and behavioral intention (BI) both comprised five statements each. The last sub-scale, on mHealth adoption, was measured using five statements. The questionnaire was pre-tested by three experts from the fields of information management, public health, and health communication. The recommended changes, such as shuffling and rephrasing statements, were incorporated in the questionnaire.
Data Collection and Analysis Procedure Purposive sampling was used to collect the data.A total of 500 questionnaires were sent out to participants through personal visits to their clinics, sending them emails, and posting them printed copies of the questionnaire.The participants were informed that they could leave the questionnaire at any time and that their participation was completely voluntary.Of the 500 questionnaires, 314 filled questionnaires were returned with a response rate of 62.8% after three follow-ups with a gap of two weeks each.All these 314 copies of the questionnaire were valid for data analysis.The Statistical Package for Social Sciences (SPSS software v26) software was used for the data analysis.The dataset's missing values were replaced using expectation-maximization (EM) methods.Demographic data are given as percentages and frequencies.The analysis of moment structures (AMOS) method was used for confirmatory factor analysis, structural equation modelling, and multi-group analysis.The confirmatory factor analysis (CFA) method was used to examine the association between the latent variables and model estimates.The structural equation model (SEM) was then used to estimate the direct and indirect effects of various UTAUT model paths and to determine the validation of the hypotheses.The significance value was set at < 0.05.The study was started after approval was obtained from the Departmental Research Committee, Department of Information Management, The Islamia University of Bahawalpur, Pakistan. Confirmatory Factor Analysis Cronbach's alpha was used to assess the questionnaire's reliability.The six statements for PE received a Cronbach's alpha score of 0.890, the five-item loading for EE received a score of 0.893, the four items for FCs obtained a score of 0.839, the four items for SI received a score of 0.862, SC received a score of 0.885, the statements on the construct BI received a Cronbach's alpha score of 0.885, and the five statements for mHealth adoption received a score of 0.673.Cronbach's alpha value for the questionnaire's 34 statements over seven constructs was 0.966, indicating strong reliability. Regression Weights Figure 2 displays the standardized estimation of the regression weights of the loading of the components on the constructs.The latent variables' (PE, EE, FCs, SI, SC, BI, and MA) path coefficient values were found to be moderate-to-high, ranging from β = 0.49 to β = 0.85.The latent variable PE was measured using five observable variables, and the loading values ranged between β = 0.70 and β = 0.82, showing strong loadings on the construct.The latent variable EE was measured using four items, and the values ranged between β = 0.75 and β = 0.85, demonstrating a strong correlation between the loadings and the construct.The latent variables FCs, SI, and BI were measured using three observable items on each latent variable.All these items received values that ranged between β = 0.68 and β = 0.85, suggesting a strong correlation between the items and the construct.The score of the four items on SC ranged between β = 0.74 and β = 0.80, indicating a strong association of the loadings.The loading values of the MA items varied from 0.49 to 0.68, indicating a moderate-to-strong association. 
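For readers who wish to reproduce this kind of UTAUT measurement and structural model outside AMOS, the sketch below shows how the hypothesized paths (H1-H6) could be specified in Python with the open-source semopy package; the item names (pe1..pe6, etc.) and the CSV file are hypothetical placeholders, and this is not the authors' analysis code.

# Minimal SEM sketch of the UTAUT-style model using semopy.
# Item names and the data file are placeholders; the structure mirrors H1-H6.
import pandas as pd
from semopy import Model

MODEL_DESC = """
PE =~ pe1 + pe2 + pe3 + pe4 + pe5 + pe6
EE =~ ee1 + ee2 + ee3 + ee4 + ee5
FC =~ fc1 + fc2 + fc3 + fc4
SI =~ si1 + si2 + si3 + si4
SC =~ sc1 + sc2 + sc3 + sc4 + sc5
BI =~ bi1 + bi2 + bi3 + bi4 + bi5
MA =~ ma1 + ma2 + ma3 + ma4 + ma5
BI ~ PE + EE + SI + SC
MA ~ BI + FC
"""

data = pd.read_csv("utaut_survey.csv")  # placeholder: one column per questionnaire item
model = Model(MODEL_DESC)
model.fit(data)                          # maximum-likelihood estimation
print(model.inspect())                   # path estimates, standard errors, p-values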
Multiple Squared Correlations The values of the squared multiple correlations showed that the four modelling factors (PE, EE, SI, and SC) collectively accounted for 98% of the variance in BI. However, the combined effect of PE, EE, SI, SC, and FCs via BI accounted for 19% of the variance in MA (Figure 3). Standardized Direct, Indirect, and Total Effects In the structural equation model (as shown in Figure 3), BI mediated the effects of PE, EE, SI, and SC on MA. However, there was also a direct path from FCs to MA, as given in the original model (Figure 1). The estimation indicated that SC (β = −0.015) and PE (β = −0.009) had negative indirect effects on MA. Similarly, SI (β = −0.156) and EE (β = −0.230) had negative direct effects on MA. On the other hand, SC (β = 0.882) and PE (β = 0.514) had positive total effects on BI (Figure 3). Moderating Effect of Gender, Age, and Experience Using a multi-group analysis, this study examined the moderating effects of age, gender, and experience on the relationships between PE, EE, SI, SC, FCs, BI, and MA.
Estimation of Regression Weights and Validation of the Hypotheses Table 3 displays the standardized regression estimates, critical ratio (CR), significance of components, standard error (SE) for parameter estimation, and confirmation of hypotheses. Regression analysis can be used to predict the variance in one variable depending on another. The level of significance was fixed at p = 0.05, and the notation "***" indicates a p value below 0.005. The findings indicated that PE significantly influenced BI (β = 0.504, CR = 5.064, p < 0.05). Similarly, SC significantly influenced BI (β = 0.860, CR = 5.968, p < 0.05). On the other hand, there was no significant influence of EE (β = −0.198, CR = −1.900, p = 0.057) or SI (β = −0.134, CR = −1.109, p = 0.267) on BI, and FCs also had no significant influence on MA (β = 0.219, CR = 1.916, p = 0.55). Furthermore, no significant influence of BI was found on MA (β = −0.008, CR = −0.054, p = 0.957). All hypotheses' abbreviations are expanded in Appendix A. Discussion Our study measured the unified theory of acceptance and use of technology (UTAUT) among healthcare professionals in low-resource healthcare settings. Several studies have already applied the UTAUT model to predict mHealth adoption [58-63]. The CFA validation findings of the seven-factor measurement model (PE, EE, FCs, SI, SC, BI, and MA), based on the 25 valid items (9 items with a low loading on the constructs in the model were removed), showed a strong correlation between PE and EE, PE and FCs, PE and SI, PE and SC, and PE and BI. These findings are comparable with previous studies [64]. The correlation between PE and MA, however, was found to be only moderately strong. On the other hand, we found a strong correlation between EE and FCs, SI, SC, and BI, although EE had only a moderate-level correlation with MA. The model further indicated that the factor FCs was strongly correlated with SI, SC, and BI, while the strength of the correlation between FCs and MA was moderate. Our findings validate the findings of a previous study that reported that SI is strongly correlated with SC, BI, and MA [42]. SC was also positively correlated with BI and MA. There was a moderate-strength positive correlation between BI and MA. However, BI was a strong predictor for MA. The literature also shows that a higher BI level predicts the actual use behavior of mHealth adoption [40,61,65]. Overall, the goodness-of-fit values showed that the CFA model was acceptable (χ2 = 3.206; df = 254; p = 0.000; RMSEA = 0.084; CFI = 0.893; IFI = 0.894; and TLI = 0.874). Structural equation modeling (SEM) was applied to validate the hypotheses of this study. The correlational scores indicated that PE was strongly associated with EE, SI, SC, and FCs. The path coefficient estimation showed that PE was positively correlated with BI. Our findings validate the findings of previous studies that reported a strong correlation between PE and BI [59,60,62,63]. However, in contrast to our findings, [53] claimed that PE does not significantly influence BI. Our findings support the findings of another study that reported that SC influences BI and the factor FCs influences MA [66]. However, EE and SI negatively influence BI. These results are in contrast with the findings of previous studies showing a positive influence of SI on BI in the context of mHealth [42].
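The hypothesis decisions reported above follow mechanically from the estimates in Table 3; the small sketch below simply applies the p < 0.05 criterion to the coefficients as quoted in the text (values copied verbatim, including the p value quoted as 0.55 for FCs), purely to illustrate the decision rule.

# Apply the p < 0.05 decision rule to the path estimates quoted in the text.
# Tuples are (standardized beta, critical ratio, p value) as reported.
paths = {
    "H1: PE -> BI": (0.504, 5.064, "<0.05"),
    "H2: EE -> BI": (-0.198, -1.900, 0.057),
    "H3: SI -> BI": (-0.134, -1.109, 0.267),
    "H4: SC -> BI": (0.860, 5.968, "<0.05"),
    "H5: FCs -> MA": (0.219, 1.916, 0.55),
    "H6: BI -> MA": (-0.008, -0.054, 0.957),
}
ALPHA = 0.05
for name, (beta, cr, p) in paths.items():
    significant = p == "<0.05" or (isinstance(p, float) and p < ALPHA)
    verdict = "supported" if significant else "not supported"
    print(f"{name}: beta={beta:+.3f}, CR={cr:+.3f}, p={p} -> {verdict}")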
Moderating Effect of Gender, Age, and Experience The findings of our study showed that the relationships between the UTAUT components (PE, EE, FCs, SI, SC, BI, and MA) were significantly moderated by age. These findings are comparable with a study [67] that showed that age has a significant impact on the relationship between FCs and MA and that the relationship between PE and BI is influenced by age; it also moderates the relationship between BI and MA. However, our study did not find a moderating influence of age on the relationships between PE and BI, EE and BI, SI and BI, and SC and BI. Our findings indicated that gender had a statistically significant moderating effect on the relationship between SI and BI as well as on the relationship between SC and BI. Likewise, several previous studies have reported gender to be a key moderator of mHealth adoption [68,69]. However, there was no significant moderating influence of gender on the relationships between PE and BI, EE and BI, FCs and MA, and BI and MA. Previously, similar findings have shown that gender is not a key moderator of the adoption of mHealth services [62]. Our results indicated that experience had a statistically non-significant positive moderating effect on the relationships between PE and BI, EE and BI, SI and BI, FCs and MA, and BI and MA. The findings of our study statistically validated only two of the study's six hypotheses, indicating that PE and SC had a statistically significant influence on the BI of healthcare professionals towards mHealth adoption. Previous studies have also indicated a positive influence of PE on BI [40,70,71]. The UTAUT model comprised six components (PE, EE, SI, FCs, BI, and MA). However, we added an extra component, SC, with the help of the literature in order to enhance the model's strength and to verify whether SC has an influence on behavioral intentions. The findings of our study showed that SC has a significant influence on BI. However, a previous study indicated a non-significant influence of SC on BI [64]. Furthermore, our study indicated that the factor FCs has a positive influence on MA, but the influence is not statistically significant. On the other hand, EE and SI have a non-significant negative influence on BI. Similarly, BI has a negative, non-significant influence on MA among healthcare professionals.
Limitations of the Study We used a survey method for data collection, which is always subject to respondents' own understanding and self-reporting in terms of the question statements.Therefore, there is always a chance that the respondents' answers to the question statement may differ from the actual conditions.In order to minimize the limitations of the questionnaire, the questionnaire was pre-tested by three experts from the field of information management, public health, and health communication.Cronbach's alpha was used to assess the questionnaire's reliability.The 34 items under seven constructs received a Cronbach's alpha value of 0.966, showing the high reliability of the questionnaire.Furthermore, we acknowledge the Dunning-Kruger effect (DKE) [72,73] as a limitation of the questionnaire given that it relies on subjective self-reporting.DKE is a cognitive bias that causes individuals to overestimate their abilities.Sometimes, we think (overestimation) that we know or understand something well but we actually do not ('being ignorant of one's own ignorance').The authors also acknowledge the potential selection bias for study's population and healthcare centers for data collection as a limitation of the study.Therefore, care should be exercised while generalizing the findings of this study to the population of secondary-or tertiary-care hospitals.However, we adopted purposive sampling in order to avoid selection bias, but it is inevitable and fundamental in this type of survey study. The application of international regulations by the various governments is one of the requirements for mHealth perceivability.Regulatory and legal issues/frameworks are important when considering mHealth apps/services adoption, especially apps/services that behave as a medical device (SaMD or software as a medical device) [74].However, the regulatory and legal aspects of mHealth adoption are not discussed in this study, which may be a limitation of this study. Implications of the Study This study has several practical implications.Ensuring universal health coverage is a part of the United Nations' sustainable development goals.The SDG 3 "Good health and well-being" attempts to promote healthy lives at all ages.Pakistan is a country with low resources; it has an estimated population of 240 million people, and almost 64% of population lives in rural areas with poor health indicators due to limited access to health resources and services [55,56].In this context, the adoption of mHealth can significantly help in improving people's access to quality healthcare resources and services and in reducing costs.It is notable here that the information communication and technology infrastructure is quite good in Pakistan, where there are an estimated 191 million mobile cellular subscribers. 
The findings of this study have implications for policy makers, as it identified that performance expectancy and self-concept are the main predictors for mHealth adoption among healthcare professionals.Therefore, there is a need for policy makers to demonstrate the capabilities of mHealth in transforming healthcare delivery across rural and urban areas.There is also a need to conduct hands-on sessions and awareness programs in order to demonstrate the performance of mHealth technologies and to develop self-concept about mHealth technologies and its capabilities among healthcare professionals.The way forward is to launch an mHealth pilot in each district.The success of these pilots will involve the whole community of healthcare professionals for mHealth adoption.The involvement of peers and influential people, such as people who are policy makers, early mHealth adaptors, or successful health care professionals, can play a significant role in the adoption of mHealth among healthcare professionals at the level of critical mass. The speed of mHealth adoption is quite slow in Pakistan.The socio-economic and low literacy level are among the main reasons that prevent the adoption of mHealth in Pakistan [7].Moreover, in developing countries such as Bangladesh, Pakistan, and India, the prevalence of traditional culture, the digital divide, the lack of technical skills, and poor health-related information-seeking behaviors all contribute to technology anxiety and resistance to adopting new technologies like mHealth [75].The acceptance of mHealth capabilities would increase exponentially if people believe that mHealth is practical and convenient and could contribute to accessing healthcare.Therefore, there is a need to improve the mobile health literacy of the general population so that they can use mobile health applications for locating relevant doctors, making doctor's appointments, seeking advice from doctors, receiving telemedicine, conducting patient-to-doctor conversations through video, and receiving prescriptions.Governments' health departments have a key role in improving mobile health literacy, and they could achieve this with the involvement of health experts, health educators, and technology experts.The general population can be given awareness and demonstration sessions at hospitals.Moreover, an awareness video explaining the steps involved in the use of mHealth services can also be disseminated among the general population.This study recommends the need for improving the mobile health literacy of the general population.Furthermore, it is important to design user-friendly mobile health applications that will reduce the likelihood of non-adoption.Jacob et al. [76] recently proposed a shift toward theoretical frameworks that address implementation challenges, taking into account the complexity of the sociotechnical structure of healthcare organizations as well as the interplay between technical, social, and organizational aspects. 
Conclusions This study validates the unified theory of the acceptance and use of technology (UTAUT) model with an additional construct of self-concept among healthcare professionals (doctors and nurses) in low-resource environments. It concludes that performance expectancy and self-concept are the main predictors that influence the behavioral intention of healthcare professionals towards mHealth adoption, while age and gender are moderating factors in terms of mHealth adoption. This study suggests that the adoption of mHealth can significantly help in improving people's access to quality healthcare resources and services, reducing costs and health disparities as well as promoting health in low-resource settings. Table 1. Types of services. Table 2. Popular mHealth applications in Pakistan.
7,420
2023-11-26T00:00:00.000
[ "Medicine", "Computer Science" ]
BEHAVIORAL INTENTION TO THE USE OF FINTECH AND GOVERNMENT SUPPORT IN DEVELOPING THE THEORY OF TECHNOLOGY ACCEPTANCE MODEL Financial technology is still an interesting topic in public discussion and among academics. Financial technology is a combination of business innovation and technology that becomes the solution for previous financial services. This paper studies the influential variables for people's acceptance of fintech using the Technology Acceptance Model. On the basis of the results, it is recommended to use the developed Technology Acceptance Model, because the variables have been elaborated and can be analyzed further. Introduction:- Financial technology is still an interesting topic in public discussion and among academics. Fintech began to develop in recent years and is still debated by users in Jakarta and other big cities because there is no standard regulation and supervision from the government in Indonesia. Financial technology (FinTech) is a combination of financial services with technology that eventually changes the form of business from conventional to modern: where payments used to be made face-to-face while carrying cash, long-distance transactions can now be completed in a matter of seconds (Indonesia, 2019). This understanding of fintech is based on the Central Bank of Indonesia; other studies can open up insights and knowledge about fintech itself and its development in various countries that have implemented it. Financial technology (FinTech) is a combination of business innovation and technology that becomes the solution for previous financial services (Jin et al., 2019). Fintech products include e-wallets, peer-to-peer lending, crowdfunding, and insurance technology. These products are calculated choices for consumers and businesses. Fintech has driven digital economic developments in India, China, and Britain. The use of fintech in Malaysia is different; many people have not used, or are not aware of, the digital economy in that country. FinTech's history began in the 1950s, when credit card services aimed to reduce cash-carrying. The development of fintech continued with the emergence of ATMs (Automated Teller Machines), which reduced the need for tellers and branches for cash withdrawals, and with online banking systems in the 1990s. Financial technology continues in the 21st century through digitalization via mobile wallets, payment apps, robo-advisors for wealth and financial planning, and crowdfunding platforms for alternative financing opportunities. Other research views fintech as a digital technology in which unity, big data, and profitable investments are very important and can be used widely in the financial sector (Gabor & Brooks, 2017).
Financial technology, also known as "FinTech", is an innovation in the field of financial services that appears with the latest technology; cellphone-based payment, for example, is a type of service that already exists in China, Korea, and the UK (Huei et al., 2018). Data obtained from the American consulting firm Accenture show that from 2010 to 2016 global fintech investment increased from 12 billion dollars to 153 billion dollars, roughly a twelve-fold increase. In addition, the number of fintech companies grew from 800 in April 2015 to 2,000 at the end of December 2016, with fintech investments reaching 23.2 billion dollars in 2016, up 21.5% from the previous year. Fintech is a competitor of the national banking industry in the credit sector. Research conducted by PricewaterhouseCoopers Indonesia explains that banks in Indonesia are making every effort to offset the new changes that are starting to occur: only 8% of respondents claim that their bank's strategy is the same as its strategy in 2017. Although 2018 is expected to improve compared with the previous year, technology-related risks remain one of the main concerns of bankers in the Indonesian banking industry. These concerns show that the operation of fintech in Indonesia has had an impact on the banking industry (PwC, 2018). The transition to mobile and internet platforms is not new, but the speed of change in Indonesia is very significant. Just three years ago, 75% of bankers estimated that more than half of transactions were done through conventional branch offices; now this figure has fallen to 34%, while the share of transactions on digital channels has risen to 35%. The transition has also become a stepping stone for fintech companies that already rely on, and prioritize, the ease of processes that require only mobile phones and the internet. The fintech business phenomenon has been studied in several countries, and the results show that the acceptance of fintech varies. In the development of fintech investment, the most important elements are mobile-based payment, fintech products, and the fintech polemic in a country. Regulation often carries a negative connotation, but in the fintech community it is acceptable when regulations provide the support needed for financial institutions and the public to feel comfortable operating in this space. Regulators are often praised as the main facilitator of the UK's position as the leading jurisdiction for establishing a fintech business. The FCA's regulatory sandbox, the largest and arguably the most successful sandbox in the UK, has given fintech businesses the ability to test their products in a controlled environment, with the FCA continually learning from each cohort and using this to guide advancements in regulation. This has led to huge growth in the regulatory technology space, a sub-sector of fintech that is now often seen as a sector in its own right, providing more effective and efficient solutions for developing regulation. Increased regulation is a factor in drawing fintech to the UK, which in turn creates more use of fintech in efficient ways to comply with regulation, so a chicken-and-egg scenario ensues. Given the importance of these regulations, supportive regulation can make fintech acceptance in a country very high. Regulations that support both companies and users (the community) are a win-win solution, within the broad corridors set by the government.
Support from the government will be seen in the application of the latest technology to its citizens in the form of regulation. Based on the introduction, it is known that there is a phenomenon of fintech business in the world, including Indonesia. The author also notes how fintech in Indonesia, especially fintech lending, has become a concern in recent years. The phenomenon of the fintech business in several countries, as well as the explanations in several existing studies and sources, focuses on the phenomenon of fintech. In the development of fintech investment, the most important elements are mobile-based payment, fintech products, and the fintech polemic in a country. Based on this, it is necessary to study the influential variables for people's acceptance of the currently booming fintech in Indonesia and the world by using a development of the Technology Acceptance Model (TAM). The results of this study can provide information in the form of the conceptual development of the Technology Acceptance Model (TAM) and its application to fintech, so that it can later be used as consideration in further research. This research can also add a concept for knowing the character of each prospective fintech user in an area (research area), so that it can become input for fintech companies when deciding on their business development. Literature review or theoretical background: Intention to Use: Intention to use arises when someone already has a view and assessment of fintech, plans to use fintech, and will recommend it after receiving the benefits the fintech offers. If the views and evaluations of fintech are positive, then the community or an individual will use fintech and recommend it; conversely, they will not if their views and ratings of fintech are negative (Gomber et al., 2018). Behavior Intention: Behavior intention is someone's subjective view of using fintech technology. In the underlying theory, the Technology Acceptance Model (TAM) explains that there is a positive influence between one's subjective views and one's intention to use fintech itself (Saji & Paul, 2018). This means that if someone's subjective view of fintech is positive, it will produce a desire to use fintech in the future when the opportunity arises. Perceived Risk: Perceived risk is the risk that a person accepts when making a decision (Huei et al., 2018). With current information technology, perceived risk reflects that the use of fintech is still very risky, because users' personal information can be obtained by outsiders without the data owner's permission. According to Lee, the higher the level of risk a person must accept, the more reluctant that person is to use the existing technology, in this case fintech (Lee, 2009). Perceived Cost: Perceived cost is the cost that someone will incur when adopting the technology. The costs a person will incur are a main consideration in deciding whether to use the technology (Saji & Paul, 2018). In other words, the additional costs charged to the public or to a potential user of fintech products and services will largely determine the decision. Perceived Ease of Use: Perceived ease of use can be defined as how little effort is needed to use a new system or technology. Users think that fintech can have a direct impact on the sustainability of their business.
Previous research also explained that there is a positive effect of ease of use on one's attitude toward using fintech (Gabor & Brooks, 2017). Ease of use will increase one's willingness to use it. Perceived Usefulness: Perceived usefulness is how useful a technology is for someone, such that the person will adopt it easily. Perceived usefulness for a customer or prospective customer has a significant influence on the decision whether or not to use fintech. Research conducted in Taiwan explains that perceived usefulness is strongly related to a person's behavioral intention to use fintech technology (Saji & Paul, 2018). Government Support: Government support has the greatest influence on someone deciding whether to use fintech or not (Model, 2019). This is because the government determines the regulations for the products and services provided by fintech. A study by Kiwanuka et al. explained that government support for fintech services and products has a positive effect on a person's use of them (Kiwanuka, 2015). Marakarkandy et al. found that government support in terms of regulation and supervision is crucial for someone to want to use fintech in their daily lives (Marakarkandy, 2017). Research objective and methodology: This paper aims to propose a conceptual model for the technology acceptance model in the financial technology sector. The paradigm of the paper is positivism, which regards previous research as true and scientific (Wahyuni, 2019). The approach built in this paper is a historical research approach, so this research is based on the collection of previous research, with the aim of reviewing previous research to offer a basic Technology Acceptance Model (TAM) that has been developed from it. This proposed research model is based on research that has been elaborated, to be carried out in subsequent studies with new cases. Data analysis uses domain and taxonomy analysis. Domain analysis means that the paper elaborates the general picture from two starting points, namely behavior intention and intention to use. Taxonomy analysis means examining the structure of variables in previous studies to be analyzed. In the end, the latest variables and hypotheses will be displayed in the latest research model, and hypotheses will be formed. Discussion:- This discussion covers the Technology Acceptance Model (TAM), along with tables from previous studies related to the research to be carried out, together with their indicators or statements. After examining the indicators and variables from previous studies, a conceptual framework was proposed. The following research model has been elaborated from behavioral intention and intention to use, drawing on previous research for the structure of the analysis. Hypotheses then emerge from the research model listed above.
We can form seven hypotheses from this research model, namely:
Hypothesis 1: There is a negative influence of perceived risk on behavior intention.
Hypothesis 2: There is a negative influence of perceived cost on behavior intention.
Hypothesis 3: There is a positive influence of perceived ease of use on behavior intention.
Hypothesis 4: There is a positive influence of perceived usefulness on behavior intention.
Hypothesis 5: There is a positive influence of government support on behavior intention.
Hypothesis 6: There is a positive joint influence of perceived risk, perceived cost, perceived usefulness, perceived ease of use (TAM), and government support on the behavioral intention to use financial technology.
Hypothesis 7: There is a positive influence of behavioral intention on intention to use.
Conclusion:- The purpose of this paper is to explain that the technology acceptance model is largely based on behavioral intention, but the variables within it are still developing. The majority of previous research has focused on perceived risk, perceived cost, perceived usefulness, and perceived ease of use. The latest variable in the development of TAM itself is government support, which is an important part of making someone want to use a particular technological innovation, in this case financial technology. Another new element is examining the effects of perceived risk, perceived cost, perceived usefulness, perceived ease of use (TAM), and government support on the intention to use financial technology. It is hoped that this can later become a basis for researchers to use the research model in further studies.
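The hypotheses above can be examined empirically once survey scores for the constructs are available. The sketch below is a minimal, hypothetical illustration using ordinary least squares on simulated data: one model for the predictors of behavior intention (H1-H6) and one for the path from behavioral intention to intention to use (H7). The construct names mirror the hypotheses; the data and the estimation choice (OLS rather than, say, PLS-SEM) are assumptions for illustration only.

```python
# Hypothetical sketch: estimating the hypothesized paths with two OLS models
# on simulated survey scores. Coefficients below are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
df = pd.DataFrame({
    "perceived_risk": rng.normal(2.8, 0.9, n),
    "perceived_cost": rng.normal(3.0, 0.8, n),
    "ease_of_use": rng.normal(3.6, 0.7, n),
    "usefulness": rng.normal(3.8, 0.7, n),
    "gov_support": rng.normal(3.2, 0.9, n),
})
df["behavior_intention"] = (-0.2 * df.perceived_risk - 0.1 * df.perceived_cost
                            + 0.3 * df.ease_of_use + 0.4 * df.usefulness
                            + 0.2 * df.gov_support + rng.normal(0, 0.5, n))
df["intention_to_use"] = 0.6 * df.behavior_intention + rng.normal(0, 0.5, n)

# H1-H5 (and jointly H6): predictors of behavior intention
m1 = smf.ols("behavior_intention ~ perceived_risk + perceived_cost + "
             "ease_of_use + usefulness + gov_support", data=df).fit()
# H7: behavioral intention -> intention to use
m2 = smf.ols("intention_to_use ~ behavior_intention", data=df).fit()
print(m1.params, m2.params, sep="\n")
```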
3,330.6
2020-04-30T00:00:00.000
[ "Economics" ]
An Efficient and Privacy-Preserving Multiuser Cloud-Based LBS Query Scheme Location-based services (LBSs) are increasingly popular in today's society. People reveal their location information to LBS providers to obtain personalized services such as map directions, restaurant recommendations, and taxi reservations. Usually, LBS providers offer a user privacy protection statement to assure users that their private location information will not be given away. However, many LBSs run on third-party cloud infrastructures. It is challenging to guarantee user location privacy against curious cloud operators while still permitting users to query their own location information data. In this paper, we propose an efficient privacy-preserving cloud-based LBS query scheme for the multiuser setting. We encrypt LBS data and LBS queries with a hybrid encryption mechanism, which can efficiently implement privacy-preserving search over encrypted LBS data and is very suitable for the multiuser setting, with secure and effective user enrollment and user revocation. This paper contains security analysis and performance experiments to demonstrate the privacy-preserving properties and efficiency of our proposed scheme. Introduction Location-based services (LBSs) are increasingly popular in today's society. It is reported that up to 150 million people had used LBSs as early as 2014 [1]. People reveal their location information to LBS providers to obtain personalized services such as map directions, restaurant recommendations, and taxi reservations. The most common and most important service in an LBS system is the location query service. In LBS applications, a user can use a smartphone equipped with a GPS module to obtain accurate location information anytime and anywhere by submitting a query keyword of interest (e.g., hotel) to the LBS system. Upon receiving a location query request from the user, the LBS provider rapidly returns a target location list in which all locations are ranked in ascending order of the distances between the query user and these target locations. Every coin has two sides: although LBSs greatly facilitate people's lives, the user privacy disclosure problems in LBS applications are increasingly serious. LBS providers can mine LBS users' privacy by analyzing LBS queries or recovering spatially correlated data [2]. For example, LBS providers can easily obtain a user's mobility trace or even infer the user's real identity, health status, hobbies, and so on [3,4]. To address the privacy challenge in LBS systems, many solutions have been proposed, such as pseudo-identity techniques [5], location obfuscation [6-9], and private information retrieval via a trusted third party (TTP) [5,7]. These schemes have significantly promoted the further development of LBS applications.
With the rapid development of cloud computing, more and more LBS providers are beginning to consider outsourcing their location data and services to the cloud server to enjoy the numerous advantages brought by cloud computing, such as economic savings, great flexibility, quick deployment, excellent computation performance, and abundant bandwidth resources. However, the cloud server is usually not fully trusted by LBS providers, because it is operated by remote commercial organizations. Once the location data is outsourced to the cloud server in plaintext form, data security cannot be guaranteed. For example, a corrupted administrator of the cloud server may sell the location data outsourced by the LBS provider to obtain illegal profit. Presently, the most effective way to protect the confidentiality of outsourced location data is to encrypt the data before outsourcing it to the cloud server [10]. On the other hand, bare user query requests also give the cloud server opportunities to mine users' privacy, just as in a traditional LBS system. Therefore, user requests should also be encrypted before being submitted to the cloud server. However, data encryption makes the location query service a challenging task, since ciphertext no longer supports the numerical computation and character matching available in the plaintext domain. Therefore, two essential problems need to be solved in a cloud-based LBS application over encrypted outsourced location data: (1) how to find all target locations over the encrypted location data according to the encrypted user request; (2) how to compute or compare the distances between these target locations and the user's current location over the encrypted outsourced location data. A recent work [11] explores the challenging issue of how to implement a cloud-based LBS system over encrypted location data and proposes a privacy-preserving cloud-based LBS query scheme called "EPQ." The scheme enables the cloud server to perform LBS queries over the encrypted LBS data without divulging users' location information. However, the scheme can only enforce a location coordinate query according to a user's current location. In a practical LBS application, a goal-oriented keyword query is necessary for the user to accurately locate locations of interest (e.g., the user may need to accurately search for hotels nearby). Compared with the existing work, in this paper we propose an efficient and secure keyword-based query scheme that allows the user to first accurately locate desirable locations according to the encrypted query request and then rank the distances between these target locations and the user's current location, which greatly improves the user's location service experience. Moreover, our scheme is very suitable for a multiuser cloud environment, because it is equipped with flexible user enrollment and user revocation mechanisms.
In this paper, we make the following three key contributions: (i) First, we propose an efficient and privacy-preserving cloud-based LBS query scheme. To protect the security of location data and user requests against the curious cloud server, we adopt a hybrid encryption to encrypt the outsourced location data and user requests, while the cloud server can still provide accurate LBS query services for users by performing privacy-preserving and efficient search over the encrypted location data. In addition, our scheme is very suitable for the multiuser setting, because it is equipped with flexible user enrollment and user revocation mechanisms. (ii) Second, we provide detailed correctness analysis and security analysis. The analyses show that our scheme is correct and can simultaneously achieve user privacy preservation and confidentiality of LBS data. (iii) Lastly, we implement our scheme in Java and evaluate its performance on a real data set. Experimental results demonstrate that our proposed scheme is efficient and practical. The rest of our paper is organized as follows. In Section 2, we review related literature. In Section 3, we recall the bilinear pairing map, secure kNN, and the difficulty assumption of the discrete logarithm problem as preliminaries. We then formalize a system model and a threat model and state the problem in Section 4. We present our approach in Section 5. Analyses and performance evaluations are conducted in Sections 6 and 7, respectively. Finally, we draw our conclusions in Section 8. Related Work In this section, we review related works on privacy protection in traditional LBSs and cloud-based LBSs, respectively. Traditional LBS Privacy Protection. The privacy leakage problem in traditional LBSs has drawn much attention from researchers, and we review the main related literature. Firstly, a location k-anonymity model was introduced, which guarantees that an individual cannot be identified from k-1 other individuals [12]. In addition, in a distributed environment, an anonymous approach based on homomorphic encryption [13] was proposed to protect location privacy. However, when the anonymous region is sensitive or the individuals are in the same place, the sensitive location will still be leaked. Thus, a trusted third party (TTP) was proposed to manage the location information centrally [14-16]. To achieve an accurate query, a method was proposed to convert the original locations of LBS data and queries while maintaining the spatial relationship between them [16]. However, because the TTP holds many users' sensitive information, attackers can easily target it. A scheme without the TTP was then proposed, which protects the locations through private information retrieval [7]. Recently, considering mobile nodes, a distributed anonymizing protocol based on a peer-to-peer architecture was proposed [17], in which each mobile node is responsible for a specific zone. Besides, an information-theoretic notion was introduced to protect privacy in LBS systems [18]. An approach was proposed to protect both the client's location privacy and the server's database security by improving the oblivious transfer protocol [19]. For providing privacy-preserving map services, a new multiple mix-zones location privacy protection was proposed; with this method, users can query a route between two endpoints on the map without revealing any confidential location and query information [20]. Cloud-Based LBS Privacy Protection.
Considering the low computation and communication cost, LBS providers outsource the LBS data to the cloud server to compute accurate LBS queries, whereas the cloud server is semi-trusted. Hence the privacy problem is still a challenge in cloud-based LBSs, and there are several works on this problem. Firstly, a spatiotemporal predicate-based encryption was proposed for fine-grained access control [21]. Then an improved homomorphic encryption [11] was proposed to protect users' privacy and LBS data privacy. A privacy extension in crowdsourced LBSs [22] was proposed. To handle long-term privacy protection and fake path generation for LBSs, a dummy-based long-term location privacy protection [23] was proposed. Recently, two-tier lightweight network coding based on a pseudonym scheme in group LBSs [24] was proposed to protect privacy. Furthermore, a query scheme using a Bloom filter and bilinear pairing was proposed [25]. However, the literature above did not consider the multiuser condition (i.e., joining of registered users and revocation of expired users). Yet access by unregistered users and expired users is a typical scenario in cloud-based LBSs. Therefore, providing an efficient and privacy-preserving cloud-based LBS in multiuser environments is a non-negligible issue. Preliminaries In this section, we introduce several necessary tools used in our scheme, including the bilinear pairing map, the secure kNN computation technique, and the difficulty assumption of the discrete logarithm problem. Bilinear Pairing Map. Let G1 and G2 be two multiplicative cyclic groups with large prime order q. A bilinear pairing map e: G1 x G1 -> G2 [26,27] satisfies the following properties: (i) bilinearity; (ii) non-degeneracy; (iii) computability: for any elements P, Q in G1, there exists a polynomial-time algorithm to compute e(P, Q). Secure kNN. Secure kNN [28] enables an efficient kNN computation over encrypted data points. It adopts an asymmetric scalar-product-preserving encryption (ASPE) to achieve a distance comparison between encrypted data vectors. We briefly introduce the principle of this technique as follows. Definition 1 (asymmetric scalar-product-preserving encryption). Let E be an encryption function and E(p, K) be the encryption of a point p under a key K. E is an ASPE if and only if E preserves only the scalar product between an encrypted data point and an encrypted query point; that is, (1) p . q = E(p, K) . E(q, K), where p is a data point and q is a query point; (2) p_i . p_j != E(p_i, K) . E(p_j, K), where p_i and p_j are two data points. For ease of understanding, we describe the ASPE scheme in Algorithm 1. As shown in Algorithm 1, this scheme includes five parts: a key, a tuple encryption function, a query encryption function, a distance comparison operator, and a decryption function. Difficulty Assumption of the Discrete Logarithm Problem. Given a multiplicative group G with prime order q and generator g, an element a is selected randomly from Z_q^*, and y = g^a in G is computed. The difficulty assumption of the discrete logarithm problem (DLP) is defined as follows. Definition 2 (difficulty assumption of the discrete logarithm problem). Given G and y, it is difficult to compute the correct value of a. In other words, given a tuple (G, q, g, y), there is no efficient polynomial-time algorithm that outputs a.
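To make the ASPE property concrete, the following is a minimal numerical sketch of the idea behind secure kNN: data points are transformed with the transpose of a secret invertible matrix and queries with its inverse, so dot products between a data point and a query are preserved while dot products between two data points are not, and distance ranking can be done on the transformed vectors. This is an illustration of the general technique, not the exact construction or parameters used in this paper.

```python
# Minimal illustration of the ASPE idea behind secure kNN.
import numpy as np

rng = np.random.default_rng(42)
M = rng.normal(size=(3, 3))            # secret invertible matrix (almost surely invertible)
M_inv = np.linalg.inv(M)

def enc_point(p):   # data-point transform
    return M.T @ p

def enc_query(q):   # query transform
    return M_inv @ q

p1, p2 = rng.normal(size=3), rng.normal(size=3)
q = rng.normal(size=3)

# (1) preserved: p . q == E(p) . E(q)
print(np.isclose(p1 @ q, enc_point(p1) @ enc_query(q)))       # True
# (2) not preserved between two transformed data points
print(np.isclose(p1 @ p2, enc_point(p1) @ enc_point(p2)))     # generally False

# Distance-ranking trick: extend a 2-D location (x, y) to (x, y, -(x^2+y^2)/2)
# and a query (xq, yq) to (xq, yq, 1); a larger preserved dot product means
# the location is closer to the query.
def extend_loc(x, y):
    return np.array([x, y, -(x * x + y * y) / 2.0])

def extend_query(xq, yq):
    return np.array([xq, yq, 1.0])

locs = [(1.0, 2.0), (4.0, 4.0), (0.5, 0.2)]
query = (0.0, 0.0)
scores = [enc_point(extend_loc(*l)) @ enc_query(extend_query(*query)) for l in locs]
order = np.argsort(scores)[::-1]       # descending score == ascending distance
print([locs[i] for i in order])
```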
Background In this section, we formally introduce our system model and threat model and then state our proposed problem. System Model. In our system model, there are three entities: an LBS provider, a cloud server, and a group of LBS users, as shown in Figure 1. Next, we introduce each entity of our model as follows. (i) LBS Provider. An LBS provider is a location data owner. It outsources large-scale location data to the cloud server to enjoy low-cost storage services and powerful computation services. To ensure the confidentiality of the location data, all location data is uploaded to the cloud server after being encrypted by the LBS provider. In addition, when an LBS user wants to join the system, the LBS provider provides authentication and registration services for the LBS user. Once the LBS user passes authentication, the LBS provider sends some important security parameters to the user via secure communication channels. Correspondingly, the LBS provider is also able to revoke any expired LBS user, who no longer has query capabilities for the outsourced location data once revoked by the LBS provider. (ii) A Group of LBS Users. A group of LBS users are the location data users, who enjoy convenient LBSs by submitting LBS query requests to the LBS provider anywhere and anytime. To hide the query requests of LBS users and protect their privacy, LBS users first encrypt their query requests and then submit the encrypted query requests to the cloud server. Note that LBS users here refers to legal registered users; unregistered users and revoked users cannot enjoy the LBSs. (iii) Cloud Server. Upon receiving the encrypted LBS query request submitted by a legal LBS user, the cloud server is responsible for performing the query over the encrypted outsourced location data on behalf of the LBS user and returning the matching query results to the LBS user. In the whole query process, the cloud server learns nothing about the contents of the outsourced location data, the user's query request, or the current location of the LBS user. Problem Statements. In a conventional LBS system, the LBS data is usually organized as a category set and a location data set, as shown in Table 1(a). A CATEGORY denotes the general name of location data and contains multiple concrete location data items. Each concrete location data item is a four-tuple {CATEGORY, TITLE, COORDINATE (x, y), DESCRIPTION}, which describes the detailed information of a certain location. When a registered user searches for an interesting location, he/she submits the specified CATEGORY and his/her current location coordinates to the LBS system. The LBS system first searches the category set according to the submitted CATEGORY to obtain all target locations, then sorts the target locations in ascending order of the distances between the user's current location and these target locations (easily computed from the user's coordinates and each target location's coordinates), and finally returns the first k nearest locations to the query user if the query user sends an optional parameter k to the LBS system. This means that the LBS system can analyze what the LBS user is interested in and his/her real-time location whenever it receives an LBS query.
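For reference, the plaintext query flow described above (filter by CATEGORY, sort by distance to the user, return the first k results) can be summarized by the following sketch; the record fields and values are illustrative only.

```python
# Plaintext baseline of the LBS query described above: filter by CATEGORY,
# sort by distance to the user, return the first k results.
from math import hypot

location_data = [
    {"category": "hotel", "title": "Hotel A", "coord": (112.93, 28.22), "description": "..."},
    {"category": "hotel", "title": "Hotel B", "coord": (112.95, 28.19), "description": "..."},
    {"category": "restaurant", "title": "Diner C", "coord": (112.94, 28.21), "description": "..."},
]

def lbs_query(category, user_coord, k=10):
    targets = [rec for rec in location_data if rec["category"] == category]
    targets.sort(key=lambda rec: hypot(rec["coord"][0] - user_coord[0],
                                       rec["coord"][1] - user_coord[1]))
    return targets[:k]

print([rec["title"] for rec in lbs_query("hotel", (112.94, 28.20), k=2)])
```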
To ensure the confidentiality of LBS data and enable registered users to perform efficient location queries in a privacy-preserving manner when the LBS provider outsources LBS data and query services to the cloud server, in this paper we adopt a hybrid encryption mechanism to encrypt the LBS data. The encrypted version of the LBS data is shown in Table 1(b), where E1, E2, and E3 denote different encryption schemes, whose constructions are formally proposed in the next section. By encrypting different fields of the LBS data with the different encryptions E1, E2, and E3, our scheme allows the cloud server to provide over encrypted location data exactly the same query service as in the plaintext environment described above, while no information about the location data or the user's query request is exposed to the cloud server. From the point of view of LBS users, compared with the LBS system in the plaintext environment, the only difference in our scheme is that an LBS user needs to encrypt the interested CATEGORY and his/her location coordinates to generate a query trapdoor. The cloud server performs the LBS query over the encrypted outsourced location data according to the query trapdoor. Of course, the necessary decryption operations are needed by the LBS user once the encrypted LBS query results are received; however, this is not our concern in this paper. In addition, since the LBS system is a typical multiuser application, our scheme designs efficient and flexible user registration and user revocation mechanisms to guarantee that only registered users are able to use the LBS system, while unregistered or revoked users have no access to it. A Privacy-Preserving Multiuser LBS Query Scheme Based on Hybrid Encryption In this section, we describe the implementation details of our privacy-preserving multiuser LBS query scheme. From the system-level point of view, our scheme includes six key modules: system initialization, new user grant, location data encryption, query trapdoor generation, search over encrypted location data, and user revocation. Each module is operated by one entity independently or by multiple entities interactively, and all modules together constitute our privacy-preserving multiuser LBS query system. System Initialization. The system initialization operation is executed by the LBS provider to set up the system running environment. The LBS provider takes a large security parameter as input and first generates two multiplicative cyclic groups G1 and G2 with large prime order q, equipped with the bilinear pairing map e: G1 x G1 -> G2. Let g be a generator of G1. Then, the algorithm defines a cryptographic hash function h1: {0, 1}* -> G1, which maps a message of arbitrary length to an element of G1. Lastly, the algorithm chooses a random value s in Z_q^* and generates a 3 x 3 invertible matrix M, both kept secret by the LBS provider, and publishes the public parameters PP = {G1, G2, q, g, e, h1}. New User Grant. When a new LBS user u wants to join the system, the LBS provider registers the new user in this phase. At first, the LBS provider selects a random value x_u in Z_q^* for u and computes s/x_u and the inverse matrix M^{-1} of M. Then s/x_u, M^{-1}, and the other required secret parameters are sent to the user u via secure communication channels. Upon receiving them, u randomly selects a secret value r_u in Z_q^*, keeps r_u and M^{-1} secret, and uses the received value s/x_u to compute his/her registration secret key. At last, u's identity together with the corresponding registration value is sent to the cloud server, which stores this tuple in a user list L_U. Location Data Encryption.
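The attribute-wise (hybrid) layout can be sketched as follows: a symmetric cipher stands in for E3 on TITLE and DESCRIPTION, the secret invertible matrix stands in for E2 on an extended coordinate vector, and the pairing-based E1 on CATEGORY is only stubbed out, since it would require a pairing library. The coordinate extension and all concrete values are assumptions for illustration, not the paper's exact construction.

```python
# Simplified sketch of the attribute-wise encryption layout: a symmetric
# cipher (Fernet, AES-based) for TITLE and DESCRIPTION, and a secret
# invertible-matrix transform for the extended coordinate vector. The
# pairing-based category encryption E1 is only stubbed out here.
import numpy as np
from cryptography.fernet import Fernet

sym_key = Fernet.generate_key()        # plays the role of the key behind E3
fernet = Fernet(sym_key)
M = np.random.default_rng(7).normal(size=(3, 3))   # secret matrix behind E2

def e1_stub(category: str) -> str:
    # Placeholder for the pairing-based encryption of CATEGORY.
    return "E1(" + category + ")"

def e2_coord(x: float, y: float) -> np.ndarray:
    v = np.array([x, y, -(x * x + y * y) / 2.0])   # assumed extended coordinate vector
    return M.T @ v

def e3_text(text: str) -> bytes:
    return fernet.encrypt(text.encode())

record = {"category": "hotel", "title": "Hotel A", "coord": (112.93, 28.22),
          "description": "Near the river"}
encrypted_record = {
    "category": e1_stub(record["category"]),
    "title": e3_text(record["title"]),
    "coord": e2_coord(*record["coord"]),
    "description": e3_text(record["description"]),
}
print(encrypted_record["category"], fernet.decrypt(encrypted_record["title"]).decode())
```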
To guarantee the security of the location data, the LBS provider needs to encrypt all location information before outsourcing it to the cloud server. In this paper, to enable an efficient and privacy-preserving LBS query, we use different encryption mechanisms to encrypt the different attributes of the location data. Without loss of generality, we use {CATEGORY: cat; TITLE: title; COORDINATE: (x, y); DESCRIPTION: desc} to denote any location data item L belonging to category cat in the category set. The LBS provider takes the following three steps to encrypt the location data L. First, the LBS provider uses its secret value s to code the CATEGORY attribute as h1(CATEGORY)^s and further employs the bilinear pairing map to calculate e(h1(CATEGORY)^s, g). We use E1 to denote this coding operation of the CATEGORY attribute, so that E1(CATEGORY) = e(h1(CATEGORY)^s, g). Second, the LBS provider uses the secretly preserved invertible matrix M to encrypt L's coordinate (x, y); correspondingly, we use E2 to denote this encryption operation. Third, for the remaining attributes TITLE and DESCRIPTION, the LBS provider adopts any semantically secure symmetric encryption, such as AES under a given key, denoted as E3 in our paper. Query Trapdoor Generation. To preserve the user's query privacy and enable correct search over encrypted location data, a query user with current location coordinate (x_q, y_q) needs to encrypt his/her query request before it is submitted to the cloud server. In this paper, query trapdoor generation is conducted in two steps. First, the user chooses a desired query objective denoted as Q (e.g., Q = hotel) and uses the secret value granted by the LBS provider together with the secret value randomly chosen by himself/herself in the user grant phase to encrypt Q as two blinded group elements derived from h1(Q). Search over Encrypted Location Data. After the query user u generates a query trapdoor T_u(Q), he/she submits T_u(Q) and a parameter k to the cloud server. Upon receiving the query trapdoor T_u(Q) and k, the powerful cloud server is responsible for searching over the encrypted outsourced location data on behalf of the query user, without knowing any plaintext information of the outsourced location data or the user's query request. If the user is a legal user, the cloud server returns the first k encrypted target locations that satisfy the query and are nearest to the query user. Therefore, in the whole query process, the cloud server must perform two key operations in the encrypted domain: (1) searching the encrypted category set according to the query trapdoor to obtain all target locations; (2) sorting the distances between the target locations and the user's current location in ascending order. To achieve this goal, the cloud server processes the search in two steps. First, the cloud server looks up the query user u's registration information from the user list L_U. If the user information does not exist, the cloud server rejects the query; otherwise, it linearly scans the encrypted category set and obtains all encrypted target location data whenever it finds an encrypted E1(CATEGORY) in the encrypted category set that satisfies the matching equation. Second, upon obtaining all target locations, the cloud server sorts the distances between the target locations and the user's query location by evaluating a comparison over the encrypted coordinates E2(x_i, y_i) and E2(x_j, y_j) of any two locations L_i and L_j satisfying the query. If the corresponding inequality holds, this indicates that the target location L_i is closer to the query user than the target location L_j; hence, L_i is sorted in front of L_j. Finally, the cloud server returns the first k encrypted locations to the query user u.
User Revocation. User revocation is an essential yet challenging task in a practical multiuser application such as an LBS system. In related literature supporting user revocation, to prevent revoked users from continuing to access outsourced cloud data, the data provider usually has to rebuild the data index or re-encrypt large amounts of data, re-upload them to the cloud server, and issue new keys to the remaining users. This unavoidably brings heavy computation and communication cost for the data provider because of the high dynamics of users in the cloud environment. In this paper, we propose an efficient user revocation mechanism without any data re-encryption or key update operations, while still effectively preventing the revoked user from searching the outsourced location data. More concretely, for a user u who is to be revoked, the LBS provider first sends the information about u to the cloud server. Then, the cloud server scans the user list L_U, finds the entry of u, and deletes it. Once u's tuple is deleted from L_U, u no longer has the capability to search the location data stored at the cloud server: although u can still generate a legal query trapdoor, without u's stored registration value the cloud server cannot complete the matching between the trapdoor and an encrypted CATEGORY according to the query scheme proposed in Section 5.5. Analysis In this section, we analyze the search correctness and security to show that our proposed scheme is correct and secure. Search Correctness Analysis. When an authorized query user u submits his/her query trapdoor T_u(Q) to the cloud server, the cloud server first obtains all encrypted locations satisfying the query by performing a matching operation between each encrypted CATEGORY and T_u(Q). Specifically, the cloud server judges whether the ratio of the two pairing values computed from u's registration information and the two components of T_u(Q) equals E1(CATEGORY). If the equation holds, this indicates that the query Q correctly matches E1(CATEGORY), and the cloud server obtains all target locations belonging to CATEGORY; the correctness follows directly from the bilinearity of the pairing. Then, for any two target locations L_i and L_j, the cloud server is able to determine whether L_i is closer to the query user's current location than L_j by evaluating the comparison over the encrypted coordinates, because the matrix encryption preserves the scalar products required for the distance comparison. Thus, the cloud server is able to sort all target locations in ascending order according to the above distance comparisons and return the first k nearest locations to the query user. Security Analysis. In our proposed scheme, three encryption schemes E1, E2, and E3 are employed to protect the confidentiality of the LBS data. In this section, we analyze the security of our scheme against the "honest-but-curious" cloud server in the multiuser environment.
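The list-based revocation described above amounts to deleting the user's entry from the user list kept by the cloud server, after which that user's trapdoors can no longer be matched. A toy sketch, with illustrative stand-ins for the stored group elements:

```python
# Sketch of list-based revocation: the cloud keeps per-user registration
# entries; revoking a user just deletes the entry, so subsequent queries
# from that user are rejected. Names and values are illustrative.
user_list = {"alice": "registration_value_alice", "bob": "registration_value_bob"}

def can_query(user_id: str) -> bool:
    # The server cannot complete trapdoor matching without the stored entry.
    return user_id in user_list

user_list.pop("bob", None)          # LBS provider revokes bob
print(can_query("alice"), can_query("bob"))   # True False
```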
E3 is a semantically secure symmetric encryption, such as AES, that encrypts the TITLE and DESCRIPTION fields of the LBS data; the semantic security of AES guarantees the security of these fields. We use the secure kNN encryption technique, denoted as E2, to encrypt the COORDINATE attribute of the LBS data and the query user's coordinate, enabling a secure and effective distance comparison; the security of the COORDINATE attribute and of the query user's coordinate relies mainly on the security of the secure kNN scheme. For the CATEGORY attribute of the LBS data, we use E1 to enable secure and flexible search over the encrypted location data. Specifically, given a location data item with category CATEGORY, the ciphertext can be denoted as E1(CATEGORY) = e(h1(CATEGORY)^s, g) = e(h1(CATEGORY), g)^s. Since e(h1(CATEGORY), g) is a group element of G2 with large prime order, the secret key s is intractable from E1(CATEGORY) due to the well-known DLP assumption. Without the secret key s kept by the LBS provider, the cloud server cannot recover the message from the encryption E1. In addition, in the multiuser environment, the system should prevent an illegal user from querying the LBS data stored in the cloud server. In our scheme, when a registered user u wants to query the LBS location data using Q, u uses his/her secret query keys to generate the query trapdoor T_u(Q), which consists of two blinded group elements derived from h1(Q). Under the DLP assumption, attackers cannot compute the secret exponents from T_u(Q). Without the correct secret query keys, an illegal user cannot generate a correct query trapdoor and therefore cannot perform a correct query over the encrypted location data. For a revoked user u, although u can still generate the trapdoor T_u(Q), u can no longer make the cloud server perform a correct query on his/her behalf, because the necessary query parameter has been deleted from the list L_U by the cloud server in the user revocation phase. Evaluation In this section, we evaluate the performance of our proposed scheme from the perspectives of the LBS provider, the LBS user, and the cloud server. The LBS provider and LBS user sides run on Windows 7 desktop systems with an Intel Core 2 Duo CPU at 2.26 GHz, 3 GB of memory, and a 320 GB hard drive, and the cloud server side is a virtual machine with a Core 2 Duo CPU (4 x 2.394 GHz) and 8 GB of memory on a Dell blade server M610 running Linux CentOS 5.8. All programs are developed in Java, and the JPBC library [29] is employed to implement group operations. We execute all experiments on a real data set collected from OpenStreetMap [30], with 50 categories and 1000 concrete location data items, extracted from the location data belonging to Yuelu District, Changsha, China.
7.1. LBS Data Encryption. Figure 2(a) shows that the time cost of encrypting LBS data for the LBS provider increases linearly with the size of the category set when the total number of location data items remains unchanged (n = 1000). Figure 2(b) shows that the number of concrete location data items has little influence on the time cost of encrypting LBS data when the size of the category set is fixed (50 categories). Recall that, in our scheme, E1 is used to encrypt categories in the category set, while E2 and E3 are used to encrypt the location data. The experimental results in Figure 2 illustrate that the time cost of encrypting LBS data is closely related to the encryption E1 and is almost unaffected by E2 and E3. This is reasonable, since E1 involves the time-consuming pairing operation and an exponentiation over the elliptic curve group, whereas E2 and E3 take almost no time when encrypting an extremely small message and a 3-dimensional vector, respectively. Query Request Encryption. According to the query trapdoor generation proposed in Section 5.4, the whole query request encryption involves three key operations to encrypt an interested query keyword and the current location coordinate for a registered LBS user: the hash operation, the exponentiation operation on the group, and the matrix multiplication between a 3 x 3 matrix and a 3-dimensional column vector. The time cost of each operation under our software/hardware setting is shown in Table 2. Therefore, the total time cost of generating a query request for an LBS user amounts to one hash operation, two group exponentiations, and one matrix multiplication, approximately 161 ms, which is extremely efficient in practice. Figures 3(a) and 3(b) show the time cost of search over encrypted location data for the cloud server. We can observe that the number of categories and the number of encrypted location data items have little influence on the search overhead for the cloud server. This is because the main time cost of the search lies in only two relatively time-consuming pairing operations, while the linear search over 50 categories according to the query trapdoor and the 3-dimensional vector computations for distance comparison take almost no time. Figure 3(c) shows the average response time of our query scheme for different numbers of query users. We can see that the response time grows linearly with the number of query users. When the number of query users reaches 100, the response time is about 6.82 s, which is efficient enough for practical applications. Conclusion In this paper, we propose a privacy-preserving multiuser LBS query scheme based on hybrid encryption in the cloud environment. By adopting different encryptions for different attributes of the LBS data, our proposed scheme achieves users' location privacy protection and the confidentiality of the LBS data. In particular, the LBS query is performed in the ciphertext domain, so LBS users can get accurate LBS query results without disclosing their private information. Besides, we consider LBS user accountability and LBS user dynamics to prevent unregistered users and expired users from accessing the system. Extensive experiments show that our proposed scheme is highly efficient. In the future, we will consider collusion attacks in cloud-based LBSs.
Figure 2: (a) The time cost of encrypting LBS data for the LBS provider for different sizes of the category set with a fixed number of location data items (n = 1000); (b) the time cost of encrypting location data for the LBS provider for different numbers of location data items with a fixed size of the category set (50 categories). Figure 3: (a) The time cost of search for the cloud server for different numbers of categories with a fixed number of location data items (n = 1000); (b) the time cost of search for the cloud server for different numbers of location data items with a fixed number of categories; (c) the system response time for different numbers of search users with a fixed number of categories and location data items (n = 1000, 50 categories). Table 2: Time cost of operations.
7,233.8
2018-03-08T00:00:00.000
[ "Computer Science" ]
DIY photometer in determining the beginning of dawn time in Cimahi City The use of the SQM as a calibrated portable photometer is currently a hot topic in astronomical research on measuring the brightness of the night sky. A DIY Sky Quality Meter (SQM), named Photometer D.I.Y – CJ'01, has been successfully built and tested on a limited basis; the output of this photometer is the magnitude value per square arc second (mag/sq arc second ~ mpsas). This research aims to determine the performance of the D.I.Y – CJ'01 Photometer in measuring the night sky's brightness in order to determine the start of dawn. The research method used was an experiment with data collection from the D.I.Y Photometer – CJ'01 and an SQM Unihedron in Cimahi City during the New Moon and Full Moon. The results of data processing using the solver method show that the D.I.Y – CJ'01 Photometer performs almost the same as the SQM Unihedron, based on the inflection point value that determines the start of dawn, and also indicate the existence of a pseudo-night in the city of Cimahi, which is consistent with typical urban areas dominated by light pollution and air pollution. INTRODUCTION As is known, the agreed dawn time in Indonesia is when the sun is at -20 degrees, but in recent years the issue has reappeared, with claims that this dawn time is too early. This issue re-emerged from several studies and research articles in proceedings, journals, and books (Rakhmadi et al., 2020; Yazid Raisal et al., 2019; Saksono, 2017), whose results placed the sun at a depth of 18 degrees. The government then, through the Ministry of Religion, re-measured the dawn time in locations far from light pollution, namely Labuan Bajo Beach and Mount Timau, finding that the depth of the sun at dawn was -19.5 degrees, which was then rounded to -20 degrees (Amin, 2020; Setyanto, 2021). In Islam, based on fiqh studies, there are two types of fajr: fajar kadzib (false) and fajar shadiq (true). Fajar kadzib usually comes early, decorating the eastern sky with a weak intensity that resembles a triangle or a wolf's tail rising along the ecliptic line; it originates from interplanetary dust exposed to sunlight, producing the weak glow known as zodiacal light. Fajar shadiq then appears slowly from the horizon, spreading horizontally and growing brighter and brighter, and originates from the sun moving up toward the horizon (Setyanto et al., 2021). The appearance of fajar shadiq is used as the sign of the beginning of dawn for prayer and fasting, based on various studies of the Koran and the historical traditions examined in fiqh (Herdiwijaya, 2020). Meanwhile, in astronomy, dawn is a phenomenon that describes the transition from night to day, in which the sun begins to creep up toward the horizon. This process is divided into three phases based on the sun's position: astronomical dawn, nautical dawn, and civil dawn. At astronomical dawn, the sun is at a depth of -18 to -12 degrees below the horizon, where the sky is still dark, so any object is difficult to recognize or see. Nautical dawn is when the sun is at a depth of -12 to -6 degrees, where the sky is still dark enough that conditions remain dim and objects appear blurry. During civil dawn, the sun is at a depth of -6 to -0.5 degrees, where the scattering of sunlight is strong enough for the eye to recognize various objects (Herdiwijaya, 2017).
Therefore, knowing the beginning of dawn with the naked eye is very difficult; a sensitive tool is needed to measure the change from dark to light that indicates the start of dawn (Barbur & Stockman, 2010). That tool is a photometer, a device used to measure illumination or irradiation, which detects the intensity of light scattering, absorption, and fluorescence (Ohno et al., 2020). Most photometers are based on photoresistors or photodiodes, whose electrical properties change when irradiated with light, which can then be detected by specific electronic circuits (Yurish, 2005). The Sky Quality Meter from the Unihedron company of Ontario, Canada, uses a TSL237 sensor to convert light into frequency. This Unihedron Sky Quality Meter (SQM) photometer has a HOYA CM500 filter with a spectral range between 300-720 nm (peak at 500 nm), which means the SQM detector response is close to the visual spectral sensitivity of the human eye (Ngadiman et al., 2020). The TSL237 sensor converts light to frequency; it combines a silicon photodiode and a current-to-frequency converter on a single monolithic CMOS integrated circuit, and its output is a square wave with a frequency directly proportional to the light intensity (irradiance) on the photodiode (Hänel et al., 2018). Around 2005, Cinzano, in a research report on night sky photometry, used the Unihedron Sky Quality Meter (SQM) photometer, which is low-cost, small, and very easy to use (point the photometer at the zenith, press the button, and read the data on the screen), yet still accurate enough for scientific research in measuring the brightness of the night sky (Cinzano, 2005). The TSL237 sensor is also embedded in the D.I.Y-CJ'01 Photometer. It was successfully built using an Arduino Nano microcontroller (Asmoro et al., 2022) and is equipped with an LCD screen to display measurement results, a time display with a real-time clock (RTC) module, and a data logger to store measurement results, as well as an internal power source, all packaged in one device, which is the added value that distinguishes it from the existing Unihedron SQM variants (Figure 1). The SQM Unihedron and the D.I.Y-CJ'01 measure in the astronomical magnitude system mag/arcsec2 (magnitude per square arc second), the basis of which is that if an area of the sky contains exactly one star of magnitude X in every square arc second, then the brightness of the sky is X mag/arcsec2 (Hearnshaw, 2022). The magnitude system was introduced by the ancient Greek astronomer Hipparchos, who assigned magnitude 1 to the brightest stars visible to the naked eye and magnitude 6 to the faintest, noting that light pollution was not yet dominant at that time (Cunningham, 2020; Kaltcheva & Berry, 2023). The use of SQMs in determining the time of dawn has been reported by several researchers (Affendi et al., 2021; Musonnif, 2022; Putraga et al., 2022; Raisal et al., 2019; Saksono et al., 2020); the new contributions of this study are the use of a photometer built independently and a different data analysis technique using a solver. This article explains the process of collecting and processing sky brightness data to determine the start of dawn using the turning point approach, applied to data from both photometers, the SQM Unihedron and the D.I.Y-CJ'01. The results also test the performance of the D.I.Y-CJ'01 Photometer as a measuring instrument suitable for use as an alternative photometer for observing the night sky's brightness.
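Since the TSL237 output frequency is proportional to the irradiance on the photodiode, a DIY meter of this kind typically converts frequency to mpsas through the logarithmic magnitude relation with a calibrated zero point. The sketch below illustrates that conversion; the zero-point value is a placeholder and would have to be calibrated against a reference SQM, not a value taken from this article.

```python
# Hypothetical sketch: converting a TSL237 output frequency (proportional to
# irradiance) into magnitudes per square arcsecond. The zero point ZP is an
# assumed placeholder; it must be calibrated against a reference meter.
import math

ZP = 22.0          # assumed calibration constant (mpsas at 1 Hz)

def frequency_to_mpsas(freq_hz: float) -> float:
    # Pogson relation: magnitude decreases by 2.5 per factor-of-10 increase in flux.
    return ZP - 2.5 * math.log10(freq_hz)

for f in (0.05, 1.0, 50.0):          # darker sky -> lower frequency -> larger mpsas
    print(f, round(frequency_to_mpsas(f), 2))
```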
RESEARCH METHODS The research method used is an experimental method: the DIY CJ'01 Photometer was operated simultaneously with a calibrated SQM Unihedron photometer, and both were used to measure the night sky's brightness from 00.00 WIB to 06.00 WIB. This test was carried out in Cimahi City during the New Moon and Full Moon phases, as shown in Figure 2. The work steps in this research are as follows: a. Photometer installation; b. Set the photometer to measure the brightness of the night sky every 5 seconds; c. Download the night sky brightness data; d. Create a table containing the time and mpsas values; e. Plot the mpsas value against time; f. Convert the time column to a solar depression angle (https://www.esrl.noaa.gov/gmd/grad/solcalc/NOAA_Solar_Calculations_day.xls); g. Determine the inflection points with the Solver method in Microsoft Excel: (i) arrange a table whose first column is the sun's depth angle from -25 to -5 degrees, followed by columns containing the measured mpsas value and the mpsas value from the model m(x); (ii) the m(x) model has a formula consisting of a constant level, a normalization, a mean, and a standard deviation, as a Gaussian function; (iii) find the best model parameters by minimizing with Solver; (iv) determine several n standard deviations consistently to obtain three inflection points. The pattern formed during the transition from night to morning is characterized by a decrease in the measured mpsas value, so that when plotted, the graph shows a straight-line pattern that then turns downward over time. The start of this downturn is taken as the beginning of dawn, so by analyzing the turning point we obtain the value of the start of dawn (Rizkiawan et al., 2021); the turning points determined for the two nights are then compared to see whether the performance of the DIY-CJ01 Photometer (CJ01) is the same as that of the SQM LU-DL Unihedron. RESULTS AND DISCUSSION Geographically, the location in Cimahi City is at coordinates 6.894895 south latitude and 107.540804 east longitude, with an elevation of ±777 meters above sea level. The two photometers were placed together in one container and mounted at a height of ±3 meters, because the site is in a residential area, to avoid direct light. Sky brightness measurements were carried out on July 19, 2023 (New Moon) and August 3, 2023 (Full Moon), with the photometers directed at the zenith (Herdiwijaya, 2016). The sky brightness data during the transition from night to morning are presented in Figure 3 and Figure 4, with the horizontal axis being local time and the vertical axis being the sky brightness value in magnitude per square arc second (mag/sq arc second ~ mpsas). There is a similar graphical pattern between the measurements from the SQM and CJ01. In Figure 3, during the New Moon phase, the Moon and Sun are in the same direction in the sky, so the Moon is not visible at night; hence, the night sky brightness tends to stabilize at 18 mpsas before decreasing slowly as the Sun's light begins to contribute. In Figure 4, during the Full Moon phase, the Moon is in the sky throughout the night, rising in the east in the early evening, passing near the zenith after midnight, and setting in the west. This is recorded by the photometer and can be seen in the graph: at midnight the brightness value of the night sky is at a smaller value of about 15 mpsas, then rises to 17 mpsas as the Moon moves away from the zenith at which the photometer is pointed, and the mpsas value drops again as light from the Sun starts to contribute (Ahmed, 2021; Setyanto et al., 2021).
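Work step (f) above converts clock time into the Sun's depression angle. A rough equivalent of the NOAA spreadsheet step, using astropy and the site coordinates reported for Cimahi, is sketched below; the timestamps are arbitrary examples.

```python
# Sketch of work step (f): converting local timestamps into the Sun's
# altitude (negative values are depression angles) for the Cimahi site,
# using astropy instead of the NOAA spreadsheet. WIB is UTC+7.
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time
import astropy.units as u

site = EarthLocation(lat=-6.894895 * u.deg, lon=107.540804 * u.deg, height=777 * u.m)
times_wib = ["2023-07-19 04:00:00", "2023-07-19 04:30:00", "2023-07-19 05:00:00"]
times_utc = Time(times_wib) - 7 * u.hour          # convert WIB (UTC+7) to UTC

altaz = AltAz(obstime=times_utc, location=site)
sun_alt = get_sun(times_utc).transform_to(altaz).alt
for t, alt in zip(times_wib, sun_alt):
    print(t, "sun altitude = %.2f deg" % alt.deg)
```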
To find the turning point that marks the beginning of Fajr, both Islamically and astronomically, the time variable is converted into the angular variable of solar depth (depression) in degrees. Next, the solar-depth range from -25 to -5 degrees is selected to make it easier to determine the equation of the model fitted to the graph. This yields a blue curve, the mpsas values from the measurements, and an orange curve, the mpsas values from the model. Then, consistent multiples of the standard deviation (n) relative to the fitted mean and constant level are determined, and the resulting turning points are displayed as three consecutive vertical dashed lines colored (1) red, (2) green, and (3) blue.

Figure 5 shows the same graphical pattern, where the night sky brightness for solar depth angles between -23 and -17 degrees tends to stabilize around 18 mpsas for both the SQM and CJ01 data. Figure 6 also shows a similar graphical pattern for the data generated by both SQM and CJ01, where at solar depth angles of -23 to -15 degrees the night sky brightness values tend to stabilize at around 17 mpsas. It can therefore be seen that the presence of the Full Moon contributes to a decrease in the night sky brightness value by 1 mpsas, and the onset of the initial turning point at dawn is delayed by 2 degrees of the Sun's movement; this value differs for each measurement location because the Moon's trajectory in the sky differs (Cui et al., 2021; Krieg, 2021; Liu et al., 2022). Table 1 shows the calculated results for the three inflection points, (1) red, (2) green, and (3) blue, from the graphs obtained from SQM and CJ01 on July 19 and August 3, 2023. From the turning-point differences it can be seen that the SQM tends to turn earlier than CJ01, with the smallest difference occurring at turning point 3 on July 19, 2023, which is -0.03 degrees or about 0.17%, and the largest difference at turning point 3 on August 3, 2023, which is -0.50 degrees or about 3.2%. The significant difference at the turning point on August 3, 2023, is due to the contribution of the Full Moon moving across the zenith with a maximum altitude of 79 degrees in the southern sky.
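The conversion from local time to solar depression angle mentioned above is done in the paper with NOAA's solar-calculation spreadsheet; as an equivalent, hedged alternative, the sketch below uses astropy to compute the Sun's altitude at the Cimahi site. The timestamp, and therefore the resulting angle, is only an example.

```python
import astropy.units as u
from astropy.coordinates import AltAz, EarthLocation, get_sun
from astropy.time import Time

# Observing site used in the paper: Cimahi City (6.894895 S, 107.540804 E, ~777 m).
site = EarthLocation(lat=-6.894895 * u.deg, lon=107.540804 * u.deg, height=777 * u.m)

# Example timestamp in WIB (UTC+7); any logged time column could be converted the same way.
obs_time = Time("2023-07-19 04:30:00") - 7 * u.hour  # convert WIB to UTC

sun_altaz = get_sun(obs_time).transform_to(AltAz(obstime=obs_time, location=site))
solar_depth = sun_altaz.alt.to(u.deg).value  # negative while the Sun is below the horizon
print(f"Solar altitude (depth) at 04:30 WIB: {solar_depth:.2f} degrees")
```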
As discussed in the introduction, the standard adopted in Indonesia (the ijtihad used) for the Sun's altitude at the dawn prayer time is -20 degrees (below the horizon), based on shar'i and astronomical arguments that are considered strong (Budiwati, 2018). In astronomical terms, dawn begins at a solar depth angle of -18 degrees, called astronomical dawn. However, from processing the data obtained in both the New Moon and Full Moon phases, the turning-point value determining the start of dawn ranged from -12.76 to -15.05 degrees for the Full Moon and from -14.59 to -17.19 degrees for the New Moon. This indicates that a pseudo-night effect is occurring, namely a condition of only small changes in sky brightness because sunlight is absorbed by pollutant particles from air pollution accumulating in the lower atmosphere (Herdiwijaya, 2017). Artificial light emitted from the Earth's surface can also scatter off molecules or aerosols in the atmosphere and return to Earth as "skyglow." In a bright night sky this results in a loss of star visibility, especially in the region near the horizon, where the skyglow is brightest (Luginbuhl et al., 2014).

CONCLUSION

Based on observations with the two photometers, SQM and CJ01, in Cimahi City during both the New Moon and Full Moon phases, the two photometers show the same performance. This conclusion follows from calculating the turning point marking the beginning of dawn with the Solver method, which gives values close to each other, with the smallest calculated difference being -0.03 degrees, or about 0.17%. It appears that at solar depths of -20 to -18 degrees (around astronomical dawn) the sky brightness is still constant at its night-time level, while the change in sky brightness occurs at -17 degrees, or about 65 minutes before sunrise; this angle marks the beginning of dawn based on the sky brightness data in Cimahi. The observations indicate a false night in Cimahi City, a typical urban area dominated by light and air pollution; these two types of pollution also affect the derived solar depth value. In addition, measurements during the Full Moon phase include added moonlight, so the night sky brightness value decreases and the false-night effect lasts longer.

Figure 2. Location and data-collection setup in Cimahi City.

The work steps in this research are as follows (a code sketch of step g is given after the figure captions below):
a. Photometer installation.
b. Set the instrument to measure the brightness of the night sky every 5 seconds.
c. Download the night sky brightness data.
d. Create a table containing the time and mpsas values.
e. Plot the mpsas value against time.
f. Convert the time column to a solar depression angle (https://www.esrl.noaa.gov/gmd/grad/solcalc/NOAA_Solar_Calculations_day.xls).
g. Determine the inflection points with the Solver method in Microsoft Excel:
   i. Arrange a table whose first column is the Sun's depth angle from -25 to -5 degrees, followed by columns containing the MPSAS values from the measurements and from the m(x) model, respectively.
   ii. The m(x) model is a Gaussian function whose parameters are a constant level, a normalization, a mean, and a standard deviation.
   iii. Find the best model parameters by minimizing the misfit between model and data with Solver.
   iv. Determine several consistent numbers (n) of standard deviations to obtain the three inflection points.

Figure 3. Graph of local time versus mpsas value during the New Moon.
Figure 5. Mpsas value plot as a function of solar depression angle during the New Moon.
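To make the Solver step (g) concrete, the following is a minimal Python sketch of the same fit using least squares. The Gaussian-plus-constant form of m(x) follows step g.ii; taking the turning points at n = 1, 2, 3 standard deviations from the fitted mean is one plausible reading of step g.iv; and the synthetic data and starting values are purely illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def m(x, c, a, mu, sigma):
    """MPSAS model: constant night level plus a Gaussian term
    (constant level, normalization, mean, standard deviation), per step g.ii."""
    return c + a * np.exp(-((x - mu) ** 2) / (2.0 * sigma ** 2))

# x: solar depression angle in degrees, restricted to the range -25 to -5 (step g.i).
# y: measured MPSAS values; synthetic values stand in for the photometer data here.
x = np.linspace(-25.0, -5.0, 200)
rng = np.random.default_rng(1)
y = m(x, 18.0, -8.0, -5.0, 4.0) + rng.normal(0.0, 0.05, x.size)

# Least-squares fit, the scripted equivalent of Excel's Solver (step g.iii).
p0 = [18.0, -5.0, -8.0, 3.0]  # illustrative starting guesses
(c, a, mu, sigma), _ = curve_fit(m, x, y, p0=p0)

# Turning points at n = 1, 2, 3 standard deviations on the dark side of the fitted mean,
# mirroring the three dashed lines (red, green, blue) described in the text.
turning_points = [mu - n * abs(sigma) for n in (1, 2, 3)]
print("Fitted (c, a, mu, sigma):", c, a, mu, sigma)
print("Turning points (solar depth, degrees):", turning_points)
```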
3,893.4
2024-04-28T00:00:00.000
[ "Physics", "Environmental Science" ]
Identification of Cancer Dysfunctional Subpathways by Integrating DNA Methylation, Copy Number Variation, and Gene-Expression Data A subpathway is defined as the local region of a biological pathway with specific biological functions. With the generation of large-scale sequencing data, there are more opportunities to study the molecular mechanisms of cancer development. It is necessary to investigate the potential impact of DNA methylation, copy number variation (CNV), and gene-expression changes in the molecular states of oncogenic dysfunctional subpathways. We propose a novel method, Identification of Cancer Dysfunctional Subpathways (ICDS), by integrating multi-omics data and pathway topological information to identify dysfunctional subpathways. We first calculated gene-risk scores by integrating the three following types of data: DNA methylation, CNV, and gene expression. Second, we performed a greedy search algorithm to identify the key dysfunctional subpathways within pathways for which the discriminative scores were locally maximal. Finally, a permutation test was used to calculate the statistical significance level for these key dysfunctional subpathways. We validated the effectiveness of ICDS in identifying dysregulated subpathways using datasets from liver hepatocellular carcinoma (LIHC), head-neck squamous cell carcinoma (HNSC), cervical squamous cell carcinoma, and endocervical adenocarcinoma. We further compared ICDS with methods that performed the same subpathway identification algorithm but only considered DNA methylation, CNV, or gene expression (defined as ICDS_M, ICDS_CNV, or ICDS_G, respectively). With these analyses, we confirmed that ICDS better identified cancer-associated subpathways than the three other methods, which only considered one type of data. Our ICDS method has been implemented as a freely available R-based tool (https://cran.r-project.org/web/packages/ICDS). INTRODUCTION Cancer is a complex disease involving multiple biological processes and multiple factors, including genomic, epigenomic, and transcriptomic aberrations associated with cancer formation and development (Forozan et al., 2000;Zhang et al., 2012). Identifying molecular markers of cancer is a major challenge and can effectively clarify diagnosis and treatment. With the development of high-throughput sequencing technology, it is possible to understand the pathogenic mechanisms of cancer at the molecular level (Wang et al., 2014;Liu and Xu, 2015;Zhang et al., 2017). Large-scale cancer genomics projects, such as the Cancer Genome Atlas (TCGA) (Giordano, 2014), provide multiomics profiles from a large number of patient samples from many cancer types. This may provide a basis for the systematic understanding of the development of cancer. However, both copy number variation (CNV) and DNA methylation changes may affect gene expression, and integration of these data may enhance essential gene characterization in cancer progression (Kim et al., 2010;Xu et al., 2010). Many studies have shown that the use of multi-omics data for integrated analysis helps us to understand the pathogenic mechanisms of cancer. For example, Xu et al. (2010) have shown that the correlation between gene expression and CNV has biological effects on carcinogenesis and cancer progression. Additionally, Zhang et al. (2013) has classified the prognosis of patients with different subtypes of ovarian cancer by integrating four types of molecular data related to gene expression. 
In view of these works, our goal is to explore the multi-layered genetic and epigenetic regulatory mechanisms of cancer. Biological pathways are models containing structural information between genes, such as interactions, regulation, modifications, and binding properties. In addition, genes in the same pathway usually coordinately achieve a particular function. With the appearance of traditional pathway-analysis tools, such as GSEA (Subramanian et al., 2005) and SPIA (Tarca et al., 2009), the pathway-based approach has become the first choice for complex disease analysis to facilitate biological insights. Existing biological-pathway databases provide pathway topological information, such as the Kyoto Encyclopedia of Genes and Genomes (KEGG) (Wixon and Kell, 2000), which is continually updated to suit the needs of practical applications and acts as a systematic reference knowledge base for understanding metabolism and other cellular processes. Recently, the KEGG pathway database has become one of the most widely used resources for biological function annotation (Kanehisa et al., 2017). Based on pathway topological information, the subpathway concept was proposed in our previous studies, in which we confirmed that key subpathways - rather than entire pathways - were more suitable for explaining the etiology of diseases (Li et al., 2009, 2013). Subpathways contain fewer components, which enables a more accurate interpretation of the biological function of the disturbance and supports future studies in precision medicine. Subpathway-GM (Li et al., 2013) was proposed to identify disease-relevant subpathways by integrating information across genes, metabolites, and pathway structure within a given pathway; using this approach, 16 statistically significant subpathways were identified as associated with metastatic prostate cancer. SubpathwayMiner (Li et al., 2009) uses a subgraph-mining method to find subpathways in which all of the genes have highly similar functions; this method identified 36 dysfunctional subpathways - enriched in differentially expressed genes - associated with the initiation or progression of lung cancer. Recently, some other methods have been developed to identify subpathways from pathway topology. One example is MIDAS (Lee et al., 2017), which determines condition-specific subpathways and fully utilizes quantitative gene-expression data and network-centrality information across multiple phenotypes. Moreover, the following subpathway-activity measurement tools have been designed to identify activated subpathways between two phenotypes: PATHOME (pathway and transcriptome information) (Nam et al., 2014), TEAK (Topology Enrichment Analysis frameworK) (Judeh et al., 2013), and MinePath (Mining for Phenotype Differential Sub-paths in Molecular Pathways) (Koumakis et al., 2016). In addition, other methods have proposed network-based analyses to discover pathways de novo; for instance, de novo pathway enrichment extracts sub-networks enriched in active biological entities by combining experimental data with a large-scale interaction network (Batra et al., 2017). These subpathway-analysis methods mainly identify dysfunctional subpathways only by comparing the expression levels of their involved genes between tumor and normal tissues. In this way, other genetic characterizations of genes, such as CNVs and DNA methylation, are ignored.
However, both DNA methylation and CNVs in cancer genomes frequently perturb the expression levels of affected genes and thus disrupt pathways controlling normal growth. It is therefore necessary to integrate gene-expression information with other genetic information, such as DNA methylation and CNVs, to identify dysfunctional subpathways. In this study, we propose a novel method, termed Identification of Cancer Dysfunctional Subpathways (ICDS), to identify dysfunctional subpathways by integrating multi-omics data and pathway topological information. In ICDS, the first step is to calculate gene-risk scores to evaluate the contribution of genes to cancer states by considering the following three molecular characterizations: DNA methylation, CNV, and gene expression. In the second step, we convert each KEGG pathway into an undirected pathway network with genes as nodes and biological relationships as edges, and use a greedy search algorithm to search for candidate dysfunctional subpathways within the pathways for which the discriminative scores are locally maximal. Finally, a permutation test is used to calculate statistical significance for these dysfunctional subpathways. We applied the ICDS method to liver hepatocellular carcinoma (LIHC), head-neck squamous cell carcinoma (HNSC), and cervical squamous cell carcinoma and endocervical adenocarcinoma (CESC) datasets, and compared our results with three analytical methods that only used DNA methylation, CNV, or gene expression to calculate subpathway-activity scores (defined as ICDS_M, ICDS_CNV, and ICDS_G, respectively). Through these analyses, we confirmed that ICDS could better identify cancer-associated subpathways compared to the other three methods.

Datasets

The datasets containing gene expression, CNV, and DNA methylation information were collected from the TCGA website (https://tcga-data.nci.nih.gov/tcga/). We downloaded TCGA RNA-seq level-3 data, which were processed and normalized, and used the Reads Per Kilobase per Million mapped reads (RPKM) values for the gene-expression levels. Finally, there were 19,754 genes used in 424 LIHC, 546 HNSC, and 309 CESC samples. CNV profiling was estimated using the GISTIC2 method (Mermel et al., 2011) and was annotated to genes using the UCSC cgData HUGO probeMap. For example, the LIHC dataset contains CNVs in 24,776 genes from 373 cancer samples. In this study, we further filtered 364 LIHC samples with matched gene-expression profiles. We downloaded TCGA level-3 Illumina HumanMethylation450 Bead Array data for DNA methylation. The LIHC DNA methylation level-3 dataset contains β-values for 20,105 genes from 429 samples, which included 50 normal samples and 379 liver-cancer samples. The β-values are calculated as M/(M+U+100), with a range from 0 to 1, in which M and U are the methylated and unmethylated signal intensities, respectively. Overall, higher β-values indicate higher methylation. For all three datasets, we removed genes with values of zero in more than 80% of the samples. In this paper, we also use the data from the HNSC and CESC samples, which were processed using the above procedure. Detailed data information is shown in Supplementary Table S1. The KEGG pathway database contains experimentally verified pathway structural information (e.g., interactions, regulation, modifications, and binding between genes).
We collected 294 KEGG biological pathways, and each pathway was converted to an undirected network with genes as nodes and biological relationships as edges on the basis of pathway structural information, using the "iSubpathwayMiner" system (Li et al., 2009, 2013).

Calculated Gene Risk Score in Cancer

There are many factors influencing tumorigenesis, such as gene expression, CNV, and DNA methylation. For each gene, we calculated its risk score in cancer by considering the following three types of genetic molecular features: gene expression, CNV, and DNA methylation. With the above data, we used Student's t-test (Hogben, 1964) to calculate the adjusted p-value for the differential expression level and the differential methylation level of each gene between the tumor and normal samples (denoted by p gene and p methy). According to the results of the GISTIC2 analysis, the samples were divided into a copy-number-variated group and an unvariated group for each gene, and the differential expression level of the gene between the two groups was then calculated by Student's t-test (denoted by p cnv). It is difficult to define the quantitative relationship and the relative degree of each factor's influence on tumorigenesis, so we assume that gene expression, CNV, and DNA methylation contribute equally to cancer development. The gene risk score (RS) was calculated by integrating the above three p-values with Fisher's combined probability test. This method computes a combined statistic S from the adjusted p-values obtained from the three individual datasets, as shown in Equation (1), S = -2 ∑ ln(p_i), where the sum runs over the k = 3 p-values of a gene. The statistic S follows a χ² distribution with 2k degrees of freedom, and we then calculated the null-hypothesis p-value of the statistic S. Finally, we converted this p-value to a z-score according to the inverse-normal cumulative-distribution function (CDF), and the z-score was taken as the RS of the gene in cancer.

Calculated Subpathway-Activity Score

Previous studies have confirmed that subpathways can provide more detailed biological information than whole pathways. In this study, we proposed a novel method to combine the gene-risk score with the pathway topological structure to infer subpathway activities. The RS of each gene was obtained by the above method, considering gene expression, CNV, and methylation. For a KEGG pathway, we performed a greedy algorithm to search for dysfunctional subpathways within the pathway for which the discriminative scores were locally maximal. Specifically, the search algorithm starts from a seed gene i that has a significantly high risk score (p < 0.001) and expands iteratively, at each step selecting one of the neighbors of the current subpathway to join it. For a subpathway k, the activity score (AS_k) is the average of the RS of the member genes in the subpathway, calculated by Equation (2), AS_k = (1/n) ∑ RS_i, in which i is the index of the gene in the subpathway k, while n is the number of genes involved in the subpathway. At each iteration, the algorithm adopts the gene, from the neighbors of the genes in the current subpathway, that produces the maximal increase from AS_k to AS_k+1. The search stops when no additional gene increases the score AS_k+1 above (1+r)·AS_k, or when the distance between any two nodes in the current subpathway would exceed 3, in order to keep the search local. The improvement rate r is chosen to avoid overly large subpathway regions, which would add redundant, weak information.
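Below is a minimal Python sketch of the risk-score combination and the subpathway activity score just described, assuming the three adjusted p-values per gene are already available; it illustrates the calculation only and is not the authors' released R implementation.

```python
import numpy as np
from scipy import stats

def gene_risk_score(p_values):
    """Combine k p-values (e.g. expression, CNV, methylation) with Fisher's method
    and return a z-score risk score, following the description above."""
    p = np.asarray(p_values, dtype=float)
    s = -2.0 * np.sum(np.log(p))                 # combined statistic S
    p_comb = stats.chi2.sf(s, df=2 * len(p))     # S ~ chi-square with 2k degrees of freedom
    return stats.norm.isf(p_comb)                # inverse normal CDF: p-value -> z-score

def activity_score(risk_scores):
    """Subpathway activity score: the average risk score of its member genes."""
    return float(np.mean(risk_scores))

# Example: a gene with strong evidence in all three data types gets a high risk score.
print(gene_risk_score([1e-6, 3e-4, 2e-5]))   # large positive z-score
print(activity_score([3.1, 2.4, 1.8, 0.9]))  # ~2.05
```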
The parameter r = 0.05 has been demonstrated to be appropriate in the greedy heuristic algorithm applied in the biological network (Chuang et al., 2007). When the Jaccard index between each pair of subpathways in the same pathway was more than 0.6, they were combined, which ensured that the subpathways we found in our method contained more information and reduce redundancy. Furthermore, we only considered subpathways with more than five genes and less than 100 genes, to avoid overly narrow or broad functional subpathways. Significance Test of the Subpathway We provided two statistical test methods for each candidate subpathway, of which one was a whole gene-based perturbation, and the other was a local-gene perturbation in a particular pathway. Users can choose the test method that they prefer. The first test perturbs the gene labels on the entire gene list in the pathway networks, and recalculates the activity score of the subpathway, denoted as AS k_perm1 . This test was used to test the correlation between real subpathways and disease phenotype. In this study, we performed 10,000 perturbations for this test and calculated statistically significant p-value = M/N, in which M is the number of AS k_perm1 greater than the real subpathway score AS k , and N is the number of perturbations. In addition, the second test perturbed the gene names in the pathway to which the subpathway belonged, and recalculated the activity score of the subpathway, denoted as AS k_perm2 . This test was used to test the correlation between real subpathways and pathway structure. We also performed 10,000 perturbations and the score of each real AS k was indexed on the null distribution of all AS k_perm2 whose p-values could be evaluated. The p-values were adjusted using the false discovery rate (FDR) method proposed by Benjamini and Hochberg to correct for multiple comparisons (Benjamini et al., 2001). In this study, both FDR at 0.001 was used as the subpathway-significance threshold. We have implemented ICDS as an R-based package that is publicly available on CRAN 2 . Analyses of Hepatocellular Carcinoma Data A workflow diagram of the ICDS is shown in Figure 1. We first applied ICDS to identify dysfunctional subpathways in LIHC. The LIHC dataset was obtained from TCGA, and its detailed information is shown in Supplementary Table S1. In the LIHC dataset, we calculated the risk score of 16,207 genes by considering the following three types of genetic molecular features: gene expression, CNV, and DNA methylation. We set the genes with p < 0.001 (derived from the combined statistic S) as the seed genes in the pathway network for the subpathway search algorithm (see Materials and Methods). Subpathways were selected which satisfied two permutation tests with FDR1 < 0.001 and FDR2 < 0.001 out of the 10,000 permutations. ICDS identified 19 dysfunctional subpathways associated with LIHC, belonging to 12 entire pathways ( Table 1 and Supplementary Table S2), of which up to nine were reported to be associated with tumor occurrence, development, and metastasis. The most significant subpathway was path 00230_1 in purine metabolism, which contained 61 genes. Some studies have confirmed that the purine-metabolism pathway is highly correlated with the occurrence and metastasis of liver cancer. In multiple cancer cells, a marked imbalance in the enzymic pattern of purine metabolism is linked with transformation or progression, such as in kidney, liver, and colon carcinomas (Weber, 1983). 
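To illustrate the greedy expansion and the first (whole-gene) permutation test described in the Methods above, the sketch below uses networkx on a generic pathway graph. The graph, gene names, risk scores, seed choice, and permutation count are illustrative assumptions, and the second, within-pathway permutation scheme is omitted for brevity.

```python
import networkx as nx
import numpy as np

def grow_subpathway(G, rs, seed, r=0.05, max_dist=3):
    """Greedy expansion from a seed gene: at each step add the neighbor that maximally
    increases the mean risk score; stop when the gain is below (1+r) times the current
    score (seeds are assumed to have high positive scores) or the distance constraint fails."""
    nodes, score = {seed}, rs[seed]
    while True:
        neighbors = {n for v in nodes for n in G.neighbors(v)} - nodes
        best, best_score = None, score
        for n in neighbors:
            cand = nodes | {n}
            cand_score = np.mean([rs[v] for v in cand])
            if cand_score > best_score and nx.diameter(G.subgraph(cand)) <= max_dist:
                best, best_score = n, cand_score
        if best is None or best_score <= (1 + r) * score:
            break
        nodes.add(best)
        score = best_score
    return nodes, score

def permutation_pvalue(G, rs, sub_score, sub_size, n_perm=10000, rng=None):
    """Whole-gene permutation: recompute the activity score of random gene sets of the
    same size and report the empirical p-value M/N (the paper uses 10,000 permutations)."""
    rng = rng or np.random.default_rng(0)
    genes = list(rs)
    scores = [np.mean([rs[g] for g in rng.choice(genes, sub_size, replace=False)])
              for _ in range(n_perm)]
    return float(np.mean(np.array(scores) >= sub_score))

# Example usage on a toy pathway graph (hypothetical gene names and risk scores).
G = nx.path_graph(["g1", "g2", "g3", "g4", "g5"])
rs = {"g1": 2.0, "g2": 2.7, "g3": 3.5, "g4": 1.0, "g5": 0.2}
sub, score = grow_subpathway(G, rs, seed="g2")
print(sub, score, permutation_pvalue(G, rs, score, len(sub), n_perm=1000))
```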
The subregion corresponding to the subpathway included 61 genes (Supplementary Figure S1A), such as adenosine monophosphate deaminase 1 (AMPD1) and adenosine kinase (ADK), which are important enzymes involved in purine metabolism. ADK plays a significant role in affecting apoptosis and may become a target for the treatment of cancer (Dzeja et al., 1998). More evidence is mounting regarding the direct relationship between human diseases and defects in ADK and AMP metabolic signaling (e.g., AMPD) (Pavlova and Thompson, 2016), a set of collaborative interactions that converts adenosine monophosphate (AMP) to inosine monophosphate (IMP) as part of the purine nucleotide cycle. Compared with normal hepatocytes, the levels of ADK and AMPD1 in LIHC cells were significantly different in expression and methylation (p gene = 6.58e-05 for ADK and 0.0042 for AMPD1; p methy = 1.05e-05 for ADK and 9.48e-12 for AMPD1) (Supplementary Figure S1B). The abnormality of ADK and AMPD1 changes the metabolic homeostasis of cells and promotes the progression of cancer cells (Pedley and Benkovic, 2017).

To assess the effectiveness of ICDS, we compared our results in LIHC with three other analytical methods in which we calculated the RS of genes by considering only one of the following types of data: gene expression, CNV, or DNA methylation (defined as ICDS-G, ICDS-CNV, or ICDS-M, respectively). Next, we used the same procedure as above to find significant subpathways, with the same parameter settings. Using the ICDS-G and ICDS-M methods, we obtained three and one significant subpathways, respectively, and the entire pathways they belonged to were all found by the ICDS method (Table 1). Using the ICDS-CNV method, we could not find any significant subpathway. If we consider genetic differences or expression differences based on a single type of data, we may lose important information. However, ICDS exclusively identified 15 significant subpathways, marked with red asterisks in Figure 2A, and the KEGG pathways they belong to could not be found with the three other methods. Some pathways identified by ICDS were the chemokine signaling pathway and focal adhesion, which have been reported to be related to the occurrence and development of hepatocellular carcinoma (Zhao et al., 2011). It has been reported in the literature that the chemokine signaling pathway is involved in the establishment of a tumor-promoting microenvironment and in the development and progression of hepatobiliary cancer (Zlotnik and Yoshie, 2000). Drug targeting of the chemokine pathway is a promising approach for the treatment or even prevention of hepatobiliary cancer. Chemokines play a vital role in tumor progression and metastasis, where the binding of chemokines to their receptors leads to a conformational change, which activates signaling pathways and promotes migration (Zhao et al., 2011). Meanwhile, the subpathway path:04062_1 in the chemokine signaling pathway (Figure 2B), exclusively identified by ICDS, included the chemokine families (CC and CXC) and their receptor families (CCR and CXCR). All of these chemokines exert their biological effects by binding to chemokine receptors, which are G protein-linked transmembrane receptors (Decaillot et al., 2011).
In the subpathway path:04062_1 (Figure 3A), CXC motif chemokine 12 (CXCL12) is a chemokine protein that is differentially expressed between LIHC and normal samples (p gene = 1.53e-35), and the expression of both CCL20 and CCR2 is regulated by differential methylation (p methy = 3.07e-18 and 2.3e-16, respectively). Importantly, the ICDS method not only recognized subregions of differential gene expression but also detected genetically or epigenetically altered regions (e.g., CNVs and methylation). Another subpathway of the chemokine signaling pathway was path:04062_4, which contains 9 genes (Figure 3B). We found that four of these genes were mainly influenced by differential expression and five were mainly influenced by differential methylation. Thus, our method can efficiently find dysfunctional local regions in biological pathways and indicate their perturbation by identifying the specific types of molecular aberration involved (CNV, differential methylation, or differential gene expression).

Figure 1 (caption, continued): Combine the gene-risk score with the pathway topological structure to infer the subpathway activity score (AS); subpathways with discriminative activity scores in cancer are identified via a greedy search algorithm. (C) A permutation test is performed on the risk scores of genes, and pathways are prioritized by FDR after the permutation tests.

Analyses of Head-Neck Squamous Cell Carcinoma Data

The HNSC datasets were obtained from TCGA, and their detailed information is shown in Supplementary Table S1. ICDS identified 17 significant dysfunctional subpathways associated with HNSC, belonging to 9 entire pathways; the subpathways exclusively identified by the ICDS method are marked with red asterisks in Figure 4A (Table 2). Up to eight of these have been reported to be central to the growth and survival of cancer cells. Subpathways were selected that satisfied the two tests with FDR1 < 0.001 and FDR2 < 0.001 (see Materials and Methods). Path:04919_4 is a significant subpathway (Figure 4B and Supplementary Table S3) belonging to the thyroid hormone signaling pathway (Figure 4C). Many studies have confirmed that the thyroid hormone signaling pathway is a critical component in tumor progression (Kim and Cheng, 2013). Loss of normal thyroid-hormone receptor function by deletion or mutation can contribute to cancer development, progression, and metastasis. Thyroid Hormone Receptor Alpha (THRA) belongs to the nuclear receptor superfamily and encodes thyroid hormone (T3)-binding thyroid hormone receptor (TR) isoforms, which have been shown to mediate the biological activities of cells; the TR genes are located on different chromosomes (Laudet et al., 1993; Wagner et al., 1995). TRs can function as tumor suppressors, because reduced expression of TRs due to hypermethylation or deletion of TR genes is found in human cancers. The samples showed significantly different methylation of THRA (p methy = 4.79e-12) in HNSC, and low expression of THRA is known to activate PIK3R1, which provides instructions for synthesizing a subunit of phosphatidylinositol 3-kinase (PI3K). PI3K signaling is important for many cell activities, including cell growth, division, and migration (Jaiswal et al., 2009). We calculated the RS of PIK3R1 in HNSC, and the contribution of differential methylation was greater than that of differential expression (p methy = 4.78e-12; p gene = 1.46e-06) (Figure 4B). Similarly, we compared the results for HNSC with the three methods above (ICDS-G, ICDS-CNV, and ICDS-M).
Using the methods of ICDS-G and ICDS-M, we obtained two significant subpathways and the pathways they belonged to were also found by the ICDS method. However, 13 subpathways identified by ICDS were missing from all of the other methods (ICDS-G, ICDS-CNV, and ICDS-M) ( Table 2). For example, the subpathway path:00830_3 in retinol metabolism pathway was identified by ICDS but not by ICDS-G, ICDS-CNV, or ICDS-M, and Supplementary Figures S3, S4 show the distribution of the activity score of path:00830_3, combined and individual data source, for the real subpathways and for the randomization cases. The local region of the subpathways was reported to be central to the growth and survival of cancer cells (Supplementary Figure S2A). Specifically, vitamin A (retinol) can control mucosal lesions before the occurrence of HNSC and prevent the occurrence of second primary tumors. Therefore, retinol metabolism is essential for the early diagnosis and treatment of HNSC. Retinoic acid (RA) is a critical signaling molecule that regulates gene transcription and the cell cycle (Tzimas and Nau, 2001), and retinal is then metabolized by NAD/NADP-dependent retinal dehydrogenases (RALDH) and by retinal oxidase enzymes to RA (Chen et al., 1995). Additionally, CYP26C1 in the path:00830_3 is involved in the metabolic breakdown of retinoic acid, which could be more effective in the growth inhibition of cancer cells (Thatcher and Isoherranen, 2009). Moreover, in the HNSC dataset, some genes mainly showed differences in the degree of methylation compared to normal samples, such as CYP26C1 (p methy = 9.25e-34) and ALDH1A2 (p methy = 1.65e-13). Other components in the same subpathway, path: 00830_3, mainly showed differences in the degree of expression compared to normal samples, such as AOX1 (p gene = 3.11e-18) and ADH4 (p gene = 2.75e-38) (Supplementary Figure S2B). Therefore, the ICDS method that we proposed can effectively identify disordered genetic and epigenetic subpathways. Analyses of Cervical Squamous Cell Carcinoma and Endocervical Adenocarcinoma Data We applied ICDS to identify dysfunctional subpathways in CESC (see Materials and Methods). With the threshold of FDR1 < 0.001, we obtained four significant subpathways that had just exceeded the threshold FDR2 (Supplementary Table S4), and all of these subpathways were associated with the development and progression of CESC tumors. Meanwhile, using the method of ICDS-G, we obtained three significant subpathways, and the pathways they belonged to were also found by the ICDS method (Supplementary Tables S4, S5). Subpathway 04020_1 in the calcium-signaling pathway, identified by ICDS, was simultaneously neglected by the other three methods. Interestingly, subpathway 04020_1 ( Figure 5A) in the calcium-signaling pathway is involved many G-protein coupled receptors (GPCRs), such as TACR1, TACR2, and HTR2B, and downstream heterotrimeric guanine nucleotide-binding proteins (G-proteins; GNA14) ( Figure 5B). In this subpathway, many GPCRs had significant patterns of expression changes in CESC patients, such as TACR1 (p gene = 9.92e-32), TACR2 (p gene = 3.82e-08), and HTR2B (p gene = 2.76e-26). Moreover, with CESC samples, AVPR1A, which is a GPCR in cells, mainly showed differences in methylation and expression compared to normal samples. Many studies have shown that the abnormal expression and activity of GPCRs is associated with the development and progression of cancers (Audigier et al., 2013;Moody et al., 2016). 
GPCRs act as key transducers of signals from the extracellular milieu to the intracellular milieu of cells. It has been confirmed that many GPCRs are highly expressed in specific cancer cells, such as cervical, breast, and prostate cancer cells (Dey et al., 2010). Similarly, abnormal expression of GPCRs contributes to the development of cancer (Radhika and Dhanasekaran, 2001; O'Hayre et al., 2013). Furthermore, initial signal transduction, such as that of calcium signaling, is achieved primarily by GPCRs acting through downstream heterotrimeric G proteins (Hanlon and Andrew, 2015; Schafer and Blaxall, 2017). Calcium-signaling channels are important for the proliferation, migration, and differentiation of cells, including tumor cells. CESC is associated with significant upregulation of calcium-signaling pathways (Perez-Plasencia et al., 2007; Monteith et al., 2012).

Comparison of ICDS With Other Pathway Analysis Methods

In recent years, pathway-based and subpathway-based approaches have become the first choice for complex disease analysis in order to yield biological insight. To explore whether ICDS could provide new biological insights in identifying important subpathways, we compared ICDS with three widely used pathway-based and subpathway-based approaches: SPIA (Tarca et al., 2009), GSEA (Subramanian et al., 2005), and SubpathwayMiner (Li et al., 2009). These three methods mainly identify dysregulated pathways or subpathways by using gene-expression data; in contrast, the ICDS method identifies dysregulated subpathways by integrating the three types of data: DNA methylation, CNV, and gene expression. In order to compare the results of the above methods uniformly, we chose to compare the entire pathways identified by them. In the HNSC dataset, ICDS identified 17 statistically significant subpathways, which belong to nine entire pathways. SPIA and GSEA found five and eight significant pathways, respectively, and SubpathwayMiner did not yield any significant pathways. By comparing the results of these methods, we found that ICDS identified six statistically significant pathways that were simultaneously missed by the other methods (Supplementary Table S6). The significant pathways exclusively identified by ICDS, such as the cAMP signaling pathway, the chemokine signaling pathway, and retinol metabolism, have been well reported to be associated with the development of HNSC (Tzimas and Nau, 2001; Tanaka et al., 2005). For example, the thyroid hormone signaling pathway and retinol metabolism were reported to be central to the growth and survival of cancer cells.

FIGURE 4 | (A) Subpathways identified by ICDS with FDR < 0.001 in the HNSC dataset. The y-axis represents significant subpathways sorted by FDR2, while the x-axis represents the log-transformed FDR2. Compared to the three methods (ICDS-G, ICDS-CNV, and ICDS-M), the subpathways exclusively identified by the ICDS method are marked with red asterisks. (B) Dysfunctional subpathway (path:04919_4) of the thyroid hormone signaling pathway in HNSC. Each vertex in the subnetwork represents a gene; green and purple in a vertex represent the proportions of the gene's differential-expression and differential-methylation scores between cancer and normal samples, and orange represents the proportion of influence of CNV on gene expression. (C) Annotation of the genes in path:04919_4 to the original thyroid hormone signaling pathway in KEGG. Genes are marked in red, and the light-yellow circle corresponds to path:04919_4.
A subpathway of retinol metabolism identified by the ICDS method (Supplementary Figure S2A) is essential for the early diagnosis and treatment of HNSC. These results indicate that the ICDS method may uncover new dysregulated subpathways.

DISCUSSION

The occurrence and development of diseases, especially cancer, involves a complex biological network (Zou et al., 2016). Genetic variation, epigenetic changes, abnormal gene-expression levels, and many other factors alter the characteristics of living organisms. With the generation of large-scale sequencing data, more opportunities exist to study the multi-omics molecular mechanisms of cancer development. In systems biology, dysfunctional genes may jointly activate biological pathways. Therefore, the most critical step in exploring complex disease mechanisms is to identify the functional pathways in which these dysregulated genes are located. We proposed the concept of subpathways in our previous work (Li et al., 2009, 2013). The subpathway, defined as a local region of an entire pathway, contains fewer components, which enables a more subtle and accurate interpretation of the biological function of disturbances involved in cancer progression. In this study, the employed method was based on a priori biological pathways (e.g., KEGG), each of which represents a network of interactions between genes, proteins, and chemical molecules. The main purpose of this study was to discover important dysfunctional subregions based on the pathway topological structure. ICDS used Fisher's combined probability test to integrate gene expression, CNVs, and methylation to calculate the RS of genes. Because gene expression, CNV, and DNA methylation are not completely independent, the independence assumption of Fisher's combined probability test may be violated here; this may be a limitation of our ICDS method. Alternatively, Brown's method (Poole et al., 2016) can be used to integrate multiple data sources, and it does not suffer from this limitation. A larger RS in cancer indicated a greater correlation between the gene and the cancer phenotype. Next, we used a greedy algorithm to search for subpathways in each KEGG pathway network, so that subpathway activities were local maxima. This algorithm has also previously been used to identify subnetwork markers of breast cancer metastasis in the human protein-protein interaction network, achieving higher accuracy in the classification of metastatic versus non-metastatic tumors (Chuang et al., 2007). To avoid excessive redundancy in the candidate subpathways, we set several parameters, such as the seed gene (p-value of the combined statistic S < 0.001), the subpathway size (5 < size < 100), and the overlap between subpathways (i.e., Jaccard index between each pair of subpathways in the same pathway < 0.6), which can be set by a user of the ICDS package. We applied the ICDS method to the LIHC, HNSC, and CESC datasets. Based on these analyses, we demonstrated that ICDS can effectively identify dysfunctional subpathways correlated with a cancer phenotype. For the HNSC dataset, the subpathway path:04062_1 was the most significant subpathway and included 41 genes belonging to the chemokine signaling pathway. Studies have confirmed that the chemokine signaling pathway is a critical component of tumor progression. These genes did not simultaneously have changes in copy number, methylation, and gene expression.
However, these subregions could still be found through our integration algorithm, which is the most prominent advantage of our method. To further validate the ICDS method, we compared it with three other methods that only considered one type of data -gene expression, CNV, or DNA methylation -named as ICDS-G, ICDS-CNV, and ICDS-M, respectively. The results showed that the ICDS method was able to identify new risk subpathways associated with cancer that were simultaneously neglected by the other three methods. Thus, it is essential to integrate multi-omics data to identify additional dysfunctional subpathways in cancer. In the future, we will involve other omics data, such as proteomics, to improve our ICDS method. To provide users with convenient and simple analytical tools, we have integrated the ICDS, ICDS-G, ICDS-CNV, and ICDS-M methods into an available R-based package on CRAN 3 . If users are considering using the ICDS method, they need to input three datasets of gene expression, copy number, and methylation. The ICDS-package will produce a prioritized list of subpathways. With this method, ICDS is used to find key subpathways related to cancer phenotypes, and it is expected that it can be used to mine for key subnetworks within some prior networks (e.g., the PPI network) based on integrating DNA methylation, CNV, and gene expression data. In addition, ICDS may identify key subpathways as biomarkers to distinguish high and low risk cancer patients. For this purpose, researchers should input the molecular profile of genes with different stage samples, such as patients in different stages of glioma. Therefore, we have developed a free and robust tool to identify dysfunctional subpathways in cancer by integrated multiomics data. Research Foundation of Harbin (Grant No. 2017RAQXJ195), and the National Natural Science Foundation of Heilongjiang Province (Grant No. H2016074).
7,572.6
2019-05-15T00:00:00.000
[ "Medicine", "Biology" ]
(2017). Natural and bioinspired nanostructured bactericidal surfaces. Advances in Colloid and Interface Science 248, 85-104. Bacterial antibiotic resistance is becoming more widespread due to excessive use of antibiotics in healthcare and agriculture. At the same time, the development of new antibiotics has effectively ground to a halt. Chemical modifications of material surfaces have poor long-term performance in preventing bacterial build-up, and hence approaches for realising bactericidal action through physical surface topography have become increasingly important in recent years. The complex nature of the interactions between the bacteria cell wall and nanostructured surfaces presents many challenges when the design of nanostructured bactericidal surfaces is considered. Here we present a brief overview of the bactericidal behaviour of naturally occurring and bio-inspired nanostructured surfaces against different bacteria through the physico-mechanical rupture of the cell wall. Many parameters affect this process, including the size, shape, density, rigidity/flexibility and surface chemistry of the surface nanotextures, as well as factors such as bacteria specificity (e.g. gram positive and gram negative) and motility. Different fabrication methods for such bactericidal nanostructured surfaces are summarised. © (http://creativecommons.org/licenses/by/4.0/).

Introduction

There has been a constant drive for smart technology towards the development of materials and surfaces capable of repelling or killing pathogenic microorganisms present on various exteriors in our daily life (such as mobile phones, hospital tools, food packages, kitchen and bathroom surfaces, etc.). Most of these surfaces are not intrinsically bactericidal, and modifications are thus required for microorganism destruction and prevention of further bacterial infections. Furthermore, bacterial biofilm formation can be inhibited if bacterial adhesion and growth can be prevented on the surface in the initial stage [1]. Once a biofilm begins to form, tackling bacterial colonies becomes considerably harder [2]. Whenever an antibiotic is applied to a typical biofilm population, its efficacy in killing the bacteria is limited to the top layer of the biofilm, with little effect on the bacteria located deeper within the microcolonies [3]. Such inability of antibiotic agents to penetrate into and exert their effects throughout the biofilm could allow bacterial colonies to develop antibiotic resistance over prolonged periods of use, which is one of the major causes of the failure of antibiotics against biofilms [4][5][6][7][8][9][10][11]. Antimicrobial-resistant infections currently claim 700,000 lives each year across the world, and this figure is projected to increase alarmingly to 10 million by 2050 if the trend is not stopped (Fig. 1). One of the methods to tackle biofilms therefore involves prevention of biofilm formation by actively killing the bacteria as soon as they arrive on the surface. The use of antibiotic (chemically) coated surfaces raises a significant concern, as widespread antibiotic usage has been linked to the emergence of several multi-drug resistant strains of infectious diseases, some of which (e.g. tuberculosis) may be epidemic. Many antibacterial surfaces are effective only in the presence of an aqueous solution, and may prove less effective at killing airborne bacteria in the absence of a liquid medium [12]. Consequently, instead of killing bacteria chemically, several studies have explored alternative physical methods through the contact killing mechanism.
These developments have in part been inspired by nature where several insects are known to have bactericidal surfaces that kill microbes coming in contact with them. The bactericidal effects of these surfaces are due to the presence of sharp nanostructures (nanopillar shaped with diameter 50-250 nm, height 80-250 nm, and pitch 100-250 nm) which pierce into the bacterial cell wall upon contact or rupture the bacteria cell wall, thereby killing the bacteria. Such a physical bactericidal method has become an attractive approach to potentially tackle multi-antibiotic resistant bacteria [13]. Killing bacteria physically though nanostructures rather than chemical means has since become very topical, and several recent reviews on antimicrobial surfaces have focused on different types of antimicrobial coatings to prevent infections [14,15], use of nanoparticles as antimicrobial agents [16], antimicrobial surfaces based on polymers [17] and other smart materials [18][19][20][21], and naturally occurring antimicrobial surfaces [22,23]. More generally, nanoparticle dispersions (nanofluids) [24][25][26] and nanostructured surfaces are increasingly found in modern formulations and technological applications for controlled adhesion or friction [27][28][29] and for enhanced or additional performance and functionalities [30]. Furthermore, the knowledge of nanostructure-bacteria interactions is also intimately related to the topic of nanotoxicity [31] and to our fundamental understanding of interactions between nanoparticles and organised soft matter [32][33][34]. However, our knowledge on the design strategies for fabricating effective and economically viable bactericidal nanostructured surfaces remains limited. In this review, we highlight the recent progress on the bactericidal efficacy of different natural and bio-inspired nanostructured surfaces, focusing on the understanding of interactions between nanostructures and the bacteria cell wall, the essential design parameters for efficient nanostructured bactericidal surfaces, and the feasibility of large scale cost-effective fabrication of bactericidal nanostructured surfaces. Bacteria cell wall classification The physical killing mechanisms are underpinned by the deformation or rupture of the bacterial cell wall ( Fig. 2A), which is a multilayered structure to provide strength, rigidity, and shape and to protect the microbe from osmotic rupture and mechanical damage [35][36][37]. According to their structure, components, and functions, the bacteria cell wall can be divided into the two main categories: gram positive and gram negative. Some bacteria commonly used in antimicrobial research are listed in Table 1, along with their size, source, morphology, and infections they cause. The gram negative cell wall is composed of an outer membrane linked by lipoproteins to thin, mainly single-layered peptidoglycan (PG) (7-8 nm) located within the periplasmic space between the outer and inner membranes. The outer membrane contains the porin, a protein which allows the passage of small hydrophilic molecules across the membrane, and lipopolysaccharide (LPS) molecules that extend into extracellular space. These components in the outer membrane are essential for the structural integrity and viability of gram negative bacteria (Fig. 2B). The cytoplasmic membrane of gram-positive cells contains a thick (30-100 nm) PG layer (4-5 times thicker than that of gram negative bacteria) (Fig. 
2C), which is attached to teichoic and lipoteichoic acids that are unique to the gram-positive cell wall. Teichoic acids are attached and embedded in the PG layer, whereas lipoteichoic acids are extended into the cytoplasmic membrane. The gram negative cell wall is more complex, both structurally and chemically [38]. It is also worth mentioning the mycobacteria cell wall, which is thicker than in many other bacteria. Mycobacteria are known to cause many serious diseases such as tuberculosis, leprosy etc. It consists of an inner layer and an outer layer that surround the plasma membrane [39][40][41][42]. The outer layer consists of both proteins and lipids, which are associated with some long-and short-chain fatty acids in the cell wall. The inner layer consists of PG, arabinogalactan (AG), and mycolic acids (MA) covalently linked with each other to form a hydrophobic MA-AG-PG complex (Fig. 2D). The distinguishing characteristic of all mycobacterium is the cell wall which is thicker than in many other bacteria, and it is hydrophobic, waxy, and rich in mycolic acids (Fig. 2D). This kind of cell wall architecture of the mycobacterium protects it in the difficult survival situations. We refer readers to a number of papers in this area for more detailed information [35][36][37][38][39][40][41][42]. Till now there has not been any study on the bactericidal efficiency of nanostructured surfaces against mycobacterium. This represents an unexplored avenue for future studies in the field of antimicrobial surfaces. Naturally occurring nanostructured bactericidal surfaces Antibacterial surfaces are widespread in nature. There are many plants and insects with antimicrobial surfaces which protect them from pathogenic bacteria. Table 2 lists some of the naturally occurring (as well as artificial mimetic) nanostructured bactericidal surfaces. These bactericidal surfaces typically consist of nanopillars of diameter 50-250 nm, with different heights and densities. A number of early studies have focused on the connection between surface wettability and anti-biofouling effects [43][44][45][46][47][48][49][50][51][52][53], attributing it to non-stickiness of the microbes on the presumed superhydrophobic surface. That is, hydrophilic surfaces seem to allow bacteria proliferation, whereas hydrophobic surfaces prohibit bacterial growth as the bacteria cannot stick to the surface. Surface hydrophobicity or superhydrophobicity is more critical in water-immersed conditions (entailing air entrapment) than in air. More recent observations show that such natural nanostructured surfaces can kill bacteria by rupturing the cell wall, known as the contact killing mechanism [54]. One of the first studies of the naturally occurring bactericidal surface of cicada wings against P. aeruginosa (gram-negative) was reported in 2012 by Ivanova et al. [54]. The nanocones present on the cicada wing are uniform in height (200 nm), shape (60 nm diameter size at the top and 100 nm at the base of the pillar) and spatial distribution (170 nm apart) (Fig. 3A). In contrast to previous reports, they showed that, despite the superhydrophobic nature of the cicada wing (static water contact angle (CA) of 158.8°), there was significant bacterial adhesion on the nanostructured surface. On contact, the adhered bacteria went through a rapid morphological change and were killed within 5 min as estimated through imaging techniques. 
It was concluded that the anti-biofouling nature of the cicada wing was not due to its ability to repel the bacteria, but rather to its ability to kill them upon contact. The kill rate of P. aeruginosa on cicada wings was approximately 2.05 × 10^5 colony forming units (cfu) min−1 cm−2. It was also reported that an attachment/kill cycle of about 20 min seemed to be present, during which the wing surface was first saturated with the bacteria, which were then killed and dispersed before the next group of bacteria could attach to the surface. Using atomic force microscopy (AFM), the time required for wall rupture was estimated to be approximately 3 min. The wing was also made hydrophilic with a 10 nm gold coating while its surface topography was retained, and it was found that its bactericidal activity was preserved, confirming the physico-mechanical nature of the killing. However, the cicada wing could effectively kill only gram negative bacteria, not gram positive bacteria [55], which has been attributed to the gram positive peptidoglycan cell wall being 4-5 times thicker than that of gram negative bacteria (Fig. 3B). This selective killing of gram negative bacteria is consistent with the mechanical model predicting cell-wall rigidity as the primary factor determining the ability of bacteria to survive the bactericidal cicada wing. Watson et al. [56] demonstrated the bactericidal nature of gecko skin with micro-/nano-structures consisting of spinules with a radius of curvature smaller than 20 nm and spacing in the sub-micron range. They found that the gecko skin was lethal to Porphyromonas gingivalis, a gram negative, nonmotile, pathogenic bacterium. It was suggested that the bacterial cell wall was stretched and ruptured when it came in contact with the nanostructured gecko skin (Fig. 3C). However, the bactericidal efficacy of gecko skin was not tested against any gram positive bacterium. Ivanova et al. studied the bactericidal efficacy of the dragonfly wing surface [57]. Unlike those on the cicada wing, the nanostructures present on the dragonfly wing are randomly distributed in terms of shape, size and spacing (Fig. 3D). The nanopillar diameters on dragonfly wings show a sigmoidal distribution below 90 nm. The dragonfly wing was shown to be very efficient in killing both gram negative (P. aeruginosa) and gram positive bacteria (S. aureus and B. subtilis), as well as endospores (B. subtilis), with a kill rate of approximately 4.5 × 10^5 cfu min−1 cm−2. Hayes et al. [58] described the surface texture of the cuticle of the aquatic larvae of the drone fly. An array of nanopillars (diameter < 100 nm, length 200-1000 nm, average spacing 230 nm) was observed on the cuticle. The surface of the drone fly was found to be hydrophilic, unlike the superhydrophobic cicada wing, with the nanopillar density on the cuticle as high as that on a cicada wing [54]. It was suggested that this surface might antagonize the formation of biofilms and would potentially act as an efficient bactericidal surface. However, they did not test the bactericidal efficacy of the surface against any pathogenic bacteria. Ma et al. [49] reported the fouling-resistance behaviour of the Taro leaf in both nonwet (fresh leaf without any surface treatment) and wet (leaf subjected to soaking/water-vapour condensation/ethanol wetting to make it hydrophilic) conditions.
P. aeruginosa was used to test adhesion on the Taro leaf, which exhibits superhydrophobicity, with a high static water contact angle and a low roll-off angle, due to the presence of the hydrophobic epicuticle layer and the micro/nano structures. Under the nonwet condition, the anti-adhesion property of the Taro leaf towards a P. aeruginosa cell suspension (concentration ~2 × 10^7 cfu/ml) in PBS was due to the air trapped between the nanostructures. However, the anti-adhesion property observed under the completely wet condition was attributed to the reduced adhesion force in the area of the Taro leaf covered with dense nanostructures, although the exact mechanism for the adhesion reduction was not explained. The surface of the nanostructured Taro leaf was found to be bacteriostatic rather than bactericidal. This finding is useful when considering the design of antimicrobial structures for underwater applications.

Silicon based nanostructured bactericidal surfaces

Surfaces bearing well-defined nanotextures have been increasingly found in modern applications to facilitate enhanced functionalities and desired properties [24,27,29]. Due to its ease of fabrication and distinct electronic, optical, mechanical and thermal properties, silicon is widely used in industrial applications. It is so far the surface of choice for mimicking natural bactericidal surfaces, with the focus on reproducing the geometric features and surface chemistry observed on the cicada and dragonfly wings, and also the lotus and taro leaves. To mimic the dragonfly wing, Ivanova et al. [57] developed black silicon surfaces using reactive ion etching of silicon. Nanopillar diameters on the black silicon surface showed a bimodal distribution spanning 20-80 nm, with bactericidal efficacy matching that of the dragonfly wing (Fig. 4A). Although such nanotextured black silicon could effectively kill minimum infective doses of S. aureus and P. aeruginosa in a very short time in a nutrient-deficient environment, in a nutrient-rich environment the bacteria could survive up to 6 h. The black silicon was also found to be more efficient in killing both gram positive and gram negative bacteria as compared to the dragonfly wing (refer to Table 2 of Reference 42 for more details).

Fig. 4. (A) Image highlighting differences and similarities of (a1) black silicon (bSi) and (a2) dragonfly wings, created by three-dimensional reconstructions based on a displacement-map technique. The inset shows a tilted view at an angle of 53° [reproduced with permission from Ref. [57]]. (B) Representative SEM images of P. aeruginosa on (b1) a flat silicon control, and (b2) high and (b3) low nanocone-density diamond-coated silicon surfaces. Fluorescence micrographs of P. aeruginosa after 1 h incubation on these surfaces are shown in (b4)-(b6), respectively. More dead cells (appearing red) were observed on the low-density nanostructured silicon surface as compared to the high-density nanostructured silicon surface and the flat silicon surface [reproduced with permission from Ref. [61]]. (C) SEM images of (c1) healthy P. aeruginosa on a flat boron-doped diamond control surface, and (c2) damaged bacterial cells on a black silicon sample coated with diamond after 1 h of incubation. Fluorescence micrographs of P. aeruginosa on (c3) the control flat boron-doped diamond surface and (c4) the black silicon sample coated with diamond after 1 h incubation, showing more dead cells on the nanostructured black silicon as compared to the flat boron-doped diamond surface (red and green colours due to Propidium Iodide and Syto-9 dyes, respectively) [reproduced with permission from Ref. [62]].

Hasan et al. [59] fabricated nanostructured silicon surfaces using the deep reactive ion etching (DRIE) technique. The surface, with pillars 4 μm tall and 220 nm in diameter, was superhydrophobic (static contact angle (CA) 154° and contact angle hysteresis 8.3°), contrasting with the hydrophilic nature of the native black silicon [57]. Bacterial viability studies showed that 83% of gram negative (E. coli) and 86% of gram positive (S. aureus) bacteria exposed to the surface were killed in 3 h. A fast initial kill rate was noted, with 25% of the bacteria killed in the first 5 min. For this surface, a different killing mechanism has been proposed, based on pinning of the cellular membrane on the nanopillars as the motile cell tries to find other attachment points by stretching itself; as the bacterial cell reaches its limit of stretching, it ruptures and dies. This different mechanism could be due to the larger diameter and height of the nanopillars in this study in comparison to the previous studies [57]. Exploiting the mechanical hardness, high bulk modulus, and low compressibility of diamond [60], Fisher et al. [61] reported the fabrication of nanocone-shaped diamond on a silicon substrate with two nanocone densities and tested their bactericidal efficacy. The nanocones were 3-5 μm in height, with sharp tips of diameter 10-40 nm and a base width of 350 nm-1.2 μm, achieved by controlling the bias voltage in RIE. These nanoconed diamond surfaces caused significant killing of gram negative P. aeruginosa as compared to the control silicon surface (Fig. 4B). The killing efficiency of the surface with a lower nanocone density was found to be 17% higher than that with a higher nanocone density (more dead cells, which appear red due to the Propidium Iodide dye while Syto-9 gives the green colour, are seen in Fig. 4B (b6) as compared to Fig. 4B (b5)). This was attributed to the larger spacing between the nanocones, which facilitated more extended stretching of the cell membrane, leading to bacterial lysis. May et al. [62] further demonstrated that black silicon surfaces coated with diamond nanoneedles of two different heights (0.5-1 μm and 15-20 μm, respectively) showed excellent bactericidal activity against pathogenic P. aeruginosa. Both scanning electron microscopy and fluorescence microscopy images demonstrated the higher bactericidal efficacy of the diamond nanoneedle surface as compared to the flat boron-doped diamond control surface (Fig. 4C). It is, however, important to note that the bactericidal efficacy of nanostructured surfaces depends on parameters such as the nanostructure dimensions, any coatings present, and the type and size of bacteria. Indeed, proliferation of cells on different nanostructures under certain conditions has also been reported. For instance, Hizal et al. [63] reported easier detachment of bacteria (Staphylococcus epidermidis, a non-extracellular-polymeric-substance (n-EPS) producing strain, and Staphylococcus aureus, an EPS-producing strain) from a nanostructured silicon surface (with blunt nanopillars of pitch values 200, 400 and 800 nm) than from a smooth surface, due to smaller bacterial adhesion on the nanostructured surface as compared to the smooth surface. This was attributed to the decreased contact area between the textured substrate and the adhering bacterium.
No significant bactericidal activity was observed on the nanostructured surfaces, possibly because the nanopillars did not deform and stretch the cell wall of the spherical bacteria as they do for rod-shaped bacteria. Bacteria were also observed to settle between the pillars in the case of the largest pitch value (800 nm). Titania based nanostructured bactericidal surfaces Titania is important in many applications because of its biocompatibility, mechanical stability and chemical inertness. Diu et al. [64] fabricated titania nanostructured surfaces (nanowires with diameter ~100 nm) of two different morphologies (brush and niche type) using an alkaline hydrothermal process. They found that the bactericidal effect of the nanostructured surfaces against motile bacteria (P. aeruginosa, E. coli and B. subtilis) was more pronounced than that against nonmotile bacteria (S. aureus, E. faecalis and K. pneumoniae). Chris et al. [65] fabricated hierarchically ordered titanium nano-patterned arrays (average diameter ~40.3 ± 20.0 nm) mimicking the dragonfly wing using a chemical hydrothermal process at high temperature. The fabricated surfaces allowed the adherence of human cells, but showed excellent bactericidal behaviour against P. aeruginosa and S. aureus (Fig. 5A). Similarly, Ferdi et al. [66] reported the fabrication of 2D nanoporous (pore diameter 55 nm, depth 1 μm and interpore distance 70 nm) and hierarchical 3D nanopillared (average tip diameter 10 nm, height 2 μm, and average distance between nanopillars 2 μm) surfaces on titanium, onto which a "smart" bacteria-triggered self-defensive coating containing tannic acid/gentamicin was deposited via the layer-by-layer (LbL) technique (see Table 2). The tannic acid/gentamicin coating on the 3D nanopillared surface allowed greater exposure of the antibiotic coating to the adhering bacteria and increased the antibacterial efficiency 10-fold compared to a smooth surface coated with the same antibacterial coating, thus allowing a reduction in the number of LbL deposition cycles for the same killing efficiency. Terje et al. [67] fabricated a nanostructured surface (nanospikes with a diameter ~20 nm and an anisotropic surface distribution, Fig. 5B (b1)) on a titanium alloy using a thermal oxidation method. They noticed a 40% reduction in E. coli viability on the nanospike surface as compared to the smooth control surface (Fig. 5B (b2 & b3)). Flexible nanostructured bactericidal surfaces Nanostructured surfaces with bactericidal behaviour can also be realized on flexible substrates. To mimic cicada wings, Dickson et al. [68] showed that most of the gram negative E. coli bacteria incubated on poly(methyl methacrylate) (PMMA) films with nanopillars (diameter 70-215 nm and height 200-300 nm), fabricated using nanoimprint lithography, were killed (Fig. 6A). This reduced the bacterial load in contaminated aqueous suspensions by 50% over a 24 h period as compared to flat controls. The optimal spacing between the nanopillars for a good bactericidal response was reported to be within 130 nm to 380 nm. Kim et al. [69] showed that PMMA surfaces with periodic nanostructures (height 460 nm, aspect ratio 3, and spacing ~300 nm) fabricated using nanoimprint lithography exhibited hydrophobicity, anti-reflectivity (< 0.5% reflectance) and antimicrobial properties (Fig. 6B). Valle et al. [70] utilised direct laser interference patterning (DLIP) to fabricate line- and pillar-like patterns and complex lamella microstructures on polystyrene.
These surfaces were tested against gram positive S. aureus under both static and continuous culture flow conditions, and it was concluded that the line- and pillar-like patterns enhanced S. aureus adhesion, whereas the complex lamella microtopography reduced S. aureus adhesion under both flow conditions (Fig. 6C). Selectivity and specificity of bactericidal surfaces: interactions of nanostructured surfaces with mammalian, RBC and other cells A number of recent studies [71][72][73][74][75][76][77][78][79] have investigated the parameters affecting the interactions of nanostructured surfaces with cells other than bacteria (e.g. mammalian cells, red blood cells), and such knowledge is relevant to our considerations of the selectivity of bactericidal surfaces. The nanostructured surface fabricated by DRIE [59] displayed lethal action against mammalian cells (mouse osteoblasts) by mechanically rupturing the cell membrane, leading to 12% viability (Fig. 7A). Pham et al. [80] reported the interaction of erythrocytes (RBCs) with the nanopillars on black silicon with a tip diameter of 12 nm and a pillar length of ~600 nm. The nanopillars caused stress-induced cell deformation, rupture and lysis (Fig. 7B). A model for the interaction of the nanopillars and the RBC membrane was put forward in terms of a free energy driving force, showing that the lysis of the erythrocyte took place because of the piercing of the membrane by the nanopillars present on the black silicon surface. Shalek et al. [71] showed that > 95% of vertical silicon nanowires (NWs) prepared by chemical vapour deposition (CVD) and reactive ion etching (RIE) could penetrate into HeLa cells seeded atop after 1 h incubation, delivering the biomolecules attached to the NWs (Fig. 8A), although the forces involved in the process were not discussed. However, growth and division of the HeLa cells were observed on the silicon NWs despite the penetration. Berthing et al. [73] used a fluorescence labelling and imaging technique to study the conformation of human embryonic kidney cells on a NW array with single-NW resolution (Fig. 8B). It was shown that the outer cell membrane was not penetrated by the NWs and instead adapted its conformation to enclose the individual NWs. Recently, in a cell interface with nanostructured arrays (CINA) model, Bonde et al. [77] assumed that the nanotextures (random array: density 7-200 nanostructures/100 μm², length 1-5 μm, diameter 70-200 nm; ordered nanostructure array: spacing 3-5 μm, lengths 11 ± 2 μm and 5 ± 1 μm, diameter 100 ± 20 nm) deformed the cell membrane, rather than penetrating it, with the cells settling on the nanostructured surface in two states: the Fakir state, with the cells hanging on top of the nanostructures, and the Wenzel state, with the cells completely deformed around the nanostructures and coming in contact with the flat substrate between them (Fig. 8C). The aspect ratio of the nanostructures determines whether the cells will be in the Fakir state or the Wenzel state. In the case of high aspect ratio structures, cells mostly remain in the Fakir state as they cannot touch the bottom surface. However, in the case of low aspect ratio structures, there can be a transition from the Fakir to the Wenzel state as the cells can touch the flat bottom surface and settle there. Forces such as gravity and adhesion acting on the cell membrane were discussed.
It was suggested that the cell settling mechanism was highly dependent on both the individual nanostructure dimensions and the nanostructure density. Silverwood et al. [78] found improved bone deposition from a co-culture of human bone marrow cells, without unwanted osteoclastogenesis, on titanium nanopillars (15 nm in diameter and average spacing ~30 nm) as compared with flat titanium (Fig. 8D), suggesting they could be used for orthopaedic implant applications. Tsimbouri et al. [79] fabricated titania nanospikes (average diameter 25.1 nm, height 1 μm, randomly oriented) on a titanium surface using the hydrothermal process. Such surfaces supported osteoblast (stem cell) growth and at the same time showed bactericidal behaviour against gram negative P. aeruginosa. Choi et al. [81] studied the interaction between human foreskin fibroblast cells and two different nanopatterns (nanoposts and nanogratings) fabricated using interference lithography and deep reactive ion etching (DRIE) (height 50-600 nm, pitch 230 nm for both nanoposts and nanogratings). Cell proliferation was similar on the 2D smooth surface and on the short (50-100 nm) 3D nanopost and nanograting structures. In contrast, cell proliferation was suppressed on the needle-like nanopost and nanograting structures with taller features (200-300 nm and 500-600 nm). While the cells retained their shape on the 2D smooth surface, they exhibited different morphologies on the 3D nanoposts. On the short (50-100 nm high) nanoposts the cells became elongated; on medium-height (200-300 nm) nanoposts, the cells elongated and also shrank in size; on tall (500-600 nm high) nanoposts, the cells exhibited a rounded shape with a much smaller size. In contrast, cells were found to spread well on the nanograting structures, with more pronounced elongation on the taller nanogratings as compared to the shorter ones. The results thus again point to the importance of the nanotexture topographic characteristics in determining cell-substrate interactions. Ideally, a co-culture model should be used so that the interactions of nanostructured surfaces with bacteria and mammalian cells are studied simultaneously to assess 'the race for the surface' effect. However, such a model is complex, cell type-dependent and still under development [82][83][84][85]. There are currently few reports on studies of nanostructured surfaces using such a co-culture model, and most often the cellular and bacterial interactions with biomimetic nanostructured surfaces are characterized either sequentially [86] or separately [79].

Fig. 10. (A) Three-dimensional representation of the modelled interactions between a rod-shaped cell and the wing surface. As the cell (a1) comes into contact and (a2) adsorbs onto the nanopillars, (a3) the outer layer begins to rupture in the regions between the pillars [reproduced with permission from Ref. [92]]. (B) Schematic illustration of a bacterial cell adhered to (b1) a flat surface and (b2) a cicada wing-like nanopatterned surface (L and R represent the length and radius of the bacterium, respectively, h is the height of the nanopillar, and Rp is the radius of the nanopillar) [reproduced with permission from Ref. [93]]. (C) Side-elevation sketch of a bacterial membrane adsorbing onto two neighbouring nanoridges, where H is the height of the nanoridge, 2R is the bottom width of the nanoridge, SA denotes the contact area of the part of the bacterial membrane covering the nanoridge, SB denotes the area of the suspended membrane, r0 is the distance from the dividing line to the x-axis, and D is the distance between two adjacent nanopillars [reproduced with permission from Ref. [94]]. (D) (d1) Top view, cross-sectional view and enlarged view of a bacterial membrane adhered to a surface with nanopillars in a hexagonal arrangement (Rp is the radius of the nanopillar, Dp is the distance between nanopillars, L and R represent the length and radius of the bacterium on the nanopatterned surface, h is the deformation depth, and θ the contact angle the bacterial cell membrane makes with the patterned surface). (d2) The phase diagram for bacterial membrane stretching in the space of radius versus spacing of nanopillars (the colour bars indicate the values of the stretching degree of the bacterial membrane, with red corresponding to a high value and blue a low value) [reproduced with permission from Ref. [95]].

Effects of physical characteristics of nanostructured surfaces on their bactericidal efficacy From these models and related experimental studies, it has emerged that the bactericidal activity of a nanostructured surface depends on several parameters such as the size, shape and spacing/density of the nanostructures. Epstein et al. [87] discussed the effect of nanostructure geometry (spacing and aspect ratio) on bacterial biofilm growth. Nowlin et al. [88] and Kelleher et al. [89] reported eukaryotic and prokaryotic microorganism adhesion, respectively, on different types of cicada and dragonfly wings with different nanopillar height-to-width (h/w) ratios. The sanddragon dragonfly wing surface, with the highest h/w ~4.6 (Fig. 9A), showed the highest killing efficiency against S. cerevisiae (a eukaryotic microbe), causing more cell rupturing than the dog day (DD) annual cicada (Tibicen ssp., h/w ~1.8) and the periodical cicada (Magicicada ssp., h/w ~0.5). It was thus suggested that allowing bacteria to adhere to the nanostructured surface and killing them physically could be a more effective strategy for antibacterial surface design than repelling the bacteria from the surface. Similarly, Kelleher et al. compared three cicada species with different nanopillar packing (or spacing): M. intermedia (height 241 nm, diameter 156 nm, pitch 165 nm), C. aguila (height 182 nm, diameter 159 nm, pitch 187 nm), and A. spectabile (height 182 nm, diameter 207 nm, pitch 251 nm) (Fig. 9B). The surfaces with tighter nanopillar packing and smaller feature (diameter) size (M. intermedia and C. aguila) showed higher bactericidal efficiency than the surface with a lower density and a bigger feature size (A. spectabile). These results suggest that the bactericidal efficiency can be tuned by the density and diameter of the nanostructures. However, they did not explain the underlying mechanism behind the higher killing efficiency of the surfaces with smaller diameter and higher density.
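The pillar dimensions quoted above can be reduced to the two comparative quantities discussed by Nowlin et al. and Kelleher et al., namely the aspect ratio h/w and an areal pillar density. The short Python sketch below does this; the hexagonal-packing formula used for the density is an assumption introduced here purely for illustration, not a calculation taken from the cited papers.

# Dimensions (nm) quoted above for the three cicada species compared by Kelleher et al. [89].
species = {
    "M. intermedia": {"height": 241, "diameter": 156, "pitch": 165},
    "C. aguila":     {"height": 182, "diameter": 159, "pitch": 187},
    "A. spectabile": {"height": 182, "diameter": 207, "pitch": 251},
}
for name, d in species.items():
    aspect = d["height"] / d["diameter"]          # h/w ratio
    pitch_um = d["pitch"] / 1000.0
    density = 1.0 / (0.866 * pitch_um ** 2)       # assumed hexagonal packing, pillars per um^2
    print(f"{name}: h/w = {aspect:.2f}, ~{density:.0f} pillars per um^2")

The output reproduces the qualitative ordering in the text: the more tightly packed, smaller-diameter pillars of M. intermedia and C. aguila give both higher aspect ratios and higher areal densities than A. spectabile.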
In addition to the physical effects of the nanostructures, chemical effects (e.g. hydrophobicity/hydrophilicity) and mechanical effects (e.g. pliability) may act in conjunction with the physical ones to determine the bactericidal performance. The strength of adhesion between the bacteria and the nanostructured surface is a vital element in the nanostructure-induced rupturing of the microbes. A larger adhesion force between the bacterial cell wall and the surface leads to a higher probability of rupturing for a given nanostructure geometry [88]. Adhesion of the bacteria to the nanostructured surface depends on the hydrophobicity/hydrophilicity of the surface and on the cell membrane composition. When challenged with nanostructured surfaces, bacteria will try to settle on them by increasing the contact area through multiple anchoring points. In this process of stretching, when the strain acting on the cell wall reaches a threshold limit, the cell wall can rupture. If the nanostructures present on the surface are pliable, they may bend, making it more difficult for the stretched cell wall to reach the threshold strain for rupture. This pliability might thus allow the bacteria to deform the nanostructures so that the microbes can settle and proliferate on "the bed of nails". Physical models for interactions of bacteria with the nanostructured surfaces Pogodin et al. [92] developed a biophysical model to explain the interactions between the nanopillared cicada wing and bacteria, considering two sections of the bacterial cell wall: (a) the area in contact with the nanopillars, and (b) the area suspended between the nanopillars (Fig. 10A). The bacterial cell wall was modelled as a thin elastic layer, since the dimension of the nanopillars (100 nm) on the cicada wing is an order of magnitude larger than the thickness of the cell wall (10 nm), and the curvature of the bacterial surface between the nanopillars was ignored. Owing to the nanostructured topography of the cicada wing, the bacterial membrane adsorbs onto multiple nanopillars, enhancing the surface area of interaction. This leads to a nonuniform stretching which in turn ruptures the membrane. Li et al. [93] studied the bactericidal mechanism of a nanostructured surface (of pillar height h and radius Rp in Fig. 10B) with a quantitative thermodynamic model considering the free energy change of the bacterial cell (of length L and radius R), showing the difference between the interaction of the bacterial cell wall with a flat surface and with a nanostructured surface. The main contrast between the two cases lies in the contact area of adhesion and the deformation of the cell membrane in the adhesion area. The enhanced bactericidal efficiency of the nanostructured surface compared to a flat surface is attributed to the increase in the contact adhesion area (see Fig. 10B (b2)), which enhances the stretching strain of the membrane and leads to cell lysis when the stretching is sufficient; it could thus be promoted by increasing the surface roughness and the radius and height of the nanostructures. In a similar model for the cicada wing, Xue et al. [94] assumed a parabolic profile for the deformation of the bacterial membrane both in the area in contact with the nanopillars and in that hanging between the nanopillars (Fig. 10C). This differed from the study of Pogodin et al. [92], in which the curvature of the bacterial membrane hanging between the nanopillars was ignored. The combined role of gravity and van der Waals forces in rupturing the cell wall was considered, and it was shown that gram negative bacteria could be killed with a very high efficiency by the nanopillared wing surface.
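The common thread of the models above can be written schematically as a competition between the adhesion energy gained on the pillar tops and the elastic stretching energy paid by the membrane suspended between pillars. The expression below is a simplified sketch of that balance, not the exact functional form used in Refs. [92-95]; the symbols γ_adh, K_s and ε* are generic stand-ins for the adhesion energy per unit area, an effective stretching modulus and a critical rupture strain.

\[
\Delta F \;\approx\; -\,\gamma_{\mathrm{adh}}\, S_{\mathrm{contact}}
\;+\; \tfrac{1}{2}\, K_{\mathrm{s}}\, \varepsilon^{2}\, S_{\mathrm{susp}},
\qquad
\varepsilon \;=\; \frac{S_{\mathrm{susp}} - S_{0}}{S_{0}},
\qquad
\text{rupture expected once } \varepsilon \ge \varepsilon^{*}.
\]

In this picture, stronger adhesion or larger suspended spans drive the strain ε upwards, while pliable pillars or thick, rigid cell walls (large ε*) keep the membrane below the rupture threshold, consistent with the qualitative trends discussed above.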
It was also suggested that the bactericidal efficiency could be enhanced by sharp nanofeatures with large spacing, which is contrary to the findings of Kelleher et al. [89], who recommended tighter nanotexture packing for higher killing efficiency. Li et al. [95] considered the balance between the cell-nanostructured surface adhesion energy and the deformation energy of the cell membrane (Fig. 10D (d1)). They argued that the adhesion energy could be enhanced by an increase in the contact area caused by the surface roughness, while at the same time the deformation energy could be increased by nanopillars with a small radius. A phase diagram (in which the colour bars indicate the values of the stretching degree of the bacterial membrane, with red corresponding to a high value, an enhancement phase, and blue a low value, a suppression phase) was devised to explain the interrelated effects of the nanopillar radius and spacing on the adhesion of bacteria on the nanostructured surface (Fig. 10D (d2)). We refer readers to these references for a more detailed mathematical description. Cost-effective large-scale fabrication of nanostructured bactericidal surfaces The capability to fabricate biocompatible surfaces on a large scale via a cost-effective route is of great practical importance and represents a technological challenge. Biocompatibility and selectivity of the textured material are essential for its use in medical devices, where the requirement is to kill pathogenic bacteria while allowing the proliferation of mammalian cells. Different techniques are available to fabricate nanostructured surfaces on a variety of substrates [96][97][98][99][100][101]. Fig. 11 shows schematics of the fabrication methods most commonly used, and Table 3 compares the different fabrication techniques in terms of cost, process complexity and feasibility for large-scale fabrication.

Fig. 11. (A) Reactive ion etching: (a1) in RIE, the substrate is usually placed on a quartz or graphite plate. The gas required for etching is injected into the process chamber via the gas input in the top electrode. A radio frequency (RF) plasma source applied at the lower electrode determines both the ion density and the ion energy for etching. RIE is normally used to etch surface textures with depths < 1 μm. (a2) Deep reactive ion etching (DRIE) is a highly anisotropic etching process used to create deep-penetration, through-silicon vias (TSVs) and trenches in wafers/substrates, typically with very high aspect ratios. To control the ion energy and ion density with more flexibility, separate RF (table bias) and inductively coupled plasma (ICP) generators are provided. (Source: Oxford Instruments.) (B) Nanoimprint lithography (NIL) is a method for fabricating micro/nanoscale patterns economically with high throughput and high resolution. NIL relies on direct mechanical deformation of the resist using an imprint mold, unlike optical or electron beam lithographic approaches, which create patterns by using photons or electrons to modify the physical and chemical properties of the resist. It is therefore possible to achieve very good resolution beyond the limitations set by the diffraction of light or the beam scattering observed in conventional lithographic techniques. The minimum feature size of the imprint mold determines the resolution of nanoimprint lithography. (C) Laser interference lithography (LIL) is a maskless technique. In this process a collimated laser beam is passed through a pinhole, which passes only the central bright spot of the beam, and is then expanded by Lens 3. Part of the expanded collimated beam falls directly on the photoresist-coated sample placed on the sample stage, where it interferes with the other part reflected from Mirror 3 to create the interference pattern on the sample. The angle between the sample stage and Mirror 3 can be adjusted to obtain the desired interference patterns. The photoresist patterns produced with LIL provide the platform for further fabrication of different types of structures at the submicron scale.

Serrano et al. [90] used oxygen plasma treatment on sutures, varying the etching time, to make nanotextured surfaces (see Table 2). These nanostructured sutures showed reduced bacterial adhesion and biofilm formation. This is a promising way to fabricate large-area antibacterial surfaces, as it can be realized at very low cost and can be applied to different polymer surfaces and geometries. Diu et al. [64] used a hydrothermal process, a simple, low-cost method capable of large-area fabrication (fabrication areas on the cm² scale), to produce cicada-wing-inspired nanostructured titania surfaces (brush and niche type, see Table 2) for potential dental and orthopaedic implant applications, and the obtained brush and niche type nanostructures both showed excellent bactericidal efficacy against pathogenic bacteria while allowing the growth of mammalian cells. Wu et al. [91] reported a template electrodeposition technique to fabricate gold nanopillars, nanorings and nanonuggets (see Table 2) on a tungsten reference substrate. In this method, tungsten and aluminium thin films were first deposited on silicon substrates. A nanoporous alumina template was then generated by anodization of the top-layer aluminium. A reference tungsten substrate with nanoscale roughness was obtained by dissolving the nanoporous alumina template. Au nanopillars were then electrodeposited within the nanoporous template, after which the alumina template was removed. Similarly, Au nanorings and clusters of nanopillars were obtained by modifying the structure of the tungsten layer. All of these surfaces were tested against gram positive S. aureus bacteria and showed excellent bactericidal performance, as validated qualitatively by scanning electron microscopy and fluorescence microscopy images, with cell proliferation experiments also carried out to evaluate the antibacterial performance quantitatively. The template electrodeposition technique may not be cost-effective for fabrication on a small scale; however, it can be economical for manufacturing on a large scale. Ozkan et al. [102] fabricated a nanostructured superhydrophobic antibacterial surface by combining PDMS and Cu nanoparticles via aerosol-assisted chemical vapour deposition (AACVD) (sample dimensions 14 cm × 4.5 cm × 0.5 cm). Static water contact angles as high as 151° were obtained on the fabricated surface, and the surface showed excellent bactericidal properties, killing gram positive S. aureus in 1 h and gram negative E. coli in only 15 min. The techniques described above can serve as the basis for fabricating large-area, cost-effective bactericidal surfaces for practical applications. Summary Antimicrobial resistance has become an urgent global challenge and smart alternative solutions are needed to tackle bacterial infections. Bacteria differ in shape, size, cell wall thickness, outer membrane composition and indeed other characteristics.
A number of insects and plants with sharp nanostructures on their surfaces can kill bacteria by physically rupturing/stretching the bacterial cell wall via a contact killing mechanism. This review aims to highlight our current understanding of how natural and bioinspired nanostructured surfaces interact with bacterial cell wall membranes; such nanostructured bactericidal surfaces have the potential to be incorporated into many biomedical and industrial applications, as an alternative to, or in synergistic combination with, chemical bactericidal mechanisms. The bactericidal efficiency of nanostructured silicon, titania and polymer surfaces has already been tested against different pathogenic bacteria, and these nanostructures represent a wide range of shapes, sizes, densities and rigidities. How these physical parameters can be optimized to enhance the bactericidal efficiency remains a challenge. Different fabrication techniques have been briefly discussed, with the focus on their feasibility for cost-effective, large-area production of nanostructured surfaces, which is an important consideration when employing contact killing mechanisms as part of a material design. Further considerations involve the selectivity of the nanostructured surfaces, i.e. being benign or even functionally active towards mammalian cells but hostile towards bacteria. Though physical killing of bacteria has been demonstrated on different nanostructure morphologies, there is no clear generic guideline which holds true for all bacteria and for all substrates with different mechanical and chemical properties. Different models give preference to different pitches: (1) lower pitches are preferred by bending-energy-based models, and (2) higher pitches are preferred by stretching-based models. Too high a pitch may in fact lead to growth of bacteria in between the pillars. It also appears that a certain minimum aspect ratio is required for the nanotexture, as otherwise the cell would be able to "sense" the underlying topology. Finding an optimized nanotopography in terms of the size, shape, aspect ratio, and density, which should be tuned for different sizes and types of bacteria, remains a significant scientific challenge. It is envisaged that developments in the design, fabrication, optimisation, and mechanistic understanding of the bactericidal efficacy of such nanostructured surfaces present many opportunities for further investigation and may serve as an effective strategy in combating pathogenic bacteria and rising to the challenge of antimicrobial resistance facing us.
MODELING NONLINEARITIES WITH MIXTURES-OF-EXPERTS OF TIME SERIES MODELS We discuss a class of nonlinear models based on mixtures-of-experts of regressions of exponential family time series models, where the covariates include functions of lags of the dependent variable as well as external covariates. The discussion covers results on model identifiability, stochastic stability, parameter estimation via maximum likelihood estimation, and model selection via standard information criteria. Applications using real and simulated data are presented to illustrate how mixtures-of-experts of time series models can be employed both for data description, where the usual mixture structure based on an unobserved latent variable may be particularly important, as well as for prediction, where only the mixtures-of-experts flexibility matters. Introduction The last three decades have experienced a great deal of research on nonlinear regression models, as described in [23]. Among the several models proposed in the literature, we can find an important class denoted as mixtures-of-experts (ME), and its extension, denoted as hierarchical mixtures-of-experts (HME). Since the publication of the original papers by Jacobs et al. [26,33], these two classes of models have been used in many different areas to account for nonlinearities and other complexities in the data. In these models, the dependent variable y_t ∈ 𝒴 ⊂ ℝ is assumed to have the following conditional density specification:

\[
f(y_t \mid x_t, \theta) = \sum_{j=1}^{J} g_j(x_t;\gamma)\,\pi\big(y_t;\,\eta(\alpha_j + x_t'\beta_j),\,\phi_j\big), \qquad (1.1)
\]

where x_t ∈ X ⊂ ℝ^s is a vector of covariates, and π(y_t; η(α_j + x_t′β_j), φ_j) is a generalized linear model [38] with mean η(α_j + x_t′β_j) and dispersion parameter φ_j. The specification in (1.1) describes a mixture model with J components, where the weights g_j(x_t; γ) ∈ (0,1) are also functions of the covariate vector x_t. Because of its great flexibility, simple construction, and good modeling properties, ME started to be commonly used in models for nonlinear time series data. Let y_t be a univariate stochastic process observed at time epoch t, t = 1,...,T, and let I_{t−1} be the available information set at time t − 1. In the time series ME construction, the conditional density of y_t given I_{t−1} is assumed to have the form in (1.1), where x_t may include lags of transformations of the observed response y_t, as well as lags of external predictors.
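To make the mixture density in (1.1) concrete, the sketch below (Python with NumPy/SciPy) evaluates the conditional density and conditional mean of a two-expert ME; the Gaussian experts, softmax gates and the particular parameter values are illustrative assumptions made here, not estimates taken from the paper.

import numpy as np
from scipy.stats import norm

def gates(x, gamma):
    """Softmax (multinomial logistic) gates g_j(x; gamma); gamma holds one (v_j, u_j) pair
    per expert, with the last expert's pair fixed at zero for identifiability."""
    z = np.array([v + u * x for v, u in gamma])
    z = np.exp(z - z.max())
    return z / z.sum()

def me_density(y, x, gamma, experts):
    """Conditional density f(y | x) and conditional mean of a mixture-of-experts with
    Gaussian experts pi(y; alpha_j + beta_j * x, sigma_j) -- an illustrative instance of (1.1)."""
    g = gates(x, gamma)
    means = np.array([a + b * x for a, b, _ in experts])
    sds = np.array([s for _, _, s in experts])
    return float(np.sum(g * norm.pdf(y, loc=means, scale=sds))), float(np.sum(g * means))

# Hypothetical two-expert configuration (J = 2, one scalar covariate).
gamma = [(0.0, 0.9), (0.0, 0.0)]               # last gate normalized to zero
experts = [(3.0, 0.5, 1.0), (-3.0, 0.5, 1.0)]  # (alpha_j, beta_j, sigma_j)
dens, cond_mean = me_density(y=4.0, x=5.0, gamma=gamma, experts=experts)
print(dens, cond_mean)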
An application of ME to signal processing in a noninvasive glucose monitoring system is presented in [35]. Reference [22] applies ME to gender and ethnic classification of human faces. Reference [37] presents the use of ME to uncover subpopulation structure for both biomarker trajectories and the probability of disease outcome in highly unbalanced longitudinal data. Reference [27] presents an application of ME in modeling hourly measurements of rain rates. Reference [18] studies local mixtures-of-factor models, with mixture probabilities varying in the input space. Reference [48] employs a model based on combinations of local linear principal components projections, with estimation performed via maximum likelihood. In [52], the authors apply ME, which they call "gated experts," to forecast stock returns. Reference [54] studies mixtures of two experts, referred to as "logistic mixture autoregressive models." Finally, [57] treats mixtures of autoregressive experts, which they call "mixtures of local autoregressive models," or MixAR models, where the covariate vector x_t contains only lags of y_t. The underlying stochastic process represented by (1.1), in a time series context, can be interpreted as follows: imagine there exist J autoregressive processes π(y_{j,t}; η(α_j + x_t′β_j), φ_j), all belonging to one specific parametric family π(·;·,·), and, conditional on the past information I_{t−1}, each component j generates a response y_{j,t}, j = 1,...,J. Additionally, imagine there is a multinomial random variable I_t ∈ {1, 2,...,J}, independent of y_{j,t}, where each value j has probability g_j(x_t; γ) ∈ (0,1), and if I_t = k, the value y_t = y_{k,t} is observed. Based on the law of iterated expectations, we conclude that (1.1) is the conditional density for y_t, given I_{t−1}. In general, the probabilities g_j(x_t; γ) are assumed to have a logistic (softmax) form,

\[
g_j(x_t;\gamma) = \frac{\exp(\upsilon_j + x_t'u_j)}{\sum_{k=1}^{J}\exp(\upsilon_k + x_t'u_k)}, \qquad j = 1,\ldots,J, \qquad (1.2)
\]

where υ_j and u_j ∈ ℝ^s, j ∈ {1,...,J}, are unknown parameters. In order to avoid identification problems, we assume that

\[
\upsilon_J = 0, \qquad u_J = 0. \qquad (1.3)
\]

The mixed components π(y_t; η(α_j + x_t′β_j), φ_j) are referred to as experts, and the probabilities (or weights) g_j(x_t; γ) ∈ (0,1) are called gating functions or simply gates. The grand vector of gating parameters is the list of all the individual gating parameters, γ = (υ_1, u_1, υ_2, u_2,...,υ_{J−1}, u_{J−1})′. Several properties of ME and HME were initially proved by Jiang and Tanner [28][29][30][31]. They treated consistency and asymptotic normality of the maximum likelihood estimator, conditions for parameter identifiability, and approximation properties, for exponential family experts. Nonetheless, although these results are quite general, they do not apply directly to time series data. The authors assumed independent observations y_t, t = 1,...,T, and a compact covariate space X. In a series of papers, Carvalho and Tanner [8][9][10][11] extended most of the maximum likelihood estimation results proved by Jiang and Tanner to time series applications. In addition, [7,10] presented parameter conditions that guarantee stochastic stability of the ME construction. In these papers, the authors also treated exponential family distributions, focusing on normal, Poisson, gamma, and binomial autoregressive processes.
By using mixtures of regressions of one of these four distributions, it is possible to treat a great variety of time series problems.Mixtures of binomial experts can be used to model discrete time series with response y t bounded by some value ν (see, e.g., [51]), whereas mixtures of Poisson experts can be used to model unbounded discrete time series.For continuous responses, we can use mixtures of normal experts for observations assuming values on the whole real line, and mixtures of gamma experts for strictly positive time series.For unbounded count data, mixtures of Poisson experts present an advantage over several models in the literature since the proposed mixture construction allows for both positive and negative autocorrelations, while most of the existing count data models allow only for positive autocorrelation (see, e.g., [3,32]).Besides, most of count time series models have likelihood functions that are difficult to write explicitly, and computational intensive approaches have to be used.This problem does not happen in the ME context and standard maximization algorithms can be employed for parameter estimation (see, e.g., [34]). The ME models bear some similarity to other nonlinear models in the literature.We can mention, for example, the threshold autoregressive (TAR) models introduced by [49], where a threshold variable controls the switching between different autoregressive models.Another example is the Bayesian-treed model introduced by [12], where the input space is split in several subregions and a different regression model is fitted in each subregion.In both approaches, after the partition of the covariate space, a different regression curve is fitted in each subregion. In this paper, we present a survey of the main ideas and results involved in the usage of the ME class of models for time series data.The discussion combines analytical results, simulation illustration, and real applications examples.In Section 2, we provide a more formal definition of ME of time series, with exponential family distributions.Section 3 discusses the probabilistic behavior, focusing on stochastic stability and moment existence, for ME time series models.Section 4 discusses parameter estimation (or model training) using maximum likelihood.In Section 5, Monte Carlo simulations provide evidence to support the BIC in selecting the number J of mixed components.In Section 6, several examples using real and simulated data illustrate how ME can be employed both for data description, where the underlying latent variable I t may be particularly important, as well as for prediction.Final comments and suggestions for future research are presented in Section 7. 
Mixtures-of-experts of time series models In the models discussed in this paper, the observed stochastic process y_t ∈ 𝒴 ⊂ ℝ has a conditional distribution, given the available information set I_{t−1}, following the conditional density specification in (1.1), where the vector of covariates x_t includes functions of lags of y_t. This formulation follows the specification proposed by [36] for time series based on generalized linear models. The vector x_t at time t has the form {ζ(y_{t−1}),...,ζ(y_{t−p}), w_{t−1},...,w_{t−q}}, where w_t is a vector of external covariates, ζ(·) is a transformation of the response y_t, and p and q correspond to the maximum lags. Because the covariate vector is known at time t − 1 (x_t ∈ I_{t−1}), hereinafter we will use the notation x_{t−1} instead of x_t for the conditioning vector of predictors. Examples of ME of exponential family distributions can be based on the experts π(y; η, φ). The grand vector of parameters for the whole model is θ ∈ Θ ⊂ ℝ^K, where θ is the union of all the components θ_j = (α_j, β_j, φ_j), j = 1,...,J, and γ. The dimension of Θ is K = J(2 + s) + (J − 1)(1 + s). For models with known dispersion parameters, θ has J fewer elements. From the density in (1.1), the conditional expectation for the response y_t is

\[
E\big(y_t \mid x_{t-1}\big) = \sum_{j=1}^{J} g_j(x_{t-1};\gamma)\,\eta\big(\alpha_j + x_{t-1}'\beta_j\big),
\]

and higher moments can be obtained by similar expressions. Identifiability of the models treated here can be obtained by following the steps in [9,30]. Because of the mixture structure, we have to impose some order constraints on the expert parameters θ_j = (α_j, β_j, φ_j), j = 1,...,J; that is, we assume θ_1 ≺ θ_2 ≺ ··· ≺ θ_J according to some order relation, so that there is no invariance caused by the permutation of expert indices. We can impose, for example, a lexicographic order relation of the following form: if α_j < α_k, then θ_j ≺ θ_k; if α_j = α_k and β_{j,1} < β_{k,1}, then θ_j ≺ θ_k; if α_j = α_k, β_{j,1} = β_{k,1}, and β_{j,2} < β_{k,2}, then θ_j ≺ θ_k; ...; if α_j = α_k, β_{j,1} = β_{k,1},...,β_{j,s} = β_{k,s}, and φ_j < φ_k, then θ_j ≺ θ_k, for all j,k ∈ {1,...,J}. (As will be discussed in Section 4, parameter estimation can be performed by using maximum likelihood methods. For maximizing the likelihood function, heuristic optimization methods, such as simulated annealing or genetic algorithms, can be employed. In this case, the ordering relation can be imposed directly in the objective function by using, e.g., the parameterization α_1 = α, α_2 = α + e^{κ_2},...,α_J = α + e^{κ_J}, where the new parameters to be estimated are α, κ_2,...,κ_J, instead of α_1,...,α_J. We opted for a simpler approach, in which we apply the EM algorithm to the unrestricted maximization problem. One could rearrange the parameter estimates after the EM solution is obtained, so as to impose the ordering relation. However, in practical terms there is no need to do so, and we decided simply to use the estimated parameters directly.) Additionally, to guarantee identifiability of the gate parameters, we impose the initialization constraint presented in (1.3). Finally, given the dependence of the mean function of the exponential family distributions on the vector of covariates x_{t−1}, we need some additional constraints on the marginal distribution of x_{t−1}. Basically, the conditions are imposed so that we do not allow for linear dependence among the elements of the vector (1, x_{t−1}).
Probabilistic behavior Stochastic stability properties of the ME of time series models can be studied based on the general results for stochastic processes given in [17,40]. These properties are especially important, for example, when treating the asymptotic behavior of the maximum likelihood estimators for the model parameters. Especially for ME of autoregressive linear models, which is the case when each expert has a normal distribution, some results are presented initially in [57] and extended in [7]. In a nutshell, these authors show that stationarity of each autoregressive model individually guarantees stationarity and existence of moments for the ME structure. Nonetheless, for mixture models with constant gate functions (not depending on covariates), reference [54] shows that, even when not all mixed experts are stationary, it is still possible to combine them in a mixture model and obtain a stationary process in the end. Although stochastic stability for autoregressive linear models can be proved for experts with an arbitrary number p of lags, extending these results to other exponential family distributions is not trivial, since linearity plays a key role in going from one-lag models to multiple-lag models (see [7]). The exception is stochastic processes with bounded sample spaces (e.g., mixtures of Bernoulli or binomial experts). For Poisson and gamma experts, reference [10] shows that, given some simple parameter restrictions on ME models where each expert has only one lag, stochastic stability holds and the resulting observed process has a moment generating function, and therefore all moments exist. Simulated time series. In this section, we present a simulated example to illustrate the capability of mixtures-of-experts models to approximate the behavior of various time series data. Although we do not present a more thorough discussion of the approximation theory for mixtures-of-experts, the example below, involving normal experts, gives an idea of the flexibility implied by the proposed construction. The reader can refer to [28,29,56] for related approximation results. For similar examples on simulated data from ME of Poisson autoregressions, see [11]. We simulate a mixture of two Gaussian autoregressions, (1) y_t = 3.0 + 0.5 y_{t−1} + ε_{1,t} and (2) y_t = −3.0 + 0.5 y_{t−1} + ε_{2,t}, where ε_{1,t} and ε_{2,t} are normally distributed with mean 0 and unit variance. The gate functions are (1) g_1(y_{t−1}) = exp(0.9 y_{t−1})/(1 + exp(0.9 y_{t−1})) and (2) g_2(y_{t−1}) = 1 − g_1(y_{t−1}). The upper graph in Figure 3.1 presents the plot of 10 000 observations of the simulated series. (In order to estimate the marginal density of {y_t}, we simply used a kernel estimator based on the generated time series. Given that the process {y_t} is stationary and all moments exist (Carvalho and Skoulakis [7]), we can use the generated series to estimate nonparametrically the density of the marginal process. To obtain better precision in these estimates, we used 40 000 time points after the warm-up sample.) To guarantee that the series reaches stationarity, we initially generated 100 000 warm-up data points. The middle graph presents an estimate of the marginal density of {y_t}.
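A minimal simulation of the two-expert mixture just described is sketched below in Python (NumPy only); it uses the intercepts, autoregressive coefficient and gate coefficient stated above, while the series length, random seed and the shorter warm-up are arbitrary choices made here for illustration.

import numpy as np

rng = np.random.default_rng(0)        # arbitrary seed
n_warmup, n_keep = 10_000, 10_000     # shorter warm-up than the paper's 100 000, for speed
y = 0.0
series = []
for t in range(n_warmup + n_keep):
    g1 = 1.0 / (1.0 + np.exp(-0.9 * y))      # gate of expert 1: exp(0.9 y)/(1 + exp(0.9 y))
    expert = 1 if rng.random() < g1 else 2   # latent regime indicator I_t
    intercept = 3.0 if expert == 1 else -3.0
    y = intercept + 0.5 * y + rng.normal()   # selected Gaussian autoregression
    if t >= n_warmup:
        series.append(y)
series = np.array(series)
print(series.mean(), series.std())           # the marginal density is bimodal around +/-6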
(Depending on the parameter configuration, a warm-up sample of 100,000 observations may be excessive. Nonetheless, given the speed of sample generation, we decided to use a large number to guarantee that the series achieves stationarity.) Note the clear existence of two regimes in the series, which is very similar to the behavior of hidden Markov processes. In fact, when y_t is close to 6.0 (the stationary mean for the first autoregression), the weight for the positive autoregression (first component) is close to one, as can be seen from the lower graph in Figure 3.1, so that the series tends to keep following the first autoregression. Analogously, when y_t is close to −6.0, the weight g_2(y_{t−1}) is close to 1, and the series tends to behave according to the second autoregression. To get an idea of how different parameter values change the observed time series, we simulated a model similar to the mixture of two experts above, using an autoregressive coefficient of 0.6 instead of 0.5. The results are shown in Figure 3.2. Observe that, for a higher autoregressive coefficient, the frequency of regime changes decreases. This is because, when the autoregressive coefficient changes from 0.5 to 0.6, the stationary mean for the first expert becomes 7.5 and the stationary mean for the second expert becomes −7.5. Therefore, regime changes become less likely, because it becomes more difficult for an observed y_t to jump to regions where the weight for the other expert is sufficiently high. Some additional experiments show that, for autoregressive coefficients closer to 1.0, the probabilities of regime change are even lower. Maximum likelihood estimation Estimation of the parameter vector θ for the ME of time series models studied in this paper can be performed by maximizing the partial likelihood function [53]. (We use the partial likelihood function because we are modeling only the conditional process of y_t given x_{t−1}. We are not using the overall likelihood function, in which the stochastic process for both y_t and x_{t−1} is modeled jointly.) From the density in expression (1.1), we can write down the conditional likelihood function based on a sample {y_t, x_{t−1}}, t = 1,...,T. If the vector x_{t−1} contains functions of lags of the response variable y_t, such that the maximum lag order is p, we will require T + p observations so that our sample has an effective size T. The log-likelihood function log L_T(θ) to be maximized over Θ is given by

\[
\log L_T(\theta) = \sum_{t=1}^{T} \log \left[ \sum_{j=1}^{J} g_j(x_{t-1};\gamma)\,\pi\big(y_t;\,\eta(\alpha_j + x_{t-1}'\beta_j),\,\phi_j\big) \right]. \qquad (4.1)
\]

Numerical optimization can be performed by applying the EM algorithm (see [25,33]), described in Section 4.1. (In this paper, we focus on the frequentist approach, employing maximum likelihood for parameter estimation. However, one can also use Bayesian methods, which present various nice properties, as discussed in Section 7.) In Section 4.2, we discuss formal results for the asymptotic properties of the maximum likelihood estimator. The EM algorithm. For simple problems, where the parameter space is low-dimensional, maximization of the log-likelihood function in (4.1) can be performed directly by using some standard optimization algorithm, such as Newton-Raphson. However, in most practical problems, the dimension of Θ is high enough that the usual optimization methods become very unstable. The alternative, commonly used in mixture-of-distribution models, is the EM algorithm, proposed by [15]. The use of the EM algorithm for mixtures-of-experts models is thoroughly described in [25,33], and that is the procedure used here for estimation.
Note that, at each iteration i, maximization of Q^i(θ) in (4.2) can be achieved by maximizing separately the J terms Q^i_j, corresponding to the parameters of each expert distribution individually, and the term Q^i_gates, corresponding to the parameter vector λ for the gating functions, where ω_j = (υ_j, u_j)′, λ = (ω_1, ω_2,...,ω_{J−1})′, and z_{t−1} = (1, x_{t−1})′. We use the notation π(y_t | x_{t−1}; θ_j) = π(y_t; α_j + x_{t−1}′β_j, φ) so as to make explicit the dependence on the target parameter θ_j. Therefore, the EM algorithm in our case consists of calculating, at each iteration i, the weights h_{j,t} ∈ (0,1), j = 1,...,J, t = 1,...,T, and then maximizing the functions Q^i_1(θ_1),...,Q^i_J(θ_J) and Q^i_gates(λ) to find the new value θ^{i+1}. Maximizing Q^i_j(θ_j) can be seen as a weighted maximum likelihood estimation, where each observation in the sample is weighted by its corresponding gating function value. Maximizing Q^i_gates(λ) corresponds to estimating a multinomial logistic regression. The limit of the sequence {θ^i}, denoted by θ(θ^0), is a root of the first-order condition ∂_θ log L_T(θ) = 0 (see [47]). When the log-likelihood function is multimodal, the limits θ(θ^0) may not correspond to the global maximum of the log-likelihood function, so we used multiple starting points to initialize the algorithm. In this case, the point with maximum likelihood among the multiple runs is an approximation to the global maximum, and the maximum likelihood estimator θ̂ is approximately the root corresponding to the largest likelihood value L_T(θ(θ^0)). Alternatively, one can resort to heuristic algorithms such as genetic algorithms [20,41] and simulated annealing [50]. Besides, several methods which take advantage of the specific mixture structure of ME models are also available [46]. Asymptotic properties of the MLE. Given the simple structure of the likelihood function for ME models, the main method for parameter estimation is maximum likelihood. By using the EM algorithm or any other global search heuristic method, maximizing the log-likelihood function is a rather simple task and does not involve maximizing simulated likelihoods. Therefore, it is expected that the MLE will present all the nice asymptotic properties of regular parametric models, and in fact that is exactly what happens. Carvalho and Tanner [9][10][11] present a series of very general results guaranteeing consistency and asymptotic normality of the MLE for several different situations. In fact, given stationarity and ergodicity of the conditioning series (predicting variables) {x_{t−1}} and some hypotheses about the existence of moments of {x_{t−1}}, both consistency and asymptotic normality hold, with the asymptotic covariance of the suitably scaled estimator given by the inverse of I ≡ −E{∂_θ ∂_θ′ log f(y_t | x_{t−1}; θ_0)}, the Fisher information matrix, where θ_0 is the true parameter value. By imposing the existence of slightly higher moments, the same results hold for nonstationary time series. If we assume that there is a single parameter θ* that minimizes the Kullback-Leibler pseudodistance, [9,10] show that, under some regularity conditions on the true data generating process, consistency and asymptotic normality of the MLE still hold. In this case, if one is interested in statistical inference, such as hypothesis testing or confidence interval construction, the asymptotic covariance matrix of θ̂_MLE is no longer the Fisher information matrix, and some correction has to be made (see [8]).
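The EM scheme described above translates almost line-for-line into code. The sketch below (Python/NumPy) is an illustrative implementation of one iteration for Gaussian experts with a shared, known variance, so that the expert M-step reduces to weighted least squares; these simplifications, and the omission of the multinomial logistic gate update (indicated only by a comment), are assumptions made here to keep the example short.

import numpy as np

def em_step(y, X, Z, alpha, beta, gate_coef, sigma=1.0):
    """One EM iteration for a Gaussian mixture-of-experts (illustrative sketch).
    y: (T,) responses; X: (T, s) expert covariates; Z: (T, 1+s) gate covariates (1, x_{t-1});
    alpha: (J,) intercepts; beta: (J, s) slopes; gate_coef: (J, 1+s) with the last row zero."""
    J = len(alpha)
    # Gates g_{j,t} via a softmax over Z @ gate_coef_j
    scores = Z @ gate_coef.T
    scores -= scores.max(axis=1, keepdims=True)
    g = np.exp(scores)
    g /= g.sum(axis=1, keepdims=True)
    # E-step: responsibilities h_{j,t} proportional to g_{j,t} * N(y_t; mean_{j,t}, sigma^2)
    means = alpha[None, :] + X @ beta.T
    dens = np.exp(-0.5 * ((y[:, None] - means) / sigma) ** 2)
    h = g * dens
    h /= h.sum(axis=1, keepdims=True)
    # M-step for the experts: one weighted least-squares regression per expert
    Xd = np.column_stack([np.ones_like(y), X])
    new_alpha, new_beta = alpha.copy(), beta.copy()
    for j in range(J):
        w = np.sqrt(h[:, j])
        coef, *_ = np.linalg.lstsq(Xd * w[:, None], y * w, rcond=None)
        new_alpha[j], new_beta[j] = coef[0], coef[1:]
    # M-step for the gates would fit a multinomial logistic regression of h on Z (omitted here).
    return new_alpha, new_beta, h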
More generally, one can show that if the Kullback-Leibler pseudodistance [6] achieves a global minimum at all the elements of a nonempty set Ψ 0 , the maximum likelihood estimator is consistent to Ψ 0 , in the sense that P{min θ∈Ψ0 | θ − θ| < } → 1 as T → ∞, for any > 0. (see, e.g., [28]).The importance of this fact is that even if there is more than one parameter θ * resulting in the best approximation for the true data generating process, the maximum likelihood will provide a parameter estimate close to one of these best approximation parameters, which is important for prediction purposes. Selecting the number of experts The selection of the correct number of experts has no easy answer.Basically, log-likelihood ratio tests are not applicable in this case, as long as, under the null hypothesis of fewer experts, the alternative hypothesis implies a nonidentified problem (see, e.g., [43]).We will examine the use of information criteria such as BIC [45] or AIC [1,2] in selecting the right number of experts. In [55], BIC is used to select the number of experts for spatially adaptive nonparametric regression models.For well-behaved models, we know that the BIC is consistent for model selection, since, with probability tending to one as the sample size goes to infinity, the true model will be chosen because it has the largest BIC.However, when the model is overidentified, the usual regularity conditions to support this result fail.Fortunately, [55] presents some evidence that, even when we have overidentified models, the BIC may still be consistent for model selection. In this section, we present some results about the Monte Carlo simulations to evaluate the performance of the Bayesian information criteria (BIC) and the Akaike information criteria (AIC) in selecting the right number of mixed experts.We performed simulations under true models composed by three experts and, for each generated data set, we estimated mixtures-of-experts models with various number of mixed distributions.For each simulated data set, we stored the BIC and the AIC values.We expect that one of the two criteria (or both) will present the smallest value for the estimated model with the same number of experts as the simulated true model.We performed simulations for normal and binomial distributions.We report that simulations for other distributions presented similar conclusions.For each true model, we generated 400 data sets, with T = 100 and T = 200 observations.Each model includes an external covariate x t , which was generated as an autoregressive process of order 1, with autoregressive coefficient equal to 0.5. 
For the binomial case with three experts, the expressions for the experts y j,t 's and for the gating functions are presented below.In all models, we considered 50 trials for the binomial random variables: (5.4) The results for binomial experts are summarized in Table 5.2.As can be seen from the tables, the BIC performed very well in selecting the correct number of mixed experts, for the two distributions studied in the simulations.The AIC tends to pick more experts than needed, especially in the normal case.Therefore, the use of the BIC seems to be very appropriate for model selection in this case, and its performance tends to improve as the sample size T increases.We replicated similar experiments with true models containing one and two experts, and with other distributions (Poisson and gamma), and the conclusions are basically the same.For some of these distributions, we also simulated samples with 1 000 observations and noticed that the BIC still selected the true number of experts for 100% of the samples, while the AIC continued to present a bias towards selecting a higher number of experts.These results seem to suggest the consistency of the BIC for selecting the number of components.This conclusion agrees, for example, with the results presented in [14], where the authors show that the BIC is an almost surely consistent for estimating the order of a Markov chain. Applications In this section, we present examples where ME are used to model different time series. In the first example, we present an application of mixtures of binomial experts (for applications using ME of Poisson experts, see [10,11]), where we are interested not only in predicting the response variable, but also in presenting some plausible description of the data-generating process, based on the stochastic underlying mixture structure behind the mixtures-of-expert models.In these cases, the latent variable I t , which determines which regime (or expert) is observed, has a meaning and helps us interpret the results. In example two, we are not interested in explaining the data anymore, but only in using a flexible modeling structure, such as ME, so as to approximate and predict the conditional density function of the observed process.In this case, the underlying latent variable I t has no meaning, but only the functional form for the density in (1.1).For the simulated time series, clearly the data-generating process does not follow a ME model.However, as we will discuss in these examples, we are still able to reasonably approximate the conditional process. Number of buying customers. In this example, we consider the problem of modeling the buying behavior of a list of regular customers in a supermarket.Basically, we have 20 months of scanner data, and we selected a list of 3 497 regular customers who bought one of 8 brands of canned tuna at least once.By regular customers, we mean the customers that appeared in the store during the 20 months.Therefore, we have a binomial time series, where the response variable y t (see upper-left graph in Figure 6.1) is the number of customers buying one of the 8 brands on week t.The number of trials ν = 3497. 
One natural covariate in this case is some price index. For these 8 brands, we have 8 different prices in each week (promotions are launched on a weekly basis). The price index was constructed as a weighted average of the 8 individual prices, with weights given by the overall market shares during the 20 months. After calculating the weighted average, we applied a logarithm transformation, obtaining the covariate p_t (some preliminary estimations showed that using the logarithm of prices provides better values for the BIC and AIC than using the untransformed prices). The logarithm of the price index is presented in the upper-right graph in Figure 6.1. To model some nonstationarities in the data, we also included, in the vector of predictors, a linear time trend t, 1 ≤ t ≤ 86, where t is the week number. Finally, in the middle of the overall period, one competitor entered the neighborhood, which may have had some impact on the buying behavior of the list of customers; to model the competitor effect, we used a dummy variable d_t, with d_t = 0 if t ≤ 42, and d_t = 1 otherwise.

After trying different numbers of experts and different numbers of lags for each predicting variable, the resulting model is a mixture of two experts (all parameters are significant at the 1% level). (In selecting the final model, we employed the BIC for choosing the number of experts and the t-statistics for selecting the number of lags, starting from an initial model with a high number of lags. This procedure was based on the general-to-specific approach, commonly used for model building in econometrics (see [13,24]). Nonetheless, there is still need for further research on model selection in ME models, as discussed in Section 7.) The two binomial regressions for each expert are given by

y_{j,t} ~ Bin(ν = 3,497; e^{h_{j,t}} / (1 + e^{h_{j,t}})),   j = 1, 2,

where h_{j,t} is the linear predictor of expert j. As expected, the contemporary price elasticities are negative, which implies the effectiveness of price reductions in increasing the number of buying customers. Observe that the second expert presents a higher price sensitivity. Both regressions present significant positive coefficients for the first and second lags of the logarithm of the price index, which implies the existence of a dynamic effect. Basically, the inclusion of the lags of the price index suggests that if there is a promotion in the current week, some of the customers buy and stock up on canned tuna so that, even if there is another price reduction the next week, the price effect will not be so pronounced.

The gating function, corresponding to the weight of the first expert, is given by

g_{1,t} = exp(0.1375 + 0.0292 t − 0.9828 d_t) / (1 + exp(0.1375 + 0.0292 t − 0.9828 d_t)).

Intuitively, we can regard the overall price elasticity as a linear combination of the price elasticities in both experts, weighted by the corresponding gate functions. Therefore, when we increase the weight for the first expert, we decrease the overall price sensitivity. As we can note from the expression for g_{1,t} (see lower graphs in Figure 6.1), the price sensitivity decreases with the time trend and increases with the entrance of the competitor.
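As a quick illustration of the gate dynamics (our own sketch, not part of the original analysis), the weight g_{1,t} can be evaluated over the 86 weeks with the reported coefficients and the competitor dummy switching on after week 42:

```python
import numpy as np

def g1(t, d):
    """Gate weight of the first (less price-sensitive) expert,
    using the estimated coefficients reported in the text."""
    eta = 0.1375 + 0.0292 * t - 0.9828 * d
    return np.exp(eta) / (1.0 + np.exp(eta))

weeks = np.arange(1, 87)                # t = 1, ..., 86
dummy = (weeks > 42).astype(float)      # competitor present from week 43 onwards
weights = g1(weeks, dummy)

print(weights[0], weights[41], weights[42], weights[-1])
# the weight rises slowly with the trend and drops by roughly 0.2 when d_t switches to 1
```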
The above conclusion about the positive effect of the competitor on the overall price sensitivity is quite surprising if we take into account the fact that the competitor has a tradition of being a less expensive store. Basically, we would expect the competitor to attract the more price-sensitive customers, so that the remaining tuna buyers would be less price sensitive. One plausible explanation for this apparent contradiction can be found by looking at the plot of the logarithm of the price index in the upper-right graph in Figure 6.1. Apparently, the studied store changed its price strategy, increasing the number of promotions after the appearance of the competitor. Actually, the averages of the logarithm of the price index before and after the competitor are 0.0169 and −0.0114. In this way, it seems that the studied store regained its price-sensitive customers.

Simulated data. The following example applies ME of Gaussian autoregressions to simulated data, so as to illustrate the ability of ME of time series to model the conditional density of time series processes. The artificial time series present nonlinearities not only in the conditional mean function but also in other conditional moments. To evaluate the performance of the estimated models, we present several graphical criteria as suggested in [16].

The simulated time series corresponds to a variance covariate-dependent process. The response y_t and the covariates x_{1,t} and x_{2,t} obey

y_t = 3.0 + 0.6 y_{t−1} − 0.2 (x_{1,t−1} − 10.0)^2 + σ_t ε_t,
σ_t^2 = 1.0 + 0.9 (x_{2,t−2} − 5)^4,
x_{2,t} = 2.0 + 0.6 x_{2,t−1} + 0.3 η_t,   ε_t, η_t ~ N(0, 1.0).   (6.4)

Note that there is an explicit nonlinearity in how the conditional variance of y_t depends on the second lag of x_{2,t}. Besides, the conditional mean function of y_t is a nonlinear function of the lagged covariate x_{1,t}. The simulated time series are presented in Figure 6.2. We fitted a mixture-of-normal-experts model to 600 observations. These observations were obtained from the data-generating process in (6.4), after 100,000 warm-up data points. Selecting the number of experts via BIC, the resulting model is a mixture of three experts, with lags up to order 2 of both the covariates x_{1,t} and x_{2,t} and the response y_t in the experts and in the gates.
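A minimal simulation sketch of the process in (6.4) is given below. The generating equation for x_{1,t} and the exact dependence structure of the innovations did not survive in the text, so the sketch assumes that x_{1,t} follows an AR(1) with the same coefficients as x_{2,t} and that all innovations are independent standard normals; both are our assumptions, not statements from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_dgp(T=600, burn_in=100_000):
    """Simulate the variance covariate-dependent process in (6.4).

    ASSUMPTIONS (not recoverable from the text): x_{1,t} follows an AR(1)
    with the same coefficients as x_{2,t}, and all innovations are
    independent N(0, 1) draws.
    """
    n = T + burn_in
    y, x1, x2 = np.zeros(n), np.zeros(n), np.zeros(n)
    for t in range(2, n):
        x1[t] = 2.0 + 0.6 * x1[t - 1] + 0.3 * rng.normal()   # assumed process for x_1
        x2[t] = 2.0 + 0.6 * x2[t - 1] + 0.3 * rng.normal()
        sigma2 = 1.0 + 0.9 * (x2[t - 2] - 5.0) ** 4          # conditional variance of y_t
        y[t] = (3.0 + 0.6 * y[t - 1]
                - 0.2 * (x1[t - 1] - 10.0) ** 2
                + np.sqrt(sigma2) * rng.normal())            # conditional mean plus noise
    return y[-T:], x1[-T:], x2[-T:]

y, x1, x2 = simulate_dgp()
print(round(y.mean(), 2), round(y.std(), 2))
```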
In order to assess the goodness of fit of the estimated ME of normal autoregressions in modeling the simulated series studied in this paper, we use the methodology based on the probability integral transform, initially defined by [44]. This approach has been employed by a number of recent papers such as [4,16]. The analysis is based on the relationship between the data-generating process f_t(y_t | x_{t−1}) for the response variable y_t and the sequence of estimated conditional densities p_t(y_t | x_{t−1}) obtained from the mixture model. The probability integral transform u_t is the conditional cumulative distribution function corresponding to the density p_t(y_t | x_{t−1}), evaluated at the actual observed value y_t,

u_t = ∫_{−∞}^{y_t} p_t(s | x_{t−1}) ds.   (6.5)

We then have the following fact, a proof of which can be found in [16], which is the backbone for the model-checking analysis in this paper: if a sequence of density estimates {p_t(y_t | x_{t−1})}_{t=1}^{T} coincides with the true data-generating process {f_t(y_t | x_{t−1})}_{t=1}^{T}, then, under the usual conditions of nonzero Jacobian with continuous partial derivatives, the sequence of probability integral transforms {u_t}_{t=1}^{T} of {y_t}_{t=1}^{T} with respect to {p_t(y_t | x_{t−1})}_{t=1}^{T} is i.i.d. U(0,1). In this paper, instead of working directly with the sequence {u_t}_{t=1}^{T}, we followed the suggestion in [4] and worked with the transformation {Φ^{−1}(u_t)}_{t=1}^{T}, where Φ^{−1}(·) is the inverse of the standard normal distribution function. The aforementioned fact implies that {Φ^{−1}(u_t)}_{t=1}^{T} is a standard normal i.i.d. sequence. Therefore, after estimating the mixtures of autoregressive Gaussian experts, we evaluated the model fit by checking the hypothesis of independence and standard normality for the constructed series {z_t}_{t=1}^{T}, where z_t = Φ^{−1}(u_t), t = 1, ..., T. Following [16], we employed a number of graphical methods for assessing goodness of fit. The analysis can be done by plotting the density estimate for the series z_t and comparing it to the standard normal density function. Our density estimation employs the Gaussian kernel and uses the optimal bandwidth for i.i.d. Gaussian data. Additionally, we also plotted the normal quantile plot for the series {z_t}_{t=1}^{T} and compared it to the normal quantile plot for a standard normal variable. The two upper graphs in Figure 6.3 present the normal quantile plots (upper-left graph) and the density estimates (upper-right graph) for the simulated example.

To check the independence hypothesis for the series {z_t}_{t=1}^{T}, we can plot the autocorrelation function for the series (z_t − z̄), (z_t − z̄)^2, (z_t − z̄)^3, and (z_t − z̄)^4, as suggested by [16], where z̄ is the sample mean of {z_t}_{t=1}^{T}. The four lower graphs in Figure 6.3 contain the plots of the autocorrelation functions of the four transformed series for the ME model applied to the simulated time series, along with the corresponding 5% significance limits. For these limits, we used the approximation ±1.96 T^{−1/2} (see [5] for details).

According to Figure 6.3, the ME model seems to be a good approximation for the true data-generating process in the simulated example. Note that the normal quantile plots and the density plots seem to support the standard normality of {z_t}_{t=1}^{T}. Besides, the ACF plots seem to provide support for the independence of the constructed series {z_t}_{t=1}^{T}. For more examples on simulated and real data regarding ME of Gaussian autoregressions, see [9].
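The mechanics of the check can be sketched as follows (our own toy illustration; `predictive_cdf` stands in for the estimated mixture's conditional CDF, and here it equals the true one, so the diagnostics should pass):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

# Toy series whose true conditional law is N(0.5*y_{t-1}, 1).
T = 500
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + rng.normal()

def predictive_cdf(y_t, y_prev):
    """Stand-in for the estimated conditional CDF p_t(. | x_{t-1}); here it is
    the true CDF, so the PIT sequence should behave like i.i.d. U(0,1)."""
    return norm.cdf(y_t, loc=0.5 * y_prev, scale=1.0)

u = predictive_cdf(y[1:], y[:-1])   # probability integral transforms u_t
z = norm.ppf(u)                     # z_t = Phi^{-1}(u_t): should be i.i.d. N(0,1)

def acf(x, max_lag=20):
    """Sample autocorrelations of x at lags 1..max_lag."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

band = 1.96 / np.sqrt(len(z))       # approximate 5% significance limits
for power in (1, 2, 3, 4):          # powers of the centered z_t, as in the text
    outside = np.abs(acf((z - z.mean()) ** power)) > band
    print(power, int(outside.sum()), "lags outside the band")
```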
Final comments and suggestions for future research

In this paper, we discussed some of the recent results on a nonlinear class of models for time series data. This class is based on the idea of combining several simple models in a mixture structure, where the weight for each model is a function of the covariates. Each combined simple model is called an expert, whereas the weights are denoted as gates. The combined resulting model is denoted as mixtures-of-experts of time series. To incorporate time series dynamics, the covariates in the experts and in the gates may include lags or transformed lags of the dependent variable. Therefore, we can regard these models as nonlinear autoregressive structures, and they include several architectures suggested in the literature [25,51,52,54,57].

Some simulated examples showed that, even with a relatively simple and intuitive structure, mixtures-of-experts can reproduce a great variety of time series behaviors, even with a small number of components. When the number of mixed components goes to infinity, ME of time series models constitute a universal approximator for the conditional function of y_t given x_{t−1}, in the same way as artificial neural networks, stochastic neural networks, and other sieve-type models [28,29]. However, because of the mixture construction, ME of time series models are able to capture more than approximations of the mean function. In fact, they may also capture multiple modes [54], asymmetries (skewed conditional distributions), heavy tails, and nonhomogeneity in higher conditional moments (e.g., conditional heteroscedasticity). Moreover, one can easily extend the ideas presented in this paper and combine densities from different families, such as normal and gamma, or Poisson and binomial. Therefore, ME of time series models may be able to provide not only good approximations for the conditional-mean function, but also good approximations to the entire conditional distribution of the response variable y_t. This fact was illustrated in this paper using a simulated example. More examples, with simulated and real data, can be found in [9].

We discussed several important results regarding model identification and stochastic stability for ME of time series models. The two main assumptions for model identification are that no two experts have the same parameter vector (θ_i ≠ θ_j for all i ≠ j, 1 ≤ i, j ≤ J), and that the design matrix obtained from stacking the covariate vectors x_{t−1} is full rank with probability 1 [9,11,30]. For stochastic stability, a sufficient condition is that all autoregressive experts are individually stationary. Given that condition, no additional assumptions on the gates are necessary [7,10]. Nonetheless, as [54] has pointed out, even when some of the experts are nonstationary, the whole system may still be stationary. Therefore, providing more general conditions for ME stability remains an open question.
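As a small illustration of the individual-expert stationarity condition (our own sketch, not from the paper), one can verify that the roots of each expert's autoregressive polynomial lie outside the unit circle:

```python
import numpy as np

def ar_is_stationary(phi):
    """True if the AR(p) expert with coefficients phi = (phi_1, ..., phi_p) is stationary,
    i.e. all roots of 1 - phi_1*z - ... - phi_p*z^p lie outside the unit circle."""
    poly = np.r_[1.0, -np.asarray(phi, dtype=float)]   # coefficients in increasing powers of z
    roots = np.roots(poly[::-1])                       # np.roots expects the highest power first
    return bool(np.all(np.abs(roots) > 1.0))

print(ar_is_stationary([0.5, 0.3]))   # True: a stationary AR(2) expert
print(ar_is_stationary([1.1]))        # False: an explosive AR(1) expert
```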
Parameter estimation of the ME model can be performed by maximum likelihood, employing the EM algorithm, which exploits the mixture construction. Alternatively, one can use heuristic methods for likelihood maximization (genetic algorithms, simulated annealing, etc.) instead of the EM method. Several analytical results show that, when the true data-generating process follows a ME construction, the maximum likelihood parameter estimates are consistent for the true parameters, and asymptotic normality holds. Additionally, even when the model is misspecified and the true data-generating process does not belong to a ME of time series family, the parameter estimates are still consistent and asymptotically normal. In this case, some easily computable corrections have to be applied to the estimated covariance matrix. For more details, refer to [9,10,28,31]. Finally, simulated examples show that the BIC seems to be consistent for selecting the number of experts.

Several important questions still remain regarding ME of time series models. The analytical results for approximation capability and stochastic stability can be extended to more general conditions. Moreover, there is still work to be done on alternative estimation algorithms. Besides, model selection still deserves further investigation. Even though the BIC seems to be consistent in selecting the number of experts, there is still need for research on the selection of covariates and on the selection of the number of lags.

In terms of estimation algorithms, one possibility is to use Bayesian techniques, which have been successfully employed for nonlinear time series models and for mixture models (see, e.g., [19,25,39,42]). One of the advantages of using Bayesian methods is that, in terms of forecasting k steps ahead, the Markov chain Monte Carlo (MCMC) approach automatically provides samples from the predictive distribution. Besides, one can employ reversible jump MCMC to obtain the posterior distribution for the number of experts in the ME construction (see [21]). Some of these topics are under current investigation by the authors.

Figure 3.1. Generated time series (a), density estimate for the observed series (b), and weight g_1(y_{t−1}) for the positive-mean autoregression (c) in the first example. The autoregressive coefficient for each expert is assumed to be 0.5.
Figure 3.2. Generated time series (a), density estimate for the observed series (b), and weight g_1(y_{t−1}) for the positive-mean autoregression (c) in the first example. The autoregressive coefficient for each expert is assumed to be 0.6.
Figure 6.2. Simulated time series for both covariates x_{1,t} and x_{2,t} and the response y_t.
9,811.2
2006-08-01T00:00:00.000
[ "Mathematics" ]
Preparation and Antibacterial Properties of Substituted 1,2,4-Triazoles

Background. Both 1,2,3- and 1,2,4-triazoles are nowadays incorporated in numerous antibacterial pharmaceutical formulations. Aim. Our study aimed to prepare three substituted 1,2,4-triazoles and to evaluate their antibacterial properties. Materials and Methods. One disubstituted and two trisubstituted 1,2,4-triazoles were prepared and characterised by physical and spectroscopic properties (melting point, FTIR, NMR, and GC-MS). The antibacterial properties were studied against three bacterial strains: Staphylococcus aureus (ATCC 25923), Escherichia coli (ATCC 25922), and Pseudomonas aeruginosa (ATCC 27853), by the agar disk diffusion method and the dilution method with MIC (minimal inhibitory concentration) determination. Results. The spectroscopic characterization of the compounds and the working protocol for the synthesis of the triazolic derivatives are described. The compounds were obtained with 15-43% yields and with high purities, confirmed by the NMR analysis. The evaluation of biological activities showed that the compounds act as antibacterial agents against Staphylococcus aureus (ATCC 25923), while being inactive against Escherichia coli (ATCC 25922) and Pseudomonas aeruginosa (ATCC 27853). Conclusions. Our results indicate that compounds containing a 1,2,4-triazolic moiety have great potential in developing a wide variety of new antibacterial formulations.

According to this, we set our goal in the synthesis, characterization (physical and spectroscopic properties, melting point, FTIR, NMR, and GC-MS), and evaluation of the biological activity of three substituted triazoles (1-3) against Staphylococcus aureus (ATCC 25923), Escherichia coli (ATCC 25922), and Pseudomonas aeruginosa (ATCC 27853).

Thin-layer chromatography (TLC) was carried out on silica gel-coated plates 60 F254 Merck using hexane : ethyl acetate 1 : 1 (v/v) as eluant. FTIR spectra were recorded in KBr pellets prepared with a Specac Pellet Press (Specac Ltd., Kent, UK) on a Jasco FT/IR-410 spectrophotometer (Jasco Analytical Instruments, Easton, USA), using the following abbreviations for the aspect/intensity of the bands: br: broad; s: strong; m: medium; w: weak. 1H-NMR and 13C-NMR spectra were recorded on a Bruker Avance AC200 spectrometer (Bruker Biospin International AG, Aegeristrasse, Switzerland) in DMSO-d6, using TMS as reference; chemical shifts are reported in ppm and the coupling constants in Hz. The multiplicity of the signal and the aspect of the band are abbreviated as follows: br: broad; s: singlet; d: doublet; m: multiplet.

Mass spectra (GC-MS) were obtained using an Agilent G1701DA apparatus (Agilent Technologies, Inc., Santa Clara, USA) using methanol as carrier solvent.

Disk Diffusion Method. The antimicrobial activity of the compounds was evaluated according to the guidelines of the National Committee for Clinical Laboratory Standards (NCCLS, 1997) using the agar disk diffusion method [26]. NCCLS recommends a bacterial suspension with a density equal to 0.5 McFarland (which gives a final bacterial concentration of 1-2 × 10^8 CFU/mL). The Mueller-Hinton agar plates were inoculated with bacterial suspension using a sterile cotton swab.
A series of solutions of the tested compounds was prepared in DMSO, with concentrations of 0.01%, 0.1%, 0.5%, and 1%, respectively. Within 15 min after the plates were inoculated, sterile Whatman number 1 filter paper disks (6 mm in diameter) impregnated with the solutions in DMSO of the tested compounds (20 µL solution/disk, corresponding to 2, 20, 100, and 200 µg compound/disk, respectively, for the four solutions prepared with each compound) were distributed evenly on the surface, with at least 25 mm (center to center) between them. Disks with gentamicin (10 µg), supplied by Bio-Rad, were used as positive control for the antimicrobial activity. For negative control we used a paper disk impregnated with dimethylsulfoxide (DMSO).

Plates inoculated with the bacterial suspensions were incubated at 37 °C for 24 h. The inhibition zone diameters were measured in millimeters, using a ruler. For all the bacterial strains the disk diffusion tests were performed in triplicate and the average reading was taken into account.

Dilution Method. The MIC values were determined only for the active compounds, with a zone of inhibition >10 mm. The MIC values were evaluated in the range of 1.56-50 µg/mL. The MIC (µg/mL) was determined by the binary microdilution method. We prepared stock solutions of the tested compounds in DMSO, at a concentration of 100 µg/mL. From these solutions, serial dilutions of the compounds (50, 25, 12.5, 6.25, 3.12, and 1.56 µg/mL) were prepared and brought, under aseptic conditions, to a final volume of 200 µL with nutrient medium. To all tubes 50 µL of bacterial suspension was added, with a density equal to 0.5 McFarland. All tubes were incubated at 37 °C for 24 h. The MIC was recorded as the minimum concentration of the compound which inhibited the visible growth of the tested microorganisms. For a negative control, 50 µL of DMSO was introduced in a tube with 50 µL of bacterial suspension and 100 µL of nutrient medium.

We performed triplicate tests and, from all the tubes used for MIC reading, cultures on Columbia agar supplemented with 5% sheep blood were performed in order to verify the results; no bacterial growth was recorded in the presence of the tested solution to which the MIC was attributed.
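As a quick consistency check of the disk loadings and the dilution series (our own arithmetic, not part of the original protocol), a w/v percentage converts directly into micrograms per 20 µL aliquot, and the MIC series is a two-fold dilution of the 100 µg/mL stock:

```python
# Mass of compound delivered per disk from a w/v percentage and a 20 µL aliquot,
# and the two-fold dilution series used for the MIC determination.
aliquot_ul = 20.0                               # volume applied per disk (µL)
for pct in (0.01, 0.1, 0.5, 1.0):               # % w/v solutions in DMSO
    ug_per_ul = pct * 10.0                      # 1% w/v = 10 mg/mL = 10 µg/µL
    print(f"{pct}% -> {ug_per_ul * aliquot_ul:g} µg per disk")
# 0.01% -> 2, 0.1% -> 20, 0.5% -> 100, 1.0% -> 200 µg per disk

stock = 100.0                                   # stock solution, µg/mL
print([round(stock / 2 ** k, 2) for k in range(1, 7)])   # 50, 25, 12.5, 6.25, 3.12, 1.56
```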
Results and Discussion

The synthesis of the heterocyclic derivatives (1-3) was performed according to indications in the literature. The compounds were obtained with moderate yields (15-43%) but with high purity, a fact confirmed by TLC analysis and evaluation of the melting interval. The formation of the triazolic derivatives was proven by FTIR and NMR analysis, as well as by mass spectrometry for compound (3). The results of the antimicrobial evaluation are presented in Table 1.

The synthesized and evaluated derivatives with triazolic structure were tested for antibacterial activity against three bacterial strains by means of the disk diffusion method. The results presented in Table 1 reveal that only two compounds may act as anti-infective agents, inhibiting the growth of S. aureus ATCC 25923 at concentrations within the range of 25-40 µg/mL. It is worth mentioning that both compounds are active against Gram-positive bacteria (S. aureus), while no activity was present against Gram-negative bacteria (E. coli, P. aeruginosa); these results demonstrate a specific antimicrobial activity against Gram-positive bacterial infections. Gentamicin is a powerful broad-spectrum antibiotic, largely used in clinical practice, usually in association with β-lactam antibiotics, for the treatment of a wide range of bacterial infections; in this paper, it served as reference for the antimicrobial potency of the tested compounds. Both active substances, as shown in Table 1, produced an inhibition zone.

Table 1: Antimicrobial activity of compounds.
1,430.8
2015-03-18T00:00:00.000
[ "Chemistry", "Medicine" ]
Conference ALC '15: Electronic Structure of MePc/Si(100) Surface Studied Using Metastable-Atom Induced Electron Spectroscopy

Metal phthalocyanines (MePc) have unique features applicable to the fields of electronics and optics. In this study, we observe the surface electronic structure of MePc (Me = Cu, Zn) adsorbed on Si(100) using metastable-atom induced electron spectroscopy (MIES). MePc molecules are deposited for less than 2000 s in vacuum at room temperature. At the initial adsorption of the MePc, each molecule lies flat on the substrate and the center metal atom is on top. This orientation of the adsorbed molecules gradually changed with an increase in the deposition time of the MePc. When the MePc-covered surface was annealed by direct-current heating at 800 °C or below, the molecules started to decompose and desorbed from the Si(100) surface. However, Cu atoms remained on the surface. We discuss the adsorption structure based on the deposition time and the behavior of the MePc molecules with annealing. [DOI: 10.1380/ejssnt.2016.141]

I. INTRODUCTION

Metal phthalocyanines (MePc) have unique features for applications in the fields of electronics and optics. For example, they can be employed in various devices such as solar batteries, sensors, and fuel cells. These features of MePc are attributed to the atomic bonding state and the molecular structure. The adsorption structures of MePc have already been observed on single-crystal substrates using a scanning tunneling microscope (STM) [1,2]. STM results showed that the orientation of the adsorbed MePc depends on the structure of the substrate surface. The surface electronic structures of CuPc and ZnPc were calculated using theoretical methods such as density functional theory (DFT) [3-6]. These results showed the partial density of states for each atom in the CuPc and ZnPc molecules.

In this study, we measure the surface electronic structure of MePc (Me = Cu, Zn) adsorbed on Si(100) using metastable-atom induced electron spectroscopy (MIES). The MIES technique provides accurate measurements at the outermost surface. The detailed surface electronic structures obtained by MIES revealed the influence of the amount of adsorbed MePc molecules on their orientation on the surface. Moreover, we observed desorption and decomposition of MePc molecules at higher substrate annealing temperatures.

II. EXPERIMENTAL

The experimental setup comprised a helium metastable-atom (He*) source, rear-view low-energy electron diffraction (LEED) optics, a retarding-field energy analyzer, a quadrupole mass spectrometer, and MePc evaporators. The base pressure was approximately 1.0 × 10^-7 Pa. Helium atoms were excited to the metastable states by hot-cathode low-voltage discharge. The discharge was pulsed so that the fast photons and the slow He* in the incident beam could be distinguished based on time of flight using a time-resolved detection technique. The raw data obtained by the retarding-field energy analyzer, which are integrated energy distributions, were differentiated numerically to yield MIES and UPS spectra. In MIES, the de-excitation of metastable atoms proceeds through different channels depending on the work function of the surface. At high-work-function surfaces such as clean Si(100) in Fig. 1(a), He* undergoes resonance ionization followed by Auger neutralization (RI+AN). The AN process takes place close to the topmost layers, and the MIES spectrum reflects a convolution of the partial density of states at the surface.
The sample was a P-doped n-type Si(100) substrate, cleaned in vacuum by direct-current Joule heating. A clean Si(100) surface was confirmed by a double-domain (2 × 1) LEED pattern. The sample temperatures were measured with an optical pyrometer (200-1600 °C). Cu or Zn phthalocyanine powder was inserted into the evaporator crucible. This crucible was heated by direct current to deposit MePc molecules at room temperature. We did not measure the amount of adsorbed MePc molecules; instead, we used the deposition time as its index. To determine the coverage or film thickness, observations using analytical techniques are necessary. During sample surface preparation, outgassing products of relatively lightweight atoms and molecules were monitored using the quadrupole mass spectrometer.

A. CuPc/Si(100) surface

We measured the MIES spectra for clean and CuPc-deposited Si(100) surfaces up to a deposition time of 2130 s at room temperature. Figure 1 shows a series of MIES spectra for the CuPc/Si(100) surface. In Fig. 1(a), for the clean Si(100) surface, the peak P1 at 8.4 eV can be assigned to the Si 3p state. The peak at around 11.5 eV in Fig. 1(b3), labeled P2, is due to electron emission from Cu-induced states [7]. Theoretical calculations of the partial density of states of each atom composing the CuPc molecule have been reported by several researchers [3,4]. We referred to these calculations for interpreting the origins of the peak structures. The shoulder S1 at 9-12 eV contains electron emission from each atom (Cu, N, and C) in the CuPc molecule. The intensity of peak P2 decreased with increasing CuPc deposition time. The peak structure S2 at 5-8 eV originated from electron emission induced by the C atoms in CuPc. In the MIES spectrum (b5), S2 made a significant contribution while the intensity of P2 decreased. This result implies that the CuPc adsorption structure changed from flat depending on the CuPc deposition time [8]. Namely, the adsorbed CuPc molecules lay flat on the substrate, keeping the Cu atom on top, at the initial stage of CuPc adsorption. However, at higher coverage, because He* de-excited around the C atoms at the outermost positions in CuPc, electron emission due to C-induced states increased. It is possible that CuPc molecules were tilted upwards on the Si(100) surface.

Figure 2 shows a series of MIES spectra for the CuPc-adsorbed Si(100) surface at different annealing temperatures. When the CuPc/Si(100) surface was annealed at 400-1000 °C, peaks P1 and P2 reappeared slightly with an increase in the annealing temperature. After annealing at 1000 °C for 3 min, the shapes of several peaks were clearly confirmed in the spectrum (c2). As the pyrrole rings and aromatic rings in phthalocyanine were decomposed by annealing, the light elements (N, C, and H) preferentially desorbed from the surface. Therefore, electron emission was derived from residual Cu and C atoms or bare Si atoms. We found that Cu atoms remained on the Si(100) surface after annealing at 1000 °C or below.

B. ZnPc/Si(100) surface

The MIES spectra obtained for the ZnPc-deposited Si(100) surface are shown in Fig. 3. The interpretation of our MIES spectra was derived from previously reported theoretical and experimental results on the surface electronic structure of the ZnPc molecule [5,6,9]. In Fig. 3(a), peak P1 is induced by the Si 3p state. The spectrum (a) for the clean Si surface exhibited peak positions for the Si-induced states identical to those in Fig. 1(a).
The shape of this spectrum in the low-energy region reflects the influence of the secondary electrons. For a ZnPc deposition time of 80 s, peak P2, caused by electron emission from the Zn atom, appeared. As the spectrum (b4) reflects the density of states originating from the Si substrate, ZnPc molecules could not fully cover the surface but lay flat on it, keeping the Zn atom on top. In Fig. 3(b5-b9), the shoulder structure S1 and the peak P3 at around 6.5 eV appeared. S1 contains the electron emission from each atom (N and C) in the ZnPc molecule. Peak P3 originated from the C atoms. The intensity of P3 increased with ZnPc deposition time. These results suggest that the molecular plane of the adsorbed ZnPc stood upright. The behavior of ZnPc on the Si(100) surface was similar to that of CuPc.

After 2000 s of ZnPc deposition, the ZnPc/Si(100) sample (Fig. 3(b9)) was annealed at 300-1000 °C. Figure 4 shows a series of MIES spectra for the ZnPc-adsorbed Si(100) surface at different annealing temperatures. In Fig. 4(c1), the intensity of peak P2, induced by Zn, decreased slightly and a new peak P1', containing electron emission due to Si-induced states, was observed at around 7.8 eV after annealing at 300 °C. This result suggests that the ZnPc molecules started to decompose and the Zn atoms desorbed from the surface at relatively low temperatures. When the ZnPc/Si(100) surface was annealed at 1000 °C for 19 min, the position of peak P1' shifted to a higher energy. The intensity of P1 recovered, and P2 vanished in Fig. 4(c4). This MIES spectrum was roughly similar to that obtained for the clean Si surface, indicating almost complete desorption of ZnPc and its decomposed species from the surface.

IV. CONCLUSIONS

We measured the surface electronic structure of MePc (Me = Cu, Zn) adsorbed on Si(100) using MIES. The MePc adsorption structures depend on the deposition time. At the initial adsorption of MePc, the molecules lay flat on the substrate, keeping the metal atoms on top of the surface. With increasing MePc deposition time, the orientation of the MePc molecules became upright. It was found that even low-temperature annealing triggered the decomposition of CuPc molecules. The Cu atoms left by the decomposition were then directly bonded to the Si subsurface by high-temperature annealing. Therefore, Cu atoms were not completely removed from the surface. In the case of the ZnPc-adsorbed Si(100) surface, ZnPc started to desorb from the surface with low-temperature annealing and finally desorbed at approximately 1000 °C together with the other decomposed elements.
2,084.6
2016-05-07T00:00:00.000
[ "Physics", "Chemistry" ]
The Impact of Herding on the Risk Pricing in the Egyptian Stock Exchange

We test the impact of herding behaviour on the risk pricing in the Egyptian Stock Exchange (EGX) by adding an additional risk factor reflecting herding behaviour to the Fama and French three-factor model. We construct a portfolio to mimic an additional risk factor related to herding behaviour, in addition to the original risk factors in the Fama and French three-factor model. The three-factor model will be tested in its original form and re-tested after adding the herding behaviour factor. The study is based on the Hwang and Salmon methodology, in which the state-space approach based on the Kalman filter is used to measure herding behaviour. We used monthly excess stock returns of 50 stocks listed on the EGX from January 2014 to December 2018. The results do not support the Fama and French model before or after adding the herding behaviour factor; therefore, there is no effect of herding behaviour on the risk pricing in the Egyptian Stock Exchange.

Introduction

In the 1970s, there was a widespread belief that financial markets are efficient, investors are rational and stock prices quickly adapt to new information and reflect all available information. The concept of the Efficient Market Hypothesis presented by Fama (1970) attracted the attention of many scientists and researchers in the financial sciences, in an attempt to either support or challenge the Efficient Market principles (Fama and MacBeth 1973; Black, Jensen, and Scholes 1972; Jensen 1978). In the 1970s and 1980s, the term behavioural finance began to appear as an application of behavioural economics in financial markets; it became an alternative to classical theory. The concepts of cognitive psychology were used to explain the behaviour of investors in financial markets. Many empirical studies have shown that market transactions often manifest clear anomalies and that investors make unreasonable decisions, which may lead to inaccurate asset pricing, in great contrast to traditional theories claiming the absolute rationality of investors (Rozeff and Kinney 1976; French 1980; Shiller 1980; Banz 1981; De Bondt and Thaler 1985; Basu 1983; Shefrin and Statman 1985; Ariel 1987; Lakonishok and Smidt 1988).

The field of financial modelling is one of the most important fields in the theory of modern finance, which examines the relationship between return and risk. The Capital Asset Pricing Model (CAPM), presented by Sharpe (1964) and Lintner (1965) and developed on the basis of the portfolio theory of Harry Markowitz (1952), is the first capital asset pricing model to explain the relationship between risk and return. However, some studies have called the CAPM asset pricing model invalid, which has led researchers in the financial sciences to develop upgraded versions of the CAPM model (Black 1972; Ross 1976). In 1992, Fama and French studied the ability of each of the market beta coefficient, the size of the firm, the book-to-market equity ratio, leverage and the earnings/price ratio (E/P) to explain the expected return. The results indicated no relationship between the market beta factor and stock returns, but there was a strong relationship between size, the B/M ratio and stock returns: a strong significant positive relationship between B/M value and stock returns and a negative relation between size and stock returns.
Based on their previous results, Fama and French (1993) concluded that the variation in stock returns could be explained by the market beta coefficient, the size of the firm, and the book-to-market equity ratio. Accordingly, Fama and French constructed the three-factor model, which is used for explaining the variation in stock returns by employing these three factors as the explanatory variables. Research in the field of financial modelling and risk pricing is still ongoing, especially after the major developments in the financial sciences, the most important of which is the emergence of behavioural finance. In this study, we address the field of behavioural finance as one of the most important fields of modern finance, where investment behaviour has become one of the most important factors to take into account when discussing topics of finance. It has a great impact on the pricing of capital assets. We will try to identify the impact of herding behaviour - as a behavioural finance factor - on the risk pricing in the Egyptian Stock Exchange (EGX) by adding an additional risk factor reflecting herding behaviour (hmt) to the three-factor model of Fama and French, where the Fama and French three-factor model (1993) will be tested in its original form and re-tested after adding the additional risk factor from January 2014 to December 2018, to determine the effect of herding on the risk pricing in the Egyptian stock exchange.

The study is organized as follows. In section 2 we provide theoretical and empirical literature on herding behaviour and the Fama and French three-factor model (1993). Section 3 presents the data and methodology. Section 4 presents empirical results, and finally, we provide conclusions in a separate chapter.

The Capital Asset Pricing Model (CAPM), developed by Sharpe (1964) and Lintner (1965), is based on the work of Harry Markowitz (1959), who developed the "mean-variance model". The CAPM pointed out that there is a positive linear relationship between the expected risk of an asset and the expected rate of return. The only measure of risk is the systematic risk, which is measured through beta. However, the assumption of the CAPM that only systematic risk factors explain the expected return has led many researchers to criticize the model in an attempt to provide a model that is better able to explain the expected return. In their study of the non-financial stocks listed on the NYSE, NASDAQ, and AMEX from 1963 to 1990, Fama and French (1992) examined the ability of the beta coefficient, book-to-market equity ratio, size of the firm, earnings/price ratio (E/P), and leverage to predict stock returns. They concluded that there was no relationship between the market beta factor and stock returns; they also found that small stocks and stocks with high book-to-market equity ratios (value stocks) have high returns compared to big stocks and stocks with low book-to-market equity ratios (growth stocks).
Fama and French (1993) examined the relation between expected excess returns and the market premium, as well as the size of the firm measured by market capitalization, which is captured by the average return on the portfolios of small-market-capitalization stocks minus the average return on the portfolios of big-market-capitalization stocks, and the value of the firm measured by the book-to-market equity ratio, which is captured by the average excess return on a portfolio of high book-to-market stocks minus the average excess return on a portfolio of low book-to-market stocks. They expanded the study to include U.S. government and corporate bonds in addition to stocks. They concluded that portfolios created based on the market factor, size and book-to-market equity have important effects on stock returns, and that the Fama and French three-factor model (1993) is successful in explaining the cross-section of average returns on U.S. stocks. Their model can be written as:

Rit − Rf = αi + bi [Rmt − Rf] + si [SMB] + hi [HML] + εit

Where:
- Rit is the expected return on asset i at time t;
- Rf is the risk-free rate;
- bi, si, hi are the coefficients (betas) of the three independent variables Rmt − Rf, SMB and HML;
- Rmt − Rf is the expected excess return of the market portfolio at period t;
- HML is the expected return on the book-to-market value (BE/ME) factor (a proxy for firm value);
- SMB is the expected return of the size factor (a proxy for firm size).

Empirical Literature

Canbaş and Arioğlu (2008) concluded that the Fama and French three-factor model (1993) is weak in explaining the cross-sectional differences in average returns. Sobt (2016) examined the performance of the CAPM and the Fama and French three-factor model (1993) in their ability to explain the cross-sectional differences in average returns in the Indian stock market, for a sample of 298 stocks listed in the S&P CNX 500 index from October 2005 to March 2015. The study found that the systematic risk factor (beta) is not statistically significant. The results also showed that the explanatory power of the CAPM is very weak, which indicates that factors other than the market factor explain the cross-sectional differences in average returns. The study concluded that there was a significant improvement in the value of the R² coefficient when using the Fama and French three-factor model (1993). Wang (2018) examined the ability of the Fama and French three-factor model (1993) to explain the cross-section of expected stock returns in the Taiwan stock market using monthly data from July 1982 to December 2012. He concluded that the R² for the six portfolios ranged from 93% to 97%, which indicates a great ability to explain the cross-section of expected stock returns.

Theoretical Literature

Bikhchandani and Sharma (2001) defined herd behaviour as deliberate or inadvertent reproduction of the behaviour of other investors. Classical theory does not adequately address this aspect, as it supports the independence of investor decisions and assumes rational behaviour of investors. According to Hwang and Salmon (2004), herd behaviour arises when investors decide to mimic others' decisions in the market rather than follow their own beliefs and information. This means that the return on individual investments will move in the same direction as the market portfolio, which makes returns on individual investments very close to market returns, resulting in a lower degree of deviation from these returns. Many researchers have measured herd behaviour within financial markets.
Christie and Huang (1995) measured herd behaviour in the market by observing how individual stock returns move against the return on the market portfolio. They proposed the Cross-Sectional Standard Deviation of returns (CSSD) as a measure of dispersion to detect herd behaviour in the market. Herd behaviour means that the returns on individual stocks approach the return on the market. The study assumed that investors neglect their beliefs and make investment decisions according to market consensus. The study applied the return-dispersion method to daily and monthly data of the New York Stock Exchange (NYSE) and AMEX from 1962 to 1988. This method failed to detect herd behaviour. Chang et al. (2000) extended the work of Christie and Huang (1995) and developed a new method based on the Cross-Sectional Absolute Deviation of returns (CSAD) in a nonlinear regression to examine the relation between the level of stock return dispersion and the market return. The study examined herd behaviour in some international stock markets and found that herd behaviour was not present in developed markets such as the US and Hong Kong, but the results supported herd behaviour in emerging markets such as South Korea and Taiwan. Hwang and Salmon (2004) developed a different approach in their study of the US and South Korean markets. Their model is based on the cross-sectional standard deviation of the betas to test herding in the UK, US, and South Korean stock markets. When investors have a behavioural bias and their decisions are not rational, their assessment of the relationship between return on assets and risk is distorted. Thus, if herd behaviour is present among investors in the market, the returns on all investments move in the same direction as the market portfolio, so CAPM betas will deviate from their equilibrium values. They found herding behaviour in the stock market under normal market conditions rather than under market stress.

Empirical Literature

Demirer et al. (2010) examined herd behaviour using the Hwang and Salmon (2004) method. The study found that herd behaviour is statistically significant in the Istanbul Stock Exchange. The study also found that the behaviour is due to the emotions and feelings of investors and not due to market conditions such as fluctuations in stock returns and fluctuations in market returns. Messis and Zapranis (2014) investigated the presence of herd behaviour in the Athens Stock Exchange from February 1995 to April 2010. Herd behaviour was measured using the Hwang and Salmon (2004) model. The results showed that herd behaviour was found during different periods of the study and that stocks manifested high levels of herd behaviour and high levels of volatility. The study also considered herd behaviour as an additional risk factor. Güvercin (2016) investigated the presence of herd behaviour in both the Egyptian and Saudi stock markets. The study also aimed to assess the impact of regional and global shocks (such as the mortgage crisis, the political volatility that occurred on July 3, 2013, oil price fluctuations and the civil war) on herd behaviour in these markets. Herd behaviour was measured using the Hwang and Salmon (2004) method. The study found that herd behaviour only exists within the Egyptian stock market, whereas there is no herd behaviour among investors in the Saudi stock market.
The results also showed that the mortgage crisis and the political fluctuations that occurred on July 3, 2013, had a significant effect on herd behaviour, while oil revenues, oil price fluctuations and the Syrian conflict did not affect herd behaviour. Metwally et al. (2016) investigated the presence of herd behaviour in the Egyptian stock market in the case of market uptrends and downtrends. The data used in their study consisted of daily closing prices, market returns, and interest rates as a risk-free rate of return, from January 2007 to December 2012. The study found that herd behaviour exists in the Egyptian stock market, where the results indicate that the returns of individual stocks have a low dispersion from market returns during the period of study. The study also found that herd behaviour is stronger during periods of declining market returns (downtrend), while no evidence was found of herd behaviour during periods of market volatility. The study also pointed to the absence of any evidence of herd behaviour among investors in the uptrend case. Among the conditions applied in selecting our sample, the book-to-market ratio should be positive (Mertzanis and Allam, 2018). By applying these conditions, we got a sample of 50 stocks listed on the EGX100 index in the Egyptian Stock Exchange.

Methodology

To mimic the common risk factors of size and book-to-market equity, we used the Fama and French (1993) approach to construct six portfolios sorted according to market capitalization and book-to-market equity. Fama and French form size and book-to-market equity portfolios to describe the cross-sectional variation in the average stock rate of return. The state-space model used by Hwang and Salmon (2004) was used to measure herd behaviour.

Herding Behaviour Measurement

At first, we obtained 60 monthly beta estimates for each stock. We estimated the beta coefficients using the OLS method, based on the daily observations of each month, by regressing the daily excess stock returns on the daily excess market returns, where the regression for month t uses the daily data within that month. Individual stock returns are calculated from the closing prices, where P_it is the closing price of stock i at time t.

To test herding behaviour, the study employs the Hwang and Salmon (2004) method from their study on the US and South Korean markets, which is based on the cross-sectional volatility of the beta coefficients. The Hwang and Salmon (2004) method is based on the relationship between the equilibrium beta (β_imt) and its behaviourally biased equivalent (β^b_imt):

β^b_imt = E^b_t(r_it) / E_t(r_mt) = β_imt − h_mt (β_imt − 1),

where E^b_t(r_it) is the biased expected excess return on asset i at time t, β_imt is the measure of systematic risk, E_t(r_mt) is the conditional expectation of the market excess return at time t, and h_mt is the latent herding behaviour parameter, changing over time. When h_mt = 0, there is no herding behaviour. When h_mt = 1, there is perfect herding behaviour towards the market portfolio, meaning that all the individual stocks move following the market portfolio movements. When 0 < h_mt < 1, herding behaviour is present to a degree determined by h_mt.

For measuring herding behaviour on a market-wide basis, the cross-sectional variation of the biased betas is calculated; it satisfies Std_c(β^b_imt) = Std_c(β_imt) |1 − h_mt| (4). Taking logarithms on both sides of equation (4), the herding component enters additively (5) and is denoted H_mt = log(1 − h_mt) (8). Hwang and Salmon (2004) suggested that H_mt follows an AR(1) process, and the system is estimated using the Kalman filter:

log[Std_c(β^b_imt)] = μ_m + H_mt + υ_mt,
H_mt = φ_m H_{m,t−1} + η_mt,

where υ_mt ~ iid N(0, σ²_mυ) and η_mt ~ iid N(0, σ²_mη). The log[Std_c(β^b_imt)] is expected to change with herding.
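The first stage of this procedure can be sketched as follows (our own illustration; the data layout and column names are assumptions): OLS betas are estimated month by month from daily excess returns, and the logarithm of their cross-sectional standard deviation forms the observed series that enters the state-space model.

```python
import numpy as np
import pandas as pd

def monthly_betas(daily: pd.DataFrame) -> pd.DataFrame:
    """OLS beta of each stock in each month, from daily excess returns.

    Assumed layout: a DatetimeIndex, a column 'mkt' holding the daily excess
    market return, and one column of daily excess returns per stock.
    """
    stocks = [c for c in daily.columns if c != "mkt"]
    out = {}
    for month, block in daily.groupby(pd.Grouper(freq="M")):
        if len(block) < 5:                      # skip months with too few observations
            continue
        x = np.column_stack([np.ones(len(block)), block["mkt"].to_numpy()])
        coefs = {}
        for s in stocks:
            coef, *_ = np.linalg.lstsq(x, block[s].to_numpy(), rcond=None)
            coefs[s] = coef[1]                  # slope on the market factor
        out[month] = coefs
    return pd.DataFrame(out).T

def log_cross_sectional_std(betas: pd.DataFrame) -> pd.Series:
    """Observed series log[Std_c(beta_t)] that feeds the state-space model."""
    return np.log(betas.std(axis=1, ddof=1))

# Usage with a hypothetical data frame of daily excess returns:
# obs = log_cross_sectional_std(monthly_betas(daily_excess_returns))
```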
A significant value of the variance of the error term (σ²_mη) indicates the existence of herding behaviour, and a significant persistence parameter (φ_m) supports this observation. Furthermore, φ_m must satisfy the stationarity condition |φ_m| ≤ 1. The cross-sectional standard deviation of the betas for each month is calculated by the following equation:

Std_c(β^b_imt) = [ (1/N) Σ_{i=1}^{N} (β^b_imt − β^b_mt,avg)² ]^{1/2},

where β^b_mt,avg is the cross-sectional average of the betas in month t and N is the number of stocks.

Portfolios Construction

To construct the SMB (Small minus Big) and HML (High minus Low) factors, the Fama and French (1993) methodology was used, where all stocks in the sample were ranked based on market capitalization in June of each year t. The stocks are then sorted into two portfolios, Big (B) and Small (S), using the sample median as the 50% split point, where the highest 50% of stocks are big and the lowest 50% are small. The sample is also ranked by book-to-market equity ratio, where the stocks are divided into three portfolios according to the book-to-market equity ratio. The first portfolio, 30% of the whole sample, has the highest book-to-market equity ratios (High: H group). The second portfolio, 40% of the whole sample, has medium book-to-market equity ratios (Medium: M group); and the third portfolio, 30% of the whole sample, has the lowest book-to-market equity ratios (Low: L group). Based on the intersection of the two size and three BE/ME groups, we constructed six portfolios (BL, BM, BH, SL, SM, SH), where:
-SH is the portfolio with small-cap and high book-to-market stocks;
-SM is the portfolio with small-cap and medium book-to-market stocks;
-SL is the portfolio with small-cap and low book-to-market stocks;
-BH is the portfolio with big-cap and high book-to-market stocks;
-BM is the portfolio with big-cap and medium book-to-market stocks;
-BL is the portfolio with big-cap and low book-to-market stocks.

SMB (small minus big) is the difference between the returns on the small-cap stock portfolios and the big-cap stock portfolios, and is calculated by the following equation:

SMB = R(SL+SM+SH) − R(BL+BM+BH),

where R(SL+SM+SH) is the expected return on the (SL+SM+SH) portfolios, and R(BL+BM+BH) is the expected return on the (BL+BM+BH) portfolios. HML (high minus low) is the difference between the returns on the high (BE/ME) stock portfolios and the low (BE/ME) stock portfolios, and is calculated by the following equation:

HML = R(SH+BH) − R(SL+BL),

where R(SH+BH) is the expected return on the (SH+BH) portfolios, and R(SL+BL) is the expected return on the (SL+BL) portfolios. To construct the herding factor (hmt), we construct a portfolio given by the difference between the returns on the portfolio in which the herd behaviour is statistically significant (Rhmt1) and the returns on the portfolio in which the herd behaviour is not statistically significant (Rhmt0), as follows:

hmt = Rhmt1 − Rhmt0.
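A compact sketch of the factor construction is given below (our own illustration; the column names, the equal weighting within groups, and the monthly layout of the portfolio returns are assumptions):

```python
import pandas as pd

def construct_factors(port_returns: pd.DataFrame,
                      r_herd: pd.Series, r_noherd: pd.Series) -> pd.DataFrame:
    """Monthly SMB, HML and hmt factor returns.

    Assumed layout: `port_returns` holds the monthly returns of the six
    size/BE-ME portfolios in columns 'SL', 'SM', 'SH', 'BL', 'BM', 'BH';
    `r_herd` and `r_noherd` are the monthly returns of the portfolios with
    and without statistically significant herding.
    """
    smb = (port_returns[["SL", "SM", "SH"]].mean(axis=1)
           - port_returns[["BL", "BM", "BH"]].mean(axis=1))
    hml = (port_returns[["SH", "BH"]].mean(axis=1)
           - port_returns[["SL", "BL"]].mean(axis=1))
    hmt = r_herd - r_noherd
    return pd.DataFrame({"SMB": smb, "HML": hml, "hmt": hmt})
```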
The Model

Fama and French (1993) developed the three-factor model to describe the relation between expected excess returns [Rit − Rf] and the excess market return (Rmt − Rf), with the model including two additional factors related to the value risk factor (HML) and the size risk factor (SMB). We have added an additional risk factor to the three-factor model, called the herding factor (hmt). To estimate the model parameters, the two-pass cross-sectional regression was used.

The first step is the time-series regression of the excess returns of the sample stocks on the excess market return, HML, SMB and hmt, using the following model:

Rit − Rf = αi + bi [Rmt − Rf] + si [SMB] + hi [HML] + ßi [hmt] + εit

Where:
- Rit is the expected return on stock i at time t;
- Rf is the risk-free rate;
- Rmt − Rf is the expected excess return of the market portfolio at time t;
- SMB is the expected return of the size factor (a proxy for company size);
- HML is the expected return on the book-to-market value factor (a proxy for company value);
- hmt is the expected return on the herding factor (a proxy for herding behaviour);
- bi, si, hi, ßi are the coefficients (betas) of the independent variables;
- αi and εit are the intercept and the error term, respectively.

The second step is to run the cross-sectional regression:

r̄i = λ0 + λ1 b̂i + λ2 ŝi + λ3 ĥi + λ4 ß̂i + ui

Where:
- r̄i is the average excess return for stock i over our full sample period;
- λ0, λ1, λ2, λ3 and λ4 are the parameters to be estimated;
- b̂i is the estimated coefficient of the expected excess return of the market portfolio;
- ŝi is the estimated coefficient of the size factor (SMB);
- ĥi is the estimated coefficient of the value factor (HML);
- ß̂i is the estimated coefficient of the herding factor (hmt).

Testing Herd Behaviour

We divided the sample into two portfolios and tested the herd behaviour for each of them; Table 1 and Table 2 show the statistical tests for the two portfolios. The first coefficient, c(1), is insignificant at the 5% level. Coefficient c(2) corresponds to the error term of equation (1); the error term υmt was written in exponential form in the state-space model to avoid negative values. Coefficients c(3) and c(4) represent the persistence parameter (φm) and the standard deviation (σmη) of the state-equation error (ηmt), respectively. Both of them are statistically significant at the 5% significance level, which confirms the presence of herd behaviour. Table 2 shows the herding state-space model for 32 stocks listed on the EGX from January 2014 to December 2018. The results show that c(3) and c(4), representing the persistence parameter (φm) and the standard deviation (σmη) of the state-equation error (ηmt), respectively, are both statistically insignificant at the 5% significance level, which confirms the absence of herd behaviour.

Table 3 presents the descriptive statistics of the excess stock return, the excess market portfolio return, the size factor (SMB), and the value factor (HML), where (Ri − Rf) is the excess stock return; (Rm − Rf) is the market portfolio excess return; SMB is the difference in returns between the portfolio consisting of small stocks and the portfolio consisting of big stocks per month; and HML is the excess return of stocks with a high BE/ME ratio compared to stocks with a low BE/ME ratio per month. The results in Table 3 show that the mean return of the SMB factor is equal to (−0.01). This indicates that the return on the portfolio consisting of big stocks outperforms the return on the portfolio consisting of small stocks. Also, the mean return of the HML factor is equal to zero; this indicates that there is no difference between the return on the portfolio consisting of stocks with high BE/ME ratios and the portfolio consisting of stocks with low BE/ME ratios. This result conflicts with the three-factor model, which states that stocks with high book-to-market equity ratios (value stocks) have high returns compared to stocks with low book-to-market equity ratios (growth stocks) and that small stocks have high returns compared to big stocks.
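The two-pass procedure itself can be sketched as follows (our own illustration with assumed inputs): a time-series regression per stock yields the factor loadings, and a single cross-sectional regression of average excess returns on those loadings estimates the risk premia λ.

```python
import numpy as np

def two_pass(excess_returns: np.ndarray, factors: np.ndarray) -> np.ndarray:
    """Two-pass cross-sectional regression.

    excess_returns: T x N matrix of monthly excess stock returns.
    factors:        T x K matrix of factor returns (e.g. MKT-RF, SMB, HML, hmt).
    Returns the estimated premia (lambda_0, lambda_1, ..., lambda_K).
    """
    T, N = excess_returns.shape
    X = np.column_stack([np.ones(T), factors])             # intercept + factors
    # Pass 1: time-series regressions, one per stock, giving the loadings.
    coefs, *_ = np.linalg.lstsq(X, excess_returns, rcond=None)
    loadings = coefs[1:].T                                 # N x K matrix of loadings
    # Pass 2: cross-section of average excess returns on the loadings.
    r_bar = excess_returns.mean(axis=0)
    Z = np.column_stack([np.ones(N), loadings])
    lambdas, *_ = np.linalg.lstsq(Z, r_bar, rcond=None)
    return lambdas

# Hypothetical usage:
# lambdas = two_pass(R_excess, np.column_stack([mkt_rf, smb, hml, hmt]))
```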
The results also show that the mean returns of (Ri − Rf) and (Rm − Rf) are equal to (−0.08), which may be due to the political fluctuations the country witnessed during the study period, which led to successive losses for the stock market. Finally, the mean return of the herding factor (hmt) is equal to (0.00502), indicating a slight superiority of the portfolio in which the herding behaviour is statistically significant compared to the portfolio in which the herd behaviour is not statistically significant.

The results show that the Adjusted R Square is equal to 0.1%, which indicates that the explanatory power of the risk coefficients in the three-factor model is very weak. The intercept is significant and negative; if the intercept is negative, the returns on the assets are lower than they should be given their risk level; therefore, there is a pricing error in the specification of the model, where the intercept should be equal to zero. The slope of the market premium (beta) is not significant and is positive, with a t-statistic equal to (0.889), so the market risk premium is not a determinant of the required rate of return for stocks. The SMB coefficient is not significant and is equal to zero, with a t-statistic equal to (−1.387), which provides evidence of the absence of the small-firm effect. Moreover, the HML coefficient is not significant and is equal to zero, with a t-statistic equal to (0.297), which confirms that the book-to-market ratio effect does not exist in the market. The results indicate that the Fama and French three-factor model cannot explain excess stock returns in the EGX. These results contradict the findings of Shaker and Elgiziry (2014).

Table 5 shows the regression results after adding an additional risk factor reflecting herding behaviour (hmt) to the three-factor model of Fama and French from January 2014 to December 2018 in the EGX. The results show that the Adjusted R Square is equal to 0.005%, which indicates that the explanatory power of the risk coefficients in the model is very weak. The intercept is significant and negative, whereas it should be equal to zero. The slope of the market premium (beta) is not significant and is positive, with a t-statistic equal to (0.999), so the market risk premium is not a determinant of the required rate of return for stocks. The SMB coefficient is not significant and is equal to zero, with a t-statistic equal to (−1.329), which provides evidence of the absence of the small-firm effect. Moreover, the HML coefficient is not significant and is equal to zero, with a t-statistic equal to (0.115), which confirms that the book-to-market ratio effect does not exist in the market. The (hmt) coefficient is not significant and is negative, with a t-statistic equal to (−0.841); therefore, the results do not support the Fama and French model after adding the herding behaviour factor, i.e., there is no effect of herding behaviour on the risk pricing in the Egyptian Stock Exchange. Also, the results are not consistent with the findings of Messis and Zapranis (2014), in which they concluded that herding behaviour can be regarded as an additional risk factor.

Conclusions

The study examined the impact of herding behaviour on the risk pricing in the Egyptian Stock Exchange by adding an additional risk factor reflecting herding behaviour to the Fama and French three-factor model, using a sample of 50 stocks listed on the EGX from January 2014 to December 2018.
Conclusions The study examined the impact of herding behaviour on risk pricing in the Egyptian Stock Exchange by adding an additional risk factor reflecting herding behaviour to the Fama and French three-factor model, using a sample of 50 stocks listed on the EGX from January 2014 to December 2018. First, we investigated the validity of the Fama and French three-factor model in its original form, using monthly data for a sample of 50 firms listed on the Egyptian Stock Exchange from January 2014 to December 2018. The study used the same methodology as Fama and French (1993) to construct six portfolios (SL, SM, SH, BL, BM, BH) based on the intersection of the two size and three BE/ME portfolios. We then re-tested the Fama and French three-factor model after adding the additional risk factor, in order to detect the effect of herding on risk pricing in the EGX. To construct the herding factor (hmt), we formed a portfolio given by the difference between the returns on the portfolio in which herd behaviour is statistically significant (Rhmt1), which consisted of 18 stocks, and the returns on the portfolio in which herd behaviour is not statistically significant (Rhmt0), which consisted of 32 stocks. The study used the methodology developed by Hwang and Salmon (2004), who used a state-space model estimated with the Kalman filter to measure herd behaviour. Based on our statistical results, this study found that the Fama and French three-factor model cannot explain excess stock returns either in its original form or after adding the herd behaviour factor as an additional risk factor; i.e., the beta, HML and SMB factors are not appropriate for evaluating the relationship between risk and return, and there is no effect of herding behaviour on expected returns in the Egyptian Stock Exchange. The political and economic fluctuations that Egypt witnessed are perhaps the most plausible explanation for why the Fama and French (1993) three-factor model does not show statistical significance, either in its original form or after adding the herd behaviour factor: these fluctuations began with the January revolution in 2011, which brought a change of the political system in Egypt, and extended over most of the study period. They are considered one of the most important factors that negatively affected the performance of the Egyptian stock market.
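To make the factor construction summarized in the conclusions concrete, the sketch below forms SMB and HML from the six size/BE-ME portfolios and hmt as the return spread between the herding and non-herding portfolios. The portfolio return columns (SL, SM, SH, BL, BM, BH, Rhmt1, Rhmt0) mirror the paper's labels, but the numbers are simulated placeholders; the actual classification of stocks would come from the state-space tests described above.

```python
# Illustrative construction of SMB, HML and hmt from monthly portfolio returns.
# The return values are simulated placeholders, not the study's data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
months = pd.period_range("2014-01", "2018-12", freq="M")
cols = ["SL", "SM", "SH", "BL", "BM", "BH", "Rhmt1", "Rhmt0"]
port = pd.DataFrame(rng.normal(0.01, 0.05, size=(len(months), len(cols))),
                    index=months, columns=cols)

# SMB: average of the three small portfolios minus the three big portfolios
smb = port[["SL", "SM", "SH"]].mean(axis=1) - port[["BL", "BM", "BH"]].mean(axis=1)

# HML: average of the two high-BE/ME portfolios minus the two low-BE/ME ones
hml = port[["SH", "BH"]].mean(axis=1) - port[["SL", "BL"]].mean(axis=1)

# hmt: herding portfolio (significant herding) minus non-herding portfolio
hmt = port["Rhmt1"] - port["Rhmt0"]

factors = pd.DataFrame({"SMB": smb, "HML": hml, "hmt": hmt})
print(factors.describe().round(4))   # compare with the descriptive statistics table
```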
6,613.6
2020-12-31T00:00:00.000
[ "Economics" ]
Natural Convection in a Rotating Nanofluid Layer In this paper, we study the effect of rotation on the thermal instability in a horizontal layer of a Newtonian nanofluid. The nanofluid layer incorporates the effect of Brownian motion along with thermophoresis. The linear stability based on the normal mode technique has been investigated. We observe that the value of the Rayleigh number can be increased by a substantial amount on considering a bottom-heavy suspension of nanoparticles. The effect of various parameters on the Rayleigh number has been presented graphically. Introduction: Nanofluids are engineered colloidal suspensions of nanometer-sized (1-100 nm) particles in ordinary heat transfer liquids. The common heat transfer fluids, known as base fluids, include water, ethylene glycol and engine oils, to name a few, while the nanoparticles used include metallic or metallic oxide particles (Cu, CuO, Al2O3), carbon nanotubes, etc. The first scientist to use the term "nanofluids" was Choi [1] in the year 1995 while working at the A.N.L., USA. He was working on improved heat transfer media to be used in industries like power manufacturing, transportation, electronics, air conditioning, etc. Prior to the development of technology for the manufacture of nanometer-sized particles, micrometer-sized particles were used in ordinary heat transfer fluids to enhance their thermal properties. The possibility of their usage was suggested by Maxwell [2] more than a century ago. But the use of these posed problems such as settling, drastic pressure drops, clogging of channels, and premature wear on channels and components. These difficulties are overcome by the usage of nanoparticles. The smaller particles provide a much larger relative surface area than micro-sized particles, improving the heat transfer properties. The superior properties of nanofluids over the base fluids, such as reduced pumping power due to enhanced heat transfer, minimal clogging, and the innovation of miniaturized systems leading to savings of energy and cost, made Choi [3] regard nanofluids as the next-generation heat transfer fluids. In the past one and a half decades, many researchers have shown interest in studying the enhanced heat transfer characteristics of nanofluids. These include Masuda et al. [4], Eastman et al. [5], Das et al. [6], Xie et al. [7-10], Wang et al. [11], and Patel et al. [12]. They used nanoparticles of copper, silver, gold, copper oxide, alumina and SiC in base fluids such as water, ethylene glycol and toluene. The nanoparticle concentrations used ranged from 0.11 vol.% to 4.3 vol.%, and the thermal conductivity enhancements observed ranged from 10% to 40%. These were promising data obtained by these workers. There were also studies conducted to account for the unusual behavior, with Eastman [13] claiming that further studies are needed to account for the observed phenomenon. Buongiorno [14] conducted an extensive study to account for the unusual behavior of nanofluids, focusing on inertia, Brownian diffusion, thermophoresis, diffusiophoresis, Magnus effects, fluid drainage and gravity settling, and proposed a model incorporating the effects of Brownian diffusion and thermophoresis. With the help of these equations, studies were conducted by Tzou [15,16], Kim et al. [17-19] and, more recently, by Nield and Kuznetsov [20,21].
Kuznetsov and Nield [22] studied the onset of thermal instability in a porous medium saturated by a nanofluid, using the Brinkman model and incorporating the effects of Brownian motion and thermophoresis of nanoparticles. They concluded that the critical thermal Rayleigh number can be reduced or increased by a substantial amount by the presence of the nanoparticles, depending on whether the basic nanoparticle distribution is top-heavy or bottom-heavy. The corresponding Horton-Rogers-Lapwood problem was investigated by Nield and Kuznetsov [20] for the Darcy model. Agarwal et al. [23] studied thermal instability in a rotating porous layer saturated by a nanofluid for top-heavy and bottom-heavy suspensions considering the Darcy model. Kuznetsov and Nield [24], and Nield and Kuznetsov [21], also studied the effect of local thermal non-equilibrium (LTNE) on the onset of convection in a nanofluid-saturated porous medium and in a nanofluid layer. They found that, in the case of linear non-oscillatory instability, the effect of LTNE can be significant in some circumstances but remains insignificant for a typical dilute nanofluid. From the literature survey it is clear that the following is true about nanofluids: (a) nanoparticles influence the thermal conductivity of base fluids in a positive way; (b) a nanoparticle concentration at the lower boundary may give results different from those for a concentration at the upper boundary; (c) rotation is known to have a stabilizing effect in viscous fluids without nanoparticles. We need to investigate whether the same holds for nanofluids too, and also whether all other parameters behave conventionally. Thus the aim of the present study is to explore the above possibilities in a rotating nanofluid layer. Assuming that the nanoparticles are suspended in the nanofluid using either surfactant or surface-charge technology, preventing their agglomeration and deposition, in the present article we study linear thermal instability in a rotating nanofluid layer, under the classical Rayleigh-Bénard problem. Governing Equations: We consider a nanofluid layer, confined between two free-free horizontal boundaries at z = 0 and z = d, heated from below and cooled from above. The boundaries are perfect conductors of heat and nanoparticle concentration. The nanofluid layer extends infinitely in the x- and y-directions, and the z-axis is taken vertically upward with the origin at the lower boundary. The fluid layer rotates uniformly about the z-axis with uniform angular velocity Ω. The Coriolis effect has been taken into account by including the Coriolis force term in the momentum equation, whereas the centrifugal force term has been considered to be absorbed into the pressure term. In addition, local thermal equilibrium between the fluid and solid has been considered; thus the heat flow has been described using a one-equation model. Th and Tc are the temperatures at the lower and upper walls respectively, such that Th > Tc. Employing the Oberbeck-Boussinesq approximation, the governing equations to study thermal instability in a nanofluid layer are those of Refs. [14-16,20,21]; their standard form is reproduced below for reference. Here v = (u, v, w) is the fluid velocity. In these equations, ρ is the fluid density, (ρc)f and (ρc)p the effective heat capacities of the fluid and particle phases respectively, and kf the effective thermal conductivity of the fluid phase. DB and DT denote the Brownian diffusion coefficient and the thermophoretic diffusion coefficient respectively, p is the pressure, g is the acceleration due to gravity, and µ denotes the viscosity of the fluid. It is assumed that the Brownian motion and thermophoresis processes remain coherent.
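The governing equations themselves did not survive extraction. For reference, the conservation equations for a rotating nanofluid layer under the Oberbeck-Boussinesq approximation, as they are usually written in the Buongiorno-type formulation of Refs. [14,20,21], take the following form; this is a reconstruction of the standard equations and the exact grouping of terms may differ from the paper's.

```latex
\nabla \cdot \mathbf{v} = 0,
\qquad
\rho_f\!\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v}\cdot\nabla\mathbf{v}\right)
  = -\nabla p + \mu \nabla^2 \mathbf{v}
    + \big[\varphi \rho_p + (1-\varphi)\rho_f\{1-\beta(T-T_c)\}\big]\,\mathbf{g}
    - 2\rho_f\,\boldsymbol{\Omega}\times\mathbf{v},
```

```latex
(\rho c)_f\!\left(\frac{\partial T}{\partial t} + \mathbf{v}\cdot\nabla T\right)
  = k_f \nabla^2 T
    + (\rho c)_p\!\left[D_B\,\nabla\varphi\cdot\nabla T
    + \frac{D_T}{T_c}\,\nabla T\cdot\nabla T\right],
\qquad
\frac{\partial \varphi}{\partial t} + \mathbf{v}\cdot\nabla\varphi
  = D_B \nabla^2 \varphi + \frac{D_T}{T_c}\,\nabla^2 T .
```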
Assuming the temperature (T) and the volumetric fraction (φ) of the nanoparticles to be constant at the stress-free boundaries, we may assume fixed-value boundary conditions on T and φ, where φ1 is greater than φ0. To non-dimensionalize the variables we introduce the usual scalings; equations (1)-(6) then take their non-dimensional form (after dropping the asterisks). Here Ta is the Taylor number, Pr is the Prandtl number, Le is the Lewis number, Rm is the basic density Rayleigh number, Rn is the concentration Rayleigh number, and NB is the modified particle density increment. Basic Solution At the basic state the nanofluid is assumed to be at rest; therefore the quantities at the basic state vary only in the z-direction and are given by eq. (13). Substituting eq. (13) in eqs. (9) and (10), and employing an order-of-magnitude analysis [22], we arrive at eq. (16). The boundary conditions for solving (16) can be obtained from eqs. (11) and (12) as eqs. (17) and (18). The remaining basic-state solution pb(z) can easily be obtained by substituting Tb in eq. (16) and then integrating eq. (8) for pb. Solving eq. (16), subject to conditions (17) and (18), we obtain the basic-state temperature profile. Stability Analysis We superimpose perturbations on the basic state as listed below. We consider the situation corresponding to two-dimensional rolls for ease of calculation, and take all physical quantities to be independent of y. The reduced dimensionless governing equations, after eliminating the pressure term and introducing the stream function ψ, come out as eqs. (22)-(24). Equations (22)-(24) are solved subject to idealized stress-free, isothermal, iso-nanoconcentration boundary conditions, so that the temperature and nanoparticle concentration perturbations vanish at the boundaries. The choice of these boundary conditions, though not very realistic physically, eases the difficulty of the mathematical calculations without ignoring the physical effects entirely [25,26]. This type of boundary condition is encountered in some situations, such as geothermal regions, where the fluid layer cannot be isolated from the surroundings and fluid inclusion cannot be avoided to its full extent. The critical Rayleigh numbers for the stationary and oscillatory onset of convection and the frequency of oscillations, ω, are obtained as given below, where αc is the critical wave number.
These expressions can be obtained from [20] by dropping the terms pertaining to porous media. It is quite obvious from eq. (28) that oscillatory convection is possible only under a restrictive condition on the parameters. Results and Discussion: Analytical expressions have been obtained for the Rayleigh numbers pertaining to stationary and oscillatory convection. For ordinary fluids (Le = 0 = NA) and the non-rotating case (Ta = 0), the expression for the stationary Rayleigh number reduces to the classical result for all fluids. Thus it is interesting to observe in this case that, to the value of the Rayleigh number for ordinary fluids, we have added a positive term of the form Rn(Le − NA). We can say that this term is positive because the experimentally determined values of Rn are in the range 1-10, those of NA are 1-10, while those of Le are large, of the order 10-10^6. Thus the value of Ra_cr will be higher for nanofluids than for ordinary fluids, implying a delay in the onset of convection. That is to say, more heat is required by nanofluids for convection to start. This behavior may be attributed to the high thermal conductivity of nanofluids, which delays the occurrence of density differences across the fluid layer brought about by heating, thus delaying the onset of convection. This implies that the heat transferred by nanofluids will be greater than that by ordinary fluids, making them ideal heat transfer media. This fact is also well documented by Fig. 1. In Figures 2(a)-(b) and 3(c)-(d), we present the linear stability curves showing the oscillatory and stationary modes of convection. Ra_st and Ra_osc are plotted against the wave number α for Rn = 4, Le = 200, NA = 4, Ta = 50 and Pr = 5. From Figures 2 and 3 we observe that, initially, when α is small, the onset of convection occurs as oscillatory convection. Then, at intermediate values of α, the critical value for the onset of convection is achieved through oscillatory convection. Finally, when α is large, the mode of convection changes to stationary convection. Therefore, it can be said that exchange of stabilities occurs [25]. In all these curves it is to be noted that convection sets in in the oscillatory mode and slowly switches over to the stationary mode. There seems to be a negligible effect of the concentration Rayleigh number Rn, the Lewis number Le and the modified diffusivity ratio NA on the overstable regime, while the damped oscillations seem to be independent of the modified diffusivity ratio NA and the Taylor number Ta as well. Rn and Le have a stabilizing effect on the damped oscillations: an increase in their value increases the critical thermal Rayleigh number, thus stabilizing the system by delaying the onset of convection through the demand for more thermal energy. Conclusions: We considered a linear thermal instability analysis in a horizontal rotating layer of a nanofluid, heated from below and cooled from above, incorporating the effect of Brownian motion along with thermophoresis. Further, a bottom-heavy suspension of nanoparticles has been considered. The effect of various parameters on the onset of thermal instability has been found. We draw the following conclusions: 1. A greater amount of heat is required by nanofluids than by ordinary fluids for convection to start. 2. "Exchange of stabilities" takes place in this case. 3. Rn, Le and Ta have stabilizing effects on the system.
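The delay in onset described above can be made concrete with a short numerical sketch. The stationary Rayleigh number is assumed here to take the standard rotating free-free form ((π² + α²)³ + π²Ta)/α² plus the Rn(Le − NA) correction discussed in the text; this functional form is an assumption based on Ref. [20] with the porous-media terms dropped, since the paper's own expression did not survive extraction.

```python
# Sketch: stationary Rayleigh number vs. wave number for an ordinary fluid
# and for a bottom-heavy nanofluid. The functional form below is an assumed
# reconstruction: classical rotating free-free result plus the Rn*(Le - NA)
# shift discussed in the text; parameter values follow the figures
# (Rn = 4, Le = 200, NA = 4, Ta = 50).
import numpy as np

def ra_stationary(alpha, Ta=0.0, Rn=0.0, Le=0.0, NA=0.0):
    base = ((np.pi**2 + alpha**2) ** 3 + np.pi**2 * Ta) / alpha**2
    return base + Rn * (Le - NA)

alpha = np.linspace(0.5, 8.0, 2000)

ra_ordinary = ra_stationary(alpha)                        # Le = NA = Ta = 0
ra_nano = ra_stationary(alpha, Ta=50, Rn=4, Le=200, NA=4)

i0, i1 = np.argmin(ra_ordinary), np.argmin(ra_nano)
print(f"ordinary fluid : Ra_cr ~ {ra_ordinary[i0]:.1f} at alpha ~ {alpha[i0]:.2f}")
print(f"nanofluid      : Ra_cr ~ {ra_nano[i1]:.1f} at alpha ~ {alpha[i1]:.2f}")
# For Ta = Rn = 0 the minimum reproduces the classical value 27*pi^4/4 ~ 657.5
# at alpha = pi/sqrt(2); the nanofluid curve sits higher, i.e. onset is delayed.
```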
Fig. 1. Comparison of values of the thermal Rayleigh number for nanofluids and ordinary fluids. Fig. 2. Linear stability curves showing oscillatory vs stationary convection for different values of (a) Rn. Fig. 5. Linear stability curves showing oscillatory vs stationary convection for different values of (d) Ta.
2,863
2012-07-01T00:00:00.000
[ "Physics", "Materials Science", "Engineering" ]
Longitudinal fluctuations of Co spin moments and their impact on the Curie temperature of the Heusler alloy Co 2 FeSi The magnetism of the full Heusler alloy Co 2 FeSi with its high magnetic ordering temperature is studied on a first-principles basis employing the disordered local moment approximation, the magnetic force theorem and single-site spin fluctuation theory as formulated recently in the framework of the Local Spin Density Approximation. We find that the magnetic moments of Fe and Co in Co 2 FeSi exhibit a quite distinctive behavior at high temperatures. The Fe moments are well localized and keep their magnitude unchanged with temperature, whereas the Co moments are itinerant and show a temperature dependence. We find that the effects of magnetic disorder strongly renormalize the Fe-Co inter-atomic exchange interactions. Our results suggest a deficiency of the classical Heisenberg model with rigid localized atomic spin moments for the description of magnetism in Co 2 FeSi. An accurate estimation of the Curie temperature is obtained by taking into the account the thermal longitudinal fluctuations the Co moments. Introduction The Heusler alloys have been in the focus of intensive experimental and theoretical research for more than two decades [1,2].A root of such an interest is the possibility of tuning their magnetic and electronic properties by varying the chemical composition in a very wide range keeping the lattice structure fixed.As a result, several of the magnetic Heusler alloys have been developed with physical properties that are highly attractive for applications [1,3,4].One of the most intriguing of these properties is half-metallic behavior that has been predicted for a few of them [5,6].These ferromagnetic half-metallic materials, i.e. the materials that have a band gap only in the minority spin-channel attract huge theoretical [7] and experimental interest during the last few decades [8].The technological prospectus of these materials relies on the possibility of their application as a source of a spin-polarized current in spintronics devices [9].The most studied class of the full Heusler alloys with potentially half-metallic behavior were perhaps [10] Co-based compounds with formula Co 2 XY, where X = Fe, Mn and Y = Al, Si.A very high value of spin polarization in Co 2 MnSi has been found [11] experimentally.Another representative of this class is ferromagnetic Co 2 FeSi.Although experimental evidence of the half-metallic state in this material is still debated [12] it attracts a huge interest [13,14] due to its large magnetization (~6 µ B /f.u.) 
and a very high Curie temperature (1100 K).Thin films, various quasi-quaternary and offstoichiometric alloys based on Co 2 FeSi [15,16] have been widely studied for the potential applications in magnetic tunnel junctions [17,18,19].Thus, it is not surprising that the Co 2 FeSi alloy has attracted a considerable theoretical attention also from first-principles perspectives.Since the very first studies it has become clear [12] that the conventional ab-initio band structure methods based on the Local-Spin Density Approximation (LSDA) have a problem to describe the ground state magnetic properties of this system.The LSDA predicts a significantly smaller magnetic moment with respect to experiment [20] even in the framework of a General Gradient Approximation (GGA).This is a signal for the importance of correlation effects.Indeed, the LDA + U methodology with properly chosen U parameter gives correct magnetic moments and put the Fermi level in the pseudo-gap of the minority spin channel [20,21,22].However, it was also shown that the LDA + U method worsen the spectral properties of the Co 2 FeSi compare to the experiment [23].Meinert et al. [24] have shown that one can solve the problem by considering the correlation effects in the framework of the ab-initio GW approximation. Another problem is an estimation of the magnetic ordering temperature of Co 2 FeSi alloy from first principles.The inter-atomic exchange constants of the Heisenberg Hamiltonian have been calculated using the magnetic force theorem in the framework of Density Functional Theory in several works [25,26,27].In all these investigations, the interactions have been calculated in the reference of the ferromagnetic ground state.It was shown that the dominating interaction appeared to be Co-Fe nearest neighbor one.However, the simulated Curie temperature (T c ) appears to be smaller than in experiment with use of LSDA (750 K in Ref. [26] and 650 K in Ref. [27]) and GGA (800 K) [27] as well.The same also holds for the results of mean-field results presented in Ref. [25] (T c = 1100 K) since the mean-field approximation for the Heisenberg model notoriously overestimates the value for the critical temperature.It has been commonly agreed [26,27] that both problems, the Curie temperature and the underestimation of the ground state magnetic moment in LSDA/GGA, are connected.This proposition became even more obvious when Chico et al. 
[27] have shown that the LDA + U method, with a parameterization that yields the correct value of the ground-state magnetic moment in Co 2 FeSi, also leads to larger Heisenberg exchange constants and thus to a Curie temperature in close agreement with experiment. However, in the present work we will argue that the two above-mentioned problems might be of different origin. The ground-state moment underestimation in LSDA is indeed related to correlation effects. However, the source of the problems in the high-temperature regime is the itinerant character of the Co moments, which change their magnitude with temperature. We also find a strong dependence of the Fe-Co exchange interactions on the state of magnetic disorder. Thus, the exchange interactions calculated in the ferromagnetic ground state are not relevant to the discussion of the Curie temperature. A similar problem with applying exchange constants derived for the ground state to estimate the magnetic ordering temperature has been reported previously for various metals and alloys [28,29]. We will argue here that Co 2 FeSi is a nice example of a metallic system where the application of the conventional Heisenberg model, which assumes fixed local moments on the atomic sites and temperature-independent interactions, is not sufficient for the description of the high-temperature properties. Our paper is organized as follows: after a description of the applied methodology in the next section, in Section III we perform an analysis of the exchange interactions calculated for the ferromagnetic state. We obtain results similar to those of a couple of previous works based on the magnetic force theorem and first-principles calculations, where LSDA and GGA essentially underestimate the magnetization and the Curie temperature. However, we also show there that, by fixing the total magnetization magnitude to the experimental value and calculating the interactions for such an FM state, one can get the same estimate of T c as was derived previously [27] within the LDA + U approximation. In Section IV, we investigate the high-temperature paramagnetic state using the ab-initio framework developed in Refs. [30,31,32]. We find a strong itinerant character of the Co moments and derive an accurate estimate of the Curie temperature by taking into account the temperature-induced longitudinal spin fluctuations. Methodology We use the bulk Korringa-Kohn-Rostoker (KKR) method in the atomic sphere approximation (ASA) [33,34], in the framework of the LSDA [33] and the GGA [34], to calculate the electronic structure of Co 2 FeSi for the experimental lattice geometry [12]. The partial waves have been expanded up to l max = 3 (spdf basis) inside the atomic spheres, which were set equal for all nonequivalent atomic sites.
After the derivation of the self-consistent electronic structure for a selected magnetic configuration (the reference magnetic state), the exchange interaction parameters J_ij of the classical Heisenberg Hamiltonian, H = - Σ_{i≠j} J_ij e_i · e_j (1), where e_i is a directional unit vector of the magnetic spin moment at the i-th lattice site, have been estimated using the first-principles magnetic force theorem (MFT) based on the Green-function formalism [35] implemented in KKR-ASA [36]. A Monte Carlo simulation with the Hamiltonian (1) and the calculated J_ij constants is performed to obtain the corresponding magnetic ordering temperature. Note that in metals the exchange constants may depend on the choice of the reference magnetic state, even if the atomic spin moments are very rigid/localized, since the structure of the electronic bands is very sensitive to the long-range magnetic configuration [37]. In the next section we use the ferromagnetic ground state as a reference, in line with earlier computational investigations of finite-temperature magnetism in Co 2 FeSi. In Section IV we will demonstrate the itinerant character of the Co moment and the necessity of considering the thermally induced longitudinal fluctuations of the cobalt atomic moment. There we deal with the high-temperature paramagnetic state and use the Disordered Local Moment (DLM) state [38] as a reference for the calculation of the inter-atomic exchange interactions. The DLM formalism is used for modelling the directional thermal magnetic disorder above the Curie temperature and its influence on the electronic structure (see details in Refs. [30,39,40]). Since the cobalt moment has a strongly fluctuating longitudinal component in the paramagnetic regime at high temperature (>1000 K), instead of the Hamiltonian (1) we use the extended version allowing for these fluctuations, the Longitudinal Spin Fluctuation (LSF) Hamiltonian [30], H = Σ_i E_i(m_i) - Σ_{i≠j} J_ij(m_i, m_j) e_i · e_j (2), where m_i is the length of the atomic spin moment at the i-th lattice site. The first term is the moment-dependent on-site energy and the second term is a Heisenberg interaction, similar to that in Eq. (1), with exchange constants depending on the size of the atomic moments on neighboring sites. The procedure applied here for estimating the magnetic ordering temperature using Eq. (2) has been described previously in detail [31,32]. The on-site term in Eq. (2) is approximated as the DLM total energy, E_DLM(m), calculated with a fixed atomic spin moment m. The temperature dependence of the atomic moments in the high-temperature paramagnetic state has been calculated as the statistical average <m>_T = ∫ m g(m) exp[-E_DLM(m)/k_B T] dm / ∫ g(m) exp[-E_DLM(m)/k_B T] dm (3), where g(m) is the longitudinal integration measure in classical spin space [32,39]. We determine the ordering temperature as the crossing point between the <m>_T curve calculated from Eq. (3) and T_ord(<m>) calculated by MC simulations with the Hamiltonian (1), where the J_ij are calculated for each fixed value of m. We use the common choice [41,40] of a vector-space measure g(m) = m^2, which we found here to perform better for Co 2 FeSi (as well as for pure hcp Co, see Ref. [41]) than the alternative linear form [32]. A notable difference in the application of the described procedure to Co 2 FeSi with respect to the previous works cited above is the presence of two different magnetic sub-lattices, Fe and Co. In general this would require a full 2D ab-initio mapping of the Hamiltonian (2) with respect to the Fe and Co fluctuating moments and a 2D integration in Eq. (3). To avoid this mapping, further approximations should be used (see, for example, Ref. [42]).
The huge simplification in the present case is due to the fact that the Fe moments are very well localized, so one can consider LSF only on the Co sites, allowing the Fe moments to converge to their self-consistent values. In Section IV we provide a full justification of this simplification. Ferromagnetic ground state and exchange interactions The self-consistent magnetic moments calculated with the LSDA and GGA exchange-correlation potentials for the ferromagnetic (FM) ground state of Co 2 FeSi are given in Table 1. In full agreement with earlier similar calculations [26,27], the total magnetic moments in both cases are smaller than the ideal half-metallic value of 6 μB/f.u. found in experiment at low temperatures. The calculated interatomic exchange interactions (Fig. 1) suggest a dominating first-nearest-neighbor (NN) Co-Fe coupling, with a smaller contribution from the 1NN Co-Co coupling and almost vanishing interactions with the more distant shells. The results of the Monte Carlo simulations with Hamiltonian (1) using the calculated interactions from Fig. 1 yield estimates for the magnetic ordering temperature (see Table 1) similar to those obtained earlier by Chico et al. [27]. The considerable underestimation of T c compared to the experimental value of 1060 K has been ascribed to the underestimation of the magnetic moments [26,27] in LSDA/GGA and the presence of correlation effects. Indeed, it was shown [27] that the use of LDA + U with a proper choice of the U parameter gives a half-metallic state and an improved value of T c. Thus, the following logic might be applied: the underestimation of the correlation effects leads to the underestimation of the magnetic moments and, as a result, to the underestimation of the interatomic Co-Co and Co-Fe exchange interactions in the FM state, and consequently to a too low value of T c in LDA/GGA. In order to verify the statement that the underestimation of the moment is the key issue in evaluating T c, we performed calculations of the exchange interactions for the self-consistently derived LDA/GGA FM state with the Co and Fe atomic moments fixed to the ideal "half-metallic" values. Indeed, the results of the Monte Carlo simulations with the J ij obtained in this way give a T c value (Table 1) very close to the experiment and to the LDA + U result. It thus appears that our results completely confirm the above-mentioned earlier conclusions. However, the main idea of the present work is to challenge this interpretation. Due to the metallicity of the system, the exchange interactions calculated in the magnetic ground state can and must be used for simulations and for the explanation of the magnetic properties at temperatures lower than the magnetic ordering temperature, when the thermal magnetic directional disorder is very small. Normally, the average electronic structure is affected by the magnetic thermal disorder at high temperatures near T c, and thus the exchange interactions are altered. Only in a small number of cases (dominant direct NN exchange, i.e.,
when the electronic structure averaged over the random magnetic configuration is similar to the ground-state one), or by chance, if some random compensation effects occur, might the exchange interactions in the ground state and in the high-temperature paramagnetic state produce nearly the same value of T c. For an itinerant metallic system such cases should be very exceptional. In addition, the longitudinal fluctuations of the magnetic moments in the paramagnetic state of a metallic system (although largely frozen in the magnetically ordered state) bring in another complication. In the next section we show that such an exceptional situation might occur in the Co 2 FeSi compound. High temperature paramagnetic state Among the first to point out the possible importance of longitudinal spin fluctuations in Co 2 FeSi was Kübler [43]. He noted that, in the spin fluctuation approximation (SFA) and for the spherical model, a reasonable value of T c in Co 2 FeSi can be derived from exchange constants estimated from spin-spiral LSDA total energies within the Random Phase Approximation (RPA). Although not many details were given on the LSF magnitude in Co 2 FeSi, a "somewhat surprising" success of the SFA method was noted [43] within the bare LSDA approximation, without any special local treatment of the correlation effects. Here we look at the problem from another perspective. We investigate the high-temperature paramagnetic (PM) state of Co 2 FeSi using the DLM formalism. A self-consistent DLM calculation converges to a DLM state with a magnitude of the spin moment on the Fe site of 2.75 μB/Fe, almost exactly the same as in the FM ground state (2.74 μB/Fe). However, the atomic Co moment vanishes completely in the DLM state. This suggests that cobalt in Co 2 FeSi does not fulfill the Anderson criterion for the formation of local moments in a paramagnetic state [39], and that the formation of spin moments on the cobalt sites in the PM state is entirely due to thermal longitudinal spin fluctuations [41,44,45]. In contrast, the Fe sites develop a well-localized, "robust" moment in the PM state. To illustrate the role of temperature effects in local moment formation, we show the total energies of DLM states calculated with a constrained fixed spin moment (Fig. 2). The results presented in both panels of the figure were calculated by fixing the respective spin moment on the given atomic sites (Co/Fe) while allowing the other magnetic sites (Fe/Co) to converge freely. The energies in the figure are given in Kelvin/atom units to make a connection to the temperature scale of the LSF. One can see that thermal excitations of the order of 1000 K (about the experimental T c) can induce a rather significant thermally averaged spin moment on the Co sites of about ~1 μB, departing strongly from the minimal-energy zero value. In contrast, on the Fe sites thermal fluctuations on the same temperature scale lead only to rather insignificant fluctuations of the moment around its equilibrium value, giving on average a value close to the minimum of the total energy. Table 1 Calculated and experimental total (m tot) and atomic (m Co/Fe) magnetic moments on the Co and Fe sites in the ferromagnetic state, first-nearest-neighbor Co-Fe (J Co-Fe) and Co-Co (J Co-Co) exchange interactions, and Curie temperatures (T c) for Co 2 FeSi. The experimental (exp.) values are taken from Ref. [12].
The three sets of calculated values correspond to the self-consistent LSDA, GGA and fixed-spin-moment (FSM) calculations. In the latter case, the self-consistent ferromagnetic solution was derived by fixing the Co and Fe moments to the ideal values for the half-metallic state. We also add the results of the full-potential LDA + U calculations for the atomic moments from Ref. [13] and their experimental estimate of T c. Note that a direct correspondence between the FSM KKR-ASA and full-potential LDA + U atomic moments is subject to some uncertainty because of the inevitably different choice of the muffin-tin spheres. Thus, using Eq. (3) and the results presented in the upper panel of Fig. 2, we calculate the dependence of <m Co>(T). The intersection of the two curves gives the value of the physical Curie temperature. The visualization and details of this procedure for different transition-metal magnetic materials can be found elsewhere [31,41]. For Co 2 FeSi we thus obtain T c = 1100 K in the GGA and 1070 K in the LDA approximation (within the applied vector-space integration measure). Both results are in fair agreement with the experimental value of 1040 K. Moreover, our LDA result is in full agreement with Kübler's SFA model (1058 K). Thus, two essentially different ways of calculating T c, i) the LSF model presented here or Kübler's SFA model, and ii) a straightforward application of the classical Heisenberg model with exchanges calculated in the FM state, obtained either in correlated LDA + U calculations [27] or simply by fixing the atomic moments to the correct "half-metallic" values (previous section), provide equally good ab-initio values of T c. This result is rather intriguing, since the two approaches rely on completely different physical pictures. To understand this coincidence further, in Fig. 3 we plot the calculated dependence of the interatomic exchange interactions on the value of the Co moment in the DLM state, together with the corresponding values derived in the FM state (see Section III). We show only the two NN interactions, which give the dominating contribution to the ordering temperature. One can see that in the DLM state the interactions are greatly enhanced compared to the FM state at the respective Co moment. The converged Fe moment is essentially the same in both calculations. This enhancement might be understood by taking into account the fact that in the PM state the half-metallic pseudo-gap (or the real gap in a true half-metallic state) in the spectral function vanishes when both spin channels, up and down, become equally populated (see, e.g., Ref. [46]). This provides an additional channel for indirect exchange between the atomic sites. The values of the exchange interactions in the true half-metallic state (stars in the figure) become approximately the same as the exchanges in DLM states with values of the Co moments close to their average values around 1000 K. We illustrate this statement by the dashed lines in Fig. 3. It thus appears that the similar values of T c obtained from the two methods described above are a pure random coincidence, peculiar to the given compound.
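To illustrate the crossing-point construction used above, the sketch below evaluates the thermal average of Eq. (3) with the m² measure for a toy on-site energy curve E_DLM(m) and intersects it with a toy T_ord(m) curve representing the directional (Heisenberg) part. Both input curves are illustrative stand-ins for the calculated data of Fig. 2 and the Monte Carlo results, not the paper's actual values.

```python
# Sketch of the Tc construction: <m>(T) from Eq. (3) with measure g(m) = m^2,
# intersected with an ordering temperature T_ord(m) from the directional part.
# E_dlm(m) and T_ord(m) below are toy curves standing in for the ab-initio data.
import numpy as np

def E_dlm(m):                        # on-site energy in Kelvin; minimum at m = 0,
    return 1500.0 * m**2 - 400.0 * m**3   # mimicking the Co panel of Fig. 2

def thermal_moment(T, m_grid):       # Eq. (3) with g(m) = m^2 on a uniform grid
    w = m_grid**2 * np.exp(-E_dlm(m_grid) / T)
    return np.sum(m_grid * w) / np.sum(w)

def T_ord(m):                        # toy ordering temperature from MC with J_ij(m);
    return 400.0 + 700.0 * m         # grows with the Co moment, as in Fig. 3

m_grid = np.linspace(1e-4, 2.5, 2000)
temps = np.linspace(200.0, 2000.0, 400)
avg_m = np.array([thermal_moment(T, m_grid) for T in temps])

# Curie temperature: temperature where T equals T_ord(<m>(T))
idx = np.argmin(np.abs(temps - T_ord(avg_m)))
print(f"<m>(Tc) ~ {avg_m[idx]:.2f} mu_B, Tc ~ {temps[idx]:.0f} K (toy numbers)")
```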
Conclusions The main conclusion of the given work is that, for an itinerant-electron magnetic system, a simple comparison of the calculated magnetic ordering temperatures with experiment cannot be used as a judgment of the validity of the proposed ab-initio model. The exchange interactions calculated in the magnetically ordered state and the application of the straightforward Heisenberg model might give an excellent result purely by coincidence. However, considering just one particular system, it might be difficult to resolve the issue. The Heusler alloy Co 2 FeSi is such a peculiar example. Here we have shown that the itinerant character of the Co moments in Co 2 FeSi and the dominant role of the longitudinal spin fluctuations, together with an enhancement of the inter-atomic exchange interactions in the magnetically disordered PM state, are required for the formation of the very high Curie temperature. Declaration of Competing Interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Fig. 1. Calculated interatomic exchange interactions in the ferromagnetic ground state of Co 2 FeSi. The exchanges in the half-metallic state (HMe) are calculated using the FSM method described in the text. Fig. 2. Calculated dependence of the total energies in the Disordered Local Moment state of Co 2 FeSi with the atomic moment of Co constrained (upper panel) and the atomic moment of Fe constrained (lower panel). Circles: LSDA calculations; squares: GGA calculations. Fig. 3. Nearest-neighbour inter-atomic exchange interactions, 1NN Co-Co (closed squares) and 2NN Co-Fe (open circles), calculated in the DLM state. The Co moments were fixed to the corresponding values using the FSM method. The asterisks denote the exchange interactions calculated in the FM state; they are given at the corresponding self-consistently calculated values of the Co moments in the FM state. For an explanation of the dashed lines, see the text. HMe stands for the half-metallic state modelled by the FSM method.
5,138
2022-10-01T00:00:00.000
[ "Physics" ]
Effects of Receptor Binding Specificity of Avian Influenza Virus on the Human Innate Immune Response ABSTRACT Humans infected by the highly pathogenic H5N1 avian influenza viruses (HPAIV) present unusually high concentrations in serum of proinflammatory cytokines and chemokines, which are believed to contribute to the high pathogenicity of these viruses. The hemagglutinins (HAs) of avian influenza viruses preferentially bind to sialic acids attached through α2,3 linkages (SAα2,3) to the terminal galactose of carbohydrates on the host cell surface, while the HAs from human strains bind to α2,6-linked SA (SAα2,6). To evaluate the role of the viral receptor specificity in promoting innate immune responses in humans, we generated recombinant influenza viruses, one bearing the HA and neuraminidase (NA) genes from the A/Vietnam/1203/2004 H5N1 HPAIV in an influenza A/Puerto Rico/8/1934 (A/PR/8/34) backbone with specificity for SAα2,3 and the other a mutant virus (with Q226L and G228S in the HA) with preferential receptor specificity for SAα2,6. Viruses with preferential affinity for SAα2,3 induced higher levels of proinflammatory cytokines and interferon (IFN)-inducible genes in primary human dendritic cells (DCs) than viruses with SAα2,6 binding specificity, and these differences were independent of viral replication, as shown by infections with UV-inactivated viruses. Moreover, human primary macrophages and respiratory epithelial cells showed higher expression of proinflammatory genes after infection with the virus with SAα2,3 affinity than after infection with the virus with SAα2,6 affinity. These data indicate that binding to SAα2,3 by H5N1 HPAIV may be sensed by human cells differently than binding to SAα2,6, inducing an exacerbated innate proinflammatory response in infected individuals. Influenza A viruses, due to their mode of transmission and the high mutation frequency of their genomes, are among the leading pandemic disease threats. From 1997 until today, highly pathogenic avian influenza viruses (HPAIV) of subtype H5N1 have caused several outbreaks in birds that have resulted in a high mortality rate and that have been accompanied by occasional transmission to humans. Infections in humans often result in a severe and rapidly progressive pneumonia and subsequent systemic disease, with a fatal outcome in approximately 60% of the total cases reported to the Word Health Organization to August 2010 (http://www.who.int/csr/disease /avian_influenza/country/cases_table_2010_08_31/en/index.html). Humans infected by H5N1 HPAIV present unusually high serum concentrations of chemokines and proinflammatory cytokines, and it is thought that this cytokine dysregulation may contribute to disease severity (5,7,11,25,32). Furthermore, elevated expression of MxA and alpha interferon (IFN-␣) has been observed in autopsy lung tissue from an H5N1 virusinfected patient (46). Avian strains of influenza virus are not efficient at infecting humans (4), and direct transmission from human to human has been reported only in close family clusters, with very limited spread of the virus (54). There are some receptor restrictions for avian influenza viruses in human airways that may account for the poor ability of avian strains to establish infections in humans (22,(29)(30)(31)49). The capacity of the influenza viruses to infect birds or humans seems to be defined in part by the binding specificity of the hemagglutinin (HA), the major glycoprotein on the influenza virus surface. 
Generally, HAs of human strains of influenza virus preferentially bind sialic acids attached through an ␣2,6 linkage to the terminal galactose (SA␣2,6) of the oligosaccharides on the cell surface. These types of linkages are frequent in human respiratory epithelia (36). In contrast, the HA of avian strains bind preferentially to ␣2,3-linked sialic acids (SA␣2, 3), which are abundant in the avian intestinal tract (33). Interaction of the HA with sialylated glycans on the cell surface is necessary for the infection of host cells and the transmission and virulence of influenza viruses (22,37). Mutations that alter the receptor binding specificity of avian viruses could be important for the crossover of the virus from avian to human hosts, as well as for allowing direct human-tohuman transmission (29). Several amino acid changes in the HA receptor binding site of avian viruses have been shown to change the receptor specificity from SA␣2,3 to SA␣2,6 (8,43,53). Recently, it has been reported that the A/Indonesia/5/2005 H5N1 HPAIV, which bears point mutations that switch the receptor preference to SA␣2,6, shows strong attachment to human tissue sections from different regions of the respiratory tract; in contrast, binding of the virus with wild-type (WT) HA is minimal and restricted to tissue sections from the lower respiratory tract (8). These findings suggest that alterations in the receptor binding specificity could make HPAIV capable of infecting human hosts. Examination of the receptor specificity of different human and avian H2 viruses and of human, avian, and equine H3 influenza viruses by Connor et al. (9) revealed a correlation between the receptor specificities of these viruses and the residues at positions 226 and 228 of the HA. Specifically, they observed that viruses binding terminal SA␣2,6 had residues L and S at these positions but that viruses binding SA␣2,3 presented Q and G, respectively. Also, they observed that amino acids L and S were conserved at positions 226 and 228 in the human isolates but that Q and G were frequently found in avian and equine isolates. Later, using glycan arrays, it was shown that the change at positions 226 and 228 to L and S, respectively, in the HA of the H5N1 A/Vietnam/1203/2004 virus altered its receptor specificity, permitting binding to a natural human SA␣2,6 glycan (43,53). Dendritic cells (DCs) have an essential role in initiating the innate immune response. This cell type presents an important number of pattern recognition receptors (PRRs) that recognize pathogen-associated molecular patterns (PAMPs) in different cell locations. Hence, pathogens can be "sensed" by cytosolic receptors, like retinoic acid inducible gene I (RIG-I)-like helicases (RLH) (19), in endocytic compartments by several of the Toll-like receptors (TLRs) (21) or on the cell surface by other TLRs and c-type lectin receptors (CLRs) (20,48). Besides having this function of pathogen recognition, these receptors induce the activation of different signaling cascades, leading to the modulation of gene expression. Moreover, DCs are also professional antigen-presenting cells, forming the main link between innate and adaptive responses (2,40). In this work, we hypothesized that the receptor specificity of the avian influenza virus may be related to exacerbated levels of proinflammatory cytokines and chemokines in infected humans. 
As a strategy to tackle this question, we generated recombinant influenza viruses with different receptor specificities by introducing the mutations Q226L and G228S into the HA of the A/Vietnam/1203/2004 virus. Then, following characterization of the receptor binding specificities of the viruses bearing 226Q 228G (wild-type genotype) and 226L 228S (mutant genotype) by solid-phase and flow cytometry binding assays, we studied the expression profiles of proinflammatory genes and proteins in primary human DCs and subsequently in macrophages and human tracheobronchial epithelial (HTBE) cells upon infection with those viruses. The virus encoding WT HA, which showed SA␣2,3-preferential binding, induced higher levels of cytokines and chemokines in DCs, macrophages, and HTBE cells than the SA␣2,6-binding mutant. Our data suggest an important role for receptor binding specificity in the activation of the innate immune response and offer a possible explanation for the hypercytokinemia developed in humans infected by HPAIV. Cells and viruses. Human primary dendritic cells and macrophages were generated from CD14 ϩ cells isolated from buffy coats of healthy human donors (New York Blood Center). Peripheral blood mononuclear cells (PBMC) were isolated by Ficoll density gradient centrifugation (Histopaque; Sigma Aldrich) and incubated with anti-human CD14 antibody-labeled magnetic beads, and CD14 ϩ cells were purified using iron-based MiniMACS liquid separation columns (Miltenyi Biotech). For the generation of immature dendritic cells, CD14 ϩ cells were incubated at 37°C for 5 days at a concentration of 10 6 cells/ml in RPMI medium containing 10% fetal bovine serum (FBS) (HyClone; Thermo Scien-tific), 2 mM L-glutamine, 1 mM sodium pyruvate, and 100 U/ml penicillin-100 g/ml streptomycin (Gibco, Invitrogen) (complete DC medium) and supplemented with 500 U/ml human granulocyte-macrophage colony-stimulating factor (hGM-CSF) and 1,000 U/ml human interleukin 4 (hIL-4) (Peprotech). For macrophage generation, CD14 ϩ cells were incubated at 37°C for 10 days at a concentration of 0.5 ϫ 10 6 cells/ml in complete DC medium and supplemented with 1,000 U/ml hGM-CSF, with fresh hGM-CSF added every 2 or 3 days. Human tracheobronchial epithelial cells (Clonetics, Lonza) were grown in bronchial epithelial cell growth medium (BEGM), prepared by adding BEGM from a SingleQuot kit (Clonetics, Lonza) to 500 ml of bronchial epithelial cell basal medium (BEBM; Clonetics, Lonza). For differentiation, the cells were seeded on 12-mm Transwell filters (pore size, 0.4 m; Corning) coated with collagen type I from human placenta (Sigma Aldrich) in 12-well plates (Corning) and were incubated with a 1:1 mixture of BEGM and Dulbecco's modified Eagle's medium (DMEM) (supplemented also with BEGM from a SingleQuot kit). When the cultures were confluent, liquid from the upper compartment was removed and cells were cultured in an air-liquid interphase for 4 to 6 weeks. Medium in the basal compartment was supplemented with 5 ϫ 10 Ϫ8 M retinoic acid (Sigma Aldrich). Total cell differentiation was assessed by ␤-tubulin surface staining. Recombinant viruses. Recombinant influenza viruses were generated using reverse-genetics techniques as previously described (13). 
Viruses encoding HA with the WT sequence (Viet WT) and with the mutant sequence (Viet Mut) were constructed with the HALo WT or HALo Q226L G228S segment (HALo denotes an HA segment modified by the removal of the encoded polybasic cleavage site), neuraminidase (NA) from the H5N1 A/Vietnam/1203/2004 virus, and the six segments PB2, PB1, PA, NP, M, and NS from influenza A/Puerto Rico/8/1934 (A/PR/8/34) virus as previously described (39). BB was constructed using the HA and NA segments from the seasonal H1N1 A/Brisbane/59/2007 virus (kind gift from Adolfo Garcia-Sastre), and the rest of the segments were from A/PR/8/34 virus. All viruses were grown in 9-day-old embryonated chicken eggs (SPAFAS; Charles River Laboratories). All influenza viruses were titrated by plaque assay on MDCK cells by following standard procedures. For the solid-phase binding assay, the viruses were partially purified through a 20% sucrose cushion according to standard procedures. Flow cytometry-based binding assay. To carry out the analysis of the binding specificity of the influenza viruses, MDCK cells were infected at a multiplicity of infection (MOI) of 5 for 24 h with the corresponding influenza viruses. Next, cells were harvested, washed 3 times with cold phosphate-buffered saline (PBS), and then incubated with 10 g/ml of the biotinylated glycans Neu5Ac␣2,3Gal␤1,4GlcNAc-PAA (3Ј SLN-PAA) and Neu5Ac␣2,6Gal␤1,4GlcNAc-PAA (6Ј SLN-PAA), provided by the Consortium of Functional Glycomics, and anti-M2 antibody E10 (Mount Sinai Hybridoma Shared Research Facility) for 2 h at 4°C. Then cells were washed with PBS-1% bovine serum albumin (BSA) and incubated with streptavidinfluorescein isothiocyanate (FITC; Jackson Immunoresearch) and secondary antimouse rhodamine antibody (Jackson Immunoresearch). Both incubations were performed in the presence of 1 M GS4071 (a kind gift of Christopher Basler) in order to avoid cleavage of the sialic acids of the synthetic polymers (18). Flow cytometry was performed using a FACScan flow cytometer (Becton Dickinson) and analyzed with FlowJo software. Solid-phase binding assay. We also used a solid-phase binding assay to study the receptor specificity of the recombinant viruses as previously described by Matrosovich et al. (27), with some modifications. Briefly, 96-well enzyme-linked immunosorbent assay (ELISA) plates were coated with the specific purified influenza viruses at 20 g/ml and incubated overnight at 4°C. Next, plates were blocked with Carbo-Free blocking solution (Vector Laboratories) for 30 min at room temperature (RT) and washed with washing buffer (0.1% BSA, 0.05% Tween 20, PBS), and the biotinylated glycan 3Ј SLN-PAA or 6Ј SLN-PAA was added at different concentrations; plates were then incubated for 2 h at RT. Next, samples were washed with PBS and incubated with streptavidin-horseradish peroxidase (HRP; R&D Systems) for 1 h at RT. Both incubations were performed in the presence of 1 M GS4071. The HRP was developed with the substrate o-phenylenediamine (OPD; Invitrogen), the reaction was stopped with 1% sodium dodecyl sulfate (SDS), and the absorbance at 450 nm was analyzed in a microplate reader (BioTek). Growth curves of recombinant viruses in cell lines. To examine viral replication, confluent MDCK and A549 cells were infected at multiplicities of infection (MOIs) of 0.001 and 0.1, respectively. Cells were incubated at 37°C in DMEM containing 0.3% bovine albumin (MP Biomedicals) and 1 g/ml of tolylsulfonyl 4422 RAMOS ET AL. J. VIROL. 
phenylalanyl chloromethyl ketone (TPCK)-treated trypsin (Sigma). Supernatants were collected at selected time points postinfection (p.i.), and viral titers on MDCK cells were determined in a standard plaque assay. Evaluation of SA␣2,6 and SA␣2,3 on human DC surfaces. Dendritic cells were treated with Clostridium perfringens neuraminidase (Roche) or heat-inactivated neuraminidase (incubated for 20 min at 95°C) for 2 h at 37°C. Then, the cells were incubated with 20 g/ml of the biotinylated lectins Sambucus nigra agglutinin (SNA) and Maackia amurensis agglutinin (MAAI) (Vector Laboratories) for 15 min at 33°C, subsequently washed with PBS-1% BSA, and incubated with streptavidin-FITC for 1 h at RT. Data were acquired by flow cytometry using a FACScan and analyzed with FlowJo software. Neuraminidase-treated cells were used as a negative control for the presence of sialic acid. For fluorescence microscopy analysis, DAPI (4Ј,6-diamidino-2-phenylindole; 1 g/ml) was added to the cells during the incubation with streptavidin-FITC, and cells where fixed and mounted for analysis in a fluorescence microscope (Zeiss Axioplan 2). Infections of primary human cells with the recombinant influenza viruses. The primary human DCs were infected with recombinant influenza viruses at an MOI of 1, using serum-free DC medium for 45 min at 37°C as previously described (12). Then, DCs were plated in complete DC medium (10% FCS) at 1 ϫ 10 6 cells/ml and incubated for 4 h at 37°C. Subsequently, cells were recovered by centrifugation for 10 min at 400 ϫ g, and cell pellets were lysed for RNA isolation, while the supernatants were tested for cytokine production by multiplex ELISA. Macrophages were infected at an MOI of 2 with serum-free DC medium for 45 min at 37°C, and then complete DC medium was added and incubated for 4 h at 37°C. Supernatants where recovered for multiplex ELISA cytokine analysis, and RNA from the cells was isolated for quantitative reverse transcription-PCR (qRT-PCR) analysis. Human tracheobronchial epithelial (HTBE) cells, cultured in 12-mm Transwell filters, were washed 10 times with BEGM prior to infection in order to remove mucins. For cytokine induction evaluation, cells were infected at an MOI of 2 in 100 l BEGM inoculum for 1 h at 37°C. Then, cells were washed once and a 1:1 BGEM-DMEM mixture was added to the apical and basal compartments of the wells. At 4 h, 24 h, and 48 h p.i., medium from apical and basal chambers was harvested and stored at Ϫ20°C for subsequent cytokine evaluation, and cells were lysed and kept at Ϫ80°C for qRT-PCR analysis. For replication assessment, at the desired time points, 100 l of PBS was added over the infected cells, which were cultured at an air-liquid interphase. After 30 min of incubation at 37°C, PBS was removed and the titer of virus present in the wash was determined by plaque assay. RNA isolation. RNA from human dendritic cells was extracted from 5 ϫ 10 5 cells using an Absolutely RNA microprep kit (Stratagene). The concentration was evaluated in a spectrophotometer at 260 nm, and 500 ng of RNA was reverse transcribed using the iScript cDNA synthesis kit (Bio-Rad) according to the manufacturer's instructions. qRT-PCR. Evaluation of the expression of cytokines from different cell types was carried out using iQ SYBR green Supermix (Bio-Rad) according to the manufacturer's instructions. The PCR temperature profile was 95°C for 10 min, followed by 40 cycles of 95°C for 10 s and 60°C for 60 s. 
The mRNA level of each sample for each gene was normalized to ␣-tubulin and rps11 expression. The primers used for detection of the M protein from A/PR/8/34 influenza virus fragment RNA were 5Ј-TCAGGCCCCCTCAAAGCCGA-3Ј (forward) and 5Ј-GGGCACGGTGAGCGTGAACA-3Ј (reverse). For IFN-␤ quantification, we used 5Ј-GTCAGAGTGGAAATCCTAAG-3Ј (forward) and (5Ј-ACAGCATCT GCTGGTTGAAG-3Ј (reverse), for tumor necrosis factor alpha (TNF-␣), 5Ј-A GTGAAGTGCTGGCAACCAC-3Ј (forward) and 5Ј-GAGGAAGGCCTAAG GTCCAC-3Ј (reverse), for RANTES, 5Ј-TTGCCAGGGCTCTGTGACCA-3Ј (forward) and 5Ј-AAGCTCCTGTGAGGGGTTGA-3Ј (reverse), and for RIG-I, 5Ј-AAAGCCTTGGCATGTTACAC-3Ј (forward) and 5Ј-GGCTTGGGATGT GGTCTACT-3Ј (reverse). For amplification of ␣-tubulin, 5Ј-GCCTGGACCAC AAGTTTGAC-3Ј (forward) and (5Ј-TGAAATTCTGGGAGCATGAC-3Ј (reverse) were used, and in the case of rps11, 5Ј-GCCGAGACTATCTGCACTA C-3Ј (forward) and 5Ј-ATGTCCAGCCTCAGAACTTC-3Ј (reverse) primers were used. All the reactions were performed in duplicate. The primer efficiencies for qRT-PCR were evaluated and in all cases were confirmed to be approximately 100%. CXF Manager software (Bio-Rad) was used to analyze the relative mRNA expression levels by the change in threshold cycle (⌬C T ) method using the two housekeeping genes for ␣-tubulin and rps11 to normalize the results. Thus, the nonnormalized relative expression of each gene was obtained with the formula 2 [CT(min)ϪCT(sample)] , where C T (min) is the average C T for the sample with the minimal average C T . The normalized relative expression was obtained by dividing the previous formula by the geometric mean of the relative expression of the ␣-tubulin and rps11 housekeeping genes. Multiplex ELISA. Quantification of IP-10, TNF-␣, MIP-1␤, and interleukin 6 (IL-6) release in DCs, macrophages, and HTBE supernatants after infection was performed using the Milliplex multianalyte profiling human cytokine/chemokine kit (Millipore) according to the manufacturer's instructions. Data were analyzed using the Multiplex data analysis software. Statistics. Results are presented as average values of results from replicates Ϯ standard deviations (SD). Average values were compared by the unpaired Student t test. Experiments performed with DCs and macrophages were performed at least with three donors, and the results of two representative experiments are shown in every case. RESULTS The mutations Q226L and G228S change the receptor specificity of the H5N1 A/Vietnam/1203/2004 HA. In order to study the effect of receptor binding specificity in human primary immune cells, we used recombinant viruses that were identical in sequence except in the receptor binding domain of the HA. Viruses encoding HA with the WT sequence (Viet WT), HA with the mutations Q226L and G228S (Viet Mut), NA from the highly pathogenic A/Vietnam/1203/2004 H5N1 virus, and the six internal proteins for the A/PR/8/34 virus were generated by reverse genetics. The polybasic cleavage site of the HA of these viruses was mutated to reduce its virulence (39). We first assessed the receptor specificities of the viruses Viet WT and Viet Mut using a flow cytometry-based assay. Briefly, MDCK cells were infected with these two viruses and also with another virus bearing the HA and NA genes from the seasonal influenza A/Brisbane/59/2007 H1N1 virus as a control for SA␣2,6 binding specificity ( Fig. 1A and B). 
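Before continuing with the binding results, a brief aside on the qRT-PCR normalization described above: the ΔCT calculation against the geometric mean of the two housekeeping genes can be written out in a few lines. The CT values in the sketch below are invented purely for illustration; the gene names follow the primers listed in Materials and Methods.

```python
# Sketch of the delta-CT normalization described in Materials and Methods:
# relative expression = 2**(CT_min - CT_sample), then divided by the geometric
# mean of the relative expression of the two housekeeping genes.
# CT values below are hypothetical placeholders, not data from the study.
import numpy as np

# average CT per sample (array entries) and gene (keys); lower CT = more transcript
ct = {
    "IFN_beta":  np.array([30.1, 24.5, 29.8]),   # e.g. mock, Viet WT, Viet Mut
    "TNF_alpha": np.array([28.0, 23.9, 27.5]),
    "a_tubulin": np.array([20.1, 20.4, 20.2]),   # housekeeping
    "rps11":     np.array([19.8, 20.0, 19.9]),   # housekeeping
}

def relative_expression(ct_values):
    # non-normalized relative expression: 2 ** (CT(min) - CT(sample))
    return 2.0 ** (ct_values.min() - ct_values)

rel = {gene: relative_expression(v) for gene, v in ct.items()}
housekeeping = np.sqrt(rel["a_tubulin"] * rel["rps11"])   # geometric mean of the two

for gene in ("IFN_beta", "TNF_alpha"):
    normalized = rel[gene] / housekeeping
    print(gene, np.round(normalized, 3))
```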
Incubation with the 3′ SLN-PAA and 6′ SLN-PAA glycans indicated that 99% of the cells infected with the Viet WT virus bound to SAα2,3 but showed no detectable binding to the 2,6-linked SA. However, those cells infected with the Viet Mut virus, which has the mutations Q226L and G228S in the HA, showed a significant reduction in binding to 2,3-linked SA and a striking increase in the affinity for SAα2,6. Specifically, 92% of cells infected with the Viet Mut virus bound SAα2,6, levels similar to those seen with the HA from the seasonal H1N1 virus. To confirm these results, we also performed a solid-phase binding assay as described in Materials and Methods. As shown in Fig. 1C, the binding assay using the 3′ SLN-PAA showed that the Viet WT virus had a strong affinity for 2,3-linked SA, whereas a reduction was observed in the case of the Viet Mut virus. On the other hand, Viet WT showed low absorbance values for the 6′ SLN-PAA, but consistent with the flow cytometry results, the virus with the mutations Q226L and G228S in the HA showed high levels of binding to the 2,6-linked SA. Taken together, the results in Fig. 1 show that the introduction of the Q226L and G228S mutations into the HA of Viet WT changed the receptor preference of that HA from α2,3- to α2,6-linked SA. SAα2,6-binding Viet Mut replicates to higher titers in cell lines and in primary human epithelial cells than Viet WT. In order to characterize the viruses with different receptor specificities, we determined growth curves in the permissive MDCK and A549 cell lines (Fig. 2A and B). The Viet Mut virus, with preferential specificity for SAα2,6, replicated in MDCK cells similarly to Viet WT, except for an ~10-fold higher titer at 24 h postinfection (p.i.), consistent with the presence of both SAα2,6 and SAα2,3 linkages on those cells (17). A549 cells also supported increased replication of Viet Mut compared to that of Viet WT at 24 h p.i., although in this case the differences were also observed at later time points, with 10-fold differences in the maximum titers. We also characterized the replication of these two viruses in human tracheobronchial epithelial (HTBE) cells. Similarly to what was seen in A549 cells, the virus with affinity for SAα2,6 grew more efficiently in HTBE cells than the virus with the H5 WT HA using either a low or a higher starting MOI (0.1 or 2 PFU/cell), as shown in Fig. 2C and D. Infecting at an MOI of 0.1, we observed that the Viet Mut virus released about 100 times more virus than Viet WT at 72 h p.i. Therefore, the virus with human-virus-like receptor specificity showed increased fitness in epithelial cell lines, as well as in differentiated primary human respiratory cells, compared to the virus with avian influenza virus receptor specificity. Human DCs contain both SAα2,3 and SAα2,6 on their surfaces. To assess the presence or absence of SAα2,3 and SAα2,6 on the surfaces of human DCs and thereby predict their susceptibility to influenza viruses with distinct receptor specificities, we incubated monocyte-derived DCs with the biotinylated plant lectins SNA and MAAI, which have affinity for α2,6- and α2,3-linked SA, respectively, and then with streptavidin linked to FITC (see Materials and Methods). To remove the sialic acids present on the surfaces of DCs as a negative control, we added neuraminidase from Clostridium perfringens, whereas heat-inactivated neuraminidase was added as a control to the rest of the samples.
Flow cytometry analysis revealed that virtually all DCs presented both α2,6- and α2,3-linked SA on the cell surface (Fig. 3A), and these results were confirmed by fluorescence microscopy (Fig. 3B). This experimental system does not allow us to compare the levels of expression of α2,3- and α2,6-linked SA on DCs, since the lectins used may have differing affinities for their ligands. Nevertheless, these data indicate that human DCs can potentially be infected by influenza viruses with different receptor binding specificities (both α2,3- and α2,6-linked SA). Viet WT virus, with a higher affinity for α2,3-linked sialic acids, induces a strong activation of human dendritic cells. We next studied cytokine and chemokine expression profiles in primary human DCs after infection with the recombinant viruses. DCs express both α2,3- and α2,6-linked SA, and viruses with affinity for the α2,6-linked SA replicate more efficiently in those cells. Since we are interested in early events in the innate immune recognition of influenza viruses by human DCs, and to eliminate possible differences in DC activation by these two viruses due to differential replication in DCs, we evaluated the expression of genes associated with the proinflammatory response at 4 h p.i. By qRT-PCR, our data show (Fig. 4B) that the Viet WT virus, with SAα2,3-preferential specificity, induced elevated levels of IFN-β and TNF-α mRNA in infected DCs, while the levels observed upon Viet Mut infection were comparable to the basal levels. These results were highly reproducible among different donors (data not shown). Also, the Viet WT virus induced significantly greater expression of RIG-I and RANTES in infected DCs than did the Viet Mut virus (Fig. 4B). Additionally, we analyzed the concentration of several cytokines and chemokines secreted by the cells into the supernatants of the infected cell cultures by multiplex ELISA (see Materials and Methods). As shown in Fig. 4C, DCs infected by the virus with SAα2,3 specificity released high levels of IP-10, indicative of IFN production, whereas in the case of DCs infected with Viet Mut, which had SAα2,6-preferential binding, very small amounts of this chemokine were detected. We obtained similar results when we evaluated the levels of the proinflammatory cytokines TNF-α and IL-6 and the chemokine MIP-1β, indicating that the virus with SAα2,3-preferential binding induced a stronger proinflammatory response in human DCs than the virus with SAα2,6 binding specificity. To ensure that this differential response was not due to possible contaminants in the allantoic fluid or cell supernatant of the virus stocks, sucrose cushion-purified viruses (grown in MDCK cells or in eggs) were also tested, and similar results were obtained (data not shown). These results, based on a comparison of influenza viruses that differ by only 2 amino acids, indicate that preferential binding of HA to SAα2,3, but not to SAα2,6, induces a high proinflammatory response in human DCs at early times postinfection. Differences in DC activation by SAα2,3- and SAα2,6-binding influenza viruses are independent of viral replication. The experiments described above suggested that the differences in activation were independent of viral replication, since we did not observe differences in replication at the time when the cytokine profile was analyzed and because the distinct activation pattern was observed at an early time point. To test this hypothesis, we performed the infections with UV-inactivated viruses.
We first confirmed the lack of replication of the UV-inactivated preparations by qRT-PCR (Fig. 5A) and plaque assay and then studied the expression of proinflammatory genes. We observed that the UV-inactivated Viet WT virus was able to induce the expression of IFN-β and TNF-α in a way similar to that of the non-inactivated viruses, but we did not observe an upregulation of these genes in the case of cells infected with the Viet Mut virus (Fig. 5B). Analysis of RIG-I and RANTES expression revealed a similar result. As demonstrated in Fig. 5C, the analysis of supernatants by multiplex ELISA revealed an elevated production of the cytokines TNF-α and IL-6 and of the chemokines IP-10 and MIP-1β following infection with either UV-inactivated or live viruses with SAα2,3 specificity, relative to that of noninfected cells and those infected with the SAα2,6-binding virus. Together, these data indicate that the high activation of DCs induced by viruses with α2,3-linked SA specificity is independent of replication. Human primary macrophages produce higher levels of cytokines upon infection with virus possessing SAα2,3 receptor preference than with virus possessing SAα2,6 receptor preference. To elucidate whether the observed differential response was specific to DCs or extended to other human immune cell subtypes, we tested the responses induced by the Viet WT and Viet Mut viruses in human macrophages. We first assessed viral growth in this substrate using the M segment-based qRT-PCR assay. As in DCs, at 4 h p.i. the levels of replication were comparable between the two viruses, but at 20 h p.i. the virus with affinity for SAα2,6 showed a higher level of replication. In this cell type, the pattern of cytokine and chemokine production as a consequence of infection was similar to that observed in DCs, showing a higher expression of IFN-β and TNF-α, as well as of the other genes tested (Fig. 6), following infection with the virus possessing the SAα2,3-binding HA than following infection with the virus with the SAα2,6-binding HA. These results indicate that the phenotype observed for these viruses is not unique to DCs. Also, they show that macrophages, which are among the main cytokine producers of the innate immune system, display a higher activation phenotype following infection by an SAα2,3-binding virus than following infection by an SAα2,6-binding virus. Differential expression of proinflammatory responses in human respiratory epithelial cells by SAα2,3- and SAα2,6-binding influenza viruses. The respiratory epithelium and mucosa are the first tissues with which influenza viruses interact, and therefore the innate immune response induced in cells of this tissue is crucial for immune cell recruitment and, eventually, virus clearance (51). In order to elucidate whether the viruses with differing receptor specificities also induced different innate immune responses in respiratory human cells, we tested the recombinant viruses with different receptor specificities in HTBE cells. Consistent with the release of viral particles measured by virus titration as described above, the analysis by qRT-PCR showed higher levels of M segment mRNA in cells after infection with the Viet Mut virus than after infection with the Viet WT virus (Fig. 7A). As observed in Fig. 7B, the Viet WT virus showed higher levels of expression of IFN-β and the interferon-inducible chemokine IP-10 at 24 and 48 h p.i. than Viet Mut, as evaluated by qRT-PCR. Also, TNF-α and IL-6 mRNA levels were notably high at 48 h p.i.
in cells infected with the virus with SAα2,3 specificity (16.8- and 11.5-fold over those in mock-infected cells), while cells infected by the SAα2,6-specific virus showed levels only slightly higher than those in the uninfected cells (3.1 and 4.6 times those in mock-infected cells). The expression of the chemokines IL-8 and RANTES presented similar patterns, showing the greatest differences between the cells infected with the SAα2,3- and SAα2,6-binding viruses after 48 h of infection. The release of IP-10, TNF-α, IL-6, and MIP-1β was also analyzed in the supernatants of the apical compartment of the HTBE cultures (Fig. 7C). Production of IP-10 was detected at 48 h p.i. in cultures infected with both the Viet WT and Viet Mut viruses, with levels about 8.5-fold higher in Viet WT-infected than in Viet Mut-infected cultures. Elevated production of TNF-α, IL-6, and MIP-1β was detected in supernatants from cultures infected with Viet WT (2,400.5 ± 909.6 pg/ml, 1,074.7 ± 272.2 pg/ml, and 105.3 ± 12.5 pg/ml at 48 h p.i., respectively), while the levels observed in Viet Mut-infected HTBE cells were similar to those observed in mock-infected cultures. Comparable results were obtained when we evaluated the production of IP-10, TNF-α, IL-6, and MIP-1β in the lower chamber of the HTBE cultures (data not shown). Therefore, the virus with α2,3-linked SA specificity induced a stronger inflammatory response in primary respiratory cells than that with α2,6-linked SA specificity. It is important to note that the expression of proinflammatory molecules is delayed in epithelial cells compared to that in immune cells, since, unlike with DCs and macrophages, we were not able to detect differences between uninfected and infected cells at 4 h p.i. in the HTBE cultures. DISCUSSION H5N1 HPAIV have expanded throughout Asia and some parts of Africa and Europe in the last decade. This expansion is associated with widespread death in poultry (which has had an important economic impact on the poultry farming industry) and has resulted in more than 500 human infections to date. The high pathogenicity of H5N1 influenza viruses and their ability to transmit from birds to humans has become a major concern worldwide, and although the virus has not yet acquired the capacity for sustained human-to-human transmission, it continues to undergo genetic changes that may result in the acquisition of this capacity (1, 14, 41, 53), making these viruses a potential pandemic threat. Although the factors that determine efficient human-to-human transmission are not completely understood (29), it is thought that a change of receptor specificity of the viral HA from α2,3- to α2,6-linked SA is essential for facilitating transmission between humans (47). As a consequence, several studies have identified specific mutations in the influenza virus HA that can change HA's receptor specificity from SAα2,3 to SAα2,6, which may confer on avian influenza viruses the ability to be transmitted among humans (8, 41-43, 53). Here, we analyzed the HA receptor specificity of the A/Vietnam/1203/2004 H5N1 HPAIV and the effect of the amino acid changes Q226L and G228S in the HA of this isolate, using two different methods (a fluorescence-activated cell sorting [FACS]-based assay and a solid-phase binding assay), confirming the previously reported observation that these mutations changed the preferential receptor specificity of the H5N1 influenza virus from SAα2,3 to SAα2,6 (8, 41, 43). The work by Stevens et al.
in 2006 (43) first described the effect of these mutations in the recombinant HA from the A/Vietnam/1203/2004 virus by using a glycan array to test the binding of the protein to different glycans. In that work, although they did not see a dramatic shift, they observed a considerable reduction of binding to SAα2,3 and significant binding to SAα2,6 glycans. Later, another report from the same authors showed similar experiments performed with the whole virus, showing a more remarkable shift in preferential binding than when the recombinant purified protein was used (41). Therefore, in those previous reports, the A/Vietnam/1203/2004 HA with the mutations Q226L and G228S showed some level of binding to SAα2,3 glycans. Here, using two different assays, we observed a switch in the preferential binding after introducing the changes Q226L and G228S into the HA of this H5N1 virus. However, in concordance with those previously reported data, low remaining levels of binding to SAα2,3 by the Viet Mut virus were also observed. Given that the severity of human infections by HPAIV is believed to be associated with the induction of a strong inflammatory response in the host (11, 32, 46), we hypothesized that the receptors to which the avian or human influenza viruses bind, and how they bind to the cell surface receptors, may have an effect on the way that they are "sensed" by the immune cells in the lung and therefore induce a different immune response, contributing to the hyperinduction of proinflammatory cytokines observed in humans infected with avian viruses. Differential expression of proinflammatory genes by seasonal human and highly pathogenic avian viruses has also been observed in human primary dendritic cells (34, 45) and macrophages (7, 16, 23, 24, 55). Due to the important role of DCs in initiating innate immune responses as a consequence of the recognition of PAMPs and their ability to produce proinflammatory cytokines following activation, we first investigated the effect of the receptor specificity of the influenza viruses on human primary DCs. As shown in Results, we observed that the virus with α2,3-linked SA receptor specificity induced a higher expression of proinflammatory genes in DCs and macrophages (Fig. 4 and 6), and this effect was also observed using UV-inactivated viruses (Fig. 5), which indicates that the recognition of the virus that binds SAα2,3 may occur differently from that of the virus with SAα2,6 specificity and that the response that is induced is independent of the replication of the virus. Consistent with this, Miller and Anders (28) observed that inactivated influenza viruses induced type I IFN in murine splenocytes and that the interaction of the virus with the sialylated receptors on IFN-producing cells was required. Sialic acids are structural determinants of the cell surface, and there is increasing evidence of their importance in immune system modulation (3, 10, 35, 38, 44, 50), although there are still numerous unknown aspects of the interaction of influenza viruses with their receptors. As shown above (Fig. 3), human DCs contain both α2,6- and α2,3-linked SA on their surfaces, and therefore they are potentially susceptible to infection with both avian and human viruses. Nevertheless, little information has been reported to date regarding the sialylation levels of surface glycoproteins in either immune or epithelial cells.
The respiratory tract is the first barrier of defense against influenza viruses and other respiratory pathogens, conferring mechanical protection through cilia and mucus but also releasing proinflammatory cytokines and chemokines that stimulate the recruitment of immune cells. In this work, we show that changing an avian virus to a more human virus-like receptor specificity resulted in an increased ability to infect and replicate in HTBE cells, which resemble the epithelia of the human respiratory tract. Similarly, Matrosovich et al. (26) tested in differentiated HTBE cells the pandemic human A/Hong Kong/1/1968 (H3N2) virus, which presents SAα2,6 receptor specificity, and a mutant virus with the changes L226Q and S228G in the HA, which presents SAα2,3 receptor specificity. Consistent with our data, they observed a better infection and replication capacity for the virus with SAα2,6 receptor specificity. Comparisons of the innate immune responses elicited by human and avian influenza viruses in human respiratory epithelial cells, showing that avian influenza viruses induce a stronger inflammatory response than human influenza viruses, have been reported elsewhere (5, 15, 52). Those studies used viruses with differing gene constellations, so it is not possible to attribute the observed effects to a particular genetic factor. The work reported here is focused on the contribution of receptor binding specificity to the initiation of innate immune responses in human cells. Therefore, we used viruses that differed only in the receptor binding site of the HA and shared all the other genes. Thus, based on our data, we can attribute the differences in cytokine activation by the viruses tested to their different receptor specificities. Another recent report showed a higher induction of type I IFN in the epithelial cell line A549 after infection with influenza viruses bearing the HA and NA from two avian H5N1 viruses (A/Vietnam/1203/2004 and A/Hong Kong/213/2003) than with viruses bearing the HA and NA from the H1N1 human influenza A/New Caledonia/20/1999 virus (6). In that work, the recombinant viruses constructed shared the six internal genes from the cold-adapted virus A/Ann Arbor/6/1960, which indicates that the differences in type I IFN induction are mediated by the HA and NA. Although we show that both immune and epithelial human primary cells produce higher levels of proinflammatory cytokines after infection with viruses that bind SAα2,3 than with viruses that bind SAα2,6, it is important to point out that the detection of cytokines was delayed in the epithelial cells, since at 4 h p.i. no differences from the mock-infected cells were observed. Interestingly, high production of the chemokines IP-10, IL-8, and RANTES, which are involved in immune cell recruitment to infected tissues, was detected after infection with the virus with SAα2,3 receptor specificity in HTBE cells. Indeed, induction of high levels of IP-10 has been observed in human lungs (46) and in the sera of humans (32) infected by H5N1 virus. These data suggest that infection of lung epithelial cells by SAα2,3-binding viruses may result in higher recruitment of proinflammatory cells to the site of infection. Taken together, our findings provide evidence to support the hypothesis that the strong host inflammatory responses induced in humans by H5N1 HPAIV could result from their SAα2,3 receptor specificity. Interestingly, our data strongly suggest the existence of two non-mutually exclusive scenarios.
One is that the receptors on human DCs and macrophages that sense SAα2,3-binding viruses are distinct from those that sense SAα2,6-binding viruses, resulting in a more rapid and enhanced proinflammatory response in the lungs of infected patients after binding by SAα2,3-preferential viruses. Alternatively, differential recognition of SAα2,3- and SAα2,6-binding viruses by the same receptor on human immune cells could result in distinct signaling cascades of activation in those cells. Further studies of the recognition of influenza viruses by immune cells would help clarify the mechanisms involved in the induction of hypercytokinemia by highly pathogenic influenza viruses in humans. We thank the Cytometry and Microscopy Shared Facilities at the Mount Sinai School of Medicine for assistance and the Consortium for Functional Glycomics for sharing reagents. Parts of the data will be published on the Consortium for Functional Glycomics website. We also thank all the Fernandez-Sesma lab members for suggestions and comments. This work was supported by the NIH/NIAID Center for Research on Influenza Pathogenesis (CRIP) (grants HHSN266200700010C and 1R01AI073405 to A.F.-S.). John Steel is supported by a Career Development Fellowship from the Northeast Biodefense Center (U54-AI057158-Lipkin). We have no competing interests to declare.
9,144.8
2011-02-23T00:00:00.000
[ "Biology", "Medicine" ]
Unsupervised Dense Retrieval Training with Web Anchors In this work, we present an unsupervised retrieval method with contrastive learning on web anchors. The anchor text describes the content that is referenced from the linked page. This shows similarities to search queries that aim to retrieve pertinent information from relevant documents. Based on their commonalities, we train an unsupervised dense retriever, Anchor-DR, with a contrastive learning task that matches the anchor text and the linked document. To filter out uninformative anchors (such as "homepage" or other functional anchors), we present a novel filtering technique to only select anchors that contain similar types of information as search queries. Experiments show that Anchor-DR outperforms state-of-the-art methods on unsupervised dense retrieval by a large margin (e.g., by 5.3% NDCG@10 on MSMARCO). The gain of our method is especially significant for search and question answering tasks. Our analysis further reveals that the pattern of anchor-document pairs is similar to that of search query-document pairs. Code available at https://github.com/Veronicium/AnchorDR. INTRODUCTION Dense retrieval matches queries and documents in the embedding space [15,16,26], which can capture the semantic meaning of the text and handle more complex queries compared to traditional sparse retrieval methods [23]. Due to the scarcity of labeled data in certain domains, including legal and medical, numerous recent studies have focused on unsupervised dense retrieval, which trains dense retrievers without annotations [11,13,14,18]. One of the most common approaches to unsupervised dense retrieval is to design a contrastive learning task that approximates retrieval [3,11,13,14,18,19,22], yet it is nontrivial to construct contrastive pairs. Most existing methods construct contrastive pairs from the same context, such as a sentence and its context [14], or two individual text spans in a document [11,13,19]. The relation between these co-document pairs is different from that of query-document pairs in search or question answering, where the query aims to seek information from the document. LinkBERT [27] leverages text spans sampled from a pair of linked Wikipedia pages. However, such text spans are not guaranteed to have high relevance. A few other methods train a model to generate queries from documents [2,18], but they either require large language models or huge amounts of training data. In this work, we present Anchor-DR, an unsupervised dense retriever that is trained to predict the linked document of an anchor given its anchor text. The text on the anchor of a hyperlink typically contains descriptive information that the source document cites from the linked document, suggesting that anchor-document pairs resemble query-document pairs in search, where the search query describes the information that the user requires from the relevant document. As a result, we train Anchor-DR to match the anchor text and its linked document with a contrastive objective.
Although the relation between anchor-document pairs is typically similar to that of search queries and relevant documents, there also exist a large number of uninformative anchors. For example, a web document may use anchor links simply to redirect to the linked document (e.g., "homepage" or "website"). Such anchor-document pairs do not resemble the relation between search queries and documents and may introduce noise into our model. We thus design a few heuristic rules to filter out functional anchors, such as headers/footers or anchors within the same domain. In addition, we train a classifier with a small number of high-quality search queries to further identify anchors containing similar types of information as real search queries. Experiment results show that Anchor-DR outperforms state-of-the-art unsupervised dense retrievers by a large margin on two widely adopted retrieval datasets, MSMARCO [1] and BEIR [24] (e.g., by 5.3% NDCG@10 on MSMARCO). The improvement of Anchor-DR is most significant on search and question answering tasks, suggesting that compared to the contextual relation between co-document text spans [11,13], the referral relation between anchor-document pairs is more similar to the information-seeking relation between search query-document pairs. We further present examples to show that anchor-document pairs indeed have patterns similar to those of query-document pairs. RELATED WORK Dense Retrieval. Dense retrieval is the technique of using dense vector representations of text to retrieve relevant documents [5,12]. With the development of pretrained language models [6,15], recent works have developed various techniques for dense retrieval, including retrieval-oriented pretraining [11,13,19] and negative selection [26]. While dense retrieval has exhibited remarkable effectiveness in contrast to traditional sparse retrieval approaches [23], its benefits are generally confined to supervised settings that involve an adequate amount of human annotations [24]. Unsupervised dense retrieval. Previous work on unsupervised dense retrieval mainly adopts contrastive learning for model training. ICT [14] matches a random sentence with its surrounding context. SPAR [3] uses random sentences as queries, with positive and negative passages ranked by the BM25 score. Co-condenser [11], COCO-LM [19], and Contriever [13] regard independent text spans in one document as positive pairs. QExt [18] further improves on their work by selecting the text span with the highest relevance computed by an existing pretrained model. A few other research works use neural models to generate queries, such as question-like queries [2] or the topic, title, and summary of the document [18]. However, both approaches require a large-scale generation system. Leveraging web anchors in retrieval. Web anchors have been widely applied in classic approaches to information retrieval [4,7,8,10,28]. Recently, HARP [17] designs several pretraining objectives leveraging anchor texts, including representative query prediction and query disambiguation modeling. ReInfoSelect [29] learns to select the anchor-document pairs that best weakly supervise the neural ranker. However, these methods either focus on classic bag-of-words modeling or apply a cross-encoder architecture that does not fit the setting of dense retrieval. METHODOLOGY We present an unsupervised dense retrieval method that trains the model to match the representations of anchor text and its linked document. This section describes the contrastive learning task of anchor-document prediction and the anchor filtering process.
Contrastive Learning with Anchor-Document Pairs Based on the commonalities between anchor-document pairs and query-document pairs [4,7,8,10,28], we compute the representation of each anchor and document with our model, Anchor-DR, and train it with a contrastive objective of matching the anchor text and its linked document: L = -log [ exp(⟨g(a), g(d+)⟩) / ( exp(⟨g(a), g(d+)⟩) + Σ_{d- ∈ N(a)} exp(⟨g(a), g(d-)⟩) ) ], where g is our presented model, Anchor-DR, with T5 [21] as its backbone, the sequence embedding g(·) is the embedding of the first token output by the decoder of Anchor-DR, (a, d+) is the anchor text and its linked document, and N(a) is the set of negative documents sampled from the whole dataset. In practice, we use BM25 negatives in the first iteration [15] and the negatives mined by Anchor-DR in the following iterations [26]. At inference, we feed the query and all the documents into Anchor-DR separately and use the embedding of the first token in the decoder output as the sequence embedding. Then we rank all the documents by their similarity to the query: s(q, d) = ⟨g(q), g(d)⟩, where g denotes Anchor-DR. Anchor Filtering While some anchor-document pairs exhibit strong similarities with query-document pairs in search, others do not. For instance, "homepage" or "website" and their linked documents hold entirely distinct relations from query-document pairs. Including these pairs in the training data may introduce noise into our model. As a result, we first apply a few heuristic rules and then train a lightweight classifier to filter out uninformative anchor text. Anchor filtering with heuristic rules. We observe that a large number of uninformative anchors are functional anchors and that these anchors mainly exist between pages within the same website. Consequently, we filter out anchor text that falls into the following categories: (1) In-domain anchors, where the source and target page share the same domain; (2) Headers or footers, which are detected by specific HTML tags, such as <header> and <footer>; and (3) Keywords indicating functionalities, which are manually selected from the 500 most frequent anchors. Anchor filtering with query classifier. We train a lightweight query classifier to learn the types of information that are typically contained in search queries about relevant documents. Specifically, we use the ad-hoc queries provided by WebTrack [9] as positive examples. This small number of queries is manually selected to reflect important characteristics of authentic Web search queries for each year. As negative examples, we sample a subset of anchors before filtering by our rules, of the same size as the positive examples. We train the query classifier with the cross-entropy loss: L_cls = -[ y log c(x) + (1 - y) log(1 - c(x)) ], where c is a miniBERT-based [25] model, x is the input text, and y indicates whether x is a real search query. After training the query classifier, we rank all the anchor text by the logits of the positive class (i.e., similarity to search queries) and only keep the top 25%. EXPERIMENTS In this section, we describe the experiment setups, compare Anchor-DR with baselines and ablations, and analyze its effectiveness. Experimental Setup We evaluate Anchor-DR on two public datasets, MSMARCO [1] and BEIR [24], for unsupervised retrieval, where we directly apply the methods to encode test queries and documents without supervision. We report the nDCG@10 results following previous works [13,18].
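As a concrete illustration of the anchor-document contrastive objective described in the Methodology section above, the following minimal PyTorch-style sketch is provided; it is ours, not taken from the AnchorDR codebase, and the function names, batch shapes, and the use of per-anchor sampled negatives are assumptions made for illustration.

import torch
import torch.nn.functional as F

def anchor_doc_contrastive_loss(anchor_emb, pos_doc_emb, neg_doc_emb):
    """anchor_emb: (B, H) anchor-text embeddings.
    pos_doc_emb: (B, H) embeddings of the linked (positive) documents.
    neg_doc_emb: (B, K, H) embeddings of K sampled negative documents per anchor."""
    pos_scores = (anchor_emb * pos_doc_emb).sum(-1, keepdim=True)      # (B, 1) inner products
    neg_scores = torch.einsum("bh,bkh->bk", anchor_emb, neg_doc_emb)   # (B, K)
    logits = torch.cat([pos_scores, neg_scores], dim=1)                # (B, 1+K)
    labels = torch.zeros(logits.size(0), dtype=torch.long)             # positive document at index 0
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings (batch of 4, hidden size 8, 3 negatives per anchor).
B, K, H = 4, 3, 8
loss = anchor_doc_contrastive_loss(torch.randn(B, H), torch.randn(B, H), torch.randn(B, K, H))
print(loss.item())

Placing the positive document at index 0 and applying standard cross-entropy over the concatenated scores is equivalent to the softmax contrastive objective written above.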
Table 2: Unsupervised retrieval results (nDCG@10) on MSMARCO and BEIR. The best result for each task is marked in bold. The best result among dense retrievers is underlined. We follow previous work [13] and report the average performance on 14 BEIR tasks and MSMARCO (BEIR14+MM). The results of coCondenser and results with † are evaluated using their released checkpoints. The results of other baselines are copied from their original papers. Training data. We train Anchor-DR on a subset of the ClueWeb22 dataset [20]. To preprocess the data, we first randomly sample a subset of English documents with at least one in-link. After that, we use the rules and then the trained query classifier to filter out uninformative anchors, as introduced in Sec. 3.2. Finally, we sample at most 5 in-links for each document. The statistics of the anchors and documents after each step of filtering are shown in Table 1. Note that ClueWeb22 has a total of 52.7B anchors, so we will be able to further scale up our model in the future. Implementation details. For continuous pretraining on anchor-document prediction, we train our model with BM25 negatives for one epoch and with ANCE negatives [26] for another epoch. We use a learning rate of 1e-5 and a batch size of 128 positive pairs. The query classifier is trained on the ad-hoc test queries of WebTrack 2009-2014 [9], which contain 300 queries in total. Baselines. We compare Anchor-DR with a sparse retrieval method, BM25 [23], and four unsupervised dense retrieval methods: coCondenser [11], Contriever [13], SPAR Λ (trained on Wikipedia) [3], and QExt-PLM (trained on Pile-CC with MoCo) [18]. All these dense retrieval methods construct contrastive pairs in an unsupervised way: by rules [11,13], by lexical features [3], or with pretrained models [18]. Note that we do not compare with methods that require a large-scale generation system to generate contrastive pairs, such as QGen [18] or InPars [2], as their generators either require additional human annotations or have significantly larger sizes than our model (e.g., 6B vs. 220M). For the ablation studies, we substitute the anchor-document prediction task with two other contrastive tasks: ICT [14], which considers a document and a sentence randomly selected from the document as a positive pair, and co-doc [11], which treats two text sequences from the same document as a positive pair. We also compare to Anchor (rule only), which removes the query classifier and only uses rules to filter anchors. For a fair comparison, we train all the ablations on the same subset of documents in ClueWeb22. Main Results Table 2 shows the unsupervised retrieval results on MSMARCO and BEIR. Anchor-DR outperforms all the dense retrieval baselines on MSMARCO and BEIR by a large margin (e.g., by 2.9% nDCG@10 on BEIR14+MM and 3.8% on all datasets). Furthermore, compared to other dense retrievers, Anchor-DR achieves the best performance across a majority of datasets, indicating that our method generalizes to a wide range of domains and retrieval tasks. We observe that Anchor-DR exhibits strong performance on specific subsets of tasks. For instance, Anchor-DR achieves a large performance gain of 11.8% nDCG@10 on TREC-COVID, but it is outperformed by other baseline methods on ArguAna and Quora.
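The results above are reported as nDCG@10. For reference, here is a minimal sketch of how that metric can be computed for a single query; it is ours (not from the paper), assumes the graded relevance labels of the ranked documents for the query are available, and uses the exponential gain form. Per-query values are then averaged over all test queries of a dataset.

import math

def ndcg_at_k(ranked_relevances, k=10):
    """ranked_relevances: relevance grades of the query's judged documents,
    in the order the system ranked them. Returns DCG@k / IDCG@k (0 if no relevant docs)."""
    def dcg(rels):
        return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    idcg = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / idcg if idcg > 0 else 0.0

# Toy example: graded relevance of the top 5 retrieved documents for one query.
print(round(ndcg_at_k([2, 0, 1, 0, 0], k=10), 4))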
Ablation Study To demonstrate the effectiveness of our anchor-document prediction task, we perform ablation studies in Table 3. We observe that Anchor-DR outperforms both methods. Additionally, ICT and co-doc have less than a 1% performance gap on 7 out of 19 datasets. This is probably because the contrastive learning pairs in both methods contain contextual information about each other. Anchor-DR also outperforms Anchor (rule only), indicating that it is effective to train on anchor texts with higher similarities to search queries. Table 4: Examples of the query-document pairs in two BEIR datasets, ArguAna and TREC-COVID, the co-document text pairs (co-doc), and the anchor-document pairs (Anchor-DR). Dataset: ArguAna Query: Becoming a vegetarian is an environmentally friendly thing to do. Modern farming is one of the main sources of pollution in our rivers, and as long as people continue to buy fast food ... Document: Health general weight philosophy ethics You don't have to be vegetarian to be green. Many special environments have been created by livestock farming, for example chalk down land in England and mountain pastures ... Dataset: TREC-COVID Query: what causes death from Covid-19? Document: Predicting the ultimate outcome of the COVID-19 outbreak in Italy: During the COVID-19 outbreak, it is essential to monitor the effectiveness of measures taken by governments on the course of the epidemic. Here we show that there is already a sufficient amount of data collected in Italy to predict the outcome of the process ... Method: co-doc Query #1: Going vegetarian is one of the best things you can do for your health. Document #1: We publish a quarterly magazine The Irish Vegetarian, with features and our roundup of news and events of interest to Irish vegetarians. Get involved! There are lots of ways to get involved. You can read our Going Vegetarian page. You can pick up a copy of The Irish Vegetarian. You can come to a Meetup meeting ... Query #2: COVID-19 vaccines designed to elicit neutralizing antibodies may sensitize vaccine recipients to severe diseases Document #2: According to a study that examined how informed consent is given to COVID-19 vaccine trial participants, disclosure forms fail to inform volunteers that the vaccine might make them susceptible to more severe disease. The study, "Informed Consent Disclosure to Vaccine Trial Subjects of Risk of COVID-19 Vaccine ... Method: Anchor-DR Query #1: Vegetarian Society of Ireland Document #1: The Vegetarian Society of Ireland is a registered charity. Our aim is to increase awareness of vegetarianism in relation to health, animal welfare and environmental perspectives. We support both vegetarian and vegan aims. Going vegetarian is one of the best things you can do for your health, for animals and for the planet ... Query #2: How COVID19 Vaccine Can Destroy Your Immune System Document #2: According to a study that examined how informed consent is given to COVID-19 vaccine trial participants, disclosure forms fail to inform volunteers that the vaccine might make them susceptible to more severe diseases...
Performance Analysis Performance breakdown. The results in Table 2 show that Anchor-DR achieves strong performance on a majority of datasets but not on others. To analyze the effectiveness of Anchor-DR on different datasets, we categorize the datasets into three subsets: (1) Search/QA, where the query is a question or keywords related to the document; (2) Context/Paraphrase, where the query and document contain coherent or overlapping information; and (3) Others. Figure 1(a) shows that Anchor-DR performs better on Search/QA datasets and co-doc is better on Context/Paraphrase datasets. The results are consistent with our hypothesis that the referral relation between anchor-document pairs is similar to the information-seeking relation between search queries and relevant documents. We further quantitatively analyze the information pattern of query-document pairs captured by Anchor-DR and co-doc. Figure 1(b) shows the performance gap between Anchor-DR and co-doc versus the degree of information overlap between queries and documents in each test dataset, measured using Jaccard similarity. We observe that Anchor-DR performs much better on datasets where queries and documents contain less overlapping information. Datasets with high query-document similarity mainly emphasize paraphrasing and coherency, which are distinct from the relation between search queries and documents. Case studies. Table 4 shows the contrastive pairs of Anchor-DR and co-doc, as well as the positive pairs in ArguAna and TREC-COVID, which represent the Search/QA and Context/Paraphrase datasets. The query-document pairs of ArguAna are arguments around the same topic, which are coherent and have similar formats. Similarly, the contrastive pairs of co-doc contain either coherent information (e.g., the claim and recent work of the vegetarian society) or repeating information (e.g., the COVID vaccine may cause diseases), which may explain its good performance on Context/Paraphrase datasets. In contrast, in TREC-COVID, the answer to the query is contained in the document. As shown in Table 4, the anchor text in Anchor-DR can be the topic of the linked document or take the form of a question. In both examples, the anchor text can serve as a search query and the document can provide the information the query is seeking, which may explain why Anchor-DR achieves strong performance on the Search/QA datasets. CONCLUSION We train an unsupervised dense retrieval model, Anchor-DR, leveraging the rich web anchors. In particular, we design a contrastive learning task, anchor-document prediction, to continuously pretrain Anchor-DR. Additionally, we apply predefined rules and train a query classifier to filter out uninformative anchors. Experiments on two public datasets, MSMARCO and BEIR, show that Anchor-DR significantly outperforms state-of-the-art dense retrievers on unsupervised retrieval. Our analyses provide a further comparison of the patterns of information contained in our contrastive learning pairs and the query-document pairs in the test datasets. Table 1: The statistics of the ClueWeb22 anchor training data. Table 3: nDCG@10 of models trained with different contrastive tasks on the same subset of documents, with 400K documents and 400K contrastive pairs. A t-test shows that Anchor-DR outperforms co-doc on All Avg. with p-value < 0.05.
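The performance breakdown above correlates the Anchor-DR vs. co-doc gap with query-document information overlap measured by Jaccard similarity. A minimal sketch of that overlap measure follows; it is ours, not from the paper, and the lowercasing and whitespace tokenization are assumptions, since the paper does not specify its exact preprocessing.

def jaccard_similarity(query, document):
    """Jaccard similarity between the sets of (lowercased) tokens of a query and a document."""
    q = set(query.lower().split())
    d = set(document.lower().split())
    if not q and not d:
        return 0.0
    return len(q & d) / len(q | d)

# Toy example mirroring the analysis: low overlap suggests an information-seeking
# (search-like) pair, high overlap suggests a paraphrase-like pair.
print(jaccard_similarity("what causes death from covid-19",
                         "predicting the ultimate outcome of the covid-19 outbreak in italy"))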
4,012.4
2023-05-10T00:00:00.000
[ "Computer Science" ]
Leafflower–leafflower moth mutualism in the Neotropics: Successful transoceanic dispersal from the Old World to the New World by actively-pollinating leafflower moths In the Old World tropics, several hundred species of leafflowers (Phyllanthus sensu lato; Phyllanthaceae) are engaged in obligate mutualisms with species-specific leafflower moths (Epicephala; Gracillariidae) whose adults actively pollinate flowers and whose larvae consume the resulting seeds. Considerable diversity of Phyllanthus also exists in the New World, but whether any New World Phyllanthus is pollinated by Epicephala is unknown. We studied the pollination biology of four woody Phyllanthus species occurring in Peru over a period of four years, and found that each species is associated with a species-specific, seed-eating Epicephala moth, each here described as a new species. Another Epicephala species found associated with herbaceous Phyllanthus is also described. This is the first description of Epicephala from the New World. Field-collected female moths of the four Epicephala species associated with woody Phyllanthus all carried pollen on their proboscises, and active pollination behavior was observed in at least two species. Thus, Epicephala moths also pollinate New World Phyllanthus. However, not all of these Epicephala species may be mutualistic with their hosts, because we occasionally observed females laying eggs in developing fruits without pollinating. Also, the flowers of some Phyllanthus species were visited by pollen-bearing thrips or gall midges, which potentially acted as co-pollinators or primary pollinators. Phylogenetic analysis showed that the New World Epicephala associated with woody Phyllanthus are nested within lineages of Old World active pollinators. Thus, actively-pollinating Epicephala moths, which originated in the Old World, successfully colonized the New World, probably across the Pacific, and established mutualisms with resident Phyllanthus species, although whether any of the relationships are obligate requires further study. There is likely a major radiation of Epicephala still to be found in the New World. Introduction Obligate pollination mutualisms between plants and actively pollinating, seed-parasitic pollinators represent some of the most sophisticated examples of plant-pollinator coevolution [1]. Examples include the fig-fig wasp [2,3], yucca-yucca moth [4], and leafflower-leafflower moth mutualisms [5,6], wherein the plants sacrifice a subset of the seeds as nourishment for pollinator larvae in return for pollination services. The pollinating insects have evolved to actively pollinate host flowers to ensure food (developing seeds) for their larvae, and have morphological features that enhance active pollination, such as the coxal comb and pollen pockets in fig wasps [7], the maxillary tentacles in yucca moths [8], and the hairy proboscis in leafflower moths [9]. Usually a subset of the seeds is left uneaten by the pollinator larvae, providing a net benefit of the mutualism for the plants. Plant specializations to these pollinators have led to highly restrictive floral structures and/or loss of nectar reward, making their flowers hardly attractive to ordinary flower visitors. Leafflowers are plants of the genera Glochidion, Breynia, and Phyllanthus in the tribe Phyllantheae (Phyllanthaceae), comprising about 1,200 species distributed throughout the Old World and New World tropics [10,11].
The mutualism with leafflower moths, or moths in the genus Epicephala (Gracillariidae), was initially discovered in three species of Glochidion in Japan [5], but later studies showed that as many as 500 leafflower species, occurring throughout tropical Asia, Africa, Australia, and the Pacific, are mutualistic with species-specific, actively pollinating Epicephala moths [12]. These plants bear small (up to 3-4 mm), greenish, unisexual flowers that are visited nocturnally by female Epicephala moths. The moth uses its hairy proboscis to actively collect pollen from male flowers and deposit pollen on the stigma, after which it lays an egg in the flower that it has just pollinated. In most leafflower species, the number of seeds per fruit is 6 (but can be up to 20 in some Glochidion) [11]. The proportion of the seeds in each fruit consumed by a single Epicephala larva is variable among species, but a subset of the seeds usually remains intact after moth consumption [13]. Phylogenetic analysis of the plants and the moths indicated that, whereas the active pollination behavior originated only once in Epicephala, specialization to Epicephala pollination occurred independently in at least five distinct leafflower lineages (Fig 1) [12]. While knowledge on the diversity and evolution of the leafflower-moth association is accumulating in the Old World tropics, virtually nothing is known about the pollination biology of leafflowers in the New World, despite the fact that there are ca. 250 Phyllanthus species throughout the New World. Cuba and Venezuela harbor particularly high diversity, with 50 and 58 species, respectively [10]. Molecular phylogenetic analysis suggested that New World Phyllanthus group into three clades of entirely New World species (Fig 1) [14]. These clades occupy derived positions on the phylogeny (Fig 1), indicating that Phyllanthus plants colonized the New World from the Old World multiple times. Based on divergence time estimation using fossil calibrations [12], New World colonization by Phyllanthus occurred no earlier than the Oligocene (33.9-23.0 Ma). Although the transboreal tropical forest spanning the northern continental area during warm periods of the late Paleocene and early Eocene (ca. 50-52 Ma) [15] would presumably have allowed overland dispersal by these tropical plants, Phyllanthus colonization of the New World is too young to be explained by such a scenario and thus is likely the result of transoceanic dispersal. Due to the dearth of fossil Gracillariidae, the divergence times of Epicephala are much less reliable. However, an available estimate based on a COI molecular clock suggests that the genus originated around ca. 25 Ma [12]. Thus, the Neotropics was well separated from the Old World tropics by ocean by the time actively pollinating Epicephala evolved. Although intercontinental dispersal by tropical insects such as Epicephala seems unlikely, Epicephala has repeatedly colonized remote oceanic islands of the Pacific [16], suggesting that intercontinental, transoceanic dispersal may also be feasible. (Fig 1 caption: The tree is based on the most recent molecular phylogenetic analyses of the tribe [12,14]. Species richness of each terminal clade is provided as the area of the clade triangle. The five clades containing Epicephala moth-pollinated plants are indicated in green, and the rest is colored blue. Lineages occurring in Asia/Oceania, Africa, and the New World are indicated by beige, turquoise, and yellow boxes, respectively. Although the number of Phyllanthus colonizations from the Old World to the New World depends on how the internal polytomy is resolved, the three suspected colonization events are indicated on the tree. A group of entirely Madagascan species was recently placed in subgenus Gomphidium [27] and thus is tentatively labeled "Subg. Gomphidium (Madagascar)", but note that they are distinct from true Gomphidium in New Caledonia. Figure modified from [11].)
Because presently there are no Epicephala species described from the New World, we set out to determine whether Epicephala occurs in the New World and, if so, whether they are mutualistic with the host Phyllanthus, with the aim of understanding the global distribution and diversity of the leafflower-leafflower moth mutualism. Study sites and materials The study was conducted in Peru during four field expeditions, one in each year during 2013-2016. The first study site, La Florida (6˚52'05"S, 79˚07'43"W), is located between 900-1,200 m a.s.l. on the western slope of the Andes Mountains and harbors a seasonally dry tropical forest. Three Phyllanthus species, P. salviifolius, P. graveolens, and P. huallagensis, were studied at La Florida. Fieldwork was conducted during 6-7 December 2013, 29-31 October 2014, 16-18 November 2015, and 24-26 August 2016. The second site, Tarapoto (6˚28'08"S, 76˚21'13"W), is located on the Amazonian (eastern) side of the Andes Mountains at around 350 m a.s.l. and possesses a wet tropical forest. Phyllanthus acuminatus was studied at Tarapoto during 26-28 November 2013, 20-23 October 2014, and 21-22 November 2015. In addition to the above four woody Phyllanthus, three herbaceous Phyllanthus species were encountered during the course of the study: P. stipulatus and P. orbiculatus at Tarapoto, and P. amarus at a lowland Amazonian site at Iquitos (3˚34'00"S, 73˚07'11"W; ca. 100 m a.s.l.) visited during 29 August-2 September 2016. Pollination by Epicephala has never been found in herbaceous Phyllanthus in the Old World. However, because several herbaceous Phyllanthus species in Asia are associated with Epicephala that lack pollination behavior [12], the above three herbaceous species were also included in the study. Phyllanthus salviifolius and P. huallagensis are members of the subgenus Xylophylla [14], a New World endemic group of ca. 90 species with its center of diversity in the Caribbean. Phyllanthus salviifolius belongs to the section Oxalistylis and is distributed in Costa Rica, Colombia, Venezuela, Ecuador, and Peru, whereas P. huallagensis belongs to the section Elutanthos and is only known from Peru [10]. Phyllanthus acuminatus and P. graveolens belong to the New World section Nothoclema of the subgenus Conami, which contains 10 species and ranges from Mexico to Argentina [17]. Both species are widely distributed, from Mexico in the north to Peru (P. graveolens) and Argentina (P. acuminatus) in the south. Although Xylophylla and Nothoclema are not sister taxa, they group with other Neotropical herb, subshrub, and aquatic species and form the largest New World Phyllanthus radiation (Fig 1). Of the three herbaceous species studied, P. orbiculatus (sect. Apolepis) belongs to the same subgenus Conami as Nothoclema [18] and thus is likely a part of this large New World clade. On the other hand, P. amarus belongs to the distantly related subgenus Swartziani [19], which represents an independent New World colonization (Fig 1). The phylogenetic position of P. stipulatus is presently uncertain. All four woody Phyllanthus species are found on forest edges or on disturbed land with abundant sunlight. Only P. salviifolius is a tree that reaches up to 10 m (Fig 2), whereas the other three species are shrubs
(Figs 3 and 4). Phyllanthus salviifolius also differs from the other three species in that a single flowering branch produces either male or female flowers (Fig 2C). The male flowers of P. salviifolius possess long pedicels, and as many as 30 flowers aggregate on each axil (Fig 2D and 2E). The female flowers in turn are hardly pedicellate, and the tepals form a globular structure that covers the ovary and the style, leaving only the stigmatic surface exposed (Fig 2F). By contrast, both male and female flowers of P. acuminatus have spread tepals and exposed anthers and styles (Fig 3B and 3C). The flowers of P. graveolens and P. huallagensis, although the two species belong to different subgenera, resemble each other in that the tepals of both male and female flowers form a globe and almost entirely cover the anthers and the styles (Fig 4B, 4I and 4J). Once pollinated, the female flowers of P. huallagensis become erect until they mature into fruits and disperse the seeds (Fig 4H). Nectary discs are present in both male and female flowers of all four species. The three herbaceous Phyllanthus species are common in sunny habitats along roadsides or on disturbed land and bear small, nectariferous male and female flowers (Fig 5). Sampling and species delimitation of Epicephala moths To determine whether the studied Phyllanthus species are associated with Epicephala, we initially inspected the fruits and the nearby foliage for the presence of Epicephala larvae and pupae. Because Epicephala moths were found on all the studied Phyllanthus, we sampled the fruits haphazardly in the field and incubated them at room temperature in plastic containers to rear the larvae. Adults that emerged were used for morphological examination and sequencing of the insect barcoding region to delimit species and assess host specificity. Morphological examinations followed the standard dissection protocol detailed in [20]. In total, 91 adult pinned specimens were examined for the study, from which 28 genital dissections were made. The sensilla on the proboscis of females were also examined because they are an adaptation that facilitates active pollination and are reduced or lost in species that have secondarily lost the pollination behavior [9]. Genomic DNA was extracted from 19 of the 28 specimens for which genital dissections were made, and the barcoding region of the cytochrome oxidase subunit I (COI) gene was sequenced using the LCO and HCO primers [21]. DNA extraction, polymerase chain reaction, and sequencing followed the protocols of [22]. The aligned sequence matrix of 612-bp length was subjected to a maximum-likelihood phylogenetic analysis on the IQ-TREE web server [23] using the default settings. Newly obtained sequences have been deposited in DDBJ under accession numbers LC424114-LC424132 and in BOLD under the project EPICE. Behavioral observation The behavior of Epicephala moths was studied in the field during 1800-2200 h to determine whether any species is an active pollinator.
We patrolled plant individuals with flowering branches at night using flashlights, and whenever Epicephala moths were found visiting female flowers, we recorded whether each moth displayed pollination behavior prior to oviposition and the location of egg deposition. The moths were collected after observation, although some attempts to collect the moths were unsuccessful. The proboscises of the sampled moths were observed under a dissecting microscope for pollen load, and genital dissections were made to identify the species. (Fig 2 caption, in part: a field-collected female E. anomala covered with Phyllanthus pollen; (i) a young fruit with Epicephala oviposition scars, with two front tepals artificially removed; (j) developed fruits, one of which has an exit hole excavated by an Epicephala larva. https://doi.org/10.1371/journal.pone.0210727.g002) In addition, non-Epicephala insects that visited the flowers were recorded and captured whenever possible. Each insect specimen was inspected under a dissecting microscope for pollen load to determine whether it could potentially contribute to pollination. Pollen and egg loads on female flowers Although direct observation of pollination behavior provides straightforward evidence that a species is an active pollinator, observations were not sufficient for the Epicephala associated with P. graveolens and P. huallagensis. We therefore examined the female flowers of the two species for pollen and egg loads under a dissecting microscope. If the majority of pollinated flowers are infested with moth eggs, and unpollinated flowers are free of moth eggs, it is likely that the associated Epicephala pollinates before oviposition and acts as the sole pollinator. In turn, if some fraction of pollinated flowers is free of moth eggs, a co-pollinator may be present, regardless of whether the associated Epicephala actively pollinates. We examined whether there is such an association between pollen and egg loads to compensate for the lack of direct observations in P. graveolens and P. huallagensis. Phylogenetic positions of New World Epicephala To infer the phylogenetic positions of New World Epicephala relative to the Old World species, we analyzed the sequences of the combined mitochondrial COI, nuclear elongation factor 1-alpha (EF1α), and arginine kinase (ArgK) genes. We used the primers and laboratory protocols described in [22] and [24] to obtain EF1α and ArgK sequences for one representative individual of each New World Epicephala species, chosen haphazardly from the individuals for which COI was sequenced as described above. The obtained sequences were analyzed together with published COI, EF1α, and ArgK sequence data for 29 Old World Epicephala and the related Conopomorpha flueggella (associated with Flueggea). Sequences of Cuphodes diospyrosella, Stomphastis labyrinthica, and Melanocercops ficuvorella, obtained from the database, were used as outgroups. Phylogenetic analysis was done using IQ-TREE as described above for COI. Newly obtained EF1α and ArgK sequences have been deposited in DDBJ under accession numbers LC424133-LC424142. Nomenclatural acts The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new names contained herein are available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN.
The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix "http://zoobank.org/". The LSID for this publication is: urn:lsid:zoobank.org:pub:7B78433F-07E2-40ED-BAE1-C083AF3304C2. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central, LOCKSS. Species description Morphological examination of the adults that emerged from the fruits of four woody and three herbaceous Phyllanthus resulted in recognition of five Epicephala species (Fig 6). Four of them were each specific to one of the four woody Phyllanthus hosts, whereas the fifth species was broadly associated with the three herbaceous species. Analysis of the COI sequences confirmed the existence of five distinct clades, corresponding to morphologically recognized species (S1 Fig). Intraspecific pairwise sequence variation was within the range of 0-0.16%, whereas pairwise sequence variation between species exceeded 2.7%. Below we describe the five Epicephala species. Detailed description of morphology is provided in S1 Text. Epicephala anomala Kawakita & Kato, sp. nov. urn:lsid:zoobank.org:act:171DA3E5-2318-4A01-90FD-5684CEA33F83 Diagnosis. This species and the following Epicephala acuminatella are unlike any other Epicephala species in having sclerotized blade on the posterior side of cucullus and elongated inward projection on dorsal margin of cucullus. This species can be clearly distinguished from E. acuminatella by the broader and overall straight shape of the sclerotized blade on cucullus and distinctly narrower lamella postvaginalis. Description. Morphological description is provided in S1 Text. Diagnosis. This species is morphologically similar to E. nudilingua in having forked lamella postvaginalis but differs from the latter in lacking well developed cornutus and more rectangular sacculus. Description. Morphological description is provided in S1 Text. Adult Epicephala behavior and other floral visitors The flowers of the four woody Phyllanthus species were not always available during our fieldwork, so the number of occasions on which we encountered adult Epicephala was limited. Nevertheless, we were able to successfully observe the behavior of three moths on each of P. salviifolius and P. acuminatus, one moth on P. graveolens, and two moths on P. amarus. We also observed one moth in the act of oviposition on P. huallagensis, but whether this moth pollinated the flower prior to oviposition is unclear. Below we provide details of the behavior of each moth species. A summary of the results is given in Table 1. We encountered the flowering of P. salviifolius once in October 2014. There were numerous male flowers, but many female flowers had already been pollinated and started to develop into fruits. We observed three female individuals of E. anomala visiting such flowers with developing ovaries. After finding suitable female flower in which to oviposit, the moths bended the abdomen and inserted the ovipositor into the ovary through the interspace between the tepals or by penetrating the tepals directly with the ovipositor, without exhibiting pollination behavior ( Fig 2G; S1 Video). We observed more than 10 such oviposition events in each of two moths and three such ovipositions in one moth. 
Perplexingly, however, after capturing the three moths and inspecting their proboscises under a microscope, we found that their proboscises were coated with Phyllanthus pollen in a manner very similar to the proboscises of field-collected individuals of actively pollinating species. Thus, they had apparently visited male flowers before visiting female flowers and actively collected pollen. In addition, we observed one moth stretching the proboscis and inserting it into the flower several times before oviposition, which we regard as active pollination (S2 Video). This was observed in only one of >10 oviposition events by this moth, so active pollination occurred only occasionally in E. anomala. No other flower visitors were observed except one thrips individual that was found on a male flower during the daytime. The behavior of adult E. acuminatella on P. acuminatus was observed three times; twice in 2013 and once in 2014. Oviposition was observed twice in one moth and once each in the other two moths. On all occasions, the moths laid eggs in young fruits, but not in female flowers, without exhibiting pollination behavior (Fig 3F). However, the two moths that were successfully captured both had their proboscises coated with Phyllanthus pollen, indicating that they had actively collected pollen before visiting female flowers. We collected one additional female that was resting on P. acuminatus foliage, and this individual also had pollen on the proboscis. Both male and female flowers of P. acuminatus were visited frequently by gall midges in the evening (Fig 3D and 3E). They pushed their mouthparts against the floral discs presumably to take in nectar and had many Phyllanthus pollen grains attached to their bodies (Fig 3D). No other flower visitors were found either during the daytime or in the evening. Gall midges were found resting on flowers or leaves of P. acuminatus during the daytime. We studied the flowers of P. graveolens in November 2015 and observed one adult Epicephala on a female flower. Unlike E. anomala or E. acuminatella that laid eggs in young fruits, this Epicephala visited a female flower, pollinated the stigma with the proboscis and subsequently laid an egg (Fig 4E). Nine pollen grains were attached to the stigma of this flower and one egg was laid internally on the ovary wall. We failed to capture this moth, but inspection of the photograph taken of this moth while resting on the branch shows that the proboscis was dusted with pollen (Fig 4F). Thus, we consider this moth an active pollinator, similar to those typically found on Old World leafflowers. Because the moth was not captured, we could not confirm the species morphologically, but DNA barcoding of the egg indicated that the moth is E. graveolensella. In addition to Epicephala, thrips were commonly found inside both male and female flowers (Fig 4C and 4D). Both juvenile and adult thrips were found, indicating that they use P. graveolens flowers as brood sites. Pollen grains were invariably attached to the 55 adult thrips individuals collected haphazardly on male flowers, and, out of 36 adult thrips individuals sampled on female flowers, two had pollen on their bodies, suggesting that thrips may act as co-pollinators. On P. huallagensis, one moth was found in the act of oviposition (Fig 4K), but because this moth was the only individual that we could observe, we were unable to assess whether pollination takes place prior to oviposition.
We also failed to capture this moth and thus could not inspect its proboscis microscopically, but the photograph shows that the proboscis is coated with pollen (Fig 4L). Thirty-one pollen grains were attached to the stigma of the flower visited by this moth, and one egg was laid internally on the ovary wall. Using DNA barcode, we confirmed that the moth is E. huellagensiella. No other flower visitors were found during the observation, although we found two thrips individual, without pollen, in one of 206 female flowers that we dissected. Finally, two E. chancapiedra individuals were observed on P. amarus. Both individuals neither pollinated female flowers nor carried pollen on the proboscises, and laid eggs in young fruits (Fig 5G). Thus, this species is likely a non-pollinating parasite, similar to Epicephala associated with herbaceous Phyllanthus in the Old World. Ants and stingless bees were frequent visitors to P. amarus flowers, which likely contributed to pollination. Proboscis morphology The proboscises of the five Epicephala species were observed under a microscope to determine whether sensilla are present. The sensilla were clearly present on the proboscises of E. acuminatella, E. graveolensella, and E. huellagensiella, but absent in those of E. anomala and E. chancapiedra (Fig 7). Pollen and egg loads on female flowers Because behavioral observations were not sufficient for E. graveolensella and E. huallagensiella, we examined the female flowers of P. graveolens and P. huallagensis to determine whether there is association between pollination status and presence of moth eggs. In P. graveolens, we inspected 173 female flowers, of which 112 were pollinated. Out of the 112 pollinated flowers, only 64 (57.1%) received Epicephala eggs ( Table 2), indicating that co-pollinators, such as thrips, may be responsible for the pollination of the remaining flowers that had not received moth eggs. An alternative possibility is that Epicephala moths pollinated most of the flowers but failed to lay eggs in some of them (loss of eggs subsequent to oviposition is unlikely because moth ovipositions cause visible scars on ovaries which were not observed on flowers without eggs). We also examined 206 female flowers of P. huallagensis and found 154 that were pollinated. Of these, 130 (84.4%) had moth eggs (Table 2), so the contribution of co-pollinators, if any, is smaller than it is in P. graveolens. No moth egg was deposited on 61 and 52 unpollinated flowers of P. graveolens and P. huallagensis (Table 2). Phylogenetic position of New World Epicephala Maximum-likelihood analysis of the combined COI + EF1α+ ArgK gene sequences indicated that the four Epicephala species associated with woody Phyllanthus form a well-supported clade, and this clade is embedded among lineages of Old World active pollinators (Fig 8). In turn, E. chancapiedra grouped with Old World Epicephala species associated with herbaceous Phyllanthus (Fig 8). Because Old World Epicephala on herbaceous Phyllanthus all lack pollination behavior, the ancestor of E. chancapiedra likely did not possess the pollination behavior by the time it colonized the New World. Discussion Although the presence of Epicephala in the New World has been implicated based on the presence of Epicephala-like larvae and pupal cocoons on herbarium specimens of Neotropical Phyllanthus [25], this is the first report of any Epicephala from the New World. 
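As an aside on the pollen and egg load data summarized above: the original study reports only the raw counts of Table 2 and does not describe a formal statistical test, but the association between pollination status and egg presence can be checked with a standard contingency-table test. The following minimal sketch (assuming Python with SciPy, neither of which is used in the paper) applies Fisher's exact test to the quoted counts.

```python
# Illustrative only: the study reports raw counts (Table 2) but no formal test.
# The 2x2 tables below use the counts quoted in the text.
from scipy.stats import fisher_exact

# Rows: pollinated / unpollinated flowers; columns: moth egg present / absent.
tables = {
    "P. graveolens":   [[64, 112 - 64], [0, 61]],    # 173 female flowers inspected
    "P. huallagensis": [[130, 154 - 130], [0, 52]],   # 206 female flowers inspected
}

for species, table in tables.items():
    # With a zero cell (no eggs on unpollinated flowers) the odds ratio is infinite,
    # so the p-value is the informative quantity here.
    _, p_value = fisher_exact(table, alternative="two-sided")
    print(f"{species}: Fisher exact p = {p_value:.3g}")
```

Both tables yield very small p-values, consistent with the reading that flowers carrying moth eggs are essentially always pollinated; this does not by itself distinguish pollination by the moth from selective oviposition in already-pollinated flowers, which is why the behavioral observations remain decisive.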
The diversity of Phyllanthus in Peru is not as high as in other regions of the Neotropics; however, we found species-specific Epicephala species on each of the four woody Phyllanthus that were studied and one Epicephala species associated with three herbaceous Phyllanthus. Considering that there are ca. 250 Phyllanthus species throughout the New World, there is probably a considerable diversity of Epicephala remaining to be found. Epicephala chancapiedra was associated with herbaceous Phyllanthus species belonging to two distantly related subgenera (Conami and Swartziani). This species is thus likely capable of attacking various herbaceous Phyllanthus species and may be found on other herbaceous Phyllanthus in other parts of the Neotropics. Molecular phylogenetic analysis indicated that the four moth species associated with woody Phyllanthus are descendants of Old World active pollinators. We observed adult females carrying Phyllanthus pollen on the proboscises in all the four species, and pollination in at least two species (E. anomala and E. graveolensella). Sensilla on the female proboscis were also confirmed in three species (E. acuminatella, E. graveolensella, and E. huallagensiella). These findings indicate that actively pollinating Epicephala moths, which originated in the Old World, successfully colonized the New World and remained as active pollinators as they diversified on New World Phyllanthus. It is unknown how Epicephala moths were able to colonize a remote continent, but the presumably young age of the genus (ca. 25 Ma [12]) suggests that colonization most likely occurred by means of transoceanic dispersal. Simultaneous dispersal of plants and moths (as larvae inside fruits) is unlikely because even if it did occur, emerged adult moths would not survive until the plants become mature enough to flower. It is, however, not yet conclusive whether any Phyllanthus-Epicephala association in the New World is an obligate mutualism (Table 3). In P. graveolens and P. huallagensis, flowers that had moth eggs were always pollinated (Table 2), suggesting either that the associated moths pollinate flowers before oviposition, or that the moths are capable of distinguishing pollination status and selectively lay eggs in pollinated flowers without pollinating themselves. We consider that the former is more likely because we observed at least one E. graveolensella actively pollinate prior to oviposition. However, the P. graveolens-E. graveolensella association may not be obligate because 42.9% of pollinated flowers did not have moth eggs, indicating that co-pollinators may be present. Phyllanthus huallagensis may be more dependent on Epicephala for pollination because there was a strong association between pollination status and egg load (Table 2), at least in the La Florida population during the period of our fieldwork. The imbricate tepals surrounding the anthers and styles in P. graveolens and P. huallagensis resemble those of Epicephala-pollinated Phyllanthus in the Old World, which hints at the possibility that their flowers are specialized for Epicephala pollination. It is also unknown whether there is a net positive effect for the plant of being associated with Epicephala, because haphazard dissections of fruits with single exit holes indicated that a single moth larva consumes all six seeds in each fruit in both P. graveolens and P. huallagensis.
Such a pattern of destructive seed consumption is also the case for Epicephala-pollinated Phyllanthus in New Caledonia, but moth mortality at the immature stage ensures that some seeds remain intact in this case [26]. In any case, actively pollinating Epicephala species likely contribute to the pollination of Phyllanthus in the New World, but to what extent Phyllanthus plants are dependent on Epicephala requires further ecological study. We also observed the adult behaviors of E. anomala and E. acuminatella, but assessing whether they have positive effects on the reproduction of host Phyllanthus is not straightforward. Although both species laid eggs in young fruits, and thus most likely did not contribute to pollination, all the female moths that were examined carried pollen on the proboscises, suggesting that they have the potential to act as pollinators. Because we observed one instance of an E. anomala moth pollinating a P. salviifolius flower (although the flower had been pollinated and started developing into a fruit), one possibility is that E. anomala pollinates only occasionally. The majority of P. salviifolius female flowers that we observed during the study were already pollinated, but E. anomala may pollinate more frequently in situations where there are more virgin flowers, although this requires that the moth can discriminate between pollinated and unpollinated flowers. However, E. anomala lacks the sensilla on the proboscis entirely, which we regard as an indication that the species is evolving toward being parasitic or already is parasitic. The sensilla on the proboscis of E. acuminatella are also not as well developed as in other actively pollinating species in Asia (Fig 7) [20], so selection to retain pollination behavior may be relaxed in this species as in E. anomala. Epicephala anomala and E. acuminatella are closely related on the phylogeny, and thus we hypothesize that there has been a reversal to a less mutualistic lifestyle in the common ancestor of the two species (Fig 8). We did not observe any alternative pollinator in P. salviifolius, but in P. acuminatus, gall midges may act as the primary pollinator. Overall, the finding of Epicephala in the New World opens up a new avenue of research on the diversity and evolution of the leafflower-leafflower moth association. A large number of Epicephala species is likely to be found throughout the New World, especially in regions of high Phyllanthus diversity such as Cuba and Venezuela. Phyllanthus species in the sections Epistylium, occurring in the Caribbean, and Microglochidion, restricted to the tepuis of the Guiana Highlands, have non-bifid and fused styles reminiscent of those of Glochidion in Asia; thus, obligate mutualistic relationships, like those found in the Old World, may be widespread in some regions or taxonomic groups. On the other hand, some species of New World Phyllanthus possess flowers that are the most unusual of all Phyllanthus, such as P. orbicularis with large, white, showy tepals, or P. arbuscula with brightly red flowers borne on flattened, photosynthetic branches (phylloclades). These species likely have entirely different pollination systems, so it is of high interest to clarify how interactions with Epicephala and other pollinators have driven the evolution of floral diversity and led to the remarkable radiation of Phyllanthus in the New World.
Blue lines indicate lineages with parasitic lifestyles, including species that either entirely lack the pollination behavior or show pollination behavior only occasionally [9,12]; green lines indicate lineages with active pollination behavior. Lineages occurring in Asia/Oceania, Africa, and the New World are indicated by beige, turquoise, and yellow boxes, respectively. Each terminal label includes information on host plant species. Abbreviations are: E., Epicephala; P., Phyllanthus; G., Glochidion; B., Breynia; F., Flueggea. Numbers at nodes are bootstrap probability values based on 1,000 replications. Major evolutionary events are indicated for selected nodes. Epicephala anomala and E. acuminatella are conservatively labeled as active pollinators, but there is a possibility that they are parasitic (see text for discussion). https://doi.org/10.1371/journal.pone.0210727.g008 S2 Video. Female Epicephala anomala exhibiting active pollination behavior before attempting to oviposit into Phyllanthus salviifolius ovary. Ovipositor of this moth was blocked by the tepal, and oviposition was not successful. The female is the same individual as in S1 Video.
7,470.4
2019-01-30T00:00:00.000
[ "Biology", "Environmental Science" ]
Histone Deacetylation Inhibitors as Therapy Concept in Sepsis Sepsis is characterized by dysregulated gene expression, provoking a hyper-inflammatory response occurring in parallel to a hypo-inflammatory reaction. This is often associated with multi-organ failure, leading to the patient’s death. Therefore, reprogramming of these pro- and anti-inflammatory, as well as immune-response genes which are involved in acute systemic inflammation, is a therapy approach to prevent organ failure and to improve sepsis outcomes. Considering epigenetic, i.e., reversible, modifications of chromatin, not altering the DNA sequence as one tool to adapt the expression profile, inhibition of factors mediating these changes is important. Acetylation of histones by histone acetyltransferases (HATs) and initiating an open-chromatin structure leading to its active transcription is counteracted by histone deacetylases (HDACs). Histone deacetylation triggers a compact nucleosome structure preventing active transcription. Hence, inhibiting the activity of HDACs by specific inhibitors can be used to restore the expression profile of the cells. It can be assumed that HDAC inhibitors will reduce the expression of pro-, as well as anti-inflammatory mediators, which blocks sepsis progression. However, decreased cytokine expression might also be unfavorable, because it can be associated with decreased bacterial clearance. Introduction Sepsis is a major cause of patients' deaths in intensive care units (ICUs) [1]. It is characterized by organ failure caused by severe infection. One reason for sepsis to occur is a compromised immune system which cannot adequately combat infectious pathogens [2,3]. Sepsis is known as a biphasic disease, first characterized by a hyper-inflammatory phase where high levels of pro-inflammatory cytokines provoke an excessive inflammatory response [4,5]. To limit inflammatory events, a second, hypo-inflammatory phase associated with an immunosuppressive phenomenon follows [6,7]. In septic patients, these two phases can occur in parallel, with a pro-inflammatory predominance at the beginning, changing to an anti-inflammatory prevalence at later time-points [8]. The anti-inflammatory stage is accompanied by T-cell depletion, contributing to immune paralysis [7,9]. This reduced immune status is often reflected by the patients' predisposition to secondary infections, commonly accompanied by rehospitalizations [10]. Therefore, understanding the mechanisms leading to this immunosuppressed state is mandatory. An impaired immune response as one sequelae of previous sepsis is believed to be a major contributing factor in delayed patients' deaths [11]. Considering transcriptional regulation of gene expression as the main factor controlling the proand anti-inflammatory phenotype of immune cells, it is obvious that altering underlying mechanisms may affect septic outcomes. One prerequisite of transcription is an open chromatin structure designated as "euchromatin" [12]. This allows the recruitment of transcription factors and RNA polymerases Although HATs and HDACs modify histones, resulting in changing chromatin, i.e., nucleosome structure, this itself is no epigenetic regulation [27]. Epigenetics requires a kind of memory, which is heritable, self-perpetuating, and reversible, and does not alter the DNA sequence [28]. Based on this prerequisite, epigenetic alterations should persist over a longer period. 
In line with this, the maintenance of the epigenome has been shown to persist through DNA replication and cell division [29]. It is worth mentioning that histone modifications, e.g., acetylations, not only loosen the chromatin structure of the DNA, but additionally provide a new binding motif for factors such as protein modification readers. One such family of readers comprises the bromodomain (BRD) and extraterminal domain (BET) proteins, which specifically recognize and bind to acetylated lysine residues on histones [30]. These proteins detect histone acetylations in chromatin, bind to them, and recruit co-factors, transcription factors, and RNA polymerase II to the DNA to modulate gene expression. Besides the BET readers, HATs such as CBP and p300 also hold a bromodomain. This means HATs can bind to already acetylated lysine residues, which further enhances their acetylase activity toward histone lysines and allows the recruitment of co-factors as well. Finally, transcription is initiated. Binding of 14-3-3 proteins sequesters the class IIa HDACs in the cytosol, consequently inhibiting their deacetylating function. HDAC6 and 10, belonging to class IIb, are mainly localized in the cytoplasm. HDAC11 has some sequence similarities with HDACs of class I and II, and thus can be found in the cytosol as well as in the nucleus. A special role for the sirtuins has already been suggested because of their special co-factor NAD+ from the Krebs cycle in mitochondria, which links the class III HDACs to metabolism. Sirtuins localize to the cytosol and nucleus, as well as to mitochondria. Based on these differences in intracellular localization, HDACs have diverse target proteins, which consequently are not exclusively histones. Here, we focus on the HDACs which deacetylate histones, which belong mainly to the classical family. Characteristics of all four classes of HDACs are summarized in Table 1. [Legend to Figure 1: The nucleosomes are closely packed, leading to a transcriptionally inactive state, the heterochromatin. Following acetylation of the ε-amino groups of lysine residues of histones by HATs (the writers), the nucleosome structure is loosened, which enables transcription factors and RNA polymerase II to bind to the DNA and thus initiates transcription. HDACs, also recognized as erasers, deacetylate lysine residues of histones, thus counteracting HAT activity and provoking a denser chromatin structure not allowing transcription. (Ac, acetylated; HATs, histone acetyl transferases; HDACs, histone deacetylases; K, lysine; Pol II, RNA polymerase II; TF, transcription factor).] HAT and HDAC Activities in Sepsis Taking the tremendous changes in gene expression during sepsis initiation and progression into consideration [31][32][33], it is obvious that epigenetic changes contribute to the gene expression profile found in septic patients. Here, it is interesting to differentiate between gene silencing and gene activation mechanisms. The latter one can be triggered by HATs, and the former one is initiated by HDACs. As shown by Warford et al. in autopsies of the brains of sepsis patients, expression of HDAC6 was enhanced [34]. As depicted in Figure 2, in the healthy situation HAT and HDAC activities are well-balanced. This pattern is changed when, at the beginning of sepsis, an overwhelming expression of pro-inflammatory mediators requires HAT activity to open the chromatin structure for effective transcription of pro-inflammatory genes, such as TNFα, IL-1β, or iNOS [35].
Strikingly, this process is counteracted by the HDACs, which are in part induced and activated by bacterial compounds [36], leading to chromatin reconstitution closely connected to gene silencing [37], which is consequently associated with immunosuppression [38,39]. Figure 2. Based on an initial hyper-inflammatory phase, followed by a hypo-inflammatory response which then in part occurs in parallel, epigenetic regulation of gene expression is expected. Thus, compared to the healthy situation (A), epigenetics will be out of control due to increased HAT activity in the hyper-inflammatory phase (B), enhancing expression of pro-inflammatory genes, and a rise of HDAC-dependent deacetylations (C), silencing pro-inflammatory gene expression. Polymicrobial Sepsis Mouse Models to Elucidate Epigenetic Mechanisms Taking a closer look at mechanisms involved in gene activation and gene silencing, mouse models especially have been used. The polymicrobial sepsis model initiated by cecal ligation and puncture (CLP) was shown to be associated with HDAC6 activation. In line with this, HDACi improved sepsis progression [40][41][42]. Mechanistically, HDAC6, mainly located in the cytosol, has been shown to associate during sepsis with HDAC11 in the nucleus in antigen-presenting cells, inducing IL-10 expression [43,44]. In the control situation, HDAC11 prevents IL-10 expression. Accordingly, the pan-HDAC inhibitor LAQ824, a hydroxamic acid analogue, induces several chromatin changes in macrophages, which leads to enhanced HDAC11 recruitment to the IL-10 promoter in Balb/c mice [45]. Also, other HDACis belonging to the hydroxamic acid family of compounds, which all are pan-HDACis, such as panobinostat and TSA, inhibited IL-10 production in peritoneal elicited macrophages (PEM) following LPS stimulation [45]. The more specific HDACi MS-275, inhibiting class I HDACs, did not effectively prevent IL-10 expression. In parallel, LAQ824 enhanced LPS-mediated expression of pro-inflammatory cytokines, such as TNFα, IL-6, IL-1α/β, and RANTES [45]. Considering these alterations, it is obvious that HDACis shape the expression profile of macrophages toward a pro-inflammatory expression pattern. Interestingly, as a possible consequence of this shift, LAQ824-treated PEM effectively prime naïve antigen-specific T-cells. Moreover, anergic T-cells recover responsiveness. Considering the biphasic nature of sepsis, i.e., a hyper-inflammatory vs.
a hypo-inflammatory response, the latter one, especially, might be effectively improved by HDACis. One characteristic of hypo-inflammation is immune-paralysis, mainly mediated by T-cell depletion. This leads to an inappropriate immune response toward the initial, or a new second infection [46,47]. Therefore, recovery of T-cell function by inhibiting T-cell apoptosis and preventing anergy will improve septic outcomes [9,48]. Although HDACis have already been clinically approved, this therapy approach focuses only on tumor treatment [47,49] and a spectrum of other diseases [50]. Up to now, there have been no clinical trials listed using HDACis to treat sepsis. However, HDACis have already been used concerning their effect on parasite growth, such as Plasmodium, Leishmania, and Schistosoma [51], as well as to prevent human immunodeficiency virus (HIV) latency [52,53]. In general, epigenetic manipulations are considered to have therapeutic potential in infectious diseases [54]. Most importantly, the correct moment in sepsis onset and progression to inhibit HDACs has to be found. As shown in Figure 2A, the balance between HDACs and HATs is important to guarantee an appropriate immune response. Any alteration leading to a predominance of either HATs ( Figure 2B) or HDACs ( Figure 2C) is associated with corresponding epigenetic modifications, such as gene activation or gene silencing. In the sepsis situation, gene activation is mainly valid in the hyper-inflammatory phase, whereas gene silencing occurs particularly during immune paralysis in the hypo-inflammatory response. Taking this together, it is obvious that epigenetic regulation is an important mechanism during sepsis progression. Endotoxemia and LPS Treatment of Cells to Mimic Epigenetic Alterations in Sepsis Besides polymicrobial sepsis models, such as CLP, colon ascendens stent peritonitis (CASP), or peritoneal cavity infection (PCI) [55], endotoxemia by a LPS challenge is an important model mimicking sepsis-like symptoms in animals. LPS treatment is a more controllable model, which can be used to understand underlying principles leading to sepsis-dependent cellular modifications [56]. Besides animal models that are also cellular in vitro models, focusing on the role of macrophages is used to understand the role of HATs, HDACs, and HDACis in sepsis. In bone-marrow-derived macrophages, Aung et al. found that LPS regulates pro-inflammatory gene expression in macrophages by altering histone deacetylase expression [57]. In this study, the authors observed that LPS transiently repressed expression of HDACs 4, 5, and 7, followed by an induction of these HDACS, which was more rapid, concerning HDAC-1 mRNA [57]. Recently, Wu et al. described the crucial role of HDAC2 in LPS-dependent inflammatory activation of macrophages [58]. The expression of HDAC2, belonging to class I of HDACs, is enhanced following macrophage stimulation with LPS. Knockdown of HDAC2 reduces expression of pro-inflammatory genes IL-12, TNF-α, and iNOS [58]. This is in line with the work of Somanath et al. [59,60], showing a similar effect after CRISPR/Cas9-mediated HDAC2-disruption. Moreover, adoptive transfer of macrophages with a HDAC2 knockdown to mice diminishes their inflammatory response to LPS and E. coli [58]. Mechanistically, HDAC2 reduced c-Jun expression by directly binding to its promoter. There, acetylation of histones is removed, leading to compact nucleosome formation and, consequently, to gene-silencing following LPS-treatment. 
Considering LPS tolerance or cellular reprogramming as a mechanism associated with endotoxemia, it is interesting that the gene expression signature characteristic for endotoxin tolerance was also found in patients during the early onset of sepsis [61]. This is especially important, because endotoxin tolerance has been assumed to be mediated in part by epigenetic alterations, also termed "trained immunity" [62,63]. HDAC3 has been found to be required for the inflammatory gene expression program in macrophages [64]. In macrophages which do not express a functional HDAC3, roughly 50% of the pro-inflammatory genes in response to LPS were not expressed [64]. Interestingly, this was mediated in a large part by the loss of basal and LPS-dependent expression of IFNβ, suggesting the involvement of STAT1 as a contributing transcription factor. Also, HDAC7 seems to be involved in TLR4-dependent pro-inflammatory gene expression. As shown by Shakespear et al., HDAC7 promotes pro-inflammatory gene expression in mouse macrophages following LPS treatment [65]. HDAC7 was elevated in PEMs compared to untreated BMDMs. Mechanistically, HDAC7 seems to link LPS signaling with HIF-dependent transactivation [65]. Glucocorticoids as Epigenetic Regulators in Sepsis Considering sepsis as a mainly catabolic condition, Alamdari et al. observed that, during sepsis in rats, expression and activity of HDAC 6 was downregulated in skeletal muscle, whereas HAT p300 expression was upregulated [35]. Mechanistically, the glucocorticoid receptor antagonist RU38486 reversed this expression change. In line with this, treatment of the rats with dexamethasone significantly enhanced the expression of p300 and reduced expression of HDAC6 [35]. For further analogy, Yang et al. (2007) demonstrated that proteolysis of cultured myotubes was induced by dexamethasone [67]. In cultured L6 myotubes, dexamethasone induced increased nuclear localization of p300 and downregulated expression of HDAC3 and 6. Role of Sirtuins in Sepsis Sirtuins, i.e., class III HDACs, are largely uninvolved in histone deacetylation. Thus, other different roles have been defined. Among these other roles, HMGB1 hyperacetylation has been attributed to the function of SIRT1. This is a prerequisite for HMGB1 release from the cells. This process is also triggered by LPS stimulation, and is also valid in an animal model of polymicrobial sepsis [68]. Analogous to this work, Zhao et al. provided evidence that SIRT1-specific inhibition by EX-527 significantly improved survival of mice following CLP [69]. Moreover, expression of pro-inflammatory cytokines TNF-α and IL-6 in the blood and peritoneal fluid were reduced [69]. Interestingly, sepsis-dependent coagulopathy, as well as bone marrow atrophy, were reduced [69]. More obviously, a role of SIRTs has been proposed in immune-metabolism [70] or by long-noncoding RNA [71]. Interestingly, SIRT2 deficiency prevents chronic staphylococcus infection [72]. It has also been shown that acute kidney injury in a septic rat model is in part due to the reduced activation of SIRT1 and 3, giving rise to enhanced acetylated SOD2 levels, concomitant oxidative stress, and mitochondrial damage [73]. The chemical SIRT1 activator, resveratrol, restored SIRT1/3 activity and improved rat survival [73]. These data support the notion that members of the sirtuin family of HDACs mainly deacetylate proteins others than histones. In summary, the regulation of gene expression during sepsis requires the balanced function of HATs and HDACs [38,39]. 
An overshooting of both sides is deleterious, associated with a bad septic outcome. Taking this into consideration, altering the function of HDACs may be one new tool to restore appropriate gene expression and to maintain a functional adequate immune response. HDAC Inhibitors (HDACi) as Anti-Inflammatory Agents Taking the role of epigenetic modifications during sepsis initiation and progression into consideration, it is obvious that HDAC inhibitors (HDACi) will be effective in altering pro-and anti-inflammatory gene expression. Considering the broad range of unspecific, so-called "pan" HDAC inhibitors, and some more recently developed specific ones (as shown in Table 2), the role of HDAC inhibition could be determined. Initial studies have used the pan-HDAC inhibitors, SAHA (vorinostat) and trichostatin A (TSA) in various models of sepsis, as summarized in Table 2; these three compounds belong to two different chemical classes of HDAC inhibitors. Both compounds were effective in improving sepsis outcomes. Following CLP operation, the survival was improved in response to SAHA [74]. SAHA reduced TNF-α and IL-6 expression in LPS-endotoxemia [75]. Neuronal damage was also reduced by SAHA treatment of CLP-operated animals [76]. A similar protective role was shown with the HDACi TSA [76]. In LPS-dependent endotoxemia, acute lung injury and inflammation were reduced after the application of TSA [77]. This protective role was also evident in bone-marrow-derived macrophages (BMDM) by blocking DNA fragmentation, and reduced expression of pro-apoptotic genes [77]. In this cell type, TSA enhances LPS-dependent Cox-2, Cxcl2, and Ifit2 expression, whereas it blocks the expression of the LPS target genes, Ccl2, Ccl7, and Edn1 [57]. In the CLP model, TSA improved survival, reduced acute lung injury, and lowered the expression of TNF-α and IL-6. Moreover, expression of TLR2, TLR4, and the adaptor protein MyD88 were attenuated. Concomitantly, nuclear NF-κB was reduced [78]. In another study, the authors observed reduced plasma urea and creatinine, a decrease of CRP, less tubular damage, and reduced expression of MCP-1 and HDACs2/5. In line with this, H3Ac was enhanced [79]. Moreover, TSA reduced neutrophil infiltration, ICAM-1, and E-selectin expression [80], and reduced liver-damage markers, IL-10 expression, and MPO [81]. Interestingly, TSA blocks endotoxin tolerance induction as well [82]. Other pan-HDAC inhibitors, such as valproic acid [83] or butyric acid [80,84] were similarly effective in improving septic outcomes by reducing pro-inflammatory gene expression and concomitant reduced organ damage. However, unwanted side effects, such as enhanced toxicity, prevent their use in clinical trials [85]. The use of HDACi, which are specific for one class of HDACs or only one HDAC directly, is gaining more interest [86]. As seen in Table 2, currently, the HDAC6 inhibitor tubastatin A is particularly important. Likewise, showing a similar protective role, such as the pan-HDACi SAHA and TSA, only HDAC6 is inhibited [87][88][89]. Although SIRTs are barely involved in histone deacetylation, their specific inhibition improved the septic outcome in rodents as well. CLP-mediated damage was restored by the SIRT1-specific inhibitor EX-527 [90], and the SIRT2-specific inhibitor AGK2 [91]. With the use of more specific HDACi, there should be a more precise target which is affected. Side effects should therefore be minimized. However, up to now, there have been no clinical trials using HDACi to treat sepsis. 
Conclusions A therapy approach to fine-tuning gene expression by the activating or silencing of genes is a promising tool to overcome dysregulated gene expression, as observed in sepsis. Because accurate tuning of gene expression is mandatory, the development of new, more specific HDAC inhibitors is important. This will allow a direct and reversible change in gene expression, which is necessary to prevent sepsis progression and improve sepsis outcomes. Therefore, the use of HDAC inhibitors in clinical trials will be one major method in the near future for clarifying the impact of epigenetics during sepsis initiation and progression.
4,358.2
2019-01-01T00:00:00.000
[ "Biology", "Medicine" ]
IMPLICIT PARAMETRIZATIONS AND APPLICATIONS IN OPTIMIZATION AND CONTROL. We discuss necessary conditions (with fewer Lagrange multipliers), perturbations and general algorithms in nonconvex optimization problems. Optimal control problems with mixed constraints, governed by ordinary differential equations, are also studied in this context. Our treatment is based on a recent approach to implicit systems, constructing parametrizations of the corresponding manifold via iterated Hamiltonian equations. 1. Introduction. The paper discusses in some detail nonlinear programming problems, starting with the classical case of equality constraints, up to general problems involving inequality constraints and abstract constraints as well. We point out optimality conditions with no or fewer Lagrange multipliers and algorithms of gradient type or of a new type. This last case is developed for general optimization problems, under weak assumptions. An essential ingredient is the possibility to compute efficiently general nonlinear "projection" operators associated with the equality constraints, and their discretization. Our approach applies a modification of the standard dimension reduction method, based on the implicit parametrization technique [29] for general nonlinear systems in finite dimensional spaces. An important part is played by iterated systems of ordinary differential equations of Hamiltonian type. Although just continuity is assumed for their right-hand side, uniqueness and strong regularity properties of their solution can also be obtained, due to the special Hamiltonian structure. We also report on applications of similar ideas to optimal control problems governed by ordinary differential systems and involving general mixed constraints. The notion of admissible set of initial points plays a key role. The formulation of the problem and certain techniques are inspired by [26], where optimality conditions are investigated. Our analysis is devoted to a new algorithm, in this general setting. We underline that all the transformations and procedures that we use are easy to implement and need just standard routines from MatLab. Implicit parametrizations give a generalization of the implicit functions theorem in arbitrary finite dimensional spaces and have been recently developed in [29], [20], [21], [22]. Obtaining parametrizations is advantageous since they may provide a more complete description of the corresponding manifold and the approach has a useful constructive character. We give a brief review in the next Section 2, devoted to preliminaries. As a first short application, in the final part of Section 2, an example of perturbations is indicated, relevant in shape optimization and in free boundary problems. Section 3 is devoted to the study of general nonlinear programming problems and also provides numerical examples together with comparisons with the relaxation approach [28] or the fmincon routines in MatLab. Finally, the last section is devoted to optimal control problems with mixed constraints, governed by ordinary differential equations, and includes the new algorithm and an academic example. The efficiency of the proposed methods is apparent. We underline that our approach allows the use of maximal solutions in the ordinary differential systems defined in Section 2. In fact, in examples from Section 3 and from [20], or in a recent result from [33], these solutions are even global (periodic) and this ensures a global-like search in Alg. 2 and in Alg. 3.
In general, under our approach, it is easy to enlarge the search region, in the absence of critical points, simply by increasing the intervals where the parametrization from Section 2 is defined. For instance, this plays an essential role in the comparison with [28]. See Remark 2 as well. In this sense, the methods introduced here offer more information than usual local optimization algorithms. Another important point is that Alg. 2 and Alg. 3 generate the discrete admissible sets in two steps. First, the discrete sets corresponding to the equality constraints are constructed via the implicit parametrization method from Section 2. The other constraints are just checked on these discrete sets and the infeasible points are eliminated. This allows very weak assumptions on the inequality and the abstract constraints, and on the cost functional. The minimization is also performed directly and several solutions may be obtained simultaneously, in case the minimum point is not unique. In this section, we discuss the system (1), that is F_j(x) = 0, j = 1, l, with F_j ∈ C^1(Ω), Ω ⊂ R^d open and l < d, under the classical independence assumption, following [29]. To fix ideas, we assume that ∇F_1(x_0), . . . , ∇F_l(x_0) are linearly independent at a given solution x_0 of (1), (2) that is, some l × l submatrix of the Jacobian matrix of F_1, . . . , F_l is nonsingular at x_0. The hypothesis (2) can be dropped by using the notion of generalized solution of (1), according to [29]. See also [20], [30], [21], where several relevant examples are discussed. Clearly, condition (2) remains valid on a neighbourhood V ∈ V(x_0), V ⊂ Ω, under the C^1(Ω) assumption on F_j(·), j = 1, l, and we denote by A(x), x ∈ V, the corresponding nonsingular l × l matrix from (2). Without loss of generality, A(x) may be assumed to be in the left upper corner of the Jacobian matrix. We introduce on V the undetermined linear systems of equations with unknowns v(x) ∈ R^d: ∇F_j(x) · v(x) = 0, j = 1, l. (3) We shall use d − l solutions of (3) obtained by fixing successively the last d − l components of the vector v(x) ∈ R^d to be the rows of the identity matrix in R^{d−l} multiplied by ∆(x) = det A(x). Then, the first l components are uniquely determined, by inverting A(x), due to (2). In this way, the obtained d − l solutions of (3), denoted by v_1(x), . . ., v_{d−l}(x) ∈ R^d, are linearly independent, for any x ∈ V. They give a basis in the tangent space to the manifold defined by (1). Moreover, these vector fields are continuous in V as the ∇F_j(·) are continuous in V and Cramer's rule ensures the continuity of the solution of linear systems with respect to the coefficients. Other choices of solutions for (3), useful in this section, are possible (see Theorem 2.5). We introduce now d − l nonlinear systems of first order partial differential equations associated with the vector fields (v_j(x))_{j=1,d−l}, x ∈ V ⊂ Ω. Furthermore, we denote the sequence of independent variables by t_1, t_2, . . . , t_{d−l}. These systems have a nonstandard (iterated) character in the sense that the solution of one of them is used as initial condition in the next one. Consequently, the independent variables in the "previous" systems enter just as parameters in the next system, via the initial conditions. Due to their simple structure (one derivative in each equation), we stress that each subsystem (4), (5),..., (6), may be interpreted as an ordinary differential system in V ⊂ R^d (with parameters in (5),..., (6)), although partial differential notations are used: ∂y_1/∂t_1 (t_1) = v_1(y_1(t_1)), t_1 ∈ I_1, y_1(0) = x_0; (4) ∂y_2/∂t_2 (t_1, t_2) = v_2(y_2(t_1, t_2)), t_2 ∈ I_2(t_1), y_2(t_1, 0) = y_1(t_1); (5) . . . ; ∂y_{d−l}/∂t_{d−l} (t_1, . . . , t_{d−l}) = v_{d−l}(y_{d−l}(t_1, . . . , t_{d−l})), t_{d−l} ∈ I_{d−l}(t_1, . . . , t_{d−l−1}), y_{d−l}(t_1, . . . , t_{d−l−1}, 0) = y_{d−l−1}(t_1, . . . , t_{d−l−1}). (6) Here, the notations I_1, I_2(t_1), . . . , I_{d−l}(t_1, . . . , t_{d−l−1}) are the d − l local existence intervals (for each subsystem), containing 0 in their interior and depending, in principle, on the "previous" parameters t_1, . . . , t_{d−l−1}.
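To illustrate the construction just described (a sketch of ours, assuming NumPy; it is not code from [29] or from this paper, and the helper name tangent_fields and the sphere example are hypothetical), the fields v_1(x), . . . , v_{d−l}(x) can be assembled numerically from the Jacobian of F by fixing the last d − l components to ∆(x) times the rows of the identity matrix and solving for the first l components with A(x):

```python
# Illustrative sketch only (not code from the paper).
import numpy as np

def tangent_fields(jacobian, x):
    """Return the d - l vectors v_1(x), ..., v_{d-l}(x) of the construction above.

    `jacobian` is a callable returning the l x d Jacobian matrix of F at x, with the
    nonsingular l x l block A(x) assumed to sit in the left upper corner.
    """
    J = np.asarray(jacobian(x), dtype=float)       # shape (l, d)
    l, d = J.shape
    A, B = J[:, :l], J[:, l:]                      # J = [A | B]
    delta = np.linalg.det(A)                       # Delta(x) = det A(x)
    fields = []
    for k in range(d - l):
        tail = np.zeros(d - l)
        tail[k] = delta                            # last d - l components: Delta(x) * identity row
        head = np.linalg.solve(A, -B @ tail)       # first l components, chosen so that J v = 0
        fields.append(np.concatenate([head, tail]))
    return fields

# Example: one constraint in R^3, F(x) = x_1^2 + x_2^2 + x_3^2 - 1 (the unit sphere).
jac = lambda x: np.array([[2.0 * x[0], 2.0 * x[1], 2.0 * x[2]]])
for v in tangent_fields(jac, np.array([1.0, 0.0, 0.0])):
    print(v)    # two linearly independent tangent vectors, each orthogonal to grad F
```

The existence, uniqueness and regularity properties of the iterated systems driven by these fields are recalled next.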
The existence of the solutions y_1, y_2, . . . , y_{d−l} follows by the Peano theorem due to the continuity of the vector fields (v_j)_{j=1,d−l} on V. The Peano theorem also gives an evaluation of the intervals I_j. As a simple example for the system (4)-(6), we consider the case d = 3, l = 1 and the equation f(x, y, z) = 0 instead of (1). The condition (2) is assumed in the form f_x(x_0, y_0, z_0) ≠ 0, without loss of generality. Then, the system (4)-(6) has just two subsystems of dimension three (with obvious notations for the derivatives): x_{t_1} = −f_y(x, y, z), y_{t_1} = f_x(x, y, z), z_{t_1} = 0, with initial condition (x_0, y_0, z_0), (7) and x_{t_2} = −f_z(x, y, z), y_{t_2} = 0, z_{t_2} = f_x(x, y, z), with initial condition given by the solution of (7) at t_1. (8) The right-hand sides of these two subsystems satisfy (3) and the Hamiltonian structure is clear. We recall some basic properties following [29]. Theorem 2.3. a) Each of the iterated systems (4)-(6) has a unique solution. Remark 1. Note that for systems associated with divergence-free fields, the uniqueness results of [9] are valid under certain Sobolev type regularity conditions. However, under our hypotheses, we have just continuity in the right-hand side of the differential system (4)-(6) and [9] cannot be applied. For related results, see [4], [35]. b) The solutions of the systems (4)-(6) are of class C^1 at any existence point and we have ∂y_{d−l}/∂t_j (t_1, . . . , t_{d−l}) = v_j(y_{d−l}(t_1, . . . , t_{d−l})), j = 1, d − l. Under the hypothesis (2), the local solution of (1) is a d − l dimensional manifold around x_0. We expect that the mapping (t_1, . . . , t_{d−l}) → y_{d−l}(t_1, . . . , t_{d−l}) provides a parametrization of this manifold. Theorem 2.4. If F_k ∈ C^1(Ω), k = 1, l, with the independence property (2), and the I_j are sufficiently small, j = 1, d − l, then the mapping (t_1, . . . , t_{d−l}) → y_{d−l}(t_1, . . . , t_{d−l}) is regular and one-to-one on its image. We consider now another solution choice in (3). We shall use d − l solutions of (3) obtained by fixing the last d − l components of the vector v(x) ∈ R^d to be the rows of the identity matrix in R^{d−l}. The next result shows that we construct exactly the solution of the classical implicit functions theorem, which follows as a special case of our approach. Remark 2. We underline that, although Theorem 2.5 provides the classical solution of the implicit functions theorem, the constructed parametrization may be more advantageous in applications since it offers a more complete description of the corresponding manifold by removing the condition to use functions. For instance, a torus in R^3 cannot be described via one function, but one parametrization may perform this. One can use maximal solutions of (4)-(6) and, in many examples, the (local) maximal solution from Theorem 2.4 may give even a global description of the manifold, [20], [30]. In applications, the choice of other solutions of (3) is also possible and of interest [21], in order to improve the description of the manifold. Remark 3. Besides the existence statement, Theorem 2.5 gives a construction recipe for the implicit functions solution and an evaluation of its existence neighborhood (via Theorem 2.3), in the system (1), [29]. This may be compared with [5], [24] where other types of arguments are used. Consider finally, as an example, the special case of perturbations of the form F_j(x) + λ h_j(x) = 0, j = 1, l, λ ∈ R, (10) where h_j ∈ C^2(Ω), h_j(x_0) = 0. If, moreover, l = 1 and the equation F_1(x_1, . . . , x_d) = 0, F_1 ∈ C^2(Ω), together with the associated initial condition, represents the boundary of a subdomain in Ω (where F_1 < 0, for instance) then the geometric perturbation defined by (10) is called a functional variation, [19], and may be very complex, including topological and boundary perturbations of the initial domain [18], [12], [27]. Computing the equation in variations as in [30], the perturbations (10) generate a directional derivative in the implicit system (1).
Consequently, by the above geometric interpretation, we may define, for l = 1, a new type of geometric directional derivative of domains. This is more general than the speed method or the topological derivatives [18] and has applications in shape optimization, fixed domain methods, see [19], [31], in free boundary problems [13]. In the recent paper [33], this technique is exploited in combination with a penalization method. 3. Reduced gradients in nonlinear programming. In constrained optimization, projected gradient methods are a classical tool, but their application may be hindered by the difficulty to effectively compute projections on the admissible set, Ciarlet [6]. Based on the results from the previous section, we use here the dimension reduction approach to eliminate, totally or partially, the constraints (and the corresponding Lagrange multipliers). The optimality conditions are in a more effective form, due to the decreasing of the dimension. Local and global-like algorithms and numerical examples are also discussed, under weak assumptions. The elimination of certain unknowns has advantages at the computational level. In the recent papers [28], [17], dimensional reduction is obtained via new relaxation procedures associated to implicit functions. Our approach is certainly different and ensures good numerical results. In the case of polynomial and semialgebraic optimization, [14] Thm.6.5, Thm.7.5, in the setting of global optimization, a stronger constraint qualification is used. We consider first the classical minimization problem with equality constraints: (1). It is known that by Theorem 2.5 we can replace it (around x 0 ) by the unconstrained problem for (t 1 , t 2 , . . are the components of y d−l , the solution of (4)-(6), corresponding to this case. This methodology will be extended later to the general case of implicit parametrizations from Theorem 2.4. By Theorem 2.5, Theorem 2.3 and the chain rule, one easily obtains the (known) first order optimality conditions in the Fermat form, involving the tangential gradient of g (due to (3), the vectors v j in (11) give a basis in the tangent space to the equality constraints manifold M ): Proposition 1. Assume that g and F i , i = 1, l, are in C 1 (R d ) and (2) holds. If x 0 is a local solution of (P ), then we have: In fact, this is equivalent to the classical Lagrange multipliers rule, since due to (11), ∇g(x 0 ) is in the normal space to the manifold M , which has the basis given by ∇F i (x 0 ), i = 1, l. In this non convex setting we assume here that g ∈ C 1 (Ω) is bounded from below and we introduce the following algorithm of projected gradient type, based on the use of the tangential gradient, given in (11): Algorithm 1 1) choose n = 0, δ > 0 (a tolerance parameter) and let t n = (t n 1 , . . . , t n d−l ) be such that y d−l (t n 1 , . . . , t n d−l ) = x n in (4)-(6). 2) compute ρ n+1 ∈ [0, α n ] via the line search: 3) set: The constraints are as in (1) with hypothesis (2) satisfied in all the iterations x n . We denote by G(t) = g(y d−l (t)), defined in a neighborhood of the origin in R d−l and of class C 1 due to (4)-(6) and Theorem 2.3. The sequence {g(x n ) = G(t n )} is non increasing and convergent in this general setting, ensuring the convergence of the algorithm. The sequence {x n } is bounded. Moreover, we have ∇G(t n ) = [∇g(x n ).v j (x n )] j=1,d−l by Theorem 2.3 and the Algorithm 1 is in fact a transcription of the classical gradient method for the unconstrained problem (P 1 ). 
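To make the iteration concrete, here is a minimal sketch of an Algorithm 1-type reduced-gradient loop (assuming Python with NumPy/SciPy; the paper relies on MatLab routines and this is not the authors' implementation). It uses the torus constraint and the cost g(x, y, z) = xyz of Example 1 below, with the two Hamiltonian subsystems for d = 3, l = 1 serving as the "projection" step; the function names, the starting angles and the crude backtracking rule are all our own choices.

```python
# Illustrative sketch only (not the authors' code): Algorithm 1-type iteration for
# min { x*y*z } on the torus F(x, y, z) = (x^2 + y^2 + z^2 + 3)^2 - 16 (x^2 + y^2) = 0.
import numpy as np
from scipy.integrate import solve_ivp

def F(p):
    x, y, z = p
    return (x**2 + y**2 + z**2 + 3.0)**2 - 16.0 * (x**2 + y**2)

def gradF(p):
    x, y, z = p
    s = x**2 + y**2 + z**2 + 3.0
    return np.array([4.0 * s * x - 32.0 * x, 4.0 * s * y - 32.0 * y, 4.0 * s * z])

def g(p):
    return p[0] * p[1] * p[2]

def gradg(p):
    x, y, z = p
    return np.array([y * z, x * z, x * y])

# Hamiltonian-type tangent fields for d = 3, l = 1 (one admissible choice, v . gradF = 0).
def v1(_, p):
    Fx, Fy, Fz = gradF(p)
    return np.array([-Fy, Fx, 0.0])

def v2(_, p):
    Fx, Fy, Fz = gradF(p)
    return np.array([-Fz, 0.0, Fx])

def project(p, t1, t2):
    """'Projection' of Algorithm 1: flow along v1 for time t1, then along v2 for
    time t2; the result stays on F = 0 because both fields are tangent to it."""
    if t1 != 0.0:
        p = solve_ivp(v1, (0.0, t1), p, rtol=1e-10, atol=1e-10).y[:, -1]
    if t2 != 0.0:
        p = solve_ivp(v2, (0.0, t2), p, rtol=1e-10, atol=1e-10).y[:, -1]
    return p

# Admissible starting point: a generic point of the torus (major radius 2, minor radius 1).
theta, phi = 0.7, 0.9
x = np.array([(2.0 + np.cos(phi)) * np.cos(theta),
              (2.0 + np.cos(phi)) * np.sin(theta),
              np.sin(phi)])

for n in range(200):
    # Tangential (reduced) gradient: dG/dt_j = grad g(x) . v_j(x), j = 1, 2.
    dG = np.array([gradg(x) @ v1(0.0, x), gradg(x) @ v2(0.0, x)])
    if np.linalg.norm(dG) < 1e-6:
        break
    improved, rho = False, 1e-3
    while rho > 1e-12:   # crude backtracking line search (step 2 of Algorithm 1)
        trial = project(x, -rho * dG[0], -rho * dG[1])
        if g(trial) < g(x):
            x, improved = trial, True
            break
        rho *= 0.5
    if not improved:
        break

print("approximate stationary point:", x, " cost:", g(x), " residual |F|:", abs(F(x)))
```

Every iterate remains admissible because the fields v1 and v2 are tangent to the constraint manifold, so the ODE integration itself plays the role of the projection; the line search only has to decrease the cost, exactly as in step 2) of Algorithm 1.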
One can discuss other (very rich) variants of such local algorithms with their convergence (to stationary points, in general), under supplementary hypotheses if necessary, Bertsekas [2], Patriksson [23]. The new point in Algorithm 1 is that one can effectively and efficiently compute the "projection" y d−l via (4)-(6), by using standard routines in MatLab. Remark 4. The algorithm works practically in V , where the system (4)-(6) is defined and the parameter α n in the line search has to be chosen "small", such that we remain in V and the system (4)-(6) can be solved around t n , in Step 2). In Step 3) we perform the "projection" on the constraints manifold M ⊂ Ω. The points x n generated by this algorithm are always admissible for (P ). No convexity properties are assumed. The definition of (P 1 ) uses the implicit function Theorem 2.5 which is appropriate for optimality conditions, while for the Algorithm 1 the general implicit parametrization method has to be taken into account. The same is valid for the subsequent problem (Q 1 ) and the related results. We discuss now the general case of both equality and inequality constraints: where g, F i , G j are in C 1 (R d ). The Mangasarian-Fromovitz condition in this case consists of (2) and there is e ∈ R d such that with I(x 0 ) being the set of indices of active inequality constraints in x 0 . See [3], $ 2.3.4 or [7], $ 6 for excellent presentations. The necessary and sufficient metric regularity condition from [34] cannot be used here due to the lack of convexity. The reduced problem is again obtained via Theorem 2.5: (Q 1 ) M in{g(y 1 d−l , y 2 d−l , . . . , y l d−l , t 1 + x 0 l+1 , t 2 + x 0 l+2 , . . . , t d−l + x 0 d )}, subject to the constraints (12), in the "reduced" form: Lemma 3.1. The minimization problem (Q 1 ) satisfies the Mangasarian-Fromovitz condition in the origin of R d−l . Proof. By the first part in (13), we see that e is orthogonal to ∇F i (x 0 ), i = 1, l and belongs to the tangent space to the manifold (1) since ∇F i (x 0 ), i = 1, l is a basis in the normal space to the manifold given (1) If x 0 is a local solution of (Q), by Lemma 3.1, one can apply the classical KKT theorem, [6], to the problem (Q 1 ) in the origin of R d−l that becomes a local solution for (Q 1 ). Using again the derivation formula, we get: Theorem 3.2. Let x 0 be a local minimum for (Q). Then, there are β j ≥ 0, j = 1, m such that Remark 5. This is a simplified version of the KKT conditions since it eliminates the Lagrange multipliers for the equality constraints. It is possible, at least in principle, to eliminate completely the Lagrange multipliers: if x 0 is a local solution of problem (Q), then one can remove the inactive inequality constraints at x 0 . This is a consequence of the remark that the inequality constraints that are not active at x 0 define a neighborhood of x 0 . The minimum property of x 0 is preserved in this neighborhood, just under the equality constraints supplemented by the active constraints rewritten as equalities. Under the independence condition for all these constraints, one can write optimality conditions as in the Proposition 1. We relax now the hypotheses in the problem (Q) and we describe a direct globallike minimization algorithm. It looks for the solution in a maximal neighborhood of x 0 , corresponding to the maximal solutions of the subsystems in (4) -(6) (the maximal existence intervals may depend on the respective initial conditions). See Remark 2 and [20], [30]. 
We assume in the sequel less regularity: g and G_j, j = 1, m, are just in C(R^d) and F_i, i = 1, l, are in C^1(Ω) and satisfy condition (2) in x_0. This last condition can in fact be removed, working with generalized solutions, according to the subsequent Remark 7. Notice that x_0 is here just an admissible point for (Q) and not a local minimum as in Theorem 3.2. We can also add the abstract constraint x ∈ D, for some given closed subset D with nonvoid interior in R^d, such that x_0 ∈ D. The main observation is that in solving numerically (4)-(6), now using the variant corresponding to Theorem 2.4, we obtain automatically a discretization of the manifold M defined by (1), in a maximal neighborhood of x_0, as explained above. Let us denote by n the discretization parameter. For instance, 1/n can characterize the size of the discretization for the parameters t_1, ..., t_{d−l}, or n may be linked to the length of the intervals where the maximal solution is computed, or both, etc. As n → ∞, the union of these discretized points is dense in M. We denote by C_n the set of all these discretized points that, moreover, satisfy all the constraints (the inequality and the other restrictions have just to be checked). They give the approximating admissible set and we formulate the algorithm: Algorithm 2. 1) choose n = 1, the discretization step 1/n, the solution intervals I^n_1, ..., I^n_{d−l}, and the tolerance parameter δ. 2) compute the discrete set C_n of admissible points, starting from x_0, via (4)-(6) and by testing the validity of (12) and D. 3) find in C_n the approximating minimum of (Q), denoted by x_n. 4) test if the solution is satisfactory by |g(x_n) − g(x_{n−1})| ≤ δ. 5) If YES, then STOP. If NO, then n := n + 1 and GO TO step 1). In step 4) other tests (on the solutions, on the gradients, etc.) may be used. The approximating minimum x_n ∈ C_n may not be unique and Algorithm 2 finds all of them. One can adapt the convergence test to such situations. The values g(x_n) converge to the infimum of (Q) on the connected component of the admissible set that contains x_0 (see Remark 6). Proof. This is a consequence of the continuity of g and the density of C_n in the admissible set, according to Theorem 2.4. Remark 6. In general, the set defined by the equality constraints may have several connected components. See Example 2. Starting from x_0, Algorithm 2 will minimize just on the component that contains x_0. Initial guesses from all the admissible components are necessary if we want to minimize on all of them. Remark 7. If condition (2) is not fulfilled, then one can use the generalized solution of (1) as mentioned in Section 2, since the Hausdorff-Pompeiu distance (involved in the definition of generalized solutions, [29]) ensures the convergence of these approximating points. This extension to the critical case will be investigated in a subsequent work. We indicate now some illustrative numerical examples and compare our results with other approximation methods, from MatLab or [28]. Example 1. We consider first a minimization problem on the torus in R^3, with radii 2 and 1, defined implicitly: min{xyz} subject to F(x, y, z) = (x^2 + y^2 + z^2 + 3)^2 − 16(x^2 + y^2) = 0. To generate the discrete admissible set, we use the system (7). Using other starting points like (1, 0, 0) or (3, 0, 0) is not allowed by MatLab, which finds no other admissible solutions in these cases, while our approach works. Example 2. Now, we consider two equality restrictions, which represent a torus intersected with a paraboloid, see Fig. 1 and Fig. 2.
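Before the numerical comparison, a toy rendering of Algorithm 2 for the torus of Example 1 may help fix ideas. The sketch below replaces the discretization produced by (4)-(6) with a direct sampling of the standard angle parametrization of the torus, which plays the same role of a dense set of admissible points; the refinement loop, the admissibility set C_n and the stopping test follow the steps listed above.

```python
import numpy as np

# Toy version of Algorithm 2: minimize xyz on the torus (x^2+y^2+z^2+3)^2 = 16(x^2+y^2).
def g(p):
    return p[0] * p[1] * p[2]

def torus_points(n):
    """Dense sample of the torus with radii 2 and 1 via its angle parametrization."""
    th, ph = np.meshgrid(np.linspace(0, 2*np.pi, 60*n, endpoint=False),
                         np.linspace(0, 2*np.pi, 60*n, endpoint=False))
    x = (2 + np.cos(ph)) * np.cos(th)
    y = (2 + np.cos(ph)) * np.sin(th)
    z = np.sin(ph)
    return np.column_stack([x.ravel(), y.ravel(), z.ravel()])

def algorithm2(delta=1e-6, n_max=6):
    g_prev, best = np.inf, None
    for n in range(1, n_max + 1):                 # refine the discretization
        C_n = torus_points(n)                      # discrete admissible set C_n
        vals = g(C_n.T)
        best = C_n[np.argmin(vals)]
        if abs(vals.min() - g_prev) <= delta:      # stopping test of step 4)
            break
        g_prev = vals.min()
    return best, g(best)

x_star, g_star = algorithm2()
print(x_star, g_star)        # an approximate global minimizer of xyz on the torus
```

Inequality or abstract constraints would simply be additional filters on C_n, exactly as in step 2) of the algorithm.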
The numerical results and a comparison with the MatLab routine fmincon are indicated below. In the second case, fmincon stops after 42 iterations with the message that the constraints are not satisfied within the tolerance. In the first case, fmincon finds basically the same values. Consequently, the true solution (obtained in the second case) is not found by MatLab. Remark 8. We indicate no example involving inequality constraints and/or abstract constraints since they involve no supplementary mathematical developments. By step 2) in Algorithm 2, such constraints have just to be checked on the set of discrete admissible points for the equality constraints, obtained via (4)-(6). Notice that in the above examples we get the global solutions. Remark 9. In [28], an example in R^6, with three stiff equality constraints, is discussed. Reworking this example, our algorithm needs no bounds on the independent variables and extends the search domain, which is an advantage from the point of view of global optimization. The necessary working time, on a medium performance laptop, is several minutes. More details on the experiment and some high dimensional numerical examples are indicated in [22]. 4. Applications in control. We investigate first the following optimal control problem with equality mixed constraints: h(x(t), u(t)) = 0, t ∈ [0, 1]. Above, l, f and h are given mappings (with the regularity specified below), x : [0, 1] → R^d is the state variable and u : [0, 1] → R^m is the control unknown. Initial conditions may be added to (16), but it is not necessary now. For simplicity, we assume here that for any u ∈ C(0, 1; R^m), (16) has global solutions in C^1(0, 1; R^d), in order that (15)-(17) make sense. Such global existence properties have to be checked in each application. It is possible to work with Carathéodory solutions and weaker regularity assumptions, but this would introduce more technicalities. Systems of this type have been recently studied in a more general implicit form by Maria do Rosario de Pinho [25], [26] and Clarke and de Pinho [8], from the point of view of the maximum principle and under weak differentiability assumptions. We indicate here a new algorithm. The formulation (15)-(17) is of Mayer type and the conditions (16), (17) give a DAE system. A short announcement of some of the ideas discussed in this section can be found in [32]. If inequality constraints are added to (15)-(17), the classical procedure is to introduce supplementary control variables in order to transform them into equality constraints. Our procedure is similar to that of the previous section, without increasing the dimension of the control space, and applies as well to abstract constraints and separated constraints. As general assumptions, we shall require l continuous, f continuous and locally Lipschitzian in (x, u), the mapping h of class C^1 with locally Lipschitzian gradient, and that there is a point (x_0, u_0) ∈ R^d × R^m such that h(x_0, u_0) = 0 and ∇h(x_0, u_0) is of maximal rank. (18) Notice that, in this setting, some of the constraints (17) may become just state constraints or control constraints and the remaining constraints may be of mixed type. Under hypothesis (18), one can use Thm. 2.4 to obtain a constructive parametric (maximal) description of the admissible manifold for (17), denoted by M ⊂ R^d × R^m. As explained in Rem. 4, this is the search region used below. It may be global if the solution of (4)-(6) is so. M is not the admissible set of pairs for the control problem (15)-(17).
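For a toy mixed constraint, such a parametric description of M can be written down by hand; the sketch below only illustrates the idea. The constraint h(x, u) = xu − 1 and the parameter range are assumptions chosen for the example, not the problem data of the paper.

```python
import numpy as np

# Toy illustration of the admissible manifold M = {(x, u) : h(x, u) = 0} used as the
# search region. Around any point with maximal-rank gradient, one variable can be
# expressed through the other, which is what Theorem 2.4 provides in general (and what
# the system (4)-(6) computes numerically for several constraints).
def h(x, u):
    return x * u - 1.0

xs = np.linspace(0.5, 3.0, 11)           # parameter of the local description of M
M_points = [(x, 1.0 / x) for x in xs]    # (x, u(x)) with h(x, u(x)) = 0
print(all(abs(h(x, u)) < 1e-12 for x, u in M_points))
```

Each of these points can then serve as a consistent initial condition for the state-control dynamics, which is the property exploited in the next part of the section.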
However, any admissible state-control trajectory (including their initial conditions) should satisfy In the sequel, we shall make the supplementary hypothesis that the admissible controls satisfy u(t) ∈ C 1 (0, 1; R m ). Consequently, (17), (19) make sense. This will be justified in the next proposition and the subsequent remark. In the standard terminology for DAE system, relations (16), (17) are semi-explicit of index one. Taking into account (16), (19) and differentiating, we get: Under the regularity assumption for the admissible pairs, we get that they provide global solutions in [0, 1] for the system (16), (20). The important observation is that any point in M provides a consistent initial condition for the differential system (16), (20), which solves a key difficulty for the DAE equations. Remark 10. In fact, we have shown that the differential system (16), (20), under the indicated assumptions, generates admissible pairs in C 1 ((0, 1); R d × R m ), starting from initial conditions in M . And it is meaningful to minimize on this class of admissible state-control pairs and to get at least suboptimal pairs. Moreover, existing regularity results for the optimal pairs, Fleming and Rishel [10], Lee and Markus [15], J.L. Lions [16] allow, in many cases, to restrict the search for admissible controls by such regularity conditions and the problem governed by (16), (20) becomes equivalent with the original minimization problems. We notice that the set of discretization points in M generated by (4) -(6), applied to (17) and denoted by ∪ n∈N M n , is dense in M when the discretization of I 1 × I 2 × . . . × I d−l is finer and finer. 5. Conclusion. Based on the implicit parametrization method developed in the last years by the author, we analyze constrained nonlinear programming and optimal control problems. We discuss theoretical questions like optimality conditions involving less or no multipliers, for general optimization problems in finite dimensional spaces. We also study new algorithms in optimization and control, under weak assumptions on the data. Some academic numerical examples are indicated and comparisons with other approaches from MatLab or from the recent literature, are performed.
6,644.4
2020-01-01T00:00:00.000
[ "Mathematics", "Computer Science" ]
A Genetic Programming Approach to Reconfigure a Morphological Image Processing Architecture . Mathematical morphology supplies powerful tools for low-level image analysis. Many applications in computer vision require dedicated hardware for real-time execution. The design of morphological operators for a given application is not a trivial one. Genetic programming is a branch of evolutionary computing, and it is consolidating as a promising method for applications of digital image processing. The main objective of genetic programming is to discover how computers can learn to solve problems without being programmed for that. In this paper, the development of an original reconfigurable architecture using logical, arithmetic, and morphological instructions generated automatically by a genetic programming approach is presented. The developed architecture is based on FPGAs and has among the possible applications, automatic image filtering, pattern recognition and emulation of unknown filter. Binary, gray, and color image practical applications using the developed architecture are presented and the results are compared with similar techniques found in the literature. Introduction Morphological image processing is a nonlinear branch in image processing developed by Matheron and Serra in the 1960s, based on geometry and on the mathematical theory of order [1][2][3][4][5][6].Morphological image processing has proved to be a powerful tool for binary and grayscale image computer vision processing tasks, such as edge detection, noise suppression, skeletonization, segmentation, pattern recognition, and enhancement [7].Initial applications of morphological processing were biomedical and geological image analysis problems [8].In the 1980s, extensions of classical mathematical morphology and connections to other fields were developed by several research groups worldwide along various directions, including computer vision problems, multiscale image processing, statistical analysis, and optimal design of morphological filters, to name just a few. The basic operations in mathematical morphology are the dilation and the erosion, and these operations can be described by logical and arithmetic operators.Dilation and erosion morphological operators can be represented, respectively, by the sum and subtraction of Minkowski sets [9]: In (1), A is the original binary image, B is the structuring element of the morphological operation, and B + a is the B displacement by a.Therefore, the dilation operation is obtained by the union of all B displacements in relation to the valid A elements.In (2), −B is the 180 • rotation of B in relation to its origin.Therefore, the erosion operation corresponds to intersection of the A displacements by the valid points of −B.These ideas can be extended to gray-level image processing using maximum and minimum operators, too [9]. International Journal of Reconfigurable Computing As mentioned by Haralick [10], since mathematical morphology operates with shapes, it becomes a natural processing to deal with problems of identification of image objects based on shape.Some other basic computer vision operations such as edge detection, skeletons, and noise elimination can be performed eroding or dilating objects in an algorithmic way. In color images, pixels are represented by vector values (RGB, e.g.): P x, y = P1 x, y , P2 x, y , P3 x, y T . 
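Returning to the binary case, the set-theoretic definitions in (1) and (2) can be illustrated directly: dilation is a union of shifted copies of the image and erosion an intersection. The cross-shaped structuring element and the toy image below are assumptions for the example, and np.roll wraps around the border, unlike a padded hardware implementation.

```python
import numpy as np

# Minimal sketch of binary dilation/erosion as Minkowski set operations, assuming the
# structuring element is given as a list of offsets (the points of B) around its origin.
def dilate(A, B_offsets):
    """Union of the displacements of A by the points of B (Eq. (1))."""
    out = np.zeros_like(A)
    for dy, dx in B_offsets:
        out |= np.roll(np.roll(A, dy, axis=0), dx, axis=1)
    return out

def erode(A, B_offsets):
    """Intersection of the displacements of A by the points of -B (Eq. (2))."""
    out = np.ones_like(A)
    for dy, dx in B_offsets:
        out &= np.roll(np.roll(A, -dy, axis=0), -dx, axis=1)
    return out

cross = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]       # 3x3 cross ("circular") SE
A = np.zeros((7, 7), dtype=bool); A[3, 3] = True
print(dilate(A, cross).astype(int))                       # single pixel grows into a cross
print(erode(dilate(A, cross), cross).astype(int))         # closing of a single pixel
```

The gray-level extension mentioned in the text replaces the union/intersection by maximum/minimum over the same window.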
( Mathematical Morphology is based on the application of lattice theory to spatial structures [11].The definition of morphological operators needs a totally ordered complete lattice structure.A lattice is a partially ordered set in which any two elements have at least an upper bound (supremum) and a greatest lower bound (infimum).The supremum and the infimum are represented by the symbols ∨ and ∧, respectively.Thus, a lattice is complete if every subset of the lattice has a single supremum and a single infimum.Color is known to play a significant role in human visual perception.The application of mathematical morphology to color images is difficult due to the vector nature of the color data.The extension of concepts from grayscale morphology to color morphology must first choose an appropriate color ordering, a color space that determines the way in which colors are represented and an infimum and a supremum operator in the selected color space should also be defined.There are several techniques for ordering vectors.The two main approaches are marginal ordering and vector ordering.In the marginal ordering, each component P1, P2, or P3 is ordered independently and the operations are applied to each channel; unfortunately, this procedure has some drawbacks, for example, producing new colors that are not contained in the original image and may be unacceptable in applications that use color for object recognition.The vector ordering method for morphological processing is more advisable.Only one processing over the three dimensional data is performed using this method.There are several ways of establishing the order, for example, ordering by one component, canonical ordering, ordering by distance, and lexicographical order [12]. Once these orders are defined, then the morphological operators are defined in the standard way.The vector erosion of color image f at pixel x by the structuring element B of size n is [2]. The corresponding dilation DnB is obtained by replacing the inf by sup An opening is an erosion followed by a dilation, and a closing is a dilation followed by an erosion. The design of morphological procedures is not a trivial task in practice [13].Some expert knowledge is necessary to properly select the structuring element and the morphological operators to solve a certain problem [14].In the literature, there are several approaches using automatic programming to overcome these difficulties [15][16][17][18][19][20][21][22]; however, they present several drawbacks as a limited number of operators, only regular forms of structuring elements and only morphological instructions, to name just a few. Genetic programming (GP) is the most popular technique for automatic programming nowadays and may provide a better context for the automatic generation of morphological procedures [23].GP is a branch of evolutionary computation and artificial intelligence [24][25][26], based on concepts of genetics and Darwin's principle of natural selection to genetically breed and evolve computer programs to solve problems. Genetic programming is the extension of the genetic algorithms [27] into the space of programs.That is, the objects that constitute the population are not fixed-length character strings that encode possible solutions to a certain problem.They are programs (expressed as parse trees) that are the candidate solutions to the problem [28,29]. 
There are few applications of GP for the automatic construction of morphological operators [14,23] and for color image processing.Thus, we propose a linear genetic programming approach for the automatic construction of morphological, arithmetic, and logical operators, generating a toolbox named morph gen for the Matlab program.The proposed toolbox can be used for the design of nonlinear filters, image segmentation and pattern recognition of binary, grayscale, and color images.The instructions generated by the toolbox are transferred to a 32-stage pipeline architecture developed in this work, which has been implemented on an FPGA.Some examples of applications are presented, and the results are discussed and compared with other approaches. This paper is organized as follows; a brief review of the basic concepts of morphological operations and genetic programming is presented in Section 1; a detailed description of the developed system is presented Section 2; results and application examples are presented in Section 3; and Section 4 is the conclusions. Training Process. The developed algorithm for automatic construction of morphological operators uses a linear genetic programming approach that is a variant of the GP algorithm that acts on linear genomes [30,31].It operates with two images, an input image and an image containing only features of interest which should be extracted from the input image.The genetic procedure looks for operators' sequences in the space of mathematical morphology algorithms that allow extracting the features of interest from the original image.The operators are predefined procedures from a database that work with particular types of structuring elements having different shapes and sizes.It is also possible to include new operators in the database when necessary.The program output is a linear structure containing the best individual of the final population.The output result from one operator is used as input to the subsequent operator and so on, for example, the sequence "ero q 3->dil q 3" performs an erosion in the input image followed by a dilation using for each operation a 3 × 3 square structuring element.The genetic algorithm parameters are supplied by the user using a graphical user interface (GUI).The main parameters are: tree depth, number of chromosomes, number of generations, crossover rate, mutation rate, reproduction rate, and certain kinds of operators suited to a particular problem.It has been used for the problems the mean absolute error (MAE) as a fitness measure.The cost function using MAE error was calculated as follows: In ( 6), a is the resulting image evaluated by a particular chromosome, b is the goal image with the same size as a, and (i, j) is the pixel coordinate.The chromosomes are encoded as variable binary chains.The main steps of the proposed algorithm are illustrated in Figure 1. 
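The fitness measure of Eq. (6) is easy to reproduce; a minimal sketch is given below. The two 4 × 4 images are hypothetical and only serve to show the scale of the error value.

```python
import numpy as np

# Sketch of the MAE fitness of Eq. (6): mean absolute difference between the image
# produced by a chromosome and the goal image of the same size.
def mae_fitness(result_img, goal_img):
    a = np.asarray(result_img, dtype=float)
    b = np.asarray(goal_img, dtype=float)
    assert a.shape == b.shape
    return np.mean(np.abs(a - b))

# Example: two hypothetical 4x4 binary images differing in one pixel out of 16.
a = np.zeros((4, 4))
b = a.copy(); b[0, 0] = 1
print(mae_fitness(a, b))   # 0.0625, i.e., 6.25% for binary images
```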
The genetic parameters and the images are supplied by the user; the initial population of programs is randomly generated.Since the chromosomes are encoded as binary chains, if the user has selected the instructions: and (AND logic), sto (STORE), ero (EROSION), and cpl (COMPLEMENT), the first operator will be coded as "00 2 ", the second as "01 2 ", the third as "10 2 ", and the last as "11 2 ".If the chosen tree depth was four, for example, the chromosome: "00011011 2 " could be created.The evaluation of this chromosome will be as illustrated in Figure 2, for example, AND (A, A) followed by a Store in a temporary variable followed by an Erosion, and followed by a Logical inversion.In example, A is an input binary image.This idea is repeated for the others chromosomes from the initial population.After evaluation of each chromosome in a generation, a cost value is assigned to each one using (6).The next step is to create a new population of the fittest programs.The selection method used to choose the best individuals for reproduction was the tournament selection [32].The best ones are selected for the genetic operations of crossover and mutation.In crossover operation, morphological operators are randomly selected and exchanged between parents chromosomes.The mutation operation replaces a randomly selected instruction by another in the range of morphological algorithms space. The reproduction operator copies a single parent into the new generation according to its fitness value.In Figure 3, we can see the crossover and mutation operators used in this paper.This process is repeated for several generations until a stop criteria is reached (the fittest program). Implemented Architecture. The block diagram of the developed architecture can be seen in Figure 4.The opcodes (best chromosome) file "program.mif" with the sequence of operators (binary chain) seen in Section 2.1 generated by Matlab is transferred with the other project files containing the description of the architecture to the FPGA board by means of the USB interface from PC through the Quartus II software.In this project, the DE2 board from Altera [33] was used to develop the video architecture that is based on a 32stage pipeline.The DE2 board contains a Cyclone II (2C35) FPGA, a NTSC/PAL TV decoder circuit, and a VGA output circuit.A composite video signal supplied by a commercial video camera is deinterlaced and converted to 10 bit RGB data (640 × 480 pixels) through a video decoder stage.The RGB frames are processed through the pipeline stages, and the results are converted to an analog format again through a DA converter.Then, the processed images can be shown in a VGA monitor.A 27 MHz oscillator was used as a clock source. In Figure 5, a block diagram of the pipeline stages is presented.The opcodes from morph gen toolbox are loaded into the stages through a state machine named ROM that contains the original program.The implementation of a stage from the pipeline can be seen in Figure 8, and it is described below. 
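Going back to the chromosome decoding example given at the beginning of this section, the following sketch evaluates the chromosome 00011011 on a toy binary image. The four opcodes and their 2-bit encodings are taken from that example; the real morph gen instruction set is larger, and the erosion here uses a generic library routine rather than the hardware operators.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# 2-bit opcodes from the worked example: 00 = and, 01 = sto, 10 = ero (3x3 square SE),
# 11 = cpl. Each operation maps (current image, temp register) to a new pair.
OPS = {
    "00": lambda img, tmp: (img & img, tmp),                        # AND(A, A)
    "01": lambda img, tmp: (img, img.copy()),                       # STORE into temp
    "10": lambda img, tmp: (binary_erosion(img, np.ones((3, 3))), tmp),
    "11": lambda img, tmp: (~img, tmp),                             # COMPLEMENT
}

def evaluate(chromosome_bits, A):
    img, tmp = A.copy(), A.copy()
    for i in range(0, len(chromosome_bits), 2):
        img, tmp = OPS[chromosome_bits[i:i+2]](img, tmp)
    return img

A = np.zeros((8, 8), dtype=bool); A[2:6, 2:6] = True
result = evaluate("00011011", A)      # AND -> STORE -> EROSION -> COMPLEMENT
print(result.astype(int))
```

Crossover and mutation then act on these bit strings exactly as described above, by exchanging or replacing whole 2-bit instructions.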
The state machine ROM uses the bus dat and the bus add to distribute the data (instructions) to each processor that uses an add (address) in the architecture.For example, the P1 has the add = 01 h, the P2 has the add = 02 h, and so on.The "program.mif" contains a binary chain representing the chromosome generated by Matlab where each line corresponds to an instruction according to Table 1 that will be processed by each stage.The block diagram of ROM unit explained before can be seen in Figure 6.In Figure 7, a simulation of this unit is shown.Considering Figure 6, after the reset process of the architecture, while the end add pin is low, the state machine loads the instructions referring to a certain problem into the instruction register (IR) of each processor of the pipeline.When the load process ends, the end add pin will be high and the state machine will indicate the timing of the video processing, and this cycle will be repeated when the state machine reads a reset state again. Each stage stores two adjacent 640-pixel lines followed by a 3-pixel line to constitute a (3 × 3) input to a morphological processor implemented in that stage.This same structure is used by a previously stored result that is delayed by each stage, too.This result is stored in img temp register.Figure 8 shows the block diagram of a stage from the pipeline.Each stage has been built using the Verilog language.The Instr dec block in the processor decodes the instruction stored in IR register and apply a morphological or logical operation, according to Table 1, to input pixels p1 1, p1 2, p1 3, p2 1, p2 2, p2 3, p3 1, p3 2, and p3 3 (input window) and/or s2 2 (previous stored result from n-1 stage).In Instr dec block, the dilation and erosion operations are implemented according to ( 5) and ( 4), respectively. For example, the instruction dil c 3 (dilation by a 3 × 3 circular structuring element) and ero c 3 (erosion by a 3 × 3 circular structuring element) are implemented in Verilog as follows, respectively: out dil<=0|p1 2|p2 1|p2 2|p2 3|p3 2, and out ero<=1&p1 2&p2 1&p2 2&p2 3&p3 2. To avoid bottlenecks, the system does not use memory access.The only significant delay presented in this architecture is due to the number of the pipeline stages.The logical instructions have been implemented using Verilog HDL through Quartus II.In this architecture, a chromosome is decoded according to Figure 2.Each stage can work with a RGB digital image of 10 bit/channel.For binary processing, the least significant bit of G channel is used.For monochromatic images, the R, G, or B channel is used and for color processing, a combination of R, G, and B, to form a lattice structure required for morphological processing.This combination is as follows: {R1G1B1R2B2G2, . . ., R10G10B10}, thus, RnGnBn is a 30bit scalar number, and the morphological operations ((4) and ( 5)) can be defined for color images.After processing, the resulting scalar value is decomposed again into its RGB component. 
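The 30-bit scalar ordering described above can be sketched in a few lines: the bits of the 10-bit R, G and B channels are interleaved into one word, the supremum or infimum is taken on the packed values, and the result is unpacked again. The bit order used here (MSB first, R before G before B) is an assumption consistent with the listing {R1G1B1R2G2B2, ...}.

```python
# Pack the three 10-bit channels into one 30-bit scalar by bit interleaving, so that
# sup/inf of the packed scalars defines the colour ordering used by the architecture.
def pack_rgb10(r, g, b):
    word = 0
    for i in range(9, -1, -1):               # bit 9 is the MSB of each channel
        for ch in (r, g, b):
            word = (word << 1) | ((ch >> i) & 1)
    return word

def unpack_rgb10(word):
    r = g = b = 0
    for i in range(30):
        bit = (word >> (29 - i)) & 1
        if i % 3 == 0:   r = (r << 1) | bit
        elif i % 3 == 1: g = (g << 1) | bit
        else:            b = (b << 1) | bit
    return r, g, b

# A dilation over a pixel neighbourhood then reduces to max() over the packed values.
pixels = [(1023, 0, 0), (0, 1023, 0), (512, 512, 512)]
sup = max(pack_rgb10(*p) for p in pixels)
print(unpack_rgb10(sup))        # the supremum is always one of the original colours
```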
The implementation idea of the proposed architecture can be seen in the following simplified example (Figure 9) for a dilation of a 5 × 5 binary input image using a 3 × 3 circular structuring element.In this figure, only one stage of the pipeline architecture is shown.Firstly, the image pixels are inserted into the buffers using a raster sweep.The buffers are necessary to maintain a window with the current pixels to be processed in each stage during the raster sweep.Once the structuring element has size 3×3, the first three pixels of each buffer (b1, b2, b3, b6, b7, b8, b11, b12, and b13) are passed to the processor along the time.Since the structuring element, in this hypothetical case, is circular, the only pixels used by the processor are b2, b6, b7, b8, and b12.In this example, a dilation operation that was preconfigured by the state machine ROM is implemented using a logical OR operator.The output of the stage is given by a stream of pixels.In this specific case, the input active pixels were img in (3,2) and img in (3,3), thus, after the logical operation, the active pixels of the dilated output image can be seen by means of the result variable. rst Results and Application Examples In this section, some results using the developed architecture are presented. In Figure 10, the input image was corrupted by salt and pepper noise with a density of 0.09 generated by Matlab.The instructions ero q 3 and dil q 3 (morphological operators) were used to construct a morphological filter.The genetic procedure converged before the 5th generation and the filter "ero q 3-> dil q 3-> dil q 3-> ero q 3" (morphological algorithm) was automatically created.The genetic parameters chosen for this task were: 50 generations, 25 chromosomes, depth of tree 4, crossover rate of 90%, mutation rate of 20%, and reproduction rate of 20%.The MAE error found between the goal image and a clear version of the original image was less than 0,4%.The training time was less than 4,3 seconds, and the execution time was performed in real time by the developed system.In this example and in the following ones, a PC notebook equipped with an AMD 64 Athlon processor and 512 MB of system memory was used for the training process. In Figure 11, an original image and a training image containing features (heads) from a fragment of a music score to be extracted by the evolutionary system are presented.The genetic procedure found the following best program to extract heads using the developed system: "dil dd 3-> dil de 3-> dil dd 3-> dil v 3-> dil v 3-> dil dd 3-> dil v 3-> ero q 3-> ero v 3-> ero q 3-> ero c 3".The genetic parameters chosen for this task were: 50 generations, 50 chromosomes, tree of depth 12, crossover rate of 97%, mutation rate of 3%, and reproduction rate of 10%.The MAE error found between the goal image and the obtained result was less than 1,1%.The training time was less than 12 min, and execution time was in real time.This procedure Best individual: ero q 3 -> dil q 3 -> dil q 3 -> ero q 3 was applied to image in Figure 12 producing an expected result, too. 
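As a software cross-check of the stage behaviour in the example above, the sketch below reproduces the dilation of the 5 × 5 image with the 3 × 3 circular structuring element as an OR over the five active window positions, mirroring out_dil <= p1_2 | p2_1 | p2_2 | p2_3 | p3_2. It is a behavioural model only; the line buffers and streaming timing of the real stage are not modelled, and the zero padding at the border is an assumption.

```python
import numpy as np

# Behavioural model of one pipeline stage computing a binary dilation with the
# 3x3 circular (cross-shaped) structuring element during a raster sweep.
def stage_dilate(img):
    h, w = img.shape
    padded = np.pad(img, 1, constant_values=0)
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            win = padded[y:y+3, x:x+3]
            out[y, x] = win[0, 1] | win[1, 0] | win[1, 1] | win[1, 2] | win[2, 1]
    return out

img = np.zeros((5, 5), dtype=np.uint8)
img[2, 1] = img[2, 2] = 1            # the two active pixels of the 5x5 example
print(stage_dilate(img))             # the dilated result produced by the stage
```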
In Figure 13, an emulation result of the Photoshop's Trace Contour filter after a training process of the evolutionary system is presented.After the presentation of training samples the best program found was "ero (c 3)-> sto1-> sto1-> dil (q 3)-> dil (c 3)-> cpl-> cpl-> dil c 3-> ero q 3-> cpl-> or1-> cpl-> cpl."The genetic parameters chosen for this task were 50 generations, 90 chromosomes, tree of depth 16, crossover rate of 97%, mutation rate of 3%, and reproduction rate of 10%.The MAE error found was less than 2,86% compared to Photoshop's result.The training time was less than 18 min, and execution time was in real time. In Figure 14, there is an example of an emulation result of the Photoshop's Glowing Edge filter generated automatically after a training process of the evolutionary system for the following parameters: 51 generations, 70 chromosomes, tree of depth 9, crossover rate of 95%, mutation rate of 20%, and reproduction rate of 10%.For this task, an intensity image was used as input and the best program found was "add cz-> add cz-> dil c 3-> sto1-> cpl cz-> dil c 3-> dil c 3-> add cz-> sto1."The MAE error found was less than 6,2% compared to Photoshop's result.In Figure 15 the same result was applied to a color image.The morphological operations in this example preserve the colors in the original image [34].The training time was less than 10,6 min, and execution time was in real time. International Journal of Reconfigurable Computing Comparing the results with those obtained from other works in the literature, our implementation presented improvements in fitness, processing time, and programming flexibility.In [20], a genetic algorithm was used for the task of head extraction in music scores.The error found in this work was about 11,8% for a chromosome of size 14.In [23], the error for the same task was greater than 20%.In this paper, the error was less than 0,7% for a chromosome of size 6.In [23] and [20], the processing time of the procedures is not specified.In [13], a genetic algorithm for the task of automatic design of morphological filters is presented.The error found in this work for this task was about 10,59% and the processing time was not performed in real time.In this work, this error was about 2,2% for the same task.In this work, all applications were performed in real time by the developed architecture. As a contribution of the current paper in relation to the paper [35] presented at SPL, 2010, there are some improvements, such as, the sections were updated, the morphological operators were extended, to gray and color images, new morphological processing arithmetic operators were introduced, additional references were included and new figures and new experiments were shown. In relation to the paper [36], the current work presents some improvements such as the intelligent reconfiguration of the pipeline architecture by means of a genetic procedure. Table 2 summarizes all the obtained results, and Table 3 presents the FPGA device used resources. 
Conclusions In this paper an original reconfigurable architecture using logical and morphological instructions generated automatically by a linear approach based on genetic programming was presented. The developed architecture was based on an FPGA from Altera's Cyclone II family. The system is composed of a 32-stage pipeline and can be used in real-time mathematical morphology and linear applications. The system is able to process 640 × 480 pixel images at 60 frames/sec. Binary, gray-level, and color image practical applications using the developed architecture were presented, and the results were compared with other implementation techniques. The developed system can be applied to digital images for the automatic design of nonlinear filters, image segmentation, and pattern recognition. Application examples were shown where the solutions were expressed in terms of the basic morphological operators, dilation and erosion, in conjunction with arithmetic and logical operators. Compared with other methods described in the literature, the developed methodology presents many improvements in processing time, fitness, and flexibility in relation to program size (variable), types of operators, and extension to color images. The developed method can be used as a guide to morphological design as well as to other applications involving linear image processing.
Figure 1: Flowchart of the developed system. The index i refers to an individual in the population. The reproduction rate is pr, the crossover rate is pc, and the mutation rate is pm. The goal image can be created using an editor program or a processing program.
Figure 4: Block diagram of the developed system. After FPGA programming, the composite video is deinterlaced by the video decoder, and the 10-bit RGB data produced are processed by the pipeline architecture. The results are converted again to an analog format to be shown on a VGA monitor.
Figure 5: Block diagram of the pipeline stages.
Figure 7: Simulation example of the ROM unit.
Figure 10: Automatically generated filter to eliminate the salt and pepper noise from the corrupted original image.
Figure 11: Result obtained by the developed hardware for head extraction in real time.
Figure 12: Example of head extraction in real time.
Figure 13: Emulation result of Photoshop's Trace Contour filter implemented in hardware.
Figure 14: Emulation result of Photoshop's Glowing Edge filter implemented in hardware.
Table 2: Summary of results (example, generations, chromosomes, tree depth, crossover rate, mutation rate, reproduction rate, MAE error, training time).
Table 3: Summary of FPGA device resources used.
5,450.2
2011-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
Evaluation of the Perspective Power Transistor Structures on Efficiency Performance of PFC Circuit : The aim of this work is to investigate the influence of circuit elements on the properties of the selected power factor correction (PFC) topology. Active or passive PFC serves to increase the power factor (PF) and reduce the total harmonic distortion (THD) of the mains current. As a result, the distribution network is lightened due to its interference caused by connected electronic devices. An important indicator of all electronic converters is efficiency. Therefore, the work deals with the analysis of possible efficiency improvements in conjunction with the use of technologically new active components. Detailed experimental analyses and optimization procedures are performed in terms of the influence of transistor structures (SiC and GaN) on the qualitative indicators of the proposed PFC converter for a wide operating spectrum. The synthesis of the obtained results is given, together with recommendations for optimal selection and optimal design of PFC main circuit elements with regard to achieving peak efficiency values. Introduction Due to the rapid development of power semiconductor and computing technology, the structures of electrical equipment have changed in recent decades, and great efforts are being made to produce these sophisticated devices with high efficiency and the necessary dimensions and effects [1][2][3]. Nowadays, in large electronic devices, the power supply is realized as a switchedmode power supply. A diode rectifier is usually used at the input, followed by a filter capacitor, which allows the AC voltage to be transfered to a DC voltage. An input rectifier with capacitive filter taken from the distribution network of pulse currents, leads to the consumption of higher harmonic components of the current, which adversely affects other equipment on the distribution network. As a result, we started using active rectifierspower factor correction. Today, every manufacturer is trying to reduce the volume of the inverter that is connected to the mains in order to optimize the dimensions and weight. One of the ways to reduce the converter itself is to optimize it for higher switching frequencies, through which it is possible to use passive elements with smaller values and thus smaller dimensions [4][5][6][7]. However, the ability to operate at high switching frequencies is limited by the switching speed of the transistor, as it is important to maintain relatively low switching losses. At present, from an economic or functional point of view, SiC technology appears to be the most widely used transistor technology in industrial power supplies. Transistors of this technology are manufactured up to a voltage of 1700 V and are able to operate up to a frequency of 200 kHz at several kW of inverter power. Nowadays, however, GaN technology is also beginning to be used in industrial power supplies, which promises a significant increase in switching frequency with relatively small switching losses. The previous problem was mainly the absence of high voltage GaN transistors (650-1200 V), which would be applicable to three-phase PFC sources connected to the European network [8]. The main goal of this work is to implement and evaluate the switching performance of the latest transistor technologies (SiC and GaN) within the industrially used PFC converter. 
This aim targets the optimization of the selected converter considering the efficiency characteristic as well as power density optimization. Regarding the power density, it is related to the verification of the operation of selected transistor structures at high-frequency operation (from 120 kHz up to 250 kHz), while the original switching frequency was only 41 kHz. This significant increase in switching frequency enables us to further optimize PFC inductances, which represents the main component of the selected topology of PFC. From a scientific point of view, this approach represents the investigation of the posibilities of how to improve the power density of modern power semiconductor converters, while maintaining a very high efficiency performance within a wide power range. This issue also adresses the requirements regarding the optimization of the inductive components due to a multiple increase in the operational frequency. As the above-mentioned claims represent a complex task, the main focus here is on the investigation of the optimization process in the utilization of modern power semiconductor transistors within the main circuit of a three-phase dual interleaved PFC converter. Materials and Methods Continuous development in the field of power semiconductors is leading power electronic systems to new dimensions characterized by improved efficiency, power density, and thermal performance. All these aspects are related to relationships that must be conceived in the form of a compromise during development. One of the ways to optimize solutions is through the research and development of a completely new system. Another way is the substitution of the main components and parts of the existing system using new technological materials and structures. In this work, we are discussing the upcoming trend where silicon power transistors are continuously substituted with the proposed components based on SiC and GaN technology. The process should be simple, just exchanging part for part, but several solutions require additional modification to the circuit, thus making the process more complex and costly. Because most of the switched mode power supplies require power factor correction, it is worth considering the efficiency performance of this system overall. Currently, SiC transistors are popular for modern PFC circuits; however, there is also GaN technology, which should undergo investigation as to whether it is a better solution for improving the operational characteristics of PFC itself [9]. The target power semiconductor system is represented by a three-phase PFC converter, the principal schematics of which is shown on Figure 1. This PFC type is based on the topology of an interleaved boost converter supplied by a non-controlled three-phase rectifier. The testing procedure was achieved under laboratory conditions ( Figure 2). Experimental measurements were performed with calibrated laboratory equipment from renowned manufacturers such as Tektronix, California Instruments, Agilent and Fluke, Hocherl & Hackl, etc. Block diagrams of the experimental testing and measurements are shown in Figure 3. As an input source of the tested power converter, a three-phase supply grid was used. The change in the output power was performed by electronic load Hocherl & Hackl ZS7080. Because during the experimental testing of individual transistors, the switching frequency was also one of the variables, it was modified using a microprocessor controller. 
Measurement related to the efficiency evaluation was done by recording the effective values of the input and output variables, i.e., voltages and currents. This was performed by a Fluke 45 (output ammeter) and an Agilent 34401A (output voltmeter). Time waveforms were recorded with the oscilloscope Tektronix MDO3014 equipped with voltage and current probes from Tektronix. A list of the laboratory equipment used is given in Table 1, while a block diagram of the experimental test stand is shown in Figure 3. Because an important source of power losses in PFC converters is represented by the inductors, we also considered the construction (winding organization, air-gap implementation) and material properties of the inductors during the transistor evaluation. The original PFC inductor was made on a toroidal alloy core, which is suitable for operational frequencies up to 100 kHz. The switching frequency of the original PFC converter was 41 kHz, while its parameters and circuit components are listed in Table 2.
As one of the main goals is the practical utilization of the new proposed transistor structures for industrial use, while maintaining high-efficiency operation together with high switching operation/improving power density, we were required to design and construct power inductors of the original PFC converter to operational conditions characterized by high-frequency operation. Modified inductors for high-frequency operation are made on toroidal ferrite core R50/30/20. Specific to these inductors is the implementation of a distributed airgap ( Figure 4) in order to reduce the effect of the fringing flux. These inductors have been verified for a switching frequency within 120 kHz up to 250 kHz, while the main parameters for the design and construction are given in Table 3. Currently, the commercial market for electronic power systems dispose of two of the most important material technologies regarding semiconductors, i.e., galium nitride and silicon carbide. Both structures belong to wide-bandgap semiconductor materials, while the semiconductors made of these materials are defined as composed components because they are composed of several elements from the periodic table of elements. The main material properties are listed in Table 4. The high value of electric field intensity for GaN and SiC compared to Si material represents ability, which prefers components made on these substrates to operate with higher blocking voltages and higher currents. The higher thermal conductivity means that a semiconductor component made of GaN and SiC can withstand higher power losses. High-mobility devices with electron saturation velocity are related to properties that are reflected in the ability to operate at very high switching frequencies. If we compare individual substrates (Si, SiC, and GaN), then SiC is characterized by the highest power density compared to GaN or Si, thus they are suitable for application where high power ratings of semiconductor systems are required. This is related to the fact that current SiC transistors are characterized by the ability to block high voltages. Currently, discrete transistors are available for 1700 V of blocking voltage, while simultaneously it provides a high current rating (72 A). This is not valid for GaN technology, where only 600 V of blocking voltage is available and already well tested. However, the development of the GaN semiconductors is continuously being improved and many problems related to the value of blocking voltage capability or the process of transistor driving have been solved. Therefore, it is a question of time as to which applications and nominal parameters of the power semiconductor systems GaN will present standardized solutions for within industrial applications. There are pros and cons for both technologies, therefore the evaluation here will be summarized in more detail within a practical application. Example of the Calculation of Losses of Power Transistor in Three-Phase PFC Circuit In this section, the target system for verification of the properties of selected GaN and SiC structures will be given. At the same time, the procedure for the calculation of the main power losses of switching transistors within expected power systems will be given. Initially, in this way, we would like to show the main differences between the properties Two types of transistors will be evaluated analyzed regarding switching losses, one of which will be considered as SiC type and the other GaN (Table 5), both operating at a frequency of 200 kHz. Table 5. 
Basic parameters of the selected GaN and SiC power transistors for the power loss calculation: 2000 mA @ t_pulse = 50 ns, f = 100 kHz; C_ISS 750 pF / 400 pF @ 500 V; C_OSS 90 pF / 70 pF @ 500 V; C_RSS 9 pF / 0.4 pF @ 500 V; Q_G max 22 nC @ 400 V / 5.8 nC @ 400 V; t_r 15 ns / 10 ns; t_f 13 ns / 9 ns. The parameters of the three-phase PFC circuit with regard to the operation of the used transistors enter Equation (1). The calculation of the capacitive losses is defined in Equation (2). The switching losses of the transistor, i.e., turn-on and turn-off, should be determined using Equations (3) and (4), while the total switching losses are the sum of these parts (Equation (5)). If the parameters listed in Table 5 are used together with Equations (1)-(6), then the estimation of the power losses of one transistor in the topology considered in Figure 1 is as shown in Table 6. Here, it is seen that the selected GaN transistor exhibits lower switching losses. Conduction losses together with capacitive losses, on the other side, show a similar amount when the SiC and GaN structures are compared. From the previous analysis, it can be seen that it is possible to estimate the loss performance behavior of the power transistor if the operational parameters are available for the selected power converter circuit [10][11][12][13]. However, these data are often available only if detailed circuit simulations of power converters are provided, or experimental measurements have already been performed [14]. Therefore, the second approach for the detailed evaluation of the proposed power semiconductor structures on the performance of PFC converters should be done experimentally. However, at this point, it should be emphasized that an optimized physical prototype of the converter is necessary for these purposes [15][16][17][18]. The next section shows the procedure for the efficiency optimization of the PFC circuit under consideration, while the main focus is on the evaluation of the properties of the proposed power transistors. Physical Prototype of Three-Phase PFC Circuit Undergoing Optimization Process In this section, we will discuss the procedure for PFC converter optimization for higher switching frequencies, i.e., hundreds of kHz. The aim is to achieve a minimum switching frequency of 200 kHz at an output power of 4 kW, an output voltage of 500 V, and at approximately the same efficiency as the original non-optimized PFC converter. Through these procedures, the aim is to point out the possibilities of GaN transistors in terms of their potential use as a replacement for industrially used SiC and Si transistors. The physical prototype of the PFC converter, whose circuit diagram of the main circuit is shown in Figure 1, is shown in Figure 5. It is clear from the parameters listed in Table 2 that the industrial use of such a system is predestined for the US network. Therefore, GaN and SiC technology transistors (Table 7), whose blocking voltage is at the level of 600 Vdc to 650 Vdc, will be analyzed.
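Before turning to the measurements, the loss estimate outlined at the beginning of this section can be reproduced with a few lines of code. The formulas below are generic first-order approximations, not necessarily the exact Equations (1)-(6) of the paper; the on-resistances and operating currents are assumptions, and the capacitance and switching-time values follow the order in which they appear in the parameter list above, without claiming which column is the SiC and which the GaN device.

```python
# First-cut loss estimates for one boost transistor at the assumed operating point
# (500 V bus, 200 kHz); standard textbook approximations only.
f_sw   = 200e3    # Hz, switching frequency
V_ds   = 500.0    # V, blocked voltage (output bus)
I_rms  = 4.0      # A, assumed RMS transistor current
I_on   = 5.0      # A, assumed switched current at the commutation instants

def conduction_loss(r_ds_on):               # P = Rds(on) * Irms^2
    return r_ds_on * I_rms**2

def capacitive_loss(c_oss):                 # P = 0.5 * Coss * Vds^2 * f
    return 0.5 * c_oss * V_ds**2 * f_sw

def switching_loss(t_r, t_f):               # P = 0.5 * Vds * Ion * (tr + tf) * f
    return 0.5 * V_ds * I_on * (t_r + t_f) * f_sw

# Coss, tr, tf follow the listed values; Rds(on) is an assumed 70 mOhm for both devices.
for name, r_on, c_oss, t_r, t_f in [("device A", 0.070, 90e-12, 15e-9, 13e-9),
                                    ("device B", 0.070, 70e-12, 10e-9,  9e-9)]:
    p = (conduction_loss(r_on), capacitive_loss(c_oss), switching_loss(t_r, t_f))
    print(name, [f"{x:.2f} W" for x in p], f"total {sum(p):.2f} W")
```

Even with these rough numbers, the switching term dominates at 200 kHz, which is why the fall and rise times of the candidate devices matter most in what follows.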
In the first step, the efficiency and total losses were evaluated with the original components, and the results serve as reference values (Figure 6). The goal of using progressive semiconductor components should be an increase of the switching frequency together with an increase in efficiency. The results given in Figure 6 therefore use the components listed in Table 2. From Figure 7, it can be seen that the power losses of the circuit shown in Figure 1 are approximately 100 W. These losses are attributed to the input filter stage, the rectifier stage, and the PFC circuit (boost converter) itself. Figure 7 shows the dependency of the power losses of the input common-mode filter and the rectifier circuit on the input current (i.e., on the power of the converter as well). It can be seen that the sum of those losses represents almost 50% of the total power losses of the proposed PFC converter. The operation/power losses of the input filter with the rectifier do not depend on the value of the switching frequency of the PFC. Therefore, the values of the losses on these elements will be constant during further measurements, where the investigated PFC operates with different switching frequencies. In the following steps, we include the rectifier and filter losses in the total measured losses, as these losses are an integral part of the PFC converter.
Results for PFC Efficiency for Original CoolSiC Transistor IMW120R060M1H with Modified Inductor for High-Frequency Operation The first approach to increasing the switching frequency of the considered PFC converter and investigating its efficiency performance was realized with the original SiC transistor (blocking voltage 1200 Vdc). Because high-frequency operation compared to the reference value was applied (within 120 kHz to 160 kHz), the original alloy inductors were replaced by the inductors shown in Figure 4. The results are shown in Figure 8. It can be seen that the efficiency for high-frequency operation dropped within the range of 0.5% if 50% of nominal power is considered. The evaluation of the power losses related to the previous efficiency characteristics (Figure 8) is shown in Figure 9. The main part of the losses associated with this efficiency drop is attributed to the switching losses of the transistors; therefore, experimental measurements for the selected transistors are shown next. The main difference compared to the previous results (Figure 8) is that transistors with a lower voltage blocking capability have been selected, which should reduce the switching and conduction losses of the semiconductors within the evaluated PFC circuit. Optimization of Three-Phase PFC Circuit with the Use of Perspective Transistor Structures As previously mentioned, three types of the proposed transistors were selected for evaluation. The individual transistors have similar electrical parameters (Table 8), while they differ technologically.
Results for Cascode GaN Transistor TPH050WS for Switching Frequencies of 160 kHz and 200 kHz

The main electrical parameters of this transistor are listed in Table 7. As was already mentioned, there is a MOSFET transistor at the input of the cascoded GaN transistor, which controls the power GaN transistor (Figure 10). The advantage of this cascode connection is the possibility of using the driving circuit originally used for the reference transistor (CoolSiC IMW120R060M1H); thus, no reconfiguration is required, as the voltage range of UGS is the same as that of a conventional MOSFET. On the other side, the value of the gate resistance does not have a great effect on the turn-on and turn-off speed of the transistor, which is reflected in the transistor's disadvantage, i.e., a limitation of the switching dynamics of the cascoded GaN transistor.

For this type of transistor, the switching frequency was set to 160 kHz as well as 200 kHz. Regarding the PFC inductors, ferrite toroidal inductors were used (Figure 4). The efficiency over a wide output power range was evaluated. As can be seen from Figure 11, the efficiency of the converter improved by 0.5% at full power compared to the previous measurement (IMW120R060M1H). Simultaneously, the efficiency characteristic is almost constant from half of the output power compared to the previous measurements. For the situation where the switching frequency was increased up to 200 kHz (Figure 11), it can be seen that at nominal output power, the efficiency drop compared to 160 kHz is 0.1% (this is valid for almost the whole power range of the PFC converter). This difference represents approximately 3 W (Figure 12).
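As a quick sanity check of numbers like the 0.1% efficiency gap quoted above, the loss difference implied by an efficiency change at constant output power can be computed directly; the nominal output power assumed below is an illustrative value, not a figure taken from the paper.

```python
# At constant output power, changing the efficiency from eta1 to eta2 changes the
# dissipated power by P_out * (1/eta2 - 1/eta1). Assumed, illustrative values only.
p_out = 3000.0                      # W, assumed nominal output power
eta_160k, eta_200k = 0.970, 0.969   # assumed absolute efficiencies, 0.1% apart
extra_loss = p_out * (1.0 / eta_200k - 1.0 / eta_160k)
print(f"extra loss at 200 kHz ≈ {extra_loss:.1f} W")   # on the order of a few watts
```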
Results for CoolSiC Transistor IMZA65R072M1 for Switching Frequency 200 kHz

For a more correct comparison of SiC and GaN technologies, it is more appropriate to compare transistors with approximately the same voltage and current parameters. So far, we have tested the original SiC transistor, which has a maximum blocking voltage of UDS = 1200 V. It would not be adequate to compare this transistor with a 600 V GaN transistor, and therefore a 650 V SiC transistor, IMZA65R072M1, the latest generation of CoolSiC from Infineon, was incorporated into the evaluation process. The PFC converter was adapted to operation at a switching frequency of 200 kHz. As opposed to the previous situation, here we evaluated the effect of the gate resistor on the efficiency performance (i.e., 12 Ω and 3.3 Ω values were tested). It is assumed that the value of the gate resistor has a significant effect on the switching losses, and thus also on the overall efficiency of the PFC converter (Figures 13 and 14).

Comparing the situation with the two different values of gate resistors, the difference in efficiency at full power is 0.3% (Figure 13), which is a difference of 13 W of power losses (Figure 14). Compared to the reference configuration, in the case of a gate resistor with a value of 3.3 Ω, the efficiency is just 0.3% higher at full power. However, at an output power of around 1.2 kW, this efficiency is 0.65% worse compared to the original configuration.

Results for GaN Transistor IGT60R070D1 for Switching Frequency 200-250 kHz

The last type of verified transistor is the GaN IGT60R070D1 from Infineon. Unlike conventional MOSFET transistors, this transistor is not controlled by a driving voltage but by a driving current. For this reason, a modification of the driving circuitry of the PFC converter was required; therefore, additional circuitry (Figure 15) had to be added to the original one.

At the instant when the transistor is required to be turned on, 5 V is applied at the OUTH output and a current pulse, which can be in the order of hundreds of milliamperes, is applied to the transistor gate through the low-ohmic resistor R5 (in the order of ohms) and capacitor C9 (in the order of nanofarads). This current pulse opens the transistor, which stays open until capacitor C9 is charged. The current that keeps the transistor open is in the order of tens of milliamperes (up to 20 mA) and flows through resistor R9, which is in the order of hundreds of ohms.

The transistor is turned off by disconnecting the 5 V (OUTH) and then grounding resistor R6 via pin OUTL, which applies a negative voltage from capacitor C9 to the gate of the transistor. During the turn-off state, the manufacturer recommends that a negative voltage be applied continuously to the gate of the transistor; this is provided by capacitor C9.
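The following back-of-the-envelope sketch translates the gate-drive description above into rough current levels. The component values and the assumed gate clamp voltage are placeholders consistent with the stated orders of magnitude (R5 of a few ohms, C9 of a few nanofarads, R9 of around one hundred ohms); they are not taken from the actual driver schematic.

```python
# Rough estimate of the current-driven GaN gate drive: a pulse through R5/C9 turns
# the device on, then R9 supplies the small holding current. All values assumed.
V_DRIVE = 5.0    # V, OUTH level
V_GATE = 3.5     # V, assumed gate clamp voltage of the GaN HEMT
R5 = 5.0         # ohm, low-ohmic pulse resistor (assumed)
C9 = 10e-9       # F, speed-up capacitor (assumed)
R9 = 100.0       # ohm, holding-current resistor (assumed)

i_pulse_peak = (V_DRIVE - V_GATE) / R5   # ~0.3 A: "hundreds of milliamperes" pulse
i_hold = (V_DRIVE - V_GATE) / R9         # ~15 mA: steady gate current (< 20 mA)
tau_pulse = R5 * C9                      # ~50 ns: pulse decays as C9 charges

print(f"peak gate pulse ≈ {i_pulse_peak * 1e3:.0f} mA, "
      f"hold current ≈ {i_hold * 1e3:.0f} mA, pulse time constant ≈ {tau_pulse * 1e9:.0f} ns")
```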
The measurements were performed for two values of switching frequency, i.e., 200 kHz and 250 kHz. The results of the efficiency characteristic and the power loss characteristic as functions of output power are evaluated in Figures 16 and 17.

At 200 kHz and at full power, the efficiency improved by 0.65% compared to the reference values. This represents the best result so far among the evaluated transistor structures. The value of the power losses at full PFC power decreased by 28 W, which represents a reduction of almost 2/9 of the losses of the original configuration, while the power density was maintained. Increasing the switching frequency to 250 kHz reduced the efficiency by 0.3% at an output power of 1 kW, but at full power the efficiency was almost identical to that at 200 kHz of switching frequency. This experiment represents the final step in the circuit optimization of the proposed three-phase PFC converter.
Experimental analysis and verification have confirmed that transistors based on GaN technology represent a promising way to increase the quality indicators of power electronic converters.

Conclusions

In this paper, the evaluation of the power transistor technology with respect to the efficiency performance of the proposed three-phase PFC converter was given. Because SiC and GaN materials are currently identified as the most widely used transistor technologies, the main focus was given to the determination of their properties within the target application. Each material has certain advantages and disadvantages. Specifically, we analyzed SiC and GaN technologies theoretically, analytically and, in the last step, practically by implementing selected transistor types within the main circuit of the PFC converter. From theoretical knowledge, the most advantageous are SiC transistors due to their properties such as switching speed, relatively low switching losses even at higher switching frequencies, relatively low R_DS_on, and high voltage blocking capability (1200 V). However, GaN technology offers better properties, especially in terms of switching speed, but it is currently not at a stage of research and development that makes it applicable to industrial power supplies. There are few manufacturers in the world that produce reliable GaN transistors capable of operating at voltages greater than 650 V. After the theoretical analysis of the selected SiC and GaN transistors, the selection of certain types was made, reflecting currently available devices. Each transistor was evaluated within a certain range of switching frequency. The frequency range was selected depending on the results of at least two successively measured efficiencies at different switching frequencies. If the efficiency was worse than the previous one, another type of transistor was tested. The original intention was to optimize the switched power supply operating in the three-phase interleaved boost PFC topology. The 600 V and 650 V SiC and GaN transistors were specified, which are manufactured by several manufacturers. In the first step, we measured the efficiency and total losses of the original transistor device. These measured values served as a reference, and with increasing switching frequency, we tried to optimize the converter. In the test, we used four types of transistors, namely, two types of SiC and two types of GaN, where one of the GaNs was in a cascode connection with a MOSFET at the gate part.
The best efficiency performance was reached at a switching frequency of 200 kHz (a five times higher value compared to the reference), while the efficiency increase compared to the reference was 0.7%. This achievement is remarkable considering that the switching frequency is several times higher than in the original solution. In this way, it is possible to markedly optimize the converter main circuit and thus reduce dimensions, size, and costs. The summary of the achieved results is presented in the last figure (Figure 18). Here, the achieved measurement results are shown for each of the tested transistors, while only the efficiency characteristic with the best performance is shown. The reference efficiency characteristic remains that of the original configuration of the PFC converter. Looking at the achieved results, it is clear that GaN transistor technology enables us to maintain very high operational efficiencies even when the switching frequency rises several times compared to the original conditions.

A very significant aspect, which must be considered if the presented approach is to be practically used, is the evaluation of the transistor parameters. From the information presented in Table 5, it can be seen that not only the technology, but also the parasitic capacitances together with the dynamic performance (rise and fall times) influence the switching losses. The output capacitance COSS influences the turn-off switching losses, while CISS influences the turn-on switching losses. A good compromise between their values and a very low value of the Miller capacitance predict superior performance if high-frequency, high-efficiency operation is expected. The results given in this paper confirm that in the future, GaN transistors will naturally replace SiC transistors in industrial power supplies. Currently, the biggest disadvantage of available GaN transistors is that they are not usable at 1200 V.

Finally, it must also be mentioned here that the impact of the modifications related to circuit components and operational conditions, i.e., the change of transistors, inductors, and switching frequency, does not have a significant influence on the performance related to THD and PF. This is confirmed by the measured results, which are given in Appendix A. It can be seen that there is no visible change or fluctuation of these parameters within the whole power range of the evaluated PFC converter.
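To illustrate how the capacitance and timing parameters discussed in the conclusions above feed into frequency-proportional switching loss, the sketch below uses a crude hard-switching model; the capacitance, voltage, current, and timing values are illustrative assumptions, not datasheet figures for the transistors evaluated in this work.

```python
# Crude hard-switching loss model: the energy stored in C_OSS is lost at each
# turn-on, and V-I overlap during the rise/fall times adds further loss; both
# scale linearly with switching frequency. Illustrative values only.

def switching_loss_w(c_oss_f, v_bus_v, i_sw_a, t_rise_s, t_fall_s, f_sw_hz):
    e_oss = 0.5 * c_oss_f * v_bus_v ** 2                          # C_OSS energy per cycle
    e_overlap = 0.5 * v_bus_v * i_sw_a * (t_rise_s + t_fall_s)    # V-I overlap estimate
    return (e_oss + e_overlap) * f_sw_hz

for f_sw in (160e3, 200e3):
    p = switching_loss_w(c_oss_f=60e-12, v_bus_v=400.0, i_sw_a=10.0,
                         t_rise_s=10e-9, t_fall_s=10e-9, f_sw_hz=f_sw)
    print(f"f_sw = {f_sw / 1e3:.0f} kHz -> estimated switching loss ≈ {p:.1f} W per device")
```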
Conflicts of Interest: The authors declare no conflict of interest. Appendix A In this section, the values for PF and THD of individual phases are given, while the results are listed for the original PFC circuit with alloy inductor and transistor, as well as for optimized and tested solutions.
9,298.8
2021-06-30T00:00:00.000
[ "Engineering", "Physics" ]
Microstructures define melting of molybdenum at high pressures

High-pressure melting anchors the phase diagram of a material, revealing the effect of pressure on the breakdown of the ordering of atoms in the solid. An important case is molybdenum, which has long been speculated to undergo an exceptionally steep increase in melting temperature when compressed. On the other hand, previous experiments showed a nearly constant melting temperature as a function of pressure, in large discrepancy with theoretical expectations. Here we report a high-slope melting curve in molybdenum by synchrotron X-ray diffraction analysis of crystalline microstructures, generated by heating and subsequently rapidly quenching samples in a laser-heated diamond anvil cell. Distinct microstructural changes, observed at pressures up to 130 gigapascals, appear exclusively after melting, thus offering a reliable melting criterion. In addition, our study reveals a previously unsuspected transition in molybdenum at high pressure and high temperature, which yields highly textured body-centred cubic nanograins above a transition temperature.

[Figure caption fragment: (b) In-situ XRD signal from a different experiment at nearly identical conditions as (a), showing a single Bragg reflection spot with extremely high photon count (inset). (c) Integrated counts vs. 2θ from (a), comparing the Mo diffraction at low and high temperature (2550 K and 3500 K); the Mo diffraction signal weakens at high temperature. (d) In contrast to (c), the crystalline diffraction of Mo (integrated from (b)) appears significantly stronger at the higher temperature (3500 K) compared to the lower temperature (2500 K).]

[Supplementary Table 1 caption: Qb is the boundary heat source (laser power); σ is the width of the laser beam Gaussian distribution.]

Observations of the microstructure transition and melting

As we introduced in the main text, our method of liquid detection depends on the observation of recrystallized fine-grained microstructure in the quenched sample. Therefore, in order for the new fine-grained microstructure to be observable, we found that the sample material must first be made to undergo sufficient grain growth. That is, without sufficient grain growth, the XRD signal from new fine grains may be obscured by the signal from the original fine-grained microstructure of the sample. Grain growth in the samples was promoted by laser annealing at sufficiently high temperature, using either continuous-wave (CW) or SMP modes. As we noted above, the successful observation of a new fine-grained microstructure from quenched melt depends on sufficient prior re-crystallization of the sample. In subsequent analysis, we found that the samples above 80 GPa were not re-crystallized sufficiently before melting; therefore, in order to clearly observe the new fine-grain microstructure, we performed an image background subtraction (using the previous XRD image as a background for the XRD under examination). Therefore, the two data points at the highest pressures were arbitrarily assigned less weight in the fitting of the melting curve function. We also performed a separate fit of the melting curve function with the highest pressure points excluded, and confirmed that such a fit did not deviate significantly from the fit obtained using all of the experimental points. The lowest temperature showing the appearance of a randomly oriented fine-grain microstructure was recorded as the melting temperature at a given pressure.
An example of a typical observed sequence of grain growth, including the highly textured fine-grain microstructure transition at 300 to 400 K below melting, followed by an abrupt appearance of a new fine-grained microstructure after melting, is shown in Supplementary Figure 1.

Microstructure transition and preferred orientation in molybdenum

In some of the experimental runs, the two MgO crystals of the thermal insulation layers were closely aligned with respect to each other, and in some cases they were not. The diffraction from MgO, at sufficiently high pressures, spread out over some range of the azimuthal angle, indicating the breaking of the single-crystal layers into smaller crystallites oriented in approximately the same orientation as the starting single crystal. In cases where the MgO crystals were closely aligned, the XRD patterns from each of the layers overlapped. In cases where the two starting MgO crystal layers were not aligned in the same orientation, we could distinguish the XRD patterns from each of the layers. In these cases, where the two MgO layers were misaligned with one another in azimuthal orientation, we observed two distinct groups of Mo (110), (200) and (211) reflections, each aligned with one of the two groups of MgO reflections, above the microstructure transition temperature. The fact that Mo aligned with both MgO layers indicates that the Mo alignment is a Mo-MgO boundary phenomenon, which can be explained by epitaxial growth.

Molybdenum carbide

In a few cases, we found that crystalline XRD from a minor Mo carbide phase emerged in samples quenched from T > ~2600 K. The XRD of this Mo carbide phase in samples quenched from T > ~3000 K showed no crystalline peaks, but instead presented as a single weak and diffuse Debye ring. This signified the melting of the phase. Upon further annealing of the sample at T < ~3000 K, the carbide phase with crystalline diffraction peaks re-appeared. The existence of the minor Mo carbide phase has the slight effect of lowering the Mo melting point, as shown by the open squares in Supplementary Figure 3b.

Laser system

The experiment described here used a laser-heated diamond anvil cell (LH DAC) setup with an in-situ synchrotron x-ray micro-diffraction probe and double-sided heating, as described elsewhere 1. The laser was electronically modulated to follow a square-shaped power profile (Supplementary Figure 4a). We noted that at the beginning of the square pulse output there is a power overshoot, of 1-2 µs in duration, typical for the type of laser used in the LH DAC system (Supplementary Figure 4b). However, the power overshoot of the laser does not lead to a temperature overshoot in the sample, owing to its sufficiently short time scale compared to the time scale of the thermal response of the sample in the experimental setup used (see Sample temperature response section).

A typical experiment

In our experiment, the laser pulses, temperature measurements, thermal imaging and in-situ XRD were synchronized as shown in the "box-car" timing diagram in Supplementary Figure 5. Time-resolved temperature measurements with millisecond resolution (Supplementary Figures 6, 7), in combination with gated thermal imaging, were used to record temperature fluctuations during individual pulses. Two-dimensional (2-D) thermal images of the hot spots on both sides of the sample, together with their positions relative to the spectrometer pinholes, were recorded and displayed on viewing monitors (Figure 7d in the main text).
The initially set laser focus positions and sizes were adjusted in order to obtain precise alignment of the center of the hot spot with the spectrometer pinhole; the x-ray focus position was also aligned with the laser heating spot and the spectrometer pinhole (Figure 7c in the main text). The position of the sample with respect to the laser was also adjusted, if necessary, in order to optimize the symmetry of the hot spots on both sides of the sample. After such initial adjustments, the position of the sample relative to the x-ray beam and spectrometer pinhole was kept constant throughout the remainder of the given experimental run; however, the focus size and position of the laser beams with respect to the spectrometer pinhole could be adjusted if deemed necessary. In the majority of runs, at different pressure points, the laser power vs. temperature relationship remained linear even up to temperatures significantly higher than the previously reported 2-5 Mo melting point temperatures (Supplementary Figure 7). The combination of short single-pulse heating and encapsulating the sample in a micro-fabricated single-crystal assembly 6 (Figure 7a,b in the main text) allowed us to repeatedly reach previously unattainable and stable pressure and temperature conditions for Mo (Supplementary Figure 7).

Numerical model

The commercially available Comsol 5.0 program and its materials properties database were used to conduct numerical heat flow calculations. In the model, a thin Mo sample disk is embedded into an MgO matrix, which acts as the pressure-transmitting medium and as a thermal insulating layer. The laser power is delivered onto both sides through the optically transparent MgO and couples with the Mo sample surface. A thin platelet of molybdenum, encapsulated on all sides with optically transparent oxide layers (e.g., MgO, Al2O3), is placed and compressed between two opposing single-crystal diamond windows (i.e., anvils of the DAC). A rectangular pulse of radiant energy is introduced to the encapsulated sample through the diamond + oxide windows by two laser beams in a double-sided LH-DAC arrangement 1. Variations of finite-element (FE) approaches have been used by others to obtain numerical solutions for either steady-state [8][9][10][11][12] or transient [13][14][15] temperature distributions in laser-irradiated samples in this type of DAC geometry. In the present study, we solve the time-dependent heat equation

ρC ∂T/∂t = ∇·(k∇T) + Q,

where k, C, ρ, Q, T, and t denote thermal conductivity, heat capacity, density, heating power per unit volume (laser heat source), temperature, and time, respectively. The latent heat of melting/solidification is modeled with the apparent heat capacity approximation. Assuming that the phase transformation from solid to liquid occurs continuously over a finite temperature interval ∆T, the expression for C used in the heat equation may be formulated as in equation (S2), where θ is a function representing the fraction of the solid. The CL term in equation (S2) gives the distribution of the latent heat in the interval ∆T, where L and αm denote the latent heat and the mass fraction of the melt, respectively. Similarly, k and ρ are formulated in an analogous way. The heat equation is usually solved for the 3-dimensional DAC geometry using a 2-dimensional axisymmetric approximation. The set of boundary conditions listed in Supplementary Table 1 is used in the calculations of the present study (Supplementary Figure 8).
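As a simplified, self-contained illustration of the apparent-heat-capacity treatment described above, the following one-dimensional finite-difference sketch solves the time-dependent heat equation for a laser-heated slab. It is not the authors' two-dimensional axisymmetric Comsol model; the material properties, source term, geometry, and boundary handling are illustrative assumptions only.

```python
# 1-D explicit finite-difference sketch of rho*C*dT/dt = d/dx(k dT/dx) + Q with the
# latent heat folded into an apparent heat capacity. All values are placeholders,
# not high-pressure Mo properties or the geometry used in the experiments.
import numpy as np

T_m, dT_int = 3500.0, 50.0       # K: assumed melting point and smearing interval
L_latent = 390e3                 # J/kg: assumed latent heat of melting
rho, k = 10_000.0, 80.0          # kg/m^3, W/(m K): assumed density and conductivity
c_s = c_l = 450.0                # J/(kg K): assumed solid/liquid heat capacities

def apparent_c(T):
    """Sensible heat plus latent heat smeared over the interval dT_int."""
    theta = np.clip((T_m + dT_int / 2 - T) / dT_int, 0.0, 1.0)   # solid fraction
    in_interval = (np.abs(T - T_m) < dT_int / 2).astype(float)
    return theta * c_s + (1.0 - theta) * c_l + in_interval * L_latent / dT_int

# Explicit update on a 5-um slab; the ends are held at 2000 K as a crude stand-in
# for the MgO/diamond heat sink, with a uniform volumetric source Q in the interior.
n, dx, Q = 101, 5e-6 / 100, 4e16
T = np.full(n, 2000.0)
dt = 0.2 * rho * c_s * dx ** 2 / k        # conservative explicit-stability step
for _ in range(100_000):
    lap = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx ** 2
    T[1:-1] += dt * (k * lap + Q) / (rho * apparent_c(T[1:-1]))
print(f"peak temperature ≈ {T.max():.0f} K")
```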
Temperature-dependent values of the thermophysical properties of Mo, the insulation layers, and the diamonds were obtained from the materials database. A series of calculations was performed, wherein the boundary heat-source term, Qb, was incrementally increased from zero until a substantial volume of liquid Mo was found in the numerical solution. The calculations were performed assuming values of the thermophysical properties at ambient pressure for all the materials.

Sample temperature response

Our time-dependent heat flow model (see Numerical model section) indicated that the steady-state heating condition in an SMP-heated LH DAC should be achieved 100-200 µs after the switching on of the laser, and remain stable for the remainder of the square pulse. Indeed, stable heating, wherein the temperature response closely followed the modulation shape of the heating laser, was achieved when pulses were shorter than 5-20 ms. However, when heating to higher temperatures, ~2800 K and above, pulses longer than ~20 ms sometimes resulted in temperature instabilities and a runaway temperature response. Sudden temperature spikes sometimes occurred, with an overshoot of the expected temperature by several thousand K for several milliseconds, followed by a decrease in temperature. Constraining the heating pulse duration prevented the sudden runaway heating in most of our experimental runs. We note that a continuously heated LH DAC, as was used in the past 2,4,5, is not a reliable approach for studying the melting of molybdenum. Power levels approaching the maximum 200-watt output of our laser heating system were needed to reach the required high-T conditions. We also found that at such high laser power output, a significantly longer time was needed to reach steady-state temperature conditions than was predicted by our heat flow model. The heat flow model used in this study, which assumes a constant temperature of the diamonds, is only approximately valid when very high heating laser power is used. We suspect that the application of very high laser power likely leads to significant heating of the diamonds and the surrounding DAC components, which invalidates one of the model assumptions. In light of this, proper characterization of the temperature state in our Mo melting experiments depended strongly on the use of time-resolved thermometry.

Melt detection limit

Each solution in the series of calculations performed was used to calculate the liquid-phase distribution function θl(r, z), which quantifies the mass fraction of the liquid phase per unit volume in a specific LH DAC geometry. Due to the experimentally unavoidable thermal gradient, the volume of the melt is expected to be small compared to the total volume of the sample. Therefore, in the quenched samples, the volume of the material quenched from a liquid state will be small compared to the total volume. In our experiments, the sample volume was probed by a microfocused x-ray beam, normal to the flat sample surface and centered at r = 0. The radial intensity distribution of the microfocused x-ray beam was approximated by a pseudo-Voigt function Ix(r), using experimentally constrained parameters 1. The FWHM of Ix(r) is comparable in size to the radial dimension of the volume of the material quenched from the liquid state; therefore, a significant fraction of the observed diffracted signal intensity is expected to originate from the unmolten, solid material.
Consequently, a dimensionless parameter XL was defined to quantify the fraction of the detected diffracted signal intensity originating from the material quenched from a liquid state; in its definition, rs and zs denote the radius and the thickness of the sample, respectively. The numerically computed function XL(T) was used to estimate the errors in the melting temperature measurements, as described in the main text. The results of the calculations led us to conclude that our experimental probe for melting detection is sufficiently sensitive for detecting the melt volume produced in the double-sided heating setup we used and should not result in a significant TM over-estimation, as discussed in the main text. We note that a single-sided laser heating approach, in contrast to double-sided heating, produces a much smaller volume of melt (Supplementary Figure 9), and can lead to a large TM over-estimation error if used in combination with a bulk probe such as XRD.

Additional sample notes

The majority of the data points correspond to runs done with a fresh sample. However, several of the reported data points correspond to runs done on samples that had been used in a previous run. The experimental pressures (P) and the obtained melting temperature data points (TM) corresponding to the use of fresh vs. re-used samples are tabulated in Supplementary Table 2.

Demonstration of the proposed methodology on the pure iron system

The quenched-microstructure method for studying melting was confirmed with pure iron (Fe) at a relatively low pressure of 36 GPa. Fe may be the most studied system in the literature 16 for high-pressure melting determination, displaying converging results at pressures below 50 GPa [16][17][18][19][20][21]. This makes the melting of Fe a suitable case for demonstration of the proposed methodology. The study on Fe was carried out at a pressure of 36(3) GPa. The sample loading was identical to that of Mo, and single-crystal MgO was used for thermal insulation. At this pressure and at room temperature, Fe is a hexagonally close-packed (HCP) metal (ε-Fe). Moreover, at this pressure, Fe is commonly expected to undergo a phase transition from HCP to a face-centered-cubic (FCC) polymorph (ɣ-Fe) at a temperature of 1500(200) K (Tε-ɣ). Using the same approach as described for Mo, the ε-Fe microstructure showed incremental coarsening after each heating pulse at T < Tε-ɣ (Supplementary Figure 10a). The XRD of Fe quenched from temperatures Tε-ɣ < T < TM showed a mixture of coarse ɣ-Fe grains and ε-Fe (Supplementary Figure 10b). When heating Fe to a sufficiently high temperature, T > TM, and quenching rapidly, fine-grained and continuous Debye rings were observed in the XRD images (Supplementary Figure 10c). The appearance of fine-grained Debye rings was accompanied by an abrupt weakening or disappearance of Bragg reflection spots from larger ɣ-Fe grains. The TM of Fe [T = 2760(100) K] at a pressure of 36(3) GPa, observed in this study using the quenched-microstructure approach, agrees remarkably well with the majority of the most recent reports [16][17][18][19][20][21]. Supplementary Figure 11 shows the TM and Tε-ɣ obtained using the quenched-microstructure method of this study, overlaid on the Fe phase diagram from the literature 17,18,[20][21][22]. It is worthwhile to note that, in the case of Fe, the fine-grained microstructure quenched from T > TM corresponds to the high-temperature phase, ɣ-Fe, and not to ε-Fe.
This gives additional confirmation that the fine grains we observe are a high-temperature phenomenon and correspond to melting.
3,802.4
2017-03-01T00:00:00.000
[ "Materials Science", "Physics" ]
A Particle Swarm Optimization With Adaptive Learning Weights Tuned by a Multiple-Input Multiple-Output Fuzzy Logic Controller

In a canonical particle swarm optimization (PSO) algorithm, fitness is a widely accepted criterion when selecting exemplars for a particle, which exhibits promising performance on simple unimodal functions. To improve a PSO's performance on complicated multimodal functions, various selection strategies based on the fitness value have been introduced in the PSO community. However, the inherent defects of fitness-based selection still remain. In this article, the novelty of a particle is treated as an additional criterion when choosing exemplars for a particle. In each generation, a few elites and mavericks, which have better fitness and novelty values, are selected and saved in two archives, respectively. Hence, in each generation, a particle randomly selects its own learning exemplars from the two archives. To strengthen a particle's adaptive capability, a multiple-input multiple-output fuzzy logic controller is used to adjust two parameters of the particle, i.e., an acceleration coefficient and a selection proportion of elites. The experimental results and comparisons between our newly proposed PSO, named MFCPSO in this article, and six other PSO variants on the CEC2017 test suite with four different dimension cases suggest that MFCPSO exhibits very promising characteristics on different types of functions, especially on large-scale complicated functions. Furthermore, the effectiveness and efficiency of the fuzzy-controlled parameters are discussed based on extensive experiments.

Optimization problems play an important role in various fields, such as engineering, data mining, and scientific problems [1], [2], [3], [4]. Although some traditional analytical methods still offer favorable performance on some simple optimization tasks, they do not display promising characteristics on multimodal, large-scale, or noisy problems. To deal with such problems, evolutionary algorithms (EAs) have attracted much attention in the last decades due to their reliable and comprehensive performance. The particle swarm optimization (PSO) algorithm, as a popular EA, was proposed by Kennedy and Eberhart [5], [6]. Although a single particle in PSO has very low intelligence, collective behaviors cause a population to display a powerful capability in optimizing various problems [3], [7], [8]. However, similar to other EAs, the canonical PSO also suffers from premature convergence when optimizing complicated problems, although it can achieve a high convergence speed. It is generally accepted by the PSO community that maintaining an appropriate population diversity is beneficial for preventing premature convergence and improving the exploration ability [9]. However, merely seeking population diversity is not a practical and feasible approach, because it is harmful to the exploitation ability of PSO. Thus, some researchers have focused on achieving a favorable balance between exploration and exploitation by proper parameter adjustments [10], [11], [12], [13], [14] and learning models [15], [16], [17]. Generally, an effective learning model of a particle relies on its exemplars and the corresponding learning weights.
In the canonical PSO [5], [6], a learner particle selects its own historical best position and the global best particle, measured by fitness values, as its exemplars to perform the learning process. This simple fitness-based selection of exemplars enables the canonical PSO to offer very promising and efficient performance on simple unimodal functions. However, a selection method that exclusively considers the fitness may cause the population to be easily trapped into local optima. Hence, to overcome the inherent weaknesses of the fitness-based selection, some studies incorporate various disturbances during the search process, which can be deemed a randomness-based selection, intending to improve the exploration capability [18], [19].

Unlike the pursuit of a solution with the best fitness in many EAs, searching for a system without an explicit objective has captured some researchers' attention in the artificial life field [20]. A widely accepted method in this field is generating a complex artificial system with higher novelty rather than higher fitness. A few studies verify that novelty-based search is immune to problems of deception and local optima, since novelty-based search entirely ignores an explicit objective. Some results support the counter-intuitive conclusion that disregarding (or partially disregarding) the objective in this way may be beneficial for reaching the objective [21], [22], [23]. However, it would be misleading to conclude that novelty-based search dominates the traditional fitness-based search. In fact, novelty-based search sacrifices exploitation capability even though it is favorable for the exploration ability. In other words, fitness-based search and novelty-based search have their own merits. Thus, rationally and efficiently utilizing the two search mechanisms may bring very comprehensive performance to PSO.

Based on the above discussions, this article proposes a PSO variant based on a multiple-input multiple-output (MIMO) fuzzy logic controller, named MFCPSO. In each generation, two archives are used to save a few selected elites and mavericks, which have better fitness and higher novelty values, respectively. During the search process, each particle randomly selects one particle from each of the two archives as its exemplars. Moreover, to efficiently utilize the two exemplars, two acceleration coefficients, which can be regarded as the learning weights of the two exemplars, and a selection ratio are adjusted by the MIMO fuzzy logic controller. The main characteristics of MFCPSO can be summarized as follows. 1) Instead of using fitness as the single criterion for exemplar selection, novelty is considered as an additional criterion. 2) In each generation, a particle randomly selects two exemplars from the two types of candidate exemplars, which, respectively, have better fitness and higher novelty. Thus, the particle has two exemplars with distinct properties. 3) During the search process, the weights with which a particle learns from the two exemplars are controlled by a fuzzy logic controller, aiming to take advantage of the distinct merits of the exemplars.

The rest of this article is organized as follows. Section II presents the framework of the canonical PSO and reviews some PSO studies. Details of MFCPSO are described in Section III. The experimental results and corresponding discussions are detailed in Section IV. Finally, Section V concludes this article.
A. Canonical PSO

In PSO, a particle i at generation t is associated with two vectors, i.e., a position vector X_i^t = (x_{i,1}^t, x_{i,2}^t, ..., x_{i,D}^t) and a velocity vector V_i^t = (v_{i,1}^t, v_{i,2}^t, ..., v_{i,D}^t), where D represents the dimension of the problem under study. The vector X_i^t is regarded as a candidate solution, while the vector V_i^t is treated as the search direction and step size of the particle i at generation t. During the search process, the particle adjusts its flight trajectory based on two vectors, namely its personal historical best position PB_i^t and its neighborhood's historical best position NB_i^t, according to (1) and (2):

v_{i,j}^{t+1} = w * v_{i,j}^t + c_1 * r_{1,j} * (pb_{i,j}^t − x_{i,j}^t) + c_2 * r_{2,j} * (nb_{i,j}^t − x_{i,j}^t),    (1)
x_{i,j}^{t+1} = x_{i,j}^t + v_{i,j}^{t+1},    (2)

where w represents an inertia weight determining how much information from the previous search is preserved; c_1 and c_2 are two acceleration coefficients deciding the relative learning weights for PB_i^t and NB_i^t, respectively; r_{1,j} and r_{2,j} are two random numbers uniformly distributed in the interval [0, 1]; and x_{i,j}^t and v_{i,j}^t represent the jth dimension values of X_i^t and V_i^t, respectively. Note that, when a particle i regards all other particles as its neighbors, NB_i^t is the historical global best position of the entire population.

B. Study of PSO

It can be observed from (1) that three parameters (i.e., w, c_1, and c_2) and two learning exemplars (i.e., PB_i^t and NB_i^t) play crucial roles in improving the exploration and the exploitation abilities. Thus, the majority of PSO variants focus on parameter adjustments and learning exemplar selections, which are briefly reviewed hereinafter.

1) Parameters Adjustments: There is a general consensus in the PSO community that a population should pay more attention to exploration in the early search stage and to exploitation in the later search stage. Thus, various time-varying parameters have been proposed in the last decades. For instance, the most ubiquitous update rule of w is a linear decrease from 0.9 to 0.4 over the entire search stage [10]. Motivated by this study, Ratnaweera et al. [24] further advocated a PSO with time-varying c_1 and c_2 in HPSO-TVAC. The experimental results show that a larger c_1 is beneficial for keeping the diversity of the population in the early search stage, while a larger c_2 helps speed up the convergence in the later search stage. However, considering that the search process of PSO is nonlinear and complicated, various nonlinearly varying strategies have been proposed to tune the parameters [11], [12], [13], [25], aiming to give particles diverse search behaviors. To lay out a more flexible and satisfactory adjustment of the crucial parameters, various adaptive adjustments have been proposed in the last few years [26], [27]. For instance, the adjustments of w, c_1, and c_2 in [27] no longer rely (or rely only) on iteration numbers. Instead, particles' fitness [14], [28], [29], [30], [31] and velocity [32] are selected as criteria when adjusting the parameters. Extensive experimental results verify that the adaptive strategies can exhibit a proper trade-off between exploration and exploitation, and thus endow PSO with more comprehensive and reliable performance.

2) Learning Exemplars Selections: In the canonical PSO, the global version PSO (GPSO) and the local version PSO (LPSO) are two basic topological structures used when a particle chooses its learning exemplars [15]. Generally, it is a common strategy that a particle selects its own historical best position and its neighbors' historical best position as learning exemplars. However, such exemplar selection cannot efficiently deal with the deception problem present in complicated multimodal tasks.
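Before moving on, as a concrete illustration of the canonical update rules in (1) and (2) above, the minimal NumPy sketch below implements the global-best variant of the canonical PSO; the parameter values, search bounds, and the sphere objective are illustrative choices, not settings used later in this article.

```python
# Minimal global-best canonical PSO implementing eqs. (1)-(2); illustrative settings.
import numpy as np

def canonical_pso(f, dim=10, n=30, iters=1000, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pb, pb_f = x.copy(), np.apply_along_axis(f, 1, x)   # personal bests PB_i
    nb = pb[pb_f.argmin()].copy()                        # global best as NB_i
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pb - x) + c2 * r2 * (nb - x)   # eq. (1)
        x = np.clip(x + v, lo, hi)                             # eq. (2), kept in bounds
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pb_f
        pb[improved], pb_f[improved] = x[improved], fx[improved]
        nb = pb[pb_f.argmin()].copy()
    return nb, pb_f.min()

best_x, best_f = canonical_pso(lambda z: float(np.sum(z * z)))   # sphere function demo
print(f"best f = {best_f:.3e}")
```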
To overcome this shortcoming, many researchers adopt comprehensive information from multiple particles to generate exemplars for a particle. In this sense, the comprehensive learning strategy [34], the orthogonal learning strategy [35], the interactive learning strategy [36], the dimensional learning strategy [37], and the multiple exemplars strategy [31] are remarkable works. Motivated by the division of labour in human society, Li et al. [38] proposed a self-learning PSO (SLPSO) in which particles are assigned four different roles according to the distinct local fitness landscapes that the particles belong to. Accordingly, the different roles, representing four distinct learning strategies, enable the particles to independently deal with various situations. Moreover, Xia also proposed a fitness-based multirole PSO (FMPSO) [29], in which different particles in a subpopulation play different roles and then select their own learning exemplars to perform distinct search behaviors. Extensive experiments in these studies demonstrate that assigning different roles to different particles according to their properties is a promising strategy to satisfy the distinct requirements of different search stages.

A. Motivations of MFCPSO

In the majority of PSO variants, it is a widely accepted strategy to evaluate a particle with respect to the specific objective function. Based on the measured results, only particles with higher fitness have a greater chance of becoming exemplars. Although the fitness-based exemplar selection mechanism is intuitively reasonable, it may cause a population to be easily trapped into local optima when optimizing complicated multimodal problems. In recent years, on the contrary, many studies in artificial life have focused on tasks without explicit objectives. For instance, a common approach is to create an open-ended system by searching for good behavioral novelties instead of pursuing high fitness values [21], [22]. A few studies on neural networks [39] and robotics systems [21], [23] also verify that the novelty-based search process can offer very promising properties on many tasks. Why use novelty instead of fitness as a driving force for individuals? The motivation is that a deceptive fitness landscape in a complicated problem may cause a search algorithm following the fitness gradient to perform worse. On the contrary, by ignoring the objective fitness entirely (or partially), the algorithm cannot be (or is not easily) deceived with respect to the objective.

In the following, a maze experiment based on PSO is used to illustrate the performance of fitness-based and novelty-based driving. In the experiment, 3 maze maps with different difficulties are introduced, denoted as the easy map, the medium map, and the hard map. To illustrate the comprehensive characteristics of the fitness-based driving force and the novelty-based driving force in different circumstances, six experiments are performed. Concretely, in each experiment, a population with 30 particles is randomly generated in a shaded rectangle located in the upper left corner of the map. Then, each particle selects its own learning exemplars based on fitness or novelty to perform the search process. After 1000 generations, the population stops its search. The final positions of all particles over six typical runs on the different maps are shown in Fig. 1. From the results of Fig. 1(a) and (d), we can see that the fitness-based driving force can help all particles converge to the goal position.
On the contrary, no particle can find the goal position under the novelty-based driving force, though the population has a higher diversity. On the medium map, which has many simple traps, only two particles in the population can reach the goal position under the fitness-based driving force, while a few particles under the novelty-based driving force can reach the area around the goal position. Moreover, the population under the novelty-based driving force still keeps a higher diversity. On the hard map, many difficult traps cause the fitness-driven population to fall into them, and all particles end up far away from the goal. On the contrary, a few particles driven by novelty reach the goal region. It can be observed from the comparison results that the fitness-based driving force yields more favorable characteristics than the novelty-based driving force on simple problems that have no traps. On the contrary, the novelty-based driving force dominates the fitness-based driving force on complicated problems that have many difficult traps.

Thus, inspired by the advances in artificial life and the comparison results introduced above, we consider that the novelty value, as well as the fitness value, can be regarded as a criterion when choosing exemplars for a specific particle. As a result, the particle can extract different knowledge from two different types of exemplars, which are separately selected based on the two criteria. Concretely, an exemplar with higher novelty can enhance the particle's exploration ability, while an exemplar with better fitness can improve the particle's exploitation ability. Thus, we regard adjusting the learning weights of exemplars that have higher novelty or fitness values as a promising strategy to satisfy the varying exploration and exploitation requirements of different optimization stages.

During the last decades, fuzzy logic theory has attracted many scholars' attention due to its superior uncertainty- and noise-handling ability arising from the usage of human-like linguistic variables [40], [41]. Furthermore, fuzzy logic theory has also exhibited distinctive and outstanding characteristics in the control field [42], [43], [44] and the optimization field [45], [46]. Thus, in this study, we apply a MIMO fuzzy logic controller to tune a particle's learning weights for the two different types of exemplars, which have higher novelty or fitness values. Relying on the fuzzy-controlled weights, particles in different search stages pay different attention to fitness or novelty (i.e., exploitation or exploration), and thereby satisfy the distinct requirements of different search stages. To implement the aforementioned motivations, three main steps are involved in MFCPSO. First, particles with better fitness values (i.e., elites) or higher novelty values (i.e., mavericks) are separately saved in two archives. Second, each particle randomly chooses its two exemplars from the two archives in each generation. Last, a fuzzy logic system is used to adjust each particle's parameters, and thereby control its learning weights on the two exemplars. Detailed information on MFCPSO is introduced as follows.

B. Saving Elites and Mavericks

In this study, the two exemplars of a particle i in the canonical PSO, i.e., PB_i and NB_i, are replaced by two distinct exemplars, i.e., an elite and a maverick, which have a promising fitness value and a high novelty value, respectively.
Because the fitness associated with a function value is a widely known term in the EA field, its definition is not discussed in this part. Since a maverick is measured by its "novelty" value, how to define a particle's novelty must be dealt with first. In this study, for simplicity, a particle's novelty is calculated as the average distance between the particle and its K nearest neighbors. As a result, the novelty of the particle X_i can be defined as follows:

nov(X_i) = (1/K) * Σ_{j=1}^{K} dist(X_i, µ_j),    (3)

where µ_j is the jth-nearest neighbor of X_i, and dist(X_i, µ_j) denotes the Euclidean distance between X_i and µ_j.

In each generation, the personal best positions (i.e., PB_i) of all particles are sorted according to their fitness, and the p · N best results are saved in an archive A_E, where N denotes the population size. Note that a solution's fitness fit(X) denotes an error value f(X) − f(X*), where f(X) is the function value of the solution X, and X* denotes the real global optimum of a problem. Meanwhile, the current positions X_i of all the particles are also sorted according to their novelty measured by (3), and then the (1 − p) · N best positions, in terms of the novelty value, are saved in another archive A_M. During the search process, the update of the two archives can be described by Algorithm 1 (Update_Archives). Without loss of generality, minimization problems are considered in this article. Considering that not only do different particles demonstrate distinct properties in a generation, but also the same particle may show diverse characteristics in different generations, we assign a distinct p_i to each particle i, aiming to satisfy the distinct requirements of different search processes. Note that p · N in Algorithm 1 denotes rounding the value p · N up to an integer.

C. Update of Velocity

In MFCPSO, the update of the velocity is based on the two archives, i.e., A_E and A_M. In each generation, the particle i randomly selects two exemplars, named E_i and M_i, from A_E and A_M, respectively. According to the definitions of A_E and A_M, we can see that the particle i can obtain different knowledge from the two exemplars. Because different particles may display distinct properties and need to execute distinct search behaviors, each particle should have different learning weights on the two exemplars. As a result, the particle i updates its velocity based on the following:

v_{i,j}^{t+1} = w * v_{i,j}^t + c_{i,1} * r_{1,j} * (e_{i,j} − x_{i,j}^t) + c_{i,2} * r_{2,j} * (m_{i,j} − x_{i,j}^t),    (4)

where e_{i,j} and m_{i,j} are the jth dimension values of E_i and M_i, respectively, and c_{i,1} and c_{i,2} are the two acceleration coefficients of the particle i. We can observe from (4) that c_{i,1} and c_{i,2} decide the weights with which the particle i learns knowledge from the exemplars E_i and M_i, respectively. Considering that E_i and M_i have, respectively, a higher fitness value and a greater novelty value, we regard a larger c_{i,1} and a smaller c_{i,2} as beneficial for learning much helpful information from elites while extracting little knowledge from mavericks. As a result, the exploitation capability of the particle i can be improved. On the contrary, a smaller c_{i,1} and a larger c_{i,2} can help the particle i learn much more knowledge from M_i rather than from E_i. Thus, the exploration ability of the particle i can be enhanced. To satisfy the distinct requirements of the fitness landscape that a particle belongs to, the particle should be able to adjust its own acceleration coefficients in different search stages. In this study, c_{i,1} and c_{i,2} are controlled by a fuzzy logic system.
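The sketch below illustrates the novelty measure in (3) and the elite/maverick archive split described in this section. The values of K and p, the toy population, and the handling of archive-size rounding are illustrative assumptions, not necessarily the exact rules of Algorithm 1.

```python
# Novelty per eq. (3) and a plausible version of the A_E / A_M archive update.
import numpy as np

def novelty(positions, k=5):
    """nov(X_i) = mean Euclidean distance to the K nearest neighbors (eq. (3))."""
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude the particle itself
    return np.sort(d, axis=1)[:, :k].mean(axis=1)

def update_archives(pbest, pbest_fit, positions, p=0.5):
    """A_E: best-fitness personal bests; A_M: most novel current positions."""
    n = len(positions)
    n_elite = int(np.ceil(p * n))                           # p*N rounded up
    elite_idx = np.argsort(pbest_fit)[:n_elite]             # lower error = better
    maverick_idx = np.argsort(-novelty(positions))[:n - n_elite]
    return pbest[elite_idx], positions[maverick_idx]

rng = np.random.default_rng(1)
X = rng.uniform(-5.0, 5.0, (30, 10))
A_E, A_M = update_archives(X.copy(), np.sum(X ** 2, axis=1), X, p=0.5)
print(A_E.shape, A_M.shape)   # (15, 10) elites and (15, 10) mavericks
```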
Details of the control process are introduced in Section III-D. Furthermore, it can be seen from Algorithm 1 that the parameter p determines the sizes of the two archives. Concretely, a greater p gives A_E a larger size. In this condition, the archive A_E can save more PB_i with high fitness values. Thus, a PB_i saved in the archive with a relatively lower fitness also has a chance to be selected as an exemplar E_i for a particle i. Meanwhile, the smaller A_M caused by the greater p saves only a small number of X_i with very high novelty values. As a result, an exemplar M_i randomly selected from A_M can provide more novelty information for the particle i. The above discussion indicates that a greater p can give the particle i more chances to learn from exemplars with relatively lower fitness, as well as bring more novelty information to the particle. In other words, a greater p is beneficial for the exploration ability. On the contrary, a smaller p ensures that only a few PB_i with very high fitness values can be candidate exemplars for the particle i. Thus, we regard a smaller p as favorable for the exploitation ability. Because the particle i may display distinct properties in different search processes, it needs to adjust its p accordingly. The control process of p_i based on a fuzzy logic system is detailed in the following section.

D. Parameters Controlled by Fuzzy Logic

In recent years, various fuzzy logic control methods have been successfully applied in many EAs and offer distinct positive characteristics [46], [47], [48]. Thus, in MFCPSO, a MIMO fuzzy logic system is responsible for determining c_{i,1}, c_{i,2}, and p_i of the particle i, the values of which control the learning weights for the two types of exemplars (i.e., elites and mavericks). The fuzzy control system makes decisions based on three types of information about the particle i.

1) Ideal Control Objective: When utilizing a fuzzy controller to control a system, an ideal control objective is essential. Considering that the search process of a population is a dynamic process, the ideal control objective of the fuzzy controller should be adjusted during the search. In this study, two adjusted ideal control objectives, named the ideal fitness Fit_ideal and the ideal novelty Nov_ideal, need to be set during the optimization process. Concretely, in each generation, the average fitness and the average novelty are regarded as Fit_ideal and Nov_ideal, respectively. Thus, the two ideal control objectives at generation t are defined by (5) and (6), respectively.

2) Fuzzy Control of Parameters: In this study, c_{i,1}, c_{i,2}, and p_i of each particle i are controlled by a MIMO fuzzy logic controller. In each generation t, efit_i^t and enov_i^t of the particle i are the two inputs of the controller, which are defined by (7) and (8), respectively. After the definition of the state variables input to the MIMO fuzzy controller, the two crisp input values (i.e., efit_i^t and enov_i^t) are transformed into fuzzy values via the process of fuzzification. Commonly, there may be significant differences in function values between two different functions. Moreover, from (5), we can observe that the value of Fit_ideal changes dynamically, because the value of fit(X_i^t) may be different in each generation. Thus, it is infeasible to predefine a fixed range for efit_i^t.
In this work, the maximum and minimum values of efit_i^t, denoted max(efit_i^t) and min(efit_i^t), respectively, are determined in each generation. Based on these upper and lower limits, each efit_i^t can be transformed into a fuzzy value as follows. For each input, we define seven fuzzy sets, denoting negative big (NB), negative medium (NM), negative small (NS), zero (ZO), positive small (PS), positive medium (PM), and positive big (PB), respectively. Before the fuzzification process, max(efit_i^t) and min(efit_i^t) need to be calculated. Then, the range [min(efit_i^t), max(efit_i^t)] is divided into six equal parts of width Δ = (max(efit_i^t) − min(efit_i^t))/6. Based on the value of Δ, the membership function applied to efit_i^t is shown in Fig. 2(a). Note that the fuzzification process of enov_i^t is similar to that of efit_i^t and is therefore not described here. In each generation t, after the inputs are converted from real values into fuzzy values, the MIMO logic controller determines c_i^t and p_i^t based on the two sets of fuzzy rules shown in Tables I and II, respectively. To illustrate the fuzzy inference process based on Tables I and II, the two fuzzy rules marked with a star symbol are given as follows: Implication #1: IF efit_i^t is NB and enov_i^t is NB, THEN c_{i,1}^t is NB and p_i^t is PB. Implication #2: IF efit_i^t is PB and enov_i^t is PB, THEN c_{i,1}^t is PB and p_i^t is NB. In Implication #1, particle i displays the most unfavorable performance measured by fitness value, while it also has the lowest novelty. In this case, we can regard particle i as possibly trapped in a local optimum. Thus, a greater c_{i,1}^t helps the particle learn more knowledge from elites, which is beneficial for improving fitness. Meanwhile, a greater p_i^t enables the particle to select its two exemplars from large archives A_E and A_M. As a result, the particle can jump out of the local optimum and then pay more attention to searching other promising regions. On the contrary, in Implication #2, particle i offers the most favorable performance measured by both fitness and novelty. In such a situation, on the one hand, the particle needs a greater c_{i,1}^t to increase its learning weight on other elites, intending to further improve its fitness; on the other hand, the particle can also use a larger p_i^t to find out whether better solutions exist in other regions. After the fuzzy inference process, the obtained results, i.e., c_{i,1}^t and p_i^t, are fuzzy values located in a range of output values. To obtain crisp values, defuzzification, the exact opposite of fuzzification, needs to be conducted. Note that the membership functions of p_i^t and c_{i,1}^t are shown in Fig. 2(b) and (c), respectively. The defuzzification strategy applied in our MIMO controller is the centroid of area, which defines the output as f = (Σ_{k=1}^{|R|} µ_{B_k}(y_k) · y_k) / (Σ_{k=1}^{|R|} µ_{B_k}(y_k)), where f is the defuzzified output; |R| is the number of rules of the fuzzy system; and µ_{B_k}(y_k) and y_k are the membership function and the output of the kth rule, respectively. Finally, the two output variables of the MIMO logic controller, c_{i,1}^t and p_i^t, are fed back to the PSO algorithm module, and each particle's learning weights for elites and mavericks are adjusted accordingly.
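The sketch below shows one way to realize the fuzzification over the dynamically determined range (seven triangular sets spaced Δ apart, as suggested by Fig. 2(a)) and the centroid-of-area defuzzification given above; the triangular shape of the membership functions and the handling of rule firing strengths are assumptions, since Fig. 2 and Tables I–II are not reproduced here.

```python
LABELS = ["NB", "NM", "NS", "ZO", "PS", "PM", "PB"]

def triangular(x, a, b, c):
    """Triangular membership with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(value, lo, hi):
    """Map a crisp input in [lo, hi] to memberships of the seven fuzzy sets;
    the range is split into six equal parts of width delta, as in the text."""
    if hi <= lo:
        raise ValueError("range must be non-degenerate")
    delta = (hi - lo) / 6.0
    centers = [lo + k * delta for k in range(7)]        # peaks of NB ... PB
    return {lab: triangular(value, c - delta, c, c + delta)
            for lab, c in zip(LABELS, centers)}

def defuzzify_centroid(rule_strengths, rule_outputs):
    """Centroid of area over the |R| fired rules:
       f = sum(mu_k * y_k) / sum(mu_k)."""
    num = sum(mu * y for mu, y in zip(rule_strengths, rule_outputs))
    den = sum(rule_strengths)
    return num / den if den > 0 else 0.0
```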
E. Framework of MFCPSO By incorporating the aforementioned components, the general idea of MFCPSO is illustrated in Fig. 3, and its pseudocode is shown in Algorithm 2. The source code can be downloaded.1 A. Benchmark Functions and Peer Algorithms In this work, the CEC2017 test suite is utilized to verify the performance of MFCPSO. In the test suite, 30 benchmark functions are categorized into four different types, i.e., unimodal functions (F1–F3), simple multimodal functions (F4–F10), hybrid functions (F11–F20), and composition functions (F21–F30) [49]. To explore the scalability of MFCPSO, four dimension cases of the CEC2017 test suite, i.e., D = 10, 30, 50, and 100, are tested, while the maximum number of function evaluations (MaxFEs) is set to 10000 × D for each dimension case. Note that six other state-of-the-art PSO variants, including CCPSO-ISM [50], SRPSO [14], GLPSO [51], XPSO [31], TAPSO [52], and AWPSO [4], are chosen as peer algorithms. The parameter settings of all the peer algorithms are summarized in Table III. To obtain statistical results, each peer algorithm is run 51 times independently on each function. The experimental results, measured by the mean value (Mean) and standard deviation (S.D.), on the four dimension cases are detailed in Tables IV–VII, respectively. The best Mean on each function among all algorithms is marked with a shaded background. Moreover, the results of the t-test between MFCPSO and the other six competitors, as well as the rank values (the lower the better) of all the peer algorithms, are also presented in the tables. Concretely, the t-test results recorded as the symbols "+," "−," and "=" denote that MFCPSO is significantly better than, significantly worse than, and statistically equivalent to a competitor algorithm, respectively. The symbols (#)+ and (#)− denote the numbers of "+" and "−" in each column, respectively, and Avg.(Rank) is the average of the Rank values an algorithm attains over all the test functions. Note that a 0.05 level of significance is adopted in the t-test. For instance, the result "2.44+02(1−)" in the first row of Table IV means that the mean fitness of CCPSO-ISM on F1 is 2.44E+02, its Rank value is 1, and MFCPSO is significantly worse than CCPSO-ISM on F1. B. Solutions Accuracy In this part, the comparison results are presented in terms of solution accuracy. The experimental results of all the peer algorithms on the four types of functions are discussed in turn. 1) Unimodal Functions (F1–F3): From Table IV, we can see that MFCPSO and GLPSO achieve the same performance on F2 and F3, measured by solution accuracy. Furthermore, the two algorithms also display almost the same performance on F1 in terms of the t-test result. Moreover, CCPSO-ISM also offers very outstanding performance on this type of functions. However, with increasing problem scale, the results presented in Tables V–VII show that GLPSO exhibits more favorable characteristics than the other competitors. Specifically, GLPSO attains the best mean values on 2 out of the 3 unimodal functions in 30D and 100D.
On the contrary, MFCPSO achieves the best result on one function in 30D and 50D, while it cannot yield the best result on any function in 100D. Although CCPSO-ISM displays the best performance in 10D, its performance deteriorates rapidly. The results indicate that the scalability of CCPSO-ISM and MFCPSO on the unimodal functions is unfavorable. 2) Simple Multimodal Functions (F4–F10): It can be observed from the results that MFCPSO offers the best results on 6 out of the 7 simple multimodal functions in all four dimension cases, followed by TAPSO, which attains the most favorable performance on F4 in all cases. Although GLPSO exhibits unfavorable performance on the seven simple multimodal functions when their dimension is D = 10, it displays very promising characteristics on this type of functions in the higher dimension cases, because its Rank values are 2 or 3 on all the multimodal functions in those cases. The results verify that both MFCPSO and GLPSO have very promising and comprehensive performance on the simple multimodal functions. 3) Hybrid Functions (F11–F20): The comparison results on the 10 hybrid functions in the 4 dimension cases show that CCPSO-ISM attains the most outstanding performance in the 10D and 30D cases. On the contrary, MFCPSO displays the most favorable properties in the 50D and 100D cases. Meanwhile, TAPSO also offers more promising performance in the higher dimension cases (i.e., 50D and 100D) than in the lower dimension cases (i.e., 10D and 30D). The experimental results indicate that MFCPSO and TAPSO dominate the other competitors on this type of functions. 1) t-Test Results: The overall t-test results on the four dimension cases are displayed in Table VIII, in which the symbols "(#)+" and "(#)−" denote the numbers of functions on which MFCPSO is significantly better than and significantly worse than the corresponding competitor algorithm, respectively. The comprehensive performance (CP) is equal to "(#)+" minus "(#)−" over all four dimension cases. 2) Friedman-Test Results: In this part, a set of Friedman tests of the Mean values is applied to compare the performance among all seven peer algorithms in the 10D, 30D, 50D, and 100D cases. D. Effectiveness of Parameters Adjusted by the Fuzzy Controller In this part, the effectiveness of the newly introduced parameters is analyzed. From the experimental results, we can observe that c_{i,1} and p_i controlled by the MIMO fuzzy logic system endow MFCPSO with very promising characteristics. In this section, a set of experiments is conducted to analyze the effectiveness of the two parameters. Due to space limitations, only three unimodal functions (F1–F3) and three composition functions (F21–F23) are selected as test functions, and their dimension is D = 30. 1) Effectiveness of c_{i,1}: As discussed in Section III-C, c_{i,1} determines the weight with which particle i learns knowledge from an elite exemplar. In this part, the performance of the fuzzy-controlled c_{i,1} is examined by comparing it with three different constant values assigned to c_{i,1}. From the experimental results shown in Fig. 4, we can see that c_{i,1} adjusted by the fuzzy controller is beneficial for the convergence speed before the middle search stage on the three unimodal functions. However, the population cannot be further improved during the later search stage, which causes the solution accuracy achieved by the fuzzy-controlled c_{i,1} to be worse than that achieved by the three constant values of c_{i,1}.
On the contrary, the fuzzy-controlled c_{i,1} enables MFCPSO to attain the best solution accuracy on all three composition functions, which have complicated properties. Moreover, MFCPSO also offers a faster convergence speed on F22 and F23, while it displays a significant improvement at the end of the search stage on F21. 2) Effectiveness of p_i: In MFCPSO, the parameter p_i determines how many elites and mavericks are selected as candidate exemplars. From the experimental results shown in Fig. 5, we can observe that the population offers the highest convergence speed on the majority of the unimodal functions when p_i = 0.2. The reason is that a smaller p_i causes only a few elites to be selected as candidate exemplars. In other words, only elites with very good fitness can be chosen, which is favorable for speeding up the convergence, especially on unimodal functions. In contrast, the fuzzy-controlled p_i displays mediocre performance on both convergence speed and solution accuracy on the unimodal functions. However, the population cannot achieve a fast convergence speed on 2 out of the 3 composition functions during the initial search stage when p_i = 0.2. The reason may be that the complex fitness landscapes of these complicated functions prevent the population from finding the true optimal solution when it relies on only a few elite exemplars. On the contrary, the fuzzy-controlled p_i applied in MFCPSO offers the best solution accuracy on these functions, and it also helps MFCPSO achieve a fast convergence process. V. CONCLUSION In the PSO community, applying fitness as the criterion when selecting exemplars for a particle is a popular method. Although the fitness-based selection mechanism plays a positive role in the exploitation ability, it sacrifices the exploration capability when optimizing some multimodal functions. In recent years, some novelty-based selection mechanisms have made systems immune to deception and local optima in artificial life studies, because these selection mechanisms entirely ignore the objective of the specific problem. In this article, to take advantage of the two selection mechanisms, a new PSO variant, named MFCPSO, is proposed. In MFCPSO, fitness-based selection and novelty-based selection are used to select two types of candidate exemplars for a particle, separately named elites and mavericks in this study. In each generation, a MIMO fuzzy logic controller is applied to tune two acceleration coefficients and a selection ratio for each particle. During the search process, two ideal control objectives (i.e., fitness and novelty) are dynamically adjusted. Based on the control objectives, each particle can adjust its own parameters through the fuzzy controller. As a result, not only can each particle perform distinct search behaviors during different search stages, but different particles in a generation can also display various search characteristics. To verify the performance of MFCPSO, the CEC2017 test suite with four different dimension cases was selected as the benchmark set. From the comparison results between MFCPSO and six other PSO variants, we can draw some preliminary conclusions. First, MFCPSO cannot offer very outstanding performance on the simple unimodal functions in the higher dimension cases, though it achieves the most favorable performance on the unimodal functions in the lower dimension case.
Second, on the contrary, MFCPSO exhibits more promising performance on multimodal functions, especially on complicated multimodal functions in the higher dimension cases. Lastly, MFCPSO attains the best overall performance on the CEC2017 test suite across the different dimension cases. Furthermore, the effectiveness of the proposed strategy has also been verified by a set of experiments. From the comparison results we can draw a few preliminary conclusions. First, the fuzzy-controlled c_{i,1} is favorable for speeding up convergence on unimodal functions, especially during the middle search stage. Second, c_{i,1} can help MFCPSO obtain more accurate solutions on complicated multimodal functions. Last, the fuzzy-controlled p_i is more suitable for the complicated functions than for the unimodal functions. Although the newly introduced selection mechanism casts the performance of PSO in a new perspective, some issues need further study. First, how to define an effective criterion for "novelty" should be addressed, since it is crucial for a specific problem. In this article, a particle's novelty is measured by the average distance between the particle and its K nearest neighbors. However, this is not to say that the definition is the optimal choice, because different problems possess their own distinct properties. Hence, it is more realistic to design an appropriate measurement of "novelty" based on a problem's characteristics. In addition, some crucial factors in the MIMO fuzzy logic controller need to be further studied, including the definitions of the fuzzy rules, the membership functions, and the defuzzification strategy. Last, the performance of MFCPSO on complicated real-world applications also needs to be verified. In fact, the optimization process of MFCPSO can be regarded as a response system for a dynamic environment. Based on the MIMO fuzzy logic controller, the optimization system can exhibit proper responses to the dynamic environment. Thus, we will further extend our study to some dynamic optimization problems, including dynamic logistics management and multi-UAV cooperative coverage search.
9,226
2023-07-01T00:00:00.000
[ "Computer Science" ]
CXCL12/SDF-1 and CXCR4 Chemokines are a large family of structurally related chemoattractive cytokines, which have four conserved cysteines forming two disulfide bonds, and act through seven-transmembrane-spanning receptors coupled to heterotrimeric GTP-binding proteins (G-protein-coupled receptors). Chemokines were thought to be signaling molecules that attract leukocytes to sites of inflammation; however, CXC chemokine ligand (CXCL)12 [also known as stromal cell-derived factor (SDF)-1α and pre-B-cell-growth-stimulating factor (PBSF)] is the first member that was shown to be critical for developmental processes, including hematopoiesis (1), cardiogenesis (1–3), vascular formation (2), and neurogenesis (3), as well as the maintenance of tissue stem cells (4). Identification of CXCL12 Our interest is how bone marrow microenvironments regulate hematopoiesis, including B lymphopoiesis. To address this, we tried to identify a cytokine, which was important for B cell development in the marrow. In 1988, Namen et al. identified interleukin 7 (IL-7) produced by a bone marrow-derived stromal cell line as a cytokine, which enhanced the proliferation of B cell precursors. However, several studies suggested that IL-7 was not sufficient to support B lymphopoiesis. Hayashi et al. speculated that at first stage in B cell development, progenitors depended on unidentified molecules produced by the stromal cell line called PA6 alone for proliferation and differentiation into the second stage, where progenitors depended on both PA6-derived factors and IL-7 for proliferation (5). It was unclear whether PA6-derived factors were soluble factors or not in Hayashi's model (5). To address this issue, we cultured bone marrow hematopoietic cells in the absence or presence of PA6 cells separated by a membrane filter, allowing the passage of proteins but not cells. We showed that while very few viable B cell precursors were present 7 days after the culture of bone marrow hematopoietic cells in the presence of IL-7 and absence of PA6 cells, the proliferation of B cell precursors were enhanced in the presence of PA6 and IL-7. These findings suggested the existence of soluble factors produced by PA6 cells that stimulated the proliferation of B cell precursors in the presence of IL-7 (6). We tried to develop more simple culture system suitable for molecular cloning and found that a stromal cell-dependent B cell precursor clone, DW34, which was established from Whitlock-Witte-type culture by limiting dilution on a stromal cell line, could proliferate in the presence of a conditioned medium from PA6 cells (6). An expression cDNA library was prepared from PA6 cells using the vector pME18S, and then more than 10 4 pools were screened for the activity to stimulate the growth of DW34 cells after enforced expression in COS-7 cells, and positive pool was subdivided until a single positive clone was identified. We revealed that a conditioned medium from the positive clone-transfected COS-7 cells had DW34 growth stimulating activity and termed this molecule PBSF (6). The nucleotide sequence and deduced amino acid sequence of the clone were determined and its product was identical to a chemokine called SDF-1α (6, 7). We felt these results somewhat disappointing because chemokines were thought to be rather inflammatory mediators at that time. In 1993, Tashiro et al. 
developed a method for molecular cloning of cDNAs that contain signal sequences, such as those encoding secreted proteins and receptors without the use of specific functional assays, and identified SDF-1α; however, its function was unclear (7). Thus, we revealed that SDF-1α/PBSF (now formally named CXCL12) stimulated the proliferation of B cell precursors (6). Identification of a Receptor for CXCL12 All known chemokine receptors are G-protein-coupled receptors (GPCR) and amino acid sequence is conserved among these molecules. Based on this, we synthesized four degenerate oligonucleotides corresponding to conserved amino acid sequences in transmembrane regions of the chemokine receptors, including murine CXCR2, CCR2, and human HUMSTR, and used them as primers in PCR experiments to identify chemokine receptors abundantly expressed by murine CXCL12 responsive DW34 cells (8). The deduced amino acid sequence of a cDNA yielded by this approach shared 90% amino acid identity with previously identified human HUMSTR/HM89/ LESTR/fusin, a HIV-1 entry co-receptor and designated murine HUMSTR/HM89/LESTR/fusin (now formally named CXCR4) (8). CXCL12 induced an increase in intracellular free Ca 2+ in DW34 cells and CXCR4-transfected Chinese hamster ovary (CHO) cells, suggesting that CXCR4 is a receptor for CXCL12 (8). On the other hand, Bleul et al. and Oberlin et al. demonstrated that human HUMSTR/HM89/LESTR/fusin is a receptor for human CXCL12 (9, 10). The majority of chemokine receptors recognize more than one chemokine, and many chemokines bind to more than one chemokine receptor. However, we and others revealed that mice lacking CXCR4 showed hematopoietic and cardiovascular phenotypes strikingly similar to those of CXCL12 deficient mice, as described below, indicating that CXCR4 is the primary physiologic receptor for CXCL12 in mammals (1-3). Essential Physiological Roles of CXCL12-CXCR4 Signaling To determine the role of CXCL12 in hematopoiesis, we generated and analyzed CXCL12 and CXCR4 deficient mice, which died perinatally. Consistent with the activities of CXCL12 in promoting the proliferation of B cell precursors (6), CXCL12-CXCR4 signaling was essential for the development of B cells from the earliest precursors in fetal liver and bone marrow (1,11). Surprisingly, CXCL12-CXCR4 signaling was also essential for homing of hematopoietic stem cells (HSCs) and neutrophils to fetal bone marrow during ontogeny (1-3, 12). Subsequently, we generated CXCR4 conditionally deficient mice and revealed that CXCL12-CXCR4 signaling was essential for the maintenance of HSCs, the production of immune cells, including B cells, plasmacytoid dendritic cells (pDCs), which expressed high levels of type I interferon (IFN), and were thought to play important roles in antiviral immunity, and NK cells and homing of end-stage B cells, plasma cells into bone marrow (4,11,13). In addition to hematopoiesis, we found that CXCL12-CXCR4 signaling was essential for homing of primordial germ cells (PGCs) to gonads, a cardiac ventricular septal formation and vascularization of the gastrointestinal tract during ontogeny (1)(2)(3). In the meantime, Littmann's group described that CXCR4 was essential for migration of granule cells in appropriate positions in the cerebellum during neurogenesis (3), and besides these additional physiological roles of CXCL12-CXCR4 signaling, other groups revealed its relevant pathological roles. In 1996, Feng et al. 
found that CXCR4 acted as an essential co-receptor for T cell-tropic strains of human immunodeficiency virus type-1 (HIV-1), and Bleul et al. and Oberlin et al. demonstrated that CXCL12 had HIV-suppressive activities (9,10). Furthermore, CXCL12-CXCR4 signaling has been reported to be involved in the migration of cancer cells, including presumptive cancer stem cells, to sites of metastasis and to increase their survival and/or growth in various cancers, such as breast and lung cancers, as well as leukemia and lymphoma. CXCL12-Expressing Cells in Bone Marrow As CXCL12-CXCR4 signaling plays a key role in hematopoiesis, we were prompted to visualize the cells that express CXCL12 in bone marrow. For this, we generated mice with the green fluorescent protein (GFP) reporter gene knocked into the CXCL12 locus and found that CXCL12, as well as stem cell factor (SCF), which is essential for HSC proliferation, was preferentially expressed in a population of stromal cells with long processes, termed CXCL12-abundant reticular (CAR) cells (11)(12)(13). CAR cells are adipo-osteogenic progenitors, which express adipogenic and osteogenic genes, including peroxisome proliferator-activated receptor γ (PPARγ) and Osterix (Osx), and largely overlap with SCF-expressing cells predominantly expressing leptin receptor (Lepr) (13)(14)(15). Histological analysis showed that most HSCs and very early B cell progenitors were in contact with CAR cells (4,11), and experiments using a diphtheria toxin-based system that allows the inducible, short-term ablation of CAR cells in vivo revealed that CAR cells were essential for the maintenance of hematopoietic stem and progenitor cells (HSPCs) in bone marrow (Figure 1) (14). Recently, we found that the transcription factor Foxc1 was expressed preferentially in CAR cells and was essential for CAR cell development and the maintenance of bone marrow niches for HSPCs, up-regulating SCF and CXCL12, which play major roles in HSC maintenance and immune cell production (Figure 1) (15).
FIGURE 1 | CXCL12-abundant reticular (CAR) cells. In adult bone marrow, the transcription factor Foxc1 induces development of CAR cells and maintains bone marrow niches for HSPCs, up-regulating the expression of CXCL12, which is essential for the maintenance of HSCs, common lymphoid progenitors (CLPs), B cells, pDCs, and NK cells, in CAR cells.
Taken together, CXCL12 and CXCR4 have been identified as key spatiotemporal regulators of migratory stem and progenitor cell behavior, and our studies provide considerable new insights into the biology and pathology of tissue stem cells as well as hematopoiesis, vasculogenesis, and neurogenesis, and in some cases, for clinical application in various diseases.
1,969.8
2015-06-12T00:00:00.000
[ "Biology", "Medicine" ]
A Parallel High Speed Lossless Data Compression Algorithm in Large-Scale Wireless Sensor Network In large-scale wireless sensor networks, massive sensor data generated by a large number of sensor nodes need to be stored and processed. Because sensor nodes are limited in energy and bandwidth, a large-scale wireless sensor network is at a disadvantage when the data collected by the sensor nodes are fused and compressed at the nodes themselves. Thus the goals of bandwidth reduction and high-speed data processing should be achieved at the second-level sink nodes. Traditional compression technology cannot adequately meet the demands of processing massive sensor data with a high compression rate and low energy cost. In this paper, Parallel Matching Lempel-Ziv-Storer-Szymanski (PMLZSS), a high speed lossless data compression algorithm that makes use of the CUDA framework at the second-level sink node, is presented. The core idea of the PMLZSS algorithm is parallel matrix matching. The PMLZSS algorithm divides the data compression files into multiple compressed dictionary window strings and prereading window strings along the vertical and horizontal axes of the matrices, respectively. All of the matrices are matched in parallel in different thread blocks. Compared with LZSS and BZIP2 on traditional serial CPU platforms, the compression speed of PMLZSS increases about 16 times and about 12 times, respectively, while the basic compression rate remains unchanged. Introduction With the increase in the production and propagation of data carriers, such as computers, intelligent mobile phones, and sensing equipment, the amount of data in the world has grown rapidly, and the data types have also diversified. The total amount of information in the world has doubled every two years in the last 10 years; the total amount of data created and duplicated was 1.8 ZB in 2011 and will be 8 ZB in the near future. Furthermore, it will grow 50-fold in the next 10 years according to International Data Corp. The three dominant data types are transactional data, represented by electronic business, interactive data, represented by social networks, and wireless sensor data, represented by wireless sensor networks (WSNs). These types occupy 80% to 90% of the total data. The growth rate of unstructured data is much higher than that of structured data [1]. WSNs are considered one of the most important technologies in the new century. They connect to the Internet through a large number of wireless sensors and MEMS (microelectromechanical systems), thus becoming a bridge between the real world and the virtual world of the network. They also allow real-world objects to be perceived, recognized, and managed, thus providing information on the physical environment and other related data for people directly, effectively, and genuinely. In terms of the large scale of a WSN, there are two main points: first, the sensors can be distributed over a vast geographical area, such as a large number of sensor nodes deployed in a large environmental monitoring area, and second, a large number of sensor nodes can be densely deployed in a small geographical area to obtain precise data. Since the Smart Earth plan was proposed by the USA, the large-scale wireless sensor network (LSWSN) has become an important factor in the contest of comprehensive national strength.
The new LSWSNs have been listed as a crucial technology in the economy and national security of America. Furthermore they are a key research field in UK, Germany, Canada, Finland, Italy, Japan, South Korea, and the European Union [2]. As a new technology for acquiring and processing information, LSWSNs have been widely used in military and civilian fields. LSWSN has the characteristics of rapid deployment, good concealment, and high fault tolerance, making it suitable for some applications in the military field. The wireless sensors can be scattered into the enemy military positions through air delivery and long-range projectiles. Those sensors will deploy a self-organizing WSN to secretly collect real-time information in the battlefield at close range [3]. It is also more widely used in civilian fields, such as environmental monitoring and forecasting, medical care, intelligent buildings, smart homes, structural health monitoring, urban city traffic information monitoring, large workshop and warehouse management, safety monitoring of airports, and large industrial parks [4][5][6][7][8]. According to Forrester, the ratio of the number of transactions of the Internet of Things to the business of the Internet will be 30 : 1 in 2020 due to the application and popularization of LSWSNs [9]. However, the application of LSWSNs has encountered many challenges in their rapid development process. For example, on the one hand, a large number of redundant data are generated by sensor nodes whose forwarding between the nodes causes a lot of energy to be wasted at the nodes and the delay of network transmission; on the other hand, as shown in Figure 1, the second-level sink node centralizes massive sensor data from the first-level sink node, seriously affecting the responses of the application layer. This series of problems undoubtedly restrict the further development of LSWSNs. According to the characteristics of the LSWSN, the research focuses on two aspects. (a) The Data Compression Algorithm at the Sensor Nodes. The algorithm reduces the transmission of redundant data, causing less energy wastage and thus lengthening the service life of the LSWSN. The study [10] shows that the energy consumption of data communication is much higher than that of data operation at sensor nodes, as the energy required to transmit one bit is about 480 times that of executing one addition operation. Some data compression schemes of sensor nodes have been proposed, such as the lifting wavelet transform for wireless sensor networks [11], the coding-by-ordering data compression scheme [12]. is connected with and integrated into the dynamic network. Meanwhile a large number of nodes in the network carrying out real-time data collection and information interaction have produced massive sensor data to be stored and processed. As shown in Figure 1, massive sensor data would finally converge at the second-level sink node and would then be transmitted to the remote servers to be calculated and processed through the network. Then the data preprocessing at the second-level sink node affects the value of application of the LSWSN [13][14][15]. Therefore, study of the compression of massive sensor data in networks is a hot topic in the field of wireless sensor networks. In fact, the current research on sensor networks mainly adopts lightweight processing nodes as sensor nodes and sink nodes. The calculation abilities of sink nodes do not meet the performance demand of massive sensor data compression by traditional algorithms. 
Ohara et al. [16] introduced multicore processors as sensor nodes for wireless sensor networks for special purposes. But, for sink nodes, the calculation ability is still not satisfactory. All of these limitations stem from the characteristics of CPU design. Most of the transistors in a CPU are used for cache and logic control, and only a small part is used for computation aimed at speeding up a single thread of execution. It is not possible to run hundreds of threads in parallel on a CPU. But the design intent of a GPU [17] is not the same as that of a CPU. A large number of transistors are devoted to the data execution units, such as the processor array, multithread management, and shared memory, whereas only a small number of transistors are used by the control units. In contrast to the CPU, the GPU trades the performance and execution time of a single thread for an improvement in its overall throughput. Meanwhile, thousands of threads are executed on the GPU in parallel, and a very high memory bandwidth between threads is provided. The GPU has a distinct advantage over the CPU in parallel computing without data association and interaction between threads. In this work, we study the challenges of a parallel compression algorithm implemented on a hybrid CPU-GPU platform at the second-level sink node of the LSWSN. Following the matrix matching principle introduced here, the algorithm divides the data to be compressed into multiple dictionary strings and preread strings dynamically along the vertical and horizontal axes in the different blocks of the GPU and then forms multiple matrices in parallel. By taking advantage of the high parallel performance of the GPU in this model, it carries out the data-intensive computing of LSWSN data compression on the GPU. Furthermore, it allocates the threads' work reasonably through careful calculation, storing the match result of each block in the corresponding shared memory, so that the fetch time can be greatly reduced. At the same time, branching code is avoided as far as possible. Our implementation makes it possible for the GPU to become a compression coprocessor, lightening the processing burden of the CPU by using GPU cycles. These measures bring many benefits: lower energy consumption for intercommunication and, more importantly, less time spent finding redundant data, thus speeding up the data compression. It supports efficient data compression with minimal cost compared with the traditional CPU computing platform at the second-level sink node of the LSWSN. The algorithm increases the average compression speed nearly 16 times compared with the CPU mode on the premise that the compression ratio remains the same. The paper is organized as follows. Section 2 reviews the related works. Section 3 introduces the LZSS algorithm and the BF algorithm. Section 4 presents the parallel high-speed lossless compression based on the parallel matching LZSS (PMLZSS) algorithm in the LSWSN, together with our implementation details. The experiments and analysis of results are presented in Section 5, and finally Section 6 concludes the paper. Related Works Sensor node data compression technology studies how to effectively reduce data redundancy and the amount of data transmitted at sensor nodes without losing data precision. Most of the existing data compression algorithms are not feasible for LSWSN.
One reason is the size of the algorithm; another reason is the processor speed [10]. Thus, it is necessary to design a low-complexity and small-size data compression algorithm for the sensor network. Wavelet compression technology has evolved on the basic theory of wavelet analysis and wavelet transform. The core idea presents that most energy of one data series is centered on partial coefficients through the wavelet transform, when another part of the coefficient is set to 0 or approximately 0. Then small parts of the important coefficients are maintained by the certain coefficient decision algorithm. Finally the approximate data sequences of the original data are reconstructed by taking the inverse wavelet transform of the small important coefficients when the original data sequence is needed. Haar Wavelet Data Compression algorithm with Error Bound (HWDC-EB) for wireless sensor networks was proposed by Zhang et al. [18] based on the wavelet transform, which simultaneously explored the temporal and multiplestreams correlations among the sensor data. The temporal correlation in one stream was captured by the one-dimensional Haar wavelet transform. Ciancio et al. [19] proposed the Distributed Wavelet Compression (DWC) algorithm, which extracted the spatial-temporal correlation of sensing data before transmitting to the next node through the interaction of pieces of information with each other among the closed sensor nodes. Although the algorithm greatly reduces the transmission of redundant data, the whole complicated processing leads to serious network time-delay. For the local less jitter and time sequence data, Keogh proposed the Piecewise Constant Approximation (PCA) algorithm [20], whose basic idea was to segment long time sequence data; then, every segment could be represented by the data mean constant and end position mark. Then the Poor Man Mean Compression (PMC) algorithm put forward by Lazaridis and Mehrotra [21] made best use of the mean data in each subsegment of the data sequence as the approximation constant to replace the subsegment. But the compression algorithm based on the subsegment is lack of a global view as only a data sequence is concerned within the current continuous time. With the massive data increasingly produced by the LSWSN in the application process, the difficulties of data storage and process arise, seriously affecting the large-scale use of the LSWSN. To solve the problem, the sensor data should be compressed at the second-level sink node before being transmitted to the remote servers via the network. For massive data compression, the big problem lies in how to perform the compression quickly in a certain period of time. However the present compression algorithms are required to go through compression processing on the basis of full serial analysis of the raw data, which leads to low speed and low compression efficiency for massive sensor data. In view of the present situation, the key question is how to implement parallel compression based on the existing compression algorithms in order to solve the problems. Data compression can be classified as lossy compression and lossless compression according to basic information theory [22]. Lossy compression compresses the redundancy of the input data and the information it contains, but some information is lost. Lossless compression compresses the redundant information of the input data, and the information is not lost in the compression process. 
Lossless compression can be divided into two different modes: stream compression mode and block compression mode. The block compression mode divides the data into different blocks according to a certain policy and then compresses each block separately. The classic compression algorithms such as Prediction by Partial Matching (PPM), Burrows-Wheeler Transform (BWT), Lempel-Ziv-Storer-Szymanski (LZSS), Lempel-Ziv-Welch (LZW), and Block Huffman Coding (BHC) all take advantage of block compression. Gilchrist proposed the BZIP2 algorithm [23] with multithreads, whose core idea was to chunk the data into blocks, with different threads completing compression tasks in each block, respectively. GZIP took advantage of the multicore technology to compress data and Pradhan et al. [24] introduced the distributed computing technique to improve the performance of data compression. All of these improvements are achieved by optimization algorithms confined to the CPU platform. But the improvement of the performance is limited by the number of multithreads running concurrently and the number of communication data among the multithreads on the CPU platform. Since the advent of GPU, some scholars have also done a lot of work on data compression. Many lossy compression algorithms based on GPU are successful, such as the use of GPUs to speed up the execution time of JPEG2000 image compression [25] and the use of GPUs to compress space applications data [26]. Recently, this has been a hot research topic for improving the performance of lossless data compression algorithms based on GPUs. Taking the image compression field as an example, many improvements in image compression and transmission performance have been made by GPU. O'Neil and Burtscher [27] proposed a parallel compression algorithm based on a GPU platform specifically for double precision floating point data (GFC), whose compression speed was raised by about two orders of magnitude compared with BZIP2 and GZIP running on the CPU platform. Although RLE is not a very parallelizable algorithm, Lietsch and Marquardt [28] and Fang et al. [29] took advantage of the shared memory and global memory of the GPU to improve it. But the acceleration effect was not very obvious in practice. Cloud et al. [30] and Patel et al. [31] improved the classic BZIP2 algorithm; their basic ideas were to make use of the block compression and to improve the parallel code fit for the GPU, mainly in the three stages of the algorithm: the Burrows-Wheeler Transforms (BWT), Move-To-Front (MTF), and Human Coding. In most of the above studies, the data are chunked into blocks directly, and then the blocks are processed in parallel. Data dependencies exist if we only chunk the data simply. The acceleration effect is not ideal in practical applications. Thus the emphasis of our work is to focus on how to find inherent parallelism in compression algorithms and how to transplant them to the GPU platform. LZSS Algorithm and BF Algorithm The LZSS algorithm [32], a widely used data compression algorithm and being a CPU-based serial algorithm, is not suitable for GPU architecture. The BF algorithm is a serial string matching algorithm, although its time complexity is ( * ). However, compared to KMP, BM, and BOM algorithm, it can be easily converted from the serial computing model to the parallel computing model after modification. LZSS Algorithm. LZSS is an improvement of LZ77 [33]. 
First, it establishes a binary search tree, and second, it changes the structure of the output encoding, which solves the problem of LZ77 effectively. The standard LZSS algorithm uses a dynamic dictionary window which is 4 KB and a prereading window to store the uncompressed data whose buffer size is usually between 1 and 256 bytes. The basic idea of LZSS is to find the longest match of the prereading window in the dictionary window dynamically. The output of the algorithm will be a two-tuple (offset, size) if the length of the matching data is longer than the minimum matching length. Otherwise the output will be the original data directly. For example, for the raw data AABBCBBAABCAC, it outputs the result AABBC(3, 2)(7, 3)CAC using the LZSS compression processing. The dictionary window and the prereading window slide back once every time a datum is processed to repeatedly deal with the rest of the data. When coding in practice, LZSS combines the compressed coding and raw data to improve the ratio of compression. Each byte has a one-bit identifier and consecutive eight-bit identifiers, which constitute a flag byte. The output format is one flag byte and eight data bytes continuously, which indicates the original data when the identifier bit is 0 and compressed data when it is 1. Basic Serial BF String Matching Algorithm. For the object string and pattern string, the serial BF string matching algorithm matches the pattern string from the start of the object string to compare object [0] with pattern [0]. If they are equal, it continues to compare subsequent characters and the match is successful if all of the characters are the same. Otherwise the pattern string goes back to the start position and the object string goes back to the start+1 position to continue comparing. The pseudocode is referred to as Algorithm 1. Implementation of Lossless Compression Based on Parallel Matching LZSS at Sink Node The architecture of the GPU is Single Instruction Multiple Thread (SIMT), which is very suitable for handling repetitive character matching. It converts the serial computing model of the original BF algorithm to the parallel computing model and supplements the LZSS compression algorithm with the BF algorithm. Thus an efficient parallel lossless compression algorithm based on GPU and CPU platforms at the secondlevel sink node is described in this section. With regard to improving the compression ratio, the speed of the compression is improved slowly [34] for the latest relevant research on the LZSS algorithm. The key to the compression speed is to speed up the matching of the two strings in the two dynamic sliding windows through the analysis of the LZSS algorithm. BF is a typical serial algorithm according to the analysis in Section 3.2, which matches the strings using two layers of loops. The inner loop judges whether the string whose length is equal to the length of the pattern string in the dictionary window matches the pattern string, and the outer layer is used to move the dictionary window. The process of searching for pattern string matching in the object string is completely independent, which provides an opportunity to convert BF into a parallel algorithm on the GPU platform. GPU supports a large number of threads running concurrently. If one GPU thread corresponds to one match of the compressing data, with regard to the 4 KB compression dictionary window in the LZSS algorithm, 4096 GPU threads should be run. 
It is no problem for the GPU to run 4096 or even higher order of magnitude of threads concurrently. Although running several threads in parallel does not work for the general program development of the actual GPU, it is necessary to deal with more practical questions. GPU is inefficient for branch operation because it is not suitable for logic control. During the task running process, different data leads to different thread speeds executing different subtasks. In the execution of such a task scheduling, the execution time of the slowest threads will decide the whole task execution time. In accordance with the features of GPU, the expensive calculations of the task are accelerated in parallel on GPU, and the serial parts of the task are preperformed on CPU. In principle, the above has stated that on the one hand the tasks of matching the dictionary strings with several prereading window strings are implemented on GPU, which achieve acceleration of the parallelization. On the other hand, the serial operations such as matching result synthesis and data compression are implemented on CPU. The Improved Flag Byte. In LZSS, in order to combine the compression coding and the raw data, a flag byte is set every 8 data bytes. Negative compression would occur if fewer data could be compressed in a file. In this paper two categories of flag bytes are set: the mixed flag byte and the original flag byte. The first bit of the mixed flag byte is 1, and the other 7 bits are marked as 7 mixed data bytes. The first bit of the original flag byte is 0 and it outputs 128 raw data bytes consecutively at the most. It greatly reduces the number of flag bytes to increase the compression ratio. The output of the above raw string is (0001001)AABBC(11111000)(3, 2)(7, 3)CAC. 4.2. Setting the Length of the Dictionary Window. The length of the dictionary window is set as long as possible to discover more compressible data, but it also brings the problem of the expansion of search range. The length of the offset of the matching data relative to the dictionary window becomes longer. Two bits are used to represent the offset in the mixed flag byte, making the maximum length of the dictionary window up to 64 KB. PMLZSS Parallel Matching Model. Each thread has to frequently access the global memory via the general parallel matching algorithm, thus reducing the compression performance. In the CUDA environment, each thread has its own shared memory, and all the data in the shared memory are accessed directly for all the threads in the same block. Making use of the high parallel of GPU, combining the advantages of LZSS algorithm and BF algorithm, the PMLZSS speeds up the data compression. As shown in Figure 2, using the idea of LZSS for reference, the PMLZSS algorithm divides the data compression file into two parts: the compressed dictionary window and the prereading window. The lengths of the two windows are KB and 16 * B, respectively. In order to make full use of GPU parallel processing capabilities, the data compression file should be divided into several pairs of compressed dictionary windows and prereading windows, not just one pair as in LZSS. As shown in Figure 3, it builds up a matrix where the data in the compressed dictionary window in bytes is shown on the vertical axis and the data in the prereading window in bytes is shown on the horizontal axis. 
After studying the BF algorithm, PMLZSS adopts the violence matching method to perform parallel matching so that for each byte on the vertical axis one thread should be invoked to match all bytes on the corresponding horizontal (cKB) axis (i.e., the bytes in the corresponding prereading window). If it finds a match, the position in the matrix will be set to 1; otherwise it will be set to 0. Finally, it finds the longest oblique line segment with consecutive 1 s through the whole matrix, recording the start and end positions, and the length of the oblique line segment, sending them into the CPU as parameters for data compression. PMLZSS Algorithm Implementation. According to the parallel matching model, the specific data parallel compression process entailed the following steps: (1) reading the data compression file and then copying this file from memory to the global memory of GPU; (2) setting the thread block groups on GPU as [ ], in which is the total number of thread blocks and the number of threads in each block is ; (3) setting the length of the compressed dictionary window as B and setting the pointer to the first compressed dictionary window as ℎ, whose initial value points to the beginning of the data compression file; (4) setting the size of the prereading window as and setting the pointer to the first prereading window as , whose initial value is ℎ-; (5) initializing the thread group ℎ [ * ] and ( * /2)/ matrices, whose size is * ; (6) invoking ( * /2) threads in the thread group ℎ [ * ] to deal with = ( * /2)/ data segments in the data compression file, whose length is + . The compressed dictionary windows and the corresponding prereading windows are shown in Figure 2. ℎ points to the header of the 0th compressed dictionary window, points to the header of the 0th prereading window, ℎ + points to the header of the 1st compressed dictionary window, + points to the header of the 1st prereading window, and so on. pairs of a compressed dictionary window and a prereading window can be dealt with in a cycle. Specifically, for the data in each compressed dictionary window and corresponding prereading window, the algorithm performs the following steps respectively, (6-1) setting the counter = 0; (6-2) setting thread 1 , whose thread number is ℎ1 and then using thread 1 to judge whether the ( ℎ1 )th byte in the th compressed dictionary window matches the bytes from the ( * 16)th to the (( + 1) * 16 − 1)th byte in the th prereading window (i.e., whether the two bytes are equal), where 0 ≤ < . It returns 1 if the two bytes match and 0 otherwise. Then the results are written back to the global memory in the th matrix from position (( ℎ1 ) * + * 16) to position (( ℎ1 ) * + * 16+16); (6-3) + +, return to (6-2) when < ; otherwise go to (7); (7) Finding the longest oblique line segment with consecutive 1 s in the results matrices , determining the result triads array [ ], whose element has stored three components, which were ( , , ℎ). 
This shows that a match is not found when the ℎ is less than 3 (if the length of the matching substring is less than 3 bytes, the length of the compressed code would be longer than the raw data), and and are set to −1 directly for meaningless; This step includes the following substeps: (7-1) setting thread 2 , whose number is ℎ2 and then using 2 to find the longest oblique line segment with consecutive 1 s, recording its corresponding parameters , , and ℎ; (7-2) thread that 2 gets the corresponding data of , , and ℎ and then stores them in the element of the result triads array whose index is ( ℎ2 ); (8) finding the element that has the maximum value of ℎ. Setting thread 3 , whose number is ℎ3, using 3 to find the element that has the maximum value of ℎ in the corresponding array of each , and storing them in the global match result array ℎ[ ]. The elements of this array store the results triad ( , , ℎ); (9) compressing the data according to the matching results array ℎ[ ], including the following steps: (9-1) copying the matching result array ℎ[ ] from the GPU to the memory of CPU; Prereading window (16 B) Compressed dictionary window (4096 B) . . . (10) determining whether the pointer has pointed to the end of the data compression file: the process is finished if the pointer has pointed to the end of the data compression file. Otherwise the algorithm slides the dictionary windows and prereading windows forward, that is, setting = + * and ℎ = ℎ + * , and then returns to (6). Example of PMLZSS. Setting the data at the beginning of 4096 bytes of the compression file as "ℎ ℎ . . . , " the data of the following 16 bytes were " ℎ ℎ. " According to the above PMLZSS algorithm process, the steps are as follows: (1) CPU that reads the data compression file and then copies this file from memory to the global memory of GPU; (2) setting the thread block groups on GPU as [1024] and then the number of threads in each thread block is 512; (3) setting the length of the compressed dictionary window as 4096 B, while the pointer to the first compressed dictionary window is ℎ = 0; matrixes, whose size is 4096 * 64; (6) invoking 1024 * 256 threads in the thread group ℎ [1024 * 512] to deal with 64 data segments in the data compression file, whose lengths are (4096 + 64) bytes. As shown in Figure 4, the resources below describe the processing work of the 0th compressed dictionary window and the corresponding prereading window in detail. From 0 to 4095 , altogether 4096 threads are in a thread group, and the thread numbers are 0 to 4095, respectively. 1 ( 1 ∈ [0, 4095]) is one of the threads whose thread number is 1. It is responsible for judging whether the 1th byte in the compressed dictionary window matches the bytes from the 0th to the 15th byte in the prereading window (i.e., whether the two bytes are equal). It will return a value of 1 when the two bytes are matched; otherwise it will return a value of 0. Then the results are written back to the position 1 * 64 to 1 * 64+16 of the corresponding matrix in the global memory. When 16 bytes of data are compressed, a loop is executed with = 0 and the result is 4096 * 16, which is a quarter of one ; (7) as shown in Figure 5, the example uses the 0th to describe the process of finding the longest oblique line segment with consecutive 1 s; from 0 to 4110 , all (4096+15) threads are in a thread group. The thread numbers are 0 to 4110, respectively. (00000100) ℎ. 
The first 4096 bytes in the compressed dictionary window cannot be compressed; the data are output originally. The following (11100000) is a mixed byte. The two bytes 5 and 6 after the are compressed codes, which denote that a substring whose length is 6 bytes of the prereading window is compressed, and the corresponding compressed dictionary is at the 5 of the 0th compressed dictionary window. Then 4 bytes of raw data are output. Finally, an original flag byte whose value is 00000100 is output. The raw data following it is the 4 bytes " ℎ. " (10) Finally by deciding that the pointer has pointed to the end of the data compression file, the compression process finishes. The compression algorithm proposed here mainly consists of the following steps: (1) copying the data from CPU to GPU; (2) building multiple matrices for the dictionary string and the prereading string concurrently; (3) matching multiple matrices concurrently; (4) obtaining the triple array from the result matrix; (5) merging the triple array; (6) copying the triple array back to CPU; (7) compression of the data by CPU according to the triple array. The total time complexity of the algorithm is the sum of the above seven steps: ( ) = 1 ( ) + 2 ( ) + 3 ( ) + 4 ( ) The time complexities of the first, second, fifth, sixth, and seventh steps are constants; that is, For the third step of the algorithm, when the length of the prereading window is and the length of the source data to be compressed is , then the 64 matrices whose dimensions are 4096 * are processed in one cycle. The total number of loops is /( * 64), and the time complexity for the step is For the fourth step of the algorithm, similar to the third step, 64 matrices whose dimensions are 4096 * can be processed in one cycle each time. For multiple threads that are executed concurrently, the time complexity for a single cycle is ( ). Then the total time complexity for the fourth step is The total time complexity of the algorithm is ( ) = (1) + (1) + ( ) + ( ) + (1) Thus, the final time complexity of the algorithm is linearly proportional to the length of the source data being compressed. Experimental Platform Setting. In order to test the efficiency of the new lossless data compression algorithm PMLZSS on GPU platform in LSWSN, the data compression algorithms BZIP2 and LZSS on CPU platform and the PMLZSS compression algorithm on three different GPU platforms are tested. The four kinds of test platforms at the second-level sink node are as follows: (i) CPU: a six-core Intel Core i7 990x processor running at 3.46 GHz and 24 GB main memory. The operating system is Ubuntu 2.6.32-33, and the compiler is a gcc C compiler 4.4.3; (ii) NVIDIA Tesla C2070 GPU, which has 448 cores with 8 streaming multiprocessors running at 1.15 GHz; (iii) NVIDIA GTX480 GPUs, which has 480 cores with 15 streaming multiprocessors running at 1.4 GHz; (iv) NVIDIA GTX 580 GPUs, which has 512 cores with 16 streaming multiprocessors running at 1.5 GHz. On GPU platform, the CUDA compiler 4.0 is employed. The communication between CPU and GPU uses a PCIe-x16 whose bandwidth is 6.4 GB/s. Test Data Sets. In a large supermarket logistics system supported by the Internet of Things, it is necessary to keep track of location and status information of 50,000,000 items. Assuming that 2,000 times are read every day and that 20 bytes are read each time, then 2 TB is the amount of data generated daily. The sensor data, which amounted to 128 MB in the experiment, is output by the simulation program. 
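Returning to the parallel matching model of Section 4 (the matrix of dictionary bytes against prereading bytes and the search for the longest oblique line of consecutive 1s), the following Python sketch reproduces that core step serially on the CPU; in PMLZSS each row comparison and each diagonal scan would be carried out by its own GPU thread, and the window contents used here are only illustrative.

```python
import numpy as np

def match_matrix(dictionary, preread):
    """0/1 matrix: m[i, j] = 1 iff dictionary byte i equals prereading byte j
    (dictionary window on the vertical axis, prereading window on the horizontal)."""
    d = np.frombuffer(bytes(dictionary), dtype=np.uint8)
    p = np.frombuffer(bytes(preread), dtype=np.uint8)
    return (d[:, None] == p[None, :]).astype(np.uint8)

def longest_diagonal_run(m):
    """Longest oblique segment of consecutive 1s; returns
    (dictionary_start_row, preread_start_col, length), with length 0 if no match."""
    rows, cols = m.shape
    best = (0, 0, 0)
    starts = [(r, 0) for r in range(rows)] + [(0, c) for c in range(1, cols)]
    for r0, c0 in starts:                      # every diagonal starts in column 0 or row 0
        r, c, run = r0, c0, 0
        while r < rows and c < cols:
            run = run + 1 if m[r, c] else 0
            if run > best[2]:
                best = (r - run + 1, c - run + 1, run)
            r, c = r + 1, c + 1
    return best

# One (dictionary window, prereading window) pair, as a single GPU block would see it;
# PMLZSS processes many such pairs concurrently and keeps only matches of 3+ bytes.
dictionary = b"the quick brown fox jumps over the lazy dog " * 90   # roughly 4 KB
preread = b"brown fox jum"                                          # 13 B
row, col, length = longest_diagonal_run(match_matrix(dictionary, preread))
if length >= 3:
    print(f"match of {length} bytes at dictionary offset {row}")
```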
Experimental Analysis. The BZIP2 algorithm, the original LZSS algorithm, and the PMLZSS algorithm are tested by comparison on the same data sets. The BZIP2 code [35] and the LZSS code [36] run on the CPU platform, while PMLZSS runs on the three different GPU platforms. Definition 4. Compression throughput is the total quantity of data handled by the compression procedure per unit time. Definition 5. The capacity reduction ratio is the ratio of the difference between the length of data before compression and the length of data after compression to the length of data before compression. The capacity reduction ratio is expressed as a percentage; that is, Capacity Reduction = (the length of data before compression − the length of data after compression) ⋅ (the length of data before compression)^-1 * 100%. Relationship between the PMLZSS Compression Throughput and the Length of the Compression Dictionary Window. The compression throughput of the LZSS compression algorithm running on the CPU is 28.5 MB/s, while that of BZIP2 running on the CPU is only 37.35 MB/s; neither meets the performance requirement of big data compression. When the compression throughput of LZSS is set to 1, the compression throughput speedups of PMLZSS running on the different GPU platforms are shown in Figure 7; the lengths of the prereading windows are set to 64 B, while the lengths of the compression dictionary windows are set to 1 KB, 2 KB, 4 KB, and 8 KB, respectively. As Figure 7 shows, the speedup of the compression throughput of PMLZSS running on the GTX580 with a 1 KB compression dictionary window reaches nearly 34 times that of LZSS. Furthermore, the speedup of the compression throughput reaches 13 times when PMLZSS runs on the C2070 GPU. The speedups decrease as the length of the compression dictionary window increases on all GPU platforms: the longer the compression dictionary window, the more string matching is required between the prereading window and the compression dictionary window, and the lower the compression speed. Three factors determine the increase of PMLZSS compression throughput: the number of stream processors in a single GPU, the sizes of the caches, and the sizes of the shared memory in each block of the GPU. Therefore, as GPUs develop, larger caches, larger shared memories, and more stream processors per chip will all improve the parallel computing capability, and PMLZSS compression throughput is expected to rise accordingly. Relationship between the PMLZSS Capacity Reduction Ratio and the Length of the Compression Dictionary Window. The PMLZSS capacity reduction ratio is related only to the length of the compression dictionary window and has nothing to do with the GPU platform, as shown in Figure 8. The smaller the compression dictionary window, the lower the probability that a string in the prereading window finds a matching substring in the compression dictionary window; less redundant data is then removed and the capacity reduction ratio is smaller, and vice versa. In our experiments the highest PMLZSS capacity reduction ratio is only 13.53%, nearly 2% lower than that of LZSS on the CPU and far smaller than that of BZIP2 on the CPU.
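Definitions 4 and 5 translate directly into code; a minimal sketch with illustrative numbers (the helper names and example values are not from the paper):

```python
def compression_throughput(input_bytes: int, elapsed_seconds: float) -> float:
    """Definition 4: data handled per unit time, reported here in MB/s."""
    return input_bytes / elapsed_seconds / 1e6

def capacity_reduction_ratio(input_bytes: int, output_bytes: int) -> float:
    """Definition 5: (before - after) / before, expressed as a percentage."""
    return (input_bytes - output_bytes) / input_bytes * 100.0

# Example with made-up numbers: a 128 MB chunk compressed to ~86.5% of its size in 0.13 s
raw = 128 * 2**20
print(compression_throughput(raw, 0.13))                     # throughput in MB/s
print(capacity_reduction_ratio(raw, int(raw * 0.8647)))      # approximately 13.53 %
```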
Moreover, it is shown in [37] that the LZSS-CPU capacity reduction ratio is 13.72% and the BZIP2-CPU capacity reduction ratio is 22.65%. Two reasons account for the smaller PMLZSS capacity reduction ratio: (1) the PMLZSS capacity reduction ratio is smaller than that of BZIP2 because PMLZSS focuses on improving compression throughput rather than optimizing capacity reduction; (2) the data unit in the experiment is a chunk no longer than 64 KB, with an average length of about 10 KB, which restrains the capacity reduction ratio to some extent. Time-Consuming Comparison of PMLZSS at Different Stages. We first test the time cost of the various compression stages of PMLZSS running on the three different GPU platforms: (1) MHtoD: the time taken to transmit the data from CPU memory to GPU memory; (2) CMatrix: the time taken to construct the matrices on the GPU; (3) findOne: the time taken to find the oblique segments with the greatest number of consecutive 1 s in the matrices on the GPU; (4) MDtoH: the time taken to transmit the data from GPU memory to CPU memory; (5) cpuCompress: the time taken to compress the source data on the CPU based on the displacements and lengths obtained from the GPU; (6) totaltime: the time taken by the whole compression process. A subset of the test data set with a size of 128 MB is chosen, and the test is repeated five times. The average time of each stage is given in Table 1. Table 1 shows that the time costs of the MHtoD, MDtoH, and cpuCompress stages do not differ much across the three GPU platforms, while the time cost of the matrix-construction stage differs considerably: on PMLZSS-GTX580 it is only just above one-third of that on PMLZSS-C2070. This is because the GTX580 has more cores, caches, and shared memory, and its core frequency is higher; all of this improves the parallel computing ability significantly and makes it possible to create the matrices more quickly. At the same time, this stage still takes almost a quarter of the entire processing time, so reducing the time taken by this stage should be the focus of future performance optimization. Conclusion In this paper, we propose a parallel high-speed lossless massive data compression algorithm, PMLZSS, under the CUDA framework at the second-level sink node of an LSWSN. It introduces a matrix matching process that dynamically divides the source data being compressed into multiple dictionary strings and prereading strings along the horizontal and vertical axes, respectively, in the various blocks of the GPU, and constructs multiple matrices that are matched concurrently. The main aim is to speed up the compression of massive sensor data at the second-level sink node of an LSWSN without decreasing the compression ratio. The tests are performed on a CPU platform and three different GPU platforms. The experimental results show that the compression ratio of PMLZSS decreases by about 2% compared with the classic serial LZSS algorithm on the CPU platform, and by about 11% compared with the BZIP2 algorithm, which pays more attention to the compression ratio. But the compression speed of PMLZSS is greatly improved: by about 16 times compared with the classic serial LZSS algorithm and by nearly 12 times compared with the BZIP2 algorithm. The PMLZSS compression speed is expected to improve further with the continuous improvement of GPU hardware structure and parallel computing capability.
With the continuous improvement of GPU hardware, especially cache technology and shared memory, a series of problems has also emerged. The first is the cache consistency problem, which requires complex control logic that is inconsistent with the GPU hardware design goals; the second is a low cache hit ratio: introducing a cache would actually slow down reading and writing if its hit ratio is too low. Last but not least is the cost of the large number of transistors required by the introduction of the cache. All of these issues should be considered in future work.
9,751
2015-06-01T00:00:00.000
[ "Computer Science", "Engineering" ]
An engineered SARS-CoV-2 receptor-binding domain produced in Pichia pastoris as a candidate vaccine antigen Developing affordable and easily manufactured SARS-CoV-2 vaccines will be essential to achieve worldwide vaccine coverage and long-term control of the COVID-19 pandemic. Here the development is reported of a vaccine based on the SARS-CoV-2 receptor-binding domain (RBD), produced in the yeast Pichia pastoris. The RBD was modified by adding flexible N- and C-terminal amino acid extensions that modulate protein/protein interactions and facilitate protein purification. A fed-batch methanol fermentation with a yeast extract-based culture medium in a 50 L fermenter and an immobilized metal ion affinity chromatography-based downstream purification process yielded 30-40 mg/L of RBD. Correct folding of the purified protein was demonstrated by mass spectrometry, circular dichroism, and determinations of binding affinity to the angiotensin-converting enzyme 2 (ACE2) receptor. The RBD antigen also exhibited high reactivity with sera from convalescent individuals and Pfizer-BioNTech or Sputnik V vaccinees. Immunization of mice and non-human primates with 50 µg of the recombinant RBD adjuvanted with alum induced high levels of binding antibodies, as assessed by ELISA with RBD produced in HEK293T cells, which inhibited RBD binding to ACE2 and neutralized infection of VeroE6 cells by SARS-CoV-2. Additionally, the RBD protein stimulated IFNγ, IL-2, IL-6, IL-4 and TNFα secretion in splenocytes and lung CD3+-enriched cells of immunized mice. The data suggest that the RBD recombinant protein produced in yeast P. pastoris is suitable as a vaccine candidate against COVID-19. Introduction Mammalian cell expression systems such as human embryonic kidney cells (HEK293T) are preferred for the production of complex therapeutic proteins due to their ability to introduce post-translational modifications identical or similar to those found in humans [1]. However, within the resource-poor contexts of low-income countries, these platforms are beset by issues of technological complexity, high operating costs and challenges such as the possibility of viral contamination in large-scale cultures [2]. An alternative is the use of yeast, such as Pichia pastoris (Komagataella phaffii). P. pastoris exhibits many advantages over mammalian cells regarding the simplicity and cost of culture media, growth rate and ease of genetic manipulation, while sharing many of the characteristics of their protein folding and secretion processes [2]. Growth on methanol as sole carbon source induces in P. pastoris the strong and tightly regulated AOX1 promoter, which can be employed to drive heterologous gene expression [3] and, taking advantage of secretion signals such as those of the alpha mating factor, can be used to obtain secreted recombinant proteins directly in culture supernatants with low levels of host contaminant proteins [4]. The RBD of SARS-CoV-2 is a glycosylated 25 kDa protein domain spanning residues N331-K529 of the spike protein, including eight cysteine residues forming four disulfide bonds. RBD mediates cell entry through the ACE2 host receptor, and the levels of RBD-binding antibodies strongly correlate with neutralizing antibody titers in convalescents [5]. The domain contains two glycosylation sites (N331 and N343) and a central twisted anti-parallel β-sheet formed by five strands connected by short helices and loops [6]. Glycosylation plays a key role in the immunogenicity and stability of the RBD protein [7].
Possibly for this reason, the glycosylation introduced by P. pastoris, although more distant from that of mammalian cells, could contribute to the protein's immunogenicity [8]. The SARS-CoV-2 RBD with reduced glycosylation has been produced by others at high levels in P. pastoris as a suitable vaccine candidate against COVID-19 [3,9-13]. Comparison by CD and tryptophan fluorescence between RBD from P. pastoris and from HEK293T mammalian cells showed that both proteins were properly folded and had similar temperature stabilities, despite the differences in glycosylation between the two expression platforms [3]. Here, the design of an RBD protein vaccine candidate is reported, together with its production in P. pastoris, purification, physico-chemical characterization, and capacity to elicit ACE2 receptor binding inhibitory antibodies and neutralizing responses in rodents and monkeys. The approach differs from the previously reported production of RBD in P. pastoris [3] by the inclusion of N- and C-terminal extensions aimed at modulating potential protein-protein interactions, and by optimizing the protein fermentation and purification processes. Biological reagents, protein designations and serum panels Human ACE2 receptor chimeric proteins fused to a human or murine Fc antibody domain (hFc-ACE2, mFc-ACE2) as well as RBD fused to human Fc (hFc-RBD) were supplied by the Center of Molecular Immunology (CIM, Havana, Cuba). The chimeric proteins were purified by protein-A affinity chromatography (GE Healthcare Bio-Sciences, Uppsala, Sweden) from supernatants of stably transduced HEK293T cells and eluted with glycine 100 mM pH 3 (Merck, Darmstadt, Germany). hFc-RBD was conjugated to horseradish peroxidase (HRP) (Sigma-Aldrich, St. Louis, MO, USA) and designated hFc-RBD-HRP. H6-RBD refers to an N331-S531 RBD carrying an N-terminal His(6) tag produced as inclusion bodies in E. coli [14], while RBD-H6 refers to an N331-K529 RBD carrying a C-terminal His(6) tag, secreted into the supernatant of stably transduced HEK293T cells. Both proteins were purified by immobilized metal ion affinity chromatography (IMAC) (GE Healthcare Bio-Sciences, Uppsala, Sweden) and the final buffer was exchanged to phosphate buffered saline (PBS). The engineered RBD constructs described in the present work carry an amino-terminal segment denominated C-tag and a carboxy-terminal six-histidine tag (H6), and are referred to as C-RBD-H6, with the suffix PP or HEK describing the host (P. pastoris or HEK293T cells) in which they were produced. The panel of human sera used as controls included sera from volunteers vaccinated with the Pfizer/BioNTech [15] or Gamaleya's Sputnik V (Gam-COVID-Vac) vaccine [16], and sera from convalescent patients. All individuals gave written informed consent for use of their serum. Construction of Pichia pastoris strains producing SARS-CoV-2 RBD (C-RBD-H6 PP) A sequence encoding residues 331-529 of the spike protein of SARS-CoV-2 strain Wuhan-Hu-1 (NCBI Acc. No. YP_009724390) with the N- and C-terminal extensions was codon-optimized for Saccharomyces cerevisiae using J-Cat [17] and cloned in-frame with the KEX2 cleavage site of the pre-pro MATα sequence of pPICZαA (Invitrogen, Waltham, MA, USA), placing it under the transcriptional control of the P. pastoris AOX1 promoter. Codon usage was optimized for S. cerevisiae because expression of C-RBD-H6 was initially attempted in both hosts and the codon usage patterns of both are similar [18,19].
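A purely illustrative sketch of the back-translation step behind codon optimization is shown below; the partial codon table is an assumption for illustration only, whereas a real design would rely on a full S. cerevisiae codon usage table such as the one consumed by the J-Cat tool cited above:

```python
# Map each amino acid to one preferred S. cerevisiae codon (partial, illustrative table).
PREFERRED_CODON = {
    "M": "ATG", "G": "GGT", "S": "TCT", "H": "CAC",
    "N": "AAC", "K": "AAG", "L": "TTG", "V": "GTT",
}

def back_translate(protein: str) -> str:
    """Return a DNA sequence using one preferred codon per residue."""
    return "".join(PREFERRED_CODON[aa] for aa in protein)

print(back_translate("MGSHHK"))   # ATGGGTTCTCACCACAAG
```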
After sequence verification by Sanger sequencing using primers flanking the C-RBD-H6 gene (Macrogen, Seoul, South Korea), the expression plasmid was linearized with Sac I (New England Biolabs, Ipswich, MA, USA) and used to transform P. pastoris strain X-33 [20]. Following incubation for about 96 h at 28 °C on YPD-zeocin medium (1% yeast extract (Condalab, Torrejón de Ardoz, Spain), 2% peptone (Condalab), 2% glucose (Merck, Darmstadt, Germany), 2 mg/mL zeocin (Merck)), 100 recombinant colonies were randomly picked and used to prepare frozen glycerol stocks. Fermentation Fermentation was carried out in a 75-liter Chemap fermenter (Chemap, Volketswil, Switzerland) with a working volume of 50 L of culture medium [21]. Four Petri dishes were seeded from a frozen vial of a working cell bank and incubated for 60-84 h at 30 °C. Eight to ten isolated colonies were used to inoculate 2 L Erlenmeyer flasks, each containing 500 mL of basal salts medium [5 g/L yeast extract (Condalab), 15 g/L (NH4)2SO4 (Merck), 36 g/L glycerol (Tecsiquim), 40 mg/L histidine (Merck), 7.75 g/L KH2PO4 (Merck) or 5 g/L K2HPO4 (DC Fine Chemicals), and 6.2 g/L KH2PO4 (Merck)], which were then incubated for 20-28 h at 30 °C and 250 rpm. The fermentation run was started by pooling the shake flask cultures and inoculating the fermenter with 2.7 L of this inoculum, with a starting pH of 5.0, regulated by pumping liquid ammonia (Merck), and a temperature of 30 °C. The fed-batch phase started, once dissolved oxygen increased, by adding 50% glycerol (Tecsiquim) at 540 mL/h for 2-3 h. The temperature was lowered to 25 °C and the pH was increased to 6.0 one hour after starting the fed-batch phase. At the end of this phase, 800 mL of methanol were added at a 60 mL/min flow rate using a Watson Marlow 520 peristaltic pump (Watson Marlow, Wilmington, MA, USA), and once the cells were adapted to this new carbon source, methanol was added first at 6 mL/L/h, then at 9 mL/L/h 4 h later, and at 12 mL/L/h when the cell density reached 200 g/L. This last flow was maintained until the end of fermentation (38-44 h). Purification of C-RBD-H6 PP After 48 h of fermentation, the culture was harvested, and cells were removed by continuous centrifugation with a retention time of 5-10 min at 21,420 RCF and 4 °C. The resulting supernatant was filtered sequentially through 8 µm, 3 µm and 0.45 µm cellulose filters (Merck), and then concentrated and buffer-exchanged against PBS containing 5 mM imidazole (Merck) by tangential flow filtration with a 30 kDa Hydrosart® membrane (Sartorius, Göttingen, Germany). The conditioned sample was loaded onto a Chelating Sepharose™ FF column (Cytiva, Marlborough, MA, USA) of cross-linked 6% agarose beads modified with an iminodiacetic acid (IDA) matrix (Cytiva), charged with Cu2+ and equilibrated in the same buffer, washed sequentially with 30 column volumes of PBS containing 10 mM and 20 mM imidazole (Merck), and eluted with 250 mM imidazole in PBS. The eluted protein was further purified on a 50 × 250 mm RP C4 column (Tosohaas, Tokyo, Japan) with a resin volume of 500 mL and a particle size of 15-20 µm, coupled to a Shimadzu LC-20AP semi-preparative HPLC purification system (Shimadzu, Kyoto, Japan). The column was equilibrated with solution A. An aliquot of the protein was processed with a C18 ZipTip (Merck Millipore, Burlington, VT, USA) and loaded into a metal-coated nanocapillary for ESI-MS analysis.
The remainder was digested following a previously reported in-solution buffer-free trypsin digestion protocol [22], adapted to the analysis of SARS-CoV-2 RBD proteins, that provides full-sequence coverage of the tryptic peptides and detection of post-translational modifications in a single ESI-MS spectrum [14]. Other experimental conditions for the ESI-MS analysis were similar to those reported previously [14]. Surface plasmon resonance (SPR) The interaction between mFc-ACE2 and C-RBD-H6 PP was studied by SPR in a BIACORE X unit (GE Healthcare, Tokyo, Japan) at 25 °C in multi-cycle mode. Briefly, mFc-ACE2 was immobilized on flow cell 1 (FC1) of a Protein A biosensor chip (GE Healthcare, Amersham, UK) following the manufacturer's instructions, and flow cell 2 (FC2) was used as the reference cell to correct for background binding. The real-time response of C-RBD-H6 PP over immobilized mFc-ACE2 was recorded in duplicate across a concentration range of 15-2000 nM, at a flow rate of 10 µL/min for 120 s, while the dissociation took place for another 120 s. The running buffer was PBS (pH 7.2). After each cycle the chip was regenerated using glycine buffer pH 2.0. The equilibrium dissociation constant (KD) was estimated with BIAevaluation® software (GE Healthcare, Tokyo, Japan) using the Langmuir 1:1 interaction model. At least five curves were taken into account for the calculation of the kinetic parameters. Animals and immunization schedules Three different animal species were used to evaluate the immunogenicity of C-RBD-H6 PP: BALB/c mice, Sprague-Dawley (SD) rats, and African green monkeys (Chlorocebus aethiops sabaeus). The experimental protocols were approved by the Ethical Committee on Animal Experimentation of the Center for Genetic Engineering and Biotechnology (CIGB, Havana, Cuba) and the Center for Production of Laboratory Animals (CENPALAB, Bejucal, Cuba). Procedures for mouse and rat immunization are described in Supplementary Material S1. Evaluation of serum antibodies Antibody detection by Enzyme-Linked Immunosorbent Assay (ELISA). Serum antibody titers were expressed in arbitrary units (AU) with reference to a SARS-CoV-2 neutralizing serum; a value of 1 corresponded to 5 times the optical density reading of the blank control. Monoclonal antibody SS-8 was used as the reference for mouse sera, and a hyperimmune polyclonal serum as the reference for rat serum samples. The polyclonal serum was obtained by pooling sera from 10 animals submitted to repeated-dose toxicology studies showing the highest virus neutralization titers (geometric mean above 1:2500). For non-human primate (NHP) antibody titers, serum from a convalescent subject with a high SARS-CoV-2 neutralization titer was used as the reference. Plate-based RBD to ACE2 binding assay A competitive ELISA was performed to determine the inhibitory activity of anti-RBD polyclonal sera on the binding of an hFc-RBD-HRP conjugate to hFc-ACE2-coated plates. Briefly, the wells of ELISA plates were coated with 0.25 µg of recombinant hFc-ACE2 as described above. Then, mixtures containing the hFc-RBD-HRP conjugate and serial dilutions of the sera were pre-incubated for 1 h at 37 °C. 100 µL of the mixtures were added to each hFc-ACE2-coated well and further incubated for 90 min at 37 °C. Binding of the HRP-tagged RBD to the receptor was detected with 3,3′,5,5′-tetramethylbenzidine as the substrate, reading the results at 450 nm.
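A hedged sketch of how the readout of such a competitive ELISA can be turned into percent inhibition values (OD values and names are illustrative, not data from this study):

```python
import numpy as np

def percent_inhibition(od_sample: float, od_max: float, od_blank: float) -> float:
    """od_max: conjugate with no serum; od_blank: no conjugate; OD450 readings."""
    return 100.0 * (1.0 - (od_sample - od_blank) / (od_max - od_blank))

dilutions = np.array([1/50, 1/200, 1/800, 1/3200])
ods = np.array([0.35, 0.80, 1.45, 1.90])          # made-up OD450 readings for one serum
inhibition = [percent_inhibition(od, od_max=2.10, od_blank=0.08) for od in ods]
print([round(v, 1) for v in inhibition])          # e.g. [86.6, 64.4, 32.2, 9.9]
# The dilution giving 50% inhibition can then be interpolated from this curve.
```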
A similar assay was used to characterize the ability of C-RBD-H6 PP and C-RBD-H6 HEK to block the interaction of hFc-RBD-HRP with hFc-ACE2-coated plates. Microneutralization of live SARS-CoV-2 virus in Vero E6 Neutralizing antibody titers were determined by a traditional virus microneutralization assay (MN50) using SARS-CoV-2 (CUT2010-2025/Cuba/2020 strain). Virus neutralizing titers (VNT50) were calculated as the highest serum dilution at which 50% of the cells remained intact, according to neutral red incorporation relative to the control wells (no virus added). For the detailed procedure see Supplementary Material S2. Cellular immune response Long-term cellular immune responses were evaluated in BALB/c mice. Five animals per group received 25 µg of C-RBD-H6 PP or placebo, delivered subcutaneously in a 100 µL volume at days 0, 14 and 35. Blood samples were drawn two weeks after the last immunization, and the animals were euthanized 3 months later to assess the response to an in vitro antigen recall in systemic (splenocyte) and lung-resident cells. The splenocytes were isolated by organ perfusion with gentamycin-supplemented RPMI 1640 culture medium (Gibco, Invitrogen, Waltham, MA, USA), and lung cells were dissociated with the Miltenyi reagent set (130-095-927) in C-tubes (130-093-237), using an automated dissociator (gentleMACS Octo Dissociator, Miltenyi, Bergisch Gladbach, Germany). In both cases the remaining erythrocytes were lysed with ACK solution (A1049201, Gibco, Invitrogen, Waltham, MA, USA). Then, the splenocytes were resuspended in RPMI 1640 with gentamycin 10 µg/mL and 10% fetal bovine serum (FBS) (Gibco) and used directly to study the cellular response. Lung cells were suspended in a buffer for negative selection of CD3+ cells and further purified, after two washes, with the Pan T Cell Isolation Kit II (130-095-130, Miltenyi, Bergisch Gladbach, Germany), and the resulting CD3+ preparation was suspended in the same medium as the splenocytes. Live lung CD3+-enriched cells and splenocytes were counted with a flow cytometer (CyFlow, Sysmex, Norderstedt, Germany). ELISpot assay with samples from previously infected individuals Peripheral blood mononuclear cells (PBMCs) from COVID-19-convalescent subjects were isolated from 7 mL of whole blood collected into CPT tubes (Becton Dickinson, Franklin Lakes, NJ, USA) and stored in liquid nitrogen until analyzed. After resting the cells overnight in OpTmizer™ medium (Gibco, Invitrogen, Waltham, MA, USA), CD3+ live cells were counted by flow cytometry and seeded on round-bottom plates (650160, Greiner Bio-One GmbH, Kremsmünster, Austria) at 5 × 10^4 cells per well with 10 µg/mL of C-RBD-H6 PP for 72 h. Cells were then transferred to anti-IFNγ pre-coated plates (3420-4APW, Mabtech), and the numbers of IFNγ-secreting T cells were determined after 20 h of incubation. All individuals gave written informed consent for use of their samples. Statistical analysis Prism 8.4.3 software was used for statistical analysis. The normality of all datasets was assessed with the Shapiro-Wilk test. Normally distributed data were compared using Student's t-test for paired or unpaired samples, depending on the experimental design. Non-normal data were compared with the Mann-Whitney or Wilcoxon matched-pairs rank tests. Comparisons of more than two groups used the Kruskal-Wallis test followed by Dunn's post hoc test. Spearman's test was used to assess parameter correlations.
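The decision flow described above can be sketched with SciPy (illustrative data and thresholds only; this is not the exact analysis script used in the study):

```python
from scipy import stats

def compare_two_groups(a, b, paired=False, alpha=0.05):
    """Pick a parametric or non-parametric two-group test based on Shapiro-Wilk normality."""
    normal = (stats.shapiro(a).pvalue > alpha) and (stats.shapiro(b).pvalue > alpha)
    if normal:
        return stats.ttest_rel(a, b) if paired else stats.ttest_ind(a, b)
    return stats.wilcoxon(a, b) if paired else stats.mannwhitneyu(a, b)

group1 = [216, 105, 558, 310, 150, 420]     # made-up titers, AU/mL
group2 = [727, 149, 1410, 820, 300, 990]
print(compare_two_groups(group1, group2))
# More than two groups: stats.kruskal(g1, g2, g3), followed by Dunn's post test
# (e.g., scikit_posthocs.posthoc_dunn); correlations: stats.spearmanr(x, y).
```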
Sigmoidal dose-response curves were transformed into a linear form using a natural log transformation for the dilutions and the NORM.S.INV function (Microsoft Excel), i.e., a probit transformation, for the normalized OD450 nm data. Paired comparison of the slopes and the X and Y intercepts, after fitting the data to a linear equation, indicated no significant differences between the results for each coating condition. Experimental designs included two replicates per sample and three independent experiments. Design of the C-RBD-H6 expression cassette The protein denoted here as C-RBD-H6 PP was designed as a potential subunit vaccine candidate against SARS-CoV-2. This protein has a modular structure consisting of a globular central RBD domain comprising residues N331-K529 of the spike protein, flanked by additional N- and C-terminal segments that contain polar and flexible linkers rich in glycine and serine (Gly9-Ser15 and Gly215-Ser229; the sequence of C-RBD-H6 PP is shown in Supplementary Material S3). These extensions prevent potential protein-protein interactions and facilitate protein purification by ensuring that the C-terminal His(6) tag is well exposed. Both extensions are spatially well separated from the receptor binding motif, and their presence should limit, through steric hindrance, potential aggregation problems associated with the presence of exposed and disulfide-bonded Cys76 and Cys210. Expression and purification of C-RBD-H6 PP A construct for the expression in P. pastoris, under the control of the AOX1 promoter, of protein C-RBD-H6 was prepared as described in Materials and Methods and designated pPICZα-CtagRBDH6. This construct was used to transform P. pastoris strain X-33 to obtain C-RBD-H6-producing clones. After screening for the most productive clone, a single strain denominated X33-23 was chosen for further work. This strain, when used in fermentation runs at a scale of 50 L, yielded a dry cell weight of 58.15 (± 14.54) g/L and a C-RBD-H6 titer of 68.38 (± 15.70) mg/L (data averaged from 15 independent processes). SDS-PAGE and Western blotting profiles of culture supernatants from three separate C-RBD-H6 PP-producing clones, including X33-23, are shown in Supplementary Material S4. The C-RBD-H6 PP protein was purified from fermentation supernatants of strain X33-23 by IMAC on a column charged with Cu2+, followed by RP chromatography. The entire process yielded 30-40 mg of pure C-RBD-H6 PP per L of culture medium, with a purity equal to or higher than 98% (Fig. 1). Structural analysis of C-RBD-H6 PP by ESI-MS The sequence, N-glycosylation status and disulfide bonding pattern of C-RBD-H6 PP were examined by ESI-MS after deglycosylation with PNGase F and tryptic digestion. Full-sequence coverage of C-RBD-H6 PP was achieved, confirming the identity and integrity of the resulting protein (signal assignments from the mass spectra are summarized in Table 1). ESI-MS/MS analysis of the signal detected at m/z 1399.64 (four charges) confirmed full N-glycosylation of Asn331 and Asn343 (the two N-glycosylation sites of the RBD within the context of the viral spike protein), as they were transformed into Asp residues by the action of PNGase F (see underlined residues in Table 1). The four disulfide bonds (C336-C361, C379-C432, C391-C525 and C480-C488) present in the native S protein of SARS-CoV-2 were also detected in this ESI-MS spectrum (Table 1). Tryptic peptides containing free cysteine residues or S-S scrambling variants were not detected.
ESI-MS analysis of the deglycosylated C-RBD-H6 PP protein and of its derived tryptic peptides confirmed that the N-glycans in its structure considerably increase its apparent molecular mass in SDS-PAGE analysis. The sugar content calculated in 12 protein batches was 42.7% (40.7-43.8) relative to the molecular mass. The analysis of the ESI-MS spectra is included as Supplementary Material S5. Characterization of the binding affinity of C-RBD-H6 PP to ACE2 by Surface Plasmon Resonance In order to study the affinity of the C-RBD-H6 PP/ACE2 interaction, mFc-ACE2 was immobilized via its Fc region onto a Biacore Protein A chip. This chip produced no appreciable signal in terms of response units (RU) with a non-related protein (negative control), as shown in Fig. 2. In contrast, C-RBD-H6 PP exhibited an association rate to mFc-ACE2 of 5.4 × 10^5 M^-1·s^-1 and a dissociation rate of 7.7 × 10^-3 s^-1. Equilibrium was reached after 25-30 s, with an estimated dissociation constant of KD = 14.3 × 10^-9 M. The association/dissociation rates as well as KD were in the range previously reported in the literature for the RBD-ACE2 molecular interaction [24]. Secondary structure analysis by Circular Dichroism (CD) Spectroscopy The far-UV CD spectrum of C-RBD-H6 PP revealed characteristic bands similar to those previously reported for other recombinant RBD proteins [3], with maxima at 192 and 231 nm, due to the aromatic contribution, and a minimum at 207 nm (the procedures and the CD spectrum are described in Supplementary Material S6). Furthermore, as shown in Table 2, the secondary structure content of the protein estimated by BeStSel (7.9% helix, 28.7% beta sheet (antiparallel, relaxed and right-handed), 12.9% turn and 50.5% others) was very similar to the values assigned using the 3D coordinates [6]. Moreover, as observed in Fig. 3, the near-UV CD spectrum of the protein is well structured, with bands at 263, 269, 277, 281 and 299 nm, indicating the presence of well-packed aromatic and cysteine residues, as expected of a correctly folded protein. Analysis of the antigenicity of C-RBD-H6 PP To confirm the antigenicity of C-RBD-H6, its recognition by known sera and monoclonal antibodies was examined: the binding of the murine anti-RBD monoclonal antibodies SS-1, SS-4, SS-7 and SS-8 to C-RBD-H6 PP and C-RBD-H6 HEK was first compared by ELISA. These mAbs were obtained by immunization with RBD-H6 (produced in HEK293T cells), and one of them, SS-8, competes with ACE2 for binding to the RBD with an IC50 of 122.7 pM [23]. As shown in Fig. 4D,H, the reactivity of C-RBD-H6 PP with these four mAbs was indistinguishable from that of C-RBD-H6 HEK, except for SS-7, which showed increased recognition of the HEK-derived protein. Next, eight convalescent human sera with high neutralization titers against live SARS-CoV-2, as determined in Vero E6 cells, as well as eight sera from Pfizer-BioNTech or Sputnik V vaccinees, were incorporated into the testing panels. A non-folded and non-glycosylated C-RBD-H6 protein produced as inclusion bodies in E. coli BL-21 was used as a negative control. The C-RBD-H6 PP protein displayed binding comparable to that of the protein purified from mammalian cells (Fig. 4A,E and B,F). Further characterization was conducted using polyclonal sera with known anti-SARS-CoV-2 neutralizing activity, obtained by immunization of mice and NHP with C-RBD-H6 HEK. Again, there were no significant differences between the reactivity of C-RBD-H6 produced in either system (yeast or mammalian) toward these polyclonal sera (Fig. 4C,G).
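As a quick consistency check on the SPR kinetics reported above, the dissociation constant follows directly from the two rate constants:

```python
# K_D = k_off / k_on for a Langmuir 1:1 interaction, using the rates quoted in the text.
k_on = 5.4e5      # M^-1 s^-1, association rate
k_off = 7.7e-3    # s^-1, dissociation rate
print(k_off / k_on)   # ~1.43e-8 M, i.e. ~14.3 nM, matching the reported K_D
```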
The C-RBD-H6 PP protein was also used to recall a cellular response in vitro, measured by detecting IFNγ secretion using an ELISpot with PBMCs from COVID-19 convalescents sampled at least three months after hospital discharge. As shown in Fig. 5, stimulation with 10 µg/mL of the P. pastoris-produced protein induced IFNγ secretion from the PBMCs of COVID-19 convalescent subjects. C-RBD-H6 produced in P. pastoris elicited RBD-ACE2 receptor binding inhibitory and SARS-CoV-2 neutralizing antibodies in rodents and NHP The immunogenicity of C-RBD-H6 PP was evaluated in NHP (green monkeys) using a short 0-14-28 day intramuscular administration schedule with two dose levels (50 µg and 100 µg). The results indicate a trend toward a dose-response effect, since seroconversion rates after the first booster were 83% (5 out of 6 animals) and 100% (all 10 animals) for the 50 and 100 µg dose levels, respectively, and 100% after the second booster for both dose levels. Total IgG median titers increased to 216 (25-75%: 105-558) AU/mL and 727 (25-75%: 149-1410) AU/mL for the low- and high-dose groups, respectively. In both cases, the titers were significantly higher than those detected in the convalescent serum panel (p < 0.05, Kruskal-Wallis) (Fig. 6A). Median ACE2 binding inhibition titers were 1:209 (25-75%: 115-673) and 1:1081 (25-75%: 169-1938) for the 50 µg and 100 µg dose groups, respectively, correlating well with the higher binding titers exhibited by the latter group. In this case, only the 100 µg dose group exhibited inhibitory titers higher than those of the convalescent panel (p < 0.001, Kruskal-Wallis) (Fig. 7B). Similarly, the 100 µg dose group exhibited a median viral neutralization titer (VNT50) of 1:192 (25-75%: 26-156), which dropped to 1:48 (25-75%: 14-112) for the 50 µg dose group but was still at the level of the 1:40 (25-75%: 1-80) VNT50 of the convalescent panel. Altogether, these data are consistent with a trend toward a dose-related effect (Fig. 6). Specific IgG antibodies and RBD-ACE2 binding inhibition were detected only in C-RBD-H6 PP protein-inoculated animals and not in control animals. These antibody responses showed dose dependence and boosting effects, in contrast to the negative results obtained in the control animals. Details of the immunization procedures for mice and rats and data on the immunogenicity, ACE2 binding inhibition and neutralizing titers of the resulting sera are available in Supplementary Material S1. Cellular response Cellular recall responses were evaluated three months after the last immunization of BALB/c mice receiving subcutaneous doses of 25 µg of C-RBD-H6 in alum at days 0-14-35. The presence of a memory response in the splenocytes of the immunized animals was evidenced by a significant induction of IFNγ-secreting clones upon incubation with the C-RBD-H6 PP antigen (Fig. 7C). An analysis of the supernatants of the recall reactions showed that the most strongly induced cytokine was IFNγ, followed by IL-2, IL-6 and, to a lesser extent, TNFα and IL-4. This pattern was very similar between cells from the systemic compartment (Fig. 7A) and lung CD3+-enriched cells (Fig. 7B), demonstrating that cellular responses can also be recalled in the organ primarily affected during SARS-CoV-2 infection. Discussion Results from this study demonstrate that a vaccine candidate based on C-RBD-H6 PP, an antigen derived from the RBD of the spike protein of SARS-CoV-2, can be produced in the yeast P. pastoris.
The specific RBD sequence used in this candidate was selected based on state-of-the-art evidence regarding the contribution of this domain to the induction of neutralizing antibodies [8,25-27]. Optimal conditions for the production of a recombinant protein in the P. pastoris expression system differ according to the target protein. For C-RBD-H6 PP we were able to obtain 30-40 mg/L of RBD in a 50 L fermentation process with a purity > 98%, which is close to the yield reported by others for a Pichia-produced RBD in a 7 L fermentation setup, although only 90% purity was reported in that case [3]; high purity is an essential condition for a vaccine candidate. The Gly/Ser-rich segments introduced in the genetic construction have been widely used in protein engineering as linkers for fusion proteins [28]. This design was aimed in part at sterically precluding potential protein aggregation involving the protein surface sequence spatially close to the disulfide bridge Cys76-Cys210; Gly offers flexibility because it can adopt dihedral angles not possible for other amino acids, and Ser confers solubility as a hydrophilic amino acid. Unlike other reports expressing the RBD in P. pastoris [29-31], here the potential N-glycosylation site at N331 was included in C-RBD-H6 PP. Although the inclusion of N331 leads to more heterogeneous N-glycosylation [3,32,33], it should be noted that P. pastoris N-glycans are often hypermannosylated [34,35], and mannosylation enhances the activation of antigen-presenting cells such as macrophages and dendritic cells, increasing immunogenicity over that of non-glycosylated counterparts [8,36-38]. In fact, others [29,31] used a lipid-modified alum adjuvant or a saponin-based adjuvant, respectively, to overcome the poor immunogenicity of their N331-less RBD antigens. Another potential problem stemming from the absence of N331 is protein aggregation, as detected in [29]. Again, the ionic interactions provided by the additional sugars attached to N331 may help to counteract this problem, which was absent in the case of C-RBD-H6 PP. Physico-chemical analysis of C-RBD-H6 by mass spectrometry demonstrated the presence of all four correctly formed disulfide bonds without scrambled species, and circular dichroism spectroscopy showed that its secondary structure makeup was compatible with that expected from the crystallographic structure of the spike trimer of SARS-CoV-2. The correct folding of C-RBD-H6 was further confirmed by functional assays, such as its ability to bind to ACE2, as well as by SPR assays in which it bound ACE2 with an affinity similar to that reported by others for this interaction [39-41]. Correct folding is one of the virtues of P. pastoris as an expression host, which combines the quality control mechanisms of the eukaryotic secretory pathway with the advantages of a microbial system [3,42], and it is a necessary requirement for the induction of an antibody response able to neutralize SARS-CoV-2 by blocking viral entry. Remarkably, despite the known differences between yeast and mammalian N-glycosylation, the reactivity of C-RBD-H6 PP was essentially identical to that of C-RBD-H6 HEK toward sera from mice and monkeys immunized with the latter, sera from COVID-19 convalescents, and sera from Pfizer-BioNTech or Sputnik V vaccinees. C-RBD-H6 PP was also able to stimulate cellular responses mediated by IFNγ secretion in lymphocytes isolated from individuals previously infected with SARS-CoV-2.
Immunization of mice, rats and NHP with C-RBD-H6 PP elicited antibodies that inhibited RBD-ACE2 receptor binding and were able to neutralize live SARS-CoV-2 in microneutralization tests. Even though the time constraints imposed by the accelerated development of this vaccine candidate required that the neutralizing activity of the NHP sera be evaluated just one week after administration of the third dose, the resulting neutralizing titers were still higher than those of the convalescent serum panel used as a control. A longer period before evaluation might yield further improved results, allowing for the maturation and selection of B cell clones producing antibodies of higher avidity and, consequently, higher neutralizing titers. Funding This work was supported with funds from BioCubaFarma, the Center for Genetic Engineering and Biotechnology, and by a grant from the National Science and Technology Program (Biotechnology, Pharmaceutical Industry and Medical Technologies) of the Ministry of Science and Technology, project code PN385LH007-048. The Civilian Defense Scientific Research Center supported the microneutralization assays. Author contributions GGN: provided original ideas and study concept and design, data curation, analysis and interpretation of data, drafting, review and editing of the final version of the manuscript, and studies supervision. MLF: contributed to analysis and interpretation of data, study design, review of the manuscript and studies supervision. LJGL: contributed to ESI-MS study design, acquisition of data, analysis and interpretation of data, and drafting and review of the manuscript. LAER, IAM and YRG: performed the ESI-MS studies, acquisition of data, analysis and interpretation of data. GCH: performed glycosylation studies. ACR: performed drafting and execution of Biacore studies. GMP: performed the protein purification studies. MPI and JZS: performed fermentation studies. GCS: provided the original idea and drafting of the genetic construction, and performed structural analysis experiments by CD spectroscopy. AMMD: provided original ideas and performed the genetic construction and review of the manuscript. DGR: performed the genetic construction and protein expression experiments. MBR: provided original ideas, study designs, analysis and interpretation of data, drafting and review of the manuscript, graphic and statistical processing, and execution of antigenicity, immunogenicity and cellular response studies. IGM and CCHA: performed antigenicity and immunogenicity studies and analysis of the cellular response in mouse, rat, and NHP. OCS: study design and supervision of the microneutralization experiments. GLP: contributed to analytical procedures, donor patient selection and evaluation of the immunological and functional response. JVH, EMD, EPV and MAA: study concept and supervision. Conflict of Interest MLF, MBR, AMMD, DGR, ACR, GCS, GMP, EPV, MAA and GGN are co-authors of a patent application submitted by the Center for Genetic Engineering and Biotechnology, comprising the C-RBD-H6 PP protein as a vaccine antigen against SARS-CoV-2. All authors approved the final article.
7,292.6
2022-08-01T00:00:00.000
[ "Medicine", "Engineering" ]
DeepDISE: DNA Binding Site Prediction Using a Deep Learning Method It is essential for future research to develop a new, reliable prediction method for DNA binding sites, because DNA binding sites on DNA-binding proteins provide critical clues about protein function and drug discovery. However, the current prediction methods for DNA binding sites have relatively poor accuracy. Using the 3D coordinates and the atom types of surface protein atoms as the input, we trained and tested a deep learning model to predict how likely a voxel on the protein surface is to be a DNA-binding site. Based on three different evaluation datasets, the results show that our model not only outperforms several previous methods on two commonly used datasets, but also demonstrates robust performance that is consistent among the three datasets. The visualized prediction outcomes show that the binding sites are also mostly located in the correct regions. We successfully built a deep learning model to predict the DNA binding sites on target proteins. It demonstrates that 3D protein structures plus atom-type information on protein surfaces can be used to predict the potential binding sites on a protein. This approach should be further extended to predict the binding sites of other important biological molecules. Introduction DNA carries genetic information about all life processes, and proteins perform many essential functions for maintaining life. Interactions between proteins and nucleic acids play central roles in a majority of cellular processes, such as DNA replication and repair, transcription, regulation of gene expression, degradation of nucleotides, development (growth and differentiation), DNA stabilization, and immunity/host defense [1,2]. Moreover, the processes controlling gene expression through protein-nucleic acid interactions are critical, as they increase the versatility and adaptability of an organism by allowing the cell to produce proteins when they are needed. However, revealing the mechanisms of protein-nucleic acid binding and recognition remains one of the biggest challenges in the life sciences [1][2][3][4]. Identifying the potential binding sites and residues on proteins is essential to understanding the interactions between proteins and their binding nucleic acids. A reliable prediction method will address this critical need and influence subsequent studies. Protein binding site prediction is a critical piece of research infrastructure, with direct applications in drug discovery and targeting. Although numerous complex structures comprising proteins and their binding partners, including protein-nucleic acid complexes, have been described in the public domain, many existing nucleic acid binding site prediction methods only utilize sequence (evolutionary) data or residue propensities and have not yet achieved sufficient accuracy [5][6][7]. Statistical analysis of nucleic acid binding residues has helped researchers to understand the binding propensities of the 20 amino acids [8][9][10][11]. However, molecular binding and recognition is a sophisticated process and is affected not only by the composition of amino acids. Subtle changes in the main chain and side chain atoms, and in their relative positions, change the local chemical environments on the protein surfaces. Previous studies which performed large-scale assessments of nucleic acid binding site prediction programs [5,6] also demonstrated that structure-based predictors often show better performance than their sequence-based counterparts.
However, neither approach has yet achieved a satisfactory level of prediction. The sensitivity of most of the prediction methods range from 0.2 to 0.6, as some methods may have lowered their specificity to increase their sensitivity; therefore, their highest Matthews correlation coefficient (MCC) value is about 0.3 [5][6][7]. On the contrary, the methods used to predict small molecule binding sites have demonstrated sensitivity and specificity over 0.8, and their highest MCC is around 0.8 [12,13]. The reason that the accuracy of nucleic acid binding site prediction is relatively low compared to small molecule binding site prediction can be explained as follows: (1) Small molecules tend to bind to the largest cavities on the protein surface, based on the observations of previous studies [12,14]. Therefore, prediction methods which have employed the geometrical features of proteins or combined them with other chemical or energy features have often produced reliable results [14][15][16][17][18][19][20][21][22][23]. On the other hand, DNA is a long-stretched molecule and binds to relatively flat surfaces on proteins. It is less useful to apply geometrical data in nucleic acid binding site prediction than in small molecule binding site prediction. (2) The definition of a nucleic acid binding residue has not yet been standardized, and there are several definitions [5]. Different cutoffs ranging from 3.5 Å to 6.0 Å have been used to define "binding residues" [4,9,[24][25][26][27][28][29][30][31][32][33][34][35][36][37][38][39][40][41]. A previous study demonstrated that a distance cutoff of 6.0 Å leads to a two-times-higher number of binding residues than that obtained with a cutoff of 3.5 Å [5]. The inconsistent cutoffs make it very difficult to evaluate, compare and improve the performance of different methods. (3) The energy-based approach has not been employed for nucleic acid binding site prediction and the binding affinities between proteins and nucleic acids has not been considered. (4) Often, DNA and RNA binding site prediction methods are developed separately [5,6]. Although DNA and RNA binding proteins usually perform different functions in vivo, DNA and RNA are two highly similar molecules. Their binding surfaces and binding mechanisms may be highly similar to each other [6,42]. In other words, considering RNA-binding residues/surfaces as non-DNA binding residues/surfaces or considering DNA-binding residues/surfaces as non-RNA binding residues/surfaces may interfere with the training and prediction processes. In recent years, deep learning has been attracting attention. These methods, which generally differ from past statistical methods, do not rely heavily on human-designed hyperparameters such as feature weighting, combinations, etc. Instead, such relationships and architectures emerge after periods of training. Neural networks have shown great promise in other domains, such as the object detection and classification performed by AlexNet in the 2012 ImageNet competition [43]. Other researchers have applied similar network topologies to the problem of binding site prediction in the past with good success [44]. A common limitation to many of these approaches is that they rely on multi-layer perceptrons (MLPs) at some stage in their network. 
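As an aside, the distance-cutoff definitions of a binding residue discussed in point (2) above reduce to a simple pairwise-distance test; a minimal sketch with hypothetical arrays:

```python
import numpy as np

def binding_residues(protein_xyz, protein_resid, dna_xyz, cutoff=3.5):
    """Return the set of residue ids with at least one atom within `cutoff` Å of any DNA atom."""
    protein_xyz = np.asarray(protein_xyz, dtype=float)
    dna_xyz = np.asarray(dna_xyz, dtype=float)
    dists = np.linalg.norm(protein_xyz[:, None, :] - dna_xyz[None, :, :], axis=-1)
    close_atom = (dists <= cutoff).any(axis=1)
    return set(np.asarray(protein_resid)[close_atom].tolist())

# Toy data: two protein atoms from residues 10 and 11, one DNA atom
prot = [[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]
resid = [10, 11]
dna = [[2.5, 0.0, 0.0]]
print(binding_residues(prot, resid, dna, cutoff=3.5))  # {10}
print(binding_residues(prot, resid, dna, cutoff=6.0))  # {10}; loosening the cutoff can only add residues
```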
MLPs are the conventional neural network type and are essentially groups of neurons (represented through matrix operations) that connect to each other and "fire" in relation to a linear combination of their connections, often paired with a final non-linear function. The major drawback of the conventional neural network is that the input data size must be exactly the same for all data, both in training and in inference [45]. This is because they are represented by multiplying an input in the form of a matrix (the number of samples by the number of input features) by a weight matrix (the number of input features by the number of output features). Finally, in most cases a bias matrix (the number of samples by the number of output features) is then added to the output. Both operations are therefore flexible with respect to the number of samples used (weights and biases can simply be copied to form the correct matrix size), but the number of features must remain constant. Therefore, images must be resized prior to input into networks such as AlexNet. Although this is not a major issue for 2D images, which typically can be resized without significantly changing the information represented, a general method for resizing 3D graphs such as protein complexes without the risk of changing the information does not exist. This means that models using MLPs must instead only crop the data, thus creating barriers to information flow across the cropped regions. The goal of this study, like others before it, is to develop an efficient method by which a large portion of the initial pool of candidates can be screened out prior to the more expensive steps in the aforementioned pipeline. Model Statistics and Prediction Outcomes The purpose of this study is to develop a deep learning model for DNA binding site prediction. After the training was completed, the prediction outcomes were retrieved and the performance of our prediction model was calculated on the training dataset (Table 1) and two external test sets (i.e., PDNA62 and PDNA224, Tables 2 and 3). The two test sets are not totally independent of the training sets. Based on the sequence alignment outcomes (see Supplementary Materials), there are 22 and 97 entries in PDNA62 and PDNA224, respectively, which may be homologs (sequence identity > 40%) of one or more entries in the training sets (see Supplementary Materials). However, the performance of our prediction model on the entire test sets and the non-redundant sets (i.e., excluding the homologs) shows no significant differences (see Tables 2 and 3). This demonstrates that our prediction model is robust. Table 3. Performance of DeepDISE compared with previous methods using PDNA224 [46]. In order to avoid overfitting during the training process, the training data were split into a 9:1 training and validation set (see Materials and Methods). The model performs well overall, with an MCC of 0.584 (Table 1). As expected, it performs slightly better on the training subset than the validation subset, but the overall performance on the validation set, with an MCC of 0.558, is still satisfactory. One observation is that the model generally performs best on "medium"-sized complexes ( Figure 1). This may be because the relatively limited and rugged binding surfaces of small DNA-binding proteins and complexes are difficult to recognize through deep learning. Moreover, most complexes are in the "medium size" category, which means that the deep learning model "learns" the patterns of medium sized proteins the best. 
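For reference, the accuracy, sensitivity, specificity, precision and MCC values reported in Tables 1-3 are the standard binary-classification metrics computed from per-grid counts; a minimal sketch with illustrative counts (not taken from the tables):

```python
import math

def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    acc = (tp + tn) / (tp + fp + tn + fn)
    sens = tp / (tp + fn)                      # sensitivity / recall
    spec = tn / (tn + fp)                      # specificity
    prec = tp / (tp + fp)                      # precision
    mcc = (tp * tn - fp * fn) / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "precision": prec, "MCC": mcc}

print(metrics(tp=620, fp=65, tn=1730, fn=375))   # illustrative counts only
```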
It should also be noted that the very large complexes could not be assigned to the training set but were placed in the validation set due to memory constraints. Figure 2 shows the prediction results for the intron-encoded homing endonuclease I-PpoI (PDB ID 1a73). The prediction outcomes of the deep learning model are continuous numbers between 0 and 1, colored from blue to red. In the left panel of Figure 2, the heat map locates most of the DNA-binding surface grids, with a few false positives and false negatives. To produce binary outcomes and reduce false positives and false negatives, DeepDISE performs a clustering step. The results for 1a73 are shown in Figure 2. Although it did not achieve 100% accuracy, the algorithm largely predicted the binding area and provided hints for further research and drug design. Figure 2. The prediction outcomes for protein PDB ID 1a73 before (left) and after (right) clustering. Comparing Performance In comparison with other methods, DeepDISE was tested against the PDNA62 and PDNA224 datasets, which have been used by previous studies. As shown in Tables 2 and 3, DeepDISE outperforms other existing methods in terms of accuracy, specificity, precision and MCC values, except that its sensitivity is lower than that of two other methods. However, according to the visualization results, the "false negative" grids were not totally undetected but rather were predicted with relatively lower scores. It also needs to be noted that, unlike previous studies, our model was trained on a separate dataset, independent of these two datasets, PDNA62 and PDNA224. Comparing our prediction results, as shown in Tables 1-3, the prediction performance was very consistent, rather than showing dramatic decreases from one dataset to another, which demonstrates that our model does not suffer from overfitting. Discussion The key innovation of this study is the use of a network topology that does not require the standardization of the data input. This is accomplished by using a fully convolutional neural network architecture. Convolutional network layers were originally designed to address one disadvantage of MLPs when applied to images: MLPs do not share "insights" with other neurons in the same layer. This means that, when applied to images, there will almost certainly be redundant relationships stored in the network, and if patterns do not appear in exactly the same location as in the training set, the network will not be able to recognize them easily. Convolutional layers solve this by using a set of "filters" that are convolved over the input data, creating (in most cases) an output that has the same dimensions/size as the input except for the feature dimension. Thus, such layers are used in networks like AlexNet. Moreover, convolutional layers are perfectly capable of solving "segmentation" problems in which the desired result is a region of points. Given that binding site prediction can be easily formulated in this way, we proposed that a fully convolutional network would likely achieve more desirable results than prior projects.
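A minimal sketch of what such a fully convolutional 3D segmentation model could look like is shown below (illustrative PyTorch only; the layer sizes and the 17-channel input, 16 atom types plus a non-surface channel, are assumptions, and this is not the published DeepDISE architecture):

```python
import torch
import torch.nn as nn

class TinyFCN3D(nn.Module):
    """Per-voxel binding-score prediction with only convolutional layers."""
    def __init__(self, in_channels: int = 17):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 1, kernel_size=1),   # one binding logit per voxel
        )

    def forward(self, x):
        # x: (batch, channels, D, H, W); D, H, W may vary between proteins,
        # which is exactly what fully connected (MLP) layers cannot handle.
        return torch.sigmoid(self.body(x))

model = TinyFCN3D()
grid = torch.randn(1, 17, 40, 48, 36)        # a voxelized protein of arbitrary size
heatmap = model(grid)                        # (1, 1, 40, 48, 36) continuous scores
print(heatmap.shape)
```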
Deep learning algorithms have been successfully applied to image recognition. Although a few previous methods used both sequence and "structure" features, including DSSP (secondary structure), accessible surface area (ASA) and the number of H-bonds and B-factors, these features are mostly one-dimensional (i.e., features highly related to the amino acid sequence). However, the input data of our model is four-dimensional (3+1, the 3D coordinates plus the atom type). This exploits the strength of deep learning algorithms in 3D image processing and leads to the outperformance and robustness shown by DeepDISE on different datasets. Atom type alone may contain many integrated physicochemical properties, such as polarity, charge, and hydrophobicity; however, adding secondary structural information and sequence conservation to the input data may further boost the accuracy of the prediction. Figure 3 shows the prediction outcomes for 2xma. In this case, DeepDISE achieved a prediction accuracy of 0.839, sensitivity of 0.624, specificity of 0.963, precision of 0.905 and an MCC of 0.651. The DNA-binding surfaces were mostly correctly identified, with a wide score range, illustrated by blue and red colors. Although, based on the grid count, the prediction accuracy is far from perfect, the purpose of identifying the DNA-binding site was achieved. We still need to develop a better clustering algorithm to precisely group adjacent medium- to high-scored grids together in the proposed binding surface. Moreover, some false-positive grids on the protein surface may be able to bind or attract DNA molecules distantly, but the potentially bound DNA is not shown in the PDB structure because the interactions are not strong enough to stabilize the binding of the DNA 3′ or 5′ terminals in the crystallized protein-DNA complex. This issue should be further investigated in the future.
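The four-dimensional input representation discussed above (three spatial dimensions in 1-Å voxels plus a one-hot atom-type channel) can be sketched in a few lines of NumPy; the channel layout and helper names below are assumptions for illustration, not the exact DeepDISE preprocessor:

```python
import numpy as np

N_TYPES = 16           # surface atom types
NON_SURFACE = N_TYPES  # extra channel standing in for non-surface/background

def voxelize(coords: np.ndarray, type_ids: np.ndarray, voxel: float = 1.0) -> np.ndarray:
    """coords: (n_atoms, 3) Cartesian coordinates in Å; type_ids: (n_atoms,) integers in [0, 15]."""
    origin = coords.min(axis=0)
    idx = np.floor((coords - origin) / voxel).astype(int)
    shape = idx.max(axis=0) + 1
    grid = np.zeros((*shape, N_TYPES + 1), dtype=np.float32)
    grid[..., NON_SURFACE] = 1.0                 # everything starts as background
    for (i, j, k), t in zip(idx, type_ids):
        grid[i, j, k, :] = 0.0
        grid[i, j, k, t] = 1.0                   # one-hot atom type at this voxel
    return grid

coords = np.array([[0.2, 1.1, 3.4], [2.7, 1.0, 3.9], [2.9, 4.2, 0.5]])
types = np.array([0, 5, 12])
print(voxelize(coords, types).shape)             # (3, 4, 4, 17)
```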
Materials and Methods The data pipeline began with a publicly available list of PDB files containing both proteins and DNA. Using this source eliminates the need to hand-curate thousands of PDB files by removing duplicates, low-accuracy positions, etc. The PDB files were parsed into custom-format files and erroneous ones were removed (Figure 4). These intermediate files were then processed by a C++ program that used the FreeSASA library to determine which atoms were located on the surface of the protein and to classify them according to atom types. The resulting outcomes were then fed to a preprocessor Python program that converted the atoms into a 4D Numpy array (3 spatial dimensions in 1-Å voxels plus a 1-hot encoded vector representing atom type, including non-surface). For training purposes, a 3D "ground truth" array was also generated to indicate whether each location belonged to the binding region or not. These Numpy arrays were ultimately passed into the main Python program for training or inference using the DeepDISE model, resulting in a final prediction Numpy array. This prediction array represented a continuous heat map of where the model predicted the binding region to be. For the purpose of calculating the final accuracy, a final Python program ingested the prediction array and applied k-means clustering to classify each point as binding or non-binding. PDB Entries To train our deep learning model and to test and compare its prediction accuracy with that of other existing methods, we needed two datasets of protein-DNA complexes. We obtained a PDB list of 560 DNA-interacting proteins from a manually curated database, ccPDB 2.0 [47].
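The voxelization described in the pipeline above can be pictured with the short numpy sketch below. The function name, grid size, channel count and the simplified binding rule are assumptions made here for illustration; this is not the DeepDISE code, and the actual ground-truth rule (described under Assessment) also checks the protein-DNA atom pair distance.

```python
# Toy voxelizer: a one-hot 4D atom-type grid on 1-Å voxels plus a 3D binding mask
# marking voxels that lie within 6 Å of both a protein atom and a DNA atom.
import numpy as np

def voxelize(protein_atoms, dna_coords, box=32, n_types=17, cutoff=6.0):
    """protein_atoms: list of (xyz array, atom_type_index); dna_coords: (M, 3) array."""
    grid = np.zeros((box, box, box, n_types), dtype=np.float32)
    for xyz, t in protein_atoms:
        i, j, k = np.clip(np.round(xyz).astype(int), 0, box - 1)
        grid[i, j, k, t] = 1.0                        # one-hot atom type at the nearest voxel

    # voxel centers on the same 1-Å lattice
    centers = np.stack(np.meshgrid(np.arange(box), np.arange(box), np.arange(box),
                                   indexing="ij"), axis=-1).reshape(-1, 3).astype(float)
    prot_xyz = np.array([a[0] for a in protein_atoms], dtype=float)
    near_prot = (np.linalg.norm(centers[:, None] - prot_xyz[None], axis=-1) < cutoff).any(axis=1)
    near_dna  = (np.linalg.norm(centers[:, None] - dna_coords[None], axis=-1) < cutoff).any(axis=1)
    mask = (near_prot & near_dna).reshape(box, box, box).astype(np.float32)
    return grid, mask
```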
The 560 PDB files were initially collected via a Python script that automatically fetched the PDB files from the RCSB database [48]. The same package then allowed us to parse each downloaded file into a Python dictionary object for ease of use later in the pipeline. The protein complexes were then screened to ensure that they contained only atoms that we could type and that they had DNA within them. Finally, 274 PDB files were saved in a custom format that allowed for an easy interface with the rest of the programs in the pipeline. In addition, we also downloaded two datasets, PDNA62 and PDNA224, consisting of 62 and 224 complexes, as two test datasets, in order to test our model and compare the performance of our algorithm with that of existing ones. Atom Classification and Atom Type Assignment The PDB files were parsed and passed to a C++/CUDA executable for atom classification. During this step, protein atoms were represented in isolation from DNA atoms to allow solvent-accessible surface area calculations to be performed using the FreeSASA library. Using this process, protein atoms were classified as either surface or non-surface. Next, all surface atoms were further classified as one of 16 different atom types based on atom and residue names. These atoms were then recombined with the DNA atoms for export. Proteins are generally made of a few elements (i.e., carbon, nitrogen, oxygen, sulfur and hydrogen). Simply classifying protein atoms into different element groups ignores their bonding and chemical environment. A common approach designed to augment prediction performance is to label atoms not by element alone, but also by other features such as bond order, (partial) charge, parent residue, etc. In previous studies, we developed an atom type classification scheme to describe protein-ligand interactions with a total of 23 atom types, of which 14 were for protein atoms and 20 for atoms on other ligands, with many of them shared by both [49,50]. In order to improve this classification scheme and avoid assigning chemically dissimilar atoms to the same atom type (e.g., a nitrogen located on the main chain and one on the histidine side chain), we made some modifications and used it as the basis to create a new nucleic acid prediction method. As shown in Table 4, the atom types are identified by a 3-character code, or by a 3-letter name for those that do not need to be further classified because they are relatively rarely observed in our datasets of protein-ligand complexes (i.e., metals and phosphorus). Some general rules for the 3-character codes are as follows: the 1st character is the element symbol (C, N, O, or S), and the 2nd and 3rd characters indicate the surroundings and electrostatic properties of the atom. The 2nd character can be 2, 3, R, or C, which respectively correspond to sp2 or sp3 hybridization, or inclusion in an aromatic ring or conjugated system. The 3rd character can be N, P, V, or C, which respectively correspond to nonpolar, polar (can be a hydrogen bond donor or acceptor), variable, or charged atoms. The "variable (V)" code is associated with the atom type NRV, which is used primarily for the two nitrogen atoms on the imidazole ring of a histidine, as both of the nitrogens can be either protonated (hydrogen bond donor) or deprotonated (hydrogen bond acceptor). For simplicity, the nitrogen of tryptophan, which is seen less frequently than histidine, especially in active sites, was also assigned the atom type NRV.
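To make the naming scheme tangible, the toy lookup below shows the (residue name, atom name) to atom-type pattern. Only the NRV assignments for histidine and tryptophan come from the text; the other entries are invented placeholders, not the authors' actual Table 4.

```python
# Toy atom-type lookup; only the NRV entries reflect the scheme described in the text.
ATOM_TYPE = {
    ("HIS", "ND1"): "NRV",   # imidazole nitrogen: donor or acceptor depending on protonation
    ("HIS", "NE2"): "NRV",
    ("TRP", "NE1"): "NRV",   # treated like the histidine nitrogens for simplicity
    ("ALA", "CB"):  "C3N",   # placeholder: sp3 carbon, nonpolar
    ("SER", "OG"):  "O3P",   # placeholder: sp3 oxygen, polar
}

def assign_atom_type(res_name, atom_name):
    return ATOM_TYPE.get((res_name, atom_name), "UNK")

print(assign_atom_type("HIS", "ND1"))   # NRV
```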
We developed an algorithm which automatically assigned an atom type to each protein atom. To assign an atom type to each atom in a binding complex, we need to know the element, the bond orders that connect the atom to others, and which atoms it connects to. Based on our knowledge of the naming conventions and structures of the common amino acids, all of the bonding information mentioned above can be derived from the atom names and residue names recorded in a PDB file. Preprocessing After determining the atom types of all protein atoms, final preprocessing was performed. Upon ingestion, the script constructed a 4D Numpy array, in which three dimensions corresponded to the spatial dimensions x, y, and z, with an additional dimension encoding the atom type. The array was designed to be 3 Å in all 3 directions and was subdivided into 1-Å voxels. After the array was allocated, the script iterated over all the protein atoms and populated the voxel closest to the center position of each atom with its type, which was recorded into the array. For training purposes, distance calculations were performed to determine the binding region and generate a corresponding 3D mask. In this case, the binding region was defined as the region, up to 6 Å wide, between the center of a protein atom and a DNA atom. All voxels within this region were set to 1 and all voxels outside it were set to 0. Finally, both the input and mask arrays were rotated into 24 unique 90° 3D rotations and saved to a Numpy compressed archive with the PDB and rotation IDs in the file name. Model The DeepDISE model is a fully convolutional neural network written with PyTorch Lightning. Any model whose input and output are in the form of arrays can be executed on traditional CPUs or, much faster, on GPUs. Architecture The high-level architecture of DeepDISE was based on a fully convolutional neural network called UNET [51]. Under this architecture, data entered the network and passed through a series of blocks composed of convolutional layers. For the first half of the network, the output of each block was down-sampled using pooling layers before being passed on to the next block in the sequence. The last half of the network up-sampled the output of each block by the same ratio as the previous layer's down-sampling. The final output of the network was the same size as the input, which lent itself very well to segmentation problems, in which the output needed to act as a mask on the input. It has been theorized that UNETs perform well relative to other architectures since they have multiple scales for the convolutional operations to act on, in contrast to ResNets, and multiple pathways for the gradient to flow through, in contrast to simple feedforward networks. Within each block of the network, a separate architecture was implemented based on the DenseNet architecture [52]. Each layer in this block was a single convolutional layer paired with a Mish activation function [52]. Under this architecture, the input of each layer was the concatenation of the initial block input and the outputs of all previous layers. This architecture worked under a similar assumption as the UNET model: the network is more easily trained because the optimizer has clear paths through the gradients of the initial layers. In the DeepDISE model there are 4 such layers. Training The DeepDISE model was trained using a curated set of PDBs with proteins binding to DNA.
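Before turning to the training details, the block style described above can be summarized in a compact PyTorch sketch; this is our illustration of a dense block with Mish activations, not the DeepDISE implementation, and the channel counts are arbitrary.

```python
# Sketch of a DenseNet-style 3D block: each layer is a Conv3d + Mish, and each layer's
# input is the concatenation of the block input with all previous layer outputs.
import torch
import torch.nn as nn

class DenseBlock3D(nn.Module):
    def __init__(self, in_channels, growth=8, n_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv3d(ch, growth, kernel_size=3, padding=1),
                nn.Mish(),
            ))
            ch += growth                      # the next layer sees everything produced so far
        self.out_channels = ch

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

block = DenseBlock3D(in_channels=17)
print(block(torch.zeros(1, 17, 16, 16, 16)).shape)   # torch.Size([1, 49, 16, 16, 16])
```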
The PDB records were preprocessed and split into a 9:1 training and validation set. Although the training set is not totally non-redundant, the all-against-all pairwise sequence alignment results (see Supplementary Material) showed that less than 1% of the pairs of sequences were homologs. Any complexes that were too large to be trained on using an Nvidia 2070 Max-Q GPU were also added to the validation set. In the training set, each protein-DNA complex was rotated 24 times to generate 24 structural files with different orientations. Among the 24 orientations, 2 were added to the validation set to better track the model's performance while training, but they were removed prior to the final statistical calculations for this paper. The model was trained over the course of 48 h using the Ranger optimizer and binary cross-entropy as the loss function. The Ranger optimizer was chosen as it has been shown to produce good results in other applications and is essentially a combination of AdamW and LookAhead. While training, real-time statistics were exported in the TensorBoard format via a PyTorch Lightning callback so that we could monitor the model's overall convergence and spot issues without needing to wait for training to be fully completed. In Figures S1 and S2 (see Supplementary Materials), there is a raw representation denoted by light blue and a "smoothed" representation denoted by dark blue in each image. The training error essentially converged, while the validation error slowly decreased over time. This was expected, as the training data points represent individual proteins, whereas the validation data points represent the full validation set, thus averaging out the perceived variance. Training was stopped after roughly 2.5 epochs through the training data (validation was calculated every 1/8th of an epoch). The final binary cross-entropy score across the training and validation sets together was 0.02358, compared with 0.69315 if the model had only predicted non-binding for the full dataset (the most common true label for points). Clustering and Final Prediction The DeepDISE model creates a continuous output. For applications where a binary classification is needed, an additional step to generate the final prediction is required. Initial experiments with linear classification were explored, but ultimately the accuracy did not seem to align with the qualitative results of the model output. Because of this, we decided to use a system based on k-means clustering. This allowed the binding-site determination to leverage not only the prediction score given by the model but also each point's spatial location relative to other scores. To arrive at a binary classification, two rounds of clustering were used. In the first round, n clusters were generated, where n equals the number of atoms divided by 1000, rounded up to the nearest integer. The clustering algorithm was then given the list of points comprising the prediction, where all the values were first standardized and the score dimension was then scaled by 5× to bias the clustering towards it. The n clusters were then passed to a second round of clustering, where the algorithm was only given the average score of each cluster and was required to group them into 2 clusters. Finally, the cluster of clusters with the highest average score was labeled the binding cluster, and all points within it were assigned to the binding region.
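The two-round clustering described above can be sketched with scikit-learn as follows; the parameter choices mirror the text (n = ceil(points / 1000), score dimension scaled 5x after standardization), but the function itself is an assumption-laden illustration rather than the published code, and it presumes n is at least 2.

```python
# Two-round k-means: first cluster points in (x, y, z, score) space, then cluster the
# cluster-average scores into two groups and call the higher-scoring group "binding".
import math
import numpy as np
from sklearn.cluster import KMeans

def binarize(points_xyz, scores):
    """points_xyz: (N, 3) voxel coordinates; scores: (N,) model outputs in [0, 1]."""
    feats = np.column_stack([points_xyz, scores]).astype(float)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-9)   # standardize
    feats[:, 3] *= 5.0                          # bias the clustering toward the score
    n = math.ceil(len(feats) / 1000)
    labels = KMeans(n_clusters=n, n_init=10, random_state=0).fit_predict(feats)
    cluster_means = np.array([scores[labels == c].mean() for c in range(n)])
    groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        cluster_means.reshape(-1, 1))
    binding_group = groups[np.argmax(cluster_means)]
    binding_clusters = [c for c in range(n) if groups[c] == binding_group]
    return np.isin(labels, binding_clusters)    # boolean mask: binding vs non-binding
```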
Assessment of the Binding Site Prediction The final statistics were computed on a per-grid basis, where each grid represented a 1-Å voxel within the complex. Ground truth was determined by labeling each voxel as either binding or non-binding as a function of its proximity to both a DNA atom and a protein atom. For each voxel, the algorithm first iterates over each protein atom in the complex. If the atom is within 6 Å of the voxel, the algorithm then checks whether the voxel is within 6 Å of a DNA atom. If so, it finally checks whether that DNA atom is also within 6 Å of the protein atom from the first step. If this is the case, the voxel is labeled as part of the binding site. Conclusions In this study, we have developed a deep learning-based method to model and predict the DNA-binding sites on target proteins. Due to its robustness, this model can be applied to different datasets to successfully identify the potential DNA-binding sites of most target proteins. We have also demonstrated that, by using only the 3-dimensional protein structures plus the assigned atom types of the surface atoms, we were able to train a deep learning model to predict DNA-binding sites. This approach should also be applicable to creating models that predict other binding partners of a target protein, such as drug compounds or other proteins. When we build all of these prediction models and integrate them together, we will be able to detect all the functional patches on a target protein and further reveal the recognition mechanisms of our proteome. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22115510/s1, Figure S1: The trajectory of the loss function of the training dataset, Figure S2: The trajectory of the loss function of the validation dataset. Data Availability Statement: The data presented in this study are openly available on https://gitlab.godfreyhendrix.com/cddl/deepdise-prediction-results (accessed on 30 April 2021) and are free for academic users.
Differences in Financial Performance of LQ45 Companies Listed on the Indonesian Stock Exchange during the Covid-19 Pandemic : The global economy has been hit by a crisis in the form of the Covid-19 pandemic, and Indonesia is no exception. The pandemic has infected populations and weakened the economic power of all countries, so company performance during the pandemic deserves careful study. This phenomenon motivated the present research on Indonesian companies. The purpose of this study is to determine company performance before and during the Covid-19 pandemic. For this, the researchers use the "strongest" firms in the Indonesian capital market, the LQ-45 companies. From the 45 constituents, 21 companies from various sectors were obtained using a purposive sampling method. The data were collected from annual financial reports for 2018-2019 (pre-pandemic) and 2020-2021 (during the Covid-19 pandemic). The variables used to describe company performance are the current ratio (CR), debt to equity ratio (DER), total asset turnover (TO), return on equity (ROE), and earnings per share (EPS). These variables are suggested by researchers as representative measures of each company's financial condition. The research method used is a test of differences in paired data. A data normality test was performed first and showed that the data used were not normally distributed; therefore, for the further analysis to determine whether there were differences before and during the Covid-19 pandemic, the Wilcoxon signed-rank test for paired data was used. We found no difference in firm performance for the CR and DER variables before and during the pandemic. However, for the TO, ROE and EPS variables, there are differences in the performance of LQ-45 companies. In addition, these results also show that business performance declined during the Covid-19 pandemic. INTRODUCTION The global economic crisis of 2020 was felt by all countries of the world, including Indonesia. This condition is associated with the Covid-19 pandemic, which lasted more than two years, from November 2019 to 2022. The Covid-19 pandemic has affected various sectors of the Indonesian economy. (Kurniawan & Makarim, 2022) stated that one of the most affected industry sectors is the hotel, restaurant and tourism sub-sector. (Kurniawan & Makarim, 2022) also stated that the trading sector has been affected by a decrease in demand for goods, which in turn has also affected industrial activity. (Sitohang, 2021) stated that all industry indices showed negative performance, led by the various industry sector (-19.34 percent), the financial sector (-18.58 percent) and the infrastructure sector (-14.76 percent). The Covid-19 pandemic has affected almost every industry, both primary and secondary.
The Covid-19 pandemic has affected various industries, causing a downturn in the national economy. Several stock prices in Indonesia declined due to company fundamentals and economic conditions in the country (Fatimah, Prihastiwi, & Islamiciyatun, 2021). According to (Krismawati, 2022), in the investment world, especially the capital market industry, the Covid-19 pandemic was met with a sharp decline in stock prices in stock markets around the world. Seven companies experienced a decline in their stock prices during the pandemic, including PT Astra International Tbk., PT Perusahaan Gas Negara Tbk., PT Semen Indonesia Tbk., PT United Tractors Tbk., PT Gudang Garam Tbk., PT Indocement Tunggal Perkasa Tbk., and PT Bank Negara Indonesia (Violandani, 2021). Share prices change in response to company fundamentals, investor buying and selling trends, share price manipulation and panic, and economic conditions in the country; investor anxiety is a trigger for falling stock prices. The Indonesian government's efforts to slow the spread of the Covid-19 virus took the form of several policies in 2020, such as: 1) calling on people to practice 3M (wearing masks, washing hands and keeping distance) (Hayati, 2020); 2) introducing LSR (large-scale social restrictions) from April 2020, which changed names and formats several times to transitional LSR, emergency RCM, and four-level RCM at the end of July 2021 (Permatasari, 2021); and 3) a vaccination program starting in January 2021 (Ministry of Health, 2021). Since vaccination began, the government has allowed the community to return to school, work and worship as before. According to the Asian Development Bank, Indonesia's economic growth is projected at 5.3%. This forecast is based on the assumption that the Indonesian economy has returned to stability and the Covid-19 virus is under control with a vaccine (Setiawan & Setiadin, 2020). Other researchers (Fatimah, Prihastiwi, & Islamiciyatun, 2021) analyzed the financial statements of LQ45 companies before and during Covid, and the results showed that their financial performance was better before the pandemic. According to a study conducted by (Martini, 2020) on the performance of LQ45 shares before and during the Covid-19 pandemic in Indonesia, the performance of LQ45 shares on the Indonesian stock exchange declined during the pandemic. This study examines companies in the LQ45 index on the Indonesian stock exchange. Companies in the LQ45 index are interesting to study because they are the ones investors are most interested in; in addition, shares in the LQ45 index have a high level of liquidity and market capitalization. Based on the phenomenon of the Covid-19 pandemic in Indonesia, which has had a negative impact on various business sectors, and in line with several policies implemented by the Indonesian government between 2020 and 2021, especially after the vaccination program, the researchers want to analyze the differences in the financial performance of LQ45-indexed companies before and during the pandemic, using the financial statements of LQ45-indexed companies listed on the Indonesian stock exchange from 2018 to 2021.
LITERATURE REVIEW (continued) 1. Liquidity Ratio. The current ratio (CR) is a ratio that compares a company's short-term debt with the current assets owned by the company, showing how well short-term obligations can be repaid (Candradewi, 2016). 2. Solvability Ratio. The debt to equity ratio (DER) is one of the financial ratios indicating the ability to pay off all long-term and short-term debts. The larger this ratio, the less favorable it is, because the greater the risk of failure that can occur in the company (Anisa & Putri, 2022). DER = Debt/Equity (Sari, Rahmawati, & Helmmiati, 2022). EPS is a form of providing returns to shareholders for each share they own, where EPS is the ratio being compared. METHODOLOGY This study is a comparative study with a quantitative approach. This follows from the purpose of the study, which is to find out whether there were differences in financial performance before and during the Covid-19 pandemic in companies included in the LQ45 index. According to (Sekaran & Bougie, 2017), a population is a group of people, events, or things of interest about which researchers want to draw conclusions (based on sample statistics). The population of this study is the companies that are members of the LQ45 index for the period 2018 to 2021, namely 45 companies operating in various sectors. The sample in the study was selected using the purposive sampling method. According to (Sugiyono, 2018), purposive sampling is a sampling technique based on specific considerations. In this study the criteria for sample selection are: 1. Companies included in the LQ45 index throughout 2018 to 2021, in line with the research case study. 2. Companies that are not financial institutions, because the financial ratios of financial institutions differ from those of non-financial companies. 3. Companies that provide annual financial statements reported regularly and completely, covering the variables used in this study, on the Indonesia Stock Exchange during 2018 to 2021. Based on the criteria above, a sample of 21 companies out of 45 companies was obtained. The sample is described in chapter four. Research Data and Sources The types and sources of data used and their collection techniques are described below. Data Types and Sources The source of this research data is secondary data. Secondary data in this study are the interim financial statements published by the companies in the research sample. The financial reports were obtained from the Indonesia Stock Exchange website (IDX, 2022). Data Collection Techniques The data collection method used is documentation from a database, because the researchers use secondary data. This method was carried out by collecting and recording financial statement data from the website of the Indonesia Stock Exchange (IDX), namely idx.co.id. Secondary data were taken for all variables from 2018 to 2021 for each sampled company listed on the Indonesia Stock Exchange. Descriptive Statistics Descriptive statistics provide an overview or description of the data in terms of the mean, standard deviation, variance, maximum, minimum, sum, range, kurtosis, and skewness of the distribution. The presentation of descriptive statistics in this study is limited to the calculation of the minimum, maximum, average, and standard deviation.
Data Normality Test The normality test is carried out to determine whether the data for the research variables are normally distributed. This test must be done first, because it determines the subsequent test. The research sample amounted to more than 30 observations, and the normality test was performed using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. According to (Ghozali, 2018), the residuals are said to be normal if the significance value of the Kolmogorov-Smirnov and Shapiro-Wilk tests is greater than 0.05; if the significance value is less than 0.05, the research data are not normally distributed. If the data are normally distributed, the test performed is a paired sample t-test; if the data are not normally distributed, the test carried out is the Wilcoxon signed-rank test. Paired Sample T-Test This study compares financial ratios before and during the Covid-19 pandemic, so the test carried out is a paired sample t-test. The paired sample t-test is carried out to find out whether different treatments or conditions give different results on the statistical average. According to (Santoso, 2014), once the data are normally distributed, the analysis can be continued with the t-test for differences. This test is performed on two paired samples, where a paired sample is defined as a sample with the same subjects but subjected to two different measurements. Similarly, in the research conducted by (Mengkuningtyas & Adib, 2016), based on the results of hypothesis testing, if the significance is > 0.05 the data are not different, while if the significance is < 0.05 the data are different. The paired sample t-test statistic is t = d̄ / (s_d / √n), where d̄ is the mean of the paired differences, s_d is the standard deviation of those differences, and n is the number of pairs; the null hypothesis is accepted if the resulting significance exceeds 0.05. The rank-test results show that for the variables CR, TO, ROE, and EPS the value of Ties (identical values between the groups before and during the Covid-19 period) is 0, which means that there is no equality of values before and during the Covid-19 period. Meanwhile, the results of hypothesis testing for the average differences in the data before and during the Covid-19 period are presented in the following table. Based on the SPSS output, the Wilcoxon test results for the CR and DER variables give significance values of 0.940 and 0.188, both greater than α = 0.05, so H0 is accepted; this shows that there is no difference in the CR and DER variables before and during Covid-19. For the TO, ROE, and EPS variables, significance values of 0.000 and 0.014 are produced, so H0 is rejected. Thus, from the output above, there are average differences in the TO, ROE, and EPS variables before and during the Covid-19 period. Based on the problem in this study, namely whether there is a difference in the Current Ratio variable before and during Covid-19 in LQ45 companies, the results show that there is no difference. This shows that LQ45-indexed companies performed well in terms of their ability to pay debts or short-term obligations both before and during the pandemic, as stated by Candradewi (2016), who notes that the current ratio compares how the company's short-term debt can be met with the company's current assets.
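The testing sequence described above (normality check first, then either the paired sample t-test or the Wilcoxon signed-rank test) can also be reproduced outside SPSS; the sketch below uses Python/scipy with placeholder numbers rather than the study's data.

```python
# Illustrative decision rule: Shapiro-Wilk on the paired differences, then either the
# paired t-test (normal) or the Wilcoxon signed-rank test (non-normal).
import numpy as np
from scipy import stats

before = np.array([1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7])   # e.g. CR per company, 2018-2019
during = np.array([1.6, 2.0, 1.4, 2.1, 1.8, 1.9, 1.6])   # same companies, 2020-2021

diff = before - during
normal = stats.shapiro(diff).pvalue > 0.05

if normal:
    stat, p = stats.ttest_rel(before, during)             # paired sample t-test
else:
    stat, p = stats.wilcoxon(before, during)              # Wilcoxon signed-rank test

print("different before vs during" if p < 0.05 else "no significant difference", p)
```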
In line with the research problem, whether there is a difference in the Debt to Equity Ratio in LQ45 companies before and during the Covid-19 pandemic, the results also show that there is no difference. According to Anisa (2022), the Debt to Equity Ratio is one of the financial ratios that reflects the ability to pay off all long-term and short-term debts. The analysis shows that the performance of LQ45 companies in fulfilling their obligations to repay long-term debt, both before and during the Covid-19 pandemic, was quite good. In contrast, the research problem of whether there is a difference in Total Asset Turnover in LQ45 companies before and during the Covid-19 pandemic shows that there are differences. Based on the definition given by Brigham & Houston in (Irsan, 2021), Total Asset Turnover is a ratio that measures the turnover of all company assets and is calculated by dividing sales by total assets. The difference in performance before and during the Covid-19 pandemic in terms of sales was felt by all business sectors, which experienced a decrease in revenue through sales. This decline in sales can also be seen in the data on falling stock prices during the Covid-19 pandemic in 2020. Likewise, the research question of whether there are differences in Return on Equity in LQ45 companies before and during the Covid-19 pandemic shows that there are differences. According to (Seventeen, 2021), Return on Equity (ROE) is a ratio used to measure the net profit obtained by managers from the capital invested by company owners, and ROE is measured by comparing net profit with total capital. Because ROE measures the level of profit obtained from sales relative to total capital, the decline in sales described in the third problem also indirectly affects this measure of company performance in the profitability ratio. The last research problem, whether there are differences in Earnings Per Share in LQ45 companies before and during the Covid-19 pandemic, also shows that there are differences. According to Fahmi in Sari (2022), EPS is a form of providing returns to shareholders for every share owned, and EPS is a ratio that compares net income with the number of outstanding shares. In line with the decline in sales in the previous problem, which reduced revenue, the EPS values also declined. The results of the analysis of this last problem are also supported by the data on the decline in stock prices in the capital market in 2020 during the Covid-19 pandemic. CONCLUSIONS AND SUGGESTION Based on the results of research on the LQ-45 companies for 2018 to 2021, two main conclusions can be drawn. First, there is no difference in financial performance when viewed from the Current Ratio (CR) and Debt to Equity Ratio (DER) variables before and during the Covid-19 period. Second, based on the results of the Wilcoxon analysis, there are differences in financial performance when viewed from the Total Asset Turnover (TO), Return on Equity (ROE), and Earnings Per Share (EPS) variables before and during the Covid-19 period.
From the results of this study, the researchers hope to provide additional study material in the field of financial management, especially on the theme of financial performance during special conditions such as the Covid-19 pandemic. Further research is recommended to add several things, such as indicators for each financial ratio used, companies listed on indices other than LQ-45, and research settings other than Indonesia as a developing country. In this study, the methods of data analysis are descriptive statistics and normality tests, while the tests of average differences used are the paired sample t-test and the Wilcoxon signed-rank test. [Descriptive statistics table: totals, averages, and standard deviations of the data before and during the Covid-19 period.] The rank-test results, based on the values produced by the CR, DER, TO, ROE, and EPS variables before and during the Covid-19 period, are presented in the tables. Table 1. Wilcoxon Signed Rank Test Results of CR, DER, TO, ROE, and EPS Variables on Data Before and During Covid-19. Table 2. Results of Financial Performance Analysis Before and During the Covid-19 Pandemic.
Structural basis for potent and broad inhibition of HIV-1 RT by thiophene[3,2-d]pyrimidine non-nucleoside inhibitors Rapid generation of drug-resistant mutations in HIV-1 reverse transcriptase (RT), a prime target for anti-HIV therapy, poses a major impediment to effective anti-HIV treatment. Our previous efforts have led to the development of two novel non-nucleoside reverse transcriptase inhibitors (NNRTIs) with piperidine-substituted thiophene[3,2-d]pyrimidine scaffolds, compounds K-5a2 and 25a, which demonstrate highly potent anti-HIV-1 activities and improved resistance profiles compared with etravirine and rilpivirine, respectively. Here, we have determined the crystal structures of HIV-1 wild-type (WT) RT and seven RT variants bearing prevalent drug-resistant mutations in complex with K-5a2 or 25a at ~2 Å resolution. These high-resolution structures illustrate the molecular details of the extensive hydrophobic interactions and the network of main-chain hydrogen bonds formed between the NNRTIs and the RT inhibitor-binding pocket, and provide valuable insights into the favorable structural features that can be employed for designing NNRTIs that are broadly active against drug-resistant HIV-1 variants. Introduction HIV-1 reverse transcriptase (RT) (hereinafter referred to as RT) plays an essential role in the viral life cycle by reverse transcribing the single-stranded RNA genome to a double-stranded DNA copy (Deeks et al., 2015; Engelman and Cherepanov, 2012). For this reason, it has been an important target of anti-HIV therapies (Esté and Cihlar, 2010; Gubernick et al., 2016). There are two main types of RT inhibitors: nucleoside RT inhibitors (NRTIs), which act as chain terminators and compete with incoming nucleotides in the polymerase active site (Ren et al., 1998; Sarafianos et al., 1999; Tu et al., 2010; Yarchoan et al., 1988), and non-nucleoside RT inhibitors (NNRTIs), which inhibit the activity of RT noncompetitively (Merluzzi et al., 1990; Spence et al., 1995). NNRTIs are a group of structurally diverse compounds that bind to the non-nucleoside inhibitor-binding pocket (NNIBP) located ~10 Å from the polymerase active site (Ding et al., 1995; Kohlstaedt et al., 1992; Ren et al., 1995). The NNIBP is a hydrophobic pocket that emerges only when NNRTIs bind and induce conformational rearrangements of the residues defining the pocket (Hsiou et al., 1996). NNRTIs are key components in highly active antiretroviral therapy (HAART) due to their high specificity, desirable pharmacokinetics and generally good tolerance (Moore and Chaisson, 1999; Pomerantz and Horn, 2003). Despite the success of NNRTIs in suppressing HIV-1 replication and reducing viral loads, their effectiveness is compromised by the emergence of drug-resistant mutations in RT (Wainberg et al., 2011). Earlier NNRTIs, including nevirapine (NVP), delavirdine (DLV) and efavirenz (EFV), have low genetic barriers to resistance and are extremely susceptible to mutations in the NNIBP of RT (Arnold, 2013a, 2013b). K103N, Y181C and Y188L are among the most prevalent NNRTI-resistant mutations identified in RT (de Béthune, 2010; Wensing et al., 2017). The Y181C and Y188L mutations introduce steric hindrances between NNRTIs and the pocket, and/or eliminate critical π-π stacking interactions between the side chains of the two tyrosine residues and the aromatic rings in NNRTIs (Hsiou et al., 1998; Ren et al., 2001).
As to the K103N mutation, it was long believed that it prevented the entry of NNRTIs by stabilizing the closed conformation of the NNIBP (Hsiou et al., 2001). However, a more recent study indicates that the resistance is more likely caused by the electrostatic difference between Asn103 and Lys103 (Lai et al., 2016). In light of this new piece of data, the K103N mutation seems to utilize the same mechanism as Y181C and Y188L to confer resistance to NNRTIs: by altering the shape or surface properties of the NNIBP. Next-generation NNRTIs are designed with conformational flexibility and positional adaptability and are able to target the NNIBPs of an array of drug-resistant RT mutants (Das et al., 2004). Etravirine (ETR, also known as TMC125) and rilpivirine (RPV, also known as TMC278) are two U.S. Food and Drug Administration (FDA)-approved second-generation NNRTIs belonging to the diarylpyrimidine (DAPY) family (Figure 1). Both drugs show potent antiviral activities against wild-type (WT) HIV-1 and many HIV-1 variants displaying significant resistance to first-generation NNRTIs (Janssen et al., 2005; Ludovici et al., 2001). However, some existing resistance-associated RT mutations, such as K101P and Y181I, can still cause substantial decreases in susceptibility to ETR and RPV (Azijn et al., 2010; Giacobbi and Sluis-Cremer, 2017; Smith et al., 2016). Besides, new resistance mutations can arise from prolonged use of ETR and RPV, which undermine their anti-HIV-1 activities (Wensing et al., 2017). In patients who failed ETR- or RPV-based therapies, E138K/Q/R are among the most frequently occurring mutations identified in RT (Xu et al., 2013). Therefore, it is imperative to develop new NNRTIs with improved drug-resistance profiles. Our previous efforts have led to the design and synthesis of two piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs using ETR as a lead compound (Figure 1) (Kang et al., 2016, 2017). Compound K-5a2 features a thiophene[3,2-d]pyrimidine central ring, and replaces the cyanophenyl right wing of ETR with a more extended piperidine-linked benzenesulfonamide group, while keeping the 4-cyano-2,6-dimethylphenyl structure in the left wing of ETR. Compound 25a shares the same central ring and right wing structures with K-5a2, but grafts the 4-cyanovinyl-2,6-dimethylphenyl structure of RPV onto its left wing. Compared with ETR, compound K-5a2 displays much lower cytotoxicity and increased anti-HIV-1 potency against WT virus and virus strains with a variety of NNRTI-resistant mutations, except K103N and K103N/Y181C (Kang et al., 2016). The further optimized compound 25a is exceptionally potent in inhibiting WT HIV-1 and exhibits significantly better anti-HIV-1 activities than ETR against all of the tested NNRTI-resistant HIV-1 strains in cellular assays (Kang et al., 2017). In this study, we demonstrated that 25a is superior to RPV in inhibiting RT bearing a wide range of resistance mutations, including K101P, Y181I and K103N/Y181I, against which RPV loses considerable potency, and determined the crystal structures of WT and mutant RTs in complex with either K-5a2 or 25a. These structures illustrate the detailed interactions between RT and the two inhibitors, and explain why K-5a2 and 25a are resilient to NNRTI-resistant mutations in the NNIBP. Additionally, comparison of the binding modes of K-5a2 and 25a with those of ETR and RPV suggests possible mechanisms for the susceptibilities of ETR and RPV to the E138K and K101P mutations.
Figure 1. Chemical structures of NNRTIs. The torsion angles defining the rotatable bonds are labeled as t1 to t7 in K-5a2 and t1 to t8 in 25a. The equivalent torsion angles in ETR and RPV are labeled as t4 to t7 and t4 to t8, respectively. The structures of K-5a2 and 25a can be divided into three functional regions: a thiophene[3,2-d]pyrimidine central ring, a piperidine-linked benzenesulfonamide right wing, and a 4-cyano- (or 4-cyanovinyl-) 2,6-dimethylphenyl left wing. DOI: https://doi.org/10.7554/eLife.36340.002 Our results outline the structural features of NNRTIs that can be employed for future drug design to overcome prevalent NNRTI-resistant mutations. Structure determination The complexes of 25a or K-5a2 bound to WT RT or RT variants with drug-resistant mutations were prepared by soaking either NNRTI into the RT crystals. The structures were determined by molecular replacement using the structure of the WT RT/RPV complex (PDB ID: 4G1Q) as the search template and were subsequently refined to 1.9-2.23 Å resolution (Supplementary file 1). Overall, the structure of RT in the complexes has the same 'open-cleft' conformation as observed in prior RT/NNRTI structures (Figure 2A and C) (Das et al., 2004; Ding et al., 1995; Ren et al., 1995). Interactions between piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs and RT The RT-bound 25a and K-5a2 adopt a horseshoe conformation, which is similar to that seen with NNRTIs in the DAPY family (Das et al., 2004). Both inhibitors exhibit remarkable structural complementarity to the NNIBP, with substantial extensions into the three channels (tunnel, entrance and groove) characterizing the pocket (Figure 3A and B and Figure 3-figure supplement 1A and B). The left wing structures of 25a and K-5a2 form hydrophobic interactions with Pro95 and Leu234, and project into the tunnel lined by Tyr181, Tyr188, Phe227, and Trp229, forming π-π interactions with these residues. The entrance channel gated by Glu138 in the p51 subunit and Lys101 in the p66 subunit is an underexplored region in the NNIBP. By substituting the central pyrimidine ring of DAPY NNRTIs with a thiophene[3,2-d]pyrimidine heterocyclic structure, 25a and K-5a2 are able to establish nonpolar interactions with the alkyl chain of Glu138, while retaining the favorable hydrophobic contacts with Val179 and Leu100 manifested in the complexes of RT with ETR or RPV (Das et al., 2004). The piperidine-linked aryl structure of the right wing arches into the groove surrounded by Lys103, Val106, Pro225, Phe227, Pro236, and Tyr318, developing numerous van der Waals contacts with their lipophilic side chains, and directs the terminal sulfonamide group to the solvent-exposed surface of RT. In addition, binding of 25a and K-5a2 to the NNIBP is stabilized by an extensive hydrogen-bonding network between the inhibitors and the main chains of several key residues around the pocket (Figure 3C and Pro236 through a bridging water molecule; (iii) the amine group linking the central thiophene pyrimidine and the piperidine ring interacts with the carbonyl oxygen of Lys101, forming a conserved hydrogen bond observed in a number of second-generation NNRTI/RT complexes (Lansdon et al., 2010); (iv) additionally, the nitrogen and sulfur atoms in the central thiophene pyrimidine ring are involved in two water-mediated hydrogen bonds with the backbone nitrogen of Lys101 and the carbonyl oxygens of Glu138, respectively.
These extensive interactions between the two piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs and RT lock the enzyme in an open-cleft conformation and inhibit its polymerization activity. The above interactions between RT and the two NNRTIs generally agree with the results from molecular docking (Kang et al., 2016, 2017). Nevertheless, a close inspection of the inhibitors observed in the crystal structures and those predicted by molecular docking reveals a few notable differences in their binding modes. First, the thiophene pyrimidine nitrogen in the inhibitors is not directly hydrogen bonded to the backbone nitrogen of Lys101, as predicted by inhibitor docking, but through a bridging water molecule instead. Second, the crystal structures define a water-mediated hydrogen bond between the carboxyl oxygen of Glu138 and the sulfur group in the inhibitors' central ring, which is absent in the predicted binding modes. Due to their free movement and transient involvement in the binding process, it is difficult to predict the role of solvent molecules in the interactions between enzymes and inhibitors using ligand-docking programs. These water-mediated interactions, however, can be critical for enzyme-inhibitor complex formation and thus can provide important insights for understanding the resistance mechanisms of RT mutants. Inhibition of HIV-1 RT by piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs Our previous MT-4 cell-based antiviral activity evaluations showed that K-5a2 displays ~3-fold greater efficacy than ETR against the WT HIV-1 strain, and higher or similar efficacy against virus variants bearing four prevalent single-residue mutations (L100I, K103N, E138K and Y181C) in RT (hereinafter referred to as L100I RT, K103N RT, E138K RT and Y181C RT, respectively). However, K-5a2 is less effective than ETR in inhibiting HIV-1 strains containing K103N/Y181C RT (Kang et al., 2016). Compound 25a, resulting from further optimization of K-5a2, overcomes the limitations of K-5a2 and exhibits significantly better inhibitory effects on all tested HIV-1 strains (Kang et al., 2017). To better compare the anti-HIV-1 potency of 25a with that of existing NNRTIs, we measured the EC50 values of RPV towards WT HIV-1 and mutant RT-bearing HIV-1 variants using the same method. 25a holds advantages over RPV against most of the tested drug-resistant HIV-1 strains while retaining similar antiviral potency against HIV-1 strains containing WT RT (Table 1). It is noteworthy that the particularly challenging K103N/Y181C double mutation only causes a ~4.6-fold change in susceptibility to 25a, whereas it reduces the anti-HIV-1 efficacy of RPV by more than 10-fold. The superiority of 25a over RPV in targeting K103N/Y181C RT was further validated in the in vitro RT inhibition assay using purified recombinant RT variants, where the K103N/Y181C mutation confers a lower level of resistance to 25a (7.2-fold change in the IC50 value) than to RPV (15-fold change) (Figure 4A and B and Table 2). To further evaluate the resistance profile of 25a, we compared the RT inhibitory activities of 25a and RPV against two additional clinically relevant RT mutants, Y188L RT and V106A/F227L RT. While 25a and RPV exhibit similar inhibitory potency against WT RT, 25a is more resilient to the Y188L and V106A/F227L mutations (0.70- and 1.7-fold change, respectively) than RPV (2.2- and 4.0-fold change, respectively) (Figure 4A and B and Table 2).
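The IC50 values and curve slopes referred to above come from variable-slope dose-response fits; the following is a minimal sketch of such a fit in Python/scipy, with hypothetical data points and a generic Hill-type inhibition model rather than the authors' exact fitting procedure.

```python
# Fit a variable-slope inhibition curve to recover an IC50 and a Hill-type slope.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, ic50, slope):
    """Fraction of remaining RT activity at inhibitor concentration `conc` (nM)."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100, 300])            # nM, hypothetical
activity = np.array([0.98, 0.95, 0.85, 0.60, 0.30, 0.12, 0.04, 0.01])

(ic50, slope), _ = curve_fit(dose_response, conc, activity, p0=[5.0, 1.0])
print(f"IC50 ~ {ic50:.2f} nM, slope ~ {slope:.2f}")
```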
Table 1. Anti-HIV-1 activity and cytotoxicity of K-5a2, 25a, etravirine (ETR) and rilpivirine (RPV) against wild-type (WT) HIV-1 and selected mutant HIV-1 strains in MT-4 cell assays. (Kang et al., 2016). † Results from (Kang et al., 2017). ‡ The data were obtained from the same laboratory using the same method. § Data reported as mean ± standard deviations. To assess whether mutations against which RPV loses considerable potency would be susceptible to 25a, we tested the inhibitory activities of 25a and RPV against K101P RT, Y181I RT and K103N/Y181I RT, which have been shown to cause substantial reductions in susceptibility to RPV (Azijn et al., 2010; Giacobbi and Sluis-Cremer, 2017; Smith et al., 2016). As expected, all three mutations dramatically lower the anti-RT potency of RPV, causing 20-fold, 90-fold and 1805-fold changes in the IC50 values, respectively. In contrast, there is considerably less resistance to 25a for all three RT mutants (1.3-, 8.8- and 96-fold, respectively) (Figure 4C and D and Table 2). The longer right wing of 25a enables its interactions with NNIBP residues that are not contacted by RPV, such as Pro225 and Pro236. To gauge the likely impact of mutations of these residues on 25a efficacy, we measured the RT-inhibiting potency of 25a against RT containing P225H or P236L substitutions, two clinically identified mutations shown to cause no significant reduction in susceptibility to RPV (Basson et al., 2015). Like that of RPV, the potency of 25a was not negatively affected by either the P225H or the P236L mutation (0.58- and 0.60-fold change, respectively) (Figure 4E and F and Table 2), indicating that 25a has a relatively high genetic barrier to the development of novel drug-resistant mutations. It is worth mentioning that 25a has steeper dose-response curve slopes than RPV in the inhibition of all of the above RT variants (Table 2). This characteristic can help 25a achieve greater inhibition of RT activity at higher than IC50 concentrations, which are usually more clinically relevant (Shen et al., 2008). Taken together, by comparing the inhibitory potency of 25a and RPV against a wide range of RT mutants, we have shown that 25a has an improved resistance profile over RPV and is able to effectively inhibit the RT mutants causing high-level resistance to RPV. Figure 4 (caption fragment): ... RT and P236L RT by 25a and RPV. Each data point is shown as mean ± standard error (n = 3). The data are fitted to inhibition dose-response curves with variable slopes. All datasets have excellent goodness of fit with R² ≥ 0.99, except for the inhibition curve of RPV against K103N/Y181I RT (R² = 0.98). The IC50 and curve slope values are summarized in Table 2. DOI: https://doi.org/10.7554/eLife.36340.007 Structural basis for improved resistance profile of piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs To shed light on the mechanism underlying the outstanding resistance profile of the two piperidine-substituted thiophene[3,2-d]pyrimidine NNRTIs, we determined the crystal structures of K103N RT, E138K RT, and Y188L RT complexed with compound K-5a2, as well as K103N RT, E138K RT, K103N/Y181C RT, V106A/F227L RT, K101P RT, and Y181I RT complexed with 25a. The attempt to obtain the crystal structure of 25a in complex with K103N/Y181I RT proved unsuccessful, possibly due to its suboptimal anti-RT potency towards K103N/Y181I RT, although it has displayed marked improvement over RPV in inhibiting this specific mutant (Table 2).
Superposition of these mutant RT/NNRTI complex structures onto their respective WT RT/NNRTI complex structures shows no major deviation in the conformations of the enzyme and inhibitors (Figure 5 and Figure 5-figure supplement 1). Root-mean-square deviations (RMSDs) for the structural alignments between WT RT/NNRTI complexes and mutant RT/NNRTI complexes range from 0.094 to 0.283 Å for the overall Cα atoms, and from 0.095 to 1.108 Å for the Cα atoms of the NNIBP regions (residues 98-110, 178-190, 226-240 of the p66 subunit, plus residues 137-139 of the p51 subunit) (Table 3). Examination of the interactions between the RT mutants and the two NNRTIs reveals that all the hydrogen bonds depicted in Figure 3D are preserved, although there are some variations in the bond lengths. To analyze the extent of interactions between the inhibitors and different RT mutants, we measured the buried surface areas between the inhibitors and the whole NNIBP, as well as a selection of key residues in the NNIBP of each RT variant (Table 4). In the structures of K103N RT in complex with 25a or K-5a2, the Lys to Asn substitution in RT shortens the aliphatic side chain and reduces the contact interface between residue 103 and the inhibitors, but 25a and K-5a2 are able to establish more contacts with Phe227 and Pro236 by varying their multiple torsion angles (Table 5) to counterbalance the loss (Figure 5A and Figure 5-figure supplement 1A). Similarly, in the structure of the Y188L RT/K-5a2 complex, the cyano-dimethylphenyl group in K-5a2 is diverted away from Leu188 to avoid steric clashes, leading to declines in the buried areas between the inhibitor and Leu188 and Phe227. However, this mutation-caused damage is alleviated by enhanced interactions with Lys103, Val106 and Pro236 (Figure 5-figure supplement 1C, Table 4). In the case of E138K RT, since the mutation does not disrupt the hydrophobic interactions between the inhibitors' central thiophene ring and residue 138, 25a and K-5a2 maintain almost the same binding poses in the NNIBP and similar buried areas with each of the residues lining the pocket as in their complexes with WT RT (Figure 5B and Figure 5-figure supplement 1B). In regard to RT carrying the more disruptive K103N/Y181C double mutation, the Y181C mutation abolishes the favorable π-π stacking interactions between the Tyr181 side chain and the dimethylphenyl ring of 25a, and greatly reduces the binding interface between 25a and Cys181. Moreover, the dramatic changes in the NNIBP result in a decrease of the buried interface between 25a and residue 103. Nonetheless, the markedly weakened interactions between 25a and both mutated residues are remedied by the increase in the contact areas between 25a and several other residues in the NNIBP, including Tyr183, Phe227 and Pro236 (Figure 5C). In the V106A/F227L RT/25a complex structure, the much smaller side chain of Ala106 buries significantly less surface area with the inhibitor. Furthermore, the double mutation causes more dramatic changes in the conformations of the NNIBP and the bound 25a. In particular, the cyanovinyl group of 25a is flipped so that it can maintain a similar extent of interactions with the mutated Leu227. This torsional change, however, diverts the inhibitor away from the tunnel lined by Tyr181, Tyr183 and Tyr188 and diminishes the contact areas between 25a and all three tyrosine residues. To compensate for the loss, the right wing of 25a shifts closer to Lys101.
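For readers who want to reproduce such comparisons, an RMSD after optimal superposition can be computed with the Kabsch algorithm; the numpy sketch below is a generic illustration that assumes pre-matched Cα coordinate arrays, and is not the crystallographic software actually used in this study.

```python
# Generic Kabsch superposition followed by RMSD, given matched (N, 3) coordinate sets.
import numpy as np

def kabsch_rmsd(P, Q):
    """P, Q: (N, 3) arrays of matched Calpha coordinates."""
    P = P - P.mean(axis=0)                           # center both sets on their centroids
    Q = Q - Q.mean(axis=0)
    U, S, Vt = np.linalg.svd(P.T @ Q)                # covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))               # correct for an improper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt              # optimal rotation for row vectors
    P_rot = P @ R
    return float(np.sqrt(((P_rot - Q) ** 2).sum() / len(P)))
```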
Such movement shortens the distance between the linker amine group of 25a and the carbonyl oxygen of Lys101 from 3.05 to 2.76 Å and strengthens the hydrogen bond between them (Figure 5D). This hydrogen bond is conserved in the binding of many NNRTIs, including ETR and RPV, and contributes greatly to the binding affinities of NNRTIs (Lansdon et al., 2010). With respect to K101P RT, the deprotonation of the Pro101 main-chain nitrogen attenuates its water-mediated hydrogen bond with the pyrimidine nitrogen in 25a. Nonetheless, the Lys to Pro substitution places its cyclic side chain in the vicinity of the central thiophene pyrimidine ring of 25a and leads to enhanced hydrophobic interactions (Figure 5E). As to the Y181I RT/25a complex structure, the mutation not only removes the π-π stacking interactions in the left wing of 25a, but also introduces steric hindrance with the linker oxygen and thiophene sulfur group in 25a, which pushes the inhibitor slightly away from the tunnel. This unfavorable change in the NNIBP is mitigated by the enlarged contact areas between 25a and the more closely placed side chains of Lys101 and Tyr183 (Figure 5F). It is noteworthy that in both the K103N/Y181C RT/25a and Y181I RT/25a complex structures, Tyr183, a residue in the conserved YMDD motif at the polymerase active site, moves 2.3-2.5 Å towards the inhibitor, enhancing its hydrophobic interactions with the cyanovinyl group of 25a. The recruitment of Tyr183 by the cyanovinyl group of 25a is reminiscent of that observed in the structure of K103N/Y181C RT in complex with RPV, whose left wing has the same 4-cyanovinyl-2,6-dimethylphenyl structure. Interestingly, in the structure of the K103N RT/K-5a2 complex, Tyr183 also undergoes a significant conformational change and is placed closer to the inhibitor, although it is still outside the contact radius of the cyano group in K-5a2. Superposition of the structures of the WT RT/K-5a2, K103N RT/K-5a2, WT RT/25a, and K103N/Y181C RT/25a complexes reveals a gradual rotation of Tyr183 from the 'down' position in the WT RT/K-5a2 complex to the 'up' position in the K103N/Y181C RT/25a complex (Figure 5-figure supplement 1D). This stepwise movement of Tyr183 is likely triggered by three factors: (i) inhibitor repositioning because of the K103N mutation, (ii) loss of aromatic interactions due to the Y181C or Y181I mutation, and (iii) the presence of a cyanovinyl group in the inhibitor left wing. Tyr183 makes its most significant contribution to NNRTI binding when all of the above factors are present. The ability of K-5a2 and 25a to recruit Tyr183 is particularly significant for their function of inhibiting the polymerase activity, because Tyr183 is completely conserved among all HIV-1 sequences and makes direct contacts with the nucleic acid substrate (Das et al., 2012; Sarafianos et al., 2001). The repositioning of Tyr183 towards the NNIBP removes this important interaction and destabilizes the binding of the nucleic acid. Comparison of the binding modes of K-5a2, 25a and DAPY NNRTIs By adopting the typical horseshoe conformation, K-5a2 and 25a substantially overlap the binding sites occupied by ETR and RPV (Figure 6). The thiophene substituent in the central ring of K-5a2 and 25a extends further into the entrance channel and is proximal to Glu138 located at the opening. The positions of their left wing structures adjust to small conformational changes of Tyr181, Tyr183 and Tyr188 to maximize the contacts with the pocket residues in this region.
The piperidine ring in the right wing of K-5a2 and 25a slants away from the cyanophenyl plane of ETR and RPV through a ~90° tilt of τ4 (from 16°/10° to -71°/-79°) (Table 5; Lansdon et al., 2010), leading to the displacement of Tyr318 by ~1 Å away from the inhibitors. The main difference in RT conformation is the uplift of the loop preceding β9 and of the loop connecting β10-β11. Upon the binding of K-5a2 and 25a, Pro225 and Pro236, two residues sitting at the groove channel opening, are pushed apart to accommodate the benzenesulfonamide group, which protrudes to the solvent-exposed surface of the enzyme. There is no structure available for E138K RT or K101P RT in complex with either ETR or RPV; however, the structures of these two RT mutants in complex with K-5a2 and 25a provide a structural basis for understanding why the mutations render ETR and RPV less effective. In the structure of the WT RT/ETR complex, the amino substituent of the central pyrimidine ring forms a salt bridge with the carboxyl side chain of Glu138.
Table 5. Torsion angles and energies of K-5a2 and 25a in different binding poses (torsion angles τ1-τ8 in degrees; NNRTI energy in kcal/mol).
Transplanting Lys138 from the structure of the E138K RT/K-5a2 complex into this structure reveals a severe charge-charge repulsion between the amino group of ETR and the side chain of Lys138, which would destabilize the binding of the inhibitor (Figure 6-figure supplement 1A). In the case of RPV, Glu138 contributes to the RT-RPV interactions by bonding to Lys101 and placing it in the vicinity of the central pyrimidine ring for several critical interactions: (i) hydrophobic interactions between the pyrimidine ring of RPV and the Cε atom of Lys101, (ii) the hydrogen bond between the pyrimidine nitrogen atom and the main-chain nitrogen of Lys101, and (iii) the hydrogen bond between a linker nitrogen atom of RPV and the carbonyl oxygen of Lys101. Modeling Lys138 from the structure of the E138K RT/25a complex indicates that a lysine at residue 138 of the p51 subunit would shove Lys101 away from RPV. This conformational change in RT would not only disrupt the hydrophobic interactions but also weaken the two key hydrogen bonds because of the deviation of the Lys101 backbone (Figure 6-figure supplement 1B). The K101P mutation considerably reduces susceptibility to RPV. Superposition of the WT RT/RPV and K101P RT/25a complex structures reveals that the mutation would remove the hydrophobic interaction between the RPV pyrimidine ring and the long aliphatic side chain of Lys101, and possibly introduce steric clashes between the polar groups of the pyrimidine ring and the non-polar side chain of Pro101. More importantly, the Lys-to-Pro substitution would abrogate the critical hydrogen bond between the pyrimidine nitrogen of RPV and the backbone nitrogen of RT, because the Pro101 main-chain nitrogen carries no amide proton and hence cannot act as a hydrogen-bond donor (Figure 6-figure supplement 1C). The improved resistance profiles of 25a over RPV against other RT mutants, especially Y181I RT, Y188L RT and K103N/Y181C RT, are likely due to the bulkier substituents in its right wing and its more extensive hydrogen-bond interactions with NNIBP residues. Therefore, the π-π interactions provided by Tyr181 and Tyr188 likely make a much smaller contribution to the binding affinity of 25a than to that of RPV.
Moreover, the higher degree of structural flexibility of 25a (owing to its larger number of rotatable bonds) makes it better able to preserve or even enhance its interactions with other NNIBP residues when the Y181I and Y188L mutations displace its left wing. Discussion The emergence of drug-resistant mutations in HIV-1 RT remains a major challenge for the design and development of NNRTIs. Using ETR as a lead compound, our previous efforts led to the design of two piperidine-substituted thiophene[3,2-d]pyrimidine derivatives, K-5a2 and 25a, with single-digit nanomolar EC50 values against HIV-1 strains containing either WT RT or RT variants bearing various resistance-associated mutations. In the current study, we have shown that 25a is more effective than RPV against a broad set of RT mutants and have determined the crystal structures of both WT RT and a number of RT mutants in complex with either K-5a2 or 25a. These high-resolution structures enable unambiguous determination of the binding modes of K-5a2 and 25a, and an accurate description of the detailed interactions between RT and these highly potent NNRTIs. By virtue of their structural flexibility, K-5a2 and 25a are able to adapt to the conformational changes of RT induced by mutations in the NNIBP and to optimize their complementarity with the mutated pocket by varying their multiple torsion angles. As a result, the buried areas between the inhibitors and RT are similar across WT RT and the various RT mutants, suggesting that K-5a2 and 25a can occupy the NNIBP of RT mutants as effectively as they bind the pocket of WT RT (Table 4). Energy calculations for K-5a2 and 25a show that the NNRTIs in their different RT-bound conformations are almost isoenergetic (Table 5), indicating that the conformational changes of K-5a2 and 25a induced by NNIBP residue mutations do not incur significant strain-energy penalties. Although in the cellular environment both RT and the bound inhibitors are in constant motion, with the interactions between them repeatedly breaking and re-forming, the binding mode captured in the crystal structure should represent the averaged state of the complex, or at least a highly populated one. Aside from structural flexibility, hydrogen bonding with the main chains of NNIBP residues was previously suggested as another strategy for designing NNRTIs that can overcome the effects of drug-resistant mutations in RT (Zhan et al., 2009). Compared with ETR and RPV, K-5a2 and 25a form considerably more hydrogen bonds between their polar groups (the thiophene sulfur, the piperidine nitrogen and the solvent-exposed sulfonamide) and the main chains of residues throughout the binding pocket. This extensive network of main-chain hydrogen bonds contributes substantially to the free energy of RT-inhibitor binding and is less susceptible to side-chain mutations in the pocket. Furthermore, the more extended right wing structures of K-5a2 and 25a contact a larger set of NNIBP residues than do the DAPY NNRTIs. Although this potentially makes K-5a2 and 25a susceptible to mutations of residues not contacted by ETR or RPV, our results show that mutations of Pro225 and Pro236, whose side chains interact with the right wing of 25a but not with that of RPV, do not cause resistance to 25a. Additional RT-25a interactions that are not present in the RT/RPV complex include the hydrogen bonds between the sulfonamide group of 25a and Lys104 and Val106.
The mutations of these two residues are unlikely to cause loss of potency of 25a, because the two hydrogen bonds are established through the main chains of Lys104 and Val106. Even if side-chain substitutions were to shift the main chains, a minor change in the torsion angle τ1, which was shown to span a wide range without significant energetic penalty (Table 5), would readily place the sulfonamide group of 25a in an optimal position for hydrogen-bond formation. In conclusion, our study depicts the binding poses of two newly developed NNRTIs, compounds K-5a2 and 25a, in their complexes with WT and mutant RTs, and exemplifies how broadly active NNRTIs retain satisfactory activities against RT containing drug-resistant mutations by taking advantage of the plasticity of both the inhibitors and the NNIBP of RT. Our findings provide a reliable model for analyzing the structural effects of drug-resistant mutations in RT, and will contribute to the structure-based design of novel NNRTIs that can effectively target multiple variants of RT. Materials and methods Key resources Cloning, protein preparation and crystallization An engineered HIV-1 RT construct, RT52A (Das et al., 2008), here referred to as WT RT, was used as the template for site-directed mutagenesis to introduce the E138K mutation in the p51 subunit and the K101P, K103N, Y181I, Y188L, K103N/Y181C, K103N/Y181I and V106A/F227L mutations in the p66 subunit. WT and mutant RTs were expressed and purified as described previously (Frey et al., 2015). Briefly, the p51 subunit, carrying an N-terminal 6xHis tag followed by a human rhinovirus (HRV) 3C protease cleavage site, and the un-tagged p66 subunit were co-expressed in E. coli BL21 Star (DE3) (Thermo Fisher Scientific, Waltham, MA). Cells were grown at 37 °C and induced at 17 °C for 16 hr. WT and mutant RTs were purified sequentially on a HisTrap affinity column and a HiTrap Heparin affinity column (GE Healthcare). The N-terminal 6xHis tag was removed with HRV 3C protease, and the un-tagged RT was purified on a Superdex 200 gel filtration column (GE Healthcare) in buffer containing 10 mM Tris (pH 8.0), 75 mM NaCl and 2 mM Tris(2-carboxyethyl)phosphine (TCEP). Crystallization of WT and mutant RTs was set up using the sitting-drop vapor diffusion method at 4 °C, with 2 µl of protein solution added to 2 µl of well buffer containing 50 mM MES or imidazole buffer (pH 6.0-6.6), 10% (v/v) polyethylene glycol (PEG) 8000, 100 mM ammonium sulfate, 15 mM magnesium sulfate and 10 mM spermine. Crystals were grown for 2 weeks, and RT/NNRTI complexes were prepared by soaking RT crystals for 2 days in buffer containing 0.5 mM K-5a2 or 25a, 50 mM MES or imidazole buffer (pH 6.0), 12% (v/v) PEG 8000, 100 mM ammonium sulfate, 15 mM magnesium sulfate, 10 mM spermine, 25% ethylene glycol and 10% DMSO. Soaked crystals were harvested and flash-frozen in liquid nitrogen. Data collection and structure determination X-ray diffraction data were collected at the Advanced Photon Source at Argonne National Laboratory on beamline 24ID-E at a wavelength of 0.97918 Å. Data sets were integrated and scaled with the XDS software package (Kabsch, 2010). Structures of the RT/K-5a2 and RT/25a complexes were determined by molecular replacement in Phaser (McCoy et al., 2007), using the structure of the WT RT/RPV complex (PDB ID: 4G1Q) as the search template. One RT molecule was present in the asymmetric unit. The ligand restraints and 3D structures of K-5a2 and 25a were generated in eLBOW (Moriarty et al., 2009) using SMILES strings as inputs.
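As a rough illustration of this SMILES-to-3D-model step, the sketch below uses RDKit as a stand-in (the authors used eLBOW); the SMILES string is a simple placeholder, not the actual K-5a2 or 25a structure.

```python
# Minimal sketch: build an initial 3D ligand model from a SMILES string.
# NOTE: RDKit is used here only as a stand-in for the eLBOW workflow described
# in the text, and the SMILES below is a placeholder (pyrimidine), not K-5a2 or 25a.
from rdkit import Chem
from rdkit.Chem import AllChem

smiles = "c1cncnc1"                        # placeholder heteroaromatic ring
mol = Chem.MolFromSmiles(smiles)
mol = Chem.AddHs(mol)                      # add explicit hydrogens before embedding
AllChem.EmbedMolecule(mol, randomSeed=42)  # generate one 3D conformer
AllChem.MMFFOptimizeMolecule(mol)          # quick force-field geometry optimization
Chem.MolToMolFile(mol, "ligand_model.mol") # starting model for building into density
```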
Models of K-5a2 and 25a were built into the structures based on the unbiased Fo-Fc difference Fourier electron density maps calculated in the absence of an NNRTI. Models were manually rebuilt in Coot (Emsley et al., 2010) and refined in PHENIX (Adams et al., 2010). The quality of the final models was analyzed with MolProbity. Data collection and refinement statistics are summarized in Supplementary file 1. All figures were generated using PyMOL, UCSF Chimera (Pettersen et al., 2004) or UCSF ChimeraX (Goddard et al., 2018). Cell lines MT-4 cells were obtained from the NIH AIDS Reagent Program and authenticated by the supplier. All cells tested negative for mycoplasma, bacteria and fungi. T cell-based anti-HIV-1 activity assays The anti-HIV-1 activities of rilpivirine (RPV) against WT HIV-1 (IIIB strain) as well as mutant RT-carrying HIV-1 variants (L100I, K103N, E138K, Y181C and K103N/Y181C) were evaluated in MT-4 cells using the MTT method as described previously (Kang et al., 2016, 2017; Pannecouque et al., 2008). Briefly, stock solutions (10× final concentration) of RPV were added in 25 µl volumes to two series of triplicate wells, so as to allow simultaneous evaluation of their effects on mock- and HIV-1-infected cells. Using a Biomek 3000 robot (Beckman Instruments, Fullerton, CA), nine five-fold serial dilutions of RPV (final volume of 200 µl per well) were made directly in flat-bottomed 96-well microtiter trays, including untreated HIV-1-infected and mock-infected control samples for each compound. Stocks of WT HIV-1 or mutant HIV-1 strains (50 µl at 100-300 times the 50% cell culture infectious dose, CCID50) or an equal amount of culture medium was added to the HIV-1-infected or mock-infected wells of the microtiter tray, respectively. Mock-infected cells were used to evaluate the cytotoxicity of the compounds. Exponentially growing MT-4 cells were centrifuged for 5 min at 220 × g and the supernatant was discarded. The MT-4 cells were resuspended at 6 × 10⁵ cells/ml, and 50 µl aliquots were transferred to the microtiter tray wells. Five days after infection, the viability of mock- and HIV-1-infected cells was determined spectrophotometrically in an Infinite M1000 microplate reader (Tecan, Zürich, Switzerland). All data were calculated using the median optical density (OD) value of triplicate wells. The 50% effective antiviral concentration (EC50) was defined as the concentration of the test compound affording 50% protection from viral cytopathogenicity. The 50% cytotoxic concentration (CC50) was defined as the compound concentration that reduced the absorbance (OD at 540 nm) of mock-infected MT-4 cells by 50%. The results are presented as mean ± SD (n = 3). Reverse transcriptase inhibition assays The HIV-1 RT inhibition assay was performed using a PicoGreen-based EnzChek Reverse Transcriptase Assay kit (Thermo Fisher Scientific) according to the manufacturer's protocol with minor modifications. Briefly, 58 µl of recombinant WT or mutant RT (final concentration in the reaction of 20 nM) in buffer containing 50 mM Tris (pH 8.0), 50 mM KCl, 6 mM MgCl2 and 10 mM DTT was incubated for 1 hr at 25 °C with 2 µl of 25a or RPV (Sigma-Aldrich), in a concentration gradient comprising eleven three-fold serial dilutions of each inhibitor, or with an equal amount of DMSO. 30 µl of pre-annealed poly(rA)·d(T)16 in buffer containing 50 mM Tris (pH 8.0), 50 mM KCl, 6 mM MgCl2, 10 mM DTT and 100 µM dTTP was then added to the RT-inhibitor mixture to start the DNA polymerization reaction.
After 30 min of incubation at 25 °C, 10 µl of 150 mM EDTA was added to stop the reaction. 100 µl of 2× PicoGreen reagent was then added to each reaction, and product formation was quantified using a TriStar LB 941 microplate reader (Berthold Technologies) with excitation/emission wavelengths of 485/535 nm. The activity of WT RT or each mutant RT in the presence of inhibitors was normalized to the DMSO control. IC50 and curve-slope values were calculated by fitting the data to variable-slope inhibition dose-response curves in GraphPad Prism version 7.0a. The experiment was repeated three times independently.
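As an aside for readers without access to GraphPad Prism, a minimal sketch of an equivalent variable-slope (four-parameter logistic) fit is shown below using NumPy/SciPy; the concentrations and activities are invented example values, not data from this study.

```python
# Minimal sketch: variable-slope dose-response fit to estimate an IC50,
# analogous to the GraphPad Prism analysis described above.
# The concentrations and normalized activities are invented example data.
import numpy as np
from scipy.optimize import curve_fit

def dose_response(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: residual activity vs. inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.3, 1, 3, 10, 30, 100, 300, 1000])                   # nM, example series
activity = np.array([0.98, 0.95, 0.88, 0.70, 0.45, 0.22, 0.10, 0.05])  # fraction of DMSO control

p0 = [0.0, 1.0, 30.0, 1.0]  # initial guesses: bottom, top, IC50 (nM), Hill slope
params, cov = curve_fit(dose_response, conc, activity, p0=p0)
print(f"IC50 = {params[2]:.1f} nM, Hill slope = {params[3]:.2f}")
```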
Animal cultures: but of which kind? The concept of animal culture began to be used increasingly in the context of animal behaviour research around the 1960s. In spite of its success, I shall argue that animal culture as it is currently conceived does not represent a fully articulated "natural kind". But how does it fail in this regard, and what consequences follow? Firstly, an analysis of the epistemological landscape of author keywords related to the concept of animal cultures is presented. I then systematically enumerate the ways in which culture cannot be considered a natural kind in the study of animal behaviour. Finally, a plausible interpretation of the scientific status of the animal culture concept is suggested that is congenial to both its well-established use in animal behaviour research and its inferential limitations. At the same time, it might be regarded as one of the most successful [1] philosophical concepts of the last two hundred years. Equally ironic is the fact that around the same time that some of the main figures of anthropology sought to abandon "culture" as a theoretical term, it found fertile ground in the discipline of animal behaviour, hinting at some form of strong naturalization of this elusive idea. Indeed, as different strands of anthropology [2] negated the theoretical power of "culture" as an explanatory device, the term's use continued to spread, eventually permeating animal ecology textbooks (Manning & Dawkins 1999; see previously Elton 1930 on animal "tradition"). During this period, many philosophers and social commentators also continued using the concept of "culture" or the adjective "cultural" in a theoretically loaded way (see references in Pinker 2003; Ramsey 2007), often presupposing this predicate's high inferential power, that is, an ability to refer to a property whose very attribution warrants the inference of other properties that are related to it in principle. Tacitly assuming such inferential powers, calling one thing or behaviour 'culture' or 'cultural' usually meant that a diverse bundle of properties could be attached to it (Bueno 1996). But does "culture" really constitute one such natural kind from which reliable inferences toward other interesting and meaningful related properties or states of events can be made? This question has certainly been raised many times before, mainly in the context of the nurture vs. culture controversy, as well as in classical debates concerning the explanation vs. interpretation of social facts. However, it has never been raised as such specifically in an effort to examine the status of "animal culture" as a natural kind. This is not an ineffective approach to the problem of determining the scientific value of the concept of culture.
Footnote 1. This grandiloquent-seeming statement can be affirmed on the basis of some simple statistics regarding the frequency with which the term has been used over time. The substantive "culture" was used only very rarely up until the end of the 18th century, after which time it began to take hold. And indeed, what little use was made of the term occurred mostly in reference to the cultivation of certain specific artistic or humanistic abilities. As the theoretical scope of the term expanded during the 19th century within both philosophy and the nascent discipline of anthropology, it was used with greater frequency, until penetrating everyday language in the main Western languages, especially after World War II.
According to Google's N-Gram research engine, based on millions of digitized books and journals (Michel et al. 2011), there was less than 1 occurrence of "culture" for every 50,000 words at the beginning of the 19th century. Nevertheless, the word attained frequencies exceeding 7 occurrences every 50,000 words by the end of the 20th century. Compare this with the relative failure of the word "civilization" during this same period, at frequencies bordering on 1 occurrence every 100,000 words (exceptions to this trend took place during brief periods peaking around 1920 and 1940, at 1 occurrence every 25,000 words); although "culture" is a polysemic term in English, similar results are obtained for other languages. This suggests that the spread of the notion of culture may be one of the most noticeable examples of the way that philosophy and science penetrate everyday language, even though this says nothing about the validity of the notion as a general scientific term.
Footnote 2. I will here overlook the subtleties that may derive from the consideration that the American Anthropological Association, the main anthropological association in the world in terms of membership, decided to erase the word "science" from its mission statement a few years ago. Since several very influential scientific anthropologists have also raised serious doubts about the relevance of "culture" as a theoretical term, I believe my reference to a philosophy-of-science take on anthropological perspectives is warranted.
Triumph in revealing the contours of the kind "animal culture" may provide a basis for its naturalization more generally. Conversely, lack of success in establishing a scientific natural kind may also be judged, in the extreme, as an indicator of the unreasonableness of trying to make culture part of the natural furniture of the world, to be conceived on an equal footing with other more prototypical natural concepts such as electrons, chemical elements, cells, or galaxies. A conceptual landscape in animal behaviour research First, let's explore the way scientists use the term culture in the context of animal behaviour research. When researchers in animal behaviour publish a contribution in this area, they are usually asked to provide an abstract of what their contribution amounts to, as well as a few keywords describing connected topics. After assembling a database of around three hundred and fifty articles [3] which were published in some of the main journals of animal behaviour, primatology and ornithology, and which included the notion of "culture" or "tradition" in the title, abstract or keywords, a few preliminary, purely descriptive observations can be made. Particularly visible are what one might call the main "epistemological interests", or subjects on which the researcher is able to generate publishable material. In Fig. 1, you can see the most frequently used conceptual stems in the abstracts of these articles. A series of epistemological interests are clearly apparent: the notion of learning, a marked interest in behavioural ecology, the study of differences between groups, as well as the stability and variability of behaviour are among the interests that researchers allude to in the most visible part of their publication.
Fig. 2. Twelve most connected author-keywords in articles related to "animal culture". The thickness of links is proportional to the frequency of co-occurrence of these author-keywords.
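The co-occurrence links behind a figure such as Fig. 2 can be obtained by simple pair counting over per-article keyword lists; the sketch below is a minimal, assumed illustration in which the keyword lists are invented rather than taken from the actual corpus.

```python
# Minimal sketch of author-keyword co-occurrence counting, of the kind used to
# build a network like Fig. 2. The keyword lists below are invented examples,
# not the actual 350-article corpus described in the text.
from collections import Counter
from itertools import combinations

articles = [
    ["social learning", "culture", "chimpanzee", "tool use"],
    ["social learning", "tradition", "birdsong"],
    ["culture", "tradition", "cetacean", "social learning"],
]

pair_counts = Counter()
for keywords in articles:
    # count each unordered keyword pair once per article
    for a, b in combinations(sorted(set(keywords)), 2):
        pair_counts[(a, b)] += 1

# edges and their weights, strongest first (link thickness in Fig. 2)
for (a, b), n in pair_counts.most_common(5):
    print(f"{a} -- {b}: {n}")
```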
The conceptual landscape can be represented in a more articulated manner once the links between author-keywords enter the picture. This can be seen in Fig. 2 with its depiction of the twelve main keywords connecting the articles of this data base. Links between concepts represent the co-occurrence of two keywords together within the same article. The thickest links represent pairs of concepts that co-occur more often than others. This figure is helpful in representing the actual areas in animal behaviour research in which the concept of culture appears most often in connection to. Curious about what the broader picture in which these concepts emerged looks like? A conceptual landscape of animal culture can be generated by plotting those keywords that appear at least in two articles. In this picture (see Fig 3), different colors refer to relatively different modular networks or relatively independent areas of research. Some of the main regions seem to constitute different epistemological approaches, at least in the sense that the most highlighted epistemological interests or keywords do not completely overlap with those of other regions. Among these, one might point to the different epistemological interests that crystallize around different regions: a general region related to primate behaviour, tool-use, and differences in foraging technology, another region more connected to the general ecology of learning mechanisms and avian and cetacean traditions, or yet another one linked to fish social cognition and the behavioural ecology of public information. A minimal definition of animal culture Which concept of culture unites all of the aforementioned work in the study of the evolution of behaviour? Let me be clear about the precise nature of the question I am addressing: success in solving this question should be determined by the adequacy of the definition for describing generally the researchers' activity in this area of knowledge. If I was to advance a definition of culture that does not address what researchers do and how they use the language of culture to describe their findings, I would have either failed miserably or have attempted something entirely different. Some philosophers have attempted to offer a normative concept of culture, proposing how scientists should use the term. My aim here is descriptive. In order to pursue this goal, I will mostly follow a definition of the concept "animal traditions" proposed by Susan Perry and Dorothy Fragaszy (2003) and adapt it to a general definition of animal culture. There is no intrinsic originality in this definition. Indeed, anthropologists Alfred Kroeber and Clyde Kluckhohn famously brought together and inventoried more than 150 definitions of culture in the 1950s. Their list has surely expanded considerably since that time. The definition that now follows owes much to a long tradition of definitions going back at least to Franz Boas, the famous forefather of American cultural anthropology. Again, my aim here is to establish a reasonable minimal concept capable of capturing the commonalities shared by the hundreds of contributions to the study of animal behaviour that deal with the idea of culture or tradition. 
With no further ado 4 : A phenotypic character, an artifact or any byproduct of an individual's behaviour can be said to be 'cultural" to the extent that it fulfills at varying degrees the following cultural properties: (a) being the result of a specific mechanism of social learning 5 (b) being distributed in a population (c) having a certain stability or permanence in time. A corollary to this definition is that each of these different dimensions of what constitutes a cultural entity admit of degrees. This proposed minimal concept of culture has a number of characteristics that are worth mentioning (for details see Anonymized): -It is transparent to the extent that it is independent of strong theoretical commitments. -It is a concept based on prototypes or instances of what actual practicing scientists consider cultural behaviour. -It is a distributional or populational concept, to the extent that variation is an intrinsic part of its characteristics (Godfrey-Smith 2009). Under this definition, behaviours, and effects of behaviour can be "more or less cultural" (Sperber 1996). At the risk of being redundant, our aim here is not to provide a brand new definition, but rather to take stock of what unites all of the interesting animal behaviour research conducted to date that treats of the notion of culture. An elusive natural kind As we have shown above, the concept of culture has been prevalent in the study of animal behaviour for some decades now. However, the question arises as to whether or not animal culture is a natural kind. But why is this an interesting question at all? Why does it matter? The question should arise particularly considering the previously proposed culture concept. Such a concept, which attempts to represent what most researchers in animal behaviour refer to when they use the notion of culture, is a very minimal concept. In fact, the requirements for the behaviour of a social animal to qualify as cultural are very low indeed. Under this concept of animal culture, animal culture becomes an almost trivial phenomenon by itself. With such a low threshold for qualifying as a cultural behaviour, the interesting question becomes not so much whether a certain animal behaviour is cultural, but rather: how is it cultural? In other words, what are the mechanisms contributing to the propagation of behaviour? What are the diffusion patterns followed in its propagation? What is the ecological function of these mechanisms? Given the general nature of this concept, any apprehension one might have concerning the prospects of understanding culture in terms of a natural kind might be justified. A typical argument for what it means to be considered a natural kind states that a grouping of entities within the framework of a well corroborated scientific theory is a natural kind if the category that is formed by those entities is underpinned by a series of deep and intrinsic characteristics that allow a series of coherent causal generalizations to be based on the existence of that category. In philosophy it is often claimed that an inventory of natural kinds aspires to capture the 'furniture of the world" or at least the main elements that emanate from the scientific view of reality. Typically, the most basic categories of the physical sciences, such as electrons or chemical components are considered to be prototypical bona fide natural kinds. 
More recently, however, an increasing number of voices in philosophical theory have recognized the need to expand this view to include a larger set of natural kinds. On this more liberal view, it is not only the hard sciences that can provide us with the most basic elements of reality, but largely corroborated elements of "soft" sciences such as psychology or economics are also candidate natural kinds. Under this new view of natural kinds, it is no longer the case that scientific categories are either natural or spurious. Rather, certain scientific kinds can be seen as positioned somewhere between two extremes, one purely explanatory of the structure of reality and the other linked to more particular interests of a pragmatic kind (Craver 2009). Kinds can be seen as more or less natural. Our concern, therefore, should not be so much to list or inventory the deep constituents of reality, but rather to establish some rigorous regulative ideals as to what kind of categories should be part of science 6 . The aim, to be sure, is both regulative and descriptive, for by examining the way scientific communities structure their conceptual landscapes, questions about the naturalness of kinds may also inform us about which practices are useful to the pursuit of scientific knowledge. To further understand the relevance of the question, it may first be useful to quickly mention two different ways in which animal culture was seriously (and unsuccesfully) thought to be based on a natural kind. The way the concept of animal culture is currently used in animal behaviour research is largely independent of the exact social learning mechanisms at the root of cultural propagation. In fact, the great diversity of mechanisms of social learning has been and continues to be a subject of intense study (see Whiten et al. 2004;and also Hoppitt & Laland 2013, Chapter 4). Up until the 1990s, however, imitation was considered by some researchers to be a key diagnostic sign of the presence of culture in a species. In 1992, in a much cited article provocatively titled "The question of animal culture", Bennett Galef (who later became president of the Animal Behavior Society for a number of years) noted that in absence of proof of the existence of real imitation, certain behaviours observed in birds or chimpanzees could not be said to be cultural. Primatologist Michael Tomasello took the logic behind this idea a bit further by conceiving of a general model of cumulative culture in which such a form of cultural propagation was not possible without what he then termed "true imitation" (Tomasello 1999). Despite their considerable influence in this area of research, Galef and Tomasello did not succeed in imposing their terminological and theoretical points of view. The view linking true imitation and culture no longer holds. Forms of true imitation have been observed in other animals, including apes. Since then, the use of the animal culture concept has expanded considerably without really taking into account the requirement of a very specific form of social learning. The presence of what amounts to a diversity of forms of imitative learning has also been established in chimpanzees. And both Galef and Tomasello have revised their initial positions on this matter. To be clear, appeals to true imitation as a diagnostic sign of the presence of culture were not gratuitous, but were rather aimed at establishing a genuine natural kind based on the evolutionary study of behaviour. 
Part of the logic at work here was that if social learning was sustained by true imitation, then a series of nomothetic cultural dynamics should follow (for example what Tomasello called the "ratchet effect" of cultural propagation). Stated more simply, the operation of the social learning mechanism of true imitation was thought to provide an inductive basis robust enough to characterize a natural form of culture, i.e, natural, in the sense that one could use the concept of culture to justify meaningful generalizations based on a causal account. This is not the place to discuss the specifics, but the empirical basis for the inductive generalizations premised on true imitation is not as strong now as it once was thought to be (Morin, 2015). Another once relatively popular stance on the question of the naturalness of culture can be linked to the popularity of memetics, or if you prefer, to the belief in the existence of an entity that underlies culture, that is, a cultural substance. Although one can accept that modeling the causality of cultural propagation in this way can be useful in some instances, it is certainly not the case that this results in a valid general characterization of culture. The memetic approach to culture typically appeals to models provided by Mendelian genetics, population dynamics, and DNA replication. In this manner, culture is considered as a form of heredity that allows one to infer a number of nomothetic regularities and causal generalizations. On other related accounts, those nomothetic regularities are supposedly derived from the nature of culture as "information" (Lewens 2014;Ramsey 2013). The problem with this approach is that it places undue focus on the general characteristics of so-called 'cultural information", thereby disregarding the specific diversity of mechanisms that drive social learning and propagation. Moreover, however useful the culture as information approach might prove as a modeling simplification in some instances, if taken as a definition of culture in general, it seriously hinders our ability to understand cultural phenomena. The main reasons (for details see Anonymized), are twofold: 1. By appealing to the concept of information one may be presupposing exactly what deserves an explanation, namely the nature of social influence and the properties (both evolutionary and mechanistic) that make that influence relatively lasting and relatively widespread in a population. 2. In the case of "animal culture" reifying information is very much at odds with the current practice and methodology of most studies on the phenomena linked to this concept. In these studies, information as such is seldom invoked as an explanatory resource (if anything it serves as an explanandum more often than as an explanans). Information, it is true, is a concept that is frequent in the mathematical modeling approach to the evolution of cultural capacities. Such a use, however, may be easily considered to be one of the assumptions or simplifications at work in those models rather than as a very solid ontological statement regarding the reality that these models aim to describe. Given the range of diverse social learning mechanisms by which a form of animal behaviour can be said to be cultural, and given the lack of any general causal property or substance ("cultural information") that offers a solid inductive basis for making valid generalizations, it seems legitimate to ask whether animal culture is a natural kind. And if it isn't then how might we best describe it? 
Homology In order to tackle the naturalness of the concept of animal culture, other more promising strategies than the two already outlined still remain. We might find inspiration in the way that other wide-ranging biological or psychological traits have been characterized as natural kinds. Two general strategies can be deployed in an effort to carve biological traits at nature's joints: the search for homologies, and the search for an evolved function. The first approach relates to the quest for biological precursors to human culture in other animals. Since the publication of Darwin's The Origin of Species, homology has been considered to be the product of descent by modification. In the same way, from a natural kinds perspective, it is descent by modification that might explain the resemblance among biological traits, and that guarantees the inductive generalizations which may derive from such resemblance (Brigandt & Griffiths 2007). Thus, human dispositions for culture may maintain certain homology relations with other capacities present in primates, most especially, our closest living relatives, the great apes. But homology of what? At what level is a resemblance to be traced in order for the category of culture to have some basis in homology? Usually, findings of anatomical homologies enjoy a more robust theoretical status. The notion of anatomical homology is less disputed and less controversial than the notion of functional homology (Love 2007). However the notion of anatomical homology is also problematic in the context of searching for precursors to a given type of behaviour. Linking anatomy or genetics with behaviour is not a straightforward task. Besides, the description of behaviour itself typically requires the use of finalistic or functional language. In practice, when faced with the lack of precise genetic or cerebral data needed for sustaining a comparative approach between human cultural capacities and those of other primate species, behavioural level functional homologies (Herrmann et al. 2007 ) have been the most intensively studied in the search for the naturalness of animal culture. Such was the state of the field when the debates about the lack of true imitation in other species arose. One way of framing the question about true imitation was to ask whether it was a human evolutionary innovation or what is referred to in systematics as an apomorphy, or whether it was shared with other primates by common descent thus constituting a synapomorphy. Our actual knowledge of the comparative study of behaviour shows that our species shares several behavioural synapomorphies with other species that are relevant to the description of social behaviour (Gomez 2005). However, our species also presents a series of behavioural apomorphies that probably were not present in our common ancestor with other great apes (see Carruthers 2006, pp. 154-157 for a long list of plausible candidates). The existence of these apomorphies, many of which may have cultural significance, as well as the ubiquity of animal cultures in taxa as distant from each other as corvids, primates, or even fruit-flies (Lihoreau & Simpson 2012;Logan et al. 2016) may suggest that the homological approach is very limited in its ability to respond to the question of the naturalness of animal culture. General selection pressures What about the other option of grounding a biological natural kind on its evolved function? 
This approach is linked to the quest for selection pressures that are strong and general enough to account for the emergence of cultural capacities. If such sufficiently strong and general selection pressures are detected, these could in principle inform us about the form and function of the adapted trait in a relatively wide range of environments, thus providing a causal basis for inductive generalizations. The idea of convergent evolution supposes that given enough biological variation, natural selection is able to produce highly similar biological traits in fairly distant taxonomic lineages provided their evolutionary environments are sufficiently similar. Much in the same way dolphin fins and shark fins resemble each other by virtue of their common evolutionary environment, different forms of animal culture may resemble each other by virtue of a given trait or disposition's more general evolved function. The most common critique against this approach is certainly the limiting role of morphogenetic factors (see Thierry 2000). Not just anything can evolve from anything. A great deal of the time, behavioural ecological theory is just theory in search of empirical corroboration. To assume, for argument's sake, that this is not an issue is to subscribe to the usual "phenotypic gambit" (Grafen 1991), a working hypothesis that can be legitimately pursued as such. So let's judge this approach on its own terms. Certain evolutionary models that are general enough in scope could in principle provide an anchor based on sufficiently strong and general selective pressures. For instance, the "costly information hypothesis" (Coolen et al. 2003;Kendal et al. 2011) links the evolution of a general form of social learning with the costs and benefits of exploring problems in the environment when these problems have already been tackled by other individuals. According to other general models linking cultural learning with certain forms of variability in the selective environment, the development of a cultural form of life would be closely linked with changing selection pressures and the need to adapt to a plurality of environments (see Potts 1998 on the variability selection thesis). It follows that culture would be, in Robert Boyd and Peter Richerson's (2000) felicitous phrase, "built for speed not for comfort". In other words, a capacity for acquiring adaptive solutions that have already been acquired by some other individual in response to problems in the environment. Such a disposition would be especially well-suited for rapidly changing selective environments. However rich these general models might be in theoretical insights they also have obvious limitations when it comes to providing a general explanation for the vast domain of animal cultures. Some of these limitations are intrinsic. For instance, variability selection models are only valid within certain parameters of environmental variability, leaving aside other forms of social learning mechanisms that would be expected under different conditions (Mcelreath & Strimling 2008). Moreover, ceteris paribus, cultural stability as such ("animal traditions") is not selected for in rapidly changing selective regimes. But other limitations are extrinsic, almost by definition. Thus, in as much as certain forms of social learning could be an evolutionary accident or byproduct of other evolved characteristics, an evolved function, no matter how general, could not cover those cases that are not strictly functional. 
In this case, the developmental constraints that we had ruled out for the sake of the argument, would come back with a vengeance. They would do so not so much in the form of evolutionary constraints but rather as components and aspects of certain social learning processes not strictly covered by an approach that focused exclusively on evolutionary function. Homeostatic cluster? Another one of the most recently favored notions of what a biological natural kind is, points to yet another distinctive approach. This is the concept of a natural kind as a "homeostatic property cluster" (Boyd 1991). According to this modern view of natural kinds, many natural kinds are not so much characterized by necessary and sufficient conditions that establish membership, but rather by a more flexible set of properties, some of which tend to cluster together following causal regularities. Thus, the presence of one or several of these characteristics may be considered a reliable indicator of the statistical cooccurrence of other properties. In order for the category to constitute a natural and not simply notional kind, this statistical cooccurrence must be established on a causal basis. In recent years, certain wide-ranging biological categories whose naturalness was also disputed (the concepts of "species", "organism" or the concept of "life" itself) have been approached from this point of view (Dieguez 2013). The fact that the most common concept of animal culture is composed of what we called "cultural properties" may provide an idea of how to proceed. If the aforementioned cultural properties tended to cluster together on a sufficiently reliable basis, established from the causal properties of certain forms of social learning mechanisms, then it would make sense to talk of a homeostatic property cluster of culture. Is this indeed the case? The answer cannot be given on an a priori basis. A population of cultural agents can satisfy some of the properties of a cultural behaviour (social learning, stability, relative frequency in the population) to varying degrees without those properties being necessarily linked. Logical necessity is precisely the kind of necessity that is invoked and rejected here. The empirical details depend on the specifics of the social learning mechanisms and the diffusion process (see Claidiere & Sperber 2010). The homeostatic property cluster of culture may be positively regarded as an ambitious but interesting working hypothesis in the search for a natural kind of culture. It is not, however, a hypothesis whose methodology appears straightforward. Louis Lefevbre, Simon Reader and collaborators have shown how a related behavioural kind -the rate of behavioural innovation-can be evolutionarily associated to a cluster of biologically relevant characteristics such as rate of social learning or relative size of association areas in the brain in both primates and birds (Reader & Laland 2002;Lefevbre et al. 2004). The use of a similar methodology could test the foundations of some forms of homeostatic property cluster concepts of culture. Success, however, is not guaranteed in advance. Reduction Considering the diversity of mechanisms and patterns of diffusion that potentially participate in the propagation of cultural behaviour one might reasonably wager that if any clusters of properties are to be found, the most reliably co-occurring ones will be found at a specific rather than general level. 
Were clusters -or even families of clusters -of interesting causal properties to be discovered exclusively at a lower level, that could, in principle, also be a reason for a reduction of the original category. In this kind of reductionism, the upper level category is now absorbed by a narrower category. The loss of extension of the older term could then be justified to the extent that the new category has a more robust inductive structure. The division of previously established biological or psychological categories into more natural categories (Grifiths 1997) --thereby resulting in the older categories' loss of extension -is not without precedent. On this scenario, considering the naturalness of culture, the bottle is half full. According to this, we may have one or various populational concepts of animal cultures well anchored in the existence of generally recognized case studies or prototypes (Catherine Driscoll's 2016 proposal amounts to a similar strategy). This general reduction strategy allows the proliferation of special models to explain different cultural dynamics. Stated in the terms of a prominent text book on the categories of the philosophy of biology, the naturalness of a kind is discovered "not through the construction of definitions at the beginning of inquiry, but, if we are lucky, as the culmination of inquiry" (Sterelny & Griffiths, 1999, p. 357). Elimination One can also claim that the bottle is empty. A few years ago, asked to state one scientific idea whose time is due, several researchers in anthropology answered with the concept of culture 7 . As mentioned in the introduction, this is not a radically new idea in that discipline. Consider anthropologist Pascal Boyer's argumentation, and how it can be similarly applied to the case of animal culture. Briefly, he argues that if culture is an overly encompassing concept there may be nothing of interest which can be said "in general" about it. In the same sense that there can not be a science of trees -he claims-there can be no science of culture. Group dynamics and social psychological models may allow for generalizations at a lower level, but not at the most general one. Pascal Boyer is calling for what philosophers of science call an "elimination" of the concept of culture. Perhaps, in the field of animal behaviour, there is but a small step between the actual landscape in which culture is an articulated concept inside a network of other concepts, and an eliminativist landscape in which social learning occupies the large central node of the network much as it does already. Consequently, the other properties associated with cultural phenomena (stability, distribution in a population, etc.) should be referred to in a more explicit fashion. In fact, "social learning 8 " does already play a larger articulating role than that of "culture", a term which tended to be avoided by some researchers (e.g. Fragaszy & Perry 2003 considered the epistemological interest of the term "culture" to be too anthropocentric). A weaker version of this eliminativist position may still accept the use of the concept in a descriptive fashion, as an explanandum, while proscribing its role as an explanans. Under such a view, the culturality of a trait is a feature in search of an explanation (not an explanation itself). The adjective "cultural" can thus survive easily (Sperber 1996), whereas the reference to culture would be unduly essentialist. 
The strongest eliminativist version calls for a stricter use of language and proscriptions against the idea of culture altogether. Conclusion: Carving into an outdated epistemic object I have shown that there are serious reasons to doubt that animal culture, in its current state, is a natural kind. The main obstacle is that it is not solidly anchored in any of the several available theories (homology, selection pressures, information, etc.) purported to reveal its inductive nature. It was reasonably hoped that the animal culture concept could in principle provide a basis for the foundation of the natural kind of culture. That it does not also raises doubts as to the naturalness of the idea of culture in general. While I have suggested a methodologically challenging way to explore homeostatic property cluster concepts of animal culture, the truth is that elimination (in one of its different forms) appears to be the risk-averse choice from a natural kinds perspective. One could also glean a less radical epistemological lesson from the previous analysis of the conceptual network of animal culture. Even strong eliminativists such as Pascal Boyer recognize that culture is a convenient term to describe "cultural stuff", however various and disparate this stuff might be. Moreover, scientific research does sometimes need central concepts that are not strictly natural kinds. Culture might be seen in retrospect to have played the role of an "epistemic object" (Müller-Wille & Rheinberger 2012), a placeholder whose definition and conceptual range remain vague and yet nevertheless prove powerful enough to assemble a field of research and create wide, meaningful connections deemed worthy of exploration due in part to the existence of available techniques. In the field of animal behaviour, a series of research methods and techniques have been deployed both in the lab and in the field in pursuit of this epistemic object that is culture (Sabater Pi 1978; Whiten et al. 1999; Rendell & Whitehead 2001; Horner & de Waal 2009; and more generally Hoppitt & Laland 2013, Chapters 5-7). These methods and techniques opened a new space for building knowledge around a topic that was almost entirely ignored only a few decades ago. And yet they come with their own array of limitations, which have already been pointed out in the past (e.g. Laland & Janik 2006; Langergraber et al. 2016; Koops et al. 2014) and which leave their own grey areas. There have always been influential figures in this field of research who at some time or other have called for a halt to using the concept of culture altogether. I suspect that simply removing the central term of a substantial body of work in this area cannot transform the field by mere fiat. Pragmatic interests might also favour the continued use of the term for the purpose of scientific communication. However, understanding the crudeness of the ethological concept of culture can only promote progress. For this partly outdated epistemic object (recall its humble nineteenth-century roots!) should be considered a rough rock from which to smooth and carve more specific causal models related to learning mechanisms, behavioural ecology, diffusion dynamics and the stability of traditions.
The NA62 Liquid Krypton calorimeter readout module The NA62 experiment [1] at the CERN SPS (Super Proton Synchrotron) accelerator will focus on precision tests of the Standard Model via studies of ultra-rare decays of charged kaons. The high-resolution Liquid Krypton (LKr) calorimeter of the former NA48 experiment [2], together with other detectors, will provide a photon veto with hermetic coverage from zero out to large angles from the decay region. The old back-end electronics [3] does not satisfy the NA62 specifications, and the study of a new readout system began in 2008. This paper presents the Calorimeter REAdout Module (CREAM), an upgrade project for the back-end part of the LKr data acquisition chain [3]. The CREAMs will provide 40 MHz sampling of the 13248 calorimeter channels, data buffering during the SPS spill, zero suppression, and programmable trigger sums for the experiment trigger processor. Introduction The NA62 experiment at CERN aims at studying ultra-rare kaon decays, in particular K+ → π+νν. It will be housed in the CERN North Area on a new dedicated high-intensity beam line, where 400 GeV/c protons, extracted from the SPS accelerator, will produce a secondary charged hadron beam by impinging on a beryllium target. Because of the small signature of the decay process under study, high-resolution reconstruction of physical quantities and good veto and particle identification capabilities are key requirements of the experiment. NA62 detector layout The experimental subsystems are spread along a 170 m long region starting about 100 m downstream of the beryllium target. The experiment consists of several sub-detector systems with a total channel count of about 100 thousand. The fiducial decay region is located in a vacuum tube about 117 m long and 2.4 m in diameter on average. Around and in front of the decay region, many detectors will guarantee hermetic coverage for photons up to 50 mrad. A straw-tube spectrometer, housed in vacuum to reduce multiple scattering, will measure the momentum of the charged particle produced in the kaon decay. The identification of the particle type will be done by dedicated detectors. A schematic layout of the experiment is shown in figure 1; more details can be found in [1]. NA62 trigger system The average event rate integrated over the NA62 detectors is about 10 MHz, and only a very small fraction of these events contains valuable data. A multilevel trigger structure is being implemented in order to reduce this rate to a few kHz. The first-level trigger (called L0) selects events after processing of trigger primitives (prepared on a subsample of data by the few detectors participating in the trigger) on dedicated hardware. The latency of this processing is fixed, i.e. the L0 accept signal (L0A) occurs a fixed time after the instant of the event seen by the detector, and it can be as high as 10 ms. The L0A signal is distributed to the sub-detector readout electronics via the timing, trigger and control (TTC) links [4]. The next trigger level (called L1) is software-based and uses the data retained after L0 by some sub-detectors. The L1 trigger processor always sends both accept and reject requests to the readout. Subsequent requests are put into Ethernet packets in an asynchronous way. Upon reception of an L1 request, each sub-detector sends its data to a farm of PCs for subsequent decisions. The last trigger level (L2) will be based on correlations between different sub-detectors' L1 data.
The information upon which these correlations are determined will be provided by event-building PC farms. The latency of the L2 trigger is not fixed and can extend into the SPS inter-spill period. LKr calorimeter and its front-end electronics Calorimeter The LKr calorimeter is a quasi-homogeneous electromagnetic calorimeter, which ensures a very good intrinsic energy resolution. It is a key element for vetoing photons from K decays, with the requirement of a photon detection inefficiency better than 10⁻⁵ for energies larger than 35 GeV. In addition, the calorimeter is required to provide trigger signals based on energy deposition, to contribute to reducing the L0 trigger rate. The calorimeter active medium consists of a bath of about 10 m³ of liquid krypton at 120 K, with a total thickness of 125 cm (about 27 radiation lengths) and an octagonally shaped active cross-section of 5.5 m². A vacuum tube of 8 cm radius goes through the centre of the calorimeter to transport the undecayed beam. Thin copper-beryllium ribbons (of dimensions 40 µm × 18 mm × 127 cm) stretched between the front and the back of the calorimeter form a tower-structured readout. The 13248 readout cells each have a cross-section of about 2×2 cm² and consist (along the horizontal direction) of a central anode (kept at high voltage) between two cathodes (kept at ground). The assembled LKr calorimeter structure and details of the ribbon and spacer-plate layout are shown in figure 2. Readout electronics The front-end part of the calorimeter readout was built for the NA48 experiment and comprises two circuits. The initial current signal is integrated by a charge preamplifier mounted inside the cryostat at liquid-krypton temperature and connected to the anode electrode by a blocking capacitor. The integration time constant of the charge preamplifier is 150 ns. The signal from the preamplifier is transmitted to a combined receiver and differential line driver mounted outside the calorimeter, close to the signal feed-through connectors. The receiver amplifies the preamplifier signal and performs a pole-zero cancellation. The signal after pole-zero cancellation has a rise-time of about 20 ns and a fall-time of 2.7 µs. The maximum signal level, corresponding to 50 GeV, gives ±1 V into 100 Ω at the digitizer electronics input. The required signal-to-noise ratio is 15000 to 1. The back-end part of the readout chain, the CPD [3], performed final signal shaping and digitization. At the design time in 1995, only a few low-cost 10-bit 40 MS/s FADCs were available, and a custom dynamic-range-switching ASIC was developed to fulfil the experiment requirements. The maximum event readout rate was 13 kHz. While the two front-end elements remain untouched for NA62, the performance of the former back-end digitizer (CPD) is not compliant with the new requirements and has to be upgraded. The updated LKr readout chain is sketched in figure 3. Overview The CREAM is a 1-slot-wide VME 6U form-factor module. One module houses 2×16 channels of 40 MS/s ADCs with at least 14-bit dynamic range and an effective number of bits (ENOB) of 10 or more. The module can run with external or internal clock sources, and the sampling frequency is 40.08 MHz (hereinafter called the 40 MHz clock). The external reference sampling clock will be provided by the TTC links.
The module data processing flow with multiple levels of triggering is illustrated in figure 4 and can be summarised as follows:
• analog inputs, after proper shaping, are continuously digitised using the 40 MHz clock;
• trigger sums are continuously formed in digital form and sent to the L0 trigger logic;
• data are continuously written in a circular buffer waiting for the L0 decision;
• upon receipt of an L0A, the related data, stored a fixed latency time before the L0A, are extracted from the circular buffer and stored into another buffer called the L0 event buffer, waiting for a possible L1A;
• upon receipt of an L1A, the corresponding data are sent to a PC farm through a gigabit Ethernet port.
The module conforms to the IEEE-1014-1987 and ANSI/VITA 1-1994 standards [5]. The board hosts the VME P0, P1, and P2 connectors and fits both the VME and VME64 standards. A custom backplane is used to distribute the TTC signals via the P0 connectors. Figure 5 shows a block diagram of the module, and the following sections detail its different parts. Input signal shaping The signal at the input of the CREAM module has a 20 ns rise time, a 2.7 µs fall time and a ±1 V maximum amplitude. Each of the input channels consists of an AC-coupled differential line receiver and a pulse shaper. A 14-bit DAC allows tuning the DC offset of each channel in the range ±1 V in order to correctly adjust the pedestals and to preserve the dynamic range. The signal is shaped before the ADC input into a differential semi-Gaussian signal with a 40 ns rise time and a 70 ns full width at half maximum (FWHM). Thus eight consecutive samples, spaced 25 ns apart, cover the entire pulse shape. ADC Due to the required performance and the total number of channels per board, the AD9252 ADC [6] from Analog Devices was chosen. It is an octal, 14-bit, 50 MSPS ADC with an on-chip sample-and-hold circuit and one serial output data link per channel. The IC has built-in test and control features accessible through a Serial Peripheral Interface (SPI), including programmable data pattern generation along with custom user-defined test patterns. Thus, the test patterns are acquired in the same way as the digitised 'analog' inputs, in order to simplify exhaustive analysis of the entire acquisition chain. Table 1 gives the main parameters of the analog-to-digital conversion of the module. Data acquisition The experiment data-taking sequence is defined by the SPS accelerator cycle. The accelerator burst time is the data-taking active phase, and it can vary in the range of 1-15 s with a period of up to 50 s. Within the burst, a common time reference is defined by a 31-bit timestamp word, thus covering a maximum time range of about 53.6 s. The burst duration is defined by two commands: the start-of-burst and the end-of-burst. The start-of-burst command sets the timestamp to 0. The timestamp associated with each event is defined by the arrival time of the L0A signal. Thus, a unique relationship between the timestamp and the event number is established within each burst. Digital data processing and trigger modes The data-taking sequence is controlled via the TTC system, which drives all timing references. The start-of-burst signal initialises the 31-bit timestamp counter and the 32-bit L0ID counter, which are then incremented at each clock cycle and at each L0A, respectively. The data streams from all enabled inputs are stored in the circular buffer. The depth of this buffer covers the 10 ms maximum L0 trigger latency, i.e. 800 kB per channel.
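As a quick cross-check of the buffer depth quoted above, the following sketch reproduces the arithmetic and shows, in simplified software form, how samples would be pulled from a circular buffer at a fixed latency before the L0A timestamp. This is hypothetical helper code written for illustration, not the CREAM firmware; the 2-byte sample size is an assumption (one 14-bit sample stored in a 16-bit word).

```python
F_SAMPLE_HZ = 40.08e6      # CREAM sampling clock
L0_LATENCY_S = 10e-3       # maximum L0 trigger latency
BYTES_PER_SAMPLE = 2       # assumption: one 14-bit sample stored in a 16-bit word

samples_per_channel = int(F_SAMPLE_HZ * L0_LATENCY_S)       # ~400,800 samples
buffer_bytes = samples_per_channel * BYTES_PER_SAMPLE       # ~800 kB per channel
print(f"circular buffer per channel: {buffer_bytes / 1e3:.0f} kB")

def extract_event(circular_buffer, write_index, latency_samples, n_samples=8):
    """Return n_samples starting latency_samples before the current write position,
    wrapping around the buffer (software model of the firmware's L0A extraction)."""
    size = len(circular_buffer)
    start = (write_index - latency_samples) % size
    return [circular_buffer[(start + k) % size] for k in range(n_samples)]
```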
Upon receipt of the L0A signal, the corresponding data samples (a programmable number of samples in the range 4-255, with a default value of 8) are extracted from the circular buffer and stored in the L0 event buffer, together with the timestamp counter content at the arrival of the L0A signal and a 32-bit L0ID. The L0 event buffer is able to accommodate a 1 MHz average L0 rate during 10 s when 8 samples per L0 event are kept. This requires 2.56 GB of memory for 16 channels. The circular buffers, as well as the L0 event buffers, are implemented in two (one per 16 channels) DDR3 SODIMM modules, with 4 GB storage capacity each. The events, once written in the L0 event buffer, become available for readout via the Gigabit Ethernet link and/or the VME bus interface, with optional zero-suppression processing. During the memory readout process, the CREAM is still able to store new data in the circular and L0 event buffers. The acquisition process is therefore "dead-time-less", as long as the L0 event buffer is not full. L0 and L1 readout The default data taking is initiated by the L1 trigger request. L1 trigger request packets are sent to the module through the Ethernet interface using the TCP protocol. One L1 trigger request packet can contain more than one request for data. Events from the same L1 trigger request and with the same Event Builder (EB) destination can be formatted in the same packet to optimize the network bandwidth. The data transmission to the EB is done using the TCP or UDP protocol. The readout and the L1 trigger request packets share the same Ethernet link. The "Event data" and "Detector data" formats are presented in figure 6. For test purposes, it is possible to read out data directly after the L0A signal occurs, but at a lower than nominal rate. The data format will be the same as for the L1 readout. Data compression Both the L0 and L1 data copying mechanisms described above will allow reading interesting events without zero suppression, in order to have, at a later stage of the analysis, all the original data available. But, since for each event a large fraction of channels will only contain pedestal counts, a simple zero-suppression algorithm applied to individual channels is foreseen. Channels where the difference between the maximum and the minimum value of the samples is below a predefined value (programmable and possibly different for each channel) will be discarded. The flexibility of the CREAM architecture could also be exploited for an alternative, trigger-less readout scheme. In this scheme, continuously digitized data are analyzed by a pipeline process running on the FPGA. This process performs a final calibration and feature extraction with the standard reconstruction algorithms (i.e. digital filters). Precise information on the energy and time of each pulse could be obtained and continuously sent to a cluster of PCs to build complete events. Trigger sum outputs During the data acquisition, digitised signals from the selected channels are summed up to build a trigger sum (Super-Cell) to be sent to the LKr L0 trigger system. The sensitive area of the calorimeter forms an octagon and can be considered as a 128×128 cell square grid with the corners missing. The selection of the channels contributing to a particular Super-Cell, as well as the number of Super-Cells formed in a module, is programmable. Before the sums are calculated, all inputs are normalised: each channel pedestal is subtracted from the data and the gain variations are compensated by means of the calibration parameters.
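Before turning to how the sums are transmitted, the per-channel zero-suppression rule and the normalised Super-Cell sum described above are simple enough to express in a few lines. The sketch below is a software illustration under assumed data structures (Python dictionaries of sample lists); in the real module this logic runs in the FPGA.

```python
import numpy as np

def zero_suppress(samples_by_channel, thresholds):
    """Keep only channels whose max-min sample spread reaches the per-channel threshold."""
    kept = {}
    for ch, samples in samples_by_channel.items():
        if max(samples) - min(samples) >= thresholds.get(ch, 0):
            kept[ch] = samples
    return kept

def super_cell_sum(samples_by_channel, cell_channels, pedestals, gains, n_samples=8):
    """Pedestal-subtract, gain-correct and sum the channels of one Super-Cell,
    sample by sample (simplified model of the trigger-sum path)."""
    total = np.zeros(n_samples)
    for ch in cell_channels:
        corrected = (np.asarray(samples_by_channel[ch], dtype=float) - pedestals[ch]) * gains[ch]
        total += corrected[:n_samples]
    return total
```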
The 16-bit sum data are serialised and sent to the L0 trigger processor via standard Ethernet cables. An embedded-clock bit coding is implemented, and high-speed differential cable extender buffers are used. Up to 4 links are foreseen in order to accommodate up to 4 Super-Cells per CREAM module, with an effective data rate of 720 Mbps per link. Project status Due to the large number of modules (∼450) needed to instrument the LKr readout and the maintenance requirement over the lifetime of the experiment (∼10 years), the decision was taken to sub-contract the CREAM development and production to industry. A market survey was finished in October 2010 and an invitation to tender was sent out in April 2011. The bids were checked for completeness and compliance with the conditions specified in the invitation-to-tender documents. CERN management approved the project and the contract was awarded to CAEN as the lowest bidder conforming to the specification in all respects. The first prototype delivery is foreseen in July 2012, and these modules will go through rigorous acceptance tests to ensure that the design meets the specification and is ready for production. Delivery of the full production series is planned in July 2013, and commissioning of the entire LKr readout is foreseen in fall 2013.
3,535
2011-01-01T00:00:00.000
[ "Physics" ]
Effect of tube current on computed tomography radiomic features Variability in the x-ray tube current used in computed tomography may affect quantitative features extracted from the images. To investigate these effects, we scanned the Credence Cartridge Radiomics phantom 12 times, varying the tube current from 25 to 300 mA∙s while keeping the other acquisition parameters constant. For each of the scans, we extracted 48 radiomic features from the categories of intensity histogram (n = 10), gray-level run length matrix (n = 11), gray-level co-occurrence matrix (n = 22), and neighborhood gray tone difference matrix (n = 5). To gauge the size of the tube current effects, we scaled the features by the coefficient of variation of the corresponding features extracted from images of non-small cell lung cancer tumors. Variations in the tube current had more effect on features extracted from homogeneous materials (acrylic, sycamore wood) than from materials with more tissue-like textures (cork, rubber particles). Thirty-eight of the 48 features extracted from acrylic were affected by current reductions compared with only 2 of the 48 features extracted from rubber particles. These results indicate that variable x-ray tube current is unlikely to have a large effect on radiomic features extracted from computed tomography images of textured objects such as tumors. The field of radiomics, in which quantitative image features are used to determine the tumor phenotype, has been demonstrated to have various potential roles in clinical decision-making. These include classifying tumors (e.g., benign or malignant), determining mutation status, improving patient risk stratification, predicting appropriate treatment strategies, and monitoring treatment response to improve outcome predictions [1][2][3][4][5][6][7][8][9] . However, radiomic features and results are sensitive to a variety of noise sources. For example, inter-scanner variations in image features can be relatively large 9,10 . Similarly, details of the imaging protocol, such as pixel size, can significantly affect the values of the calculated features 11 . To maximize the amount of useful information obtained from computed tomography (CT) images in radiomics (and avoid incorrect interpretation of results), researchers must understand all sources of noise. This understanding can help with the development of solutions, such as image preprocessing, to mitigate the effects of the noise. In retrospective studies, understanding of the noise sources could be used to guide which image data are analyzed (e.g., only images with a pixel size within a specified range). In prospective studies, noise analysis could be used to guide the creation of harmonized imaging protocols in which the most important parameters are controlled. Examples of noise sources that have been examined in previous work include different CT scanners, pixel size, image spacing, and reconstruction kernels 10,[12][13][14][15] . The impact of tube current on diagnostic tasks has been covered extensively in the literature [16][17][18][19][20][21] , but only preliminary data are available regarding the effects of tube current in radiomics studies. Fave et al. simulated the effect of tube current on measured image features by adding Gaussian noise to patient CT images 22 . 
They observed no significant effect on image features (compared with inter-patient variations) but acknowledged that their noise model was very basic and did not properly reflect the changes in noise as the tube current was reduced. Larue et al. found that optimizing the number of gray levels in images to improve prognostic value did not adversely affect feature stability. Further, they found that feature values were not correlated with tube current or slice thickness after resampling 23 . Features with concordance correlation coefficient (CCC) values greater than 0.9 have also been reported 24 . A more reliable answer would involve scanning an object using different tube currents and then assessing the impact of this variation on the calculated image features. Mackin et al. recently described a texture phantom that can be used to assess the impact of the imaging device or protocol on extracted image features 10 . In the current study, we experimentally examined the effects of tube current on quantitative image features by scanning this texture phantom using a range of tube current values. Methods and Materials The Credence Cartridge Radiomics (CCR) phantom as described by Mackin et al. 10 was used to study the effects of tube current on radiomic features. The CCR phantom has ten cartridges with various textures. In the current study, we analyzed four of the cartridges: solid acrylic, cork, rubber particles, and wood. These were selected to give a full range of textures, from minimal to highly varied, similar to the texture of non-small cell lung cancer tumors (Fig. 1). The phantom was imaged on a GE LightSpeed VCT scanner (GE Healthcare, Waukesha, WI) and a Toshiba Aquilion ONE scanner (Toshiba Medical Systems, Tustin, CA) using a range of tube current settings. The GE LightSpeed images were acquired in helical mode at 120 kVp, 0.969 pitch, STANDARD reconstruction kernel, 50-cm display field of view, and 2.5-mm image thickness. Each voxel was 0.98 × 0.98 × 2.5 mm³. Twelve scans were acquired, with the tube current varied from 25 to 300 mA·s. The images were imported into IBEX, a freely available radiomics software program 25 . The location of a small radiopaque marker on the edge of the phantom was manually identified, and its image coordinates were entered into an in-house Python script (Python version 2.7), which created a rectangular region of interest (ROI) of 8 × 8 × 2 cm³ for each cartridge in each scan. The same ROI file was used for all images from a particular scanner. Results for smaller (2 × 2 × 2 cm³) ROIs are included in the supplementary materials. Forty-eight features were calculated in IBEX: 10 intensity histogram, 22 gray-level co-occurrence matrix 26 , 11 gray-level run length matrix 27,28 , and 5 neighborhood gray tone difference matrix features 29 (Table 1). These features were selected because they are commonly used in radiomics studies 30,31 . As noted below, features were calculated with one and four CT numbers per bin. The texture features were calculated for each slice in the ROIs and then combined, a procedure referred to as 2.5D 25 . Reducing the tube current used in CT scans will increase the noise in the images. In this study, our primary concern is not to measure the size of the tube current effect in the phantom materials. Our primary concern is to gauge the size of the effect tube current may have on radiomics studies of patients. Therefore, we used a metric that scales the effect seen in phantom materials by the variability in patients. If a feature is highly variable in patients, a small tube current effect is unlikely to weaken a radiomic feature.
On the other hand, a large tube current effect is likely to weaken a radiomic feature when the patient variability is small. To gauge the size of the effects of variable tube current relative to the variability in patients, we normalized the extracted feature values by the coefficient of variation for the same features extracted from CT images of non-small cell lung cancer tumors. This patient-normalized feature, f̂_i, was defined as

f̂_i = ((f_i − f_0) / f_0) / (σ_T / µ_T),    (1)

where f_i is the feature value for a given tube current value i (mA·s) and f_0 is the feature value at the baseline tube current value (300 mA·s for the GE scanner and 250 mA·s for the Toshiba scanner). σ_T and µ_T are the standard deviation and mean, respectively, of the same feature from the non-small cell lung cancer tumors. The numerator of f̂_i is the fractional difference between a tube current scan and the baseline, and the denominator is the coefficient of variation for the same feature calculated on 106 non-small cell lung cancer tumors. This normalization assesses the variability caused by tube current relative to inter-patient differences. The patients in this normalization cohort were part of a clinical trial approved by the Institutional Review Board. Informed consent for participation in the trial was obtained for all patients, and all procedures were performed in accordance with the Declaration of Helsinki on Ethical Issues. Additional informed consent for this retrospective study was waived by the Institutional Review Board. The gross tumor volumes from the end-of-exhale phase of the planning CT images were used as the ROIs for feature extraction. The features were extracted after applying a threshold of 900 HU. The end-of-exhale phase is considered the most stable 22,32,33 . The mean and median tumor ROI volumes were 96 and 42 cm³, respectively (range 5-568 cm³). ROIs smaller than 5 cm³ were excluded from the normalization cohort. This patient cohort was part of a prior radiomics study, which details its clinical characteristics 9 . Features from the phantom and patients were extracted in four ways: (1) no preprocessing, (2) intensity rescaling (10-bit depth rescaling), (3) Butterworth smoothing, and (4) Butterworth smoothing and intensity rescaling. These preprocessing techniques were chosen on the basis of work by Fave et al., who showed that preprocessing affects the significance of features in prognostic models 34 . Rescaling the intensity from the initial 4096 bins (12-bit) to 1024 bins (10-bit) combines 4 CT numbers per bin. Data availability. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request. Results The effects of tube current on image intensity histograms for the acrylic, cork, rubber particle, and sycamore wood cartridges from images acquired on the GE scanner are shown in Fig. 2 (results for images acquired on the Toshiba scanner are shown in Supplementary Fig. 1). Results did not vary substantially between the two scanners used. Therefore, all figures show results obtained using the GE LightSpeed VCT scanner, and results obtained using the Toshiba Aquilion ONE scanner are shown in corresponding supplementary figures. The acrylic cartridge image from the 25 mA·s scan had a much greater dispersion of intensities than did the image from the 300 mA·s scan. The differences between the intensity histograms for these two scans were not as apparent in the images of the other three materials.
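In code, Equation 1 above is a one-line computation; the sketch below is our own minimal illustration, with variable names that are not from the authors' software.

```python
def patient_normalized_difference(f_i, f_0, sigma_patients, mu_patients):
    """Equation 1: fractional change of a feature from the baseline scan,
    divided by the coefficient of variation of that feature in the patient cohort."""
    fractional_change = (f_i - f_0) / f_0
    coefficient_of_variation = sigma_patients / mu_patients
    return fractional_change / coefficient_of_variation

# Example with made-up numbers: a 5% change in a feature whose patient
# coefficient of variation is 20% gives a normalized difference of 0.25.
print(patient_normalized_difference(f_i=105.0, f_0=100.0,
                                    sigma_patients=20.0, mu_patients=100.0))
```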
Preprocessing the images by rescaling, smoothing, or both rescaling and smoothing had only a marginal effect on the dispersion differences. The effects of tube current on the intensity features, shown in Fig. 3 for the GE scanner (results for the Toshiba scanner are shown in Supplementary Fig. 2), are consistent with the effects seen in the intensity histograms. The values used in this figure were scaled according to Equation 1 to produce patient-normalized feature differences. Reducing the tube current values produced the largest changes in the acrylic cartridge features, as with the intensity histograms. Gray-level co-occurrence matrices for the acrylic and rubber particle cartridges for images acquired on the GE scanner are shown as images in Fig. 4 (results for images acquired on the Toshiba scanner are shown in Supplementary Fig. 3). Gray-level co-occurrence matrices record the frequency of image intensity values being adjacent to each other, or "co-occurring", in the image. The difference in the dispersion of the CT numbers for the 25 and 300 mA·s scans produced obvious changes in the gray-level co-occurrence matrix for the acrylic cartridge. This dependence is also evident in the features derived from the gray-level co-occurrence matrices, shown in Fig. 5 (results for images acquired on the Toshiba scanner are shown in Supplementary Fig. 4). The tube current dependence was greatest for the acrylic cartridge. A more moderate effect was seen in the sycamore wood cartridge, and little effect was seen in the heterogeneous cork and rubber particle cartridges. A high-level look at the 48 features from the four feature groups is shown in Fig. 6 for images acquired on the GE scanner (results for images acquired on the Toshiba scanner are shown in Supplementary Fig. 5). In this figure, the patient-normalized feature values are color-coded into one of four categories indicating the degree to which the feature values depend on the tube current value used when the images were acquired. The features extracted from the most homogeneous materials, acrylic and sycamore wood, were much more dependent on tube current than were the features extracted from the more textured materials, cork and rubber particles. More specifically, 25 of 48 features extracted from the acrylic cartridge were strongly dependent on tube current (patient-normalized difference greater than 2). In contrast, only a few of the features extracted from the rubber particle and cork cartridges showed a comparable dependence. Discussion We investigated the effect of reducing tube current on materials with varying amounts of texture and found that objects with more intrinsic texture were not substantially affected by tube current changes. Reducing the tube current of a CT scan increases the image noise and therefore increases the spread of CT numbers. We found that the impact of this noise was more apparent for homogeneous materials (e.g., acrylic) than for textured materials. The increased spread observed in the intensity histogram for the acrylic cartridge at 25 mA·s compared with 300 mA·s implies that image intensity features for this cartridge are expected to be dependent on tube current. For the most textured cartridges, rubber particles and cork, dispersion at the lower tube current values was minimal, and thus features extracted from these types of materials are not expected to be dependent on tube current. Indeed, the expected feature dependence was observed: feature values for acrylic varied with tube current, whereas values for rubber particles and cork, which have texture more similar to NSCLC tumors, stayed essentially constant.
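For readers unfamiliar with the construction used in Figs 4 and 5, the toy sketch below shows how a gray-level co-occurrence matrix counts adjacent intensity pairs for a single pixel offset. It is a didactic illustration only and is not the IBEX implementation (which supports multiple offsets, symmetric counting and the 2.5D slice-wise combination described above).

```python
import numpy as np

def glcm(image, n_levels, offset=(0, 1)):
    """Relative frequency of co-occurring gray-level pairs for one offset
    (default: horizontal neighbours). `image` must already be quantised to
    integer levels in [0, n_levels)."""
    counts = np.zeros((n_levels, n_levels), dtype=float)
    di, dj = offset
    rows, cols = image.shape
    for i in range(rows - di):
        for j in range(cols - dj):
            counts[image[i, j], image[i + di, j + dj]] += 1
    return counts / counts.sum()

toy = np.array([[0, 0, 1],
                [1, 2, 2],
                [2, 2, 0]])
print(glcm(toy, n_levels=3))
# A homogeneous, low-noise region concentrates counts on the diagonal;
# added noise spreads the counts away from it, changing the derived features.
```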
This is shown through the plots of the patient-normalized feature differences (Figs 3 and 5), in which values for cork and rubber particles stayed near 0 for all points, whereas values for acrylic approached 2 or −2 or even extended beyond the displayed range for many features. These findings suggest that when a material has minimal texture, such as acrylic, the texture feature values are heavily dependent on noise, i.e., tube current values. Materials that did have texture showed little effect from tube current variation. Additionally, image preprocessing (intensity rescaling or smoothing) did not substantially change this relationship. Overall, these results are consistent with the simpler model that was examined by Fave et al. 22 . The results presented here indicate that retrospective radiomics studies should not be significantly affected by variations in tube current. In other words, there should be no need to exclude patient data from retrospective studies on the basis of differences in tube current alone.

Figure 4. Images of the gray-level co-occurrence matrices for the acrylic and rubber particle cartridges for computed tomography scans acquired using 25 and 300 mA·s tube current on a GE LightSpeed VCT scanner. For each co-occurrence matrix, the relative frequency of intensity pairs is plotted and scaled from 0 in dark blue to the maximum value for that matrix in dark red. Differences in the matrices were apparent for the homogeneous acrylic cartridge, where increasing the mA·s led to a diagonal distribution of intensity pairs compared with the circular distribution at the lower mA·s.

These results also indicate that harmonizing tube current values between scans need not be a big concern when planning prospective radiomics studies. This is important because most scanners now modulate mA·s to control the overall noise level, and these settings may be determined locally (e.g., on the basis of radiologist feedback). Thus, it may be difficult to harmonize the tube current between institutions. Instead, harmonization can focus on details of the image reconstruction such as pixel size and reconstruction kernel, which can both be achieved in a second reconstruction that is specifically designed for radiomics studies and that does not increase the radiation dose to the patient. The current study has a few limitations. Phantom materials are not perfect surrogates for tissue, and some quantitative features extracted from some human tissues might be more sensitive to changes in the tube current values. It is possible that some regions in tumors might be more sensitive to the tube current than other regions. In addition, the acquired scans had image thicknesses of 2.5 or 5.0 mm and all ROIs had a volume of 128 cm³. These factors may reduce the impact of tube current owing to the reduced effect of noise. We repeated our study using smaller, 8 cm³, ROIs and found that the results for the smaller ROIs were similar for both the GE and Toshiba scanners (Supplementary Figs 6 and 7). Also, the results for 2.5 and 5.0 mm image thicknesses were similar to each other. Additionally, the current study evaluated only the effects of tube current on measured radiomic features within a defined volume. The effects of tube current on physician delineation were not evaluated. With more noise at lower tube current values, it may be more difficult to determine ROI bounds within patients. A change in the contouring would affect the radiomic features measured 35 .
Finally, we did not evaluate many of the other parameters of the imaging protocol, such as tube voltage (kV). These should be investigated in future studies. Conclusion To provide noninvasive and relatively inexpensive biomarkers, radiomic studies may rely on images acquired using diagnostic imaging or radiation therapy simulation protocols. Retrospective reconstruction of medical images, in which the imaging procedure is performed one time but the images are reconstructed multiple times using a radiomics protocol in addition to the standard protocol, may help to standardize the images. For a CT scan, however, some parameters, including the pitch, tube voltage, and tube current, cannot be changed for retrospective reconstruction. Most CT scans of adults use 120 kVp. It seems unlikely that the pitch of helical scans will affect quantitative imaging features. Thus, the tube current, which influences both the overall noise in the image and the radiation dose to the patient, is the most concerning of these three parameters. Our finding that radiomic features are robust to changes in tube current in CT studies of tumor-like materials indicates that variations in the tube current used while imaging patients are unlikely to weaken such studies. It is unlikely that radiomic image features calculated from CT images of textured objects (such as tumors) are significantly affected by x-ray tube current.

Figure 6. Maps of the patient-normalized features extracted from the acrylic, cork, rubber particle, and sycamore wood cartridges. The columns represent the tube current (mA·s) used to acquire the computed tomography scan and are grouped by material. Colors other than blue indicate that the effect of the reduced tube current is large relative to the variability of the feature calculated for tumor samples from patients with non-small cell lung cancer. The almost solid blue table for the rubber particle cartridges indicates that reducing the tube current has little effect on the radiomic features. The images were acquired using a GE LightSpeed VCT scanner. NGTDM, neighborhood gray tone difference matrix.
4,316.2
2018-02-05T00:00:00.000
[ "Physics" ]
Histological characteristics of exercise‐induced skeletal muscle remodelling Abstract This study aims to analyse the pathological features of skeletal muscle injury repair by using rats to model responses to different exercise intensities. Eighty‐four rats were randomly divided into five groups for treadmill exercise. The short‐term control, low‐intensity, medium‐intensity and high‐intensity groups underwent gastrocnemius muscle sampling after 6, 8 and 12 weeks of exercise. The long‐term high‐intensity group underwent optical coherence tomography angiography and sampling after 18 weeks of exercise. RNA sequencing was performed on the muscle samples, followed by the corresponding histological staining. Differentially expressed genes were generally elevated at 6 weeks in the early exercise stage, followed by a decreasing trend. Meanwhile, the study demonstrated a negative correlation between time and the gene modules involved in vascular regulation. The modules associated with muscle remodelling were positively correlated with exercise intensity. Although the expression of many genes associated with common angiogenesis was downregulated at 8, 12 and 18 weeks, we found that muscle tissue microvessels were still increased, which may be closely associated with elevated sFRP2 and YAP1. During muscle injury‐remodelling, angiogenesis is characterized by significant exercise time and exercise intensity dependence. We find significant differences in the spatial distribution of angiogenesis during muscle injury‐remodelling, which may be helpful for the future achievement of spatially targeted treatments for exercise‐induced muscle injuries. | INTRODUCTION Exercise-induced muscle injuries (EIMIs) are prevalent in sports involving high-speed running or high volumes of running load, acceleration and deceleration, and under fatiguing conditions of play or performance. 1 Among them, calf muscle injuries are common in sports involving high-speed running, explosive jumping, and kicking. The calf complex is an essential body structure for weight bearing and locomotive activity. Skeletal muscular dysfunction, pain and oedema are the major presenting characteristics of calf muscle injuries. 2 The duration of rehabilitation until return to regular sports is usually quite lengthy, especially for athletes with significant injuries. The pathological features of EIMI mainly involve muscle fibre rupture and skeletal muscle remodelling, including remodelling of the extracellular matrix, myofibres and the vascular bed. 3 Myofibre rupture, microvessel damage and inflammatory infiltration in the early stages of injury can induce tissue regeneration and repair mechanisms. 4,5 However, the course of repair is prolonged, with an uncertain prognosis. If the injury exceeds the capacity of the tissue to self-repair, pathological muscle healing and irreversible damage can emerge, including chronic inflammation, muscular fibrosis, heterotopic ossification and muscle atrophy or stiffness. 6,7 These outcomes can have a particularly negative impact on athletes. Unfortunately, there is no optimal treatment and rehabilitation programme for EIMI. Symptomatic treatments, physical therapy and mild rehabilitative exercises often cannot fully correct the injury, resulting in a suboptimal outcome.
8 Therefore, it is necessary to understand the temporal pathological characteristics of skeletal muscle tissue under different exercise loads and cycles to identify precise treatment choices and suitable intervention times. However, few studies have addressed these topics. 9 The physiopathological features of skeletal muscle exhibit remarkable heterogeneity and dynamic changes during exercise or injury repair. The heterogeneity and dynamics of muscle fibres are fundamental to a muscle's ability to perform a variety of tasks, ranging from continuous low-intensity activity (such as maintaining posture) to repetitive submaximal contractions (such as during locomotion) and rapid and intense maximal contractions (such as during jumping and kicking). 10 Currently published studies show that many cytokines or signalling pathways are favourable for EIMI repair and thus have marked potential for clinical translation. 11,12 A better understanding of exercise-dependent muscle change can help identify potential therapeutic targets. We hypothesized that the histological and transcriptomic characteristics of skeletal muscle were distinctly different and highly heterogeneous in response to time and exercise intensity. In this study, we explored the histological and transcriptomic characteristics of temporal changes in the rat gastrocnemius muscle in response to treadmill exercise of different intensities and cycles. We analysed the effect of exercise-induced angiogenesis on skeletal muscle remodelling and the regulation of the FHL2/sFRP2 signalling axis. The study helps uncover the dynamic pathological characteristics of skeletal muscle remodelling, provides new references for the molecular mechanisms of EIMI and helps in the development of clinical intervention programmes from the perspective of angiogenesis. | Rats Twelve-week-old male specific-pathogen-free Sprague Dawley rats weighing 250-300 g were used (Shanghai Jihui Laboratory Animal Care Co., Ltd.; n = 86). Rats were housed under controlled conditions (22°C, 12 h light/12 h dark cycle) with ad libitum access to water and standard laboratory rat chow. One rat died during the experiment due to exercise fatigue. One additional rat was withdrawn from the experiment after refusing to exercise continuously. At the end of the intervention, the animals were anaesthetised with 1.25% Avertin (10 mL/kg) and euthanized by cervical dislocation. Surgical interventions, treatments and animal care procedures were performed strictly according to a protocol approved by the Animal Care and Use Committee of the University School of Medicine. Exclusion criteria for rats were: (1) death or trauma during the experiment, for example, a broken nail causing severe bleeding in the toe; (2) inability to tolerate the exercise intensity set in the exercise protocol or refusal to exercise; (3) severe physiological reactions or other conditions affecting daily life after exercise, for example, prolonged refusal to eat. | Exercise intervention First, we determined the maximum tolerance and the minimum threshold for muscle damage in rats using the previous literature, [13][14][15][16] then developed a gradient training protocol within this range and finally clarified the feasibility of the training protocol through pre-experiments and haematoxylin and eosin staining, to maximize the simulation of the clinical EIMI disease state by active exercise.
Eighty-four rats were randomly divided into five groups for treadmill exercise (n = 6 per time point per group), including control (no treadmill exercise), low-intensity, medium-intensity, high-intensity and long-term high-intensity groups. All rats underwent adaptive pretraining except for the controls, with a 5 m/min speed setting at 10° uphill for 10 min daily for 1 week. Rats were trained at varying load intensities according to the designed speed, time and treadmill angle (Figure 1). Low intensity was 17 m/min speed, 10° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. Medium intensity was 25 m/min speed, 10° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. High intensity was 25 m/min speed, 15° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. The long-term high-intensity condition was 25 m/min speed, 15° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour to provide food and water) and 6 days per week for 18 weeks. All exercise group rats were motivated to run with a shock grid set at 0.4 mA. At each sampling time point (6, 8, 12 and 18 weeks), six rats were randomly selected from each group for analysis. | Muscle samples Rats were intraperitoneally administered 1.25% Avertin (10 mL/kg) (Nanjing Aibei Biotechnology Co., Ltd) to induce anaesthesia and were euthanized by cervical dislocation 3 days after the relevant exercise protocol was completed. 17 Gastrocnemius muscle tissue was obtained surgically. A portion of the gastrocnemius muscle tissue was cryopreserved in liquid nitrogen and used for RNA sequencing (RNA-seq). The remaining tissue was fixed in 4% paraformaldehyde and embedded in paraffin. Four-micrometre sections were deparaffinized and mounted on glass slides at room temperature (pathology slicer and Leica embedder provided by Shanghai Leica Instrument Co., Ltd.) until staining. We mainly chose the larger midsection of the muscle for our sections: coronal sections for muscle fibre type analysis and longitudinal sections for other histological analyses. Six microscopic views at the same magnification were selected for analysis for each tissue section, and the mean or median was calculated. | RNA sequencing Four rats (six from the long-term high-intensity group) were randomly selected from each group at each time point. Total RNA was extracted from fresh gastrocnemius muscle tissues using an RNeasy Mini Kit (Cat#74106, Qiagen). An Agilent Bioanalyzer 4200 (Agilent Technologies) was used to assess RNA quality. Sequencing libraries were prepared using the VAHTS Stranded mRNA-seq Library Prep Kit (NR612, Vazyme). The cDNA library was sequenced on the Illumina sequencing platform (NovaSeq). The RNA isolation, library construction and sequencing were performed at Shanghai Biochip Co. Ltd. Differentially expressed genes (DEGs) were identified according to Q < 0.05 and |log2(fold change)| ≥ 2. Gene Ontology (GO) terms and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathways with Q < 0.05 were considered significantly enriched. Data visualization with volcano plots was performed using Hiplot software (https://hiplot.com.cn) (p < 0.05 and |log2(fold change)| ≥ 0.5). Heat maps were created using the R package clusterProfiler. R software was used to conduct the weighted gene co-expression network analysis (WGCNA), with time and exercise intensity as trait variables. We screened for all DEGs and then calculated the average expression of the differential genes using the corresponding FPKM values at each time point. A time-series gene cluster analysis was performed on the gene expression data using the R package Mfuzz (2.52.0) to identify clusters with consistent expression trends. | Histological staining The prepared muscle sections were stained with Haematoxylin and Eosin, periodic acid-Schiff (PAS), Masson's trichrome (Masson) and Sirius red (SR) following routine procedures.
17 Microscope images were obtained at different magnifications using a scanning imaging system (ECLIPSE E100 and DS-U3, NIKON). The Supplementary Methods file shows the method details.

FIGURE 1 Rat exercise protocol (created with BioRender.com). Rats were divided into control, low, medium, high and long-term high-intensity groups. All rats underwent adaptive pretraining except for the controls, followed by treadmill exercise training per the predetermined procedures. The low, medium and high-intensity groups underwent training for 6, 8 and 12 weeks. The long-term high-intensity group was trained for 18 weeks.

| Optical coherence tomography angiography (OCTA) OCTA (LSM02/03, spectral bandwidth 100 nm, central wavelength 1310 nm, transverse image resolution 15 μm, Beijing HealthOLight Technology Co., Ltd) was used to assess the intravital vessels in the control and long-term high-intensity groups. Live rats were anaesthetised and shaved over the gastrocnemius muscle region of the hind limbs. Vascular proliferation and distribution were assessed directly using the OCT system. | Scratch wound healing assay Human umbilical vein endothelial cells (HUVECs) (iCell Bioscience Inc, Shanghai; 5 × 10⁵ cells/mL) were seeded in six-well plates (10% serum medium for HUVEC culture and expansion, HUVEC-90011 with growth factor supplementation, OriCell®) and cultured in 37°C incubators at 5% CO₂ for 24 h. The six-well plate was removed from the incubator after the cells grew to the logarithmic phase and the adhesion rate reached 80%-90%. The spent medium was aspirated and a 20 μL pipette tip was used to make a transverse scratch on the culture plate, with the tip remaining vertical during the procedure. Each well was treated in the same manner. Subsequently, the cells were rinsed three times with phosphate-buffered saline (PBS). Next, the scratched-off cells were aspirated and the wells were divided into four groups by adding 2% low-serum medium (HUVEC-90011 without growth factor supplementation, OriCell®) with 1× PBS (Wuhan Servicebio Technology Co., Ltd.), 10 pM recombinant human frizzled-related protein 2 (rhsFRP2) (CSB-MP021139HU, Cusabio Biotech), 18 10 nM recombinant human Yes-associated protein 1 (rhYAP1) (CSB-YP026244HU, Cusabio Biotech) or 10 pM rhsFRP2 + 10 nM Peptide 17 (YAP-TEAD Inhibitor 1, S8164, Selleck). 19 The plates were returned to the incubator for 12 h before imaging (100×, with the same area of each well imaged at both time points). MyoD + Desmin, iNOS + CD68 and CD163 + CD206 were assessed using double IF staining. The Supplementary Methods file shows the method details. | Terminal deoxynucleotidyl transferase dUTP nick end labeling (TUNEL) assay The TUNEL assay was performed on sections using conventional methods to quantify the apoptotic cell proportion. 20 The Supplementary Methods file shows the method details. | Statistical analysis All data were analysed using IBM SPSS Statistics for Windows, version 20 (IBM Corp.). Statistical significance was set at p < 0.05. Data from PAS, Masson, SR, IF, IHC and TUNEL were evaluated using a simple-effects analysis with a factorial design (Table S1 shows a significant interaction between the group and time variables). Scratch closure ratio values were normally distributed and showed homoscedasticity, and were thus assessed by a one-way ANOVA (Table S2). All specific relevant statistical results are presented in Tables S3-S24. Spearman's rank correlation coefficient was adopted for the analysis in the correlation heat map.
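As a simple illustration of the DEG selection described in the RNA sequencing section (Q < 0.05 and |log2 fold change| ≥ 2), the sketch below applies those thresholds to a generic differential-expression results table. This is a hypothetical Python/pandas sketch for clarity; the authors' own analysis was carried out in R, and the file and column names here are assumptions.

```python
import pandas as pd

def select_degs(results, q_max=0.05, min_abs_log2fc=2.0):
    """Filter a differential-expression table (columns assumed: 'gene',
    'log2FoldChange', 'qvalue') down to the significant DEGs."""
    mask = (results["qvalue"] < q_max) & (results["log2FoldChange"].abs() >= min_abs_log2fc)
    degs = results.loc[mask].copy()
    degs["direction"] = degs["log2FoldChange"].apply(lambda x: "up" if x > 0 else "down")
    return degs

# Hypothetical usage with one exported comparison, e.g. low-intensity vs control at 6 weeks:
# results = pd.read_csv("low_vs_control_week6.csv")
# print(select_degs(results)["direction"].value_counts())
```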
| DEGs in gastrocnemius muscles under different exercise intensities The mRNA heat map demonstrated a considerable temporal fluctuation in the correlation between DEGs and exercise intensity (Figure 2A). A Venn diagram revealed that the number of DEGs showed a significant temporal change in the low, medium and long-term high-intensity groups compared to the control group (Q < 0.05, |log2(fold change)| ≥ 2). The expression of DEGs rose from 6 to 8 weeks and decreased at 12 or 18 weeks (Figure 2B). Partial least squares discriminant analysis showed significant differences between the control group and the exercise model at 6, 8, 12 and 18 weeks (Figure 2C); however, no significant differences were shown among 6, 8, 12 and 18 weeks, which may be related to the fact that the exercise intensity variable was not considered. Significantly upregulated DEGs increased in the low, medium and high-intensity groups at 8 weeks. Significantly downregulated DEGs appeared mainly at 12 weeks. However, upregulated DEGs dramatically increased at 18 weeks. The low-intensity group showed an extensive change range (1753 upregulated DEGs at 8 weeks and 1155 downregulated DEGs at 12 weeks) (Figure 2D). The trend was also found in DEGs associated with fibrosis, inflammation, myogenic response, metabolism (cholesterol, glucose and proline) and vascular remodelling (Figure 2E). It is clear that the DEGs are characterized by significant time-series variability, and we therefore carried out a further time-series analysis. | DEG timing analysis DEGs were subjected to a time-series cluster analysis to identify gene clusters with broadly consistent expression trends. The DEGs were divided into 40 clusters (Figure S1). As seen in Figure 2, many genes associated with muscle remodelling showed a downward temporal gradient, so we selected five representative clusters for analysis (Figure 3). In Cluster 12, DEGs were significantly upregulated in the Low and Medium groups at 6 weeks compared to the control group. Although DEGs could be slightly upregulated with increasing exercise intensity at 8 weeks, the overall expression of DEGs was still lower than that of the control group at 8, 12 and 18 weeks. GO/KEGG analysis revealed that these DEGs were mainly associated with antigen presentation. Cluster 31 DEGs also showed a peak of upregulation at 6 weeks, followed by a temporal decrease in expression. However, the overall expression level was higher than that of the control group, and these DEGs were mainly associated with immune regulation. The expression trend of DEGs in Cluster 11 decreased stepwise over time, while the expression levels increased with increasing exercise intensity among the 8-, 12- and 18-week groups, with the expression levels of DEGs at 12 weeks being significantly lower than those in the control group. Therefore, angiogenesis is related not only to the duration of exercise but also to its intensity, and the two do not appear to have the same effect on the transcriptome levels of angiogenesis. DEGs were transiently upregulated in the Low group at 6 and 8 weeks in Cluster 7. In contrast, the expression levels of DEGs at other time points and in other groups were similar to controls, and these DEGs were mainly enriched in lipid metabolism and VEGF signalling pathways. The DEGs of Cluster 28 were mainly closely associated with the inflammatory response. They were significantly upregulated at 6 and 8 weeks, while the expression levels were similar to the control group at 12 and 18 weeks.
| Gene modules closely connected with time and exercise intensity Twenty-one gene modules were identified by the WGCNA (Figure S2A). The MEgrey module was strongly positively correlated with time (0.44) and exercise intensity (0.65), whereas the MElightgreen module showed a negative correlation with time (−0.52) and exercise intensity (−0.41). MEpink, MEbrown, MEcyan and MEmagenta were additional modules independently correlated with time (Q < 0.05, negative correlation). The modules independently associated with exercise intensity (Q < 0.05, positive correlation) included MEblack, MEgrey60, MEsalmon and MEtan. Modules with significant differences (Q < 0.05) were subsequently examined using GO/KEGG analysis. The genes of the MEgrey module were enriched in biological behaviours such as muscle regulation. In contrast, the MElightgreen module genes were primarily enriched in biological processes or molecular functions such as immune regulation (Figure S2B). In addition, the genes in the MEpink, MEbrown, MEcyan and MEmagenta modules were mainly enriched in biological processes or signalling pathways such as blood vessel regulation (Figure S2C), suggesting that some of the DEGs that regulate blood vessels showed gradually decreasing expression levels over time. The genes of the MEblack, MEgrey60, MEsalmon and MEtan modules were mainly enriched in biological processes or signalling pathways such as muscle remodelling and cell metabolism (Figure S2D), indicating that improvements in these areas were closely linked to increased exercise intensity. | Characteristics of histological chronotropic changes in skeletal muscle remodelling under different exercise loads Haematoxylin and Eosin staining showed partial rupture injury and a slightly disordered arrangement of the skeletal muscle fibres of the exercise model rats, compared with the controls. The most significant injury was detected at 12 and 18 weeks, while minor signs of muscle fibre injury were seen at 6 weeks (Figure 4A). Muscle glycogen content was considerably lower at the early exercise stage (≤6 weeks), remained low at 8 weeks in the medium and high-intensity groups but rebounded in the low-intensity group, and gradually accumulated in each group at 12 and 18 weeks (Figure 4B,C). Masson's staining clearly distinguishes between collagen fibres and muscle fibres, with the muscle fibres appearing red and the collagen fibres blue. Masson staining revealed increased collagen fibrils in the low and medium-intensity groups at 6 weeks, while there was no appreciable difference in the other groups. The Col-I content was significantly higher in the long-term high-intensity group than in the control group after 18 weeks. However, it appeared unaffected in the high-intensity group in the early stages of exercise. Skeletal muscle fibre types also underwent significant changes.
The fast-twitch fibre (MYH1+) proportion was significantly decreased in the three exercise models at 6, 8 and 12 weeks but increased at 18 weeks in the long-term high-intensity group. The proportion of slow-twitch fibres (MYH7+) decreased transiently at 6 weeks, whereas in the exercise groups (Low/Medium/High/Long-High) it was significantly higher than that of the control group at 12 and 18 weeks (Figure 5A-D). Additionally, we observed a considerable rise in the proportion of proliferating cells in all groups, followed by a declining trend. However, the proportion of apoptotic cells increased with time and with increased exercise intensity (Figure 5E-H). At the same time, we observed that myoblasts (Desmin+ MyoD+ cells), which were mainly located between the epimysium and perimysium, increased dramatically over time and with greater exercise intensity (Figure 6). Therefore, continuous exercise caused significant remodelling of rat skeletal muscle tissue. | Exercise-induced inflammation and macrophage polarization changed over time Inflammation was one of the essential characteristics of exercise injury, and TNF was the predominant cytokine. IL-1β, IL-6 and TNFα IHC showed significantly higher levels of inflammation in the Low, Medium and High groups, with TNFα being the main inflammatory infiltrating factor in the Long-High group at 18 weeks (Figure 7). Figure 8 shows that CD68+ monocytes increased gradually over time and with increasing exercise intensity. M1 macrophages (CD68+ iNOS+ cells) transiently increased in the medium and high-intensity groups at 6 weeks. The proportion of M1 macrophages in the exercise model did not significantly differ from that in the control group at 8, 12 and 18 weeks. However, M2 macrophages (CD163+ CD206+ cells) transiently increased in the low-intensity group at 12 weeks and in the medium-intensity group at 6 weeks. In comparison, the ratio of M2 macrophages increased significantly at 18 weeks, after dropping at 8 and 12 weeks, in the high and long-term high-intensity groups. It was evident that the timing of macrophage polarization during skeletal muscle injury and repair was critical, and that these temporal characteristics were closely related to exercise intensity.
Therefore, we are aware that there are spatial differences in the alteration of vascular density following exercise and that the shift in the spatial distribution of blood vessels may be related to muscle remodelling. However, the exact role and mechanisms still need to be clarified. | The FHL2/SFRP2 axis is closely related to angiogenesis regulation in skeletal muscle The volcano plot and correlation heat map show that TGF-β1, FHL2, SFRP2 and YAP1 gene expressions were significantly temporally different and correlated with common angiogenic genes (Figure S3). IHC data showed that the overall expression of VEGFA, sFRP2 and YAP1 increased at the early exercise stages. However, VEGFA and YAP1 decreased in the high and long-term high-intensity groups at 12 and 18 weeks (Figure 10A-H; all statistics in Figure 10 are shown in Tables S19-S22). The high-intensity group showed persistently elevated sFRP2 and a substantial rise in YAP1 phosphorylation at 12 weeks. Although the overall YAP1 expression decreased in the long-term high-intensity group at 18 weeks, it was elevated around the microvessels (Figure 10C). In the scratch assay, sFRP2 and YAP1 significantly promoted endothelial cell migration, while Peptide 17, a YAP1 inhibitor, hindered this effect (Figure 10I,J). Therefore, we propose that sFRP2 is crucial in regulating skeletal muscle angiogenesis during the late phase of EIMI tissue remodelling. We also found that the TGFβ+ cell proportion was significantly reduced (Figure 11A,B), accompanied by a reduction in FHL2, with the lowest value present at 18 weeks (Figure 11C,D). | DISCUSSION Skeletal muscle plasticity is highly dynamic and includes responses to microinjury, physiological or pathological repair and eventual muscle remodelling. EIMI is a widespread concern in athletes. Improvement of physiological muscle receptivity and optimization of training intensity may avoid severe permanent injury. 21,22 There is tremendous significance and clinical value in exploring the characteristics of temporal changes in skeletal muscle physiopathology in response to various levels of exercise. In this study, we conducted longitudinal analyses of the transcriptomic, histological and vascular bed remodelling responses of rat skeletal muscle to various exercise paradigms and showed that vascular bed remodelling is a critical pathological feature of skeletal muscle remodelling. [24][25] Paola et al. found that, in rats, horizontal treadmill exercise of 15-45 min per day for 2-4 weeks at a speed of 30-45 cm/s can accelerate functional recovery following traumatic muscular injury. 15,26 An intermittent protocol of 18 m/min for 5 min (2 min rest intervals, repeated 18 times) can induce skeletal muscle injury and excitation-contraction coupling failure in rats. 27 From these data, it is evident that exercise intensity is not the only factor that modulates the repair of skeletal muscle injury. RNA-seq revealed significant temporal changes in rat skeletal muscle transcriptome profiles at each exercise intensity (Figures 2 and 3). Upregulated DEGs in skeletal muscle dramatically increased between 6 and 8 weeks, declined at 12 weeks and rose slightly again at 18 weeks. Some gene clusters associated with cell catabolism displayed similar trend traits. However, only our high-intensity exercise group was followed over 18 weeks. We do not know whether low- and moderate-intensity exercise would present the same chronotropic effects.
Although the number of upregulated DEGs increased at 18 weeks compared with 12 weeks, we found that the DEG gene modules and functional characteristics were altered by analysing the high and long-term high-intensity exercise groups. The temporal expression trend showed that some genes involved in vascularization and muscle remodelling mainly presented features of a dual correlation between time and intensity. For example, the DEGs of Cluster 11 related to vascular remodelling showed a decreasing trend over time; their expression rose with increased exercise intensity. EIMI transcriptomics may not accurately reflect the ultimate changes in tissue architecture. The injuries may include primary and secondary sarcolemmal disruption, sarcotubular system swelling or disruption, myofibre contractile component disruption, cytoskeletal damage and extracellular myofibre matrix abnormalities. 28 We saw that muscle fibre tissue was severely damaged at 12 and 18 weeks, and the amount of glycogen content had dramatically decreased. There was a significant change in the proportion of collagen fibres at 12 weeks. It was interesting to see that the glycogen content accumulated at 12 and 18 weeks. High-intensity exercise required a high metabolic level and fast muscle fibre activity. The continuous training was crucial for slow muscle fibres, and their dependent metabolic pathways were relatively altered. 29 We found that myoblasts were markedly stimulated, and the muscle fibre type was significantly altered over time in response to the different exercise intensities (Figures 5 and 6). It was evident that tissue remodelling had occurred in rat skeletal muscle. We further showed that the exercise groups had more fast- and slow-twitch fibres than the control group at 18 weeks. However, the impacts of muscle remodelling on subjective feelings, including muscle endurance, explosive power and soreness, remain unclear. Inflammatory infiltration is an essential pathological feature of tissue injury repair. The characteristics and time-contingent appropriateness of the response determine whether it has a beneficial or detrimental impact on tissue repair. As an important etiological factor of tissue soreness, persistent inflammation is one of the factors involved in remodelling and impairing the tissue microenvironment. 4,17 However, inflammatory signals are also critical in initiating tissue repair. 30 The homeostatic control of pro- and anti-inflammatory mediators is essential for the orderly, timely and controlled regulation of inflammation. Using RNA-seq, we showed that mRNA levels of common inflammatory factors were considerably raised at 6 weeks and then reduced. However, IHC revealed that the total levels of IL-1β, IL-6 and TNFα rose with increased training load, especially for TNFα. Whether this ongoing inflammation exacerbated the injury or promoted self-repair was not resolved by this study. Nevertheless, research has revealed that persistent muscle inflammation is one such detrimental factor, and we observed a marked increase in M2 macrophages in the long-term high-intensity group at 18 weeks. Macrophages have been found to affect tissue repair positively. 34,35 Nevertheless, we demonstrated that significant tissue remodelling had already occurred in skeletal muscle at 18 weeks, and it remains to be determined whether an M2 increase constituted a defence mechanism to improve tissue repair in the body.
There is a significant increase in the delivery and absorption of oxygen and nutrients during exercise to meet the metabolic demands of contracting muscles. Increased muscle capillarization following exercise is a hallmark adaptation to exercise. 1 We found that the expression of angiogenesis-related gene clusters increased markedly during the beginning of training and then trended downward (Figure 3). However, increased exercise intensity promoted the upregulation of these DEGs. Using IHC, we showed that exercise enhanced vascularization in muscle. Even though the vascular density increased at 18 weeks in the long-term high-intensity group, an OCTA scan showed a modest decline in perfusion (Figure 9). The increased blood vessels provide a safeguard for energy supply. Nevertheless, normal vascular perfusion is required. 36 Studies have shown that physiological angiogenesis is necessary for skeletal muscle regeneration and repair. 37 However, it is equally important to avoid abnormal pathological vascular proliferation. 38 High permeability and poor perfusion are traits of pathological blood vessels; they can increase the movement of oxygen free radicals and inflammatory cells, altering the balance of the tissue microenvironment and impairing tissue repair. Although blood vessels were found to be hyperplastic, IHC showed a significant decrease in VEGFA expression in the high and long-term high-intensity groups at 12 and 18 weeks. In studies on melanoma and chronic myocardial ischemia, elevated SFRP2 was found to be a crucial component of pathological vascular proliferation, and VEGF antagonism did not limit angiogenesis once pathological vascular proliferation was advanced. 39,40 sFRP2 and perivascular YAP1 were consistently overexpressed in our high- and long-term high-intensity groups. Using RNA-seq, we showed significant temporal differences in TGF-β1, FHL2, sFRP2 and YAP1 expression. YAP1 is a vital regulator of the Hippo signalling pathway and plays an essential role in tissue growth and development and the control of skeletal muscle fibre size. 41
Therefore, we speculated that skeletal muscle remodelling and angiogenesis were strongly associated with the decline in TGF-β1 and FHL2 and the increase in sFRP2 and YAP1 expression. The underlying regulatory mechanisms remain to be investigated. There were some limitations. First, although this work examined pathological features of rat EIMI such as transcriptomic, histological, immunomodulatory and vascular remodelling changes, it did not explore in depth the exact regulatory mechanisms of the FHL2/sFRP2 and Hippo pathways. We plan to investigate the molecular mechanisms of angiogenesis during EIMI in future work. Second, we examined the effects of low- and moderate-intensity exercise on rat skeletal muscle for up to 12 weeks and only identified the pathological features of high-intensity training at 18 weeks. Our pre-experiments showed that the exercise intensity in the High group (25 m/min, 15° uphill, 1.5 h/d, 6 d/w) was close to the maximum tolerated exercise intensity, and we wanted to explore whether prolonging exercise at this intensity would alter the pathological features of EIMI, so there was only one exercise intensity at 18 weeks. We did not have additional funding to add more exercise conditions. Exploring EIMI pathology at more exercise intensities will be one of our future research directions. Haematoxylin and eosin staining showed significant structural damage to the muscle. However, we do not deny that there is physiological remodelling of some of the muscles. This study did not construct a permanent model of severe dysfunctional muscle damage (complete layer rupture), so presenting the histological features of the more severe EIMI seen in the clinic is challenging. Future research should expand the duration and intensities of experimental exercise regimens. | CONCLUSION This study examines the pathological characteristics of rat skeletal muscle injury and repair in response to various exercise intensities. We identify the temporally controlled processes underlying EIMI using transcriptomics, histology and angiogenesis assessments, which have important implications for exploring intervention time windows or different time point intervention options. We find significant differences in the spatial distribution of angiogenesis during muscle injury-remodelling, which may be helpful for the future achievement of spatially targeted treatments for EIMI.
… of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. Medium intensity was 25 m/min speed, 10° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. High intensity was 25 m/min speed, 15° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour of exercise to provide food and water) and 6 days per week for 6, 8 and 12 weeks. The long-term high-intensity group condition was 25 m/min speed, 15° uphill, 1.5 h of cumulative daily training (30 min of rest every half hour to provide food and water) and 6 days per week for 18 weeks.
F I G U R E 2 RNA-seq analysis of rat skeletal muscle. (A) Heat map of overall gene expression; (B) Venn diagram showing the intersection of differentially expressed genes in the low, medium, high and long-term high-intensity groups at different time points (vs. control group, Q < 0.05, |log2(fold change)| ≥ 2); (C) partial least squares discriminant analysis; (D) statistical plots of the number of upregulated and downregulated differentially expressed genes in the low, medium, high and long-term high-intensity groups (vs. control group) at different time points; (E) heat map of commonly expressed genes associated with fibrosis, inflammation, myogenic response, metabolism and angiogenesis.
F I G U R E 3 Cluster and enrichment analyses of time-series genes. (A) The analysis was carried out for Clusters 7/11/12/28/31; (B) Gene Ontology (GO) analysis; (C) Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis.
F I G U R E 4 Non-specific histological features of skeletal muscle. (A) Haematoxylin and eosin staining (scale bar = 50 μm); (B) periodic acid-Schiff (PAS) staining (scale bar = 100 μm); (C) the average optical density (AOD) was used to quantify the glycogen content in PAS staining (*p < 0.05); (D) Masson's trichrome staining (scale bar = 50 μm); (E) statistical diagram of collagen volume fraction (CVF) (CVF% = collagen area/total muscle area × 100%, *p < 0.05); (F) Sirius red (SR) staining (scale bars = 50 or 100 μm); (G) statistical plot of Type I and III collagen area ratio in SR staining (*p < 0.05).
F I G U R E 6 Myoblast proportion and distribution. (A) Immunofluorescence for myoblasts (Desmin+ MyoD+ cells, scale bar = 20 μm); (B) histogram of myoblast proportion (*p < 0.05).
F I G U R E 7 Inflammatory infiltration of skeletal muscle by immunohistochemistry. (A) IL-1β (scale bar = 50 μm); (B) IL-6 (scale bar = 50 μm); (C) TNFα (scale bar = 50 μm); (D) histogram of IL-1β average optical density (AOD) (*p < 0.05); (E) histogram of IL-6 AOD (*p < 0.05); (F) histogram of TNFα AOD (*p < 0.05).
… collagen fibre contents between the above three groups and the control group at 8 and 12 weeks. We also found that the high-intensity group had significant muscle fibre damage and enhanced intermuscular collagen fibril deposition at 12 weeks. In addition, the long-term high-intensity group had significantly more collagen fibrils (Figure 4D,E). SR staining further revealed that muscle collagen fibre type changed with continuous exercise (Figure 4F,G). Col-I content transiently increased in the medium-intensity group at 6 weeks. It kept increasing over time in the low-intensity group. Different exercise durations and patterns are critical for the physiopathological features of skeletal muscle to develop. There is still a lack of systematic, long-term and multi-intensity EIMI studies, and the temporal physiopathological characteristics of repair after skeletal muscle exercise injury are unclear. In our study, low-intensity exercise was able to induce rat skeletal muscle injury or histological changes in microstructure after 6 weeks. We employed an uphill treadmill test, three exercise intensities and four exercise durations that were selected based on previous studies. We further investigated the physiopathological timing characteristics of EIMI under different exercise modes to identify the critical factors affecting injury repair.
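The Figure 2 legend above defines differentially expressed genes by Q < 0.05 and |log2(fold change)| ≥ 2. A minimal sketch of applying that filter to a differential-expression results table is shown below; the column names and example values are placeholders, not output from the authors' pipeline.

```python
# Minimal sketch of the DEG filter described in the Figure 2 legend
# (Q < 0.05 and |log2 fold change| >= 2).  Columns and values are illustrative.
import pandas as pd

results = pd.DataFrame({
    "gene":   ["Sfrp2", "Fhl2", "Yap1", "Actb"],
    "log2fc": [2.8, -2.3, 1.1, 0.1],
    "qvalue": [0.001, 0.02, 0.20, 0.90],
})

deg = results[(results["qvalue"] < 0.05) & (results["log2fc"].abs() >= 2)]
up = deg[deg["log2fc"] > 0]     # upregulated DEGs
down = deg[deg["log2fc"] < 0]   # downregulated DEGs
print(len(up), "up,", len(down), "down")
```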
F I G U R E 9 Skeletal muscle vascular remodelling. (A) Immunohistochemistry (IHC) for CD34 showed the proliferation of intermuscular and intramuscular blood vessels (black arrows indicate intermuscular blood vessels, red arrows indicate intramuscular blood vessels, scale bar = 50 μm); (B) IHC for vWF showed epimysial vascular bed remodelling (blue arrows indicate adventitial vessels, scale bar = 50 μm); (C) optical coherence tomography angiography showed fewer vessels with good perfusion function in the long-term high-intensity group than in the control group; (D-F) line plots of the density values of the intermuscular, epimysial and intramuscular blood vessels.
F I G U R E 1 0 High YAP1 and sFRP2 expression. (A) Immunohistochemistry (IHC) showed a decreasing trend of VEGFA expression in the low, high and long-term high-intensity groups at 12 and 18 weeks (arrows indicate blood vessels, scale bar = 50 μm); (B) IHC demonstrated that high-intensity exercise (high and long-term high-intensity groups) could induce a sustained increase in SFRP2 expression (arrows indicate blood vessels, scale bar = 50 μm); (C) IHC showed that YAP1 expression was elevated in the low, medium and long-term high-intensity groups. Although YAP1 expression was decreased in the long-term high-intensity group, its expression was elevated around blood vessels (arrows indicate blood vessels, scale bar = 50 μm); (D) IHC for phospho-YAP1 S127 revealed a significant increase in the degree of YAP1 phosphorylation in the medium-intensity group at 6 weeks and the high-intensity group at 12 weeks, with no significant differences observed in the other groups (arrows indicate blood vessels, scale bar = 50 μm); (E-H) line plots of VEGFA, sFRP2, YAP1 and phospho-YAP1 S127 average optical densities; (I) scratch wound healing assay; (J) statistical plot of scratch closure rate (*p < 0.05).
8,206.8
2023-07-30T00:00:00.000
[ "Biology" ]
Screening patients in general practice for advanced chronic liver disease using an innovative IT solution: The Liver Toolkit Background: Identifying patients with undiagnosed advanced chronic liver disease (ACLD) is a public health challenge. Patients with advanced fibrosis or compensated cirrhosis have much better outcomes than those with decompensated disease and may be eligible for interventions to prevent disease progression. Methods: A cloud-based software solution ("the Liver Toolkit") was developed to access primary care practice software to identify patients at risk of ACLD. Clinical history and laboratory results were extracted to calculate aspartate aminotransferase-to-platelet ratio index and fibrosis 4 scores. Patients identified were recalled for assessment, including liver stiffness measurement (LSM) via transient elastography. Those with an existing diagnosis of cirrhosis were excluded. Results: Existing laboratory results of more than 32,000 adults across nine general practices were assessed to identify 703 patients at increased risk of ACLD (2.2% of the cohort). One hundred seventy-nine patients (26%) were successfully recalled, and 23/179 (13%) were identified to have ACLD (LSM ≥10.0 kPa); 10% were found to be at indeterminate risk (LSM 8.0–9.9 kPa) and 77% at low risk of fibrosis (LSM <8.0 kPa). In most cases, the diagnosis of liver disease was new, with the most common etiology being metabolic dysfunction–associated steatotic liver disease (n=20, 83%). Aspartate aminotransferase-to-platelet ratio index ≥1.0 and fibrosis 4 ≥3.25 had a positive predictive value for detecting ACLD of 19% and 24%, respectively. Patients who did not attend recall had markers of more severe disease, with a higher median aspartate aminotransferase-to-platelet ratio index score (0.57 vs. 0.46, p=0.041). Conclusions: This novel information technology system successfully screened a large primary care cohort using existing laboratory results to identify patients at increased risk of ACLD. More than 1 in 5 patients recalled were found to have liver disease requiring specialist follow-up. INTRODUCTION More than 1.5 billion people around the world are living with chronic liver disease, with the majority being undiagnosed.[1] Globally, cirrhosis leads to more than 1.3 million deaths annually and is responsible for 3.5% of all-cause mortality.[1,2] Detecting patients with advanced chronic liver disease (ACLD) is a major current public health challenge. Patients with compensated ACLD have minimal symptoms and a good overall prognosis,[3] and early detection of patients can allow for disease-specific treatment that can lead to fibrosis regression and resolution of portal hypertension.[4,5] Unfortunately, the majority of patients are only diagnosed when they exhibit features of end-stage decompensated disease, which has a high rate of mortality without liver transplantation.[6] Thus, there is an urgent need to change the current paradigm for diagnosing chronic liver disease. This has been recognized by the European Union in their new endeavor called the LiverScreen project (not yet undertaken), which has similarities to the study reported in our paper.
[7] Over the last 2 decades, several serum noninvasive tests (NITs) have been developed to identify patients with liver fibrosis.[10] Over time, the best thresholds for these tests have changed, and the literature in this area is still in flux. Less work has been done validating these tests in primary care settings or in developing novel, innovative models of care which integrate them into routine clinical practice to allow generalized screening of at-risk populations. There has been an exponential rise in the testing of liver biochemistry in primary care, with most of these investigations being ordered for other clinical reasons.[11] As many as 20% of these tests may be abnormal, presenting a significant burden for primary care physicians, and up to 50% of results receive no follow-up.[11] However, it is known that even minor abnormalities in liver function biochemistry can be associated with a higher risk of long-term mortality.[12] Identifying patients in primary care who will benefit the most from specialist referral for further assessment remains a challenge, and there is debate about the best approach.[13] This study evaluates the use of a novel information technology cloud-based software solution, which accesses medical records of general practices and uses existing clinical information and laboratory results to identify patients who are at increased risk of ACLD. The aim of this study is to assess if this approach is effective in detecting patients with undiagnosed ACLD and linking them to specialist care. Liver Toolkit development The Liver Toolkit is a cloud-based software platform that was developed by a medical software company (Outcome Health) as a pilot instrument specifically for this project. This tool was designed in collaboration with staff from the Central and Eastern Sydney Primary Health Network (CESPHN), Sydney and South Eastern Sydney Local Health Districts, and community general practices (sometimes referred to as primary care practices). The tool was designed to analyze general practice electronic medical records to identify patients at increased risk of ACLD. The tool accessed both clinical information and investigation results stored with General Practitioner (GP) practice software. Clinical parameters assessed included previous diagnoses of viral hepatitis infection, alcohol use disorder or excess alcohol consumption, and hepatic steatosis (previously known as NAFLD). Laboratory test results from the last 2 years were accessed to calculate the aspartate aminotransferase-to-platelet ratio index (APRI),[14] fibrosis 4 (FIB-4) score,[15] and NAFLD fibrosis score (NFS).[16] For patients with multiple blood tests within the last 2 years, the most recent results were used to calculate these noninvasive metrics. The Liver Toolkit was designed to remotely review the medical records from consenting practices. All data transferred to the Liver Toolkit platform were in a deidentified format. A list of at-risk patients identified by the Liver Toolkit was only downloadable inside each general practice by project staff (Figures 1, 2). The tool underwent comprehensive beta-testing with simulated patient data prior to implementation.
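The Methods above state that APRI, FIB-4 and NFS were calculated from routine laboratory results. The sketch below re-expresses the widely published formulas for these three indices; it is not code from the Liver Toolkit itself, the NFS coefficients are the commonly cited published ones, and the variable names and worked example are illustrative.

```python
# Sketch of the standard published formulas for the three NITs used by the Liver Toolkit.
# These are the widely used published definitions, not code extracted from the toolkit.
from math import sqrt

def apri(ast_iu_l: float, ast_uln: float, platelets_10e9_l: float) -> float:
    """APRI = (AST / upper limit of normal AST) x 100 / platelet count (10^9/L)."""
    return (ast_iu_l / ast_uln) * 100.0 / platelets_10e9_l

def fib4(age_years: float, ast_iu_l: float, alt_iu_l: float, platelets_10e9_l: float) -> float:
    """FIB-4 = (age x AST) / (platelets x sqrt(ALT))."""
    return (age_years * ast_iu_l) / (platelets_10e9_l * sqrt(alt_iu_l))

def nfs(age_years, bmi, ifg_or_diabetes, ast_iu_l, alt_iu_l, platelets_10e9_l, albumin_g_dl):
    """NAFLD fibrosis score, using the commonly published coefficients."""
    return (-1.675 + 0.037 * age_years + 0.094 * bmi
            + 1.13 * (1 if ifg_or_diabetes else 0)
            + 0.99 * (ast_iu_l / alt_iu_l)
            - 0.013 * platelets_10e9_l
            - 0.66 * albumin_g_dl)

# Example: a 60-year-old with AST 48 IU/L (ULN 40 IU/L), ALT 52 IU/L, platelets 180 x10^9/L.
print(round(apri(48, 40, 180), 2), round(fib4(60, 48, 52, 180), 2))  # ~0.67 and ~2.22
```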
Practice enrollment Suitable general practices were identified by the CESPHN. To be eligible, practices required appropriate practice software (Medical Director or Best Practice), which allowed integration of the digital Liver Toolkit. Practices were contacted by the CESPHN and offered to participate, and those who agreed signed written consent on behalf of their medical practice. The CESPHN attempted to engage practices in a range of different geographic areas across their catchment area. Inclusion criteria The Liver Toolkit assessed adult patients between 18 and 79 years of age. This group was chosen pragmatically to include patients who would benefit from the detection of undiagnosed ACLD while minimizing potential harm from recalling older patients who would be less likely to benefit. Exclusion criteria Patients were excluded from the project if they: (1) were outside the above age range, (2) had not been seen by a GP in the last 2 years, or (3) had a known diagnosis of cirrhosis or HCC. Recall criteria Patients were flagged for recall by the Liver Toolkit if they met either of the following criteria: (1) APRI score ≥ 1 and/or FIB-4 score ≥ 3.25, (2) an existing diagnosis of metabolic dysfunction-associated steatotic liver disease (MASLD) and an elevated NFS (> −1.455 for those 65 or younger and > 0.12 for those older than 65 y of age). These thresholds were developed based on previous studies demonstrating good sensitivity for the detection of compensated ACLD.[17] The recall criteria are outlined further in Table 1. Recall process A list of patients identified based on the above criteria was given to each general practice for recall (see Supplemental Figure S1 for an example list, Supplemental Digital Content 1, http://links.lww.com/HC9/A949). Treating clinicians reviewed this list and could exclude patients if they were deemed inappropriate for recall (significant medical comorbidities making further assessment futile or a clear alternative explanation of their abnormal result). This was left up to the discretion of the primary care specialist. Practice staff were asked to contact patients directly to inform them of the project and invite them to attend a recall visit. Practices were provided with proforma materials for recall letters, emails, and text messages developed in consultation with a consumer representative from Hepatitis New South Wales (a state-based community organization). Practices were requested to contact patients at least 3 times using a range of methods (telephone calls, letters, or text messages), and if recall was not possible, to record a reason why. Direct referral Throughout the project, GPs were also allowed to directly refer other patients not on the recall list for assessment by the Liver Toolkit team if they had clinical concerns for undiagnosed liver disease. Clinical assessment Patients successfully recalled underwent clinical assessment with the Liver Toolkit team (comprising a hepatology nurse and a gastroenterology doctor). The preferred model for this was as an outreach service within the general practice to reduce complexity for patients and maintain the relationship with their general practice. Patients were also able to visit the hospital outpatient clinic as an alternative model. The recall visit included assessment of body mass index, alcohol intake and risk factors for liver disease and a physical examination, including transient elastography (TE) using FibroScan (Echosens, Paris, France). Alcohol intake was quantified in standard drinks per week (with 1 standard drink defined as 10 g of ethanol).
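A minimal sketch of the recall rule described above (APRI ≥ 1 and/or FIB-4 ≥ 3.25, or an existing MASLD diagnosis with an NFS above the age-dependent cut-off), combined with the stated age and exclusion criteria, is given below. It is an illustrative re-expression of the published criteria, not the Liver Toolkit source code; the record fields and helper names are assumptions.

```python
# Sketch of the recall rule described in the Methods.  Field names are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PatientRecord:
    age: int
    apri: Optional[float]
    fib4: Optional[float]
    nfs: Optional[float]
    has_masld: bool
    known_cirrhosis_or_hcc: bool
    seen_by_gp_last_2_years: bool

def flag_for_recall(p: PatientRecord) -> bool:
    # Exclusions described in the Methods.
    if not (18 <= p.age <= 79) or p.known_cirrhosis_or_hcc or not p.seen_by_gp_last_2_years:
        return False
    if p.apri is not None and p.apri >= 1.0:
        return True
    if p.fib4 is not None and p.fib4 >= 3.25:
        return True
    if p.has_masld and p.nfs is not None:
        nfs_cutoff = -1.455 if p.age <= 65 else 0.12  # age-dependent NFS cut-off
        return p.nfs > nfs_cutoff
    return False

print(flag_for_recall(PatientRecord(age=62, apri=1.3, fib4=2.1, nfs=None,
                                    has_masld=False, known_cirrhosis_or_hcc=False,
                                    seen_by_gp_last_2_years=True)))  # True (APRI >= 1)
```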
[18] TE was conducted using the standard approach to obtain a liver stiffness measurement (LSM) using the appropriate probe type for each patient (either M or XL). TE assessments required a minimum of 10 measurements; assessments with a success rate of ≥ 60% and an IQR of ≤ 30% (if LSM ≥ 7 kPa) were deemed valid.[19,20] Assessment of liver disease severity at recall All patients recalled underwent a brief intervention designed to increase awareness of liver health and modifiable risk factors for liver disease. Following TE assessment, patients with a normal LSM or a result suggestive of low risk of significant fibrosis (< 8.0 kPa) were returned to their GP for ongoing care. Patients with an indeterminate level of fibrosis (8.0-9.9 kPa) or high risk of ACLD (defined as an LSM ≥ 10.0 kPa)[21] were recommended to undergo a comprehensive liver screen and further follow-up in secondary care. These LSM thresholds were developed based on the results of a previous meta-analysis.[10] Cirrhosis was defined as an LSM ≥ 13.0 kPa with radiological and/or biochemical evidence of cirrhosis as assessed by hepatology. HCC surveillance was commenced in patients diagnosed with cirrhosis. Outcomes The primary outcomes for this study were the proportion of patients successfully recalled and assessed via this novel Liver Toolkit model and the proportion of patients identified with ACLD (LSM ≥ 10.0 kPa). Secondary outcomes included the number of TE visits conducted, the number of referrals to secondary care, and the overall utility of this approach in detecting undiagnosed liver disease. Staff time to perform recall assessments was also recorded throughout the project. Comparison to patients who did not attend recall Patients who did not attend recall were compared to those who did attend to assess for differences in demographics or laboratory results. For this purpose, historical data stored with GP practice software on nonrecalled patients was used; in some cases, this was incomplete. MASLD nomenclature This study was designed and conducted prior to a recent change in nomenclature recommended by major hepatology societies.[22] As such, the study protocol initially referred to NAFLD. In an attempt to prevent confusion and adopt the new terminology, MASLD has been used where possible to refer to patients with hepatic steatosis and cardiometabolic risk factors. Hepatic steatosis has been used to refer to patients with a previously coded diagnosis of NAFLD, where it is not clear if they satisfy the new MASLD criteria. Practice recruitment Fifteen practices were recruited for the Liver Toolkit project, and the study ran for a 27-month period (October 2020 to December 2022). Two practices were excluded shortly after consent due to software incompatibility between the Liver Toolkit and the practice software (Zedmed). Four other practices withdrew from the project during the COVID-19 pandemic. The remaining 9 practices had a total of 114,640 active adult patients at baseline, with a median number of patients of 10,878 (IQR: 7400-14,664) (Table 2). Liver biochemistry and full blood count results were available to calculate APRI and FIB-4 scores for 32,270 and 31,199 patients, respectively (28.1% and 27.2% of the active cohort) (Figure 2). A total of 2632 patients with an existing diagnosis of MASLD were assessed using the additional NFS criterion.
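The TE validity rules and LSM risk bands described above lend themselves to a simple decision function. The sketch below encodes the stated thresholds (≥10 valid measurements, ≥60% success rate, IQR ≤30% when LSM ≥7 kPa; <8.0 kPa low risk, 8.0–9.9 kPa indeterminate, ≥10.0 kPa high risk of ACLD); it is an illustrative re-expression, not study code, and the function names are assumptions.

```python
# Sketch of the TE validity criteria and LSM risk bands stated in the Methods.
def te_valid(n_measurements: int, success_rate: float, iqr_over_median: float, lsm_kpa: float) -> bool:
    if n_measurements < 10 or success_rate < 0.60:
        return False
    # The IQR criterion was applied when LSM >= 7 kPa.
    return iqr_over_median <= 0.30 if lsm_kpa >= 7.0 else True

def lsm_band(lsm_kpa: float) -> str:
    if lsm_kpa < 8.0:
        return "low risk of fibrosis (return to GP)"
    if lsm_kpa < 10.0:
        return "indeterminate (8.0-9.9 kPa): liver screen and secondary-care follow-up"
    return "high risk of ACLD (>= 10.0 kPa): secondary-care follow-up"

print(te_valid(12, 0.85, 0.22, 9.2))                      # True
print(lsm_band(5.9), "|", lsm_band(9.2), "|", lsm_band(14.1))
```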
Patients flagged by the Liver Toolkit There were 703 patients identified for recall (Figure 2). Reasons for recall were APRI ≥ 1 (n = 214), FIB-4 ≥ 3.25 (n = 254), and MASLD with an elevated NFS (n = 304); 69 patients had both an elevated APRI and FIB-4 score. The median number of patients identified for recall per practice was 67 (IQR: 60-95). Twenty-eight practice recall visits were conducted by the Liver Toolkit team between November 2020 and December 2022. The project was initially scheduled to finish in June 2022 but was extended due to interruptions related to the COVID-19 pandemic. A total of 225.5 hours were spent in general practices by project staff as part of the recall (consisting of 116, 62, and 47.5 hours by medical staff, TE technicians, and nursing staff, respectively). The majority of recall visits were conducted in general practice (n = 165, 92%), with only 14 patients (8%) preferring to visit the hospital for assessment. A further 28 patients were directly referred by their GP to the project team (not based on the Liver Toolkit) and were assessed through the project. All patients consented to participate in the project analysis. No complaints or negative feedback were received from participants in the project. Patient characteristics Patients successfully recalled were mostly male (59%) with a mean age of 62.0 years (SD ± 10.5) and a median body mass index of 28.1 kg/m2 (IQR: 25.0-32.6) (Table 3). A diagnosis of MASLD, dyslipidemia, and diabetes/impaired glucose tolerance was present in 74% (n = 132), 51% (n = 92), and 39% (n = 69), respectively. Patients referred directly to the project team by their GP for assessment had similar overall characteristics, with no differences in age, gender, weight, alcohol use, or prevalence of dyslipidemia or diabetes compared to patients identified by the Liver Toolkit. Patients directly referred by their GP were less likely to have a diagnosis of MASLD (p < 0.01) and more likely to have a diagnosis of chronic hepatitis B (p < 0.01) compared with Liver Toolkit-identified patients. Elastography results All patients recalled from the Liver Toolkit underwent LSM using TE with no unsuccessful or invalid results. The median LSM was 5.9 kPa (IQR: 4.4-7.9) (Table 4). ACLD (LSM ≥ 10.0 kPa) was identified in 23 patients recalled (12.8%), 15 of whom were determined to have cirrhosis. A further 18 (10.1%) had an indeterminate result (LSM 8.0-9.9 kPa), requiring further evaluation. The majority of patients (n = 138, 77.1%) had an LSM result indicative of low risk of fibrosis (including 94 [52.5%] with a normal LSM). Among the cohort referred directly by their GP for assessment, only 1 had ACLD, with 27/28 (96.4%) having an LSM suggestive of low fibrosis risk. Patients identified by the Liver Toolkit were significantly more likely to have ACLD or an indeterminate result requiring ongoing follow-up compared to those directly referred by their GP (22.9% vs. 3.6%, p = 0.021). There were no significant differences in the rates of ACLD detection between patients recalled based on an elevated APRI, FIB-4, or MASLD with an elevated NFS.
Characteristics of patients found to have significant liver disease Among patients found to have ACLD (n = 23), MASLD was the most common etiology (n = 20, 83%), followed by alcohol (n = 5, 20%) (Table 5). In most cases, the patient was unaware of the diagnosis of liver disease (83%) and had never seen a gastroenterologist (79%). Concerningly, 5 patients with cirrhosis detected through the project had evidence of portal hypertension, and 2 had decompensated disease (both Child-Pugh B). DISCUSSION This novel program, the first of its kind in Australia, used an information technology solution (the Liver Toolkit) to screen existing data stored within general practice medical records to identify patients with undiagnosed ACLD. Blood tests from over 32,000 adult patients were analyzed to identify 703 individuals at high risk of ACLD (~2.2% of the cohort). However, only 25% of these patients were successfully recalled for further assessment, and only 1 in 5 patients assessed were confirmed to have either ACLD or an indeterminate result requiring further evaluation. This study highlights the complexities and limitations of population screening for ACLD. The Liver Toolkit model used pre-existing, readily available data and leveraged electronic practice software to identify patients at risk of liver disease. The tool did not interfere with frontline health care by GPs, unlike other models that require active GP recruitment of patients at the time of clinical consultation (resulting in poor uptake).[11] Patients and GPs anecdotally reported satisfaction with this approach as it maintained the patient's established relationship with their primary care specialist. The Liver Toolkit assessed three blood-based, nonpatented NITs for the detection of ACLD. NIT selection was based on available literature at the time the project was designed. It should be acknowledged there are other NITs that were not assessed as part of this study, such as the BARD score (BMI, AST/ALT ratio and diabetes),[23] the aspartate aminotransferase/alanine transferase ratio, and the newer Agile 3+ score.[24] The Liver Toolkit compares favorably to other similar studies. In a large German cohort study, ~11,000 patients were screened, and those with an APRI ≥ 0.5 (488 patients or 4.12% of this cohort) underwent further assessment, with an overall ACLD detection rate of ~17%.[25] Our model, with a more stringent APRI cutoff and incorporation of other indices (FIB-4 and NFS), was able to screen a larger volume of patients with a similar overall utility. A British study examined a liver referral pathway using the aspartate aminotransferase:alanine transferase ratio and clinical risk factors for liver disease (harmful alcohol or metabolic disease) and found a rate of abnormal LSM (≥ 8 kPa) requiring further assessment (22.9%) similar to our study.[26] A recent Chinese and Malaysian study used a similar software approach with "pop-up messaging" for patients with elevated FIB-4 or APRI scores and found this was superior to the standard of care for linking patients to hepatology services for fibrosis assessment.
[27] Primary care approaches to detect significant liver disease in the community are now seen as crucial in order to decrease the burden of liver disease and introduce personalized strategies to prevent disease progression. A recent example of this is the publication in Nature Medicine of the planned LiverScreen project funded by the European Union's Horizon Program.[7] The overall aim of the program is to provide cost-effective screening for the early detection of liver fibrosis using NITs. The first phase plans to use blood biomarkers as well as TE in patients with known liver disease already under care. The planned second phase is much more like our Liver Toolkit project, where 10,000 persons without known liver disease, selected from the community across 4 countries, will undergo noninvasive blood tests followed by confirmation of the presence of liver fibrosis in specialized hepatology services. Outcomes from this project are likely some time away, and it will be of interest to compare its results to our Liver Toolkit project and other similar studies. The overall precision of the Liver Toolkit, however, was low, with > 50% of patients assessed having an entirely normal result on TE. APRI ≥ 1.0 and FIB-4 ≥ 3.25 both had a PPV for ACLD of < 25%. This is much lower than has been previously reported in secondary care cohorts but similar to the rate found in a large population study.[28] This highlights the difficulties with using these tests in the general population. This has been described as the "spectrum effect", where sensitivity and PPV fall as the disease prevalence falls.[29] Additionally, when the NITs were repeated, APRI and FIB-4 scores normalized in one-third of patients, suggesting that in real-world usage there may be significant day-to-day variability. This confirms previous work suggesting there may be a role for serial measurement of such indices.[30] Interestingly, the Liver Toolkit process identified more patients with liver fibrosis than direct GP referrals during the same period. This may be partially explained by GPs being more likely to refer patients with viral hepatitis or abnormal liver function tests for management advice as opposed to screening for cirrhosis. However, only 3.6% of patients referred by their GP due to clinical concern had an LSM requiring further follow-up for ACLD, compared to 22.9% of patients identified by the toolkit. This highlights the difficulties in detecting asymptomatic compensated ACLD in the general population and the possible utility of NITs using readily available parameters. This project relied on TE to quantify the extent of liver fibrosis in patients flagged by blood-based indices. Currently, TE is the accepted reference NIT for detecting fibrosis due to its high accuracy and reproducibility;[17,31,32] however, it is not a perfect test, and liver biopsy continues to have a role in some cases. As demonstrated in our study, 18 patients (10% of the recall cohort) had an indeterminate result on TE. It is currently unclear whether such patients should have ongoing TE surveillance or proceed directly to liver biopsy. There is, however, evidence to suggest further evaluation can lead to changes in clinical management.[33] The considerable indeterminate cohort makes more widespread population screening for liver disease potentially problematic.
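The "spectrum effect" discussed above can be made concrete with the standard Bayes relationship between PPV, test characteristics and prevalence: holding sensitivity and specificity fixed, PPV falls as prevalence falls. The sensitivity and specificity values in the sketch below are hypothetical and are not estimates for APRI or FIB-4; only the formula is standard.

```python
# Worked illustration of why PPV is lower in a low-prevalence primary care population.
# Sensitivity/specificity here are hypothetical; only the Bayes formula is standard.
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    return tp / (tp + fp)

for prev in (0.30, 0.05, 0.02):   # e.g. a secondary-care cohort vs. general practice
    print(f"prevalence {prev:.0%}: PPV {ppv(0.80, 0.90, prev):.0%}")
# prevalence 30%: PPV 77%;  prevalence 5%: PPV 30%;  prevalence 2%: PPV 14%
```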
The lack of a control group was a potential limitation of this project. As such, it is unknown what proportion of the 97.8% of adult patients not flagged for recall by the Liver Toolkit may have had significant liver disease that was not recognized (ie, the false negative rate). This project, however, was designed with a pragmatic, real-world approach to attempt to stratify the very large number of patients in primary care with abnormal results to determine who would best benefit from a targeted liver intervention in the setting of finite health care resources. Another limitation of the study was the relatively low response rate to recall (~25% overall). This is only slightly less than the 33% response rate achieved with the Australian National Bowel Cancer Screening Program, highlighting the potential difficulties in engaging patients in preventative screening for asymptomatic disease even when programs are evidence-based and well resourced.[34] The COVID-19 pandemic may have also contributed to the low response rate, with the project being interrupted for 8 months. The rates of successful recall also differed significantly between practices (range 10%-40%), with higher-performing practices often having additional staff to assist with recall. Interestingly, the rates of recall uptake varied significantly between patients with an elevated APRI or FIB-4 and those with MASLD. Finally, patients who did not respond to the recall had higher liver transaminases and APRI scores, suggesting that patients at the highest risk of underdiagnosed liver disease may not have been assessed. An independent qualitative analysis of the Liver Toolkit, including structured interviews of project staff and patients and quantification of resource requirements, would be required to fully inform the feasibility of adapting this model more broadly. There are several potential improvements that could be made to the Liver Toolkit to streamline future use. In this pilot study, the recall list was generated by project staff, and patients were flagged and subsequently recalled for assessment. The project took over 200 hours of direct staff time, equating to 1.25 hours per patient successfully recalled. A more automated system that used real-time "pop-up" notifications could potentially identify patients at increased risk of ACLD when they were seeing their GP for other reasons. This would eliminate the need to recall patients separately, reduce overall time requirements, and potentially improve the number of patients reached. Such an approach has been trialed recently, and it found 1 in 3 patients were successfully linked to fibrosis assessment.[27] The Liver Toolkit evaluated several NITs; from our data, the simpler tests (APRI and FIB-4) had better reach than the more complex NFS (which incorporates body mass index and diabetes status), as these additional factors were not always available in primary care records. This highlights that the higher potential diagnostic accuracy of an NIT must be balanced against test complexity when selecting the optimal NIT for community screening. Although we did not find a significant difference in the PPV for ACLD between APRI and FIB-4, using a single NIT would make the project logistically easier. FIB-4 has also now been recommended as the first-line NIT by several societies.
[17,35] Recalling patients only when an NIT was positive on 2 occasions would also increase the PPV of the tool. Finally, limiting the target population to adults between 40 and 75 years of age would reduce the number of patients needing recall to detect 1 case of ACLD while maximizing yield, as most cases of new ACLD were found in this age range. CONCLUSIONS This pilot of a novel information technology system (the Liver Toolkit) successfully screened a large primary care cohort to identify patients at increased risk of ACLD. More than 1 in 5 patients recalled were found to have significant liver disease needing specialist follow-up. Data were collected and stored using the REDCap electronic data capture tool hosted by the Sydney Local Health District. All analyses were performed using SPSS (version 27.0, Armonk, NY).
T A B L E 1 Noninvasive tests assessed using the Liver Toolkit (components, initial validation cohorts, patients assessed and recall thresholds).
T A B L E 2 Adult patient numbers and Liver Toolkit recall rates by general practice.
T A B L E 3 Characteristics of patients assessed as part of the Liver Toolkit project (Liver Toolkit patients, n = 179, vs. GP direct referrals, n = 28).
T A B L E 4 Transient elastography results by patient type and reason for recall.
T A B L E 5 Characteristics of patients identified with advanced chronic liver disease or indeterminate risk of fibrosis.
T A B L E 6 Comparison of patients successfully recalled with a sample of those not recalled (data available for 320/524 patients not recalled; BMI and alcohol intake assessed at the appointment in the recall group versus the GP record for patients not recalled).
F I G U R E 1 Liver Toolkit model. Abbreviation: GP, General Practitioner.
6,223
2024-06-27T00:00:00.000
[ "Medicine", "Computer Science" ]
Two Design Principles for the Design of Demonstrations to Enhance Structure–Property Reasoning : Structure–property reasoning (SPR) is one of the most important aims of chemistry education but is seldom explicitly taught, and students find structure–property reasoning difficult. This study assessed two design principles for the development of structure–property reasoning in the context of demonstrations: (1) use of a POE task (predict–observe–explain) and (2) use of the domain-specific particle perspective, both to increase student engagement and to scaffold micro-level modeling. The aim of the demonstration series was to teach structure–property reasoning more explicitly to pre-university students (aged 15–16). Demonstrations pertained to the properties of metals, salts and molecular compounds. The SPR instrument was used as a pretest and posttest in order to gain insight into the effects on structure–property reasoning. In addition, one student (Sally) was followed closely to see how her structure–property reasoning evolved throughout the demonstrations. Results show that after the demonstrations students were more aware of the structure models at the micro-level. The students also knew and understood more chemical concepts needed for structure–property reasoning. Sally’s qualitative data additionally showed how she made interesting progress in modeling micro-level chemical structures. As we used conventional demonstrations as a starting point for design, this could well serve as a practical tool for teachers to redesign their existing demonstrations. Introduction In chemistry, structure-property reasoning is considered to be one of the most important overarching constructs [1]. It is a type of chemical reasoning in which chemists explain the macroscopic properties of a compound in terms of the structure level of this type of compound, namely the particles, their organization and interactions. The properties refer to observable properties of compounds such as melting point, hardness and solubility. Structure-property reasoning is important for explaining and predicting properties of compounds. It is also critical for designing new compounds with desired properties. Therefore, this type of reasoning takes a prominent place in various curricula over the world including in the Netherlands [2,3]. Chemistry students generally have problems developing such reasoning resulting from rather particular difficulties. The first difficulty is the requirement to switch between different levels of thought within chemistry [4]. When observable phenomena need to be explained and interpreted at the micro-level, students have to connect the two levels using models of particles and their interactions. However, as students are mostly novices in structure-property reasoning, they tend to stick with the macro-level observations and simply use former experiences to explain the properties instead of using the microlevel models [5]. Reasons for this are being unfamiliar with the micro-level models and experiencing difficulties with their interpretation, e.g., precisely how micro-level particles interact to account for the observed properties at the macro-level. The second difficulty in developing structure-property reasoning is that micro-level particles cannot be seen with the naked eye or even with the best optical microscope. 
Consequently, structure-property reasoning becomes rather abstract and students draw on more general problem-solving skills to solve chemical problems instead of on a deep understanding of structure-property relationships [5,6]. Difficulties in structure-property reasoning may primarily be regarded as a consequence of how chemistry is taught and how chemistry curricula are organized. As most national curricula are organized around chemical topics (chemical bonding, etc.) instead of explicit conceptual relationships or cross-cutting forms of chemical thinking, teachers are not facilitated to explicitly teach structure-property reasoning. As a result, students develop heuristics such as "surface similarity" (compounds with similar appearances are compounds of the same group and thus they have the same properties) [7] to answer questions in this realm. However, their understanding of these structure-property relations remains poor [7][8][9]. Literature suggests that the teaching of structure-property reasoning should be explicit and centered on the "core idea" of structure-property relationships [1,10]. Students should learn to connect the real with the modeled world and to use structure models to explain real chemical phenomena [1]. The use of demonstrations to show chemical phenomena has been suggested as a teaching practice to explicitly teach structure-property reasoning [11][12][13]. In conventional demonstrations, learning starts at a macro-level familiar to most students. Teachers demonstrate real chemical phenomena, and students are expected to observe what happens at the macro-level before the teachers provide a micro-level explanation for the chemical phenomenon at hand (observe-explain demonstration) [14]. What is lacking in most of the typical observe-explain demonstrations is that: (1) teachers do not let their students activate prior knowledge to build on what they already know and (2) most teachers do not ask students to model micro-level explanations themselves [14]. These two imperfections of a conventional demonstration lead to a twofold need: an approach in which students can actively build on what they already know and a means to stimulate and guide micro-level modeling by students in connection with a demonstration. The former may be done by predicting outcomes prior to performing a demonstration. For the latter, an explicit scaffold for students' micro-level modeling may be introduced [6]. In the study presented in this article, we designed and tested a demonstration-based lesson series aimed at improving structure-property reasoning. For the design of the lesson series, we explicitly used conventional demonstrations as a basis and applied two design principles: (1) the introduction of a POE task (predict-observe-explain) to demonstrations [13,[15][16][17] to stimulate students' engagement and their modeling process and (2) scaffolding of the POE task with a domain-specific particle perspective [18,19] in order to explicitly guide the modeling at the micro-level for students in the "explain" phase of the POE task. This particle perspective consisted of a question agenda with questions on which type of compound, which properties and which type of particles. Next, the demonstration-based lesson series was tested for the level of students' structure-property reasoning as reproduction, understanding, application and evaluation. 
We studied student engagement as they developed models for the structure level for three dominant types of chemical compounds: metals, salts and molecular compounds. The learning objective for the students in the upper pre-university tier of secondary education was to acquire these structure models. We performed the lesson series and investigated how students reproduced, understood, applied and evaluated structure-property reasoning. Theoretical Framework To improve students' structure-property reasoning, it should be taught explicitly and be in line with students' macroscopic orientation [5]. This can be achieved by using real chemical phenomena in which properties of substances are investigated [1]. Students can be effectively engaged with such real-life chemical phenomena by using demonstration experiments [20]. Due to the way many teachers incorporate the demonstration into their teaching practice, learning efficiency for the students is low [21]. Although the students are engaged by questions about what they have just observed, they are given few opportunities to discover for themselves how to explain the chemistry phenomena using the structure-level models, let alone to discover and create these models themselves. As a result, the demonstration is little more than a beautiful show and learning efficiency remains low [21]. The question arises: to what extent are teachers able to offer students opportunities to think for themselves during a demonstration? How can students be actively and explicitly engaged in structure-property reasoning? Teaching practices that use conventional demonstrations are often characterized by teachers presenting theory before the demonstration, thereby reducing students' explicit engagement with structure-property reasoning. Consequently, students passively observe the demonstration and opportunities for active learning are missed. To overcome the problems described in the paragraph above, an active role for the student is necessary [22]. Our first design principle, the addition of a POE task (predict-observe-explain) to conventional demonstrations [13,[15][16][17], was intended to achieve this. In a POE task, students are challenged to learn actively by predicting the outcome of a demonstration and justifying their predictions. Next, they describe what they observe during the demonstration, and, afterward, they explain their observations and reconcile any discrepancy between their predictions and observations. This technique has been frequently investigated over the years [15,16,[22][23][24][25][26][27][28][29][30][31][32]. Therefore, it is a known approach in science education research, but its implementation in classrooms is still lacking. Besides fostering engagement, the POE task can be used to reduce misconceptions [29,30], and it can help students to improve their learning outcomes [31]. The POE task is also suitable for enabling students to model the structure level of a compound in order to explain a certain property [33]. Finally, it encourages students to engage in explicit structure-property reasoning by connecting their macroscopic observations to their models of the structure level [32]. We believe that these characteristics of the POE task, such as students' active engagement and the opportunities for students to model themselves, give the POE task potential to reinforce demonstrations and improve students' structure-property reasoning. Adding a POE task changes the order of teaching activities for demonstrations (Figure 1).
A conventional demonstration has three steps: introduction or orientation, show and observe, and explain. In all three steps, the teacher takes the lead. Even in the observation step, teachers direct the students' attention to important observations (and distract them from undesired observations). The target teaching practice of a POE-task-based demonstration consists of an extra step, "predict". In addition, all the steps (except for "show") have become student-centered to get students more engaged with the demonstration. In this way, students are challenged to reason using micro-level structures to explain the demonstrated properties on the macro-level. For this, they can create and use micro-level structure models. When students start creating the micro-models in the "explain" step of the POE task, they need to know the conditions for a micro-macro explanation. The students need insight into the underlying structure of the explanation and the corresponding questions that can be asked to systematically address the problem. A scaffold can be of assistance [34]. Hence, we introduced our second design principle: scaffolding of the POE task with a domain-specific particle perspective [18,19]. As illustrated in Figure 2, this particle perspective consists of a set of questions (a question agenda) that experts in chemistry would unconsciously ask themselves when dealing with structure-property relations [18,19]. For example, when dealing with a problem about the potential solubility of poly-4-hydroxystyrene in a basic solution, one needs to know, besides which substance and which property, which type of particles is involved (polymer with hydroxy groups and basic particles) and which type of bonds (ion-dipole bonds) plays a role. Relative novices, as students mostly are, can use the question agenda to interrogate the problem. In this way, the particle perspective can act as a scaffold for the students to construct the structure models of the micro-level, which in turn can be used for explaining the macro-level. Students need to get acquainted with the questions and the answers, i.e., the chemical concepts of this particle perspective, to become proficient in structure-property reasoning. In addition, the questions and answers can be seen as thinking tools for structure-property reasoning. Figure 3 shows an elaborated version of the particle perspective with all the chemical concepts needed for the structure-property reasoning explored in this study. In the "explain" phase of the POE task, students need to explain their observations. The particle perspective can therefore act as a scaffold to facilitate the students in this phase. By answering the questions of the perspective, the students are scaffolded toward the appropriate chemical concepts needed to explain their observations. The domain-specific perspective can also act as a stepping stone to expand the required chemical concepts needed for structure-property reasoning. When used repeatedly in multiple settings such as lessons, new chemical concepts can be added to the question agenda, and questions can be divided into several sub-questions. As students' knowledge grows, the particle perspective grows, and more complex problems can be investigated. Furthermore, the students' knowledge will be organized by the particle perspective. This gives students an overview, and interdependencies become clear [18,19]. The two design principles served as the basis for our design study.
We aimed to design demonstrations with a POE task and the scaffold of the particle perspective in order to engage pre-university students in modeling the structure level of metals, salts and molecular compounds and to enable them to learn how to perform structure-property reasoning. Research Design Using a one-group pretest-posttest design, the effectiveness of a demonstration-based lesson series with a POE task and the particle perspective as scaffold was investigated. The aim of the demonstration-based lesson series was to stimulate and develop students' structure-property reasoning. To be more specific, the students had to learn the chemical concepts, e.g., hydrogen bridge or ions, that are associated with the micro-models of metals, salts and molecular compounds. Figure 3 shows all the chemical concepts offered in these demonstrations and thus the learning objectives for the students. They also had to construct and apply the micro-models themselves with the associated chemical concepts. The activities were designed by the first author and piloted in her own teaching practice. This pilot showed that the selected design worked well for the metals group and the salts group. For molecular compounds, however, we noticed that students generally were not able to predict properties. Consequently, we redesigned the lesson on molecular compounds (see "Overview of the Lesson Series" below). The adapted lessons were again provided by the first author. Setting and Participants The lesson series was performed in two cohorts in a Dutch secondary school: cohort 18-19 and cohort 19-20. Table 1 shows the number, gender and age of the students in both cohorts. The students were in the fourth year of the pre-university track. In the third year, they had been introduced to chemistry through the topics: substances and their properties, particle models, separation methods, chemical reactions, atoms, molecules, metals, organic compounds, reaction heat, reaction rate, stoichiometry, fuels and plastics. Our designed demonstration lessons were part of a topic about chemical bonding. Before this course, the students had learned about Bohr's atomic model, mole, stoichiometry and concentration. Overview of the Lesson Series The two design principles were incorporated into the lesson series, which comprised three lessons of 50 min each. Properties of the three types of substances (metals, salts and molecular compounds) were demonstrated. Students then engaged in activities to discover the structure models underlying some of the common properties of these three types of compounds (Figure 4). STEP 1: PREDICT The demonstration lesson series started with metals. Several metals were displayed on the teacher's desk, such as iron, copper, lead, aluminum and zinc. First, the question agenda of the particle perspective was handed out to the students, and the teacher asked them to complete this with answers suitable for the metals (in order to obtain their prior knowledge). Next, the students were asked to predict the properties of this group of substances (P = predict). These predictions of the properties were collected in a class discussion. STEP 2: OBSERVE Properties shown in this step were general aspects such as color, phase at room temperature, malleability, hardness and electrical conductivity. During the demonstration, there was a class discussion solely about the macroscopic properties of the metals.
The order of the demonstrated properties was chosen to facilitate a step-by-step development, building on what students had already studied in their third year (a more general particle model) toward a more sophisticated structure model of metals, the learning objective for this course. The students had to observe the demonstrated properties of each of these metals. STEP 3: EXPLAIN In this step, the students were asked to produce a structure model of the shown substance based on their observations of the properties. They could use the question agenda from the particle perspective as a scaffold (Figure 1), and they discussed their models in small groups. After that, their structure models were discussed in a whole-class discussion to enable the students to test their own structure models. After approximately three iterations of the first student-generated models, their models were compared with the commonly accepted teaching models. The students were asked to create a structure model for the metals that explained the properties shown in the demonstration. After the group discussions, the teacher discussed the common denominator of the student-generated structure models. Then, the teacher asked the students to show how the property malleability could be explained in the structure model. After group discussions, this was again discussed in the whole class. After that, the teacher asked the students to produce two iterations: (1) to adapt the structure model to explain the hardness of alloys and (2) to adapt the structure model to explain the conductivity of electricity. Finally, students supplemented the question agenda of the particle perspective from the beginning of the lesson with the discovered concepts and structure models. More information about the demonstration lesson series can be found in Appendix A, where the demonstration protocols for the metals, salts and molecular compounds are described. Based on the pilot, we used a slightly different approach for the molecular compounds. Students found it difficult to predict the various properties of the molecular compounds. Consequently, they found modeling of the underlying structures to be complicated, and thus less complex structure models were needed to enable a constructive modeling process. For this reason, we simplified the structure model of the molecular compounds by concentrating on the general molecular interaction. In later lessons, this simplified model was explored by giving instructions about the various types of molecular interactions. In addition, it appeared that the students needed an explicit scaffold for the modeling process. To offer this, we used the question agenda of the particle perspective to structure the demonstrations. For these reasons, we adapted the structure of the demonstration lesson as follows: the lesson started with the questions of the particle perspective, and the class was asked to suggest which property should be demonstrated by the teacher to discover the answer to those questions (P phase). For example, to discover the type of particles, students could argue that conductivity should be demonstrated to reveal whether the particles were charged. Subsequently, all necessary properties, such as boiling point and conductivity, were demonstrated by the teacher (O phase), and the questions of the particle perspective concerning the micro-level were discussed together with the associated chemical concepts (E phase).
At the end of the demonstration for the molecular compounds, the students completed the particle perspective again. An additional advantage of this approach was that it gave students the opportunity to explicitly practice the questions of the particle perspective. Data Collection and Data Analysis We gathered several types of data about the level of proficiency in structure-property reasoning, using a combination of quantitative and qualitative instruments. Quantitative data were gathered with the SPR instrument (structure-property reasoning instrument) [35], which was developed in previous research in order to estimate various aspects of structure-property reasoning at different levels of mastery. To provide insight into how the demonstration-based lesson series impacted learning, we also gathered data in the form of manifest student products (student structure models, perspectives) and audio recordings of student group discussions. The SPR instrument was administered as a pretest and posttest for both cohorts. Student results on the pre- and posttests were compared statistically to determine significant growth in structure-property reasoning. The SPR instrument consists of four tasks (see Table 2): an unframed and a framed sorting task and an unframed and a framed mapping task, all four based on the particle perspective. In the framed sorting task, the instruction reads: "Shuffle your 16 cards and sort them in the four groups as stated on your worksheet: molecular/bonding, molecular/lattice, ionic, metallic. Every group should contain at least one card." In the unframed mapping task, participants receive the questions of the particle perspective (see Figure 2) and complete them with answers in the form of chemical concepts; creating a hierarchy is allowed ("In front of you, you see the questions of the particle perspective. A perspective is a way of questioning your topic or problem. Complete the questions with the appropriate chemical concepts. You are allowed to form a hierarchy."). In the framed mapping task, participants receive the questions of the particle perspective (see Figure 2) and 30 chemical concepts, which should be placed under the appropriate question; again, creating a hierarchy is allowed ("Again, you are given the questions of the particle perspective. Complete the questions with the given 30 chemical concepts. You are allowed to form a hierarchy."). In the sorting tasks, the percent pairs (%P) for the structure level and the property level were determined: the percentage of card pairs in the participant's sort that are also found in the ideal structure or property sort. The more similar a sort is to an ideal sort, either structure or property, the higher the percent pairs will be. To provide insight into the type of group names the students used to categorize the formed groups in the unframed sorting task, the group names were coded by type of category name with the codes "referring to structure", "referring to property" or "other", as shown in Table 3. In the framed sorting task, the framed difference (FD) score was determined; the FD score is defined as the number of cards placed in a group other than in the ideal sort. In the unframed mapping task, the number of given chemical concepts was counted and judged for correctness against the reference map (Figure 3). In the framed mapping task, the percentage of correctly placed chemical concepts was determined.
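To make the sorting-task scores concrete, the sketch below shows one way %P and the FD score could be computed. It is a minimal illustration under stated assumptions: sorts are represented as mappings from group names to card labels, the function and card names are invented for the example, and %P is read here as the share of the ideal sort's same-group card pairs that the participant reproduced; this is one plausible reading of the definition, not the SPR instrument's actual implementation.

```python
from itertools import combinations

def same_group_pairs(sort):
    """Set of card pairs that share a group; `sort` maps group name -> card labels."""
    pairs = set()
    for cards in sort.values():
        pairs.update(frozenset(pair) for pair in combinations(sorted(cards), 2))
    return pairs

def percent_pairs(participant_sort, ideal_sort):
    """%P: percentage of the ideal sort's same-group pairs reproduced by the participant."""
    ideal = same_group_pairs(ideal_sort)
    return 100.0 * len(ideal & same_group_pairs(participant_sort)) / len(ideal)

def framed_difference(participant_sort, ideal_sort):
    """FD score: number of cards placed in a group other than in the ideal sort."""
    ideal_placement = {card: group for group, cards in ideal_sort.items() for card in cards}
    return sum(1 for group, cards in participant_sort.items()
               for card in cards if ideal_placement.get(card) != group)

# Toy example with 4 cards (the real framed task uses 16 cards and 4 categories).
ideal = {"metallic": ["card01", "card02"], "ionic": ["card03", "card04"]}
student = {"metallic": ["card01", "card03"], "ionic": ["card02", "card04"]}
print(percent_pairs(student, ideal))      # 0.0 -> no ideal pair reproduced
print(framed_difference(student, ideal))  # 2   -> two cards misplaced
```

The mapping-task scores would follow the same pattern, counting concepts against the reference map of Figure 3.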
We also gathered qualitative data: manifest student products (student structure models, perspectives) and audio recordings of student group discussions. In order to gain insight into how the demonstration-based lesson series impacted student learning, we chose to present a case study in which we described how one student's learning (Sally, cohort 19-20) was impacted by the lesson series. Sally (a fictitious name) was chosen because her learning progression was a clear example of how students developed during the demonstration lessons. Sally collaborated with three female students in one group. We collected Sally's structure models for the topic of metals (the first demonstration) and audiotaped her group discussions. The resulting drawings and group discussions were analyzed as follows. First, we transcribed Sally's group discussions around the topic of metals and compared these to the drawings that Sally made in order to map how the modeling process of the structure level (for metals) progressed. Next, we analyzed in these transcripts how Sally's group spoke about the properties of malleability, hardness and electrical conductivity and how these properties were visible in their structure models. The main starting question for the analysis was how the group adapted their first structure model of metals and how they progressed to the final structure model of metals. Results In this section, we first present the quantitative SPR instrument outcomes. Next, we present the case study of Sally. The framed mapping task tested whether the students acquired the offered chemical concepts of the particle perspective. Table 4 shows that the number of correctly placed chemical concepts increased significantly in the posttest. Students barely made any mistakes after the demonstration series. This result shows that students acquired and understood the chemical concepts needed for structure-property reasoning. The students were able to connect the chemical concepts with the corresponding question of the particle perspective. Table 5 shows that the perspective maps created by the students were more comprehensive in the posttest compared to the pretest. The students provided more answers, i.e., chemical concepts, and their mapping was more comparable to the reference map. The results show that students were able to reproduce the learned concepts. Furthermore, they understood and applied the learned chemical concepts by connecting them to the corresponding question. The demonstration lessons contributed to greater proficiency in structure-property reasoning. Both cohorts showed an increase in the %P-structure score in the posttest. Students' sorts into the predetermined categories (framed sorting task) were more similar to the ideal structure sort. This probably means that after the demonstrations, students were better able to apply their acquired knowledge of the particle perspective to problems concerning structure-property reasoning. Students were better able to evaluate these problems on the less visible structural aspects such as type of particles or bonding. In the framed sorting task, the data showed a decrease in the FD score in both cohorts (Table 6). This means that after the demonstration-based lesson series students made fewer mistakes in sorting the problems into the appropriate predetermined categories. This applied specifically to the categories of metals and salts, where students made considerably fewer mistakes.
This implies that the demonstration-based lesson series helped students to acquire structure models and to apply them in solving problems. Students found the difference between bonds and lattices in the category of molecular compounds more difficult. These predetermined categories helped the students to sort the problems at a structure level. This can also be seen in the higher %P-structure score for the pretest and the posttest compared to the unframed sorting task (Table 6 for the framed sorting task and Table 7 for the unframed sorting task). Unframed Sorting Task As shown in Table 7, students of both cohorts mainly sorted on property aspects in both pretest and posttest. This finding confirmed the pre-university students' macroscopic orientation [4,5]. Examples of these category names were melting point, density and solubility. This macroscopic orientation was also reflected in the high %P-property scores in the pretest and the posttest of both cohorts (Table 7). At the same time, Table 7 shows a noteworthy increase in the mean %P-structure scores in the posttest. In other words, students' sorts in the posttest were more similar to the ideal structure sort. This result shows an increase in students' proficiency in structure-property reasoning after the demonstration lessons. As the questions of the particle perspective were derived from the way an expert thinks [19], we could say that the students' evaluation of the problems became more similar to that of an expert. Case Study Sally To provide insight into how the demonstration-based lesson series impacted student learning, we present the case study of one student. Sally was a female student enrolled in the 19-20 cohort who was followed during the demonstration lesson on the topic of metals. She worked in a group with three other girls: Ryanna, Cathy and Fatima (fictitious names). During the Demonstration Lesson FIRST STEP: PREDICT As the lesson started, Sally received an empty particle perspective and was asked by the teacher to complete the questions for the group of metals. The resulting perspective (see Figure 5 for Sally's first particle perspective, dotted underline) showed that her prior knowledge of the metals, both properties and structure models, was quite comprehensive. Sally compared her personal particle perspective with those of others from her group in the subsequent group discussion, in which the four girls discussed the six questions for the metals and tried to formulate appropriate answers. The audio recordings of Sally's group discussion revealed that her group started with the question "type of substance" (Figure 5). Next, they discussed "type of particles", and they named, among others, valence electrons (Figure 5) because of their role in conductivity. They also discussed "type of organization", how the valence electrons move through the lattice and the regularity of this lattice. They constantly switched between these two questions: "type of particles" and "type of organization" (Figure 5). Then they switched to the properties of metals and talked about the malleability, conductivity and mixability of metals (Figure 5, "which properties?"). Again, they discussed the ability of valence electrons to move through the lattice (Figure 5, "type of particles" and "how the particles are organized?"). At this point, they also named metallic particles.
Then they listed all the bonds they knew, looking for a bond that fitted the metals (Figure 5, "which force between the particles?"). The teacher then brought the predictions of the properties together in a class discussion. Together, the class named all the important properties of the metals, i.e., gray-colored, shiny after polishing, hard, malleable, solid at room temperature and able to conduct electricity and heat. SECOND STEP: OBSERVE In the demonstration, the teacher showed the properties of the metals. These were general aspects such as color, phase at room temperature, malleability, hardness and conductivity of electricity. There was a class discussion solely about the macroscopic properties of the metals. THIRD STEP: EXPLAIN Sally's group started with the general model for particles, as shown in the second column of Table 8, and in their discussion, they immediately tried to take into account the conductivity of electricity, something they had also named in their starting situation. They correctly suggested that valence electrons play a role in conductivity and that a neat lattice is needed for these electrons to move. They used circles of the same size for the particles in their drawing, showing that all the particles are equal. However, in their discussion of their first models they paid no attention to the fact that a metal is malleable (Table 8, column 2). In the class discussion about the property malleability and the corresponding structure model, the teacher drew the common denominator of all the drawings she had observed: equal round particles in a neat lattice. The teacher asked whether this structure model explained the property malleability. Sally mentioned that a row of particles can be moved without obstructions, and she showed this to her classmates in the drawing (Table 8, column 3). After this adaptation of the structure model, the teacher asked the students to adapt the model to explain the hardness of an alloy such as steel (Table 8, column 4). Sally's group now recognized that differently sized particles are not able to move along easily and that malleability decreases, as Sally remarked: "With another substance in it, other particles which are larger or smaller" (Table 8, column 4). In a short class discussion, the adaptation of the structure model was discussed. To explain the conductivity of electricity, the students had to adapt their structure model again. Sally's group discussed the role of the negatively charged valence electrons again, but the girls did not discuss the existence of the positively charged metal atoms (Table 8, column 5). They correctly recognized that these electrons move freely through the lattice. In the class discussion, the role of valence electrons was discussed. The students said that these electrons move freely: "they are playing tag". The teacher discussed the positively charged metal atom that appears when a negatively charged electron moves through the lattice. The teacher asked the students to explain the conductivity of heat using this knowledge. Sally's group found it difficult to explain, but they discussed that the shaking or movement of a particle might play a role. The adapted structure model as drawn in Table 8 (column 4) was now used to explain the high melting point of metals. The teacher asked the class what was necessary on the structure level to obtain a high melting point on the property level. Sally suggested a strong bond between the particles.
She also suggested that this strong bond originates in the attraction of positive and negative charges. The last step of the lesson was the completion of the particle perspective again, now with all the concepts learned in this lesson (see Figure 5 for Sally's result; underlined with dashes). In a class discussion, the appropriate concepts for the metals were formulated for each question of the perspective. Results of the SPR Instrument of Sally's Group before and after the Demonstration Lesson Series One week after the exam and five weeks after the last demonstration lesson, the students were asked to complete the SPR instrument as a posttest. Part of this test was the unframed mapping task, in which the students had to complete the questions of the particle perspective with the appropriate chemical concepts. The personal particle perspective showed the knowledge organization of the individual students. Sally's particle perspective (Figure 6) was very complete and comparable to the reference particle perspective (Figure 3). Some chemical concepts were missing from her particle perspective, mainly on the question "which type of particle?" (Figure 6). These concern the concepts of metal and non-metal atoms and molecules. Sally's unframed sorting task showed a macroscopically oriented sort in the pretest. She used four categories: solubility, conductivity, melting point and hardness. These are the same categories that she also used in her posttest. Her %P-property scores in the pre- and posttest were 72% and 84%, respectively, meaning that her sorts were comparable with the ideal property sort, underlining her macroscopic orientation in these sorts. In the framed sorting task, she had a lower FD score in the posttest (a decrease from 10 to 5). Her %P-structure increased from 29% in the pretest to 44% in the posttest. With the predetermined categories in the framed sort, Sally was able to evaluate the problems on structural aspects after the demonstration series. This suggests that Sally increased her proficiency in structure-property reasoning. Conclusions and Discussion This paper describes two design principles for demonstration-based lessons aiming to help students develop structure-property reasoning. The two design principles were: (1) adding a POE task to demonstrations to stimulate students' engagement and their modeling process and (2) scaffolding of the POE task with a domain-specific particle perspective [18,19] in order to explicitly guide the modeling at the micro-level for students in the "explain" phase of the POE task. For the design of the lesson series, we explicitly used conventional demonstrations as a basis. The demonstration-based lesson series incorporating these two design principles was tested in two cohorts of upper pre-university students to investigate the effects of the two design principles on the level of their structure-property reasoning. First, the results of the SPR instrument indicated that the demonstration series contributed to students' proficiency in structure-property reasoning. The unframed and framed mapping tasks of the SPR instrument showed that most students acquired and understood the chemical concepts needed for structure-property reasoning. In the framed mapping task, students matched 97% of the given chemical concepts with the associated question of the particle perspective. In the posttest, the unframed mapping tasks were notably more comprehensive.
Furthermore, the framed and unframed sorting tasks showed that students were better able to apply their acquired knowledge to problems concerning structure-property reasoning in the posttest. Students were also found to use more structure-property reasoning to evaluate and sort the types of problems that they were presented with in the posttest. The unframed and the framed sort in the posttest bore a greater resemblance to an ideal structure sort. Students also placed more cards in the "correct" category in the framed sort. In the unframed sorting task, the students used more category names referring to the structure level. Considering design principle 1, adding the POE task, our data showed that the POE task engaged students in modeling of the structure level and, therefore, in acquiring structure-level understanding. The qualitative data showed that Sally and her classmates (cohort 19-20) started with the general model for solids to explain the first property demonstrated. Sally and her classmates extended this model step-by-step by reviewing the model for the other demonstrated properties. Finally, they came to a model that explained all demonstrated properties of metals. During classroom reasoning, we could see that the properties demonstrated (solid at room temperature, malleability, hardness and conductivity of electricity) were used to create, extend and test their structure models. Moreover, Sally's particle perspective from the posttest (Figure 6) showed that she had acquired all the chemical concepts needed for proficient structure-property reasoning. In this study, we only followed the learning progression of one student. In further research, the learning progressions of a larger group of students should be investigated. The literature shows that one of the difficulties students experience with structure-property reasoning is the connection between the macroscopic level of the properties and the micro-level where the structure models emerge. Due to their inexperience, students tend to start their reasoning from their macroscopic orientation [5,9]. By adding the "predict" step and the student-centered "explain" step, in which students actively construct the structure models and explain the predicted and observed properties, students had to make explicit connections between the two levels of representation. This might increase their proficiency in structure-property reasoning. The qualitative data (Sally's drawings and her discussions with her classmates) also showed that the addition of the POE task to the demonstrations gave the students the opportunity to model the structure level themselves. After adapting her prior particle model of a solid (see the second row in Table 8), Sally formulated a structure model of metals that explained the demonstrated properties. It appears that the POE task facilitated students in the modeling process. Interestingly, the data also showed that this modeling process appeared to consist of several stages. Sally and her classmates did not merge all the shown properties into one comprehensive structure model in one take. Instead, the group first constructed a tentative model based on their prior knowledge and a general particle model and then extended this tentative model step-by-step into a more extensive model so that it could continue to explain new properties. In this way, Sally and her classmates explicitly moved back and forth between properties and structure models.
Such an iterative modeling process and back-and-forth thinking between properties and structure require the teacher to scaffold this well, e.g., by showing the properties in an order that supports an iterative modeling process. Considering design principle 2 (scaffolding of the POE task with a domain-specific particle perspective), the SPR instrument, especially the unframed and framed mapping tasks, showed that students' particle perspective was more developed in the posttest. Developing the particle perspective also increased its value as a scaffold for the students. By obtaining the answers (the chemical concepts) to the question agenda, the students acquired the tools for structure-property reasoning. Furthermore, these chemical concepts were connected in functional coherence in the particle perspective. Normally, students learn the chemical concepts and, as a next step, apply these concepts in specific situations. In these demonstrations, students developed the chemical concepts in a context of structure-property reasoning. During the modeling process, the students had to work through the question agenda of the particle perspective several times, in an iterative process. Each time the particle perspective was extended, more options became available. These added concepts increased their proficiency in structure-property reasoning, and the students were able to question more complex and increasingly varied problems. It is known from the literature that one of the difficulties of structure-property reasoning that students experience is the invisibility of the structure level [5]. The structure level cannot be seen with the naked eye or through a microscope, and models are needed to describe it. Because of this, structure-property relations become abstract, and students are prone to misconceptions and experience various difficulties in solving problems [5,6]. In our study, the use of the particle perspective gave the students a scaffold to support the reasoning process by offering the concepts and the questions from the question agenda in coherence. Furthermore, the question agenda of the perspective gave the students insight into domain-specific reasoning. It enabled them to reason more like experts by questioning the problems with the aid of the question agenda. In sum, the particle perspective, with its questions and the associated chemical concepts, will increase students' proficiency in structure-property reasoning and will help them to solve problems with structure-property relations in the future. The design of the demonstrations and the modeling process by the students worked best for metals and salts, as can be seen in the results of the framed and the unframed sorting tasks. These groups of compounds are well-defined groups with clear structure-property relations and hardly any exceptions. The molecular compounds group was more difficult to demonstrate due to its complexity in terms of properties but also in terms of structure models: various types of bonds and lattices. This could probably be solved by dividing this group into several sub-groups of molecular compounds based on properties such as solubility and/or boiling points. In our demonstration, the problem was solved by reversing the design. Instead of asking the students to model the structure level of molecular compounds, we asked them to design appropriate demonstrations to help them find the answer to the questions of the particle perspective.
This change of design was beneficial for teaching the students the particle perspective and thus for the explicit teaching of structure-property reasoning. Both design principles helped to promote structure-property reasoning among students. However, these principles will only be used in day-to-day practice and on a wider scale if teachers judge the principles to be practical. We know from the literature that teachers judge innovations to be practical based on three criteria: (1) the teaching practice needs to contain instrumental content so that teachers know how it will work in their setting; (2) the teaching practice needs to be congruent with teachers' goals and regular teaching practice; and (3) the teaching practice should be low-cost in terms of the time and energy that need to be invested [36,37]. In the present study, we used existing demonstrations as a starting point for redesign. We used the design principles to adapt these demonstrations. We expect that using existing elements (high instrumentality), a redesign close to teachers' existing teaching practices and materials already present (high congruency and cost-effectiveness) and the small change in the order of existing building blocks (Figure 1) amounts to high practicality for teachers. As demonstrations could be an important online teaching method in the present time of COVID-19, and as a weakness of online teaching is the lack of interaction with students, the addition of a POE task to demonstrations could increase the interactions with students, making the online demonstration minds-on. The combination of the two design principles could be used in any situation in which students (from primary school to higher education) are asked to develop a model to explain phenomena. This is not only the case for science-related subjects but also, for example, in economics, social studies, geography or linguistics (as part of teaching a language). For the modeling of phenomena, the POE task could be used in the same manner each time, but for each domain, a different domain-specific perspective should be used. The perspective could act as a thinking frame for the students, and this might enhance the domain-specific way of thinking. Further research is needed for the development and implementation of both the particle perspective and other domain-specific perspectives. One lesson series to develop the particle perspective, the associated chemical concepts and proficiency in using the question agenda of the particle perspective is clearly not enough. Repeated use of the question agenda and application of the chemical concepts in several assignments and tasks would be needed to mature structure-property reasoning. The lesson series described here could be the start of systematic use of the particle perspective for explicit teaching of structure-property reasoning. Further research could aim to develop strategies (tools) for teachers to design additional lessons using the particle perspective. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy legislation. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses or interpretation of data; in the writing of the manuscript or in the decision to publish the results. Appendix A.
Teacher Demonstration Protocols In Tables A1-A3, the demonstration protocols for the demonstration experiments are provided. In these protocols, an accompanying demonstration is described for each property. The structure model concept that could be modeled by the students is also indicated. These demonstrations fit in the "observe" phase of the POE task as described in "3.3. Overview of the Lesson Series". For each demonstration, the properties of several substances of that group are demonstrated. The choice of substances depends on what is available at school to properly demonstrate the properties. The substances mentioned in the demonstration protocols are therefore only indicative. For the metals (Table A1), the protocol includes the following demonstrations. Malleability: the teacher tries to bend the metal plates (structure model concept: metallic lattice). Melting point: the teacher holds the metal lead (mp = 327 °C) or zinc (mp = 420 °C) in a blue flame, and the metal becomes soft; next, the teacher holds copper (mp = 1083 °C) and/or iron (mp = 1535 °C) in the flame, but these melting points are above the temperature of the blue flame (approximately 1000 °C) and the metals will not soften. Conductivity of electricity: the teacher builds the setup to measure current conductivity (lamp, voltage source, wires and, if necessary, a conductivity meter) and measures the current conductivity of various metals (structure model concepts: metallic lattice, metallic bond). Behavior when heated: the teacher holds a ribbon of magnesium in the flame and sprinkles some metal powders (such as iron or magnesium) through the flame (no associated structure model concept).
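As a small planning aid for the melting-point demonstration above, the sketch below predicts which metal samples should visibly soften in a flame of a given temperature. It is purely illustrative; the flame temperature passed in the example is the approximate value given in the protocol, and the dictionary simply restates the melting points listed there.

```python
# Melting points of the demonstrated metals in degrees Celsius (values from the protocol).
MELTING_POINTS_C = {"lead": 327, "zinc": 420, "copper": 1083, "iron": 1535}

def softens_in_flame(flame_temp_c):
    """Return, per metal, whether its melting point lies below the flame temperature."""
    return {metal: mp < flame_temp_c for metal, mp in MELTING_POINTS_C.items()}

# A blue flame of roughly 1000 degrees Celsius softens lead and zinc, but not copper or iron.
print(softens_in_flame(1000))
```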
10,428.6
2021-09-04T00:00:00.000
[ "Chemistry", "Education" ]
Children with speech sound disorder: comparing a non-linguistic auditory approach with a phonological intervention approach to improve phonological skills This study aimed to compare the effects of a non-linguistic auditory intervention approach with a phonological intervention approach on the phonological skills of children with speech sound disorder (SSD). A total of 17 children, aged 7-12 years, with SSD were randomly allocated to either the non-linguistic auditory temporal intervention group (n = 10, average age 7.7 ± 1.2) or the phonological intervention group (n = 7, average age 8.6 ± 1.2). The intervention outcomes included auditory-sensory measures (auditory temporal processing skills) and cognitive measures (attention, short-term memory, speech production, and phonological awareness skills). The auditory approach focused on non-linguistic auditory training (e.g., backward masking and frequency discrimination), whereas the phonological approach focused on speech sound training (e.g., phonological organization and awareness). Both interventions consisted of twelve 45-min sessions delivered twice per week, for a total of 9 h. Intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. No significant improvement in phonological skills was observed in either group. Inter-group analysis demonstrated significant differences between the improvements following training for both groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and both auditory measures. Therefore, both analyses suggest that although the non-linguistic auditory intervention approach appeared to be the more effective intervention approach, it was not sufficient to promote the enhancement of phonological skills. INTRODUCTION Speech sound disorder (SSD) is defined as a developmental disorder characterized by articulatory and/or phonological difficulties that affect a child's ability to be understood by others, leading to reduced speech intelligibility, in the absence of other cognitive, sensory, motor, structural, or affective issues (Shriberg, 2003; Raitano et al., 2004; McGrath et al., 2007). It is currently well established that, in most cases, the primary characteristics of SSD are difficulties in acquiring the phonological representations of speech sound systems in addition to deficits in speech perception and phonological tasks (Bird and Bishop, 1992; Leitao and Fletcher, 2004; Kenney et al., 2006; Fey, 2008). Despite the overlap of symptoms between SSD and language impairments, such as specific language impairment (SLI), SSD have their own characteristics (primarily increased substitution or omission of sounds from words compared to same-aged peers, and speech production errors) and constitute the largest group of speech and language impairments observed in children (Shriberg and Kwiatkowski, 1982; Shriberg et al., 1994; Broomfield and Dodd, 2004; Tkach et al., 2011). According to Shriberg et al. (1999), the prevalence of SSD is ∼2-13%, and the rate of comorbidity between SSD and SLI in 6-year-old children, for instance, is 0.51%. Several studies have investigated the effects of different intervention approaches on phonological impairments in children with SSD.
For many years, the most common treatment approach in speech-language pathology was the traditional articulation approach (Van Riper, 1939), which focuses on how to articulate individual phonemes to improve speech intelligibility. Over time, several phonological intervention approaches were incorporated into speech therapy, focusing on the phonological representations of speech sound systems and including phonemic awareness, vocabulary, and/or phonological memory tasks. Williams et al. (2010) documented 23 different intervention approaches for children with SSD, with the cycles approach (Hodson and Paden, 1983, 1991) and the core vocabulary approach (Holm et al., 2005) as examples of recognized phonological therapies. The Cycles Phonological Remediation Approach (Hodson and Paden, 1983, 1991) aims to increase a child's intelligibility by facilitating the emergence of primary target patterns for beginning cycles, such as final consonants, clusters, velars, and liquids. The Core Vocabulary approach establishes consistency of production and enhances consonant and vowel accuracy. According to Crosbie et al. (2006), this approach is effective for children with an inconsistent phonological disorder. As previously mentioned, numerous studies have demonstrated that speech perception deficits are one symptom of SSD. However, the role of this deficit in developmental phonological disorders remains unclear. Since the 1980s, research has supported the hypothesis, initially proposed by Tallal and Piercy (1973), that an auditory-sensory deficit, more specifically an auditory temporal processing deficit, may be the underlying cause of speech perception deficits (Tallal and Piercy, 1973; Tallal, 1980; Tallal et al., 1996; Fitch et al., 1997; Habib, 2000; Ingelghem et al., 2001; Share et al., 2002; Murphy and Schochat, 2009a,b). This auditory temporal processing difficulty can be described as a limited ability to process "acoustic elements of short duration", such as consonants with rapid formant transitions. Thus, children with language impairments, including SSD, would have difficulties perceiving and distinguishing these sounds properly within the speech spectrum and subsequently developing the phonological representation of each one in order to produce them correctly. Based on this hypothesis, a large number of studies have investigated the effects of auditory temporal training on language and phonological skills (Merzenich et al., 1996; Tallal et al., 1996; Kujala et al., 2001; Hayes et al., 2003; Cohen et al., 2005; Russo et al., 2005; Strehlow et al., 2006; Gaab et al., 2007; Lakshminarayanan and Tallal, 2007; Gillam et al., 2008; Given et al., 2008; Heim et al., 2013). Despite this body of research, the extent to which auditory perceptual learning generalizes to higher phonological skills remains controversial, and this controversy is often framed in terms of methodological issues. In the research conducted by Tallal et al. (1996), for instance, the trained group was composed of children with both speech and language impairments (described by the authors as language-learning impairments). Therefore, combining children with SSD and SLI might confound the observation of a relationship between pure speech perception deficits and auditory temporal processing skills. In addition, there is no consensus as to whether the changes in language skills that follow auditory training are due to specific auditory-sensory learning or to a general enhancement in cognitive skills.
Numerous studies have demonstrated that auditory training can also promote improvement in cognitive skills (especially with regard to working memory and attention) in addition to the enhancement of auditory-sensory skills (Mahncke et al., 2006; Adcock et al., 2009). Although a great number of studies have addressed the effectiveness of auditory and phonological intervention approaches on the language skills of children with either SLI or dyslexia, only a few studies have investigated the effect of these intervention approaches on the speech production and phonological awareness skills of children with SSD. Lousada et al. (2012) described the presence of learning generalization in a study evaluating the effectiveness of a phonological intervention approach and an articulation intervention approach in children with SSD. A generalization probe of either the trained sound or the trained phonological process to five non-intervention words was used. The authors demonstrated that the children in the phonological group showed greater generalization to untreated words than those who received articulation therapy. No study has investigated the efficacy of auditory training or attempted a direct comparison of the effectiveness of auditory and phonological intervention approaches on speech production and phonological awareness skills. Baker and McLeod (2011), for example, mentioned that few studies have demonstrated that one intervention approach is more efficient than another for a specific disorder group. In addition, most of the studies reporting efficacy used quasi-experimental or non-experimental designs, indicating the need for more controlled studies including groups of children and randomized controlled interventions (Brumbaugh and Smit, 2014). Therefore, the aim of the present study is to compare the effects of an auditory and a phonological intervention approach on speech production and phonological awareness skills in children with SSD. Taking into account previous studies demonstrating a strong link between impaired phonological processing and SSD, as well as the hypothesis associating speech perception deficits with an auditory-sensory impairment, comparing both intervention approaches allows us to explore the actual contribution of phonological skills as well as auditory-sensory aspects to language skills. We also aim to investigate the extent to which both interventions may improve other deficits present in children with SSD, including sustained attention (Murphy et al., 2014) and phonological working memory deficits (Adams and Gathercole, 1995). We hypothesized that each of the interventions would improve performance in the trained tasks (auditory and phonological skills) and result in learning transfer to associated tasks in the same or different domains (language, auditory, memory, and attention skills). MATERIALS AND METHODS This study was conducted at the Department of Physical Therapy, Speech-Language Pathology and Occupational Therapy in the School of Medicine (FMUSP/HC) at the University of São Paulo and was approved by the Research Ethics Committee in the Analysis of Research Projects at the Hospital das Clínicas, School of Medicine, University of São Paulo, under Protocol Number 575/09. A written consent form with detailed information on the aim and protocols of the study was also approved by the same ethics committee. All parents provided written informed consent on behalf of the children involved in the study.
Apparatus The experiment took place in an isolated room in the Speech-Language Pathology Clinic. Auditory tests were administered binaurally in a sound-treated booth at a level of 40 dB NS using an audiometer, headphones, and compact disks. Attention and short-term memory tests were administered using the E-Prime Professional software to display the stimuli and collect the data. The language tasks were recorded using a JVC® Everio video camera and a Zoom H2 digital recorder for audio. The auditory intervention was delivered individually using a laptop, headphones, and specific software. The stimuli were presented binaurally at a comfortable listening level, which corresponded to a sound level of 70 dB (A). In the phonological intervention approach, children were positioned face-to-face with the speech and language pathologist to provide visual support of the therapist's mouth. Target sounds were presented at approximately 50-60 dB HL at a distance of 1 m. Outcome measures The intervention outcomes were categorized as "auditory-sensory measures" (i.e., auditory temporal processing skills) and "cognitive measures" (i.e., attention, short-term memory, speech production, and phonological awareness skills). Auditory-sensory measures. Frequency Pattern Test (FPT; Musiek, 1994). The FPT consists of 20 trials with ∼6-s intervals between each trial pair. Each trial consisted of three 150-ms stimuli with an inter-stimulus interval of 200 ms. The low stimulus (L) was 880 Hz, and the high stimulus (H) was 1122 Hz. There were six possible stimulus combinations: HHL, HLL, HLH, LHL, LLH, and LHH. The children were instructed to listen carefully to all three stimuli and respond by naming them in the order in which they were presented (e.g., "low, low, high"; "high, low, low"; etc.). After the test, we calculated the percentage of correct answers. This test was administered binaurally in a sound-treated booth at a level of 40 dB NS. In non-impaired Brazilian children (ages 7-11 years), the expected result varies between 47.5 and 69.4% (Schochat et al., 2000). Gaps-in-Noise Test (GIN; Musiek et al., 2005). The GIN test consists of stimuli with ten different gap lengths of 2-30 ms. In this test, the participants listened to segments of broadband noise that contained 0, 1, 2, or 3 silent intervals (i.e., gaps). As Musiek et al. (2005) described, the broadband noise was turned off and on instantaneously to produce gaps. Listeners were instructed to raise their hands each time they heard a gap. Gaps were separated by at least 500 ms in each trial. The test was performed in a sound-treated booth at a level of 40 dB NS. The task consisted of 35 trials presented binaurally. In non-impaired Brazilian children (ages 8-10 years), the expected result is ∼6.1 ms (Amaral and Colella-Santos, 2010).
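As an illustration of the FPT stimulus structure just described, the sketch below builds one three-tone trial with NumPy. It is only a sketch under stated assumptions: the sampling rate and the absence of onset/offset ramps are not specified in the paper, and the function names are invented for the example.

```python
import numpy as np

FS = 44100                    # sampling rate in Hz (assumed; not stated in the paper)
LOW_HZ, HIGH_HZ = 880, 1122   # low (L) and high (H) FPT tones
TONE_MS, ISI_MS = 150, 200    # tone duration and inter-stimulus interval

def tone(freq_hz, dur_ms):
    """Plain sine tone (no onset/offset ramp, for simplicity)."""
    t = np.arange(int(FS * dur_ms / 1000)) / FS
    return np.sin(2 * np.pi * freq_hz * t)

def fpt_trial(pattern):
    """Build one three-tone FPT trial, e.g. 'LLH' -> expected answer 'low, low, high'."""
    freqs = {"L": LOW_HZ, "H": HIGH_HZ}
    silence = np.zeros(int(FS * ISI_MS / 1000))
    parts = []
    for i, symbol in enumerate(pattern):
        parts.append(tone(freqs[symbol], TONE_MS))
        if i < len(pattern) - 1:
            parts.append(silence)
    return np.concatenate(parts)

trial = fpt_trial("LLH")   # one of the six possible combinations
print(trial.shape)         # 850 ms of audio -> (37485,) samples at 44.1 kHz
```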
Cognitive measures. Auditory and Visual Attention Tests (Murphy et al., 2014). In both tests, performance is assessed using tasks that require participants to remain prepared to respond to infrequent targets (e.g., digits, letters, or symbols) over an extended period of time. In the present research, both tests were developed using E-Prime Professional software. In the visual test, digits between 1 and 7 were presented on a screen and participants pressed a button as quickly as possible each time a 1 or 5 appeared. The auditory task was identical to the visual task except that the participants heard the digit spoken over a set of calibrated headphones. The stimuli were presented binaurally at a comfortable listening level corresponding to a sound pressure level of 70 dB (A). Each test lasted ∼6 min and consisted of 210 trials. Three performance measures were compared across blocks: correct detections (HIT), false alarms (FAs: errors of omission and commission), and reaction time (RT). Participants were tested individually in a quiet, well-lit laboratory on campus. The testing session was composed of two parts: evaluation of auditory sustained attention and evaluation of visual sustained attention. The order was counterbalanced among participants. Before each section, the participants were given appropriate instructions and asked to perform approximately 15 practice trials. Visual digit span (forward recall; Murphy et al., 2014). This task was developed using E-Prime Professional software. The digit span task begins with a series of three digits, with 12 attempts for each series. Children verbally repeat each numerical sequence after viewing the numbers on a computer screen. If the children are correct more than 50% of the time, longer series are gradually presented. The span result is the last series for which the subject's responses were more than 50% correct. Speech production. Assessed by the picture-naming and word imitation tasks (Wertzner, 2004), derived from the Infantile Language Test-ABFW (Andrade et al., 2004). The picture-naming task was composed of 34 pictures of objects (24 dissyllable and 10 trisyllable words) with 90 consonants, and the word imitation task was composed of 39 words (25 dissyllable and 14 trisyllable words) with 107 consonants. Two researchers transcribed each trial to ensure the accuracy of the data; inter-rater reliability was ≥90%. The percentage of consonants correct-revised (PCC-R; Shriberg et al., 1997) was calculated separately for both speech production tasks by dividing the number of correct productions by the total number of consonants in the sample and multiplying by 100, to determine the production accuracy of each subject. Phonological awareness. Assessed by the Lindamood Auditory Conceptualization Test (LAC; Lindamood and Lindamood, 1979), adapted to the Brazilian Portuguese language (Rosal, 2002; Wertzner et al., 2014). The LAC test assesses phonological awareness skills without requiring verbal responses (children use colored blocks to represent their responses). This method provides superior information on phonological representations, as it prevents speech production errors from affecting the respondent's performance. The test comprises two categories: phonological awareness 1 (PA1) and phonological awareness 2 (PA2). PA1 assesses perception skills through the auditory selection of speech sounds. It comprises six complex sameness/difference sequences covering three possible variations in sequence of three gross and three fine contrasts. The subject must discriminate how many sounds he or she heard in a pattern, and in what sequential order their sameness or difference occurs. Examples of this category are the sound patterns (/b/ /b/ /z/) and (/k/ /t/ /k/). PA2 assesses comprehension skills associated with the child's ability to perceive and compare the number and order of sounds in a spoken pattern (including 12 stimuli that assess the manipulation of one phonemic change such as addition, substitution, omission, transfer, and repetition).
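The PCC-R described above reduces to a simple ratio. The sketch below is purely illustrative; the function name and the example counts are invented, not taken from the study's data.

```python
def pcc_r(correct_consonants, total_consonants):
    """Percentage of Consonants Correct - Revised: correct consonant productions
    divided by the total number of target consonants in the sample, times 100.
    In the revised metric, distortions are scored as correct; only omissions and
    substitutions count as errors."""
    if total_consonants == 0:
        raise ValueError("the sample must contain at least one target consonant")
    return 100.0 * correct_consonants / total_consonants

# Example: the picture-naming task samples 90 consonants; a child producing
# 81 of them correctly would obtain PCC-R = 90.0 for that task.
print(pcc_r(81, 90))
```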
Intervention program Because the impact of both approaches was investigated for the group as a whole (not individually), we chose to adopt, for both interventions, more general training tasks instead of specialized training focused on specific speech difficulties or impaired auditory skills. AUDITORY INTERVENTION The training focused on different auditory-sensory aspects, such as frequency discrimination, ordering, and backward masking. Each of the three tasks took ∼15 min to complete, resulting in 45 min of total training per session. The following software was used for the training tasks: 1. Backward masking and frequency discrimination: the System for Testing Auditory Responses/STAR (Moore et al., 2008). This software was responsible for training backward masking and frequency discrimination skills. A laptop computer with headphones was used to present the stimuli. The stimuli were presented binaurally at a comfortable intensity. A three-interval, three-alternative, forced-choice oddball design was used for both tasks. In the frequency discrimination task, three sound-emitting characters were presented, one of which emitted a sound at a different frequency from the others. The objective of the task was to detect the different frequency by clicking on the corresponding character. During this activity, the degree of difficulty was automatically adjusted by decreasing the difference between the standard stimuli and the target through an adaptive staircase procedure. The backward masking task was performed in a similar manner. Three sound-emitting characters were presented, of which one emitted a 20-ms pulse tone target 50 ms before the noise. The goal of the task was to recognize which character emitted the pulse tone and the noise. The degree of difficulty was modified via the automatic reduction of the pulse tone intensity. 2. Frequency ordering: sweep frequency training was conducted using the Auditory temporal training with non-verbal and verbal extended speech® software. This task trains both frequency discrimination and ordering skills. During the task, participants listened to two or three stimuli (depending on the task phase) and matched the stimuli to a sign on the screen. The following acoustic characteristics were presented: stimulus durations of 40-200 ms and frequencies that varied by 6.8 octaves per second. The initial and final frequencies were 0.5, 1 or 2 kHz, with an inter-stimulus interval that varied between 20 and 500 ms. The task consisted of 18 stages of varying difficulty levels (i.e., variations in the inter-stimulus interval and stimulus duration). PHONOLOGICAL INTERVENTION As mentioned previously, because the impact of this approach was investigated for the group as a whole (not individually), for the present study we designed a phonological stimulation program (PSP) for the stimulation of different sounds of the phonetic inventory. The PSP was formulated to expose the participants to all sounds of the Brazilian Portuguese system, independently of the phonological processes observed during the evaluations, such that phonological acquisition could occur gradually over a short period of time (12 sessions of stimulation).
Compared to more traditional phonological intervention approaches, the current approach is more closely linked to the Cycles Phonological Remediation Approach (Hodson and Paden, 1983, 1991), which also predicts that phonological acquisition in children with phonological disorders is gradual, as in typically developing children, and should be associated with kinesthetic and auditory sensations in order to acquire new patterns. Therefore, this approach intends to increase the child's intelligibility by facilitating the emergence of primary target patterns for beginning cycles, such as final consonants, clusters, velars, and liquids. During the 12-week period of the intervention, all 21 consonantal sounds (CVs) and 13 clusters (CVC) of Brazilian Portuguese were stimulated through activities involving the auditory perception of the target sound, articulatory production, phonological organization, and metalinguistic abilities. Every 2 weeks, each child was exposed to a new specific sound pattern within CV syllables, such as stops, fricatives, liquids and nasals, as well as more complex syllables such as CVC and CCV, regardless of the child's performance and the phonological processes observed in the evaluations. The sound patterns stimulated were as follows: sessions 1 and 2 - fricatives (/f/, /v/, /s/, /z/, /ʃ/, /ʒ/); sessions 3 and 4 - stops (/p/, /b/, /t/, /d/, /k/, /g/); sessions 5 and 6 - liquids (/l/, /R/, /λ/) and the velar fricative (/x/); sessions 7 and 8 - nasals (/m/, /n/, /ñ/) and (/s/, /R/) in CVC syllables; sessions 9 and 10 - /l/ in CCV syllables; and sessions 11 and 12 - /R/ in CCV syllables. We based the target sequence of stimuli on different studies with Brazilian Portuguese-speaking children (Wertzner, 2004; Wertzner et al., 2006, 2007), which indicate that difficulties with liquid production, followed by devoicing of fricatives and stops, are the most common speech deficits in children with SSD. As the liquid sounds are complex, due to both their production and their distribution in Brazilian Portuguese, we chose to begin the PSP with the presentation of the fricatives followed by the stops, which also allowed us to present the contrast between voiced and voiceless sounds. After these sounds, we presented the liquids and the velar fricative, followed by the most complex syllables (CVC and CCV), to finish the program. A variety of tasks were used during the PSP, some of which are highlighted here. One of the auditory perception tasks was to read three words beginning with each target sound to the child and then perform auditory recognition training for the sounds. In the articulatory tasks, the child had to pay attention to the sound and how the sound was produced by the researcher. Explanations regarding the sound's production were also given. Then, the child had to name specific objects beginning with the target sounds. In the tasks concerning phonological organization, the researcher asked the child to create a sentence including the name of a picture. Metaphonological tasks including syllable, rhyme, and alliteration activities were also performed, in addition to phonological memory tasks with words beginning with the target sounds.
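The session-by-session target sequence of the PSP described above can be captured as a simple lookup table. The sketch below is illustrative only; the dictionary name and helper function are invented, and the phoneme notation follows the text.

```python
# Target sound patterns per pair of PSP sessions, as described in the text.
PSP_SCHEDULE = {
    (1, 2): "fricatives /f v s z ʃ ʒ/ in CV syllables",
    (3, 4): "stops /p b t d k g/ in CV syllables",
    (5, 6): "liquids /l R λ/ and the velar fricative /x/",
    (7, 8): "nasals /m n ñ/ and /s R/ in CVC syllables",
    (9, 10): "/l/ in CCV syllables (clusters)",
    (11, 12): "/R/ in CCV syllables (clusters)",
}

def targets_for_session(session):
    """Return the sound pattern stimulated in a given session (1-12)."""
    for sessions, pattern in PSP_SCHEDULE.items():
        if session in sessions:
            return pattern
    raise ValueError("the PSP comprises sessions 1 through 12")

print(targets_for_session(5))  # -> liquids and the velar fricative
```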
Participants A total of 19 children diagnosed with SSD were invited to participate in this study. The children were recruited through the Laboratory of Investigation in Phonology within the Department of Physical Therapy, Speech-Language Pathology, Audiology and Occupational Therapy at the School of Medicine at the University of São Paulo. The children were diagnosed using the phonology test (Wertzner, 2004) derived from the Infantile Language Test-ABFW (Andrade et al., 2004). Diagnosis of an SSD was made by the presence of phonological impairments, which were determined by the presence of phonological processes that were not age-expected and the absence of impairment in the other language areas (vocabulary, pragmatics, and fluency), which are also measured using the Infantile Language Test-ABFW (Andrade et al., 2004). After diagnosis, the PCC-R (Shriberg et al., 1997) was determined based on the speech samples obtained from the picture-naming and word imitation tasks of the phonology test (Wertzner, 2004). This quantitative measure was chosen because it is highly sensitive to differences in phonological deficits and provides information pertaining to the two primary error types: omissions and substitutions (Shriberg et al., 1997). The children were monolingual Brazilian-Portuguese speakers and were not undergoing rehabilitation. The inclusion criteria were as follows: age between 7 and 12 years; diagnosis of an SSD using the phonological output/speech production test described above; no deficits in other language areas (vocabulary, pragmatics, and fluency); IQ > 80 (based on the WISC-IV); and no familial or personal history of diagnosed or suspected auditory, otological or neurological disorders or injuries. This specific age range was chosen because of the complexity of some of the auditory tasks included in the auditory intervention, which would not necessarily be easily comprehended by younger children. In addition, participants were required to demonstrate normal tympanometry and acoustic reflexes. Auditory sensitivity was required to be within normal limits (≤15 dB HL for octave frequencies from 250 to 8000 Hz) and symmetrical (interaural differences ≤5 dB HL at each frequency). In order to verify these inclusion criteria, the children were required to pass a series of inclusion tests consisting of a parent questionnaire, an audiological evaluation, language tests and a non-verbal IQ test (the Raven test of Colored Progressive Matrices with Brazilian norms (Angelini et al., 1999) and a conversion table of IQ values (Strauss et al., 2006)). The results of these tests (i.e., the IQ test and audiological evaluation) led to the exclusion of two children. The selected children were then randomly assigned to either the auditory intervention group (AG, n = 10) or the phonological intervention group (PG, n = 7). Table 1 displays the characteristics of these two groups (gender, age, IQ, and language skills). There were no significant inter-group differences with regard to age (p = 0.053), IQ (p = 0.35), short-term memory (p = 0.17), auditory processing (Frequency Pattern Test: p = 0.21, Gaps-in-Noise test: p = 0.80), or one of the language skills (picture-naming: p = 0.06). A difference was found only for imitation of words (p = 0.013). The significance threshold was set at p < 0.05 (Table 1). Procedures After the groups were established, a series of tests concerning attention, short-term memory, language, and auditory processing were applied before and after the interventions (outcome measures).
The characteristics of each of these tests are described in the Materials section. Each participant was allocated to one of the two intervention groups. Both approaches consisted of twelve 45-min sessions twice per week, for a total of 9 h of training. The details regarding each program are also described in the Materials section. Both groups received approximately the same number of training sessions (AG: mean = 11 sessions; PG: mean = 11.4 sessions; p = 0.62). Figure 1 shows the sequence of procedures adopted, from the initial invitation to participants to the number of completed training sessions for each group.
STATISTICAL ANALYSIS
The data were analyzed using Minitab Statistical Software version 16.1. Non-parametric statistics were used because both groups violated the assumption of normal distribution necessary for parametric analysis. Intra- and inter-group analyses were used not only to investigate the effect of each intervention approach separately (intra-group analysis) but also to compare the level of improvement following the interventions in both groups (inter-group analysis). For the first analysis, the pre- and post-intervention performances were compared separately for each group on each of the tests (intra-group analysis using the Wilcoxon test). In the second analysis, the differences between the pre- and post-intervention performances ("improvement following training") were compared between the two groups on each of the tests (inter-group analysis using the Mann-Whitney test). The significance threshold was set at p < 0.05. Table 2 displays the performances on the auditory-sensory and cognitive measures for both groups (pre- and post-training).
Auditory group
The Wilcoxon test demonstrated significant differences between the pre- and post-intervention performances for both auditory measures (FPT: p = 0.01 and GIN: p = 0.05), one of the visual attention measures (RT: p = 0.03), and one of the auditory attention measures.
INTER-GROUP ANALYSIS
With regard to the auditory-sensory measures, the Mann-Whitney test showed a significant difference between the gains of the two groups for both auditory measures (FPT: p = 0.01; GIN: p = 0.02). With regard to the cognitive measures, the Mann-Whitney test demonstrated significant differences between the gains of the two groups for visual RT (p = 0.02) and no significant differences between the gains for the language tasks (IB: p = 0.58; II: p = 0.52; picture-naming task: p = 0.69; imitation of words task: p = 0.32), the short-term memory task (p = 0.45), and the other auditory and visual attention measures (visual HIT: p = 0.72; visual FA: p = 0.41; auditory HIT: p = 0.35; auditory FA: p = 0.88; auditory RT: p = 1.0). To summarize, the intra-group analysis demonstrated that the auditory intervention group showed significant gains in both auditory and cognitive measures, whereas no significant gain was observed in the phonological intervention group. The inter-group analysis demonstrated significant differences between the improvement following training in the two groups, with a more pronounced gain for the non-linguistic auditory temporal intervention in one of the visual attention measures and in both auditory measures. No significant improvement in phonological skills was observed in either analysis for any of the groups (Table 3 and Figure 2).
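The intra- and inter-group comparisons described above can be sketched with standard non-parametric tests from SciPy; the variable names and the toy data below are placeholders, not the study's actual scores.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Placeholder pre/post scores for one outcome measure in each group
ag_pre, ag_post = rng.normal(50, 10, 10), rng.normal(58, 10, 10)  # auditory group, n = 10
pg_pre, pg_post = rng.normal(52, 10, 7), rng.normal(54, 10, 7)    # phonological group, n = 7

# Intra-group analysis: paired pre vs. post comparison (Wilcoxon signed-rank test)
w_ag = stats.wilcoxon(ag_pre, ag_post)
w_pg = stats.wilcoxon(pg_pre, pg_post)

# Inter-group analysis: compare the improvement (post - pre) between groups (Mann-Whitney test)
gain_ag, gain_pg = ag_post - ag_pre, pg_post - pg_pre
mw = stats.mannwhitneyu(gain_ag, gain_pg)

alpha = 0.05
print(f"AG intra-group p = {w_ag.pvalue:.3f}, PG intra-group p = {w_pg.pvalue:.3f}")
print(f"Inter-group (gains) p = {mw.pvalue:.3f}, significant = {mw.pvalue < alpha}")
```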
DISCUSSION
The purpose of this study was to compare the impact of a non-linguistic auditory and a phonological intervention approach on the phonological skills of children with SSD. Before discussing the present results, it is important to consider the characteristics of the groups, specifically their age and pre-training performance on the phonological tasks. Although no significant differences were observed with regard to age, there was a difference of ∼1 year between the groups (the children in the phonological intervention group having the higher mean age). Although several studies have corroborated the hypothesis of a critical period for learning (Knudsen, 2004), a difference of 1 year is insufficient to produce significant differences in the way the learning process occurs, especially when comparing 7- and 8-year-olds. One previous study, for instance, observed a significant difference between the gains following auditory training only between a younger group (ages 7-10) and an older group (ages 11-14). However, the age difference in our study possibly influenced the pre-intervention performance on the phonological and short-term memory tasks. This result is expected given that, even in children with SSD, these two skills improve with development (to some extent). Accordingly, specifically for the imitation of words task, the phonological group had a significantly better performance than the auditory group; however, the difference between the groups on the short-term memory task was not significant. The implications of the phonological group's performance on the phonological tests will be discussed further, together with the comments concerning the improvement following training on the same tests. Regarding gender, both groups contained a higher number of boys, which corroborates previous research on the higher prevalence of SSD in boys (Shriberg et al., 1986, 1994; Wertzner and Oliveira, 2002). The intra-group analysis demonstrated that, although no significant improvement following training was observed for the phonological group, the auditory group showed significant gains in both auditory measures, in one of the visual and one of the auditory attention measures, as well as in the digit span measure. Regarding the auditory group, the improvements on both the FPT and the GIN test were expected, because the trained task in the auditory intervention approach is similar to both of these outcome measures. Thus, this improvement is likely to represent mid-transfer, that is, the generalization of learning from the trained task to a different task in the same domain. Other studies, like the present research, have also demonstrated improvements following a non-linguistic auditory intervention approach on tasks similar to the trained one (Kujala et al., 2001). Kujala et al. (2001), for instance, used non-linguistic audiovisual computer training, with sound elements varying in pitch, duration, and intensity, in reading-impaired children. After training, improvements in a behavioral auditory frequency discrimination task were demonstrated, corroborating the results of the present research. Another study applied frequency discrimination training in children with dyslexia; after training, there was a significant improvement in the trained group on a task similar to the trained one.
Despite the improvement of the auditory group on both auditory-sensory measures, no significant improvement was observed on the language tasks, suggesting no generalization from non-linguistic auditory tasks to higher phonological skills. Previous research has demonstrated that this is a controversial topic. Some studies have observed improvements in verbal skills after auditory training (Kujala et al., 2001; Lakshminarayanan and Tallal, 2007), whereas others failed to show the same results (Halliday et al., 2012). Kujala et al. (2001), for instance, implemented an audiovisual training program including only non-linguistic stimuli for a group of 7-year-old dyslexic children (n = 24). The results showed that, whereas before training there were no differences in performance on reading tests between the "trained" and "untrained" groups (both composed of dyslexic children), after training the "trained" group had better results than the "untrained" group. Electrophysiological auditory tests also showed similar results: larger amplitudes of the mismatch negativity wave were seen after training. The researchers suggested that non-linguistic auditory training, such as that used in the current research, can improve reading skills. In contrast, in a study conducted by Halliday et al. (2012), no learning generalization across different tasks or stimuli was found when different types of sensory training were given (auditory frequency discrimination, auditory phonetic discrimination, and visual frequency discrimination tasks). The authors concluded that learning following auditory training was specific to the task or stimulus. Most likely, these controversial results are due to methodological differences among the studies, such as the training delivered (amount of training, type of task and stimulus), the outcome measures (how far from the trained task the effect extends), and the population (typically developing children or those with language disorders). Regarding the length and intensity of the training, for instance, we administered both training approaches over 12 sessions of 45 min each (one per week, totaling 12 weeks), whereas Kujala et al. (2001) administered 14 sessions of 10 min (twice per week, totaling 7 weeks) and Halliday et al. (2012) administered 12 sessions of 30 min (three times per week, totaling 4 weeks). Although Halliday et al. (2012) provided the most intensive training, no generalization was observed from the auditory stimulus or task to a higher-level measure of language ability. One possible explanation was provided by Molloy et al. (2012), who claimed that optimal training regimens should have short sessions spaced by several days in early learning, as done by Kujala et al. (2001), which was the only study that demonstrated learning transfer from the non-linguistic stimuli to language skills. Despite the lack of generalization from the trained tasks to language skills, the intra-group analysis demonstrated improvements in the short-term memory and attention outcome measures. This result suggests a positive benefit of training on the attention and memory skills of children with SSD; moreover, it demonstrates the influence of an auditory-sensory intervention on top-down skills. As in the present research, previous studies have also reported enhanced attention skills following auditory-sensory training in different populations (Stevens et al., 2008; Soveri et al., 2013). Stevens et al.
(2008) demonstrated better selective auditory attention performance following Fast ForWord (FFW) training in children with SLI, suggesting that the neural mechanisms of selective attention are remediated through training. Soveri et al. (2013) also demonstrated improved auditory attention in healthy adults, suggesting that auditory training can modulate the allocation of auditory attention in the adult population. It is also important to note that, in the current research, the improvement in short-term memory seemed to be insufficient to enhance phonological skills. Such a transfer might be expected, given that poor phonological representations of speech sound systems are often attributed to deficits involving memory skills (Bird and Bishop, 1992; Raitano et al., 2004; Kenney et al., 2006). Because short-term memory improvements were observed only in the intra-group analysis, additional studies are necessary to investigate this result further. Contrary to the auditory group, the phonological group exhibited no improvement after training on the auditory-sensory measures. This result was expected, given that the tasks included in the phonological intervention approach did not have a close or even an underlying relationship with these auditory-sensory measures. However, the lack of improvement on the phonological tasks was not expected, because the phonological training tasks were similar to the phonological outcome measures; it would therefore have been reasonable to expect a more pronounced gain for the phonological group. It is possible that this result is associated with the type of phonological intervention approach adopted in this study. As noted above, the phonological intervention approach consisted of more general tasks, with no focus on the individual's performance before the intervention (deviant or missing phonemes). Therefore, any improvement in the phonological outcome measures had to rely on learning transfer from this general stimulation to some specific deviant or missing phonological process. Previous studies have demonstrated this generalization when the phonological intervention approach was based on the child's target speech production goals. Lousada et al. (2012), for instance, described the presence of learning generalization in a study evaluating the effectiveness of a phonological intervention approach and an articulation intervention approach in children with SSD. A generalization probe of the trained sound or phonological process to five non-intervention words was used. The authors demonstrated that the children in the phonological group showed greater generalization to untreated words than those who received articulation therapy. The results of the inter-group analysis demonstrated no significant difference between the two groups with regard to improvement on the phonological tests following intervention. One of the issues with this comparison is that the phonological group, compared to the auditory group, had a significantly better performance on the phonological tests before training. Thus, the phonological group had less room to improve, which could have negatively impacted the observation of a greater improvement of the phonological group following intervention. This might therefore be a reason for the lack of a more pronounced gain in the phonological group.
However, in the intra-group analyses, in which both groups were analyzed separately, the phonological group had no significant improvement, even on the phonological awareness task that included manipulation, in which the score obtained prior to intervention was only 67.5%. Thus, at least for this task, there was no ceiling effect, which means that it would have been reasonable to observe a significant improvement following intervention. The initial hypothesis of this study was that each of the interventions would improve performance on the trained tasks (auditory and phonological skills), leading to learning transfer to the associated tasks (language, memory, and attention skills). As previously mentioned, significant improvement on the trained tasks was observed only in the auditory group. We hypothesize that this improvement might be related to the greater similarity between the auditory training tasks and the auditory outcome measures, compared to that between the phonological trained tasks and the phonological tests. Therefore, further studies should investigate the effect of a more specific intervention approach that focuses on specific speech difficulties/phonological processes. That said, previous studies have also demonstrated the positive effect of more general remediation. The auditory program FFW, for instance, is one example of a successful general approach, given that the program encompasses varied skills such as auditory temporal, phonological awareness, and reading skills and is not focused on a single aspect. In this case, research has demonstrated generalization from the trained perceptual aspects to the language skills of children with language disorders (Merzenich et al., 1996; Gaab et al., 2007). Lousada et al. (2012) also described generalization from a trained phonological process to non-trained words. The observed transfer from the auditory training to the attention and memory skills might be related to the different characteristics of the two interventions. Whereas the auditory training was administered via a computer, with fixed audiovisual tasks demanding attention and a limited time to answer, the phonological training was administered by a speech therapist, with more flexible tasks and more time to answer. With regard to the transfer to phonological skills, because no significant enhancement was observed (even with auditory-sensory improvement), the results do not corroborate the initial hypothesis, which associates auditory temporal processing and phonological skills. Therefore, although the non-linguistic auditory intervention approach appears to be the more effective intervention approach, it was insufficient to promote the enhancement of speech production and phonological awareness skills. Further studies are necessary to ascertain the extent to which auditory-sensory processing is involved in the etiology of SSD and in the process of learning generalization across bottom-up and top-down skills. These results are based on preliminary data from 10 participants who received auditory training and seven who received phonological training. It is clear that additional data are needed to confirm and extend these findings. Further research is also required to investigate the presence of a test-retest effect through the inclusion of a control group (non-trained group).
9,080.4
2015-02-04T00:00:00.000
[ "Education", "Linguistics", "Medicine" ]
InAs-based quantum cascade lasers grown on on-axis (001) silicon substrate We present InAs/AlSb quantum cascade lasers (QCLs) monolithically integrated on an on-axis (001) Si substrate. The lasers emit near 8 μm with threshold current densities of 0.92–0.95 kA/cm² at 300 K for 3.6-mm-long devices and operate in pulsed mode up to 410 K. QCLs of the same design grown for comparison on a native InAs substrate demonstrated a threshold current density of 0.75 kA/cm² and the same maximum operating temperature. The low threshold current density of the QCLs grown on Si makes them suitable for photonic integrated sensor implementation. Integrating mid-infrared (IR) semiconductor lasers onto silicon platforms is highly desired for the development of compact, cost-effective, smart sensor systems. 1 The direct epitaxial growth of III-V emitters on silicon substrates is better suited for mass-scale production than their heterogeneous integration, 2 but this monolithic approach is complicated by the difference in the crystal structure and a large lattice mismatch between Si and the III-V materials usually employed in semiconductor lasers. Indeed, there are only a few reports on electrically pumped mid-IR lasers directly grown on Si. GaSb-based quantum well lasers operating at 2 μm in the continuous wave (cw) regime above room temperature (RT) were demonstrated first. 3 More recently, we have also reported the first quantum cascade lasers (QCLs) directly grown on Si. 4 These InAs-based devices emitting near 11 μm exhibited RT threshold current densities Jth of 1.3 kA/cm² and operated in pulsed mode up to 380 K, which was close to the characteristics of QCLs grown on a native InAs substrate. 4 InP-based QCLs grown on a silicon substrate have also been subsequently reported, but their performance was much poorer than that of similar devices fabricated on InP substrates. 5 These lasers emitting at 4.35 μm operated in pulsed mode only up to 170 K with a Jth of 1.85 kA/cm² at 80 K. In order to suppress the formation of antiphase domains (APDs) caused by the different crystal structures of non-polar Si substrates and polar III-V compound epitaxial layers, 6 these mid-IR lasers were all grown on silicon substrates exhibiting a large miscut angle (>6°) with respect to the (001) orientation. This makes them incompatible with the standard industrial process based on on-axis (001) Si wafers. In this letter, we report the first QCLs directly grown on an on-axis silicon substrate. The active zone of the InAs/AlSb QCLs is based on vertical transitions in four coupled quantum wells and resonant phonon extraction. It was designed to emit around 7.5 μm, a region rich in strong absorption features of different molecules. 7 It contains 40 repetitions of the following layer sequence in Å, starting from the injection barrier: 21/69/3/56/3.5/54/4.5/51/7/48/8/46/9/46/12/42/13/39/18/37, where AlSb layers are in bold and the Si-doped layers (n = 6 × 10¹⁶ cm⁻³) are underlined.
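As a quick sanity check of the active-zone geometry quoted above, the short sketch below sums the published layer sequence to obtain the period thickness and the total thickness of the 40-period active region; this is simple bookkeeping on the numbers given in the text, not part of the original design flow.

```python
# Layer sequence of one QCL period, in angstroms, as listed in the text
# (alternating AlSb barriers and InAs wells, starting from the injection barrier).
period_layers_A = [21, 69, 3, 56, 3.5, 54, 4.5, 51, 7, 48,
                   8, 46, 9, 46, 12, 42, 13, 39, 18, 37]
n_periods = 40

period_thickness_nm = sum(period_layers_A) / 10.0              # 1 nm = 10 angstrom
active_zone_thickness_um = n_periods * period_thickness_nm / 1000.0

print(f"Period thickness: {period_thickness_nm:.1f} nm")               # ~58.7 nm
print(f"Active zone (40 periods): {active_zone_thickness_um:.2f} um")  # ~2.35 um
```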
The total electron sheet density in the structure, taking into account the residual doping of InAs (n ∼ 8 × 10¹⁵ cm⁻³, estimated), is considered to be 1 × 10¹¹ cm⁻² per period. The plasmon-enhanced dielectric waveguide of the laser was formed by 2-μm-thick cladding layers made of n-InAs doped with Si to n = 4 × 10¹⁸ cm⁻³, separated from the active zone by undoped InAs spacers of the same thickness in order to reduce the overlap of the guided mode with the absorbing doped material. The electromagnetic modeling of the guided modes, using a finite element solver, gives an overlap of the fundamental mode with the active region of Γ = 55% and a waveguide loss of α = 3 cm⁻¹ for an empty waveguide. The QCL structure (EQ609) was grown in a RIBER 412 solid-source molecular-beam epitaxy (MBE) reactor equipped with As and Sb valved cracker cells. InAs and AlSb layers were grown using deposition rates of 3.03 Å/s and 1 Å/s, respectively, and a V/III flux ratio of 2 on a GaSb-on-Si template prepared in a separate growth run as follows. Prior to growth, the (001) Si substrate, exhibiting a ∼0.5° residual miscut, was annealed at 1000 °C for 10 min under ultra-high vacuum in a dedicated preparation chamber before being vacuum transferred into the III-V epitaxy chamber, where a 1.5 μm GaSb buffer layer was grown underneath the QCL structure. This GaSb-on-Si template was inspected before being reloaded into the MBE reactor for growth of the QCL structure. After thermal oxide removal from the GaSb surface under Sb2 flux at 560 °C, the growth of the full QCL structure was performed at a temperature of ∼450 °C. In addition, a similar QCL structure (EQ746) was grown under the same conditions in a separate growth run on an InAs substrate for the sake of comparison. The surface morphology of the QCL structure grown on Si was evaluated by atomic force microscopy (AFM) and scanning electron microscopy (SEM). The corresponding images are shown in Figs. 1 and 2(a), respectively. The surface morphology of the wafer was much better than that of the previous QCLs grown on a Si substrate with a 6° miscut [Fig. 2(b), sample from Ref. 4], but the surface was nevertheless quite rough. The rms roughness measured by AFM is 9.9 nm. On the other hand, no antiphase domains were visible in the AFM images, which is ascribed to the high-temperature preparation of the Si substrate. The grown wafers were processed into ridge lasers using wet etching and standard photolithography, with the ridge width w being varied between 9 μm and 17 μm. Hard-baked photoresist was employed for electrical insulation. The laser ridges were etched down to the bottom n+-InAs cladding layer. Electrical contacts were formed on the top of the ridges and on the etched part of the wafer using non-alloyed Ti/Au metallization. The Si substrate was thinned down to 50 μm by mechanical polishing, and the wafer was cleaved to form 3.6-mm-long Fabry-Perot lasers. The lasers were then soldered with indium on copper heatsinks, wire-bonded, and tested in pulsed mode (333 ns, 12 kHz). Emission spectra of the devices were measured using a Bruker V70 infrared Fourier transform spectrometer (FTIR). No special selection was made when choosing the lasers for measurement. Electrical contacts to the devices fabricated from the QCL wafer grown on InAs were taken on the top of the ridges and at the back of the substrate.
The QCLs grown on Si had RT pulsed threshold current densities in the 0.92-0.95 kA/cm² range and operated up to a temperature of 410 K (Fig. 3). The reference devices grown on InAs exhibited threshold current densities around 0.75 kA/cm² at 300 K and also operated up to 410 K (Fig. 4). The temperature dependence of the threshold current density of the tested QCLs is shown in Fig. 4. Straight lines in this figure indicate a slope corresponding to a characteristic temperature of the exponential dependence T0 = 125 K, identical around RT for lasers grown on both InAs and Si substrates. However, above 340 K, Jth of the QCLs grown on Si increased more slowly with temperature. Although dedicated aging studies will be needed to assess device robustness, we noticed that the threshold current density of the lasers measured at 300 K did not change after the high-temperature characterizations around 400 K. Typical emission spectra are shown in the inset of Fig. 4. The lasers emitted at wavelengths of 7.7 μm and 8.0 μm at 300 K for the devices grown on InAs and Si, respectively. In general, the operation of InAs/AlSb QCLs grown on silicon can be affected by antiphase boundaries (APBs), delimiting APDs, and by threading dislocations originating from the large (≈11%) difference in the lattice parameters of InAs (and AlSb) and Si. A degradation of the performance is therefore expected in these devices compared with QCLs grown on native InAs substrates. The procedure used for preparing the GaSb-on-Si template allowed us to ensure full annihilation of APBs within the 1.5-μm-thick GaSb buffer layer sitting well below the QCL active zone, the absence of emerging APBs being confirmed by AFM during preliminary studies. However, the unavoidably high density of threading dislocations (estimated to be in the 10⁸ cm⁻² range from preliminary transmission electron microscopy investigations) does penalize the device performance. Yet, the studied QCLs grown on silicon exhibited RT threshold current densities below 1 kA/cm², only 22% higher than the reference lasers. Both the Jth values and their deviation from the values observed for identical devices grown on InAs are smaller than those of the previous InAs/AlSb QCLs grown on an off-axis Si substrate, where a 30% increase in Jth was observed. 4 In lasers with short resonators, this difference was even smaller, which was explained by a higher optical gain due to smoother interfaces in the structure grown on the off-axis substrate, which favored a step-flow regime of MBE growth. This argument is probably still valid for the lasers studied in this work, since the on-axis Si substrate exhibited a residual 0.5° misorientation that can stimulate the step-flow growth mode, thus reducing interface scattering in the QCL structure. The smaller difference in Jth at high temperatures can be considered an indication of a higher gain in the new lasers grown on Si compared with the devices grown on InAs. However, the most likely explanation of the high QCL performance achieved in this work is the much better crystalline quality of the wafer grown on the on-axis Si substrate (Fig. 2). The Jth degradation in QCLs grown on Si can be due to a broadening of the gain curve caused by non-homogeneity of the layer thicknesses in the active zone of the device. As suggested in Ref. 4, in InAs/AlSb QCLs this effect is quite weak because of the opposite influence of fluctuations of the AlSb barriers and InAs wells on the transition energy in the case of their in-phase variation.
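The characteristic-temperature description used above can be made explicit with a small sketch: assuming the usual empirical dependence Jth(T) = Jth(Tref)·exp((T − Tref)/T0), the values below (T0 = 125 K and the room-temperature thresholds quoted in the text) extrapolate the threshold current density to higher heatsink temperatures. The extrapolation is indicative only, since the text notes deviations from this dependence above 340 K.

```python
import math

def jth(T, jth_ref, T_ref=300.0, T0=125.0):
    """Empirical exponential dependence Jth(T) = Jth_ref * exp((T - T_ref) / T0)."""
    return jth_ref * math.exp((T - T_ref) / T0)

# Room-temperature thresholds quoted in the text (kA/cm^2)
jth_si_300, jth_inas_300 = 0.93, 0.75   # Si-grown (mid-range) and InAs-grown reference

for T in (300, 340, 380, 410):
    print(f"T = {T} K: Jth(Si) ~ {jth(T, jth_si_300):.2f} kA/cm^2, "
          f"Jth(InAs) ~ {jth(T, jth_inas_300):.2f} kA/cm^2")
```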
In order to verify this assumption, we measured spontaneous emission spectra of the devices studied in this work. The spectra were measured at 300 K at a low current density of 0.4 kA/cm², corresponding to about 50% of Jth for these devices, in order to avoid narrowing due to optical amplification (Fig. 5). The full width at half maximum of the spectra, extracted from Lorentzian fits of the data, was comparable for both devices: 12.5 meV and 13.5 meV for the lasers grown on InAs and Si, respectively. Measurements on samples without resonators are necessary for a more reliable analysis of the emission linewidth, but the observed trend confirms the conclusion made in Ref. 4. Another reason for the observed increase in Jth of the QCLs grown on silicon could be higher optical losses due to additional absorption at crystal imperfections. This mechanism can similarly explain the weaker performance degradation in short devices observed in Ref. 4. The reference lasers EQ746 mounted epi-side down operated in the cw regime at temperatures up to 30 °C. At high temperatures, the QCLs EQ609 grown on Si exhibit Jth close to the characteristics of the devices grown on InAs, and they should thus be able to operate in the continuous wave regime near RT, provided a suitable heat dissipation scheme, such as thick gold plating, is implemented. In summary, we demonstrated InAs/AlSb quantum cascade lasers monolithically integrated on an on-axis (001) Si substrate. At room temperature, the lasers emitted near 8 μm and exhibited threshold current densities below 1 kA/cm², only 22% higher than reference QCLs of the same design grown on a native InAs substrate. The operating temperature of the QCLs grown on silicon reached 410 K, demonstrating the same performance as the reference devices. The low threshold current density of these devices makes them suitable for photonic integrated sensor implementation. Even though the substrate preparation process is not yet fully CMOS compatible at this stage, various remotely controlled sensor chips can already be envisioned based on the co-integration of QCLs and quantum cascade detectors on the same Si photonic circuit or III-V-on-Si photonic circuit.
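The FWHM values quoted above come from Lorentzian fits of the spontaneous-emission spectra; a generic fitting sketch with SciPy is shown below, using synthetic data in place of the measured spectra (the amplitude, centre energy and width are arbitrary placeholders, not the paper's data).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(E, A, E0, gamma):
    """Lorentzian line shape; gamma is the full width at half maximum (FWHM)."""
    return A * (gamma / 2) ** 2 / ((E - E0) ** 2 + (gamma / 2) ** 2)

# Synthetic 'spectrum': emission centred near 160 meV with ~13 meV FWHM plus noise
rng = np.random.default_rng(1)
E = np.linspace(120, 200, 400)                        # photon energy, meV
spectrum = lorentzian(E, 1.0, 160.0, 13.0) + rng.normal(0, 0.02, E.size)

popt, _ = curve_fit(lorentzian, E, spectrum, p0=(1.0, 160.0, 10.0))
print(f"Fitted FWHM = {popt[2]:.1f} meV")
```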
2,948.4
2020-04-01T00:00:00.000
[ "Physics", "Engineering" ]
Analog Vector-Matrix Multiplier Based on Programmable Current Mirrors for Neural Network Integrated Circuits We propose a CMOS Analog Vector-Matrix Multiplier for Deep Neural Networks, implemented in a standard single-poly 180 nm CMOS technology. The learning weights are stored in analog floating-gate memory cells embedded in current mirrors implementing the multiplication operations. We experimentally verify the analog storage capability of the designed single-poly floating-gate cells, the accuracy of the multiplying function of the proposed tunable current mirrors, and the effective number of bits of the analog operation. We perform system-level simulations to show that an analog deep neural network based on the proposed vector-matrix multiplier can achieve an inference accuracy comparable to digital solutions, with an energy efficiency of 26.4 TOPs/J, a layer latency close to 100 μs and an intrinsically high degree of parallelism. Our proposed design also has a cost advantage, considering that it can be implemented in a standard single-poly CMOS process flow.
I. INTRODUCTION
The increasing requirement for cognitive capabilities in electronic systems is driving research toward highly efficient and dense specialized hardware to implement Deep Neural Networks (DNNs). Migration toward architectures beyond the Von Neumann paradigm and towards in-memory computation may lead to an improvement in terms of Energy Efficiency (EE), defined as the ratio of the number of elementary operations to the energy consumed to perform these operations, and of throughput, i.e. the number of elementary operations performed per unit time. In the implementation of a DNN, the most recurring complex operation is the vector-matrix multiplication, i.e. the multiplication of a vector of features (e.g. the input of a layer) with a matrix of learning weights, which are constant quantities during the inference phase. The large number of multi-bit elementary arithmetic operations performed by the vector-matrix multiplier (VMM) and the heavy data exchange between the memory and logic elements represent the main limiting factors of both EE and throughput in conventional digital CPU architectures [1]-[4]. The recurring nature of these arithmetic operations can be exploited by taking advantage of the parallel computing capability of GPUs [5] and of embedded ASIC accelerators [6], [7]. Parallelism in computation and in memory access can be better exploited through in-memory computing architectures, consisting of a large number of modularized processing elements distributed in space and operating in parallel, where each processing element contains both the logic and the memory needed to perform the assigned partial processing task. In this context, analog circuits enable the implementation of in-memory computing architectures where analog computations are performed by exploiting fundamental circuit laws and device properties. Analog processing blocks are usually affected by circuit nonidealities such as noise, non-linearity and process variations. However, their finite precision can be well tolerated by the inherent capabilities of neuromorphic networks, which feature a high tolerance of functional parameter variations [8] and of limited precision [9].
In this paper we focus on the design, operation and experimental validation of an analog VMM realized by means of an array of tunable conversion-factor Current Mirrors (CMs) based on single-poly floating-gate (FG) cells, as illustrated in Fig. 1. In each tunable CM, the current conversion ratio Iout/Iin can be interpreted as the weight associated with the charge stored in the FG. The FG cell is formed by an n-type MOSFET (nMOS) and a p-type MOSCAP (pCAP) sharing an isolated polysilicon gate. The multiplier is realized in a standard 180 nm single-poly CMOS technology, using devices with a 3.3 V nominal voltage domain realized with a thick gate oxide (∼7 nm), typically required to achieve the ten-year retention time adequate for a non-volatile memory. Single-poly FG cells have been designed and fabricated. In particular, we have experimentally verified the possibility of programming an analog weight with a current conversion ratio equivalent to a nominal 8-bit integer. We have performed system-level simulations of trained DNNs, using parallel VMMs to implement both fully-connected and convolutional layers. The inference accuracy of the same network operated either with floating-point precision or with reduced-bitwidth fixed-point precision was compared. This analysis has been repeated for a simple DNN purposely designed to classify the MNIST dataset [10], as well as for AlexNet [11] employed for ImageNet [12] dataset classification. We have verified that a reduced bitwidth can allow inference accuracy comparable to the original network, with a minimum number of equivalent bits that is a function of the particular application (dataset and DNN architecture). We have then selected a 6-bit specification to design an analog CM-based VMM and have proposed a general design flow applicable to different CM topologies. We demonstrate, with experiments and simulations, the operation and performance of the CM-VMM. The best option exhibits an energy efficiency of 26.4 TOPs/J and a layer latency of 100 µs. A 100 × 10 VMM has an area of 0.868 mm² and a throughput of 19.9 MOPs/s, with each multiplying cell of the matrix occupying a layout area of 85.5 µm². The remainder of this paper is organized as follows. In Section II we present a discussion of the background of this work, reviewing approaches that use CMOS analog circuits to implement neuromorphic building blocks. In Section III we present the CM-VMM basic principle and introduce its main figures of merit (FOMs). Experimental results measured on silicon demonstrators are shown in Section IV, proving the analog multi-level storage capability of single-poly FG cells. Measurements on an experimental proof-of-concept of a programmable CM multiplier are also shown. Then, in Section V, four possible implementations of the FG CM-VMM are designed and compared, in order to choose the best CM topology for the implementation of an FG-cell CM for a given ENOB specification. Our chosen design is finally benchmarked against state-of-the-art VMMs in Section VI. The conclusions of the paper are drawn in Section VII.
II. BACKGROUND
As far as DNNs are concerned, it has been shown that digital approaches with fixed-point data representation can provide classification accuracy comparable to floating-point computation [13].
In addition, due to the intrinsic resilience of DNN algorithms to noise and uncertainty [8], data representation based on a limited number of bits reduces the arithmetic complexity of the processing elements, leading to an improvement of both power consumption and computing time, possibly without losing classification accuracy [9]. In this regard, different approaches have been proposed, for instance relying on a reduced bitwidth of the weights [14], of both the weights and the activation function [15], or on implementing the entire network with limited data precision [16]. As discussed in the introduction, this consideration opened the opportunity to exploit analog computing circuits in implementing DNN blocks. Several papers have proven the capability of analog computing elements to achieve an acceptable trade-off between algorithmic accuracy and numerical precision. Analog solutions are also suitable for implementation in an in-memory circuit architecture [17], [18], avoiding costly memory accesses. In addition, analog data might be stored in an analog non-volatile memory. Innovative non-volatile memory solutions such as Resistive Random Access Memories (RRAMs) have been proposed in the literature for this task, including oxygen-vacancy memory (OxRAM) [19], conductive-bridging memory (CBRAM) [20], and spin-transfer torque magnetic memory (STT-MRAM) [21]. However, the intrinsic variability of OxRAMs and CBRAMs makes them unsuitable for very-large-scale integration; on the other hand, despite the high industrial maturity of STT-MRAMs, they are intrinsically bistable and are therefore not suitable as analog non-volatile memories, which would require continuous tuning. In fact, simulations of DNNs based on RRAMs have recently been proposed [22]-[26], but the lack of experimental demonstrators suggests that viable alternatives must be investigated. A worthy option is the industry-standard double-poly embedded FG memory cell, which has been proposed for similar applications [27], [28]. In fact, it can rely on the fine tuning of the stored charge (up to 4-bit single-transistor memory cells have reached the market, with a tunable 16-level threshold voltage and 10-year retention time [29]). However, the double-poly process flow is relatively expensive and the geometry of each single cell cannot be independently modified by designers, since the layout of an FG array is generally provided as foundry intellectual property [27], [30], [31]. An interesting option is to use single-poly embedded non-volatile cells, where the FG can be realized with a floating polysilicon area shared between two planar MOS devices, at the cost of a larger area occupation [18], [32], [33] than in the double-poly case. Different techniques have been proposed to perform a vector-matrix multiplication in the analog domain: time-domain approaches [22], [34] and current-mode sum operation [18], [27], [32], [33], [35]. Current-mode operation can be implemented by relying on the addition performed through Kirchhoff's current law; the currents resulting from the weight multiplication of different inputs are added by letting all of them flow into the same node.
III. CURRENT MIRROR VMM BASIC PRINCIPLE
The basic principle of an analog VMM implemented with CMs and the representative FOMs used in this paper are discussed in this section. In subsection III-A, the concept of CMs with tunable conversion factors used as current multipliers is introduced.
A discussion of the VMM operation is then proposed, emphasizing nonidealities in terms of both linearity and noise immunity, and their impact on the maximum achievable accuracy. In subsection III-B, FOMs normally used for generic analog-to-digital converters (ADCs), such as the Signal-to-Noise And Distortion ratio (SINAD) and the Effective Number Of Bits (ENOB), are introduced and matched to the particular VMM design parameters.
A. CURRENT-MIRROR VMM BASIC PRINCIPLE
Fig. 1(a) sketches the architecture of an analog current-mode M×N VMM, with M input currents (Iin,(i) is a generic input, for i = 1...M), M×N weights and multiplying blocks in the matrix (each element is w(i,j)), and N output currents (a generic output is Iout,(j), for j = 1...N). Each input signal is applied to all the matrix cells in the same row, where the multiply operation is performed between each row input and the corresponding weight in the cell, according to Iout,(i,j) = w(i,j) × Iin,(i) (1). The output of the column is then obtained by summing over all terms to implement the scalar product operation Iout,(j) = Σ(i=1..M) w(i,j) × Iin,(i) (2). The basic VMM implementation proposed in this paper is detailed in Fig. 1(b), which shows the CM approach: the current entering an ''input cell'' (block in red) is multiplied by a scaling factor by a ''multiplying cell'' (in blue) and provided as an output current, while all the currents of the same column are summed at the same circuit node. An additional p-type CM (in grey) is added at the top of each column to provide Iout,(j) with the appropriate direction. The storage capability of the multiplying cell associated with a generic w(i,j) is obtained via an FG cell, implemented by an nMOS sharing an FG with a pCAP. By relying on specific programming and erasing schemes, charge can be added to or removed from the FG. The net charge in the FG results in a shift ΔVth,(i,j) of the threshold voltage, which determines the current magnification factor (i.e. the weight). For a given input current Iin,(i), if the nMOS is operated in the subthreshold region, the corresponding output current Iout,(i,j) depends exponentially on ΔVth,(i,j). Ideally, we have Iout,(i,j) = Iin,(i) × exp(−ΔVth,(i,j)/(n·VT)) (3), where the exponential term represents the ideal weight, w(i,j) = exp(−ΔVth,(i,j)/(n·VT)) (4), with n the subthreshold slope factor and VT the thermal voltage. Beyond enabling a wide range of variation of the output of the multiplying operation, the sub-threshold operation regime is also beneficial for reducing power consumption [27], [31], [32], [35], [36]. Practical CMs do not exhibit the ideal behavior described by Eq. (3). Indeed, one should note the different VDS of the input and multiplying cells. The small output resistance of short-channel devices can thus degrade the linearity. This weakness can be worsened if devices with poor electrostatics are used, due to the finite pCAP capacitance. An additional degradation arises if the input current becomes too low, due to poor transistor saturation when the VDS of the diode-connected nMOS input cell becomes comparable to VT. The non-linearity can be described in terms of Total Harmonic Distortion (THD). Another root cause of precision degradation comes from the intrinsic noise sources of the devices implementing the CM. The Signal-to-Noise Ratio (SNR) of the CM output current increases with the input current and depends inversely on the bandwidth [35]. Furthermore, for short-channel devices, it decreases with the square of the channel length [37].
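To make the weight-programming relation of Eqs. (3)-(4) concrete, the sketch below converts a target current-conversion ratio into the required threshold-voltage shift using the inverse subthreshold slope; with the ~90 mV/dec slope reported later in the paper, a weight of 1/256 maps to a shift of roughly 215 mV, consistent with the measurements discussed in Section IV. This is a back-of-the-envelope aid under the ideal-mirror assumption, not a device model.

```python
import math

def delta_vth_for_weight(weight, ss_mV_per_dec=90.0):
    """Threshold shift (mV) needed to program a subthreshold CM weight w = Iout/Iin.

    Assumes the ideal relation w = 10**(-dVth / SS), equivalent to w = exp(-dVth / (n*VT)),
    with SS the inverse subthreshold slope in mV/decade.
    """
    if not 0 < weight <= 1:
        raise ValueError("weight must lie in (0, 1] for an attenuating mirror")
    return -ss_mV_per_dec * math.log10(weight)

def weight_for_delta_vth(dvth_mV, ss_mV_per_dec=90.0):
    """Inverse relation: weight obtained for a given threshold shift."""
    return 10 ** (-dvth_mV / ss_mV_per_dec)

print(f"dVth for w = 1/256: {delta_vth_for_weight(1 / 256):.0f} mV")   # ~217 mV
print(f"w for dVth = 500 mV: {weight_for_delta_vth(500):.2e}")         # ~2.8e-6
```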
Provided that in analog circuits both noise and nonlinearity can severely impact the accuracy of the analog function, distortion and noise nonidealities are normally considered together within the Signal-to-Noise And Distortion ratio (SINAD) [38], which depends on SNR and THD as in Eq. (5): SINAD = −10·log10(10^(−SNR/10) + 10^(THD/10)) (5). THD, SNR and SINAD are all expressed in dB and their definitions are given in Appendix A.
B. FIGURES OF MERIT FOR ANALOG MULTIPLIERS
When DNNs consisting of multiple layers are considered (e.g. AlexNet [11]), the VMM arrays become the dominant functional blocks in the system and the main factor determining total area occupation and power consumption [11]. The design of an efficient analog VMM then involves different trade-offs among performance (throughput), EE, computation accuracy, and area occupation. To provide a comparison with DNNs implemented in digital architectures, FOMs for analog VMMs are normally expressed in terms of elementary operations, such as P-bit (where P is the bit-width) multiplications and additions. An M×N VMM includes N columns of M-sized multiply-and-accumulate (MAC) operations, as shown in Eq. (2). We consider M multiplications and M−1 additions per MAC, corresponding to a total of (2M−1)×N elementary operations in a VMM. The (2M−1)×N elementary operations are performed in parallel in a VMM; the throughput is then given by the ratio of (2M−1)×N to the worst-case time Top needed by the CM multiplier to provide an output current corresponding to the expected result (within a confidence interval dependent on the assumed accuracy) in response to an input current step. The EE is then calculated as the ratio of the (2M−1)×N parallel operations to the average energy consumed by the VMM to perform a vector-matrix multiplication (i.e. the consumed power integrated over Top). The energy is extracted using actual trained weights and is averaged over a number of operations, each corresponding to an input array related to an actual input of the tested database (i.e. test images in the case of MNIST or ImageNet). The accuracy of an analog VMM can be described by the SINAD, which can be related to linearity and noise immunity. In order to enable an intuitive comparison between the precision of an analog function and its digital counterpart, we can use the Effective Number of Bits (ENOB), linked to SINAD as [38]: ENOB = (SINAD − 1.76)/6.02 (6). Fig. 2 plots the resulting ENOB, computed using Eq. (6), as a function of SNR and THD. SINAD, and therefore ENOB, is generally limited by the smaller of SNR and −THD. This plot is relevant in the choice of design trade-offs, since in several cases both SNR and THD play a role in the accuracy of an analog function. In fact, in most cases they should be balanced in order to obtain a fine-grained optimization of the ENOB. In the definition of the ENOB given by Eq. (6), it is assumed that a sinusoidal input signal spanning the full scale of the ADC input swing is used. Similarly, these definitions can be adapted to an analog circuit, where the SNR includes all the noise sources affecting the circuit, while the THD accounts for the nonlinear behavior of its transfer function. For consistency, in our study we use a sine waveform for the input current spanning between 0 and the target maximum input current Iin,MAX, also referred to as the full scale (FS) hereafter. When the ENOB characterization is performed with a unitary weight (i.e. Iout,MAX = Iin,MAX), the FS current levels of Iin are also spanned by the Iout waveform.
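The figures of merit introduced in this section can be collected into a small calculator; the helper below combines SNR and THD into SINAD, converts it to ENOB (including the full-scale projection for weights below one, described in the next paragraph), and evaluates throughput and energy efficiency for an M×N array from the worst-case settling time and the average energy per vector-matrix multiplication. The SINAD and ENOB expressions are the standard ADC formulas the paper refers to; the numerical values (in particular the 75 pJ energy per operation) are illustrative placeholders chosen only to show the formulas at work, not the paper's measured data.

```python
import math

def sinad_dB(snr_dB, thd_dB):
    """Combine SNR and THD (THD given as a negative dB value) into SINAD, all in dB."""
    return -10 * math.log10(10 ** (-snr_dB / 10) + 10 ** (thd_dB / 10))

def enob(sinad_value_dB, weight=1.0):
    """ENOB = (SINAD - 1.76)/6.02, with the '-log2(w)' full-scale projection for weights < 1."""
    return (sinad_value_dB - 1.76) / 6.02 - math.log2(weight)

def vmm_foms(M, N, t_op_s, energy_J):
    """Throughput (OPs/s) and energy efficiency (OPs/J) of an MxN VMM: (2M-1)*N parallel ops."""
    ops = (2 * M - 1) * N
    return ops / t_op_s, ops / energy_J

# Illustrative numbers: SNR = 43 dB, THD = -40 dB, 100x10 array, 100 us latency, 75 pJ per VMM
s = sinad_dB(43.0, -40.0)
print(f"SINAD = {s:.1f} dB, ENOB = {enob(s):.1f} bits")          # ~38 dB, ~6 bits
thr, ee = vmm_foms(100, 10, 100e-6, 75e-12)
print(f"Throughput = {thr / 1e6:.1f} MOPs/s, EE = {ee / 1e12:.1f} TOPs/J")
```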
On the other hand, when the ENOB characterization is performed with a weight < 1, the resulting peak-to-peak value of the sinusoidal output current is Iout,MAX = Iin,MAX × w < FS. To account for this partial sweep of the assumed output full scale, a ''−log2(w)'' correction term is added to Eq. (6) to extract the equivalent full-scale ENOB by projection.
IV. EXPERIMENTS ON SINGLE-POLY FG CURRENT MIRROR VMM
In this section, the electrical characterization of single-poly FG cells fabricated with UMC 0.18 µm CMOS technology is discussed. The analog storage capability, with a possible current resolution larger than 8 bits (i.e. Iout,(i,j)/Iin,(i) < 256⁻¹), is first demonstrated. Then, a simple CM multiplier implemented with these cells is measured at different stored-weight conditions. A good match between experiments and simulations is demonstrated. With reference to the CM implementation shown in Fig. 1(b), the non-ideal coupling between the pCAP and the nMOS enhances the asymmetry between the input cell and the multiplying cell, which is already present due to the difference in VDS. This asymmetry leads to a linearity degradation, which could be avoided by using a very large pCAP (so that ApCAP ≫ AnMOS), resulting in an almost ideal coupling factor; however, this cannot be recommended, for obvious area reasons. A better option to increase the symmetry is the use of an additional pCAP in the input cell. In this case, the FG of the input cell is not used to store data but only for electrostatic symmetry. Experimental data for a symmetric CM are reported in Fig. 3. The CM is realized with 0.5-µm-long nMOS transistors sharing the floating poly with a pCAP whose area is 49 times larger than the nMOS gate area, while the control gate (CG) is the n-well hosting the pCAP, shorted with the p-diffusions implementing its S/D regions. All transistors have a 3.3 V nominal voltage. Fig. 3(a) and (b) report the voltage levels used for the program and erase operations, which are both possible by applying positive voltage pulses to the CG and D terminals, activating different gate injection phenomena in agreement with [39]: for a VDS in the range 4.5 V to 6.5 V, at high VCG−S voltages (>3 V) both channel hot-electron injection (CHEI) and impact-ionized hot-electron injection (IHEI) lead to an increase of the equivalent Vth, while impact-ionized hot-hole injection (IHHI) is the dominant mechanism at relatively low VCG−S voltages (e.g. 1 V to 1.5 V), leading to a Vth variation in the opposite direction. This means that the threshold voltage can be moved in both directions without the need to design complicated circuitry to generate the negative voltage levels normally needed to reset FG memory cells. It is important to highlight that this flexibility is possible only for the multiplying cell, given that in the input cell the CG and the D are short-circuited. This issue is not really critical, since the current conversion ratio (i.e. the weight of the CM) depends on the Vth difference between the input and multiplying cells; thus, a possible charge in the input-cell FG can be compensated by offsetting the charge to be added to the multiplying one. Measurements for a typical cell are shown in Fig. 3(c), for both the input and multiplying cells. The ID-VGS transfer characteristics (the gate being the CG, since the FG is not accessible) were measured.
The input cell was measured with the FG discharged (symbols), while the multiplying cell was characterized at different stored-charge conditions (lines). A threshold voltage shift ΔVth larger than 500 mV has been verified, although a few hundred mV are enough to enable a sufficient conversion factor, considering an average inverse subthreshold slope of 90 mV/dec in the current range upper-limited by 20 nA (e.g. ΔVth ≈ 215 mV for a weight of 256⁻¹, i.e. 8 equivalent bits). For the same weights as programmed in Fig. 3(c), the CM was tested by providing an input current swept in the range 0.2 nA to 20 nA. The resulting output current is shown in Fig. 3(d) and post-processed to calculate the error and the corresponding ENOB in Fig. 3(e) and (f), respectively. Similar data have been extracted from transient noise simulations performed with UMC 0.18 µm PDK models. The match between simulated and experimental data is quite good. Since the gate current is not implemented in the transistor models, in the simulations we used an ideal pulsed current source to inject the needed charge into the FG. Finally, Fig. 4 demonstrates the operation of a 2 × 1 CM-VMM, implemented with two separate input cells driving two multiplying cells whose weights are independently set to various conditions and whose currents are summed, implementing the operation IOUT = wA × Iin,A + wB × Iin,B.
V. OPTIMIZATION OF FG CURRENT MIRRORS
After the demonstration of an experimental proof-of-concept of an analog programmable CM multiplier, there is the need to better understand how to optimize the design of the CM in order to meet a desired precision specification. We have selected an ENOB of 6 bits as the reference specification for the remainder of this study, considering it a good trade-off between precision and cost of the VMM function (in terms of silicon area and power consumption). In Section VI, by considering a simple DNN case study trained on the MNIST database, we verify that with 6 bits the inference accuracy loss is almost negligible compared to higher resolutions. However, the choice of a 6-bit ENOB does not affect the generality of our analysis. The input current FS, the transistor sizing and the CM topology are design knobs that determine the final performance of the VMM. Concerning the topology, we have already suggested the possible improvement provided by a symmetric CM. In addition, feedback can also be exploited in order to improve the linearity of the analog multiplier. For instance, a cascode CM topology relies on two additional transistors to regulate the VDS of the multiplying cell, forcing it to follow that of the input cell. The four topology options reported in Fig. 5 have been considered, consisting of the asymmetric and symmetric versions of the simple CM ((a) ASCM and (b) SSCM, respectively) and of the cascode CM ((c) ACCM and (d) SCCM, respectively). Symmetric and cascode solutions require additional transistors. For instance, in a fixed M×N VMM, there will be M additional pCAPs in the input-cell array for a symmetric solution with respect to the asymmetric one, or M×(N+1) additional nMOS transistors for a cascode CM topology with respect to the simple one.
It is important to highlight that it is not obvious that the additional transistors required by the more complicated topologies will result in a larger area occupation, considering that solutions with a reduced number of transistors will likely require a different sizing of the cell in order to compensate for the reduced linearity performance (for instance, a much longer channel length L). A detailed discussion of the linearity (THD), the noise immunity (SNR), and the resulting ENOB trends as a function of the input current full scale, the supply voltage VDD, and the transistor sizing, as well as a suggested design flow to properly set the W and L sizes of the CM transistors, can be found in Appendix B. In Fig. 6(a) and (c), THD and SNR were extracted at an input current FS IMAX of 5 nA, VDD = 1.5 V, and W/L = 1 µm/2 µm, for different pCAP/nMOS coupling ratios, for both the symmetric and asymmetric versions of both the simple (SSCM and ASCM) and cascode (SCCM and ACCM) topologies. The symmetric versions show much better linearity for smaller pCAP/nMOS ratios than their asymmetric counterparts. In addition, the SNR depicted in Fig. 6(c) is almost constant for the symmetric solutions (∼43 dB) down to the minimum considered pCAP/nMOS area ratio, while it shows a sudden degradation with decreasing pCAP/nMOS ratio for the asymmetric options. From Fig. 6(a) we have extracted the minimum pCAP/nMOS ratio (with a margin) that features a THD value of ∼−40 dB for each topology: 49 for the ASCM, 36 for the ACCM, 25 for the SSCM, and 9 for the SCCM. Starting from these four conditions, we have plotted in Fig. 6(b) and (d) the THD and SNR degradation with L scaling. The curves depicted in these plots have been obtained at a fixed normalized input current (with respect to the width-to-length ratio, i.e. Inorm = I × L/W), which basically means that when L is halved the corresponding current is doubled, so that the transistor operating point is maintained in a similar sub-threshold condition (with similar linearity in the case of long-channel devices). As regards the THD trends, both asymmetric options require a longer-channel device than their respective symmetric counterparts, despite the much larger pCAP/nMOS ratio initially selected. The area advantage of using a symmetric solution is therefore twofold (i.e. smaller pCAP/nMOS ratio and shorter L), although a pCAP is also needed in the input cell. In addition, focusing on the symmetric options, it can be observed that the SSCM features a small degree of linearity degradation at extremely short lengths, while the SCCM features a THD value that is optimum at LMIN. This result is attributed to the intrinsic feedback property of the cascode topology, whose action in enforcing similar VDS on the input- and multiplying-cell transistors is an effective workaround for the reduced output resistance of short-channel devices. As regards the SNR shown in Fig. 6(d), a similar behavior is observed for all configurations, with the SNR degrading as L is reduced. However, one should note that the SNR can be independently adjusted by proportionally increasing the transistor width and the input operating current (i.e. at fixed IMAX/W) without impacting the THD (see the related discussion in Appendix B). Table 1 lists the final transistor sizing and occupied area of each topology, each independently designed in order to meet a 6-bit ENOB specification (i.e. SNR and −THD > 40 dB, according to Fig. 2).
Asymmetric multiplying cells occupy from ∼3.8× to ∼6.5× more gate area than symmetric multipliers. In particular, the SCCM is the best solution in terms of ENOB per unit area (with a single multiplying-cell gate area equal to 33.8% and 15.4% of those of the SSCM and ACCM, respectively), because it requires the smallest coupling ratio and transistor length to reach the ENOB target, despite the fact that this topology needs additional transistors compared to the simple CM. In the case of a 100 × 10 VMM (i.e. one column array of 100 input cells, a 100 × 10 multiplying-cell matrix, and 10 P-mirror adders), the advantage of the SCCM persists, with an overall gate area equal to 33.9% and 15.6% of those of the SSCM and ACCM, respectively. An example layout of an SCCM multiplying cell is depicted in Fig.7. We want to clarify that the overall layout area is much bigger than the one estimated from the gate area alone. This is mainly due to the spacing needed to avoid the turn-on of PNP and NPN parasitic transistors (e.g. the n+ diffusions of the nMOS S/D (emitter) / p-well of the nMOS (base) / n-well of the pCAP (collector)). In the reported layout, we have used 2 µm spacing for well-to-well parasitic bipolar paths, and at least 1 µm spacing for diffusion-to-well cases. One should however consider that the standard design rules available in the PDK are not intended for such a specific design, so we can speculate that there is some margin to shrink the overall layout, e.g. after a specific characterization of these paths with dedicated test structures. Due to this extra area, the overall layout of a multiplying cell of the SCCM is 9.7× larger than the one extracted considering the gate area only (see Table 1). However, the advantage of symmetric multipliers is still verified, and the best solution, the SCCM cell, occupies much less layout area than the SSCM (−59%) and the ACCM (−132%) multiplying cells. VI. SYSTEM-LEVEL ASSESSMENT ON ANALOG DNNS This section is dedicated to a system-level assessment of DNNs, using MATLAB, in order to link the behavior and the FOMs of analog VMMs to the system-level performance of a complete DNN. Two DNNs have been trained and simulated on two different datasets in order to be used as test benches. The grey-scale MNIST [10] dataset has been used to train a purposely designed network (''Net A'' in the following) depicted in Fig.8(a), while a subset of classes from ImageNet [12] has been used to train AlexNet [11], as sketched in Fig.8(b). The training has been performed with floating-point data precision. The designed DNN Net A operates as follows: the input 28 × 28 pixel gray-scale image is filtered by a convolutional layer with 20 filters with 9 × 9 kernels. The extracted features are then passed to the activation function, which is a Rectified Linear Unit (ReLU). Then the max-pooling layer halves the overall number of coefficients by extracting the largest elements in the 2 × 2 submatrices. The processed features are passed to the transform level, whose coefficients are trainable, in order to convert the two-dimensional image into a vector. This vector is the input of the fully-connected layer, containing 100 nodes with ReLU. The output layer has 10 nodes and a softmax activation function for the final 10-digit classification. Details of the AlexNet architecture will not be discussed here since they can be easily found in the literature [11].
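The following PyTorch sketch mirrors the Net A description above. It is only an approximation of the paper's MATLAB implementation: the pooling geometry is assumed to be a standard 2 × 2 window with stride 2, and the trainable ''transform level'' is modeled here simply as a flattening step, so any layer size not explicitly stated in the text is an assumption.

```python
import torch
import torch.nn as nn

class NetASketch(nn.Module):
    """Approximate re-implementation of 'Net A' as described in the text."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 20, kernel_size=9)   # 1x28x28 -> 20x20x20
        self.pool = nn.MaxPool2d(kernel_size=2)       # assumed 2x2 window, stride 2
        self.transform = nn.Flatten()                 # stand-in for the trainable transform level
        self.fc = nn.Linear(20 * 10 * 10, 100)        # fully-connected layer with 100 nodes
        self.out = nn.Linear(100, 10)                 # 10-node output layer

    def forward(self, x):
        x = torch.relu(self.conv(x))
        x = self.pool(x)
        x = self.transform(x)
        x = torch.relu(self.fc(x))
        return torch.softmax(self.out(x), dim=1)      # softmax for the final 10-digit classification

model = NetASketch()
probs = model(torch.randn(4, 1, 28, 28))              # dummy MNIST-sized batch
print(probs.shape)                                     # torch.Size([4, 10])
```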
When the DNNs are used to perform predictions with floating-point precision, we have found an inference accuracy of 99.8% and 95% for the MNIST and ImageNet datasets, respectively. Beyond extracting the inference accuracy for the original networks, we have artificially derived reduced-precision networks from Net A and AlexNet. Two approximation cases were considered, a ''digital'' and an ''analog'' version: a) in the ''digital'' case, floating-point numbers have been replaced with integers with different numbers of bits; b) in the ''analog'' case, floating-point precision has been maintained, but a white noise term, scaled according to the assumed SINAD value (and therefore ENOB) by a random value α drawn from a Gaussian distribution with zero mean and unit standard deviation, has been added to the output of each multiplication. The inference error rate of the tested DNNs as a function of the corresponding SINAD and ENOB is reported in Fig.8(c) and (d) for the MNIST and ImageNet cases, respectively. For the digital case, simulations were run on the complete validation dataset of 2000 images for MNIST and almost 1000 images for ImageNet. For the analog case, the inference on the validation dataset was repeated 5 times and the mean inference accuracy was extracted. The similarity between the inference capability of a ''digital'' and of an ''analog'' network for a similar number of bits and ENOB validates the FOMs used in this study. In addition, it also confirms that 6 equivalent bits represent a reasonable value to provide an almost maximum accuracy for MNIST classification by Net A, while at least 7 bits would be required in the case of ImageNet tested with AlexNet. As a result, we can conclude that the ENOB which must be targeted when designing an analog VMM depends on the specific DNN architecture and dataset, as expected. In order to provide a dependable estimation of the energy efficiency of the designed symmetric cascode current-mirror VMM, featuring 6 equivalent bits, we have extracted a 100 × 10 weight matrix from a trained fully-connected layer of Net A. The estimation was performed by assuming operation of the VMM at V_DD = 1.5 V and I_MAX = 12.5 nA. Finally, in Table 2 we have benchmarked our proposal against state-of-the-art analog VMMs, by selecting the analog VMMs executing the arithmetic operation in either current mode or the time domain, implemented with memristors [22], embedded FG arrays [27], [30], [31], [36], or single-poly FG memories [18], [32], [33]. Both the gate and layout areas of a VMM cell, as well as the energy efficiency, are compared to the other design solutions. A single VMM multiplying cell occupies a total gate area of 8.8 µm², while the estimated layout area is 85.5 µm². Although other single-poly FG solutions are implemented with a more scaled 130 nm technology, the area of our VMM cell is almost one order of magnitude smaller than other proposals based on a similar process technology. The 6-bit ENOB precision is lower than that of other single-poly multipliers, but a similar precision could be matched by trimming the design. Compared to the double-poly embedded FG array based multipliers, our solution is much bigger, but it has to be considered that this counterpart can rely on the advantages of the double-poly process and of the more scaled technology node (55 nm).
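Since the exact noise-injection expression is not reproduced in the extracted text, the sketch below shows one common way to emulate an ''analog'' multiplier with a finite SINAD: every product is perturbed by zero-mean Gaussian noise whose standard deviation is set by the SINAD relative to the full-scale output. Scaling the noise to full scale is an assumption, not necessarily the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_multiply(w, x, sinad_db, full_scale=1.0):
    """Emulate an analog multiplier of finite SINAD: the ideal product w*x is
    perturbed by alpha*sigma, where alpha is a unit-variance Gaussian variable
    (as in the text) and sigma = full_scale * 10^(-SINAD/20) (assumed scaling)."""
    alpha = rng.standard_normal(np.broadcast(w, x).shape)
    sigma = full_scale * 10 ** (-sinad_db / 20)
    return w * x + alpha * sigma

# Example: a ~6-bit-equivalent multiplier corresponds to SINAD ~ 37.9 dB,
# using ENOB = (SINAD - 1.76)/6.02.
w = np.array([0.25, -0.5, 0.75])
x = np.array([0.1, 0.2, 0.3])
print(noisy_multiply(w, x, sinad_db=37.9))
```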
On the other hand, one should note that with double-poly technologies it is not possible to modify the geometry of a single cell, so the optimization of the transistor size aimed at increasing the accuracy of the cell is not feasible. Another weakness is that the CMOS double-poly process is much more expensive than the single-poly one. As regards the energy efficiency, our multiplier reaches 26.4 TOPs/J, which is better than all the other single-poly VMM counterparts, but worse than the solution proposed in [31] (55 nm embedded-NOR solution) and the one based on memristors in [22] (simulations only, no experimental data provided). VII. CONCLUSION We have demonstrated an in-memory analog VMM based on current mirrors realized in a commercial 180 nm CMOS technology platform with experiments, circuit-level and system-level simulations. Single-poly floating-gate memory cells make it possible to implement the in-memory computing approach. FG-cell programming/erasing methods and storage capability have been validated by experimental measurements, showing the possibility of setting a single-poly FG current mirror with a current scaling factor spanning more than 256 levels (i.e. >8 bits). Measurements on a symmetric simple current-mirror multiplier are well matched to circuit-level simulations. With the validated simulation deck, a design optimization has been performed for four current-mirror topologies, by relying on a proposed design flow targeting a specific precision. It has been demonstrated that complex current mirrors such as the cascode topology feature a better trade-off between ENOB and area occupancy than the simpler version implemented with a reduced number of transistors. Furthermore, the electrostatic symmetry produced by placing a pCAP in both the input and the multiplying cell further reduces the area, allowing the current-mirror multiplier to reach the accuracy specifications with much smaller transistor sizes. Both the MNIST and ImageNet databases have been used as representative examples to train two DNNs, namely a purposely developed DNN and the well-known AlexNet, respectively. System-level simulations were performed for both cases, and the inference accuracy has been extracted as a function of the assumed ENOB. We have found that a precision of 6 equivalent bits allows an almost maximum accuracy in classifying images from the MNIST database, while ImageNet requires at least 7 bits. Our CM-VMM reaches an energy efficiency of 26.4 TOPs/J, which is very promising with respect to the state of the art of experimentally tested analog neuromorphic circuits, considering the relatively high precision (ENOB = 6) and small area occupation of the proposed VMM. APPENDIX B Details of the THD and SNR trends as a function of basic design parameters are discussed in this Appendix. In this analysis a simple, idealized CM is considered, where both the input and the multiplying cell are implemented by an nMOS transistor only; the magnification of the current conversion ratio (i.e. the weight) is not implemented with the realistic FG but by relying on an ideal weight, represented by a DC voltage generator placed in series with the gate of the multiplying transistor. By analyzing the simulated trends, we are able to suggest a consistent design flow which is also applicable to more complicated current-mirror topologies. A sinusoidal input current with a peak-to-peak amplitude equal to the selected I_in,MAX is applied to the input cell.
The THD and SNR FOMs are computed by post-processing the corresponding output waveform for a variable weight. Throughout this discussion, the operating current is frequently normalized with respect to the width-to-length ratio (i.e. I_NORM = I_in,MAX × L/W), so that the transistor working point is maintained in similar sub-threshold conditions (and similar linearity) when the transistor aspect ratio is changed. In Fig.9, simulations were carried out by varying electrical parameters such as the maximum amplitude of the input signal I_in,MAX × L ((a) and (e)), the supply voltage V_DD ((b) and (f)), the transistor L ((c) and (g)) and the transistor W ((d) and (h)). Two different trends can be observed in Fig.9(a) and (e): first, there is a trade-off between THD and SNR in terms of I_in,MAX. If the current is increased, the THD curves worsen and, at currents higher than ∼100 nA, their shape and the related slopes change as the transistors are on the edge between the subthreshold and inversion regions. According to this trend, it is recommended to operate the transistors in deep sub-threshold to increase linearity, although in the case of short-channel devices, e.g. L = 0.5 µm, the benefit of reducing the current is less pronounced, considering that short-channel effects (SCEs) degrade the THD. As an opposite trend, increasing I_in,MAX is beneficial from the SNR point of view, as shown in Fig.9(e). In addition, even at the same biasing condition (constant I_in,MAX × L), longer devices feature a higher SNR, as highlighted by the three different curves. According to Fig.9(b) and Fig.9(f), once the bias point is set by the operating current, the THD and SNR values are typically not affected by supply voltage variations. A V_DD dependence can be observed only for short-channel devices, which feature a worsening of the THD for increasing V_DD; in those cases a low supply voltage should be preferred in order to save power. We chose a value of V_DD = 1.5 V for the remainder of the analysis. The analyses based on the geometrical parameters L and W were still performed at constant I_MAX × L/W, with W = 1 µm when L is varied, and L = 1 µm when W is varied. In Fig.9(c) there is a very small length range where the linearity increases when moving toward longer devices because of the reduction of SCEs. Furthermore, the curves taken at 10 and 100 nA × µm/µm show a flat THD region for the longer devices. Here the simulated length is sufficient to screen any impact of SCEs, and the similar operating points in subthreshold (guaranteed by the same I_MAX × L) result in similar linearity values. However, for very low current levels (i.e. 1 nA × µm/µm), after the initial rise the flat region extends only for a few µm, i.e. up to ∼3 µm, since beyond this value the THD starts to degrade with increasing length. This is due to the fact that, for long transistors, a normalized current of 1 nA × µm/µm corresponds to a very small unnormalized current (e.g. 200 pA for L = 5 µm), which in turn corresponds to a V_DS lower than 4V_T and therefore does not guarantee proper transistor saturation. However, if we focus on the 1 µm to 3 µm range, lower I_MAX × L values always correspond to a better linearity, in agreement with Fig.9(a). The SNR in Fig.9(g) increases strongly with increasing L for short-channel devices, although it tends to saturate at longer lengths.
Finally, when varying W at fixed normalized current, the linearity is practically independent of W (Fig.9(d)), while the SNR always increases with increasing width (Fig.9(h)), with an almost linear dependence on the square root of the width. Taking into account all the plots depicted in Fig.9, we can conclude that there is a region of the design space where THD and SNR can be set independently, and a possible design flow targeting a given ENOB can be suggested, as detailed in Fig.10. For instance, by using one of the curves depicted in Fig.9(a) (e.g. with at least L = 1 µm to screen SCEs), one can decrease I_in,MAX × L down to the value which guarantees the linearity specification (i.e. the desired THD). Then, by referring to Fig.9(c), the length can be scaled down to the smallest value that does not produce a linearity degradation (still at fixed I_in,MAX × L). Both design choices favor linearity at the expense of an SNR degradation (see Fig.9(e) and Fig.9(g)). However, the SNR specification can be reached by a final trimming of W (according to Fig.9(h)), which can be modified, at fixed normalized current, without impacting the linearity obtained by the previous design choices (see Fig.9(d)). MAKSYM PALIY received the M.S. degree in electronic engineering from the University of Calabria (UNICAL), Italy, in 2016, with a thesis titled ''Design of 3T CMOS current reference for ultra-low voltage application''. He is currently pursuing the Ph.D. degree in electronic engineering with the University of Pisa (UNIPI), Italy. He is developing analog neural networks with CMOS and beyond-CMOS technologies. His current interests include the design of low-power analog integrated circuits, analog neural networks, device modeling, and power management circuits. SEBASTIANO STRANGIO (Member, IEEE) received the B.S. and M.S. degrees (cum laude) in EE and the Ph.D. degree from the University of Calabria, Cosenza, Italy, in 2010, 2012, and 2016, respectively. In 2012, he was with imec, Leuven, Belgium, as a Visiting Student, working on the electrical characterization of resistive-RAM memory cells. He was with the University of Udine as a Temporary Research Associate from 2013 to 2016 and with Forschungszentrum Jülich, Germany, as a Visiting Researcher in 2015, working on TCAD simulations, design, and characterization of TFET-based circuits. From 2016 to 2019, he was with LFoundry, Avezzano, Italy, where he worked as a Research and Development Process Integration and Device/TCAD Engineer, with a main focus on the development of a CMOS image sensor technology platform. He is currently a Researcher in electronics with the University of Pisa. He has authored and coauthored over 25 articles, most of them published in IEEE journals and conference proceedings. His research interests include technologies for innovative devices (e.g., TFETs) and circuits for innovative applications (CMOS analog building blocks for DNNs), as well as CMOS image sensors, power devices, and circuits based on wide-bandgap materials. PIERO RUIU received the B.S. and M.S. degrees (cum laude) in electronic engineering from the University of Pisa, in 2017 and 2020, respectively, with a master's thesis on non-volatile memory design for analog computation. He has worked on the design of analog and mixed-signal integrated circuits for analog deep neural networks (DNNs) and on the design of analog-based physical unclonable functions (PUFs).
He is currently an Analog Design Engineer with the University of Pisa. TOMMASO RIZZO received the B.S. and M.S. degrees (cum laude) in EE from the University of Pisa, in 2017 and 2019, respectively. He is currently pursuing the Ph.D. degree in electronics with the University of Pisa, in the field of analog and mixed-signal IC design using standard and non-standard CMOS technologies. From 2014 to 2019, he was an ''Allievo Ordinario'' at the Sant'Anna School of Advanced Studies, Pisa. In 2017, he was with Fermilab, Batavia, IL, USA, as a Visiting Student, working on test structures for the CMS tracker upgrade. In 2019, he joined imec, Eindhoven, The Netherlands, as a Visiting Student, working on a wireless powering receiver system for deep implants as his master's thesis project. His research interests include the design of CMOS analog blocks for DNNs and the development of wireless power transfer solutions for IMDs. GIUSEPPE IANNACCONE (Fellow, IEEE) received the M.S. and Ph.D. degrees in EE from the University of Pisa, in 1992 and 1996, respectively. He is currently a Professor of electronics with the University of Pisa. He has coordinated several European and national projects involving multiple partners and has acted as principal investigator in several research projects funded by public agencies at the European and national level, and by private organizations. He is also active in academic entrepreneurship through Quantavis s.r.l. and other technology transfer initiatives. He has authored and coauthored more than 230 articles published in peer-reviewed journals and more than 160 articles in proceedings of international conferences, gathering more than 7500 citations on the Scopus database. His research interests include quantum transport and noise in nanoelectronic and mesoscopic devices, development of device modeling tools, new device concepts and circuits beyond CMOS technology for artificial intelligence, cybersecurity, implantable biomedical sensors, and the Internet of Things. He is also a Fellow of the American Physical Society.
10,500
2020-01-01T00:00:00.000
[ "Computer Science", "Engineering" ]
microRNA Expression and Its Association With Disability and Brain Atrophy in Multiple Sclerosis Patients Treated With Glatiramer Acetate Background: MicroRNAs are small non-coding RNAs that regulate gene expression at a post-transcriptional level, affecting several cellular processes including inflammation, neurodegeneration and remyelination. Different patterns of miRNA expression have been demonstrated in multiple sclerosis compared to controls, as well as in different courses of the disease. For these reasons they have been postulated as promising biomarker candidates in multiple sclerosis. Objective: to correlate the serum microRNA expression profile with disability, cognitive functioning and brain volume in patients with relapsing-remitting multiple sclerosis. Methods: cross-sectional study in relapsing-remitting multiple sclerosis patients treated with glatiramer acetate. Disability was measured with the Expanded Disability Status Scale (EDSS) and cognitive function was studied with the Symbol Digit Modalities Test (SDMT). Brain volume was analyzed with the automatic software NeuroQuant®. Results: We found an association between miR-146a.5p (rs: 0.434, p=0.03) and miR-9.5p (rs: 0.516, p=0.028) with EDSS, and between miR-146a.5p (rs: -0.476, p=0.016) and miR-126.3p (rs: -0.528, p=0.007) with SDMT. Regarding brain volume, miR-9.5p correlated with the thalamus (rs: -0.545, p=0.036); miR-200c.3p with the pallidum (rs: -0.68, p=0.002) and cerebellum (rs: -0.472, p=0.048); miR-138.5p with the amygdala (rs: 0.73, p=0.016) and pallidum (rs: 0.64, p=0.048); and miR-223.3p with the caudate (rs: 0.46, p=0.04). Conclusions: These data support the hypothesis of microRNAs as potential biomarkers in this disease. More studies are needed to validate these results and to better understand the role of microRNAs in the pathogenesis, monitoring and therapeutic response of multiple sclerosis. INTRODUCTION MicroRNAs are promising biomarkers in multiple sclerosis (MS). They are endogenous, non-coding RNA molecules, between 20 and 25 nucleotides in length, that regulate gene expression at a post-transcriptional level by blocking translation or inducing the degradation of messenger RNA (1). It is estimated that up to one third of human genes are regulated by these miRNAs (2). They participate in a multitude of cellular processes and, thus, they could play important roles in several mechanisms of MS such as remyelination, neurodegeneration, autoimmunity or blood-brain barrier homeostasis (3,4). Over recent years, dysregulated miRNA function and expression patterns have been demonstrated in MS patients compared to healthy subjects (5)(6)(7) and in different aspects of the disease: relapses versus remission (8), different clinical phenotypes (9,10), radiological patterns (11) and even treatment effects (12)(13)(14). Moreover, they can be easily, repeatedly, reliably and non-invasively measured in different samples. For these reasons they are promising candidates to become clinically useful biomarkers to monitor the progression of the disease and as predictors of the therapeutic response to disease-modifying treatments (15,16). Less information is available regarding the use of miRNAs to monitor clinical evolution, such as disability worsening measured with the EDSS (17), or the progression of non-motor symptoms like cognitive impairment (18,19). This complication is a frequent manifestation of MS, even in early stages of the disease (20)(21)(22), with a high impact on the overall clinical situation of these patients (23).
The Symbol Digit Modalities Test (SDMT) is the most widely recommended screening test to monitor for this complication in MS (24). For these reasons, in this study we aimed to investigate the correlation between the serum miRNA expression profile and clinical disability, cognitive functioning and brain volume in patients with relapsing-remitting multiple sclerosis treated with glatiramer acetate, in order to increase the knowledge of the relationship between microRNAs and the whole MS clinical and radiological spectrum. Study Design Cross-sectional study in a cohort of MS patients attending the demyelinating diseases unit at the Hospital Universitario de Torrejon and Hospital Universitario de Getafe in Madrid, Spain, from September 2016 to September 2017. We selected relapsing-remitting multiple sclerosis (RRMS) patients according to the McDonald 2010 criteria (25), on stable treatment with glatiramer acetate (GA) for at least 6 months. GA was selected to obtain the most homogeneous sample possible. First, we decided to choose patients on first-line treatment. These patients normally represent the early phase of MS, with a shorter disease duration and fewer therapies, which would reduce possible changes in miRNA expression due to longer disease duration or previous treatments. Among first-line drugs, GA pharmacokinetics could be associated with fewer metabolic changes and therefore fewer changes in miRNA profiles not directly related to the mechanism of action of the drug (26,27). Exclusion criteria were: secondary progressive multiple sclerosis (SPMS) or primary progressive multiple sclerosis (PPMS) according to the Lublin 2013 phenotype classification (28), relapse or corticosteroid treatment in the 3 months prior to the study, and any contraindication to MRI. All patients were prescribed GA in accordance with the Spanish Society of Neurology clinical practice guidelines, which include protocols for using disease-modifying treatments and monitoring their effects (29). All patients gave their consent to participate in the study. The study complied with the Helsinki declaration (30), and was approved by the ethical committee of the Hospital Universitario de Getafe and by the Spanish Agency of Drugs and Health Products (code CLD-GLA-2017-01). Sex, age at disease onset, age at GA onset, age at the time of the study and Expanded Disability Status Scale (EDSS) score were collected. Cognitive function was assessed using the Symbol Digit Modalities Test (SDMT) (31). MicroRNA Selection and Analysis We selected the best miRNA candidates for RRMS and cognitive dysfunction (CD) through simple topological analysis (Anaxomics®). A cut-off of ≥0.8 for the global score was established. The first 20 miRNAs met this criterion and were finally selected. First of all, the molecular characterizations of MS and CD were performed through hand-curated evaluation of indexed scientific publications in PubMed, obtaining 293 proteins for MS and 59 proteins for CD. MicroRNAs were collected by searching the databases HMDD, miR2Disease, miRWalk 2.0, NSDNA, PhenomiR 2.0, miRdSNP and miREnvironment. These miRNAs were mapped to genes/proteins through miRTarBase, an open database of miRNA-protein relationships that stores information about experimentally validated miRNA targets.
Finally, three different types of scores were calculated for each miRNA to obtain the final ranking score: 1) the percentage of miRNA-disease related elements over the total of miRNA targets; 2) the percentage of miRNA-disease related elements over the total of disease effectors; and 3) the Hausdorff distance between the whole set of miRNA targets and the conditions of interest. A final global score was obtained by calculating a weighted mean of the three rankings, with the percentage over the total of miRNA targets weighted twice as strongly as the other two measures (Table 1). Blood (10 ml) was drawn from each patient using CPT tubes (Becton Dickinson, NJ, USA). Peripheral blood mononuclear cells (PBMCs) were extracted after centrifugation at 2500 g for 30 minutes. RNA was isolated with the QIAamp RNA Blood Mini Kit, following the manufacturer's instructions (QIAGEN, Hilden, Germany). cDNA was obtained via reverse transcription with the multiplex RT kit for TaqMan® microRNA assays (Life Technologies, Foster City, CA). The miRNA profile was determined with locked nucleic acid (LNA) SYBR Green-based quantitative real-time polymerase chain reaction (LNA-based qPCR) (Exiqon). Normalization was performed using the mean expression of two miRNAs: miR-191-5p and miR-30c-5p. The normalized cycle quantification (Cq) value was calculated as mean Cq - assay Cq. MRI and Brain Volume Analysis MRI images were acquired following the MAGNIMS recommendations on the use of brain MRI in multiple sclerosis (32), with a minimum magnetic field strength of 1.5 T, a maximum slice thickness of 3 mm without a gap, and the following sequences: axial pre- and post-gadolinium T1-weighted, axial proton density and/or T2-weighted, and axial and sagittal T2 fluid-attenuated inversion recovery. An isovolumetric sagittal T1 (3D-SPGR) sequence was acquired for the volumetric analysis, with the following parameters: TR = 8.5 ms; TE = 3.2 ms; TI = 700 ms; flip angle (FA) = 12; bandwidth = 31.25 kHz. Whole brain volume, grey matter volume, white matter volume, cerebellum volume, basal ganglia volume and T1 lesion load volume were obtained using the automatic software NeuroQuant®. Statistics Numerical variables were expressed as median and interquartile range (25th, 75th percentile), and categorical variables as percentages. Correlations between miRNAs and EDSS, cognitive status and MRI data were analyzed with the Spearman correlation coefficient (rs); an illustrative code sketch of this analysis is provided at the end of this article. Statistical significance was set at p<0.05. Data were analyzed using the Statistical Package for Social Sciences, version 19.0 (IBM SPSS, Inc., Chicago, IL, USA). RESULTS We recruited 27 patients. Demographic and clinical data are summarized in Table 2. They included the typical RRMS population, with a female predominance (19 female vs 8 male patients) and a young age at disease onset (median: 31.9 years). They also represent a typical early MS phase, with early treatment initiation (median: 32.8 years at GA onset) and a mild EDSS (median: 1; interquartile range: 0-2.5). MRI data are presented in Table 3 (abbreviations: WBV, whole brain volume; CGMV, cortical grey matter volume; WMV, white matter volume; T1, T1 lesion volume; Md, median; ICR, interquartile range; volumes in ml). Seven scans were unavailable due to technical incompatibility with the NeuroQuant software. We performed correlations between miRNAs and sex and age. Only miR-203a.3p was correlated with age (rs: -0.523; p=0.026). Since this microRNA was not associated with any clinical or
radiological variable, we did not adjust for it in the other comparisons. No microRNA was associated with sex. Correlations between miRNAs and clinical data are presented in Table 4. We found a positive association between miR-146a.5p (rs: 0.434, p=0.03) and EDSS, and between miR-9.5p (rs: 0.516, p=0.028) and EDSS. Regarding cognitive function, we found a negative association of miR-146a.5p (rs: -0.476, p=0.016) and miR-126.3p (rs: -0.528, p=0.007) with SDMT (Figure 1), and also a trend toward a negative association between miR-9.5p and SDMT (rs: -0.464, p=0.06). Both measures were consistent, with higher miRNA values related to higher EDSS and lower SDMT scores. Correlations between miRNAs and MRI data are summarized in Table 5. Of note, we found a negative association between miR-9.5p and thalamic volume (rs: -0.545, p=0.036), and between miR-200c.3p and pallidum and cerebellum volumes (rs: -0.68, p=0.002; rs: -0.472, p=0.048) (Figure 2). Again, the findings for miR-9.5p were consistent with the clinical data, with higher values of this miRNA associated with worse clinical outcomes and lower thalamic volume. On the other hand, we found a positive association between miR-138.5p and amygdala and pallidum volumes (rs: 0.73, p=0.016 and rs: 0.64, p=0.048) and between miR-223.3p and caudate volume (rs: 0.46, p=0.04) (Figure 3). Surprisingly, we did not find any correlation between miRNAs and whole brain volume (WBV), white matter volume (WMV), cortical grey matter volume (CGMV) or hypointense T1 lesion volume (T1LV). DISCUSSION Several microRNAs have been associated with MS in many articles. Most of these studies are very heterogeneous and mainly describe differences between MS patients and controls, or between different stages (remission versus relapse) or phenotypes of the disease (relapsing versus progressive). There are fewer studies focusing on the association of miRNAs with clinical or radiological variables (8,11,17,33), and only one of them combined EDSS, cognitive functioning and MRI, albeit in a pediatric population (19). In our work we analyzed the relationship of microRNAs preselected through topological analysis with clinical, cognitive and MRI variables in adult MS. GA is a synthetic polypeptide made of a random combination of 4 amino acids, similar to myelin basic protein (34). Its mechanism of action is not completely understood, but it is assumed to bind to the HLA class II complex, regulating the immune response in several cellular processes (35). Thus, different miRNA patterns, as well as changes in miRNA expression related to its immunomodulatory effects, could modify the therapeutic response to this drug. There are several articles describing the miRNA changes associated with other disease-modifying treatments such as IFN (36)(37)(38)(39)(40), natalizumab (41-44), dimethyl fumarate (45,46) or fingolimod (47-51), but less information is available regarding GA (52,53). In one article that analyzed miRNA changes in a mouse EAE model treated with GA, the authors identified miR-155.5p, miR-27a.3p, miR-9.5p and miR-350.5p as putative GA-treatment response biomarkers (52), all of which were associated with an altered polarization of T cells toward Th1 and Th17 phenotypes. In the other study, in MS patients, a change in miR-146a.5p and miR-142.3p after GA treatment was found, both of these miRNAs being related to immunotolerance via an increase in the suppressor function of regulatory T cells (53).
In our study, notably, we found correlations involving two of those previously reported microRNAs. More interestingly, they were each associated with more than one clinical or radiological variable. These were miR-146a.5p (correlated with EDSS and SDMT) and miR-9.5p (correlated with EDSS and thalamic volume, with a trend toward an association with SDMT). MicroRNA-155.5p was investigated in our study, but we did not find any correlation with the clinical or radiological variables. The other reported microRNAs were not included in this research, as they were not among the 20 best candidates preselected by the simple topological analysis (Anaxomics®). MicroRNA-9.5p has important functions in the regulation of immune responses. It appears to promote inflammatory responses by inducing Th17 cells and microglial activation through different mechanisms (54). Accordingly, specific blockade of miR-9.5p has been suggested as a potential therapeutic strategy for treating different neuroinflammatory conditions (55). In fact, as previously reported, in an experimental allergic encephalomyelitis (EAE) model of MS, miR-9.5p was increased at the peak of the disease, and its levels were reduced by GA treatment (52). In our study, the association of elevated levels of miR-9.5p with higher EDSS and thalamic atrophy reinforces this possible pathogenic effect of miR-9.5p. MicroRNA-146a.5p has previously been associated with the response to GA (53). In that article, miR-326, miR-155, miR-146a.5p and miR-142.3p were aberrantly expressed in peripheral blood mononuclear cells from RRMS patients compared to controls. This pattern did not change in IFN-b-treated patients, but miR-146a.5p and miR-142.3p were significantly reduced after GA treatment. MicroRNA-146a.5p is an important regulator of the immune system and seems to participate in the suppressor function of regulatory T cells (56), the down-regulation of Th17 cells (57) and the promotion of M2 (immunosuppressive) polarization of macrophages (58). Given these immunotolerance functions, the increase of miR-146a.5p in MS could reflect an indirect mechanism attempting to counterbalance the inflammatory state in these patients rather than a direct effect of miR-146a.5p on MS pathology. The association of higher levels of this microRNA with worse EDSS and SDMT outcomes would then indicate higher MS activity and a worse prognosis rather than a direct pathogenic effect. MicroRNA-223.3p has been shown to have neuroprotective effects in an animal model of MS. It seems that miR-223.3p could exert its functions by blocking glutamate receptor signaling. In human studies, a higher expression of this miRNA has been shown in MS versus controls (59), as well as in relapse versus remission and in RRMS versus PPMS (60). For these reasons it has been postulated that miR-223.3p would be upregulated as a compensatory mechanism in response to inflammation, and would exert a direct neuroprotective effect by reducing excitotoxicity. These data are in line with the protective effect found in our study, but they do not allow us to draw any conclusions about its utility as a clinical biomarker. Even though the strongest associations were found with miR-126.3p and miR-200c.3p, fewer data are available regarding these microRNAs and miR-138.5p. MicroRNA-126.3p has been linked on the one hand with fibrotic responses (61) and on the other hand with the clinical response in other autoimmune diseases (62). In MS it has been shown to be upregulated during the remission phase of the disease (60).
In this regard it could be postulated to have a protective effect in MS. However, this is in contrast with the apparent pathogenic effect found in our work, and there are no other data to support an implication of miR-126.3p in MS pathology. Finally, miR-200c.3p and miR-138.5p seem to regulate apoptosis and cell proliferation in different forms of cancer, but they have not previously been related to multiple sclerosis either and, as with miR-126.3p, there are insufficient data to draw any conclusions about the findings of our study. It would be very interesting to further evaluate these microRNAs to confirm these associations. One limitation of our study was the small number of patients, although the targeted pre-selection of microRNAs by the topological analysis described in the Methods could have mitigated this problem. This procedure makes it possible to minimize the sample size needed to bring out significant clinical relationships, and it would explain the meaningful and high number of statistical associations found in our study. In conclusion, these data support the hypothesis of miRNAs as potential biomarkers in this disease. More studies are needed, with larger samples, controls and longitudinal designs, to validate these results and to better understand the role of miRNAs in the pathogenesis, monitoring and therapeutic response of MS. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by Getafe University Hospital. The patients/participants provided their written informed consent to participate in this study.
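As referenced in the Statistics section, the correlation analysis reported above is a Spearman rank correlation between miRNA expression and clinical scores. The minimal sketch below illustrates that analysis on hypothetical data; the variable names and numbers are invented for illustration and are not the study's data.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical normalized miRNA expression values and EDSS scores for six patients.
mir_146a_5p = np.array([2.1, 3.4, 1.8, 4.0, 2.9, 3.7])
edss = np.array([0.0, 2.0, 1.0, 2.5, 1.5, 2.0])

rs, p_value = spearmanr(mir_146a_5p, edss)
print(f"rs = {rs:.3f}, p = {p_value:.3f}")   # an association is reported when p < 0.05
```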
3,966.2
2022-06-14T00:00:00.000
[ "Biology", "Medicine" ]
Time-course responses of circulating microRNAs to three resistance training protocols in healthy young men Circulating microRNAs (c-miRNAs) in human plasma have been described as potential markers of exercise. The present study investigated the effects of three acute resistance training (RT) protocols on the time-course changes of the c-miRNA profiles in young males. The subjects (n = 45) were randomly divided into three groups: muscular strength endurance (SE), muscular hypertrophy (MH) and maximum strength (MS). Venous blood samples were obtained before exercise and immediately, 1 h and 24 h after each RT protocol to assess the following biological parameters: c-miRNAs, anabolic and catabolic hormones, inflammatory cytokines and muscle damage markers. The results revealed that the levels of two c-miRNAs (miR-208b and miR-532), six c-miRNAs (miR-133a, miR-133b, miR-206, miR-181a, miR-21 and miR-221) and two c-miRNAs (miR-133a and miR-133b) changed significantly in response to the SE, MH and MS protocols (p < 0.05), respectively. The nature and dynamics of the c-miRNA response were likely influenced by the RT modality and intensity. Moreover, miR-532 was negatively correlated with insulin-like growth factor-1 and positively correlated with interleukin-10, whereas miR-133a was negatively correlated with cortisol and positively correlated with testosterone/cortisol. These findings suggest that these c-miRNAs may serve as markers for monitoring RT responses. Results Subject characteristics. The characteristics and one-repetition maximum (1RM) of the subjects are shown in Table 1. Statistical analysis revealed that there were no baseline differences in the subject characteristics between groups. The data for hormones, muscle damage biomarkers and inflammatory cytokines in response to the three RT protocols are summarized in Fig. 1. Both the SE and MH protocols induced significant changes in plasma testosterone and cortisol levels, with peak values observed 0 h postexercise (p < 0.05) (Fig. 1A,B). Plasma levels of cortisol were increased in response to MS, peaking after 24 h of recovery (p < 0.05) (Fig. 1B). All three protocols changed the testosterone-to-cortisol ratio (T/C), which peaked for SE and MH but declined to a minimum for MS after 24 h of recovery (p < 0.05) (Fig. 1C). The plasma levels of insulin-like growth factor-1 (IGF-1) were altered in response to SE and MS, with peak values observed 0 h postexercise (p < 0.05) for SE and minimum values observed after 24 h of recovery (p < 0.05) for MS (Fig. 1D). Circulating miRNA screening by TaqMan Low Density Array. The global plasma miRNA expression patterns in response to the SE, MH and MS protocols were analyzed using a TaqMan Low Density Array (TLDA). The Pearson correlation coefficient (R) for the three paired groups was 0.90 for SE, 0.95 for MH, and 0.82 for MS (Fig. 2A-C). A miRNA was considered to be differentially changed if its Ct value was below 35 and if there was a greater than 2-fold change in concentration. Class-comparison analysis of all 754 human miRNAs showed that plasma miRNAs responded differently to the three RT protocols (SE: 1 miRNA increased, 93 miRNAs decreased; MH: 75 miRNAs increased, 7 miRNAs decreased; MS: 16 miRNAs increased, 60 miRNAs decreased). The markedly altered miRNAs in plasma (fold change > 10) after the SE, MH, and MS protocols are listed in Table 2.
As shown in Fig. 2, eight miRNAs (miR-208b, miR-542-5p, miR-206, miR-219, miR-532, miR-216a, miR-205 and miR-628-5p) were considered for subsequent validation individually because they showed the greatest fold change in response to (1) one of the three RT protocols only (miR-208b for SE, miR-542-5p for MH, and miR-206 for MS), (2) two of the three RT protocols (miR-219 and miR-532 for SE and MS, and miR-628-5p for MH and MS), or (3) all of the three RT protocols (miR-216a and miR-205). Additionally, given that exercise can induce disturbances in skeletal muscle structure and function as well as a cascade of inflammatory responses 9, 8 miRNAs related to muscle or inflammation were also selected, including miR-1, miR-133a, miR-133b, miR-146a, miR-181a, miR-21, miR-221 and miR-378 [15][16][17]. Five of the selected miRNAs (miR-1, miR-133a, miR-133b, miR-206 and miR-208b) belong to the myomiRs, which are exclusively or preferentially expressed in muscle 18. All 16 selected miRNAs were quantified individually for the three RT protocols. Circulating miRNAs in response to the acute muscle endurance protocol and association with conventional biomarkers. For each of the three groups, the Ct values for Let-7d/g/i before and after resistance exercise showed low variability (see Supplementary Fig. S1). The plasma levels of four myomiRs (miR-1, miR-133a, miR-133b and miR-206) and of the other tissue-specific miRNAs or miRNAs of unknown origin displayed no significant changes at the different time points (p values between 0.06 and 0.97). The relative concentrations of the 16 c-miRNAs are presented in Table 3. However, the plasma levels of miR-532 were markedly increased after 1 h of recovery (p < 0.05) and remained elevated after 24 h of recovery (p < 0.05) (Fig. 3A). Additionally, the plasma levels of another myomiR, miR-208b, were markedly decreased 0 h postexercise (p < 0.05) and did not return to the basal level after 24 h of recovery (p < 0.05) (Fig. 3B). In particular, IL-10 displayed a consistent increase (Fig. 4A) similar to that for miR-532 (Fig. 4B). A distinct positive correlation (R = 0.42, p = 0.004; Fig. 4D) was observed between the changes in plasma levels of miR-532 and IL-10. Circulating miRNAs in response to the acute muscle hypertrophy protocol and association with conventional biomarkers. Three myomiRs, miR-133a, miR-133b and miR-206, which were not affected by SE, showed different expression patterns in response to MH. The plasma levels of miR-133a were significantly decreased (p < 0.05) 0 h postexercise and returned to the baseline level after 1 h of recovery (p > 0.05) (Fig. 5A). The plasma levels of miR-133b were markedly increased after 24 h of recovery (p < 0.05) (Fig. 5B). miR-206 reached peak levels after 1 h of recovery (p < 0.05) and decreased to the baseline level after 24 h of recovery (p > 0.05) (Fig. 5C). Additionally, other muscle- or inflammation-related miRNAs also responded to MH. The plasma levels of miR-21 were markedly decreased 0 h postexercise (p < 0.05) and reached peak values after 1 h of recovery (Fig. 5D). The plasma miR-181a levels peaked after 1 h of recovery (p < 0.05) and decreased to the baseline value after 24 h of recovery (p > 0.05) (Fig. 5E). The plasma levels of miR-221 displayed a nonsignificant increase 0 h postexercise, followed by a significant decrease after 1 h of recovery (p < 0.05), and increased again after 24 h of recovery (p < 0.05) (Fig. 5F). No distinct changes in the plasma levels of the other 10 miRNAs were observed (p values between 0.06 and 0.96) (Table 3).
There were no correlations between the levels of the six changed c-miRNAs and conventional parameters (R values between −0.43 and 0.43, p values between 0.08 and 0.96). Circulating miRNAs in response to the maximum muscle resistance protocol and association with conventional biomarkers. Two myomiRs, miR-133a and miR-133b, which responded to MH, were also affected by MS. The plasma levels of miR-133a reached their minimum 0 h postexercise (p < 0.05) and returned to the basal level after 1 h of recovery (p > 0.05) (Fig. 6A). The plasma miR-133b levels were not observably changed 0 h postexercise, peaking after 1 h of recovery (p < 0.05) (Fig. 6B). Additionally, no significant changes were found for the other c-miRNAs at the different time points (p values between 0.13 and 0.97) (Table 3). Correlation analysis indicated that changes in miR-133a had a negative correlation with cortisol (R = −0.53, p = 0.04; Fig. 7A,B,D) and a positive correlation with T/C (R = 0.59, p = 0.02; Fig. 7B,C,E). Summary of the validated and predicted targets of the changed circulating miRNAs. Subsequently, we summarized the validated target genes regulated by the altered plasma miRNAs that responded to the three RT protocols. The target genes and their functions involved in muscle biogenesis and structure as well as exercise-induced adaptations (such as inflammation and angiogenesis) are shown in Supplementary Table S1. To explore the potential roles of the emerging miR-532, we performed a bioinformatics prediction of its possible targets using multiple target prediction databases (DIANA, Microinspector, Miranda, Mirtarget, Mitarget, Nbmirtar, Pictar, Pita, Rna22, Rnahybrid and Targetscan). The targets which were simultaneously predicted by at least five prediction databases were selected for subsequent Gene Ontology (GO) analysis. The predicted miR-532 targets and the involved biological processes potentially related to exercise-induced adaptations (such as energy metabolism and immune response) are listed in Table 4. Discussion Each RT modality leads to differentiated molecular and structural adaptations not only in the exercised muscle but also in distant tissues, resulting in a whole-body adaptive response 1. Overall, the results of the present study suggest that RT can lead to distinct time-course changes in the profiles of c-miRNAs, and these changes likely depend on the RT modality or intensity. Moreover, the correlations of miR-532 and miR-133a in particular with conventional parameters suggested their potential roles as biomarkers of resistance exercise. Regimented RT is well established as an effective mechanism to achieve a specific training outcome by specifying acute RT program variables 1. Although these training variables are conducive to good performance and post-exercise adaptations to resistance exercise, the relative intensity (%1RM) appears to be a key factor 19. Additionally, maximizing the specific response to RT is thought to be best achieved by proper manipulation of exercise 1-3. Specific signaling pathways are critical for the structural remodeling and functional adjustment of skeletal muscle in response to exercise-induced physiological and biochemical stimuli 20,21. The principle of super-compensation is necessary to increase the exercise stimulus and for adaptations to occur 22. The time-course changes of an acute hormonal, cytokine or c-miRNA response to RT might provide a thorough understanding of the molecular events that occur during exercise and the later recovery phases.
Traditionally, changes in anabolic-catabolic hormones, muscle damage markers and inflammatory mediators have been widely used to monitor and understand these acute program variables 23. In our study, these conventional biomarkers, such as testosterone, cortisol, T/C, IGF-1, CK and IL-10, showed different dynamic responses to the different RT protocols, while the pro-inflammatory biomarkers, such as IL-6 and hs-CRP 24, did not show any changes at the different time points. These results are partly consistent with previous studies 24,25. At present, although the mechanism underlying the exercise-induced release of these biomarkers remains incompletely understood, the physiological implications have been addressed 24,25. The c-miRNAs, which are present and highly stable in the bloodstream, have been proposed as potential new biomarkers of specific exercise responses 12,15,26,27. In our study, the plasma levels of several myomiRs (including miR-133a, miR-133b, miR-206 and miR-208b) showed dynamic changes in response to RT, whereas plasma miR-1 did not respond to RT. Such differential changes in these circulating myomiRs might indicate different degrees of muscle fiber recruitment or stress/adaptation in response to RT. Additionally, the direct correlations between the changes in the plasma cortisol level and the T/C ratio and the miR-133a level that were observed for the MS protocol likely indicate stressful training (overreaching) 1. Moreover, cortisol was significantly, albeit weakly, associated with gains in type II fiber area 28. Thus, the plasma miR-133a level may be a potentially useful biomarker of actual physiological strain or tissue-remodeling processes for the MS protocol. In the present study, other c-miRNAs besides myomiRs also displayed different responses to the three RT protocols. For the SE protocol, time-course analysis showed that the plasma miR-532 level increased during early recovery and remained elevated for a long time. For the MH protocol, the plasma levels of three c-miRNAs, miR-21, miR-181a and miR-221, also showed dynamic changes. The plasma level of miR-21, which plays a crucial role in the inflammatory response 29, has been found to change after a single bout of endurance exercise 15. In our study, the time course of the plasma miR-21 level, starting with a decrease immediately after exercise followed by an increase during early recovery, may reflect the balance between the initial pro-inflammatory and later immunoregulatory anti-inflammatory responses 29. There is increasing evidence that many of the pro-oxidative and pro-inflammatory processes that occur after acute exercise may be vital for the long-term adaptive responses to exercise training 30. The plasma level of miR-181a, a possible biomarker of acute muscle wasting 31, was increased during early recovery. Moreover, plasma miR-221, which is related to vascular biology and fat and glucose metabolism 32, also showed dynamic responses in the present study. Additionally, the plasma miR-146a level, which increased in response to acute endurance exercise 26 and decreased 3 days post-resistance exercise 13, did not respond to RT in our study. The plasma level of miR-378, which is associated with muscle mass gains in vivo 33, remained unchanged for the three RT protocols. The differences between the findings of our study and other studies may relate to exercise modality, the timing of the blood draws or training state 12,26,33.
Furthermore, a negative correlation between the changes in plasma levels of IGF-1 and miR-532, as well as a positive correlation between those of IL-10 and miR-532, were observed in the present study. The increased IGF-1 level may exert positive effects on adaptation to resistance exercise, i.e., on muscle hypertrophy or connective tissue 25. Moreover, exercise training improved the inflammatory profile by increasing the levels of the anti-inflammatory cytokine IL-10 in post-myocardial infarction patients 34. Considering that the correlations are the combined results for all data points, it is conceivable that there were c-miRNA-hormone stress or adaptation associations. Moreover, the predicted target genes of miR-532 revealed a potential role of miR-532 in metabolic processes or the inflammatory response related to RT. Thus, miR-532 may also be a biomarker for the beneficial effect of physical exercise or tissue-remodeling processes in response to RT. Taken together, these results suggest that not all c-miRNAs from skeletal muscle or other tissues respond to an RT stimulus. Furthermore, the variation of c-miRNAs and traditional blood parameters noted above is likely related to the type or amount of muscle activated during resistance exercise. In our study, the absolute workloads for the three RT protocols were the same; thus, the differential expression of c-miRNAs may be related to the relative intensity. Plasma miRNAs, such as miR-133a, miR-133b, miR-181a, miR-206, miR-208b, miR-21, miR-221 and miR-532, may therefore represent important novel indicators of skeletal muscle, inflammation or metabolism stress or adaptation in response to RT protocols of different intensities. Figure 7. Correlations of miR-133a with plasma cortisol and testosterone/cortisol levels (n = 15). For each subject, the plasma levels of miR-133a, cortisol and testosterone at baseline (Pre) were assigned a fold change of 1, and measurements obtained immediately after exercise (0 h), after 1 h of recovery (1 h) and after 24 h of recovery (24 h) were compared to the baseline. Scatterplots show the plasma levels of cortisol (A), miR-133a (B) and testosterone/cortisol (C). A direct correlation is observed between plasma levels of miR-133a and cortisol (D) and testosterone/cortisol (E). Currently, whether the RT-induced dynamic changes in c-miRNAs during exercise and the recovery phase directly reflect intracellular miRNA turnover is unknown. A previous study showed that extracellular dystrophy-associated miRNA levels show a dynamic mode of expression that mirrors the process of muscle pathology 35. It is likely that the levels of different c-miRNAs induced by RT protocols may reflect specific training-related dynamic activities. Thus, c-miRNAs could be useful biomarkers for physiological mediators of exercise-induced stress or adaptations during exercise or the recovery phase. The mechanism and function underlying the uptake and release of c-miRNAs following exercise are still unclear 36,37, and need to be studied further. However, characterizing these responses is an important step in understanding the roles of exercise-induced c-miRNA changes. Exercise-induced c-miRNA elevations may represent a more generalized response to internal and/or external stress.
Factors including mechanical, oxidative or nitrosative stress, damaged cells 26, changes in blood cell numbers 38, hemolysis 39 and the release of secreted extracellular vesicles (exosomes) 40 likely induce c-miRNA production during exercise. However, c-miRNAs responded differently to the RT protocols in the present study, indicating a specific response to the demands of the exercise rather than a global exercise-induced c-miRNA response. In summary, the findings of the present study indicate that RT can lead to changes in c-miRNA levels with transient and delayed kinetics. The amounts and differential expression of c-miRNAs in response to acute RT, which potentially depend on the relative intensity or other RT variables, may point to a physiological role in the phenotypic changes and metabolic and inflammatory processes induced by exercise. Limitations This study has some limitations. First, the post-exercise analysis was limited to 24 h after exercise; thus, delayed kinetic changes in the c-miRNA profiles over a longer recovery period were not detected. Second, the participants in the present study had maintained a regular exercise regimen for a period of time. The RT-associated alterations in some c-miRNAs are likely distinct between the untrained and trained states. Additionally, we only analyzed the differences in c-miRNAs and blood parameters within the same RT protocol. For practical and ethical reasons, the three RT protocols were performed by different individuals. To avoid statistical differences that might be caused by intrinsic between-subject variability, we did not compare the c-miRNAs and conventional parameters among the different RT protocols. Materials and Methods Subjects. Forty-five university cadets who led similar lives were requested to volunteer for this study. The participants had maintained a regular exercise regimen for 10 months, and none had weight-machine training experience. Exclusion criteria included any history of neuromuscular, cardiovascular, hormonal or metabolic diseases. The subjects were prohibited from taking any medications and maintained the same dietary intake throughout the study. Blood samples were collected according to protocols approved by the Human Research Ethics Committee of Nanjing University. Written informed consent was obtained from all of the subjects. The Human Research Ethics Committee of Nanjing University approved the study protocol in conformity with the Declaration of Helsinki, and all experiments were carried out in accordance with the approved guidelines of Nanjing University. Experimental design. The forty-five students were randomly assigned to one of three groups: a strength endurance group (SE), a muscular hypertrophy group (MH) and a maximum strength group (MS). One week was dedicated exclusively to the individual 1RM tests and familiarization of the participants with the equipment and exercise protocols. For each resistance exercise protocol, blood was collected before (Pre), immediately after exercise (0 h), 1 h after exercise (1 h) and 24 h after exercise (24 h) to assess the c-miRNAs and traditional biomarkers. For each group, the blood samples drawn before the exercise served as the control. Resistance exercise protocols. The training protocols consisted of five exercises, which activated large or small muscle masses and were performed in the following order: bench press, squat, pulldown, overhead press and standing dumbbell curl.
All exercises were performed using free weights or universal weight machines. The maximum strength for every exercise was measured using the 1RM method 41 . The training protocols were designed in conformity with previous studies 1, 5 . In brief, the SE protocol consisted of three sets of 16-20 repetitions at 40% of the 1RM intensity with a 1-minute rest interval between exercises and sets. The MH protocol consisted of three sets of 12 repetitions at 70% of the 1RM intensity with a 2-minute rest interval between exercises and sets. The MS protocol consisted of four sets of 6 repetitions at 90% of the 1RM intensity with a 3-minute rest interval between exercises and sets. All subjects used a complete range of motion and a cadence of a 1-to 2-second positive phase and a 1-to 2-second negative phase. The total workload for the three RT protocols was kept as similar as possible. Experimental protocol. The subjects checked into the laboratory at 4 p.m. and did not perform physical exercise for 72 h before the experimental session. Each subject performed the experimental sessions at the same time of day. All subjects performed a 5-minute warm-up of treadmill walking and calisthenics and a specific warm-up for each exercise with a constant range of motion and without an external load; this warm-up consisted of approximately 10 repetitions, and then, the subjects had a 5-minute recovery interval. Each group performed experiments according to their training protocols. Blood samples. During each acute exercise experiment, five milliliters of blood was collected before exercise and immediately, 1 h and 24 h after exercise in standard anticoagulant (EDTAK2)-treated vacutainer tubes for every subject. All blood samples were centrifuged at 1500 × g for 10 minutes immediately after each blood draw to pellet cellular elements and then centrifuged at 10,000 × g for 5 minutes at 4 °C to completely remove cell debris. The supernatant plasma was then collected and immediately frozen at −80 °C. Biochemical analyses. Blood samples were collected before and immediately and 1 h after exercise for determination of LA. Blood LA was determined using an automatic lactate analyzer (EKF Diagnostic GmbH, Barleben, Germany). Testosterone and cortisol were measured using chemiluminescent microparticle immunoassays (Beckman Coulter Inc., Brea, CA, USA). IGF-1 was measured using chemiluminescence immunoassays (Diagnostic Products Corporation, Los Angeles, USA). IL-6 was measured using electrochemiluminescence immunoassays (Roche Diagnostics, Mannheim, Germany). CK and hs-CRP were measured using an automatic clinical chemistry analyzer (Hitachi 7600, Japan). The concentration of IL-10 was measured using a commercial radioimmunoassay kit (Beijing North Institute of Biological Technology, China). Circulating miRNA screening using a TaqMan Low Density Array. The TLDA was used as described previously 42 . For the three RT protocols, an equal volume of plasma from 10 participants was mixed separately to form the Pre and 0 h sample pools (each sample pool contained 10 ml). RNA was isolated from each pooled sample using TRIzol reagent, and reverse transcription was carried out using a TaqMan MicroRNA Reverse Transcription Kit and Megaplex RT Primers. The miRNA screening of 754 different human miRNAs was performed using the TLDA on an ABI PRISM 7900HT Fast Real-Time PCR System (Applied Biosystems). The concentrations of plasma miRNAs were normalized to Let-7d/g/i trio 43 . 
The results are shown using the Ct (cycle threshold) value and normalized to the calculated mean Ct value of the Let-7d/g/i trio of each pooled sample (ΔCt). The relative expression was determined using the comparative Ct method (2^−ΔΔCt).
RNA isolation and quantification of circulating miRNAs. RNA isolation and RT-qPCR were performed as described previously 42. The total RNA, including miRNAs, was extracted from 100 µL plasma using a 1-step phenol/chloroform purification protocol. To control for variability in the RNA extraction and purification procedures, all samples from a given subject were handled in the same batch. Hydrolysis probe-based RT-qPCR was carried out using a TaqMan PCR kit and an Applied Biosystems 7300 Sequence Detection System 44. The Ct values were determined using default threshold settings, and the average Ct value was calculated from triplicate PCRs. Ct values were normalized to the Let-7d/g/i trio, and the fold change of each individual miRNA was determined using the 2^−ΔΔCt equation. The ΔCt was calculated by subtracting the mean Ct value of the Let-7d/g/i trio from the mean Ct value of the target miRNA. The ΔCt values were then compared (ΔΔCt) with each participant's own resting baseline value at the Pre time point (normalized to a fold change of 1).
Statistical analysis. The GraphPad Prism 5 and SigmaPlot 10.0 packages were used. Data are presented as the means ± standard error of the mean (SEM). The normality of the data distribution was tested using the Shapiro-Wilk normality test. The non-parametric Friedman test was performed to compare miRNA, inflammatory cytokine and muscle damage marker concentrations. The differences in other variables were compared using repeated measures ANOVA. When appropriate (p value < 0.05), a Dunn multiple comparison (miRNAs, inflammatory cytokines and muscle damage markers) or a Bonferroni multiple comparison (other variables) post hoc test was used to compare different time points. Each result labeled p < 0.05 refers to a p value from the post hoc test. Correlations of miRNA profiles between baseline and immediately after exercise were calculated using Pearson correlation analysis, and correlations of miRNAs and other blood parameters were assessed using Spearman rank correlation analysis, as appropriate for the data distribution. A p value < 0.05 was considered statistically significant.
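The relative-quantification workflow described above (triplicate Ct averaging, normalization to the mean Ct of the Let-7d/g/i reference trio, and fold changes via 2^−ΔΔCt against each participant's Pre baseline) can be summarized in a short script. The sketch below uses hypothetical Ct values purely for illustration; it is not the authors' analysis code, and the statistical calls shown only mirror tests named in the text (Friedman test, Spearman correlation).

```python
import numpy as np
from scipy.stats import friedmanchisquare, spearmanr

# Hypothetical triplicate-averaged Ct values for one subject and one target miRNA
# at the four sampling points (Pre, 0 h, 1 h, 24 h).
ct_target = np.array([30.1, 28.4, 29.0, 29.8])            # target miRNA (e.g., miR-133a)
ct_let7 = np.array([[24.9, 25.1, 25.0],                   # Let-7d/g/i at Pre
                    [24.7, 25.0, 24.9],                    # 0 h
                    [24.8, 25.2, 25.0],                    # 1 h
                    [25.0, 25.1, 24.9]])                   # 24 h

# Delta-Ct: target Ct minus the mean Ct of the Let-7d/g/i reference trio.
delta_ct = ct_target - ct_let7.mean(axis=1)

# Delta-Delta-Ct relative to the subject's own Pre (baseline) value,
# and fold change via 2^(-ddCt); the Pre time point is normalized to 1.
dd_ct = delta_ct - delta_ct[0]
fold_change = 2.0 ** (-dd_ct)
print("fold changes (Pre, 0 h, 1 h, 24 h):", np.round(fold_change, 2))

# Non-parametric comparison across time points (Friedman test), run here on
# hypothetical fold changes from three subjects (rows) at four time points (columns).
subj = np.array([[1.0, 2.8, 1.9, 1.2],
                 [1.0, 3.4, 2.2, 0.9],
                 [1.0, 2.1, 1.5, 1.1]])
stat, p = friedmanchisquare(*subj.T)
print(f"Friedman chi2 = {stat:.2f}, p = {p:.3f}")

# Spearman rank correlation between miRNA and hormone fold changes (hypothetical data).
rho, p_rho = spearmanr(subj.ravel(), (subj * 1.1 + 0.05).ravel())
print(f"Spearman rho = {rho:.2f}, p = {p_rho:.3g}")
```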
5,726
2017-05-19T00:00:00.000
[ "Biology" ]
Seeking Quantum Speedup Through Spin Glasses: The Good, the Bad, and the Ugly
There has been considerable progress in the design and construction of quantum annealing devices. However, a conclusive detection of quantum speedup over traditional silicon-based machines remains elusive, despite multiple careful studies. In this work we outline strategies to design hard tunable benchmark instances based on insights from the study of spin glasses, the archetypal random benchmark problem for novel algorithms and optimization devices. We propose to complement head-to-head scaling studies that compare quantum annealing machines to state-of-the-art classical codes with an approach that compares the performance of different algorithms and/or computing architectures on different classes of computationally hard tunable spin-glass instances. The advantage of such an approach lies in having to only compare the performance hit felt by a given algorithm and/or architecture when the instance complexity is increased. Furthermore, we propose a methodology that might not directly translate into the detection of quantum speedup, but might elucidate whether quantum annealing has a "quantum advantage" over corresponding classical algorithms like simulated annealing. Our results on a 496-qubit D-Wave Two quantum annealing device are compared to recently used state-of-the-art thermal simulated annealing codes.
I. INTRODUCTION
Optimization plays an integral role across disciplines. Not only do modern manufacturing and transport heavily depend on efficient optimization methods to reduce cost and emissions, many fields of research depend on a multitude of optimization techniques to solve a wide variety of problems. Similarly, the ever-increasing amount of data available to mankind creates an urgent need for more efficient approaches to querying, parsing, and mining data, approaches that often depend on optimization techniques. Within physics-related disciplines alone, optimization is needed to solve many difficult problems ranging from frustrated spin systems [1-3] to novel approaches in material discovery, as well as the efficient parsing of high-energy event data or astrophysical spectra. As such, the search for more efficient optimization approaches is of great importance. Because the speedup of current silicon-based computing technologies is slowly coming to an end, mostly due to manufacturing and material constraints [4], interest in developing faster optimization methods has shifted to the development of new state-of-the-art algorithms, as well as novel computing paradigms, e.g., based on quantum architectures. Quantum computing [5,6] and, in particular, adiabatic quantum optimization [7-18] have gained increased momentum since D-Wave Systems Inc. introduced the D-Wave Two (DW2) quantum annealing device [19]. Inspired by the work of Santoro et al. [12], multiple teams have attempted to demonstrate that quantum adiabatic optimization, or quantum annealing (QA) [20-23], has advantages over conventional thermal optimization techniques, such as, for example, simulated annealing (SA) [24]. The idea behind QA is to adiabatically quench quantum fluctuations to optimize a cost function (Hamiltonian) of a given complex optimization problem.
Potentially, the wave function of the problem might be able to quantum tunnel through barriers in the free-energy landscape, i.e., QA might be able to outperform other approaches like SA where temperature fluctuations are slowly reduced to find the optimum. Towards the end of the annealing schedule in SA, when these temperature fluctuations are small, the system is unable to overcome free-energy barriers and, especially for problems with rough energy landscapes such as spin glasses [25,26] and related problems, it might become trapped in metastable states, thus missing the true optimum of the problem. The fact that a broad range of hallmark optimization problems, such as the satisfiability problem (k-SAT), the number partitioning problem, vertex covers, knapsack problems, coloring problems, the traveling salesman problem, etc., can be mapped onto quadratic unconstrained binary optimization problems [27] means that devices that are tailored to solve these, such as the DW2, could revolutionize today's optimization efforts. Although not a fully programmable universal quantum computer, the D-Wave device represents a sizable advance in (quantum) computing. The seminal work of Rønnow et al. [28] took great care and detail in defining the notion of quantum speedup. While at the moment the demonstration of strong quantum speedup remains a distant goal, the detection of limited quantum speedup [29], a speedup relative to a given corresponding classical algorithm such as SA, seems more attainable. The number of studies (see, for example, Refs. [28, 30-33]) attempting to detect quantum speedup is growing at a fast pace; however, a definitive detection of quantum speedup remains elusive. So why, despite these large efforts, does quantum speedup remain to be demonstrated? Potentially, there are many reasons why this might be the case. On the one hand, the complex circuitry, combined with the extreme fragility of quantum states to perturbations, might be a source of decoherence and thus a loss of any advantage over conventional techniques. On the other hand, the systems currently available (at most 512 qubits on the DW2, soon up to ~1000) might be too small for the benchmarks to be in the asymptotic scaling regime. However, a more mundane reason that is relatively easy to fix is the choice of the wrong benchmark problem. In Ref. [34], Katzgraber et al. demonstrated that the native benchmark to search for quantum speedup on a device like the DW2, an Ising spin glass with discrete uncorrelated disorder, is likely a problem that not only might be too easy to detect any speedup (think of two world-class skiers on a bunny slope), but whose energy landscape on the DW2 Chimera topology [35] might actually favor thermal approaches like SA, simply because the spin-glass state exists only at zero temperature. Furthermore, the use of either bimodal or uniform range-k disorder [28, 31-33] creates an energy landscape that has a huge number of configurations that minimize the cost function. As such, any method like SA run with multiple restarts will naturally excel in optimizing such a problem. Attempts to mitigate this issue by planting solutions [36] deliver problem instances that might not be challenging enough for classical algorithms and quantum devices alike. To overcome the limitations imposed by the small size of current devices, it is imperative to use a native benchmark problem that uses as many qubits N as possible on the device.
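As a concrete illustration of the mapping onto quadratic unconstrained binary optimization (QUBO) problems mentioned above, the sketch below converts an Ising cost function with couplings J_ij and fields h_i into QUBO form via the standard substitution s_i = 2x_i − 1 and verifies the equivalence on a toy problem. This is a generic textbook transformation, not code from the study; all names are placeholders.

```python
import itertools

def ising_to_qubo(J, h):
    """Convert H(s) = sum_{i<j} J_ij s_i s_j + sum_i h_i s_i with s_i in {-1,+1}
    into an equivalent QUBO  x^T Q x + offset  with x_i in {0,1}, using s_i = 2*x_i - 1."""
    n = len(h)
    Q = {(i, i): 0.0 for i in range(n)}
    offset = 0.0
    for i in range(n):
        Q[(i, i)] += 2.0 * h[i]
        offset -= h[i]
    for (i, j), Jij in J.items():
        Q[(i, j)] = Q.get((i, j), 0.0) + 4.0 * Jij
        Q[(i, i)] -= 2.0 * Jij
        Q[(j, j)] -= 2.0 * Jij
        offset += Jij
    return Q, offset

def qubo_energy(Q, offset, x):
    return offset + sum(q * x[i] * x[j] for (i, j), q in Q.items())

def ising_energy(J, h, s):
    return (sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
            + sum(hi * si for hi, si in zip(h, s)))

# Tiny 3-spin example: check that both formulations agree on every configuration.
J = {(0, 1): 1.0, (1, 2): -1.0}
h = [0.0, 0.5, 0.0]
Q, offset = ising_to_qubo(J, h)
for bits in itertools.product([0, 1], repeat=3):
    spins = [2 * b - 1 for b in bits]
    assert abs(qubo_energy(Q, offset, bits) - ising_energy(J, h, spins)) < 1e-12
print("Ising and QUBO energies match on all configurations.")
```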
Any embedding of a potentially harder problem [37] will further reduce the number of logical qubits, thus pushing the asymptotic regime farther away. Furthermore, it is hard to mitigate the effects of noise on both qubits and couplers without improving manufacturing. However, it is considerably easier to design hard benchmark instances that attempt to work around the flaws and limitations of the DW2 architecture. Reference [38] focuses on designing instance problems that are affected as little as possible by the chip's intrinsic noise. Here, we present a simple road map that uses insights from the study of spin glasses to design hard, as well as tunable, benchmark instances. In addition, we propose to search for quantum advantages over classical architectures not only by comparing to state-of-the-art classical algorithms [39], but by studying the effects of tuning the instance complexity for a given type of disorder on both classical and quantum approaches. By studying the performance hit felt by the different approaches on carefully tailored problems with a free-energy landscape that is either dominated by large barriers or is reminiscent of a ferromagnetic system, further insights into the nature of quantum annealing devices can be gained. To perform a fair comparison across instances, here we fix the ground-state degeneracy (ideally) to 1 (or as low as possible) and vary the complexity of the free-energy landscape by using the spin-glass order parameter distribution as a proxy for the dominant features of the landscape [40,41]. We show that, indeed, the spin-glass order parameter distribution produces tunable instances, and that predictions from the study of spin glasses on the complexity of the energy landscape allow us to produce problems on average considerably harder than in any previous study. We emphasize that we are not attempting to perform a scaling analysis as done in previous studies, simply because we believe that the currently accessible system sizes of up to 512 qubits are too small to be in the asymptotic limit [42]. We base this statement on previous simulations of two-dimensional Ising spin glasses on a square lattice at zero temperature with discrete disorder [43], where corrections to scaling due to the finite system sizes were very strong for systems with ~10^3 spins. Our results show that the DW2 device is outperformed at finding the ground state by classical state-of-the-art optimization algorithms. However, there is a potential signature that the DW2 device might be able to optimize certain classes of carefully designed native spin-glass problems more efficiently than the classical counterpart SA, especially if noise is reduced. This suggests that the DW2 device potentially has a "quantum advantage" over corresponding classical algorithms like SA for certain problems. In addition, there are signs that the DW2 device might in some cases be more effective than SA at generating low-lying states, as opposed to strict ground states. Finally, our results suggest that "classical computational hardness" in spin glasses seems to carry over to quantum annealing devices, therefore facilitating the design of spin-glass-based instances. Once quantum annealing machines have lower noise levels, higher connectivity to enable the simple embedding of spin-glass problems with, e.g., a finite transition temperature [34,37], or a larger number of qubits, a combination of the approach presented in Ref.
[28], with error-correction techniques [31,44], and the designer instances described in this work will likely show whether quantum speedup is myth or reality. The paper is structured as follows. In Sec. II, we introduce the native benchmark problem, followed by a detailed description of the limitations of current approaches as well as how we design hard instance problems in Sec. III. Section IV summarizes results on both the DW2 device and classical simulation codes, followed by a discussion and summary. Appendix A outlines our experimental methodology on the DW2 device housed at D-Wave Systems Inc., followed by simulation details in Appendix B and numerical results in Appendix C. Appendix D summarizes less fruitful efforts experimenting with other instance classes.
II. NATIVE BENCHMARK: SPIN GLASSES
We illustrate our benchmarking ideas using the D-Wave Systems, Inc., D-Wave Two quantum annealing machine [45]. The native benchmark problem for the DW2 device is an Ising spin glass [6, 25-27] defined on the Chimera topology of the system [35],
H = Σ_{{i,j}∈E} J_ij S_i^z S_j^z + Σ_{i∈V} h_i S_i^z.  (1)
The N Ising spins S_i^z ∈ {±1} are defined on the vertices V of the Chimera lattice (see Fig. 7) and can be coupled to a (local) field h_i. The sum is over all edges E connecting vertices {i, j} ∈ V. In this study we set h_i = 0 ∀i. We emphasize that it is of paramount importance to study native problems that use as many qubits as possible to prevent overhead that might yield smaller embedded problems. At the moment, with approximately 500 (soon 1000) qubits at hand, it will be difficult to detect any quantum speedup. As such, our focus does not lie in performing a detailed scaling analysis with the problem size N, but in showing how to select tunable hard problems that have the same disorder distribution, i.e., have the same strengths or weaknesses with respect to the intrinsic noise found in these devices. Tuning the complexity of the problem instances will then allow for a systematic testing of any potential advantages or disadvantages that the DW2 device might have over other architectures and/or simulation approaches. Note that in this study we disregard the effects of noise on the couplers and qubits and will report on these in a subsequent publication with strategies on how to mitigate the effects of perturbed problem Hamiltonians [38]. However, for the generated problems, the resilience to noise (robustness to perturbations) on the qubits and couplers is roughly similar and mostly agrees within error bars for the different instance subclasses that use interactions based on Sidon sets [46]; see Sec. III B for details. This means that the noise of the DW2 does not affect our results.
III. DESIGNING HARD INSTANCES
We start by describing the shortcomings of previous instances used to detect quantum speedup and then outline our approach to produce tunable, hard instances. In Ref. [34] it was shown that a spin glass on the Chimera topology has a zero-temperature phase transition. Although the worst-case complexity of finding a ground state of an Ising spin glass on the Chimera graph falls into the NP-hard class, any minimization of the energy based on an annealing approach will likely have a rather simple phase space to traverse for small system sizes because dominant barriers will not be as pronounced.
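For illustration, the following sketch evaluates the cost function in Eq. (1) for a spin configuration on a small Chimera-like graph. The cell wiring used here is a simplified stand-in for the actual Chimera layout (each K_{4,4} cell coupled to its right and lower neighbours), and the bimodal couplings are chosen only to keep the example short; none of this is code from the study.

```python
import random

def chimera_edges(rows, cols):
    """Edges of a simplified Chimera-like graph: each unit cell is a K_{4,4} bipartite block;
    one partition couples to the cell below, the other to the cell to the right (assumed layout)."""
    def idx(r, c, side, k):              # side 0 / 1 = the two partitions of the cell
        return ((r * cols + c) * 2 + side) * 4 + k
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for a in range(4):
                for b in range(4):
                    edges.add((idx(r, c, 0, a), idx(r, c, 1, b)))       # intra-cell K_{4,4}
            if r + 1 < rows:
                for a in range(4):
                    edges.add((idx(r, c, 0, a), idx(r + 1, c, 0, a)))   # vertical inter-cell
            if c + 1 < cols:
                for b in range(4):
                    edges.add((idx(r, c, 1, b), idx(r, c + 1, 1, b)))   # horizontal inter-cell
    return sorted(edges)

def energy(spins, J, h):
    """H = sum_{edges} J_ij S_i S_j + sum_i h_i S_i, as in Eq. (1)."""
    e = sum(Jij * spins[i] * spins[j] for (i, j), Jij in J.items())
    return e + sum(hi * si for hi, si in zip(h, spins))

random.seed(0)
edges = chimera_edges(2, 2)                              # 4 cells x 8 qubits = 32 spins
n = 2 * 2 * 8
J = {e: random.choice([-1.0, 1.0]) for e in edges}       # bimodal (U_1) disorder for simplicity
h = [0.0] * n                                            # h_i = 0, as in the study
spins = [random.choice([-1, 1]) for _ in range(n)]
print("energy of a random configuration:", energy(spins, J, h))
```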
Embedding problems that have a finite-temperature spin-glass transition is difficult, mainly due to the large overhead; i.e., only systems with few logical qubits can be studied because many physical qubits are needed to emulate long-range interactions. Because the resulting systems are small, the problems are far from the asymptotic regime needed to detect any quantum speedup in a scaling analysis. A more promising route is thus to use insights from the study of spin glasses and carefully design the interactions between the qubits on the native Chimera graph, such that the problems are as hard as possible in order to challenge any optimization approach.
A. Problems with current approaches
In addition to a restrictive geometry, the D-Wave hardware has clear restrictions as to what values the interactions between the spins can have. This is rather limiting and, as such, only discrete and well-separated values of the couplers can be set. The simplest approach used in previous studies [28, 31-33] is to select the disorder from a bimodal distribution, i.e., J_ij ∈ {±1} (we shall refer to these as U_1), followed by uniform range-k problems where the interactions J_ij are chosen from the integer set {±1, ±2, . . . , ±k}. We refer to the latter as U_k. The problem with these choices for systems of up to N = 512 variables is the huge degeneracy of the ground states, which again yields benchmarks too simple to challenge any optimization approach (see Sec. IV). A simple analogy to this problem is a game of golf where the green has, for example, 10^7 holes. Hitting a hole in one is a trivial task! However, having a course with only one hole makes the sport truly challenging. As such, we design herein problems that, within the hardware restrictions of the machine, have a unique configuration that minimizes the Hamiltonian in Eq. (1). Other approaches [36,47] using planted solutions suffer from similar problems: While the instances are harder than the problems in the U_k class, they often still have a large degeneracy, and their complexity is not high enough for the currently available systems of up to ~10^3 qubits. In particular, the very careful work presented in Ref. [36] shows a clear easy-hard-easy transition of the planted k-SAT solutions that could be exploited to generate hard instances. However, one problem that these instances have is that the disorder is not drawn from a particular distribution; i.e., two different planted k-SAT instances will likely have very different (classical) energy spectra and thus also be differently susceptible to the intrinsic noise found in the DW2 device [48]. Furthermore, we perform experiments with planted k-SAT solutions as presented in Ref. [36] using the benchmark codes in Ref. [39] and find that these instances are at times easier than the ones in the U_1 class. The authors of Ref. [36] do emphasize that harder problems must be designed to allow for the optimization of the annealing time, as well as the need to find problems where the benefits of quantum annealing can be assessed ahead of time. Finally, setting the spin-spin interactions within the K_{4,4} unit cell of Chimera (see Fig. 7) to be of larger magnitude than those between the cells (often referred to as "cluster problems") has given DW2 an advantage over classical codes in a scaling analysis [49] when cluster Monte Carlo updates are not allowed. However, by design, simulated annealing (and any other Monte Carlo-like simple-sampling variation) will have a large disadvantage.
The addition of simple clusterlike moves would again give classical approaches the upper hand and, as such, these approaches are not a viable route to detect any speedup, especially because they are unphysical.
B. Designing tunable hard instances
Our approach to generate hard instances capitalizes on the similarity between the classical hardness of spin-glass-like problems and quantum hardness. In Fig. 6 of Ref. [40], it was shown in detail how the "mixing" or "autocorrelation" time strongly correlates with the complexity of the spin-glass order parameter distribution when performing the simulations with state-of-the-art parallel tempering Monte Carlo methods [50-52]. Autocorrelation times uniquely characterize the time a classical algorithm needs to completely decorrelate the system. As such, this time can be used as an indirect proxy of the time complexity of a particular disorder instance. In spin glasses, order is measured by comparing two copies of the system with the same disorder [25]. For simplicity, we set S_i^z ≡ S_i, because we are studying the system classically. In that case, the overlap between two replicas α and β with the same disorder J but independent Markov chains is defined via
q = (1/N) Σ_{i=1}^{N} S_i^α S_i^β,
where the sum is over all N spins. One can then study the distribution of the order parameter P(q), which characterizes a given disorder instance J. After a disorder average [...]_av over many instances, P(q) = [P(q)]_av displays a single peak around q ∼ 0 for high temperatures. For T → 0, two peaks at ±q_EA emerge [53,54], a characteristic signature of a broken symmetry. However, for a given instance the structure of the distribution P(q) can be rather complex and can have multiple peaks at different values of q in addition to the two dominant peaks at ±q_EA. Individual peaks can be identified with pairs of dominant valleys in the (free-)energy landscape [26]. When these peaks are close to q ≈ 0, one can assume that a thick barrier separates these valleys, whereas when the peaks are close to each other the barriers are typically thin. Reference [40] showed that when the distribution P(q) has large support in an area close to q = 0, the autocorrelation times were typically larger than when the support around q = 0 is close to zero. As such, by measuring the distribution function P(q), we can predict approximately the time complexity of a particular disorder instance [41]. This is illustrated in the main panel (bottom left) of Fig. 1. There, three characteristic instances are shown (color coded). An instance with many peaks close to q = 0 will typically be computationally harder than one that has only the two peaks at |q| ∼ 1 (red line). Our experiments (shown herein) on the DW2 device show that, indeed, the complexity of an instance can be tuned by studying the structure of P(q), where the distance between two dominant peaks corresponds roughly to the barrier thickness in phase space and the relative depth between the peaks and maxima can be interpreted approximately as the barrier depth. While we are confident that there is a clear correlation between the distance ∆q of two well-defined peaks and the thickness of barriers in the energy landscape, the correlation between the depth between the peaks and the height of the barriers remains to be tested experimentally by a more precise mining of the data. However, if the depth between the peaks is nonzero, then it is safe to assume that there is some relatively trivial path that connects the valleys [55].
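The overlap q and its distribution P(q), which the instance selection below relies on, can be estimated as in the following sketch. The replica "measurements" here are synthetic stand-ins generated for illustration; in the study they come from parallel tempering Monte Carlo runs of two replicas with the same disorder.

```python
import numpy as np

def overlap(s_alpha, s_beta):
    """q = (1/N) * sum_i S_i^alpha * S_i^beta for two replicas with the same disorder."""
    return float(np.mean(s_alpha * s_beta))

def overlap_distribution(samples_alpha, samples_beta, bins=51):
    """Histogram estimate of P(q) from paired measurements of the two replicas."""
    q_values = [overlap(a, b) for a, b in zip(samples_alpha, samples_beta)]
    hist, edges = np.histogram(q_values, bins=bins, range=(-1.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist

# Hypothetical stand-in for measurements of N = 496 spins from two replicas:
rng = np.random.default_rng(1)
N, n_meas = 496, 2000
ref = rng.choice([-1, 1], size=N)                        # a fake "dominant valley"
rep_a = np.where(rng.random((n_meas, N)) < 0.9, ref, -ref)
rep_b = np.where(rng.random((n_meas, N)) < 0.9, ref, -ref)
q_grid, pq = overlap_distribution(rep_a, rep_b)
print("mean P(q) near q = 0:", pq[np.abs(q_grid) <= 0.1].mean())
```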
In addition to selecting instances according to the complexity of the phase space by studying the behavior of the spin-glass order parameter distribution, we estimate the number of configurations for a given instance that minimize the Hamiltonian in Eq. (1). The goal is to make the problem as difficult as possible by restricting the number of minimizing configurations ideally to one, i.e., a unique ground state. To estimate the number of ground-state configurations a given instance has, we use the method pioneered in Refs. [56,57], where states at very low temperatures are sampled with parallel tempering Monte Carlo techniques. Once the ground-state energy is found, a histogram of minimizing configurations is created (indexed by translating the binary configuration string to a number) and sampled until every bin has at least 50 hits. We make sure that we find the true ground-state energy by studying every instance with different simulational heuristics. However, we cannot be completely certain that we have found all configurations that minimize the Hamiltonian, simply because in some cases this number can be huge (in the worst case 2^N). Having exactly one ground state is not a necessary condition to generate a hard problem. However, if our efficient low-temperature search is unable to find more states that minimize the cost function, it will be unlikely that other methods will. A large source of degeneracy in an Ising Hamiltonian is due to zero local fields. The Hamiltonian in Eq. (1) can be written as a single-spin expression, namely
H = Σ_i S_i^z F_i,
where the local fields F_i are given by
F_i = (1/2) Σ_{j:{i,j}∈E} J_ij S_j^z + h_i.
Whenever, for a given disorder, F_i = 0 (recall that h_i = 0 in this study), spin S_i can take any value without influencing the energy of the system. Therefore, if a given disorder instance has k spins where F_i = 0, the degeneracy of the ground state will grow by a factor 2^k. To prevent this from happening, we need to choose the disorder from a distribution that, within the restrictions of the device, minimizes the cases where the local fields are zero. The most convenient choice is thus to select the values of |J_ij| from a Sidon set [46]. In a Sidon set, the sum of two members of the set gives a number that is not part of the set. For example, the set {2, 5, 10} is a Sidon set because the pairwise sum of members of the set never adds up to a member of the set. This is not the case for {2, 5, 7}, where 2 + 5 = 7. To illustrate our ideas, we choose the interactions between the spins from the Sidon set S_28: J_ij ∈ {±8/28, ±13/28, ±19/28, ±28/28}, where we normalize the interactions to be restricted between ±1 [58]. To select instances with particular properties, we can therefore generate large numbers of random problems using different disorder distributions and then mine the data. We first fix the number of ground-state configurations to 1, and then we divide the instances into subclasses by studying the (normalized) overlap distribution P(q) for each instance. For example, we define the following classes:
(a) Hard instances with thick barriers: These are instances where P(q) > 5 for |q| ≤ 0.75. See Fig. 1, main panel. We are interested in instances that have dominant peaks in the central (blue/dark) window. Based on classical simulations, we expect these instances to be on average among the hardest. In particular, we expect that both simulated and quantum annealing will have trouble finding the optimum; see Fig. 1(a).
(b) Hard instances with thin barriers: These are instances where P(q) ≈ 0 for |q| ≤ 0.50 and where P(q) > 2.5 for |q| ≥ 0.5, with at least two peaks in the range |q| ∈ [0.5, 1.0]. See Fig. 1, main panel. We are interested in instances that have dominant peaks that are close to each other in the gray boxes close to |q| > 0.5. Based on classical simulations, we expect these instances to be hard on average, however not as hard as the instances with a thick barrier. We expect that, while simulated annealing will have similar problems to those with a thick barrier, quantum annealing might show an enhanced performance if the device has some quantum advantage over classical codes; see Fig. 1(b).
(c) (Hard) instances with small barriers: These are instances where P(q) < 0.1 for |q| ≤ 0.75. The overlap distribution is reminiscent of a ferromagnet at low temperature. In this case no peaks are allowed in the large central (red/light) box of Fig. 1, main panel. In these instances we expect one dominant energy valley (up to smaller wiggles); i.e., these should be the easiest instances on average for any annealing approach. See Fig. 1(c).
Note that the individual windows we use are tuned such that, from 10^5 randomly simulated instances, approximately 5000 match the aforementioned criteria. After filtering out the instances that have more than one minimizing configuration, we obtain approximately 2500 instances to experiment with. The detailed simulation strategy, as well as the simulation parameters, are listed in Appendix B. Noise on the DW2 device is approximately 5% of a particular external field (qubit noise) h and 3.5% of a spin-spin interaction (coupler) J_ij. For the instances in S_28, the smallest classical energy gap is ∆E = 2/28, i.e., slightly larger than the noise found on the DW2 device. While this will affect the success probabilities, it will affect all instances, either easy or hard, in approximately the same way. To verify this, we perform detailed simulations where we compute the ground-state energy and configuration of a given instance with no degeneracy, perturb the couplers and qubits with Gaussian random noise of a typical strength found in the current DW2 device, and recompute the ground-state configuration. We apply 10 noise gauges and compute how stable the different instance subclasses defined above are on average. Our results show that all Sidon-set-based instance subclasses with different barrier thicknesses are affected similarly by the intrinsic noise of the device (not shown). As such, when comparing instance classes, on average a fair comparison is performed.
FIG. 1: Because the barriers in (a) are large and thick, we expect both classical and quantum approaches to have difficulties. In (b), we illustrate the expected behavior when the barriers are thin, i.e., double peaks (or more) that protrude from the dark boxes in the region |q| > 0.5. The features in the energy landscape of these hard instances with thin barriers are still very pronounced, but we expect the barriers to be thinner than in (a). While SA should show little to no advantage when the barriers remain high but are thinner, if the DW2 device has any quantum advantage, it might be able to overcome these barriers. Finally, we study instances that have no features for |q| < 0.75 (large red box in the main panel) and only have a single peak at ±q_EA. These (hard) instances with small barriers have the simplest energy landscape (c), with mostly only one dominant feature. As such, we expect any annealing approach to efficiently find the optimum of the problem (on average). Note that these are cartoons intended to illustrate the different instance classes and do not represent actual data.
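A minimal sketch of the instance-mining step defined above: given a (normalized) overlap distribution P(q) on a grid of q values, an instance is assigned to the thick-, thin-, or small-barrier class using the window criteria of Sec. III B. The numeric thresholds follow the text; the simple local-maximum peak counting is an assumption made purely for illustration.

```python
import numpy as np

def count_peaks(q, pq, mask, height):
    """Count simple local maxima of P(q) above `height` within the masked region."""
    return sum(1 for i in range(1, len(q) - 1)
               if mask[i] and pq[i] > height and pq[i] >= pq[i - 1] and pq[i] >= pq[i + 1])

def classify_instance(q, pq):
    """Assign an instance to a barrier class from its (normalized) overlap distribution P(q)."""
    # (a) thick barriers: substantial weight, P(q) > 5, somewhere in |q| <= 0.75
    if np.any((np.abs(q) <= 0.75) & (pq > 5.0)):
        return "thick barriers"
    # (b) thin barriers: P(q) ~ 0 for |q| <= 0.5, plus at least two peaks with P(q) > 2.5 for q in [0.5, 1]
    if np.all(pq[np.abs(q) <= 0.5] < 0.1) and count_peaks(q, pq, (q >= 0.5), 2.5) >= 2:
        return "thin barriers"
    # (c) small barriers: no features for |q| <= 0.75, only the single peak near +/- q_EA remains
    if np.all(pq[np.abs(q) <= 0.75] < 0.1):
        return "small barriers"
    return "unclassified"

# Hypothetical P(q): pairs of narrow peaks near |q| = 0.65 and |q| = 0.9 -> expected "thin barriers"
q = np.linspace(-1, 1, 401)
pq = sum(3.0 * np.exp(-((q - c) / 0.02) ** 2) for c in (-0.9, -0.65, 0.65, 0.9))
print(classify_instance(q, pq))
```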
IV. RESULTS
A detailed list of the average success probabilities is given in Appendix C. To make sure that an approximately fair comparison with a known baseline study is performed, we tune the number of sweeps for the SA codes [39] such that the average success probabilities for SA and the DW2 device are approximately the same for bimodal disorder. This is the case for N_sw = 900 sweeps. Note also that below we quote mainly average success probabilities. The reason is that for the hardest instance classes the DW2 device is often unable to minimize the cost function for the number of runs performed; i.e., a median would be zero and thus deliver no useful information. Because probabilities are restricted to the interval [0, 1], an average is well defined.
A. The ugly - D-Wave Two fails often
Figure 2 shows sorted success probabilities p for SA (left) and the DW2 device (right) for different instance classes, normalized by the number of samples N_sa studied. We compare the S_28 classes with thick, thin, and small barriers with uniform range-4 (U_4) instances and the bimodal disorder (U_1) used in previous studies [28]. The data for the DW2 device show a clear progression in complexity and, in particular, that the device is unable to solve many of the harder problems (success probabilities below 10^-4). The SA simulations using the codes of Ref. [39] show that bimodal disorder is considerably easier than all other instance classes. Furthermore, for the number of sweeps used, the complexity of U_4 is similar to S_28 with small ("none") barriers. Interestingly, the SA codes do not distinguish between S_28 instances with thin and thick barriers. Note that this is not the case for the DW2 device. Furthermore, SA can solve a much wider range of instances, as can be seen by the distributions dropping to zero only close to n → N_sa. This means that while the typical (median) probability to solve a problem is finite for the SA codes, for the hardest instance classes the median is zero for the DW2 device. A double-peaked success behavior of the quantum annealer is consistent with what has been reported in Refs. [28,32], whose authors present it as evidence of quantum behavior, although the hypothesis has been subsequently challenged by studies of quasiclassical models [59,60]. Finally, we emphasize that, by optimizing the number of sweeps in the SA codes, these can be tuned to outperform the DW2 device for all disorder classes studied. Figure 3 shows averaged (and gauge-averaged) success probabilities on a logarithmic scale for both DW2 and SA for different instance classes. The data clearly illustrate that the average success probabilities for bimodal disorder are approximately 1 order of magnitude larger than for any other type of disorder studied. Note that we choose the number of sweeps for SA such that the average success probability in the bimodal class is comparable to that of the DW2 device. For the DW2 device, one can clearly see a progression in difficulty between U_1, U_4, and the Sidon set S_28 with small barriers, followed by the Sidon sets with thin and thick barriers. For the choice of sweeps in SA, U_4 is comparable to S_28 with no dominant barriers, and the S_28 instances with thick and thin barriers have approximately the same average success probabilities.
For all Sidon instance classes studied, the classical SA simulations outperform DW2 based on raw success probabilities. This is seen in more quantitative detail in Fig. 4, which shows the ratio of the average success probability for SA divided by the average success probability for DW2 for each instance class.
FIG. 4: In all Sidon instance classes (S_28) the classical codes outperform DW2. Furthermore, success probabilities for bimodal disorder (U_1) are much larger than for any other instance class, therefore suggesting that the degeneracy produced by bimodal disorder makes this instance class too easy to detect quantum speedup. Note also that the classical codes, on average, do not seem to distinguish between instances with thick and thin barriers. Labels are from left to right.
B. The bad - Previous instance classes are too easy
To establish any quantum speedup, a system-size scaling is needed. However, the fact that the average success probabilities for bimodal disorder for DW2 and the classical SA codes are much larger than for all other problems suggests that bimodal disorder (or, more generally, highly degenerate random problems) is too easy a problem to detect any quantum speedup. Running any classical SA code in repetition mode with highly degenerate problems potentially represents an advantage over any quantum annealing scheme. Overall, DW2 has far lower average success probabilities on the Sidon sets. This can be explained by the inherent noise present in the device. In the Sidon sets the gap to the first excited state is considerably smaller than for, e.g., bimodal disorder. As such, solving a Hamiltonian that is not the target Hamiltonian, due to noise-induced perturbations, is likely. Therefore, in an attempt to filter out these effects, we study relative probabilities between instance classes and not between optimization techniques. Because the problem instances are randomly generated, one can expect that within a given instance type, e.g., S_28, the noise affects all instance classes in a similar fashion [58], as we see in our simulations. This also means that the difference in the performance of DW2 for S_28 instances with thick and thin barriers is likely not an artifact of the chosen values for the couplers.
C. The good - Evidence of a quantum advantage?
Figure 3 suggests that, at least with the choice of annealing parameters made, in the Sidon instance class the classical codes do not seem to differentiate between thin and thick barriers on average, whereas DW2 does seem to show an improvement in the average success probabilities when the barrier thickness is decreased. Given the stochastic nature of the classical algorithms, the thickness of a barrier should have a much weaker effect on the algorithmic efficiency than its height. We have selected the instances in such a way that barriers are predominantly tall. Although we have no exact control at the moment as to how tall these barriers are, we can expect them to be on average of similar height for both Sidon sets with thin and thick barriers. However, by selecting instances with peaks in the overlap distribution at a given distance from each other, we have good control over the barrier thickness. Figure 5 shows the ratio of average success probabilities when reducing the barrier thickness (left) and removing dominant barriers (right) for both SA and DW2.
While reducing the barrier thickness has no effect on average on the classical algorithms, DW2 experiences a performance increase. To make sure this is not an artifact of our choice of simulation parameters, we run the SA codes with both N sw = 900 and 2000 sweeps obtaining qualitatively the same results. Furthermore, we find no correlation between the barrier thickness and the effects noisy couplers and qubits have on the success probabilities for both instance classes. When removing dominant barriers altogether, both classical and quantum algorithms show a noticeable performance increase. One can, therefore, surmise that when the barriers are thin enough (and tall) the DW2 device might experience a quantum advantage over classical approaches. However, a far more careful and systematic study must be performed before strong conclusions can be drawn. FIG. 5: Average success probability increase when reducing the barrier thickness (ratio between the average success probabilities for S28 thick and S28 thin) and removing the barriers (ratio between the average success probabilities for S28 thick and S28 none). While in the latter case both classical algorithms and the quantum annealer show a performance boost on average, in the former only the quantum annealer shows improvement. To gain a deeper understanding of the noise effects that affect the DW2 device, we relax our criterion for a successful optimization run by allowing the k lowest excited states to count towards a "successful" run in the Sidon sets. In this case, the smallest classical energy gap when flipping a spin is ∆E = 2/28 ≈ 0.0714. This should be compared with the disorder-averaged ground state energy of the system, i.e, [E 0 ] av ≈= −551. We compute the success probabilities for energies in the interval [E 0 , E 0 + k∆E] for different instance classes using SA and the DW2. Figure 6 shows the average success probabilities as a function of the number of energy levels k. Although we only fix the average success probabilities for the U 1 class to be similar for DW2 (full symbols) and SA (empty symbols) and k = 0, it seems this result holds for at least the first 10 excited states. As can be seen, average success probabilities increase with an increased inclusion of low-lying energy levels for all instance classes. The trend is far more pronounced for the DW2 device than for SA in the case of the Sidon sets S 28 , indicating that noise clearly affects the ability of the machine to detect ground states. Furthermore, note that allowing for the lowest 10 energy levels in the S 28 class corresponds to an increase in less than 1% in the overall energy of the system. Averaging over gauges (i.e., different instances of noise terms in the Hamiltonian) does help the DW2 device, thus illustrating that an increased performance strongly depends on reducing noise, and also performing multiple quenches. Is the DW2 device of any use then? For problems affected by noise due to device restrictions, the DW2 thus might efficiently deliver low-lying energy states. This is of particular relevance to problem domains such as machine learning [61] and Bayesian statistical analysis [62]. For optimization, the data suggest that error-correction strategies [31] that enhance robustness to noise should be explored in greater depth. 
Combined with a hybrid approach that either breaks up the problem into smaller groups that are easier to tackle [63][64][65], or uses other efficient computing architectures [66] to complement the minimization, the DW2 device (or any other quantum annealing machine) might be an efficient optimization tool one day. V. DISCUSSION We illustrate that a careful design of the benchmark instances is key when attempting to detect quantum speedup. In particular, using insights from the study of spin glasses can help in designing benchmark problems that are considerably harder than previous attempts, and are tunable. Noise levels combined with the small number of qubits on the DW2 device make it difficult to detect any quantum speedup at the moment. Below, we attempt to discuss sources of the poor performance of the device as seen from the spin-glass perspective. Disordered frustrated binary systems are the native, likely hardest, as well as simplest benchmark problems for any new (quantum) computing paradigm. It is important to consider some of the hallmark properties of spin glasses that could make it extremely difficult to detect any (quantum) speedup in the presence of coupler, as well as local-field qubit noise. A. Effects of coupler noise The extreme fragility of the spin-glass state was predicted a long time ago [67,68] and analyzed on the basis of scaling arguments [69,70]. These scaling arguments predict that the configurations that dominate the partition function change drastically and randomly when temperature, local fields, or the interactions between the spins are modified. There is strong (numerical) evidence of disorder chaos (coupler noise) in spin glasses [71][72][73][74][75][76][77][78]. Therefore, small perturbations of the couplers due to noise might lead to the destruction of the spinglass state, as well as to a change of the problem to be solved. The latter can be alleviated slightly by performing multiple gauges. However, the weak chaos regime is dominated by rare events that can flip large spin domains that can directly affect experimental results [77]. Increasing the classical energy gap beyond the noise level of the machine can partially reduce these effects, however at the cost of producing considerably easier benchmark instances [38]. One might argue that the minimum classical gap of the Sidon instances (∆E = 2/28) is too small compared to the machine restrictions when encoding problems. However, we perform tests with a different instance class with a larger clas-sical energy gap and where the couplers are drawn from the Sidon set {±5, ±6, ±7}, finding qualitatively similar results. B. Effects of local-field noise In mean-field theory [79], an Ising spin-glass system has a line of transitions in a field [80], known as the de Almeida-Thouless line that separates the paramagnetic phase at high temperatures and fields from the spin-glass phase at lower temperatures and fields [81][82][83][84][85][86]. Although the existence of a de Almeida-Thouless line for short-range spin glasses is still under some debate (see, for example, Refs. [87][88][89]), there is vast numerical evidence for a multitude of geometries and, in particular, low-dimensional systems that the spin-glass state is strongly affected by any longitudinal (random) fields [90][91][92][93]. As for the case of disorder chaos in spin glasses, the spin-glass state can be easily affected by the intrinsic qubit noise of the DW2 device. 
Therefore, it might be plausible that, again, the high levels of noise reduce the success probabilities because the studied system is perturbed and dominant barriers are affected.
VI. SUMMARY AND CONCLUSIONS
We find that for most disorder types studied, DW2 is systematically slower at finding the ground state than the state-of-the-art classical SA codes developed by Isakov et al. [39]. Note that, by optimizing the number of sweeps in the SA codes, these can be tuned to outperform the DW2 device for all disorder classes studied. Although this might be discouraging at first, we argue that improved machine calibration [94], noise reduction [95], and the ability to likewise optimize the quantum annealing schedule, combined with larger system sizes and tailored spin-glass problems, might help in the quest for quantum speedup. We also show that a "classically computationally hard" problem typically seems to also be a hard problem for the quantum annealing device. However, it could also be that the DW2 device is a thermal annealer [59, 60, 96-99] in disguise. For the hardest Sidon instances the DW2 device does show a promising trend when the success constraints are relaxed. Furthermore, reducing the thickness of barriers in the free-energy landscape suggests that for the large Sidon instances studied some quantum advantage might be present. However, this would not be enough to deem the hardware efficient, especially because it is unclear whether this effect persists for larger problem sizes. We conclude by stressing that a careful design of benchmark instances is key to detecting quantum speedup [28] or any quantum advantage a novel quantum annealing device might have. We thus expect that a combination of the methodologies outlined in this work with the approach outlined in Ref. [28], which defines the notion of "quantum speedup" in detail, combined with better hardware (and maybe quantum error correction [31,44]), will finally show whether or not quantum annealing has an advantage over classical thermal annealing.
FIG. 7: Circles represent the individual qubits and lines the couplers. White circles represent fully functional qubits, whereas light gray circles represent working qubits with missing couplers. Broken qubits are represented by dark circles (16). This means that the total number of working qubits is 496.
D-Wave Two Methodology
An annealing time of 20 µs is used for all experimental runs on the DW2 processor, which is cooled to a temperature of 18 mK. Each problem instance is run N_R = 10^4 times in N_G = 10 batches of randomly chosen gauge transformations in order to provide protection against parameter noise and control errors. To generate a gauge transformation, a set of N random variables {t_i}, with t_i ∈ {−1, 1}, is sampled uniformly, and the transformation J_ij → t_i t_j J_ij, h_i → t_i h_i is made. In principle, this procedure does not fundamentally change the problem, but due to parameter noise on the physical device, each gauge transformation of a given instance will, in reality, correspond to a different Hamiltonian. Following the analysis performed in Ref. [33], an instance's success probability across gauges is derived from the geometric mean of the gauges' failure rates: if p_g is the observed success probability of a gauge g, then the combined success probability is p = 1 − [Π_g (1 − p_g)]^(1/N_G). A "success" is defined as the occurrence of a state meeting a criterion, for example, of having ground-state energy E_0, or having an energy lying within a range [E_0, E_0 + ∆], ∆ > 0, of the minimum.
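The success bookkeeping just described can be illustrated with a short sketch: per-gauge success probabilities are computed from run energies, optionally with the relaxed criterion of counting energies within [E_0, E_0 + Δ], and then combined across gauges via the geometric mean of the failure rates. The energies below are hypothetical, and the combination formula simply restates the expression given above; this is not the study's analysis pipeline.

```python
import numpy as np

def success_probability(energies, e0, delta=0.0):
    """Fraction of anneals whose energy lies in [E_0, E_0 + delta]."""
    energies = np.asarray(energies, dtype=float)
    return float(np.mean(energies <= e0 + delta + 1e-12))

def combine_gauges(p_gauges):
    """Combine per-gauge success probabilities p_g into one instance success probability
    via the geometric mean of the failure rates: p = 1 - (prod_g (1 - p_g))**(1 / N_G)."""
    failures = 1.0 - np.asarray(p_gauges, dtype=float)
    return 1.0 - float(np.prod(failures) ** (1.0 / len(failures)))

# Hypothetical per-run energies for N_G = 3 gauges of one instance (E_0 = -551, gap 2/28).
rng = np.random.default_rng(7)
e0, gap = -551.0, 2.0 / 28.0
runs = [e0 + gap * rng.integers(0, 6, size=1000) for _ in range(3)]

for k in (0, 1, 10):          # strict ground state, then relaxed by k classical energy levels
    p_g = [success_probability(r, e0, delta=k * gap) for r in runs]
    print(f"k = {k:2d}: per-gauge p = {np.round(p_g, 3)}, combined p = {combine_gauges(p_g):.3f}")
```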
The DW2 device is run in the so-called "autoscaling" mode for all problems, which adjusts the nominally specified J and h parameters to fully use the range allowed by the device.
Simulated annealing methodology
For the software-based simulated annealing experiments, we use the codes developed by Isakov et al. [39] to ensure a fair comparison with previous studies. The authors present a variant of SA that exploits the bipartite nature of topologies such as the Chimera graph's in order to halve the number of variables being simulated. This optimization results in considerably improved performance over plain SA. In this study we use the an_ss_ge_nf_bp_vdeg routine. All instances are simulated N_R = 10^4 times for N_sw = 900 Monte Carlo sweeps each; clearly, no advantage would be gained from gauge transformations in the software case. The default geometric annealing schedule described in Ref. [39] was adequate for our purposes, but the (inverse) temperature scales were appropriately adjusted for each instance class. The parameters of the simulation are listed in Table I. Note that we choose N_sw = 900 such that the average success probabilities for the DW2 device agree with the SA simulations for the commonly studied bimodal (U_1) disorder. We choose this approach to provide a baseline for all other instance classes. Simulations with N_sw = 2000 sweeps showed qualitatively similar results. To compute the overlap distribution P(q) we perform finite-temperature parallel tempering Monte Carlo simulations [50-52] combined with isoenergetic cluster moves [102] to speed up the simulations. We choose a temperature set with 30 temperatures, and the lowest temperature T_min = 0.212 is chosen such that thermalization can be completed in a meaningful time and features in the overlap distribution are well defined. Two replicas with N = 496 spins and the same disorder are thermalized for 2^23 Monte Carlo sweeps, and P(q) is measured over an additional 2^23 Monte Carlo sweeps to obtain high-resolution data. We compute 10^5 randomly chosen disorder instances for each problem class. The data are then mined according to predefined criteria (see Sec. III B). Table II lists the numerical values of the average success probabilities for the different instance classes we study, either on the DW2 device or with the SA codes. All numbers are averaged via a jackknife procedure over N_sa instances of the disorder.
Appendix D: Other Instance Classes Studied
We also perform other experiments with different instance classes. However, these are either too easy, or it is extremely difficult to obtain unique ground-state instances. Note that for the J_4 instances [34], where the interactions are bimodally distributed and the bonds in the K_{4,4} cells are a 1/4, as well as for the S_{1,3,7} small Sidon instances, we limit the number of configurations that minimize the Hamiltonian to less than 32, because too few unique ground states could be found. As such, we are merely mentioning the results here to prevent other researchers from attempting to study these systems. Average success probabilities are listed in Table III.
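To make the instance-construction procedure of Sec. III B more concrete, the sketch below draws couplings from the S_28 Sidon set for an arbitrary edge list, verifies the Sidon property, and flags spins whose local field vanishes in a given configuration (with h_i = 0, each such spin can be flipped for free and doubles the degeneracy, which is exactly what the Sidon-set choice is meant to suppress). It is a simplified, hypothetical illustration rather than the generation code used in the study.

```python
import random

SIDON_28 = [s / 28.0 for s in (8, 13, 19, 28)]           # |J_ij| values of the S_28 set

def is_sidon(values):
    """Check the Sidon property: no pairwise sum of members equals another member."""
    vs = set(values)
    return all(a + b not in vs for a in values for b in values)

def draw_instance(edges, rng):
    """Random couplings J_ij in {+/-8, +/-13, +/-19, +/-28}/28 on the given edge list."""
    return {e: rng.choice([-1, 1]) * rng.choice(SIDON_28) for e in edges}

def zero_field_spins(J, spins):
    """Spins i whose local field sum_j J_ij S_j vanishes in the configuration `spins`
    (assuming h_i = 0); each such spin can be flipped without changing the energy."""
    n = len(spins)
    field = [0.0] * n
    for (i, j), Jij in J.items():
        field[i] += Jij * spins[j]
        field[j] += Jij * spins[i]
    return [i for i in range(n) if abs(field[i]) < 1e-12]

rng = random.Random(3)
print("S_28 members form a Sidon set:", is_sidon([8, 13, 19, 28]))
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]         # toy graph for illustration
J = draw_instance(edges, rng)
spins = [rng.choice([-1, 1]) for _ in range(4)]
print("spins with zero local field:", zero_field_spins(J, spins))
```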
10,928.4
2015-05-06T00:00:00.000
[ "Computer Science", "Physics" ]
LoRAS: An oversampling approach for imbalanced datasets
The Synthetic Minority Oversampling TEchnique (SMOTE) is widely used for the analysis of imbalanced datasets. It is known that SMOTE frequently over-generalizes the minority class, leading to misclassifications for the majority class and affecting the overall balance of the model. In this article, we present an approach that overcomes this limitation of SMOTE, employing Localized Random Affine Shadowsampling (LoRAS) to oversample from an approximated data manifold of the minority class. We benchmarked our LoRAS algorithm with 28 publicly available datasets and show that drawing samples from an approximated data manifold of the minority class is the key to successful oversampling. We compared the performance of LoRAS, SMOTE, and several SMOTE extensions and observed that for imbalanced datasets LoRAS, on average, generates better Machine Learning (ML) models in terms of F1-score and Balanced Accuracy. Moreover, to explain the success of the algorithm, we have constructed a mathematical framework to prove that LoRAS is a more effective oversampling technique since it provides a better estimate of the mean of the underlying local data distribution of the minority class data space.
Introduction
Imbalanced datasets are frequent occurrences in a large spectrum of fields where Machine Learning (ML) has found its applications, including business, finance and banking, as well as medical science. Oversampling approaches are a popular choice to deal with imbalanced datasets (Barua et al., 2014, Bunkhumpornpat et al., 2009, Chawla et al., 2002, Haibo et al., 2008, Han et al., 2005). We here present Localized Randomized Affine Shadowsampling (LoRAS), which produces better ML models for imbalanced datasets compared to state-of-the-art oversampling techniques such as SMOTE and several of its extensions. We use computational analyses and a mathematical proof to demonstrate that drawing samples from an approximated data manifold of the minority class is key to successful oversampling. We validated the approach with 28 imbalanced datasets, comparing the performances of several state-of-the-art oversampling techniques with LoRAS. The average performance of LoRAS on all these datasets is better than that of the other oversampling techniques we investigated. In addition, we have constructed a mathematical framework to prove that LoRAS is a more effective oversampling technique since it provides a better estimate of the local mean of the underlying data distribution in some neighbourhood of the minority class data space. For imbalanced datasets, the number of instances in one (or more) class(es) is very high (or very low) compared to the other class(es). A class having a large number of instances is called a majority class and one having far fewer instances is called a minority class. This makes it difficult to learn from such datasets using standard ML approaches. Oversampling approaches are often used to counter this problem by generating synthetic samples for the minority class to balance the number of data points for each class. SMOTE is a widely used oversampling technique, which has received various extensions since it was published by Chawla et al. (2002). The key idea behind SMOTE is to randomly sample artificial minority class data points along line segments joining an arbitrary minority class data point and k of its nearest minority class neighbors.
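To make the interpolation idea concrete, the following sketch implements a bare-bones version of SMOTE as just described: for each synthetic point, a random minority sample is paired with one of its k nearest minority-class neighbours, and a point is drawn uniformly on the connecting line segment. It is a simplified illustration, not the reference implementation of Chawla et al. (2002) or of any particular library; function and parameter names are placeholders.

```python
import numpy as np

def smote_sample(X_min, n_synthetic, k=5, seed=0):
    """Generate synthetic minority samples by linear interpolation between each chosen
    minority point and one of its k nearest minority-class neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # Pairwise distances within the minority class (brute force is fine for small datasets).
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_synthetic):
        i = rng.integers(len(X_min))                     # random minority point
        j = rng.choice(neighbours[i])                    # one of its k nearest neighbours
        lam = rng.random()                               # uniform position on the segment
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Hypothetical 2-D minority class with 12 points, oversampled by 20 synthetic points.
rng = np.random.default_rng(42)
X_minority = rng.normal(size=(12, 2))
X_new = smote_sample(X_minority, n_synthetic=20, k=3)
print(X_new.shape)        # (20, 2)
```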
The SMOTE algorithm, however, has several limitations; for example, it does not consider the distribution of the minority class and latent noise in a data set (Hu et al., 2009). It is known that SMOTE frequently over-generalizes the minority class, leading to misclassifications for the majority class and affecting the overall balance of the model (Puntumapon and Waiyamai, 2012). Several other limitations of SMOTE are mentioned in Blagus and Lusa (2013). To overcome such limitations, several algorithms have been proposed as extensions of SMOTE. Some focus on improving the generation of synthetic data by combining SMOTE with other oversampling techniques, including the combination of SMOTE with Tomek-links (Elhassan et al., 2016), particle swarm optimization (Gao et al., 2011, Wang et al., 2014), rough set theory (Ramentol et al., 2012), kernel-based approaches (Mathew et al., 2015), Boosting (Chawla et al., 2003), and Bagging (Hanifah et al., 2015). Other approaches choose subsets of the minority class data to generate SMOTE samples or cleverly limit the number of synthetic data points generated (Santoso et al., 2017). Some examples are Borderline1/2 SMOTE (Han et al., 2005), ADAptive SYNthetic (ADASYN) (Haibo et al., 2008), Safe Level SMOTE (Bunkhumpornpat et al., 2009), Majority Weighted Minority Oversampling TEchnique (MWMOTE) (Barua et al., 2014), Modified SMOTE (MSMOTE), and Support Vector Machine-SMOTE (SVM-SMOTE) (Suh et al., 2017) (see Table 1) (Hu et al., 2009). Recent comparative studies have focused on SMOTE, the Borderline1/2 SMOTE models, ADASYN, and SVM-SMOTE (Ah-Pine et al., 2016, Suh et al., 2017), which is why we will focus on these five models for a comparison with our newly developed oversampling technique LoRAS. LoRAS allows us to resample the data uniformly from an approximated data manifold of the minority class data points and, thus, to create a more balanced and robust model. A LoRAS oversample is an unbiased estimator of the mean of the underlying local probability distribution followed by a minority class sample (assuming that it is some random variable), such that the variance of this estimator is significantly less than that of a SMOTE-generated oversample, which is also an unbiased estimator of the mean of the underlying local probability distribution followed by a minority class sample. In this section we discuss our strategy to approximate the data manifold, given a small dataset. A typical dataset for a supervised ML problem consists of a set of features F = {f_1, f_2, ...}, which are used to characterize patterns in the data, and a set of labels or ground truth. Ideally, the number of instances or samples should be significantly greater than the number of features. In order to maintain the mathematical rigor of our strategy we propose the following definition of a small dataset. Definition 1. Consider a class or the whole dataset with n samples and |F| features. If log_10(n/|F|) < 1, then we call the dataset a small dataset. The LoRAS algorithm is designed to learn from a small dataset by approximating the underlying data manifold. Assuming that F is the best possible set of features to represent the data and that all features are equally important, we can think of a data oversampling model as a function g : (R^{|F|})^l → R^{|F|}, that is, g uses l parent data points (each with |F| features) to produce an oversampled data point in R^{|F|}. Definition 2.
We define a random affine combination of some arbitrary vectors as the affine linear combination of those vectors such that the coefficients of the linear combination are chosen randomly. Formally, a vector v = α_1 u_1 + ... + α_m u_m is a random affine combination of the vectors u_1, ..., u_m (u_j ∈ R^{|F|}) if α_1 + ... + α_m = 1, α_j ∈ R^+, and α_1, ..., α_m are chosen randomly from a Dirichlet distribution. The simplest way of augmenting a data point would be to take the average (or a random affine combination as defined in Definition 2) of two data points as an augmented data point. But when we have |F| features, we can assume that the hypothetical manifold on which our data lie is |F|-dimensional. An |F|-dimensional manifold can be approximated by a collection of (|F|−1)-dimensional planes. Given |F| sample points, we can exactly derive the equation of a unique (|F|−1)-dimensional plane containing these |F| sample points. By Definition 1, for a small dataset, however, log_10(n/|F|) < 1, and thus there is even a possibility that n < |F|. To resolve this problem, we create shadow data points or shadowsamples from our n parent data points in the minority class. Each shadow data point is generated by adding noise from a normal distribution N(0, h(σ_f)) for every feature f ∈ F, where h(σ_f) is some function of the sample variance σ_f of the feature f. For each of the n data points we can generate m shadow data points such that n × m ≫ |F|. Now it is possible for us to choose |F| shadow data points from the n × m shadow data points even if n < |F|. Since real-life data are mostly nonlinear, to approximate the data manifold effectively we have to localize our strategy. For each parent data point p in a small dataset D, let us denote by N_k^p the set of k-nearest neighbors (including p) of p in D. We can always choose m > 0 in such a way that |N_k^p| × m ≫ |F|. Each time, we choose |F| shadow data points as follows: we first choose a random parent data point p and then restrict the domain of choice to the shadowsamples generated by the parent data points in N_k^p. We then take a random affine combination of the |F| chosen shadowsamples to create one augmented Localized Random Affine Shadowsample, or LoRAS sample, as defined in Definition 2. Thus, a LoRAS sample is an artificially generated sample drawn from an (|F|−1)-dimensional plane which locally approximates the underlying hypothetical |F|-dimensional data manifold. Theoretically, we can generate n′ LoRAS samples such that log_10(n′/|F|) ≥ 1 and use them for training an ML model. In this article, all imbalanced classification problems that we deal with are binary classification problems. For such a problem, there is a minority class C_min containing relatively few samples compared to a majority class C_maj. We can thus consider the minority class as a small dataset and use the LoRAS algorithm to oversample. For every data point p we denote the set of shadowsamples generated from p as S_p. In practice, one can also choose 2 ≤ N_aff ≤ |F| shadowsamples for an affine combination and choose a desired number of oversampled points N_gen to be generated using the algorithm. We can look at LoRAS as an oversampling algorithm as described in Algorithm 1. The LoRAS algorithm thus described can be used for oversampling of minority classes in the case of highly imbalanced datasets.
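The following is a minimal sketch of the procedure just described (shadowsamples around a k-neighborhood combined with Dirichlet affine weights). It is not the published implementation: it uses a single noise standard deviation for all features instead of the per-feature list L_σ, and all parameter values are illustrative.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def loras_oversample(X_min, n_gen_per_point=30, k=5, n_shadow=40,
                     sigma=0.005, n_aff=None, seed=0):
    """Sketch of LoRAS-style oversampling of the minority class X_min."""
    rng = np.random.default_rng(seed)
    n, n_feat = X_min.shape
    n_aff = n_aff or n_feat                       # default: |F| shadowsamples per combination
    nn = NearestNeighbors(n_neighbors=k).fit(X_min)
    _, idx = nn.kneighbors(X_min)                 # k-neighborhood indices (include the point itself)
    out = []
    for i in range(n):
        neigh = X_min[idx[i]]
        # Shadowsamples: each neighborhood point plus Gaussian noise
        # (single sigma here; LoRAS uses one sigma per feature, L_sigma).
        shadows = np.repeat(neigh, n_shadow, axis=0) \
            + rng.normal(0.0, sigma, size=(k * n_shadow, n_feat))
        for _ in range(n_gen_per_point):
            chosen = shadows[rng.choice(len(shadows), size=n_aff, replace=False)]
            w = rng.dirichlet(np.ones(n_aff))     # affine weights, sum to 1
            out.append(w @ chosen)
    return np.vstack(out)

X_min = np.random.default_rng(2).normal(size=(30, 8))
print(loras_oversample(X_min).shape)              # (900, 8)
```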
Note that the input variables of our algorithm are: the number of nearest neighbors per sample k, the number of generated shadow points per parent data point |S_p|, the list L_σ of standard deviations of the normal distributions used to add noise to every feature when generating the shadowsamples, the number of shadowsamples to be chosen for affine combinations N_aff, and the number of generated points for each nearest-neighbors group N_gen. We have mentioned the default values of the LoRAS parameters in Algorithm 1, which shows the pseudocode of the LoRAS algorithm:
Constraint: N_aff < k * |S_p|
Initialize loras_set as an empty list
For each minority class parent data point p in C_min do
    neighborhood <- calculate the k-nearest neighbors of p and append p
    Initialize neighborhood_shadow_sample as an empty list
    For each parent data point q in neighborhood do
        shadow_points <- draw |S_p| shadowsamples for q, drawing noise from normal distributions with the corresponding standard deviations in L_σ (one per feature)
        Append shadow_points to neighborhood_shadow_sample
    Repeat
        selected_points <- select N_aff random shadow points from neighborhood_shadow_sample
        affine_weights <- create and normalize random weights for selected_points
        generated_LoRAS_sample_point <- selected_points · affine_weights
        Append generated_LoRAS_sample_point to loras_set
    Until N_gen resulting points are created
Return the resulting set of generated LoRAS data points as loras_set
One could use a random grid search technique to come up with appropriate parameter combinations within given ranges of parameters. As output, our algorithm generates a LoRAS dataset for the oversampled minority class, which can subsequently be used to train an ML model. The implementation of the algorithm in Python (V 3.7.4) and an example Jupyter Notebook for the credit card fraud detection dataset are provided in the GitHub repository https://github.com/narek-davtyan/LoRAS. In our computational code on GitHub, |S_p| corresponds to num_shadow_points, L_σ corresponds to list_sigma_f, N_aff corresponds to num_aff_comb, and N_gen corresponds to num_generated_points. Figure 1: Visualization of the workflow demonstrating a step-by-step explanation of LoRAS oversampling. (a) Here, we show the parent data points of the minority class C_min. For a data point p we choose three of its closest neighbors (using knn) to build a neighborhood of p, depicted as the box. (b) Extracting the four data points in the closest neighborhood of p (including p). (c) Drawing shadow points from normal distributions centered at these parent data points. (d) We randomly choose three shadow points at a time to obtain a random affine combination of them (spanning a triangle). We finally generate a novel LoRAS sample point from the neighborhood of a single data point p. Case studies For testing the potential of LoRAS as an oversampling approach, we designed benchmarking experiments with a total of 28 imbalanced datasets. With this number of diverse case studies we should have a comprehensive idea of the advantages of LoRAS over other existing oversampling methods. Datasets used for validation Here we provide a brief description of the datasets and the sources that we have used for our studies.
Scikit-learn imbalanced benchmark datasets: The imblearn.datasets package complements the sklearn.datasets package. It provides 27 pre-processed datasets which are imbalanced. The datasets span a large range of real-world problems from several fields such as business, computer science, biology, medicine, and technology. This collection of datasets was proposed in the imblearn.datasets Python library by Lemaître et al. (2017) and benchmarked by Ding (2011). Many of these datasets have been used in various research articles on oversampling approaches (Ding, 2011, Saez et al., 2016). Methodology For each case study, we split the dataset into 50% training and 50% testing data. We did a pilot study with ML classifiers such as k-nearest neighbors (knn), Support Vector Machine (svm) (linear kernel), Logistic regression (lr), Random forest (rf), and Adaboost, as inferred in (Blagus and Lusa, 2013). First, we trained the models with the unmodified dataset to observe how they perform without any oversampling. Then, we oversampled the minority class using SMOTE, Borderline1 SMOTE, Borderline2 SMOTE, SVM SMOTE, ADASYN, and LoRAS to retrain the ML algorithms on the oversampled datasets. We then measured the performances of our models using performance metrics such as Balanced Accuracy and F1-Score. In our study, we benchmark LoRAS against several other oversampling algorithms for the 27 benchmark datasets. To ensure fairness of comparison, we oversampled such that the total number of augmented samples generated from the minority class was as close as possible to the number of samples in the majority class, as allowed by each oversampling algorithm. For the credit card fraud detection dataset we compared the performances of several oversampling techniques, including LoRAS, and of several ML models as well, ensuring that we build the best possible ML model using customized parameter settings for the respective oversampling techniques. For this case we also chose the ML models lr and rf, since their performance was the best. LoRAS has several parameters (k, |S_p|, L_σ, N_aff, N_gen). For a fair comparison with other models, we ensured that we chose the same value for the parameter denoting the number of nearest neighbors of a minority class sample, k, wherever applicable. Results For imbalanced datasets there are more meaningful performance measures than Accuracy, including Sensitivity or Recall, Precision, F1-Score (F-Measure), and Balanced Accuracy, all of which can be derived from the Confusion Matrix generated while testing the model. For a given class, the different combinations of recall and precision have the following meanings: the F1-Score is calculated as the harmonic mean of precision and recall and, therefore, balances a model in terms of precision and recall. Balanced Accuracy is the mean of the individual class accuracies and, in this context, is more informative than the usual accuracy score. A high Balanced Accuracy ensures that the ML algorithm learns adequately for each individual class. These measures have been defined and discussed thoroughly by M. Abd Elrahman and Abraham (2013). We will use the above-mentioned performance measures wherever applicable in this article. Scikit-learn imbalanced datasets: In Table 2 we show the Balanced Accuracies and F1-Scores for the 27 inbuilt datasets in Scikit-learn. Calculating average performances over all datasets, LoRAS has the best Balanced Accuracy and F1-Score. As expected, SMOTE improved both Balanced Accuracy and F1-Score compared to normal model training.
Interestingly, the oversampling approaches SVM-SMOTE and Borderline1 SMOTE also improved the average F1-Score compared to SMOTE, but compromised with a lower Balanced Accuracy. Between SVM-SMOTE and Borderline1 SMOTE, we noted that SVM-SMOTE improved the F1-Score the most, but has the lower Balanced Accuracy. In contrast, our LoRAS approach produces a better Balanced Accuracy than SMOTE on average while maintaining the highest average F1-Score among all oversampling techniques. From Table 3, we see that LoRAS performs best in terms of Balanced Accuracy and F1-Score for 11 and 9 datasets, respectively. Thus, LoRAS outperforms the other oversampling algorithms in terms of both Balanced Accuracy and F1-Score for the largest number of datasets. Interestingly, we also observe a trend that the oversampling approaches that perform well in terms of F1-Score have a relatively weaker performance for Balanced Accuracy. LoRAS not only proves to be the best choice for the highest number of datasets but also retains its performance for both performance measures. Credit card fraud detection dataset: The credit card fraud detection dataset has 492 fraud instances out of 284,807 transactions. The task is to predict fraudulent transactions. In Table 4, we show the number of samples generated by the several oversampling approaches. For testing, we have 246 fraud samples and 142,158 normal, non-fraudulent samples for each case. We summarize our results in tabular form in Table 5. In the table we show the scores of our models for the performance measures Precision, Recall, F1-Score, and Balanced Accuracy for the lr and rf ML models. From Table 5 we infer that the rf model with LoRAS oversampling has the best F1-Score. Interestingly, LoRAS on both lr and rf produces a Balanced Accuracy higher than 0.85 and an F1-Score higher than 0.8. Other models, such as SVM SMOTE (with both lr and rf) and ADASYN with lr, also produce very good results. Thus LoRAS produces a better F1-Score with a reasonable compromise on the Balanced Accuracy. Discussion We have constructed a mathematical framework to prove that LoRAS is a more effective oversampling technique since it provides a better estimate of the mean of the underlying local data distribution of the minority class data space. Let X = (X_1, ..., X_{|F|}) ∈ C_min be an arbitrary minority class sample. Let N_k^X be the set of the k-nearest neighbors of X, which we will consider as the neighborhood of X. Both SMOTE and LoRAS focus on generating augmented samples within one neighborhood N_k^X at a time. We assume that a random variable X ∈ N_k^X follows a shifted t-distribution with k degrees of freedom, location parameter µ, and scaling parameter σ. Note that here σ does not refer to the standard deviation but sets the overall scaling of the distribution (Jackman, 2009), which we choose to be the sample variance in the neighborhood of X. A shifted t-distribution is used to estimate population parameters when there are few samples (usually ≤ 30) and/or the population variance is unknown. Since in SMOTE or LoRAS we generate samples from a small neighborhood, we can argue in favour of our assumption that, locally, a minority class sample X, as a random variable, follows a t-distribution. Following Blagus and Lusa (2013), we assume that if X, X′ ∈ N_k^X then X and X′ are independent.
For X, X′ ∈ N_k^X, we also assume that E[X] = E[X′] = µ and Var[X] = Var[X′] = σ′², where E[X] and Var[X] denote the expectation and variance of the random variable X, respectively. However, the mean has to be estimated by an estimator statistic (i.e. a function of the samples). Both SMOTE and LoRAS can be considered estimator statistics for the mean of the t-distribution that X ∈ C_min follows locally. Theorem 1. Both SMOTE and LoRAS are unbiased estimators of the mean µ of the t-distribution that X follows locally. However, the variance of the LoRAS estimator is less than the variance of SMOTE, given that |F| > 2. Proof. A shadowsample S is a random variable S = X + B, where X ∈ N_k^X, the neighborhood of some arbitrary X ∈ C_min, and B follows N(0, σ_B); assuming X and B are independent, E[S] = µ. Now, a LoRAS sample is L = α_1 S^1 + ... + α_{|F|} S^{|F|}, where S^1, ..., S^{|F|} are shadowsamples generated from the elements of the neighborhood N_k^X of X, such that α_1 + ... + α_{|F|} = 1. The affine combination coefficients α_1, ..., α_{|F|} follow a Dirichlet distribution with all concentration parameters equal to 1 (assuming all features to be equally important). The covariances Cov(α_i, α_j) for arbitrary i, j ∈ {1, ..., |F|} follow from the Dirichlet distribution, where Cov(A, B) denotes the covariance of two random variables A and B. Assuming α and S to be independent, E[L] = E[α_1]E[S^1] + ... + E[α_{|F|}]E[S^{|F|}] = µ; thus L is an unbiased estimator of µ. For j, k, l ∈ {1, ..., |F|}, E[α_k α_l S_j^k S_j^l] = E[α_k α_l] E[S_j^k S_j^l], since α_k α_l is independent of S_j^k S_j^l. For an arbitrary j, the variance of the j-th component of a LoRAS sample is Var(L_j) = Var(α_1 S_j^1 + ... + α_{|F|} S_j^{|F|}). For LoRAS, we take an affine combination of |F| shadowsamples, whereas SMOTE considers an affine combination of two minority class samples. Note that, since a SMOTE-generated oversample can be interpreted as a random affine combination of two minority class samples, we can consider |F| = 2 for SMOTE, independent of the number of features. Also, from Equation 3, this implies that SMOTE is an unbiased estimator of the mean of the local data distribution. Thus, the variance of a SMOTE-generated sample as an estimator of µ would be 2σ′²/3 (since B = 0 for SMOTE). But for LoRAS as an estimator of µ, when |F| > 2, the variance is less than that of SMOTE. This implies that, locally, LoRAS can estimate the mean of the underlying t-distribution better than SMOTE. Conclusions Oversampling with LoRAS produces comparatively balanced ML model performances on average, in terms of Balanced Accuracy and F1-Score. This is due to the fact that, in most cases, LoRAS produces fewer misclassifications on the majority class with a reasonably small compromise for misclassifications on the minority class. Moreover, we infer that our LoRAS oversampling strategy can better estimate the mean of the underlying local distribution of a minority class sample (considering it a random variable). The distribution of the minority class data points is considered in oversampling techniques such as Borderline1 SMOTE, Borderline2 SMOTE, SVM-SMOTE, and ADASYN (Gosain and Sardana, 2017). SMOTE and LoRAS are the only two techniques, among the oversampling techniques we explored, that deal with the problem of imbalance just by generating new data points, independent of the distribution of the minority and majority class data points. Thus, comparing LoRAS and SMOTE gives a fair impression of the performance of our novel LoRAS algorithm as an oversampling technique, without considering any aspect of the distributions of the minority and majority class data points and relying just on resampling.
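A small numerical check of the theorem's claim can be obtained by Monte Carlo simulation under the independence assumptions stated above: both estimators are centered at µ, but the LoRAS-style estimator has markedly smaller variance. This is a sketch, not the paper's analysis; the t-distribution parameters, noise level, and feature count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_feat, mu, scale, sigma_b, n_trials = 10, 2.0, 1.0, 0.05, 20000
k_df = 5  # degrees of freedom of the local (shifted) t-distribution

def smote_estimate():
    # Affine combination of two neighborhood samples (SMOTE-style, |F| = 2).
    x = mu + scale * rng.standard_t(k_df, size=2)
    a = rng.random()
    return a * x[0] + (1 - a) * x[1]

def loras_estimate():
    # Dirichlet-weighted affine combination of |F| shadowsamples.
    x = mu + scale * rng.standard_t(k_df, size=n_feat)
    shadows = x + rng.normal(0.0, sigma_b, size=n_feat)
    w = rng.dirichlet(np.ones(n_feat))
    return w @ shadows

smote_vals = np.array([smote_estimate() for _ in range(n_trials)])
loras_vals = np.array([loras_estimate() for _ in range(n_trials)])
print("means:", smote_vals.mean(), loras_vals.mean())   # both close to mu
print("vars :", smote_vals.var(), loras_vals.var())     # LoRAS variance is smaller
```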
Other extensions of SMOTE, such as Borderline1 SMOTE, Borderline2 SMOTE, SVM-SMOTE, and ADASYN, could also be built on the principle of the LoRAS oversampling strategy. According to our analyses, LoRAS already shows great potential on a broad variety of applications and emerges as a true alternative to SMOTE for processing highly imbalanced datasets. Availability of code: The implementation of the algorithm in Python (V 3.7.4) and an example Jupyter Notebook for the credit card fraud detection dataset are provided in the GitHub repository https://github.com/narek-davtyan/LoRAS. In our computational code, |S_p| corresponds to num_shadow_points, L_σ corresponds to list_sigma_f, N_aff corresponds to num_aff_comb, and N_gen corresponds to num_generated_points.
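For readers who want to reproduce the evaluation protocol described in the Methodology (50/50 split, oversample only the training minority class, report Balanced Accuracy and F1-Score), the following is a minimal sketch using scikit-learn and imbalanced-learn; the synthetic dataset and classifier settings are placeholders, not those of the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import balanced_accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=2000, n_features=10, weights=[0.95, 0.05],
                           random_state=0)          # class 1 is the minority
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y,
                                          random_state=0)

def evaluate(X_train, y_train):
    clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
    pred = clf.predict(X_te)
    return balanced_accuracy_score(y_te, pred), f1_score(y_te, pred)

print("no oversampling:", evaluate(X_tr, y_tr))
X_sm, y_sm = SMOTE(random_state=0).fit_resample(X_tr, y_tr)
print("SMOTE          :", evaluate(X_sm, y_sm))
# A LoRAS comparator could reuse the loras_oversample sketch given earlier,
# appending its output to the minority portion of the training data.
```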
5,954.2
2019-08-22T00:00:00.000
[ "Computer Science" ]
Magnetic dilaton strings in anti-de Sitter spaces With an appropriate combination of three Liouville-type dilaton potentials, we construct a new class of spinning magnetic dilaton string solutions which produce a longitudinal magnetic field in the background of anti-de Sitter spacetime. These solutions have no curvature singularity and no horizon, but have a conic geometry. We find that the spinning string has a net electric charge which is proportional to the rotation parameter. We present the suitable counterterm which removes the divergences of the action in the presence of the dilaton potential. We also calculate the conserved quantities of the solutions by using the counterterm method. I. INTRODUCTION The construction and analysis of black hole solutions in the background of anti-de Sitter (AdS) spaces is a subject of much recent interest. This interest is primarily motivated by the correspondence between the gravitating fields in an AdS spacetime and conformal field theory living on the boundary of the AdS spacetime [1]. This equivalence enables one to remove the divergences of the action and conserved quantities of gravity in the same way as one does in field theory. It was argued that the thermodynamics of black holes in AdS spaces can be identified with that of a certain dual conformal field theory (CFT) in the high temperature limit [2]. With the AdS/CFT correspondence idea at hand, one can gain some insight into the thermodynamic properties and phase structures of strongly 't Hooft coupled conformal field theories by studying the thermodynamics of asymptotically AdS black holes. On another front, scalar-coupled black hole solutions with different asymptotic spacetime structures have been a subject of interest for a long time. There has been a renewed interest in such studies ever since new black hole solutions have been found in the context of string theory. The low energy effective action of string theory contains two massless scalars, namely the dilaton and the axion. The dilaton field couples in a nontrivial way to other fields such as gauge fields and results in interesting solutions for the background spacetime. It was argued that, with the exception of a pure cosmological constant, no dilaton-de Sitter or anti-de Sitter black hole solution exists in the presence of only one Liouville-type dilaton potential [3]. Recently, the dilaton potential leading to (anti)-de Sitter-like solutions of dilaton gravity has been found [4]. It was shown that the cosmological constant is coupled to the dilaton in a very nontrivial way. With the combination of three Liouville-type dilaton potentials, a class of static dilaton black hole solutions in (A)dS spaces has been obtained by using a coordinate transformation which recasts the solution in the Schwarzschild coordinate system [4]. More recently, a class of charged rotating dilaton black string solutions in four-dimensional anti-de Sitter spacetime has been found in [5]. Other studies on dilaton black hole solutions in (A)dS spaces have been carried out in [6,7]. In this Letter, we turn to the investigation of asymptotically AdS spacetimes generated by static and spinning string sources in four-dimensional Einstein-Maxwell-dilaton theory which are horizonless and have nontrivial external solutions. The motivation for studying such kinds of solutions is that they may be interpreted as cosmic strings.
Cosmic strings are topological structures that arise from the possible phase transitions to which the universe might have been subjected and may play an important role in the formation of primordial structures. A short review of papers treating this subject follows. The four-dimensional horizonless solutions of Einstein gravity have been explored in [8,9]. These horizonless solutions [8,9] have a conical geometry; they are everywhere flat except at the location of the line source. The spacetime can be obtained from flat spacetime by cutting out a wedge and identifying its edges. The wedge has an opening angle which turns out to be proportional to the source mass. The extension to include the Maxwell field has also been done [10]. Static and spinning magnetic sources in three- and four-dimensional Einstein-Maxwell gravity with negative cosmological constant have been explored in [11,12]. The generalization of these asymptotically AdS magnetic rotating solutions to higher dimensions has also been done [13]. In the context of electromagnetic cosmic strings, it has been shown that there are cosmic strings, known as superconducting cosmic strings, that behave as superconductors and have interesting interactions with astrophysical magnetic fields [14]. The properties of these superconducting cosmic strings have been investigated in [15]. It is also of great interest to generalize the study to dilaton gravity theory [16]. While an exact magnetic rotating dilaton solution in three dimensions has been obtained in [17], two classes of magnetic rotating solutions in four-dimensional [18] and higher-dimensional dilaton gravity [19] in the presence of one Liouville-type potential have been constructed. Unfortunately, these solutions [18,19] are neither asymptotically flat nor (A)dS. The purpose of the present Letter is to construct a new class of static and spinning magnetic dilaton string solutions which produce a longitudinal magnetic field in the background of anti-de Sitter spacetime. We will also present the suitable counterterm which removes the divergences of the action, and calculate the conserved quantities by using the counterterm method. II. BASIC EQUATIONS Our starting point is the four-dimensional Einstein-Maxwell-dilaton action (1), where R is the scalar curvature, Φ is the dilaton field, F_µν = ∂_µ A_ν − ∂_ν A_µ is the electromagnetic field tensor, and A_µ is the electromagnetic potential. Here Λ is the cosmological constant. It is clear that the cosmological constant is coupled to the dilaton field in a very nontrivial way. This type of dilaton potential was introduced for the first time by Gao and Zhang [4]. They derived the static dilaton black hole solutions in the background of the (A)dS universe by applying a coordinate transformation which recasts the solution in the Schwarzschild coordinate system. For this purpose, they required the existence of (A)dS dilaton black hole solutions and successfully extracted the form of the dilaton potential leading to (A)dS-like solutions. They also argued that this type of potential can be obtained when a higher-dimensional theory is compactified to four dimensions, including various supergravity models [20]. In the absence of the dilaton field, the action (1) reduces to the action of Einstein-Maxwell gravity with a cosmological constant.
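The explicit action (1) and the dilaton potential (2) are not reproduced in this excerpt. For orientation only, a generic four-dimensional Einstein-Maxwell-dilaton action with an exponential gauge coupling and a dilaton potential built from Liouville-type terms has the schematic form below; the symbols α, Λ_i, and ζ_i are placeholders, and the specific combination of the three Liouville terms follows Gao and Zhang [4] rather than the coefficients written here.

```latex
S \;=\; \frac{1}{16\pi}\int_{\mathcal{M}} d^{4}x\,\sqrt{-g}
\left( R \;-\; 2\,\partial_{\mu}\Phi\,\partial^{\mu}\Phi
\;-\; V(\Phi) \;-\; e^{-2\alpha\Phi}\,F_{\mu\nu}F^{\mu\nu} \right),
\qquad
V(\Phi)\;\sim\;\sum_{i=1}^{3} 2\Lambda_{i}\, e^{2\zeta_{i}\Phi}.
```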
Varying the action (1) with respect to the gravitational field g_µν, the dilaton field Φ, and the gauge field A_µ yields the field equations (3)-(5). The conserved mass and angular momentum of the solutions of the above field equations can be calculated through the use of the subtraction method of Brown and York [21]. Such a procedure causes the resulting physical quantities to depend on the choice of reference background. A well-known method of dealing with this divergence for asymptotically AdS solutions of Einstein gravity is the counterterm method inspired by the AdS/CFT correspondence [22]. In this Letter, we deal with spacetimes with zero curvature boundary, R_abcd(γ) = 0, and therefore the counterterm for the stress-energy tensor should be proportional to γ_ab. We find the suitable counterterm which removes the divergences of the action in the form given in Eq. (6) (see also [23]). One may note that in the absence of a dilaton field, where V(Φ) = 2Λ = −6/l², the above counterterm has the same form as in the case of asymptotically AdS solutions with zero-curvature boundary. Having the total finite action I = I_G + I_ct at hand, one can use the quasilocal definition to construct a divergence-free stress-energy tensor [21]. The finite stress-energy tensor in four-dimensional Einstein-dilaton gravity with the three Liouville-type dilaton potentials (2) can then be written as in Eq. (7). The first two terms in Eq. (7) are the variation of the action (1) with respect to γ_ab, and the last two terms are the variation of the boundary counterterm (6) with respect to γ_ab. To compute the conserved charges of the spacetime, one should choose a spacelike surface B in ∂M with metric σ_ij and write the boundary metric in ADM (Arnowitt-Deser-Misner) form, where the coordinates ϕ^i are the angular variables parameterizing the hypersurface of constant r around the origin, and N and V^i are the lapse and shift functions, respectively. When there is a Killing vector field ξ on the boundary, the quasilocal conserved quantities associated with the stress tensor of Eq. (7) can be written as boundary integrals, where σ is the determinant of the metric σ_ij, and ξ and n^a are, respectively, the Killing vector field and the unit normal vector on the boundary B. For boundaries with timelike (ξ = ∂/∂t) and rotational (ς = ∂/∂φ) Killing vector fields, one obtains the quasilocal mass and angular momentum. These quantities are, respectively, the conserved mass and angular momentum of the system enclosed by the boundary B. Note that they will both depend on the location of the boundary B in the spacetime, although each is independent of the particular choice of foliation B within the surface ∂M. III. STATIC MAGNETIC DILATON STRING Here we want to obtain the four-dimensional solution of Eqs. (3)-(5) which produces a longitudinal magnetic field along the z direction. We assume the form (11) for the metric [11]. The functions f(ρ) and R(ρ) are to be determined, and l has the dimension of length and is related to the cosmological constant Λ by the relation l² = −3/Λ. The coordinate z has the dimension of length and ranges −∞ < z < ∞, while the angular coordinate φ is dimensionless as usual and ranges 0 ≤ φ < 2π. The motivation for this curious choice of the metric gauge [g_tt ∝ −ρ² and (g_ρρ)^{-1} ∝ g_φφ] instead of the usual Schwarzschild gauge [(g_ρρ)^{-1} ∝ g_tt and g_φφ ∝ ρ²] comes from the fact that we are looking for a magnetic solution instead of an electric one.
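Referring back to the quasilocal formalism above: the explicit charge expression is not reproduced in this excerpt, but the standard Brown-York quasilocal charge associated with a boundary Killing vector ξ takes the schematic form below (with the notation defined in the text); this is the generic expression and is not necessarily identical, term by term, to the paper's Eqs. (9) and (10).

```latex
\mathcal{Q}(\xi) \;=\; \int_{\mathcal{B}} d^{2}\varphi\, \sqrt{\sigma}\; T_{ab}\, n^{a}\, \xi^{b},
\qquad
M \;=\; \mathcal{Q}(\partial_{t}), \qquad J \;=\; \mathcal{Q}(\partial_{\varphi}).
```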
It is well known that the electric field is associated with the time component, A_t, of the vector potential, while the magnetic field is associated with the angular component A_φ. From the above fact, one can expect that a magnetic solution can be written in a metric gauge in which the components g_tt and g_φφ interchange their roles relative to those present in the Schwarzschild gauge used to describe electric solutions [11]. The Maxwell equation (5) can be integrated immediately to give the gauge field (12), where q, an integration constant, is the charge parameter, which is related to the electric charge of the rotating string, as will be shown below. Inserting the Maxwell fields (12) and the metric (11) into the field equations (3) and (4), we can simplify these equations, where the "prime" denotes differentiation with respect to ρ. Subtracting Eq. (15) from another of the field equations motivates the ansatz (18) for R(ρ). Substituting this ansatz in Eq. (17), it reduces to an equation which has a solution of the form (20), where b is a constant of integration related to the mass of the string, as will be shown. Inserting (20), the ansatz (18), and the dilaton potential (2) into the field equations (13)-(16), one can show that these equations admit the solution (21), where c is an integration constant. The two constants c and b are related to the charge parameter via q²(1 + α²) = bc. It is apparent that this spacetime is asymptotically AdS. In the absence of a nontrivial dilaton (α = 0), the solution reduces to the asymptotically AdS horizonless magnetic string for Λ = −3/l² [12]. Next we study the general structure of the solution. It is easy to show that the Kretschmann scalar R_µνλκ R^µνλκ diverges at ρ = 0, and therefore one might think that there is a curvature singularity located at ρ = 0. However, as we will see below, the spacetime never reaches ρ = 0. Second, we look for the existence of horizons. The function f(ρ) is negative for ρ < r_+, and therefore one may think that the hypersurface of constant time and ρ = r_+ is a horizon. However, this analysis is wrong. Indeed, we first notice that g_ρρ and g_φφ are related by f(ρ) = g_ρρ^{-1} = l^{-2} g_φφ, and therefore when g_ρρ becomes negative (which occurs for ρ < r_+), so does g_φφ. This leads to an apparent change of signature of the metric from +2 to −2, which indicates that we are using an incorrect extension. To get rid of this incorrect extension, we introduce a new radial coordinate r. With this coordinate change, the metric (11) becomes the metric (23), where the coordinate r assumes the values 0 ≤ r < ∞, and f(r), R(r), and Φ(r) are now expressed in terms of r (f(r) and R(r) are given in Eqs. (24) and (25)). One can easily show that the Kretschmann scalar does not diverge in the range 0 ≤ r < ∞. However, the spacetime has a conic geometry with a conical singularity at r = 0: as the radius r tends to zero, the limit of the ratio "circumference/radius" is not 2π, and therefore the spacetime has a conical singularity at r = 0. The conical singularity can be removed if one identifies the coordinate φ with the period given in Eq. (28). The above analysis shows that near the origin r = 0, the metric (23) describes a spacetime which is locally flat but has a conical singularity at r = 0 with a deficit angle δφ = 8πµ. Since near the origin the metric (23) is identical to the spacetime generated by a cosmic string, by using the Vilenkin procedure one can show that µ in Eq. (28) can be interpreted as the mass per unit length of the string [24]. IV. SPINNING MAGNETIC DILATON STRING Now, we would like to endow the spacetime solution (11) with rotation.
In order to add angular momentum to the spacetime, we perform a rotation boost (29) in the t − φ plane, where a is a rotation parameter and Ξ = √(1 + a²/l²). Substituting Eq. (29) into Eq. (23), we obtain the spinning metric (30), where f(r) and R(r) are given in Eqs. (24) and (25). The non-vanishing electromagnetic field components change accordingly. The transformation (29) generates a new metric, because it is not a permitted global coordinate transformation. This transformation can be done locally but not globally. Therefore, the metrics (23) and (30) can be locally mapped into each other but not globally, and so they are distinct. Note that this spacetime has no horizon and no curvature singularity. However, it has a conical singularity at r = 0. It is notable that for α = 0, this solution reduces to the asymptotically AdS magnetic rotating string solution presented in [12]. The mass and angular momentum per unit length of the string when the boundary B goes to infinity can be calculated through the use of Eqs. (9) and (10). For a = 0 (Ξ = 1), the angular momentum per unit length vanishes, and therefore a is the rotational parameter of the spacetime. Finally, we compute the electric charge of the solutions. To determine the electric field, one should consider the projections of the electromagnetic field tensor on special hypersurfaces. Using the normal vectors to such a hypersurface for the spacetime with a longitudinal magnetic field, the electric field is E^µ = g^{µρ} e^{−2αΦ} F_{ρν} u^ν. The electric charge per unit length Q can then be found by calculating the flux of the electric field at infinity. It is worth noting that the electric charge is proportional to the rotation parameter and is zero for the static solution. This result is expected since now, besides the magnetic field along the φ coordinate, there is also a radial electric field (F_tr ≠ 0). To give a physical interpretation for the appearance of the net electric charge, we first consider the static spacetime. The magnetic field source can be interpreted as composed of equal positive and negative charge densities, where one of the charge densities is at rest and the other one is spinning. Clearly, this system produces no electric field, since the net electric charge density is zero, and the magnetic field is produced by the rotating electric charge density. Now, we consider the rotating solution. From the point of view of an observer at rest relative to the source (S), the two charge densities are equal, while from the point of view of an observer S′ that follows the intrinsic rotation of the spacetime, the positive and negative charge densities are not equal, and therefore the net electric charge of the spacetime is not zero. V. CONCLUSION AND DISCUSSION In conclusion, with an appropriate combination of three Liouville-type dilaton potentials, we constructed a class of four-dimensional magnetic dilaton string solutions which produce a longitudinal magnetic field in the background of an anti-de Sitter universe. These solutions have no curvature singularity and no horizon, but have a conical singularity at r = 0. In fact, we showed that near the origin r = 0, the metric (23) describes a spacetime which is locally flat but has a conical singularity at r = 0 with a deficit angle δφ = 8πµ, where µ can be interpreted as the mass per unit length of the string. In these static spacetimes, the electric field vanishes and therefore the string has no net electric charge.
We then added angular momentum to the spacetime by performing a rotation boost in the t − φ plane. For the spinning string, when the rotation parameter is nonzero, the string has a net electric charge which is proportional to the magnitude of the rotation parameter. We found the suitable counterterm which removes the divergences of the action in the presence of the three Liouville-type dilaton potentials. We also computed the conserved quantities of the solutions through the use of the counterterm method inspired by the AdS/CFT correspondence. It is worth comparing the solutions obtained here to the electrically charged rotating dilaton black string solutions presented in [5]. In the present work I have studied the magnetic spinning dilaton string which produces a longitudinal magnetic field in AdS spaces, which is the correct generalization of the magnetic string solution of Dias and Lemos [12] to dilaton theory, while in [5] I constructed charged rotating dilaton black strings in AdS spaces, which generalize the charged rotating string solutions of [25] to dilaton gravity. Although solution (21) of the present paper is similar to Eq. (16) of Ref. [5] (except for the sign of c) and both solutions represent dilaton strings, there are some differences between the magnetic string and the electrically charged dilaton black string solutions. First, the choice of the metric gauge [g_tt ∝ −ρ² and (g_ρρ)^{-1} ∝ g_φφ] in the magnetic case is quite different from the Schwarzschild gauge [(g_ρρ)^{-1} ∝ g_tt and g_φφ ∝ ρ²] adopted in [5]. Second, the electrically charged dilaton black strings have an essential singularity located at r = 0 and also have horizons, while the magnetic string version presented here has no curvature singularity and no horizon, but has a conic geometry. Third, when the rotation parameter is nonzero, the magnetic string has a net electric charge which is proportional to the rotation parameter, while the charged dilaton black string always has an electric charge regardless of the rotation parameter. The generalization of the present work to higher dimensions, that is, magnetic rotating dilaton branes in AdS spaces with a complete set of rotation parameters and arbitrary dilaton coupling constant, is now under investigation and will be addressed elsewhere.
4,638.2
2008-09-07T00:00:00.000
[ "Physics" ]
Non-replication of an association of CTNNBL1 polymorphisms and obesity in a population of Central European ancestry Background A recent genome-wide association (GWA) study of U.S. Caucasians suggested that eight single nucleotide polymorphisms (SNPs) in CTNNBL1 are associated with obesity and increased fat mass. We analysed the respective SNPs in data from our previously published GWA for early onset obesity (case-control design), in GWA data from a population-based cohort of adults, and in an independent family-based obesity study. We investigated whether variants in CTNNBL1 (including rs6013029) and in three other genes (SH3PXD2B, SLIT3, and FLJ42133) were associated with obesity. Methods The GWA studies were carried out using Affymetrix® SNP Chips with approximately 500,000 markers each. In the families, SNP rs6013029 was genotyped using the TaqMan® allelic discrimination assay. The German case-control GWA included 487 extremely obese children and adolescents and 442 healthy lean individuals. The adult GWA included 1,644 individuals from a German population-based study (KORA). The 775 independent German families consisted of extremely obese children and adolescents and their parents. Results We found no evidence for an association of the reported variants in CTNNBL1 with early onset obesity or increased BMI. Further, in our family-based study we found no evidence for over-transmission of the rs6013029 risk allele T to obese children. Additionally, we found no evidence for an association of SH3PXD2B, SLIT3, and FLJ42133 variants in our two GWA samples. Conclusion We detected no confirmation of the recent association of variants in CTNNBL1 with obesity in a population of Central European ancestry. Background Obesity is a major health problem worldwide and results from an interplay of social, environmental and genetic factors [1]. Genome-wide association (GWA) studies have contributed to the identification of new polygenic variants contributing to inter-individual body mass index (BMI) differences [2][3][4][5]. Recently, Liu et al. [6] reported that variants in the beta-catenin-like 1 gene (CTNNBL1) were associated with increased fat mass and obesity in a GWA conducted with 1,000 adult U.S. Caucasians. In the same report, this observation was validated in a French case-control sample (896 class III obese adults, BMI ≥ 40 kg/m2, and 2,916 normal weight controls, BMI < 25 kg/m2). Our study had two objectives. First, we aimed to replicate the association of the obesity risk alleles (rs6013029 T-allele, rs16986921 T-allele, rs6020712 A-allele, rs6020846 G-allele, rs6020395 C-allele, rs16986890 G-allele, rs6096781 C-allele, and rs6020339 C-allele) of CTNNBL1 in two GWA data sets. Second, we explored three other genes, SH3PXD2B (rs13356223, rs10077897 and rs13436547), SLIT3 (rs17734503 and rs12654448), and FLJ42133 (rs7363432 and rs6095722), also mentioned by Liu et al. [6], in our GWAs. We analysed three samples: (1) GWA data from 487 cases with early onset extreme obesity and 442 controls; (2) GWA data of 1,644 individuals from a population-based adult cohort; and (3) genotyping data of the best CTNNBL1 SNP rs6013029 previously reported in [6] in a sample of 775 independent nuclear families, each comprising one or more extremely obese offspring and both parents.
Participants and Genotyping Case-control GWA 487 extremely obese children and adolescents (cases: mean age 14.38 ± 3.74; BMI = 33.40 ± 6.81; BMI Z-score = 4.63 ± 2.27; 42.9% male) and 442 healthy lean individuals (controls: mean age 26.07 ± 5.79; BMI = 18.31 ± 1.10; BMI Z-score = -1.38 ± 0.35; 38.7% male) from Germany. The use of lean adults who were never overweight or obese during childhood (assessed by interview [7]) as the control group reduces the chance of misclassification compared to the use of lean children as controls, who might become overweight in adulthood. This sample was genotyped using the Affymetrix® Genome-Wide Human SNP Array 5.0 with 440,794 markers. Details on this GWA have been reported elsewhere [7]. The study was approved by the local ethics committee. Population-based GWA KORA (Kooperative Gesundheitsforschung im Raum Augsburg, follow-up of Survey 3 (F3); 'Cooperative Health Research in the Region of Augsburg') comprises 3,126 German adults representative of the population within the age range 25-74 years in Augsburg and surrounding areas (Bavaria, Germany). 1,644 probands (mean age 52.52 ± 10.09; BMI = 27.33 ± 4.12; 48.9% male) were genotyped using the Affymetrix® GeneChip® Human Mapping 500K Array Set (for details on the sample see [8]). The ethics committee of the Länderärztekammer for Bavaria approved the study. Family-based study 775 German families comprising 1,058 extremely obese children and adolescents (775 index patients, 283 siblings; mean age 13.88 ± 3.69; BMI 31.12 ± 6.06 kg/m2; BMI Z-score = 3.91 ± 2.02; 45.8% male) and 1,550 parents (mean age 42.56 ± 5.95; BMI 30.37 ± 6.29 kg/m2; BMI Z-score = 1.68 ± 1.83) were recruited at the University of Marburg and the University Duisburg-Essen. Participants were genotyped for the SNP rs6013029 using the TaqMan® allelic discrimination assay (C_29958195_10 assay, Applied Biosystems, Germany); the call rate was 99.7%, with 100% concordance of duplicates. All individuals studied are Caucasians from Central Europe, with German ancestry. All studies were conducted in accordance with the guidelines of the Declaration of Helsinki. Statistics Prior to analysis, the genotype distributions of all three samples (case-control GWA sample, population-based GWA sample, and family sample) were tested for deviations from Hardy-Weinberg equilibrium using an exact two-sided test [9]. The association between increased BMI and CTNNBL1 polymorphisms in the KORA cohort was analysed using linear regression adjusted for age and sex, while logistic regression was used for data from the case-control GWA. In both cases we used an additive model for the risk allele as described in [6]. In our family-based study we tested for overtransmission of the rs6013029 T allele (reported in the original study as the risk allele) to affected offspring with the Pedigree Disequilibrium Test (PDT-sum) [10] and generated genotype relative risk estimates using conditional logistic regression. Power calculations based on the effect of genetic variants in rs6013029 were performed for the case-control sample and the cohort using the program QUANTO Version 1.2.3 (http://hydra.usc.edu/gxe) and for the family-based sample using the TDT Power Calculator 1.2.1 (http://www.biostat.jhsph.edu/~wmchen/pc.html).
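For readers unfamiliar with the additive genetic model mentioned above, the following minimal sketch shows how such single-SNP tests are typically coded (risk-allele dosage 0/1/2 as predictor; logistic regression for case-control status, linear regression for BMI adjusted for age and sex). The data here are simulated placeholders, not the study data, and variable names are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 900
df = pd.DataFrame({
    "dosage": rng.binomial(2, 0.05, size=n),   # risk-allele count, additive coding
    "age": rng.normal(40, 10, size=n),
    "sex": rng.integers(0, 2, size=n),
})
df["case"] = rng.integers(0, 2, size=n)        # simulated case/control status
df["bmi"] = rng.normal(27, 4, size=n)          # simulated BMI

# Case-control analysis: logistic regression under the additive model.
logit = sm.Logit(df["case"], sm.add_constant(df[["dosage"]])).fit(disp=0)
or_ci = np.exp(logit.conf_int().loc["dosage"])
print("OR =", np.exp(logit.params["dosage"]), "95% CI =", tuple(or_ci))

# Population-based cohort: linear regression of BMI on dosage, adjusted for age and sex.
ols = sm.OLS(df["bmi"], sm.add_constant(df[["dosage", "age", "sex"]])).fit()
print("beta =", ols.params["dosage"], "p =", ols.pvalues["dosage"])
```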
For these calculations we assumed a minor allele frequency of 0.05 and a genetic effect size of OR = 1.42, as estimated in [6], for the tests which used the case-control and family settings, while a true genetic effect of β = 0.1 (increase in mean BMI with each additional risk allele) was chosen for the cohort. In either case α = 0.05 (one-sided) was chosen. Results and discussion We analysed the data of both GWA studies for the SNPs of the CTNNBL1 gene previously reported in [6]. There was no indication of a deviation from Hardy-Weinberg equilibrium at any of these markers in either GWA sample or among the founders in the families, based on the exact test described above (all p-values > 0.05). Furthermore, there was no evidence for an association of any of the SNPs in the CTNNBL1 gene with obesity in our data (Table 1). The strongest signal in the original report (rs6013029) achieved a two-sided p-value of 0.53 in our case-control GWA, with an estimated odds ratio (OR) of 0.88 (95% confidence interval (CI) 0.60-1.30) for the risk allele T. Even though there is some overlap in the confidence intervals when comparing our result to the results of the original report's French case-control validation sample, with OR = 1.42 (95% CI 1.14-1.77), the point estimates indicate different directions of the T-allele effect. Combined with the absence of an observed association of this marker with BMI in the KORA cohort, a false positive initial observation is the most likely explanation (Table 1). As rs6013029 was the main initial finding [6], we nevertheless decided to genotype this variant in 775 independent families ascertained for at least one obese offspring. We detected no evidence for overtransmission of the T-allele (the risk allele in the original study) to the obese offspring (two-sided p = 0.50), and an effect size estimate based on this sample, genotype relative risk (GRR) = 0.933 (95% CI 0.717-1.214), failed to exclude unity as well. The other CTNNBL1 SNPs previously described in [6] are displayed in Table 1. In addition, we also explored the unvalidated results for SNPs from three additional genes (FLJ42133: rs7363432 and rs6095722; SH3PXD2B: rs13356223, rs10077897 and rs13436547; SLIT3: rs17734503 and rs12654448) which were also reported to be associated with increased fat mass (FLJ42133) or increased BMI (SH3PXD2B and SLIT3), respectively (Table 2). Once again, no evidence for an association in either of our GWAs was detected. In view of the requirement that replication studies need to be adequately powered, we assessed the power of each of our three samples based on the parameters listed above. For the given sample sizes, our family-based replication study had a power > 80%, the cohort had a very limited power of about 10%, while the case-control GWA had a slightly larger power of about 54%. If the initially reported values overestimate the true genetic effect, which is presumably quite often the case [11], our data nevertheless contribute to a more precise idea of the impact of CTNNBL1 variants on obesity. In sum, our results underline the importance of replicating GWA results in independent samples, even though independent validations may have been reported within the same initial study. While replication of the association of intron 1 variants in FTO with obesity has been demonstrated robustly in almost all subsequent studies comprising obese adults and children [7,[12][13][14][15], the study by Liu et al.
[6] was an exception, as none of the intron 1 FTO SNPs showed evidence for a body weight-related association. Interestingly, however, the study did find some evidence for an association of variants in INSIG2 with obesity [5,[16][17][18][19]. Both examples underline the difficulties that arise when trying to validate, confirm and replicate associations with such complex traits as obesity. Our failure to replicate the initial findings [6] also does not appear to be a result of population stratification. All recruitment was done in Germany, for which population stratification effects have been shown to be of minor importance [20]. Another possible explanation for the lack of replication is that our results are mainly based on data for children and adolescents, which differs from [6], where only adults were investigated. Again, the example of FTO [6] highlights how validated associations found in adults with obesity may also be present in children with extreme obesity [3,21]. Recently, two independent studies comprising more than 32,000 [22] and 14,000 [23] individuals also did not find a significant association between the CTNNBL1 variant rs6013029 and obesity. Our study is a replication and validation attempt with sufficient combined power to independently replicate an initial finding [6], while also providing some evidence to support the decision not to follow up variants that did not "survive" a validation within the same initial report [6]. Although we were not able to replicate the original findings, our data may be useful for a meta-analytical assessment of the association of CTNNBL1 variants and obesity. A retrospective look at the conflicting reports on INSIG2 and the recent reports on CTNNBL1 suggests that research on mediating and moderating variables to more comprehensively assess phenotype-genotype relationships is urgently needed. Conclusion We did not detect confirmation of an association of variants in CTNNBL1 with obesity in a population of Central European ancestry. Further studies have to be performed to confirm or refute the initial findings on the association of CTNNBL1 variants and obesity.
2,654.4
2009-02-19T00:00:00.000
[ "Biology", "Medicine" ]
Identification and experimental validation of the ferroptosis-related gene lactotransferrin in age-related hearing loss Objective To reveal the relationship between ARHL and ferroptosis and to screen ferroptosis-related genes (FRGs) in ARHL. Methods Bioinformatics was used to analyze the hub genes and molecular mechanism of ferroptosis in the aging cochlea. Senescence β-galactosidase staining, an iron content detection kit, and a micro malondialdehyde (MDA) assay kit were used to measure β-galactosidase activity and the expression of Fe2+ and MDA, respectively. A fluorescence microscope was used for immunofluorescence assays of hub genes. Western blot was used to verify the expression of hub genes in HEI-OC1 cells, cochlear explants, and cochleae of C57BL/6J mice. Data were expressed as mean ± SD of at least three independent experiments. Results The bioinformatic analysis confirmed that lactotransferrin (LTF) is the hub gene and that the CEBPA-miR-130b-LTF network is the molecular mechanism of cochlear ferroptosis. Compared with the control group, the experiments showed that the indicators of ferroptosis, including Fe2+, MDA, and LTF, were differentially expressed in aging HEI-OC1 cells, aging cochlear explants, and aging cochleae. Conclusion These results demonstrate that ferroptosis plays an important role in ARHL and that LTF is a potential therapeutic target for ARHL via regulation of cochlear ferroptosis. Introduction Age-related hearing loss (ARHL), also known as presbycusis, is characterized by bilateral symmetrical sensorineural hearing loss, mainly at high frequencies. Currently, ARHL is only treated with cochlear implants or hearing aids, and the number of patients will be over 500 million by 2025 (World Health Organization, 2018). Moreover, ARHL is one of the top five level-three causes of years lived with disability (GBD 2019 Ageing Collaborators, 2022) and increases the risk of depression, cognitive decline, and injuries from falling (Vaisbuch and Santa Maria, 2018). Hair cell loss, stria vascularis atrophy, and spiral ganglion neuron degeneration are the main causes of ARHL (Keithley, 2020). However, the exact mechanisms of ARHL are still unknown, and research has mainly focused on apoptosis (Wu et al., 2022) and autophagy (Cho et al., 2022).
Ferroptosis, which is caused by iron overload and lipid peroxidation, was officially classified as a novel form of regulated cell death in 2018 (Galluzzi et al., 2018; Hirschhorn and Stockwell, 2019). Ferroptosis has become an emerging mechanism and therapeutic target for neurodegenerative disease (Wang et al., 2022) and is involved in aging in numerous organs (Stockwell, 2022). Recently, ferroptosis was confirmed to be related to neurodegeneration of the auditory cortex in aging rats (Chen et al., 2020), and inhibition of ferroptosis reduced hair cell loss induced by neomycin or cisplatin (Hu et al., 2020; Zheng et al., 2020). Nevertheless, it has not been reported whether hair cell damage is related to ferroptosis in ARHL. Meanwhile, changes in miRNA expression may lead to presbycusis by inhibiting the development of the inner ear and impairing its homeostasis (Chen et al., 2019; Yoshimura et al., 2019). Several studies have concentrated on the role of miRNAs and ferroptosis in various diseases, including Alzheimer's disease (Tan et al., 2023), ankylosing spondylitis (Zong et al., 2022), and tumors (Dai et al., 2022). Nevertheless, it has not been investigated whether ferroptosis and related miRNA regulatory networks are involved in ARHL. The aim of this study was to explore the relationship between ferroptosis and ARHL, construct related regulatory networks using bioinformatics, and verify potential therapeutic target genes for ARHL in vivo and in vitro (the flow chart is shown in Figure 1). Identification of differentially expressed genes (DEGs) and differentially expressed miRNAs (DEMs) Two datasets, GSE35234 and GSE45026, were both obtained from the Gene Expression Omnibus (GEO) database. Cochlear tissues from C57BL/6J mice of different ages were selected for analysis. To identify the DEGs, four samples (GSM864313, GSM864314, GSM864315, and GSM864316) were defined as the old group, and another four samples (GSM864305, GSM864306, GSM864307, and GSM864308) were defined as the young group in the GSE35234 dataset. To identify the DEMs, three samples (GSM1095952, GSM1095953, and GSM1095954) were defined as the old group, and another three samples (GSM1095946, GSM1095947, and GSM1095948) were defined as the young group in the GSE45026 dataset. |log2(FC)| > 1 and adjusted p-value < 0.05 were considered statistically significant. The GEO2R online tool was used to identify DEGs and DEMs. Identification of ferroptosis-related genes (FRGs) The GeneCards database was used to search for FRGs. The Venny online tool was used to cross-analyze overlapping genes and identify hub genes. Construction of the TF-miRNA-mRNA network The STRING database was used to analyze the protein-protein interaction (PPI) network of hub genes, with the minimum required interaction score set to 0.4 (medium confidence). Then, the PPI network was visualized with the Cytoscape (version 3.6.1) software platform. The TargetScan database was used to predict the target genes of miRNAs. The GeneCards database was used to verify the homology of hub genes in mice and humans, and the miRBase database was used to match the miRNAs of the two species. The miRNet, TransmiR, and Cistrome DB databases were used to predict transcription factors (TFs), and the overlapping TFs were identified by cross-analysis.
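The DEG selection and FRG cross-analysis described above follow a simple filter-and-intersect pattern; the following minimal sketch illustrates it. The file names and column names ("Gene.symbol", "logFC", "adj.P.Val", typical of GEO2R exports) are assumptions, not artifacts of this study.

```python
import pandas as pd

# GEO2R export for old vs. young cochleae (assumed columns).
deg_table = pd.read_csv("GSE35234_geo2r_old_vs_young.tsv", sep="\t")
degs = set(
    deg_table.loc[
        (deg_table["logFC"].abs() > 1) & (deg_table["adj.P.Val"] < 0.05),
        "Gene.symbol",
    ].dropna()
)

# Ferroptosis-related genes exported from GeneCards (assumed one symbol per line).
frgs = set(pd.read_csv("genecards_ferroptosis_genes.txt", header=None)[0])

overlap = degs & frgs      # candidate hub genes; in this study the overlap was {"Ltf"}
print(len(degs), len(frgs), overlap)
```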
Cochlear explants The cochleae of postnatal day 3 (P3) C57BL/6J mice were dissected in transparent Hank's balanced salt solution (Meilunbio, China). After removing the lateral wall and spiral ligament, the basilar membrane was laid flat on a crawling sheet soaked with polylysine (Meilunbio, China). These cochlear tissues were cultured in DMEM/F-12 medium (Meilunbio, China) containing 10% FBS and 1% penicillin G (Sangon Biotech, China) for 12 h at 37°C with 5% CO2. Then different concentrations (10, 20, 30, and 40 mg/mL) of D-gal were applied to the culture medium, and an equal volume of PBS was added to the control group. Finally, the cochlear tissues were cultured for another 48 h before subsequent experiments. Animals Twelve male C57BL/6J mice were accommodated in the Laboratory Animal Center of Fujian Medical University. The mice were divided into two groups, with six in the old group (10-month-old) and the other six in the young group (2-month-old). The animal study protocol was approved by the Animal Ethics Committee of Fujian Medical University (approval No. IACUC FJMU 2022-0623). Measurement of auditory brainstem response (ABR) threshold The ABR equipment (Neuro-Audio, Russia) was accurately calibrated by the National Institute of Metrology (Report No. LSsx2022-00028), and Neuro-Audio software (Version 1.0.105.1, Russia) was used to analyze the ABR threshold in mice. Briefly, mice were sedated with 0.1 mL/kg xylazine hydrochloride (Sangon Biotech, China) and anesthetized with 1 mL/kg pentobarbital sodium (Merck, Germany). According to the operating instructions, three needle electrodes were inserted at the vertex of the midline and posterior to the bilateral ears. Auditory stimulation was performed at 10, 20, and 30 kHz, with the sound intensity decreasing from 100 dB to 0 dB in 5 dB intervals to identify the threshold. Measurement of senescence A senescence β-galactosidase staining kit (Beyotime, China) was used to measure the extent of senescence according to the manufacturer's instructions. Briefly, the treated HEI-OC1 cells or cochlear tissues were fixed with fixing solution for 15 min. After washing with PBS, they were stained with staining solution at 37°C without CO2 overnight. Stained HEI-OC1 cells or cochlear tissues were observed via an optical microscope (Olympus, CX41, Japan), and staining fields were selected randomly to calculate the number of senescent cells per 100 cells. Measurement of Fe2+ An iron content detection kit (Solarbio, China) was used to measure the content of Fe2+ according to the manufacturer's instructions. Briefly, the treated HEI-OC1 cells or cochlear tissues were lysed with extract and centrifuged at 4,000 × g for 10 min at 4°C. The supernatant was collected and boiled for 5 min after being mixed with detection reagent. Then chloroform was added to the mixture, which was centrifuged at 10,000 rpm for 10 min at room temperature. The newly collected supernatant was added to a 96-well plate, and the absorbance of the samples at 520 nm was recorded using a SpectraMax i3x (Molecular Devices, USA). The protein concentration of the samples was quantified with a BCA kit (Meilunbio, China). The relative Fe2+ level was expressed as μg/mg protein.
Measurement of lipid peroxide Malondialdehyde (MDA), an end product of lipid peroxidation, was measured using a micro malondialdehyde (MDA) assay kit (Solarbio, China) according to the manufacturer's instructions. Briefly, the treated HEI-OC1 cells or cochlear tissues were lysed with extract and centrifuged at 8,000 × g for 10 min at 4°C. The supernatant was collected and boiled for 60 min after being mixed with detection reagent. New supernatant was collected after the mixture was centrifuged at 10,000 × g for 10 min at room temperature and was added to a 96-well plate. The absorbance of the samples at 532 nm and 600 nm was then recorded using a SpectraMax i3x. The relative MDA level was expressed as nmol/mg protein. Statistical analysis Data were expressed as mean ± SD of at least three independent experiments. An unpaired two-tailed Student's t-test was used to compare the means between two groups, and one-way or two-way ANOVA with Dunnett correction was used to analyze the data between three or more groups in the GraphPad Prism 8.0 program (San Diego, CA, USA). NS = not significant; p < 0.05 was considered statistically significant (all data are shown in the Supplementary material). Differentially expressed genes (DEGs) in ARHL A total of 43,385 DEGs between the old group (GSM864313~864316) and the young group (GSM864305~864308) in the GSE35234 dataset (data are shown in the Supplementary material) were identified with GEO2R (Figure 2A). Twenty-eight of them were further identified by the criteria |log2(FC)| > 1 and adjusted p-value < 0.05 (Figure 2B). The biological functions of the identified DEGs were analyzed using the WebGestalt database (Figure 2C). One gene could not be matched to an Entrez Gene ID, so the functions of 27 DEGs were analyzed. In terms of biological processes, the DEGs were involved in biological regulation, response to stimulus, multicellular organismal processes, developmental processes, and metabolic processes. Their cellular components were mainly associated with the extracellular space and membrane bilayers. The molecular functions were associated with ion binding, protein binding, enzyme regulator activity, and lipid binding. These results are consistent with the characteristics of ferroptosis, i.e., the disorder of intracellular iron metabolism, which causes lipid peroxidation and leads to cytomembrane rupture. FRGs cause ARHL A total of 698 FRGs were retrieved from the GeneCards database (data are shown in the Supplementary material). LTF, the only overlapping gene, was identified by cross-analysis between the FRGs and DEGs (Figure 2D). The functions of LTF involve regulating iron homeostasis, antioxidation, anti-inflammation, and anticancer activity (Artym et al., 2021; Bukowska-Ośko et al., 2022; Kowalczyk et al., 2022). LTF is thus both a ferroptosis-related gene and a differentially expressed gene of the aging cochleae. Therefore, we hypothesized that LTF is a candidate gene for regulating cochlear ferroptosis, and this was validated in subsequent experiments. Construction of the protein-protein interaction (PPI) network A PPI network is made up of functionally similar proteins and is critical to understanding biological processes (Wanker et al., 2019). It demonstrates the importance of hub genes and is a valuable tool for identifying novel protein functions (Revathi Paramasivam et al., 2021). The STRING database contains information on more than 5,000 species, 20 million proteins, and 3 billion interactions. The PPI network constructed with the STRING database in this work includes the hub gene LTF and 12 matching interacting genes (Figure 2E). Among these are SIRT7 (Li X.
T. et al., 2022), METTL14 (Zhuang et al., 2023), METTL3 (Lin et al., 2022), CP (Shang et al., 2020), and CD14 (Hu et al., 2021), which have also been confirmed to be associated with ferroptosis. This indicates that LTF, as a hub gene, is closely related to ferroptosis. A potential TF-miRNA-mRNA network of ferroptosis in aging cochleae was constructed LTF was highly homologous between mice and humans, with a similarity score of 76.53 in the GeneCards database. Meanwhile, mmu-mir-130b and mmu-mir-205 from mouse miRNAs matched hsa-mir-130b and hsa-mir-205 from human miRNAs in the miRBase database. TFs of NFYC, CEBPA, and CEBPB were predicted for mmu-mir-130b, and TFs of STAT3 and PPARG were predicted for mmu-mir-205 in the miRNet database. According to the TransmiR database, the TFs predicted for mmu-mir-130b and mmu-mir-205 are shown in Figures 3C,D, and the TFs predicted for hsa-mir-130b and hsa-mir-205 are shown in Figures 3E,F. These results show that the overlapping TFs of miR-130b are CEBPA and CEBPB, and the overlapping TF of miR-205 is STAT3 (Figures 3G,H). The Cistrome DB database was used to predict TFs for LTF, and the top 20 TFs in mice and humans are shown in Figures 3I,J. CEBPA was the sole overlapping TF obtained by combining the above methods of predicting TFs. Therefore, it was hypothesized that the TF-miRNA-mRNA network of cochlear ferroptosis regulated by LTF in mice and humans is CEBPA-miR-130b-LTF (Figure 3K). Ferroptosis exists in aging HEI-OC1 cells with decreased LTF D-galactose (D-gal) has been widely used to induce senescence in various models (Azman and Zakaria, 2019), and the HEI-OC1 auditory cell line has been commonly used to investigate the functions and mechanisms of hair cells in vitro (Zhang et al., 2021; Nan et al., 2022). To verify the occurrence of ferroptosis in aging HEI-OC1 cells, different concentrations of D-gal were applied to the HEI-OC1 cells. Compared with the control group, β-galactosidase activity was increased in the HEI-OC1 cells in a dose-dependent manner (Figure 4A). At a concentration of 20 mg/mL, the extent of senescence of the HEI-OC1 cells was significantly different from the control group (p < 0.001; Figure 4B). Therefore, 20 mg/mL D-gal was used for subsequent experiments. The results showed that the expression of Fe2+ and MDA was increased (p = 0.006 and p < 0.001; Figures 4C,D), while the expression of LTF was decreased (p < 0.001; Figure 4E) in aging HEI-OC1 cells. To better demonstrate that ferroptosis exists in aging HEI-OC1 cells, 5 μM Liproxstatin-1 (Zheng et al., 2020) was applied to HEI-OC1 cells together with D-gal for 48 h. As expected, Liproxstatin-1 (a ferroptosis inhibitor) reversed the above ferroptosis phenotype in aging HEI-OC1 cells (p = 0.038, p = 0.012, and p = 0.046; Figures 4C-E). Meanwhile, the fluorescence intensity of LTF was also decreased after the induction of D-gal (Figure 4F).
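The group comparisons reported above (unpaired two-tailed Student's t-tests for two groups, ANOVA for three or more, as described in the statistical analysis section) can be sketched with SciPy as follows. The numeric arrays are placeholder values, not the study's data, and the post-hoc Dunnett comparison is only indicated in a comment.

```python
import numpy as np
from scipy import stats

# Placeholder triplicate measurements (e.g., relative MDA, nmol/mg protein);
# illustrative values only, not the study's data.
control    = np.array([1.00, 0.95, 1.05])
d_gal      = np.array([2.80, 3.10, 2.95])
d_gal_lip1 = np.array([1.40, 1.55, 1.35])   # D-gal + Liproxstatin-1

# Unpaired two-tailed Student's t-test between two groups.
t_stat, p_ttest = stats.ttest_ind(control, d_gal)

# One-way ANOVA across three groups (a post-hoc Dunnett comparison
# against the control group would follow, as in the paper).
f_stat, p_anova = stats.f_oneway(control, d_gal, d_gal_lip1)

print(f"t-test p = {p_ttest:.4f}; one-way ANOVA p = {p_anova:.4f}")
```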
Ferroptosis is confirmed in aging cochlear explants with low expression of LTF To verify whether ferroptosis occurred in aging cochlear explants, D-gal at gradient concentrations was applied to the cochlear explants for 48 h. The hair cells in the basilar membrane exhibited obvious senescence after applying D-gal, with higher β-galactosidase activity at the base of the cochleae than at the apex (Figure 5A). The extent of senescence was also dose-dependent (Figure 5B). At a concentration of 30 mg/mL D-gal, the extent of senescence was significantly different from the control group (p < 0.001; Figure 5B). Therefore, 30 mg/mL was used for the later experiments. After 30 mg/mL D-gal induction, the expression of Fe2+ (Figure 5C) and MDA (Figure 5D) was increased more than 5-fold and about 3-fold, respectively, compared with the control group (all p < 0.001), while the expression of LTF was significantly decreased (p < 0.001; Figure 5E). These results indicate the occurrence of ferroptosis in aging cochlear explants. Aging mice exhibit ferroptosis in cochleae with decreased LTF C57BL/6J mice are a predominant animal model for the study of ARHL, with high-frequency hearing loss starting at 3 months and hearing loss gradually extending to all frequencies over time (Bowl and Dawson, 2019). To further verify the occurrence of ferroptosis in the aging cochleae, 2-month and 10-month-old C57BL/6J mice were selected for comparative analysis. As shown in Figure 6A, the mice developed severe hearing loss at 10 months (p < 0.001). As shown in Figures 6B,C, the expression of Fe2+ and MDA was increased in the old group (all p < 0.001). Similarly, LTF expression was decreased in the aging cochleae (p < 0.005; Figure 6D). These results strongly suggest that ferroptosis occurs in the aging cochleae. Discussion ARHL is a global problem, affecting approximately one third of people over 65 years of age worldwide (Wang and Puel, 2020). Despite massive investments in ARHL research, there is still no ideal prevention or treatment. Thus, it is crucial to find novel therapeutic targets for ARHL. In this study, the relationship between ARHL and ferroptosis was identified by bioinformatics, the hub gene LTF was screened out, and its potential regulatory mechanism was established. Finally, the results of the bioinformatic analysis were preliminarily verified by experiments in vitro and in vivo. Our results show that LTF is a hub gene for regulating cochlear ferroptosis, which provides important evidence for the treatment of ARHL and lays the foundation for later verification of the molecular mechanism. Unlike necrosis or apoptosis, ferroptosis is an iron-dependent cell death pattern driven by lipid peroxidation accumulation (Dixon et al., 2012). The iron-dependent Fenton reaction is essential for ferroptosis, and reducing lipid peroxidation can effectively inhibit ferroptosis (Jiang et al., 2021). As a pivotal actor in maintaining iron homeostasis (Rosa et al., 2017), LTF has been proven to prevent the Fenton reaction by sequestering Fe3+ (Superti, 2020) and positively regulating lipid metabolism (Xiong et al., 2018). Recent studies have shown that LTF plays an important role in age-related neurodegenerative diseases (Li B.
et al., 2022) and relieves neuronal ferroptosis in intracerebral hemorrhagic stroke (Xiao et al., 2022). Moreover, CEBPA is a key transcription factor in adipogenesis (Riera-Heredia et al., 2022), and miR-130b is a post-transcriptional regulator of lipid metabolism (Luo et al., 2022). MiR-130b-3p, a mature sequence of miR-130b, has been confirmed to prevent ferroptosis by reducing iron accumulation and lipid peroxidation (Liao et al., 2021). These studies suggest that LTF, CEBPA, and miR-130b may jointly affect ferroptosis by regulating lipid peroxidation. Consistent with the above theory, LTF, the only hub gene of ferroptosis in the mouse cochleae, was found in this work to be highly homologous between mice and humans using bioinformatics. Furthermore, CEBPA and miR-130b were confirmed to be the TF and miRNA, respectively, that regulate LTF expression in these two species. This suggests that CEBPA-miR-130b-LTF (TF-miRNA-mRNA) may be a potential regulatory network that regulates cochlear ferroptosis. Despite the lack of gene data from the cochlear tissue of ARHL patients, the regulatory network constructed in this work is applicable to both mouse and human species. This will provide a new theoretical basis for regulating ARHL and supports the feasibility of animal experiments and clinical studies in the future. As anticipated, aging HEI-OC1 cells and aging cochlear explants both showed pathological changes of ferroptosis, including Fe2+ overload, lipid peroxidation, and low expression of LTF. Notably, the senescence of hair cells in the basilar membrane exhibited a tonotopic gradient and a concentration-dependent change, which is consistent with the pathological characteristics of ARHL (Wang and Puel, 2020). Moreover, these data show that C57BL/6J mice suffered severe hearing loss at 10 months, with the occurrence of iron overload, lipid peroxidation, and decreased expression of LTF. The above results prove the existence of cochlear ferroptosis in ARHL and suggest that LTF may be a hub gene acting via ferroptosis in ARHL progression. More importantly, low-density lipoprotein receptor-related protein 1 (LRP1) was recently found to localize in the blood-labyrinth barrier (BLB) and inner hair cells (Shi et al., 2022). As an important receptor for LTF (Li and Guo, 2021), LRP1 has the potential to break the restriction of the BLB and deliver LTF to the inner ear. In combination with this theory, we will conduct experiments aimed at treating ARHL by regulating the expression of LTF in the inner ear. A limitation of this study is that the regulatory mechanism has not been validated; RNA-silencing or gene-editing technology should be used for functional verification in vivo and in vitro. Besides, the immunofluorescence assays of LTF expression in the cochlear explants need to be further improved in the future. In summary, LTF was identified as a hub gene of cochlear ferroptosis in ARHL, and its associated TF-miRNA-mRNA regulatory network was constructed. Our findings reveal the relationship between ferroptosis and ARHL and provide a potential therapeutic target for ARHL.
FIGURE 2. LTF, a ferroptosis-related gene, is identified in aging cochleae. (A) The volcano diagram of GSE35234 shows 43,385 DEGs between the old group (GSM864313~864316) and the young group (GSM864305~864308). (B) The heatmap shows 28 statistically significant DEGs. (C) Functional analysis of the statistically significant DEGs. (D) LTF, the hub gene, is obtained from the intersection of Up-Genes, Down-Genes (based on the statistically significant DEGs), and FRGs. (E) The PPI network shows that 12 proteins interact with LTF. FIGURE 5. Ferroptosis in aging cochlear explants. (A) SA-β-gal staining in the basilar membrane treated with different concentrations of D-gal for 48 h. (B) Quantification of SA-β-gal-positive cells in (A). Compared with the control group, the percentage of positive cells shows a statistical difference at 30 mg/mL D-gal (p < 0.001; N = 3). (C-E) Compared with the control group, the expression of Fe2+ (C) and MDA (D) is increased in aging cochlear explants with 30 mg/mL D-gal (p < 0.001; N = 5). Conversely, the expression of LTF (E) is decreased in aging cochlear explants with 30 mg/mL D-gal (p < 0.001; N = 5). OHC, outer hair cell; IHC, inner hair cell; Ctrl, control.
4,754.4
2024-01-11T00:00:00.000
[ "Medicine", "Biology" ]
Malmquist Productivity Analysis of Top Global Automobile Manufacturers The automobile industry is one of the largest industries in the world by revenue. As one of the industries with the highest employment output, it has become a major determinant of economic growth. In view of declining automobile production after a period of continuous growth, echoing the 2008 global auto crisis, a re-evaluation of automobile manufacturing is necessary. This study applies the Malmquist productivity index (MPI), one of the many models in Data Envelopment Analysis (DEA), to analyze the performance of the world's top 20 automakers over the period of 2015-2018. The researchers assessed the technical efficiency, technological progress, and total factor productivity of global automobile manufacturers, using a variety of input and output variables that are considered essential financial indicators, such as total assets, shareholders' equity, cost of revenue, operating expenses, revenue, and net income. The results show that the most productive automaker on average is Volkswagen, followed by Honda, BAIC, General Motors, and Suzuki. On the contrary, Mitsubishi and Tata Motors were the worst-performing automakers during the studied period. This study provides a general overview of the global automobile industry. This paper can be a valuable reference for car managers, policymakers, and investors, to aid their decision-making on automobile management, investment, and development. This research is also a contribution to organizational performance measurement using the DEA Malmquist model. Introduction The automobile industry is one of the largest industries, with a wide range of products distributed globally. However, despite an evident worldwide growth trend during the 1990s, certain aspects of automotive manufacturing are considered to be more regional, as observed specifically in many developing countries, where vehicle production expanded rapidly during this period. Moreover, at this time, many leading automotive manufacturers extended some of their operations to these developing countries, driven by the growth in global sales. This move by global producers was meant to establish cheaper production sites for the manufacturing of selected vehicles and components, and to gain access to new markets for high-end vehicles [1]. Some examples of the biggest key players in this industry include the German-based manufacturers Volkswagen and Daimler AG. Japan also has big automotive companies, namely Toyota, Nissan, and Honda. China has its own major players in Shanghai Automotive Industry Corporation (SAIC) Motor and Dongfeng Motor Corporation. Hyundai is a well-known automotive company in Korea, while the United States has a rivalry between Ford and General Motors (GM). The automotive industry plays a vital role in creating jobs and employment opportunities. In this respect, the automotive industry scores five on the employment multiplier, while other industries score only three. In the United States, OEMs (original equipment manufacturers) that make original parts used by automakers directly employ 1.7 million people. They indirectly created 1.5 million jobs, while suppliers and distributors supported an additional 4.8 million jobs. According to the International Organization of Automobile Manufacturers (OICA), each $1 million increase in revenue creates approximately 10 jobs.
For industries such as energy and utilities, the ratio is even higher. On a global scale, Volkswagen was the company with the greatest number of employees in the automotive industry in 2015, with more than 570,000 employees. Its rival, Toyota, has 340,000 employees, while another manufacturer, Daimler, has more than 270,000 employees. The large number of people employed by the industry has made it a major determinant of economic growth, as well as recession. The automobile industry is also one of many industries which have tremendous expenses in terms of advertising. For example, during the first half of 2014, GM paid $928 million, and its global-advertising spending reached $5.5 billion in 2013, accounting for 3.54% of its revenue. In that same year, Ford Motors had revenues of USD 139.37 billion and expenditures of USD 4.4 billion. Moreover, Fiat Chrysler Automobiles N.V. (a Dutch-based automaker) has recorded a spending of $2.76 billion, accounting for 3.82% of its revenue. The car-manufacturing business model and value chain are very complex, and car development may take several years. Automakers have been challenged to increase the speed, intelligent design, and efficiency of their vehicles. As a result, many manufacturers focus on innovation. Few people know that, in terms of research and innovation, the world's third largest industry is the automotive-manufacturing industry. In 2013, $100 billion was spent globally, including $18 billion in the United States. For Toyota, investing a lot of money in research helped maintain competition, and in 2013, it spent $8.1 billion on research and development. GM also spent $7.2 billion on research and development that year. It can be considered that this is a very important and influential industry in the global economy [2]. Nowadays, cars and other vehicles have become an integral part of modern society, it streamlines transportation and quickens the pace of society's evolution. The automotive industry looks toward a future of producing more fuel-efficient vehicles to comply with governments' initiative of changing fuel economy standards and the need to reduce dependence on fossil fuels. Therefore, the next generation of automobile industry requires more fuel-efficient engines, advanced and creative design, shared intelligence, and systems engineering. Cars are being improved and developed to use electricity, connected and offering onboard GPS and Wi-Fi capabilities. Some auto manufacturers are even experimenting with self-driving cars. This trend leads to a fierce technological race between all the manufacturers in the automotive industry. The performance of automobile manufacturers must be determined, especially in terms of technical efficiency. The application of the Data Envelopment Analysis Malmquist model in this research will be able to calculate the technical efficiency, as well as the frontier-shift (technological change) of the automobile manufacturers, thereby reflecting their trend of technology development. For these measurements, the authors chose this model to evaluate the performance of 20 global automakers that have a great influence on the global automotive industry during the period of 2015-2018. This will also assist them to better understand the industry as one of the most important and influential industries to the global economy, especially in the context of the risks of a second crisis and in the industry 4.0 race. 
The authors expect that the research results will reflect an overall picture of the global automotive industry (especially the situation of technical efficiency) through the performance of automotive manufacturers so that automaker managers, policy makers, or investors can use this paper as a basis in drafting automobile management policies, direction for development, and investment decisions. The authors hope that this study will be a valuable reference for the studies of global automotive industry, as well as research on the DEA Malmquist model. This paper includes five sections. The first section gives an overview about the research background, motivation, purpose, objects, and scope. The second section indicates some previous studies related to performance evaluation, applying data envelopment analysis, especially those that use the Malmquist model. Research procedures and discussions regarding the theory of the Malmquist index model, as well as the Pearson correlation coefficient, are presented in the third section. The fourth section presents the data and calculation, analysis, and evaluation. The last section of this research discusses the conclusion contributions, shortcomings of research, and indicates some direction for the next research. Literature Review In 1978, Charnes et al. [3] developed Data Envelopment Analysis (DEA). During that time, DEA was a new data-oriented method for the evaluation of Decision-Making Units (DMUs). DMUs is a set of peer entities in which technical efficiencies are calculated. It is a method of optimization with the use of linear programming that will assess the productivity and efficiency of DMUs related to the proportional change in inputs or outputs. The CCR is the first DEA model and is an acronym for Charnes, Cooper, and Rhodes. Then, later, several models of DEA were introduced and broadly applied for performance analysis in many areas, like transportation, mining, logistics, banking, and many other industries and organizations since then. DEA was also used by Martín and Roman [4] in 2001, wherein the performance and technical efficiencies of each individual airports in Spain are analyzed. The results were used to set forth some considerations to the policies that prepare the Spanish airport for the privatization process in 2001. Kulshreshtha et al. [5] utilized DEA to study the productivity of the coal industry of India during the period of 1985-1997 and found that the underground mining has less efficiency than the opencast mining. Leachman [6] used DEA for the development of a quality and output-based performance metric to assess the competitiveness of car firm's manufacturing against its competitors. Pilyavsky et al. [7] deployed DEA to analyze the change in efficiency of 193 community hospitals and polyclinics across the Ukraine, in the 1997-2001 period, and found that the polyclinics somewhat less efficient than community hospitals. Wang et al. [8], in their study, used DEA for the measurement of the marketing and production efficiencies of some 23 companies involved in the Printing Circuit Board (PCB) industry and found that 15 firms need to improve their efficiencies in both production and marketing aspects. Four companies prioritized the progress in their production efficiency, and the remaining four companies focused on enhancing marketing efficiency. Chandraprakaikul and Suebpongsakorn [9], by deploying DEA, found out the weaknesses of 55 logistic firms in Thailand. 
The goal of their study was to improve the logistics efficiency and evaluate each firm's performances from 2007 to 2010. An application of a DEA with two-stage method was used by Yuan and Tian [10], to analyze the efficiency of resources related to science and technology aspects of several industrial enterprises, along with other factors influencing their performance. Findings show that the elements of the input and output variables are independent. Chang et al. [11] analyzed the environmental efficiency of the transportation sector in China, by using the DEA model to find out the very inefficient environment of the industry. Ren, et al. [12] applied DEA for the assessment of six Chinese biofuel firms' energy efficiency in relation to their life cycle, to determine wasteful energy losses in biofuel production, and indicated that DEA is a feasible and unique tool for establishing efficient scenarios in production of bioethanol. The research also suggested that the most energy-efficient form of ethanol production for China may come from sweet potatoes. DEA has been acknowledged as a practical decision support tool and a valuable analytical research instrument. From a series of the previous studies mentioned above, it is understood that the DEA method has been broadly applied to assess the performance of many companies in different industries, including the automotive industry. These prove that DEA is an effective tool for the authors to evaluate the performance of global automobile manufacturers. Malmquist productivity index (MPI) is a very useful approach for productivity measurement in DEA. In 1982, Caves introduced MPI and named it after Professor Malmquist (1953), whom the ideas are based upon. The Malmquist productivity index has the components which are used in performance measurement that includes the technical efficiency in technological change and the total factor productivity [13]. As stated in a study by Fuentes and Bañuls, [14] the split of the MPI's Total Productivity Efficiency into two different components, the technical efficiency and the technological, change helps clarify the role of manager or skill level in the final performance data. The DEA Malmquist model has been a very effective method in measuring changes in DMU productivity over the past decade. For example, Färe, et al. [15] used it to analyze productivity growth in developed countries and found that productivity growth in the United States was slightly above average, due to technological changes. The growth in productivity of Japan turned out to be the highest among the samples, and because of the changes in efficiency, Japan's productivity growth is almost half. Fulginiti and Perrin [16] used Malmquist to determine whether the results using this approach can confirm the recorded decline in agricultural productivity from less-developed countries (LDCs) using other methods. The earlier results were confirmed, and we found out that agricultural tax gets the most declining rates in terms of productivity change. Odeck [17] focused on the Norwegian Motor Vehicle Inspection Agencies and measured their efficiency and productivity growth for the period of 1989-1991. Using the Malmquist index, the productivity was described through calculation by DEA, being the ratio between efficiency for the similar production unit in two particular periods. The remarkably positive effect of the frontier technology is observed to be the main contributor to the total productivity growth. 
The efficiency measures that were calculated show that there are unstable efficiency scores for every unit examined throughout the observed year periods. The size of the units does not have an effects on the efficiency scores. Chen [18] used a non-radial Malmquist productivity index for the change in productivity calculation of three major industries in China: chemicals, textiles, and metallurgical within the four five-year plan periods. The research showed that the economic development plans can be used for the evaluation of productivity and technology changes, using the MPI. Sharma [19] examined the productivity performance of the Indian automobile industry via Total Factor Productivity (TFP) measurement from 1990-1991 to 2003-2004 and explored further the factors influencing the car industry efficiency in India. DEA was also used by Liu and Wang [20] to calculate the three components of Malmquist productivity of some Taiwanese semiconductor packaging and testing firms during the period of 2000 to 2003. Aside from revealing the productivity change patterns and introducing another way to interpret the components of MPI with respect to management aspects, this approach likewise determines any strategic shift of each firm due to isoquant changes. Mazumdar [21] applied MPI to examine the technological gap ratio (TGR), technical efficiency, and change in productivity of pharmaceutical companies among particular sectors in India. The study implies that vertically integrated companies that produce both formulation and bulk drugs exhibit higher efficiency and technological innovation, and it also found that imported technology or the establishment of capital-intensive techniques propels the technological growth of firms. Wang et al. [22] compared the results of Malmquist productivity index to the Grey Relational Analysis, to assess the intellectual capital management of the pharmaceutical industry in Taiwan. With the combination of these analysis tools, they were able to conclude that, among the 12 pharmaceutical companies, seven of them have efficiently improved their intellectual capital management, while five companies failed during the four-year period of 2005 to 2008. Chang et al. [23] used the Malmquist DEA model to study the productivity changes of accounting firms in the US, right before and after the implementation of the Sarbanes-Oxley Act. The findings indicated that accounting firms exhibited significant growth in productivity efficiency after the actualization of SOX, and these results were better than pre-SOX performances. These studies cited above prove that the Malmquist productivity index (MPI), which is a DEA-based model, is a very useful tool for measuring the productivity changes of countries, industries, or organizations, through a specific assessment of technical and technological aspects, as well as total factor productivity. Regarding this research, the authors know that car production is a complex activity that combines technical aspects and technological capabilities; therefore, evaluating the performance of an automobile manufacturer not only needs to include an evaluation of overall performance, but it also needs to have specific technical and technological assessments. This Malmquist model is an appropriate research method to perform the most detailed evaluation of the technical, as well as the technological, performance of global automakers. For this reason, the authors chose to apply this Malmquist model to carry out this research. 
Research Process In this research, the authors deployed the Malmquist productivity index model of the DEA method to evaluate the performance of the world's top 20 automakers from 2015 to 2018. The study comprises four parts, as shown in Figure 1; the first two cover the selection of DMUs and variables and the collection of research data, both described in the following subsections. Part 3. Data analysis: The collected data were checked for the correlation coefficient, to ensure that the relationship between input and output variables follows the isotonicity condition. If their correlation coefficients were zero or negative, the data were re-selected until they met the positive correlation coefficient requirement. After that, the DEA Malmquist model was applied to calculate the catch-up index, frontier-shift, and Malmquist index of the DMUs. Part 4. Results and discussion: In the last part, the results of the catch-up index (efficiency change), frontier-shift (technological change), and Malmquist index (total factor productivity change) of the DMUs are evaluated, discussed, and concluded. Malmquist Productivity Index Evaluating the change in total factor productivity of a DMU between two periods is the main purpose of the MPI, which is described as the product of the catch-up (efficiency change) and the frontier-shift (technological change). Efficiency change is associated with the intensity of the DMU's efforts, reflecting any improvement or deterioration in its efficiency, while technological change reflects any change in the efficiency frontiers between periods 1 and 2 [15]. Denote the observation of DMU $i$ in period 1 by $(x_i^1, y_i^1)$ and in period 2 by $(x_i^2, y_i^2)$, and let $d^{t_2}(x_i^{t_1}, y_i^{t_1})$ be the efficiency score of the period-$t_1$ observation measured against the period-$t_2$ frontier ($t_1 = 1, 2$ and $t_2 = 1, 2$). The catch-up (C), frontier-shift (F), and Malmquist index (MI) can then be calculated with the following formulas [8]: $$C = \frac{d^{2}(x_i^{2}, y_i^{2})}{d^{1}(x_i^{1}, y_i^{1})}, \qquad F = \left[\frac{d^{1}(x_i^{1}, y_i^{1})}{d^{2}(x_i^{1}, y_i^{1})} \times \frac{d^{1}(x_i^{2}, y_i^{2})}{d^{2}(x_i^{2}, y_i^{2})}\right]^{1/2}, \qquad MI = C \times F.$$ From the above formulas, we can see that the DMU's total factor productivity (TFP) reflects the advances or declines of the DMU in technical efficiency and technological innovation. If the value of C, F, or MI is >1, =1, or <1, it respectively indicates progress, status quo, or regress in the technical efficiency, technological change, or total factor productivity of DMU $i$ from period 1 to period 2. Pearson Correlation Coefficient The Pearson correlation is widely used in many research studies; it was developed by Karl Pearson, building on work published by Auguste Bravais in 1844. It takes a value between −1 and +1, representing the linear dependence of two variables or sets of data, where +1 is total positive linear correlation (when one variable increases in value, the other variable also increases), 0 is no linear correlation (there is no association between the two variables), and −1 is total negative linear correlation (when one variable increases in value, the other variable decreases), as illustrated in Figure 2. Pearson's correlation coefficient (r) of two variables x and y is calculated as follows [8]: $$r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}}$$ where n is the size of the sample; $x_i, y_i$ denote the individual sample points indexed with i; and $\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i$ is the sample mean of x, with $\bar{y}$ defined analogously for y. Since homogeneity and isotonicity are two important assumptions on DEA data, the correlation test is an essential procedure before the application of DEA. It is an assurance that there is an isotonic condition between the input and output variables.
The input and output data need to have a positive correlation (the values of the output factors should not decrease while the values of the input factors increase); the closer the value is to +1, the stronger the positive linear relationship. Selection of Decision-Making Units (DMUs) This study focuses on the performance evaluation of the world's top 20 automakers in terms of production in 2015. They are the 20 automobile manufacturers (out of more than 100 global automobile manufacturers) that have a big impact on the global automotive industry, with an annual output of over 1 million units. Out of these 20 automakers, 12 are from Asia (six from Japan, four from China, one from Korea, and one from India), six from Europe (three from Germany, two from France, and one from the Netherlands), and two from the United States, as listed in Table 1, below. Selecting input and output factors is an important task in employing DEA to measure the efficiency of DMUs. DEA is a complicated technique in which the chosen inputs and outputs have a strong impact on the result. Without an adequate benefit analysis, the determination of the proper number of variables can be neglected. Moreover, there is currently no precise variable-selection method that must be followed. Based on previous studies, the authors found that input variables are financial indicators that the company needs to balance or decrease, while output variables are indicators that the company needs to improve or increase. After thorough study, the researchers decided to choose four input and two output factors, which are stated below: Input Factors 1. Total Assets (TA): the total amount of assets owned by the automaker. 2. Equity (EQ): the higher the equity level of a company, the better its access to debt-based funding. If most assets are financed by equity, financial leverage is low, and equity can then serve as a proxy for the company's capacity for debt-based financing. 3. Cost of Revenue (CR): the total costs that are directly connected with producing and distributing goods and services to the customers of the automaker. 4. Operating Expenses (OE): expenditures incurred in carrying out the automaker's day-to-day activities but not directly associated with production, including selling, administrative, and general expenses. Output Factors 1. Revenue (RE): the total receipts that the automaker obtains from selling goods or services. 2. Net Income (NI): the actual profit of the automaker after accounting for all costs and taxes. These six financial indicators play an important role in assessing a company's performance. Every business needs to manage its assets, control its capital well, reduce production and operation costs, and increase its income and profits. The authors limited the input variables to financial indicators only, since the study focuses on efficiency in terms of financial capabilities. To be able to use these data for the DEA analysis, the authors chose outputs whose values should increase along with the values of the input factors. There must be an isotonic condition between the input and output variables, or else the chosen factors cannot be used. That is the reason why the authors chose these factors as the research variables. Research Data Data for the 2015-2018 period were collected from the automobile manufacturers' annual reports and from information published on their official websites [25]. All values are expressed in millions of US dollars.
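Before any DEA calculation, the isotonicity requirement described above amounts to checking that every input-output Pearson correlation is positive. A minimal sketch with pandas follows; the file name and column layout are assumptions for illustration, using the paper's variable abbreviations (TA, EQ, CR, OE, RE, NI).

```python
import pandas as pd

# Hypothetical table of one year's data, one row per automaker (DMU),
# with the six variables in millions of USD; the file name is a placeholder.
data = pd.read_csv("automakers_2015.csv")

inputs  = ["TA", "EQ", "CR", "OE"]   # total assets, equity, cost of revenue, operating expenses
outputs = ["RE", "NI"]               # revenue, net income

# Pearson correlation of every input against every output.
corr = data[inputs + outputs].corr(method="pearson").loc[inputs, outputs]
print(corr)

# Isotonicity: all input-output correlations must be positive,
# otherwise the variable set has to be re-selected.
if not (corr > 0).all().all():
    raise ValueError("negative input-output correlation: re-select variables")
```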
Table 2 below shows the statistical data for each year of the period. D20 in 2015, and D8 and D12 in 2017, have negative values for net income (an indication that those firms suffered a loss during that year). Since homogeneity is an important DEA data assumption, the negative values in the raw data need to be adjusted upward to positive values. After adjustment, the net income of each DMU in 2015 was increased by 1,538, and in 2017 by 3,865. This simultaneous shift of values does not affect the DEA calculation results. The process, methods, tools, and research data used to carry out this study were discussed above. In the next section, the authors apply the Pearson correlation coefficient test and the DEA Malmquist model to the research data. Correlation Results In DEA, the choice of variables greatly affects the research results. Before using the Malmquist or any other DEA model to process the data, the isotonic condition between the input and output variables must be met. This simply means that an increase in the values of the input variables should not make the values of the output variables decrease [26]. Therefore, the research data must first be validated using the Pearson correlation, to ensure the isotonic relationship between input and output variables. The value range of the Pearson correlation coefficient is from −1 to +1. The results of the Pearson correlation test are shown in Table 3, below. The correlation coefficients range from 0.3494379 to 1; all are positive correlations. This means that the data comply with the isotonic condition and can be used for DEA calculations. It also confirms that the choice of inputs and outputs is suitable for DEA. Catch-Up Index (Technical Efficiency) The Malmquist productivity index comprises the components used in performance measurement: changes in technical efficiency, technological change, and total factor productivity. The authors first present the results for efficiency change. The technical efficiency changes of the DMUs are expressed through the catch-up index shown in Table 4 and Figure 3. It is noticeable that, during the period of 2016-2017, most of the DMUs did not achieve progressive technical efficiency; only D1, 3, 6, 11, 17, and 18 obtained catch-up scores greater than 1. D19 (Toyota) was the least-efficient automobile manufacturer in this period, with a score of 0.692. D3 (Chang'an Auto) had an impressive improvement in technical efficiency, being the least-effective producer during the previous period and then becoming the most technically efficient producer in this period. After the low performance of the previous period, the automakers showed significant improvement in technical efficiency in the next period, 2017-2018. Results show that only five out of 20 companies (D1, 3, 6, 11, and 18) have catch-up values less than 1. Having been the worst-performing manufacturer in the previous period, D19 (Toyota) showed improvement and became the most technically efficient manufacturer in this period, with a catch-up value of 1.47. Surprisingly, D3 (Chang'an Auto) failed to maintain high efficiency and suffered a serious decline in technical efficiency, with a catch-up value of only 0.4478, while the other competitors were above 0.9. In the list of 20 DMUs, four are from China, six from Japan, six from Europe, two from America, one from Korea, and one from India.
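For reference, the catch-up scores discussed above, and the frontier-shift and Malmquist indices reported in the following subsections, are all obtained from the same four distance-function scores per DMU and pair of periods. A minimal sketch of the decomposition is given below with placeholder scores; in practice the scores come from solving the DEA linear programs, which is not shown here.

```python
import math

def malmquist(d1_x1y1, d2_x2y2, d2_x1y1, d1_x2y2):
    """Catch-up (C), frontier-shift (F), and Malmquist index (MI) for one DMU.

    d{t}_x{s}y{s}: efficiency of the period-s observation measured against
    the period-t frontier (obtained from the DEA models, not computed here).
    """
    catch_up = d2_x2y2 / d1_x1y1
    frontier_shift = math.sqrt((d1_x1y1 / d2_x1y1) * (d1_x2y2 / d2_x2y2))
    return catch_up, frontier_shift, catch_up * frontier_shift

# Placeholder efficiency scores for one automaker between two periods.
c, f, mi = malmquist(d1_x1y1=0.85, d2_x2y2=0.90, d2_x1y1=0.80, d1_x2y2=0.95)
print(f"catch-up = {c:.3f}, frontier-shift = {f:.3f}, MPI = {mi:.3f}")
```

As in the tables, values greater than 1 indicate progress and values less than 1 indicate regress.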
Among the four Chinese carmakers (D1-BAIC, D3-Chang'an Auto, D5-Dongfeng, and D16-SAIC), the authors noticed, in Figure 4 below, that Dongfeng and SAIC showed improvement in technical efficiency, while Chang'an Auto and BAIC regressed in terms of efficiency change. After a decline in technical efficiency during the period of 2016-2017, the remaining five Japanese automakers, with the exception of Mazda (D11), showed a clear improvement, as seen in Figure 5, below. Toyota (D19) showed the most noticeable improvement, going from being the least-effective manufacturer to outperforming its competitors. Like the Japanese automakers, European carmakers also tended to improve their performance after the previous decline. Only one automaker, Fiat Chrysler (D6), showed a degradation in technical efficiency, as seen in Figure 6. Frontier-Shift Index (Technological Change) The frontier-shift index is applied to measure the shift in the efficiency frontiers of DMUs between two periods. Table 5 shows that the technological efficiency of the automobile manufacturers increased in the period 2016-2017 and decreased in the period 2017-2018. Except for D20, all of the remaining manufacturers (19 out of 20) failed to achieve technological progress in the first period, 2015-2016. In the next period (2016-2017), manufacturers made efforts to innovate their technology and achieved good results. However, they were not able to maintain this progress in the following period (2017-2018), as all of their frontier-shift indicators were lower than 1, even lower than in the 2015-2016 period. This shows that the frontier-shift efficiencies of the manufacturers fell seriously during this period. Due to the low technological efficiencies in the periods 2015-2016 and 2017-2018, the average technological efficiency over the total research period (2015-2018) did not reach a progressive score, except for Chang'an Auto (D3) and Daimler AG (D4). It can be observed in Figure 8 that the automobile manufacturers did not achieve technological progress during the period of 2015-2016. Only D20 (Volkswagen) obtained a frontier-shift index greater than 1, indicating that the development of technology and innovation in the global auto industry had not improved much and still faced many limitations. After the poor technological performance of 2015-2016, manufacturers opted to invest in technological innovation and achieved technological efficiency in the period 2016-2017, especially D1, D3, D5, D7, D10, D11, D15, and D17. However, because technology develops so quickly, manufacturers failed to maintain this progress and even declined severely in the next period. Figure 9, below, shows the frontier-shift scores of the automobile manufacturers in the period of 2017-2018. During this period, the manufacturers with low efficiencies (F < 0.6) were Dongfeng, Chang'an Auto, BAIC (manufacturers from China), and Suzuki (Japan). Manufacturers such as Renault, Mazda, Tata Motors, Ford, and Nissan also had low performance (F < 0.7). BMW, General Motors, Volkswagen, Hyundai, and Fiat Chrysler had better technological efficiency than the rest, with F > 0.8. Thus, it can be seen that, in this period, manufacturers from Asia, especially China, had a more serious decline in technological efficiency than European and American automobile manufacturers.
The simultaneous decline in the technological efficiency of all 20 automobile manufacturers shows a close correlation with the decline in global automobile production in the 2017-2018 period. In terms of technological efficiency, it can be seen that the automakers follow a common trend, increasing in the second phase and declining in the remaining two periods, showing that no manufacturer achieved stable technological efficiency or took the lead in the race for technology development and innovation. This indicates that the technology arena in the global automotive industry is highly competitive and holds great potential for all manufacturers. Malmquist Productivity Index (MPI) The MPI is a very valuable component in evaluating the performance of the global automobile manufacturers. It measures the change in total factor productivity of the DMUs over a given interval and is the product of the catch-up index (technical efficiency) and the frontier-shift (technological change). As shown in Table 6 and Figure 10, the average Malmquist index of the DMUs, being less than 1 (0.9632678), indicates a regression in the total productivity growth of the DMUs. The total performance of most DMUs increased in the period 2016-2017 and significantly decreased in the 2017-2018 period. In the period of 2015-2016, most of the DMUs performed inefficiently, with MPI values less than 1; only five automakers, namely General Motors (GM), Nissan, Suzuki, Toyota, and Volkswagen, achieved progress in total factor productivity. After the low efficiency of the 2015-2016 period, car producers improved their productivity and achieved a good performance, as can be seen from the MPI values of the DMUs, most of which were greater than 1 in the next period, 2016-2017. Only D19 (Toyota) did not achieve an efficient performance during this period, with an MPI of 0.7989. However, the manufacturers could not maintain this progressive performance in the next period, 2017-2018. The productivity scores of the automakers declined very badly, with an average value of only 0.762. The lowest index of 0.2536 belongs to D3 (Chang'an Auto), meaning that it was the least-efficient producer in this period. The only two automakers that achieved a good performance in this period were D8 (General Motors) and D19 (Toyota), with MPIs equal to 1.0383 and 1.0416, respectively. Although the DMUs showed great improvement and performance in the period 2016-2017, their performance in the other two periods was poor (especially in the 2017-2018 period), resulting in 14 out of 20 DMUs having a low-efficiency performance over the research period of 2015-2018 (total average MPI less than 1). Among the 20 automotive manufacturers, General Motors is the automaker with the most stable performance across all stages (all MPI values greater than 1). Despite its inefficient performance in the 2017-2018 period, Volkswagen is still the best-performing automaker over the total research period, with an average MPI value of 1.037. Volkswagen is followed by Honda, BAIC, General Motors, Suzuki, and Chang'an Auto. In contrast, Mitsubishi and Tata Motors were the worst-performing automakers, with the lowest average MPI values. Figure 11 presents the total factor productivity change of the Asian automakers over the research period, 2015-2018. In the figure, there is a big difference in performance between the Asian manufacturers. Chinese manufacturers show the biggest fluctuations in performance.
Among the four Chinese automakers, namely D1 (BAIC), D3 (Chang'an Auto), D5 (Dongfeng), and D16 (SAIC), SAIC was more stable than the others in terms of performance; the remaining automakers showed big fluctuations, with performances that increased rapidly but fell drastically afterward. It can be recognized that Chinese car manufacturers have still not improved the stability of their production efficiency and are still struggling to stabilize their production performance. As gleaned from Figure 12, there is no significant difference in total factor productivity between the European and American automobile manufacturers. Thus, it can be seen that the performance of European and American manufacturers does not fluctuate as much as that of the Asian manufacturers. During the first phase (2015-2016), only Volkswagen (D20) and General Motors (D8) achieved progress in total factor productivity, while Ford (D7) was the least-efficient producer during this period. Like the Asian companies, the European-American automobile manufacturers showed increasing growth in 2016-2017. All eight European-American manufacturers achieved progress in total factor productivity in this period. Unfortunately, only General Motors was able to maintain this good performance into the next period, 2017-2018. It is also noticeable that the most unstable manufacturer in terms of performance is Renault (D15), because the automaker progressed in the second phase but fell sharply in the final stage. In contrast to Renault, General Motors is the most stable producer in terms of performance; even when other manufacturers declined, it not only maintained but also increased its performance. In general, Honda is the best-performing Asian automaker, while Volkswagen and General Motors are the best performers from Europe and America. Asian automobile manufacturers, especially the Chinese automakers, made breakthrough performances in terms of productivity during the 2016-2017 period. However, since their productivity efficiencies are not stable, they lost the race in the next period, 2017-2018. Thus, Asian manufacturers, China in particular, need to stabilize their production performance in order to compete with European manufacturers. Based on the technical as well as the technological growth trends of the 20 automobile manufacturers, it can be seen that they have still not balanced technical efficiency and technological efficiency. Typically, in the period of 2016-2017, when they improved technological change, their technical efficiency regressed. Conversely, the manufacturers were able to improve their technical efficiency but faced challenges in technological innovation during the period of 2017-2018. Therefore, improving production performance is about striking a balance between technical efficiency and technological change; these two aspects should be developed simultaneously. Conclusions This study illustrates the results of technical efficiency, technological progress, and total factor productivity of the automakers over the four-year period. Findings show that, after a period of slight decline, most manufacturers gradually improved their technical efficiency, which leads to technical progress. Suzuki turned out to be the best and most stable manufacturer in terms of technical efficiency, while Chang'an Auto needs more improvement in this aspect. In terms of technological progress, most manufacturers had an unstable performance, especially the Chinese automakers.
This regression in performance was also noticed in the study conducted by Imran et al. [27], in which the export performance of the Chinese automotive sector is the main scope. The exportation of automobile units affects the revenue and income of the company, which this paper uses as variables. Although automakers had a breakthrough in innovation and achievement, as they were able to perform efficiently during the 2016-2017 period, this was not maintained, and they even had a big regression in technological efficiency in the next period. This affirms that the technology arena in the global automotive industry is highly competitive and shows great potential for all automakers, since none of them is taking the lead. However, Fathali et al. [28] discussed that there is no strong empirical and theoretical evidence that competition is a major driving force for improvement in technology and innovation. Therefore, whatever kind of competitive environment the automotive industry is in, there will be continuing technological and innovative development. Since total factor productivity is the product of technical efficiency and technological efficiency, manufacturers must balance the development of both technical and technological efficiency to be able to achieve a progressive outcome. Based on the results, only General Motors achieved a Malmquist index greater than 1 in all year periods. This implies that this manufacturer is the best and most stable automaker among the others. Mitsubishi and Tata Motors are the automakers with the lowest performance on average. The results also show that there is less variation in the total factor productivity of European-American producers than of Asian manufacturers. The year 2017-2018 is the period when automobile output dropped suddenly after a phase of growth. It is also the period when the technological efficiency index of all 20 automakers significantly declined compared to the previous period, confirming the influence of technological changes on this fluctuation. Therefore, automobile manufacturers need to focus on handling their technology and innovation properly, to avoid the occurrence of a second automobile crisis. This is not in reference to any oil crisis, but to a possible Industry 4.0 crisis. This study applied the DEA Malmquist model to evaluate the performance of the world's top 20 automakers over the period of 2015-2018, based on input and output variables that are very important financial indicators. Based on the results, the performance of the world's top 20 automobile manufacturers in terms of technical efficiency, technological efficiency, and total factor productivity not only indicates the overall picture of the world automobile industry but also provides a comparison of manufacturers from different countries and continents. The study presented an overview of the Chinese, Japanese, Asian, and European-American automobile industries. Therefore, it can be a valuable reference for car managers, policy makers, or investors for automobile management, investment, and development decisions. Moreover, this study can be used as a guide for automobile manufacturers seeking to form strategic alliances with one another. Wang et al. [29] strongly recommend that manufacturers in the automobile industry form strategic alliances with one another, provided there is an extensive analysis of the companies' performances before forming one, for which DEA is a very effective tool.
In addition, the study contributes to the application of the data envelopment analysis method, specifically the Malmquist model, to organizational performance measurement, and it is a useful reference for studies of the global automotive industry as well as other fields. However, the study has several limitations that must be considered. First, the results depend on the values of the input and output variables, which are financial in nature. The results are therefore not directly applicable to other types of industry, even those related to automobiles, such as the petroleum or fuel industry, although the study can still serve as an additional reference, like the other studies cited in this paper. Future studies should modify the variables used and compare the results in order to provide more objective findings. It is also recommended to extend this kind of study by integrating further input and output factors, such as the total number of units produced; undesirable outputs, such as the total number of defective units recalled; and other non-financial variables. Eilert et al. [30] described how the volume of recalled automotive products can affect firms' sales, as recalls may hurt brand reliability and company reputation. Research and development expenditure can also be considered in future studies. Hashmi and Biesebroeck [31] pointed out that innovation by the industry leader tends to decline as the efficiency of lagging automotive firms deteriorates, and that a strong focus on efficiency leads to an increase in innovation. It would be interesting to examine the effect of R&D on the efficiency of automotive firms. Second, this study evaluates the performance of only the world's 20 largest automakers; involving other automakers is recommended in order to give a more general overview of industry performance. Finally, this study is based on a quantitative approach using the DEA Malmquist model, in which some external factors are not considered. Combining it with qualitative research would be a worthwhile direction for evaluating the performance of automobile manufacturers more fully.
9,335.2
2020-04-14T00:00:00.000
[ "Economics", "Business" ]
DECOMP: a PDB decomposition tool on the web. The protein databank (PDB) contains high quality structural data for computational structural biology investigations. We have earlier described a fast tool (the decomp_pdb tool) for identifying and marking missing atoms and residues in PDB files. The tool also automatically decomposes PDB entries into separate files describing ligands and polypeptide chains. Here, we describe a web interface named DECOMP for the tool. Our program correctly identifies multimonomer ligands, and the server also offers the preprocessed ligandprotein decomposition of the complete PDB for downloading (up to size: 5GB) Availability http://decomp.pitgroup.org Background: The Protein Data Bank [1] started to function as the depository of the crystallographic data, complementing journal publications: researchers solved the structure of a protein, wrote a paper on the result, and deposited the data of the solution in the publicly available PDB. The irregularities of the structure deposited (such as lacking atomic coordinates, broken chains, unidentified substructures) are mostly remarked in the cited publications and also in the remark-fields of the PDB file. The textual annotations in the scientific publication elsewhere or in the remark-fields in the very same PDB-file, however, make the automatic processing of the protein-structures very difficult. This statement may be a little bit confusing, since atoms, carrying the HET label are not supposed to be in the peptide-chain, so those structures that contains HET atoms other than the oxygen of the water would qualify for being a complex. Unfortunately, this is not the case. Metal ions, modified residues (in a surprisingly large number), and small molecules added in the crystallization all contain heteroatoms, and they are frequently not considered to be ligands. With our decomp_pdb program [2] protein-ligand complexes are identified reliably, and the ligands are deposited in separate files. Missing residues and atoms in chains are handled properly, that is, even if several atoms are missing from a chain our algorithm will still not recognize the parts as distinct chains. Placeholders are inserted into chains for missing residues/atoms (an example is given in Figure 2), denoting that the objects were not measured crystallographically, butaccording to the more reliable sequence information -they should be there. This way our algorithm "repairs'' faulty PDB's, or recognizes that flexible chain sequences are present. We should remark, that missing atoms are usually a sign of mobile loop or string in the protein-crystal, since flexible atoms will not give usable electron density maps. Consequently, mapping missing atoms this way may help to automatically identify flexible protein parts. Ligands are identified without using the HET-atom labels, properly handling modified residues and small artifacts, due to crystallization protocols. CONECT records of the ligand-atoms are computed automatically (these records for the ligands generally are not present in the PDB file). Methodology: Our program selects atoms from the PDB entry that are part of a protein or DNA chain. We do not use the chain-identifier for this purpose. However, we use SEQRES data and refined graph-theoretical algorithms described elsewhere [2]. It selects the water molecules, and removes them from the set of possible ligand atoms. Then metal and other small ions are selected, that will not be considered as ligands. 
A complete list of residue names that are treated as ions (and therefore not as ligands) is given in the file ion_list.txt. All remaining atoms form the set of ligand atoms. Within this set, we apply a graph-theoretic component-detection algorithm, so a ligand is defined as a connected component of the graph whose vertices are the ligand atoms and whose edges are the covalent bonds between them. Functionality: The DECOMP tool correctly identifies ligand molecules even when they are composed of more than one monomer. For example, when decomposing PDB entry 10GS with the "Export ligands" option, the file 10gs.pdb.out.lig.3 correctly contains the 3-monomer GLU-BCS-PG9 molecule (Figure 1). Utility: Provide a list of PDB codes in the appropriate box on the web server and check the desired options. The PDB codes should be separated by spaces or new-line characters. Press the "schedule job" button and the request will be inserted into a queue. Progress is monitored in the "Log window", where the result appears as a link to a tar.gz file. The result file contains one directory for each of the PDB entries listed. Each directory contains an error log with the ".pdb.error" extension and the decomposed PDB file with the ".pdb" extension; if the "Export ligands" or "Export ions" option was specified, a separate file is present for each ligand or ion. An error file is produced if a fatal error occurred while processing the PDB entry. The result files can be viewed with popular PDB viewer tools. A preprocessed, regularly updated compressed archive of the decomposition of the entire PDB can also be downloaded. Result files are stored for 3 days, and log files for 30 days, on the server.
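The connected-component definition of a ligand given above is easy to illustrate. The sketch below is not the decomp_pdb implementation: the single distance cutoff and the atom-tuple layout are hypothetical simplifications, whereas the real tool relies on the SEQRES-based chain detection and the refined graph-theoretical algorithms described in [2].

```python
# Minimal sketch: group candidate ligand atoms into connected components.
# Hypothetical input: (atom_name, residue_name, x, y, z) tuples that survived the
# chain / water / ion filtering steps described in the text.
import math
from collections import defaultdict

BOND_CUTOFF = 1.9  # Angstrom; crude stand-in for element-specific covalent radii

def ligand_components(atoms):
    """Each ligand is one connected component of the graph whose vertices are the
    remaining atoms and whose edges are the inferred covalent bonds."""
    n = len(atoms)
    adj = defaultdict(list)
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(atoms[i][2:5], atoms[j][2:5]) <= BOND_CUTOFF:
                adj[i].append(j)
                adj[j].append(i)
    seen, ligands = set(), []
    for start in range(n):
        if start in seen:
            continue
        stack, component = [start], []
        while stack:                       # depth-first search over bond edges
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            component.append(v)
            stack.extend(adj[v])
        ligands.append(component)
    return ligands
```

Grouped this way, a multi-monomer ligand such as the GLU-BCS-PG9 conjugate of entry 10GS comes out as a single component even though it spans three residue names.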
1,150.8
2009-07-27T00:00:00.000
[ "Biology", "Computer Science" ]
2 μm emission properties and nonresonant energy transfer of Er3+ and Ho3+ codoped silicate glasses 2.0 μm emission properties of Er3+/Ho3+ codoped silicate glasses were investigated pumped by 980 nm LD. Absorption spectra were determined. Intense mid-infrared emissions near 2 μm are observed. The spectral components of the 2 μm fluorescence band were analyzed and an equivalent model of four-level system was proposed to describe broadband 2 μm emission. Low OH− absorption coefficient (0.23 cm−1), high fluorescence lifetime (2.95 ms) and large emission cross section (5.61 × 10−21 cm2) corresponding to Ho3+: 5I7→5I8 transition were obtained from the prepared glass. Additionally, energy transfer efficiency from the Er3+: 4I13/2 to the Ho3+: 5I7 level can reach as high as 85.9% at 0.75 mol% Ho2O3 doping concentration. Energy transfer microscopic parameters (CDA) via the host-assisted spectral overlap function were also calculated to elucidate the observed 2 μm emissions in detail. Moreover, the rate equation model between Er3+ and Ho3+ ions was developed to elucidate 2 μm fluorescence behaviors with the change of Ho3+ concentration. All results reveal that Er3+/Ho3+ codoped silicate glass is a promising material for improving the Ho3+ 2.0 μm fiber laser performance. Over the past several years, numerous efforts have been devoted to obtain efficient and powerful mid-infrared lasers operating in the eye-safe 2 μ m wavelength region. This is due to their wide applications such as coherent laser radar systems, medical surgery, laser imaging, remote chemical sensing and pump sources for mid-infrared lasers as well as optical communication systems [1][2][3] . Usually, Tm 3+ -doped, Ho 3+ -doped, and Raman fiber lasers are the most widespread way of getting 2-micron laser output. In 2010, three all-fiber Ho 3+ -doped lasers emitting in the range of 2050~2100 nm were fabricated. The lasers were pumped by an Yb 3+ doped fiber laser at 1147 nm with a power up to 35 W 4 . For all the lasers tested, the output power was found to be as high as 10 W, the slope efficiency being 30%. In 2015, lasing at 2.077 μ m is also obtained from a 27 cm long Ho 3+ doped fluorotellurite microstructured fiber. The maximum unsaturated power is about 161 mW and the corresponding slope efficiency is up to 67.4% 5 . Using ultra short (1.6 cm) as-drawn highly Tm 3+ doped barium gallo-germanate (BGG) single mode (SM) fiber, a single-frequency fiber laser at 1.95 μ m has been demonstrated with a maximum output power of 35 mW when in-band pumped by a home-made 1568 nm fiber laser in 2015 6 . In addition, in 2010, a multiple-watt Tm 3+ /Ho 3+ codoped aluminosilicate glass fiber laser operating in narrowband (< 0.5 nm) and tuned across a range exceeding 280 nm was also presented 7 . However, Raman fiber laser requires high power pump sources in the spectral range, which is a shortcoming in the view of practical applications 4 . The Tm 3+ doped sources are typically limited to efficient operation at < 2.05 μ m 8 , although Tm 3+ doped fiber lasers with high output power and slope efficiency have been demonstrated in silica 9 , silicate 10 and germanate fibers 11 . In such case, the transition ( 5 I 7 → 5 I 8 ) of Ho 3+ ions produces radiation in the range of 2.05 μ m to 2.2 μ m 12 , which could totally match the applications which require good atmospheric propagation. 
Moreover, atmospheric transmission spectra provided by ModTran 8 show the advantage of operating at wavelength beyond 2.1 μ m in comparison to the windows accessible thulium sources. Therefore, it is expected that Ho 3+ activated glasses are promising candidates for 2 μ m fiber laser. But the lack of efficient absorption band at the 980 nm wavelength suggests that Ho 3+ ions cannot be pumped by high-power and commercial 980 nm laser diodes (LDs). Fortunately, Yb 3+ or Er 3+ ions can be codoped to improve the absorption band of Ho 3+ ions at 980 nm. In particular, Ho 3+ doped glasses sensitized by Yb 3+ are recognized as efficient systems for obtaining strong luminescence in both the infrared and visible range of the spectrum 13,14 . This is due to the large absorption and emission cross-section, relatively long lifetime, and simply energy level scheme of Yb 3+ . Moreover, Yb 3+ can be efficiently pumped by a laser diode (LD) near 980 nm which is one of the most popular and convenient commercial pump sources. So far, Ho 3+ /Yb 3+ codoped glasses [13][14][15] have been investigated by researchers. But, compared with Yb 3+ ions, Er 3+ : 4 I 13/2 level can match better with Ho 3+ : 5 I 7 level, which is more beneficial for 2 μ m emissions 16 . Hence, it can be expected that 2 μ m fluorescence can be obtained from the Er 3+ /Ho 3+ codoped sample pumped by 980 nm excitation and there is a rare investigation focused on the 2.0 μ m emission obtained from the Er 3+ /Ho 3+ codoped pumped by 980 nm excitation. In order to get powerful mid-infrared emissions from Ho 3+ , the host glass is another factor to be considered as important as the sensitizer. Multi-component silicate glass is a promising material in realizing 2 μ m lasers. Compared to silica glass, it has a less-defined glass network, which can provide a higher solubility of rare earth ions 17 . In some cases, single-frequency laser operation for example, a much shorter fiber is required to achieve high gain ability 18 . Here silicate glass is a more appropriate material than silica glass. Though the larger multiphonon relaxation rate induced by higher phonon energy in silicate glass (~1050 cm −1 ) compared with other multi-component glasses such as fluoride and heavy metal glasses (e.g., germanate, telluride, bismuth glasses), lower quantum efficiency of ~2 μ m luminescence, it turns out that slope efficiency in the silicate fiber lasers can be much higher than that in other glass fibers lasers 10,19,20 . In addition, it should be noted that in comparison with fluoride and heavy metal glasses, the main glass network of the silicate fiber is SiO 2 , which has strong mechanical strength, high damage threshold and better compatibility with conventional passive silica fibers. To the best of our knowledge, few reports on 2 μ m fluorescence properties in Er 3+ /Ho 3+ codoped silicate glass have been carried out, and it is focused mainly on the spectroscopic properties of lead silicate excited by 800 and 1550 nm LD 16 . But in this work, not only 2 μ m spectroscopic properties of Ho 3+ are investigated in Er 3+ /Ho 3+ codoped silicate glasses pumped by 980 LD. But also the energy transfer mechanism between Ho 3+ and Er 3+ was analyzed based on the built phonon-assisted energy transfer analysis, which is helpful to optimize and ensure the technological applications of the codoped materials. Moreover, the rate equation model between Er 3+ and Ho 3+ ions was also developed to quantitative elucidate 2 μ m fluorescence behaviors. 
This work reveals that Er 3+ /Ho 3+ codoped silicate glass is a promising material for improving the Ho 3+ 2.0 μ m fiber laser performance and may provide useful guidance for the design of other mid-infrared laser materials. The densities (3.76 g/cm 3 ) were tested by Archimedes principle using distilled water as an immersion liquid with error limit of ± 0.05%. The refractive index of the host glass was measured by the prism minimum deviation method at three wavelengths, 633, 1311 and 1539 nm, and they are 1.6111, 1.6051, and 1.5995, respectively. The resolution of the instrument is ± 0.5 × 10 −4 . The standard deviation in refractive index at different points of the same glass is around ± 1 × 10 −4 . The refractive index dispersion curve was calculated by Cauchy's formula n(λ ) = a + b/λ 2 + c/λ 4 , where a, b and c are found to be 1.5806, 5.0124 × 10 4 and − 1.5269 × 10 10 respectively. Absorption spectra were determined by means of a Perkin Elmer Lambda 900UV-VIS-NIR spectrophotometer in the range of 300~2200 nm with the resolution of 1 nm. Photoluminescence spectra in the ranges of 1750~2300 nm were measured with steady state spectrometer (FLSP 980) (Edingburg Co., England) and detected with a liquid-nitrogen-cooled PbS detector using an 980 nm laser diode (LD) as an excitation source. The fluorescence lifetimes of the 2 μ m (Ho 3+ : 5 I 7 state), and 1.53 μ m (Er 3+ : 4 I 13/2 state) were measured with light pulse of the 980 nm LD and an HP546800B 100-MHz oscilloscope. In addition, the mid-infrared transmission spectra was obtained by using a Thermo Nicolet (Nexus FT-IR Spectrometer) spectrophotometer in the range of 2.6~3.6 μ m with resolution of 4 cm −1 . The same experimental conditions for different samples were maintained so as to get comparable results. All the measurements were performed at ambient temperature. Results Absorption spectra and infrared transmittance spectrum. The room temperature absorption spectra of Er 3+ , Ho 3+ singly doped and Er 3+ /Ho 3+ codoped silicate glasses were obtained within the wavelength region of 300~2200 nm as presented in Fig. 1. The absorption bands at wavelength shorter than 300 nm are not observed due to the intrinsic band-gap absorption of host glass. The absorption spectra are characterized by Er 3+ absorption bands from the 4 I 15/2 level to different excited levels of 4 I 13/2 , 4 I 11/2 , 4 I 9/2 , 4 F 9/2 , 4 S 3/2 , and 4 F 7/2 , along with absorption transitions of Ho 3+ from the ground state ( 5 I 8 ) to higher levels of 5 I 7 , 5 I 6 , 5 F 5 and 5 F 4 + 5 S 2 , respectively. No obvious divergences can be found in the shape and peak positions of the absorption bands between singly doped and codoped samples, which revealed that both Ho 3+ and Er 3+ ions are homogeneously imbedded into the glassy network without apparent clusters in the prepared silicate glasses. Besides, for the absorption spectrum of Ho 3+ singly doped sample, it is noted that few absorption bands can match well with readily available laser diodes, such as 808 and 980 nm. Fortunately, Er 3+ /Ho 3+ codoped sample displays an obvious absorption band around 980 nm owing to the absorption transition of Er 3+ : 4 I 15/2 → 4 I 11/2 . Therefore, the prepared Er 3+ /Ho 3+ codoped silicate glass can be excited by commercially 980 nm LD. Scientific RepoRts | 6:37873 | DOI: 10.1038/srep37873 The inset in Fig. 1 shows the infrared transmittance spectrum of SEH0.75 sample at 1.5 mm thick. 
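Before the transmittance data are discussed, the quoted Cauchy coefficients can be checked against the three measured indices. Wavelengths are assumed to be in nanometres, which is consistent with the coefficients and reproduces the measured values to within about 0.0005.

```python
# Consistency check of the Cauchy dispersion n(lambda) = a + b/lambda^2 + c/lambda^4,
# with the coefficients quoted in the text and lambda assumed to be in nm.
a, b, c = 1.5806, 5.0124e4, -1.5269e10

def n(lam_nm):
    return a + b / lam_nm**2 + c / lam_nm**4

for lam, measured in [(633, 1.6111), (1311, 1.6051), (1539, 1.5995)]:
    print(f"{lam} nm: Cauchy {n(lam):.4f}, measured {measured}")
# Cauchy values: 1.6106, 1.6046, 1.5990
```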
The transmittance reaches as high as 91% (the 9% loss may be attributed to the Fresnel reflection dispersion and absorption of the glass). It is noted that the absorption band at 3 μ m is apparent, which can be ascribed to the vibration of hydroxyl groups. Here, the free OH − group, whose fundamental vibration ranges between 2500 and 3600 cm −1 (2.7~4 μ m), is one of the dominant quenching centers in Ho 3+ doped glass. It is because the energy gap of the 5 I 7 → 5 I 8 (~5100 cm −1 ) transition is corresponding to the energy of the first overtone (5000~7200 cm −1 , 1.35~2 μ m) of the fundamental stretching vibration of the OH − groups. So a Ho 3+ ion is coupled to free OH − groups, non-radiative relaxation of one OH − vibration quanta. Therefore, the contents of OH groups have an influence on mid-infrared fluorescence since residual hydroxyl groups in glasses can act as the fluorescence-quenching center. The absorption coefficient α OH (cm −1 ) in the glass network can be evaluated with the following equation 21 : where l is the thickness of the sample, T, T o are the maximum transmittance and the transmittance around 3 μ m, respectively. In addition, the OH − concentration (N OH − ) in the glass network can also be evaluated with the following equation 22 : The value ε is the molar absorptivity corresponding to OH − in silicate glasses (49.1 × 10 3 cm 2 /mol) 22 and N A is the Avogadro constant (6.02 × 10 23 /mol). The absorption coefficient α OH (cm −1 ) and OH − concentration (N OH − ) of the SEH0.75 sample are 0.23 cm −1 and 0.28 × 10 19 cm −3 , respectively, which are significantly lower than the reported values of tellurite glass 23 and comparable to these of germanate-tellurite 21 and germanate glass 24 . Thus, it is expected that better spectroscopic properties will be obtained. In addition, a lower OH − content could be obtained under a controlled atmosphere and a melting procedure in the future studied. Analysis of fluorescence spectra at 2 μm. Figure 2 presents fluorescence spectra of Er 3+ , Ho 3+ singly doped and Er 3+ /Ho 3+ codoped silicate glasses in the region of 1750~2300 nm pumped at 980 nm. All the samples were measured under the same conditions. No emission peaks can be observed for Er 3+ singly doped sample. Due to Ho 3+ ions do not absorb 980 nm photons, 2.0 μ m emission peaks of Ho 3+ single doped glass can't also be obtained although presence of Ho 3+ 5 I 7 → 5 I 8 transition. However, obvious 2 μ m emission peaks can be found in the Er 3+ /Ho 3+ codoped system, which is due to the presence of an energy transfer process from Er 3+ to Ho 3+ ions (this energy transfer phenomenon will be described in a later section). Moreover, with increasing Ho 3+ concentration, the fluorescent intensity increases firstly and then decreases monotonically as presented in the inset of Fig. 2. The optimal Ho 3+ concentration is located at 0.5 mol% (0.93 × 10 20 ions/cm 3 ) and the decreased 2 μ m emissions are owing to the concentration quenching. From Fig. 3, it is worth mentioning that 2 μ m fluorescence band shows broad non-Gaussian peak shape and wider emission band, which may have potential application in mid-infrared fiber amplifier and broad band tunable lasers. It is therefore noteworthy to understand the factors, which control the intensity and the width of the band at almost 2 μ m. 
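The two OH- relations cited above (references 21 and 22) lose their formulas in this extraction; the standard forms are assumed here to be alpha_OH = (1/l) ln(T/T0) and N_OH = N_A alpha_OH / epsilon. With the 1.5 mm thickness and a transmittance near 3 um of roughly 88% (read off the inset, not quoted explicitly), they reproduce the reported 0.23 cm^-1 and 0.28 x 10^19 cm^-3.

```python
# OH- content check under the assumed standard forms of the two equations above.
import math

l_cm  = 0.15                 # 1.5 mm sample thickness (from the text)
T_max = 0.91                 # maximum transmittance (from the text)
T_3um = 0.879                # assumed ~88% transmittance near 3 um (not quoted explicitly)
alpha_OH = math.log(T_max / T_3um) / l_cm
print(f"alpha_OH ~ {alpha_OH:.2f} cm^-1")        # ~0.23 cm^-1, as reported

N_A, eps = 6.02e23, 49.1e3                       # 1/mol and cm^2/mol (from the text)
print(f"N_OH ~ {N_A * 0.23 / eps:.2e} cm^-3")    # ~2.8e18 = 0.28e19 cm^-3, as reported
```

The low OH- content is one of the factors controlling the intensity of the 2 um band; its width and shape are governed by the Stark structure discussed next.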
For this reason, it is important to know that this band results from the electronic transition between the 7 and 8 Stark sublevels the first excited state 5 I 7 and the ground state 5 I 8 respectively. To verify the spectral components of 2 μ m emission peak, the Gaussian deconvolution procedure is carried out and the result is displayed in Fig. 3. Since the gap between two Stark sublevels is usually much lower than that existing between two neighboring J levels of a free rare earth ion, transitions between the different Stark levels are close in energy and it is not possible to clearly resolve them separately. Therefore, transition lines overlap and appear to form a single large band as observed in Fig. 3. In reality, the gap between two Stark sublevels depend on the electric crystal-field strength generated by the atoms of the surrounding medium and it is a sensitive indicator of the symmetry around Ho 3+ ions in the host matrix. Indeed, the number of Stark levels increases when decreasing the site symmetry of Ho ions. In view of that, we found that a superposition of four bands with Gaussian profile is able to provide a good fit to the overall spectral line shape, as shown by the curves in Fig. 3. Here, it can be observed that the wavelengths of four Gaussian profiles are centered at 1915, 1965, 2026 and 2078 nm, respectively. To understand more comprehensively the four Stark emission bands, an equivalent model of four-level system (exclude the existence of non-equivalent optical centers) for describing the 2 μ m fluorescence band is also revealed in the inset of Fig. 3. The ground state 5 I 8 is composed of two Stark level of lower a and upper b. In the same way, the upper 5 I 7 level contains two Stark level of lower a" and upper b". Therefore, the four Stark emission bands centered at 1915, 1965, 2026 and 2078 nm are the peak 1, 2, 3 and 4 corresponding to the b"→ a, a"→ a, b"→ b and a"→ b transitions, respectively. From the observation of emission peak, it can be seen that the relative intensity of 2026 nm emission peak is the highest that the other peak in the glass indicating here is slight energy re-absorption at the considered dopant ions concentration. In addition, it can be calculated that the total Stark splitting energy of 5 I 7 state is about 124 cm −1 , which is lower than that of 5 I 8 level (277 cm −1 ) in Ho 3+ activated silicate glass. Moreover, the Gaussian peak positions have minor shifts in comparison to that of germanate glass 25 , which indicate that the extent of the Stark splitting is closely dependent on the glass compositions. Stimulated emission cross-section and 2 μm lifetime. Important parameters that used to estimate the emission ability of luminescent center for 2.0 μ m emission transition ( 5 I 7 → 5 I 8 ) include mainly emission cross section (σ em ). The maximum fluorescence peak intensity for the prepared silicate glass is observed around 0.5 mol% Ho 3+ (SEH0.5 sample). Hence, the optimal 2.0 μ m fluorescence spectra is selected to calculate emission cross sections. The emission cross section were subsequently calculated by the following Fuchtbauer-Ladenburg equation 26 ∫ σ λ λ π λ λ λ λ λ = × A cn According to Eq. (3), the emission cross section is determined as depicted in Fig. 4(a). 
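Two small checks make the Stark analysis concrete. First, the quoted splittings of the 5I7 and 5I8 manifolds follow directly from the four Gaussian peak centres; second, the deconvolution of Fig. 3 is a routine multi-Gaussian least-squares fit, sketched here on a synthetic placeholder spectrum rather than the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# (1) Stark splittings from the four peak centres (nm converted to cm^-1).
peaks_nm = [1915, 1965, 2026, 2078]            # b"->a, a"->a, b"->b, a"->b
wn = [1e7 / p for p in peaks_nm]
print(f"5I7 splitting ~ {wn[2] - wn[3]:.0f} cm^-1")   # ~124 cm^-1, as quoted
print(f"5I8 splitting ~ {wn[1] - wn[3]:.0f} cm^-1")   # ~277 cm^-1, as quoted

# (2) Skeleton of the four-Gaussian deconvolution (placeholder spectrum, not the data).
def four_gaussians(lam, *p):                   # p = (A1, c1, w1, ..., A4, c4, w4)
    return sum(p[3*i] * np.exp(-((lam - p[3*i+1]) / p[3*i+2])**2) for i in range(4))

lam = np.linspace(1750, 2300, 551)             # nm
spectrum = four_gaussians(lam, 0.4, 1915, 30, 0.6, 1965, 35,
                          1.0, 2026, 40, 0.7, 2078, 35)      # synthetic stand-in
p0 = [0.5, 1915, 30, 0.5, 1965, 30, 1.0, 2026, 30, 0.5, 2078, 30]
popt, _ = curve_fit(four_gaussians, lam, spectrum, p0=p0)
```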
It can be seen that the peak of emission cross sections at 2.04 μ m is 5.61 × 10 −21 cm 2 , which is higher than those of other silicate glass (3.54 × 10 −21 cm 2 ) 27 and germanate-tellurite glass (4.36 × 10 −21 cm 2 ) 15 while slightly lower than that of germanate glass (8.00 × 10 −21 cm 2 ) 28 . Higher emission cross section is extremely useful for better laser actions 29 . Moreover, it is interesting that the measured 2.04 μ m lifetime (2.95 ms) of the prepared glass was depicted in Fig. 4(b), which is more appropriate to evaluate the emission properties of laser glass, is larger than that of the germanosilicate glass (1.44 ms) 30 and tellurite glasses (1.6 ms) 31 . Therefore, the Er 3+ /Ho 3+ codoped silicate glass, which possess large emission cross section and fluorescence lifetime, can be an excellent candidate in achieving intense 2.0 μ m emission. Energy transfer mechanism and nonresonant energy transfer analysis. To elucidate the observed fluorescent phenomenon, energy level diagram and energy transfer mechanism are proposed based on previous investigation and depicted in Fig. 5. Basing on discussions mentioned above, we can summarize that both ET1 and ET2 processes can generate 2 μ m fluorescence. However, from Fig. 6(a), it is found that the 980 nm emission intensity has no substantial change with increasing Ho 3+ concentration while the 1.53 μ m emission intensity decreases quickly as displayed in Fig. 6(b). It is noted that the 980 nm emission of non doped sample (S) was synthesized and make comparison with the samples (SEH0, SEH0.25, SEH0.5, and SEH0.75). It is found that the 980 nm emission intensity of sample S is much smaller than those of the samples (SEH0, SEH0.25, SEH0.5, and SEH0.75). Therefore, it can be confirmed that the excitation radiation almost have almost no effect on the resulting 980 nm spectra and can been ignored as presented in Fig. 6(a). Thus, it can be concluded that ET2 is much more efficient in comparison to ET1 process. Hence, enhanced 2 μ m emission can be mainly ascribed to ET2 process. In order to estimate the energy transfer (ET2) efficiency and rate from Er 3+ : 4 I 13/2 to Ho 3+ : 5 I 7 level, the ion lifetimes in Er 3+ : 4 I 13/2 level with and without Ho 3+ ions have been determined from Fig. 7. The lifetimes were determined by single exponential fitting procedure, as listed in the inset of Fig. 7 as well as the energy transfer efficiency (η). The energy transfer rate (W ET ) and energy transfer efficiency (η ) were evaluated by using the following equation [32][33][34] where τ , τ 0 are lifetimes of Er 3+ : 4 I 13/2 with codoping 0.75 mol% Ho 3+ and without Ho 3+ ions, respectively. The derived maximum energy transfer rate is found to be 1170.6 s −1 as well as the energy transfer efficiency (η) of 85.9%. The higher η, compared with fluorotellurite glasses (67.33%) 35 and fluoride glass (45%) 36 , is beneficial for the design of 2 μ m laser under readily available high power, compact diode laser pumping (980 nm LD). Besides, the fluorescence quantum efficiency has been estimated from the lifetime values by the following equation where τ exp is the measured fluorescence lifetime of the sample (SEH0.5), and τ R is the theoretical lifetime the sample (SEH0.5), which were estimated from the absorption spectrum and J-O intensity parameters and can be calculated by the formula provided in ref. 14. The measured 5 I 7 lifetime (2.95 s) for Ho 3+ is shorter than the calculated lifetime (3.74 s), which is due to non-radiative quenching. 
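The Füchtbauer-Ladenburg expression of Eq. (3) and the lifetime-based relations that follow it do not survive the text extraction above. The standard forms assumed here, which are consistent with every number quoted in this section (reading the two lifetimes in the preceding sentence as milliseconds), are:

```latex
% Assumed standard forms; notation follows the text, one common normalization chosen.
\begin{align}
  \sigma_{\mathrm{em}}(\lambda) &=
    \frac{A_{\mathrm{rad}}\,\lambda^{5} I(\lambda)}
         {8\pi c\, n^{2} \int \lambda I(\lambda)\,\mathrm{d}\lambda}
    && \text{(F\"uchtbauer--Ladenburg)} \\
  W_{\mathrm{ET}} = \frac{1}{\tau} - \frac{1}{\tau_{0}}, &\qquad
  \eta_{\mathrm{ET}} = 1 - \frac{\tau}{\tau_{0}}
    && \text{($^{4}I_{13/2}$ lifetime with / without Ho$^{3+}$)} \\
  \eta_{\mathrm{q}} = \frac{\tau_{\mathrm{exp}}}{\tau_{R}}
    &= \frac{2.95\ \mathrm{ms}}{3.74\ \mathrm{ms}} \approx 78.9\,\%
    && \text{(quantum efficiency of the $^{5}I_{7}$ level)}
\end{align}
```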
It can be found that the fluorescence quantum efficiency is high as 78.88%, which is comparable to that of fluoride glass (80.35%) 37 . Therefore, Er 3+ /Ho 3+ codoped silicate glass is a more promising material for improving the Ho 3+ 2.0 μ m fiber laser performance. The extent of energy transfer (ET2) from the Er 3+ : 4 I 13/2 to the Ho 3+ : 5 I 7 level is dependent on the spectral overlap of the donor's emission (Er 3+ ) with the acceptor's absorption (Ho 3+ ). Although, the higher energy transfer (ET2) efficiency (η = 85.9%) has been confirmed, the spectral overlap between the 1.53 μ m emissions of Er 3+ and 2 μ m absorptions of Ho 3+ is very poor, with an energy gap of ~1400 cm −1 , which suggests that the energy transfer process in the Er 3+ /Ho 3+ codoped silicate glass system may be assisted by host phonons. For such a nonresonant energy transfer, the Dexter model can be generalized to the nonresonant phonon assisted energy transfer case taking account of phonon energy involved (E ET ) as well as the phonon density. The energy transfer probability (P ET ) can be estimated by the phonon-modified spectral overlap integral, I(E ph ) as follows 38 : ET PH where E ph is the phonon energy of host, k B is Boltzmann constant, and T is absolute temperature. According to the 1.53 μ m emission cross section spectra of Er 3+ and 2 μ m absorption of Ho 3+ absorption cross section spectra, the normalized energy transfer probability has been calculated as a function of phonon energy in the range of 0 ~ 3000 cm −1 as presented in Fig. 8(a). It can be observed that the normalized energy transfer probability (P ET ) increases with an increase in phonon energy and it reaches a maximum for the phonons energy around 1200 cm −1 . Then, the energy transfer probability (P ET ) decreases and diminishes with further increase in the phonon energy. Thus, the energy difference between the energy levels of 4 I 13/2 (Er 3+ ) and 5 I 7 (Ho 3+ ) can been bridged by the host phonons for an efficient energy transfer. As maximum phonon energy of present silicate glass (SEH0.75glass) is ~952 cm −1 , as presented in Fig. 8(b), so about one or two phonon is required to bridge the energy gap between Er 3+ : 4 I 13/2 and Ho 3+ : 5 I 7 state. Assisted with the host phonon, the energy level mismatch (~1400 cm −1 ) can be covered. It can be expected that the high phonon energy hosts like silicate glasses, can promote the energy transfer from energy levels of 4 I 13/2 (Er 3+ ) to 5 I 7 (Ho 3+ ) with less number of phonons. But considering that higher phonon energy also lead to higher non-radiative relaxation of 5 I 7 level of Ho 3+ and less probability of 2 μ m emission. Hence, the silicate material with moderate phonon energy is important for highly efficient 2 μ m emissions in Er 3+ /Ho 3+ codoped samples. To understand more intuitively and clearly the phonon assisted energy transfer mechanism from Er 3+ to Ho 3+ , emission cross sections of Er 3+ : 4 I 11/2 → 4 I 15/2 transition with the participation of m phonons (m = 0, 1 and 2) and absorption cross sections of Ho 3+ : 5 I 8 → 5 I 6 transition in prepared sample are depicted in Fig. 9(a), meanwhile, emission cross sections of Er 3+ : 4 I 13/2 → 4 I 15/2 transition with the participation of m phonons (m = 0, 1 and 2) and absorption cross sections of Ho 3+ : 5 I 8 → 5 I 7 transition in prepared samples are depicted in Fig. 9(b). 
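Before the multiphonon-dressed cross sections are written down, a quick order-of-magnitude check of the bridging argument: with the stated 1400 cm^-1 mismatch and 952 cm^-1 maximum phonon energy, one to two phonons suffice, and the thermal phonon occupation at room temperature is small, so phonon emission dominates.

```python
# Phonon-bridging estimate for the nonresonant Er(4I13/2) -> Ho(5I7) transfer.
import math

gap_cm     = 1400.0      # Er/Ho energy mismatch quoted in the text, cm^-1
phonon_cm  = 952.0       # maximum phonon energy of the silicate host, cm^-1
kT_300K_cm = 208.5       # k_B * 300 K expressed in cm^-1

print(math.ceil(gap_cm / phonon_cm))                    # -> 2 (one to two phonons)
n_bar = 1.0 / (math.exp(phonon_cm / kT_300K_cm) - 1.0)  # Bose-Einstein occupation
print(f"n_bar(300 K) ~ {n_bar:.3f}")                    # ~0.011
```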
The emission cross section with the participation of m phonons can be determined by following equation 39 : larger spectral overlap among them is determined after the matrix absorbs one or two phonons, as indicated in Fig. 9(a) and (b). In this case, the energy transfer microscopic parameter (C DA ) from Er 3+ to Ho 3+ can be estimated using Forster's spectral overlap model given by 32,40,41 . where c is the light speed in vacuum, n is the refractive index. σ abs is the absorption cross section of Ho 3+ : 5 I 8 → 5 I 6 and 5 I 8 → 5 I 7 transitions, respectively. Based on Eq. 9 and Fig. 9(a) and (b), the energy transfer microscopic parameter of ET2 process is as high as 5.52 × 10 −40 cm 6 ·s −1 , which is significantly larger than that of ET1 process (0.66 × 10 −40 cm 6 ·s −1 ) in the prepared sample. It is suggested that ET2 process is more efficient than ET1 process, which is in accordance with the results of Fig. 6(a) and (b). Furthermore, it further illustrates the enhanced 2 μ m emission can be mainly ascribed to ET2 process in the Er 3+ /Ho 3+ codoped silicate glass. Finally, based on the analysis of the energy transfer mechanisms, the energy transfer microscopic parameters (C DA ) of ET2 process in silicate glass (5.52 × 10 40 cm 6 /s) is much higher than that of germanosilicate glass (4.16 × 10 40 cm 6 /s) 30 , suggesting that more efficient energy transfer between them can be achieved in silicate glass. The result reveals that Er 3+ /Ho 3+ codoped silicate glass possesses suitable phonon energy (~952 cm −1 ) is a more promising material for improving the Ho 3+ 2.0 μ m fiber laser performance. Rate equation analysis. As previously stated, the fluorescent intensity become stronger with increasing Ho 3+ concentration and then become weaker with a further enhancement of Ho 3+ ions. Therefore, to better know the energy transfer process between Er 3+ and Ho 3+ and elucidate the 2 μ m fluorescence behaviors, the rate equation model between Er 3+ and Ho 3+ ions were developed according to energy level diagram of Fig. 5. Here, only ET2 process is considered because of much lower energy transfer probability of ET1, Furthermore, the excited state absorption (ESA), back transfer and energy transfer up-conversion (ETU) processes are neglected 30 . In addition, the quenching effect of OH − is neglected. Basing on Er 3+ : 4 I 15/2 , 4 I 13/2 , 4 I 11/2 levels and Ho 3+ : 5 I 8 levels, the rate equations can be built as follows: where R is the pumping rate. A ij is the spontaneous transition from levels i and j. C ET is the energy transfer rate from Er 3+ : 4 I 13/2 to Ho 3+ : 5 I 7 level. Moreover, the N 1 , N 2 , N 3 , N Er and N Ho are the populations at the Er 3+ : 4 I 15/2 , 4 I 13/2 , 4 I 11/2 , total Er 3+ and Ho 3+ : 5 I 8 levels, respectively. When pumping source is switched off, the following expression can be obtained by solving Es. (12) and as follow: 3 3 31 32 The following equation can be obtained by combining with Eq. (11) and (14), N 2 (t) can be expressed as where N 3 (0) and N 2 (0) is the excited population numbers in Er 3+ : 4 I 11/2 and 4 I 13/2 level, respectively, after the pump source turns off (t = 0). By solving Eq. (11) in the steady state condition (dN 2 (t)/dt = 0), the ratio of N 3 (0) and N 2 (0) can be derived as Basing on Eq. (15) and (16), then the fitting functions of Er 3+ : 4 I 13/2 level can be determined as Finally, The decay data of Er 3+ : 4 I 13/2 levels are best fitting curves via Eq. (17) as showed in Fig. 10(a). 
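The decay-curve analysis behind Fig. 10(a) can be sketched numerically. After the pump is switched off, the two Er3+ levels form a simple linear cascade and the 1.53 um signal is the bi-exponential fitted with Eq. (17). Only the structure of the equations below follows the text; every rate, population, and concentration value is invented for illustration.

```python
# Post-pump-off cascade implied by the rate-equation section (structure from the text;
# every number below is invented for illustration, not a fitted value from the paper).
#   dN3/dt = -(A31 + A32) * N3                      Er3+: 4I11/2
#   dN2/dt =  A32 * N3 - (A21 + C_ET * N_Ho) * N2   Er3+: 4I13/2, only ET2 kept
import numpy as np
from scipy.integrate import odeint

A31, A32, A21 = 40.0, 160.0, 190.0        # hypothetical spontaneous rates, s^-1
C_ET, N_Ho    = 4.2e-18, 1.4e20           # cm^3/s (order of the fitted C_ET) and Ho3+ density

def cascade(N, t, W):
    N3, N2 = N
    return [-(A31 + A32) * N3, A32 * N3 - (A21 + W) * N2]

t = np.linspace(0.0, 10e-3, 1000)                       # 10 ms window
W_ET = C_ET * N_Ho                                      # transfer rate seen by 4I13/2
decay_with_Ho    = odeint(cascade, [0.3, 1.0], t, args=(W_ET,))[:, 1]
decay_without_Ho = odeint(cascade, [0.3, 1.0], t, args=(0.0,))[:, 1]
# These two curves play the role of the decays in Figs. 7 and 10(a) fitted via Eq. (17).

tau, tau0 = 1.0 / (A21 + W_ET), 1.0 / A21               # effective 4I13/2 lifetimes
print(f"illustrative eta_ET = 1 - tau/tau0 = {1 - tau / tau0:.2f}")
```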
C_ET can be obtained by fitting the decay curve of the 1.53 μm emission (Er3+: 4I13/2 → 4I15/2); the results are plotted in Fig. 10(b), which indicates that the energy transfer rate first increases and then decreases with increasing Ho3+ concentration. This tendency is in good agreement with the results of Fig. 2. Moreover, C_ET can be as high as 4.2 × 10−18 cm3/s. A higher C_ET is beneficial for population accumulation in the Ho3+: 5I7 level and for improving the corresponding 2 μm emission. Therefore, Ho3+-activated silicate glasses are expected to be promising candidates for 2 μm fiber lasers. Conclusions In brief, Er3+/Ho3+ codoped silicate glasses with a low OH− absorption coefficient (0.23 cm−1) were prepared and their absorption spectra were determined. Intense mid-infrared emission near 2 μm is observed, with an optimal Ho2O3 concentration of 0.5 mol%. The spectral components of the 2 μm fluorescence band were analyzed, and an equivalent four-level model was proposed to describe the 2 μm emission band. The prepared glass possesses a high emission cross section (5.61 × 10−21 cm2) and a long fluorescence lifetime (2.95 ms) for the Ho3+: 5I7 → 5I8 transition. Moreover, an energy transfer mechanism was proposed according to the energy level diagram of the Er3+ and Ho3+ ions, and the 980 nm and 1.53 μm fluorescence were measured to illustrate the energy transfer processes. In addition, the energy transfer rate W_ET (1170.6 s−1) and energy transfer efficiency η (85.9%) were quantitatively determined from decay analysis of the Er3+: 4I13/2 level. Such a high energy transfer efficiency was attributed to the good matching of the host phonon energy with the energy gap between the Er3+: 4I13/2 and Ho3+: 5I7 levels. The energy transfer microscopic parameters (C_DA) were calculated and quantitatively analyzed using the host-assisted spectral overlap function, further indicating that the enhanced 2 μm emission can be mainly ascribed to the ET2 process. Furthermore, a rate equation model was developed to elucidate the observed 2 μm fluorescence behavior as a function of Ho3+ concentration. The results demonstrate that Er3+/Ho3+ codoped silicate glass has potential application in 2 μm lasers and may provide useful guidance for the design of other mid-infrared laser materials.
7,578.2
2016-11-30T00:00:00.000
[ "Materials Science", "Physics" ]
On the least common multiple of random q-integers For every positive integer n and for every α ∈ [0, 1], let B(n, α) denote the probabilistic model in which a random set A ⊆ {1, . . . , n} is constructed by picking independently each element of {1, . . . , n} with probability α. Cilleruelo, Rué, Šarka, and Zumalacárregui proved an almost sure asymptotic formula for the logarithm of the least common multiple of the elements of A. Let q be an indeterminate and let [k]_q := 1 + q + q^2 + · · · + q^(k−1) ∈ Z[q] be the q-analog of the positive integer k. We determine the expected value and the variance of X := deg lcm([A]_q), where [A]_q := {[k]_q : k ∈ A}. Then we prove an almost sure asymptotic formula for X, which is a q-analog of the result of Cilleruelo et al. Introduction A nice consequence of the Prime Number Theorem is the asymptotic formula log lcm(1, 2, . . . , n) ∼ n, as n → +∞, where lcm denotes the least common multiple. Indeed, precise estimates for log lcm(1, . . . , n) are equivalent to the Prime Number Theorem with an error term.
Thus, a natural generalization is to study estimates for L f (n) := log lcm(f (1), . . . , f (n)), where f is a wellbehaved function, for instance, a polynomial with integer coefficients. (We ignore terms equal to 0 in the lcm and we set lcm ∅ := 1.) When f ∈ Z[x] is a linear polynomial, the product of linear polynomials, or an irreducible quadratic polynomial, asymptotic formulas for L f (n) were proved by Bateman et al. [3], Hong et al. [10], and Cilleruelo [6], respectively. In particular, for f (x) = x 2 + 1, Rué et al. [15] determined a precise error term for the asymptotic formula. When f is an irreducible polynomial of degree d ≥ 3, Cilleruelo [6] conjectured that L f (n) ∼ (d − 1) n log n, as n → +∞, but this is still an open problem. However, bounds for L f (n) were proved by Maynard and Rudnick [13], and Sah [16]. Moreover, Rudnick and Zehavi [14] studied the growth of L f (n) along a shifted family of polynomials. Another direction of research consists in considering the least common multiple of random sets of positive integers. For every positive integer n and every α ∈ [0, 1], let B(n, α) denote the probabilistic model in which a random set A ⊆ {1, . . . , n} is constructed by picking independently each element of {1, . . . , n} with probability α. Cilleruelo et al. [9] studied the least common multiple of the elements of A and proved the following result (see [1] for a more precise version, and [4,5,7,8,12,[17][18][19] for other results of a similar flavor). A be a random set in B(n, α). Then, as αn → +∞, we have Theorem 1.1 Let with probability 1 − o(1), where the factor involving α is meant to be equal to 1 for α = 1. Let q be an indeterminate. The q-analog of a positive integer k is defined by The q-analogs of many other mathematical objects (factorial, binomial coefficients, hypergeometric series, derivative, integral...) have been extensively studied, especially in Analysis and Combinatorics [2,11]. For every set S of positive integers, let [S] q := [k] q : k ∈ S . The aim of this paper is to study the least common multiple of the elements of [A] q for a random set A in B(n, α). Our main results are the following: Theorem 1.2 Let A be a random set in B(n, α) and put X := deg lcm [A] q . Then, for every integer n ≥ 2 and every α ∈ [0, 1], we have where Li 2 (z) := ∞ k=1 z k /k 2 is the dilogarithm and the factor involving α is meant to be equal to 1 when α = 1. In particular, as n → +∞, uniformly for α ∈ [0, 1]. Theorem 1.3 Let A be a random set in B(n, α) and put X := deg lcm [A] q . Then there exists a function v : (0, 1) → R + such that, as αn/ (log n) 3 (log log n) 2 → +∞, we have Moreover, the upper bound holds for every positive integer n and every α ∈ [0, 1]. As a consequence of Theorems 1.2 and 1.3, we obtain the following q-analog of Theorem 1.1. Theorem 1.4 Let A be a random set in B(n, α). Then, as αn → +∞, we have as n → +∞, and so no (nontrivial) asymptotic formula for deg lcm [A] q can hold with probability 1 − o(1). We conclude this section with some possible questions for further research on this topic. Alsmeyer, Kabluchko, and Marynych [1, Corollary 1.5] proved that, for fixed α ∈ [0, 1] and for a random set A in B(n, α), an appropriate normalization of the random variable log lcm(A) converges in distribution to a standard normal random variable, as n → +∞. In light of Theorems 1.2 and 1.3, it is then natural to ask whether the random variable converges in distribution to a normal random variable, or to some other random variable. 
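Lemma 4.1 below reduces X = deg lcm([A]_q) to a sum of Euler totients over the divisors d > 1 of the elements of A, because lcm([A]_q) is a product of cyclotomic polynomials and deg Φ_d = φ(d). The short script here is only a brute-force sanity check of that identity on one random draw from B(n, α); it plays no role in the proofs.

```python
# Brute-force check of Lemma 4.1:  deg lcm([A]_q) = sum of phi(d) over d > 1
# dividing some element of A.  Illustration only; not part of the paper.
import random
from functools import reduce
from sympy import symbols, lcm, Poly, totient, divisors

q = symbols('q')

def q_analog(k):                       # [k]_q = 1 + q + ... + q^(k-1)
    return sum(q**i for i in range(k))

def sample_A(n, alpha):                # one draw from the model B(n, alpha)
    return [k for k in range(1, n + 1) if random.random() < alpha]

def X_via_polynomials(A):
    if not A:
        return 0                       # lcm of the empty set is 1 by convention
    L = reduce(lcm, (q_analog(k) for k in A))
    return Poly(L, q).degree()

def X_via_totients(A):                 # the reduction given by Lemma 4.1
    D = {d for k in A for d in divisors(k) if d > 1}
    return sum(totient(d) for d in D)

random.seed(0)
A = sample_A(30, 0.4)
assert X_via_polynomials(A) == X_via_totients(A)
print(len(A), X_via_totients(A))
```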
Another problem could be considering polynomial values, similarly to the results done in the context of integers, and studying lcm [f (1) Notation We employ the Landau-Bachmann "Big Oh" and "little oh" notations O and o, as well as the associated Vinogradov symbol , with their usual meanings. Any dependence of the implied constants is explicitly stated or indicated with subscripts. For real random variables X and Y , depending on some parameters, we say that "X ∼ Y with probability 1 − o(1)", as the parameters tend to some limit, if for every ε > 0 we have P |X − Y | > ε|Y | = o ε (1), as the parameters tend to the limit. We let (a, b) and [a, b] denote the greatest common divisor and the least common multiple, respectively, of two integers a and b. As usual, we write ϕ(n), μ(n), τ (n), and σ (n), for the Euler totient function, the Möbius function, the number of divisors, and the sum of divisors, of a positive integer n, respectively. Preliminaries In this section we collect some preliminary results needed in later arguments. Let e := (e 1 , e 2 ) and e i := e i /e for i = 1, 2. Then we have as desired. Let us define for every x ≥ 1 and for all positive integers a 1 , a 2 . Lemma 3.3 We have for every x ≥ 2. Lemma 3.4 We have for every x ≥ 2, where C 1 (a 1 , a 2 ) := a 1 a 2 3 and the series is absolutely convergent. Proof From the identity ϕ(n)/n = d |n μ(d)/d, it follows that Let c i := (a i , d i ) and e i := d i /c i , for i = 1, 2. On the one hand, we have On the other hand, thanks to Lemma 3.2, we have which, in particular, implies that the series C 0 (a 1 , a 2 ) := is absolutely convergent. Therefore, we obtain Now (5) follows from (7) by partial summation and since C 1 (a 1 , a 2 ) = a 1 a 2 3 C 0 (a 1 , a 2 ). We end this section with an easy observation that will be useful later. The following lemma gives a formula for X in terms of I A and the Euler function. Lemma 4.1 We have Proof For every positive integer k, it holds where d (q) is the dth cyclotomic polynomials. Since, as it is well known, every cyclotomic polynomial is irreducible over Q, it follows that L is the product of the polynomials d (q) such that d > 1 and d | k for some k ∈ A. Finally, the equality deg d (q) = ϕ(d) and the definition of I A yield (8). Let β := 1 − α. The next lemma provides two expected values involving I A . Lemma 4.2 For all positive integers d, d 1 , d 2 , we have and Proof On the one hand, by the definition of I A , we have which is (9). On the other hand, by linearity of the expectation and by (9), we have where the last expected value can be computed as and second claim follows. We are ready to compute the expected value of X. Moreover, since n/d = j if and only if n/(j + 1) < d ≤ n/j, we get that where we used Lemma 3.3. Putting together (10) and (11), and noting that, by Remark 3.2, the addend of (11) corresponding to d = 1 is 1 − β n = O(αn), we get (2). The proof is complete. Now we consider the variance of X. Authors' contributions The author thanks the anonymous referee, whose careful reading and detailed suggestions led to a considerable improvement of the paper.
2,315.6
2021-02-18T00:00:00.000
[ "Mathematics" ]
$T_{cc}^{+}(3875)$ relevant $DD^*$ scattering from $N_f=2$ lattice QCD The $S$-wave $DD^*$ scattering in the isospin $I=0,1$ channels is studied in $N_f=2$ lattice QCD at $m_\pi\approx 350$ MeV. It is observed that the $DD^*$ interaction is repulsive in the $I=1$ channel when the $DD^*$ energy is near the $DD^*$ threshold. In contrast, the $DD^*$ interaction in the $I=0$ channel is definitely attractive in a wide range of the $DD^*$ energy. This is consistent with the isospin assignment $I=0$ for $T_{cc}^+(3875)$. By analyzing the components of the $DD^*$ correlation functions, it turns out that the quark diagram responsible for the different properties of $I=0,1$ $DD^*$ interactions can be understood as the charged $\rho$ meson exchange effect. This observation provides direct information on the internal dynamics of $T_{cc}^+(3875)$. Introduction Ever since the discovery of X(3872) in 2003 [1], there have been quite a lot near-D D and B B threshold structures observed in experiments and are generally named XYZ particles [2].In phenomenological studies, they are usually assigned to be conventional heavy quarkonia, D D (B B) molecules, or tetraquarks.Among XYZ states, Z c (3900) may be the most prominent candidate for a multiquark state since it has the minimal quark configuration ccu d and has been observed in different experiments [3,4].Recently, LHCb reported the first doubly-charmed narrow structure T + cc (3875) in the D 0 D 0 π + invariant mass spectrum, whose minimal configuration must be ccū d [5].The mass of T + cc (3875) is measured to be below the D 0 D * + threshold by −273 ± 61 ± 5 +11 −14 keV, and its width is as small as Γ = 410 ± 165 ± 43 +18 −38 keV (A unitarised Breit-Wigner analysis gives an even smaller width Γ U = 48 ± 2 0 −14 keV [6]).LHCb searched other charged channels and found no evidence for the existence of a similar structure, and therefore assigned T + cc (3875) to be an I = 0 state [5,6]. Prior to the observation of T + cc (3875), there have been many theoretical studies on doubly-charmed tetraquarks, whose predictions of the mass and width of the ground state J P = 1 + isoscalar tetraquark are consistent with those of T + cc (3875) [7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42].In the molecular picture, an early quark model calculation predicted the existence of a DD * bound state below the DD * threshold by 1.6 ± 1.0 MeV [13].Recent theoretical studies find that light vector meson exchanges may induce an attractive interaction between D and D * [43,44,45].One can also refer to a recent review of the present status of theoretical studies on T + cc (3875) in Ref. [46].There are also several lattice studies performed on exotic doubly-charmed meson states.The spectra of hidden-charm and doubly charmed systems with various J P quantum numbers are explored in N f = 2 + 1 lattice QCD using meson-meson and diquark-antidiquark operators [26], but the results do not indicate the existence of bound states or narrow resonances, since most of lattice energy levels are close to the corresponding non-interacting meson-meson energies.Another N f = 2 + 1 + 1 lattice QCD study on doubly heavy tetraquarks observes the ground state of ud cc system [30], which is below the DD * threshold by 23 ± 11 MeV after the continuum and chiral extrapolation.However, it is not enough to claim a bound state from the lowest energy level.Apart from these studies focusing on the extraction of the finite-volume energy levels, Ref. 
[47] Table 1: Parameters of N f = 2 gauge ensembles with degenerate u, d sea quarks.performs the first lattice QCD study on the pole singularity of the DD * scattering amplitude at the pion mass m π ≈ 280 MeV, and reports an S -wave virtual bound state pole below the DD * threshold by approximately 10 MeV, which may correspond to T + cc (3875) when m π approaches to the physical value.Since the LHCb experiment observed T + cc (3875) only in the I = 0 channel, it is conceivable that the isospindependent interaction plays a vital role in its formation.The existing lattice QCD studies focus on the I = 0 channel from the point of view of tetraquark and DD * scattering, and pay little attention to the isospin-sensitive properties.Given the large negative scattering length a = −7.16(51)fm of DD * scattering relevant to T + cc (3875) [6], the characteristic size R a = |a| of T + cc is too large for the present lattice QCD to investigate directly.An alternative way is to study the relevant DD * scattering in several different lattice volumes and then perform the infinite volume extrapolation to check the existence of a bound state [48,49].With only one lattice at hand, we cannot study the property of T + cc in this way yet.We focus on the S -wave DD * scatterings in I = 0 and I = 1 channels, and explore if there are dynamical differences between them.This study may shed light on the property of the DD * interaction and provide qualitative information for future phenomenological investigations. This paper is organized as follows: In Section 2 we describe the lattice setup, operator construction, and the method for studying the hadron-hadron interaction on the lattice.The results of DD * scatterings in I = 0, 1 channels are presented in Section 3, and the discussions can be found in Section 4. Section 5 is a summary of this work. Lattice Setup We generate gauge configurations with N f = 2 degenerate u, d quarks on an L 3 × T = 16 3 × 128 anisotropic lattice.We use tadpole improved gauge action [50,51] for gluons and the tadpole improved anisotropic clover fermion action for light u, d quarks [52,53].The renormalized aspect ratio is determined to be ξ = a s /a t = 5.3, and the temporal lattice spacing is set to be a −1 t = 6.894 (51) GeV [54].Using the a t and the ξ, we get a s ≈ 0.152(1) fm.Our bare u, d quark mass parameter gives m π = 348.5(1.0)MeV and m π La s ≈ 3.9.For the valence charm quark, we adopt the clover fermion action in Ref. [55], and the charm quark mass parameter is tuned to give (m η c + 3m J/ψ )/4 = 3069 MeV.The distillation method [56] is used to generate the perambulators for u, d quarks and the valence charm quark on our gauge ensemble.In practice, the perambulators are calculated in the Laplacian Heaviside subspace spanned by N vec = 70 eigenvectors with the lowest eigenvalues.The parameters for the gauge ensemble are listed in Table 1. Operators and correlation functions In the lattice study of hadron-hadron scattering, one key task is to extract the lattice energy levels as precisely as possible, from which the scattering matrix elements can be parameterized with quantities reflecting the scattering properties, such as the scattering phase shift and scattering length, etc. 
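The next subsection extracts these levels by building a correlation matrix of DD* operators and solving a generalized eigenvalue problem (GEVP), C(t) v_n = lambda_n(t, t0) C(t0) v_n. A minimal synthetic illustration of that step is sketched below; the fake spectrum, overlaps, and choice of t0 are arbitrary, and this is not the analysis code used on the ensemble of Table 1.

```python
# GEVP sketch:  C(t) v_n = lambda_n(t, t0) C(t0) v_n,  lambda_n ~ exp(-E_n (t - t0)).
# Synthetic 4x4 correlators with an invented spectrum; illustration only.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
N, t0 = 4, 3
E_true = np.array([0.56, 0.60, 0.66, 0.75])      # fake energies, lattice units
Z = rng.normal(size=(N, N))                      # fake operator-state overlaps

def C(t):                                        # C_ij(t) = sum_n Z_in Z_jn exp(-E_n t)
    return (Z * np.exp(-E_true * t)) @ Z.T

def effective_energies(t):
    lam_t  = np.sort(eigh(C(t),     C(t0), eigvals_only=True))[::-1]
    lam_t1 = np.sort(eigh(C(t + 1), C(t0), eigvals_only=True))[::-1]
    return np.log(lam_t / lam_t1)                # plateaus at E_n for large t

print(effective_energies(12))                    # -> close to E_true
```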
Concerning the properties of T + cc (3875), we focus on the DD * scattering in the J P = 1 + channel with isospin I = 0 and I = 1.Throughout this work, the D(D * ) operators and DD * operators are built in terms of smeared quark fields.We use quark bilinears O Γ = qγ 5 c for D mesons and O Γ = qγ i c for D * mesons (here q refers to u for D 0 and d for D + ).Accordingly, the operators for D(D * ) mesons moving with a spatial momentum p are obtained by the Fourier transformation The correlation functions of D and D * at a spatial momentum p are calculated precisely using the distillation method and are parameterized as E eff (GeV) where X refers to D or D * and the second term account for the higher state contamination.The modes n of the spatial momentum p = 2π La s n involved in this work are n = (0, 0, 0), (0, 0, 1), ( . Fig. 1 shows the effective energies E eff p (t) of D (left panel) and D * (middle panel) at different momenta p, which are defined by and the grey bands illustrate the fit results using Eq. ( 1 The fitted ξ is 5.329 (12) for D and 5.324 (26) for D * , both of which are consistent with ξ = 5.3 in Table 1. D D * scattering In this work, we only focus on the S -wave DD * scattering in the isospin I = 0 and I = 1 channels.The recent lattice study on T + cc also found that the contribution of D-wave scattering to the J P = 1 + DD * system is small enough to be neglected temporarily [47].The operators for S -wave DD * system with a relative p = | p| momentum can be built through where O D ( p, t) and O D * ( p, t) are the momentum projected single particle operators for D and D * , respectively, R refers to the rotational operations in the lattice spatial symmetry group O (the octahedral group).The operators O (I) DD * for a definite isospin I is built according to the isospin combinations We tentatively assume the coupling between the S -wave the D * D * state and DD * state is weak and do not include D * D * operator in our calculation.Therefore, to extract the energies of DD * systems, we calculate the following correlation matrix in both I = 0 and I = 1 channels in the framework of the distillation method, where we average the source time slices τ to increase the statistics.Then we solve the generalized eigenvalue problem (GEVP) DD * (p) that couples most to the m-th state of DD * system with energy E (I) DD * (p m ).Here p m is the scattering momentum of the m-th state and is determined by DD * (p m ) through the relation In practice, the lowest four momentum modes of p are involved in the GEVP analysis, hence the momentum modes n = (0, 0, 0), (0, 0, 1), ( are replaced by m = 0, 1, 2, 3 to present the state of the m-th optimized operator.It is known that, under the periodic temporal boundary condition, in addition to the physical states that all the physical degrees of freedom propagate alongside in the same time direction, the so-called thermal states or wrap-around states that the D and D * states propagate in opposite temporal directions [58] also contributes to the correlation function C (I) (p, p ; t).Therefore, the correlation function of the optimized operator O (I) DD * (p m ) can be parameterized as where the first term comes from the desired physical state, the second term accounts for the contribution of the thermal state, while the third term is introduced to account for the residual contamination from higher states.Note that ] which guarantees the thermal state term vanishes when T → ∞.It turns out that this function form describes C (I) (p m , t) very well in 
Lüscher's formalism provides an approach to extracting hadron-hadron scattering properties from the energy levels of a two-meson system in a finite box [59,48]. Once the energies E_DD*^(I)(p_m) are derived precisely, we can obtain the value of the scattering momentum p_m using Eq. (7). Usually, one also introduces the dimensionless quantity q = p_m L a_s / (2π) for convenience. According to Lüscher's formalism, the phase shifts of S-wave scattering can be derived from p (or q) through Eq. (9), where Z_lm(s, q^2) is the Lüscher zeta function [59] and the second equality in Eq. (9) is the lattice-regularized version of Z_lm(s, q^2) [48]. For low-energy scattering, the effective range expansion (ERE) up to O(p^2) gives

p cot δ_0(p) = 1/a_0 + (1/2) r_0 p^2,  (10)

where a_0 and r_0 are the S-wave (l = 0) scattering length and effective range, respectively. In the following, we discuss the DD* scatterings in the I = 0 and I = 1 channels in detail.

Table 3: The lattice results of the S-wave DD* scattering in the I = 0 channel. The four lowest energy levels E_DD*^(I)(p_m), corresponding to the four momentum modes, are obtained. The energy shifts ∆E and the scattering momenta p_m are determined accordingly. The values are in physical units converted from a_t^{-1} = 6.894 GeV. The measured aspect ratio ξ = 5.33(3) from the dispersion relation is used to derive the dimensionless q^2. All the errors here are jackknife ones.

It is seen that the energy shifts ∆E(p) are uniformly negative for all four momentum modes. This indicates that the interaction between D and D* in the I = 0 channel is attractive. The energy shifts ∆E(p) are also checked through the ratio function R(p_m, t) defined in Eq. (11). This ratio function is sometimes used to estimate ∆E(p_m) from the plateau of ∆E(p_m, t) ≡ ln[R(p_m, t)/R(p_m, t+1)]. The middle panel of Fig. 2 shows ∆E(p_m, t) for momentum modes m = 0, 1, 2, 3 in the I = 0 channel, where the data points are the values from the measured correlation functions involved in Eq. (11), and the colored bands illustrate the results obtained from the function forms in Eqs. (1) and (8) with their fitted parameters. Obviously, ∆E(p_m, t) does not show a plateau at all, but it can be well described by the functions mentioned above: the slanted behaviour of ∆E(p_m, t) in the intermediate time region is caused by the third term in Eq. (8) (the excited-state term), while its steep behaviour near T/2 is the effect of the second term (the thermal-state term). This manifests that the energy shifts ∆E(p_m) listed in Table 3 are derived correctly. Note that the terms for excited states in Eqs. (1) and (6) are necessary to describe the data.

The scattering phase shifts p cot δ_0(q^2) are obtained by using Eq. (9) at each q^2 and are plotted as data points in the right panel of Fig. 2, where the dashed lines illustrate the function form in Eq. (9). The fit to the four data points at lower q^2 using Eq. (10) gives

a_0^(I=0) = 0.538(33) fm, r_0^(I=0) = 0.99(11) fm.  (12)

Our results are in line with a_0 ∼ 1 fm and r_0 ∼ 1.0 fm determined in Ref. [47] at a lighter pion mass m_π = 280 MeV.
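The effective range expansion fit quoted in Eq. (12) can be illustrated schematically. The following snippet is a toy example, not the lattice analysis: it generates fake (p, p cot δ_0) points from an assumed ERE, p cot δ_0(p) = 1/a_0 + r_0 p^2/2, and recovers a_0 and r_0 with a weighted linear fit in p^2; the input numbers are placeholders rather than the measured phase shifts.

```python
import numpy as np

# Assumed "true" ERE parameters (placeholders, in fm and fm).
a0_true, r0_true = 0.54, 1.0

p = np.array([0.3, 0.6, 0.9, 1.2])               # scattering momenta in fm^-1
y = 1.0 / a0_true + 0.5 * r0_true * p**2         # p*cot(delta0) in fm^-1
err = 0.05 * np.abs(y) + 0.01                    # toy uncertainties

# Weighted linear fit y = c0 + c1 * p^2, then a0 = 1/c0 and r0 = 2*c1.
X = np.vstack([np.ones_like(p), p**2]).T
W = np.diag(1.0 / err**2)
c0, c1 = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"a0 = {1.0 / c0:.3f} fm, r0 = {2.0 * c1:.3f} fm")
```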
Both results indicate an attractive DD* interaction in the I = 0 channel. Since we have only one lattice volume, we cannot yet make a proper discussion of the existence of a bound state.

3.2. The I = 1 and J^P = 1^+ DD* scattering

The data analysis of the I = 1 DD* scattering follows the same procedure as that for the I = 0 channel. The results of E_DD*^(I=1)(p_m) are listed in Table 4 along with the values of the corresponding energy shifts ∆E^(I=1), the scattering momenta p_m, etc. The major results of the I = 1 DD* scattering are illustrated in Fig. 3, similar to Fig. 2 for the I = 0 case: the left panel shows the effective energies of C^(I=1)(p, t) and the related fits using Eq. (8); the middle panel shows the verification of the energy shifts ∆E^(I=1) for different momenta p_m; the right panel shows the S-wave phase shifts of the DD* (I = 1) scattering, obtained from the scattering momenta p_m. It is seen that E_DD*^(I=1)(p) is higher than E_D(p) + E_D*(p) when it is not far from the DD* threshold (the lowest two energy levels of E_DD*^(I=1)(p)). This reflects a repulsive interaction for the low-energy D and D* scattering in the I = 1 channel. When the scattering momentum p is larger, the energy shifts get smaller and finally become consistent with zero within errors. This is in striking contrast to the I = 0 case, where the energy shifts are uniformly negative over a large range of the scattering momentum. Accordingly, the corresponding q^2 for the two higher energies are consistent with integers, such that when the phase shifts are determined through Eq. (9), their errors blow up, as shown in the right panel of Fig. 3. The fit to these phase shifts using Eq. (10) gives the scattering length and the effective range in the I = 1 channel.

Discussion

In the previous section, we presented the numerical results of the S-wave DD* scattering in the I = 0 and I = 1 channels. The major observation is that the DD* interaction is attractive for I = 0 in a wide momentum range and repulsive for I = 1 when the energy of DD* is near the DD* mass threshold. This is conceptually in agreement with the observation of LHCb [5] that the T_cc^+ state is found only in the D^0 D^{*+} system. To understand the isospin-dependent interaction of DD*, let us take a closer look at the quark diagrams (after the Wick contraction) which contribute to the correlation functions C^(I)(p, t). There are four distinct terms, whose schematic quark diagrams are shown in the left part of Fig. 4, where the minus signs of the C terms come from the single quark loops after Wick contraction. The contributions of these terms to C^(I)(p, t) at p = 0 are checked to follow a clear hierarchy: the direct term D is the largest, followed by C_2(ρ) and C_1(π/ρ), and finally the second direct term, with each level being smaller by roughly two orders of magnitude, as shown in the right panel of Fig. 4, where the magnitudes of the four terms at p = 0 are scaled by the product of the single-meson correlation functions C_D(p = 0, t) and C_D*(p = 0, t) (abbreviated as C_D C_D*). The contribution of the second direct term is quite small and is neglected in the following discussion. The C_1(π/ρ) term contributes equally to C^(I=0)(p, t) and C^(I=1)(p, t), while the contributions of C_2(ρ) have opposite signs for I = 0 and I = 1 and are necessarily responsible for the energy difference between E^(I=0)_DD* and E^(I=1)_DD*. The approximation of Eq. (15) for the DD* energy, introduced below, holds in the time range t ∈ [20, 50], where δE_i t ≪ 1.
The second term on the right-hand side of Eq. (15) comes from the C_1(π/ρ) contribution and is positive for both the I = 0 and I = 1 channels. This means the C_1(π/ρ) term reflects a repulsive interaction. In contrast, the third term, which is contributed by C_2(ρ), is positive for I = 1 and negative for I = 0, and thereby manifests a repulsive interaction for I = 1 and an attractive interaction for I = 0. On the other hand, as shown in the right panel of Fig. 4, the curve for C_2(ρ) is uniformly higher than that for C_1(π/ρ), which implies ε_2 > ε_1. In the meantime, the larger slope of C_2(ρ) indicates δE_2 > δE_1, such that one has ε_1 δE_1 e^{δE_1 t} < ε_2 δE_2 e^{δE_2 t}. In other words, the combined effects of the C_1(π/ρ) and C_2(ρ) contributions result in negative energy shifts relative to the non-interacting DD* energy E_D(p) + E_D*(p), which reflects the overall attractive interaction between D and D* in the S-wave I = 0 channel. See Appendix C for itemized information.

On the hadron level, the four terms depicted in Fig. 4 can be interpreted as follows:

• D term: It involves two separately closed quark diagrams, each of which is the propagator of a D (D*) meson. After the gauge averaging, the two parts can have an interaction mediated by at least two gluons, which are necessarily in a color singlet. Intuitively, quarks frequently exchange gluons among themselves during their propagation. The "motion" status of light quarks can be changed more easily by absorbing or emitting a (not hard) gluon, such that their trajectories in spacetime are zigzag and may develop meson-exchange interactions, such as exchanges of σ, ω, etc., on the hadron level. Whether through gluon exchanges on the quark level or meson exchanges on the hadron level, the resultant effects are very tiny, since the contribution of this term is very close in magnitude (after the subtraction of the contribution from the wrap-around states) to the product of the correlation functions of single D and D* mesons.

• The other direct term (lower-left diagram of Fig. 4): This also involves two closed quark diagrams; however, each one connects the two different mesons D and D*. This diagram contributes to C_DD*(p, t) only when color-singlet gluon exchanges (also at least two gluons) take place between the two parts after the gauge average. On the hadron level, the interaction can be mediated by η, ω, etc. However, empirically in our study these effects are found to be very weak, and the contribution from this term is negligible in comparison with the other terms.

• C_1(π/ρ) term: As shown in the upper right part of Fig. 4, there are explicit u, d quark exchanges between D and D* during their temporal propagation. This exchange effect can be viewed as that of a charged meson (π±, ρ±, etc.) on the hadron level. If we flip the positions of D^0 and D^{*+} on the right-hand side, the figure implies a c c̄ exchange process, and accordingly a charmonium V_c (J/ψ, ψ′, etc.) exchange process on the hadron level. Since C_1(π/ρ) contributes equally to C^(I=0)_DD*(p, t) and C^(I=1)_DD*(p, t), according to our discussion above, these intermediate meson exchanges on the hadron level result in a repulsive interaction in the DD* system. Note that vector-meson exchange models [44,45] also obtain a repulsive interaction for the J/ψ exchange.
• C_2(ρ) term: This term also comes from the u, d quark exchanges. On the hadron level, since P-parity conservation prohibits the DDπ interaction, the effect of light-quark exchange is reflected mainly by the charged ρ exchange, which provides an attractive interaction for the S-wave I = 0 DD* system and a repulsive interaction for the S-wave I = 1 DD* system. Furthermore, the observation E^(I=0)_DD*(p) < E_D(p) + E_D*(p) indicates that this attractive ρ-exchange effect overcomes the repulsive interaction reflected by the C_1(π/ρ) term and results in an overall attractive interaction. This result is in qualitative agreement with those in Refs. [43,44,45].

Summary

The S-wave DD* scattering is investigated from N_f = 2 lattice QCD calculations on a lattice with m_π ≈ 350 MeV and m_π L a_s ≈ 3.9. Benefiting from the large statistics, several of the lowest energy levels of the DD* systems with isospin I = 0 and I = 1 are determined precisely through the distillation method and by solving the relevant generalized eigenvalue problems. In the I = 1 case, the DD* energy E^(I=1)_DD*(p) is higher than the corresponding non-interacting DD* energy E_D(p) + E_D*(p) near threshold, which manifests a repulsive interaction between D and D*; when the scattering momentum p becomes large, the difference between E^(I=1)_DD*(p) and E_D(p) + E_D*(p) becomes smaller and even indiscernible. In the I = 0 case, the DD* energy E^(I=0)_DD*(p) is uniformly lower than E_D(p) + E_D*(p) for p up to around 800 MeV, which definitely reflects an attractive interaction between D and D* in the I = 0 state. This is consistent with the experimental assignment I = 0 for T_cc^+(3875) given a DD* bound state. Based on these energy levels, the S-wave phase shifts of DD* scattering in the I = 0, 1 channels are derived using Lüscher's finite-volume formalism, and the effective range expansions give the corresponding scattering lengths and effective ranges. To understand the isospin dependence of the DD* interaction, further analysis is performed on the components of the DD* correlation functions. It is found that the difference between the I = 0 and I = 1 DD* correlation functions comes mainly from the C_2(ρ) term, in which D and D* exchange u, d quarks while propagating in the time direction. This term can be viewed as charged vector ρ-meson exchange on the hadron level and contributes to the I = 0 and I = 1 DD* correlation functions with opposite signs. As a result, it raises the DD* energy in the I = 1 channel and pulls it down in the I = 0 channel. This provides strong evidence that the DD* interaction induced by charged ρ-meson exchange may play a crucial role in the formation of T_cc^+(3875), in qualitative agreement with the results of recent phenomenological studies [43,44,45].

• C_2(t) is uniformly higher than C_1(t) and has a larger slope. This implies ε_2 > ε_1 and δE_2 > δE_1.

• The second direct term is roughly four orders of magnitude smaller than the D term and is ignored in the discussion.

Based on these observations, in the intermediate time region the energy of DD* can be estimated as in Eq. (15), where ε_i, δE_i t ≪ 1 is used.

Figure 1: The effective energies and dispersion relations of D and D*. For the effective energies of D (left panel) and D* (middle panel), the grey bands illustrate the fits using Eq. (1) in the time window t ∈ [20, T − 20]. For the dispersion relations (right panel), the data points are the measured energies E_X^2(p) at different momenta p = (2π/(L a_s)) n (labelled by n^2), with X referring to D or D*, and the grey bands are the fits using Eq. (3).

The fits are performed over the time interval t ∈ [20, T − 20]. The results of E_D(p) and E_D*(p) in physical units are listed in Table 2 with jackknife errors. It is seen that the hyperfine splitting ∆m = E_D*(0) − E_D(0) = 139.70(57) MeV almost reproduces the experimental values m_{D*0} − m_{D0} = 142.0(1) MeV and m_{D*+} − m_{D+} = 140.6(1) MeV [57], which shows that our tuning of the charm quark mass and the scale-setting scheme are reasonable. The momentum dependence of E_D(p) and E_D*(p) is plotted in Fig. 1 (right panel), where the shaded line shows the fit results using the continuum dispersion relation.

Figure 2: The results of the DD* (I = 0) scattering. Left panel: data points are the effective energies of the DD* (I = 0) system and the grey bands are the fits by Eq. (8)
in the time window t ∈ [20, T − 20]. Middle panel: effective energy shifts ∆E(p_m, t) defined through the ratio function R(p_m, t), where the colored bands are from the function forms of R(p_m, t) defined through Eqs. (1) and (6). Right panel: the phase shifts of the S-wave DD* (I = 0) scattering, where the grey band shows the result of Eq. (10) with the best-fit parameters in Eq. (12) and the red band illustrates the fitting range.

3.1. The I = 0 and J^P = 1^+ DD* scattering

We carry out the jackknife analysis of the correlation functions C_X(p_m, t) (X refers to D and D*) and C^(I)(p_m, t) for all the momentum modes using Eqs. (1) and (8), respectively (see details in Appendix B). In this procedure, the energies E_D(p_m), E_D*(p_m), and E^(I)_DD*(p_m) for m = 0, 1, 2, 3 are obtained simultaneously, along with the energy shifts ∆E^(I)(p_m) = E^(I)_DD*(p_m) − E_D(p_m) − E_D*(p_m) and the squared scattering momenta p_m^2. As shown by the colored bands in the left panel of Fig. 2, the function form of Eq. (8) describes C^(I)(p_m, t) very well in the time range t ∈ [20, T − 20]. The dip around t = T/2 also manifests the existence of the thermal states. The final results in the I = 0 channel are listed in Table 3, where the energies with jackknife errors are converted into physical units.

Figure 3: The results of the DD* (I = 1) scattering. The three panels are similar to those of Fig. 2.

Figure 4: The components of the correlation function C^(I)(p, t). Left panel: the schematic quark diagrams of the four terms, the two direct terms and C_1(π/ρ) and C_2(ρ), that contribute to C^(I)(p, t). Right panel: the relative magnitudes of the four terms for the case of p = 0, scaled by C_D(p = 0, t) C_D*(p = 0, t).

The diagram on the upper left side is named the D (direct) term, which comes from the direct contractions between the O_D (O_D*) operators in the sink and the source. The diagram on the upper right side is called the C_1(π/ρ) (crossing) term, which involves either the u, d quark exchange effects (as illustrated in the figure) or charm quark exchange (if the positions of D^0 and D^{*+} on the right-hand side are flipped). In the lower-left diagram, the second direct term is the direct contraction between D and D*. The lower-right diagram, C_2(ρ), also illustrates a u, d quark exchange. In terms of these components, C^(I)(p, t) can be written as a combination of the four terms whose relative signs depend on the isospin. As shown in the right panel of Fig. 4, in the intermediate time range, C_1(π/ρ)/(C_D C_D*) and C_2(ρ)/(C_D C_D*) show approximately linear behaviors on a logarithmic scale with positive slopes, while D/(C_D C_D*) is almost a flat line throughout the time range. Since C_D C_D* behaves as W e^{−(m_D + m_D*) t} in the intermediate time range, the D term must have a similar time dependence, namely A_0 e^{−E_0 t} with E_0 ≈ m_D + m_D*. Accordingly, the time dependence of C_1(π/ρ) and C_2(ρ) is also approximately exponential and can be expressed qualitatively as A_0 ε_i e^{−E_i t}, where i = 1, 2 refer to C_1(π/ρ) and C_2(ρ), respectively, and ε_i ∼ O(10^{−2}) ≪ 1 is indicated by the figure. On the other hand, the positive slopes of C_1(π/ρ)/(C_D C_D*) and C_2(ρ)/(C_D C_D*) imply that C_1(π/ρ) and C_2(ρ) damp in time more slowly than D does, such that one has E_0 − E_i = δE_i > 0.
Thus one has the approximation (see Appendix C) for the energy of the DD* system, Eq. (15): its leading term is the non-interacting energy, its second term comes from the C_1(π/ρ) contribution, and its third term comes from the C_2(ρ) contribution with an isospin-dependent sign.

Figure 6: The statistical errors as a function of the block size k.

Table 2: The energies of D and D* at the different spatial momentum modes n. The energies are converted into physical units with the lattice spacing a_t^{-1} = 6.894 GeV.

Table 4: The lattice results of the S-wave DD* scattering in the I = 1 channel (similar to Table 3).
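All errors quoted in Tables 2-4 are jackknife errors; the following generic single-elimination jackknife routine illustrates how such errors are obtained from per-configuration data. It is a minimal sketch of the method with synthetic input, not the analysis script used for these tables.

```python
import numpy as np

def jackknife(samples, estimator):
    """Single-elimination jackknife mean and error of `estimator` over configurations."""
    samples = np.asarray(samples)
    n = len(samples)
    # Estimator evaluated on each leave-one-out subsample.
    thetas = np.array([estimator(np.delete(samples, i, axis=0)) for i in range(n)])
    theta_bar = thetas.mean(axis=0)
    err = np.sqrt((n - 1) / n * np.sum((thetas - theta_bar) ** 2, axis=0))
    return theta_bar, err

# Example: effective energy from a per-configuration correlator C(t) at two time slices.
rng = np.random.default_rng(0)
C = np.exp(-0.3 * np.arange(10)) * (1 + 0.02 * rng.standard_normal((200, 10)))
E_eff = lambda c: np.log(c.mean(axis=0)[4] / c.mean(axis=0)[5])
print(jackknife(C, E_eff))
```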
7,702.8
2022-08-01T00:00:00.000
[ "Physics" ]
Studying Gambling Behaviors and Responsible Gambling Tools in a Simulated Online Casino Integrated With Amazon Mechanical Turk: Development and Initial Validation of Survey Data and Platform Mechanics of the Frescati Online Research Casino

Introduction: Online gambling, popular among both problem and recreational gamblers, simultaneously entails both heightened addiction risks and unique opportunities for prevention and intervention. There is a need to bridge the growing literature on learning and extinction mechanisms of gambling behavior with account tracking studies using real-life gambling data. In this study, we describe the development and validation of the Frescati Online Research Casino (FORC): a simulated online casino where games, visual themes, outcome sizes, probabilities, and other variables of interest can be experimentally manipulated to conduct behavioral analytic studies and evaluate the efficacy of responsible gambling tools.

Methods: FORC features an initial survey for self-reporting of gambling and gambling problems, along with several games resembling regular real-life casino games, designed to allow Pavlovian and instrumental learning. FORC was developed with maximum flexibility in mind, allowing detailed experiment specification by setting parameters using an online interface, including the display of messages. To allow convenient and rapid data collection from diverse samples, FORC is independently hosted yet integrated with the popular crowdsourcing platform Amazon Mechanical Turk through a reimbursement key mechanism. To validate the survey data quality and game mechanics of FORC, n = 101 participants were recruited, who answered a questionnaire on gambling habits and problems, then played both slot machine and card-draw type games. Questionnaire and trial-by-trial behavioral data were analyzed using standard psychometric tests and outcome distribution modeling.

Results: The expected associations among variables in the introductory questionnaire were found, along with good psychometric properties, suggestive of good quality data. Only 6% of participants provided seemingly poor behavioral data. Game mechanics worked as intended: gambling outcomes showed the expected pattern of random sampling with replacement and were normally distributed around the set percentages, while balances developed according to the set return to player rate.

Conclusions: FORC appears to be a valid paradigm for simulating online gambling and for collecting survey and behavioral data, offering a valuable compromise between stringent experimental paradigms with lower external validity, and real-world gambling account tracking data with lower internal validity.
INTRODUCTION

Gambling refers to any activity involving wagering of money (or something of value) on an outcome that is fully or partially dependent on chance, with the possibility of winning money (or something of value). As evidenced by its long historical roots and popularity around the world, gambling is a popular recreational activity, often without any serious negative consequences (1). A subset of gamblers, however, develop problematic gambling behaviors such as loss-chasing, stake habituation, difficulty stopping, and gambling to escape negative emotions, and experience negative economic, psychosocial, and mental health consequences because of this (2). Gambling is now recognized as an addictive behavior in psychiatric diagnostics (3), yet unlike alcohol and substance addictions, problem gambling does not involve consuming psychoactive chemical agents. From a clinical perspective, this makes it even more important to study the specific learning and extinction mechanisms involved in gambling in order to inform gambling-specific treatment strategies, both for clinical settings and to inform so-called Responsible Gambling Tools (RGT) (4).

Since the dawn of behavioral analysis, gambling has been considered a prototypical case of the effectiveness of intermittent reinforcement, wherein a behavior is rewarded some, but not all, of the time (5). Later behavioral analytic research has examined a broader set of learning and extinction phenomena of presumed importance to gambling (6), including other types of reinforcement schedules (7), reward discounting (8), the near-miss phenomenon (9), establishing operations (10), and verbal rules (11). Behavioral analytic research has challenged some popular preconceptions about what promotes problem gambling, e.g., revealing mixed or even contradictory evidence for the "Early Big Win" hypothesis (12)(13)(14). Recently, attempts have been made to translate these findings into clinical practice (15). However, overall, there are surprisingly few published behavioral analytic studies of gambling behaviors given the population prevalence of both gambling and gambling problems, and given gambling's overt similarities with learning experiments (16).
While the relatively small and student samples typically used in past research need not present an issue if the expected effects are large and presumed common to all humans, there is still arguably a translational need to bridge these findings with that of account tracking studies from real-life gambling, where legal requirements make it impossible to e.g., randomize participants to definitively demonstrate causality (17). Access to larger samples may also create opportunities to study even minor effects that would nonetheless have a significant public health impact. Additionally, there are surprisingly few experimental studies on specific RGT features and responsible gambling practices, given the clear policy implications and ubiquitous implementation (18). Further, experimental studies that attempt to simulate live casino environments and games played therein, are likely to not fully capture the contextual factors that play a role in learning and extinction (19). With the advent and increasing popularity of online gambling, which is now the most prevalent type of gambling among both problem and recreational gamblers in many countries (1,20), it is now possible to develop research paradigms that are unaffected by contextual confounders, while still accurately simulating real-life gambling. Studying learning and extinction of problem gambling behaviors in a naturalistic setting is arguably of even greater importance if the goal is to study new potential features of RGTs and responsible gambling policies in online gambling environments (21). In the current study, we describe the development and an initial validation of the Frescati Online Research Casino (FORC): a simulated online casino where games, visual themes, outcome sizes, probabilities, and other variables can be experimentally manipulated to conduct a variety of behavioral analytic and experimental RGT research with great flexibility and convenience. Such an experimental platform would be valuable in bridging classic behavioral research and account-tracking studies on real-life gambling data, offering an attractive, translational compromise in terms of internal and external validity. Validation data was collected using an experimental setup that would allow detailed examination of the game mechanics; validity of questionnaire data was also examined using traditional psychometric techniques. Amazon Mechanical Turk Amazon Mechanical Turk (AMT) is a crowdsourcing platform that allows so called Requesters to publish Human Intelligence Task for Workers to complete for a pre-set monetary reimbursement. AMT has been a popular platform for collecting scientific data and running psychological experiments for many years (22)(23)(24) and has been shown to provide data of equivalent quality to traditional data collection methods (25,26), including valid and reliable gambling data specifically (27,28). Connecting FORC to AMT, or in principle any other crowdsourcing platform with similar features, provides access to a large, global, diverse participant pool and is thus particularly suitable to conduct behavioral analytic research that study phenomena that are common to all people. Development and Features Back-and front-end development of the casino and AMT integration was outsourced to a professional web development firm. The application relies on C#, ASP.net, Jquery and Bootstrap CSS frameworks, and an SQL database, and features a responsive design suitable for both smartphones, tablets, and computers. 
Randomness (in stimulus presentation, outcomes, and arm allocation) is implemented through a trial-by-trial random number generator, ensuring random draws with replacement, as in real-life gambling. The validation analyses described below include examining the randomness generation mechanism, since this is crucial to mimicking real-life gambling (4). Data from multiple experimental arms can be collected at the same time, with random allocation to arms according to a percentage specified in a design matrix. FORC features three types of games, which can be included in any sequence and with varying numbers of trials: a roulette wheel with a choice of betting on red or black (a potential instrumental learning task, Figure 1C), a three-reel slot machine with no choice (a potential Pavlovian learning task, Figure 1D), and a simple card-choice game with a choice of two decks placed side-by-side either vertically or horizontally (a potential instrumental learning task, Figure 1B). While the two former paradigms closely mimic real-life gambling, a deliberate design decision was made not to model existing casino card games, in order to avoid evoking already learned play strategies that could interfere with the designed contingencies. All games feature realistic sound effects, both on interaction (button pressing) and on win outcomes (Figure 1D). Continuous background music was not included due to technical reasons. The balance is by default displayed in the lower right corner, as in real-life online casinos, but can be hidden by specifying this in the design matrix. Four distinct visual themes (different color schemes, all with graphical casino connotations; one has four variants with only minor differences in element composition) are available for both the card game and the slot machine, and can be randomly allocated per trial. A basic theme option is also available. For each arm, the number of trials per sequence, starting balance, visual theme, bet size(s), win amount(s), and win probabilities, per choice option (if any), can be conveniently set in the design matrix using an online administrator view. See Figure 1A. Short, customizable messages can be displayed in-between games (sequences) to, e.g., mimic the sort of messaging used in RGTs (e.g., "Remember that there is no guarantee that you will win back lost credits") (21).

AMT and Casino Procedure

Experiments are published on lists of available tasks on AMT; the platform makes it possible to offer the task only to users with certain registered characteristics (e.g., country of residence). The task listing includes a short description and the reimbursement offered. Interested participants are referred to an AMT landing page featuring a full, customizable description of the experiment, along with participant and informed consent information (see below). Participants consent by clicking on a link that refers them to FORC, housed on a separate server. The FORC landing page includes some final instructions, including an emphasis on playing the games as if it were a real working casino. Participants then answer questions on sex, age, last-year gambling frequency (in five steps, from not at all to once per day or more often, coded 0-4) and types (12 different ones, including ones prevalent in non-Western countries, plus a none-option), and the Problem Gambling Severity Index, PGSI (29), a validated screener for gambling problems. Participants then proceed to the games, as dictated by the design matrix.
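The trial-by-trial randomness described at the start of this section (independent draws with replacement against a win probability taken from the design matrix) can be sketched in a few lines. This is a simplified illustration of the principle, not FORC's actual C# implementation, and the parameter names are made up; with the example settings below (10-credit bet, 20-credit win, 35% win probability) the expected balance change is -3 credits per trial, matching the return-to-player reasoning used in the validation analyses.

```python
import random

def play_trial(balance, bet, win_probability, win_amount):
    """One independent draw with replacement: win with the configured probability."""
    win = random.random() < win_probability
    balance -= bet
    if win:
        balance += win_amount
    return win, balance

# Example: 40 slot-machine-style trials with a 35% win chance, 10-credit bet, 20-credit win.
# Expected change per trial: -10 + 0.35 * 20 = -3 credits (return-to-player of 0.7).
balance = 1000
for trial in range(40):
    win, balance = play_trial(balance, bet=10, win_probability=0.35, win_amount=20)
print("final balance:", balance)
```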
At the end of the games, participants view a customizable message and are shown a custom key, and are then prompted to return to the AMT platform and enter the key there; the key is then used on the AMT side to validate the work performed and approve reimbursement.

Data Structure

Experimental data are saved and structured trial-by-trial, in long format, and include an anonymous study ID (independent of the AMT worker ID), timestamps (temporal resolution was set at seconds at the time of collecting validation data, later changed to milliseconds), allocated study arm, game type, trial number, balance in, presented theme, chosen behavior (response), the outcome, and balance out. Survey data can be linked to experimental data through the anonymous study ID generated upon submitting survey data and proceeding to the games. Data can be exported at any time from the administrator view.

Validation Data

During a roughly 3-h period, n = 102 final participants (see below) were recruited from AMT with an offered reimbursement of 2 USD for a session lasting no longer than 30 min. This reimbursement is relatively high compared to the estimated AMT average (30), and is thus likely to have made the task an attractive opportunity and to have promoted high-quality data. The published task description advertised it as a scientific experiment about online gambling. After completing the survey, the experimental setup had participants complete 40 trials of the card-draw game, then 40 trials of the slot machine game, and finally 16 non-reinforced trials of the card-draw game (not used in analyses). While the recruitment aim was n = 100 participants, the AMT integration procedure by necessity makes it possible for participants to complete the experimental part without completing the AMT part and being registered as having done so, explaining why the final sample size exceeded the intended number. One participant was excluded for not completing all trials. A total of k = 9,696 trials from n = 101 participants were thus available for analysis. See Table 1 for participant characteristics.

Analyses

All analyses were conducted in the R (3.6.3) statistical environment. FORC was validated as an experimental platform by considering three aspects: apparent data quality, randomness mechanics and the resulting change in average credit balance over time, and psychometric properties of the survey data. Convergent validity of gambling behaviors observed on FORC was not examined, since the experimental setup used was not designed specifically to evoke spontaneous gambling behaviors; however, demonstrating validity of the three aspects independently would suggest that an experimental setup designed to do so can be expected to also show convergent validity. Quality of data was assessed by calculating the percentage of participants who, in the card-draw part, showed no or limited response variation (outside a 10-90% response variation range), indicative of poor data quality due to indiscriminate, repetitive responding; or no such pattern, indicative of satisfactory data quality. Second, three game mechanics aspects of FORC were empirically evaluated. First, the observed random appearances of gambling outcomes (wins) during the slot machine phase (with different win percentages depending on the random stimulus shown) were compared to those programmed in the design matrix (50% for theme S1A and 20% for theme S2), both on a trial-by-trial basis and overall.
Second, to ensure that random draws (outcomes) were made with replacement (i.e., independent of previous ones), we calculated the percentage of win outcomes during the instrumental acquisition phase (the same 45% win probability in all trials) as a function of the outcome of the preceding trial. Third, the change in credit balance over time during the slot machine phase (the same possible win outcome of 20 credits in all trials) was compared to the expected credit balance change based on the programmed probability. Since the win probability differed between 20 and 50% depending on which stimulus was randomly presented for each trial, a perfect distribution of stimuli across trials and participants would give a 35% win probability. Since each bet cost 10 credits (at the time of validation data collection not refunded in case of a win; changed after collecting data for the current study, altering only the return to player rate but no game mechanics), and the win outcome was 20 credits regardless of theme, the return to player rate was 0.7, meaning that with a perfect distribution of themes between trials, a player's balance should decrease by on average 3 credits per trial. Third, we performed psychometric analyses on the questionnaire data to estimate the quality and validity of the different included measures. Cronbach's alpha (internal consistency) was calculated for the PGSI and the factor structure was estimated using parallel analysis (31). Associations between PGSI score, gambling frequency, and gambling types were also examined using regression models.

Ethics

The Regional Ethical Review Board in Stockholm has approved the use of FORC for a set of behavioral analytic research studies on gambling behaviors (2018/1968-32 and 2020-01863). Participant information is provided on the AMT platform, after which users can consent by actively choosing to be directed to FORC. In the participant information, it is recommended that potential participants with a history of, or current, problematic online gambling habits refrain from participation; at present, it is however not technically possible to exclude participants with high scores on the included PGSI measure completed prior to beginning the experiment. After completing all trials, the end-message is configured to include a statement about the study aims and structure, that any gambling strategies learned in the experiment will not translate into real-life gambling, that the house always wins in real-life gambling, and that participants worried about their gambling habits should seek help locally. For ethical reasons, participant reimbursement is not made contingent on behavior during the experiment (due to, e.g., allocation to different win probabilities).

Data Quality and Feasibility

During the 40 trials of the card-draw game, no participant showed zero response variation and only n = 6 had a response variation outside the 10-90% range, indicative of poor data quality. The remaining n = 95 showed greater response variation, with a sample average variation score of 52.4% (SD = 17.7%), i.e., close to equal response frequencies. Mean completion time was 10.05 min (SD = 3.68), with a minimum of 6.35 and a maximum of 29.28 min. Examining the duration distributions revealed that only a small minority of participants had durations in excess of 15 min (n = 8) and even fewer (n = 3) in excess of 20 min. Importantly, a longer duration need not in itself present an issue, since the experiment was divided into phases and participants could have loaded the game and delayed the start.
In the absence of any obvious thresholds for determining quality at this level of detail, duration was not considered a quality indicator and hence not used for further exclusion.

Game Mechanic Validation

Observed win outcome percentages across slot-machine trials were normally distributed at the sample level around 49.9 and 20.8%, respectively, against set win percentages of 50 and 20%. The observed percentage of wins across card-draw trials was 44.5% when the preceding trial had a loss outcome and 45.2% when the preceding trial had a winning outcome, revealing that the random mechanism (random sampling with replacement) worked as intended (set win percentage 45%). See Figures 2A1-A3. Themes were randomly sampled during the slot-machine trials (set probabilities 50-50%), resulting in a 51.5% occurrence of theme S1. This, in combination with the set difference in win percentages between themes (50 vs. 20%), resulted in a total observed win percentage of 35.45% (with a perfect 50-50% distribution of themes, the total win percentage would have been 35%, i.e., halfway between 50 and 20%), and in turn an expected credit loss at each turn of −2.911 (which would have been −3 with a perfect 50-50% distribution of themes) against a bet of 10, equivalent to a return to player rate of 0.71. The observed balance decrease closely followed the expected decrease and was in general normally distributed around it. However, due to a random fluctuation of increased winnings around trials 5-15, and balance being an accumulated measure, the average total momentary expected-observed discrepancy was positively skewed, with a mean of M = 1.12 (95% CI: 0.48-1.83). The average balance change from the preceding trial was however −2.909 (95% CI: −3.22 to −2.60) and normally distributed, revealing that the game mechanics worked as intended when considering that presentations and outcomes were random by design; see Figures 2B1-B3.

Quality and Validity of Survey Data

Quality and validity of survey data was examined among the n = 95 who provided quality data in the card-draw game. As expected from a general population sample with an established overrepresentation of problem gamblers (27), PGSI scores were distributed accordingly. Cronbach's alpha for the PGSI was α = 0.95 (95% CI: 0.93-0.96). Even when omitting participants with a PGSI score of zero to avoid artificial inflation of internal consistency due to floor effects (32), α was 0.92 (95% CI: 0.89-0.95). Parallel analysis of PGSI items showed a convincing one-component solution; see Figure 3B.

DISCUSSION

The Frescati Online Research Casino (FORC) was designed to offer a valuable middle ground between internal and external validity, providing full and flexible experimental control of a realistic, simulated online casino, in order to study the learning and extinction mechanisms of gambling behavior and evaluate responsible gambling tools and policies in a convenient way. This first validation study showed that data collection through integration with the Amazon Mechanical Turk crowdsourcing platform was feasible, provided a high percentage of high-quality behavioral and survey data, and that the game mechanics worked as intended. This suggests that FORC is ready to be used for experimental studies on gambling behavior and effects of RGTs.
Online gambling is now the most prevalent type amongst problem gamblers (1,20) (at least in countries where this gambling form is widespread), and can be simulated for research purposes more easily than traditional casino games since contextual confounders do not apply: participants engage with FORC in the same environment (on their computer or smartphone) that they would with real online gambling. Online gambling as a modality provides better opportunities for behavioral tracking and collecting other data, as well as for providing micro-interventions like automated feedback that can all be packaged as part of RGTs (32), making it easier to simulate for research purposes with retained face validity. While there are empirical studies on RGTs (21), most of these have either prioritized internal validity over external validity (e.g., small samples and a laboratory setting), or vice versa (e.g., lack of randomization, allowing no causal conclusions). Deploying experiments via FORC provides a valuable, translational middle ground that could help to establish an evidence base for RGTs on par with the scientific standards of psychological and medical interventions. Of note, by both design and current functionality, FORC is limited in some respects as to which types of gambling can be simulated (see Limitations below). Prominently, we opted to design a new card game (with familiar symbols and general mechanics) to allow the study of instrumental learning, rather than use existing ones, in order to avoid confounding effects of prior learning (i.e., playing styles). The other two FORC games, however, are very similar to their real-life equivalents, albeit somewhat simpler in gambling options. Of importance to learning experiments, a deliberate design decision was made to require user input for every trial of the slot machine, since we considered this to be a key feature of real-life gambling. Although requiring user action to initiate a learning trial deviates somewhat from traditional Pavlovian paradigms, users were presented with only a single option (to continue, i.e., no option to quit, change bet, etc.). According to the so-called functional-cognitive framework, wherein learning is seen as an ontogenetic adaptation (33), learning in the absence of choice can only be Pavlovian and not instrumental. For ethical reasons, participants with a history of gambling problems are explicitly discouraged from participating. However, it is currently not technically feasible to automatically exclude users with high PGSI scores from participating, for example, or to use this information for arm allocation (although a conditional statement with reference to the PGSI variable would have been easy to add to gate progression from the questionnaire section to the games, it would not have hindered participants from simply reloading the page and reporting differently). Not unexpectedly, a large percentage of participants did report at least some gambling problems, even higher than in previous studies using AMT (27), although the international recruitment base makes these numbers hard to compare. This observation makes deployment of FORC an ethical issue, rather than theoretically imposing a limitation on the generalizability of findings (since little or no selection bias is apparent). As with any research on this topic and/or using similar methods, planned experiments should be vetted by an independent review board.
Of importance, FORC includes several features that address this issue directly, including a post-experiment debriefing, reminders that the house always wins and that gambling strategies applied in FORC will not work elsewhere, and encouragement to seek help. Further, considering the ubiquity of online advertisements for gambling opportunities, it could also be argued that presenting AMT users with the possibility of participating in a gambling experiment does not in any practical sense increase their exposure to gambling opportunities. A stated aim of FORC was to offer a wide variety of possible outcome measures, the choice of which must be considered for each particular experiment. Delay in specific responses may be of interest in some experiments (34), yet setting up distinct behavioral choices in the card-draw game, e.g., a high vs. low risk option, may have better convergent validity as a proxy measure of problem gambling and has seen use in past research (35,36). Whether such a measure shows convergent validity will however ultimately depend on the exact experimental setup and must thus be examined in each study carried out using FORC. Of note, another commonly used proxy measure of problematic gambling, gambling persistence (13), is not possible to examine with FORC since AMT participants have no incentive to continue playing beyond the required trials, and reimbursement is fixed for both technical and ethical reasons. The detailed logging procedure featured in FORC also allows for a variety of quality assurance measures. Although AMT experiments do tend to produce high-quality data (26), this does not apply to 100% of participants. In the current study, we examined both within-questionnaire convergent validity and psychometric properties, as well as response variation, the latter on the grounds that fully repetitive gambling would be in violation of experiment instructions and the easiest way to play through the experiment and gain reimbursement as quickly as possible. Response variation is likely to be a sensitive proxy measure of quality, yet possibly at the price of some specificity, and the exact threshold should thus be carefully considered. Since collecting validation data, a new quality assurance feature has been added to FORC in the form of a pop-up question on contingency knowledge acquisition, used in previous research (37). These questions, along with response variation patterns and timing of responses, should be sufficient to make an accurate assessment of data quality in any experimental setup. Since collecting validation data for the current study, some additional changes have been made to FORC. Win outcomes now always return the bet; this decision was informed by parallel beta testing by other researchers and students (unfortunately, not systematically collected or analyzed), who, based on real-life gambling experiences, expected bets to be returned on wins. Return of bets upon winning is now explicitly explained in the pre-game instructions, and we can thus see no reason why it would change the game mechanics beyond the calculation of the return to player rate, which with one exception (see Limitations below) can easily be adapted. Another change is that bet size, which could previously only be observed through change in the credit balance, is now displayed visually immediately upon pressing a button or selecting a deck, then fading rapidly. Temporal resolution of logged behaviors has been updated to milliseconds to enable computational modeling experiments (34).
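Building on the logging details above (long-format, trial-by-trial records with millisecond timestamps), a single logged trial can be pictured roughly as the record below. The field names are illustrative stand-ins for the columns listed under Data Structure, not FORC's actual column names.

```python
from dataclasses import dataclass, asdict

@dataclass
class TrialRecord:
    study_id: str        # anonymous ID, independent of the AMT worker ID
    timestamp: str       # time of the trial (millisecond resolution)
    arm: int             # experimental arm from the design matrix
    game_type: str       # e.g. "card", "slot", or "roulette"
    trial_number: int
    balance_in: int
    theme: str           # visual theme shown on this trial
    response: str        # chosen behavior, e.g. which deck was selected
    outcome: int         # credits won (0 for a loss)
    balance_out: int

row = TrialRecord("a1b2c3", "2021-02-05T12:00:01.250", 1, "card", 17, 940, "S1", "left_deck", 0, 930)
print(asdict(row))       # one long-format row, ready for export
```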
Additional features added include the possibility to display different messages to different experiment arms at the beginning of each game as per the design matrix, as well as the possibility to add a banner-type advertisement to the background. Both these features were included to be able to study the effects of RGTs like pop-up messaging (21) as well as rule-governed behavior (4). Limitations Both this validation study, and the FORC platform itself, have some limitations that need to be acknowledged. First, the experimental setup was designed to allow a detailed evaluation of the game mechanics, rather than to evoke spontaneous gambling behaviors perfectly reflective of real-life gambling. For example, the return to player rate of the slot machine game was 0.7, which is lower than in typical real-life gambling; although the degree to which participants could discriminate this is unknown (38). For this reason, we refrained from examining associations between observed gambling behavior and collected measures of gambling habits and gambling problems. Instead, we emphasize that each study in which FORC is used should examine convergent validity in relation to what can be reasonably expected given the particular experimental setup used. If, for example, a study aims to immediately promote Pavlovian or instrumental learning in order to avoid possible confounding, the resulting gambling behaviors may be shaped more by the newly learned contingencies than regular gambling strategies, decreasing power to detect convergent associations with survey-reported gambling. Second, the current study did not collect any additional data to examine data quality (e.g., participant ratings or free-text evaluations), opting instead to examine data quality using the same metrics that would be available to subsequent experiments run using the same platform. Importantly, data quality assessment should be carried out in every study that uses FORC, adapted to the specific experimental setup and preferably using pre-registered thresholds. Third, this validation study was not designed to evaluate the optimal description used for recruiting AMT workers to complete the experiment. While the FORC platform was designed to offer great flexibility in terms of experimental setup, some limitations nonetheless apply. First, although the aesthetic of FORC was designed to mimic that of modern online casinos, graphical quality is not fully comparable, at least to those prevalent in Western countries. To some extent, this was a deliberate design decision: too complex graphical presentations may have distracted participants and presented technical issues for users running the experiment on smartphones and cellular internet connections. Also for technical reasons, including background sound was not possible, although FORC does feature realistic casino sound effects. The impact of lack of background music on external validity remains unknown; although background music during e.g., slot-machine playing may drive immersion and put the gambler in a so called "Dark Flow" (39), gambler may be equally likely to turn down repetitive background music of this kind if they find it disturbing or distracting. A second FORC feature limitation is that only one win probability and amount can be set for each trial sequence, unlike in real-life gambling where there are often several win outcomes available, with probabilities decreasing with increasing amounts. 
However, jackpot-type setups can still be simulated by setting up several consecutive trial sequences of the same game, with randomized allocation to different numbers of trials and specific jackpot outcomes if need be. Third, custom gambling options are not available and cannot be simulated at present, meaning that research questions on this particular topic cannot at present be investigated using FORC. Fourth, our subsequent choice to modify the game mechanic to always return the bet on a winning outcome entails that FORC cannot currently be used to study the losses-disguised-as-wins phenomenon (40). Reverting this parameter setting would however require only a minor change to the underlying source code. Finally, it should be acknowledged that, as with real-life gambling outcomes, appropriate statistical methods may be necessary to properly analyze some outcomes, e.g., if a particular experimental setup generates an excess of zeroes (41).

CONCLUSIONS

The Frescati Online Research Casino offers a convenient way of performing large-scale experiments on gambling behavior and responsible gambling tools, with an experience resembling real-life online casino gambling. In this first validation study, we show that behavioral and survey data quality appears adequate, and that the game mechanics work as intended.

DATA AVAILABILITY STATEMENT

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. The authors will consider requests for FORC source code.

ETHICS STATEMENT

The studies involving human participants were reviewed and approved by the Swedish Ethical Review Authority. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS

PL, JR, and PC designed FORC. EI made significant contributions to design and beta-testing. PL oversaw development, analyzed data, and drafted the manuscript. JR, EI, and PC made substantial contributions to the interpretation of data and the revision of the manuscript for important intellectual content. All authors contributed to the article and approved the submitted version.

FUNDING

The FORC project was made possible by two grants to PL, JR, and PC from the Independent Research Council of Svenska Spel, the state-operated gambling provider in Sweden, which has no role in the decisions of the research council. Additional funding comes from the Swedish Research Council for Health, Working Life and Welfare (FORTE) to PC, and an internal grant from the Centre for Psychiatry Research (Region Stockholm and Karolinska Institutet) to PL.
7,656.6
2021-02-05T00:00:00.000
[ "Psychology", "Computer Science" ]
Adaptive entry guidance for the Tianwen-1 mission

To meet the requirements of the Tianwen-1 mission, an adaptive entry guidance law for entry vehicles with low lift-to-drag ratios, limited control authority, and large initial state bias is presented. The entry guidance law is divided into four distinct phases: the trim angle-of-attack phase, the range control phase, the heading alignment phase, and the trim-wing deployment phase. In the range control phase, the predictor-corrector guidance algorithm is improved by planning an on-board trajectory based on the Mars Science Laboratory (MSL) entry guidance algorithm. The nominal trajectory was designed and described using a combination of the downrange value and other states, such as drag acceleration and altitude rate. For a large initial state bias, the nominal downrange value is modified on board by weighing the landing accuracy, control authority, and parachute deployment altitude against one another. The biggest advantage of this approach is that it allows the successful correction of altitude errors and the avoidance of control saturation. An overview of the optimal trajectory design process, including a discussion of the design of the initial flight path angle, the relevant event triggers, and the transition conditions between the four phases, is also presented. Finally, telemetry data analysis and post-flight assessment results are used to illustrate how the adaptive guidance law created good conditions for the subsequent parachute and powered deceleration phases and contributed to the success of the mission.

Introduction

The Tianwen-1 mission flew a lifting entry by offsetting the center of mass to produce a trim angle of attack, and used active on-board guidance to improve the landing accuracy, similar to the successful MSL mission [1]. The design of the entry, descent, and landing (EDL) process using the guidance and control law is a challenging issue for the Tianwen-1 Mars mission. The EDL process starts from the atmospheric entry interface at an ellipsoid altitude of 125 km and an initial velocity of 17,000 km/h. During this process, the vehicle undergoes rapid deceleration through a tenuous atmosphere while autonomously and reliably deploying the trim wing and the parachute. In this paper, the design of the Tianwen-1 adaptive entry guidance law is described. To meet the requirements of the Tianwen-1 mission, it is essential to balance the use of the vehicle's lift between minimizing the range error and ensuring safe deployment. Thus, several considerations, such as selecting deployment altitudes that are as high as possible and choosing robust deployment triggers, must be made in designing the entry guidance. The primary task of early entry guidance schemes [2][3][4] was to increase the landing accuracy, without sufficient concern for the parachute deployment condition. Moreover, due to the limited control authority of the entry vehicle and the large initial state bias, considerable control effort is required to reduce the downrange and cross-range errors, which can result in control saturation. When this happens, the altitude error cannot be corrected in real time, resulting in a parachute deployment altitude that is a few kilometers off-nominal. Further, the altitude and dynamic pressure may then fail to meet the parachute deployment constraints, rendering a landing mission impossible. The entry vehicle for the Tianwen-1 mission has a low lift-to-drag ratio, limited control authority, and a large initial state bias.
This causes the parachute deployment constraints [5], which are based on the altitude and dynamic pressure, to be unmet, which threatens the safety of parachute deployment. To solve the aforementioned problem, the traditional nominal trajectory-based guidance law [6] cannot be adopted for a large initial state bias without a multi-fold growth in the entry guidance gain file [7]. The analytical predictor design and the entry guidance performance of different guidance methods were analyzed in Ref. [8]. As suggested in Ref. [8], the main advantage of predictive guidance is that online updating of the reference trajectory makes it possible to compensate for tracking errors and to improve guidance precision. Adaptive on-board guidance for entry vehicles was investigated in Ref. [9]. Feedback linearization, an effective method for nonlinear control, was applied, without considering the initial state bias, to address drag profile tracking problems. In Ref. [10], an adaptive entry guidance law was designed to improve landing precision and to compensate for deviations in atmospheric density and aerodynamic coefficients. The altitude for parachute deployment is a primary consideration in guidance law design [11]. In Ref. [11], an improved altitude control algorithm was presented that adjusts the bank angle during the low-Mach-number phase. In this study, an adaptive entry guidance logic based on on-board trajectory planning is presented, together with an overview of the design process used to generate optimized guidance tactics. Using the guidance law, the nominal trajectory is first designed and described by the downrange value and other states, such as drag acceleration and altitude rate. Next, owing to the large initial longitude and latitude bias, we analyzed the maximum allowable range for the initial downrange, considering the flyability of the lander and the parachute deployment constraints. If the actual downrange exceeds the allowable range, the onboard trajectory-planning law is adopted. The most important aspect is to modify the nominal downrange value by calculating and compensating for the deviation between the actual value and the allowable value. By using the new nominal downrange value, the successful correction of the altitude error and the avoidance of control saturation, which result from a large initial state bias, is possible using the updated guidance law. The guidance law then accurately drives the lander to the desired landing position with an altitude error as small as possible. To investigate the utility of the proposed method, a simulation was performed to allow for comparison with the MSL guidance law of Ref. [7]. Additionally, optimization designs, such as the improvement of the initial flight path angle and the relevant event trigger designs, are also discussed in this study. Finally, telemetry data analysis and post-flight assessment results are shown to demonstrate the performance of the adaptive guidance law for the Tianwen-1 mission. Reference trajectory design The entry process of Tianwen-1 is divided into four phases: trim angle-of-attack (AOA), range control, heading alignment, and trim-wing deployment phases. The key events during the four entry guidance phases are the deployment of the trim wing and the parachute.
In addition to the project requirements and design principles, it is important to design the transition conditions between the four phases, such as when to begin range control, when to begin heading alignment control, and when to deploy the trim wing and the parachute. These are important for reference trajectory planning and enable the definition of the design requirements and optimization of the initial entry flight path angle (EFPA). Relevant project requirements Tianwen-1 requires a touchdown ellipse of 50 km × 25 km in a selected flat area, subject to constraints on the trim-wing and parachute deployment altitude, Mach number, and dynamic pressure. Dynamic pressure A trim-wing deployment dynamic pressure between 500 and 1200 Pa is required. If the dynamic pressure is greater than 1200 Pa, trim wing breakage will occur. If the dynamic pressure is too low, it may cause the trim-wing deployment to fail. Similarly, a dynamic pressure between 360 and 760 Pa is critical for chute deployment. A sufficient dynamic pressure is beneficial for ensuring inflation. The maximum value limit restrains the resulting peak inflation loads, which may cause the chute to fail. Mach number The Mach number at the chute deployment time should be restricted to between 1.5 and 2.1. The Mach number affects the aeroheating and inflation dynamics of the chute. If the Mach number is too high, the parachute may experience excessive heating at the stagnation point and violent inflation may occur. If the Mach number is too low, the inflation dynamics may not be sufficient to deploy the chute. Deployment altitude A minimum parachute deployment altitude of 4 km is required to allow sufficient time to complete the subsequent descent and landing tasks [12][13][14][15]. Here, the altitude is referenced to the Mars ellipsoid altitude, which is approximately equal to the Mars Orbiter Laser Altimeter (MOLA) altitude. Below this altitude, the chute and propulsive system cannot decelerate the lander in time for a soft landing. Initial EFPA and reference bank angle design The reference trajectory is designed to achieve the highest deployment altitude for a given vehicle configuration and atmospheric conditions, because the selected landing site is large and the range is indeterminate. First, the entry corridor is calculated by considering the heating and the parachute deployment dynamic pressure. To increase the time for favorable communication links, the magnitude of the bank angle was limited to 0°-90°. Subsequently, we carried out a simulation to acquire thousands of reference trajectories for the undispersed cases. In each simulation, the bank angle was traversed in steps of 1° and the flight path angle in steps of 0.1° within the scope of the constraints. A contour plot of the simulation results is shown in Fig. 1. In Fig. 1, one can observe that the altitude changes as a function of the entry flight path angle and the bank angle. The parachute deployment altitude was higher until a peak was reached, after which it decreased. This is because the entry vehicle enters the atmosphere with a shallower flight path angle and then flies with a fixed bank angle, indicating that there is an optimal flight path angle for any bank angle which helps the vehicle reach the highest parachute deployment altitude. Moreover, the maximum altitude increased as the bank angle decreased.
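To make the deployment constraints listed above concrete, the following is a minimal sketch, not Tianwen-1 flight software, of checking them against a navigated state; the function and field names are illustrative assumptions, while the numerical limits are the ones quoted in the text.

```python
# Minimal sketch (assumption, not flight code) of checking the trim-wing and
# parachute deployment constraints quoted above against a navigated state.
from dataclasses import dataclass

@dataclass
class EntryState:
    mach: float             # navigated Mach number
    dyn_pressure_pa: float  # dynamic pressure, Pa
    altitude_km: float      # ellipsoid (approx. MOLA) altitude, km

def trim_wing_deploy_ok(s: EntryState) -> bool:
    # Trim-wing deployment requires 500 Pa <= q <= 1200 Pa.
    return 500.0 <= s.dyn_pressure_pa <= 1200.0

def parachute_deploy_ok(s: EntryState) -> bool:
    # Parachute deployment requires 360-760 Pa, Mach 1.5-2.1, altitude >= 4 km.
    return (360.0 <= s.dyn_pressure_pa <= 760.0
            and 1.5 <= s.mach <= 2.1
            and s.altitude_km >= 4.0)

# Example: a state near the nominal deployment point.
state = EntryState(mach=1.8, dyn_pressure_pa=600.0, altitude_km=11.0)
print(trim_wing_deploy_ok(state), parachute_deploy_ok(state))
```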
Similarly, the downrange contour map can be plotted, after which the reference bank angle profile can be set so that sufficient vertical lift-to-drag (L/D) ratio is held in reserve to correct the range error. By considering the robustness of the range control and the optimized parachute deployment altitude, the reference trajectory was designed with a −11.6° entry flight path angle and a 52° bank angle. Trim-wing and parachute deployment triggers In other successful Mars missions, the parachute deployment trigger design considered only one simple condition, such as dynamic pressure (in the Pathfinder mission) or Mach number (in the Curiosity and Perseverance missions). In fact, the key events in the EDL process, such as trim-wing and parachute deployment, are interrelated and irreversible. Each action trigger needs to meet strict constraints, such as dynamic pressure, altitude, and Mach number constraints. The guidance, navigation, and control (GNC) system must be able to make fully autonomous decisions in a complex and uncertain Martian environment. Thus, it is crucial that event triggers are designed to be reliable and fault-tolerant. The navigation system only uses inertial measurement unit (IMU) acceleration measurements during entry, lacking other sensors to provide information on the position and velocity. If the parachute deployment trigger relies only on navigated state estimates, such as the Mach number or the dynamic pressure, which are greatly affected by the accuracy of the initial orbit and attitude estimation as well as the error of the inertial sensor, it is probable that some other constraints cannot be met when the event is triggered; thus, the safety of the landing scenario will be seriously affected. In view of the above problems, a highly fault-tolerant triggering method is presented in this paper for critical events in the EDL process of Mars exploration in China. In the design, a master and backup triggering scheme was adopted to improve reliability. The following is an example of a trigger condition design for parachute deployment. Main parachute deployment trigger design In general, we chose the Mach number or dynamic pressure as the main judgment condition. The choice should be made according to the flight capability of the actual vehicle, depending on which is the more appropriate condition. The flight capability referred to here is related to the aerodynamic characteristics of the entry vehicle and the Martian atmospheric environment parameters (atmospheric density and temperature). Specifically, it is important to understand the relationship curve between the Mach number and the dynamic pressure near parachute deployment along the flight trajectory, as shown in Fig. 2. From the actual flight trajectory, we selected as the main parachute deployment trigger the constraint condition that is less easily satisfied and is therefore regarded as the more stringent condition. The red lines in Fig. 2 represent the constraint ranges of the Mach number and dynamic pressure. When the Mach number exceeds its constraint (greater than Mach 2.3), the dynamic pressure still meets its constraint; therefore, the Mach number was chosen as the primary trigger condition. Furthermore, 1000 Monte Carlo dispersed simulations were conducted to verify the aforementioned results with aerodynamic model deviations, atmospheric density, temperature, and other state parameter deviations at the entry interface (EI). According to the performance results shown in Fig.
3, the dynamic pressure exceeds its boundary earlier than the Mach number in only one case, and thus it is more reasonable to choose the Mach number as the trigger. Additionally, we can see that the dynamic pressure dispersion corresponding to Mach 1.8 is the smallest. Therefore, the triggering condition for parachute deployment is that the Mach number is greater than 1.8. Backup parachute deployment trigger design The design of the backup parachute deployment trigger is discussed below. The cumulative value of the axial apparent velocity increment, which can be directly measured by an accelerometer, was selected as the backup trigger. Numerous dispersed simulations were performed, and the cumulative value of the axial apparent velocity increment was determined. The cumulative values from the EI to the trim-wing opening, when the main Mach number condition (Mach 2.8) was triggered, and from the EI to the parachute opening (Mach 1.8) were calculated. According to the statistical results, the maximum value of the cumulative axial apparent velocity increment was selected as the backup trigger condition. Next, the feasibility of the backup trigger must be verified by a Monte Carlo simulation. In the simulation, only the backup trigger was used as the judgment condition. Subsequently, we verified whether all the constraints at trim-wing and parachute deployment were met. Heading alignment fixed velocity trigger The critical velocity V_t, which is the velocity of transition from range control to heading alignment, can be determined using the following steps. First, the dynamic model is simplified. The centrifugal and gravitational forces are relatively small in comparison with the aerodynamic forces during the process of re-entry; therefore, these two forces can be neglected. Consider the Mars-relative longitudinal motion state of the vehicle. According to the flight path angle dynamics V(dγ/dt) = L cos σ + (V²/r − µ/r²) cos γ, and setting dγ/dt = 0, a velocity profile can be obtained as Eq. (2): V = sqrt[(µ/r²) / (0.5 (ρ/β) (L/D) cos σ* + 1/r)] (2), where µ is the Martian gravitational constant, ρ is the atmospheric density, and β is the ballistic coefficient. Another velocity profile is the reference trajectory. The velocity at the intersection point of the two velocity profiles is the critical velocity, as shown in Fig. 4. Adaptive entry guidance In this section, a guidance law was designed using online trajectory planning. First, an overview of the guidance law is presented. Next, an adaptive online trajectory design of the guidance law was developed. Guidance law overview The entry guidance is divided into four distinct phases (as shown in Fig. 5), discussed below in the order of their occurrence. (1) The trim angle-of-attack phase. In this phase, the drag acceleration magnitude is smaller than 0.2g, where g is the Earth's gravity constant. Owing to the small aerodynamic force, the downrange control is not sensitive to the different bank angle commands. The bank angle command is held constant in this phase at the initial nominal bank angle, and the angle of attack is maintained at the expected trim angle. At the beginning of this phase, the adaptive range compensation and online trajectory planning algorithm, which will be discussed in Section 4.2, is executed if the initial longitude and latitude errors are greater than the threshold. (2) Range control phase.
Once the drag acceleration magnitude exceeds 0.2g, the GNC flight software of the lander begins range control using an analytical predictor-corrector guidance algorithm. During this phase, the entry guidance law predicts the downrange-to-go error based on the deviations of drag and altitude rate with respect to the nominal reference trajectory profile. Then, the bank angle command is generated to correct for range errors, as shown in Eq. (3), where h is the altitude, ḣ is the altitude rate, DR is the downrange, V is the relative velocity, and σ is the bank angle. The reference value is represented by the superscript *, the feedback coefficients are F_1, F_2, and F_3, and K is set to a value of 5 to produce an "over control" condition [16,17]. The predicted cross-range at landing was used in the controller to correct the cross-range error. (3) Heading alignment phase. Once the velocity has dropped past a critical value V_t, the guidance ceases range control and begins heading alignment. When the velocity decreases, the lift force restricts the rate of the flight path angle to be greater than zero; therefore, the bank angle is no longer effective for controlling the range. The bank angle is then commanded to steer the lander to offset the cross-range error. The bank angle command can be obtained as Eq. (4), where K_1 is set to a value of ten to produce an "over control" condition, CR_pre is the predicted cross-range at landing, which is different from the MSL entry guidance algorithm, and S_togo is the downrange to go to the desired landing point. (4) Trim-wing deployment phase. The bank command is modulated to 0° when the trim wing is deployed. Adaptive guidance law First, the allowed maximum initial downrange range [DR_min−, DR_max+] should be analyzed using a Monte Carlo simulation, considering all model parameter uncertainties, bank angle constraints, and all parachute deployment constraints. Then, the actual initial downrange DR_0 is calculated using the longitude and latitude of the actual initial entry point and the target landing point. The deviation of the initial downrange from the maximum scope is then given by DR_err = DR_0 − DR_min−/max+, where DR_err is the compensation of the initial downrange error in the different cases. Once the initial longitude and latitude biases are larger than the threshold value, the reference total downrange to go should be updated as DR*_total,new = DR*_total − DR_err. Then, the trajectory should be redesigned online using the Newton iteration method, where x = cos σ* and f(x) = DR*_total,new − DR_total(σ*) is the numerically predicted downrange error between the predicted total downrange DR_total(σ*), computed with the online estimated drag acceleration and L/D, and the reference total downrange to go. Finally, we substitute the updated downrange reference value DR*_new into Eq. (3) to produce the adaptive guidance law. To demonstrate the effectiveness of the proposed guidance law, a numerical simulation was conducted using the parameters of the MSL entry vehicle. Figures 6 and 7 illustrate the performance comparison when the initial latitude error is equal to 0.4° with the application of the presented guidance law and the MSL guidance law. The control saturation problem that occurred in the initial entry phase when using the MSL guidance law is shown in Fig. 6. The executed bank angle was 90°, which lasted for 50 s. The proposed adaptive guidance law avoids control saturation by adopting the onboard trajectory-planned algorithm.
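As a concrete illustration of the online replanning step just described, the following is a minimal sketch, an assumption rather than the flight code: it solves f(x) = DR*_total,new − DR_total(x) = 0 for x = cos σ* with Newton iterations, using a placeholder downrange predictor in place of the onboard numerical propagation of the entry dynamics.

```python
# Minimal sketch (assumption, not flight code) of the Newton-iteration replanning
# of the reference bank angle described above. `predict_total_downrange` is a
# placeholder for the onboard predictor that would integrate the entry dynamics
# with the estimated drag acceleration and L/D.
import math

def predict_total_downrange(cos_sigma: float) -> float:
    # Illustrative monotone model: a flatter bank (larger cos sigma) flies farther.
    return 800.0 + 450.0 * cos_sigma   # km, made-up numbers

def replan_reference_bank(dr_total_new_ref_km: float,
                          cos_sigma0: float = 0.6,
                          tol_km: float = 0.1,
                          max_iter: int = 20) -> float:
    x = cos_sigma0
    for _ in range(max_iter):
        f = dr_total_new_ref_km - predict_total_downrange(x)
        if abs(f) < tol_km:
            break
        eps = 1e-4  # finite-difference step for df/dx
        df = -(predict_total_downrange(x + eps) - predict_total_downrange(x)) / eps
        x = min(1.0, max(0.0, x - f / df))   # keep cos(sigma) within a 0-90 deg bank
    return math.degrees(math.acos(x))        # updated reference bank angle, deg

print(replan_reference_bank(dr_total_new_ref_km=1100.0))
```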
Moreover, as shown by the subgraph in Fig. 7, the parachute deployment altitude is found to be 10.78 km using the MSL guidance law, while 11.29 km is obtained when the proposed law is applied. The presented law therefore provides a parachute deployment altitude that is higher by approximately 500 m. In Fig. 8, the L/D ratio estimation of the entry vehicle changes with the trim angle of attack over the entire process. The maximum dynamic pressure was below 4000 Pa, and the dynamic pressures at trim-wing deployment and parachute deployment both met the constraints. In Fig. 9, we can see that there are four bank reversals over the entire entry process. After the control target transitioned from range control to heading alignment, the bank angle was limited to 15° until the trim wing was deployed. This limiter improves the parachute deployment altitude to 13 km; however, it may have an adverse effect on cross-range error correction. After the trim wing was opened, the bank angle was kept at zero degrees. After the vehicle entered the Martian atmosphere, lift control began, and the sideslip angle was kept within the expected ±5° range. In Fig. 10, the longitude and latitude of the navigated position information changed with time and became increasingly closer to the target landing site. In Fig. 11, the cumulative values of the axial velocity increment at the trim-wing and the parachute deployment instants were 4320 and 4576 m/s, respectively, and neither violated the backup triggers. Conclusions A new adaptive entry guidance law that includes four phase algorithms, namely, the adaptive range compensation algorithm, analytic predictive correction guidance algorithm, heading alignment algorithm, and zero bank command algorithm, was designed for the Tianwen-1 Mars atmospheric entry process. Accounting for the limited control authority and the large initial state bias, the guidance law allows for the successful correction of the altitude error and avoidance of control saturation by modifying the onboard nominal trajectory. The numerous modifications for this process include initial flight path angle optimization to obtain the highest parachute deployment altitude, and a backup parachute deployment trigger to improve mission reliability. The effectiveness of the entry guidance algorithm was demonstrated by a parachute deployment altitude of approximately 13 km, which provided plenty of time to accomplish the subsequent phase deceleration, and enabled the successful landing of Tianwen-1 with a small touchdown ellipse of 3.1 km × 0.2 km at the selected landing site. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
5,352.2
2022-01-04T00:00:00.000
[ "Engineering", "Physics" ]
Rechargeable Zn2+/Al3+ dual-ion electrochromic device with long life time utilizing dimethyl sulfoxide (DMSO)-nanocluster modified hydrogel electrolytes Despite recent advances in hydrogel electrolytes for flexible electrochemical energy storage, ion conductors still exhibit some major shortcomings, including low ionic conductivity and short lifetimes. As such, for applications in electrochromic batteries, a transparent, highly conductive electrolyte based on a dimethyl-sulfoxide (DMSO) modified polyacrylamide (PAM) hydrogel is being developed and implemented in a dual-ion Zn2+/Al3+ electrochromic device consisting of a Zn anode and WO3 cathode. Gelation in a DMSO : H2O mixed solvent leads to highly increased electrolyte retention in the hydrogel and a prolonged lifetime for ionic conduction. The hydrogel-based electrochromic device offers a specific charge capacity of 16.9 μAh cm−2 at a high current density of 200 μA cm−2 while retaining 100% coulombic efficiency over 200 charge–discharge cycles. While the DMSO-modified electrolyte shows ionic conductivities up to 27 mS cm−1 at room temperature, the formation of DMSO : H2O nanoclusters enables ionic conduction even at temperatures as low as −15 °C and retention of ionic conduction over more than 4 weeks. Furthermore, the electrochromic WO3 cathode gives the device a controllable absorption with up to 80% change in transparency. Based on low-cost, earth-abundant materials like W (tungsten), Zn (zinc) and Al (aluminum) and a scalable fabrication process, the introduced hydrogel-based electrochromic device shows great potential for next-generation flexible and wearable energy storage systems. In Fig. S3(a), an assembled single-layer dual-ion device on ITO/glass was tested for its ionic conductivity in operation. The fresh device shows a resistance of 17 Ohm as the x-intercept, which is reduced to 16 Ohm when colored and 16.5 Ohm for the bleached device, indicating the insulator/metal transition of the device.² Resistance values were determined as the high-frequency intercept with the x-axis following a Randles cell model for an equivalent circuit with mixed kinetic and charge-transfer control (Fig. S14). The unmodified sample dries out after approximately 8 days and does not show any conductivity after 4 days. The modified sample retains 5% of its conductivity after 10 days. With increasing humidity, the unmodified samples retain their conductivity longer, but not as long as the modified samples. The DMSO-modified hydrogel retains nearly 75% ionic conductivity after 10 days of storage at 50% RH. The RH was held constant by nitrogen and air flow into a closed-off box. Optical transmission spectra were recorded for ITO/PET, the polyacrylamide hydrogel, and the combination of both. The optical transmission of the single substrate is around 82% throughout the visible spectrum. PAM shows slightly increased transmission around 90%, while the combination of PAM and ITO shows nearly perfect transmission around 95%. The increase in transmission can be attributed to the change at the ITO interface. When in contact with PAM, the scattering from the ITO surface roughness is decreased. The WO3 electrodes were bleached/colored in aqueous mixed electrolytes before investigation and repeatedly rinsed in DI water to remove traces of the electrolyte. For the colored device, there is a slight increase in the aluminum signal, which is a signature of aluminum intercalation. Fig.
S10 shows the faradaic capacity plotted against the inverse square root of the scan rate. By interpolating the experimental values (black), the surface-capacitive contribution towards the overall capacity can be determined.³ The intercept is 96.7 mF/cm², and the slope is 167 mF/cm².
Supplementary figure and table captions:
Figure S1: EDX spectra of dried hydrogels with and without DMSO modification. Both samples were soaked in ZnSO4 for 72 h to create a reference background of Zn, and freeze drying was conducted for over 24 h.
Figure S2: Possible mechanisms of DMSO incorporation into PAM-hydrogel upon gelation of acrylamide ((a) water molecule ...).
Figure S3: Electrochemical properties of the electrolyte, electrodes and device. (a) EIS analysis of the assembled device; (b) CV cycling of a first and a 100-times-cycled device at a sweep rate of 100 mV/s.
Figure S4: (a) Relative weight and (b) ionic conductivity over time for samples with and without DMSO modification.
Figure S5: Optical transparency of ITO/PET substrate, DMSO-modified hydrogel and the combination of both.
Figure S6: Photographs of two hydrogel samples after 28 days of drying, with (left) and without (right) DMSO modification.
Figure S7: CV measurement of the WO3-zinc electrochromic device at a 0.5 mV/s sweep rate.
Figure S8: Retention of device capacity over 30 cycles of repeated pressure (a) and bending (b).
Figure S9: (a) XPS spectra of the Al2p region with early onset of Al2p X-ray emission; XPS spectra of the oxygen 1s region corresponding to lattice oxygen O2− (530 eV) and OH− as defect oxygen (531.9 eV). Panels (b-d) show the O1s signal of a pristine, single-cycled and 100-times-cycled WO3 electrode, indicating that lattice oxygen is replaced when Al3+ intercalation occurs.
Figure S10: Capacity extracted from the CV measurements in Fig. 4(a) plotted against the inverse square root of the scan rate.
Figure S11: (a) and (b) Charge-discharge profiles for electrochromic double-layer dual-ion batteries.
Figure S12: Comparison of charge-discharge capacity for dual-ion electrochromic devices in different electrolyte concentrations. The best charge capacity is achieved for a balanced electrolyte, while high relative concentrations of Al3+ lead to high …
Figure S13: Modified schematic for the wearable prototype device; the zinc strip is attached to the side of …
Figure S14: Equivalent circuit used to determine the ionic conductivity of devices, where Rs is the series resistance of the cables and the electrolyte, Rct is the faradaic charge-transfer resistance at the electrolyte/electrode interface, Cdl is the double-layer capacitance at the interface, and Zw is the Warburg impedance, which models ion diffusion into the electrodes.
Table S1: Elemental composition of hydrogel samples with and without DMSO modification.
Table S2: EDX analysis of a pristine, one-time-cycled and post-mortem (1000 cycles) WO3 electrode of an electrochromic device in ZnAl electrolyte. EDX analysis was conducted at 20 kV at 2000 times magnification.
Table S2 compares the elemental composition of electrochromic devices during their life cycle. The pristine sample shows the element W from the tungsten oxide and In and Sn from the ITO layer. Si arises from the glass substrate. Trace amounts of C are atmospheric impurities. Chlorine and sulphur species are part of the electrolyte. The first cycle shows an atomic ratio of 4.6:1 Al to Zn, while the post-mortem electrode shows a ratio of 5.4:1 for the two species. Table S3: EDX analysis of hydrogel samples. Weight percentage of elements in a pristine hydrogel and in the one used in cycling an electrochromic device. Both samples were initially soaked in 1 M ZnAl electrolyte.
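The capacity-separation analysis described above for Fig. S10, fitting capacity against the inverse square root of the scan rate so the intercept gives the surface-capacitive contribution, can be sketched as follows. This is an assumption about the workflow, not the authors' analysis script, and the data points below are made up to roughly reproduce the quoted intercept and slope.

```python
# Minimal sketch (assumption): separating surface-capacitive and diffusion-limited
# contributions by a linear fit of capacity vs. 1/sqrt(scan rate), as in Fig. S10.
import numpy as np

scan_rates = np.array([0.5, 1.0, 2.0, 5.0, 10.0])         # mV/s, hypothetical
capacity = np.array([331.0, 262.0, 215.0, 172.0, 150.0])  # mF/cm^2, hypothetical

x = 1.0 / np.sqrt(scan_rates)
slope, intercept = np.polyfit(x, capacity, 1)

print(f"surface-capacitive contribution ~ {intercept:.1f} mF/cm^2")
print(f"diffusion-limited slope ~ {slope:.1f} mF/cm^2 per (mV/s)^-0.5")
```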
1,633.8
2019-10-07T00:00:00.000
[ "Engineering", "Materials Science" ]
Investigating the Effect of Rainfall Parameters on the Self-Cleaning of Polluted Suspension Insulators: Insight from Southern China The cleaning effect of heavy rain (rainfall reaching 5 mm per day) on the surface contamination of insulators is more effective than that of dew, fog, mist, and other light rain conditions, which can initiate leakage currents and increase the likelihood of flashover. It is well understood that heavy rain can wash away contamination from the surface of high voltage (HV) insulators and thereby reduce the risk of pollution flashover. This study examines the cleaning effect of natural wetting conditions on HV insulators on four 500 kV transmission lines in Hunan Province, China. Historical meteorological data and monthly equivalent salt deposit density (ESDD) and non-soluble deposit density (NSDD) measurements taken over a period of five years were analyzed to investigate the relationship between rainfall intensity and insulator cleaning. The measured data show that the ESDD/NSDD changes with the seasons: contamination accumulates in the dry season (January–April, about 117–122 days) and is washed off in the wet season (June–October, about 118–127 days). According to the measured data, the ESDD and NSDD on the surface of insulators were affected by the rainfall intensity (about 1 mm/day in the dry season and about 5 mm/day in the wet season). Based on a comparison of the four study sites, we propose a mathematical model to describe the functional relationship between rainfall intensity and insulator self-cleaning capability. The mathematical model's coefficient of determination (R2) is greater than 0.9 and the effective rate of self-cleaning capability reaches 80%. Introduction It is well understood that airborne contamination can strongly influence the voltage that HV insulators can withstand [1][2][3]. Therefore, accurate measurement and prediction of natural contamination on insulator surfaces is important for taking the appropriate measures to prevent pollution flashover in power systems. Some typical pollution environments are defined as follows: marine environments, industrial environments, agricultural environments, and desert environments, since these release pollutants such as marine salts, inert dust, and high concentrations of salt. These environments may appear alone or in some combination. The winter environment, especially periods with low absolute humidity, dust, wind, and exposure to sand and salt (from road salting), is most similar to the desert environment. This means that some seemingly illogical combinations, such as agricultural and desert conditions, do occur in winter in some areas [4][5][6]. Due to the high accumulation rates of natural contamination from industry and agriculture on insulator surfaces in Southern China during the winter months, an insulator cleaning program has been devised for transmission lines in the region [7,8]. However, the cleaning scheme is carried out in December and only once per year; therefore, it is not very effective and the insulator surface cannot remain clean for the whole year. The accumulated contamination increases again with air pollution after a few months, thereby endangering the safe operation of power networks.
The accumulation and removal of contamination involve two main processes: contamination of HV insulators is highest during the dry season and lowest during the wet season, when rainfall efficiently removes some of the surface contaminants [9,10]. At present, research mainly focuses on the pollution accumulation process, with studies on the air quality index (AQI), particle size and gravity, wind speed, electric field intensity, and the force of adhesion, etc. [11,12]. However, there have been few studies on the effect of rainfall flushing. The existing research relies on artificial rainfall experiment platforms [13][14][15][16]. However, artificial pollution tests on outdoor insulators mostly consider idealized parameter conditions, and the test results do not conform well to reality. Therefore, it is very important to study the cleaning effect of natural rainfall, which can be used to predict the pollution degree and provides a reference for electric power departments when cleaning the external insulation of transmission lines. In Southern China, rainfall mostly occurs in the summer months (April to August, about 90 rain days). During this season, the pollution on the insulator surface decreases due to the rainfall. In some special geographies, such as coastal areas and islands, the rainfall intensity needs to reach above 4.0 mm/day to largely and significantly affect the contamination on outdoor insulators. In [17], the coastal and island meteorological conditions were considered and a mathematical model was introduced, which can be used to predict the pollution performance of insulators on islands under similar conditions. The Egyptian electric power company selected six regions with different types of pollutants in Northern Egypt for monitoring. The study was conducted through measurements of equivalent salt deposit density (ESDD), surface conductivity, maximum leakage current, and flashover voltage. The results are useful for assessing the insulation performance in different environmental situations, improving the design if necessary, and finally proposing an updated pollution map of Northern Egypt [18]. A paper by Ahmad et al. [19] proposes a new relationship of ESDD with six meteorological variables: temperature, humidity, pressure, rainfall, wind speed, and wind direction. Multiple linear regression techniques have been used to predict ESDD from these six relevant meteorological parameters. A paper by Allister et al. [20] presents specific data in terms of ESDD and the pollution index of the Dhahran test site, including significant meteorological parameters (ambient temperature, relative humidity, quantity of rainfall, pressure, wind speed and wind direction). The quantity of rainfall appears to be the most influential meteorological parameter responsible for self-cleaning. However, the underside of insulators, especially those of anti-fog insulators with deep ribs that are less exposed to wetting, remains relatively un-cleaned. A paper by Lin et al. [21] investigates the influence of insulator shed profiles on insulator contamination performance in heavy industrial contamination areas in Shanghai, China.
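The multiple-linear-regression idea attributed to Ref. [19] above can be sketched briefly. The example below is an illustration only, not the cited authors' code: the data are synthetic and the variable names are assumptions.

```python
# Minimal sketch (assumption, not the code of Ref. [19]): predicting ESDD from six
# meteorological variables with ordinary least squares on synthetic monthly data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 60  # e.g., monthly observations over five years
met = pd.DataFrame({
    "temperature":    rng.normal(17, 8, n),     # deg C
    "humidity":       rng.uniform(40, 95, n),   # % RH
    "pressure":       rng.normal(1010, 8, n),   # hPa
    "rainfall":       rng.gamma(2.0, 2.5, n),   # mm/day
    "wind_speed":     rng.gamma(2.0, 1.5, n),   # m/s
    "wind_direction": rng.uniform(0, 360, n),   # deg
})
# Synthetic target: ESDD decreases with rainfall; values kept positive.
esdd = np.clip(0.20 - 0.012 * met["rainfall"] + rng.normal(0, 0.02, n), 0.001, None)

model = LinearRegression().fit(met, esdd)
print(dict(zip(met.columns, np.round(model.coef_, 4))), round(float(model.intercept_), 4))
```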
This paper investigates the role and characteristics of natural rainfall in reducing the surface pollution on HV insulators. Then, for the important specific pollution sources, such as industrial, local agricultural, transportation, or geographic conditions, a suitable exponential model of salt losses is considered, which can effectively verify and predict the insulator surface pollution characteristics. Based on the large amount of historical and measured data collected over five years, we provide an analysis that will aid power system operators in deciding when, or whether, to initiate insulator cleaning during winters in Southern China. Test Locations To study the effect of natural rainfall flushing on the accumulated pollution, natural contamination tests were carried out in Hunan Province, and de-energized glass standard disc insulators on four 500 kV transmission lines were selected. The specimen insulators were installed on the cross-arm near the transmission lines, at a distance of 500 mm-600 mm from the transmission tower. In each selected area, five adjacent transmission line towers were equipped with about 50 pieces of insulators to study the pollution accumulation on the insulator surface (shown in Figure 1). The process of pollution accumulation has been analyzed, and the data were measured for each month from 2011 to 2015. The structural parameters of the insulator are presented in Table 1. Before the experiment, all of the insulators were cleaned and naturally dried in a clean place.
Meteorological Condition To determine the effect of rainfall, the major sources of pollution around the test areas and the pollution level were considered. On this basis, the He-Yun II line, Xing-Yun line, Chuan-Gu I line, and Gu-Xing I line were selected (shown in Table 2). Figure 2 indicates the average air quality index for every month; the main air pollutant is particulate matter (PM10), followed by SO2, and 30% of the total months from 2011 to 2015 reach or exceed the level of slight pollution. As shown, the air quality is very poor in the winter season, and the rainfall in the wet summer season helps to clear up the air pollution. The test sites are situated in South China, which has four distinct seasons: a significant amount of rainfall at the end of spring and the beginning of summer (the rainy weather reached more than 800 days over the five years), and drought at the end of summer and in autumn. Figure 3 shows the average amount of rainfall for every month in 2011-2015. As illustrated in the figure, the main rainy season lasts from April to October, while the dry season is from November to January of the next year (these data come from the local meteorological statistical departments). The weather in spring (February, March, and April) is changeable and wet; there are more sunny days in summer (May, June, and July) and autumn (August, September, and October). The annual average temperature in Hunan is about 17 °C (63 °F). By comparing the air quality index (AQI) with the average rainfall, the level of high pollution always appears in winter, when the rainfall days are fewer and the intensity of rainfall is light. Therefore, rainfall is the main factor influencing the AQI.
ESDD and NSDD Measurement Method The samples were polluted by the natural environment. Before exposure, all samples were carefully cleaned so that all traces of dirt and grease were removed. The samples were then dried naturally. The ESDD/NSDD were obtained by weighing a dried filter paper, pouring a sample of insulator wash water through it, then redrying and reweighing the filter. The conductivity and the temperature of the water containing the pollutants give the ESDD. The change in weight (in mg or µg) divided by the surface area wiped (in cm²) gives the NSDD. The details of the measuring procedure are fully described in Annex C of IEC Standard 60815 (2008) [22].
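As a rough illustration of how these measurements translate into deposit densities, the following sketch follows the general IEC procedure cited above; it is an assumption for illustration, the temperature-correction factor b and all numeric inputs are made up, and the exact conversion should be taken from the standard itself.

```python
# Minimal sketch (assumption, not the authors' procedure) of computing ESDD and
# NSDD from wash-water measurements, loosely following IEC 60815 / IEC 60507.

def esdd_mg_per_cm2(sigma_theta_S_per_m: float, theta_C: float,
                    volume_cm3: float, area_cm2: float,
                    b: float = 0.02) -> float:
    # Correct the conductivity to 20 C, convert to equivalent NaCl salinity,
    # then spread the dissolved salt over the wiped area.
    sigma20 = sigma_theta_S_per_m * (1.0 - b * (theta_C - 20.0))
    salinity_kg_per_m3 = (5.7 * sigma20) ** 1.03   # kg/m^3 == mg/cm^3
    return salinity_kg_per_m3 * volume_cm3 / area_cm2

def nsdd_mg_per_cm2(filter_before_mg: float, filter_after_mg: float,
                    area_cm2: float) -> float:
    # Weight gain of the dried filter paper divided by the wiped area.
    return (filter_after_mg - filter_before_mg) / area_cm2

# Example with made-up measurements for one insulator disc.
print(esdd_mg_per_cm2(sigma_theta_S_per_m=0.05, theta_C=25.0,
                      volume_cm3=300.0, area_cm2=1500.0))
print(nsdd_mg_per_cm2(filter_before_mg=900.0, filter_after_mg=1100.0,
                      area_cm2=1500.0))
```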
The insulator strings were hung on the left and right sides of the towers, and the contamination on the second to the fourth insulator units was measured. The test period was from 2011 to 2015, and contamination was measured on the 28th of every month. In Figure 4a-e, the ESDD and NSDD reach their peak values (ESDD/NSDD maximum data from Table 3) in February during the period from 2011-2015. At the same time, the rainfall intensity is very small (about 0.91 mm/day to 4.0 mm/day) in this month. Then, the wet season (starting in March) comes, and the extreme value of the average monthly rainfall intensities appears in June or July. The value of ESDD reaches its lower level (about 0.015 mg/cm²) in these months. However, the NSDD does not yet reach its lowest value; it continues to decrease until the rainy season is over in September or October. The lowest NSDD value is 0.05 mg/cm² to 0.12 mg/cm². Comparing the two years 2012-2013 with the other years, the effect of rainfall with heavy intensity and long duration in directly flushing out the NSDD was more obvious, since there were more heavy rainy days in the summer during these two years (19.81 mm/day and 24.61 mm/day). The changes of ESDD and NSDD also reflect the relationship between contamination and rainfall intensity. Results and Discussion Based on statistics of the actual rainfall, ESDD, and NSDD during 2011-2014, the variation trend of ESDD/NSDD with duration and season in 2015 is predicted. As shown in Figure 5, the predicted values are fitted to the actual values in 2015. The curves show that the air pollution concentration is higher, which promotes particle accumulation on the insulator surface, from October to February of the next year. Thus, the actual value and predicted value both reach their maximum (ESDD: 0.187 mg/cm² and 0.214 mg/cm²; NSDD: 0.416 mg/cm² and 0.470 mg/cm²). Then, the actual value and predicted value both reach their minimum during the rainfall season from March to October in 2015 (ESDD: 0.016 mg/cm² and 0.018 mg/cm²; NSDD: 0.077 mg/cm² and 0.083 mg/cm²). According to the ESDD and NSDD trends during different periods of the year, the ESDD/NSDD values basically follow the rule of seasonal variation.
Comparison of ESDD/NSDD Levels at Different Times of the Year Because rainfall washes the surface contamination of insulators to different degrees, the washing effect differs between ESDD and NSDD. The variation trends of ESDD and NSDD were studied; Figure 4b shows that NSDD decreases significantly, while the change in ESDD is smaller during the rainy season. The value of NSDD declined by 75%, whereas the value of ESDD declined by only 50%, because the ESDD was already washed off in the initial rainy days in April to May and the maximum value of ESDD is smaller than the maximum value of NSDD in February; therefore, the NSDD declined faster than the ESDD. Generally, the value of ESDD starts to trend downward in March, when the rainfall intensity is not yet strong. From June to July, the rainfall intensity is strong, but the ESDD decreases gently, while during August to September the ESDD is almost unchanged. The ESDD then reaches a balance value (about 0.01 mg/cm²). Since the upper insulator piece shelters the one below, the residual contamination is not easily washed off by rainfall. In the initial period of rainfall, the NSDD decreased gently with the rainfall duration. From April to July the variation trend is very clear. In the later rainy season, the NSDD still decreased, but the wash-off effect is not obvious. This is because the effect of weather on the NSDD lags behind that on the ESDD. It can be seen from the above experimental results that the soluble pollution was flushed in the initial phase of rainfall and was taken away by the run-off that formed on the surface of the insulator. Even with small amounts of rainfall, the soluble pollutants can be cleaned effectively. Meanwhile, the wash-off law of the insoluble pollutants is different; they can only be removed by the mechanical force of the moving water and are washed away only when the surface run-off rate reaches the relevant speed. In other words, the wash-off of insoluble pollutants needs sufficient rainfall intensity. The main components of ESDD are soluble pollutants, and those of NSDD are insoluble pollutants. Therefore, the ESDD is not only dissolved in water, but also subjected to the mechanical force of rainfall during the wash-off process. The Effect of Washing in Different Test Sites In order to investigate the insulator natural contamination trend under the different physiognomies of the test sites, the ESDD/NSDD values for the four typical types of pollution were measured in 2014. On the basis of the ESDD and NSDD test data, the results show that different areas have different cumulative contamination features.
Comparative analysis was conducted on the contamination data from the four 500 kV transmission lines (as shown in Figure 6). It is easy to determine that, among the four sites, the contamination in the industrial area (Figure 6c) is the most serious (the maxima of ESDD and NSDD are 0.658 mg/cm² and 0.410 mg/cm²), and the carbon emissions/particulate matter from automobile traffic on a nearby highway along the He-Yun line (transportation) is the second most serious (the maxima of ESDD and NSDD are 0.50 mg/cm² and 0.31 mg/cm²). The two areas have a common feature: the pollution accumulated on the HV insulators shows a sharp increase in NSDD and ESDD during the dry season. When the wet season (June-October, about 118-127 days) comes, there is a sharp decrease in NSDD and ESDD. Figure 6d shows that there are fewer contamination sources in the geographic condition (the maxima of ESDD and NSDD are 0.358 mg/cm² and 0.250 mg/cm²) than at the other sites. However, the contamination decreased only gently over the rainfall duration. Since, in the geographic condition, the transmission lines run across forests and mountains, the surrounding environment can greatly weaken the cleaning effect on salt, and the NSDD/ESDD values still reach 0.4 mg/cm² and 0.28 mg/cm², respectively. As shown in Figure 6b, the contamination level in the local agricultural condition (the maxima of ESDD and NSDD are 0.41 mg/cm² and 0.256 mg/cm²) is in the middle of the four test sites; the main sources are fertilizers and soil particles from nearby farmland. Therefore, the power department should make different insulator cleaning plans for the different types of pollution found at each site. The Influence of Rainfall Intensity on Flushing The contamination accumulation process on insulators shows seasonal variation. During the washing process, the washing effect of rainfall intensity on the surface contamination of insulators was evident. Based on the measured contamination data of the four test sites, the values of ESDD and NSDD from February to September 2012 were selected to study the effect of washing. During the wet season, the rainfall intensity increased with the months, and the degree of contamination was affected by the rain wash-off. The experimental results show how the residual contamination deposit densities change with rainfall intensity, as shown in Figure 7.
In Figure 7a-d, the washing effect on insulator contamination is mainly effective during the initial rainfall. The residual contamination deposit densities diminish largely while the rainfall intensity is still at a relatively low level (ranging from 1.5 mm/day to 4.0 mm/day), especially on the He-Yun transmission line and the Xing-Yun transmission line (Figure 7a,b). The value of ESDD decreases from 0.35 mg/cm² to 0.10 mg/cm² and the value of NSDD decreases from 0.45 mg/cm² to 0.21 mg/cm² at low rainfall intensity, and they then diminish over a small range with further increasing rainfall intensity. Finally, the ESDD and NSDD stop changing after the rainfall intensity reaches a high level (about 10 mm/day). The data from all four test sites show that the values of ESDD and NSDD are very small (only 0.012 mg/cm² and 0.10 mg/cm², respectively) when the intensity reaches 10 mm/day. Therefore, this value of rainfall intensity is the common threshold across all sites. A rainfall intensity of 10 mm/day is regarded as the standard value, and it helps the power department to count the effective rainfall days for every month. Based on the measured values at the four test sites, the functional relationship between the intensity of rainfall and the measured values was obtained by curve fitting as: ESDD = M_ESDD e^(−aI) (1) and NSDD = M_NSDD e^(−bI) (2), where M_ESDD and M_NSDD are the ESDD and NSDD values in February, I is the rainfall intensity, and a and b represent the test sites' respective washing rates. Based on the data of ESDD and NSDD, the cleaning model of insulators was established. The model can be used to predict the variation trend of contamination on the surface, and a time plan regarding when to clean the insulators in the future is put forward as well.
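A brief fitting sketch for the exponential cleaning model of Eqs. (1)-(2) is given below. It is an assumption for illustration rather than the authors' fitting script; the sample data points are made up, loosely following the residual-ESDD trend reported for the He-Yun site.

```python
# Minimal sketch (assumption): fitting ESDD = M_ESDD * exp(-a * I) to residual
# deposit densities measured at increasing rainfall intensity I (mm/day).
import numpy as np
from scipy.optimize import curve_fit

def cleaning_model(intensity, M, rate):
    return M * np.exp(-rate * intensity)

rain_intensity = np.array([0.9, 1.5, 2.5, 4.0, 6.0, 8.0, 10.0])         # mm/day
residual_esdd = np.array([0.33, 0.26, 0.18, 0.10, 0.05, 0.025, 0.013])  # mg/cm^2

popt, _ = curve_fit(cleaning_model, rain_intensity, residual_esdd, p0=(0.35, 0.3))
M_fit, a_fit = popt
residuals = residual_esdd - cleaning_model(rain_intensity, *popt)
r2 = 1 - np.sum(residuals**2) / np.sum((residual_esdd - residual_esdd.mean())**2)
print(f"M_ESDD = {M_fit:.3f} mg/cm^2, a = {a_fit:.3f} per (mm/day), R^2 = {r2:.3f}")
```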
Conclusions

The test observation in Hunan Province lasted for five years, and experiments were carried out to study the washing effect of rainfall intensity. From the investigation, the major results can be summarized as follows:

(1) Based on the historical meteorological data and the monthly equivalent salt deposit density (ESDD) and non-soluble deposit density (NSDD) measurements taken over a period of five years, the contamination accumulating on the surface of insulators presents seasonal changes: contamination accumulates in the dry season (January-April) and is washed off in the wet season (June-October).

(2) The effects of rainfall on flushing the ESDD and NSDD are very different. The main components of ESDD are soluble pollutants, which dissolve in rainwater in addition to being subject to its mechanical force, whereas the main components of NSDD are insoluble pollutants, which are only removed by mechanical splashing. Thus, the measured NSDD changes more slowly than the ESDD with increasing rainfall intensity and duration.

(3) By analyzing the contamination data of 2014 in terms of ESDD and NSDD, the primary contamination law was determined. Among the four test sites, the contamination in the industrial area is more serious than at the other test sites, and there are fewer contamination sources under the geographic condition than at the other sites.

(4) Based on the functional relationship between rainfall intensity and the measured values, the residual ESDD and NSDD decrease with increasing rainfall intensity; a rapid decline takes place at a relatively low rainfall intensity, after which the values diminish only slightly as the intensity increases further. Therefore, the washing effect on insulator contamination mainly works during the initial period of rainfall.

(5) Based on the ESDD and NSDD data, the cleaning model of the insulators was established. Overall, by monitoring rainfall intensities throughout the year and knowing the background ESDD/NSDD levels, the proposed model can provide decision support for operators to decide if, or when, contaminated insulators need to be cleaned.

3.1.1. Seasonal Variation: ESDD/NSDD Trends during Different Periods of the Year

According to Figures 2 and 3, the data show that the airborne contamination present in the atmosphere is seasonally dependent: the majority of contamination accumulates during the drier winter months (January-April) and is washed off during the wet summer months (June-October). The average values of the measured ESDD and NSDD between 2011 and 2015 also show seasonal variation: contamination accumulates in the dry season and is washed off in the wet season. Therefore, the contamination deposit densities change with the amount of rain, as shown in Figure 4a-e.
Table 1. Main parameters and profile of the insulator tested (columns: Type, Material, Main Parameters (mm), Superficial area (mm²), Profile). In the table, D is the shed diameter, H is the configuration height, L is the leakage distance, and S is the superficial area.

Table 2. Information on the experimental sites.

Table 4. Fitting results of the cleaning model of insulators.
7,697.2
2017-05-01T00:00:00.000
[ "Physics" ]
Emotions in online rumor diffusion Emotions are regarded as a dominant driver of human behavior, and yet their role in online rumor diffusion is largely unexplored. In this study, we empirically study the extent to which emotions explain the diffusion of online rumors. We analyze a large-scale sample of 107,014 online rumors from Twitter, as well as their cascades. For each rumor, the embedded emotions were measured based on eight so-called basic emotions from Plutchik’s wheel of emotions (i.e., anticipation–surprise, anger–fear, trust–disgust, joy–sadness). We then estimated, using a generalized linear regression model, how emotions are associated with the spread of online rumors in terms of (1) cascade size, (2) cascade lifetime, and (3) structural virality. Our results suggest that rumors conveying anticipation, anger, and trust generate more reshares, spread over longer time horizons, and become more viral. In contrast, a smaller size, lifetime, and virality are found for surprise, fear, and disgust. We further study how the presence of 24 dyadic emotional interactions (i.e., feelings composed of two emotions) is associated with diffusion dynamics. Here, we find that rumor cascades with high degrees of aggressiveness are larger in size, longer-lived, and more viral. Altogether, emotions embedded in online rumors are important determinants of the spreading dynamics. Introduction Social media platforms such as Facebook, Sina Weibo, and Twitter allow users to disseminate content through sharing (e.g., called retweeting in the case of Twitter). As a result, content can go viral and reach a large audience despite the fact that it originated from a single broadcast. To this end, understanding the diffusion of online content is relevant for a number of reasons. Marketers are interested in identifying what makes content go viral, so that marketing content can be designed accordingly [1][2][3][4]. Humanitarian organizations leverage the potential of online diffusion in social media to collect information for effective responses to natural disasters and to inform the wider public [5][6][7]. Public stakeholders are confronted with the diffusion of political content and, by understanding the underlying mechanics, can help prevent the spread of rumors [8][9][10][11]. Previous research has identified several drivers of online diffusion (see Additional file 1 for an overview). These drivers are primarily located in the different characteristics of senders. For instance, senders with a larger follower base (i.e., with more outgoing ties in the network) also reach, on average, a larger audience [12]. Other characteristics of senders are the number of followees (i.e., how many incoming ties a user has [13][14][15]) or their past engagement (i.e., the number of posts or reshares [11]). A different stream of research has examined online diffusion around specific topics (e.g., a specific election [9] or a specific disaster [5][6][7][16][17][18][19]). In this work, we add to this literature by studying the role of emotions in the diffusion of online rumors. Emotions have been established as an important determinant of human behavior in offline settings [20][21][22]. Emotions typically arise as a response to environmental stimuli that are of relevance to the needs, goals, or concerns of users and, as a consequence, also guide user behavior in online settings [23]. Emotions influence what type of information users seek, what they process, how they remember it, and ultimately what judgments and decisions they derive from it.
Emotions are themselves contagious and can spread among people, both offline (i.e., in person) [24] and online (i.e., via social media) [25][26][27][28][29]. Following the above, emotions embedded in online content are an important driver of online behavior. For instance, it was previously confirmed that emotions influence posting and liking activities [30], users' willingness-to-share [1], and actual sharing behavior [2,[31][32][33]. As such, embedded emotions explain, to a large extent, the propensity to share posts, as well as user response time. Here, emotional stimuli such as emotion-laden wording trigger cognitive processing [34], which in turn results in the behavioral response of information sharing [35][36][37]. In particular, emotions embedded in online content also explain the dynamics of online diffusion. For instance, emotions describe different properties of diffusion cascades, such as their size, branching, or lifetime [38][39][40][41]. Especially misinformation relies upon emotions in order to attract attention [11,38,[42][43][44][45][46]. Given the importance of emotions in online behavior, we investigate how emotions are linked to the spread of online rumors. Hypothesis: Emotions embedded in online rumors are associated with the size, lifetime, and structural virality of the cascade. In this study, we empirically analyze to what extent emotions explain the diffusion of online rumors. For this, we infer the emotions embedded in replies to online rumors through the use of affective computing (see Methods). For each rumor, the degree of emotion is rated along so-called basic emotions. Basic emotions refer to a subset of emotions that are universally recognized across cultures and through which other, more complex emotions can be derived. In this work, we adopt Plutchik's wheel of emotions [22], comprising 8 basic emotions (ANTICIPATION, SURPRISE, ANGER, FEAR, TRUST, DISGUST, JOY, SADNESS). Based on these, we infer 24 dyadic emotional interactions, each representing a more complex emotion composed of two basic emotions (e.g., AGGRESSIVENESS as a combination of ANGER and ANTICIPATION). These emotions are then linked to the spread of online rumors using regression analysis. Thereby, we estimate to what extent emotions embedded in online rumors explain: (1) cascade size, that is, how many reshares a rumor generates; (2) cascade lifetime, that is, how long a rumor is active; and (3) structural virality, that is, how effectively it spreads. The latter, structural virality, provides a quantitative metric [47] aggregating the depth-breadth variation in rumor diffusion. One earlier work [11] contains summary statistics reporting which emotions are present in online rumors but not how emotions affect sharing. Hence, any statistical claims measuring the emotion effect (i.e., which emotions drive faster and wider rumor spreading) are precluded. This is the added value of our work. We measure how emotions are associated with the diffusion dynamics (e.g., TRUST as an emotion is present in only a small portion of rumors, but it has a large influence on virality). Because of this, our work differs in several ways: (i) we focus not only on basic emotions but also on dyadic emotions, (ii) we infer the emotion effect on diffusion dynamics, and, because of that, (iii) we use a regression analysis as opposed to summary statistics. Therefore, this work is, to the best of our knowledge, the first comprehensive study assessing the link between emotions and the spread of online rumors.
We analyze a large-scale, representative sample of Twitter rumors and their corresponding cascades [11]. Specifically, our data cover the complete time frame from the launch of Twitter in 2006 until (and including) 2017. Altogether, this results in 2189 rumors associated with 107,014 cascades. The sample comprises approx. 3.7 million reshares that originate from almost 3 million different users. Based on the cascades, various control variables are constructed. Specifically, in our regression analysis, we capture time- and rumor-effects through the use of random effects, based on which we control for the heterogeneity among rumors (see Materials and Methods).

Dataset A rumor is defined as a piece of content that is propagated between users but without confirmation of its veracity. This definition is rooted in the social psychology literature [43,48]. For this study, a large-scale dataset comprising rumor cascades from Twitter [11] was analyzed. The resulting sample comprises all rumors from Twitter between its founding in the year 2006 until (and including) 2017. Ethics approval was obtained from ETH Zurich (2020-N-44). Overall, our sample includes 2189 rumors with a total of N = 107,014 cascades (i.e., some rumor contents were shared as part of multiple but different cascades). The rumors had approx. 3.7 million reshares originating from 3 million users (see [11] for details).

Characteristics of online rumor diffusion The cascades were then processed as follows in order to generate additional variables. These variables refer to different characteristics of online rumor diffusion and later represent the dependent variables in the regression analysis. For simplicity, we introduce the following notation. We refer to the cascades via j = 1, ..., N. These belong to i = 1, ..., 2189 different rumors. Each cascade is a three-tuple T_j = (r_j, t_{j0}, R_j), where r_j is the root post that corresponds to the original broadcast, t_{j0} is its timestamp, and R_j is the set of reshares. A reshare k has a parent p_{jk} and a timestamp t_{jk}, i.e., R_j = {(p_{jk}, t_{jk})}_k.

(1) Cascade size: The cascade size counts how many reshares a cascade generated. Formally, it amounts to all reshares plus 1 (for the root), i.e., |R_j| + 1.

(2) Cascade lifetime: The cascade lifetime is the timespan during which a rumor cascade was active, thus the elapsed time between the root broadcast and the last reshare. It is calculated via max_k t_{jk} − t_{j0}.

(3) Structural virality: Structural virality [47] provides an aggregated metric combining the depth and breadth of a cascade. A higher structural virality corresponds to a cascade that is both of great depth and where each reshare generated a large relative number of additional reshares (i.e., a high branching factor). As proposed in [47], structural virality is based on the idea of the Wiener index, i.e.,

\nu(T_j) = \frac{1}{n(n-1)} \sum_{j_1=1}^{n} \sum_{j_2 \neq j_1} d_{j_1 j_2},

where n is the number of nodes in T_j and d_{j_1 j_2} is the length of the shortest path between nodes j_1 and j_2 in the tree T_j. Intuitively, structural virality reflects the average distance between all reshares in the graph.

Model variables on heterogeneity between rumor cascades Model variables x_j, concerning the heterogeneity among rumor cascades, were computed as in earlier research [11,12,31,38]. These later act as controls.
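To make the three cascade characteristics concrete, here is a minimal sketch of how they could be computed for one cascade represented as a list of (node, parent, timestamp) reshares; the use of networkx and the toy input are illustrative assumptions, not the authors' pipeline.

```python
import networkx as nx

def cascade_metrics(root_time, reshares):
    """reshares: list of (node_id, parent_id, timestamp) for one cascade;
    the root broadcast is node 0 with timestamp root_time."""
    # (1) Cascade size: all reshares plus one for the root broadcast.
    size = len(reshares) + 1

    # (2) Cascade lifetime: elapsed time between root and last reshare.
    lifetime = max(t for _, _, t in reshares) - root_time if reshares else 0.0

    # (3) Structural virality: average shortest-path distance between
    #     all pairs of nodes in the cascade tree (Wiener-index idea).
    tree = nx.Graph()
    tree.add_node(0)
    for node, parent, _ in reshares:
        tree.add_edge(parent, node)
    n = tree.number_of_nodes()
    total = sum(sum(d.values()) for _, d in nx.shortest_path_length(tree))
    virality = total / (n * (n - 1)) if n > 1 else 0.0

    return size, lifetime, virality

# Toy cascade: root at t = 0 with three reshares
print(cascade_metrics(0.0, [(1, 0, 2.0), (2, 1, 5.0), (3, 0, 7.5)]))
```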
In our study, controls are (1) account age; (2) a binary dummy representing whether the account is officially labeled as "verified" (= 1 if yes, i.e., Twitter displays a blue badge next to it); (3) the number of followers (outgoing ties); (4) the number of followees (incoming ties); and (5) user engagement, that is, the average number of posts, reshares, and likes relative to the account age as in [11]. These variables reflect that the senders of rumors vary in their social influence. Note that all of the above variables were computed at the level of cascades (which is later our unit of analysis). Additional sources of heterogeneity among rumors are captured via rumor-level random effects.

Computing emotions embedded in online rumors For all cascades, we measured the emotions embedded in replies to rumor cascades. Here, we distinguish basic emotions, bipolar emotion pairs, and dyadic emotional interactions comprising primary, secondary, and tertiary dyads. The computation of the emotions is detailed below (see [22] for further details).

Basic emotions: Basic emotions refer to a subset of emotions that are universally recognized across cultures and through which other, more complex emotions can be derived [20,21]. In our study, Plutchik's wheel of emotions [22] is adopted as it is a common tool in affective computing [49]. It defines 8 basic emotions (see Fig. 1). Our computation follows a dictionary-based approach as in [11]. Dictionary-based approaches are widely used when large-scale analyses of emotions are performed with the objective of explanatory modeling and thus reliable interpretations [38,41]. In our work, the NRC emotion lexicon was used [50], which classifies English words into the 8 basic emotions. For all cascades j, the content of the replies was tokenized and the frequency of dictionary terms per basic emotion was counted, resulting in an 8-dimensional emotion score e_j. Afterwards, the vector was normalized to sum to one across basic emotions (i.e., e_j is divided by its L1 norm). We omit rumor cascades that do not contain any emotional words from the NRC emotion lexicon (since, otherwise, the denominator is not defined). As a result, the 8 emotion dimensions in e_j ∈ [0, 1]^8 range from zero to one. Owing to this fact, replies to rumors can embed a combination of multiple emotions (e.g., 40% ANGER and 60% FEAR).

Bipolar emotion pairs: The 8 basic emotions form 4 bipolar pairs (ANTICIPATION–SURPRISE, ANGER–FEAR, TRUST–DISGUST, JOY–SADNESS), each combining an emotion treated as positive with its opposite treated as negative. We calculate a 4-dimensional score φ_j^pairs that measures the difference between a specific positive emotion and its complement from the set of negative emotions. For example, ANGER–FEAR refers to the difference between ANGER and FEAR.

Dyadic emotional interactions: Plutchik's wheel of emotions further defines 24 dyadic emotional interactions, which are more complex emotions composed of two basic emotions (see Fig. 1, round lines). The dyadic emotional interactions comprise: (1) primary dyads, which combine emotions that are one petal apart from each other on the wheel (e.g., AGGRESSIVENESS = ANGER + ANTICIPATION); (2) secondary dyads, which combine emotions that are two petals apart; and (3) tertiary dyads, which combine emotions that are three petals apart.
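Before the regression model is specified, the dictionary-based scoring described above can be sketched roughly as follows; the tiny word-to-emotion lexicon is a stand-in for the NRC emotion lexicon that would be used in practice.

```python
import re
from collections import Counter

EMOTIONS = ["anticipation", "surprise", "anger", "fear",
            "trust", "disgust", "joy", "sadness"]

# Toy stand-in for the NRC emotion lexicon: word -> list of basic emotions
LEXICON = {
    "outrage": ["anger", "disgust"],
    "scared":  ["fear"],
    "hope":    ["anticipation", "trust"],
    "shock":   ["surprise"],
}

def emotion_score(text):
    """Return the normalized 8-dimensional emotion score e_j for one reply,
    or None if no emotional words are found (such cascades are omitted)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        for emo in LEXICON.get(tok, []):
            counts[emo] += 1
    total = sum(counts.values())
    if total == 0:
        return None
    return {emo: counts[emo] / total for emo in EMOTIONS}

score = emotion_score("Such outrage, I am scared but there is hope")
print(score)
# Example bipolar pair score: ANGER minus FEAR
print(score["anger"] - score["fear"])
```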
Regression analysis To analyze the role of emotions in online rumor diffusion, we apply a generalized regression model. Regression models are generally regarded as an explanatory approach with the ability to document statistical relationships and, in particular, to estimate effect sizes [51]. Furthermore, regression models are widely used to estimate the marginal effect of content on diffusion characteristics [11,31,38,41]. This allows us to later make inferences that test our research hypothesis statistically.

Let y_j denote a characteristic of the cascade of interest, namely cascade size, cascade lifetime, or structural virality. We then model y_j via a two-level generalized hierarchical regression,

y_j = α_i + β^T φ_j + γ^T x_j + ε_j  (level 1: cascade level),
α_i = γ_0 + γ_i  (level 2: rumor level),

where level 1 refers to the cascade level and level 2 to the rumor level. The other variables are as follows. The coefficient vector β captures the marginal effect of emotions. This is later our variable of interest as it measures the contribution of emotions to rumor diffusion. The coefficient vector γ is used to control for other model variables at the rumor cascade level. Both γ_0 and γ_i are assumed to be independent and identically normally distributed with mean zero. Then γ_0 reflects the base diffusion in the sample, while γ_i controls for variation at the rumor level. Notably, this turns α_i into a rumor-specific random effect. The error term ε_j is assumed to be independent and identically normally distributed with mean zero.

The use of regression analysis is imperative for the scope of our study. The reasons are as follows. (1) Our objective is different from predictive modeling [51], where the focus is on accurate estimates of the outcome variable. Instead, we are concerned with the model logic as it allows us to interpret the model coefficients. (2) Our objective is also different from analyzing summary statistics as in [11]. Summary statistics deal with comparisons across groups and thereby ignore other sources of heterogeneity in the sample. For instance, the summary statistics on rumor emotions in [11] only report which emotions are common but not how emotions are associated with sharing dynamics. This is especially relevant for our research as we expect that some properties of rumor diffusion are also due to the social influence of the sender. Hence, by combining emotions and further controls in a joint regression model, we can isolate the marginal effect of emotions on the diffusion dynamics, which would not be possible with summary statistics. Note that a regression analysis based directly on the basic emotions is precluded due to multicollinearity (recall that the emotion scores e_j sum to one across basic emotions). Instead, the regression analysis is performed using the bipolar emotion pairs φ_j^pairs. For the emotional dyads, we fit 12 separate models, i.e., one for each pair among the dyads, due to linear dependencies between the dyads.

In our implementation, the estimator depends on the distribution of y_j as follows: (1) Cascade size is modeled via a negative binomial regression with log-transformation. The reason is that cascade size denotes count data with overdispersion (i.e., variance larger than the mean). (2) Cascade lifetime is first log-transformed and then modeled via a normal distribution. This is consistent with previous research assuming a log-normal distribution for response times [12]. (3) Structural virality is modeled via a gamma regression with a log-link. This allows us to account for a skewed distribution of continuous, non-negative variables. All estimations are conducted based on the R package lme4. Before estimation, all model variables are z-standardized. Owing to this, the regression coefficients quantify changes in the dependent variable in standard deviations. This is beneficial as it allows us to compare the estimated coefficients across emotions in a straightforward manner.
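As a rough illustration of this estimation strategy, the sketch below fits the cascade-lifetime model (log-transformed outcome with a rumor-level random intercept) using statsmodels; the authors used the R package lme4, so this is only an approximate Python analogue, and the file name and column names are assumed for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed cascade-level data: one row per cascade with an emotion pair score,
# sender controls, the outcome, and the rumor identifier.
df = pd.read_csv("cascades.csv")  # hypothetical file

df["log_lifetime"] = np.log(df["cascade_lifetime"])

# z-standardize predictors so coefficients are comparable across variables
predictors = ["anger_fear", "followers", "followees",
              "account_age", "verified", "engagement"]
df[predictors] = (df[predictors] - df[predictors].mean()) / df[predictors].std()

# Linear mixed model with a rumor-specific random intercept (alpha_i);
# analogous models could be fitted for the other emotion pairs and outcomes.
model = smf.mixedlm(
    "log_lifetime ~ " + " + ".join(predictors),
    data=df,
    groups=df["rumor_id"],
)
result = model.fit()
print(result.summary())
```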
Summary statistics The diffusion dynamics in our data are as follows. Figure 2 compares cascade size, lifetime, and structural virality via complementary cumulative distribution functions (CCDF). On average, a rumor cascade reaches 31.95 users and has a lifetime of 123.18 hours. The mean structural virality is 1.26.

Basic emotions: Fig. 3 plots the CCDFs for each of the eight basic emotions, while Fig. 4 reports the relative proportion of emotional intensity averaged over all rumors. We find that a large proportion of rumors embed DISGUST and SURPRISE, whereas comparatively few rumors embed JOY and SADNESS. Evidently, rumors embed more ANGER (relative share of 12.34%) than FEAR (10.74%), more SURPRISE (16.44%) than ANTICIPATION (14.23%), more DISGUST (23.58%) than TRUST (9.05%), and more JOY (7.39%) than SADNESS (6.23%).

Dyadic emotional interactions: Fig. 5 shows the distribution of the dyadic emotional interactions. For the primary emotion dyads, we find that a large proportion of rumors embed CONTEMPT and REMORSE, whereas fewer rumors embed LOVE and SUBMISSION. For the secondary and tertiary emotion dyads, we find that many rumor cascades embed UNBELIEF and SHAME. In contrast, only a relatively small proportion of rumors embed DESPAIR and PESSIMISM. Note that the above summary statistics only report the relative frequency of emotions but do not allow one to draw conclusions regarding how users respond to emotions. This is studied in the following regression analyses.

Regression results from bipolar emotion pairs In the following, we report results for the bipolar emotion pairs φ_j^pairs. We use regression analysis to explain different characteristics of cascades based on the bipolar emotion pairs. The parameter estimates in Fig. 6 show that the 8 basic emotions are important determinants of the spreading dynamics of rumors. Across all dependent variables, we find coefficients that are positive and statistically significant for the ANTICIPATION–SURPRISE, ANGER–FEAR, and TRUST–DISGUST dimensions. Hence, rumors are estimated to diffuse more pronouncedly when embedding positive emotions. The predicted marginal effects for the bipolar emotion pairs are shown in Fig. 7. Rumors embedding ANTICIPATION, ANGER, and TRUST generate more reshares, spread over a longer time horizon, and become more viral. The coefficient for the JOY–SADNESS emotion pair is not significant.

Our regression model controls for heterogeneity in users' social influence. The corresponding estimates are omitted for the sake of brevity (their findings have been discussed elsewhere, e.g., in [31]). In short, rumor cascades initiated from accounts that are verified and younger are linked to a larger, longer, and more viral spread. Similar relationships are observed for users exhibiting a higher engagement level and a greater number of followers. In contrast, a higher number of followees is negatively associated with the size, lifetime, and structural virality of a cascade. We calculated the pseudo-R² for each model, resulting in relatively high values of 0.64 for cascade size, 0.43 for cascade lifetime, and 0.31 for structural virality. Evidently, the model variables explain the variation in the dependent variables to a large extent. Furthermore, a visual inspection of the actual vs. fitted plots and goodness-of-fit tests indicate that the models are well specified. This is also supported by the differences in AIC between models estimated with and without the emotion variables.
For each dependent variable, the difference is greater than the threshold [52] of 10 (difference for cascade size: 226.16; lifetime: 52.22; structural virality: 121.03), indicating strong support for the corresponding candidate models. Therefore, the inclusion of the emotion variables in the regression model is to be preferred.

Regression results from dyadic emotional interactions We now study how the presence of 24 dyadic emotional interactions is associated with the diffusion dynamics of online rumors. For this purpose, we employ the previous regression model, but this time include the emotion variables φ_j^primary, φ_j^secondary, and φ_j^tertiary. Figure 8 shows the predicted marginal effects for the 8 primary, 8 secondary, and 8 tertiary dyadic emotional interactions.

Primary dyadic emotional interactions: Rumor cascades with higher values of AGGRESSIVENESS, LOVE, and OPTIMISM are larger in size, longer-lived, and more viral. We observe no statistically significant effect for the SUBMISSION–CONTEMPT pair. Overall, the largest positive association is observed for AGGRESSIVENESS (i.e., the combination of ANTICIPATION and ANGER). An increase of one standard deviation in this dimension is linked to a 19.18% increase in the cascade size, an 8.33% increase in the cascade lifetime, and a 1.69% increase in structural virality.

Secondary dyadic emotional interactions: Rumor cascades with higher values of HOPE vs. UNBELIEF generate more reshares, spread over a longer time horizon, and become more viral. We further find that rumor cascades embedding GUILT and DESPAIR are negatively associated with the size, lifetime, and structural virality of a cascade. The CURIOSITY–CYNICISM pair is not statistically significant at common statistical significance levels.

Tertiary dyadic emotional interactions: Rumor cascades with higher values of ANXIETY are larger in size, longer-lived, and more viral. We also find a larger size, lifetime, and virality for rumor cascades embedding high levels of DOMINANCE, PESSIMISM, and ANXIETY. We find no statistically significant effect for the SENTIMENTALITY–MORBIDNESS pair. The control variables tend in a similar direction as in the analysis of the basic emotions. Again, the difference in AIC (comparing the models with and without emotions) is above the common threshold of 10 [52]. Therefore, the models that include emotions are to be preferred.

Sensitivity across rumor topics Our empirical analysis is based on a large-scale dataset with Twitter rumors across varying topics. We now study topic-specific variations. For this purpose, we employ the topic categorization from [11], which classifies Twitter rumors into topics. Here, we focus on the topics Politics, Business, and Science given their high relevance for society. Note that the topic Science is broadly defined and also comprises related topics such as health-related rumors. For each of the three topics, we generate a subset of the data and re-estimate our models. The results are visualized in Fig. 9, which shows standardized parameter estimates and 95% confidence intervals for the different subsets of rumors filtered by topic. We find that emotions explain differences in cascade size, cascade lifetime, and structural virality at a statistically significant level for the topics Politics and Business. In contrast, we find mixed results for Science. These results are in line with existing literature. For example, [31] find a pronounced role of political content in social media sharing.
The authors argue that political topics are more controversial and thus attract more attention, which itself influences sharing behavior.

Model checks We conducted a series of additional model checks that contribute to the robustness of our findings. First, we followed common practice in regression analysis and checked that variance inflation factors, as an indicator of multicollinearity, were below five [53]. This check led to the desired outcome. Second, we controlled for year-level time effects (i.e., via clustered standard errors and different study horizons) in addition to the rumor-level random effects that are already included in our regression model. We obtained conclusive findings. Third, we controlled for non-linear relationships via quadratic terms. In all cases, our findings were supported.

Validation of emotion scores Our results rely on the validity of dictionaries to extract emotions from online rumors. To check how perceived emotions in rumors align with the dictionary-based emotions, we conducted a survey using the online survey platform Prolific (https://www.prolific.co/). We asked n = 7 participants (English native speakers) to rate the presence of the eight basic emotions on a Likert scale from -3 to 3 (here, -3 indicates no emotion present, while 3 refers to a high degree of emotion present) for a set of 100 randomly sampled rumors. As shown in Table 1, the participants exhibited a statistically significant interrater agreement according to Kendall's W for each of the 8 basic emotions (p < 0.01). Overall, when aggregating across all 8 basic emotions, the correlation between the dictionary-based emotion scores and the human annotations is ρ = 0.17 (p < 0.01) and thus statistically significant at common significance thresholds. This demonstrates that dictionaries are able to capture emotions in online rumors.

Negation handling We performed negation scope detection [54,55] to analyze the robustness to how negations (e.g., "not," "no") are handled by the dictionary approach. For example, phrases like "I am surprised" and "I am not surprised" contain the same number of emotional words but convey different emotions to the reader. We analyzed emotional words that are negated by surrounding negation words as follows: (i) We searched for negations using a predefined list of negation words. Here, we used the list of negations from the R package sentimentr. (ii) We recalculated the emotion scores by counting all emotional words in the neighborhood of a negation word as belonging to the opposite emotional dimension (e.g., Joy = Joy + negated Sadness terms). The neighborhood is set to 5 words before and 2 words after the negation. We then compared the emotion scores with negation handling to the values obtained without negation handling. As a result, we found that merely 5.58% of the emotional words in rumors are affected by negations (i.e., lie within negation scopes). Furthermore, the emotion scores with negation handling are highly correlated with the emotion scores without negation handling (ρ > 0.9). Altogether, this implies that our analysis and findings are robust to negations.
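A rough sketch of this negation-handling check is given below; the window sizes and the opposite-emotion mapping follow the description above, while the negation list and lexicon are toy stand-ins for sentimentr's negation list and the NRC lexicon.

```python
import re

NEGATIONS = {"not", "no", "never", "without"}  # stand-in for sentimentr's list
OPPOSITE = {"joy": "sadness", "sadness": "joy",
            "anger": "fear", "fear": "anger",
            "trust": "disgust", "disgust": "trust",
            "anticipation": "surprise", "surprise": "anticipation"}
LEXICON = {"surprised": "surprise", "happy": "joy", "afraid": "fear"}  # toy lexicon

def emotion_counts_with_negation(text, before=5, after=2):
    """Count emotion words, flipping those inside a negation scope
    (up to `before` words before and `after` words after a negation)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    negated = set()
    for i, tok in enumerate(tokens):
        if tok in NEGATIONS:
            negated.update(range(max(0, i - before), min(len(tokens), i + after + 1)))
    counts = {}
    for i, tok in enumerate(tokens):
        emo = LEXICON.get(tok)
        if emo is None:
            continue
        emo = OPPOSITE[emo] if i in negated else emo
        counts[emo] = counts.get(emo, 0) + 1
    return counts

print(emotion_counts_with_negation("I am not surprised at all"))  # counted as anticipation
```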
Discussion In this work, we provided a large-scale study of emotions in online rumor diffusion. For this purpose, 2189 rumors from Twitter with approx. 3.7 million reshares were analyzed with regard to the embedded emotions. Our main findings are as follows: (1) Negative emotions are frequently embedded in rumors; especially frequent are DISGUST (relative share of 23.58%) and SURPRISE (16.44%). (2) The relationship between emotions and the structure of a cascade is statistically significant at common significance levels for almost all emotions under study. (3) Rumors embedding ANTICIPATION, ANGER, and TRUST are estimated to reach a significantly larger number of individuals and to diffuse significantly longer and more virally. Interestingly, while negative emotions are more often embedded in rumors, positive emotions are particularly relevant for explaining the diffusion dynamics. (4) A particularly large effect of emotions on the diffusion characteristics is found for AGGRESSIVENESS (which is a derived emotion composed of ANTICIPATION and ANGER). A one standard deviation higher level of AGGRESSIVENESS is predicted to generate 19.18% more reshares, to be active for 8.33% longer, and to spread 1.69% more virally. Overall, our study establishes emotions as important determinants that describe the spread of online rumors.

Our results contribute to the understanding of online rumor diffusion. As shown by our analysis, emotions are important determinants in explaining the structure of rumor cascades, specifically how many users are involved, the active lifespan and, to a lesser extent, structural virality. The findings are consistent across basic emotions and also dyadic emotional interactions (primary, secondary, tertiary). In addition, our results suggest considerable heterogeneity in the role of emotions. Strong effects are found for most basic emotions (ANTICIPATION, SURPRISE, ANGER, FEAR, TRUST, DISGUST), albeit with the exception of JOY and SADNESS. Similar patterns are observed when studying more complex (derived) emotions. Here, the largest estimated effect size is associated with AGGRESSIVENESS. A one standard deviation higher level of AGGRESSIVENESS is predicted to generate 19.18% more reshares, cascades that are 8.33% longer-lived, and a 1.69% increase in structural virality. Thereby, we reveal AGGRESSIVENESS as a dominant driver of rumor diffusion.

Our work also expands upon rumor theory from offline settings. Offline rumors have a higher chance of dissemination when conveying anxiety [56] and, in particular, negative emotions [42,43]. However, the underlying evidence stems from offline rumors rather than online rumors. Our work adds in two ways: First, we study the role of emotions in the diffusion of online rumors. While rumor diffusion in offline settings is more pronounced for negative emotions, we observe the opposite for online rumors, for which positive emotions appear more influential. Second, we not only compare positive vs. negative emotions but perform a granular study across primary, secondary, and tertiary emotional dyadic interactions. This provides rich findings on the heterogeneity of emotion effects. As such, we confirm that ANXIETY is an important driver for rumor diffusion not only in offline but also in online settings. However, further emotions are also relevant: a particularly pronounced role is found with regard to AGGRESSIVENESS. To the best of our knowledge, the importance of AGGRESSIVENESS in rumor diffusion was previously overlooked. In our study, inferences were made based on data from Twitter. Twitter is widely popular, with more than 300 million active users. In addition, it plays an important part in rumor diffusion due to its influential role in the political discourse [10]. This makes our findings directly relevant to both social media platforms and, in particular, public stakeholders.
For the same reason, established procedures were followed when compiling the data [11], as this ensures that findings are drawn from a realistic, large-scale dataset of Twitter rumors. To the best of our knowledge, our work is the first statistical analysis linking emotions to online rumor diffusion. As with other studies, ours is subject to limitations that provide opportunities for future research. First, this study is based on observational inferences, while we leave the extension to (quasi-)experimental settings, and thus causal inferences, to future work. Nevertheless, our study design ensures that many potential confounding factors can be ruled out. This is because of the temporal order (i.e., the emotion-laden wording precedes the actual cascade) and the fact that further sources of variability among rumors are captured through rumor-level random effects. Second, our study employs statistical inferences that provide explanatory insights. This allows us to quantify the marginal contribution of emotions to online rumor diffusion. A different objective is to use emotions for predictive modeling, which is discussed elsewhere [57][58][59][60]. Our work entails several implications. It emphasizes the necessity of considering emotions when studying rumor diffusion. Emotions are also relevant in practice, particularly for social media platforms. To counter the proliferation of online rumors, social media platforms should seek solutions through which emotions can be actively managed. Our study also encourages a granular investigation of emotions for related research questions, whereby not only basic emotions but also derived emotions are considered. Such granular analyses are comparatively more challenging in lab experiments; however, a remedy is offered by computational social science, in which large-scale datasets of online behavior can be mined.
7,121.8
2021-10-18T00:00:00.000
[ "Computer Science", "Psychology" ]
Farm Vehicle Following Distance Estimation Using Deep Learning and Monocular Camera Images This paper presents a comprehensive solution for distance estimation of the following vehicle solely based on visual data from a low-resolution monocular camera. To this end, a pair of vehicles were instrumented with real-time kinematic (RTK) GPS, and the lead vehicle was equipped with custom devices that recorded video of the following vehicle. Forty trials were recorded with a sedan as the following vehicle, and then the procedure was repeated with a pickup truck in the following position. Vehicle detection was then conducted by employing a deep-learning-based framework on the video footage. Finally, the outputs of the detection were used for following distance estimation. In this study, three main methods for distance estimation were considered and compared: linear regression model, pinhole model, and artificial neural network (ANN). RTK GPS was used as the ground truth for distance estimation. The output of this study can contribute to the methodological base for further understanding of driver following behavior, with the long-term goal of reducing rear-end collisions. Introduction Road traffic injuries are among the eight main causes of death, according to the World Health Organization [1]. Rear-end collisions are one of the most frequent among the various types of crashes and account for 6.7 percent of fatalities and injuries yearly [2]. Several factors contribute to the occurrence of rear-end crashes, such as vehicle types, road conditions, and driver characteristics. Numerous studies have applied deep learning methods to analyze the underlying factors that may contribute to crashes [3][4][5][6][7][8][9][10]. The National Motor Vehicle Crash Causation Survey (NMVCCS) found possible driver contribution for 94% of crashes [11]. The most common driver-attributed factors were recognition errors, including driver inattention and distraction. To mitigate the risk of rear-end collisions, driver assistance systems that can reliably predict a collision and provide timely warnings have been developed [12]. These systems estimate the relative distance to the vehicle ahead. Then, time-to-collision (TTC) is calculated based on the estimated relative distance and the vehicles' speeds, and if the calculated TTC is less than a certain threshold, a collision warning is issued [13]. To calculate the TTC, the relative distance to the vehicle ahead should be estimated as accurately as possible. Several methods utilizing various types of sensors have been introduced for distance estimation. For example, radar sensors, which are commonly used to estimate depth ranges, are especially beneficial in adverse weather and poor illumination conditions [14,15]; however, these sensors are relatively expensive. Vision-based forward collision warning (FCW) systems have been investigated as a lower-cost alternative to radar; they use cameras to detect the vehicle ahead and provide the necessary warnings to the driver to avoid rear-end crashes [16][17][18][19]. Unlike radar sensor data, image data do not contain depth information. The depth of the objects captured in the image can be estimated by relating the size of the objects present in the image to their size in the real world, as the height of an object in the image is inversely proportional to its distance from the camera [20][21][22][23][24]. Several methods have been used to extract object depth information from image data.
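To make the warning logic described above concrete, a minimal sketch of a TTC check is shown below; the 2.5 s threshold and speeds are illustrative placeholders rather than values from this study.

```python
def time_to_collision(distance_m, closing_speed_mps):
    """TTC in seconds; closing speed is how fast the gap is shrinking (m/s)."""
    if closing_speed_mps <= 0:  # gap constant or growing: no collision course
        return float("inf")
    return distance_m / closing_speed_mps

def should_warn(distance_m, lead_speed_mps, follow_speed_mps, threshold_s=2.5):
    ttc = time_to_collision(distance_m, follow_speed_mps - lead_speed_mps)
    return ttc < threshold_s

# Following vehicle at 18 m/s, lead vehicle at 13 m/s, 10 m apart
print(should_warn(10.0, lead_speed_mps=13.0, follow_speed_mps=18.0))  # True (TTC = 2.0 s)
```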
Generally, there are two main vision-based methods for depth estimation: stereo- and monocular-vision approaches. The former uses multi-view geometry and stereo image pairs to rebuild a 3D space and generate the depth information of the target. However, errors and computational complexities from the calibration and matching of stereo image pairs reduce the measurement accuracy and efficiency. Monocular-vision methods, however, have certain advantages, such as being less expensive, having a simple hardware structure, and having a wide field of application. Generally, monocular-vision methods for distance estimation can be divided into two categories. In the first category, the distance estimation is conducted based on the geometric relationship and camera-imaging model [25]. In these types of methods, several parameters of the camera setup (e.g., the elevation of the camera and the measured object; the height of the target vehicle) need to be provided in advance. Liu et al. used the geometric positional relationship of a vehicle in the camera coordinate system to construct the correspondence between the key points in the world coordinate system and the image coordinate system, and then they established a ranging model to estimate the target vehicle distance [25]. Kim et al. used the camera imaging model and the width of the target vehicle to estimate the distance to a moving vehicle that is far ahead [22]. The second category involves constructing a regression model using machine learning. Wongsaree et al. trained a regression model using the correspondence between different positions in an image and their corresponding distances to complete distance estimation [26]. Gökçe et al. used the target vehicle information to train a distance regression model for distance estimation [27]. The main disadvantage of these methods is that they have to collect a large number of training data with real distances. The primary objective of this study is to develop and validate a distance estimation method using monocular video images recorded by a custom data collection device that was designed to study driver behavior while approaching, following, and overtaking farm equipment traveling in the same direction. Two factors make this applied situation novel. First, existing vehicle TTC estimates are based on calculations from a forward-facing system assessing an object that the equipped vehicle is approaching. The unique nature of our question required the opposite: a rear-facing system estimating distance from an object approaching the equipped vehicle. Second, farm equipment, which has a wide range of size and operational features, behaves differently in roadway interactions than passenger vehicles, potentially influencing which estimates are most valid. The devices are mounted on several farm vehicles to investigate driver following behavior, collecting data over many seasons. To better manage the data, the captured videos are compressed and have low resolution. Consequently, some methods, such as distance estimation based on the license plate [28], cannot be applied. Moreover, frequent calibration of the camera and monitoring of calibration quality are not practical, since the devices are mounted on vehicles that routinely travel over rough terrain. As a result, stereo-based distance estimation is not practical.
Additionally, since there will be a large number of vehicles instrumented with these devices, the cost per unit should be reasonable, eliminating the option of using more expensive sensors such as LiDAR and/or radar. Therefore, to confirm the distance estimation method, an experiment was designed in which two pairs of vehicles were instrumented with the study devices and RTK GPS sensors. Several trials of vehicle interactions were conducted on a closed course, and GPS data and video footage were captured. The data were then aggregated, cleaned, and processed by employing the Nvidia DeepStream object detection framework [29]. Using the output of detection, three different distance estimation models, i.e., linear regression, pinhole, and artificial neural network (ANN), were applied and their results were compared. The accuracy of the proposed methods was verified by comparing with RTK GPS-based estimated distances, which have sub-inch accuracy.

Data Collection Device The data collection devices were designed specifically for a naturalistic study of how drivers approach, follow, and pass farm equipment on the roadway. Contained in rugged, weather-resistant cases approximately 0.23 m × 0.20 m × 0.10 m, the devices attach to farm equipment using switch magnets. Video data were recorded at a frequency of 30 Hz and a resolution of 800 × 600 pixels. Figure 1 depicts the data collection device.

Validation Data Collection Validation data were collected on a closed runway about 1000 feet long and 150 feet wide. A lead vehicle was equipped with several devices in a vertical stack such that the camera lenses were at different heights (0.71 m to 2.02 m) to approximate the range of heights from common farm vehicles (e.g., combines, tractors). The devices were set to record continuously. A Trimble R8 RTK GPS receiver was mounted directly above the stack of devices. The Trimble R8 is rated with a horizontal accuracy of ±0.03 feet (8 mm).
Real-time corrections were provided via cellular modem by the Iowa Real-Time Network (IaRTN), a statewide system of base stations operated by the Iowa Department of Transportation (IDOT). In the experience of the research center that provided the RTK equipment, the horizontal accuracy of the IaRTN corrections in practice is approximately ±0.05 feet (15 mm). The RTK was recorded at 1 Hz. Another identical Trimble R8 receiver was mounted above each following vehicle. Two different types of following vehicles were used in the data collection: a 2012 Toyota Camry sedan and a 2018 Ford F150 SuperCrew pickup truck. For the sedan, the mounting pole was extended through the sunroof of the cab and, relative to a driver's perspective, was located 4 inches right of center and 92 inches behind the front bumper. For the pickup, the mounting pole was secured to an equipment rack behind the cab, 29 inches right of center and 166 inches behind the front bumper. Figure 2 shows the instrumented vehicles.

Approximately 40 trials were recorded with each of the following vehicles traveling behind the lead vehicle. For each trial, the driver of the lead vehicle would begin to travel down the runway and attempt to quickly accelerate to and then maintain a consistent speed of about 30 mph or 40 mph. The driver of the following vehicle attempted a wide variety of maneuvers, including following at various time headways (i.e., 1, 3, and 5 s), changing time headways while following, changing lanes, and passing.

Distance Estimation Models Three vision-based distance estimation models were evaluated: linear regression, pinhole, and ANN. Since the image data do not contain the depth information of objects within them, that information should be estimated by using the size and position of the objects in the image. To this end, Nvidia DeepStream [30], a deep-learning-based vehicle detection framework, was used to extract object position and size (i.e., bounding box information) in the image space.
DeepStream is a complete streaming analytics toolkit for AI-based video and image understanding, as well as multi-sensor processing. It uses the open-source multimedia handling library GStreamer to deliver high throughput with a low-latency streaming processing framework. The DeepStream SDK is based on the open-source GStreamer [29] multimedia framework, and a DeepStream application is a set of modular plugins connected in a graph. Figure 3 shows a sample of DeepStream output from our study dataset. It should be noted that a portion of the DeepStream output was sampled, and its accuracy was confirmed by the authors to be within an acceptable range. It should also be noted that the bounding box size is small when the vehicle is far from the farm equipment. However, the focus of this study was to develop a system to investigate driving behavior when vehicles are following and preparing to overtake the farm equipment, and for the distances observed in these situations the bounding box is large enough to reasonably estimate the distance.

The detection outputs were then used to estimate the vehicle distance d. Detection outputs include the bounding box height (H), width (W), and bounding box center vector (l_x and l_y), with the origin at the upper left of the video frame, as well as the type of the vehicle, i.e., pickup truck or sedan. The detection outputs, along with the height of the camera lens, were used to train the distance estimation models.

Linear Regression The first distance estimation model is linear regression.
Four different models, each devoted to the data collected from one of the data collection devices of differing heights, were fitted using the following equation:

y = Xβ + ε  (1)

where y and ε are n × 1 vectors of the response variables (the estimated distances) and the errors of the n observations, respectively, β is a p × 1 vector of coefficients, and X is an n × p design matrix.

Pinhole Camera Model The second distance estimation model presented here is a pinhole camera model [25]. Let P = [X Y Z]^T be an arbitrary 3D point seen by a camera placed at the origin O of its camera space OXYZ, and p = [u v]^T be the image of P, expressed in the image coordinate system ouv. The point p represents a pixel in an image captured by the camera, which is formed by intersecting the light ray from P passing through the camera optical center O with the image plane. Assuming that the projective plane is perpendicular to the Z-axis of the camera coordinate system, the intersection is at the principal point F = [0 0 f]^T, which is expressed in the image coordinate system as c = [c_x c_y]^T. Figure 4 illustrates the pinhole camera model. The distance of the object P from the center of the camera O is

d = sqrt(X² + Y² + Z²)  (2)

Since, in the current study, the following vehicle was near the center of the camera image, Equation (2) can be simplified to

d ≈ Z = f h / H  (3)

where f is the focal length of the camera, h is the height of the vehicle in the real world, and H is the height of the vehicle in the image space in pixel values. Moreover, since the focus of the current study is on sedan cars and pickup trucks, these two types of vehicles were considered, and the average of their heights was measured to find h. The actual heights for the sedan and the pickup truck were 57 inches and 76 inches, respectively.
Artificial Neural Network (ANN)
Finally, an ANN structure was designed to regress the distances using the detection results as the inputs of the network. The network consisted of an input layer with six neurons (equal to the number of variables used for training), an output layer with one neuron (the estimated distance), and hidden layers connecting the input layer to the output layer. The number of hidden layers and the number of neurons in each of them are two tuning parameters. Moreover, a dropout layer was considered for the last hidden layer. The idea of dropout is to randomly (with a specific rate) drop neurons along with their connections from the neural network to prevent overfitting. In addition, activation functions were used to increase the nonlinearity of the neural network. In this study, we applied two well-known activation functions, i.e., relu and tanh. The batch size is the number of data points on which the training is conducted, and the number of epochs is the number of times the training is conducted. Since there is no exact solution for finding the optimal network architecture and configuration, an exhaustive grid search was conducted to find the best network based on the regression accuracy metric.
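A network of the kind described above could be sketched as follows with Keras; the framework choice, the input-variable layout and the reduced epoch count are assumptions made for illustration, while the layer sizes, activations and batch size follow the best configuration reported later in the Results.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_model(activation="relu", dropout_rate=0.0):
    # Six inputs (e.g., H, W, l_x, l_y, vehicle type, camera height); one output (distance).
    model = keras.Sequential([
        keras.Input(shape=(6,)),
        layers.Dense(8, activation=activation),
        layers.Dense(7, activation=activation),
        layers.Dense(6, activation=activation),
        layers.Dense(5, activation=activation),
        layers.Dropout(dropout_rate),          # dropout applied after the last hidden layer
        layers.Dense(1),                       # estimated distance
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# Grid-search style loop over a few tuning parameters (illustrative ranges and data).
X = np.random.rand(256, 6)
y = np.random.rand(256) * 80.0
best = None
for activation in ("relu", "tanh"):
    for dropout_rate in (0.0, 0.2):
        m = build_model(activation, dropout_rate)
        hist = m.fit(X, y, batch_size=16, epochs=10, verbose=0)   # the study trains for 800 epochs
        loss = hist.history["loss"][-1]
        if best is None or loss < best[0]:
            best = (loss, activation, dropout_rate)
print(best)
```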
Comparison of GPS-Based and Video-Based Distance Estimation
To investigate the accuracy of the distance estimation models, the ground truth distances, i.e., the GPS-based distances, were compared with the distances derived using the detection results from the pinhole model. Figure 5 depicts the GPS-estimated and pinhole-model-estimated distances. The video-based distance time series and the GPS-based time series were recorded at different data frequencies (1 Hz and 30 Hz, respectively). Consequently, in order to quantify the distance estimation error between the two time series, the fast dynamic time warping (FDTW) method was used to find the optimal alignment between them [31]. FDTW is an approximation of dynamic time warping (DTW) with linear time and space complexity. A two-dimensional cost matrix D was constructed, where D(i, j) is the minimum-distance warp path that can be constructed from the two time series. Figure 6 illustrates the cost matrix of the two time series. Using the minimum-distance path derived by FDTW, the two time series were aligned.
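One possible way to perform this alignment is with the open-source fastdtw Python package, as in the following sketch; the series values are placeholders, and the absolute-difference distance function is an assumption rather than the authors' exact setup.

```python
import numpy as np
from fastdtw import fastdtw   # pip install fastdtw

# Placeholder series: a sparse distance trace and a dense one, standing in for the two
# time series recorded at different sampling rates.
series_a = np.linspace(60.0, 20.0, 40)
series_b = np.linspace(60.0, 20.0, 1200) + np.random.normal(0.0, 1.5, 1200)

# fastdtw returns an overall warp cost and the index pairs of the optimal alignment path.
cost, path = fastdtw(series_a, series_b, dist=lambda a, b: abs(a - b))

# Pair each sample of one series with its aligned counterpart, then compute residuals.
residuals = np.array([series_a[i] - series_b[j] for i, j in path])
print(cost, residuals.mean(), residuals.std())
```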
Figure 7a shows the scatter plot of the GPS- and camera-estimated distances. It can be seen that the camera distance estimation could successfully reproduce the distances obtained by GPS. To quantify the distance estimation error, the residuals were calculated. Figure 7b shows the pattern of the residuals. The residuals are roughly randomly scattered, meaning that they are unbiased and homoscedastic. The extreme errors in the middle of the graph show that the camera estimation is less reliable in estimating distances beyond 60 m.

Results
Once the time series were aligned using FDTW, the GPS-based distance estimations were used as the "gold standard" labels for training the regression and ANN models. As described in Section 2.3.3, an exhaustive search was conducted to find the most optimal ANN configuration and architecture in the grid search. The review of model accuracy showed that a model with four hidden layers (having 8, 7, 6, and 5 neurons, respectively), no dropout, relu activations, a batch size of 16, and 800 epochs is the most optimal model in the grid search.

To further analyze the distance estimation models, the distance errors were calculated for multiple randomly selected trials, considering the data collected from the device at each height. Then, the mean and standard deviation of the errors were calculated for each device height. Figure 8a,b shows the vision-based distance estimation error with two-standard-deviation error bars for the sedan and the pickup truck, respectively. The review of the results for both the sedan and the pickup truck shows that the distances estimated using the linear regression model have the highest standard deviation, while the ANN has the lowest standard deviation overall. The ANN model also has the lowest error, i.e., the error closest to zero. Table 1 summarizes the mean error and standard deviation of error for each distance estimation method for the sedan and the pickup truck.
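The per-height error statistics could be assembled as in the following short sketch; the data frame contents are placeholders and pandas is an assumed tool, not necessarily what the authors used.

```python
import pandas as pd

# Hypothetical per-trial distance errors (metres) for each model and camera height.
df = pd.DataFrame({
    "model":  ["linear", "pinhole", "ann", "linear", "pinhole", "ann"],
    "height": [1.5, 1.5, 1.5, 2.5, 2.5, 2.5],
    "error":  [3.1, 1.8, 0.6, 4.4, 2.0, 0.8],
})

# Mean and standard deviation of error per model and device height (cf. Figure 8 and Table 1).
summary = df.groupby(["model", "height"])["error"].agg(["mean", "std"])
print(summary)
```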
Based on the presented results, the ANN model was determined to be the best estimator for distance. Figure 9 shows the scatter plot of the residuals derived from the ANN model, plotted against the ground truth, i.e., the GPS-based distances, for both the sedan and the pickup truck. As can be seen, the residuals do not follow any specific pattern overall, indicating that the ANN model provides a good fit to the data. Moreover, the histogram of the residuals was investigated. Based on the results shown in Figure 9, the residual histograms for both the sedan and the pickup truck follow the normal distribution with a mean close to zero.

Discussion
The results of this investigation indicated that shallow models, for instance linear regression, were not very effective for distance estimation due to their inconsistency in prediction, i.e., high variance. This seems to be related to the fact that these models cannot identify all the nonlinear relations between the input, i.e., the detection outputs, and the output, i.e., the estimated distances. Consequently, the artificial neural network model was determined to be the best option for distance estimation, with a reasonable standard deviation. Moreover, increased height impacts the standard error far more than the mean, especially for the regression model. This is important because the diversity of farm equipment makes it impossible to normalize camera height, so this analysis will help adjust for height in future models. It should be noted, however, that regardless of the distance estimation method, the results are highly dependent on the quality of the input, i.e., the detection outputs. Detection model performance is related to the quality of the video as well as the detection algorithm; thus, having high-resolution videos would help improve the detection results. In addition, the detection output could be improved by using transfer learning methods and retraining the model on the actual videos used in this study. In addition to retraining, which could significantly improve the results, careful data annotation would also make a difference. The review of the detection output showed that, sometimes, the entire tires or some portion of them were excluded from the bounding box, which might be related to careless annotation.
Finally, since the recording platform will be mounted on farm equipment, it is prone to drastic vibration. The vibration might sometimes cause the captured video to be blurry and, consequently, the detection algorithm to fail. In this case, a vibration-resilient enclosure may improve the results.

Conclusions
In this study, we propose a solution to extract the depth information of objects in an image recorded by a low-resolution monocular camera using a deep-learning-based approach. Video and RTK GPS data were collected on a closed course with two types of following vehicles. The data were then aggregated, cleaned, and preprocessed using Nvidia DeepStream, an analytics toolkit for video and image analysis. Vehicle detection and tracking were conducted using DeepStream, and bounding box information was obtained. Using the pinhole camera model, the height of the detected objects in the image was related to the real-world distance of the object from the camera. The distance estimation process involved finding the focal length of the camera through a calibration step, which was conducted by comparing the distances estimated from the pinhole model with those of the RTK GPS as the ground truth. In addition to the pinhole model, two more distance estimation models were investigated, i.e., linear regression and an artificial neural network (ANN). Among the three models, the ANN was the best distance estimator, having the lowest mean error and standard deviation. Finally, the legitimacy of the proposed ANN method was confirmed by investigating the scatter of the residuals and the histogram plots. The methodology confirmed in this study can be applied to future studies of farm vehicle and passenger vehicle interactions in terms of following distance.
The output of this study can be used to inform prevention efforts to reduce the risk of rear-end collisions, especially when a heavy vehicle with large blind spots is involved.
Combined Solar Thermochemical Solid/Gas Energy Storage Process for Domestic Thermal Applications: Analysis of Global Performance

Thermal energy used below 100 °C for space heating/cooling and hot water preparation is responsible for a large share of greenhouse gas emissions in the residential sector. The combination of solar thermal and thermochemical solid/gas energy storage processes can render heat generation an ecologically clean technology. However, until present, few pilot-scale installations have been developed and tested. The present work is devoted to the experimental study of the global performance of a pilot-scale thermochemical energy storage prototype. Two working modes, namely fixed packed bed and moving bed, were tested using 2.2 kg and 5.5 kg of composite material (silica gel impregnated with calcium chloride) under indoor atmospheric conditions. The global experimental efficiency of a 49 L water tank charging process lasting 75 min was found to be as high as 0.80‒0.85. The energy storage density reached by the material in the fixed bed mode was 158 kWh/m³, while in the moving bed mode it was 2.5 times lower. The reasons for this difference are discussed in depth in the text.

Introduction
The residential building sector is known to be a main consumer of thermal energy in the temperature range below 100 °C for various heating/cooling purposes. This temperature range refers to the generation of ultra-low-grade heat and fits the typical scope of thermal applications for comfortable human living, for example space heating/cooling and domestic hot water preparation. Consequently, every energy product used in a household is responsible for greenhouse gas emissions. The biggest proportion of all energy products is destined solely for space heating/cooling and hot water preparation [1]. European legislation clearly imposes a reduction of the thermal energy demand of buildings. A set of measures encouraging the use of renewable energies and promoting energy efficiency was mandated by the European Council and the European Parliament in the Directives 2009/25/EC [2], 2010/31/EU [3], and 2012/27/EU [4]. The passive house [5] and N-ZEB concepts [6,7] are actively studied for the reduction of greenhouse gas emissions. According to Becchio et al. [8], a cost-relevant solution for a domestic thermal system in an N-ZEB can be achieved by increasing the proportion of on-site energy production. Lund et al. [9] showed that the integration of thermal storage into the smart grid, coordinated with N-ZEB, is the most cost-effective solution for creating flexibility and reusing waste heat. Sameti et al. [10] identified that energy storage renders the district energy grid economically and ecologically favorable in comparison with a conventional energy system, an autonomous energy supply scheme, or net-zero without storage.
In addition to the existent energy-to-heat conversion technologies with renewables in the temperature range of 0 to 100 • C (e.g., compression heat pump, geothermal techniques), the combined solar thermochemical solid/gas energy storage process is a very prominent technology for the delivery of clean thermal energy.The core principle of this technology relies on the reversible sorption phenomenon of gas on a porous solid under given operating conditions (the gas temperature and partial pressure) [11].The capture of gas molecules by the porous solid is an exothermic process, which is thus responsible for the heat release and is referred to the "thermal energy discharge".This operation is used in heating applications forthwith.The intensity of the heat release decreases as soon as the solid becomes saturated by the sorbate species under some operating conditions.Once the solid/gas equilibrium has been reached, the solid has to be regenerated to the initial state.The solid regeneration is an endothermic reaction that can be triggered by supplying the thermal energy from an external source.Consequently, the provision of the solar thermal energy for solid regeneration (the so-called "thermal charge" operation) renders the sorption process to be ecologically clean technology. The principle advantages of the thermochemical way of energy storage over sensible and latent ones are the high energy storage density and the absence of thermal losses [12].However, the beginner level of the technological maturity results in the high seasonal storage capacity cost of 0.6 to 1.4 €/kWh for the building sector [13,14], where the biggest rate of capital investments goes to the solid material [15,16].Nevertheless, there is a steady technological advancement in the design of large or pilot scale solid sorption seasonal systems over laboratory prototypes, comparing the period from 2009 [17] to 2017 [18]. The implementation of the thermochemical energy storage technology requires the selection of the candidate solid/gas working pair, the choice of the heat and mass provision configuration, the design of reactor and auxiliary equipment (e.g., storage vessels, heat exchangers).The general design concept is shown in Figure 1.The materials screening procedures were developed by many researchers.Richter et al. [19] proposed a seven step cation/anion selection algorithm based on the availability of materials in the geosphere.N'Tsoukpoe et al. [20] used the mineral classification approach to select potential hydrates for the thermochemical energy storage with dehydration temperature above 100 • C. Solé et al. [21] presented necessary characterization criteria for solid/gas working pairs to be selected for a solar thermochemical energy storage.Courbon et al. [22] developed a computationally efficient algorithm for the selection of solid/gas reversible reactions particularly accented on the use in domestic heating applications.The cited works commonly discuss the idea that a candidate material must provide an intrinsic energy storage density as high as possible between targeted charging and discharging operating conditions, possess an excellent morphological and cycling stability, and be ecologically friendly and cheap.The salt hydrates possess the highest energy storage density (e.g., 630 kWh/m 3 for SrBr 2 •(1↔6)H 2 O [23]) and they are numerous that makes the reason to develop a special selection procedure. 
However, the practical use of pure salt hydrates is difficult because of poor cycling stability, aggressivity (corrosivity) and deliquescence problems. The family of selective water sorbents (SWS), including silica gel (SG), carbonaceous matrices, and molecular sieves such as zeolites, silico-alumino-phosphates (SAPO), alumino-phosphates (AlPO) and metal organic frameworks (MOF), is not so numerous. Although these sorbents have quite different structural properties, the same screening criteria could be applied for both families of solid sorbents, including various composite materials [24]. The latter materials are synthesized by incorporating the salt inside a porous matrix, and this approach mostly resolves the deliquescence phenomenon and improves the cycling stability of a pure salt hydrate.

The reactor designer's work generally consists in maximizing the heat and mass transfer phenomena in a solid/gas thermodynamic system, in order to achieve the ultimate reactor compactness and energy storage density. The assessment of thermochemical reactor prototypes and components can be found in [11,25]. Scapino et al. [14] showed that although the energy storage density does not differ too much between open and closed sorption fixed-bed reactors, the storage capacity costs (€/kWh) are lower for the open sorption technology. However, this technology is only reserved for use with moist air. The storage capacity cost is the main factor for the development of open sorption reactors for seasonal energy storage.

The design of the system configuration depends on the way the storage is planned to be integrated into the building. The assessment of the domestic system configurations proposed for space heating with an open sorption moving bed reactor was investigated in [26]. Hennaut et al. [27] developed the simulation tool for the closed sorption SrBr2/H2O solar thermochemical seasonal combisystem that included two water storage tanks. Michel et al. [28] demonstrated a 400 kg large-scale thermochemical system for thermal storage of solar energy. The system was operated under atmospheric conditions, with SrBr2/H2O as the working pair and an overall storage capacity of 105 kWh. The energy density reached by the reactor module was 203 kWh/m³. However, the authors faced a decrease of material performance over time. This is a known issue of this salt hydrate [29], which was resolved in the recently developed composite materials [23,30]. Mette et al. [31] developed a combined thermochemical energy storage with a moving bed reactor (open sorption process) using zeolite 4A/H2O. This system included a separate material storage vessel, where the back-up gas burner and the solar thermal collectors were connected to the hot water storage tank. Gaeini et al. [32] developed a solar thermochemical heat storage combisystem for domestic hot water production. The household-scale prototype (170 kg of zeolite 13XBF packed in four segments) was able to heat the water up to 75 °C at a maximum rate of 3.6 kW for 10 h. The energy density of each segment, working under atmospheric conditions, was reported to be as high as 61 kWh/m³. A semi-continuous solid-feeding pilot-scale system was developed within the SOTHERCO project for seasonal thermal energy storage using a thermochemical process [33]. The system demonstrated 200 Wh/kg of usable energy storage capacity with a 9% hydrated SG/CaCl2 (43 wt.%) composite at a solid feed rate of 220 kg/h.

The present work shows the experimental results on a pilot-scale combined solar thermochemical energy storage used for domestic water heating applications. The system prototype was used to compare the performance characteristics of fixed packed and moving bed reactors when connected to a 49 L water storage tank. Although the aim of the presented installation is the preparation of hot water above 50 °C, the present investigation is devoted to the determination of the global energy performance under a limited set of tank charging conditions, similar to low-temperature heating applications at ~30 °C. The work is organized in the following order. First, the configuration concept and the working principle of the system are introduced. Second, the experimental pilot prototype is presented, followed by the description of the selected material. Finally, the experimental procedure and the obtained results are described and discussed.
Configuration Concept and Working Principle
The configuration concept is based on an open sorption process of water vapor on a composite material in a vertical moving bed reactor. The concept is shown in Figure 2. The full details about the used material and the designed reactor are described in the following sections. The idea of the moving bed reactor is to generate a constant thermal power, which is not possible with the fixed packed bed configuration. Moreover, the use of a single bed, instead of multiple beds, aims at improving the overall system compactness. All the solid material is processed by the reactor R1 and stored separately in the storage vessels S1 and S2. However, the main challenge consists in designing the solid feed/extractor mechanism (elements V8 and V9 in Figure 2).

During the thermal discharge, the ambient cool moist air E1 is supplied to the cold side of the heat exchanger HX2 by the fan P3, where it can be preheated from the hot side of HX2 before entering the reactor R1. The heat exchanger HX3 is by-passed with the valves V1 and V2, and the valve V3 is open in the direction of HX2 (see Figure 2). The hot and dried air at the reactor outlet is introduced to the air-to-water heat exchanger HX1 for the immediate charge of the water tank U1. Finally, the cooled and dried air is rejected from HX2 to the atmosphere. The water pump P1 is normally activated and the water pump P2 is deactivated.
There is no evaporator module in this configuration, that makes the system to be more compact.Such a decision is exclusively reserved to the use of a highly hygroscopic composite material.However, this designer's step sets the operational constraints, especially for the outdoor temperature below 5 • C and the air moisture content lower than 5.0 × 10 −3 kg/kg.In such conditions, the charge of the water tank U1 can be ensured by its direct heating with the solar collectors E2 by turning the valves V4, V5, V6, and V7, and activating the pump P2.Moreover, a back-up electric heater E3 can be installed in the water tank too. The thermal charge operation uses the solar energy E2.The type of solar thermal collectors determines the dehydration temperature that can be supplied to the reactor.Skrylnyk et al. [34] showed that the dehydration temperature above 100 • C is achievable with evacuated tube or concentrated solar collectors, which improves the reactor energy storage density by more than 50% in comparison with the dehydration between 50 and 80 • C with glazed solar thermal collectors. During the thermal charge, the outdoor air E1 is blown through the cold side of the air-to-air heat exchanger HX2, after what it receives the necessary heat from the solar thermal collectors E2 through the water-to-air heat exchanger HX3.The valves V1 and V2 stay open in the direction of HX3 and R1 units.Afterwards, the hot air is distributed to the reactor module R1.At the same time, the hot water tank U1 can be also charged from the air-to-water heat exchanger HX1, thus reusing a part of heat rejected from the reactor R1. Experimental Prototype Design The experimental prototype developed for this study slightly differs from the concept in Figure 2 as the solar collectors E2 and the heat exchanger HX3 were replaced by an electric heater.The prototype was implemented using the commercially available modular equipment for the ductwork, custom made heat exchangers, and custom design parts (see Figure 3).The ductwork was realized with stainless steel air ducts of a nominal diameter of 200 mm and was jacketed by fiber glass insulation of 5 cm in thickness.The air-to-air heat exchanger HX2 is the cross flow custom assembled unit, whose nominal effectiveness was measured as high as 0.95 for the air volume flow rate of 150 m 3 /h.The circulation of air across the aeraulic circuit was realized with a centrifugal in-line commercial fan from SIG Air Handling International (commercial office at Zaventem, Belgium), series BCS-EC200.The air-to-water heat exchanger HX1 has a custom design and is composed of a network of horizontal finned tubes mounted in a wooden insulated box.The water storage tank is a vacuum insulated vertical cylinder of 49 L of capacity, containing the stratification device.The water is pumped by the hydraulic rotary vane pump from GOTEC SA (Sion, Switzerland), series TS60.The hydraulic connections were made from insulated polyamide tubing with internal diameter of 10 mm. 
The reactor module was conceived as a vertical moving bed with a controlled solid flow rate. The schematic of the reactor module is shown in Figure 3. The reactor consists of two 72.5 L symmetric stainless steel hollow compartments, each with a tightly fixed metallic sieve. The compartments are stacked in such a way that they form a 3 L free room between both metallic sieves, which is used for a micro-granular solid material to flow through the vertical plane. The cross-section area for the air flow through the metallic sieves and the solid in between is 0.31 m². The feeding of the reactor with the solid material is realized by the 10 L upper plexiglass hopper, as shown in Figure 3. The control over the material flow through the reactor is realized with a mechanically driven rotary valve anchored to the bottom part of the reactor. The speed of the valve rotation is adjusted by the stepper motor (denoted as "M" in Figure 3).

The distribution of the air temperature at the inlet and outlet of the reactor is measured by two grids of K-type thermocouples, which are mounted inside the reactor enclosure in the proximity of the metallic sieve, at a distance of 3 mm on both sides. The temperature measuring grids at the reactor inlet and outlet are shown in Figure 3 (elements 4 and 5). Moreover, the inlet and outlet air ducts are equipped with additional K-type thermocouples to measure the average inlet and outlet air temperature. The air humidity at the inlet and outlet reactor boundaries is measured by humidity sensors from VAISALA Corporation (Helsinki, Finland), series HMP5, connected to the analog transmitters Indigo™ 201. The air flow rate is measured by a vane wheel flow sensor from Höntzch, series FA, mounted on the reactor inlet air duct. The flow rate of the hot water is measured by the turbine-type flow meter FT2 from NATEC Sensors GmbH (Garching, Germany) mounted on the water tank inlet. Furthermore, the air-to-air (HX2) and air-to-water (HX1) heat exchangers are also equipped with one K-type thermocouple at each inlet and outlet. The data acquisition system is assembled from modular National Instruments™ CompactDAQ C-series hardware. The characteristics of the measurement equipment are presented in Table 1.
Material
The material used for this study was a salt-in-silica composite material with confined calcium chloride (CaCl2), synthesized by a multi-step incipient wetness impregnation method. This material was recently developed at the Research Institute for Energy (University of Mons, Belgium), aiming at the improvement of the intrinsic energy storage density and the cycling stability in connection with building heating applications [35]. The developed synthesis protocol allowed a salt content of 43 wt.% to be reached inside the silica gel (Davisil®, grade 62, from Grace). The CaCl2 salt was provided by Solvay in the form of anhydrous 94% purity Caso® granules. The full details about the developed composite material, the synthesis protocol and the characterization methods are available in [35,36].

The water mass uptake was measured by using the IGASorp apparatus (from Hiden Isochema) for dynamic vapor sorption isotherm measurements. The characteristic curve representing the equilibrium mass uptakes versus the Polanyi adsorption potential ∆F = RT·ln(p_vs/p_v) was constructed from the experimental data. Based on these data, a simple model was designed to predict the equilibrium water mass uptake, which depends on the temperature and pressure conditions. The characteristic curve of sorption of water vapor on the composite (denoted as SG/CaCl2) is shown in Figure 4.
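For reference, the adsorption potential for given air conditions can be evaluated as in the following sketch; the Magnus-type saturation-pressure correlation is an assumption and not necessarily the correlation used by the authors.

```python
import math

R = 8.314  # J/(mol*K)

def saturation_pressure_pa(t_celsius):
    # Magnus-type approximation for the water vapour saturation pressure; this correlation
    # is an assumption made here for illustration.
    return 610.94 * math.exp(17.625 * t_celsius / (t_celsius + 243.04))

def polanyi_potential_kj_mol(t_celsius, relative_humidity):
    """Delta F = R*T*ln(p_vs/p_v) for moist air at temperature T and relative humidity RH."""
    t_kelvin = t_celsius + 273.15
    p_vs = saturation_pressure_pa(t_celsius)
    p_v = relative_humidity * p_vs
    return R * t_kelvin * math.log(p_vs / p_v) / 1000.0

# Example: laboratory air at 20 degrees C and 45% relative humidity gives roughly 2 kJ/mol,
# i.e., a low potential of the same order as the inlet values reported for the discharge tests.
print(polanyi_potential_kj_mol(20.0, 0.45))
```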
Other physical and thermal properties of the used material are presented in Table 2.

Table 2. Physical and thermal properties of the SG/CaCl2 composite (property: value; method, apparatus used).
Specific surface area (m²/g): 75; Brunauer-Emmett-Teller method, BELSORP-max [35]
Total pore volume (cm³/g): 0.38; N2 sorption measurement at 77 K, BELSORP-max [35]
Packing density of anhydrous material (kg/m³): 703; bulk measurement [35]
Reaction heat (kJ/kg): 2757; efficiency test using TGA/DSC 111 connected to Wetsys unit (Setaram) [35]
Apparent thermal conductivity at water mass uptake > 0.14 kg/kg (W/m·K): 0.16; transient hot bridge method in VTRK300 climate chamber (Heraeus Vötsch) [36]

2.4. Experimental Procedure
The reactor can work in two modes: (i) as a fixed packed bed and (ii) as a moving bed. The rotary valve (see Figure 3) is permanently disabled in mode (i), and all the solid material in the reactor is hydrated progressively from the initial state x_0 to the maximum allowed state x_max(t). For the experiments in mode (ii), the rotary valve is actioned automatically by the stepper motor every ∆t_s seconds, making the hydrated material fraction be evacuated from the reactor, while a new anhydrous fraction refills the reactor from the upper hopper (see Figure 3). Therefore, there is a hydration gradient x = x(z, t) along the vertical axis z inside the reactor in mode (ii), with the boundary conditions x(t) = x_0 at the reactor inlet, z = 0, and x(t) = x_max at the reactor outlet, z = L, where the overall hydration time is t = ∑_(i=1)^N ∆t_s and N is the total number of valve turns. Moreover, the air-to-air heat exchanger HX2 can be by-passed by the valve V3 (see Figure 2), so that the reactor R1 can be run with or without heat recovery from the air-to-air heat exchanger.

The operations used for the experimental protocol in modes (i) and (ii) were identical, with the only difference that the rotary valve remained permanently disabled in mode (i). Prior to the experiments, the material was dehydrated in the laboratory oven at 150 °C for 24 h. Before loading the material into the upper hopper, the centrifugal fan P3 and the hydraulic pump P1 were run for at least 20 min in order to check the measurement equipment and to reach isothermal conditions. The valve V3 was positioned in such a way that the air-to-air heat exchanger HX2 was by-passed. Once isothermal conditions were reached, the fan P3 was briefly switched off, and the material was loaded into the reactor through the upper hopper. The experiment started immediately once the centrifugal fan was on again, and all the measured data were logged by the data acquisition system to the computer. For the experiments with the reactor running in the moving bed mode, the rotary valve was manually turned 3 to 5 turns to let the material be better packed in the reactor; afterwards, the rotary valve worked in the automatic mode every ∆t_s seconds.

All the experiments started with the heat exchanger HX2 set in the by-pass mode. The heat recovery by the air-to-air heat exchanger HX2 was activated if the air temperature rejected at the hot side was higher than the air temperature at the reactor inlet. The activation of the heat recovery was done by turning the valve V3 in the direction of the heat exchanger HX2.

The decision to stop the experiment was taken with regard to the temperature difference threshold at the cold side of the air-to-water heat exchanger (<1 °C) and the air humidity threshold (~30% R.H. at 20 °C) at the outlet of the reactor.
The performance parameters of the prototype in modes (i) and (ii) were evaluated using the following equations. The thermal power produced by the reactor R1, the heat recovered by the air-to-air heat exchanger HX2 and the heating rate of the water tank U1 are determined as follows:

Q̇_{i}(t) = ṁ_{i}·C_P,{i}·∆T_{i}(t), {i} = {r, a, w}    (1)

where ∆T_{i}(t) represents the variation of the fluid temperature difference recorded at the {i} component's boundaries; {i} = {r, a, w} refers to the fluid that circulates through the reactor R1, the air-to-air heat exchanger HX2 or the water tank, in the respective order. The heating rate of the water tank was determined from the thermal power on the cold side of the air-to-water heat exchanger HX1. The hydration thermal power Q̇_h(t) is identical to Q̇_r(t) in the stationary regime and was determined from the air humidity measurements:

Q̇_h(t) = ṁ_r·∆w_r(t)·∆H_s    (2)

where ∆w_r(t) is the variation of the air absolute humidity at the reactor boundaries. The amount of thermal energy generated by the reactor and charged to the water tank is:

E_{i} = ∫_(t_0)^(t_max) Q̇_{i}(t) dt, {i} = {r, w}    (3)

The dynamic effectiveness of the air-to-air or air-to-water heat exchanger was evaluated as given below:

ε_{i}(t) = Q̇_{i}(t) / [(ṁ_{i}·C_P,{i})_min·∆T_{i},max]    (4)

where (ṁ_{i}·C_P,{i})_min determines the minimum product of the {i} fluid mass flow rate and the associated heat capacity C_P,{i}, and ∆T_{i},max is the maximum possible temperature difference on the concerned heat exchanger.

The energy storage density of the material can be referred to the energy prognosis either by the reactor or by the water storage tank. It was calculated with the following formula:

∆E_{i} = ρ_s·E_{i}/m_s,max, {i} = {r, w}    (5)

where ρ_s represents the packing density of the anhydrous material and m_s,max is the total anhydrous mass processed by the reactor during the experimental time [t_0, t_max]. For the fixed packed bed, m_s,max represents only the anhydrous bed mass m_s, while for the moving bed test it represents the quantity of anhydrous material spent over the time frame [t_0, t_max].

Considering the energy prognoses by both the reactor and the water storage tank, the discharging efficiency was also introduced as the installation performance indicator:

η_d = E_w/E_r    (6)

It has to be noted that the performance indicators listed above (1-6) depend on the heat losses along the ducts and pipework, the heat capacities of the used materials and the ratio between the ṁ_{i}·C_P,{i} products.

The water mass uptake for the fixed packed bed was determined at each experimental time step t_j ∈ [t_0, t_max] by the following formula:

x(t_j) = x_0 + (1/m_s)·∫_(t_0)^(t_j) ẋ_s(t) dt    (7)

where ẋ_s(t) is the sorption rate in kg/s, determined from the air humidity measurements as ṁ_r·∆w_r(t). The water mass uptake for the moving bed was estimated by solving the equation of the boundary problem:

∂x(z,t)/∂t + u_s·∂x(z,t)/∂z = ẋ_s(t)/m_s    (8)

where u_s is the estimated averaged solid velocity and m_s is the anhydrous bed mass. For simplicity reasons, this mass was taken as a constant.
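A minimal sketch of how indicators (1), (2), (3), (5) and (6) could be evaluated from logged time series is given below; the flow rates, temperature differences, humidity drop and heat capacities are placeholders, not the measured values of the prototype.

```python
import numpy as np

CP_AIR = 1006.0      # J/(kg*K), assumed constant
CP_WATER = 4186.0    # J/(kg*K), assumed constant
DELTA_H_S = 2757e3   # J/kg, reaction heat of the composite (Table 2)

def thermal_power(m_dot, cp, delta_t):
    """Equation (1): Q_i(t) = m_i * C_P,i * dT_i(t)."""
    return m_dot * cp * delta_t

def hydration_power(m_dot_air, delta_w):
    """Equation (2): Q_h(t) = m_r * dw_r(t) * dH_s."""
    return m_dot_air * delta_w * DELTA_H_S

# Placeholder 1 Hz records over a 75 min discharge (all values are illustrative only).
t = np.arange(0, 75 * 60, dtype=float)        # s
dT_reactor = np.full_like(t, 10.0)            # K, reactor inlet/outlet air difference
dT_water = np.full_like(t, 5.0)               # K, water-side difference on HX1
dw_reactor = np.full_like(t, 3.6e-3)          # kg/kg, absolute humidity drop across the bed
m_dot_air, m_dot_water = 0.05, 0.02           # kg/s

q_r = thermal_power(m_dot_air, CP_AIR, dT_reactor)
q_h = hydration_power(m_dot_air, dw_reactor)  # should track q_r in the stationary regime
q_w = thermal_power(m_dot_water, CP_WATER, dT_water)

E_r = np.trapz(q_r, t) / 3.6e6                # Equation (3), energy in kWh
E_w = np.trapz(q_w, t) / 3.6e6
eta_d = E_w / E_r                             # Equation (6), discharging efficiency
rho_s, m_s_max = 703.0, 2.2                   # kg/m3 and kg (Tables 2 and 3)
dE_r = rho_s * E_r / m_s_max                  # Equation (5), energy density in kWh/m3
print(q_r[0], q_h[0], E_r, E_w, eta_d, dE_r)
```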
Results and Discussion Two types of hydration experiments were performed on the installation prototype: the reactor was running as the fixed packed bed (test 1) or as the moving bed (test 2).All the hydration experiments were done under laboratory atmospheric conditions, using the ambient moist air.The heat released during the hydration reaction was used to heat up the water tank.In all experimental cases, the material was dehydrated in the laboratory oven at 150 • C and so its initial water uptake was less than 0.01 kg/kg.This resulted in an initially high adsorption potential and led thus to a high initial step in the thermal power at the reactor boundaries at t = 0 min, which will be analyzed later.Furthermore, the relative air humidity under ambient conditions was measured from 41% to 52% at 20 • C, which cannot always exist under winter climate conditions.The initial and experimental conditions for the modes (i) and (ii) are listed in Table 3.It has to be noted, that only two experimental datasets are presented and analyzed here.These results provide the necessary and sufficient information about the prototype functioning under tested conditions, and clearly demonstrate the performances and flaws.Other tests were also carried out, but the obtained results were equivalent because of the little variation of the atmospheric conditions. As follows from Table 3, the experimental conditions were quite similar between both tests.The average ambient temperature, as well as the air absolute humidity, were 1.6 • C, and respectively 1.6 × 10 −3 kg/kg higher for test 2 than for test 1.Nevertheless, the experimental results can be compared.The time instant, when the heat recovery by the exchanger HX2 (see Figure 2) was switched on, was also identical for both experiments (see Table 3). The main performance indicators for the fixed and moving bed experiments are presented in Table 4.The uncertainties in Table 4 were calculated with the uncertainty propagation method [37].The total hydration energy E h was calculated by integrating equation ( 2) over time.To simplify the calculation, the reaction heat ∆H s was taken constant as given in Table 2.The energy densities ∆E r and ∆E w were found from equation ( 5), considering the total anhydrous material mass m s,max from Table 3 and the packing density of the anhydrous composite from Table 2. Additionally, the energy storage capacity (Wh/kg) of material can be found in form of the ratios E r /m s,max and E w /m s,max using equation ( 3) and data from Table 3. Although, the total hydration time spent in the moving bed operating mode was 168 min, the results are shown only for the first 75 min, when 5.5 kg of solid mass was processed.This allows the fixed and moving beds characteristics to be compared within the same time interval. 
The analysis of the data from Table 4 shows that the amount of the thermal energy E r produced by the reactor and E w charged to the water tank almost did not depend on the prototype operating mode.Moreover, it is important to note that the value E h perfectly matches E r , that proves the correctness of the measuring principle and validates the use of formulas (1)(2)(3).If the thermal losses are high or the reactor is not enough hermetic, the discrepancies in the measurements between E h and E r are considerable, and therefore the performance characteristics cannot be properly established.Also, it can be found from Table 4 that the values of the discharging efficiency d , maximum and average thermal power tend to approximately the same magnitude.This proves that the experiments had an inconsiderable sensitivity in face of the given ambient conditions. However, both energy storage densities and capacities of material considerably differ between both operating modes depending on the energy prognosis (see values ∆E r and ∆E w , as well as ratios E r /m s,max and E w /m s,max in Table 4).This fact is related to the operating mode and can be explained by the difference in the material mass used in the experiments (see Table 3).Indeed, the material mass used in the fixed bed test was 2.2 ± 0.1 kg, while the material mass spent for the moving bed test during 75 min was 5.5 ± 0.1 kg.At the same time, the amount of the produced thermal energy was identical for the both tests (see Table 4). In principle, the prototype performance can be directly judged by the discharging efficiency d , because this indicator aggregates the constructive imperfections, such as heat exchangers effectiveness, inhomogeneities of the sorption phenomenon inside the reactor, thermal losses in the ducts, pipework, etc.The real discharging efficiency of the thermochemical energy storage process can be roughly estimated as high as 0.80-0.85.This is a reasonable lower limit being technically achievable for such a process.The discharging efficiency is better for the fixed bed test, than for the moving bed, because the average heating rate of the water tank .Q w,av was about 24 W higher for the first case (see Table 4).This fact can be explained by the denser material packing in the fixed bed, which in contrast to the moving bed configuration, resulted in the more homogeneous water vapor sorption throughout the bed.The detailed analysis of the experimental curves is presented in the following subheadings. Reactor Hygrothermal Behavior The comparison of the hygrothermal behavior of the reactor between the fixed packed and the moving bed modes is shown in Figures 5-8.The curves with " " and " " markers refer to the fixed bed (test 1).These curves represent the typical behavior of an open sorption packed bed reactor and thus can be used as a reference to compare with the moving bed operating mode. 
Since the shape of the curves in test 1 is quite smooth, it was concluded that the material was densely packed in the reactor which resulted in a homogeneous sorption throughout the bed (see Figures 5-8).At the initial time instant t = 0 min the material had the lowest hydration state, less than 0.01 kg/kg (see Table 3, test 1) that corresponded to the initial Polanyi adsorption potential ∆F 0 > 10 kJ/mol (see Figure 4).Before the hydration reaction was initiated, the adsorption potential ∆F 1 = RT ln(p vs /p v ) in the reactor inlet conditions was equal to ∆F 1 = 1.7 kJ/mol.The difference ∆F 0 − ∆F 1 thus determined the maximum possible loading lift of the material with water, which was close to 0.81 kg/kg (see Figure 4).The difference of adsorption potentials, being high, produced the temperature lift of 16.6 • C between the inlet and outlet air temperature in the reactor within 4 min (see Figure 5, test 1), corresponding to 461 ± 37 W of the thermal power (see Figure 8, test 1).At the same time, the outlet water vapor pressure sharply dropped down to 2 mbar (see Figure 6, test 1).The maximum of the thermal power of 551 W, as well as the minimum of the water vapor pressure of 1 mbar in the outlet conditions, were reached in 10 min.By this time, the material was loaded with water up to 0.04 kg/kg (see Figure 7, test 1).The distinct decrease on 50 W of the thermal power appeared after 14 min of work (see Figure 8, test 1), before the heat recovery was activated.The thermal power continued to decrease after the activation of the heat recovery (see Figure 8, test 1). The main driving factor behind the decrease in the thermal power in the fixed packed bed is the extinction of the sorption phenomenon related to the saturation of the material by water (see Figure 7, test 1).The dynamic adsorption potential ∆F(t) = RT(t) ln(p vs /p v (t)) related to the air conditions on the reactor inlet and outlet with p v (t) ≈ 1 2 (p v,in + p v,out ), and to the solid temperature T(t), also varied during the experiment and modified the sorption capacity.Due to the solid heating, the adsorption potential changed to ∆F 1 = 5.6 kJ/mol at 4 min, which caused the dynamic shift of the virtual equilibrium from 0.81 kg/kg to 0.25 kg/kg (see Figure 4).As soon as the solid was cooled down, the adsorption potential sprang back to 3.7 kJ/mol that corresponded to 0.44 kg/kg as the virtual equilibrium water uptake.As follows from Figures 5 and 6, the heat dissipated by the in-line fan and the activation of the heat recovery brought also the slight swing in the operating conditions (the inlet air temperature and water vapor pressure).The adsorption potential, due to the heat dissipated by the fan, was ∆F 2 = 5.6 kJ/mol, while after the activation of the heat recovery, it was ∆F 3 = 5.1 KJ/mol.The value of ∆F 2 is fully equivalent to ∆F 1 , but the potential ∆F 3 is lower than ∆F 1 .The difference between the potentials ∆F 3 < ∆F 1 was clearly produced by the reaction advancement (see Figures 5-7, test 1), but not by the activation of the heat recovery.Therefore, the contributions from the heat dissipated by the fan or from the heat recovery to the sorption performance of the fixed packed bed are negligible.Moreover, there are no non-linearities in the water mass uptake curve (see Figure 7, test 1), which could be linked to the effects described above.The moving bed concept was designed with an idea of keeping the thermal power constant, that might resolve the problem of the gradual decrease of the heat production over 
The experimental results of the reactor running in the moving bed mode are shown by the curves for test 2 in Figures 5-8. The irregular shape of these curves indicates that the material was not densely packed in the bed. The discontinuities in the material packing were caused by the non-homogeneous flow of the granular medium through the bed, which resulted in sharp decreases of the outlet temperature (see Figure 5, test 2) and drops of the thermal power (see Figure 8, test 2). These discontinuities of the material flow can be clearly observed at different times, when a sharp rise of the outlet vapor pressure coincides with a drop of the sorption rate (see Figures 6 and 7, test 2).

Before the hydration reaction started, the adsorption potential at the reactor inlet conditions was lower than for test 1, namely ∆F_1 = 1.1 kJ/mol. Since the initial dehydration state of the material used for tests 1 and 2 was the same (see Table 3), the initial ∆F_0 was also the same. Taking ∆F_1 into account and using the data in Figure 4, the maximum loading lift for test 2 can be predicted to be as high as 0.9 kg/kg, that is, 11% greater than for test 1. This explains the steep temperature jump observed at t = 2 min in Figure 5 (test 2). However, once the solid was heated up, the dynamic adsorption potential increased to ∆F_1 = 4.6 kJ/mol, causing the virtual equilibrium to drop to 0.37 kg/kg. The additional heat from the fan and the activation of the heat recovery likewise had a negligible effect on the sorption process: both adsorption potentials changed only to 4.3 kJ/mol. Therefore, the main drawback of the moving bed reactor came from the non-homogeneous granular flow, which did not allow the thermal power to remain stable. This drawback can potentially be eliminated by optimizing the bed thickness and the particle size against a reasonable pressure drop through the bed. Nevertheless, the thermal power of ~421 W produced in the moving bed test can be considered constant over the time span from 20 min to 56 min (see Figure 8, test 2), because the outlet air temperature, the vapor pressure and the sorption rate showed little variation during this period (see Figures 5-7, test 2).
The point at 42 min is an outlier. This time span also matches the end of the transient hydration process of the moving bed, when the material hydration state reached a stable level of ~0.1 kg/kg (see Figure 7, uptake curve for test 2).

The water mass uptake for the moving bed was verified with the HE73 moisture analyzer from Mettler Toledo after 33 min and 1 h 15 min. The measured values were 0.13 kg/kg and 0.09 kg/kg, which generally confirmed the correctness of the uptake estimation with equation (8), as shown in Figure 7. As follows from Figure 7, the hydration state of the material leaving the reactor is significantly lower than in the fixed bed test because of the discontinuities and the non-homogeneous sorption through the moving bed. In addition, the solid residence time in the reactor was not long enough to reach an adequate hydration level. As a consequence, the energy density of the material in the moving bed energy storage system is 2.5 times lower than for the equivalent fixed bed installation (see data in Table 4). The solid feed rate was set to 1.2 × 10⁻³ kg/s, which corresponds to a hydration cycle of ~30 min for the material mass of one bed. The estimated average solid velocity u_s was ~0.28 × 10⁻³ m/s. The duration of the hydration cycle was selected so as to avoid overhydration of the material. Overhydration of the tested material leads to the formation of a highly viscous granular mix, which cannot be moved through the bed and the rotary valve. Therefore, the design of the moving bed reactor must be improved.

Finally, the thermal power curves match the reaction thermal power for both tests (see Figure 8). This also confirms the correctness of equations (1) and (2), used to calculate the energy densities in Table 4 as well as the water mass uptake in Figure 7.
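The moving-bed feed-rate and residence-time figures quoted above are linked by a simple mass balance; a hedged back-of-the-envelope check (the bed length implied at the end is purely illustrative, not a value reported in the paper):

solid_feed_rate = 1.2e-3   # kg/s, as set in the experiment
mass_per_bed = 2.2         # kg, material mass of one bed (Table 3, fixed bed)

hydration_cycle_min = mass_per_bed / solid_feed_rate / 60.0
print(f"hydration cycle ~ {hydration_cycle_min:.0f} min")   # ~30 min, as stated in the text

# The quoted average solid velocity (~0.28e-3 m/s) would correspond to a bed roughly
# half a metre long travelled within that cycle (illustrative consistency check only):
u_s = 0.28e-3              # m/s
print(f"implied bed length ~ {u_s * hydration_cycle_min * 60:.2f} m")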
Water Tank Energy Charging

The water tank charging process is depicted in Figures 9 and 10. Tests 1 and 2 are strongly correlated in terms of temperature levels and of the energy charged to the water storage tank. For both tests, the charging performance dropped when the temperature at the bottom of the tank started to rise, even before the heat recovery was switched on (see Figure 9). The increase of the water temperature measured at the bottom of the tank indicates the loss of stratification inside the water storage tank. Although the charging process started at different initial temperatures, as shown in Figure 9, the reasonable moment to activate the heat recovery was related to the loss of stratification in the water storage tank.

The energy charging process depended mostly on the stratification state of the water storage tank. The maximum thermal power transmitted from the reactor to the tank corresponded to the maximum temperature difference between the top and the bottom parts of the water storage (see Figures 9 and 10).

Concerning the fixed packed bed, the decrease in the thermal power in Figure 10 was due simultaneously to the attenuation of the sorption reaction (see Figure 8) and to the loss of stratification in the tank (see Figure 9). The activation of the heat recovery had a positive effect on the global performance, as shown in the next subheading.

For the moving bed test, the loss of stratification was the main reason for the decrease of the thermal power. As can be seen in Figure 10 (test 2), the charging thermal power started to decrease even before the activation of the heat recovery, but it remained stable at around ~320 W from 30 min to 55 min (see Figure 10, test 2), not considering the outlier at 44 min. However, starting from 57 min, the charging of the water storage was not very effective for either test because of the significant uniformization of the tank temperature and because the hydration reaction was in its terminal state.
Auxiliary Components Dynamics

The prototype global performance can also be analyzed in terms of the effectiveness of the auxiliary components, i.e., the air-to-water and air-to-air heat exchangers. The dynamic effectiveness of these heat exchangers was calculated from the measured data using equation (4). The plots of the dynamic effectiveness ε_w(t) and ε_a(t) are shown in Figure 11. As follows from Figure 11, there was not much difference in the dynamic effectiveness between the two tests. The effectiveness of the air-to-air heat exchanger ε_a reached 0.95 during the heat recovery operation (see Figure 11). This value fully corresponds to the nominal heat exchanger effectiveness at an air flow rate of 150 m³/h.

The effectiveness of the air-to-water heat exchanger ε_w is directly correlated with the stratification state of the water storage: the greater the temperature difference between the inlet and outlet of this heat exchanger (see Figure 9), the higher the effectiveness ε_w (see Figure 11). The maximum air-to-water heat exchanger effectiveness was 0.96 for the fixed bed (test 1) and 0.90 for the moving bed (test 2), before the temperature in the bottom part of the water tank began to increase.
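Equation (4) is not reproduced here; a common definition of dynamic heat-exchanger effectiveness that is consistent with the behaviour described (a larger temperature difference across the water tank gives a higher ε_w) is the achieved temperature change divided by the maximum possible one on the hot (air) side. The following is a hedged sketch under that assumption, with illustrative temperatures:

def dynamic_effectiveness(T_hot_in: float, T_hot_out: float, T_cold_in: float) -> float:
    """Effectiveness as achieved over maximum possible temperature drop of the hot stream,
    assuming the hot (air) side is the minimum-capacity stream. This is a standard
    epsilon-NTU style definition, not necessarily the paper's exact Equation (4)."""
    return (T_hot_in - T_hot_out) / (T_hot_in - T_cold_in)

# Example: hot air at 38 degC cooled to 22 degC against 21 degC water at the tank bottom
# gives epsilon ~ 0.94; once stratification is lost and the bottom warms to ~30 degC with a
# smaller air-side temperature drop, the effectiveness falls towards ~0.5.
print(round(dynamic_effectiveness(38.0, 22.0, 21.0), 2))
print(round(dynamic_effectiveness(38.0, 34.0, 30.0), 2))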
The positive effect of the heat recovery on the effectiveness ε_w can be observed briefly in Figure 11 as a small plateau between 28 min and 30 min. For the fixed packed bed, the value of ε_w(t) then continued to diminish gradually from 0.88 to 0.48 over the same period in which the sorption phenomenon became extinct and the overall temperature of the water tank tended towards uniformization. In contrast, the effectiveness ε_w for the moving bed test was kept almost constant at a level of 0.78 during 24 min (see Figure 11), which clearly demonstrates a strong point of the moving bed reactor.

Perspectives

Regarding the goals and the obtained results, future research will be oriented towards achieving a full charge of the hot water storage tank above 40 °C. Several technical solutions are under development, including a multi-material charge strategy, multi-stage energy storage, and variable flow rate charge operation. It is assumed that each hypothetical material should fit a restricted range of operating conditions according to the shape of its characteristic curve. This would guarantee the highest loading lift and thus the best energy storage density of the materials used under variable ambient conditions. The multi-stage storage architecture and the ability to work with a variable flow rate in the air-to-water heat exchanger will allow the heat exchanger effectiveness to be enhanced. These solutions open larger possibilities for the adaptation of thermochemical energy storage to ambient conditions.

Conclusions

The experimental results obtained from the pilot scale thermochemical energy storage prototype can be summarized in the following conclusions:

1. The amounts of thermal energy produced by the reactor (0.49 kWh) and stored in the water tank (~0.4 kWh) were of the same order for both the fixed packed bed and the moving bed reactor operating modes.
2. The experimental ratio of the energy storage densities between the fixed packed bed and the moving bed was found to be 2.5:1 in favor of the fixed packed bed reactor.
3. The significant drop of the energy storage density for the moving bed reactor is explained by the reduced residence time of the solid in the reactor (~30 min), the discontinuities of the granular flow, and the non-homogeneous sorption throughout the bed. The latter drawback can nevertheless be improved by packing the material in a denser manner inside the reactor.
4. The solid residence time in the moving bed reactor is a limiting factor for the energy storage density of the material. Prolonged contact of the tested solid with humid air can result in overhydration, which dramatically increases the viscosity of the granular flow in the moving bed reactor.
5. The loss of stratification in the water tank had a negative impact on the prototype global performance and resulted in a drop of the heat exchanger effectiveness from ~0.93 to ~0.5.
6. New energy management scenarios are currently being explored using other materials alongside other technical solutions.
Figure 2. Combined solar thermochemical energy storage system configuration.
Figure 5. Inlet and outlet temperatures for the fixed bed (test 1) and the moving bed (test 2) reactors.
Figure 6. Inlet and outlet water vapor pressures for the fixed bed (test 1) and the moving bed (test 2) reactors.
Figure 7. Water mass uptake (left axis) and sorption rate (right axis) for the fixed bed (test 1) and the moving bed (test 2) reactors.
Figure 8. Thermal power for the fixed bed (test 1) and the moving bed (test 2) reactors.
Figure 9. Temperature profiles during the charging process of the water tank storage.
Figure 10. Energy and heat flow during the charging process of the water tank.
Table 1. Precision characteristics of the measurement equipment.
Table 3. Initial and experimental conditions.
Table 4. Main experimental results.
13,610.8
2019-05-12T00:00:00.000
[ "Environmental Science", "Engineering" ]
Mapping Levees Using LiDAR Data and Multispectral Orthoimages in the Nakdong River Basins , South Korea Mapping levees is important for analyzing levee surfaces, assessing levee stability, etc. Historically, mapping levees has been carried out using ground surveying methods or only one type of remote sensing dataset. This research aims to map levees using airborne topographic LiDAR data and multispectral orthoimages taken in the Nakdong River Basins. Levee surfaces consist of multiple objects with different geometric and spectral patterns. This research investigates different methods for identifying multiple levee components, such as major objects and eroded areas. Multiple geometric analysis approaches such as the slope classification method, and elevation and area analysis are used to identify the levee crown, berm, slope surfaces, and the eroded area, with different geometric patterns using the LiDAR data. Next, a spectral analysis approach, such as the clustering algorithm, is used to identify the major objects with different spectral patterns on the identified components using multispectral orthoimages. Finally, multiple levee components, including major objects and eroded areas, are identified. The accuracy of the results shows that the various components on the levee surfaces are well identified using the proposed methodology. The obtained results are applied for evaluating the physical condition of the levees in the study area. Introduction A levee is defined as "a man-made structure; usually an earthen embankment, designed and constructed in accordance with sound engineering practices to contain, control or divert the flow of water to provide protection from temporary flooding" [1].The elements that constitute a typical levee are illustrated in Figure 1.As can be seen in Figure 1, the levee crown is defined as "the flat surface at the top of a levee that is equal to or narrower than the base", and the levee toe is defined as "the edge of the levee where the base meets the natural ground" [1].The levee berm is a man-made mound located between the levee toe and crown; it has generally been used as a trail road for human activities and vehicles [2].The width and height of the levee berm are primarily dependent on the ground conditions, levee heights, and the amount of available land [2,3]. Levees are generally covered by various materials, such as the asphalt or gravel road on the crown surface and concrete or vegetation on the slope surfaces.Multiple factors, such as the objectives of levee construction, local geological conditions, flooding risk factors, and local weather conditions, affect the type of materials selected to cover a levee's surface.In general, levees constructed in areas of high-value properties are designed to have relatively steep slopes, while levees constructed in areas of low-value properties are designed to have gentle slopes [3].Additionally, the riverside slope surfaces of levees are generally designed with their surfaces covered by concrete or stone blocks because their function is to protect the urban areas built along a river's course, or because the surfaces of such levees are considered to be at risk from wave actions [2,3]. 
Earthen levees are designed, constructed and maintained by local, state, or federal bodies.In the United States, the United States Army Corps of Engineers (USACE) provides a manual that presents the basic principles used in the design and construction of levees and levee systems [3].USACE also provides the levee safety program to assess the 2500 nationwide levee systems in the U.S. [4].In South Korea, the Ministry of Land, Infrastructure, and Transport (MOLIT) provides a manual to present the basic principles for the design and construction of levees and levee systems [5].The Water Management Information System (WAMIS) website provides information about the levee systems in South Korea, such as the lengths of the systems, the major materials covering the levee surfaces, the locations of the systems, the average slope degrees of the systems, and the expected maximum water elevations [6]. Historically, research on the detection of the specific features on levee surfaces has been carried out using one type of remote sensing dataset.Bishop et al. [7] extracted the levee crown from laser radar data using the least-cost path method and the flip 7 filter.Hossain et al. [8] detected levee slides from IKONOS and Quickbird imagery using the Iterative Self-Organizing Data Analysis Technique (ISODATA) clustering.Mahrooghy et al. [9] detected levee slides from the terrasar-x data.Hossain and Easson [10] detected levee slides from the hyperspectral images using the vegetation indices. The use of a single data set is limited for mapping levees due to the following reasons: Levees consist of various components such as the crown, slope, and berm surfaces with different geometric patterns, and multiple objects such as an asphalt or gravel road, concrete, and vegetation or soil with different spectral patterns.In addition, the eroded areas, randomly located on the levee surfaces, have geometric patterns different from the other objects on levee surfaces.Hence, the use of the multiple data sets is necessary for identifying the multiple objects on levee surfaces with different geometric and spectral patterns.In this research, multiple methods for identifying the multiple major objects and the eroded areas on the levees using the geometric information obtained from the LiDAR data and the spectral information obtained from the multispectral orthoimages are proposed. Study Area and Data Sets The study area for this research is a river basin in a 22 kilometer stretch of Nakdong River, which passes through the South Korean cities of Changnyeong, Milyang and Changwon.Eight levees are located in the study area (Figure 2).The Nakdong River is the longest river in South Korea with a total length of 525 km.The annual rainfall in the study area is 1229.0mm [5]. The levees in the study area were designed to have the minimum height 13.5 m and the minimum crown widths 4 m [6].All levees in the study area are typical levees, and their surfaces are generally covered by asphalt or gravel road on the crowns and concrete, and vegetation and soil on the slopes. This area is chosen for the following reasons: (1) the availability of multiple remote sensing data sets such as LiDAR data and multispectral aerial orthoimages taken at about the similar time, which makes this region an excellent visual site for levee mapping tasks; (2) this region suffers serious damage caused by annual flooding events [11]. 
The airborne topographic LiDAR data and the multispectral aerial orthoimages are used as the main data sets for this research. The LiDAR data were acquired in December 2009 using the ALTM Gemini 167 sensor at a flight speed of 234 kilometers per hour. The horizontal datum is the International Geodetic Reference System (GRS) 1980, and the vertical datum is the mean sea level (MSL) at Incheon Bay, the vertical datum of the Korean geodetic datum. The average point density of the given LiDAR data is 1.5 points/m². The horizontal accuracy is 15 cm and the vertical accuracy is 5 cm. The multispectral aerial orthoimages were acquired in January 2010 using the digital mapping camera (DMC) made by Z/I Imaging GmbH, Aalen, Germany; the sensor provides four color channels (red, green, blue and NIR bands). The orthoimages are georeferenced to the Transverse Mercator (TM) coordinate system based on the International GRS 1980 datum. The ground resolution of the orthoimages is 25 cm, and the root mean square error (RMSE) is 0.12 m.

Methods

Multiple components located on the levees have different spectral and geometric patterns that can be identified using multiple analysis methods. Figure 3 shows the procedure for mapping these surfaces. The procedure includes multiple steps, such as the slope classification method, elevation and area analysis, median filtering, morphological filtering, clustering algorithms, and the breakline detection method, for mapping levees.

In the diagram shown in Figure 3, the LiDAR Digital Surface Model (DSM) is generated using the linear interpolation method. Then, the slope map is generated by calculating the maximum rate of change between each pixel and its neighbors. The flat and steep polygons are generated separately from the slope map using the slope classification method. The levee locations are identified by manually selecting the slope polygon pairs from the steep polygons. The breakline detection method is applied to distinguish the levee slope polygons from other objects. The elevation and area analysis is carried out to separate the crown, berm, and eroded polygons. Morphological filtering is applied to refine the original crown polygon. Using this procedure, the crown, slope, berm, and eroded polygons with different geometric patterns are identified. Then, the major objects on the levees, such as the asphalt, gravel, and soil roads on the crown and berm polygons, and the concrete, soil, and vegetation on the slope polygons, are identified using the clustering algorithms. Finally, the multiple components, such as the major objects and the eroded areas on the levee surfaces with different spectral and geometric patterns, are identified, and the accuracy of the identified objects is measured.
Generating Slope Maps The LiDAR data consists of the irregularly distributed points.To represent the topographic surfaces using the grid format that consists of the constant cells, the DSM is generated from the given LiDAR point cloud (the point density of the given LiDAR data: 1.5 points/m 2 ).The interpolation method is employed to estimate the elevation of each cell in the generated LiDAR DSM.In general, slopes are significantly changed at the levee crown and toe surfaces.Hence, to detect the levee crown and slope surfaces, the linear interpolation method is used to generate the LiDAR DSM, since it has characteristics that can describe the features, sharp edges, and steep surfaces [12].The point density of the LiDAR data plays an important role in determining the grid resolution of the created LiDAR DSM.There is no reference by which to calculate the grid resolution of the DSM as a function of the point density of LiDAR data.In this research, a 1 m resolution is set as the grid resolution of the DSM to make sure that each cell of the DSM includes at least one LiDAR point.Figure 4 shows one section of the LiDAR DSM generated from the LiDAR points using the linear interpolation method.In Figure 4, the objects with brightly colored pixels are relatively higher in elevation than the neighboring pixels. In general, the LiDAR DSM often includes outliers, which are the pixels that are significantly different in elevation compared with all the nearby pixels.These outliers are caused by random errors or objects such as utility poles, and these outliers, located near the levees, can cause difficulty when trying to detect levee mounds, which generally have gradual slopes.To remove these nearby outliers and to preserve the mounds that make up the levee's crown and slope surfaces, filtering is employed.Research on minimizing these outliers by filtering to extract the coastal features has been carried out by Liu et al. [13] and Choung et al. [14].In their research, a median filter was employed, which is a non-linear filter based on neighborhood ranking [15,16].The major advantages of median filtering over other linear filters are eliminating points with much larger values than the immediate neighboring points, and avoiding data modification [13,15].Figure 5 shows a refinement of the LiDAR DSM using the median filter: (a) one section of the original LiDAR DSM that includes the outlier (the feature in the red circles); and (b) one section of the refined LiDAR DSM that does not have the outlier after the median filtering.In Figure 5, the outliers located near the levees in the raw LiDAR DSM (see Figure 5a) are removed, and the mounds that consist of the levee's top and slope surfaces are preserved in the refined LiDAR DSM (see Figure 5b).The next step is to generate the slope map from the refined LiDAR DSM by calculating the maximum rates of elevation difference between each pixel of the refined LiDAR DSM and its neighboring pixels.In the generated slope map, an intensity value for each pixel represents the slope degree of the area.In general, the pixels with low slope values represent the objects that have relatively flat terrains, and the pixels with high slope values represent the objects that have relatively steep terrains.Figure 6 shows one section of the generated slope map. 
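A hedged sketch of the DSM and slope-map generation described above (linear interpolation of irregular LiDAR points onto a 1 m grid, median filtering to suppress outliers, and slope as the maximum rate of elevation change between a cell and its neighbours). The library calls and the input file name are assumptions; the exact implementation used in the study may differ:

import numpy as np
from scipy.interpolate import griddata
from scipy.ndimage import median_filter

def build_dsm(points_xyz: np.ndarray, cell: float = 1.0) -> np.ndarray:
    """Linear interpolation of irregular LiDAR points (N x 3 array) onto a regular grid."""
    x, y, z = points_xyz[:, 0], points_xyz[:, 1], points_xyz[:, 2]
    xi = np.arange(x.min(), x.max(), cell)
    yi = np.arange(y.min(), y.max(), cell)
    gx, gy = np.meshgrid(xi, yi)
    return griddata((x, y), z, (gx, gy), method="linear")

def slope_map_deg(dsm: np.ndarray, cell: float = 1.0) -> np.ndarray:
    """Maximum rate of elevation change between each cell and its neighbours, in degrees."""
    dzdy, dzdx = np.gradient(dsm, cell)
    return np.degrees(np.arctan(np.hypot(dzdx, dzdy)))

# points = np.loadtxt("lidar_tile.xyz")              # hypothetical input file of x y z points
# dsm = median_filter(build_dsm(points), size=3)     # 3x3 median filter removes isolated spikes
# slope = slope_map_deg(dsm)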
Generating the Levee Component Polygons Typical levees consist of steep surfaces with elevations that gradually increase from the toe to the crown on their surfaces, and flat surfaces with stable surface elevations.Steep and flat surfaces are separately generated from the slope map using the slope classification method.Historically, the levees constructed in South Korea are designed to have slope degrees from 18.43° (1V (Vertical):3H (Horizontal)) to 33.69° (1V:1.5H) on their slope surfaces [2,5].Considering the geometric changes, such as erosion occurring on the levee slope surfaces, ±10° is added to the slope degree range for selecting the pixels representing the levee slope surfaces.Hence, the slope degree range for generating steep areas, including the levee slope surfaces, is set as [8.43°, 43.69°].The slope degree range for the extraction of flat areas, including the levee crown surfaces, is set to avoid the first range and uses the lower degree values.Hence, it is set as [0°, 8.43°].Using the above two ranges, the two types of areas are separately generated from the slope map.In this research, the binary image generated using the first slope degree range is called the steep area image, and the binary image generated using the second slope degree range is called the flat area image.In general, the steep area image shows the objects that have steep terrains, such as the levee slope surfaces, and the building walls, while the flat area image shows the objects that have flat terrains, such as the natural ground, the levee crown surfaces, highways, and roofs.The flat and steep area images separately generated from the slope map, using the slope classification method, are shown in Figure 7.In Figures 7 and 8, the levee crown polygons and the ground are identified in the flat area images, and the levee slope polygons are identified in the steep area images.The levees have different geometric characteristics from the objects such as highways or bridges because the levee mounds consist of the slope surfaces on both sides and are located along a river's course.Hence, the steep surfaces that represent the levee slope polygon pair are manually selected from the steep area image to identify the levee locations in the study areas.Figure 9 shows an example of the selected levee slope polygon pairs (yellow polygons).In Figure 9, some areas of the levee slope polygons (yellow polygons) are not separated from the neighboring objects (houses, trees, buildings, etc.) located near the levees.Additionally, the levee toes generally have sharp edges because their surfaces are usually cut by the water flow [17], and the levees are designed to have certain degrees on their slope surfaces.Hence, for detecting the levee boundaries generally located at the sharp edges of the toe surfaces and for distinguishing between the levee slope polygons and the neighboring objects, the breakline detection method is employed.In this research, the breakline detection method developed by Choung et al. 
[14] is employed for mapping the levee boundaries. The method is a semi-automatic method for constructing a 3D breakline from the LiDAR data by connecting manually selected line segments. It is efficient for detecting line segments located at the sharp edges (step or ramp edges) of coastal features such as blufflines [14]. The method includes the following steps. First, median filtering is applied to the LiDAR points to remove the outliers, which differ significantly in elevation from the neighboring points. Second, Delaunay triangulation networks are constructed using the LiDAR points located in the levee slope polygons. The next step is to find an edge that serves as a levee toe line candidate by examining the orientation of the two surface triangles that intersect at this edge. Using the above procedure, Method 1 and Method 2 are employed to extract the levee toe edges. Both methods use the dihedral angle between two normal vectors, cos A(e) = (n_i · n_j)/(‖n_i‖ ‖n_j‖) [14]. In Method 1, e is the edge of the Delaunay triangulation network, A(e) is the dihedral angle defined by the two normal vectors of the two adjacent triangles, and ‖n_i‖ and ‖n_j‖ are the norms of these two normal vectors. In Method 2, n_i and n_j are defined as the average normal vectors of the vertices x_i and x_j opposite to the edge e; these average normal vectors are computed using all the normal vectors of the triangles sharing vertices x_i and x_j, respectively. Using both methods, the levee toe line candidates are extracted separately, and a set of candidate edges suitable for the levee toe line segment is selected from the combination of edge groups A and B extracted by Method 1 and Method 2, respectively. The next step is to remove unsuitable edges that have a high elevation difference between their two end points or that have one end linked to multiple edges, by examining the elevation difference between the two endpoints of the edge and the edge connectivity. The final step is to manually select the line segments located in the levee slope polygons that were selected from the steep area image. Using the selected line segments, the levee boundaries are constructed by connecting them. The generated levee boundaries separate the levee slope polygons from the neighboring objects. Figure 10 shows the levee slope polygons (brown polygons) separated from the other objects (yellow polygons) by the levee boundaries (red lines) generated by the breakline detection method.
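The candidate-edge test in both methods hinges on the dihedral angle between the normals associated with the two triangles sharing an edge. The following is a hedged reconstruction of that test; the angle threshold and function names are assumptions, and the exact selection rules of [14] are not reproduced:

import numpy as np

def dihedral_angle_deg(n1: np.ndarray, n2: np.ndarray) -> float:
    """Angle between two (average) triangle normals sharing edge e:
    A(e) = arccos( n1 . n2 / (||n1|| ||n2||) )."""
    c = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def is_toe_edge_candidate(n1, n2, min_angle_deg: float = 20.0) -> bool:
    """An edge is a levee-toe candidate when the surface folds sharply across it,
    i.e. when the dihedral angle exceeds a threshold (threshold value is an assumption)."""
    return dihedral_angle_deg(np.asarray(n1, float), np.asarray(n2, float)) >= min_angle_deg

# Method 1 would pass the normals of the two adjacent triangles directly;
# Method 2 would pass the average normals of the two vertices opposite edge e.
print(is_toe_edge_candidate([0.0, 0.0, 1.0], [0.0, 0.5, 1.0]))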
The next step is to distinguish the levee crown polygons from the multiple flat polygons extracted from the flat area images. After the levee boundaries are generated using the breakline detection method, the multiple flat polygons located between the levee boundaries are selected. Figure 11 shows the multiple flat polygons (pink polygons) on the levee surfaces. Among these, the polygon with the highest average elevation is selected as the initial levee crown polygon; the polygons with a lower average elevation are defined as the other flat polygons. Figure 12 shows the selected levee crown polygon (pink polygon), which has small holes, gaps, or narrow breaks on its surface (red circles). Due to the failures that generally occur on the crown surfaces, the selected crown polygon often has such small holes, gaps, or narrow breaks. To remove them, morphological filtering is applied to the initial crown polygon in the binary image. Morphological filtering is a technique for analyzing and processing the geometric structure of an image object, creating a newly shaped object by running a specific-shaped Structuring Element (SE) over the Input Object (IO) [18]. Morphological filtering has a geometric filtering property that can preserve the geometric features of the input objects and filter out noise in the input objects by controlling the filter design [18]. Due to these characteristics, morphological filtering is employed to refine the initial crown polygon, which generally has a linear structure.

In this research, the morphological closing operator is used to fill the small holes and gaps in the selected levee crown polygon shown in Figure 12. Since the levee crown polygons have a linear structure with stable widths, the shape of the SE is set as a square. To preserve the original width of the IO and fill the small holes or gaps in the IO, the width of the SE should be similar to the width of the levee crown polygon. According to the construction law from MOLIT in South Korea, the minimum crown width of levees in South Korea is 4 m [2,5]. Hence, the width of the SE is also set as 4 m. Figure 13 shows the levee crown polygon (pink polygon) refined by the morphological closing operator. Compared with the initial crown polygon shown in Figure 12, the width of the refined crown polygon shown in Figure 13 is preserved. In addition, the holes and gaps present in the initial crown polygon in Figure 12 are removed in the refined crown polygon shown in Figure 13. After the levee crown polygons are refined by morphological filtering, the levee berm polygons and eroded polygons are separated within the other flat polygon group. Historically, the levee berm is designed to be located 3 m lower than the levee crown on the levee surfaces [5]. Considering this definition of the levee berm, we separate the levee berm polygons from the other flat polygon group by using the elevation and area analysis described in the following paragraphs.

Assumption 1: A polygon with an elevation in the range determined by the elevation analysis is defined as a levee berm candidate polygon, and the range is based on the relation LBC = LT − T, where LBC denotes the average elevation of the levee berm candidate polygons, LT denotes the average elevation of the levee crown polygon, and T denotes the threshold of the elevation difference between the crown polygon and the berm polygon. Due to possible topographic changes occurring on the levee surfaces, ±1 m is added to this range when selecting the levee berm candidate polygons. Following the law of construction of the levee berm, T is set as 3 m, and a flat polygon with an elevation in the above range is identified as a levee berm candidate polygon. Assumption 2: There is no reference on the size and length of the levee berm. Since levee berms are generally used as roads [5], we assume that they have an appropriate road geometry. Hence, we select the levee berm polygons from the candidate polygons by using the following equation.
AB ≥ TA (3), where AB denotes the area of a candidate polygon and TA denotes the area threshold. Based on an empirical analysis, a polygon with an area larger than 100 m² is selected as a levee berm polygon, and a polygon smaller than 100 m² is defined as an eroded polygon. Hence, TA is set as 100 m². Using the elevation and area analysis described above, a polygon that satisfies both assumptions is selected as a levee berm polygon; otherwise, it is defined as an eroded polygon. Figure 14 shows the levee crown polygon (pink), the levee slope polygons (brown) and the eroded polygons (yellow) on the levee surfaces. Figure 15, on the other hand, shows the levee crown polygon (pink), the levee slope polygons (brown) and the levee berm polygon (purple) on the levee surfaces. Through all these procedures, the levee crown, slope, berm and eroded polygons, which have different geometric patterns, are identified (see Figures 14 and 15).
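A hedged sketch tying together the geometric refinement and classification steps above: morphological closing of the crown polygon with a 4 m square SE, and the berm/eroded split via the elevation (Assumption 1) and area (Assumption 2) tests. Grid resolution, tolerance handling, and function names are assumptions consistent with the text:

import numpy as np
from scipy.ndimage import binary_closing

def refine_crown(crown_mask: np.ndarray, cell_m: float = 1.0, se_width_m: float = 4.0) -> np.ndarray:
    """Morphological closing with a square SE as wide as the minimum crown width (4 m)."""
    n = max(1, int(round(se_width_m / cell_m)))
    return binary_closing(crown_mask, structure=np.ones((n, n), dtype=bool))

def classify_flat_polygon(mean_elev: float, area_m2: float, crown_elev: float,
                          T: float = 3.0, tol: float = 1.0, TA: float = 100.0) -> str:
    """Assumption 1 (elevation ~T below the crown, +/- tol) and Assumption 2 (area >= TA)
    decide whether an 'other flat polygon' is a berm or an eroded polygon."""
    is_berm_elev = (crown_elev - T - tol) <= mean_elev <= (crown_elev - T + tol)
    return "berm" if (is_berm_elev and area_m2 >= TA) else "eroded"

# Example: a flat polygon 3.2 m below the crown with 150 m2 -> berm; a small low patch -> eroded.
print(classify_flat_polygon(10.3, 150.0, crown_elev=13.5))
print(classify_flat_polygon(11.8, 40.0, crown_elev=13.5))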
Identification of the Major Objects on the Levee Components

Since these objects have different spectral characteristics that can be identified in multispectral image sources, multispectral bands such as red, green, blue and Near-Infrared (NIR) are used for their identification. In addition, LiDAR systems provide an intensity value for each return; the LiDAR intensity value is determined by an object's reflectance and can be used to identify land-cover classes [19]. Hence, for the identification of the multiple components located on the levee surfaces, the spectral information obtained from the multispectral orthoimages and the LiDAR intensity values obtained from the LiDAR system are used as the main parameters. Clustering is a machine learning technique widely used to extract thematic information from multispectral images in remote sensing applications and research [20][21][22]. It is an unsupervised learning technique for organizing objects into multiple groups whose members are similar in some way, without training samples [23,24]. In this research, two traditional clustering algorithms (the K-means and the ISODATA algorithms) are used to identify the major objects located on the crown/slope/berm surfaces. Unsupervised clustering does not require training samples to form clusters of similar members with different spectral characteristics. As each major object on the various levee surfaces has definite spectral characteristics, it is assumed that a sample of one object cluster cannot be included in another object cluster. For these reasons, this research employs the K-means and ISODATA algorithms, which separate the clusters with cluster boundaries, to identify the multiple major objects on the crown/slope/berm surfaces. The K-means clustering and the ISODATA clustering are employed separately to extract the major objects from the levee surfaces. Since K-means clustering requires the number of clusters as an input, the number of necessary clusters is set based on a priori knowledge of the multiple components covering the levee surfaces, provided by the WAMIS website [6]. The ISODATA clustering is similar to the K-means clustering with one advantage: ISODATA allows different numbers of clusters, while K-means assumes that the number of clusters is known a priori [25]. Since the ISODATA clustering is a self-organizing algorithm, it requires little human input compared to supervised classification [26]. The appropriate values of each parameter are determined through multiple experiments. After the procedure is completed, several clusters are generated, and the clusters that represent the same objects are manually merged at the user's discretion. Figure 16 shows the multiple levee components (asphalt road (red), soil road or patch (orange), gravel road (cyan), concrete block (blue) and vegetation (green)) on the levee crown/slope/berm surfaces identified by the ISODATA clustering (a and b) and the K-means clustering (c and d), as well as the eroded areas (yellow).

Accuracies of the Identified Objects on Levees

The crown and berm surfaces generally consist of the asphalt road, the gravel road, the soil road, etc., and the slope surfaces generally consist of the concrete block, vegetation, soil, etc. In this research, the accuracy of the identified objects on the levee surfaces is measured using 148 checkpoints manually determined from the aerial orthoimages by an experienced operator. The average distance between the checkpoints is 100 m. Figure 17 shows examples of the checkpoints located on the various levee surfaces. Table 1 shows the accuracy of the identified objects generated by both clustering algorithms.

Discussion of the Accuracies and the Misclassification Errors

In Table 1, some misclassification errors in the identified objects are caused by moving objects, such as vehicles on the levee crowns. Figure 18 shows an example of the misclassification errors caused by moving objects. In Figure 18a,b, moving objects generally appear in either data set because the two data sets were acquired at different times. In general, a moving object is higher than the levee crown, which causes the pixels representing the moving object to have a higher slope degree than the pixels representing the levee crown in the slope map (see the red circle in Figure 18b). Figure 18c shows that the shape of the generated levee crown polygon is distorted by the moving object on the levee crown, so that the area of the moving object is not classified as part of the levee crown surface, which causes misclassification errors when identifying the objects on the levee crown and slope surfaces.
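A hedged sketch of the clustering step described above, using K-means from scikit-learn on a stack of the four image bands plus LiDAR intensity; the ISODATA variant and the parameter values used in the study are not reproduced, and the band variable names and cluster count are assumptions:

import numpy as np
from sklearn.cluster import KMeans

def cluster_levee_surface(red, green, blue, nir, lidar_intensity, mask, n_clusters=5):
    """Cluster the pixels inside a levee-component polygon (boolean mask) into
    n_clusters spectral groups; returns a label image (-1 outside the mask)."""
    features = np.stack([band[mask] for band in (red, green, blue, nir, lidar_intensity)], axis=1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features.astype(float))
    out = np.full(mask.shape, -1, dtype=int)
    out[mask] = labels
    return out

# Clusters that represent the same object (e.g. two spectral variants of the gravel road)
# would afterwards be merged manually, as described in the text.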
Table 1 shows that both clustering algorithms have similar accuracies for identifying artificially constructed objects, such as asphalt roads and concrete blocks. In general, artificially constructed objects are visually well distinguished and are rarely misclassified because of their paved surfaces. These characteristics also cause the artificially constructed objects to be well identified by both clustering algorithms. Objects with unpaved surfaces, however, such as gravel roads, are easily misclassified as other objects such as soil roads [27]. Moreover, soil surfaces appear not only on the slope surfaces but also on the crown surfaces. These characteristics also make it difficult to determine the number of clusters required by the K-means clustering. Examples of the three clusters (gravel road, soil (road or patch), and vegetation) identified by both algorithms are shown in Figure 19. In Figure 19, some segments of the gravel roads are not classified by the K-means clustering due to the misclassification of their surfaces (see the red circle in Figure 19c), while these segments are well identified as the gravel road cluster by the ISODATA clustering (see the red circle in Figure 19b). In conclusion, both clustering algorithms can be used to identify artificially constructed objects with paved surfaces, while the ISODATA clustering is more efficient than the K-means clustering for identifying objects with unpaved surfaces on the levees.

Comparison of the Results with the Previous Research

In previous research, the spectral parameters obtained from image sources have been used for detecting features on levees, which imposes fundamental limits on mapping the multiple levee components that have different spectral and geometric characteristics. The methodology proposed in this research is useful for identifying multiple components on levees by using the geometric and spectral information obtained from the LiDAR data and the multispectral orthoimages. Figure 20 compares the multiple components on levees identified in this research using the LiDAR data and the multispectral orthoimages with the slides on levees detected using multispectral images by Hossain et al. (2006) [8]. Figure 20 shows that the use of a single data set is useful for detecting specific features, such as slides on levees, but is limited for identifying the multiple levee components with different spectral and geometric patterns, whereas the use of multiple data sets is efficient for identifying not only single features but also the multiple levee components, such as the major objects and the eroded areas on levees.

The statistical results show that the procedure introduced in this research is efficient for identifying the major objects on the levee surfaces. The accuracy of the detected geometric features (the levee crowns, slopes, berms and eroded areas) could be measured using ground data, such as reference lines obtained through ground surveying. However, such ground data are currently not available, which means that the detected geometric features cannot be compared with ground data in this study.
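A minimal sketch of how the checkpoint-based accuracy reported in Table 1 can be computed, i.e. per-class and overall accuracy from paired reference and predicted labels at the 148 checkpoints. The data structures and the tiny example labels are hypothetical:

from collections import Counter, defaultdict

def checkpoint_accuracy(reference: list, predicted: list) -> dict:
    """Per-class and overall accuracy from reference/predicted labels at the checkpoints."""
    per_class_total = Counter(reference)
    per_class_hits = defaultdict(int)
    for ref, pred in zip(reference, predicted):
        if ref == pred:
            per_class_hits[ref] += 1
    report = {cls: per_class_hits[cls] / n for cls, n in per_class_total.items()}
    report["overall"] = sum(per_class_hits.values()) / len(reference)
    return report

# Example with three checkpoints (labels are illustrative only):
print(checkpoint_accuracy(["asphalt", "gravel", "vegetation"],
                          ["asphalt", "soil", "vegetation"]))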
Discussion of Assessing Stability of the Levees in the Study Area

This section discusses the evaluation of the physical condition of levees using the results obtained in this research. The physical condition of levees is evaluated by using the following equations: BPC ≥ EA (4), GPC < EA (5), where GPC denotes a good physical condition of the levee, BPC denotes a bad physical condition of the levee, and EA denotes the entire area of the identified eroded surfaces on the levee. There is no precise rule for evaluating the physical condition of levees with identified eroded surfaces. Based on an empirical analysis, EA is set as 100 m². Hence, a levee with more than 100 m² of eroded surfaces is assumed to be in a bad physical condition, while a levee with less than 100 m² of eroded surfaces is assumed to be in a good physical condition. Table 2 shows the results of evaluating the physical condition of the eight levees in the study area. In Table 2, Levees 1, 2 and 3 are in a good physical condition due to the relatively small eroded areas on their surfaces, while Levees 4, 5, 6, 7 and 8 are in a bad physical condition due to the relatively large eroded areas on their surfaces. Table 2 shows that the results obtained in this research can be used to evaluate the physical condition of levees and thus to assess levee stability without requiring physical access.
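A minimal sketch of the empirical rule above; the threshold comes from the text, while the function name and example values are assumptions:

def levee_condition(eroded_area_m2: float, threshold_m2: float = 100.0) -> str:
    """Empirical rule from the text: more than ~100 m2 of eroded surface -> bad condition."""
    return "bad" if eroded_area_m2 > threshold_m2 else "good"

# Example outcomes in the spirit of Table 2 (areas are illustrative, not the measured values):
print(levee_condition(35.0))    # 'good'
print(levee_condition(240.0))   # 'bad'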
Conclusions and Future Works

Mapping levees is important for identifying multiple levee components, analyzing levee surfaces, and assessing levee stability. The use of a single data set for mapping levees is limited because of the different geometric and spectral patterns of the various components on levee surfaces. This research proposes a new methodology for mapping levees using geometric and spectral parameters obtained from the LiDAR data and the multispectral orthoimages through multiple geometric and spectral analysis techniques, such as the breakline detection method, morphological filtering, the 2-D interpolation method, median filtering, clustering algorithms, the slope classification method, and elevation and area analysis. This research contributes to mapping levees that consist of various components using both geometric and spectral parameters obtained from the LiDAR data and the multispectral orthoimages taken in the Nakdong River Basins, South Korea. The ISODATA clustering has 86% overall accuracy for identifying the objects on levees, while the K-means clustering has 81% overall accuracy. This research shows that the ISODATA clustering has higher accuracy than the K-means clustering for identifying the objects on levees, because unpaved surfaces are easily misclassified as other surfaces. In addition, the physical condition of the levees in the study area is evaluated using the levee components identified in this research. As seen in Figure 1, levees consist of area-based features, such as the levee crown and the levee slopes, and line-based features, such as the levee lines. Mapping levee lines is also important for evaluating erosion on levees and assessing levee stability, and the results obtained in this research can be used in research on mapping levee lines. Hence, future work will map levee lines using the results obtained in this research.

Figure 1. Elements of a typical levee, modified from a figure in [1].
Figure 2. Locations of the eight levees in the study area.
Figure 3. Diagram showing the procedure for mapping levees.
Figure 4. One section of the LiDAR Digital Surface Model (DSM) generated from the LiDAR points using the linear interpolation method.
Figure 5. Refinement of the LiDAR DSM using the median filter. (a) Original LiDAR DSM; (b) Refined LiDAR DSM by median filtering.
Figure 6. One section of the generated slope map.
Figure 7. Flat and steep area images separately generated from the slope map using the slope classification method. (a) Flat area image; (b) Steep area image.
Figure 8. Flat polygons (pink polygons) selected from the flat area image, and steep polygons (yellow polygons) selected from the steep area image.
Figure 9. Example of the levee slope polygon pairs (yellow polygons).
Figure 10. Levee slope polygons (brown polygons) separated from the other objects (yellow polygons) by the levee boundaries (red lines) generated by the breakline detection method.
Figure 12. Initial levee crown polygon (pink polygon) having small holes, gaps, or narrow breaks on its surfaces (the red circles).
Figure 13. Refined levee crown polygon (pink polygon) obtained by using the morphological closing operator.
Figure 14. Levee crown polygon (pink), the levee slope polygons (brown) and the eroded polygons (yellow) on the levee surfaces.
Figure 15. Levee crown polygon (pink), the levee slope polygons (brown) and the levee berm polygon (purple) on the levee surfaces.
Figure 16. Multiple levee components identified by the Iterative Self-Organizing Data Analysis Technique (ISODATA) clustering and the K-means clustering. (a,b) Multiple levee components identified by the ISODATA clustering; (c,d) identified by the K-means clustering method.
Figure 17. Examples of the checkpoints located on the various levee surfaces. (a) Example of the checkpoints located on the whole area of the levee; (b) Example of the checkpoint located on the soil surface; (c) Example of the checkpoint located on the gravel road surface; (d) Example of the checkpoint located on the vegetation surface.
Figure 18. Example of the misclassification errors caused by the moving objects. (a) Levee crown and slopes in the image source; (b) Moving object in the slope map; (c) Generated levee crown polygon.
Figure 19. Examples of the three clusters identified by both algorithms. (a) Image source showing the levee surface; (b) Three clusters identified by the ISODATA clustering; (c) Three clusters identified by the K-means clustering.
Figure 20. Comparison of the results with the previous research. (a) Results of the slides on levees detected using the multispectral images by Hossain et al. (2006) [8]; (b) Results of the multiple components on levees identified using the LiDAR data and the multispectral orthoimages by this research.
Table 1. Accuracy of the identified objects generated by both clustering algorithms.
Table 2. Results of evaluating the physical condition of the eight levees in the study area.
8,390.2
2014-09-16T00:00:00.000
[ "Environmental Science", "Mathematics" ]
Resource-Efficient Multicast URLLC Service in 5G Systems Many emerging applications, such as factory automation, electric power distribution, and intelligent transportation systems, require multicast Ultra-Reliable Low-Latency Communications (mURLLC). Since 3GPP Release 17, 5G systems natively support multicast functionality, including multicast Hybrid Automatic Repeat Request and various feedback schemes. Although these features can be promising for mURLLC, the specifications and existing studies fall short in offering guidance on their efficient usage. This paper presents the first comprehensive system-level evaluation of mURLLC, leveraging insights from 3GPP specifications. It points out (i) how mURLLC differs from traditional multicast broadband wireless communications, and (ii) which approaches to provide mURLLC require changing the paradigm compared with the existing solutions. Finally, the paper provides recommendations on how to satisfy strict mURLLC requirements efficiently, i.e., with low channel resource consumption, which increases the capacity of 5G systems for mURLLC. Simulation results show that proper configuration of multicast mechanisms and the corresponding algorithms for mURLLC traffic can reduce resource consumption up to three times compared to the baseline solutions proposed for broadband multicast traffic, which significantly increases the system capacity. Introduction Ultra-Reliable Low-Latency Communications (URLLC) is a new type of service supported in 5G systems.While URLLC Quality of Service (QoS) requirements on latency and reliability depend on the application, the typical values considered by 3GPP are 1-10 ms for latency and 1 × 10 −4 -1 × 10 −9 for reliability [1].In Releases 15/16, 3GPP has developed a New Radio (NR) access technology that enables unicast URLLC service, i.e., the delivery of a data stream to/from a single User Equipment (UE).For that, NR supports mini-slots, new robust Modulation and Coding Schemes (MCSs), fast Hybrid Automatic Repeat Request (HARQ), etc. Many emerging applications, such as factory automation, electric power distribution, and intelligent transportation systems, require the support of multicast URLLC (mURLLC), i.e., the delivery of the same data from a base station (called gNB) to a group of UEs with strict requirements on latency and reliability.The straightforward approach to enable multicast is to convert a multicast stream into multiple unicast streams addressed to each UE, which manifold increases the channel resource consumption.Being inefficient for a low number of UEs, it becomes completely unsuitable for massive mURLLC because it increases delays above those required for the UEs served last.To save channel resources and reduce delays, since Release 17, NR supports new mechanisms that enable native multicast for different traffic types, e.g., voice and IPTV.Release 18 only slightly enhances multicast functionality, e.g., by enabling data reception in inactive state and dynamic switching between multicast/unicast transmission [2], while Release 17 adds new multicast mechanisms to the NR protocol stack [3][4][5][6] and system architecture [7].The detailed description and analysis of the novelties can be found in recent papers [8][9][10][11], which mainly focus on multicast broadband traffic.In contrast, this paper focuses on those mechanisms and algorithms that are needed for mURLLC. 
The first mechanism that improves reliability is multicast HARQ. If a multicast packet has not been delivered to some UEs, the gNB can schedule a HARQ retransmission that can be addressed either to the original multicast UE group or to a particular UE. HARQ retransmissions can be carried out either based on the feedback from UE(s) or blindly. Note that the 3GPP specifications do not define how to select the number of HARQ retransmissions and their parameters (e.g., MCS). The second mechanism enables several ways in which UEs can provide feedback about the decoding status (success or failure) of previous transmissions. This feedback is needed to perform conditional HARQ retransmissions, i.e., to decide on a transmission retry based on the set of UEs to which the data have not yet been delivered. However, the 3GPP specifications do not describe how to configure such feedback. In addition to these two mechanisms, in this paper, we study various transmission parameter selection algorithms, which are left for implementation by vendors. Specifically, since Massive Multiple Input Multiple Output (M-MIMO) is a key feature of 5G systems, the gNB shall implement algorithms that select a precoder, allocate frequency resources, and select a single MCS for each multicast transmission. Without proper configuration of the multicast mechanisms and the above-mentioned transmission parameters, either the strict reliability and latency requirements may not be satisfied or the channel resource consumption is too high, which limits the cell capacity. Despite its importance, the area of mURLLC has not been well addressed in the literature. Many works evaluate the performance of new multicast mechanisms [12][13][14][15] and propose new transmission parameter selection algorithms [16][17][18][19][20]. However, these works only consider multicast broadband traffic (e.g., file transfer, IPTV) with moderate latency and reliability requirements, while mURLLC imposes much stricter requirements. Thus, the considered multicast solutions may not be suitable for mURLLC. In other work, many URLLC-aware transmission parameter selection [21][22][23][24] and scheduling [25][26][27][28] algorithms have been designed for unicast traffic. An open question is how to adapt them to the multicast case. The paper aims to fill this research gap. It shows which approaches to providing mURLLC require changing the paradigm and which approaches can be inherited from existing ones. Table 1 summarizes these findings, which are discussed in detail in the following sections. An arrow with the label "new" means that the paper proposes a modification of an existing solution for mURLLC. An arrow without the label "new" means that the existing solution, or the specific subset of solutions identified in the paper, can be applied to mURLLC directly. The sign "X" means that the solutions proposed for other areas are inefficient for mURLLC. While analyzing the algorithms, we pay much attention to computational complexity because mURLLC requires data delivery to multiple UEs and the decisions need to be made within a short time. Thus, only a limited set of algorithms with low complexity can be applied for mURLLC. The contributions of the paper are as follows: 1. We review various existing transmission parameter selection algorithms developed for multicast broadband traffic and unicast URLLC traffic and determine how to adapt them for mURLLC; 2.
We carry out extensive performance evaluation and comparison of various algorithms under the same conditions using link-level and system-level simulations; 3. Based on extensive simulation results, we provide a set of recommendations on how to configure the new multicast mechanisms and determine the algorithms providing mURLLC with low resource consumption, which, in turn, increases system capacity.Note that some recommendations contradict those for multicast broadband traffic and unicast URLLC traffic.For example, several works [29,30] report that feedback-based multicast HARQ retransmissions are inefficient for broadband traffic because they slightly reduce the downlink resource consumption while significantly increasing the uplink resource consumption.In contrast, we show that some feedback schemes do significantly reduce the overall resource consumption under strict reliability constraint.Another example of a non-trivial recommendation is related to the resource allocation algorithm.In contrast to the existing works [25,26] that recommend using the Frequency-Selective (FS) scheduler (i.e., allocate resource blocks taking into account their quality) for unicast URLLC traffic, we show that the gain of an FS scheduler for a large multicast group is below 5% with respect to a non-FS scheduler while the complexity increases by 40%.Thus, the type of scheduler (FS or non-FS) should be selected depending on the multicast group size. The rest of the paper is organized as follows.We describe the considered scenario and formulate the problem in Section 2. In Section 3, we analyze various transmission parameter selection algorithms for multicast traffic and show how to adapt them to mURLLC.We evaluate their performance in Section 4. Section 5 concludes the paper with recommendations on providing resource-efficient mURLLC. 
System Model and Problem Statement Consider a gNB providing the multicast URLLC service for N UEs (see Figure 1). We suppose that all UEs are connected to the gNB. Thus, the gNB allocates uplink resources for the transmission of uplink control information (e.g., UE feedback) and Sounding Reference Signals (SRSs). (The specifications introduce both multicast and broadcast modes. In contrast to multicast mode, broadcast mode allows transmission to UEs that are not connected to the gNB. In broadcast mode, UEs do not send any feedback, and the gNB cannot guarantee strict URLLC requirements [8].) The QoS requirements of an mURLLC stream are as follows: (i) the latency of each packet (i.e., the time interval between packet arrival at the gNB and its delivery to all UEs in the multicast group) shall not exceed D_QoS, and (ii) the Packet Loss Ratio (PLR) shall be lower than PLR_QoS. A multicast packet is considered lost if it is not delivered within the latency budget to at least one UE. The typical values of D_QoS and reliability (i.e., 1 − PLR_QoS) for mURLLC traffic are provided at the beginning of Section 1. We consider an M-MIMO system: the gNB is equipped with a large number M of antennas. To simplify the description, we consider single-antenna UEs. However, the results can easily be extended to multi-antenna UEs. To provide efficient M-MIMO operation, the gNB uses a Time Division Duplex (TDD) scheme with a periodic structure of downlink (DL) and uplink (UL) time slots, which have equal duration T_slot. Specifically, k_dl DL slots used for data transmission are followed by k_ul UL slots used for UE feedback and/or SRSs. SRSs are transmitted by UEs with the period T_SRS. Thanks to channel reciprocity in the case of TDD, the gNB can use SRSs to estimate both DL and UL channel quality. In the frequency domain, each slot is divided into B Resource Blocks (RBs), where B depends on the bandwidth and the numerology used. For each multicast stream, the gNB solves the following problems (see Table 2). First, on the long-term timescale, the gNB configures the maximum number of transmission attempts (including HARQ retransmissions) and selects the feedback scheme and the sounding period T_SRS. Second, on the short-term timescale, i.e., for each transmission attempt, the gNB dynamically (i) constructs a precoder in each RB, (ii) allocates RBs taking into account their quality and the current buffer size, and (iii) selects an MCS. Since 3GPP does not describe how to address these problems, in the following sections we consider various solutions and evaluate which of them can provide a resource-efficient mURLLC service, i.e., low overall (DL + UL) channel resource consumption while satisfying strict mURLLC requirements. Low channel resource consumption allows an increased system capacity, i.e., increasing the number of concurrent flows with satisfied QoS requirements and/or increasing the load of each flow. (As summarized in Table 2, the usage of feedback and multicast OLLA [36] reduces the overall resource consumption up to three times, and, in the case of two TXs, using different target BLERs (MCSs) for the first and second TXs reduces resource consumption by up to 40% compared with using the same BLERs; see Section 4.5.) Analyses of Possible Solutions and Their Adaptation to mURLLC In this section, we consider solutions aimed at addressing the problems identified in Section 2.
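Before turning to the candidate solutions, the loss and latency definitions above can be made concrete with a short sketch. The Python fragment below is illustrative only and is not part of the paper's simulator; the class and function names are placeholders.

```python
# Illustrative bookkeeping for the mURLLC loss definition: a multicast packet
# counts as lost if it is not delivered to *every* UE in the group within the
# latency budget D_QoS; the resulting PLR is then compared against PLR_QoS.
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set

@dataclass
class MulticastPacket:
    arrival_ms: float                                              # arrival time at the gNB
    delivered_ms: Dict[int, float] = field(default_factory=dict)   # UE id -> delivery time

def is_lost(pkt: MulticastPacket, group: Set[int], d_qos_ms: float) -> bool:
    """Lost if any UE in the multicast group misses the deadline."""
    for ue in group:
        t: Optional[float] = pkt.delivered_ms.get(ue)
        if t is None or (t - pkt.arrival_ms) > d_qos_ms:
            return True
    return False

def packet_loss_ratio(packets: List[MulticastPacket], group: Set[int], d_qos_ms: float) -> float:
    """Empirical PLR over a set of packets, to be checked against PLR_QoS."""
    if not packets:
        return 0.0
    return sum(is_lost(p, group, d_qos_ms) for p in packets) / len(packets)
```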
As detailed in Section 1, due to the lack of algorithms/solutions specifically developed for mURLLC, in this paper, we adapt the solutions proposed either for broadband multicast traffic or for unicast URLLC traffic.In the latter case, we show how to extend/modify solutions such that they can work with multiple receivers. The following sections are structured as follows.First, we analyze how the gNB can select long-term transmission parameters: the maximum number of transmission attempts, the feedback scheme, and the sounding period.Then, we consider how the gNB selects parameters for each particular transmission: the precoder, RBs, and the MCS. The Maximum Number of Transmission Attempts The number of transmission attempts (TXs) affordable for each multicast packet depends on the latency limitation and the TDD configuration.In particular, for very strict latency requirements, i.e., D QoS < (k dl + k ul )T slot , HARQ retransmissions cannot be delivered in time.Thus, only a single robust TX is possible.For moderate latency requirements, i.e., D QoS ∼ (k dl + k ul )T slot , the gNB can obtain UE feedback and make a conditional HARQ retransmission if the initial TX fails.Taking into account typical values of D QoS ∼ 1-10 ms for URLLC and T slot ∼ 0.5-1 ms for numerologies used in frequency bands below 6 GHz, in the paper, we focus on two cases: (i) One TX case when a single TX is possible, and (ii) Two TXs case, i.e., one initial TX and one HARQ retransmission are possible.We assume that the gNB selects this parameter at the beginning of the multicast flow and changes it very rarely (e.g., when the set of served UEs or their channel conditions significantly change). The Feedback Scheme For each multicast stream, the gNB can configure one out of three feedback schemes.With the first scheme, which, hereafter, we call No feedback, the gNB does not allocate uplink resources for the feedback transmission.Thus, conditional HARQ cannot be used with this scheme.In contrast, with the second scheme, called ACK/NACK feedback, the gNB allocates a separate uplink resource for each UE such that the UE can send a positive (ACK) or negative (NACK) acknowledgment for each transmission.With the third scheme, called NACK-only feedback, the gNB configures a single uplink resource where only the UEs that have failed to decode the transmission send NACK.Thus, with this scheme, the gNB schedules an HARQ retransmission but does not know which UEs require it, which complicates transmission parameters selection for subsequent TXs. 
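As a rough illustration of how the three schemes differ in uplink cost, the sketch below counts the feedback resources reserved per multicast transmission as a function of the group size N. The value B_f (RBs per feedback transmission) is a placeholder; the linear scaling for ACK/NACK and the single shared resource for NACK-only follow the description above.

```python
# Uplink feedback resources reserved per multicast transmission (illustrative).
def feedback_rbs_per_tx(scheme: str, n_ues: int, b_f: int = 4) -> int:
    if scheme == "no_feedback":
        return 0                 # nothing reserved, so conditional HARQ is impossible
    if scheme == "nack_only":
        return b_f               # one shared resource for the whole group
    if scheme == "ack_nack":
        return n_ues * b_f       # dedicated resource per UE, grows linearly with N
    raise ValueError(f"unknown scheme: {scheme}")

for n in (5, 20, 50):
    print(n, {s: feedback_rbs_per_tx(s, n) for s in ("no_feedback", "nack_only", "ack_nack")})
```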
For each multicast stream, the gNB can select a feedback scheme depending on the stream QoS requirements and the number of available TXs.Specifically, for the one TX case (i.e., for strict latency requirements), the gNB can use either No feedback or NACK-only feedback schemes.In the case of the No feedback scheme, to provide high reliability, the gNB shall select a very robust MCS, which leads to huge resource consumption.In contrast, the usage of the NACK-only feedback scheme allows selecting proper MCS by taking into account UE feedback at the cost of moderate uplink overhead.For two or more TXs, the gNB can use either the NACK-only or ACK/NACK feedback schemes.Comparing these schemes, the ACK/NACK feedback scheme allows reducing resource consumption for HARQ retransmissions with respect to the NACK-only feedback scheme because the gNB knows the set of UEs that have failed to decode previous TXs.However, ACK/NACK feedback requires higher UL resource consumption, which scales linearly with the number of UEs in the multicast group.In Section 4, we use system-level simulations to study the influence of the used feedback scheme on the overall resource consumption in different scenarios and provide recommendations for selecting the feedback scheme for mURLLC traffic. Sounding Period The choice of short-term transmission parameters, such as precoder and MCS (see the following sections for details), significantly depends on the accuracy of channel measurements available at the gNB.In the case of TDD, the gNB measures both DL and UL channels based on SRS signals periodically transmitted by each UE in UL slots.As we show in this paper, the choice of SRS period can significantly influence the overall channel resource consumption for a multicast stream.In particular, a low SRS period improves the accuracy of channel measurements and, therefore, reduces the DL resource consumption.However, it significantly increases UL resource consumption used for pilot signals, which scales with the number of UEs in the multicast group.In contrast, a high SRS period reduces sounding overhead but increases DL resource consumption because of selecting too-low MCSs.In Section 4, we consider scenarios with different UE mobility and study how to select the SRS period in order to find a good balance between DL resources consumed for data transmission and UL resources consumed for sounding. 
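The qualitative trade-off can be sketched with a toy cost model: the SRS overhead grows with the group size N and shrinks as the sounding period T_SRS grows, while the DL cost grows as channel estimates age. The cost functions below are placeholders chosen only to illustrate the shape of the trade-off; they are not measurements from the paper.

```python
# Toy model of the sounding-period trade-off: overall cost = DL data cost
# (grows as CSI becomes stale) + UL SRS cost (grows with N / T_SRS).
def srs_overhead(n_ues: int, t_srs_ms: float, cost_per_srs: float = 1.0) -> float:
    return cost_per_srs * n_ues / t_srs_ms          # average SRS cost per millisecond

def dl_cost(t_srs_ms: float, base: float = 10.0, aging: float = 0.2) -> float:
    return base * (1.0 + aging * t_srs_ms)          # stale CSI -> more robust MCS -> more RBs

def overall_cost(n_ues: int, t_srs_ms: float) -> float:
    return dl_cost(t_srs_ms) + srs_overhead(n_ues, t_srs_ms)

for t in (5, 10, 20, 40):
    print(t, round(overall_cost(n_ues=50, t_srs_ms=t), 2))
```

With many receiving UEs the sounding term dominates, so the cost-minimizing T_SRS shifts toward larger values, in line with the observations reported in Section 4.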
Precoder Selection For each transmission, the gNB constructs a special matrix called a precoder that determines how signals are generated from the different gNB antennas. A proper precoder can significantly boost the performance of the M-MIMO system. The corresponding problem for single-antenna UEs is formulated as follows. For each UE i and RB j, the gNB has estimates of (i) the channel matrix H_{i,j} of size 1 × M, which is updated based on periodic SRSs, and (ii) the interference-plus-noise power σ_j^2 in RB j, which is obtained from the channel quality indicator reports provided by the UEs. For each RB j used for the DL transmission, the gNB constructs a precoder W_j, which is a matrix of size M × 1. The Signal to Interference plus Noise Ratio (SINR) for UE i in RB j is estimated as SINR_{i,j} = |H_{i,j} W_j|^2 / σ_j^2 (1). Since the UE with the worst SINR limits the multicast transmission parameters, the precoder selection problem for the set U of UEs is stated as follows: max_{W_j} min_{i∈U} SINR_{i,j} subject to ||W_j||^2 ≤ P_TX (2), where P_TX is the transmission power allocated for a single RB. In the literature, authors often consider a similar problem statement [37]: min_{W_j} ||W_j||^2 subject to SINR_{i,j} ≥ γ for all i ∈ U (3), where γ is the SINR constraint. Problems (2) and (3) are similar in the sense that the solution to Problem (2) gives the solution to Problem (3) with a proper scaling, which depends on the constraints P_TX and γ. Both optimization problems are non-convex and proven to be NP-hard in the case M ≤ N [37]. Let us classify and analyze the numerous approaches to the multicast precoding problem. One of the approaches is to reformulate Problem (3) as follows (see [37]): min_{X_j} trace(X_j) (4a) subject to H_{i,j} X_j H_{i,j}^H ≥ γ σ_j^2 for all i ∈ U (4b), X_j ⪰ 0 (4c), and rank(X_j) = 1 (4d), where X_j = W_j W_j^H, and trace(X) and rank(X) denote the trace and rank of matrix X, respectively. Note that the objective function (4a) and constraints (4b) and (4c) are convex, while only the constraint (4d) is non-convex. By relaxing the constraint (4d), we obtain the Semi-Definite Relaxation (SDR) (4a)-(4c) of the optimization problem, Problem (4). That SDR can be solved with state-of-the-art convex solvers. However, the obtained SDR solution gives the solution to Problems (3) and (4) only if the matrix X_j has rank equal to one. Otherwise, it only provides a lower bound on the objective function for Problem (3) and a corresponding upper bound for Problem (2). Thus, we use that upper-bound solution to evaluate the performance of the other precoder algorithms. Since the usage of convex solvers results in high computational complexity inappropriate for mURLLC applications with tight latency requirements, below we consider different approaches proposed in the literature that have lower computational complexity. Mohammadi et al. [35] propose applying the Successive Convex Approximations (SCA) method to Problem (3) and use the Alternating Direction Method of Multipliers (ADMM) on each SCA iteration, which is a relatively low-complexity state-of-the-art method for convex problems. Unfortunately, the resulting algorithm, which we refer to as SCA-ADMM, still has high complexity that depends on the convergence threshold, as we show in Section 4. Other approaches considered in the literature are based on different heuristics. For example, Hunger et al. [33] derived a closed-form solution for the multicast Problem (3) with only two UEs and presented the FF-C2 (Full Featured Combine-2) algorithm that performs a full search over all possible pairs of UEs. They also proposed a heuristic to reduce the search space, called the RC-C2 (Reduced Complexity Combine-2) algorithm.
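To make the quantities in Problem (2) concrete, the following sketch evaluates the per-UE SINR of Equation (1) and the worst-UE (max-min) objective for a fixed precoder. The random channels, the unit noise power, and the arbitrary unit-norm precoder are placeholders; the snippet does not implement any of the cited precoder construction algorithms.

```python
import numpy as np

def sinr_per_ue(H: np.ndarray, w: np.ndarray, sigma2: float) -> np.ndarray:
    """Equation (1): H is (N, M) with one row per UE channel, w is an (M,) precoder."""
    return np.abs(H @ w) ** 2 / sigma2

def maxmin_objective(H: np.ndarray, w: np.ndarray, sigma2: float) -> float:
    """The max-min objective of Problem (2), evaluated for a fixed precoder."""
    return float(sinr_per_ue(H, w, sigma2).min())

rng = np.random.default_rng(0)
N, M = 20, 64
H = (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))) / np.sqrt(2)
w = np.ones(M, dtype=complex) / np.sqrt(M)   # arbitrary unit-norm precoder (power folded into the norm)
print(maxmin_objective(H, w, sigma2=1.0))
```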
In Silva and Klein [31], the authors consider adaptations of well-known unicast beamforming algorithms to the multicast case, e.g., multicast versions of the Maximum Ratio Transmission (mMRT) [19,32], Zero Forcing (mZF), and Minimum Mean Square Error (mMMSE) precoders. Since these algorithms require relatively simple linear algebra operations, they have very low computational complexity; however, they have very poor performance, as we show in Section 4. A trade-off between computational complexity and performance can be reached by iterative algorithms that use relatively simple (compared to SCA-ADMM) linear algebra operations on each iteration. Examples are the Iterative Update (IU) [33] and Multiplicative Update (MU) [20] algorithms that on each iteration increase the objective function value of Problem (2) (in the case of the IU algorithm) or, in the case of the MU algorithm, of the following proportional fair problem: max_{W_j: ||W_j||^2 ≤ P_TX} Σ_{i∈U} log(SINR_{i,j} + ζ) (5), where ζ is a small constant that is necessary for the numerical stability of the MU algorithm. Finally, such algorithms as SBFC (Successive Beamforming-Filter Computation) [33] or the QR decomposition-based algorithm (hereafter called QR) [34] construct the precoder as a linear combination of orthogonal vectors that span a linear subspace of the UE channels. Each vector and the corresponding coefficient in the linear combination are selected to increase the SINR of a particular UE, thus satisfying its constraint in the optimization problem, Problem (3). We study the above-described precoder construction algorithms in Section 4.2 and select those that best combine low complexity and high performance with respect to the SDR-based upper bound. Note that the channel matrix H_{i,j} may significantly change with time, while the precoder is constructed based on its periodic SRS measurements. Thus, we also study the influence of the SRS period on the performance of the selected precoders. Let us now consider how the gNB selects the precoder for different transmission attempts. For the first TX, the gNB constructs a precoder for the set U that contains all recipients of the multicast stream. The target set for the second TX depends on the used feedback scheme. In the case of NACK-only feedback, the gNB does not know which UEs have failed to decode a packet. Thus, the gNB uses the same set U as for the first TX. In the case of ACK/NACK feedback, the gNB knows the exact set U_f of UEs that have failed to decode the first TX. Having a lower number of UEs in the set U_f, the gNB can change the precoder and increase the SINR for these UEs in the second TX.
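The iterative flavor of the IU- and MU-style algorithms described above can be conveyed with a deliberately simplified loop: start from the mean channel direction and repeatedly nudge the precoder toward the currently worst UE, keeping a step only if the max-min SINR improves. This is a toy heuristic for intuition only; it is not the published IU, MU, SBFC, or QR algorithm.

```python
import numpy as np

def toy_maxmin_precoder(H: np.ndarray, sigma2: float = 1.0,
                        step: float = 0.1, iters: int = 200) -> np.ndarray:
    """Greedy improvement of the min-SINR; H is (N, M), the returned precoder has unit norm."""
    w = np.conj(H).mean(axis=0)
    w /= np.linalg.norm(w)
    best = np.min(np.abs(H @ w) ** 2) / sigma2
    for _ in range(iters):
        worst = int(np.argmin(np.abs(H @ w) ** 2))     # UE currently limiting the multicast MCS
        cand = w + step * np.conj(H[worst]) / np.linalg.norm(H[worst])
        cand /= np.linalg.norm(cand)
        val = np.min(np.abs(H @ cand) ** 2) / sigma2
        if val > best:                                  # accept only improving steps
            w, best = cand, val
    return w
```

Even such a crude loop reflects the structural point above: the precoder is shaped by the worst UE, and the choice of objective (max-min versus proportional fair) determines which UE drives each update.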
First, the gNB sorts the enqueued multicast packets in the ascending order of their remaining lifetimes RT_p = D_QoS − D_p, where D_p is the queuing delay of packet p. Note that packets with D_p > D_QoS are dropped. Having sorted the set of packets P, the gNB considers the first packet and allocates RBs to this packet as described in the following paragraph until all its bytes are transmitted or no free RBs are available. Then the gNB considers the second packet in P, and so on. Two approaches can be used to allocate particular RBs to packets: Frequency-Selective (FS) and non-FS scheduling. With FS scheduling, the gNB takes into account the channel quality in the considered RBs and allocates the best RBs to packets in order to minimize resource consumption. For multicast traffic, FS scheduling is implemented as follows. For each RB j ∈ B, the gNB determines the recipient with the worst SINR, i.e., SINR_{j,p} = min_{i∈U_p} SINR_{i,j}, where U_p is the set of recipients of packet p and SINR_{i,j} is the SINR of UE i in RB j given by (1). Then, the gNB sorts the free RBs in set B in the descending order of SINR_{j,p} and allocates free RBs until all bytes of the considered packet are transmitted or no free RBs are available. With non-FS scheduling, the gNB assumes that all RBs have the same quality. Thus, it can select RBs sequentially or randomly. The FS and non-FS scheduling approaches both have their benefits and drawbacks. In terms of performance, FS scheduling provides lower resource consumption and, thus, increases the network capacity. However, in terms of complexity, FS scheduling requires the calculation of the precoder in each RB (i.e., to estimate SINR_{j,p}). In contrast, with non-FS scheduling, the gNB only needs to calculate the precoder in the allocated RBs. In Section 4, we study this tradeoff between performance and complexity in detail. MCS Selection For each transmission attempt and the selected RBs, the gNB shall determine a single MCS. In particular, for a transmission attempt t, the gNB shall find the highest MCS, MCS_t, that provides a block error rate (BLER) for the multicast group below the target value p_t. The MCS selection procedure consists of two steps. First, the gNB uses the error model to find the highest MCS, MCS_SINR, that provides a BLER below p_t for the given set of worst SINRs (i.e., SINR_j in the allocated RBs). Since the wireless channel may significantly change with time, the precoder and SINR estimations quickly become outdated, and MCS_SINR may not provide the required reliability. To address this issue, at the second step, the gNB adjusts the MCS as discussed in detail below. One TX Case The method to adjust the MCS depends on the used feedback scheme. With No feedback, the gNB does not have information about the actual BLER at the receivers. Thus, to provide high reliability, the gNB selects a robust MCS to take into account possible channel fluctuations. In particular, the authors of [30] propose a simple method (called eOLLA) that subtracts a positive constant ∆(N) from MCS_SINR to take into account possible degradation of the SINR at UEs: MCS_1 = MCS_SINR − ∆(N). The exact value of ∆(N) is selected based on long-term experiments as the value providing the required reliability for a given multicast group size N. For the schemes with UE feedback, the MCS can be dynamically adjusted using an Outer Loop Link Adaptation (OLLA) algorithm [36]. While OLLA is a widely used algorithm for unicast, below we propose its multicast version.
With multicast OLLA, the gNB keeps a single offset ∆_olla for a multicast group, and the MCS is selected as MCS_1 = MCS_SINR − round(∆_olla). The gNB updates ∆_olla based on the obtained HARQ feedback. Specifically, ∆_olla is increased by a constant δ+ if the transmission fails, as defined at the beginning of Section 2. Otherwise, it is reduced by a constant δ−. The average BLER provided by the multicast OLLA algorithm converges to p_olla = δ− / (δ− + δ+). Therefore, in the One TX case, we can set p_1 = PLR_QoS and select the OLLA parameters accordingly. Two TXs Case In this case, the gNB selects two MCSs. The simplest approach, considered in many papers, is to use the same MCS for the first and the second transmissions: MCS_2 = MCS_1, where MCS_1 is selected using the OLLA algorithm with the target BLER p_1 = √(PLR_QoS). By taking into account the HARQ combining gain, it is assumed that p_2 ≤ p_1 and, thus, the overall reliability requirement is satisfied. Since the 3GPP specifications allow the use of different MCSs for different transmission attempts, we propose the following approach. We select two target BLERs such that p_1 · p_2 ≤ PLR_QoS (the specific configurations are analyzed in Section 4). For each TX, the target BLER is provided by a separate OLLA adjustment. Note that in the case of ACK/NACK feedback, the SINRs for the second TX might be higher than for the initial transmission and, thus, selecting MCS_2 > MCS_1 reduces the resource consumption of retransmissions. Simulation Setup To evaluate the performance of the algorithms presented in Section 3, we have significantly extended the system-level simulator NS-3 [38] by implementing the new multicast mechanisms introduced in the 3GPP specifications, M-MIMO features, and multicast traffic. Unless otherwise explicitly stated, we consider an Urban Macro scenario with N UEs randomly distributed in the gNB coverage area. Both LOS and NLOS channels are modeled. UEs move with a speed of 3 kmph and send SRSs with a 20 ms period. The gNB uses the FS EDF scheduler described in Section 3.5. Table 3 lists the main simulation parameters. In the experiments, we measure (i) the average PLR and (ii) the average DL and UL resource consumption. The lower the resource consumption, the higher the mURLLC capacity. The DL resource consumption is determined as the average number of RBs used in a DL slot for data transmission divided by the total number of RBs. The UL resource consumption consists of two parts. First, some UL resources are used for the UEs' feedback transmission. For the ACK/NACK scheme, this consumption is proportional to N · B_f, where B_f is the number of RBs allocated for the transmission of a single UE feedback (by default, B_f = 4 RBs). For the NACK-only scheme, the consumption is N times lower because a single resource is allocated for all UEs. Second, UL resources are used for SRS transmission; the corresponding overhead depends on k_0 = 14, the number of OFDM symbols in a slot, and N_0 = 2, the number of SRSs multiplexed in one OFDM symbol. In the following sections, we analyze the performance of the various transmission parameter selection algorithms described in Section 3. To simplify the evaluation, we changed the order compared with Section 3.
Specifically, since the literature provides dozens of precoder selection algorithms that significantly affect performance, we start our analysis with their comparison. Based on this analysis, we determine the best precoder selection algorithms for mURLLC. After that, we evaluate the other short-term transmission parameter selection algorithms (i.e., RB and MCS selection). Since the implementation of the MCS selection algorithm depends on the maximum number of TX attempts, we evaluate the joint effect of the MCS and the number of TX attempts selection. Finally, we study the influence of the long-term parameters (the feedback scheme and the SRS period) on the overall system performance. Analysis of Precoder Selection Algorithms Let us start with the performance comparison of the precoder selection algorithms. For this, we sampled the values of the channel matrix H_{i,j} in all RBs for 20 UEs using the NS-3 channel model. For each matrix, we construct the precoder with the algorithms from Section 3.4. The X-axis in Figures 2 and 3 corresponds to the average difference between the minimal SINR SINR_j obtained with the considered algorithm and the upper bound obtained with the SDR approach. The Y-axis is the mean time needed to construct a single precoder with a 3.3 GHz Intel Core i3-2120 processor [39]. Note that the gNB can implement two approaches to compute the precoder: (i) offline, in which the precoder is calculated in advance for each multicast group when the gNB receives the corresponding SRSs, and (ii) online, in which the precoder is calculated each time the gNB schedules a transmission to a particular multicast group. For the offline approach, the precoder computation time should be much lower than T_SRS, while for the online approach, the precoder computation time should be much lower than D_QoS. Since in our experiments T_SRS = 20 ms and D_QoS = 10 ms, we consider 10 ms as the reference value that limits the precoder computation time. If a precoder construction algorithm has a computation time greater than 10 ms, we consider it unacceptable for mURLLC applications.
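This cut-off can be expressed as a simple feasibility check. The function below only restates the reasoning above; treating "much lower" as a strict inequality against the 10 ms and 20 ms reference values is a simplification.

```python
# Offline precoding must fit within the sounding period T_SRS; online precoding
# must fit within the latency budget D_QoS (10 ms is the reference limit above).
def precoder_timing_ok(compute_ms: float, t_srs_ms: float = 20.0, d_qos_ms: float = 10.0) -> dict:
    return {
        "offline_ok": compute_ms < t_srs_ms,
        "online_ok": compute_ms < d_qos_ms,
    }

print(precoder_timing_ok(0.5))     # sub-millisecond construction: acceptable either way
print(precoder_timing_ok(100.0))   # ~0.1 s, SCA-ADMM-like: unacceptable for mURLLC
```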
Figure 2 shows the results for the M-MIMO case (i.e., 64 antennas at the gNB) for both the LOS and NLOS channel models. Let us start with an analysis of the SCA-ADMM algorithm. For this algorithm [35], we check different convergence thresholds ε. We can see that the SINR difference for the SCA-ADMM algorithm reduces for a stricter convergence threshold ε. However, even for the highest convergence threshold ε = 10^−1, the corresponding precoder construction time is ≈ 0.1 s, which is not acceptable for mURLLC applications. The FF-C2 and RC-C2 algorithms [33], which search for a multicast precoder over UE pairs, provide a reasonable construction time of 1-10 ms. However, as each precoder in the search space takes into account only the two considered UEs, their SINR is much lower than the SDR upper bound. Multicast adaptations of linear beamforming algorithms from [31], such as mMRT, mZF, and mMMSE, provide the lowest precoder construction time. However, they provide the worst SINRs among the considered solutions. The IU and MU algorithms, which iteratively construct precoders using simple linear algebra operations, show much better performance. As mentioned in Section 3.4, the MU algorithm optimizes the proportional fair objective function from Problem (5) instead of the max-min SINR objective function from Problem (2). Thus, the IU algorithm provides better performance than the MU algorithm in terms of the minimal SINR over RBs. Finally, the SBFC and QR algorithms, which construct the precoder as a linear combination of UE channels, provide the SINR closest to the upper bound with a reasonable construction time. Let us analyze how the number of antennas influences the performance and complexity of the precoder construction algorithms. For that, in Figure 3, we consider the case of four antennas at the gNB, corresponding to 4G systems. Interestingly, the performance of the QR and SBFC algorithms (which are the best in the M-MIMO case) significantly degrades for a low number of antennas because they use orthogonal projections of UE channels on the precoder null-space. This procedure significantly reduces the SINR because the UE channels become non-orthogonal when the number of antennas decreases. In contrast, the IU and MU algorithms do not use this orthogonalization procedure and show much better performance. Multicast adaptations of linear beamforming algorithms (mMRT, mZF, and mMMSE) provide the lowest computation time but the worst performance, which is explained as follows. In the case of a low number of antennas, the mZF algorithm requires the computation of the pseudoinverse of an underdetermined matrix, while the mMMSE algorithm requires the inversion of an ill-conditioned matrix. Because of the lower number of antennas, the complexity of the SCA-ADMM algorithm with the convergence threshold ε = 10^−1 significantly reduces. Thus, it can be considered as a candidate solution. From the two cases considered above, we can see that the performance and complexity of the various precoders significantly depend on the number of antennas: for 4G systems with a low number of antennas, IU, SCA-ADMM with ε = 10^−1, and FF-C2 provide a good balance between performance and complexity, while for 5G M-MIMO systems, QR, SBFC, and IU show better results. Thus, a particular algorithm shall be selected taking into account the antenna configuration and the complexity constraints at the gNB. For that, a preliminary link-level evaluation similar to that presented in Figures 2 and 3 can be carried out based on real channel measurements.
Let us analyze the performance of the best precoder selection algorithms (SBFC, QR, IU, and FF-C2) using system-level simulations that take into account the channel aging effect, RB allocation, and MCS selection algorithms.In particular, in Figure 4, we consider 5G M-MIMO systems with 64 antennas, the One TX scheme with NACK-only feedback, and MCS is selected based on OLLA with p 1 = PLR QoS .We can see that the conclusions for 20 UEs coincide with those of Figure 2: the performances of the SBFC, IU, and QR precoders are close to each other (the difference is below 5%), while FF-C2 provides higher DL resource consumption because of lower SINR.So, proper configuration of the precoder algorithm can reduce resource consumption by more than 25% and allows the precoder to be computed in real time (i.e., precoder construction time is comparable to D QoS ). Analysis of RB Allocation Algorithms Let us analyze the performance of FS and non-FS scheduling approaches.Similar to the previous section, we consider different numbers of antennas at the gNB: (i) 4 antennas, corresponding to 4G systems, and (ii) 64 antennas, corresponding to 5G M-MIMO systems.Figure 5a shows the following results.First, a higher number of antennas significantly increases SINR at UEs and reduces the DL resource consumption (from 20 to 40% depending on the number of UEs in the multicast group).Second, the usage of frequency selectivity (i.e., FS scheduling) provides lower resource consumption compared to non-FS scheduling.However, the gain of FS scheduling depends on the number of UEs and the number of antennas.Specifically, for a single UE, FS scheduling reduces resource consumption by 35% for 4 antennas and only by 15% for 64 antennas.The lower gain in the case of M-MIMO is explained by the channel hardening effect [40]: a higher number of antennas reduces the channel quality fluctuation both in time and frequency domains.When the number of UEs increases, the gain of FS scheduling significantly reduces.Specifically, when the multicast group includes more than ten UEs, the difference between FS and non-FS scheduling is less than 5% for both considered antenna configurations.The reason is that the channel quality in the RB j is determined by the UE with the lowest SINR: SI NR j .For a higher number of UEs, the difference between SI NR j in different RBs reduces.Figure 5b shows the average time needed to compute the schedule (including precoder construction) for a single slot.The results show that non-FS scheduling allows a significant reduction in the scheduler complexity-by 30% for 4 antennas and by 40% for 64 antennas.So, we can conclude that FS scheduling is fruitful only when the multicast group consists of a few UEs and the gNB has few antennas.For large multicast group size and M-MIMO systems, non-FS and FS scheduling approaches provide almost the same performance but the former has much lower computational complexity. 
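A compact sketch of the mURLLC-adapted EDF policy from Section 3.5, with frequency selectivity as a switch, is given below. The per-RB worst-UE SINRs and the bits-per-RB mapping are placeholders that a real scheduler would derive from the precoder and the MCS table; the dictionary-based packet representation is likewise an assumption for illustration.

```python
from typing import Callable, Dict, List, Set

def edf_schedule(packets: List[dict], sinr_per_rb: Dict[int, float], free_rbs: Set[int],
                 bits_per_rb: Callable[[float], float],
                 frequency_selective: bool = True) -> Dict[int, List[int]]:
    """packets: [{"id", "remaining_ms", "bits"}]; sinr_per_rb: worst-UE SINR per RB."""
    allocation: Dict[int, List[int]] = {}
    # EDF: serve the packet closest to its deadline first; drop expired packets.
    for pkt in sorted(packets, key=lambda p: p["remaining_ms"]):
        if pkt["remaining_ms"] <= 0:
            continue
        if frequency_selective:            # FS: RBs with the highest worst-UE SINR first
            ranked = sorted(free_rbs, key=lambda rb: sinr_per_rb[rb], reverse=True)
        else:                              # non-FS: any fixed order, e.g., sequential
            ranked = sorted(free_rbs)
        need, used = float(pkt["bits"]), []
        for rb in ranked:
            if need <= 0:
                break
            used.append(rb)
            need -= bits_per_rb(sinr_per_rb[rb])
        for rb in used:
            free_rbs.discard(rb)
        allocation[pkt["id"]] = used
    return allocation
```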
Influence of the Maximum Number of Transmission Attempts Let us consider a 5G M-MIMO system with 64 antennas at the gNB. The precoder construction algorithm is SBFC. Figure 6 shows the results for various configurations of the transmission parameters (i.e., the number of transmission attempts, the MCS selection algorithm, and the feedback scheme). First, we can see that for all the considered configurations, the strict URLLC reliability and latency requirements are satisfied. Specifically, according to Figure 6c, the PLR is below PLR_QoS. Note that a packet is assumed lost if it is not delivered within D_QoS; thus, the latency requirement is also satisfied. Second, let us analyze the influence of the number of transmission attempts. The main observation from the obtained results is that, because of the strict reliability requirements, with a single TX the gNB has to use a very low MCS (see Figure 6d), which greatly increases resource consumption. Though for a single TX NACK-only feedback does not induce retransmissions, it allows dynamic MCS adjustment with the OLLA algorithm, which reduces resource consumption by up to 70% compared to the No feedback scheme with the eOLLA MCS selection algorithm (see Section 3.6). Note that the curve '1 TX no feedback' is non-monotonic and non-smooth because eOLLA uses a discrete set of MCS adjustments ∆(N); thus, the MCS adjustment changes abruptly when the number of UEs changes. In contrast, for the other curves, which correspond to the OLLA algorithm, the MCS adjustment ∆_olla is a real number that changes smoothly. Switching from one TX to two TXs (i.e., the usage of multicast HARQ) reduces resource consumption up to three times. Note that this effect differs from that observed for loss-tolerant traffic (e.g., IPTV) [30], where multicast HARQ only negligibly reduces resource consumption compared with a single TX. So, in the case of URLLC with strict latency and reliability requirements, if the latency budget allows, the gNB shall enable multicast HARQ to reduce resource consumption.
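The OLLA adjustment referred to above can be sketched in a few lines. The class below is an illustrative rendering of the update rule from Section 3.6 (one offset per multicast group, increased by δ+ on a failed group transmission and decreased by δ− on success, so the BLER converges to δ− / (δ− + δ+)); the parameter values and method names are placeholders.

```python
class MulticastOLLA:
    """Single OLLA offset per multicast group (illustrative sketch)."""
    def __init__(self, target_bler: float, delta_up: float = 1.0):
        # Choose delta_down so that delta_down / (delta_down + delta_up) = target_bler.
        self.delta_up = delta_up
        self.delta_down = delta_up * target_bler / (1.0 - target_bler)
        self.offset = 0.0

    def adjust(self, mcs_from_sinr: int) -> int:
        """MCS_1 = MCS_SINR - round(offset), clipped at the lowest MCS index."""
        return max(0, mcs_from_sinr - round(self.offset))

    def update(self, group_tx_failed: bool) -> None:
        self.offset += self.delta_up if group_tx_failed else -self.delta_down

olla = MulticastOLLA(target_bler=0.1)   # e.g., target BLER for the first transmission attempt
```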
Analysis of MCS Selection Algorithms Let us now consider the influence of the MCS selection algorithm and its parameters (i.e., the target BLERs). The main observation is that, for the two TXs case, different target BLERs for the first TX and the second TX may significantly reduce channel resource consumption. Specifically, the usage of target BLERs p_1 = 0.1 and p_2 = 10^−4 reduces DL resource consumption by up to 40% compared to the case with p_1 = p_2 = 10^−2.5, because for the first TX, providing a BLER of the order of 10^−2.5 requires much more resources than providing a BLER of 0.1, while the resources for the second TX are consumed only when the first TX fails, i.e., with probability p_1. This effect significantly differs from the one observed for delay-tolerant broadband traffic, for which all TXs are carried out with the same target BLERs and MCSs. To further elaborate on this observation, we consider the downlink resource consumption of only the first TX, θ_1(p_1), and the total consumption of the first TX and the second TX (if any), θ_2TX(p_1). As we select p_1 · p_2 = PLR_QoS, the downlink resource consumption of the second TX can be estimated as θ_1(p_2) = θ_1(PLR_QoS / p_1), and, since the second TX occurs only with probability p_1, θ_2TX(p_1) = θ_1(p_1) + p_1 · θ_1(PLR_QoS / p_1). Figure 7 shows θ_1(p_1) and θ_2TX(p_1) for the NLOS and LOS scenarios. In the NLOS scenario, θ_1 decreases monotonically with p_1, and the minimum of θ_2TX is achieved at a relatively high p_1 = 0.25. In contrast, in the LOS scenario, θ_1 reaches a plateau after p_1 = 10^−3, since even the highest possible MCS satisfies the target BLER of 10^−3. Because of this, the minimum of θ_2TX in the LOS scenario is achieved at p_1 = 10^−3. Note that the target BLER p_1 = 0.1, which is widely used by default for broadband traffic, provides downlink resource consumption close to the minimum (the difference does not exceed 10%). Thus, the selection of p_1 ∼ 0.1 recommended for broadband traffic provides nearly optimal results (within 10% of the minimum) for both considered scenarios. However, in contrast to broadband traffic, the target BLER for the second TX should be selected to guarantee high reliability. In the following, we consider the MCS selection scheme with two target BLERs, p_1 = 0.1 and p_2 = 10^−4. Comparison of Feedback Schemes Let us compare the performance of the various feedback schemes. ACK/NACK feedback tells the gNB which UEs failed to receive the first TX. As the average number of intended receivers for the second TX is much lower than for the first one, it allows increasing the SINR, selecting a higher MCS for the second TX, and reducing the DL channel resource consumption. However, the comparison between the ACK/NACK and NACK-only feedback schemes (see Figure 6a) shows that this gain is tiny. Moreover, as ACK/NACK feedback consumes too many UL resources, the overall gain of the ACK/NACK scheme is negative, as shown in Figure 6b. Thus, the main conclusion is that NACK-only feedback provides a good balance between DL and UL resource consumption for mURLLC. Influence of the UE Mobility and Sounding Period As mentioned in Section 3.4, the channel matrix may significantly change with time, while the precoder is constructed based on its periodic SRS measurements. This problem is known as precoder aging. In Figure 8, we study the influence of UE mobility and T_SRS on the performance of the best configuration of transmission parameters obtained in the previous sections: 2 TXs, p_1 = 0.1, p_2 = 10^−4, and NACK-only feedback.
We see that a small T_SRS reduces the DL resource consumption for the 3 kmph case by up to 60% because of more frequent channel estimation. However, for 60 kmph, the gain is below 10% because the channel information quickly becomes outdated. At the same time, because of the large number of receiving UEs per stream, the amount of UL channel resources required for SRSs is of the same order as for data transmission. Consequently, considering the overall resource consumption (see Figure 8b) for low mobility, the gain of selecting the optimal T_SRS diminishes, and the optimal value of T_SRS changes. For high mobility, T_SRS = 5 ms, while being the best option for DL resource consumption, is the worst one for the overall resource consumption. Thus, the main observation is that, because of the large number of receiving UEs and the typically low traffic intensity, mURLLC induces so much channel sounding overhead per stream that obtaining frequent channel information becomes inefficient. To reduce the SRS overhead, new adaptive sounding and UE clustering schemes should be developed that make UEs send SRSs with different periods based on their locations. For example, we can select a low T_SRS for cell-edge UEs with low SINRs, while selecting a high T_SRS for cell-center UEs. Conclusions In this paper, we studied the new mechanisms introduced in the 3GPP specifications that enable multicast in 5G systems. We analyzed how to efficiently configure these mechanisms and how to adapt transmission parameter selection algorithms (i.e., precoder selection, RB allocation, and MCS selection) to provide reliable delivery of an mURLLC stream with low channel resource consumption. Based on the extensive simulation results, we provide the following recommendations (see Table 2 for details). 1. The performance and complexity of the various precoder selection algorithms significantly depend on the number of antennas at the gNB. For the M-MIMO case, orthogonal subspace construction algorithms (e.g., SBFC, QR) provide the lowest resource consumption with low complexity; 2. The usage of the FS EDF scheduler notably reduces the channel resource consumption only for small multicast group sizes and a low number of antennas at the gNB. In the other cases, the non-FS EDF scheduler provides almost the same resource consumption and up to 40% lower computational complexity; 3. If the latency budget allows HARQ retransmissions, they shall be enabled because, in contrast to traditional broadband multicast traffic, they allow reducing resource consumption up to three times for mURLLC; 4. In the case of two transmissions, the usage of two different target BLERs for MCS selection significantly reduces resource consumption compared with the widely used approach of selecting the same MCS for the initial transmission and the retransmission; 5. Out of the three considered feedback schemes, the NACK-only scheme provides the lowest resource consumption for mURLLC; 6. In the case of mURLLC, optimization of the sounding period allows a notable reduction in resource consumption only in low-mobility scenarios. Summing up, by implementing the recommendations above, the network operator can provide the mURLLC service with much lower channel resource consumption compared with the baseline solutions proposed for broadband multicast or unicast URLLC traffic and, therefore, significantly increase the network capacity in terms of the number of concurrent mURLLC flows or their aggregated load.
One of the promising directions for future research is to adaptively select the best sounding period for each multicast group member.
Figure 8. Scenario with different T_SRS and UE mobility: (a) downlink resource consumption; (b) overall resource consumption.
Table 1. Inheritance of multicast URLLC solutions from multicast eMBB and unicast URLLC solutions.
Table 2. Summary of the considered problems and possible solutions.
Gene methylation biomarkers in sputum as a classifier for lung cancer risk CT screening for lung cancer reduces mortality, but will cost Medicare ∼2 billion dollars due in part to high false positive rates. Molecular biomarkers could augment current risk stratification used to select smokers for screening. Gene methylation in sputum reflects lung field cancerization that remains in lung cancer patients post-resection. This population was used in conjunction with cancer-free smokers to evaluate the classification accuracy of a validated eight-gene methylation panel in sputum for cancer risk. Sputum from resected lung cancer patients (n=487) and smokers from the Lovelace (n=1380) and PLuSS (n=718) cohorts was studied for methylation of an 8-gene panel. Area under a receiver operating characteristic curve was calculated to assess the prediction performance in logistic regressions with different sets of variables. The prevalence for methylation of all genes was significantly increased in the ECOG-ACRIN patients compared to cancer-free smokers, as evidenced by elevated odds ratios that ranged from 1.6 to 8.9. The gene methylation panel showed lung cancer prediction accuracy of 82-86%, which improved to 87-90% with the addition of clinical variables. With sensitivity at 95%, specificity increased from 25% to 54% comparing clinical variables alone to their inclusion with methylation. The addition of methylation biomarkers to clinical variables would reduce false positive screens by ruling out one-third of smokers eligible for CT screening and could increase cancer detection rates through expanding risk assessment criteria. INTRODUCTION Lung cancer (LC) remains the leading cause of cancer-related death for men and women in the US [1]. The success of CT screening in the National Lung Screening Trial (NLST) for reducing LC mortality led to the recommendation by the Centers for Medicare & Medicaid Services (CMS) to screen people ages 55 to 77 who have a minimum 30 pack-year smoking history and currently smoke or have quit within the past 15 years [2]. However, these eligibility criteria, or the similar criteria by NCCN, only capture 40% of the incident LC cases [3,4]. Screening is estimated to save more than 12,000 lives, but to cost Medicare ~2 billion dollars, annually [5,6]. This is due in part to the high false positive rate of CT screening, as evidenced by the 39% of NLST participants that had at least one positive screening result (detection of an indeterminate nodule), with >96% of those findings being classified as false positive [7]. The addition of molecular biomarkers interrogated in accessible biologic fluids such as sputum could provide better risk stratification to prioritize the selection of smokers for CT screening and thereby substantially improve its predictive value and lower costs by reducing follow-up screens and biopsies [8,9]. Gene silencing through methylation of cytosine in CpG islands, in conjunction with chromatin remodeling, leads to the development of heterochromatin in the gene promoter region, which denies access to the regulatory proteins needed for transcription [10]. This epigenetically driven process is a major and causal event silencing hundreds of genes involved in all aspects of normal cellular function during LC initiation and progression [10].
We and others have shown that gene-specific promoter hypermethylation detected in sputum provides an assessment of field cancerization within the lungs of smokers that in turn predicts LC [11][12][13][14][15][16][17]. Specifically, our group showed that detecting gene methylation in exfoliated cells could predict cancer up to 18 months prior to clinical diagnosis, and this was independently validated through case-control studies for predicting LC risk [11,12]. However, the incorporation of this validated methylation panel in sputum into existing risk assessment models has not been assessed in a population-based setting that could identify high-risk smokers who would benefit most from a CT screen. A major challenge in conducting a prospective study for predicting LC risk is the need for a large population of high-risk smokers to yield enough cases of LC to accurately define the performance of the methylation panel. Our previous case-control study used prevalent Stage I LC patients compared to cancer-free smoker controls to validate gene methylation panels for predicting LC [12]. Prior findings support our hypothesis of an expanding field of precancerous changes throughout the aerodigestive tract, demonstrated initially through histologic changes and subsequently by increasing frequencies of genetic and epigenetic changes detected in exfoliated cells as the cancer develops [11,16,18]. Thus, the increase in the number of cancer-associated methylated genes, rather than methylation of a single gene, is used in risk prediction [12]. The current study addressed whether our validated gene methylation panel could be extended to improve the existing risk prediction model used to recommend people for a CT screen. To accomplish this goal we used three cohorts of people: ECOG-ACRIN5597 trial participants who had a confirmed Stage I diagnosis of LC (based on pathology following surgical resection), the Lovelace Smokers Cohort ([LSC], current and former smokers at high risk for LC), and the PLuSS Smokers Cohort (also current and former smokers at high risk for LC). The ECOG-ACRIN5597 participants were recruited from within the U.S. and Canada to participate in a prevention trial using L-selenomethionine [19]. Patients had undergone surgical resection prior to trial enrollment and baseline sputum was obtained prior to randomization to the placebo or intervention group. Gene methylation in sputum reflects lung field cancerization that remains in lung cancer patients post-resection [20]. We hypothesized that, because our gene methylation test is based on detecting the field of injury in the lung and not the actual small tumor present, the ECOG-ACRIN5597 trial participants would still have extensive field cancerization and could serve in our study as people who should receive a CT screen, while the smokers selected were all cancer-free at the time of sputum collection. We initially evaluated the utility of the eight-gene panel to classify risk for LC by comparing gene methylation prevalence at baseline in the ECOG-ACRIN5597 patients who met the Medicare guidelines to receive a CT screen to screen-eligible subjects from two cancer-free smoker cohorts (LSC and PLuSS) described previously [21]. In addition, the performance of our methylation panel was assessed in all ECOG-ACRIN5597 patients who provided baseline sputum compared to LSC or PLuSS current or former smokers, irrespective of meeting eligibility for receiving a CT screen. Study population The characteristics of the entire study populations are shown in Table 1.
As expected, the ECOG-ACRIN LC cases were slightly older and more had quit smoking. Pack years were available for 259 LC cases and were comparable to those of current and former smokers in the PLuSS cohort, but significantly greater than in the LSC cohort. Gene methylation in sputum as a classifier for lung cancer risk in CT screen eligible smokers The utility of the eight-gene panel to classify risk for LC was evaluated by comparing gene methylation prevalence at baseline in the 371 ECOG-ACRIN5597 patients who met the Medicare guidelines to receive a CT screen to screen-eligible subjects from two cancer-free smoker cohorts (LSC [n = 466] and PLuSS [n = 597]) described previously [21]. Comparative characteristics of "screen eligible" subjects are detailed in Table 2. Two analyses were performed to evaluate prediction accuracy for LC: ECOG-ACRIN5597 versus the LSC cohort and ECOG-ACRIN5597 versus the PLuSS cohort. The prevalence for methylation of all genes was significantly increased in the ECOG-ACRIN patients compared to cancer-free smokers, as evidenced by elevated odds ratios that ranged from 1.6 to 8.9 (Table 3). ROC curves comparing the eight-gene methylation panel for ECOG-ACRIN5597 to LSC or PLuSS showed classification accuracies of 82% and 86% (Figure 1A, 1B). ROC curves restricted to the subset of ECOG-ACRIN5597 subjects (n = 194) with pack years available were identical to those in Figure 1 (classification accuracies of 89% and 91%, respectively). Most importantly, the gene panel, when added to the clinical variables, increased the prediction accuracy from 76% to 87% (p = 7.2 × 10^-9 for the delta area under the curve [AUC]) and from 74% to 90% (p = 3.2 × 10^-16 for the delta AUC) when ECOG-ACRIN5597 subjects were compared to LSC or PLuSS, respectively (Figure 1, Table 4). Random sampling to match for the difference in distributions of age, sex, and smoking status between ECOG-ACRIN5597 and LSC/PLuSS had no effect on the prediction accuracy of the gene panel (Table 4, Supplementary Figure 1). With the sensitivity set at 95%, the addition of the methylation biomarkers increased specificity from 25% (clinical variables only) to 54%, while NPV and PPV were increased from 88% to 94% and from 47% to 58%, respectively (average values comparing ECOG-ACRIN5597 versus LSC/PLuSS; Table 4). Gene methylation classifier extends lung cancer risk assessment beyond Medicare screening guidelines The performance of the gene methylation panel was also evaluated in the ECOG-ACRIN versus LSC or PLuSS cohorts independent of age, smoking history, and smoking status (years quit), albeit everyone was 40 years and older and had smoked a minimum of 10 pack years. This design increased the sample sizes to 487, 1380, and 718 for the ECOG-ACRIN, LSC, and PLuSS cohorts, respectively. ROC curves comparing the eight-gene methylation panel for ECOG-ACRIN5597 to LSC or PLuSS each showed a classification accuracy of 88% when combining clinical risk factors with the 8-gene methylation panel. Accordingly, these relaxed inclusion criteria also did not significantly diminish specificity when sensitivity was set at 95%. DISCUSSION This large cross-sectional study of smokers provides compelling support that a significant increase in classification accuracy and accompanying specificity for predicting LC risk can be achieved by the addition of a gene methylation panel in sputum to the inclusion variables for CT screening when comparing these cancer patients to two geographically distinct cancer-free smoker cohorts.
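For readers who want to reproduce the shape of this analysis, the sketch below shows the modelling step on synthetic placeholder data (not the study data): a logistic regression on age, sex, and smoking status alone versus the same model with the eight binary methylation indicators added, compared by ROC AUC. Variable names, distributions, and the in-sample evaluation are assumptions for illustration only.

```python
# Illustrative comparison of clinical-only vs clinical-plus-methylation models
# (synthetic data; real AUCs would be estimated on the cohorts described above).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000
clinical = np.column_stack([
    rng.normal(65, 8, n),        # age (continuous)
    rng.integers(0, 2, n),       # sex (binary)
    rng.integers(0, 2, n),       # smoking status: current vs former (binary)
])
methylation = rng.integers(0, 2, (n, 8))   # eight genes, methylated = 1
y = rng.integers(0, 2, n)                  # case (1) vs cancer-free smoker (0)

def in_sample_auc(X, y):
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

print("clinical only          ", round(in_sample_auc(clinical, y), 3))
print("clinical + methylation ", round(in_sample_auc(np.column_stack([clinical, methylation]), y), 3))
```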
Moreover, classification accuracy of the methylation panel was similar when relaxing the Medicare inclusion criteria for CT to include all ECOG-ACRIN cases that smoked compared to all cancer-free subjects from LSC and PLuSS. A limitation of these studies was that our assessment was restricted to the ~70% of smokers who produce sputum. However, with the advent of the Lung Flute, most individuals that do not spontaneously produce sputum will be able to provide a specimen for risk assessment [22]. Thus, implementation of this gene methylation panel for population-based screening could be a paradigm shift for LC management by providing a much improved risk assessment model that will save more lives through increased number of screen-detected cancer, while greatly reducing the number of false screens through exclusion of lower risk smokers. The retrospective nature of this study design allowed us to define the classification accuracy of the biomarker panel in a sample size of cases (n = 487) that was comparable to that detected by the NLST screening trial of 53,439 smokers [7]. Importantly, using participants from the ECOG-ACRIN5597 trial also addressed for the first time the generalizability of a gene methylation biomarker panel for risk assessment through studying LC cases from across the U.S. and Canada with comparison to two geographically distinct cohorts of smokers. Another major distinguishing feature of our study beyond sample size from other sputum-based risk assessment publications is the continued reproducibility regarding the performance of genes within this biomarker panel across five independent studies [11,12,[23][24][25]. This outcome likely results from the fact that the genes studied are not methylated in normal cells of any lineage thereby being cancer-specific and the use of the nested, MSP assay that has a reproducible sensitivity of 1 methylated allele in 20,000 unmethylated alleles to allow interrogation of sputum, a heterogeneous mixture of cells where the epithelial fraction is often less than 3% [11]. Moreover, high specificity is maintained in the stage 2 PCR for detecting methylated alleles through the use of annealing temperatures that exceed the melting point of the primers and short denaturation and extension cycles (15-20 sec; [11]). Finally, the fact that high classification accuracy was achieved through comparison of sputum from resected LC cases to controls strongly substantiates that the expanding field of injury with concomitant methylation is the major feature distinguishing cases from controls. While our studies with a validated gene methylation panel in sputum have improved classification accuracy for LC in screen-eligible smokers as evident by an increase in specificity from 24% to 56% with sensitivity set at 95%, adding other methylated genes to our panel is unlikely to yield significant improvement due to the correlation among genes for differentiating case status [12]. Rather, independent sets of biomarkers that can be used in conjunction with this gene methylation panel are needed to significantly extend specificity. Changes in circulating metabolites that can be quantitated are emerging as sensitive readouts for many diseases and a plasma metabolome signature, because of its dimensionality resulting from genetic and epigenetic changes driving the expansion of field cancerization in the smoker's lung, could extend our prediction model beyond methylation biomarkers [26][27][28][29][30][31]. 
While this approach remains untested, promising recent metabolomic profiling studies of moderate sample size are identifying discriminatory metabolites with LC classification accuracies of 77-88% [32,33]. The ultimate translation of this work should be to provide primary care and/or pulmonary physicians with the option of ordering a low-cost (≤ $200), insurance-reimbursable, validated LC risk assessment test to guide decision making about receiving a CT scan. Our model to date significantly improves classification accuracy beyond the current Medicare guideline, would allow expansion of the number of smokers considered for screening, and should better define eligibility for receiving a CT scan by removing smokers with a low probability for LC based on the addition of methylation to the risk assessment. Our patented technology [34] is amenable to a CLIA setting through development of robotic/liquid handling for sputum processing, DNA isolation, bisulfite modification, and assembly of the Stage I and II MSP reactions in a 96-well format, in conjunction with low-cost SYBR Green-based detection of methylated products using real-time PCR. Subject recruitment and biospecimen collection Study participants were resected LC patients from a prevention trial and subjects from two geographically distinct cancer-free smoker cohorts. Eligibility criteria for participation in the prevention trial included the following: age ≥ 18 years; 6 to 36 months from complete resection of histologically proven stage IA (pT1N0) or stage IB (pT2N0) non-small cell LC (carcinoid tumors were excluded [19]). The institutional review board for human studies approved the protocols, and written consent was obtained from subjects. Following consent onto the correlative study, the Lovelace study coordinator sent a collection kit to the study site. Sputum was collected at the time of entry onto the study; sputum was collected from 85% of ECOG-ACRIN patients within 18 months post-surgery. Each participant was asked to provide two consecutive spontaneous sputum samples collected at home at each time point, as described previously [11]. Study participants placed the sputum cups in a postage-paid mailer addressed to the study coordinator at Lovelace. Material from the second 3-day pooled sputum was used for this study. The collection of two sputum samples at each time point was based on the finding by Kennedy et al. [35] that the second sample has a higher success rate (80%) in producing an adequate sputum sample based on established cytologic standards, attributed to a 'learning effect' in adequate sputum collection. Following receipt, the sputum samples were pelleted and washed in Saccomanno's fixative. A small portion was smeared onto two or three slides and stained with Papanicolaou stain prior to cytologic diagnosis, with the remaining sample stored at -80°C until DNA isolation. Sputum containing epithelial cells from the upper or lower airways has proven satisfactory for methylation assays, and using these criteria, virtually 100% of samples were adequate for study [11]. Two cancer-free cohorts, the LSC and the Pittsburgh-based PLuSS cohort, were used to validate the classification accuracy of the gene methylation panel for predicting LC risk [21,36]. These participants were cancer-free, and methylation was assessed in sputum collected at cohort enrollment. DNA isolation and methylation-specific PCR Sputum DNA was isolated using methods previously described, with DNA yields ranging from 5-100 μg [11,12]. 
The eight genes selected were based on positive performance in our initial nested case-control study in a Colorado cohort [11]. These genes included P16, MGMT, DAPK, RASSF1A, GATA4, GATA5, PAX5α and PAX5β; they are cancer-specific genes methylated solely in epithelial cells. DNA was bisulfite-modified, and two-stage nested methylation-specific polymerase chain reaction (MSP) assays were used for increased sensitivity in detecting promoter methylation in sputum and plasma, as described [11]. The immense cellular heterogeneity in sputum, where the epithelial fraction is typically <3% of the specimen, limits the ability to quantitate methylation; methylation was therefore scored as positive or negative based on detection of a visible band in the gel. Statistical analysis The association between methylation of each gene measured in sputum collected at baseline and risk for LC was assessed using logistic regression among ever-smokers enrolled in LSC (n = 466), PLuSS (n = 597), and ECOG-ACRIN5597 (n = 371). Study subjects were restricted to those who met the Medicare screening criteria, with the exception of pack years, which were available for only 194 of the 371 ECOG-ACRIN subjects [2]. The area under the curve (AUC) of a receiver operating characteristic (ROC) curve was calculated to assess the prediction performance of the logistic regressions with different sets of covariates. The basic model included age (as a continuous variable), sex, and smoking status (as a binary variable), the risk factors for LC available from all groups. Methylation status of each gene was defined as methylated or unmethylated based on the gel image with respect to detecting a methylated PCR product. The methylation status of the eight genes, entered as eight independent variables, was added to the basic model to evaluate the change (delta) in AUC. The methylation index approach showed prediction performance inferior to using the methylation status of each individual gene in the model (not shown). This may be because methylation of the individual genes is only low-to-moderately correlated across genes, and the genes likely differ in the magnitude of their contribution to driving lung cancer development, which does not support the equal weighting used in the methylation index. Estimates of sensitivity, specificity, and negative and positive predictive value (NPV, PPV) were calculated. Analyses were expanded to assess AUC and ROC curves using all ECOG-ACRIN subjects (n = 487) compared to LSC (n = 1380) or PLuSS (n = 718) subjects who had provided baseline sputum for methylation interrogation, irrespective of meeting Medicare screening criteria. All statistical analyses used two-sided tests and were conducted using SAS 9.3 and R 3.1.
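The fixed operating point used throughout the study, sensitivity set at 95%, corresponds to choosing the classification threshold at which the true-positive rate first reaches 0.95 and then reading off specificity, PPV and NPV. The short sketch below shows one way to do this in Python on synthetic scores; it is an illustration only, not the SAS 9.3/R 3.1 pipeline the authors report using.

```python
# Choosing a decision threshold that fixes sensitivity at 95%, then reading off
# specificity, PPV and NPV.  Labels and scores below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
y = rng.binomial(1, 0.3, 2000)              # hypothetical case/control labels
score = rng.normal(loc=y * 1.2, scale=1.0)  # hypothetical risk scores

fpr, tpr, thresholds = roc_curve(y, score)
idx = np.argmax(tpr >= 0.95)                # first threshold reaching 95% sensitivity
thr = thresholds[idx]

pred = score >= thr
tp = np.sum(pred & (y == 1)); fp = np.sum(pred & (y == 0))
fn = np.sum(~pred & (y == 1)); tn = np.sum(~pred & (y == 0))

print(f"sensitivity {tp/(tp+fn):.2f}  specificity {tn/(tn+fp):.2f}  "
      f"PPV {tp/(tp+fp):.2f}  NPV {tn/(tn+fn):.2f}")
```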
4,272.8
2017-07-15T00:00:00.000
[ "Medicine", "Biology" ]
Characterizing the Spatial and Temporal Availability of Very High Resolution Satellite Imagery in Google Earth and Microsoft Bing Maps as a Source of Reference Data : Very high resolution (VHR) satellite imagery from Google Earth and Microsoft Bing Maps is increasingly being used in a variety of applications from computer sciences to arts and humanities. In the field of remote sensing, one use of this imagery is to create reference data sets through visual interpretation, e.g., to complement existing training data or to aid in the validation of land-cover products. Through new applications such as Collect Earth, this imagery is also being used for monitoring purposes in the form of statistical surveys obtained through visual interpretation. However, little is known about where VHR satellite imagery exists globally or the dates of the imagery. Here we present a global overview of the spatial and temporal distribution of VHR satellite imagery in Google Earth and Microsoft Bing Maps. The results show an uneven availability globally, with biases in certain areas such as the USA, Europe and India, and with clear discontinuities at political borders. We also show that the availability of VHR imagery is currently not adequate for monitoring protected areas and deforestation, but is better suited for monitoring changes in cropland or urban areas using visual interpretation. Introduction Google Earth and Microsoft Bing Maps provide visual access to very high resolution (VHR) satellite imagery, defined here as imagery with a spatial resolution finer than 5 m. We have started to see this imagery being used across many different disciplines with increasing frequency. For example, using the search terms "Google Earth" or "Bing Imagery" in Scopus, which is a database of scientific abstracts and citations, reveals a steady increase from 2005 to 2016 in the number of papers that mention or use such imagery ( Figure S1), both across general domains ( Figure S2) and more specifically in remote sensing ( Figure S3). The imagery is used for different purposes but in remote sensing, mapping is the most frequent thematic area ( Figure S4) and map validation is the most commonly found application, i.e., producing an accuracy assessment of a map ( Figures S5 and S6). As many detailed features and objects can be seen from VHR imagery, e.g., buildings, roads and individual trees, reference data sets for map validation are increasingly being augmented with visual interpretation of Google Earth imagery, and producers and consumers of land-cover maps are using Google Earth to collect reference data for the validation of these products [1][2][3][4][5]. At the same time, applications such as Geo-Wiki are using crowdsourcing to gather reference data sets for hybrid land cover map development and validation tasks based on visual interpretation of Google Earth and Microsoft Bing Maps [6][7][8][9][10][11], while the Collect Earth tool uses Google Earth imagery to gather data for forest inventories [12,13]. VHR imagery is also extremely useful for a range of different environmental monitoring applications, from detecting deforestation to monitoring cropland expansion or abandonment. Here we do not refer to the use of the imagery directly in classification, either the use of spectral information from VHR imagery that has been purchased or the red-green-blue (RGB) images themselves. 
Instead we refer to applications such as Collect Earth, which can be used to undertake monitoring activities through statistical surveys with visual interpretation [12,13]. Unlike Microsoft Bing Maps, Google Earth provides access to historical imagery, archiving the images as they are added to their system. This historical imagery represents a valuable source of information for monitoring changes in the landscape over time. However, since Google Earth and Microsoft Bing Maps present the satellite imagery in a seamless fashion, this may lead to the perception that the satellite data are continuous and homogeneous in nature, both in time and space. Yet in reality, the information is actually a mosaic of many images from different time periods, different spatial resolutions (15 m to 10 cm) and multiple image providers (from Landsat satellites operated by National Aeronautics and Space Administration (NASA) and United States Geological Survey (USGS) to commercial providers such as Digital Globe); see e.g., [14]. Moreover, important to note is that Google Earth and Microsoft Bing Maps do not include all of the available VHR imagery from all providers but only a subset of images that have been negotiated through agreements. Hence the satellite image landscape is actually fractured, with much of the globe still covered by Landsat resolution imagery, i.e., 15 m panchromatic. Although the Sentinel-2 of the European Space Agency (ESA) is now freely available and may slowly replace the base Landsat imagery in Google Earth, a 10 m spatial resolution is still not sufficient for visual interpretation of many landscape features. Moreover, for users of Google Earth and Microsoft Bing Maps, little is known about the spatial availability of the VHR imagery or how much historical imagery exists in Google Earth and where it can be found, which can limit the use of this resource for environmental monitoring applications. In this paper we provide an overview of the availability of VHR imagery globally by creating a systematic sample at each latitude/longitude intersection and extracting the type of imagery and the dates available for both Google Earth and Microsoft Bing Maps. As mentioned above, we define VHR imagery as any imagery that has a spatial resolution finer than 5 m. Although the term 'VHR imagery' is often used to denote imagery at a resolution measured in centimeters, there are also other types of imagery available such as SPOT (1.5 to 5 m resolution), which can be useful in recognizing certain landscape features. This is the first time that metadata on the availability of VHR imagery in space and time has been made available for Google Earth and Microsoft Bing Maps. The information can be used, for example, in the design of reference databases for remote sensing, particularly in applications that involve change detection. The overview provided here corresponds to the first week of January 2017, after which Google deprecated the Google Earth application programming interface (API)/plugin and it was no longer possible to obtain the image dates from this source. With a focus on specific geographical areas, we then examine the availability of VHR imagery and its potential impact on monitoring world protected areas, deforestation, cropland and urban expansion using visual interpretation. Materials and Methods The methodology used in this paper is summarized in Figure 1. Starting with a systematic sample, the dates are extracted from Google Earth and Microsoft Bing Maps as described in Section 2.1. 
This data set is then analyzed for overlap, and a variety of spatial and temporal indicators of VHR availability are calculated (as described in Section 2.2) in order to provide a global and world regional overview. Finally, a series of case studies have been selected, where the data extraction and processing is described in Sections 2.3-2.6. Data Extraction from Google Earth and Microsoft Bing Maps The dates of the images were extracted from Google Earth and Microsoft Bing Maps using the API provided by each application on a systematic grid with a spacing of 1 degree or circa 100 km at the equator placed over land surface areas of the Earth. The Google Earth API was deprecated on 11 January 2017 so the Google Earth historical imagery dates were extracted just prior to this deprecation. The Microsoft Bing Maps dates were extracted at the same time. For Microsoft Bing Maps, only one satellite image is available at each location while Google Earth has historical imagery so the dates of all historical imagery were recorded at each grid point. Spatial-Temporal Patterns of the Image Dates The spatial distribution of the image dates from Microsoft Bing Maps and Google Earth was plotted globally. For Microsoft Bing Maps, this corresponds to the imagery available as of 11 January 2017 while for Google Earth, this was the most recent date at each location. A comparison between A number of additional maps were plotted for Google Earth imagery because of the availability of the historical imagery. The first is the number of historical images available in Google Earth, which shows those regions with abundant time series and those with a lack of historical information. The vector of imagery dates at each location was then queried to extract a set of indicators, as outlined in Table 1. Number of seasons The dates were grouped by the four seasons of winter (December, January, February), spring (March, April, May), summer (June, July, August) and autumn (September, October, November); this indicator shows the number of historical images that fall in each of the four seasons, which is a relevant indicator for landscapes that change seasonally. Average difference between the oldest and the most recent date (years) Total sum of all numbers of unique years per grid points in a certain stratum divided by the total number of grid points in this stratum. Most recent year, calculated as the median At each grid point, we selected the year of the most recent image. Then from a subset of grid points in a certain stratum, we calculated the median of the most recent year. Oldest year, calculated as the median At each grid point, we selected the year of the oldest image. Then from a subset of grid points in a certain stratum, we calculated the median of the oldest years. Average number of different seasons per location Total sum of all numbers of different seasons per grid point in a certain stratum divided by the total number of grid points in this stratum. The image dates were then summarized by region, i.e., at the sub-continental level ( Figure S7), to calculate the percentage of grid points containing VHR imagery and the recent date occurring most frequently in these regions. For Google Earth, additional indicators were calculated including the average number of images per grid point, the average number of unique years per grid point and the average difference between the oldest and the most recent date (Table 1). 
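The per-location indicators in Table 1 are simple summaries of the vector of image dates recorded at each grid point. The sketch below is our own Python illustration (the paper's analysis was carried out in R, with figures in ArcMap): it builds a 1-degree systematic grid, omitting the land mask for brevity, and computes the number of images, unique years, seasons covered, and the span between oldest and most recent image from a hypothetical list of dates.

```python
# Illustrative computation of the Table 1 indicators for one grid point.
# The date list is hypothetical; the paper extracted real dates via the
# (now deprecated) Google Earth API and the Bing Maps API.
from datetime import date
import itertools

# Systematic 1-degree sample grid (land masking omitted for brevity).
grid = list(itertools.product(range(-89, 90), range(-180, 180)))

def season(d):
    return {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer"}.get(d.month, "autumn")

def indicators(dates):
    years = {d.year for d in dates}
    return {
        "n_images": len(dates),
        "n_unique_years": len(years),
        "n_seasons": len({season(d) for d in dates}),
        "span_years": max(years) - min(years) if dates else 0,
        "most_recent_year": max(years) if dates else None,
        "oldest_year": min(years) if dates else None,
    }

sample_dates = [date(2004, 7, 1), date(2011, 3, 15), date(2016, 10, 2)]
print(indicators(sample_dates))
```

Regional summaries in the paper are then medians or averages of these per-point values within each stratum.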
Finally, the Pearson correlation coefficients between the number of images available in Google Earth and population density (as a proxy for urban areas) was calculated (Table S1) where the number of images in each 250 m 2 grid cell was extracted. Population density was obtained from the Global Human Settlement Population Grid for the year 2015 and has been produced by the Joint Research Centre of the European Commission [15]. The idea was to determine if there is a bias in the amount of VHR imagery in urban areas. Availability of Very High Resolution (VHR) Images in Protected Areas The World Protected Areas data set from the United Nations Environment Programme World Conservation Monitoring Centre (UNEP-WCMC) [16] contains the boundaries of protected areas globally. This layer was used to extract those sample points that fell within protected areas of all categories (from most to least protected), which were then disaggregated by major world region. The percentage of sample points with VHR imagery in Google Earth and Microsoft Bing Maps was then calculated along with the median of the date of the imagery in Microsoft Bing Maps and the most recent and oldest dates in Google Earth ( Table 1). The average number of images in Google Earth was then calculated by region along with the number of unique years and the average number of seasons per location. Availability of VHR Images in Areas with High Rates of Deforestation To examine the availability of VHR images in areas with high deforestation, we selected regions that have the highest forest cover change according to the UN Food and Agriculture Organization's (FAO) Global Forest Resources Assessment in 2015 [17]. In particular, we chose: • Regions where crop expansion is the main driver of forest loss, i.e., the Amazon, the Congo basin, Indonesia and Malaysia; and • Developed countries with intensive forest management: i.e., Sweden and Finland. The sample points falling in the regions listed above were then extracted from the full data set. A forest mask [18] was used to determine the number of sample points that fall within forest areas. We then calculated the percentage of VHR images in Google Earth and Microsoft Bing Maps by region in forest areas along with the most frequent year of the imagery for Microsoft Bing Maps, the most frequent oldest and most recent imagery in Google Earth as well as the average and unique number of images in Google Earth (Table 1). Availability of VHR Images in Areas with Cropland Visual interpretation of VHR imagery in the context of cropland can differ based on whether the image falls inside or outside of a growing season. We used the MEaSUREs (the NASA Making Earth System Data Records for Use in Research Environments) Vegetation Index and Phenology (VIP) Global Data Set, produced by NASA [19], which contains information for a range of different phenological metrics at a 0.05 degree resolution. The relevant measures extracted from this product at the sample locations included the number of growing seasons and their start and end dates. We then compared the dates of the imagery with the growing season dates to determine if the imagery at a sample location falls in or outside of a growing season or whether imagery is available for both cases. This information is relevant for applications related to cropland monitoring, where image interpretation would benefit from having scenes both inside and outside of a growing season. 
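Deciding whether an image date falls inside or outside a growing season is an interval test on the day of year once the phenology metrics have been extracted. Below is a minimal sketch, with placeholder start and end days standing in for the values taken from the MEaSUREs VIP product at each sample location.

```python
# Classify image dates as inside or outside a growing season.
# Start/end days of year are placeholders for the values extracted from the
# MEaSUREs VIP phenology product at each sample location.
from datetime import date

def in_growing_season(d, start_doy, end_doy):
    doy = d.timetuple().tm_yday
    if start_doy <= end_doy:                    # season within one calendar year
        return start_doy <= doy <= end_doy
    return doy >= start_doy or doy <= end_doy   # season wraps over the new year

images = [date(2013, 1, 20), date(2015, 7, 4), date(2016, 11, 2)]
labels = ["inside" if in_growing_season(d, 120, 270) else "outside" for d in images]
print(list(zip([d.isoformat() for d in images], labels)))
```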
We then selected a list of countries to examine the availability of VHR images in cropland areas in more detail. The criteria used for selection were the following: • Countries with poor cropland monitoring systems, identified as countries with the highest food security risks [20], i.e., Angola, Chad, Ethiopia, Mongolia, Mozambique and Namibia; • Countries with a large cropland expansion or cropland loss since 2000, i.e., Nigeria, Indonesia, Brazil, Argentina, Tanzania, Australia, India and Sudan based on FAO statistics and a recent study on risks to biodiversity due to cropland expansion and intensification [21]; • The USA, which was chosen because it has the best coverage of VHR imagery. The sample points falling in the countries listed above were then extracted from the full data set and divided into two subsets based on whether the points fall inside or outside of a cropland area. The Unified Cropland Layer produced for global agricultural monitoring at a resolution of 250 m [22] was used as a cropland mask to differentiate between areas of cropland presence or absence. The number of locations with VHR images was then calculated along with the most frequent year in both Google Earth and Microsoft Bing Maps, as well as the total and unique number of images in Google Earth (see Table 1). Availability of VHR Images in Urban Areas Using the layer of urban and rural areas developed by the Joint Research Center (JRC) at a 1 km 2 resolution [23], the number of locations with VHR images falling in these two classes was calculated, along with the average number of unique years in Google Earth, the most frequent oldest and most recent images in Google Earth and the most frequent year in Microsoft Bing Maps. Software Used All of the analysis in the paper has been done using the R statistical package and all the figures were prepared using ESRI's ArcMap v.10.1 GIS software. In contrast, imagery from Google Earth is generally more recent than Bing imagery. In Google Earth (Figure 3), continuous areas of very recent imagery (2016) can be found in India, parts of South America and some African countries. There is a noticeable lack of VHR imagery in the northern latitudes, parts of the Amazon and desert areas. This is particularly evident for Australia, where Microsoft Bing Maps is either the only source of VHR imagery or contains the most recent imagery (Figure 4). The coverage of the Amazon is also much better in Microsoft Bing Maps than it is in Google Earth. The results also confirm the previous findings that VHR imagery is available for almost all of Australia and New Zealand when considering Microsoft Bing Maps while the coverage is lower (70%) for Google Earth; the dates are also similar although slightly more recent for Google Earth. Microsoft Bing Maps coverage is also better than Google Earth for South America, and western and Central Asia. Spatial-Temporal Distribution of VHR Satellite Images The worst coverage can be found in North America, where only half the area is covered by VHR imagery in both Google Earth and Microsoft Bing Maps, and eastern Europe, where Google Earth has only 39% VHR imagery and Microsoft Bing Maps has 58%. This is probably due to the fact that these regions cover high northern latitudes, where there is lower availability of VHR imagery (Figure 4). In contrast, Google Earth has better coverage in northern European, south-eastern Asia and middle Africa compared to Microsoft Bing Maps. 
Overall, Google imagery is more recent than Microsoft Bing Maps but Microsoft Bing Maps provide spatial complementarity to Google in South America, Australia, New Zealand and the northern part of eastern Europe. North America, southern Europe, southern Africa, and southern and south-eastern Asia have the richest archive of images, while eastern and northern Europe, Central Asia, northern and Central Africa have mostly only one or two images per location. As some of the historical images are from the same year, Figure 5 shows the number of unique years for which VHR imagery is available in Google Earth. The areas with the most imagery available are the USA, India, parts of Eastern Europe and Indonesia, and some of the more populated regions across all the continents, e.g., the southern part of Brazil, the eastern coast of Australia and the south-eastern part of South Africa. Overall, the majority of the world is covered by only 1 to 3 images per location, which may explain why there is only a medium correlation between population density and the number of images in the regions of northern Africa (r = 0.46, p-values < 0.001), South America (r = 0.40, p-values < 0.001), Western Asia (r = 0.38, p-values < 0.001), and eastern Europe (r = 0.29, p-values < 0.001) (Table S1) and low or no correlations in the rest of the world. Similar spatial patterns were found when plotting the total number of VHR images available in Google Earth ( Figure S8). Seasonal patterns are also evident in the historical archive of Google Earth. Figure 6 shows the availability of VHR imagery according to the number of seasons represented. Very few places have imagery from all 4 seasons (winter, spring, summer and autumn) while 3 seasons are available in majority of the USA, India and eastern Europe, mirroring the pattern found for the number of images. We now examine the availability of VHR imagery in relation to four domains where such imagery has value for environmental monitoring, i.e., monitoring of protected areas; monitoring of areas that have high rates of deforestation; monitoring areas with cropland, where the latter application has relevance for food security; and monitoring urban areas. Table 3 summarizes the availability of VHR images inside protected areas [16] by major world region. Greenland has been excluded from this analysis due to the absence of VHR images in this area. The coverage of protected areas in Microsoft Bing Maps is better than Google Earth for most regions except for Africa and western, southern and northern Europe, where it is only slightly lower. The comprehensive coverage by Microsoft Bing Maps in Australia and New Zealand is again evident when compared to Google Earth while coverage in South America and eastern Europe are considerably lower in Google Earth compared to Microsoft Bing Maps. Google Earth images are generally more up-to-date than Microsoft Bing Maps, have an average of at least 3 images per location, and cover at least 2 different seasons. Table 4 illustrates the availability of VHR imagery within selected regions that have the highest forest cover change based on the FAO's Global Forest Resources Assessment 2015 [17]. There is good spatial coverage by Microsoft Bing Maps in the Amazon and the Congo basin although temporally, there is only one image available on average at these locations. Moreover, the most recent, frequent year found is 4 to 6 years old. 
In contrast, Google Earth has relatively poor coverage in the Amazon and only 1 image available on average with similar years. For the Congo basin, the coverage is better but still poorer than Microsoft Bing Maps and only 1 unique image is available on average. For the other regions, the availability of Google Earth VHR imagery is quite good although only 2 unique years are available on average and the images are more recent than Microsoft Bing Maps. Availability of VHR Imagery in Cropland Areas To monitor cropland, particularly the presence of annual crops that can appear quite differently on satellite imagery depending on the growing season, it is useful to know the availability of VHR imagery both inside and outside of a growing season, which is shown in Figure 7. The distribution shows that most of the images are either taken during the growing season or there is imagery available both inside and outside of this period. Areas with imagery available only outside of the growing season can be found in the transition zones between desert and agricultural areas in the Sahel, and in the desert areas of Australia and western China, where there is less agriculture. For the countries selected as having either poor cropland monitoring or large expansion or loss of cropland since 2000, the availability of VHR imagery is shown in Table 5. The USA is also added as a contrast since it has large areas of cropland and good availability of VHR imagery (Figure 7). The results show that the cropland areas in these countries are covered by more than 90% VHR imagery in Google Earth. The only country for which no VHR imagery is available is Mongolia, which is unsurprising given its location in the high northern latitudes where minimal VHR imagery tends to be available. Table S2 also shows that in some countries such as Ethiopia, Namibia, Nigeria, Indonesia, Tanzania and Australia, there are more images available in cropland versus non-cropland areas. Microsoft Bing Maps are generally older than Google Earth's most recent imagery but all countries have 2 or more historical images available in Google Earth; some countries even have 5 or more images available, which span more than one season in a given year. Both the USA and India have the most images available in cropland areas although images from more unique years are available for the USA. Table 6 presents the distribution of sample locations that fall within urban and rural areas [23]; the majority of sample points fall outside of these two classes in unpopulated areas and are not shown here. Of those falling in urban areas, coverage is 100% in Google Earth and still high in Microsoft Bing Maps (87%). In rural areas the coverage is lower but nevertheless good at around 80% for both Google Earth and Microsoft Bing Maps. For urban areas, the number of unique years is 6, with a broad range of older and more recent imagery in Google Earth. Hence it is possible to use the imagery for some change detection in urban areas using visual interpretation or for validation of remotely-sensed urban products. Microsoft Bing Maps tend to be older than the most recent Google Earth imagery but may add additional information for change detection or validation purposes. Discussion The results have shown that there is clearly unequal spatial and temporal coverage by VHR imagery across the globe. 
There are parts of the world that have no VHR imagery, i.e., high northern latitudes, countries in the north-western part of South America, e.g., Afghanistan, Ecuador and Colombia, parts of the Saharan Desert, parts of the Congo Basin and Indonesia/Papua New Guinea. Hence it is difficult to do any monitoring in these areas since there is only Landsat panchromatic (15 m resolution) base imagery available. In the rest of the world there is some spatial complementarity between Google Earth and Microsoft Bing Maps, e.g., there are only Microsoft Bing Maps present in parts of Canada, the Amazon, former Soviet Union countries and parts of Australia where Google Earth has no coverage. In contrast, Google Earth imagery adds very little additional spatial coverage but tends to be more recent than Microsoft Bing Maps and has the benefit of a historical archive, which adds potential value for change detection and monitoring purposes using visual interpretation. However, the reality is that for applications where a time series of images would greatly benefit monitoring, the amount of historical imagery is actually quite small. We then focused on four applications where the use of VHR satellite imagery would greatly benefit monitoring and change detection, i.e., protected, forested, cropland and urban areas. Due to increased competition for land [24], protected land areas are threatened, impacting biodiversity and natural resources [16,25]; hence monitoring is vital. The availability of VHR imagery in protected areas was surprisingly poor in North America, eastern Europe and South America, particularly in Google Earth within the latter two regions. On average there are only 2 to 3 historical images in different years; hence monitoring is possible in some parts of the world but it is limited. For deforestation, the picture is worse, particularly in a region such as the Amazon. Although coverage by Microsoft Bing Maps is relatively good, less than 50% of the points falling in the Amazon biome were covered by VHR imagery in Google Earth, with on average only 1 year of imagery. Thus, there is a clear lack of information in the historical archive for monitoring change. The spatial-temporal coverage is better for Indonesia and Malaysia where there are three images on average in different years in Google Earth while most of the other regions have 2 years on average. Although new tools and products for monitoring deforestation have appeared recently, e.g., through Global Forest Watch, the basis of change detection is Landsat imagery, which still requires validation with VHR imagery. For studies in crop expansion or abandonment and urbanization, the availability of suitable VHR imagery is much better. The coverage by VHR imagery in countries with poor crop-monitoring systems, i.e., those currently subject to cropland expansion and losses, and those areas classified as urban is extremely high. There are time series of images available, and for cropland, images from more than one season. Hence there is quite some potential for using this resource for change detection in cropland and urban areas and the validation of remotely-sensed products. From the Scopus search and the breakdown by discipline ( Figures S1 and S2), the increasing value of Google Earth and Microsoft Bing Maps is evident. 
Figures S1 and S5 confirm the increasing use of imagery from Google Earth and for validation tasks in remote sensing, respectively, while new crowdsourced reference data sets based on Google Earth and Microsoft Bing Maps are appearing [8,26]. The collection of in situ data is resource intensive, both in terms of time and money, e.g., the LUCAS (Land Use Cover Area frame Survey) data set represents the only source of in situ data for European Union (EU) member countries where ca 300 K points are surveyed on the ground every 3 years [27]. The implementation in 2018 alone will cost more than 12 million euros [28]. Hence the visual interpretation of VHR imagery (via Google Earth and Microsoft Bing Maps) has become a more cost-effective approach for building reference data sets for the validation of land cover and land-use maps, as well as inputs to the training algorithms that create these products. Hence from an environmental and research perspective, it is important that access to these data sources continues and that gaps in VHR imagery are filled where possible. The costs of purchasing data from providers such as Digital Globe are high although it should be noted that the Digital Globe Foundation does provide data grants for academic purposes. Moreover, we are increasingly moving away from the development of static products of land cover and land use and are interested in detecting change over time, e.g., forest loss and gain over time [29] or monitoring the change in water bodies over a 32 year period [30]. Figure S6 shows that the majority of papers are using imagery from different time periods, which reflects this trend. As new land-cover products appear, e.g., the recent ESA CCI (European Space Agency Climate Change Initiative) land cover time series from 1992-2012, access to VHR imagery for validation of land cover change is vital, particularly if users want to independently validate the product for their own user needs. The spatial-temporal metadata on the image dates and the availability of VHR imagery presented here can be used to guide sample design for validation of land-cover time series. However, this is only an overview in time so having a new API for accessing the dates of imagery in Google Earth as well as other meta-information about the satellite imagery would be extremely useful for a range of applications. Unfortunately, at present, users can only collect such metadata manually with the help of open access tools such as Collect Earth or LACO-Wiki. We acknowledge this as a current limitation but as this field is changing rapidly, this situation may improve in the future. A very good example are the tools provided by Copernicus and the company Sinergise, which were developed to collect and analyze satellite imagery, in particular the open access Sentinel images at 10 m resolution [31,32]. At the same time, there are encouraging initiatives to improve the availability and accessibility of VHR imagery in the private sector, e.g., the satellite company Planet has 149 of their small dove satellites orbiting the Earth, which together provide daily coverage of the Earth's land surface at a 3 to 3.5 m resolution. Free access to 10,000 km 2 of VHR imagery per month is available for non-commercial purposes [33]. The Radiant Earth initiative from the Bill and Melinda Gates Foundation and the Omidyar Network is making a considerable amount of satellite imagery free for humanitarian and environmental causes [34]. 
Moreover, as mentioned previously, Digital Globe provides grants for academic access. Most of the value in VHR satellite imagery is in the up-to-date nature of the information. Commercial image providers should be encouraged to unlock their historical archives, where the information has much less commercial value, and share the imagery via applications such as Google Earth. Not only does this benefit research, it can aid environmental monitoring by many different stakeholders in the public sector as well as non-governmental organizations and charities. New applications can be built to mobilize citizens to aid in change detection, which can help tackle many pressing environmental problems. The value of VHR satellite imagery available through Google Earth and Microsoft Bing Maps should not be underestimated but it has the potential to be so much more. Supplementary Materials: The following are available online at http://www.mdpi.com/2073-445X/7/4/118/s1, Figure S1: The number of scientific documents that mention the search terms "Google Earth" or "Microsoft Bing imagery" in Scopus ® (n = 5756) from 2005 to 2016. Figure S2: The distribution of documents by subject area that appear in Scopus ® from the period 2005 to 2016 containing the search terms "Google Earth" or "Microsoft Bing imagery". Figure S3: The number of scientific documents found in Scopus ® that mention the search terms "Google Earth" or "Microsoft Bing imagery" and additionally contain the search terms "Validation", "Visualization" or "Calibration" in the abstract (n = 372) from 2006 to 2016 to focus in on paper in the field of remote sensing. Figure S4: Documents using Google Earth imagery for remote sensing purposes broken down by purpose or thematic area (n = 102). Figure S5: Documents using Google Earth imagery for remote-sensing purposes broken down by different remote sensing activities (n = 96). Figure S6: Range of images reported as employed in studies using Google Earth imagery for remote sensing (n = 80). Figure S7: World regions from FAO. The Global Administrative Unit Layers (GAUL, FAO Global Administrative Unit Layers (GAUL). Figure S8: The number of VHR historical satellite images (<5 m resolution) available in Google Earth. Table S1: Correlation (Pearson correlation coefficient) between the number of images at a location and the population density, reported by FAO world region ranked in ascending order by positive correlation. Table S2: Availability of VHR imagery inside and outside of cropland areas for selected countries. The shaded countries indicate those locations where more imagery is available in areas of cropland compared to those falling outside.
7,631.4
2018-10-11T00:00:00.000
[ "Environmental Science", "Computer Science", "Geography" ]
Software Engineering Process Models Strengths and Limitations with SimSE Background/Objectives: This work mainly focuses on how a game-based learning approach can improve understanding of software engineering course content. Methods/Statistical Analysis: This study introduced SimSE in our software engineering course. We used it to explain three different process models to students, compared these models, and examined which model is better in a specific situation. Findings: Our results found a significant variance in the selection of the three different process models (F = 6.1, p < 0.05), with the Agile model found to be more efficient than the other two. Application/Improvements: A modern learning approach such as game-based learning can increase students' interest in theoretical, conceptual computer science courses such as software engineering. Keywords: Game-based Learning, Process Model, SimSE, Software Engineering *Author for correspondence Introduction Information technology extensively impacts our daily lives today; it shapes how we work, communicate, and collaborate 1 . The demand for skilled IT professionals is continuously increasing, which in turn requires more effective higher education. In the 21st century, the computer science discipline needs to attract quality students and develop them into capable and competent IT professionals 2 . However, computer science education has become multifaceted and complex due to rapid change and advancement in technology. In past years, most computer science courses have been taught in traditional ways 3-4 , which may not be adequate to keep up with modern concerns. A modern learning approach such as game-based learning can increase students' interest in theoretical, conceptual computer science courses such as software engineering 5 . Game-based learning uses games that define learning outcomes 6-7 . They are designed to balance course content with game play. Games are considered a powerful instructional tool oriented toward an objective such as winning 6 ; they can provide a wide range of benefits such as increased learning effectiveness, interest, motivation, and persistence 7-10 . Therefore, game-based learning provides a promising alternative for teaching computing in higher education. In software engineering, process models are used to develop quality software in a systematic manner and to improve the chance of completing the software on time. They provide a sequence of activities for software development, divided into logical stages that allow the development work to be organized efficiently. According to recent research, there are about 12 to 15 process models, and most software engineering graduate and undergraduate students find it difficult to follow any one of them for their projects. Due to this variety, it is quite difficult to understand which model is best for which domain or which model is feasible to follow. In industrial practice, the selection of the process model depends on the type of software, but in most cases it is the choice of the software engineering team; if the team assesses the software accurately, then they choose the right process model. Each process model has strengths and limitations and performs better in some situations than in others. 
It is a prime challenge for every software development team to choose an appropriate model that covers all aspects of the software development; it is a crucial decision that determines the success or failure of the software. Therefore, choosing a process model suited to the software is a very important skill that must be built in software engineering students. On the other hand, software engineering courses are taught in universities in a theoretical way without the support of real-life examples, which is why students often complain about the dryness of the subject, and proper skills regarding process models are not developed in the students. It is very important that students properly understand how each process model is implemented and are familiar with the advantages and limitations of each model. That is why, in the previous semester, we changed our strategy for teaching process models. We used the game SimSE to teach students about process models and to show how artifacts are developed in the various process models. One of the most significant factors behind the use of SimSE in the classroom is the explanation of how the various tasks of software engineering are accomplished in different process models; this can be very difficult to explain to students theoretically, because the tasks are mostly common to all models. Due to the vast variety of process models, it was quite difficult for us to select models for comparison. After considerable research, we selected the Waterfall Model, the Incremental Model, and the Agile Model (Extreme Programming). Background and Related Work Computer games are a promising method for teaching software engineering courses; this method can achieve high learning outcomes in difficult fields of study. Two types of game-based methods are practiced for teaching software engineering: (1) game-based learning systems and (2) the game development process. Programming languages and software process models are among the most significant areas of software engineering. Games for teaching programming are based on visualization of program code, e.g., turtle graphics 11-12 , Karel The Robot 13-15 , BlueJ 16 , DrJava 17 and Alice 18-19 . They are used to create object worlds through frameworks such as Karel. Current process-model teaching games are simulators such as Software Engineering Simulation by Animated Models (SESAM) 20 , Role Playing Game for Software Engineers (RPGSE), Second Life 21 , Value-Based Software Engineering (SimVBSE) 22 , the web simulator Simjava SP 23 and others. Many academics have used gaming simulators in their lectures; for example, Alex Baker used the educational card game Problems and Programmers, which simulates the software engineering process and teaches process issues that are not sufficiently clarified in lectures, and an experiment in the lectures showed an improvement in the understanding of the software process 24 . In 25 the authors presented an in-class game for teaching software engineering; the proposed game is a simulation of a real software engineering project, and the game results were evaluated through a survey focused on issues related to the 4 P's model. In 26 the authors experimented with the 3D virtual world Second Life in their classes at (1) Ohio University and (2) the University of Mary Washington; additionally, they built an educational game in Second Life and in SimSE. In 27,28 the authors addressed the issues faced by teachers in software engineering teaching. 
They developed two game-based simulation tools for teaching software engineering: (1) Problems and Programmers and (2) SimSE. SimSE is a software engineering educational game that simulates a software engineering environment according to the chosen process model. It includes 6 different process models, each with a different scenario and a different method. SimSE is an interactive, fully graphical educational game developed specifically to improve students' skills in the circumstances required for handling and understanding software process issues. At the start of the game, the player is presented with a detailed description of the software engineering task to be performed. The description usually covers the goal of the game, the budget, the available time to finish the project, some informative hints to guide play, and details of how the score will be calculated. It provides a fully graphical user interface, as shown in Figure 1. The central part of the GUI is a virtual office where the software engineering activities are performed, containing a typical office environment and the team members. The team members communicate with the project manager (the player) via overhead speech bubbles. These speech bubbles inform the player of important information, for example when a job was initiated or finished, the occurrence of random events, and responses to the player's actions. Process Models The waterfall model was the first process model. In it, each stage must be fully completed before the next stage begins, and parallel execution of development stages is not allowed. In the incremental model, the product is designed and implemented incrementally until it is finalized; it combines elements of the waterfall model with iterative development. Agile methodology improves the quality of software and provides the ability to adapt to changes in requirements during development; it delivers iterative and frequent small releases throughout development. Methodology The course in which we used SimSE was CSC202, a one-semester introductory software engineering course at SMI University. We used SimSE in two classes. In the fourth week of the semester, after the different process models had been discussed in full, we gave a short fifteen-minute tutorial on how to play SimSE and gave an assignment to the 68 students of the two classes. They played three SimSE models (waterfall, incremental, and agile) at least five times each, or until they scored above 90%, and noted the final score of each play. At the end, we also conducted student interviews, asked questions about each process model, and compared the process models from the students' point of view. Results and discussion 97% of the students attempted the SimSE extra-credit assignment, so interest was significant. The students' scores on the first attempt of the waterfall model were very low due to its complexity. Table 1 shows the scores of the first attempt of the waterfall model: no student scored above 50%, and 70% did not complete the project on time in the first play; in their last attempts, scores did not exceed 70%, and 60% finished the project on time. In the interviews we found that students faced many problems in the waterfall model due to the sequential execution of its phases: single activities such as requirement gathering and design required too much time, team management was difficult, and resource management was very poor. 
In the end, students realized that the waterfall model is not suitable for long and ongoing projects, for projects with a very high level of risk and uncertainty, or where requirements are at risk of changing. It is used where requirements are clear and will not change during development, such as defense projects and migration projects (Table 2). Students' scores on the first attempt of the incremental model were also low. As shown in Table 2, for the first attempt of the incremental model no student scored above 60%, and 50% did not complete the project on time in the first play; in their last attempts, scores did not exceed 90%, and 75% finished the project on time. In the incremental model, a module or a small group of modules is constructed in a single increment, and within one increment the rules of the waterfall model are followed. In this experiment the students had already played SimSE with the waterfall model; therefore, their results were better than for the waterfall. In the interviews we found that students also faced some problems with the incremental model: a lot of advance planning and a clear, complete specification of the system are required to properly divide the project into increments, and each phase of an increment is fixed and cannot overlap. The incremental model can be used in situations where requirements are clear and can be implemented phase-wise; it is suitable for websites and product-line software. Finally, students played the Agile model in SimSE and scored above 90% on the first attempt, as shown in Table 3. In the interviews we found that students faced very few problems with the agile model: progress is difficult to measure because it is spread across various cycles; it requires more energy and time because the software engineers and the customer constantly communicate with each other; its short cycles do not leave enough time for design review, so designers face considerable rework; and the requirements and procedures developed in each iteration should be independent, without dependencies on requirements outside the iteration. It is used in small and medium-sized projects, where multiple variants are required and when major deliverables can be broken down and produced in incremental, discrete packages. We also applied an F-test between and within the three different software models to check the significance of the difference among the three (Waterfall Model, Incremental Model, Extreme Programming Model). A significant variance was found among the three software models (F = 6.1, p < 0.05; Table 4), which is highly suggestive and in favor of the Agile model. Conclusion In this paper we have compared three major software development life cycle models with the help of the SimSE simulator. Selection of an appropriate development life cycle model is very important: software must be delivered to the client within the timeframe and must have the desired quality. In this regard, we applied an F-test between and within the three different software models to check the significance of the difference among the three groups, and a significant variance was found among the three software models. This study makes the software development model selection process easier, which improves the quality of software and decreases the software failure rate. Table 4. F-distribution between and within three different software models
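The between/within-groups F-test reported above is a one-way ANOVA over the students' scores under the three models. The snippet below is a minimal Python illustration with made-up score vectors (the actual class scores are not reproduced here); scipy.stats.f_oneway returns the F statistic and p-value in the same form.

```python
# One-way ANOVA (F-test) across the three process models, on illustrative scores.
# The actual student scores from the SimSE assignment are not reproduced here.
from scipy import stats

waterfall   = [52, 61, 58, 66, 70, 63, 55, 68]
incremental = [71, 78, 82, 75, 88, 80, 77, 84]
agile       = [91, 95, 93, 90, 97, 92, 94, 96]

f_stat, p_value = stats.f_oneway(waterfall, incremental, agile)
print(f"F = {f_stat:.1f}, p = {p_value:.4f}")  # p < 0.05 indicates a significant difference
```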
3,057.6
2019-08-01T00:00:00.000
[ "Computer Science" ]
Research on Alkali-Activated Slag Stabilization of Dredged Silt Based on a Response Surface Method To improve the resource utilization of dredged silt and industrial waste, this study explores the efficacy of using ground granulated blast furnace slag (GGBS), active calcium oxide (CaO), and sodium silicate (Na2O·nSiO2) as alkali activators for silt stabilization. Through a combination of addition tests, response surface method experiments, and microscopic analyses, we identified key factors influencing the unconfined compressive strength (UCS) of stabilized silt, optimized material ratios, and elucidated stabilization mechanisms. The results revealed the following: (1) CaO exhibited the most pronounced stabilization effect, succeeded by Na2O·nSiO2, whereas GGBS alone displayed marginal efficacy. CaO-stabilized silt demonstrated rapid strength augmentation within the initial 7 d, while Na2O·nSiO2-stabilized silt demonstrated a more gradual strength enhancement over time, attributable to the delayed hydration of GGBS in non-alkaline conditions, with strength increments occurring noticeably during later curing phases. (2) Response surface analysis demonstrated substantial interactions among GGBS-CaO and GGBS-Na2O·nSiO2, with the optimal dosages identified as 11.5% for GGBS, 4.1% for CaO, and 5.9% for Na2O·nSiO2. (3) X-ray diffraction (XRD) and scanning electron microscopy (SEM) analyses clarified that the hydration reactions within the GGBS-Na2O·nSiO2 composite cementitious system synergistically enhanced one another, with hydration products wrapping, filling, and binding the silt particles, thereby rendering the microstructure denser and more stable. Based on these experimental outcomes, we propose a microstructural mechanism model for the stabilization of dredged silt employing GGBS-CaO-Na2O·nSiO2. Introduction With the expansion of large-scale urban construction projects along China's coastline, there has emerged an urgent need to manage the vast quantities of dredged silt generated [1,2]. Characterized by high water content, extensive porosity, significant compressibility, and low bearing capacity, and laden with considerable amounts of organic matter and pollutants [3-5], the accumulation of dredged silt not only depletes urban land resources but also poses substantial environmental risks. Presently, prevalent methods for sludge treatment encompass natural processing, thermal treatment, electro-osmotic consolidation, and chemical stabilization. Among these, chemical stabilization, distinguished by its convenience, versatility in material selection, and cost-effectiveness [6,7], has emerged as the predominant technology for the recycling of dredged silt. This technique entails the addition of stabilizers that initiate a cascade of physicochemical reactions within the silt, thereby reducing its water content and significantly boosting its strength to fulfill the specifications required for use as a roadbed filler [8]. 
In the stabilization of dredged silt, ordinary Portland cement is frequently employed as a solidifying agent for soft soils [8,9].However, cement production entails considerable carbon emissions and substantial energy consumption, with each ton of cement necessitating approximately 5000 MJ of energy and releasing 0.95 t of CO 2 , contributing from 5% to 8% of global greenhouse gas emissions [10,11].Despite its mechanical robustness, cementstabilized soil frequently exhibits poor durability, water instability, considerable shrinkage, and a propensity for cracking [12,13].To bolster the high-quality economic advancement of coastal cities, an urgent exploration of sustainable and efficacious green alternatives to cement and lime for civil engineering applications is necessary [14,15].Numerous scholars have investigated the formulation ratios of composite stabilizers, endeavoring to enhance the stabilization effects on silty soil by incorporating additional components into the cement mix.Blast furnace slag, a silicate byproduct of industrial iron smelting, possesses mineral components akin to those of cement clinker and displays potential hydraulic activity, activatable through alkali activation [16].Investigations by Liu et al. [17] employing scanning electron microscopy to analyze the micro-morphology of granulated blast furnace slag in varied hydration environments revealed enhanced reactivity during room temperature alkali activation, culminating in more complete hydration of the slag particles.Yi et al. [18] discovered that the strength of activated blast furnace slag-stabilized soil could achieve 2.4 to 3.2 times that of cement-stabilized soil after 90 d.Liang and colleagues [19] employed blast furnace slag powder and cement as a composite stabilizer for zinc-contaminated silty soil, determining that a mixture of 15% cement and 10% slag provided optimal zinc fixation, alongside the greatest strength and stability of the stabilized soil.He Jun et al. [20] used alkali slag and blast furnace slag for silt stabilization, attaining an unconfined compressive strength of 1228.3 kPa after seven days with 30% alkali slag and 8% slag.In evaluating and selecting additives, orthogonal experimental methods are frequently employed, which economize on time and mitigate the experimental workload to some extent but do not outline clear functional relationships between additive dosages and response values across the entire region, thereby precluding the determination of optimal ratios for achieving maximum response values.In contrast, response surface methodology amalgamates mathematical and statistical insights, facilitating the design of experiments, the establishment of fitting models, and the assessment of interactions between variables [21].Moreover, the precision of the Box-Behnken design in response surface methodology has gained widespread recognition in domains such as concrete production, cost analysis, and pharmaceutical testing [22]. 
By employing response surface methodology, this study synergistically integrated mathematical and statistical approaches to investigate the stabilization of dredged silt using alkaline-activated slag. GGBS-CaO-Na2O·nSiO2 was developed as a composite solidifier, supplanting traditional cement, grounded on outcomes from single-addition experiments. Optimal ratios of GGBS, CaO, and Na2O·nSiO2 were ascertained, and their effects on the unconfined compressive strength of the stabilized silt were evaluated. The mechanical properties and microstructural strength mechanisms were further explained through X-ray diffraction (XRD) and scanning electron microscopy (SEM). The employment of industrial waste in treating dredged silty soil not only conserves resources like cement and lime but also advances the resourceful utilization of industrial waste and dredged silt. This methodology embodies the principle of "treating waste with waste", underpinning sustainable environmental practices. Test Materials The dredged silt samples used in this experiment were sourced from the Xunsi River in Wuhan, China. This dredged silt exhibits a gray-black hue and is in a fluid-plastic state, as illustrated in Figure 1. Prior to initiating the experiment, the fundamental physical properties and principal chemical components of the soil samples were quantified, as detailed in Tables 1 and 2. In Table 1, the liquid limit, defined as the moisture content at which soil transitions from a plastic to a liquid state, and the plastic limit, identified as the moisture content initiating plastic behavior in soil, are quantified. The plasticity index is derived by the subtraction of the plastic limit from the liquid limit. Meanwhile, the liquidity index, which quantifies the soil's consistency relative to its liquid and plastic limits, is calculated by taking the soil's natural moisture content, deducting the plastic limit, and dividing it by the plasticity index. The S95 slag powder employed in the experiment was procured from the Jiyuan Steel Plant (Jiyuan, China). The particle size distribution curves for GGBS and dredged silt are depicted in Figure 2, while the primary chemical components are listed in Table 2. Silicate (Huasheng Chemical Reagent Co., Ltd., Tianjin, China) and calcium oxides (Sinopharm, Beijing, China) served as alkaline activators in the experiment, with all reagents being of analytical grade. Tap water was used for the experimental procedures.
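Restating the two index definitions above in symbols (the symbols w, w_L, and w_P for the natural, liquid-limit, and plastic-limit moisture contents are ours, not the paper's):

```latex
I_P = w_L - w_P, \qquad I_L = \frac{w - w_P}{I_P}
```

where I_P is the plasticity index and I_L the liquidity index.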
Sample Preparation In accordance with the standard for geotechnical testing methods (GB/T 50123-2019) [23], the ratio of the sample's height (h) to its diameter (D) should lie within the range of 2.0 to 2.5, accommodating diameters of 39.1 mm, 61.8 mm, and 101.0 mm. For this particular test, with a diameter of 39.1 mm, the height is accordingly set at 80 mm. The sample preparation process encompasses the following steps: ① extract large impurities such as plastic, leaves, and branches from the dredged silt and air dry the silt to the predetermined moisture content; ② pulverize the air-dried silt and sift it through a 2 mm sieve; ③ oven dry the sieved silt at 105 °C for a duration exceeding 24 h; ④ homogenize the silt with the solidifying agent according to the specified test ratios, stir thoroughly, and then seal and allow to stand for 24 h; ⑤ employ the layered compaction method for sample preparation. Prior to molding, uniformly apply petroleum jelly to the interior of the mold. Compact the silt amalgamated with the solidifying agent into four distinct layers within the mold to form cylindrical samples measuring 39.1 mm in diameter and 80 mm in height, producing three parallel samples for each test group; ⑥ following preparation, encase the samples in cling film to mitigate moisture evaporation, position them in a standard curing chamber (temperature (20 ± 1) °C, humidity (98 ± 1)%), demold after 24 h, verify the samples' integrity, reseal, and continue curing until the designated durations for unconfined compressive strength testing are reached. The experiment procedure is shown in Figure 3.
The unconfined compressive strength test (UCS) serves as a prevalent method for assessing the mechanical properties of various materials, including soil, concrete, and rock. This test primarily determines the maximum axial compressive strength that a material can withstand without lateral support. Typically, samples are cylindrical with smooth, flat ends to ensure a uniform stress distribution during loading. The prepared sample is positioned between the compression plates of the WDW-10E microcomputer-controlled electronic universal testing machine (Chenda Testing Machine Manufacturing Co., Ltd. (Jinan, China)), ensuring perfect alignment of the sample's axis with the load application direction. The load is administered uniformly at a rate of 1 mm/min until the sample fails. The testing machine automatically documents the load and deformation experienced by the sample throughout the process, halting upon achieving 3% to 5% axial deformation subsequent to the peak stress. The peak axial stress, or in its absence, the axial stress at 20% axial strain, is designated as the unconfined compressive strength of the sample. The unconfined compressive strength of the specimen was calculated using Equation (1).
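The body of Equation (1) did not survive extraction; from the variable definitions that follow, it is presumably the standard UCS relation

```latex
q_u = \frac{P}{A}
```

With P in N and A in mm², q_u is obtained directly in MPa (N/mm²).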
In Equation (1), q_u denotes the specimen's unconfined compressive strength in MPa, P represents the maximum failure load in N, and A is the cross-sectional area in mm². Single-Addition Experiment To determine the impact of each stabilization material on the unconfined compressive strength (UCS) of dredged silt, single-addition tests were performed by incorporating three different materials into the dredged silt individually, designated as GGBS-stabilized silt, CaO-stabilized silt, and Na2O·nSiO2-stabilized silt. After curing to the respective ages, tests for unconfined compressive strength were conducted. UCS was used as the criterion to evaluate the stabilization effects of the materials on the dredged silt and to ascertain the optimal dosage ranges for the stabilization materials. The details of the single-addition test procedure can be found in Table 3, where three replicate samples were taken for each test specimen. RSM Experiment The influence of alkali-activated slag on the macroscopic properties of stabilized dredged silt and its viability as a stabilizing agent are clarified in this study, which examines the effects of various factors on the early and prolonged strength of the stabilized soil. The proportions of GGBS (the ratio of GGBS mass to the dry mass of dredged silt), CaO (the ratio of CaO mass to the dry mass of dredged silt), and Na2O·nSiO2 (the ratio of Na2O·nSiO2 mass to the dry mass of dredged silt) are employed as independent variables in the experiment, designated as A, B, and C, respectively. The compressive strength of the stabilized soil at 7 and 28 d served as the response variable, denoted by Y. X-ray Diffraction Test (XRD) The instrument used for the experiment was a Bruker D8 ADVANCE X-ray diffractometer (Bruker, Billerica, MA, USA). The experiment used Cu Kα radiation with a wavelength of 1.5418 Å. The scanning parameters were set with an angle range of 10° to 80° and a speed of 10°/min. Samples were collected from untreated soil and the optimal mix proportions at intervals of 7 and 28 d. The samples were processed by oven-drying the fragments at a low temperature of 40 °C for 48 h. The dried samples were subsequently pulverized into powder using an agate mortar and sifted through a 0.075 mm sieve. Scanning Electron Microscopy Test (SEM) The instrument used for SEM was a ZEISS Sigma300 scanning electron microscope (ZEISS, Oberkochen, Germany). Samples were extracted from the damaged portions of the unconfined compressive strength tests and sectioned into small cubes of approximately 1 cm³. These cubes were positioned in an oven and dried for over 48 h to thoroughly eliminate the free water and bonded water from the samples. Prior to testing, the samples were fractured to yield clean and uniform fracture surfaces. Subsequently, these small soil fragments were gold-coated. The gold-coated samples were positioned in the SEM instrument, and following the establishment of a vacuum, scanning observations were undertaken.
Results and Analysis of Single-Addition Experiments Figure 4a depicts the influence of GGBS on the unconfined compressive strength of dredged silt. The graph depicts a trend where, as the GGBS content increases, the compressive strength of the stabilized silt initially rises and subsequently diminishes. Specifically, the strength of the stabilized silt gradually escalates with the GGBS content below 12%; however, beyond this threshold, the strength begins to wane. This trend is ascribed to the formation of cohesive hydration products, notably calcium silicate hydrate (CSH) and calcium aluminate hydrate (CAH), arising from the hydration of GGBS [24]. With increasing GGBS content, a greater quantity of these hydration products forms, thereby enhancing the compressive strength. At a constant GGBS content, the strength of the stabilized silt progressively increases with curing age. For example, at a 12% GGBS content, the compressive strength of the samples is 131.67 kPa at 7 d, escalates to 148 kPa at 14 d, and peaks at 175 kPa by 28 d, marking a 33% increase from the 7 d strength. However, at any given curing age, the maximum strength achieved with 12% GGBS content peaks at only 175 kPa, falling short of construction requirements [25]. Consequently, it is imperative to incorporate alkali activators into GGBS to enhance hydration [16]. According to experimental outcomes, the optimal GGBS content ranges from 9% to 15%.
Figure 4b portrays the impact of CaO on the unconfined compressive strength of dredged silt. As depicted in Figure 4b, with the incremental addition of CaO, the strength of the stabilized silt initially rises and subsequently declines, peaking at a CaO content of 4%, markedly exceeding the effect of GGBS-stabilized silt. At a CaO content of 4%, the strength of the stabilized silt significantly intensifies; however, beyond this threshold, further increases in CaO content result in a decline in strength. This phenomenon is attributable to the intense hydration reaction of CaO upon addition to soft soil, resulting in substantial production of Ca2+ ions. These ions promote the formation of calcium silicate hydrate (CSH) and calcium aluminate hydrate (CAH) [26]. However, as the hydration of CaO progresses, excessive precipitation of Ca(OH)2 crystals ensues, engendering voids within the soil structure and diminishing the stabilization efficacy of CaO on the silt. Consequently, based on experimental findings, the optimal CaO content ranges between 3% and 5%. Figure 4c portrays the impact of Na2O·nSiO2 on the unconfined compressive strength of dredged silt. The graph depicts an initial increase followed by a decrease in the strength of the samples with escalating Na2O·nSiO2 content, peaking at 6%. Once the content exceeds 6%, the strength markedly declines, detrimentally impacting the stabilization of the dredged silt. This behavior is attributed to the robust adsorptive properties and the formation of cementitious substances during the hydration reaction of Na2O·nSiO2, which significantly contributes to silt stabilization [27]. The hydration reaction of Na2O·nSiO2 yields a substantial amount of fibrous material that permeates the stabilized soil, filling voids and augmenting the soil's density and structural integrity. However, as the content increases, the diminished space for hydration reactions, resulting from the filled voids, impedes further Na2O·nSiO2 hydration. Consequently, based on experimental outcomes, the recommended Na2O·nSiO2 content range is 4% to 8%. Based on the results of the single-addition experiments involving GGBS, CaO, and Na2O·nSiO2, it is clear that all three materials positively influence the stabilization of dredged silt. Among these, CaO exhibits the most pronounced solidifying effect, succeeded by Na2O·nSiO2, with GGBS demonstrating the least effectiveness. Taking into account the solidifying effects and economic considerations of the three materials, the optimal content ranges for the dredged silt in this study are identified as 9% to 15% for GGBS, 3% to 5% for CaO, and 4% to 8% for Na2O·nSiO2.
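These dosage ranges feed directly into the three-factor Box-Behnken design used in the next section. The sketch below enumerates a standard 17-run Box-Behnken layout (12 edge midpoints plus 5 center replicates); the mapping of the coded levels -1/0/+1 onto the endpoints and midpoints of the ranges above is our assumption, since the paper's Table 4 is not reproduced here.

```python
# Illustrative three-factor Box-Behnken design (12 edge points + 5 center runs = 17).
# Coded levels -1/0/+1 are assumed to map onto the dosage ranges identified above;
# the paper's actual level settings are given in its Table 4 (not reproduced here).
from itertools import combinations

levels = {
    "GGBS (A, %)":       {-1: 9.0, 0: 12.0, 1: 15.0},
    "CaO (B, %)":        {-1: 3.0, 0: 4.0,  1: 5.0},
    "Na2O·nSiO2 (C, %)": {-1: 4.0, 0: 6.0,  1: 8.0},
}
factors = list(levels)

runs = []
# Edge points: every pair of factors at +/-1 while the remaining factor stays at 0.
for f1, f2 in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            coded = [0, 0, 0]
            coded[f1], coded[f2] = a, b
            runs.append(coded)
# Center replicates (five is the usual choice for three factors).
runs.extend([[0, 0, 0]] * 5)

for i, coded in enumerate(runs, 1):
    actual = {f: levels[f][c] for f, c in zip(factors, coded)}
    print(f"run {i:2d}: {actual}")
print(f"total runs: {len(runs)}")   # 17
```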
Results and Analysis of RSM Experiments Employing the Box-Behnken design within Design-Expert software (Version: 13.0.5.0, 64-bit), a three-factor, three-level experiment was executed to explain the impacts of each experimental factor and their synergistic interactions on the strength of the stabilized soil. The coding and level configuration of the independent variables are shown in Table 4. The response surface methodology incorporates 17 experimental groups. Unconfined compressive strength tests on the stabilized dredged silt were performed at two distinct curing periods: 7 and 28 d. The response values, denoted as Y7d and Y28d, are expressed in units of kilopascals (kPa). The experimental groups and their corresponding results are detailed in Table 5. Using Design-Expert software, the experimental results presented in Table 5 were subjected to a second-order polynomial regression analysis. The second-order regression equation and the results of the variance analysis for each term are detailed in Table 6. This study employed the F-distribution to evaluate the significance of the regression outcomes. During the analysis, it is imperative to first establish the significance level α. In Table 6, should the p-value fall below α, the corresponding experimental result is deemed significantly different. Conversely, if not significantly different, it can be excluded from the optimization analysis. In this study, the significance level α was established at 0.05. As shown in Table 6, the model F-values for Y7d and Y28d are 48.31 and 61.02, respectively, with p-values both under 0.0001, signifying that the regression models possess high significance. For Y7d, the significance ranking of the single factors is B > C > A, denoting CaO content > Na2O·nSiO2 content > GGBS content. The significance ranking of the factor interactions is AB > AC > BC, suggesting that the 7 d unconfined compressive strength of the stabilized soil predominantly relates to the CaO and GGBS contents. For Y28d, the significance ranking of the single factors is B > A > C, signifying CaO content > GGBS content > Na2O·nSiO2 content. The significance ranking of the factor interactions is AC > BC > AB, indicating that the 28 d unconfined compressive strength of the stabilized soil is chiefly associated with the GGBS and Na2O·nSiO2 contents. Using the Design-Expert software, nonsignificant terms were systematically excluded to formulate the second-order polynomial regression equations between the GGBS content (A), CaO content (B), Na2O·nSiO2 content (C), and the unconfined compressive strength of stabilized soil at 7 and 28 d, as outlined in Equations (2) and (3). Table 7 displays the outcomes of the model reliability analysis. The proximity of the model correlation coefficient (R²) and the adjusted determination coefficient (Adjusted R²) confirms the adequacy of the regression equation's fit.
Furthermore, a coefficient of variation (C.V.) below 10, a signal-to-noise ratio (Adequate Precision) exceeding 4, and a disparity of less than 0.2 between the Adjusted R² and the Predicted R² emphasize the high accuracy and reliability of the experiments. As shown in Table 7, the R² values for the models established for 7 and 28 d are 0.9842 and 0.9874, respectively, nearing 1, thereby indicating high model reliability. The Adjusted R² and Predicted R² for the two models are 0.9638 and 0.8490 and 0.9712 and 0.8657, respectively. The coefficients of variation stand at 3.12% and 2.14%, while the signal-to-noise ratios are recorded at 20.198% and 19.398%, respectively. This confirmation underscores that both models exhibit high accuracy and robust reliability, attesting to the effectiveness of the models established in this study. Employing the regression equation, the configurations of response surface plots and contour maps are scrutinized to analyze the impact of GGBS, CaO, and Na2O·nSiO2 on unconfined compressive strength. These plots effectively explain the interactions among the variables. By evaluating the steepness of the response surface plots, the magnitude of their impact on response values can be assessed; a steeper gradient signifies more intense interactions among the variables. The interactions of GGBS content (A), CaO content (B), and Na2O·nSiO2 content (C) with respect to the 7 and 28 d unconfined compressive strength values are delineated through the response surfaces and contour lines depicted in Figures 5-8.
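For readers without Design-Expert, the sketch below fits a full second-order (quadratic) response surface by ordinary least squares and reports R² and Adjusted R². The data are synthetic placeholders peaked near the optimum reported later in the text, not the paper's actual Table 5 values.

```python
# Minimal second-order (quadratic) response-surface fit by ordinary least squares.
# The data are synthetic placeholders peaked near the reported optimum
# (11.5% GGBS, 4.1% CaO, 5.9% Na2O·nSiO2); the paper's real data are in Table 5.
import numpy as np

rng = np.random.default_rng(0)
n = 17                                   # same number of runs as the Box-Behnken design
A = rng.uniform(9, 15, n)                # GGBS content, %
B = rng.uniform(3, 5, n)                 # CaO content, %
C = rng.uniform(4, 8, n)                 # Na2O·nSiO2 content, %
y = (700 - 3*(A - 11.5)**2 - 40*(B - 4.1)**2
         - 8*(C - 5.9)**2 + rng.normal(0, 10, n))   # hypothetical UCS, kPa

# Full quadratic model: intercept, linear, interaction, and squared terms.
X = np.column_stack([np.ones(n), A, B, C, A*B, A*C, B*C, A**2, B**2, C**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

y_hat = X @ coef
ss_res = float(np.sum((y - y_hat) ** 2))
ss_tot = float(np.sum((y - y.mean()) ** 2))
p = X.shape[1]                           # number of model terms, intercept included
r2 = 1.0 - ss_res / ss_tot
adj_r2 = 1.0 - (1.0 - r2) * (n - 1) / (n - p)
print(f"R^2 = {r2:.4f}, adjusted R^2 = {adj_r2:.4f}")
```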
Figure 5a-c systematically depicts the trends on interaction surfaces AB, AC, and BC, respectively. On the AB surface, the gradient of the 7 d UCS initially ascends and subsequently descends with increasing GGBS content. At low GGBS levels, this pattern is mirrored with increasing CaO content; however, at elevated GGBS levels, the gradient stabilizes following an initial ascent with increasing CaO content. The optimal 7 d UCS is achieved with GGBS contents between 11-14% and CaO contents between 3.5-4.5%, wherein CaO exerts a more pronounced influence on UCS than GGBS. On the AC surface, the gradient of the 7 d UCS similarly elevates and then diminishes with escalating GGBS and Na2O·nSiO2 contents, peaking within the ranges of 11-14% GGBS and 5-7% Na2O·nSiO2, signifying a more substantial influence of Na2O·nSiO2. On the BC surface, analogous trends manifest with Na2O·nSiO2 and CaO, where the peak 7 d UCS is noted within CaO contents of 3.5-4.5% and Na2O·nSiO2 contents of 5-7%, with CaO exerting a more significant effect. Contour plots in Figure 6a-c depict the interaction between GGBS and CaO as notably distinct, forming an elliptical shape, while the interactions involving Na2O·nSiO2 appear circular, suggesting less significance and minimal impact on UCS. Figures 7 and 8 demonstrate that the 28 d cured soil's compressive strength trend closely mirrors that of the 7 d cured soil. The highest unconfined compressive strength is observed when GGBS, CaO, and Na2O·nSiO2 contents are within the ranges of 11% to 14%, 4% to 4.5%, and 5% to 7%, respectively. The elliptical contour lines for GGBS and Na2O·nSiO2 on the contour maps indicate a significant interaction between these components, corroborating the findings from the significance analysis in Table 6. Design-Expert software was used to optimize the mix proportions of alkali-activated, slag-stabilized dredged silt, targeting the maximum unconfined compressive strength. The optimal mix proportions were established at 11.5% GGBS, 4.1% CaO, and 5.9% Na2O·nSiO2. Scatter plots comparing the actual versus predicted unconfined compressive strengths for 7 and 28 d, as illustrated in Figure 9a,b, reveal data points closely aligned along a 45-degree diagonal, demonstrating high fidelity in the model's predictions. To validate the accuracy of the optimal mix ratio for the cured soil, samples were prepared and subjected to unconfined compressive strength tests under standard curing conditions until the designated ages were reached. The actual and predicted strength values of the cured soil, as detailed in Table 8 and depicted in Figure 9c, indicate that the actual results are derived from averages across five distinct sets of strength tests. The absolute relative error (D) between the predicted and actual strengths, computed using Equation (3), confirms that, for both 7 d (683.4 kPa predicted vs. 703.4 kPa actual) and 28 d (1032.4 kPa predicted vs. 1066.3 kPa actual), the discrepancies are under 5%. This validation underscores the high accuracy of the predictive model developed in this study, offering a reliable reference for subsequent applications.
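The equation cited for D is not reproduced in the extracted text; the absolute relative error is presumably the standard form (taking the measured strength as the denominator is our assumption):

```latex
D = \frac{\left| q_{u,\mathrm{pred}} - q_{u,\mathrm{actual}} \right|}{q_{u,\mathrm{actual}}} \times 100\%
```

With the reported values this gives roughly |683.4 − 703.4|/703.4 ≈ 2.8% at 7 d and |1032.4 − 1066.3|/1066.3 ≈ 3.2% at 28 d, consistent with the stated sub-5% discrepancies.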
Analysis of Microstructural Characteristics and Mechanisms X-ray diffraction (XRD) experiments were performed to analyze the alterations in hydration products under diverse curing conditions. The XRD spectrum illustrated in Figure 10 identifies quartz, albite, and calcite as the predominant components of the dredged silt. The XRD spectrum of the GGBS-CaO-Na2O·nSiO2-stabilized dredged silt, displayed in Figure 11, indicates an enhanced composition of internal hydration products, extending beyond the foundational constituents of the original dredged silt, including quartz, albite, and calcite. Significantly, new hydration products such as calcium aluminate hydrate (C-A-H), portlandite (Ca(OH)2), calcium silicate hydrate (C-S-H), and ettringite (AFt) have been identified. When comparing the 28 d spectrum to the 7 d spectrum, a noticeable decrease in portlandite peaks and an increase in calcite peaks are observed, likely attributable to the consumption of portlandite by pozzolanic reactions and the enhancement of calcite through carbonation as curing progresses. This synergistic interaction within the GGBS-CaO-Na2O·nSiO2 binder system not only facilitates the formation of new hydration products but also substantially enhances the strength of the stabilized dredged silt. This phenomenon corresponds with the significant factor interactions observed in response surface testing, exemplifying the complex interplay among the components within the binder system. Scanning electron microscopy (SEM) was used to investigate the microstructures of both dredged silt and stabilized dredged silt. The SEM image depicted in Figure 12 shows that the particles in the dredged silt are large and poorly interconnected, resulting in a structurally weak composition characterized by abundant pores and cracks. This configuration manifests macroscopically as subpar mechanical performance.
Figure 13a,b showcases the SEM images of dredged silt stabilized with an optimal mix of GGBS, CaO, and Na2O·nSiO2 at 7 and 28 d, respectively. The images explain that, as the curing time progresses, the gaps between soil particles diminish and the structural density escalates, thereby bolstering the overall integrity and compressive strength of the stabilized dredged silt samples. A substantial volume of white flocculent gel-like hydration products is observed adhering between the soil particles. These hydration products proliferate as the curing period extends, culminating in a more compact soil structure. This effect arises from the SiO2 and Al2O3 in both the dredged silt and GGBS gradually dissolving under the influence of the alkaline activators CaO and Na2O·nSiO2, subsequently reaggregating to yield copious flocculent cementitious materials, namely, C-S-H and C-A-H. These materials adhere to the glassy surfaces, enveloping the soil particles and effectively filling the interparticle spaces. Additionally, acicular structures (AFt), observed within the pores, function as a framework supporting the soil particles, collaborating with the cementitious hydrates to effectively fill the voids, thereby creating a dense, net-like internal structure that enhances the ongoing improvement in the macroscopic mechanical strength of the stabilized soil. However, the presence of AFt observed in the images is limited, possibly due to the extensive formation of C-S-H and C-A-H, which may envelop or obscure the AFt. Relative to the SEM image at 7 d, the stabilized dredged silt exhibits substantial changes by 28 d, reflecting the progression of hydration reactions. Na2O·nSiO2 compounds fully dissolve, enabling the resultant cementitious materials to overlap and progressively form extensive clumped and networked structures. As the boundaries between soil particles diminish, a compact aggregate materializes. This transformation consolidates the microstructure, significantly augmenting the macroscopic mechanical strength of the stabilized dredged silt.
Further analysis explored the impact of various factors on the microstructure of hydration products using SEM to examine stabilized dredged silt with differing slag contents in the 7 d response surface tests, as depicted in Figure 14. The SEM images from experimental groups S3 and S4, depicted in Figures 14a and 14b, respectively, display differing mix ratios: S3 with 9% GGBS, 5% CaO, and 6% Na2O·nSiO2 and S4 with 15% GGBS, 5% CaO, and 6% Na2O·nSiO2. Figure 15a,b presents SEM images for experimental groups S2 and S4, with S2's mixture comprising 15% GGBS, 3% CaO, and 6% Na2O·nSiO2. The images clearly demonstrate that an increase in the quantity of alkaline activators significantly reduces the porosity among the particles of stabilized dredged silt and intensifies the density of the white network gel interspersed among the particles. This effect stems from the elevated OH− concentration resulting from increased alkaline activator usage, which accelerates the dissolution of silico-aluminate compounds in both GGBS and dredged silt. This enhancement in hydration reactions leads to the prolific creation of silicate aluminates. Using the adhesive properties of C-S-H and C-A-H, the overall structural integrity of the stabilized silt is substantially enhanced, rendering the strength of the S4 stabilized silt significantly superior to that of S1.
Based on the extensive XRD and SEM experiments, along with macroscopic testing, a micro-mechanism model for the GGBS-CaO-Na2O·nSiO2 stabilization of dredged silt has been developed, as illustrated in Figure 16. The stabilization involves several key processes: Process ①: When the GGBS-CaO-Na2O·nSiO2 solidifier is incorporated into the dredged silt and thoroughly mixed, the CaO within the solidifier undergoes rapid hydration, releasing large amounts of Ca2+ and OH−. Concurrently, Na2O·nSiO2 reacts with water to produce abundant OH− and Na+. As the concentration of these ions rises, Na+ and K+ from the silt particles dissolve and engage in adsorption exchange with Ca2+, leading to a reduction in the double-layer thickness of the soil particles and decreasing their separation. This process results in flocculation and the formation of larger aggregates, enhancing the soil particles' cohesion. Meanwhile, the less soluble flake-like product, Ca(OH)2, gradually precipitates, further enhancing the bonds between soil particles due to its hydration activity combined with the filling effects of GGBS particles. The marked increase in OH− levels in the solution thus establishes an advantageous alkaline environment conducive to facilitating processes ② and ③. Process ②: Under alkaline conditions, SiO2 in the dredged silt initially reacts with OH− ions to form H2SiO4²−. This compound then further reacts with OH− and Ca2+ to generate the flocculent gel C-S-H. The C-S-H gel envelops and binds the soil particles, leading to the formation of larger particle aggregates.
Process ③: Under alkaline conditions, the activity of GGBS is enhanced, causing Al2O3 contained within it to undergo a hydration reaction with OH− ions in the solution, forming AlO2−. This ion then combines with Ca2+ to create the flocculent gel C-A-H, which serves as a binder for the soil particles. Additionally, C-A-H reacts with SO4²− present in the dredged silt, leading to the formation of needle-like structures known as AFt. Overall, the stabilization of dredged silt via GGBS-CaO-Na2O·nSiO2 is propelled by the binding properties of C-S-H and C-A-H, coupled with the filling actions of Ca(OH)2 and AFt. These components synergistically transform the loose soil into a dense aggregate, significantly enhancing its stability and mechanical strength. Figure 2. Particle grading curve of GGBS and dredged silt. Figure 5. Surface of mutual influence for three factors on 7 d UCS strength: (a) AB, (b) AC, and (c) BC. Figure 11. XRD of stabilized dredged silt at 7 and 28 d under the optimal ratio. Figure 15. SEM images of stabilized dredged silt with different alkali-activator contents (5000 times): (a) S2 and (b) S4. Figure 16. Microscopic mechanism model of stabilized dredged silt. Table 1. Basic physical properties and indexes of dredged silt. % is a mass fraction. Table 2.
Main chemical components of the experimental materials. Table 4. Response surface design scheme. Table 5. Regression model coefficient and significance. Table 7. Model reliability test analysis. Table 8. Model reliability test analysis.
10,792.6
2024-09-01T00:00:00.000
[ "Environmental Science", "Materials Science", "Engineering" ]
Generalized C ψβ − rational contraction and fixed point theorem with application to second order differential equation In this article, a generalized C ψβ rational contraction is defined and the existence and uniqueness of fixed points for a self-map in partially ordered metric spaces are discussed. As an application, we apply our result to find the existence and uniqueness of solutions of second order differential equations with boundary conditions. Introduction For the last 15 years, several authors have studied and derived various fixed point results for many contractions in partially ordered sets. Ran and Reurings [1] derived a fixed point result on partially ordered sets in which the contractive condition is assumed to hold on comparable elements. After that, the authors in [9,10] deduced some results to obtain fixed points for monotone, non-decreasing operators with a partial order relation on a set Y without using the continuity of the maps. They also discussed a few applications of their main findings and gave an existence and uniqueness theorem for an ordinary differential equation of first order and first degree with restricted boundary conditions. A number of results have since been investigated to establish fixed points in partially ordered metric spaces (for more detail see [2,4,7,8,11,12,13,15,18,19,21,22]). In 1975, Jaggi [23] and Das and Gupta [24] derived some fixed point results for rational type contractions. There exist several results in the literature for self-maps and pairs of maps satisfying rational expressions in different spaces [20,25]. In 2007, Suzuki [16] introduced the weaker C-contractive condition and proved some fixed point theorems. The existence as well as uniqueness of fixed points of such types of operators have also been extensively studied in [3,17]. (A lemma used later states that if {u_n} is not a Cauchy sequence in Y, then there exist an ε > 0 and sequences of positive integers (m_k) and (n_k) such that ...) In this paper, we first define a generalized C ψβ − rational contraction and then prove the existence and uniqueness of fixed points for a self monotone map. We also consider a partially ordered set Y with comparable elements, and a complete metric d on the set Y, to deduce our main result. As an application, we give an existence as well as uniqueness theorem for an ordinary differential equation of second order and first degree with restricted boundary conditions. Fixed point result with partial order We define the generalized C ψβ − rational contraction as follows: Definition 2.1. A mapping f on a metric space (Y, d) is said to satisfy a generalized C ψβ − rational contraction if ..., where ... The main finding of this article is the following result. Theorem 2.1. Let (Y, d, ⪯) be a partially ordered complete metric space and let f : Y → Y be a non-decreasing, monotone map satisfying the generalized C ψβ − rational contraction condition. Also assume that: (4) For every u, v ∈ Y, there exists z ∈ Y such that u ⪯ z and v ⪯ z. If there exists u_0 ∈ Y such that u_0 ⪯ f u_0, then f has a unique fixed point in Y. Proof. Let u_0 ∈ Y satisfy u_0 ⪯ f u_0. We define a sequence {u_n} as follows: If u_n = u_{n+1} for some n ∈ N, then clearly M(u_n, u_{n+1}) = 0, and so u_n is the fixed point of f. So, assume that u_n ≠ u_{n+1} for all n ∈ N. Let a_n = d(u_n, u_{n+1}). Then, clearly a_n > 0. Since u_0 ⪯ f u_0 = u_1 and f is non-decreasing, then where .
and hence From (7), we have (8) gives a contradiction to condition (3) and hence Since ψ and β are continuous functions, therefore Similarly we get Thus, we get a non-increasing sequence {d(u_n, u_{n+1})} and r ≥ 0 such that However, by taking lim n→∞ on both sides of (8), we get ψ(r) ≤ β(r), which is a contradiction to (2). Thus we have r = 0, and hence Assume on the contrary that the sequence {u_n} is not Cauchy. Then for every ε > 0, we can find subsequences of positive integers m_k and n_k, where Also for this ε > 0, the convergence of the sequence {d(u_n, u_{n+1})} implies there exists where, On using Lemma 1.2 and letting k → ∞ in (12) and (13), we obtain ψ(ε) ≤ β(ε), which is a contradiction to (3), and hence by Lemma 1.1 we get ε = 0. This contradicts the assumption that ε > 0. Therefore our assumption is wrong. Hence {u_n} is Cauchy. Since Y is complete, {u_n} converges, with all its subsequences, to some limiting value, say z ∈ Y. Now assume for every n ∈ N and Then we have this is a contradiction. Hence we must have d(u_n, z) ≥ (1/2) d(u_n, u_{n+1}) or d(u_{n+1}, z) ≥ (1/2) d(u_{n+1}, u_{n+2}), for all n ∈ N. Thus for a subsequence {n_k} of N, we obtain where Both, on letting k → ∞, and using (15) in (14), we get To establish uniqueness, we suppose on the contrary that there exist u, v ∈ Y with u = f u and v = f v but u ≠ v. Now we discuss the following two cases for these elements. Case 1. Without loss of generality, suppose that u and v are comparable. Then Thus from (2) and Lemma 1.1, we get d(u, v) = 0, i.e., u = v. Case 2. Assume that u and v are not comparable; then from (4), there exists some z ∈ Y comparable to u and v such that where Hence, from (17), Consequently, we have ψ(d(u, w)) ≤ β(d(u, w)). On using Lemma 1.1, we have d(u, w) = 0. Similarly, we can obtain d(v, w) = 0. This implies that u = v. This completes the proof of Theorem 2.1. Theorem 2.2. Let (Y, d, ⪯) be a partially ordered complete metric space and let f : Y → Y be a non-decreasing, monotone map such that for all u, v ∈ Y, and where ψ ∈ Ψ, a_i ≥ 0, a_i < 1, for all i = 1, 2, 3 and Also assume that, for every u, v ∈ Y, there exists z ∈ Y such that u ⪯ z and v ⪯ z. If there exists u_0 ∈ Y such that u_0 ⪯ f u_0, then f has a unique fixed point in Y. Proof. Given that f : Y → Y is a monotone, non-decreasing map such that for all u, v ∈ Y, and Since all a_i ≥ 0 and a_i < 1, for all i = 1, 2, 3, then The rest of the proof follows directly from the main result (Theorem 2.1). Also assume that for every u, v ∈ Y, there exists z ∈ Y such that u ⪯ z and v ⪯ z. If there exists u_0 ∈ Y such that u_0 ⪯ f u_0, then f has a unique fixed point in Y. Corollary 2.2. Let (Y, d, ⪯) be a partially ordered complete metric space and let f : Y → Y be a non-decreasing map such that for all u, v ∈ Y, Since G(ω, θ) > 0 for ω ∈ L, this proves that H is also a weakly increasing mapping. Also, for all u, v ∈ E with u ≥ v implies that and so, in terms of the metric This implies It is easy to calculate that Also, G(ω, θ)f(θ, 0)dθ ≥ 0. Thus one by one all assumptions of Theorem 2.1 are satisfied and therefore the function H has a unique non-negative solution.
Conclusion In this manuscript, we have first defined a generalized C ψβ − rational contraction and then derived our main result, Theorem 2.1. Some consequence results (Corollaries 2.1 and 2.2) and Remarks 2.1 and 2.2 show that our result is a proper generalization and extension of some previously existing results. As an application of our main result, we have presented an example to find the existence and uniqueness of solutions of a second-order boundary value problem.
1,895.6
2018-01-01T00:00:00.000
[ "Mathematics" ]
MA-CharNet: Multi-angle fusion character recognition network Irregular text recognition in natural scenes is a challenging task due to the large span of character angles and the morphological diversity of a word. Recent work first rectifies the curved word region and then employs a sequence algorithm to complete the recognition task. However, this strategy largely depends on the rectification quality of the text region and cannot handle large differences between character tilt angles. In this work, a novel anchor-free network structure for rotated character detection is proposed, which includes multiple sub-angle domain branch networks; the corresponding branch network can be selected adaptively according to the character tilt angle. Meanwhile, a curvature-adaptive text linking method is proposed to connect the discrete characters detected on the two-dimensional plane into words according to people's reading habits. We achieved state-of-the-art performance on two irregular text benchmarks (TotalText, CTW1500), outperforming the previous state of the art by 2.4% and 2.7%, respectively. The experimental results demonstrate the effectiveness of the proposed algorithm. Introduction In recent years, numeral recognition [1,2] and character recognition in natural scenes have attracted increasing attention, and their applications have been widely used in areas such as robot navigation [3] and image retrieval [4]. With the vigorous promotion of deep learning [5], scene text recognition has made rapid progress [6][7][8][9][10][11]. However, scene text recognition is still a task with many challenges due to the different text forms in natural scenes (e.g., irregular text layout, diversity of colors, fonts, etc.) and complex background interference. At present, natural scene text recognition can be roughly divided into two categories: encode-decode based methods [6,[12][13][14] and character detection based methods [15,16]. Encode-decode based methods treat words or text sequences as the base unit. Their main idea is to convert text detection in two-dimensional images into one-dimensional text recognition and localization, which strongly depends on the accuracy of word region segmentation [17,18]. Therefore, the encode-decode method has some limitations in recognizing curved text sequences. In addition, sequence-based methods are limited to specific languages. Therefore, a method of character connection is needed to complete the task of word recognition on the basis of character detection. Most of the existing character connection methods are based on a hypothesis: reading proceeds from left to right or from top to bottom. However, text in complex natural scenes is multi-directional, and words with large rotation angles are easily reversed under the above assumptions. Some studies take the connections between characters as features to learn [20], but these are easily disturbed by noise in a text picture, which also adds additional computation. In this paper, a new character combination method (VDLink) is proposed, using the relationship between the connection curvature of characters and the text direction, which can adapt to text with arbitrary arrangement. The contribution of this paper mainly includes three aspects: 1. A character detection network with adaptive angle selection is proposed, which effectively addresses the problem that the shape of the same character is difficult to converge on due to its large rotation angle span, while the shapes of different characters are similar and difficult to distinguish. 2.
A new character combination method (VDLink) is proposed, which can efficiently complete the character combination task after detection. The rest of the paper is organized as follows: Sec. 2 reviews the relevant methods; Sec. 3 describes the MA-CharNet methodology; Sec. 4 discusses and analyzes the experiments; Sec. 5 summarizes the conclusions and future work. Related work The main task of scene text recognition is to recognize detected text sequences or cropped text images. With the advance of deep learning, this research has made great progress, gradually moving from the initial recognition of regular text to more challenging areas such as STR (Scene Text Recognition). The current research on scene text recognition can be broadly categorized as follows: Encode-Decode based methods Most current work uses the Encode-Decode structure for text recognition, which treats the whole text line as a unit and directly maps the input text image to a string sequence. The processing flow of this approach is generally divided into four steps: image pre-processing, feature extraction, sequence modeling and sequence transcription. Image pre-processing is used to improve the quality of the image and thereby increase recognition accuracy; common methods include super-resolution [23], irregular-text rectification [24] and background erasure [25]. Feature extraction networks mostly use common deep learning feature extractors and their variants [26,27], which extract high-level features describing the text. Sequence modeling is mainly used to establish contextual relationships between characters; bidirectional long short-term memory (BiLSTM) networks [28] have been the mainstream modeling method in most studies, but they are prone to vanishing and exploding gradients. In recent years, new sequence modeling methods that address these problems, such as sliding windows [29] and attention [30], have gradually gained acceptance. The last step, sequence transcription, is the main challenge, and the two mainstream approaches are CTC-based methods and attention-based methods. Inspired by the successful application of CTC in speech recognition and other fields, [31] applied CTC to natural scene text recognition for the first time, significantly improving recognition performance. Since then, a large number of methods based on CTC and its variants have been proposed, all showing strong decoding performance [12,32]. Although CTC decodes well, its temporally sequential structure makes it difficult to apply directly to two-dimensional irregular text recognition. Attention-based approaches effectively bridge the gap between regular and irregular text by highlighting features at the character locations, and show clear superiority in the recognition of irregular text [33]. However, the application scenarios of Encode-Decode methods are largely limited to Latin-script languages, and they are difficult to apply to non-Latin languages. Moreover, these methods depend strongly on the quality of the text rectification module and cannot handle cases where the character skew angle spans a wide range.
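As a concrete illustration of the transcription step discussed above, the sketch below shows a minimal greedy (best-path) CTC decoder in Python. It is a generic illustration of how CTC output is collapsed into a string, not code from the MA-CharNet authors; the character set and the blank index are assumptions made for the example.

```python
import numpy as np

# Hypothetical character set; index 0 is reserved for the CTC blank symbol.
CHARSET = "-0123456789abcdefghijklmnopqrstuvwxyz"  # '-' stands for blank

def ctc_greedy_decode(logits: np.ndarray) -> str:
    """Collapse a (T, C) matrix of per-frame class scores into a string.

    Greedy (best-path) decoding: take the argmax at each time step,
    merge consecutive repeats, then drop blanks.
    """
    best_path = logits.argmax(axis=1)      # (T,) best class per frame
    decoded = []
    prev = -1
    for idx in best_path:
        if idx != prev and idx != 0:        # skip repeats and blanks
            decoded.append(CHARSET[idx])
        prev = idx
    return "".join(decoded)

# Toy usage: 5 time steps over a 37-class output (blank + 36 characters).
rng = np.random.default_rng(0)
frame_scores = rng.random((5, len(CHARSET)))
print(ctc_greedy_decode(frame_scores))
```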
Character-based recognition methods Character-based recognition has been studied relatively little due to the difficulty of obtaining character-level labels, but some classical and effective methods have emerged [34,35]. The idea of this kind of method is to train a segmentation map to locate characters and then use a character classifier to classify the localized results. Wang et al. [36] were the first to train a model using character scores and locations as input and use dictionary matching to obtain the final prediction; its performance set the benchmark for research in STR. Driven by deep learning, [37] combined convolutional neural networks and unsupervised learning to alleviate the difficulty of obtaining character labels and also achieved good recognition performance. To further improve a model's ability to recognize characters, some researchers proposed learning the characteristics that distinguish character regions from general objects. Phan et al. [34] used SIFT (scale-invariant feature transform) descriptors as learning features to significantly improve character recognition performance. After that, Yao et al. [35] used the stroke information of characters to extract text features, and Gordo et al. [38] used local mid-level features suitable for building word-image representations. Experiments show that such methods are significantly better than Encode-Decode methods in terms of recognition performance and generalization ability. However, they require accurate character segmentation results; for dense text, adjacent characters easily stick together. Therefore, segmentation-based character recognition methods are strongly dependent on, and limited by, the performance of character segmentation. We propose to use an anchor-free structure to directly regress the location and class of characters in order to cope with dense, touching text. In addition, to cope with the large span of character rotation angles in natural scene text, we learn the angular properties of characters alongside the tasks of character localization and classification. Before this work, Xing et al. [39] took the lead in using a CNN to predict geometric information such as the location, size and angle of characters to achieve character localization and recognition. However, the angle information they learn is used only to refine character localization and does not solve the problem of recognizing characters whose rotation angles span a wide range. Therefore, we divide the rotated characters into multiple domains by angle, and each domain is trained with a separate network. The learned character angle information is used to select the corresponding sub-network for recognition; dividing the whole rotation angle domain into multiple small rotation angle domains addresses the problem that, under large rotations, instances of the same character become highly diverse while different characters become similar. Finally, a unified framework fuses the character features learned in each sub-angle domain, which can effectively detect irregular text and is especially robust for characters with a particularly wide span of rotation angles.
We conduct a comprehensive comparison of the advantages and limitations of these methods (Table 1) in terms of the following properties: the basic unit of processing, whether a post-processing algorithm is required to link characters, whether curved text can be recognized, whether extremely tilted text can be recognized, and whether the method can easily be applied to non-Latin languages. Proposed method In this work, a text recognition method based on character detection is designed for curved text; in particular, it can deal with characters that have a large tilt angle. First, an anchor-free network with high localization accuracy is selected as the backbone of character detection, to which we add a character angle perception module. On this basis, a multi-branch detection module that adaptively selects branches according to the character tilt angle is designed. The module is equivalent to a combination of multiple detection networks, each spanning a smaller sub-angle domain, yet with significantly lower computational overhead. From a macroscopic point of view, it behaves like a single network that fuses the character features of each angle domain, so we refer to the proposed network as MA-CharNet (Multi-angle Fusion Character Recognition Network). Meanwhile, a matching two-dimensional discrete character combination method, VDLink, is designed. The logical relationship between the modules and the guide diagram of this section are shown in Fig 2. Character detection network. The character detection network of this work adopts the structure of CenterNet [40], which learns the center of a general object, its length and width, and an offset that refines these properties. On this basis, we add the task of regressing the character angle, which provides the basis for adaptive selection of sub-networks. The backbone of MA-CharNet is ResNet-101 [27], and the convolutional feature map at 1/4 downsampling is used as the input to the subsequent tasks. The design details of each detection head are as follows: • H_hm predicts the category of characters; the shape of the output feature map is N_s × w × h, where N_s is the number of character categories. In this study, N_s is set to 64, representing 63 characters (52 uppercase and lowercase letters, 10 digits and one other symbol) and one irrelevant background class. • H_wh predicts the length and width attributes of the characters; the output feature map has size 2 × w × h, representing the length and width of characters, respectively. • H_reg applies regression corrections to the length and width of the characters; its output feature maps are also of size 2 × w × h and are used to correct the length and width predicted by H_wh. • H_ang predicts the angle of the character; the output shape is 1 × w × h, and it directly regresses the angular value at each location of the feature map. Since this study only uses the character angle as the control information for selecting the sub-networks, the angle prediction does not need to be very precise, as long as the predicted angle falls correctly within the angle interval of the corresponding sub-angle-domain network. It should be noted that H_hm, H_wh and H_reg all use the structure from CenterNet.
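To make the head layout concrete, the following PyTorch sketch shows one way the four detection heads described above could sit on top of a shared 1/4-resolution feature map. It is an illustrative reconstruction under stated assumptions (256 input channels, a 3×3 convolution followed by a 1×1 convolution per head, as in common CenterNet implementations), not the authors' released code.

```python
import torch
import torch.nn as nn

def head(in_ch: int, out_ch: int, mid_ch: int = 64) -> nn.Sequential:
    """A small prediction head: 3x3 conv + ReLU + 1x1 conv (assumed layout)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, kernel_size=1),
    )

class MultiHeads(nn.Module):
    """Heatmap, size, size-correction and angle heads over a shared feature map."""
    def __init__(self, in_ch: int = 256, num_classes: int = 64):
        super().__init__()
        self.h_hm = head(in_ch, num_classes)  # character category heatmap, Ns x w x h
        self.h_wh = head(in_ch, 2)            # length and width, 2 x w x h
        self.h_reg = head(in_ch, 2)           # correction for length/width, 2 x w x h
        self.h_ang = head(in_ch, 1)           # character angle, 1 x w x h

    def forward(self, feat: torch.Tensor) -> dict:
        return {
            "hm": torch.sigmoid(self.h_hm(feat)),
            "wh": self.h_wh(feat),
            "reg": self.h_reg(feat),
            "ang": self.h_ang(feat),
        }

# Toy usage with a fake 1/4-resolution backbone feature map.
feat = torch.randn(1, 256, 128, 128)
out = MultiHeads()(feat)
print({k: tuple(v.shape) for k, v in out.items()})
```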
As mentioned above, the character recognition network includes multiple tasks, so the loss function of this model is defined as a combination of the losses of the individual heads (category heatmap, size, size correction and angle). Since the angle is a continuous value, we use the smoother Smooth L1 loss for it, L_ang = (1/N) Σ_i SmoothL1(θ_i − ŷ_i), where θ_i and ŷ_i denote the true angle of the character and the angle value predicted by the global network, respectively. MA-CharNet is actually a combination of N + 1 networks. Denote the rotation angle range of the characters by φ. The global network is first trained on this global angle domain, to learn the common features of the characters as well as their angle features. The angle domain φ is then divided into N sub-domains, each corresponding to an angle domain φ_i, and an independent sub-network is trained on each sub-angle domain. These N + 1 networks all have the structure described above, but only the global network contains H_ang. The sub-networks corresponding to the N sub-angle domains share the backbone weights of the global network. Denoting φ as (z, η), the angle domain φ_i of each sub-network is related to the global angle domain φ (Eqs (3) and (4)) by φ_i = [z + (i − 1)|φ|/N − Δω, z + i|φ|/N + Δω], i = 1, …, N, where |φ| is the angular span of φ and Δω is a small overlap margin between adjacent sub-domains. Inspired by FPN [41], which fuses features of multiple scales to solve multi-scale problems, in this study the character features located in different angle domains are merged together to solve the problems caused by a large rotation span. We experimentally demonstrate (Fig 3) that the recognition performance of a network trained on a sub-domain is higher than that of the network trained globally. However, the angles of the characters in a natural scene image are not all distributed in the same sub-angle domain, so a single specific sub-network cannot complete the recognition task on its own. This would require running multiple sub-networks simultaneously and then synthesizing their results, a process that depends heavily on manual work. To avoid tedious manual selection, this study designs an Angle Selector that automatically selects different sub-networks according to the character angle. Its inputs are the multi-head prediction feature maps output by each sub-network, and its control information is the angle prediction map generated by the global network (i.e., H_ang^global); its structure is shown in Fig 4. For each character in the same image to be assigned to the correct sub-network for recognition, the angle selector must operate at the pixel level. Therefore, the Angle Selector first generates an angle-selection mask Mask_i for each angle-domain network from the angle prediction map H_ang^global, where Mask_i(m, n) = 1 if the angle value of H_ang^global predicted at row m, column n falls within φ_i and 0 otherwise, and then superimposes the masked outputs of the angle-domain networks to obtain the fused features H_ma = Σ_i Mask_i ⊙ H_i, where H_i denotes the prediction feature maps of the i-th sub-network. The Angle Selector thus selects the corresponding sub-network pixel by pixel according to the angle prediction map, which ensures the automatic fusion of the sub-networks and the operational efficiency of MA-CharNet. However, the effectiveness of this method relies heavily on the quality of the angle prediction map.
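The pixel-wise selection just described can be expressed compactly with tensor masks. The sketch below is a minimal reconstruction of that fusion step, assuming N = 3 sub-angle domains over (−π/2, π/2) and per-pixel sub-network outputs stacked on a new dimension; the tensor names and domain boundaries are illustrative, not taken from the authors' code.

```python
import torch
import math

def angle_selector(sub_outputs: torch.Tensor, angle_map: torch.Tensor,
                   bounds: list) -> torch.Tensor:
    """Fuse per-pixel sub-network outputs according to the predicted angle.

    sub_outputs: (N, C, H, W) stacked feature maps, one per sub-angle domain.
    angle_map:   (1, H, W) angle predicted by the global network (radians).
    bounds:      list of (low, high) angle intervals, one per sub-domain.
    """
    fused = torch.zeros_like(sub_outputs[0])
    for i, (lo, hi) in enumerate(bounds):
        mask = ((angle_map >= lo) & (angle_map < hi)).float()  # (1, H, W)
        fused = fused + mask * sub_outputs[i]                   # broadcast over C
    return fused

# Toy usage: 3 sub-domains covering (-pi/2, pi/2), overlap margin omitted for brevity.
N, C, H, W = 3, 8, 32, 32
step = math.pi / N
bounds = [(-math.pi / 2 + i * step, -math.pi / 2 + (i + 1) * step) for i in range(N)]
sub_outputs = torch.randn(N, C, H, W)
angle_map = (torch.rand(1, H, W) - 0.5) * math.pi
h_ma = angle_selector(sub_outputs, angle_map, bounds)
print(h_ma.shape)  # torch.Size([8, 32, 32])
```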
Considering that the global network N_global learns the character features of every angle domain, and to avoid wrong sub-network selection caused by inaccurate angle prediction, this work further integrates the features of the fused sub-networks with those of the global network (the framework diagram of MA-CharNet is shown in Fig 5). In this study, averaging or maximizing H_global and H_ma was designed to alleviate the above problem, corresponding to Eqs (7) and (8) (Table 4). Inference. Unlike a single-branch network, MA-CharNet integrates the outputs of multiple sub-networks in an angle-adaptive way, i.e., it integrates the character features learned by the corresponding sub-network in each angle domain. More specifically, inference proceeds as follows: first, given an input image, the backbone network extracts features from it and feeds the 1/4-downsampled feature maps to the multi-heads of the global network and the N sub-networks; then, the feature maps output by the N sub-networks and the angle prediction map generated by the global network are used as the input and the control information of the Angle Selector, respectively, to obtain the result H_ma; finally, H_ma and H_global are fused to obtain H_fusion, and H_fusion is decoded to obtain the character recognition result. In addition, after the character predictions are generated, they are concatenated into words by the VDLink method we designed. Vector- and distance-based linking method (VDLink) Text recognition methods based on character detection usually require a post-processing algorithm to connect characters into text sequences. Existing connection methods are usually based on people's reading habits, that is, characters are linked in order from left to right. This rule can indeed handle the connection of document text or ordinary irregular text (as shown in Fig 7A), where the characters have vertical or near-vertical central axes. However, in natural scenes the left-to-right linking rule no longer applies, because the character angles vary with the shooting angle or with the arrangement of the text itself (e.g., Fig 7B). Specifically, the reading direction of the text should be related to the orientation of the characters. Unlike other text detection networks, MA-CharNet predicts the angle of each individual character, which provides sufficient reference information for determining the reading direction of the text. As shown in Fig 8, the green points are the character centers detected by MA-CharNet, and the red point C is the centroid of the outer border. The characters within the same outer border are denoted P_1, P_2, …, P_i, …, P_n. The average of the predicted angles of this character sequence is denoted ȳ, and the vector from each character P_i to the point C is denoted p_i. The vector v that determines the direction of the text link then satisfies v = (cos ȳ, sin ȳ). The direction value v_i of the character P_i is obtained by combining v and p_i, and the comprehensive score VD_i of the character is then expressed in terms of v_i and d_i, where d_i is the distance from P_i to C. Finally, the composite scores VD_i of the characters are sorted in descending order to obtain the sequence (…, VD_m, VD_n, …), and the character output sequence is then (…, P_m, P_n, …).
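The ordering rule can be illustrated with a small sketch. Since the exact formulas for the direction value v_i and the composite score VD_i are not reproduced above, the sketch uses one plausible choice, projecting each center-to-centroid vector onto the reading direction and weighting by distance; treat the scoring details as assumptions for illustration only.

```python
import math

def vdlink_order(centers, angles):
    """Order detected character centers into a reading sequence.

    centers: list of (x, y) character center points inside one word region.
    angles:  list of predicted character angles (radians), same length.
    Returns the indices of the characters in reading order.
    """
    n = len(centers)
    cx = sum(p[0] for p in centers) / n            # centroid C of the region
    cy = sum(p[1] for p in centers) / n
    mean_angle = sum(angles) / n                   # average predicted angle
    vx, vy = math.cos(mean_angle), math.sin(mean_angle)  # reading-direction vector

    scores = []
    for i, (x, y) in enumerate(centers):
        px, py = x - cx, y - cy                    # vector from C to character P_i
        direction_value = px * vx + py * vy        # assumed: projection onto v
        dist = math.hypot(px, py)                  # distance d_i from P_i to C
        vd = direction_value * (1.0 + dist)        # assumed composite score VD_i
        scores.append((vd, i))

    # Sort composite scores in descending order to get the output sequence.
    return [i for _, i in sorted(scores, reverse=True)]

# Toy usage: five characters laid out roughly along a 30-degree line.
pts = [(10, 5), (20, 11), (30, 17), (40, 23), (50, 29)]
angs = [math.radians(30)] * 5
print(vdlink_order(pts, angs))  # indices ordered from one end of the word to the other
```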
Datasets Datasets for evaluation. MA-CharNet is designed to address the recognition of irregular text, so we evaluate it on three public irregular-text datasets: • Total-Text [19] is an irregular-text dataset containing 1,500 training images and 500 test images, covering vertical, horizontal, multi-oriented and curved text. Labels are given as word-level polygons. • CTW1500 [42] contains 1,500 images, of which 1,000 are used for training and 500 for testing; the test set contains 3,530 curved text instances. This dataset mainly contains horizontal and multi-oriented text. It provides line-level annotations, which we validate at the word level. • CUTE80 (CUTE) [43] contains 80 images, from which 288 crops each containing a single text instance can be extracted; it includes a small amount of curved text, as well as perspective text and blurred, variable backgrounds. Datasets for training. Since datasets with character-level annotations are difficult to obtain, in addition to training on synthetic datasets with character labels, we also selected some higher-quality public datasets containing character-level annotations. • SynthText [44] consists of 800k images containing about 8 million horizontal and multi-oriented synthetic words. Each word is rendered into a scene, blending the words with the scene as much as possible. This dataset provides text-line, word and character level annotations and is generally used for model pre-training. • ICDAR2013 (IC13) [45] contains 561 images, of which 420 are used for training and 141 for testing. The training set contains character-level annotations; we randomly rotate the training images and add the rotated images to the training set. • ReCTS-25k [46] contains 25k images, of which 20k are used for training and 5k for testing. Each character (Chinese and English) in this dataset is annotated; this work selects 10,709 accurately annotated images containing only English from the training set and adds them to our training set. Meanwhile, since the Total-Text dataset mentioned above does not contain character-level annotations, 1,168 images with character-level annotations were derived from its training set based on the segmentation maps and word-level annotations. In addition, we generated 600k additional images with character-level annotations by rendering characters with different colors and styles from 179 selected fonts and pasting them randomly onto background images, namely 8k text-free images selected from COCO-Text [47]. Implementation details When training the global network N_global with synthetic and some real datasets, the ratio of each dataset fed to each batch is SynthText : self-generated : ICDAR2013 : ReCTS : Total-Text = 16 : 6 : 2 : 4 : 4, and this ratio remains the same when training the sub-networks N_i. Only the ICDAR2013, ReCTS and Total-Text data are rotated into the angle domain φ_i corresponding to the sub-network N_i. It should be noted that when training a sub-network, the backbone loads the weights of the global network N_global; the backbone weights are not updated, and only the weights of the N_i detection heads are updated. We train our model on 2 Tesla A100 GPUs with an image batch size of 64. We set the learning rate to 0.000125, halve it every 3 epochs, and use Adam as the optimizer. Results and analysis The recognition results of MA-CharNet in different natural scenes are shown in Fig 9.
The figure shows that our algorithm is robust to backgrounds, fonts, etc.; in particular, characters with large inclination are also located and recognized accurately. In addition, we compare with current mainstream methods on three public datasets and analyze the results quantitatively, as shown in Table 2. MA-CharNet achieves the best performance on Total-Text and CTW1500, exceeding the current best algorithms by 2.4% and 2.7%, respectively. The main advantage of MA-CharNet lies in handling irregular text; on CUTE80, this model is 2% lower than the current optimal algorithm. Nevertheless, the average performance of MA-CharNet over the three datasets is still 1.9% better than existing algorithms. Angle domain division number. To verify the effectiveness of the proposed multi-angle fusion method and its influence on speed, we conducted an ablation study on the number of angle-domain divisions N. In this study, we set φ to (−π/2, π/2), i.e. z = −π/2, η = π/2 in Eq (3). To avoid boundary effects, we set Δω = π/18 in Eq (4), so the range of character rotation angles in our dataset is actually (−5π/9, 5π/9). We set the number of divided domains N to 1, 2, 3 and 4, and the experimental results are shown in Table 3. When the angle range is divided into 3 domains, accuracy and speed reach the best balance. Angle selector and fusion method. MA-CharNet learns the character features of each angle domain, and its Angle Selector module automatically selects the corresponding sub-network. To verify the effectiveness of the Angle Selector, we set up controlled experiments using only the global network, using the sub-networks fused through the Angle Selector, and using a further fusion with the global network after the multi-angle fusion (as shown in Table 4). The experimental results confirm the effectiveness of the Angle Selector, which significantly improves recognition performance with only a slight loss of speed. Meanwhile, the further fusion of the fused sub-networks with the global network brings a small performance improvement with little impact on speed. With or without VDLink. After MA-CharNet recognizes characters, they need to be linked into text sequences. To evaluate the effectiveness of the proposed VDLink component, we compare it with the conventional left-to-right character linking method. Experiments (shown in Table 5) show that the proposed VDLink has significant advantages. Conclusion We propose MA-CharNet, a novel framework for recognizing irregular text. Different sub-networks are used to learn character features in different angle domains separately, and the correct sub-network is then selected autonomously by an adaptive angle selector (Angle Selector), which copes well with the situation where characters in irregular text span a wide range of rotation angles. It achieves excellent performance while eliminating tedious manual selection. The proposed curvature-adaptive character linking algorithm VDLink also provides a significant performance improvement over traditional character linking methods while incurring almost no computational overhead. However, the accuracy of this model depends strongly on accurate regression of the character angle, which directly determines whether a suitable character recognition sub-network is selected for a given target character. How to design a more efficient and accurate angle regressor for characters is the next focus of this work.
Supporting information S1 Appendix. (ZIP)
6,069
2022-08-29T00:00:00.000
[ "Computer Science" ]
GeneCloudOmics: A Data Analytic Cloud Platform for High-Throughput Gene Expression Analysis Gene expression profiling techniques, such as DNA microarray and RNA-Sequencing, have had a significant impact on our understanding of biological systems. They contribute to almost all aspects of biomedical research, including developmental biology, host-parasite relationships, disease progression and drug effects. However, high-throughput data generation presents challenges for many wet-lab experimentalists in analyzing and taking full advantage of such rich and complex data. Here we present GeneCloudOmics, an easy-to-use web server for high-throughput gene expression analysis that extends the functionality of our previous tool ABioTrans with several new tools, including protein dataset analysis, and a web interface. GeneCloudOmics allows both microarray and RNA-Seq data analysis with a comprehensive range of data analytics tools in one package, a combination that no other current standalone software or web-based tool offers. In total, GeneCloudOmics gives the user access to 23 different data analytical and bioinformatics tasks, including read normalization, scatter plots, linear/non-linear correlations, PCA, clustering (hierarchical, k-means, t-SNE, SOM), differential expression analyses, pathway enrichments, evolutionary analyses, pathological analyses, and protein-protein interaction (PPI) identification. Furthermore, GeneCloudOmics allows the direct import of gene expression data from the NCBI Gene Expression Omnibus database. The user can perform all tasks rapidly through an intuitive graphical user interface that removes the hassle of coding, installing tools/packages/libraries and dealing with operating-system compatibility and version issues, complications that make data analysis challenging for biologists. Thus, GeneCloudOmics is a one-stop open-source tool for gene expression data analysis and visualization. It is freely available at http://combio-sifbi.org/GeneCloudOmics. INTRODUCTION Multi-dimensional biological data is accumulating rapidly, and it is expected that the size of the data will exceed astronomical levels by 2025 (Stephens et al., 2015). This has led to the development of computational tools that have become vital in driving scientific discovery in recent times (Markowetz, 2017). A parallel increase in the development of online servers and databases has also been witnessed (Helmy et al., 2016), raising a new set of challenges related to the usability and maintenance of all these tools (Mangul et al., 2019). About half of computational biology tools were found to be difficult to install, 28% of them are unavailable at the provided URLs, and many others lack adequate documentation and manuals (Mangul et al., 2019). The problem is compounded by the limited computational and coding skills of two-thirds of the biologists who use these tools (Schultheiss, 2011). On the other hand, bioinformatics tools that are easy to install and use are highly cited, indicating wider usability by the community and a larger contribution to scientific discovery (Mangul et al., 2019). Thus, more web-based, point-and-click tools that avoid installation difficulties and operating-system compatibility issues are required to tackle multi-dimensional omics datasets. Gene expression profiling is widely used in biomedical research.
It enables the investigation of expressed genes and their relevant pathways and cellular processes at a given time point or condition (Stark et al., 2019). Gene expression profiling is usually performed using RNA-Seq or microarray data, since they detect and quantify RNA, the output indicator of an activated or deactivated gene (Yang et al., 2020). It also provides a deeper understanding of biological system dynamics, growth or developmental processes, drug effects or disease mechanisms through differential gene expression (DGE) analysis (Piras et al., 2014, 2019; Simeoni et al., 2015; Hodgson et al., 2019; Wang et al., 2020). DGE analysis determines genes whose expression levels differ between two or more conditions and are statistically confirmed as differentially expressed (Pertea et al., 2016). The analysis of gene expression or transcriptomics data faces several challenges related to data size, quality, statistical analysis, visualization and interpretation of the results with current bioinformatics approaches (Mantione et al., 2014; Zou et al., 2019). Several bioinformatics and data science tools are available for addressing each of these challenges in the form of stand-alone software, web servers or R packages/Python libraries (Russo and Angelini, 2014; Poplawski et al., 2016; Velmeshev et al., 2016; McDermaid et al., 2019; Zou et al., 2019) (Table 1). However, most of these tools provide only a subset of analytics and require some level of programming skill. Often, users need to move from one tool to another, which can lead to data compatibility issues (Chowdhury et al., 2019). The analysis of gene expression data remains a burden for many biologists because of its intensive requirements for computational, statistical and programming skills, which are lacking in two-thirds of biologists who use online biological resources (Schultheiss, 2011). Moreover, as mentioned above, most of the tools are scattered across individual resources. Thus, there is a need to bring these tools together in an easy-to-use manner with an intuitive GUI that allows users to perform bioinformatics analyses with minimal computational skills and resources. In other words, a one-stop online server for transcriptomic data analysis that performs all essential steps of data import, preprocessing, statistical analyses, DGE identification and functional interpretation of the results, through a friendly and simple user interface, is much needed. Previously, we developed ABioTrans as a stand-alone biostatistical tool for transcriptomics data analysis, covering data pre-processing, statistical analyses, DGE and gene ontology (GO) classification (Zou et al., 2019). It is a downloadable executable that runs in any web browser with an interactive GUI (Table 2). However, as it is a stand-alone application written in R, the user needs to download it, install R or RStudio, and then run an installation script that installs all the required and up-to-date packages and dependencies. This was found to be challenging for some users, as it requires a minimum level of programming familiarity, and several packages became incompatible with the new release of R (v4.0.0) in spring 2020. This is a common problem for most bioinformatics tools (Mangul et al., 2019). ABioTrans also needs approximately 10 min to download all packages before running. Hence, to provide users with a quick, ready-to-use tool that does not require regular system updates, a web server version became necessary.
To overcome the above-mentioned challenges, we rebuilt ABioTrans as a new web server and expanded its functionality to include several new analysis tools such as SOM, t-SNE and random-forest clustering, and added further tools for bioinformatics functional analysis of gene and protein sets, including PPI, protein complex analysis, evolutionary analysis, pathological analysis, physicochemical analysis, and more. We named this revamped tool GeneCloudOmics, a web server for transcriptomics data analysis and gene/protein bioinformatics that is equipped with publication-ready plotting capabilities. GeneCloudOmics provides 12 biostatistical and data analytics tests and 11 bioinformatics tools for gene/protein dataset analysis and annotation (see Methods and Program Description). In addition, it provides direct data import from NCBI's GEO database through GEO accession numbers. The GeneCloudOmics web server thus relieves the burdens of installation and version compatibility and is designed to be a quick one-stop transcriptomics (RNA-Seq and microarray) data analysis tool that provides the user with all the required analysis steps (Figure 1). Overall, the web server targets users without any computational or programming skills and provides them with a wide spectrum of hassle-free analytic tools. METHODS AND PROGRAM DESCRIPTION The Gene Expression Profiling Workflow Gene expression analysis aims to identify genes expressed under a particular condition, treatment, developmental stage, or disease. This requires assessing thousands of gene expression values across multiple conditions in raw format, pre-processing and normalizing the expression levels, statistically analysing the data, identifying DGEs between conditions and performing a functional analysis to elucidate the pathways and cellular functions of the DGEs (McDermaid et al., 2019) (Figure 1). GeneCloudOmics performs this workflow easily and smoothly on a web server, as described below. Overview of GeneCloudOmics Web Server GeneCloudOmics provides users with a complete pipeline for analysing and interpreting their transcriptome data (Figure 2 and Table 2): 1) Data types: users input microarray data (.cel files) or RNA-Seq data (raw or normalized read count table in .csv format). In addition, users can provide an NCBI GEO database accession and GeneCloudOmics automatically imports the data from the database. 2) Pre-processing of raw data using four different normalization techniques (RPKM, FPKM, TPM, RUV), followed by plotting the normalized data versus the raw data in box and/or violin plots. The pre-processed data can be downloaded as a CSV file. 3) Analysis of the pre-processed data using nine different statistical tests (read normalization, scatter plots, linear/non-linear correlations, PCA, hierarchical clustering, k-means clustering, t-SNE clustering and SOM clustering), with the results of each test plotted at publication-ready quality. 4) DGE analysis using three of the most commonly used methods, DESeq2 (Love et al., 2014), NOISeq (Tarazona et al., 2015) and EdgeR (Robinson et al., 2009), with a single interface for choosing the parameters of each method and, in the same way, plotting the results as volcano or dispersion plots. The user can then download the results as a CSV file and the plots as PNG or PDF.
5) Functional interpretation of the DEGs or protein sets using 11 different bioinformatics tools (listed in detail below and in Table 2) that help the user perform essential enrichments and annotations of the gene/protein sets, such as pathway enrichment analysis, gene ontology (GO) enrichment, PPI, and protein function enrichment. All the tests are performed through the same interface, which allows the user to upload or paste a list of genes or proteins, choose the test parameters, run the analysis, and plot the results using the standard visualization provided, or download them. The gene/protein set interpretation features are independent of the DGE analysis and can be used separately with any gene/protein set as a stand-alone feature (see demonstration sections below). 6) Creation of an analysis report that summarizes and gathers all of the user's analyses. In each test or analysis, the user can choose the "Add to Report" option, which adds the plot and the analysis title to the analysis report. When the user clicks the "Analysis report" link in the main menu, the system generates an HTML report containing all the selected plots. The user can then download the report as a PDF. FIGURE 1 | The gene expression profiling workflow. The RNA sequencer produces raw RNA read counts that are aligned to the cell's genome and processed through quality control (QC) steps. The raw read counts resulting from QC are next normalized and analyzed statistically to infer the differential gene expressions (DGEs) or for other analyses such as Shannon entropy, correlations or PCA. Several bioinformatics analyses can also be performed on the list of DEGs for functional inference. Data Analytics Features GeneCloudOmics accepts both gene expression matrices from RNA-Seq and raw microarray CEL file formats, either through data upload forms or via direct import from the GEO database. Examples of valid input files are hyperlinked at each upload section to guide the user. For RNA-Seq, two input files are required: 1) a gene expression matrix, and 2) a metadata table. The gene expression matrix should contain the estimated abundance (either raw counts or normalized) of all genes for all samples in the experiment, and the metadata table should specify the experimental condition (e.g., Control, Treated, etc.) for each sample listed in the expression matrix. Depending on the target analysis, the user can upload supporting files, including gene lengths and a list of negative control genes, to facilitate the preprocessing step. For microarray data, the user can upload CEL files to GeneCloudOmics, from which a matrix of gene expression levels is extracted, and the user can proceed to subsequent analyses. Data obtained directly from the GEO database undergoes an initial exploratory analysis that gives an overview of data quality using several plots. Next, the transcriptomics data is processed and analyzed using the following analytics: 1-Data preprocessing: Preprocessing includes two steps: 1) low-expression gene filtering, and 2) data normalization. Removal of lowly expressed genes is crucial to reduce the effects of measurement noise and, consequently, to improve the detection of differentially expressed genes (Sha et al., 2015). GeneCloudOmics provides the option for the user to indicate the minimum expression value and the minimum number of samples that must exceed this threshold for each gene.
If the input data contain raw read counts, the user can choose one of the normalization options: Fragments Per Kilobase Million (FPKM), Reads Per Kilobase Million (RPKM), Transcripts Per Kilobase Million (TPM) (Li et al., 2015), Remove Unwanted Variation (RUV) (Risso et al., 2014) or Upper Quartile (Bullard et al., 2010). The FPKM, RPKM and TPM options normalize for sequencing depth and gene length, whereas RUV and upper quartile remove unwanted variation between samples (a small sketch of the depth-and-length normalization formulas appears after this feature list). To check for sample variation, Relative Log Expression (RLE) plots (Gandolfo and Speed, 2018) of the input and processed data are displayed for comparison. 2-Transcriptome-wide distributions: Gene expressions are known to follow certain statistical distributions, such as power-law or lognormal (Furusawa and Kaneko, 2003; Bengtsson et al., 2005; Beal, 2017), which has been exploited to determine a suitable expression threshold for removing low signal-to-noise genes (Piras et al., 2014, 2019; Piras and Selvarajoo, 2015; Simeoni et al., 2015). GeneCloudOmics can compare the cumulative distribution function (CDF) of transcriptome-wide expression with six model distributions: Log-normal, Log-logistic, Pareto (or power law), Burr, Weibull, and Gamma. The goodness-of-fit for each distribution is measured by the Akaike information criterion (AIC), from which the user can choose the best-fitted distribution and select a threshold for low-expression gene removal. 3-Scatter plot: The scatter plot compares any two samples (or two replicates) by displaying the respective expression of all genes in 2D space. As gene expression data is densely distributed in the low-expression region, making the scatter dots indistinguishable, GeneCloudOmics also overlays the estimated 2D kernel density on the scatter to better visualize dot density. The scatter plot also shows how variable the gene expressions are between any two samples: the wider the scatter, the less similar the global responses, and vice versa (Piras et al., 2014). 4-Pearson and Spearman correlations: GeneCloudOmics can evaluate the transcriptome-wide relationship between any two samples by linear (Pearson) and monotonic non-linear (Spearman) correlations, displayed 1) as actual values in a table or 2) as a heat map. 5-Principal components analysis and sample clustering: Principal Components Analysis (PCA) is used to simplify high-dimensional gene expression data into two or more dimensions, termed the principal components. In doing so, the whole transcriptome data can be visualized on a 2D or 3D plot. Each principal component is a linear combination of the original variables; hence, we can ascribe meaning to what the components represent. From the principal components, GeneCloudOmics can cluster the samples into groups based on their similarity by k-means clustering. 6-t-distributed stochastic neighbour embedding (t-SNE): t-SNE is another dimensionality-reduction approach that reduces the complexity of transcriptomic data (Cieslak et al., 2020). GeneCloudOmics introduces an intuitive interface for performing t-SNE analysis on the processed, untransformed transcriptomic data. The user can also choose to log-transform the data before submission. Sample clustering by k-means is also applied to the t-SNE-transformed dataset upon user selection.
7-Shannon entropy: GeneCloudOmics adopts the formula of Shannon entropy (Shannon, 1948) from information theory to measure the disorder of a high-dimensional gene expression sample, where a higher value indicates higher disorder. As the original formula for entropy is restricted to discrete variables, GeneCloudOmics discretizes the gene expression data (which is a continuous variable) by histogram-based binning; the number of bins is determined by Doane's rule (Doane, 1976; Piras et al., 2014). 8-Averaged transcriptome-wide noise: Averaged transcriptome-wide noise quantifies the variability between gene expression scatters of all replicates in one experimental condition (Piras et al., 2014). The noise is defined as the variance of expression (σ²) divided by the squared mean expression (μ²), averaged over all genes and all possible pairs of replicates (Piras et al., 2014). 9-Differential Expression (DE) Analysis: DE analysis identifies genes whose expression levels are statistically different between any two selected conditions. GeneCloudOmics implements three popular DE methods: edgeR, DESeq2 and NOISeq. In case no replicates are available for any of the experimental conditions, technical replicates can be simulated by NOISeq. To better visualize differentially expressed genes among the others, a volcano plot (a plot of the log₁₀ p-value against the log₂ fold change for all genes) distinguishing the DE and non-DE genes is displayed. A dispersion estimation plot, which relates to gene variation, is also available for the DESeq2 and EdgeR methods. 10-Heatmap and gene clustering: This function clusters differentially expressed genes (the result of the previous step) into groups of co-varying genes. Expression levels of DE genes first undergo scaling defined by z_j(p_i) = (x_j(p_i) − x̄_j)/σ_xj, where z_j(p_i) is the scaled expression of the j-th gene, x_j(p_i) is the expression of the j-th gene in sample p_i, x̄_j is the mean expression across all samples and σ_xj is the standard deviation (Simeoni et al., 2015). Subsequently, Ward hierarchical clustering is applied to the scaled expression. 11-Random forest-based clustering: GeneCloudOmics uses RAFSIL (Pouyan and Kostka, 2018), a random forest based method for learning similarities between single cells from RNA sequencing experiments. RAFSIL utilizes the random forest algorithm to learn the pairwise dissimilarity among cells/samples, which in turn is used as input to the k-means clustering algorithm. The result is subsequently visualized using t-SNE-reduced dimensions to reveal clearer clusters of cells/samples. 12-Self-Organizing Map (SOM): SOM is a dimensionality reduction technique that produces a two-dimensional, discretized representation of the high-dimensional gene expression matrix (Yin, 2008). GeneCloudOmics provides a SOM function that outputs five different plots: property plot, count plot, codes plot, distance plot and cluster plot. Bioinformatics Tools DGE analysis usually outputs a list of genes that are statistically determined as differentially expressed genes (DEGs). Next, the list of DEGs is analyzed, interpreted, and annotated to learn more about the functions, pathways, and cellular processes in which these genes are involved, for example the diseases they are associated with, or to perform other investigations of the properties of those genes/proteins (such as phylogenetic or physicochemical analyses).
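As referenced in the preprocessing description above, the sketch below illustrates the standard RPKM/FPKM-style and TPM formulas for depth-and-length normalization of raw counts. It is a generic illustration of these formulas in Python with NumPy, not GeneCloudOmics's internal implementation (which is written in R); the example counts and gene lengths are made up.

```python
import numpy as np

def rpkm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """Reads Per Kilobase Million: scale by library size, then by gene length."""
    per_million = counts.sum(axis=0) / 1e6          # library size in millions, per sample
    rpm = counts / per_million                      # depth-normalized reads
    return rpm / (lengths_bp[:, None] / 1e3)        # length-normalized (per kb)

def tpm(counts: np.ndarray, lengths_bp: np.ndarray) -> np.ndarray:
    """Transcripts Per Million: scale by gene length first, then by library size."""
    rpk = counts / (lengths_bp[:, None] / 1e3)      # reads per kilobase
    scaling = rpk.sum(axis=0) / 1e6                 # per-sample scaling factor
    return rpk / scaling

# Toy example: 4 genes x 2 samples, with hypothetical gene lengths in base pairs.
counts = np.array([[100, 80], [300, 310], [50, 40], [1000, 900]], dtype=float)
lengths = np.array([1500, 2000, 500, 3000], dtype=float)
print(rpkm(counts, lengths))
print(tpm(counts, lengths).sum(axis=0))  # each sample sums to 1e6 by construction
```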
Most of the currently available DGE analysis tools do not include bioinformatics features for gene set analysis, or include only a few basic analyses such as GO and pathway enrichment (Table 1). Even our previous tool, ABioTrans, provides only one GO tool for interpreting the DEGs. In GeneCloudOmics, we redesigned the GO feature to be dynamic by reading the GO terms associated with the genes/proteins directly from the UniProt Knowledgebase (Bateman, 2019) and then visualizing each of the three GO domains (cellular component, molecular function and biological process) in separate tabs. Furthermore, we have introduced 11 new bioinformatics tools that can be applied to a given gene/protein dataset. 1) Pathway Enrichment Analysis: For a given gene or protein set, GeneCloudOmics uses g:Profiler (Raudvere et al., 2019) to perform a pathway enrichment analysis and displays the results as a network where the nodes are the pathways and the edges are the overlaps between the pathways (Figure 3A). We use Cytoscape.js for the network visualization (Franz et al., 2015), through which the network properties such as colour and layout can be changed and the final network can be downloaded. 2) Protein-Protein Interaction: GeneCloudOmics provides the user with an interface where they can upload a set of proteins (UniProt accessions) and get all the interactions associated with them. The interactions are visualized as a network where the nodes are the proteins, the edges are the interactions, and the node size corresponds to the number of interactors of each protein. This feature uses Cytoscape.js for the network visualization (Franz et al., 2015). 3) Complex Enrichment: Identifying the subunits of protein complexes is important for understanding protein functions and the formation of these macromolecular machines. GeneCloudOmics provides a complex enrichment feature that identifies proteins in the provided dataset that are part of a known protein complex using the CORUM database (Giurgiu et al., 2019). 4) Protein Function: UniProt provides detailed functional information for thousands of protein sequences. The protein function feature retrieves protein function information from UniProt for a given protein set. 5) Protein Subcellular Localization: Protein localization critically affects a protein's function. The protein subcellular localization feature provides the user with an interface to UniProt to get the subcellular localization information for a given list of proteins. 6) Protein Domains: Protein domains are functional subunits of proteins that contribute to their overall function. GeneCloudOmics provides a protein domain feature that retrieves domain information from UniProt for a given list of proteins. 7) Tissue Expression: The distinct expression profiles of genes and proteins in each tissue are what make different tissues suited to their functions. The tissue expression feature in GeneCloudOmics provides the user with tissue expression information from UniProt for each protein in a given list. 8) Gene Co-expression: Co-expression analysis assesses the expression levels of different genes to identify simultaneously expressed genes, which indicates that they are controlled by the same transcriptional mechanism (Vella et al., 2017). GeneCloudOmics provides the user with an interface where they can submit a co-expression query to GeneMANIA (Franz et al., 2018).
9) Protein Physicochemical Properties: For a given set of proteins (UniProt accessions), this feature provides the user with their complete sequences in a single FASTA file and allows the user to investigate their physicochemical properties, sequence charge, GRAVY index (Kyte and Doolittle, 1982) and hydrophobicity. The full sequences of the proteins are automatically obtained from the UniProt Knowledgebase, while the physicochemical properties are computed and plotted using the UniProtR package (Bateman, 2019; Soudy et al., 2020). 10) Protein Evolutionary Analysis: For a given set of proteins, this feature provides the user with a phylogenetic and evolutionary analysis that includes multiple sequence alignment (MSA) of the protein sequences and clustering based on the amino acid sequences, chromosomal locations, or gene trees. 11) Protein Pathological Analysis: Several diseases are associated with the malfunction of certain genes or proteins. Disease-protein associations are collected in different online resources such as the OMIM database (Amberger et al., 2019), DisProt (Hatos et al., 2020) and DisGeNET (Piñero et al., 2020). GeneCloudOmics provides the user with an interface that retrieves disease-protein associations from online databases for a given list of proteins and visualizes them as a bubble plot. The features that communicate with UniProt use UniProtR, an R package for data retrieval and visualization from UniProt (Soudy et al., 2020). Since all the bioinformatics features only accept gene names (gene symbols) or UniProt accessions, we provide the user on each page with links to two ID converters, UniProt ID mapping (Bateman, 2019) and g:Convert (Raudvere et al., 2019), to convert their identifiers to gene names or UniProt accessions. All the analyses are either performed on the uploaded data or involve connecting to a remote server such as the UniProt Knowledgebase. GeneCloudOmics does not store any uploaded data and does not contain any databases. DEMONSTRATION OF GENECLOUDOMICS UTILITY Transcriptome Analysis Features We demonstrate the transcriptomic analysis features with a recent study on the time-resolved bulk-cell RNA-Seq profile of human T regulatory cell differentiation (Schmidt et al., 2018). In that study, human T regulatory cells were isolated from peripheral blood, and differentiation was induced by adding TGF-β, in comparison to naïve (unstimulated) T regulatory cells as the control group. At the indicated time points (0, 2, 6, 24, 48 h, 6 days), the cells were collected for RNA extraction and sequencing. Here, we illustrate how GeneCloudOmics was used for data pre-processing (normalization and low-count filtering), differential analysis, and data clustering. First, unwanted variation among samples was removed by Upper Quartile normalization. The RLE plot clearly illustrates the normalization effects: high between-sample variation in the raw data versus low variation after normalization (Figure 3A). We also utilized the transcriptome-wide distribution fitting feature to determine the expression threshold for low-count filtering (Figure 3B) (Simeoni et al., 2015). The threshold of five counts was selected because, from this expression level onwards, transcriptome-wide expression was observed to follow most of the model statistical distributions. Next, pairwise scatter plots, pairwise sample correlations, and PCA were used to visualize the global relationship of all data samples, through which an initial assessment of data quality can be made.
For example, the low between-replicate variation, in contrast with the high between-condition variation, can be seen in the width of the scatter plots (Figure 3C, Supplementary Figure S1A). It is further illustrated by the correlation heatmap, in which the replicates of the same condition all show close-to-unity Pearson correlation values along the diagonal axis (Figure 3D), whereas decreasing correlation values with time are observed along the edge of the heatmap. This information is of high importance because low correlation or high variance across replicates will negatively impact the power to detect differentially expressed genes. Clustering of replicates of similar time points was further illustrated in the PCA and t-SNE plots, in which the last time point (T06, when the T cells were fully differentiated) formed a distinct cluster separate from the transitioning time points (Figure 3E, Supplementary Figure S1B). From these analyses, we knew that the data show low variation between replicates and that gene expression changed globally along the differentiation time course. We performed differential expression (DE) analysis with all three supported DE methods, EdgeR, DESeq2 and NOISeq, and present the analysis conducted with DESeq2 (Supplementary Figures S1D,E). The last time point (T06) was compared against the control group (T01) to extract DE genes of the differentiation process (with a 0.05 p-value and 2-fold expression threshold). Two important steps in DESeq2 were visualized: 1) the estimation of gene-wise dispersion and the empirical shrinkage of these estimates to produce a more accurate dispersion estimate for the actual gene count modelling (Supplementary Figure S1E); and 2) the volcano plot that summarizes the DESeq2 p-value and expression fold difference for every gene (Supplementary Figure S1E). The list of all 5,033 differentially expressed genes (3,017 up, 2,016 down) was also provided in a separate table. Finally, the DE genes were channelled into the heatmap gene clustering feature, from which DE genes sharing similar patterns of expression change throughout the differentiation process were identified (Figure 3F). Four common expression patterns were observed: 1) gradual decrease (Group 2), 2) gradual increase (Groups 3 and 4), 3) initial increase followed by decrease (Groups 5 and 6), and 4) sharp decrease, followed by a gradual increase, and finally decrease (Group 1). To further illustrate the dimension-reduced visualization features t-SNE and random forest clustering, we used another single-cell RNA-Seq dataset of distal lung epithelium (Treutlein et al., 2014). That study measured the gene expression of a total of 198 individual mouse lung epithelial cells at four different stages (E14.5, E16.5, E18.5, adult) throughout development. Sample clustering by k-means on the t-SNE1 and t-SNE2 space divided the cells into clusters that align with their respective developmental stages (Supplementary Figure S1C and Additional File S1): Cluster 1 contains mostly E18 cells, Clusters 2 and 3 contain mostly AT2 cells, Cluster 4 contains mostly E16 cells, and Cluster 5 contains mostly E14 cells. Finally, clustering by the random forest approach (Pouyan and Kostka, 2018), applied to the types of cells provided in the input metadata table, subsequently grouped the cells according to their developmental stages (Figure 3G).
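The t-SNE-plus-k-means clustering used in this demonstration can be reproduced in outline with standard libraries. The sketch below is a generic scikit-learn illustration of that combination (not GeneCloudOmics code, which runs in R); the random expression matrix stands in for a real cells-by-genes table.

```python
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Stand-in expression matrix: 200 cells x 1000 genes (log-scale values).
expr = rng.normal(size=(200, 1000))

# Reduce to two dimensions with t-SNE (perplexity chosen arbitrarily here).
embedding = TSNE(n_components=2, perplexity=30, random_state=42).fit_transform(expr)

# Cluster the cells in the t-SNE space with k-means (k=5, as in the lung example).
labels = KMeans(n_clusters=5, n_init=10, random_state=42).fit_predict(embedding)

print(embedding.shape)      # (200, 2)
print(np.bincount(labels))  # number of cells assigned to each cluster
```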
New Bioinformatics Features To demonstrate the utility of the bioinformatics section, we used data from a differential proteomics analysis conducted on the AGS gastric cancer (GC) cell line (Saralamma et al., 2020). The AGS cells were treated with Scutellarein, a flavone known for its anticancer effect. The study identified 41 proteins that are differentially expressed in AGS cells treated with Scutellarein, 24 of them down-regulated and 17 up-regulated. Pathway analysis shows that the down-regulated proteins are associated with the movement of cellular or subcellular components and with platelet activation (Figure 4A), while pathway enrichment for the up-regulated proteins did not yield any significantly enriched pathways. Functional analysis is retrieved, visualized, and represented as Gene Ontology (GO) terms (biological process, molecular function, cellular component). The down-regulated profile shows cell processing components including cell cycle, cell division, and cell migration (Figure 4B), while the up-regulated profile shows regulation of the apoptotic process, including positive and negative regulation associated with the cytokine-mediated signalling pathway (Supplementary Figure S2A). The protein-protein interaction (PPI) networks of both the down-regulated and up-regulated proteins were retrieved from UniProt (Bateman, 2019) and visualized using Cytoscape.js (Franz et al., 2015) and the GeneCloudOmics protein interaction feature (Figure 4C, Supplementary Figure S2B). GeneCloudOmics internally uses Clustal Omega (Sievers and Higgins, 2021) to perform a multiple sequence alignment (MSA), which was used to investigate and visualise the homogeneity among the protein sequences (Figure 4D and Supplementary Figure S2C). Pathological analysis of the protein list is a crucial step in data interpretation, connecting computational output with biological data, so the protein accession list is mapped to OMIM database disease IDs to provide information about diseases associated with the proteins (Figure 4E and Supplementary Figure S2D). Physicochemical analysis of the two sets of proteins shows that the sequence charge of 100% of the down-regulated proteins is negative, while for the up-regulated proteins it is 94% negative and 6% positive (Figure 4F and Supplementary Figure S2E). SUMMARY AND FUTURE DEVELOPMENTS In this paper, we have introduced a new web server, GeneCloudOmics, for gene expression data analysis using a simple, easy-to-use GUI that contains 23 data analytic and bioinformatics tools. To our knowledge, this is the largest number of tools in any current web server (Table 1). We have demonstrated the utility of key functions using recently published RNA-Seq datasets of human T regulatory cell differentiation and mouse distal lung epithelium (Risso et al., 2014; Schmidt et al., 2018) and a proteomics dataset of Scutellarein-treated AGS gastric cancer cells (Saralamma et al., 2020). In the next few years, GeneCloudOmics could be extended to support additional types of high-throughput data beyond RNA-Seq and microarrays. The plan includes supporting the analysis of proteomics, metabolomics, chromatin immunoprecipitation sequencing (ChIP-Seq) and cross-linking immunoprecipitation (CLIP-Seq) data. In addition, we hope to continue improving the transcriptome data analysis by adding new features such as other DGE methods [e.g. Limma (Dias-Audibert et al., 2020) and ScatLay], sample overlap analysis (Venn diagrams), additional data plots (e.g.
density plot) and support for Gene Set Enrichment Analysis (GSEA) (Subramanian et al., 2005). The gene and protein IDs could also be extended to support different IDs, so the user is not restricted to use gene names and UniProt accessions only. DATA AVAILABILITY STATEMENT The GeneCloudOmics web server can be freely accessed at http:// combio-sifbi.org/GeneCloudOmics. The software is written using the open-source R programming language (R: a language and environment for statistical computing) and the Shiny framework (Web Application Framework for R [R package shiny version 1.6. 0], 2021). A Docker container image is also available (docker pull jaktab/GeneCloudOmics-webserver:latest). GeneCloudOmics is optimized for Google Chrome. Details on the R packages used in GeneCloudOmics, their versions and sources are available in Supplementary Table S1 and in the tool documentation on GitHub (https://github.com/cbio-astar-tools/GeneCloudOmics). AUTHOR CONTRIBUTIONS MH: led development of the software and drafted the manuscript RA: software development MS: software development TB: software development and writing a section of the manuscript KS: conceptualize and led the whole project, wrote the manuscript. All authors read and approved the manuscript. Supplementary Table S1 | The list of the R packages used in GeneCloudOmics.
7,359.8
2021-11-25T00:00:00.000
[ "Biology" ]
A Novel Theoretical Investigation of the Abu-Shady–Kaabar Fractional Derivative as a Modeling Tool for Science and Engineering A newly proposed generalized formulation of the fractional derivative, known as Abu-Shady–Kaabar fractional derivative, is investigated for solving fractional differential equations in a simple way. Novel results on this generalized definition is proposed and verified, which complete the theory introduced so far. In particular, the chain rule, some important properties derived from the mean value theorem, and the derivation of the inverse function are established in this context. Finally, we apply the results obtained to the derivation of the implicitly defined and parametrically defined functions. Likewise, we study a version of the fixed point theorem for α-differentiable functions. We include some examples that illustrate these applications. The obtained results of our proposed definition can provide a suitable modeling guide to study many problems in mathematical physics, soliton theory, nonlinear science, and engineering. Introduction Fractional calculus is theoretically considered as a natural extension of classical differential calculus, which has attracted many researchers, both from a more theoretical point of view and for its diverse applications in sciences and engineering. Thus, from a more theoretical perspective, various definitions of fractional derivatives have been initiated. Fractional definitions try to satisfy the usual properties of the classical derivative; however, the only property inherent in these definitions is the property of linearity. On the contrary, some of the drawbacks that these derivatives present can be located in the following: More information on this definition of fractional derivative can be found in [1,2]. The locally formulated fractional derivative is established through certain quotients of increments. In this sense, Khalil et al. [3] introduced a locally defined derivative, called conformable derivative. Some of the inconveniences that the previous fractional derivatives presented have been successfully solved via this definition. Thus, for example, the aforementioned rules for the derivation of products and quotients of two functions or the chain rule are properties that have been satisfied by the conformable derivative. The physical and geometric meaning of the derivative is studied in [4,5]. However, in [6], the author shows the disadvantages of using the conformable definition compared to Caputo's fractional derivative definition, to solve some fractional models. Recently, Abu-Shady and Kaabar [7] introduced a new generalized formulation of the fractional derivative (GFFD) that allows to solve analytically in a simple way some fractional differential equations, whose results agree exactly with those obtained via the Caputo and Riemann-Liouville derivatives. Also, this new definition has advantages compared to the conformable derivative definition. In addition, the study in [7] has been recently extended to study some important special functions in the sense of GFFD which are essential for modeling phenomena [8]. The GFFD definition is very important in studying various phenomena in science and engineering due to the powerful applicability of this definition in investigating many fractional differential equations in a very simple direction of obtaining analytical solutions without the need for approximate numerical methods or complicated algorithms like other classical fractional definitions. 
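For context, the conformable derivative of Khalil et al. that the GFFD modifies is usually written as follows (this display is recalled here for orientation and is not part of the original text):

```latex
% Conformable derivative of Khalil et al. (recalled for context)
T_\alpha f(t) \;=\; \lim_{\varepsilon \to 0}
\frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
\qquad t > 0,\; \alpha \in (0,1].
```

The GFFD keeps this local, limit-based structure while modifying it so that, as noted above, its results agree with those of the Caputo and Riemann-Liouville derivatives for the problems studied in [7].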
This definition is a modified version of the conformable definition that overcomes the issues associated with the conformable one. Regarding the geometric behavior of GFFD, by following the previous research study concerning the fractional cords orthogonal trajectories in the sense of the conformable definition [5], GFFD can be similarly applied to the same example to interpret its geometrical meaning in more detail. One of the limitations of GFFD is that it is a locally defined derivative, and future work is needed to propose a nonlocal formulation of GFFD in order to preserve the nonlocality property of fractional calculus. However, nonlocal definitions come with many associated challenges when solving fractional differential equations. Therefore, future studies will work on overcoming these challenges. The work is constructed as follows: The GFFD and its main properties are presented in Section 2. New results on generalized α-differentiable functions are proposed in Section 3 to complete the study carried out in [7]. Some interesting applications of the results obtained on generalized α-differentiable functions are presented in Section 4. In particular, illustrative examples of the derivation of implicitly defined functions, of parametrically defined functions and of the application of the fixed point theorem for generalized α-differentiable functions are included. Some conclusions are drawn in Section 5. New Results on Generalized α-Differentiable Functions In this section, we establish important results that complete the theory of generalized α-differentiable functions introduced in [7]. Proof. Since Then, Hence, f is continuous at t 0 . Proof. We prove the result following a standard limit approach. First, if the function g is constant in a neighborhood of a > 0, then D GFFD [f ∘ g](t) = 0. If g is not constant in a neighborhood of a > 0, we can find a t 0 > 0 such that g(t 1 ) ≠ g(t 2 ) for any t 1 , t 2 ∈ (a − t 0 , a + t 0 ). Now, since g is continuous at a, for ε sufficiently small, we have Making in the first factor, so we have from here Remark 7. Using the fact that differentiability implies generalized α-differentiability and assuming g(t) > 0, Equation (6) can be written as Theorem 8 (Extended mean value theorem for generalized α-differentiable functions) [5]. Let a > 0, α ∈ (0, 1], and f, g : [a, b] ⟶ R be functions satisfying Then, ∃c ∈ (a, b), ∋ Proof. Consider the function Since F is continuous on [a, b], generalized α-DF on (a, b), and F(a) = F(b) = 0, then by Theorem 3, ∃c ∈ (a, b) such that D GFFD F(c) = 0. Using the linearity of D GFFD and the fact that the generalized α-derivative of a constant is zero, our result follows. Theorem 12 (see [5]). Let a > 0, α ∈ (0, 1], and f : [a, b] ⟶ R be a given function satisfying Proof. Following a similar line of argument as given in Theorem 10, there exists c between t 1 and t 2 with Therefore, f is strictly increasing on [a, b], since t 1 and t 2 are arbitrary numbers in [a, b]. Therefore, f is strictly decreasing on [a, b], since t 1 and t 2 are arbitrary numbers in [a, b]. Definition 13. Let I ⊂ (0, ∞) be an open interval, α ∈ (0, 1], and f : I ⟶ R. We will say that f ∈ C α (I, R) if f is generalized α-DF on I and its generalized α-derivative is continuous on I. Proof. Since Applications Some interesting applications of the results obtained on generalized α-DF functions are presented in this section.
Calculating the 1/3 -derivative in this equation, we obtain Taking t = 8 and gð8Þ = 1 in the equation above, we have Finally, the generalized 1/3-derivative is given by Taking p 0 = 1, to obtain this precision, 54 iterations are required. Also, note that since the generalized 1/2-derivative D GFFD f ðtÞ is negative, the successive approximations oscillate around the fixed point. Conclusions Novel results regarding the Abu-Shady-Kaabar fractional derivative have been investigated in this study which are extensions of the previous research study's results in [7]. In particular, some important properties of the generalized fractional derivative have been accomplished, such as the chain rule, some consequences of the mean value theorem, and the derivation of the inverse function. It is verifiable with the fact that these newly obtained results are considered as a natural extension of the classical differential calculus. The potential of this new definition of fractional derivative, both from a theoretical point of view and due to its applications, is evident through the developments and illustrative examples included in the previous section. This research can definitely open a new path for more related future works in which the results of classical mathematical analysis are extended in the sense of this new definition of fractional derivative. This definition will be applied further in studying various partial differential equations such as Schrödinger equation and Wazwaz-Benjamin-Bona-Mahony equation to study some solutions that are important in soliton theory and many other interesting research topics. Some specific examples of studies that can be further studied in the sense of GFFD are the Klein-Fock-Gordon equation via the Kudryashovexpansion method [9], the systems of fractional-order partial differential equations via the Laplace optimized decomposition technique [10], and the noninteger fractional-order hepatitis B model [11], by comparing the previous results in the senses of conformable and Caputo definitions with new results using GFFD. Numerical experiments with error analysis including comparison between conformable derivative and our definition including CPU time in the graphical representations in the sense of our proposed definition will be conducted in our future studies. In addition, in our future study, all algorithms and/ or pseudo-codes will be provided for the solutions' steps using one of the common software packages such as MAPLE and MATHEMATICA. Data Availability No data were used to support this study. Conflicts of Interest The authors declare that they have no competing interests.
2,123.8
2022-09-26T00:00:00.000
[ "Mathematics" ]
A novel hybrid genetic differential evolution algorithm for constrained optimization problems Most real-life applications have many constraints and they are considered as constrained optimization problems (COPs). In this paper, we present a new hybrid genetic differential evolution algorithm to solve constrained optimization problems. The proposed algorithm is called the hybrid genetic differential evolution algorithm for solving constrained optimization problems (HGDESCOP). The main purpose of the proposed algorithm is to improve the global search ability of the DE algorithm by combining the genetic linear crossover with a DE algorithm to explore more solutions in the search space and to avoid trapping in local minima. In order to verify the general performance of the HGDESCOP algorithm, it has been compared with four evolutionary-based algorithms on 13 benchmark functions. The experimental results show that the HGDESCOP algorithm is a promising algorithm and that it outperforms the other algorithms. Keywords—Constrained optimization problems, Genetic algorithms, Differential evolution algorithm, Linear crossover. I. INTRODUCTION Evolutionary algorithms (EAs) have been widely used to solve many unconstrained optimization problems [1], [3], [10], [15]. EAs are unconstrained search algorithms and lack a technique to handle the constraints in constrained optimization problems (COPs). There are different techniques to handle constraints in EAs; these techniques have been classified by Michalewicz [13] as follows: methods based on penalty functions, methods based on the rejection of infeasible solutions, methods based on repair algorithms, methods based on specialized operators and methods based on behavioral memory. The differential evolution algorithm (DE) is one of the most widely used evolutionary algorithms (EAs), introduced by Storn and Price [17]. Because of the success of DE in solving unconstrained optimization problems, it has attracted many researchers to apply it in their work to solve constrained optimization problems (COPs) [2], [18], [19]. In this paper, we propose a new hybrid algorithm in order to solve constrained optimization problems. The proposed algorithm is called the hybrid genetic differential evolution algorithm for solving constrained optimization problems (HGDESCOP). The HGDESCOP algorithm starts with an initial population consisting of NP individuals; the initial population is evaluated using the objective function. At each generation, new offspring are created by applying the DE mutation. In order to increase the global search behavior of the proposed algorithm and explore a wide area of the search space, a genetic algorithm linear crossover operator is applied. In the last stage of the algorithm, the greedy selection is applied in order to accept or reject the trial solutions. These operations are repeated until the termination criteria are satisfied. The main objective of this paper is to construct an efficient algorithm which seeks optimal or near-optimal solutions of a given objective function for constrained problems by combining the genetic linear crossover with a DE algorithm to explore more solutions in the search space and to avoid trapping in local minima.
The remainder of this paper is organized as follows. The problem definition and an overview of the genetic algorithm and differential evolution are given in Section II. In Section III, we explain the proposed algorithm in detail. The numerical experimental results are presented in Section IV. Finally, the conclusion of the paper is presented in Section V. II. PROBLEM DEFINITION AND OVERVIEW OF GENETIC ALGORITHM AND DIFFERENTIAL EVOLUTION ALGORITHM In the following section and subsections, we give an overview of the constrained optimization problem and we highlight the penalty function technique, which is used to convert constrained optimization problems to unconstrained optimization problems. Finally, we present the standard genetic algorithm and differential evolution algorithm. A. Constrained optimization problems A general form of a constrained optimization problem is defined as follows: where f(x) is the objective function, x is the vector of n variables, g i (x) ≤ 0 are inequality constraints, h j (x) = 0 are equality constraints, and x l , x u are the variable bounds. In this paper, we used the penalty function technique to solve constrained optimization problems [11]. The following subsection gives more details about the penalty function technique. 1) The penalty function technique: The penalty function technique is used to transform constrained optimization problems into unconstrained optimization problems by penalizing the constraints and forming a new objective function as follows: where penalty(x) = 0 if no constraint is violated and penalty(x) = 1 otherwise. There are two kinds of points in the search space of constrained optimization problems (COPs): feasible points, which satisfy all constraints, and infeasible points, which violate at least one of the constraints. At the feasible points, the penalty function value is equal to the value of the objective function, but at the infeasible points the penalty function value is equal to a high value, as shown in Equation 2. In this paper, a non-stationary penalty function has been used, in which the values of the penalty function are dynamically changed during the search process. A general form of the penalty function, as defined in [21], is as follows: where f(x) is the objective function, h(k) is a non-stationary (dynamically modified) penalty function, k is the current iteration number and H(x) is a penalty factor, which is calculated as follows: where q i (x) = max(0, g i (x)), i = 1, . . ., m, the g i are the constraints of the problem, q i is a relative violation function of the constraints, θ(q i (x)) is the power of the penalty function, and the values of the functions h(·), θ(·) and γ(·) are problem dependent. We applied the same values as those reported in [21]. The following values are used for the penalty function, where the assignment functions θ and γ are those of [21] and the penalty value is h(t) = t · √t.
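As a concrete reading of the non-stationary penalty described above, the following Python sketch assembles F(x, k) = f(x) + h(k)·H(x) with h(k) = k·√k as stated in the text. The θ(·) and γ(·) values are the settings commonly used with this penalty scheme in the cited literature; they are an assumption here, since the excerpt does not reproduce them.

```python
import math

def dynamic_penalty(f, constraints, x, k):
    """Non-stationary penalty F(x, k) = f(x) + h(k) * H(x).

    `constraints` is a list of functions g_i with g_i(x) <= 0 when satisfied.
    The theta/gamma settings below are the values commonly used with this
    penalty scheme in the literature (an assumption, not quoted from the paper).
    """
    def theta(q):
        if q < 0.001:
            return 10.0
        if q <= 0.1:
            return 20.0
        if q <= 1.0:
            return 100.0
        return 300.0

    def gamma(q):
        return 1.0 if q < 1.0 else 2.0

    h = k * math.sqrt(k)                      # h(k) = k * sqrt(k), as in the text
    H = sum(theta(q) * q ** gamma(q)
            for q in (max(0.0, g(x)) for g in constraints))
    return f(x) + h * H
```

A call such as dynamic_penalty(lambda x: sum(v * v for v in x), [lambda x: 1.0 - x[0]], [0.5, 0.2], k=10) then returns the penalized objective for one candidate point.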
B. An overview of genetic algorithm The genetic algorithm (GA) was introduced by Holland [8]. The basic principles of GA are inspired by the principles of life which were first described by Darwin [4]. GA starts with a number of individuals (chromosomes) which form a population. After randomly creating the population, the initial population is evaluated using the fitness function. The selection operator then selects highly fit individuals with high fitness scores to create the new generation. Many types of selection have been developed, such as roulette wheel selection, tournament selection and rank selection [12]. The selected individuals go to the mating pool to generate offspring by applying crossover and mutation. The crossover operator is applied to the individuals in the mating pool to produce two new offspring from two parents by exchanging substrings. The most common crossover operators are one-point crossover [8], two-point crossover [12] and uniform crossover [12]. The parents are selected randomly in crossover operators by assigning a random number to each parent; a parent whose random number is lower than or equal to the crossover probability P c is always selected. Mutation operators are important for local search and to avoid premature convergence. The probability of mutation p m must be set to a low level, otherwise mutation would randomly change too many alleles and the new individual would have nothing in common with its parents. The new offspring are evaluated using the fitness function, and these operations are repeated until the termination criteria are satisfied, for example a maximum number of iterations. The main structure of the genetic algorithm is presented in Algorithm 1. Algorithm 1 The structure of genetic algorithm: 1: Set the generation counter t := 0. 2: Generate an initial population P 0 randomly. 3: Evaluate the fitness function of all individuals in P 0 . 4: repeat 5: Set t = t + 1. {Generation counter increasing}. 6: Select an intermediate population P t from P t−1 . {Selection operator}. 7: Associate a random number r from (0, 1) with each row in P t . 8: if r < p c then 9: Apply the crossover operator to all selected pairs of P t . Evaluate the fitness function of all individuals in P t . 18: until Termination criteria satisfied. 1) Linear crossover operator: HGDESCOP uses a linear crossover [20] in order to generate new offspring to substitute their parents in the population. The main steps of the linear crossover are shown in Procedure 1. Choose the two most promising offspring of the three to substitute their parents in the population. 3. Return. C. An overview of differential evolution algorithm The differential evolution algorithm (DE) was proposed by Storn and Price in 1997 [17]. In DE, the initial population consists of a number of individuals, which is called the population size NP. Each individual in the population is a vector consisting of D dimensional variables and can be defined as follows: where G is the generation number, D is the problem dimension and NP is the population size. DE employs mutation and crossover operators in order to generate trial vectors, then the selection operator starts to select the individuals of the new generation G+1. The overall process is presented in detail as follows: 1) Mutation operator: Each vector x i in the population creates a trial mutant vector v i as follows. DE applies different strategies to generate a mutant vector, as follows, where x best is the best vector in the population in the current generation G.
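As a concrete illustration of the mutation strategies just listed, the Python sketch below implements the two classical variants DE/rand/1 and DE/best/1. The scale factor F and the index handling follow standard DE conventions and are not values taken from this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1(pop, i, F=0.5):
    """DE/rand/1 mutant: v_i = x_r1 + F * (x_r2 - x_r3), r1, r2, r3 distinct and != i."""
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2, r3 = rng.choice(idx, size=3, replace=False)
    return pop[r1] + F * (pop[r2] - pop[r3])

def de_best_1(pop, i, fitness, F=0.5):
    """DE/best/1 mutant: v_i = x_best + F * (x_r1 - x_r2)."""
    best = pop[int(np.argmin(fitness))]   # minimization assumed
    idx = [j for j in range(len(pop)) if j != i]
    r1, r2 = rng.choice(idx, size=2, replace=False)
    return best + F * (pop[r1] - pop[r2])
```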
2) Crossover operator: The crossover operator starts after mutation in order to generate a trial vector from the target vector x i and the mutant vector v i as follows: where CR is the crossover factor, CR ∈ [0, 1], and j rand is a randomly chosen dimension index that guarantees at least one component is taken from the mutant vector. 3) Selection operator: The DE algorithm applies greedy selection between the trial and target vectors. The selected individual (solution) is the vector with the better fitness value. The description of the selection operator is presented as follows: The main steps of the DE algorithm are presented in Algorithm 2. Algorithm 2 The structure of differential evolution algorithm: Set G = G + 1. {Generation counter increasing}. 7: Select random indexes r 1 , r 2 , r 3 , where r 1 ≠ r 2 ≠ r 3 ≠ i. {Mutation operator}. end for end for 24: until Termination criteria satisfied. III. THE PROPOSED HGDESCOP ALGORITHM The HGDESCOP algorithm starts by setting the parameter values. In HGDESCOP, the initial population, which consists of NP individuals, is generated randomly as shown in Equation 5. Each individual in the population is evaluated by using the objective function. At each generation (G), each individual in the population is updated by applying the DE mutation operator with three randomly selected indexes r 1 , r 2 , r 3 , where r 1 ≠ r 2 ≠ r 3 ≠ i, as shown in Equations 6 and 7. After updating the individuals in the population, a random number r from (0, 1) is associated with each individual in the population, and the genetic algorithm linear crossover operator of Procedure 1 is applied when r < P c . The greedy selection operator then selects the new individuals to form the population of the next generation, as shown in Equation 13. These operations are repeated until the termination criterion is satisfied, which is the maximum number of iterations in our algorithm. Set G = G + 1. {Generation counter increasing}. 13: if r < P c then 14: Apply Procedure 1 to all selected pairs of v (G) i in P (G) . {GA linear crossover operator}. for (k = 0; k < NP; k++) do IV. NUMERICAL EXPERIMENTS The general performance of the proposed HGDESCOP algorithm is tested using 13 benchmark functions G 1 − G 13 , which are reported in detail in [5], [7], [13]. These functions are listed in Table I. A. Parameter settings The parameters used by HGDESCOP and their values are summarized in Table II. These values are either based on common settings in the literature or determined through our preliminary numerical experiments. B. Performance analysis In order to test the general performance of the proposed HGDESCOP algorithm, we applied it to the 13 benchmark functions G 1 − G 13 and the results are reported in Table III. Also, six functions have been plotted, as shown in Figure 1. 1) The general performance of the HGDESCOP algorithm: The best, mean, worst and standard deviation values are averaged over 30 runs and reported in Table III. We can observe from the results in Table III that HGDESCOP could obtain the optimal solution, or a solution very near the optimum, for all functions G 1 − G 12 in all 30 runs; however, for function G 13 HGDESCOP obtained the optimal solution in 9 out of 30 runs. Also, in Figure 1, we can observe that the function values decrease rapidly as the number of generations increases. We can conclude from Table III and Figure 1 that HGDESCOP is an efficient algorithm and that it can obtain the optimal or a near-optimal solution within only a few iterations.
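To make the HGDESCOP workflow of Section III concrete, here is a schematic Python sketch of one generation, assuming minimization. The linear-crossover offspring formulas follow Wright's standard definition (0.5(p1 + p2), 1.5p1 − 0.5p2, −0.5p1 + 1.5p2), and only the best of the three candidates is kept as the trial vector; since Procedure 1 is garbled in the excerpt, these choices are assumptions rather than quotations of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def linear_crossover(p1, p2, f):
    """Wright's linear crossover: build three candidates and keep the best one."""
    cands = [0.5 * (p1 + p2), 1.5 * p1 - 0.5 * p2, -0.5 * p1 + 1.5 * p2]
    return min(cands, key=f)

def hgdescop_generation(pop, f, F=0.5, Pc=0.8):
    """One HGDESCOP generation: DE/rand/1 mutation, probabilistic GA linear
    crossover, then greedy selection against the target vector."""
    NP = len(pop)
    new_pop = []
    for i in range(NP):
        idx = [j for j in range(NP) if j != i]
        r1, r2, r3 = rng.choice(idx, size=3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])               # DE mutation
        if rng.random() < Pc:                               # GA linear crossover (Procedure 1)
            v = linear_crossover(v, pop[i], f)
        new_pop.append(v if f(v) <= f(pop[i]) else pop[i])  # greedy selection
    return new_pop
```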
C. HGDESCOP and other algorithms In order to evaluate the performance of the HGDESCOP algorithm, we compare it with four evolutionary-based algorithms. All results are reported in Table IV, and the results of the other algorithms are taken from their original papers. The four algorithms are listed as follows. • Homomorphous Mappings (HM) [9] This algorithm incorporates a homomorphous mapping between an n-dimensional cube and the feasible search space. • Stochastic Ranking (SR) [16] This algorithm introduces a new method to balance objective and penalty functions stochastically (stochastic ranking), and presents a new view on penalty function methods in terms of the dominance of penalty and objective functions. • Adaptive Segregational Constraint Handling EA (ASCHEA) [6] This algorithm extends the penalty function and introduces a niching technique with an adaptive radius to handle multimodal functions. The main idea of the algorithm is to start for each equality with a large feasible domain. Algorithm 3 The proposed HGDESCOP algorithm: 1: Set the generation counter G := 0. 2: Set the initial values of F, p c and NP. 3: Generate an initial population P 0 randomly. 4: Evaluate the fitness function of all individuals in P 0 . 5: repeat 6: 26: until Itr no ≤ Maxitr {Termination criteria satisfied}. Associate a random number r 1 from (0, 1) with each gene in each individual in P t . 12: 17: TABLE I: Constrained benchmark functions.
3,217.2
2014-01-01T00:00:00.000
[ "Computer Science" ]
Strong hydrogen bonding in a dense hydrous magnesium silicate discovered by neutron Laue diffraction By using neutron Laue diffraction, strong hydrogen bonding was observed in the framework structure of phase E of dense hydrous magnesium silicate. Such bonding plays a crucial role in stabilizing large amounts of hydrogen in the crystal structures of minerals under high-pressure and high-temperature conditions inside deep Earth. Introduction Hydrogen can be incorporated into minerals in highly variable amounts. Once incorporated, the hydrogen is circulated throughout the Earth, from the surface to the deep interior, affecting the long-term evolution of the planet (Iizuka-Oku et al., 2017;Kawakatsu & Watada, 2007;Okuchi, 1997;Thompson, 1992). A major proportion of this hydrogen is currently stored within the crystal structures of dense minerals that are thermodynamically stable under the high pressures and temperatures of the deep mantle of the Earth (Ohtani, 2015;Purevjav et al., 2014Purevjav et al., , 2016Purevjav et al., , 2018Sano-Furukawa et al., 2018). Dense hydrous magnesium silicates (DHMSs) are the most typical among such dense mineral species; they have very large hydrogen capacities even under extreme pressure and temperature conditions (Frost, 1999;Nishi et al., 2014;Ohtani et al., 2000). Phase E [Mg 3À0.5x Si x H 6À3x O 6 ] has the largest hydrogen capacity (18% H 2 O weight fraction of the total mass) and one of the best thermodynamic stabilities among DHMSs; it is stable to temperatures of at least 1573 K and pressures of 13-18 GPa (Kanzaki, 1991;Frost, 1999). Fig. 1(a) shows the thermogravimetry result of DHMS phase E synthesized under high-pressure and high-temperature conditions. Most of the hydrogen was retained inside the crystal structure up to 900 K under ambient pressure, which is distinctly higher than that typically seen for common hydrogen-bearing minerals of lower density. To understand the reason for such high-temperature stability of hydrogen in the mineral structure, which fundamentally controls the circulation of hydrogen within deep Earth, its chemical bonding geometry and cation-exchange mechanisms must be fully clarified. X-ray diffraction analysis has been used to examine the framework of DHMS phase E without hydrogen. It was revealed to have a layered structure belonging to a trigonal crystal system (space group R3m) (Kudoh et al., 1993). The structure consists of two different magnesium sites (Mg1 and Mg2), one silicon site (Si) and one oxygen site (O). Each Mg 2+ ion is surrounded by six O 2À ions to form MgO 6 octahedra, and each Si 4+ ion is connected to four O 2À ions to form SiO 4 tetrahedra. Most of the Mg 2+ ions are located at the Mg1 sites, which collectively form a layer of edge-sharing MgO 6 octahedra. However, there was also a minor amount of Mg 2+ occupying the Mg2 sites outside this layer. The SiO 4 tetrahedra are distributed statistically between two adjacent MgO 6 layers together with the possible hydrogen sites; however, hydrogen was not detectable using X-ray diffraction. In order to locate the hydrogen sites, we previously analyzed the structure of deuterated DHMS phase E using powder neutron diffraction at J-PARC, Japan . Two equally plausible hydrogen site models (normal and tilted O-D dipole models) were derived [ Fig. 1(b)]. The hydrogen concentrations determined based on the two models were very similar, derived from their refined site occupancies. 
Thus, we concluded that the hydrogen concentration within the mineral structure was reasonably constrained, where the refined site occupancies of hydrogen were compatible with the mineral stoichiometry. On the other hand, the powder data did not allow us to discriminate between the geometries of the hydrogen bonds among these models, owing to the insufficient spatial resolution. Thus, in this study, we employed time-offlight (TOF) single-crystal neutron Laue diffraction for our synthesized high-quality DHMS phase E crystal. To determine the most accurate bonding distances of hydrogen, we synthesized a fully deuterated crystal. We expect that the heavier mass of deuterium relative to protium should reduce its vibration/displacement at its equivalent sites. In addition, the longer coherent scattering length of deuterium relative to protium should help to increase the signal-to-noise ratio, and its shorter incoherent scattering length should reduce background scattering. Hence, to obtain the best possible dataset given the very small crystal size, we used a fully deuterated sample to reduce the background and increase the signal-tonoise ratio. Therefore, the TOF Laue scheme will allow very high sensitivity for detecting weaker reflections at lower dspacings from a small synthetic crystal. In our previous studies conducted using this combination, reflections with minimum d-spacings (d min ) as low as 0.3 Å were successfully resolved and analyzed (Purevjav et al., , 2018, enabling quantitative determination of site positions and occupancies of deuterium in DHMS phase E. Single-crystal synthesis and characterization Fully deuterated single crystals of DHMS phase E were synthesized under high-pressure and high-temperature conditions using a scaled-up Kawai-type cell. We previously established a slow-cooling method for growing physically and chemically homogenous crystals of hydrogenated minerals that exist in deep Earth (Okuchi et al., 2015). This method proved applicable for preparing the deuterated crystals. A mixture of Mg(OD) 2 and SiO 2 powders at a 2:1 molar ratio was used as the starting material. The Mg(OD) 2 was synthesized from dried MgO powder and D 2 O water in an autoclave at 513 K and 40 MPa. Raman spectroscopy confirmed that the Mg(OD) 2 had no hydrogen contamination . The SiO 2 powder was prepared from a high-purity glass rod; the glass contained less than 20 p.p.m. OH groups. The mixture was sealed in a gold sample capsule (4 mm outer diameter and 4.5 mm length). The capsule was placed in an 18/ 10 type Kawai cell. To synthesize a fully deuterated crystal, we prebaked the cell parts at 1273 K for 1 h before the synthesis experiment to completely remove any absorbed hydrogen. The cell was combined with eight tungsten carbide anvils in a dry environment (laboratory humidity <40%), which had edge lengths of 46 mm. The sealed cell was compressed to a pressure of 15 GPa; then it was heated to 1366 K and slowly cooled to 1348 K over 3 h to grow the crystals. Subsequently, the cell was quenched rapidly to room temperature by cutting off the heater power. Finally, the pressure was released and the grown crystals were recovered under ambient conditions. Many and brucite [Mg(OH) 2 ] measured at ambient pressure using Rigaku Thermo plus EV02. The former structure retains most of its hydrogen at around 900 K, while the latter retains it at around 600 K. 
We consider that the difference in dehydration temperatures between these minerals is related to the difference in their hydrogen bonding strengths. (b) Two equally plausible hydrogen site models that have been adopted so far for DHMS phase E . Each corner of the octahedra is made of an oxygen anion (not shown). In the normal O-D model, the dipole is parallel to the c axis and normal to the MgO 6 octahedral layers. In the tilted O-D model, the dipole is tilted from the c axis. The crystallographic illustrations were created using the software VESTA3 (Momma & Izumi, 2011). crystals with the same composition grew together within the capsule, and were confirmed to have the DHMS phase E structure by X-ray diffraction analysis. We carefully selected one of the largest crystals for neutron diffraction, with a volume of 0.1 mm 3 (0.65 Â 0.5 Â 0.3 mm). The crystal was optically transparent, i.e. there was an absence of inclusions, twinning and cracks when observed under a polarized optical microscope (Fig. S1). Time-of-flight single-crystal neutron Laue diffraction The selected sample crystal was studied using the TOPAZ diffractometer installed at Spallation Neutron Source, Oak Ridge National Laboratory (Schultz et al., 2014). The crystal mounting, data collection strategies and integration schemes were the same as in our previous study (Purevjav et al., 2018). The crystal was measured in 17 different orientations for 2 d at 100 K. The proton beam power was 1.4 MW. For the structural analysis, we used 707 independent reflections covering the dspacing range down to d min = 0.50 Å ; all these reflections satisfied the I > 3I criteria. Refinement of structural parameters The hkl reflection intensity dataset was analyzed using General Structure Analysis System (GSAS) software (Larson & Von Dreele, 2004). The initial structural parameters were taken from our previous powder neutron diffraction results , though we did not use any constraints. First, the structural model was fit without D to obtain tentative structural parameters of Mg, Si and O. Using these parameters, we constructed the difference Fourier map to show the sites of D, which was located at the maximum nuclear density at 7.10 fm Å À3 between two adjacent layers of MgO 6 octahedra. As discussed later, its coordinates were consistent with those of the tilted O-D dipole structure model [ Fig. 1(b)]. After selecting this model, we refined the full structural parameters, including the D sites. The wR(F) and R(F) values obtained after full refinement at d min = 0.50 Å were 5.3 and 6.1%, respectively. Additional series of refinements with d min = 0.55, 0.60, 0.65 and 0.70 Å were conducted separately to evaluate the stability of cation occupancies (Fig. S2). Then, it was proved that the cation occupancies of DHMS phase E were stable for all these different d min datasets. Table 1 shows the refined structural parameters at d min = 0.50 Å . Fig. 2 shows the structure of the DHMS phase E. D + is located between the MgO 6 octahedral layers [ Fig. 2(a)]. The nuclear density of D in the difference Fourier map forms a triangular shape [ Fig. 2 Table 1 Refined structural parameters at d min = 0.50 Å . Results and discussion The lattice parameters are a = 2.9647 (4) and c = 13.8892 (3) Å , determined by single-crystal neutron diffraction at 100 K. suitable structural model. The O-D covalent bond distance is 0.817 (3) Å , which is identical to our previous powder diffraction results . The OÁ Á ÁD hydrogen bond distance is 2.088 (3) Å . 
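As a simple illustration of how such bond geometries are derived from atomic positions, the sketch below computes the O-D distance, the D···O distance and the O-D···O angle from Cartesian coordinates with numpy. The coordinates used are purely illustrative placeholders, not the refined positions from this study.

```python
import numpy as np

def hbond_geometry(o_donor, d, o_acceptor):
    """Return (O-D distance, D...O distance, O-D...O angle in degrees)."""
    od = d - o_donor            # covalent O-D vector
    do = o_acceptor - d         # hydrogen-bond D...O vector
    r_od = np.linalg.norm(od)
    r_do = np.linalg.norm(do)
    # angle at D between the D->O(donor) and D->O(acceptor) directions
    cos_a = np.dot(-od, do) / (r_od * r_do)
    return r_od, r_do, np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Illustrative coordinates in angstroms (not the refined values of the paper)
o1 = np.array([0.00, 0.00, 0.00])
d1 = np.array([0.00, 0.00, 0.82])
o2 = np.array([0.55, 0.00, 2.80])
print(hbond_geometry(o1, d1, o2))
```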
The bonding angle of O-DÁ Á ÁO is 163.3 (3) , indicating that near-straight hydrogen bonding occurs between D + and one of the three nearest-neighbor O 2À ions. The structure of deuterated DHMS phase E at ambient pressure is close to that of deuterated brucite (magnesium deuteroxide) at high pressures Parise et al., 1994). Both structures possess three-split D sites, clearly supporting the existence of interlayer hydrogen bonding in their structures (insets of Fig. 3). The interlayer distance between adjacent oxygen anions (O-DÁ Á ÁO) in brucite is 3.22 Å at ambient pressure, which is too large for hydrogen bonding; on the other hand, the distance decreases to 2.88 Å at a pressure of 8.9 GPa, thereby enabling hydrogen bonding. In addition, the O-DÁ Á ÁO angle of brucite is 148 at ambient pressure, which is too small for hydrogen bonding; however, this angle increases to 156 at 9.3 GPa, which again is consistent with the occurrence of hydrogen bonding. The c axis of brucite thus becomes distinctly less compressible at high pressures, indicating that its framework structure becomes harder with interlayer hydrogen bonding [ Fig. 3(a)]. On the other hand, the distance of O-DÁ Á ÁO in DHMS phase E is already 2.880 (1) Å at ambient pressure, which is suitable for hydrogen bonding of moderate strength. The O-DÁ Á ÁO angle of 163 is also consistent with the occurrence of hydrogen bonding. Hence, hydrogen bonding occurs in DHMS phase E with a distance of 2.088 (3) Å at ambient pressure. It was previously reported that the c axis of DHMS phase E at ambient pressure is less compressible than that of brucite at high pressure, suggesting that interlayer hydrogen bonding already plays a role in hardening its framework structure [ Fig. 3(b)]. Thus, we expect that interlayer hydrogen bonding in DHMS phase E becomes much stronger at highpressure conditions inside deep Earth. The presently determined O-DÁ Á ÁO distance of DHMS phase E was more than 0.1 Å shorter than the value reported by Shieh et al. (2000), who suggested it exhibited weak hydrogen bonding. The reason for this difference is that Shieh et al. used the relation of the OH stretching frequencies versus OÁ Á ÁO bond distances, where the relation had few data points, especially in the high-frequency range. Thus, such qualitative information is inaccurate for discussing the strength of hydrogen bonding in DHMS phase E. Thermogravimetry analyses [ Fig. 1(a)] demonstrated that the dehydration of DHMS phase E at ambient pressure occurs at a much higher temperature than that of brucite. At high pressure, the dehydration temperature of brucite increases, reaching 1550 K at 15 GPa (Johnson & Walker, 1993); this demonstrates the important role that hydrogen bonding plays in the stability of the structure against heat. Furthermore, the pressure-enhanced hydrogen bonding in DHMS phase E should act to increase the dehydration temperature. We conclude that strong hydrogen bonding is the most important factor for the high-temperature stability of DHMS phase E in deep Earth. Its high-temperature stability limit has not yet been accurately determined; nevertheless, it is stable to at least 1573 K at a pressure of 15 GPa (Frost, 1999). We expect that hydrogen bonding also plays a universal role in enhancing the stability of various hydrous minerals in deep Earth, and we will seek to verify this in our future research. 
It has been reported that DHMS phase E incorporates a variable amount of hydrogen (Frost, 1999;Tomioka et al., 2016), thereby allowing flexible cation substitution, including hydrogen as one of the exchangeable species. We found a considerable number of Mg 2+ vacancies at the Mg1 site, but no cations at the previously proposed Mg2 site. We found that the Si 4+ and D + sites were very close to each other. Furthermore, we considered the structural relation between brucite and DHMS phase E, as well as a full disordering of cations, as Compressibility along the c axis (c/c 0 ) of brucite and DHMS phase E. (a) c/c 0 of brucite reported by Nagai et al., 2000 (circles); Okuchi et al., 2014 (triangles);and Parise et al., 1994 (squares). The solid and broken lines are linear fits of these data points at lower and higher pressures, respectively. The inset numbers show the distance DÁ Á ÁO across the interlayer space at room temperature, obtained from Okuchi et al. (2014), andParise et al. (1994). The distance monotonically decreases with increasing pressure to form a hydrogen bonding interlayer, which makes the structure distinctly harder. The inset figures show the D sites of brucite at ambient and high pressures, respectively. (b) c/c 0 of DHMS phase E reported by Shieh et al., 2000 (diamonds). The c axis of DHMS phase E around ambient pressure is already less compressible than that of brucite at high pressures. The inset number shows the distance DÁ Á ÁO, determined in present study. The inset figure shows the D sites of DHMS phase E. required by the crystallographic symmetry. It was concluded that multiple D + ions in the interlayer space were simultaneously exchanged with the Si 4+ ions that connect the neighboring layers, together with the generation of Mg 2+ vacancies inside the MgO 6 octahedral layers. By comparing the refined chemical formula of the DHMS phase E crystal (Mg 2.28 Si 1.32 D 2.15 O 6 ) with that of brucite (Mg 3 Si 0 D 6 O 6 ), we found that the exchange mechanism of four possible models have the DHMS phase E structure from brucite, while maintaining the cation charge balances. We calculated the balances in occupancies of Mg and D for these models and compared them with those of our refinement result (see Fig. S3). We found that most plausible model is i.e. one Mg 2+ in the MgO 6 octahedral layer and six D + in the interlayer space are exchanged with two Si 4+ at the top and bottom of the Mg 2+ vacancy [ Fig. 2(a)]. The cation-to-cation distance is too short for the SiO 4 tetrahedron and MgO 6 octahedron to share faces; consequently, Mg 2+ must be removed to introduce two SiO 4 tetrahedra which share their faces with the same Mg1 site. These SiO 4 tetrahedra have a deformed geometry, with an Si-O distance of 1.666 (3) Å along the c axis and 1.8322 (9) Å along the other directions. The O 2À bonded to Si 4+ with a shorter distance along the c axis does not possess a D + ion, thereby avoiding repulsion between Si 4+ and D + . Two of the other three O 2À bonded to Si 4+ with a longer distance possess D + to form two tilted O-D dipoles towards their interlayer hydrogen bonding directions. Thus, the hydrogen capacity in the DHMS phase E structure is eventually controlled by the exchanged amount of Si 4+ , while maintaining site disordering of all cations. Conclusions We analyzed the chemical bonding geometry around hydrogen in the framework structure of phase E, which is representative of the dense hydrous magnesium silicate (DHMS) minerals that retain hydrogen within deep Earth. 
A single crystal of deuterated DHMS phase E was synthesized at high pressure and temperature and subsequently analyzed using TOF neutron Laue diffraction. The nuclear density distribution of D + in the DHMS phase E framework structure at 100 K was obtained with a high spatial resolution of d min = 0.50 Å . It was found that, within the layered structure of DHMS phase E, the O-D dipole was tilted from the direction normal to the MgO 6 octahedral layers due to the occurrence of interlayer hydrogen bonding to one of the neighboring O 2À ions. This geometry of the hydrogen bonds was similar to that of compressed brucite at high pressures. The hydrogen bond length of DHMS phase E at ambient pressure was comparable with that of brucite at high pressure. By referring to compressibility studies on DHMS phase E and brucite, which have similar structures made of MgO 6 octahedral layers and interlayer spaces, we conclude that hydrogen bonding in these minerals plays a crucial role in increasing their dehydration temperatures. However, the role of hydrogen bonding is more significant in DHMS phase E than in brucite. We propose that cation exchange of Mg 2+ , D + and Si 4+ approximately À1:À6:+2 (molar ratio) occurs within the DHMS phase E structure while retaining full disordering of the cation sites.
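The approximately −1 : −6 : +2 exchange proposed above can be sanity-checked with a one-line charge balance: the refined formula Mg2.28Si1.32D2.15O6 should carry close to the +12 of cation charge needed to balance six O2− anions. The small Python check below only restates numbers already given in the text.

```python
mg, si, d, o = 2.28, 1.32, 2.15, 6.0      # refined composition Mg2.28 Si1.32 D2.15 O6
cation_charge = 2 * mg + 4 * si + 1 * d   # Mg2+, Si4+, D+
anion_charge = 2 * o                      # O2-
print(cation_charge, anion_charge)        # 11.99 vs 12.0 -> charge balanced
```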
4,319
2020-04-02T00:00:00.000
[ "Materials Science", "Physics" ]
Parameters Tuning of Fractional-Order Proportional Integral Derivative in Water Turbine Governing System Using an Effective SDO with Enhanced Fitness-Distance Balance and Adaptive Local Search : Supply-demand-based optimization (SDO) is a swarm-based optimizer. However, it suffers from several drawbacks, such as lack of solution diversity and low convergence accuracy and search efficiency. To overcome them, an effective supply-demand-based optimization (ESDO) is proposed in this study. First, an enhanced fitness-distance balance (EFDB) and the Levy flight are introduced into the original version to avoid premature convergence and improve solution diversity; second, a mutation mechanism is integrated into the algorithm to improve search efficiency; finally, an adaptive local search strategy (ALS) is incorporated into the algorithm to enhance the convergence accuracy. The effect of the proposed method is verified based on the comparison of ESDO with several well-regarded algorithms using 23 benchmark functions. In addition, the ESDO algorithm is applied to tune the parameters of the fractional-order proportional integral derivative (FOPID) controller of the water turbine governor system. The comparative results reveal that ESDO is competitive and superior for solving real-world problems. Introduction With the development of social economy and technology, many complex optimization problems have appeared in the fields of communication, transportation, machinery, ecommerce, automation, materials and economics [1][2][3][4][5][6][7][8][9][10]. Meta-heuristic algorithms, an effective tool, simulate one or some natural processes in nature and have unique advantages in solving complex optimization problems. The No-Free−Lunch (NFL) theorem [11] proves that any optimizers cannot provide the best solutions for all different optimization problems. Therefore, plenty of meta-heuristics have sprung up in recent years according to different inspirations. Meta-heuristic algorithms can be divided into three categories: evolutionarybased (EB), physis-based (PB), and swarm-based (SB). The most representative EB is the genetic algorithm (GA) [12], which simulates the evolution process of biological groups in nature. With the development of GA, many improved versions and variants have emerged, but most of them obtain high-quality solutions through mutation, crossover and selection steps. Other EBs are genetic programming (GP) [13], evolution strategies (ES) [14] and evolutionary programming (EP) [15]. PBs simulate the physical laws in nature. The annealing algorithm (SA) is a classical PB. The theoretical idea behind SA is the motion of molecules in a solid material as it cools gradually from high temperature. In recent years, numerous new physics-inspired algorithms have been proposed, including gravitational search algorithm (GSA) [16], atom designs an effective SDO termed ESDO; it incorporates the EFDB and Levy flight, mutation mechanism and adaptive local search strategy into SDO to perform global search. The effect of ESDO is verified based on the comparative results of ESDO with several well-regarded algorithms on a set of benchmark functions and tuning FOPID controller of water turbine governor. The results verify the efficiency and superiority of ESDO. The major difference of our study and its competitors is that ESDO makes three different strategies to improve its overall optimization performance in terms of solution diversity, convergence accuracy and search efficiency. 
The rest of this study is as follows. Section 2 gives the main structure of SDO and provides the proposed ESDO by combining several strategies. In Section 3, the experimental results on some functions are investigated to assess the effectivity of the proposed ESDO. Section 4 provides an application of ESDO in tuning the FOPID controller of a water turbine governor. Section 5 gives some conclusions of the study. Effective Supply-Demand-Based Optimization (ESDO) 2.1. Supply-Demand-Based Optimization (SDO) In SDO [34], there are d commodity prices and quantities as candidate solutions and possible candidate solutions, respectively. After evaluation, the one with better fitness is selected as the current candidate solution. The mathematical expressions for commodity price and quantity are: where x i and y i (i = 1 · · · n) represent the price vectors and quantity vectors in each market, and x j i and y j i (i = 1 · · · n; j = 1 · · · d) represent the price and quantity of the jth commodity in the ith market, respectively. There are n markets in total. Then, the fitness values of all prices and quantities are evaluated by the following functions: Fy = [Fy 1 Fy 2 Fy 3 · · · Fy n ] T , where T represents transposition of the matrix. The vectors of the equilibrium price and quantity are represented by: where r and r 1 are random numbers in [0, 1], and R(P) and R(Q) denote Roulette Wheel Selection. The expressions of the demand function and supply function are given by, respectively: where x i (t) and y i (t) respectively represent the ith price and quantity of commodity at time t, and β and α are respectively the demand weight and supply weight. Substituting (7) into (8), the demand equation can be rewritten into the following form: Therefore, the commodity price is updated by adjusting the values of α and β, and it is updated according to the equilibrium price relative to the current price. α and β can be expressed as: where |αβ| < 1 is equivalent to the stability mode in the supply and demand mechanism, emphasizing the exploitation ability, and |αβ| > 1 is equivalent to the instability mode, emphasizing the overall exploration ability. Figure 1 shows the two modes of SDO. Proposed Method Since SDO easily suffers from low search efficiency and misses some better solutions, ESDO is an enhanced version of SDO designed to overcome these drawbacks. For SDO, the fitness
In this study, inspired by a fitness-distance balance (FDB) [43], an enhanced fitness-distance balance (EFDB) is developed to replace the selection for the equilibrium quantity and price of commodities. Meanwhile, to strengthen exploration ability of the algorithm in the search space, a Levy flight strategy is introduced to the weight to improve the convergence ability. A mutation mechanism is employed to enhance the search efficiency of SDO. Meanwhile, an adaptive local search strategy is used to improve the convergence accuracy of the algorithm. ESDO is a newly presented optimizer and not yet applied in any real-world application. Combining Enhanced Fitness-Distance Balance (EFDB) and Levy Flight In [44], Kati et al. proposed an improved version of SDO, in which the equilibrium quantity is replaced with the commodity quantity selected by the FDB method to provide diversity. In this study, according to [44], we propose the EFDB method, in which the equilibrium quantity and price of commodities are replaced with the commodity quantity and price by using the FDB method, respectively. This method can further strengthen the solution diversity. The following is a specific description to the EFDB method. The fitness value of commodity price is calculated by [43]: The equilibrium price vector x 0 is redefined by: As an analogy, the equilibrium quantity vector y 0 is redefined by: where normFy i is the normalized fitness value of Fy i , normDy i is the normalized distance value of the ith commodity, It is the current iteration and MaxIt is the maximum number of iterations. This selection method combines the fitness and the distance to calculate the score for each individual in the population. Therefore, this strategy can effectively improve solution diversity and avoid local solutions. In addition, inspecting Equation (16), in the early iterations, the selection for the equilibrium quantity and price of commodities takes into account the largest fitness value and the furthest distance from the current optimal individual so far, it will contribute to exploration; in the later iterations, the selection focuses more on the fitness value rather than the distance, it will be dedicated to exploitation. The Levy flight [45,46] whose step-width obeys non-uniform Levy distribution, is a random walk; thus, it has the superior ability to enhance exploring space search [47]. The step-width of the Levy flight is produced by [48,49]: where λ is a stability/tail index, s is the step-width and u and v obey the normal distribution, respectively: u~N(0,1), v~N(0,1), where Γ denotes the standard Gamma function and b = 1.5. The weight α in Equation (10) is reformulated by: The Mutation Mechanism To improve the search efficiency of the algorithm, the mutation mechanism is employed in this study. Although some different mutation strategies are introduced in the literature [50][51][52], the Gaussian mutation is one of the most frequently used mutation methods since it is more effective and simpler to implement [53]. So, in ESDO, the supply function and demand function are modified by, respectively: where rn~N(0,1) Adaptive Local Search (ALS) Strategy Local search strategy is an important way to improve the current best solution. The chaotic local search is a classic local search (CLS) method [54], which used the chaotic map to improve the solution quality by searching the neighborhood around the best solution so far. 
However, the step-size for this local search cannot decrease as iterations go on, which will affect the solution accuracy and search efficiency. So, to dynamically adjust the step-size of the local search, an adaptive local search (ALS) strategy is given by: x best (t) is a new candidate solution generated at time t. The update of the current best solution is given by: If the fitness value of the new candidate x best (t) is better than that of the current best solution, the current best solution is replaced with the new one; otherwise it remains unchanged. It can be observed from Equation (28) that a bigger step-size contributes to exploration in the early iterations, while, as iterations increase, a small step-size is greatly dedicated to exploitation. In addition to improving the convergence accuracy, this adaptive search strategy also strengthens the balance between exploration and exploitation to some extent. The Proposed ESDO Algorithm By introducing the EFDB method and Levy flight, the mutation mechanism, and the ALS strategy to strengthen the optimization performance, the ESDO algorithm is proposed. The pseudocode of the ESDO algorithm is given in Figure 2; its main loop reads: Update the commodity quantity vector yi by Eq. (25). Update the commodity price vector xi by Eq. (26). Calculate their fitness values Fxi and Fyi. If Fyi is better than Fxi, replace xi by yi. End For. Perform the adaptive local search strategy by Eqs. (28)-(30). Update the best solution found so far xbest. End While. Return the best solution found so far xbest. Test Functions and Parameter Setting To assess the performance of the ESDO algorithm, a classical suite of 23 benchmark test functions (see Table A1 in Appendix A for details) is employed. The 23 benchmark functions include 7 unimodal functions (UFs) (F1-F7), 6 multimodal functions (MFs) (F8-F13) and 10 low-dimensional multimodal functions (LMFs) (F14-F23). Meanwhile, several competitive optimizers, including the whale optimization algorithm (WOA) [55], the gray wolf optimizer (GWO) [56], and GSA, are used and their results are provided for a comparison. For all the considered optimizers, the population size and the
The other parameter settings of all the considered optimizers are described in Table 1.

Exploitation Analysis

The functions F1-F7, which have only one extremum, are used to assess the exploitation ability of the algorithms. The comparison results on these UFs are listed in Table 2, in which the underline indicates the best value among all the algorithms. From this table, ESDO provides better solutions on functions F1-F5 and F7 in terms of mean and standard deviation (Std), and performs as well as SDO and GWO on function F6. Therefore, ESDO obtains better results on most of the UFs. Figure 3 shows the convergence curves of the algorithms on F1-F7; ESDO exhibits a superior convergence rate over the other optimizers in exploiting the optimal solution.

Exploration Analysis

The functions F8-F23, which have multiple extrema, are used to evaluate the exploration ability of the algorithms. The comparison results on these functions are shown in Tables 3 and 4. Inspecting Table 3, ESDO clearly outperforms the other optimizers on functions F12 and F13. ESDO, SDO and WOA provide the same results in terms of mean on functions F9 and F10. In addition, ESDO is the third-best algorithm on function F7. The performance of ESDO ranks second only to WOA on function F8. Observing Table 4, for F15 and F20, ESDO obtains the second-best results, inferior only to those of SDO. For functions F16-F19 and F22-F23, ESDO, SDO and one other algorithm offer the same best results. For F21, ESDO provides the best results. The convergence curves of the algorithms on F8-F23 are depicted in Figures 4 and 5, which show that ESDO has better convergence performance than the other algorithms when tackling these test functions.

Statistical Analysis

To evaluate the overall performance of ESDO and rank it statistically, the Wilcoxon signed-rank test (WSRT) [57] and the Friedman test (FT) [58] are employed. Tables 5 and 6 provide the results of the WSRT. In these tables, "=" indicates that there is no significant difference between ESDO and its competitor on a given problem, "+" indicates that ESDO performs better than the competitor, and "-" indicates the opposite. Table 7 summarizes the results of the WSRT in Tables 5 and 6: ESDO is superior to SDO, GWO, WOA and GSA on 7, 18, 22 and 17 of the 23 functions, respectively, indicating that ESDO is statistically superior to its competitors. Figure 6 depicts the rank on each function for the compared optimizers, and Figure 7 gives the mean of these ranks.
From Figure 7, ESDO ranks first among the considered optimizers, demonstrating that it exhibits the best optimization ability compared with its counterparts.

System Description

The traditional integer-order PID controller has only three parameters (K_p, K_i, K_d) [59], while the FOPID controller used in this experiment has five parameters (K_p, K_i, K_d, lambda, mu), allowing more accurate control. Theoretically, lambda and mu can take any value; in the case of lambda = mu = 1, the FOPID reduces to an integer-order PID. The transfer function of the FOPID controller is expressed in terms of K_p, K_i and K_d, which represent the proportional, integral and differential coefficients, respectively; s, the Laplace operator; lambda and mu, the exponents of the integral and differential operators, respectively; and T_v, the differential time constant. A variation of the unit load produces a deviation e, which the FOPID controller converts into an adjustment signal. After receiving this signal, the mechanical-hydraulic system adjusts the opening of the guide vane, thereby adjusting the flow and restoring the speed. Figure 8 shows the FOPID model of the water turbine governor system.
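The transfer function itself is not reproduced above. For reference, a commonly used FOPID form consistent with the parameters listed is sketched below; this is an assumed standard form, not necessarily the exact expression used in the paper (in particular, the role of the differential time constant T_v is not shown here).

```latex
% Commonly used fractional-order PID (FOPID) transfer function (assumed form):
C(s) = K_p + \frac{K_i}{s^{\lambda}} + K_d\, s^{\mu}, \qquad \lambda, \mu > 0,
% which reduces to the classical integer-order PID controller when \lambda = \mu = 1.
```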
Experimental Results and Analysis

This study is simulated in MATLAB/Simulink. The fitness function uses the time-multiplied error integration criterion and is defined in terms of e(t), the deviation of the actual output from the expected output. The specific experimental settings and parameter values are shown in Table 8. To make the experiment more practical, four different working conditions under 0-20% load are adopted, and the controller performance obtained with ESDO is compared with that obtained with SDO, GWO, WOA and GSA under each working condition. Each algorithm is run 20 times, and the average optimal fitness value, overshoot and adjustment time over the 20 experiments are compared to verify the effect of ESDO on tuning the FOPID controller. Under the different load conditions, the overshoot is represented by the maximum value. The simulation results are shown in Table 9. The convergence curves of the average fitness, the convergence curves of the average FOPID parameters, and the speed response curves obtained with the average FOPID parameters under the different load conditions are shown in Figures 9-20, respectively.

Under 4% load, the fitness value of the FOPID controller obtained with ESDO is the smallest among the five algorithms, which shows that ESDO can escape local optima and find the global optimum while effectively improving the optimization accuracy. From the convergence curves under 4% load in Figure 9, the convergence rate of ESDO is also faster than that of the other algorithms, indicating that ESDO has the strongest ability to find the best solution among these algorithms. The overshoot and the adjustment time are two important indicators of whether a control system is stable. It can be seen from Table 9 that, under 4% load, the overshoot and adjustment time are the smallest after the water turbine governor is tuned by ESDO. The trend of the response curves in Figure 11 shows that, after the peak value, the oscillation amplitude of the curve obtained by ESDO is the smallest and quickly reaches a stable value. On the contrary, the oscillation amplitudes of the response curves obtained by SDO, GWO, WOA and GSA are very large, and their behaviour is very unstable.
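As a brief aside on the fitness criterion defined at the start of this section: the "time-multiplied error integration" criterion is commonly implemented as the ITAE index, the integral of t*|e(t)|. The following sketch assumes that interpretation and uses a synthetic error signal; it is illustrative only and is not the paper's Simulink setup.

```python
import numpy as np

def itae(t, e):
    """ITAE-style index: integral of t * |e(t)| over the simulation horizon,
    approximated with the trapezoidal rule."""
    t = np.asarray(t, dtype=float)
    e = np.asarray(e, dtype=float)
    return float(np.trapz(t * np.abs(e), t))

# Synthetic example: a decaying oscillatory speed deviation after a load change.
t = np.linspace(0.0, 10.0, 2001)
e = 0.05 * np.exp(-0.8 * t) * np.cos(6.0 * t)   # placeholder deviation signal
print(f"ITAE = {itae(t, e):.6f}")

# In a tuning loop, each candidate (Kp, Ki, Kd, lambda, mu) would be simulated
# (e.g., in Simulink) to obtain e(t), and this index would serve as the fitness.
```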
These results prove that the dynamic regulation ability of the FOPID controller obtained with ESDO is stronger than those obtained with the other four algorithms, and that tuning the turbine governor parameters with ESDO makes the control system more stable and efficient.
Under 8% load, the fitness value of the FOPID controller obtained with ESDO is also the smallest, showing that the global exploration ability and local exploitation ability of ESDO are significantly enhanced. The convergence curves of the average fitness in Figure 12 demonstrate that, under the 8% load condition, ESDO can quickly find the optimal value during the optimization process, greatly improving the efficiency of the algorithm. After the water turbine governor is tuned by the different algorithms, the overshoot of the FOPID controller obtained with ESDO is second only to, and very close to, that obtained with GSA, but its adjustment time is far better than that of GSA. It can be seen from Figure 14 that the FOPID controller from ESDO also has the smallest oscillation amplitude and the shortest oscillation duration, the FOPID controller from GSA has the largest amplitude and the longest duration, and the FOPID controllers from SDO, GWO and WOA lie in between. Considered comprehensively, under the 8% load condition ESDO again has the strongest optimization ability for the FOPID controller among the five algorithms. When the load is 12%, the FOPID controller obtained with ESDO again achieves the minimum fitness value. From Figure 15, although the convergence speed of ESDO is slower than that of SDO at the beginning of the iterations, ESDO overtakes SDO in the middle of the iterations; at that point its convergence speed is the fastest and it tends to jump out of local extrema to find a smaller value. Moreover, in the regulation system, the overshoot of the FOPID controller obtained with ESDO is relatively good, and the adjustment time is also the smallest. From the speed response curves in Figure 17, after the first peak the ESDO curve is very stable with little fluctuation, while the response curves of the FOPID controllers obtained with the other algorithms fluctuate greatly and for a long time. When the load is 16%, the fitness value of the FOPID controller obtained with ESDO is still the smallest. From Figure 18, ESDO converges quickly at the beginning of the iterations, and its ability to jump out of local extrema is also the strongest among all the algorithms. According to the speed response curves in Figure 20, the overshoot of the FOPID controller obtained with ESDO is very close to that obtained with GSA and better than those obtained with the other algorithms. The adjustment time of the FOPID controller obtained with ESDO is the shortest among all the algorithms; after falling from the peak, the fluctuation weakens rapidly and finally stabilizes. Therefore, by tuning the FOPID parameters of the water turbine governor under different load conditions, these convincing results reveal that ESDO is excellent at tackling real-world engineering applications.

Conclusions

To better handle optimization problems, an effective supply-demand-based optimization (ESDO) algorithm is proposed. It combines three strategies, namely the enhanced fitness-distance balance (EFDB) with Levy flight, a mutation mechanism and an adaptive local search, to improve solution diversity, convergence accuracy and search efficiency. Experimental comparisons of ESDO with several well-regarded algorithms on 23 benchmark functions show that ESDO has superior optimization ability. In addition, the practicability of ESDO is verified by tuning the parameters of the FOPID controller of the water turbine governor system.
The experimental results show that the turbine governor system tuned by ESDO is better than those tuned by the other algorithms in terms of response time and overshoot. In this study, some secondary and higher-order terms were neglected when establishing the hydraulic turbine simulation model, and higher-order models can be established in the future. The implementation of the FOPID controller can also be improved. In future studies, ESDO could be applied to a variety of problems in hydraulic engineering, such as the optimal allocation of power stations [60], intelligent fault diagnosis of turbines [61] and the optimal design of gates [62-64]. Conflicts of Interest: The authors declare no conflict of interest. Appendix A, Table A1: Unimodal test functions.
7,214.2
2022-09-27T00:00:00.000
[ "Computer Science" ]
An algorithm for scheduling hybrid slot-size to increase traffic density in EPONs

The Ethernet passive optical network (EPON) efficiently supports different service classes, i.e., Giant Prime concern (GP) traffic and Efficiently-best (EE) traffic, in future-generation networks. The difficult task is finding an efficient Dynamic Bandwidth Allocation (DBA) procedure that serves both service classes simultaneously. This paper suggests an improved version of the DBA algorithm, the Remodeled Hybrid Division-size/Rate (HDSR) scheme. A modified control-message scheduling algorithm is also discussed, together with the Remodeled-HDSR algorithm that fits the proposed scheme. In the modified Remodeled-HSSR scheme, the time cycle length is divided into two subparts. In the first subpart of the planned cycle, bandwidth is allocated to the high-priority (HP) traffic according to its buffer status, and this allocation is not compressed, particularly under lightly loaded buffer conditions. In the second subpart, in order to increase the traffic density, bandwidth is dynamically allocated to the best-effort traffic of multiple ONUs according to the traffic conditions. The remodeled control-message scheduling algorithm coordinates the subparts of the planned cycles and, in turn, increases the synchronization between the two Gate messages sent to a particular ONU.

Introduction

EPON, standardized in IEEE 802.3ah [1], plays an important role in overcoming traffic-allocation bottlenecks in wide area networks. The system is based on a tree topology, shown in Figure 1, and consists of two parts: the optical line terminal (OLT), which resides at the central office (CO), and the optical network units (ONUs), which are located at the users' home premises. Frame transmission is carried out by the OLT, and the frames pass through a 1:N passive splitter to reach a particular ONU. The splitter is a passive device located some distance from the central office, while the ONUs are close to the customer premises. In a PON, bandwidth is controlled with the help of control messages, i.e., Gate and Report messages, exchanged between the OLT and the ONUs. Control messaging is handled by the multi-point control protocol (MPCP) of EPON systems. With MPCP, an ONU reports its buffer status to the OLT through a Report message, and the OLT allocates a portion of the upstream bandwidth to the particular ONU through a Gate message.

Preliminaries

In Ref. [1], a useful and effective uplink dynamic bandwidth allocation method, demand-forecasting DBA (DF-DBA), estimates the ONUs' future demands by statistical modeling and allocates bandwidth to fulfill the predicted demands, which results in reduced delay. Results show that using a simple normal distribution in the DBA engine offers more suitable bandwidth allocation, with a 14% lower packet drop ratio (PDR) than the standard GigaPON access network (GIANT) DBA following a request-grant cycle. In Ref. [2], a DBA scheme is given for the basic service classes of buffered traffic, i.e., Expedited Forwarding (EF), Forward Assurance (FA), and Best Effort (BE).
In such subdivision methods, bandwidth is first allocated to the higher-priority traffic and the remaining bandwidth is then allocated to the BE traffic. As the experimental results verify, the BE traffic therefore suffers larger delays. In Ref. [3], techniques are proposed that check the demanded traffic conditions, allocate more bandwidth depending on those demands, and reduce the dynamic loading of the network. In Ref. [6], the hybrid slot-size/rate (HSSR) DBA algorithm is discussed, in which the cycle time is split into two parts. Part 1 of the cycle is divided, according to the loading conditions, among the HP traffic of all the ONUs, while part 2 of the cycle is allocated to the BE traffic of one or multiple ONUs. In Ref. [7], the authors describe a DBA technique that uses all the received Report messages together with the suggested window sizes to determine which ONU requires a longer allocated span. This scheme efficiently utilizes idle time slots depending on the load conditions. A distinct approach in NG-EPON is to assign wavelengths dynamically according to the future demands of the users. Hybrid EPONs have been grouped into well-defined architectures based on how traffic is controlled over multiple wavelength ports: SSD-EPON, MSD-EPON, and wavelength-agile (WA)-EPON [8]. Dynamic wavelength and bandwidth allocation (DWBA) algorithms are well-defined techniques in which the OLT allocates channel bandwidth across the available wavelengths, distributing slots so that data communication in the upstream direction is effective.

Network architecture and the proposed MHSSR-DBA scheme

In this section we first discuss the PON architecture and the upstream frame format, which consists of both HP and BE traffic. The principle of the proposed MHSSR DBA scheme is then discussed.

Network architecture of the PON system

Fig. 2 shows the structure of the PON system with the bandwidth allocation and frame layout of N ONUs. The N ONUs are connected to an OLT through a passive splitter/combiner. The packets of each ONU are divided into two groups, i.e., HP and BE. In the upstream direction of each time cycle, the bandwidth is divided for the HP traffic of all the ONUs based on the MHSSR scheme. In the example shown, the 1st sub-part of time cycle k is used for the HP traffic of the ONUs, and the 2nd sub-part of cycle k is used for the best-effort traffic of ONU 1 and ONU 2, according to their buffer status.

Water-filling dynamic bandwidth allocation algorithm

The water-filling DWBA algorithm (WF-DBA) [6] is based on the approach in [7]. WF-DBA sorts the channels and grants bandwidth first on the less-loaded channel, so that the load is spread uniformly across the channels.
For example, if the capacity of two wavelength channels exceeds the reported traffic bandwidth, the grant is placed on only one channel; in general, an ONU can be granted one or more wavelength channels depending on the reported bandwidth and on the difference between the loads of the wavelength channels in the overall bandwidth-allocation process. Building on WF-DBA, modified IPACT and NG-EPON, this paper suggests a first-fit DWBA to allocate traffic to ONUs in a well-distributed manner, reducing delay and avoiding frame re-sequencing problems. The first-fit DWBA grants each request on one channel, namely the first channel available at that instant of time. References [8-11] propose schemes for multithread polling in long-reach PONs.

First-fit DWBA scheme

Based on repeated analysis of WF-DBA and modified IPACT, and on the advancements made for NG-EPON, this scheme applies the first-fit DWBA method to allocate traffic to the ONUs according to their loading (buffer) conditions in a disciplined manner, rescheduling frames according to the load. The first-fit DWBA grants on the first channel that is suitable at the time of the scheduling decision. References [8-11] deal with multithread polling in long-reach PONs; however, the problems addressed here are not considered there. The following symbols are used in the suggested algorithm.

Suggested MHSSR-DBA scheme

In the MHSSR scheme, the cycle span is primarily divided into two sub-parts, as in the HSSR and DVGP strategies. The 1st sub-part of the cycle is used for the HP traffic, while the 2nd sub-part is used for the buffered BE traffic of the ONUs in the PON. The equation defining the span length of a time cycle, T_cycle, is given below. The maximum allocated length of the time cycle is calculated as T_cycle = sum over all N ONUs of [W_i(HP) + Tg] + sum over the L granted ONUs of [W_i(BE) + Tg] (1), where L specifies the number of ONUs granted BE bandwidth in a time cycle, W_i(HP) and W_i(BE) are the window sizes for the HP and BE traffic of ONU i, respectively, and Tg is the guard time. To allocate bandwidth dynamically, the proposed scheme reassigns the unoccupied predefined slots, i.e., the surplus bandwidth of the lightly loaded ONUs, to the heavily loaded ONUs. Fig. 3 shows the operation of the PON system using the MHSSR DBA scheme: two time cycles, k and k+1, are shown with traffic grants based on buffer status. In the 1st sub-part of cycle k, surplus HP bandwidth W(HP) is taken from the lightly loaded ONUs and given to ONU N. In the 1st sub-part of cycle k+1, all the ONUs are heavily loaded, and the bandwidth is allocated among ONU 1, ONU 2, ONU 3, ..., ONU N according to their buffer status.
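Before turning to the detailed grant equations, here is a minimal Python sketch of the cycle-length bookkeeping of Eq. (1) and of redistributing the unused HP bandwidth of lightly loaded ONUs to heavily loaded ones. The redistribution rule and all names are simplified assumptions made for illustration; the paper's Eqs. (2)-(6) are not reproduced exactly.

```python
from typing import List

GUARD_TIME = 1.0   # guard time Tg, in the same (arbitrary) time units as the windows

def cycle_length(w_hp: List[float], w_be: List[float]) -> float:
    """Eq. (1) sketch: total cycle time = sum of HP windows + sum of BE windows,
    each followed by a guard time."""
    return sum(w + GUARD_TIME for w in w_hp) + sum(w + GUARD_TIME for w in w_be)

def redistribute_hp(requested: List[float], fixed_slot: float) -> List[float]:
    """Illustrative surplus sharing: ONUs requesting less than the fixed HP slot
    donate their surplus, which is split equally among the ONUs requesting more
    than the slot (a simplified stand-in for the paper's Eqs. (5)-(6))."""
    surplus = sum(max(fixed_slot - r, 0.0) for r in requested)
    overloaded = [i for i, r in enumerate(requested) if r > fixed_slot]
    grants = [min(r, fixed_slot) for r in requested]
    if overloaded:
        share = surplus / len(overloaded)
        for i in overloaded:
            grants[i] = min(requested[i], fixed_slot + share)
    return grants

# Example with 4 ONUs (time units are purely illustrative).
req_hp = [20.0, 80.0, 35.0, 90.0]
grants = redistribute_hp(req_hp, fixed_slot=50.0)
print(grants, cycle_length(grants, w_be=[60.0, 60.0]))
```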
In the situation considered here, some ONUs carry little traffic while others carry additional traffic, and the total window size required by the heavily loaded ONUs roughly matches the surplus defined by the equations below. This situation corresponds to cycle k in Fig. 3. In this case the grant W_grant(i) is calculated, where W_ex(i) is the extra bandwidth given to the highly loaded traffic of ONU i. Eq. (5) is used to calculate the total excess bandwidth W_Tex, where M is the number of lightly loaded ONUs in a given cycle. Eq. (6) is then used to distribute this excess bandwidth W_ex(i) fairly among the heavily loaded ONUs, and the process is carried out accordingly. The MHSSR DBA scheme is then applied to the N ONUs of the network, and the different cases of the MHSSR plan are handled per ONU.

Performance evaluation and simulation procedure

In this section, the evaluation parameters and results of the MHSSR scheme are tabulated and verified as a function of the traffic load. Repeated experiments under different conditions compare the results of the suggested DWBA with modified-IPACT and WF-DBA, with lambda_max = 4 and lambda_max = 2. Fig. 6 shows the percentage of bandwidth that remains unallocated (wasted) for cycle lengths from 0.5 ms to 3 ms. The DVGP and HSSR techniques discussed above produce higher bandwidth wastage than the scheme proposed in this paper: in both techniques, the bandwidth allocated to the HP traffic does not change with the load, which leaves bandwidth idle for the lightly loaded ONUs. In contrast, in our Remodeled-HSSR scheme the HP bandwidth allocation is improved, and the idle bandwidth of each ONU is reduced.

Average packet-delay analysis of modified-IPACT, WF-DBA, and the proposed DWBA

Fig. 7 compares modified-IPACT, WF-DBA and the proposed DWBA in terms of the average packet delay at a given load. The frame delay depends on the allocation conditions and on the load generated at the users' premises.

Conclusion

This paper illustrates a first-fit DWBA strategy for EPON in future-generation networks that grants each ONU's data frames in a single allocation on one wavelength channel, requiring less bandwidth for transmission and causing fewer re-sequencing problems. The method utilizes bandwidth efficiently by reducing guard-time overhead compared with WF-DBA and rebuilt-IPACT DBA. An ONU is allowed to transmit data equal to the granted bandwidth on the one allocated channel, which reduces the wasted grant time; in Remodeled-IPACT and WF-DBA, frames denied for transmission are left out and bandwidth remains unused because of frame-size mismatch. This paper thus improves a version of the DBA algorithm, the Remodeled-HSSR scheme, which increases stability under high-speed traffic in EPON systems.
2,994.6
2021-07-30T00:00:00.000
[ "Computer Science", "Engineering" ]
Novel deep genetic ensemble of classifiers for arrhythmia detection using ECG signals The heart disease is one of the most serious health problems in today’s world. Over 50 million persons have cardiovascular diseases around the world. Our proposed work based on 744 segments of ECG signal is obtained from the MIT-BIH Arrhythmia database (strongly imbalanced data) for one lead (modified lead II), from 29 people. In this work, we have used long-duration (10 s) ECG signal segments (13 times less classifications/analysis). The spectral power density was estimated based on Welch’s method and discrete Fourier transform to strengthen the characteristic ECG signal features. Our main contribution is the design of a novel three-layer (48 + 4 + 1) deep genetic ensemble of classifiers (DGEC). Developed method is a hybrid which combines the advantages of: (1) ensemble learning, (2) deep learning, and (3) evolutionary computation. Novel system was developed by the fusion of three normalization types, four Hamming window widths, four classifiers types, stratified tenfold cross-validation, genetic feature (frequency components) selection, layered learning, genetic optimization of classifiers parameters, and new genetic layered training (expert votes selection) to connect classifiers. The developed DGEC system achieved a recognition sensitivity of 94.62% (40 errors/744 classifications), accuracy = 99.37%, specificity = 99.66% with classification time of single sample = 0.8736 (s) in detecting 17 arrhythmia ECG classes. The proposed model can be applied in cloud computing or implemented in mobile devices to evaluate the cardiac health immediately with highest precision. Introduction In the last decades, solving problems from various fields, including medicine, using various machine learning (ML) techniques is very popular [1-6, 9, 46, 50, 53, 54, 57, 58, 60, 61, 75, 76]. This popularity is due to the fact that ML can cope with problems that are difficult to solve in a conventional way due to the unknown rules. Due to the properties of learning and generalization of knowledge, these methods are able to solve many problems. Artificial intelligence techniques achieve high performance in various fields of science. The advantages of ML (in particular, computational intelligence) lie in the properties inherited from their biological counterparts, such as learning and generalization of knowledge (e.g. artificial neural networks), global optimization (e.g. evolutionary algorithms), and use of imprecise terms (e.g. fuzzy systems). Electronic supplementary material The online version of this article (https://doi.org/10.1007/s00521-018-03980-2) contains supplementary material, which is available to authorized users. The presented DGEC method draws inspiration from three areas of ML: (1) ensemble learning (EL), (2) deep learning (DL), and (3) evolutionary computation (EC). More information about EL and EC can be found in [51], and about DL in [77]. Heart diseases Electrocardiography (ECG) is the most popular and the basic technique of diagnosing heart diseases. This is because it is noninvasive, simple and provides valuable information about the functioning of the circulatory system. Cardiovascular diseases are serious social issue, because of: (a) highest mortality in the world (37% of all deaths, 17.3 million people per year [7,8,71]), (b) high incidence, and (c) high costs of treatment (long-lasting and expensive treatment caused by chronic course of the disease [34,68]). 
The arguments quoted above will intensify because the number of deaths will grow from 17.3 million (2016) to 23.6 million (2030) [7,8,33,71], which will be caused by the progressive aging of the population. Current methods for the diagnosing of heart abnormalities are based on the calculation of the dynamic or morphological features of single QRS complexes. This solution is error-prone and difficult because of the variability of these characteristics in different persons [48]. For these reasons, the methods currently presented in the scientific literature do not obtain sufficient performance [23]. The above facts present a strong motivation to conduct research on new methods to support the medical diagnosis early and more effectively diagnose the heart disorders. An important aspect of our research is also to reduce the computational complexity of the developed algorithms in the context of implementing our solution in cloud computing or mobile devices to monitor the health of patients in real time. Goals The main goals of this paper are as follows: Goal 1 Design new and efficient ensemble (network) of classifiers based on EL, DL, and EC for the automatic classification of cardiac arrhythmias based on segments of ECG signal. Goal 2 Design method for patient self-control and prevention application in telemedicine and cloud computing or mobile devices characterized by low computational complexity. Goal 3 Design universal method for the general population. Novelty Our main novel contribution is: Novel contributions of this work are based on works [12,13,22,23], focused on: N1 New deep multilayered structure of the system (ensemble of classifiers) provides appropriate information flow and fusion. N2 New method of combining the system nodes (classifiers) based on genetic layered training (selection of classifiers/experts answers/votes). Previous research The work described in the article includes stage III of conducted research, and the remaining stages Nos. I and II are described in earlier articles: Stage I -the focus was on designing and testing methods for signal preprocessing, feature extraction, selection, and CV. In this type of work, only single classical classifiers were tested, optimized by genetic algorithm (GA). A description of this type of work is presented in the article [52]. Stage II -the focus was on developing ML methods by designing, optimizing and testing a genetic (two layers, 18 or 11 classifiers) ensemble of classifiers, combining the advantages of EL and EC. In this research, only one type of classifier and one path of signal preprocessing and feature extraction were used. A description of this research is presented in the article [51]. Stage III -the focus was on further developing ML methods by designing, optimizing and testing a more complex deep genetic (three layers, 53 classifiers) ensemble of classifiers combining the advantages of EL [21,37,38,47,62,72], EC [14,28] and DL [11,16,17,27,32,39,63]. In this research, four types of classifiers and 12 paths of signal preprocessing and feature extraction were used. This increase in diversity has increased the efficiency of the system. A description of this research is presented in this article. The concept of ECG signal analysis presented in this article is based on previous research, but the solution proposed, in a decisive way, differs from previous works. The proper direction of development of the proposed idea resulted in obtaining definitely better results. 
Assumptions

The described study is based on the new methodology presented in articles [51,52]. The main assumptions of the new methodology are as follows:
A1: Analysis of 10-s segments of the ECG signal (longer than single QRS complexes).
A2: No signal filtering.
A3: No detection and segmentation of QRS complexes.
A4: Use of feature extraction comprising the estimation of the power spectral density.
A5: Use of feature (frequency component) selection based on GA.
A6: Each analyzed ECG signal segment contains one class of abnormality (with the exception of normal sinus rhythm).
A7: Use of the stratified tenfold CV method.
A8: Use of the Winner-Takes-All (WTA) rule for the classification of the ECG signal samples.
The use of the new approach has the following benefits: (1) a reduced number of classifications (on average 13 times fewer, for a heart rate of 80 beats per minute) and (2) elimination of the detection and segmentation of QRS complexes. These advantages reduce the computational complexity, which enables the use of the proposed solution in cloud computing or mobile devices (real-time processing). In addition, analyzing longer segments of the ECG signal gives better outcomes for the classification of several diseases, for example atrial-sinus and atrioventricular conduction blocks, Wolff-Parkinson-White syndrome, and elongated PQ intervals [52].

ECG dataset

The ECG signals were obtained from the MIT-BIH Arrhythmia database [45] from the PhysioNet service [31]. The details of the data used are given below. The appropriate balance of the data is an important aspect: in this research, the share of analyzed ECG signal segments per class ranges from 1.34% to 25.94%, with an imbalance ratio (IR) of 19 (see Table 1). More information about the dataset used can be found in [51].
(Table 1: Dataset description with randomly selected segments of ECG signals and their division by stratified tenfold CV into testing and training sets [51].)

Methods

Fig. 1 presents the subsequent phases of processing and analysis of the ECG signals.

Phase I: preprocessing with normalization. Gain reduction and constant-component reduction were used. Three types of normalization were tested: (a) standardization of the signal (signal standard deviation = 1 and mean signal value = 0), (b) rescaling of the signal to the range [-1, 1] plus reduction of the constant component, and (c) lack of normalization.

Phase II: feature extraction. First, the power spectral density (PSD) [64] of the ECG signal was estimated based on Welch's method [70] and the discrete Fourier transform (DFT) [64]. Subsequently, the transformed signal was logarithmized to normalize the frequency components of the PSD. Four Hamming window widths, (a) 128, (b) 256, (c) 512, and (d) 1024 samples, were used to calculate the PSD. More information about the feature extraction can be found in [51,52].

Phase III: feature selection. Three methods have been tested: (a) no selection, (b) selection based on GA [14,59], and (c) selection based on particle swarm optimization (PSO) [36]. The selection based on GA obtained the best result, so GA was applied for the feature (frequency component) selection. Successive single ECG signal features (given as the input data of the classifiers) are represented as genes in the population of individuals; a gene can take the value 0 (rejected feature) or 1 (accepted feature). The GA parameters are described in Tables 2 and 3.

Phase IV: cross-validation. Two methods have been tested: (a) stratified fourfold CV and (b) stratified tenfold CV [38].
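Before moving on to the cross-validation results, here is a minimal Python sketch of the Phase I-II pipeline described above (normalization followed by Welch-based PSD estimation with a Hamming window and log scaling). The 360 Hz sampling rate and all names are assumptions made for illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import welch

FS = 360   # MIT-BIH Arrhythmia sampling rate in Hz (assumed here)

def standardize(x: np.ndarray) -> np.ndarray:
    """Normalization variant (a): zero mean, unit standard deviation."""
    return (x - x.mean()) / (x.std() + 1e-12)

def rescale(x: np.ndarray) -> np.ndarray:
    """Normalization variant (b): remove the constant component, rescale to [-1, 1]."""
    x = x - x.mean()
    return x / (np.max(np.abs(x)) + 1e-12)

def psd_features(segment: np.ndarray, window_width: int = 512) -> np.ndarray:
    """Phase II sketch: Welch PSD with a Hamming window, then log scaling."""
    _, pxx = welch(segment, fs=FS, window="hamming", nperseg=window_width)
    return np.log10(pxx + 1e-12)

# Example on a synthetic 10-second segment (3600 samples at 360 Hz).
segment = np.random.randn(10 * FS)
features = psd_features(standardize(segment), window_width=512)
print(features.shape)
```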
Of the two, the stratified tenfold CV method obtained the best results. The testing and training sets were created by randomly selecting ECG signal segments separately for each class, maintaining the proportions between the classes. Table 1 presents the division of the signal segments into training and testing sets. Stratified CV means that each set has a proportional number of instances from all the classes. More information about the CV can be found in [51].

Phase VI: parameter optimization. Three methods have been tested: (a) particle swarm optimization, (b) GA, and (c) grid search. Among them, GA achieved the best results. Tables 2 and 3 describe the GA parameters used.

Evolutionary neural system

For single classifiers, the evolutionary neural systems comprising the SVM, kNN, PNN, and RBFNN classifiers obtained the best results. The core of the evolutionary neural system is a classifier optimized using the GA. The selection of features and the optimization of the classifier parameters were carried out using a GA coupled with the stratified tenfold CV method.

Classical ensembles of classifiers (CEC)

A classic, two-layer bagging-type ensemble of classifiers was used. The first layer consisted of ten SVM classifiers (nu-SVC), trained on the ten consecutive combinations of testing and training sets from the stratified tenfold CV. In the second layer, the answers of the classifiers from the first layer were combined using majority voting. The GA was used to optimize the CEC system.

Deep genetic ensemble of classifiers (DGEC)

DGEC is a three-layer (48 + 4 + 1) system. In the DGEC system, each classifier of the first layer is optimized in order to maximize the accuracy of arrhythmia classification. In the second and third layers, a knowledge-extraction process based on the classifier answers from the first layer, on DL, and on the genetic selection of features leads to the final decision of the system. In previous work, we designed and tested: (1) the genetic ensemble of classifiers optimized by classes (GECC system) and (2) the genetic ensemble of classifiers optimized by sets (GECS system), described in [51].

Philosophy

The inspiration for the DGEC system was the deep-learning approach, which mimics mechanisms occurring in the neocortex of the brain. The characteristic features of the system are as follows:
• Classifiers as neurons, connected in a network.
• Layered learning: as in DL, the learning progresses in stages.
• Genetic layered training: the optimization of connections between classifiers from adjacent layers (feature selection), realized by a GA, is analogous to the elimination of connections between neurons in the brain; the feedback occurring during training, in the form of the GA (genetic optimization) and of the CV (training), is similar to back-connections in the brain.
• Diversity is present in the classifiers, the data preprocessing, and the connections, and is analogous to the different types of neurons, signal processing and irregular connections between neurons in the neocortex of the brain. The diversity of classifiers (four types) is included in the first layer. The diversity of data preprocessing is present in the first layer (three types of normalization and four Hamming window widths). The diversity of connections lies between the first and second layer and between the second and third layer, because not all possible connections between the classifiers occur.
• Bipolarity is noticeable in the value of the transmitted signals, taken from the set {0, 1}, similar to the action potential of nerve cells (neurons).
• Multilayered (depth): according to the definition of DL, networks that have more than two layers in their structure are considered deep, which is analogous to the neocortex of the brain, which consists of seven layers.
• Abstract learning takes the form of internal feature extraction and transformation of information in subsequent layers of the structure, generating more complex features that are abstract concepts, as in the brain.
The deep structure of the designed system (network) consists of three layers; the term genetic indicates that the GA plays a key role in this research, and the ensemble comprises 53 classifiers (nodes).

GA settings and notes for the first layer (cf. Table 2): the formula for calculating the fitness function uses w_l = 1, the weight for training-set errors; w_t = 1, the weight for testing-set errors; err_Lsum, the sum of errors over all ten training sets; err_Tsum, the sum of errors over all ten testing sets; and F_a/F = C_F (Formula 8 in Sect. 2.5). The number of features was reduced by about half (from 4001 to 2000 frequency components) as a result of the feature selection (Table 5). The number of outputs is 17, each from the set {0, 1}, with the value "1" assigned to the most stimulated output (class). Optimizing parameters: the final parameter ranges were chosen experimentally, starting from a broader range.

GA and classifier settings for the second and third layers (cf. Table 3): crossover type scattered, crossover probability 0.9; mutation type uniform, mutation probability 0.1; number of best individuals surviving unchanged: 10; scaling of the fitness function value: ranking; parent selection method: tournament; the formula for calculating the fitness function is given in Table 2, Equation 1. The number of features was reduced by about half in the second layer (from 204 to 100 classifier answers/votes) and by about two-thirds in the third layer (from 68 to 25 classifier answers/votes) as a result of the feature selection (Table 6). Classifiers: in the second layer, four optimized, trained and tested classifier-judges (one classifier type x one CV type); in the third layer, one optimized, trained and tested classifier-judge (one classifier type x one CV type). Basic SVM parameters: type C-SVC in the second and third layers; linear kernel in the second and third layers; number of outputs: 17, from the set {0, 1}, in the second layer, and 1, from the set {1, ..., 17}, in the third layer; the -c (cost) parameter specifying the margins equal to the default value of 1. Optimizing parameters: none.

Layered learning: the first supervised training was performed for the 48 classifiers of the first layer. The second supervised training was performed for the four classifiers of the second layer, based on the answers received from the 48 classifier models of the first layer.
The third supervised training was performed for one classifier from the third layer based on the answers received from four models of classifiers from the second layer. Cross-validation-the stratified tenfold CV was coupled with GA (in first, second, and third layer of DGEC system), and all individuals (feature vectors) in the population were tested on all ten testing sets and ten training sets. This solution minimizes over-training. First layer -Genetic feature selection was used to feature (frequency components) selection and parameter optimization for 48 classifiers in the first layer. Second and third layer -Genetic layered training was used to tune the ensemble of classifiers structure in second and third layer, relying on feature selection (experts or judges votes) from the first or second layer, based on reference answers. The aim of GA was to reject the incorrect answers (votes) of classifiers (nodes) from the first or second layer, based on the errors in all testing and training sets, and accept only correct answers (votes) as shown in Fig. 5. Genetic layered training is a novel approach of connecting classifiers (ensemble combination), and it is effective through transformation of one output into 17 outputs of classifiers. Second layer The second layer of the DGEC system has four judges with SVM classifiers (C-SVC, linear). These four meta-classifiers were developed in order to assess the experts votes from the first layer. Each of the judges was assigned to a group of 12 classifiers, which meant that the first judge evaluated the votes from 12 SVM classifiers, second from 12 kNN classifiers, third from 12 PNN classifiers, and fourth from 12 RBFNN classifiers. These 48 classifiers (from the first layer) have 17 outputs each, so the length of input features vector, to the second layer, is equal to 816 votes. Therefore, on the inputs of each of the four judges (classifiers from the second layer), 204 features are given. Third layer The third layer of the DGEC system has one judge, i.e., SVM classifier (C-SVC, linear). This one meta-classifier was designed to evaluate judges votes from the second layer. Four SVMs (from the second layer) have 17 outputs each, so the length of input features vector, to the third layer, is equal to 68 votes. The analogous two-layer system (48 classifiers ? 1 classifier) was also tested in the study. However, such a system was characterized by less effective training, which did not yield good ECG classification results. Therefore, this system is not widely discussed in the article. Hence, it can be concluded that another layer has contributed to boost the training efficiency of the system. Many parameter configurations of GA and SVM, kNN, PNN, and RBFNN classifiers were tested as part of the study. The details of the optimization parameters are presented in Tables 2, 3, and 5. Basic and optimizing parameters of 48 classifiers from the first layer of DGEC system and GA parameters used for feature selection and optimization of classifier parameters are presented in Table 2. Basic and optimizing parameters of (4 ? 1) classifiers in second and third layer of DGEC system and GA parameters used for feature selection and optimization of classifier parameters are presented in Table 3. 
Figures 2 and 3 show the scheme of the subsequent phases of information processing in the DGEC system. Fig. 4 presents the scheme of connections between the layers and the information flow and fusion in the DGEC system. Algorithms 1 and 2 present the DGEC system algorithm. Figure 5 presents the scheme of the genetic layered training (for a single segment of ECG signal and exemplary chromosomes of individuals).

Evaluation criteria

The following coefficients have been determined for the evaluation of the designed methods [30,65]; more detailed information about the calculated coefficients can be found in [51,52]. The equations for the accuracy and specificity coefficients are expressed in terms of N, the number of sets applied in the stratified tenfold CV method (= 10); k, the index of a class; n = 17, the number of classes; M, the total number of classified ECG signal segments compared with the reference responses (labels); m_{k,j}, the number of classified ECG signal segments belonging to the reference class k that have been classified as class j; C_k, the total number of classified ECG signal segments belonging to class k; and G_k, the total number of reference responses (labels) belonging to class k. The acceptance feature coefficient (the smaller, the better) is C_F = F_a / F, where F_a is the number of accepted features and F is the number of all features. More information about the evaluation criteria can be found in [51,52].

Results

The MATLAB R2014b software along with the LIBSVM library was used in this work. A computer with an Intel Core i7-6700K 4.0 GHz CPU (only one core was used) and 32 GB of RAM was used for the calculations. All calculation times, including the optimization, training, and testing stages, are presented in Tables 4, 5, and 6, respectively. We achieved a sensitivity (SEN) of 100% and an error sum (ERR) equal to 0 for all training sets. A comparison of the achieved outcomes for the ensembles of classifiers (CEC, DGEC) and the single classifiers (SVM, PNN, RBFNN, kNN) is presented in Table 4. Table 5 presents detailed results for the first layer of the DGEC system for the four classifiers (RBFNN, PNN, kNN, and SVM), with the values of the optimized parameters for: (a) three signal preprocessing types (no normalization, rescaling plus constant-component reduction, and standardization), (b) four feature extraction types (Hamming window widths of 1024, 512, 256, and 128 samples), and (c) one CV variant (stratified tenfold).

Second layer: optimized parameter values and outcomes for the second layer (4 C-SVC linear SVM classifiers) and the third layer (1 C-SVC linear SVM classifier) of the DGEC system are presented in Table 6. The ERR_L coefficient gives the sum of errors over all the training sets.

Results of feature selection: the results of the feature selection for the 12 classifiers (experts) from the first layer are separated by a dashed black line. Figure 6 presents the answers of the 12 SVM classifiers from the first layer, Fig. 7 those of the kNN classifiers, Fig. 8 those of the PNN classifiers, and Fig. 9 those of the RBFNN classifiers.

Third layer: Figs. 10, 11, 12 and 14 and Table 7 present the results for the entire DGEC system. Fig. 10 shows the confusion matrix for the single SVM classifier of the third layer.
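To illustrate the evaluation criteria listed above, the following sketch computes per-class ACC, SEN, SPE, PPV and FPR from a multi-class confusion matrix, plus the acceptance feature coefficient C_F = F_a/F. This is a generic, illustrative implementation, not the authors' code.

```python
import numpy as np

def per_class_metrics(cm: np.ndarray):
    """cm[k, j] = number of segments of reference class k classified as class j."""
    cm = np.asarray(cm, dtype=float)
    total = cm.sum()
    tp = np.diag(cm)
    fn = cm.sum(axis=1) - tp          # reference class k, predicted as something else
    fp = cm.sum(axis=0) - tp          # predicted as k, reference is something else
    tn = total - tp - fn - fp
    return {
        "ACC": (tp + tn) / total,
        "SEN": tp / (tp + fn),        # sensitivity / recall
        "SPE": tn / (tn + fp),        # specificity
        "PPV": tp / (tp + fp),        # positive predictive value
        "FPR": fp / (fp + tn),        # false positive rate
    }

def acceptance_feature_coefficient(accepted: int, total: int) -> float:
    """C_F = F_a / F (the smaller, the better)."""
    return accepted / total

# Toy 3-class example; a real run would use the 17-class confusion matrix of Fig. 10.
cm = np.array([[50, 2, 1],
               [3, 40, 2],
               [0, 1, 45]])
print({k: np.round(v, 4) for k, v in per_class_metrics(cm).items()})
print("C_F =", acceptance_feature_coefficient(2000, 4001))
```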
Figure 11 presents a comparison of the following coefficients: sum of errors (ERR), accuracy (ACC), sensitivity (SEN), positive predictive value (PPV), specificity (SPE), false positive rate (FPR), and the kappa coefficient, for three variants of recognition (17, 15, and 12 classes). Fig. 12 presents the percentage error values for the particular testing sets of the stratified tenfold CV method for the DGEC system. Figure 13 presents the visualization of the feature-selection outcome (genetic layered training) for the third layer of the DGEC system (one SVM classifier); the accepted features (answers of classifiers) are shown by red dots, and the votes (answers) of the four SVMs (experts) from the second layer are separated by a dashed black line. Table 7 provides detailed information on the recognition efficiency for the 17 ECG signal classes of the DGEC system; the coefficient ERR% indicates the percentage of errors. Fig. 14 presents ERR, ACC, SEN, PPV, SPE, and FPR for each class. It can be seen from Table 8 that we obtained the highest classification performance reported on the same database for the classification of 17 ECG classes.

Hypothesis

The achieved outcomes confirmed the hypothesis that the designed DGEC system enables effective, automatic, fast, computationally inexpensive and universal classification of myocardium dysfunctions using ECG signals. Tables 4 and 8 show the results obtained by our novel method. Our proposed method obtained SEN = 94.62% (ACC = 99.37%, SPE = 99.66%), the highest classification performance in Table 8. It should be emphasized that we classified 17 classes; the classification sensitivity for 15 classes = 95% and for 12 classes = 98% (Fig. 11). Most of the other works in the literature present classification results for only five classes. In this work, we used single 10-s ECG signal segments, and the time required to classify a single segment is only 0.8736 s for the DGEC system. Hence, the developed system can be applied in telemedicine, cloud computing or mobile devices to aid patients and clinicians in improving the accuracy of diagnosis.

Components of the classifier system

The high performance of the DGEC system is presented in Tables 4, 5 and 6 and in Fig. 10. This result is obtained through: (1) the appropriate connection of the system nodes (classifiers) using genetic layered training in the second and third layers of the system, (2) the diversity of the component classifiers (different classifiers make different errors), achieved by different signal normalizations (three types), different Hamming window widths (four types), and different types of classifiers (four types), and (3) the adequate quality of the component classifiers of the system. The transformation of one output into 17 outputs for all component classifiers enabled the high performance of the genetic selection of votes (genetic layered training). Taking the benefits of all component classifiers, and minimizing their drawbacks, was possible through combining the classifiers using genetic feature selection.
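As a structural illustration of the vote combination described above, the sketch below concatenates the one-hot (17-output) votes of the first-layer experts into the feature vectors seen by the second-layer judges and applies a binary selection mask of the kind the genetic layered training would evolve. The mask here is random and all names are illustrative; this is not the authors' trained system.

```python
import numpy as np

N_CLASSES = 17
EXPERTS_PER_JUDGE = 12          # 12 experts of one classifier type per judge

def one_hot_votes(predictions: np.ndarray) -> np.ndarray:
    """Turn each expert's class prediction (0..16) into a 17-element one-hot vote."""
    votes = np.zeros((len(predictions), N_CLASSES), dtype=int)
    votes[np.arange(len(predictions)), predictions] = 1
    return votes

# Simulated predictions of the 48 first-layer experts for a single 10-s ECG segment.
rng = np.random.default_rng(1)
expert_preds = rng.integers(0, N_CLASSES, size=48)
all_votes = one_hot_votes(expert_preds).ravel()            # length 48 * 17 = 816

# Each judge sees the votes of its own group of 12 experts (12 * 17 = 204 inputs).
judge_inputs = all_votes.reshape(4, EXPERTS_PER_JUDGE * N_CLASSES)

# Genetic layered training would evolve a {0,1} mask selecting the useful votes;
# here it is random, just to show the shapes involved.
mask = rng.integers(0, 2, size=judge_inputs.shape[1]).astype(bool)
selected = judge_inputs[0][mask]
print(all_votes.shape, judge_inputs.shape, selected.shape)
```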
Deep multilayer structure of the system

The success of the designed DGEC system has been obtained based on: (1) genetic feature (frequency components) selection in the first layer of the ensemble, (2) layered learning (which accelerated and facilitated the training), (3) genetic layered training in the second and third layer (expert vote selection) applied to connect the classifiers, (4) genetic parameter optimization (an appropriate balance between exploitation and exploration) coupled with stratified tenfold CV, which significantly decreased over-training and hence increased the performance of the DGEC method, and (5) DL (the multilayer structure of the system, in which the extraction of features and pattern recognition occur through an appropriate flow and fusion of information).

Deep learning

The DGEC system is based on DL. This section presents a comparison of the DGEC system with other DL algorithms such as the convolutional neural network (CNN). The advantages of our system are given below:
- higher accuracy (e.g., compared to the work [77]);
- the possibility of greater interference in the optimization of the structure (selection of nodes (classifiers), number of layers, connections between nodes, etc.).
The disadvantages of our method are given below:
- a complex structure requiring a longer system design (longer training and optimization);
- the feature extraction needs to be performed.
The similarities with other systems are as follows:
- it is also a network of neurons (nodes that process information), consisting of nodes in the form of classifiers;
- it also has a deep structure in which similar processes of fusion and flow of information occur (with successive layers, the concepts become more and more abstract).
The differences with other state-of-the-art systems are as follows:
- nodes: these are not classic neurons (with weights, biases, and activation functions) but more complex classifiers, and each node is different (greater diversity of nodes);
- outputs: instead of one, there are 17 outputs from each node (classifier);
- training and optimization: performed in stages, one by one in subsequent layers, with the results from the previous layer passed to the next layer; in the CNN, training and optimization are more global;
- connections: flexibility in designing the connections between nodes (classifiers) from different layers;
- structure: in the first layer, the nodes (classifiers) are called experts, and in the second and third layer they are called judges. In the first layer, a processed ECG signal is given to the inputs of the nodes, whereas in the second and third layer the inputs of the nodes (classifiers) are the votes (17 answers with values of "0" or "1" indicating the recognized class) of each of the classifiers from the preceding layer;
- structure tuning: elimination of incorrect votes (second and third layer), ECG signal feature selection (first layer), and optimization of the classifier (node) parameters are performed using the GA.

Dysfunctions/classes

The classification performance for all classes of the DGEC system is presented in Fig. 14. We can notice from the results that a PPV of over 70% and a SEN of over 70% are achieved despite using imbalanced data, which is a significant success. The worst results have been achieved for the fusion of ventricular and normal beats (PPV = 73.33%) and for supraventricular tachyarrhythmia (SEN = 72.73%), as shown in Table 7.
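The genetic layered training described above can be illustrated by a minimal sketch of a GA that selects which expert votes are passed on to a judge classifier. This is only an assumption-laden illustration of the idea: the operators, parameter values, and the fitness definition below are ours, not the paper's GA/PCOA configuration, and the fitness function (for instance, cross-validated accuracy of a judge penalized by the number of accepted votes) must be supplied by the caller.

```python
import numpy as np

def genetic_vote_selection(fitness, n_votes, pop_size=30, generations=50,
                           crossover_rate=0.9, mutation_rate=0.01, seed=0):
    """Select a binary mask over expert votes with a simple generational GA.

    fitness(mask) should return a score to maximise for a given 0/1 mask
    over the n_votes expert outputs (e.g. 48 experts x 17 votes = 816 bits).
    """
    rng = np.random.default_rng(seed)
    pop = rng.integers(0, 2, size=(pop_size, n_votes))
    scores = np.array([fitness(ind) for ind in pop])

    for _ in range(generations):
        new_pop = [pop[scores.argmax()].copy()]          # elitism: keep the best mask
        while len(new_pop) < pop_size:
            # tournament selection of two parents
            i, j = rng.integers(0, pop_size, 2)
            a = pop[i] if scores[i] >= scores[j] else pop[j]
            i, j = rng.integers(0, pop_size, 2)
            b = pop[i] if scores[i] >= scores[j] else pop[j]
            # single-point crossover
            if rng.random() < crossover_rate:
                cut = rng.integers(1, n_votes)
                child = np.concatenate([a[:cut], b[cut:]])
            else:
                child = a.copy()
            # bit-flip mutation
            flip = rng.random(n_votes) < mutation_rate
            child = np.where(flip, 1 - child, child)
            new_pop.append(child)
        pop = np.array(new_pop)
        scores = np.array([fitness(ind) for ind in pop])

    best = scores.argmax()
    return pop[best], scores[best]
```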
Conclusion

The objective of this study was to develop a new ML method, focusing on EC as well as EL and DL approaches, which enables the effective classification of cardiac arrhythmias (17 classes: normal sinus rhythm, 15 types of arrhythmias, and pacemaker rhythm) using 10-s ECG signal segments. Our main contribution is the design of a novel three-layer (48 + 4 + 1) genetic ensemble of classifiers. The novel system is based on the fusion of three normalization types, four Hamming window widths, four classifier types, stratified tenfold CV, genetic feature (frequency components) selection, EL, layered learning, DL, EC, classifier parameter optimization by the GA, and a new genetic layered training (expert vote selection) to connect the classifiers. The DGEC system achieved a classification sensitivity for 17 cardiac arrhythmias (classes) equal to 94.62% (40 errors / 744 classifications, accuracy = 99.37%, specificity = 99.66%, classification time of a single sample = 0.8736 s). To the best of our knowledge, this is the highest classification performance obtained for 17 ECG classes using 10-s ECG segments. The salient features of our work are as follows: (1) strongly imbalanced data for some classes, (2) classification of 17 classes of cardiac disorders, and (3) application of the stratified tenfold CV method (analogous to a subject-oriented validation scheme). The authors have designed a new DGEC system for the effective (Table 8), automatic, fast (Table 4), computationally inexpensive (Sect. 1) and universal (Table 1) classification of cardiac disorders. The strengths of the research are: (1) the possibility of using our solution in telemedicine and implementing the designed method in mobile devices or cloud computing (only a single lead, lower computational complexity, and low cost), (2) high performance, (3) classification of 17 heart disorders (classes), (4) the design of a new genetic layered training applied to connecting classifiers, and (5) the design of a novel ML method, the DGEC system. Due to the very promising results obtained, the described research is worth continuing. The next stages of the research will include: (1) improving the accuracy of recognition of myocardium dysfunctions through developing and improving algorithms based on the fusion of EL and DL, (2) testing other optimization methods based on EC, and (3) testing the efficiency of the DGEC system with other physiologic signals.
7,105
2019-01-05T00:00:00.000
[ "Computer Science" ]
Elucidating Deviating Temperature Behavior of Organic Light-Emitting Diodes and Light-Emitting Electrochemical Cells

Organic light-emitting diodes (OLEDs) and light-emitting electrochemical cells (LECs) exhibit different operational modes that render them attractive for complementary applications, but their dependency on the device temperature has not been systematically compared. Here, the effects of a carefully controlled device temperature on the performance of OLEDs and LECs based on two common emissive organic semiconductors are investigated. It is found that the peak luminance and current efficacy of the two OLEDs are relatively temperature independent, whereas the corresponding LECs exhibit a significant increase of ≈85% when the temperature is changed from 20 to 80 °C. A combination of simulations and measurements reveals that this deviating behavior is consistent with a shift of the emission zone from closer to the transparent anode toward the center of the active material for both the OLEDs and the LECs, which in turn can be induced by a stronger positive temperature dependence of the mobility of the holes than of the electrons.

Introduction

Electroluminescent devices based on organic semiconductors (OSCs), notably the organic light-emitting diode (OLED) and the light-emitting electrochemical cell (LEC), can be flexible and light-weight, [1] feature large-area emission, [2] and operate with high efficiency at high brightness. [3] The OLED can further feature a fast turn-on time, [11a,11b] and recent studies established the strong effects that self-heating can have on the operation of free-standing LEC devices. [11c,12] However, we also note that a systematic study aimed at establishing the influence of temperature on LEC operation is lacking. Moreover, a direct comparison between the effects of temperature on OLED and LEC devices based on the same OSC is, to the best of our understanding, nonexistent in the scientific literature. It is therefore the goal of the present study to address these issues through the systematic investigation of two OLEDs and two LECs based on the same two emissive OSCs, with their device temperature accurately controlled by a carefully designed temperature setup. Quite unexpectedly, we find that the peak luminance and current efficacy of the two LEC devices increase by ≈85% when the device temperature increases from 20 to 80 °C, whereas the luminance and current efficacy of the two OLEDs are essentially invariant to the same temperature change. Using a combination of optical modeling and experiments, we demonstrate that this behavior can be rationalized by a shift of the peak of the exciton distribution from closer to the transparent anode toward the center of the active material, and we note that such a shift could be induced by a stronger increase of the hole mobility than the electron mobility with increasing temperature. We further find that the operational stability of the LECs is more sensitive to an increase in temperature than that of the OLEDs, and that the turn-on time of the LECs drops significantly with temperature because of a thermally activated ion mobility within the active material. Figure 1a presents the electron-energy structure of the two OLED devices, which are distinguished by the selection of the polymeric OSC emitter, being either a yellow emitter termed Super Yellow [13] (SY) or a blue emitter termed Polymer Blue [14] (PB). The corresponding two OLEDs are accordingly termed "Yellow-OLED" and "Blue-OLED."
A relatively balanced hole and electron injection is achieved by employing high work-function poly(3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) for the positive anode and low work-function Ca for the negative cathode. In order to attain a sufficiently high anode conductivity, a transparent film of indium-tin-oxide (ITO) was included in between the transparent PEDOT:PSS anode and the glass substrate, whereas a layer of Al was deposited on top of the reflective Ca cathode to improve the device stability. Figure 1b displays the corresponding electron-energy structure of the two LEC devices, the "Yellow-LEC" and the "Blue-LEC," as identified by the selection of the polymeric OSC emitter. The active material of an LEC comprises an electrolyte (i.e., mobile ions) in addition to the OSC, and we employed a KCF 3 SO 3 salt dissolved in a hydroxyl-capped trimethylolpropane ethoxylate (TMPE-OH) ion-transporter for this end since LEC devices based on this electrolyte and the SY [11c,13] and PB [15] emitters have demonstrated good device performance. For the LEC, we employed reflective Al for the top cathode and transparent ITO for the bottom anode on top of the glass substrate, since this electrode combination has been reported to deliver a good performance in similar LEC devices. [16] Figure 1c-f presents the temporal evolution of the voltage (upper panels) and the luminance (lower panels) for the four different devices, as identified in the upper panels, during driving by a constant current density of 50 mA cm −2 (Yellow-OLED and Yellow-LEC) or 25 mA cm −2 (Blue-OLED and Blue-LEC). The device data were recorded at a set of externally controlled temperatures of 20, 40, 60, and 80 °C, as identified in the lower left insets of Figure 2c,e. The temperature was accurately controlled by positioning the device under study on a temperature stage, comprising a Peltier element as a heater, and by sandwiching a conformal and high thermal-conductivity three-layer structure, consisting of a 3-mm-thick Al plate, a soft thermal pad, and thermal paste, between the device and the temperature stage. The merit of this approach was verified in a spatially resolved temperature measurement using a thermal camera, which demonstrated that the temperature difference between the center emission area and the non-emitting substrate edges was < 1 °C at the maximum measurement temperature of 80 °C. In addition, we found that this measured device temperature during electrical driving corresponded well with the input temperature of the temperature stage. Results and Discussion The Yellow-OLED ( Figure 1c) and the Blue-OLED (Figure 1e) both display a decrease of the initial voltage with increasing temperature (upper panels). This observation is primarily assigned to an increase of the electron and hole mobility with temperature, which is a characteristic general feature of OSCs. [17] The initial luminance response to the change in temperature differs somewhat between the two devices (lower panels). The Yellow-OLED exhibits a minor decrease of the initial luminance with temperature, which we assign to the commonly observed temperature-induced lowering of the photoluminescence quantum yield (PLQY) of OSCs. [17a] The initial luminance of the Blue-OLED is, in contrast, essentially independent of temperature within the investigated range. 
We tentatively explain this observation by the fact that the lowering of the PLQY is compensated by an improved balance of the electron and hole injection with increasing temperature for the Blue-OLED (note the large difference between the electron- and hole-injection barriers for the Blue-OLED in comparison to the Yellow-OLED in Figure 1a). The long-term stability of the voltage and luminance is, as expected, dropping with increasing temperature for both OLED devices (note the logarithmic x-axis in Figure 1c-f), and Figure S1, Supporting Information, reveals that the time to half-peak luminance (LT50) exhibits an Arrhenius dependence, that is, LT50 ∝ exp(E_a/k_B T), within the probed temperature range, with an activation energy (E_a) of E_a,Yellow-OLED = 0.36 eV and E_a,Blue-OLED = 0.20 eV. The temporal behavior of the Yellow-LEC and the Blue-LEC at different temperatures is depicted in Figure 1d,f, respectively. All characterized LEC devices exhibited a decreasing voltage and an increasing luminance during the early operation, which is due to the formation of the EDLs at the electrode interfaces and the subsequent formation of a p-n junction doping structure within the bulk of the active material. [9b,c] The observation of these characteristic LEC transients thus verifies that the investigated devices are well-functioning LECs. The characteristic LEC operation is further manifested in that the two LEC devices feature a minimum voltage that is 1-3 V lower than that of the corresponding OLEDs, despite that the active-material thickness (≈100 nm) and the OSC are the same. Figure S2a,b, Supporting Information, summarizes the time to peak luminance (t_Lpeak) and the time to minimum voltage (t_Vmin) as a function of temperature for the Yellow-LEC and the Blue-LEC, respectively. It was possible to fit the Arrhenius equation to these turn-on data (t ∝ exp(E_a/k_B T); see dashed lines in Figure S2a,b, Supporting Information), which is in agreement with the ion motion within the active material being thermally activated. [11d] E_a is the activation energy in the Arrhenius equation, and in the context of ion motion it can be thought of as the effective energy barrier height between the initial and final ion state. We find that the activation energy for the turn-on time to peak luminance is 0.81 eV for the Yellow-LEC and 0.61 eV for the Blue-LEC. We note that Burnett and coworkers report a significantly higher activation energy for "the light intensity growth rate" of 1.6 eV for a slightly different LEC system, [11d] which is in agreement with the notion that the active-material morphology plays a critical role for the transient behavior of LEC devices. From an application viewpoint, we also wish to call attention to the fact that the Yellow-LEC exhibits a fast sub-second turn-on time to a high luminance of >1000 cd m−2 already at 20 °C (see Figure 1d). During "ideal" constant-current operation, it is reasonable that t_Lpeak represents the shorter time for the formation of efficient EDLs, whereas t_Vmin estimates the longer time to the establishment of the steady-state p-n junction doping structure (when all ions are locked up in the EDLs and the doping regions).
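As an aside, the way activation energies such as the 0.81 and 0.61 eV values above could be extracted from turn-on data is sketched below. This is a generic Arrhenius fit assuming t ∝ exp(E_a/k_B T); the function name and the example numbers are illustrative assumptions, not measured data from this study.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_activation_energy(temps_celsius, times):
    """Estimate E_a (eV) from t ∝ exp(E_a / (k_B T)) by a linear fit of
    ln(t) versus 1/(k_B T). Returns (E_a, prefactor)."""
    T = np.asarray(temps_celsius, dtype=float) + 273.15   # absolute temperature in K
    t = np.asarray(times, dtype=float)
    x = 1.0 / (K_B * T)
    slope, intercept = np.polyfit(x, np.log(t), 1)        # slope = E_a in eV
    return slope, np.exp(intercept)

# illustrative turn-on times in seconds (not the paper's data)
temps = [20, 40, 60, 80]
t_on = [0.9, 0.12, 0.02, 0.005]
ea, t0 = arrhenius_activation_energy(temps, t_on)
print(f"E_a ≈ {ea:.2f} eV")
```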
As expected, we find that t_Vmin is longer than t_Lpeak for the Yellow-LEC, but this ordering is not observed for the Blue-LEC. We tentatively attribute this unexpected behavior of the Blue-LEC to undesired side reactions that start to take place already during its initial operation, and that these side reactions have a stronger impact on the charge transport than on the emission, presumably since they are localized at one (or both) of the electrode interfaces, far away from the light-emitting p-n junction. [18] The LT50 at 20 °C for the Yellow-LEC is essentially the same as for the Yellow-OLED (Figure S1a, Supporting Information), while it is significantly shorter for the Blue-LEC than for the Blue-OLED (Figure S1b, Supporting Information). This observation brings further support to the notion that side reactions take place in parallel with the electrochemical doping during the early operation of the Blue-LEC. We also find that the operational stability of the two LECs drops faster with increasing temperature than that of the corresponding OLEDs (see Figure S1, Supporting Information), as manifested in higher E_a values of E_a,Yellow-LEC = 0.88 eV and E_a,Blue-LEC = 0.56 eV, and in a more drastic final decay behavior for the LECs (see the transients in Figure 1). [19] We further observe that the drastic final luminance decay of the two LEC devices is accompanied by a very fast increase of the driving voltage (and note that a similar observation has been made by other authors [20]); we speculate that this drastic failure could be due to the formation of an electrically isolating layer of electrochemical side-reaction products within the active material. [18]

An interesting observation is that the temperature dependence of the peak luminance (L_peak) and the efficiency are markedly different for the OLEDs and LECs.

Figure 2. a,b) The peak luminance (left y-axis), the current efficacy (right y-axis), and c,d) the power efficacy as a function of temperature for the devices identified in the insets. e-g) A schematic presentation of the distribution of excitons (yellow shading), p-type doping (red shading), and n-type doping (blue shading) in an OLED (left) and in an LEC (right) for three exciton-distribution scenarios. h) The simulated normalized peak luminance as a function of the exciton peak position for the Yellow-OLED and the Yellow-LEC. The positive anode is located at 0 and the negative cathode at 1 in the interelectrode gap. The active-layer thickness is indicated in the legend (d_AL).

Figure 2a,b reveals that L_peak (left y-axis) and the current efficacy (right y-axis) are essentially independent of temperature for the two OLEDs between 20 and 80 °C, whereas they increase significantly for the two LEC devices. Specifically, for the Yellow-LEC L_peak increases by 83% between 20 and 60 °C, and for the Blue-LEC it increases by 85% between 20 and 80 °C. We mention in passing that the peak current efficacy of the Yellow-OLED of 8.6 cd A−1 (at 20 °C) corresponds to an external quantum efficiency (EQE) of 3.1%, while the peak current efficacy of the Yellow-LEC of 6.0 cd A−1 (at 60 °C) is equivalent to an EQE of 2.2%. Figure 2c,d presents the power efficacy as a function of temperature, a property for which the LEC is more competitive with the OLED because of its lower drive voltage. The cause is the LEC-characteristic operation, which renders the electron and hole injection ohmic and which improves the charge-transport capacity of the bulk of the active material.
The temperature dependence for the power efficacy is also distinctly stronger for the LEC devices than the corresponding OLEDs. In order to understand this deviating temperature behavior of the OLEDs and LECs, we have formulated and analyzed a simple model in which the hole and electron injection is considered balanced, the thickness of the active material (d AL ) is 100 nm, and the key free parameter is the exciton distribution. The latter is described by a Gaussian with a full width at half maximum (FWHM) of 53 nm for the OLED and 12 nm for the LEC. The motivation for the thinner exciton distribution in the LEC devices is that the two in situ formed doping regions confine the exciton formation to the thin p-n junction region, [21] which is supported by, for instance, direct observations of the light-emitting p-n junction in planar surface cells [9c,22] and by impedance measurements. [23] Figure 2e-g presents three scenarios for the simulated steady-state exciton distribution (yellow shading) in the interelectrode gap for the Yellow-OLED (left panels) and the Yellow-LEC (right panels), with the exciton peak position located: i) Closer to the positive anode (Figure 2e), ii) in the center of the interelectrode gap (Figure 2f), and iii) closer to the negative cathode ( Figure 2g). The anodic interface is positioned at 0 and the cathodic interface at 1. The LEC-defining doping regions are modeled with constant gradients, and the p-type (n-type) doping at the anode (cathode) is indicated by red (blue) shading. More details on the simulation can be found in Section 4 and in refs. [9e,24]. Figure 2h presents the modeled forward luminance for the Yellow-OLED and the Yellow-LEC as a function of the exciton peak position, and the simulation data imply a distinctly different behavior of the two devices. Specifically, the ideal value for the exciton peak position is close to the anode at 0.28 for the Yellow-OLED, while it is essentially centered in the interelectrode gap at 0.52 for the Yellow-LEC. This is in agreement with that the metallic Al cathode (positioned at 1) is a more significant exciton quencher than the transparent ITO electrode (positioned at 0), but also demonstrates that the two doping regions in the LEC devices are highly effective quenching sites. More specifically, the essentially centered ideal exciton distribution for the Yellow-LEC is caused by a stronger exciton quenching capacity of the p-type doped Super Yellow region (next to the ITO anode) than the n-type doped Super Yellow region (next to the Al cathode), [25] which effectively compensates for Al being a stronger exciton quencher than ITO. A comparison between the measured temperature-independent luminance data for the Yellow-OLED in Figure 2a and the simulated luminance data in Figure 2h implies that the exciton peak position for the OLED (open symbols) either is relatively invariant with increasing temperature or confined to migrate within the range of 0.2-0.5, where the peak luminance is relatively constant (and close to its maximum). Figure S3, Supporting Information, presents the measured and simulated forward electroluminescence (EL) spectrum of the Yellow-OLED as a function of temperature, and these data strongly imply that the exciton peak position is migrating toward the cathode with increasing temperature. Thus, we conclude that the exciton distribution in the OLED is migrating from closer to the anode toward the center of the interelectrode gap with increasing temperature. 
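To make the modeling assumptions above concrete, the following sketch builds normalized Gaussian exciton profiles for the OLED-like (FWHM = 53 nm) and LEC-like (FWHM = 12 nm) cases, including the FWHM-to-standard-deviation conversion. It is only an illustration of the profile construction under these assumptions; the actual luminance simulation was performed with the commercial Setfos software and additionally accounts for the doping regions, exciton quenching, and optical outcoupling, none of which are reproduced here.

```python
import numpy as np

def exciton_profile(d_al_nm, peak_position, fwhm_nm, n_points=200):
    """Normalized Gaussian exciton density across the active layer.

    peak_position is the fractional position of the exciton peak in the
    interelectrode gap (0 = anode, 1 = cathode); fwhm_nm is converted to
    the standard deviation via sigma = FWHM / (2 * sqrt(2 * ln 2)).
    """
    z = np.linspace(0.0, d_al_nm, n_points)               # distance from the anode in nm
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    g = np.exp(-0.5 * ((z - peak_position * d_al_nm) / sigma) ** 2)
    dz = z[1] - z[0]
    return z, g / (g.sum() * dz)                           # unit-area profile

# OLED-like (FWHM = 53 nm) and LEC-like (FWHM = 12 nm) profiles in a 100 nm layer,
# with peak positions chosen near the simulated optima quoted in the text
z, oled_profile = exciton_profile(100.0, 0.28, 53.0)
_, lec_profile = exciton_profile(100.0, 0.52, 12.0)
```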
A corresponding comparison of the temperature-dependent measured luminance of the Yellow-LEC in Figure 2a with the simulated luminance data in Figure 2h demonstrates that the exciton peak position for the Yellow-LEC (filled symbols) must be shifting either from closer to the anode toward the center of the interelectrode gap or from closer to the cathode toward the center, with increasing temperature. In this context, we mention that a recent study on a similar Yellow-LEC device revealed that the room-temperature emission zone was positioned at 0.3, that is, closer to the positive anode. [9e,24a] Moreover, preliminary data on the evolution of the measured and simulated forward EL spectrum as a function of temperature suggest that the exciton distribution in the Yellow-LEC is migrating from a region closer to the anode toward the center of the interelectrode gap. Thus, our conclusion is that both the OLED and LEC devices exhibit similar qualitative behavior, with a steady-state exciton distribution that is migrating from closer to the anode at room temperature toward the center of the interelectrode gap at elevated temperatures. At this point, we wish to mention that other temperature-induced effects, such as a changing effective width of the exciton distribution or a shifting exciton environment, [12,26] can also influence the device performance, but an investigation of these effects is outside the scope of this study. So what could be the origin of a shift of the exciton peak position with increasing temperature? Previous studies on OLEDs and LECs have shown that the exciton peak position is dependent on the ratio of the hole and electron mobilities, μ_p/μ_n, and that an increase (decrease) in the μ_p/μ_n ratio will result in a shift of the exciton peak position toward the cathode (anode). [3d,27] We therefore suggest that our derived cathodic shift of the exciton peak position for the investigated OLED and LEC devices originates (at least partially) in an increasing μ_p/μ_n ratio with increasing temperature, or, more specifically, in the hole mobility increasing at a faster rate with temperature than the electron mobility.

Conclusions

To summarize, the investigated OLEDs and LECs, based on the same emissive OSCs, exhibit a distinctly different dependency on the device temperature. The peak luminance and current efficacy of the two OLEDs are relatively constant within the temperature interval of 20 to 80 °C, whereas the two LECs exhibit a peak luminance and current efficacy increase of ≈85%. Complementary simulations and measurements demonstrate that this deviating behavior is concomitant with a shift of the exciton peak position from closer to the positive anode at 20 °C to the center of the active material at 80 °C for both device types. We note that this shift can be provoked by a stronger increase of the hole mobility than the electron mobility with increasing temperature. We further find that the LEC turn-on is significantly shortened with temperature because of a thermally activated ion motion within the active material, whereas the operational lifetime of both the OLEDs and LECs is dropping, with the latter being more sensitive. These results thus highlight significant differences in the sensitivity to a changing temperature and emission zone position between OLED and LEC devices, and also reinforce the importance of controlling and reporting the temperature during device characterization.
Experimental Section The electroluminescent OSCs are a yellow-emitting phenyl-substituted poly(paraphenylene vinylene) conjugated copolymer termed "Super Yellow" (Merck KGaA, Darmstadt, DE), and a blue-emitting conjugated polymer "Polymer Blue" (Livilux SPB 02T, Merck); their chemical structures are depicted in Figure S4, Supporting Information. The OLED inks comprised Super Yellow dissolved at 7 g L −1 in cyclohexanone (Yellow-OLED) and Polymer Blue dissolved at 10 g L −1 in cyclohexanone (Blue-OLED). The ITO coated glass substrate (145 nm, R s = 20 Ω □−1 , thin film devices) was cleaned by sequential ultrasonic treatment in detergent (Extran MA 01, Merck), deionized water, acetone, and isopropanol. The cleaned ITO-coated substrate was exposed to 10 min of UV-generated ozone (model 42-220, Jelight Company). Thereafter a poly(3,4-ethylenedioxythiophene): polystyrene sulfonate (PEDOT:PSS, Clevios P VP AI 4083, Heraeus) film was spin-coated at 4000 rpm for 60 s, and dried at 120 °C for 30 min. The dry thickness of the PEDOT:PSS film was 35 nm. The OLED ink was spin-coated on top of the PEDOT:PSS at 3000 rpm for 60 s, and thereafter dried at 70 °C for 2 h. The thickness of the dry active material was 100 nm. The reflective top electrode was deposited by thermal evaporation under vacuum (p < 8 × 10 −6 mbar) through a shadow mask, and it consisted of 20 nm Ca and 100 nm Al. The LEC active material comprises a blend of the electroluminescent polymeric OSC, a KCF 3 SO 3 salt (Aldrich), and a hydroxyl-capped TMPE-OH (Aldrich; M w = 450 g mol −1 ) ion-transporter. The master solutions were prepared with the following solute concentrations in cyclohexanone (Aldrich): 8 g L −1 (Super Yellow) and 10 g L −1 (Polymer Blue, KCF 3 SO 3, and TMPE-OH). The LEC ink was prepared by mixing the master solutions in a solute mass ratio of OSC:TMPE-OH:KCF 3 SO 3 = 1:0.15:0.03. The LEC ink was spin-coated on the ITOcoated substrate for 60 s at either 3000 rpm (Yellow-LEC) or 2000 rpm (Blue-LEC), and the spin-coated active material was dried at 70 °C for 2 h. The dry thickness of the Yellow-LEC (Blue-LEC) active material was 100 nm (90 nm), as measured by a stylus profilometer (DektakXT, Bruker). The reflective Al top electrode was deposited on top of the active material by thermal evaporation. The overlap of the transparent ITO and the reflective top cathode defined four 2 × 2 mm 2 independent OLED/LEC devices on each substrate The devices were encapsulated by attaching a thin glass substrate on top of the reflective electrode with a single-component and UV-curable epoxy (Ossila) to allow for ambient-air characterization. [19] More details on the device fabrication are available in ref. [28]. The optoelectronic characterization was performed with the device under study positioned on a temperature stage, comprising a Peltier element as a heater, with the encapsulation glass facing downward toward the stage. A conformal and high thermal-conductivity three-layer structure, comprising a 3-mm-thick Al plate, a soft thermal pad, and thermal paste, was sandwiched between the device and the temperature stage in order to establish good thermal contact and accurately control the device temperature. The effectiveness of this approach was verified by a spatially resolved temperature measurement using a thermal camera (FLIR A35sc), which demonstrated that the temperature difference between the center emission area and the non-emitting substrate edges was <1 °C at the maximum measurement temperature of 80 °C. 
The device characterization started at the lowest temperature of 20 °C and finished at the highest temperature of 80 °C to minimize effects of thermal annealing. The devices were driven by a constant current density of either 50 mA cm −2 (Yellow-OLED and Yellow-LEC) or 25 mA cm −2 (Blue-OLED and Blue-LEC), with the compliance voltage set to 21 V, and with the ITO biased as the positive anode. A source measure unit (Keithley 2400) supplied the current and recorded the corresponding voltage. The luminance was measured with a photodiode, equipped with an eye-response filter (BPW 21, Osram Semiconductors), which had been calibrated with a luminance meter (Konica Minolta LS-110). All measurements were performed on pristine devices. For the OLEDs, the measurement was stopped when the luminance reached 50% of its peak value, that is, at LT 50 ; while for the LECs, the measurement was ended when the luminance reached zero or the voltage reached the compliance. 1-3 independent OLED devices and 2-3 LECs were characterized at each temperature, and the presented data correspond to that of a typical device, with the exception being the lifetime measurement, for which all device data are presented. The optical simulation was performed with a commercial software (Setfos 4.6.11, Fluxim), and a detailed description of the employed procedure can be found in refs. [9e,24a]. The configuration of the simulated OLED was: glass substrate (thickness = 0.75 mm), ITO (145 nm), PEDOT:PSS (35 nm), active material (100 nm), Ca (20 nm), and Al (100 nm). The exciton profile within the active material was simulated as a Gaussian distribution, with a standard deviation of 22.5 nm (corresponding to a FWHM of 53 nm). The simulation software dictated that the emissive region was transparent, and a thickness of the emissive region of 90 nm was opted, which implied that two 5-nm thin exciton-free and absorbing regions were positioned next to the electrode interfaces. The peak of the exciton profile was shifted from 10 to 90 nm away from the anodic PEDOT:PSS interface in the simulation. The simulated configuration for the LEC device was: glass substrate (0.75 mm), ITO (145 nm), active material (100 nm), and Al (100 nm). The simulated steady-state doping structure comprised a 20 nm intrinsic region sandwiched between a p-type doped region and an n-type doped region. The doped regions featured a constant doping gradient, with the maximum doping next to the corresponding electrode interface. The exciton profile within the intrinsic region was estimated by a Gaussian distribution, with a standard deviation of 5 nm (corresponding to a FWHM of 12 nm). The position of the center of the intrinsic region was shifted from 30 to 70 nm away from the ITO anode in the simulation. The peak of the exciton profile within the intrinsic region displayed a corresponding relative shift to the center of the intrinsic region within the interelectrode gap. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
6,034
2020-11-13T00:00:00.000
[ "Materials Science" ]
Model Predictive Controller-Based Optimal Slip Ratio Control System for Distributed Driver Electric Vehicle

The slip ratio control is an important research topic in in-wheel-motored electric vehicles (EVs). Traditional control methods are usually designed for some specified modes. Therefore, the optimal slip ratio control cannot be achieved while vehicles work under various modes. In order to achieve the optimal slip ratio control, a novel model predictive controller-based optimal slip ratio control system (MPC-OSRCS) is proposed. The MPC-OSRCS includes three parts: a road surface adhesion coefficient identifier, an operation mode recognizer, and an MPC-based optimal slip ratio controller. The current working road surface is identified by the road surface adhesion coefficient identifier, and a modified recursive Bayes theorem is used to compute the matching degree between the current road surface and the reference road surfaces. The current operation state is recognized by the operation mode recognizer, and a fuzzy logic method is applied to compute the matching degree between the actual operation state and the reference operation modes. Then, a parallel chaos optimization algorithm (PCOA)-based MPC is used to achieve the optimal control under various operation modes and different road surfaces. The MPC-OSRCS for the EV is verified on a simulation platform, and simulation results under various conditions show its significant performance.

Introduction

With the development of society, the environment is getting worse [1]. Compared with traditional vehicles, EVs have a great advantage in decreasing environmental pollution. Therefore, more and more scholars are involved in the research of EVs [2,3]. In recent years, slip ratio control has been researched extensively [4]. A model predictive control-based slip control was proposed [5]; the wheel slip ratio was controlled within a stable zone rather than at an optimal value. Aiming at improving EV safety, a sliding mode framework control system was extended [6]. In order to achieve multiobjective optimization control, an MPC-based slip control system for EVs was proposed [7]. A wheel slip control algorithm combining wheel slip ratio and wheel acceleration regulation was proposed [8]. In order to achieve traction control, a decoupling state feedback controller based on the uncertain frictional coefficient was derived [9]. A robust and fast wheel slip control based on the moving sliding surface technique was proposed [10]. A model predictive controller-based multimodel system for optimal slip ratio control was proposed [11]; in that work, the torque is assumed to be constant and the effect of the road surface adhesion coefficient is not considered. However, EVs usually work under various operation modes and different road surfaces, while the present research studies of EVs focus on some typical modes and cannot achieve good performance under various operation modes and different road surfaces. In order to solve this problem, a novel MPC-OSRCS is proposed in this paper. The MPC-OSRCS includes three parts: a road surface adhesion coefficient identifier, an operation mode recognizer, and an MPC-based optimal slip ratio controller. The current working road surface is identified by the road surface adhesion coefficient identifier. In order to accurately describe the state of the road surface, five road surface reference models are established. The matching degree between the actual road surface and the five reference road surfaces is computed.
A modified recursive Bayes theorem is used to calculate the matching degree. The operation mode recognizer is used to recognize the current operation state. In order to accurately describe the operation state, three operation mode reference models are established. The matching degree between the actual operation state and the three operation mode reference models is computed by the fuzzy method. The PCOA [12] is used to realize the optimal design of the MPC. Finally, the control output of the MPC-OSRCS is computed as the weighted output of each model to achieve optimal slip ratio control under various operation states and different road surfaces. The main contributions of this paper cover the following points. (1) The state of the EV is divided into fifteen kinds of typical modes, and a reference model is established for each of these fifteen modes, which separately represent the combinations of five road surfaces and three operation modes. (2) Identifiers are designed for recognizing the road surface adhesion coefficient and the operation mode, respectively. The output of each identifier represents the matching degree between the state of the actual EV and each typical model. (3) Aiming at obtaining the control output under each state of the EV, each type of matching coefficient is substituted into the controller, and the weighted output over each state of the EV makes up the output of the MPC-OSRCS. (4) The optimal design of the MPC is achieved by PCOA, because fast and accurate global optimization can be achieved by it.

System Model and Problem Statement

In order to achieve the optimal slip ratio control, a model of the EV is established in this section; the model of the EV mainly consists of the vehicle, the tyre, and the motor torque. The tyre model and the vehicle model are established according to the EV dynamics. The symbols in this paper and their physical meanings are displayed in Table 1.

Vehicle Model. The rotational movement of the EV and the longitudinal motion are included in the vehicle model in this paper; the vehicle model is established based on a two degree-of-freedom (2DOF) plane model and is shown in Figure 1. Many terms in this section are explained in detail in Rajamani's book [13]. The vehicle model can be described as follows. Longitudinal model: where F_xfl, F_xfr, F_xrl, and F_xrr are the longitudinal forces of the four wheels, respectively. Rotation movement: where T_b is the motor braking torque.

Tyre Model with Various Actuators. With the Dugoff tyre model [14], the longitudinal force of the tyre is given as follows: where F_x is the longitudinal force, C_x and C_y are the longitudinal and lateral tire stiffnesses, respectively, μ is the road surface adhesion coefficient, κ is the longitudinal slip ratio, α is the tire sideslip angle, and F_z is the vertical load. The sideslip of the tyre is not considered; hence, the sideslip angle of the tyre is taken as zero. Therefore, equation (3) can be rewritten accordingly. The value of the wheel normal force can then be calculated. The κ represents the wheel slip ratio, which relates the wheel speed and the vehicle speed. In order to avoid v_x → 0 ⇒ κ_i → ∞, the lower bound ψ is set to 0.1 m/s. Only the longitudinal motion is considered in this paper. In order to realize torque control, a torque balance equation is established based on the vehicle characteristics for each wheel. In this case, the weight of the EV is assigned to the four tyres, and every part equals one quarter of the weight of the vehicle.
Therefore, the single tyre model can be described as follows. Driving motion: the EV works under the uniform speed or acceleration operation mode while r · ω > v. According to equations (1)-(5), (11), and (12), the κ can be described accordingly, where v_x = v. Braking motion: the EV works under the braking operation mode while v > r · ω. According to equations (1)-(5), (11), and (12), the κ can again be described accordingly. According to equations (6) and (12), (dμ/dκ) follows as in [15].

Control Problem Formulation. In order to realize the optimal slip ratio control, some problems should be considered in this paper: (1) Various operation modes and different road surfaces problem: EVs usually work under different operation modes and various road surfaces. A large number of research studies show that the slip ratio is related to the adhesion coefficient and the operation mode [16-18]. Therefore, different operation modes and various adhesion coefficients should be considered. (2) Constraints problem: In order to keep the EV in a stable state, all the control parameters should be kept within their constraint ranges. The κ should be limited within the stable slip ratio range for each road surface, where κ_p is the optimal slip ratio for each road surface. In addition, the motor torque command T should be limited within the maximum motor output torque T_max [7].

Design of Control System

In order to realize the optimal slip ratio control when the EV works under various operation modes and different road surfaces, a novel MPC-OSRCS is proposed in this paper. (1) The road surface adhesion coefficient identifier is used to identify the current working road surface. It includes five reference models, which represent five typical road surfaces, respectively. Each reference model can accurately represent the state of a road surface. A modified recursive Bayes theorem is used to calculate the matching degree between the actual road surface and the five reference road surfaces. (2) The operation mode recognizer is used to recognize the current operation state. In order to accurately describe the operation state, three operation mode reference models are established. A fuzzy logic method is used to calculate the matching degree between the actual operation state and the three reference operation modes. (3) The optimal slip ratio control under various operation modes and different road surfaces is realized by an MPC, and the optimal design of the MPC is achieved by PCOA. (4) In order to obtain the control output under each state of the EV, each type of matching coefficient is substituted into the controller, and the weighted output over each state of the EV makes up the output of the MPC-OSRCS.

3.1. The Identifier of the Road Surface Adhesion Coefficient. The slip ratio κ is related to the road surface adhesion coefficient μ, and different road surfaces exhibit different characteristic relationships between the adhesion coefficient μ and the slip ratio κ. A μ-κ function curve equation was proposed by Zhang et al. [19]; the μ-κ function offers an easy and accurate way to describe the mathematical relationship between the slip ratio κ of the tyre on various roads and the adhesion coefficient μ, where c_1, c_2, and c_3 are the fitting coefficients for the various road surfaces obtained via experimental research. The parameters of the five typical road surfaces are shown in Table 2.
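As an illustration of the two relations just described, the sketch below computes the longitudinal slip ratio with the ψ guard from the text and evaluates a Burckhardt-type μ-κ curve with three road-specific fitting coefficients. The exact functional form of the μ-κ equation from ref. [19] is not reproduced in the source text, so the Burckhardt form used here is an assumption, and the example coefficients are illustrative values rather than the entries of Table 2.

```python
import numpy as np

PSI = 0.1  # m/s, lower bound on speed to avoid division by zero (as stated in the text)

def slip_ratio(v, omega, r):
    """Longitudinal slip ratio for driving (r*omega > v) and braking (v > r*omega)."""
    v_w = r * omega                               # wheel circumferential speed
    denom = max(max(v, v_w), PSI)                 # guard against v -> 0
    return (v_w - v) / denom

def mu_kappa(kappa, c1, c2, c3):
    """Assumed Burckhardt-type adhesion curve mu(kappa) = c1*(1 - exp(-c2*kappa)) - c3*kappa.
    c1, c2, c3 are road-specific fitting coefficients; the paper's exact relation may differ."""
    kappa = np.abs(kappa)
    return c1 * (1.0 - np.exp(-c2 * kappa)) - c3 * kappa

# example: dry-asphalt-like coefficients (illustrative, not from Table 2)
kappas = np.linspace(0.0, 1.0, 101)
mu = mu_kappa(kappas, c1=1.28, c2=23.99, c3=0.52)
kappa_opt = kappas[np.argmax(mu)]                 # optimal slip ratio kappa_p for this surface
```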
In the design of the road surface adhesion coefficient identifier, it is difficult to compute the matching degree between the actual road surface and the five reference road surfaces. The switching idea between two operation modes has been introduced in many literature studies [20-22]. Based on the same method, this theory can also be used to identify road conditions. A modified recursive Bayes theorem is used to compute the weight coefficient ξ_j, where ξ_j represents the matching degree between the actual road surface and the jth reference road surface. The previous relative error is applied to the calculation of ξ_j, so that smooth switching can be achieved. The posterior probability is evaluated by the modified recursive Bayes theorem for the jth model (or value) at the kth time instant, where ε_{j,k} = (y_m(k) − y_j(k))/y_m(k) represents the relative error between the actual output state y_j(k) and the reference value y_m(k), and P_j(k) represents the posterior probability of the jth linearized model at the kth moment. H represents the weight matrix of the state variables, h_e (e = 1, 2, 3, 4) represents the influence factors related to the control performance, and G represents a time-invariant weight matrix that is usually selected to be diagonal. According to the normal distribution, G represents the inverse matrix of the residual covariance. A higher value of G means that the residual variance is small and the confidence in the residual of each model is greater; the higher the value of the elements of G, the stronger the ability to reject a model with a large residual. The weight coefficient of the road surfaces ξ_j is then obtained by normalizing the posterior probabilities. Here, ξ_j is a value between 0 and 1, and the summation of the ξ_j is 1.

The Recognizer of Operation Modes. The T-S fuzzy controller, proposed by the scholars Takagi and Sugeno, is applied to recognize the various operation modes in this paper. The value of PO is defined as seven kinds of states, including very small (VS), small (S), medium (M), big (B), very big (VB), and great (G). At the same time, the value of RO is defined as five kinds of states: very small (VS), small (S), medium (M), big (B), and very big (VB). The value of η is defined as four kinds of states: very small (VS), small (S), medium (M), and big (B). The membership functions for PO and RO are set as shown in Figure 3. Table 3 shows the rule set in the fuzzy inference system (FIS); the rule set includes 7 × 5 = 35 rules. Normalization processing is carried out to calculate the weight coefficient λ_i = η_i / Σ_i η_i, where η_i is the membership value of the ith of the three typical operation modes.
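A minimal sketch of the two weight computations described above is given below: a recursive-Bayes style update of the road-surface weights ξ_j and the normalization of the fuzzy membership values into the operation-mode weights λ_i. The Gaussian-likelihood form, the lock-out floor, and the function names are our assumptions for illustration; the paper's exact weight matrices H, h_e, and G are not reproduced.

```python
import numpy as np

def bayes_weights(posteriors, residuals, cov_inv, floor=1e-6):
    """One recursive-Bayes style update for J reference road-surface models.

    posteriors: previous posterior probabilities P_j(k-1), shape (J,)
    residuals:  relative errors eps_j(k) between measured and model outputs, shape (J,)
    cov_inv:    scalar (or per-model) inverse residual variance playing the role of G
    A Gaussian likelihood exp(-0.5 * G * eps^2) is assumed here for illustration.
    """
    p = np.asarray(posteriors, dtype=float)
    eps = np.asarray(residuals, dtype=float)
    likelihood = np.exp(-0.5 * cov_inv * eps ** 2)
    post = likelihood * p
    post = np.maximum(post / post.sum(), floor)   # normalise, avoid locking a model out at 0
    return post / post.sum()                      # weight coefficients xi_j, summing to 1

def normalise_memberships(eta):
    """lambda_i = eta_i / sum(eta): operation-mode weights from fuzzy membership values."""
    eta = np.asarray(eta, dtype=float)
    return eta / eta.sum()

# usage: xi = bayes_weights(xi, eps, cov_inv=50.0); lam = normalise_memberships(eta)
```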
Design of the MPC. The optimal and constrained control problem is efficiently solved by MPC. MPC is selected to obtain the optimum κ for each operation mode and each road surface in this paper. MPC has a great advantage in satisfying diverse and even conflicting requirements of vehicles [23]. In addition, the effects of model mismatch and unmeasured disturbances can be attenuated by MPC [24,25]. Moreover, the effect of switching between various road surfaces and different operation modes can be attenuated by the weighted output of each MPC.

Model Predictive Control Law. x represents the state vector, u represents the control vector, and y represents the output vector. In order to achieve the optimal MPC design, the continuous-time state-space models in equations (12)-(13) are discretized by the Euler method. k represents the sample time and is defined as k = int(t/T_s), where t is the running time and T_s is the fixed step size. At sample time k, the predictive state is calculated from the discretization of the state-space model of equation (12), described as equation (27), and of equation (13), described as equation (28). According to the principle of MPC, p represents the predictive horizon and m represents the control horizon; m = p = 6 in this paper. U(k) represents the optimization vector and Y(k) represents the predicted control output, where U(k) is defined as an array of the control inputs u and Y(k) represents the output vector from sampling time k to sampling time k + i. Moreover, R(k) represents the reference sequence. Δu(k) is defined as the control input change and can be calculated by Δu(k) = u(k) − u(k − 1). According to the principle of MPC, the relationship between u(k) and U(k) can be established, where u(k) represents the optimal vector used to compute the optimal control action of the EV system.

Design of the Objective Function. In order to realize the optimal control, a multiple objective function for the MPC is established, and optimal control is achieved by the minimization of this objective function. The optimization function includes two kinds of quantities: the motor torque T, which includes the drive torque T_e and the brake torque T_b, and the slip ratio κ of the four wheels. The cost function is composed as follows. (1) The main priority is to make the actual slip ratio κ track the optimal slip ratio κ_p, whatever the conditions of the EV are; the first cost term penalizes this tracking error, where Q represents a positive weight factor that can be used to adjust the tracking performance. (2) In order to save energy, the sum of the squared torque commands should be kept as small as possible; the second cost term penalizes this quantity, where R represents a positive weight factor that can be used to adjust the behavior of T. (3) Aiming at achieving the optimal slip ratio control, the actual slip ratio κ needs to track the optimal slip ratio κ_p as accurately as possible; hence, a third cost term is added. (4) The tyre longitudinal performance is related to T; therefore, in order to ensure the longitudinal stability of the EV, T is limited within T_max, which is enforced by the fourth cost term. The total cost function is the sum of these four terms.

The Process of Optimal Design. The optimal design is realized by a novel PCOA, on which we have carried out previous work. The PCOA has a great advantage in realizing the global optimum. The objective function in the PCOA is that described in equation (30), rewritten in terms of X = f(x_1, x_2, x_3, x_4) ∈ W^4, a vector in the 4-dimensional decision-variable space, where x_i represents the different variable parameters with x_i ∈ [L_i, P_i], L_i being the lower bound and P_i the upper bound of the ith variable. Then, the PCOA evolves a stochastic population of N candidate individuals with 4-dimensional parameter vectors; the candidate individuals undergo the carrier-wave mechanism twice. Different chaotic traces are obtained by the first carrier wave, and the search precision is enhanced by the second carrier wave.
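The following sketch shows one way the four-part cost described above could be evaluated for a candidate torque sequence over the horizon. The interpretation of the third term as a terminal tracking penalty and of the fourth term as a soft torque-limit penalty is our assumption (the source does not spell out the exact forms), and the weight values are illustrative tuning numbers, not the paper's.

```python
import numpy as np

def total_cost(kappa_pred, kappa_opt, torque_seq, t_max,
               Q=1.0, R=1e-4, S=10.0, W=1.0):
    """Evaluate an assumed four-part MPC cost for one candidate torque sequence.

    kappa_pred: predicted slip ratios over the horizon, shape (p, 4)
    kappa_opt:  optimal slip ratio for the identified road surface (scalar)
    torque_seq: candidate wheel torques over the control horizon, shape (m, 4)
    t_max:      maximum admissible motor torque
    """
    track = Q * np.sum((kappa_pred - kappa_opt) ** 2)            # slip-ratio tracking
    energy = R * np.sum(torque_seq ** 2)                          # torque effort (energy saving)
    terminal = S * np.sum((kappa_pred[-1] - kappa_opt) ** 2)      # assumed terminal tracking term
    limit = W * np.sum(np.maximum(np.abs(torque_seq) - t_max, 0.0) ** 2)  # soft torque bound
    return track + energy + terminal + limit

# usage with dummy horizon data (p = m = 6, four wheels)
kappa_pred = 0.12 * np.ones((6, 4))
torque_seq = 80.0 * np.ones((6, 4))
J = total_cost(kappa_pred, kappa_opt=0.15, torque_seq=torque_seq, t_max=240.0)
```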
The Control Output of MPC-OSRCS. As described above, the weight coefficients of the different reference models and the output of each MPC are calculated. At each sampling point, the control output of the MPC-OSRCS is constituted by the output of each MPC, u_{i·j}(k), and its weight coefficients, i.e., u(k) = Σ_i Σ_j λ_i ξ_j u_{i·j}(k), where u(k) represents the control output of the MPC-OSRCS, u_{i·j}(k) is the output of the (i·j)th reference model, and λ_i and ξ_j are the weight coefficients.

Simulation and Analysis

Computer simulation is used to verify the control performance of the proposed MPC-OSRCS. The simulation is based on an 8DOF model simulation platform. In Test 1, the road surface is wet asphalt during 0-0.75 s, wet pebbles during 0.75-1.5 s, dry asphalt during 1.5-2.5 s, ice during 2.5-4 s, and snow during 4-5 s. The traditional method uses only a single MPC under the various operation modes and different road surfaces [17]. The input variables of the traditional MPC are the four motor torques of the EV, and the output variables are the actual values of κ_fl, κ_fr, κ_rl, and κ_rr.

The Performance of Traditional MPC. The simulation results of the traditional MPC are shown in Figure 5. As shown in Figure 5(a), the slip ratio is very large at the beginning, even close to 1. With the increase of velocity, the slip ratio is kept within a steady range. It is clearly shown in Figure 5(a) that the actual slip ratio does not change following the road surface changes; in other words, the slip ratio is only kept at a steady value rather than at the optimal value. This shows that the traditional MPC is not suitable for the multimode case. Therefore, the traditional MPC has difficulty achieving good performance and realizing the optimal slip ratio control in actual operation. We can see from Figure 5(b) that the motor torques of the four wheels T_e (e = 1, 2, 3, 4) are approximately equal to 80 N. The motor torques of the four wheels do not follow the road surface changes, whether the road surface is wet asphalt or snow. However, the required motor torques of the four wheels are different when the road surface changes. In the switching process between various operation modes and different road surfaces, oscillatory behavior may occur. From the simulation results, we can find that the traditional MPC cannot realize the optimal slip ratio control while the EV switches between various operation modes and different road surfaces; then, it is difficult to ensure the longitudinal stability of the EV.

4.1.2. The Performance of MPC-MMCS. The simulation results of the MPC-MMCS are shown in Figure 6. As Figure 6(a) shows, the slip ratio follows the road surface changes during electric vehicle operation. However, the optimal slip ratio cannot be achieved when the road surface changes. It is easy to find in Figure 6(b) that T is also kept within a stable range. The wheel velocity is displayed in Figure 6(c); smooth switching can be achieved when the EV works under different operation modes and various road surfaces. In Figure 6(d), the λ_i accurately reflects the operation modes of the EV. In conclusion, the MPC-MMCS can successfully recognize the operation modes of the EV, yet the optimal slip ratio cannot be achieved when the EV works under different road surfaces.

4.1.3. The Performance of MPC-OSRCS. The simulation results of the MPC-OSRCS are shown in Figure 7. Note that the weight coefficients λ_i and ξ_j are calculated by the fuzzy method and the modified recursive Bayes theorem, respectively.
They represent the matching degree between the state of the actual EV and the three reference operation modes, and between the actual road surface and the reference road surfaces, respectively. The biggest challenge for the proposed MPC-OSRCS is the optimal slip ratio control under different operation modes and various road surfaces. As shown in Figure 7(a), the optimal wheel slip ratio can be achieved no matter what the state of the actual EV is; in addition, the optimal slip ratio follows the road surface changes during electric vehicle operation. When the EV switches between different road surfaces, the optimal slip ratio can also be achieved smoothly. We can see from the figure that the optimal slip ratio control can be realized by the proposed MPC-OSRCS when the EV switches between various operation modes and different road surfaces. Figure 7(b) illustrates the wheel velocity r · ω. We can see from Figure 7(b) that the EV runs smoothly when it switches between various operation modes and different road surfaces. We can see from Figures 7(c) and 7(d) that λ_i represents the matching degree between the state of the actual EV and each operation mode, and ξ_j represents the matching degree between the actual road surface and the reference road surfaces. As described above, the matching degrees closely reflect the operating conditions. We can see from Figure 7(e) that T changes smoothly while the state of the EV changes. Whether the operation mode or the road surface changes, T is kept within a stable range around T = 240. The difference between the motor torques of the four wheels is small, and the ride comfort of the EV is good. Compared with the traditional MPC, the proposed MPC-OSRCS can realize smooth switching and optimal slip ratio control whatever the running state of the EV. The simulation clearly proves that the proposed MPC-OSRCS is suitable for different operation modes and various road surfaces and achieves both smooth switching and optimal slip ratio control.

Test 2. The EV is also tested under various operation modes and different road surfaces in Test 2. The starting speed of the EV is 85 km/h, and only straight running is considered. The state of the EV is deceleration, with its velocity decreasing from 80 km/h to 10 km/h during 0-1.75 s, after which the EV keeps the speed unchanged. At this time, the state of the EV is uniform speed during 1.75-3.25 s. Next, the EV accelerates, and its velocity increases from 10 km/h to 80 km/h during 3.25-5 s. At the same time, the road surface also changes over time. At the beginning, the EV is driving on the ice road surface. After 0.75 s, the road surface changes to snow. After another 0.75 s, the road surface changes to wet pebbles. After 1 s, the road surface changes to wet asphalt. After 1.5 s, the road surface changes to dry asphalt.

The Performance of Traditional MPC. The simulation results of the traditional MPC are shown in Figure 8. As shown in Figure 8(a), the slip ratio is mainly affected by the operation modes, while the changes of the road surface adhesion coefficient are essentially ignored. However, the slip ratio is affected by both the operation modes and the road surfaces, and the optimal slip ratio control cannot be achieved under different operation modes and various road surfaces. Therefore, it is shown that the conventional MPC is not suitable for the multimode case and cannot achieve optimal slip ratio control performance in actual operation.
We can see from Figure 8(b) that the motor torques remain unchanged. In order to be suitable for different operation modes and various road surfaces, the motor torque should change following the state of the EV. However, the motor torques of the four wheels T e (e = 1, 2, 3, 4) are approximately equal to 80 N·m, whether the road surface is wet asphalt or snow. Therefore, it is difficult to ensure the longitudinal stability of the EV under traditional MPC while the EV works on various operation modes and different road surfaces. The conventional MPC cannot ensure driver comfort and longitudinal stability in this situation. From the simulation of the traditional MPC, we can conclude that it is difficult to realize optimal slip ratio control and smooth switching while the EV works on various operation modes and different road surfaces. 4.2.2. The Performance of MPC-MMCS. The simulation results of MPC-MMCS are shown in Figure 9. As shown in Figure 9(a), the slip ratio follows the road surface changes during electric vehicle operation. However, the optimal slip ratio cannot be achieved when the road surface changes. It is easy to see in Figure 9(b) that T is also kept within a stable range. The wheel velocity is displayed in Figure 9(c), and smooth switching can be achieved when the EV works under different operation modes and various road surfaces. In Figure 9(d), λ i accurately reflects the operation mode of the EV. 4.2.3. The Performance of MPC-OSRCS. The simulation results of the proposed MPC-OSRCS are shown in Figure 10. We can see from Figure 10(a) that when the EV works under different road surfaces and various operation modes, the slip ratio is controlled at the optimal slip ratio for the corresponding road surface, no matter how the state of the EV changes. Compared with the traditional MPC, the slip ratio κ changes quickly when the state of the EV changes, which proves that the response of the controller is fast. The optimal slip ratio can be effectively achieved by the MPC-OSRCS, which is suitable for various operation modes and different road surfaces. Figure 10(b) illustrates the wheel velocity r · ω in this test. It is observed that the EV runs smoothly under various operation modes and different road surfaces. We can see from Figures 10(c) and 10(d) that the identifier can quickly distinguish when the state of the EV changes. As described above, the matching degrees change following the state of the EV and can precisely reflect the changes of the EV state under different road surfaces and various operation modes. We can see from Figure 10(e) that T changes smoothly while the state of the EV changes; whether the operation modes or the road surfaces change, T is kept within a stable range around T = 240. The difference between the motor torques of the four wheels is small, and the ride comfort of the EV is good. Compared with the conventional MPC, the proposed MPC-OSRCS can identify different operation modes and various road surfaces, and optimal slip ratio control can be realized. The simulation results reveal that the proposed MPC-OSRCS can better ensure the longitudinal stability of the EV under different operation modes and various road surfaces. Conclusions Aiming at solving the problem that the traditional MPC cannot realize optimal slip ratio control while the EV switches between various operation modes and different road surfaces, a novel MPC-OSRCS is proposed in this paper. It can not only identify different operation modes but also recognize various road surfaces.
The control performance of MPC-OSRCS for the EV is verified while the EV works on different operation modes and various road surfaces. Simulation results demonstrate the advantage of MPC-OSRCS. Compared with the conventional MPC, the MPC-OSRCS can effectively improve longitudinal stability and achieve optimal slip ratio control under various operation states and different road surfaces. Data Availability The raw/processed data required to reproduce these findings cannot be shared at this time as the data also form part of an ongoing study. Conflicts of Interest The authors declare that they have no conflicts of interest.
6,743.8
2020-04-29T00:00:00.000
[ "Computer Science", "Engineering" ]
Approximate Analytical Solutions to Nonlinear Oscillations of Horizontally Supported Jeffcott Rotor : The present paper focuses on nonlinear oscillations of a horizontally supported Jeffcott rotor. An approximate solution to the system of governing equations having quadratic and cubic nonlinearities is obtained in two cases of practical interest: simultaneous and internal resonance. The Optimal Auxiliary Functions Method is employed in this study, and each governing differential equation is reduced to two linear differential equations using the so-called auxiliary functions involving a moderate number of convergence-control parameters. Explicit analytical solutions are obtained for the first time in the literature for the considered practical cases. Numerical validations proved the high accuracy of the proposed analytical solutions, which may be used further in the study of stability and in the design process of some highly performant devices. Introduction The nonlinear dynamics of rotors have long attracted attention, being an interesting subject with considerable technical depths and breadths. The theory of oscillations was intensively developed in the field of high-speed machinery and can be used particularly in studies of a disk on a massless shaft; power generation; land, sea, and air transportation; aerospace; textiles; home appliances; or various military systems. For an analysis of simple machinery, one has to take into consideration the accurate forms of excitation, heating and supports, the complicated geometry of the rotor, and so on. There are many types of rotating machines, with different rotor sizes, complexities, speeds, loads, powers, and rigidities [1]. The nonlinear oscillations of rotating machines were studied by many researchers. Muszynska [2] proposed many possible responses of rotor-stator systems. Karlberg and Aidanpää [3] considered the nonlinear vibrations of a rotor system with clearance, analyzing the two-degree-of-freedom unbalanced shaft in relation to a non-rotating massless housing. The rotor start-up lateral vibration signal is investigated by Patel and Darpe [4]. Vibration responses are simulated for the Jeffcott rotor having two lateral degrees of freedom. The Hilbert-Huang transform is applied to investigate the coast-up rub signal, and the wavelet transform is employed for comparison purposes. The chaotic vibration analysis of a disk-shaft system with rub impact was performed by Khanlo et al. [5], including a consideration of the Coriolis and centrifugal effect. Yabuno et al. [6] explored nonlinear normal modes which considered the natural frequencies in vertical and horizontal directions, investigating the characteristics with primary resonance. Theoretical and experimental investigations are presented by Lahriri et al. [7], considering the impact motion of the rotor against a conventional annular backing guide, and an unconventional annular guide built with four adjustable pins. Various analytical does not imply the presence of a small or large parameter in the governing equations, or the boundary/initial conditions, and can be applied to a variety of engineering domains. The validity of this original method is proved by comparing the results with numerical integration results. We deal with the OAFM in a proper manner and completely differently in comparison with other known techniques. 
The cornerstone of the validity and flexibility of this approach is in the choice of linear operators and optimal auxiliary functions, which both contribute to obtaining highly accurate results. The convergence-control parameters involved in our procedure are optimally identified in a rigorous mathematical way. Each nonlinear differential equation is reduced to two linear differential equations that do not depend on all terms of the nonlinear equation. The present study provides accurate explicit analytical solutions which may be used further in the study of stability, and in the design process of some highly performant devices. The Governing Equations of Motion In this research, we consider the horizontally supported Jeffcott rotor presented in Figure 1. The origin O of the inertial coordinate system, Ouvz, is the intersection of the disk and the bearing center line. The whirling motion is assumed to occur on the U-V plane. The mass of the disk is m, its center of gravity G(u,v) deviates slightly from the geometric center with eccentricity e d .
If ω is the angular velocity of the rotor spinning, the restoring force F can be a symmetric nonlinear cubic function with respect to the vertical deflection r of the shaft: where k 1 and k 3 are positive constants. The nonlinear differential equations that describe the horizontal and vertical oscillations of horizontally supported Jeffcott rotor system are expressed as follows [6,15]: where k 3 u u 2 + v 2 and k 3 v u 2 + v 2 is a nonlinear restoring force due to the bearing clearance, c u and c v are the damping coefficients in the U and V directions, g is the gravity acceleration, and the dot represents the derivative with respect to time. From Equation (3), the deflection of the shaft due to the gravity in the static equilibrium state satisfies: where v st is the static displacement of the geometric center G due to the disk weight. as a consequence, the motion of geometrical center G in terms of deviations u d and v d from the static equilibrium can be rewritten in the directions U-and V-as: and therefore, the resulting equations are: introducing the dimensionless parameters: one can get the dimensionless nonlinear differential equations of motion: where the prime denotes the derivative with respect to τ, and: From Equations (10) and (11), we remark that the linear natural frequencies of the horizontal and vertical directions are slightly different due to the nonlinearity of the restoring force and the static deflection v st given by Equation (5). Furthermore, the same effects produce an asymmetric nonlinear quadratic component. In what follows, an approximate analytical solution will be determined to the asymmetric system (10) and (11) using the Optimal Auxiliary Functions Method (OAFM). Basics of the OAFM The nonlinear differential Equations (10) and (11) can be written in a general form as [22][23][24][25][26][27]: where L is a linear operator, N is a nonlinear operator, and X(τ) is an unknown function. In our particular case, X(τ) = (u(τ),v(τ)). The corresponding boundary/initial conditions for Equation (13) are: We suppose that the approximate analytical solution X(τ) of Equation (13) can be rewritten in the form: where the initial approximation X 0 (τ) and the first approximation X 1 (τ) can be determined as follows. Inserting Equation (15) into Equation (13) we are led to: The initial approximation X 0 (τ) is obtained by solving the linear differential equation: and the first approximation X 1 (τ) follows to be determined from the nonlinear equation: The nonlinear operator N is expanded in the form: To avoid the difficulties which appear when solving Equation (18), accelerating the convergence of the approximate solutions needs, instead of the last term from Equation (18), the employment of another expression. As such, Equation (18) can be rewritten: where F i (τ), i = 1,2, . . . ,p and p are known auxiliary functions depending on the initial approximation X 0 (τ), on the functions which appear in the composition of N[X 0 (τ)], or the combination of such expressions. We remark that the p and the auxiliary functions Fi(τ) are not unique. Accordingly, X 0 (τ) and N[X 0 (τ)] are sources for the auxiliary functions, and it should be emphasized that we have a large amount of freedom to choose these auxiliary functions. In expression (20), C i , i = 1,2, . . . ,p and p are unknown parameters at this moment. We remark that the nonlinear differential Equation (13) is reduced to only two linear differential Equations, namely (17) and (20). 
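As a rough illustration of this two-step structure (not the authors' computation for the rotor), the sketch below applies the same idea to a simple Duffing-type oscillator: the initial approximation solves the linear part, the first approximation is a combination of assumed auxiliary functions chosen so that it and its derivative vanish at the initial time, and the convergence-control parameters are then fitted by minimizing the residual of the full nonlinear equation, which is one of the identification options listed below. The test problem, the auxiliary functions, and the frequency estimate are all assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import least_squares

# Illustrative test problem (NOT the rotor equations): x'' + x + eps*x^3 = 0,
# with x(0) = A, x'(0) = 0.
eps, A = 0.1, 1.0
omega = np.sqrt(1.0 + 0.75 * eps * A**2)      # rough estimate of the oscillation frequency
tau = np.linspace(0.0, 20.0, 400)

def x_approx(C, t):
    # X = X0 + X1: X0 solves the linear part with the given initial conditions;
    # X1 is a combination of assumed auxiliary functions weighted by the
    # convergence-control parameters C. Each auxiliary term and its derivative
    # vanish at t = 0, so the initial conditions hold for any C.
    x0 = A * np.cos(omega * t)
    x1 = C[0] * (np.cos(omega * t) - np.cos(3 * omega * t)) \
       + C[1] * (np.cos(3 * omega * t) - np.cos(5 * omega * t))
    return x0 + x1

def residual(C):
    # Residual of the full nonlinear equation evaluated on the time grid.
    h = tau[1] - tau[0]
    x = x_approx(C, tau)
    xdd = np.gradient(np.gradient(x, h), h)   # numerical second derivative
    return xdd + x + eps * x**3

C_opt = least_squares(residual, x0=np.zeros(2)).x
print("optimal convergence-control parameters:", C_opt)
```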
Now, using the results obtained from the theory of differential equations, the variation of parameters method, the Cauchy method, the Kantorovich method, or the integral factor method [28], we have the freedom to choose the first approximation in the form given in Equation (21), where F j are the auxiliary functions defined in Equation (20) and f i are n functions depending on the functions F j , satisfying the boundary/initial conditions (22). As a consequence, the first approximation X 1 can be determined from Equations (21) and (22). Finally, the unknown parameters C i are optimally identified via rigorous mathematical approaches, such as the collocation method, the Galerkin method, the Ritz method, the least squares method, or by minimizing the residual error. In this way, the approximate solution X(τ) is well determined after the identification of the optimal values of the initially unknown convergence-control parameters C i , i = 1,2, . . . , n. We will prove that our approach is a very powerful tool for solving nonlinear problems without the presence of small or large parameters in the initial Equation (13) or the boundary/initial conditions (14). Application of OAFM to Nonlinear Oscillations of Jeffcott Rotor in the Case of Internal Resonance Taking into account that Ω = ω 1 ≈ ω 2 , the functions f i from Equation (42) and g i from Equation (44) will depend on p 1 , p 2 , and Ω, so that the approximate analytical solution for Equations (26) and (27) is expressed in terms of these functions, where C i , i = 7, 8, . . . , 12 are unknown parameters and f i , g i are given by Equations (48) and (49), respectively. Numerical Example In order to prove the accuracy of our approach, we consider the data for Equations (25)-(27) for each case (simultaneous resonance and internal resonance). The Case of Internal Resonance. In the case of internal resonance, the parameters are chosen accordingly, and the optimal values of the convergence-control parameters are identified for this case. The approximate solution in the case of internal resonance of Equations (10), (11), and (23) becomes the expressions (60) and (61). In Figures 4 and 5, we compared the numerical solutions of Equations (10) and (11) and the approximate solutions (60) and (61), respectively, for the case of internal resonance. From Figures 2-5, a very good agreement can be observed between the approximate solutions and the numerical integration results, which confirms the great potential of the OAFM. Conclusions The objective of this research is the study of the nonlinear vibration of a horizontally supported Jeffcott rotor with quadratic and cubic nonlinearity, where the nonlinear restoring force, due to the bearing clearance and the rotor weight, is considered. The linear natural frequencies in the horizontal and vertical directions have small differences due to the nonlinearity of the restoring force and disk weight. The nonlinear vibrations of the horizontally supported Jeffcott rotor are generated by the rotor eccentricity. Explicit analytical solutions for the two cases are established using our original Optimal Auxiliary Functions Method (OAFM). Our approach considerably simplifies calculations because any nonlinear differential equation is reduced to two linear ordinary differential equations using the so-called auxiliary functions. This idea does not appear in any other methods known in the scientific literature. Our technique is different from other traditional procedures, especially concerning the optimal auxiliary functions that depend on some initially unknown parameters.
We have a large degree of freedom to choose the auxiliary functions and the number of convergence-control parameters.
The obtained approximate analytical solutions are in excellent agreement with the numerical integration results in all cases. Our technique is valid, even if the nonlinear governing equations do not contain small or large parameters. The construction of the first iterations is completely different from other known methods. The optimal values of the convergence-control parameters are identified by means of a rigorous mathematical procedure, providing a fast convergence of the approximate analytical solutions using only the first iteration. It is proved that the OAFM is very effective and efficient in practice. This research provides helpful guidance to solve dynamic problems, and may help to design and manufacture more reliable engineering products.
3,882.2
2022-02-03T00:00:00.000
[ "Engineering", "Physics" ]
Analysis and Evaluation Methods of Seismic Subsidence Characteristics of Loess and Field Seismic Subsidence How to cite item Yan, X., Wang, S., & Wang, N. (2020). Analysis and Evaluation Methods of Seismic Subsidence Characteristics of Loess and Field Seismic Subsidence. Earth Sciences Research Journal, 24(4), 485-490. DOI: https://doi.org/10.15446/ esrj.v24n4.91593 The objective of this research is to analyze the dynamic degeneration of loess and the evaluation method of field seismic subsidence. In this study, Q3 loess is taken as the research object, and the dynamic properties of loess with 10%, 20%, 30% and 35% moisture content are tested by triaxial experiment. In addition, seismic subsidence characteristics of loess with dry densities of 1.4g/cm3, 1.6g/cm3, and 1.8g/cm3 and consolidation stress ratios of 1.0, 1.2, 1.4, and 1.6 are analyzed. Then the simplified seismic subsidence estimation method is used to calculate the relationship between seismic subsidence coefficients at different soil depth in one dimensional field, cycle times, and subsidence depth. The results show that the higher the water content of loess is, the greater the change of seismic subsidence appears. The larger the dry density of loess is, the smaller the change degree of seismic subsidence appears. The larger the consolidation stress ratio is, the greater the change of seismic subsidence occurs in loess. When the depth of soil reaches 9.5m, the maximum seismic subsidence coefficient can reach 0.8%. When the depth of soil layer is 10m, the degree of seismic subsidence is the largest. When the depth of soil layer is 12~16m, the settlement depth caused by earthquake subsidence is small. While the depth of soil layer is 8~12m, the settlement degree is large. ABSTRACT Analysis and Evaluation Methods of Seismic Subsidence Characteristics of Loess and Field Seismic Subsidence Introduction Loess is a kind of soil with weak cementation, large pore size, and easy denaturation when encountering water (Qiu et al., 2018). However, under the action of water and dynamic load, serious geological disasters such as collapsibility and seismic subsidence would occur (Hao et al., 2018). The judgement of seismic subsidence of loess is an important means of seismic safety evaluation. The subsidence of soil under the action of earthquake is seismic subsidence. Due to its unique dynamic characteristics, loess is usually damaged when strong earthquakes occur (Cheng et al., 2018). At present, some experts and scholars have comprehensively analyzed the dynamic characteristics of loess by the triaxial test system. The specific physical properties of loess, such as dry density and water content, have a great impact on the dynamic characteristics of loess and play an important role in the study of the characteristics of loess (Liu et al, 2017). When earthquake subsidence occurs, the soil would subside under the action of dynamic load, and some soil with high water content would collapse due to vibration (Chen et al., 2017). Dry density is also an important internal factor that reflects the seismic subsidence degeneration of loess. The seismic subsidence performance can be calculated by obtaining the volume strain variable through the dynamic single shear experiment. 
Or, on the basis of the volume strain variable data obtained from the dynamic single shear experiment, the incremental calculation equation is established, and the calculation equation of seismic subsidence is established according to the analysis results of field earthquakes Drzewiecki & Piernikarczyk, 2017). It is mainly used to analyze the equivalent linearized site and determine the shear strain time history of different soil layers, and then convert the random vibration waves into harmonics to obtain the equivalent shear strain amplitude and vibration times. Under the same vibration condition, the longitudinal strain value of vibration is obtained through the single shear test of the soil samples in different soil layers, and finally the settlement amount of seismic subsidence under different soil layers is analyzed and calculated (Gao et al., 2017). The simplified seismic subsidence estimation method can obtain the equivalent shear strain of the soil layer through shear model, maximum ground acceleration, overburden load of the upper soil, and stress attenuation coefficient. According to the relationship between the circular cycle and the volume strain, the volume strain value of equivalent shear strain is obtained. After the loading cycle, the volume strain value of the magnitude is calculated (Araujo & Castro, 2017;Sarhosis et al., 2018). In this study, by analyzing the seismic subsidence characteristics of loess and the influencing factors of seismic subsidence, the seismic subsidence changes of loess with different moisture content, dry density, and consolidation stress ratio are analyzed. The seismic subsidence characteristics of onedimensional loess field and the variation of seismic subsidence in different soil layers are analyzed by the simplified seismic subsidence estimation method. This study aims to provide theoretical basis for the subsequent analysis of loess seismic subsidence characteristics and field earthquakes. Characteristics and influencing factors of loess seismic subsidence The main mechanism, disaster development mode, and disaster types of loess seismic subsidence are influenced by the material index of soil, topographic characteristics of loess field, and earthquake. When loess earthquake subsidence occurs, soil quality can significantly affect the intensity of earthquake and the severity of earthquake subsidence. The degree of loess earthquake subsidence is obviously different in different areas. Different forces act on the primary structure of loess during the earthquake. When the external force exceeds the connection strength between the soil particles, the pores in the primary structure of loess would be destroyed and the soil particles would be rearranged. At this time, the surface would show sudden settlement. Seismic subsidence in loess is the result of the interaction of various factors, and the difference in soil structure has a significant impact on the occurrence of seismic subsidence, such as the water content and pore ratio of soil particles. Loess is composed of grain size, and the content of clay between soil grains has a great influence on the occurrence of seismic subsidence. The loess with different water content would also exhibit different kinetic characteristics, and the seismic subsidence of loess would increase with the increase of water content of soil. When the pore ratio is greater than 0.75, the seismic subsidence would increase with the increase of the pore ratio of soil particles. 
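Before turning to the tests, the layer-summation step of the simplified seismic subsidence estimation described above can be illustrated with a short sketch. This is only a minimal illustration under stated assumptions, not the paper's calculation: it assumes the seismic subsidence coefficient (residual volumetric strain) of each layer has already been obtained from the laboratory relationships between shear strain, number of cycles, and residual strain, and it simply sums layer thickness times coefficient; the layer thicknesses and coefficients shown are placeholders, not the paper's data.

```python
def total_seismic_settlement(layers):
    """Sum layer settlements for a one-dimensional soil profile.

    layers : list of (thickness_m, subsidence_coefficient) pairs, where the
        subsidence coefficient is the residual (volumetric) strain of the
        layer expressed as a fraction, e.g. 0.008 for 0.8%.
    Returns the total surface settlement in metres.
    """
    return sum(h * eps for h, eps in layers)

# Hypothetical profile (illustrative values only, not the paper's data):
profile = [
    (2.0, 0.002),   # 0-2 m
    (4.0, 0.004),   # 2-6 m
    (4.0, 0.008),   # 6-10 m, largest coefficient near ~10 m depth
    (6.0, 0.003),   # 10-16 m
]
print(f"estimated seismic settlement: {total_seismic_settlement(profile) * 1000:.1f} mm")
```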
Seismic subsidence test of loess under different conditions Generally speaking, residual strain is used for the representation of loess earthquake subsidence, and the electro-hydraulic servo dynamic triaxial testing machine controlled by microcomputer (Xi'an Lichuang, China) is adopted for the test. During the test, the ratio of fixed junction stress of loess samples is not equal to 1, and the center of dynamic strain amplitude would move in the direction of compression with the cyclic action of dynamic stress. Accumulated residual strain that can't be completely recovered would occur in the loess after the cycle stops. The residual strain can be expressed as Equation 1: Among them, ε r (N) is the residual strain, that is, the seismic subsidence coefficient, H is the height of the loess sample before the dynamic load, and h is the height of the loess sample after dynamic loading. Figure 1A shows the relationship between the time course of loess dynamic strain and the number of vibration with the passage of cycle number N. It can be observed from figure 1B that after N cycles stop, the accumulated residual strain value of loess is the corresponding strain value when the dynamic stress under this cycle number is equal to 0. The Q3 loess, which is 3.5m in the third terrace of the XXX River, is collected from XXX County of XXX City. The loess structure is loose. After testing, it was found that the selected loess has a natural water content of about 11.98%, a dry density of about 1.42g/cm 3 , and a plasticity index of 7.55. The dynamic properties of loess with 10%, 20%, 30%, and 35% moisture content are tested, and seismic subsidence characteristics of loess with dry densities of 1.4g/cm 3 , 1.6g/cm 3 , and 1.8g/cm 3 and consolidation stress ratios of 1.0, 1.2, 1.4, and 1.6 are analyzed Evaluation of seismic subsidence of one-dimensional loess The one-dimensional loess site is shown in Figure 2. When an earthquake occurs in a one-dimensional field, the loess layer would extend horizontally in all directions in the same direction and be propagated by vertical and upward shear waves. At this time, the loess volume would only generate horizontal vibrations conducted by bedrock. When seismic subsidence occurs and the loess is of different densities and of the same type, it can be considered that the seismic subsidence deformation of the same type of loess follows the ratio of the same seismic subsidence Among them, a i is the acceleration amplitude of level i, N ai is the cycle number of a i , a max is the maximum acceleration value. With the change of depth h, the relationship between the stress attenuation coefficient r d of the loess field is shown in Figure 3. Therefore, the stress attenuation coefficient of loess can be expressed by the Equation 5. Here, the equivalent shear strain of the soil layer at any depth is as follows. Among them, δ0 is the pressure covering the soil layer, f is the conversion coefficient of frequency addition, and g is the acceleration of gravity. coefficient increment to the dynamic strain amplitude. When the moisture content, dry density, and consolidation pressure of the loess in the same soil field are the same, the seismic depression coefficient of the loess with different soil depth can be calculated by the Equation 2. 
(2) Among them, △ε p is the increment of the seismic subsidence coefficient, γ d is equivalent shear strain amplitude of loess body during earthquake, ε p is the seismic subsidence coefficient of the cycle that has occurred, ρ 0 is the dry density of loess, ρ ref is the referenced dry density, and a and b are the reference values of earthiness, as shown in Equation 3. Result and discussion The relationship of pore ratio of soil grains, dry density and moisture content to seismic subsidence coefficient The density of loess can determine the maximum deformable quantity of earthquake subsidence, and it is reflected by dry density and void ratio. It can be observed from figure 4A that the earthquake subsidence coefficient would increase with the increase of the pore ratio of soil particles, and there is a linear growth relationship between the two. As can be observed from figure 4B, the dispersion between the seismic subsidence coefficient of loess and the dry density of soil particles (g/cm 3 ) is relatively large, but when the dry density of soil particles is large, a large seismic subsidence also occurs. It can be observed from Figure 5 that when the pore ratio of soil particles is relatively low, the loess with high water content may not be seismic subsidence. When the void ratio is relatively high and the water content of loess is low, seismic subsidence may also occur. This indicates that the influence of water content limit should be considered for loess with low porosity. However, the boundary of water content does not need to be considered in loess with higher porosity. Seismic subsidence variation rule of loess with different moisture content When the consolidation stress and ratio are equal, the influence of different water contents of loess on seismic subsidence is shown in Figure 6. When the consolidation stress ratio is 1.0, the consolidation stress is 200kPa and the dry density of the loess is 1.4g/cm 3 and the pressure is applied on the loess samples, the loess with different moisture contents all have seismic subsidence changes to some extent. It can be observed from figure 4A that when the moisture content of the loess is 10%, different dynamic stresses have little influence on the residual strain of the loess. However, when the moisture content of the loess is greater than or equal to 20% and the dynamic stress exceeds 100kPa, the residual strain of the loess increases significantly, and the soil also shows obvious seismic subsidence changes. It can be observed that the seismic subsidence of loess is mainly caused by the destruction of the pore structure of soil particles under the action of dynamic load. External pressure would reduce the pore ratio of loess soil particles, and the soil mass would become more compact and deformed (Anbazhagan et al., 2017). However, when the water content of loess increases, the cementation between the soil particles would be strengthened, thus reducing the friction strength between the soil particles. Therefore, under the action of the same load and consolidation stress, the pore ratio of the loess with high water cut is smaller, the seismic subsidence is more likely to occur, and the deformation of seismic subsidence is more serious (Xu & Yang, 2017). Seismic subsidence variation rule of loess with different dry density When the consolidation stress and ratio are equal, the influence of different dry densities of loess on seismic subsidence is shown in Figure 7. 
It can be observed from figure 7A that when the dry density of loess is 1.4g/cm 3 , its influence on dynamic stress is lower than that of dry densities of 1.6g/cm 3 and 1.8g/cm 3 , indicating that under the action of the same dynamic stress, the dry density of loess is inversely proportional to the deformation generated by earthquake subsidence. This may be because the higher the dry density of the loess is, the higher the compactness between the soil particles is, and the higher the dynamic stress is needed to make the loess volume change. As can be observed from figure 7B, the residual strain generated by the loess volume decreases with the increase of the dry density of the soil, and when the dry density of the loess is larger than 1.6g/cm 3 , the influence of the residual strain on the volume of the loess also decreases gradually. Figure 8A shows that when the consolidation stress is 100kPa, the dry density of the loess is 1.4g/cm 3 and the moisture content is 20%, the consolidation stress ratio is proportional to the residual stress generated by the volume of the loess after the pressure is applied to the sample. It is possible that due to the increase of deviator stress on the loess, the pore structure among the soil particles is gradually destroyed, which leads to the increase of residual strain of the loess volume. It can be observed from figure 8B that the residual stress generated by loess would increase with the increase of consolidation stress ratio, showing a linear growth relationship, which is consistent with the research results of Song et al., 2017. Analysis of seismic subsidence characteristics of one-dimensional field The variation of seismic subsidence coefficient and cycle times of onedimensional field at different soil depth is shown in figure 9. It can be observed that the seismic subsidence coefficient produced by loess soil is proportional to the cycle times, but the seismic subsidence amplitude is inversely proportional to the development of seismic subsidence. It can be observed from figure 9 that when the depth of the soil layer reaches 9.5 m, the earthquake subsidence coefficient is the largest, the generated earthquake subsidence is the strongest, and the maximum earthquake subsidence coefficient of this deep soil layer can reach 0.8%. The results of the relationship between seismic subsidence coefficients at different soil depths and seismic subsidence depths are shown in figure 10. It can be observed from figure 10A that the seismic subsidence coefficient first increases and then decreases with the increase of soil depth. And it can be observed from 10B that when the depth of soil layer is 12~16m, the settlement depth caused by earthquake subsidence is small, while the settlement degree in the soil depth of 8~12m is larger. Conclusion Based on the previous research results, the seismic subsidence characteristics of the original loess with different moisture content, density, and consolidation stress are analyzed by triaxial experiment. It is found that the seismic subsidence is stronger when the loess has higher water content, lower density, and higher consolidation stress. After the calculation of seismic subsidence coefficient of one-dimensional loess field, it is found that when the depth of soil layer is 10m, the seismic subsidence coefficient is the largest and the deformation of seismic subsidence is the largest. 
In this study, the mechanical properties of loess are not considered comprehensively, and the calculation workload of the models used for the seismic subsidence analysis is small. However, the results of this study can provide a certain theoretical basis for the subsequent study and evaluation of loess seismic subsidence. Figure 9. Relationship between seismic subsidence coefficient and cycle times at different soil depths.
3,907.4
2021-01-26T00:00:00.000
[ "Geology" ]
Understanding an Urban Park through Big Data To meet the needs of park users, planners and designers must know what park users want to do and how they want the park to offer different activities. Big data may help planners and designers gain this knowledge. This study examines how big data collected in an urban park could be used to identify meaningful implications for planning and design. While big data have emerged as a new data source, big data have not become an accepted source of data due to a lack of understanding of big data analytics. By comparing a survey as a traditional data source with big data, this study identifies the strengths and weaknesses of using big data analytics in park planning and design. There are two research questions: (1) what activities do park users want; and (2) how satisfied are users with different activities. The Gyeongui Line Forest Park, which was built on an abandoned railway, was selected as the study site. A total of 177 responses were collected through the onsite survey, and 3703 tweets mentioning the park were collected from Twitter. Results from the survey show that ordinary activities such as walking and taking a rest in the park were the most common. These findings also support existing studies. The results from social media analytics found notable things such as positive tweets about how the railway was turned into a park, and negative tweets about diseases that may occur in the park. Therefore, a survey as traditional data and social media analytics as big data can be complementary methods for the design and planning process. Introduction Although big data have emerged as a critical source of data and are playing an essential role in urban studies, it is still uncommon in park planning and design. Understanding urban settings by using big data can reveal heretofore hidden characteristics of urban areas [1][2][3]. The use of social media data, a type of big data, has the potential to provide a deeper understanding of human attitudes and perceptions toward urban places [4] and explain resulting behaviors [5,6]. There are significant strengths of social media data such as indisputability, volume, and real-time data. Since social media users post their opinions on their social media platform [7], social media data represent public opinions more directly than other traditional methodologies when researchers try to understand public attitudes and perceptions. Social media can generate enormous amounts of data. In the case of Twitter, over 473,400 tweets are posted every minute around the world, and 2.5 quintillion bytes of data are created every single day [8]. Social media data can also be collected in real-time [9]. While other researchers have been attuned to the advantages of big data in social science research, researchers in landscape architecture and urban planning rarely use big data analytics. One study used social media data to identify successful public spaces [10], and another one used location-based data to trace the human visit dynamics in parks [11]. However, due to the difficulties in measuring social media data [12] and concern over how closely the content analysis can link to that context [13,14], different content analysis such as sentiment and frequency analysis are rarely used. The approach used most often in the past, or the traditional approach, for designing and planning open spaces is called "the demand approach" [15]. 
The demand approach uses the stated desires of people from interviews or questionnaires and provides recommendations of recreation and amenities to meet those demands. As stated above, most parks, urban forests, and greenways are planned and designed using the demand approach [16][17][18][19][20]. If done properly, these methods require considerable time and expense to collect the needed data. One of the reasons for the reluctance in using social media data is that the reliability of social media analytics has not been verified in the design and planning. Therefore, this study examined how social media data can be used to understand: (1) visitor activities in parks and (2) visitor satisfaction about the park. This was done by comparing two different methods: (1) a survey as traditional data and (2) social media analytics as big data. The survey will examine how people indicate they use a park, then collected tweets mentioning the park will be analyzed. After that, the results derived from the two approaches will be compared to identify the similarities and differences between the two methods. This article contains five sections: introduction, literature review, methods, results, and conclusion. The literature review answers the benefits that park visits provide to users, how to assess user activities and their attitudes, and the pros and cons of the survey and social media data. In the methods section, we address the study site, data collection, and analytic methods used. The results section provides our findings. We then discuss the conclusions and implications in the conclusions. Literature Review This section describes how urban parks are used and compares two different methods that can be used to understand how people use urban parks. Parks benefits can be grouped into three categories: economic, health, and environmental benefits [21,22]. This section describes the social benefits of urban parks and how urban parks contribute to social interactions. Then, we discuss traditional analytics including surveys and big data analytics focused on social media analytics in urban studies. After that, we compare the pros and cons of the two methods. Social Benefits and Social Interaction in Urban Parks Several terms have been used in studies of the social benefits from urban parks. Many studies have investigated the role of urban parks as an essential space to increase the quality of daily life of an urbanized society [23,24]. Empirical evidence supports that urban parks, greenways, and urban forests in urban contexts improve the quality of life in many ways. In addition to environmental benefits such as pollution purification, urban parks contribute to improving health, enhancing social interactions, and providing peacefulness [25][26][27][28]. Social benefit is a crucial metric to determine the success of a redevelopment project. Social benefit is related to how the project generates urban vitality. Urban vitality can be measured by social interactions [29][30][31][32]. Social interaction, as a process of interactivity among more than two people and the relationship between people and spaces, includes all forms of communication such as cooperation, competition, playing, informing, negotiating and bargaining, and creates the placeness [30][31][32]. Furthermore, urban vitality can be considered in various ways. Jane Jacobs describes urban vitality as a place to provide chances for good relations between people [33]. 
Jalaladdini and Oktay (2012) states that urban vitality is a safer, more desirable, and more attractive place that offers more choices for social activities [34]). Ulrich [27], for example, compares psychophysiological reactions toward three landscape types: landscapes with vegetation, with water, and with urban content. By using metrics that represent physiological and psychological measures, he found that vegetation and water landscapes had greater beneficial influences on the positive psychological feelings of park users. Chiesura [26] states that urban parks provide social and psychological benefits and enrich our lives with meaning and emotions. She explains that being in nature makes people feel positive. To do a better job of planning and designing urban parks that will offer positive emotions, we need to understand the tools or approaches that can be used. These include the role of traditional methods and big data approaches. Peters, Elands, and Buijs [35] assert that urban parks can be a trigger to generate social interactions by stimulating social cohesion. They used a survey, observations, and interviews to carry out social interaction research in five urban parks in the Netherlands. According to this study, urban parks can promote the mingling of different ethnic groups and encourage interactions among visitors. Coley, Sullivan, and Kuo [36] verified that urban parks encouraged social interactions among residents and contributed to social cohesion. Their findings revealed that natural elements increased opportunities for social interactions, and using outdoor places promoted communication within neighborhoods [36]. Natural environments can also help people relax as well as promote social interactions [37]. Kuo et al. [37] showed that the presence of greenspaces in the inner city generated positive responses from residents. The density of urban nature may also increase the sense of safety. These studies show practical evidence that urban parks not only foster social interaction, but also contribute to the sense of safety in urban environments. These studies show the practical implications that urban parks contribute to urban vitality by fostering social interactions. Age groups have been considered as an important predictor of park visits, especially for people aged over 50 years who visit parks and participate in park activities at a lower rate than other age groups [38][39][40]. Numerous studies have stated that physical activities and visits to a park significantly decrease with age. Another sociodemographic characteristic is gender. In 1997, Portes [41] indicated that gender represents a major dimension of social structure and offers important insights to understand many phenomena. In more recent research, Reed, Price, Grost, and Mantinan [42] found significant differences in gender and park use. Social media data can answer questions by detecting social networks revealed on a social media platform [43]. Social network analysis contributes to detecting communities [44], social roles, and social cohesion [43]. Traditional Analytics for Understanding People There are several methods to understand how people use a place and how people interact with each other in a place. Researchers have developed several ways to measure these variables through field observation, surveys, and interviews. Since these methods have been used in studies of urban parks for a long time, these methods are often called traditional analytics [45,46]. 
Some studies have termed these methods as small data analytics when compared to new methods such as big data analytics [47]. In this study, we call these methods traditional analytics to stress their historicity. Among the traditional analytics such as an interviews, observations, and focus group meetings, surveys can be considered as the representative method for studying urban parks and behaviors [48,49]. A survey can satisfy two main concerns of traditional methods: fairness and efficiency. As many surveys are targeted to anticipated park users and often obtain data about the survey participants, it is easy to verify who the participants are and the extent to which they are representative of park users. In many studies, a survey is used to understand people's attitudes and their behavior in urban open spaces such as parks. Peters, Elands and Buijs [35] used a survey to find that the urban parks can promote social cohesion. Whiting, Larson, Green, and Kralowec [50] used a park visitors survey to identify motivation and preferences for outdoor recreation. While traditional assessment tools can be used to evaluate the on-site benefits of parks through preset surveys, interviews, and observations [51][52][53][54], the weakness of these methods is that the intention of the researcher can be reflected in the questions asked and in the wording of the questions. One limitation of many surveys is that they are self-selecting. Participation is voluntary. Those who are motivated to participate may not represent all park users. Another limitation is that surveys require the researcher put the questionnaire together to preconceive what types of activities users may want to participate in. If certain activities are not included in the survey, they will not appear in the results. A survey can provide good data when it includes activities of interest to the participants. How a survey compares to big data analytics will be discussed next. Big Data Analytics as New Techniques for Understanding Park Usage Big data sources, especially social media data, hold the potential for enhancing the understanding of human behaviors and perceptions of urban places. Social media such as Facebook, Twitter, Instagram, and Flickr are widely used by people to post and share their opinions and communicate with each other [7]. Social media can be considered important source data to identify and understand how people interact within urban spaces. Although accessing social media data is not free, and researchers often have to pay fees for the data or licensing agreement, it still has considerable strengths. There are three reasons as to why social media could be a valuable data source: (1) social media users post text to express their thoughts directly [55]; (2) collecting social media data allows researchers to trace the past data, and (3) social media data are cheap data and their volume is enormous when compared to traditional data. According to social cognitive theory (SCT), it explains how people memorize in terms of three things: (1) their understanding of the activities; (2) their participation; and (3) the physical environmental [56]. Users post information on social media to share their thoughts and communicate with each other [7]. When we considered the motivation of using social media based on SCT, there are two kinds of factors: intrinsic (personal) and extrinsic (environmental) factors. In terms of the extrinsic factors, user behavior is affected by people sharing information [7]. 
Big data, especially social media data, may be considered as representative media to capture a user's thoughts and behaviors. Sentiment analysis is one way to evaluate one's perceptions and emotions, whether they are positive or negative [57][58][59][60] by using social media data. Among the sentiment analysis classification, the lexicon-based approach was used in this study. The lexicon-based approach uses positive and negative sentiment terms by using a dictionary based-approach [61]. The algorithm gives one point to the positive words and subtracts one point for every negative word. Then, all scores are evaluated for total content. The algorithm gives a zero score to content that has no positive (higher than 0) or negative words (lower than 0) or offsets positive and negative words in the content. The score refers to the users' attitude toward a park. The sentiment score derived from the algorithm shows whether the users were satisfied with their experience of the park. If the score is bigger than zero, the score is counted as a positive experience. If the score is under zero, it represents a negative experience. Although big data are accepted as reliable data, there are several challenges that need to be overcome when using big data. First, big data require a researcher to have specific computer skills to access and analyze the data. To write a code that uses software packages requires that a researcher is knowledgeable about computer program languages such as R, Python, Java, and C [62]. Second, it is common for some researchers to believe that big data are available to all without the need to pay a fee [63]. Accessing big data sources is not free. Social media data from Twitter, and Facebook require the terms of services (ToS) to be continuously updated and changed to protect privacy issues [9]. Compared to big data, traditional data sources such as census data are more freely and widely accessible and more available for everyone to analyze with standard software. Third, there is a lack of standardization of methods for collecting and analyzing big data. While some studies very precisely address data collection and methods [64], other studies are rather vague [65,66]. Fourth, big data analytics will contain sampling errors. In the case of traditional methods such as a survey or interview, there are standard methods to minimize the sampling error [67,68]. However, big data, especially social media data, have challenges beyond data collection [69]. Users who post on social media are not likely to be representative of all users. While social media may provide meaningful implications about people and their behavior in an urban setting [9,70] in terms of its volume, social media data also have limitations that it may be biased from the opinions of younger people [69]. Hargittai (2015) pointed out that big data studies have methodological challenges of limited sampling frames from those inclined to use social media platforms. For example, Facebook users are younger, and the results from that platform tend to reflect younger opinions [71]. Comparison of Traditional Methods with Big Data Analytics Survey data as a traditional method of collecting data are strong as the sampling process can be easier when compared to big data analytics. In terms of a sample, an onsite survey may select a visitor by contacting every fifth or tenth visitor for validity in a sampling process [67,68]. 
However, since researchers cannot identify the demographic information of social media users, it is difficult to control sampling validity in big data analytics [69]. Compared to traditional data, social media data provide valuable information about the behavior of people in a specific space. While social media data have limitations related to a lack of generalization to the entire population [72], they also have other advantages. First, there is an enormous number of users around the world who publish their daily activities. Second, social media data include a variety of types of information, such as trivial travel experiences, that cannot be documented with traditional methods. As noted above, when the motivation for using social media is considered through social cognitive theory, there are two types of factors, intrinsic (personal) and extrinsic (environmental), and in terms of the extrinsic factors, users' behaviors are affected by people sharing information [7].

Methodology This section describes the study site, how the data were collected, and the analytics used. The authors selected the Gyeongui Line Forest Park in Seoul, Korea, which has emerged as an urban hotspot, to examine how visitors use the park through a survey and social media analysis. Since the park is surrounded by four universities, it has become a hotspot for the young generations in Korea. For collecting data, a survey was selected as the traditional method and social media as the big data source. Among the traditional methods, a survey covers larger samples [67] and represents targeted information designed by researchers. Social media data also cover large samples, and social media platforms are used to post and share the thoughts and daily activities of their users [6]. Several studies have compared survey data and social media data [73,74]. This study selected a survey and social media data for comparison. At the park, an onsite survey was conducted on weekdays and weekends. Twitter postings that mentioned the park title were also collected during this period. With the survey and social media data, statistical analysis and sentiment analysis were conducted to derive results.

Study Site As mentioned above, the Gyeongui Line Forest Park was selected as the study site. The site was used as a railroad for the last 100 years (Figure 1a). When the railroad was moved underground, the site became vacant and abandoned. As one of its urban revitalization projects, the Seoul Metropolitan Government built the park on the vacant land, expecting it to facilitate the economic growth of adjacent areas [75]. The park, 6.3 km in length, crosses the center of the city from Mapo-gu to Yongsan-gu, Seoul. The park was built in three phases, completed in 2012, 2015, and 2016, respectively [75]. At present, the park has become a center for younger generations because of its location. Four universities, Yonsei, Hongik, Seokang, and Ewha Women's University, are located within a 700 m radius of the park (Figure 1b). The park has emerged as an urban hotspot where young people gather and frequently visit [76]. Well-known districts for young people, the Sinchon, Hongdae, and Ewha districts, are also located near the park.

Data Collection The data were collected using two different methods: a survey and social media (Table 1). For the survey data, an onsite survey was conducted and a total of 192 samples were collected.
For the social media data, postings mentioning keywords related to the park were collected and a total of 3703 tweets were filtered.

Survey The survey was used as the traditional method for comparison with the social media data and to understand user activities in the park and user satisfaction with the park. Among the traditional methods, surveys can be considered to satisfy two concerns: fairness and efficiency. Fairness is related to concepts such as democracy, representativeness, transparency, and public acceptability [77]. This concept concerns the perceptions of participants and the public and whether public participation has been conducted in a manner that accurately reflects the views of the target population. The concept of efficiency refers to the ease with which data can be collected [77]. The survey was conducted from August 15-18 and October 1-8, 2018. The sampling dates at the park were selected to include weekdays and weekends. At the park, the researcher approached every third adult visitor and asked about their willingness to take the park survey. A total of 192 respondents stated they were willing to participate in the survey, and 15 were disqualified because they did not complete the survey. The questionnaire was divided into five sub-parts: park visiting frequency and activities, satisfaction with the visit, social interactions, social cohesion, and respondent information. The first part included information about the visits to the park (frequency, duration, company) and the activities visitors did in the park [78]. Nine activities in the park were carefully selected following the previous literature [24,78,79].
User satisfaction included questions about the visitors' feelings toward the park features and their experience, rated from '1 = strongly disagree' to '5 = strongly agree'. For social interaction, participants were asked about several variables, including feeling safe, participating in social programs in the park, and forming new relationships with others. Since social cohesion occurs based on trusting social relationships such as a sense of community [80], the social cohesion part of the survey asked about the nature of trusting relationships with those in their neighborhood. Following Perez et al. (2015), survey participants were asked to self-report on social cohesion factors among residents and their willingness to take action for the common good. Based on questions from the literature, the authors also added a question about residents' attitudes toward, and willingness to form, social relationships with other visitors of the park.

Social Media Data For the social media data, all tweets mentioning keywords related to the park were collected from July to September 2018 through the Twitter API. Every tweet that mentioned one of three keywords, 'Gyeongui Line', 'Gyeongui Line Forest Park', and 'Yeontral Park', was crawled. Three keywords were selected because the park has different names: some call it the 'Gyeongui Line Forest Park', others refer to it as 'Gyeongui Line', while the younger generation knows the park as 'Yeontral Park'. All tweets in the dataset were posted inside South Korea. A total of 3703 postings were collected over three months.

Survey Analytics Data analytics was designed to answer two main research questions: (1) what activities in the park could be detected through the survey and (2) how many visitors were satisfied with their experiences in the park. For the analysis, the authors mainly used R Studio, a free and open-source integrated development environment (IDE) for the R language [81]. Given its accessibility and the ease of using the R programming language, R was chosen for this paper. For the survey data, descriptive statistics and correlation analysis were conducted to identify how visitors used the park and how satisfied they were with their park experiences. The survey data collected included user activities, user satisfaction, and social interaction. A frequency analysis was also conducted. After the frequency analysis, the relationship between satisfaction and social interaction was analyzed using correlation analysis.

Social Media Analytics Regarding the social media data, two main analytics were conducted: text mining and sentiment analysis. Text mining draws on three areas, data mining, statistics, and linguistics, and aims to extract meaningful information from unstructured textual data [82]. The emergence of social media applications has contributed to the growth of text mining usage. We used Hu and Liu's (2004) approach to conduct sentiment analysis [83]. Sentiment analysis was then used to understand the preferences and attitudes of users. Frequency analysis was conducted to identify the major words used in social media posts. Content analysis was then conducted on the posted texts to categorize them into single-word categories. These categories were then counted to identify the main reasons for the users' sentiments. These sentiments can represent people's opinions, emotions, and attitudes [84].
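To make the text-mining and scoring steps above concrete, the following is a minimal sketch of lexicon-based sentiment scoring and word-frequency counting. It is illustrative only: the study itself used R and Hu and Liu's opinion lexicon, so the Python code and the tiny placeholder word lists here are assumptions, not the authors' implementation.

```python
# Minimal sketch of lexicon-based scoring and word frequencies (placeholder lexicons;
# the study itself used R and the Hu & Liu opinion lexicon).
import re
from collections import Counter

POSITIVE = {"good", "beautiful", "tasty", "fun"}    # placeholder positive lexicon
NEGATIVE = {"bad", "dirty", "crowded", "ghastly"}   # placeholder negative lexicon

def tokenize(text):
    """Lowercase a post and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def sentiment_score(text):
    """+1 per positive word, -1 per negative word, summed over the post."""
    tokens = tokenize(text)
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def word_frequencies(posts):
    """Word counts over a set of posts (used to profile each sentiment subset)."""
    counts = Counter()
    for post in posts:
        counts.update(tokenize(post))
    return counts

tweets = ["The forest and the railroad are beautiful", "Ghastly crowd, bad experience"]
scores = [sentiment_score(t) for t in tweets]
positive_posts = [t for t, s in zip(tweets, scores) if s > 0]
negative_posts = [t for t, s in zip(tweets, scores) if s < 0]
print(scores, word_frequencies(positive_posts).most_common(5))
```

In a real run, the placeholder lexicons would be replaced by a full opinion lexicon, and the frequency counts would be computed separately for the positive and negative subsets, as is done in the Results.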
This sentiment and frequency analysis provided the researchers with user assessments of the park and the reasons for those sentiments.

Results The results are divided into three parts: the survey, social media, and a comparison of these two methods. The survey results indicate that visitors occasionally visited the park for relaxation. User satisfaction showed a positive relationship with social interaction. From the survey results, the park as an urban hotspot provides a place for relaxation and fosters social interaction and cohesion. The social media results revealed the positive attitudes of park visitors toward the park, especially toward attributes of the park such as 'forest' and 'railroads'. The results of the social media analysis also pointed out that park visitors reacted positively to interactive activities with others. For the comparative test of the difference between the survey and social media, there was no difference between satisfaction from the survey and positive postings from social media.

Survey Results The survey data revealed the characteristics of the park visitors. As in previous studies, the park tended to be visited more by females (54.24%) and young people under 40 years of age (64.41%). The majority of visitors came from other districts, and around thirty percent of visitors were residents of areas adjacent to the park (Table 2). The descriptive statistics of the survey explain the basic uses of the park. Table 3 provides an understanding of the visits. Many visitors came to the park less than once a month (29.9%) or were visiting the park for the first time (23.7%). Additionally, some portion of visitors visited the park more than two days a week (14.1%) or every day (10.7%). This result supports that park visitors use the park occasionally rather than on a daily basis, and that visitors normally came to the park on special occasions rather than making regular visits.

Table 3. Frequency of visit.
Number of visits           Percentage (%)
Every day                  10.7
More than 2 days a week    14.1
Once a week                 9.6
1-3 times a month          11.9
Less than once a month     29.9
This is the first time     23.7

In terms of the stated desire for visiting and the actual use of the park, Table 4 shows the results. Activities in four categories (physical activities, mental health, social interaction, and other activities) were asked about, and the results are stated in Table 4. Visitors came to the park to refresh their daily life (53.7%) and take a rest (36.2%). The main activity in this park differs slightly from existing studies that have identified physical activities as the main activities in a park [85]. The highest percentage of visitors used the park to interact with their friends or family (50.9%). Under 20 percent of visitors used this park for physical activity. The average satisfaction score was based on a scale of 1 = rarely satisfied to 5 = highly satisfied. Users of the park were slightly satisfied (mean = 3.66), but the level of satisfaction was not that high. For social interaction in the park, users answered that the park contributed to reinforcing social interaction (mean = 3.56). The social cohesion satisfaction of adjacent community residents was 3.51. The satisfaction score was compared between two groups: visiting alone and visiting the park with friends or family. Pearson's chi-squared test was used to examine the relationship between the factors; there was no difference between the two groups (p-value = 0.49). One of the results of the survey was the correlation analysis.
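A rough sketch of that correlation step follows. The study ran its analysis in R; this pandas/SciPy version is only an illustration, and the column names (sati, soc, soco) and the toy values are assumptions rather than the survey data.

```python
# Sketch of the survey correlation analysis (the study used R; this mirrors it in
# Python). Column names 'sati', 'soc', 'soco' and the values are illustrative only.
import pandas as pd
from scipy import stats

survey = pd.DataFrame({
    "sati": [4, 3, 5, 2, 4, 3],   # satisfaction (1-5 Likert)
    "soc":  [4, 3, 5, 2, 3, 3],   # social interaction (1-5 Likert)
    "soco": [3, 3, 4, 2, 4, 3],   # social cohesion (1-5 Likert)
})

# Pairwise Pearson correlations between satisfaction, interaction, and cohesion
print(survey.corr(method="pearson"))

# Correlation coefficient and p-value for a single pair, as reported in the text
r, p = stats.pearsonr(survey["sati"], survey["soc"])
print(f"satisfaction vs. social interaction: r = {r:.2f}, p = {p:.4f}")
```

The corr() call yields the full correlation matrix behind a figure such as Figure 2, while pearsonr reports the coefficient and p-value for one pair, matching how the results are stated below.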
The correlation between satisfaction and social interaction was positive (correlation coefficient = 0.61, p-value < 0.0001). Although many researchers have verified that social interaction is a social benefit of a park [32,35], the relationship between park visitors' satisfaction and their level of social interaction has rarely been identified. Next, the correlation analysis was conducted once again to identify the relationships between satisfaction (sati), social interaction (soc), and social cohesion (soco) in the park (Figure 2). Figure 2 represents the correlations between these factors. The graphs with the red line refer to the correlation graphs between satisfaction (sati) and social interaction (soc), social interaction (soc) and social cohesion (soco), and satisfaction (sati) and social cohesion (soco). The numbers represent the correlation coefficients. The relationships between the factors of satisfaction, social interaction, and social cohesion were positive. For example, satisfaction and social interaction had a positive relationship (correlation coefficient = 0.62). There was also a positive relationship between satisfaction and social cohesion (0.32), but the relationship was not strong. The relationship between social interaction and social cohesion was also positive but not strong (0.42). This means that visitors of the park who were satisfied with their experience tended to value social benefits such as increased social interaction or social cohesion.

Social Media Results As mentioned in Section 2, the sentiment analysis of social media data provides insight into the users' attitude toward the park. The results indicate that more users had a positive experience than a negative one (Figure 3). Figure 3 shows the number of tweets and the sentiment score of each tweet during the study period. The results show that social media users generally reacted positively when they posted about the park. This also supports the previous finding that social media users tend to highlight their positive experiences [86].

Figure 3. The result of the sentiment analysis (from negative sentiment (red, less than 0) to positive sentiment (blue, higher than 0)).

To identify the types of activities that produced positive or negative sentiment, the dataset was divided into two subsets: the positive data and the negative data.
Then, a word frequency analysis was performed to extract the main reasons for each sentiment. The most frequently mentioned words in the positive data were related to food and eating, such as 'beer', 'cake', 'coffee', 'cooked', 'lamb', 'tteokbokki (a Korean street food)', and 'taste'. Physical features of the park were also mentioned in the dataset: 'forest' and 'railroad'. This indicates that the unique character of the park is important to the positive experience of users. Words that implied relationships between people were also mentioned frequently, such as 'people', 'we', and 'you' (Figure 4a). From these results, we can conclude that among the park visitors, social media users tended to have a positive experience of the unique attributes of the park such as 'forest' and 'railroad' and also tended to enjoy casual treats in the park. Furthermore, 'railroad' and 'railway' refer to the former railway of the Gyeongui Line. The negative sentiments were more complex than the positive sentiments, since the frequency analysis indicated that the number of users who mentioned any given word was too low to draw firm conclusions; most words were mentioned fewer than 20 times. Only some words such as 'park', 'it', 'I', and 'people' were mentioned over 20 times in the negative dataset. Other words that were mentioned between 10 and 20 times were 'disease', 'tsutsugamushi (a disease contracted from lawns)', and 'ghastly'. All three words were related to disease from the lawn. These data show that park visitors worried about disease when they visited the park (Figure 4b). In terms of activities in the park, the social media data indicate that eating something with someone most often resulted in positive sentiment toward the park. This assumption was derived from the most frequently mentioned words such as 'eating', 'foods', and 'we'. One of the more interesting results is the use of personal pronouns. When users expressed a negative sentiment, they used 'I' rather than 'we'. From the use of pronouns, it seems that experiences with others tended to be positive, while negative experiences tended to be seen as relating to the individual. These insights from the social media analytics could not be found in the survey results.
Since the survey questionnaire was designed to identify participation in traditional activities in the park, the questionnaire did not capture the importance of park activities such as 'eating'. However, the social media data tended not to capture ordinary activities in the park such as 'walking' and 'relaxing'. Perhaps this is because people take something like walking for granted. When people posted about their activities and experiences on Twitter, they did not upload the ordinary, common activities of their daily life and preferred instead to share their more extraordinary or less common events with others.

Comparing the Survey and Big Data Results from both the social media data and the survey data indicate a mostly positive experience in the park. Even though there were some differences between the users of the park and the respondents of the survey, their experiences in the park tended to be positive. The results from the social media data and the survey data explained the users' activities, satisfaction (sentiments), and their social interaction in the park. The survey data and the social media data represent different aspects of the users' activities. For example, the social media data could capture extraordinary events in users' daily lives, such as eating delicious food with someone. The survey data included the ordinary, common activities in the park such as walking, relaxing, and chatting. In terms of social interaction, both methodologies support the idea that the park can contribute to social interaction.
Both the social media data and the survey data suggest that a park is a place that reinforces social interactions. According to the survey, users of the park indicated that the park contributed to social interaction and social cohesion. The relationship between social interaction and satisfaction was identified as positive; however, the causal direction between the two variables was not verified.

Conclusions Using the survey and social media, this study attempted to identify the uses and meaning of urban parks. To overcome the problem that a survey may not reach sufficient users, social media can be used to fill the gap between a preset questionnaire and reality. Through these two lenses, the authors found meaningful implications. The survey results showed that park visitors mainly used the park for restoration and relaxation from their daily life. Additionally, park visitors who were more satisfied with their park experience also experienced more social cohesion and enhanced social interaction. Through the social media data, the authors found that social media users who visited and posted about the park enjoyed unique attributes or features of the park such as the railway, railroad, and forest features. Furthermore, social media users reacted positively to relatively small and common experiences such as coffee, beer, and casual foods in the park, and negatively to less common and potentially threatening experiences such as potential health and disease issues. This study provides meaningful insights into the possible use of social media content as a data source for landscape architecture research. According to the study, social media data and big data analytics can be used to detect new activity types in a park. Regarding social interaction, park visitors mentioned 'we' rather than 'I' when they posted positive content. This may mean that social media users tend to share their status when they are with others. This study also identified several limitations to the use of social media data in park studies, especially in terms of sampling challenges. By choosing a study site that young generations tend to visit, the authors tried to minimize the gap between social media users and real visitors. This may reduce the challenge, but it is not a perfect solution. The sampling challenges of social media data have to be carefully considered when researchers use social media data in future research. This study is a starting point in identifying more advanced research methods that can be used to augment existing methods. This paper makes several significant contributions to landscape architecture studies by providing a way to use social media data as a tool for understanding neighborhoods and resident preferences. Through the comparison of these methodologies, this study identified the pros and cons of the methods and explored the possibility of using big data analytics for understanding urban parks and the people who use them.
9,830.8
2019-10-01T00:00:00.000
[ "Environmental Science", "Engineering", "Computer Science" ]
Operator Behavior Analysis System for Operation Room Based on Deep Learning Human behavior analysis has been a leading technology in computer vision in recent years. The station operation room is responsible for the dispatch of trains when they enter and leave the station. By analyzing the behaviors of the operators in the operation room, we can judge whether the operators have violations. However, there is no scheme to analyze the operator's behavior in the operation room, so we propose an operator behavior analysis system for the station operation room to detect operator violations. This paper proposes an improved target tracking algorithm based on Deep-sort. The proposed algorithm can improve the target tracking performance through the actual test compared with the traditional Deep-sort algorithm. In addition, we put forward the detection scheme for common violations in the operation room: off-position, sleeping, and playing mobile phone. Finally, we verify that the proposed algorithm can detect the behaviors of operators in the station operation room in real time.

Introduction In the railway industry, the operators in the station play a vital role in the safety of train dispatch. If these operators have some violations, they may pose serious potential safety hazards to railway operation safety. The most common violations are off-position, sleeping, and playing mobile phone. These three violations may lead to serious safety accidents. At present, the most common method is to set up some security officers to monitor these operators through remote monitoring systems. The remote monitoring systems are composed of monitoring cameras in each operating room. The security officers can judge whether each operator has violations by looking at the monitoring screen from the remote monitoring systems. However, a railway bureau usually has hundreds of operating rooms, which requires many security officers to meet the needs of the monitoring. Therefore, an intelligent behavior analysis system is urgently needed to replace the manual management in the operation room. The operator behavior analysis system first analyzes the pictures collected by the monitoring camera in the operation room to find out the operators and track them. Then, the analysis system uses three behavior analysis methods to judge whether the tracked target has violations. In addition to the behavior analysis of railway station operators, the analysis system can also be applied to other similar fields.

Related Work Before analyzing the behaviors, we first use an object detection algorithm to detect the operator location. Object detection algorithms are mainly divided into two categories: two-stage and one-stage. The two-stage network first extracts the object candidate regions from the input image and uses the classifier to classify all the candidate regions. Therefore, the detection speed is relatively slow. This category mainly includes RCNN [1], Fast-RCNN [2], Faster-RCNN [3], and Mask RCNN [4]. The one-stage network directly finds candidate regions from the feature map. The detection speed is usually faster than the two-stage network, but the actual detection accuracy of the algorithm may be affected. At present, the common algorithms are Yolov1 [5], Yolov2 [6], Yolov3 [7], SSD [8], and RetinaNet [9]. The detection efficiency of Yolov1 is excellent, but the overall accuracy is low. The most significant improvement of Yolov2 is to improve the ability of small object detection.
The Yolov3 replaces the backbone network with Darknet53 [7]. [12] solves the problem of end-to-end training. Sun et al. proposed DAN (deep affinity network) [13]. The algorithm can carry out end-to-end training and prediction. However, it introduces a lot of additional computation, so the algorithm is inefficient. Behavior analysis mainly analyzes the behavior of the object. K. Simonyan et al. proposed a two-stream convolutional neural network [14], which significantly improved the accuracy of behavior recognition by combining optical flow information. Girdhar et al. [15] added an ActionVLAD layer on top of two-stream networks, but they did not research the recognition of different behaviors of multiple targets. Tran et al. constructed the C3D [16] network using 3D convolution and 3D pooling. Xu et al. proposed the R-C3D [17] network, which extracts behavior keyframes from a video. Then, the category of behavior is identified based on these keyframes. The network can analyze videos of any length.

Behaviour Analysis Algorithm The design scheme of the behavior analysis algorithm for the station operation room is shown in Figure 1. The algorithm includes object detection, target tracking, and behavior analysis. The object detection module primarily uses a deep learning algorithm to detect the position of the operators. This paper proposes an improved algorithm based on Yolov4 [18]. To improve the detection results for small objects, we add the SPP module to the Yolov4 network. In the target tracking process, we introduce the HOG (histogram of oriented gradients) feature and improve the IoU (intersection over union) calculation method to improve the target tracking ability. Finally, we design three behavior analysis methods to identify off-position, sleeping, and playing mobile phones.

Object Detection. The Yolov4 network is the object detection network proposed by Alexey based on the Yolov3 [7] network. The detection network mainly consists of the following four parts: the CSP Darknet53 [13] network, spatial pyramid pooling (SPP) [19], PANet [20], and the Yolov3 head [7]. The CSP Darknet53 network combines cross-stage partial (CSP) connections [21] and Darknet53 [7]. The CSP can enhance a CNN's learning ability and reduce computational difficulty. The Darknet53 network contains five large residual network blocks, each of which contains several residual network structures. After each large residual block, we add the CSP structure to obtain CSP Darknet53. The SPP can produce a fixed output for any input size, which avoids the image distortion error caused by non-proportional compression of the input image. The SPP is used in the Yolov4 network to increase the receptive field of the network. The PANet can locate pixels correctly by preserving spatial information, which enhances the ability of instance segmentation. Figure 2 shows the Yolov4 network structure. The SPP module obtains receptive field information by applying maximum pooling with different kernel sizes and carrying out feature fusion. This fusion of receptive fields at different scales can effectively enrich the expressive ability of the feature map. Figure 3 shows the structure of the SPP. In the Yolov4 network, the SPP module is located before the final 19 * 19 feature map. In this paper, we also apply the SPP module before the final 38 * 38 feature map and the 76 * 76 feature map to enhance the ability to express the feature information in those feature maps. Figure 4 shows the improved Yolov4 network structure.
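As a rough illustration of the kind of SPP block placed in front of the extra detection heads, a minimal PyTorch sketch is given below. The kernel sizes 5, 9, and 13 are the common Yolov4 choice and are an assumption here; this is not the authors' code.

```python
# Minimal SPP block: max-pool the same feature map with several kernel sizes and
# concatenate the results along the channel dimension (kernel sizes assumed).
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x):
        # Spatial size is preserved; channels grow by a factor of len(kernel_sizes) + 1.
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

features = torch.randn(1, 512, 38, 38)   # e.g., the 38 x 38 feature map
print(SPP()(features).shape)             # torch.Size([1, 2048, 38, 38])
```

Because each max-pooling keeps the spatial resolution, the block only widens the channel dimension, which is what allows it to be inserted before the 38 * 38 and 76 * 76 feature maps without changing the rest of the network.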
Object Tracking. The most widely used real-time multitarget tracking algorithms in recent years are sort [10] and Deep-sort [11]. Although the sort algorithm is fast in target tracking, its accuracy decreases when occlusion occurs. The Deep-sort uses the Kalman filter in video space. Then, it uses the Hungarian algorithm to correlate data frame by frame. This paper considers both target motion information and appearance information when correlating data. The association of motion information uses the Mahalanobis distance between the Kalman prediction result and the object detection result. The association of appearance information calculates the minimum cosine distance between the last 100 successfully associated features and the detection result of the current frame. The formulas are as follows: $d^{(1)}(i,j) = (d_j - y_i)^{\top} S_i^{-1} (d_j - y_i)$, where $d_j$ is the $j$-th detection box and $y_i$ and $S_i$ are the mean and covariance of the $i$-th track predicted by the Kalman filter; and $d^{(2)}(i,j) = \min\{1 - r_j^{\top} r_k^{(i)} \mid r_k^{(i)} \in R_i\}$, where $r_j$ is the appearance feature of detection $j$ and $R_i$ holds the last 100 features associated with track $i$. This paper adds the comparison of HOG [22] features when calculating the association of appearance information. The HOG feature can describe the target's contour through gradient or edge direction. Many studies in recent years have shown that this feature can accurately describe the outline of a person. We compare the HOG feature of the previously successfully associated rectangular box with the HOG feature of the current rectangular box. In the matching process, Deep-sort uses the IoU to calculate the coincidence degree of the bounding boxes, where box 1 is the first bounding box and box 2 is the second bounding box. The Deep-sort does not consider the width and height of the bounding boxes, which can lead to false matches. Therefore, we improve the IoU by introducing the height and width information of the bounding box, where $h_1$ and $w_1$ are the height and width of the first bounding box, $h_2$ and $w_2$ are the height and width of the second bounding box, and $\alpha$ is the adjustment coefficient.
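To make the overlap and appearance cues concrete, the sketch below shows a plain IoU computation and a HOG-based appearance distance. It is illustrative only: the exact form of the width- and height-adjusted IoU with the coefficient alpha is not reproduced here, and the use of OpenCV's HOGDescriptor is an assumed implementation detail rather than the authors' code.

```python
# Sketch of the overlap and appearance cues used during matching. The paper's
# width/height-adjusted IoU (with coefficient alpha) is not reproduced here because
# its exact form is not given; this shows standard IoU plus a HOG distance only.
import cv2
import numpy as np

def iou(box1, box2):
    """Standard IoU for boxes given as (x1, y1, x2, y2)."""
    xa, ya = max(box1[0], box2[0]), max(box1[1], box2[1])
    xb, yb = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0, xb - xa) * max(0, yb - ya)
    area1 = (box1[2] - box1[0]) * (box1[3] - box1[1])
    area2 = (box2[2] - box2[0]) * (box2[3] - box2[1])
    return inter / float(area1 + area2 - inter) if inter else 0.0

def hog_distance(patch1, patch2, size=(64, 128)):
    """Cosine-style distance between HOG descriptors of two uint8 person crops."""
    hog = cv2.HOGDescriptor()  # default 64x128 detection window
    a = hog.compute(cv2.resize(patch1, size)).flatten()
    b = hog.compute(cv2.resize(patch2, size)).flatten()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
```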
Behavior Analysis. At present, behavior analysis based on deep learning mainly adopts object detection to directly identify people's behavior, such as sleeping and playing with mobile phones. This method has poor robustness, and in some cases its results are inaccurate. In this paper, we propose a behavior analysis algorithm based on target tracking and behavior characteristics. This paper analyzes three behaviors: off-position, sleeping, and playing with mobile phones.

Off-Position Detection. We consider that operators have left their work area when the object detection algorithm cannot detect them in N consecutive frames. $C_{leave}$ is the off-position behavior counter. When no operator appears in the detection result, the counter is incremented by one; if an operator is detected, the counter is cleared. When the off-position behavior counter satisfies $C_{leave} \ge T_{leave}$, the operators are considered to show off-position behavior, where $T_{leave}$ is the off-position behavior threshold.

Sleeping Detection. The recognition of sleeping behavior is mainly based on the change of the tracked operator's position in each frame. Through the target tracking algorithm, we compute the matching degree between the target's current position and the Deep-sort tracking prediction, and in the matching process we obtain their IoU score. We measure the change of target position across frames through this IoU score. $C_{sleep}$ is the sleeping behavior counter. For the same tracked target, if the IoU score between the position predicted by the tracking algorithm and the position given by object detection is less than the set threshold, the counter is incremented by one. The counter is cleared if the tracked target disappears or the IoU score is no longer below the threshold. When the sleeping behavior counter satisfies $C_{sleep} \ge T_{sleep}$, the operator is considered to show sleeping behavior, where $T_{sleep}$ is the sleeping behavior threshold.

Playing Mobile Phone Detection. We assume that when the target is playing with a mobile phone, the mobile phone is close to the person. Therefore, by calculating the Euclidean distance between mobile phones and operators, we can judge whether operators are playing with mobile phones. First, this paper uses the object detection proposed above to detect mobile phones. We assume that the center of the mobile phone is $(x_p, y_p)$ and the operator's center is $(x_i, y_i)$. We calculate the nearest operator to the mobile phone through the Euclidean distance, $i_{\min} = \arg\min_i \sqrt{(x_p - x_i)^2 + (y_p - y_i)^2}$, where $i_{\min}$ indicates the bounding box of the operator closest to the mobile phone. $C_{phone}$ is the playing-mobile-phone behavior counter. Suppose the Euclidean distance between the mobile phone and the nearest operator's bounding box is less than the smaller of the width and height of that bounding box. In that case, we consider that the operator is playing with a mobile phone in this frame and increment the counter by one. When the playing-mobile-phone behavior counter satisfies $C_{phone} \ge T_{phone}$, the operator is considered to be playing with a mobile phone, where $T_{phone}$ is the playing-mobile-phone behavior threshold.
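The three rules above reduce to simple per-frame counter updates. The sketch below is written under stated assumptions: only T_leave = 180 is reported in the experiments, so the remaining thresholds, the IoU threshold, and the reset behavior of the phone counter are illustrative guesses rather than the authors' settings.

```python
# Per-frame counter updates for the three rule-based detectors. Threshold values
# other than t_leave = 180 are illustrative assumptions.
def update_off_position(c_leave, operators_detected, t_leave=180):
    """No operator detected in a frame -> count up; any detection resets the counter."""
    c_leave = 0 if operators_detected else c_leave + 1
    return c_leave, c_leave >= t_leave  # True -> off-position violation

def update_sleeping(c_sleep, iou_score, target_present, iou_thresh=0.5, t_sleep=300):
    """Low overlap between tracker prediction and detection counts toward sleeping."""
    if not target_present or iou_score >= iou_thresh:
        c_sleep = 0  # target lost or overlap back above threshold -> reset
    else:
        c_sleep += 1
    return c_sleep, c_sleep >= t_sleep  # True -> sleeping violation

def update_phone(c_phone, phone_center, person_boxes, t_phone=60):
    """Phone closer to the nearest person than that box's smaller side counts as playing."""
    if person_boxes:
        def center(box):
            return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
        def dist(box):
            cx, cy = center(box)
            return ((cx - phone_center[0]) ** 2 + (cy - phone_center[1]) ** 2) ** 0.5
        nearest = min(person_boxes, key=dist)
        w, h = nearest[2] - nearest[0], nearest[3] - nearest[1]
        # Resetting on a miss is an assumption; the text only states the increment rule.
        c_phone = c_phone + 1 if dist(nearest) < min(w, h) else 0
    else:
        c_phone = 0
    return c_phone, c_phone >= t_phone  # True -> playing-mobile-phone violation
```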
Experiment Analysis This paper verifies the feasibility of the proposed algorithm from three aspects: object detection, target tracking, and behavior analysis. The test environment of the experiment is 8 GB of memory, an Intel Core i5-6500 CPU, and an NVIDIA GTX-1050 graphics card.

Object Detection. In object detection, the training environment is 32 GB of memory, an Intel Xeon E5-2650 CPU, and an NVIDIA GTX-1080Ti graphics card. For training and testing, we propose operation room datasets. The total number of images in the operation room datasets is 20000, composed of monitoring images taken by dozens of station operation room webcams under different illumination and at different times. The image size is 1280 * 720. The dataset is divided into 70% of the images as the training set, 20% as the verification set, and 10% as the test set. The categories marked in the dataset are mobile phone and person. The main parameters used in this training are shown in Table 1. In object detection, precision (Pr) and recall (Re) are used as the benchmark to measure the object detection algorithms: $Pr = \frac{TP}{TP + FP}$ and $Re = \frac{TP}{TP + FN}$. This paper compares three object detection algorithms: Yolov3, Yolov4, and our method. Table 2 shows the test results of the three algorithms. Compared with the Yolov4 and Yolov3 detection algorithms, our algorithm improves the precision and recall rate on our dataset.

Object Tracking. In this paper, we use MOTP (multiple object tracking precision) and MOTA (multiple object tracking accuracy) to measure the ability of the target tracking algorithm. We test the sort algorithm, the Deep-sort algorithm, and our algorithm on our dataset and the MOT16 dataset [23]. Figure 5 shows the process of target tracking. Table 3 shows the results of the three algorithms on the two datasets. Compared with the original Deep-sort algorithm, the MOTA increased by 2.7% and the MOTP by 1.6% on the MOT16 dataset. On our dataset, the MOTA increased by 1.9% and the MOTP by 1.1%.

Behavior Analysis. This article analyzes the three violations: off-position, sleeping, and playing with mobile phones. The experimental analysis results are shown below.

Off-Position Detection. We extracted ten off-position videos from the video database and extracted one image every second. For the test of the off-position behavior, the off-position behavior threshold $T_{leave}$ is 180. If no personnel are detected in 180 consecutive images, the operator is judged to be off-position. Table 4 shows the results of the 10 video tests. It can be seen from Table 4 that there may be a difference between the detected frame number and the actual frame number because people may not be detected while they are leaving the screen, but this does not affect the actual detection results. It can also be seen from Table 4 that $C_{leave}$ is greater than the off-position behavior threshold $T_{leave}$, so the off-position behavior analysis algorithm proposed in this paper can judge personnel off-position behavior. A screenshot of the test video is shown in Figure 6.

Sleeping Detection. The results are summarized in Table 5. According to Table 5, there are some differences between the counters' maximum $C_{sleep}$ and the total number of sleeping-behavior frames in videos 1, 2, 3, and 10. However, the $C_{sleep}$ of the sleeping behavior analysis algorithm is still greater than the sleeping behavior threshold $T_{sleep}$. Therefore, the sleeping behavior algorithm proposed in this paper can judge the sleeping behavior of the personnel. In Figure 7, we show some sleeping behavior detection results.

Playing Mobile Phone Detection. We extracted 10 playing-mobile-phone videos from the video database and extracted one image every second. In Figure 8, we show some playing-mobile-phone behavior detection results. The red rectangle indicates the person playing with a mobile phone, and the yellow rectangle marks the detected mobile phone.

Conclusion Aiming at the management and monitoring demands of the operation room, we analyzed the actual problems of the operation room and put forward an efficient behavior analysis method based on deep learning. Through experimental tests, we verified the effectiveness of the proposed algorithm. The method proposed in this paper has been widely used in many railway stations.

Data Availability The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest The authors declare that they have no conflicts of interest.
3,616.2
2022-03-16T00:00:00.000
[ "Computer Science" ]