Intrathecal kappa free light chains as markers for multiple sclerosis

The cerebrospinal fluid (CSF) kappa free light chain (KFLC) index has been described as a reliable marker of intrathecal IgG synthesis for diagnosing multiple sclerosis (MS). Our aims were: (1) to compare the efficiency of KFLC across different interpretation approaches in diagnosing MS; (2) to evaluate the prognostic value of KFLC in radiologically and clinically isolated syndromes (RIS-CIS). We enrolled 133 MS patients and 240 patients with other neurological diseases (93 inflammatory, including 18 RIS-CIS; 147 non-inflammatory). Albumin, lambda free light chain (LFLC) and KFLC were measured in CSF and serum by nephelometry. We considered two groups of markers: (a) markers corrected for blood-CSF barrier permeability: the immunoglobulin G (IgG), KFLC and LFLC indexes; (b) CSF ratios (not including albumin or serum correction): CSF KFLC/LFLC, CSF KFLC/IgG, CSF LFLC/IgG. KFLC were significantly higher in MS patients than in those with other diseases (both inflammatory and non-inflammatory). The KFLC index and the CSF KFLC/IgG ratio showed high sensitivity (93% and 86.5%) and moderate specificity (85% and 88%) in diagnosing MS. RIS-CIS patients who converted to MS showed a higher KFLC index and CSF KFLC/IgG ratio. Although OB remain the gold standard for detecting intrathecal IgG synthesis, KFLC confirmed their accuracy in MS diagnosis. A "kappa-oriented" response characterizes MS and has prognostic impact in the RIS-CIS population.

Laboratory. The gel was evaluated by two independent operators for the presence of OB and for the attribution of one of the five patterns according to Freedman [11]. Type II (OB exclusively in CSF) and type III (OB in both CSF and serum, with a clear prevalence in CSF) were considered positive for intrathecal IgG synthesis.

Statistical analysis. Continuous variables were expressed as mean and SD. Their distributions were checked with the Shapiro-Wilk test and were not normally distributed. To compare data across multiple groups (MS, ID and NID patients), a non-parametric ANOVA (Kruskal-Wallis analysis) was applied with Bonferroni correction for multiple comparisons (p-values below 0.005 were considered significant). Sensitivity was calculated as true-positive/(true-positive + false-negative), specificity as true-negative/(true-negative + false-positive). Area under the curve (AUC), sensitivity and specificity were derived from receiver operating characteristic (ROC) curves using VassarStats software and a Bayesian calculator made available by the Italian Society of Laboratory Medicine (SIPMEL). Differences between RIS-CIS patients who converted to MS and those who did not convert were explored with the Mann-Whitney test. The prognostic value of KFLC was determined by comparing converters versus non-converters with binary logistic regression analyses. P-values below 0.05 were considered significant.

Results

Data are shown in Table 1 (number of included patients: 373). KFLC differentiated MS patients from those with ID and non-ID (p < 0.005). In fact, the KFLC index and the CSF KFLC/IgG ratio were significantly higher in MS than in other neurological conditions. Similarly, MS patients presented increased absolute concentrations of KFLC (mean 0.48 mg/dl) compared with both ID (0.20 mg/dl) and non-ID patients (0.03 mg/dl). KFLC, regardless of the interpretation approach considered, also allowed the three groups (MS versus ID versus non-ID) to be distinguished.
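The marker definitions and accuracy metrics used here are simple ratios; a minimal Python sketch illustrates them (hypothetical values, not the authors' data or code; the KFLC index formula is the conventional albumin-quotient-normalised CSF/serum ratio, which the paper describes as "corrected for blood-CSF barrier permeability"):

```python
import numpy as np

def kflc_index(csf_kflc, serum_kflc, csf_albumin, serum_albumin):
    """KFLC index: CSF/serum KFLC quotient normalised by the albumin quotient,
    i.e. a marker 'corrected for blood-CSF barrier permeability' (group a);
    the IgG and LFLC indexes are built the same way."""
    return (csf_kflc / serum_kflc) / (csf_albumin / serum_albumin)

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP), as in the Methods."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical cohort: 1 = MS, 0 = other neurological disease.
has_ms = np.array([1, 1, 1, 1, 0, 0, 0, 0])
index  = np.array([14.2, 8.9, 6.1, 4.0, 3.0, 1.2, 7.5, 2.2])  # made-up values
# 5.9 is one published cut-off choice mentioned later in the Discussion.
sens, spec = sensitivity_specificity(has_ms, (index > 5.9).astype(int))
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```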
Conversely, LFLC were not relevant to MS diagnosis (223 of the 373 included patients were tested for LFLC). The LFLC index and the CSF LFLC/IgG ratio were greater in MS than in other neurological conditions, but did not differ significantly among the three groups. The KFLC index emerged as the most sensitive marker corrected for blood-CSF barrier permeability in diagnosing MS. Its sensitivity of 93% exceeded that of the IgG index (70.5%) and was only slightly lower than that of OB (95.5%). Accordingly, we confirmed the greater accuracy of OB in MS diagnosis, in line with the McDonald criteria 2017 [10]. Of note, in our study the specificity of OB was similar to that of the KFLC index (85%). Comparing different approaches to calculating the KFLC intrathecal fraction in our cohort, sensitivity for MS diagnosis was 98% for Reiber's KFLC diagram [9], at the cost of 53% specificity. Thus, for MS diagnosis, the performance of the KFLC index was closer to that of OB in our population. Among the CSF markers, only the KFLC/IgG ratio proved a sensitive marker of intrathecal IgG synthesis (sensitivity 86.5%).

In our study, we included 3 patients with RIS and 15 with CIS. The mean age of the 18 subjects (11 females) was 36.3 years (SD 8.5). CIS presentations included unilateral optic neuritis (6 patients), focal supratentorial syndrome (4), and partial myelopathy (5). Brain and spinal magnetic resonance imaging (performed at the time of the diagnostic work-up) did not fulfill criteria for dissemination in space in 12 cases, and for dissemination in time in the remaining 6. Six patients (33%) converted to MS during follow-up (which lasted at least one year), developing new lesions over time. Mean follow-up of this subgroup of 18 subjects was 3.6 years (SD 3.6). All the subjects who converted to MS presented OB and significantly higher KFLC than those who remained RIS-CIS (Fig. 1). Gender and age at onset did not differ significantly between RIS-CIS patients who did and did not convert to MS. Patients who presented with optic neuritis converted to MS less often than those with other types of onset (RIS, focal supratentorial syndrome, or partial myelopathy) (p = 0.07).

Discussion

Our study confirmed the role of KFLC in the diagnostic work-up for MS. Both the KFLC index (corrected for blood-CSF barrier permeability) and the KFLC/IgG ratio (evaluating KFLC overproduction in CSF only) showed high sensitivity and moderate specificity for MS diagnosis. Overall, OB remained the gold standard for CSF analysis in MS. These results support our routine CSF testing practice of using the KFLC index to screen all cases of suspected MS, proceeding with OB detection only if the KFLC index is higher than 5. Compared with OB detection by isoelectric focusing and immunoblotting, KFLC index measurement has several advantages: it can be fully automated, its interpretation is operator-independent, and it is less time-consuming and less expensive [2]. Recently, the high sensitivity of the intrathecal KFLC fraction has been confirmed using several approaches: different ROC-curve-determined KFLC index cut-offs, Reiber's diagram, Presslauer's exponential curve, and Senel's linear curve [8,9,12]. Schwenkenbecher et al. showed that Reiber's diagram had greater sensitivity for intrathecal Ig synthesis (compared with the above-mentioned approaches and with a KFLC index cut-off of 5.9) [9]. In our cohort we confirmed the highest sensitivity (98%) of Reiber's KFLC diagram for MS. However, this measure lacked specificity in our population.
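The screening routine described earlier in this Discussion (automated KFLC index on every suspected-MS sample, OB detection by isoelectric focusing only when the index exceeds 5) amounts to a one-line decision rule; a minimal sketch:

```python
def csf_workup(kflc_index: float, cutoff: float = 5.0) -> str:
    """Two-step rule from the Discussion: the cheap, automated KFLC index is
    run first; labour-intensive OB detection by isoelectric focusing is
    triggered only when the index exceeds the cut-off (5 in this study)."""
    if kflc_index > cutoff:
        return "elevated KFLC index: proceed to OB detection (isoelectric focusing)"
    return "KFLC index below cut-off: no OB testing triggered"

print(csf_workup(12.3))
print(csf_workup(2.1))
```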
Senel et al. calculated the CSF/serum ratio of KFLC, named Q-KFLC (applying CSF/serum albumin ratio-dependent reference values). They showed no relevant difference in MS diagnostic accuracy between Q-KFLC and the ROC-curve-determined cut-off value in 1224 patients [12]. We employed the KFLC index on the basis of its sensitivity of 93% and specificity of 85% in our population. Moreover, "false positive" KFLC index values were double-checked against the clinical diagnosis at the end of the diagnostic work-up and after a follow-up of one year. In our study, we also considered an approach that evaluates the excess of intrathecal KFLC in CSF only, without correction for blood-CSF barrier permeability. This CSF KFLC/IgG ratio also proved sensitive (and specific) in discriminating MS from other neurological conditions, as previously described [13]. Not only could this marker be used to detect intrathecal IgG synthesis in suspected MS when serum is not available, it also supports the hypothesis that MS patients have an "excess" of KFLC production limited to the CSF [14]. These data confirm the "kappa-oriented" immune reaction in MS CSF [14,15]. To our knowledge, the mechanism of KFLC overproduction in MS patients has not yet been clarified. Increased concentrations of serum FLC have been described in several autoimmune disorders (and related to disease activity in a few) in relation to the phenomenon of "antigen excess" [4]. Although it does not explain the "kappa" prevalence, this mechanism could be speculated for the CSF in the MS population, and may have prognostic relevance.

Table 1. Absolute concentrations of kappa (K) and lambda (L) free light chains (FLC), CSF ratios and indexes determined in multiple sclerosis (MS), inflammatory neurological diseases other than MS (ID), and non-ID. Values are expressed as mean ± standard deviation (SD). We included 373 patients for KFLC and oligoclonal band (OB) evaluation; 223 of them were also tested for LFLC. OB yes/no: "yes" comprises type II (OB exclusively in CSF and not in serum) and type III (OB in both CSF and serum with a clear predominance in CSF). * Mean significantly different in MS versus ID and non-ID (p < 0.005, Bonferroni correction for multiple comparisons). # Means significantly different among the three groups (MS versus ID versus non-ID, p < 0.05). Table columns: MS (n = 133), ID (n = 93), Non-ID (n = 147), Sensitivity (%), Specificity (%).

The presence of OB during the early phases of MS has also been discussed as a negative prognostic indicator for disease outcome [16,17], and we previously reported the KFLC index as a significant predictor of disability over time, being higher in patients who developed greater disability in the short term [18]. Robust data have been published on the role of OB in predicting conversion of CIS to MS [19]. In this study we included a small group of RIS-CIS patients and evaluated conversion to MS in the short term. KFLC, across different interpretation approaches, was higher in subjects who converted to MS during follow-up, with the CSF KFLC/IgG ratio being more significant than the KFLC index. A prognostic value for KFLC has been discussed in a few recent studies [20,21]. Villart et al. associated high absolute CSF KFLC concentrations (categorized against a threshold of 0.53 mg/l) with a greater probability of conversion to MS in 78 CIS patients [22].
A similar prognostic role was confirmed for the KFLC index by Makshakov et al. [23]. There are no prognostic data on the excess of KFLC in the CSF using a ratio that includes CSF IgG, as we did. Moreover, in our study the CSF KFLC/IgG ratio stratified the risk of conversion to MS better than the KFLC index. LFLC did not differ among the groups, as previously described [21]. Early conversion to MS was less frequent with optic neuritis onset, whereas other clinical/paraclinical parameters failed to identify converters in our cohort (possibly because of the small sample size). Senel et al. enrolled 77 CIS patients according to the McDonald 2010 criteria [24], of whom 38 converted to MS. They showed that KFLC are predictors of conversion to MS (almost as sensitive as OB) [25]. In the present study, the application of the McDonald criteria 2017 reduced the number of cases that could be classified as RIS-CIS; a prolonged follow-up with long-term outcomes could strengthen the prognostic role of KFLC.

No difference in LFLC was found between RIS-CIS patients with and without conversion to MS. The CSF KFLC/IgG ratio was more informative in detecting which patients were at risk of converting to MS: RIS-CIS patients with an elevated CSF KFLC/IgG ratio had a higher risk of converting to MS (hazard ratio, HR 1.05; 95% CI 1.01-1.10; p = 0.02). Conversely, the regression was not significant for the KFLC index (HR 1.07; 95% CI 0.99-1.16; p = 0.09).

Another limitation of this study is that our patients underwent a single lumbar puncture. Consequently, we have no data on changes in FLC levels over time, although previous reports suggest they remain stable [26]. It is also questionable whether KFLC could change with treatment, particularly treatments targeting B cells.

In conclusion, we confirmed the KFLC index as the most sensitive and specific quantitative marker for diagnosing MS, and we suggest that the CSF KFLC/IgG ratio might be employed to identify which RIS-CIS patients will convert to MS.

Data availability: data are available on request.
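A minimal sketch of the converter-versus-non-converter regression reported above, using binary logistic regression in statsmodels on fabricated data (the HR 1.05, 95% CI 1.01-1.10 is the authors' result from their cohort, not something this code reproduces):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Fabricated RIS-CIS cohort: CSF KFLC/IgG ratio (arbitrary units) and
# conversion outcome (1 = converted to MS during follow-up).
ratio = rng.gamma(shape=4.0, scale=5.0, size=60)
converted = (rng.random(60) < 1 / (1 + np.exp(-(0.08 * ratio - 2)))).astype(int)

X = sm.add_constant(ratio)            # intercept + single predictor
fit = sm.Logit(converted, X).fit(disp=0)

# exp(coefficient) gives the odds ratio per unit increase of the marker,
# the effect size (reported as "HR") with its 95% CI in the Results.
odds_ratio = np.exp(fit.params[1])
ci_low, ci_high = np.exp(fit.conf_int()[1])
print(f"OR per unit of ratio: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```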
Evaluating Knowledge of Business Processes

Any organization relies on processes/procedures to organize its operations. Those processes can be explicit (e.g., textual descriptions of workflow steps or graphical descriptions) or implicit (e.g., employees have learned by experience the steps needed to 'get things done'). A widely acknowledged fact is that processes change due to internal and/or external factors. How can managers make sure employees know the latest version of the process? The current practice is to test employees with multiple-choice questions. This paper proposes a novel knowledge-testing approach based on graphical and interactive questions. To validate our approach, we set up a single-factor controlled experiment with novices and experts in a faculty admission process. The results show that our approach yields better results in terms of correct answers.

Introduction

This paper introduces our research on methods that can be employed to test knowledge of business processes. This study approaches the research question: which method of testing knowledge about processes/procedures is better suited for evaluating the capacity of employees to execute them? Typically, process knowledge is evaluated by asking multiple-choice or open questions on the process documentation. Most organizations document some of their main processes in textual form and then struggle to keep the documentation up to date. A growing number of organizations acknowledge the importance of business processes, document them using some form of graphical model, and employ information systems tailored to support them. However, there are still many organizations that have not documented their processes; there, employees learn by training and/or experience how to achieve organizational goals. Whether process documentation exists or not, it is critical for managers to be able to evaluate how well employees can execute processes. Shallow knowledge about processes relates to learning the main steps to be performed, their sequence, the documents and data involved, and so on. A deeper understanding resides in, for example, contingency steps in case of errors, or an overview of organization-wide processes. Current testing based on multiple-choice questionnaires does not go beyond shallow understanding. We argue that a new testing approach is needed. It should aim to put the tested employee in a position where problem-solving deep knowledge is needed, rather than the ability to memorize process steps. In this paper we introduce a first take on this challenge. We evaluate whether asking questions in a graphical-interactive manner is better than the multiple-choice way; "better" is interpreted in terms of a greater number of correctly answered questions as well as the time needed to answer comprehension questions. The context of our experiment is an organization of higher education. More specifically, we use the annual admission process to evaluate how accurately different types of participants know and understand the entire process, and whether they are able to execute it in any specific case that might arise. The paper unfolds as follows. First, we provide an overview of the theoretical foundations by reviewing papers related to factors influencing model understanding and experimental design. In the next section, we provide the details of our controlled experiment; the single factor of the experiment is the comprehension question presentation format. In Section 4 we introduce the results and our data analysis. We end with conclusions and the implications of our findings.
Related Work

Business process models are key artefacts in the development of information systems. While one of their main purposes is to facilitate communication among stakeholders, little is known about the factors that influence their comprehension by human agents. To date, the body of research on process model understanding relies on controlled experiments based on multiple-choice comprehension questions. Therefore, in this section we approach two related research avenues: process model understanding, and controlled experiments in process model settings. The notation we use in our experiment is the Business Process Model and Notation (BPMN), the industry standard in process modelling. It has straightforward syntax and semantics. So-called connectors (XOR, AND) define complex routing constraints as splits (multiple outgoing arcs) and joins or merges (multiple incoming arcs). With XOR, when splitting, the sequence flows to exactly one of the outgoing branches; when merging, the gateway waits for one incoming branch to complete before triggering the outgoing flow. With AND, when splitting, all outgoing branches are activated simultaneously; when merging parallel branches, the process flow waits for all incoming branches to complete before triggering the outgoing flow.
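Since the comprehension questions probe exactly these routing rules, a minimal token-passing sketch of the XOR and AND gateway semantics may help (our own illustration, not code from the paper; real BPMN engines also handle events, data and exception flows):

```python
def xor_split(condition_results: dict[str, bool]) -> list[str]:
    """XOR split: exactly one outgoing branch receives the token."""
    taken = [branch for branch, cond in condition_results.items() if cond]
    assert len(taken) == 1, "XOR requires exactly one true condition"
    return taken

def xor_join(completed_branches: set[str]) -> bool:
    """XOR merge: fires as soon as any one incoming branch completes."""
    return len(completed_branches) >= 1

def and_split(outgoing_branches: list[str]) -> list[str]:
    """AND split: all outgoing branches are activated simultaneously."""
    return list(outgoing_branches)

def and_join(completed_branches: set[str], incoming: set[str]) -> bool:
    """AND merge: fires only when every incoming branch has completed."""
    return incoming <= completed_branches

print(xor_split({"approve": True, "reject": False}))        # ['approve']
print(and_join({"check_docs"}, {"check_docs", "pay_fee"}))  # False: still waiting
```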
Process Model Understanding

Process models typically capture, in some graphical notation, the tasks, events, states, and control-flow logic that together constitute a business process. Understanding the process is essential when it comes to achieving organizational goals, or simply when passing information to third parties. Ignorance of procedures/processes can result either in unsuccessful achievement of some goal or in wasted resources (e.g., extra employee workload to correct mistakes). There is no clear definition of the notion of understanding of business process models; it is used in different ways depending on the context [1], [2], [3], [4]. Even though a definition is lacking, the notion is very important. In the academic setting, the ability of humans to understand processes is linked to model features such as structuredness, complexity (e.g., number of model elements, number of types of model elements), or the particular modeling notation (e.g., the formalism needed to depict the model, how the model is actually drawn, whether it follows secondary notation conventions, etc.). In organizations, domain knowledge also plays an important role. After all, a model of which all aspects are understood very well by the stakeholders is easily verifiable from the validity and completeness points of view. In state-of-the-art research in business process management there is a clear stream of research aimed at clarifying what makes process models understandable. This stream is divided into three main branches: a) research focused on model characteristics: researchers set out to find what makes process models complex; as complexity is directly linked to understanding, it has been shown that a model with, e.g., more elements, more crossing arcs, or less structure will increase the cognitive load and thus reduce comprehension performance. What is worth mentioning, in the context of our project, is that all this work relies on similar setups of controlled experiments: multiple-choice comprehension questions are asked about process models, and independent variables such as various model metrics (e.g., number of model elements, number of model arcs) are linked to dependent variables such as the number of correct answers or the time needed to answer the questions. Of course, many variations can be found, linked to the specific purpose of the research (e.g., model activities with textual or abstract labels, artificial or industry models). b) research focused on the process of creating models: this kind of research aims to shed further light on best practices for creating process models. Researchers also rely on controlled experiments, but this time aiming to capture the flow of modeling activities; for example, experimenters record how novices and experts perform modeling activities when given identical process descriptions. c) research focused on the connection between process models and real-life implementations: this kind of research leaves the abstract approach for a more real-world focus; thus, the research method of choice is case studies.

Controlled Experiments in Business Process Management (BPM)

We designed our study according to the research methodology laid out by Field and Hole [5] as well as Creswell [6]. In short, we followed the recommended stages of research: planning (i.e., literature research for related papers, choosing the method for empirical research, and designing the controlled single-factor experiment), execution of the experiment, and data analysis and interpretation.

For the literature research we performed the first steps towards a systematic literature review. We searched Google Scholar for a combination of keywords (i.e., "experiment business process" and "experiment process model") in order to extract all papers related to experiments in the BPM area. We considered all the hits on the first 15 pages and screened the titles for relevance. The papers passing this first filter were filtered again based on their abstracts. One last filter was based on the number of citations and the year of publication (i.e., we divided the number of citations by the number of years since the paper was published and retained as relevant the hits with a ratio of at least 5). The third filter yielded a total of 7 papers that relate very closely to our own effort. Key researchers in the area appear to be B. Weber, M. Weske, J. Mendling and H. Reijers. The papers that provided the most inspiration were [7], [8], [9] and [10].
In choosing a research design appropriate for our particular research problem, we considered the following: the means of obtaining the information; the availability and skills of the researchers; justification of the way in which the selected means of obtaining information would be organized, and the reasoning leading to the selection; and the time available for research. Execution of the project is a very important step in the research process: if the execution proceeds as planned, the collected data are adequate and dependable. A major concern was ensuring that the survey was under statistical control, so that the collected information was in accordance with the pre-defined standard of accuracy. After the data were collected, we turned to the task of analyzing them. The analysis of data comprises a number of closely related operations: creation of categories, the application of these categories to raw data through coding, tabulation, and drawing statistical inferences. Coding is usually done at this stage, transforming the categories of data into symbols that may be tabulated and counted. Editing is the procedure that improves the quality of the data for coding. Tabulation is the part of the technical procedure where the classified data are put into tables. By statistical tests we seek to determine whether observed differences are real or the result of random fluctuations. After analyzing the data, we moved to hypothesis testing. Do the facts support the hypotheses, or do they happen to be contrary? This is the usual question to be answered while testing hypotheses. Various tests, such as the chi-square test, t-test, F-test, ANOVA, etc., have been developed by statisticians for this purpose. Hypothesis testing results in either accepting or rejecting the hypothesis. If a hypothesis is tested and upheld several times, it may be possible to arrive at generalizations, i.e., to build a theory.

Experiment Overview

As understandability is a rather broad aspect and cannot be measured directly, we rely on a controlled experiment to gain insight into the research question. The goal is to investigate the impact of the presentation format in an effort to answer the main research question: which method of testing knowledge about processes/procedures best reflects the ability of experts to perform these processes/procedures? We experiment with one factor: how the knowledge-test questions are formulated and presented to the subject. Basically, we ask the same basic test question but introduce/show it to the user in two different ways. There are two levels of this factor: a) a classic multiple-choice question layout and b) a customized graphical interface tailored for process-related knowledge. An example of such a comprehension question, presented in both variations, is: "Indicate the minimum number of steps to confirm your place within the admission." The participants in the experiment were 16 faculty staff and students. All had expertise with the faculty admission process that was used as the setting for our study. Participants were divided into two groups: experts were faculty staff who had been involved in at least 3 executions of the process, while novices were participants involved in at most 2 executions of the process. Participants were randomly assigned to one of the two factor levels such that we maintained an even distribution of experts and novices in the two groups.
The objects of the experiment are comprehension questions. Participants answered a set of ten questions about the business model, its documents, tasks, data objects and exception situations.

We use two dependent variables in the experiment. The first variable is correctness in answering comprehension questions; it is coded 1 if the participant indicated the correct answer, and 0 otherwise. The second dependent variable is the time needed to answer each comprehension question. The difficulty of answering comprehension questions can be linked to the amount of time it took people to provide their answer. Time was recorded manually from the moment the question was introduced to the participant until the answer was given, and is stored in seconds.

We also use several independent variables: a) experience codes the domain knowledge of the faculty admission process, with two possible values: 0 for novices and 1 for experts; b) question type stores the focus of the comprehension question (e.g., the sequence of process activities, or the documents and/or data needed to execute the process activities); in total, there are 4 question types; c) treatment codes the factor level the participants were assigned to. The resulting data layout is illustrated in the sketch at the end of this subsection. We formulate six hypotheses about the relationships between independent and dependent variables:

- H1: The presentation format of the comprehension questions (treatment) will impact the ability of the participants to provide the correct answer (correctness).
- H2: The general process knowledge of the participant (experience) will impact the ability of the participants to provide the correct answer (correctness).
- H3: Comprehension questions on different process perspectives, like control-flow and data (question type), will impact the ability of the participants to provide the correct answer (correctness).
- H4: The presentation format of the comprehension questions (treatment) will impact how much time a participant needs to provide an answer (response time).
- H5: The general process knowledge of the participant (experience) will impact how much time a participant needs to provide an answer (response time).
- H6: Comprehension questions on different process perspectives, like control-flow and data (question type), will impact how much time a participant needs to provide an answer (response time).

Personal factors have also been recognized as important for this type of research. They relate to the reader of such a model, for example with respect to one's educational background or the perceptions held about a process model. The way information is processed by humans is influenced by cognitive styles, which can be related to personality: some persons prefer verbal over visual information. From this point of view, the perceptual capabilities of a person are also relevant to visualization; these capabilities differ between persons with different modeling expertise [1]. A level of professional expertise is assumed to take at least 1,000 to 5,000 hours of continuous training [13]. Such regular training is needed to build up experience and knowledge regarding a specific process. We were unable, and thus did not aim, to capture such personal features as independent variables.
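A minimal sketch of one coded observation under this scheme (our own illustration; the field names are assumptions, while the 0/1 codings for treatment, experience and correctness follow the paper):

```python
from dataclasses import dataclass

@dataclass
class Observation:
    """One row per (participant, question), following the paper's coding."""
    participant: str
    treatment: int      # 0 = classic multiple choice, 1 = graphical interface
    experience: int     # 0 = novice (<=2 executions), 1 = expert (>=3)
    question: int       # question number, 1..10
    question_type: int  # e.g. 1 = sequence, 2 = documents, 3 = decision-making
    correctness: int    # 1 = correct answer, 0 = incorrect
    time_s: float       # manually recorded response time, in seconds

rows = [
    Observation("P01", treatment=1, experience=0, question=2,
                question_type=1, correctness=1, time_s=34.0),
    Observation("P02", treatment=0, experience=1, question=2,
                question_type=1, correctness=0, time_s=51.5),
]
# 16 participants x 10 questions = 160 such rows in the actual experiment.
```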
Tasks

As mentioned before, we investigate the impact of presentation on process knowledge testing. To this end, we created a questionnaire that tests how well participants understand the process from various perspectives. A questionnaire is a research instrument consisting of a series of questions and other prompts for the purpose of gathering information from respondents. One variant of the questionnaire consists of multiple-choice questions on the admission process; the other variant implements our own proposed question format.

The questionnaire was developed by iterating through three steps. First, we collected the textual description of the admission process and screened it for issues that might raise comprehension problems. Our concern was to cover different types of understanding perspectives. For example, one major concern of managers is that employees know the correct sequence of the steps to be performed; another is that employees know exactly which documents need to be requested at specific points in the process. Second, we formulated questions based on those issues, with 3 to 5 answer options. The questionnaire contains the same questions, with the same answer options, presented in two different styles. Therefore, the third step was to create the two presentation variants in such a way as to preserve neutrality regarding 'guessing' the correct answer. The questions included in the questionnaire are:

1. Specify which documents are necessary to enroll in the faculty.
2. You have been admitted! Specify the minimum number of steps for confirming the enrollment.
3. Is it mandatory for a student to take the Lingua exam?
4. I'm Ionescu, a student from Switzerland. Which is the previous step I need to go through before I submit my entry file?
5. What happens if I don't submit the original Baccalaureate degree for the confirmation?
6. Is it mandatory to collect the first down-payment when I register?
7. Specify which documents are necessary to be included in the confirmation file.
8. I'm Ionescu. Is it mandatory to enter data in the online pre-registration system?
9. I'm Ionescu. I wasn't assigned a place either by admission average or by option. What happens after the initial distribution?
10. Does online enrollment need to be made before or after the preparation of the documents in the file?

The questions are grouped into different types:

- Two questions related to documents: participants are tested on whether they know what should be included in the student's file.
- Questions related to control-flow: three questions inquire about sequence (the order in which activities should be performed); such a question asks the participant whether something can be done without doing something else first. For assessing knowledge of concurrency, there is one question about the order in which some actions can be executed. Finally, there is one question related to process overview, which asks the participant to indicate the smallest number of activities to be executed between two process points.
- Questions related to decision making:
there are three questions that highlight the alternative paths that may be followed when a decision needs to be made or when an error occurs.

The collected data were analyzed using statistical methods to verify the degree of correlation between participants' knowledge of the process and the proposed metrics. While the multiple-choice items were evaluated automatically, the open answers had to be interpreted and matched against the errors detected based on the textual description.

Subjects

Overall, the experiment was performed with two groups made up of faculty staff and students of the Faculty of Economics and Business Administration of Babes-Bolyai University in Cluj-Napoca. All participants had direct involvement in the execution of the admission process. The experts have more experience with the back-office side of the admission process, while students have more experience with the front-office side. However, all participants should have a good understanding of the process activities and, moreover, were provided with the by-law that gives the official textual description of the process. Each participant was randomly assigned to a group that received one treatment: the first group was given the multiple-choice version, and the second group the graphical-interface version.

Method and Experimental Design

The overall phases of this research project are: 1. documentary research; 2. development of the solution and implementation of the prototype system; 3. experimental validation by controlled experiment; 4. analysis of experimental data; 5. dissemination of the results. We chose to perform a controlled experiment, in which all factors remain fixed except one. The steps of the experiment execution were: 1) the experiment begins by randomly assigning the participant to one of the two treatments; 2) the operator gives the participant a brief description of the experiment goals, its setting and the tasks to be performed; 3) the document with the official procedure is provided to the participant, and the operator explains that it can be consulted at any moment during the experiment; a 15-minute interval is granted to the participant for refreshing their memory based on this document; 4) the operator introduces each question and its answer options, and a timestamp is recorded; 5) the operator records the answer option indicated by the participant and a second timestamp.

Instruments

According to the theoretical background, both the characteristics of the reader of a process model and those of the process itself impact the understanding that may be gained from studying that model. The format and content of the questions were developed and tested in several iterations before the final version of the questionnaire was reached. The questionnaire was implemented in Balsamiq Mockups. Data collection was done using Microsoft Excel; the data were then imported for analysis into the statistical package MedCalc.

Research Results

In this section we introduce the results of our experiment. The collected data include information such as: first name, last name, treatment, question number, the answer option indicated by the participant, and the task execution time measured in seconds. Additional data were added, such as:
the correctness of the answer option (1 for a correct answer and 0 for an incorrect one), expertise (0 for novices and 1 for experts), and question type (questions asking for sequence were coded 1, questions testing process document knowledge were coded 2, and decision-making questions were coded 3). One row was created for each question asked to a participant. A pre-processing step was performed by manual inspection of the data for obvious errors or outliers. No abnormal observations were detected, so all 160 observations were used for further analysis.

In the second stage, the results were analyzed using the MedCalc statistical package, which implements: descriptive data analysis (e.g., the average number of questions answered correctly by model and by type of subject, and the average time spent per question by model and by type of subject); correlation analysis; and variation analysis (e.g., ANOVA, ANCOVA, etc.).

Looking at Table 2, the obvious insight is the weak but significant correlation between treatment and correctness. This supports our main hypothesis that the presentation of the comprehension questions is linked to how many correct answers participants provide. The positive value of the correlation must be read in the context that treatment was coded 0 for the 'classic' multiple-choice questions and 1 for the graphical interface, while correctness was coded 0 for an incorrect answer and 1 for a correct answer.

There is a weak negative correlation close to the significance threshold (p = 0.07) between experience and time. This points to the conclusion that experts spend less time answering comprehension questions.

Having found some interesting correlations between variables, we further analyzed the data to test our hypotheses. The analysis of variance groups the recorded correctness values by the treatment codes (our proposed graphical approach coded 1, the 'classical' multiple-choice questions coded 0). As the ANOVA shows, the averages of the two groups are significantly different (p < 0.001), with our proposed approach having a much higher correct-response rate than the classic multiple-choice group (59% versus 27%). Our main hypothesis, H1, is supported. It relates to the research question of this paper: an interactive graphical approach to asking comprehension questions leads to a greater number of correct answers, which in turn is linked to a better understanding of the process. We believe this increase in performance can be attributed to a better understanding of the questions and their context. The only other hypothesis partially supported is the influence of experience on the time needed to answer the comprehension questions. The weak correlation can be attributed to the fact that we asked questions about specific, rarely occurring issues in the process. Also, the answer options were formulated in such a way that the participant could not 'guess' the answer. Unsurprisingly, the null hypothesis of H2 finds support: experience does not seem to imply better performance. We believe this is linked to our careful formulation of the comprehension questions so that there would be no obviously 'right' answer.
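The reported correlation and ANOVA are straightforward to reproduce on data coded this way; a minimal scipy sketch on fabricated stand-in data (the 59%/27% rates are the paper's, everything else is made up):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Fabricated stand-in for the 160 coded observations.
treatment   = rng.integers(0, 2, size=160)          # 0 = multiple choice, 1 = graphical
correctness = (rng.random(160) < np.where(treatment == 1, 0.59, 0.27)).astype(int)

# Point-biserial correlation between the binary factor and binary outcome
# (what a Treatment/Correctness cell in a correlation table amounts to).
r, p_corr = stats.pointbiserialr(treatment, correctness)

# One-way ANOVA of correctness grouped by treatment, as in Table 3.
f, p_anova = stats.f_oneway(correctness[treatment == 0],
                            correctness[treatment == 1])
print(f"r = {r:.2f} (p = {p_corr:.3f}); ANOVA F = {f:.1f} (p = {p_anova:.4f})")
```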
Conclusions and Threats to Validity

In this paper we investigated which method of testing knowledge about processes is better suited for evaluating the capacity of experts to execute them. The empirical validation supports our claim that better comprehension performance is achieved through the graphical interface than through multiple choice. We believe this increase in performance is more closely related to the participants' actual level of understanding of the process, as indicated by the results of the ANOVA in Table 3, which point to a very low average of correct answers in the 'classical' multiple-choice group. This conclusion is further strengthened by the fact that experience in executing the process does not impact the correct-answer rate, confirming that we formulated questions that cannot be answered simply by 'guessing' or from general knowledge. We acknowledge that there are many threats to the validity of the results of our study, and we tried to mitigate them. Conclusion validity is limited by the sample size of the collected data (2 treatments, 1 process, 10 questions, 16 respondents). One particular aspect of external validity relates to the limited number of respondents, all from a single organization. Construct validity is linked to how the dependent variables (correctness, time) were derived from the data collected through the questionnaire filled in by the subjects. The measurements may have lacked accuracy, given that we manually recorded the times at which a participant started and completed each task; to reduce this threat, respondents received detailed instructions on how to use and fill in the questionnaire, and experimental sessions were performed with a single participant at a time. Regarding internal validity, we considered several aspects that may have threatened it:
- Persistence effects: the experiment was executed by participants who had never done a similar experiment before.
- Knowledge of the universe of discourse: domain knowledge did not affect internal validity, since the domain was familiar to all subjects.
- Fatigue effects: the total experiment time for each participant was less than 30 minutes, so fatigue effects were unlikely to appear.
- Subject motivation: the subjects were highly invested in this research, and the results could be beneficial to them, since the experts are directly involved in the admission process and the novices study at this university.
- Plagiarism and influence among and between subjects: not possible, because participants did not see each other and thus could not copy from a colleague.

Our conclusions have implications for industry. We believe there is enough support for moving to the next phase: implementing the proposed interactive graphical approach to testing process knowledge as an application targeted at employees of organizations where procedure understanding is critical. Such an application will also enable us to perform the experiment on a larger sample of participants and on multiple processes, and thus reinforce our conclusions.

Table 2. Correlation table.
Table 3. One-way ANOVA for the Treatment and Correctness variables.
Table 4. Results of hypothesis testing (summary of our findings).
Comparative Analysis between 3D-Printed Models Designed with Generic and Dental-Specific Software

With the great market demand for new dental software, the need has arisen to carry out a precision study of applications in digital dentistry, for which no comparative study exists and there is a general lack of knowledge regarding their applications. The purpose of this study was to investigate the accuracy differences between digital impressions obtained using generic G-CAD (general CAD) and D-CAD (dental CAD) software. Today, there is a difference between the design software used in dentistry and that in common use; thus, it is necessary to compare the precision of software for specific dental use and for generic use. We hypothesized that there is no significant difference between software for specific dental use and for general use. Methods: A typodont was digitized with an intraoral scanner and the models obtained were exported in STL format to four different software packages (Autodesk MeshMixer 3.5, Exocad Dental, Blender for Dental, and InLAB). The STL files obtained from each package were materialized using a 3D printer. The printed models were scanned and exported as STL files, from which six pairs of groups were formed. The groups were compared using analysis software (Geomagic Control X) by superimposing them in the initial alignment order and using the best-fit method. Results: There were no significant differences between the four analyzed software types; however, group 4, composed of the D-CAD combination (Blender-InLAB), obtained the highest average (−0.0324, SD = 0.0456), with higher accuracy compared with the group with the lowest average (group 5, composed of the combination of the MeshMixer and Blender models, a generic and a specific software package: 0.1024, SD = 0.0819). Conclusion: Although no evidence of a significant difference was found in the accuracy of 3D models produced by G-CAD and D-CAD, group combinations in which specific dental design software was present showed higher accuracy (precision and trueness). The comparison of the 3D graphics obtained by superimposing the digital meshes of the printed models, performed with the analysis software using the best-fit method and replicating the same five reference points across the six groups, evidenced greater tolerance in the groups using D-CAD.
Introduction

The application of digital workflows in dentistry is increasing due to the rapid development and improvement of intraoral scanners, dental software and dental materials, becoming an integral part of the daily routine and of communication between dentists and dental technicians [1,2]. In this field, one of the best-known tools is CAD/CAM technology (computer-aided design/computer-aided manufacturing) [3,4], which offers, among its advantages, the ability to streamline processes that are more laborious analogically [5]. In addition, it makes it possible to design and manufacture chairside restorations with high functional and aesthetic quality, ensuring a better fit between surfaces in a faster and more comfortable way for the patient, adjusted to the characteristics and anatomy of the tooth, by means of restorations designed with different software [6,7]. In summary, these technologies provide a more efficient workflow in the clinical environment, offering high accuracy, precision, predictability, efficiency and cost-effectiveness with a wide range of restorative materials with adequate physical, optical and biological properties [8,9]. The digital workflow with a CAD/CAM system involves image acquisition, digitization, design and manufacturing [10,11]. This workflow can be categorized into three groups: chairside, directly in the dental office; labside, through a dental lab; or mixed, using processes from both of the above [12]. The process begins with the acquisition of information to create a "point cloud" by means of scanning. The spatial position of each scanned point is defined by its Cartesian coordinates; a 3D model is then formed by the union of planes arranged in triangulations. This step is considered critical for originating a 3D file [13,14].

The point cloud generated during scanning is converted into a continuous surface through the CAD software algorithm, which may cause some loss of accuracy [12]. Technical factors influencing the accuracy of the scanning process include ambient illumination, the operating software version, the intraoral scanner's optical impression technology, the depth of field and the scanning strategy [3,15,16]. However, even if the point cloud has low density or aberrant areas, the scanner can remove such measurements using computer algorithms, thus generating a better digital model [3,14,17]. For this reason, certain digitization software, after image acquisition, processes the information to generate a file that can be exported to different design programs depending on the format required [13,18]. The digital file obtained can be stored in various formats, such as stereolithography (STL), the 3D object geometry information format (OBJ) and the polygon file format (PLY). OBJ and PLY files contain additional information concerning the color and texture of the object [19]. However, in most systems, CAD data are handled and transmitted in STL format, where each triangle of an STL file consists of three points with Cartesian coordinates (X, Y, Z) and a surface normal [1]; STL has therefore become the standard file format in 3D printing [8]. In the CAM phase, the model can be materialized in various systems, either by additive manufacturing (AM) in 3D printing or by subtraction with a milling machine [20].
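To make the STL structure concrete, here is a minimal ASCII STL fragment and a toy Python parser for it (our own illustration, not part of the study's workflow):

```python
# One facet of an ASCII STL file: a unit normal followed by three vertices.
FACET = """\
solid example
  facet normal 0.0 0.0 1.0
    outer loop
      vertex 0.0 0.0 0.0
      vertex 1.0 0.0 0.0
      vertex 0.0 1.0 0.0
    endloop
  endfacet
endsolid example
"""

def parse_ascii_stl(text: str) -> list[list[tuple[float, float, float]]]:
    """Collect the vertex triples of each triangle; normals are recomputable
    from the vertices, so many tools ignore the stored normal line."""
    triangles, current = [], []
    for line in text.splitlines():
        parts = line.split()
        if parts and parts[0] == "vertex":
            current.append(tuple(float(v) for v in parts[1:4]))
            if len(current) == 3:
                triangles.append(current)
                current = []
    return triangles

print(parse_ascii_stl(FACET))  # one triangle: [[(0.0, 0.0, 0.0), ...]]
```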
In the exchange of data between different types of dental software, the meshes of the 3D objects can gain or lose accuracy [21]. The quality of a digital impression is defined by two independent factors, trueness (or reliability) and precision; the combination of both determines accuracy [22]. Trueness is obtained by comparing the original geometry of the reference master model with the digitized model, while precision is obtained by an intra-group comparison of digitized models, i.e., it refers to the closeness of agreement between test results [19,20]. During the acquisition and digitization steps, the accuracy of the impression may be affected [23].

There are two types of systems that support these files: open systems and closed systems. Closed systems offer a completely integrated flow, including data acquisition, virtual design with software and restoration fabrication in the same environment. All steps are integrated into a single system, and there is no interchangeability with systems from other companies [24], although some of them allow exporting to software from different companies by exchanging STL files for design. Currently, many workflows allow universal files to be opened through the export of digital meshes in different formats [22,25].

Open systems, on the other hand, allow the adoption of the original digital data generated by CAD software and CAM devices from different companies, providing greater versatility [24,26]. These systems handle three-dimensional data in the STL format, the most commonly used in dental CAD/CAM systems, based on open-source software because it is freely available, meaning that any user can inspect, improve or share it. Its universal format allows STL to work with almost all CAD software programs [9,27].

When using open-source software, it should be considered that it may not have originally been developed for dental design; such software has been transformed over time, as in the case of Blender 3.3.1, which was adapted into Blender for Dental 3.3.1 [28].

Software specifically for dental design can be classified as D-CAD, and software for general, non-specific design as G-CAD. The incorporation of new technologies into private practice or a dental laboratory requires mastery of the design software, as well as an understanding of these tools and their application in various clinical situations and environments; it should be noted that the use of such software involves a significant learning curve [29].

The CAD phase is a very challenging part of the digital workflow. The use of different types of specific dental design software, added to the scarce existing literature, has made it a subject to be investigated in depth [30]. Evaluation of D-CAD by analyzing the learning curve confirms that results differ according to the type of software program [31]. New studies have shown that CAD proficiency is closely related to the learning curve and to repeated D-CAD practice; therefore, design-software training is necessary for effective clinical application [32]. It should be considered that D-CAD learning is strongly influenced by the user interface (UI) of the software and the user experience (UX); therefore, even if a software program is used for the first time, the better the UI and UX, the faster the learning [33].
D-CAD serves as an intuitive tool for professionals; however, the flexibility to create virtual designs is more limited, due to the cost per package or version required, compared with the non-specific design software G-CAD. D-CAD has the advantage of presenting all the available design tools in a more simplified way; therefore, the learning curve and working time are shorter [34,35]. G-CAD requires a longer learning curve but a lower economic investment compared with D-CAD. However, the disadvantage of G-CAD is that it is more prone to errors in both meshing and treatment planning [36]. In the absence of extensive literature on experimental studies of D-CAD, a dentist working with G-CAD software will encounter difficulties in comprehensive treatment planning until he or she becomes familiar with all the necessary tools [37].

An example of G-CAD is MeshMixer, which is free, with basic functions such as simple trimming, degumming, digital model labeling, and the option to add dental libraries; among its limitations, however, it cannot analyze occlusion or dental proportions, positions, shapes and morphologies. For this reason, a longer learning curve is mentioned as a disadvantage, as well as a longer design time [27,29,38,39].

The purpose of this study was to compare the accuracy of different STL files of 3D models exported with G-CAD and D-CAD, to observe whether the file meshes undergo modifications impacting the accuracy of the digital impressions. This would provide useful information for dentists and laboratory technicians, besides providing criteria for their choice, in view of the scarce literature [27]. Given the different characteristics of the software used, the printed models were analyzed in terms of trueness and accuracy across the different types of design software. Therefore, the null hypothesis was that there would be no statistically significant differences in accuracy and trueness between the files of the printed dental models designed with these two types of software.

Model Digitalization and Design: First Phase

For the first phase, a master upper-arch typodont was scanned with a high-end intraoral scanner (PrimeScan, Dentsply-Sirona, New York, NY, USA), following the scanning strategy recommended by the manufacturer. This method was chosen for its high trueness, the system having been shown to be a valid tool for obtaining digital full-arch datasets in vivo with comparable accuracy [29]. The typodont was then digitized, and the image obtained was exported in a high-resolution STL file format using chairside software (CEREC 5.0.0, Dentsply-Sirona, New York, NY, USA).

The STL file was exported to each of the four software packages previously selected: three D-CADs (Exocad Dental 3.1, exocad, Darmstadt, Germany; Blender for Dental 3.3.1, Blender, New York, NY, USA; and InLAB SW 22.0, Dentsply-Sirona, Bensheim, Germany) and one G-CAD (MeshMixer, Autodesk, San Francisco, CA, USA). In summary, the study compares the accuracy of the digital models processed with different CAD programs.
Next, an exported STL file was obtained from each design software package studied. Each of the digital models was printed using a 3D printer (SprintRay, Los Angeles, CA, USA) with model resin (SprintRay Die and Model 2). Four printed models were obtained for each group. Immediately afterwards, the printed models were placed on an automated multi-stage wash platform, starting with a two-cycle wash using 91% isopropyl alcohol, followed by a 10-minute fast dry (SprintRay Pro Wash/Dry, SprintRay, Los Angeles, CA, USA); upon completion, they were light-cured with 120 W UV light for 120 s in the built-in photo-polymerization system (Procure, SprintRay, Los Angeles, CA, USA) (Figure 1).

Digitization of the Model and Groups to Be Studied: Second Phase

The process began with the scanning of the four printed models, following the scanning strategy recommended by the manufacturer (PrimeScan, Dentsply-Sirona, New York, NY, USA). The digital models were exported as STL files at 100% resolution, in order to compare the different software packages by superimposing the meshes. For the comparison, six groups were formed with the composition described in Table 1.

Table 1. Groups included in the study (pairings as named in the Results; group 6 is the remaining pair).
Group 1: Blender for Dental + Exocad Dental
Group 2: InLAB + Exocad Dental
Group 3: Blender for Dental + Autodesk MeshMixer
Group 4: Blender for Dental + InLAB
Group 5: Autodesk MeshMixer + Exocad Dental
Group 6: Autodesk MeshMixer + InLAB

Analysis of STL Files

Mesh analysis software (Geomagic Control X, 3D Systems Inc., Rock Hill, SC, USA) was used because it is a high-precision comparison program that generates multiple measurement points and color maps and is easy for the designer to use. The two STL files forming each group were imported for comparison, using initial alignment and the best-fit method, replicating the same reference points.
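Best-fit superimposition, as performed by the analysis software here, is a rigid registration that minimizes distances between corresponding surface points. A minimal sketch of the closed-form step for known correspondences (the Kabsch algorithm; commercial best-fit alignment iterates a step like this with re-estimated correspondences):

```python
import numpy as np

def best_fit_rigid(P, Q):
    """Rigid rotation R and translation t minimising ||R @ p + t - q||
    over corresponding point sets P, Q of shape (n, 3) (Kabsch algorithm)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Toy example: Q is P rotated 10 degrees about Z and shifted.
theta = np.radians(10)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
P = np.random.default_rng(2).random((50, 3))
Q = P @ Rz.T + np.array([0.5, -0.2, 0.1])
R, t = best_fit_rigid(P, Q)
residual = np.abs((P @ R.T + t) - Q).max()     # ~0 after alignment
print(f"max residual after best fit: {residual:.2e}")
```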
From the aligned digital meshes, the 3D comparison was started, measuring five points in each group. The five measurement points were placed on the interincisal papilla, on the palatal cusps of teeth 1.5 and 2.5, and on the mesiopalatal areas of teeth 1.6 and 2.6. Replicating this step for the same points in the six groups gave 30 measurement points in total, which provided the descriptive statistics of each group (mean, standard deviation, minimum and maximum values), in addition to the color map ranging from −1 mm (blue) to +1 mm (red) with a tolerance range of ±0.1 mm, as shown in Figure 2.

Statistical Analysis

For data analysis, the variables obtained from the combinations of the six software groups used in this study were defined. Based on the data generated by the analysis software, homoscedasticity (equality of variances) was assessed with Levene's test, normality with the Shapiro-Wilk test, and differences in the averages with the nonparametric Kruskal-Wallis rank test, all with statistical software (SPSS V27 for Windows; IBM Corp., Chicago, IL, USA) and a significance level of 5%.

Results

Table 2 shows the descriptive statistics, where group 4 obtained the highest mean (−0.0324; SD = 0.0456), with minimum and maximum values of −0.070 and 0.0462, respectively. From these results it was observed that groups 3 and 4 showed greater precision and accuracy, with less dispersion between their measurements.
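As a hedged illustration of the descriptive summary behind Table 2 and the tolerance analysis, the sketch below computes the per-group statistics and the share of points within the ±0.1 mm band; the deviation values are invented, not the study's measurements.

```python
import numpy as np

# Signed deviations (mm) at the five measurement points of one group.
deviations = np.array([-0.070, -0.048, -0.032, -0.012, 0.046])

mean, sd = deviations.mean(), deviations.std(ddof=1)
lo, hi = deviations.min(), deviations.max()
within_tol = np.mean(np.abs(deviations) <= 0.1) * 100  # ±0.1 mm tolerance

print(f"mean={mean:.4f} mm, SD={sd:.4f}, min={lo:.4f}, max={hi:.4f}, "
      f"{within_tol:.1f}% within ±0.1 mm")
```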
Figure 3 shows the analysis of outliers and quartiles. No outliers were found in any of the groups. Group 1 (Blender-Exocad) showed little dispersion; the median showed that 50% of the measurements were below −0.0665, quartile 1 indicated that 25% were below −0.1656, and quartile 3 showed that 75% were below −0.0403. Group 2 (InLAB-Exocad) was the group with the largest dispersion of all, with 50% of the observations below −0.0574, 25% below −0.2088 and 75% below −0.0389. Group 3 (Blender-MeshMixer) had the second-lowest dispersion, with 50% of the observations below −0.0299, 25% below −0.1325 and 75% below −0.0260. Group 4 (Blender-InLAB) had the lowest dispersion of all (highest precision), with 50% of the observations below −0.0439, 25% below −0.063 and 75% below 0.004. Group 5 (MeshMixer-Exocad) had the second-highest dispersion after group 2, with 50% of the observations below −0.0747, 25% below −0.1886 and 75% below −0.044. For group 6, 50% of the observations were below −0.0729, 25% below −0.1485 and 75% below −0.0281. According to these results, the values were similar between groups, although some groups were more variable than others. The descriptive analysis indicated that the Blender for Dental, InLAB, MeshMixer and Exocad programs showed greater accuracy and precision in the group combinations in which a D-CAD was present. However, the null hypothesis was not rejected, meaning there is no evidence of a significant difference in the accuracy of the 3D dental models.
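A minimal sketch of the quartile and outlier analysis summarized in Figure 3, on synthetic deviations (none of these numbers are from the study), using the usual 1.5 × IQR fences:

```python
import numpy as np

devs = np.array([-0.1656, -0.0665, -0.0403, -0.058, -0.091])  # one group, mm

q1, med, q3 = np.percentile(devs, [25, 50, 75])
iqr = q3 - q1
low_fence, high_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = devs[(devs < low_fence) | (devs > high_fence)]

print(f"Q1={q1:.4f}, median={med:.4f}, Q3={q3:.4f}, outliers={outliers}")
```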
Table 3 shows the normality test of the measurements. According to these values, a normal distribution was confirmed for group 4 (p-value = 0.079 > 0.05), group 5 (p-value = 0.210 > 0.05) and group 6 (p-value = 0.314 > 0.05), but not for the remaining groups; therefore, the nonparametric Kruskal-Wallis test was used to determine the mean discrepancy.

The Kruskal-Wallis test for independent samples did not reject the null hypothesis. With respect to the standard deviation, Levene's test did not reject the null hypothesis of equality of variances; that is, there were no significant differences between the standard deviations of the groups. One of the objectives was to determine whether there were significant differences between the averages obtained for each group. Table 4 shows that there were no differences between the group averages (H = 3.524, p-value = 0.620 > 0.05); therefore, the null hypothesis was accepted. Table 5 shows that there were no differences between the precisions (standard deviations) reported by the different groups, since the Levene statistic based on the mean and on the median yielded values greater than the significance level (mean p-value = 0.772 and median p-value = 0.977). From the analysis of Figures 3 and 4, it was determined that group 3 showed the highest percentage within the tolerance range (75.21%), followed by group 4 (72.23%) and group 1 (70.78%). Groups 6, 5 and 2 showed tolerances of 67.87%, 67.85% and 60.32%, respectively, showing lower tolerance and greater dispersion.
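The test battery reported above (Shapiro-Wilk per group, Kruskal-Wallis across groups, Levene for homoscedasticity) can be reproduced in Python with SciPy; the sketch below uses fabricated deviation samples, since the real per-group data are in Tables 3 through 5.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
groups = [rng.normal(-0.05, 0.05, size=5) for _ in range(6)]  # 6 groups, 5 points each

# Normality per group (Shapiro-Wilk).
for i, g in enumerate(groups, start=1):
    _, p = stats.shapiro(g)
    print(f"group {i}: Shapiro-Wilk p={p:.3f}")

# Differences in averages (Kruskal-Wallis) and in variances (Levene, median-based).
h, p_kw = stats.kruskal(*groups)
_, p_lev = stats.levene(*groups, center="median")
print(f"Kruskal-Wallis H={h:.3f}, p={p_kw:.3f}; Levene p={p_lev:.3f}")
```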
Discussion

The purpose of this study was to compare D-CAD and G-CAD software through the accuracy of the STL files of 3D models, observing the behavior of the file meshes that could influence the trueness of the digital impressions.

The results of the descriptive analysis indicate that the Blender for Dental, InLAB, MeshMixer and Exocad programs showed higher accuracy and precision in the combinations of groups in which a D-CAD was present. However, the null hypothesis was not rejected, meaning that there is no evidence of a significant difference in the accuracy of dental 3D models produced by the different programs, whether D-CAD or G-CAD. Regarding the average, the D-CAD groups obtained higher accuracy and lower dispersion than the average obtained with a G-CAD.

By standard deviation, the study also found no significant differences between the groups. These findings suggest that the choice of dental software used to fabricate dental 3D models may not be critical in terms of precision and accuracy, as all the programs evaluated produced comparable results. The outlier and quartile analysis of the STL measurements did not reveal outliers in any group, and similar results were identified, even though some groups showed more variability than others. Therefore, the results provided by the different D-CAD and G-CAD programs included in the study did not show significant differences, although better results in terms of precision and accuracy were evidenced in the combinations that included a D-CAD.

The comparison of the 3D graphics obtained by superimposing the digital meshes of the printed models, performed with the analysis software using the best-fit method and replicating the same five reference points in the six groups, evidenced a greater tolerance in the groups using a D-CAD.

Although the use of digital impressions in dentistry is not a new topic, there remains some reluctance to dispense with conventional impressions and rely more on digital ones. Several studies have compared the accuracy of digital impressions of dental implants with conventional impression techniques, and the results have shown that the accuracy offered by digital impressions can be clinically acceptable [40,41]. In this regard, previous authors noted that results can be influenced by the operator (e.g., experience and scanning strategy) [21,42,43], by the technology (e.g., scanners, printers, algorithms, software) [44,45], and by clinical conditions (e.g., ambient light, dental materials within the oral cavity, saliva and/or blood, amount of attached gingiva, patient movement) [46,47].

Specifically in relation to the technologies involved in the CAD/CAM process, previous studies have investigated the accuracy of the different scanners on the market [48-50], revealing that the accuracy of digital impressions with intraoral scanners may vary depending on whether the investigation was performed in vitro [51] or in vivo [52]. Few studies have addressed the accuracy achieved by combining the scanner with CAD design software, whether proprietary or open source, considering the deviations that occur when exporting the STL file to different CAD programs, which makes it possible to evaluate data loss during the transfer of intraoral scans. Following this approach, one study concluded that more accurate results are obtained when using the proprietary design software associated with the intraoral scanner, while another recommended the use of open-system scanners that export directly to STL format, because data loss affecting model accuracy was observed when transferring from the proprietary scanner format to STL [53,54]. The effect of scanner software versions and CAD design software on the accuracy of the results has also been examined, showing that the most recent updates guarantee greater accuracy and therefore more satisfactory results; regarding the accuracy of STL files, software updates seem to achieve increasingly accurate files [55,56].
Accuracy comprises precision and trueness. Precision, conceptually, is the variation between repeated measurements of a given target, while trueness expresses how close the results of a measurement are to the real values of the measured object [57]. In relation to digital impressions, trueness is an important measure for analyzing a model from this source [27,58]. According to the literature consulted, the accuracy of digital impressions is clinically acceptable between 50 and 120 µm [59-61], which means that the accuracy of the digital impression, as the first step in any digital workflow, should stay below that range. It is noteworthy that the accuracy of today's digital impressions has led to their integration into dental offices [7]. Accuracy can be measured by different methods, and many studies evaluate it by examining prosthetic workflows [59,62]. However, most of the research similar to the present study used dedicated software to compare STL files with a reference data set [57], best-fit alignment being the most commonly used method. In this sense, the inspection software (Geomagic Control X) used in this study has shown good accuracy in the measurement of digital models [58,63,64].

Technological progress in recent years has led to the implementation of a large number of D-CAD and G-CAD programs. In the case of G-CAD, new developments have been adapted and now include extensive dental libraries, as is the case of MeshMixer, which facilitates its use in dental design. In general, the software that makes dental design possible currently offers different tools, protocols for registering CT or CBCT data, surface models, and outputs in STL format, and makes it possible to integrate 2D design into 3D, as is the case of Exocad Dental, InLAB and Blender for Dental, applications capable of designing dental pieces with complex geometry, making maximum use of resources [65] and carrying out virtual planning across the entire digital workflow.

According to other authors, some design programs can be more intuitive than others, making the choice very subjective; they recommend that, before choosing a system, as many design programs as possible should be tested until the most satisfactory CAD program is found, the one that best fits the specific systems used in the daily routine [66,67], with the user weighing costs, the tools offered, the learning curve, the design time and the affinity with the digital workflow [8,39,68]. These reasons supported the interest of the present research, in view of the scarce literature comparing generic and specific software [69].
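Referring back to the trueness/precision distinction defined at the start of this section, here is a minimal numeric sketch; all values are invented and the reference deviation is an assumption for illustration.

```python
import numpy as np

reference = 0.0                                    # true deviation, mm
repeats = np.array([0.032, 0.041, 0.028, 0.037])   # repeated scans of one target

trueness = abs(repeats.mean() - reference)   # closeness to the true value
precision = repeats.std(ddof=1)              # agreement between repeats

print(f"trueness={trueness*1000:.0f} um, precision={precision*1000:.0f} um")
# For context, clinically acceptable digital impressions fall around
# 50-120 um according to the literature cited above [59-61].
```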
It should be noted that in the present in vitro study the scanning was performed in an environment different from intraoral conditions; therefore, the results obtained may differ if the procedure is performed under clinical conditions. The use of a single intraoral scanner and a single 3D printer is also identified as a limitation, restricting the results to the technical specifications of the software and hardware of this equipment. However, it is important to highlight that the intraoral scanner used in the study has a good reputation regarding measurement accuracy. In this respect, the Roth study revealed that, in a comparison of 12 intraoral scanners, CEREC Primescan was the most accurate in terms of overall accuracy (trueness + precision: 4.2 trueness points + 3.2 precision points = 7.4 points out of 10) [70,71], presenting results comparable with the accuracy of laboratory scanners, mainly in short and linear segments [72]. Regarding the importance of the printer used for the comparison, Morón et al. stated that statistically significant differences can occur in the accuracy of the printed models, with better results for industrial desktop 3D printers than for dental ones [73]. However, printers such as the DLP printer used in this study increasingly automate material selection and subsequent post-production processes with integrated devices.

Regarding the results obtained for the different programs, other authors, in findings concordant with ours, showed that the three CAD programs analyzed in their study (InLAB, Multi-CAD and Blue-Sky CAD) can design clinically acceptable crowns in terms of internal and marginal fit, although the InLAB crowns outperformed the others in marginal fit [74]; statistically there were no significant differences between the results obtained by the different design programs, yet InLAB, which is a D-CAD, produced the most accurate results (Figure 5).

In terms of tolerance, the D-CAD programs showed better results. This is in agreement with other authors' results, which additionally highlight their versatility, high precision and greater operator intuitiveness.
Among the limitations of the study, in addition to being an in vitro study and therefore lacking intraorally reproducible clinical conditions (saliva and darkness, among other factors), a single operator, a single scanner model and a single 3D printer were used. As a recommendation, it is necessary to increase the sample size to have greater certainty in the results. Therefore, further studies are required to analyze and compare the results of the different design programs used in dentistry.

Although the study provides valuable information on the accuracy of various dental design programs, further research with a larger sample size is necessary to confirm these results and to assess other relevant aspects such as user-friendliness, efficiency, and cost. Ultimately, the choice of dental software should be based on a variety of factors, such as the user's needs, the complexity of the case, and resource availability, and not solely on the accuracy and precision of the produced 3D models.

Conclusions

The present study has shown that the programs evaluated in this work presented similar results in terms of dispersion measures, as well as uniformity in the color maps generated for the groups formed from the different combinations of STL files. In addition, it was shown that there were no alterations in the performance of the different programs used in the study, suggesting that there are no significant differences in accuracy and trueness between the files of the printed models designed with the three specific and one generic programs; however, it should be noted that the groups that included a D-CAD achieved better accuracy than those with the G-CAD.

• The results of the descriptive analysis indicated that the programs Blender for Dental, InLAB, MeshMixer and Exocad showed greater accuracy and precision in the group combinations in which a D-CAD was present.
• The comparison of the 3D graphics obtained by overlapping the digital meshes, using the analysis software with the best-fit method, showed greater tolerance in the groups that used a D-CAD, which gave better results.
• Regarding the mean, the D-CAD groups obtained greater precision and lower dispersion than the mean obtained with the G-CAD.

Figure 1. Model digitization and design procedure.

Table 1. Groups included in the study.

Table 3. Normality test for STL values in the groups.
Table 4. Kruskal-Wallis test for STL values in the groups.

Table 5. Levene's test for homoscedasticity of STL values across groups.
8,330
2023-09-01T00:00:00.000
[ "Medicine", "Engineering", "Computer Science" ]
The Tumor Suppressive mir-148a Is Epigenetically Inactivated in Classical Hodgkin Lymphoma

DNA methylation was shown previously to be a crucial mechanism responsible for transcriptional deregulation in the pathogenesis of classical Hodgkin lymphoma (cHL). To identify epigenetically inactivated miRNAs in cHL, we analyzed the set of miRNAs downregulated in cHL cell lines using bisulfite pyrosequencing. We focused on miRNAs with promoter regions located within or <1000 bp from a CpG island. The most promising candidate miRNAs were further studied in primary Hodgkin and Reed-Sternberg (HRS) cells obtained by laser capture microdissection. Last, to evaluate the function of the identified miRNAs, we performed a luciferase reporter assay to confirm miRNA:mRNA interactions and then established cHL cell lines with stable overexpression of selected miRNAs for proliferation tests. We found a significant inverse correlation between DNA methylation and expression levels of mir-339-3p, mir-148a-3p, mir-148a-5p and mir-193a-5p, demonstrating epigenetic regulation of these miRNAs in cHL cell lines. Moreover, we demonstrated direct interaction between miR-148a-3p and the IL15 and HOMER1 transcripts, as well as between mir-148a-5p and the SUB1 and SERPINH1 transcripts. Furthermore, mir-148a overexpression resulted in reduced cell proliferation in the KM-H2 cell line. In summary, we report that mir-148a is a novel tumor suppressor inactivated in cHL and that epigenetic silencing of miRNAs is a common phenomenon in cHL.

Introduction

DNA methylation is a crucial mechanism responsible for deregulation of gene expression in human neoplasms. Both global DNA hypomethylation and hypermethylation of CpG islands located in gene promoter regions have been widely described in tumorigenesis. Global hypomethylation increases genomic instability, whereas promoter hypermethylation results in the silencing of gene expression [1]. Recently, however, new insight into the mechanisms of gene expression regulation by DNA methylation has been provided. As shown for the MMP-9 gene in melanoma cell lines, intragenic hypermethylation, in contrast to hypermethylation of promoter regions, positively correlates with gene expression level [2].

The exceptional importance of aberrant DNA methylation in the development of classical Hodgkin lymphoma (cHL) was demonstrated by several studies showing that DNA hypermethylation attenuates the expression of genes responsible for normal B-cell development [3,4]. Consequently, the neoplastic Hodgkin and Reed-Sternberg (HRS) cells of cHL show a characteristic loss of the B-cell phenotype and an increased immune escape potential [5]. Given the important role of DNA methylation in cHL pathogenesis, we propose that DNA methylation is co-responsible for the deregulation of miRNA expression in cHL, in a similar manner to protein-coding genes.

The phenomenon of miRNA silencing by DNA methylation in cancer has been gaining attention recently. It is known that mir-155, mir-152, mir-137, mir-31 and mir-874 expression is regulated by DNA methylation in solid tumors such as breast and prostate cancer [6,7]. Similarly, it has been shown that this mechanism contributes to miRNA downregulation in hematological malignancies such as infant acute lymphoblastic leukemia and mantle cell lymphoma [8,9]. However, the significance of this process for cHL pathogenesis remains unknown and the available literature data on this phenomenon are scarce.
Among the few published studies, miRNA promoter methylation and subsequent changes in microRNA expression after 5-aza-2′-deoxycytidine (5-Aza-dC) treatment were described by Navarro et al. in two cHL cell lines (L-428 and L-1236) [10]. The authors showed that altogether the expression of 13 microRNAs was induced after global DNA demethylation in both cell lines, suggesting their epigenetic inactivation. Intrigued by these findings, we aimed to identify epigenetically inactivated miRNAs within the group of 23 miRNAs downregulated in cHL, which might act as potential tumor suppressors in the disease. In contrast to the previous study [10], we used unmodified cHL and non-Hodgkin lymphoma (NHL) cell lines as well as normal germinal center B-cells (GCB) from non-tumor donors, to assess baseline methylation states in these malignant and normal B cells. As a result of our analysis, we identified an epigenetically regulated microRNA, mir-148a, not previously reported for cHL, which could play an important role in cHL pathogenesis, since it is known to be involved in the regulation of B-cell differentiation and of germinal center transcription factors [11].

Laser Capture Microdissection (LCM) of HRS and Non-Tumor Cells

Frozen lymph nodes of 14 patients with cHL (10 cases used for miRNA expression analysis: 4 nodular sclerosis, 3 mixed cellularity, 2 lymphocyte-rich and 1 undefined subset; 6 cases used for DNA methylation analysis: 3 nodular sclerosis, 2 mixed cellularity and 1 lymphocyte-rich) were obtained from the Dr. Senckenberg Institute of Pathology, Goethe University Hospital, Frankfurt am Main, Germany. Detailed information about the clinical samples is presented in Table S2. Frozen sections (5-10 µm) of the lymph nodes were mounted on membrane-covered slides (PALM, Zeiss, Bernried, Germany) and fixed in acetone. HRS cells were microdissected immediately after H&E staining or CD30 immunostaining. For miRNA expression analysis, approximately 1000 H&E-stained cells per case were collected onto adhesive caps. For DNA methylation analysis, anti-CD30 (Clone BerH2, DAKO, Glostrup, Denmark; Super Sensitive Link-Label IHC Detection System, BIOGENEX, San Ramon, CA, USA) pretreated slides were used to dissect 2 × 200 HRS cells and 2 × 200 non-tumor cells per case into 20 µL PCR buffer without MgCl2 (Expand High Fidelity, Roche, Grenzach, Germany) supplemented with 0.1% Triton X-100. LCM was performed using the PALM laser capture microdissection microscope/system (PALM MicroBeam, Zeiss, Bernried, Germany). The study was approved by the local ethics committee of the Goethe University Hospital (157/17 from 06.04.2017).

Sorting of GCB CD77+ Cells

CD77+ GCB cells were purified from fresh tonsils obtained from tonsillectomies for chronic hyperplastic tonsillitis using magnetic activated cell sorting (MACS; Miltenyi Biotech, Bergisch Gladbach, Germany), as described previously [14]. Informed consent was obtained from all patients according to the Declaration of Helsinki. The study was approved by the local ethics committee of Goethe University Hospital (157/17 from 06.04.2017).

DNA Isolation

DNA isolation from cell lines was performed by phenol/chloroform extraction with the use of Phase Lock Gel™ tubes (5Prime Quantabio, Beverly, MA, USA) and ethanol precipitation, and for MACS-sorted CD77+ GCB cells by using the DNeasy Blood and Tissue Kit (Qiagen, Hilden, Germany).
DNA from LCMed HRS and bystander cells for methylation analysis was obtained by cell lysis in Tris-proteinase K buffer with shaking (600 rpm) at 55 °C for 72 h.

RNA and miRNA Isolation

Total RNA from cell lines was isolated with Trizol reagent based on the Chomczynski method [15]. miRNA from sorted GCB and microdissected HRS cells was isolated using the miRNeasy Mini Kit (Qiagen).

miRNA Expression Analysis

Total RNA (10 ng) from cell lines and sorted GCB, and 10 µL of the miRNA-containing eluate from HRS cells, were transcribed to cDNA with the TaqMan™ Advanced miRNA cDNA Synthesis Kit (Applied Biosystems, Foster City, CA, USA) according to the protocol provided by the manufacturer. For reverse transcription, 3′ poly-A tailing and 5′ adaptor sequence ligation were performed, and all mature miRNAs were reverse transcribed using RT primers binding to universal sequences present on both the 5′ and 3′ extended ends. Afterwards, cDNA was amplified using the Universal miR-Amp Primers and miR-Amp Master Mix. Expression of miR-148a-3p and miR-148a-5p was assessed with TaqMan™ Advanced miRNA Assays (Assay IDs 477814_mir and 478718_mir) and normalized to the control microRNAs miR-361-5p and let-7g-5p (Assay IDs 478056_mir and 478580_mir). The PCR reaction mix contained 10 µL 2× Fast Advanced Master Mix, 1 µL TaqMan® Advanced miRNA Assay, 5 µL of diluted cDNA template (1:10), and 4 µL H2O. Reactions were run in triplicate under the following conditions: 95 °C for 20 s × 1; (95 °C for 1 s, 60 °C for 30 s) × 40.

Gene Expression Analysis

Total RNA (500 ng) from the cell lines was reverse transcribed into cDNA using the Maxima First Strand cDNA Synthesis Kit (Thermo Fisher Scientific, Waltham, MA, USA). The expression level of putative target genes of miR-148a-3p and miR-148a-5p was evaluated in reference to the expression of the ACTB and GAPDH genes. Primer sequences were designed using the Primer-BLAST software (Primer3 and BLAST) (https://www.ncbi.nlm.nih.gov/tools/primer-blast). Real-time qPCR was performed using the CFX96Touch Real-Time PCR System (Bio-Rad, Hercules, CA, USA) according to standard procedures. Results were analyzed using the Gene Expression Macro™ 1.10 software (Bio-Rad).

mir-148a Mutation Screening

Primer sequences for mir-148a amplification were designed using the Primer-BLAST software (https://www.ncbi.nlm.nih.gov/tools/primer-blast).

Pyrosequencing was performed using the PyroMark Q24 sequencer (Qiagen) as described previously [16]. Each run included fully methylated (M: commercially available methylated DNA, Millipore, Hilden, Germany) and unmethylated controls (UMET: whole-genome-amplified DNA from pooled peripheral blood lymphocytes, prepared with the GenomePlex Complete Whole Genome Amplification (WGA) Kit, Sigma-Aldrich, Saint Louis, MO, USA). The DNA methylation level was assessed as the mean over all analyzed CpG dinucleotides for each assay. Detailed information about the sequences analyzed by pyrosequencing is shown in Table S4. For mir-148a promoter DNA methylation in microdissected cells, the same PCR conditions as described above were used, but with two rounds of PCR, as described previously [17]. The first PCR round included 25 µL PyroMark Master Mix, 5 µL CoralLoad, 1 µL each of forward and reverse primer (20 pmol/µL) and 18 µL of converted DNA (whole lysate). The second PCR round included a standard PCR reaction mix with 1 µL of the first-round PCR product as the DNA template.
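As a hedged illustration of the methylation readout just described, the sketch below computes per-CpG methylation from pyrogram peak heights and averages it across the CpGs of an assay. The peak-height model and all numbers are simplifying assumptions for illustration, not data or software from the study.

```python
import numpy as np

# Hypothetical peak heights for four CpG positions of one assay.
c_signal = np.array([820, 790, 850, 760])   # "methylated" (C) peaks
t_signal = np.array([140, 180, 120, 200])   # "unmethylated" (T) peaks

# Per-CpG methylation as the C fraction of the total signal.
per_cpg = 100 * c_signal / (c_signal + t_signal)

print(f"per-CpG methylation: {np.round(per_cpg, 1)} %")
print(f"assay methylation level (mean over CpGs): {per_cpg.mean():.1f} %")
```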
Dual-Luciferase Reporter Assay

Wild-type (WT) or mutant (MUT) miRNA binding sites located in the 3′ UTRs of selected genes were cloned into the pmirGLO Dual-Luciferase miRNA Target Expression Vector (Promega), followed by transformation of JM109 competent cells (Promega). Vectors were purified using the PhasePrep BAC DNA Kit (Sigma-Aldrich) and verified by Sanger sequencing. WT and MUT oligonucleotides were designed as proposed by Mets et al. [18] and purchased from Genomed (Warsaw, Poland) (Table S5). In detail, WT constructs represent miRNA binding sites flanked by approximately 30 bp of the respective 3′ UTR. For MUT oligonucleotides, point mutations were introduced into the miRNA binding sites in an attempt to abolish the putative interaction between the miRNA and the 3′ UTR. Validation of the miRNA-3′ UTR interactions was performed in HEK 293T cells in two independent transfections and three technical repetitions using jetPRIME DNA/siRNA reagent (Polyplus-transfection SA, Illkirch-Graffenstaden, France) as follows:

• 500 ng of vector containing the 3′ UTR WT sequence + 50 µM of the analyzed miRNA mimic (mirVana® miRNA mimic, MC10263, MC12683, Invitrogen, Carlsbad, CA, USA)
• 500 ng of vector containing the 3′ UTR WT sequence + 50 µM of the mimic negative control (mirVana™ miRNA Mimic, Negative Control #1, Invitrogen)
• 500 ng of vector containing the 3′ UTR MUT sequence + 50 µM of the analyzed miRNA mimic (mirVana® miRNA mimic, MC10263, MC12683, Invitrogen)
• 500 ng of vector containing the 3′ UTR MUT sequence + 50 µM of the mimic negative control (mirVana™ miRNA Mimic, Negative Control #1, Invitrogen)

Cells were lysed with the Dual-Glo Luciferase Assay System 24 h after transfection, and the bioluminescence signal of firefly luciferase was measured using the GloMax® 96 Microplate Luminometer with reference to the internal Renilla luciferase control.
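A minimal sketch of how such dual-luciferase readings can be evaluated: firefly signal normalized to the Renilla internal control, then the WT and MUT reporters compared. All luminescence values below are invented for illustration; a real interaction shows repression with the mimic on the WT site that is restored on the MUT site.

```python
import numpy as np

# firefly / Renilla ratios, three technical repeats per condition
wt_mimic    = np.array([12000, 11500, 11800]) / np.array([41000, 40500, 41500])
wt_neg_ctrl = np.array([21000, 20500, 21500]) / np.array([40800, 41200, 40900])
mut_mimic   = np.array([20000, 19800, 20400]) / np.array([41100, 40700, 41300])

print(f"WT + mimic vs WT + control:  {wt_mimic.mean() / wt_neg_ctrl.mean():.2f}")
print(f"MUT + mimic vs WT + control: {mut_mimic.mean() / wt_neg_ctrl.mean():.2f}")
```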
miRNA Overexpression

The mir-148a insert containing the 3p and 5p miRNAs was prepared by PCR amplification using primers specific for the genomic sequence harboring the pre-miRNA-148a hairpin, including approximately 100-250 nt of flanking sequence on each side, as described previously [19] (for primer sequences see Table S3). The PCR product with cohesive ends was directly cloned into the pCDH-CMV-MCS-EF1α-GreenPuro vector (SBI, Palo Alto, CA, USA) and used for the functional studies. The vector was packaged into lentiviral particles in HEK 293T cells using jetPRIME transfection reagent (Polyplus-transfection SA, Illkirch-Graffenstaden, France). The lentiviral particles were harvested 48 h after transfection, and three cHL cell lines (KM-H2, L-1236 and L-540) were independently transduced with the vector carrying the mir-148a (3p and 5p) sequence as well as with the empty vector. The vectors contain a GFP gene and express a GFP-puromycin resistance fusion gene that enables drug selection of target cells stably expressing the microRNA. After 14 days of antibiotic selection, transduction efficiency was analyzed by flow cytometry, and the overexpression of the respective miRNAs was confirmed by real-time qPCR with TaqMan® Advanced miRNA Assays. Cultures with >75% transduced cells were used for the proliferation assays.

Proliferation Tests

The three transduced cHL cell lines (KM-H2, L-1236 and L-540) were seeded (500,000 cells per well) in 24-well plates in antibiotic-depleted medium after puromycin selection. The CCK8 test (Cell Counting Kit-8, Sigma-Aldrich) was performed in a time-dependent manner from day 0 to day 8 (measurements every 2 days) to observe the differences between cells transduced with the mir-148a expression vector and the empty vector. The CCK8 assay, based on the bioreduction of WST-8 (2-(2-methoxy-4-nitrophenyl)-3-(4-nitrophenyl)-5-(2,4-disulfophenyl)-2H-tetrazolium, monosodium salt) into formazan by cellular dehydrogenases, was used to assess the influence of miR-148a on cell proliferation. Cells were incubated with CCK-8 for 2 h at 37 °C, and the absorbance at 450 nm and 600 nm was measured using the GloMax® 96 Microplate Luminometer. Experiments were performed in four replicates in three independent runs.

Cell proliferation was also analyzed via DNA synthesis measurement using the Click-iT Plus EdU Alexa Fluor 647 Flow Cytometry Assay Kit (Invitrogen). Cells (500,000 per well) were incubated with EdU (5-ethynyl-2′-deoxyuridine) or DMSO as a control for 4 h, then fixed and permeabilized with saponin. The fluorescence signal of Alexa Fluor™ 647 Click-iT™ was detected with the FlowSight® Imaging Flow Cytometer (Luminex). The experiment was performed in triplicate at two time points (days 0 and 3).
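As a hedged sketch of how the CCK-8 readout described above can be summarized, the code below background-corrects the absorbance (A450 minus the A600 reference) and compares the mir-148a vector with the empty vector over the 8-day time course. All absorbance values are fabricated for illustration.

```python
import numpy as np

days = np.array([0, 2, 4, 6, 8])
a450_mir = np.array([[0.31, 0.55, 0.92, 1.31, 1.60],    # replicate 1
                     [0.30, 0.57, 0.95, 1.28, 1.63]])   # replicate 2
a450_empty = np.array([[0.31, 0.62, 1.15, 1.80, 2.35],
                       [0.32, 0.60, 1.12, 1.84, 2.38]])
a600 = 0.08                                             # reference wavelength

viab_mir = (a450_mir - a600).mean(axis=0)
viab_empty = (a450_empty - a600).mean(axis=0)
reduction = 100 * (1 - viab_mir[-1] / viab_empty[-1])   # day-8 comparison

print(f"day-8 proliferation reduction ~ {reduction:.0f}%")
```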
miRNA Expression in cHL Is Deregulated by DNA Hypermethylation

Within the group of 23 miRNAs found downregulated in cHL in our parallel study (manuscript in preparation), we identified five with promoter regions located within or <1000 bp from a CpG island: miR-339-3p, miR-148a-3p, miR-148a-5p, miR-193a-5p and miR-4488 (Figure 1). To determine whether these miRNAs are regulated by DNA methylation, we performed bisulfite DNA pyrosequencing of the respective promoter regions in cHL cell lines (n = 7) and NHL cell lines (n = 10) as controls. We found that the promoter region of mir-339 was hypermethylated in all cHL cell lines (range 77-89%) and in 3 of the 10 NHL cell lines.

Importantly, by further testing these three regions (the promoters of mir-339, mir-148a and mir-193a) in GCB cell pools, we observed no DNA hypermethylation for any of the chosen miRNAs (elevated, though not hypermethylated, levels were observed for mir-339), suggesting that DNA hypermethylation in these regions is a unique characteristic of the neoplastic cells. Because two miRNAs, namely miR-148a-3p and miR-148a-5p, were found to be recurrently silenced by DNA methylation exclusively in cHL, in 4/7 cHL cell lines and in none of the tested NHL cell lines or GCB cells, we focused on these miRNAs in the further analysis. Lastly, we confirmed the downregulation of miR-148a-3p and miR-148a-5p in cHL cell lines and GCB cells using real-time qPCR with TaqMan probes (Figure 3A). This shows that DNA hypermethylation downregulates miRNA gene expression and contributes to the cHL-associated attenuation of miR-148a-3p and miR-148a-5p.
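The inverse methylation-expression relationship reported here can be quantified with a rank correlation; the following sketch uses synthetic values across seven hypothetical cell lines (none of these numbers are from the study).

```python
import numpy as np
from scipy import stats

methylation = np.array([9, 25, 64, 78, 85, 90, 95])    # promoter methylation, %
expression = np.array([1.8, 1.1, 0.40, 0.22, 0.15, 0.09, 0.05])  # relative level

rho, p = stats.spearmanr(methylation, expression)
print(f"Spearman rho={rho:.2f}, p={p:.4f}")  # a strongly negative rho is expected
```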
Canonical Gene Inactivation Mechanisms Seldomly Target mir-148a in cHL

In order to identify further mechanisms underlying the deregulation of mir-148a in cHL, we screened for putative copy number losses using available results of SNP array platforms for cHL cell lines [20,21]. In two of the seven evaluated cHL cell lines (L-1236, HDLM-2), with low (9%) or moderate (64%) mir-148a DNA methylation levels, we found heterozygous deletions that may partially explain the observed downregulation of this miRNA. In addition, we used Sanger sequencing to search for putative mir-148a loss-of-function mutations. No genomic variants were detected in the seven cHL cell lines, which strengthens the hypothesis that DNA hypermethylation is the main mechanism of mir-148a deregulation.

mir-148a Is Transcriptionally Deregulated and Hypermethylated also in Primary HRS Cells

In order to elucidate whether the DNA hypermethylation and downregulation of mir-148a is not limited to cell lines, we performed real-time qPCR with TaqMan probes in pooled HRS cells from 10 cHL cases. Similarly to what we observed in cHL cell lines, the expression level of miR-148a-3p was significantly lower in primary HRS cells in comparison to NHL cell lines and to GCB cells sorted from tonsillectomy specimens of chronic hyperplastic tonsillitis (p < 0.05) (Figure 3B). MiR-148a-5p expression in HRS cells could not be analyzed due to the low miRNA input after microdissection and the significantly lower endogenous expression of this miRNA in comparison to miR-148a-3p.

Lastly, we also confirmed higher mir-148a promoter region DNA methylation levels in microdissected HRS cells from a subset of primary cHL cases by bisulfite DNA pyrosequencing. Two of the six evaluated cases showed elevated methylation compared to non-tumor bystander cells from the same patient (Figure 3C).

Based on these results, we conclude that the epigenetic silencing of mir-148a in cHL contributes to the deregulation of several target genes, which may alter important processes in cHL pathogenesis.

mir-148a Overexpression Decreases Cell Proliferation

In order to put the observation of mir-148a downregulation in cHL into a functional context, we established three cHL cell lines stably overexpressing mir-148a (KM-H2, L-540, L-1236). By analyzing these cell lines using the CCK8 assay, we observed a significant decrease (p < 0.05) in cell proliferation after miR-148a overexpression in the KM-H2 cell line. On day 8 of the experiment, a 32% reduction in cell proliferation was observed in the mir-148a-expressing cell line compared to cells with the empty vector. miR-148a overexpression had no effect on the proliferation of the L-1236 and L-540 cell lines (Figure 6 and Supplementary Figures S2-S4).
To analyze whether the decrease in cell proliferation observed in the CCK8 test is related to differences in DNA replication, we conducted the Click-iT® Plus EdU Alexa Fluor® 647 Flow Cytometry Assay, which is based on the measurement of newly synthesized DNA in cells in S-phase. In line with the CCK8 test results, a decrease in DNA synthesis was observed for the KM-H2 cell line. On day 3 of culture, KM-H2 cells transduced with the miR-148a expression vector showed a significantly lower percentage of cells with newly synthesized DNA compared to cells transduced with the empty vector (35% vs. 49%, p = 0.03). Taken together, the functional studies performed in cell lines with induced mir-148a overexpression indicate that this microRNA has a negative influence on proliferation and DNA synthesis in a subset of cHL cell lines. Therefore, we assume that mir-148a may act as a tumor suppressor at least in some cHL cases.

Discussion

DNA methylation was previously shown to be an essential mechanism responsible for the regulation of gene expression in the pathogenesis of cHL [3,4]. Our findings suggest that epigenetic silencing in cHL is not limited to protein-coding genes but also plays an important role in the deregulation of miRNA expression.
miRNAs are responsible for fine-tuning the expression of protein-coding genes involved in the maturation of B-cells; therefore, deregulation of miRNA expression might contribute to the development of B-cell lymphomas [24,25]. As cHL is defined by a unique miRNA expression profile distinct from other B-cell lymphomas, one can expect the presence or absence of driver miRNAs to have a significant influence on cHL pathogenesis [26,27]. Following this lead, we analyzed whether the miRNAs found downregulated in cHL cell lines in our parallel study (our unpublished results, manuscript in preparation) have their promoter region located within or <1000 bp from a CpG island. We considered this a strong indication that these miRNAs are epigenetically regulated and that their DNA methylation level should be evaluated. Consistent with our hypothesis, we found promoter region hypermethylation of mir-339, mir-148a, mir-193a and mir-4488 in cHL cell lines. Moreover, with the exception of mir-4488, we demonstrated that the DNA methylation level inversely correlates with the expression level of these miRNAs. Importantly, mir-148a was found epigenetically attenuated exclusively in cHL.

Mir-148a was previously reported to be involved in the development of other types of cancer, such as stomach, liver, lung, and breast cancer [28]. There are several ways in which it can contribute to cHL pathogenesis; however, reports on its involvement in cHL have been lacking so far. Firstly, mir-148a has been described as a component of a regulatory circuit involving the NF-κB pathway, which is activated in HRS cells [29]. In this model, epigenetically mediated downregulation of mir-148a results in the overexpression of NF-κB in cancer cells. Secondly, through its interaction with the methyltransferases DNMT3b and DNMT1, miR-148a-3p is directly involved in the process of DNA methylation, which is essential in the context of the global deregulation of the methylation machinery in cHL [30,31]. Thirdly, mir-148a is expressed during physiological B-cell activation, and the promoter region of mir-148a is rich in motifs recognized by B-cell-specific transcription factors such as ELF1, EBF1, and E2A [9]. This is in line with our observation that this miRNA is unmethylated and expressed in GCB cells. Therefore, downregulation of mir-148a in cHL likely contributes to the deregulation of the normal B-cell maturation process.

However, the exact function in cHL of most of the mir-148a target genes validated in our study remains poorly understood. Only the role of IL15, a prominent pro-inflammatory cytokine and an important component of the growth and survival signals in cHL, was previously described [32]. In the study by Ullrich et al., IL15 stimulation of cHL cell lines resulted in increased proliferation and activation of the MAP kinase and JAK/STAT5 pathways. Interestingly, HOMER1 expression is also regulated via MAPK pathways and has a potential anti-apoptotic function [33]. SERPINH1 and SUB1, in turn, were described as oncogenes in different cancer types, promoting cell proliferation and invasion [34]. Exogenous overexpression of SUB1 in nude mice was shown to lead to the transformation of normal multipotent fibroblasts and to tumorigenesis [35]. Taken together, the observed downregulation of mir-148a-3p/5p may lead to a loss of transcriptional control over several cancer-related genes and at least partially explain their overexpression in cHL.
Lastly, in an attempt to understand the biological effect of the observed downregulation of mir-148a-3p/5p in cHL cell lines and microdissected HRS cells, we established three cHL cell lines (KM-H2, L-1236 and L-540) with stable mir-148a overexpression. Functional assays revealed the involvement of these miRNAs in the negative regulation of proliferation in the KM-H2 cell line. We can only speculate that the composition of genetic alterations in the KM-H2 cell line makes it more sensitive to mir-148a overexpression than the L-1236 and L-540 cell lines. In summary, we identify mir-148a as a novel tumor-suppressive miRNA that is epigenetically inactivated in cHL.

Conclusions

We propose that miRNAs undergo epigenetic silencing by DNA hypermethylation in cHL in the same way as protein-coding genes. Moreover, we identified mir-148a as being silenced by recurrent DNA hypermethylation, which leads to a loss of transcriptional control over several target genes, including IL15 and HOMER1, and thereby contributes to cHL pathogenesis.
6,669.6
2020-10-01T00:00:00.000
[ "Biology", "Medicine" ]
A COMPARATIVE STUDY OF CONVOLUTIONAL NEURAL NETWORKS FOR SEGMENTATION AND CLASSIFICATION OF REMOTE SENSING IMAGES

Geographical satellite images that are used for the analysis of environmental and geographical plains are obtained through remote sensing techniques. The raw images collected from the satellites are not well suited for statistical analysis and accurate report preparation. Therefore, the raw images undergo the usual image processing procedure of preprocessing, segmentation, feature extraction and classification. Traditional image classification techniques have several spatial and spectral resolution issues. A novel image classification technique, the Convolutional Neural Network (CNN), is an emerging research direction; it is an extension of neural network and deep learning approaches. In this paper, several CNN-based image classification techniques are analyzed and their performance is compared. The techniques involved in this analysis include the Fully Convolutional Network (FCN), patch-based classification, pixel-to-pixel segmentation and convnet-based feature extraction. Each technique utilized a different dataset for its performance evaluation. Finally, the performance evaluations are analyzed in terms of accuracy.

I. INTRODUCTION

Remote sensing is the process of monitoring a remote object without having physical contact with that object. In general, the objects are observed by gathering data using artificial satellites launched to revolve around the earth. Remote sensing technology has wide applications in weather forecasting, agriculture, environmental and hazard studies, fossil fuel and mineral identification, land-use mapping, and so on. During the analysis of disaster recovery and management, it is necessary for the government to collect the land cover for identifying the affected areas. Constellation satellites generate high-quality images of the entire earth in a short amount of time. The images produced by geographical satellites contain a large amount of noise and irrelevant data due to distortions introduced in space.

Remote sensing data are typically characterized by complex properties such as heterogeneity and class imbalance, and by overlapping class-conditional distributions. Together, these aspects constitute serious difficulties for producing land cover maps or for detecting and localizing objects, creating a high level of uncertainty in the obtained results, even for the best performing models. There is a large body of research on classification approaches that consider the spectrum of each individual pixel to assign it to a specific class. More advanced techniques combine information from a few neighboring pixels to improve classifier performance, often referred to as spectral-spatial classification. These approaches depend on separating the different classes on the basis of the spectrum of a single pixel or of a few neighboring pixels. In a large-scale setting, such approaches are not robust.

Convolutional neural networks (convnets) have enabled major breakthroughs in various image classification tasks, and remote sensing image classification is no exception to this trend. Traditionally, neural networks have been viewed as black boxes and trained end-to-end for a particular classification task. This has been one of the reasons for their success: the classifiers are learned in such a way that the most discriminative features are used for classification.
In remote sensing, data labeling is costly and large labeled datasets are rare. Two findings have made it possible to reduce the need for training data and have shifted the perspective on convnets. The first is the observation that the outputs of a neural network with random weights can be used to train a classifier and still yield good accuracy. The second is the intriguing property of convnets that accurate results on a given task can be obtained with features learned on a completely unrelated task, so that only the last layer needs to be trained for the task at hand; this has been used effectively in remote sensing image classification. As a consequence of these findings, a convnet can be viewed as consisting of two parts: a feature-extraction part and a classifier part. This division is flexible, and there is no strict rule determining which layers of the network belong to the extraction part and which to the classifier part. An Artificial Neural Network (ANN) passes messages between neurons, which are used for approximating complex functions. Fig. 1 illustrates the overall structure of a CNN. Feed-forward neural networks propagate the messages in an acyclic fashion. In a CNN, a set of inputs is combined by a basic operation to generate a single output. The input to the system is provided as a vector. The parameters of the network include the weight vectors and the biases, whose values are identified during the training process. The remaining sections of the paper are organized as follows: Section II gives a brief note about the FCN. Section III explains patch-based classification and pixel-to-pixel segmentation. Section IV describes convnet-based feature extraction. The performance of all the techniques is analyzed in Section V, and the paper is concluded in Section VI. II. FULLY CONVOLUTIONAL NETWORK (FCN) The FCN architecture [1] is proposed to generate dense predictions. The fully connected layer is converted to a convolutional layer whose kernel dimension is chosen to coincide with the preceding layer, so that its connections are equivalent to those of a fully connected layer. The FCN architecture includes a deconvolutional layer for improving the resolution of the output feature map by upsampling. The upsampled feature map comprises a central portion estimated by adding the contributions of two neighboring kernels. The upsampling is attained by interpolation from a set of nearby points, parameterized by a kernel. For effective interpolation, the kernels should be large enough that their contributions overlap in the output. The kernel states the level and extent of the contribution of an input value to the neighboring output positions, based on their relative locations only. The kernel values are multiplied by each input, and the overlapping responses in the output are added to perform the interpolation. Fig. 2 depicts the deconvolution layer for 2× upsampling, where the scaling step is performed with a constant 4×4 kernel. The interpolation kernel can also be treated as an additional group of learnable network parameters rather than being defined a priori. Only one kernel contributes to the outer border, which is an extrapolation of the input; the inner region is the interpolation. The extrapolated border is cropped from the output to avoid artifacts.
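To make the upsampling step concrete, here is a minimal sketch of a learnable 2× deconvolution (transposed convolution) initialized to a bilinear interpolation kernel, assuming PyTorch; the channel count and tensor sizes are illustrative, not taken from [1].

```python
import torch
import torch.nn as nn

def bilinear_kernel(channels: int, kernel_size: int = 4) -> torch.Tensor:
    """Weights of shape (channels, channels, k, k) that perform bilinear
    interpolation when used in a transposed convolution."""
    factor = (kernel_size + 1) // 2
    center = factor - 1 if kernel_size % 2 == 1 else factor - 0.5
    coords = torch.arange(kernel_size, dtype=torch.float32)
    filt_1d = 1 - torch.abs(coords - center) / factor   # triangular profile
    filt_2d = filt_1d[:, None] * filt_1d[None, :]       # outer product
    weight = torch.zeros(channels, channels, kernel_size, kernel_size)
    for c in range(channels):
        weight[c, c] = filt_2d   # each channel is upsampled independently
    return weight

channels = 2   # e.g. one score map per class
deconv = nn.ConvTranspose2d(channels, channels, kernel_size=4,
                            stride=2, padding=1, bias=False)  # 2x upsampling
with torch.no_grad():
    deconv.weight.copy_(bilinear_kernel(channels))
# The kernel remains a learnable parameter, so training can refine it
# away from pure bilinear interpolation.

x = torch.randn(1, channels, 16, 16)   # coarse map of class scores
print(deconv(x).shape)                 # torch.Size([1, 2, 32, 32])
```

The padding of 1 plays the role of discarding the extrapolated border described above.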
The advantages of the FCN over the patch-based approach are: 1) removal of discontinuities due to patch borders; 2) higher accuracy due to the simplified learning process and the smaller number of parameters; and 3) lower execution time due to the fast execution of convolution operations. The FCN is created by the convolutionalization of an existing patch-based network architecture. An existing framework is selected to benefit from a complete architecture and to enable a thorough comparison. Fig. 3(b) shows the FCN. Assume that the output patch of the network has size 1×1, so that a single output centered in its receptive field is produced. The fully connected layer is then transformed into a convolutional layer with a single feature map and the spatial dimensions of the previous layer (9×9). Finally, a deconvolutional layer is added to upsample the output by a factor of 4 and recover the input resolution. The resulting network can accept input images of different sizes. In the training stage, a 16×16 patch is produced as output to match the learning process of the patch-based network; this requires a patch input of size 80×80. The input is larger than the original 64×64 patches because every output is now centered in its own context. At inference time, inputs of arbitrary sizes are fed to the network to construct the classification maps. In the deconvolutional layer, the overlapping areas are added to generate the output; the output is indicated in gray and the excluded extrapolation is denoted in white. Fig. 4 shows the two-scale convolutional module. III. PATCH-BASED CLASSIFICATION AND PIXEL-TO-PIXEL SEGMENTATION A. Patch-based classification A Convolutional Neural Network (CNN) is trained on small image patches extracted from large training images [2]. Paisitkriangkrai et al. [3] achieved their best accuracy using patches, with each image patch classified according to the label of its center pixel. During the test phase, the trained CNN is used for efficient classification of the whole test image. • Patch-based CNN architecture: It involves four convolutional layers and two fully connected layers. The convolutional layers include 32, 64, 96, and 128 kernels of size 5×5×5, 5×5×32, 5×5×64, and 5×5×96, respectively. The kernels are applied with a stride of 1 on the 65×65×5 input image. To generate the training and validation data, a patch is first extracted for every object, with the object centered in the patch. Each patch is then rotated randomly several times at various angles to generate additional training data for the object class. Further samples are drawn from the images such that the center pixel belongs to the class of interest, and the same amount of training data is sampled from each class to achieve class balance. To ensure efficient classification of larger images, the fully connected layers are converted to convolutional layers. This reduces the computational complexity of the sliding-window approach, where overlapping regions would lead to redundant computations, and allows the classification of various image sizes.
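As an illustration of the patch-based architecture just described, the following is a minimal sketch, assuming PyTorch. The kernel counts and sizes (four convolutional layers with 32, 64, 96, and 128 kernels on a 65×65×5 input) follow the text; the activation functions, the width of the first fully connected layer, and the number of classes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Patch-based CNN: four conv layers followed by two fully connected
    layers, written here as convolutions so the same network can slide
    over images larger than the 65x65 training patches."""
    def __init__(self, num_classes: int = 6):  # class count is illustrative
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(5, 32, kernel_size=5), nn.ReLU(),    # 65 -> 61
            nn.Conv2d(32, 64, kernel_size=5), nn.ReLU(),   # 61 -> 57
            nn.Conv2d(64, 96, kernel_size=5), nn.ReLU(),   # 57 -> 53
            nn.Conv2d(96, 128, kernel_size=5), nn.ReLU(),  # 53 -> 49
        )
        # "Fully connected" layers expressed as convolutions: on a 65x65
        # patch they behave exactly like FC layers (1x1 spatial output);
        # on larger inputs they produce a dense map of class scores.
        self.classifier = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=49), nn.ReLU(),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = PatchCNN()
patch = torch.randn(1, 5, 65, 65)       # one 65x65 patch, 5 bands
print(model(patch).shape)               # torch.Size([1, 6, 1, 1])
image = torch.randn(1, 5, 129, 129)     # a larger tile at test time
print(model(image).shape)               # torch.Size([1, 6, 65, 65])
```

Writing the fully connected layers as convolutions is what removes the redundant sliding-window computation mentioned above.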
B. Pixel-to-pixel segmentation A pixel-to-pixel architecture is designed based on the FCN architecture [4] and trained using the cross-entropy loss function. This loss is computed by summing over all pixels in the image, but it is not well suited to imbalanced classes. The network is trained in small batches on 256×256-pixel patches; the patch size is selected based on Graphics Processing Unit (GPU) memory considerations [2]. Fig. 5 presents the pixel-to-pixel architecture, which enables end-to-end learning of pixel-to-pixel semantic segmentation. It contains four sets of double 3×3 convolutions, each set separated by a 2×2 max-pooling layer with stride 2. The first convolutional layer has a stride of 2; all other convolutional layers have stride 1. The final 3×3 convolution consists of one kernel for each class to produce class scores and is followed by a 1×1 convolution. A fractional-strided (transposed) convolution layer follows the convolutional layers; it learns to upsample the prediction back to the size of the original image and is followed by a softmax layer. The image patches are obtained from the input image with an overlap rate of about 50%. The patches are flipped left-right and up-down and rotated at 90-degree intervals, yielding eight augmentations per overlapping image patch. Two FCN models are trained to take the imbalanced classes into account. In the first FCN model, the loss of each class is weighted using median frequency balancing [5,6]; the weighting depends on the ratio of the median class frequency to the actual class frequency in the training set. The other FCN model uses the standard cross-entropy loss. The modified cross-entropy function is calculated as

$$L = -\frac{1}{N} \sum_{n=1}^{N} \sum_{c \in C} w_c \, y_{nc} \log(p_{nc}) \qquad (1)$$

where $w_c$ denotes the weight of class $c$, $f_c$ indicates the frequency of pixels in class $c$, $N$ represents the number of samples in a mini-batch, $C$ denotes the set of all classes, $p_{nc}$ signifies the softmax probability of sample $n$ in class $c$, and $y_{nc}$ represents the label of sample $n$ for class $c$. The weight $w_c$ is calculated as

$$w_c = \frac{\operatorname{median}(\{f_{c'} : c' \in C\})}{f_c} \qquad (2)$$
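A minimal sketch of median frequency balancing with the weighted cross-entropy of Eqs. (1) and (2), assuming PyTorch; the class frequencies and tensor shapes are made-up placeholders.

```python
import torch
import torch.nn as nn

# Pixel frequencies per class in the training set (placeholder values).
freq = torch.tensor([0.45, 0.30, 0.15, 0.07, 0.03])
weights = freq.median() / freq          # Eq. (2): median frequency balancing
criterion = nn.CrossEntropyLoss(weight=weights)  # Eq. (1), averaged per pixel

logits = torch.randn(4, 5, 256, 256)    # batch of 4, 5 classes, 256x256 patches
labels = torch.randint(0, 5, (4, 256, 256))
loss = criterion(logits, labels)
print(loss.item())
```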
IV. CONVNETS Convnets [7] with one or two convolutional layers followed by pooling layers are used for feature extraction. Filters in the convolutional layers are 3×3 pixels and the stride is equal to 1. The filter weights are initialized randomly as described in [8]. Max pooling is used on non-overlapping regions of size 2×2 pixels. Fig. 6 shows an illustration of the convnets. A single softmax layer is used as a classifier. When random weights are used, only the classifier is trained and the weights of the convolutional layers remain static. Stochastic gradient descent with Nesterov momentum is used for training all convnets. During learning, the validation error of the convnet is monitored and the learning rate is halved if the validation error does not drop for ten consecutive epochs; after a reduction, the learning rate is kept fixed for at least eight epochs. Learning is terminated if the validation error does not drop for 30 consecutive epochs or if the learning rate has been reduced by a factor of more than 1000 in total. The features are analyzed further using the Fisher criterion to obtain better insight into the classification accuracy and to evaluate the separability of the classes in the feature space. The Fisher criterion was originally used for evaluating the ability of Gabor-based features to discriminate between two textures. The feature vectors of the images from a single class form a cluster in the feature space. The features are more suitable for discriminating between the classes if the separability of the clusters is better; the separability depends on the distance between the clusters and on their compactness, and can be assessed using Fisher discriminant analysis. The within-class scatter matrix, a measure of the compactness of the clusters, is defined as

$$S_W = \sum_{i} \sum_{x \in X_i} (x - m_i)(x - m_i)^T \qquad (4)$$

where $m_i$ is the mean vector of the set of feature vectors $X_i$ from the $i$-th class. The between-class scatter matrix, a measure of the distance between the clusters, is defined as

$$S_B = \sum_{i} n_i (m_i - m)(m_i - m)^T \qquad (5)$$

where $n_i$ is the number of feature vectors in class $i$ and $m$ is the overall mean. The total scatter matrix is defined as

$$S_T = S_W + S_B \qquad (6)$$

The criterion function has the following form:

$$J = \frac{\operatorname{tr}(S_B)}{\operatorname{tr}(S_W)} \qquad (7)$$

where $\operatorname{tr}(\cdot)$ represents the trace of a matrix, which is the sum of its eigenvalues. If (7) increases, the between-class scatter increases and the within-class scatter decreases, corresponding to greater distance between the classes and greater compactness within them. If the Fisher criterion value is large, the separability of the classes is good. A Distribution Separability Criterion (DSC) is also used to measure the discriminative power of the features. It is computed as

$$\mathrm{DSC} = \frac{\bar{d}_{\text{between}}}{\bar{\sigma}_{\text{within}}} \qquad (8)$$

where $\bar{d}_{\text{between}}$ denotes the mean of the distances between the class means and $\bar{\sigma}_{\text{within}}$ indicates the mean of the standard deviations of the class-conditional distributions. The DSC is similar to the Fisher criterion in the two-class case. V. PERFORMANCE ANALYSIS The FCN is built and its performance is evaluated using the Massachusetts Buildings dataset, which was derived by correcting minor errors in a frozen OpenStreetMap (OSM) dataset. It consists of color images of the Boston area with a spatial resolution of 1 m² per pixel, covering about 340 km² for training, 9 km² for validation, and 22.5 km² for testing. The images are labeled with two categories: building and non-building. The FCN is analyzed using three metrics: accuracy, AUC, and IoU. Fine-tuning is performed by adjusting the network weights on the images of the OSM Forez dataset. The accuracy of the FCN is 99.126%, whereas the accuracy of the FCN after fine-tuning is 99.459%. The AUC and IoU of the FCN are 0.969166 and 0.48 respectively, whereas the AUC and IoU of the fine-tuned FCN are 0.99699 and 0.66 respectively. For evaluating the patch-based classification and the pixel-to-pixel segmentation, the ISPRS Vaihingen 2D semantic labeling contest dataset is utilized. This dataset contains 33 images of varying sizes, each with 3 to 10 million pixels, captured over Vaihingen, Germany, as high-quality true orthophotos with a ground resolution of 9 cm. A Digital Surface Model (DSM) accompanies each true orthophoto, and an additional DSM is included in the dataset to compensate for varying ground height. Ground truth is available for 16 of the 33 images. Two metrics, accuracy and F-measure, are used to measure the performance of patch-based classification and pixel-to-pixel segmentation. Patch-based classification misclassifies some small plants as vegetation areas; its classification accuracy is high for buildings and roads. Compared to patch-based classification, pixel-to-pixel segmentation achieves higher accuracy. The classification accuracy of the patch-based method for buildings is 94.04%. Two datasets of remote sensing images, SAT-4 and SAT-6, are used to evaluate the performance of the convnet image classification technique. The SAT-4 dataset includes 400,000 training images and 100,000 testing images, and the SAT-6 dataset includes 324,000 training images and 81,000 testing images. The SAT-4 dataset includes four classes: barren land, trees, grassland, and all other land cover. The classes in the SAT-6 dataset are water bodies, roads, barren land, grassland, trees, and buildings.
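Before turning to the accuracy figures, here is a minimal sketch tying together the convnet feature extraction of Section IV and the Fisher criterion of Eqs. (4)-(7), assuming PyTorch; the band count, feature widths, and two-class toy data are illustrative stand-ins, not the actual SAT-4/SAT-6 setup.

```python
import torch
import torch.nn as nn

# Feature extractor (Section IV): one 3x3 convolution with stride 1
# followed by 2x2 max pooling; the random weights stay frozen.
features = nn.Sequential(
    nn.Conv2d(4, 16, kernel_size=3, stride=1), nn.ReLU(),
    nn.MaxPool2d(2),   # non-overlapping 2x2 regions
)
for p in features.parameters():
    p.requires_grad = False

def extract(x: torch.Tensor) -> torch.Tensor:
    """Flatten frozen convnet activations into feature vectors."""
    with torch.no_grad():
        return features(x).flatten(1)

# Toy stand-ins for two classes of 28x28, 4-band SAT-style tiles.
x_a = torch.randn(64, 4, 28, 28)
x_b = torch.randn(64, 4, 28, 28) + 0.5
f_a, f_b = extract(x_a), extract(x_b)   # 16 * 13 * 13 = 2704 features each

# Only a softmax (linear) classifier would be trained, with SGD + Nesterov.
clf = nn.Linear(f_a.shape[1], 2)
opt = torch.optim.SGD(clf.parameters(), lr=0.01, momentum=0.9, nesterov=True)

# Fisher criterion J = tr(S_B) / tr(S_W), using the fact that the trace of
# a scatter matrix is a sum of squared Euclidean distances.
m_a, m_b, n = f_a.mean(0), f_b.mean(0), f_a.shape[0]
m = (m_a + m_b) / 2
tr_sw = ((f_a - m_a) ** 2).sum() + ((f_b - m_b) ** 2).sum()
tr_sb = n * ((m_a - m) ** 2).sum() + n * ((m_b - m) ** 2).sum()
print("Fisher criterion J =", (tr_sb / tr_sw).item())
```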
As feature extraction plays a vital role in improving the classification accuracy, the convnet-based feature technique is analyzed by varying the number of convolutional layers. The highest accuracy attained on the SAT-4 dataset is 99.52% and on the SAT-6 dataset 98.51%. The accuracy analysis of the methods is presented in Table I and plotted in Fig. 7. VI. CONCLUSION In this paper, several CNN-based geographical image classification techniques are studied and analyzed: the FCN, patch-based classification, pixel-to-pixel segmentation, and convnet-based feature extraction. The FCN and the fine-tuned FCN utilized the Massachusetts Buildings dataset and the OSM Forez dataset, respectively. The ISPRS Vaihingen 2D semantic labeling contest dataset is used to evaluate the performance of the patch-based classification technique. The convnet-based feature extraction technique is analyzed using two datasets, SAT-4 and SAT-6. All the techniques are compared using a common metric, accuracy. The accuracies of the FCN and the fine-tuned FCN are 99.126% and 99.459%, respectively. The convnet-based feature extraction technique achieved 99.52% and 98.51% when evaluated on the SAT-4 and SAT-6 datasets, respectively. The patch-based classification attained an accuracy of 94.04%.
4,058.6
2017-08-30T00:00:00.000
[ "Environmental Science", "Computer Science" ]
Si-TCP Synthesized from "Mg-free" Reagents Employed as Calcium Phosphate Cement The influence of silicon doping on calcium phosphate cement was explored in this work. α-TCP and Si-α-TCP were prepared by solid-state reaction employing "Mg-free" CaHPO4, CaCO3, and CaSiO3 as precursors. It was possible to obtain TCP powders with low contents of the β phase as contaminant. The cement liquid phase was an aqueous solution containing 2.5 wt.% of Na2HPO4 and 1.5 wt.% of citric acid. The liquid-to-powder ratio was 0.6 mL·g⁻¹. Chemical, physical, and mechanical properties of the cement samples were determined by means of XRD, FTIR, XRF, compressive strength, and SEM. The calcium phosphate cements obtained achieved satisfactory properties; however, Si-α-TCP presented a decrease in the rate of the setting reaction. Introduction The need for new biomaterials that could improve the quality of life of people suffering from age-related diseases or from bone tissue injuries caused by accidents and by diseases such as obesity and cancer has resulted in a growing number of studies. In this context, the development of new orthopedic biomaterials based on calcium phosphate compounds is relevant, since they present excellent bioactivity and biocompatibility owing to a chemical composition similar to the mineral part of bone and teeth [1-3]. Silicon substitution into some phosphorus sites of calcium phosphate bioceramics is a promising approach for developing new biomaterials for orthopedic applications, due to the increased bioactivity and cell differentiation on the material's surface that can be promoted by the presence of this element [4-11]. Therefore, silicon-doped α-tricalcium phosphate (Si-α-TCP) is attracting the attention of researchers, since its employment as a bone cement could be of great interest. Nevertheless, it is still not well established whether the enhanced biological properties of silicon-doped calcium phosphate compounds are due to the presence of silicon itself or to its influence on the chemical properties of the material [12-14]. Moreover, silicon is known to stabilize the α-tricalcium phosphate (α-TCP) structure and to promote its formation at lower temperatures 15,5,16,17, leading to a cost reduction in its processing. It is well known that the synthesis of pure α-TCP is not an easy task, since all process conditions can change its final properties or even inhibit its formation. The most limiting factor is the quality of the starting reagents, which may preclude the formation of α-TCP at temperatures as high as 1600 °C 10,13. Therefore, the reproducibility of α-TCP synthesis becomes very difficult and, in some cases, impossible. In a previous work, our group developed simple synthetic methods to produce high-purity reagents in order to eliminate the most important impurity: magnesium, an established stabilizer of β-tricalcium phosphate, β-TCP 16. It was found that the standardization of the reagent properties guaranteed the reproducibility of the α-TCP manufacturing process and the formation of high-purity α-TCP and Si-α-TCP. Thus, the major objective of this study is to investigate the influence of Si on the chemical, physical, and mechanical properties of the calcium phosphate cement. TCP precursors and TCP powders synthesis "Mg-free" CaHPO4 and CaCO3 were synthesized by aqueous-solution precipitation in the presence of ethylenediamine tetraacetic acid (EDTA).
CaSiO3 was synthesized by solid-state reaction of "Mg-free" CaCO3 and electronic-grade SiO2, kindly provided by the Photonic Materials Laboratory, UNICAMP, Brazil 16. Afterwards, two tricalcium phosphate powders, α-TCP and Si-α-TCP, were synthesized by solid-state reaction. The synthesis parameters are displayed in Table 1. Calcium phosphate cements Cement samples were prepared using an aqueous solution containing 2.5 wt.% of Na2HPO4 and 1.5 wt.% of C6H8O7 (citric acid) with a liquid-to-powder ratio equal to 0.60 mL·g⁻¹. After molding in Teflon molds (6 × 12 mm), samples were left in a 100% relative humidity environment for 24 hours. Then, the cement cylinders were polished, demolded, and immersed in simulated body fluid (SBF) for 24 and 168 hours at 37 °C. After each immersion period, samples were gently rinsed with distilled water, immersed in acetone to stop the setting reaction, and dried at 100 °C for 6 hours. Characterizations Crystalline phase analyses of the TCP powders and cement samples were carried out by qualitative X-ray diffraction (Rigaku DMAX 2200, 20-40° (2θ), 0.01° (2θ)/s, 40 mA and 20 kV). The JCPDS files used for phase identification were 09-0348 for α-TCP, 09-0169 for β-TCP, and 46-0905 for calcium-deficient hydroxyapatite, CDHA. Moreover, a quantitative XRD analysis was performed to quantify β-TCP in the TCP powders. The internal standard method was employed, in which a diffraction line from the phase being quantified is compared with a diffraction line from a standard mixed with the sample in known proportions 18. The standard employed was Al2O3 and the β-TCP diffraction line used was (2 1 4). The chemical composition of the samples was evaluated by means of Fourier transform infrared spectroscopy. Samples were diluted in KBr and analyzed in a Perkin Elmer 1600 FT-IR spectrometer with a scanning range from 450 to 4000 cm⁻¹ and a resolution of 2 cm⁻¹. Powder stoichiometry was determined by quantitative X-ray fluorescence (MagiX Super Q Version 3.0 X-ray fluorescence spectrometer, Philips, The Netherlands). Samples were weighed at 0.3000 g, mixed with 5.5 g of spectral-grade Li2B4O7, melted in a Pt/Au crucible, and formed into disks in a special controlled furnace, Perl'X3 (Philips, The Netherlands). Calibration curves were prepared using certified composition standards of natural and synthetic calcium phosphates and calcium silicates. Finally, the BET specific surface area and the particle size distribution were determined using a Micromeritics ASAP 2010 and a Malvern Mastersizer S, respectively. Cement setting times were determined using the ASTM C266-04 standard 19. The compressive strength of the cement samples after each setting time was determined using an MTS Test Star II with a 10 kN load cell attached and a compression velocity of 1 mm/min. The fracture surface was gold coated (BAL-TEC, SCD 050) and its morphology was analyzed on a scanning electron microscope (JEOL, JXA-840A).
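To make the internal standard quantification concrete, here is a minimal sketch of the calculation step, assuming Python with NumPy and a linear intensity-ratio calibration; the calibration points and the measured intensity ratio are invented placeholders, not values from this work.

```python
import numpy as np

# Calibration mixtures: known beta-TCP weight fractions blended with a
# fixed amount of the Al2O3 internal standard, and the measured ratios
# I(beta-TCP (2 1 4)) / I(Al2O3 reference line). All values are invented.
w_beta = np.array([0.05, 0.10, 0.20, 0.40])   # weight fraction of beta-TCP
ratio  = np.array([0.11, 0.22, 0.45, 0.89])   # measured intensity ratios

slope, intercept = np.polyfit(ratio, w_beta, 1)   # linear calibration

r_sample = 0.17                                   # ratio for an unknown powder
w_sample = slope * r_sample + intercept
print(f"estimated beta-TCP content: {100 * w_sample:.1f} wt.%")
```

Results For both samples it was possible to obtain TCP powders in the α-TCP form, as can be observed in the XRD diffractograms of Figure 1. The contents of β-TCP were very low: 8 and 4 wt.% for α-TCP and Si-α-TCP, respectively. The powder stoichiometry is displayed in Table 2; the Ca/P and Ca/(P+Si) ratios were 1.50 and 1.46, respectively. The effectiveness of the solid-state reaction employed was also confirmed by FTIR analysis, since the absorption bands present in both spectra (Figure 2) are characteristic of α-TCP, as displayed in Table 3 20.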
Moreover, both TCP powders have very similar BET specific surface areas (0.8030 ± 0.0125 and 0.6930 ± 0.0033 m²/g for α-TCP and Si-TCP, respectively) and particle size distributions after 48 hours of ball milling. The mean particle diameter was 9.61 ± 0.14 µm for α-TCP and 10.68 ± 0.08 µm for Si-TCP. These results are summarized in Table 2. Cement setting times are displayed in Table 4. For both cements, the values obtained using the Gillmore needles 19 were higher than the values reported in the literature 20,21. The initial setting time was 15 minutes for α-TCP and 30 minutes for Si-α-TCP; the final setting time was 43 minutes for α-TCP and 120 minutes for Si-α-TCP. As can be observed in the XRD patterns of Figure 1, the α-TCP and Si-α-TCP setting reaction occurs by dissolution of the TCP phase and precipitation of apatite crystals, since these are the only crystalline phases observed during the whole process. Indeed, as displayed in the FTIR spectra of Figure 3, the apatite phase formed is calcium-deficient hydroxyapatite (CDHA, Ca9(HPO4)(PO4)5OH), since its characteristic absorption bands (Table 5) are present in both spectra. Moreover, it is possible to verify that both cements lead to an apatite phase that is also carbonated, due to the characteristic CO3²⁻ bands at 850-900 and at 1350-1600 cm⁻¹ (highlighted with a "*" in Figure 3). The evolution of the cements' mechanical strength with time of SBF immersion can be observed in Figure 4. Si-α-TCP resulted in lower values of compressive strength after 7 days of immersion; moreover, during the first day of immersion, Si-α-TCP did not achieve any mechanical resistance, while α-TCP achieved values around 5 MPa. Table 4. Setting times of cement samples. Ti = initial setting time and Tf = final setting time. Discussion The lower content of β-TCP in Si-α-TCP confirms the efficiency of silicon in stabilizing the α-TCP phase by lowering the β → α phase transformation temperature 4,5,22, since the powder doped with silicon resulted in purer α-TCP (4 wt.% of β-TCP) at a lower sintering temperature (1250 °C). Based on results reported in the literature [22-24], depending on the magnesium content of the precursors employed, the temperature at which pure α-TCP can be synthesized may rise up to 1500 °C; however, if the material is sintered at the right temperature and for enough time to guarantee the total β → α conversion, it is possible to obtain pure α-TCP no matter how high the Mg contamination is. Thus, if the powders synthesized in this work were sintered for longer times, a reduction in the β-TCP content would be expected. The purity of the samples was also confirmed by the Ca/P and Ca/(P+Si) ratio determination, since their values are very close to the theoretical value for TCP (α and β phases) compounds, 1.50. During cement preparation, larger values of setting time were determined. In a first moment, this fact can be explained by the high liquid-to-powder ratio employed, 0.60 mL·g⁻¹, against the 0.32-0.34 mL·g⁻¹ normally used for conventional α-TCP cement 8,20,21,25; however, this large amount of liquid was necessary to guarantee the moldability of the cement. Moreover, the addition of citric acid to the cement's liquid phase also contributed to the long setting times, since this compound increases the wettability of the TCP particles and the fluidity of the cement paste through deflocculation of the TCP powder, which also leads to a lower rate of setting reaction 26. It is important to emphasize that without the addition of citric acid, the liquid-to-powder ratio needed was higher than 1.0 mL·g⁻¹. Furthermore, by comparing the XRD patterns of Figure 1, it is possible to infer that silicon induces a reduction in the rate of the setting reaction. In the first 24 hours, as expected, α-TCP (α-TCP-c0) started to solubilize together with CDHA precipitation.
Surprisingly, for Si-TCP the setting reaction seems not to occur during the first 48 hours (Si-TCP-c0 and Si-TCP-c1), since only α-TCP diffraction lines are observed in the XRD patterns. Finally, after 168 hours, the TCP → CDHA conversion has finished for the α-TCP cement, while the Si-TCP cement still has some unreacted TCP. This difference in TCP reactivity is responsible for the lower compressive strength achieved by the Si-TCP cement, as displayed in the boxplot chart of Figure 4. At initial times, samples "Si-TCP-c0" and "Si-TCP-c1" did not present any mechanical resistance, while sample "α-TCP-c0" reached 5.6 ± 0.9 MPa. As the TCP → CDHA conversion evolves, the compressive strength of the samples increases, reaching, after 168 hours, 21.5 ± 2.4 and 14.8 ± 2.6 MPa for α-TCP-c7 and Si-TCP-c7, respectively. Nevertheless, it is important to observe that even though the mechanical resistance of the Si-TCP cement after 168 hours is lower than that of the α-TCP cement, this material had not reached 100% TCP → CDHA conversion; thus, its maximum mechanical resistance is expected to become higher once all the Si-TCP is converted into CDHA. Conclusions Si-α-TCP was synthesized by a simple solid-state reaction employing "Mg-free" reagents, leading to lower sintering temperatures for both Si-doped and non-doped α-TCP. Moreover, the calcium phosphate cements obtained employing these TCP powders achieved satisfactory properties; however, silicon induced a decrease in the setting reaction rate.
2,811.2
2012-08-01T00:00:00.000
[ "Materials Science" ]
Effect of Quamoclit angulata Extract Supplementation on Oxidative Stress and Inflammation on Hyperglycemia-Induced Renal Damage in Type 2 Diabetic Mice Type 2 diabetes mellitus (T2DM) is caused by abnormalities in the control of blood glucose and insulin homeostasis. In particular, hyperglycemia causes hyper-inflammation through activation of the NLRP3 inflammasome, which can lead to cell apoptosis, hypertrophy, and fibrosis. Quamoclit angulata (QA), an annual twining vine, has shown ameliorative effects on diabetes. The current study investigated whether QA extract (QAE) attenuated hyperglycemia-induced renal inflammation related to the NLRP3 inflammasome and oxidative stress in high-fat diet (HFD)-induced diabetic mice. After T2DM was induced, the mice were treated with QAE (5 or 10 mg/kg/day) by gavage for 12 weeks. The QAE supplementation reduced the homeostasis model assessment of insulin resistance (HOMA-IR), kidney malfunction, and glomerular hypertrophy in T2DM. Moreover, the QAE treatment significantly attenuated renal NLRP3 inflammasome-dependent hyper-inflammation and the consequent renal damage caused by oxidative stress, apoptosis, and fibrosis in T2DM. Furthermore, QAE normalized aberrant energy metabolism (downregulation of p-AMPK, sirtuin-1 (SIRT1), and PPARγ coactivator 1α (PGC-1α)) in T2DM mice. Taken together, the results suggest that QAE, as a natural product, has ameliorative effects on renal damage through the regulation of oxidative stress and inflammation in T2DM. Introduction Diabetes mellitus (DM) is considered a metabolic disease that results in impaired glucose and insulin homeostasis [1]. In particular, insulin resistance caused by hyperglycemia is a worldwide epidemic that is accompanied by various complications in type 2 DM (T2DM) [2]. The early stage of diabetic nephropathy (DN) is characterized by structural changes of the kidney, such as damage to the glomerular basement membrane (GBM), enlargement of the mesangial cells, glomerulosclerosis, and fibrosis, and by renal function failure, including microalbuminuria and a reduced glomerular filtration rate (GFR) [3,4]. The main cause of renal damage is the hyperglycemic condition in T2DM. Hyperglycemia leads to overproduction of reactive oxygen species (ROS), which potentially causes oxidative stress and activates various cytokines, chemokines, and growth factors. Oxidative stress results from an imbalance between oxidants and antioxidants such as NAD(P)H quinone dehydrogenase-1 (NQO1) and heme oxygenase-1 (HO-1). Preparation of QAE The hot-water extract was added to 1 kg of activated charcoal at room temperature for 1.5 h. After incubation, the water, the 20% ethanol fraction, and the charcoal were separated through centrifugation and filtration (0.45 µm). The fractions were mixed, concentrated in vacuo, and freeze-dried. The yields of the hot-water extract and the activated charcoal fractions were 25% and 11%, respectively. Identification of Candidate Compounds of QAE The standardization of QA was analyzed using an HPLC system (Waters Corp., Milford, MA, USA) consisting of a separation module (e2695) and a photodiode array (PDA) detector. Twenty milligrams of dried QA were dissolved in 50% methanol/water. Protocatechuic acid, chlorogenic acid, syringic acid, myricetin, and quercetin were used as standard compounds and dissolved in methanol. For the analysis of each compound or sample, a Kromasil C18 column (150 × 4.6 mm, 5 µm) was used and the column temperature was set at 30 °C.
The mobile phase consisted of 3% acetic acid/water (solvent A) and methanol (solvent B), using a gradient program of 0-10% (B) in 0-10 min, 10-70% (B) in 10-44 min, and 70-100% (B) in 44-50 min. The calibration was linear in the range of 0.1-1000 µg/mL for these five compounds. The flow rate was 0.9-1.0 mL/min and the PDA detector was set at 280 nm for acquiring chromatograms. Animal Experiments Male C57BL/6 mice at five weeks of age were housed two or three per cage and maintained in a constant environment (temperature 22 ± 1 °C, humidity 50 ± 5%, and a 12 h light/12 h dark cycle). After seven days of adaptation, the mice were randomly allocated into two groups. The first group was a non-diabetic control group, which was fed an AIN-93G diet (10% kcal fat, Research Diets, New Brunswick, NJ, USA). The second was a diabetic group, which was fed a high-fat diet (40% kcal fat, Research Diets, New Brunswick, NJ, USA) for four weeks. Then, the diabetic group received an intraperitoneal administration of 30 mg/kg body weight (BW) of streptozotocin (Sigma-Aldrich, St. Louis, MO, USA) in a citric acid buffer (pH 4.4). The non-diabetic mice received an equivalent amount of solvent. From five weeks after the last injection, fasting blood glucose (FBG) levels were measured once per week during the whole period of the animal experiment. Mice with FBG >140.4 mg/dL (7.8 mmol/L) on more than two occasions were considered diabetic. The diabetes induction protocol followed the previous study by Zhang et al. [22]. Mice were separated into four groups: (1) NC: non-diabetic normal mice gavaged with distilled water; (2) DMC: diabetic mice gavaged with distilled water; (3) LQ: diabetic mice gavaged with a low dosage of QAE (5 mg/kg/day); (4) HQ: diabetic mice gavaged with a high dosage of QAE (10 mg/kg/day). QAE was dissolved in distilled water. Body weight, food intake, and fasting blood glucose level were monitored weekly during the animal experiment. The animals were sacrificed after 12 weeks of oral supplementation. Blood was collected from the heart in a heparin-coated syringe (Sigma-Aldrich, St. Louis, MO, USA) and centrifuged at 850× g at 4 °C for 10 min to obtain plasma. The kidneys were removed and stored at −80 °C before the experiment. All experiments with mice were approved by the Institutional Animal Care and Use Committee of Kyung Hee University (KHUASP(SE)-16-005, 14 June 2019). Hemoglobin A1c (HbA1c) and Plasma Insulin Assay HbA1c levels were measured using enzyme-linked immunosorbent assay (ELISA) commercial kits (Crystal Chem., Elk Grove Village, IL, USA) according to the manufacturer's directions within two weeks of sample collection. Oral Glucose Tolerance Test (OGTT) Mice fasted for 16 h were orally administered a 50% glucose solution (2 g/kg). The blood glucose level was measured at 0, 15, 30, 60, 90, and 120 min using a glucometer (OneTouch, LifeScan Inc., Malvern, PA, USA). The area under the curve (AUC) values of the OGTT were calculated according to the trapezoidal rule as follows:

$$\mathrm{AUC} = \sum_{i=1}^{k} \frac{G_{t_{i-1}} + G_{t_i}}{2} \times (t_i - t_{i-1})$$

where $G_{t_i}$ is the blood glucose level at measurement time $t_i$ ($t_0 = 0$ min, ..., $t_k = 120$ min).
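A minimal sketch of this trapezoidal AUC computation, assuming Python with NumPy; only the sampling schedule follows the text, and the glucose readings are invented placeholders.

```python
import numpy as np

def ogtt_auc(times_min, glucose):
    """Trapezoidal-rule area under an OGTT glucose curve."""
    t = np.asarray(times_min, dtype=float)
    g = np.asarray(glucose, dtype=float)
    # Sum of (g_i + g_{i+1}) / 2 * (t_{i+1} - t_i) over the curve.
    return np.trapz(g, t)

times = [0, 15, 30, 60, 90, 120]          # sampling schedule from the text
glucose = [95, 260, 310, 280, 220, 180]   # mg/dL, made-up readings
print(ogtt_auc(times, glucose), "mg/dL x min")
```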
Renal Function Test Urine samples were collected during three phases of the experiment (0-4 weeks, initial; 4-8 weeks, mid; and 8-12 weeks, late). Urinary albumin excretion was determined with an albumin assay kit (Bioassay, Hayward, CA, USA). The concentrations of urinary and plasma creatinine were calculated by interpolating the optical density results at 515 nm on a standard curve. Concentrations of BUN were measured in accordance with the manufacturer's instructions using a commercial kit (Asan Pharmaceutical, Seoul, South Korea). Histological Observation of Kidney Kidney tissues were fixed in 10% formaldehyde and then dehydrated through a series of alcohols. The tissues were cleared in xylene and embedded in paraffin. Sections were cut with a microtome at 5 µm and stained with hematoxylin and eosin (H&E). Kidney morphology in the stained tissue was observed using an optical microscope (Nikon ECLIPSE Ci, Nikon Instruments, Tokyo, Japan). To calculate the glomerular area in the H&E staining, paraffin-embedded sections were measured with the Canvas 11 software (Deneba, Miami, FL, USA). The glomerular area was expressed as the mean of thirty glomeruli per sample, and a minimum of four samples from each group were examined. Area values are reported in µm² × 10⁻³. Protein Extraction and Western Blot Analysis The kidneys were ground and lysed on ice for 30 min. The lysate was centrifuged at 1945× g at 4 °C for 10 min to remove tissue debris. Each supernatant was centrifuged again at 9078× g at 4 °C for 30 min, and the final supernatant was collected as the cytosolic extract. The pellet was re-crushed in a hypertonic lysis buffer for 1 h; the lysate was then centrifuged at 9078× g at 4 °C for 20 min and the supernatant was used as the nuclear extract. The protein concentration was quantified by a BCA protein assay (ThermoFisher Scientific, Grand Island, NY, USA). Thirty µg of each protein sample were loaded onto SDS-PAGE gels and transferred to polyvinylidene fluoride (PVDF) membranes (Millipore, Marlborough, MA, USA). We used 8-12% SDS-PAGE gels according to the molecular weight (MW) of the target protein(s). After the transfer, the membrane was blocked in 1-3% bovine serum albumin (BSA) in phosphate-buffered saline with 0.1% Tween 20 (PBS-T) and then incubated at 4 °C with each primary antibody. To detect the primary antibodies, the respective horseradish peroxidase (HRP)-conjugated secondary antibodies were applied to the membranes. Protein bands were visualized using a chemiluminescence detector (Syngene, Cambridge, UK), and the levels of the target proteins were quantified using Syngene GeneSnap (Syngene, Cambridge, UK). Statistical Analysis Data were expressed as mean ± standard error of the mean (SEM). Significant differences between sample groups were determined using one-way ANOVA (significance level = 0.05). Effect of QAE Supplementation on Body Weight, Food Intake, and Kidney Weight in T2DM Mice The body weight and food intake of all T2DM groups (the DMC, LQ, and HQ groups) were significantly increased compared to the NC group. QAE supplementation for 12 weeks had no effect on body weight change in the T2DM mice. The kidney weight in the DMC group was significantly increased compared to the NC group, and there was no significant difference in the LQ and HQ groups compared to the DMC group (Table 2). Effect of QAE Supplementation on Fasting Blood Glucose and Plasma Insulin Levels, Homeostasis Model Assessment of Insulin Resistance (HOMA-IR), and Hemoglobin A1c (HbA1c) in T2DM Mice Fasting blood glucose level, plasma insulin level, HOMA-IR, and HbA1c level are shown in Table 3. At the end of the QAE treatment period, there was no difference in the fasting blood glucose level among the groups. The plasma insulin level in the HQ group was significantly lower than that in the DMC group.
HOMA-IR was significantly reduced in both QAE-treated diabetic groups. The HbA1c level was significantly decreased only in the HQ group compared with the DMC group (Table 3). Effect of QAE Supplementation on Glucose Homeostasis in T2DM Mice An OGTT was performed to estimate insulin resistance and failure of glucose metabolism (Figure 1A), and the glucose AUC was calculated as shown in Figure 1B. The DMC group had a high blood glucose level during the 120 min after glucose administration compared to the NC group, and there was no significant difference in the blood glucose level at 90 min after glucose administration between the DMC group and the QAE treatment groups. The blood glucose level of the LQ group at 120 min was remarkably reduced compared to the DMC group, and the glucose AUC of the LQ group was significantly lower than that of the DMC group. On the other hand, the protein level of the receptor for advanced glycation end products (RAGE) was remarkably increased in the DMC group compared to that of the NC group (Figure 1C). The HQ group showed a significant reduction of RAGE expression in comparison to the DMC group. Effect of QAE Supplementation on Kidney Function in T2DM Mice The urinary albumin-to-creatinine ratios (ACRs) of all T2DM groups were significantly higher than that of the NC group during the entire experimental period (Figure 2A). The ACRs of the QAE treatment groups decreased during the treatment period and showed a significant difference at the late stage of treatment in the diabetic mice. As shown in Figure 2B, supplementation with a high dose of QAE significantly decreased the plasma creatinine and BUN compared with the DMC group. In representative H&E staining of the kidney (Figure 2C), the DMC group showed glomerular hypertrophy compared to the NC group, while both QAE-treated groups showed amelioration of the glomerular hypertrophy. The red arrow indicates mesangial expansion in the DMC group compared to the NC group. In the NC group, Bowman's space was observed as a thin white line; it was broadened in the DMC group compared to the NC group, and was narrower in the QAE treatment groups than in the DMC group. In addition, the glomerular surface areas in histological sections of the renal cortex were quantified to measure the degree of glomerular hypertrophy (Figure 2C). The glomerulus of the DMC group was significantly expanded compared with that of the NC group, while both QAE supplementation groups, regardless of dose, showed significantly reduced glomerular hypertrophy. Effect of QAE Supplementation on Oxidative Stress in T2DM Mice The renal 4-hydroxynonenal (4-HNE) level was examined to assess lipid peroxidation, and the level of renal protein carbonyls was used as a marker of protein oxidation caused by oxidative stress (Figure 3A). In the kidney, the 4-HNE protein level in the DMC group was significantly higher than that in the NC group. The HQ group presented a significant reduction of the 4-HNE level compared to that of the DMC group. The renal levels of protein carbonyls in both QAE groups were significantly lower than that in the DMC group. In the DMC group, the protein levels of nuclear Nrf2 and its related markers, such as HO-1, NQO1, catalase, MnSOD, and GPx, were remarkably higher than those in the NC group. However, both QAE treatments significantly reduced the protein levels of nuclear Nrf2 and MnSOD. Moreover, the high dose of QAE treatment significantly decreased the protein levels of HO-1, NQO1, and catalase in the diabetic mice.
The protein levels of GPx and NOX4 were not significantly different between the DMC group and the QAE treatment groups (Figure 3B). Effect of QAE Supplementation on Inflammation in T2DM Mice The protein level of the NLRP3 inflammasome was elevated in the DMC group compared to that of the NC group, but was significantly decreased in the QAE treatment groups compared to the DMC group. However, only the high dose of QAE treatment significantly lowered the protein levels of ASC, procaspase-1, caspase-1, and mature IL-1β in the diabetic mice. The protein levels of precursor IL-1β were not normalized in the QAE treatment groups compared to that in the DMC group (Figure 4A). Furthermore, the DMC group demonstrated higher levels of inflammation-related proteins, including monocyte chemoattractant protein (MCP)-1, CRP, nuclear NF-κB, TNF-α, IL-6, and iNOS, than the NC group (Figure 4B). However, the high dose of QAE treatment in the diabetic mice reversed the protein levels of MCP-1 and nuclear NF-κB to the levels of the NC mice. In addition, the QAE treatment, regardless of dose, suppressed the other inflammatory markers, such as CRP, TNF-α, IL-6, and iNOS, in the diabetic mice. Effect of QAE Supplementation on Energy Metabolism in T2DM Mice The protein levels of AMPK were not significantly different among the groups. The protein level of phosphorylated AMPK in the DMC group was significantly decreased compared to that of the NC group, but the levels in the QAE treatment groups were increased compared to the DMC group. In addition, the QAE treatment elevated the pAMPK/AMPK ratio to the level of the NC group (Figure 5A). Furthermore, the protein levels of SIRT1 and PGC-1α were significantly decreased in the DMC group compared to those in the NC group, but were increased in the QAE treatment groups regardless of dosage (Figure 5B). Effect of QAE Supplementation on Apoptosis and Fibrosis in T2DM Mice The protein levels of caspase-8, caspase-3, and nuclear p53 in the DMC group were significantly higher than those in the NC group, but the high dose of QAE supplementation decreased the protein levels of caspase-8 and p53 compared to those in the DMC group. Furthermore, the QAE treatment, regardless of dose, reduced caspase-3 compared to the DMC group (Figure 6A). The protein levels of Bax in the QAE treatment groups were significantly decreased in comparison to that of the DMC group, and the QAE treatment, regardless of dose, remarkably lowered the Bax/Bcl-2 ratio in the diabetic mice (Figure 6A). In addition, the QAE treatments reduced the protein level of ERK compared to that of the DMC group. At the same time, the protein levels of phosphorylated ERK in the QAE treatment groups were reduced compared to that in the DMC group; the pERK/ERK ratio, an index of ERK phosphorylation, was also decreased in the QAE treatment groups compared to the DMC group (Figure 6A). To examine the effect of QAE supplementation on renal fibrosis, the protein levels of PKC-βII, TGF-β, α-SMA, and COL1A were measured (Figure 6B). The renal protein levels of PKC, TGF-β, α-SMA, and COL1A in the DMC group were significantly higher than those in the NC group. However, the QAE treatments decreased the protein levels of PKC, TGF-β, and α-SMA in comparison to the DMC group, and, in particular, the high dose of QAE treatment decreased the protein level of COL1A in the diabetic mice.
Discussion Various studies have noted that many medicinal plants and natural products have potential biological activities. Among these plants, QA is a species of Ipomoea (morning glory) cultivated as an ornamental plant throughout the tropics. In this study, we aimed to investigate whether dietary QAE supplementation could have beneficial effects on NLRP3 inflammasome-dependent hyper-inflammation and the consequent renal damage by stimulation of AMPK-SIRT1 signaling in type 2 diabetes. The current study suggests a hypoglycemic effect of QAE, evidenced by decreased plasma insulin, HOMA-IR, and HbA1c. HbA1c is considered an index of the average blood glucose control level, because the HbA1c level tends to increase with the average blood glucose levels over the preceding three months. A previous study also showed a stronger correlation of HbA1c with 6-h fasting glucose levels than with overnight FBG levels in diabetic mice [23]. In this study, the QA supplementation ameliorated the plasma insulin level, HOMA-IR, and HbA1c compared to the DMC group, although it did not significantly decrease the FBG level. In the glucose tolerance test, the AUC was reduced in the LQ group compared to the DMC group. As shown in Table 1, QA contained five compounds: protocatechuic acid (PCA), chlorogenic acid, syringic acid, myricetin, and quercetin. A recent study showed a similar tendency, in which PCA significantly reduced blood glucose and plasma insulin levels under a hyperglycemic condition [24]. In addition, it is known that ligation of AGEs to renal RAGE activates the production of ROS, subsequently causing oxidative stress [25]. Furthermore, chlorogenic acid (CGA) and quercetin have been shown to decrease the blood glucose level by stimulating glucose uptake through the activation of AMPK in diabetic mice [26,27]. CGA has also been reported to be an inhibitor of carbonic anhydrase V, which has an impact on gluconeogenesis [28]. The current results showed that the high dose of QAE treatment decreased RAGE expression compared to the T2DM mice. These data suggest that the QAE treatment has ameliorative effects on the hyperglycemic condition due to synergistic or additive effects of these active ingredients (PCA, chlorogenic acid, and quercetin). Moreover, there are well-known renal malfunction indicators in DN, including albuminuria, plasma creatinine, BUN, and the urinary ACR level. Our data showed that the QAE treatment significantly decreased urinary ACR, plasma creatinine, and BUN in diabetic mice. From these changes, it can be inferred that the QAE treatment improved renal function under the diabetic condition. At the molecular level, the major mechanisms of hyperglycemia-induced tissue damage are as follows: the increase of intracellular AGE formation and of its receptor expression, and the activation of PKC. As indicated above, the current study demonstrated that QAE supplementation reduced the protein level of RAGE in the diabetic mice. In addition, the elevated protein levels of 4-HNE were decreased in the HQ group, and both doses of QAE supplementation lowered the protein carbonyls relative to the DMC group. Previous studies showed that increased levels of 4-HNE and protein carbonyls, as well as of Nrf2, activate the Nrf2-related antioxidant defense system [12,29]. Our results demonstrated that the protein levels of Nrf2 and its related antioxidant defense enzymes, including NQO1, HO-1, and catalase, were increased in the DMC mice, but these markers were reduced in the HQ group.
In particular, PCA is known to attenuate oxidative stress by decreasing the levels of ROS and malondialdehyde (MDA) under a diabetic condition [30]. Hence, it can be concluded that supplementation with QAE, which contains PCA and CGA, could alleviate cellular oxidative stress as well as the activation of RAGE in diabetes. Oxidative stress can also contribute to the inflammatory response via the activation of NF-κB and downstream factors such as TNF-α, IL-6, and iNOS in DN [31]. Furthermore, oxidative stress can potentially activate the NLRP3 inflammasome after being initially recognized as a cellular danger signal [32,33]. In this study, the renal protein levels of the NLRP3 inflammasome, nuclear NF-κB, and the subsequent inflammatory factors were higher in the DMC group than in the NC group. A previous study demonstrated that PCA treatment significantly reduced the secretion of pro-inflammatory cytokines in T2DM rats [34]. Furthermore, syringic acid is known to reduce oxidative stress and inflammation in diabetes [35]. At the same time, QAE supplementation selectively reduced the renal inflammatory factors via suppression of the NLRP3 inflammasome. Therefore, the current study suggests that QAE supplementation alleviated the activation of the NLRP3 inflammasome and the consequent hyper-inflammation under a diabetic condition. Under persistent hyperglycemia, chronic hyper-inflammation in the kidney results in renal apoptosis via activation of caspases, the proapoptotic protein Bax, p53, and mitogen-activated protein kinase (MAPK) signaling [36-39]. PCA is known to reduce the protein expression levels of type IV collagen, laminin, and fibronectin in high-glucose-stimulated human mesangial cells (MCs) [40]. Our results showed that pro-fibrosis-related markers, including PKC-βII, TGF-βI, and α-SMA, as well as apoptosis-related markers such as caspase-8, caspase-3, the Bax/Bcl-2 ratio, and the pERK/ERK ratio, were decreased in the QAE-treated groups, regardless of dose, compared with the DMC group. The present study suggests that PCA and chlorogenic acid in QAE might play major roles in the protection against renal apoptosis and fibrosis in T2DM. How could the QAE treatment ameliorate renal damage through suppression of oxidative stress, NLRP3 inflammasome-dependent hyper-inflammation, cell apoptosis, and pro-fibrosis under a hyperglycemic condition? There is cumulative evidence that AMPK influences intracellular signaling pathways, especially the amelioration of oxidative stress via activation of antioxidant defense enzymes [41]. Metformin, a well-known diabetic drug, shows therapeutic mechanisms related to AMPK, which suppresses NF-κB through activation of SIRT1 and PGC-1α [21,42,43]. PCA also increased the phosphorylation of AMPK and thereby activated the expression of p-Nrf2 and HO-1 under oxidative damage in HUVECs [44]. Moreover, a recent study reported that syringic acid improved energy metabolism by regulating mitochondrial biogenesis in diabetic rats [33]. On the other hand, SIRT1, an intracellular energy sensor, beneficially affects glucose homeostasis, cellular immunity to oxidative stress, inflammation, apoptosis, and fibrosis in the kidney [45]. In DN, one of the earliest characteristics is the loss of podocytes, which play a crucial role in albumin processing, but SIRT1 is known to attenuate podocyte depletion and albuminuria by downregulation of claudin-1 in podocytes [46,47].
Resveratrol, a natural plant polyphenol, stimulates both SIRT1 and AMPK and has a protective effect against oxidative stress and the inflammatory response in the kidney [20]. A previous study showed that SIRT1 suppressed NLRP3 inflammasome activation as well as the NF-κB-associated inflammatory response [48,49]. PGC-1α also regulates oxidative stress via participation in cellular signaling related to mitochondrial oxidative stress, and independently inhibits the NF-κB-related inflammatory response [50,51]. The current study demonstrated that treatment with QAE containing PCA and syringic acid elevated the protein levels of SIRT1 and PGC-1α and downregulated the NLRP3 inflammasome-dependent inflammatory mediators in the diabetic mice. In particular, both doses of QAE supplementation had an effect on the stimulation of the AMPK/SIRT1 pathway, and the high dose of QAE supplementation decreased NLRP3 inflammasome activation accompanied by nuclear NF-κB activation in our study. Therefore, it can be inferred that the QAE treatment has a protective effect against renal oxidative stress and hyper-inflammation under a hyperglycemic condition, involving this antagonism of the SIRT1/NF-κB/NLRP3 inflammasome axis. A previous study reported by our group found that Lespedeza bicolor extract (LBE), containing polyphenolic compounds such as quercetin, genistein, daidzein, and naringenin, exerts antioxidant and anti-inflammatory effects accompanied by upregulation of the AMPK-SIRT1 pathway in the same diabetic model [52]. The current findings support that QAE, at much lower concentrations than LBE and other plant extracts, shows antidiabetic effects through regulation of the AMPK-SIRT-related mechanism, as shown in the LBE-treated diabetic mice [52]. Moreover, QAE supplementation attenuated pro-fibrosis as well as apoptosis in the diabetic group, which was not shown in the LBE treatment groups. Therefore, it can be concluded that QAE is more effective than LBE against renal fibrosis and apoptosis in diabetes. Conclusions Taken together, we report that QAE supplementation at a high dose had ameliorative effects on renal NLRP3 inflammasome-associated hyper-inflammation and the consequent renal cell apoptosis and pro-fibrosis in the HFD/STZ-induced T2DM mice. In addition, QAE supplementation, regardless of dose, stimulated AMPK/SIRT1 signaling and ameliorated oxidative stress, although some molecular markers were selectively regulated at the different treatment doses of QAE in diabetic renal damage. In conclusion, the current study suggests that QAE could be a potential therapeutic agent for ameliorating renal damage in T2DM. Conflicts of Interest: The authors declare no conflict of interest.
5,836.6
2020-05-27T00:00:00.000
[ "Medicine", "Biology" ]
Nontypical BIRPS on the margin of the northern North Sea: The SHET Survey Summary. Striking similarities in the reflectivity of the crust and upper mantle on BIRPS profiles have led to the development of the "typical BIRP", a model seismic section for the British continental lithosphere. The SHET survey, collected in the region of the Shetland Islands and the northern North Sea, fits the general pattern to a certain extent. Caledonian structures and Devonian or younger basins are imaged in the otherwise acoustically transparent upper crust. An unexpected and exciting feature imaged on SHET is a short-wavelength structure on the Moho, or an abrupt Moho offset, beneath the strike-slip Walls Boundary Fault. SHET differs markedly from the SWAT typical BIRP, however, by showing a poorly reflective lower crust: only a narrow zone (~1 s) at the base of the crust contains high-amplitude reflections. The SHET survey therefore highlights the wide variation in lower crustal reflectivity within the total BIRPS data set rather than the similarities. Introduction In August 1984, the British Institutions Reflection Profiling Syndicate (BIRPS) acquired 830 km of deep seismic reflection data on the British continental shelf around the Shetland Islands. This survey, SHET, comprises six profiles (Fig. 1), each recorded to a two-way travel-time (TWTT) of 15 s (~50 km). More detailed results of this survey will be presented in McGeary et al. (in prep.). The Shetland region is a geologically complex area. At least two major tectonic events have affected the continental lithosphere during the Phanerozoic: the early Palaeozoic compressional Caledonian orogeny, and the late Palaeozoic and Mesozoic continental extension which preceded the actual rifting of the north Atlantic region. SHET primarily images the crust of the Shetland Platform, the northern promontory of the British Caledonian orogen, yet the survey is also located at the juncture between the North Sea extensional basin and the rifted Atlantic margin. Both major tectonic events must have involved significant deformation of the continental lithosphere and have presumably also affected its reflective character. This paper first presents some of the typical features imaged by SHET in the upper crust of the Shetland Platform. It then discusses the reflective character of the Shetland crust as a whole, particularly the lower crust and Moho. Finally, the implications of the SHET survey results with respect to the concept of the "typical BIRP" are examined. Upper crust There are three types of features, mapped at the surface of the Shetland Platform, which are imaged by the SHET profiles: eastward-dipping Caledonian structures, major strike-slip faults, and Devonian sedimentary basins. All three features can be related to the Caledonian orogeny or its terminal stages (Watson 1984). Structures and basins imaged on SHET in the upper crust which are related to the later extensional tectonics are not discussed in this paper. The Shetland Platform shows abundant evidence of Caledonian compressional tectonics (Flinn 1985). Although none of the prominent reflections on SHET can be directly tied to any of the compressional structures mapped on the Shetlands, there are several sets of eastward-dipping reflections which can be interpreted to be Caledonian structures. Unlike similar structures interpreted on the MOIST and DRUM profiles north of Scotland (Smythe et al.
1982; McGeary & Warner 1986), these structures appear neither to be highly reflective nor to have been later significantly reactivated during extension. By far the most interesting structure crossed by the SHET survey is the Walls Boundary Fault (WBF). This fault forms a major structural discontinuity in the Shetlands (Flinn 1977) and is probably the most significant of the several possible northern extensions of the Great Glen Fault (Fig. 1), a major late Caledonian strike-slip fault in Scotland. The amount and timing of movement on each fault is highly controversial (see Smith & Watson 1983), but the transcurrent displacements are undoubtedly large, at least 100-200 km of sinistral offset on the Great Glen Fault before the end of the Devonian (Smith & Watson 1983) and 65-90 km of post-Middle Devonian dextral offset on the WBF (Flinn 1977; Astin 1982). The SHET survey crosses the WBF twice (Fig. 1). South of the Shetlands, reflections from a sedimentary basin west of the fault and east-dipping reflections interpreted to be a Caledonian structure east of the fault are both truncated by the WBF. The reflection times to the Moho, however, do not appear to be significantly different on either side of the fault. In contrast, the SHET profile north of the Shetlands reveals a surprising set of reflections (Fig. 2) which suggests that the WBF is a near-vertical structure which penetrates the entire crustal thickness and offsets or abruptly warps the continental Moho. The Moho reflection time changes from ~9 s to ~10.5 s across the fault zone. Even more indicative of Moho structure are the two high-amplitude, diffraction-shaped events which originate at Moho depths and collapse at crustal velocities. These events suggest a very short wavelength structure on the Moho located directly beneath the surface location of the strike-slip Walls Boundary Fault. This structure has been preserved since at least the Cretaceous, the latest possible time of movement on the fault. The Devonian basins, the third feature of the upper crust imaged on SHET, have complex structure and are not particularly reflective. Their most striking characteristic on SHET is that they seem to damage seismic penetration to deeper levels of the crust, making the Shetland Platform a technically difficult area in which to profile the lower continental crust. Lower crust and Moho The lower continental crust of both the Shetland Platform and the northern North Sea is not particularly reflective as imaged on SHET, except near the base of the crust where there is often a band of high-amplitude reflections about 1 s thick. The base of this bright band is interpreted to be the continental Moho (Matthews & Cheadle 1986). Lower crustal reflections are generally not very continuous, either chopped into short segments or diffractive, and the reflectivity is often concentrated into two or three discrete horizontal layers at different travel-times separated by transparent zones. An example of the most reflective lower crust imaged on SHET can be seen in Fig. 2 west of the WBF. Here the upper crystalline crust is not reflective at all and the lower crust is only moderately so. The Moho reflection band is quite continuous and bright but the local Moho topography in travel-time is itself uncommon.
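As a rough check on the geometry just described, the travel-times can be converted to approximate depths. The minimal sketch below assumes an average crustal velocity of 6.5 km/s, which is an illustrative round figure and not a value derived from the SHET data.

```python
# Convert two-way travel-time (TWTT) to approximate depth: depth = v * t / 2.
# The 6.5 km/s average crustal velocity is an assumed value for illustration,
# not one taken from the survey.
V_CRUST = 6.5  # km/s

def twtt_to_depth(twtt_s, v=V_CRUST):
    """Approximate reflector depth (km) for a given two-way travel-time (s)."""
    return v * twtt_s / 2.0

for t in (9.0, 10.5):
    print(f"TWTT {t:4.1f} s  ->  ~{twtt_to_depth(t):4.1f} km")
# The ~9 s vs ~10.5 s Moho times across the Walls Boundary Fault then
# correspond to roughly 5 km of Moho relief under this assumed velocity.
```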
Discussion and conclusions The results of early BIRPS surveys, particularly WINCH, prompted the creation of an imaginary average seismic section of the British continental crust, the "typical BIRP" cartoon, interpreted to be structural in origin (Warner & McGeary, this issue). SWAT spectacularly reinforced the apparent validity of this cartoon and confirmed its usefulness as a working model. In contrast, the SHET survey exhibits a few significant departures from the "typical BIRP." Most striking is the poor reflectivity of the lower crust compared to that of SWAT or WINCH. Those lower crustal reflections which do exist tend to be quite short or diffractive. In addition, although there is still a distinction between the "blank" unreflective upper crust and the reflective lower crust, the top of the reflectivity is highly variable and difficult to pick, perhaps a problem in signal penetration. The notable exception to the poor reflectivity is the narrow (~1 s), highly reflective zone at the base of the crust. The abrupt difference in travel-time to the Moho across the WBF is also quite unusual for a BIRPS profile; the Moho typically remains at a fairly constant travel-time regionally or on the scale of a single section (Warner, in press). Although in an area new to BIRPS, SHET was acquired and processed in a way very similar to the WINCH and SWAT surveys (Brewer et al. 1983; BIRPS & ECORS 1986), especially SWAT. Therefore, the differences evident in the reflective character of the lower crust and Moho on SHET compared to WINCH or SWAT cannot be directly ascribed to differences in processing or acquisition. Such differences therefore must be caused either by local problems in signal penetration and/or noise or by actual variation in the underlying crustal geology. The high amplitudes of the Moho reflections on SHET beneath the poorly reflective lower crust suggest that variation in the lower crustal geology rather than noise is the primary cause. The question then arises whether the concept of the "typical BIRP" should be discarded in light of the differences evident on the non-typical SHET profiles. In defence of the concept, I would suggest that SHET merely highlights the variation in lower crustal reflectivity (the continuity, amplitude, thickness, and distribution of the reflections) in a particularly dramatic way and represents not so much a departure from the normal crustal profile as an end member of a continuum of crustal images. With over 8000 km of deep seismic reflection data in a geologically complex region the size of the Basin and Range, the lower crust of Britain may be the most densely sampled in the world. With such sampling, the method of classification becomes a philosophical point, similar to that which may arise when classifying fossils within an evolutionary continuum or when deciding when to stop dividing tectonostratigraphic terranes. One can either use a descriptive model for the reflectivity of the lower crust general enough to include all variation (a "lumper"; Fig. 3a) or alternatively try to divide or bin different profiles into separate types (a "splitter"; Fig. 3b). It is certainly not clear which method of classification is the most useful in this case. In conclusion, the concept of the "typical BIRP" highlights the similarities between profiles in the BIRPS data set.
In general, the crystalline upper crust is not reflective with the exception of basins and faults; the lower crust is reflective to variable degrees, with a variable top to the reflectivity but almost always a well-defined base, the Moho. The SHET survey, however, highlights the variation within the BIRPS data set. This variation can itself prove to be quite interesting. For example, an analysis of the whole data set shows that the variation in lower crustal reflectivity cannot be directly related to different tectonic provinces. There are no identifiable abrupt boundaries between the Caledonian foreland and orogen, between Variscan and Caledonian crust, or between the Caledonides and the extended crust of the North Sea. Finally, before any conclusions can be derived from this variation, it is imperative that we better understand the effect of noise generated at shallow depths in the crust on the image at depth, and also the petrologic and physical nature of deep reflectors.
2,457
1987-04-01T00:00:00.000
[ "Geology" ]
On the Non-Local Surface Plasmons’ Contribution to the Casimir Force between Graphene Sheets Herein we demonstrate the dramatic effect of non-locality on the plasmons which contribute to the Casimir forces, with a graphene sandwich as a case study. The simplicity of this system allowed us to trace each contribution independently, as we observed that interband processes, although dominating the forces at short separations, are poorly accounted for in the framework of the Dirac cone approximation alone, and should be supplemented with other descriptions for energies higher than 2.5 eV. Finally, we proved that distances smaller than 200 nm, despite being extremely relevant to state-of-the-art measurements and nanotechnology applications, are inaccessible with closed-form response function calculations at present. Introduction Casimir forces are rare instances of quantum phenomena in room conditions, typically manifesting as an attractive force between two conducting plates due to vacuum fluctuations [1,2]. These forces are of critical technological importance, as they cause stiction in micro and nano electromechanical systems (MEMS and NEMS) [3]. In addition, they are linked to many surface effects, such as wettability [4] and friction [5], and hold promise as a means to achieve levitation [6]. It has been recognised that surface modes play a crucial role in Casimir forces, which is particularly true for surface polaritons such as plasmons and phonons, where the associated pole in reflectivity dominates the force [7,8]. Because of this property, surface modes are often engineered via nanostructuration as a means to exert some control over and mitigate the strong attraction arising in tiny gaps [9,10]. However, there is another crucial actor in the physics of dispersive forces; namely, the optical non-locality of the materials [11][12][13][14][15]. This effect, also called spatial dispersion because the polarisation field originates from an extended region of space rather than a point, translates into a dependency of a medium response on the wave vector. It can play a major role in the forces, and tends to be underestimated [14,16]. In this letter, we consider two graphene sheets as a case study to explore the importance of non-local surface plasmons in Casimir forces. First we discuss the importance of doping and Drude damping on the forces; then, we study in detail the effects of non-locality. Despite these modes being strongly affected by Landau damping at short separation distances, we observe that interband transitions dominate the force in this regime. Unfortunately, this corresponds to energies where the Dirac cone approximation for the optical response of graphene breaks down, and a different description of the dielectric response should be used for a more accurate analysis. Indeed, this unusual behaviour allows for broadband absorption and is complemented by a large tunability afforded by doping [25]. This tunability enables both interband and intraband transitions, and allows for spectral control over Pauli blocking and Drude response [26]. We are especially interested in graphene because it is mostly transparent to propagating modes, and therefore supports strongly confined surface plasmons. This unique property suggests that the Casimir forces could be solely governed by the plasmonic response. In addition, the 2D nature of graphene allows for a simpler system and implementation, reducing the number of variables considerably, while still manifesting a very rich physics. 
This makes graphene an ideal platform for the study of Casimir forces. Graphene Response At zero temperature, the non-local susceptibility of a doped graphene sheet with chemical potential µ = ħv_F√(πN) can be written, within the random phase approximation, in the closed form of Equation (1), where k is the in-plane wave vector, ω the angular frequency, and E± = (2µ ± ħω)/ħv_F k is the rescaled energy change upon absorption (+) and emission (−) of a photon. The prescription ω → ω + i0⁺ applies so that the function is analytic in all of the upper complex plane [27]. In Equation (1), the terms in arccosh correspond to the interband transitions taking place at ħω > 2µ, while the terms in 1 − E±² characterise the intraband (Drude) response; see Figure 1a. In order to conserve the number of charge carriers at finite relaxation time τ = 1/Γ, one must also make use of the Mermin prescription [28]. When non-locality is ignored, i.e., k → 0, the susceptibility of graphene takes a simple local form in which H, the Heaviside function, switches on the interband term [25]. Again, one can recognise a Drude (intraband) response in the first term, while the second term accounts for interband transitions. Lifshitz Formalism We make use here of the intuitive and elegant theoretical framework developed by Lifshitz [29-31] to calculate the Casimir forces in our planar geometry, which consists of three media separated by two graphene sheets, as shown in Figure 1b. The pressure is obtained by summing all possible electromagnetic modes (evanescent and propagating) via Fresnel reflection coefficients. Within the Lifshitz formalism for dispersive materials at zero temperature, the Casimir pressure and energy density as a function of the plate separation d can be calculated at real frequencies, with ρ the polarisation and q_i(ω, k) = √(k² − ε_i(ω)(ω/c)²) the modified normal component of the wave vector in each region. The indices denote the different media, 2 being the gap region and 1 and 3 the left and right-hand half-spaces respectively, as in Figure 1b. The sign convention for the force is that of a negative sign for attraction. It is often more computationally convenient to calculate the Lifshitz integrals at imaginary frequencies, ω = iξ. In this case one needs to make use of modified optical functions for the materials, ε(iξ, k), χ(iξ, k), etc. [32]. Here we consider interfaces covered by graphene, and the Fresnel coefficients for p (TM) and s (TE) polarisation are modified accordingly [8,14,16,33]. Intrinsic Graphene In order to understand and correctly trace each contribution to the Casimir forces between graphene sheets, we start by investigating the effects caused by variation in the doping levels of graphene for a broad range of separation distances. Note that as we are interested in gaining physical insights into the phenomena at hand, we restrict ourselves to the case of suspended sheets at zero temperature. Environments different from vacuum result merely in screening, which leads to rescaled forces but no conceptual difference. Similarly, thermal effects bear little consequence for separations below a few tenths of a micrometre [14,16,34], and do not affect the following discussion either. We show in Figure 2 the Casimir pressure normalised to the pressure between two perfect electric conducting (PEC) plates [32], P_PEC = −ħcπ²/240d⁴. Figure 2. Attractive Casimir pressure between two graphene sheets as a function of distance, normalised to the pressure between two perfect conductors P_PEC = −ħcπ²/240d⁴. The green line at P/P_PEC ∼ 0.00538 is the force between two intrinsic graphene sheets with universal conductance σ₀ = e²/4ħ. Dashed green is one undoped sheet on top of an aluminium (Drude, ħω_p = 12.5 eV, ħΓ = 0.063 eV) half-space. Full blue (red) is the force between two doped sheets with µ = 0.2 eV (µ = 2 eV) and zero relaxation frequency. Dashed red is the same as red but with Drude damping included (ħΓ = 0.1 eV).
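As a quick numerical orientation for Figure 2, the minimal sketch below evaluates the perfect-conductor benchmark and applies the intrinsic-graphene ratio quoted in the caption; the 100 nm gap is an illustrative choice, not a value from the figure.

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
C = 2.99792458e8        # speed of light, m/s

def p_pec(d):
    """Ideal-conductor Casimir pressure P_PEC = -hbar*c*pi^2 / (240 d^4), in Pa."""
    return -HBAR * C * math.pi ** 2 / (240 * d ** 4)

d = 100e-9  # illustrative 100 nm separation
print(f"P_PEC(100 nm)                   = {p_pec(d):9.2f} Pa")   # about -13 Pa
print(f"intrinsic graphene (0.00538 x)  = {0.00538 * p_pec(d):9.4f} Pa")
```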
First consider intrinsic graphene (µ = 0 eV, solid green curve), which is described by the universal conductance [35] σ₀ = e²/4ħ = παcε₀. In this case the normalised Casimir pressure is constant [36], with P/P_PEC = 720α/32π³ ∼ 0.00538, where α = e²/4πε₀ħc ∼ 1/137 is the fine structure constant. As we will see later, this behaviour is fully dictated by the assumption that the electronic band structure of graphene is linear, i.e., described by a Dirac cone [35] for all energy transitions. In reality, graphene only takes this particular value in a limited energy range 2µ < ħω < 2.5 eV, at which point the Dirac cone approximation breaks down [37]. For comparison, we also show the pressure between such an undoped graphene sheet and an aluminium half-space (dashed green curve), described by a Drude model with ħω_p = 12.5 eV and ħΓ = 0.063 eV [7]. Short distances correspond to high frequencies in the calculation of the force (see Equation (4)), such that the Drude response becomes transparent in the short distance regime, leading to a drastic reduction of the force per unit area. On the other hand, for large separations (relating to low energies), the aluminium becomes a very good metal, approaching the theoretical limit between graphene and PEC [33], with P/P_PEC ∼ 0.025. This simple example shows the importance of the high frequency response of the materials within the Lifshitz formalism at short separations. We return to this central aspect in the last section of the article. Effects of Doping and Loss When graphene is doped, it becomes more metallic, and one can observe a larger pressure in Figure 2 for increased doping (solid blue and red lines). However, as the distance is reduced, favouring higher energies in the Lifshitz integral, the pressure converges to the case of intrinsic graphene. This is because at sufficiently high energies the response of graphene is dominated by interband terms, which do not depend on doping, whereas intraband processes are forbidden for ħω > 2µ [26]. Therefore, for undoped graphene, only interband transitions contribute. This leads to the universal conductance described above, towards which doped graphene converges at high energies. We also plot the pressure for a finite scattering rate (ħΓ = ħ/τ = 0.1 eV, dashed red line), which results in a drastic reduction of the force at large distances but has little effect at short separations. Such Drude damping corresponds to Ohmic losses accompanying the conduction current, which dominates the response of a conductor at low frequencies. As the frequency increases, free carriers start to lag behind the field, and the material response is instead dominated by the polarisation field, in which Joule heating plays a negligible role. Non-Local Plasmons To gain a deeper understanding of the effect of plasmons on the pressure, it is helpful to study its spectrum in ωk space, given by the integrand [7] of Equation (4). Several spectra are presented for the case of doped sheets (µ = 0.5 eV, ħΓ = 0.02 eV, v_F = 10⁶ m s⁻¹) in Figure 3.
In these spectra we can clearly observe the strong poles produced by the surface plasmons, as illustrated in Figure 1c. These confined oscillations of free charges on each sheet couple together to form bonding (or symmetric, ω−) and anti-bonding (or anti-symmetric, ω+) hybrids, contributing attractively and repulsively to the force respectively [7,8]. Most guided modes give rise to similarly strong contributions [38], and therefore there is a general interest in trying to use them to control the Casimir force with the help of nanostructuration [9,10]. As the distance is decreased, such as in the step from Figure 3a,b to Figure 3c,d, the increased coupling leads to stronger modes which extend to larger wave vectors. At these shorter distances, non-local effects change the dispersion of the plasmons dramatically, mostly due to Landau damping, which takes place at ħω/µ ≥ 2 − k/k_F and is clearly visible in Figure 3d. From Figure 3, we can deduce a cross-over in the coupling strength between the plasmons, where the Casimir forces transition from being governed by Ohmic losses at low frequency and momentum to being governed by Landau damping when the plasmons are suitably energetic. The axes have been renormalised to k_F = µ/ħv_F and µ respectively. We also plotted, as a dashed line, the dispersion relation for uncoupled surface plasmons in the local picture, k_sp = −2ε₀/χ. Contributions to the Forces We now turn to the various contributions to the force by separating interband from intraband terms, considering polarisation, and comparing the local and non-local calculations in Figure 4. Figure 4. Contributions to the Casimir forces between two graphene sheets (µ = 0.5 eV) as a function of the distance and normalised to the pressure between two perfect conductors P_PEC. Full lines represent the local case, whereas the circles denote the non-local case. The dashed black line is the total force between intrinsic graphene sheets, P/P_PEC ∼ 0.00538. The inset shows the region in the dashed box, where the difference between the local and non-local cases is greatest. As is well known [14], at large separations both TE and TM modes have a similar strength, originating from waveguided modes within the gap region [39]. In this regime, interband processes are negligible because the relevant energy scale is much smaller than 2µ. As the distance is decreased, all guided modes (both TE and TM) are cut off and only surface plasmons, which are TM excitations, are left to contribute to the force. However, the strength of interband transitions also rises strongly at small separations, overcoming that of the intraband response at 2 nm for the doping considered here (µ = 0.5 eV). As discussed earlier, at very short distances the force is mainly due to the interband transitions, and hence the convergence to the case of undoped sheets. When including the effect of non-locality, displayed as open circles in Figure 4, we observe that mostly TM intraband processes, typically plasmons, are affected. This confirms the intuition derived from Figure 3. These excitations suffer from strong Landau damping at short distances because they correspond to a regime of high phase velocity (momentum), thereby reducing the force compared to the local picture. We can appreciate in the inset of Figure 4 that non-locality makes a difference in the force spectrum, but hardly any in the integrated force.
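The bonding/anti-bonding structure described above corresponds to the poles of the TM term in the Lifshitz integrand. For two identical sheets this factorises into the textbook coupled-mode condition; the display below is a standard identity supplied for reference under the definitions used in the text, not a formula recovered from the original.

```latex
% Coupled surface-plasmon condition for two identical sheets (sketch):
1 - r_p^2(\omega, k_\parallel)\, e^{-2 q_2 d} = 0
\quad\Longleftrightarrow\quad
r_p(\omega, k_\parallel)\, e^{-q_2 d} = \pm 1,
% the two signs yielding the symmetric (bonding, \omega_-) and
% anti-symmetric (anti-bonding, \omega_+) hybrid branches.
```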
Dirac Cone Approximation Although the previous discussion provides good insight, it ignores a crucial issue related to the Lifshitz formalism in the calculation of the Casimir force. The problem arises from the need to integrate, to infinite frequency, an expression dependent on the susceptibility of graphene. While the integral converges quickly along imaginary frequencies and can thus be truncated, shorter separations make higher and higher energies relevant. Therefore, the description of graphene must remain valid at these higher energy scales. Unfortunately, the closed-form susceptibility equation we use is only accurate below 2.5 eV, beyond which the Dirac-cone approximation breaks down due to the van Hove singularity and many-body effects, such as excitonic excitations [37,40]. In such a situation we should resort to experimental data or more accurate descriptions of the susceptibility. In fact, if the response of graphene were known exactly at all relevant frequencies, the calculation of the force within the general Lifshitz framework should be accurate down to a few angstroms, at which point the evanescent tails of the π orbitals on each graphene sheet start to overlap. In order to illustrate this issue and test the validity of the Lifshitz approach in the case of graphene, we calculated the deviation arising from truncating the integral (Equation (7)). This truncation does not have a physical meaning; its purpose is simply to gauge the importance of the high-energy region for the Casimir effect. The results are shown in Figure 5, where we plotted δE = (E_truncated − E_∞)/E_∞, with E_∞ given by Equation (7) and E_truncated the same integral truncated to ħω_max = 2 eV (red lines) or 10 eV (blue lines). It is clear from the dashed lines in Figure 5 that the RPA response used here (Equation (1)) breaks down below 200 nm because it fails to take into account the van Hove singularity, as discussed earlier. Unfortunately, even a response that would be accurate up to 10 eV, which corresponds to the deep UV, may not be capable of predictions for distances shorter than a few tens of nanometres. Nevertheless, it would be advisable for calculations made within this energy range to make use of experimental data for the response of graphene, as they are certainly more appropriate than the available RPA. When doping is considered (full lines), intraband excitations are allowed and the weight of the interband term in the integral is decreased, reducing the deviation severalfold for µ = 2 eV. This still does not solve the problem of deviation at quite large separations.
This arises from the linear dispersion predicted by the Dirac cone approximation, which contradicts experiments [37] for ħω > 2.5 eV. By truncating the integral at this value, we showed that this description is unsuited to calculating the Casimir forces between graphene sheets below 200 nm separation distance. This means that there is currently a vast and technologically critical parameter space which is inaccessible to closed-form response function calculations, but of course, this can be fixed using experimental data.
4,100.6
2020-01-19T00:00:00.000
[ "Physics" ]
Small amplitude solitary waves in the Dirac-Maxwell system We study nonlinear bound states, or solitary waves, in the Dirac-Maxwell system proving the existence of solutions in which the Dirac wave function is of the form $\phi(x,\omega)e^{-i\omega t}$, $\omega\in(-m,\omega_*)$, with some $\omega_*>-m$, such that $\phi_\omega\in H^1(\mathbb{R}^3,\mathbb{C}^4)$, $\Vert\phi_\omega\Vert^2_{L^2}=O(m-|\omega|)$, and $\Vert\phi_\omega\Vert_{L^\infty}=O(m-|\omega|)$. The method of proof is an implicit function theorem argument based on an identification of the nonrelativistic limit as the ground state of the Choquard equation. Introduction and results The Dirac equation, which appeared in [Dir28] just two years after the Schrödinger equation, is the correct Lorentz invariant equation to describe particles with nonzero spin when relativistic effects cannot be ignored. The Dirac equation predicts accurately the energy levels of an electron in the hydrogen atom, yielding relativistic corrections to the spectrum of the Schrödinger equation. Further higher order corrections arise on account of interactions with the electromagnetic field, described mathematically by the Dirac-Maxwell Lagrangian, which aims to provide a self-consistent description of the dynamics of an electron interacting with its own electromagnetic field. The perturbative treatment of the Dirac-Maxwell system in the framework of second quantization allows computation of quantities such as the energy levels and scattering cross-sections, which have been compared successfully with experiment; of course this quantum formalism does not provide the type of tangible description of particles and dynamical processes familiar from classical physics. Mathematically, the quantum theory (QED) has not been constructed, and indeed may not exist in the generally understood analytical sense. In particular it is a curious fact that although the electron is the most stable elementary particle known to physicists today, there is no mathematically precise formulation and proof of its existence and stability. This has resulted in an enduring interest in the classical Dirac-Maxwell system, both in the physics and mathematics literature. Regarding the former, the relevance of the classical equations of motion for QED has been widely debated. The prevalent view seems to be that the Dirac fermionic field does not have a direct meaning or limit in classical physics, and hence that the classical system is not really directly relevant to the world of observation. Nevertheless, there have been numerous attempts, both by Dirac himself and by many others (see [Dir62, Wak66, Lis95] and references therein) to construct localized solutions of the classical system or some modification thereof, with the aim of obtaining a more cogent mathematical description of the electron (or other fundamental particles). We consider the system of Dirac-Maxwell equations, in which the electron, described by the standard "linear" Dirac equation, interacts with its own electromagnetic field, which is in turn required to obey the Maxwell equations (1.1), with the charge-current density J^µ = (ρ, J) generated by the spinor field as in (1.2); a sketch of this system is given below. Above, ρ and J are the charge and current densities respectively. We denote ψ̄ = (γ⁰ψ)* = ψ*γ⁰, with ψ* the hermitian conjugate of ψ. The charge is denoted by e (so that for the electron e < 0); the fine structure constant is the dimensionless coupling constant α ≡ e²/ħc ≈ 1/137. We choose units so that ħ = c = 1.
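The displays (1.1)-(1.2) did not survive extraction. As a point of reference, a standard form of the Dirac-Maxwell system in the units and gauge stated in the text is sketched below; the sign of the minimal coupling and the normalisation of the source term are convention-dependent assumptions.

```latex
% Sketch of the Dirac-Maxwell system (1.1)-(1.2), with hbar = c = 1 and the
% Lorentz gauge; coupling sign and source normalisation are assumed.
i\gamma^\mu\bigl(\partial_\mu + i e A_\mu\bigr)\psi - m\psi = 0,
\qquad
\Box A^\mu = J^\mu, \qquad \partial_\mu A^\mu = 0,           % (1.1)
\qquad
J^\mu = e\,\bar\psi\gamma^\mu\psi = (\rho, \mathbf{J}).      % (1.2)
```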
We have written the Maxwell equations using the Lorentz gauge condition ∂_µA^µ = 0. The Dirac γ-matrices satisfy the anticommutation relations {γ^µ, γ^ν} = 2g^µν, with g^µν = diag[1, −1, −1, −1]. The four-vector potential A^µ has components (A⁰, A), with A = (A¹, A², A³), so that the lower index version A_µ = g_µν A^ν has components (A₀, −A), so A₀ = A⁰. Following [BD64] and [BS77], we define the Dirac γ-matrices in the standard representation, in which I₂ is the 2 × 2 unit matrix and σ_j are the Pauli matrices. After introduction of a space-time splitting, the system (1.1) takes the form (1.4), where α = (α₁, α₂, α₃), and α_j and β are the 4 × 4 Dirac matrices built from {σ_j}³_{j=1}, the Pauli matrices. We will not distinguish lower and upper indices j of α and σ, so that α_j = α^j, σ_j = σ^j. The α-matrices and γ-matrices are related by β = γ⁰ and α^j = γ⁰γ^j. Numerical justification for the existence of solitary wave solutions to the Dirac-Maxwell system (1.1) was obtained in [Lis95], where it was suggested that such solutions are formed by the Coulomb repulsion from the negative part of the essential spectrum (the Klein paradox). The numerical results of [Lis95] showed that the Dirac-Maxwell system has infinitely many families of solitary wave solutions φ_N(x, ω)e^{−iωt}, ω ≈ −m. Here the nonnegative integer N denotes the number of nodes of the positronic component of the solution (the number of zeros of the corresponding spherically symmetric solution to the Choquard equation; see §3). A variational proof of existence of solitary waves for ω ∈ (−m, 0) and with N = 0 first appeared in [EGS96], and the generalization to handle ω ∈ (−m, m) is in [Abe98]. In the present paper, we give a proof of existence of solitary wave solutions to the Dirac-Maxwell system based on the perturbation from the nonrelativistic limit and also obtain the precise asymptotics for the solution in this limit. The physical significance of these types of solitary wave solutions requires not only their existence but also their stability, and it is to be hoped that the type of detailed information about the solutions which is a consequence of the existence proof in this article, but does not seem to be so easily accessible from the original variational constructions, will be helpful in future stability analysis (see Remark 3.6 below). The second motivation for presenting this proof is to realize mathematically the physical intuition explained in [Lis95], which explains the existence of these bound state solutions in terms of the Klein paradox ([BD64, §3.3]). Moreover, once one knows that the excited eigenstates of the Choquard equation are nondegenerate (currently this nondegeneracy is established only for the ground state, N = 0 [Len09]), our argument will yield the existence of excited solitary wave solutions in the Dirac-Maxwell system, extending the results of [EGS96] to N ≥ 1. We will construct solitary wave solutions by deforming the solutions to the nonrelativistic limit (represented by the Choquard equation) via the implicit function theorem. Such a method was employed in [Oun00, Gua08] for the nonlinear Dirac equation and in [RN10a, Stu10, RN10b] for the Einstein-Dirac and Einstein-Dirac-Maxwell systems. The solitary wave (φe^{−iωt}, A_µ(x)) satisfies the stationary system (1.6). Theorem 1.1. There exists ω* > −m such that for ω ∈ (−m, ω*) there is a solution to (1.6) in which φ_ω and the associated potentials are of Schwartz class.
The solutions could be chosen so that in the nonrelativistic limit ǫ = 0 one has a limiting profile determined by a constant spinor n ∈ C², |n| = 1, and by ϕ₀, a strictly positive spherically symmetric solution of Schwartz class to the Choquard equation (1.8). Remark 1.2. The existence of a positive spherically-symmetric solution ϕ₀ ∈ S(R³) to (1.8) was proved in [Lie77]. Here is the plan of the paper. We give the heuristics in §2. The Choquard equation, which is the nonrelativistic limit of the Dirac-Maxwell system, is considered in §3. In §4, we complete the proof of existence of solitary waves via the implicit function theorem. Heuristics on the nonrelativistic limit The small amplitude waves constructed in Theorem 1.1 are best understood physically in terms of the non-relativistic limit. Since we have set the speed of light and other physical constants equal to one, the relevant small parameter is the excitation energy (or frequency) as compared to the mass m. To develop some preliminary intuition regarding the non-relativistic limit, following [Lis95], we neglect the magnetic field described by the vector-potential A_j. Let us consider a solitary wave solution ψ(x, t) = φ(x)e^{−iωt}, where φ₁, φ₂ ∈ C² denote the upper and lower components of φ and A₀ = A₀(x) only. Then φ₁, φ₂, and A₀ satisfy a coupled system (2.1), where σ = (σ₁, σ₂, σ₃) is the vector formed from the Pauli matrices. Consider small amplitude solitary waves with ω ≈ −m. Then A₀ is small and −2mφ₁ ≈ −iσ·∇φ₂. Denoting ǫ² = m² − ω², 0 < ǫ ≪ m, the above suggests the scaling (2.2). Note that since φ_j and A₀ depend on ω and x, the scaled functions A₀ and Φ_j are functions of y and of ǫ. In the limit ǫ → 0 one obtains the system (2.3), which can be rewritten as an equation for Φ₂ only, with the understanding that Φ₁ is then obtained from the first equation of (2.3). Remark 2.1. Regarding self-consistency of this approximation: one can check that, when using the scaling (2.2), the magnetic field vanishes to higher order in the limit ǫ → 0, in agreement with [Lis95]. The second equation from (2.1) would then take a form in which A·σφ₁ = O(ǫ⁶) while the other terms are O(ǫ⁴). Thus the approximation is at least self-consistent, and the analysis in §4 justifies this rigorously. Remark 2.2. Regarding symmetry: while it is clear that radial symmetry of both φ₁ and φ₂ is inconsistent with (2.3), less symmetric configurations are permitted in principle, suggesting that in the non-relativistic limit Φ₂ could be radial. The starting point for our perturbative construction of solitary wave solutions to (1.4) is indeed a radial solution of (2.6), although the exact form of these solitary waves has to be modified from (2.5) when the effect of the magnetic field is taken into account. The method of proof we employ does not require any particular symmetry class of the solitary wave. The above discussion suggests that the system (2.6) will determine the non-relativistic limit to highest order. The system (2.6) describes a Schrödinger wave function with an attractive self-interaction determined by the Poisson equation. Because the sign of the interaction is attractive it is often referred to as the stationary Newton-Schrödinger system. It is equivalent to a nonlocal equation for Φ known as the Choquard equation, which is the subject of the next section. The nonrelativistic limit: the Choquard equation The system (2.6) can also be obtained by looking for solitary wave solutions of the time-dependent Newton-Schrödinger system; a sketch is given below.
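The displays for this system did not survive extraction. A standard form, consistent with the convention used in the next section that ∆⁻¹ denotes convolution with −(4π|x|)⁻¹, is the following sketch.

```latex
% Time-dependent Newton-Schrodinger system (sketch; normalisations assumed):
i\,\partial_t \psi = -\frac{1}{2m}\Delta\psi + V\psi,
\qquad
\Delta V = |\psi|^2
\quad\Bigl(\text{so } V = -\tfrac{1}{4\pi|x|} * |\psi|^2 < 0,
\text{ an attractive self-interaction}\Bigr).
% Standing waves psi = phi e^{-i omega t} then satisfy the Choquard equation
% -\tfrac{1}{2m}\Delta\phi - \bigl(\tfrac{1}{4\pi|x|} * |\phi|^2\bigr)\phi = \omega\phi .
```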
If (φe^{−iωt}, V(x)) is a solitary wave solution, then φ and V satisfy the stationary system (3.1). We rewrite the system (3.1) in the non-local form, called the Choquard equation, where ∆⁻¹ is the operator of convolution with −1/(4π|x|). The solitary waves are solutions of the form ψ(x, t) = φ_ω(x)e^{−iωt}, with φ_ω satisfying the non-local scalar equation (3.3). This suggests the following variational formulation of the problem: find critical points of the energy functional subject to the constraint ∫|φ(x)|² dx = const. This formulation is the basis of the existence and uniqueness proofs in the references, which are summarized in the following theorem. Remark 3.2. Together with the heuristics in the previous section, this result suggests that for ω sufficiently close to −m there might exist infinitely many families of solitary waves of the Dirac-Maxwell system, which differ by the number of nodes. As mentioned in [EGS96], the variational methods used in that paper are hard to generalize to prove the existence of multiple solitary waves for each ω (such a multiplicity result is obtained in [EGS96] for the Dirac-Klein-Gordon system). Remark 3.3. The φ(x) and V(x) for different values of ω < 0 can be scaled to produce a standard form as follows. Let ζ > 0 satisfy ζ² = −ω and write y = ζx, φ(x) = ζ²u(ζx), and V(x) = ζ²v(ζx). Then (3.2) is equivalent to a rescaled system for u(y), v(y). In the remainder of this section we summarize the properties of the linearized Choquard equation which follow from [Len09] and are needed in §4. Consider a perturbation of a solution to the Choquard equation with components R, S real-valued. The linearized equation for R, S involves two operators L₀ and L₁. Both L₀ and L₁ are unbounded operators L² → L² which are self-adjoint with domain H² ⊂ L². Clearly L₀ϕ₀ = 0, with 0 ∈ σ_d(L₀) an eigenvalue corresponding to a positive eigenfunction ϕ₀; it follows that 0 is a simple eigenvalue of L₀, with the rest of the spectrum separated from zero. The range of L₀ is {ϕ₀}⊥, the L² orthogonal complement of the linear span of ϕ₀. Lemma 3.4. The self-adjoint operator L₁: H² → L² has exactly one negative eigenvalue, which we denote −Λ₀, and has a three dimensional kernel Ker L₁ spanned by {∂_jϕ₀}³_{j=1}. The range of L₁ is (Ker L₁)⊥, the L² orthogonal complement of the linear span of the {∂_jϕ₀}³_{j=1}. Proof. We proceed similarly to [Kik08, Lemma 5.4.3]. The n = 0 ground state solution ϕ₀ to (3.4) is characterized in [Lie77] as the solution, unique up to translation and phase rotation, to a minimization problem with a certain µ > 0. We claim that this implies that L₁ is non-negative on the codimension one subspace {ϕ₀}⊥; in verifying the claim we take into account that ϕ₀ satisfies the stationary equation E′(ϕ₀) = ω₀Q′(ϕ₀) and also that ⟨Q′(ϕ₀), ϕ₀⟩ = 2‖ϕ₀‖². So L₁ is non-negative on a codimension one subspace. On the other hand, since the integral kernel of ∆⁻¹ is strictly negative, while ϕ₀ is strictly positive and L₀ϕ₀ = 0, it follows that ⟨ϕ₀, L₁ϕ₀⟩ < 0, so that there certainly exists one negative eigenvalue, characterized variationally. Let η₀ be the corresponding eigenfunction, L₁η₀ = −Λ₀η₀. To prove that (−Λ₀, 0) ⊂ ρ(L₁), the resolvent set, consider the minimization problem (3.9). Now the relation L₀ϕ₀ = 0, together with translation invariance, implies that L₁∂_jϕ₀ = 0. Moreover, it is proved in [Len09] that ϕ₀ is nondegenerate, in the sense that the kernel of L₁ is spanned by the ∂_jϕ₀, 1 ≤ j ≤ 3.
Hence, by consideration of linear combinations of the eigenfunctions η₀ and ∂_jϕ₀, it follows that the number defined by (3.9) is ≤ 0. In fact it must equal zero, since if it were negative a simple compactness argument (based on the negativity of ω₀) would imply the existence of a negative eigenvalue in the interval (−Λ₀, 0) with corresponding eigenfunction η₁ orthogonal to η₀. But since η₀, η₁ would then be an orthogonal pair of eigenfunctions of L₁ with negative eigenvalues, both having non-zero inner product with ϕ₀, this would immediately contradict the fact that L₁ is non-negative on {ϕ₀}⊥. We conclude with a few remarks on the stability of solitary waves of the Choquard equation. By Remark 3.3 we know the ω-dependence of a localized solution φ_ω(x)e^{−iωt} to (3.3): one has φ_ω(x) = ζ²u(ζ|x|), where ζ = √(−ω). From this we can obtain the frequency dependence of the charge: Q(ω) = ∫|φ_ω|² dx = ζ‖u‖²_{L²} = (−ω)^{1/2}‖u‖²_{L²}. It follows that for all negative frequencies dQ/dω < 0. By the Vakhitov-Kolokolov stability criterion ([VK73]), this leads us to expect the linear stability of no-node solitary waves (the ground states) in the Choquard equation. To determine the point spectrum of the linearization, suppose that (R, S) is an eigenfunction corresponding to the eigenvalue λ ∈ C; then −λ²R = L₀L₁R. If λ ≠ 0, then one concludes that R is orthogonal to Ker L₀ = span{ϕ₀}, hence we can apply L₀⁻¹; taking then the inner product with R, we deduce a relation which implies that λ² ∈ R. Moreover, by (3.9), λ² ≤ 0, leading to the conclusion that the point spectrum σ_d(JL) ⊂ iR, and hence the absence of growing modes at the linearized level. The (nonlinear) orbital stability of the ground state solitary wave was proved in [CL82]. Remark 3.6. In view of [CGG12, BC12], one expects that the linear stability or instability of small amplitude solitary waves is directly related to the linear stability or instability of the corresponding nonrelativistic limit, which for Dirac-Maxwell is given by the Choquard equation. We hope that this may provide a route to understanding the stability of small solitary wave solutions for the Dirac-Maxwell system. Proof of existence of solitary waves in the Dirac-Maxwell system In this section, we complete the proof of Theorem 1.1. It is obtained as a consequence of Proposition 4.6 after the application of a rescaling motivated by the discussion in §3. We write φ = (φ₁, φ₂), where for j = 1, 2 the φ_j ∈ C² are essentially the components of φ in the range of the projection operators Π₁ = ½(1 + β) and Π₂ = ½(1 − β) (under obvious isomorphisms of these subspaces with C²). Applying Π₁ and Π₂ to (1.6), we obtain (4.1)-(4.2). We write these as (4.3) and regard the potentials A₀ and A = (A_j) as non-local functionals of φ = (φ₁, φ₂). Above, N(x) = (4π|x|)⁻¹ is the Newtonian potential. In abstract terms, the equations are of the form ωQ′ = E′, where the charge functional Q is given by (4.5) and, regarding A₀, A as fixed non-local functionals (4.4) of φ, the Hamiltonian E(φ) is given by (4.6). For future reference we recall the following trick from [Stu99]: Lemma 4.1. Let ξ_α be a finite collection of vector fields on the phase space which are infinitesimal symmetries, in the sense that ⟨Q′, ξ_α⟩ = 0 = ⟨E′, ξ_α⟩. Then any solution of the equation ωQ′ − E′ − a^αξ_α = 0, for some set a^α ∈ R, is also a solution of ωQ′ − E′ = 0, as long as the matrix ⟨ξ_α, ξ_β⟩ is well defined and nondegenerate. Proof. For sufficiently regular ξ_β it is possible to take the inner product, yielding a^α⟨ξ_α, ξ_β⟩ = 0, which gives the result.
(The precise meaning of sufficiently regular is just that this computation is valid; it would be sufficient for ξ α to lie in a subspace F of L 2 with the property that the equation ωQ ′ − E ′ − a α ξ α = 0 holds in the dual of F .) Example 4.2. For ψ : R → C and Q = 1 2 |ψ| 2 and E = 1 2 |∇ψ| 2 − 1 p+1 |ψ| p+1 the symmetry of phase rotation corresponds to the infinitesimal symmetry ξ(ψ) = iψ, and it is easy to check that given an H 1 distributional solution of ωQ ′ − E ′ − aξ = 0, i.e. a weak solution of −∆ψ − |ψ| p ψ = ωψ − iaψ, for any a ∈ R, one necessarily has a = 0. The same holds in higher dimensions as long as p is such that the equation holds as an equality in H −1 . Remark 4.3. The advantage of solving the more general equation with the unknown "multipliers" a α is that in an implicit function theorem setting the multipliers can be varied to fill out the part of the cokernel corresponding to the symmetries. It is then shown after the fact that the multipliers are in fact zero. The choice of ξ α is determined by the symmetry group; in the case of Dirac-Maxwell the relevant group is the seven dimensional group generated by translations, rotations and phase rotation. The infinitesimal versions of these actions give the following vector fields ( [BD64]): In accordance with the heuristics in §2 we introduce functions Φ 1 (y, ǫ), Φ 2 (y, ǫ) ∈ C 2 and A µ (y, ǫ) by the following scaling relations: where ǫ and ω are related by ω = − √ m 2 − ǫ 2 . Then, writing ∇ y for the gradient with respect to y j = ǫx j , 1 ≤ j ≤ 3, we have: Let ϕ 0 ∈ S (R 3 ) be the ground state solution to the Choquard equation with ω 0 = − 1 2m : That is, ϕ 0 is a strictly positive, spherically symmetric, smooth, and exponential decaying function. As discussed in the previous section, such a solution exists by [Lie77]; the value ω 0 = −(2m) −1 is chosen for our convenience. Using ϕ 0 , we can produce a solution to (4.9)-(4.11) in the nonrelativistic limit ǫ = 0: (4.14) The symmetry of this configuration is axial, with the magnetic field along the z axis of symmetry. C 4 ). Then A µ defined by (4.11) satisfy Proof. The functions A µ defined by (4.11) are of the form N * h with h := f g, where f, g ∈ H 1 (R 3 ). Due to the Sobolev embedding H 1 (R 3 ) ⊂ L 6 (R 3 ), we have h ∈ L p (R 3 ) , 1 ≤ p ≤ 3. By the Hölder inequality, one has where B 1 is the unit ball in R 3 and χ B1 is its characteristic function, hence |x| −1 * h ∈ L ∞ (R 3 ). Furthermore the structure of (4.11) makes it clear that the mappings (4.16) Introducing P / = −iσ·∇ y and substituting ω = − √ m 2 − ǫ 2 , we rewrite (4.9), (4.10) as the equation (4.17) As above, we regard the A µ = (A 0 , A), A = (A j ), as non-local functionals A µ = A µ (Φ, ǫ) determined by (4.11). With this understood, the entire system is encapsulated in the equation F (Φ, ǫ) = 0 for Φ = Φ 1 Φ 2 only. In terms of the original variables: where the functionals Q, E are defined by (4.5), (4.6). The nonrelativistic limit satisfies F (Φ, 0) = 0 (cf. (4.13), (4.14)), so that to obtain solutions for small ǫ it is necessary to compute the derivative of F at the point (Φ, 0). This is determined by the set of directional derivatives. Let e 1 = 1 0 and e 2 = 0 1 , and let g ∈ H 1 (R 3 , C 2 ). 
To compute the directional derivatives first note that A j drops out on putting ǫ = 0, and then note further that by (4.11) only the derivative of A 0 at (Φ 1 , 0) with respect to Φ 2 is nonzero, with derivative given by We deduce that for C 2 -valued functions U and V , Thus, the derivative of F at the nonrelativistic limit point (Φ, 0) is the linear map DF (Φ, 0) given by the matrix M. This is a differential operator, which we consider as an unbounded operator on L 2 (R 3 ; C 2 ) ⊕ L 2 (R 3 ; C 2 ). 1. The map M : is a Hermitian operator with domain X. The kernel of M is given by 4. The range of M : is closed in the topology of Y and is given by where ⊥ means the orthogonal complement with respect to the inner product in L 2 ⊕ L 2 . The inverse of M : where the definitions and properties of the operators L 0 , L 1 are given in §3. Proof. The proof depends on some properties of the linearized Choquard equation from [Len09] which are stated in §3. The fact in (1) that M is Hermitian follows from the fact that P / is Hermitian. From Lemma 4.4 the assertion (2) is immediate from the properties of N and the fact that ϕ 0 and its partial derivatives are smooth and exponentially decreasing. To prove (3),(4) and (5) we consider how to solve M U V = F G , i.e. the system We first express U in terms of V by U = 1 2m (F − P / V ) , and, writing V = V 1 e 1 + V 2 e 2 , Referring to the definitions in §3 of L 0 and L 1 , with ω 0 set equal to −(2m) −1 , we arrive at the following equations: (4.20) It is useful here that the components with respect to e 1 and e 2 are decoupled. Noting also from the form of L 0 , L 1 that these operators take real/imaginary valued functions to real/imaginary valued functions, and further that L 1 = L 0 on pure imaginary functions, we obtain the given formula for V , and hence for U , immediately from §3. The identification of the kernel in (3) is then a specialization of this, given the information on Ker L 0 and Ker L 1 in §3, and also (4) is a consequence of the identification of the ranges of L 0 and L 1 given in §3. The statement of Theorem 1.1 will follow from the following result. Proof. Solutions of (4.1)-(4.3) for small ǫ can be produced by solving F = 0. The proof of existence of solutions to this equation is by the implicit function theorem and Lemma 4.1, perturbing from the nonrelativistic limit point F (Φ, 0) = 0. To start we claim that F , as defined in (4.17), is a C ∞ function X × (−m, +m) → Y . To prove this notice that the expression for F is manifestly smooth in ǫ for ǫ 2 < m 2 , and its dependence on Φ j is built up from compositions of certain multilinear maps and linear operators; the structure of the expressions obtained after successive differentiation is the same. Referring to the specific formulae, the fact that these expressions are all C ∞ is an immediate consequence of the fact that multiplication gives continuous bilinear ( =⇒ smooth) maps H 1 × H 2 → H 1 and H 2 × H 2 → H 2 (Moser inequalities) and Lemma 4.4. We are looking for Φ(ǫ) in the form We use the same component notation as above We apply the implicit function theorem to the function G : (4.23) Remark 4.7. Referring to (4.3) we have introduced a linear combination of the six infinitesimal symmetries corresponding to translation and rotation. The action of phase rotation is not independent of rotation in the nonrelativistic limit, which is why the seventh parameter does not appear. In terms of the original variables (cf. 
(4.7)): Computing the derivatives of (4.23) at ǫ = 0, we see that the linear span {∂ aj G, ∂ bj G; 1 ≤ j ≤ 3} is equal to Ker M. Referring to Lemma 4.5, this establishes that the derivative of G at ǫ = 0, This latter condition serves to divide out by the action of the symmetry group, giving a local slice. Referring to Lemma 4.1, to deduce that these in fact generate solutions of F = 0, for sufficiently small ǫ > 0, it is sufficient to verify that a(ǫ) = 0, b(ǫ) = 0, which is in turn a consequence of the nondegeneracy of the matrix of inner products of the infinitesimal vector fields, scaled as above. This amounts to the need to verify nondegeneracy of the 6 × 6 matrix for small ǫ. (In the matrix (4.25) the indices j, j ′ , k, k ′ run between 1 and 3.) Lemma 4.8. The matrix given by (4.25), evaluated at φ(x) = Proof. Clearly the dominant terms arise from the second ("large") component giving rise to diagonal matrix elements which, referring to the block form in (4.25), are O(ǫ 4 ). Since Ψ j = O(ǫ), the result will follow from nondegeneracy of the matrix with Ψ j set equal to zero. Using ǫ −2 φ = − ǫ 2m P /Φ 2 Φ 2 andΦ 2 = ϕ 0 0 , we calculate the first diagonal term: where we took into account the spherical symmetry of ϕ 0 , which leads to ∂ y 1 ϕ 0 , ∂ y 1 ϕ 0 = 1 3 ϕ 0 , (−∆ y )ϕ 0 . Next for the off-diagonal terms we compute, again using the same expression for ǫ −2 φ: The first two terms are identically zero since ϕ 0 is spherically symmetric (so that by parity considerations it is L 2 orthogonal to all of its first partial derivatives, which are in turn orthogonal to all of the second partial derivatives). Finally, for the second diagonal term: The non-degeneracy of the matrix (4.25) for small ǫ follows. Remark 4.10. We briefly consider the symmetry properties of the solitary wave solutions: in [Lis95, Section 5] Lisi gives an ansatz for the solitary waves, using cylindrical coordinates (ρ, z, θ), from which symmetry properties can be deduced. For our situation the relevant ansatz for the Dirac wave function is φ =     ψ 1 (ρ, z) ψ 2 (ρ, z)e iθ ψ 3 (ρ, z) ψ 4 (ρ, z)e iθ     . (4.28) It seems likely that the solutions constructed via Proposition 4.6 have this symmetry and that this fact could be proved via an application of the implicit function theorem within the symmetry class of (4.28).
7,085.4
2012-10-26T00:00:00.000
[ "Mathematics", "Physics" ]
Penalized profiled semiparametric estimating functions: In this paper, we propose a general class of penalized profiled semiparametric estimating functions which is applicable to a wide range of statistical models, including quantile regression, survival analysis, and missing data, among others. It is noteworthy that the estimating function can be non-smooth in the parametric and/or nonparametric components. Without imposing a specific functional structure on the nonparametric component or assuming a conditional distribution of the response variable for the given covariates, we establish a unified theory which demonstrates that the resulting estimator for the parametric component possesses the oracle property. Monte Carlo studies indicate that the proposed estimator performs well. An empirical example is also presented to illustrate the usefulness of the new method. Introduction In statistical estimation, regularization or penalization has flourished during the last twenty years or so as an effective approach for controlling model complexity and avoiding overfitting; see, for example, Bickel and Li (2006) for a general survey. To estimate an unknown p-dimensional vector of parameters β = (β_1, . . . , β_p)^T, the regularized estimator is defined as the minimizer of L_n(β) + Σ_{j=1}^p p_λn(|β_j|), where L_n is a loss function that measures the goodness-of-fit of the model, and p_λn(·) is a penalty function that depends on a positive tuning parameter λ_n. Despite the large amount of work on regularized estimation, most existing studies were restricted to linear regression and likelihood based models. Recent statistical literature has witnessed rapidly growing interest in regularized semiparametric models, due to their balance between flexibility and parsimony. However, current results usually focus on a specific type of semiparametric regression model. For example, Bunea (2004), Xie and Huang (2009) and Liang and Li (2009) studied the partially linear regression model; Wang and Xia (2009) investigated shrinkage estimation of the varying coefficient model; Li and Liang (2008) proposed the nonconcave penalized quasilikelihood method for variable selection in semiparametric varying-coefficient models; Liang et al. (2010) considered partially linear single-index models; Kai, Li and Zou (2011) investigated the varying-coefficient partially linear models; Wang et al. (2011) studied estimation and variable selection for generalized additive partial linear models. Although the aforementioned works convincingly demonstrate the merits of regularization in a semiparametric setting, a general theory is still lacking. Furthermore, most of the existing theory assumes a smooth loss function, which excludes many interesting applications, such as those arising from quantile regression, survival analysis and missing data analysis. Instead of penalizing the loss function, Fu (2003) proposed to directly penalize the estimating function for generalized linear models. Later, Johnson, Lin and Zeng (2008) derived impressive results on the asymptotic theory for a broad class of penalized estimating functions when the regression model is linear but the error distribution is unspecified. It is noteworthy that their approach allows the estimating function to be discontinuous. In addition, Chen, Linton and Keilegom (2003) introduced a non-smooth estimating function for semiparametric models, but they only focused on non-penalized estimation.
Since the non-parametric component in their estimating function has been profiled, we refer to it as the profiled semiparametric estimating equation for simplicity. Both of these innovative approaches motivate us to propose a general class of penalized profiled estimating functions that substantially expands the scope of applicability of the regularization approach for semiparametric models. In this paper, we provide a unified approach to penalized semiparametric estimation that is applicable to many commonly used likelihood based models as well as non-likelihood based semiparametric models. This broad class of models has three appealing features: • First, the models incorporate nonparametric components for nonlinearity without imposing any assumptions on the conditional distribution of the response variable for the given covariates. • Second, the profiled estimating function allows the preliminary estimators of the nonparametric functions to depend on the unknown parametric component. Furthermore, the estimator for the nonparametric component is only assumed to satisfy mild conditions. Thus, the widely used nonparametric estimation methods, such as kernel smoothing or spline approximation, can be applied. • Third, the profiled semiparametric estimating function can be non-smooth in the parametric and/or nonparametric components, which is particularly useful for models arising from quantile regression, survival analysis, and missing data analysis, among others. Based on the profiled semiparametric estimating function with an appropriate nonconvex penalty function, we demonstrate that the penalized estimator of the parametric component possesses the oracle property under suitable conditions. That is, the zero coefficients in the parametric component are estimated to be exactly zero with probability approaching one, and the nonzero coefficients have the same asymptotic normal distribution as if the true sparse model were known a priori. It is noteworthy that the asymptotic results are established under a set of mild conditions and without assuming a parametric likelihood function. In addition, the proposed estimator can be computed via an efficient algorithm. The rest of the paper is organized as follows. Section 2 introduces the methodology of the penalized profiled semiparametric estimating function and then illustrates its applicability via four analytical examples. Section 3 presents a set of sufficient conditions and provides asymptotic theories for the penalized estimator. Monte Carlo studies and an empirical example are reported in Section 4 to demonstrate the finite sample performance and the usefulness of the proposed method, respectively. Section 5 concludes the article with a brief discussion. All detailed proofs are relegated to the Appendix. Estimating function Let m(z, β, h) be a p-dimensional vector that is a function of a p-dimensional parameter vector β and an infinite dimensional parameter h. The nonparametric component h can depend on both β and z, and thus is written as h(z, β) when clarity is needed. We assume that β ∈ B, a compact subset of R^p, and that h ∈ H, which is a vector space of functions endowed with the sup-norm metric ||h||_∞ = sup_β sup_z |h(z, β)|. We further assume that the data {Z_i = (Y_i, X̃_i^T)^T, i = 1, . . . , n} are randomly generated from a distribution which satisfies the moment condition E[m(Z_i, β_0, h_0)] = 0 for some β_0 ∈ B and h_0 ∈ H, where X̃_i is a generic notation for the covariate vector and Y_i is a response variable.
In this paper, we consider semiparametric models satisfying the above moment condition, and denote the true values of the finite and infinite dimensional parameters as β 0 and h 0 (·, β 0 ), respectively. In many real applications, the researchers are interested in estimating the parametric component β 0 and treat the nonparametric component h 0 as a nuisance function. To this end, for a given β, we consider the "profiled" estimator h(·, β) (abbreviated as h), which serves as a nonparametric estimator for h(·, β) in the semiparametric setting. To estimate β 0 , we subsequently define the pdimensional profiled semiparametric estimating function (2) be the population version of the estimating function. In this paper, we assume that M(β, h) is smooth in β, while its sample version M n (β, h) may be non-smooth in β. Based on the above estimating function, Chen, Linton and Keilegom (2003) considered the problem of estimating β 0 by emphasizing that m is a non-smooth function in β and/or h. Although M n (β, h) only contains a profile estimator,ĥ, it may implicitly depend on the additional estimators induced by the model setting. For the sake of explicitness, we sometimes include those augmented components in the estimating functions (e.g., see Examples 2 and 3 in the next subsection). In this paper, we study a related but different problem of variable selection and estimation for the parametric component. We assume that some of the components in β 0 = (β 01 . . . , β 0p ) T are zero, corresponding to redundant covariates. To estimate β 0 and identify its nonzero components, we propose the following penalized profiled (PP) semiparametric estimating function: where the notation q λn (|β|)sgn(β) denotes the component-wise product of q λn (|β|) = (q λn (|β 1 |), . . . , q λn (|β p |)) T with sgn(β) = (sgn(β 1 ), . . . , sgn(β p )) T and sgn(t) = I(t > 0) − I(t < 0). The function q λn (·) is the gradient of some penalty function. Based on the penalty function setting in Section 3, q λn (|β j |) is zero for large values of |β j |, whereas it is relatively large for small values of is heavily penalized, which forces the estimator of β 0j to shrink to zero. Once an estimated coefficient shrinks towards zero, its associated covariate is excluded from the final selected model. It is known that the convex L 1 penalty or Lasso Tibshirani (1996) is computationally attractive and demonstrates excellent predictive ability. However, it requires stringent assumptions to yield consistent variable selection (Greenshtein and Ritov, 2004;Meinshausen and Bühlmann, 2006;Zhao and Yu, 2006, among others). A useful alternative to the L 1 penalty function is the noncovex penalty function SCAD (Fan and Li, 2001) or MCP (Zhang, 2010), which alleviates the bias of Lasso and achieves model selection consistency under more relaxed conditions on the design matrix. Hence, we focus on nonconvex penalty functions that satisfy the general conditions given in Section 3.1. When U n (β, h) is a non-smooth function, an exact solution to U n (β, h) = 0 may not exist. Hence, we estimate β 0 by any β that satisfies ||U n ( β, h)|| = O p (n −1/2 ), where || · || denotes the L 2 or Euclidean norm. For the sake of simplicity, we name it an approximate estimator. 
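Several display equations in this section were lost in extraction. Based on the surrounding definitions (and on the conventions of Chen, Linton and Keilegom (2003) and Johnson, Lin and Zeng (2008), which the text follows), the moment condition and the sample and penalized estimating functions can be restated as below; the numbering follows the in-text references, and the exact scaling of the penalty term relative to $M_n$ is our assumption.

```latex
% Reconstructed from context; (1)-(3) match the equation numbers cited in the text.
\begin{align}
& E\{ m(Z_i, \beta_0, h_0(\cdot, \beta_0)) \} = 0, \tag{1} \\
& M_n(\beta, \hat{h}) = \frac{1}{n} \sum_{i=1}^{n} m\{ Z_i, \beta, \hat{h}(\cdot, \beta) \},
  \qquad M(\beta, h) = E\, m\{ Z_i, \beta, h(\cdot, \beta) \}, \tag{2} \\
& U_n(\beta, \hat{h}) = M_n(\beta, \hat{h})
  + q_{\lambda_n}(|\beta|) \circ \operatorname{sgn}(\beta), \tag{3}
\end{align}
```

where $\circ$ denotes the component-wise product described in the text.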
In Section 3, we demonstrate that the oracle estimator is an approximate solution of the penalized profiling estimating equations; and any root-n consistent approximate estimator of the penalized profiling estimating equations possesses the oracle property with probability tending to one. Analytical examples The proposed PP semiparametric estimating function can be applied to a wide range of statistical models. To illustrate its broadness, we consider the four motivating examples given below, some of which will be further discussed later to demonstrate the theory and applications. Since the penalty function in (3) does not depend on the model structure, we only present the profiled semiparametric estimating function. Example 1 (Partially linear quantile regression). We consider a random sample (X i , W i , Y i ), i = 1, . . . , n, from the partially linear quantile regression model where β 0 and the function h 0 (·) are unknown, and the random error ǫ i satisfies Then h 0 (w) = h(w, β 0 ). Accordingly, the profiled semiparametric estimating function is where h(w, β) is a nonparametric estimator of h(w, β). In Section 4, h(w, β) is obtained by the local linear smoothing of quantile regression. Specifically, for a given β, we have that where K t (·) = t −1 K(·/t), K(·) is a kernel function, and t > 0 is the bandwidth. Accordingly, the local linear estimator is h(w, β) = a 1 . Example 2 (Single-index mean regression). We observe a random sample (X i , Y i ), i = 1, . . . , n, from the model where β 0 and the function h 0 (·) are unknown, and the random error ǫ i satisfies E(ǫ i |X i ) = 0. For a given β, let h(X T β) = E(Y |X T β), where (X, Y ) has the same distribution as (X i , Y i ). Then h 0 (X T β 0 ) = h(X T β 0 ). There are various approaches to estimate h, for example, the leave-one-out Nadaraya-Watson ker- , where K t (·) is defined as in Example 1. Furthermore, adopting Ichimura (1993)'s suggestion, the profiled semiparametric estimating function is ∂β , for example, the derivative of the Nadaraya-Watson kernel estimator. Example 3 (Partially linear mean regression with missing covariates). Consider the partially linear regression model Liang et al. (2004) studied this model when the data on X i may not be completely observed. Let δ i be the observing data indicator: δ i = 1 if X i is observed and δ i = 0 otherwise. Assume that X i is missing at random in the sense that P (δ i = 1|X i , W i , Y i ) = P (δ i = 1|W i , Y i ), and denote the probability of X i being observed by π(Y i , W i ) = P (δ i = 1|W i , Y i ). In addition, let m 1 (w) = E(X|W = w), m 2 (w) = E(Y |W = w), m 3 (y, w) = E(X|Y = y, W = w), and m 4 (y, w) = E(XX T |Y = y, W = w). Then h(w, β) = m 2 (w) − m 1 (w) T β. Moreover, let m j be a nonparametric estimator of m j for j = 1, . . . , 4. As a result, h(w, β) = m 2 (w) − m 1 (w) T β. Finally, let π be an estimator of π based on a parametric (e.g., logistic regression) model or a nonparametric regression approach. Adapting Liang et al. (2004)'s method, we obtain the following estimating function where A = ( m 1 , m 2 , m 3 , m 4 , π) and In Section 4.1, the Horvitz-Thompson (HT) weighted local linear kernel estimators (Wang et al., 1998;Liang et al., 2004) are used for estimating m j (w) (j = 1, . . . , 4), which collectively yield the estimate of h(w, β). Example 4 (Locally weighted censored quantile regression). Censored quantile regression has been recognized as a useful alternative to the classical proportional hazards model for analyzing survival data. 
It accommodates heterogeneity in the data and relaxes the proportional hazards assumption. The survival time (or a transformation of it) T i is subject to random right censoring and may not be completely observed. However, we observe the i.i.d. triples is the indicator for censoring and C i is the censoring variable. we further assume that Therefore, X T i β 0 is the τ th conditional quantile of the survival time. Following approach, we obtain the profiled semiparametric estimating function where h(·|X i ) is the local Kaplan-Meier estimator of h 0 (·|X i ), which is the conditional distribution function of T i given X i , and the weight function is . . , n. showed that the estimator obtained by solving the above estimating function is consistent for β 0 , and it is also asymptotically normal under weaker conditions than those in the literature. Asymptotic properties In this paper, we assume that U n (β, h) can be a non-smooth function due to either M n (β, h) or q λn (|β|). For example, the popular SCAD penalty function (Fan and Li, 2001) has for θ ≥ 0 and some a > 2, where the notationb + stands for the positive part ofb, i.e.,b + =bI(b > 0). Hence, the q λn (θ) function is not differentiable at θ = λ n and θ = aλ n . It is not surprising that an exact solution to U n (β, h) = 0 may not exist. Hence, we consider an approximate estimator for β 0 that satisfies where ||·|| denotes the L 2 or Euclidean norm, see also the non-penalized approximate estimator in Chen, Linton and Keilegom (2003). Alternatively, we may consider the estimator as an approximate zero-crossing of U n (β, h); see Johnson, Lin and Zeng (2008). Without loss of generality, we assume that β 0 = (β T 10 , β T 20 ) T , where β 10 consists of the nonzero components and β 20 = 0 contains the zero components. Let A = {1 ≤ j ≤ p : β 0j = 0} be the index set of the nonzero components and denote the dimension of β 10 by s, where 1 ≤ s ≤ p. Our goal is to simultaneously estimate β 0 and identify its nonzero components. Under the moment condition (1), the population version of the estimating function M(β, h) satisfies M(β 0 , h 0 ) = M(β 0 , h 0 (Z i , β 0 )) = 0. To characterize the influence of the parametric component and nonparametric component on estimation, we adopt the approach of Chen, Linton and Keilegom (2003) and define the ordinary derivative and the path-wise functional derivative of M(β, h). Specifically, the ordinary derivative of M(β, h) with respect to β is the p × p matrix Γ 1 (β, h), which satisfies To facilitate the presentation of the large-sample theory for the penalized profiling semiparametric estimating equations, we consider the following three sets of conditions. (I) Conditions on the PP estimating equation Let (C1) The ordinary derivative Γ 1 (β, h 0 ) exists for β in a small neighborhood of β 0 and is continuous at β = β 0 . (III) Conditions on the true parameters and the unpenalized estimating equation , which is assumed to be positive definite. Remark 1. Conditions (C1)-(C5) and (T1)-(T3) are similar to those in Chen, Linton and Keilegom (2003) to ensure good performance of the profiled estimating equations, and they are general enough to allow the estimating equations to be non-smooth. In addition, Condition (T4) imposes a constraint on the magnitude of the smallest signal, which is common for the theory of penalized estimators. It is noteworthy that Condition (P1) is satisfied by popular nonconvex penalty functions, such as SCAD and MCP. 
Condition (P2) is a standard requirement on the rate of the tuning parameter in achieving the oracle property (Fan and Li, 2001). For any root-n consistent approximate solution Remark 2. The property described in this theorem is often referred to as the oracle property of parameter estimators in the variable selection context. In addition, for a nonconvex penalty function such as SCAD, we have that ). This is the asymptotic normal distribution that would be obtained if the true model is known a priori. Theorem 1 establishes the asymptotic property of the approximate estimator for a possibly non-smooth estimating function. If the unpenalized estimating function is continuous in the true parameter space, then an exact solution can be found. This leads us to investigate the property of its resulting estimator given below. Before presenting the result, let us define U n1 (β, h) and M n1 (β, h) be the subvectors that contain the first s components of U n (β, h) and M n (β, h), respectively. is continuous in β 1 , then with probability approaching one, there exists β 1 that is root-n consistent for β 10 and satisfies Furthermore, β 1 has the same asymptotic normal distribution as stated in Theorem 1(2). To apply the above two theorems, the main efforts lie in checking Conditions (C2)-(C5). Condition (C2) usually can be verified based on the smoothness of the population version of the objective function M (β, h). Condition (C3) is often satisfied for frequently used nonparametric estimators. Condition (C4) holds if we can show that the function class {m(Z, β, h) : β ∈ B, h ∈ H} is a Donsker class (e.g. van der Vaart and Wellner, 1996). In addition, the three sufficient conditions for (C4) are provided in Theorem 3 of Chen, Linton and Keilegom (2003). Condition (C5) can usually be established by applying a uniform Bahadur representation of h − h 0 , which is available for commonly used nonparametric smoothers. We have checked four analytical examples, which satisfy all conditions. It is noteworthy that Chen, Linton and Keilegom (2003) examined Conditions (C4) and (C5) for a partially linear median regression model that is a special case of Example 1. For the sake of illustration, we briefly demonstrate the examination of Conditions (C4) and (C5) for Example 2 in Appendix B. Parameter estimation To allow for the PP semiparametric estimating function to be non-smooth, we apply the idea of the MM (majorization-minimization) algorithm to both the profiled semiparametric estimating function and the penalty function. We refer to Hunter and Lange (2004) for a general tutorial on the MM algorithm. Specifically, we first obtain the nonparametric estimate h(W i , β) for the given β. Then, we adopt Hunter and Lange (2000)'s MM algorithm to the unpenalized profiled estimating function and Hunter and Li (2005)'s MM algorithm to the penalty function, which yields their corresponding MM functions: M ǫ n (β, h) and nq λn (|β|) β ǫ+|β| , respectively, where the explicit form of M ǫ n (β, h) depends on the specific model form under study and the constant ǫ stands for a small perturbation, which we take to be 10 −6 in our simulation studies, see (12) below for an example. Accordingly, the penalized estimator β = ( β 1 , . . . , β p ) T approximately satisfies: where the product in the last term of (9) denotes the component-wise product. It is noteworthy that M ǫ n ( β, h) = M n ( β, h) when M n is a smooth function. 
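As a concrete reference for the two penalty ingredients just discussed, the following minimal NumPy sketch implements the SCAD derivative of Fan and Li (2001) (whose closed form was garbled above) and the ε-perturbed MM surrogate of Hunter and Li (2005); the function names and the handling of the factor n (absorbed by the caller) are our choices, not the paper's.

```python
import numpy as np

def q_scad(theta, lam, a=3.7):
    # SCAD derivative (Fan and Li, 2001) for theta >= 0:
    #   q(theta) = lam                          if theta <= lam,
    #              (a*lam - theta)_+ / (a - 1)  if theta >  lam,
    # so q vanishes for theta >= a*lam: large coefficients are left unpenalized,
    # while small ones receive the full Lasso-like penalty lam.
    theta = np.asarray(theta, dtype=float)
    return np.where(theta <= lam, lam,
                    np.maximum(a * lam - theta, 0.0) / (a - 1.0))

def mm_penalty_gradient(beta, lam, eps=1e-6, a=3.7):
    # Hunter & Li (2005) MM surrogate: q(|b_j|) * b_j / (eps + |b_j|) is a smooth
    # stand-in for q(|b_j|) * sgn(b_j); eps = 1e-6 as in the simulation studies.
    ab = np.abs(beta)
    return q_scad(ab, lam, a) * beta / (eps + ab)
```

In the Newton-type iteration described next, the diagonal matrix diag{q_λn(|β_j|)/(ε + |β_j|)} supplies the curvature of this surrogate.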
To obtain β, we employ the concept of the Newton-Raphson algorithm to the function U ǫ n (β, h), which yields the following iterative algorithm: where The above algorithm is iterated until certain stopping criterion is met, for example, || β (k+1) − β (k) || ≤ 10 −4 . In addition, any coefficient sufficiently small is suppressed to zero, i.e., if | β j | ≤ 10 −4 upon convergence, then the estimator of this coefficient is set to be exactly zero. It is noteworthy that h in the iterative algorithm is updated along with β (k) . Finally, we select the tuning parameter λ n by minimizing a Bayesian Information Criterion (Schwarz, 1978), where L n ( β, h) is the loss function that leads to M n ( β, h) and the effective number of parameters is For the sake of illustration, we revisit Example 1 by briefly presenting the estimating equation and its relevant quantities. Based on equations (3) and (4), the penalized estimator of partially linear quantile regression satisfies the following equation, Note that to estimate h(W, β), the minimization of the objective function in (5) can be solved using existing software packages, for example, the quantile regression package in R. Furthermore, the non-penalized MM function is Numerical results In this section, we use the SCAD penalty function defined in (8) with a = 3.7 for both simulations and real data analyses. Monte Carlo simulated examples To evaluate the finite sample performance of the proposed method, we first consider the partially linear quantile regression model given in Example 1, where β 0 = (3, 1.5, 0, 0, 2, 0, 0, 0) T and ǫ i is the random error. As a result, the number of nonzero coefficients is 3. Furthermore, the vectors X i are generated from a multivariate normal distribution with mean 0 and an AR-1 correlation matrix with the auto-correlation coefficient 0.5. The covariates W i are simulated from a uniform (0,1) distribution, and they are independent of X i and ǫ i . Moreover, we consider two nonparametric functions: h 0 (w) = 2 sin(4πw) adapted from Fan and Huang (2005) and h 0 (w) = 16w(1 − w) − 2 adapted from Li and Liang (2008); two values for σ: 1 and 3; two sample sizes: n = 200 and 400; three different quantile levels: τ = 0.25, 0.5, 0.75, and four error distributions of ǫ i : (1) the standard normal distribution, (2) the t distribution with 3 degrees of freedom, (3) the mixture normal distribution with heavy tails: 0.9N (0, 1) + 0.1N (0, 10 2 ), and (4) the Gamma(2,2) distribution. We standardize ǫ i such that it satisfies P (ǫ i ≤ 0|X i , W i ) = τ for a given quantile level τ of interest. For each of the above settings, a total of 500 realizations are conducted. To assess the model selection properties, we report the average number of nonzero coefficients that are correctly estimated to be nonzero (labeled 'C'), the average number of zero coefficients that are incorrectly estimated to be nonzero (labeled 'I'), and the proportion of the selected model being underfitted (missing any significant variables, labeled 'UF'), correctly fitted (being the exact subset model, labeled 'CF') and overfitted (including all significant variables and some noise variables, labeled 'OF'). To examine the estimation accuracy, we report the mean squared error (MSE), 500 −1 500 m=1 || β (m) − β|| 2 , where β (m) is the estimate from the mth realization. As a benchmark, we also compute the mean squared error of the oracle estimate (in parentheses), which is the un-penalized quantile estimate of the true model. 
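For reference, the sketch below generates one replicate of the first simulation design (β0 = (3, 1.5, 0, 0, 2, 0, 0, 0)ᵀ, AR-1 covariates with auto-correlation 0.5, h0(w) = 2 sin(4πw)), shown for the standard normal error only; recentring the error at its theoretical τ-quantile is our reading of the standardization P(ε ≤ 0 | X, W) = τ.

```python
import numpy as np
from scipy.stats import norm

def simulate_plqr(n=200, sigma=1.0, tau=0.5, seed=0):
    """One replicate of the partially linear quantile regression design."""
    rng = np.random.default_rng(seed)
    beta0 = np.array([3.0, 1.5, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
    p = beta0.size
    # AR-1 correlation matrix with auto-correlation coefficient 0.5
    Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
    X = rng.multivariate_normal(np.zeros(p), Sigma, size=n)
    W = rng.uniform(0.0, 1.0, size=n)          # independent of X and eps
    # standardize the error so that its tau-quantile is exactly zero
    eps = sigma * (rng.standard_normal(n) - norm.ppf(tau))
    Y = X @ beta0 + 2.0 * np.sin(4.0 * np.pi * W) + eps
    return X, W, Y
```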
When n = 200 and 400, Tables 1 and 2, respectively, present the results for the partially linear quantile regression model with the nonparametric function h 0 (w) = 2 sin(4πw). We observe the following important findings. (i) As the sample size gets larger, MSE becomes smaller and approaches that of the oracle estimate, which is consistent with the theoretical finding. When the signal gets stronger (i.e., σ decreases from 3 to 1), the measurements of MSE, I, UF and OF decrease and those of C and CF increase as expected. (ii) In the symmetric distributions, which are standard normal, t 3 , and mixture, it is not surprising that τ = 0.5 yields better performance than τ = 0.25 and τ = 0.75 in terms of all measurements. In the positively skewed Gamma(2,2) distribution, it is also sensible that τ = 0.25 outperforms τ = 0.5 and τ = 0.75. It is noteworthy that the proportion of underfitted models is high for the Gamma(2,2) distribution with σ = 3 and τ = 0.75. This is because the signal is too weak in this case, due to a large variance and skewness. Because the simulations with the nonparametric function h 0 (w) = 16w(1 − w) − 2 exhibit similar results, we do not present them here to save space. To further illustrate the proposed method, we next generate random data from a partially linear mean regression model with missing covariates given in Example 3, where the ǫ i are independently generated from a N (0, 1) distribution and β 0 is the same as that in (13). The variables X i and W i are also generated from the same distributions as in the previous example. Moreover, h 0 (w), n, and σ are defined as above. Let δ i = 1 if X i is observed; and δ i = 0 otherwise. Then, consider the case where the covariates X i are missing at random in the sense that π(Y i , Subsequently, we employ logistic regression to generate the missing data indicators: To assess the sensitivity of parameter estimates against the missing rate, we study the following four cases: Case 1: (γ 0 , γ 1 , γ 2 ) = (1, 1, 2); Case 2: (γ 0 , γ 1 , γ 2 ) = (3, 1, 2); Case 3: (γ 0 , γ 1 , γ 2 ) = (6, 1, 2); and Case 4: (γ 0 , γ 1 , γ 2 ) = (8, 1, 2). The average missing rates are approximately 0.35, 0.25, 0.10 and 0.05, respectively. Based on the simulated data from each of the four cases, we are able to estimate (γ 0 , γ 1 , γ 2 ) from the above logistic regression model and then get π i . Since simulation settings lead to E((X − E(X))ǫ|Y, W ) = 0, we could follow Liang et al. (2004)'s comment and use the first part of function Φ defined after equation (7), together with the estimation process of Section 3.2, to obtain the penalized estimates. Finally, the tuning parameter is selected by minimizing BIC(λ n ) in equation (11). When h 0 (w) = 2 sin(4πw), Table 3 indicates that MSE decreases and approaches that of the oracle estimate when the sample size becomes large, which confirms the theoretical result. It is also not surprising that the measurements of MSE, I, UF, and OF decrease and C and CF increase as σ decreases from 3 to 1. Since the missing rate decreases from Case 1 to Case 4, it is sensible that Case 4 performs the best while Case 1 performs the worst in terms of all assessing measures. Moreover, the nonparametric function, h 0 (w) = 16w(1 − w) − 2, yields similar findings, which we omit here to save space. In summary, our proposed estimates perform well for simultaneous estimation and variable selection. 
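For completeness, the missing-data mechanism of the second design can be sketched in the same way; the exact linear predictor of the logistic model is truncated in the source, so the form used below is only an assumption, chosen so that a larger γ0 raises the observation probability and lowers the missing rate, as in Cases 1-4.

```python
import numpy as np
from scipy.special import expit

def mar_indicators(Y, W, gamma, seed=0):
    # pi_i = P(delta_i = 1 | Y_i, W_i): probability that X_i is observed.
    # The linear predictor g0 + g1*Y + g2*W is an ASSUMPTION (truncated in the
    # source); only the parameter cases (1,1,2), (3,1,2), (6,1,2), (8,1,2) are
    # taken from the text.
    g0, g1, g2 = gamma
    pi = expit(g0 + g1 * Y + g2 * W)
    rng = np.random.default_rng(seed)
    delta = (rng.uniform(size=len(Y)) < pi).astype(int)
    return delta, pi
```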
A real example To demonstrate the practical usefulness of the proposed method, we consider the Female Labor Supply data collected in East Germany that has been analyzed by Fan, Härdle and Mammen (1998). The data set consists of 607 observations, and the response variable y is the 'wage per hour'. There are eight explanatory variables: x 1 is the number of working hours in a week (HRS); x 2 is the 'Treiman prestige index' of the woman's job (PRTG); x 3 is the monthly net income of the woman's husband (HUS); x 4 and x 5 are dummy variables for the woman's education (EDU): x 4 = 1 if the woman received between 13 and 16 years of education, and x 4 = 0 otherwise (EDU 1 ); x 5 = 1 if the woman received at least 17 years of education, and x 5 = 0 otherwise (EDU 2 ); x 6 is a dummy variable for children (CLD): x 6 = 1 if the woman has children less than 16 years old, and x 6 = 0 otherwise; x 7 is the unemployment rate in the place where she lives (UNEM); and w is her age. Recently, Wang, Li and Tsai (2007) employed the penalized partially linear mean regression model to fit the data by including a nonparametric component w and seven linear main effects together with some of the first-order interaction effects among x 1 , x 2 and x 3 . The covariates x 1 , x 2 , x 3 and x 7 were standardized. To further understand the relationship between the wage and other variables, we adopt the quantile regression model given in analytic Example 1, which could provide more comprehensive and insightful findings. To this end, we consider τ = 0.25, 0.5, and 0.75, which correspond to the responses of lower-paid females, middle-paid females, and well-paid females, respectively. After preliminary analyses, one observation with Age=60 is deleted because it is an outlier that has high leverage and low response. Then, we apply the five-fold cross validation method to choose smoothing bandwidths forĥ(w), which are t τ =0.25 = 7.63, t τ =0.5 = 4.13, t τ =0.75 = 4.56. Subsequently, we employ the BIC criterion to select the tuning parameters λ n , which are λ n,τ =0.25 = 0.061, λ n,τ =0.5 = 0.093, and λ n,τ =0.75 = 0.073. Accordingly, the penalized profile estimates are obtained. In addition, we adapt equation (4.1) of Hunter and Li (2005) to compute the standard errors of parameter estimates. Table 4 reports the penalized regression estimates and their standard errors that yield the following interesting results. (a) The associated coefficient estimates of x 1 , x 2 , x 2 1 , x 2 2 , and x 1 x 2 indicate that a unit increase in HRS has a larger negative impact on middle-paid females than on lower-paid females. In addition, it leads to a stronger negative effect on well-paid females when PRTG is at a higher level than when PRTG is at a lower level. In contrast, a unit increase in PRTG has a larger positive impact on middle-paid females than on lower-paid females. Moreover, it yields a smaller positive effect on wellpaid females when HRS is at a higher level than when HRS is at a lower level. (b) The associated coefficient estimates of x 4 (EDU 1 ) and x 5 (EDU 2 ) indicate that higher education usually yields a larger positive effect on well-paid females than on middle-paid and lower-paid females. (c) It is not surprising that variable x 6 (CLD) is not selected into the median regression, since it has not been included in the mean regression (see Wang, Li and Tsai, 2007). However, it is chosen into the quantile regression models with τ = 0.25 and τ = 0.75. 
The associated coefficient estimates indicate that, for well-paid females with young children, they are better motivated and have the ability to earn more; while, for lower-paid females with young children, their salaries are negatively affected possibly due to limited skills and time spent on child care. This result demonstrates that quantile regressions could provide more comprehensive findings than mean regression alone. (d) Two variables, x 3 (HUS) and x 7 (UNEM), are not selected in any of the quantile regression models. Hence, they do not appear to affect the hourly wage. Figure 1 depicts the estimated nonparametric functionsĥ(w) for all three quantile models. It indicates that the difference between starting wage at age 26 for well-paid versus middle-paid females is much smaller than that between middle-paid and lower-paid females. In addition, between ages 26 and 33, the rate of growth in wage of well-paid females increases much faster than that of middle-paid females. Afterward, these two groups exhibit similar rates of growth and decrease. Moreover, the rate of growth in wage of lower-paid females increases faster after age 48. This is because they have more time and experience to earn higher wages. In sum, the starting wage and the strong rate of growth in wage at earlier age play a significant role in females' lifetime earnings. Conclusion and discussions In this paper, we study a class of penalized profiled semiparametric estimating functions that are flexible enough to incorporate nonlinearity and non-smoothness. Hence, they cover various regression models, such as quantile regression, survival regression, and regression with missing data. Under very general conditions, we establish the oracle property of the resulting estimator for parametric components. The oracle property implies that the regularized estimator for the subvector of nonzero coefficients has the asymptotic variance as that of the estimator based on the unpenalized estimating equation when the true model is known a priori. Hence, when the moment condition in (1) comes from a semiparametric efficient score function, it is expected that the corresponding regularized estimator achieves the semiparametric efficiency bound for estimating the subvector of nonzero coefficients. For instance, consider the mean single-index regression model in Example 2, and let (x T 1i , x T 2i ) be a partition of X i corresponding to (β T 10 , β T 20 ). A direct calculation reveals that the regularized estimator with the SCAD penalty for β 10 has an asymptotic covariance matrix Γ −1 , and σ 2 (x 1i ) = E(ǫ 2 |x 1i ). When the error is homoscedastic, one then can apply Carroll et al. (1997) result and show that the proposed regularized estimator asymptotically achieves the semiparametric efficiency bound for estimating β 10 . For the partially linear quantile regression discussed in Example 1, one can use the semiparametric efficiency score derived in Section 5 of Lee (2003), which requires estimating the conditional error density function. In general, obtaining a semiparametric efficient estimator can be computationally cumbersome. For example, for the missing data problem discussed in Example 3, Liang et al. (2004) in their Section 4.1 pointed out that one needs to solve a complex integral equation to obtain the optimal weight for the semiparametric efficient score function. 
To further explore the proposed function, one could link the current work to ultrahigh dimensional analysis by incorporating the screening methods of Fan and Lv (2008), Wang (2009), Fan, Feng and Song (2011), and Liang, Wang and Tsai (2012). It is also of interest to extend the estimating function to nonlinear time series models (see Fan and Yao, 2003) and financial time series models (see Tsay, 2005). We believe that these efforts would broaden the usefulness of the penalized profiled semiparametric estimating function. Appendix B: Examination of Conditions (C4) & (C5) for Example 2 We consider the single-index mean regression model defined in (6). Assume that X ∈ R X , β ∈ B, and that both R X and B are compact subsets of R p . The true parameter value β 0 is assumed to be in the interior of B. Let T = {t : t = X T β, X ∈ R X , β ∈ B}; then T is a compact subset of R. We consider the following two classes of smooth functions: H = {h(t) : h(t) is twice continuously differentiable on T} and S = {S(X, β) : S(X, β) has continuous partial derivatives w.r.t. X ∈ R X and β ∈ B}.
Application of High Performance Liquid Chromatography for Identification of Mycobacterium spp Mycobacterium tuberculosis infects over one-third of the human population worldwide, causing nine million new cases of tuberculosis and two million deaths annually [3]. While members of the MTC cause more disease worldwide than any other bacteria [4], NTM are widespread in nature and, with some significant exceptions, are free-living saprophytes and opportunistic pathogens. Although considered to be non-pathogenic, NTM can pose a threat to humans, mainly in patients with underlying conditions such as AIDS or cancer, and there is an increasing awareness of their public-health importance, especially as nosocomial pathogens [5]. Introduction Since 1896, when Lehmann and Neumann described the bacterium responsible for causing tuberculosis and leprosy, about 150 species of Mycobacterium have been described. Except for Mycobacterium leprae, which does not grow in vitro, those species were classified in two distinct groups: species that belong to the Mycobacterium tuberculosis complex (MTC), and nontuberculous mycobacteria (NTM) [1,2]. Traditionally, identification of mycobacterial species was performed by culture methods based on phenotypic and biochemical characteristics. The principal disadvantage of this approach is that the evaluation is time-consuming; currently, genotypic evaluation is preferred for identifying mycobacterial species [7]. Different species can display distinct antibiotic resistances and require different prescriptions for treatment. For this reason, it is important to identify Mycobacterium species rapidly and accurately [8,9]. Complex high-molecular-weight β-hydroxy fatty acids with a 22- or 24-carbon alkyl chain at the α-position are structural characteristics of mycolic acids (MAs), a type of fatty acid found in the Mycobacterium spp. cell wall. Several methods of fatty-acid analysis have indicated that MAs are species- or group-specific [10]. High-performance liquid chromatography (HPLC) analysis of MAs is a reliable method for the detection of mycobacteria, because the method is rapid and reproducible and because the MA elution spectrum observed for each mycobacterial species has generally been found to be unique, except for two species (M. bovis and M. tuberculosis) that share the same spectrum pattern [11,12]. The HPLC method has been considered a standard test for chemotaxonomic classification and rapid identification of Mycobacterium species by the Centers for Disease Control and Prevention (CDC) since 1990, and has been reported to be more than 96% accurate compared with DNA probe tests [6]. Even though HPLC is considered one of the most reliable and cost-effective tools for the rapid identification of Mycobacterium spp.
isolated in culture based on the presence of different MAs [13], and although the method is well described and standardized [14], the methodology can be affected by several factors and must be optimized in accordance with local laboratory capabilities in order to ensure accurate diagnosis. In this review, we present the procedures for saponification, extraction (chloroform), derivatization (p-bromophenacyl), separation (C18 column with a gradient of methanol and methylene chloride) and detection (ultraviolet spectrophotometry) of MAs. We also explore the importance of building a library of pattern chromatograms for successful identification of clinical samples, based on comparison of the relative retention times (RRT) of the chromatogram patterns with those obtained from reference strains and with those available in external databases. HPLC is necessary for the separation of MAs because their large size and complexity require the use of different columns and solvents. Initial methods required manual interpretation of chromatograms, with the eventual development of automated systems. Mycobacterium species and mycolic acids The analysis of lipid fractions has contributed significantly to the knowledge of Mycobacterium species. The abundance of lipid constituents in mycobacterial cell walls made them classic candidates for early chemical investigations [6,15]. MAs, 2-alkyl, 3-hydroxy long-chain fatty acids, are the hallmark of the cell envelope of Mycobacterium tuberculosis (Figure 1). They are found either unbound, extractable with organic solvents (as esters of trehalose or glycerol), or esterifying the terminal pentaarabinofuranosyl units of arabinogalactan, the polysaccharide that, together with peptidoglycan, forms the insoluble cell wall skeleton [16]. Both forms presumably play a crucial role in the remarkable architecture and impermeability of the cell envelope, also called the mycomembrane [17-19]. Substances structurally similar to MAs have been found in all mycobacterial species and in related genera, with very few exceptions (e.g., Corynebacterium amycolatum and Corynebacterium kroppenstedtii). The identification of MA structures has been addressed through the application of analytical techniques such as thin-layer chromatography (TLC), gas chromatography (GC), HPLC, mass spectrometry, and nuclear magnetic resonance spectroscopy. Based on their structural variability and complexity, MAs have largely been used as taxonomic markers [17]. These large fatty acids contain a variety of functional groups and can vary in both qualitative and quantitative characteristics between species. This variety provides the basis for the separation and identification of a large number of mycobacterial species using HPLC. High performance liquid chromatography methodology Reverse-phase high-performance liquid chromatography of MA esters has been demonstrated to be a rapid, reproducible, species-specific method for the identification of mycobacterial species. This method is also relatively inexpensive and has been found to be a more rapid laboratory technique than the use of commercial nucleic acid probes [20]. Different methods have been developed for the detection of mycobacteria in clinical samples (e.g., blood, sputum), but they can also be applied to detection in other sources such as water [21] and milk [22].
Standard procedures for HPLC identification of mycobacterial species, and the steps most commonly used by different researchers, are shown in Figure 2 and Table 1. Bacterial culture HPLC still requires initial culture of isolates on solid medium before analysis. This can be problematic because the slow growth rate of mycobacteria delays full identification and leaves treating physicians with little useful information after the initial report of an acid-fast bacilli (AFB)-positive broth culture [23]. Identification is achieved when mycobacteria are grown under standardized culture medium conditions, such as a Lowenstein-Jensen (L-J) slant, which may be supplemented with additional growth factors for those strains that are unable to grow on L-J. A carbol fuchsin/phenol or fluorochrome stain is performed to verify the presence of AFB. Another common solid medium used for mycobacterial species is Middlebrook 7H10 or 7H11 at 35-37 °C. Currently available databases incorporate mycobacterial species that require different growth conditions, such as Mycobacterium haemophilum and Mycobacterium marinum (30 °C) [24]. According to the Brazilian National Manual for the Laboratory Surveillance of Tuberculosis and other Mycobacteria [1], mycobacterial strains are cultured on L-J medium at 35 °C, except for Mycobacterium bovis, which is grown on Stonebrink medium. Recently, Buchan et al. [23] explored the use of broth culture for mycobacterial species as an alternative to solid medium and demonstrated rapid and accurate identification of mycobacteria to the species level from solid medium (7H11) or directly from broth culture such as Myco broth. It is important to remark that culturing of Mycobacterium tuberculosis must be performed under special laboratory conditions (Biosafety Level 3) and must follow appropriate guidelines for the use and handling of pathogenic microorganisms [25]. Saponification The autoclaving-saponification step in the HPLC procedure is performed for two reasons: it frees the MAs, and it kills the mycobacteria, assuring laboratory safety. This step is also important because it determines the amount of MAs that will be extracted. MAs are covalently linked to the cell wall arabinogalactan matrix. Removal of the MAs requires saponification with potassium hydroxide (50% w/v), which is often performed in an autoclave to accelerate the process and provide for the safety of laboratory personnel working with Biosafety Level III mycobacterial species. Once autoclaved, the organisms are killed by the procedure and the mycolic acids are released from the cell wall [24]. The standard CDC protocol for HPLC identification of mycobacteria suggests transferring 1-2 loops of bacterial culture to a glass tube (13 by 100 mm) and adding 2 mL of methanolic saponification reagent (25% potassium hydroxide in 50% methanol). The tube is capped tightly, homogenised and autoclaved for 1 h at 121 °C and 15 psi [14]. Extraction MAs exist in the cell in two basic forms: covalently bound to the cell wall, and loosely associated with an insoluble matrix, esterified to a variety of carbohydrate-containing molecules. Treatment of intact cells with mixtures of chloroform and methanol is suitable for extracting the smaller quantity of non-covalently attached mycolates [16]. Once autoclaving has been completed, samples are cooled to room temperature, acidified, and extracted into chloroform.
Free MAs are extracted by acidifying with 1.5 mL of a 50% (v/v) solution of concentrated HCl in H2O and adding 2 mL of chloroform. The chloroform layer is dried under air at 80-100 °C, and 2 mg of potassium bicarbonate is added [14]. Derivatization The preparation from the extraction step is resuspended in 1.0 mL of chloroform, and a derivatization reagent (p-bromophenacyl bromide) is added. Derivatization is completed in a water bath at 80-100 °C for 20 min. The tubes are cooled, the mixture is acidified with 1 mL of the acidification solution (concentrated HCl and H2O; 1:1, v/v), and 1 mL of methanol is added. After the solution is thoroughly mixed, the bottom chloroform layer is transferred to a glass tube and evaporated to dryness. Samples are resuspended in 50 µL of methylene chloride before analysis. HPLC conditions MAs are analyzed using an HPLC apparatus with gradient elution and a UV detector set at 260 nm. Samples are separated on a C-18 reverse-phase column. The mobile phase is a mixture of methanol and methylene chloride at a flow rate of 2 mL min−1. Several authors have modified the CDC protocol for HPLC identification of MAs. Du et al. [9] tested a column with dimensions (15.0 cm × 4.6 mm, 5 µm) and an elution program (30 min run time and 1.5 mL min−1 flow rate) different from the CDC specifications [14]; however, they obtained chromatograms quite similar to those from the CDC protocol. On the other hand, Figueiredo et al. [22] used a C-18 column 33% longer than that of the CDC protocol (7.5 cm) and increased the run time to 20 min. With these changes they observed superior resolution, and the adapted protocol could be an alternative for discriminating between species with homologous HPLC chromatogram patterns. Special care must be taken when manual injection is performed. It is recommended that the syringe be cleaned at least five times with HPLC-grade methylene chloride and that the injection loop be flushed once with 1 mL of the mobile phase solvent; it is also recommended that a blank injection be run between samples when the preceding MA signal is high [27]. Identification of mycobacteria species There is a wide range of structures and also of concentrations of types or classes of MAs (α, methoxy, keto, epoxy mycolates, etc.) among mycobacterial species. The HPLC methodology is unable to separate all the homologous series of MAs, and for this reason the chemical composition of the chromatogram components cannot be precisely identified. Although the individual mycolates cannot be identified, this is not necessary for the identification of mycobacteria, since a species-specific chromatographic pattern is generated [10,28]. In order to identify unknown mycobacterial specimens using HPLC, the laboratory maintains chromatograms of mycobacteria commonly seen in the laboratory. HPLC profiles of unknown mycobacteria are compared to the patterns contained in this spectral library. The chromatographic pattern for each strain is examined for differences in the heights of pairs of peaks. HPLC patterns are grouped according to species, and the values calculated for each ratio are combined, sorted in numerical order, and examined for their ability to discriminate species, using the range of the relative standard deviation (RSD) of the absolute retention times (ART) and the relative retention times (RRT). RRTs are adjusted by comparison with external mycobacterial MA peaks [29].
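As a toy illustration of this RRT-based comparison, the sketch below normalizes absolute retention times against a standard peak and scores an unknown chromatogram against library patterns; the peak lists, tolerance and scoring rule are hypothetical and are not taken from the CDC protocol.

```python
import numpy as np

def relative_retention_times(peak_times, reference_time):
    # RRT = absolute retention time / retention time of the standard peak
    return np.asarray(peak_times, dtype=float) / reference_time

def match_species(unknown_rrt, library, tol=0.02):
    """Score an unknown MA pattern against a library of RRT patterns.

    library: dict mapping species name -> array of reference RRTs.
    A peak matches if some library RRT lies within `tol`; the species with
    the highest fraction of matched peaks wins (an illustrative rule only).
    """
    best, best_score = None, -1.0
    for species, ref in library.items():
        ref = np.asarray(ref, dtype=float)
        hits = sum(np.min(np.abs(ref - r)) <= tol for r in unknown_rrt)
        score = hits / max(len(unknown_rrt), 1)
        if score > best_score:
            best, best_score = species, score
    return best, best_score
```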
The Mycobacterium intracellulare mycolic acid fingerprint is usually used as a reference standard to help differentiate Mycobacterium spp. The visual pattern recognition method employs only chromatographic criteria, although, when available, other identification test results should be included in the decision-making process. The initial step for identifying a species is determining the overall complexity and number of MA peak clusters. These clusters may consist of a few peaks or many peaks and are further defined as single-, double-, and triple- or multiple-peak clusters [6]. The amount of MA present is related to the amount of light emitted, and the structure of the MA is related to its elution time off the column. Pattern recognition is performed by visual comparison of sample results with MA patterns from reference species of known mycobacteria; however, correct pattern interpretation requires training. For that reason, computer-assisted pattern recognition software, which utilizes retention time, peak width, and peak amount to provide a peak name that can then be compared to a library database, was developed [24]. Chromatogram profile database from Mycobacterium spp. Because the interpretation of chromatographic data can become tedious and time-consuming for laboratories that process large numbers of samples, some studies recommend the construction of a computer-based file (library) of Mycobacterium species. This library is to be used in conjunction with commercially available pattern recognition software packages such as Pirouette [20]; more recently, the Sherlock Mycobacterial Identification System (SMIS; MIDI, Inc.) has been developed for the rapid, computer-assisted identification of mycobacterial species based on the separation and quantification of MAs using HPLC technology [30]. Figueiredo et al. [22] grouped 35 strains of Mycobacterium species according to a fingerprint library into three general patterns (single-, double- and triple-peak clusters) and divided them into subgroups according to the chromatogram characteristics of the MA derivatives (Figure 3; the reference strain is ATCC 19210, and asterisked peaks show a high degree of separation, appearing as a "double peak", named according to [10]). Single-peak cluster patterns Members of the MTC such as Mycobacterium bovis (Figure 4) and Mycobacterium tuberculosis, and other species such as Mycobacterium asiaticum, Mycobacterium gordonae chromotype I and Mycobacterium kansasii, showed chromatogram patterns with a single, late-emerging peak cluster. Mycobacterial species within the same taxonomic group, such as the M. tuberculosis complex species, show very similar chromatographic patterns, since they share the same MA structural types. A means to discriminate between closely related species with similar HPLC chromatogram profiles would be to further distinguish the MAs that might be present in the same peak. Considerable heterogeneity exists within a particular class of MAs, considering the chain length of individual acids (which can show a mixture of up to 100 structural isomers of α mycolates) and the potential range of heterogeneity in each species or subspecies [16]. Double-peak cluster patterns Mycobacterium chitae, Mycobacterium porcinum and Mycobacterium agri are representatives of this group, which displays late-emerging and close-together clusters of peaks. Mycobacterium fortuitum, Mycobacterium peregrinum and Mycobacterium smegmatis are members of the Mycobacterium fortuitum complex and displayed very similar chromatogram patterns.
Therefore, the HPLC results obtained for these species provide insufficient information to distinguish between them. The Mycobacterium chelonae-Mycobacterium abscessus taxonomic group has undergone several revisions following the identification of newly recognized species such as Mycobacterium massiliense, which was proposed based mainly on genotypic analysis. As expected for closely related species, all the members of this group showed a single chromatogram pattern [31,32]. Triple-peak cluster patterns Mycobacterium simiae could be included in this group. The last group included Mycobacterium chubuense, Mycobacterium obuense, Mycobacterium parafortuitum and Mycobacterium vaccae, which showed early peak clusters emerging before 10.0 min. Application of high performance liquid chromatography According to Figueiredo et al. [22], the identification of mycobacteria by HPLC is performed by comparing the fingerprint patterns obtained from each clinical sample with those from the reference strains. The first criterion for identifying Mycobacterium spp. is to match the overall complexity and number of MA peak clusters: single, double and triple peaks. The second criterion is the range of elution times between multiple peak groups, where the positions of peaks are determined as RRTs, adjusted by comparison to an external mycobacterial MA peak. To increase reliability, the relationships between the heights of the major diagnostic peaks are determined and compared to those from reference strains. Mondragón-Barreto et al. [33] describe the advantages of the HPLC method for Mycobacterium identification, but if results are unclear (problems arise principally from inadequate HPLC reference patterns), the isolate should be analyzed using PCR-RFLP. Another interesting application of MA identification by HPLC is the estimation of bacterial growth [27]. A linear relationship was described between the total area under the MA chromatographic peaks of a culture of Mycobacterium tuberculosis and log CFU per milliliter, suggesting the possibility of using this result as a good estimator of mycobacterial growth. Conclusion The HPLC procedure for MA separation is a rapid, reproducible and easy way to identify mycobacteria and can be executed by many laboratories, making this approach one of the most appropriate methods to distinguish among the species. A customized database, using locally adapted protocols, must be developed in order to obtain chromatogram spectra from reference strains under the new analytical conditions, accrediting the local methodology and allowing accurate analysis of clinical samples. Although HPLC equipment is too expensive for many laboratories, this system has proven useful for MA identification. It is recommended that HPLC be combined with other techniques, such as PCR, as a confirmatory diagnostic for the identification of clinical isolates where the matching of chromatogram fingerprints fails or is inconclusive in differentiating species within the same taxonomic group, such as the M. tuberculosis complex species.
Volterra Kernel Estimation of White Light LEDs in the Time Domain In this paper, we present a time domain method for extracting coefficients of nonlinear Volterra-series kernels for white light-emitting diodes (LED) used both for illumination and visible light communications. We show that this method may have several advantages over the thus far more popular frequency domain method. We successfully apply the measured kernel coefficients up to the 3rd order for the modeling of nonlinear distortion impact on advanced modulation formats: pulse amplitude modulation, carrierless amplitude phase and orthogonal frequency division multiplexing. The impact of blue filtering on dynamic nonlinearity is also presented. Introduction Light-Emitting Diode (LED) communication is an attractive and low-cost solution for free space communication systems [1] and transmission in polymer optical fibers (POF) [2]. Recently, due to the growing market share of LEDs in lighting applications, they have attracted increased attention in the context of visible light communications (VLC). The idea of VLC is to use lighting LEDs both for illumination and for distribution of a high-speed data signal. VLC is now considered a technology complementary to 5G mobile systems, as it can provide additional high-capacity communications in the so-called optical attocells, where the receiver is directly illuminated by the white light source [3]. There are two types of LEDs used for lighting applications. The first is a blue chip covered with a phosphorous layer, serving as a blue-to-yellow color converter. Both colors combined form light perceived as white. Unfortunately, the phosphorous layer has a slower time response and typically limits the bandwidth of the LED to a few MHz. This can be managed with receiver blue filtering [4] or digital equalization. Unfortunately, both solutions impose either power or noise enhancement penalties [5,6]. This problem is avoided in the second kind of lighting LED, which consists of three or more chips of different colors (e.g., RGB). In addition, as the different chips can be modulated independently, wavelength division multiplexing can be used, which increases the data rate by a factor equal to the number of channels. Regardless of the LED type, LEDs are nonlinear devices, i.e., the emitted optical signal power is a nonlinear function of the modulating current. This nonlinearity can have both static and dynamic characteristics. The former is mainly caused by efficiency droop, i.e., decreased internal quantum efficiency with increasing injection currents [7]. The latter is caused by differences in carrier lifetimes, depending on the modulating current [8]. Unfortunately, unlike inter-symbol interference (ISI), nonlinearity cannot be overcome by well-established methods of linear equalization and may become the major transmission-rate limiting factor, especially for spectrally efficient advanced modulation formats, which require a high signal-to-noise ratio (SNR) at the receiver. Accurate description of LED nonlinearity is a vital issue, as it is necessary to estimate the information capacity of the link, and it can help evaluate the performance of different modulations. While the nature of the nonlinearity has already been described by carrier rate equations [8], this simple description does not accurately model the dynamics of the device [9]. Therefore, a description of LED nonlinearity as a black-box system could have greater significance for LED transmission system modeling than theoretical models based on carrier transport [9].
In our approach, the LED input/output relation is represented with a Volterra series, which is a general description of nonlinear system behavior. The Volterra model has already been used for LED nonlinearity identification [9-11] and has also successfully been applied to LED nonlinearity compensation for single-carrier [12] and multicarrier modulation formats [13]. Theory To properly describe LEDs with the Volterra series, the series coefficients need to be measured first. Here, two methods can be applied: the frequency domain and the time domain methods. In the frequency domain method, the device is probed with a current waveform $i(t) = I_1 \cos(2\pi f_1 t) + \ldots + I_n \cos(2\pi f_n t)$ and the amplitudes of harmonics generated at frequencies $f_1 \pm f_2 \pm \ldots \pm f_n$ are measured, which can be related to the n-th order Volterra kernel $|H_n(\pm f_1, \ldots, \pm f_n)|$. In this method, n tones are needed to estimate the kernel up to the n-th order. This approach has been applied to estimate the Volterra kernel of LEDs up to the 2nd [9,10] and 3rd [11] order. The frequency domain method has some limitations, though. Firstly, only the magnitude response of the kernel can be measured. Secondly, it cannot measure the response at the kernel diagonals, e.g., $H_2(f_1, -f_1)$, as the intermodulation product of this component falls at dc. Thirdly, for some frequencies of the probing harmonics, intermodulation products of the 2nd and higher orders fall at the same frequencies. For example, if two tones at frequencies $f_1$ and $2f_1$ are used, the 2nd order intermodulation products fall at $f_1$ and $3f_1$, and the third order products at $3f_1$, $4f_1$ and $6f_1$. It is readily visible that the tone at $f_1$ is disturbed by the presence of the probing signal, while the tone at $3f_1$ is interfered with by the presence of the 3rd order distortion. To avoid this, the kernels need to be separated by linear regression. In this method, the probing is done with tones of various amplitudes, and the dependence of the amplitudes of the intermodulation products of different orders on the amplitude of the probing signal is exploited as an additional degree of freedom in separating the kernels [12]. Unfortunately, this procedure highly complicates the measurement. The time domain method is based on the relation between the input training signal x(t) and the output signal y(t), as described by the time-domain Volterra series [12]

$$y(t) = \sum_{m=1}^{M} \int \cdots \int h_m(\tau_1, \ldots, \tau_m) \prod_{j=1}^{m} x(t - \tau_j)\, d\tau_j + n(t), \qquad (2)$$

where $h_m(\tau_1, \ldots, \tau_m)$ is the m-th order Volterra kernel in the time domain at time delays $\tau_1, \ldots, \tau_m$, $n(t)$ is an additive noise signal, and M is the number of kernel orders. We assume the probing signal to be an M-ary filtered pulse amplitude modulation (PAM) waveform at baud rate r and symbol interval $T_s = 1/r$. The time delays in (2) are then quantized at discrete multiples of $T_s$, and the equation can be transformed into the discrete form

$$y(n) = \sum_{m=1}^{M} \sum_{i_1=0}^{M_m-1} \cdots \sum_{i_m=0}^{M_m-1} h_m(i_1, \ldots, i_m) \prod_{j=1}^{m} x(n - i_j) + n(n), \qquad (3)$$

where $M_m$ is the memory length of the m-th order. As a prerequisite of the estimation, the maximum kernel order has to be assumed. It is a tradeoff between estimation accuracy and complexity, as the number of series terms grows roughly as the memory length of the kernel to the power of the kernel order. However, the symmetry of the kernel coefficients can be exploited to reduce the number of terms that need to be estimated. For example, from (3) it is readily visible that $h_2(i_1, i_2) = h_2(i_2, i_1)$. The total number of irredundant terms is $M_1$, $M_2(M_2+1)/2$ and $M_3(M_3+1)(M_3+2)/6$ in the 1st, 2nd, and 3rd order, respectively [14].
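These counts are simply combinations with repetition, $\binom{M_m + m - 1}{m}$; the quick sketch below reproduces them (and the growth plotted in Figure 1) for the memory lengths used later for the phosphorescent LED. The helper name is ours.

```python
from math import comb

def n_irredundant(memory, order):
    # Irredundant m-th order Volterra coefficients for memory M:
    # C(M + m - 1, m), i.e., M, M(M+1)/2, M(M+1)(M+2)/6 for m = 1, 2, 3.
    return comb(memory + order - 1, order)

# Memory lengths (160, 50, 20) used for the 1st/2nd/3rd orders in Section 4:
print([n_irredundant(m, o) for o, m in enumerate((160, 50, 20), start=1)])
# -> [160, 1275, 1540]
```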
After some rearrangements, (3) can be expressed in matrix form as

$$Y(n) = X H + N(n), \qquad (4)$$

where $Y(n) = [y(n), y(n-1), \ldots, y(n-L)]^T$, $N(n) = [n(n), n(n-1), \ldots, n(n-L)]^T$, H is a vector stacking the irredundant Volterra coefficients of all orders (5), and $X = [X_1, X_2, \ldots]^T$ is the training sequence matrix whose blocks $X_m$ collect the corresponding combinations (products) of the training symbols x(n) (6), with L being the length of the training sequence. The coefficients H can be sought using the least squares (LS) solution [15]

$$\hat{H} = (X^T X)^{-1} X^T Y. \qquad (7)$$

It is noted that by doubling the training sequence length, the variance of the estimated coefficients is reduced by half. As an alternative to the LS method, recursive least squares (RLS) or least mean squares (LMS) adaptive algorithms may be applied to find H [16], which could help to avoid desynchronization if the transmitter and receiver are using different clocks. We can now briefly comment on the numerical complexity of the estimation. The number of irredundant terms for each order has been plotted in Figure 1 for varying values of the memory parameter. Clearly, the numerical complexity grows rapidly with the inclusion of higher-order nonlinearity. In addition, computation of each of the elements of the X matrix requires (O−1) multiplications, where O is the order number. Finally, the LS problem (7) is most efficiently solved by means of QR decomposition of the matrix X, which requires approximately $2L K^2$ operations, where K is the total number of estimated coefficients. The time domain coefficients can be transformed into the frequency domain using the Fourier transform [14]

$$H_m(f_1, \ldots, f_m) = \sum_{i_1=0}^{M_m-1} \cdots \sum_{i_m=0}^{M_m-1} h_m(i_1, \ldots, i_m)\, e^{-j 2\pi (f_1 i_1 + \cdots + f_m i_m) T_s}. \qquad (8)$$

The time domain method has certain advantages over the frequency domain method. As it yields complex-valued coefficients, it allows for the prediction of the signal at the output of the model for known input signals. It does not require estimation at different modulating current values to separate the overlapping products of different kernels. Finally, the measurement procedure is instant in a setup with a PAM signal generator and a digital storage oscilloscope (DSO). In the same manner as in the frequency domain method, the nonlinear distortions coming from higher-order terms not included in the estimation bias the estimated coefficients. Experimental Setup and Measurement Procedure The experimental setup for the time domain method requires a multilevel random signal generator and a digital storage oscilloscope (DSO) for recording the output signal. The setup schematic is shown in Figure 2. First, random symbols of the PAM-8 signal are generated offline in Matlab and upsampled by a factor of 2. Next, we apply a root raised cosine (RRC) filter with a 0.1 roll-off coefficient to shape the signal spectrum to quasi-rectangular. The choice of the roll-off is a tradeoff between spectrum flatness, total symbol duration and peak-to-average power ratio (PAPR), which increases for lower roll-off values. At this value, the filter spectrum is flat up to approx. 0.9r/2 (Figure 3). The generated PAM signal is fed into the AWG, which modulates the LED. The modulation index differs between LEDs and scenarios. After a short transmission in free space, the signal is photodetected and sampled in the DSO. In the case of white phosphorescent LEDs, blue filtering is applied at the receiver. Further processing in Matlab involves resampling to 2r, synchronization using the cross-correlation method, filtering with a matched RRC filter, averaging over all copies of the received sequence captured in one DSO frame (approx. 30) for additive Gaussian noise cancellation, downsampling to the symbol frequency, and LS estimation, as described above.
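The estimation step itself can be sketched in a few lines of NumPy (the authors' processing is in Matlab; this is our minimal Python equivalent, assuming the synchronization, matched filtering, averaging and downsampling described above have already been applied, and limited to the 1st and 2nd orders for brevity):

```python
import numpy as np

def build_regressors(x, M1, M2):
    """Training matrix X for the 1st- and 2nd-order terms of (3)-(4).

    Columns: x(n - i1) for 0 <= i1 < M1, followed by the products
    x(n - i1) * x(n - i2) for 0 <= i1 <= i2 < M2 (h2 symmetry exploited).
    """
    n0 = max(M1, M2) - 1                     # first sample with full memory
    N = len(x) - n0
    cols = [x[n0 - i1 : n0 - i1 + N] for i1 in range(M1)]
    for i1 in range(M2):
        for i2 in range(i1, M2):
            cols.append(x[n0 - i1 : n0 - i1 + N] * x[n0 - i2 : n0 - i2 + N])
    return np.column_stack(cols), n0

def estimate_kernels(x, y, M1, M2):
    X, n0 = build_regressors(x, M1, M2)
    H, *_ = np.linalg.lstsq(X, y[n0:], rcond=None)   # QR-based LS solve, cf. (7)
    h1, h2 = H[:M1], np.zeros((M2, M2))
    k = M1
    for i1 in range(M2):
        for i2 in range(i1, M2):
            # For i1 < i2 the fitted coefficient equals h2(i1,i2) + h2(i2,i1),
            # so halve it when unpacking into the symmetric kernel.
            v = H[k] if i1 == i2 else H[k] / 2.0
            h2[i1, i2] = h2[i2, i1] = v
            k += 1
    return h1, h2

# Frequency-domain view of the 2nd-order kernel, cf. (8) and Figure 5b:
# H2 = np.fft.fftshift(np.fft.fft2(h2))
```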
It is noted that in the applied method, the received signal was AC coupled, so the estimated kernel depends on the bias current of the LED. The applied time domain method also has some restrictions related to the spectral extent of the estimated kernel. As governed by the sampling theorem, the linear kernel (frequency response) can be measured up to $r/2$ (in addition, a 10% margin on the RRC response should be assumed). However, further restrictions apply to the higher-order terms. We illustrate this for the 2nd order kernel. Consider a LED probed with a single tone at frequency $f_1$ that generates a 2nd order harmonic at $2f_1$. For $f_1 > r/4$, this harmonic falls outside the RRC filter bandwidth at the receiver, and hence $H_2(f_1, f_1)$ cannot be measured. It is noted that the RRC filtering is necessary, as this harmonic would otherwise cause aliasing at the frequency $r - 2f_1$. Therefore, the 2nd order kernel can be fully measured only for $|f_1 + f_2| < r/2$. In general, the $M$-th order kernel can be estimated up to the frequency $r/(2M)$. We assume that the LED is the dominating source of both the bandwidth limitation and the nonlinearity in the system. The bandwidth of the detector was almost an order of magnitude higher, and the detector was placed at a distance from the transmitter at which it was far from saturation.

Phosphorescent White Light LED

Here, we tested an Osram LE UQ Q9WP LED. It is noted that the LED was equipped with a driving circuit with a current amplifier. The probing signal consisted of 20k symbols of PAM-8 modulation transmitted at 500 Mbaud. In the case of this LED, the modulation index was close to 100%. We assumed estimation with memory lengths (in symbols) of 160, 50 and 20 for the 1st, 2nd and 3rd nonlinearity orders, respectively. The impulse and frequency responses (linear kernel) are shown in Figure 4. The 6 dB (electrical) bandwidth of the device is on the order of 20 MHz. The 2nd order time kernel is shown in Figure 5a. It can be seen that most of the non-zero coefficients in the time domain are concentrated along the diagonal, and the maximum significant delay difference between $\tau_1$ and $\tau_2$ is on the order of 10 ns. The 2nd order kernel in the frequency domain is shown in Figure 5b. Most of the 2nd order distortion is present at the lowest frequencies; however, a significant amount of distortion is visible in the (−100, 100) MHz region. We attribute this distortion to the LED-driving circuit. For completeness, we have also shown the phase response of the 2nd order kernel (Figure 5c). In Figure 6, the $\mathbf{H}$ vector as defined by Equations (5)-(7) has been plotted (coefficient index on the x-axis) to demonstrate the relation in magnitude between the different nonlinearity orders. The amplitudes of the coefficients decrease with the kernel order (though their number obviously grows).

Applying the Measured Kernel to Predict LED Behavior for Advanced Modulation Formats

The measured Volterra kernels are universal in the sense that, once measured for PAM-8, they should predict the nonlinear distortion effect of this LED on different modulation formats under the same experimental conditions. To verify this, we transmitted signals of different advanced modulations: a PAM-4 signal, carrierless amplitude-phase (CAP)-16, and an orthogonal frequency division multiplexing (OFDM) signal with quadrature amplitude modulation (QAM)-4 at all subcarriers.
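For the 2nd order, Equation (8) reduces to a two-dimensional DFT of the time kernel, so frequency-domain pictures like Figure 5b,c can be produced with a standard FFT. A short sketch (ours; it assumes the kernel is stored as a symmetric $M_2 \times M_2$ array) that also masks the region where the kernel is not measurable:

```python
import numpy as np

def second_order_kernel_fft(h2, baud_rate):
    """2-D DFT of the 2nd order time kernel, per Equation (8).
    h2: (M2, M2) symmetric array sampled at Ts = 1/baud_rate."""
    H2 = np.fft.fftshift(np.fft.fft2(h2))
    f = np.fft.fftshift(np.fft.fftfreq(h2.shape[0], d=1.0 / baud_rate))
    # The kernel is only measurable where |f1 + f2| < r/2 (see text);
    # blank out the rest before plotting magnitude or phase.
    f1, f2 = np.meshgrid(f, f, indexing="ij")
    H2 = np.where(np.abs(f1 + f2) < baud_rate / 2, H2, np.nan)
    return f, H2
```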
Next, we averaged the received signals over the received copies to reduce the additive white noise and compared them with signals synthesized using the Volterra kernel estimated in Section 4.1. As a similarity indicator, we evaluated the variance of the error between the reconstructed signal $\hat{y}(n)$ and the received signal $y(n)$, $\sigma_e^2 = E[(y(n) - \hat{y}(n))^2]$. To avoid resampling errors, as the estimation was performed at 500 Mbaud, the sampling frequencies (or baud rates) of the predicted signals were 500, 250 or 125 Mbaud. The results are presented in Table 1. The highest reduction of the approximation error was obtained for the 500 Mbaud PAM-8 signal after including the 3rd order kernel (7 dB). This is not surprising, as this signal was used for the measurement, and only in this case was there perfect synchronization between the transmitter and receiver. For the remaining signals, the estimated kernel in the time domain can be time shifted by up to half a symbol period with respect to the measured signal, which impairs the reconstruction. In all cases, error reduction was observed, the largest part of it after including the 2nd order kernel. In one case (PAM-4, 500 Mbaud), the 3rd order kernel slightly increased the error.

In the next step, we compared eye diagrams of the received and reconstructed signals obtained after decision feedback equalizers (DFE) with 30 forward and 10 backward taps. The results for PAM-4 and CAP-16 are presented in Figures 7 and 8, respectively. The similarity between the eye diagrams and constellations is readily visible. It is noted that without the 2nd and 3rd order terms included in the reconstruction, the DFE was able to compensate the ISI completely, and the eye diagrams and constellations of the synthesized signals were perfectly undistorted. It cannot deal with nonlinear distortion, though, which is the main source of quality degradation in Figures 7 and 8.

Particularly interesting is the case of the OFDM signal, where the spectral distribution of the distortion can be extracted. We transmitted an OFDM signal loaded with QAM-4 modulation at all subcarriers. Upon reception and single-tap equalization, we estimated the signal to interference and noise ratio (SINR) distribution among the subcarriers. The transmitted signal had 500 symbols, the number of subcarriers was 128, and the cyclic prefix length was 30 samples to eliminate ISI. The frequency of the highest subcarrier was 125 MHz. As this time only 2 copies of the signal were captured in one oscilloscope frame, noise averaging was not performed; instead, additive noise at −22 dB with respect to the signal was added to the synthesized signal. In addition, we found that for OFDM, the power of the signal at the input to the Volterra kernel had to be increased by 8 dB with respect to the previous signals, to compensate for the PAPR of OFDM. The results are shown in Figure 9. As we can see, for the approximation with only the 1st order kernel, the SINR follows the frequency response of the link. By adding the 2nd and 3rd order kernels, we can almost perfectly model the effect of the nonlinearity on the OFDM signal over the whole spectrum of interest. This is despite the quite small error variance reduction indicated in Table 1.

Impact of Optical Filtering on White LED Nonlinearity

In receivers of phosphorescent white LED communication links, a blue filter is typically applied to cut off the light generated in the slow-response phosphorescent layer and increase the electrical bandwidth of the system [3].
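As a sketch of how the per-subcarrier SINR in Figure 9 can be computed (our illustration, not the authors' code), one complex tap per subcarrier is fitted by least squares over all OFDM symbols and the residual after equalization is treated as interference plus noise:

```python
import numpy as np

def sinr_per_subcarrier_db(tx, rx):
    """tx, rx: (num_ofdm_symbols, num_subcarriers) arrays of QAM symbols
    before and after the channel. Returns the SINR per subcarrier in dB."""
    # Single-tap LS channel estimate per subcarrier
    h = np.sum(np.conj(tx) * rx, axis=0) / np.sum(np.abs(tx) ** 2, axis=0)
    err = rx / h - tx                          # residual after equalization
    sig = np.mean(np.abs(tx) ** 2, axis=0)
    nint = np.mean(np.abs(err) ** 2, axis=0)   # noise + intermodulation
    return 10 * np.log10(sig / nint)
```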
Although the impact of blue filtering on the frequency response is well known [5], to the best of our knowledge it has never been studied in the context of the magnitude and distribution of nonlinear distortion. Here, we measured the Volterra kernel of a 1 W Luxeon white phosphorescent LED of warm color temperature. Measurements were taken with a blue filter and with a yellow filter at the receiver, both with their cut-off at 480 nm (the wavelength corresponding to the boundary between the blue and yellow bands), and also without any filter. The probing signal was transmitted at 100 Mbaud. The measured frequency responses are shown in Figure 10. The peak close to DC can be attributed to the slow yellow light, and the plateau between 10 and 40 MHz to the blue light. Please note that to study the nonlinearity of this particular LED, we had to apply an additional electrical amplifier on the modulating current (Mini-Circuits ZFL-1000H+), which cut out the spectrum from 0 to 2 MHz. The measured 2nd order Volterra kernels are shown (in the frequency domain) in Figure 11. The cross-like pattern where either frequency is close to DC is attributed to the effect of the mentioned amplifier. It is readily visible that, in a similar manner as the 1st order frequency response, the 2nd order kernel also depends on the optical filtering, with an increased impact of nonlinearity at low frequency when the blue filter is not applied at the receiver. Our study is not conclusive as to the origin of the low-frequency nonlinearity, which may be a result of the fluorescence process, or may simply stem from the higher power at low frequencies when the yellow filter or no filter is applied.

Conclusions

The Volterra series representation is a viable and practical method for describing the nonlinear behavior of white light LEDs in high-speed communication links. In this paper, we advocate estimating the LED's Volterra kernel in the time domain instead of the frequency domain. We have shown that this approach has several advantages, including a less complicated measurement procedure and full (complex-valued) information on the kernel coefficients, which can be applied to modeling the performance of advanced modulation formats in a VLC link. In particular, we have demonstrated that the Volterra kernel estimated using one modulation format (in our case PAM-8) can be successfully used to model the nonlinear behavior of the LED under other modulation types.
4,587.8
2018-03-29T00:00:00.000
[ "Engineering", "Physics" ]
Development and Field Evaluation of a Low-Cost Wireless Sensor Network System for Hydrological Monitoring of a Small Agricultural Watershed

Hydrological monitoring and real-time access to data are valuable for hydrological research and water resources management. In recent decades, rapid developments in digital technology, micro-electromechanical systems, low-power micro-sensing technologies and improved industrial manufacturing processes have made it possible to retrieve real-time data through Wireless Sensor Network (WSN) systems. In this study, a remotely operated, low-cost and robust WSN system was developed to monitor and collect real-time hydrologic data from a small agricultural watershed in the harsh weather conditions and upland rolling topography of Southern Ontario, Canada. The WSN system was assembled using off-the-shelf hardware components, and an open source operating system was used to minimize the cost. The developed system was rigorously tested in the laboratory and the field and found to be accurate and reliable for monitoring climatic and hydrologic parameters. Soil moisture and runoff data for 7 spring, 19 summer, and 19 fall season rainfall events over a period of more than two years were successfully collected in a small experimental agricultural watershed situated near Elora, Ontario, Canada. The developed WSN system can be readily extended to most hydrological monitoring applications, although it was explicitly tailored for a project focused on mapping the Variable Source Areas (VSAs) in a small agricultural watershed.

Introduction

Long-term, high-quality climatic and hydrological data are essential for hydrological research and the implementation of effective water management strategies at both field and watershed scale. Monitoring and collecting long-term data from remotely located watersheds are time-consuming and expensive, due to the need for frequent visits to the sites to maintain and monitor the instruments and to collect data [1]. Though this approach involves a significant amount of time and resources, it is imperative and valuable. Currently, a number of data acquisition technologies are used to obtain hydrological data. Accuracy, resolution, and scalability are some of the significant issues that need to be addressed in developing an efficient and robust hydrological monitoring system [2] [3]. Earlier techniques used analog networks in which a number of sensors were wired by cable to data loggers for hydrological monitoring. The need for cabling in the field increases costs and restricts the spatial size of the monitoring area [4] [5], whereas digital wireless networks can be deployed to collect long-term data at larger scale and resolution while maintaining robust and reliable network performance [6] [7] [8].

In recent years, the rapid development of WSN technology has created new opportunities for sensing, computing, and communication in a wide range of applications in science and engineering. WSNs integrate real-time sensing, computing, and communication, providing an efficient and cost-effective observation technique with capabilities for monitoring, gathering data, performing local computations and relaying the aggregated data [9] [10].
WSNs comprise a few to several "nodes" (known as motes in North America), where each node is connected to one or more sensors [11]. Each sensor node has four key components: 1) a microprocessor and ADC (analog-to-digital converter), 2) a transceiver and antenna, 3) a memory unit, and 4) external sensors [12]. An individual sensor node hosts a number of hard-wired sensors. Each node is wirelessly connected to other nodes, and finally to a central base station (Figure 1). A digital WSN comprising spatially distributed nodes connected to sensors communicates bi-directionally with the central location [13]. As WSNs do not require cables, they are cheaper and easier to install, in addition to requiring low maintenance. Flexibility, easy and rapid deployment, self-organization, high sensing reliability, and low cost make WSNs a promising technology for various applications [14] [15].

WSNs can be used with many diverse types of sensors, such as thermal, optical, acoustic, seismic, magnetic, infrared, pressure and radar [16]. Sensors used in WSNs convert physical parameters like temperature, soil moisture, pressure, light and speed into a signal and measure them electrically [17]. These sensors can monitor a wide variety of conditions, such as temperature, pressure, humidity, light, noise level, and the movement, speed, direction and size of an object [18] [19]. The widespread adoption of these devices, particularly for industrial applications, has made them extremely cost-effective [19] [20]. Sensor nodes can be used for different purposes, including event detection, continuous tracking and location sensing [21] [22] [23]. Currently, WSNs are extensively used in many real-world applications like security and surveillance, home and industrial automation, automobiles, medical applications, fire and pollution monitoring, flood forecasting, habitat monitoring, military applications, and hydrologic and environmental monitoring [24] [25] [26]. Recently, agricultural monitoring has attracted considerable research attention, and WSNs are emerging as a great aid in the field of precision agriculture to improve crop quality, productivity, and resource optimization. They are also widely used in greenhouses for monitoring and controlling humidity, temperature, soil moisture, water flow, etc. [27] [28].

Unlike other systems, WSNs are designed for specific requirements and applications [29]. WSNs for environmental monitoring are specially designed to collect data on an event-driven or time-driven basis according to environmental conditions and application requirements, i.e., when a specific environmental event occurs or at a particular time interval [30]. The importance and required accuracy of the data and the physical environment of deployment require careful consideration in designing the WSN system. The WSNs must be designed to withstand weather conditions such as temperature, winds, rain, snow, and pressure or vibration [7]. Although WSN technology is continuously improving, no off-the-shelf solution yet exists for hydrological monitoring applications [31].
WSNs also have various resource constraints and challenges. Constraints include energy, bandwidth, memory, and processing capacity. Among them, energy consumption is of prime importance, as each sensor node, depending on the number and type of attached sensor components, relies on a limited battery supply for data collection, processing, storage, transmission, and reception [32]. Moreover, the energy consumption rate of each node depends on its distance from the base station. Inequality of energy usage among the sensor nodes in the network affects the lifetime of the network for the intended application [33] [34]. Careful energy resource management is crucial for WSNs deployed in remote areas for an extended period. Another specific challenge to WSNs is security attacks from the surrounding deployment area due to the broadcast nature of radio transmission. Due to the limited computing power of the nodes, it is difficult to provide security and to protect sensitive data from unauthorized access using public-key cryptography [35]. The climate and deployment environment also affect the efficiency of the WSN [36].

This study aimed to develop a WSN system to monitor and collect real-time hydrological and climatic data from a distantly located watershed for a research study on mapping and modeling Variable Source Areas. The specific objectives were to design and deploy a long-term, low-cost, and robust WSN system that can withstand the harsh climatic conditions (extreme variation in temperature, high winds, rain, and snow) of humid and temperate regions such as Southern Ontario, Canada.

Design and Development of WSN

The design and development of the WSN took place over a four-year period from 2007 to 2011 [37]. During this period, a number of WSNs with different types of components were used, and the designed systems were rigorously tested in lab and field conditions. Various design and deployment issues were identified and resolved during the development of the WSN.

The WSN development was conducted in three phases. In the first phase, a WSN system was designed using hardware from Texas Instruments (TI). The nodes were based on TI-MSPTRF6903 boards with a TRF6903 RF transceiver and an MSP430 microcontroller. The transceiver operates in the 902-MHz to 928-MHz ISM frequency band, and the microcontroller was a 16-bit ultra-low-power MCU with 60 kB of flash memory for data storage. ICT ECH2O-20 cm soil moisture sensors from Decagon Devices, Inc. were used. MPXV7002 vacuum pressure sensors from Freescale were used to capture the water height and were connected to the analog-to-digital converter (ADC) port of the TI board. The board was programmed via the MSP430 JTAG connector; the MCU flash memory was erased and reprogrammed. The IAR Systems Workbench EW430 software package, in combination with the MSP430 JTAG, allowed real-time debugging of the code. The developed WSN with three nodes was tested and evaluated in the laboratory and the field; however, it was observed that the system consumed immense power. Moreover, the communication range of the nodes was limited, and the wireless communication was sensitive to metal fences and electrical power lines. These problems caused noise in temperature and pressure readings [38] [39].
In the second phase, the WSN system was modified to resolve the shortcomings encountered in phase 1. In the new version, hardware components from Crossbow (Xbow) were used to build a new WSN system. Crossbow's wireless sensor network was based on XM2110 nodes with built-in control and communication functions. Each platform included an ATmega1281 low-power microcontroller with a 10-bit ADC and 512 kB of memory, an AT86RF230 IEEE 802.15.4-compliant RF front-end, and a ZigBee transceiver with a 300 m line-of-sight transmission range. The network gateway consisted of an IRIS mote connected to a USB MIB520CA interface. A Motorola MPXV7007DP pressure sensor and the ICT ECH2O-20 cm soil moisture sensor were attached to the 51-pin expansion slot through a printed circuit board (PCB). The interface board passed the sensor data on to a PC. The nodes were powered by two 2.4 V, 750 mAh AAA Duracell NiMH batteries. The software tool MoteView [40], designed specifically for WSNs, was used; it uses XML files to convert the data from simple binary input from the gateway into decimal values, which could be displayed in real time and saved in a database. The program allows database dumping, whereby collected sensor data are exported into a text file. The text file can be read in Excel and modified with custom calibration equations. The modified WSN system was tested in the lab as well as in the field for communication between nodes and also between the nodes and the gateway. The range of the node as per the Crossbow IRIS reference manual was 300 m for outdoor conditions and 50 m for indoor situations. The transmission range of the nodes in the field was found to be about 250 m at the optimal battery voltage, with the range decreasing in accordance with drops in the battery voltage. This system was installed in the study watershed at the Guelph Turfgrass Institute of the University of Guelph, where it performed satisfactorily under a small height of vegetation and flat ground surface conditions. The study watershed was monitored, and the data for modeling the spatial variability of runoff generating areas were collected from July 2008 to April 2009.

Despite the successful application of this WSN system, it still required further improvements due to its short battery life and interruption of the signal by depressions and tall vegetation. The battery life was measured to be 11 days with the original configuration. The deployed solar ESS unit proved to be the most effective system, as it functioned correctly over a testing period of 32 days without completely dissipating the battery power. The disadvantage of this system was that the large size of the node board required a sizeable waterproof housing unit and an extended antenna, which was challenging to maintain in the field [39].

Taking these issues into consideration, the WSN system was further modified in phase 3, with the objective of improving its efficiency. For this third generation, updated MICA2 IRIS 2.4 GHz XM2110CA nodes were used (Figure 2(1)). This node featured several new capabilities that enhanced the overall functionality of the WSN system. The communication range of this node was twice that of the previous node, and it had a built-in 1.2-inch monopole antenna. A PCB was designed and fabricated in the department lab with the capacity to connect a maximum of six different kinds of sensors to the 51-pin expansion slot. The interface unit MIB510CA, shown in Figure 2(2), allowed the user to reprogram any node by plugging the node directly into the base and operating it as a part of the root node interface, giving the PC a data conduit to the radio-based sensor network.

Sensors

The pressure sensor used for the phase 3 WSN system is shown in Figure 2. The design of the soil moisture sensor used in this phase removes the soil-type sensitivity of the sensor and thus improves its ability to measure soil moisture in any soil.

Power Supply

The third generation MICA2 nodes require a supply of 1.7 to 4.3 V DC for communication within the wireless network. After rigorous testing of various conventional and rechargeable batteries, 4.0 V (4.5 Ah) lead-acid batteries were found to be the most reliable for this application. These batteries lasted about 30 days in the field under normal climatic conditions (Figure 2(6)). Solar panels of 14 × 4 × 0.5 cm with a 6 V DC open-circuit voltage and a short-circuit current output of 100 mA were used to recharge the batteries. These panels have two solder tabs with 7.5 cm long insulated leads to be connected to the batteries and weigh only 27 g. Each WSN node was provided with two solar panels to charge the batteries and maintain the supply voltage within the specified range to extend the battery life and the WSN operation, as shown in Figure 2(5).

The Sturdiness of Node Assembly

Each wireless node was housed in a sturdy and watertight PVC housing (80 × 50 × 25 mm) to withstand harsh temperatures, winds, and rain in the field. Moisture absorption packages were also placed within the casing to prevent humid conditions and to ensure that moisture does not collect on the electronics. The node housing was attached to a 3.0 m long, 25 mm diameter PVC pipe. This pipe was connected to a 450 × 450 × 100 mm wooden pedestal. The wooden pedestal was secured in the field using four 29 cm long PVC plugs. A glow sign cone was attached on top of the node to protect the PVC housing from rain and snow and to provide prominent visibility (Figure 2(7)). A pair of solar panels was attached to this cone. This modified node setup was found to be very sturdy and resistant to severe weather conditions. The overall node components, sensors, and node assembly in the field are shown in Figure 2.
Communication Connectivity

The nodes were elevated 3.0 m above ground level to increase communication connectivity, so that the crop height and the depressed areas did not interfere with the line-of-sight connectivity between the nodes. The increased height of the nodes improved connectivity between the nodes, decreased the number of required nodes, and reduced the overall cost of the WSN system. The hardware components were purchased directly from the distributors, and data acquisition boards for the IRIS mote were designed and fabricated in the laboratory in order to increase cost-effectiveness. The assembly of WSN components was carried out in the department workshop. A summary table listing the main characteristics of the three phases of WSN development is shown in Table 1.

For calibration of the soil moisture sensors, the water content of a soil sample was determined using the gravimetric method. Water was added to the container, the sensor reading was recorded, and the water content was measured again. This procedure was repeated until saturation of the soil was achieved. The sensor readings and the corresponding soil water contents were plotted as shown in Figure 3. A calibration equation (Equation (1)), relating θ_y, the soil moisture content in % by volume, to x, the sensor reading in mV, was fitted to the data with a determination coefficient R² of 0.9299.

Similarly, three pressure sensors were randomly selected for calibrating the depth of water. Two flexible plastic tubes were attached to the pressure sensor. One tube was vented to atmospheric pressure, and the other was placed in a graduated glass cylinder. Water was gradually added to this graduated cylinder to increase the water level from 0.0 to 20 cm, and the corresponding sensor reading of differential pressure was recorded. The graph of sensor readings versus water height for calibration is shown in Figure 3. The linear equation fitted to this graph, Equation (2), has a determination coefficient (R²) of 0.9891:

H = 0.6072x − 292.48    (2)

where H is the depth of water in mm and x stands for the sensor reading in mV.

Field Testing of WSN

Field testing of the WSN's performance was carried out at three different locations. For verification, the moisture of the top layer of soil was measured using a digital VG-200 soil moisture meter, and the height of the water level above the V-notch was measured manually. Figure 4 shows the soil moisture levels and depth of water at the location of node #5 recorded by the WSN and manually for a storm that occurred on 12 September 2011. Similarly, the WSN readings of node #4 were verified manually on 27 December 2011 (Figure 5). The comparison confirmed the accurate functioning of the WSN system during field deployment.

Field Data Collection

After successful testing of the WSN system, data collection began in a small agricultural watershed at the Elora Research Station (ERS). The soil of the watershed is sandy loam belonging to hydrological soil group B, with soil depth ranging from 0.60 to 0.90 m underlain by a restrictive layer. The entire watershed was under hay crop cultivation during the process of data collection.

The study watershed at ERS was divided into eight sub-watersheds using the watershed delineation tool of ArcGIS. At the outlet of each sub-watershed, a V-notch weir with a pressure sensor was installed to measure overland runoff. Soil moisture sensors were installed at the centroid of each sub-watershed and near all eight outlet points. A total of 16 soil moisture sensors, 8 V-notch weirs with pressure sensors, and six hopper nodes were installed in this study watershed.
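A minimal sketch of the calibration conversions (ours, not the authors' code; the sign of the intercept in Equation (2) is our reading of the garbled source, and Equation (1)'s coefficients are not reproduced in the text, so they are left as parameters):

```python
def water_depth_mm(reading_mv):
    """Pressure sensor calibration of Equation (2): depth (mm) from mV.
    Coefficients as read from the text; the intercept's sign is assumed."""
    return 0.6072 * reading_mv - 292.48

def soil_moisture_pct(reading_mv, slope, intercept):
    """Placeholder for the soil moisture calibration of Equation (1);
    slope and intercept must be taken from the fitted curve in Figure 3."""
    return slope * reading_mv + intercept
```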
The watershed at ERS and the locations of the soil moisture sensors and V-notch weirs are shown in Figure 6. A base station node was attached to a computer with an internet connection and was stationed on a nearby private property in order to power the laptop. The sampling interval of the system could be adjusted remotely for wet or dry periods. Since the relevant data were to be collected during rainy periods, the sampling interval was lengthened remotely during dry weather. Furthermore, remote monitoring of the system made it possible to put the WSN in sleep mode during extended dry periods to conserve battery power. This not only helped to conserve the battery life but also helped to avoid the accumulation of unnecessary data.

Soil moisture levels and runoff generated from the eight sub-watersheds of the study area were monitored from September 2011 to July 2013, and data for 45 rainfall events were successfully collected. During the entire experimental period, the WSN system worked efficiently, and no inconsistency was noticed in the performance of the nodes due to variations in the climatic conditions. The readings of the soil moisture sensors and pressure sensors were converted from mV to soil moisture percentage and water depth using calibration Equations (1) and (2), respectively. The discharge (m³/s) corresponding to the water height above the bottom of the weir was determined using the V-notch equation. For each rainfall event, a flow hydrograph of each sub-watershed segment was developed to compute the runoff. Rainfall and temperature data were collected from the ERS weather station located about 500 m from the study watershed.

The field measurements of a rainfall event dated 1 June 2012 are plotted in Figure 7. For this rainfall event, a total of 2456 m³ of runoff was generated at the outlet of the watershed, and the runoff coefficient was 29.28%. The developed WSN system worked accurately with minimum maintenance.

Field data of soil moisture and discharge for 10 rainfall events in the fall of 2011 were successfully recorded. During 2012, data for 4 spring events, 13 summer events, and 9 fall events were collected. During 2013, field data for 3 spring and 6 summer rainfall events were recorded. The collected data were used for the research project on Mapping and Modeling of Variable Source Areas in a Small Agricultural Watershed.

Summary and Conclusions

This study has provided an overview of the development of an integrated WSN system for monitoring the climatic and hydrologic parameters of a remotely located agricultural watershed. The designed WSN system comprised advanced wireless network technology, which, together with the internet, facilitated data communication between the study site and the client in real time. Low power consumption, along with its compact size and multiple sensors, made it perfectly suitable for field application. The WSN system was calibrated in the laboratory and tested at three locations in southern Ontario, Canada. Field-scale testing demonstrated that the system is robust enough to work under adverse weather conditions, such as extreme variation in temperature, high winds, rain, and snow. The developed WSN system was used in a remote agricultural watershed near Elora (ON), where it successfully acquired, stored and transmitted real-time climatic and hydrological data. The WSN worked accurately with minimum maintenance and enabled continuous data collection for more than two years.
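The text cites "the V-notch equation" without stating it. The sketch below (ours) uses the standard sharp-crested triangular weir formula with an assumed 90° notch and discharge coefficient, plus a trapezoidal hydrograph integration for the event runoff volume and the runoff coefficient:

```python
import numpy as np

def vnotch_discharge(h_m, angle_deg=90.0, cd=0.58):
    """Standard V-notch weir equation (assumed form; the notch angle and
    discharge coefficient Cd are hypothetical, not given in the text):
    Q = (8/15) * Cd * sqrt(2g) * tan(angle/2) * h^(5/2), h in m, Q in m^3/s."""
    g = 9.81
    return (8 / 15) * cd * np.sqrt(2 * g) * np.tan(np.radians(angle_deg) / 2) * h_m ** 2.5

def event_runoff_m3(t_s, q_m3s):
    """Integrate the flow hydrograph (time in s, discharge in m^3/s)."""
    return np.trapz(q_m3s, t_s)

def runoff_coefficient(volume_m3, rain_mm, area_m2):
    """Runoff coefficient = runoff volume / rainfall volume over the area."""
    return volume_m3 / (rain_mm / 1000 * area_m2)
```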
The advantage of this system was that it could be accessed from anywhere by any computer connected to the internet. Remote data collection and maintenance considerably reduced the need for site visits, which significantly reduced the monitoring cost. Although this WSN system was tailored explicitly for mapping the VSAs in a small agricultural watershed, it is flexible enough for use in a variety of contexts.

Figure 1. A typical distributed wireless sensor network system.

Figure 4. WSN and manual readings of soil moisture and pressure sensors on 12 September 2011.

Figure 5. Manual verification of the WSN readings of node #4 on 27 December 2011.

Figure 6. Layout of the study watershed at Elora (Ontario).
Rainfall started at 5:00 a.m., and the total rainfall for the event was 46.0 mm. The initial soil moisture at the beginning of the rainfall was 14%, and runoff initiated after 43 minutes, when the soil moisture reached 43% (saturation). The daytime maximum temperature was 13.7 °C, and the initial abstraction (Ia) of this rainfall event was 4.3 mm. Peak discharges of 0.028, 0.021, and 0.020 m³/s were recorded at 6:00 p.m. at the outlets of sub-watersheds 4, 7 and 1, respectively.

Figure 7. Field observations of the rainfall and runoff event dated 1 June 2012.

Figure 8. Field observations of rainfall, soil moisture, and temperature during September 2011.

Figure 9. Field observations of rainfall, soil moisture, and temperature during the year 2012.
5,118.4
2018-05-09T00:00:00.000
[ "Environmental Science", "Agricultural and Food Sciences", "Engineering" ]
An empirical comparison of the harmful effects for randomized controlled trials and non-randomized studies of interventions

Introduction: Randomized controlled trials (RCTs) are the gold standard to evaluate the efficacy of interventions (e.g., drugs and vaccines), yet the sample size of RCTs is often limited for safety assessment. Non-randomized studies of interventions (NRSIs) have been proposed as an important alternative source for safety assessment. In this study, we aimed to investigate whether there is any difference between RCTs and NRSIs in the evaluation of adverse events. Methods: We used a dataset of systematic reviews with at least one meta-analysis including both RCTs and NRSIs and collected the 2 × 2 table information (i.e., numbers of cases and sample sizes in intervention and control groups) of each study in the meta-analysis. We matched RCTs and NRSIs by their sample sizes (ratio: 0.85/1 to 1/0.85) within a meta-analysis. We estimated the ratio of the odds ratios (ROR) of an NRSI against an RCT in each pair and used the inverse variance as the weight to combine the natural logarithms of the ROR (lnROR). Results: We included 31 systematic reviews with 178 meta-analyses, from which we confirmed 119 pairs of RCTs and NRSIs. The pooled ROR of NRSIs compared to that of RCTs was estimated to be 0.96 (95% confidence interval: 0.87 and 1.07). Similar results were obtained in different sample size subgroups and treatment subgroups. With increasing sample size, the difference in ROR between RCTs and NRSIs decreased, although not significantly. Discussion: There was no substantial difference in the effects between RCTs and NRSIs in safety assessment when they have similar sample sizes. Evidence from NRSIs might be considered a supplement to RCTs for safety assessment.

Introduction

Randomized controlled trials (RCTs) are considered the most unbiased study design and represent the current gold standard for assessing the efficacy of interventions (Guyatt et al., 2008). Through the randomization process, RCTs largely avoid confounding bias in estimating the intervention effect (Shrier et al., 2007). However, RCTs are expensive, and thus most RCTs only cover a small number of patients with a short follow-up period (Van Spall et al., 2007; Kennedy-Martin et al., 2015). In addition, sample size estimates for RCTs are usually based on the main outcome, that is, efficacy, rather than adverse events. This makes it challenging to assess safety outcomes, since many outcomes occur at a low frequency: the observed events would be rare, or even zero, for certain outcomes. Therefore, statistical inference faces significant uncertainty caused by random error (Bhaumik et al., 2012; Efthimiou, 2018). In addition, recruiting subjects usually involves strict inclusion criteria, and researchers tend to exclude high-risk patients, such as children, elderly people, pregnant women, patients with multiple complications, and those with potential drug interactions. These restrictions limit the representativeness of the findings of RCTs (Chou and Helfand, 2005; Golder et al., 2011).

Non-randomized studies of interventions (NRSIs) are an alternative to overcome the aforementioned issues in assessing safety. It is widely known that a case-control study is designed for situations where the cases of events are rare (Vandenbroucke and Pearce, 2012). There are two sources of error that could impact the estimates of NRSIs, namely, systematic error (bias) and random error.
For the effectiveness of interventions, the bias of NRSIs is deemed to be the main effect modifier of the results, and the random error may have limited impact due to the large sample size and sufficient outcomes (Higgins et al., 2011). Methods such as stratification, matching, and regression analysis have been proposed to address the confounding bias of NRSIs (McNamee, 2005; Austin, 2011). Simulation studies have verified that these methods work well to control the impact of confounders on the effects (Jreich and Sebastien, 2021). However, for rare adverse events, such methods may not be feasible due to the limited number of cases. For example, when the event risk is 1/1000, even for an NRSI with a sample size of 2000, the expected number of cases would only be two, which is insufficient for the aforementioned methods. In such a case, in safety assessment, the random error may have a larger impact than the systematic error (bias) and dominate the results.

One increasingly popular approach is to pool all available RCTs on the same topic, i.e., via a meta-analysis, to increase the statistical power in testing whether a true effect actually exists. Nevertheless, the statistical power of such meta-analyses is often still seriously insufficient. Researchers have therefore proposed including NRSIs in the meta-analysis because, for safety outcomes, the primary aim is to capture any signal of harm (Reeves et al., 2013; Valentine and Thompson, 2013). This is somewhat reasonable because, as mentioned previously, for safety outcomes of rare events systematic error may have a limited impact on the results. Even so, this has raised wide controversy, as concerns about confounding bias still exist for NRSIs and will be synthesized into the pooled effect (Benson and Hartz, 2000; Concato et al., 2000; Ioannidis et al., 2001; Abraham et al., 2010; Hemkens et al., 2016; Soni et al., 2019). To address this concern, we designed an empirical study based on a database of systematic reviews of safety that compared the effects of RCTs and NRSIs, to see whether there is any difference between them in the evaluation of adverse events.

Materials and methods

The findings of the current study are reported according to the Strengthening the Reporting of Observational studies in Epidemiology (STROBE) checklist for case-control studies (von Elm et al., 2008). A brief description of the study is as follows. First, we searched for published systematic reviews of safety and screened for those with safety as the exclusive outcome. Then, we checked the eligible systematic reviews for those including both RCTs and NRSIs in their meta-analyses. The RCTs and NRSIs were further matched by sample size (1:1) within each meta-analysis. Finally, the effects of each pair of RCT and NRSI were compared.

Sample size estimation

To ensure a sufficient sample size (pairs) for the statistical test, we used the following formula to estimate the minimum sample size for the current study: n = (z_{α/2} × d/E)² (Donner, 1984). Here, E indicates the margin of error and d represents the expected standard deviation of the difference of the effects (i.e., the natural logarithm of the odds ratio, lnOR) across the pairs. The margin of error is a concept similar to the bias in a simulation study, namely, how close the estimated effect is to the true effect (Donner, 1984). The standard deviation is a concept similar to the between-study heterogeneity in a meta-analysis (Pateras et al., 2018).
Therefore, we took 25% as the tolerable margin of error and 1 as the standard deviation, indicating that there would be substantial-to-large heterogeneity across pairs (Ju et al., 2020; Xu et al., 2021a). Based on these parameters, the estimated sample size of the current study is 96.04; that is, we need at least 97 pairs of RCTs and NRSIs to ensure the statistical power to test whether the difference of the effects across the pairs is significant.

Data source

We used a dataset collected in 2020, which was primarily established to improve evidence-based practice for safety assessment and has been documented elsewhere (Xu et al., 2021b). The dataset consists of 640 systematic reviews of healthcare interventions published in two time periods (2008-2011 and 2015-2020), with adverse events as exclusive outcomes and at least one meta-analysis. The two different periods were primarily designed for comparing how double-zero studies were dealt with by systematic review authors over time (Xu et al., 2021b). For each time period, a comprehensive literature search was performed to ensure the representativeness of the sample (systematic reviews of safety). A detailed description of the dataset can be found in our previous works (Zorzela et al., 2014; Xu et al., 2021b).

Eligibility criteria

We screened the 640 systematic reviews for those with at least one outcome (each outcome referred to a separate meta-analysis) that included both RCTs and NRSIs, in order to compare the effects of NRSIs vs. RCTs. In addition, considering that data extraction errors are commonly seen in published meta-analyses, we only considered reviews providing summarized 2 × 2 table data for each study in the meta-analysis, so that a further double-checking of such data against the original studies was possible. For the same reason, reviews directly reporting the effect size (e.g., OR) and standard error for the meta-analysis were not considered; for such systematic reviews, it is impossible to check whether the effect sizes they used were correctly estimated or extracted, especially for NRSIs. We collected RCTs and NRSIs in systematic reviews under the condition that each pair of RCT and NRSI has the same topic; thus, the potential impact of different topics on the results was eliminated. In addition, only pairwise meta-analyses were considered, to ensure the interventions were homogeneous.

Data collection

The meta-analytic data of each outcome from each eligible systematic review were extracted by two review authors independently; any disagreements were solved by discussion with the lead author. The extracted data include the 2 × 2 table information (i.e., numbers of cases and sample sizes in intervention and control groups) of each study in the meta-analysis, the type of design of each study (i.e., RCT or NRSI), the first author of the systematic review, and the first author and year of publication of the included studies. The primary data were collected from the systematic reviews, and to ensure the quality of the data, we further double-checked the data of the matched pairs against the original studies included in the corresponding systematic reviews.

Data analysis

Previous studies pooled the effects of NRSIs and RCTs by treating them as subgroups in a meta-analysis and compared the pooled effects across each meta-analysis (Mathes et al., 2021).
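A quick check of the sample size formula (our sketch, not from the paper). Note that the reported 96.04 is reproduced when z_{α/2} = 1.96 and the ratio d/E equals 5 (e.g., d = 1 with an effective margin of 0.2):

```python
from scipy.stats import norm

def min_pairs(margin_e, sd_d, alpha=0.05):
    """Minimum number of pairs, n = (z_{alpha/2} * d / E)^2 (Donner, 1984)."""
    z = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    return (z * sd_d / margin_e) ** 2

# d/E = 5 reproduces the reported estimate of 96.04 -> at least 97 pairs
print(round(min_pairs(margin_e=0.2, sd_d=1.0), 2))  # 96.04
```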
However, this method has a big disadvantage in that it requires a sufficient number of studies (i.e., at least 10) in each subgroup to ensure the robustness of the pooled effects. Under such a limitation, very few meta-analyses would meet the requirement, which may further impact the generalizability of the findings. In the current study, in order to compare the potential difference of the effects, we matched RCTs and NRSIs within the same meta-analysis by their sample sizes to control the impact of random error on the effects. In brief, we first calculated the sample size of each study in each meta-analysis and ranked the sample sizes within the meta-analysis. Then, RCTs and NRSIs with similar sample sizes were matched as a pair, using the "nearest neighbor matching" method (Austin, 2011). To ensure that the matched RCT and NRSI have almost the same sample size, we calculated the ratio of their sample sizes; only those with a ratio from 0.85/1 to 1/0.85 were considered, to avoid the potential influence of sample size on the results (Xu et al., 2021c).

In each pair, the OR and its standard error were estimated for both the RCT and the NRSI, as the OR has been considered one of the optimal effect estimators (Doi et al., 2021). For those studies with zero events in a single group or both groups, a continuity correction was applied by adding 0.5 to each cell to produce an approximate evaluation of the OR and its standard error (Xu et al., 2021d). Furthermore, the ratio of the ORs (ROR) of the NRSI against the RCT was calculated to reflect the deviation of the effects; the ROR is the primary outcome of the current study (Dechartres et al., 2018). This statistic allows us to further test whether there is a difference between the effects of RCTs and NRSIs: when the weighted mean value of the ROR across the pairs is 1, there is no difference between the effects of RCTs and NRSIs. To obtain the weighted mean value of the ROR, we calculated the natural logarithm of the ROR (lnROR) and its standard error and then used the inverse variance heterogeneity model to combine these lnRORs (Doi et al., 2015). The standard error of the lnROR of each pair can be estimated from the standard errors of the RCT and NRSI estimates (Golder et al., 2011). The pooled effect is the weighted mean value. The statistical null hypothesis is then that the pooled lnROR = 0. We used a two-sided t-test with a significance level of alpha = 0.05. A sensitivity analysis was employed using cluster-robust-error meta-regression to account for the potential correlation of the lnRORs of the pairs within each systematic review (Xu and Doi, 2018). A further subgroup analysis by the maximum sample size of each pair was employed to see whether the potential difference of the effects varies by sample size. The following five groups were prespecified: 1-50, 51-100, 101-200, 201-500, and >500. Statistical analyses were conducted in MetaXL 5.3 software (EpiGear International, Australia) and Stata 14/SE (StataCorp, College Station, TX).

Results

Basic characteristics

Of the 640 systematic reviews of adverse events, 87 included both RCTs and NRSIs. We further excluded 12 in which the NRSIs were only used for the incidence of adverse events or which did not include both RCTs and NRSIs within the same meta-analysis.
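A minimal sketch of the pair-level computation (ours; for simplicity the pooling below is a plain fixed-effect inverse-variance average rather than the inverse variance heterogeneity estimator used in the paper):

```python
import numpy as np

def log_or(events_t, n_t, events_c, n_c):
    """ln(OR) and its SE from a 2x2 table, with a 0.5 continuity
    correction applied to every cell when any cell is zero."""
    a, b = events_t, n_t - events_t
    c, d = events_c, n_c - events_c
    if 0 in (a, b, c, d):
        a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
    return np.log(a * d / (b * c)), np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

def pooled_lnror(pairs):
    """Weighted mean lnROR over matched pairs.
    pairs: list of ((lnOR_nrsi, se_nrsi), (lnOR_rct, se_rct))."""
    ln_rors = np.array([n[0] - r[0] for n, r in pairs])
    ses = np.array([np.hypot(n[1], r[1]) for n, r in pairs])
    w = 1.0 / ses ** 2
    mean = np.sum(w * ln_rors) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))   # exponentiate for ROR and CI

# Example pair: NRSI 3/120 vs 1/118, RCT 2/110 vs 1/112 (hypothetical counts)
pair = (log_or(3, 120, 1, 118), log_or(2, 110, 1, 112))
print(pooled_lnror([pair]))
```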
Of the remaining 75 systematic reviews, 31 were eligible: they had at least one outcome containing both RCTs and NRSIs and provided summarized 2 × 2 table data for each study in the meta-analysis (Grootscholten et al., 2008; Sun et al., 2008; Torloni et al., 2009; Touzé et al., 2009; Slobogean et al., 2010; Yaghoobi et al., 2010; Aires et al., 2015; Geng et al., 2015; Ghayoumi et al., 2015; Inokuchi et al., 2015; Wang et al., 2015; Yoon et al., 2015; Zhang and Ma, 2015; Keir et al., 2016; Peng et al., 2016; Vavken et al., 2016; Balasubramanian et al., 2017; Geminiani et al., 2017; Pecorelli et al., 2017; Cheng et al., 2018; Shah et al., 2018; Zhao et al., 2018; Ceresoli et al.). From the 31 systematic reviews, 178 meta-analyses contained both RCTs and NRSIs, with a total of 1,404 studies. 119 pairs of RCTs and NRSIs were successfully matched for the analysis (Supplementary Figure S1). In a further check of the 238 studies from the 119 pairs, we found that two (0.84%) had data extraction errors, which were corrected. The sample size of the current study is bigger than the minimum requirement (see Sample size estimation). Among the 119 pairs, there were 19 (15.97%) with a sample size ranging from 1 to 50, 41 (34.45%) ranging from 51 to 100, 19 (15.97%) ranging from 101 to 200, 17 (14.29%) ranging from 201 to 500, and 23 (19.33%) with a sample size >500.

Figure 1 shows the distribution of the lnRORs, which is approximately normal (p = 0.446 for skewness and p = 0.13 for kurtosis). The unweighted mean value of the lnROR was −0.14 with a standard deviation of 1.23, and a one-sample t-test showed no substantial difference of the lnROR from zero (t = −1.25, p = 0.21).

RCTs vs. NRSIs on the effects

Supplementary Figure S2 shows the forest plot of the weighted average lnRORs. Again, no difference was observed between the effects of NRSIs and RCTs. The pooled ROR across the 119 pairs was 0.96 (95% confidence interval [CI]: 0.87, 1.07; p = 0.49), with no obvious between-study heterogeneity (I² = 0%). A robust meta-regression model that considers the correlation between the pairs within a systematic review showed a similar result, with a pooled ROR of 0.96 (95% CI: 0.90, 1.03; p = 0.27).

Subgroup analysis

Similar conclusions were obtained from the analysis of the different sample size subgroups. There was no significant difference between the weighted mean value of the lnROR and 0 in any subgroup; that is, there was no significant difference in the effects between RCTs and NRSIs, regardless of sample size. The forest plots of the subgroup analyses are shown in Figure 2. However, there was a slight difference in the absolute value of the weighted mean lnROR across the sample size subgroups, which decreased with increasing sample size (Figure 3): with the increase in sample size, the difference between RCTs and NRSIs diminished. In addition, the treatment used in the original studies had no significant effect on the results. We compared the weighted mean lnROR in treatment subgroups, and the results for both surgical treatment and drug therapy were close to 0, with no significant difference (Supplementary Figure S3).

Discussion

In this study, we compared the effects of RCTs and NRSIs on safety assessment based on empirical evidence. Our results showed that there was no significant difference between RCTs and NRSIs in the evaluation of adverse events of the same topic, and there was no significant difference in sample size or treatment subgroups.
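The distribution checks and the one-sample test can be reproduced with SciPy. The sketch below (ours) uses simulated lnRORs with the reported mean and SD, since the raw pair-level data are not given here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
ln_rors = rng.normal(-0.14, 1.23, size=119)    # placeholder for the 119 pairs

print(stats.skewtest(ln_rors))                 # normality check: skewness
print(stats.kurtosistest(ln_rors))             # normality check: kurtosis
print(stats.ttest_1samp(ln_rors, popmean=0.0)) # two-sided test of lnROR = 0
```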
In our research, although different sample size subgroups yielded similar results, there was still a slight difference in the weighted average RORs between the sample size subgroups. As shown in Figure 3, with increasing sample size the value of the lnROR decreases gradually; that is, the difference between RCTs and NRSIs gradually decreases. This is likely because the random error decreased as the sample size increased, and the estimated effect is therefore closer to the true effect (i.e., lnROR = 0) (Moher et al., 1994; Wang and Ji, 2020). This also indicates that small studies may lead to biased estimation of the effects and should be addressed and interpreted appropriately in further original studies as well as meta-analyses.

Several previous studies have systematically evaluated the differences in the effects of adverse events between RCTs and NRSIs. One study included 19 systematic reviews, and the pooled ROR of RCTs compared to observational studies was estimated to be 1.03 (95% confidence interval 0.93-1.15) (Golder et al., 2011). Two other studies showed similar results (Grodstein et al., 2003; Edwards et al., 2012). These results are similar to ours and further confirm that there is no difference in the average risk estimates of intervention adverse events between RCTs and NRSIs.

Figure 3. Scatter plot between the sample size and the absolute value of the weighted mean lnRORs.

One possible explanation for the findings is that, for safety outcomes, the events are rare and the sample sizes are limited, which makes the random error, rather than the systematic errors (e.g., error from confounding), the predominant error impacting the effect; therefore, under the same sample size, with almost the same amount of random error, the effects are similar for RCTs and NRSIs. However, some minor differences in the effects have been observed. A study of postmenopausal hormone therapy in breast cancer survivors found that the results of observational studies were inconsistent with those of randomized trials (Col et al., 2005). This may be due to inconsistencies in the study populations, as they excluded people with a high incidence of adverse events. In the study of Papanikolaou et al. (2006), the authors compared the risks of 13 major harms of medical interventions using data from both RCTs and observational studies, and the non-randomized studies were often more conservative in their estimates of risk than the randomized trials. The study attributed these differences to the higher rate of adverse reactions reported by the RCTs, because adverse events are recorded more thoroughly in RCTs owing to regulatory requirements. They may also be caused by the different study populations. Further research on measuring the amount of random error and systematic error in NRSIs for rare events could help the community better understand the mechanism and deserves more attention.

Strengths and limitations

To the best of our knowledge, our study is currently the largest empirical study comparing the difference of the effects between RCTs and NRSIs for safety outcomes. The sample is representative, and the findings could provide indications for further evidence-based practice in assessing adverse events. In addition, we attempted to source the primary studies contained in each meta-analysis. This can avoid the errors that may exist in the extraction of data by the authors of meta-analyses.
Moreover, we matched RCTs and NRSIs with the same outcome in the same systematic review according to their sample sizes, which avoids the influence of different sample sizes on the results.

The current study has several limitations. First, we did not analyze and evaluate the bias of the included systematic reviews or possible confounding factors in the original studies, such as drug dose, treatment duration, or study population. These confounding factors may affect the outcome of adverse events. In addition, even for the same adverse event, there are differences in how these events were defined or recorded, especially in composite outcomes. The absence of such methodological information increases the potential heterogeneity of the results and may even bias the conclusion. Therefore, in original studies, detailed information on outcome collection should be sufficiently provided. Second, selection bias may occur in the current study. It has been estimated that only about 43% of published studies report adverse events, while the proportion is 88% in unpublished studies (Golder et al., 2016). This means that the studies included here were those with better reporting of safety outcomes; thus, our results may not be representative of those with poor reporting. Third, we used the matching method for comparison; during the matching process, only 17% of the 1,404 studies from the 178 meta-analyses were matched. This means that the majority of RCTs and NRSIs have different sample sizes, and therefore whether their effects are similar or not is unclear; this is hard to estimate, as the sample size itself is a source of bias. In addition, systematic reviews of adverse events potentially have serious issues in data extraction, and these errors can mislead the conclusions (Xu et al., 2022). Even though data extraction was checked and corrected in this study, there may still be some errors. Further studies are warranted to address these issues.

Conclusion

In conclusion, the current study identified that there was no significant difference between RCTs and NRSIs in the evaluation of the effect of adverse events for the same topic when they have similar sample sizes. It is of great significance for systematic reviews of adverse events that well-conducted NRSIs may provide valid results, similar to RCTs. Evidence from NRSIs might be considered a supplement to RCTs to improve the generalizability and comprehensiveness of the review.

Data availability statement

The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.

Author contributions

MD conceived and designed the study; MD analyzed the data and drafted the manuscript; AS collected the data, assessed the methodological quality, and edited the manuscript; LF-K, QW, and LL screened the literature; LL and LF-K provided methodological comments and revised the manuscript. All authors approved the final version for publication.

Funding

This study was supported by the Chinese National Programs for Brain Science and Brain-like Intelligence Technology, China Depression Cohort Study (2021ZD0200700), and grant 82171499 (Key Project) from the National Natural Science Foundation of China. LF-K is funded by an Australian National Health and Medical Research Council Fellowship (APP1158469). The funding body had no role in any process of the study (i.e., study design, statistical analysis, and result reporting).
Conflict of interest The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors, and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
5,510.4
2023-03-21T00:00:00.000
[ "Mathematics" ]
OPTIMIZATION OF ALGORITHMS FOR EFFECTIVE MANAGEMENT OF THE MEASUREMENT PROCESS IN ELECTRICAL IMPEDANCE TOMOGRAPHY AND ULTRASONIC TOMOGRAPHY The article discusses the design of advanced embedded algorithms that aim to manage and control the measurement process using the Electrical Impedance Tomography and Ultrasonic Tomography methods. The project aims to develop solutions that optimize the performance of embedded devices that must operate on limited resources such as memory, computing power, or bandwidth. Minimizing energy consumption is an important aspect, especially for battery-powered devices. Algorithms must also be adapted to specific hardware constraints, such as low RAM or CPU limitations, which require complex engineering and optimization. Additionally, the project assumes the implementation of algorithms that consider security aspects, which is crucial for protecting data processed and transmitted by the device. It also requires the development of effective communication methods between the embedded device and other systems, including appropriate communication protocols. Implementation of algorithms and methods The main goal of the work was to develop solutions that optimize the performance of embedded devices. The solution was based on Electrical Impedance Tomography (EIT) and Ultrasonic Tomography (UST). The project involves the implementation of algorithms and the development of effective methods of communication between the embedded device and other systems (Nowakowski et al., 2017; Romanowski et al., 2019; Soleimani et al., 2009). Measurement and data processing occur in parallel: some of the functional blocks perform tasks involving the acquisition of measurement data while, at the same time, data obtained during the previous sequence are processed. This arrangement optimizes the solution's performance, which increases its efficiency and speed of operation (Rymarczyk et al., 2019; Rymarczyk et al., 2018). Processing the measurement signal in the described system is a step-by-step process that starts immediately after triggering the measurement signal (ADC_CNV_START). The first step is to capture signal samples, the number of which is determined by the duration of the synchronization signal (SYNC). This time corresponds to a multiple of the duration of the excitation signal period. The collected samples are then saved in the device's cache. In the second stage of the process, the signal is filtered to eliminate the DC component. To do this, the signal passes through a digital high-pass filter, which removes the unwanted low-frequency content. Then, the pre-processed signal is directed to two parallel processing blocks. The first is the RMS (Root Mean Square) value calculation block, which determines the effective signal value. The second block calculates the phase shift value, which is crucial for further analysis of the signal characteristics. At the end of the process, the measured and processed data are presented as values corresponding to the obtained parameters. These data are placed on the system bus, and the entire process ends with the generation of a measurement readiness signal (ADC_CNV_CPL), signaling the end of the measurement cycle. The interactions between the system's functional blocks are detailed in Figure 1, allowing for a better understanding of signal flow and processing throughout the system.
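As a rough host-side illustration of this processing chain (in the device itself these steps run as FPGA function blocks), the NumPy sketch below removes the DC component, computes the RMS value, and estimates the phase shift by I/Q demodulation against the excitation reference; the sampling rate, excitation frequency, and waveform are hypothetical.

```python
import numpy as np

fs = 1_000_000          # hypothetical sampling rate [Hz]
f_exc = 10_000          # hypothetical excitation frequency [Hz]
n = 4096                # captured samples (a multiple of the excitation period)
t = np.arange(n) / fs

# Hypothetical captured waveform: excitation response with DC offset and noise.
sig = 1.5 + 0.8 * np.sin(2 * np.pi * f_exc * t + 0.35) + 0.01 * np.random.randn(n)

# Stage 1: remove the DC component (the FPGA uses a digital high-pass filter;
# subtracting the mean is the simplest equivalent for a finite capture).
ac = sig - sig.mean()

# Stage 2a: RMS ("effective") value of the AC component.
rms = np.sqrt(np.mean(ac ** 2))

# Stage 2b: phase shift relative to the excitation reference via I/Q demodulation.
i_comp = np.mean(ac * np.cos(2 * np.pi * f_exc * t))
q_comp = np.mean(ac * np.sin(2 * np.pi * f_exc * t))
phase = np.arctan2(i_comp, q_comp)   # radians relative to the sine reference

print(f"RMS = {rms:.4f}, phase shift = {phase:.4f} rad")
```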
Figure 1. ADC_SIG block diagram. One element that plays a vital role in the measurement process is the ADC_CTR block. This block deals not only with acquiring and sampling measurement data but also with processing and filtering the acquired data until the measurement results (RMS signal value, RMS current value, phase shift value) are obtained. The communication method of the measuring block is shown in Figure 2. The MES_PRC_CTR block in the measurement system plays a vital role as the controller of the measurement process. Its primary function is to manage and coordinate the operation of individual functional blocks, determine the required parameters, and synchronize the entire measurement process. The control process begins when the measurement is started by issuing the Mes_Start signal. After the measurement stage is complete and the ADC_CNV_CPL readiness signal is received, the system analyzes the collected data. If the measured excitation current values are within the permissible error limits, the data are written to the external RAM. The system then moves the measurement sequence to the next defined position, and the process starts again. If the measured current value does not meet the established acceptance criteria, the system initiates a digital adjustment process, after which the measurement is repeated. Figure 3 illustrates the entire control process and its stages. It shows a block diagram of the operation of the measurement control system. This diagram visualizes the signal flow and the control hierarchy within the system, which is essential to understanding the functioning of, and interactions between, components.
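The MES_PRC_CTR control flow described above can be summarized in the following host-level Python sketch; all hardware accesses are stubbed out, and the tolerance, setpoint, and sequence length are hypothetical values chosen only for illustration.

```python
import random

TOLERANCE = 0.05      # hypothetical permissible relative error of excitation current
TARGET_CURRENT = 1.0  # hypothetical excitation current setpoint

def start_measurement(position):        # stub for issuing Mes_Start
    pass

def wait_for_adc_cnv_cpl():             # stub for polling the readiness signal
    pass

def read_measurement(position):         # stub returning (data, measured current)
    return {"position": position}, TARGET_CURRENT * random.uniform(0.9, 1.1)

def adjust_excitation():                # stub for the digital adjustment step
    pass

def write_to_external_ram(data):        # stub for storing accepted data
    pass

def run_sequence(positions, max_retries=5):
    for position in positions:
        for _ in range(max_retries):
            start_measurement(position)
            wait_for_adc_cnv_cpl()
            data, current = read_measurement(position)
            if abs(current - TARGET_CURRENT) / TARGET_CURRENT <= TOLERANCE:
                write_to_external_ram(data)   # accept and advance the sequence
                break
            adjust_excitation()               # out of tolerance: adjust and repeat

run_sequence(positions=range(16))
```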
Carrying out tests of the developed algorithms The process of testing algorithms and embedded software, crucial to ensuring the effective operation of FPGA-based systems, begins with a series of test stages necessary to verify and optimize every aspect of the device's operation. The first step in this process is simulation. Before implementation on an actual FPGA, the software is thoroughly tested using HDL simulation tools for languages such as VHDL or Verilog. This phase allows the correctness of the design's logic and functionality to be checked precisely, eliminating initial errors before the software is loaded onto the device. Then, unit tests are performed, which focus on analyzing individual design components or modules. These tests allow for the early identification and correction of errors, which is necessary to ensure the stability of basic system functions. After completing unit tests, the system goes through the integration testing stage. In this phase, it is checked whether all modules work together correctly, which is critical for the functionality of the entire system. Then, functional tests confirm that the FPGA system meets all the requirements, especially under various operating conditions. Performance testing is also crucial, especially if the system has specific speed, throughput, and latency requirements. Additionally, load tests are performed to examine how the FPGA system copes under maximum load conditions. These tests are important for assessing the system's stability, resistance to high temperatures, and potential power supply problems. The final stage is testing and debugging on real hardware. Loading the software onto an actual FPGA device and testing it in real-world conditions allows for final verification of system performance, confirming that the implementation on real hardware works as expected. Such a detailed and multi-stage testing process ensures that the FPGA system is thoroughly tested and ready to operate effectively in practical applications, fully considering all key functionalities. Tests of the mechanisms of the measurement and data processing block Several tests and simulation analyses were performed to check the correctness of the obtained data and the temporal consistency of individual signals. The work began with a simulation using synthetic data generated by a specially created function block simulating the operation of an analog-to-digital converter, on the basis of which the correctness of the obtained results was analyzed. The second testing stage focused on testing the finished solution using the target hardware layer and specially adapted embedded software containing the functions and procedures necessary for debugging. To test the correctness of operation, it is essential to create mechanisms enabling the transfer of selected measured signal waveforms to an external environment. For this purpose, the measurement function block was expanded with a debugging side, allowing the envelope of the measured signal to be recorded, and with a set of three two-port RAMs. The next step in checking the correctness of the obtained results was to compare the solution with a standard signal generator and with laboratory measuring instruments. The obtained envelope results and calculated RMS values were compared with the values obtained using the measuring instruments. Tests of data storage and segregation mechanisms and of the forcing signal generation/current regulation system As part of the solution tests, test procedures were created to check the correctness of written and read data and of data segregation in virtual non-volatile memories. For this purpose, a function block was designed to simulate the operation of the ADC_MES block, replacing actual data with synthetic data. Then, the obtained data were analyzed to ensure their correctness. The set of excitation signal-generating blocks was analyzed next: individual function blocks were simulated and tested for the correctness of the generated waveforms, and then the correct operation of the entire excitation generation and control unit was analyzed. In the first phase of the tests, the proper operation of the signal generation system was checked. For this purpose, various frequencies of the excitation signal were set synthetically, and the correctness of the obtained frequencies was checked using measuring instruments. The fidelity of the generated signal to the expected values was also analyzed. In the next phase, the correct operation of the signal amplitude control block was checked by setting successive values of the forcing amplitude and then analyzing the results using reference devices. The last functional testing phase was to check the correct operation of the entire function block. Measurement system The lower urinary tract tomograph has been tested for system stability (Fig. 4).
For this purpose, test software was prepared and launched to force UST and EIT measurements as often as possible. The test was intended to exercise the device under extreme conditions that the end user could not produce during everyday use. The test software was parameterized to perform 10,240 measurements. All measurements were performed correctly (Fig. 5). The device has undergone several design changes, the main one being a complete change in its form and method of use. The new design assumes the construction of the device as a portable backpack connected to textile underwear equipped with a miniaturized ultrasonic head and electrical impedance measuring electrodes. The new type of construction strikes a balance between mobility, comfort of use, and the quality of diagnostics. The electrical design of the device has been optimized in terms of energy consumption. The power supply section has been rebuilt: it is equipped with a separating converter that isolates the patient from the power grid when the device is charged from a charger, new high-voltage converters in SEPIC technology with better efficiency for generating the forcing signal of the ultrasonic transducers, and provision for installing a galvanically isolated converter. The method of powering the measurement cards has also been changed. The new design allows their power supply to be disconnected when inactive, which translates into longer battery life. In addition, a Li-Ion battery charger with a BMS system is integrated into the device's motherboard, described in detail in the next section of this report. Below is the new electrical design of the device's main board for dual diagnostics of the lower urinary tract, along with partial assembly work (Fig. 6). The new design has also been adapted for wireless communication. Using the ESP32 system, a communication module was designed and manufactured, with the possibility of communication via the USB port. Below is the electrical design of the WiFi module, along with 3D models of the boards and a photo of the assembly. Figure 6. Design of the tomograph motherboard (bottom view) with UST measurement cards and LED strip installed (a) and design of the tomograph plate (top view) with a WiFi card and an EIT measurement card installed (b). The casing of the mobile urinary tract diagnostic device was designed in the form of a backpack. This solution ensures patient comfort and ease of wearing without limiting the device's functionality. Sketches of a pattern were made for sewing the textile rear element of the backpack along with shoulder straps that hold the device stably. A ventilation system (using foam and spacer mesh) and an adjustment system (buckles and clamp adjusters) were designed. Textile materials were carefully selected to ensure both durability and comfort (ventilation, no body contact with the housing body). A vital aspect considered at the conceptual stage was the possibility of cleaning and disinfecting all elements.
Several actions were taken to optimize the comfort of use and functionality in designing a new casing for the device adapted for children. The case size was reduced as much as possible to accommodate the smaller dimensions of the electronics while maintaining a flat surface against the back. This solution was used because the anatomical structure of a two-year-old's back is significantly different from that of an eighteen-year-old, which makes it impossible to adapt the casing to its curvature. To increase wearing comfort, foam and spacer mesh are used on the back of the casing and on the shoulder straps, which improves air circulation and reduces pressure on the child's back. Another important element is the battery, whose cells are arranged so that the device's weight is evenly distributed. Additionally, the battery module has been designed so that it can be easily unscrewed and the cells removed, allowing them to be replaced. The housing also has specially designed harness holders, which ensure stable attachment of the device. The design also includes holes for connectors such as EIT, UST, power, and USB, as well as buttons and mounting holes, which facilitate the installation of additional elements and ensure easy maintenance. Additionally, ventilation holes have been designed in the housing to optimize the device's operation and prevent it from overheating. These holes ensure adequate air circulation, which is crucial for maintaining the device's proper operating temperature and avoiding overheating of electronic components. The housing also includes a place for an LED strip, which has been integrated to complement the geometry of the new housing, adding an aesthetic and functional visual element. The entire project focuses on ensuring comfort, safety, and efficiency of use, which is crucial for devices intended for younger users. To illustrate the device's size on the patient, a visualization was created that shows the body model of a child aged 3-4 years, 100 cm tall. This visualization helps in understanding the proportions of the device relative to the build of an example target patient. Every effort has been made to provide an ergonomic shape that fits the contours of the child's body to ensure maximum comfort and freedom of movement. Harnesses with buckles allow for precise adjustment of the device to the patient. The design of the electrode insert, which is an integral part of the measurement system, has been improved. The insert has an electrode system whose arrangement is optimized for EIT and UST measurements. The electrode insert allows easy and safe connection to the measurement system (Fig. 7). Assessment of prototype functionality The aim was to perform a performance test of the device. This test was performed using test software that forces the tomograph to achieve maximum performance over a Wi-Fi wireless network. One hundred measurements were made, and the execution time was 31 seconds, which means that the average measurement duration is 0.31 seconds, i.e., an average of about 3 measurements per second (Fig. 8). Then, the entire system with the introduced modifications was validated. Before final certification, the tomograph for examining the lower urinary tract was tested as part of the design tests. Four tests were carried out: (1) conducted emissivity, (2) electromagnetic compatibility, (3) immunity to electromagnetic interference, and (4) resistance to ESD discharges.
Conducted emission tests did not reveal any interference exceeding the permissible standards from the device charger to the network. Electromagnetic compatibility tests were carried out in the 30 MHz-1 GHz band; the tests made it possible to locate sources of electromagnetic field emission exceeding the permissible standards. The problems were mainly due to insufficient filtering of signals entering the device through the cabling. During the tests, the problems found were successively eliminated, and the measurements were repeated until the emission sources that exceeded the standards were removed entirely. The test was completed with a correct result, and the modifications introduced in the device's structure were incorporated into the electrical design. During the electromagnetic interference resistance tests, the tomograph was disturbed by a field with an intensity of 10 V/m over the frequency range 80 MHz-6 GHz in two antenna polarizations. This test was aimed at checking whether the electromagnetic field could disrupt or interrupt the operation of the device. The examination revealed that, with one of the antenna polarizations, there were problems with the proper operation of the tomograph. The cause of these problems was the LED strip on the front of the device, which picked up noise and passed it on to the motherboard microcontroller. This problem was also solved on-site by installing a ferrite bead on the cable connecting the LED strip and the motherboard. The test was ultimately completed with a positive result. The last test performed on the tomograph was resistance to ESD discharges. The device was repeatedly tested at voltages of 8 kV and 15 kV from every possible side. The discharges were also directed toward the EIT electrodes and the UST head. Without USB service cables connected, the device worked continuously during these tests. With USB service cables connected, problems with wired communication occurred already at an 8 kV surge. For these tests, this is an acceptable situation: the device may have trouble communicating with the external system but should be ready to resume transmission without a power reset. These problems could not be repeated during the final certification tests because all communication is carried out only via Wi-Fi, and the end user does not need access to the service USB ports. After the design tests, several modifications were made to the device's main board and to the adapters installed in the measurement probe plugs. The modifications consisted mainly of improving the filtering of signals leaving the device through the cabling and shielding the places on the motherboard that were the largest sources of EMI, in particular long signal lines and high-frequency lines, where additional capacitors were added to smooth the edges. The measurement probe adapters were equipped with appropriately selected ferrite beads.
Conclusions The task aimed to miniaturize the functional modules of the system while ensuring energy optimization. The existing diagram of the device's motherboard was completely remodeled with respect to the power supply method. Each circuit in the new design can have its power supply controlled remotely. Critical circuits, i.e., the STM32H7 microcontroller and the ESP32 WiFi communication board, are powered throughout the device tests by the high-efficiency LMR12020 converter operating at a frequency of 2 MHz. In contrast, the power supply to the UST and EIT measurement cards can be disconnected during breaks between subsequent measurements, which significantly reduces the consumption of energy stored in the batteries. The new design also changed the way high voltage is generated. The latest model of the HV converters necessary to develop the ultrasonic excitation signal for the phased measuring head works in SEPIC technology, thanks to which these converters operate with much higher current efficiency and voltage stability than the previously used NMT1272SC compact converters with galvanic isolation. The temperature characteristics as a function of load of the new LT3958 converters are also much better and more stable. Despite the better properties of the SEPIC converters, it remains possible to install a symmetrical isolated converter PDQE15-Q24-D24-D on the motherboard. This converter differs from the previously used one, but it also works more stably than the NMT1272 converters. The new design of the motherboard also has an integrated BMS (Battery Management System) for four 18650 battery cells, necessary for energy optimization of the charging process. Additionally, the motherboard is equipped with a Li-Ion battery charger based on the LTC4006EGN-4 system presented in the previous stage of this project. The final selection of the HV converter will be made during head energy consumption tests. Figure 2. Connection graph of the measuring block. Figure 3. Block diagram of the measurement process control mechanism. Figure 4. A tomograph for examining the lower urinary tract while assessing the stability of the system. Figure 7. Visualization of the device housing, connectors, cables, and electrode insert on a human body (height 100 cm) (a) and insert with detached UST head (b). Figure 8. Series of 100 measurements: duration of a single measurement [s].
4,579.4
2024-08-20T00:00:00.000
[ "Engineering", "Computer Science" ]
The process of coevolutionary competitive exclusion: speciation, multifractality and power-laws in correlation Competitive exclusion, a key principle of ecology, can be generalized to understand many other complex systems. Individuals under survival pressure tend to be different from others, and correlations among them change in correspondence with the updating of their states. We show with numerical simulation that these aptitudes can contribute to group formation or speciation in social fields. Moreover, they can lead to power-law topological correlations of complex networks. By coupling the updating states of nodes with the variation of connections in a network, structural properties with power-laws and functions like multifractality, spontaneous ranking and evolutionary branching of node states can emerge simultaneously from the present self-organized model of a coevolutionary process. The process of competitive exclusion [1] occurs in many real systems: evolutionary branching of species in ecosystems, citations in scientific research and designation of consumer goods are examples among many others. It is in fact a fundamental ingredient governing the main dynamical behaviors of systems which are nowadays often described with complex networks [2]. However, the contribution of competitive exclusion to the interactional structure of networks and to their functional features has not been widely appreciated up to now. In modeling a system, individuals are represented as nodes and correlations among them are represented as edges of a graph. The scale-free property [3], characterized by a power-law degree distribution, has attracted extensive attention since it reflects a general feature of diverse systems such as the Internet, citation networks, protein-protein interaction, and so on [2]. In most previous models, the dynamics of networks and the dynamics on the networks are separated. The interplay between the formation of topological structure and the functions that emerge from the network is usually neglected, which is reasonable when the structure is independent of the dynamical states of nodes, or when these two sides vary at rather different speeds. However, in many practical phenomena like academic and artistic creation, financial transactions, global climate fluctuation and the synaptic plasticity of the neuron network in the brain [4], both the structure and the functions emerge from the identical process, and time-dependent variations of both the individual states and the local connections of nodes feed back on each other. Therefore, novel models with coevolution mechanisms [5] underlying them have appeared to meet this need.
Unfortunately, scarcely could one produce both scale-free structure and collective dynamics of nodes simultaneously. On the other hand, new nodes are often assumed to know the global information of the whole growing network, which is usually impossible for huge systems. In this sense it is necessary to set up models based on local interactions to see whether structure and functions at the system level will emerge from self-organized dynamics [6]. As is well known, competitive exclusion plays a key role in the formation of species. There is strong competition among species occupying the same or nearest loci; survival pressure forces them to drift their traits away from the local average level, and gradually induces evolutionary branching of species. Sympatric speciation [7] in an ecosystem is a recent focus of naturalists. It refers to the origin of two or more species from a single local population. The seceder model [8], based on a simple rule of local third-order collision, succeeded in mimicking such a process and capturing its similarity to group formation in society. A network version [9] of it has been reported, giving rise to a possible mechanism for community structure and clustering in social networks. In this paper the principle of competitive exclusion is generalized outside the realm of ecology; the seceder model is modified to describe temporally updated states of nodes together with the corresponding variation of the connections among them. We show that the generic natures of members in diverse systems, i.e., to be different from others under the pressure of competition, and the coevolution between updating node states and varying connections among nodes, can lead to the simultaneous emergence of evolutionary branching of individual traits, spontaneous ranking and multifractality of node states, and a power-law topological structure of correlations in a system. In this way we are able to understand scale-free phenomena and other characteristics in various fields through a novel common mechanism. Such self-organized coevolution models of scale-free networks with both structural and functional properties integrated are still few, to the best of our knowledge. We set up the present model through three iteration rules. (1) Network growth starts from a primitive complete graph with $m_0$ nodes. Each node $i$, on joining the network, is assigned an initial state with a random real number $w(i)$ uniformly distributed in (0, 1). At each time step, a new node $i'$ is added to the preexisting network. It gives out $m$ edges ($m < m_0$) to old nodes arbitrarily. (2) At every step, each node $i$ computes $\bar{w}(i)$, the average of the state values $w(j)$ ($j \neq i$) over its nearest linked neighbors; among them it picks the one whose $w(j)$ has the maximum distance from the average $\bar{w}(i)$, i.e., $J_{max}(i)$ corresponds to $\max |w(j) - \bar{w}(i)|$. Then, a randomly selected node $j$ among the nearest neighbors of $i$ is chosen as the offspring of $J_{max}(i)$, called $J_{sed}(i)$. Different from the original seceder model [8], it is kept at its own site and its state variable is updated as $w(J_{sed}(i)) = w(J_{max}(i)) + \delta$, where the random number $\delta \in (0, 1)$ is also uniformly distributed, with a positive numerical range for wider applications. Obviously $w(i)$ here can be regarded as a time-dependent non-decreasing fitness [10]. (3) Consider the new node $i'$ at every step, together with its 'young' enough fellows (i.e., nodes $i$ with $i' - i \leq \Delta I$, where $\Delta I$ is a given integer constant implementing an aging effect [11]; hereafter we call them $I$ altogether for convenience). Search for seceders for all of $I$'s neighbors $j$.
When $w(J_{sed}(j))/w(I) \geq h$, where $h$ is a given threshold value, a new edge is added between node $J_{sed}(j)$ and $I$ (double links and self-loops are forbidden). Meanwhile, an edge linking such a node $I$ and its neighbor $j$ is removed if the condition $w(j)/w(I) < h$ or $w(I)/w(j) < h$ is satisfied. Finally, if any node $i$ becomes isolated due to edge-cutting, it is directly linked to its seceder $J_{sed}(i)$. The threshold description of correlation adopted here is widely used in modeling complex systems [12]. The iteration rules of the model are in fact abstracted from observation of real systems. In artistic creation and scientific research, people have a generic tendency to create new works so that they behave differently from others. Sparks from the collision of opinions with large differences often result in creation. As is well known, scholars are often under the pressure of publication. Papers with the same or a very similar viewpoint, method and results to existing ones have less chance of getting published. Here we see that competitive exclusion promotes the prosperity of scientific research. Suppose a graduate student just starts his academic career by joining research on a certain topic. Usually he has to focus on some papers after extensive searching due to limited time, and often he extends his reading to their references. Generally speaking, he needs to pay more attention to papers with sharp contrasts against his knowledge background ($w(i)$), and to understand recently published literature ($w(J_{sed}(j))$) to inspire new ideas for his own paper. But in reading he may be restrained by the limits of his understanding. Therefore, it is natural to predict a suitable range of threshold ratios within which papers with state values $w(J_{sed}(j))$ would be cited (connected). Papers found in selective reading based on one's local sight are likely to be cited, increasing the in-degree of those papers. Conversely, papers (with node state $w(j)$) that have a small difference (too low a ratio $w(j)/w(i)$) from $w(i)$ are less cited (the link between nodes $i$ and $j$ is trimmed). In any case, a recently updated node state ($w(J_{sed}(i))$) would be more attractive to a failure (an isolated node). Artists update themselves by continuous creation; therefore the co-occurrence network of musicians serves as another example of competitive exclusion. We know that musicians with similar genres are competitors for performances. Managers usually do not intend to arrange opportunities for them to appear on the same stage, since audiences prefer performances with diversity. It is assumed that whoever created a playlist was using a certain criterion to group artists in it. One does not normally find concerts with a mixture of heavy rock, jazz and piano sonatas; therefore a range of thresholds is used to balance homogeneity and heterogeneity. As a result of coevolution, both citation networks [13,14,15] and musician networks [16] display the topology of a scale-free structure, although most foodwebs do not [17]. Suppose a man faces a job crisis: he has to refresh himself to become non-trivial in order to get out of the dilemma. He may attempt to learn from, or even join, a successful person by recommendation of a common friend. But whether they can sustain a close relation depends on whether they are mutually needed and compensate each other in a proper measure (e.g., $w(i)$). In all these cases the states of nodes keep varying with time, and the correlations among them change in correspondence with such variations along an optimal gradient.
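A minimal sketch of the growth rule (1) and the state-update rule (2) is given below, assuming the adjacency structure is held as a dict of neighbor sets; it is an illustrative reconstruction of the rules as stated, not the authors' code, and the threshold rewiring of rule (3) is omitted for brevity.

```python
import random

def seceder_step(adj, w, delta_max=1.0):
    """One sweep of rule (2): for each node i, find the neighbor J_max whose
    state is farthest from the neighborhood average, then set a randomly
    chosen neighbor J_sed to w(J_max) + delta (non-decreasing fitness)."""
    for i, neighbors in adj.items():
        if not neighbors:
            continue
        nbrs = list(neighbors)
        w_bar = sum(w[j] for j in nbrs) / len(nbrs)          # local average
        j_max = max(nbrs, key=lambda j: abs(w[j] - w_bar))   # most deviant neighbor
        j_sed = random.choice(nbrs)                          # its "offspring"
        w[j_sed] = w[j_max] + random.uniform(0.0, delta_max)

# Rule (1): start from a complete graph on m0 nodes, states uniform in (0, 1).
m0 = 5
adj = {i: {j for j in range(m0) if j != i} for i in range(m0)}
w = {i: random.random() for i in range(m0)}

for step in range(100):
    new = m0 + step
    old = random.sample(list(adj), k=3)      # m = 3 edges to arbitrary old nodes
    adj[new] = set(old)
    for o in old:
        adj[o].add(new)
    w[new] = random.random()
    seceder_step(adj, w)
```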
Coevolution of node states and topological connections yields most structural properties of complex networks by self-organization. Numerical simulation reveals a power-law distribution of node degree, $p(k) \sim k^{-\gamma}$, which is illustrated in Fig. 1a. Without ensemble averaging over network configurations, it is shown that in the case of h = 3.0 the distribution remains invariant for all values of m, with slope γ = 2.39. In-degree is counted for a node as the edges it accepts from younger ones. The in-degree distribution also shows an essentially power-law form, as shown in the inset of Fig. 1a. The slope of the double-logarithmic line $p_i(k) \sim k^{-\beta}$ is around β = 2.0, which is in accordance with the numerical results of another model [13] and with empirical studies [14,15]. In Fig. 1b we show the variation of the power exponent γ with the correlation threshold h (other parameters are the same as those in Fig. 1a). The exponents lie in the range (2.0, 3.0), which fits real complex systems well, and the inset of Fig. 1b displays that essentially power-law behavior of the in-degree distributions also exists for different thresholds. The calculated Pearson coefficients r [18], which describe the degree-degree correlation of the network, are shown in Fig. 2a. They are positive, reflecting a statistical feature of social networks. Moreover, they also exhibit asymptotic power-law decay with the size of the system, i.e., $r(N) \sim N^{-\alpha}$, which is, to our knowledge, a specific feature first predicted by the present model. It is expected to be verified by empirical data from real complex systems. Fig. 2b displays the size-dependent decay of the clustering coefficient [2] (see Fig. 3). However, when we allow a small portion (ten to twenty percent) of the cut-off operations not to carry out rule 3, scale-free properties are retrieved promptly (see Fig. 3). Moreover, the degree-degree correlations restore the corresponding assortativity. This implies that randomness may play an essential role in the origin of scale-free behaviors, since there should be more or less relaxation of deterministic rules in complex systems [19]. Ranking behavior of node states also emerges spontaneously from coevolution. The whole range of node states is divided into 100 intervals in Fig. 4 to show that the values are distributed quite discontinuously. This is drastically different from the uniform initial distribution and is comparable to group formation in the original seceder model (see Fig. 1 of ref. [8]). Inherited from the seceder model, the two prominent traits at both ends (see Fig. 4) can be regarded as the result of evolutionary branching [7] with a tendency toward elimination of intermediate genotypes. Here, species in sympatry seem likely to drift their traits away from the local average level, since the strongest competition exists between similar genotypes [20]. In any case, a closer scrutiny of the applicability of the coevolutionary mechanism to sympatric speciation would be valuable. Applied to citation networks, this means that long-term coevolution gradually eliminates the publishing chances of papers at the middle level; instead, the population of quality tends to be divided and shifted toward both ends. Going beyond the seceder model [8,9], our numerical results also give support to the assumption of the ranking model [21] of scale-free networks with a self-organization mechanism. It is noticeable that the scale-free property can be obtained as a result of coevolution without the prerequisite of preferential attachment based on a power-law function of the prestige ranks of nodes.
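As an illustration of how such exponents can be read off a simulated network, the sketch below estimates γ by a least-squares fit on the log-log degree histogram of a degree sequence; this is a rough diagnostic only (maximum-likelihood estimators are preferable for serious work), and the degree sequence shown is hypothetical rather than produced by the model above.

```python
import numpy as np

def powerlaw_slope(degrees, k_min=2):
    """Rough estimate of gamma in p(k) ~ k^-gamma via a log-log linear fit."""
    degrees = np.asarray(degrees)
    ks, counts = np.unique(degrees[degrees >= k_min], return_counts=True)
    pk = counts / counts.sum()
    slope, _ = np.polyfit(np.log(ks), np.log(pk), 1)
    return -slope

# Hypothetical degree sequence with a Pareto-like tail (gamma ~ 2.4).
rng = np.random.default_rng(0)
u = rng.random(20000)
degrees = np.floor((1 - u) ** (-1.0 / 1.4)).astype(int) + 1

print(f"estimated gamma = {powerlaw_slope(degrees):.2f}")
```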
The updating process of node states induced by competitive exclusion, coupled with topological variation, results in collective behavior of nodes, which reflects functional characteristics of the network apart from structural ones. Based on simulated data of node states w(i), put in the order of the time sequence of each node's participation in the network, we calculate the function $V(q,d) = \sum_l \mu_l(q,d) \ln \mu_l(q,d)$ with the standard box-counting technique [22] for different moments q versus $x = \ln d$, where d represents the scale of the boxes and $\mu_l$ is the normalized measure of the summation over the states in box l. Essential linearity can be seen for at least 4-5 central lines in Fig. 5, and the resulting singularity spectrum $f(\alpha)$ of the multifractal is shown in its inset. Interestingly, the present work gives another example of long-range correlated gradient-driven growth of a multifractal entity [23], with a scale-free network as its inherent skeleton. The multifractality of the node states is found to emerge together with the scale-free property of the structure and to vanish correspondingly. We have verified the correspondence between the two properties in the range $m_0 \in [15, 50]$ and $\Delta I \in [5, 15]$. Therefore, the present model suggests a common mechanism for the scale-free structure of social systems together with their multifractality and assortativity as well. The simultaneous emergence of macroscopic properties on both the structural and functional sides also enables us to understand, on a novel coevolution platform with alterable details, functions in coordination with the Internet, the worldwide spatial distribution of population [23] with all kinds of transport and communication networks connecting resident sites (some of which are coevolving scale-free networks), the middle-latitude climate network [12], citation networks [13,14,15], the number distribution of species in ecological networks [24], musician networks [16], and diversity maintenance methods for evolutionary optimization algorithms [15,25]. In fact, the mechanism relies on another type of preferential attachment based on node-state correlation, without explicitly depending on node degree [3,13,26], which distinguishes it from previous ones. Starting from, but reaching beyond, the seceder model, we can account for the generic natures of individuals, namely updating their states to self-adapt under competitive exclusion, with the correlations among them changing correspondingly, as the driving force in the self-organization of some evolutionary complex systems characterized by power-law distributions of various topological quantities and specific functions.
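A minimal sketch of the box-counting quantity defined above is given below, assuming the node states form an ordered 1-D sequence; the normalization $\mu_l(q,d) = p_l(d)^q / \sum_m p_m(d)^q$ follows the standard moment method, and the input sequence is hypothetical.

```python
import numpy as np

def V(states, q, d):
    """Box-counting partition function V(q,d) = sum_l mu_l ln mu_l for a
    1-D sequence of node states split into boxes of length d."""
    states = np.asarray(states, dtype=float)
    n_boxes = len(states) // d
    box_sums = states[: n_boxes * d].reshape(n_boxes, d).sum(axis=1)
    p = box_sums / box_sums.sum()          # normalized measure per box
    p = p[p > 0]
    mu = p ** q / np.sum(p ** q)           # q-weighted measure
    return np.sum(mu * np.log(mu))

# Hypothetical node-state sequence, non-decreasing as in the model above.
rng = np.random.default_rng(1)
states = np.cumsum(rng.random(4096))

for q in (-2, 0, 2):
    pts = [(np.log(d), V(states, q, d)) for d in (4, 8, 16, 32, 64)]
    print(q, pts)   # V vs ln d should be roughly linear for a multifractal
```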
3,572.8
2007-10-13T00:00:00.000
[ "Physics", "Mathematics" ]
PsychRNN: An Accessible and Flexible Python Package for Training Recurrent Neural Network Models on Cognitive Tasks Visual Abstract Example workflow for using PsychRNN. First, the task of interest is defined, and a recurrent neural network (RNN) model is trained to perform the task, optionally with neurobiologically informed constraints on the network. After the network is trained, the researchers can investigate network properties including the synaptic connectivity patterns and the dynamics of neural population activity during task execution, and other studies, e.g., those on perturbations, can be explored. The dotted line shows the possible repetition of this cycle with one network, which allows investigation of training effects of task shaping, or curriculum learning, for closed-loop training of the network on a progression of tasks. Introduction Studying artificial neural networks (ANNs) as models of brain function is an approach of increasing interest in computational, systems, and cognitive neuroscience (Kriegeskorte, 2015;Yamins and DiCarlo, 2016;Richards et al., 2019). ANNs comprise many simple units, called neurons, whose synaptic connectivity patterns are iteratively updated via deep-learning methods to optimize an objective. For application in neuroscience and psychology, ANNs can be trained to perform a cognitive task of interest, and the trained networks can then be analyzed and compared with experimental data in a number of ways, including their behavioral responses, neural activity patterns, and synaptic connectivity. Recurrent neural networks (RNNs) form a class of ANN models which are especially well-suited to perform cognitive tasks which unfold across time, common in psychology and neuroscience, such as decision-making or working-memory tasks (Sussillo, 2014;Song et al., 2016;Barak, 2017;Yang and Wang, 2020). In RNNs, highly recurrent synaptic connectivity is optimized to generate target outputs through the network population dynamics. RNNs have been applied to model the dynamics of neuronal populations in cortex during cognitive, perceptual, and motor tasks and are able to capture associated neural response dynamics (Mante et al., 2013;Sussillo et al., 2015;Carnevale et al., 2015;Rajan et al., 2016;Remington et al., 2018;Masse et al., 2019). Despite growing impact of RNN modeling in neuroscience, wider adoption by the field is currently hindered by the requisite knowledge of specialized deep-learning platforms, such as TensorFlow or PyTorch, to train RNN models. This creates accessibility barriers for researchers to apply RNN modeling to their neuroscientific questions of interest. It can be especially challenging in these platforms to implement neurobiologically motivated constraints, such as structured synaptic connectivity or Dale's principle which defines excitatory and inhibitory neurons. There is also need for modular frameworks to define the cognitive tasks on which RNNs are trained, which would facilitate investigation of how task demands shape network solutions. To better model experimental paradigms for training animals on cognitive tasks, an RNN framework should enable investigation of task shaping, in which training procedures are progressively adapted to the subject's performance during training. To address these challenges, we developed the software package PsychRNN as an accessible, flexible, and extensible computational framework for training RNNs on cognitive tasks. 
Users define tasks and train RNN models using only Python and NumPy, without requiring a comprehensive understanding of ANNs. The training backend is based on TensorFlow and is extensible for projects requiring additional customization. PsychRNN implements a number of specialized features to support applications in systems and cognitive neuroscience, including neurobiologically relevant constraints on synaptic connectivity patterns. Specification of cognitive tasks has a modular structure, which aids parametric variation of task demands to examine their impact on model solutions and promotes code reuse and reproducibility. Modularity also enables implementation of curriculum learning, or task shaping, in which tasks are adjusted in closed loop based on performance. Our overall goal for PsychRNN is to facilitate application of RNN modeling in neuroscience research. Package structure To serve our objectives of accessibility, extensibility, and reproducibility, we divided the PsychRNN package into two main components: the Task object and the Backend (Fig. 1). We anticipate that all PsychRNN users will want to be able to define novel tasks specific to their research domains and questions. The Task object is therefore fully accessible to users without any TensorFlow or deep-learning background. Users familiar with Python and NumPy are able to fully customize novel tasks, and they can customize network structure (e.g., number of units, form of nonlinearity, connectivity) through preset options built into the Backend. For users with greater need for flexibility in network design, the Backend is designed for accessibility, customizability, and extensibility. Backend customization typically requires knowledge of TensorFlow. For those with TensorFlow knowledge, PsychRNN's modular design enables definition of new models, regularizations, loss functions, and initializations. This modularity facilitates testing hypotheses regarding the impact of specific potential structural constraints on RNN training without having to expend time and resources designing a full RNN codebase. Task object The Task object is structured to allow users to define their own new task using Python and NumPy. Specifically, generate_trial_params creates trial-specific parameters for the task (e.g., stimulus and correct response). trial_function specifies the input, target output, and output mask at a given time t, given the parameters generated by generate_trial_params. PsychRNN comes with three example tasks that are well researched by cognitive neuroscientists: perceptual discrimination (Roitman and Shadlen, 2002), delayed discrimination (Romo et al., 1999), and delayed match-to-category (Freedman and Assad, 2006). These tasks highlight possible schemas users can apply to specifying their own tasks and provide tasks with which users can test the effect of different structural network features. Tasks can optionally include accuracy functions. Accuracy measures performance in a manner more relevant to experiments than traditional machine learning measures such as loss. On a given trial, accuracy is either one (success) or zero (failure). In contrast, loss on a given trial is a real-numbered value. Accuracy is calculated over multiple trials to obtain a ratio of correct trials to total trials. Accuracy is used as the default metric by the Curriculum class.
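A sketch of how such a task might be subclassed is shown below, assuming the Task base class and the generate_trial_params/trial_function method signatures described in the PsychRNN documentation; the go/no-go stimulus logic is a hypothetical minimal example rather than one of the package's built-in tasks.

```python
import numpy as np
from psychrnn.tasks.task import Task

class SimpleGoNogo(Task):
    """Hypothetical go/no-go task: output +1 after a 'go' pulse, 0 otherwise."""

    def __init__(self, dt=10, tau=100, T=1000, N_batch=64):
        # 1 input channel, 1 output channel
        super(SimpleGoNogo, self).__init__(1, 1, dt, tau, T, N_batch)

    def generate_trial_params(self, batch, trial):
        # Per-trial randomization: half the trials are 'go' trials.
        return {"go": np.random.rand() > 0.5, "onset": 200}

    def trial_function(self, t, params):
        x_t = np.zeros(self.N_in)
        y_t = np.zeros(self.N_out)
        mask_t = np.ones(self.N_out)
        if params["go"] and params["onset"] <= t < params["onset"] + 100:
            x_t[0] = 1.0                           # brief 'go' pulse
        if t >= params["onset"] + 100:
            y_t[0] = 1.0 if params["go"] else 0.0  # respond after the pulse
        else:
            mask_t[:] = 0                          # don't penalize early output
        return x_t, y_t, mask_t
```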
Backend The Backend includes all of the neural network training and specification details (Fig. 1, step 2). The Backend, while accessible and customizable, was designed with preset defaults sufficient to get started with PsychRNN. The TensorFlow details are abstracted away by the Backend so that researchers are free to work with or without an understanding of TensorFlow. Additionally, since the Backend is internally modular, different components of the Backend can be swapped in and out interchangeably. In the remainder of this section, the modular components of the Backend are described so that researchers who want to go more in depth with PsychRNN know what tools are available to them. Models RNNs are a large class of ANNs that process input over time. In the PsychRNN release, we include a basic RNN (which we refer to as an RNN throughout the rest of the paper) and a long short-term memory network (LSTM) model (Hochreiter and Schmidhuber, 1997). The basic RNN model is governed by the following equations: $$x_{t+dt} = (1-\alpha)\,x_t + \alpha\left(W_{rec} f(x_t) + W_{in} u_t + b_{rec}\right) + \sigma_{rec}\sqrt{2\alpha}\;\xi_t, \qquad \alpha = \frac{dt}{\tau}$$ $$z_t = W_{out} f(x_t) + b_{out}$$ where u, x, and z are the input, recurrent state, and output vectors, respectively. $W_{in}$, $W_{rec}$, and $W_{out}$ are the input, recurrent, and output synaptic weight matrices. $b_{rec}$ and $b_{out}$ are constant biases into the recurrent and output units. dt is the simulation time step and $\tau$ is the intrinsic timescale of the recurrent units. $\sigma_{rec}$ is a constant that scales the recurrent unit noise, and $\xi$ is a Gaussian noise process with mean 0 and standard deviation 1. f is a nonlinear transfer function, which by default in PsychRNN is the rectified linear unit (ReLU). This default can be replaced with any TensorFlow transfer function. PsychRNN also includes an implementation of LSTMs, a special class of RNNs that enables longer-term memory than is easily attainable with basic RNNs (Hochreiter and Schmidhuber, 1997). LSTMs use a separate "cell state" to store information gated by sigmoidal units. Additional models can be user-defined but require knowledge of TensorFlow. Initializations The synaptic weights that define an ANN are typically initialized randomly. However, with RNNs, large differences in performance, training time, and total asymptotic loss have been observed for different initializations (Le et al., 2015). Since initializations can be crucial for training, we have included several initializations currently used in the field (Glorot and Bengio, 2010). By default, recurrent weights are initialized randomly from a Gaussian distribution with a spectral radius of 1.1 (Sussillo and Abbott, 2009). We also include an initialization called Alpha Identity that initializes the recurrent weights as an identity matrix scaled by a parameter a (Le et al., 2015). Each of these initializations can substantially improve the training process of RNNs. PsychRNN includes a WeightInitializer class that initializes all network weights randomly, all biases as zero, and connectivity masks as all-to-all. New initializations inherit this class and can override any variety of initializations defined in the base class.
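For intuition, the following NumPy sketch shows the kind of spectral-radius initialization described above; it mirrors the stated default (Gaussian weights rescaled to spectral radius 1.1) but is an illustrative reconstruction, not PsychRNN's internal WeightInitializer code.

```python
import numpy as np

def init_recurrent_weights(n_units, spec_rad=1.1, rng=None):
    """Gaussian recurrent weight matrix rescaled to a target spectral radius."""
    rng = rng or np.random.default_rng()
    w = rng.normal(0.0, 1.0 / np.sqrt(n_units), size=(n_units, n_units))
    current = np.max(np.abs(np.linalg.eigvals(w)))  # current spectral radius
    return w * (spec_rad / current)

w_rec = init_recurrent_weights(100)
print(np.max(np.abs(np.linalg.eigvals(w_rec))))  # ~1.1
```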
Figure 1. Step 1, Defining a new task requires two NumPy-based components: trial_function describes the task inputs and outputs, and generate_trial_params defines parameters for a given trial (Extended Data Fig. 1-1). Optionally, one can define an accuracy function describing how to calculate whether performance on a trial was successful. Step 2, The Backend defines the network. First, the model, or network architecture, is selected. A basic RNN and an LSTM (Hochreiter and Schmidhuber, 1997) are implemented, and more models or architectures can be defined using TensorFlow. That model is then instantiated with a dictionary of parameters, which includes the number of recurrent units and may also include specifications of loss functions, initializations, regularizations, or constraints. If any parameter is not set, a default is used. Step 3, Training parameters, such as the optimizer or curriculum, can be specified. During network training, measures of performance (loss and accuracy) are recorded at regular intervals. Optimization of the network weights is performed to minimize the loss. After training, the synaptic weight matrix can be saved, and state variables and network output can be generated for any given trial. Loss functions During training, an RNN is optimized to minimize the loss, so the choice of loss function can be crucial in determining exactly what the network learns. By default, the loss function is mean_squared_error. Our Backend also includes an option for using binary_cross_entropy as the loss function. Other loss functions can be easily defined with some TensorFlow knowledge and added to the LossFunction class. Loss functions take in the network output (predictions), the target output (y), and the output_mask, and return a float calculated using the TensorFlow graph. Regularizers Regularizers are penalties added to the loss function that may help prevent the network from overfitting to the training data. We include options for L2-norm and L1-norm regularization of the synaptic weights, which tend to reduce the magnitude of the weights and sparsify the resulting weight matrices. In addition, we include L2-norm regularization on the post-nonlinearity recurrent unit activity, r. Other regularizations can be added to the Regularizer class through TensorFlow. By default, no regularizations are used. Optimizers PsychRNN is built to take advantage of the many optimizers available in the TensorFlow package. Instead of explicitly defining equations for backpropagation through time, PsychRNN converts the user-supplied Task and RNN into a "graph" model interpretable by TensorFlow. TensorFlow can then automatically generate gradients of the user-supplied LossFunction with respect to the weights of the network. These gradients can then be used by any TensorFlow optimization algorithm, such as stochastic gradient descent, Adam, or RMSProp, to update the weights and improve task performance (Ruder, 2017). Neurobiologically motivated connectivity constraints PsychRNN is designed for the investigation of neurobiologically motivated constraints on the input, recurrent, and output synaptic connectivity patterns. The user can specify which synaptic connections are allowed and which are forbidden (set to zero) through optional user-defined masks at the point of RNN model initialization. This feature enables modeling of neural architectures including sparse connectivity and multi-region networks (Rikhye et al., 2018). Optional user-defined masks allow specification of which connections are fixed in their weight values and which connections are plastic for optimization during training (Rajan et al., 2016). By default, all weights are allowed to be updated by training. PsychRNN also enables implementation of Dale's principle, such that each recurrent unit's synaptic weights are all of the same sign (i.e., each neuron's postsynaptic weights are either all excitatory or all inhibitory; Song et al., 2016). The optional parameter dales_ratio sets the proportion of excitatory units, with the remaining units set as inhibitory.
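To illustrate how such constraints act on a weight matrix, the NumPy sketch below applies a Dale sign constraint and a sparsity mask to a raw weight matrix; it mirrors the constraints described above in spirit but is not PsychRNN's internal implementation, and the row-as-outgoing-weights convention is an assumption of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
w_raw = rng.normal(size=(n, n))

# Dale's principle: 80% excitatory units, 20% inhibitory. Each unit's outgoing
# weights share one sign, enforced by a sign vector applied to |W|.
dales_ratio = 0.8
signs = np.where(np.arange(n) < int(dales_ratio * n), 1.0, -1.0)
w_dale = np.abs(w_raw) * signs[:, None]   # row i = outgoing weights of unit i

# Connectivity mask: forbid self-connections (autapses) and a random 30% of edges.
mask = (rng.random((n, n)) > 0.3) & ~np.eye(n, dtype=bool)
w_eff = w_dale * mask

assert np.all(w_eff[signs > 0] >= 0) and np.all(w_eff[signs < 0] <= 0)
```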
Curriculum learning Curriculum learning refers to the presentation of training examples structured into successive discrete blocks sorted by increasing difficulty (Bengio et al., 2009; Krueger and Dayan, 2009). Task modularity in PsychRNN enables an intuitive framework for curriculum learning that does not require TensorFlow knowledge. Curriculum learning is implemented by passing a Curriculum object to the RNN model when training is executed. Although the Curriculum object is very flexible and customizable, in its simplest form it can be instantiated solely with the list of tasks that one wants to train on sequentially. The Curriculum class included in PsychRNN is flexible and extensible. By default, accuracy, as defined within a task, is used to measure performance on the task. When performance surpasses a user-defined threshold, the network starts training on the next task. The Curriculum object thus includes an optional input array, thresholds, for specifying the performance thresholds required to advance to each successive task. Apart from accuracy, one may wish to advance the curriculum stage using an alternative measure such as loss or number of iterations. We include an optional metric function that can be passed into the Curriculum class to define a custom measure governing task-stage transitions.
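As a sketch of how a curriculum might be assembled, one could stage a perceptual discrimination task from easy to hard; this assumes the Curriculum and PerceptualDiscrimination constructor arguments shown in the package documentation (thresholds, coherence), and the particular coherence schedule is illustrative rather than prescribed.

```python
from psychrnn.backend.curriculum import Curriculum
from psychrnn.tasks.perceptual_discrimination import PerceptualDiscrimination

# Stage the task from easy (high coherence) to hard (low coherence).
coherences = [0.7, 0.5, 0.3, 0.1]
tasks = [PerceptualDiscrimination(dt=10, tau=100, T=2000, N_batch=64,
                                  coherence=c) for c in coherences]

# Advance to the next stage once accuracy on the current stage exceeds 90%.
curriculum = Curriculum(tasks, thresholds=[0.9] * len(tasks))

# The curriculum is then passed in with the training parameters, e.g.:
# model.train(tasks[0], {"curriculum": curriculum})
```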
Modularity The PsychRNN Backend is complemented by the Task object, which enables users to easily and flexibly specify tasks of interest without any prerequisite knowledge of TensorFlow or machine learning. The Task object allows flexible input and output structure, with tasks varying in not only the task-specific features but also the number of input and output channels (Fig. 3). Furthermore, the object-oriented structure of task definition in PsychRNN facilitates tasks that can be quickly and easily varied along multiple dimensions. For example, in an implementation of a delayed discrimination task (Romo et al., 1999), we can vary stimulus and delay durations with a set of two parameters (Fig. 3B). Importantly, not only can we vary the inputs themselves, but the integration between the Task object and the Backend also makes it possible to vary the structure of the network from the Task object. This is because the RNN models are constructed after task definitions using task parameters, and are therefore custom-structured to accommodate task features. In our implementation of a delayed match-to-category task (Freedman and Assad, 2006), we can freely change the number of inputs (input discretization) and the number of outputs (categories; Fig. 3D). This flexibility allows researchers to investigate how the network solutions of trained RNNs may depend on task or structural properties (Orhan and Ma, 2019).

Figure 2. Example task (perceptual discrimination). A, Inputs and target output as specified by the task (top two panels), and the network's output for the displayed input (bottom panel). Because the task-specified output mask is zero during the stimulus period, the network is not directly constrained during that period. B, Percent of decisions the network makes for choice 1 at varying coherence levels. Negative coherence values indicate stimulus inputs for which choice 2 was rewarded. A psychometric function is fit to the data (black). This plot validates that the network successfully learned the task. C, State variable activity traces across a range of stimulus coherences, for multiple example units, averaged over correct trials. The network produces state variable activity across all units. D, Population activity traces in the subspace of the top two principal components. Principal component analysis was applied to the activity matrix formed by concatenating across coherences the trial-averaged correct-trial traces for each unit. E, Minimal example code for using PsychRNN. All relevant modules are imported (lines 1-3), a PerceptualDiscrimination Task object is initialized (line 4), the basic RNN model is instantiated and trained (lines 5-9), and output and state variables are extracted (lines 10-11).

Figure 3. Modularity of task definition. A, Task modularity. This schematic illustrates the trial progression of one trial of a delayed discrimination task. The task is modularly defined such that stimulus and delay duration can be varied easily, simply by changing task parameters. B, One input channel generated by a delayed discrimination task, with varied stimulus and delay durations (Extended Data Fig. 3-1). Delay duration is varied across columns, and stimulus duration is varied across rows. C, Structural modularity. Tasks can provide any number of channels for input and output on which to train a particular RNN model. Variation in numbers of inputs and outputs is enabled through simple modular task parameters in PsychRNN. D, Example of a match-to-category task. The number of inputs (colored outer circles) is varied across columns, and the number of output categories (Cat) is varied across rows (Extended Data Fig. 3-2).

Neurobiologically motivated connectivity constraints While there are multiple general-purpose frameworks for training ANNs, neuroscientific modeling often requires neurobiologically motivated constraints and processes which are not common in general-purpose ANN software. PsychRNN includes a variety of easily implemented forms of constraints on synaptic connectivity. The default RNN network has all-to-all connectivity and allows units to have both excitatory and inhibitory connections. Users can specify which potential synaptic connections are forbidden or allowed, as well as which are fixed and which are plastic for updating during training. Furthermore, PsychRNN can enforce Dale's principle, so that each unit has either all-excitatory or all-inhibitory synapses onto its targets. Figure 4 demonstrates these constraints, including networks prevented from making autapses (i.e., self-connections). Networks with block-like connectivity matrices can be used to model multiple brain regions, with denser within-region connectivity and sparser between-region connectivity.

Figure 4. Neurobiologically motivated constraints. This figure illustrates the effects of different connectivity constraints on the recurrent weight matrices and psychometric functions of RNNs trained on the perceptual discrimination task (Fig. 3). For the recurrent weight matrices (top row), red and blue show excitatory and inhibitory connections, respectively. The coherence plots (bottom row) show that the network successfully trains to perform the task while adhering to the constraints. A, B, This network is constrained to have no autapses, i.e., no self-connections, as illustrated by zeros along the diagonal of the weight matrix. C, D, This network is constrained to have two densely connected populations of units with sparse connections between the populations. These constraints can be used to simulate long-range interactions among different brain regions. E, F, This network is constrained to follow Dale's principle: each neuron has either entirely excitatory or entirely inhibitory outputs. G, H, This network has Dale's principle enforced and has a subset of weights which are fixed, i.e., they cannot be updated by training. In this example, all connections between excitatory and inhibitory neurons are fixed, while excitatory-to-excitatory and inhibitory-to-inhibitory connections are plastic during training.

Curriculum learning One important feature included in PsychRNN is a native implementation of curriculum learning. Curriculum learning, also referred to as task shaping in the psychological literature (Krueger and Dayan, 2009), refers to structuring training examples such that the agent learns easier trials or more basic subtasks first (Fig. 5A,B). Curriculum learning has been shown to improve ANN training both in training iterations to convergence and in the final loss (Bengio et al., 2009). In neuroscience, researchers adopt a wide variety of different curricula to train animals to perform full experimental tasks. By including curriculum learning, PsychRNN enables researchers to investigate how training curricula may impact resulting behavioral and neural solutions to cognitive tasks, as well as potentially identify new curricula that may accelerate training. Further, curricula can be used more broadly to investigate how learning may be influenced and biased by the sets of tasks an agent has previously encountered. As an example, we trained RNNs on a version of the perceptual decision-making task (from Fig. 2) and examined the effects of using curriculum learning in the training procedure (Fig. 5C,D). Here, curriculum learning involved initially training the model at high stimulus coherences and introducing progressively lower coherences when the model's performance reached a threshold level. We found that curriculum learning enabled faster training of models, as commonly observed in experiments (Krueger and Dayan, 2009).
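A curriculum of this kind maps naturally onto the Curriculum class described earlier. The sketch below reuses the model from the earlier example; the module path, the coherence keyword of the task, and the exact way the Curriculum object is passed at training time are assumptions based on the package documentation:

```python
from psychrnn.backend.curriculum import Curriculum
from psychrnn.tasks.perceptual_discrimination import PerceptualDiscrimination

# One task per curriculum stage, ordered from easy to hard
# (here, decreasing stimulus coherence, as in Fig. 5).
stages = [PerceptualDiscrimination(dt=10, tau=100, T=2000, N_batch=50, coherence=c)
          for c in (0.7, 0.5, 0.3, 0.1)]

# In its simplest form the Curriculum takes just the task list; 'thresholds'
# optionally sets the accuracy required to advance past each stage, and a
# custom 'metric' callable could replace accuracy with, e.g., loss.
curriculum = Curriculum(stages, thresholds=[0.9, 0.9, 0.9, 0.9])

# The Curriculum object is handed to the model with the training parameters,
# so stage transitions are managed automatically during training.
model.train(stages[0], {'curriculum': curriculum})
```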
Comparison to other frameworks PsychRNN compares favorably to the alternative high-level frameworks available (Fig. 6). Most similar to PsychRNN is PyCog (Song et al., 2016), another Python package for training RNNs designed for neuroscientists. PsychRNN presents several key advantages over PyCog. First, PyCog's backend is Theano, which is no longer under active support and development. Second, PyCog has no native implementation of curriculum learning. Third, task definitions in PyCog are not themselves modular, making experiments which are trivial to implement in PsychRNN more laborious and cumbersome for the user. Lastly, PyCog utilizes a built-in "vanilla" stochastic gradient descent algorithm, whereas PsychRNN allows users to select any optimizer available in the TensorFlow package. Alternatively, research groups may use a general-purpose high-level wrapper of TensorFlow, such as Keras (Chollet, 2015), which is not specifically designed for neuroscientific research. Importantly, these frameworks do not come with any substantial ability to implement biological constraints. Users interested in testing the impact of such constraints would need to modify the native Keras Layer objects themselves, which is nontrivial. In addition, Keras does not provide a framework for modular task definition, which therefore requires the user to translate inputs and outputs into a form compatible with the model. PsychRNN, by close integration with the TensorFlow framework, manages to maintain much of the power and flexibility of traditional machine-learning frameworks while also providing custom-built utilities specifically designed for addressing neuroscientific questions.

Figure 5. Curriculum learning. A, Schematic of curriculum learning, or task shaping. The network is trained on selections from the trial set, then tested on selections from that trial set. Depending on the performance when testing on the trial set, the trial set can then be updated, e.g., to contain progressively more difficult trial conditions. B, Example schematic of increasing difficulty of the trial set (top) paired with performance over time (bottom). The task difficulty is progressively increased each time performance reaches the performance threshold. C, Comparison of the number of iterations needed to train a network to perform the perceptual discrimination task (from Fig. 3) with 90% accuracy at a coherence level of 0.1. Ten networks were randomly initialized, and each was trained both on a curriculum with decreasing coherence and without any curriculum, with fixed coherence. Networks trained without curriculum learning were trained solely on stimuli with coherence = 0.1. Networks trained with curriculum learning were trained with a curriculum with coherence decreasing from 0.7 to 0.5 to 0.3 to 0.1 as performance improved (see Extended Data Fig. 5-1). When the network reached 90% accuracy on stimuli with coherence = 0.1, training was stopped. Networks trained with curriculum learning reached 90% accuracy significantly faster than networks without it (p < 0.01). D, Trajectories of difficulty (defined here as inverse coherence), accuracy, and loss (mean squared error) across training iterations, for two identically initialized networks from C, one of which was trained with curriculum learning and one of which was trained without curriculum learning.

Discussion PsychRNN provides a robust and modular package for training RNNs on cognitive tasks and is designed to be accessible to researchers with varying levels of deep-learning experience. The separation into a Python-based and NumPy-based Task object and a primarily TensorFlow-based Backend expands access to RNN model training without reducing flexibility and power for users who require more control over the precise setup of their networks. Further, the modularity of task and network elements enables easy investigation of how task and structure affect the learned solutions in RNNs. Lastly, the modular structure facilitates curriculum learning, which makes optimization more efficient and more directly comparable to animal learning. PsychRNN's modular design enables straightforward implementation of curriculum learning to facilitate studies of how training trajectories shape network solutions and performance on cognitive tasks. Task shaping is a relatively understudied topic in systems neuroscience, despite its ubiquity in animal training. For instance, it is poorly understood whether differences in training trajectories result in different cognitive strategies or neural representations in a task (Latimer and Freedman, 2019). Standardization and automation in animal training may aid experimental investigation of task-shaping effects (Berger et al., 2018; Murphy et al., 2020). Although PsychRNN utilizes a supervised training procedure, rather than the reinforcement-based ones used in animal training, the implementation of curricula enables exploration of how task shaping may impact learning of cognitive tasks. Future extensions to the PsychRNN codebase can enable investigation of additional neuroscientific questions. Some potentially useful directions are the addition of units that exhibit firing-rate adaptation through an internal dynamical variable associated with each unit (Masse et al., 2019), spiking neurons (Zenke and Ganguli, 2018), and the implementation of networks with short-term associative plasticity (Miconi, 2017). An interesting area for extending task-training capability is to add trial-by-trial dependencies. In the current version of PsychRNN, each task trial is trained independently from other trials in the same block. PsychRNN could potentially be extended to support dependencies across trials by having the loss function and trial specification depend on a series of trials. In model training, PsychRNN could be extended to support learning algorithms apart from supervised gradient descent, such as deep reinforcement learning algorithms (Botvinick et al., 2020). With the recent release of TensorFlow 2.0 extending functionality to match alternative frameworks including PyTorch, we see TensorFlow as a strong base on which to design PsychRNN. PsychRNN allows for future extension to include other frameworks in its Backend. Importantly, the modular design of PsychRNN can enable such extensions and updates without forcing any user-side change in task specification or front-end experience. The modular design of PsychRNN also supports extension with various methods for analysis of trained RNNs, which could be implemented by users.
Here, we have provided a basic set of built-in analysis tools to directly investigate the features and structures of trained RNN weights, states, and outputs. Because the landscape of analysis methods differs substantially across studies, built-in analysis methods cannot be comprehensive; we therefore decided to focus instead on providing output in forms that are most broadly compatible with common analysis pipelines. The PsychRNN package provides an easy-to-use framework that can be applied and transferred across research groups to accelerate collaboration and enhance reproducibility. Whereas in the current environment research groups need to transfer their entire codebase to run an RNN model, in the PsychRNN framework they can transfer just a task or model file for other researchers to investigate and build on. The ability to test identically specified models across tasks, and identically specified tasks across models, in different groups improves the reliability of research. Furthermore, the many choices involved in defining and training RNNs can make precise replication of prior published research difficult. The specification of PsychRNN task files and parameter dictionaries can make reproduction of RNN studies more open and straightforward. PsychRNN was designed to lower barriers to entry for researchers in neuroscience who are interested in RNN modeling. In service of this goal, we have created a highly user-friendly, clear, and modular framework for task specification, while abstracting away much of the deep-learning background necessary to train and run networks. This modularity also provides access to new research directions and a reproducible framework that will facilitate RNN modeling in neuroscientific research.
6,759.2
2020-10-01T00:00:00.000
[ "Computer Science", "Psychology", "Biology" ]
HIF1A Knockout by Biallelic and Selection-Free CRISPR Gene Editing in Human Primary Endothelial Cells with Ribonucleoprotein Complexes Primary endothelial cells (ECs), especially human umbilical vein endothelial cells (HUVECs), are broadly used in vascular biology. Gene editing of primary endothelial cells is known to be challenging, due to the low DNA transfection efficiency and the limited proliferation capacity of ECs. We report the establishment of a highly efficient and selection-free CRISPR gene editing approach for primary endothelial cells (HUVECs) with ribonucleoprotein (RNP) complexes. We first optimized an efficient and cost-effective protocol for messenger RNA (mRNA) delivery into primary HUVECs by nucleofection. Nearly 100% transfection efficiency of HUVECs was achieved with EGFP mRNA. Using this optimized DNA-free approach, we tested RNP-mediated CRISPR gene editing of primary HUVECs with three different gRNAs targeting the HIF1A gene. We achieved highly efficient (98%) and biallelic HIF1A knockout in HUVECs without selection. The effects of HIF1A knockout on the ECs' angiogenic characteristics and response to hypoxia were validated by functional assays. Our work provides a simple method for highly efficient gene editing of primary endothelial cells (HUVECs) for studies and manipulations of EC functions. Introduction Endothelial cells (ECs) form a single layer of cells lining the vascular systems [1]. Blood ECs are vital conduits that play essential roles in, e.g., the delivery of nutrients and oxygen to the tissues, regulation of immune responses, and regulation of vascular tone [2][3][4], whereas lymphatic ECs, which delineate lymphatic vessels, are important for immune responses and maintenance of vessel integrity [5]. ECs are highly heterogeneous, depending on the vascular bed, tissue type (even within a single organ), physiological state, and disease [6][7][8][9]. ECs from mature tissues or organs mostly stay in a quiescent state but remain metabolically active and can form new vessels through the process termed angiogenesis [4,6]. It is thus not surprising that ECs are involved in many prevalent diseases, including conditions characterized by excess EC growth (e.g., cancers, eye diseases) or EC dysfunction (e.g., diabetes, cardiovascular disease) [4]. Cultured ECs are widely used in vascular cell biology and for studies of EC functions/dysfunctions. Genetic modification of cultured cells is a widely used method for studying biological processes, but the approach has been difficult to apply in primary ECs. Preparation of EGFP mRNA and EGFP Plasmid The in vitro transcription (IVT) EGFP plasmid was made by Associate Professor Rasmus O. Bak, Department of Biomedicine, Aarhus University. EGFP mRNA was generated by IVT. First, the EGFP plasmid was linearized by mixing 15 µL nuclease-free water (Thermo Scientific, Waltham, MA, USA, #R0582), 2 µL 10× Fast Digest Green Buffer (Thermo Scientific, #00959802), 2 µL (1 µg/µL) EGFP plasmid, and 2 µL Fast Digest restriction enzyme BbsI (Thermo Scientific, #00986235), and the mixture was stored on ice. The reaction mixture was incubated at 37 °C for 3 h. Next, the linearization of the EGFP plasmid was visualized on a 1% agarose gel with a 1 kb marker (Thermo Scientific, #SM0311). NucleoSpin Gel and PCR Clean-up (Macherey, #2006/001) was used to elute the EGFP DNA from the agarose gel by following the manufacturer's protocol.
Finally, IVT was performed using the MEGAscript kit (Thermo Fisher Scientific, #AMB13345) following the manufacturer's protocol, with one modification: 3 µL CleanCap AG (6 mM) (TriLink Biotechnologies, San Diego, CA, USA, #N-7113-5) was added directly to the reaction mix to increase the stability of the mRNA and its translation. The RNA concentration was measured by a Nanodrop 1000 Spectrophotometer. The EGFP mRNA was stored at −20 °C. Nucleofection of HUVECs with EGFP mRNA Titration First, the HUVECs were split as described above and incubated for 48 h in a 5% CO2 atmosphere at 37 °C. After 48 h of incubation, the M199 medium with supplements was changed. The cells were incubated until an optimal confluency of 90% was reached before nucleofection. Nucleofection: The 90% confluent HUVECs were divided into six groups (0.0 µg EGFP mRNA (control), 0.4, 0.8, 1.6, 3.2, and 4.8 µg EGFP mRNA) with 2.4 × 10⁵ cells in each group. The cells were gently washed twice in PBS, centrifuged at 500× g for 5 min, and resuspended in 60 µL OptiMem (Gibco, #11058-021). EGFP mRNA was thawed on ice. The given amount of EGFP mRNA was added to the appropriate groups and mixed thoroughly by pipetting. Each test group was divided into three wells in the nucleocuvette. Immediately upon transfer to the nucleocuvette, the HUVECs were nucleofected with the 4D-Nucleofector X Unit (Lonza, CH) using the nucleofection program CM138. After nucleofection, 150 µL prewarmed M199 medium with supplements was added to each well in the nucleocuvette. Finally, the nucleofected HUVECs were seeded in the prepared 24-well plate with a total volume of 500 µL culturing medium and incubated for 24 h in a 5% CO2 atmosphere at 37 °C. Visualization: After 24 h of incubation, the expression of EGFP was visualized by fluorescence microscopy and quantified by FC analysis. The cells were visualized by a Leica DMi1 fluorescent microscope, DE, obtaining three brightfield and FITC microscopy images per well at 10× magnification, with the same laser intensities and camera exposures. Image visualization and analysis were done in Fiji (ImageJ) v. 2.9.0. The background signal was adjusted by averaging background values from four areas negative for signal in each channel and subtracting the mean values from the final image. A NovoCyte Quanteon 4025 flow cytometer was used to quantify the percentage of EGFP-positive cells. The HUVECs were washed in PBS twice, trypsinized, and centrifuged at 400× g for 5 min. The cell pellet was washed twice in PBS + 5% FBS and resuspended in 200 µL PBS + 5% FBS. All samples were incubated in the dark on ice until FC analysis: 100 µL of each sample was acquired, and EGFP was detected off the 488 nm laser (100 mW) in the B530/30 detector. The FC data were analyzed in NovoExpress v. 1.5.6 using the following gating strategy: (1) a forward-scatter-area to side-scatter-area density plot was made to exclude debris; (2) a forward-scatter-height to forward-scatter-area density plot was followed by a side-scatter-height to side-scatter-area density plot to exclude doublets; (3) a single-parameter histogram was made to identify the cells expressing EGFP. This experiment was performed with one HUVEC donor at p. 5 (n = 3). Nucleofection of HUVECs with EGFP mRNA and EGFP Plasmid The nucleofection of HUVECs protocol described above was used to nucleofect HUVECs with 3.2 µg EGFP mRNA or 1.312 µg EGFP plasmid.
The concentration of EGFP plasmid had to be lower than that of EGFP mRNA, since high plasmid concentrations are toxic to the cells. This experiment was repeated to achieve both biological and technical triplicates, using three different HUVEC donors at p. 2-5. CRISPR-Cas9 gRNA Design The three gRNA target regions, exons 2, 3, and 4, were chosen as they are early consecutive exons, with an appropriate distance to the ATG start codon in the HIF1A gene. The online CRISPR web tools "CRISPor" (http://crispor.tefor.net accessed on 10 October 2020) [32] and "CRISPRon" (https://rth.dk/resources/crispr/crispron/ accessed on 10 October 2020) [33,34] were used to design and evaluate the gRNAs for the CRISPR-Cas9 system, SpCas9 (Streptococcus pyogenes CRISPR-associated protein 9). The sequences of the three target exons were set as input in the web tools and submitted with default settings. The final three gRNAs targeting three different exons of HIF1A were all chosen by analyzing the CRISPRon/CRISPor output tables and selecting the gRNA with the highest predicted efficiency (Figure 2a, Supplementary Table S2). The gRNAs were purchased from Synthego, US. In addition, the robustness of HIF1A gRNA 1 editing was validated by purchasing the gRNA from two different vendors: Synthego.com and IDTdna.com (Integrated DNA Technologies (IDT)). Evaluation of CRISPR-Cas9 Gene Editing Efficiencies on HIF1A in HUVECs The CRISPR-Cas9 editing efficiencies for the HIF1A gene were analyzed by nucleofecting HUVECs with three different HIF1A gRNAs, as described in the nucleofection protocol above, with a few alterations. Before nucleofection of the HUVECs, the ribonucleoprotein (RNP) complex was prepared. The synthesized gRNAs (Synthego, Redwood City, CA, USA) were dissolved to 3.2 µg/µL in nuclease-free water, mixed by vortexing, and stored at −20 °C. The RNP complexes were prepared in three groups: (1) HIF1A gRNA 1 + SpCas9 protein, (2) HIF1A gRNA 2 + SpCas9 protein, and (3) HIF1A gRNA 3 + SpCas9 protein. All three groups were prepared by mixing 1.8 µL gRNA (3.2 µg/µL), 1.8 µL SpCas9 Nuclease V3 (IDT, #1081059), and 3.6 µL nuclease-free water in PCR tubes and kept at room temperature for 10-60 min. The sample groups in this experiment were: wild type (untreated), 3.2 µg EGFP mRNA treatment (positive control), HIF1A gRNA 1 treatment, HIF1A gRNA 2 treatment, and HIF1A gRNA 3 treatment. The HUVEC sample groups were resuspended in 60 µL OptiMem (Thermo Scientific, #31985062), and 2.4 µL of each RNP complex was added to the appropriate groups. This was followed by nucleofection and 48 h of incubation in penicillin/streptomycin-free medium before genotyping. The EGFP fluorescence was visualized by a Leica DMi1 fluorescent microscope after 24 h of incubation to ensure efficient nucleofection. This experiment was performed with one HUVEC donor at p. 4 (n = 3). Experiments with HIF1A gRNA 1 were repeated to achieve three biological replicates at p. 2-4, each with technical triplicates. Sanger sequencing: The PCR products were sent for Sanger sequencing at Eurofins Genomics, DK, following the manufacturer's protocol for the Mix2Seq Kit (Eurofins Genomics, DK). The Sanger sequencing results were analyzed with SnapGene Viewer, and the web tool "ICE" (https://ice.synthego.com/#/ accessed on 8 December 2020) was used to calculate the overall CRISPR gene editing efficiency and determine the profiles of all the different types of edits present in the Sanger sequencing data.
Prism 9 (GraphPad, US) was used to plot the gene editing efficiency for each HIF1A gRNA ± the standard deviation (SD). FC Analysis of HIF1A KO and WT HUVECs HIF1A KO and WT HUVECs were prepared for FC analysis as described in the nucleofection protocol, with a few alterations: 1/3 of the cells were left unstained, and 2/3 were stained with mouse anti-CD31 FITC (BD Bioscience, #555445) and mouse anti-CD45 BV421 (BD Bioscience, #563879) for 30 min on ice in the dark. After removing debris and doublets, density plots were used to gate the CD45−CD31+ stained HUVECs (Supplementary Figure S3a-f). LDL Uptake Assay 8-well chamber slides (Lab-Tek, #177445) were coated with 0.1% gelatin. HIF1A KO and WT HUVECs were seeded (1 × 10⁵ cells per replicate, n = 3) in 50% M199 medium with supplements and 50% endothelial cell growth medium and incubated for 24 h in a 5% CO2 atmosphere at 37 °C. The culturing medium was then replaced with M199 medium with supplements, and the cells were treated with 10 µg/mL Alexa Fluor 594-conjugated acetylated low-density lipoprotein (LDL) (Life Technologies, #L-35353) and incubated for 4 h in a 5% CO2 atmosphere at 37 °C. The cells were washed thrice with PBS, fixed with 4% paraformaldehyde (PFA) (CellPath, #03809391) for 15 min, and washed thrice with PBS for 5 min. At the final wash, the cells were stained with 1:1000 Hoechst for 10 min. Cover slides (Thermo Scientific, #174942) were mounted with 3 µL mounting buffer (Thermo Scientific, #P10144). Microscopy images (5 images/well) were obtained with an Olympus BX63 fluorescent microscope equipped with a CoolLED pE-300ultra fluorescence illumination system and a sensitive Andor Zyla 5.5 camera using a 40× (Plan Fluorite) objective. The same intensity of illumination and the same exposure time settings were used for comparative image acquisition as follows: UV excitation (maximum excitation irradiance at 345 nm wavelength) 20% with 10 ms exposure time (emission maximum at 455 nm), and GR excitation (maximum excitation irradiance at 595 nm wavelength) with 200 ms exposure time (emission maximum at 615 nm). Image visualization and analysis were done in Fiji (ImageJ) v. 2.9.0. The background signal was adjusted by averaging background values from four areas negative for signal in each channel and subtracting the mean values from the final image (Supplementary Figure S3g,h). Staining of HIF1A/Hoechst/Actin Two 0.1% gelatin-coated 8-well chamber slides with HIF1A KO and WT HUVECs (80% confluent) were cultured for 1 h in a normoxic (21% oxygen) or hypoxic (1% oxygen) environment. The cells were fixed with 4% PFA for 15 min and washed thrice with PBS for 5 min. The samples for staining were covered with blocking solution (5% FCS serum, 0.3% TritonX-100 in PBS) and 1:100 Human/Mouse/Rat HIF1A antibody (R&D Systems, #AF1935) and incubated at 4 °C overnight on a plate shaker. The secondary antibody control samples were left untreated in the blocking solution. All the samples were washed twice with PBS for 5 min and resuspended in blocking solution with the secondary antibody, 1:500 Alexa Fluor 594 donkey anti-goat IgG (Life Technologies, #A11058). The samples were incubated in the dark on a plate shaker for 2 h at room temperature. All the samples were washed thrice with PBS for 5 min. At the final wash, the cells were stained with 1:40 Alexa Fluor 488 Phalloidin (actin) (Invitrogen, #A12379) in PBS and incubated for 20 min at room temperature. This was followed by 1:1000 Hoechst staining for 10 min in the dark. Cover slides were mounted with 3 µL mounting buffer.
Image acquisition was performed (5 images/well) with a Zeiss LSM800 laser scanning confocal microscope with a 63× oil objective, equipped with diode lasers and three GaAsP detectors. The same laser intensity, photodetector sensitivity, and exposure time were applied for comparative image acquisition as follows: lasers 405 nm: 4%, 488 nm: 5%, 561 nm: 16%, with an exposure time of 930.91 ms. The background signal was adjusted by averaging background values from four areas negative for the signal in each channel and subtracting the mean values from the final image. To assess HIF1A expression in the HIF1A KO and WT HUVECs, maximum intensity z-projections of the Hoechst and actin channels were used. ROIs of the nuclei were acquired, and a mask was subsequently created and subtracted from the maximum projection of the actin channel to eliminate the areas comprising the nuclei. In the resulting image, the wand tool was used to trace all areas negative for actin signal, and the derived mask was inverted to create a ROI comprising the cell cytoplasm. The mean fluorescence intensity in a grey-scale sum projection of the HIF1A channel was calculated using the mean grey values from the nuclei and actin ROIs, respectively. Image analysis was done in Fiji (ImageJ) v. 2.9.0. Tube Formation Assay The endothelial cell tube formation assay (TFA) was performed by thawing the Geltrex LDEV-Free Reduced Growth Factor Basement Membrane Matrix (Gibco, #A1413201) in an ice bath at 4 °C overnight. Four 8-well chamber slides were pre-cooled on ice to prevent the Geltrex Matrix Solution (GMS) from immediately solidifying. The GMS was mixed by gentle pipetting and kept on ice until it was added, 100 µL/cm², and evenly distributed in the wells. The GMS was allowed to solidify for 30 min at 37 °C. Every 5 min, the slides were gently tapped against the flow bench table to minimize the formation of a strong concave meniscus. When the GMS had solidified, 0.3 × 10⁵ cells (HIF1A KO and WT HUVECs) were seeded per well in 50% M199 medium with supplements and 50% endothelial cell growth medium and incubated for 24 h in a 5% CO2 atmosphere at 37 °C. Finally, the TFA HIF1A KO and WT HUVECs were incubated in a normoxic or hypoxic environment for 2 h. The cells were fixed with 4% PFA for 15 min and washed thrice in PBS. Bright-field images (6 images/well) were taken with a Leica DMi1 microscope, DE. Image visualization and analysis were done in Fiji (ImageJ) v. 2.9.0. The background signal was adjusted by averaging background values from four areas negative for signal in each channel and subtracting the mean values from the final image. The tube formation analysis program WimTube from Wimasis.com, ES, was used to make a quantitative analysis of the six TFA images. GraphPad Prism 9, US, was used to plot the WimTube analysis data (mean ± SD) of the HIF1A KO and WT HUVECs cultured in both normoxic and hypoxic conditions. One-way ANOVA with Tukey's multiple comparisons test compared each sample group to the WT HUVECs cultured in normoxic conditions. Efficient mRNA Delivery into HUVECs by Nucleofection We first sought to establish an efficient nucleofection protocol for HUVECs. To accurately quantify transfection efficiency and efficacy, we used an in vitro transcribed (IVT) mRNA encoding an Enhanced Green Fluorescent Protein (EGFP). The expression of EGFP in the HUVECs was quantified by fluorescence microscopy and flow cytometry (FC) analysis. High concentrations of the transfection reagent could cause cellular toxicity [35].
Hence, we tested different amounts of EGFP mRNA to determine the optimal concentration of EGFP mRNA for nucleofection of HUVECs. The nucleofection protocol was established by nucleofecting HUVECs with different amounts of EGFP mRNA (0.4, 0.8, 1.6, 3.2, and 4.8 µg, n = 3 per group). The bright-field images showed that the morphology and confluency of the HUVECs after nucleofection with mRNA appeared normal, with few dead cells in all groups. The HUVECs receiving 4.8 µg mRNA were less confluent compared to those receiving lower doses (Figure 1a), which indicated a dose-dependent negative impact on EC growth. Fluorescence microscopy showed that green fluorescence was visible from 0.8 µg and increased with the amount of EGFP mRNA (Figure 1a). FC analysis showed that the majority (74%) of the nucleofected HUVECs were EGFP positive when nucleofected with 0.4 µg IVT EGFP mRNA (Figure 1b,c), suggesting that the FC analysis was more sensitive than the fluorescence microscopy analysis. The fraction of EGFP-positive cells increased significantly from 74% to 96% when increasing the EGFP mRNA to 0.8 µg per nucleofection, and nearly 100% of the cells were EGFP positive when nucleofected with EGFP mRNA amounts from 0.8 to 4.8 µg (Figure 1c). We next quantified the expression level of EGFP in the nucleofected HUVECs. The median fluorescence intensity (MFI) of EGFP-positive cells increased as more EGFP mRNA was used per nucleofection (Figure 1d). Although nearly 100% transfection efficiency was already achieved with 0.8 µg mRNA, the efficacy of gene expression still significantly increased with the amount of mRNA used per nucleofection (Figure 1c,d). We also validated the robustness of transfection efficiency and EGFP expression by mRNA as compared to traditional DNA plasmid-based delivery (Supplementary Figure S1). We observed less confluent HUVECs upon nucleofection with 4.8 µg IVT EGFP mRNA, indicating a potentially negative effect on primary EC growth (Figure 1a); thus, 3.2 µg EGFP mRNA was used for further studies. Efficient CRISPR Gene Editing of Primary HUVECs We next tested whether the established nucleofection and mRNA delivery approach could be used to achieve efficient CRISPR gene editing in primary HUVECs. To this end, we used a pre-formed ribonucleoprotein (RNP) complex comprising the SpCas9 protein and a chemically modified synthetic guide RNA (gRNA). Three gRNAs were designed to target the early consecutive exons of the HIF1A gene (Figure 2a). Sanger sequencing (Figure 2b) and ICE-based indel deconvolution analysis showed that high gene editing efficiencies were achieved in primary HUVECs with all three gRNAs: 98% for gRNA 1, 79% for gRNA 2, and 66% for gRNA 3 (Figure 2c). The gene editing efficiency of gRNA 1 is particularly striking. Unlike the other two gRNAs (2 and 3), HIF1A gRNA 1 creates only a single indel, a thymine (T) insertion (Figure 2d), which leads to the introduction of a stop codon (Supplementary Figure S2a). The dominant indel type of T insertion at the double-strand break site corroborates our previous observation of the CRISPR 1-bp insertion indel profiles [33]. Furthermore, we tested synthetic gRNA (HIF1A gRNA 1) provided by two different vendors (Synthego and Integrated DNA Technologies (IDT)), which both resulted in an efficiency of nearly 100% and the same indel formation (Supplementary Figure S2b,c). This unique indel formation is consistent with our previous indel profiling using self-targeting surrogate libraries [33].
Notably, the CRISPR efficiency achieved in our study was obtained in a completely selection-free setting. This confirms that the nucleofection and RNP delivery approach can be used to achieve highly efficient CRISPR gene editing in primary ECs. Functional Validation of HIF1A KO HUVECs To validate whether HIF1A gRNA 1 had successfully disrupted gene expression at the protein level, we cultured the HIF1A gRNA 1 KO (hereafter referred to as HIF1A KO) and wild type (hereafter referred to as WT) HUVECs in normoxic (21% oxygen) or hypoxic (1% oxygen) conditions and assessed HIF1A expression by antibody-based protein staining (Figure 3). We had tested different hypoxic cultivation times (1, 2, and 4 h), of which 1 h of hypoxic cultivation could clearly induce HIF1A nuclear translocation in ECs without triggering massive cell death. In WT HUVECs, we observed a clear upregulation and translocation of the HIF1A protein from the cytoplasm (normoxic, Figure 3a,e) to the nucleus (hypoxic, Figure 3b,e), consistent with the HIF1A function and pathway (Supplementary Figure S5a) [36][37][38][39][40][41]. In the HIF1A KO cells, no expression of HIF1A was detected in either normoxic or hypoxic conditions (Figure 3c-e). We also confirmed the expression of CD31 and the functional uptake of LDL by the HIF1A KO and WT HUVECs, as confirmation of their EC phenotype (Supplementary Figure S3). We next sought to evaluate whether HIF1A KO affects EC functions. The tube formation assay (TFA) is a commonly used method to study angiogenesis [42]. We performed TFA to investigate whether HIF1A KO affects HUVECs' ability to form capillary-like structures (tubes). Both HIF1A KO and WT HUVECs cultured in normoxic or hypoxic conditions could form a tube-like network. WT HUVECs formed tube-like networks with long branches, more tight junctions, and a clear mesh pattern under both conditions (Supplementary Figure S4a,b). In contrast, the HIF1A KO HUVECs in normoxic conditions showed a weaker ability to form the tube-like network and fenestrated junctions (Supplementary Figure S4c). This difference was more pronounced when the cells were cultured in hypoxic conditions.
The tube-like network of HIF1A KO HUVECs cultured in hypoxic conditions was disrupted (Supplementary Figure S4d). WimTube analysis was further performed to quantitatively analyze the TFA images (Figure 4a). The tube formation process was investigated by measuring the total tube length in pixels (px), the total number of tubes (count), the mean tube length (px), the percentage of covered area, and the total branching points (count) (Figure 4). Quantitative results showed that there was no significant difference between the HIF1A KO and WT HUVECs in total tube length (Figure 4b). The HIF1A KO HUVECs in both normoxic and hypoxic conditions had a significantly increased number of tubes (Figure 4c) and a significantly decreased mean tube length (Figure 4d). In addition, the number of branching points was significantly increased in HIF1A KO HUVECs in both normoxic and hypoxic conditions (Figure 4e). The HIF1A KO HUVECs cultured in hypoxic conditions showed impaired angiogenesis (Supplementary Figure S4d), which was quantitatively confirmed, as the percentage of tube-covered area was significantly decreased compared to the WT HUVECs (Figure 4f). In conclusion, we confirmed that HIF1A deficiency affects HUVECs' angiogenic ability and response to hypoxia. Discussion Primary EC models, like HUVECs, represent the tissue of origin more closely than secondary or immortalized cell lines, but they are challenging to work with, as they are hampered by a limited life span, low transfection efficiencies, and high contamination risks [18][19][20]. Thus, previous gene editing studies have mainly been performed on immortalized ECs [21,22]. This study created a protocol for highly efficient RNP-mediated CRISPR gene editing of primary ECs (HUVECs) targeting the HIF1A gene, which to our knowledge has not been published to date. First, an efficient protocol for nucleofection of HUVECs with EGFP mRNA was established, which resulted in transfection efficiencies of nearly 100%. Previous studies, like that of Moradian et al., transfected primary cells with EGFP mRNA, which resulted in 70% EGFP-positive macrophages, quantified by FC, with no significant change in cell viability [43]. Hunt et al. investigated different transfection reagents to transfect HUVECs with EGFP plasmid; the most efficient transfection reagent resulted in 49% of cells expressing EGFP [44]. The microscopy images (Supplementary Figure S1) show that the HUVECs transfected with EGFP mRNA were more confluent compared to the EGFP plasmid-transfected HUVECs. This indicates that EGFP mRNA nucleofection is less toxic to the HUVECs than EGFP plasmid nucleofection. One of the main advantages of mRNA nucleofection, in contrast to plasmid nucleofection, is that it avoids the transcription step and is less toxic, as it results in transient gene expression since the mRNA is less stable [43]. The gene editing delivery method has a great impact on the gene editing efficiency in primary ECs; thus, the optimization of the nucleofection protocol has been of great importance for our experiments. Previous studies have used lentiviral vector-mediated CRISPR as a delivery system and report gene editing efficiencies of 40-58% [10,25]. Gong et al. report gene editing efficiencies of 40-60% by dual viral vector (lentivirus/adenovirus)-mediated CRISPR gene editing [24]. The most efficient CRISPR gene editing of primary ECs, to our knowledge [15], resulted in 80% gene disruption by AAV5-mediated CRISPR.
In 2019, Schwefel et al. compared lentiviral and RNP-mediated CRISPR gene editing in immortalized HUVECs, which resulted in 66% (lentiviral) and 63% (RNP) gene editing efficiencies [22]. RNP-mediated CRISPR gene editing is also highly efficient in other primary cell types, like human primary T cells [28] and B cells [45]. The RNP-based method has many advantages, as it enables immediate gene editing and limits the duration of Cas9 exposure, since the protein is degraded. This results in specific gene editing with few off-target effects and low toxicity [46]. These advantages led us to expect that RNP-mediated CRISPR would be efficient for gene editing of primary ECs. This study accomplished the creation of a highly efficient protocol for RNP-mediated CRISPR gene editing of primary ECs, with a gene editing efficiency of 98% for the HIF1A gene in primary HUVECs. The HIF1A gene is well studied; it is found to control the transcription of over 40 genes and plays an important role in endothelial adaptation, vascular development, and angiogenesis [37,38]. The HIF1A gRNA 1 showed a remarkably high gene editing efficiency of 98%, and the break was repaired only by the insertion of one thymine at the CRISPR-induced double-strand break site. The consistent one-base-pair insertion occurs 17 bp upstream of the PAM sequence, which confirms our previous findings revealing that one-base-pair insertions most frequently result in the insertion of the same nucleotide as N17 upstream of the PAM [33]. This might be related to the NHEJ mechanism, as it prefers a one-base-pair insertion after CRISPR-induced double-stranded breaks, since this is one of the fastest repair options [33,47]. Our results show that the HIF1A KO results in impaired angiogenesis, as we see more branching points, shorter tube lengths, and less tube-covered area. The impaired angiogenesis might be caused by an imbalance in the proportion of tip and stalk cells, thus resulting in "split ends" in the sprouting of the ECs. Similarly, Tang et al. deleted HIF1A in ECs by crossbreeding with Tie2-Cre transgenic mice and found that deletion of HIF1A in primary murine lung ECs disrupts the vascular endothelial growth factor (VEGF)-dependent signaling pathway in vivo, which resulted in impaired angiogenesis [48]. Dysregulated angiogenesis in diabetes complications, like diabetic retinopathy, is treated by anti-angiogenic therapy (AAT), which targets the tip cells in the sprouting ECs by antagonizing the VEGF receptor [49][50][51]. Unfortunately, AAT requires regular injections, and some patients acquire resistance to the AAT as the ECs adapt their angiogenic mechanisms [52,53]. New model organisms elucidating VEGF, angiogenesis, and EC functions are needed to develop better VEGF-targeting AAT. Promisingly, Holmgaard et al. demonstrated that RNP-mediated CRISPR effectively generates a VEGF KO in the murine retina, which is a potential strategy for future treatment of retinal diseases, but further studies are needed [54]. Although not investigated in our study, the efficient approach of biallelic and selection-free CRISPR gene (HIF1A) knockout in primary ECs allows us to investigate how gene (HIF1A) disruption affects the EC transcriptional machinery, functions, plasticity, and heterogeneity using, e.g., single-cell RNA sequencing. In summary, we established an RNP-mediated CRISPR gene editing protocol for primary ECs allowing extremely efficient HIF1A KO in primary HUVECs. The RNP-mediated CRISPR gene editing of HIF1A in primary ECs resulted in gene editing efficiencies of up to 98%.
HIF1A gRNA 1-based editing created a one-base-pair insertion leading to an early stop codon. The functional validation assays show that the HIF1A KO HUVECs are functional ECs, as they take up LDL and express CD31, but the KO results in impaired angiogenesis.
6,952.4
2022-12-22T00:00:00.000
[ "Biology" ]
Formation of Amorphous Iron-Calcium Phosphate with High Stability Amorphous iron-calcium phosphate (Fe-ACP) plays a vital role in the mechanical properties of the teeth of some rodents, which are very hard, but its formation process and synthetic route remain unknown. Here, the synthesis and characterization of an iron-bearing amorphous calcium phosphate prepared in the presence of ammonium iron citrate (AIC) are reported. The iron is distributed homogeneously on the nanometer scale in the resulting particles. The prepared Fe-ACP particles can be highly stable in aqueous media, including water, simulated body fluid, and acetate buffer solution (pH 4). An in vitro study demonstrates that these particles have good biocompatibility and osteogenic properties. Subsequently, Spark Plasma Sintering (SPS) is utilized to consolidate the initial Fe-ACP powders. The results show that the hardness of the ceramics increases with increasing iron content, but an excess of iron leads to a rapid decline in hardness. Calcium iron phosphate ceramics with a hardness of 4 GPa can be achieved, which is higher than that of human enamel. Furthermore, the ceramics composed of iron-calcium phosphates show enhanced acid resistance. This study provides a novel route to prepare Fe-ACP and presents the potential role of Fe-ACP in biomineralization and as a starting material to fabricate acid-resistant high-performance bioceramics. Introduction Calcium phosphates (CaPs) are the main inorganic components of vertebrate hard tissues. Amorphous calcium phosphate (ACP), prevalent in biological organisms, represents a unique class of calcium phosphates. [1] It has no translational and orientational long-range order of the atomic positions, showing essentially glass-like physical properties. [2] Previous studies revealed that intracellular ACP precursors reside in the mitochondria of mineralizing cells, and they are transferred from mitochondria via the lysosomal pathway. [3] Recent studies have suggested that ACP might be the precursor of many biominerals, and it plays a vital role in the functions of these structures. [4] As a transitory phase, ACP is highly unstable in an aqueous medium, readily transforming to crystalline calcium phosphates, such as hydroxyapatite. [5] Some trace elements, such as magnesium and strontium, have been found in natural hard tissues, and these divalent ions are able to stabilize ACP by either substituting for the calcium or adsorbing on the surface of ACP to disrupt the crystallization process. [6] Recently, the function of iron-based phases in the biomineralization process has attracted much attention. Iron has been found to exist in the hard tissues of organisms in various forms, such as ferrihydrite, magnetite, and goethite. [7] The function of iron oxide in teeth hardening has been demonstrated in limpets, chitons, and cichlid fishes. [8] Another study shows that iron-rich phases strengthen the incisors of the feral coypu. The iron-rich enamel shows a higher mechanical strength (hardness ≈ 4.6 GPa) than the nonpigmented enamel (hardness ≈ 3.5 GPa). [9] Moreover, a recent investigation of rodent teeth shows that amorphous intergranular phases control the mechanical properties of enamel. The mixture of ferrihydrite and amorphous iron-calcium phosphate in the intergranular phases makes the enamel harder and more resistant to acid attack. [10] An important question related to iron-hardened minerals is how iron-rich phases, such as amorphous iron-calcium phosphate (Fe-ACP), are formed.
Previous studies have shown that the presence of iron affects the crystallinity and solubility of hydroxyapatite and octacalcium phosphate. [11] However, there is still no evidence revealing the mechanisms of formation of Fe-ACP in aqueous media, and whether or not such an iron-bearing amorphous phase is a transient precursor remains unclear. Therefore, the synthesis of Fe-ACP and the unraveling of its formation process are essential for understanding the biomineralization of iron-rich calcium phosphate-based biominerals. Beyond their important role in biomineralization, iron-based materials show great potential in the biomedical field. As an essential component for cell metabolism and biochemical reactions, iron plays a crucial role in several body functions, such as oxygen transport and DNA synthesis, and it cooperates with many enzymes. Iron-containing particles have been successfully applied in photothermal and photodynamic therapies for the treatment of cancer. [12] More recently, the function of iron-based bioceramics in tissue regeneration has been extensively explored. It has been found that iron-containing bioceramics can promote angiogenesis and osteogenesis by regulating the expression of vascular endothelial growth factor and HIF-1 in endothelial cells. [13] Our recent research has shown the great osteogenic properties of an iron-bearing calcium phosphate cement in vivo. [14] The use of amorphous calcium phosphate in the biomedical field has been reported, including implant coatings, [15] drug delivery vehicles, [16] and reinforcing agents for self-setting cements. [17] Therefore, it is of interest to synthesize iron-containing ACP particles and explore their potential use in the dental and orthopedic fields. Many attempts have been made to produce ACP using wet-route or dry-route syntheses. [18] The wet synthesis route usually involves the rapid mixing of a calcium salt with a phosphate salt to prevent phase transformation during the preparation. Organic molecules, such as polyacrylic acid (PAA), [19] polyaspartic acid (PASP), [20] polyethylene glycol, [18a] poly(allylamine) hydrochloride, [21] and triethylamine [22] have been applied to slow down the conversion rate. Citrate ions, which account for 1-2 wt.% of natural bone, significantly affect the stability of calcium phosphates and regulate their crystal growth. [23] It is worth noting that most reported ACPs are prepared under neutral or basic conditions, and the study by Posner has shown that increasing the pH can greatly slow down the conversion rate of ACP. [24] At more acidic pH, dicalcium phosphate dihydrate and octacalcium phosphate are the most common phases. In the presence of magnesium and citrate ions, which are known as crystallization inhibitors of apatite, acidic ACP can be prepared at a solution pH of 6.0-6.5. [25] Although an acidic disordered form of calcium phosphate has been detected in the bones of zebrafish, [26] to the best of our knowledge, no ACP has been prepared in even more acidic aqueous solutions. In the oral environment, acidic ACP-containing materials can take advantage of their enhanced acid resistance, which makes them potentially applicable as dental repair materials. Moreover, when preparing iron-bearing calcium phosphates, the acidic reaction medium can minimize the precipitation of iron hydroxide, which might facilitate the formation of Fe-ACP. In the present work, we have explored synthesis routes to prepare an iron-bearing amorphous calcium phosphate in aqueous media at ambient temperature.
Ammonium iron citrate (AIC), which contains both iron and citrate, is selected as the iron source for the preparation. The amorphous iron-calcium phosphate can be prepared at pH values as low as 4 in the presence of AIC. The resultant particles kept their amorphous character in water and SBF solution when prepared at a high concentration of AIC. The iron-containing calcium phosphate particles showed good biocompatibility and osteogenic properties in vitro. Subsequently, the synthesized Fe-ACP was applied as a starting material to fabricate acid-resistant high-performance bioceramics. Together, the study reveals clues about the biomineralization process of Fe-ACP and provides new insight into the stability of the amorphous phase. The as-synthesized particles might play a key role in the biomineralization process of iron-rich hard tissues and possess potential as starting materials to fabricate acid-resistant high-performance bioceramics. Preparation and Characterization of Fe-ACP Typical Fe-ACP particles, synthesized with 0.3 M PO₄³⁻, 0.5 M Ca²⁺, and 0.04 M ammonium iron(III) citrate (the pH after reaction is ≈5 without adjustment), are shown in Figure 1. The diameters of the particles are between 50 and 200 nm (Figure 1a). TEM and the SAED pattern confirmed that the prepared particles were non-crystalline nanoparticles (Figure 1b,c). The specimen exhibited peaks of Fe 2p, O 1s, Ca 2p, C 1s, and P 2p over a wide binding energy region (Figure 1d). The binding energy of the Fe 2p3/2 peak was 710 eV (Figure 1e), which can be assigned to Fe³⁺ bonded to phosphate groups. [27] We have shown that the concentration of ammonium iron citrate is essential for forming Fe-ACP (Figure 2). The preparation of amorphous calcium phosphate by precipitation is usually conducted in alkaline solutions. [28] Under acidic conditions, the final products are normally brushite (dicalcium phosphate dihydrate, CaHPO₄·2H₂O; DCPD) or monetite (dibasic calcium phosphate anhydrate, CaHPO₄; DCPA) with high crystallinity. [29] Although ACP has been prepared at pH 6 in the presence of magnesium and citrate ions, the synthesis of ACP at pH lower than 5 has, to our knowledge, never been reported. Our previous research showed that cetyltrimethylammonium bromide and ammonium chloride effectively regulate the crystal growth of CaPs, forming DCPD or DCPA particles with various morphologies and hierarchical structures. [30] In this study, we have shown that AIC is a strong crystallization inhibitor of acidic calcium phosphates, with which ACP was synthesized below pH 5. As shown in Figure 2, the inhibition effect of AIC is concentration-dependent. With a low concentration of AIC (0.007 and 0.02 M), the final phase was still brushite (Figure 2a), but the morphology of the powders changed from micro-sized platelets to block-like particles (Figure 2c). Nano-sized Fe-ACP could be obtained by increasing the AIC concentration up to 0.04 M. Further increasing the AIC concentration to 0.1 and 0.2 M had no influence on the morphology and final phase of the particles. We further investigated the inhibition effects of citrate and iron ions, respectively, to better understand the formation process of Fe-ACP. The presence of iron ions alone was not effective in forming the Fe-ACP (Figure S1, Supporting Information). With 0.04 or 0.10 M iron nitrate, the crystalline phase was still plate-like brushite.
Previous studies revealed that citrate regulates the crystal growth of hydroxyapatite by strongly binding to the hydroxyapatite surface. [31] More recently, it has been found that citrate can facilitate the intrafibrillar formation of hydroxyapatite to produce an inorganic-organic composite by reducing the interfacial energy between the biological matrix and the amorphous calcium phosphate precursor. [32] In accordance with these reports, hydroxyapatite nanocrystals formed in the presence of citrate (Figure S1a,b, Supporting Information). Therefore, the formation of Fe-ACP is regulated by the synergistic effect of both iron ions and citrate. The structural differences of the obtained particles were confirmed by infrared spectra (Figure 2b). The spectrum of the AIC-free sample showed sharp bands of ν₂ PO₄³⁻ (≈985 cm⁻¹), ν₃ PO₄³⁻ (≈1056 and ≈1132 cm⁻¹), and ν₄ PO₄³⁻ (≈524 cm⁻¹). In contrast, the spectra of AIC004, AIC010, and AIC020 had rounded absorption bands at ≈556 and ≈1074 cm⁻¹, confirming the amorphous features of these samples. [28,33] It is worth noting that particles prepared in the presence of AIC show additional bands at ≈1400 and ≈1610 cm⁻¹, which can be attributed to vibrations of the carboxyl groups of the associated citrate, suggesting that a relevant amount of citrate remains in the resultant particles. [31c,34] The pH changes during the synthesis of AIC000, AIC004, AIC010, and AIC020 are shown in Figure S2 (Supporting Information). The final pH dropped with the addition of 0.04 mol L⁻¹ AIC, but continuously increasing the AIC concentration in the solution to 0.10 and 0.20 mol L⁻¹ resulted in a slight increase in the pH. The chemical composition of Fe-ACP is reported in Table S1 (Supporting Information). TGA-DTG analysis revealed that all the samples contain a small amount of carbonate and citrate ions (Figure S3, Supporting Information). In general, the amount of iron, as well as of citrate and carbonate, in the particles increased with the AIC concentration in solution. The detected Fe/(Ca+Fe) ratio was 0.11 when the concentration of AIC was 0.04 mol L⁻¹, and it increased to 0.29 and 0.35 when 0.10 and 0.20 mol L⁻¹ AIC were added. Simultaneously, the amount of phosphate decreased with increasing AIC concentration, possibly because it was replaced by carbonate and citrate. The structural water content was ≈5.0% for AIC004 and increased to 8.7% for the AIC020 samples. The percentage of adsorbed water varied in the range of 16-21%. It is worth noting that Fe-ACP can be prepared even at pH 4 (adjusted with hydrochloric acid) in the presence of 0.10 mol L⁻¹ AIC (Figure S4, Supporting Information). The resulting particles were spherical, with diameters of ≈300 nm. The Fe/Ca ratio of these particles was higher than that of the particles prepared at pH 5. TEM images showed that a layered structure existed within the AIC004 particles (Figure 3a), which was not observed in AIC010 particles (Figure S5a, Supporting Information). The element mapping showed that iron was uniformly distributed within the particles (Figure 3b,c; Figure S5b,c, Supporting Information).
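The enrichment of the particles in iron relative to the reactant mixture can be checked with a rough back-of-envelope comparison. The sketch below (Python) does this under two loudly stated assumptions that are not from the paper: one Fe(III) per AIC formula unit (the stoichiometry of ammonium iron citrate is actually variable) and the quoted molarities taken as those of the combined mixture.

```python
# Rough comparison of the nominal Fe fraction of the reactants with the
# Fe/(Ca+Fe) ratios reported above for the particles (Table S1 values).
# Assumptions (not from the paper): one Fe(III) per AIC formula unit, and the
# quoted molarities treated as those of the combined reaction mixture.

CA = 0.5  # mol L-1, Ca(NO3)2.4H2O

reported = {0.04: 0.11, 0.10: 0.29, 0.20: 0.35}  # AIC conc. -> measured ratio
for aic, measured in reported.items():
    nominal = aic / (CA + aic)  # Fe/(Ca+Fe) expected from the mixture alone
    print(f"AIC {aic:.2f} M: nominal Fe/(Ca+Fe) = {nominal:.3f}, "
          f"reported in particles = {measured:.2f}")
```

In all three cases the particles come out iron-enriched relative to the nominal mixture, which would be consistent with preferential incorporation of iron into the solid, though this reading rests entirely on the assumptions above.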
Stability of Fe-ACP In the bone mineralization process, ACP has been proposed as the precursor and transition phase of crystalline apatite. [4c,5b,35] Several studies have investigated the phase transformation of ACP to crystalline calcium phosphates in solution. [36] In an aqueous solution, the amorphous phase can only exist for several hours. [24] The conversion rate can be slowed down by adding stabilizers, such as polyethylene glycol [37] or adenosine triphosphate. [38] Some ions, such as magnesium and strontium, have been demonstrated to be efficient in stabilizing ACP as well. [6b,39] In this study, the stability of Fe-ACP was investigated by immersing the particles in water and simulated body fluid (SBF, the composition of which is given in the Experimental Section). Interestingly, the stability of these particles is highly related to the concentration of AIC used during preparation. AIC004 (0.04 mol L⁻¹ AIC) particles transformed to hydroxyapatite after immersion in water and SBF for 7 days (Figure 4a,b). The particles after conversion showed an irregular plate-like morphology (Figure S6, Supporting Information). When the concentration of AIC was 0.10 mol L⁻¹ (AIC010), no conversion was observed, even after 31 days of immersion in water and SBF (Figure 4a,b). This was further confirmed by TEM micrographs and SAED patterns of AIC010 particles, which showed amorphous features after immersion in SBF for 7 and 31 days (Figure S7, Supporting Information). The behaviour is similar for AIC020 particles, which showed no phase transformation after storage for 45 days in water and SBF (Figure S8, Supporting Information). This can likely be attributed to the inhibiting action of citrate, which is known to stabilize ACP [40] and even prenucleation clusters, as well as liquid precursor phases. [41] In that way, crystallization can be effectively inhibited. We further investigated the thermal stability of the AIC particles by TGA/XRD. As shown in Figure 4c, continuous weight loss is observed on heating Fe-ACP. The water molecules loosely adsorbed on the surface of Fe-ACP were removed between 25 and 200 °C. The weight loss in the range of 200-400 °C was attributed to the loss of strongly bound water molecules. The total mass loss up to 800 °C was ≈35%. The XRD patterns showed that the particles retained their amorphous character when heated up to 500 °C, without any diffraction peaks. At 800 °C, the amorphous phase converted to calcium iron phosphate (Ca₉Fe(PO₄)₇) (Figure 4d). The AIC000, AIC004, and AIC010 samples showed similar Ca and P release; however, the amounts of Ca and P in the acid buffer solution were lowest for the AIC010-pH4 sample, indicating the highest stability against dissolution in an acetate buffer solution (Figure 4e,f). Biological Performance of Fe-ACP Calcium phosphate-based materials show good biocompatibility and bioactivity, and have broad applications in the dental and orthopedic fields. [42] As one of the trace elements in the human body, iron is non-toxic within the physiological range. [43] In order to verify the biocompatibility of Fe-ACP nanoparticles, the effects of Fe-ACP nanoparticles with different concentrations and iron contents on cell proliferation were investigated. Fe-ACP nanoparticles had no inhibitory effect on cell proliferation at concentrations of 50, 100, and 200 μg mL⁻¹ (Figure 5a-c). At the higher concentrations of Fe-ACP particles (100 and 200 μg mL⁻¹), cell proliferation was slightly promoted. Alkaline phosphatase (ALP) staining and alizarin red staining (ARS) were applied to evaluate the osteogenic properties of AIC010 nanoparticles (Figure 5d,e). The ALP activity of bone marrow mesenchymal stem cells (BMSCs) at 7 days was significantly enhanced by AIC010 particles, and the promotion effect increased with concentration (Figure 5d).
At 14 days, the BMSCs cultured with AIC010 particles showed more matrix mineralization (Figure 5e). The in vitro study revealed that the Fe-ACP particles have good biocompatibility and osteogenic properties, and thus the potential to be applied in the biomedical field. Spark Plasma Sintered Bioceramics One potential application of the Fe-ACP particles is as starting materials for high-performance bioceramics. In this study, Spark Plasma Sintering (SPS) was used to sinter calcium iron phosphate ceramics. Our results clearly show that the presence of iron has a great impact on the hardness of the sintered bioceramics. The ceramics without iron showed a hardness of 3.0 GPa, which reached 4.0 GPa for the AIC010 sample, higher than that of human enamel (≈2.7-3.7 GPa). [44] Further increasing the iron content (AIC020) resulted in a decrease in hardness (2.3 GPa) (Figure 6a). This is in accordance with findings in rodent teeth showing that iron-containing enamel is harder than enamel without iron. [10] XRD patterns of the sintered ceramics are shown in Figure 6b. The final phase of the ceramics was Ca₂P₂O₇ for the AIC000 sample, and the main phase shifted to Ca₉Fe(PO₄)₇ with increasing iron content. The compositions and unit cell parameters were calculated by Rietveld refinement of the XRD data and are shown in Table S2 (Supporting Information). The fracture surfaces of the ceramics were examined by SEM (Figure 6c-f). The ceramics with AIC000 and AIC004 as the starting materials showed many micro- and nanopores on the surface (Figure 6c,d), while ceramics using AIC010 and AIC020 as starting materials showed denser fracture surfaces (Figure 6e,f). The acid resistance of the ceramics was evaluated (Figures S10 and S11, Supporting Information). The surface morphologies after the acid attack are shown in Figure S10 (Supporting Information). For the samples without or with a low amount of iron, the surfaces were severely damaged (Figure S10a,b, Supporting Information). Meanwhile, for the samples with high iron contents, the surfaces were much smoother (Figure S10c,d, Supporting Information). The acid resistance of the ceramics was further estimated by the amounts of Ca and P dissolved from the samples. The AIC010 and AIC020 ceramic samples showed much lower amounts of dissolved Ca and P in comparison with the AIC000 and AIC004 ceramic samples, indicating the improved acid resistance of the ceramics in the presence of iron. Overall, the ceramics with a higher amount of iron exhibited superior acid resistance, showing their great potential for dental applications. Figure 6. Spark plasma sintered bioceramics using AIC000, AIC004, AIC010, and AIC020 as the raw materials. a) Hardness and relative density (n = 5). b) XRD patterns, matched with the standard diffraction patterns of Ca₂P₂O₇ (PDF 00-003-0605) and Ca₉Fe(PO₄)₇ (PDF 01-089-0514). c-f) Fracture surfaces of the ceramics (magnification ×20,000) prepared using c) AIC000, d) AIC004, e) AIC010, and f) AIC020 as starting materials. Conclusion Fe-ACP particles with variable iron content were prepared in the presence of ammonium iron citrate. The Fe-ACP particles are highly stable in aqueous media, such as water, SBF, and acetate buffer solutions, and their stability in water and SBF depends on the amount of ammonium iron citrate. The Fe-ACP particles show good biocompatibility and osteogenic properties in vitro, and can be applied as starting materials to fabricate calcium phosphate ceramics.
In the presence of a proper amount of ammonium iron citrate, ceramics with a hardness higher than that of human tooth enamel and superior acid resistance can be prepared. The work highlights a novel route to prepare Fe-ACP particles, and presents the potential role of Fe-ACP in biomineralization and as a starting material to fabricate acid-resistant high-performance bioceramics. Experimental Section Chemicals: Ca(NO₃)₂·4H₂O was purchased from Chinasun Specialty Products Co., Ltd; Na₂HPO₄·12H₂O was obtained from Shanghai Lingfeng Chemical Reagent Co., LTD; ammonium iron(III) citrate was purchased from Sigma-Aldrich; ethanol was purchased from Sinopharm Chemical Reagent Co. All chemicals were used as received without further purification. All of the chemicals were of analytical grade. Deionized water was used in all experiments. Preparation of Fe-ACP: In a typical experiment, solution 1 was prepared by dissolving 0.3 m Na₂HPO₄·12H₂O and a certain amount of ammonium iron(III) citrate (0, 0.007, 0.02, 0.04, 0.10, or 0.20 m) in deionized water. Solution 2 was prepared by dissolving 0.5 m Ca(NO₃)₂·4H₂O in deionized water. Solution 2 was slowly added into solution 1 under vigorous stirring at room temperature for 5 min. The pH during the synthesis was measured using a pH meter (PB-10, Sartorius). Each experiment was repeated three times. The pH electrodes were calibrated with 50 mm C₈H₅KO₄ (pH = 4.01, 25 °C), 25 mm NaH₂PO₄/Na₂HPO₄ standard buffer solution (pH = 6.86, 25 °C), and 10 mm Na₂B₄O₇·10H₂O standard buffer solution (pH = 9.18, 25 °C). When preparing Fe-ACP particles at pH 4, the pH was adjusted using hydrochloric acid. The particles were separated from the solvent by centrifugation and washed with deionized water three times. Finally, Fe-ACP was washed with ethanol and dried in a vacuum desiccator. The samples were designated as AIC000 (no AIC), AIC0007, AIC002, AIC004, AIC010, and AIC020 according to the AIC concentration used. In the control experiments, the pH of solution 1 was adjusted to the same pH as when ammonium iron citrate (0.04 and 0.10 m) was added; solution 2 was prepared by dissolving 0.5 m Ca(NO₃)₂·4H₂O in deionized water, and the other procedures were as described above. When investigating the thermal stability of the particles at different temperatures, the AIC particles were calcined at a predetermined temperature in a muffle furnace (KSL-1700X, Kejing, China). The temperature of the muffle furnace was raised to the specified temperature at a rate of 10 °C min⁻¹ and then held at that temperature for 3 h. Preparation of SBF and Acetate Buffer Solutions: The SBF solution was prepared by dissolving NaCl, NaHCO₃, KCl, K₂HPO₄·3H₂O, MgCl₂·6H₂O, CaCl₂, and Na₂SO₄ in deionized water, adjusting the ion concentrations to be similar to those in human blood plasma (Table 1). [45] The SBF was buffered at a pH value of 7.40 using (CH₂OH)₃CNH₂ and 1.0 m HCl. The acetate buffer solution was prepared by dissolving 1.86 g sodium acetate (0.02269 mol) and 4.64 g acetic acid (0.07731 mol) in 1 L deionized water (pH 4). The Stability of Fe-ACP in Water, SBF Solution, and Acetate Buffer Solution: Stability in water and SBF solution: 100 mg of freshly prepared Fe-ACP was placed in wide-mouth bottles containing 100 mL deionized water or simulated body fluid (SBF). Afterward, the bottles were kept in an oscillating incubator at constant temperature (90 rpm, 37.5 °C). After the predetermined time, the samples were isolated by centrifugation, washed with deionized water and ethanol, and dried in a vacuum desiccator.
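As a quick sanity check on the acetate buffer recipe quoted above, the Henderson-Hasselbalch relation can be evaluated from the stated masses. This is a minimal sketch assuming ideal (activity-free) behaviour and pKa = 4.76 for acetic acid; ionic-strength effects would pull the real value somewhat closer to the stated pH 4.

```python
import math

# Henderson-Hasselbalch check of the acetate buffer recipe quoted above.
# Assumptions: ideal solution (activity = concentration), pKa(acetic acid) = 4.76.

pKa = 4.76
n_acetate = 1.86 / 82.03   # mol sodium acetate (anhydrous, M = 82.03 g/mol) in 1 L
n_acid = 4.64 / 60.05      # mol acetic acid (M = 60.05 g/mol) in 1 L

pH = pKa + math.log10(n_acetate / n_acid)
print(f"[A-] = {n_acetate:.5f} M, [HA] = {n_acid:.5f} M, pH = {pH:.2f}")  # ~4.2
```

The recovered mole numbers agree with the values in parentheses above, and the ideal-solution estimate lands slightly above the target pH of 4, as expected without activity corrections.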
Stability in acetate buffer solution: 10 mg of Fe-ACP particles was put into a beaker with 10 mL acetate buffer solution and intensively stirred. After 15 and 60 min, the solution was filtered with a 0.22 μm pore-sized filter and the filtrate was used to measure the ion concentrations. Each experiment was repeated four times (n = 4). Scanning Electron Microscopy (SEM): The morphology of dried Fe-ACP samples was observed using a scanning electron microscope (S-400, Hitachi, Japan). Specimens were sputtered with a thin Au coating for 45 s before measurement. The samples were mounted on aluminum stubs with double-sided carbon tape. The acceleration voltage was set to 15 kV. Transmission Electron Microscopy (TEM): A JEOL 2200FS HRTEM operated at 200 kV, equipped with a JEOL EDX detector, was used to perform high-angle annular dark-field (HAADF) scanning TEM (STEM) as well as EDX element line scanning and mapping. When preparing samples, the particles were dispersed in ethanol and a drop of the colloidal solution was placed on a copper grid and then air-dried. X-Ray Powder Diffraction (XRD): Dried Fe-ACP specimens were ground into fine powder and analyzed using an X-ray diffractometer (XRD, Bruker D8 Advance, Germany) equipped with a copper source, operating at 40 kV and 40 mA. Data were collected for 2θ ranging between 10° and 80° under Cu Kα radiation (λ = 1.5418 Å). The step size was 0.02° and the dwell time was 0.1 s. Qualitative and quantitative phase analyses of the studied samples were conducted with MDI Jade software. The external standard method and whole-pattern fitting refinement were utilized to obtain the lattice parameters of the studied phases. Thermogravimetric Analysis (TGA): Thermogravimetric analysis (TGA) measurements were performed using a thermal analyzer (SDT Q600, USA) under an air atmosphere. The samples were heated from ambient temperature to 800 °C at a heating rate of 10 °C min⁻¹. For each experiment, ≈2 mg of powdered sample was placed in the crucible of the thermobalance. Hardness Measurement of Sintered Ceramics: A micro hardness tester (HVS-1000, Lerot Test Instruments Co., LTD, Shandong, China) was used to measure the Vickers hardness of the spark plasma sintered bioceramics on the micro scale with an indentation load of 2.94 N. Five indentations were made on each sample and the average values are reported. Alizarin Red Staining (ARS): After osteogenic induction of BMSCs co-cultured with different concentrations of AIC010 nanoparticles for 14 days, cells were fixed with 4% PFA and stained with an Alizarin Red solution (Solarbio, China) for 30 min. The images were obtained using an optical microscope. Spark Plasma Sintering (SPS): Spark plasma sintering (Labox 325, Sinter Land, Japan) was utilized to consolidate the raw powder. Approximately 0.5 g of powder was loaded in a graphite die (with a diameter of 10 mm) between two graphite punch units. A low pressure (≈10 MPa) was applied to the die to pre-press the powder. At the beginning of the sintering experiment, the chamber was evacuated to below 20 Pa. The instrument took 2 min to warm up from room temperature to 570 °C. After that, the powder was heated from 570 °C to the highest sintering temperature (900 °C) at a ramping rate of 50 °C min⁻¹. The dwell time at 900 °C was 5 min. The temperature was monitored by a thermocouple inserted into a non-through hole of the graphite die.
A uniaxial pressure was applied gradually to the punch unit, reaching a maximum of 40 MPa before the temperature reached 700 °C, and the pressure was maintained during the entire sintering process. The residual graphite on the sintered samples was removed by grinding and polishing with SiC paper. Acid Resistance: The acid resistance of the spark plasma sintered ceramics was evaluated in a citric acid buffer solution (10 mm). The citric acid buffer solution (pH 5) was prepared by adding citric acid monohydrate (10 mm) and trisodium citrate dihydrate (10 mm) to deionized water. The Fe-ACP sintered ceramic was immersed in 10 mL citric acid buffer solution at 37 °C. After 72 h, the citric acid buffer solution was filtered with a 0.22 μm pore-sized filter and the filtrate was used to measure the ion concentrations. The surfaces of the ceramics after the acid resistance assay were analyzed by SEM. The solution was collected for ion release measurement. Ion concentrations were measured by inductively coupled plasma atomic emission spectroscopy (SPECTRO Analytical Instruments; Agilent 7800, USA), measuring atomic Ca at 393.366 nm, P at 177.495 nm, and Fe at 259.940 nm. Statistical Analysis: Quantitative data were expressed as the mean ± standard deviation. One-way analysis of variance (ANOVA) followed by Tukey post hoc comparison (OriginLab Corporation, MA, USA) was used for statistical analysis. A value of p < 0.05 denoted a statistically significant difference. Supporting Information Supporting Information is available from the Wiley Online Library or from the author.
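The statistical workflow described above (one-way ANOVA followed by a Tukey post hoc comparison at p < 0.05) was run by the authors in OriginLab; an equivalent analysis can be sketched in Python. The hardness values below are made-up placeholders built around the reported group means, not the actual measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Illustrative re-implementation of the Statistical Analysis section above.
# Placeholder hardness data (GPa), n = 5 per group, around the reported means.
rng = np.random.default_rng(0)
groups = {
    "AIC000": rng.normal(3.0, 0.2, 5),
    "AIC010": rng.normal(4.0, 0.2, 5),
    "AIC020": rng.normal(2.3, 0.2, 5),
}

# One-way ANOVA across the three groups.
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_val:.4f}")

# Tukey post hoc pairwise comparisons at alpha = 0.05.
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```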
6,532.4
2023-05-25T00:00:00.000
[ "Materials Science" ]
Modified Aquila Optimizer with Stacked Deep Learning-Based Sentiment Analysis of COVID-19 Tweets: In recent times, global cities have been transforming from traditional cities to sustainable smart cities. In text sentiment analysis (SA), many people face critical issues, namely urban traffic management, urban living quality, urban information security, urban energy usage, urban safety, etc. Artificial intelligence (AI)-based applications play important roles in dealing with these crucial challenges in text SA. In such scenarios, the classification of COVID-19-related tweets for text SA includes using natural language processing (NLP) and machine learning methodologies to classify tweet datasets based on their content. This assists in disseminating relevant information, understanding public sentiment, and promoting sustainable practices in urban areas during this pandemic. This article introduces a modified aquila optimizer with a stacked deep learning-based COVID-19 tweet classification (MAOSDL-TC) technique for text SA. The presented MAOSDL-TC technique incorporates FastText, an effective and powerful text representation approach used for the generation of word embeddings. Furthermore, the MAOSDL-TC technique utilizes an attention-based stacked bidirectional long short-term memory (ASBiLSTM) model for the classification of sentiments that exist in tweets. To improve the detection results of the ASBiLSTM model, the MAO algorithm is applied for the hyperparameter tuning process. The presented MAOSDL-TC technique is validated on the benchmark tweets dataset. The experimental outcomes implied the promising results of the MAOSDL-TC technique compared to recent models in terms of different measures. This MAOSDL-TC technique improves the accuracy and interpretability of sentiment prediction. Introduction Social media platforms play an important part during extreme crises as individuals use these communication media to share feedback, sentiments, thoughts, and reactions with other people to manage and respond to crises [1]. Thus, this study focuses on exploring collective reactions to events expressed on social platforms [2]. Special consideration will be given to analyzing the public's responses to worldwide medically relevant events, particularly the pandemic, described through Twitter's social network, due to its widespread reputation and ease of access utilizing the application programming interface (API) [3]. Sentiment analysis (SA) is a kind of technique employed to represent, separate, or define personal data like ideas communicated in a given content, depending on natural language processing (NLP) and computational methods [4]. The major goal of SA is to define the author's feelings as negative, positive, or neutral regarding different subjects [5]. To evaluate the effects of social media information relevant to the COVID-19 pandemic, research associated with people's opinions on medical information and applications has gained major significance [6]. In particular, text analysis of Twitter information has been the emphasis of several reviews, allowing researchers to analyze massive instances of user-defined content to find views, which can inform decision-making and earlier reaction mechanisms [7]. The Twitter platform has been undergoing a large infusion of data relevant to COVID-19 problems [8]. For SA, researchers have been using different kinds of textual documents such as Facebook posts and tweets [9].
Several research works on SA using social media data are available in the literature [10]. Identification of such sentiments from social media can support respondents in comprehending network dynamism, for example, panics, users' important problems, and emotional impacts on members' skills [11]. This study aims to examine the application of deep learning (DL) methods and natural language processing (NLP) approaches, namely SA, to support policymakers and communities in avoiding the growth of misleading information, incitement of insurrection, and fake news [12]. SA, or public view mining, can be described as a way of employing machine learning (ML) and NLP for the classification of sentiments and subjective data [13]. SA is one of the most common research fields in the domain of NLP as it provides the ability to study and analyze sentiments that are expressed by various individuals [14]. This article introduces a modified aquila optimizer with stacked deep learning-based tweets classification (MAOSDL-TC) technique for text SA. The presented MAOSDL-TC technique incorporates FastText, an effective, powerful text representation approach used for the generation of word embeddings. Furthermore, the MAOSDL-TC technique utilizes an attention-based stacked bidirectional long short-term memory (ASBiLSTM) model for the classification of sentiments that exist on Twitter. To improve the detection results of the ASBiLSTM model, the MAO algorithm is applied for the hyperparameter tuning process. The presented MAOSDL-TC technique is validated against the benchmark of a COVID-19 tweets dataset. Related Works Qorib et al. [15] downloaded public tweets day-to-day from Twitter using the Twitter API and pre-processed and labelled them. Vocabulary normalization was based mainly on the stemming and lemmatization processes. The NRCLexicon method was used to transform tweets into 10 different classes. A t-test was deployed to check the statistical significance of the relationship between the sentiments. Lastly, neural networks including bidirectional encoder representations from transformers (BERT), a 1-dimensional convolutional neural network (1DCNN), long short-term memory (LSTM), and a multilayer perceptron (MLP) were trained and tested. In [16], an approach was introduced that was designed to provide an ensemble module where the advantages of automatic feature extraction and handcrafted features were linked through ML and DL algorithms. Before training the ML techniques, unstructured information was attained, pre-processed, and annotated using VADER and TextBlob. Sunagar et al. [17] implemented tweet classification of COVID-19 datasets via DL approaches. The algorithm was executed using two word-embedding methods, namely Word2Vec and Global Vectors for Word Representation (GloVe). In [18], the researchers presented an NLP technique based on the bidirectional LSTM (BiLSTM) method to implement sentiment classification and detect several problems related to public sentiment on COVID-19. BiLSTM is an enhanced version of classical LSTM that generates outputs from right and left contexts at every time step. This enabled authorized institutions utilizing this model to alleviate the effect of negative messages and to understand people's concerns. Tatineni et al.
[19] presented a technique to evaluate the emotion of live tweets. The technique comprised a dashboard with different functionalities. The central dashboard had a clickable map of India that illustrated state-wide data visualization as well as country-wide data visualization of the emotion drawn from Twitter. Live emotion prediction of tweets can be accomplished using DL techniques. Tweet fetching is dynamic, obtaining new data automatically. Vaddadi et al. [20] developed a technique that used automated implementation to extract details regarding COVID-19 from up-to-date tweet data. The SA uses LSTM, a kind of recurrent neural network (RNN), employed on Twitter's COVID-19 hashtags to see people's reactions to the outbreak. The tweet datasets are then categorized and labelled as positive, negative, and neutral, and the results visualized. Chakraborty et al. [21] presented SA on a collection of tweets gathered on COVID-19. In the beginning, they analyzed the trends of public sentiments related to COVID-19 using n-gram analysis and evolutionary classification. Next, the sentiment rating was calculated on the gathered tweets based on the class. Lastly, the LSTM model was trained on two classes of rated tweets to forecast sentiment on the COVID-19 dataset. Tawfik and Makhlouf [22] analyzed public opinions on the program of vaccination against COVID-19. To achieve this, an ensemble mechanism based on DL was established, which fused LSTM and a bidirectional gated recurrent unit (BiGRU). The accuracy of the presented algorithm was compared with five different ML techniques and two DL algorithms using advanced approaches. Raheja and Asthana [23] implemented an SA of tweets during lockdown utilizing a multinomial logistic regression approach. The presented methodology followed pre-processing, polarity scoring, and feature extraction before executing the ML approach. In [24], a novel algorithm was presented for automatic sentiment classification of COVID-19 tweets utilizing adaptive neuro-fuzzy inference system (ANFIS) approaches. Jain et al. [25] aimed to analyze the performance of many classification techniques that take an input value and identify to which resultant class it belongs. Six ML approaches, two ensemble systems, and four DL methods were utilized for this work. In [26], the R programming language was used to conduct an investigation of Twitter data. In this case, the authors designed a method named Hybrid Heterogeneous SVM (H-SVM) and carried out sentiment classification, categorizing tweets as negative, neutral, and positive. The Proposed Model This article is concentrated on the improvement of the MAOSDL-TC technique for text SA. The MAOSDL-TC technique mainly concentrates on the recognition and categorization of different kinds of sentiments in COVID-19 tweets. In the presented MAOSDL-TC technique, the following set of processes is involved, namely pre-processing, FastText, ASBiLSTM-based classification, and MAO-based parameter selection. Figure 1 depicts the workflow of the MAOSDL-TC algorithm.
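To make the workflow of Figure 1 concrete, the sketch below wires up the first two stages (pre-processing and FastText embedding) in Python with gensim. The function bodies, example tweets, and parameter values are illustrative assumptions, not the authors' implementation; the ASBiLSTM classification and MAO tuning stages are sketched in the sections that follow.

```python
from gensim.models import FastText

# Minimal sketch of the first two MAOSDL-TC stages (assumed parameter values).

def preprocess(tweets):
    # Stand-in for the full pipeline described below (lemmatization,
    # stop-word removal, punctuation handling): here just case-fold + tokenize.
    return [t.lower().split() for t in tweets]

def build_embeddings(token_lists):
    # FastText embeddings with character n-gram (subword) information,
    # which is what lets it handle out-of-vocabulary words.
    return FastText(sentences=token_lists, vector_size=100,
                    window=5, min_count=1, min_n=3, max_n=6)

tweets = ["Vaccine rollout begins today", "Lockdown extended again"]
ft = build_embeddings(preprocess(tweets))
print(ft.wv["lockdown"].shape)         # (100,) dense vector for a known word
print(ft.wv.most_similar("lockdown"))  # subword units generalize to rare words
```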
Data Pre-Processing and Word Embedding Text preprocessing is the technique used to clean the original text data. A robust text pre-processing technique is crucial for applications of NLP tasks. After preprocessing, the attained text components act as key elements of input that are fed into the processing of textual data. Preprocessing consists of different approaches for translating the original texts using a well-defined method: handling of special characters or symbols, lemmatization, elimination of stopwords, and lexical analysis (ignoring case sensitivity, word tokenization, and removal of punctuation). Afterwards, the FastText method was employed for word embedding. FastText is a widely used text representation method that generates word embeddings, which are dense vector representations of words. This embedding captures the semantic meaning of an individual word along with its subword information and morphological structure. In particular, this makes FastText more effective in handling out-of-vocabulary words and capturing the relationship between words with related prefixes or suffixes. FastText works by treating a word as a mixture of subword units (character n-grams). This technique enables it to create embeddings for known and unknown words by leveraging the subword components. Tweet Data Classification Using ASBiLSTM Model Once the tweets are preprocessed, classification takes place using the ASBiLSTM model. In this study, we used the ASBiLSTM model as an essential element of the presented method, which has the benefit of simultaneously extracting temporal features of time series [27]. The BiLSTM is an augmentation of the LSTM. The LSTM is a kind of RNN which overcomes the vanishing gradient problem of RNNs through the inclusion of a gating module. In comparison with the RNN, the LSTM is composed of a memory cell and forget, input, and output gates, in which the cell memory is responsible for storing a summary of the historical input sequence, and the gate modules control the flow of information between the input and output datasets. LSTM aids efficient learning of long-term temporal dependency relationships by virtue of this well-developed structure.
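Before the formal expressions that follow, a single LSTM cell step can be written out directly; the NumPy sketch below follows the gating just described, with arbitrary dimensions chosen for illustration.

```python
import numpy as np

# One LSTM cell step, following the gating described above.
# Shapes: x_t (d_in,), h_prev/c_prev (d_h,), W_* (d_h, d_in + d_h), b_* (d_h,).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    z = np.concatenate([h_prev, x_t])      # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)           # forget gate
    i_t = sigmoid(W_i @ z + b_i)           # input gate
    c_tilde = np.tanh(W_c @ z + b_c)       # candidate memory
    c_t = f_t * c_prev + i_t * c_tilde     # memory cell update (pointwise)
    o_t = sigmoid(W_o @ z + b_o)           # output gate
    h_t = o_t * np.tanh(c_t)               # hidden state stays in [-1, 1]
    return h_t, c_t

d_in, d_h = 4, 3
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((d_h, d_in + d_h)) for _ in range(4)]
bs = [np.zeros(d_h) for _ in range(4)]
h, c = lstm_step(rng.standard_normal(d_in), np.zeros(d_h), np.zeros(d_h), *Ws, *bs)
print(h, c)
```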
Consider c_{t−1} as the memory cell state of the prior time step t−1, an input vector x_t at time step t, and h_{t−1} the hidden layer of the prior time step t−1. f_t, i_t, and o_t denote the gate vectors that control how much data is to be forgotten, updated, and output from the memory cell, correspondingly. The operation of the LSTM can be formulated by the following expressions (reconstructed here in the standard form):

f_t = \sigma(W_f [h_{t-1}, x_t] + b_f), \quad i_t = \sigma(W_i [h_{t-1}, x_t] + b_i), \quad o_t = \sigma(W_o [h_{t-1}, x_t] + b_o)
\tilde{c}_t = \tanh(W_c [h_{t-1}, x_t] + b_c), \quad c_t = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, \quad h_t = o_t \odot \tanh(c_t)

From these expressions, the tanh function ensures that the value of the hidden layer (HL) remains in the interval [−1, 1]. σ(·) indicates the sigmoid function, and the symbol ⊙ shows pointwise multiplication. The learnable parameters W and b are the weights and biases adjusted during model training. BiLSTM incorporates a bidirectional conceptualization into the LSTM, exploiting forward and backward LSTMs for feature extraction and concatenating the respective hidden features to extract bidirectional patterns. Accordingly, BiLSTM attains context data from previous observations over the entire input. This bidirectional extraction over the time series simplifies the capture of backward and forward temporal attributes of sequential data, considering their variation patterns. With the context features, BiLSTM allows a hybrid model to attain better feature extraction capabilities and representations, enabling more accurate and efficient prediction of future observations by leveraging past observations. In particular, BiLSTM trains its parameters in backward and forward passes to realize the context. In the backward layer, the LSTM estimates the derivative of the transmission errors from the forward layer, while in the forward layer the LSTM updates the parameters in the conventional way. Considering an input of length T, the operational procedure is:

\overrightarrow{h}_t = \mathrm{LSTM}(x_t, \overrightarrow{h}_{t-1}), \quad \overleftarrow{h}_t = \mathrm{LSTM}(x_t, \overleftarrow{h}_{t+1}), \quad H_t = [\overrightarrow{h}_t; \overleftarrow{h}_t]

where H_t indicates the hidden layer (HL) of BiLSTM at time step t, and \overrightarrow{h}_t and \overleftarrow{h}_t signify the HL in the forward and backward layers at time step t. In ASBiLSTM, the attention module is used to optimize the prediction outcomes. Figure 2 shows the framework of ASBiLSTM. The attention module is a weighting quantity over sequences that allocates greater weight to targets with higher correlation. An attention module minimizes the loss of prior data and extracts relevant information by highlighting the contribution of the most powerful and useful parts of the input to the outputs. In the DL technique, the attention module allocates weights to the output of BiLSTM by mapping the weights, so that the learned parameter matrix can focus on the input that contributes to the outputs.
As shown in Equations (3) to (5), the sequence of outputs H_1, H_2, …, H_T from the HL of BiLSTM is fed as input to the attention model, and the distribution of attention weights is attained. Reconstructed in standard additive-attention form, Equation (3) defines the computation of similarities or correlations between the input and output features, Equation (4) shows the computation of the attention weights by normalizing the scores, and Equation (5) indicates the computation of the final state of the attention mechanism:

e_t = V_e \tanh(W H_t + b) \quad (3)
\alpha_t = \frac{\exp(e_t)}{\sum_{i=1}^{T} \exp(e_i)} \quad (4)
S = \sum_{t=1}^{T} \alpha_t H_t \quad (5)

where V_e and W signify the weight coefficients of the parameters learned during model training, α_t indicates the distribution probability at the t-th time step, and b shows the bias.
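A minimal NumPy rendering of this attention pooling, matching Equations (3)-(5) as reconstructed above (the exact parameterization used in the original is an assumption), is shown below.

```python
import numpy as np

# Attention pooling over BiLSTM hidden states H_1..H_T, following Eqs. (3)-(5)
# as reconstructed above (additive attention; parameterization assumed).

def attention(H, W, b, V_e):
    # H: (T, d) hidden states; W: (d, d); b: (d,); V_e: (d,)
    scores = np.tanh(H @ W.T + b) @ V_e             # Eq. (3): scores e_t
    alpha = np.exp(scores) / np.exp(scores).sum()   # Eq. (4): softmax weights
    return alpha, alpha @ H                         # Eq. (5): S = sum_t a_t H_t

T, d = 5, 3
rng = np.random.default_rng(2)
H = rng.standard_normal((T, d))
alpha, S = attention(H, rng.standard_normal((d, d)), np.zeros(d),
                     rng.standard_normal(d))
print(alpha.round(3), S.round(3))  # weights sum to 1; S is the attended summary
```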
Hyperparameter Tuning Using MAO Algorithm The MAO algorithm is applied in this work for the hyperparameter tuning of the ASBiLSTM module. The AO mainly depends upon the prey-grabbing behaviour of the aquila. AO is a population-based algorithm which exhibits effectiveness in the field of complex and nonlinear optimization within a short period of time. The classical AO principally comprises five significant steps, namely initialization, expanded exploration, narrowed exploration, expanded exploitation, and narrowed exploitation. An MAO was introduced in this study [28]. By modifying the SCF from the IAO, the MAO makes further amendments to the AO. However, the convergence properties of the SCF slow down the progress over epochs in the IAO. These properties may be responsible for certain challenges in searching for an optimum result. To overcome these challenges, a modified version of the IAO was introduced that integrates a modified search control factor (MSCF) particularly adapted to the 2nd and 3rd search processes. The subsequent section provides a detailed description of the MAO technique, highlighting the modifications that were made and their effects on the optimization technique. The MSCF is used to control the search range, which reduces the movement of the aquila over the epochs. Accordingly, compared to the prior SCF, the search space is considerably narrower. Furthermore, the optimum solution is found considerably more quickly than with the prior technique. In the MSCF, t denotes the current iteration and T the maximal number of iterations; the parameter r is a random number ranging from zero to one, and dir indicates the direction control factor. These factors play a major role in controlling the flight direction of the aquila. The MSCF function aims to attain fast convergence by restricting the movement of the aquila. Furthermore, it decreases optimization latency. The modified technique needs less time to identify the optimum solution set than the original AO. Both optimization approaches were performed with sizes of 250 and 250 epochs. With the incorporation of the MSCF function, the presented technique includes four different search stages, discussed in the following. Step 1: Vertical Dive Attack (S₁) The aquila begins its hunt by identifying the target region and selecting the optimum hunting position by swooping high in the air. These attacks are called vertical dive attacks and, in the standard AO form, are expressed as

S_1(t+1) = S_{best}(t) \times \left(1 - \frac{t}{T}\right) + \left(\bar{S}(t) - S_{best}(t) \cdot r\right) \quad (8)

In Equation (8), S_1(t+1) denotes the solution candidate at epoch (t+1), r shows a random number in the interval [0, 1], and S_{best}(t) shows the best solution attained up to the t-th generation. The term (1 − t/T) is used for controlling the search region, and \bar{S}(t) denotes the mean value of the existing solutions at the t-th epoch. Step 2: Modified Full Search with a Short Glide Attack (MS₂) Before attacking the prey, the aquila comprehensively searches the solution space via different directions and speeds, in what is called a full search with short glide attacks (Equation (9)). In Equation (9), x and y correspond to the positions or coordinates of the point tracing the spiral shape during the search step, r indicates a random number within [0, 1], and MSCF(t) denotes the modified search control factor. Rather than applying the Levy flight (LF) distribution, we integrated the MSCF to eliminate the problem of getting stuck in a locally optimal solution. Step 3: Modified Search Around Prey and Attack (MS₃) The prey's region is located accurately after the MS₂ search step. The aquila thoroughly explores around the target and, with pseudo-attacks, recognizes the prey's reaction in what is called a search around prey and attack (Equation (10)). In Equation (10), S_R(j) denotes a random set of solutions and MS₃(i, j) indicates the existing solution at epoch t.
Step 4: Walk and Grab Attack (S₄) Finally, the aquila attacks from above based on the prey's movement in the 4th search approach. This search process is denoted "Walk and Grab Prey" and, in the standard AO form, reads

S_4(t+1) = QF \times S_{best}(t) - (G_1 \times S(t) \times r) - G_2 \times \mathrm{lev}(D) + r \times G_1 \quad (11)

where S_4(t+1) represents the solution attained so far, and lev(D) shows the Levy distribution for the D-dimensional range. QF indicates the quality function for balancing the search process, G₁ denotes the movement of the aquila during the hunt, and G₂ shows the flight slope while hunting. The fitness choice is a key component of the MAO method. Classifier performance is applied to measure a better solution candidate; the performance value is the foremost condition applied to develop the fitness function (FF), here reconstructed as the maximization of precision:

FF = \max(P), \quad P = \frac{TP}{TP + FP}

where TP and FP indicate the true and false positive values. Results and Discussion The performance validation of the MAOSDL-TC method on the sentiment classification of COVID-19 tweets takes place using the Kaggle dataset [29], which holds 2750 samples with 11 classes, as portrayed in Table 1. Result Analysis A brief result of using the MAOSDL-TC technique on COVID-19 tweet classification is illustrated in Table 2 and Figure 4. The obtained results state that the MAOSDL-TC technique properly recognized all classes. On 70% of the TR set, the MAOSDL-TC technique provided an average accuracy of 99.19%, precision of 95.63%, recall of 95.55%, F-score of 95.54%, and Jaccard index (JI) of 91.49%. In addition, on 30% of the TS set, the MAOSDL-TC approach attained an average accuracy of 99.45%, precision of 97.15%, recall of 96.89%, F-score of 96.99%, and JI of 94.18%. These results confirmed that the MAOSDL-TC technique exhibits enhanced performance over recent models.
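The measures reported above can be reproduced for any predictions with standard tooling; the sketch below uses scikit-learn on hypothetical multi-class labels with macro averaging (the averaging convention used in the paper is an assumption).

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, jaccard_score)

# Accuracy, precision, recall, F-score, and Jaccard index (JI) as reported
# above, computed on hypothetical multi-class predictions, macro-averaged.

y_true = [0, 1, 2, 2, 1, 0, 2, 1]
y_pred = [0, 1, 2, 1, 1, 0, 2, 2]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred, average="macro"))
print("recall   :", recall_score(y_true, y_pred, average="macro"))
print("F-score  :", f1_score(y_true, y_pred, average="macro"))
print("Jaccard  :", jaccard_score(y_true, y_pred, average="macro"))
```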
Conclusions This article has concentrated on the improvement of the MAOSDL-TC method for the classification of text sentiments in COVID-19 tweets. The MAOSDL-TC technique mainly concentrates on the recognition and categorization of different kinds of sentiments in COVID-19-related tweets. In the presented MAOSDL-TC technique, the following set of processes was involved, namely pre-processing, FastText, ASBiLSTM-based classification, and MAO-based parameter selection. In this work, the ASBiLSTM model was used for the classification of sentiments existing in the tweets. Lastly, the MAO system was applied for the hyperparameter tuning process, which aids in improving the detection results of the ASBiLSTM model. The presented MAOSDL-TC method was validated on the benchmark tweets dataset. The experimental outcomes, with a maximum accuracy of 99.45%, suggested the promising results of the MAOSDL-TC technique compared to recent models. This MAOSDL-TC technique not only improves accuracy but also enhances the interpretability of sentiment prediction. Figure 2. The architecture of the ASBiLSTM technique. Figure 3. Classifier performance of the MAOSDL-TC technique on the test database; panels a and b show the confusion matrices achieved by the MAOSDL-TC technique. Figure 4. Average results of the MAOSDL-TC approach on the 70:30 TR/TS split. Figure 5. Accuracy curve of the MAOSDL-TC approach: the training accuracy (TR_accu) is determined by evaluating the MAOSDL-TC technique on the TR dataset, whereas the validation accuracy (VL_accu) is computed by evaluating performance on a separate testing dataset; both rise with an increase in epochs, so the outcome of the MAOSDL-TC technique improves on the TR and TS datasets as the number of epochs grows. Figure 6. Loss curve of the MAOSDL-TC approach: the TR_loss and VL_loss results without optimization, where the TR_loss reflects the errors of the predictive performance on the training data. Figure 7. Comparative outcome of the MAOSDL-TC algorithm with recent methods. Table 1. Description of the database. Table 3. Comparative outcome of the MAOSDL-TC algorithm with recent methodologies.
5,786.2
2023-10-03T00:00:00.000
[ "Computer Science" ]
Gravitational waves from binary compact star mergers in the context of strange matter In this article we focus on the appearance of the hadron-quark phase transition and the formation of strange matter in the interior region of the hypermassive neutron star, and its connection with the spectral properties of the emitted gravitational waves (GWs). A strong hadron-quark phase transition might give rise to a mass-radius relation with a twin star shape, and we will show in this article that a twin star collapse followed by a twin star oscillation is feasible. If such a twin star collapse were to happen during the post-merger phase, it would be imprinted in the GW signal. Introduction The four confirmed detections of the gravitational waves (GWs) emanated from the inward spiral and merger of pairs of black holes marked the beginning of a new era in observational astrophysics. The new field of gravitational-wave astronomy will uncover violent, highly energetic astrophysical events that could not be explored before by humankind. Without doubt, GWs from a binary compact star merger will soon be announced by the LIGO-VIRGO collaboration, including an electromagnetic counterpart. 1 By analysing the power spectral density profile of the post-merger emission, the GW signal can set tight constraints on the high-density regime of the equation of state (EOS) of elementary matter. The modification of the EOS due to a potential influence of a hadron-quark phase transition (HQPT) and the impact of strange quark matter on the EOS, which is currently solely probed in relativistic heavy ion collisions, might be imprinted in the post-merger phase of the emitted GW of a merging compact star binary. Hybrid star mergers represent optimal astrophysical laboratories to investigate the QCD phase structure and, in addition to the observations from heavy ion collisions, will possibly provide a conclusive picture of the QCD phase structure at high density and temperature [1]. Figure 1. Logarithm of the rest-mass density profile (time snapshots at t = −0.17, 4.05, 13.16 ms) and gravitational wave amplitude |h| and h₊ at a distance of 50 Mpc for the ALF2-EOS with M_tot = 2.7 M_⊙. Soon after the merger (t := 0) the density reaches values above ρ_trans = 3 ρ_nuc, forming a mixed-phase inner region of deconfined quark matter. At t_BH = 14.16 ms the HMNS collapses and the free quark matter will be macroscopically deconfined by the event horizon of the formed rotating black hole. The central rest-mass density of the newly formed hypermassive neutron star (HMNS) increases with time until it either collapses to a black hole or reaches a quasi-stationary hydrostatic equilibrium [2]. The emitted GWs of the merger and post-merger phases are strongly determined by the high-density region of the EOS, reaching values ρ_max/ρ_nuc ≈ 2-6 (ρ_nuc := 2.705 × 10¹⁴ g/cm³). Fig. 1 shows the GW amplitude of an equal-mass neutron star binary merger simulation, wherein the EOS used (ALF2) incorporates a slight phase transition to color-flavor-locked quark matter [2]. The upper panel of Fig. 1 depicts the logarithm of the rest-mass density profiles, and the boundary of the HQPT is marked with a red curve. Although the ALF2-EOS comprises an HQPT, the properties of the resulting hybrid stars are hardly distinguishable from purely hadronic stars, as the transition to the deconfined phase is very weak. So far, no simulation of a binary compact star merger containing a strong HQPT has been performed.
However, the effects of a strong HQPT have been investigated in the context of static [4] and uniformly rotating hybrid stars [5], and the results show that tremendous changes in the star properties might occur, including the existence of a third family of compact stars, the so-called "twin stars" [6]. The Twin Star Collapse The possibility of neutron star twins has been discussed in the context of different types of phase transitions, including pion condensation, HQPTs, and phase transitions to hypermatter [3]. A twin star behaviour is present if the third stable sequence of compact stars is separated from the second one by an unstable region. Fig. 2 shows a typical twin star behaviour for static hybrid stars calculated using the Tolman-Oppenheimer-Volkoff equation. For the hadronic part of the EOS a DD2-RMF parametrization has been used, while the HQPT was modelled using a soft piecewise-polytrope mixed-phase region (Γ_MP = 1.07 for ρ/ρ_nuc ∈ [3, 4.5]) followed by a stiff deconfined quark phase (Γ_QP = 5.7 for ρ/ρ_nuc ∈ ]4.5, ∞[). In [4] it was argued that the unstable region (dotted curves in Fig. 2) opens the possibility of a catastrophic rearrangement of the twin star from one configuration to the other, with a prompt burst of neutrinos (with energies of about 100 MeV) followed by a gamma-ray burst (with photon energies of about 1 MeV) and a total release of energy of about 10⁵² erg. However, dynamical simulations of such a twin star collapse have not yet been performed. In the following we present for the first time the results of numerical simulations of a twin star collapse performed in full general relativity using the Einstein Toolkit and the WhiskyTHC code. Figure 2. The underlying EOS has a strong hadron-quark phase transition implemented, where the mixed-matter phase exists in the rest-mass density range 3 ρ_nuc ≤ ρ ≤ 4.5 ρ_nuc. The two stable "twin star" solutions are separated by a small unstable region, which is visualized using dotted curves. The coloured circles display the initial configurations of the four different stars used in the underlying dynamical simulations (see Fig. 3). The four different initial configurations of the compact stars (coloured circles in Fig. 2) were perturbed by a radially inward-directed initial velocity kick. Fig. 3 shows the time dependence of the maximum value of the rest-mass density and the minimum of the lapse function. With the exception of Case 0, the unstable twin star region is reached during the evolution and the compact objects oscillate between their two twin star configurations. To illustrate these twin star oscillations, the evolution of the rest-mass density profile for several time snapshots is displayed in Fig. 4. The black curve shows the profile of the initial star of Case 2, which is placed in the stable part of the mass-radius relation in the vicinity of the maximum mass star of the second sequence (see Fig. 2). This initial star is radially perturbed using an inward-directed velocity kick and as a result its density increases and the star shrinks (see solid blue curve in Fig. 4). During the collapse the density in the core reaches values ρ/ρ_nuc > 4.5, where the stiff part of the pure quark phase generates a high pressure which counteracts the strong gravitational attraction. As a result, the large deconfined pure quark core pushes the fluid outward, causing the first twin oscillation. The dashed curve depicts the end point of the first oscillation at t ≈ 0.84 ms.
The shape of the profile clearly shows that this star is solely composed of hadronic matter. It should be mentioned that the expected emission of neutrinos and high-energy photons, as well as viscosity effects, which are not included in the simulations, will damp the oscillations. Summary Neutron star mergers represent optimal astrophysical laboratories to investigate the QCD phase structure using a spectrogram of the post-merger phase of the emitted gravitational waves. As gravitational waves emitted from merging neutron star binaries are on the verge of their first detection, it is important to understand the main characteristics of the underlying merging system in order to predict the expected GW signal. Numerical-relativity simulations of merging neutron star binaries show that the emitted GW and the interior structure of the generated hypermassive neutron stars depend strongly on the equation of state. The appearance of the hadron-quark phase transition in the interior region of the HMNS will change the spectral properties of the emitted GW if it is strong enough. If the unstable twin star region is reached during the "post-transient" phase, the f₂ frequency peak of the GW signal will change rapidly due to the sudden speed-up of the differentially rotating HMNS.
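To illustrate the EOS construction used in these simulations, the sketch below evaluates the piecewise-polytrope segment described in the text (Γ_MP = 1.07 over [3, 4.5] ρ_nuc and Γ_QP = 5.7 above), matched continuously in pressure at the phase boundary. The pressure normalization p0 is an arbitrary placeholder, and the hadronic DD2 part, which is not polytropic, is not reproduced here.

```python
import numpy as np

# Illustrative piecewise-polytrope segment of the hybrid EOS described above:
# soft mixed phase (Gamma_MP = 1.07 for rho/rho_nuc in [3, 4.5]) joined
# continuously in pressure to a stiff quark phase (Gamma_QP = 5.7 above).

RHO_NUC = 2.705e14          # g/cm^3
G_MP, G_QP = 1.07, 5.7
rho1, rho2 = 3.0 * RHO_NUC, 4.5 * RHO_NUC
p0 = 1.0e34                 # dyn/cm^2 at rho1 (assumed, for illustration only)

K_MP = p0 / rho1**G_MP                 # fix K_MP from p(rho1) = p0
K_QP = K_MP * rho2**(G_MP - G_QP)      # continuity of p at the boundary rho2

def pressure(rho):
    return K_MP * rho**G_MP if rho < rho2 else K_QP * rho**G_QP

for x in (3.0, 4.0, 4.5, 5.0, 6.0):
    print(f"rho = {x:.1f} rho_nuc -> p = {pressure(x * RHO_NUC):.3e} dyn/cm^2")
```

The soft mixed phase makes the pressure nearly flat across the transition region, while above 4.5 ρ_nuc the stiff quark branch rises steeply, which is the mechanism invoked above to halt the collapse and drive the twin star oscillation.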
1,894.2
2018-02-01T00:00:00.000
[ "Physics" ]
Optimized polarization-independent Chand-Bali nano-antenna for thermal IR energy harvesting A novel, polarization-independent, wide-angle reception Chand-Bali nano-antenna is proposed. An adjoint-based optimization algorithm is used to create the same resonance at both linear polarizations of the incident radiation. The nano-antenna optimal parameters reveal that two hot spots with a strong field enhancement are created. These hot spots could be integrated with metal–insulator–metal (MIM) diodes to form a rectenna for infrared (IR) energy harvesting. The metallic resonators allow for selecting several materials to facilitate the fabrication of the nano-antenna and the MIM diode. The Chand-Bali-based IR rectennas are investigated and simulations demonstrate an improvement of more than one order of magnitude in efficiency compared to ones using traditional nano-antennas. The proposed Chand-Bali nano-antenna Our design is composed of two elliptically shaped metallic patches. The first elliptic patch is designed to have its major radius along a certain direction. The second elliptic patch is cut by a smaller elliptic shape, and this cut-ellipse has its major axis aligned perpendicular to the direction of the major axis of the first ellipse. From this preliminary configuration, it is possible for the nano-antenna to couple to the incident radiation with different polarizations. Figure 1 shows the structure of the proposed Chand-Bali nano-antenna. The nano-antenna is built with gold elliptic patches on top of a gold ground plane to prevent further transmission of the incident electromagnetic radiation. A thin TiO₂ insulator layer is sandwiched between the two metals. This design, as shown in Fig. 1a, forms a metal-insulator-metal (MIM) structure. Figure 1b shows the design parameters of the proposed Chand-Bali nano-antenna. As shown in the top view, three different ellipses, A, B, and C, are characterized by the locations of their centers and their minor and major radii. The centers e₁, e₂, and e₃ are placed on the same axis. The developed Chand-Bali nano-antenna is assumed to lie in a periodic structure in the x-y plane with symmetric periodicity G, as shown in Fig. 1b. The thicknesses of the layers (t_m, t_d, and t_g) of the MIM structure are considered additional design parameters (see Fig. 1c). The thickness of the ground plane t_g is kept fixed at 200 nm, which is several times the skin depth at the suggested operating frequency of 30 THz. The expected magnetic resonances are due to the orientation of each elliptic patch as well as their major and minor radii. The proposed Chand-Bali nano-antenna exhibits the distinct advantage of dual-polarization operation with two open terminals. This unique combination of characteristics provides an excellent opportunity for seamless integration into both parallel and series networks, consequently enhancing the overall harvesting performance. Table 1 presents a comparative analysis of various nano-antennas reported in the literature, considering their dual-polarization capabilities, number of antenna terminals, and the materials utilized in their respective designs.
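The three-ellipse layout described above can be visualized quickly; the matplotlib sketch below draws ellipse A next to ellipse B with the perpendicular cut C, using placeholder dimensions rather than the optimized values of Table 2 (the cut is approximated by overdrawing C in the background color).

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

# Illustrative sketch of the Chand-Bali patch layout: ellipse A, and ellipse B
# cut by a smaller ellipse C whose major axis is perpendicular to A's major axis.
# All dimensions below are placeholder values, not the optimized ones (Table 2).

fig, ax = plt.subplots(figsize=(4, 4))
# Ellipse((center_x, center_y), width = 2*r_x, height = 2*r_y); centers on y = 0
ax.add_patch(Ellipse((-120, 0), 2 * 80, 2 * 140, facecolor="gold"))  # ellipse A
ax.add_patch(Ellipse((60, 0), 2 * 150, 2 * 100, facecolor="gold"))   # ellipse B
ax.add_patch(Ellipse((40, 0), 2 * 90, 2 * 60, facecolor="white"))    # cut C

ax.set_xlim(-300, 300); ax.set_ylim(-300, 300)
ax.set_aspect("equal"); ax.set_xlabel("x (nm)"); ax.set_ylabel("y (nm)")
plt.show()
```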
The adjoint-based optimization
As described in the simulation steps, two different simulations were performed to determine the reflectance at each polarization. However, calculating the electric field enhancement at the gap is critical. ANSYS HFSS can be used to calculate the scattering (S-) parameters in addition to their derivatives with respect to the geometry and material parameters. These derivatives are estimated using a self-adjoint method with no additional simulations 55. Therefore, seeking a strong electric field confinement in both polarizations simultaneously can be defined as an optimization problem. Gradient-based optimization algorithms generally require fewer iterations, and hence fewer simulations, than global optimization methods. The gradient of the electric field, however, is not available through the self-adjoint method in ANSYS HFSS. This implies that a huge number of simulations would be required to approximate the gradient using, for example, finite difference methods, especially in the case of many design parameters. Therefore, working numerically with the S-parameters is crucial.

The required link between S-parameters and the electric field enhancement can be derived through the coupled-mode theory (CMT) 63. In CMT, the optimized field enhancement of a given nano-antenna with a specific material is directly proportional to the absorption quality factor Q_abs. This optimum quality factor occurs at the reflectance valleys 64. Thus, the field enhancement is associated with the wavelengths of minimum reflectance. The design parameters u are determined from the geometries shown in Fig. 1b,c:

u = [G, r_Ax, r_Ay, r_Bx, r_By, r_Cx, r_Cy, e_1, e_2, e_3, t_m, t_d]^T. (1)

These 12 design parameters are categorized into three classes: the unit-cell periodicity (G); the thicknesses of the top metals and the insulator layer (t_m, t_d), respectively; and the major and minor radii of each ellipse (r_x, r_y) together with the center locations e_i. An optimization algorithm can be used to minimize the reflectance by maximizing the objective

W = 1 − R = 1 − |E_r/E_0|^2, (2)

where E_0 is the incident electric field, E_r is the reflected field, R is the reflectance, and W is the objective function. The optimization problem can be formulated as

max_u min{W_1(u), W_2(u)} subject to c(u) ≤ 0, (3)

where W_1 and W_2 are the objective values (one minus the reflectance) calculated for an incident electromagnetic wave with the electric field polarized in the x and y directions, respectively, at a wavelength of 10 μm. The vector c represents the linear and nonlinear geometrical constraints that avoid non-physical structures.

Attempting to simultaneously minimize the reflectance of both polarizations from an arbitrary starting point did not yield a good design. Therefore, the optimization procedure was updated to first obtain a feasible starting point. First, the optimization is carried out for W_1 only to obtain an optimal point for the first polarization. As shown in Fig. 2a, this optimization step converges after 15 iterations. The resulting design parameters are then used as the starting point for the simultaneous optimization of both polarizations described in Eq. (3). This starting point, which is optimal for one specific polarization, is not optimal for the other one, as shown in Fig. 2b. The second optimization step started from an initial reflectance of (1 − 0.86) = 0.14 and achieved a reflectance of less than 0.01 after 13 iterations. The achieved design minimizes the reflectance for both polarizations, as shown in Fig. 2c. Both absorbance peaks are very close to unity at a wavelength of 10 μm. The optimization is performed in a MATLAB environment with tailored scripts that link to ANSYS HFSS and automate the process.
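The two-stage strategy just described can be sketched as follows. The reflectance function below is a smooth toy surrogate standing in for an ANSYS HFSS run (in the real workflow each call is one full-wave simulation returning |S11|^2); the bounds stand in for the constraint vector c(u), and all numbers are illustrative assumptions:

import numpy as np
from scipy.optimize import minimize

# Toy surrogate for the reflectance at 10 um as a function of the 12
# normalized design parameters; only its role in the two-stage scheme matters.
def reflectance(u, pol):
    target = np.linspace(0.2, 0.8, u.size) if pol == "x" else np.linspace(0.3, 0.7, u.size)
    return float(np.clip(np.sum((u - target) ** 2), 0.0, 1.0))

bounds = [(0.05, 1.0)] * 12           # stands in for the geometric constraints c(u)
u0 = np.full(12, 0.5)                 # initial design vector

# Stage 1: optimize the x polarization alone to obtain a feasible starting point.
stage1 = minimize(lambda u: reflectance(u, "x"), u0, bounds=bounds, method="L-BFGS-B")

# Stage 2: minimize the worse of the two reflectances, equivalent to
# maximizing min{W1, W2} in Eq. (3), starting from the stage-1 optimum.
stage2 = minimize(lambda u: max(reflectance(u, "x"), reflectance(u, "y")),
                  stage1.x, bounds=bounds, method="Nelder-Mead")

print(f"stage-1 reflectance: {stage1.fun:.4f}")
print(f"stage-2 worst-case reflectance: {stage2.fun:.4f}")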
Results and discussions
The field enhancement factor and field distribution
Numerical simulations are carried out using the optimal set of design parameters obtained from executing the optimization algorithm. These optimal dimensions are presented in Table 2. The electric field at the center of each gap was simulated over the wavelength range from 8.5 to 11.5 μm (see Fig. 3a). A strong electric field confinement is noticed at 10 μm for both polarizations. The gap in the optimal design is 15 nm, which can be fabricated using electron beam lithography (EBL) 16,17. The electric field enhancement factor approaches 1.5 × 10^5 and ~10^5 for the x- and y-polarized incident electromagnetic waves at 10 μm, respectively. Different enhancement factors are expected for the two polarizations, as the nano-antenna is not symmetric. However, both polarizations support the antenna's resonance at 10 μm. The small peak at shorter wavelength shown in Fig. 3a can be attributed to a surface plasmon polariton (SPP) resonance supported by the nano-antenna array 9. In Fig. 3b, the electric field enhancement in the case of the x-polarized EM wave was normalized and plotted with the corresponding reflectance from the S-parameters. Both curves are identical around the resonance, thereby validating the assumptions from coupled-mode theory.

The electric field distribution over the xy-plane at the resonance wavelength was calculated and is plotted in Fig. 4. When a normally incident wave impinges on the nano-antenna with the electric field polarized along the x-axis at resonance, the electric field is confined across the two gaps. These confinements form two hot spots which support the operation of the MIM diode to rectify the harvested fields. The electric field vectors shown in Fig. 4a reveal that the charges on the right elliptic patch are divided into two longitudinally opposite polarities to support this resonance mode. When the incident electric field is vertically polarized along the y-axis, the charges over the elliptic patch are split between the upper and lower halves with opposite polarity to allow for the corresponding resonance, as shown in Fig. 4b.

The magnetic field distribution of the x-polarized incident wave is plotted in a cross section passing through the hot spot and parallel to the xz-plane, as shown in Fig. 4c. The magnetic field distribution across the yz-plane for the y-polarized incident wave is presented in Fig. 4d. Both magnetic field distributions exhibit a magnetic resonance at 10 μm 54, which in turn allows for wide-angle performance under oblique incidence. The reflectance calculated with variable incident angle θ at the resonance wavelength of 10 μm is presented in Fig. 4e. The absorbance is over 92% for incident angles up to 80°. This important feature shows that the proposed Chand-Bali nano-antenna is one of the most competitive energy harvesters for diffuse IR radiation.
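The CMT argument invoked above, that the normalized field enhancement tracks (1 − reflectance) near the resonance, can be illustrated with synthetic data; the Lorentzian line shape and all numbers below are assumptions for illustration only:

import numpy as np

wl = np.linspace(8.5, 11.5, 301)                   # wavelength, um (range of Fig. 3a)
line = 1.0 / (1.0 + ((wl - 10.0) / 0.35) ** 2)     # assumed Lorentzian resonance at 10 um
enhancement = 1.5e5 * line                         # |E/E0|^2 with the peak value quoted above
reflectance = 1.0 - 0.99 * line                    # near-unity absorbance at resonance

norm_enh = enhancement / enhancement.max()
norm_abs = (1.0 - reflectance) / (1.0 - reflectance).max()
# If the CMT relation holds, the two normalized curves coincide around the
# resonance, justifying the S-parameters as the optimization objective.
print(bool(np.allclose(norm_enh, norm_abs, atol=1e-6)))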
MIM diodes and rectenna efficiency
The rectenna in the IR region consists of a nano-antenna connected to a diode. The nano-antenna receives the IR radiation at wavelengths matched to its resonance wavelength. This collected ultra-high-frequency AC signal is then passed through the diode to be rectified into a useful DC current. The proposed optimal Chand-Bali nano-antenna possesses a high electric-field enhancement at the designated gaps to assist and improve the diode's performance. Absorbing IR radiation over a very wide range of incident angles further boosts the rectenna performance. In spite of these merits, the impedance matching with the diode can be a challenge to the performance of the rectenna 29. MIM diodes can theoretically operate up to visible frequencies 22. However, one crucial concern is that the diode's high nonlinearity is generally associated with a large resistance 34. This resistance varies from hundreds of ohms to megaohms 34. This huge difference with respect to the resistance of the nano-antenna can prevent highly efficient nano-antennas from delivering the collected power to the diode, which in turn would make the rectenna inefficient 17. One solution to this conflict is to build nano-antennas with high impedance in order to mitigate the mismatching effects.

The optimal Chand-Bali antenna is then numerically simulated in transmission mode by defining a lumped port in one of the gaps with a matched lumped load at the other gap. A far-field analysis is carried out to estimate the far-field patterns and antenna parameters. The full-width-half-maximum (FWHM) of the proposed nano-antenna can be derived from Fig. 3b and is calculated to extend from 9.3 to 10.7 μm. The simulations were carried out over the wavelength range of the FWHM. The performance of the nano-antenna outside this range is significantly attenuated due to poor coupling with the diode. Figure 5a presents the nano-antenna impedance calculated over the FWHM range.

The impedance matching efficiency η_m for an MIM diode with resistance R_d and a nano-antenna resistance R_a can be formulated as 25:

η_m = 4 R_a R_d / (R_a + R_d)^2. (4)

The resistance at the resonance wavelength of 10 μm is ~180 Ω, which is more than 3 times that of the fabricated bow-tie nano-antenna at this wavelength 17. This is reflected in an improvement of the matching efficiency by almost the same factor. It should also be noted that the reactive part of the nano-antenna impedance should be taken into consideration when computing the matching or coupling efficiency; the calculation should include both parts in order to avoid inaccurate efficiencies.

The nano-antenna radiation efficiency was computed and is presented in Fig. 5b. The radiation efficiency is almost 43% at resonance, which is ~4 times that of the bow-tie nano-antenna described in Ref.
13. The whole efficiency is double this value, as the design can receive both polarizations simultaneously. The rectenna efficiency η_Rec is approximated using the following formula:

η_Rec = η_a η_s η_c η_j, (5)

where η_a is the nano-antenna efficiency related to the ability of the nano-antenna to collect the incident electromagnetic radiation, η_s is the efficiency of transferring the collected energy from the antenna to the diode terminals, η_c is the coupling efficiency between the antenna and the diode, and η_j is the efficiency of rectifying the AC power through the diode. The last term (η_j) can be determined by measuring the diode's responsivity. The coupling efficiency is proportional to the matching efficiency 29. Therefore, the overall efficiency is likely to be boosted by three main factors. First, the dual-polarization operation contributes approximately a factor of two. Second, the larger nano-antenna resistance raises the matching efficiency, and hence the coupling efficiency, by more than a factor of three compared to the bow-tie nano-antenna. Finally, the proposed nano-antenna has a radiation efficiency close to 4 times higher than results reported for bow-tie-based IR rectennas. Together, these factors achieve more than one order of magnitude improvement in the rectenna's overall efficiency (a numerical sketch of this efficiency budget follows the fabrication notes below).

Fabrication considerations
The proposed Chand-Bali nano-antenna offers two space gaps, which facilitates the fabrication of the diode, considering that the antenna's metallic patches also function as the two metallic sides of the MIM diode. However, from the diode characteristics and figures of merit, it is preferable to build the MIM diode with different metal electrodes rather than use the antenna's metal layers 33,34,65. The difference in the work function between the two metals offers an opportunity to improve the diode's responsivity 29. Therefore, the metallic cut-elliptic patch, initially designed in gold, was replaced by a titanium one. The work functions of gold and titanium are 5.1 eV and 4.33 eV, respectively, which is expected to increase the diode's rectification capability. Also, Ti is well known to form a thin oxide layer when exposed to air, which in turn simplifies the fabrication of the insulator layer of the diode. However, one drawback is that the oxide layer grows in all possible directions; as a result, the nano-antenna top layer may also carry a TiO2 layer on top. The final design would take the shape of a large array with parallel and series connections of each antenna, where each antenna cell has a periodicity of 6.5 µm in both directions. Figure 6a shows the electric field enhancement calculated for the case of changing the material of the cut-elliptic patch from gold to titanium. The enhancement factor improved slightly, as seen by comparing the peaks in Figs. 3a and 6a. However, a slight red-shift occurred for both polarizations, which is attributed to the different complex permittivities of gold and titanium in this wavelength range. The effect of adding a 10 nm-thick layer of TiO2 on top of the Ti patch was also investigated. The simulations showed almost no change in the performance of the nano-antenna in either case, as presented in Fig. 6b. This insensitive performance reveals the feasibility of the proposed Chand-Bali nano-antenna in IR rectenna design under normal practical conditions.
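The numerical sketch announced above puts rough numbers on Eqs. (4) and (5). The diode impedance, the antenna reactance and the partial efficiencies other than the quoted 43% radiation efficiency are assumptions chosen only to reproduce the relative factors discussed in the text:

def matching_efficiency(z_a: complex, z_d: complex) -> float:
    # Eq. (4) extended with the reactive parts, as recommended above.
    return 4.0 * z_a.real * z_d.real / abs(z_a + z_d) ** 2

z_chand  = complex(180.0, -40.0)   # ~180 ohm at 10 um from Fig. 5a; reactance assumed
z_bowtie = complex(60.0, -40.0)    # bow-tie reference with roughly 1/3 the resistance
z_diode  = complex(5000.0, 0.0)    # illustrative MIM diode resistance (hundreds of ohms to megaohms)

gain = matching_efficiency(z_chand, z_diode) / matching_efficiency(z_bowtie, z_diode)
print(f"matching-efficiency gain over the bow-tie: {gain:.1f}x")   # close to the ~3x quoted above

# Eq. (5): the rectenna efficiency is the product of the partial efficiencies.
eta_a, eta_s, eta_j = 0.43, 0.9, 0.1            # radiation eff. from the text; eta_s, eta_j assumed
eta_c = matching_efficiency(z_chand, z_diode)   # coupling taken proportional to matching
eta_rec = 2.0 * eta_a * eta_s * eta_c * eta_j   # factor 2 for dual-polarization reception
print(f"illustrative rectenna efficiency: {eta_rec:.3f}")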
The ground plane can be fabricated from Ti instead of Au, as this allows for easy growth of the TiO2 oxide layer on the substrate. There is also the possibility to form a single-insulator-layer MIM diode or a multiple-insulator-layer diode, such as an MIIM diode, to improve the overall performance by keeping the gap separations between the two ellipses in the range of a few nanometers 64. The proposed nano-antenna design combined with the optimization algorithm offers a flexible and scalable way to build energy harvesters operating at a specific wavelength or over a narrow wavelength range. The sharp tips of the proposed nano-antenna were rounded in the simulations, and the absorbance at the resonance wavelength proved insensitive to this rounding. Also, the radii of the ellipses were varied to simulate the fabrication tolerances at this nanometer scale, resulting in an insignificant shift of the resonance wavelength.

MIM diode analysis
Metal-insulator-metal (MIM) diodes are suitable candidates to work with the proposed nano-antenna in the IR region. The tunneling current that dominates through thin oxide layers of a few nanometers thickness enables MIM diodes to rectify the ultra-high-frequency AC signal received from the nano-antenna. Furthermore, fabricating the nano-antenna integrated with an MIM diode would be simplified by designing the metallic ground plane from titanium instead of gold. Also, one arm of the nano-antenna is designed to be made from titanium to improve the MIM diode asymmetry, as shown in Fig. 6a. The estimated current-voltage characteristics 34,64 of the MIM diode are plotted in Fig. 7a. The diode with different metallic electrodes shows more asymmetric behavior, as expected and illustrated in Fig. 7a. The resistance and responsivity of the Au-TiO2-Au-based MIM diode are calculated from the estimated I-V characteristics and presented in Fig. 7b (a numerical sketch of this evaluation is given after the conclusions below). An improved responsivity is expected from using multiple insulator layers in building the MIM diode.

Conclusions
A novel nano-antenna design was investigated for use in rectennas for infrared (IR) energy harvesting. The proposed Chand-Bali nano-antenna is an excellent candidate to receive randomly polarized IR radiation around 10 μm. An adjoint-based optimization algorithm was exploited to achieve maximum field enhancement at the nano-antenna gaps for dual polarizations simultaneously at the same operating wavelength. The algorithm succeeded in producing parameters for an optimal design that allows for near-unity absorbance at 10 μm. The optimal Chand-Bali design possesses a strong electric field enhancement factor of more than 10^5 at the center of gaps whose width is 15 nm. Also, the nano-antenna was developed as a metal-insulator-metal (MIM) structure. This MIM structure exhibited a magnetic resonance and, as a result, efficiently extended the reception capabilities for angles of incidence up to 80°. The antenna resistance was 180 Ω, which improved the matching with the diode. The radiation efficiency was also computed as 43%, with a maximum detectivity of 5.5. The numerical simulations for different materials were carried out with insignificant impact on the nano-antenna's performance. The selection of metals and insulators supports connecting with several MIM diodes to improve the overall rectenna performance. Finally, this optimized Chand-Bali nano-antenna achieved more than one order of magnitude improvement compared with the fabricated bow-tie nano-antennas operating in the same wavelength range.
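As a sketch of how the quantities in Fig. 7b can be obtained, the resistance and responsivity are computed numerically from sampled I-V data below; the exponential characteristic is a toy stand-in for the tunneling-current model of the actual junction:

import numpy as np

v = np.linspace(-0.5, 0.5, 1001)                      # bias voltage, V
i = 1e-6 * (np.exp(6.0 * v) - np.exp(-4.0 * v))       # asymmetric toy I-V characteristic, A

di_dv = np.gradient(i, v)                             # differential conductance
d2i_dv2 = np.gradient(di_dv, v)
resistance = 1.0 / di_dv                              # diode resistance vs. bias
responsivity = 0.5 * d2i_dv2 / di_dv                  # small-signal responsivity, 1/V

k = np.argmin(np.abs(v))                              # index of zero bias
print(f"R(0) = {resistance[k]:.0f} ohm, beta(0) = {responsivity[k]:.2f} 1/V")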
Numerical simulations
To quantify the performance of the proposed Chand-Bali nano-antenna, the electric and magnetic fields should be calculated under the operating conditions. Therefore, the nano-antenna was analyzed using the finite element method (FEM) solver ANSYS HFSS. COMSOL Multiphysics was used to validate the results from ANSYS HFSS. The nano-antenna was built in an air box with periodic conditions on the sides to mimic the effect of an infinite array structure. A port is set at the top of the air box to excite the nano-antenna around 30 THz with a normally incident wave. A perfectly matched layer (PML) is placed on top of the structure as an absorbing boundary condition. Gold and titanium dioxide are modeled using their complex permittivities within the considered frequency range 66,67. Consistent mesh parameters are selected to ensure convergence of the calculations. From the defined port, S-parameters are computed, and reflectance and absorbance are then determined. The simulations were repeated twice under different electric field polarizations in order to determine the corresponding performance.

Figure 1. The proposed Chand-Bali nano-antenna structure: (a) 3D isometric view; (b) top view showing the design parameters of the three ellipses A, B, and C; and (c) cross-sectional view of the Chand-Bali nano-antenna showing the metal-insulator-metal (MIM) 3-layer stack whose thicknesses are design parameters.
Figure 2. (a) The convergence of the optimization algorithm measured with the objective function (W = 1 − reflectance) versus the iteration number; the algorithm approaches the maximum absorptivity after 15 steps for the case of an x-polarized incident E-field. (b) The convergence of the algorithm in the case of two parallel simulations with electric field polarizations in the x and y directions, respectively. (c) The absorbance (= 1 − reflectance) of the Chand-Bali nano-antenna calculated at the optimal design parameters for an x-polarized incident electric field (solid) and a y-polarized incident electric field (dotted).
Figure 3. (a) The enhancement of the electric field intensity |E/E_0|^2 of the optimal design of the Chand-Bali nano-antenna structure vs. wavelength; the x-polarized case is plotted as solid and the y-polarized case as dotted lines. (b) The normalized enhancement factor for an x-polarized incident wave plotted with (1 − reflectance) of the same x-polarized case, showing matching responses around the resonance wavelength.
Figure 4. The distribution of the electric field intensity |E|^2 of the optimal design of the Chand-Bali nano-antenna structure calculated at 10 μm. (a,b) At the center xy-plane, where the darker color represents higher electric field intensity; all plots are at the same scale, and the arrows represent the electric field vector at the same resonance wavelength of 10 μm for (a) an x-polarized incident electric field and (b) a y-polarized incident electric field, respectively. (c,d) The grey-scale spectrum maps the magnetic field intensity cross-sections, emphasizing the creation of magnetic resonances. The cross-section is cut parallel to the xz-plane and passes through the hot-spot gap in (c), while it is parallel to the yz-plane and passes through the two hot spots in (d). (e) The reflectance at the resonance wavelength with varying incident angle θ; the absorbance is over 92% for incident angles up to 80°.
Figure 5.
(a) The impedance of the Chand-Bali nano-antenna structure calculated around 10 μm, with the resistance R plotted as solid and the reactance X as dashed lines. (b) The radiation efficiency of the optimal Chand-Bali nano-antenna calculated over the FWHM around 10 μm.
Figure 6. (a) The enhancement of the electric field intensity |E/E_0|^2 of the optimal design of the Chand-Bali nano-antenna structure vs. wavelength, with the cut-ellipse made of Ti instead of gold while the other elliptic patch is still gold; the x-polarized case is plotted as solid, and the inset shows the new nano-antenna structure after changing the materials. (b) The field enhancement with the Ti-Au patches as in (a), with the difference of adding a thin layer of TiO2 over the Ti patch; the solid lines represent the case without an oxide layer, and the dashed lines with diamond symbols show the response after adding the 10 nm oxide layer; both cases match closely.
Table 1. A comparison between different nano-antennas presented in the literature and the Chand-Bali nano-antenna. All nano-antennas work around ~30 THz (~10 µm) for energy harvesting applications.
Table 2. The optimal dimensions of the Chand-Bali nano-antenna.
5,138
2023-10-16T00:00:00.000
[ "Engineering", "Physics" ]
Optical and Thermodynamic Investigations of a Methane- and Hydrogen-Blend-Fueled Large-Bore Engine Using a Fisheye Optical System
The following paper presents thermodynamic and optical investigations of hydrogen-enriched methane combustion, showing the potential of a hydrogen admixture as a means to decarbonize stationary power generation. The optical investigations are carried out through a fisheye optical system directly mounted into the combustion chamber, replacing one exhaust valve. All of the tests were carried out with constant fuel energy producing 16 bar indicated mean effective pressure. The engine under investigation is a port-fueled 4.8 L single-cylinder large-bore research engine. The test series compared the differences between a conventional spark plug and an unscavenged pre-chamber spark plug as the ignition system. The fuel blends under investigation are 5 and 10%V hydrogen mixed with methane, with pure natural gas acting as the reference fuel. The thermodynamic results show a beneficial influence of the hydrogen admixture for both ignition systems and for all variations concerning the lean running limit, combustion stability and indicated efficiency, with the most significant influence visible for the tests using conventional spark plugs. With the unscavenged pre-chamber spark plug and the combustion of the 10%V hydrogen admixture, an increase in the indicated efficiency of 0.8% compared to NG is achievable. The natural chemiluminescence intensity traces were observed to be predominantly influenced by the air–fuel equivalence ratio. This results in a 20% higher intensity for the unscavenged pre-chamber spark plug for the combustion of 10%V hydrogen compared to the conventional spark plug. This is also visible in the evaluations of the flame color derived from the dewarped combustion image series. The investigation of the torch flames also shows a difference with the air–fuel equivalence ratio but not between the different fuels. The results encourage the development of hydrogen-based fuels and the potential to store surplus sustainable energy in the form of hydrogen in existing gas grids.

Introduction
Sustainable energy production is a key issue in reducing the extent of climate change. For some 150 years, the internal combustion engine (ICE) has been a key enabling technology for mobile and stationary power generation whilst being continuously optimized and improved. About 25% of the world's power demand is satisfied through the use of ICEs, which produce 10% of the world's greenhouse gas emissions (cf. [1]). Even with the increasing electrification of mobility, the ICE is a promising technology in the environment of Power-to-X. Here, surplus green energy provided by solar and wind sources is used to produce sustainable liquid fuels, such as oxymethylene ethers [2-5], or gaseous fuels, such as methane, hydrogen, and blends for storage and distribution. Onorati et al. [6] emphasize the possibilities of hydrogen in ICEs for sustainable energy production as well as the need for further investigations, especially regarding hydrogen and derived synthetic fuels. As of now, in the field of large-bore engines, such investigations are still rare (cf. [7]) but no less important, as these engines are used for stationary power supply and are the driving force of freight transportation. To provide the means for such investigations, a new kind of optical accessibility for large-bore engines was developed (cf. [8]), realized (cf. [9]), improved (cf.
[10]) and is presented in the following to investigate the combustion process of hydrogen-methane blends. The admixture of hydrogen to methane is promising in many respects regarding the optimization of the different fuel properties summarized in Table 1. The stable tetrahedral molecular structure of methane is responsible for its comparatively low ignitability, whereas hydrogen is highly ignitable, improving the ignitability of mixtures. This results in improved lean-running conditions (cf. [7,11-15]). On the other hand, with an increased admixture of hydrogen, the probability of abnormal combustion increases due to the higher ignitability of hydrogen. With the low admixtures of 5 and 10%V investigated in the following, these drawbacks are negligible, as already shown in [7] and as the following results will confirm. Therefore, such amounts appear to be a suitable way to replace fossil natural gas in the short and mid term and can possibly be stored in natural gas grids. Further, the high laminar burning velocity of hydrogen improves the combustion efficiency and the lean-running limits of fuel blends, in contrast to the low laminar burning velocity of pure methane (cf. [7,11-14]). Due to the lower carbon-to-hydrogen ratio of the fuel blends, the CO2 emission decreases, which is further supported by more stable lean-running combustion. One drawback of the admixture of hydrogen is its higher adiabatic flame temperature, resulting in increased NOx emissions, especially for higher amounts of hydrogen admixture (cf. [7,11-14]). The combustion of hydrogen/methane mixtures basically follows the chemical reaction process of methane, altered in the elementary reaction steps according to the content of hydrogen and the combustion conditions (cp. [17]). Figure 1 shows the flame spectrum of methane combustion with its common radicals formed during combustion. The hydroxyl radical (OH*), a main component in the chemical combustion process with its peak at 309 nm, starts forming at 1600 K and is observable during the main combustion process, marking the flame front, as well as after the main combustion [18,19].

Figure 1. Intensity spectra of methane combustion (data taken from [20-22]).

To distinguish the flame front, the CH radical (CH*), an intermediate product of the hydrocarbon combustion towards the final product CO2, is a further possible indicator. The CH* mostly forms in regions with slight excess fuel and high temperatures. In addition, the CH* consumption by the flame is faster than that of OH*, and CH* is responsible for the flame's blue color. Hence, CH* is also an indicator of prompt NO formation, as the radical bonds with the air's nitrogen to form HCN and finally NO ([18,20,23]). The C2 radical (C2*), with its peak at 517 nm, forms with a high concentration in premixed combustion under low air-fuel equivalence ratios. Further, C2* can act as a core for the polymerization of soot, thus presenting a first indicator of incomplete combustion.
During combustion, the C2* radical radiates a large part of the combustion heat. C2* is responsible for flame colors ranging from yellow to green ([18,19,24]). Further, background radiation originating from the broadband spectra of CO2* (at 340-650 nm), HCO* (at 340-523 nm) and HCHO* (at 340-523 nm) exists (cp. [19,25]), which is not included in Figure 1. Another major contribution is the black-body radiation from soot incandescence arising, e.g., from lube oil ignition, which partly overlays the combustion radical spectra. For the addition of hydrogen to the combustion of methane in a continuous burner [26], reduced global radical emission spectra can be observed, especially for the carbon-based radicals (CH*, C2*, CO2*); this is to be expected, as with an increased amount of hydrogen the amount of methane decreases, and thus the number of carbon atoms available to form these radicals diminishes. On the other hand, the global intensity of the OH* radicals decreases despite the higher amount of hydrogen in the higher admixtures. This can be explained by the reduction of the CH* radical, which reacts with O2 to form the hydroxyl radical. Similar findings are presented in [27] for non-premixed combustion, with an emphasis on the importance of CH* radicals for OH* radical formation. The more hydrogen is admixed, the more the global emission spectra differ from the typical methane emission, with more excited water molecules (H2O*) appearing in the infrared region due to the increasing amount of water (cf. [26]). Nevertheless, [20] mentions that this spectrum mainly arises from thermal excitation and less from chemical reactions. This was also observed in [7], especially after the combustion end, marked as 95% MFB. The H2O* radical is responsible for the red color of the flame. Di Iorio [28,29] optically investigated the already mentioned increased laminar flame speed of higher hydrogen percentages using a Bowditch-type (cp. [30]) passenger-car-size fully optical single-cylinder research engine. The flame front was detected through observations of the OH* and CH* radicals, which were identified as flame front indicators. As optical investigations of methane/hydrogen mixture combustion in large-bore applications are quite rare (cp. [7]), the following presents thermodynamic and optical measurement results of the combustion of NG and of 5/95 and 10/90%V hydrogen/methane mixtures. The optical results are derived from the natural flame chemiluminescence captured with a new type of optical access.

Experimental Setup and Procedure
The test bench used in the following experiments is described in [7,31,32] and has already been used in different setups for other investigations. Therefore, the following sections only briefly summarize the main experimental setup and procedure. Table 2 summarizes the main dimensions of the test engine. The fully optically accessible engine provides two access types: a lateral optical ring and a vertical fisheye endoscope, both shown in Figure 2. The lateral access is realized by inserting an intermediate ring between the cylinder head and the cylinder liner. It contains different mounting positions for an endoscope. For the following experiments, these positions were sealed with steel inserts, as no camera or illumination was used there. The fisheye endoscope replaces an exhaust valve and realizes a view from the top.
This top view is captured using a high-speed camera mounted outside of the engine to protect the camera from the engine oscillations; the image is therefore redirected by a 45° deflection mirror toward the camera. The camera is equipped with a Sigma macro objective. The camera and the fisheye endoscope are aligned using a laser pointer mounted at the end of the endoscope, projecting the laser beam over the deflection mirror to the Sigma objective. On the front of the objective, an aperture can be mounted. Behind this aperture, a mirror redirects the laser beam back to the laser pointer. Perfect alignment is reached as soon as the laser beam is extinguished and no longer visible at the aperture or at the pointer itself. This proper alignment is especially important for the developed image post-processing algorithm mentioned in Section 4. The compensation of any relative motion between the camera and the fisheye endoscope, on the other hand, can be derived from the post-processing. As a reference point for this, a bright spot generated with an LED is rigidly fixed to the mirror frame. The optically enhanced engine is capable of an extended skipped-fire engine operation with a fired operation time of at least 74 s, resulting in 462.5 fired cycles at 750 rpm (cf. [33]).
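The quoted number of fired cycles follows directly from the engine speed, since a four-stroke engine fires every second revolution; a quick check:

fired_time_s = 74                               # fired operation time, s
speed_rpm = 750
revolutions = fired_time_s * speed_rpm / 60     # 925 revolutions
fired_cycles = revolutions / 2                  # four-stroke: one cycle per two revolutions
print(fired_cycles)                             # 462.5, matching the value above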
The description of the operation strategy, as well as a comparison between the optical setup and the all-metal engine, can be found in [33]. A detailed description of the test bench design and its development can be found in [10].

Test Bench Infrastructure
The test bench features automated feeding of the preconditioned media for the cooling water, oil, air, and gaseous and liquid fuel. The natural gas was obtained from the municipal gas network. The gas mixtures are provided directly from premixed gas bundles. The gas supply pressure is kept constant at 12 bar using a dome pressure regulator before supplying it to the engine's six gas injectors, three of which are located at each inlet runner. The gas mass flow is measured using a Coriolis gas flow meter. The screw-type compressor is capable of supplying charge air up to 9 bar. The air mass flow is measured using a rotary piston gas meter. To simulate a turbocharger pressure drop while upholding a constant turbocharger efficiency, a controllable flap is integrated into the exhaust path. A dynamometer is coupled with an induction machine to account for the high power output of the single-cylinder engine. The test bench has an automated data acquisition system working with two different recording frequencies. The high-frequency resolution of 0.1 °CA captures the intake, exhaust, and combustion pressure. To record the combustion pressure, a Kistler 6041B piezoelectric pressure transducer and a 5011B charge amplifier are used. The intake pressure measurement uses a Kistler 4045A10 piezoelectric pressure transducer. The 4075A10 Kistler piezoelectric pressure transducer for the exhaust pressure measurement is coupled with a Kistler 7533B switching adapter to prevent the sensor from long-time exposure to high exhaust gas temperatures. The intake and exhaust pressure transducers are connected to a corresponding Kistler charge amplifier of type 4603 and plausibilized with a slow pressure measurement using a WIKA S10-type sensor. In addition to the automated data acquisition, an automated engine control system based on the National Instruments c-RIO and PXI system offers the possibility of both automated and manual control. An AVL SESAM FTIR conducts the measurement of exhaust species concentrations. A description of the measurement accuracy can be found in [33]. These systems represent state-of-the-art measurement techniques and equipment, and the measurements carried out show reproducible and small errors.

Fuel Properties
The experiments investigate the following fuel blends: 5/95 and 10/90%V hydrogen/methane, with NG as the reference fuel for comparison. These mixtures represent potential replacements for fossil natural gas in the natural gas grid infrastructure and are, therefore, relatively easily usable in the near future. Table 3 summarizes the properties of the municipal natural gas used. This mixture is assumed to be hydrogen-free, as the hydrogen concentration is below the detection limit. Table 4 directly compares the fuel properties of the tested blends with the reference fuel, natural gas. The properties are derived from an experimental gas analysis conducted by a certified laboratory. The properties do not differ much, so direct replacement seems possible.

Experimental Procedure and Settings
The investigations include a variation of the air-fuel equivalence ratio (λ) from 1.5 to 1.8 in steps of 0.1, with an increasing amount of air while keeping the amount of fuel constant.
The air-fuel equivalence ratio is kept within an uncertainty of 0.23% resulting from the measurement devices. Further, the center of combustion (CoC), defined as the point of 50% burnt fuel mass, was varied in four discrete steps of 7 ± 2°, 11 ± 1°, 15 ± 2° and 20 ± 2° CA aFTDC within the indicated limits throughout all air-fuel equivalence ratios. The adaptation of the CoC results in an adaptation of the spark timing. All variations are carried out for each fuel of Table 4 and for each ignition system, i.e., with a common spark plug and with an unscavenged pre-chamber spark plug. All investigations use a constant amount of fuel energy within a tolerance of 2%, necessitating an adjustment of the amount of fuel according to the extent of the hydrogen admixture. This results in an indicated mean effective pressure of up to 16 bar. Further, the geometric compression ratio of 11.6 was kept constant for all investigations. The following thermodynamic results (burning duration, ignition delay, heat release rate, combustion temperature) are derived from a tuned GT-Power three-pressure analysis (TPA) using the results of 125 consecutively recorded engine cycles. Inputs for the calculation were the directly measured combustion chamber, inlet and exhaust pressures, which are averaged and corrected with a two-point offset. The burning duration, indicated efficiency, heat release rate and ignition delay presented in the following are derived from the TPA. The CoV is derived directly from the measurement data using Equation (1) (cf. [34]). The indicated efficiency was calculated from direct measurements according to Equation (2). For a detailed overview of the experimental settings, Table 5 summarizes the boundary conditions of the engine's media supply. In contrast to [7,31,35], the investigations presented herein use the fully optically accessible engine and compare the combustion with an unscavenged pre-chamber spark plug to the combustion with a conventional spark plug using hydrogen-methane blends and natural gas as the reference fuel. This first-of-its-kind optical investigation uses a fisheye optical system to observe these combustion processes. Further, the optical results are evaluated and compared to the thermodynamic findings.

Evaluation of Thermodynamic Results
The following sections summarize the thermodynamic results derived from the experiments to determine the effects of the hydrogen admixture on the combustion process.

Burning Duration (MFB10-90)
The burning duration shown in Figure 3 is observed over a variation of the air-fuel equivalence ratio at a constant CoC of 8 °CA aFTDC (Figure 3a,b) and over a CoC variation at a constant air-fuel equivalence ratio of 1.7 (Figure 3c,d). The burning duration consists of the two timespans from the 10% mass fraction burned (MFB10) to 50% MFB (MFB50) and from 50% MFB to 90% MFB (MFB90). The division into these two parts separates the different influences of the admixture on the ignition and main combustion, determined between MFB10 and MFB50, and on the late combustion and burnout, determined between MFB50 and MFB90. Regarding the ignition and main combustion, natural gas shows the longest burning duration in both engine setups using the conventional spark plug (SP) and the unscavenged pre-chamber spark plug (UP-SP). The 10HCH4 fuel mixture shows the lowest burning durations in both engine setups. Especially for high air-fuel equivalence ratios, the hydrogen admixture becomes more effective as the ignition and combustion conditions deteriorate.
This improved lean-burning behavior can be explained by taking the fuel properties of hydrogen into account, especially the improved ignitability and laminar burning velocity (cf. [36]). Nevertheless, for lower air-fuel equivalence ratios, the influence of the hydrogen admixture on the combustion turns out to be smaller, as the ignition and combustion conditions are sufficient for both ignition systems used. Comparing both ignition systems, the unscavenged pre-chamber spark plug leads to a better ignition with a shorter burning duration and shows a greater impact of the hydrogen admixture at higher air-fuel equivalence ratios. Additionally, the unscavenged pre-chamber allows a stable engine operation with natural gas and 10HCH4 at an air-fuel equivalence ratio of 1.8, whereas using the conventional spark plug leads to considerable misfiring and unstable conditions during the skipped-fire engine operation and is, therefore, not shown. The main combustion recorded over a variation of the CoC at constant air-fuel equivalence ratio also depicts reasonable behavior, as the burning duration decreases for both ignition systems with earlier CoCs and, respectively, ignition timings. Furthermore, natural gas depicts the longest combustion and ignition durations for the conventional spark plug (cf. Figure 3c). The second stage of the combustion, which includes the burnout, shows a similar behavior to the main stage of the combustion for the air-fuel equivalence ratio variation (cf. Figure 3b). Increasing air-fuel equivalence ratios lead to elongated combustion durations as the laminar burning velocity decreases. Concerning the CoC variation, earlier ignition timings lead to earlier CoCs, decreasing the burning duration. An increased influence is recognizable when the CoC is later than 15 °CA aFTDC. Nevertheless, the admixture of hydrogen improves the combustion at late CoCs, as the laminar burning velocity is increased. This improves the burnout and affects the emissions as well as the efficiency of the combustion.

Ignition Delay
The ignition delay shown in Figure 4 is calculated from the ignition timing until MFB2. It is obvious that the unscavenged pre-chamber spark plug reduces the ignition delay significantly, especially for higher air-fuel equivalence ratios. At the air-fuel equivalence ratio of 1.7, the difference in the ignition delay is 18 °CA for NG, 11 °CA for 5HCH4 and 12.5 °CA for 10HCH4. This is especially due to local air-fuel equivalence disturbances influencing the formation of a spark core, in contrast to the unscavenged pre-chamber spark plug. In addition, due to the optimized design of the unscavenged pre-chamber's overflow bores, the turbulence in the pre-chamber can be increased to enhance the ignition and the growth of the flame kernel (cf. [37]). Further, with increased air-fuel equivalence ratios, the local concentration of fuel near the conventional spark plug decreases, which deteriorates the initiation of the combustion as well as the propagation of the flame front starting at the conventional spark plug.
The admixture of hydrogen especially supports the ignition using the conventional spark plug, whereas the effect of both hydrogen mixtures is almost equal. With the unscavenged pre-chamber spark plug, the effect of the hydrogen addition is almost negligible for the amounts used here and shows a clear improvement only at higher air-fuel equivalence ratios. Similar findings in [7] support these results.

Figure 5 shows the coefficient of variance (CoV) of the indicated mean effective pressure calculated according to Equation (1). Figure 5 also includes the stability limit of 2%, according to [38]. By covering a larger volume during ignition with the unscavenged pre-chamber, combustion runs more stably than when using the conventional spark plug. With increasing air-fuel equivalence ratios, the combustion becomes more unstable, as the cyclic variations increase until misfires occur. Especially for the conventional spark plug, this results in misfiring and, therefore, exceeding the stability limit. Even with the highest amount of hydrogen admixture used here, the combustion using a conventional spark plug for ignition exceeds the stability limit at an air-fuel equivalence ratio of 1.8. These investigation results had to be discarded, as severe misfires led to absolutely unstable conditions, especially as the engine was operated under skipped-fire operation. For the unscavenged pre-chamber spark plug, the admixture of hydrogen is much more beneficial and stabilizes the combustion, if only at higher air-fuel equivalence ratios, whereas at lower air-fuel equivalence ratios almost no influence is visible. Further, the admixture is low enough that the effects of lube oil ignition cannot deteriorate the combustion stability, as experienced in [7], where they led to abnormal combustion. Especially taking the skipped-fire operation condition of the optical engine into account, the stabilization of the combustion with an increased admixture of hydrogen is beneficial, resulting in higher combustion temperatures, faster heat-up of the engine and, therefore, improved ignition and combustion conditions.
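How the MFB points, and from them the ignition delay (spark timing to MFB2) and the burning durations MFB10-50 and MFB50-90 shown in Figures 3 and 4, are read off a cumulative heat-release trace can be sketched as follows; the Wiebe-like burn profile and the spark timing are assumptions for illustration only:

import numpy as np

ca = np.linspace(-10.0, 60.0, 701)                    # crank angle, deg CA aFTDC
xb = 1.0 - np.exp(-5.0 * ((ca + 10.0) / 50.0) ** 3)   # assumed Wiebe-like mass fraction burned

def ca_at(frac):
    return float(np.interp(frac, xb, ca))             # xb increases monotonically here

mfb2, mfb10, mfb50, mfb90 = (ca_at(f) for f in (0.02, 0.10, 0.50, 0.90))
spark = -18.0                                         # ignition timing, deg CA aFTDC (assumed)

print(f"ignition delay (spark to MFB2): {mfb2 - spark:.1f} deg CA")
print(f"MFB10-50: {mfb50 - mfb10:.1f} deg CA, MFB50-90: {mfb90 - mfb50:.1f} deg CA")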
Figure 6 includes the indicated efficiency at a constant CoC of 8 °CA aFTDC and at a constant air-fuel equivalence ratio of 1.7, each with a variation of the air-fuel equivalence ratio and the CoC, respectively. The indicated efficiency was calculated using Equation (2) from filtered, corrected and averaged measurement data. The unscavenged pre-chamber spark plug shows a higher indicated mean efficiency due to better ignition and, therefore, faster combustion for the air-fuel equivalence ratio as well as the CoC variation. As the lean-running limit is extended, this results in higher efficiency at higher air-fuel equivalence ratios for the unscavenged pre-chamber, whereas the spark plug is not capable of igniting lean mixtures, resulting in misfires and decreased efficiency (cf. Figure 6a). Especially for higher air-fuel equivalence ratios, the benefit of the hydrogen admixture is recognizable, as the efficiency is increased compared to NG due to stable ignition and combustion. For lower air-fuel equivalence ratios, the effect is less decisive, as the ignition conditions are better than at higher air-fuel equivalence ratios. Figure 6b shows the effect of the CoC on the indicated efficiency. With late CoCs, the efficiency of both ignition systems deteriorates as the combustion duration increases, leading to higher wall heat losses, higher exhaust gas temperatures and less usable heat for the pressure increase. Nevertheless, an admixture of hydrogen improves the indicated efficiency, as it increases the laminar burning velocity of the fuel blend compared to NG.
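A minimal sketch of Equations (1) and (2) under their standard definitions follows; the cycle data and the per-cycle fuel energy are synthetic assumptions, since only the operating targets (16 bar IMEP, 4.8 L displacement, 125 cycles) are taken from the text:

import numpy as np

rng = np.random.default_rng(0)
imep_bar = rng.normal(16.0, 0.2, 125)                 # IMEP of 125 consecutive cycles (synthetic)

cov = imep_bar.std(ddof=1) / imep_bar.mean() * 100.0  # Eq. (1): CoV of the IMEP in percent

V_d = 4.8e-3                                          # displacement volume, m^3
m_fuel = 3.4e-4                                       # fuel mass per cycle, kg (assumed)
H_u = 50.0e6                                          # lower heating value, J/kg (assumed)
W_i = imep_bar.mean() * 1e5 * V_d                     # indicated work per cycle, J
eta_i = W_i / (m_fuel * H_u)                          # Eq. (2): indicated efficiency

print(f"CoV(IMEP) = {cov:.2f} %, eta_i = {eta_i:.3f}")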
Apparent Heat Release Rate (AHRR)
Figure 7 shows the heat release rate for both ignition systems and all investigated fuels at a constant CoC of 8 °CA aFTDC and an air-fuel equivalence ratio of 1.7. Concerning the ignition system, the unscavenged pre-chamber shows later ignition timings due to a reduced ignition delay (cf. Figure 4) and reduced burning durations (cf. Figure 3). The conventional spark plug needs much earlier ignition timings to overcome the deteriorated ignition behavior. Further, the addition of hydrogen only slightly alters the timing. The admixture results in a steeper heat release and higher peaks, as well as a retarded burnout. This supports the increased indicated efficiency of the combustion of the fuel blends (cf. Figure 6). For the unscavenged pre-chamber spark plugs, Figure 7b shows a much steeper and faster combustion with an increased peak value and a further retarded burnout, resulting in an increased indicated efficiency. The admixture of 5%V hydrogen has only a small influence on the heat release, and the ignition delays of NG and 5HCH4 are almost equal. Nevertheless, the peak value is 1.3 °CA earlier and 4.6% higher. The maximum hydrogen admixture shows the highest heat release peak, resulting in fast combustion.

Evaluation of the Optical Results
The optical investigations focus on intensity traces of the natural flame chemiluminescence to support the findings of the thermodynamically derived results.
A comparison of images at specific points in the combustion cycle can be used to derive differences in the combustion between the two ignition systems and the different investigated fuels.

Procedure of the Image Evaluation
The pre- and post-processing follow the approach detailed in [35]. Figure 8 summarizes the preprocessing procedure implemented in Matlab. The preprocessing includes basic image arithmetic to rotate and mirror the image, the compensation of the image movement, a cutout of the ROI (region of interest) and a debayering step to derive the colored images. The post-processing includes the un-distortion of the images as well as a simplified reprojection algorithm detailed in Section 4.4. For this, a special calibration of the complete fisheye endoscope is necessary. The procedure behind the calibration is explained in detail in [35]. The calibration is carried out using the Kannala-Brandt [39] approach. It is compared to an alternative approach in [35] and was chosen as the most applicable. The calibration leads to the forward projection function shown in Figure 9, with a maximum angle of 97.2° and a maximum image radius of 203 px. The calibration for the imaging was carried out on a setup outside the engine with mounting conditions similar to those at the engine (cf. Figure 10). The alignment of the camera towards the optic, at the testing rig as well as at the calibration setup, uses the laser beam extinction described above. The calibration setup outside the testing rig facilitates a homogeneous illumination as well as the usage of a calibration pattern with a size of 297 mm × 42 mm × 10 mm. The pattern consists of 17 columns and 12 rows, resulting in 204 regular black and white squares with 176 usable control points and 325 unique distances between them. To estimate the accuracy of the calibration, a reprojection of the calibration pattern at six different offsets from the first lens was carried out. For each reprojection, the algorithm shown in [35] was used to derive the visible distances between the control points. Figure 11 shows the standard deviation of the derived distances from the real one of 25 mm. The quality of the results depends on the accuracy of the detection algorithm used to derive the control points from the image, the precision of the calibration itself and the accuracy of the measurement of the distance between the optic and the pattern. Especially the detection of the control points, herein carried out with [40], proves influential, as its accuracy also determines the calibration quality. Further, the typical optical distortion effect of a fisheye optic can be seen in the results: with the object closer to the lens, the object becomes more distorted, resulting in a deteriorated resolution and, thus, a higher standard deviation. The higher standard deviation for the 24 mm distance results from the almost doubled amount of detectable and usable distances used to determine the value. Here, the imaging results of the investigations can be used to derive a better estimation of the optic's accuracy. The standard deviation averaged over all distances corresponds to a deviation of 3.3 mm, about 2% of the engine's bore, which is a comparatively small deviation. A mean value over all investigated distances seems valid, as the observed natural chemiluminescence is an integral field-of-sight method.
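A condensed sketch of this preprocessing and un-distortion chain is given below using OpenCV, whose fisheye module implements the Kannala-Brandt projection model used here; the intrinsic matrix, distortion coefficients, ROI and Bayer pattern are placeholders, not the calibrated values from [35]:

import cv2
import numpy as np

K = np.array([[210.0, 0.0, 256.0],
              [0.0, 210.0, 256.0],
              [0.0, 0.0, 1.0]])                       # placeholder camera intrinsics
D = np.array([[0.05], [-0.01], [0.002], [0.0]])       # placeholder Kannala-Brandt k1..k4

raw = np.random.default_rng(1).integers(0, 256, (512, 512), dtype=np.uint8)  # stand-in raw frame

img = cv2.flip(cv2.rotate(raw, cv2.ROTATE_90_CLOCKWISE), 1)   # rotate and mirror
roi = img[64:448, 64:448]                                     # ROI cutout
rgb = cv2.demosaicing(roi, cv2.COLOR_BayerRG2RGB)             # debayering to a color image

# Un-distortion with the Kannala-Brandt fisheye model.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, rgb.shape[1::-1], cv2.CV_16SC2)
undistorted = cv2.remap(rgb, map1, map2, interpolation=cv2.INTER_LINEAR)
print(undistorted.shape)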
To further improve the accuracy of the reprojection algorithm, a tuning of the camera's position in the real world can be carried out if at least the real-world coordinates of one point in the image are known. Two visible control points in the image were used for this and compared to the digital mockup of the testing rig. The analysis showed a mean error of 3.2 mm for the reprojection and, thus, the imaging accuracy.

Figure 12 shows the mean intensity trace over all recorded cycles for the CoC of 8, 10 and 15 °CA aFTDC and the different air-fuel equivalence ratios 1.5, 1.6, 1.7 and 1.8. Column 1 contains the traces for the unscavenged pre-chamber spark plug, whereas column 2 shows the traces for the conventional spark plug. All traces are normalized relative to the maximum arising intensity for the unscavenged pre-chamber spark plug at CoC 8 °CA aFTDC and λ 1.5 in order to compare the intensity traces among the different CoC and air-fuel equivalence ratios as well as between both ignition systems. As already shown in the evaluation of the thermodynamic results, no stable ignition and combustion is achieved at the air-fuel equivalence ratio of 1.8, so no characteristic behavior can be determined; these operating points are therefore not included in the optical evaluation. According to Figure 12, a postponed center of combustion results in a less intense natural flame chemiluminescence. This can be explained by a reduced combustion temperature, resulting in a less intense broadband luminosity of the combustion.
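A compact way to build such normalized traces is sketched below. This is a hedged reading of the procedure: the `recordings` container, its operating-point keys, and the dummy frame stack are hypothetical stand-ins:

```python
import numpy as np

def mean_intensity_trace(frames):
    """frames: (n_cycles, n_angles, height, width) raw counts.
    Integrates each image spatially, then averages over the recorded cycles."""
    return frames.sum(axis=(2, 3)).mean(axis=0)

# 'recordings' maps (ignition system, CoC, lambda) -> frame stacks; dummy data here
rng = np.random.default_rng(0)
recordings = {("pre-chamber", 8, 1.5): rng.poisson(5.0, (50, 120, 64, 64))}

traces = {op: mean_intensity_trace(f) for op, f in recordings.items()}
ref = traces[("pre-chamber", 8, 1.5)].max()   # global reference, as in Figure 12
traces = {op: trace / ref for op, trace in traces.items()}
```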
Evaluation of the Natural Chemiluminescence Intensity Trace

The comparison of the combustion temperature derived from the TPA model with the intensity for the unscavenged pre-chamber spark plug at λ 1.5 and 1.8 for 10HCH4 and NG under the CoC variation is shown in Figure 13. Here, the decrease in the combustion temperature with higher air-fuel equivalence ratios as well as with late CoCs is visible. In particular, for the CoC of 15 °CA aFTDC, the maximum occurring temperature is retarded for all air-fuel equivalence ratios. This results from the delayed and slow combustion under these conditions. Further, the conditions for the chain branching mechanism of the combustion deteriorate towards lower pressure and temperature during the combustion for late CoCs. This behavior is observable for both ignition systems, and an even lower intensity can be observed for the conventional spark plug due to a lower ignition performance, leading to an even more deteriorated combustion for later CoCs. The behavior under the air-fuel equivalence ratio variation seems valid, taking higher combustion temperatures (cf. Figure 13) and faster combustion into account.
The higher combustion temperature arises at lower air-fuel equivalence ratio values due to a higher amount of fuel and less surplus air. In addition, less quenching between the excited combustion molecules and oxygen at lower air-fuel equivalence ratios results in a higher broadband luminosity and thus a higher intensity (cf. [41,42]). The faster combustion leads to an early generation of chemiluminescence arising from carbon-based radicals with higher energy content, resulting in more intense radiation. This can be observed in Figure 13: for low air-fuel ratios and early CoCs, the maxima of the temperature and intensity traces lie within 1 °CA of each other. For late CoCs, the combustion slows down, and the offset between the temperature and the intensity increases to almost 3 °CA. This becomes even more pronounced for high air-fuel equivalence ratios, as quenching and a further decrease in burning velocity occur; a difference of 10 °CA develops between the two maxima. The behavior of the maximum admixture of 10%V hydrogen and that of NG is similar. For the unscavenged pre-chamber spark plugs, the 10%V hydrogen admixture shows the highest natural flame chemiluminescence intensity due to the highest combustion temperature (cf. Figure 13) with the most broadband radiation. This difference becomes more obvious as the ignition and combustion conditions deteriorate for a higher air-fuel equivalence ratio and a later CoC. The increased amount of hydrogen in the mixture counteracts the deteriorated conditions for ignition and combustion. Especially for the lowest air-fuel equivalence ratio of 1.5 and the earliest CoC of 8 °CA aFTDC, the difference between the fuel mixtures and the reference fuel NG is quite small. The same behavior can be observed in Figure 13 when comparing NG and 10HCH4 at λ 1.5. The combustion temperatures are quite equal, with a difference of 40 °C for the earliest CoC and also for the CoC 10 °CA aFTDC. With the late CoC 15 °CA aFTDC, the influence becomes remarkable. This is due to the optimal ignition conditions at an early CoC and a low air-fuel equivalence ratio, with stable and complete combustion resulting in high combustion temperatures and, thus, high intensities of the broadband radiation. Those operating points also appear quite similar because of the limited resolution of the camera, as all images are captured with the same exposure time for comparability. Nevertheless, as already shown in Section 3, the unscavenged pre-chamber spark plug shows a stable ignition even for the deteriorated conditions at late CoC and high air-fuel equivalence ratios, and for all used fuel mixtures. The results show similar behavior as mentioned in [7]. However, the end of combustion, marked as MFB95, cannot be correlated with the natural chemiluminescence, as further intensity above the MFB95 is visible. Similarly to what was discussed in [7], this can result from excited water forming during the after-combustion phase. Especially with a higher content of hydrogen and a lower air-fuel equivalence ratio, the intensity after the end of combustion is higher, resulting from higher combustion temperatures. The offset between the CoC and the center of intensity (CoI) shows a mean value of 13.98 °CA and a standard deviation of 2.2 °CA for the unscavenged pre-chamber spark plug; for the conventional spark plug, the offset shows a mean value of 14.29 °CA and a standard deviation of 2.2 °CA. The difference between the CoI and the CoC of both ignition systems thus seems almost constant, at around 14 °CA, for all the investigated variations.
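The text does not spell out how the CoI is computed; a natural reading mirrors the 50%-burned definition of the CoC. The sketch below makes that assumption explicit:

```python
import numpy as np

def center_angle(theta, signal):
    """Crank angle where the cumulative (positive-clipped) signal reaches 50%.
    Applied to the AHRR this gives the CoC (MFB50); applied to the intensity
    trace it gives the CoI, under our 50%-cumulative reading of that term."""
    cum = np.cumsum(np.clip(signal, 0.0, None))
    return float(np.interp(0.5, cum / cum[-1], theta))

# offset = center_angle(theta, intensity) - center_angle(theta, ahrr)   # CoI - CoC
```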
Evaluation of Combustion Image Series

Further, the images are dewarped using five different pinhole projections (cf. [35]), resulting in the specific display format; Figure 15 shows the same arrangement for the conventional spark plug. As already described in Section 1, the radicals arising during combustion are responsible for the flame's color. A comparison of the two fuels at the respective air-fuel equivalence ratio shows only a few differences in the color composition of the flame for both ignition systems. For the air-fuel equivalence ratio of 1.5 using the unscavenged pre-chamber spark plug (cf. Figure 14), only the early stages of the combustion of NG and 10HCH4 at points MFB5 and MFB10 show blue areas at the flame front. These can be attributed to the formation of CH* in the outer edges of the flame, the reactive flame front. In the early stages of the combustion of 10HCH4, some yellow areas can already be seen in the flame, which can indicate higher combustion temperatures as well as faster combustion leading to the earlier formation of more C2*. At MFB5, both flames show a reddish portion, which can be an indicator of the formation of water. The center of combustion shows a strong yellow flame, indicating a high combustion temperature and the resulting strong black-body radiation and C2* formation. Further, there are reddish areas at the boundaries of the deep yellow parts of the flame. This intensifies as the flame progresses toward the burnout phase and is clearly visible at MFB95. Here, a bright yellow core is formed, blending into the red spectral range in the direction of the combustion chamber wall.

At the air-fuel equivalence ratio of 1.7, a clearly more pronounced blue component is visible for the combustion of natural gas in the early stages of combustion compared to the mixture with 10%V hydrogen. Due to the lower combustion temperatures in areas of leaner λ, a lower temperature-dependent background radiation, as well as a lower production of C2*, is to be expected. Nevertheless, the flame with a higher hydrogen content, and thus a higher combustion temperature, already shows first reddish (MFB5) and then first yellow (MFB10) areas. These are due to the stronger black-body radiation of burning carbon at higher combustion temperatures at MFB50. During the burnout phase and at the end of combustion (MFB95), portions of the flame shifted toward the combustion chamber wall as well as in the center can be seen, colored in the red spectral range, which could be attributed to the formation of thermally excited H2O*.
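The color composition discussed here is qualitative; a crude numeric proxy could be computed per image as below. The flame mask choice and the interpretation of raw channel shares as blue/yellow/red portions are assumptions, not the authors' method:

```python
import numpy as np

def color_fractions(rgb, rel_threshold=0.1):
    """Normalized R/G/B shares inside the flame region.

    rgb : debayered image as float array (h, w, 3).
    The flame mask is a simple global-intensity threshold (an assumption).
    """
    luminance = rgb.sum(axis=2)
    mask = luminance >= rel_threshold * luminance.max()
    channel_sums = rgb[mask].sum(axis=0)   # summed R, G, B inside the flame
    return channel_sums / channel_sums.sum()
```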
A comparison of the two λ shows a clearly higher blue portion of the color for the higher oxygen content, at least for the early phases of the combustion, caused by the CH* that is well visible because of the lower superimposed background radiation and a slower combustion velocity. At the end of combustion and during the burnout phase, the images with a leaner air-fuel equivalence ratio show a slightly more pronounced shift into the reddish range. This also results from weaker background radiation and lower temperatures during the burnout, following from an overall lower combustion temperature. For the conventional spark plug test series (cf. Figure 15), an almost similar behavior can be observed. For low air-fuel equivalence ratios, blue parts in the flame front at the early stage of combustion hint at CH* formation. At an air-fuel equivalence ratio of 1.7, the images show a slight blue part at the edges of the flame at CoC, which propagates toward the combustion chamber wall. These parts are visible because the combustion temperature is lower compared to the experiments using the unscavenged pre-chamber spark plug and, therefore, they are less concealed by background radiation. Further, the slower combustion for high air-fuel ratios and the conventional spark plug results in a delay in the radical formation. The higher intensities displayed for the natural gas compared to the 10HCH4 combustion at an air-fuel equivalence ratio of 1.5 arise from the cyclic variations. Comparing the images of the unscavenged pre-chamber spark plug and the conventional spark plug in Figures 14 and 15, a different flame propagation is visible. Especially in the early stage of the combustion, the areas ignited by the torch flames are visible in Figure 14, whereas Figure 15 shows a more compact area of the flame for the conventional-spark-plug-ignited combustion.

Evaluation of Torch Rays

For the unscavenged pre-chamber spark plug, a closer look at the early stages of the combustion shows the formation of torch flames originating from the unscavenged pre-chamber spark plug and combining to create a continuous flame front (cf. Figure 16). Due to the position of the fisheye optical system, not all of the unscavenged pre-chamber spark plug's torch flames are visible in the recordings, as the rest is covered by the cap of the ignition system itself. Figure 16 shows the recorded image series for the 10HCH4 fuel at CoC 8 °CA aFTDC with an air-fuel equivalence ratio of 1.7. The images depicted are mean-value images over the 50 recorded cycles. The image recognition results of the detection algorithm used to extract the four visible torch flames out of a total of seven are included in Figure 16. The torch flames all have a bluish color, indicating a quite high concentration of CH*, responsible for the flame's blue color, that is overlayed with yellow and orange, hinting at carbon-based radiation as the °CA advances. Figure 17 shows the arrangement of the torch flames. The algorithm for extracting the torch flame contour stops once the individual torch flames can no longer be separated from the merging flame front. Torch flame 7 has the shortest distance to the piston bowl and even extends as far as the bowl itself, as will be shown in Figures 18 and 19. Figure 17 also shows the detailed procedure for the reprojection post-processing, which is similar to the one presented in [35]. The reprojection plane used for the post-processing is perpendicular to the connection vector of the orthogonal projection points of the camera and ignition system origins onto the horizontal base plane.
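The contour extraction is described later in the discussion as using predefined sectors per torch flame and a global threshold in each sector, optimized for stability over several thresholds. The sketch below is one plausible reading of that scheme; the median-area criterion is a simplified stand-in for the standard-deviation optimization named in the text:

```python
import numpy as np
import cv2

def torch_contour(gray, sector_mask, rel_thresholds=np.linspace(0.05, 0.5, 10)):
    """Extract one torch-flame contour inside a predefined sector.

    Several global thresholds (relative to the sector maximum) are tried; the
    contour whose area is closest to the median over all thresholds is kept,
    a crude proxy for the stability optimization described in the text.
    """
    sector = np.where(sector_mask, gray, 0).astype(np.uint8)
    candidates = []
    for t in rel_thresholds:
        binary = (sector >= t * sector.max()).astype(np.uint8)
        cnts, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        if cnts:
            candidates.append(max(cnts, key=cv2.contourArea))
    if not candidates:
        return None
    areas = np.array([cv2.contourArea(c) for c in candidates])
    return candidates[int(np.argmin(np.abs(areas - np.median(areas))))]
```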
After the virtual reprojection, the length of the torch flame, L, can be derived from the post-processed images. This is undertaken for both fuels, NG and 10HCH4, at the CoC of 8 °CA aFTDC and air-fuel equivalence ratios of 1.5, 1.6 and 1.7. Figures 18 and 19 summarize the results. Both figures show the mean-value images of the 50 recorded cycles and are dewarped and individually scaled regarding their intensities for better visibility. Each figure contains the derived torch flame length, L, estimated as the maximum distance of the torch flame's flame front to the engine's flame deck (cf. Figure 18). Further, three different points on the piston bowl's omega shape are overlayed to verify the length L in accordance with the piston position over the engine rotation (s1-s3, cf. Figure 18). Comparing the derived length L with these distances, it becomes evident that the detected torch flame is being redirected within the piston bowl. The tip of the torch flame touches the highest point of the piston at s1 even before the images are captured, as the derived length is always larger than s1. Due to the omega-shaped piston bowl and the squish flow, the flame is redirected towards the lowest point of the piston shape, marked as s3. This can be recorded because the used measurement technique is an integral line-of-sight method, which captures all the intensity along a line of sight. Especially for the highest air-fuel equivalence ratio, resulting in the slowest combustion, the development of the torch flame and the interaction of flame number 7 with the piston bowl can be seen for both fuels. The tip of flame 7 grows stronger perpendicularly to the flame axis and shows a higher intensity, as here a larger volume already ignites and contributes to the captured intensity. The first part of torch flame 7, at about one-third from its origin, is very narrow and compact, hinting at a high flame velocity and momentum at the exit of the unscavenged pre-chamber. The other three visible torch flames are difficult to distinguish from the background noise.
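Given a reprojected contour in metric coordinates, the length L and its comparison with the s1-s3 overlay reduce to a few lines. A minimal sketch, assuming the contour is expressed in a plane whose second coordinate runs along the cylinder axis:

```python
import numpy as np

def torch_flame_length(contour_mm, flame_deck_y_mm):
    """Torch flame length L: maximum distance of the reprojected contour
    points from the engine's flame deck plane (all coordinates in mm)."""
    return float(np.max(np.abs(contour_mm[:, 1] - flame_deck_y_mm)))

# Interpreting the s1-s3 overlay: if L exceeds s1, the flame already touches
# the bowl's highest point and is being redirected along the omega shape.
# redirected = torch_flame_length(contour, deck_y) > s1   # s1 from piston kinematics
```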
During combustion, the horizontally aligned torch flames progress along the jet axis, but mainly perpendicular to it, so that the combustion chamber formed by the cylinder head and piston bowl is covered increasingly by the flames. This can be explained, among others, by the swirl and squish flow of the engine as well as by the mounting position of the unscavenged pre-chamber spark plug. The mounting position prevents the horizontally aligned torch flames from propagating too deep into the combustion chamber in the direction of the torch flame axis, as their paths intersect partially with the engine's valves. For lower air-fuel equivalence ratios, the combustion progress is so fast that only a few pictures can be captured. Here, the torch rays also develop more quickly and show bright combustion that is already turning yellow, implying a later stage of combustion and C2*-based chemiluminescence with a higher intensity. The MFB5 is identified to occur after the flame torches form a continuous flame front and at crank angles later than the depicted 7 °CA bFTDC. The detected length, L, of torch flame 7 is quite similar for both fuels, as the torch flame reaches the piston bowl and is thus limited to the maximum possible length s3 within the algorithm's reprojection accuracy (cf. Section 4.1). The mean images of the torch flame of both fuels also show quite similar intensities and very similar colors. This may be due to the comparably low hydrogen admixture of only 10%V. As also observed in [43], the 10%V admixture shows a modest increase in the laminar burning velocity.
Thermodynamic Results

The thermodynamic results all show plausible and explainable behavior that is also comparable to the combustion within an all-metal engine, as shown in [33]. This means that the setup is capable of a sufficient fired run time to stabilize the combustion and record the measurement data. The skipped-fire operation mode even benefits from the admixture of hydrogen through a faster heat-up of the combustion chamber and a faster stabilization, but the admixture also increases the thermal load on the optical components. The two volume percentages of hydrogen investigated herein cause no abnormal combustion and are, therefore, suitable for the engine by solely adapting the spark timing. The results of the test series show that even with a small amount of hydrogen admixture, the combustion process gains higher efficiency and combustion stability because of the enhanced lean running limit. This is especially true for the setup with a conventional spark plug but is also beneficial for the unscavenged pre-chamber spark plug. Here, the combination with the admixture of hydrogen extends the lean running limit further, as the ignition and combustion benefit from the higher laminar burning velocity of the hydrogen. With the extension of the lean running limit, a NOx-equivalent running strategy seems possible (cf. [7]) despite the higher combustion temperatures arising from the admixture of hydrogen, which increase the NOx concentration.

Optical Results

The imaging was conducted using a fisheye optical system that was mounted in the cylinder head. Due to the integration of the optic into the engine using a non-central mount, a large part of the extensive field of view is not usable. Regarding optical quality, the optic is capable of images with a resolution of 1.5 °CA, resulting in a 330 µs gate time. With this setting, a sufficient intensity and usable field of view are possible to capture even the early stage of the combustion, especially for the unscavenged pre-chamber spark plug. Compared to the spectrometer investigations of premixed methane and blended methane/hydrogen combustion in a continuous burner presented in [26,44], the admixture of 10%V hydrogen has no significant influence on the combustion spectra intensities. Therefore, the assumption of a similar spectrum for the methane and fuel blend combustion seems feasible. Further, [45] shows the influence of the air-fuel equivalence ratio and the admixture of hydrogen on the flame emission. With high air-fuel equivalence ratios for natural gas flames, the intensity of the CH* (@431 nm) decreases, whereas the emission of the C2* (@516 nm) radical, which is also responsible for the yellow part of the flame, vanishes. The recorded natural chemiluminescence intensities therefore seem mainly dominated by the air-fuel equivalence ratio (cf. the traces in Section 4.2 and the combustion images in Section 4.3). This results in strong intensities from background radiation at low air-fuel equivalence ratios, which overlay the much weaker intensities of the intermediate species, e.g., CH*, which is formed mostly in the reactive flame front. Further, the higher burning velocity supports a faster conversion of the fuel and thus offers less time for the intermediate radicals' radiation. The presented intensity traces (cf.
Section 4.2) of the natural flame chemiluminescence show a clear difference between the unscavenged pre-chamber spark plug and the conventional spark plug and some slight changes between the NG and the highest tested amount of hydrogen admixture of 10%V. A higher intensity of the natural flame chemiluminescence for hydrogen admixtures hints at combustion with higher temperatures and thus, assuming the same energy content for all tests, suggests that the combustion is more efficient. Further, a faster increase in the intensity traces corresponds to a faster combustion, resulting in a faster heat release. The interpretation of the flame color uses dewarped images and supports the temperature- and air-fuel-equivalence-ratio-dominated intensities, as well as the findings from [26,44,45]. With colder combustion temperatures at high air-fuel equivalence ratios and a slower combustion, a larger blue part of the flame is visible and less overlayed by background radiation; in addition, the more intense C2* radical vanishes at high air-fuel equivalence ratios (cf. [45]). For the investigation of the torch flame, dewarped mean-value images and the presented reprojection algorithm are used. The virtual reprojection method is used to have a first look at the torch flame development using mean-value images of the early stage of the combustion with the unscavenged pre-chamber spark plug. The used virtual reprojection is subject to simplifying assumptions and inaccuracies such as:

- The contour extraction uses predefined sectors for each torch flame and a global threshold in each sector for image binarization. The threshold is optimized to reduce the standard deviation of the extracted contour over the different thresholds.

- The reprojection is dependent on the accuracy of the optic's calibration.

- Due to the integral line-of-sight method used here, intersecting or overlayed intensities lie in the same line of sight and cannot be distinguished. This deteriorates the accuracy of the torch flame recognition and thus decreases the quality of the virtual reprojection compared to laser-based investigations using specific, well-defined observation sheets.

- As the engine uses a swirl in the charge air movement and torch flame number seven is aligned with the cylinder bore axis, the influence arising from the swirl on the torch flame is assumed to be almost negligible.

- The usable signal-to-noise ratio is low for the detection of the torch flame.

Nevertheless, the virtual reprojection is a simple method to obtain a first, easy-to-use comparison of the carried-out natural chemiluminescence measurements to characterize the observed combustion. Further, the method can be used for a comparison with CFD results by deriving the same views with the same projection method used herein. Korb [46] presents different ignition regimes of a scavenged pre-chamber for different holes of the pre-chamber as well as different cycles. Xu [47] presents time-resolved optical investigations of the natural OH* radical chemiluminescence in an RCEM for an unscavenged pre-chamber spark plug igniting natural gas. Due to a missing lift-off length of the observed ignition, the combustion is flame-based. The RCEM used in these experiments shows interference of the torch flame oriented parallel to the cylinder axis with the piston. The presented investigation uses mean-value images to derive the torch flame contour.
The mean value reduces the effect of the occurring cyclic variation and supports the derivation of a more reliable contour, as background noise becomes less effective. Nevertheless, even with the mean value of all recorded images in one cycle, there seems to be no lift-off length between the cap of the unscavenged pre-chamber spark plug and the torch flame, especially for torch flame 7. With this, the assumption of a predominantly flame-based ignition and combustion seems more convincing. The evaluation of torch flame 7's length shows the interaction with the piston bowl and the in-cylinder flow, with an increased reactivity at the tip. The interaction seems similar for both investigated fuels within the reprojection accuracy. The remaining visible torch rays show a predominant evolution perpendicular to the flame axis, which ignites the combustion chamber. This seems to be due to the in-cylinder flow and the mounting position. For the low air-fuel equivalence ratios, only one or two images could be captured in which a torch flame is visible. This is due to the good ignition and flame propagation conditions in the combustion chamber at the selected experimental point. A higher capture rate seems to be necessary to record the complete development of the torch flame, especially at low air-fuel ratios. Both investigated fuels show a similar behavior in the development of the torch flames when comparing intensities and color. This is due to the small deviation between both fuels' properties, as also stated in [40].

Conclusions and Outlook

The paper presents in detail:

• The thermodynamic comparison of the combustion of NG, 5HCH4 and 10HCH4 in two test series using an unscavenged pre-chamber spark plug and a conventional spark plug under CoC and air-fuel equivalence ratio variations.

• The optical comparison of the natural chemiluminescence intensity of the combustion of NG, 5HCH4 and 10HCH4 using an unscavenged pre-chamber spark plug and a conventional spark plug under a CoC and air-fuel equivalence ratio variation.

• The interpretation of the flame color for NG and 10HCH4 at two stages of air-fuel equivalence ratio at a CoC of 8 °CA aFTDC for both ignition systems.

• A discussion of the visible natural chemiluminescence of the torch flame from images that are post-processed by a virtual reprojection method.

The following conclusions could be derived from the results:

• Hydrogen admixture leads to faster combustion for the unscavenged pre-chamber spark plug compared to the pure NG investigations and the tests of the conventional spark plug.

• For a CoC of under 15 °CA aFTDC, the burning duration decreases for both ignition systems and fuels.

• The unscavenged pre-chamber spark plug offers less ignition delay compared to the conventional spark plug. Using the unscavenged pre-chamber spark plugs, the influence of the here-used amount of hydrogen admixture is almost negligible.

• The admixture of hydrogen improves combustion stability. This holds true especially for the conventional spark plug but has only a minor effect for the unscavenged pre-chamber spark plug.

• The indicated mean efficiency increases with increasing amounts of hydrogen and reaches its maximum for the herein investigated unscavenged pre-chamber spark plug and the maximum admixture of hydrogen.

• The apparent heat release rates of both hydrogen admixture levels are of similar quality for each ignition system. In detail, the conventional spark plug shows a slower and less intense heat release compared with the unscavenged pre-chamber spark plug.
• The beneficial effect of the hydrogen admixture becomes especially visible for higher air-fuel equivalence ratios.

• The intensity of the natural chemiluminescence depends mainly on the air-fuel equivalence ratio and the resulting burning velocity; it is at its maximum for low air-fuel equivalence ratios and 10HCH4 in the tests with the unscavenged pre-chamber spark plug.

• After the thermodynamic end of combustion at MFB95, a remarkable intensity of the natural chemiluminescence remains. The intensity also follows the combustion temperature and can be a hint of thermally excited water.

• The flame coloring at characteristic operation points during combustion (MFB 5, 10, 50, 95) develops proportionally to the intensity trace of the natural chemiluminescence and, thus, also to the air-fuel equivalence ratio.

• For lower combustion temperatures at high air-fuel ratios, a higher content of blue is visible in the flame, leading to the assumption of a higher emergence of CH*, less overlaying broadband luminosity, and less C2* radiation, which vanishes for high air-fuel ratios. Further, a slow burning velocity supports the visibility of the CH* emissions.

• High combustion temperatures at low air-fuel ratios result in an intense yellow flame that originates from broadband emissions and those of the C2* radicals. Due to the low content of the hydrogen admixture, enough carbon seems available at low air-fuel ratios to advance their local formation.

• The end of the combustion shows a more reddish flame concentrated in the center of the combustion chamber, hinting at thermally excited H2O.

The early stage of combustion using the unscavenged pre-chamber spark plug shows no obvious differences between both fuels concerning the color of the flame and the intensity. Only the influence of the air-fuel equivalence ratio is obvious, resulting in faster combustion with a more yellow flame under fuel-rich conditions. The correlation of the optical results with the thermodynamic ones shows a reasonable behavior of the combustion. It also proves the feasibility and comparability of the developed alternative fully optically accessible engine and emphasizes the potential of further enhancement to gain more quantitative measurements, e.g., with a similar approach as presented in [23]. Further, the development of a UV-transmission fisheye endoscopic system would support laser-based measurement techniques as well as the observation of the flame's natural OH* chemiluminescence. With such an optic, the testing and development of large-bore engines that can run on pure hydrogen, and not only admixtures, are possible. This would allow the complete decarbonization of combustion engines and thus realize the usage of a purely sustainable energy supply. Nevertheless, the presented investigations for the admixture of 10%V hydrogen to methane already show several advantages, even for the unscavenged pre-chamber spark plug. With this, a content of 10%V hydrogen in the natural gas grid seems already feasible from the point of view of stationary energy generation using combined heat and power plant units, which can be a first step towards sustainable energy generation. Further investigations on the infrastructure of the gas grid are necessary to avoid risks for consumers other than combustion engines connected to the natural gas grid.

Author Contributions: The optical setup (development and design), as well as the processing and interpretation of the thermodynamic and optical data, belong to S.K. S.E.
supported the experiments on the testing rig. The original draft, visualization, and review and editing were conducted by S.K. The co-authors M.P., M.J. and G.W. supported the work with internal reviews. All authors have read and agreed to the published version of the manuscript. Funding: This work has received funding from the German Federal Ministry for Economic Affairs and Energy under funding code 03EIV013B. Acknowledgments: The authors would like to thank Fabian Liemawan Adji for his support as a student research assistant during the implementation of the pre- and post-processing algorithms in Matlab.
16,965.2
2023-02-05T00:00:00.000
[ "Engineering", "Environmental Science", "Physics" ]
Novel Hyperbolic Homoclinic Solutions of the Helmholtz-Duffing Oscillators

The exact and explicit homoclinic solution of the undamped Helmholtz-Duffing oscillator is derived by a presented hyperbolic function balance procedure. The homoclinic solution of the self-excited Helmholtz-Duffing oscillator can also be obtained by an extended hyperbolic perturbation method. The application of the present homoclinic solutions to the chaos prediction of the nonautonomous Helmholtz-Duffing oscillator is performed. Effectiveness and advantage of the present solutions are shown by comparisons.

Introduction

It has been widely accepted that homoclinic solutions play a fundamental role in global bifurcations and chaos predictions of dynamical systems [1,2]. For instance, the experimental study of a certain magnetic pendulum verified the homoclinic solutions as the precursors to chaotic vibration [3]. Some occurrences of homoclinic solutions can be regarded as the criterion for the transition from single-well to cross-well chaotic motion of oscillators [4], or as the onsets of chaotic vibrations of asymmetric nonconservative oscillators [5]. Homoclinic solutions were also adopted in bifurcation and chaotic vibration controls for beam structures [6,7]. Another typical application of homoclinic solutions aims at solitary wave studies. For instance, a proper homoclinic solution can govern the solitary roll waves down an open inclined channel [8], or optical solitary waves propagating in fibers [9,10]. The association between the singular solitary waves and homoclinic solutions can be interpreted based on phase plane analysis [11].

Because of their importance in nonlinear systems, many homoclinic solutions have been derived in the past few decades. Such works include but are not limited to the following: Xu et al. [12] proposed the perturbation-incremental method for homoclinic solutions; Chan et al. [13] applied the perturbation-incremental method to study the stability and the homoclinic bifurcations of limit cycles; Belhaq et al. [14] analytically developed criteria for predicting homoclinic connections of limit cycles. Mikhlin and Manucharyan [15] and Manucharyan and Mikhlin [16] applied the Padé and quasi-Padé approximants for homo- and heteroclinic solutions. Y. Y. Chen and S. H. Chen [17] and Chen et al. [18] developed perturbation techniques by hyperbolic functions for homoclinic solutions of strongly nonlinear oscillators. Cao et al. [19] improved the perturbation-incremental homoclinic solutions for strongly nonlinear oscillators. Recently, Li et al. [20] improved the perturbation method based on harmonic functions to derive homoclinic solutions of Helmholtz-Duffing oscillators.
Nevertheless, to the best of our knowledge, the completely analytical, exact, and explicit homoclinic solution of the strongly nonlinear Helmholtz-Duffing oscillators has not yet been derived, in spite of the wide application of its equation to many engineering problems such as ship dynamics, oscillation of the human ear drum, oscillations of a one-dimensional structural system with an initial curvature, some electrical circuits, the microperforated panel absorber, and the heavy symmetric gyroscope [21-29]. It should be pointed out that the previous typical solutions [14-18] become invalid for such mixed-parity systems. Even for the conservative Helmholtz-Duffing oscillator, solutions by the perturbation methods [12,13,19,20] based on generalized harmonic functions can only be obtained implicitly, in which the infinite time domain of a homoclinic motion has to be transformed into a finite period of the harmonic. Moreover, for strongly nonlinear oscillators, as the perturbation-incremental method [12,13,19] combines a perturbation procedure with the incremental harmonic balance method, the solutions are always expressed by harmonic functions with numerical coefficients. That means such implicit solutions are semianalytical and seminumerical and cumbersome for practical application.

This paper aims to present new homoclinic solutions of the Helmholtz-Duffing oscillators. The completely analytical, exact, and explicit homoclinic solution of the conservative Helmholtz-Duffing oscillator will be derived by a hyperbolic function balance procedure. Then the homoclinic solution of the self-excited Helmholtz-Duffing oscillator will also be obtained by an extended hyperbolic perturbation method. The application of the present solutions to the chaos prediction of the nonautonomous Helmholtz-Duffing oscillator is performed. The preference of the present solution will be illustrated by comparison.

The Explicit and Exact Homoclinic Solution of the Undamped Helmholtz-Duffing Oscillator

Consider the homoclinic solution of the undamped Helmholtz-Duffing equation ẍ + c1x + c2x² + c3x³ = 0 (1). If c2 = 0, (1) becomes the classical Duffing equation, which possesses a homoclinic solution with c1 < 0 and c3 > 0. Such a homoclinic solution of the classical Duffing equation has been discussed in detail in [17], in which the solution can be written as x(t) = ±√(−2c1/c3) sech(√(−c1)t). If c3 = 0, (1) becomes the classical Helmholtz equation, which possesses a homoclinic solution. Such a homoclinic solution of the classical Helmholtz equation has been discussed in detail in [18], in which the solution can be written as x(t) = −(3c1/(2c2)) sech²(√(−c1)t/2). Noting the relationship sech²(ωt/2) = 2/(1 + cosh(ωt)), here, to find a proper trial solution form for (1), we can observe the two special cases above. It can be seen that the two expressions above are similar, because they have the common form expressed as x0(t) = a + b/(d + cosh(ω0t)) (6), in which, when c3 = 0, the constants of (6) are b = −3c1/c2, a = 0, ω0 = √(−c1), and d = 1, while when c2 = 0, c1 < 0, and c3 > 0, the constants of (6) are b = ±√(−2c1/c3), a = 0, ω0 = √(−c1), and d = 0.
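To make the balance carried out below concrete, here is a sketch of the substitution in the notation reconstructed above. The paper's own equations (9)-(15) are not legible in this copy, so the algebra below is a re-derivation for the saddle-at-origin case a = 0 and should be read as an assumption-laden illustration:

```latex
% Trial orbit for \ddot{x} + c_1 x + c_2 x^2 + c_3 x^3 = 0
% (a \neq 0 is handled by the shift y = x - a):
\begin{align*}
x_0(t) &= \frac{b}{d + \cosh(\omega_0 t)}, &
\dot{x}_0(t) &= -\frac{b\,\omega_0 \sinh(\omega_0 t)}{\bigl(d + \cosh(\omega_0 t)\bigr)^{2}}.
\end{align*}
% Multiplying the equation of motion by (d + \cosh\omega_0 t)^3 and collecting
% powers of \cosh(\omega_0 t) gives three balance conditions:
\begin{align*}
\cosh^2 &:\quad \omega_0^2 = -c_1, \\
\cosh^1 &:\quad 3 c_1 d + c_2 b = 0, \\
\cosh^0 &:\quad 2 c_1 + c_1 d^2 + c_2 b d + c_3 b^2 = 0,
\end{align*}
% hence
\begin{align*}
b = -\frac{3 c_1 d}{c_2}, \qquad
d^{2} = \frac{2 c_2^{2}}{2 c_2^{2} - 9 c_1 c_3}.
\end{align*}
% Consistency checks: c_3 \to 0 gives d = 1,\ b = -3c_1/c_2 (Helmholtz);
% c_2 \to 0 gives d = 0,\ b = \pm\sqrt{-2 c_1 / c_3} (Duffing).
```

For Example 1 below (c1 = −1, c2 = −3, c3 = 1, saddle at the origin since c1 < 0), these expressions give ω0 = 1, d² = 2/3 and b = −d, so the exact orbit is explicit in elementary functions.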
Thus, the time derivative of (6) is ẋ0(t) = −bω0 sinh(ω0t)/(d + cosh(ω0t))² (7). Note that (a, 0) is the homoclinic point. For c2 ≠ 0 and c3 ≠ 0, we adopt (6) as a trial solution for the homoclinic solution of (1) and try to determine all its constants by substituting (6) into (1). In order to balance the resulting equation (8) for all time t, we equate coefficients of like powers of the hyperbolic function term (d + cosh(ω0t)) and get nonlinear algebraic equations (9)-(12). From (12), a = 0, or a = (−c2 ± √(c2² − 4c1c3))/(2c3). The left-hand side of (12) can be regarded as the restoring force of the oscillator at x = a. In other words, (12) means that the displacement derivative of the potential energy curve at x = a is zero. Furthermore, we have to make sure that the potential energy curve at x = a is not concave; thus, the displacement derivative of the restoring force at x = a must not be positive, that is, c1 + 2c2a + 3c3a² ≤ 0 (15). Therefore, a can be determined by (13)-(15), and then (9)-(11) can be discussed, respectively, in the two cases as follows.

Example 1. Here we apply the method to the equation ẍ − x − 3x² + x³ = 0, which is a case of (1) with c1 = −1, c2 = −3, and c3 = 1. From (14), (21), and (22), we can determine all the constants and get the homoclinic solution explicitly. The time histories and the phase portraits of the solutions by different methods are shown in Figures 1 and 2, respectively. It can be seen from the figures that the present method yields accurate and explicit solutions in both figures, while the generalized harmonic function perturbation method can only provide a valid solution in Figure 2. The reason is that, based on harmonic functions [12,13,19,20], the homoclinic solutions can only be expressed implicitly through the nonlinear time scale they adopt and can be investigated only in phase planes. Such implicit solutions are too abstract or cumbersome to use in some practical problems. Therefore, the present explicit solutions with respect to time are more applicable.

Perturbation Homoclinic Solution of the Self-Excited Helmholtz-Duffing Oscillator

Consider the homoclinic solution of the self-excited Helmholtz-Duffing equation (26), ẍ + c1x + c2x² + c3x³ = εf(x, ẋ), where ε denotes a small parameter. We assume the homoclinic solution of (26) can still be expressed in a form similar to (6); however, the amplitude and the nonlinear time scale will depend upon the perturbation parameter ε. Thus b and ω, respectively, can be expanded in powers of ε (28). Then (27) can be rewritten as (29), and after substituting (29) and (30) into (26), equating coefficients of like powers of ε yields equations (33), (34), and so on. The solutions x0, x1, ... can then be determined by solving the linear equations (33), (34), ... one by one. It can be seen that (33) is obtained from (1) via the transformation in (29). Therefore, the homoclinic solution of (33) can be given by (6). Multiplying (34) by ẋ0 and integrating it from t0 to t, we obtain (35). Noting the properties of the hyperbolic functions, we have x0(±∞) = a and ẋ0(±∞) = 0 (37). Thus, letting t0 = −∞ and t = +∞ in (35), we derive equation (38), which can also be derived by the classical Melnikov method; it represents the critical condition under which the homoclinic bifurcation occurs. In other words, there exists a homoclinic solution once all the constants in (26) satisfy (38). Letting t0 = 0 and t = +∞ in (35) gives (39), an expression involving the restoring force evaluated at x0(0) = a + b0/(d + 1), namely c1(b0/(d+1) + a) + c2(b0/(d+1) + a)² + c3(b0/(d+1) + a)³.
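The displays around (35)-(38) are illegible in this copy. In the reconstructed notation, and under the assumption that the perturbation enters as εf(x, ẋ), the standard leading-order solvability condition that (38) encodes is:

```latex
\int_{-\infty}^{+\infty} f\bigl(x_0(t), \dot{x}_0(t)\bigr)\,\dot{x}_0(t)\,\mathrm{d}t = 0 .
% For a van der Pol-type self-excitation
%   f(x, \dot{x}) = (\mu_0 + \mu_1 x + \mu_2 x^2)\,\dot{x}
% (one plausible reading of the unreadable (41)), this reduces to
\mu_0 I_0 + \mu_1 I_1 + \mu_2 I_2 = 0,
\qquad
I_k = \int_{-\infty}^{+\infty} x_0^{k}(t)\,\dot{x}_0^{2}(t)\,\mathrm{d}t ,
% where each I_k converges because \dot{x}_0 decays like e^{-\omega_0 |t|}.
```

Because x0 is even and ẋ0 is odd in t, each integrand x0^k ẋ0² is even; this is exactly the symmetry the text invokes below when rewriting (38) as (43).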
(39) Furthermore, substituting t0 = 0 into (35) yields (40). The three equations above allow the remaining constants, e.g., ω1 and b1, to be determined one by one. As an illustration, here we consider a self-excitation of the form f(x, ẋ) = (μ0 + μ1x + μ2x²)ẋ (41), in which μ0, μ1, and μ2 are constants. Noting (41) and substituting (6) and (7) into (36), the latter becomes (42), where the expressions of the constants are listed in the Appendix. Noting that, in (41), x0 is an even function and ẋ0 is an odd function with respect to t, (38) can be rewritten as (43). Thus, substituting (42) into (43), one derives (44), the condition under which homoclinic bifurcation occurs. Substituting (43) into (39) then gives ω1, and (40) can be rewritten to determine b1. Finally, the expression for the homoclinic solution can be assembled from these constants. It can be seen that once c2 or c3 becomes zero, the present procedures reduce to the methods and solutions presented in [17] or in [18], respectively.

For the chaos prediction, consider the harmonically forced oscillator (52), where F cos(Ωt) is the external harmonic excitation with amplitude F and frequency Ω. According to the Melnikov method [1,2], the Melnikov function M(t0) of (52) can be written as (53), where x0 is the solution presented in (6) and ∧ denotes the vector cross product. A chaotic response of the system may occur if there exists a simple zero point of M(t0). Thus, we substitute (7) into (53) and let M(t0) = 0; the latter yields the critical condition for chaos. It should be pointed out that the Melnikov method is only regarded as one of the conditions for chaotic prediction. At present, a chaotic motion should still be evaluated more thoroughly with qualitative theory and numerical methods. Below are two examples which satisfy the Melnikov condition but show different characters of chaotic motion.

Example 3. Consider the equation that is the case of (52) with c1 = −1, c2 = −1, c3 = 2, ε = 0.1, μ0 = −1, and Ω = 2. From (17), (55), and (56) we can derive that the critical amplitude is F = 1.43 when b0 = 0.94868 and F = 1.11 when b0 = −0.94868. Therefore, it can be estimated that chaotic motion may happen if F > 1.43. The numerical results of homoclinic bifurcation by the AUTO numerical method [30,31] show that, with F = 3, the Lyapunov exponent indicates chaotic behavior from t = 0 to about t = 145 and then converges to 0 gradually. That means the chaotic motion possesses a dissipative chaos property. The Lyapunov exponent diagram is shown in Figure 5. The Lyapunov exponent value converges to less than 0.01 after t = 1000. The phase portrait of the system after t = 1000 is shown in Figure 6, which shows that the motion converts to a limit cycle.

Example 4. From (17), (55), and (56) we can derive that F = 1.94 when b0 = 1.4138 and F = 1.82 when b0 = −1.4138. Therefore, it can be estimated that chaotic motion may happen if F > 1.94. The numerical results of homoclinic bifurcation by the AUTO numerical method [30,31] show that the Lyapunov exponent value stays above 0 when F = 2.90. The Lyapunov exponent diagram is shown in Figure 7. The Poincaré projection of the system from t = 500 to t = 5000 is shown in Figure 8. In the figure, the fractal character of a strange attractor can be observed, which supports the prediction of chaotic motion.

Conclusions

The present procedures are efficient for constructing homoclinic solutions of the Helmholtz-Duffing oscillator. The exact and explicit homoclinic solution of the undamped Helmholtz-Duffing oscillator is derived by a hyperbolic function balance procedure. The homoclinic solution of the self-excited system is then obtained by the extension of the hyperbolic perturbation procedure. The application to the chaos prediction of the nonautonomous Helmholtz-Duffing oscillator can also be conducted.
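As a closing numerical illustration of the threshold estimates in Examples 3 and 4, the Melnikov integrals can also be evaluated directly. Everything below is a hedged sketch in the reconstructed notation: the damping form and its coefficient, the sign branch of d, and the saddle-at-origin assumption are ours, so the printed value is illustrative and need not reproduce the paper's 1.43:

```python
import numpy as np

# x'' + c1*x + c2*x**2 + c3*x**3 = eps*(-delta*x' + F*cos(Omega*t))   (assumed form)
c1, c2, c3 = -1.0, -1.0, 2.0        # stiffness values as in Example 3
delta, Omega = 1.0, 2.0             # hypothetical damping coefficient; forcing frequency

w0 = np.sqrt(-c1)                   # saddle at the origin since c1 < 0
d = np.sqrt(2 * c2**2 / (2 * c2**2 - 9 * c1 * c3))
b = -3 * c1 * d / c2                # |b| = 0.94868..., matching the value quoted above

t = np.linspace(-60.0, 60.0, 200_001)
dt = t[1] - t[0]
x0dot = -b * w0 * np.sinh(w0 * t) / (d + np.cosh(w0 * t)) ** 2

# M(t0) = -delta*I2 + F*Re[e^{i*Omega*t0} * Ic]; simple zeros exist iff F*|Ic| > delta*I2
I2 = np.sum(x0dot**2) * dt
Ic = np.sum(x0dot * np.exp(1j * Omega * t)) * dt
print(f"critical forcing amplitude ~ {delta * I2 / abs(Ic):.3f}")
```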
2,784.6
2016-01-01T00:00:00.000
[ "Mathematics" ]
Thermotoga maritima MazG protein has both nucleoside triphosphate pyrophosphohydrolase and pyrophosphatase activities. MazG proteins form a widely conserved family among bacteria, but their cellular function is still unknown. Here we report that Thermotoga maritima MazG protein (Tm-MazG), the product of the TM0913 gene, has both nucleoside triphosphate pyrophosphohydrolase (NTPase) and pyrophosphatase activities. Tm-MazG catalyzes the hydrolysis of all eight canonical ribo- and deoxyribonucleoside triphosphates to their corresponding nucleoside monophosphates and PPi and subsequently hydrolyzes the resultant PPi to Pi. The NTPase activity with deoxyribonucleoside triphosphates as substrate is higher than with the corresponding ribonucleoside triphosphates. dGTP is the best substrate among the deoxyribonucleoside triphosphates, and GTP is the best among the ribonucleoside triphosphates. Both NTPase and pyrophosphatase activities were enhanced at higher temperatures and blocked by α,β-methyleneadenosine triphosphate, which cannot be hydrolyzed by Tm-MazG. Furthermore, PPi is an inhibitor of the Tm-MazG NTPase activity. Significant decreases in the NTPase activity and concomitant increases in the pyrophosphatase activity were observed when mutations were introduced at the highly conserved amino acid residues in the Tm-MazG N-terminal region (E41Q/E42Q, E45Q, E61Q, R97A/R98A, and K118A). These results demonstrated that Tm-MazG has dual enzymatic functions, NTPase and pyrophosphatase, and that these two enzymatic activities are coordinated.

The members of the MazG protein family are categorized by homology to the Escherichia coli MazG protein. MazG proteins are highly conserved among bacteria and considered to be typical prokaryotic proteins. Although the cellular function of this protein family is still unknown, we have demonstrated that the carboxyl-terminal region of MazG interacts with Era (an E. coli Ras-like protein), an essential GTPase in E. coli, and identified E. coli MazG as a nucleoside triphosphate pyrophosphohydrolase (NTPase), which can convert (d)NTP to (d)NMP and PPi (1). There are a few NTPases known in bacteria. The E. coli MutT protein is an NTPase with a preference for dGTP but is able to hydrolyze all eight canonical nucleoside triphosphates (2). Lack of the mutT gene increases the spontaneous mutation frequencies 100- to 10,000-fold over the wild-type level (3-5). An oxidized form of dGTP, 8-oxo-dGTP, is a potent mutagenic substrate for DNA synthesis that induces A:T to C:G transversions. 8-Oxo-dGTP can be hydrolyzed and eliminated from the nucleotide pool by the MutT protein to prevent misincorporation of 8-oxo-dGTP into DNA (6). Genes for MutT homolog proteins have been identified in Proteus vulgaris and Streptococcus pneumoniae (7,8). Enzymatic activity similar to that of the MutT protein has also been detected in mammalian tissues (9), and the genes for 8-oxo-dGTPase have been identified in humans, mice, and rats by cDNA cloning (10-12). Among the MutT homologs, there is a small conserved region that is involved in the NTPase activity as well as the antimutator activity (13,14), known as the MutT signature (15). There are other proteins with the MutT signature, such as the protein encoded by the E. coli orf17 gene, which has a preference for dATP and is not involved in antimutagenic activity (16). The deletion of the mazG gene did not result in a mutator phenotype in E. coli, suggesting that MazG is not associated with antimutagenic activity (1).
Mj0226 from Methanococcus jannaschii efficiently hydrolyzes xanthosine 5′-triphosphate to xanthosine 5′-monophosphate and ITP to IMP, but not the canonical standard nucleotides (17,18). The inosine triphosphate pyrophosphohydrolase activity has also been identified in human erythrocytes (19). The human inosine triphosphate pyrophosphohydrolase gene has been cloned and named hITPase; it encodes a protein homologous to the M. jannaschii Mj0226 protein (20). The function of this protein family has been proposed to be the elimination of minor, potentially mutagenic purine nucleoside triphosphates from the cell. The genomic DNA of Thermotoga maritima has been sequenced (21), and its analysis by BLAST search reveals that the TM0913 gene encodes the MazG homolog in T. maritima. In this paper, we demonstrate that the MazG protein from T. maritima (Tm-MazG), unlike E. coli MazG, has not only the NTPase activity but also a pyrophosphatase activity, converting (d)NTP to (d)NMP and PPi and subsequently hydrolyzing the resultant PPi to Pi. By site-directed mutagenesis, the amino acid residues involved in both enzymatic activities were identified. We demonstrate that Tm-MazG has dual enzymatic functions, NTPase and pyrophosphatase, and that these activities are coordinated, probably by having partially overlapping active sites. A pyrophosphate assay kit was purchased from Molecular Probes, Inc. (Eugene, OR).

Strains and Plasmids

The genomic DNA of T. maritima was used as template for PCR to amplify the TM0913 gene with primer 1, 5′-GAATTCCATATGAAAGAGGCAGGAATCCTCTTC-3′ (an NdeI site is underlined) and primer 2, 5′-CCCAAGCTTTCATGTTTCATCTCCTCCCTTCG-3′ (a HindIII site is underlined). The PCR product was digested with NdeI and HindIII and cloned into the NdeI-HindIII site of pET17b. This plasmid was designated pET17b-Tm-MazG and introduced into the E. coli BL21(DE3) strain for protein expression.

Protein Expression and Purification

The E. coli BL21(DE3) cells harboring pET17b-Tm-MazG were grown to midexponential phase in M9 medium supplemented with 0.2% casamino acids and 50 µg of ampicillin/ml, and then the expression of Tm-MazG was induced in the presence of 1 mM isopropyl-β-thiogalactopyranoside for 4 h. The cells were harvested by centrifugation, resuspended in buffer A (100 mM potassium phosphate buffer, pH 6.0, 10 mM β-mercaptoethanol), and then lysed through a French press, followed by centrifugation at 8,000 × g for 10 min to remove cell debris and unbroken cells and by ultracentrifugation at 10,000 × g for 1 h to remove membrane and insoluble fractions. The supernatant was treated at 80 °C for 15 min and then centrifuged at 12,000 × g for 15 min to remove denatured E. coli proteins. The resulting supernatant was loaded on a Q-Sepharose column and eluted with buffer A using a gradient of 0.1-1 M potassium phosphate. Fractions containing Tm-MazG were pooled and dialyzed against buffer B (10 mM potassium phosphate buffer, pH 6.0, and 10 mM β-mercaptoethanol). The protein sample was then loaded onto a hydroxyapatite column (Bio-Rad), which had been equilibrated with buffer B. Tm-MazG was eluted with buffer B using a gradient of 0.01-1 M potassium phosphate. The fractions containing purified Tm-MazG were pooled and concentrated. Protein concentrations were measured with the Bio-Rad protein assay dye reagent. The Tm-MazG mutants indicated in Fig. 7 were constructed by site-directed mutagenesis.
All of the Tm-MazG mutant proteins were purified with the same protocol as described above. Enzyme Assay. With α-32P-labeled nucleoside triphosphates as substrates, the NTPase activity of Tm-MazG was assayed by measuring the hydrolysis products from the α-32P-labeled nucleoside triphosphates by TLC. The assay was carried out in 20 μl of reaction buffer (20 mM Tris-HCl, pH 8.0, 100 mM NaCl, 5 mM MgCl2, and 1 mM DTT) containing the α-32P-labeled nucleoside triphosphate and an appropriate amount of Tm-MazG at 70 °C for 10 min. The reaction mixture (5 μl) was mixed with 5 μl of stop solution (2% SDS and 20 mM EDTA) to terminate the reaction. Each terminated reaction mixture (2 μl) was spotted onto a polyethyleneimine-cellulose TLC plate, which was then developed in 0.75 M KH2PO4 (pH 3.3). The corresponding unlabeled nucleoside triphosphates were spotted alongside. The dried plates were exposed to x-ray films to identify the hydrolysis products. Spots of unlabeled nucleoside triphosphates were visualized by UV shadowing. With γ-32P-labeled nucleoside triphosphates as substrates, the NTPase activity of Tm-MazG was measured as follows. The reaction mixture (20 μl) contained 100 μM nucleoside triphosphate, 20 mM Tris-HCl (pH 8.0), 100 mM NaCl, 5 mM MgCl2, 1 mM DTT, and 1 μg of Tm-MazG protein. The reaction was carried out at 70 °C for 10 min and terminated by the addition of 20 μl of a mixture of four parts 20% Norit A (Sigma) and one part 7% perchloric acid. After mixing and incubating for 2 min on ice, the mixture was centrifuged, and the radioactivity of the supernatant was measured with a liquid scintillation counter. [Fig. 1 legend fragment (competition assay): reactions contained [α-32P]GTP and 0.5 μg of Tm-MazG, incubated at 70 °C for 10 min in the presence of 4 mM nucleotide competitor; hydrolysis products were assayed by polyethyleneimine-cellulose thin layer chromatography, the amounts of [α-32P]GMP produced were estimated with a PhosphorImager, and the hydrolysis activity with each nucleotide competitor is given relative to the activity without competitor (100%); each value is the mean of three independent experiments.] The pyrophosphatase activity was detected with pyrophosphate as substrate. The reaction mixture (20 μl) contained 1 mM pyrophosphate, 20 mM Tris-HCl (pH 8.0), 100 mM NaCl, 5 mM MgCl2, 1 mM DTT, and 1 μg of Tm-MazG protein. The reaction was carried out at 70 °C for 10 min and terminated by the addition of 5 μl of the stop solution. The amount of Pi released was measured by the standard colorimetric assay method described by Ames and Dubin (22). Substrate Specificity of Tm-MazG Protein. The specificity of nucleotide hydrolysis by Tm-MazG protein was examined using various nucleoside triphosphates. The reactions were performed at 70 °C for 10 min in 20 μl of reaction mixture containing 20 mM Tris-HCl (pH 8.0), 100 mM NaCl, 5 mM MgCl2, 1 mM DTT, and 0.5 μg of Tm-MazG protein. Rates of hydrolysis of the nucleoside triphosphates were determined with six different substrate concentrations from 0.1 to 2 mM. Under these conditions, the velocity of the reaction was linear with time. Amounts of the (d)NMP product were measured, and the kinetic parameters were calculated from the average values of three independent experiments. Detection of Pyrophosphate. Nucleoside triphosphate hydrolysis assays were performed as described above with [γ-32P]GTP as substrate. The reaction mixture (5 μl) was mixed with 5 μl of the stop solution to terminate the reaction.
The samples (1 μl) were spotted onto Whatman 3MM paper, which was then developed in a solution containing n-butyl alcohol, n-propyl alcohol, acetone, 80% formic acid, and 30% trichloroacetic acid at a ratio of 40:20:25:25:15 (v/v/v/v/v) in the presence of 0.5 mg/ml EDTA. The Whatman 3MM paper was then autoradiographed to identify the products from [γ-32P]GTP. Unlabeled monophosphate and pyrophosphate were analyzed alongside and visualized as described by Schwemmle and Staeheli (23). RESULTS Cloning, Expression, and Purification. MazG family proteins are highly conserved among bacteria. The TM0913 gene, encoding the MazG homolog in T. maritima, was cloned as described under "Experimental Procedures." [Fig. 4 legend fragment: A, amounts of Pi released were measured by the colorimetric assay. B, inhibition of the pyrophosphatase activity by AMPCPP; the reaction mixtures were incubated at 70 °C for 2 min with 1 μg of Tm-MazG at the indicated AMPCPP concentrations, pyrophosphate was then added to a final concentration of 1 mM, and the reaction was continued for another 10 min at 70 °C, with released Pi measured colorimetrically against a no-enzyme blank. C, inhibition of Tm-MazG-catalyzed GTP hydrolysis by AMPCPP; reactions (20 μl) containing 100 μM GTP, 10 μCi of [α-32P]GTP, and 1 μg of Tm-MazG at the indicated AMPCPP concentrations were carried out at 70 °C for 10 min, and the hydrolysis products were assayed by polyethyleneimine-cellulose thin layer chromatography.] The original GTG start codon of the gene was changed to ATG. Expression of the gene cloned in the pET17b-Tm-MazG plasmid transformed into E. coli BL21(DE3) resulted in a major band on an SDS-PAGE gel, corresponding to about 20% of the total cellular protein. The cell lysate was incubated at 80 °C for 15 min and centrifuged to remove denatured E. coli proteins. At this stage, Tm-MazG was substantially purified. Tm-MazG was subsequently purified by column chromatography on Q-Sepharose and hydroxyapatite columns, resulting in a highly homogeneous protein band on SDS-PAGE. The molecular mass of this product was 29,728 Da as determined by mass spectrometry, which agrees well with the predicted mass of 29,674 Da for recombinant Tm-MazG without the N-terminal Met residue. Thermal denaturation of the Tm-MazG protein was examined using a far-UV CD spectropolarimeter. Tm-MazG was very stable at high temperature, and no change was detected up to 85 °C. The purified Tm-MazG showed a CD spectrum typical of a protein containing both α-helices and β-sheets (data not shown). With [α-32P]GTP as substrate for Tm-MazG, the GTP hydrolysis activity was tested with unlabeled nucleoside triphosphates as competitors. In the presence of 4 mM competitors (400-fold excess), GTP hydrolysis was effectively blocked by all eight canonical nucleoside triphosphates, among which CTP was the strongest competitor. The ribonucleoside triphosphates were stronger competitors than the corresponding deoxyribonucleoside triphosphates (Fig. 1B). The general kinetic parameters for hydrolysis of the eight canonical nucleoside triphosphates by Tm-MazG were measured (Table I). Among the various nucleoside triphosphates used, the hydrolytic efficiency (kcat/Km) of the deoxyribonucleoside triphosphates was higher than that of the corresponding ribonucleoside triphosphates.
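The kinetic parameters in Table I follow from fitting initial rates measured at the six substrate concentrations described above to the Michaelis-Menten equation. The sketch below illustrates such a fit; the rate values and the enzyme amount in it are hypothetical placeholders, not the paper's data, and the scipy-based fit is only one way to perform this analysis.

```python
# Minimal sketch: estimating Km and kcat from initial-rate data by
# nonlinear least squares. Rates and enzyme amount are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(s, vmax, km):
    """Initial rate v = Vmax*[S]/(Km + [S])."""
    return vmax * s / (km + s)

# Six substrate concentrations spanning the 0.1-2 mM range used in the text
s = np.array([0.1, 0.25, 0.5, 1.0, 1.5, 2.0])          # mM
v = np.array([0.82, 1.58, 2.41, 3.17, 3.55, 3.78])     # nmol/min (hypothetical)

(vmax, km), _ = curve_fit(michaelis_menten, s, v, p0=[4.0, 0.5])

enzyme = 0.017  # nmol of enzyme in the assay (hypothetical; ~0.5 ug of a ~30 kDa protein)
kcat = vmax / enzyme
print(f"Vmax = {vmax:.2f} nmol/min, Km = {km:.3f} mM")
print(f"kcat = {kcat:.0f} min^-1, kcat/Km = {kcat / km:.0f} mM^-1 min^-1")
```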
dGTP was the most preferred substrate for Tm-MazG among the deoxyribonucleoside triphosphates, and GTP was the most preferred among the ribonucleoside triphosphates. The Km values for the ribonucleoside triphosphates were lower than those for the corresponding deoxyribonucleoside triphosphates, except that for GTP, which was similar to that for dGTP. Consistent with the competition experiment results described above, the ribonucleoside triphosphates were the stronger competitors, and CTP, with the lowest Km value, was the strongest inhibitor (Fig. 1B, column 5). Production of Pi with Pyrophosphate as an Intermediate Product. As shown above, Tm-MazG converts (d)NTP to (d)NMP without any detectable production of (d)NDP. Therefore, pyrophosphate should be another product of the Tm-MazG-catalyzed nucleoside triphosphate hydrolysis. To detect the pyrophosphate product in the reaction mixture, the hydrolysis reaction was performed with [γ-32P]GTP as substrate for Tm-MazG (1 μg), and the reaction products were analyzed by paper chromatography and visualized by autoradiography as described under "Experimental Procedures." A radioactive spot was observed at the position of Pi but not at the position of pyrophosphate, whether the reaction was performed at 37 °C (Fig. 2A, lanes 5 and 6) or 70 °C (Fig. 2A, lanes 8 and 9). The same results were obtained from experiments performed at various other temperatures, even for shorter reaction times (data not shown). However, with E. coli MazG (1 μg) the radioactive signal was observed only at the position of pyrophosphate (Fig. 2A, lane 2), and it shifted to the position of Pi only when the reaction mixture was treated with yeast inorganic pyrophosphatase (Fig. 2A, lane 3). Even when 10 μg of E. coli MazG was used, only PPi and no Pi was detected among the products of E. coli MazG-catalyzed GTP hydrolysis (data not shown). These results indicate that E. coli MazG hydrolyzes GTP to GMP and PPi, whereas Tm-MazG is able to convert GTP to GMP and Pi. The results above indicate that Tm-MazG has not only NTPase but also pyrophosphatase activity, which effectively hydrolyzes pyrophosphate, a primary product of the NTPase activity, to Pi. In order to prove that pyrophosphate is a primary product of the NTPase activity, nonradioactive pyrophosphate was added to the Tm-MazG-catalyzed [γ-32P]GTP hydrolysis reaction mixture at various concentrations, since degradation of the newly formed radiolabeled pyrophosphate is expected to be inhibited in the presence of excess nonradioactive pyrophosphate. As shown in Fig. 2B, accumulation of radiolabeled pyrophosphate was detectable in the presence of 2 mM pyrophosphate (Fig. 2B, lane 3), indicating that pyrophosphate is indeed produced in the Tm-MazG-catalyzed [γ-32P]GTP hydrolysis. With 5 mM pyrophosphate in the reaction mixture, the amount of radiolabeled pyrophosphate was dramatically reduced (Fig. 2B, lane 4), and it was not detectable in the presence of 10 and 25 mM pyrophosphate (Fig. 2B, lanes 5 and 6), indicating that pyrophosphate does function as an inhibitor of the NTPase activity at higher concentrations. The inhibitory effect of pyrophosphate on the NTPase activity was further confirmed by another experiment, in which the GTP hydrolysis reaction was carried out with [α-32P]GTP (100 μM) as substrate in the presence of various concentrations of pyrophosphate (Fig. 3). GTP hydrolysis activity, as detected by the production of [α-32P]GMP, decreased with increasing pyrophosphate concentrations (Fig. 3, lanes 2-4) and was almost completely blocked at 5 mM pyrophosphate (Fig. 3, lane 4).
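The behavior described here, sequential hydrolysis with the intermediate PPi also inhibiting the first step, can be summarized in a minimal kinetic model. The sketch below integrates such a two-step scheme; all rate constants and concentrations are illustrative assumptions, not values fitted to the paper's measurements.

```python
# Sketch of the coupled reactions: GTP -> GMP + PPi (NTPase step, treated
# as competitively inhibited by PPi) followed by PPi -> 2 Pi
# (pyrophosphatase step). All parameters are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

V1, KM1, KI = 2.0, 0.2, 0.5   # NTPase: Vmax (mM/min), Km (mM), PPi Ki (mM)
V2, KM2 = 5.0, 0.3            # pyrophosphatase: Vmax (mM/min), Km (mM)

def rates(t, y):
    gtp, ppi, pi = y
    v1 = V1 * gtp / (KM1 * (1 + ppi / KI) + gtp)  # PPi as competitive inhibitor
    v2 = V2 * ppi / (KM2 + ppi)
    return [-v1, v1 - v2, 2 * v2]                  # stoichiometry of the two steps

sol = solve_ivp(rates, (0.0, 10.0), [0.1, 0.0, 0.0], dense_output=True)
for t in (0.5, 2.0, 10.0):
    gtp, ppi, pi = sol.sol(t)
    print(f"t={t:4.1f} min  GTP={gtp:.4f}  PPi={ppi:.5f}  Pi={pi:.4f}  (mM)")
```

With the pyrophosphatase step faster than the NTPase step, PPi stays at a low quasi-steady-state level in this toy model, consistent with the failure to detect free PPi among the Tm-MazG reaction products.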
With ATP as substrate for Tm-MazG, released Pi could be detected by the standard colorimetric assay, whereas almost no Pi production was detected with AMPCPP as substrate under the same reaction conditions (Fig. 4A), indicating that AMPCPP, in which the bridging oxygen between the α- and β-phosphates is replaced by a nonhydrolyzable methylene group, cannot be hydrolyzed by Tm-MazG. [Fig. 5 legend: the effects of temperature on the NTPase and pyrophosphatase activities of Tm-MazG. A, the NTPase assay was performed in a 20-μl reaction mixture with 100 μM GTP, 10 μCi of [γ-32P]GTP, and 1 μg of Tm-MazG for 10 min at various temperatures as described under "Experimental Procedures." B, the pyrophosphatase assay was performed in a 20-μl reaction mixture with 1 mM pyrophosphate and 1 μg of Tm-MazG for 10 min at various temperatures; the amount of Pi released was measured by the colorimetric assay.] These results indicate that, in the Tm-MazG-catalyzed (d)NTP hydrolysis reaction, (d)NTP is first hydrolyzed between the α- and β-phosphates, yielding (d)NMP and PPi, and subsequently the resultant PPi is hydrolyzed to Pi. Pyrophosphatase Activity of Tm-MazG. In order to demonstrate directly that Tm-MazG has pyrophosphatase activity, the purified Tm-MazG was incubated with pyrophosphate at 70 °C for 10 min, and the amount of Pi resulting from pyrophosphate hydrolysis was then measured by a colorimetric assay. As shown in Fig. 4A, Tm-MazG indeed has pyrophosphatase activity that directly hydrolyzes PPi to Pi (Fig. 4A, column 3). Although the Tm-MazG preparation was purified from E. coli with a heat-treatment step, one cannot exclude the possibility that the pyrophosphatase activity detected for Tm-MazG is due to contamination with E. coli pyrophosphatase. Therefore, we next examined the temperature dependence of the NTPase and pyrophosphatase activities. As shown in Fig. 5, both enzymatic activities increase as the reaction temperature increases, and the optimal temperature is approximately 80 °C for both, as expected for a T. maritima protein. The NTPase activity of Tm-MazG increased more than 10-fold from 30 to 80 °C (Fig. 5A), whereas the NTPase activity of E. coli MazG was highest at 37 °C and dropped at 80 °C to only a few percent of its value at 37 °C (data not shown). It has been reported that the activity of the E. coli pyrophosphatase is severely inhibited at higher temperatures (24). These results indicate that the pyrophosphatase activity is an intrinsic enzymatic activity of Tm-MazG. To further support this notion, we tested the inhibitory effect of AMPCPP on the enzymatic activities of Tm-MazG. Tm-MazG was first incubated with various concentrations of AMPCPP at 70 °C for 2 min, and pyrophosphate was then added to the reaction mixture. [Fig. 6 legend fragment: gaps introduced in the amino acid sequences to optimize the alignment are indicated by dashes; identical residues between the two sequences are shown in the middle line, and functionally similar residues are shown by plus signs.] The amounts of Pi released in the reaction mixture were measured after another 10-min incubation at 70 °C. As shown in Fig. 4B, the pyrophosphatase activity was inhibited by AMPCPP; with increasing ratios of AMPCPP to pyrophosphate (1:1, 2.5:1, and 5:1), the amount of released Pi decreased to 97, 71, and 21% of that without AMPCPP, which was taken as 100%.
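Taking the three relative activities quoted above at face value, a rough apparent IC50 for AMPCPP can be extracted by fitting a simple dose-response model. The Hill form below is an assumption, and three data points permit only a crude estimate.

```python
# Rough IC50 estimate for AMPCPP inhibition of the pyrophosphatase activity,
# using the three relative activities quoted in the text (97, 71, 21% at
# AMPCPP concentrations of 1, 2.5, and 5 mM against 1 mM PPi). The Hill
# model is an assumption; three points give only a crude fit.
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, n):
    """Percent activity remaining at inhibitor concentration c."""
    return 100.0 / (1.0 + (c / ic50) ** n)

conc = np.array([1.0, 2.5, 5.0])      # mM AMPCPP
act = np.array([97.0, 71.0, 21.0])    # % activity relative to no inhibitor

(ic50, n), _ = curve_fit(hill, conc, act, p0=[3.0, 2.0])
print(f"apparent IC50 ~ {ic50:.1f} mM, Hill coefficient ~ {n:.1f}")
```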
Next, Tm-MazG was incubated with various concentrations of AMPCPP at 70 °C for 2 min, and [α-32P]GTP was then added to the reaction mixture. After another 10-min incubation at 70 °C, the production of [α-32P]GMP was examined by thin layer chromatography followed by autoradiography. GTP hydrolysis was increasingly inhibited with increasing AMPCPP concentrations (Fig. 4C). ATP hydrolysis was also inhibited by AMPCPP (data not shown). These results indicate that AMPCPP functions as an inhibitor of both the NTPase and pyrophosphatase activities and further confirm that the pyrophosphatase activity is an intrinsic enzymatic activity of the Tm-MazG protein. We also examined the effect of polyphosphate on the Tm-MazG pyrophosphatase activity and found that polyphosphate neither serves as a substrate nor inhibits the pyrophosphatase activity under the reaction conditions used. Site-directed Mutagenesis of the Conserved Amino Acid Residues in Tm-MazG. Tm-MazG is a member of the MazG protein family, which is highly conserved in bacteria. Sequence alignments of MazG proteins from E. coli, Yersinia pestis, Vibrio cholerae, Pasteurella multocida, Haemophilus influenzae, Caulobacter crescentus, Agrobacterium tumefaciens, and T. maritima are shown in Fig. 6A. The alignments reveal a common motif duplicated in the Tm-MazG N-terminal and C-terminal regions (Fig. 6B). There are six conserved Glu residues in the duplicated motif in Tm-MazG. In the NMR structure of the E. coli MutT protein, another NTPase, there are four conserved Glu residues at the active site (25), suggesting that Glu residues play an important role in the NTPase activity. Site-directed mutagenesis was performed to analyze the role of these highly conserved Glu residues in the enzymatic activities of Tm-MazG. Mutations in the N-terminal region, such as E41Q/E42Q, E45Q, and E61Q, significantly reduced the NTPase activity to approximately 10% of the wild-type activity (Fig. 7A). Interestingly, mutations in the C-terminal region, such as E173A, E176A, and E185A/E186A, had little effect on the NTPase activity (Fig. 7A). Mutations at other conserved residues in the N-terminal region, such as R97A/R98A and K118E, also reduced the NTPase activity to about 10% of the wild-type activity (Fig. 7A). Surprisingly, the pyrophosphatase activity was substantially activated by all of the mutations described above (Fig. 7B). In particular, the E176A mutant showed the highest pyrophosphatase activity, about 3.7 times that of the wild type (Fig. 7B, column 8). It should be noted that in the Tm-MazG E41Q/E42Q, E45Q, E61Q, R97A/R98A, and K118E mutants, the pyrophosphatase activity increased while the NTPase activity decreased in comparison with the wild-type Tm-MazG protein. CD spectra of the purified E45Q and R97A/R98A mutant proteins were recorded at 80 °C. Both spectra are similar to that of the wild-type Tm-MazG, indicating that these mutants are properly folded (data not shown). There was no significant difference in cell growth between cells overproducing the wild-type Tm-MazG and cells overproducing the mutant proteins with limited NTPase activities. Overproduction of the E. coli MazG protein also had no significant inhibitory effect on cell growth. DISCUSSION The MazG protein family, consisting of proteins homologous to E. coli MazG, is highly conserved in bacteria. The E. coli MazG homolog in T. maritima is encoded by the TM0913 gene. BLAST analysis of the sequence similarity between E. coli
MazG and Tm-MazG revealed that their identity is 39% and their similarity is 59%. E. coli MazG had been characterized as an NTPase that hydrolyzes (d)NTP to (d)NMP and PPi. In the present study, we demonstrated that Tm-MazG has not only NTPase activity but also pyrophosphatase activity. That Tm-MazG has both NTPase and pyrophosphatase activities is supported by the following experimental results: 1) Tm-MazG converts NTP/dNTP to NMP/dNMP without any detectable production of NDP/dNDP; 2) production of radiolabeled PPi can be detected in the Tm-MazG-catalyzed [γ-32P]GTP hydrolysis in the presence of excess amounts of nonradioactive PPi; 3) Tm-MazG can hydrolyze ATP but not AMPCPP; 4) Tm-MazG can hydrolyze PPi to Pi; 5) both NTPase and pyrophosphatase activities increase at higher temperatures, with an optimal temperature of about 80 °C; 6) AMPCPP functions as an inhibitor of both NTPase and pyrophosphatase activities; and 7) mutations of highly conserved amino acid residues in Tm-MazG affect both enzymatic activities. In the Tm-MazG-catalyzed (d)NTP hydrolysis reaction, (d)NTP is first hydrolyzed between the α- and β-phosphates, yielding (d)NMP and PPi, and the PPi is subsequently hydrolyzed to Pi. [Fig. 7 legend: enzymatic assays of Tm-MazG mutants. A, the relative GTP hydrolysis activities of Tm-MazG mutants compared with that of the wild-type Tm-MazG; the assay was performed at 70 °C for 10 min in a 20-μl reaction mixture with 100 μM GTP, 10 μCi of [γ-32P]GTP, and 1 μg of each protein indicated, as described under "Experimental Procedures." B, the relative pyrophosphatase activities of Tm-MazG mutants compared with that of the wild-type Tm-MazG; the assays were performed at 70 °C for 10 min in a 20-μl reaction mixture with 1 mM pyrophosphate and 1 μg of each protein indicated, and the amounts of Pi released were measured by the colorimetric assay. The activity of each protein is shown relative to that of wild-type Tm-MazG, taken as 100%; each value is the mean of three independent experiments.] Since PPi cannot be detected in the reaction mixture in the presence of Mg2+, the PPi produced by the NTPase activity of Tm-MazG may remain at the same position for its subsequent hydrolysis to Pi. At present, it is not certain whether the two enzymatic activities share the same active site. However, since AMPCPP inhibits both enzymatic activities, their active sites may be at least very close to each other or partially overlapping. It is important to note that a motif with highly conserved Glu residues is repeated in Tm-MazG, once in the N-terminal region and once in the C-terminal region. Interestingly, site-directed mutations at the conserved amino acid residues in the N-terminal region severely disrupted the NTPase activity, whereas mutations at the conserved residues in the C-terminal region did not. Moreover, since all of these mutations enhanced the pyrophosphatase activity, it appears that the N-terminal Tm-MazG motif is important for the NTPase activity, whereas both the N- and C-terminal motifs may be involved in the pyrophosphatase activity. It is also interesting to note that E. coli MazG has no detectable pyrophosphatase activity. The NTPase activity of E. coli MazG is significantly weaker than that of Tm-MazG, which may be because the intrinsic pyrophosphatase activity of Tm-MazG removes the inhibitory effect of pyrophosphate on the NTPase activity. It remains to be determined how Tm-MazG accommodates dual enzymatic activities.
Determination of its three-dimensional structure may provide insights into the enzymatic mechanisms of Tm-MazG. The cellular function of MazG is unknown at present. In E. coli, the mazG gene is located downstream of the mazEF addiction module, which has been proposed to be involved in programmed cell death under stress conditions (26). However, T. maritima does not contain mazEF homologues. Further studies are needed to elucidate whether MazG is functionally and/or physiologically related to the mazEF system in E. coli. Since the MazG protein is able to hydrolyze the bond between the α- and β-phosphates of NTP/dNTP, other compounds such as ITP, xanthosine 5′-triphosphate, and the Nudix enzyme substrates Ap4A and Ap3A may also serve as substrates for the MazG protein. Analyses of the GenBank database reveal that there are proteins containing a MazG domain together with another functional domain. Proteins containing both a uroporphyrinogen-III methylase domain and a MazG domain were found in Clostridium acetobutylicum, Bacillus anthracis, Staphylococcus aureus, Clostridium perfringens, Bacillus halodurans, and Thermoanaerobacter tengcongensis, while a protein containing a helix-turn-helix XRE domain and a MazG domain was found in a Streptococcus thermophilus bacteriophage. The functions of these domains may be related to the function of MazG in the cell.
6,346.2
2003-06-13T00:00:00.000
[ "Biology", "Chemistry" ]
Light-induced shift current vortex crystals in moiré heterobilayers

Significance. Employing light to drive different phenomena or to induce and tune exotic phases of matter lies at the heart of modern condensed matter science and of technologies such as optoelectronics. Shift current, a second-order optical response in noncentrosymmetric materials, is a notable DC photocurrent (generated without a p-n junction) with prominent features such as low dissipation and robustness against scattering. We investigate shift current generation and its microscopic real-space density distributions in WSe2/WS2 moiré heterobilayers. We identify a moiré quantum matter, light-induced shift current vortex crystals with associated magnetism, as well as an all-optical control route for its manipulation. These findings offer insight into the photophysics of van der Waals moiré systems and open rich opportunities for their applications.

Transition metal dichalcogenide (TMD) moiré superlattices provide an emerging platform to explore various light-induced phenomena. Recently, the discoveries of novel moiré excitons have attracted great interest. The nonlinear optical responses of these systems are, however, still underexplored. Here, we report an investigation of light-induced shift currents (a second-order response generating DC current from optical illumination) in the WSe2/WS2 moiré superlattice. We identify a striking phenomenon: the formation of shift current vortex crystals, i.e., two-dimensional periodic arrays of moiré-scale current vortices and associated magnetic fields, with remarkable intensity under a laboratory laser setup. Furthermore, we demonstrate high optical tunability of these current vortices: their location, shape, chirality, and magnitude can be tuned by the frequency, polarization, and intensity of the incident light. Electron-hole interactions (excitonic effects) are found to play a crucial role in the generation and nature of the shift current intensity and distribution. Our findings provide a promising all-optical control route to manipulate nanoscale shift current density distributions and magnetic field patterns, as well as shed light on nonlinear optical responses in moiré quantum matter and their possible applications.
shift current | moiré heterobilayers | vortex crystals | time-dependent GW

The bulk photovoltaic effect (BPVE) is a general term that refers to DC electric current generation in noncentrosymmetric materials under illumination by optical light (1-4). Unlike conventional photovoltaic devices, BPVE does not require a p-n junction or external bias to separate the photoexcited electrons and holes for a DC current, providing a fundamentally new route to high-efficiency photovoltaics (5-13). Shift current, an intrinsic mechanism of BPVE, is a second-order optical response and can conceptually be interpreted as the "shift" of the intracell coordinates of the excited electrons (14-22). In contrast to drift current, light-induced shift current is of a purely quantum nature, originating from the spatial evolution of electronic wavepackets upon photoexcitation. Previous studies have shown that shift current has deep connections with the topological properties of the electronic states and exhibits various useful features, such as low dissipation and robustness against scattering. In low-dimensional materials, a recent study moreover showed that electron-hole interaction (excitonic) effects play a key role in enhancing and modifying shift currents (22). Here, we report ab initio investigations of light-induced shift currents and their microscopic real-space distributions in rotationally aligned WSe2/WS2 moiré superlattices. To capture many-body excitonic effects accurately from first principles, we employ a time-dependent adiabatic GW (TD-aGW) approach with real-time propagation of the density matrix in the presence of the external light field (22). Our findings show that electron-hole interaction effects play a crucial role in the character and magnitude of the shift current. The current density arising from illumination of light with different
frequency (corresponding to transitions to different moiré exciton resonances) and polarization exhibits distinct real-space distributions. Importantly, we identify a striking phenomenon: the formation of 2D periodic arrays of shift current vortices and their induced magnetic field nanopatterns. Under an illumination frequency corresponding to transitions to the intralayer charge-transfer excitons (32), current vortices of opposite circulation chirality are created within each moiré unit cell, forming a well-defined vortex crystal. Linearly and circularly polarized light create distinct types of vortex crystals, which exhibit antiferromagnetism and ferrimagnetism, respectively. We further demonstrate that the frequency, polarization, and intensity of the incident light can efficiently tune the location, shape, chirality, and magnitude of these photoinduced current vortices, suggesting a promising all-optical control of the photocurrent density distribution and associated magnetism in TMD moiré superlattices. For the rotationally aligned WSe2/WS2 bilayer, the lattice-constant mismatch between the two layers is about 4%, giving rise to a hexagonal moiré supercell with a periodicity of about 8.3 nm (Fig. 1A). To minimize the total energy, the superlattice reconstructs to increase the areas of the lower-energy Bernal (BSe/W and BW/S) stacking regions while decreasing the higher-energy areas (AA stacking regions) (32, 40, 41). This leads to a remarkable moiré structural reconstruction with strain, as shown in Fig. 1B: local compression in the Bernal stacking regions and local expansion in the AA regions of the WSe2 layer. Previous experimental and theoretical studies have revealed that this system has a type-II band alignment and that its low-energy optical properties for normally incident light are dominated by the WSe2 intralayer excitons, because the interlayer excitons have very small oscillator strengths and the WS2 intralayer exciton excitations (with high energies) are well separated from the WSe2 resonances (32-34). The large-scale inhomogeneous strain field of the WSe2 layer (Fig. 1B) strongly modifies its band structure (leading to flat bands) and modulates the wavefunctions of the quasiparticle moiré states in real space (32), resulting in the formation of novel moiré excitons. In contrast to the single low-energy peak (A exciton) of the pristine WSe2 monolayer (42, 43) in the energy range shown in Fig. 1C, there are three moiré excitation peaks in the computed absorbance spectra of the moiré superlattice, which match well with recent optical measurements (32-34). We note that the absorbance cannot be captured even qualitatively by the independent particle approximation (IP, black line). Also, the identical absorbances obtained by our TD-aGW approach (red line) and the standard GW plus Bethe-Salpeter equation (GW-BSE) method (blue dots) validate the accuracy of the excitonic effects included in the TD-aGW calculations for large-scale TMD moiré systems. The formalism and computational details of the GW (44), GW-BSE (45, 46), and TD-aGW (22) methods can be found in the Methods. As shown in ref. 32 and Fig. 1, the intralayer moiré excitons (peaks I, II, and III) of this system possess distinct microscopic character (Fig. 1D):
The moiré exciton of peak I (lower energy, 1.67 eV) has a Wannier-type character in which the correlated electron and hole densities coincide in space and are located around the AA stacking region, while the moiré exciton of peak III (higher energy, 1.87 eV) exhibits an intralayer charge-transfer character in which the correlated electron and hole densities are spatially separated, located at the AA and BSe/W stacking regions, respectively. Peak II (middle energy, 1.75 eV), on the other hand, has a mixed character. In the following, we present our results on the quantity of central interest in this work: the real-space distribution of the light-induced microscopic shift current density. The local current density J(r, t) in a time-dependent driving field for a quantum system is generally obtained from the expectation value of the current density operator ĵ(r) = (e/2){v̂, δ(r̂ − r)}, where e is the charge of the electron and v̂ is the velocity operator. This may be achieved using the density matrix ρ(t) through J(r, t) = tr[ρ(t) ĵ(r)]. We compute the interacting ρ(t) using the ab initio TD-aGW approach (see Methods). In the Bloch-state single-particle orbital basis, the optically induced local current density is then given by

J(r, t) = (1/N_k) Σ_{n,m,k} ρ_{nm,k}(t) ⟨ψ_{m,k}| ĵ(r) |ψ_{n,k}⟩,   [1]

where ψ_{n,k}(r) is a Bloch-state orbital with band index n and wavevector k, and N_k is the number of k points sampled in the Brillouin zone. Using Fourier transformation, responses at different frequencies can be computed through J(r, t) = Σ_ω J(r, ω) e^{iωt}, and the shift current density at position r is given by the DC component J_DC(r) = J(r, ω = 0). In some previous studies, the terms in Eq. 1 involving the diagonal (n = m) and off-diagonal (n ≠ m) elements of the density matrix were separated, treated differently, and given different terminologies (3, 4). Such a separation for the density matrix of an interacting many-particle system is conceptually unnecessary and basis dependent. In our calculations, the full density matrix is used directly to compute the complete real-space density distribution of the DC photocurrent to second order in the optical field (which we refer to as the shift current). In experiments on 2D systems, the 2D current density is a well-defined measurable quantity that describes the charge flow in the layer per unit length. Therefore, we integrate the computed 3D current density over the length of the supercell in the normal direction (z) in our simulation to obtain a 2D current density in the x-y plane. Throughout the rest of the paper, for simplicity, we refer to the 2D DC photocurrent (shift current) density distribution as J(r), with r = (x, y). In the literature, to our knowledge, only the shift current density averaged over the unit cell, J̄ = (1/Ω) ∫ J(r) dr, has been studied for crystals, where Ω is the unit cell volume (or area in 2D). It is traditionally expressed, from integrating Eq. 1, in the form (22)

J̄^a = Σ_{b,c} σ^{abc}(ω) E^b(ω) E^c(−ω),   [2]

where E is the electric field of the incident light (assumed to be uniform over the sample) and a, b, c are Cartesian components. We shall call J̄ the macroscopic current density. However, as shown below, it gives limited information compared to J(r).
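Extracting J_DC(r) from the real-time signal amounts to isolating the zero-frequency Fourier component of J(r, t). The sketch below demonstrates this on a synthetic signal mimicking a second-order response (a DC offset plus oscillations at ω0 and 2ω0); all numbers are illustrative and unrelated to the TD-aGW data.

```python
# Sketch: extracting the DC (shift current) component of a time-dependent
# current by Fourier analysis. The synthetic signal mimics a second-order
# response: a DC offset plus oscillations at omega0 and 2*omega0.
import numpy as np

omega0 = 2 * np.pi * 1.0                    # drive frequency (arbitrary units)
t = np.linspace(0, 50, 5000, endpoint=False)  # 50 full periods of the drive

j_dc_true = 0.05
j_t = (j_dc_true
       + 0.8 * np.sin(omega0 * t)           # linear response at omega0
       + 0.2 * np.cos(2 * omega0 * t))      # second-harmonic component

# The omega = 0 Fourier component is the time average over full periods
j_dc = np.mean(j_t)
print(f"recovered J_DC = {j_dc:.4f} (true value {j_dc_true})")

# Equivalently, read off the zero-frequency bin of the FFT
spectrum = np.fft.rfft(j_t) / len(j_t)
print(f"FFT zero-frequency bin = {spectrum[0].real:.4f}")
```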
The different components of the second-order macroscopic conductivity tensor σ^{abc} are intrinsically connected by the global crystal symmetry of the system (47). Fig. 2 A and B show the spectra of σ^{xxx} and σ^{xyy}, with Cartesian coordinates as defined in Fig. 1A; they obey the relation σ^{xxx} = −σ^{xyy}, governed by the C3v symmetry of the rotationally aligned WSe2/WS2 moiré supercell (47) (i.e., a threefold axis with mirror symmetry planes along the armchair directions). In general, σ^{abc} as a tensor in 2D has eight components; however, for our system with C3v symmetry, σ^{xxx} = −σ^{xyy} = −σ^{yyx} = −σ^{yxy}, with all other components equal to zero (47). With the inclusion of electron-hole interaction effects (red line) in the calculation, one can explicitly identify three prominent peaks (I, II, and III) in the computed shift current conductivity spectra (Fig. 2 A and B), corresponding to the three moiré exciton peaks in the absorbance (Fig. 1C). On the other hand, the results computed within the independent particle approximation (black line) are dramatically different and fail to capture these dominant moiré exciton features even qualitatively. We find that the real-space distribution of the microscopic shift current density J(r) exhibits significantly more complex character and richer physics, hidden from the macroscopic current density analysis. The distributions are shown in Fig. 2 C-F for linearly polarized light at frequencies corresponding to transitions to excitons I and III. First, as opposed to the vanishing y component of the cell-averaged macroscopic current density J̄ (i.e., σ^{yxx} = σ^{yyy} = 0), the current density J(r) has a remarkably large y component at various positions r within the moiré cell, yielding a large local current density along the y direction. Second, when the light polarization is changed from linear along the x direction to linear along the y direction (or vice versa), although the macroscopic photocurrent conductivity only undergoes a trivial sign flip (σ^{xxx} = −σ^{xyy}), the local current density J(r) flows along prominently different pathways (compare Fig. 2C with Fig. 2E, or Fig. 2D with Fig. 2F). Third, with light polarized along the y direction at an intensity of 1.0 × 10^10 W/m^2 exciting exciton peaks I (1.67 eV) and III (1.87 eV), the cell-averaged macroscopic current density J̄ is 0.09 A/m and 0.007 A/m, respectively, an order-of-magnitude difference in the current generated by exciting the two excitons. However, their maximum local current densities turn out to be very close: the computed maximum of J(r) for exciton I (Fig. 2E) and exciton III (Fig. 2F) is 0.16 A/m and 0.13 A/m, respectively. This reveals that the local shift current response for exciton III is as strong as that for exciton I. Exciton III's much smaller cell-averaged value (J̄) originates from the formation of current vortex structures, as seen in Fig. 2 D and F. Our ab initio TD-aGW results predict that, with the light frequency in resonance with moiré exciton peak III, two current vortices are generated in each moiré supercell (in the WSe2 layer). For in-plane linear polarizations, the two vortices have the same magnitude but opposite circulation chirality, residing on the two sides (upper and lower in Fig. 2 D and F) separated by the mirror line (y = 0). With the light polarization direction switching from along the x direction (Fig. 2D) to along the y direction (Fig. 2F),
the chirality of the current vortices flips: the vortex located on the upper (lower) side changes from clockwise (counterclockwise) to counterclockwise (clockwise), revealing a high optical tunability of the shift current flow. Additionally, changing the polarization direction of the linearly polarized light also significantly modulates the location and shape of the current vortices. More details of the light-polarization-direction dependence of the shift current vortices are given in SI Appendix. For in-plane circular polarizations, the two vortices in the moiré supercell are still of opposite chirality but now have very different amplitudes (Fig. 3 C and D), leading to a net chiral current within a moiré cell that changes sign upon switching from left to right circular polarization (or vice versa). This phenomenon of having vortex pairs of opposite chirality can be understood generally from differential topology, since the microscopic current density generated by uniform light illumination in a 2D periodic crystal can be mapped to a tangential vector field on a compact 2D surface. The Poincaré-Hopf theorem stipulates that the sum of the Poincaré indices at critical points (the winding numbers of the vortices in our case) is equal to zero for a 2D torus; a numerical check of this constraint is sketched below. A moderate-size moiré superlattice that supports a charge-transfer exciton with appreciable oscillator strength would be optimal for observing shift current vortices, because very small superlattices might not host in-plane charge-transfer excitons (34), while excessively large ones may lead to negligible oscillator strength for such excitons. The formation of a periodic array of photocurrent vortex pairs induced by optical transitions results in a distinctive moiré-scale vortex crystal with fascinating magnetic properties. As discussed above, for in-plane linearly polarized light with a frequency exciting exciton III, the two vortices reside on the two sides of the moiré cell (upper and lower in Fig. 2 D and F) separated by the y = 0 mirror line. They produce a 2D array of nanoscale antiparallel magnetic fields in the moiré superlattice, as shown in Fig. 3 A and B. Since the two current vortices have the same shape and magnitude but opposite circulation chirality, they yield a vanishing net magnetic flux and well-defined antiferromagnetism on the moiré supercell scale. On the other hand, upon illumination by circularly polarized light (Fig. 3 C and D), two distinct sets of current vortices are generated in the moiré superlattice, residing in the AA and BSe/W stacking regions, respectively. These two vortices possess very different magnitude, shape, size, and circulation chirality, giving rise to a finite net magnetic flux through each moiré supercell and thus ferrimagnetism. By combining the WSe2/WS2 moiré system with a similarly sized honeycomb superlattice (such as twisted bilayer graphene), the antiferromagnetic/ferrimagnetic patterns from shift current vortices could provide the staggered magnetic fields needed, through proximity effects, to generate optically controlled topological phases based on concepts such as those of the Haldane model (48). The intrinsic properties (field intensity, shape, and orientation) of these moiré-scale magnetic nanopatterns can be efficiently controlled by the frequency, strength, and polarization of the incident light.
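The Poincaré-Hopf constraint can be checked numerically for any discretized current pattern by evaluating the winding number of the field around each vortex core. The sketch below does this for a synthetic vortex pair of opposite chirality; the test field is an idealized stand-in, not the computed moiré shift current.

```python
# Sketch: winding numbers (Poincare indices) of a 2D vector field around
# closed loops; by the Poincare-Hopf theorem they must sum to zero on a
# torus. The test field is a synthetic vortex pair of opposite chirality.
import numpy as np

def vortex_pair(x, y):
    """Counterclockwise vortex at (-0.5, 0), clockwise vortex at (+0.5, 0)."""
    r1 = (x + 0.5) ** 2 + y ** 2 + 1e-12
    r2 = (x - 0.5) ** 2 + y ** 2 + 1e-12
    vx = -y / r1 + y / r2
    vy = (x + 0.5) / r1 - (x - 0.5) / r2
    return vx, vy

def winding_number(field, cx, cy, radius=0.3, n=720):
    """Total angle swept by the field along a closed circle, in units of 2*pi."""
    theta = np.linspace(0.0, 2 * np.pi, n + 1)  # closed loop: last point = first
    vx, vy = field(cx + radius * np.cos(theta), cy + radius * np.sin(theta))
    ang = np.unwrap(np.arctan2(vy, vx))         # continuous field angle along the loop
    return (ang[-1] - ang[0]) / (2 * np.pi)

indices = [winding_number(vortex_pair, cx, cy) for cx, cy in [(-0.5, 0.0), (0.5, 0.0)]]
print("winding numbers:", [f"{w:+.2f}" for w in indices])
print("sum (vanishes on a torus by Poincare-Hopf):", f"{sum(indices):+.2f}")
```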
For linearly polarized light, along the real-space path AA-M-AA shown in Fig. 3E, the positive and negative peaks of the induced Bz appear alternately, separated by a distance of about 4 nm, and the sign of the Bz peaks can be flipped by changing the light polarization from the x to the y direction (or vice versa). For circularly polarized light, left-handed circular polarization (LCP) and right-handed circular polarization (RCP) are related by time-reversal symmetry. As shown in Fig. 3 C, D, and F, upon changing the incident light from LCP to RCP (or vice versa), the circulation chirality of the current vortices and the sign of the induced magnetic field flip, but the magnetic field magnitude remains the same. It is worth noting that, in contrast to common orbital magnetism at the atomic scale (with a spatial extent of a few Å), the photocurrent vortices and induced magnetic nanopatterns here are of moiré scale (several nm), making them much more amenable to detection and manipulation in real experiments and applications. Spin-polarized scanning tunneling microscopy (SP-STM) is a powerful experimental tool that can spatially resolve complex magnetic structures with angstrom-level resolution (49-51), offering great potential to observe the real-space distributions of the predicted nanoscale magnetic patterns. Moreover, for the ferrimagnetic patterns (Fig. 3 C and D), which exhibit a finite net magnetic field, other magnetic detection techniques, such as a superconducting quantum interference device (SQUID) (26), could also be employed to measure the averaged field over an extensive surface area of the moiré heterobilayer. With a moderate incident light intensity of 1.0 × 10^10 W/m^2, the maximum calculated 2D photocurrent density J(r) and induced local magnetic field Bz(r) (evaluated on top of the Se atomic plane) for the WSe2/WS2 heterobilayer can reach 0.48 (0.13) A/m and 360 (70) nT for circularly (linearly) polarized light. As shown in Fig. 3 G and H, the strengths of J(r) and Bz(r) can be tuned by the light intensity (defined as cε0|E|^2/2, where c and ε0 are the speed of light and the permittivity of vacuum, respectively). The linear relationship of J(r) and Bz(r) with light intensity originates from the second-order nature (Eq. 2) of the DC photocurrent response: Bz(r) ∝ J(r) ∝ |E|^2. As the light intensity increases to 10^11 W/m^2 in Fig. 3 G and H, the local current density rises to several A/m and the induced magnetic field reaches the order of μT.
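The conversion between intensity and peak field, together with the quoted linear scaling of the induced field with intensity, can be made explicit in a few lines. The extrapolation below assumes that the second-order scaling Bz ∝ I continues to hold up to the highest intensities, which the text asserts only for moderate intensities.

```python
# Sketch: peak electric field from intensity, I = c*eps0*|E|^2/2, and a
# linear-in-intensity extrapolation of the induced field using the
# reference point quoted in the text (360 nT at 1e10 W/m^2, circular
# polarization). The extrapolation assumes B ~ |E|^2 ~ I keeps holding.
import numpy as np

c = 2.998e8        # speed of light, m/s
eps0 = 8.854e-12   # vacuum permittivity, F/m

def e_field(intensity):
    """Peak field amplitude E0 (V/m) from intensity (W/m^2)."""
    return np.sqrt(2 * intensity / (c * eps0))

b_ref, i_ref = 360e-9, 1e10   # reference values taken from the text
for i in (1e10, 1e11, 1e13):
    b = b_ref * i / i_ref     # linear extrapolation in intensity
    print(f"I = {i:.0e} W/m^2 -> E0 = {e_field(i):.2e} V/m, Bz ~ {b * 1e6:.1f} uT")
```

At 10^13 W/m^2 this gives a few hundred μT, consistent with the comparison to the Earth's field made in the following paragraph.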
In experiments (52), laboratory laser intensities can typically reach 10^13 W/m^2, which is expected to give rise to an induced magnetic field as large as hundreds of μT or even higher, comparable to the Earth's magnetic field strength (about 50 μT). Our results thus provide a promising all-optical control route to generate and manipulate DC microscopic shift current flows in TMD moiré superlattices. For the orientation-aligned (or small-misalignment-angle) WSe2/WS2 moiré superlattice, setting the frequency of the incident light to excite the moiré charge-transfer excitons (peak III) produces a 2D shift current vortex crystal and an induced magnetic field. We demonstrate that the location, shape, circulation chirality, and magnitude of the shift current vortices, and hence those of the induced magnetic fields, can be effectively tuned by the frequency, polarization, and strength of the incident light. This is expected to be a general phenomenon for moiré superlattices with nanoscale excitons. Our results provide further understanding of nonlinear light-matter interaction in moiré quantum matter and reveal rich moiré exciton physics of shift currents.

Moiré Structural Relaxation and Ground-State Electronic Structure Calculations. The structural relaxation of the rotationally aligned WSe2/WS2 moiré superlattice is performed using force fields with the LAMMPS package (53) with the help of the TWISTER code (54). The moiré supercell contains 25 × 25 WSe2 and 26 × 26 WS2 unit cells with pristine cell lattice constants of 3.32 and 3.19 Å, respectively, yielding a moiré period of about 8.3 nm. To simulate the experimental setups (32), we encapsulate the moiré bilayer with a layer of hexagonal boron nitride (hBN) for the structural relaxation. The Stillinger-Weber potential (55) and Kolmogorov-Crespi potential (56, 57) are used to describe the intralayer and interlayer atomic interactions of the TMD materials, respectively. In our calculations, at the reconstructed geometry, the force tolerance on each atom is 10^-4 eV/Å. The ground-state electronic properties (mean-field orbital energies, wavefunctions, etc.) are obtained by density functional theory (DFT) with the Quantum Espresso package (58). We use Optimized Norm-Conserving Vanderbilt (ONCV) pseudopotentials (59, 60) and an exchange-correlation functional in the generalized gradient approximation (GGA) (61) in the DFT calculations. A plane-wave basis with an energy cutoff of 40 Ry is used to expand the orbital wavefunctions.

Excited-States and Linear Optical Response Calculations. The calculations of quasiparticle states based on the GW method (44) and of excitons and linear optical properties based on the GW plus Bethe-Salpeter equation (GW-BSE) approach (45, 46) are performed using the BerkeleyGW package (62). The BSE is an eigenvalue equation for two-particle exciton states:

(E_{c,k} − E_{v,k}) A^S_{vc,k} + Σ_{v′c′,k′} ⟨vc,k|K|v′c′,k′⟩ A^S_{v′c′,k′} = Ω_S A^S_{vc,k},   [3]

where E_{c,k} and E_{v,k} are quasiparticle energies of conduction and valence states, K is the electron-hole interaction kernel, Ω_S is the exciton eigenvalue, and A^S_{vc,k} is the exciton eigenvector (in the basis of k-space interband transitions) with exciton index S. After solving the BSE, the real-space moiré exciton wavefunctions are expressed as

Φ_S(r_e, r_h) = Σ_{vc,k} A^S_{vc,k} ψ_{c,k}(r_e) ψ*_{v,k}(r_h),   [4]

where ψ_{c,k} (ψ_{v,k}) are Bloch wavefunctions of the conduction (valence) bands, and r_e (r_h) are the electron (hole) coordinates.
In Fig. 1D, the real-space distributions of the electron densities ρ_e(r_e) = ∫ |Φ_S(r_e, r_h)|^2 dr_h and hole densities ρ_h(r_h) = ∫ |Φ_S(r_e, r_h)|^2 dr_e forming the excitons are shown, where the integrals are taken over the whole crystal. ρ_e(r_e) (ρ_h(r_h)) corresponds to the density of the excited electron (hole) given that the hole (electron) is anywhere within the crystal. The low-energy optical absorbance of the WSe2/WS2 bilayer with in-plane light polarization is dominated by the WSe2 intralayer excitons, because the interlayer excitations have negligible oscillator strength and the WS2 intralayer excitations have high energies that are well separated from the WSe2 exciton resonances (32-34). Therefore, the low-energy photoexcitation properties of the WSe2/WS2 moiré bilayers can be well approximated by those of the moiré-reconstructed WSe2 monolayer. The accuracy of this approximation has been verified in previous studies (32, 33).

Photoexcited Shift Current Calculations. The photoexcited shift current coefficients and the real-space distributions of the microscopic shift current density are computed by the ab initio TD-aGW approach, with real-time propagation of the density matrix in the presence of the external light field (22). In this theoretical framework, the time-dependent interacting density matrix evolves according to

iħ ∂ρ_{nm,k}(t)/∂t = [H^{aGW}_k(t), ρ_k(t)]_{nm},   [5]

where n and m are band indices, and ρ_{nm,k}(t) is the interacting density matrix in the Bloch-state basis, the key quantity for computing light-induced phenomena. H^{aGW}_{nm,k}(t) is the TD-aGW Hamiltonian, defined as H^{aGW}_{nm,k}(t) = h_{nm,k} + U^{ext}_{nm,k}(t) + ΔV^{ee}_{nm,k}(t), where h_{nm,k} contains the equilibrium quasiparticle energies, which include all interactions at equilibrium (before application of the external optical field) at the GW level. The external field part U^{ext}_{nm,k}(t) denotes the light-matter interaction and equals −eE(t)·d_{nm,k}, where E(t) is the optical electric field and d_{nm,k} is the dipole matrix (i.e., the matrix elements of the electron position operator r), computed using Berry connections, with particular treatment of the intraband parts (n = m) by a local smooth-gauge method (22). In our calculations, a dephasing factor of 10 meV is used (added to Eq. 5) to simulate typical experimental spectral broadening (32). Importantly, excitonic (electron-hole interaction) effects within the TD-aGW approach are accurately captured by the photon-field-driven time variations of the electron-electron interaction term ΔV^{ee}_{nm,k}(t) = ΔV^H_{nm,k}(t) + ΔΣ^{COHSEX}_{nm,k}(t), where the first term is the change in the Hartree potential and the second term is the change in the electron self-energy, taken to be the nonlocal Coulomb-hole plus screened-exchange (COHSEX) GW self-energy in the static limit. The accuracy of the electron-hole interactions in TD-aGW is at the standard GW-BSE level, as validated by the identical linear absorbances computed with the two methods shown in Fig. 1C. Time-evolution simulations excluding the electron-electron interaction term ΔV^{ee}_{nm,k}(t) yield results corresponding to the time-dependent independent-particle (TD-IP) approximation. More information on the formalism and computational details of the TD-aGW method can be found in ref. 22 and SI Appendix.
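The structure of the propagation in Eq. 5, commutator dynamics plus a dephasing term on the coherences, can be illustrated with a driven two-level toy model. The sketch below omits the Hartree and COHSEX self-energy updates entirely, so it corresponds to a TD-IP-like treatment of a single transition; the coupling strength and time step are illustrative assumptions.

```python
# Toy illustration of the density-matrix propagation used in TD-aGW:
# i*hbar d(rho)/dt = [H(t), rho], with phenomenological dephasing of the
# off-diagonal elements. Two levels only; no self-energy updates.
import numpy as np

HBAR = 0.6582119   # eV*fs
E_GAP = 1.87       # level spacing, eV (exciton peak III energy, for illustration)
COUPLING = 0.005   # peak light-matter coupling -e*E0*d, eV (assumed)
GAMMA = 0.010 / HBAR  # dephasing rate equivalent to the 10 meV broadening

def rho_dot(rho, t):
    """Right-hand side of the equation of motion at time t."""
    drive = -COUPLING * np.sin(E_GAP / HBAR * t)   # resonant monochromatic field
    h = np.array([[0.0, drive], [drive, E_GAP]], dtype=complex)
    d = (-1j / HBAR) * (h @ rho - rho @ h)
    d[0, 1] -= GAMMA * rho[0, 1]                   # dephase coherences only
    d[1, 0] -= GAMMA * rho[1, 0]
    return d

rho = np.diag([1.0, 0.0]).astype(complex)  # start in the ground state
dt, steps = 0.005, 40000                    # fs; 200 fs of propagation
for i in range(steps):
    k1 = rho_dot(rho, i * dt)               # midpoint (RK2) integrator
    rho = rho + dt * rho_dot(rho + 0.5 * dt * k1, (i + 0.5) * dt)
print(f"excited-state population after {steps * dt:.0f} fs: {rho[1, 1].real:.4e}")
```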
Calculating the electron-hole interaction kernel with the ab initio GW-BSE and TD-aGW methods by brute force would be computationally intractable for large-area moiré systems, since there are thousands of atoms in the moiré cell. To overcome this challenge, we use the pristine unit-cell matrix projection (PUMP) method (32, 33). In the PUMP method, we express the moiré electronic band (quasiparticle) states as linear combinations of pristine unit-cell states and use the resulting expansion coefficients to rewrite the moiré electron-hole kernel matrix elements as coherent linear combinations of pristine unit-cell kernel matrix elements. The resulting ab initio BSE and TD-aGW calculations are performed using 12 moiré valence bands, 12 moiré conduction bands, and a 6 × 6 × 1 k-point sampling of the moiré BZ.

In calculating the shift current response at a particular frequency ω0, a monochromatic light field E(t) = E0 sin(ω0 t) is employed, and the photocurrent density is calculated from the expectation value of the velocity operator using the resulting time-dependent density matrix. The microscopic current density J(r, t) is obtained by evaluating Eq. 1 above once ρ_{nm,k}(t) has been obtained. The macroscopic current density J(t) (the cell average of J(r, t)) is given by (refs. 22 and 63)

J(t) = −(e/(Ω N_k)) Σ_k tr[ρ_k(t) v̂_k].   [6]

The macroscopic shift current density is obtained by taking its DC component, J_DC = J_{ω=0}, after performing Fourier analysis. The macroscopic shift current conductivity tensor σ^{abc} at ω0 is then computed from Eq. 2 above once J_DC is obtained. The photocurrent-induced magnetic field is obtained from the 3D microscopic current density through the Biot-Savart law:

B(r) = (μ0/4π) ∫ J(r′) × (r − r′)/|r − r′|^3 d^3r′.

Fig. 1. Moiré excitons of the WSe2/WS2 heterobilayer. (A) Atomic structure of the rotationally aligned WSe2/WS2 moiré superlattice. The black outline denotes the hexagonal moiré supercell with a lattice constant of 8.3 nm. The positions of the three high-symmetry local stackings (AA, BSe/W, BW/S) are labeled, and the X and Y axes are aligned to the armchair (ac) and zigzag (zz) directions, respectively. AA: W (Se) atoms of WSe2 are on top of the W (S) atoms of WS2; BSe/W: W (Se) atoms of WSe2 are on the hollow sites (on top of the W atoms) of WS2; BW/S: Se (W) atoms of WSe2 are on the hollow sites (on top of the S atoms) of WS2. (B) Real-space strain distribution of the WSe2 layer, which originates from the moiré structural reconstruction. (C) Computed absorbance of the WSe2/WS2 moiré superlattice: with electron-hole interactions (red solid line and blue dots from the TD-aGW and GW-BSE approaches, respectively) and without electron-hole interactions (IP, black solid line). The positions of the three moiré exciton resonance peaks (I, II, and III) are marked. A typical experimental spectral broadening (32) of 10 meV is used. (D) Real-space distributions of the electron densities ρ_e(r_e) and hole densities ρ_h(r_h) of the exciton states forming peaks I and III, respectively, defined as ρ_e(r_e) = ∫ |Φ_S(r_e, r_h)|^2 dr_h and ρ_h(r_h) = ∫ |Φ_S(r_e, r_h)|^2 dr_e.

Fig. 2. Shift currents of the WSe2/WS2 moiré superlattice. (A and B) Spectra of the macroscopic shift current conductivity tensor components σ^{xxx} and σ^{xyy} that give rise to the x component of the macroscopic shift current density (J̄_x) with a linearly polarized light field (E) along the x and y directions, respectively, as defined in Eq. 2.
Electron-hole interactions are included in the TD-aGW (red line) and excluded in the TD-IP (black line) calculations. (C-F) Microscopic shift current density J(r) corresponding to incident light at frequencies resonant with exciton peaks I and III, with linear polarization along the x direction (C and D) and the y direction (E and F). The direction and amplitude of J(r) are represented by the orientation and length of the arrows.

Fig. 3. Moiré shift current vortices and induced magnetic field from exciton peak III. (A-D) Real-space plot of the microscopic shift current density J(r) (black arrows) and the current-induced out-of-plane magnetic field Bz(r) evaluated on top of the Se atomic plane (color coding), produced by a light field E at the frequency of exciton peak III (1.87 eV), with linear light polarization along the x direction (A) and y direction (B), and circular light polarization of left-handed helicity (C) and right-handed helicity (D). The upper (lower) color-scale bar is for the upper (lower) two panels. (E) Induced magnetic field along the real-space path (along y) AA-M-AA (see marks in A) for different linear polarizations. (F) Induced magnetic field along the real-space path AA-BSe/W-BW/S-AA (see marks in C) for different circular polarizations. In A-F, results are for a light intensity of 1.0 × 10^10 W/m^2. (G and H) Maximum microscopic photocurrent density (G) and induced magnetic field (H) as a function of incident light intensity for different linear and circular polarizations.
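As a numerical counterpart to the Biot-Savart law given in the Methods above, the sketch below evaluates Bz at a height z above a 2D sheet current discretized on a grid. The input current pattern is an idealized Gaussian-envelope vortex with a peak sheet current of 0.1 A/m over a moiré-scale cell, not the TD-aGW shift current itself, but it yields fields on the nanotesla scale comparable to the values quoted in the text.

```python
# Sketch: out-of-plane Bz a height z above a 2D sheet current, by direct
# Biot-Savart summation over grid cells. Input pattern: idealized vortex.
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

n, span = 60, 8.3e-9                  # grid points and moire period (m)
xs = np.linspace(-span / 2, span / 2, n)
X, Y = np.meshgrid(xs, xs)
da = (xs[1] - xs[0]) ** 2             # area of one grid cell

R2 = X ** 2 + Y ** 2
env = 0.1 * np.exp(-R2 / (2e-9) ** 2)          # ~0.1 A/m peak sheet current
R = np.sqrt(R2) + 1e-20
Jx, Jy = -Y * env / R, X * env / R             # tangential (vortex) flow

def bz(x0, y0, z0):
    """Bz at (x0, y0, z0): (mu0/4pi) * sum over cells of [J x r]_z / |r|^3."""
    rx, ry = x0 - X, y0 - Y
    r3 = (rx ** 2 + ry ** 2 + z0 ** 2) ** 1.5
    return MU0 / (4 * np.pi) * np.sum((Jx * ry - Jy * rx) / r3) * da

print(f"Bz at vortex center, 0.3 nm above sheet: {bz(0, 0, 0.3e-9) * 1e9:.1f} nT")
```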
7,036.4
2023-12-12T00:00:00.000
[ "Physics", "Materials Science" ]
Design and Manufacture of a Multiband Rectangular Spiral-Shaped Microstrip Antenna Using EM-Driven and Machine Learning

Abstract—This paper presents a multiband rectangular microstrip antenna using spiral-shaped configurations. The antenna has been designed by combining two configurations, microstrip and spiral, with careful selection of the substrate material, the dimensions of the rectangular microstrip, the spacing between spiral turns, and the number of spiral turns. Efficiency and accuracy have been further improved using machine learning algorithms. Machine learning has been studied to model the proposed antenna based on the performance requirements, which requires sufficient training data to improve the accuracy. Three different machine learning models are applied to improve the accuracy and generalization performance and are compared against simulation and measurement results. Simulation, measurement, and machine learning results confirm that the proposed antenna is electrically small and operates over a wide range of high-frequency bands between 1 GHz and 4 GHz. The machine learning models show the best prediction ability, with mean square errors (MSE) of 0.03 and 0.05. The antenna structure and size are compatible with and suitable for several multiband wireless mobile systems operating in the L-band and S-band. The results, such as directivity, half-power beamwidth, voltage standing wave ratio (VSWR), and S-parameter curves, are analyzed and compared with the numerical formulations for both spiral and microstrip antennas.

I. INTRODUCTION

In recent years, the need for antennas has increased widely. Antennas can certainly be considered a main driving force behind the progress achieved in modern communication and wireless technologies. Interest in their development, production, and optimization has therefore grown through various simulation techniques. The methodology discussed in this paper can be divided into three parts: designing the proposed antenna by combining two types of antennas, fabrication, and modeling by artificial intelligence [1]. Because of the attractive similarity between the properties of microstrip patch and spiral antennas, they have been widely applied in wireless communications, biological medicine, radar, and electronic countermeasures [2], [3]. They are more widely used than other antennas and have configurations attractive to researchers and users due to their light weight, low profile, low cost, ease of implementation, and ease of combination to obtain new configurations and performance [4]. Therefore, these antennas can be easily manufactured in large quantities. Many shapes and types of antennas based on different design processes and reconfiguration techniques have been researched. It has been noticed that microstrip and spiral antennas share the same simple configuration condition, consisting of a very thin radiating element (t ≪ λ0, where λ0 is the free-space wavelength and t is the thickness) on one side of a substrate material (usually with 2.2 ≤ εr ≤ 10, where εr is the relative permittivity), while the ground plane is on the other side [5]. Selecting a suitable substrate material and configuration offers possibilities for reducing the size while keeping the performance of an antenna; a first-pass sizing of the rectangular patch from the target frequency and substrate is sketched below.
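The sketch below implements the textbook transmission-line-model formulas for a rectangular patch (patch width from the target frequency, effective permittivity, and the Hammerstad fringing-length correction). It is not the paper's exact design procedure, and the substrate values in the example are illustrative rather than those of the proposed antenna.

```python
# Sketch: first-pass rectangular microstrip patch sizing from the standard
# transmission-line model. Inputs: target resonance f0, relative
# permittivity eps_r, and substrate height h (all SI units).
import math

def patch_dimensions(f0, eps_r, h):
    """Return (W, L) in meters for a rectangular patch."""
    c = 2.998e8
    w = c / (2 * f0) * math.sqrt(2 / (eps_r + 1))            # patch width
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h / w) ** -0.5
    # fringing-field length extension (Hammerstad correction)
    dl = 0.412 * h * ((eps_eff + 0.3) * (w / h + 0.264)) / \
         ((eps_eff - 0.258) * (w / h + 0.8))
    l = c / (2 * f0 * math.sqrt(eps_eff)) - 2 * dl           # physical length
    return w, l

# Example: 2.4 GHz on an FR-4-like substrate (eps_r = 4.4, h = 1.6 mm);
# these substrate values are illustrative, not the ones used in the paper.
w, l = patch_dimensions(2.4e9, 4.4, 1.6e-3)
print(f"W = {w * 1000:.1f} mm, L = {l * 1000:.1f} mm")
```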
The radiating elements are usually photo-etched on the dielectric substrate, while the feed lines are laid on or passed through the dielectric. Both the permittivity and the thickness of the substrate material influence the performance. Typically, the radiating element of a microstrip antenna may be circular, square, rectangular, thin patch, elliptical, triangular, or any other configuration [4], [5]. Spiral antennas may have single, double, or more windings directed right or left, with different configurations such as logarithmic, planar circular, rectangular, self-complementary, and Archimedean spirals [6]. In addition, both of the proposed antenna types may be electrically small and may serve as elements of an array [7]-[9]. Generally, the number of antennas in an array can be as small as two or as large as several hundred. Spiral antennas are referred to as frequency-independent antennas. Antenna polarization is an important consideration when researching and designing antennas [10]. Polarization is one of the fundamental characteristics of an antenna and is in demand for many applications. Microstrip and spiral antennas are circularly polarized [5]. Circular polarization can be either right-handed circular polarization (RHCP) or left-handed circular polarization (LHCP), depending on the direction of rotation of the field propagation versus time [11]. The input impedance (Z_in) depends on the type (strip or coaxial elements), dimensions (thickness of the substrate and dielectric constant), and configuration (shape and physical elements) of the feeding system of the antenna. Microstrip and spiral antennas can generally be designed to transform Z_in to 50 and 188 Ohms, respectively [5]. As discussed above, the similarity of the characteristics of microstrip and spiral antennas makes them attractive and possible to combine and present as a novel configuration [4]. As is well known, the modeling of an antenna is usually performed in a 3D electromagnetic simulation environment, such as Applied Wave Research (AWR), High Frequency Structure Simulator (HFSS), Advanced Design System (ADS), Computer Simulation Technology (CST), etc., each of which uses a different computational method. Besides 3D electromagnetic simulation environments, machine learning (ML) has been identified as a competitive technique for antenna modeling and optimization [1], and it is widely utilized in several disciplines, such as engineering, education, science, meteorology, medicine, human resources recruiting, banking, and economics. Various ML algorithms have been introduced to model characteristics of antennas, such as gain, directivity, and S-parameters (S11, S22, etc.) [1]. They are mathematical processes that perform stochastic computation during the learning process and have the capability of learning and generalization to improve antenna modeling and synthesis efficiency. Therefore, using ML models for antenna modeling can improve the efficiency and accuracy of the antenna design. ML models are trained to learn the mapping between the input and output vectors to obtain predictions from a data set. The training process finds the parameters that best fit the presented data. Several ML methods have been applied for antenna modeling and synthesis, such as Gaussian processes [12], support vector machines [13], artificial neural networks (ANNs) [14], [15], and space mapping [16].
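To make the surrogate-modeling idea above concrete, the following is a minimal sketch in Python. It uses scikit-learn and synthetic toy data purely for illustration; the paper itself builds its models in Azure Machine Learning, and the geometry bounds and S11 response below are hypothetical:

```python
# Minimal surrogate-model sketch: map antenna design variables to S11.
# Hypothetical data and scikit-learn are used for illustration only;
# the paper builds its models with Azure Machine Learning instead.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Toy training set: rows are [l1, l2, l3, f] (geometry in cm, frequency in GHz);
# the target is a synthetic S11 response in dB.
X = rng.uniform([0.2, 0.4, 0.01, 1.0], [0.4, 0.6, 0.02, 4.0], size=(500, 4))
y = -10.0 - 5.0 * np.sin(2 * np.pi * X[:, 3]) + rng.normal(0.0, 0.5, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A forest of decision trees acts as the EM surrogate, echoing the
# decision-forest regression used later in the paper.
surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_train, y_train)

mse = mean_squared_error(y_test, surrogate.predict(X_test))
print(f"Surrogate MSE on held-out samples: {mse:.3f}")
```

Once trained, such a surrogate can be evaluated in microseconds per design point, which is what makes ML attractive as a complement to full-wave EM solvers.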
In [17], the performance of the least absolute shrinkage and selection operator (lasso), ANNs, and k-nearest neighbour (kNN) ML methods is investigated for designing and optimizing a double T-shaped monopole antenna. In [18], multistage collaborative ML (MS-CoML) methods, such as single-output Gaussian process regression (SOGPR) and symmetric multi-output Gaussian process regression (MOGPR), are introduced to collaboratively construct highly accurate multi-task surrogate models for different antennas: a single-band microstrip antenna, a substrate-integrated-waveguide (SIW) cavity-backed slot antenna (CBSA), and a tri-band patch antenna. Therefore, for antenna modeling, an ML algorithm can be classified as a constructor of a surrogate model/solution and as an optimization method [1]. In this study, three ML regressions have been used: decision tree regression (DTR), decision forest regression (DFR), and artificial neural networks (ANNs) [19]. These regression algorithms are popular in several applications. They save simulation time, train fast, and perform successfully, and they are considered a powerful tool against the overfitting problem. The main contribution of this work is developing a novel configuration to obtain an electrically small lateral-size antenna with multiple operating frequencies covering 1 GHz-4 GHz, good return loss, a directivity (D) of 7 dBi, a half-power beamwidth (HPBW) lower than 90°, and good propagation characteristics while maintaining a matching of VSWR ≤ 2. ML algorithms are applied to predict new models, as well as to calculate various metrics used for measuring a model's performance, such as the mean squared error (MSE), root mean squared error (RMSE), relative absolute error (RAE), and relative squared error (RSE). Finally, the accuracy and generalization of the predicted, simulated, and measured models are compared, and the differences and agreements between the obtained models are clarified.

II. ANTENNA DESIGN STAGES
To achieve the objective previously mentioned, several configuration steps have been experimentally realized and numerically evaluated [4]. The proposed antenna consists of three parts: the radiator (microstrip patch: rectangular and spiral), the substrate, and the ground plane. The radiator and the ground plane are separated by the substrate. Each part has a different thickness, while the length and width of the substrate and ground plane have equal lateral dimensions [14]. The radiator part consists of a very thin metallic strip placed on the substrate. The ground plane is chosen according to the proposed application; it is an electrically conductive plane and has a different thickness than the conductor. The ground plane acts as a reflector for the electromagnetic radiation. The radiation process is due to the fringing field between the periphery of the microstrip and the ground plane. The feeding system is coaxial, located at the center (0, 0, 0) of the rectangular patch and adjusted for optimal matching to a characteristic impedance of 50 Ohms. Typically, the matching process is performed by controlling the inner and outer radii of the feeding system and the length of the slot (cylinder). The characteristics of the feeding system and the substrate (except the width W_s and length L_s of the substrate) remain constant throughout the experimental (configuration) steps. Each experimental step leads to the next while considering the undesirable mutual coupling [20].

A. Microstrip Patch Antenna
As is shown in Fig.
1, the length and width of the basic rectangular microstrip patch antenna are 0.3 cm and 0.5 cm, respectively, without any spiral windings added. Note that the conductor/radiator and the feeding system are photo-etched on the thin dielectric substrate. For this configuration, both the transmission-line and cavity models are accurate and can be easily analysed [21], [22]. For an efficient conductor, the resonant frequency of the rectangular microstrip antenna shown in Fig. 1 is calculated with (1), f0 = (c / (2W)) √(2 / (εr + 1)), where W is the width of the rectangular patch and c is the speed of light in free space [23]; a numerical check of this expression follows below. Rectangular microstrips are preferred due to their easy calculation, reconfiguration, and modeling. The simulation of Fig. 1 yields f0 = 2.5287 GHz and 13.927 GHz, while evaluating (1) numerically yields f0 = 39.5 GHz. There is a huge difference between the simulated (1 GHz-4 GHz) and calculated results for f0, which directs us to other reconfigurations.

B. Second Configuration
This case is realized by adding a spiralled rectangular arm with a 360° counterclockwise (CCW) rotation, starting from the left edge at (-0.5, 0.25, 0.0158) and ending at (-1.0, -1.25, 0.0158) (all dimensions in cm), as shown in Fig. 2. The arm is fed by the basic rectangular microstrip shown in the first case. The width of the spiral rectangular arm is 0.25 cm, and the empty area is 0.5 cm. Note that the length L_s and the width W_s of the substrate are increased by around 271% and 360%, respectively.

C. Third Configuration
The third case is similar to the previous one, adding another spiral rectangular arm under the same physical conditions. The new configuration and its increased dimensions are shown in Fig. 3. Figure 4 shows the positive effect on the performance of the antenna when new spiral arms are added gradually, as new frequency bands (1 GHz-4 GHz) appeared. Consequently, a new spiral arm can easily be added and the antenna simulated again to demonstrate the potential for obtaining new frequency bands, as shown in Table I.

D. Fourth Configuration
This final simulation case shows the final configuration of the proposed antenna. In other words, the configuration is a rectangular spiral-shaped microstrip antenna (RSMA), as shown in Fig. 5. The spiral arm is reduplicated as a length of transmission line of characteristic impedance [22]. Each step of the reduplication increases the spiral arm by 0.5 cm (0.00095 λ0). The empty spaces between the spiral arms and the middle microstrip are unchanged, with a width of 0.5 cm. While the distance between angular arms increases constantly, the width and length of the RSMA also increase, which affected the results positively. The curvature between arms is π/2 radians counterclockwise. The proposed antenna can be defined by its height, width, and number of horizontal and vertical spiral turns. This determines the limits of the lower and upper frequency band range (see the final simulation and measured results in the figures of Section III). The geometrical parameters of the antenna are optimized using the commercial CST Microwave Studio based on the finite integration technique (FIT). The reduction of mutual coupling is considered the most important process in the design of both single elements and arrays of rectangular and circular spirals and microstrips [24]-[26].
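As a quick sanity check of the resonant-frequency expression given as (1) above (the displayed form is our reconstruction, chosen because it reproduces the paper's calculated 39.5 GHz value), evaluating it with the 0.3 cm patch dimension and εr = 2.2 gives:

```python
# Numerical check of the reconstructed formula (1):
# f0 = c / (2 * W) * sqrt(2 / (eps_r + 1))
from math import sqrt

c = 3.0e8        # speed of light in free space, m/s
W = 0.3e-2       # patch dimension, m (0.3 cm)
eps_r = 2.2      # Rogers RT 5880 permittivity

f0 = c / (2.0 * W) * sqrt(2.0 / (eps_r + 1.0))
print(f"f0 = {f0 / 1e9:.1f} GHz")  # prints: f0 = 39.5 GHz
```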
Mutual coupling of the transmitted voltage/power is compounded by possible mismatches at both the transmitter and receiver channels. Therefore, the empty distance between arms, the widths, and the thickness of the configuration are carefully experimented with and chosen. The wavelength is larger than the total length of the spiral, so the current magnitude is distributed nearly uniformly over the surface of the proposed conductors. The current path starts distributing from the middle to the spiral conductor in a clockwise direction. The distribution of the current through the conductors produces the radiation [27]. There are 13 straight segments/sections (the total of the conductor patches of the spiral) that exhibit negative and positive mutual-inductance paths (see Fig. 6). Figure 6 shows that opposite segments carrying current in inverse directions have negative mutual inductance, while segments carrying current in the same direction have positive mutual inductance [3], [28], [29]. All segments are assumed identical in current magnitude and phase. Therefore, for the 13 segments and 3.25 spiral turns, the total inductance can be written as the sum of all self-inductances plus the positive and minus the negative mutual inductances, L_T = L_0 + M_+ - M_-, where m = 13 is the number of segments, n = 3 is the number of whole spiral turns, and M is the total of the positive and negative mutual inductances. The negative and positive mutual inductances M_- and M_+ are given by analogous expressions; the contribution of the negative terms to the total inductance is much smaller because of the considerably larger distance between the oppositely directed segments. Figure 7 implies that the antenna radiates best at 1.6 GHz, 2.04 GHz, 2.4 GHz, and 2.9 GHz, with bandwidths of 0.056 GHz, 0.105 GHz, and 0.145 GHz, respectively, at a return loss of -10 dB. The VSWR curve in these frequency bands is presented in Fig. 8. Two circles can be drawn on the structure, as shown in Fig. 5: the small circle must touch the upper edges of the middle microstrip, with a circumference of 1.33 cm, and the large circle must touch the inner edges of the outer arm of the spiral, with a circumference of 11.99 cm. For low and high frequencies, the spiral formulas are given by (5) and (6) (see the reconstruction after this section). According to (5) and (6), and comparing the results shown in Table II to the previous results shown in Table I, the improved performance of the proposed antenna from adding the final spiral arm is clearly realized. The results presented below characterize the far-field radiation pattern in the positive z-direction of the final configuration, in addition to the directivity over the phi (φ) and theta (θ) angles in linear scaling mode (Fig. 9 and Fig. 10). The results are shown for successive simulation cases. The final simulation case is considered the optimum, demonstrating the aim of the designed antenna; it can be summarized as small size, narrow beam, and low frequency. In the third simulation case, some good and acceptable results were obtained as well. However, entirely acceptable results that maintain the objective of the proposed antenna were obtained in the final case.

III. MEASUREMENT
Figure 11 shows the front section of the fabricated model; the metal part is copper (PEC), spirally curled on the substrate. The back section is fully covered by copper (PEC) and is called the ground; it is drilled at the midpoint with an outer-cylinder radius of 0.2 cm, while the inner cylinder has a radius of about 0.0585 cm to allow access for the feed. The middle section, which has a dielectric constant of 2.2, is the substrate, made of Rogers RT 5880.
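The band-edge formulas referenced as (5) and (6) above did not survive in the text. A plausible reconstruction, based on the standard rule that a spiral's band edges occur where its inner and outer circumferences equal one wavelength, and using the two circumferences quoted above, is the following (this is an assumption on our part, not a verified copy of the paper's equations):

```latex
f_{\mathrm{low}} \approx \frac{c}{C_{\mathrm{large}}}
  = \frac{3\times10^{10}\ \mathrm{cm/s}}{11.99\ \mathrm{cm}} \approx 2.5\ \mathrm{GHz},
\qquad
f_{\mathrm{high}} \approx \frac{c}{C_{\mathrm{small}}}
  = \frac{3\times10^{10}\ \mathrm{cm/s}}{1.33\ \mathrm{cm}} \approx 22.6\ \mathrm{GHz}.
```

The low edge is consistent with the reported 1 GHz-4 GHz operating range, while the high estimate lies far above the measured bands, in line with the later remark that the theoretical results disagree with simulation in most cases.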
Figure 12 clearly shows the disagreement between the simulation and measurement models. Hence, ML regressions have been used to obtain an equivalent/surrogate model, using the simulation model as an input included in the ML training data (see the equations in Section V). Figure 13 shows the frequency chart describing the change of resonant frequencies while moving from one simulation case to the next, in addition to the measurement case at the end.

IV. MACHINE LEARNING REGRESSION ALGORITHMS
The goal of using ML algorithms in antenna modeling is to predict new models' characteristics using training data generated by the original computational EM model [1], [30]. In other words, ML is used to evaluate a model's accuracy and generalization [1], [18]. This is realized by learning the interconnection between the input x and the corresponding output y parameters by fitting a model y = f(x) from the data, where y ∈ Y^u is the output of the antenna model, u is the output variable, and x ∈ X^c is an input vector collecting the c modeling variables. The parameters of the models are typically computed to minimize the prediction error, i.e., the difference between the original (measured) value and the predicted value. The mean squared error (MSE) function is used to measure the accuracy, with training driving the MSE toward its minimum: MSE_z = (1/n) Σ_{i=1..n} (y_i - ŷ_i)^2, (9) where z is the numerical index of the regression method, the summation aggregates the sample distances through the data, the squared difference determines each sample distance, and the 1/n factor performs the normalization. According to Table III, the performance metrics used for the ML regressions are the root mean squared error (RMSE), which averages the squares of the errors and then applies the square root to the result; the relative absolute error (RAE), which expresses the result error as a percentage; and the relative squared error (RSE), which normalizes the total squared error by dividing it by the total error of the predicted values. Three ML regression algorithms are used for antenna modeling and comparison of results: decision tree regression (DTR), decision forest regression (DFR), and artificial neural networks (ANNs) [19]. In general, a regression algorithm learns the values of the parameters of a function for a particular model of the data. It might predict the level of an antenna performance figure by using a fitted function, or predict the probability of a performance drop based on test data values [31]. Regression algorithms have the advantage of combining input parameters from various characteristics by weighting the contribution of each characteristic of the data in the regression function. The models of the antenna are built and trained using Azure Machine Learning, which is based on a global infrastructure made up of physical and connective network elements. The physical element consists of more than 160 Azure datacenters interconnected by one of the largest networks in the world. These datacenters provide high availability, minimal latency, scalability, and the latest advancements in cloud infrastructure [32], [33].

A. Artificial Neural Networks (ANNs)
ANNs are the most common method for computing and developing nonlinear regression based on a model of biological neurons [14]. An ANN is a structure of many layers, categorized as follows: an input layer, hidden layers, and an output layer; each layer contains neurons. Neurons are interconnected by corresponding links (weights).
They basically perform computations and then transmit knowledge from the input to the output. A multilayer ANN model is trained and defined as y_j^k = f(Σ_d W_dj^k y_d^(k-1) + B_j^k), where the W_dj^k are the weights connecting the d-th neuron in layer k to the j-th neuron in layer k-1 (initialized randomly), B_j^k represents the bias of the j-th neuron in layer k, and f(·) represents the nonlinear activation function, such as the sigmoid function.

B. Decision Tree Regression (DTR)
A decision tree is a regression or classification model built in the form of a tree, also known as a predictive model [34]. It is a stepwise method that uses a predefined loss function L(y, F(x)) to optimize the parameter values of the model. In other words, it measures the error at each learning step and then minimizes/corrects it in the following step, continuing up to M iterations [30]. In general, a decision tree splits a large amount of training data into smaller training subsets containing instances with similar values (homogeneous), and an associated decision tree is incrementally optimized [35]. The result is a tree with two kinds of nodes: decision nodes and leaf/terminal nodes. Two or more branches extend from each decision node, each representing values of the tested parameters. A leaf node represents a decision on the numerical target output. Decision trees can handle and model both categorical and numerical data. The root node is the topmost decision node in the tree, corresponding to the best predictor, as shown in Fig. 14. The size of the tree depends on the size of the input and output data {x_i, y_i}. The aim is to obtain an approximation F̂(x) of the function F(x) that minimizes the expected value of the loss function: F̂(x) = arg min_F E[L(y, F(x))]. The most important algorithm for building decision trees is ID3 [36], adapted here to regression by replacing information gain with standard deviation reduction (SDR).

C. Decision Forest Regression (DFR)
DFR is an ensemble method that builds multiple decision trees and integrates their predictions to obtain a more accurate and stable model, rather than depending on an individual tree [37], [38]. Each tree in the forest learns randomly from samples of the training data. Some samples are selected to be used multiple times in an individual tree and some samples may not be selected at all, as shown in Fig. 15. In other words, each tree is trained on different samples. Even though each tree may have high variance depending on the training data, the forest as a whole has lower variance without increasing the bias [39]. Each bootstrap sample is generated according to [40], which shows that 63.2% of the original samples are retained in a bootstrap sample. A decision tree algorithm is then applied to each bootstrap sample {x_i, y_i} in order to generate |p| trees for the forest regression [41].

V. TRAINING OF MACHINE LEARNING REGRESSIONS
The first step in ML model development is the generation and collection of datasets for training and testing. Three data sets were generated for the proposed antenna, namely simulation, measurement, and test data [14], [15]. Simulation data are generated by CST Microwave Studio, while the measurement data are obtained from a Rohde & Schwarz ZVB20 Vector Network Analyzer.
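As a concrete reference for the four error measures used to evaluate these models (Table III), here is a minimal sketch in Python. The metric definitions are the standard ones; the measured and predicted S11 arrays are hypothetical placeholders:

```python
import numpy as np

def regression_metrics(y_meas: np.ndarray, y_pred: np.ndarray) -> dict:
    """Standard error metrics for comparing surrogate models to measurement."""
    err = y_meas - y_pred
    base = y_meas - y_meas.mean()                 # errors of a trivial mean predictor
    mse = float(np.mean(err ** 2))                # mean squared error
    rmse = float(np.sqrt(mse))                    # root mean squared error
    rae = float(np.sum(np.abs(err)) / np.sum(np.abs(base)))  # relative absolute error
    rse = float(np.sum(err ** 2) / np.sum(base ** 2))        # relative squared error
    return {"MSE": mse, "RMSE": rmse, "RAE": rae, "RSE": rse}

# Hypothetical measured vs. predicted S11 values (dB):
y_meas = np.array([-12.1, -15.4, -10.2, -18.7, -11.5])
y_pred = np.array([-11.8, -15.9, -10.6, -17.9, -11.2])
print(regression_metrics(y_meas, y_pred))
```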
Test data are obtained outside (by extrapolation of) the simulation and measurement data to independently test the quality (accuracy and generalization capability) of the trained model. In the training process, samples of the training data are iteratively provided to the model. The model applies its current parameters and produces a prediction. The prediction is compared to the target, and the difference is reported as an error; the model then updates itself to decrease that error in the next prediction. This means that the model updates the values of its parameters according to the ML regression algorithm with which it was generated, as shown in Fig. 16. Models are trained by adjusting their parameter values to achieve better results. Therefore, the models are the result of what the ML has learned from the training data. The measured model that is used as the prediction target is seen in Fig. 11 and Fig. 12. The dimensions and configuration parameters of the RSMA are the variables; hence, the inputs and outputs {x_i, y_i} of the corresponding RSMA machine learning model are given by x = [l_1, l_2, l_3, f, S11_sim]^T (13) and y = [S11]^T (14), where f is the operating frequency; l_1, l_2, and l_3 represent the width and length of the middle microstrip and the height of the radiating element, respectively; and S11_sim is the simulation data. The superscript T denotes the transpose of the input and output vectors or matrices. {x_i, y_i} can be expressed for the RSMA modeling problem as shown in (7). In the proposed study, there is one output variable and multiple input variables. Note that ML models can accommodate and learn from multiple input variables to predict multiple output variables. The accuracy of the resulting regression models has been assessed first through the methods' plots, as shown in Figs. 17-19, and then through the measured errors, as shown in Table III, in line with using multiple prediction accuracy measures. The ANN model is developed using the Levenberg-Marquardt learning algorithm, which combines two minimization methods, gradient descent and Gauss-Newton, with a learning rate of 0.005, 100 iterations, an initial learning weight diameter of 0.1, and one hidden layer containing 100 neurons (Fig. 17). The decision tree regression model is developed using the single-parameter trainer mode, with a maximum of 20 leaves per tree, a minimum of 10 samples per leaf node, a learning rate of 0.2, and a total of 100 constructed trees (Fig. 18). The decision forest regression model is developed using 8 decision trees, a maximum tree depth of 32, 128 random splits per node, and 1 sample per leaf node (Fig. 19). The errors of the different measures are computed by comparing the measured (y_meas) and predicted (y_pred) data. The main result is that the DTR and DFR algorithms have the best error measures, as shown in Table III. Figure 20 shows the extent of the correlation between the measurement and the prediction for the regression models. While the correlation in Fig. 20(b) and Fig. 20(c) is remarkably positive, the correlation in Fig. 20(a) is negative and quite small. Figure 4 and Figure 12 show the S-parameter results of all simulation and measurement stages; the change in resonant frequencies and bandwidth in each case is clearly apparent.
The resonant frequencies shift without exact overlap, and new operating frequencies are generated at every simulation stage; in addition, the measurement case displays multiple resonant frequencies at 1.4 GHz, 1.8 GHz, 2.27 GHz, 2.73 GHz, 3.26 GHz, and 3.73 GHz, as shown in Fig. 12 and Fig. 13. This depends on the addition of a new spiral arm in each case. The 3.4 GHz-3.8 GHz band, with a bandwidth of 200 MHz, is widely recognized as a supported band for 5G systems in LTE TDD mode, allocating an asymmetric distribution of uplink and downlink resources in wireless systems. It is clearly shown that the number of resonant frequencies can be multiplied by increasing the length of the spiral conductors (C), while considering the empty space between them. The bandwidth is wider in the first, second, third, and fourth simulation cases than in the measurement case, but the return loss in the measurement case is lower than in the simulation cases. Equations (2), (5), and (6) theoretically give approximate results agreeing fairly well with the simulation results in some cases, while the results do not agree in most simulations (see Table II and the sections on the first and fourth configurations). The far-field patterns show the radiated power as a function of the direction along the z-axis and vary as a function of the angles φ = 0° and φ = 90°. Observing the two-dimensional far-field patterns, the radiation is maximal at 0° and 90° along the z-axis and minimal broadside to the antenna. At the HPBW angle, the power is around 45% of the peak power. Remarkably, good patterns can be obtained by combining microstrip and spiral conductors.

B. Measurement and Regression Methods
As can be observed from the regression models, the proposed DTR and DFR evidently produced accurate models compared to the ANN model. The high performance of DTR and DFR relative to the ANN derives mainly from the systematic, non-parametric, and methodical nature of the tree structure, which can predict the target variable through simple rules learned from the measured training data. Thus, there is remarkable agreement between the prediction and measurement models, with some differences, as shown in Fig. 18, Fig. 19, Fig. 20(b), and Fig. 20(c), as well as in Table III. The ANN model, by contrast, is not highly accurate and barely remains within the measurement boundaries, as shown in Fig. 17 and Fig. 20(a). For a straightforward comparison, Table III includes error metrics for the used regression models, which aid in understanding the accuracy and generalization capability of the RSMA model. In [17] and [18], a similar comparative study is presented using three ML and EM models. As a result, the ML techniques used in our study, [17], and [18] can additionally be utilized to recognize and solve significantly more complex antenna problems. Therefore, the results of these studies imply that ML methods can be a parallel solution to EM simulation in novel antenna technology. The analysis of the previous results may lead researchers to focus on developing and optimizing antennas through artificial intelligence methods.

VII. CONCLUSIONS
This article gradually discussed how to combine rectangular microstrip antennas with spiral antennas. The model results comprise numerical, 3D-EM, machine learning, and measurement results. Through the sequence of simulation cases, the combined configuration, called the rectangular spiral-shaped microstrip antenna (RSMA), has been developed, achieving the goals of the proposed antenna: small size, good patterns, and low HPBW.
The operating frequency range covers 1 GHz-4 GHz (L-band and S-band). To verify the design of the proposed antenna, the model has been fabricated and measured. It can be a good choice for small-area coverage with high data-rate capacity, and it is operable and suitable for different wireless communication systems in indoor/outdoor environments (WLAN: 2.4 GHz-2.48 GHz, WiMAX: 3.4 GHz-3.69 GHz, and WiFi: 2.40 GHz-2.48 GHz). This research also explored three regression algorithms based on machine learning for predicting models of the proposed antenna and calculating the accuracy and generalization capability. Decision tree regression, decision forest regression, and artificial neural networks were used to solve this modeling problem. The regression models predicted correctly, with some acceptable differences. Regression algorithms were successfully applied for modeling the proposed antenna, and the results indicate the reliability of the proposed prediction methods. Moreover, both researchers and practitioners may use different machine learning methods for modeling antennas.

CONFLICTS OF INTEREST
The author declares that he has no conflicts of interest.
7,288.4
2021-02-25T00:00:00.000
[ "Engineering", "Computer Science" ]
MYCORRHIZAL COLONIZATION AND PHENOLIC COMPOUNDS ACCUMULATION ON ROOTS OF EUCALYPTUS DUNNII MAIDEN INOCULATED WITH ECTOMYCORRHIZAL FUNGI

Compatibility between Eucalyptus dunnii and the ectomycorrhizal fungi Hysterangium gardneri and Pisolithus sp. (from Eucalyptus spp.), and Rhizopogon nigrescens and Suillus cothurnatus (from Pinus spp.), was studied in vitro. Pisolithus sp., H. gardneri and S. cothurnatus colonized the roots. Pisolithus sp. mycorrhizas presented a mantle and Hartig net, while H. gardneri and S. cothurnatus mycorrhizas presented only a mantle. S. cothurnatus increased the phenolics level in roots; Pisolithus sp. and R. nigrescens decreased the level of these substances. The isolates from Eucalyptus seem to be more compatible towards E. dunnii than those from Pinus. The mechanisms involved could be related, at least in the cases of Pisolithus and Suillus, to the concentration of phenolics in roots.

COLONIZATION AND ACCUMULATION OF PHENOLIC COMPOUNDS IN ROOTS OF EUCALYPTUS DUNNII MAIDEN INFECTED WITH ECTOMYCORRHIZAL FUNGI
RESUMO (translated from the Portuguese): The compatibility between Eucalyptus dunnii and the ectomycorrhizal fungi Hysterangium gardneri and Pisolithus sp. (isolated from Eucalyptus spp.), and Rhizopogon nigrescens and Suillus cothurnatus (isolated from Pinus spp.), was studied in vitro. Pisolithus sp., H. gardneri and S. cothurnatus colonized the roots. The mycorrhizas of Pisolithus sp. presented a mantle and Hartig net; those of H. gardneri and S. cothurnatus presented only a mantle. S. cothurnatus caused an increase of phenolics in the roots; Pisolithus sp. and R. nigrescens caused a decrease of these substances. The fungi isolated from Eucalyptus seem more compatible with E. dunnii than those from Pinus. The concentration of phenolics in the roots seems to be involved in this phenomenon, particularly for Pisolithus sp. and S. cothurnatus.

1 Accepted for publication on December 27, 1999. Part of B.Sc. thesis presented by the senior author to Universidade Federal de Santa Catarina (UFSC). 2 Biologist, Dep. de Microbiologia e Parasitologia, Centro de Ciências Biológicas, UFSC, Caixa Postal 476, CEP 88040-900 Florianópolis, SC, Brazil. CAPES scholar. E-mail<EMAIL_ADDRESS>3 Agronomist, Doct., Dep. de Microbiologia e Parasitologia, Centro de Ciências Biológicas, UFSC. E-mail<EMAIL_ADDRESS>4 Biologist, Ph.D., Associate Professor, Dep. de Botânica, Centro de Ciências Biológicas, UFSC. E-mail<EMAIL_ADDRESS>

Specificity between ectomycorrhizal (ECM) fungi and host plants has been observed both in the field (Molina & Trappe, 1982; Molina et al., 1992) and under controlled conditions (Malajczuk et al., 1982, 1984; Oliveira et al., 1994). Knowledge of the mechanisms controlling this phenomenon is important to understand mycorrhizal functioning and to guide the selection of isolates for inoculation programmes. Malajczuk et al. (1982, 1984) reported that ECM fungi from eucalypts were unable to colonize Pinus radiata, while those from conifers did not colonize eucalypts. In incompatible pairings, phenolics accumulated in roots as a result of a hypersensitive reaction. In mycorrhizas of Picea abies, Larix decidua and Pinus sylvestris, a lower concentration of soluble and cell-wall-bound phenolics than in uninoculated roots was observed (Münzenberger et al., 1990, 1995, 1996). Later, they observed that laccase and peroxidase activities differed between mycorrhizas and uninoculated roots of P.
abies and L. decidua (Münzenberger et al., 1997). In both species, mycorrhizas contained the highest laccase activity and the lowest peroxidase activity. The high laccase activity could induce the polymerisation of soluble phenolics, contributing to their decrease. The low peroxidase activity would inhibit oxidative rigidification of the cell wall. These reactions would favour root colonization by ECM fungi.

In Southern Brazil, Eucalyptus spp. and Pinus spp., even when planted in the same sites, seldom have the same fungal symbionts. This difference in fungal diversity could possibly be related to fungus-host specificity. In this sense, this phenomenon deserves further consideration. Thus, this study was carried out with four ectomycorrhizal fungi well known for their specific occurrence in Eucalyptus or Pinus stands in Santa Catarina, Southern Brazil. The aim was to determine fungal infectivity towards Eucalyptus dunnii Maiden and its relationship with phenolic compound accumulation in roots.

Seeds of E. dunnii were disinfected in 70% ethanol (30 seconds) and surface sterilized in 1% sodium hypochlorite (20 minutes). They were placed on the surface of Modified Melin-Norkrans agar (MMN) (Marx, 1969). For mycorrhizal synthesis, five seedlings were placed concentrically on the surface of the cellophane film with their roots in contact with the fungus (Burgess et al., 1995) and kept under the same conditions described for germination. Uninoculated controls were prepared similarly, except for the absence of a fungal colony. There were ten replicates (dishes) per treatment (50 seedlings).

Five weeks later, plants were carefully removed, shoots were eliminated, and roots were placed in distilled water. Roots were observed under a stereomicroscope (30x) to determine the number of colonized root tips per plant. After that, they were divided into two parts, one for microscope observations and another for extraction of phenolic compounds.

Three samples of 100 mg of fresh roots were used per treatment for the extraction of total phenolics. Each sample was ground in 2 mL of 70% ethanol and kept in a water bath at 60°C before being centrifuged for 2 minutes at 700 g. The pellet was dissolved and extracted twice again by the same procedure (Phillips & Henshaw, 1977). The three extracts of the same sample were combined in order to obtain a final extract of 6 mL.

Phenolic compounds were quantified in three aliquots of 0.5 mL from each sample (three samples/treatment), according to the Folin-Denis technique (Swain & Hills, 1959). The extract was dried at 50°C for 24 hours. The pellet was redissolved in 1 mL of distilled water, and 0.5 mL of Folin-Denis reagent and 1 mL of saturated calcium carbonate solution were added. The final volume was adjusted to 10 mL with distilled water. The solution was kept at room temperature for 45 minutes. Optical density was measured at 725 nm. The results were compared with a standard curve obtained with tannic acid at different concentrations (0, 20, 40, 60, 80 and 100 µg/mL).

Data on the number of colonized root tips and total phenolic compounds per plant were submitted to analysis of variance, and the averages were compared by the t test.

Pisolithus sp., H. gardneri and S. cothurnatus colonized E. dunnii roots, whereas no colonization was observed in plants inoculated with R. nigrescens. Pisolithus sp. formed typical mycorrhizas (Fig. 1B), with a well-developed mantle and a Hartig net limited to the first layer of cortical cells. H. gardneri and S.
cothurnatus mycorrhizas (Figs. 1A and 1D), although presenting a well-developed mantle, showed no discernible Hartig net besides a few hyphae dispersed in epidermal intercellular spaces. Neither mantle nor Hartig net was observed on roots inoculated with R. nigrescens (Fig. 1C).

According to the statistical analysis, Pisolithus sp. and H. gardneri showed a higher infectivity towards E. dunnii (15.8 and 2.8 colonized root tips per plant, respectively) than S. cothurnatus and R. nigrescens, which colonized only 1.4 and 0.0 root tips, in this order (Table 1). Smith & Read (1997) consider that the mantle and especially the Hartig net are indicative of effective establishment of ectomycorrhizas. In this sense, Pisolithus sp. (UFSC-Pt44) presents higher compatibility towards E. dunnii than the other fungi, because this fungus formed typical mantle and Hartig net structures on roots. In comparison, H. gardneri and S. cothurnatus mycorrhizas, although presenting a well-developed mantle, had no typical Hartig net, presenting a superficial colonization. However, other authors consider that an ectomycorrhiza is characterized by any case of a fungus forming a mantle, with or without a Hartig net (Warcup, 1980). Superficial mycorrhizas in Eucalyptus spp. formed by Hysterangium spp. have previously been described by Warcup (1980) and Malajczuk et al. (1987). The former author demonstrated plant growth stimulation by this type of mycorrhizas. In this sense, the superficial colonizations by H. gardneri and S. cothurnatus have been considered ectomycorrhizas as well.

Data on the number of mycorrhizal root tips show that Pisolithus sp. and H. gardneri were more infective towards E. dunnii roots than S. cothurnatus and R. nigrescens, suggesting that the isolates from Eucalyptus spp. are more compatible with this plant than those from Pinus spp. Species of Suillus and Rhizopogon are well known for associating specifically with certain plant genera, mainly conifers (Garbaye, 1990). Plants inoculated with S. cothurnatus presented a higher accumulation of phenolics in roots, with an average of 12.3 µg/mg of fresh weight (Table 1). Those inoculated with H. gardneri did not differ from controls, with averages of 11.1 and 10.2 µg/mg of fresh weight, in this order. Conversely, roots inoculated with Pisolithus sp. and R. nigrescens presented a lower level of these substances, 9.1 and 8.8 µg/mg of fresh weight, respectively.

Métraux (1994) related phenolic accumulation in plants to defence mechanisms against pathogens. Malajczuk et al. (1982, 1984) related this phenomenon also to plant reaction to incompatible ECM fungi. In this study, S. cothurnatus induced a significant accumulation of phenolics in roots, which coincided with a lower infectivity compared to Pisolithus sp. and H. gardneri. Roots inoculated with H. gardneri did not significantly increase the production of these substances, whereas those inoculated with Pisolithus sp. and R. nigrescens presented a lower level of phenolics than uninoculated roots. These observations indicate that only S. cothurnatus, isolated from Pinus sp., stimulated a hypersensitive reaction on E. dunnii roots. Conversely, R. nigrescens, also from Pinus, did not induce this reaction but was unable to colonize roots. The incompatibility between this fungus and E. dunnii must be related to other mechanisms, unless the total absence of infection prevented the accumulation of phenolics.

The results suggest that the specific occurrence of Pisolithus sp. and H. gardneri and the absence of S.
cothurnatus and R. nigrescens in Eucalyptus plantations could be related to their compatibility/incompatibility towards these plants. Nevertheless, these results refer to only four isolates; hence, more studies are needed in order to establish a full explanation for the fungus-host specificity observed in Southern Brazilian plantations.

TABLE 1. Mycorrhizal colonization and total phenolic compound accumulation in Eucalyptus dunnii roots inoculated with ectomycorrhizal fungi after five weeks of incubation(1). (1)Values in the same column followed by different letters are significantly different according to the t test (P = 0.05). (2)Values are the average of 50 replicates per treatment. (3)Values are the average of nine replicates per treatment.
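For readers reproducing the Folin-Denis quantification described above, the following is a minimal sketch of converting an optical-density reading at 725 nm into a phenolic concentration via a linear fit of the tannic-acid standard curve. The standard concentrations follow the text; the OD readings and the sample value are hypothetical:

```python
import numpy as np

# Tannic-acid standard curve: concentrations from the text (assumed ug/mL);
# the OD725 readings below are hypothetical example values.
std_conc = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 100.0])
std_od = np.array([0.00, 0.11, 0.21, 0.33, 0.42, 0.54])

# Linear least-squares fit: OD = slope * conc + intercept.
slope, intercept = np.polyfit(std_conc, std_od, 1)

def phenolic_conc(od725: float) -> float:
    """Estimate phenolic concentration (ug/mL) from an OD725 reading."""
    return (od725 - intercept) / slope

sample_od = 0.27  # hypothetical sample reading
print(f"Estimated phenolics: {phenolic_conc(sample_od):.1f} ug/mL")
```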
2,308.6
2000-09-01T00:00:00.000
[ "Biology", "Environmental Science" ]
Thermal control of sequential on-surface transformation of a hydrocarbon molecule on a copper surface

On-surface chemical reactions hold the potential for manufacturing nanoscale structures directly onto surfaces by linking carbon atoms in a single-step reaction. To fabricate more complex and functionalized structures, the control of on-surface chemical reactions must be developed significantly. Here, we present a thermally controlled sequential three-step chemical transformation of a hydrocarbon molecule on a Cu(111) surface. With a combination of high-resolution atomic force microscopy and first-principles computations, we investigate the transformation process in step-by-step detail from the initial structure to the final product via two intermediate states. The results demonstrate that surfaces can be used as catalysing templates to obtain compounds which cannot easily be synthesized by solution chemistry.

Interesting work which certainly can be recommended for publication in Nature Communications. The study addresses the topic of "on-surface" chemistry, and it is timely to promote this issue for the physics community; it brings the two disciplines of chemistry and physics together to reach a common goal. Some additional comments:
1) Abstract: The Abstract ends with "..., which cannot be synthesized by solution chemistry." This general statement is too strict. It should be changed to "... cannot easily be synthesized ...". In principle it is a matter of motivation and the resources one puts in from the chemistry side to synthesize a new compound.
2) Introduction: An additional reference (J. Am. Chem. Soc., 2016, 138, 5585-5593 by Liu et al., Control of Reactivity and Regioselectivity for On-Surface Dehydrogenative Aryl-Aryl Bond Formation) needs to be cited. This recent paper goes very much in line with the actual manuscript. A corresponding comment on it should be added in the Introduction.
3) Figure 2d: The two hydrogen atoms are bonded with a line in a wedged (or tapered) form, which indicates a specific stereochemistry; it would be more appropriate to draw just a line.
4) First reaction: (page 6) "... these two triple bonds have been cleaved ..." I mean no, because then there would be no bonding anymore between the carbons. Better is, for example, "Two acetylene bridges are partially reduced while two hydrogen atoms are added to each one."
5) (page 6) "The remaining two non-covalently bonded electrons are ...". This sounds strange for chemists; maybe: "The biradical molecule is stabilized by ...".
6) (page 8) Better would be: "By now, ... consists of two pentacyclic and four hexacyclic carbon rings."
7) (page 8) "... -two hydrogen atoms have definitely been added during the process". It sounds like: before maybe not, but now for sure. Please rephrase this sentence.

Reviewer #3 (Remarks to the Author)
The paper reports on the thermally controlled chemical transformation of tDBA on a Cu(111) surface. The chemical structures of reaction intermediates and products were identified using the combination of STM, AFM, and DFT calculations. The study found two aromatic molecules unreported before, and the detailed mechanism of the three-step reaction was derived. In general, this paper is well written with comprehensive data analysis. The data quality is excellent, but partial results are conceivable. I would recommend the paper for publication in Nature Communications after the authors address the following comments.
1.
nc-AFM provides the essential information for determining the chemical structures of the molecular species in the presented reaction. Although some interpretations appear to be straightforward, there are images hardly resembling the deduced structure (Fig. 2b, c, d). Could the authors elaborate how the adsorption site of the tDBT derivative was determined in the DFT calculation (Fig. S2b)? In this spontaneous hydrogenation process, where do the hydrogen atoms come from? It is well known that ethyne groups readily react on copper surfaces by forming two C-Cu bonds; should this reaction path be considered as well?
2. In Supplementary Fig. 1, the figure caption mentions (d) the chemical structure of the trimer, which I cannot find. Additionally, it is mentioned: "The observed bond-like feature in the intermolecular contact relates to an apparent bond, which is an artefact caused by the flexible CO tip following the potential energy landscape." I would like to ask for a clarification: was the assembly of three DBA bonded via C-H-pi interactions, or did they just get together by chance? If the argument of the CO-tilting artefact prevails, I would doubt the validity of the approach of using AFM images to identify the intermediates or transition states in reactions. In view of the complicated configurations of the molecular moieties in the vicinity of catalyst atoms or active sites, should I believe those features in the acquired AFM images to be real? Fig. S8 happens to be an example in this respect. The elongated bond length could arise from multiple origins, and the presence of a line feature is not a guarantee of the coordination bond that produces the organometallic complex. The bottom line for the above-mentioned questions: are the intramolecular lines seen in Fig. 3b and 4b the result of tip artefacts too?
3. Because of the absence of side reactions in the discussed system, the ratios of the different products should agree well with the calculated thermodynamic energies. Could the authors comment on this issue?
4. Most organic molecules tend to break down at elevated temperatures on reactive metal surfaces like copper; why is it suggested that the final reaction product can be collected from the surface by further thermal desorption?

Reviewers' comments:

Reviewer #1 (Remarks to the Author):
The authors present a combined STM/nc-AFM/DFT study of the chemical transformations of a hydrocarbon precursor on Cu(111) upon thermal annealing, identifying three sequential species, two of them unachievable by normal synthesis.
We thank Reviewer #1 for supporting our work and providing valuable comments to improve our manuscript. We carefully revised our manuscript accordingly.
B. Originality and interest. The paper is of interest for the surface science community and for organic synthesis researchers. However, it is not so original for on-surface chemistry in view of two recent papers: i) Imaging single-molecule reaction intermediates stabilized by surface dissipation and entropy, Nat. Chem. 2016, and ii) Thermal selectivity of intermolecular versus intramolecular reactions on surfaces, Nat. Commun. 2016, which should be cited by the authors. Nevertheless, since there are almost no examples of a careful nc-AFM identification of thermally evolved chemical compounds, I would recommend publication, though having in mind that this paper will not be the first of its kind.
We agree with the comment that our work is not the first to report on on-surface chemical reactions.
However, we believe that our work presents for the first time the sequential reaction of a single molecule on a surface. We now cite the two suggested articles. Please note that one of them (Nature Chem. 2016) appeared after our submission. We revised the manuscript on page 2 as "However, the production yield is generally relatively low, presumably related to the complicated reaction mechanism. For instance, Cirera et al. recently reported that the reaction temperature can tune the probability of the intermolecular and intramolecular reaction pathways. 12" And on page 3 "and intermediates in the dimerization. 15"
The paper is brilliantly written and very easy to follow.
We thank Reviewer #1 for supporting our work.
D. Appropriate use of statistics and treatment of uncertainties. I am kind of worried because of two experimental limitations:
D.1. Instrumental. The molecules are deposited on a sample placed on a manipulator, held at a desired temperature. However, while transferring to the Omicron LT-STM, there is a rise in temperature due to the fact that the wobble stick is at room temperature. Could the authors elaborate something in this respect?
We are familiar with this issue. To minimize the heat transfer, the tweezers of the wobble stick were cooled down by touching the cryostat for a while. For this reason, the temperature of the tweezers should be much lower than room temperature. Furthermore, the transfer to the microscope from the LT manipulator was usually done in less than 10 seconds (typically 3 seconds). To explain this, we added the following sentence in the experimental method section: "After deposition, the sample was transferred to the microscope by a wobble stick manipulator. In order to minimize the influence of heat transfer to the sample from the manipulator, the tweezers of the manipulator were cooled down by touching a helium radiation shield for about 60 seconds, so that their temperature should be much lower than room temperature."
D.2. The identification of the first chemical product after annealing should be relaxed in the manuscript. It is a tentative identification, not fully corroborated by nc-AFM. In addition, references addressing the intake of hydrogen atoms by molecules should be introduced.
We agree that the difficulty in fully imaging the first product makes our structural assignment somewhat tentative, despite the support of the simulations. We have modified the following sentence in the discussion (page 7) to make this clearer: "In this proposed structure, which is 1.4 eV more stable than the intact tDBA on Cu(111) ..."
As Reviewer #3 pointed out, we agree that it may be too speculative, since we have never tried to collect the molecules. Yet, we are aware that the amount of carbon decreased drastically during the polymerization by dehydrogenation, meaning that most of the molecules can desorb from the substrate. We revised the text on page 11 as "These results indicate that the final product can be selectively collected from the surface by further thermal desorption if the molecules desorb before intermolecular reactions."
E. Conclusions. The paper is very solid and very appealing for the community.
We thank Reviewer #1 for supporting our work.
F. Suggested improvements:
F.1. Distribution of distinct species upon steps of annealing. What are the side-products, if any?
The reaction yield of Product c (in Fig. 5) is 100%. At least in our measurements, we could not find any side-products.
Yet, Product c tends to be connected by organometallic bonds as shown in Supplementary Figures 7 and 8. Once they are connected, in-plane diffusion of the carbon atoms in the polymeric molecules can happen upon annealing at higher temperature (Supplementary Figure 9). Then, side-products, namely polycyclic aromatic compounds, can be produced. In order to explain this, we added three sentences: on page 7, "no side-product could be observed in our experiment"; on page 9, "Nevertheless, either monomer or polymer, the structure of all molecular units is the same as that described in Fig. 3c, so that no side-product was observed (Supplementary Figs. 7 and 8)."; and on page 11, "Other polymeric molecules were formed by undefined in-plane carbon diffusion."
F.2. Discussion on the mechanism of the reaction. Why are these species not possible by normal organic synthesis? What are the limiting steps? What impact do these species have on the organic community?
Why are these species not possible by normal organic synthesis? What are the limiting steps?
The first and second products are very reactive species and would immediately decompose in air. Therefore, it is difficult or nearly impossible to produce these molecules by conventional organic synthetic methodologies. On the other hand, the third product, while indeed a new molecule, can be synthesized by conventional procedures. To explain this, we added the following sentence on page 12: "We also note that the first two products could not be produced via conventional organic synthesis due to their reactivity and would decompose immediately in air."
What impact do these species have on the organic community?
As we mentioned above, the most important achievement in the present manuscript is the production and detection of unstable species by controlling stepwise on-surface reactions. This information further stimulates the research field to produce novel molecules, including very reactive species which have never been synthesized in solution chemistry. From an organic chemist's viewpoint, the first product is an unstable radical species, and therefore the detection of such a species has a significant impact on the community. Moreover, the second product, benzo[a]indeno[2,1-c]fluorene, which belongs to the family of indenofluorenes, is of great interest to organic chemists because of its unique electronic and magnetic properties. As such, we believe our manuscript will have a strong impact on the organic community. To explain this, we added the following sentences on page 12: "More specifically, the first product is an unstable radical species, and therefore the detection of such a species is a significant step for the chemical community. The second product, benzo[a]indeno[2,1-c]fluorene, belongs to the family of indenofluorenes, an area of intense interest due to their unique properties. 30"
G. References: The paper is missing the two Nature references mentioned above and the Nano Letters paper by A. Riss et al. (Nano Lett. 14, 2251-2255 (2014)).
The manuscript now cites these articles.
H. Clarity and context: Very clear and scholarly presented. Context should be slightly improved according to the above-mentioned suggestion.
We thank Reviewer #1 for supporting our work.
The study addresses the topic of "on-surface" chemistry and it is timely to promote this issue for the physics community; it brings the two disciplines of chemistry and physics together to reach a common goal.
We thank Reviewer #2 for supporting our work. We carefully read the comments and revised our manuscript accordingly.
Some additional comments:
1) Abstract: The Abstract ends with "..., which cannot be synthesized by solution chemistry." This general statement is too strict. It should be changed to "... cannot easily be synthesized ...". In principle it is a matter of motivation and the resources one puts in from the chemistry side to synthesize a new compound.
We revised the last sentence as suggested.
2) Introduction: An additional reference (J. Am. Chem. Soc., 2016, 138, 5585-5593 by Liu et al., Control of Reactivity and Regioselectivity for On-Surface Dehydrogenative Aryl-Aryl Bond Formation) needs to be cited. This recent paper goes very much in line with the actual manuscript. A corresponding comment on it should be added in the Introduction.
We thank Reviewer #2 for suggesting this. We now cite this article in the Introduction and added the following sentence on page 3: "Furthermore, a systematic observation of tetracyclic pyrazino[2,3-f][4,7]phenanthroline, annealed on Au(111), reveals the regioselectivity for on-surface dehydrogenative aryl-aryl bond formation. 16"
3) Figure 2d: The two hydrogen atoms are bonded with a line in a wedged (or tapered) form, which indicates a specific stereochemistry; it would be more appropriate to draw just a line.
We modified Fig. 2d according to the reviewer's comment.
4) First reaction: (page 6) "... these two triple bonds have been cleaved ..." I mean no, because then there would be no bonding anymore between the carbons. Better is, for example, "Two acetylene bridges are partially reduced while two hydrogen atoms are added to each one."
We revised as suggested.
5) (page 6) "The remaining two non-covalently bonded electrons are ...". This sounds strange for chemists; maybe: "The biradical molecule is stabilized by ...".
We revised as suggested.
6) (page 8) Better would be: "By now, ... consists of two pentacyclic and four hexacyclic carbon rings."
We revised as suggested.
7) (page 8) "... -two hydrogen atoms have definitely been added during the process". It sounds like: before maybe not, but now for sure. Please rephrase this sentence.
We have rephrased the sentence on page 8 as follows: "Furthermore, in the AFM image 14 C-H bonds are clearly visible, implying that the elemental composition is now C24H14, with two hydrogen atoms likely having been added during the process."
The chemical structures of reaction intermediates and products were identified using the combination of STM, AFM, and DFT calculations. The study found two aromatic molecules unreported before, and the detailed mechanism of the three-step reaction was derived. In general, this paper is well written with comprehensive data analysis. The data quality is excellent, but partial results are conceivable. I would recommend the paper for publication in Nature Communications after the authors address the following comments.
We thank Reviewer #3 for supporting our work and giving valuable comments on our manuscript. We carefully read and revised our manuscript accordingly.
1. nc-AFM provides the essential information for determining the chemical structures of the molecular species in the presented reaction. Although some interpretations appear to be straightforward, there are images hardly resembling the deduced structure (Fig. 2b, c, d). Could the authors elaborate how the adsorption site of the tDBT derivative was determined in the DFT calculation (Fig. S2b)? In this spontaneous hydrogenation process, where do the hydrogen atoms come from?
It is well known that ethyne groups readily react on copper surfaces by forming two C-Cu bonds; should this reaction path be considered as well? As Reviewer#3 points out, it is known that ethyne groups can react with copper to form C-Cu bonds, and that was our initial guess as well. However, according to the performed DFT calculations (and first-principles molecular dynamics simulations), the formation of a new C-C bond within the molecule prevails, regardless of whether or not additional hydrogen atoms are introduced at this point. This is shown in Supplementary Information Fig. S4. If inspected carefully, one can see from Fig. S4 that there is indeed some interaction between the molecule and the copper surface, especially if no hydrogen atoms are added. However, this structure does not match the corresponding AFM image. Indeed, only if one assumes that two hydrogen atoms are introduced to the molecule, so that it takes the form shown in Fig. S2a', does one obtain qualitative agreement with the corresponding AFM image. We believe that the hydrogen atoms for the hydrogenation process come from the copper surface (please also see our answers to Reviewer#1). In the calculations, a few initial adsorption sites on the surface were tried, and the final geometries were obtained using a robust conjugate-gradient optimization with a force convergence better than 10 meV/Å. 2. The Fig. 1 caption mentions (d) the chemical structure of the trimer, which I cannot find. Additionally, it is mentioned that "The observed bond-like feature in the intermolecular contact relates to an apparent bond, which is an artefact caused by the flexible CO tip following the potential energy landscape." I would like to ask for a clarification: was the assembly of three DBA bonded via C-H-pi interactions, or did they just get together by chance? If the argument of the CO-tilting artefact prevails, I would doubt the validity of the approach of using AFM images to identify the intermediates or transition states in reactions. In view of the complicated configurations of the molecular moieties in the vicinity of catalyst atoms or active sites, should I believe that those features in the acquired AFM images are real? Fig. S8 happens to be an example in this respect. The elongated bond length could arise from multiple origins, and the presence of a line feature is not a guarantee of the coordination bond that produces the organometallic complex. The bottom line for the above-mentioned questions: are the intramolecular lines seen in Figs. 3b and 4b the result of tip artefacts too? With respect to the trimer, after discussion we agree that the evidence for its bonding is probably too speculative to be discussed in detail. Hence, we removed it from the SI. On the more general topic of bonding artefacts, this is where it is critical to support the AFM image with simulations: DFT agrees well with Fig. 3b and Fig. 4b, and the intramolecular lines are not tip artifacts. This is already discussed in the SI in the case of Fig. 3. 3. Because of the absence of side reactions in the discussed system, the ratios of different products should agree well with the calculated thermodynamic energies. Could the authors comment on this issue? We do not mean that there are no side reactions. The undesired products tend to fuse with each other. Thus, we count only the monomer product.
To explain this, we revised the manuscript on page 7 as: "It should be noted that the reaction selectivity is very high (nearly 100%; no side-product could be observed in our experiment), in contrast to previously reported on-surface transformations. This is attributed to the confined molecular backbone of tDBA, which suppresses side reactions; the few undesired products tend to fuse with each other, so our focus remains on the monomer product." 4. Most organic molecules tend to break down at elevated temperatures on reactive metal surfaces like copper; why is it suggested that the final reaction product can be collected from the surface by further thermal desorption?
A Cost Modelling System for Recycling Carbon Fiber-Reinforced Composites Cost-effective and environmentally responsible ways of carbon fiber-reinforced composite (CFRP) recycling are increasingly important, owing to the rapidly increasing use of these materials in many industries such as the aerospace, automotive and energy sectors. Product designers need to consider the costs associated with manufacturing and the end-of-life stage of such materials to make informed decisions. They also need to understand the current methods of composite recycling and disposal and their impact on the end-of-life costs. A comprehensive literature review indicated that there is currently no tool to estimate CFRP recycling costs without prior knowledge and expertise. Therefore, this research paper proposes a novel knowledge-based system for the cost modelling of recycling CFRP that does not require in-depth knowledge from a user. A prototype of a cost estimation system has been developed based on existing CFRP recycling techniques such as mechanical recycling, pyrolysis, the fluidized bed process, and supercritical water. The proposed system has the ability to select the appropriate recycling techniques based on a user's needs with the help of an optimization module based on the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The estimation of recycling costs takes into consideration various factors such as different material types in different industries, transportation, and dismantling costs. The developed system can be employed to support early-stage designers and decision-making stakeholders in understanding and predicting recycling costs easily and quickly. Introduction Carbon fiber-reinforced polymer matrix composites (CFRPs) are being rapidly adopted among emerging composite materials across various industries, such as aircraft and wind turbine blade manufacturing as well as the transportation sector [1]. The global market capacity for CFRP was estimated to be approximately USD 5 billion in 2019 and was expected to grow by 10.6% annually, reaching around USD 8 billion in 2024 [2]. The worldwide production of CFRP is estimated to reach almost 200 kt by 2022, whereas the amount produced in 2018 was 128 kt [3]. The reason behind such relatively high demand lies in the superior properties of composite materials, such as higher strength, lower weight (25 to 75% reduction in weight), and corrosion resistance compared to conventional materials such as steel and aluminum. As a result, using CFRPs enables energy savings and reduces the carbon emissions associated with the life cycle of the final products. For example, recycling a kilogram of carbon fiber with a chemical method consumes 38 MJ of energy, whereas the production of virgin carbon fiber requires 5-15 times more energy. Table 1. Distribution of the global CFRP market by matrix material [10] (columns: Matrix Type; Market Size, bln USD). Another reason is the diversity of composite mixtures, which does not allow the use of standardized processes for the collection and sorting of waste. Finally, composite materials contain cores and coatings which must be separated manually for recycling [11]. Along with the technical challenges, cost prediction also tends to hinder the growth of the recycling rate of composite waste. For instance, some recycling methods are not commercially viable due to their high energy consumption.
Moreover, the recycled composites are often considered to be of lower quality than virgin composites; thus, the area of application is restricted, for instance, to internal aircraft structures. Finally, composite waste recycling plants tend to be located far from the suppliers of the waste, which, in turn, requires transportation costs and supply chain performance to be taken into consideration [12]. Boeing has established good practices of recycling carbon fiber waste, recycling up to 100% of its CFRP waste in cooperation with the company ELG Carbon Fibre, based in the UK. The partnership resulted in training employees and arranging recycling processes at 11 manufacturing sites [13]. Other carbon fiber (CF) recycling companies include Carbon Conversions (Lake City, SC, USA), HADEG Recycling GmbH (Stade, Germany), ELG Carbon Fibre Ltd. (Bilston, UK), and Takayasu Co., Ltd. (Gifu, Japan) [7]. Moreover, CF manufacturers tend to express interest in recycling, as producing recycled CF consumes roughly ten times less energy than producing virgin material. Energy and cost reductions are strong market drivers for recycling CF. For example, recycled CF (rCF) costs around USD 18-25 per kg, whereas virgin CF (vCF) is valued at USD 33-66 per kg [14]. The production of vCF is not only expensive but also energy-intensive (its energetic cost is 183-286 MJ/kg) [15]. Recycled CF can decrease costs by 70% and energetic costs by almost 98% [16]. The energy saved by using rCF is equal to the annual electricity use of 175,000 homes [16]. The increased application of carbon fiber-reinforced composites across various industries, along with rising environmental concerns, requires the development of financially viable and effective recycling techniques. Different recycling techniques have been developed over the last twenty years. The most prominent techniques are mechanical, thermal (pyrolysis), and chemical (solvolysis) processes [15]. In mechanical recycling, fiber and matrix are separated by shredding followed by grinding, resulting in flakes, powder, and fibrous fractions [17]. In thermal recycling techniques, among which are pyrolysis and fluidized bed processes, heat is used to decompose matrices and convert them into gases, tar, and char [18,19]. Pyrolysis is used at an industrial scale by most recycling companies; for example, ELG Carbon Fibre operates with a capacity of 2000 tons/year [20]. Finally, the solvolysis technique relies on chemical reactions in different organic liquids at high-pressure or supercritical conditions to break down the matrix. Other techniques, such as electrochemical and biotechnological methods, have also been developed but are less advanced [21]. At present, more cost-effective ways of recycling CFRP are being developed. However, only a few of them offer proper business models for commercialization or integration into current waste management systems. Despite the increasing attention to recycling CFRPs, there is a gap in developing cost models and software tools for recycling carbon fiber composites. Limited studies have examined the financial performance of the CFRP recycling process. Li et al. conducted a life-cycle cost analysis of mechanical recycling and the further application of recycled carbon fibers [22]. According to the study, the low recovery rates from the process and the low value of rCF were not enough to cover the costs of processing the waste.
Meng et al., in turn, performed a financial analysis of the viability of the fluidized bed process for recycling CF and its further applications in the automotive industry [23]. The study provided a comprehensive financial model and sensitivity analysis, finding that carbon fibers can be recycled at a price of USD 5 per kg, equal to 15% of the price of vCFs. A study by Vo Dong et al. [24] developed an economic and environmental model of different waste disposal routes for assessing their performance. Besides the traditional disposal routes, such as landfilling and incineration, the recycling options considered were mechanical recycling, pyrolysis, microwave pyrolysis, and solvolysis in supercritical water. The study provided detailed knowledge about various financial aspects of the considered recycling techniques. Xu et al. [25] modelled the costs of end-of-life automotive components for different recycling options. The reuse (remanufacturing) options of a crankshaft and a composite-material oil pan, which involve reconditioning procedures, were selected for the study. The developed model provided a cost structure with a prominent example of activity-based cost estimation. Hagnell and Akermo [18] proposed a recyclate value model which modelled the potential of the closed-loop application of fiber-reinforced materials. The modelling tool evaluates the cost of recycled fiber in connection with the mechanical properties degraded by recycling. The study reported that 50% cost reductions can be achieved at a comparable level of mechanical properties when using recycled fiber for certain applications. Lefeuvre et al. [26] modelled a pyrolysis plant using the approach described in [27]. According to the authors, open-loop recycling (resulting in shredded CF) cost EUR 288 per 35.5 kg, whereas the same amount of material for closed-loop recycling (long CF equivalent to vCFs) accounted for EUR 2.91. Hoefer developed a framework for economic decisions on wind turbine blade disposal [28]. The developed framework has inputs such as blade parameters, selling price, landfilling tax, etc., which allow choosing between options such as remanufacturing, landfilling, and processing blades to sell a recyclate. The literature review indicated that no effort has been made to develop a knowledge-based cost modelling tool to support the selection of a recycling option for carbon fiber composites. Moreover, research in this field is limited in terms of industry type, recycling process, and supply chain considerations. In other words, there is no record of a system that considers several waste sources (manufacturing, industrial), recycling processes, and the whole recycling supply chain, including waste transportation and dismantling, when calculating the final cost of recycling CFRP. There is a lack of cost models that consider several factors simultaneously. Such a model could be helpful in understanding the recycling cost drivers and the influence of recycling plant parameters and desired quality on the cost of recyclates for each recycling method. The cost estimation of recycling, particularly at the conceptual design stage, is a critical and, at the same time, difficult task. This research work aimed to develop a cost estimation model and a knowledge-based prototype software tool for different techniques of recycling CFRPs. The system has the capability of selecting suitable recycling processes that meet the user's requirements.
Development of a Cost Model for Recycling CFRPs CFRP recycling stages and their associated cost elements, such as disassembly, transportation, capital investments (e.g., construction of a plant), and operating costs, were taken into consideration to provide a fundamental assessment of the economic viability of recycling carbon fiber composites. The cost model was developed so that recycling techniques can be assessed in terms of their capital costs (CAPEX), such as equipment/construction, and operational costs (OPEX), such as utilities, labor, depreciation, overhead, etc. A standard project lifespan of 10 years was assumed for assessing economic viability. Taxes and subsidies were not considered in the analysis and were assigned a value of zero (0), as tax legislation varies from state to state. However, these inputs can be altered by a user. The economic indicators that allow assessing the break-even price for selling rCFs and the utilities cost are presented at the end of this section. Additionally, a sensitivity analysis was performed to provide insight into the uncertainty of input data, such as the recovery rate and annual capacity, which could significantly affect the results. The contributions of variable and fixed costs were determined by performing classical estimates and comparisons with similar research works [23,24]. The cost-related input data are given in Table 2. A 10-year depreciation period with a linear pattern was assumed. The capital investment costs were determined using the rule of six-tenths, according to which cost data for a designed capacity can be adjusted to another intended capacity [29]. The operational costs, including utilities and energy costs, were obtained from the literature [30,31]. The labor cost was extracted from the official data of Eurostat (40-h working week with a wage of EUR 31.4 per hour) [32]. For all recycling techniques, it was assumed that the operating labor consists of four people; the same assumption was made by Vo Dong et al. [24]. These parameters can be adjusted by a user. In terms of the economic indicators, the approach used by Vo Dong et al. [24] was adopted and the following assumptions were made: 1. Utilities cost per 1 kg of waste (UC). This represents the sum of all utility expenses for the chosen method. 2. An average unit cost per 1 kg of waste recovered (UCW). For this purpose, a break-even value at zero net present value (NPV) is calculated. A discount rate of 10% is assumed for the calculations. 3. The main parameter assessed is the average unit cost per 1 kg of fiber recovered (UCF). This parameter allows determining the break-even price of selling the recovered product. The latter two parameters refer to costs from two different perspectives: the unit cost of recovered waste (UCW) could be useful for waste handlers, whereas the unit cost per fiber recovered (UCF) reflects the final cost of recycled fibers. The formula for NPV is given in Equation (1) [24]:

NPV = -C_tc + Σ_{t=1}^{n} [(1 - a) · CF_t + a · D] / (1 + α)^t    (1)

where C_tc is the total capital cost, CF_t the net annual cash flow in year t, a the tax rate (in this study assumed to be zero (0)), D the (linear) depreciation, α the discount rate (10%), and n the project lifespan. Cost Elements Three cost elements were considered in this study, namely, the capital cost of the recycling factory, the transportation cost, and the disassembly cost. The capital cost analysis focused on four recycling techniques: pyrolysis, mechanical recycling (grinding), the fluidized bed process, and solvolysis in supercritical water.
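To make the break-even logic concrete, the following is a minimal sketch of how a UCW- or UCF-style break-even price could be computed under the stated assumptions (10-year lifespan, 10% discount rate, zero tax, identical cash flows each year). The function name and the OPEX figure in the example are illustrative, not taken from the paper's software.

```python
def breakeven_unit_cost(capital_cost, annual_opex, annual_output_kg,
                        lifespan_years=10, discount_rate=0.10):
    """Break-even selling price (EUR/kg) at zero NPV, assuming zero tax
    and identical cash flows in every year of the project lifespan."""
    # Present-value annuity factor over the project lifespan
    annuity = sum(1.0 / (1.0 + discount_rate) ** t
                  for t in range(1, lifespan_years + 1))
    # Annual revenue needed so discounted cash flows exactly repay CAPEX
    required_revenue = capital_cost / annuity + annual_opex
    return required_revenue / annual_output_kg

# UCW prices per kg of waste; UCF per kg of recovered fiber. Here a
# fluidized-bed-like plant: EUR 4.1M CAPEX (from the text), an
# illustrative EUR 1.2M OPEX, 1,000 t/yr of waste, 65% fiber content.
ucw = breakeven_unit_cost(4.1e6, 1.2e6, 1_000_000)
ucf = breakeven_unit_cost(4.1e6, 1.2e6, 1_000_000 * 0.65)
```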
These processes have been considered by both the research community and industry and offer tangible results. This work focuses on the recovery pathways of carbon fiber. The choice of these methods is based on the literature review results and current practices predominant in the CFRP recycling industry. The material assessed in the study is assumed to have a CF content of 65%, except for the material considered in the supercritical water study, in which the authors tested material with 50% fiber content [33]. Pyrolysis is one of the most developed and recognized methods in the industry, with a good recovery rate of the fibers' mechanical properties despite high energy requirements. According to the study by Zhang et al. [34], the technology readiness level (TRL) of pyrolysis for CFRP has a value of eight (8), which corresponds to the "system/subsystem development" level. On the other hand, the solvolysis process, which performs best in terms of recovery rates of CF properties, corresponds to a TRL of 4 (the "technology development" stage), most likely due to issues in achieving positive profit values. Mechanical recycling is the simplest method for the recycling of composite materials. The material in this method is processed using shredders and mills. This technology results in considerable deterioration of the mechanical properties of the rCF, which tends to limit its use in high-value parts [5]. Finally, the fluidized bed process is one of the emerging methods and is characterized by relatively high tolerance to contaminated materials [35]. Although the recovery rates of the mentioned technologies do not reach 100%, the retention rate of the properties of recovered carbon fibers is promising. In this study, a 100% fiber recovery rate was assumed for the following processes: pyrolysis, the fluidized bed process, and solvolysis in supercritical water [24]. The recovery rate for the grinding process is assumed to be 40%, which was adopted from the study of Li et al. [22]. Capital Cost The capital cost estimation is adopted from the literature by combining the rule of six-tenths and the chemical engineering plant cost index (CEPCI) [36,37]. According to the rule of six-tenths, the approximate cost of a new facility can be estimated based on the historical cost data of a previous facility of a different capacity. After that, the CEPCI is used to adjust the cost data to the current period of estimation. The estimates were carried out in the year 2020, with the latest known CEPCI being that of 2019 [38]. The formula used for calculating the adjusted capital cost per design is shown below:

C_new = C_ref × (Q_new / Q_ref)^0.6 × (CEPCI_new / CEPCI_ref)    (2)

where C_ref is the historical capital cost of the reference facility, Q_ref and Q_new are the reference and intended capacities, and CEPCI_ref and CEPCI_new are the index values for the reference year and the estimation year. Table 3 summarizes the capital costs used in the study with the adjusted CEPCI indices used for the cost model. Pyrolysis Pyrolysis is a thermal method that decomposes the matrix in the absence of oxygen at temperatures varying between 400 and 700 °C [35]. The method offers a number of advantages over alternatives in that it recovers fibers with retained mechanical properties; however, it still has its drawbacks. The decomposition process leaves char on the surface of the material, which in turn negatively affects the performance characteristics of the fiber [5]. Recent developments have allowed for the removal of the char by applying carbon dioxide and water vapor, opening new horizons for more advanced application of the technology in the industry [43].
It is important to mention that the cost model for pyrolysis in this work does not include the char-removal step but only the main expenditure on the process. The capital costs were adapted from Vo Dong et al. [24], i.e., an estimate of EUR 10,000,000 for a capacity of 50,000 tons of waste recovered annually. The capital costs were adjusted according to the CEPCI. The energy consumption rate of 30 MJ/kg is taken as a reference value from the study of Witik et al. [44]. However, some studies report energy consumption rates as low as 2.8 MJ/kg [45]. The energy from the accompanying products of the process was not considered. Mechanical Recycling Mechanical recycling is the most mature method of recycling composite materials, with several steps of decreasing recyclate size [34]. In this method, the material is cut into pieces 50-100 mm in size and fed into a shredder. The pieces are then transformed into particles 10 mm to 50 µm in size [46]. The resulting recyclate material can be categorized by fiber content and fraction; Palmer et al. conducted a study on the classification of the recyclate [47]. The capital costs are adopted from the ERCOM plant, with a capacity of 4000 tons per year and a shredder cost of EUR 200,000 [40]. Hammer mills are available on the market at approximately a quarter of the shredder's cost, with a capacity of 25-40 t/hour [48]. The plant was established in 1990 and was shut down in 2004 for economic reasons [49]. The capital cost values were adjusted accordingly from the year 1990 using the CEPCI of 358 [41]. The energy consumption during grinding is adopted from the equation derived by Howarth et al. [50], where E is the energy consumption in MJ/kg, with an approximate consumption of 0.27 MJ/kg at a capacity of 150 kg/hour. Fluidized Bed Process The fluidized bed process was developed to recover high-grade glass and carbon fibers at moderate temperatures. In the recycling process, scraps reduced in size to up to 25 mm are fluidized with a hot stream of air in a bed at temperatures varying between 450 and 550 °C [46]. Although initial studies on the fluidized bed process reported losses in tensile strength, Zheng et al. [51] reported an over 95% recovery rate of fibers using the fluidized bed technique. The distinctive feature of the fluidized bed process is its capability to treat materials with contaminants. In general, the fluidized bed process requires capital investments of EUR 4.1 million for a capacity of 1000 tons/year [23]. The estimate was adjusted by the latest known CEPCI, that of 2019. The total energy consumed by the fluidized bed process has been estimated to be 6 MJ/kg [52]. Supercritical Water Solvolysis in supercritical water is a process in which the polymer matrix is decomposed to recover CFs. The method provides the highest recovery rate, with no or minimal decrease (1-2%) reported in tensile strength compared with original fibers [53,54]. However, the method is not widely commercialized due to difficulties in achieving profit. It has been reported that substantial capital investments are needed for equipment that can withstand the excess pressures and temperatures of the process [34,55]. According to Knight [33], solvolysis in supercritical water requires EUR 4.9 million in capital investments for a plant working at a capacity of 150 kg/h.
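As an illustration of the capital cost adjustment described above, the sketch below applies the six-tenths rule and a CEPCI ratio (Equation (2)) to the pyrolysis baseline. The CEPCI values shown are placeholders for the reference-year and estimation-year indices, not figures quoted from Table 3.

```python
def adjusted_capital_cost(ref_cost, ref_capacity, new_capacity,
                          cepci_ref, cepci_new, exponent=0.6):
    """Rule of six-tenths plus CEPCI adjustment (Equation (2))."""
    scale = (new_capacity / ref_capacity) ** exponent   # capacity scaling
    inflation = cepci_new / cepci_ref                   # time adjustment
    return ref_cost * scale * inflation

# Pyrolysis baseline from the text: EUR 10M at 50,000 t/yr, rescaled to
# a 1,500 t/yr plant; 567.5 and 607.5 are placeholder CEPCI values.
capex = adjusted_capital_cost(10e6, 50_000, 1_500, 567.5, 607.5)
print(f"Adjusted CAPEX: EUR {capex:,.0f}")
```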
Additionally, for 1 kg of composite material waste (50% wt), this recycling method requires 3.47 kWh of electricity, 19.75 kWh (1.90 m³) of natural gas, 96 kg of cooling water, and 4.6 kg of pure water. The prices are indicated in Table 4. Transportation Cost As the model considered in this study is based on a hypothetical composite material treatment, specific locations of the theoretical plants are not defined, which creates uncertainty. Nevertheless, the transportation cost assumed in this study was adopted from Li et al. as EUR 0.047 per km [22]. Disassembly Cost Dismantling costs for the automotive industry were assumed to be EUR 1.53 per kg based on the data obtained from Li et al. [22]. For the wind turbine industry, disassembly costs were extrapolated from cost values pertinent to various wind turbine sizes in the Suncor Energy Project and were assumed to be EUR 0.42 per kg [56]. For the aerospace industry, dismantling costs were obtained from publicly available sources; an average value of EUR 0.54 per kg is assumed based on the calculation of the dismantling costs of a Boeing 747 reported by Cacciottolo [57]. It is important to note that these values are very rough and were used as indicative values only; thus, the user is advised to calculate the disassembly costs for each case and enter them into the system. The Overall Architecture of the Proposed System The CFRP recycling process flow is shown in Figure 1, which indicates the required steps from the end-of-life waste to the resulting recycled CF. Costs are incurred at all stages and are therefore added to the total cost estimation. For example, dismantling, transportation, and size reduction costs exist in all types of recycling processes. However, only the mechanical recycling method requires cleaning, which increases the cost of the process. Moreover, the size reduction of large-scale materials, such as wind turbine blades, might be necessary before transportation. It is worth mentioning that the treatment of residues (for example, ash) after recycling CF is not considered in the total cost calculation due to its negligible value. Figure 2 illustrates the overall structure of the proposed software system for the cost estimation of recycling carbon fiber composites. The cost of recycling consists of dismantling costs, capital costs, and operational costs. Each cost element is estimated according to the user input parameters and predefined coefficients allocated to each cost element (e.g., labor, transportation cost). The system consists of two main modules: (1) a knowledge-based system (KBS), which is composed of if-then rules to select the appropriate recycling process, and (2) a database that stores all the data entered by the user along with the waste recycling specification data. The proposed rule-based system selects appropriate recycling processes and estimates the capital, operational, disassembly, and transportation costs required for CFRP recycling.
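To show how these cost elements combine, here is a minimal sketch of a per-batch total. The text does not specify whether the EUR 0.047/km transport figure applies per shipment or per tonne-km, so treating it as a per-shipment rate here is an explicit assumption, and all names and the example numbers other than the quoted rates are illustrative.

```python
def total_recycling_cost(waste_kg, distance_km,
                         dismantling_eur_per_kg, processing_eur_per_kg,
                         transport_eur_per_km=0.047):
    """Sum the cost elements described above for one batch of waste."""
    dismantling = dismantling_eur_per_kg * waste_kg
    transport = transport_eur_per_km * distance_km  # assumed per shipment
    processing = processing_eur_per_kg * waste_kg   # CAPEX share + OPEX
    return dismantling + transport + processing

# Automotive example: 1,000 kg of waste hauled 250 km, EUR 1.53/kg
# dismantling (from the text), EUR 2.0/kg processing (illustrative).
cost = total_recycling_cost(1_000, 250, 1.53, 2.0)
```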
For example, the algorithm for the selection of the recycling process according to predetermined rules is given in Table 5:
IF (Quality of recovered fibers is not important) AND (Scalability of the process is very important) AND (Tolerance for contamination is very important) AND (Capital cost amount is not important) THEN (The recycling process is pyrolysis)
IF (Quality of recovered fibers is not important) AND (Scalability of the process is very important) AND (Tolerance for contamination is very important) AND (Capital cost amount is very important) THEN (The recycling process is mechanical)
IF (Quality of recovered fibers is very important) AND (Scalability of the process is not important) AND (Tolerance for contamination is very important) AND (Capital cost amount is not important) THEN (The recycling process is solvolysis)
Figure 2. Overall structure of the developed system.
The system scenario of the proposed cost analysis process is shown in Figure 3. The system prompts a user to enter all the necessary characteristics of the waste material to be recycled, such as the waste type and its weight. These data are stored in the project database. The user selects the desired recycling process or chooses the automatic selection feature, which suggests the recycling method according to the user's previously specified criteria. The waste characteristics are the main input to the cost estimation module.
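A minimal sketch of how the if-then rules from Table 5 could be encoded in the knowledge-based system; only the three rules quoted above are included, and the names are illustrative rather than the paper's actual implementation.

```python
# Sketch of the Table 5 if-then rules. Key order: (fiber quality,
# scalability, contamination tolerance, capital cost importance).
RULES = {
    ("not important", "very important", "very important", "not important"):
        "pyrolysis",
    ("not important", "very important", "very important", "very important"):
        "mechanical",
    ("very important", "not important", "very important", "not important"):
        "solvolysis",
}

def select_process(quality, scalability, tolerance, capital):
    """Return the process a matching rule fires for, else None so the
    system can fall back to the TOPSIS ranking."""
    return RULES.get((quality, scalability, tolerance, capital))
```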
The selection of the cost estimation and recycling method requires continuous interaction between different modules, such as the waste specification database and the CFRP waste recycling process knowledge base. The knowledge base module consists of a set of rules for selecting an appropriate recycling process by utilizing the Technique of Ranking Preferences by Similarity to the Ideal Solution (TOPSIS). The TOPSIS method finds the alternative that is closest to the ideal solution and farthest from the negative ideal solution [58]. Optimization Module To propose the appropriate recycling process for selection, a multicriteria decision-making analysis was conducted according to the user's potential criteria/requirements. TOPSIS was used to solve the multicriteria decision-making (MCDM) problem. TOPSIS is a convenient and simple technique that can take into account a significant number of alternatives. The purpose of this method is to calculate the distance to the ideal solution, adjusted by the user's preferences [58]. Criteria Quantification Four criteria are available for assisting the user in the process of selecting the desired recycling method. Values ranging from 1 to 5 are assigned, corresponding to the importance of the criterion from lowest to highest. Table 6 shows the quantified values for the assessed methods; the values are assigned based on the information obtained from the literature review. The chosen criteria are then analyzed following the steps below: 1. The comparison matrix is constructed, based on the four (4) recycling methods and the respective criteria; according to Lee and Chang [59], the columns represent criteria and the rows represent the respective methods. 2. The decision matrix is normalized using Equation (4) [58-60]:

r_ij = x_ij / sqrt(Σ_i x_ij²)    (4)

3. The normalized matrix is adjusted by the weights derived from the user inputs, calculated using Equation (5) [58-60]:

v_ij = w_j · r_ij    (5)

4. The ideal positive and ideal negative solutions are determined using Equations (6) and (7) [58-60]:

A+ = max_i v_ij = the maximum value of each column of v_ij    (6)
A- = min_i v_ij = the minimum value of each column of v_ij    (7)

where A+ is the positive ideal solution and A- is the negative ideal solution. System Implementation and Validation A prototype software-based system was developed to implement the cost modelling methodology using Python 3 and PyQt5. Python is a powerful object-oriented programming language that supports big data and complex mathematics [61]. It also provides the necessary tools to build knowledge-based systems. PyQt5 is a Python library used for building graphical user interfaces (GUIs). It allows the user interface to be written in code that is transformed into an automatic layout [62]. The software runs on any PC under Windows OS or macOS and is designed to be menu-driven so that fewer manual entries are required.
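Since the prototype is written in Python, the TOPSIS ranking step might look like the following minimal sketch; it mirrors Equations (4)-(7) above but is an illustration, and the scores in the example are invented, not Table 6 values.

```python
import numpy as np

def topsis_rank(matrix, weights):
    """Closeness coefficient of each alternative (row) to the ideal
    solution; criteria (columns) are scored so that higher is better."""
    m = np.asarray(matrix, dtype=float)
    r = m / np.linalg.norm(m, axis=0)            # normalization, Eq. (4)
    v = r * np.asarray(weights, dtype=float)     # weighting, Eq. (5)
    a_pos, a_neg = v.max(axis=0), v.min(axis=0)  # ideals, Eqs. (6)-(7)
    d_pos = np.linalg.norm(v - a_pos, axis=1)    # distance to ideal
    d_neg = np.linalg.norm(v - a_neg, axis=1)    # distance to anti-ideal
    return d_neg / (d_pos + d_neg)               # higher = closer to ideal

# Rows: pyrolysis, mechanical, fluidized bed, solvolysis; columns:
# quality, scalability, contamination tolerance, capital cost.
scores = topsis_rank([[3, 5, 5, 3], [2, 5, 5, 5],
                      [4, 3, 5, 3], [5, 2, 5, 2]],
                     weights=[0.4, 0.2, 0.2, 0.2])
```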
A user-friendly interface has been developed to allow users to use the software efficiently. In the system, the user is asked to answer questions and enter parameters in four steps, which are represented in Figure 4a-c. The user has to specify a material type and annual capacity. He or she should select the industry sector generating the waste and input the transportation distance between the end-of-life products or manufacturing waste and the recycling factory. The system allows the user to choose a recycling process, or it can recommend a recycling process based on the user inputs and TOPSIS or based on predefined parameters. Figure 5 shows the cost estimation results generated by the developed system. The system output illustrates the total cost of the four recycling processes. The system enables the user to change the input parameters and compare results.
System Validation: Case Study Public data from a leading carbon fiber composite recycling company were employed in the developed system. ELG Carbon Fibre, targeted as a case study, is a recycling company based in the UK with 60 employees and a 4000 m² warehouse. The pyrolysis furnace installed at this company has a capacity for recovering 1500 tons of carbon fiber per year. The process contains three steps used for carbon fiber recovery and further production: (1) the mechanical shredding of laminates and prepregs; (2) a pyrolysis process; and (3) milling/non-woven mat production. At the current supply chain capacity of 1300 tons, the recycled products cost about EUR 10-20 per kg, whereas virgin fiber products vary between EUR 30 and 40 per kg [63]. To validate the system, the closest parameters to the aforementioned conditions were input into the system. Table 7 shows the values of the input parameters provided to the system and the resulting unit costs per kg of waste and per kg of recovered CF. The unit cost of recovered fiber is EUR 6 per kg, and the fiber can be used as a raw material for the further processing and creation of rCF products. The difference between the indicated value and the system output can be explained by several factors. Firstly, it is important to note that the production of woven mats is not considered in this study, as the scope of the system concerns only the recycling process itself. Secondly, the estimation of recycling costs does not include taxes, which vary between countries. Finally, it is clear that the price of recycled products ranging between EUR 10 and 20 per kg also includes profit margins, which allow the continuous operation of these plants, whereas EUR 6 per kg as the unit cost of rCF is a reasonable estimate for the main operation of fiber reclamation. Sensitivity Analysis Sensitivity analysis is an approach that shows how much a single uncertain parameter could affect the output value. In this study, the system output is analyzed by changing input parameters such as the annual capacity, the recycling process, and the carbon fiber recovery rate.
It should be noted that the sensitivity analysis does not consider the effect of factors acting simultaneously on the cost estimate, but only separately. Therefore, there is no probability distribution, and the sensitivity analysis is carried out based on single values. Figure 6 shows the average unit cost per mass of recovered carbon fiber (UCF) for four different recycling processes and four different annual capacities. It assumes a 100% carbon fiber recovery rate and shows that as the annual capacity increases, the unit cost of the recovered fiber decreases. The increase in annual recycling capacity has a significant effect on the UCF of all processes except supercritical water. The difference in recycling costs between 500 and 4000 tons for the fluidized bed process, mechanical recycling, and pyrolysis was 43%, 35%, and 29%, respectively. However, supercritical water showed only an 11% decrease in the UCF under the same terms. Figure 6. Unit cost per mass of recovered carbon fiber at 100% recovery rate. In Figure 7, the average unit cost per mass of recovered carbon fiber is presented against the recovery rate of carbon fiber. The recovery rate was varied from 10% to 100%. Logically, increasing the recovery rate reduces the average recycling cost of recovered carbon fiber. Supercritical water has the highest UCF, regardless of recovery rate, compared to the other methods. Thermal methods, including pyrolysis and the fluidized bed process, result in similar UCF with increasing recovery rates; however, the UCF from pyrolysis is still lower than that of the fluidized bed process at any recovery rate. At the chosen capacity, these methods must have a recovery rate higher than 40% to be competitive with the cost of carbon fiber made from the polyacrylonitrile (PAN) precursor. Mechanical recycling has the lowest UCF; although, at a 10% recovery rate, the UCF of the process (EUR 13.1 per kg) becomes less attractive compared to the thermal methods at recovery rates higher than 10%. It is also noted that the UCF from mechanical recycling at the recovery rate adopted in this study (40%), EUR 3.3 per kg, is still higher than the costs yielded by pyrolysis and the fluidized bed process at their recovery rates. From the analysis, it can be stated that the UCF from pyrolysis, mechanical recycling, and the fluidized bed process at the shown capacities can successfully compete with the manufacturing costs of cheap lignin-based carbon fiber (EUR 5.3 per kg) [64]. Solvolysis in supercritical water resulted in the highest UCF, which can be explained by large initial investments and utility costs. However, the process has the highest retention rate of properties among the methods and has potential in high-value applications. The estimated cost in the analysis (EUR 17.9-20.1 per kg) is still comparable with the reported cost of manufacturing carbon fibers from the polyacrylonitrile (PAN) precursor of non-aerospace grade (EUR 18.3 per kg), which still makes the process economically viable [65].
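The one-at-a-time procedure described here can be sketched as a simple loop; the baseline figures below are illustrative placeholders, and the sketch reuses the hypothetical breakeven_unit_cost helper from the earlier NPV example.

```python
# Vary one input (here the recovery rate) while holding the others at
# their baseline values; all numbers are illustrative, not Table 2 data.
baseline = {"capital": 10e6, "opex": 2.0e6,
            "waste_t": 2_000, "fiber_content": 0.65}

def ucf(params, recovery):
    fiber_kg = params["waste_t"] * 1_000 * params["fiber_content"] * recovery
    return breakeven_unit_cost(params["capital"], params["opex"], fiber_kg)

for recovery in (0.10, 0.40, 0.70, 1.00):
    print(f"recovery {recovery:.0%}: UCF = EUR {ucf(baseline, recovery):.2f}/kg")
```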
Conclusions Estimating the end-of-life treatment cost is vitally important for early-stage designers, manufacturers and industry members in order to optimize the product and budget.
Currently, the recycling industry spends a lot of resources on the cost modelling of such new systems, especially in their early stage of development. Cost estimation requires expert knowledge of the technical and business processes of recycling, which is difficult to gain owing to a lack of data and information available in the field. Therefore, a knowledge-based system for the cost prediction of various carbon fiber recycling techniques has been proposed. The recycling techniques considered in this study were mechanical recycling, pyrolysis, the fluidized bed process, and solvolysis in supercritical water. The prototype software was developed with a user-friendly interface, a knowledge-based system, and an optimization tool for selecting the suitable recycling process for different scenarios. The developed methodology estimates the total costs of CFRP recycling according to the specified inputs. It also takes into account exogenous factors such as transportation costs, disassembly costs, and industry and material differences. Moreover, the optimization module based on TOPSIS assists the user in choosing the recycling process based on the most important criteria, such as capital investments, scalability, the quality of fibers, and contamination tolerance. Additionally, the sensitivity analysis revealed that all methods benefit from economies of scale, though the supercritical water technique benefits the least. A brief comparison with the prices of virgin carbon fibers revealed that all methods are cost-competitive, though supercritical water requires an almost 100% recovery rate to be economically viable. The findings of this research work could provide insights for both groups of decision-makers, namely waste handlers and waste recyclers. However, the focus of the work was on estimating recycling costs and recommending a suitable recycling process based on the user's needs. Further research efforts are required to examine possible applications of rCFs and to estimate the costs of manufacturing products from rCFs. Moreover, investments should be made to develop a data management approach in order to feed the system with appropriate and up-to-date information from the industry and further automate the cost estimation process. In addition, the impact of uncertainty factors on the cost estimation of recycling CFRPs requires further investigation. The cost drivers in the recycling processes might vary and alter the final cost of recycling CFRPs depending, for example, on a country's energy balance. Hence, the development of a cost uncertainty estimation framework and its incorporation into the system could be a future research area. This would allow estimating the range of recycling costs and conducting statistical analyses with confidence intervals, improving the reliability of the estimates provided by the system.
Measuring service quality at an online university: using PLS-SEM with archival data The aim of this study is to analyze, evaluate and validate the NSE (National Student Enquiry) as a service quality measure helping both higher education institutions (HEIs) and students in their decision making. Every year the Dutch foundation 'Studiekeuze123' sends out a survey (the NSE) to collect data on service quality regarding education at HEIs in the Netherlands. We used the 2019 NSE data from the only e-learning university in the Netherlands, the Open Universiteit (OUNL), containing a sample of 1287 students. PLS-SEM was used to analyze a conceptual model in order to understand the service quality factors that promote students' level of satisfaction and willingness to recommend the HEI. Overall, the findings reveal that the quality of the NSE is sufficient for use in performance analysis. Nine out of the twelve service components taken into account for the OUNL are found to be statistically significant in affecting students' satisfaction and willingness to recommend. The results help HEIs promote and manage students' perceptions of the quality of education and support students in their decision-making process. Since many HEIs had to make a transition from onsite to online education within a short period of time due to the Covid-19 pandemic, service quality became a major concern for HEIs. As online learning systems are expected to stay, analyzing the service quality of the OUNL as a reputed online HEI can help other HEIs get their online learning systems on track. Introduction Higher education (HE) acts as a major driver of economic competitiveness (Singh & Prasad, 2016) and is vital for the development of a country's human capital (Annamdevula & Bellamkonda, 2016b; Gupta & Kaushik, 2018). The more, and the better, the education, the more resistant and resilient a person, a nation or a civilization becomes. The quality of education is influenced by, and has an effect on, a number of stakeholders (Mahapatra & Khan, 2007; Srikanthan & Dalrymple, 2007), such as providers of resources (e.g. public and private funding bodies), users of outputs, i.e. graduates (e.g. employers), and employees of the higher education institution (HEI), e.g. academics and administrators. Students, however, are suggested to be the primary recipients of the service provided and are considered to be the most relevant stakeholders of HEIs (Abdullah, 2006; Annamdevula & Bellamkonda, 2016b; Bowden, 2011; Gremler & McCollough, 2002; Hill, 1995; Marzo-Navarro et al., 2005; Sander et al., 2000; Sultan & Wong, 2014). The customer-centric (or student-centered) perspective on the quality of education (Sultan & Wong, 2012), therefore, will be the central focus of our study. Service quality in higher education from a student's perspective has been examined and empirically tested in a number of studies. Chitty and Soutar (2004), for example, empirically tested the European customer satisfaction index (ECSI) for its applicability in a HE setting. Abdullah (2006) developed the HEdPERF scale as an instrument for measuring service quality specifically in the higher education sector. The PHEd measure is presented by Sultan and Wong (2010b) as a comprehensive performance-based service quality model applicable to HEIs. Senthilkumar and Arulraj (2011) and Annamdevula and Bellamkonda (2016b) developed and validated service quality instruments for applicability to HEIs specifically in India, called SQM-HEI and HiEduQual, respectively.
The HEDQUAL scale, developed by Icli and Anil (2014), is a measurement scale for service quality in higher education, particularly for MBA programs. Teeroovengadum et al. (2016) introduced the higher education service quality scale HESQUAL as a second-order factor model integrating both the functional and the technical aspects of higher education quality. The SERVQUAL scale (Parasuraman et al., 1988, 1991) and the SERVPERF scale (Cronin & Taylor, 1992), however, have received the most attention in the literature on educational service quality (Brochado, 2009; Sultan & Wong, 2012). Though there is no doubt about the importance of service quality in higher education, there is no common consensus on the type and number of service quality dimensions, nor on how to measure service quality in a HE context (Clewes, 2003; Annamdevula & Bellamkonda, 2016a, b). There is, however, substantial evidence that service quality in an educational context has to be regarded as a multidimensional construct (Teeroovengadum et al., 2019) containing multi-item dimensions (Gupta & Kaushik, 2018). Concluding, a variety of instruments have been developed to assess the quality of higher education. The NSE (National Student Enquiry), issued by the Dutch foundation 'Studiekeuze123' for the first time in 2010, is one of these instruments. It is a survey employed on a regular basis. The NSE is used not only by HEIs to evaluate and rank their educational quality, but also by potential students to support their buying decision process, since choosing a HEI is regarded as an uncertain and high-risk decision (Sultan & Wong, 2013). The NSE, therefore, is considered a prime information source helping students, both nationally and internationally, in selecting a particular HEI. The NSE, primarily measuring the service quality of HEIs, has therefore become a means of differentiating one HEI from others and is a relevant tool in the Dutch HE arena. Although the NSE is a national student survey that has existed for over ten years and has become an important quality measure for HEIs in the Netherlands, as well as a tool in students' buying decision process, the NSE has so far been neither presented nor discussed in the international academic literature focusing on service quality in a HE context. Our aim, therefore, is not to develop a new or adapted measure for quality assessment in the context of HEIs, but to present, analyze, and evaluate the NSE as an existing tool to measure students' perceived service quality and to assess its effects on students' satisfaction and willingness to recommend the HEI. This will be done in the context of a public, online university (i.e., the Open Universiteit in the Netherlands: OUNL). The OUNL is one of fourteen universities in the Netherlands but is the only online university. As many HEIs had to make a transition from onsite to online education within a short period of time due to the Covid-19 pandemic, service quality became a major concern for HEIs, since the conditions and characteristics of online education differ from those of the traditional face-to-face approach (La Rotta et al., 2020). The OUNL has provided online learning for over 35 years and has reached top-three positions with respect to student satisfaction for more than fifteen consecutive years. As online learning systems are expected to stay even after the Covid-19 pandemic recedes, analyzing the service quality of the OUNL as a reputed online HEI can help traditional class-based HEIs get their online learning systems on track.
This paper is organized as follows. In the next section we first embed the NSE in theory, which is the opposite of the regular approach of deriving a tool from theory. The characteristics and components of the NSE are linked to the literature. Then, the methodology applied in this study is presented. In the results section we focus on testing the reliability, validity and applicability of the NSE measurement using PLS-SEM with archival data collected by 'Studiekeuze123'. Finally, research conclusions are presented, implications identified, and limitations and directions for future research highlighted. Literature review Since we use secondary data collected through the NSE survey tool initiated by the 'Studiekeuze123' foundation, we have to theoretically frame the tool and the type of data gathered. As Studiekeuze123 measures service quality in HE, student satisfaction and students' willingness to recommend, we theoretically elaborate on these three concepts. Service quality in HE Both the definition of service quality in HE and its measurement are debated extensively in the literature (Brochado, 2009; O'Neill & Palmer, 2004). As a result, there is hardly any consensus, either about the definition of service quality in a HE context or about how to measure it (Annamdevula & Bellamkonda, 2016a; Clewes, 2003; Sultan & Wong, 2010a). Definitions of service quality in HE differ, for example, depending on the perspective taken: either service quality as a result of the difference between expectations and performance, based on the gaps model as presented by Parasuraman et al. (1988, 1991), or service quality in terms of the perception component alone, that is, without comparing to expectations (Brochado, 2009; Abdullah, 2006). In measuring service quality in HE a number of alternative instruments have been implemented and evaluated (e.g. Brochado, 2009; Gupta & Kaushik, 2018). Brochado (2009), for example, compares five alternative measures of service quality in HE. Though SERVQUAL, developed by Parasuraman et al. (1988, 1991), is the most popular scale in the HE setting (Gupta & Kaushik, 2018), Brochado (2009) concludes that both the SERVPERF scale and the HEdPERF scale (Abdullah, 2006) are best capable of measuring service quality in HE. SERVQUAL measures service quality in terms of the difference between expectations and performance perceptions using the gaps model as presented by Parasuraman et al. (1985, 1988, 1991). SERVPERF, HEdPERF and the PHEd measure (Sultan & Wong, 2010b), however, measure service quality without comparing performance to expectations. Many researchers now believe that a performance-based measure is an improved means of measuring service quality in HE (Abdullah, 2006; Cronin & Taylor, 1992, 1994; O'Neill & Palmer, 2004; Sultan & Wong, 2010a). Regarding the measurement of service quality in HE, the NSE does not include performance perceptions to be compared with expectations as in the confirmation-disconfirmation paradigm (Brochado, 2009). The NSE follows the perception paradigm, since it only includes the students' perceptions of performance as a determinant of service quality. The approach of the NSE therefore fits with Cronin and Taylor (1992), who argue that service quality is derived from perceptions of performance alone, and that a performance-based measure explains more of the variance in measuring service quality than a perceptions-minus-expectations measure (Cronin & Taylor, 1994).
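The contrast between the two paradigms can be made concrete with a short R sketch; the item names and ratings below are invented for illustration only.

```r
# Hypothetical data: perception (P) and expectation (E) ratings of three
# respondents on three service items, both on a 5-point scale.
P <- data.frame(item1 = c(4, 5, 3), item2 = c(4, 4, 4), item3 = c(5, 3, 4))
E <- data.frame(item1 = c(5, 5, 4), item2 = c(3, 4, 5), item3 = c(5, 4, 4))

servperf <- rowMeans(P)      # perception-only score (SERVPERF; NSE approach)
servqual <- rowMeans(P - E)  # gap score (SERVQUAL approach)
cbind(servperf, servqual)
```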
The superiority of perception-only measures over perception-minus-expectation measures in an educational setting is supported further by Li and Kaye (1998) and Dabholkar et al. (2000). The perception-only paradigm as applied in the NSE, therefore, is suggested to be a valid approach. Dimensions of service quality in HE Not only have alternative instruments measuring service quality in HE been studied empirically and conceptually; the dimensions of service quality in HE have been studied as well (e.g. Gupta & Kaushik, 2018). Gupta and Kaushik (2018: 580) noticed "a huge variation in the items as well as constructs while exploring the dimensions", "moving from a simple uni-dimensional construct to complex multidimensional constructs" (592). In their extensive literature review Gupta and Kaushik (2018) identified a range of three up to and including twenty-two dimensions of service quality in HE, the mode being five dimensions. Sultan and Wong (2012) identified a minimum of three, a maximum of eight and a mode of five service quality dimensions after investigating fifteen studies in HE. The survey instrument called the NSE includes a total of nineteen dimensions of the service quality concept. Twelve of them are applicable to the OUNL, the only university in the Netherlands offering distance learning as its core activity. Since the OUNL is (1) an academic university as opposed to a university of applied sciences, (2) a university for distance learning only, and (3) a university whose main language is Dutch, seven dimensions are not applicable. Table 1 shows all nineteen dimensions of the service quality concept included in the NSE (as of 2019) and the twelve applicable to the OUNL (note to Table 1: +: applicable; n.a.: not applicable; *: two items excluded due to a high proportion of missing values; **: three items excluded since they are not applicable to distance learning, e.g. the number of seats, availability of working stations, and physical library). Sultan and Wong (2010b, 2012, 2013) suggest that service quality models in HE should include "the three critical aspects of service quality, academic, administrative and facilities" (Sultan & Wong, 2013: 78). In Appendix 2 Table 7, we illustrate how these three critical aspects of service quality are captured by the performance-only service quality models developed for the measurement of perceived service quality specifically in the higher education sector, i.e.: HEdPERF, PHEd, HESQUAL, and NSE. Despite the variation in items and number of dimensions we conclude that the main service quality aspects academic, administrative and facilities are covered by the NSE. Student satisfaction In line with the student perspective on service quality in education, satisfaction is also defined from the perspective of the student. The student's perspective is central since students are the primary recipients of the service provided and, therefore, are considered to be the most relevant stakeholder of HEIs (Annamdevula & Bellamkonda, 2016b; Bowden, 2011; Gremler & McCollough, 2002; Hill, 1995; Marzo-Navarro et al., 2005; Sander et al., 2000; Sultan & Wong, 2014). Student satisfaction can be defined as "the favorability of a student's subjective evaluation of the various outcomes and experiences associated with education" and "is being shaped continually by repeated experiences in campus life" (Elliott & Shin, 2002: 198).
In the NSE, student satisfaction is driven by a student's general assessment of "a web of interconnected experiences" associated with the HEI (Elliott & Shin, 2002: 198) and, therefore, is a cumulative concept (Teeroovengadum et al., 2019) measured through a multi-item scale and suggested to be a global or overall measure of satisfaction. Assessing overall student satisfaction using a composite satisfaction scale is suggested "to have more diagnostic value for strategic decision making" (Elliott & Shin, 2002: 207), which, in the end, is Studiekeuze123's main goal in distributing the NSE. Willingness to recommend Customer loyalty is described as "a deeply held commitment to rebuy or repatronize a preferred product/service consistently in the future, thereby causing repetitive same-brand or same brand-set purchasing, despite situational influences and marketing efforts having the potential to cause switching behavior" (Oliver, 1999: 34). Based on this definition, customer loyalty contains an attitudinal component and a behavioral component (Baldinger & Rubinson, 1996; Hennig-Thurau et al., 2001; Koslowsky, 2000; Marzo-Navarro et al., 2005). Loyalty, however, "might not be an appropriate consequence in the context of higher education; instead, behavioural intention may play a vital role" (Sultan & Wong, 2013: 79; Sultan & Wong, 2014). The NSE meets this observation since it measures students' loyalty through a behavioral intention (i.e., the willingness to recommend the HEI) as a reflection of the attitudinal component of the loyalty concept. According to the theory of reasoned action, behavioral intentions are accurate predictors of corresponding behavior (Ajzen & Fishbein, 1980; Fishbein & Manfredo, 1992). Integrated model including service quality, satisfaction and loyalty The number of studies that have examined integrated models of service quality in a HE context is limited. Sultan and Wong (2012) provide an overview of major studies and conclude that, overall, students' satisfaction and loyalty are the main target variables. Perceived service quality has been found to be the critical determinant of satisfaction in different contexts (e.g., Carlson & O'Cass, 2010; Cronin & Taylor, 1992; Gounaris et al., 2010; Parasuraman et al., 1985, 1988; Schijns et al., 2016), including a HE context (e.g., Hasan et al., 2008; Alves & Raposo, 2007; Annamdevula & Bellamkonda, 2016a, b; Dehghan et al., 2014; Guolla, 1999; Ham & Hayduk, 2003; Sultan & Wong, 2012, 2013). Satisfaction is suggested to be an essential step in loyalty formation (Oliver, 1999). Student satisfaction has been found to be the crucial mediator in the effects of service quality on student loyalty in general (Annamdevula & Bellamkonda, 2016a, b; Dehghan et al., 2014) and on students' willingness to recommend the institution in particular (Al-Alak, 2006; Athiyaman, 1997; Marzo-Navarro et al., 2005; Sultan & Wong, 2013). Synthesizing the results of the studies discussed above, the following hypotheses are put forward. H1.1-H1.n: Each component of the HEI's service quality, as perceived by the students, has a significant positive effect on students' overall satisfaction (n = the number of service quality components taken into account). H2: Students' overall satisfaction has a significant positive effect on the willingness to recommend the HEI.
In summary, the present study examines an integrated model in which service quality is conceptualized as a perception-only measure (the NSE) containing multiple service quality dimensions, each having a positive effect on students' overall satisfaction, which subsequently results in positive word of mouth, modelled as the final consequence. Figure 1 shows our conceptual model. Sampling method The online survey technique is the data collection method used by Studiekeuze123 to gather data from students. All Dutch HEIs are invited to participate in the NSE. The participating HEIs send in the e-mail addresses of their students. In January 2019 the students were invited to participate in the online NSE. It is a user-friendly survey that can be completed on several devices (desktop, tablet, smartphone). After the initial invitation six reminders were sent, since response lagged behind. The resulting NSE response (Studiekeuze123, 2019) is consistent with similar studies testing and evaluating service quality models in HE (e.g., Sultan & Wong, 2010b). Sampling size The NSE database captures data of 2,344,266 respondents, studying at a total of about ninety HEIs in the Netherlands, collected from 2010 up to and including 2019. Due to the Covid-19 pandemic the survey for 2020 was cancelled. In the 2019 NSE survey thirty-two HEIs participated, generating data from 93,874 respondents (Studiekeuze123, 2019). The OUNL is one of the thirty-two HEIs that participated in the 2019 NSE. Studiekeuze123 invited 7655 OUNL students to participate in the 2019 NSE survey, representing 53% of the OUNL student population (Open Universiteit, 2020). A total of 1955 students responded, a gross response rate of 25.5%. Ultimately, after thorough data screening and cleaning, 1287 usable records were included for further analyses in our study (a net response rate of 16.8%). Sampling profile For privacy reasons the dataset as provided by Studiekeuze123 hardly includes background information about the respondents. Most background information refers to the HEI itself (code, name, location, etc.). Information about the student's educational program and stage (bachelor/master) is available. The student's stage is used as a control variable in our analyses. 663 students (52%) are in the bachelor stage, 624 students (48%) are in the master stage. Students belong to various departments and schools across the HEI, as shown in Table 2. Service quality The NSE includes eighty-nine items measuring service quality in education. Not all of these eighty-nine items are applicable to the OUNL (see Table 1), since the OUNL is a distance university located in the Netherlands having the largest number of off-campus students. Therefore, service aspects referring to, for example, the number of seats, availability of working stations, meeting rooms or a physical library are not applicable. As a result, the perceived service quality construct in our study includes sixty items, covering twelve service quality dimensions, including academic, administrative and facility services (Sultan & Wong, 2013). All twelve service quality dimensions are conceptualized as formative measures. Table 1 includes a sample item for each of the service quality dimensions applicable to the OUNL. Satisfaction Twelve items were used to measure the students' overall satisfaction construct in our study. The twelve items reflect the twelve dimensions of service quality applicable to the university at study.
Satisfaction, therefore, is also conceptualized and operationalized as a formative measurement model. Table 3 provides some insight into the twelve items measuring students' overall satisfaction. Willingness to recommend To measure student loyalty, a single-item scale was used, capturing students' willingness to recommend the HEI. Item wording: "Would you recommend your educational program to friends, family or colleagues?" All the items for service quality and overall satisfaction were measured on a symmetric and equidistant five-point Likert scale ranging from 1 (very dissatisfied) to 5 (very satisfied). Willingness to recommend was also measured on a five-point Likert scale, but anchored at (1) 'No, absolutely not' and (5) 'Yes, absolutely'. Method of analysis Since our main goal is to present, analyze, and evaluate the NSE in an attempt to identify key drivers of students' satisfaction and willingness to recommend, a PLS-SEM approach is applied. Additional reasons to prefer PLS-SEM over CB-SEM (covariance-based SEM) are that our model includes many indicators (73 in total) as well as many (in this case 13) formatively measured constructs. Also, the lack of normality indicated by our data screening favors the use of PLS-SEM (Ghasemy et al., 2020). Results For our empirical analyses we used a dataset containing 1287 qualified records extracted from the NSE database. The results are based on the final operationalization used in this study. We start by assessing the results of the measurement models, followed by the analysis of the structural model, as suggested by Hair et al. (2017). Results of the measurement model analyses All measurement models, except the single-item scale for willingness to recommend, are formative in nature. We therefore follow the formative measurement model assessment procedure as described by Hair et al. (2017). First, we assess convergent validity by performing a redundancy analysis for each formative construct. The NSE contains global, single-item measures of all twelve constructs, which we used in the redundancy analyses. All path coefficients between the formative constructs and their global single-item equivalents are above the recommended threshold value of 0.70 (minimum of 0.802), suggesting that all formatively measured constructs show convergent validity. Next, we assess our formative measurement models for collinearity issues by inspecting the outer VIF values (variance inflation factors). VIF values range from a minimum of 1.029 (Testing_and_Assessment_06) to a maximum of 4.749 (Testing_and_Assessment_04). The outer VIF values of all items, therefore, are below the threshold value of 5 (Hair et al., 2017), suggesting that collinearity is not an issue with regard to our formative measurement models. The third step is to assess the significance and relevance of the formative indicators. First, we test the significance of the outer weights. From Appendix 1 Table 6 we conclude that all formative indicators are significant at the 5% level, except the following six: 'General satisfaction_03'; 'Professors/Lecturers_02'; 'Content and structure of study_03'; 'Content and structure of study_10'; 'Quality care_02'; 'Testing and assessment_06'. Of these six formative indicators, only 'Testing and assessment_06' has a loading below 0.5 (i.e., 0.174). However, all loadings, including the loading for 'Testing and assessment_06' (t-value = 2.970; p-value = 0.003; 95% BCa [0.057-0.285]), turn out to be significant. We therefore retain all formative indicators.
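The model was estimated with SmartPLS 3; as an open-source illustration of the same kind of specification, a formative PLS model can be set up in R with the "seminr" package. The sketch below is hypothetical: the construct names, item prefixes and counts, and the data frame `nse_data` are placeholders, and only two of the twelve service quality dimensions are shown.

```r
library(seminr)

# Measurement model: two (of twelve) formative service quality dimensions,
# a formative satisfaction construct, and the single-item recommendation measure.
mm <- constructs(
  composite("ContentStructure", multi_items("content_", 1:10), weights = mode_B),
  composite("Lecturers",        multi_items("lect_",    1:5),  weights = mode_B),
  composite("Satisfaction",     multi_items("sat_",     1:12), weights = mode_B),
  composite("Recommend",        single_item("wtr"))
)

# Structural model: dimensions -> satisfaction -> willingness to recommend
sm <- relationships(
  paths(from = c("ContentStructure", "Lecturers"), to = "Satisfaction"),
  paths(from = "Satisfaction", to = "Recommend")
)

pls  <- estimate_pls(data = nse_data, measurement_model = mm, structural_model = sm)
boot <- bootstrap_model(pls, nboot = 1000)  # significance of weights and paths
summary(boot)
```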
Since our results indicate that our measurement models are sufficiently valid and reliable, we proceed with analyzing our structural model. Results of the structural model analyses First, we assess our structural model for collinearity issues. Table 4 shows the values of the inner VIFs. Since all inner VIF values are below the threshold value of 5, collinearity between the constructs is not a major issue (Hair et al., 2017). Table 4 also includes the R² values for the endogenous latent constructs in our model: general satisfaction and willingness to recommend. The R² value of willingness to recommend (0.51) can be considered moderate, whereas the R² value of general satisfaction (0.89) can be described as substantial (Hair et al., 2017). The f² effect sizes are also included in Table 4. The f² values suggest that content and structure of study has a large effect (0.252) on general satisfaction. Professors/Lecturers have a medium effect (0.128) on general satisfaction. Academic guidance/counselling (0.021), study load (0.020), and testing and assessment (0.021) all have a small effect on general satisfaction. All other exogenous constructs are suggested to have no effect on general satisfaction, since their f² effect sizes are below 0.02 (Hair et al., 2017). Table 5 shows the relevant path coefficients (hypotheses) and their significance. Hypotheses 1.1-1.12 refer to the effects of the twelve service quality components on students' overall satisfaction. Hypothesis 2 refers to the effect of students' overall satisfaction on the willingness to recommend the HEI. Nine of the twelve service quality components significantly impact students' overall satisfaction (Hypothesis 1). 'Acquired general skills', 'Information provided' and 'Study facilities' do not affect 'General satisfaction'. In the specific context of the OUNL, 'Content and structure of study' (0.372), 'Professors/Lecturers' (0.230), and 'Academic guidance/counselling' (0.089) emerge as the top three satisfaction drivers among OUNL students. 'General satisfaction' has a significant impact on 'Willingness to recommend' (Hypothesis 2). IPMA results In order to provide policy makers and university managers with actionable results, the data are also analyzed based on the importance-performance paradigm (Ghasemy et al., 2020; Martilla & James, 1977; Slack, 1994). The importance-performance map analysis (IPMA) assumes that the evaluation criteria students use vary in importance. Importance ratings enable universities "to identify key drivers of student satisfaction and help them set the priorities for improvement efforts" (Elliott & Shin, 2002: 201). The IPMA, therefore, is of practical value for HEI management as it is "a means of both assessing and directing continuous quality improvement efforts within this sector" (O'Neill & Palmer, 2004: 49). In predicting overall student satisfaction, the importance dimension of a predecessor service quality construct (e.g., content and structure of study; (the didactic skills of) professors and lecturers; acquired scientific skills; study load) is represented by its total effect. The performance dimension of a predecessor service quality construct is represented by its average latent variable score. Both dimensions, importance (total effects) and performance (average latent variable scores), are provided by the SmartPLS 3 software (Hair et al., 2018; Ringle & Sarstedt, 2016) and, therefore, do not have to be asked by means of a questionnaire.
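Once the total effects and the rescaled latent variable scores are available, the map itself takes only a few lines of R to draw; the numbers below are hypothetical placeholders, not the study's estimates.

```r
# Hypothetical IPMA input for four service quality dimensions:
# importance = total effect on satisfaction; performance = mean latent
# variable score rescaled to 0-100.
ipma <- data.frame(
  dimension   = c("Content/structure", "Lecturers", "Guidance", "Information"),
  importance  = c(0.40, 0.25, 0.10, 0.03),
  performance = c(72, 70, 65, 75)
)

plot(ipma$importance, ipma$performance, pch = 19,
     xlab = "Importance (total effect)", ylab = "Performance (0-100)",
     xlim = c(0, 0.5), ylim = c(50, 100))
text(ipma$importance, ipma$performance, ipma$dimension, pos = 3, cex = 0.8)
abline(h = mean(ipma$performance), v = mean(ipma$importance), lty = 2)  # quadrant lines
```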
From the IPMA results, shown in Fig. 2, we find that the service quality construct 'content and structure of the study' has a relatively high importance (strong total effect) for predicting overall student satisfaction. 'Information provided', as a predecessor, has a relatively low importance (weak total effect). The performance levels (the average latent variable scores, rescaled to a range from 0 to 100) range from 60 to 80. The IPMA results shown in Fig. 2 support earlier findings based on our f² analyses and the analysis of the path coefficients: for the OUNL, 'Content and structure of study', 'Professors/Lecturers', and 'Academic guidance and counselling', in that order, emerge as the top three satisfaction drivers among OUNL students. The IPMA also shows relatively high and comparable levels of performance on all service dimensions, regardless of their level of importance. That is, the university performs well on each service dimension irrespective of its importance. Conclusions and discussion Testing and validating the service quality model for HEIs as used by the Dutch foundation Studiekeuze123 shows that the NSE instrument is robust and capable of measuring service quality in a HEI context from the students' perspective. Our study reveals that the dimensions of service quality taken into account are nomologically valid and show adequate reliability and validity, supporting the findings of Brenders (2013). Nine out of twelve service quality components positively impact students' satisfaction (Hypothesis 1). Our findings support previous research indicating that perceived service quality is a critical determinant of satisfaction in a HE context (e.g., Hasan et al., 2008; Alves & Raposo, 2007; Annamdevula & Bellamkonda, 2016a, b; Dehghan et al., 2014; Guolla, 1999; Ham & Hayduk, 2003; Sultan & Wong, 2012, 2013). In our study 'Content and structure of study' and 'Professors/Lecturers' are by far the most important satisfaction drivers among OUNL students, followed by 'Academic guidance and counselling', 'Testing and assessment' and 'Study load'. Content and structure as well as (interaction with) lecturers and faculty were also identified by Ehlers (2004) and Peng and Samah (2006) as important service quality factors in the context of e-learning. Students, as the primary recipients of the service provided, want value for money and, therefore, expect high-quality content in exchange for the fees they pay. In their process of development, growth and becoming employable, students have to cooperate with professors and lecturers. Students and faculty are the co-producers of education and, therefore, students depend on faculty to successfully complete their studies. The IPMA also shows relatively high and comparable levels of performance on all service dimensions, regardless of their level of importance. That is, the OUNL performs well on each service dimension independent of its importance. This observation might explain why this particular university has been positioned in the top-three rankings of Dutch universities for over fifteen years: the university scores consistently high on all service quality dimensions. The university cannot afford to underperform on any of the service quality dimensions if it is to hold its high quality standards and top-three ranking. Based on the R² values and f² effect sizes, the results also demonstrate a strong predictive ability for satisfaction and willingness to recommend.
The R² values of willingness to recommend (0.51) and general satisfaction (0.89) can be considered moderate and substantial, respectively. The present study also shows that students' overall satisfaction impacts the willingness to recommend the HEI (Hypothesis 2), supporting previous research that found student satisfaction to be a key predictor of student loyalty in general (e.g., Annamdevula & Bellamkonda, 2016a, b; Dehghan et al., 2014) and of the willingness to recommend the HEI in particular (e.g., Al-Alak, 2006; Athiyaman, 1997; Marzo-Navarro et al., 2005; Sultan & Wong, 2010a, b, 2013). Overall, we conclude that the NSE is nomologically sound and theoretically grounded, as well as an empirically supported measure of service quality in HE. By applying our study in an online HE context we contribute to the limited research on identifying service quality factors in an e-learning setting (La Rotta et al., 2020; Uppal et al., 2018). Our model and results provide both theoretical and practical insights. Theoretical implications Theoretical contributions are made in several ways. This study empirically examined the applicability of the National Student Enquiry (NSE) by validating constructs such as perceived service quality, student satisfaction and behavioural intention for higher education institutions. The NSE has been developed and improved over more than a decade and turns out to be "a reliable and valid instrument for measuring service quality of higher education from the students' perspective", a necessity when service quality is to be improved (Teeroovengadum et al., 2016: 245). Service quality in HE is to be conceptualized as a multidimensional concept containing multi-item dimensions, as suggested by a vast majority of the literature (e.g., Gupta & Kaushik, 2018; Teeroovengadum et al., 2019) and supported empirically by our study. By capturing a total of 19 dimensions, including the three critical service quality aspects academic, administrative and facilities, the NSE measure reflects a holistic approach to service quality in HE (Teeroovengadum et al., 2016). In the past the NSE has been tested and validated mainly in a face-to-face modality (Brenders, 2013). Uppal et al. (2018) and La Rotta et al. (2020), however, conclude that there is an increasing need to effectively assess service quality of HE in online settings, but that limited research is found in the literature. Since our study examined and validated the NSE under an online modality in particular, we contribute to the scarce literature addressing this issue. Our study demonstrates and examines an integrated model including perceived service quality, satisfaction and loyalty in a HE setting. Our empirical findings support the positive relationships between perceived service quality and student satisfaction, and between student satisfaction and behavioural intentions (e.g., Al-Alak, 2006; Annamdevula & Bellamkonda, 2016a, b; Athiyaman, 1997; Dehghan et al., 2014; Marzo-Navarro et al., 2005; Sultan & Wong, 2010a, b, 2013). Building on the previous point, a service quality measure following the performance-perception paradigm is very appropriate when more comprehensive and integrated service quality models in HE need to be tested (e.g., Dabholkar et al., 2000; Li & Kaye, 1998). The NSE, as a result, provides a useful tool for managers improving service quality in HEIs and for researchers building more comprehensive and robust models. We elaborate on both in the following sections.
Managerial implications This study focused on presenting and evaluating the NSE measure and identifying key drivers of students' satisfaction and willingness to recommend by analyzing archival data from an online university in the Netherlands (the OUNL) using PLS-SEM. The current results show that the NSE is an appropriate and practical measurement instrument for service quality in a HE context, supporting HEIs in promoting students' satisfaction and willingness to recommend in an attempt to obtain advantage in a competitive environment. Further, Studiekeuze123 provides data for many HEIs in the Netherlands, from 2010 up to now. This offers the opportunity not only for benchmarking between HEIs, but also for analyzing their development over time in a longitudinal study. Since for many HEIs the number of respondents seems to be high enough, the same approach and method can be used for internal benchmarking, e.g. between different educational programs at the same HEI. Hennig-Thurau et al. (2001), for example, found major differences between students from different educational programs in terms of the most relevant dimensions of service quality. In general, through the NSE, Studiekeuze123 offers a means to provide general as well as more in-depth results. Our study is an attempt to show how policy makers in HEIs can make use of the NSE data to reinforce service quality, and to promote student satisfaction and willingness to recommend. For the OUNL in particular, the results offer a (service) quality-based approach to increase students' satisfaction and propensity to recommend the OUNL. The most important service quality aspects and drivers of students' satisfaction were content and structure of the study, professors and lecturers, and academic guidance and counselling. Service quality of online education has become a major concern for most HEIs since they had to make a transition from on-campus to online education due to the Covid-19 pandemic. As online learning systems are expected to remain after the Covid-19 pandemic recedes, analyzing the service quality of the OUNL as a reputed online HEI can help traditional class-based HEIs get their online learning systems on track. Generally, our results are congruent with those obtained in studies examining service quality in a face-to-face HE context. That is, corresponding service quality dimensions are relevant in both an online context and a face-to-face context. For example, in both settings the competences, attitudes and didactic quality of academic staff are relevant academic service quality factors. Either face-to-face or virtual, teachers contribute significantly to their students' learning processes (La Rotta et al., 2020). It is to be expected, however, that in an online environment academic staff need to exploit other types of competences, attitudes and didactic approaches compared to a face-to-face context. For example, in an offline context academics transfer (their) knowledge to students through often long-lasting and busy class-based lectures, while in an online context academics primarily need to encourage and support students to gain knowledge themselves by studying, e.g. by offering students short but activating tasks that put them to work. The type of student-teacher interaction (teaching versus learning), therefore, is very different and requires other competences, attitudes and didactic quality from academics. The same holds for, e.g.,
study facilities as a relevant service quality dimension in both online and offline environments. In an offline context physical resources (e.g. suitability of the classrooms, number of workplaces, and quality of study rooms) are fundamental in supporting the academic process. In an online context, however, these resources are expected to be less relevant or even not applicable, since students are following an online program (off-campus) and are not likely to claim these services. On the other hand, a stable and easy-to-use electronic learning environment (ELO) is of core importance in an online environment (La Rotta et al., 2020; Uppal et al., 2018), while in a face-to-face context the interaction platform is more of a supportive nature. These potential differences in perceived service quality, however, need to be explored further and supported empirically. We elaborate on this suggestion for further research in the next section. Limitations and directions for future research Our study has some limitations, providing avenues for future research. This study examined the service quality drivers of students' satisfaction by analyzing data from a single university in the Netherlands providing online education (the OUNL). Generalizations to a wider population, therefore, should be made with caution. Studiekeuze123, however, provides data for many HEIs in the Netherlands, from 2010 up to now. This offers the opportunity for more representative follow-up studies examining the generalizability of the NSE measurement and the structural model in a wider HE context. HE is a pure high-contact service requiring interpersonal contact over a long-term period to get an outcome (Sultan & Wong, 2012). In such circumstances, relationship quality is particularly suitable (Vieira et al., 2008). Therefore, when evaluating service quality in education, it is suggested to include (aspects of) relationship quality as a consequence of service quality. Though there is no consensus on the definition of relationship quality, it can be defined as "the cognitive evaluation of business interactions by key individuals in the dyad, comparatively with potential alternative interactions" and includes several distinct but related components such as satisfaction, trust and commitment (Vieira et al., 2008). Although student satisfaction is found to be a primary antecedent of student loyalty, it is suggested that satisfaction alone is insufficient to generate loyalty (Bowden, 2011). Hennig-Thurau et al. (2001) and Sultan and Wong (2012, 2013), for example, found that perceived service quality positively affects trust in a HE context. Also, the relationship between service quality and commitment is suggested to be significantly positive (Hennig-Thurau et al., 2001). Both trust and commitment are found to be contributing determinants, next to satisfaction, in generating loyalty (e.g., Garbarino & Johnson, 1999; Schijns et al., 2016), but have received limited attention in HE research (Bowden, 2011). By including additional components of relationship quality, such as trust and commitment, a more comprehensive model of service quality and its consequences can be developed and empirically tested in a HE context. A more comprehensive model can also be reached by including antecedents of service quality. Sultan and Wong (2012, 2013) suggest that (marketing) information and students' past experiences with similar service encounters affect perceived service quality.
As a result, a more comprehensive model contributes to a deeper understanding of how service quality, satisfaction and other relevant variables relate to each other and subsequently drive students' loyalty. The university included in this study is a public institution, as are most of the HEIs included in the NSE database. The NSE database, however, also includes a number of private HEIs. The results found in this study may differ for private HEIs. It may be worthwhile, therefore, to conduct a comparative study to explore potential differences in service quality and student satisfaction between HEIs in the public and private sector (Hasan et al., 2008). Another type of comparative study could focus on potential differences in service quality perceptions between students following an online program and students following their program onsite. We concluded that corresponding service quality dimensions are relevant in both an online context and a face-to-face context. We suggest, therefore, that differences may be found not at the level of service quality dimensions, but at the level of the items composing a service quality dimension. A group comparison (online versus face-to-face) complemented with an importance-performance analysis at item level is expected to provide a better reflection of the characteristics of an online HEI and to provide tools for HEIs that are developing distance learning. The lack of background information limits the depth of the insights that can be gained. Including students' background information in terms of, e.g., gender and age, which is available to Studiekeuze123 but is not provided through the NSE database for privacy reasons, would help in gaining deeper insights. Annamdevula and Bellamkonda (2016a: 457), for example, suggest that "age and gender play a major role in determining the different perceptions of students about the constructs investigated." On the contrary, Bowden and Wood (2011) conclude that gender does not matter in the formation of student-university relationships. The student-centered perspective on the quality of education was the central perspective of our study, as in most studies on the quality of HEIs (Silva et al., 2017). As indicated in our introduction, however, quality of education is influenced by, and has an effect on, a number of stakeholders, such as: providers of resources (e.g. public and private funding bodies), users of outputs, i.e. graduates (e.g. employers), and employees of the HEI, e.g. academics and administrators. Studies from the point of view of stakeholders other than students, however, are scarce. Following, for example, Marinho and Poffo (2016), Smith et al. (2007) and Teeroovengadum et al. (2016), new studies could evaluate service quality in HEIs as perceived by, e.g., academic staff, the management of the university, or the service department. The NSE examines students' service quality perceptions, students' overall satisfaction and their willingness to recommend the HEI using one method, i.e. a questionnaire. The use of questionnaires in which respondents are asked to report their service quality perceptions, their overall satisfaction and their behavioural intentions is quite common in the social sciences and related fields, since questionnaires have "particular advantages in terms of low expense, wide potential reach, and ease of administration" (Gorrell et al., 2011: 508). Common method bias is a type of bias often associated with questionnaire-based studies.
Common method bias "refers to the situation where the method of data gathering itself introduces a bias, leading to spuriously elevated correlations between the concepts being measured" (Gorrell et al., 2011: 509). Thus, when using only one method measuring students' service quality perceptions, their overall satisfaction and their willingness to recommend, common method bias is likely to be introduced. Future research in the field of service quality perceptions in HE should consider using different data gathering methods in an attempt to achieve a degree of data triangulation that further supports the validity of the NSE instrument. To conclude, the NSE has become a standard in the Dutch HE context used by both HEIs and students (prospective as well as switching students) in their decision making. The scale has not been disseminated internationally, however, and adapting and applying the scale to other countries and/or cultures, therefore, is challenging. The current availability of the NSE questionnaire in German and English, besides Dutch, could facilitate international distribution of the questionnaire in that respect. Table 7 Three critical service quality aspects covered by performance-based service quality models, developed specifically for application in HE context ity items. Appendix 1 Academic service quality -Teaching quality: competent lecturers and professors; -Availability of lecturers; -Course development: program and content; -Teacher-student relationships. -Knowledgeable in course content; -Caring and courteous staff; -Responding staff; -Feedback on progress. -Attitudes and behaviors of academics; -Curriculum; -Pedagogy; -Competence of academics; -Transformative quality (e.g., developing general, scientific and professional skills). -The content, structure, and cohesion of the study program; -Acquirement of general skills; -Acquirement of scientific skills; -Didactic quality of Professors and Lecturers. -Timely publication of programme schedules and schedule changes.
9,851.8
2021-06-01T00:00:00.000
[ "Education", "Economics" ]
Single-cell analysis reveals exosome-associated biomarkers for prognostic prediction and immunotherapy in lung adenocarcinoma Background: Exosomes play a crucial role in tumor initiation and progression, yet the precise involvement of exosome-related genes (ERGs) in lung adenocarcinoma (LUAD) remains unclear. Methods: We conducted a comprehensive investigation of ERGs within the tumor microenvironment (TME) of LUAD using single-cell RNA sequencing (scRNA-seq) analysis. Multiple scoring methods were employed to assess exosome activity (EA). Differences in cell communication between high- and low-EA groups were examined using the "CellChat" R package. Subsequently, we leveraged multiple bulk RNA-seq datasets to develop and validate an exosome-associated signature (EAS), enabling a multifaceted exploration of prognosis and immunotherapy outcomes between high- and low-risk groups. Results: In the LUAD TME, epithelial cells demonstrated the highest EA, with even more elevated levels observed in epithelial cells of advanced LUAD. The high-EA group exhibited enhanced intercellular interactions. The EAS was established through the analysis of multiple bulk RNA-seq datasets. Patients in the high-risk group exhibited poorer overall survival (OS), reduced immune infiltration, and decreased expression of immune checkpoint genes. Finally, we experimentally validated the high expression of SEC61G in LUAD cell lines and demonstrated that knockdown of SEC61G reduced the proliferative capacity of LUAD cells using colony formation assays. Conclusion: The integration of single-cell and bulk RNA-seq analyses culminated in the development of a robust EAS, which provides valuable insights for the clinical diagnosis and therapeutic management of LUAD patients. INTRODUCTION According to the most recent global cancer report released by the International Agency for Research on Cancer, lung cancer (LC) is the second most frequently diagnosed form of cancer and the primary cause of cancer-related deaths, with incidence and mortality rates of 11.4% and 18.0%, respectively [1]. Immunotherapy, a groundbreaking treatment method, has brought about a significant transformation in the management of lung adenocarcinoma (LUAD), which constitutes approximately 40% of all histological types of LC, and has proven to be an effective therapeutic approach for various types of cancer [2]. However, only a minority of LC patients exhibit durable responses to immunotherapy. Therefore, the identification of reliable biomarkers is crucial for the implementation of immunotherapy and the prediction of prognosis in LC patients [3][4][5][6][7]. Extracellular vesicles, ranging in size from 30 to 150 nm, are cell-derived vesicles that can transmit signaling molecules involved in cellular physiological regulation and participate in tumor invasion and metastasis [8]. Studies have revealed that extracellular vesicles help tumor cells evade immune surveillance and can serve as immunotherapeutic agents by altering the secretion of tumor-derived extracellular vesicles [9,10].
Single-cell sequencing technology, a novel sequencing technique, enables measurement of the entire transcriptome at single-cell resolution, allowing different cell types to be distinguished. It can rapidly identify genetic differences between cancer and non-cancer cells, elucidate the molecular mechanisms driving tumor development, and reveal somatic mutations arising during tumor evolution. By unraveling the heterogeneity of the TME, this method has been used to identify unique immune cell subpopulations potentially associated with tumor immune surveillance, thereby suggesting potential drug targets [16,17]. Some studies have indicated that intra-tumoral heterogeneity contributes to cancer progression and enhances treatment resistance. Single-cell RNA sequencing (scRNA-seq) has been employed to assess the prognosis and drug resistance of LC, breast cancer, ovarian cancer, and gastric cancer [18,19]. Therefore, establishing a signature based on extracellular vesicle-associated genes may serve as an effective approach for predicting the immunotherapeutic response and prognosis of tumor patients, which is the objective of this study. Dataset source Bulk RNA-seq data, mutation data, and clinical characteristics of patients diagnosed with LUAD were obtained from The Cancer Genome Atlas (TCGA) database (https://portal.gdc.cancer.gov/). The scRNA-seq dataset GSE131907 [20], which encompassed tissues from 20 LUAD patients, including 11 surgically resected tumor tissue samples, 4 biopsy samples obtained through puncture, and 5 pleural effusions, was sourced from the Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/). Additionally, external validation cohorts (GSE30219, n = 86; GSE31210, n = 227; and GSE42127, n = 133) were retrieved from the GEO database. To ensure data comparability, the expression data were transformed into the transcripts per million (TPM) format. Potential batch effects were addressed using the "ComBat" function of the "sva" R package [21,22]. Furthermore, the bulk sequencing data, mutation data, and clinical details of LUAD patients from the TCGA database were log2-transformed to standardize the data format before analysis.
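A minimal sketch of this preprocessing is shown below, assuming `expr_list` is a named list of gene-by-sample TPM matrices (one per cohort) with identical gene ordering; the object names are placeholders.

```r
library(sva)  # provides ComBat for batch-effect correction

# Combine the cohorts into one genes-x-samples matrix and log2-transform TPM
expr  <- do.call(cbind, expr_list)
batch <- rep(names(expr_list), times = sapply(expr_list, ncol))
expr  <- log2(expr + 1)

# Remove cross-cohort batch effects
expr_corrected <- ComBat(dat = as.matrix(expr), batch = batch)
```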
Single-cell dataset analysis The R package "Seurat" [23][24][25] was employed for cell clustering and dimension reduction. Cells were excluded if they expressed more than 6,000 or fewer than 300 genes, or if the proportion of unique molecular identifiers (UMIs) derived from the mitochondrial genome exceeded 10%. Principal component analysis (PCA) on the variably expressed genes was applied to reduce the dataset's dimensionality. Subsequently, clustering analysis was conducted using the "FindClusters" function, incorporating 20 PCA components and a resolution parameter of 1.2. Canonical marker genes were employed to annotate the resulting two-dimensional representation of cell clusters, thereby facilitating the identification of known biological cell types. The Seurat "FindAllMarkers" function was used to determine marker genes associated with cell clusters, comparing cells within a specific cluster against cells in all other clusters. The "CellChat" R package [26] was employed to infer communication networks between cell subpopulations. Scoring of exosome gene sets was carried out using the "AUCell," "UCell," "singscore," "ssGSEA," and "AddModuleScore" methods. Building a high-performance EAS Prognostic key genes were identified through univariate Cox regression and LASSO regression analyses [27,28]. Subsequently, the genes and their corresponding coefficients were refined using multivariate Cox regression [29,30]. The risk score for LUAD patients was calculated as follows: Risk score = Σ [Coef(k) × Expr(k)], where Coef(k) denotes the regression coefficient and Expr(k) the expression level of the k-th prognostic model gene. The risk score was calculated for the LUAD patients in the dataset, who were then stratified into high- and low-risk groups based on the median risk score. The model's predictive performance was assessed using receiver operating characteristic (ROC) curves, with area under the curve (AUC) values above 0.65 indicating satisfactory performance. PCA was employed to visualize the distribution of patients between the risk groups. Nomogram construction and evaluation An enhanced and more precise nomogram was developed by merging the risk score with clinical characteristics, using the "rms" R package [31], which significantly augmented the prognostic predictive ability. The efficacy of the nomogram was assessed using the c-index and ROC curves. Stratified analyses based on age, pathological T and N stage, and clinical stage were performed to evaluate the predictive significance of both the risk score and the clinical features. Enrichment analysis To evaluate the biological characteristics, Gene Set Variation Analysis (GSVA) and Gene Set Enrichment Analysis (GSEA) were applied. For this analysis, gene set files downloaded from the GSEA website were employed, including "h.all.v7.5.1.symbols.gmt" and "c5.go.v2023.1.Hs.symbols.gmt". The enrichment scores of 29 immune signatures were quantified using the ssGSEA approach.
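A condensed sketch of this model-building procedure (with the univariate pre-filtering step omitted) is given below, assuming `x` is a patients-by-genes expression matrix and `time`/`status` hold the survival data; all variable names are placeholders.

```r
library(glmnet)
library(survival)

set.seed(123)
# LASSO-penalized Cox regression shrinks the candidate prognostic genes
cvfit <- cv.glmnet(x, Surv(time, status), family = "cox", alpha = 1)
coefs <- coef(cvfit, s = "lambda.min")
genes <- rownames(coefs)[as.numeric(coefs) != 0]

# Multivariate Cox refit on the retained genes gives the final coefficients
df  <- data.frame(x[, genes, drop = FALSE])
fit <- coxph(Surv(time, status) ~ ., data = df)

# Risk score = sum over k of Coef(k) x Expr(k); split at the median
risk  <- as.vector(as.matrix(df) %*% coef(fit))
group <- ifelse(risk > median(risk), "high", "low")
```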
Mutations between different risk groups The "maftools" R package [32] was used to conduct a comprehensive examination of somatic mutations in the high- and low-risk groups of LUAD. The mutation annotation format (MAF) file was generated from data extracted from the TCGA database. The tumor mutation burden (TMB) was assessed for each patient with LUAD. The mutation landscape and immune infiltration scores were visualized using the "ComplexHeatmap" R package [33]. Based on the median risk score and the median TMB, TCGA-LUAD patients were classified into four distinct groups, and their survival differences were compared. The TME and immunotherapy The evaluation of immune cell content involved seven immune infiltration algorithms, accessed through the TIMER 2.0 database (http://timer.comp-genomics.org/). Heatmaps were employed to visually depict the variations in immune cell infiltration across the risk groups. Furthermore, the "estimate" R package [34] was employed to calculate the immunological scores, stromal scores, and ESTIMATE scores of LUAD patients. To predict responsiveness to immunotherapy, The Cancer Immunome Atlas (TCIA) database was queried for the Immunophenoscores (IPS) of TCGA-LUAD patients, and the IPS were compared between the high-risk and low-risk groups in this study [35]. Additionally, the "oncoPredict" R package was used to predict potentially effective chemotherapeutic agents for the risk groups [36]. Cell lines culture and qRT-PCR BEAS-2B cells, which are normal human lung epithelial cells, along with A549 and H1299 cells, representing human LUAD cell lines, were obtained from the Cell Resource Center of Shanghai Life Sciences Institute. These cells were cultured in F12K or RPMI-1640 supplemented with 10% fetal bovine serum (FBS), 1% streptomycin, and penicillin, and maintained at 37°C under 5% CO2 and 95% humidity. Total RNA was extracted from the cell lines using TRIzol following the manufacturer's instructions. Subsequently, cDNA synthesis was performed using the PrimeScript RT kit. Real-time polymerase chain reaction (RT-PCR) was conducted using SYBR Green Master Mix, and the expression level of each mRNA was normalized to the GAPDH mRNA level. Expression levels were quantified using the 2^-ΔΔCt method. The primers used for the experiment were provided by Tsingke Biotech (Beijing, China).
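The 2^-ΔΔCt quantification amounts to a short calculation; the following R sketch uses hypothetical Ct values, with GAPDH as the reference gene and BEAS-2B as the control line.

```r
# Hypothetical Ct values for SEC61G (target) and GAPDH (reference)
ct <- data.frame(
  line   = c("BEAS-2B", "A549"),
  target = c(26.0, 23.5),   # SEC61G Ct
  ref    = c(18.0, 18.2)    # GAPDH Ct
)

dct  <- ct$target - ct$ref              # delta Ct: normalize to GAPDH
ddct <- dct - dct[ct$line == "BEAS-2B"] # delta-delta Ct vs. the control line
fold <- 2^(-ddct)                       # relative expression (2^-ddCt)
setNames(round(fold, 2), ct$line)
```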
Colony formation A total of 1000 transfected cells were seeded in 6-well plates and cultured for approximately 14 days. After this two-week period, cell clones were visible without magnification. The cells were then washed and fixed in 4% paraformaldehyde (PFA) for 15 minutes, stained with crystal violet (Solarbio, China) for 20 minutes, and air-dried at room temperature. The number of colonies per well was then counted. Statistical methods Statistical analyses and data processing were carried out in R, version 4.2.0. Survival analysis was conducted using Kaplan-Meier curves, and statistical significance was established with the log-rank test. All survival curves were generated using the "survminer" R package. Heatmaps were generated using the "pheatmap" R package [37]. For variables demonstrating a normal distribution, quantitative differences were assessed with a two-tailed t-test or a one-way analysis of variance (ANOVA). When the data did not follow a normal distribution, the Wilcoxon test or the Kruskal-Wallis test was used. All statistical analyses were conducted within the R environment, with P < 0.05 considered statistically significant. The scRNA profiling of LUAD The study's flow chart is presented in Figure 1. The scRNA-seq dataset underwent quality control measures. The expression characteristics of each individual sample are illustrated in Supplementary Figure 1A, 1B. No significant cell cycle fluctuations were observed in the principal component analysis (PCA) reduction plot, as depicted in Supplementary Figure 1C. A total of 20 samples were included in this study, and the cellular distribution remained relatively constant across samples, suggesting minimal batch effects; the samples were therefore deemed suitable for subsequent analysis (Figure 2A). The expression of representative genes used for cell type identification is shown in Figure 2B. Using the tSNE dimensionality reduction algorithm, all cells were classified into 37 finer-grained clusters (Figure 2C). The expression of characteristic marker genes corresponding to each cell cluster is visualized in the bubble plot in Figure 2D. Eleven distinct cell types, including fibroblasts, B cells, and NK cells, were identified (Figure 2E). Furthermore, Figure 2F presents the proportional distribution of the 11 cell types across samples. Exploring exosome activity within the single-cell microenvironment The percentages of cell types differ between early-stage and advanced LUAD tissues (Figure 3A). In Figure 3B, a combination of five scoring methods (AUCell, UCell, singscore, ssGSEA, and AddModuleScore) was employed to assess exosome activity (EA), revealing that epithelial cells exhibited the highest exosome activity. The tSNE diagram displays the exosome activity across various cell types, highlighting stronger EA in epithelial and myeloid cells (Figure 3C).
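Two of the five per-cell scoring approaches can be sketched as follows, assuming `seu` is the processed Seurat object and `erg_genes` is a character vector of exosome-related genes (both placeholders).

```r
library(Seurat)
library(AUCell)

# AddModuleScore: mean expression of the gene set minus a matched control
# set; the score is stored in the metadata column "EA1"
seu <- AddModuleScore(seu, features = list(erg_genes), name = "EA")

# AUCell: area under the recovery curve of gene-set members in each
# cell's expression ranking
rankings <- AUCell_buildRankings(GetAssayData(seu, slot = "data"))
auc <- AUCell_calcAUC(list(ERG = erg_genes), rankings)
seu$EA_AUCell <- as.numeric(getAUC(auc)["ERG", ])
```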
Figure 3D demonstrates significant disparities in exosome activity levels between early-stage and advanced LUAD tissues. To unravel the underlying biological mechanisms associated with the different scores, the hallmark gene set was used to explore the pathways that differed significantly between the high- and low-EA groups. The principal pathways enriched in the high-EA group included oxidative phosphorylation, adipogenesis, and the p53 pathway (Figure 3E). Cellular interactions analysis Differences in the number of cellular communications between the high- and low-EA groups are presented in Figure 4A and Supplementary Figure 2A, 2B. Figure 4B shows the number and percentage of various signaling pathways in the high- and low-EA groups. There were significant differences in the signals emitted by the high- and low-EA groups, with more signals active only in the high-EA group (Figure 4C). Significant alterations were also observed in the roles fulfilled by various cell types within the different subgroups. In the low-EA group, both myeloid and epithelial cells exhibited weak efferent and afferent signals, whereas in the high-EA group their signaling capabilities were significantly enhanced (Figure 4D). Supplementary Figure 2C and Figure 4E reveal significant discrepancies in the profiles of ligand-receptor pairs between the high- and low-EA groups. Notably, the SPP1-CD44 ligand-receptor pair emerged as a more crucial player in the low-EA group. Construction of a risk model In Figure 5A, the TCGA and GEO independent cohorts were observed to exhibit significant batch effects; after removing the batch effects, more accurate results were obtained. The training set from TCGA was used for model construction, leading to the identification of 41 prognostic genes through univariate Cox analysis (P < 0.01). The forest plot depicts the results of the univariate Cox analysis, revealing 14 hazardous factors and 27 protective factors (Figure 5B). Subsequently, LASSO and Cox regression analyses were employed to establish the prognostic model (Figure 5C). The hazard ratio (HR) values associated with each variable included in the model are presented in Figure 5D, while Figure 5E displays the corresponding coefficients of the specific variables. Evaluation of the model In Figure 6A-6C, a worse prognosis was exhibited by the high-risk group in the TCGA training set, test set, and the entire cohort (P < 0.001). Additionally, a significantly poorer prognosis of patients in the high-risk group compared to the low-risk group was noted in the GSE30219 test cohort. Clinical correlation and nomogram construction A heatmap combining clinical information with the high- and low-risk groups was generated to visualize the distribution of clinical characteristics among the risk groups. Statistical analysis in Figure 7A revealed significant differences between the two groups concerning T and N stages, clinical stage, and survival status (P < 0.05). Notably, the high-risk group displayed a higher proportion of older patients and more advanced N and T stages (Figure 7B). Furthermore, a nomogram was constructed using clinical characteristics and risk scores (Figure 7C) to enhance the accuracy of prognosis prediction in LUAD patients. The nomogram can assist clinicians in assessing patient risk more accurately and guiding future treatment decisions. The calibration curve and decision curve analyses demonstrated the superior efficacy of this nomogram.
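A minimal sketch of such a nomogram built with the "rms" package is shown below, assuming a data frame `clin` containing the risk score, clinical covariates, and overall survival data; the variable names are placeholders.

```r
library(rms)
library(survival)

dd <- datadist(clin); options(datadist = "dd")

# Cox model combining the risk score with clinical covariates
f <- cph(Surv(os_time, os_status) ~ risk_score + age + stage,
         data = clin, x = TRUE, y = TRUE, surv = TRUE)

surv <- Survival(f)  # survival-probability function used for the nomogram axis
nom  <- nomogram(f, fun = function(x) surv(3 * 365, x),
                 funlabel = "3-year OS probability")
plot(nom)
```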
of pathways exhibiting significant differences between the high- and low-risk groups was carried out using the hallmark gene set. In Figure 8A, it was demonstrated that enrichment in cell cycle-related pathways, including mTORC1 signaling, MYC targets V1, E2F targets, G2M checkpoint and MYC targets V2, among others, was predominantly observed in the high-risk group. For GO and KEGG enrichment analysis, GSEA was employed. The GO enrichment results, as depicted in Figure 8B, indicated that the high-risk group exhibited significant enrichment in processes such as ribosome biogenesis, rRNA processing, and uronic acid metabolic process. Conversely, the low-risk group primarily showed enrichment in terms related to the immunoglobulin complex and translation repressor activity. In terms of KEGG enrichment, the main pathways enriched in the high-risk group were cell cycle and pentose and glucuronate interconversions. To assess the differences in immune cell infiltration and immune-related pathways between the high- and low-risk groups, the ssGSEA method was utilized. The analysis revealed that the low-risk group exhibited higher levels of immune cell infiltration, including T helper cells, pDCs, and macrophages, among others. Moreover, greater activity in certain immune-related pathways, such as the type II IFN response, checkpoint, and HLA, was demonstrated by the low-risk group (Figure 8C, 8D).

Immune infiltration assessment and mutation landscape

The degree of immune infiltration was evaluated using seven algorithms within the TIMER 2.0 database, and the comparison revealed greater immune cell infiltration within the low-risk group (Supplementary Figure 3). Immune infiltration levels were also assessed using the "ESTIMATE" R package, wherein correlation analysis unveiled a noteworthy negative correlation between the risk score and immune score, alongside a positive correlation with tumor purity (Figure 9A). Figure 9B exhibited higher immune scores and ESTIMATE scores within the low-risk group (P < 0.05), indicating a heightened overall state of immunity and immunogenicity within the low-risk group. Representative gene variants were compared between the high-risk and low-risk groups (Figure 9C). The top five genes in terms of mutation frequency were TP53, TTN, MUC16, CSMD3, and RYR2. The low-risk group exhibited a higher TMB relative to the high-risk group (Figure 9D), albeit without statistical significance. Patients were stratified based on risk scores and TMB, revealing that the low-TMB, high-risk subgroup exhibited the most unfavorable prognosis (Figure 9E).
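As an aside, the correlation analysis reported above (risk score versus immune score, ESTIMATE score, and tumor purity) reduces to a rank correlation per score. The following Python sketch illustrates the computation with scipy; the data are synthetic and the effect sizes are placeholders chosen only to match the signs of the reported trends, not the values from this study.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 200  # hypothetical number of patients

risk_score = rng.normal(size=n)
# Synthetic scores with built-in signs matching the reported trends:
# immune/ESTIMATE scores decrease with risk, tumor purity increases.
scores = {
    "immune score":   -0.5 * risk_score + rng.normal(scale=0.9, size=n),
    "ESTIMATE score": -0.4 * risk_score + rng.normal(scale=0.9, size=n),
    "tumor purity":    0.4 * risk_score + rng.normal(scale=0.9, size=n),
}

for name, values in scores.items():
    rho, p = spearmanr(risk_score, values)
    print(f"risk score vs {name}: rho = {rho:+.2f}, p = {p:.2e}")
```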
Immunotherapy and chemotherapy drugs

Considering the significance of immune checkpoints in the success of tumor immunotherapy, we investigated the differential expression of immune checkpoints between the two risk groups. Low-risk patients exhibited significant upregulation of thirteen immune checkpoint genes, including CD40LG, CD48, and CD27. In the high-risk group, seven immune checkpoint genes, including CD276, CD274, and CD70, were significantly elevated (Figure 10A). Correlation analysis, depicted in Figure 10B, illustrated the relationship between risk scores, model genes, and immune checkpoint gene expression; red indicated a positive correlation, while blue indicated a negative correlation. It was evident that the risk score exhibited a significant negative correlation with the majority of immune checkpoint genes, such as BTLA, CD27, and CD48. The immunophenoscore (IPS) was employed to select patients likely to respond to immune therapy. In our study, we observed that low-risk patients had a higher IPS when receiving CTLA-4 immunotherapy (Figure 10C). This finding suggested that low-risk patients may demonstrate enhanced responsiveness to immune checkpoint inhibitors (ICIs) and derive greater benefits. By utilizing the "oncoPredict" R package, we explored potentially effective chemotherapy drugs for both high- and low-risk groups. Our findings indicated that ABT737 and Acetalax may be more efficacious in low-risk patients, while ERK_6604 and Dasatinib may exhibit higher sensitivity in high-risk patients (Figure 10D).

Experimental validation

The expression differences of the seven model genes were compared between tumor tissues and normal tissues, as depicted in Figure 11A-11G. Notably, high expression of SEC61G was observed in tumor tissues. Furthermore, Figure 11H illustrated that LUAD patients with high expression of SEC61G exhibited poorer survival outcomes. Additionally, the experiments demonstrated that A549 and H1299 LUAD cells exhibited higher expression of SEC61G compared to normal lung cells (Figure 11I). Downregulation of SEC61G resulted in a significant reduction in the number of cell clones within the LUAD cell lines (Figure 11J, 11K). These findings strongly suggest that high expression of SEC61G can promote the proliferation of LUAD cells.
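The tumor-versus-normal comparisons in Figure 11A-11G boil down to a two-group test per gene. A minimal Python sketch of such a comparison is given below; the expression values are simulated (SEC61G is given an artificial up-shift in the tumor group), so the output does not reproduce the study's results.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
# Simulated log-expression for SEC61G: tumor samples shifted upward.
normal = rng.normal(loc=5.0, scale=0.6, size=50)
tumor = rng.normal(loc=6.2, scale=0.8, size=50)

# One-sided rank test: is SEC61G expression higher in the tumor group?
stat, p = mannwhitneyu(tumor, normal, alternative="greater")
print(f"SEC61G tumor vs normal: U = {stat:.0f}, one-sided p = {p:.2e}")
```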
DISCUSSION

In 2020, LC stood as the second most frequently diagnosed cancer, constituting around 11.4% of all diagnosed cancers (second only to breast cancer), and remained the primary contributor to cancer-related mortality, with an estimated 1.8 million deaths, accounting for approximately 18.0% of total cancer-related deaths [1]. It is estimated that from 2020 to 2050, the macroeconomic cost of global cancer will reach 25.2 trillion US dollars, with the highest economic burden caused by tracheal, bronchus, and LC (15.4%, 3.9 trillion US dollars) [38]. LC patients are typically diagnosed at an advanced stage and can undergo surgical resection or chemotherapy; however, the treatment outcomes are often unsatisfactory.

Immunotherapy is an innovative approach in cancer treatment, offering advantages that traditional anti-cancer therapies cannot match [3]. It can prolong progression-free survival (PFS) and OS by dynamically modulating the immune system to target cancer cells from multiple angles and directions, thereby helping the immune system to impede or slow down the growth of cancer cells, destroy cancer cells, or prevent cancer from spreading to other parts of the body [5,39,40]. However, immunotherapy also comes with complexities and uncertainties. Excessive activation of the immune system may lead to severe adverse reactions during treatment [17]. To enhance the effectiveness of immunotherapy and minimize the occurrence of adverse reactions, there is an urgent need to identify more accurate predictive indicators.

Exosomes are a class of extracellular vesicles released by living cells that carry a diverse range of biologically active molecules. They can be taken up by adjacent cells through direct fusion, endocytosis, or specific receptor binding, thereby transferring the encapsulated information to target cells. In the context of the TME, exosomes serve as crucial regulatory factors in intercellular communication. They participate in cell-cell contacts and control cellular signal transduction, thus playing important roles in tumor development and progression. The significant association between exosomes and LC has been highlighted in numerous studies. Furthermore, exosomes can be detected in various body fluids, making them promising candidates as diagnostic and prognostic biomarkers for LC. In a study conducted by Grimolizzi et al., the levels of miR-126 were compared in serum, exosomes, and exosome-depleted serum of healthy individuals, as well as early and advanced non-small cell lung cancer (NSCLC) patients. It was found that miR-126 was uniformly distributed in healthy individuals, whereas in early and advanced NSCLC patients, miR-126 was primarily present in exosomes. These findings suggest the involvement of miR-126 in regulating the microenvironmental niche of NSCLC and highlight its potential value for NSCLC diagnosis and personalized therapy [4]. Elevated expression levels of exosomal miR-23b-3p, miR-10b-5p, and miR-21-5p were found to be associated with poor overall survival (OS) in LC patients, as reported by Liu et al. These findings suggest that plasma exosomal miR-23b-3p, miR-10b-5p, and miR-21-5p have potential as noninvasive prognostic biomarkers for LC [41]. In the study conducted by Kanaoka et al., a significant correlation was observed between exosomal miR-451a and lymph node metastasis, vascular invasion, and tumor
stage in LC. It may serve as a reliable biomarker for predicting recurrence and prognosis in patients with stage I, II, and III non-small cell LC [42].

The objective of this study was to examine the association between ERGs and the prognosis of LUAD. Through Cox regression and LASSO regression analyses, a prognostic model was developed utilizing seven ERGs. Based on the median risk value, patients were classified into high-risk and low-risk groups using the established model (a sketch of this scoring scheme is given at the end of this discussion). Notably, the high-risk group demonstrated a notably inferior prognosis in comparison to the low-risk group. To validate the accuracy of the model, ROC curves were computed for the training and testing cohorts. The AUC values of the TCGA cohort and the GEO30219 validation cohort were above 0.7 at 1 year, 3 years, and 5 years, indicating good discriminative ability. Although the AUC values of the GEO30210 and GEO42127 validation cohorts were slightly lower, they still demonstrated reasonable discriminative capacity. Furthermore, clinically relevant ROC curves and decision curves revealed that the risk score outperformed other clinical features in terms of clinical utility. Compared to the low-risk group, the high-risk group had a higher proportion of patients in stages II-IV, consistent with traditional clinical staging. These findings suggest that the model can provide more accurate prognostic predictions for LC patients.

Previous studies have indicated that patients with higher TMB may exhibit increased sensitivity to immunotherapy [15]. In our study, although the difference was not statistically significant, we observed that the low-risk group had higher TMB levels compared to the high-risk group. Further survival analysis revealed that patients in the high-risk group with low TMB had the poorest prognosis, suggesting that these patients may demonstrate better sensitivity to immunotherapy. Within the signature we developed, the gene SEC61G was associated with adverse prognosis in LUAD patients. Our cell experiments demonstrated elevated expression of SEC61G in LUAD cells, and knockdown of SEC61G significantly decreased the proliferative capacity of LUAD cells. These findings provide additional evidence for the involvement of SEC61G in LUAD. SEC61G, a subunit of the endoplasmic reticulum translocon, plays a critical role in various tumors. In their study, Ma et al. observed high expression of SEC61G in breast cancer, which correlated with unfavorable prognosis. Furthermore, they demonstrated that overexpression of SEC61G contributes to the development and metastasis of breast cancer by modulating glycolysis, a process regulated by the transcription factor E2F1. These findings highlight the potential of targeting SEC61G as a therapeutic strategy for breast cancer treatment [43]. In the study conducted by Meng et al., the role of SEC61G in kidney cancer was explored, revealing its upregulation in tumor tissues and its correlation with unfavorable prognosis. Furthermore, the knockdown of SEC61G was observed to hinder cell proliferation, migration, and invasion, while promoting apoptosis. These findings suggest that SEC61G holds promise as both a potential prognostic biomarker and therapeutic target for kidney cancer [44]. Similarly, in our study, we identified SEC61G as a potential target for LUAD, further emphasizing its significance in cancer research.
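The scoring scheme referenced above is a linear predictor over the seven model genes followed by a median split. A schematic Python version follows; the gene list matches the signature reported in this study, but the coefficients are placeholders, since the fitted values are only displayed graphically (Figure 5E).

```python
import numpy as np
import pandas as pd

# The seven model genes are those of the published signature; the
# coefficients below are placeholders for illustration only.
coefs = pd.Series({
    "CCL20": 0.12, "MAP3K8": -0.21, "SEC61G": 0.35, "SLC34A2": -0.10,
    "CD79A": -0.18, "BIRC3": -0.15, "RBM39": 0.22,
})

def risk_scores(expr: pd.DataFrame) -> pd.Series:
    """Linear predictor of a Cox/LASSO model: sum_i coef_i * expression_i."""
    return expr.loc[coefs.index].mul(coefs, axis=0).sum(axis=0)

# Toy expression matrix (genes x patients) and median stratification.
rng = np.random.default_rng(0)
expr = pd.DataFrame(rng.lognormal(size=(7, 6)), index=coefs.index,
                    columns=[f"pt{i}" for i in range(6)])
scores = risk_scores(expr)
groups = np.where(scores > scores.median(), "high-risk", "low-risk")
print(pd.Series(groups, index=scores.index))
```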
Additional experimental validation is essential to confirm these findings. The EAS constructed in this study enables the prediction of prognosis in patients with LUAD and reveals potential opportunities for the implementation of immunotherapy.

AUTHOR CONTRIBUTIONS

The study was conceived and designed by SL, SZ, and XH. Data collection was conducted by YY. HZ and XC performed the statistical analysis. The first draft of the manuscript was written by YZ and YD. The experiment was conducted by SL and SZ. The final approval of the submitted version was given by HL and QH. All authors contributed to the manuscript and approved the submitted version.

Figure 2. Notes on cellular subpopulations. (A) There was no significant batch effect on the cell distribution of the samples. (B) Expression of typical cell type marker genes. (C) tSNE diagram of descending clustering binning. (D) A bubble chart showing the typical marker gene expression corresponding to each subgroup. (E) Cells are annotated into 11 different cell types. (F) The proportion of the 11 cell types in different samples.

Figure 7. Clinical correlation analysis and construction of the nomogram. (A) Heat map constructed by combining clinical features and model gene expression to demonstrate the distribution of clinical features and model genes in the high- and low-risk groups. (B) Bar graphs showing the proportion of T stage, N stage, survival status (fustat), and clinical stage in the high- and low-risk groups. (C) A nomogram constructed by combining age, risk score and clinical stage. (D) Concordance index curves. (E) Decision curve. (F) ROC curves showing AUC values for clinical characteristics, risk scores and nomogram scores at 1, 3, 5, and 7 years, respectively.

Figure 8. Enrichment pathways between different risk groups. (A) GSVA enrichment analysis demonstrating the enrichment of hallmark gene sets between different risk groups. (B) GSEA enrichment analysis demonstrating the enrichment of differential genes in GO pathways between high- and low-risk groups. (C, D) ssGSEA enrichment analysis demonstrating the enrichment of immune cell infiltration and immune-related pathways between high- and low-risk groups.

Figure 9. Immune infiltration assessment. (A) Scatter plot of the correlation between risk score and stromal score, immune score, ESTIMATE score, and tumor purity. (B) Boxplots of differences between risk groups in stromal score, immune score, ESTIMATE score and tumor purity. (C) Heat map demonstrating the differences in immune cell infiltration between high- and low-risk groups assessed using four algorithms. (D) Boxplots of differences between risk groups in TMB. (E) Survival curves showing the difference in survival among four subgroups (high-risk and high-mutation, high-risk and low-mutation, low-risk and high-mutation, low-risk and low-mutation).

Figure 10. Immune checkpoint and immunotherapy analysis. (A) Boxplots showing the difference in immune checkpoint expression between high- and low-risk groups. (B) Correlation scatter plots showing the correlation between model genes and risk scores and immune checkpoint expression. (C) TCIA analysis showing the difference in IPS scores between different risk groups to infer the possible benefit of receiving PD-1 and CTLA-4 treatment in different risk groups. (D) Boxplots demonstrating the possible sensitivity of chemotherapeutic agents between different risk groups.
Figure 11. Experimental validation of model genes and in vitro experiments with SEC61G knockdown. (A-G) Boxplots showing the differential expression of CCL20, MAP3K8, SEC61G, SLC34A2, CD79A, BIRC3, and RBM39 between tumor and normal tissues. (H) Survival curves showing the difference between the SEC61G high and low expression groups. (I) Histogram showing the relative SEC61G expression in BEAS-2B, A549 and H1299 cells. (J, K) After SEC61G knockdown, the colony-forming ability of the A549 and H1299 cell lines decreased significantly.
6,525.4
2023-10-24T00:00:00.000
[ "Medicine", "Biology" ]
Lifetime predictions of non-ionic and ionic biopolymers: kinetic studies by non-isothermal thermogravimetric analysis

In this paper, films based on sustainable polymers with variable charge have been investigated by non-isothermal thermogravimetry in order to predict their lifetime, which is a key parameter for their potential use in numerous technological and biomedical applications. Specifically, chitosan has been selected as positively charged biopolymer, while alginate has been chosen as negatively charged biopolymer. Among non-ionic polymers, methylcellulose has been investigated. Thermogravimetric measurements at variable heating rates (5, 10, 15 and 20 °C min−1) have been performed for all the polymers to study their degradation kinetics by using isoconversional procedures combined with 'Master plot' analyses. Both integral (KAS and Starink methods) and differential (Friedman method) isoconversional procedures have shown that chitosan possesses the highest energetic barrier to decomposition. Based on the Master plot analysis, the decomposition of ionic polymers can be described by the R2 kinetic model (contracted cylindrical geometry), while the degradation of methylcellulose reflects the D2 mechanism (two-dimensional diffusion). The determination of both the decomposition mechanism and the kinetic parameters (activation energy and pre-exponential factor) has been used to determine the decay time functions of the several biopolymers. The obtained insights can be helpful for the development of durable films based on sustainable polymers with variable electrostatic characteristics.

Highlights
1. The lifetime predictions for biopolymeric films have been carried out by non-isothermal thermogravimetry.
2. Chitosan exhibits the largest energetic barrier to the decomposition compared to those of alginate and methylcellulose.
3. The thermal decomposition of chitosan and alginate can be described by the R2 kinetic model (contracted cylindrical geometry).
4. The thermal decomposition of methylcellulose follows the D2 mechanism (two-dimensional diffusion).
5. Chitosan possesses the largest half-life at 25 °C.

Introduction

In recent years, biopolymers have attracted a growing interest as sustainable alternatives for the fabrication of green materials promising for technological [1][2][3][4][5] and biomedical [6][7][8][9][10][11][12][13] applications. In this regard, polysaccharides have been largely used to replace the traditional packaging materials, which are based on petroleum-based plastics [1,14,15]. To this purpose, polysaccharides can be filled with inorganic nanoparticles (such as nanoclays with variable morphology, including halloysite nanotubes and kaolinite nanosheets) in order to obtain nanocomposite materials competitive with the traditional plastics in terms of mechanical resistance and thermal stability [15][16][17]. It is noteworthy that polysaccharides can be considered the most abundant group of natural macromolecules. Therefore, their use in the production of bioplastics presents both economic and environmental benefits [18]. As an example, a circular economy with the reduction of greenhouse gases can be achieved by using biodegradable sources such as natural polymers. In addition, the use of biopolymers decreases the municipal solid wastes, which are mostly composed of the traditional plastics. The biosynthesis of these macromolecules can occur in woods, algae and plants [14]. Additionally, they can be produced by fungi and bacteria [14].
The chemical and physico-chemical characteristics of polysaccharides are strictly dependent on their surface charge. As examples, cellulose is an uncharged biopolymer, while alginate and chitosan possess anionic and cationic groups, respectively [19]. Cellulose is a polymeric chain formed by glucose monomers linked via β-(1 → 4) glycosidic bonds. Due to its chemical composition, cellulose is insoluble in aqueous media, limiting its use for numerous purposes [20,21]. The chemical modification of cellulose leads to the synthesis of water-soluble biopolymers that can be employed for different types of applications, including drug delivery [22,23] and materials sciences [19,21,24]. Within this, cellulose ethers (such as methylcellulose, hydroxypropylcellulose and carboxymethylcellulose) were employed in the fabrication of functional biomaterials, which include sustainable films for packaging [19], hydrogels as carriers for active molecules [25] and surface protectives for artworks [26]. Alginate was widely employed in biomedical applications, such as in the development of injectable biomaterials [27], nanofibers for wound healing [28] and scaffolds for tissue engineering [29]. The combination of alginate with methylcellulose allowed for the fabrication of antimicrobial films, which were obtained through the tape-casting method [30]. Chitosan represents an emerging biopolymer due to its antimicrobial and hydrophobic properties. Recent literature proved that chitosan prevents the proliferation of pathogens, promoting plant growth [31]. As evidenced in a recent review [32], chitosan combined with oppositely charged polyanions (polyelectrolytes, surfactants) can generate different types of composites (coacervates, soluble complexes, thin films, hydrogels) with specific functionalities. In this regard, chitosan/hyaluronan multilayers can be considered suitable as bone scaffolds because of their efficient coating capacity of substrate materials [33]. Layered composite tablets for sodium diclofenac were fabricated by exploiting the electrostatic attractions between chitosan and alginate [34]. The sequential casting procedure was employed for the preparation of flame retardant films based on a chitosan matrix filled with halloysite clay nanotubes [35]. As expected, the lifetime of the biopolymers represents a crucial parameter in the fabrication of biocompatible packaging materials. Non-isothermal thermogravimetry represents an accelerated tool for the lifetime prediction of organic molecules, including polymers [36,37] and drugs [38,39]. Furthermore, isothermal thermogravimetric approaches can be employed to determine the kinetics of degradation for several polymeric materials [36] as well as for biomasses [40,41]. To this purpose, both integral and differential isoconversional methods proved adequate to study the kinetics of the thermal decomposition of macromolecules. The further investigation of the thermogravimetric data with the Master plot analysis leads to the determination of the full kinetic parameters and, consequently, to the simulation of the decay time functions at variable temperatures. Accordingly, the lifetimes of the investigated materials can be easily predicted. Within this, the literature reports that non-isothermal thermogravimetry was successful in the lifetime estimation of polar (such as chitosan) [42] and apolar polymers, including polyethylene [43] and polypropylene [44] particles.
It should be noted that the thermal characterization of microparticles based on thermoplastic polymers is crucial for their use in advanced technological applications [45]. In this work, non-isothermal thermogravimetry was employed to predict the lifetimes of sustainable films based on both ionic (alginate and chitosan) and non-ionic (methylcellulose) biopolymers. The obtained results can be helpful for the development of packaging materials based on bioplastics.

Preparation of biopolymer-based films

Biopolymer-based films were prepared by using the aqueous casting method reported by Bertolino et al. [19]. To this purpose, each biopolymer was homogeneously dispersed in water by magnetic stirring for 2 h at 25 °C. The concentration of the biopolymer dispersions was fixed at 2 wt%. It should be noted that chitosan was dissolved in aqueous solvent at pH = 4, which was reached by adding 0.1 mol dm−3 glacial acetic acid dropwise to water. Afterwards, the biopolymer dispersions were poured into glass Petri dishes (diameter = 9 cm) at 40 °C until the complete water evaporation. After the removal from the dishes, the films were stored in a desiccator at 25 °C.

Non-isothermal thermogravimetric analysis

Thermogravimetric (TG) analyses were carried out through a Q5000 IR apparatus (TA Instruments) under nitrogen atmosphere. In this regard, the experiments were conducted using nitrogen flows of 25 and 10 cm3 min−1 for the sample and the balance, respectively. The mass of each sample was 5.0 ± 0.5 mg. The films were ground before the TG analyses, which were carried out using Platinum-HT sample pans (100 μL). The TG measurements were performed in the range between 25 and 600 °C, while the heating ramp was systematically varied in order to investigate the kinetics of the biopolymer degradation by non-isothermal thermogravimetric methods. Specifically, we selected heating rates (β) of 2, 5, 10, 15 and 20 °C min−1. Prior to the determination of the kinetic parameters, we compared the thermal stability of the different biopolymer films by the analysis of the TG curves at 5 °C min−1. Within this, we determined the onset temperature (Tons) as well as the decomposition temperature (Td) taken at the peak of the differential thermogravimetric (DTG) curves. Moreover, we calculated the mass change from 25 to 150 °C (ML150) to estimate the moisture content of the biopolymer films. As concerns the kinetic investigation, the TG curves at variable heating rates were analysed by the KAS and Friedman methods in order to determine the activation energies (Eα) of the biopolymer decomposition in dependence on the conversion degree (α). In addition, the Master plot analysis was used for the treatment of the TG data, allowing us to estimate the pre-exponential factor of the decomposition process and, consequently, the lifetime of the biodegradable films.

Isoconversional methods and Master plot analysis

The KAS method is an integral isoconversional procedure based on the following equation [46]:

ln(β_i / T_{α,i}²) = Const − E_α / (R·T_{α,i})    (1)

where T_{α,i} represents the temperature at a specified conversion degree (α) under a selected heating rate (β_i). According to Eq. 1, the slope of the ln(β/T²) vs 1/T plots allows us to calculate the activation energy at variable conversion degree (Eα). Another integral isoconversional procedure is the Starink approach [47]. It allows us to obtain a more accurate estimate of Eα, and the method is described as

ln(β_i / T_{α,i}^1.92) = Const − 1.0008·E_α / (R·T_{α,i})    (2)

Thus, Eα can be calculated from the slope of plots of ln(β/T^1.92) versus 1/T.
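As a concrete illustration of Eq. 1, the following Python sketch performs the KAS regression at a single conversion degree. The data are synthetic: the heating rates are generated to be exactly consistent with an assumed activation energy, so the fit simply recovers the assumed value; no measured values from this work are used.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kas_activation_energy(betas, T_alpha):
    """KAS estimate at one conversion degree: the slope of
    ln(beta / T^2) versus 1/T equals -E_alpha / R (Eq. 1)."""
    x = 1.0 / T_alpha
    y = np.log(betas / T_alpha**2)
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope * R

# Synthetic check: pick temperatures, generate heating rates consistent
# with an assumed E = 200 kJ/mol, then verify the regression recovers it.
E_true, C = 200e3, 32.7                 # assumed barrier and intercept
T_alpha = np.linspace(540.0, 575.0, 5)  # K, temperatures at fixed alpha
betas = T_alpha**2 * np.exp(C - E_true / (R * T_alpha))  # ~2-35 K/min

print(f"E_alpha ~ {kas_activation_energy(betas, T_alpha) / 1e3:.0f} kJ/mol")
```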
The Friedman approach is a differential isoconversional method, which correlates Eα to the first derivative of α with respect to temperature (dα/dT). The Friedman method can be expressed as

ln(β_i·(dα/dT)_{α,i}) = ln[A·f(α)] − E_α / (R·T_{α,i})    (3)

being f(α) a function of the extent of conversion, while A and R are the pre-exponential factor and the gas constant, respectively. Based on Eq. 3, Eα can be estimated from the slope of the ln(β dα/dT) vs 1/T linear trends. The Master plot analysis was conducted by the determination of the y(α) curve as

y(α) = (β·dα/dT)·exp(E_0 / (R·T)) = A·f(α)    (4)

where E_0 represents the average activation energy estimated from the isoconversional procedures.

Thermal behaviour of the biopolymer-based films

The thermal stability of the biopolymer films was explored by thermogravimetry, which is an established technique for the thermal characterization of macromolecules [48][49][50][51][52][53]. A preliminary investigation of the thermal characteristics of the films was carried out by the analysis of the TG curves determined at β = 5 °C min−1 (Fig. 1). As a general consideration, we observed a mass loss from 25 to 150 °C due to the expulsion of the water molecules physically adsorbed on the biopolymer films. On this basis, the ML150 values (Table 1) reflect the moisture content of the materials. We detected that alginate and chitosan possess similar affinities towards water, while the ML150 value of methylcellulose is significantly lower compared to the ionic biopolymers.

Fig. 1. Thermogravimetric curves of biopolymer films obtained at β = 5 °C min−1.

As shown in Fig. 1, the films evidenced a mass loss in the range 200-420 °C due to the thermal decomposition of the biopolymers. Specifically, this degradation stage can be attributed to the fracture of glycosidic bonds, dehydration, decarboxylation and decarbonylation for alginate [54], while deacetylation and the cleavage of glycosidic linkages contribute to the decomposition of chitosan [42]. It should be noted that alginate exhibited a small mass loss at ca. 500 °C because of the thermal degradation of fragments formed in the previous degradation stage [54]. Similarly, chitosan evidenced a degradation step in the range 450-600 °C that can be related to the thermal destruction of the pyranose ring as well as to the decomposition of the residual carbon [42]. Oppositely, the MC degradation occurred in one single step due to the cleavage of glycosidic bonds, as reported in the literature [55]. We explored the thermal resistance to degradation by considering the corresponding onset temperatures (Tons), which are presented in Table 1. Moreover, we determined the decomposition temperatures (Td) from the peaks of the DTG curves (Fig. 2). Both Tons and Td data highlighted that methylcellulose possesses the highest thermal stability, while alginate is the biopolymer with the lowest resistance to thermal degradation. In particular, we observed that the decomposition temperature of methylcellulose is larger by ca. 70 and 100 °C compared to those of chitosan and alginate, respectively.

Kinetics of the biopolymer degradation

The kinetics of the biopolymer degradation was studied through non-isothermal thermogravimetry using isoconversional procedures (KAS, Starink and Friedman methods) combined with Master plot analysis. Similar approaches were used for the kinetic investigations of macromolecules [39,56,57] and organic/inorganic composite materials [35,49]. Figure 3 shows the dependences of the activation energy of the biopolymer degradation on the conversion degree determined by using the KAS approach.
According to the literature [46], we can state that the activation energy is constant within the whole conversion degree range for all biopolymers, given that the variations between the maximum and minimum values of Eα are lower than 20-30% of the average activation energy. Similar observations were detected for the Eα vs α trends determined by the Starink method (Fig. 4).

Fig. 3. Dependence of the activation energy on the conversion degree determined by the KAS method.
Fig. 4. Dependence of the activation energy on the conversion degree determined by the Starink method.

As shown in Fig. 5, the analyses by the Friedman method provided Eα vs α functions with a greater level of noise with respect to those obtained by the KAS method (Fig. 3). Similar results were detected for the kinetic studies of cellulose degradation in historical woods [58]. We calculated the average activation energies (Table 2) for the degradation processes of the biopolymers using the Eα values obtained by the isoconversional procedures. We observed that the average activation energies obtained by the KAS method are comparable to those calculated by using both the Starink and Friedman approaches. In addition, we detected that the KAS and Starink methods provided more accurate results, as evidenced by the smaller errors on the average activation energies. As a general result, we observed that the degradation of methylcellulose presents the lowest activation energy compared to those related to the thermal decomposition of the ionic biopolymers. However, it should be noted that the activation energy values depend on the specific mechanism of the polymer decomposition. On this basis, the direct comparison of the activation energy values is valid if the degradation processes of all biopolymers can be described by the same kinetic model. According to this consideration, the Master plot analysis was conducted on the TG data in order to obtain the decomposition mechanism and, consequently, the full kinetic parameters, which allowed us to predict the lifetime of the biopolymers. To this purpose, we determined the y(α) Master plots by Eq. 4. It should be noted that the Master plot analyses were carried out using only the E0 values from the KAS method because of their higher accuracy with respect to those obtained by the Friedman approach. The obtained y(α) vs α plots were interpreted on the basis of the ICTAC recommendations [46], highlighting that the degradation of the ionic biopolymers (chitosan and alginate) can be described by the R2 kinetic model (contracted cylindrical geometry), while the decomposition of methylcellulose can be ascribed to the D2 mechanism (two-dimensional diffusion). It should be noted that both R2 and D2 are reaction models of the decelerating type [46]. According to the R2 mechanism, the pre-exponential factors for the degradation of the ionic biopolymers were determined by fitting the y(α) data with the following equation:

y(α) = 2·A·(1 − α)^(1/2)    (5)

On the other hand, the pre-exponential factor for the degradation of methylcellulose (A = 3.32 × 10^13 s−1) was estimated using the following expression (valid for the D2 kinetic model):

y(α) = A / [−ln(1 − α)]    (6)

The determination of the degradation mechanisms as well as the kinetic parameters (activation energies and pre-exponential factors) can be exploited to predict the lifetimes of the biopolymer films at variable temperatures.
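To make the lifetime calculation concrete, the sketch below evaluates the isothermal decay time (Eq. 7, introduced in the next section) at α = 0.5 using the integral forms g(α) of the R2 and D2 models. The E0 values are those reported by the KAS analysis in this work, and the pre-exponential factor of methylcellulose is the reported one; the A values for chitosan and alginate, however, are placeholders for illustration only, so the printed half-lives are not those of Table 3.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def g_R2(alpha):  # integral form of R2 (contracted cylinder)
    return 1.0 - np.sqrt(1.0 - alpha)

def g_D2(alpha):  # integral form of D2 (two-dimensional diffusion)
    return (1.0 - alpha) * np.log(1.0 - alpha) + alpha

def t_alpha(g, alpha, A, E0, T0):
    """Eq. 7: time to reach conversion alpha at constant temperature T0."""
    return g(alpha) / (A * np.exp(-E0 / (R * T0)))

T0 = 298.15  # 25 degC in kelvin
# (model, A in 1/s, E0 in J/mol); A for chitosan/alginate are placeholders.
polymers = {
    "chitosan":        (g_R2, 1.0e18, 240e3),
    "alginate":        (g_R2, 1.0e17, 225e3),
    "methylcellulose": (g_D2, 3.32e13, 180e3),
}
for name, (g, A, E0) in polymers.items():
    t12_years = t_alpha(g, 0.5, A, E0, T0) / (3600 * 24 * 365.25)
    print(f"{name}: t_1/2 at 25 degC ~ {t12_years:.2e} years")
```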
Lifetime prediction of the biopolymer-based films

The time (tα) needed to reach a certain conversion degree at a fixed temperature (T0) is related to the kinetic parameters of the biopolymer degradation by the following equation:

t_α = g(α) / (A·exp(−E_0 / (R·T_0)))    (7)

where g(α) is the integral form of the reaction model, which depends on the specific mechanism. As concerns the kinetic models employed for the investigated biopolymers, g(α) = 1 − (1 − α)^(1/2) and g(α) = (1 − α)·ln(1 − α) + α represent the functions for the R2 and D2 mechanisms, respectively. According to Eq. 7, we determined the tα vs α curves at T0 = 25 (Fig. 6), 100 (Fig. 7) and 300 °C (Fig. 8). These trends represent simulations of the biopolymer decomposition over time under isothermal conditions, driving to the prediction of the lifetimes for chitosan, alginate and methylcellulose. Based on the simulated curves, we determined the half-lives (t1/2) of the biopolymers (Table 3). Namely, we calculated the times at which the films lose 50% of their initial weights. As reported in the literature [59], t1/2 values are generally used to describe the lifetimes of polymeric materials. Based on the data in Table 3, we can state that chitosan is the biopolymer with the highest stability at 25 and 100 °C, while MC shows the largest half-life at 300 °C. As a general result, alginate exhibited the lowest resistance to degradation. It is important to note that the TG measurements were conducted under nitrogen flow. Therefore, the t1/2 values reflect the lifetimes of the films under inert atmosphere.

Conclusions

The kinetic characteristics of the thermal decomposition of biopolymer films were estimated by non-isothermal thermogravimetry. In particular, isoconversional procedures (KAS, Starink and Friedman methods) combined with the Master plot analysis allowed us to determine the full kinetic path (activation energy, pre-exponential factor and reaction mechanism) useful to predict the lifetime of all the biopolymers. In this study, we investigated films based on differently charged biopolymers, including alginate (anionic), chitosan (cationic) and methylcellulose (non-ionic). All isoconversional procedures evidenced that the chitosan degradation presents the largest activation energy. In particular, the average activation energies determined from the KAS method were 240, 225 and 180 kJ mol−1 for the decomposition processes of chitosan, alginate and methylcellulose, respectively. The Master plot analysis showed that the decomposition of both chitosan and alginate can be described by the R2 kinetic model (contracted cylindrical geometry), while the degradation of methylcellulose follows the D2 mechanism (two-dimensional diffusion). Based on the kinetic parameters, we determined the simulations of the decay time functions for the biopolymer films at 25, 100 and 300 °C. The half-lives obtained from the simulated functions highlighted that chitosan possesses the strongest resistance to decomposition at 25 and 100 °C. On the other hand, MC exhibited the largest half-life at 300 °C. In conclusion, this work evidences that non-isothermal thermogravimetry represents an effective tool to investigate the lifetime of biopolymer films. Among the investigated biopolymers, chitosan can be considered very promising for the fabrication of durable films.

Funding: Open access funding provided by Università degli Studi di Palermo within the CRUI-CARE Agreement. The work was
4,341.4
2021-07-19T00:00:00.000
[ "Materials Science" ]
Simulation of a vessel sensor network of ZigBee standard

The subject of the paper is a vessel sensor network designed to collect data from wireless sensors for various purposes. The peculiarity of this network application is the presence of a large number of premises, wireless communication between which is complicated. One possible approach to solving this problem is to use coordinators connected by a switch into a local network, together with a large number of routers that relay data along the chain. Studies of the proposed vessel wireless network in the OMNeT++ simulation modeling environment using the Castalia framework are carried out in the paper. The process of modeling both the network itself and its individual components, and various modes of their operation, is shown. The interaction of network nodes through intermediate routers is considered. Estimation of losses during packet delivery along a chain of routers, of the time spent on processing packets in routers, and of delays in packet delivery from the sending node to the network coordinator is based on the compiled model. It is concluded that with the increasing complexity of the network structure (growth in the number of routers), there is a proportional rise in the packet delivery delay over the wireless network to the vessel local network. However, all transmitted packets are delivered to the addressee.

Introduction

Wireless networks are widely applied in various fields, including shipping automation. The use of wireless sensor networks (WSN) is considered promising; these are distributed self-organizing networks that are resistant to the failure of individual network nodes. WPAN standards, and in particular the ZigBee standard based on the IEEE 802.15.4 group of protocols, are the most appropriate for these networks [1]. A ZigBee network on a vessel can be used to collect data from devices equipped with sensors: temperature, pressure, humidity, lighting, position and state of mechanisms, etc. Using wireless networks to compile information on the vessel has many advantages over traditional wired networks. The basic ones are the absence of cables, reduction of installation time and cost, low cost of network maintenance, and ease of setting the network up and putting it into operation. However, monitoring systems based on WSN must be reliable, have low data transmission latency, and be resilient to external influences. Simulation modeling plays an essential role in the estimation of WSN parameters. A wireless network model of the 802.15.4/ZigBee standard deployed on a vessel is considered in the paper. The OMNeT++ simulator and the Castalia library, which model network behavior under realistic wireless channel and radio models with close-to-real behavior of network nodes, are used to estimate network parameters. The generalized structure of the vessel WSN based on the ZigBee network, designed to compile information from sensor detectors located on different ship decks, is shown in Figure 1. Here SR is the local network server (Server), SW — Switch, GT — Gateway, C — Coordinator, R — Router, S — Sensor. The coverage area of such a network inside the vessel premises is several tens of meters and depends on the number of routers used. To transfer information collected within the ZigBee network, other data transfer technologies are used outside of it. Since Ethernet is the main data transmission medium on the vessel, an Ethernet gateway is required to connect the ZigBee network to the vessel local network.
The application of gateways joined through a cable connection to the switch of the vessel network makes it possible to organize an extensive WSN consisting of several ZigBee networks that cover all decks and premises. Figure 1 demonstrates that the proposed WSN provides data collection from sensor detectors distributed across various areas of the vessel. A feature of this network is the ability to quickly reconfigure itself depending on the state of the network nodes, the presence of obstacles in the path of signal propagation, and interference. This improves the stability and reliability of the network.

Development of the vessel WSN model of ZigBee standard and its components

The basic tool for WSN analysis is simulation modeling without the use of real equipment, which allows us to evaluate the projected network parameters at the development stage. There are many simulation modeling tools addressing various demands. The OMNeT++ simulation modeling environment (simulator) is among them [3]. This simulator has an advanced graphical interface and provides a flexible ability to change the parameters of the simulation model, and its functionality is not inferior to other simulation modeling tools. The Castalia framework, being the most requested library for simulating ZigBee networks, was used to simulate the vessel WSN in the OMNeT++ environment [4,5]. The library model is based on real radios (using CC2420 microcircuits) for low-power communication and supports a variety of modulation and transmission types. Functions of routing and medium access protocols, including IEEE 802.15.4, are implemented in the library. The Castalia framework has a large number of configurable parameters and is designed to evaluate a variety of network characteristics. The simulation outcomes for a wireless channel are very close to actual indicators, since they take into account various significant features of a tangible wireless channel. A typical network described by the Castalia library assumes the use of nodes with the same set of parameters. It was used to simulate the WSN behavior using the OMNeT++ simulator. The architecture of the links between the model modules is displayed in Figure 2. Figure 2 shows that the Castalia library describes the sensor network as a set of nodes (Node), where each node is connected with sensors that monitor some physical process (Physical Process). Each node can be connected with one physical process. Sensor nodes communicate with each other by means of a shared wireless channel (Wireless Channel). The node model (Node) is presented as a composite object of the OMNeT++ system; the node diagram is shown in Figure 3. The Communication component is also composite (Figure 3 (b)). Its components implement the functions of access control (MAC), access to the radio channel (Radio) and the routing module (Routing). All further components of the model are simple and implemented in C++.

WSN modeling in the OMNeT++ simulator

After creating the network for which the calculation is made, it is necessary to set the model parameters in the omnet.ini file; a fragment of the initialization file is shown in Figure 4. The MultipathRingsRouting module was employed in order to ensure that packets are transmitted along a chain of routers to a coordinator. The packet structure for routing is shown in Figure 5.
Here source is the sender's address; destination — the recipient's address; sequenceNumber — the packet sequence number; multipathRingsRoutingPacketKind — the routing packet type; sinkID — the identifier of the routing centre; senderLevel — the level of the routing ring from which the packet is sent.

Figure 5. Routing packet structure.

The packet transmission algorithm consists of two phases, which are determined by the packet type. Setting up the links between nodes is done by assigning their levels (ring formation) during the first phase. For this purpose, the coordinator sends a packet of the MPRINGS_TOPOLOGY_SETUP_PACKET type to the channel. A node receiving such a packet, in case its level is not set, increases the received level by one, remembers it as its own and transmits the packet further. The rest of the nodes do the same. Thus, level rings are formed around the coordinator. Data exchange takes place during the second phase. The node level is recorded in the packet when a message is sent from that node. If the node level is less than the packet level when receiving a packet of the NETWORK_LAYER_PACKET type, then the packet is transmitted further until it reaches the gateway. If the node level is greater than or equal to the packet level, then such a packet is ignored (as a wrong direction). The algorithm serves to improve the efficiency of data delivery to the gateway and reduce the load on the network.

WSN simulation outcomes in the OMNeT++ simulator

A number of experiments were carried out in order to determine the characteristics that define the QoS (Quality of Service) for assessing the performance of the developed WSN model. The number of data packets transferred in the first transmission cycle, the number of lost packets, the latency (processing time of an incoming packet at a node) and the delay (the time required for a packet to leave the sender's node and reach the coordinator) were estimated as the research result. The conducted experiments using the developed models confirmed the guaranteed delivery of packets from the sending node to the network coordinator. Two parameters were monitored when evaluating latency (the stay duration of a data packet at a node before its further forwarding to other nodes): the time for a packet to move through the node from the application layer to the radio channel, and the transit time of the node (the time taken to move from radio channel to radio channel). Figure 6 (a) schematically demonstrates sending a packet from the application layer with the packet header analysis, changing the header if necessary, and finding a route to the destination node. The packet arrives at the radio layer in Figure 6 (b); it is analyzed and sent in the same way as in the case of sending it from the application layer. The result of modeling revealed that the packet movement time from the application layer to the radio channel is 68.5 ms, and the time for the packet movement from radio channel to radio channel is 75 ms. The delay routing parameter shows the time of packet delivery to the destination node when it is forwarded through intermediate nodes (routers). The change in the packet transit time from the sending node to the destination node (coordinator) with an increase in the number of intermediate transit nodes (router nodes) was investigated during the simulation. Figure 7 shows an increase in the delivery time of a packet from a sending node to a destination node with an increase in the number of packet routing nodes between them. Figure 6.
Packet movement through the node: (a) — from the application layer to the radio channel; (b) — from radio channel to radio channel.

Conclusion

The paper considers a generalized structural diagram of the WSN network on a vessel, taking into account its connection to the vessel local network for compiling data from sensor detectors located in various premises. The peculiarity of using a WSN of the ZigBee standard on a vessel is the complexity of the structure of its connections, due to the presence of a large number of shielding partitions that impede wireless transmission of data between network nodes. One solution to the problem is structuring the WSN with the help of a group of coordinators located on various vessel decks, and the application of a large number of routers to relay data packets from sensor detectors. In the event of an obstacle, this solution allows broken paths to be restored automatically during data transmission and an alternative path to the addressee to be found without a significant time delay for restoring the network integrity. The model describing the WSN operation on a vessel was developed on the grounds of the ZigBee network algorithm. Its research was conducted to evaluate the behavior of the WSN under realistic transmission channel models in the OMNeT++ simulator and the Castalia library, in order to determine the packet delays during their transmission over the wireless network. The experiments using the developed models confirmed the guaranteed delivery of packets over the network, but indicated an increase in delivery time with a growth in the number of packet relays. The resulting dependence of the delivery delay on the number of router nodes is, in its initial section, close to linear. It can be concluded that with the increasing complexity of the network structure (growth in the number of relays), there will be a proportional rise in the packet delivery delay over the WSN to the vessel local network. Since the vessel WSN under consideration belongs to PAN networks characterized by a low data transfer rate, such a delivery delay in most cases cannot be considered critical. Most essential is the fact that, according to the simulation outcomes, all transmitted packets will assuredly be delivered to the addressee.
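As a back-of-the-envelope check on the near-linear trend in Figure 7, the sketch below combines the measured per-node processing times (68.5 ms from the application layer to the radio channel at the sender, 75 ms per radio-to-radio transit) into a simple additive delay model, together with the second-phase forwarding rule of MultipathRingsRouting. This is an illustrative approximation only; it ignores channel access, collisions and retransmissions, which the Castalia simulation does account for.

```python
# Per-node processing times measured in the simulation (milliseconds).
SEND_MS = 68.5     # sender node: application layer -> radio channel
TRANSIT_MS = 75.0  # router node: radio channel -> radio channel

def end_to_end_delay_ms(n_routers: int) -> float:
    """Additive delay from the sending node to the coordinator
    through n_routers intermediate relays."""
    return SEND_MS + n_routers * TRANSIT_MS

def forward(node_level: int, packet_level: int) -> bool:
    """Second-phase rule of the ring routing: a node relays a data packet
    only if it sits on an inner ring (closer to the coordinator)."""
    return node_level < packet_level

# Example: a level-2 router relays a packet stamped with level 3,
# but ignores one stamped with level 2 (wrong direction).
print(forward(2, 3), forward(2, 2))  # -> True False

for hops in range(6):
    print(f"{hops} routers -> ~{end_to_end_delay_ms(hops):.1f} ms")
```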
2,844.8
2021-10-01T00:00:00.000
[ "Computer Science" ]
Development of a Low-Cost Experimental Procedure for the Production of Laboratory Samples of Torrefied Biomass

Currently, the search for alternative sources of energy is not only due to the scarcity of non-renewable sources, since these still have an availability capable of meeting actual consumption needs, but also due to the negative environmental impacts that their consumption presents. Thus, the use of biomass as a renewable and sustainable energy source is increasingly presented as an alternative that must be taken into account. Torrefaction is a conversion process that aims to improve the properties of biomass through its thermal decomposition at temperatures between 220 and 320 °C. Torrefaction can be defined by several variables, which have an impact on the final quality of the torrefied biomass. Therefore, there is an increase in the number of studies involving this topic, in order to improve the production of biomass and its use as a renewable energy source, in addition to reducing the costs of this process. In this work, a protocol was developed for a laboratory test procedure to produce low-cost torrefied biomass samples using equipment that can present a cost reduction of around 90%. The samples were analyzed to prove the viability of the developed protocol. The results obtained agree with the current literature, also confirming the improvement of the biomass properties. This work can serve as a platform for the development of other technologies, such as gasification for the production of hydrogen from torrefied biomass.

Introduction

Fossil energy is, nowadays, the primary source used worldwide. Despite the scarcity of these energy sources expected within the next 50 years, some authors, such as Matias and Devezas (2007), argue that this will not be the motive leading to their replacement by alternative sources, but rather a new technological, environmental, and social paradigm, it being imperative to reduce CO2 emissions, responsible for the greenhouse effect and climate change, by using renewable sources [1]. Biomass is the oldest source of energy used by humans, and is becoming increasingly promising, mainly based on its properties, allowing it to replace fossil energy and consequently reduce CO2 emissions. Heat treatment by torrefaction makes biomass more attractive as a fuel when compared to non-heat-treated biomass [13]. The torrefaction process can be divided into several phases, according to Bergman et al. (2005), as presented in Figure 1 [14].

Temperature and Residence Time

Knowing materials' composition allows the understanding of what reactions occur and how the biomass behaves during the heating phase [15]. Biomass exposure to temperature will lead to the destruction of its structure and, consequently, to mass loss. This disaggregation depends on the exposure time to temperature [16]. Biomass components have different functions and interact according to residence time and temperature [15]. The different variables of the drying process influence changes in the structure and composition, such as particle size, temperature, processing time, and heating rate [17]. The residence time affects the degradation of hemicellulose, while cellulose is more affected by the temperature [14]. Temperature has a more direct and significant influence on torrefaction characteristics than the residence time, defining the reaction kinetics, while the residence time affects only the characteristics of the process, depending on the temperature used, as described in the works of Prins et al. (2006a; 2006b) [18,19].
That is, the residence time, for the same temperature range, can tend the yield to the solid fraction or to the gas fraction (with the mixture of permanent and condensable gases), causing the depolymerization reactions of the constituent compounds of biomass to occur with greater or lesser speed [19].

Heating Rate

The heating rate (°C/min) influences secondary reactions, which in turn affect the distribution of solid, gaseous, and liquid materials [20]. Strezov et al. (2008) showed that the pyrolysis liquid yield from Pennisetum purpureum biomass increased with the increment in the heating rate, while the coal yield did not change [21]. Karim et al. (2010) suggested that the increment in the heating rate reduces heat and mass transfers between particles [22].
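The interplay between temperature and residence time described above can be made concrete with a first-order Arrhenius model of conversion. The sketch below uses hypothetical kinetic parameters (A and E are illustrative values, not fitted parameters from the cited works) to show how the time needed to reach a given mass-loss conversion collapses as the torrefaction temperature rises.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def time_to_conversion(alpha, A, E, T):
    """For a first-order model f(alpha) = 1 - alpha, the isothermal
    solution of d(alpha)/dt = A exp(-E/RT) (1 - alpha) gives
    t = -ln(1 - alpha) / (A exp(-E/RT))."""
    k = A * np.exp(-E / (R * T))
    return -np.log(1.0 - alpha) / k

# Hypothetical parameters for a hemicellulose-like decomposition step.
A, E = 1.0e10, 125e3  # 1/s and J/mol, illustrative only
for T_c in (220, 260, 300):
    t = time_to_conversion(0.3, A, E, T_c + 273.15)
    print(f"{T_c} degC: ~{t / 60:.1f} min to reach 30% conversion")
```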
During torrefaction volatile compounds are produced, and if those are not extracted, the cooling stage can promote the formation of hydrocarbon-based compounds, such as tar, interfering with the torrefied biomass self-ignition process. The solution would be the implementation of a volatile extraction procedure during torrefaction [9]. Torrefaction Reactors Biomass combustion without prior drying presents several disadvantages, one of which is its instability during the process, resulting from the high moisture content [26]. Torrefaction reactors can be divided into three major types: laboratory, pilot-industrial, and commercial [26]. Laboratory scale reactors can be considered extremely important for research and development studies on torrefaction processes and products, and for other applications in pilot-industrial and commercial scales [27]. Although several different reactors can be defined, additional research on the ideal reactor design for minimum energy consumption must be conducted [26]. There are four subcategories of laboratory-scale reactors: • The batch reactor is considered the most simplistic one. A certain amount of material is loaded in the reactor and heated with an electric resistance. It is a reactor that has a higher occurrence of exothermic reactions, raising the temperature of the biomass core. Possible temperature variations occur along the reactor. The heated inert gases pass through a packed or fixed bed, which can create agglomerations of material. They can be vertical or with a horizontal grid. Heat transfer is done indirectly, causing a greater energy expenditure [26]; • The microwave reactor uses high-frequency electromagnetic waves, forcing the vibration of water molecules, increasing temperature. It is a reactor that has less heating time and greater temperature uniformity, with a compact design. It is also a conceptual system, with only qualitative assessments. Heating is achieved through the vibration and friction of the molecules (300 MHz to 300 GHz), which is why it is a volumetric heating reactor [28]; • The rotary drum reactor is the most common, receiving biomass (inflow) and discharging it (outflow). There is the possibility of direct and indirect heating of biomass. There is a difficulty in controlling the process temperature due to occurrence of radiative heat on the drum surface. Direct or indirect heating of biomass and a hybrid model may also occur. There is constant mixing of biomass [8]; • The fluidized bed reactor guarantees a uniform temperature of the biomass on a grid, with the hot gas flowing from the bottom, with the solid particles floating and behaving like a fluid. There is a high heat transfer rate. There is difficulty in separating the bed material, if used, from biomass. A drag of fine particles may occur. A high heat transfer coefficient and temperature uniformity in the bed occurs. There is a high quality of torrefied biomass [29]. Properties of Torrefied Biomass Torrefaction makes biomass more energetically appealing, when compared to natural biomass [13]. The most significant properties of the torrefied biomass are moisture content, grindability, and heating value [6,13]. Moisture content of natural biomass varies between 10 and 50%. However, since higher moisture content represents energy loss when burning, this is an important parameter to take into account [13]. 
Therefore, the torrefaction process includes stages intended to dry the biomass, reducing the moisture content to about 1-3% before the actual torrefaction stage [6]. The reduction of moisture has positive consequences for transportation and storage, as the biomass becomes lighter and less susceptible to biodegradation due to its low water content. Biomass in its natural state is fibrous and tenacious, but torrefaction removes this toughness through the volatilization of hemicellulose and the depolymerization of cellulose, resulting in the shortening of its fibers [8,13]. The length of the particles also decreases, facilitating grinding, handling, and flowability [6,30]. The amounts of H and O lost during torrefaction are higher than the C lost, causing an increase in the HV. The HV of torrefied biomass is higher because of the increment in fixed carbon (FC), in contrast to the release of oxygenated compounds, leaving more carbon available to be oxidized and thus release energy [6]. Torrefaction causes an increase in HV that can reach 58%, depending on the type of biomass, to around 18-26 MJ/kg [31]. Sample Preparation The biomass used in this study was wood chips of Pinus pinaster. The biomass was analyzed both green (as received, without prior drying) and dry, to characterize the raw material before the torrefaction process and afterward to compare the evolution of its properties. The chips passed through a sieving system that yielded samples of approximately 20 mm, to ensure uniformity. The drying process was carried out in a lab oven at 90 °C for 6 h. Samples of approximately 500 g were weighed and wrapped in conventional aluminum foil, as presented in Figure 2. To guarantee reproducibility, all tests were carried out in duplicate.
Equipment Used for Torrefaction As previously mentioned, the objective is the creation of a torrefaction protocol using widely available, regular equipment present in common laboratories. The chosen equipment was a common ceramic muffle furnace, formed by a metallic monobloc with refractory bricks and insulated with a kaolin canvas. Electrical resistances, located on the lateral and bottom surfaces, heat the muffle. A controller allows the setup of different temperature thresholds and residence times, as presented in Table 1. An opening on the top allows for torrefaction gas extraction. Table 1. Correlation of the levels and the torrefaction phases. This type of muffle was used because it is very easily available in laboratories, for example for materials characterization or chemical analysis. It is low-cost equipment and, above all, very easy to use; its purchase price varies with size and programming capacity. The average cost of equipment like the one used in the present study is in the range EUR 1500-3000, depending essentially on the manufacturer, and in any case can be as little as 10% of the cost of the standard torrefaction equipment available in the market mentioned previously. Definition of Parameters As previously mentioned, different tests took place with previously dried pine chips, varying two torrefaction parameters: temperature and residence time. The aim was to select the best set of parameters and obtain good-quality samples without the need for an expensive reactor. Table 2 defines the parameters applied to each series of torrefaction tests carried out (tests T1-T9). Moisture Content To determine this parameter, a Radwag MAC 210 was used, which consists of a precision scale and a halogen lamp and obtains the moisture value by drying the samples. Initially, a sample of at least two grams was introduced; heating caused by the lamp then promotes water evaporation. Finally, the relative moisture content of the sample is given as a percentage through the difference between the initial and final mass. Thermogravimetric Analysis For thermogravimetry (TGA) of the torrefied samples, an Eltra Thermostep model was used. It consists of an oven with a precision scale, in which the crucibles are inserted.
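As a minimal illustration of the loss-on-drying principle used by such moisture analyzers, the sketch below computes the moisture content from the initial and final sample mass; the function name and example masses are illustrative, not values from the study.

```python
# Minimal sketch of the loss-on-drying calculation performed by a
# moisture analyzer such as the one described above. The function
# name and sample masses are illustrative, not from the study.

def moisture_content_percent(initial_mass_g: float, final_mass_g: float) -> float:
    """Moisture content (%) from mass loss during drying."""
    if initial_mass_g <= 0:
        raise ValueError("initial mass must be positive")
    return (initial_mass_g - final_mass_g) / initial_mass_g * 100.0

# Example: a 2.000 g sample drying down to 1.976 g -> 1.2% moisture,
# within the 1-3% range reported for torrefied samples.
print(f"{moisture_content_percent(2.000, 1.976):.1f}%")
```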
The analyses occur under a nitrogen-rich gas flow of 150 mL/min with a heating rate of 50 °C/min, up to 900 °C. During heating, the moisture, volatile, and fixed carbon contents were determined. Finally, the ash content was established from the remaining residue. This procedure requires prior grinding of the torrefied samples, for which a Retsch SM-300 mill was used. The crucibles were weighed, and one gram of sample was introduced into each container. An empty crucible served as a blank sample. Elemental Analysis To determine the elemental composition, a Leco CHN628 analyzer was used. The incineration of the samples up to 900 °C in an oxygen-rich atmosphere, burning all organic compounds, produced CO2, H2O, N2, and SO2. Then, using a gas chromatography detector, the levels of carbon, hydrogen, and nitrogen were obtained. In this case, the samples also had to be previously ground. As soon as the combustion and afterburner chambers reached their temperatures of 900 °C and 850 °C, respectively, the analysis of the samples began. After obtaining the results, the oxygen content of the samples was calculated by difference, based on Equation (1): w(O) = 100 - w(C) - w(H) - w(N) - w(S) (1) where w(O) is the oxygen content (%), w(C) is the carbon content (%), w(H) is the hydrogen content (%), w(N) is the nitrogen content (%), and w(S) is the sulfur content (%). Heating Value Biomass HV can be determined in two distinct manners. The fuel property known as high heating value (HHV) is defined as the amount of energy released as heat plus the latent heat of vaporization of the water vapor created during combustion, while the low heating value (LHV) represents only the amount of energy released as heat. Considering that after torrefaction the moisture content of the biomass is quite low, the LHV is almost equal to the HHV. Therefore, only the HHV was determined for this study, from the proximate analysis results through Equation (2): HHV = 0.3536 FC + 0.1559 VM - 0.0078 A (2) where HHV is the high heating value (MJ/kg), FC is the fixed carbon content (%), VM is the volatile matter content (%), and A is the ash content (%). Energy Density and Mass and Energy Yield To complement the analysis of the samples, the energy densification ratio (EDR), the mass yield ratio (MYR), and the energy yield (EY) were evaluated. According to Grigiante and Antolini (2014), these parameters can be determined analytically from Equations (3)-(5), respectively [33]: EDR = HHV torrefied biomass / HHV dried biomass (3) where HHV torrefied biomass is the high heating value of the torrefied biomass (MJ/kg) and HHV dried biomass is the high heating value of the dried raw biomass (MJ/kg); MYR (%) = (w torrefied biomass / w dried biomass) × 100 (4) where w torrefied biomass is the mass of the dried torrefied biomass (g) and w dried biomass is the mass of the dried raw biomass (g); and EY (%) = EDR × MYR (5) Torrefaction Severity With the completion of the experiments, it was possible to carry out an initial visual assessment of the different degrees of torrefaction, as can be seen from Figures 3-5. Through the analysis of Figure 3, it can be seen that Series 1 was the most severe one, since it presents the darkest color. Figure 4 portrays the results from Series 2. The samples present a dark shade with brownish tones, indicating a lower degree of torrefaction than the previous series. Through the analysis of Figure 5, it can be seen that Series 3 presents a brownish color, which indicates that the lowest intensity was used. These observations were supported by the chemical characterization of the samples.
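The calculations in Equations (1)-(5) can be sketched as follows; the proximate-analysis HHV correlation follows the form assumed in Equation (2) above, and all sample numbers are illustrative placeholders rather than measured values from this study.

```python
# Minimal sketch of the calculations in Equations (1)-(5), assuming the
# proximate-analysis HHV correlation shown above; names and sample
# numbers are illustrative placeholders.

def oxygen_by_difference(c: float, h: float, n: float, s: float) -> float:
    """Eq. (1): oxygen content (%) by difference from CHNS data (%)."""
    return 100.0 - c - h - n - s

def hhv_from_proximate(fc: float, vm: float, ash: float) -> float:
    """Eq. (2): HHV (MJ/kg) from fixed carbon, volatile matter, ash (%)."""
    return 0.3536 * fc + 0.1559 * vm - 0.0078 * ash

def torrefaction_yields(hhv_torr: float, hhv_dry: float,
                        m_torr: float, m_dry: float):
    """Eqs. (3)-(5): energy densification ratio, mass yield (%), energy yield (%)."""
    edr = hhv_torr / hhv_dry          # Eq. (3)
    myr = m_torr / m_dry * 100.0      # Eq. (4)
    ey = edr * myr                    # Eq. (5)
    return edr, myr, ey

# Illustrative numbers only (not measured values from the study):
hhv_dry = hhv_from_proximate(fc=15.0, vm=80.0, ash=0.5)
hhv_torr = hhv_from_proximate(fc=30.0, vm=65.0, ash=1.2)
print(torrefaction_yields(hhv_torr, hhv_dry, m_torr=300.0, m_dry=500.0))
```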
Table 3 presents the averages of the results obtained for the natural, dry, and torrefied biomass, all analyses having been performed in duplicate. The data in the table result from the moisture content determination, the thermogravimetric analysis (fixed carbon, volatile, ash, and moisture contents), and the elemental analysis (CHN). The determination of the moisture content involved two types of analyses. The analysis mentioned in Section 3.2.1 determined the surface water loss, while the actual moisture content was obtained through TGA, since it is a more precise method. As stated by Tumuluru et al. (2011), the moisture content decreases during the drying process [6]. After the analyses of the samples, a moisture reduction of approximately 35% was observed. It is important to remember that all samples were dried before the torrefaction tests. Overview Through the analysis of the torrefied samples, it is noticeable that the different series of torrefaction present different moisture levels, between 1 and 3%, as anticipated by Tumuluru et al. (2011). When comparing these values with those of the dry sample, some differences are noteworthy. The post-torrefaction storage, the atmospheric conditions during collection of the biomass samples, and the constant pre-drying parameters may explain the fluctuations in the values obtained. For example, using a desiccator in the final cooling phase can be decisive in preventing the samples from reabsorbing moisture after removal from the muffle; a desiccator was not used in these tests. Additionally, these discrepancies may also be due to the use of different residence times during the drying stage of the torrefaction. Thermogravimetric Analysis (TGA) The torrefaction process causes an increase in the amount of fixed carbon in the biomass as the intensity increases. The ash content shows a similar progression, while the volatile content displays the opposite behavior. This effect was verified in all torrefaction tests performed, as shown in Table 3, although it is easier to observe through the analysis of Figures 6-8. All results were compared with the dry biomass.
Through the observation of Figure 6, it is possible to establish a relationship between the torrefaction intensity and the parameters mentioned above: the higher the degree of torrefaction, the greater the variation. Figure 6b displays the increase in FC throughout Series 1, by approximately 37% in T1, 39% in T2, and 56% in T3, while revealing the opposite behavior for the volatile content, with an approximate loss of 38% in T1, 40% in T2, and 58% in T3. Concerning the ash content, Figure 6a shows its increase in concentration, intensified by the degree of torrefaction: approximately 0.7% in T1, 0.8% in T2, and 1.5% in T3. The analysis of the variables in Series 2 followed the same patterns as in Series 1. The FC, as shown in Figure 7b, increased by approximately 17% in T4, 22% in T5, and 35% in T6. The volatile content, also presented in Figure 7b, suffered an approximate loss of 17% in T4, 23% in T5, and 36% in T6. As for the ash content, Figure 7a shows an increase of approximately 0.3% in T4, 0.35% in T5, and 0.8% in T6.
Series 3 displayed a smaller variation between the different samples, probably due to the short residence times tested and the lower temperatures. The fixed carbon values, shown in Figure 8b, increased by 2.5% in T7, 3.6% in T8, and 3.8% in T9. For the volatile content, also shown in Figure 8b, there was a decrease of 2.4% in T7, 3.6% in T8, and 4% in T9. As for the ash content, Figure 8a shows a small increase depending on the degree of torrefaction. Since smaller torrefaction parameters were used in Series 3, the variation of the analyzed values was smaller than for the first and second series. Finally, the results obtained are in agreement with the reviewed literature [3,13,31]: in the studies analyzed, torrefaction causes a decrease in volatile content of around 1.5 to 45%, an approximate increase in fixed carbon content of 1 to 40%, and an increase in ash content of 0.1 to 12% [3,13]. Elemental Analysis (CHN) Concerning the elemental analysis, the examination of Table 3 shows that the values for the natural and dry samples do not differ substantially, as expected. As for the torrefied samples, an increase in carbon content relative to the control samples was verified, and the carbon content increased with the torrefaction intensity. Furthermore, the amount of hydrogen decreases with the torrefaction intensity. From the values obtained in the elemental analysis of the samples, the H/C and O/C ratios were calculated, thus building the van Krevelen diagram represented in Figure 9 (adapted from [34]). Through the analysis of the diagram, it is possible to distinguish the three different series used in the process and to confirm that Series 1 (T1, T2, and T3) was subjected to the highest torrefaction intensity, as previously mentioned. In the van Krevelen diagram, these samples lie close to the zone corresponding to coal and anthracite, since they present the smallest H/C and O/C ratios. During hydrothermal carbonization, the removal of oxygen and hydrogen occurs, which leads to a final solid product with lower oxygen-to-carbon and hydrogen-to-carbon ratios [11]. Sample T6 from Series 2 occupies the same location in the diagram, since it was subjected to the same conditions as sample T3 from Series 1. Samples T4 and T5 from Series 2 are in the coal area, although with higher H/C and O/C ratios, due to the lower intensity of the torrefaction process compared to Series 1. Samples T7, T8, and T9 from Series 3 present the highest H/C and O/C values examined, due to the low residence times used during the torrefaction stage, which may not be sufficient to trigger the start of the process. The values obtained through the elemental analysis are in line with the studied literature: the O/C and H/C ratios vary between 0.4 and 0.8, and between 1.2 and 2, respectively, for natural or dry biomass samples; following torrefaction, they vary between 0.1 and 0.7, and between 0.7 and 1.6, respectively [3].
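The H/C and O/C ratios underlying a van Krevelen diagram are molar ratios computed from the elemental mass fractions; a minimal sketch with an illustrative composition (not a measured value from the study):

```python
# Minimal sketch of the H/C and O/C molar-ratio calculation behind a
# van Krevelen diagram, from elemental mass fractions (%). The atomic
# masses are standard values; the sample composition is illustrative.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def van_krevelen_ratios(c_pct: float, h_pct: float, o_pct: float):
    """Return (H/C, O/C) molar ratios from mass percentages."""
    c_mol = c_pct / ATOMIC_MASS["C"]
    h_mol = h_pct / ATOMIC_MASS["H"]
    o_mol = o_pct / ATOMIC_MASS["O"]
    return h_mol / c_mol, o_mol / c_mol

# Illustrative dry-biomass composition: 50% C, 6% H, 43% O
hc, oc = van_krevelen_ratios(50.0, 6.0, 43.0)
print(f"H/C = {hc:.2f}, O/C = {oc:.2f}")  # ~1.43 and ~0.65
```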
Heating Value As previously mentioned, the calorific values were calculated using the results obtained through the thermogravimetric analysis (fixed carbon, volatile, and ash contents). Table 4 displays the results obtained. Table 4. Calorific values (HHV) for the natural and dry biomass and for the torrefied samples T1-T9 of Series 1, 2, and 3. An increase of between 2.6 and 56.3% in the heating value of the torrefied samples is observed when compared to the dry sample; Chew and Doshi (2011) suggest an increase in calorific value of up to 58% [31]. Energy Density and Mass and Energy Yields From the HHVs obtained for the control sample (dry biomass) and the torrefied samples, it was possible to calculate the energy density for each one. Figure 10 displays the results obtained.
Through Figure 11, it was possible to verify that there is a direct relation between mass loss and the increment in energy density, as mentioned by Bergman and Kiel (2005) [35]. Series 1 had the highest torrefaction intensity and, consequently, the highest energy density. Series 2 also shows an increase in its energy density, although not as sharp as that of Series 1. Lastly, Series 3 does not show significant fluctuations in its energy density. Since the energy density is related to the calorific value of the samples, it is expected that those with the highest energy value are those with the highest energy density. The mass yield of each sample was calculated from the difference between its initial and final mass. Figure 11 presents the results obtained. All samples display a loss of mass caused by the torrefaction process, as can be seen in Figure 12. This loss is more accentuated for Series 1 (>50%), since the torrefaction process was more intense for these samples. For Series 2, the loss of mass was about 40 to 50%, less marked than in Series 1. Series 3 presented the lowest mass losses observed (<11%), which again indicates that it was subjected to a very low torrefaction intensity. Figure 11 presents an evaluation of the energy efficiency of all the samples. Although the increase in torrefaction intensity increases the energy density of the samples, the loss of mass makes the process less energy efficient. In other words, although a sample of torrefied biomass has a higher calorific value than natural biomass, larger quantities of torrefied biomass are needed to achieve an equivalent energy output.
Therefore, and as expected, the energy efficiency decreases with increasing torrefaction intensity. Experimental Protocol For the production of torrefied biomass samples, the recommended parameters are those shown in Table 5, which result from the data obtained in the T4 test, the test that produced material with properties closest to those of materials produced in industrial reactors, as described by Nunes (2020) [8]. Thus, the experimental procedure proposed here includes the following steps: • Biomass samples must be prepared according to the procedure presented and described in Section 3.1.1, Sample Preparation; • The muffle must be programmed according to the parameters presented in Table 5, and must therefore allow the programming of at least four temperature levels and timed heating ramps; • After removing the material from the muffle, once a temperature safe enough to open the oven is reached, the material must rest inside a desiccator until it reaches room temperature, in order to prevent the sample from acquiring moisture. Conclusions and Future Work Biomass torrefaction is a very promising emerging technology that has the potential to support energy production.
In this study, an experimental procedure to produce torrefied biomass samples is presented. During the development of this research, it was necessary to take into account the torrefaction parameters used, such as temperature and residence time, as these were crucial for obtaining torrefied samples of quality similar to samples obtained in laboratory reactors. For the natural and dry biomass, as well as for the torrefied samples, the moisture content, fixed carbon content, volatile content, ash content, calorific value, mass and energy yields, and energy density were analyzed. The results show that it is possible to obtain torrefied samples comparable to those of studies developed through the use of reactors. They also show that the analyzed properties are directly related to the torrefaction severity, which in turn depends on the parameters used when defining this protocol. Visual monitoring of the torrefaction intensity of the samples was essential to predict and confirm the results obtained by the chemical characterization of the samples. The chemical analysis indicates that samples with characteristics similar to those obtained in previous studies were achieved, as mentioned. The quality of the results also corroborates the quality of the suggested protocol, since samples with suitable properties were obtained for all three series, even though several torrefaction intensities, produced by varying the parameters, were used. Although less severe torrefaction parameters were used for Series 3, the small fluctuation of the analyzed values indicates that the parameters used for this series are the minimum required to start the torrefaction stage in this procedure. Furthermore, this allows us to conclude that the residence time is the parameter that most affects the torrefaction severity. In addition, these results show that the use of common, widely available, low-cost equipment, such as a laboratory muffle, can achieve a reduction in equipment acquisition cost that can reach 90%. In terms of future perspectives, the development of the torrefaction process can lead to the evolution of other technologies, such as the production of hydrogen from torrefied biomass. Considering the wide variety of biomass types and their differences in both structural and chemical composition, samples of different species may be used in the future to prove the effectiveness of the developed experimental procedure.
Software Historical languages are increasingly being modelled computationally. Syntactically annotated texts are often a sine qua non in their modelling, but parsing of pre-modern language varieties faces great data sparsity, intensified by high levels of orthographic variation. In this paper we present a good-quality Early Slavic dependency parser, attained via manipulation of modern Slavic data to resemble the orthography and morphosyntax of pre-modern varieties. The tool can be deployed to expand historical treebanks, which are crucial for data collection and quantification, and beneficial to downstream NLP tasks and historical text mining. Introduction Dependency parsing is important in many downstream natural language processing (NLP) tasks, including event extraction, word vector representation enhancement, and text classification and summarization. Training good-quality parsers for historical languages is a challenging task, since they normally provide very little data with very high levels of linguistic variation, which in machine learning easily translates into high levels of noise. In this paper we present a variety-agnostic part-of-speech (PoS) tagger and dependency parser for Early Slavic (OldSlavNet), trained on multi-lingual Slavic data spanning a thousand years via orthographic and morphosyntactic harmonization of the modern data with their pre-modern counterparts. Early Slavic and Modern Russian data were harmonized for training; the harmonization scripts, now available for Russian and Serbian, can be downloaded from the parser's repository and used to harmonize new Modern Russian and Serbian texts with Early Slavic, thus potentially improving the parsing performance. The parser is especially crucial to expand historical treebanks, large collections of digital texts annotated with syntactic information: treebanks are a versatile source of data, not only directly exploited in many NLP tasks, such as those mentioned above, but also used by the humanities at large as a stand-alone collection of carefully digitized textual data enriched with linguistic information. Data and parser architecture The parser works in the UD framework [6], one of the most widely employed formats for dependency parsing. The tool's neural-network architecture is based on jPTDP [7]. The following are the main new features in OldSlavNet's model: -argparse substitutes the older optparse to allow for wider reusability of our code. -RMSProp [8] is employed instead of Adam [9] as the optimizer, to avoid exploding gradients. The initial learning rate was set to 0.1 instead of None. -Since the previous experiment in [10], the training set has been expanded with Modern Russian and Serbian data. OldSlavNet's documentation contains a detailed breakdown of the corpus on which the parser was trained and tested. Usage The following is the end-to-end process to use the tool to tag new Early Slavic text: 1. Pre-process your text file: Convert your Early Slavic text to the CoNLL-U UD format by running the converter.py script included in OldSlavNet's repository. The input must be an already tokenized, one-sentence-per-line text file (Fig. 1). 2. Install the required dependencies: Run: pip install -r requirements.txt
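To illustrate the pre-processing step, the sketch below shows what a minimal tokenized-text-to-CoNLL-U conversion looks like; it is an illustrative stand-in written for this description, not the repository's converter.py, and the file names are placeholders.

```python
# Illustrative sketch of the pre-processing step: turning an already
# tokenized, one-sentence-per-line file into blank CoNLL-U for tagging.
# This is a stand-in, not the repository's converter.py.

def to_conllu(in_path: str, out_path: str) -> None:
    with open(in_path, encoding="utf-8") as fin, \
         open(out_path, "w", encoding="utf-8") as fout:
        sent_id = 0
        for line in fin:
            tokens = line.split()
            if not tokens:
                continue
            sent_id += 1
            fout.write(f"# sent_id = {sent_id}\n")
            fout.write(f"# text = {' '.join(tokens)}\n")
            for i, tok in enumerate(tokens, start=1):
                # ID FORM LEMMA UPOS XPOS FEATS HEAD DEPREL DEPS MISC
                cols = [str(i), tok] + ["_"] * 8
                fout.write("\t".join(cols) + "\n")
            fout.write("\n")  # blank line terminates each sentence

to_conllu("early_slavic_tokenized.txt", "early_slavic.conllu")
```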
Impact OldSlavNet's previous version (known as jPTDP-GEN) enabled the study in [10], which discussed the improvement of dependency parsers for low-resource historical languages using cross-dialectal data. OldSlavNet, a generic (i.e. variety-agnostic) parser, was shown to perform better than two variety-specific parsers for Early Slavic, indicating that markedly non-standardized historical languages are likely to benefit more from the development of generic, cross-variety models than from specialized ones. Since [10], OldSlavNet has further improved its real-world performance (i.e. its ability to tackle a wider range of pre-modern Slavic varieties and genres) thanks to additional data from Modern Russian and Modern Serbian, as Table A.1 shows. OldSlavNet has been trialled on new texts in the TOROT Treebank [1,2], a major annotated historical corpus for Slavic and an offspring of the PROIEL project [11,12]. The expansion of historical Slavic treebanks using OldSlavNet will contribute to the advancement of research domains that benefit from syntactically annotated data, particularly for less-resourced languages with great spelling variation: 1. Semantic change detection: A methodological gap noted for decades [13] is the integration of syntactic information in meaning-change modelling. Early Slavic treebank data can now be used in semantic change detection by generating word representations that are both semantically and syntactically constrained (e.g. syntactic word embeddings [14] and syntactic topic models [15]), thus improving the semantic models themselves. Understanding the mechanisms of meaning change in different historical contexts will help design better tools for semantic change detection, which has a wide range of applications in text processing, including information retrieval [16][17][18], culturomics [19], Diachronic Text Evaluation (DTE) [20,21], recontextualization of past texts [22], OCR error correction [23], and abusive content detection [24], among others (see [25] for a detailed survey of applications). 2. Improving NLP system evaluation practices: Early Slavic is ideally placed to be used in the evaluation of NLP systems and methods, in light of its many related subvarieties and its high orthographic variation. This is a challenge for computational models of language change, since NLP systems tend to disregard low-frequency types, which are inevitable in historical sources. More syntactically annotated data for Early Slavic will allow us to systematically investigate how NLP approaches to infrequent tokens impact the generalization of a system's results, thus improving our evaluation practices. 3. Improving representativeness: Expanding Early Slavic treebanks will allow us to develop methods for large-scale quantitative diachronic analyses of linguistic phenomena in languages other than English. The lack of large, non-English diachronic corpora has been stressed in the literature (e.g. [26] and [25]) as a possible bias in historical linguistic research that aims at generalizing findings cross-linguistically. Limitations and future improvements The scripts used to harmonize Russian and Serbian orthography and morphology to Early Slavic are still experimental. Presently, only the tokens belonging to the most frequent morphological tags have been harmonized. Figs. 3 and 4 illustrate how the harmonization routine currently works on a Serbian and a Russian sentence, respectively.
Given the promising results, in following releases we plan to develop harmonization scripts encompassing a wider range of morphological tags, which is expected to yield even better parsing performance on pre-modern Slavic varieties. A drawback of the current version of OldSlavNet is that it takes already sentencized text (i.e. with one sentence per line, as shown in Fig. 1) as input, which requires users to manually split their text into sentences. Implementation of OldSlavNet with spaCy [27] is however underway, in order to complement the parser with an Early Slavic sentencizer that takes unbroken text as input and provides one-sentence-per-line output, which can then be directly fed to OldSlavNet to add syntactic annotation. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Electromagnetic Expansion and Fragmentation of Hollow Aluminium 5052 Tube Electromagnetic forming is a high-speed forming technology by which hollow profiles can be compressed or expanded. It uses a pulsed magnetic field to apply Lorentz forces to electrically conductive material. Electromagnetic hollow tube expansion is limited by the tendency to fragmentation. This work uses a combination of analytical and computational approaches to compute the net tangential stress during tube expansion. A simplified analytical framework to estimate the temporal evolution of the plastic stresses present in aluminium alloy AA5052 at low and high applied magnetic pressures is developed, based upon dynamic imaging. The time-resolved images, captured using a current-synchronised high-speed camera, record the overall dimensional changes of the tube, which are validated by multi-physics simulation of the expansion process. Expansion experiments were carried out at various current levels in the range 76-160 kA, and imaging of the hollow tube expansion was performed at two selected peak currents. The direct visualisation of the increase in tube diameter at the two current levels provided a comparison of the developing net tangential stresses in the hollow tube during undamaged and fragmented expansion. Imaging of the tube expansion also facilitated the estimation of the strain rate experienced by the tube, which was in the range of ~1700 s−1 to ~1200 s−1. The propensity for fragmentation was found to be due to the level and duration of the generated tangential stresses above the yield stress during expansion of the aluminium tubes. The presented study provides a means of exploiting the enhanced formability of aluminium alloys using electromagnetic forming. Introduction Electromagnetic forming is realised by the interaction of a pulsed magnetic field arising from a suitably placed pulsed solenoid coil, which induces a secondary current in the conductive work-piece [1] [2]. As a result, a radial force originates on the work-piece which may have sufficient magnitude to deform it at high strain rates. This principle has been harnessed for the non-contact expansion of metallic objects as well as for the joining of dissimilar metals under the action of an electromagnetic pulse. The method is quite attractive for large sheet metal forming as it combines good quality with low production cost for the manufacture of thin components [2]. Hollow tube and ring expansion under electromagnetic loading is the simplest form of electromagnetic forming. A cylindrical coil is used to provide the electromagnetic force on the work-piece tube surface. It has been shown that the profile of the electromagnetic force generated by the cylindrical coil is maximum at its mid length. As a result, the tube deformation is inhomogeneous in the axial direction [3]. This could cause strain incompatibilities in the deforming tube, thereby limiting the full extent of expansion without defects. Optimum coil design [2] or the use of field shapers [1] are some of the methods used for obtaining a more homogeneous magnetic force profile. In order to understand the fragmentation of the tubes, insights from the large body of experimental and theoretical simulation of the ring expansion process, emphasising neck formation, are useful [3]. Primarily the emphasis is to provide an explanation for the delay in necking, which some studies attribute to the strain rate sensitivity of the material, such as aluminium [3], and others to inertial effects [3].
Unlike the situation of ring expansion, in the case of unconstrained tube expansion the curvature of the expanding surface is non-trivial and hence needs to be modelled explicitly. With the advancement in the availability of multi-physics simulation software, a few recent simulations of the tube expansion process have been reported in the literature [4] [5] [6] [7]. To the knowledge of the authors, the fragmentation behaviour of tubes has not been modelled. Thomas and Triantafyllidis [8] have studied the fragmentation of tubes by generating theoretical formability limits and experimentally comparing the multiaxial strain conditions near and far from the necked region. In order to understand the fragmentation of electromagnetically expanded tubes, a first-hand assessment of the stresses induced during the expansion process is required. Experimentally, this can be addressed by directly recording the expansion with dynamic imaging. The electromagnetic forming process occurs within a few hundred microseconds, and hence the visualisation of the process becomes very challenging. Visualisation of the work-pieces enables not only recording their relative motion but also estimating deformation parameters such as strain rate and stresses. Deformation processes in techniques such as the Taylor impact test [9], the plate-on-plate impact test [10], and the split Hopkinson bar [11] occur in a similar time frame to electromagnetic forming. In these techniques, the dynamic behaviour of the objects has been investigated with high-speed cameras. For example, a ring expansion process captured by a high-speed camera enabled measurement of the expansion velocity as well as observation of crack nucleation along the circumference. Apart from recording the deformation of objects, high-speed cameras have also been used to record the rapid dynamics of the anode arc root in a dc arc plasma torch [12], the single-wire arc spray process [13], and the collisions of aluminium plates occurring in magnetic pulse welding [14]. Aluminium-magnesium alloy AA5052 (Al-2.5Mg), a commonly used alloy due to its excellent strength-to-weight ratio, corrosion resistance, weldability, and recycling potential, is a candidate alloy for electromagnetic pulse welding. In this work, electromagnetic free expansion of aluminium alloy AA5052 tubes was carried out at different coil currents with the aim of providing reasonable estimates of the strain rate and stresses during expansion and fragmentation of the tubes. This was achieved by using a high-speed camera synchronised with the current signal to record the tube expansion. Electromagnetic Forming Set Up A tubular work-piece was formed using a solenoid pulsed electromagnet in an electromagnetic forming set-up. The overall electrical circuit for the tube expansion is of a capacitive-resistive-inductive nature, as shown in Figure 1. In electromagnetic forming, a capacitor bank is charged through a power supply to store electrostatic energy. This stored energy is discharged into the electromagnet with the help of a high-voltage, high-current switch. This produces a current of damped sinusoidal waveform through the electromagnet, which in turn generates a time-varying pulsed magnetic field that induces eddy currents in the surface of the work piece (metallic tube). The magnetic field between the coil and the work piece, together with the current through the work piece, generates a repulsive force J × B in the outward radial direction, causing the tube to expand.
Furthermore, the applied magnetic pressure causes deformation of the work-piece only in areas close to the winding of the coil. The experimental setup (Figure 2) consisted of an electromagnetic coil, a capacitor bank, a high-speed camera interfaced with a personal computer (PC), and a digital storage oscilloscope (DSO). These are briefly described below. 1) Electromagnetic coil: The electromagnet (EM) used in the experiment had 7 turns, an outer diameter of 60 mm (64 mm with insulation), and an inner diameter of 47 mm. The coil was distributed over a length of 47 mm, with the conductor of the coil having a cross-section of 5 mm × 8 mm and a gap of 2 mm between turns. The AA5052 tube (work-piece) was fitted concentrically on the reinforced EM coil, as shown in Figure 3. The resistance and inductance of the EM coil were measured using an LCR meter, and the values were found to be very close to the respective calculated values (Table 1). 2) Capacitor bank: The stored energy of the capacitor bank is discharged into the coil (Figure 4). The resulting current i(t) is determined by the resistance R, inductance L, and capacitance C of the circuit. The electrical equivalent circuit is shown in Figure 4. The expression for the current through the coil is given by Equation (1), and the resulting magnetic pressure (in MPa) is derived in Equation (2): i(t) = (V/(ωd L)) e^(−ε ωn t) sin(ωd t) (1) P = B²/(2µ0), with B = µ0 H and H = N i(t)/l (2) where V = charging voltage in kV, L = inductance in µH, R = resistance in Ω, C = capacitance in µF, ωd = damped frequency of oscillations, ωn = natural frequency of oscillations, ε = damping coefficient, B = magnetic field in Tesla, H = magnetic field intensity in AT/m, l = coil's mean length, i(t) = current in the coil, P = magnetic pressure in MPa, and N = number of turns of the coil. 3) High-speed camera interfaced with PC and digital storage oscilloscope (DSO): To capture the electromagnetic expansion, a high-speed camera (make: PCO AG, Germany; model no. pco1200 hs; standard sensor size 1280 pixels × 1024 pixels; standard frame rate 500 frames per second, FPS) was used. As the expansion process was expected to be completed in a few hundred microseconds (~500 µs), the camera frame rate was set to 10,000 FPS over a sensor area of 980 pixels × 60 pixels focused on the tubular object. A flood light of 1 kW power was used to illuminate the object so as to reduce the exposure time. The CAMWARE software (provided by the high-speed camera manufacturer) was used for capturing images on the PC. A trigger signal (5 V) from the camera was given to the oscilloscope. The current signal from the capacitor bank was used to trigger the oscilloscope. Simultaneous acquisition of the current signal from the EM coil power supply and the signal from the camera was done using the DSO in order to synchronize the images captured by the high-speed camera. Calibration for image size was done before capturing the images, with one pixel corresponding to 1 mm. High Rate Testing Annealed AA5052 (the tube material) was tested using a split Hopkinson bar to determine its room-temperature, high-strain-rate uniaxial properties. For this purpose, cylindrical compression specimens of 5 mm diameter and 5 mm height were used. Multiple tests were carried out within the strain rate range of 1200-3000 s−1. Tube Material and Geometry The work-piece chosen for the studies was annealed AA5052 grade aluminium alloy with a room temperature quasi-static yield strength of about 60-80 MPa. The tube subjected to electromagnetic loading was a hollow cylinder of inner diameter 65 mm, outer diameter 70 mm, and length 100 mm.
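As a rough numerical illustration of Equations (1) and (2), the following sketch evaluates the damped-sinusoid coil current and the resulting solenoid magnetic pressure. The circuit resistance, inductance, and effective coil length used here are assumed values chosen only to give waveforms of the right order of magnitude; they are not the measured parameters of Table 1.

```python
# Minimal sketch of Equations (1) and (2): damped-sinusoid discharge
# current of the series RLC circuit and the resulting solenoid magnetic
# pressure. V and C correspond to the 16 kV / 112 uF charge level;
# R, L, and the effective coil length are illustrative assumptions.
import numpy as np

MU0 = 4e-7 * np.pi          # vacuum permeability (H/m)
V, C = 16e3, 112e-6         # charging voltage (V), capacitance (F)
R, L = 10e-3, 2e-6          # assumed circuit resistance (ohm), inductance (H)
N, length = 7, 47e-3        # coil turns and assumed effective length (m)

w_n = 1.0 / np.sqrt(L * C)          # natural angular frequency
zeta = (R / 2.0) * np.sqrt(C / L)   # damping coefficient
w_d = w_n * np.sqrt(1.0 - zeta**2)  # damped angular frequency

t = np.linspace(0.0, 500e-6, 2000)  # 0-500 us window
i = (V / (w_d * L)) * np.exp(-zeta * w_n * t) * np.sin(w_d * t)  # Eq. (1)

B = MU0 * N * i / length            # solenoid field estimate (T)
P = B**2 / (2.0 * MU0) / 1e6        # magnetic pressure (MPa), Eq. (2)
print(f"peak current {abs(i).max()/1e3:.0f} kA, peak pressure {P.max():.0f} MPa")
```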
Electromagnetic Forming Experiments Carried Out Ten electromagnetic forming experiments were carried out, each at a different peak coil current and a specific frequency, as shown in Table 2. Note that two tests were carried out with a peak coil current of 131 kA, but with different frequencies. Analytical Framework for Estimation of Deformation Parameters The deformation parameters (circumferential stress, true strain, and strain rate) are computed from the time-interleaved images by measuring the outer diameter of the tube as a function of time and estimating the velocity of expansion. The radial expansion of the tube results in thinning of the tube wall. The thinning strain can be written as εT = ln(T/T1), where T is the tube wall thickness and T1 its initial value. Applying the volume constancy relation, i.e. T D = constant, where D0 and Di are the outer and inner diameters of the tube and D = (D0 + Di)/2 is the average diameter, and ignoring the axial change, one obtains T = T1 D1/D. Using the average diameter, the overall strain can be written as ε = ln(D2/D1), where D1 = initial average diameter and D2 = current average diameter. The strain rate can be expressed as dε/dt = (1/D)(dD/dt). From Lal and Hillier [17], the balance of forces governing the radial momentum of the tube provides the expression for the resultant tangential stress σθ. Here P_Gen is the generated magnetic pressure, ρ is the mass density, t is the time, κ = {0.1-0.167} is a geometry-dependent factor, r2 is the outer radius of the coil, r1 is the inner radius of the coil, l is the active length of the coil, τ is a coupling-dependent factor between coil and tube, and α, β are factors given by Equation (6b). It is to be noted that this expression becomes identical to the one used by Janiszewski [18] to compute the tangential stresses during the explosive expansion of a ring in its deceleration stage, when the generated magnetic pressure P_Gen is negligible. The expression is derived by ignoring the axial inertial forces. The thickness during expansion can be calculated from the volume constancy condition, assuming that the change in length is negligible during the expansion of the tube, hence T = T1 D1/D. Numerical Simulation The use of electromagnetic numerical simulation helps to gain insight into the magnetic field distribution between coil and tube during electromagnetic forming [19]. In this work, a 2D axisymmetric simulation model was developed using COMSOL. The magnetic vector potential method is used to solve Maxwell's equations [20] (see Table 3). The following assumptions are made in the developed model [5]: • Displacement current and free charge density are neglected. • The electrical conductivity and permeability of the material are constant and isotropic. • The effect of temperature on material properties is ignored. High Strain Rate Mechanical Properties The flow stress behaviour of annealed AA5052 at two selected strain rates between 1200 and 3000 s−1 is shown in Figure 5. The flow stress behaviour was found to be very similar at the two strain rates. The room temperature yield stress was about 150 MPa, hardening to about 300 MPa at a strain of 0.2. From the nature of the curves it can be inferred that the strain rate sensitivity of the flow stress for AA5052 at room temperature and dynamic strain rates is about 0.09, which can be considered not very significant. EM Expansion of AA5052 Tubes Experiments as per Table 2 were performed. The photographs of the expanded tubes are shown in Figure 6. It is seen that up to a peak current of 131 kA the tubes deformed but did not fracture.
At peak currents of 140 kA and higher the tubes fractured. The diameter of the expanded tubes was measured, and the percentage change in diameter at the different current levels was determined, as shown in Figure 7. Simulation of Magnetic Pressure The stored electrostatic energy in the capacitor bank is suddenly discharged into the electromagnetic coil by closing the spark gap switches. The expression for the current through the coil is given by Equation (1), and the expression for the generated magnetic pressure is derived in Equation (2). A 2-D axisymmetric simulation was performed to calculate the factor κ (see Equation (6c)), which is a function of the coil geometry, the gap between coil and tube, the tube material, and the coupling between coil and tube. Figure 8(a) plots the current waveform at the coil centre using Equation (1); Figure 8(b) shows the simulation result at the peak value of the current (117 kA) for the 7-turn solenoid coil. Along with the supplied coil current, Figure 8(b) also shows the waveform of P_Gen as per Equation (2), referred to as P_calculated, and superimposed on it the pressure obtained through simulation, P_simulated, which represents the magnetic pressure at the inner-diameter surface of the tube. Figure 8(b) shows that the calculated peak pressure is 200 MPa and the simulated peak pressure is 60 MPa. From this, κ can be obtained as the ratio of the simulated to the calculated peak pressure, giving κ ≈ 0.33. Figure 8(c) shows the simulated radial displacement at the tube centre as well as the displacement rate estimated from the displacement curve. It is seen that the time dependence of the radial displacement is sigmoidal. This nature of the curve has also been obtained during aluminium alloy tube expansion by electromagnetic forming in which the radial displacement was monitored by Photonic Doppler Velocimetry (PDV) [21]. The displacement rate was estimated from the tube radial displacement-time curve using a seven-point moving average of the slope at each recorded time point. The displacement rate shows a double hump separated approximately by a quarter time period of the coil current cycle. This is similar to the direct measurement of velocity using the PDV technique in A1060 alloy [22]. The simulation of displacement predicts a diametral expansion of ~17 mm. The maximum displacement rates are greater than 100 m/s. There is a significant difference in the rates of the two peaks, which is at variance with the observation by Jeanson et al. [22]. These differences could arise from the simplifications used in the simulations, such as the use of a 2D axisymmetric model of the tube, considering circular coils rather than helical ones, and ignoring the temperature dependence of the material. Visualisation of Imaging The expansion experiments with the camera were carried out at two different levels of peak current, at charging levels of (15 kV, 224 µF) and (16 kV, 112 µF), which resulted in peak currents of 117 and 140 kA, respectively. The lower level of 117 kA caused expansion of the aluminium tube, and the higher current level of 140 kA led not only to expansion but also to fragmentation of the tubes. The current waveform synchronized with the camera is shown in Figure 9. The sequences of images showing the tube expansion process under the action of peak currents of 117 kA and 140 kA are shown in Figure 10. The time between two images was 98 µs and is the sum of the exposure time (60 µs) and the read-out time (38 µs) of the camera. The expansion images were captured for 293 µs for the 117 kA peak current and for 488 µs for the 140 kA peak current within the given inter-frame rate.
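The displacement-rate estimation described above, a seven-point moving average of the local slope of the displacement-time record, can be sketched as follows; the sigmoidal displacement data and its parameters are illustrative placeholders, not the simulation output.

```python
# Minimal sketch of the displacement-rate estimation described above:
# a seven-point moving average of the local slope of the radial
# displacement-time record. The sigmoidal data below are illustrative
# placeholders, not the simulated values.
import numpy as np

t = np.linspace(0.0, 500e-6, 101)                    # time (s)
r = 8.5e-3 / (1.0 + np.exp(-(t - 150e-6) / 20e-6))   # radial displacement (m)

slope = np.gradient(r, t)                            # pointwise slope (m/s)
rate = np.convolve(slope, np.ones(7) / 7.0, mode="same")  # 7-point average

# Of order 100 m/s, comparable in magnitude to the simulated rates.
print(f"peak displacement rate ~ {rate.max():.0f} m/s")
```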
As pointed out, the camera was synchronised with the minima of the current pulse, so the recorded expansion is considered to be the average over the increasing and decreasing phases of the magnetic pressure on the specimen. During the rising portion of the current the increasing magnetic pressure controls the expansion, while in the decreasing portion, which represents a weakening of the magnetic forces, the expansion is sustained by inertial forces. In the subsequent pulse cycles, due to damping of the peak current, there is a progressive reduction of the contribution of the magnetic pressure to expanding the tube. Thus, expansion is increasingly constrained after the first current cycle, as the developed magnetic pressure is unable to maintain the balance between the tangential stresses and the inertial forces generated in the tube. This results not only in variation of the tangential stresses over the increasing and decreasing portions of the current cycle but also in variation of the expansion rate of the tube and the developed strain rate. To capture this behaviour, the inter-frame rate would need to be at least about 20 times faster than that used in the present work. Such information would be invaluable for elucidating the dynamics of the electromagnetic expansion process in materials. However, for the purpose of the present work, namely characterising the broad stages in the expansion and fragmentation of tubes and determining the nominal deformation parameters of stress and strain rate, the record of the average increase in tube diameter, extending over both the acceleration and deceleration stages, is sufficient.

Estimation of Strain Evolution during Expansion

The increase in diameter of the tube and its calculated thinning as a function of test time for the two levels of peak current used in the study are shown in Figure 11. The evolution of diameter $D$ with time $t$ was fitted to a third-degree polynomial function,

$D(t) = a_0 + a_1 t + a_2 t^2 + a_3 t^3$

A similar approach was adopted by Janiszewski [18] to describe the expansion of ring diameter under an electromagnetic pulse. From this functional relationship between diameter and pulse time one can derive the second derivative, which is required for the computation of the tangential stresses:

$\frac{d^2 D}{dt^2} = 2 a_2 + 6 a_3 t$

It is seen that for the peak current of 117 kA the diameter increases up to ~90 mm, while for the peak current level of 140 kA the diameter increases up to ~120 mm. Thus it can be concluded that a greater expansion of the tube is obtained with increasing peak current. From the images captured during expansion at the two peak current levels, it is seen that at 117 kA the final expanded tube is intact, whereas the tube expanded at 140 kA fragmented at t ~ 300 µs. It can thus be concluded that fragmentation is initiated in the time range of 200 to 300 µs, after the tube has expanded to within the range of 90 to 105 mm. Figure 11(b) shows the variation of the estimated tube thickness assuming volume constancy. The final tube thickness estimated from the volume constancy relation is ~2 mm for the tube expanded with a peak current of 117 kA. The final thickness along the equatorial plane of the tube after expansion was measured with a wall thickness gauge (Kroeplin make, least count 10 µm) to be 2.1 mm on average. Given the assumptions in the calculations, the measurement error and the variation of tube thickness, the match is reasonable.
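The cubic fit and its second derivative can be obtained directly with a least-squares polynomial fit. The diameter readings below are hypothetical, not the measured values of Figure 11(a):

```python
import numpy as np

# Hypothetical diameter readings (mm) at the frame times, here in microseconds
# to keep the cubic fit well conditioned.
t_us = np.array([0.0, 98.0, 196.0, 294.0, 392.0])
D = np.array([50.0, 62.0, 78.0, 88.0, 90.0])

a3, a2, a1, a0 = np.polyfit(t_us, D, 3)  # D(t) = a0 + a1 t + a2 t^2 + a3 t^3
D_fit = np.poly1d([a3, a2, a1, a0])
D_ddot = D_fit.deriv(2)                  # d2D/dt2 = 2 a2 + 6 a3 t, in mm/us^2

for ti in t_us:
    # 1 mm/us^2 corresponds to 1e9 m/s^2
    print(f"t = {ti:5.1f} us  D = {D_fit(ti):6.2f} mm  "
          f"d2D/dt2 = {D_ddot(ti) * 1e9:.2e} m/s^2")
```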
In the tube expanded with a peak current of 140 kA, fragmentation occurred, as a result of which the thickness could not be measured accurately along the equatorial plane. It is likely that the tube could not withstand the expansion and the concurrent thinning of the cross-section at the high rates, leading to its fragmentation. To confirm this, the strain rate and the evolution of the generated forces and stresses in the tube need to be estimated in a reasonable manner.

Figure 11. Evolution of (a) diameter and (b) thickness with time during EMF of AA5052. Also shown are the curves of the third-degree polynomial fitting the data.

Estimation of Strain Rates

The strain rates were calculated from the camera images. In this calculation the spring-back effect has been ignored, as its influence on the strain calculation is not considered significant. The strain rate is then the strain increment between successive images divided by the time elapsed between their capture by the fast camera. The evolution of the diametral strain rate with strain for the two cases is shown in Figure 12. The abscissa shows the strain calculated from Equation (5). The plot in Figure 12 shows that for the tube expanded at both the high and the low peak current the starting strain rate is the same, at ~1750 s−1.

Evolution of Tangential Stresses in Expanding Tube

The variation of the magnetic force during the expansion process for the two peak currents used for expanding the tube is shown in Figure 13. In both plots the band of yield stress of the material in the strain rate range imposed by the electromagnetic pulse is also shown. This facilitates determining whether the magnetic force generated by the applied pulse is greater than the yield stress and consequently induces plastic deformation of the material. It is seen that for the peak current level of 117 kA, only in the first pulse is the generated peak force greater than the yield stress of the material. In the case of the peak current level of 140 kA, the first three pulses are within or above the yield stress band. This indicates that by increasing the current level, the generated peak magnetic force is increased in magnitude over the initial few pulses. As a result, the ability of the generated magnetic force to cause work-piece deformation is increased. Apart from the applied magnetic stresses, which cause deformation only for the fraction of the pulse when they are above the yield stress level, inertial stresses generated in the deforming tube contribute additional stress. The variation of the inertial stress over the test time was calculated for the peak current of 117 kA. The plot indicates that the inertial stresses remain above the band of yield stress until about 200 µs. For the case of the peak current of 140 kA, the fragmentation of the work-piece hindered the estimation of the inertial stresses. However, conservatively assuming that the inertial stress levels are at least the same as those for the peak current of 117 kA, it is clear that the total tangential stresses obtained by increasing the peak current to 140 kA are significantly larger. The comparison of the tangential stresses generated in the work-piece, estimated from Equation (6), is shown in Figure 14. As expected, the tangential stresses are significantly larger for the peak current of 140 kA than for 117 kA. They are not only larger in magnitude but also remain above the yield level of the deforming work-piece material for a longer duration.
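As a sketch of this stress estimate, the thin-shell momentum balance reconstructed earlier can be evaluated at a single instant. All numerical inputs below are assumed values for illustration, not the measured ones:

```python
import numpy as np

RHO = 2680.0                  # mass density of AA5052, kg/m^3
YIELD_BAND = (150e6, 300e6)   # dynamic flow stress band from Figure 5, Pa

def tangential_stress(p_gen, r, T, r_ddot, rho=RHO):
    """Thin-shell radial momentum balance as reconstructed above:
    sigma_theta = p_gen * r / T - rho * r * r_ddot.
    With p_gen -> 0 it reduces to the Janiszewski deceleration form."""
    return p_gen * r / T - rho * r * r_ddot

# Hypothetical instantaneous values part-way through the first pulse:
sigma = tangential_stress(p_gen=60e6,     # simulated peak pressure, Pa
                          r=30e-3,        # current mean radius, m
                          T=2.5e-3,       # current wall thickness, m
                          r_ddot=-2.0e6)  # deceleration from the fitted D(t), m/s^2
print(f"sigma_theta = {sigma / 1e6:.0f} MPa; "
      f"above yield band: {sigma > YIELD_BAND[1]}")
```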
The fragmentation of the tube at 140 kA suggests that the stresses generated were not suitable for obtaining defect-free expansion. The upper limit for expanding tubes of annealed AA5052 grade aluminium alloy is therefore reached when applying the electromagnetic pulse at 140 kA. The present study on the expansion of aluminium tubes has systematically investigated the propensity of the tubes to remain intact over a wide range of current levels, from ~70 kA to 140 kA. The analysis shows that strain rates of ~1700 s−1 are developed, resulting in peak displacement rates of ~200 m/s, determined by using a novel imaging technique that synchronised the imaging with the applied current. These values are consistent with the results obtained by Grady and Benson [23] for 1100 grade aluminium alloys in high-impact electromagnetic ring expansion tests. However, in contrast to ring expansion, electromagnetic tube expansion provides in-plane deformation even at large strains under adiabatic conditions [22], which makes the latter more suitable for the study of fragmentation. The stresses are, however, not uniaxial, primarily because the axial inertial stress components create a plane stress situation. Nevertheless, fragmentation is known to be governed by two broad factors: the capacity of the expanding material to absorb the impact occurring at high rates (the von Kármán velocity limit [24]) and the interplay of the radial and tangential stresses.

Conclusions

The following conclusions can be derived from the study.
1) Experiments were conducted for current levels varying from 76 kA to 160 kA in a 7-turn electromagnet.
2) The expansion of the aluminium tube was imaged using a high-speed camera synchronised with the current, which facilitated an estimation of the imposed strain rate and tangential stresses.
3) Expansion at peak current levels of 76.5 kA to 131 kA resulted in uniform expansion, while expansion at 140 kA and above resulted in fragmentation.
4) The strain rate of tube expansion at both peak currents was in the range 1700-1200 s−1.
5) The magnitude of the peak current, for a given number of coil turns, is decisive in setting the level and duration of the tangential stresses and hence in causing either expansion or fragmentation of the expanding tube.
6,125
2020-10-21T00:00:00.000
[ "Engineering", "Materials Science", "Physics" ]
The Importance of Correlation between CBCT Analysis of Bone Density and Primary Stability When Choosing the Design of Dental Implants—Ex Vivo Study

This study aims to determine the correlation between the mean value of bone density measured on a CBCT device and the primary stability of dental implants determined by resonance frequency analysis. An experimental study was conducted on material of animal origin: bovine femur and pig ribs. Two types of implants of the same dimensions were used: self-tapping and non-self-tapping. The results of the experimental study showed a statistically significant correlation between bone density expressed in HU units and primary stability expressed in ISQ units for self-tapping and non-self-tapping dental implants in bovine femur bones, and for self-tapping implants in pig rib bones. There was no statistically significant correlation for non-self-tapping dental implants in pig rib bones. Self-tapping and non-self-tapping implants did not show a statistically significant difference in primary stability in bones of different qualities. The analysis of bone density from CBCT images in the software of the apparatus, expressed in HU units, can be used to predict the degree of primary stability of self-tapping and non-self-tapping dental implants in bones of densities D1 and D2, and of self-tapping dental implants in bones of the lower quality D4.

Introduction

Caries and periodontitis represent the most common causes of tooth loss. The replacement of missing teeth is very important for the patient, from both the health and the psychosocial aspect [1][2][3][4][5]. Implant therapy, aimed at replacing missing teeth, has been used successfully for the past 50 years and is recognized as an effective treatment option. The introduction of osseointegrated titanium implants back in 1965 resulted in the expansive development of implantology [6]. Numerous studies have shown that implant therapy is considered a predictable type of dental therapy, with a very high average success rate of around 90-95% [7][8][9][10][11]. The primary stability of an implant is determined by:
• the quantity and quality of bone tissue at the implant site,
• the implant design, and
• the surgical implantation technique [15].

Alveolar Bone

Bone density stands as a significant predictor of the success of implant therapy [16]. Therefore, the evaluation of bone density represents an integral part of the preimplantological clinical and radiographic examination. Methods that enable a three-dimensional radiological presentation of the alveolar ridges of the upper and lower jaw include Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT), which are the preferred methods for the analysis of bone density in the preimplantation phase [17]. Misch and Kircos [18] in 1999, and Norton and Gamble [19] in 2001, proposed classifications of bone density based on CT images using interactive software, with the data on bone quality at the site of the future implant being obtained as an objective, quantitative result expressed in Hounsfield units.

Application of CBCT

Adequate radiological imaging during the planning process must provide quality images and allow for realistic analysis as well as qualitative and quantitative measurements of the upper and lower jaw. Qualitative measurements, as well as bone density measurements, can be assessed and presented both visually and numerically using modern dental imaging: Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT) [20][21][22][23].
The principle of operation of the CBCT device is based on measuring the attenuation of X-rays, which are absorbed differently when passing through different types of tissue. In passing through tissue, the radiation is weakened by absorption and scattering of the X-rays. After measurement, detectors convert the rays into electrical signals, and the computer software synthesizes the image from the data obtained from the detector. The synthesized image consists of the image matrix and its volume element (voxel), within which the pixel image element is created; a voxel is three-dimensional, a pixel two-dimensional. In addition to the use of voxels of smaller dimensions to increase the accuracy of the HU number, algorithms are being developed that try to solve the problem of estimating the linear attenuation coefficient for areas that are not fully recorded [24]. CBCT provides high-spatial-resolution images, with reconstructed voxel sizes ranging between 0.07 and 0.4 mm [23]. Depending on the CBCT device, an accuracy level of 200 µm should be feasible, albeit with certain deviations [25]. Depending on the set parameters, the clinician is given a software overview of the mean HU values in a given cylinder, i.e., in and around the virtually positioned implant. The advantages of CBCT over CT are the lower radiation dose to the patient during exposure, the easier installation and the lower cost of the device. Since its launch, the use of CBCT has grown exponentially, with over 85 different CBCT models available [26]. In addition to the proven variations between HU values obtained with CBCT and CT [27,28], an increasing number of studies use software analysis of CBCT images to evaluate bone density [29][30][31][32].

Implant Design

As the design, shape and dimensions of implants can alter surgical outcomes (primary stability, bone compression) as well as biomechanical parameters (force distribution during occlusal function), various designs of commercially available implant systems have been developed with a view to providing optimal implant therapy to patients [33]. Implant macrodesign also covers the shape and design of the thread: its geometry, angle, slope, depth, thickness (width) and spacing. The most important role of the macrodesign is to provide adequate stability after implantation, but also to ensure interaction with bone tissue through osseointegration [34,35].

Primary Implant Stability

The absence of clinical mobility of the implant following implantation represents the stability of the implant. Achieving and maintaining implant stability is a prerequisite for successful osseointegration and for the clinical outcome of dental implant therapy. The Resonance Frequency Analysis (RFA) method, first introduced by Meredith in 1996, is a non-invasive diagnostic method that enables clinical measurement of implant stability as well as monitoring of the biological tissue response and osseointegration as a function of time [36,37]. Since higher bone density (HU) values are associated with higher primary implant stability measured in ISQ units, Hounsfield units can be used as a diagnostic parameter to assess likely implant stability [38][39][40].
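For reference, the conventional CT rescaling from linear attenuation coefficient to Hounsfield units can be written as a one-line function; the attenuation coefficients used below are illustrative assumptions, and CBCT grey values only approximate this scale, which is why HU from CBCT can deviate from CT:

```python
def hounsfield(mu, mu_water=0.19, mu_air=0.0):
    """Standard CT rescaling of a linear attenuation coefficient (1/cm):
    water maps to 0 HU and air to -1000 HU."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

print(hounsfield(0.19))  # water      ->     0 HU
print(hounsfield(0.0))   # air        -> -1000 HU
print(hounsfield(0.38))  # dense bone -> ~1000 HU (assumed mu)
```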
The bovine femur and pork rib models are analogous to human bone densities, and the software of the CBCT device on which the study was performed enables virtual (guided) planning of an implant position that fully corresponds to the actual implant position placed in the model in the next phase of the study. In this way, we obtained a direct relationship between the mean value of bone density around the implant and the primary stability in ISQ units. The study was planned ex vivo because, in the following phases of the complete project, pathophysiological examinations of the bone models were performed to examine the relationship between bone density and primary stability in more detail. The authors considered the prevalence and accessibility of CBCT devices in clinical practice, whereby therapists can assess bone density and select the appropriate implant in relation to the conditions, utilizing adequate preimplant analysis and planning. The clinical application is best reflected in the fact that therapists may decide to use a non-self-tapping or a more invasive self-tapping implant, or to conduct additional procedures such as bone condensation or underprep drilling. This study was performed under the hypothesis: "Analysis of bone density from a CBCT (Cone Beam Computed Tomography) image in the software of the device, expressed in Hounsfield units (HU), can predict the value of primary implant stability, which stands as one of the basic factors for successful osseointegration, thus guiding the choice of implant design."

The aims of the study:
• Determine the correlation between the mean value of bone density measured on the CBCT device and the primary stability of self-tapping and non-self-tapping dental implants determined by resonance frequency analysis on samples of pig ribs and a bovine femur;
• Compare the obtained values of primary stability of self-tapping and non-self-tapping implants installed in pig rib and bovine femur samples.

Materials and Methods

This experimental study dealt with the correlation between radiological analysis of bone density and the primary stability of dental implants of different designs, using material of animal origin.

Experimental Animal Models

The experimental study used a bovine femur as a model of the human lower jaw (bone density D1/D2) and pork ribs of equal cortical thickness of 2 mm as a bone model of the human upper jaw (bone density D3/D4) [41]. All samples were obtained from male experimental animals (due to their higher bone density, analogous to humans), six months old, provided by the local slaughterhouse. In order to preserve the bone and minimize changes in its physical properties, the samples were prepared according to the instructions established by Sedlin and Hirsch: the bone was kept moist at all times, stored frozen in saline at −10 °C, and used over the next 3-4 weeks [42]. For the purposes of the study, 20 samples of pork ribs and 20 samples of bovine femur were used.

Implants Used in the Study

In the experimental part of the study, two types of implants were used:
• self-tapping Bredent Narrow SKY dental implants (Bredent®, Weissenhorner Str. 2, 89250 Senden, Germany), with dimensions 3.5 × 10 mm, and
• non-self-tapping NobelReplace Conical Connection implants (Nobel Biocare Services AG, P.O. Box, CH-8058 Zürich-Flughafen, Switzerland), with dimensions 3.5 × 10 mm.

The implants had the same dimensions but a different thread macrodesign.
Both types of implants are recommended by the manufacturers for placement in bones of different quality. The Bredent Narrow SKY features a conical, cylindrical implant shape with double self-tapping compression threads [43]. The NobelReplace Conical Connection is characterized by a conical shape with non-aggressive, non-self-tapping threads [44,45]. In this experimental study, a total of 80 implants were used, that is, 40 self-tapping and 40 non-self-tapping implants.

Individual Stent Fabrication

An individual stent, or guide, was made for each rib and femur sample used in the study, using an appropriate material (3D Resin, Bredent, Germany), with two sleeves for the 2.25 mm diameter pilot drill. The bones for the experimental part of the study were prepared and dimensionally adjusted. Impressions of the prepared bones were taken with condensation silicone to obtain working models cast from hard gypsum, which were used to make the guides. Each guide was made of self-curing two-component acrylate (3D Resin, Bredent, Germany), which is thermally and dimensionally stable and recommended for the laboratory fabrication of guides in guided implantology. Because heat is released during the curing process, the guides were made indirectly on a model, to avoid any possible influence of heat on the bone surface. Preparation, fixation and marking of the sleeves were performed on the model. Each guide with its sleeves was then checked, marked and fixed to the bone (Figure 1).

Radiographic Analysis of Bone Density

The experimental material, parts of pig ribs and bovine femurs with prepared, fixed and marked sleeves, was recorded individually on a specially designed and adjusted stand of the CBCT device (Planmeca 3D Promax, Asentajankatu 6, FI-00880 Helsinki, Finland). All samples were recorded under the same conditions: 200 µm voxel, 90 kV, 10 mA, 36.4 s, 3112 mGy/cm². Multiplanar reconstruction was performed in Romexis software (Planmeca Romexis 5.3.4.39, Asentajankatu 6, FI-00880 Helsinki, Finland) with the associated mathematical software algorithms for reducing CBCT artifacts. Prior to the start of the study, calibration was performed by authorized Planmeca support (beam check, QA test and flat-field calibration test). Following recommendations from the literature, we used smaller voxels (200 µm) and the latest version of the Romexis software with its accompanying algorithms to reduce scatter artifacts. In the software of the CBCT device, implants with sleeves, Bredent Narrow SKY and NobelReplace Conical Connection, of dimensions 3.5 × 10 mm, were selected from the implant database. The selected implants were virtually placed so that the longitudinal axis of the implant coincided with the axis and shape of the sleeve. After the virtual positioning of the implant, the program automatically produced the values of the average bone density, expressed in Hounsfield units, in a cylinder comprising the virtual implant and 1 mm of the surrounding bone. The mean value of bone density was used for the purposes of the research (Figure 2).
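A simplified stand-in for this software step is shown below: averaging the grey values of a voxel volume inside a cylindrical region of interest. The volume, voxel size and implant position are hypothetical; the actual computation is performed by the Romexis software:

```python
import numpy as np

def mean_hu_in_cylinder(volume, voxel_mm, center_mm, radius_mm, height_mm):
    """Mean grey value (HU) inside a z-aligned cylinder of an isotropic
    volume - a stand-in for the reported average around a virtual implant."""
    nz, ny, nx = volume.shape
    z, y, x = np.mgrid[0:nz, 0:ny, 0:nx] * voxel_mm
    cz, cy, cx = center_mm
    in_height = np.abs(z - cz) <= height_mm / 2
    in_radius = (y - cy) ** 2 + (x - cx) ** 2 <= radius_mm ** 2
    return volume[in_height & in_radius].mean()

# Hypothetical 0.2 mm voxel volume with a denser "bone" block inside.
vol = np.full((100, 100, 100), 250.0)
vol[30:70, 30:70, 30:70] = 900.0
# Implant 3.5 x 10 mm -> radius 1.75 mm, plus the 1 mm surrounding margin.
print(mean_hu_in_cylinder(vol, 0.2, center_mm=(10, 10, 10),
                          radius_mm=1.75 + 1.0, height_mm=10.0))
```

Note that voxels straddling different tissues average their attenuation, which is the partial volume effect discussed later in this paper.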
Procedure for Experimental Implant Placement

After checking the position of the guide, the experimental bone was fixed and the experimental implant placement was initiated. The procedure involved preparation of the implant bed in the bone and placement of the implant in the bed, according to the manufacturer's instructions, using the specialized Bredent® and Nobel Biocare implant surgical sets. The pilot drill was passed through the sleeves of the guided-implantology set to a depth of 10 mm. This was followed by preparation without the use of guides, using the drills for the appropriate bone type and implant system (Figure 3). One Bredent Narrow SKY self-tapping and one NobelReplace Conical Connection non-self-tapping implant were installed in each bovine femur and pig rib sample. The implants were placed in the bed mechanically with a set torque of 35 Ncm.

Primary Stability Measurement Procedure

After placement of both implants, the primary implant stability was measured using resonance frequency analysis (RFA) via the Osstell Mentor device (Integration Diagnostics AB, Stampgatan 14, 411 01 Göteborg, Sweden). A suitable SmartPeg, type 49 for the Bredent Narrow SKY and type 60 for the NobelReplace Conical Connection, is placed on the implant and tightened manually using a force of 4 to 5 Ncm. The Osstell Mentor probe is held at a right angle and at a distance of 2-3 mm from the SmartPeg, in four positions: buccal, oral, mesial and distal. There are two coils in the tip of the device: after switching on, the first coil becomes a magnet that excites the magnet on the SmartPeg, while the second coil registers the vibration produced by the SmartPeg. After a short sound, the ISQ value is read from the device. In this study, we used the mean ISQ value obtained from the four different directions.
Statistical Analysis

Descriptive statistical methods, methods for testing statistical hypotheses, methods for testing correlations, and methods for examining the correlation between outcomes and potential predictors were used to analyze the primary data. Depending on the type of variable, the data are described as n (%) or as mean ± SD. The t-test was used for testing statistical hypotheses, and the Pearson linear correlation coefficient was used to examine the correlation between two variables. Statistical hypotheses were tested at a significance level of 0.05. The obtained data were then statistically processed to obtain the correlation between the mean value of bone density and the value of the primary stability of the implants. The following measurements were performed during the study:
• bone density based on CBCT images, in HU units;
• primary stability of dental implants, in ISQ units.

Results

The experimental study on material of animal origin was conducted using 20 samples of pig ribs and 20 samples of bovine femur, into each of which two implants were installed: in total, 40 self-tapping and 40 non-self-tapping implants (50% each). Table 1 shows the distribution of the mean values of bone density, expressed in Hounsfield units, for the bovine femur and the pork rib. The arithmetic mean and standard deviation of bone density expressed in HU units was 851.8 ± 193.0 in the bovine femur and 255.7 ± 66.1 in the pig rib, a statistically significant difference (t = 18.478; p < 0.001). Figure 4 shows the correlation between the mean values of bone density measured on the CBCT device, expressed in HU units, and the primary stability of the self-tapping dental implants installed in the pig rib. For self-tapping implants in the pig rib, there is a statistically significant moderate positive correlation between bone density expressed in HU units and the primary stability of the dental implants expressed in ISQ units (r = 0.506; p = 0.023). Figure 5 shows the corresponding correlation for the non-self-tapping dental implants in pig ribs. For non-self-tapping implants in the pig rib, there is no statistically significant correlation between bone density, expressed in HU units, and the primary stability of the dental implants, expressed in ISQ units (r = 0.318; p = 0.172). Table 2 shows the values of the primary stability of the self-tapping and non-self-tapping implants in the pig rib. The arithmetic mean and standard deviation of the primary stability of the self-tapping implants in the pig rib, expressed in ISQ units, was 68.2 ± 3.8, while for the non-self-tapping implants it was 67.0 ± 4.5, which is not a statistically significant difference (t = 0.947; p = 0.350).
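The correlation and group-comparison analyses described above can be reproduced with standard tools. The sketch below uses synthetic data drawn around the reported means and SDs, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical paired measurements for 20 pig-rib samples: mean HU around the
# virtual implant and ISQ of the self-tapping implant placed at the same site.
hu = rng.normal(255.7, 66.1, 20)
isq = 60 + 0.03 * hu + rng.normal(0, 3, 20)

r, p = stats.pearsonr(hu, isq)        # Pearson correlation, HU vs. ISQ
print(f"Pearson r = {r:.3f}, p = {p:.3f}")

# Independent-samples t-test, e.g. femur vs. rib bone density.
femur = rng.normal(851.8, 193.0, 20)
t, p2 = stats.ttest_ind(femur, hu)
print(f"t = {t:.3f}, p = {p2:.4f}")
```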
The mean bone density values measured on the CBCT device, expressed in HU units, and their correlation with the primary stability of the self-tapping dental implants installed in the bovine femur are given in Figure 6. For the self-tapping implants installed in the bovine femur, there is a statistically significant strong positive correlation between bone density expressed in HU units and primary implant stability expressed in ISQ units (r = 0.880; p < 0.001). Figure 7 shows the correlation between the mean values of bone density measured using the CBCT device, expressed in HU units, and the primary stability of the non-self-tapping dental implants in the bovine femur. For the non-self-tapping implants in the bovine femur, there is a statistically significant moderate positive correlation between bone density expressed in HU units and primary implant stability expressed in ISQ units (r = 0.584; p = 0.007). The values of the primary stability of the self-tapping and non-self-tapping implants in the bovine femur are shown in Table 3 (the value of ISQ in the bovine femur with respect to the implant type). The arithmetic mean and standard deviation of the primary stability of the self-tapping implants in the bovine femur was 75.8 ± 3.4 ISQ, while for the non-self-tapping implants it was 74.2 ± 3.9 ISQ, which does not represent a statistically significant difference (t = 1.381; p = 0.175).

Discussion

In modern dental practice, implant therapy represents the therapy of choice, indicated for all types of tooth loss. With the development of implantology and the use of a multidisciplinary medical approach, most contraindications for implant therapy have shifted from absolute to relative, with a tendency towards fewer complications and higher success rates. The success rate of implant therapy, according to most studies, ranges from 90% to 95%. Numerous factors influence the success of implant therapy, usually related to the characteristics of the patient, the type of implant and the skills of the therapist [46][47][48][49][50][51][52].
The experimental part of this study, performed on material of animal origin, pig ribs and a bovine femur, assessed the influence of bone density, based on CBCT images, on the primary stability of dental implants of different designs, self-tapping and non-self-tapping. The expansive development of radiological diagnostics and its application in implantology has enabled objective pre-implant preparation as well as the selection of an optimal treatment plan. Qualitative and objective bone analysis became possible with the introduction of three-dimensional multiplanar imaging of the jaw bones, first using Computed Tomography (CT), then Multislice Computed Tomography (MSCT), and then with the application of Cone Beam Computed Tomography (CBCT) in implant practice [53,54]. The evaluation of bone density, both in the experimental part of the study on bones of animal origin and in the clinical part of the study on humans, was performed on the basis of software analysis of CBCT images and expressed in HU units. The validity of assessing bone density in HU units on the basis of CBCT and CT images has been the focus of numerous studies. Armstrong (2006) and Arisan et al. (2013) pointed out that there is a difference in HU values when material is recorded under the same conditions on CT and CBCT [54,55]. Both the analysis of CT images and the analysis of CBCT images can serve as valid methods for the evaluation of bone density expressed in HU units, and such analysis is the method of choice in the preimplantology phase. Due to its availability, lower radiation dose and simpler installation, CBCT is the more common method in dental practice [53,[56][57][58][59][60][61][62]. In the experimental study, the average bone density expressed in HU units in the bovine femur, which was used as a model of the human lower jaw, was 851.8 HU. The lowest average value of bovine femur bone density was 442.9 HU, the highest 1236.9 HU. All of the obtained values correspond to bone quality classified in categories D1 and D2 according to Misch [18], and Q1, Q2 and Q3 according to Norton and Gamble [19], which, according to these authors, can be found locally in the alveolar ridges of the lower jaw. The average bone density in the pig rib, which was used as a model of the human upper jaw, was 255.7 HU, with a minimum of 99.7 HU and a maximum of 388.6 HU. The average bone quality of the pork rib corresponds to bone quality classified in category D4 according to Misch [18] and Q4 according to Norton and Gamble [19], which, according to these authors, can be found in the lateral region of the upper jaw. The partial volume effect, as described in the literature, can explain the registered minimum value of average bone density in one pork rib sample, which showed neither a noticeable deviation in primary stability nor macroscopically noticeably lower strength. When different types of tissue with different X-ray attenuation coefficients, in this case soft tissue cavities and bone trabeculae, are found in one voxel, image quality is sometimes affected and an inaccurate CT or CBCT reading may result [57]. The results of the presented experimental study did not show a statistically significant difference in primary stability between the self-tapping and non-self-tapping implants in either type of bone tissue.
In an in vitro study examining the primary stability of two types of implants in different types of bone tissue, Bilhan et al. (2015) came to similar results, namely that there is no statistically significant difference. The primary stability values in their study are slightly higher for both types of implants compared to ours, which can be explained by the use of implants of larger diameter and length [58]. Falco et al. (2018), whose study was also performed on material of animal origin, showed a statistically significant difference in primary stability between self-tapping and non-self-tapping implants in low-density D4 bone, while in other bone types there was no significant difference [59]. Other studies dealing with the primary stability of implants of different designs conclude that the main determinants of primary stability are bone density and implant macrodesign, and that self-tapping implants are recommended for lower-density bones and for immediate implantation [60][61][62][63][64]. According to the results of the presented experimental study on pig ribs, there is a statistically significant moderate positive correlation between bone density expressed in HU units and the primary stability of self-tapping dental implants expressed in ISQ units (r = 0.506; p = 0.023), while there was no statistically significant correlation in this type of bone for non-self-tapping dental implants. Isoda et al. (2012), in a similar in vitro study using material of animal origin, demonstrated a significant positive correlation between bone density measured on a CBCT device and the primary stability of self-tapping dental implants [65]. Möhlhenrich et al. (2019) published similar results on the correlation between bone density measured on the CBCT device and primary implant stability [66]. The difference in the correlation between bone density and primary stability for the two types of implants in lower-density bone can be explained by the thread macrodesign. In their review of thirteen papers analyzing the correlation between different factors and the primary stability of implants, Marquezan et al. (2012) concluded that self-tapping implants with more aggressive threads show better primary stability than non-self-tapping dental implants [67]. In the presented study, for self-tapping implants installed in the bovine femur there is a statistically significant strong positive correlation, and for non-self-tapping implants a moderately strong positive correlation, between bone density expressed in HU units and primary implant stability expressed in ISQ units. Fuster-Torres et al. (2001) and Isoda et al. (2012) reported a positive correlation between bone density expressed in HU units and primary implant stability, which coincides with the results of the presented study [65,68].
Conclusions

On the basis of the presented results, we can conclude the following:
• By analyzing the density of bone tissue on CBCT images in the software of the device, expressed in HU units, we cannot predict the degree of primary stability of non-self-tapping dental implants in bones of lower quality, D4 according to Misch and Q4 according to Norton and Gamble;
• Self-tapping and non-self-tapping dental implants installed in D4- and Q4-quality bones do not show a statistically significant difference in primary stability;
• By analyzing the density of bone tissue on CBCT images in the software of the device, expressed in Hounsfield units, we can predict the degree of primary stability of self-tapping dental implants in bones of densities D1, D2 and Q1-Q3;
• By analyzing the density of bone tissue on CBCT images in the software of the device, expressed in HU units, we can predict the degree of primary stability of non-self-tapping dental implants in bones of densities D1, D2 and Q1-Q3.
6,981.8
2022-05-11T00:00:00.000
[ "Medicine", "Engineering" ]
Context-Adaptive Learning Designs by Using Semantic Web Services

IMS Learning Design (IMS-LD) is a promising technology aimed at supporting learning processes. IMS-LD packages contain the learning process metadata as well as the learning resources. However, the allocation of resources - whether data or services - within a learning design is done manually at design-time, on the basis of the subjective appraisals of a learning designer. Since the actual learning context is known only at runtime, IMS-LD applications cannot adapt to a specific context or learner. Reusability is therefore limited, and high development costs have to be accepted to support a variety of contexts. To overcome these issues, we propose a highly dynamic approach based on Semantic Web Services (SWS) technology. Our aim is to move from the current data- and metadata-based paradigm to a context-adaptive, service-oriented one. We introduce semantic descriptions of a learning process in terms of user objectives (learning goals) to abstract from specific metadata standards and from the learning resources used. At runtime, learning goals are accomplished by automatically selecting and invoking the services that fit the actual user needs and process contexts. As a result, we obtain dynamic adaptation to different contexts at runtime. Semantic mappings from our standard-independent process models will enable the automatic development of versatile, reusable IMS-LD applications as well as reusability across multiple metadata standards. To illustrate our approach, we describe a prototype application based on these principles.

Introduction

IMS Learning Design (IMS-LD) is a promising technology to support learning processes. It enables the integration of learning activities with available learning resources based on an established standard. Following the IMS-LD specification (IMS, 2006), the description of a learning process is included in a composite learning object together with the learning resources used, i.e. the physical data assets. Whereas the learning resources are allocated at design-time of a specific learning design, the actual learning context, e.g. the needs of an individual learner, is known only at runtime. Therefore, a learning-design-based application cannot adapt dynamically to specific learning contexts, and only few opportunities to reuse a learning design exist (cf. Amorim, Lama, Sánchez, Riera, & Vila, 2006 and Knight, Gašević, & Richards, 2006). To overcome these issues, and thus enable a dynamic adaptation to the learning context and learner needs, we follow the idea of providing the learner with a dynamic supply of appropriate learning-related functionalities at runtime. The considered functionalities are in principle provided by several organizations and are accessible by means of Web service technology. Using Web services, the resulting services [1] are autonomous and platform-independent computational elements, and thus the delivered resources can be shared with anyone through the Internet. However, standard Web service technology does not provide the facilities to completely describe the capability of a service in a way that can be understood by software programs: the meaning of inputs, outputs and applicable constraints, as well as the context in which a service can be used.
In contrast, Semantic Web Services (SWS) technology provides an infrastructure in which new services can be added, discovered and composed continually, and organizational processes automatically updated to reflect new forms of cooperation. It combines the flexibility, reusability and universal access that typically characterize a Web service with the expressivity of semantic mark-up and the reasoning of the Semantic Web (Berners-Lee et al., 2001). Based on semantic descriptions of functional capabilities, a SWS broker automatically selects and invokes the Web services appropriate to achieve a given goal in a specific context. In our vision, learning processes are described in terms of user objectives (learning goals) and abstract from any specific data and metadata standard. Goals are accomplished by automatically selected functionalities fitting the actual user needs and process contexts. Functionalities support the process accomplishment by delivering adequate resources to the user. To actualize this vision, we adopted a layered approach: Web services provide the base layer of executable functionalities; a SWS broker and ontologies support the gradual abstraction from functionality selection, composition and invocation to process context adaptation; finally, semantic mappings will enable the automatic development of versatile, reusable IMS-LD applications as well as reusability across multiple metadata standards, in order to achieve interoperability of a specific learning design. The result is a highly dynamic service-oriented framework based on Semantic Web Services (SWS) technology. In this way, we enable a paradigm shift from the current manual allocation of resources at design-time to an automatic allocation of functionalities at run-time, which provides the dynamic adaptation to different contexts. Furthermore, the introduction of standard-independent semantic process models addresses reusability across multiple metadata standards. Finally, both the dynamic adaptation and the standard independence lead to a reduction of development costs. The rest of the paper is structured as follows: the following section provides brief background information about the specific approach adopted, i.e. IRS-III (Cabral et al., 2006) as SWS broker and WSMO (WSMO Working Group, 2004) as reference ontology for describing services; Section 3 analyses the issues of current e-Learning technologies to detail the motivation of our approach; Section 4 then describes our approach of using a SWS-oriented architecture to support learning contexts, followed by a section introducing our ontological framework; Section 6 explains a prototype application based on IMS-LD and our SWS-based approach, followed by the description of the implemented semantic mappings enabling context adaptation at runtime in Section 7. To validate the benefits of our approach, Section 8 provides a formalized comparison of our approach with the current state of the art. Finally, Section 9 summarizes the contributions of our work and provides an outlook on future work.

Background: IRS-III, a Broker-Based Approach for SWS

IRS-III (Cabral et al., 2006), the Internet Reasoning Service, is an implementation of a SWS broker environment. It provides the representational and reasoning mechanisms that enable the dynamic interoperability and orchestration between services as well as the mediation between their semantic concepts.
IRS-III utilizes a SWS library based on the reference ontology Web Service Modelling Ontology (WSMO) (WSMO Working Group, 2004) and the OCML representation language (Domingue et al., 1999) to store semantic descriptions of Web services and knowledge domains. Different forms of service implementations can be described, encapsulated and exposed as SWS by using IRS-III: standard WSDL-based services, Java functionalities or Lisp functions. WSMO is a formal ontology for describing the various aspects of services in order to enable the automation of Web service discovery, composition, mediation and invocation. The meta-model of WSMO defines four top-level elements:
• Ontologies (Gruber, 1993) provide the foundation for describing domains semantically. They are used by the three other WSMO elements.
• Goals define the tasks that a service requester expects a Web service to fulfil. In this sense they express the requester's intent.
• Web Service descriptions represent the functional behaviour of an existing deployed Web service. The description also outlines how Web services communicate (choreography) and how they are composed (orchestration).
• Mediators handle data and process interoperability issues that arise when handling heterogeneous systems.

Current Issues of the Learning Design Approach

IMS-LD is entirely based on providing a learner with learning resources appropriate to a given learning objective. Like other technologies in this area, e.g. ADL SCORM (Advanced Distributed Learning, 2006) based on IMS Simple Sequencing, IMS-LD follows an approach of providing a learner with composite content packages containing the learning resources as well as the standard-specific process metadata. Learning support is usually based on the following practices:
• Use of specific metadata and learning resources - whether data or services - to support a specific learning objective.
• Resources are manually associated with specific learning objectives, based on the subjective appraisal of an individual learning designer.
• Learning resources are allocated at design-time, i.e. when the actual learning context is not known.

Due to these facts, the following limitations have been identified (cf. Amorim, 2006, Collis & Strijker, 2004, and Knight, Gašević, & Richards, 2006):

L1. Limited appropriateness and dynamic adaptability to actual learning contexts. It is assumed that every learning objective occurs in a specific context, which could be defined by the preferences of the actual learner, e.g. her native language or her technical platform. Learning data is allocated at design-time of a learning process, i.e. when the composite content package is developed. This limits the appropriateness of the data to the actual learning context, since the actual learning context can only be considered at runtime of a learning process. Moreover, the use of static data excludes dynamic adaptability a priori. In parallel to data-centric approaches, analogous issues can also be observed with service-oriented approaches; in that case, however, the issues are related to the allocation of services only.

L2. Limited reusability across different learning contexts and metadata standards. Due to L1, a new learning design (content package) has to be developed for every different learning context or specific learner requirement. For example, a learning package suiting the needs of a learner with specific preferences, e.g.
her native language, cannot be used for other contexts or learners with distinct requirements. Since the metadata is described on the basis of standard-specific specifications, an individual content package cannot be reused across different standards.

L3. High development costs. Due to L1 and L2, high development costs have to be taken into account when developing IMS-LD-compliant e-Learning packages.

Context-Adaptive Learning Designs Based on Automatic Service Selection and Invocation

This section describes our vision as well as our approach to supporting context-adaptive learning designs. Moreover, we use the formalization introduced in the previous section to highlight the benefits of our approach.

Vision

To overcome the limitations L1 and L2 described in Section 3, we consider the automatic allocation and invocation of functionalities at runtime. A typical learning-related service functionality provides the learner, for instance, with appropriate learning content or topic-specific discussion facilities. Learning processes are described semantically in terms of a composition of user objectives (learning goals) and abstract from specific data and metadata standards. When a specific learning goal has to be achieved, the most adequate functionality is selected and invoked dynamically with regard to the demands and requirements of the actual specific context. This enables a highly dynamic adaptation to different learning contexts and learner needs. This vision is radically distinct from the current state of the art in this area, since it shifts from a data- and metadata-centric paradigm to a context-adaptive, service-oriented approach. Moreover, using adequate mappings, our standard-independent process models can be translated into existing metadata standards in order to enable reuse within existing standard-compliant runtime environments. Addressing the limitations L1 and L2 identified in Section 3, we consequently reduce the effort of creating learning process models (L3): one unique learning design can adapt dynamically to different learning contexts and can be translated into different process metadata standards.

Approach: Semantic Abstractions from Learning Data and Metadata

Our approach is fundamentally based on utilizing SWS technologies to realize the following principles:

P1. Abstraction from learning data and functionalities
P2. Abstraction from learning process metadata standards

To support these principles, we introduce several layers as well as mappings between them in order to achieve a gradual abstraction.

Abstraction from Learning Data and Functionalities

To abstract from existing learning data and content, we consider a Web Service Layer. It operates on top of the data and exposes the functionalities appropriate to fulfil specific learning objectives. This first step enables a dynamic supply of appropriate learning data to suit a specific context and objective. Services exposed at this layer may make use of semantic descriptions of available learning data to accomplish their functionalities. In order to abstract from these functionalities (Web services), we introduce an additional layer, the Semantic Web Service Layer. This layer enables the dynamic selection, composition and invocation of the Web services appropriate for a specific learning context. This is achieved on the basis of formal, declarative semantic descriptions of the capabilities of the available services, which enable the dynamic matching of service capabilities to specific user goals.
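A toy sketch of this capability-to-goal matching is given below. The class names, objectives and endpoints are hypothetical, and the real broker (IRS-III) works on full WSMO descriptions and reasoning rather than flat dictionaries:

```python
from dataclasses import dataclass

@dataclass
class Goal:                # stands in for a wsmo:Goal derived from a learning objective
    objective: str
    context: dict          # e.g. the learner profile, known only at runtime

@dataclass
class WebService:          # semantic capability description of a deployed service
    objective: str
    constraints: dict      # context properties the service requires
    endpoint: str

def select_service(goal, registry):
    """Toy stand-in for the broker's discovery step: return the first service
    whose capability matches the goal and whose constraints are satisfied
    by the runtime context."""
    for ws in registry:
        if ws.objective == goal.objective and all(
                goal.context.get(k) == v for k, v in ws.constraints.items()):
            return ws
    return None

registry = [
    WebService("learn-english", {"native_language": "de"}, "http://example.org/en4de"),
    WebService("learn-english", {"native_language": "fr"}, "http://example.org/en4fr"),
]
goal = Goal("learn-english", {"native_language": "de"})
print(select_service(goal, registry).endpoint)  # -> the German-language resource
```

The essential point is that the binding between objective and resource happens here, at invocation time, rather than inside the content package at design-time.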
Abstraction from Learning Process Metadata

A first layer concerned with the abstraction from current learning process metadata standards is the Semantic Learning Process Model Layer. It allows the description of processes within the e-Learning domain in terms of higher-level domain concepts, e.g. learning goals, learners or learning contexts. This layer is mapped to semantic representations of current learning metadata standards in order to enable interoperability between different standards. To achieve a further abstraction from domain-specific process models, be it a learning process, a business process or a communication process, we consider an upper-level process model layer, the Semantic Process Model Layer. This layer introduces, for instance, the mapping between learning objectives and business objectives to support all kinds of organizational processes.

Mappings

Based on mappings between the described layers, upper-level layers can utilize information from lower-level layers. In particular, we consider mappings between a learning objective and a WSMO goal to enable the automatic discovery and invocation of a Web service (Web Service Layer) from, for instance, a standard-compliant learning application (Learning Application Standard Layer). As a result, dynamic adaptation to the individual demands of a learner within a specific learning context is achieved from within existing standard-compliant learning applications. It is important to note that we explicitly consider mappings not only between the semantic layers but also within a specific semantic layer.

Ontological Framework

This section describes the ontological framework aimed at implementing the semantic layers introduced in Section 4.2.

Ontology Stack

To implement the described semantic layers (Section 4.2), we follow an approach of staged ontological mappings between semantic models of a process at different levels of abstraction. Our approach considers different ontologies aimed at providing abstract semantic descriptions of data as well as processes. Figure 2 gives an overview of the main ontological representations considered in our approach as well as their relationships. The general Upper Process Ontology (UPO) abstracts from the process domain and implements the Semantic Process Model Layer. The UPO is currently being developed as part of the SUPER project [5]; it will enable the description of a process independently of its specific purpose and can be mapped to domain-specific process ontologies such as the LPMO. In order to enable a high level of interoperability of our ontologies, we intend to align the LPMO as well as the UPO to the DOLCE foundational ontology (Gangemi et al., 2002). In particular, context descriptions are based on the Descriptions and Situations (DnS) module (Gangemi et al., 2003) of DOLCE. Furthermore, the UPO is mapped to the WSMO standard. These ontologies therefore realise a gradual mapping between a standard learning application and WSMO entities. It has to be highlighted that our ontological architecture explicitly considers mappings not only between the semantic layers but also within a specific semantic layer. This enables, for example, the mapping of our LPMO concepts to other existing semantic descriptions of learning-related concepts.
Semantic Learning Process Model Layer

From an e-Learning perspective, the LPMO has to be perceived as the central ontology within our architecture, since it describes the semantics of a learning process from a general point of view, independent of any supported platform or learning technology standard. The following figure depicts an extract of the proposed LPMO containing some of its main concepts as well as some mappings to key concepts within different semantic layers:

Fig. 3. Conceptual model of parts of the LPMO and key mappings to the UPO and the WSMO framework

As shown in Figure 3, a learning objective as defined in the LPMO is mapped to a upo:Goal, which represents a central concept within the Semantic Process Model Layer. This concept is furthermore mapped to the wsmo:Goal to enable the mapping and matching of appropriate Web services. Besides the proposed mappings between several semantic layers, mappings are also considered within a specific layer to enable a wide applicability of our approach. For example, semantic concepts of our LPMO can be mapped to other existing semantic concepts representing learning-related entities within different approaches, e.g., learning process modules as defined in (Naeve et al., 2006) and (Koper, 2004).

To illustrate the feasibility of our approach, we describe a prototype application based on our conceptual framework (Section 4.2). The following sections report the generic application architecture and the steps to specialize it. The current prototype realizes a simple use case scenario described in Section 6.1. Although both IMS LD and ADL SCORM are supported in the scenario, the following sections focus on the IMS LD-compliant application only. In general, the approach for deploying the ADL SCORM-based application followed analogous implementation steps.

Example Scenario: Supporting Language Learning in different Learning Contexts

In this example scenario, several learners want to learn three different languages: English, German, Italian. It is assumed that each learner has different preferences - e.g., his/her native language - which have to be considered. For example, a German native speaker learning English should be provided with German-language learning resources that teach English. In addition, two different metadata standards should be supported: IMS LD and ADL SCORM. Following the current approach of creating an IMS LD-compliant content package which contains all physical learning resources, a specific package would have to be created for every individual learner in order to achieve a high level of appropriateness to the individual learner's needs. In addition, for every metadata standard which has to be supported, a new standard-compliant process model has to be created. Applying the vision and approach introduced in Section 4, one unique process model - the learning design - can adapt dynamically at runtime to different contexts and needs.

SWS-oriented Architecture

Our current implementation makes use of standard runtime environments: IRS-III (Section 2) is used as the development environment for WSMO descriptions and as SWS broker; the Reload software suite [4] is used for editing and runtime processing of IMS LD. Several distributed Web service and data repositories provide the functionalities to achieve learning goals. Figure 4 outlines the Semantic Web Service Oriented Architecture (SWSOA) used in the current prototype. The defined architecture realizes both the P1 and P2 principles described in Section 4.
Fig. 4. SWS-based software architecture as utilized in the prototype application to support context-adaptive learning designs

Implementation Steps

To support the scenario described in Section 6.1, the following elements had to be provided within the generic architecture presented above:

1. Learning Web service libraries. Web services were provided to support the authentication of the learner and the retrieval of semantic learner profiles, learning metadata and learning contents. The Web services utilized in this demonstrator were partly developed within the LUISA project [1].

2. WSMO Ontologies. To implement the Semantic Learning Process Model Layer, initial semantic representations of ADL SCORM, IMS LD, the LPMO and the content objects provided by the OpenLearn Project [3] have been created. To support individual learner preferences, we particularly consider semantic learner profiles, describing the native language of every learner. All ontologies have been developed using OCML (Domingue et al., 1999) as the ontology language.

3. Mappings between semantic layers as well as metadata standards. We created mappings between the initial implementations of semantic representations of metadata standards (IMS LD, ADL SCORM) and the LPMO as well as WSMO. For instance, we defined a mapping between the lpmo:Objective and the objective description used within the IMS LD metadata (imsld:Objective). Moreover, semantic learning object descriptions based on the LPMO were mapped to OpenLearn content units (ol:Content Unit), whereas the language of a content unit (ol:Language) was mapped to the native language of a learner (lpmo:Language). Since the UPO is not currently supported by any runtime environment, the LPMO objective is directly mapped to a WSMO goal. Figure 5 depicts the main ontological mappings as defined in our prototype. The defined mappings are performed at runtime as specific functionalities. These functionalities are exposed as Web services, which are part of an external learning Web service library. Section 7 details the implemented mappings.

4. WSMO Goal, Web Service, and Mediator descriptions of the available Web services, based on the concepts defined in the WSMO ontologies.

5. Standard-compliant content packages describing the learning activities. An IMS LD-compliant learning design was provided and included in IMS content packages. Instead of grounding the learning activities in static learning data, no static resources were associated with these learning processes; only references to the described WSMO goals were associated with every learning activity. This mapping is achieved by associating a learning activity within the learning metadata with HTTP references to a web applet that requests the achievement of a specific WSMO goal from the SWS broker.

Context-Adaptive Learning Design: Runtime Mappings

In this section, we illustrate the mappings needed to support an automatic allocation of learning resources. All mappings described below are performed at runtime within the involved runtime environments (Reload and IRS-III) to accomplish the automatic adaptation to the actual learning context. The last sub-section sequences the mappings and reports the obtained results.

Mapping between IMS LD and WSMO

If we consider the scenario described in Section 6.1, four learners with different native languages - English, French, Spanish and German - want to learn or improve their skills in three different languages - German, Italian, and English.
By using our IMS LD-compliant e-Learning application, all learners are provided with one unique, context-adaptive IMS LD content package. The package includes the learning process metadata, but it does not contain any physical resource. Instead, each learning activity refers to a WSMO goal. This enables the SWS broker (IRS-III) to select and invoke appropriate services able to achieve the goal.

Listing 2. Portion of the source code of the web applet requesting the achievement of a learning goal from the SWS broker

Mapping between WSMO Goal and WSMO Web Service

In our example scenario, several Web services are invoked to retrieve semantic learning metadata, learner profile descriptions and e-Learning content, as well as to map between different semantic concepts. Therefore, a mapping between a WSMO goal and WSMO Web services was implemented based on the WSMO framework. Usually, different services are able to achieve a given goal. This means that several Web services are linked to a specific WSMO goal by using a dedicated WSMO mediator (WSMO WG Mediator). Based on semantic capability descriptions of the available services, the most appropriate service can be selected to suit a given goal. The following OCML code listing shows a portion of a WSMO description of a Web service (wsmo:WebService) able to provide learning content for teaching German:

Listing 3. Partial source code of a WSMO Web service and its capability description

In Listing 3, the WSMO description defines the assumption of the Web service that the objective provided by the IMS LD content package has the value "Learn German". Achieving this objective requires an orchestration of several services: the goal achievement triggers a sequence of services needed to get information about the actual learner, to retrieve content appropriate to her specific objective, and to select content appropriate for her specific requirements.

Fig. 7. Orchestration of Web services to achieve a specific goal aimed at language learning

For instance, if a learner is authenticated as an English-speaking person (lpmo:Language=English) and uses an IMS LD-based package to learn German, an imsld:Activity with the imsld:Objective "Learn German" is mapped to a specific WSMO goal. The accomplishment of such a goal involves the selection, orchestration and invocation of different Web services, which perform the described mappings and retrieve appropriate learning content: (i) the imsld:Objective is mapped to the lpmo:Objective concept; (ii) the lpmo:Objective is used to retrieve the semantic learning object metadata (LOM) of an appropriate learning object; (iii) the retrieved LOM is used to obtain an OpenLearn learning unit appropriate to the individual language of the learner and his/her current objective. Each of these steps is accomplished by a distinct Web service dynamically selected at runtime.

Mappings between Semantic Concepts of IMS LD, LPMO and Learning Resources

Furthermore, we introduced mappings between semantic concepts of the IMS LD metadata, the LPMO as well as the used learning content objects. As shown in Section 6.3, we provide a mediation between different objective descriptions (lpmo:Objective, imsld:Objective) as well as a mapping between the native language of the learner and the language of the utilized learning content (lpmo:Language, ol:Language).
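The following Python sketch walks through steps (i)-(iii) for this example. The functions stand in for the Web services that the SWS broker discovers and invokes at runtime; the profile fields, metadata and unit identifiers are invented for illustration.

# Sketch of the service orchestration triggered by a goal achievement
# (steps (i)-(iii) above). The functions are placeholders for Web
# services selected and invoked by the SWS broker at runtime.
def map_objective(imsld_objective: str) -> str:
    # (i) mediate imsld:Objective -> lpmo:Objective
    return imsld_objective                     # identical label in this sketch

def retrieve_lom(lpmo_objective: str) -> dict:
    # (ii) retrieve semantic learning object metadata for the objective
    return {"objective": lpmo_objective, "units": {
        "English": "ol:unit-german-for-english-speakers",
        "French": "ol:unit-german-for-french-speakers"}}

def retrieve_content(lom: dict, learner_language: str) -> str:
    # (iii) obtain the OpenLearn unit matching the learner's language
    return lom["units"][learner_language]

def achieve_goal(imsld_objective: str, learner_profile: dict) -> str:
    lpmo_objective = map_objective(imsld_objective)
    lom = retrieve_lom(lpmo_objective)
    return retrieve_content(lom, learner_profile["lpmo:Language"])

# An English native speaker requesting "Learn German" receives an
# English-language unit teaching German.
print(achieve_goal("Learn German", {"lpmo:Language": "English"}))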
These concept mappings (lpmo:Objective/imsld:Objective and lpmo:Language/ol:Language) were implemented using semantic descriptions of the relevant concepts as well as Web services which are able to mediate and map between these concepts. The mapping services were implemented as LISP functions, which were exposed as Web services by using the IRS-III Publisher (Cabral et al., 2006). At runtime, these services are invoked as part of a more complex service orchestration to achieve a specific learning objective. The semantic concepts were implemented in OCML. The following listing presents the mapping of the language of a content object to the native language of the learner:

wsmo:Ontology-Concept
(def-class lpmo-id-object ()
  ((has-lpmo-id :type string)))

Listing 4. LISP-based service and semantic concepts to map learning objects to the native language of a learner

Performing Mappings at Runtime

At runtime, an end-user (learner) accesses a standard-compliant player and loads the standards-compliant content packages as defined in bullet 5 of Section 6.3. The learning application then sequentially presents all of the learning activities that have to be performed. An initial activity first authenticates the learner and retrieves the semantic learner profile description. The WSMO goal associated with this activity is invoked, and the SWS broker dynamically selects and invokes the WSMO Web service with the appropriate capabilities to achieve the specified goal. At this point, the learner preferences are set within the player environment. In the same way, when the learner selects an individual objective within the standard content package, our infrastructure dynamically selects and invokes Semantic Web Services according to his/her preferences and stated objectives. For instance, if a learner is authenticated as an English-speaking person (lpmo:Language=English) and uses an IMS LD-based package to learn German, appropriate Web services are selected and invoked as described in Section 7.2. Figure 9 depicts a screenshot of the same learning activity within the provided IMS LD after another learner was authenticated as a French-speaking learner. It has to be highlighted that, with our approach, the IMS LD adapted to the specific learning context by selecting an appropriate service that provides learning content in the French language only. Moreover, the contents provided by the application were retrieved from two distinct sources.

Although the considered scenario is very simple, our approach already introduces a dynamic context-adaptation at runtime. Since the application fully realizes the general principle and approach stated in Section 4, the scenario could easily be extended in the future to achieve a dynamic adaptation to more complex learning contexts.

Consequently, the necessary effort can be described as e_cum = e_m · m, where e_m is the effort of creating one process model and the number of required process models m grows multiplicatively with the number of supported context parameter values and metadata standards. Based on this formula, we can expect an enormous linear increase in the development costs with an increase in the number of processes which have to be supported.

Required Effort by applying Context-adaptive Learning Designs

Let us refer to the formalization introduced in Section 8.1. According to our vision, the number of process models m necessary to support different processes p is equal to p. However, we have to consider an initial effort e_initial to fully provide the facilities which support our semantic framework: i.e., semantic representations of the process contexts, mappings to metadata standards, as well as SWS descriptions.
Thus, the effort to be spent can be described as e_cum' = e_initial + p · e_m. As shown in Figure 10, we foresee that the advantages of our SWS-based vision become observable with an increasing number of learning processes, since the approach benefits from lower process model development efforts but requires an initial amount of work to provide the necessary facilities.

Validation based on Example Scenario

To support the use case described in Section 6.1, we have to support three different learning processes according to the formalization introduced in the previous section. Each of the learning processes is dedicated to teaching a specific language: Italian, German and English. Therefore, p=3. In addition, we have to support one learning context parameter c - the native language of the learner. This context parameter can take five different values v - English, German, French, Spanish, and an unknown native language. Furthermore, two different metadata standards s have to be supported - IMS LD and ADL SCORM. The cumulative effort to create the necessary process models, i.e., the necessary content packages, follows from the formalization above.

The prototype application (Sections 6 and 7), implemented by applying the vision and approach described in Section 4 to support the example scenario (Section 6.1), took into account the same 3 learning processes p aimed at teaching 3 different languages. If we assume an effort e_m of 1 man-month (mm) and furthermore assume the availability of all facilities enabling our development approach, we do not have to consider the initial development effort e_initial when comparing the efforts of supporting the described scenario by following our approach and the traditional approach described in Section 6.1.

Figure 11 shows that supporting the example scenario by following the traditional approach requires 24 mm; every new learning process has to be taken into account with 8 mm to satisfy just the simple requirements of the example use case. In contrast, by following our SWS-based approach, every new learning process can be supported with just one additional mm. Due to the dynamic adaptation at runtime, a standard-compliant learning design could basically suit all kinds of individual learner requirements and context parameters in the future. We want to highlight that generalizing the effort of creating different learning process models is highly simplistic and is used only to enable a quantification and comparison of the expected efforts. Moreover, it is important to note that the initial effort e_initial could also be high (e.g., 10 mm), but even in this case, with only 2 processes to represent, the approach proposed here already provides an advantage.

Conclusion

Our approach - the support of learning objectives based on a dynamic invocation of SWS at runtime of a learning design - is innovative and distinct from the current state of the art in this area. By using SWS technology, our approach overcomes the limitations described in Section 3 and supports a high level of standard compliance and reusability within existing runtime environments, since it is fundamentally based on compliance with current e-Learning metadata standards.
In particular, the following contributions should be taken into account:

• Dynamic adaptation to specific learning contexts at runtime
• Automatic allocation of learning resources based on comprehensive semantics
• High reusability across learning contexts and metadata standards
• Platform- and standard-independence
• Reuse and integration of multiple available learning resources and sources
• Decrease of development costs

Since our framework is ongoing work, the next steps will be concerned with the implementation of complete ontological representations of the introduced semantic layers as well as of current e-Learning metadata standards and their mappings. For example, the Semantic Process Model Layer is currently not used, and the semantic mappings between the Learning Process Model Ontology and the IMS LD standard are only developed in part. Nevertheless, the availability of appropriate Web services aimed at supporting specific process objectives has to be perceived as an important prerequisite for developing SWS-based applications. To provide more valid quantifications of the expected benefits, further case studies are needed to illustrate the formalized measurements introduced in the sections above. Besides that, future work could also be concerned with the mapping of semantic process models across different process dimensions - e.g., business processes or learning processes - to enable a complete integration of a SWSOA into an organizational process environment.
7,422.8
0001-01-01T00:00:00.000
[ "Computer Science" ]
New Insights Into Functions of IQ67-Domain Proteins

IQ67-domain (IQD) proteins, first identified in Arabidopsis and rice, are plant-specific calmodulin-binding proteins containing highly conserved motifs. They play a critical role in plant defenses, organ development and shape, and drought tolerance. Driven by comprehensive genome identification and analysis efforts, IQDs have now been characterized in several species and have been shown to act as microtubule-associated proteins, participating in microtubule-related signaling pathways. However, the precise molecular mechanisms underpinning their biological functions remain incompletely understood. Here we review current knowledge on how IQD family members are thought to regulate plant growth and development by affecting microtubule dynamics or participating in microtubule-related signaling pathways in different plant species, and we propose some new insights.

INTRODUCTION

IQ67-domain (IQD) proteins, originally identified in Arabidopsis thaliana and rice, are a class of calmodulin-binding proteins unique to plants. They are common in a wide variety of land plants from mosses to vascular plants, and they play a critical role in basic host defenses (Levy et al., 2005), cell shaping (Huang et al., 2013; Liu et al., 2020), and drought resistance (Wu et al., 2016; Yuan et al., 2019). The proteins localize to various compartments including the nucleus, cytoplasm, plasma membrane, and microtubules in Arabidopsis (Burstenbinder et al., 2017b), but their subcellular localization patterns vary (Tables 1, 2). IQD proteins have a central region of 67 conserved amino acids, the eponymous IQ67 domain, which is responsible for recruiting calmodulin, which acts as a Ca2+ sensor (Abel et al., 2013). There are two types of motifs within the IQ67 domain: (1) the Ca2+-independent IQ motif (IQxxxRGxxxR or I/L/VQxxxRxxxxR/K); and (2) the Ca2+-dependent 1-5-10 and 1-8-14 motifs (Wu et al., 2011).

The IQD protein family has now been comprehensively annotated in several plants (Table 2). Their functions differ among the plants studied: for example, SUN/IQD regulates cell division to elongate tomatoes (Wu et al., 2011); IQD1 acts in defense against herbivores such as aphids in Arabidopsis (Levy et al., 2005); and ZmIQDs and PtIQDs respond to drought stress (Cai et al., 2016). Nevertheless, the underlying molecular basis, and the functions of other, as yet uncharacterized IQDs in different plants, may share the same mechanisms, but this has not been confirmed.

IQD PROTEINS: SCAFFOLD PROTEINS ASSOCIATED WITH MICROTUBULES

Scaffolding proteins interact or bind with several proteins to form an anchoring complex in specific intracellular niches such as the cell membrane, cytoplasmic matrix, or nucleus, and they play an important role in signal transduction. As scaffolding proteins, IQDs play an important role in plant growth and development (Abel et al., 2013; Burstenbinder et al., 2013, 2017a) and link Ca2+ signals with some organelles (Burstenbinder et al., 2017b). Yeast two-hybrid and pulldown experiments have verified that Arabidopsis IQD1 and IQD20 interact with CaM/CML both in vivo and in vitro. The kinesin light chain is generally located at the end of kinesin and participates in cargo transport (Saez et al., 2020). IQD may therefore co-localize with microtubules in addition to its classic nuclear localization, a finding subsequently confirmed using high-resolution fluorescence microscopy.
IQD1 interacts with KLCR1 and CaM, thereby linking kinesin to Ca2+ second messenger signaling (Steinhorst and Kudla, 2013; Bi et al., 2018). Other IQD family proteins may also mediate different kinesin-dependent cargo transport signaling pathways, such as protein sorting or cell wall formation (Kong et al., 2015), and these proteins and interactions require further study.

ABNORMAL SHOOT 6 AND CORTICAL MICROTUBULES

Microtubules in plant cells are organized without centrosomes (Paradez et al., 2006; Wasteneys and Ambrose, 2009). Cortical microtubules (CMTs) in interphase, together with the preprophase band (PPB), spindle, and phragmoplast (the membrane-forming body) in mitotic cells, form the plant-specific microtubule arrays (Hamada, 2014). CMTs determine the shape of plant cells (Wasteneys and Ambrose, 2009). Usually, MT-associated proteins (MAPs) such as the Augmin complex, Katanin, SPR2, and MOR1 interact with cortical microtubules to regulate cell shape. However, the dynamic regulation of cortical microtubule arrays is complex and needs further study.

Li et al. (2020) first identified two previously unknown plant-specific positive regulators of cMT severing and ordering, ABNORMAL SHOOT 6 (ABS6) and SHADE AVOIDANCE 4 (SAV4). ABS6 binds to MTs through its C-terminus and is a plant-specific IQD protein (Li et al., 2020). KATANIN 1 (KTN1), the p60 catalytic subunit of the classical MT-severing enzyme katanin, positively regulates ABS6-mediated cMT severing (Li et al., 2020). Augmin complexes and SPR2, located at cMT crossover sites, suppress KTN1-mediated cMT severing (Wightman et al., 2013; Wang et al., 2018; Tian and Kong, 2019). However, it is not known whether SPR2 inhibits the microtubule-severing function of ABS6 directly, or whether SPR2 interacts with ABS6 (Figure 1), similar to the direct physical interaction between ABS6, SAV4, and KTN1.

FIGURE 1 | The role of microtubule-associated proteins in cortical microtubule severing and ordering. SPR2 and Augmin, localized at cMT crossover sites, prevent KTN1-mediated cMT severing and ordering (Wightman et al., 2013; Wang et al., 2018; Tian and Kong, 2019). KATANIN 1 (KTN1), the p60 catalytic subunit of the MT-severing enzyme katanin, promotes cortical microtubule severing and ordering and is the positive regulator of ABS6 in this process (Li et al., 2020). ABS6, a plant-specific IQD protein and MAP, promotes cortical microtubule severing and ordering (Li et al., 2020).

Additionally, only half of the C-terminus of ABS6 binds to MTs, which is also an interesting issue to be explored. Li et al. (2020) speculate that other proteins may be required to adjust the conformation of ABS6 so that full-length ABS6 can interact with KTN1 and SAV4; which proteins can regulate its conformation has not been studied so far.

Arabidopsis IQD5 and Pavement Cell Shape

Pavement cells are tightly packed in the plant epidermis and have many lobes (Cosgrove, 2018; Cosgrove and Anderson, 2020). Lobe formation is thought to be related to the dynamics of the cytoskeleton (Panteris and Galatis, 2005; Cosgrove and Anderson, 2020). Disordered cortical microtubules usually correlate with wider pavement cell indentations and reduced lobe length.
When IQD5, IQD11, IQD14, IQD16, or IQD25 are abnormally expressed in Arabidopsis, cortical microtubules in pavement cells become disordered and cell shape is affected, indicating that IQD proteins may regulate anisotropic growth and shape formation by regulating the order of cortical microtubules (Burstenbinder et al., 2017b; Liang et al., 2018; Mitra et al., 2019). Different IQDs affect microtubule organization in different ways to produce unique phenotypes (Liang et al., 2018). Due to the limitations of intracellular Ca2+ imaging and the functional redundancy of the IQD family, the specific regulatory mechanisms are complex and still unclear (Mitra et al., 2019).

IQD5 is highly expressed in the vegetative organs of plants and binds evenly along cortical microtubules (Liang et al., 2018). In iqd5-1 mutants, microtubule stability decreases, thereby disordering microtubules in cotyledon cells and decreasing the interdigitation of pavement cells. IQD5 therefore stabilizes microtubules by decreasing their dynamics. In Arabidopsis M2 seedlings, pavement cells in IQD5 mutants (bQ18E, iqd5-1, and iqd5-2) lack interdigitating lobes compared to wild-type Col-0, with cells becoming smaller and rounder. In three-day-old cotyledons of the mutants, lobe length is reduced and neck width is increased. IQD5 therefore plays an essential role in regulating Arabidopsis leaf morphogenesis (Liang et al., 2018). However, the mechanisms by which IQD5 affects leaf morphogenesis remain to be explored. Furthermore, Ca2+ signaling plays a key role in pavement cell morphology and in the recruitment of IQD5 to cortical microtubules (Mitra et al., 2019). The IQD-KLCR module stabilizes cortical microtubules laterally, especially at the microtubule-plasma membrane interface (Mitra et al., 2019).

Unlike IQD5, which inhibits microtubule dynamics to stabilize microtubules, other microtubule-associated proteins in Arabidopsis affect microtubule organization by promoting growth, shrinkage, and catastrophe frequency, thereby enhancing microtubule dynamics and ensuring normal sorting (Liang et al., 2018) [e.g., MOR1 in the Arabidopsis MAP215 family (Twell et al., 2002)]. This coordinated regulation of microtubule dynamics by different proteins enables microtubule cytoskeletal organization, nucleation, and severing. Intracellular signals are thereby transmitted in an ordered manner to control normal plant development (Liang et al., 2018).

OsIQD14 and Seed Shape in Rice

Rice is an important crop that has been the subject of extensive efforts to increase grain size and yields. Rice OsIQD14 (Yang et al., 2020), an IQD family protein, is highly expressed in rice seed hull cells, where it regulates microtubule cytoskeletal dynamics to control grain size. In addition to localizing to the nucleus and cytoplasm, OsIQD14 also distributes along microtubules. When OsIQD14 is depleted, grains become wider and shorter and crop yields increase; when OsIQD14 is overexpressed, grains become longer and narrower without an effect on overall yield. OsIQD14 interacts with MAPs to alter catastrophe events such as microtubule growth and shrinkage, thereby reducing microtubule dynamics to form narrower cells. The IQD C-terminus binds to microtubules, and the IQ67 region at the N-terminus interacts with CaM; both proteins are located on microtubules. However, the specific molecular mechanism by which IQD affects rice seed shape, such as how Ca2+ signals modulate the interaction between IQD and CaM, remains to be explored.
Grain shape has traditionally been manipulated in breeding by altering intracellular signal transduction through GW5 and GW5L (Duan et al., 2017; Liu et al., 2017). GW5 is an IQD protein located in the plasma membrane and is involved in brassinosteroid signaling; it is similar to OsIQD14 in its regulation of seed shape (Duan et al., 2017; Liu et al., 2017; Yang et al., 2020). OsIQD14 controls cytoskeletal dynamics and cell morphology in rice by integrating auxin and calcium signaling pathways to increase rice yield. Regarding its specific mechanism, many hypotheses have been proposed, including that the interaction among OsIQD14, SPR2, and CaM proteins is regulated by auxin/blue light and Ca2+ signals (Yang et al., 2020). Moreover, it is unclear whether other microtubule-related proteins such as katanin, MOR1, and Augmin are involved in the process, and how they regulate microtubule dynamics and respond to environmental signals.

IQD/SUN in Tomato

The tomato plant is a useful model for studying fleshy fruit development. Since the Solanum lycopersicum genome is small and highly conserved, it serves as a reference for other species in the Solanaceae family such as peppers, eggplants, and potatoes. Due to improvements in living standards and cultural changes, new fruits and vegetables such as square watermelons, large green peppers, and long tomatoes are now of commercial interest. Therefore, the study of genes that regulate the shape of edible plant organs is of increasing interest.

The microtubule-binding proteins IQD/SUN, OFP (ovate family protein), and TRM (TON1 recruiting motif protein) can interact with each other to form complexes and combine with microtubules to regulate microtubule-related pathways and ultimately affect tomato fruit shape (van der Knaap et al., 2014; Lazzaro et al., 2018; Wu et al., 2018). SUN, OVATE, and TRM are all implicated in tomato shaping (Xiao et al., 2008, 2009; Wu et al., 2016). IQD is a microtubule-binding protein, and TRM is also located on microtubules (Lee et al., 2006; Drevensek et al., 2012). Ovate is the archetypal OFP, and while OFPs are mostly nuclear, the OFP-TRM complex migrates through the cell to bind to microtubules (Lazzaro et al., 2018; Snouffer et al., 2020). IQD/SUN and TRM elongate tomatoes, while OVATE (an OFP) inhibits elongation. IQD12 controls fruit elongation via alterations to cell division patterning, while TRM1-5-like genes promote the elongation of fruits, grains, leaves, and tubers, with OFP1 having the opposite effect (Wu et al., 2011; Lazzaro et al., 2018).

IQD localizes to microtubules and regulates microtubule dynamics by interacting with KLCR, CMU (Cellulose-Microtubule Uncoupling), and other related proteins. AtIQD5 may mediate the coupling of cellulose synthase movement to cortical microtubules, and cortical microtubules act as the template to transport cellulose synthase complexes (CSCs) to the plasma membrane. The slightest deviation in the trajectory of anchoring to the cell wall will directly affect the cell wall positioning of CSCs, consequently affecting the directional deposition of cellulose in the cell wall and the direction of cell expansion (Endler and Persson, 2011); ultimately, this will change the shape of the cell and the organ. AtOFP4 directly affects cell wall formation by interacting with KNAT7 (Li et al., 2011). Furthermore, cell division is affected by POK1, which is mainly regulated by TRM, as well as by the interaction between POK1 and ROPs (Rho-like GTPases). These proteins also localize to the PPB, spindle, and phragmoplast.
OFP and TRM regulate cell division during ovary development. Similarly, AtIQD5 also localizes to the PPB, spindle, and cortical microtubules in roots. Moreover, OFPs, TRMs, and TTP complexes are involved in cell plate positioning during cell division, which in turn affects organ shape.

CONCLUSION AND PERSPECTIVES

In addition to affecting the shape of cells and organs in some plants, IQDs can also enhance the drought resistance of plants including cabbage, corn, moso bamboo, and poplar (Cai et al., 2016; Wu et al., 2016; Yuan et al., 2019). The 26 ZmIQD genes in maize are regulated by drought stress. BrIQD5 is a potential target gene for improving the drought tolerance of cabbage, and four drought-related proteins have been found to interact with BrIQD5. However, this work remains in its infancy, and the IQD-related molecular pathways underpinning drought resistance need further study.

Given the important roles of IQDs in plants, transgenic or gene-editing technologies could be used to modify the structure or expression of IQDs. For example, could transferring the rice OsIQD14 gene into wheat or corn increase their yields? Could transforming the cabbage drought-resistance gene BrIQD5 into wheat and corn improve their drought tolerance? This could be a direction for future exploration.
3,152.8
2021-02-18T00:00:00.000
[ "Biology", "Environmental Science" ]
Comparison of Single Cell Transcriptome Sequencing Methods: Of Mice and Men

Single cell RNAseq has been a big leap in many areas of biology. Rather than investigating gene expression on a whole organism level, this technology enables scientists to get a detailed look at rare single cells or within their cell population of interest. The field is growing, and many new methods appear each year. We compared methods utilized in our core facility: Smart-seq3, PlexWell, FLASH-seq, VASA-seq, SORT-seq, 10X, Evercode, and HIVE. We characterized the equipment requirements for each method. We evaluated the performance of these methods based on detected features, transcriptome diversity, mitochondrial RNA abundance and multiplets, among others, and benchmarked them against bulk RNA sequencing. Here, we show that bulk transcriptome sequencing detects more unique transcripts than any single cell method. While most methods are comparable in many regards, FLASH-seq and VASA-seq yielded the best metrics, e.g., in the number of detected features. If no equipment for automation is available, or if many cells are desired, then HIVE or 10X yield good results. In general, more recently developed methods perform better. This also leads to the conclusion that older methods should be phased out, and that the development of single cell RNAseq methods is still progressing considerably.

Introduction

For more than 10 years, single cell RNA sequencing (scRNAseq) has been one of the main technologies to transform science [1-3]. It has become common to not only investigate tissue, but also to zoom in onto individual (rare) cell populations, to differentiate between cell populations, between specialized cells within them, or between diverging responses within the same cell population [4,5]. While some of the first scRNAseq methods were complex, with a myriad of manual steps (e.g., [6] and references within), the ongoing development has resulted in a large variety of commercial suppliers and kits, which are remarkably diverse in the number of cells required, their protocol complexity, and equipment requirements.

Continuous development has improved the accuracy, sensitivity, and throughput of scRNAseq methods, but also created a plethora of methods to choose from. There are marked differences between these methods, and choosing the right one for each application can be challenging. As a genomics core facility which routinely performs single cell sequencing and implements new methods, we would like to recommend to our customers the best method for each application. In addition, when considering which methods to recommend to customers or which new methods to implement, various metrics need to be evaluated.

While the costs of reagents might be the most apparent to the customer, the required technician's hands-on time and technological considerations are no less important. With a view to the underlying technology, it needs to be considered how diverse the underlying cell population is, and whether a low-throughput method with a 96- or 384-well plate might be sufficient, or whether a bigger population with many thousands of cells might be necessary. Other factors, such as the necessary sequencing depth, the possible detection of isoforms, or the sequencing of non-polyA-tailed transcripts, make method selection not trivial.

Here, we evaluate a multitude of methods for their performance across a range of different quality control parameters. We discuss their suitability to deliver reproducible single cell transcriptomics data. These results provide guidance for individual researchers, consortia, and core facilities.

Materials and Methods

Detailed information for all methods can be found in the Supplementary Materials.

Cell Growth and Sorting

K562 is a human chronic myeloid leukemia cell line and was obtained from the ATCC (ATCC CCL-243). Cells were maintained at 37 °C under 5% CO2 in RPMI medium supplemented with 10% FBS and penicillin-streptomycin. Cells were counted with a Countess II from Invitrogen/ThermoFisher Scientific (Waltham, MA, USA). K562 and mESC cells for the plate-based single cell transcriptome assays (Smart-seq3, PlexWell, FLASH-seq, SORT-seq, and VASA-seq) were sorted in a checkerboard pattern into 96- or 384-well plates using a CellenOne X1 (Scienion, Berlin, Germany). Cells were sorted into 96- or 384-well plates containing different cell lysis media, or were thermally or enzymatically lysed after sorting, depending on the method (see Supplementary Materials). Plates were sealed and frozen at -80 °C if processing was not started directly.

In brief, for the 5 plate-based methods (Smart-seq3, PlexWell, FLASH-seq, SORT-seq, VASA-seq), cells were dispensed into a 96- or 384-well plate with a CellenOne instrument. Cells were lysed, cDNA was generated, and in some methods, the cDNA concentrations were quantified and further checked on a Bioanalyzer. The cDNA was tagmented and amplified by PCR to generate Illumina sequencing libraries. The number of input cells specified in Supplementary Table S1 takes empty wells and other controls into account, and therefore only includes dispensed K562 and mESC cells. As an example, the 384-well plates for VASA- and SORT-seq contained 8 empty controls; therefore, only 376 cells are listed.

For 10X, 1 million/mL K562 cells and 1 million/mL mESCs were mixed 1:1 in PBS and processed according to the manufacturer's instructions in the Chromium Next GEM Single Cell 3' protocol v 3.1. Single cell emulsions were generated on the Chromium Controller (10X Genomics, Leiden, The Netherlands), and 8250 cells were loaded to target a recovery of 5000 cells.

For HIVE and HIVE CLX, the manufacturer's instructions from the HIVE scRNAseq v1 Processing Kit User Protocol (Honeycomb Biotechnologies, Waltham, MA, USA) were followed. A total of 15,000 cells were loaded on the HIVE, with the target to recover 6000 cells, and 30,000 on the HIVE CLX to recover 11,000. We processed three HIVEs, with K562, mESC, and a 1:1 K562:mESC mixture. One additional HIVE and HIVE CLX, each with both K562 and mESC cells, were frozen (for approximately 6 months and 6 weeks, respectively).

For the Split-seq library, the instructions in the Evercode WT Mini manual v 2.1.2 from August 2023 (Parse Biosciences, Seattle, WA, USA) were followed, with minor modifications (see Supplementary Materials).

Bulk RNAseq

RNA was isolated with the RNeasy Plus Micro kit (Qiagen), with 500,000 cells per sample. Bulk total RNA was prepared from triplicates of K562 cells and mESCs according to the Illumina TruSeq stranded mRNA protocol (Illumina, San Diego, CA, USA). Part of the workflow was automated with the Bravo automated liquid handling platform (Agilent Technologies Inc., Santa Clara, CA, USA).
Here, we evaluate a multitude of methods for their performance across a range of different quality control parameters.We discuss their suitability to deliver reproducible single cell transcriptomics data.These results provide guidance both for individual researchers, consortia, and for core facilities. Materials and Methods Detailed information for all methods can be found in the Supplementary Materials. Cell Growth and Sorting K562 is a human multiple melanoma cell line and was obtained for the ATCC (ATCC CCL-243).Cells were maintained at 37 • C under 5% CO 2 in RPMI medium supplemented with 10% FBS and penicillin-streptomycin. Cells were counted with a Countess II from Invitrogen/ThermoFisher Scientific (Waltham, MA, USA).K562 and mESC cells for the plate-based single cell transcriptome assays (Smart-seq3, PlexWell, FLASH-seq, SORT-seq, and VASA-seq) were sorted in a checkerboard pattern into 96-or 384-well plates using CellenOne X1 (Scienion, Berlin, Germany).Cells were sorted into 96-or 384-well plates containing different cell lysis media, or were thermally or enzymatically lysed after sorting, depending on the method (see Supplementary Materials).Plates were sealed and frozen at −80 • C in the case that processing was not directly started. In brief, for the 5 plate-based methods (Smart-seq3, PlexWell, FLASH-seq, SORT-seq, VASA-seq), cells were dispensed into a 96-or 384-well plate with a CellenOne instrument.Cells were lysed, cDNA was generated, and in some methods, the cDNA concentrations were quantified and further checked on a Bioanalyzer.The cDNA was tagmented and amplified by PCR to generate Illumina sequencing libraries.The amount of input cells specified in Supplementary Table S1 takes empty wells and other controls into account, and therefore only includes dispensed K562 and mESC cells.As an example, the 384-well plates for VASA-and SORT-seq contained 8 empty controls; therefore, only 376 cells are listed. For 10X, 1 million/mL K562 and 1 million/mL mESCs cells were mixed 1:1 in PBS and processed according to the manufacturer's instructions in the Chromium Next GEM Single Cell 3 ′ protocol v 3.1.Single cell emulsions were generated on the Chromium Controller (10X Genomics, Leiden, The Netherlands), and 8250 cells were loaded to target a recovery of 5000 cells. For HIVE and HIVE CLX, the manufacturer's instructions from the HIVE scRNAseq v1 Processing Kit User Protocol (Honeycomb Biotechnologies, Waltham, MA, USA) were followed.A total of 15,000 cells were loaded, with the target to recover 6000 cells for the HIVE and 30,000 on the HIVE CLX to recover 11,000.We processed three hives, with K562, mESC, and 1:1 K562:mESC mixture.One additional HIVE and HIVE CLX with both K562 and mESC cells each were frozen (approximately 6 months and 6 weeks, respectively). For the Split-seq library, the instructions in the Evercode WT Mini manual v 2.1.2from August 2023 (Parse Biosciences, Seattle, WA, USA) were followed, with minor modifications (see Supplementary Materials). Bulk RNAseq RNA was isolated with the RNeasy plus Micro kit by Qiagen, with 500,000 cells per sample.Bulk total RNA was prepared from triplicates of K562 cells and mESCs according to the Illumina TruSeq stranded mRNA protocol (Illumina, San Diego, CA, USA).A part of the workflow was automated with the Bravo automated liquid handling platform (Agilent Technologies Inc, Santa Clara, CA, USA). 
Sequencing The generated libraries were sequenced on Illumina systems, either single read or paired-end reads of 50 bp (Smart-seq3, PlexWell, FLASH-seq, Bulk) or paired end 26 and 60 bases (VASA-seq, SORT-seq) were generated.For 10X, the libraries were sequenced yielding paired-end reads of 28 and 90 bp, for HIVE, paired-end reads of 26 and 51 bp. Supplementary Table S1 describes in detail the read length and sequencing system for each library. Data Analysis In brief, the data were processed in pipelines, designed to be as similar as possible to the different methods, which were implemented in Snakemake v 6.11.0 [17] and used the same reference genome, a concatenated FASTA file of GRCh38 [18] and GRCm38 [19] included in the cellranger software refdata-gex-GRCh38-and-mm10-2020-A [12].Differences in the pipelines are attributable to intrinsic features of the sequencing method, differences in paired-end status, UMI presence, barcode detection, and are mainly restricted to the parameters used in STAR [20].All details can be found in the Supplementary Materials.It was ensured that all pipelines run the same version of all included programs.All read files were trimmed with CutAdapt [21] for 3 ′ adapters with the '-a' option.In the case of paired-end data, both reads were trimmed together, with the additional option '-A' (3 ′ adapters for the second read).Reads were mapped with star v 2.7.9.a [20].The conversion of SAM/BAM files and attainment of the related statistics was performed with SAMtools [22].For most analyses, all samples were normalized to 20,000 read pairs per cell on average, except for the estimation of multiplets, non-detected genes, and sequencing saturation, which were performed on the full data. The python StatsModel package v 0.14 [23] was used for regression calculations.The single cell count matrices were further analyzed in R 4.2.1 [24] with Seurat 4.3.0[25].Figures were generated in R 4.2.1 [24] or in Python3 with Matplotlib v 3.5.1 [26].Figures were assembled and annotated with Inkscape v 1.2.1 [27]. 
Results We benchmarked a multitude of single cell transcriptome assays.For a systematic comparison of the methods, we used two cell types.The human K562 cell line, which is a very homogeneous cancer cell line, and mouse embryonic stem cells (mESCs), which are native mouse cells.Bulk RNA sequencing data of both cell lines were generated as a ground truth to assess differences between single cell and bulk assays.The study design, workflows, and outcomes are depicted in Figure 1.For most assays, single cells were dispensed by CellenOne (in one case F.SIGHT) into microtiter plates, except for Evercode and HIVE where both cell types were mixed, and 10X genomics, where the cells were put in emulsion droplets by the Chromium Controller.Subsequently, single cell libraries were made according to the published protocol or the manufacturer's instructions, and then sequenced.In the case of HIVE and HIVE CLX, one out of four in each batch were frozen (six months and six weeks, respectively), according to the manufacturer's instructions.Sequencing data were normalized per protocol to an average of 20 k clusters per single cell and aligned to a combined mouse and human reference genome.After read counting, further analysis and visualizations have been created to show the performance of each technology and compare them to each other and the ground truth (Figure 1).All workflows utilized various equipment, depicted next to the workflows, except for HIVE, which is a self-contained workflow.All samples were afterwards sequenced on an Illumina sequencer and normalized to 20,000 reads per cell on average.The data were then trimmed with CutAdapt, mapped with Star, and analyzed with Seurat.For Smart-seq3, the UMI and body reads were divided and analyzed separately. Single Cell RNAseq Requirements Besides the scientific results described in the following sections, the technological requirements also need to be considered.All workflows utilized various equipment, which can be expensive and complicated to acquire (Figure 1).HIVE is an exception, as it is a self-contained workflow.The bulk RNAseq workflow does not require any automation either but was partially automated with a Bravo automated liquid handler.The 10X workflow requires the Chromium Controller from 10X as the only machine.All plate-based methods require a method for cell sorting, in this case, the CellenOne or alternatively FACS.These methods have the additional disadvantage of requiring many liquid dispensing and pipetting steps, which for some (SORT-and VASA-seq) were performed with a Nanodrop II pipetting robot and an Echo 525 robot liquid handler, and for others (Smart-seq3, PlexWell, FLASH-seq) with a Mantis liquid handler, I.DOT liquid handler, and a Mosquito pipetting robot.In theory, these steps can be performed manually, but even with automation, these methods take 3-4 days to complete. The overall required time between the different methods is quite comparable.Handson time ranges between 8 and 16 h, yet the total time, including incubation times and other logistic considerations like safe stopping points for freezing, is about 3 days.Bulk RNA sequencing with automation only requires 2 days, whereas VASA-seq needs 4 days.Despite similar levels of automation and commercial solutions being available, the hands-on time varies and ranges between 8 h for 10X, 9 h for SORT-seq, and up to 16 h for PlexWell and VASA-seq. 
Quality Control First, the alignment percentages of the two included reference genomes, the human GRCh38 and the mouse mm10 (Figure 2), were assessed.A 90% mapping ratio of reads (or of UMIs, where applicable) to one of the organisms was used to assign a cell to being either human or mouse, whereas cells with a lower percentage were assigned as a multiplet.Three of the five plate-based assays did not yield any mixed cells as expected due to the single cell dispensation by CellenOne.The 10X genomics, HIVE, HIVE CLX, Evercode, SORT-seq, and VASA-seq datasets indicated the presence of mixed cells, ranging from 2% (10X) to 9% (HIVE/HIVE CLX), with the Evercode WT Mini being an outlier (49%; full details are available in Supplementary Table S1).In the 10X, HIVE, HIVE CLX, and Evercode datasets, mixed cells also contained more overall features and a higher diversity.This is indicative of mouse-human cell duplets in a single droplet/well as it is inherent to the methodologies.Remarkably, in the SORT-seq and VASA-seq datasets, mixed cells were called, despite the single cell dispense by CellenOne.All parameters for mixed cells were in the same range as for the individual single cells.Most of the mixed cells in these datasets remained close to the 90% cutoff. A high amount of mitochondrial RNA is indicative for a poor cell condition, and it is recommended to remove those cells from downstream analyses.The percentage of mtRNA was plotted for each assay (Figure 3; detailed numbers are available in Supplementary Table S2). On average, the percentage of mitochondrial RNA is higher for human K562 cells than for mouse ESCs.Bulk sequencing resulted in the least amount of mitochondrial RNA in human cells, but in mouse cells, Evercode, HIVE CLX, and VASA-seq resulted in less mtRNA (bulk 2.24%, Evercode 0.7%, HIVE CLX 1.3%, VASA-seq 1.4%).These methods also showed the least amount of mtRNA in human cells, but more than bulk sequencing (bulk 1.8%; least amount in Evercode, 2.4%).A minimum amount of mtRNA is physiological for all cells, as no cell had 0% of mtRNA, but should still be minimized for information gain.Single cell handling and dispense adds additional stress to cells which translates to higher mtRNA percentages.Smart-seq3 and SORT-seq show an elevated percentage of mtRNA compared to the other plate-based methods, SORT-seq especially in mouse cells.This shows variation between methods: The plates for both VASA-and SORT-seq were prepared at the same time, yet VASA-seq resulted in better values for the mtRNA.In mouse cells, the amount of mtRNA remained below 5%, except for SORT-seq with an average of 12%, whereas in human cells, it remained mostly below 10% (except for SORT-seq and Smart-seq3 body with 14%, and 10X nearly reaching 10%) [28].Most methods maintain the mtRNA at over 90% of the human cells below the cutoff (with the exception of 10X, SORTseq, and Smart-seq3 body), and in more than 80% below the cutoff for mouse cells (with the exception of SORT-seq and Smart-seq3 body).No bimodal distribution was detected in the sequencing data which had a higher average of mtRNA (Smart-seq3, SORT-seq, and 10X).Despite these good averages, filtering remains necessary, since some cells exceed the average by far.The highest amount of mouse mtRNA in a single cell was recovered in Smart-seq3 body (32%) and in humans in the 10X data (87%), and such cells should be excluded from further processing., which accumulates all cells with a mtRNA percentage of more than 20%.In each subplot, the cutoff of 10% for human 
Performance of the Single Cell Methods

An important metric to assess is the complexity of the library: are many or only a few different transcripts captured? The Shannon index [29] is a metric of diversity, which we used to evaluate the spread of coverage over the various genes. A value of zero indicates no diversity (all reads fall on a single feature), while higher numbers indicate a more even spread of the data. Bulk RNAseq captures the most transcripts, resulting in a diversity of 8.5 for both human and mouse. Among the single cell sequencing methods, HIVE CLX resulted in the most diverse read mapping for both cell types (7.4-7.5), closely followed by PlexWell, VASA-seq, HIVE, Evercode, and FLASH-seq in human (7.2 or better), and by PlexWell, VASA-seq, and FLASH-seq in mouse (7.2 or better). The overall range of values was narrow, with the lowest values obtained by Smart-seq3, at 6.6-6.7 (Figure 4). Mixed cells from the 10X, HIVE/CLX, and Evercode data resulted in a higher diversity than single-species sequence sets, but this was not observed for the mixed cells from VASA- and SORT-seq (not shown). Smart-seq3 showed the lowest diversity for both body and UMI reads in human, and the lowest and third lowest in mouse.
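For reference, the following Python sketch computes the Shannon index over a per-gene read distribution. The log base used in the study is not stated, so natural logarithms are assumed here; absolute values are therefore only comparable within this sketch.

# Sketch of the diversity metric: Shannon entropy of the per-gene read
# distribution; 0 when all reads fall on a single gene, larger for a
# more even spread over many genes.
import numpy as np

def shannon_index(gene_counts: np.ndarray) -> float:
    counts = gene_counts[gene_counts > 0].astype(float)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

print(shannon_index(np.array([100, 0, 0])))        # 0.0 (no diversity)
print(shannon_index(np.array([10, 10, 10, 10])))   # log(4) ~ 1.386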
Next, the number of features detected per assay was examined on the normalized data (Figure 5). The average number of detected genes in the single cell assays was mostly around 2000-4000 (precise numbers can be found in Supplementary Table S2). Smart-seq3 UMI had the lowest averages (2400 human, 1600 mouse), and HIVE/CLX, PlexWell, and FLASH-seq detected the most features in both cell types. For the K562 data, FLASH-seq, PlexWell, HIVE/CLX, and 10X are comparable, whereas in the mouse cells, 10X performed worse. The highest number of features in a human single cell was detected by Evercode and HIVE CLX with approximately 8400, and in mouse by Evercode with 8400.

We further investigated the feature overlap between the different single cell technologies and bulk RNA sequencing. Most features which were detected in bulk sequencing were also detected in HIVE CLX, with ~800 human and ~500 mouse features not detected in HIVE CLX, in both cases followed by HIVE and Evercode (Figure 6). Of all the features which were detected by at least one single cell method and the bulk RNAseq (14,141 to 20,030 in human cells, and 13,085 to 18,601 in mouse cells), most were detected consistently by all of the investigated methods (11,865 features in human cells, and 11,066 features in mouse cells). The lowest number of features from the bulk sequencing was detected by PlexWell in human (~6700 not detected) and by SORT-seq in mouse (~6000 not detected). Surprisingly, the single cell sequencing methods also detected a range of features not detected in bulk sequencing. The HIVE CLX technology detected the most, with more than 6000 in human cells and more than 4800 in mouse cells. The lowest number of extra features was detected by PlexWell and FLASH-seq in human (~500) and by SORT-seq in mouse (~300).

It was further investigated how the features which were not detected in the various single cell methods ranked in the bulk sequencing data, i.e., whether non-detected genes were highly or lowly expressed. For both human and mouse, more than 50% of the genes not detected in the single cell methods ranked in the lowest 25% of expressed genes in bulk, and more than 75% in the lowest 40% of expressed genes in bulk (if one outlier is excluded, 93% and 92% on average are in the lowest 40% for human and mouse, respectively), indicating that the single cell methods mostly miss lowly expressed genes. All single cell methods (except for one sample of HIVE CLX in mouse) missed at least one gene in the top 50% of expressed genes, with some methods even missing genes in the top 2% of expressed genes. In mouse cells, all methods except for the HIVE also missed genes in the top 20% of expressed genes. This number was slightly lower in human cells. In general, the more overlapping genes are detected between a single cell method and bulk sequencing, the less likely it is that a highly expressed gene was missed. Conversely, it was also considered how many genes were detected in the single cell methods but not detected in bulk sequencing. Here too, most newly detected genes ranked rather low in expression, with on average more than 80% of newly detected genes ranking in the lowest 20% of expressed genes, and on average more than 97% in the lowest 40% of expressed genes. Not all newly detected genes ranked lowly, with some methods also detecting new genes with a high gene expression, up to the top 10% or even top 5% of expressed genes.

Figure 6. (A) Human data, (B) mouse data. In each subpanel, the expression of genes only detected in bulk RNAseq is depicted on the left, and the expression of genes only detected in the various single cell methods on the right (for cells identified as human/mouse in subpanels (A,B)). The intensely colored middle of the bar represents the genes which were detected with at least one read in both methods.
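A hedged sketch of this overlap analysis: given per-gene totals for bulk and for a single cell method, it counts shared, missed, and extra genes and checks where the missed genes rank in bulk expression. The 25% rank cutoff mirrors the text; everything else (input format, helper names) is illustrative.

# Sketch of the feature-overlap analysis between bulk and a single cell
# method. Inputs are per-gene total counts (dict gene -> count).
def overlap_report(bulk: dict, sc: dict) -> dict:
    bulk_detected = {g for g, c in bulk.items() if c > 0}
    sc_detected = {g for g, c in sc.items() if c > 0}
    missed = bulk_detected - sc_detected          # bulk-only genes
    extra = sc_detected - bulk_detected           # single-cell-only genes
    # rank bulk-detected genes from lowest (0) to highest expression (1)
    ranked = sorted(bulk_detected, key=lambda g: bulk[g])
    rank = {g: i / max(1, len(ranked) - 1) for i, g in enumerate(ranked)}
    missed_low = sum(rank[g] < 0.25 for g in missed)
    return {"shared": len(bulk_detected & sc_detected),
            "missed": len(missed), "extra": len(extra),
            "missed_in_lowest_25pct": missed_low}

bulk = {"A": 1000, "B": 50, "C": 2, "D": 1, "E": 0}
sc = {"A": 400, "B": 10, "C": 0, "D": 0, "E": 3}
print(overlap_report(bulk, sc))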
Besides the total number of features, the relationship between new features gained and additional sequencing depth is relevant. Therefore, the ratio between these two variables was investigated on the non-normalized data (Figure 7). In human, the HIVE CLX yields the best ratio of features to reads, followed by Evercode and 10X. In mouse, the first two places are swapped, with Evercode yielding the best ratio, followed by HIVE CLX and 10X. The biggest differences can be seen for SORT-seq and HIVE: SORT-seq does not perform well in the K562 cells (together with Smart-seq3 body) but performs well in mouse cells; HIVE, in contrast, performs well in human cells but has the worst yield in mouse cells. Sequencing saturation is not reached at 200,000 reads per cell for most methods, except for FLASH-seq and PlexWell in mouse, where the saturation plateaus after this point.

For methods where the cell is assigned based on a barcode rather than an Illumina index, the barcoding efficiency needs to be factored in. The sequencing data need to be separated into the distinct barcodes, and not all barcodes will be derived from cells; some will derive from background. For the data presented here, SORT-seq, VASA-seq, and 10X Next GEM 3' had the best efficiency, as 85-92% of the data were assigned to cells. This efficiency was lower for the HIVE and HIVE CLX, where only 60% of the data were assigned to a cell. The Evercode WT method showed a difference between human and mouse cells: in human cells, 59% of the data were retained, whereas in mouse cells only 38% were retained, with the combined libraries in between (50%).

One of the differentiating characteristics of single cell assays is full or partial transcript coverage. Therefore, the distribution of reads over the whole gene was investigated, as shown in Figure 8. Each assay shows its own transcript coverage profile, which is always as expected from the library construction technology used. Bulk RNAseq and FLASH-seq yielded the most even coverage, whereas a bias towards increased coverage at either the 3' or 5' end was visible in most other methods. The plate-based assays (Smart-seq3 body, FLASH-seq, and PlexWell) have coverage over the whole transcript, and from both strands of the genome due to the paired-end sequencing. 10X, HIVE, VASA-seq, Evercode WT Mini, and SORT-seq yielded only reads from the sense strand over the whole length of the transcript and show a preference for 3' end reads. Some of the more internal reads have previously been attributed to semi-random binding of internal polyA repeats [30]. UMIs of the Smart-seq3 protocol were only detected at the 5' end of the genes.

To quantify the imbalance in coverage from the 5' to the 3' end of the transcript, we calculated the relative coverage per exon (reads/base) over each gene. For all genes that had at least half of their exons covered, the relative standard deviation over their exon coverage was calculated and averaged per dataset (Supplementary Table S3). FLASH-seq shows the least imbalance with 14.9% relative standard deviation, followed by bulk RNAseq with 15.8%, PlexWell with 16.1%, Smart-seq3 body with 17.2%, Evercode WT with 18.3%, and VASA-seq with 19.1%. The methods with a known 3' or 5' bias had clearly higher deviations, with 10X at 20.1%, HIVE and HIVE CLX at 21.1%, Smart-seq3 UMI at 24.1%, and SORT-seq at 24.7%.
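The imbalance metric lends itself to a compact implementation; the sketch below computes the relative standard deviation of per-exon coverage and averages it over genes with at least half of their exons covered, as described above. The input format and helper names are our own assumptions.

# Sketch of the 5'-3' imbalance metric: relative SD of per-exon
# coverage (reads/base), averaged over sufficiently covered genes.
import numpy as np

def relative_sd(per_exon_coverage: np.ndarray) -> float:
    """Relative standard deviation (in %) of exon coverage values."""
    return float(np.std(per_exon_coverage) / np.mean(per_exon_coverage) * 100)

def dataset_imbalance(genes: list) -> float:
    """Average relative SD over genes with >= 50% of exons covered."""
    kept = [np.asarray(g, dtype=float) for g in genes
            if np.count_nonzero(g) >= 0.5 * len(g)]
    return float(np.mean([relative_sd(g[g > 0]) for g in kept]))

# A 3'-biased gene shows a much higher relative SD than an even one.
even = [10.0, 11.0, 9.0, 10.0]
biased = [2.0, 4.0, 10.0, 30.0]
print(dataset_imbalance([even, biased]))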
Comparability of Profiles

One of the main questions is how comparable and reproducible the transcript profiles are between these methodologies. UMAP grouping with Seurat showed a separation of cells into human and mouse along the first component (Figure 9A). Furthermore, three other main observations can be made from this plot. The first is that the VASA-seq data only slightly cluster with the other methods, as can be seen for the human cells in Figure 9B. The second is that the bulk RNAseq data group within the VASA-seq cluster. The third is that for the mouse cells the grouping is based on the method, although all technologies except VASA-seq group together in lower dimensions. The UMI and body components of Smart-seq3 also group together but show a clear separation into both components. The PlexWell and FLASH-seq methods are derived from Smart-seq2 and group together here. When the technologies are investigated separately, a batch effect is also visible for the mouse cells, but not for the human cells. Otherwise, most of the mixed cells from both 10X and HIVE form separate groups, which is not seen for the SORT-seq and VASA-seq mixed cells. Overall, all methods are consistent and show good agreement and reproducibility; the technological impact of any method is smaller than the biological impact of the cell material used. This can also be seen in Figure 10, another representation of all the combined datasets, in which we correlated gene expression between the datasets (each dataset treated as a single expression profile). The correlation within one method is in general high, exceeding 0.8 and in most cases 0.9. Most datasets from different methods show a moderate correlation of 0.7 or higher to other datasets, with the exceptions of VASA-seq, which shows a mostly different profile, and Evercode WT, which shows a clearly distinct profile. The correlation of bulk RNAseq to the other datasets did not differ considerably from the differences within the single cell methods (except for Evercode), giving no method a noticeable advantage over the others.

Discussion

The field of single cell sequencing is growing in complexity, and new methods appear every year. This development has resulted in many commercial suppliers and kits, which are remarkably diverse in the number of cells required, their protocol complexity, and their equipment requirements. We compared multiple available methods to evaluate their advantages and disadvantages and to provide guidance for individual researchers, consortia, and core facilities.
Time Requirements and Automation

Most of the methods can be (partially) automated to save hands-on time and reduce errors. The plate-based assays require liquid handling and pipetting robots for efficient use. This can be a big obstacle if these are not already present in a laboratory, since purchase costs can be prohibitive. If they are available, their use can make any of the described assays efficient, with absolute handling times of less than 4 days (including waiting times) and fewer error-prone manual steps. Without robots, variable handling and incubation times for a larger number of cells would negatively impact the results. If such robots are not available, there are two main alternatives. The first is the 10X platform, which requires only one machine and has everything necessary for single cell preparation built in. This decreases the complexity of the preparation but increases the upfront capital cost, while also increasing the throughput. The second alternative is methods that require neither equipment nor as much upfront capital investment, such as HIVE or Evercode. Here, too, the costs increase due to the commercially supplied package, but they remain a smaller investment than the 10X instrument. Since the necessary materials are disposable, they need to be bought for each preparation. Such methods are therefore the most suitable for laboratories that have no instruments available and do not perform single cell sequencing regularly.

The required hands-on time after automation is less of a decisive factor than generally anticipated. A total hands-on time of 8 h for 10X is a large relative difference compared to the maximum of 19 h hands-on time for Evercode; however, the number of laboratories with sufficient throughput to make this a deciding factor is relatively small. The decision between a low- or high-throughput method is more likely to make a difference.

Filtering Cells

Apoptotic cells will be depleted of cytoplasmic RNA due to the loss of membrane integrity; therefore, apoptotic cells will mostly yield the remaining mitochondrial RNA. It has been best practice to date to remove apoptotic cells with a mitochondrial RNA percentage higher than 5%, due to earlier indications of this being a reasonable threshold [31]. A recent publication, however, reported this as 10% for human cells and 5% for mouse cells [28], and in some protocols 15% is used [32]. The difference between human and mouse cells is also visible in the data presented here. For all methods except SORT-seq, the fraction of mitochondrial RNA was higher in human cells than in mouse cells. A non-negligible part of human cells also exceeded the 10% threshold (Smart-seq3 and 10X), although only in the Smart-seq3 datasets was the 15% threshold exceeded by a considerable number of cells. Contrary to human, in mouse cells the 10% threshold is rarely exceeded (except for SORT-seq). Overall, this correlates with the observations by Osorio et al. [28]. It would be sensible to derive a threshold per method or per dataset, but in none of our datasets is a bimodal distribution visible; therefore, it is not possible to derive a binary apoptotic/non-apoptotic state and to filter on it based purely on the mtRNA amount.
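A minimal sketch of the standard mtRNA filter follows, using the species-dependent thresholds discussed above (10% human, 5% mouse). The counts matrix, gene-name prefixes, and values are hypothetical stand-ins, not the actual pipeline.

```python
import pandas as pd

def filter_by_mtrna(counts: pd.DataFrame, species: str) -> pd.DataFrame:
    """Drop putative apoptotic cells by mitochondrial read fraction.
    counts: cells x genes matrix; mitochondrial genes are assumed to carry
    the usual 'MT-' (human) or 'mt-' (mouse) name prefix."""
    prefix = "MT-" if species == "human" else "mt-"
    mt_genes = [g for g in counts.columns if g.startswith(prefix)]
    pct_mt = counts[mt_genes].sum(axis=1) / counts.sum(axis=1) * 100
    threshold = 10.0 if species == "human" else 5.0
    return counts.loc[pct_mt <= threshold]

# Toy usage: one healthy-looking cell, one apoptotic-looking cell.
toy = pd.DataFrame({"ACTB": [900, 100], "MT-CO1": [50, 100]}, index=["c1", "c2"])
print(filter_by_mtrna(toy, "human").index.tolist())  # ['c1']
```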
Multiplets

The intermixing of human and mouse cells facilitates estimation of the rate of multiplets in a dataset. Other research has shown multiplet rates from 2.5 to 37% [33], and 10X predicts a multiplet rate between 0.8 and 8% depending on the cell number [34] (although this has been reported to be higher [35]). With a mixture of two different species, we assume that we are able to detect 50% of all multiplets, as we will detect human/mouse and mouse/human multiplets, but not mouse/mouse or human/human multiplets. In our 10X data of 2500 cells, we detected 2% mouse/human multiplets. Taking into account the non-detectable 1% human/human and 1% mouse/mouse multiplets results in a 4% multiplet rate. This is higher than the multiplet rate predicted by 10X (below 2.5%). The same holds for HIVE, where the predicted multiplet rate is 9% for our HIVE datasets and 14% for the HIVE CLX datasets [36], while our data show an inferred multiplet rate of 18%. An outlier in this case is the Evercode data, as the multiplet rate was significantly higher than expected (49% detected, versus theoretically less than 2% [37]). It was, however, noted by the manufacturer that mixing cells with uneven RNA content can lead to failure of the cells with lower RNA content, as they will be underrepresented relative to the cells with higher RNA content. A lower sequencing output from the mESC cells in the pure libraries was indeed noted, so this can be a contributing factor: a good part of the multiplets might be genuine mESC cells with low RNA content, mixed with high K562 background.

As shown in Figure 2, cell multiplets of distinct species are easily detected due to the varying mapping rate of these cells, the elevated feature rate, and increased Shannon diversity. Multiplets of same-species cells are difficult to detect; as the data in this manuscript show, such cells are not detectable by simple metrics. While the multi-species multiplets show distinct characteristics compared to the single cells, this is due to the greater genetic diversity within the double-species multiplets and therefore does not apply to single-species multiplets. Simply filtering out the cells ranking highest for features or diversity is not applicable either, since the double-species multiplets may rank lower than some of the single-species cells in these comparisons. It also cannot be excluded that the highest scoring single-species cells are single-species multiplets. Various computational methods have been developed to detect these cell multiplets [33], but these were not benchmarked, as this was not the focus of this research.

Only a few multiplets were seen in the plate-based assays, as expected. Some cells in the SORT-seq and VASA-seq samples showed a human/mouse (or mouse/human) ratio below 90%, which is in principle not expected in a plate-based assay (although more than half of these showed a ratio >85%); however, all samples are mixed before sequencing, so barcode hopping, background RNA, etc. could be wrongly assigned. These "multiplets" did not show any characteristics of real multiplets, e.g., an elevated number of features or a higher diversity, and are therefore probably misidentified. The other plate-based assays showed no mixed cells, as expected.
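The species-mixing arithmetic used above for the 10X example is simple enough to write down directly. The sketch below assumes an even human/mouse mix, so cross-species multiplets are taken to be half of all multiplets; the counts are illustrative.

```python
def inferred_multiplet_rate(n_cells: int, n_cross_species: int) -> float:
    """With an even human/mouse mix, same-species multiplets (human/human,
    mouse/mouse) are invisible but should be about as frequent as the
    visible cross-species ones, so the total rate is roughly twice the
    observed cross-species rate."""
    return 2 * n_cross_species / n_cells

# The 10X example from the text: 2% cross-species multiplets among 2500
# cells implies a ~4% total multiplet rate.
print(f"{inferred_multiplet_rate(2500, 50):.1%}")  # 4.0%
```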
Batch Effect

Multiple batches of K562 and mouse embryonic stem cells were used in this benchmarking study, and two main observations can be made in this regard. The first is that the results for the K562 cell line and the mESCs can be clearly distinguished: mouse and human cells were separated by both tSNE and UMAP and showed different profiles. But while the clustering of the K562 cells yielded mostly overlapping groups for the different methods, the mESC cells showed more of a gradient between the different methods. The K562 cell line is a stable human cancer cell line, and in theory no major differences would be expected. In contrast, the mESCs do not appear to be a biologically uniform group of cells and vary, potentially due to biology and to the amount of passaging in the laboratory. Despite this, a clear grouping was still visible for both cell types.

Overlap and Difference between the Methods

The second main observation is that most methods overlap to some degree. VASA-seq is an exception, showing a strong separation from the other methods, likely due to the inclusion of more unique transcripts and non-polyadenylated transcripts, amongst which are histone genes. The bulk RNAseq was grouped within the VASA-seq data, which at first glance could be attributed to the additional transcripts captured by VASA-seq; yet the overlap between the detected genes shows that this cannot be the sole cause, as other methods detect more overlapping transcripts. Some of the additionally detected histone genes show high expression; therefore, the cause is more likely quantitative in addition to being qualitative. Three of the investigated methods are in principle similar and derived from each other (Smart-seq3, PlexWell, and FLASH-seq being variations or further developments of Smart-seq2), which is also visible in the results. SORT-seq and 10X are both 3′ methods, but despite the principal similarities in technology, they do not form a strongly overlapping cluster. Despite these general differences between the methods, the agreement is in general high, which indicates that most results can probably be trusted independent of the method, but not yet high enough that mixing different methods within one experiment can be recommended.
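The method-to-method agreement discussed here (and shown in Figure 10) boils down to correlating expression profiles between datasets. A minimal sketch follows; the genes × datasets matrix and its values are hypothetical stand-ins for the real count data, and log-transformation is one reasonable choice rather than necessarily the one used.

```python
import numpy as np
import pandas as pd

# Hypothetical pseudobulk profiles: summed counts per gene for each dataset.
expr = pd.DataFrame(
    {"10X_rep1": [100, 5, 40], "10X_rep2": [110, 4, 38], "VASA_rep1": [60, 90, 10]},
    index=["geneA", "geneB", "geneC"],
)

# Log-transform to tame the dynamic range, then Pearson-correlate profiles;
# within-method replicates should correlate more strongly than across methods.
log_expr = np.log1p(expr)
corr = log_expr.corr(method="pearson")   # datasets x datasets matrix
print(corr.round(2))
```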
Conclusions

To conclude, multiple single cell sequencing methods that vary widely in methodology were compared to each other and to bulk RNAseq (see Figure 11). In the case that researchers do not need single cell resolution, we would advise the use of bulk RNAseq, since bulk outperforms the single cell methods on many metrics. Among the tested single cell RNAseq methods, Smart-seq3, the oldest full-transcript method used in this investigation, shows sub-par results, and we recommend that researchers look for better performing methods. The metrics of the 10X data also do not compare favorably in terms of transcript coverage and multiplets, but 10X still has the advantage of yielding the highest throughput, which the other methods do not offer, except for HIVE and Evercode. HIVE seems the most suitable for laboratories that lack the necessary equipment for the other methods but require high throughput. The Evercode method has in principle the same advantage, but cannot be recommended due to the issues with the multiplets, which make it unsuitable in many situations; its use cases will require more background knowledge. Moreover, with HIVE, samples can be stored before performing the scRNA library preparation, which allows samples to be collected over time before processing, or to be sent to core facilities for sequencing in one batch. As shown in the results, no notable differences were observed after storage, making storage indeed a viable option. VASA-seq shows good results and detects non-polyadenylated transcripts, which the other methods do not; VASA-seq could therefore be the method of choice, especially if non-polyA transcripts are of interest. FLASH-seq and PlexWell show comparable performance in many aspects and can be good alternatives if non-polyadenylated RNAs are not of interest. However, the PlexWell kit has recently been discontinued by the manufacturer and therefore cannot be recommended anymore.
Figure 1. Overview of methods: the diagram depicts the workflow. Mouse and human cells were utilized for all methods. Cells were applied separately or mixed for some of the workflows (bulk RNAseq, 10X, HIVE), or were sorted into a well plate (96 or 384) in a checker-board pattern, alternating human (red) and mouse (blue) cells (for Smart-seq3, PlexWell, FLASH-seq, VASA-seq, and SORT-seq). All workflows utilized various equipment, depicted next to the workflows, except for HIVE, which is a self-contained workflow. All samples were afterwards sequenced on an Illumina sequencer and normalized to 20,000 reads per cell on average. The data were then trimmed with CutAdapt, mapped with STAR, and analyzed with Seurat. For Smart-seq3, the UMI and body reads were divided and analyzed separately.

Figure 2. Multiplets: a varying number of multiplets is expected per method. A species cutoff of 90% is used to define mouse (blue), human (red), or mixed cells (black). Alignment percentages of mouse and human are indicated at the top and bottom, respectively. The numbers of counted cells for mouse and human are indicated on the left and right, respectively.

Figure 3. mtRNA: the percentage of mtRNA is shown for each technology, marked by color and plotted against the percentage of cells. Each bar represents a range of 1% point, except for the last bar (20+), which accumulates all cells with a mtRNA percentage of more than 20%. In each subplot, the cutoff of 10% for human and 5% for mouse cells is indicated with a black line. A grey dotted line represents the average for the technology.

Figure 4. Diversity: the Shannon diversity value per technology is indicated by the color, plotted against the percentage of cells. Each bar represents a range of 0.1, except for the bar below 6, which accumulates all values below 6, independent of the actual value. A grey dotted line represents the average of the technology.

Figure 5. Features: number of features detected per cell. Each bar represents a range of 500 features, with two exceptions. The bar labeled 8000+ represents all cells with more than 8000 features, independent of the actual value. The bar labeled ~17,000/~18,000 contains the three replicates of bulk RNAseq data, which contain these high numbers of features. A grey dotted line represents the average of the technology.

Figure 6. Detection of genes in single cell RNA sequencing methods in comparison to bulk RNAseq. (A) Human data, (B) mouse data. In each subpanel, on the left, the expression of genes only detected in bulk RNAseq is depicted; on the right, the expression of genes only detected in the various single cell methods (for cells identified as human/mouse in subpanels (A,B)). The intensely colored middle of the bar represents the genes which were detected with at least one read in both methods.

Figure 7. Reads to features: this diagram shows the identified features per cell versus the number of reads for that cell. Only cells with a maximum of 200,000 reads are displayed, as only a minor number of cells contained more reads.
Figure 8. Gene coverage: relative coverage over features. The x-axis shows the relative length of a transcript, from 5′ to 3′. Black lines indicate coverage from reads aligning to the sense strand of the genome; yellow lines indicate coverage from reads aligning anti-sense. E.g., for Smart-seq3, the UMI is only present in reads starting from the TSO at the 5′ end of the transcript; therefore, in Smart-seq3 UMI, we see high coverage at the 5′ side of the transcripts, and only alignment on the sense strand of the genome. In contrast, Smart-seq3 body reads are derived from paired-end sequenced, tagmented full-length cDNA, yielding reads over the whole transcript and on both strands of the genome.

Figure 9. Cell clustering. (A) UMAP of all combined datasets, color based on species. (B) UMAP of all combined datasets, color based on method. The bulk RNAseq points have been enlarged and highlighted with arrows for better visibility. Both diagrams were generated with eight PCA dimensions.

Figure 10. Correlation matrix: the Pearson correlation of all expression profiles. (A) Correlation of human gene expression. (B) Correlation of mouse gene expression.

Figure 11. Overview of performance metrics. A method failed on a metric if it showed worse performance in comparison to the average of the other methods or did not meet known standards (e.g., mtRNA cutoffs). If two groups were apparent, then the worse performing group was marked as failed, the better as passed. SORT- and VASA-seq achieved a medium score for the multiplets, since in theory no multiplets should be present, yet we still detected some. 10X gets a medium score on the equipment requirements, since only one machine is necessary, in contrast to no equipment required or multiple robots being required. A recommendation for a method is given if it passed at least half of the evaluated criteria. The exceptions are PlexWell, which cannot be recommended anymore since it has been discontinued (although a new kit is available), and Evercode, since the multiplet issue makes it not suitable in many circumstances.
9,594.2
2023-12-01T00:00:00.000
[ "Biology" ]
The Geographic Information Systems (GIS) Application in the Evaluation of Sanitary Services in the Big Algerian Cities: Empirical Study on the City of Annaba

GIS is one of the empirical scientific methods applied in many studies related to infrastructure, particularly the distribution of services, whether in terms of spatial distribution or of present and future planning. Moreover, it helps to provide a realistic picture of that distribution and of its relationships with the residents' distribution, density, and the road network, and of the existing imbalances in these relationships. The present paper provides an applied GIS study of the sanitary services in the city of Annaba (the fourth Algerian city) to illustrate the actual distribution of health facilities and to identify deficiencies by determining their adequacy and efficiency in delivering sanitary services. This is done by producing digital maps that clarify the distribution of health facilities according to their hierarchical system (hospitals, clinics, treatment rooms), together with maps of population densities, the road network, and the distances between health facilities, specifying a service area for each category of sanitary service; these can then be analyzed to propose new sites to be considered for health facilities. This analysis is based on the ArcGIS 10.3 program. Hence, applied GIS studies of sanitary services have great potential to reach exact scientific results and thereby obtain the information necessary for enhancing and developing this kind of service in the city of Annaba (Algeria).

Introduction

Sanitary services are among the most essential services affecting the health of individuals and society. Strategies to provide people with health services vary from one country to another, and one important element of such a strategy is the widespread coverage of the sanitary services network over the urban space. The importance of sanitary services reflects the economic and social development of the country, given their necessity for residents; this sector has been given great importance in many countries because it is a criterion of a country's economic and social development and allows the urban system to function better. This requires a balanced spatial distribution of these services, in line with the population density across the urban sectors [1][2][3]. The importance of studying health facilities is linked to residents' lives through the services offered, which respond to their real needs. Geographical knowledge, especially applied geography, has improved greatly in different sectors, and the appearance of GIS technology in the geographical field, with its great capability for spatial analysis, has drawn many researchers deeply into this technology. Hence the importance of employing it here: to analyze the actual spatial distribution of the health facilities that deliver this kind of service and its relationship with the residents' distribution, their density, and the road network, in order to clarify the imbalanced relationships and reveal the efficiency of the spatial distribution in the city of Annaba, including measures of geographic distribution.
Through the ArcGIS 10.3 program used for the current study, the researcher became convinced of the importance of GIS for managing sanitary services in the big Algerian cities, of its capacity for studying spatial distributions, and of its usefulness in assessing how efficiently these services meet residents' needs. The city of Annaba, chosen for this study, is considered one of the big Algerian cities, occupying the fourth place in the national urban network because of its high population growth, and because of the importance of the sanitary services distributed over its urban tissue as part of the public urban services and of their relationship with the residents, given the curative and preventive services provided to a wide geographical area and to different age groups, as well as their connection with the state as the party primarily responsible for providing services to the different categories of society, and given the city's need for a scientific study of the actual spatial distribution relying on GIS as an efficient instrument for applied geographic research. The city of Annaba suffers from deficiencies in the spatial distribution of health facilities over its urban sectors relative to population density, reflecting a low level of efficiency of the city's health facilities. The study seeks to identify the imbalance in its variables by developing a digital geographic knowledge base on the health facilities and their variables, and to analyze the statistical and analytical data in order to provide accurate information for decision makers in urban management.

The Situation of the City of Annaba, with Industrial and Service Importance for its Region

The city of Annaba is situated at 7.45° longitude and 36.55° latitude, on the eastern side of the Algerian coast (Figure 1). The city is bounded on the west by the Edough Mountain (1008 m), on the east by the Mediterranean Sea, and on the south by the Annaba plains; the El Daheb and Sibouse valleys drain into the south-eastern side. The city of Annaba has been an administrative center since the French colonial period, which qualified it to become an important industrial area during the Constantine Plan (1958). Annaba is connected by an important road network (National Road no. 44, National Road no. 16), besides the railways that link different regions. The importance of this situation is related to the port, considered one of the most important eastern ports, in addition to the international airport to the south-east along National Road no. 44. All of this makes it a pole attracting capital, functions, and flows.
Collecting and Developing Data and Constructing the Geographic Database

The first step consists in the collection and development of data, and the second in structuring the geographic database, as follows:

Collecting and developing data. This includes the spatial and descriptive data related to the sanitary services in the city of Annaba.

The spatial data and their various sources. Mapping data: these are necessary in all analytical phases of this study and are the basis of the success of geographic information systems (GIS). They consist of a set of maps at different scales: the urban structure plan of the city of Annaba (1/7,500), derived from the master plan of the Annaba urban group, which serves as the base map on which the health facilities are located according to their positions, along with the rest of the other services; it also clarifies the structural road network and was adopted in dividing the city into 24 urban sectors, with gaps filled by updating it through fieldwork. The topographic map of the study area, at scale 1/50,000 with UTM coordinates, illustrates the different slopes as well as the mountain passes and valleys that cut across the site of the city of Annaba [4,5].

Remote sensing data: the analysis relies on 4 LANDSAT satellite images (ETM 2000, TM 1987, MSS 1973), combining medium and high precision, which make it possible to know the pace, trends, and stages of the city's urban growth, to understand the various mechanisms of its urban dynamics, and then to trace the history of the emergence and completion of health facilities, especially in the urban extension areas; plus 4 other ASTER satellite images, for the years 2001 and 2006, to determine the state of the surface and the built forms of the health facilities. These data are reinforced by aerial photographs to determine the locations of health facilities in urban areas that are difficult to detect at this level of precision (especially traditional Arab and unplanned urban tissues, and some European-style tissues), using aerial photos of different years and scales, namely 1957 (1/20,000), 1967 (1/20,000), and 1992 (1/27,000), besides images extracted from Google Earth.

Field work data: fieldwork was relied upon to complete the rest of the data, in order to place the health facilities at their locations on the map according to their names and addresses as stated in the data obtained from the Directorate of Health and Population of the wilaya of Annaba. In this context, a Global Positioning System (GPS) device (Type 80 by the GARMIN firm) was used, with coordinates recorded in the kilometric UTM system used in the research maps. The following step was to introduce the urban structure plan into the computer and turn it into a numbered raster (pixel) map using an A0 scanner at a resolution of at least 600 DPI; after saving the digitized map in TIFF format, the map was georeferenced to its real position on the earth's surface using UTM coordinates taken with a GPS.

Descriptive data: descriptive data were provided by using all the theoretical references and sources related to the subject of services in general and health services in particular, as well as to geographic information systems and their applications in contemporary urban and geographic studies, especially those associated with urban management of services.
Official statistics related to population sizes and densities for the year 2008 were also obtained through direct contact with the National Statistics Office, as well as data on the number of health facilities by category, the numbers of nursing staff, and the number of families by address, by contacting the Directorate of Health and Population of the wilaya of Annaba (2013-2014), and the Directorate of Construction and Urbanism of the wilaya of Annaba, to obtain an overview of the city's neighborhoods, their sectoral division and area, and the other urban functions scattered across the urban fabric [6][7][8].

Building the geographic database. Building a geographic database is essential for any geographic information system. Designing the database links the locations of the health facilities to quantitative and descriptive tables, selecting the items relevant to health services so that the data become suitable for analytical and statistical treatment and can then be displayed in maps and graphs that reveal the efficiency of health services in Annaba, as is evident from Figure 2.

Components of Health Services in Annaba and Their Spatial Distribution

The health facilities are related to population size and to the degree of economic and social development. They combine public and private facilities, which vary in their capacity and ability to serve according to a hierarchical pyramid that reflects the efficiency of the sanitary services provided. They are as follows:

Public sector health facilities. Figure 3 shows the spatial distribution of the public sector's health facilities. Hospitals occupy the top of the pyramid among the health components, offering services of high quality and central character, which imposes their concentration in Annaba, the capital of the wilaya, compared with the neighboring municipalities. They are all gathered within what is known as the "University Hospital Center", which includes the following health facilities:

• Hospital "Ibn Rushd", with a 470-bed capacity, provides surgical services.
• Hospital "Ibn Sina", with a 269-bed capacity, provides specialist services.
• Hospital "Durban", with a 190-bed capacity, provides specialist services, including ear surgery.
• The Clinic of St. Teresa, with a 94-bed capacity, provides pediatric services.
• The Square March clinic, with a 50-bed capacity, provides specialist services in ophthalmology.
• The Saint Augustine clinic provides specialist services in dental surgery.
• The Elisa clinic provides dental surgery services.

Besides these, there are health institutions comprising 07 polyclinics, 12 treatment rooms, and a maternity clinic, which are unevenly distributed across the urban sectors of the city, as illustrated in Figure 3.

Health facilities belonging to the private sector. The private sector contributes to the provision of health services in Annaba through 08 surgical clinics with a total capacity of 305 beds, as shown in Table 1. In addition, the private sector contributes to health care through the following facilities:

• 05 hemodialysis centers, equipped with 87 generators.
• Two (02) medical imaging centers.
• 08 medical scanning clinics, with 06 CT scanners and two magnetic resonance imaging (MRI) machines.
• 08 laboratories for medical analysis.

Spatial and Statistical Analysis of Geographic Data Related to Health Services in Annaba

The evaluation of health services in Annaba is based on analyzing data on the locations of sanitary services and treating them with the spatial and statistical analysis tools provided by the GIS software ArcGIS 10.3. The most important of these methods are the following:

Buffer function. This function helps to evaluate the location of each health facility separately, through the "Buffer" function in the "Proximity" list under the "Analysis Tools" menu of the selected program. It allowed three bands to be drawn around each facility according to the distances specified for each criterion, giving each band a value representing the degree of risk, so as to cover all the criteria taken into account and combine them to determine the suitability of each facility separately. The closer the output is to zero, the more appropriate the location; it therefore serves as an index for evaluating the health services provided.

Nearest neighbor function. This function, within the "Spatial Statistics Tools", is based on measuring the distance between the location of a given health facility and the nearest one to it. It clarifies the distribution pattern of health facilities in Annaba: a regular pattern means that there are factors influencing it, whereas a chaotic pattern points to the factor of chance.

Mean center and distance functions. These functions, under "Measuring Geographic Distribution", consider the urbanized area of each urban sector, without the empty areas, in order to assess the tendency of health facilities toward (geometric) centralization within the urbanized area of the sector and their proximity to residential areas.

Analysis of Health Facility Efficiency and of Their Role in Urban Management

The methodology used, and the matching of information between the various spatial and descriptive data, make it possible to determine how suitable the sites of sanitary services in Annaba are and how efficiently this type of service is provided, applying GIS as a tool to ensure efficient management and to contribute to raising adequacy and efficiency. (This research relies on the public health facilities, because they feature a hierarchical system in delivering the different sanitary services to the various population categories.)

Functional efficiency of the sanitary service. The spatial distribution of health facilities in the city of Annaba is characterized by contrast (Figure 3). Most facilities are concentrated in the El Nasr urban sector: two (2) hospitals and one (1) treatment hall. This sector expanded during the French colonial period and has hosted this type of service in the city, which is why it serves not only its own inhabitants but also residents coming from other urban sectors, as well as those arriving from neighboring municipalities and other wilayas such as Taref, Guelma, and Skikda; this is what makes the population density reach 24 people/hectare. However, the number of health facilities diminishes gradually in the other urban sectors, falling to one facility per sector. A number of these sectors deserve particular mention: "Belaid Belkacem", "Bouhdid", "05 July 1962", "Safsaf-Willow", "Didouche Mourad", "Sibouse", and "M'hafer", where we find only a multi-service clinic or a treatment hall, and which include buildings still under construction. This puts the population density between 24 and 154 people/ha, which is reflected in the low functional efficiency of the health facilities compared with the growth in population size. Many urban sectors are characterized by the total absence of these facilities; we can mention here the sectors "Jabhat El tahriri El Watani FLN", "Sidi Brahim", "Port Said", "Hippone", and "March 08", forcing the inhabitants of these sectors to move to other sectors, regardless of their population density (Figures 4 and 5).
Health facility efficiency using the distance criterion. By analyzing the nearest neighbor values, which range between 0.37 and 0.78 across the urban sectors of the city of Annaba, the distribution pattern of the health facilities was detected. This pattern is not the same in all urban sectors, nor even between types of health facility. The treatment halls are characterized by a concentrated pattern in some urban sectors, such as "old city", "beautiful view", "Nasr", and "Bourtuqal"; by a random pattern in sectors such as "July 5" and "Oued Forcha"; and by a structured pattern in other sectors, such as "Sidi Aïssa", "Oued Kouba", and "Sibouse" (Figure 6). This situation is due to the structure of the road network, which varies from one sector to another depending on the characteristics of the urban fabric and the date of construction, and also to the distribution pattern of the rest of the other services (Figure 5). As for the multi-service clinics and hospitals, their distribution across the urban sectors takes a random pattern in some of them, while in others it is regular.
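The nearest neighbor values quoted above can be reproduced with the classic Clark-Evans ratio. The sketch below is a minimal illustration in Python, not the ArcGIS implementation; the facility coordinates and sector area are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_ratio(xy: np.ndarray, area: float) -> float:
    """Clark-Evans ratio R: observed mean distance to the nearest facility
    divided by the mean expected under complete spatial randomness,
    0.5 / sqrt(density). R < 1 indicates clustering, R ~ 1 randomness,
    and R > 1 a regular (dispersed) pattern."""
    tree = cKDTree(xy)
    d, _ = tree.query(xy, k=2)            # k=2: nearest point besides self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(xy) / area)
    return observed / expected

# Hypothetical facility coordinates (km, UTM-like) in a 10 km x 10 km sector;
# values of 0.37-0.78, as reported for Annaba, indicate concentrated patterns.
pts = np.array([[1.0, 1.1], [1.2, 1.0], [1.1, 1.3], [8.0, 8.2], [8.1, 8.0]])
print(round(nearest_neighbor_ratio(pts, area=100.0), 2))
```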
Spatial location efficiency of health services. To build a model for selecting the most suitable locations for health facilities in Annaba, all the previously established criteria had to be turned into algebraic maps (Map Algebra) of binary suitable/unsuitable areas, using the "Spatial Analysis Tools". The operation started by bringing in all the previously processed data in vector and raster ("cellular") form, then computing the straight-line distance for all criteria that were in vector form. The resulting equal-distance areas were divided into six classes covering the whole urban fabric of the city of Annaba and were then reclassified into six categories, where the suitable categories were given the value six as the highest degree (even though they occupy the first rank in the classification) and the unsuitable areas the value one as the lowest degree (without regard to their sixth place in the classification). Arithmetic operations were then carried out with the Spatial Analyst's Raster Calculator on all the resulting raster layers, after giving each layer a relative weight based on its importance in determining the site of a health facility and the degree of risk expected from it (Table 2). The layers were summed after being multiplied by their weights, producing a new layer that reveals the most suitable sites for locating health facilities in Annaba.

Based on the criteria adopted in the suitability model, it was found that highways in Annaba ranked first in the application, due to their small number and their proximity to a small number of treatment halls, at a rate of 0.32%; we note here the sectors "Sidi Aïssa", "Oued Kouba", and "08 May 1945". Second place was occupied by the branches of the civil protection, at a rate of 0.76%, for their proximity to some of the multi-service clinics. Then comes the degree of slope, at 4.25%, due to the city's growth and expansion over flat areas, avoiding steeply sloped areas, which are exploited as zones for entertainment, recreation, and rest. The water areas account for 9.21%, as they represent the lower-lying areas and extend over a limited area on the western side ("Belaid Belkacem", "Safssaf", "Bouhdid", and "05 July 1962"). We then find proximity to the treatment rooms at 11.15%, to the multi-service clinics at 13.65%, and to the hospitals at 15.44%, largely due to the urban plan and the properties of the urban fabric in which they are located. The valleys represent 19.04%, owing to the many mountain passes and valleys descending from the Edough Mount, the most important being "Oued Kouba", "Oued Forcha", "Oued Bouhdid", and "Oued Edeheb", which sweep across most of the urban fabric to empty into the sea on the eastern side. Thus we find that most health facilities adjoin these features at varying distances, while the criterion of the central position of health services occupies the highest percentage, 28.50%, because it was not taken into consideration when locating the health services, which lie on the edges of the urban fabric, especially the hospitals and multi-service clinics.

Based on the spatial analysis, we evaluated the locations of the health facilities in Annaba, dividing them into three categories according to their degree of suitability; the results are presented in Table 3. It is clear from Table 3 and Figure 7 that the treatment rooms are characterized by an acceptable degree of suitability, reflected in an average total grade estimated at 4.60, while the multi-service clinics and hospitals have a larger share of sites in the "inappropriate" class, with an estimated average of 6.60. Thus we find that 64.00% of the health services fulfilled some or all of the criteria considered in this model. The health services with locations of a suitable degree are in the urban sectors "beautiful scenery", "Oued Kouba", and "Sidi Aïssa"; these are sectors of planned, mainly European, neighborhoods occupying the central area of the city of Annaba. The chaotic urban sectors, by contrast, are characterized by health facilities on "unsuitable" sites, such as the urban sectors "M'hafer", "Safssaf", "Bouhdid", and "05 July 1962", because their locations were not studied and did not take the necessary criteria into account: these facilities were completed in a state of urgency in order to cover the significant delay in services after housing delivery. This phenomenon also exists in the "old city" sector, which includes ancient, traditional residential neighborhoods that have deteriorated and that most residents have left.
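The weighted overlay behind this model is straightforward to express in code. The sketch below mirrors a Raster Calculator expression with numpy; the criterion grids and weights are illustrative stand-ins for the reclassified layers and the Table 2 weights.

```python
import numpy as np

def weighted_suitability(layers: dict, weights: dict) -> np.ndarray:
    """Combine reclassified criterion rasters (1 = unsuitable ... 6 = most
    suitable) into one suitability surface by a weighted sum."""
    total = np.zeros_like(next(iter(layers.values())), dtype=float)
    for name, grid in layers.items():
        total += weights[name] * grid
    return total

# Toy 2x2 rasters for two hypothetical criteria; the weights stand in for
# the relative weights of Table 2 and are chosen to sum to 1.
layers = {
    "dist_to_hospital": np.array([[6, 4], [2, 1]]),
    "slope":            np.array([[5, 5], [3, 2]]),
}
weights = {"dist_to_hospital": 0.6, "slope": 0.4}
print(weighted_suitability(layers, weights))  # higher = more suitable site
```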
Conclusion

GIS has become one of the most important tools for supporting health facilities, which constitute one of the most important service sectors in the city. This technology makes it possible to treat the various data as a whole, using spatial and statistical analysis to obtain efficient results that support spatial practice. This research examined the efficiency of the spatial distribution of (public) health facilities in Annaba by applying GIS, through the spatial distribution tools of the ArcGIS program, analyzing the spatial data using the distance criterion, the nearest neighbor, and the mean geographic centers. Applying this technology to the sanitary services in Annaba shows clear spatial disparities in the distribution of health facilities across the urban sectors of the city, which affects the efficiency and sufficiency of the service delivered to the people. Moreover, these facilities are characterized by a regular and centralized mode of distribution for the treatment halls and a chaotic, irregular distribution for the clinics and hospitals, as confirmed by the nearest neighbor analysis. In addition, the suitability model reveals that site selection depends on highly variable criteria, from the highways to the central position of the health facilities within the urban sector, so that the sites of these facilities are distributed among appropriate, acceptable, and inappropriate locations. Hence, it is necessary to produce suitable maps for maintaining the public sanitary services in Annaba and to record their sites and data in an automated system, so as to obtain digital maps that can be updated, offer efficient solutions, and assist urban management in the field of health facilities through the use of GIS.
5,654.6
2017-01-01T00:00:00.000
[ "Computer Science" ]
Cooperative learning: Homogeneous and heterogeneous grouping of Iranian EFL learners in a writing context

One of the important aspects of learning and teaching through cooperation is group composition, or grouping "who with whom". An unresolved issue is whether heterogeneity or homogeneity is superior in the structure of the groups. The present study was an attempt to investigate the impact that homogeneous and heterogeneous groupings of Iranian EFL learners, with regard to their prior levels, had on their writing ability when working cooperatively. Having administered a standardized preliminary English test (PET) and a writing test taken from PET sample tests as a pre-test, 66 high and low proficient learners were assigned to three groups: heterogeneous, homogeneous high, and homogeneous low. Following the treatment, which took 10 sessions of 30 min each, all groups received a writing test as a post-test. The results demonstrated that learners improved their performance through cooperation, whether working with stronger or weaker peers. However, heterogeneous grouping showed superiority over homogeneous grouping at the low level. Low students in the heterogeneous class made greater relative gains than high students in the same class; it must be noted that the low students did not improve at the expense of the high students. The results revealed that cooperative learning could be especially beneficial for low students. It is hoped that the findings of the present study will give teachers deeper insight into group composition in cooperative learning courses and will help them create better group experiences for students.

Subjects: Language & Linguistics; Language & Literature; Language Teaching & Learning

PUBLIC INTEREST STATEMENT

Working together in groups has always been emphasized as an interesting feature of classroom practice. One important question is how these groups can best be formed; in other words, researchers are interested in finding the best group composition. The present study examines the impact of two different methods of grouping students in writing classrooms. More specifically, it reports on the difference between homogeneous (students of the same level) and heterogeneous (students of different levels) grouping of students in writing classrooms.

Introduction

A review of the literature demonstrates that many theoretical perspectives hold that learning improves when it is carried out as a constructive and social activity. According to Barros and Verdejo (1998), cooperative learning (CL), originally based on the social constructivist view of learning and a major teaching/learning strategy, is an attempt to make instruction more relevant and students more responsible. Marr (1997) defined CL as the instructional technique or grouping structure in which students are divided into heterogeneous/homogeneous groups to complete instructional activities. There is a considerable body of research validating the effectiveness of CL. Gillies, Ashman, and Terwel (2008) report that concepts such as cooperative, competitive, and individualistic learning have been investigated in social psychology and that about 750 studies have been conducted on the benefits of CL since 1800. The purpose of CL is elaborated upon by Johnson and Johnson (1989): CL, in their view, is meant to make each group member a stronger individual in his/her own right. It is not merely having students sit together, helping the others do their work.
Having students who finish their work first assist others is not a form of CL, either; neither is assigning a group of students to work together without ensuring that all contribute to the product. Baer (2003) holds that grouping is an important issue in any CL practice: in his words, an appropriate assignment to groups, i.e. grouping "who with whom", is a very important feature of courses that employ CL as the major instructional model. Given that a change in group composition can make a whole educational course either more efficient or unsuccessful, it seems reasonable to investigate this issue empirically rather than remain confused by many contradictory findings. Baer (2003) goes on to suggest two major ways to group students in CL, called homogeneous and heterogeneous grouping. In homogeneous groups, students are grouped according to their abilities, genders, and/or races, so that everyone in the group is the same regarding ability level, gender, ethnicity, etc. Its major counter-strategy, heterogeneous grouping, groups students with a variety of different ability levels, talents, and interests together to complete a single activity. The frequent practice of CL, and the necessity of informed decisions on the part of instructors, call for scientific research into what happens in a cooperatively organized classroom. Therefore, the present study aims at evaluating the effect of homogeneous and heterogeneous groupings of low and high learners working cooperatively on the writing ability of Iranian EFL intermediate learners. It will be highly beneficial for instructors to know more about the structure of groups when assigning learners to different groups. Indeed, the importance of the present study lies in providing an opportunity for an informed and scientific decision for practitioners in the field of EFL. It can also improve our understanding of how such grouping strategies, whether homogeneous or heterogeneous, influence language learning in a course that employs CL as a significant instructional technique.

Research questions

The present study aims at evaluating the impact that homogeneous and heterogeneous grouping of learners working cooperatively has on the writing ability of Iranian EFL intermediate learners. The rationale for selecting writing is that writing, as a process, lends itself well to cooperation (Storch, 2005): through cooperative writing, different members take on a role and, through the stages of pre-writing, rough drafting, rereading, revising, and editing, arrive at a final draft. Furthermore, the present study aims at investigating rigorously what happens to the different ability levels, i.e. high and low students, in either grouping format. More specifically, attempts were made to answer the following two research questions: (1) Is there any statistically significant difference in the writing performance of homogeneous and heterogeneous groups of Iranian intermediate EFL learners in cooperative learning? (2) Is there any statistically significant difference in the writing performance of homogeneous and heterogeneous groups with regard to their proficiency levels?
Literature review

Nowadays the focus of language learning is on communicative competence rather than linguistic competence, and many scholars emphasize the key role of communicative competence in language learning, which is acquired in groups rather than in isolation. The study of Roseth, Johnson, and Johnson (2008) revealed that cooperative goal structures (in comparison with competitive or individualistic goal structures) led to more positive peer relationships and higher achievement. Similarly, the results of Gillies and Boyle (2010) indicate the superiority of cooperative learning in the classroom (in terms of mediated-learning interactions, disciplinary comments, and students' verbal behavior) over practicing group work only. Among the different language skills, writing, as a productive skill that develops in a process-like manner, seems to adapt itself well to the cooperation process. In second language writing, researchers (e.g. Hyland, 2000; Liang, 2010) used to focus on peer response as the only form of collaborative writing in an EFL context. In addition, the use of small group/pair work in writing classes seems quite limited: it tends to be restricted to the beginning stages of joint writing (brainstorming), or to the final stages, in which students review each other's written texts and make suggestions on how they can be improved (Storch, 2005). Regarding learners' views on cooperative writing, the results of studies on students' attitudes to group/pair work in general are mixed. Some studies reported that learners had positive attitudes to pair and group work (Mishra & Oliver, 1998; Roskams, 1999), while others reported that learners had reservations about it (Hyde, 1993; Kinsella, 1996). However, according to Storch (2005), most of these studies rely on surveys rather than on interviews conducted with students immediately after experiencing a collaborative activity. In a study by Storch (2005), all students were positive about group and pair work; however, although most were positive about collaborative writing, two students felt that pair/group work turns into oral activities, such as group discussion, rather than writing activities. Those who found the experience positive said that it provided them with an opportunity to compare ideas and learn from each other different ways of expressing their ideas.

Participants

104 Iranian female adults, aged between 18 and 35, volunteered to participate in the study. They had all enrolled in English courses in the Kish language school in Tehran. The students were informed that participation in this study was voluntary, and they were fully informed about the research before they agreed to take part. In addition, in accordance with accepted ethical practice, the researcher assured the participants that their identities would not be disclosed in any resulting publication. Regarding age and educational background, the participants were heterogeneous, and most of them were students of different fields at university level. These learners were administered a preliminary English test (PET), from which the writing paper was excluded, in order for the researcher to determine their proficiency levels. Out of the 104 students, 9 failed the exam according to the standards of the school, and 8 did not register for the following term; therefore, 87 students were left. Afterwards, the writing test was administered to group the students according to their writing proficiency level. In order to widen the writing ability differences between the subjects, the scores of students who fell within 0.4 standard deviation (SD) below or above the mean were not used in the study; because the number of students was small, the researcher had no alternative but to eliminate these scores in order to obtain a genuinely heterogeneous group. The scores of 21 students were therefore not used, and the remaining 66 students participated in the study and were assigned to three groups. The participants, who were at low-intermediate level according to the results of the PET, were assigned to one of three groups based on their writing ability level. Table 1 illustrates the three groups in detail.
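The mean-centered exclusion step just described is easy to express in code. The sketch below is a minimal illustration with hypothetical scores, not the study's actual data.

```python
import numpy as np

def split_high_low(scores: np.ndarray, band: float = 0.4):
    """Drop scores within `band` SDs of the mean, then split the remainder
    into low (below the band) and high (above it) ability groups."""
    mu, sd = scores.mean(), scores.std(ddof=1)
    low = scores[scores < mu - band * sd]
    high = scores[scores > mu + band * sd]
    return low, high

# Hypothetical writing pre-test scores standing in for the real data.
scores = np.array([6, 7, 8, 10, 11, 12, 14, 15, 16, 18, 19, 20], dtype=float)
low, high = split_high_low(scores)
print(len(low), len(high))   # students retained in each ability band
```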
Design

The design of the study can be deemed quasi-experimental, since there was a pre-test at the beginning of the study, followed by 10 sessions of treatment and a post-test at the end of the treatment. The researcher herself taught the participants. Moreover, two raters (the researcher and another EFL teacher who was made well aware of the scoring procedure) scored the papers, and the results were analyzed to estimate the inter-rater reliability. The learners' writing performance was the dependent variable of the present study. There were two independent variables as well: the first was the type of grouping, divided into homogeneous and heterogeneous grouping; the second was the ability level of the students, divided into high and low proficient students. Therefore, three groups participated in the present study: the heterogeneous group, consisting of high and low students; the high-level homogeneous group; and the low-level homogeneous group.

Instruments

Two major instruments were used in the present study: the preliminary English test (PET) and writing tests. Firstly, the researcher administered the Cambridge preliminary English test (version A) as a language proficiency test so as to homogenize the participants and make sure that they were at the same level of proficiency. Having administered the PET test, from which the writing section was excluded, the researcher gave the participants a writing test (pre-test). The purpose of administering the pre-test was to identify the low proficient and high proficient students with respect to their writing ability and assign them to different groups. The topic given to the students was "The Rainy Day". The minimum length of the composition was 150 words, and the students were precisely guided as to what was expected of them. For the post-test, the same topic as the pre-test, "The Rainy Day", was assigned, and the learners' performance on the writing test was scored. There was an optimal distance (a period of one month) between administering the pre-test and the post-test; therefore, the test effect was naturally minimized (Best & Kahn, 1989).

Writing test scoring criterion

In this study, Jacobs, Zinkgraf, Wormuth, Hartfiel, and Hughey's (1981) composition profile was used to score the students' performance on the writing components. Each paper is rated on five components: Content, 30 points; Organization, 20 points; Vocabulary, 20 points; Language Use, 25 points; and finally Mechanics, 5 points. This scale is also broken down into numerical ranges that correspond to four mastery levels: excellent to very good, good to average, average to poor, and very poor. These levels are characterized by key words showing specific criteria for excellence in composition (Hadley, 2003). Briefly, analytic rating was included in the study because it was thought that it could simplify and objectify the rating of essays and might therefore lead to more reliable writing scores (Hadley, 2003).
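As a concrete illustration of this analytic profile, the sketch below totals the five component ratings and checks each against its maximum. The component caps follow the text; the sample ratings are hypothetical.

```python
# Maximum points per component of Jacobs et al.'s (1981) analytic profile.
PROFILE_MAX = {"Content": 30, "Organization": 20, "Vocabulary": 20,
               "Language Use": 25, "Mechanics": 5}

def total_score(ratings: dict) -> float:
    """Sum the component ratings after checking each stays within its cap."""
    for component, points in ratings.items():
        if not 0 <= points <= PROFILE_MAX[component]:
            raise ValueError(f"{component} must be within 0-{PROFILE_MAX[component]}")
    return sum(ratings.values())

# Hypothetical ratings for one essay (out of 100 in total).
print(total_score({"Content": 24, "Organization": 16, "Vocabulary": 15,
                   "Language Use": 19, "Mechanics": 4}))  # 78
```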
These levels are characterized by key words showing specific criteria for excellence in composition (Hadley, 2003). Briefly, analytic rating was included in the study because it was thought that it could simplify and objectify the rating of essays and therefore lead to more reliable writing scores (Hadley, 2003).
Treatment sessions
Having explained the aims of the course as well as the benefits of cooperative learning, the instructor made the students aware of the format of the groups and explained who was supposed to work with whom until the end of the course. Afterwards, she taught the students some interaction strategies, such as modified-interaction strategies and social-interaction strategies, deemed essential for them to acquire so as to negotiate for meaning and participate in meaningful interaction (Naughton, 2006). Every session, learners, working in cooperative pairs, were supposed to write a group composition. The researcher chose the topics from sample PET writings. Each treatment session began with a teacher presentation to introduce the method utilized. Then the participants worked with their partners to write a story on a topic taken from a sample PET writing. The students were supposed to write 150-word stories in class within 30 min. Some of the CL techniques utilized in the class were: think-pair-share, constructive (or structured) controversy, roundtable, jigsaw, group investigation, and cooperative integrated reading and composition, on which the researcher elaborated thoroughly in chapter two. During the treatment sessions, what was of utmost importance was keeping the cooperative atmosphere of the class. In fact, a group of students sitting at a table doing their own work, but free to talk with each other as they work, is not structured to be a cooperative group, as there is no positive interdependence (Jacobs, 1987). There should be an accepted common goal for which the group will be rewarded for its efforts, and all this shows the important role teachers play in keeping the spirit of cooperation (Johnson, Johnson, & Holubec, 1994). According to Jacobs (1987), "when students write group composition, making each group member responsible for one part of the task can help avoid loafing by less active or less able students" (p. 331). Resource interdependence also leads to positive interdependence. Resource interdependence "exists when each member has only a portion of the information, resources, or materials necessary for the task to be completed and members' resources have to be combined in order for the group to achieve its goal" (Johnson & Johnson, 1989, p. 24). During the treatment sessions, the researcher walked around the class and among the pairs to observe the learners' cooperative activities. The researcher insisted on the cooperation of all the participants, each of whom was accountable for a part of the task. Students who worked in pairs were assigned different roles. Assigning a role to each student in the group helped to reduce behavior problems, and these roles changed so that students did not become bored. The assigned roles included: leader (who led the group in the implementation of the assignment), time keeper (who kept the time and let the group know when it was time to start), and encourager (who encouraged group members to participate in discussions and share their ideas) (Johnson et al., 1994). In order to encourage group solidarity further, the teacher resorted to positive reward interdependence, too.
The teacher tried to make a connection between the rewards that one group member received and those that another one received. In so doing, students earned points for their partner based on how well they did relative to their previous quizzes (Johnson et al., 1994).
Results
To address the first hypothesis of the research, which claims that there is no statistically significant difference in the writing performance of homogeneous and heterogeneous groups of Iranian intermediate EFL learners in cooperative learning, the researcher tabulated descriptive statistics in Table 2. As shown in the table, low and high students in both heterogeneous and homogeneous groups improved their writing ability at the post-test (e.g. the mean scores of heterogeneous high students on the pre- and post-tests were 19.20 and 21.40, respectively; the mean scores of heterogeneous low students on the pre- and post-tests were 8.09 and 13.20, respectively). Paired t-tests were run to determine whether the differences between the students' means at the pre-test and post-test were significant. The results, t(10) = 4.37, p < .001, suggested that there is a significant difference between the mean scores of the heterogeneous high students on the pre-test and post-test of writing ability. Heterogeneous low students also improved at the post-test (their mean scores were 8.09 and 13.20 on the pre- and post-tests, respectively), and their mean difference was significant, t(10) = 15.95, p < .001. The same results were obtained for both homogeneous high and low students. The homogeneous high students' mean scores on the pre-test and post-test were 16.61 and 19.37, respectively, and the difference was significant, t(21) = 13.46, p < .001. Homogeneous low students' mean scores were 10.73 and 12.62 on the pre-test and post-test, respectively, and the difference was likewise significant, t(21) = 9.46, p < .001. Therefore, the first hypothesis, which stated that there is no statistically significant difference in the writing performance of homogeneous and heterogeneous groups of Iranian intermediate EFL learners in cooperative learning, was rejected. As was shown, all groups improved at the post-test. A summary of the paired t-tests for learners in the different groups is also displayed in Table 3. To better illustrate the obtained results, they are displayed in two figures. Figure 1 displays the mean scores of low homogeneous and low heterogeneous students at both the pre- and post-tests. The figure shows that low-proficiency students in both heterogeneous and homogeneous groups improved their writing ability at the post-test. The achievement gain was 1.88 for homogeneous low students and 5.11 for heterogeneous low students. Therefore, it can be concluded that homogeneous and heterogeneous grouping has an effect on the writing ability of the students. The second hypothesis claimed that there is no statistically significant difference in the writing performance of homogeneous and heterogeneous groups among low-proficiency Iranian intermediate EFL learners. As shown in Figure 1, heterogeneous low students obtained a higher mean gain (5.11) than the homogeneous ones (1.88). This reveals that heterogeneous grouping has been more effective for the writing ability of low-level students. In other words, heterogeneous low students improved more as a result of the treatment.
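As an illustration of the comparisons reported in this section (the paired pre/post tests above and the between-groups tests that follow), a minimal R sketch is given below. It is illustrative only: the score vectors are hypothetical stand-ins, since the raw data are not published.

```r
# Hypothetical score vectors (the raw data are not published); n = 11
# matches the reported df of 10 for the heterogeneous high group.
pre  <- c(18.5, 19.0, 19.5, 20.0, 18.0, 19.2, 19.8, 18.7, 19.4, 19.6, 19.3)
post <- c(21.0, 21.5, 21.8, 22.0, 20.5, 21.2, 21.6, 20.9, 21.3, 21.7, 21.4)

# Paired t-test: did the same learners improve from pre-test to post-test?
t.test(post, pre, paired = TRUE)

# Between-groups t-test on gain scores (post - pre) for two independent
# groups, e.g. heterogeneous low vs. homogeneous low (values hypothetical).
gain_het <- post - pre
gain_hom <- c(1.5, 2.0, 1.8, 2.2, 1.9, 2.1, 1.7, 2.3)
t.test(gain_het, gain_hom, var.equal = TRUE)
```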
A between-groups t-test was run to compare the means of heterogeneous low and homogeneous low students and showed t(31) = 8.9, p < .001, which confirmed that the difference was significant. Therefore, the second hypothesis, which stated that there is no statistically significant difference in the writing performance of homogeneous and heterogeneous groups among low-proficiency Iranian intermediate EFL learners, was rejected.
Figure 2 displays the mean scores of high homogeneous and high heterogeneous students at both the pre- and post-tests, and illustrates the reverse pattern for the high-level students. Again, both homogeneous and heterogeneous students obtained achievement gains and higher mean scores at the post-test in comparison with the pre-test. This time, however, homogeneous high students outperformed heterogeneous high students, which suggests that homogeneous grouping has been more effective for the writing ability of high-level students. The result of a between-groups t-test was also calculated, t(13) = 1.02, p = .32. Consequently, the difference was not significant, and the third hypothesis, which stated that there is no statistically significant difference in the writing performance of homogeneous and heterogeneous groups among high-proficiency Iranian intermediate EFL learners, was not rejected. Since the researcher needed to examine the interaction between grouping strategy and ability level, she ran t-tests to compare the achievement gains of low and high students in both heterogeneous and homogeneous groups. The independent between-groups t-test comparing the gains of low and high students in the heterogeneous group showed that the low students in the heterogeneous group gained relatively more than their high counterparts in the same group, t(16) = 4.86, p < .001. Finally, the researcher investigated the results of the independent between-groups t-test comparing the achievement gains of high students in the heterogeneous and homogeneous groups. The mean gain and standard deviation for the high students in the heterogeneous group were 2.20 and 1.6, respectively; for the high students in the homogeneous group, they were 2.76 and .96, respectively. The t-test results showed that high students in the heterogeneous group gained at least as much as their high counterparts in the homogeneous group, t(13) = 1.02, p = .32. Therefore, it can be suggested that the relative gains made by the low students in the heterogeneous group were not made at the expense of their high peers. In general, the obtained results show that both homogeneous and heterogeneous grouping have a facilitating effect on the writing ability of low and high students. However, heterogeneous grouping seems preferable to homogeneous grouping, since when low students worked with more knowledgeable peers, they developed their language skills and gained valuable experience. Besides, the relative gains made by the low students in the heterogeneous group were not made at the expense of their high classmates, which suggests that high students benefited from teaching less proficient students as well.
Discussion and conclusion
The findings suggest that learners improved their writing performance through cooperation with either low- or high-proficiency learners in both homogeneous and heterogeneous groups.
Machado and Mattos (2000) cited Donato (1994), who demonstrated that scaffolding can be obtained through collaborative work among peers of the same level of competence in L2 acquisition settings, and not only through the unidirectional help of a more capable peer or expert, as the majority of research on scaffolding has shown. Indeed, a number of studies in L2 classrooms (e.g. Ohta, 2001; Swain & Lapkin, 1998) have shown that scaffolding can also occur in peer instruction. However, it was found that CL was more successful for low-proficiency students in the heterogeneous group. This can be explained from a sociocultural perspective, too. Vygotsky (1978, p. 128) argued that, from the very beginning of life, for development to occur, a child needs to interact with a more able member of society to receive assistance, which has been referred to as "scaffolding". The important point about the metaphor of scaffolding is that it not only helps the weaker partner accomplish the task at hand, but also enables the child to perform the task independently later (Greenfield, 1984). Consequently, it may be that the low students improved more through interaction with their more capable peers. Ellis (2013) also reiterated that, to benefit from interactions and exchanges, L2 learners need to communicate with someone who has sufficient proficiency in the target language to ensure that the input is not just at the learner's level but, at times, slightly beyond it. Therefore, the researcher came to the conclusion that students with a low command of English need to get more help and feedback from their partners. On the other hand, the high-proficiency students in the heterogeneous group achieved as much as the high-proficiency students in the homogeneous group, despite the fact that they spent considerable time working with weaker students. This finding can be explained from a sociocultural perspective as well. Van Lier (2014) believes that although Vygotsky's work focused on the cognitive development of children, the theory is applicable to all learning and to both asymmetrical (i.e. expert-novice) and symmetrical (i.e. equal-ability) groupings. In this way, students can learn from the act of teaching others. The act of teaching or explaining to others may help L2 learners develop their language knowledge and internalize what they learned before (Allwright, 2014). As to the effectiveness of CL practices, novice teachers are encouraged to have students cooperate with their classmates. However, Iranian students usually do not tend to work or learn cooperatively, and they do not feel comfortable with this kind of learning. This does not imply that teachers have to give up using the approach in their classes. It means that teachers need to make their students aware of the benefits and advantages of cooperative learning, emphasize the importance of their participation in classroom work, and let them become habituated to it through practice. In the present study, the researcher observed that the discomfort the students felt at the beginning of the semester diminished dramatically; they became very well engaged with each other. The present study aimed to examine scientifically the relative merits of two major cooperative grouping strategies (homogeneous and heterogeneous grouping) for Iranian high and low students and their performance on writing. The obtained results can be considered useful and fruitful for language teachers, the great decision-makers in the classroom.
Although language institutes typically group learners homogeneously in classes by means of placement tests, oral interviews, …, there are some students in the same classes who are at a lower proficiency level or are weaker at one skill in comparison with the rest of the class; in other words, one of their skills lags behind the other skills in comparison with the other students. Therefore, teachers, who sometimes have large classes, are puzzled by the numerous types of students. In these classrooms, more proficient students are mixed, and even thrown together, with less proficient ones. Teachers should therefore ask whether peer interaction can be useful and productive for both groups in these situations. Making better group experiences for students is essential. According to a Vygotskian approach, in heterogeneous groups, more competent learners scaffold weaker ones and help their progression (Mynard & Almarzouqi, 2006). The pedagogical implication of the ZPD for SLA/FLA is that learners who were helped in doing something will later be able to do it without help (Mynard & Almarzouqi, 2006). In a cooperative setting, the teacher is also required to monitor students' interaction (Klingner & Vaughn, 1999). Therefore, teachers need to take courses to become familiar with appropriate teaching strategies for managing the class (Calderon, 1990). Teachers should not be left alone in this process: support from peers, students, policy-makers and training courses, as well as findings from empirical research on the use of cooperative learning and group composition, are deemed important. The researcher hopes that the results obtained from the present study will be beneficial for those involved in language teaching and will help language learners improve their language proficiency. Besides, the researcher hopes that the findings of this study will lead to more studies of cooperative learning group composition.
Suggestions for further research
The study at hand investigated the effect of two CL grouping strategies on EFL low and high students' written performance. Because of the multiple facets of cooperative learning and group composition, the researcher tried to limit the scope of her research. Therefore, the following are suggested for future research: (1) As a next step, a study can be done to investigate the same grouping strategies in relation to other language skills, such as speaking and reading. (2) The participants in the present study were of the same gender. A study can be done to compare separate-gender classes with mixed-gender ones. (3) The subjects who participated in the present study were adults aged between 18 and 35. A similar study on subjects of a different age range may yield interesting results. (4) In the present study, students were assigned to either a homogeneous or a heterogeneous group based on their proficiency level. Students can be grouped homogeneously or heterogeneously based on other factors, such as ethnicity, field of study, or age, and the effects on learners' writing ability or other language skills can be investigated. (5) In the present study, the teacher decided who would work with whom and assigned students to either group. A study can be conducted to see whether students' preference to choose their own partners has any positive effect on writing ability or other language skills.
(6) In the study at hand, students were taught interaction strategies at the beginning of the treatment sessions. However, the effect of this training on the patterns of interaction that arose as small groups of students worked cooperatively to complete tasks was not scrutinized meticulously. (7) In the present study, the researcher observed learners' interactions and the mechanisms to which they resorted when engaged in cooperative tasks. Another study can be designed to audiotape and meticulously analyze the nature of collaborative dialogues. (8) This study showed that heterogeneous grouping is extremely beneficial, especially for low-proficiency students. The study took a sociocultural perspective and suggested that high learners, or experts, helped low students, or novices, in doing tasks. However, the researcher did not explore the scaffolded help that the experts provided the novices. Therefore, a study can be conducted to analyze the tutorial interaction and scaffolding tools within cooperative work. (9) In the present study, the use of L1 was not banned, as long as students tried to meet the goal of the task. Moreover, the researcher observed that students made use of their first language as a scaffolding tool. As a next step, a study can be done to scrutinize the use of L1 as scaffolded help for accomplishing a task and learning the second language. In short, it is hoped that both the positive and negative experiences and reactions reported here will help teachers decide upon the adequacy of homogeneous and heterogeneous grouping of high and low achievers in cooperative groups for their writing classes, and, if one of them is judged to be more appropriate, help make it more useful.
Digestive enzymes and gut morphometric parameters of threespine stickleback (Gasterosteus aculeatus): Influence of body size and temperature
Determining digestive enzyme activity is of potential interest to obtain and understand valuable information about fish digestive physiology, since digestion is an elementary process of fish metabolism. We described for the first time (i) three digestive enzymes, amylase, trypsin and intestinal alkaline phosphatase (IAP), and (ii) three gut morphometric parameters, relative gut length (RGL), relative gut mass (RGM) and Zihler's index (ZI), in threespine stickleback (Gasterosteus aculeatus), and we studied the effect of temperature and body size on these parameters. When mimicking seasonal variation in temperature, body size had no effect on digestive enzyme activity. The highest levels of amylase and trypsin activity were observed at 18°C, while the highest IAP activity was recorded at 20°C. When sticklebacks were exposed to three constant temperatures (16, 18 and 21°C), a temporal effect correlated to fish growth was observed, with inverse evolution patterns between amylase activity and the activities of trypsin and IAP. Temperature (in both experiments) had no effect on morphometric parameters. However, a temporal variation was recorded for both RGM (in the second experiment) and ZI (in both experiments), and the latter was correlated to fish body mass.
Introduction
Energy metabolism is the main and most important biological parameter involved in all physiological processes of lower vertebrates. Their energy requirements are derived from the oxidation of organic compounds (carbohydrates, lipids and proteins) produced from food digestion [1]. Digestion is an elementary process in fish metabolism. The threespine stickleback (Gasterosteus aculeatus) is a noncommercial small-bodied teleost fish (35-55 mm, mean standard length) widely found in boreal and temperate regions of the northern hemisphere. It inhabits coastal marine waters, brackish waters, and a wide array of freshwater habitats [19]. The interest of the stickleback is mainly ecological and scientific, because it occupies an intermediate trophic level [20] and contributes to valuable ecosystem functions: it is considered an omnivore which feeds on small invertebrates and various insect larvae, but it can also serve as prey for many species of carnivorous benthic invertebrates (when juvenile), birds and fish (adults and older stages) [19,20]. Thus, this species is generally considered a scientific treasure. It is a well-studied model for experimental studies in multidisciplinary fields of biology: aquatic evolutionary biology, ecology and behaviour [21,22]. It is also considered a good sentinel fish species in aquatic ecotoxicology [23-25]. Several authors have addressed threespine stickleback digestive physiology [19,26-29]. Their studies concerned the ingestion process (food type and availability, frequency of ingestion, foraging behaviour) and the resulting energetic aspects. However, to our knowledge, no study has focused on digestion or on the digestive enzymes of this animal model. We addressed the enzymatic aspect of G. aculeatus digestion to supplement existing data on the digestive process and energy metabolism of this species and, in the long term, to consider using these parameters as new biomarkers in ecotoxicology.
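For reference, the three gut morphometric indices just mentioned are usually computed from simple length and mass measurements. The definitions below are the commonly used ones (e.g. Zihler's formulation and the usage in Kramer and Bryant, cited later as [41]); they are assumed here, since the text does not restate the formulas:

```latex
% Commonly used definitions, assumed here (the paper does not restate them):
\mathrm{RGL} = \frac{\text{gut length (mm)}}{\text{standard length (mm)}}, \qquad
\mathrm{RGM} = \frac{\text{gut mass (g)}}{\text{body mass (g)}}, \qquad
\mathrm{ZI}  = \frac{\text{gut length (mm)}}{10 \times \sqrt[3]{\text{body mass (g)}}}
```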
Since temperature is the most important determinant of fish physiology (it influences survival, growth and reproduction), we studied its influence on the activity levels of three digestive enzymes (amylase, intestinal alkaline phosphatase and trypsin). To avoid or control for the potential influence of other parameters such as age, size, or food availability, we set up two original experimental designs under laboratory conditions using threespine sticklebacks reared at INERIS (French National Institute for Industrial Environment and Risks). First, sticklebacks were exposed to a temperature-photoperiod cycle that mimicked seasonal (spring to autumn) variations for 240 days. For the second experiment, sticklebacks were exposed to three steady-state temperatures (16, 18 and 21˚C) for 120 days. Sticklebacks are typically present in waters with temperatures ranging from 4 to 20˚C [19], so 16-18˚C appeared to be an optimal temperature range for their growth and reproduction (Guderley 1994), while 21˚C is in the range of stressful temperatures for them [30]. The temperatures we selected corresponded to a global rise of 1.8-4.0˚C, as predicted for the coming decades by climate change scenarios [31]. We also studied gut morphometric parameters (RGM, RGL and ZI) in both experiments.
Ethics statement
This experiment was conducted in accordance with the European directive 2010/63/UE for the protection of animals used for scientific purposes. The registration number of INERIS, where the experiments were conducted, is C60-769-02. The experimental protocols were submitted to and reviewed by a nationally recognized French ethics committee, CREMEAP (Comité Régional d'Ethique en Matière d'Expérimentation Animale de Picardie), under the registration number 96.
Fish origin and acclimation conditions
Our experiments were carried out on sticklebacks from the same population, originating from the INERIS artificial rivers (Verneuil-en-Halatte, France). The sticklebacks were all born in the same year after natural reproduction in mesocosms between May and September and were 6-11 months old at the beginning of the experiment. Three months before the start of the experiments, sticklebacks were transferred, for acclimation to the experimental temperatures (14˚C for experiment 1; 16˚C for experiment 2), to 300-liter opaque laboratory tanks with a continuous freshwater circulation system (0 ppt), a photoperiod according to the experiment (see below) and a constant feeding regime with the following properties: 5% crude protein, 0.15% crude fat, 1.6% crude fiber and 91.1% moisture (frozen commercial chironomid larvae ad libitum, 3% of body weight per day, Ocean Nutrition™, Belgium). Each tank was under continuous aeration to maintain a saturated oxygen concentration. Sticklebacks were periodically (every 15 days) classified according to their size classes, i.e. small (30-35 mm, 36-40 mm) and large (41-45 mm and >45 mm), and distributed among the different tanks. The feeding level was adjusted at each time-point and classification, and was maintained as a constant ratio of fish mass, as described by Leloutre [32].
Experimental design
Experiment 1: Influence of the temperature-photoperiod cycle. In the first experiment, after the acclimation period (see above), sticklebacks were exposed to a temperature cycle (14 to 20˚C) and a photoperiod cycle (Table 1).
The temperature was increased regularly (1˚C per month on average) during the first 180 days of the experiment and then decreased until the end (1.5˚C per month on average) to reach approximately 17˚C on day 240. In addition, the photoperiod was also modulated (see Table 1). Temperature and photoperiod varied together, as they do in natural environments, and their effects were not distinguishable. To assess their combined effects, we tested differences between measurement times. Moreover, we also studied the potential influence of sex and fish size. To describe enzyme and digestive tract parameters, we discretized fish length into four classes: 30-35 mm, 36-40 mm, 41-45 mm, and >45 mm. Thus, according to the acclimation conditions, four tanks (one per size class) containing 300 individuals each were submitted to the temperature-photoperiod cycle. Every other month, 15 individuals were sampled from each tank. In order to minimize stress as much as possible, sampling was carried out in one go, size was measured only after dissection, and classification into the correct size group was then performed. This explains why different numbers of sticklebacks were analyzed each time. Water parameters (dissolved oxygen, pH and temperature) were assessed throughout the experiment to ensure optimal water quality; their effects on digestive enzymes and gut morphometric parameters were not studied (Table 1). Individuals from the different tanks were fed similarly to the acclimation conditions, according to fish mass. Due to technical issues, 5 samples were lost on sampling days 180 and 240.
Table 1. Water parameters and photoperiod during the mimicked seasonal variation experiment. Only the temperature and the photoperiod were modulated. The pH, dissolved oxygen and conductivity changed with water temperature, but their effects were not studied.
Experiment 2: Influence of prolonged maintenance of sticklebacks at three different water temperatures. In the second experiment, three water temperatures were selected: (i) 16˚C and 18˚C, representing optimal temperatures for sticklebacks [33], and (ii) a higher temperature of 21˚C, in the range of stressful temperatures for sticklebacks [30]. To investigate the effect of temperature acclimation on the stickleback digestive process, homogeneous reproductive adult sticklebacks (955.80 ± 55.13 mg and 40.40 ± 0.49 mm) were used. After a 3-month-long acclimation period at 16˚C, sticklebacks were randomly transferred into three new 300-liter opaque tanks (n = 30/tank/condition), with constant feed (frozen commercial chironomid larvae ad libitum, with the same composition as under acclimation conditions) and photoperiod (14h/10h light/dark cycle). Water temperature was adjusted at a rate of 0.5˚C per day, using a TANK water conditioner (TK500, TECO SRL, Italy) for 18 and 21˚C. Once the desired temperatures were reached, sticklebacks were maintained there for 120 days, and ten individuals from each group were sampled on days 15, 60 and 120 of the experiment. During the experiment, water parameters (dissolved oxygen, pH and temperature) were continuously monitored (Table 2).
Enzyme analysis and biometric parameters
For all experiments and at each sampling date, a 24-hour starvation period was applied before stickleback euthanasia, as recommended by Debnath [4]. Sticklebacks were sacrificed by cervical dislocation after anesthesia with tricaine methanesulfonate MS222 (70 mg L⁻¹, SIGMA-ALDRICH, France) and weighed, and then standard length was measured.
The whole digestive tract was removed on ice, rinsed with cold Tris-HCl buffer (0.01 M, pH 7, SIGMA-ALDRICH, France), cleaned of exterior fat, weighed and measured. Samples were then homogenized with ceramic (3 mm Ø) and glass (1 mm Ø) beads in cold Tris-HCl buffer (0.01 M, pH 7), using a PRECELLYS24 homogenizer (BERTIN TECHNOLOGIES, France) at 5,500 rpm for 2 × 10 s, and centrifuged at 15,000 × g for 30 min at 4˚C. Supernatants were stored at -80˚C until analysis. Measurements of amylase and intestinal alkaline phosphatase (IAP) activity levels, according to Junge et al. [34] and Panteghini and Bais [35], respectively, were performed with adapted methods, using Thermo-Scientific Gallery ready-to-use reagents. Trypsin activity measurements were performed according to the Garcia-Carreño and Haard [36] method, using N-benzoyl-DL-arginine 4-nitroanilide hydrochloride (BAPNA, 3 mM) as a substrate. All enzymatic assays were adapted to the Gallery™ Automated Photometric Analyzer (Thermo Fisher Scientific Oy) and performed at 37˚C by kinetic colorimetric assay at 405 nm. Results are reported in U g⁻¹ of gut tissue. dx.doi.org/10.17504/protocols.io.nmtdc6n [PROTOCOL DOI].
Statistical analysis
Data were processed using R statistical software (v3.3.1) at α = 0.05 and were analyzed using ANOVA or ANCOVA. Normality and homogeneity tests (Shapiro and Levene tests, respectively) were used. When normality or homoscedasticity were not met, data were transformed using the Box-Cox, log, or square root method. A pair-wise t-test with Bonferroni correction was used as a post hoc test to compare groups. In experiment 1, digestive enzyme parameters (amylase, IAP and trypsin) and gut morphometric parameters (RGL, RGM, ZI) were first analyzed at each time-point independently using ANCOVA, with fish length as a covariate and sex as a factor. To assess the combined effects of temperature and photoperiod, differences over time were investigated. In a second step, one-way ANOVA was performed with time-point as a factor, either on all sticklebacks together (no sex effect, fish length effect neglected) or on males and females taken separately (fish length effect neglected), according to the results of the first analysis. Biometric parameters (body mass, standard length, Fulton's condition factor and GSI) were first analyzed at each time-point independently, using two-way ANOVA with sex and size class as factors. Since sex had no effect on these parameters, except for GSI, one-way ANOVA was performed for each size class with time-point as a factor, either on all sticklebacks together (body mass, standard length, Fulton's condition factor) or on males and females taken separately (for GSI). In experiment 2, we first analyzed the effect of sex on the different parameters at each time-point independently, using two-way ANOVA with sex and temperature condition as factors. Since no effect of sex was recorded for any of the parameters except GSI, two-way ANOVA was then performed with temperature and time as factors. This analysis was conducted on all sticklebacks, except for GSI, which was analyzed in males and females taken separately, to assess differences over time.
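A minimal R sketch of this analysis pipeline, for one response variable, might look as follows. It is illustrative only, not the authors' script: the data frame `d` and its column names (`amylase`, `length`, `sex`, `timepoint`) are hypothetical.

```r
library(car)  # for leveneTest(); base R provides the rest

# d: one row per fish, with hypothetical columns
#   amylase (U per g of gut), length (mm), sex (factor), timepoint (factor)

# 1. Check assumptions on the response
shapiro.test(d$amylase)                      # normality
leveneTest(amylase ~ timepoint, data = d)    # homoscedasticity across groups

# 2. If assumptions fail, transform (log shown; Box-Cox or sqrt are alternatives)
d$amylase_t <- log(d$amylase)

# 3. ANCOVA at one time-point: length as covariate, sex as factor
summary(aov(amylase_t ~ length + sex, data = subset(d, timepoint == "120")))

# 4. One-way ANOVA over time, then Bonferroni-corrected pairwise t-tests
summary(aov(amylase_t ~ timepoint, data = d))
pairwise.t.test(d$amylase_t, d$timepoint, p.adjust.method = "bonferroni")
```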
Finally, principal component analysis (PCA) was performed on digestive enzyme activity levels for the two experiments and included the other parameters, i.e. fish biometric parameters (body mass, standard length, Fulton's factor and GSI), gut morphometric parameters (RGL, RGM and ZI), and water parameters (dissolved oxygen, pH and temperature), as supplementary (explanatory) variables.
Results
Experiment 1: Influence of the temperature-photoperiod cycle
First, the potential effect of sex on the different parameters was tested. No significant effect was observed (p > 0.05) on most of the parameters, except GSI (Table 3) and amylase activity at one time-point (Supplementary data, S1 Table and S1 Fig). In fact, amylase activity on day 60 was lower in females than in males (Supplementary data, S1 Fig); this difference was not observed at the other dates. Table 3 presents the different parameters according to size classes and time-points. As described above, only 15 sticklebacks per condition were caught at each sampling, and they were classified in the appropriate experimental condition after dissection; this is why the groups are numerically unbalanced (Table 3). In addition, because the sticklebacks belonged to the same generation, the smallest ones grew and no longer fitted the 30-35 mm size class from day 180 or the 35-40 mm size class from day 240. ANCOVA was used with length as a covariate (a continuous numerical variable) to avoid this bias in the statistical analyses. Throughout the experiment, amylase activity was the highest, followed by IAP activity and then trypsin activity. No overall effect of size was observed (see supplementary data), except for amylase and trypsin in a few individual cases (Supplementary data, S1 Table and S1 and S3 Figs). Time had a significant effect on the activity of the three digestive enzymes (Fig 1). Overall, the activity levels of stickleback digestive enzymes increased significantly on days 120 (18.32˚C, pair-wise t-test: p < 0.00001) and 180 (20.02˚C, pair-wise t-test: p < 0.05) and decreased on day 240 (17.24˚C) as compared to day 180. The lowest mean activity levels of the three digestive enzymes were recorded on day 0, while the highest values were observed on day 120 for amylase and trypsin, and on day 180 for IAP (Fig 1). Concerning gut morphometric parameters (Table 3), RGL and RGM values ranged from 0.47 to 0.64 and from 0.034 to 0.052, respectively. Neither fish size nor time affected RGM or RGL, except in some cases: RGM decreased while size increased on day 60, and RGM increased in 35-40 mm sticklebacks on day 240 as compared to day 0 (Supplementary data, S2 Table). On the other hand, Zihler's index (ZI) was strongly affected by fish size (one-way ANOVA, p < 0.0001) at each sampling date, with higher values for small fish, ranging from 5.34 on day 240 (>45 mm) to 13.23 on day 0 (30-35 mm). For each size class, no overall difference in stickleback body mass or length was observed throughout the experiment. As expected, there was a significant increase in body mass and length according to size class at each date. Fulton's condition factor (K) was the same for all size groups. As regards GSI, despite an increase in absolute values from day 60 to day 180 indicating the onset of reproduction in a few individuals (especially females), no overall significant differences were observed between the different size classes or between sampling dates, due to the high variability recorded within groups.
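The PCA with supplementary (explanatory) variables described in the statistical analysis section, whose results are reported next, could be run along the following lines. FactoMineR is one common choice and is assumed here, since the text does not name the package; the column layout of `d_pca` is likewise hypothetical.

```r
library(FactoMineR)

# d_pca: columns 1-3 hold the active variables (amylase, trypsin, IAP);
# the remaining columns (biometric, gut morphometric and water parameters)
# enter as supplementary quantitative variables, so they do not shape the
# axes, but their correlations with the components can still be read off.
res <- PCA(d_pca, scale.unit = TRUE, quanti.sup = 4:ncol(d_pca), graph = FALSE)

res$var$coord            # loadings of the three enzyme activities
res$quanti.sup$coord     # correlations of supplementary variables with axes
plot(res, choix = "ind") # map of individuals (cf. Fig 2B)
```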
A PCA was conducted including the following explanatory variables: water parameters, photoperiod, biometric parameters, and gut morphometric parameters (Fig 2A). Individuals were distinguished by the first axis according to time conditions (Fig 2B) and were explained positively by water temperature and photoperiod (R² = 69% and 59%, respectively, p < 0.00001), and negatively by oxygen, conductivity and pH (R² = 75%, 66% and 53%, respectively, p < 0.00001). None of the biometric and gut morphometric parameters explained the variance of the three digestive enzyme activities.
Table 3. Biometric and gut morphometric parameters measured in sticklebacks exposed to a temperature-photoperiod cycle after 0, 60, 120, 180 and 240 days.
Experiment 2: Influence of prolonged maintenance of sticklebacks at three different water temperatures
This experiment started with sticklebacks of the same size and weight in order to evaluate the effect of three temperatures and exposure durations on their digestive capacity. As in the first experiment, there was no effect of sex on the biological parameters, except for GSI. Duration was the factor with the greatest impact on the activity of the three digestive enzymes (two-way ANOVA, p < 0.00001, Fig 3). Similarly to the first experiment, amylase activity was the highest, followed by IAP activity and trypsin activity, respectively. Overall, the highest level of amylase activity was recorded on day 15 in all groups and decreased significantly from day 60 to day 120 (pair-wise t-test, p < 0.00001). Trypsin and IAP activity levels increased significantly from day 60 at 16˚C (pair-wise t-test, p = 0.02 and p < 0.00001 for trypsin and IAP, respectively), and only on day 120 at 18˚C (pair-wise t-test, p < 0.00001), as compared to day 15. Amylase activity was negatively correlated to weight and length (R² = 53 and 55%, respectively). Trypsin and IAP activity levels increased significantly on day 120 and were positively correlated to weight (R² = 55 and 73%, respectively, p < 0.0001) and length (R² = 51 and 68%, respectively, p < 0.0001), except at 21˚C. At that temperature, trypsin activity remained the same throughout the experiment and was lower than the activity at the other water temperatures (16 and 18˚C) on day 120. Temperature influenced amylase activity only on day 15, with higher activity at 16˚C than in the other groups (pair-wise t-test, p < 0.0001). In the highest-temperature group (21˚C), trypsin activity was lowest on day 120 (pair-wise t-test, p < 0.001) as compared to the other groups. When sticklebacks were exposed to the highest water temperature (21˚C), a significant increase in IAP activity was observed on day 120 (pair-wise t-test, p < 0.00001), whereas trypsin activity remained stable throughout the experiment and similar to the activity measured on day 15. Morphometric parameters (Table 4) were mainly affected by duration. Only RGL was not affected by time or temperature, and ranged from 0.59 to 0.66. RGM ranged between 0.032 and 0.048 and increased significantly on day 120 (pair-wise t-test, p < 0.05), while ZI decreased significantly in all groups from day 60 (pair-wise t-test, p < 0.001) as compared to day 15, and ranged from 4.91 to 7.83. Sticklebacks grew significantly in body mass and body size throughout the experiment. Those from the 21˚C group exhibited the significantly lowest body mass after 60 days and the lowest body size after 120 days as compared to the other groups.
Overall, neither temperature nor duration affected Fulton's condition factor (K) or GSI, in either males or females, except in a few cases (Table 4).
Digestive characteristics of G. aculeatus
In this study, we focused on the activity of three digestive enzymes (i.e. amylase, trypsin and IAP) and three gut morphometric parameters (i.e. RGL, RGM and ZI) of the threespine stickleback. Sticklebacks were fed a constant-composition food (frozen chironomid larvae). Under these fixed nutritional conditions, the activity of amylase was higher than that of trypsin in all experiments. Higher amylase activity has been reported in benthophage fish (bream, carp, roach), which usually feed on chironomid larvae, than in other fish species (pike, burbot, perch) [39]. Higher amylase activity levels have also generally been noted in omnivorous fish [13]. In natural environments, sticklebacks have a wide choice of food items, and they hunt for prey visually. Thus, this species is admittedly omnivorous, with a preference for food of animal origin [29]. In the literature, the feeding habits of G. aculeatus have usually been addressed by studying stomach contents [29]. To our knowledge, this is the first study that addresses the digestive enzymes of G. aculeatus, and our first results accord with what has been reported in the literature on the feeding habits of this species, despite the lack of dietary diversity. Concerning gut morphometric parameters, we described for the first time three parameters in relation to G. aculeatus gut morphology (RGL, RGM and ZI). RGL and ZI are crude measures and have been explored as potential indices to identify the feeding habits of fish based on their gut length [8]. According to the feeding guilds reported by Al-Hussaini [10,40], fish are recognized as carnivores, omnivores or herbivores when RGL ranges between 0.6-2.4, 1.3-4.2 and 3.7-6.0, respectively. Based on ZI values, Kramer and Bryant [41] also categorized fish with a low body mass (0.3-3.0 g) as carnivorous (ZI = 2.3-3.2), omnivorous (ZI = 2.4-5.8) or herbivorous (ZI = 11.6-55.0). In our study, RGL ranged from 0.50 to 0.64, corresponding to the RGL values of carnivorous fish, while ZI values ranged between 4.91 and 13.60 and did not match any of the three categories described above. Contrary to what has been reported in the literature for other species, the use of these parameters to classify sticklebacks according to their feeding habits is not straightforward, probably because of the experimental conditions used in our study (i.e. a calibrated and undiversified diet). In addition, sticklebacks have an elongated, fusiform body shape [28]. As reported by German and Horn [8], gut morphometric parameters should be treated with caution in fish with an elongate, eel-like body shape, such as pricklebacks.
Digestive performance of G. aculeatus
RGM is usually used to evaluate the amount of tissue dedicated by fish to their digestive tract [8]. In our study, RGM values were higher in small sticklebacks (30-35 mm) than in large ones (>45 mm), suggesting that small sticklebacks fed on chironomid larvae increased their gut mass to maximize the extraction of nutrients and energy from their diet and ensure growth. Increasing gut mass is one of the mechanisms used by animals to increase energy intake from food [42]. This study provides new information to supplement existing data on the digestive process and energy metabolism of the threespine stickleback.
However, the specific experimental conditions of this study (a calibrated, undiversified food ration) must be kept in mind when considering these first results, and further experiments with different diets are needed to confirm them. The substrate composition of the diet is well known to be a modulating factor of digestive enzyme activities and gut morphometric parameters. Hence, in this study, we chose to control this parameter in order to study the effects of other factors such as fish size, sex and temperature.
Effect of temperature on digestive capacity
The effect of temperature on the digestive capacity of G. aculeatus was addressed using two different experiments: (i) exposure to a temporal modulation of temperature and photoperiod mimicking seasonal variations, and (ii) long-term exposure to three fixed temperatures. The activity of the digestive enzymes was modulated in different ways depending on the exposure scenario. The effects of temperature on digestive enzymes can be direct [43], or indirect, via the modulation of food intake [44] or of gut morphometric parameters [42]. In our study, morphometric parameters were not affected by temperature in either experiment. Hence, digestive enzyme modulation could be the result of a direct effect of temperature on the synthesis and/or secretion process, or an indirect effect through an altered ingestion capacity of the sticklebacks. Unfortunately, food intake was not measured in our study and should be taken into account in further experiments. However, whether the effects are direct or indirect, we do not address the mechanisms of the thermal effect on the digestive enzymes here. In fact, we wanted to determine how these parameters react to temperature modulation and whether they could be used as markers of a stressful situation, such as long exposure to a higher water temperature. The first experiment showed a temporal variation of digestive enzyme activity levels, with increases on days 120 (18.3˚C) and 180 (20.0˚C), when the water temperature and photoperiod were highest. Fish digestive enzyme activity can be affected by several factors, e.g. seasonal factors like temperature and the photoperiod [39,45]. Seasonal variation of amylase and proteolytic activity levels was found in roach and rudd [7,15]. Adaptation of enzyme activity levels to temperature was species- and/or food-dependent, so the author concluded that the two species had different strategies for seasonal adaptation of digestive enzymes that both achieved the same aim: in roach by direct temperature dependence, and in rudd by an inflexible annual rhythm (i.e. photoperiod) coinciding with the annual temperature pattern of the water. Carbohydrase activity levels increased in two teleost fish (bream and roach) when the temperature rose to 20˚C [46]. The author explained this elevation by a greater food intake coinciding with the higher temperature. In fact, increased food intake is one of the mechanisms aimed at offsetting an increased energy demand, and this increase generally results in greater digestive enzyme activity [42]. In natural conditions, a slight decrease in the amount of food eaten by sticklebacks was recorded during cold periods, followed by an increase during warmer periods, in relation to the increased energy demands during the breeding season [19,28]; this supports our finding. Amylolytic enzyme activity increased to maximum levels in bream and roach during the warmer season, which coincides with the period of sexual maturity [12].
We evaluated reproduction based on GSI and found no correlation with the activity levels of the three digestive enzymes, despite an increase of global GSI means from day 60 to day 180 that coincided with the increase in water temperature, indicating that reproduction had begun in a few individuals (especially females). The photoperiod and water parameters were the variables that best explained digestive enzyme activities. Temperature variations were associated with fluctuations in other water parameters (pH, dissolved oxygen and conductivity), so their individual effects on digestive enzymes were not considered; further experiments should clarify their contribution. In the first experiment, the photoperiod was modulated along with water temperature and showed a positive correlation with the three digestive enzymes. Sticklebacks are visual predators, so their feeding behavior is affected by the day/night cycle, slowing down or stopping completely during nocturnal periods [47]. In miiuy croaker (Miichthys miiuy) larvae and juveniles, the photoperiod had no effect on trypsin or amylase activity levels, yet lipase activity was significantly higher under longer light/dark (18/6 and 24/0) photoperiods [45]. In the second experiment, there was no difference in digestive enzyme activity levels between sticklebacks submitted to the three temperatures, except for amylase on day 15 in the 16˚C group and trypsin on day 120 in the 21˚C group, and the adaptation patterns did not match what was recorded in the first experiment, when sticklebacks were exposed to a mimicked seasonal variation of temperature. In fish, temperature adaptation of enzymes may be achieved through several mechanisms: (a) changing the molecular conformation, (b) modulating the activation energy, (c) changing the affinity for the substrate, (d) modulating enzyme secretion, and (e) producing various isoenzymes [43]. The absence of differences in digestive enzyme activity between sticklebacks from the different temperature groups can be explained by a prior adaptation of their digestive system to an increase in water temperature, established before the start of the second experiment through the production of isoenzymes with a large temperature range. When the second experiment began, the sticklebacks were already at 16˚C and had been exposed to a progressive increase in temperature from day 0 (the beginning of experiment 1). Under a low rate of water temperature increase (0.04˚C h⁻¹), corresponding to the increase applied in the present study, amylolytic activity levels in goldfish, carp, roach and perch juveniles seem to be at the same levels in all seasons [48]. The temporal decrease of amylase activity levels in our three temperature groups could be explained by the production of new isoenzymes with a low activation energy, which are more efficient even at low concentrations. A lower activation energy of amylase has been reported in different fish species as an adaptation of the digestive system to temperature through the production of new, efficient isoenzymes [39]. Exposure of sticklebacks to three different temperatures for 120 days influenced their growth (weight and length), which was correlated to digestive enzyme activity (Fig 4). Environmental temperature has major effects on metabolism, growth, and fundamental biochemical processes, and these effects are well documented in the literature [47,49].
In the early growth stages of almost all fish species, including sticklebacks, growth rapidly increases when temperature rises, reaches a peak at optimum temperatures, and rapidly decreases when temperatures become adverse. We evidenced a limiting effect of the "high" 21˚C water temperature on stickleback growth: sticklebacks reared at this temperature weighed less than those raised at 16 or 18˚C. Several studies have addressed the effect of temperature on stickleback growth [19]. Sticklebacks can tolerate a wide range of temperatures but prefer relatively cool water (<18˚C) [33], which is in accordance with our results. However, the bioenergetics model of Hovel [49] suggested that 22˚C could be the optimal temperature for growth, with an upper limit of 25˚C; but this model was based on a 3-day experimental design and did not address prolonged temperature exposure of sticklebacks. Digestive capacity (e.g. proteolytic enzymes) and the metabolic capacity required to support tissue protein synthesis have been reported as factors that can partly contribute to setting the rate of fish growth [50]. Some authors have reported a correlation between growth and trypsin, chymotrypsin and alkaline phosphatase activity levels in fish [51,52]. In the second experiment, trypsin activity did not change in sticklebacks exposed to 21˚C and was significantly lower than in sticklebacks from the 16 and 18˚C groups on day 120. This low trypsin activity level may explain the difference in mean weights between sticklebacks from the 21˚C group and those from the other two groups. The difference can also be explained by the effect of temperature on stickleback metabolism. In fact, high temperature has been reported to increase the metabolic rate of living organisms [19]. In such a situation, optimum energy allocation is affected, and the energy balance is redirected to maintenance rather than growth [53]. Decreased growth could be the consequence of an active metabolism following ingestion of a given amount of food [54]. A water temperature of 21˚C corresponds to a summer temperature predicted by the IPCC [31] for the coming decades. Our results show that exposure of sticklebacks to such elevated temperatures for a long period could compromise their physiology by affecting growth, which may subsequently have long-term impacts on other physiological functions such as reproduction. Sticklebacks (fed ad libitum) probably allocated the energy from digested food to maximizing maintenance and reproduction while compromising growth and trypsin synthesis. This interesting result suggests that trypsin activity could be a potential marker of thermal stress, but further investigation is required to confirm it. Water temperature had no major effect on gut parameters in either experiment, except on ZI in small fish (<40 mm), which decreased significantly with the increase in stickleback body mass on days 60 and 120 as compared to day 0. Several studies have addressed the effect of temperature on the digestive tract of vertebrates and have reported a decrease in intestinal length and mass when animals were exposed to elevated temperatures [55,56]. This involution at warm temperatures could result from a regulatory mechanism that maintains the gut in an appropriate condition without incurring excessive energetic costs. Digestive tissues are among the most costly tissues to maintain in terms of energy.
Therefore, the intestine should be long enough for dietary nutrient uptake to be sufficient [57].
Effect of body size on digestive capacity
Stickleback size had no, or only a slight, effect on the activity levels of the three digestive enzymes. This could be due to the way the results were expressed (activity per gram of gut), suggesting that this allometric scaling already took the potential effect of size into account. Amylase and trypsin activity levels decreased with increasing age in European sea bass larvae [58]; these results were explained by the increase in total proteins in older larvae. To avoid the bias of the protein load, we chose to express enzymatic activity levels in units per gram of gut. Several studies have shown that fish age or developmental stage has variable effects on digestive enzymes depending on species and food habits, but most of them focused on the early larval or juvenile stages of economically relevant fish species [12,45,59,60]. Alkaline phosphatase activity increased with size in three teleost fish species (pike, perch, and roach); in contrast, carbohydrase (i.e. amylase) activity decreased with fish size [12]. German [61] compared digestive enzyme activity levels in four species of herbivorous and carnivorous pricklebacks (Cebidichthys violaceus, Xiphister mucosus, Xiphister atropurpureus and Anoplarchus purpurescens) and determined the effects of age on these parameters. The author reported a size effect on certain digestive enzymes in some species but not in others. For example, in Cebidichthys violaceus, pepsin activity decreased significantly with increasing size, whereas in the other three species no significant difference was observed. Amylase activity increased with size in Cebidichthys violaceus, Xiphister mucosus and Xiphister atropurpureus, whereas no changes were observed in Anoplarchus purpurescens. No size-related changes in trypsin activity were observed in any of the four species. No effect of age or size on carbohydrase (i.e. amylase, sucrase, maltase) or lipase activity levels was noted in Eurasian perch either [62]; the authors discussed these results by pointing to the restricted range of age classes covered by their study. We investigated the effect of stickleback size on digestive capacity in the first experiment. No size effect was recorded on RGM or RGL during the experiment. However, ZI was substantially impacted by size, being higher in small sticklebacks (<40 mm) than in large ones (>40 mm); these differences were mostly related to body weight variations. The literature indicates that gut morphometric parameters increase with fish body size [8], which is not in agreement with our results. These findings are not surprising considering the fusiform body shape of sticklebacks: the structure of the stickleback intestine (a straight tube) is defined by its elongate body shape and cannot be longer than its abdominal cavity, unlike a flat and streamlined body shape, which allows looping and convolution of the intestine [63].
Conclusions
To our knowledge, this study is the first to characterize the digestive activity of G. aculeatus by focusing on three digestive enzymes (amylase, IAP, and trypsin) and three gut morphometric parameters (RGM, RGL, and ZI). Sticklebacks were fed exclusively with frozen chironomid larvae, with a constant protein, fat and fiber composition. We chose to control the diet parameter in order to study the effect of other factors (i.e. size, sex and temperature) on the digestive parameters.
In these fixed nutritional conditions, sticklebacks exhibited higher amylase than trypsin activity in both experiments, a profile characteristic of an omnivorous fish and in accordance with the feeding habits of this species as defined by other parameters (stomach contents) in the literature. When considering gut morphometric parameters, RGL and ZI failed to categorize sticklebacks according to their feeding habits, probably because of the lack of dietary diversity. Our study showed no size effect, but a temporal variation of the three digestive enzymes was observed when temperature was progressively modulated to mimic seasonal variation. The activity of the three digestive enzymes was higher in warm periods (with a long photoperiod) and lower in cold periods (with a short photoperiod). The highest levels of amylase and trypsin activity were observed at 18˚C, while the highest IAP activity level was recorded at 20˚C. When sticklebacks were exposed to three constant temperatures (16, 18, or 21˚C), no differences were observed among groups, but a significant temporal effect was observed, with inverse evolution of the patterns of amylase activity versus the other two digestive enzymes. The temporal effect on digestive enzymes was correlated with the effect of temperature on fish growth. Cool temperatures (16 and 18˚C) favored a high growth rate (based on body mass evolution), while a temperature of 21˚C limited growth efficiency even with a daily ad libitum diet. The results of this study suggest that, in the context of global warming, long exposure to a high water temperature (21˚C) could compromise stickleback physiology by affecting growth. Altered growth parameters (weight and size) were correlated with a decrease in trypsin activity, suggesting that this enzyme could be used as a marker of thermal stress in the threespine stickleback. While keeping in mind the specific experimental conditions of this study (calibrated, undiversified food ration), these findings can supplement existing data on the digestive processes and energy metabolism of threespine sticklebacks. The absence of body size effects on digestive enzyme activity and the response of digestive enzymes to temperature changes are results of potential interest in an ecotoxicological context.

Supporting information

S1 Table. ANCOVA results for digestive enzyme activities in sticklebacks exposed to a temperature-photoperiod cycle and sampled at days 0, 60, 120, 180 and 240. The model was constructed with size as a continuous covariate and sex as a factor. (DOCX)

S2 Table. ANCOVA results for gut morphometric parameters in sticklebacks exposed to a temperature-photoperiod cycle and sampled at days 0, 60, 120, 180 and 240. The model was constructed with size as a continuous covariate and sex as a factor. (DOCX)

S1 Fig. Linear regression model explaining the effect of size and sex on amylase activity in sticklebacks exposed to a temperature-photoperiod cycle and sampled at days 0, 60, 120, 180 and 240. The model was constructed with size as a continuous covariate and sex as a factor. Note that a single regression line was plotted in the absence of a sex effect. Males are plotted in blue, females in red. (TIF)

S2 Fig. Linear regression model explaining the effect of size and sex on intestinal alkaline phosphatase (IAP) activity in sticklebacks exposed to a temperature-photoperiod cycle and sampled at days 0, 60, 120, 180 and 240. The model was constructed with size as a continuous covariate and sex as a factor.
Note that a single regression line was plotted in the absence of a sex effect. Males are plotted in blue, females in red. (TIF)

S3 Fig. Linear regression model explaining the effect of size and sex on trypsin activity in sticklebacks exposed to a temperature-photoperiod cycle and sampled at days 0, 60, 120, 180 and 240. The model was constructed with size as a continuous covariate and sex as a factor. Note that a single regression line was plotted in the absence of a sex effect. Males are plotted in blue, females in red. (TIF)
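For reference, an ANCOVA of this form is easy to reproduce with standard statistical tooling. The following is a minimal sketch in Python using statsmodels, assuming a hypothetical data table with columns activity, size, and sex; the column names and numbers are illustrative stand-ins, not the authors' actual dataset.

```python
# Minimal ANCOVA sketch (hypothetical data): enzyme activity modeled with
# size as a continuous covariate and sex as a categorical factor.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative stand-in for the measured data (U/g gut, mm, M/F).
df = pd.DataFrame({
    "activity": [12.1, 9.8, 14.3, 11.0, 13.5, 10.2],
    "size":     [38.0, 42.5, 35.1, 44.0, 36.7, 41.2],
    "sex":      ["M", "F", "M", "F", "M", "F"],
})

# OLS model: activity ~ size + sex, followed by a Type II ANCOVA table.
model = smf.ols("activity ~ size + C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```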
Non-Linear Effect of Volume Fraction of Inclusions on the Effective Thermal Conductivity of Composite Materials: A Modified Maxwell Model

In this paper, the non-linear dependence of the effective thermal conductivity of composite materials on the volume fraction of inclusions is investigated. The proposed approximation formula is based on Maxwell's equation, in which a non-linear term depending on the volume fraction of the inclusions and on the ratio of the thermal conductivities of the polymer continuum and the inclusions is introduced in place of the volume fraction of inclusions. The modified Maxwell equation is used to calculate the effective thermal conductivity of several composite materials and agrees well with earlier experimental results. A comparison of the proposed relation with different models is also made.

Introduction

Theoretical prediction of the effective thermal conductivity (ETC) of multi-phase composite materials is very useful not only for analysis and optimization of material performance, but also for new material design. Correct modeling of the thermal coefficients of these materials is valuable because of their excellent thermal and mechanical properties and their use in industrial applications and technological fields. The challenges in modeling complex materials come mainly from the inherent variety and randomness of their internal microstructures and from the coupling between the components of different phases. In the literature, several attempts have been made to develop expressions for the effective thermal conductivity of two-phase materials, by researchers such as Maxwell [1], Lewis and Nielsen [2], Cunningham and Peddicord [3], Torquato [4], Hadley [5], Agari and Uno [6], Misra et al. [7], Singh and Kasana [8], and Verma et al. [9]. Lewis and Nielsen [2] reported a semi-empirical model incorporating the effect of the shape and orientation of particles, or the type of packing, for a two-phase system. Another approach to thermal conductivity prediction was initiated by Torquato [4] for dispersed spherical or cylindrical particles; it also takes into account the filler geometry and the statistical perturbation around each filler particle. Agari and Uno [6] proposed another semi-empirical model, based on the argument that the enhanced thermal conductivity of highly filled composites originates from the formation of conductive chains of fillers. Verma et al. [9] developed a porosity-dependent correction term for spherical and non-spherical particles. Calmidi and Mahajan [10] presented a one-dimensional heat conduction model that considers the porous medium to be formed of a two-dimensional array of hexagonal cells. Bhattacharya et al. [11] extended the analysis of Calmidi and Mahajan to metal foams consisting of a complex array of interconnected fibers with an irregular lump of metal at the intersection of two fibers. Pabst and Gregorova [12] developed a simple second-order expression for the porosity dependence of thermal conductivity.
In this study, a non-linear second-order correction term is developed in place of the volume fraction of inclusions and used in Maxwell's model [1] to estimate the ETC of metal-filled composite materials. Originally, Maxwell's model was derived for low dispersions of filler particles in the matrix. Here, a non-linear second-order empirical expression in place of the filler volume fraction is proposed, and the unknown coefficients are determined using boundary conditions and previously reported experimental results. The volume fraction of inclusions in Maxwell's model is then replaced by the non-linear second-order correction term. The results obtained using the modified Maxwell model show better agreement with experimental values.

Mathematical Formulation

By solving Laplace's equation and assuming the absence of any interactions between the filler particles, Maxwell [1] calculated the effective thermal conductivity (ETC) of a random distribution of spheres in a continuous medium at low filler concentrations as

$$k_e = k_m \, \frac{k_f + 2k_m + 2\phi\,(k_f - k_m)}{k_f + 2k_m - \phi\,(k_f - k_m)}, \qquad (1)$$

where $k_e$, $k_m$ and $k_f$ are the effective thermal conductivity, the matrix thermal conductivity and the thermal conductivity of the fillers, respectively, and $\phi$ is the volume fraction of inclusions. This model was developed for low dispersions, i.e. for a low volume fraction of the filler phase, and it fails to predict the ETC of composite materials with a higher volume fraction of metallic inclusions. In composite materials, the inclusions most frequently used are particles of carbon, aluminum, copper, iron, silicon, brass, graphite and magnetite. Therefore, to predict the ETC of composite materials, some correction is needed in Maxwell's model, either in the thermal conductivity of the constituent phases or in the fractional volume of the constituents.

Pabst and Gregorova [12] developed a model that shows the non-linear porosity dependence of the thermal conductivity of two-phase materials. Verma et al. [9] also developed a model for the ETC of two-phase materials with spherical and non-spherical inclusions using a correction term. Some experimental results [13-18] also show the non-linear dependence of ETC on the volume fraction of the filler phase. Reviewing all these facts, we concluded that a non-linear correction term should replace the volume fraction of inclusions in dissimilar materials. We therefore assume a non-linear second-order correction term in place of the volume fraction of inclusions,

$$F_p = \alpha\,\phi + \beta\,\phi^2, \qquad (2)$$

where $\alpha$ and $\beta$ are empirical constants obeying the following boundary conditions: (1) when $\phi = 0$, $F_p = 0$, so that $k_e = k_m$; (2) when $\phi = 1$, $F_p = 1$, so that $k_e = k_f$. Condition (1) is satisfied by Equation (2) directly, and condition (2) requires

$$\alpha + \beta = 1. \qquad (3)$$

Using Equation (3),

$$F_p = \alpha\,\phi + (1 - \alpha)\,\phi^2. \qquad (4)$$

Therefore, replacing the volume fraction of inclusions $\phi$ by the correction term $F_p$ in Maxwell's Equation (1), the expression for the ETC becomes

$$k_e = k_m \, \frac{k_f + 2k_m + 2F_p\,(k_f - k_m)}{k_f + 2k_m - F_p\,(k_f - k_m)}. \qquad (5)$$

Using this relation, we have calculated the ETC of several samples, such as high-density polyethylene (HDPE) and polypropylene (PP) filled with metal particles, epoxy resin filled with SiO2, α-Al2O3 and AlN, and sandstones filled with air, n-heptane and water.
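As a quick numerical illustration, the modified model of Equation (5) is straightforward to evaluate. The following Python sketch implements Equations (4) and (5); the value of α is an empirical, sample-dependent constant, and the 0.9 used below, as well as the matrix and filler conductivities, are purely illustrative values, not fitted results from this work.

```python
import numpy as np

def modified_maxwell_etc(k_m, k_f, phi, alpha):
    """ETC of a two-phase composite from the modified Maxwell model.

    k_m   : matrix thermal conductivity (W/m.K)
    k_f   : filler thermal conductivity (W/m.K)
    phi   : volume fraction of inclusions (0..1)
    alpha : empirical constant (sample dependent); beta = 1 - alpha
    """
    f_p = alpha * phi + (1.0 - alpha) * phi**2          # Eq. (4)
    return k_m * (k_f + 2*k_m + 2*f_p*(k_f - k_m)) / (  # Eq. (5)
                  k_f + 2*k_m -   f_p*(k_f - k_m))

# Illustrative values: HDPE matrix (~0.45 W/m.K) with tin filler (~67 W/m.K).
phi = np.linspace(0.0, 0.5, 6)
print(modified_maxwell_etc(0.45, 67.0, phi, alpha=0.9))
```

Note that the boundary conditions hold by construction: at phi = 0 the function returns k_m, and at phi = 1 it returns k_f.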
Results and Discussion

The value of the empirical constant α is found to depend on the ratio of the thermal conductivities of the constituents and on the size, shape and distribution of the filler particles in the matrix, and therefore takes different values for different types of materials. To determine α, a curve-fitting method was applied to various samples using data reported earlier [13-18]; the resulting expression for α is linear, with slope A and intercept B for each set of samples (relation (6)). Optimized values of these constants were chosen for consistency with boundary conditions (1) and (2). The values of A and B computed for the various samples using relation (6) are shown in Table 1.

To validate the modified Maxwell relation (5), several samples of HDPE and PP filled with metal particles, epoxy resin filled with SiO2, α-Al2O3 and AlN, and sandstones filled with air, n-heptane and water, with increasing filler concentration, were considered in the present computations. The input parameters were taken from previously published results [13-18]. The ETC values of the various samples were calculated using the modified Maxwell relation (5), and the comparison between predicted values and experimental results is shown in Figures 1-15.

The results for HDPE filled with metal particles are shown in Figures 1-4. The models of [1,2] predict higher ETC values at low filler concentrations, whereas our model performs better. At higher filler concentrations our model predicts only a small increase; there, the particles begin to touch each other and form conductive chains in the direction of heat flow, causing the ETC to increase rapidly. The probability of forming conductive chains is higher for smaller particles. Figures 2-4 show that the ETC values calculated by most of the existing models are too high, but our model predicts fairly well over the whole range of filler volume fractions. Slightly oxidized aluminum particles were used for the preparation of the PP/Al samples [14], while the thermal conductivity value used in our ETC computations is that of pure bulk aluminum. In reality, the thermal conductivity of these fillers is probably lower than this value and depends on the mean particle size. Therefore, at higher filler concentrations, the modified Maxwell model predicts higher ETC values than the experimental results and larger deviations occur; the model nevertheless predicts fairly well up to 50% filler concentration.

The results for epoxy resin and HDPE filled with oxides are shown in Figures 8-12. It is observed from Figures 8-10 that Maxwell's model [1] gives lower ETC values, but the modified Maxwell Equation (5) predicts better values than [16]. Figure 11 shows that the models of [1,2] predict lower ETC values, whereas our model agrees better, as is also observed in Figure 12.

The results for the ETC of sandstones filled with air, n-heptane and water are shown in Figures 13-15. Here the ETC decreases as the volume fraction of the filler phase increases, owing to the lower thermal conductivity of the filler phase; the models of [1,2] capture this trend but predict higher ETC values, while the modified Maxwell model performs better. For more heavily metal-filled composites, a non-linear increase in thermal conductivity is observed, and almost all models fail to predict the ETC in this region, because most theoretical models do not consider the size, shape and distribution of filler particles in the matrix: at high filler content, the filler particles tend to form agglomerates, conductive chains form, and the thermal conductivity increases rapidly.
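In practice, fitting α amounts to a one-parameter least-squares fit of relation (5) to measured conductivity data. Below is a minimal sketch using scipy; the data points are made up and stand in for the published measurements [13-18].

```python
import numpy as np
from scipy.optimize import curve_fit

def etc_model(phi, alpha, k_m=0.45, k_f=67.0):
    # Modified Maxwell model, Eqs. (4)-(5); k_m, k_f fixed per sample.
    f_p = alpha * phi + (1.0 - alpha) * phi**2
    return k_m * (k_f + 2*k_m + 2*f_p*(k_f - k_m)) / (
                  k_f + 2*k_m -   f_p*(k_f - k_m))

# Hypothetical measurements (volume fraction, ETC in W/m.K).
phi_exp = np.array([0.05, 0.10, 0.20, 0.30, 0.40])
k_exp   = np.array([0.55, 0.68, 1.05, 1.60, 2.50])

# p0 has one entry, so curve_fit adjusts only alpha.
(alpha_fit,), _ = curve_fit(etc_model, phi_exp, k_exp, p0=[0.5])
print(f"fitted alpha = {alpha_fit:.3f}")
```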
Conclusions

In the present paper, it is concluded that the ETC varies linearly as the filler content approaches 15% by volume in the matrix. Non-linearity sets in when the filler content exceeds 15% by volume in most materials with a high thermal conductivity ratio of the constituents. The present relation (5) contains a constant α, which may depend on various material factors such as the fractional volume of inclusions, the conductivity ratio of the constituents, and the size, shape and distribution of the inclusions, and therefore takes different values for different materials. We note that the distribution of inclusions in the matrix has strong implications for the ETC of composite materials: nearly all theoretical models assume homogeneous dispersion in the matrix, but this does not hold for most complex materials. We also note that the expression derived for F_p using the concept of non-linearity works well for a variety of materials, such as HDPE and PP filled with metal particles, epoxy resin filled with SiO2, α-Al2O3 and AlN, and sandstones filled with air, n-heptane and water. It is also concluded that, whatever approach is used, a correction term is always needed to predict correct ETC values for randomly mixed real systems; such a term is always present in the models in one form or another. Finally, in most models the correction terms are non-linear in nature when the conductivity ratio of the constituents is high.

Figure 1. Comparison of experimental and predicted values of ETC; HDPE filled with Tin.
Figure 2. Comparison of experimental and predicted values of ETC; HDPE filled with Zinc.
Figure 4. Comparison of experimental and predicted values of ETC; HDPE filled with Iron.
Figure 5. Comparison of experimental and predicted values of ETC; HDPE filled with Silicon.
Figure 6. Comparison of experimental and predicted values of ETC; Polypropylene filled with Aluminum (mean diameter 8 µm).
Figure 7. Comparison of experimental and predicted values of ETC; Polypropylene filled with Aluminum (mean diameter 44 µm).
Figure 8. Comparison of experimental and predicted values of ETC; Epoxy resin filled with SiO2.
Figure 9. Comparison of experimental and predicted values of ETC; Epoxy resin filled with Al2O3.
Figure 10. Comparison of experimental and predicted values of ETC; Epoxy resin filled with AlN.
Figure 11. Comparison of experimental and predicted values of ETC; HDPE filled with Aluminum Oxide.
Figure 13. Comparison of experimental and predicted values of ETC; Sandstones filled with Air.
Figure 14. Comparison of experimental and predicted values of ETC; Sandstones filled with n-heptane.
Figure 15. Comparison of experimental and predicted values of ETC; Sandstones filled with Water.
Cavity Casimir-Polder forces and their effects in ground state chemical reactivity

Here we present a fundamental study of how the ground-state chemical reactivity of a molecule can be modified in a QED scenario, i.e., when it is placed inside a cavity and there is strong coupling between the cavity field and vibrational modes within the molecule. We work with a model system for the molecule (the Shin-Metiu model) in which nuclear, electronic and photonic degrees of freedom are treated on the same footing. This simplified model allows the comparison of exact quantum reaction rate calculations with predictions emerging from transition state theory based on the cavity Born-Oppenheimer approach. We demonstrate that QED effects are indeed able to significantly modify activation barriers in chemical reactions and, as a consequence, reaction rates. The critical physical parameter controlling this effect is the permanent dipole moment of the molecule and how this quantity changes along the reaction coordinate. We show that the effective coupling can lead to significant single-molecule energy shifts in an experimentally available nanoparticle-on-mirror cavity. We then apply the validated theory to a realistic case (internal rotation in the 1,2-dichloroethane molecule), showing how reactions can be inhibited or catalyzed depending on the profile of the molecular dipole. Furthermore, we discuss the absence of resonance effects in this process, which can be understood through its connection to Casimir-Polder forces. Finally, we treat the case of many-molecule strong coupling and find collective modifications of reaction rates if the molecular permanent dipole moments are aligned with respect to the cavity field. This demonstrates that collective coupling can also provide a mechanism for modifying the ground-state chemical reactivity of an ensemble of molecules coupled to a cavity mode.

I. INTRODUCTION

The field of (non-relativistic) cavity quantum electrodynamics (CQED) has proved that the quantum nature of light can be exploited for many interesting applications that involve modifications of material properties in one way or another [1,2]. In this context, strong light-matter coupling is particularly appealing [3]. The regime of strong coupling is achieved when the coherent energy exchange between the excitations of a material (excitons) and of the cavity light modes is faster than the decay rate of either constituent. The resulting excitations are the well-known polaritons, which combine properties of both light and matter, leading to many interesting applications (see [4] for a recent review). In recent years, strong coupling to organic materials has received great attention for its potential to greatly influence fundamental features of the underlying organic molecules, such as their optical response [5-7], transport properties [8-12], or chemical reactivity [13-15]. In particular, the potential of polaritonic chemistry, i.e., the ability to influence the chemical structure and reactions of organic compounds through coupling to a cavity, has attracted a lot of interest [16-34]. Most of the research on polaritonic chemistry with organic molecules has dealt with electronic strong coupling. More recently, the possibility of influencing the thermally driven ground-state reactivity of organic molecules has been demonstrated by coupling the cavity to vibrational transitions of the molecules [14,35-37].
This opens a wide range of possibilities, such as cavity-enabled catalysis and manipulation of ground-state chemical reactions, due to the fact that no external input of energy is needed at all. Cavity-induced modifications of the ground state have also been studied theoretically. In particular, for model molecules without ground-state dipole moments and with only electronic dipole transitions, it has been shown that there is no collective enhancement of energy shifts [17] and, more specifically, that chemical reactions are not strongly modified even under ultrastrong collective coupling [30]. In a series of papers based on more microscopic models, Flick and co-workers have shown that ground-state properties can be significantly modified under single-molecule (ultra)strong coupling [18,25,26], but have not treated chemical reactivity. In the present work, we aim to understand cavity-induced modifications of ground-state chemistry in coupled molecule-cavity systems. It is structured as follows: In section II we present the light-matter interaction Hamiltonian for a single molecule coupled to a nanoscale cavity. After a brief discussion of the validity of this Hamiltonian, we study a simple model system, the Shin-Metiu model, and in section III obtain the cavity-modified reactivity from formally exact quantum rate calculations [38-40]. In section IV, we develop a simplified theory that allows us to understand ground-state chemical reactivity changes based on well-known concepts such as transition state theory (TST) [41,42] by exploiting the cavity Born-Oppenheimer approximation [25]. We show in section V that, to a good approximation, perturbation theory can be used to predict cavity-induced chemical changes in terms of bare-molecule ground-state properties; this also allows us to make explicit connections to electrostatic, van der Waals, and Casimir-Polder interactions. This is exploited in section VI to demonstrate that, for a realistic experimental geometry, a multimode nanoparticle-on-mirror cavity [43-45], the effective single-molecule coupling can be significant. In section VII, we study the modification of reaction rates in the 1,2-dichloroethane molecule, demonstrating the potential of a cavity to catalyze or inhibit reactions, or even to modify the equilibrium configuration of the molecule. In section VIII, we discuss in detail the dependence of chemical reaction rates on the frequency of the cavity mode. We observe that, in contrast to polariton formation, which requires the cavity photon and molecular excitations to be resonant, no such requirement exists for the change of reaction rates in the cavity. For a single molecule, as treated up to that point, the coupling strengths required to obtain significant changes in chemical reactivity correspond to the most tightly confined plasmonic nanogap cavities available experimentally [43-46]. In section IX, we thus extend our model to an ensemble of molecules and find a collective enhancement of the effect under orientational alignment of the molecular dipoles. We mention here that we do not explicitly treat the case of many molecules coupled to a cavity with a continuum of modes, i.e., the case corresponding to the experimentally used Fabry-Perot cavities with in-plane dispersion [14,37]. For the sake of simplicity, we also neglect solvent effects. While these are well known to be important in chemical reactions, their effect depends strongly on the chosen solvent and experimental setup (particularly in nanocavities).
However, we mention that the latest experimental studies indicate that solvent effects might be responsible for, or at least relevant to, the experimentally observed resonance-dependent ground-state chemical reactivity [35,36].

A. Light-matter Hamiltonian

We start from the general non-relativistic light-matter Hamiltonian of QED in minimal coupling, describing a collection of charged particles coupled to the electromagnetic (EM) field. Here and in the following, we use atomic units ($\hbar = 4\pi\epsilon_0 = m_e = 1$) unless stated otherwise:

$$\hat H = \sum_i \frac{\left[\hat{\mathbf p}_i - Q_i \hat{\mathbf A}(\hat{\mathbf r}_i)\right]^2}{2m_i} + \frac{1}{2}\sum_{i\neq j}\frac{Q_i Q_j}{\hat r_{ij}} + \frac{1}{8\pi}\int \left(\hat{\mathbf E}_\perp^2 + \hat{\mathbf B}^2\right)\mathrm{d}^3 r, \qquad (1)$$

where $\hat{\mathbf E}_\perp(\mathbf r) = -\frac{1}{c}\partial_t \hat{\mathbf A}$ is the transverse part of the electric field (with the longitudinal part responsible for the instantaneous Coulomb interaction $Q_i Q_j/\hat r_{ij}$), and we use the Coulomb gauge, $\nabla\cdot\hat{\mathbf A} = 0$. We note explicitly that here the EM operators represent free-space modes (i.e., without boundary conditions imposing a cavity structure), while the collection of charged particles (specifically, electrons and nuclei) represents both the material part of the cavity (e.g., mirrors) and the emitters (such as molecules). In particular, the cavity material together with the EM field modes will have approximately bosonic eigenmodes that can be identified as the "cavity modes" and, in general, will be given by superpositions of material and EM field excitations [47], as explicitly shown for plasmonic systems in [48]. For simplicity and generality, in the following we assume that the cavity-molecule system we are treating is well described within the quasistatic approximation, which applies when all distances in the problem are significantly smaller than the relevant wavelengths. In particular, this is a good approximation for small plasmon- and phonon-polariton nanoantennas and nanoresonators, which are the only currently available systems that achieve strong enough field concentration to obtain strong single-emitter coupling with "real" atoms or molecules [43,49-52] (as opposed to "artificial atoms" such as superconducting qubits [53-55]). In the quasistatic limit, the transversal fields are negligible, $\hat{\mathbf A} = \hat{\mathbf B} = \hat{\mathbf E}_\perp \approx 0$, and the Hamiltonian simply becomes

$$\hat H = \sum_i \frac{\hat{\mathbf p}_i^2}{2m_i} + \frac{1}{2}\sum_{i\neq j}\frac{Q_i Q_j}{\hat r_{ij}}, \qquad (2)$$

with the sums over $i$ and $j$ still including all particles in the (nano)cavity as well as the molecules. We next separate the particles into several groups: one containing the cavity material, and one for each molecule. We assume that the cavity material is "macroscopic" enough that it responds linearly to external fields [47,48,56-59], and can thus be well described by a collection of bosonic modes with frequencies $\omega_k$ and annihilation operators $\hat a_k$ (e.g., corresponding to the "instantaneous" plasmon modes in [48]). For simplicity, we first consider a single molecule comprising $n_e$ electrons and $n_n$ nuclei. The Hamiltonian then becomes

$$\hat H = \sum_{i=1}^{n_n} \frac{\hat{\mathbf P}_i^2}{2M_i} + \hat H_e(\hat{\mathbf x},\hat{\mathbf R}) + \sum_k \omega_k \hat a_k^\dagger \hat a_k + \sum_k \sum_j Q_j\, \phi_k(\hat{\mathbf r}_j)\,(\hat a_k + \hat a_k^\dagger). \qquad (3)$$

The bare molecular Hamiltonian corresponds to the first two terms: the kinetic energy of the $n_n$ nuclei and the electronic Hamiltonian. The latter includes the kinetic energy of the $n_e$ electrons and the nucleus-nucleus, electron-electron, and nucleus-electron interaction potentials. This operator depends on all the electronic and nuclear positions, $\hat{\mathbf x} = (\hat{\mathbf x}_1, \hat{\mathbf x}_2, \ldots, \hat{\mathbf x}_{n_e})$ and $\hat{\mathbf R} = (\hat{\mathbf R}_1, \hat{\mathbf R}_2, \ldots, \hat{\mathbf R}_{n_n})$, respectively. The following two terms correspond to the bosonic cavity modes and the interaction of the molecular charges (with $j$ running over both electrons and nuclei) with the electrostatic potential $\phi_k(\mathbf r)$, i.e., the Coulomb potential corresponding to the charge distribution of each cavity mode.
By performing a multipole expansion of the molecular charges, and assuming that the molecule is uncharged and sufficiently localized, this term can be well approximated by $\hat{\boldsymbol\mu}\cdot\hat{\mathbf E}(\mathbf r_m)$, i.e., the interaction of the molecular dipole with the cavity electric field (the gradient of the potential) at the position $\mathbf r_m$ of the molecule, which we write as

$$\sum_k \omega_k\, \hat q_k\, \boldsymbol\lambda_k \cdot \hat{\boldsymbol\mu}, \qquad (4)$$

where $\hat q_k = (\hat a_k + \hat a_k^\dagger)/\sqrt{2\omega_k}$ is the position operator of the harmonic oscillator, and the electric field strength is determined by $\boldsymbol\lambda_k = \lambda_k\,\hat{\boldsymbol\epsilon}_k$, with polarization vector $\hat{\boldsymbol\epsilon}_k$. The coupling constant can be related to both the single-photon electric field strength and the (position-dependent) effective mode volume of the quantized mode, $\lambda_k = \sqrt{4\pi/V_{\mathrm{eff},k}(\mathbf r_m)}$. Here, the effective EM mode volume is defined as

$$V_{\mathrm{eff},k}(\mathbf r_m) = \frac{\int \epsilon(\mathbf r)\,|\mathbf E_k(\mathbf r)|^2\,\mathrm d^3 r}{\epsilon(\mathbf r_m)\,|\mathbf E_k(\mathbf r_m)|^2},$$

although the normalization integral formally diverges for lossy modes and has to be properly generalized [48,60-62]. The proper description of the light-matter interaction Hamiltonian under (ultra)strong-coupling conditions is a very active topic of discussion in the literature [63-71]. In particular, much of this discussion centers on the importance of the so-called dipole self-energy term $\frac{1}{2}\left(\boldsymbol\lambda\cdot\hat{\boldsymbol\mu}(\hat{\mathbf x},\hat{\mathbf R})\right)^2$ that arises in the Power-Zienau-Woolley transformation, in which the interaction with the transversal field is transformed into an electric-field-dipole interaction of the same form as Eq. (4) plus the above-mentioned dipole self-energy term. As we have discussed, and as is well known in the literature on macroscopic QED [72], this term does not appear for the interaction with purely longitudinal modes that are well described within the quasistatic approximation, i.e., in situations where retardation and propagation effects of the EM fields can be neglected. Given that reaching strong or ultrastrong coupling with one (or a few) atoms or molecules requires strongly sub-wavelength mode volumes, $V_{\mathrm{eff},k} \ll (2\pi c/\omega_k)^3$, it follows that the quasistatic approximation should be applicable for most realistic cavities with few-emitter strong coupling. On the other hand, this extreme field localization can also require going beyond the point-dipole interaction, either by directly using the interaction with the full space-dependent potential $\phi_k(\mathbf r)$ [73] or by including higher multipoles in Eq. (4) [74]. Doing so also resolves the formal lack of a ground state when the computational box is made too large and no dipole self-energy term is present [66,75]. However, it should be noted that if the sum over cavity modes is truncated and the effect of all but one (or a few) modes is approximately represented by renormalizing the emitter potential (and the emitter-emitter interactions in the multiple-emitter case), it is necessary to add back an effective (collective) dipole self-interaction to avoid double-counting of modes, as explained in [65]. We note that, while we have explicitly treated a (nano)cavity within the quasistatic approximation, in which the cavity fields can be understood as due to the instantaneous Coulomb interaction between charged particles, it still makes sense to speak of the cavity modes as electromagnetic or photonic modes with an associated electric field. These modes, which physically correspond to, e.g., plasmonic or phonon-polaritonic resonances, can be seen as strongly confined photons. They are most easily obtained by solving Maxwell's equations for a given geometry, either numerically or with approaches such as transformation optics [76].
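To put numbers to this relation, the conversion between effective mode volume and coupling strength, $\lambda_k = \sqrt{4\pi/V_{\mathrm{eff},k}}$ in atomic units, can be scripted directly. The sketch below reproduces the coupling values quoted later in the text for $V_{\mathrm{eff}} \approx 40$ nm³ and $\approx 1.9$ nm³; it is a unit-conversion illustration only.

```python
import math

BOHR_NM = 0.052917721  # Bohr radius in nm

def coupling_from_mode_volume(v_eff_nm3):
    """Coupling strength lambda (a.u.) from effective mode volume (nm^3),
    using lambda = sqrt(4*pi / V_eff) with V_eff in atomic units."""
    v_eff_au = v_eff_nm3 / BOHR_NM**3  # nm^3 -> a.u. of volume
    return math.sqrt(4 * math.pi / v_eff_au)

print(coupling_from_mode_volume(40.0))  # ~0.007 a.u. (nanogap plasmon mode)
print(coupling_from_mode_volume(1.9))   # ~0.031 a.u. (effective multimode value)
```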
Only in the limit of extremely small nanocavities does it become possible, and sometimes necessary, to treat them explicitly as a collection of nuclei and electrons using ab initio techniques [77-79]. In the following, we will first treat a cavity in which only a single mode has significant coupling to the molecule (in appendix A, we discuss some systems in which this is a valid approximation). Since the interaction depends on the inner product between the electric field and the total dipole moment $\hat{\boldsymbol\mu} = \sum_i^{n_n} Z_i \hat{\mathbf R}_i - \sum_i^{n_e} \hat{\mathbf x}_i$, only the projection $\hat\mu_\epsilon = \hat{\boldsymbol\epsilon}\cdot\hat{\boldsymbol\mu}$ is relevant, and we only have to deal with scalar quantities. For the sake of simplicity, we write $\hat\mu_\epsilon \to \hat\mu$. We also assume perfect alignment between the molecule and the field unless indicated otherwise.

B. Molecular model

In order to study changes in ground-state chemical reactivity induced by (vibrational) strong coupling, we first treat a simple molecular model system that is numerically fully solvable and has been extensively studied in model calculations of chemical reaction rates: the Shin-Metiu model [80]. It treats three nuclei and one electron moving in one dimension, as presented in Fig. 1(a). Two of the nuclei are separated by a distance L and fixed in place, while the remaining nucleus and the electron are free to move. The repulsive interaction of the mobile nucleus with the fixed ones is given by a normal Coulomb potential, while the attractive electron-nucleus interaction is given by softened Coulomb potentials, $V_{en}(r_i) = Z\,\mathrm{erf}(r_i/R_c)/r_i$, where $r_i$ is the distance between the electron and nucleus $i$, and $R_c$ is the softening parameter. The system has two stable nuclear configurations (minima of the ground-state Born-Oppenheimer surface) that represent two different isomers of a charge- or proton-transfer reaction. Given that the electronic excitation energies, and thus the nonadiabatic couplings between different potential energy surfaces, can be varied easily by changing the parameters of the Shin-Metiu model, it has been extensively studied in the context of correlated electron-nuclear dynamics [81,82], as well as in the context of polariton formation under strong coupling [25,26]. The parameters chosen throughout the present work are Z = 1, L = 10 Å ≈ 18.9 a.u., M = 1836 a.u., and R_c = 1.5 Å ≈ 2.83 a.u. (for all three nuclei), resulting in the Born-Oppenheimer potential energy surfaces shown in Fig. 1(b), with negligible nonadiabatic coupling between electronic surfaces. The figure also shows the first few vibrational eigenstates close to each minimum (tunneling through the central energy barrier is negligible for these states, so that they can be chosen to be localized on the left or right, respectively). In Fig. 1(c) we show the ground-state permanent dipole moment $\mu_g(R) = \langle g|\hat\mu(R)|g\rangle$. Below we demonstrate that, to a good approximation, the ground-state potential energy surface and dipole moment are sufficient to describe the change in the molecular ground-state structure and chemical reactivity due to the cavity. Additionally, we note here that the light-matter coupling strength for the formation of vibro-polaritons, i.e., the hybridization of the photon mode with the vibrational transitions of the molecule, is determined by the transition dipole moment and frequency of the quantized vibrational levels of the molecule; within a lowest-order expansion around the equilibrium position, this transition dipole is proportional to the derivative of the permanent dipole moment [83].
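A numerical sketch of how such Born-Oppenheimer surfaces are obtained is given below: for each nuclear position R, the 1D electronic Hamiltonian is diagonalized on a grid. The grid sizes, box length, and the choice of which nucleus-nucleus repulsion terms to include are illustrative assumptions, not the converged settings behind the results in the text.

```python
import numpy as np
from scipy.special import erf

# Shin-Metiu parameters (atomic units), as given in the text.
Z, L, RC = 1.0, 18.9, 2.83

def v_en(r):
    # Softened electron-nucleus attraction, -Z*erf(|r|/Rc)/|r|.
    r = np.where(np.abs(r) < 1e-8, 1e-8, np.abs(r))
    return -Z * erf(r / RC) / r

def bo_surfaces(R_vals, x_max=25.0, n_x=401, n_states=2):
    """Lowest electronic BO surfaces V_i(R) by grid diagonalization."""
    x = np.linspace(-x_max, x_max, n_x)
    dx = x[1] - x[0]
    # Kinetic energy -0.5 d^2/dx^2 via second-order finite differences.
    T = (np.diag(np.full(n_x, 1.0 / dx**2))
         - np.diag(np.full(n_x - 1, 0.5 / dx**2), 1)
         - np.diag(np.full(n_x - 1, 0.5 / dx**2), -1))
    surfaces = []
    for R in R_vals:
        # Electron attracted to the fixed nuclei at +/- L/2 and the mobile
        # nucleus at R; bare Coulomb repulsion of mobile vs. fixed nuclei.
        V = v_en(x - L / 2) + v_en(x + L / 2) + v_en(x - R)
        V_nn = Z**2 / abs(R - L / 2) + Z**2 / abs(R + L / 2)
        eps = np.linalg.eigvalsh(T + np.diag(V))[:n_states]
        surfaces.append(eps + V_nn)
    return np.array(surfaces)

R_grid = np.linspace(-6.0, 6.0, 25)
V = bo_surfaces(R_grid)
print(V[:, 0])  # ground-state BO surface V_0(R)
```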
III. QUANTUM REACTION RATES

In this section, we analyze the cavity-induced change in the rate of the ground-state proton-transfer reaction from the left minimum at R ≈ −4 a.u. to the right one. Here we take advantage of the simplicity of the Shin-Metiu model to compute the exact quantum reaction rate without any approximations, which automatically takes into account all quantum effects such as tunneling and zero-point energy. We follow the approach of Miller [40], based on the correlation function formalism introduced in [38,39]. This states that the rate of a molecular reaction is given by

$$k(T)\, Q_r(T) = \int_0^\infty C_{ff}(t)\,\mathrm dt, \qquad (5)$$

where $Q_r(T) = \mathrm{tr}[\exp(-\beta \hat H)]$, with $\beta^{-1} = k_B T$, is the partition function of the reactants at temperature T, and $C_{ff}(t)$ is the flux-flux autocorrelation function. This correlation function is computed as the trace of a product of operators,

$$C_{ff}(t) = \mathrm{tr}\!\left[\hat F\, \hat U(t_c)\, \hat F\, \hat U^\dagger(t_c)\right], \qquad (6)$$

where $\hat U(t_c) = \exp(-i\hat H t_c)$, with $t_c = t - i\beta/2$, is the complex-time evolution operator and $\hat F$ represents the symmetrized flux operator

$$\hat F = \frac{1}{2M}\left[\hat P\,\delta\big(s(\hat R)\big) + \delta\big(s(\hat R)\big)\,\hat P\right]. \qquad (7)$$

Here, $\hat P$ is the nuclear momentum operator, and the surface dividing the reactant and product states is defined by the zeros of the function $s = s(R)$. In our case, the line that separates products from reactants is R = 0, i.e., s(R) = R. The flux-flux autocorrelation function describes the temporal flux of positive-momentum probability through the dividing surface of a thermally averaged initial state (which is accounted for by the thermal part of the $\hat U(t_c)$ operator). Negative values of $C_{ff}(t)$ indicate recrossing of the dividing surface in the opposite direction, thus contributing to a rate decrease. In order to obtain the rates of the coupled electronic-nuclear-photonic system, we discretize all three degrees of freedom, using a finite-element discrete variable representation [84] for x and R, as well as the Fock basis for the cavity photon mode. This allows us to diagonalize the full Hamiltonian, Eq. (3), and thus to trivially calculate Eq. (6) for arbitrary time t. For numerical efficiency, we perform the diagonalization in steps, first diagonalizing the bare molecular Hamiltonian, performing a cutoff in energy, and then diagonalizing the coupled system in this basis. We have carefully checked convergence with respect to all involved grid and basis-set parameters and cutoffs. As is well known [80], due to the absence of dissipation in the model, at large times the correlation function becomes negative and oscillates around zero, corresponding to the wave packet that has crossed the barrier returning back through the dividing surface after reflection at the other side of the potential (at R ≈ 6 a.u.). However, in a real system the reaction coordinate is coupled to other vibrational and solvent degrees of freedom that dissipate the energy and prevent recrossing. To represent this, we choose a final time $t_f$ around which the correlation function stays close to zero for a while and only integrate up to that time in Eq. (5). The time chosen, $t_f$ = 35 fs, corresponds to typical dissipation times in condensed-phase reactions and is similar to values chosen in the cavity-free case [80]. We now study the cavity-modified chemical reaction rates of the hybrid system for different coupling strengths λ. We note that a coupling strength of λ = 0.035 a.u. corresponds to a Rabi splitting of $\Omega_R \approx 0.10\,\omega_\nu$ for the first vibrational transition.
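In an eigenbasis of the full Hamiltonian, Eqs. (5)-(7) reduce to matrix algebra. The following sketch assumes that eigenvalues E and eigenvectors (columns of U), plus a matrix representation F of the flux operator, are already available, e.g. from a grid diagonalization like the one above. It is a schematic illustration of the formalism, not the production code behind the reported rates; in particular, the partition function here is not restricted to the reactant side.

```python
import numpy as np

def flux_flux_rate(E, U, F_grid, beta, t_grid):
    """Quantum rate from the flux-flux autocorrelation function, Eqs. (5)-(6).

    E      : eigenvalues of the full Hamiltonian (a.u.)
    U      : matrix of eigenvectors (columns)
    F_grid : flux operator in the original (grid) basis
    beta   : 1/(k_B T) in a.u.
    t_grid : times at which C_ff is evaluated (a.u.)
    """
    F = U.conj().T @ F_grid @ U          # flux operator in the eigenbasis
    Q_r = np.sum(np.exp(-beta * E))      # partition function (unprojected approx.)
    c_ff = []
    for t in t_grid:
        # U(t_c) with t_c = t - i*beta/2 is diagonal in the eigenbasis.
        u = np.exp(-1j * E * (t - 0.5j * beta))
        M = F * np.outer(u, u.conj())    # M_nm = F_nm * u_n * conj(u_m)
        c_ff.append(np.real(np.einsum("nm,mn->", M, F)))
    c_ff = np.array(c_ff)
    return np.trapz(c_ff, t_grid) / Q_r, c_ff

# Toy usage with a random Hermitian Hamiltonian and Hermitian flux operator:
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 40))
E, U = np.linalg.eigh((X + X.T) / 2)
F_grid = 1j * (X - X.T) / 2
rate, c_ff = flux_flux_rate(E, U, F_grid, beta=1.0,
                            t_grid=np.linspace(0.0, 5.0, 300))
print(rate)
```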
For the sake of comparison, we mention that single-molecule electronic strong coupling has been achieved with mode volumes of ∼ 40 nm³ [43], corresponding to λ ≈ 0.007 a.u., and there are indications that effective sub-nm³ mode volumes could be reached due to single-atom hot spots [44,45], which would allow the coupling strength to reach values of up to λ ≈ 0.05 a.u. Fig. 2 shows the rates in an Arrhenius plot, i.e., the logarithm of the rate divided by the temperature as a function of the inverse temperature. The straight lines in Fig. 2 confirm that the hybrid light-matter system follows the behavior described by the Eyring equation [41], which connects the rate of a chemical reaction with the energy barrier $E_b$ that separates reactants from products:

$$k = \kappa\,\frac{k_B T}{2\pi\hbar}\,e^{-E_b/k_B T}. \qquad (8)$$

Here, κ is a transmission coefficient, typically taken equal to one if nonadiabatic effects can be neglected close to the transition state. This equation follows from classical transition state theory [41,42] and is often used in the context of chemical kinetics. We thus observe that, even under vibrational strong coupling and the accompanying formation of vibro-polaritons, i.e., hybrid light-matter excitations, the reaction rate can still be described by an effective potential energy barrier. However, the effective height of the energy barrier is modified through the CQED effect of strong coupling, leading (for the studied model) to significantly reduced reaction rates. Although we here treat a single-mode and single-molecule system, these general observations agree with experimental studies [14,35,37]. However, in order to gain further insight into this effect and enable calculations beyond simple model systems, it would be desirable to have a theory that is not based on full quantum rate calculations (which require the calculation of nuclear dynamics in 3N − 6 dimensions). In the next section, we show that this can be achieved by applying (classical) transition state theory to the combined photonic-nuclear potential energy surfaces provided by the cavity Born-Oppenheimer approximation [26].

IV. CAVITY BORN-OPPENHEIMER APPROXIMATION

The starting point of the cavity Born-Oppenheimer approximation (CBOA) [25,26] is to write the cavity mode energy as an explicit harmonic oscillator, as discussed in section II. The cavity photon degree of freedom is then treated as nuclear-like, and its "kinetic energy" $\hat p^2/2$ is grouped with the nuclear kinetic energy operators $\sum_i \hat P_i^2/(2M_i)$ before performing the standard Born-Oppenheimer approximation. This leads to a set of electronic CBO potential energy surfaces (PES) $\tilde V_i(R, q)$, parametric in both the nuclear coordinates R and the photonic coordinate q, obtained by diagonalizing the new electronic Hamiltonian. Conceptually, the inclusion of the cavity mode thus simply corresponds to a single additional nuclear-like degree of freedom. The CBOA now consists in neglecting nonadiabatic couplings between different PES (i.e., neglecting the action of the nuclear and photonic kinetic operators on the electronic states) and assuming that photonic and nuclear "motion" proceed on each PES independently. Due to the formal equivalence between nuclear and photonic degrees of freedom within this picture, all the standard results of BO theory apply. In particular, the CBOA is a good approximation when the separation between the PES is larger than the typical kinetic energies of the nuclei and the photonic mode.
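The exponential sensitivity of Eq. (8) to the barrier height is what makes modest cavity-induced shifts chemically significant. A minimal numerical sketch (with κ = 1 and an illustrative 0.30 eV reference barrier) is:

```python
import numpy as np

K_B_EV = 8.617333e-5   # Boltzmann constant in eV/K
H_EV_S = 4.135668e-15  # Planck constant in eV*s

def eyring_rate(barrier_ev, temp_k, kappa=1.0):
    """TST (Eyring) rate, Eq. (8), in s^-1."""
    return kappa * K_B_EV * temp_k / H_EV_S * np.exp(-barrier_ev / (K_B_EV * temp_k))

# Relative rate change from a cavity-induced barrier increase of 0.07 eV at
# 300 K (the order of magnitude discussed for the nanoparticle-on-mirror
# cavity below): the rate drops by more than an order of magnitude.
print(eyring_rate(0.30 + 0.07, 300.0) / eyring_rate(0.30, 300.0))  # ~0.07
```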
The case of vibrational strong coupling, where the photon energy is comparable to vibrational excitation energies, fulfills exactly this condition. The accompanying Rabi splitting can then be understood simply as normal-mode hybridization on the nuclear-photonic potential energy surface, as already noted in the original article demonstrating vibrational strong coupling [83] and discussed in more detail in appendix B. In the context of cavity-modified chemical reactivity in the ground state, the formal equivalence between photonic and nuclear motion in the CBOA in particular allows standard tools such as transition state theory to be applied to obtain an estimate of reaction rates. TST implies that it should only be necessary to calculate the effective energy barrier for the reaction within the ground-state CBO surface. We test this for the model studied in section III, i.e., the Shin-Metiu model coupled to a cavity mode on resonance with the first vibrational transition. The two-dimensional PES $\tilde V_0(R, q)$ is shown in Fig. 3(a) for a coupling strength of λ = 0.02 a.u., which corresponds to a vibrational Rabi splitting of $\Omega_R \approx 0.05\,\omega_\nu$. The second panel, Fig. 3(b), shows the minimum along q of this surface as a function of R, i.e., along the path indicated by the curved dashed line in Fig. 3(a), for a set of coupling strengths λ that induce Rabi splittings of up to $\Omega_R = 0.1\,\omega_\nu$. This path closely corresponds to the minimum energy path of the proton-transfer reaction within the CBOA. As the coupling is increased, the minima become deeper, while the transition state (TS) at R = 0 stays unaffected. This leads to an effective increase of the reaction barrier, $\tilde E_b = \tilde V_0(R_{TS}, q_{TS}) - \tilde V_0(R_{min}, q_{min})$, as shown in Fig. 3(c). This panel also shows the corresponding change in the rate predicted by Eq. (8). The full lines correspond to the energy barrier calculated within the CBOA (blue) and the corresponding rate (red) according to TST, while the dashed lines show the effective energy barrier $E_b^{(\mathrm{eff})}$ extracted from the fit to the Arrhenius plot in Fig. 2 and the corresponding change in the rate obtained from the full quantum rate calculation in section III. As can be seen, the effective and CBOA energy barriers agree very well, with just an approximately constant overestimation of the barrier in the CBOA due to quantum effects such as zero-point energy and tunneling. This leads to excellent agreement between the change of the reaction rate obtained from the full quantum calculation and the CBOA-TST prediction. As expected from our previous discussion, the reaction rate of the hybrid cavity-molecule system decreases dramatically as the coupling increases, due to the increase of the energy barrier height. Finally, we also calculate the CBOA energy barrier corrected by $\Delta_{zp}$, the difference between the zero-point vibrational energies at the minimum and transition states as obtained from the Hessian of the PES (disregarding the direction of negative curvature at the TS). This is shown as a dash-dotted line in Fig. 3(c), and it considerably improves the absolute agreement with the effective barrier extracted from the full quantum rate calculations. While we have up to now worked within a single-mode model, the CBO approximation actually makes it straightforward to treat multiple photonic modes. The ground-state PES then depends parametrically on multiple photonic coordinates $q_k$, one for each mode, just as a realistic molecule depends on multiple nuclear positions $R_i$.
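The CBOA-TST procedure can be illustrated with a toy surface. The sketch below builds $\tilde V_0(R,q) = V_0(R) + \omega_c^2 q^2/2 + \omega_c q \lambda \mu_0(R)$ (the lowest-order CBO surface) for an assumed quartic double well and a linear dipole profile, both purely illustrative stand-ins for the Shin-Metiu curves, then minimizes over q for each R and extracts the barrier.

```python
import numpy as np

# Illustrative stand-ins for the Shin-Metiu ground-state PES and dipole:
V0  = lambda R: 0.01 * (R**2 - 16.0)**2 / 16.0   # double well, minima at R = +-4
mu0 = lambda R: 1.5 * R                          # dipole odd in R (zero at the TS)

def cbo_barrier(lam, omega_c=0.005):
    """Barrier on the ground-state CBO surface, minimized over q."""
    R = np.linspace(-6.0, 6.0, 601)
    q = np.linspace(-80.0, 80.0, 801)
    Rg, qg = np.meshgrid(R, q, indexing="ij")
    V = V0(Rg) + 0.5 * omega_c**2 * qg**2 + omega_c * qg * lam * mu0(Rg)
    V_min_q = V.min(axis=1)                  # minimum-energy path along q
    i_ts = np.argmin(np.abs(R))              # transition state at R = 0
    return V_min_q[i_ts] - V_min_q[:i_ts].min()

for lam in (0.0, 0.01, 0.02):
    print(f"lambda = {lam:5.3f}: E_b = {cbo_barrier(lam):.5f} a.u.")
```

Consistent with the text, the barrier grows with λ here because the minima (where the dipole is large) are stabilized while the zero-dipole transition state is untouched.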
Similarly, the adiabatic surfaces are no harder to calculate than in the single-mode case, and minimization strategies can rely on the same approaches used in "traditional" quantum chemistry. We note that, for a general cavity, the mode parameters can be obtained either by explicitly quantizing the modes (which is in general a difficult proposition) or, alternatively, by rewriting the spectral density of the light-matter coupling (proportional to the EM Green's function) as a sum of Lorentzians [76,85-87].

V. PERTURBATION THEORY

As we have seen, the cavity Born-Oppenheimer approximation provides a convenient picture in which cavity-induced changes in chemical reactivity are evaluated from energy barriers in electronic PES that are parametric in nuclear and photonic coordinates. In particular, the interaction term $\omega_c q\,\boldsymbol\lambda\cdot\hat{\boldsymbol\mu}$, with q a parameter, is equivalent to that obtained from applying a constant external electric field. The CBO PES for arbitrary molecules can thus be calculated with standard quantum chemistry codes. However, obtaining the barrier in general still requires minimization of the molecular PES along the additional photon coordinate q (or coordinates $q_k$, if multiple modes are treated). If the coupling is not too large and the relevant values of q are small enough, the CBO ground-state PES can instead be obtained within perturbation theory; up to second order in λ it is given by

$$\tilde V_0(R, q) \approx V_0(R) + \frac{\omega_c^2 q^2}{2}\left[1 - \lambda^2 \alpha_0(R)\right] + \omega_c q\,\lambda\,\mu_0(R), \qquad (11)$$

where $V_0(R)$ and $\mu_0(R)$ are the bare-molecule ground-state PES and dipole moment, respectively, while $\alpha_0(R)$ is the ground-state static polarizability [88], which encodes the effect of the excited electronic levels,

$$\alpha_0(R) = 2\sum_{m>0} \frac{|\langle m|\hat\mu|0\rangle|^2}{V_m(R) - V_0(R)}. \qquad (12)$$

Obtaining $\tilde V_0(R,q)$ then just requires the calculation of the bare-molecule ground-state properties $V_0(R)$, $\mu_0(R)$, and $\alpha_0(R)$. In addition to providing an explicit expression for the CBO ground-state PES in terms of bare-molecule ground-state properties, the simple analytical dependence on q in Eq. (11) allows us to go one step further and obtain explicit expressions for the local minima and saddle points (i.e., transition states). In these configurations, the conditions $\partial_q \tilde V_0(R, q) = \partial_R \tilde V_0(R, q) = 0$ are satisfied. This yields a set of coupled equations that can be solved in order to find the configuration of the new critical points along the reaction path. The first equation gives the explicit condition

$$q_{min}(R) = -\frac{\lambda\,\mu_0(R)}{\omega_c\left[1 - \lambda^2 \alpha_0(R)\right]}, \qquad (13)$$

which can be used to obtain the potential profile along the minimum in q,

$$\tilde V_0(R) = V_0(R) - \frac{\lambda^2}{2}\,\mu_0^2(R), \qquad (14)$$

where we have dropped terms of order $\lambda^4$, since the perturbation-theory PES, Eq. (11), is only accurate to second order. This shows that the energy barrier on the CBO surface (within second-order perturbation theory) can be calculated directly from the bare-molecule potential and permanent dipole moment. In Fig. 4, we analyze the validity of Eq. (14) for computing the barrier height within the Shin-Metiu model. It can be observed that perturbation theory works quite well over the whole range of couplings, with a relative error in the cavity-induced change of the energy barrier of about 10% for the largest considered couplings. Due to the exponential dependence of the rates on the barrier height, this corresponds to an appreciable error in the rate constant, but it still provides a reasonable estimate. Note that, in the case of the Shin-Metiu model, the error in the energy barrier stems entirely from the change at the minimum configuration, as the transition state has zero dipole moment by symmetry and is not affected by the cavity.
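Equation (14) makes the barrier change a one-line computation once the bare dipole is known at the stationary points. A sketch, with dipole values that are stand-ins rather than the actual Shin-Metiu numbers:

```python
def barrier_change(lam, mu_ts, mu_min):
    """Cavity-induced change of the barrier from Eq. (14):
    dE_b = -(lam^2 / 2) * (mu_ts^2 - mu_min^2), in atomic units."""
    return -0.5 * lam**2 * (mu_ts**2 - mu_min**2)

# Transition state with zero dipole (by symmetry) and a minimum with an
# assumed dipole of 6 a.u.: the barrier grows quadratically with the coupling.
for lam in (0.01, 0.02, 0.035):
    print(lam, barrier_change(lam, mu_ts=0.0, mu_min=6.0))
```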
It is interesting to point out that Eq. (14) closely resembles the expression obtained in electric-field catalysis, where an external voltage is applied [89], or the electrostatic shifts provided by some catalysts [90]. This strategy exploits the Stark effect, i.e., the energy shift observed in the presence of a static electric field, to induce changes in the energy of the transition state relative to the minimum configuration. As noted before, the CBOA corresponds to treating the influence of the cavity through an adiabatic parameter q that determines the electric field strength. However, instead of being externally imposed, in our case the effective field, determined by Eq. (13), is the one induced in the cavity by the permanent dipole moment of the molecule itself. This also lends itself to an electrostatic interpretation of the effect. In addition to the minimum energy barrier of the CBO PES itself, the effective energy barrier is also affected by the zero-point energy due to the quantization of nuclear and photonic motion (see Fig. 3). We can obtain its cavity-induced shift within perturbation theory by using Eq. (13) to rewrite Eq. (11) as

$$\tilde V_0(R, q) = V_0(R) - \frac{\lambda^2}{2}\mu_0^2(R) + \frac{\omega_{\mathrm{eff}}^2(R)}{2}\left[q - q_{min}(R)\right]^2, \qquad (15)$$

where $\omega_{\mathrm{eff}}(R) = \omega_c\sqrt{1 - \lambda^2\alpha_0(R)}$, such that the photonic zero-point energy $\omega_{\mathrm{eff}}(R)/2$ is decreased due to the polarizability of the molecule. We note that this only accounts for the quantization of the photonic motion along q. As we show in appendix B, close to a local minimum at $R_0$ there is an additional correction due to the vibrational contribution to the molecular polarizability, which to second order is proportional to the square of the vibrational Rabi splitting $\Omega_R$, the on-resonance Rabi splitting discussed in section II. As can be appreciated from Fig. 3, the contributions due to zero-point (photonic and vibrational) fluctuations contribute only negligibly to the change in reaction rate in the Shin-Metiu model. In general, a significant change of polarizability (either electronic or vibrational, which can be comparable in some molecules [91-93]) from the equilibrium to the transition-state configuration could lead to effects as large as those of a change in the permanent dipole moment, especially if the cavity frequency $\omega_c$ is relatively large. However, it can be estimated that the vibrational contribution to the zero-point energy shift is negligible under conditions typical for vibrational strong coupling. To be precise, at resonance $\omega_c = \omega_\nu$, this contribution reduces to $-\Omega_R^2/(8\omega_\nu)$. Even for a relatively large vibro-polariton Rabi splitting of $\Omega_R \approx 0.2\,\omega_\nu$ [83,94,95], this contribution is of the order of $10^{-2}\,\omega_\nu$, and thus small compared to typical barrier heights. Finally, we note that the energy shifts above can be straightforwardly generalized to the case of multiple cavity modes within second-order perturbation theory. As can easily be verified, this simply leads to a sum over modes k, giving a final energy shift

$$\Delta \tilde V_0(R) = -\sum_k \lambda_k^2\left[\frac{\mu_0^2(R)}{2} + \frac{\omega_k\,\alpha_0(R)}{4}\right]. \qquad (16)$$

This general expression, which is just the second-order energy correction due to the coupling to a set of cavity modes within the CBO, corresponds to the well-known Casimir-Polder energy shift [96]. The additional CBO approximation, in which nonadiabatic transitions between electronic surfaces are neglected, amounts to assuming that the relevant cavity frequencies $\omega_k$ are much smaller than the electronic excitation energies $V_m(R) - V_0(R)$, such that only the (electronic) zero-frequency polarizability $\alpha_0(R)$ appears in the second term.
In contrast, the first term depends only on the ground-state molecular permanent dipole moment, $\mu_0 = \langle 0|\hat\mu|0\rangle$, which does not involve electronically excited states; for this term the CBOA thus does not amount to an additional approximation. In appendix C, we demonstrate that, in the case where the cavity can be approximated as a point dipole (valid for a sufficiently small nanoparticle), the perturbative energy shifts obtained here correspond exactly to van der Waals forces [97], with the first term being the Debye force due to the interaction between the permanent molecular dipole and the induced nanoparticle dipole, and the second term the London force due to the interaction between fluctuating dipoles. Under the point-dipole approximation, the sum over cavity modes for the Debye force can again be rewritten in terms of the zero-frequency polarizability of the nanoparticle. Eq. (16) is general for any kind of molecular reaction, as long as the light-matter coupling is not too large. It demonstrates that the most relevant bare-molecule properties determining cavity-induced changes of ground-state chemical reactions are the permanent dipole moment and polarizability close to the equilibrium, $\mu_0(R_0)$ and $\alpha_0(R_0)$, and transition-state, $\mu_0(R_{TS})$ and $\alpha_0(R_{TS})$, configurations, and not the transition dipole moment of the vibrational excitation close to equilibrium, $\mu_\nu \propto \mu_0'(R_0)$, which determines the Rabi splitting. In addition to changing reaction barriers, it should be noted that the cavity-induced modification could potentially lead to a plethora of diverse chemical effects, such as a change of the relative energy of different (meta)stable ground-state configurations, and thus a change of the most stable configuration, or even the creation or disappearance of stable configurations. Furthermore, depending on the particular properties of the molecule, the cavity-induced change in the energy barriers can lead either to suppression or to acceleration of chemical reactions.

VI. MULTIMODE CAVITY: NANOPARTICLE-ON-MIRROR

To demonstrate that the effects predicted above can be significant in realistic systems, we treat a nanoparticle-on-mirror cavity with parameters taken from the experiment in [43]. This consists of a spherical metallic nanoparticle (radius R = 20 nm) separated by a small gap from a metallic plane; see the inset of Fig. 5. In this system, there is a series of multipole modes coupled to the molecule [76], with nontrivial behavior. Although several strategies can be employed to obtain the quantized light modes of this system [45,76], we instead exploit the fact that the dominant contribution found above is due to Debye-like electrostatic forces induced by the permanent molecular dipole, and thus simply solve the electrostatic problem. To be precise, we calculate the energy shift of a permanent dipole in this cavity as obtained from its interaction with the field it induces in the cavity itself. Due to the simple geometric shapes involved (a sphere and a plane), this can be achieved by the technique of image charges and dipoles (see appendix D for details of the calculation). We furthermore rely again on perturbation theory, i.e., we assume that the molecular rearrangement due to the self-induced field is negligible. Within this approximation, the energy shift obtained from the purely electrostatic calculation is equivalent to the term proportional to $\mu_0^2$ in Eq. (16). The corresponding change $\Delta E_b$ in the height of the energy barrier for the Shin-Metiu molecule is shown in
Fig. 5 as a function of the gap size (as a point of reference, the estimated gap size in [43] is 0.9 nm). We find that the change in the energy barrier can be significant, corresponding to a change of the reaction rate by an order of magnitude or more (cf. Fig. 4). For comparison, the figure also shows the effective coupling strength $\lambda_{\mathrm{eff}} = \sqrt{\sum_k \lambda_k^2}$ corresponding to each gap size. This value corresponds to the coupling strength of a single-mode cavity that would give the same total energy shift as obtained in this realistic multimode cavity. We note that we have here treated a perfect spherical nanoparticle and did not include atomic-scale protrusions, which have been found to lead to even larger field confinement due to atomic-scale lightning-rod effects [44,45,98]. For the experimental gap size of 0.9 nm, the effective coupling still becomes as large as $\lambda_{\mathrm{eff}} \approx 0.031$ a.u., corresponding to $V_{\mathrm{eff}} = 4\pi/\lambda_{\mathrm{eff}}^2 \approx 1.9$ nm³. This corresponds to a change in the energy barrier of $\delta E_b \approx 0.07$ eV for the Shin-Metiu model within second-order perturbation theory, which starts to break down at these couplings, as we saw previously in Fig. 4. This large effective coupling demonstrates the importance of the multimode nature of these cavities and of the contribution of optically dark modes, as the "bright" nanogap plasmon mode seen in scattering spectra has an estimated mode volume of ≈ 40 nm³.

VII. REALISTIC MOLECULE: 1,2-DICHLOROETHANE

In the following, we apply the CBOA-TST theory to the internal rotation of 1,2-dichloroethane. In order to obtain the ground-state CBO surface under strong light-matter coupling, we calculate the (ground- and excited-state) bare-molecule potential energy surfaces and the permanent and transition dipole moments along a scan of the rotation angle (defined as the Cl-C-C-Cl dihedral angle). For simplicity, we use the relaxed ground-state configuration of the bare molecule for each rotation angle, i.e., we neglect cavity-induced changes in degrees of freedom other than the internal rotation angle. The molecular properties are obtained from density functional theory calculations with the B3LYP [99] hybrid exchange-correlation functional and the 6-31+G(d) basis set. Excited states were computed with time-dependent density functional theory within the Tamm-Dancoff approximation [100]. All calculations were performed with the TeraChem package [101,102]. The rather simple 1,2-dichloroethane molecule presents several characteristic configurations along the rotation of the chlorine atoms around the axis defined by the carbon-carbon bond (see top of Fig. 6). It thus constitutes an excellent model system to illustrate several possible cavity-induced effects. In Fig. 6(a) we present the calculated ground-state energy landscape and dipole moment, with some relevant configurations shown at the top. Analogously to the Shin-Metiu case, we present the path of minimum energy along q in Fig. 6(b), here calculated within perturbation theory, Eq. (14). We have explicitly checked that the contribution due to London forces is negligible here as well, and focus on the Debye-like contribution in the following. We see that the most stable configuration (θ = 180°) shows no change, due to the absence of a permanent dipole moment, while the most unstable one presents a large energy shift. Therefore, the different energy barriers of the system, plotted against the coupling strength in Fig. 6(c), are altered significantly.
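For a rough feel of the magnitude of such self-induced electrostatic (Debye-type) shifts, one can consider the simplest image problem: a point dipole at distance d above a perfect mirror, for which the image interaction gives $W = -\mu^2(1 + \cos^2\theta)/(16\,d^3)$ in atomic units. The sketch below uses this textbook plane-mirror formula as a crude stand-in for the full sphere-plus-plane image series of appendix D; the dipole values and distances are hypothetical, so the numbers are only indicative of the scale.

```python
import numpy as np

NM_TO_AU = 1.0 / 0.052917721  # nm -> Bohr
HARTREE_TO_EV = 27.211386

def mirror_dipole_shift(mu_au, d_nm, theta=0.0):
    """Electrostatic energy of a point dipole above a perfect mirror,
    W = -mu^2 (1 + cos^2 theta) / (16 d^3), in atomic units."""
    d = d_nm * NM_TO_AU
    return -mu_au**2 * (1.0 + np.cos(theta)**2) / (16.0 * d**3)

# Barrier change = shift at the transition state minus shift at the minimum,
# with assumed dipoles mu_TS = 0 and mu_min = 6 a.u., dipole normal to the
# mirror, and the molecule placed at mid-gap for various gap sizes.
for d_nm in (0.45, 0.9, 1.8):
    dE = mirror_dipole_shift(0.0, d_nm) - mirror_dipole_shift(6.0, d_nm)
    print(f"d = {d_nm:4.2f} nm: dE_b = {dE * HARTREE_TO_EV:6.3f} eV")
```

Even this single-mirror toy model yields barrier changes of a few hundredths to a few tenths of an eV at sub-nm distances, the same scale as the 0.07 eV quoted above for the full calculation.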
Here we compare the energy barriers as predicted by perturbation theory (dashed lines) with the ones from a full diagonalization of the electronic Hamiltonian within the CBOA (full lines). In order to perform a full calculation, we have calculated the electronic potential energy surfaces and the full dipole moment operator for a basis of 17 electronic states. We also indicate the points at which the coupling leads to important changes in the relative rates calculated with TST, i.e., the coupling/energy at which we achieve either suppression (k̃/k = 0.5) or enhancement (k̃/k = 1.5 or 2). We see that in the case of perturbation theory (triangles) the energy changes are slightly underestimated, and thus larger couplings are needed to reach the same rate change as in the full calculation (circles). As can be clearly seen, this still relatively simple molecule shows several different kinds of phenomena. We see that the reaction rate out of the global minimum at θ = 180°, corresponding to E_3, is increased. On the other hand, E_1 increases, and the local minimum situated at θ = 70° is thus stabilized. Fig. 6(b) suggests that this effect could potentially become more dramatic for larger couplings than treated here, as θ = 70° could become the new global minimum of the system. Finally, it is worth noting that the locations of the minima in energy also change for larger couplings. This shift is most noticeable for the minimum at θ = 70°, which moves to θ ≈ 68° for λ = 0.05 a.u.

VIII. RESONANCE EFFECTS

The results presented above predict a change in the ground-state reactivity that is actually independent of the cavity photon frequency and in particular does not rely on any resonance effects between the cavity mode and the vibrational transitions of the molecule. Although the CBO PES can and does represent vibro-polariton formation through normal-mode hybridization, as discussed above and in appendix B, the subsequent TST used to predict changes in chemical reaction rates is an inherently classical theory and does not depend on the quantized frequencies of motion on the PES, and, as mentioned above, neither on the transition dipole moment between vibrational levels (determined by the derivative of the permanent dipole moment). While we have shown that TST agrees almost perfectly with full quantum rate calculations, where nuclear and photonic motion is quantized and polariton formation is thus included, all calculations above have been performed for the resonant case ω_c = ω_ν. We thus investigate whether there is any resonance effect on chemical ground-state reactivity by performing full quantum rate calculations for a wide range of cavity frequencies within the Shin-Metiu model. In Fig. 7, we represent the change k̃/k in the calculated reaction rate of the coupled system relative to the uncoupled molecule as a function of ω_c, for three different coupling strengths λ. Here, the values at ω_c = ω_ν correspond to the results shown in Fig. 3. We observe that the cavity rates are essentially constant with the frequency, with only a small modulation (k̃(ω_c → ∞) − k̃(ω_c → 0) ≠ 0) that becomes more important for larger couplings. For the couplings represented in Fig. 7, this goes from a relative modulation of 0.4% for λ = 0.005 a.u. to 7% for λ = 0.02 a.u. However, no resonance effects are revealed close to the vibrational frequency of the molecule, ω_ν.
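A minimal sketch of how such rate-change thresholds map onto couplings within second-order perturbation theory, keeping only the Debye-like term ΔE_b = −(λ²/2)[μ∥²(R_TS) − μ∥²(R_Min)] (the dipole moments below are hypothetical placeholders, not values from the calculations above):

```python
import math

KB_AU = 3.1668e-6  # Boltzmann constant in hartree/K

def barrier_shift(lmbda, mu_min, mu_ts):
    """Debye-like second-order shift of the barrier (atomic units)."""
    return -0.5 * lmbda**2 * (mu_ts**2 - mu_min**2)

def coupling_for_rate_change(target, mu_min, mu_ts, T=300.0):
    """Smallest coupling lambda for which TST predicts k~/k = target,
    using k~/k = exp(-delta_Eb / kB T)."""
    delta_Eb = -KB_AU * T * math.log(target)
    dm2 = mu_ts**2 - mu_min**2
    if delta_Eb * dm2 > 0:  # barrier shifts the wrong way for this target
        raise ValueError("target unreachable for these dipole moments")
    return math.sqrt(2.0 * abs(delta_Eb / dm2))

# Hypothetical dipoles (a.u.): |mu| smaller at the TS -> barrier grows
mu_min, mu_ts = 1.2, 0.4
print(coupling_for_rate_change(0.5, mu_min, mu_ts))  # lambda for k~/k = 0.5
```

For these made-up numbers the required coupling comes out around 0.03 a.u., i.e., on the scale of the plasmonic nanocavity couplings discussed in section VI.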
At the same time, the vibrational frequency appears to be the relevant energy that separates the high- and low-frequency limits for the rates, with TST working particularly well exactly around that value. In the following, we show that both limits can be understood by different additional adiabatic approximations. In the high-frequency limit, ω_c ≫ ω_ν, the photonic degree of freedom is fast compared to the vibrational one and can thus be assumed to instantaneously adapt to the current nuclear position R. This implies that the photonic degree of freedom can be adiabatically separated (just like the electronic ones), and nuclear motion takes place along an effective 1D surface determined by the local minimum in q, i.e., along the path sketched in Fig. 3(a), or, within lowest-order perturbation theory, along the surface defined by Eq. (14). Quantum rate calculations along this effective 1D PES indeed reproduce the reaction rate in the high-frequency limit perfectly (not shown). Furthermore, we note that in this limit, it becomes convenient to directly group the photonic and electronic degrees of freedom to obtain polaritonic PES [17,28] when performing the Born-Oppenheimer approximation, as successfully used for electronic strong coupling. In particular, this approach leads to exactly the same expression for the effective ground-state PES [17]. In the low-frequency limit, ω_c ≪ ω_ν, on the other hand, the photonic motion is much slower than the vibrations and can also be adiabatically separated. The photons are now too slow to adjust their configuration, and q can be assumed to stay constant during the reaction. The full quantum rate can then be obtained by performing a thermal average of independent 1D quantum rate calculations for each cut in q of the two-dimensional surface Ṽ(R, q). Here, the (normalized) thermal weight at each q, P(q) ∝ exp(−Ē(q)/k_B T), is obtained from the average thermal energy Ē(q) of the system at constant q. Again, this approximation agrees perfectly with the full quantum rate calculation for ω_c → 0 (not shown). These results imply that, on the single-molecule level, the formation of vibro-polaritons when ω_c ≈ ω_ν is not actually required or even relevant for the cavity-induced change in ground-state chemical structure and reactivity. This fact can be appreciated by a simple intuitive argument: vibrational strong coupling primarily occurs with the lowest vibrational transitions close to the equilibrium configuration, while chemical reactions that have to pass an appreciable barrier are typically determined by the properties of the involved transition state and the associated barrier height relative to the ground-state configuration. In general, neither of these is related to the properties of the lowest vibrational transitions (i.e., the curvature of the PES and the derivative of the dipole moment at the minimum). The absence of resonance effects can also be appreciated through the connection to the well-known material-body-induced potentials obtained within perturbation theory. For example, if the EM mode is well-approximated by a point-dipole mode, the obtained energy shift in the CBO PES can be rewritten as a van-der-Waals-like interaction between the permanent dipole moment of the molecule and the dipole it induces in the nanoparticle. This corresponds to the Debye force. In turn, the zero-point energy of the EM field reproduces the London dispersive force due to vacuum fluctuations and depends on the polarizability of the molecule.
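A minimal sketch of the low-frequency averaging procedure just described (the 1D rate k(q) and thermal energy Ē(q) below are made-up toy functions; in practice both would come from quantum rate calculations on cuts of Ṽ(R, q)):

```python
import numpy as np

KB_AU = 3.1668e-6  # Boltzmann constant in hartree/K

def thermally_averaged_rate(q_grid, rate_of_q, energy_of_q, T=300.0):
    """Low-frequency (omega_c << omega_nu) limit: the photon coordinate q
    is frozen during the reaction, so the total rate is the Boltzmann-
    weighted average of independent 1D rates at each fixed q."""
    beta = 1.0 / (KB_AU * T)
    weights = np.exp(-beta * (energy_of_q - energy_of_q.min()))
    weights /= weights.sum()        # normalized thermal weight P(q)
    return np.sum(weights * rate_of_q)

# Hypothetical toy inputs for illustration only:
q = np.linspace(-50.0, 50.0, 201)
E_of_q = 0.5e-6 * q**2                       # harmonic photon energy (hartree)
k_of_q = 1e9 * np.exp(-0.01 * np.abs(q))     # made-up 1D rates (1/s)
print(thermally_averaged_rate(q, k_of_q, E_of_q))
```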
For an arbitrary EM environment, this effect can also be directly linked to Casimir-Polder forces [96,103], which exactly correspond to the generalization of emitter-emitter interactions to arbitrary material bodies (e.g., cavities). In particular, within the perturbative regime, Casimir-Polder approaches could also be used to replace the explicit sum over modes k by integrals involving the EM Green's function [47,72], which is readily available for arbitrary structures. This provides an additional argument for the absence of resonance effects in our calculations, as (ground-state) Casimir-Polder forces are well-known not to depend on resonances between light and matter degrees of freedom. While we do not explicitly treat the situation in recent experiments on the modification of ground-state reactions by vibrational strong coupling (which were found to depend strongly on resonance conditions [14,35-37]), we believe that our results indicate that the resonance-dependent effects cannot be explained by a straightforward modification of ground-state reaction energy barriers at thermal equilibrium, as these would be captured by TST within the CBOA also in a many-mode, many-molecule setting.

IX. COLLECTIVE EFFECTS

We now turn to the description of collective effects, i.e., the case of multiple molecules. For simplicity, we again restrict the discussion to a single cavity EM mode. As discussed in section III, the single-molecule effects we have discussed up to now only become significant for coupling strengths λ = √(4π/V_eff) corresponding to the smallest available plasmonic cavities, which typically operate at optical frequencies. However, typical experimental realizations of vibrational strong coupling are performed in micrometer-size cavities filled with a large number of molecules [14,83,94,104]. In this case, the per-molecule coupling λ is so small that the single-molecule effects discussed above are completely negligible. For strong coupling and the associated formation of vibro-polaritons, the coherent response of all molecules leads to a collective enhancement of the Rabi splitting, Ω_R,col = √N Ω_R. However, as we have seen that the cavity-induced modification of the single-molecule ground state does not depend on the formation of polaritons, it is not a priori obvious whether this collective enhancement of the Rabi splitting also translates to cavity-induced collective modifications of the effective reaction barrier. We thus repeat the analysis performed for the single-molecule case above for the case of multiple molecules, working directly within the cavity Born-Oppenheimer approach. We note that the arguments for its applicability for treating ground-state chemical reactions translate straightforwardly from the single- to the many-molecule case. For N identical molecules, the CBO light-matter interaction Hamiltonian becomes

$$\hat{H}_{\mathrm{int}} = \omega_c\, q \sum_{i=1}^{N} \vec{\lambda}_i \cdot \hat{\vec{\mu}}(x_i; R_i) + \hat{H}_{dd},$$

where Ĥ_dd accounts for direct intermolecular (dipole-dipole) interactions. We stress that we again assume that only a single cavity mode is significantly coupled to the molecules. The cavity-mediated dipole-dipole interaction is thus fully contained within the light-matter coupling term, and Ĥ_dd corresponds to the free-space expression [65]. In the following discussion, we will again use lowest-order perturbation theory to obtain analytical insight. The cavity-molecule and dipole-dipole interaction terms are then independent additive corrections.
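To make the origin of the collective scaling discussed below explicit, a minimal derivation (a sketch, assuming the single-mode interaction form reconstructed above and replacing the dipole operators by their ground-state expectation values) simply minimizes the photon coordinate for N static dipoles:

```latex
% Classical minimization of the photon coordinate q for N static dipoles
% coupled to a single cavity mode.
\begin{align}
  V(q) &= \frac{\omega_c^2}{2}\, q^2
        + \omega_c\, q \sum_{i=1}^{N} \lambda_i\, \mu_{0,\parallel}(R_i),
  \\
  \left.\frac{\partial V}{\partial q}\right|_{q_m} = 0
  \;\Rightarrow\;
  q_m &= -\frac{1}{\omega_c} \sum_{i=1}^{N} \lambda_i\, \mu_{0,\parallel}(R_i),
  \\
  V(q_m) &= -\frac{1}{2} \Bigl( \sum_{i=1}^{N} \lambda_i\, \mu_{0,\parallel}(R_i) \Bigr)^{2}
         = -\frac{N^2 \bar{\lambda}^2}{2}\, |\mu_0(R)|^2
  \quad \text{(identical, aligned molecules)}.
\end{align}
```

This is the coherent, N²-scaling shift discussed in the following; the zero-point (London-like) contribution adds on top of it and scales only linearly with N.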
We first focus on the cavity-induced effects and will discuss the influence of direct dipole-dipole interactions later, in particular when studying a prototype implementation: a nanosphere surrounded by a collection of molecules. For simplicity of notation, we again use scalar quantities to indicate the component of the dipole along the field direction, but keep the index to make this explicit, i.e., λ⃗_i = λ_i ε⃗_i and ε⃗_i · μ⃗(R_i) = μ_∥(R_i), so that we can rewrite the interaction term of the Hamiltonian as ω_c q Σ_{i=1}^{N} λ_i μ̂_∥(x_i; R_i). The full Hamiltonian now corresponds to a many-body problem even for simple model molecules. Within second-order perturbation theory, the new (many-molecule) ground-state PES is

$$\tilde{V}(\mathbf{R}, q) = \sum_{i=1}^{N} V_0(R_i) + \omega_c\, q \sum_{i=1}^{N} \lambda_i\, \mu_{0,\parallel}(R_i) + \frac{\omega_{\mathrm{eff}}^2(\mathbf{R})}{2}\, q^2,$$

where R = (R_1, R_2, ..., R_N) collects the nuclear configurations of all the molecules and ω_eff is the effective cavity frequency renormalized by the molecular polarizabilities. With this result, we can again apply the corresponding conditions for finding critical points in order to analytically find the minimum along q and the corresponding total energy of the hybrid system up to second order in λ_i,

$$E(\mathbf{R}) = \sum_{i=1}^{N} V_0(R_i) - \frac{1}{2} \Bigl( \sum_{i=1}^{N} \lambda_i\, \mu_{0,\parallel}(R_i) \Bigr)^{2} + \frac{1}{2}\bigl(\omega_{\mathrm{eff}}(\mathbf{R}) - \omega_c\bigr). \tag{19}$$

It can be seen that the cavity-induced shift depends on the square of the sum of the (coupling-weighted) permanent dipole moments of the molecules, not on the sum of their squares. Assuming perfect alignment and identical configurations for all molecules, this gives an energy shift −(N²λ̄²/2)|μ_0(R)|², where λ̄ = (1/N)Σ_i λ_i is the average coupling. The per-molecule energy shift is then linear in N, indicating collective enhancement of the molecule-cavity interaction. In contrast, the London-force-like change in zero-point energy due to the modification of the effective cavity frequency is additive, with a total zero-point energy shift (1/2)(ω_eff − ω_c) proportional to N, and shows no collective enhancement for single-molecule reactions. It is interesting to note that the connection between polarizability and the dielectric function of a material through the Clausius-Mossotti relation suggests that this energy shift is equivalent to the change of mode frequency due to the refractive index of the collection of molecules. The shift in cavity mode frequencies due to refractive index changes after chemical reactions is exactly the effect used in experiments to monitor reaction rates under vibrational strong coupling [14,35,37]. We also mention that at higher levels of perturbation theory, cavity-mediated contributions analogous to the Axilrod-Teller potential, i.e., van der Waals interactions between three emitters, appear in the intermolecular potential [47,105]. Based on Eq. (19), we can analyze the effect of the cavity on the reaction rate of a single molecule within the ensemble. This is determined by the energy difference between the minimum-energy and transition-state configurations of that molecule, with the other molecules fixed in a stable configuration (here chosen to be the minimum for all of them). For simplicity, we assume that the critical configurations R^Min and R^TS of the coupled system are equal to the uncoupled ones (as we have seen above, the shifts are generally small).
We can then directly express the change in the energy barrier of the moving molecule (chosen to be molecule i = 1 here) as

$$\Delta\tilde{E}_b = -\frac{\lambda_1^2}{2}\Bigl[\mu_{0,\parallel}^2(R_1^{\mathrm{TS}}) - \mu_{0,\parallel}^2(R_1^{\mathrm{Min}})\Bigr] - \lambda_1\Bigl[\mu_{0,\parallel}(R_1^{\mathrm{TS}}) - \mu_{0,\parallel}(R_1^{\mathrm{Min}})\Bigr] \sum_{i=2}^{N} \lambda_i\, \mu_{0,\parallel}(R_i^{\mathrm{Min}}). \tag{21}$$

This expression can be straightforwardly interpreted, with the first part corresponding to the Debye-like interaction of molecule 1 itself with the cavity, and the second part corresponding to the cavity-mediated interaction of molecule 1 with all other molecules (which itself can be understood as the sum of two equal contributions: the interaction of the moving molecule with the cavity field induced by all other molecules, as well as the interaction of all other molecules with the cavity field induced by the moving molecule). Within perturbation theory, this Debye-like energy shift is again equivalent to the electrostatic energy, in this case that of a collection of permanent dipoles interacting with the cavity, i.e., a material structure. This makes the connection to electric field catalysis [89] even more direct, with the difference that the electric field is not generated by applying an external voltage, but represents the cavity-enhanced field of all the other molecules. The fact that the main contribution is just the electrostatic energy shift also demonstrates the equivalence of our results to the approach of taking into account non-resonant effects through cavity-modified dipole-dipole and dipole-self interactions [65]. To treat the dependence on molecular orientations explicitly, we define the alignment angle θ_i for each molecule through μ_0,∥(R_i) = |μ_0(R_i)| cos θ_i. Inserting this in Eq. (21), we obtain

$$\Delta\tilde{E}_b = -\frac{\lambda_1^2}{2}\Bigl[\mu_{0,\parallel}^2(R_1^{\mathrm{TS}}) - \mu_{0,\parallel}^2(R_1^{\mathrm{Min}})\Bigr] - (N-1)\,\lambda_1 \bar{\lambda}\,\langle\cos\theta\rangle'\,\bigl|\mu_0(R^{\mathrm{Min}})\bigr|\Bigl[\mu_{0,\parallel}(R_1^{\mathrm{TS}}) - \mu_{0,\parallel}(R_1^{\mathrm{Min}})\Bigr], \tag{22}$$

where λ_r,i = λ_i/λ̄ is the relative coupling of molecule i, ⟨cos θ⟩ = (1/N)Σ_i λ_r,i cos θ_i is the coupling-weighted average orientation angle, and primed quantities indicate that only molecules 2 to N are taken into account (for N ≫ 1, they can be replaced by unprimed quantities). We obtain a term proportional to the number of molecules N, i.e., there is a collective effect on the single-molecule energy barrier that is reminiscent of the collective Rabi splitting, Nλ̄² ∝ Ω²_R,col. Note that the collective change of the energy barrier still depends on the molecule having a different permanent dipole moment in the transition and minimum configurations. Furthermore, it requires the molecules not participating in the reaction to have a non-zero permanent dipole moment and an average global alignment, such that ⟨cos θ⟩ ≠ 0. This could be achieved by fixing the molecular orientation by, e.g., growing self-assembled monolayers [106] or using DNA origami [50,107], or for molecules that can be grown in a crystalline phase, such as anthracene [108] (although polar molecules tend not to grow into crystals with a global alignment [109]). Another strategy to achieve alignment under strong coupling that has been successfully used experimentally is to align molecular liquid crystals through an applied static field [110]. However, for general disordered media such as polymers or molecules flowing in liquid phase [14,104], the angular distribution is typically isotropic, leading to ⟨cos θ⟩ ≈ 0. In that case, our theory predicts that no collective effect on reactivity should be observed unless the cavity itself induces molecular orientation (see below). We note for completeness that the collective Rabi splitting depends on the average of the squared z-component of the transition dipole moments, i.e., ⟨cos² θ⟩, which is nonzero unless all molecules are aligned perpendicular to the electric field of the cavity mode, and equal to 1/3 for isotropic molecules.
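The orientation dependence of the collective term in Eq. (22) is easy to verify numerically. A minimal sketch (assuming equal couplings for all molecules and a perfectly aligned moving molecule; all dipole values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def collective_term(N, lam, mu_min, dmu, cos_thetas):
    """Collective part of Eq. (22): -(N-1)*lam^2*<cos(theta)>'*|mu_min|*dmu,
    with equal couplings lam; dmu = mu_par(TS) - mu_par(Min) of molecule 1."""
    return -(N - 1) * lam**2 * cos_thetas.mean() * mu_min * dmu

N, lam = 6000, 1e-4
mu_min, dmu = 1.2, -0.8   # hypothetical values in atomic units

# Isotropic orientations: cos(theta) uniform on [-1, 1] -> <cos> ~ 0
iso = rng.uniform(-1.0, 1.0, size=N - 1)
# Perfect alignment: cos(theta) = 1 for every molecule
ali = np.ones(N - 1)

print(collective_term(N, lam, mu_min, dmu, iso))  # ~0: no collective effect
print(collective_term(N, lam, mu_min, dmu, ali))  # grows linearly with N
```

For the isotropic sample the collective term averages to essentially zero, while for perfect alignment it grows linearly with N, exactly as argued above.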
In order to test the strength of the collective effect in realistic situations, and to compare it with the effect of direct (free-space) dipole-dipole interactions, we now treat a specific configuration, as depicted in Fig. 8(a): a nanocavity represented by a metallic sphere of diameter d = 8 nm, surrounded by a collection of Shin-Metiu "molecules" located at distances from 1 nm to 16 nm from the sphere. We place a collection of up to N = 6000 molecules at random positions within that volume, imposing a minimum distance of 1.5 nm between the molecules. A metal sphere with a Drude dielectric function (or a dielectric sphere with a single resonance, such as a phonon mode) can be approximated as a cavity with only three modes, the dipolar localized surface plasmon resonances aligned along x, y, and z (see appendix A for details). Higher-order multipole modes only couple significantly to emitters that are very close to the surface. We first assume all molecules to be aligned perfectly with the electric field of the z-oriented dipolar mode of the sphere. In this configuration, the sum over x- and y-oriented fields at the origin cancels out for large N. For these directions, there is thus no Debye-like collective effect, and we can restrict our attention to just a single mode of the sphere (the z-oriented dipole mode). As mentioned above, within perturbation theory, where the Debye-force-like contribution can be understood within a fully electrostatic picture, it is straightforward to include the direct (free-space) interaction between permanent dipoles, as it is simply a further additive electrostatic contribution. In Fig. 8(b), we show the total electrostatic energy of the system, as well as the relative contributions due to molecule-sphere and direct molecule-molecule interactions, as a function of N. For the configuration considered here, for which we have not performed any optimization of the total energy, the dipole-dipole interactions give a positive contribution to the total energy that is significantly larger than the collective dipole-sphere interaction. The relative strength of dipole-dipole and dipole-sphere interactions depends on the details of the configuration, and we have checked that, e.g., it is also possible to maintain the same collective interaction while obtaining an overall negative contribution from dipole-dipole interactions by not choosing random positions as we did for simplicity. In contrast to the total energy, the change in energy barrier predicted by Eq. (21) for the most strongly coupled molecule of the ensemble is dominated by the (collective) sphere-dipole interactions, as shown in Fig. 8(c). The barrier height indeed increases approximately linearly with N, with changes of up to ≈ 0.09 eV due to the cavity-mediated interaction, and an associated suppression of the reaction rate by a factor of ≈ 30 at room temperature. In the geometry treated here, the energy shift of the target molecule due to dipole-dipole interactions with the other molecules also increases linearly with N, as the molecular dipoles combine to all act in the same direction at the sphere location, with an effect that is roughly half of the cavity-mediated interaction. As mentioned above, the details depend strongly on the configuration and cavity properties, and in particular, it is also possible to choose configurations where the direct dipole-dipole interactions dominate.
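A minimal sketch of the competing electrostatic contributions in this kind of geometry, treating the sphere as a polarizable point (quasistatic polarizability a³) and aligning each dipole with the local field of the z-oriented dipolar mode; geometry parameters and dipole magnitudes are illustrative only, and the minimum-distance constraint is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(1)
NM = 18.8973  # 1 nm in bohr (atomic units)

def dipole_field(mu, r_vec):
    """Quasistatic field of a point dipole mu at displacement r_vec (a.u.)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return (3.0 * np.dot(mu, rhat) * rhat - mu) / r**3

def mode_direction(r_vec):
    """Local field direction of the z-oriented dipolar sphere mode."""
    e = dipole_field(np.array([0.0, 0.0, 1.0]), r_vec)
    return e / np.linalg.norm(e)

def energies(positions, mus, a):
    """Induced-dipole (sphere) energy -a^3 |E(0)|^2 / 2 plus the direct
    dipole-dipole interaction energy of the molecules."""
    E0 = sum(dipole_field(m, p) for p, m in zip(positions, mus))
    u_sphere = -0.5 * a**3 * np.dot(E0, E0)
    u_dd = 0.0
    for i in range(len(positions)):
        for j in range(i):
            u_dd -= np.dot(mus[i], dipole_field(mus[j], positions[i] - positions[j]))
    return u_sphere, u_dd

# Hypothetical ensemble around a 4 nm-radius sphere:
N, a, mu0 = 300, 4.0 * NM, 1.2
pos = rng.normal(size=(N, 3))
pos /= np.linalg.norm(pos, axis=1, keepdims=True)
pos *= rng.uniform(a + 1.0 * NM, a + 16.0 * NM, (N, 1))
mus = mu0 * np.array([mode_direction(p) for p in pos])
print(energies(pos, mus, a))
```

Because every dipole points along the local mode field, the contributions to the field at the sphere add coherently, which is the discrete analog of the coherent sum in Eq. (21).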
While a more exhaustive treatment is beyond the scope of this article, we mention that in initial explorations, we did not find any simple configuration where the cavity-mediated interactions were significantly larger than direct dipole-dipole interactions. While the barrier height increases here, the effect we predict can also lead to a decrease, for example in the case that the transition-state dipole moment is larger than in the minimum configuration, cf. Eq. (22). This would be expected, e.g., in dissociation reactions in which the molecule splits into two partially charged fragments, and is also seen for the back-reaction from the right to the left minimum in the Shin-Metiu model for the case that all other molecules are in the leftmost minimum (see Fig. 9). For comparison, Fig. 9 shows the effect of average alignment for the sphere-molecule system considered above, for the case of N = 6000 molecules, corresponding to a molecular density of ≈ 2 × 10⁸ µm⁻³. It displays the CBO PES within second-order perturbation theory as a function of R_1, with all other molecules fixed in the minimum configuration, and along the photonic minimum q = q_m. For ⟨cos θ⟩ = 1, this demonstrates that the collective cavity effect on the surface is significant, with the positions of the critical points shifting compared to the bare molecule. For the Shin-Metiu model studied here, the barrier height is actually increased compared to the approximate prediction of Eq. (21), which does not take these shifts into account. In contrast, when there is no average orientation, ⟨cos θ⟩ = 0, the effect on the surface is minimal and is reduced to the single-molecule result. The single-molecule energy shifts we predict for perfect alignment can be significant. This implies that the molecules, if they are free to rotate in place, could lower their energy by aligning with the electric field of the cavity mode, which could possibly lead to self-organization (for the example system above, this also requires breaking of the overall spherical symmetry). The details of this effect depend on the precise setup, such as the cavity material and shape, molecular and solvent properties, etc., and would require a more complete treatment taking thermodynamic effects and free energy into account [111,112], which is beyond the scope of the current work. However, we mention that it has recently been shown that strong coupling and the associated formation of polaritons itself could lead to alignment due to the associated decrease of the lower polariton energy, provided that a significant fraction of molecules are excited to lower polariton states [29,113]. Although thermal excitation can be efficient for vibrational strong coupling due to the relatively low energies of vibro-polaritons, on the order of a few times the thermal energy k_B T, it should be noted that the arguments in [29,113] do not directly translate to thermal-equilibrium situations. In that case, a change in state energy due to improved orientation also leads to a change in population, with the average energy per degree of freedom staying constant and thus no net energy gain. Finally, we mention that in contrast to the single-molecule case, the generalization of the above arguments to many cavity modes is not straightforward, and the results are thus not directly applicable to, e.g., Fabry-Perot cavities with a continuum of modes following a dispersion relation as a function of the in-plane wave vector, as employed in existing experiments [14,35-37].
Our results indicate that solving the electrostatic problem (where all modes are implicitly taken into account) should predict the changes in energy barriers, but, e.g., the scaling with the number of molecules is not immediately obvious, and as mentioned above, statistical effects should be treated more carefully. Only for the special case that all modes have the same electric field distribution (e.g., different dipolar resonances of a small nanoparticle) can the sum over modes be performed straightforwardly.

X. CONCLUSIONS

To summarize, we have analyzed modifications of ground-state chemical reactivity in hybrid cavity-molecule systems, motivated by experimental results showing this for vibrational strong coupling [14,37]. By treating a simple model system, the Shin-Metiu model, we were able to show through full quantum rate calculations on the single-molecule level that ground-state thermally driven reaction rates can indeed be significantly modified under strong light-matter coupling. We then demonstrated that this change can be interpreted through classical transition state theory, i.e., by the change in the height of an effective energy barrier (or activation energy), by working within the cavity Born-Oppenheimer approximation. In this approximation, the cavity photon is formally treated like a nucleus, such that ground-state reactions can be represented through motion on a PES with a single additional nuclear-like degree of freedom. The use of perturbation theory leads to simple analytic expressions relating the effective barrier heights to purely ground-state molecular properties, namely the uncoupled ground-state PES, dipole moment, and polarizability of the molecule. We showed that within second-order perturbation theory, the energy shifts determining the barrier height on the CBO PES can be directly related to well-known intermolecular forces, i.e., the Debye and London forces, and more generally to Casimir-Polder interactions. We stress that while perturbation theory allows us to make connections to well-known results, our approach generalizes Casimir-Polder forces beyond the perturbative regime and applies for any coupling strength. Additionally, we have shown explicitly that the emergence of vibrational strong coupling does not affect the validity of the derived expressions for the effective energy barriers. At the same time, the CBOA provides a straightforward way to connect to well-known theories of chemical reactivity. The fact that the energy shifts obtained here become appreciable for realistic nanocavities with strongly sub-wavelength field confinement, and thus sufficiently large λ, demonstrates that the (generalized) van der Waals forces due to the interaction of the molecular dipole with the polarization it induces in the cavity can become strong enough to lead to significant changes in chemical reactivity. We also note that in the context of Casimir-Polder forces, it is well-known that for sub-wavelength separations between emitters and material systems, it is sufficient to work within the quasistatic approximation, in which only the longitudinal electromagnetic Green's function contributes and the interaction does not depend on whether the Power-Zienau-Woolley transformation has been performed or not. In this context, it is also well-known how to go beyond the quasistatic approximation, and the contribution from longitudinal and transversal fields (including the self-energy term and all EM field modes) is naturally included within the Green's function [72].
We demonstrated the applicability of our approach for a realistic multi-mode cavity, a nanoparticle-on-mirror setup [43], and found that the effective single-molecule coupling strength in this case becomes significant (corresponding to a mode volume of ≈ 2 nm³) even though the mode volume of the main optically active mode is significantly larger (≈ 40 nm³). We furthermore applied our theory to a real molecule, 1,2-dichloroethane, and showed that reaction rates can be both suppressed and enhanced depending on the relative value of the molecular dipole moment at the critical configurations (local minima and saddle points of the PES). A cavity could thus serve as a catalyst or as an inhibitor of a ground-state reaction, and could even alter the global equilibrium configuration of the molecule, all without any kind of external energy input, with all reactions simply driven by thermal fluctuations. This represents a potential way to efficiently optimize the desired yield of a molecular reaction. We then found that on the single-molecule level, the effects discussed above do not rely on any particular relation between the cavity photon frequency ω_c and the vibrational transitions in the molecule ω_ν, and thus in particular not on the formation of polaritons (hybrid light-matter states). This is consistent with the interpretation of the energy shifts as generalizations of Casimir-Polder interactions beyond the perturbative regime. We also showed that the small modulation of the reaction rate as a function of ω_c that is observed numerically can be understood by simple adiabatic approximations, and again is not related to polariton formation. For the case of many-molecule strong coupling, where the single-molecule coupling λ is typically so small that the single-molecule effects described above are negligible, we demonstrated that the PES and reaction barriers can be significantly modified by collective effects provided that the permanent dipole moments of the molecules are oriented with respect to the cavity mode field, such that they induce an overall static electric field. However, it should also be noted that similar effects could be achieved by direct dipole-dipole interactions if one manages to align all molecules such as to create a strong field at the position of a single molecule. An interesting open question is whether the cavity-mediated interactions could induce alignment in materials that do not show this in the absence of the cavity, or if direct dipole-dipole interactions would prevent this. Finally, it should be noted that we have throughout assumed that the whole system is in thermal equilibrium, i.e., that the effective temperature is identical both for the molecules and the cavity EM mode. This implies that system-bath interactions do not have to be explicitly modelled, as the system can simply be assumed to be at a given temperature (as explicitly included in the quantum rate calculations and TST). This assumption would break down if the internal vibrational temperature of the molecules is different from the temperature of the thermal radiation bath that the cavity is coupled to. In that case, the effective temperature of the system could potentially become an average of the internal and external bath temperatures.
In particular, the effective temperature relevant for a given reaction could depend on whether vibrational motion along that reaction coordinate is hybridized with the cavity mode, such that the external black-body radiation bath would conceivably couple more efficiently to that mode than to others. Such effects have been studied for Casimir-Polder forces, where resonant contributions that exactly cancel at thermal equilibrium can become important in nonequilibrium situations [114,115] and possibly give rise to additional collective effects [116]. Our work demonstrates the possibility of modifying ground-state chemical reactions and molecular properties in hybrid cavity-molecule systems without an external input of energy. We believe that the theory presented here lays the groundwork for a profound understanding of this novel cavity effect and could be used to predict experimentally available chemical modifications.

For a small sphere of radius a with a Drude dielectric function, the quasistatic polarizability takes the single-resonance form

$$\alpha(\omega) = a^3\,\frac{\omega_0^2}{\omega_0^2 - \omega^2},$$

where ω_0 = ω_p/√3. This is identical to the polarizability of a single-mode quantum oscillator at frequency ω_0 with transition dipole moment μ_eg = √(ω_0 a³/2) [88]. Here, spherical symmetry implies that there are three degenerate quantum oscillators, corresponding to the quantized localized surface plasmon resonances in this case, directed along three orthogonal axes (e.g., x, y, and z). If the dielectric function is instead given by a Lorentzian function representing a material resonance (e.g., a phonon mode) at frequency ω_ph and with resonator strength characterized by ω_f, i.e., ε(ω) = 1 + ω_f²/(ω_ph² − ω²), the same single-oscillator form is obtained, with the quantized mode now corresponding to a localized surface phonon polariton resonance. We have thus found that these simple models can be quantized by considering just a single or a few cavity modes.

In the perturbative point-dipole limit, the energy shift separates into two terms, where the first term corresponds exactly to the static energy of a dipole μ_0 at r_m interacting with a polarizable sphere at the origin, and the second term corresponds to the London force [122].

Appendix D: Electrostatics of a nanoparticle-on-mirror cavity

Here we derive the electrostatic energy of a dipole μ inside a plasmonic nanocavity made up of a spherical metallic nanoparticle of radius R separated by a gap Δ from a planar metallic mirror. This can be achieved using the method of image charges by considering a formally infinite series of images, with each image in one component of the cavity inducing an image in the other. In practice, this infinite converging series can be truncated after a finite number of terms to obtain any desired degree of accuracy. Considering both a charge q and a dipole μ at position r relative to the center of a perfectly conducting grounded sphere of radius R, the resulting images will be located at r′ = (R/r)² r (where r = |r|) and consist of a charge and dipole given by

$$q' = -\frac{R}{r}\,q + \frac{R\,(\vec{\mu}\cdot\hat{r})}{r^2}, \qquad \vec{\mu}\,' = \Bigl(\frac{R}{r}\Bigr)^{3}\Bigl[2(\vec{\mu}\cdot\hat{r})\,\hat{r} - \vec{\mu}\Bigr],$$

with r̂ = r/r. Here, it is important to take into account that the image of a dipole in a sphere always consists of both a charge and a dipole. The corresponding expressions for a plane can be obtained by simply taking R → ∞ (and moving the center of the sphere accordingly to keep the planar surface fixed). The cavity-induced energy shift of the dipole is then given by U = −(1/2) E_ind · μ, where E_ind is the total field generated by all image dipoles and charges, and the factor 1/2 is due to them being induced. It is also interesting to note that since a dipole induces a nonzero image charge, the total induced dipole moment of the sphere is not origin-independent.
In particular, the induced dipole moment (for q = 0) relative to the sphere center is

$$\vec{\mu}\,' + \vec{r}\,' q' = \Bigl(\frac{R}{r}\Bigr)^{3}\Bigl[3\hat{r}(\hat{r}\cdot\vec{\mu}) - \vec{\mu}\Bigr],$$

which corresponds to the dipole moment obtained when treating the nanoparticle as a polarizable point particle (cf. appendix A). Accordingly, in a multipole expansion about the sphere center, higher-order multipoles are nonzero, and neglecting them corresponds to an approximation, while using the image dipoles and charges as given above is exact.
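A minimal numerical sketch of these image formulas (assuming a grounded, perfectly conducting sphere; the total induced dipole about the sphere center is checked against the point-polarizability result quoted above):

```python
import numpy as np

def image_of_dipole(mu, r_vec, R):
    """Image of a point dipole mu at r_vec outside a grounded conducting
    sphere of radius R centered at the origin: (q_im, mu_im, r_im)."""
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    r_im = (R / r) ** 2 * r_vec
    q_im = R * np.dot(mu, rhat) / r**2
    mu_im = (R / r) ** 3 * (2.0 * np.dot(mu, rhat) * rhat - mu)
    return q_im, mu_im, r_im

R = 10.0                          # sphere radius (arbitrary units)
mu = np.array([0.3, 0.0, 1.0])    # dipole with radial and tangential parts
r_vec = np.array([0.0, 0.0, 25.0])

q_im, mu_im, r_im = image_of_dipole(mu, r_vec, R)
total = mu_im + q_im * r_im       # induced dipole about the sphere center
rhat = r_vec / np.linalg.norm(r_vec)
expected = (R / np.linalg.norm(r_vec)) ** 3 * (3 * np.dot(mu, rhat) * rhat - mu)
print(np.allclose(total, expected))   # True
```

Iterating such images between sphere and plane (each image inducing a new one in the other surface) gives the converging series used for the nanoparticle-on-mirror energy shift.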
The streptococcal multidomain fibrillar adhesin CshA has an elongated polymeric architecture

The cell surfaces of many bacteria carry filamentous polypeptides termed adhesins that enable binding to both biotic and abiotic surfaces. Surface adherence is facilitated by the exquisite selectivity of the adhesins for their cognate ligands or receptors and is a key step in niche or host colonization and pathogenicity. Streptococcus gordonii is a primary colonizer of the human oral cavity and an opportunistic pathogen, as well as a leading cause of infective endocarditis in humans. The fibrillar adhesin CshA is an important determinant of S. gordonii adherence, forming peritrichous fibrils on its surface that bind host cells and other microorganisms. CshA possesses a distinctive multidomain architecture comprising an N-terminal target-binding region fused to 17 repeat domains (RDs) that are each ∼100 amino acids long. Here, using structural and biophysical methods, we demonstrate that the intact CshA repeat region (CshA_RD1–17, domains 1–17) forms an extended polymeric monomer in solution. We recombinantly produced a subset of CshA RDs and found that they differ in stability and unfolding behavior. The NMR structure of CshA_RD13 revealed a hitherto unreported all β-fold, flanked by disordered interdomain linkers. These findings, in tandem with complementary hydrodynamic studies of CshA_RD1–17, indicate that this polypeptide possesses a highly unusual dynamic transitory structure characterized by alternating regions of order and disorder. This architecture provides flexibility for the adhesive tip of the CshA fibril to maintain bacterial attachment that withstands shear forces within the human host. It may also help mitigate deleterious folding events between neighboring RDs that share significant structural identity without compromising mechanical stability.
Bacteria occupy almost every ecological niche on Earth (1, 2). Their capacity to colonize diverse environments is in part enabled by their ability to adhere to the surfaces of materials and other cells. Adherence allows anchorage and persistence within a defined environment, confers significant evolutionary advantage, and promotes bacterial infection in animals and humans (3, 4). Identifying and characterizing the cellular machineries employed by bacteria to adhere and colonize is of broad fundamental interest and may inform the development of anti-infective agents, medical devices, or vaccines (5, 6). Frequently, bacteria utilize proteinaceous surface decorations termed adhesins to facilitate attachment to extracellular target molecules. Different adhesins recognize and bind different (a)biotic targets, and there is considerable diversity in the molecular architectures of these important polypeptides. Larger filamentous adhesins may be grouped into one of two categories based on their distinguishing structural features: pili and fibrils. Pili have been implicated in numerous physiological processes and are found in both Gram-positive and Gram-negative bacteria (7-9). Fibrillar adhesins are produced by a wide variety of bacteria. They exhibit considerable sequence diversity, and much still remains to be learned about their structures and functions. Fibrils are usually composed of a single polypeptide, which is covalently anchored to the cell wall via a C-terminal LPXTG motif (10-12). Streptococcus species, including both commensal strains and pathogens, are prodigious producers of fibrillar adhesins (13-16). Streptococcus gordonii, a pioneer oral bacterium and opportunistic pathogen, employs the fibrillar adhesin CshA (cell surface hydrophobicity protein A) to enable binding to host cell surfaces and other microorganisms (17). This ∼259-kDa polypeptide shares <10% sequence identity with any protein of known structure (17-19). CshA possesses a distinctive multidomain architecture, comprising an N-terminal signal peptide (41 aa residues), a nonrepetitive target-binding region (778 aa), a repetitive region composed of 17 sequentially arrayed repeat domains (RDs; ∼100 aa each), and an LPXTG anchor (see Fig. 1). CshA forms peritrichous fibrils of ∼60 nm on the surface of S. gordonii (17), and heterologous expression of this protein on the surface of Enterococcus faecalis results in the formation of a dense furry layer comprised of multiple closely associated CshA polypeptides, which confers adhesive properties (17). Similarly, ΔcshA strains of S. gordonii show reduced binding to other oral microorganisms and host molecules, including fibronectin (Fn) (18-20). Recently, the molecular details of host Fn binding by CshA were established, with this polypeptide shown to bind Fn via a distinctive "catch-clamp" mechanism mediated by discrete domains within the nonrepeat region of the protein (21). This mode of binding involves the action of the intrinsically disordered N-terminal domain of the protein and its neighboring ligand-binding domain, which function in concert to form a robust protein-protein interaction via a readily dissociable precomplex intermediate.
In this study, using a combination of structural and biophysical methods, we show that the >175-kDa multidomain repeat region of CshA (CshA_RD1-17) adopts an elongated polymeric structure in solution, with a distinctive conformation dictated by the interplay of fully and partially ordered domains and intrinsically disordered regions. Equilibrium folding studies of individual CshA repeat domains reveal diversity in the stabilities and unfolding profiles of these proteins, despite their often considerable (>90%) sequence identities. The NMR structure of CshA_RD13 has been determined, which identifies a previously unreported all β-fold flanked on either terminus by unstructured linker regions. Complementary AUC and small-angle X-ray scattering (SAXS) studies of CshA_RD1-17 provide support for the CshA repeat region adopting a transitory structure characterized by alternating regions of order and disorder. Together, our data suggest a molecular architecture within which individual repeat domains contribute additive strength to the intact polypeptide but also minimize the likelihood of domain misfolding that may arise as a consequence of high sequence and structural identity to adjacent RDs. This is enabled via the acquisition of destabilizing mutations that preclude the adoption of a fully folded state. Our work identifies a distinctive polymeric protein architecture and resolves the molecular intricacies of its structure and organization. In turn, this provides greater insight regarding the capacity for bacterial adhesins to promote colonization of sites within the host that are continuously exposed to the flow of blood, saliva, or tissue fluids.

The intact CshA repeat region adopts an extended polymeric structure in solution

Consistent with previous domain assignments, the repeat region of CshA was considered to comprise residues 820-2500 of the 2507-amino acid full-length CshA polypeptide (21) (Fig. 1). The intact CshA repeat region, from here on referred to as CshA_RD1-17, was amplified from S. gordonii DL1 (22) chromosomal DNA and cloned into the pOPINF expression vector (23) (Table S1). The resulting construct was used to facilitate overexpression of an N-terminally hexahistidine-tagged variant of CshA_RD1-17 in Escherichia coli, and the resulting recombinant material was purified to homogeneity using a two-step process. CshA_RD1-17 was found to be a homogeneous, monodisperse species in solution, of >95% purity. Analysis of CshA_RD1-17 using CD spectroscopy, followed by deconvolution of the resulting spectrum into secondary structural elements, revealed the protein to be predominantly β-sheet (∼45%), with a significant disorder content (∼39%; Fig. 2A). Sedimentation velocity analytical ultracentrifugation confirmed that the polypeptide is monomeric and adopts an extended configuration in solution, with an f/f₀ value of 2.84 (Fig. 2B and Table S2). Complementary SAXS analysis (Fig. 2C and Table S3) provided further evidence that CshA adopts an elongated structure, with a radius of gyration of 120 Å and a maximum diameter of 408 Å, as derived from the pair distance distribution (P(r)) function (Fig. 2D). Structural disorder is apparent from the Kratky plot, which diverges from the baseline at high q, and the Porod exponent, which is lower than observed for a well-folded globular protein (Fig. 2E). The structural disorder evident from CD and SAXS analysis suggests a flexible, dynamic structure, in keeping with the biological role of CshA.
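As a side note on how the quoted radius of gyration relates to the measured P(r): Rg follows from the second moment of the pair distance distribution, Rg² = ∫r²P(r)dr / (2∫P(r)dr). A minimal sketch, with a purely illustrative (made-up) P(r) truncated at the reported Dmax of 408 Å:

```python
import numpy as np

def rg_from_pr(r, pr):
    """Radius of gyration from a pair distance distribution P(r):
    Rg^2 = integral(r^2 P(r) dr) / (2 * integral(P(r) dr))."""
    dr = r[1] - r[0]  # uniform grid assumed
    return np.sqrt((r**2 * pr).sum() * dr / (2.0 * (pr * dr).sum()))

# Illustrative P(r) only (not the measured curve): skewed, zero at Dmax
r = np.linspace(0.0, 408.0, 2048)
pr = r**2 * np.exp(-r / 80.0) * (408.0 - r)
print(rg_from_pr(r, pr))  # comparable in magnitude to the reported 120 A
```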
The measured scattering data are well-described by the flexible cylinder model (Fig. 2F and Table S4), in which CshA is characterized by a higher Kuhn length and lower contour length than expected for a random coil. The large deviation from random-coil behavior is consistent with a significant proportion of folded regions in the solution structure. These data imply that the polypeptide adopts an elongated, flexible ultrastructure in solution that occupies an ensemble of configurations.

Individual CshA repeat domains exhibit varying stabilities and unfolding behaviors

Having established the solution ultrastructure of CshA_RD1-17, we next sought to investigate the molecular origins of the polypeptide's physical properties. Comparative sequence analysis of assigned CshA repeat domains reveals considerable variation in the amino acid sequences of these regions (Fig. 3A and Fig. S1). The repeat region comprises a central core of domains with very high sequence identity (domains 3-14), punctuated by the deviant repeat domain 7. The sequence of this domain diverges significantly from those of the other 16 repeat domains that comprise the intact repeat region. Surprisingly, a significant number of adjacent domains located within the central 3-14 core exhibit high sequence identity. Domains 3 and 4, domains 5 and 6, domains 10 and 11, domains 11 and 12, and domains 12 and 13 share >90% sequence identity (Fig. 3A), an arrangement that contravenes current dogma regarding the organization of tandemly arrayed domains within multidomain proteins (24). The sequence identities of the terminal domains of CshA_RD1-17, namely 1, 15, 16, and 17, are significantly lower than those identified in the central core region. Interestingly, in addition to repeat domain 17, domains 6, 11, 12, and 13 all possess a C-terminal LPXTG cell-wall anchor motif, suggesting that evolutionary pressure to present the adhesive nonrepeat region of CshA at a maximal distance from the cell surface may have driven extension of the repeat region via gene duplication. In an effort to explore the structural significance of sequence variation between individual CshA repeat domains, a subset of these proteins was cloned, recombinantly overexpressed in E. coli, and purified to homogeneity using the same general strategy (Table S1). Representative domains were selected covering a breadth of amino acid sequences: repeat domains 1, 3, 5, 7, and 13. Each could be readily produced in high quantities and to high purity (>95%, as judged by SDS-PAGE analysis). The stabilities and unfolding behaviors of each of these proteins were assessed in vitro by monitoring their unfolding in the presence of increasing concentrations of the chemical denaturant urea (Fig. 3B and Table 1). Unfolding behavior was monitored by intrinsic tyrosine fluorescence, exploiting the presence of at least one such residue in each of the repeats 1, 3, 5, 7, and 13. Of the isolated domains examined, CshA_RD13 exhibited the highest overall stability (−3.42 kcal mol⁻¹), whereas, remarkably, CshA_RD5 showed no fluorescence intensity change when titrated with urea, despite its 91% sequence identity with repeat 13, including the two tyrosine residues at precisely the same positions: 52 (residue Tyr2084) and 92 (residue Tyr2123) (Fig. S1).
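Stabilities of this kind are conventionally extracted by fitting equilibrium denaturation curves to a two-state linear extrapolation model, ΔG_unf([urea]) = ΔG_H2O − m·[urea] (a quoted folding free energy of −3.42 kcal mol⁻¹ corresponds to ΔG_H2O = +3.42 kcal mol⁻¹ for unfolding). A minimal sketch of such a fit, with made-up titration values and flat native/denatured baselines assumed for brevity:

```python
import numpy as np
from scipy.optimize import curve_fit

RT = 0.592  # kcal/mol at 25 C

def two_state(urea, dG_h2o, m, s_native, s_denat):
    """Two-state linear-extrapolation model: the observed fluorescence is
    a population-weighted average of native and denatured signals."""
    dG = dG_h2o - m * urea          # unfolding free energy, kcal/mol
    K = np.exp(-dG / RT)            # equilibrium constant [D]/[N]
    f_denat = K / (1.0 + K)
    return s_native + (s_denat - s_native) * f_denat

# Made-up titration data for illustration only:
urea = np.linspace(0.0, 8.0, 17)
signal = two_state(urea, 3.4, 0.9, 1.0, 0.3)
signal += np.random.default_rng(2).normal(0.0, 0.01, urea.size)

popt, _ = curve_fit(two_state, urea, signal, p0=[2.0, 1.0, 1.0, 0.3])
print(popt[:2])   # recovered dG_H2O (kcal/mol) and m-value (kcal/mol/M)
```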
CD spectroscopy of repeat domain 5 also indicated that this domain was largely unstructured, even in the absence of urea (data not shown). Although CshA_RD3 and CshA_RD7 are less stable than CshA_RD13, they do exhibit a mildly cooperative unfolding transition, whereas CshA_RD1 is barely stable even in the absence of urea but also exhibits a weakly cooperative unfolding transition. Complementary size-exclusion chromatography (SEC) analyses of individual CshA repeat domains provide further support for variability in the degree of foldedness of these proteins (Fig. S2). The largely unfolded CshA_RD5 elutes earlier from a SEC column than its better-folded counterparts, and significantly earlier than the well-folded CshA_RD13.

Solution structure of CshA_RD13

In an effort to provide a structural framework for the observed biophysical properties of CshA_RD1-17, CshA_RD13, which possesses the highest cross-domain sequence identity to all other CshA repeat domains (Fig. 3A), was selected for structure elucidation. Of the five single repeat domains produced recombinantly, CshA_RD13 has the greatest tolerance to urea unfolding, suggestive of high stability (Fig. 3B). The structure of this protein was determined using solution NMR (Fig. 4 and Figs. S3 and S4). Assignment proved challenging because of repetitive sequence motifs and a high degree of mobility, leading to both the absence of some signals and the doubling (or more) of others (Fig. 4A). Nonetheless, a high degree of assignment was achieved for the core region of the protein covering residues 2053-2130 (Table 2). The N-terminal region (residues 2032-2052 plus a 19-residue tag) was found to be largely unstructured, with few inter-residue NOEs and no unambiguously assignable long-range NOEs. For this reason, no structural restraints were included for this part of the sequence, and the structure was only calculated and validated for residues 2053-2130. In addition to the high degree of disorder in the N-terminal part of CshA_RD13, several other regions of slow (ms) exchange were detected. Two sets of NMR signals were observed for the initial N-terminal loop comprising residues 2053-2062, of which only the major set was used for structure calculations. A hydrogen-deuterium exchange experiment showed that the Val2059 NH group is involved in a hydrogen bond that persists for over an hour, suggesting that interconversion between these conformations is either very slow or, more likely, that they are very similar and both involve a hydrogen bond between Val2059 H and Asp2056 O (as determined from initial structure calculations conducted without hydrogen bond restraints). Multiple conformations were also observed for residues Asp2113 and Asn2115, which lie in the β5-β6 loop. The β5-β6 loop lies adjacent to the N-terminal loop, suggesting that slow exchange between these two regions may be coupled. The β4-β5 loop (Pro2096-Pro2106) and the C-terminal tail (Ser2124-Val2130) are both ill-defined in the structural ensemble (Fig. 4B), which is in part due to several broad, missing, or unassigned signals and thus a low density of structural restraints, possibly reflecting the underlying dynamics of these regions. Several residues along the outside edge of the β3 and β5 strands have NOEs that could not be assigned to residues within the globular domain. Most likely these arise from interactions with the N-terminal tail of the protein, although no unambiguous assignment to particular residues was possible.
CshA_RD13 adopts a β-sandwich fold comprising two three-stranded antiparallel β-sheets arranged at an angle of ∼35° relative to one another (Fig. 4C). The two sheets are connected by an 11-amino-acid linker that fuses β4 to β5 and forms an extended loop that wraps around the C-terminal apex of the protein. The interface between the two β-sheets is predominantly hydrophobic and forms the compact core of the protein (Fig. 4D). There is a high degree of amino acid sequence conservation in this region in other CshA repeat domains, suggesting that each domain retains this unique core fold. In addition to the highlighted hydrophobic residues, the two tyrosine residues of CshA_RD13 (Tyr2084 and Tyr2123) reside at the β-sandwich interface (Fig. 4D), adding credence to the validity of our folding studies. Assessment of the solvation state of this pair of residues is likely to provide an accurate measure of protein unfolding. The overall shape of CshA_RD13 can be likened to a cylinder that is tapered at both termini. The terminal regions of the protein present sizeable patches of charge, suggesting that neighboring repeat domains may be able to engage in complementary charge-charge interactions with one another. Four of the five residues universally conserved in all 17 repeat domains (Gly2082, Gly2090, Gly2102, and Asp2113 in CshA_RD13) contribute to constraining the tight turns between individual β-strands (β2-β3, β3-β4, β4-β5, and β5-β6; SI). The fifth residue is located in the disordered N-terminal interdomain linker.

CshA_RD1-17 adopts a transitory dynamic structure comprising alternating regions of order and disorder

To reconcile our structural and biophysical data, we attempted to construct a pseudoatomic model describing the molecular architecture of CshA_RD1-17 in its entirety. The partial foldedness of the polypeptide implied by our SAXS and CD data, in addition to the variations in unfolding behavior of selected repeat domains and the observation of disordered linker regions at either terminus of the CshA_RD13 NMR structure, suggests that CshA may adopt a structure comprised of alternating regions of order and disorder. To verify this model, we applied an ensemble optimization method (EOM) to our SAXS data (Fig. 5 and Table 3). Homology models of each repeat domain were generated and used to formulate pseudoatomic models describing the molecular architecture of CshA_RD1-17, in which well-folded domains alternate with disordered regions approximated by a random coil. The data were well-described by a model containing all 17 RD homology structures, although the calculated ensemble Rg (99 Å) was lower than that determined experimentally (120 Å) (Fig. 5 and Table 3). The contour length of this structure is ∼660 Å, more than half of that determined from the data using the flexible polymer model. However, this model underestimates the proportion of disordered structure as measured using CD. To compensate, only repeat domains predicted to be largely ordered (1, 3-4, 7-8, 14-16) were included in the model, yielding an ensemble with an average Rg in agreement with our experimentally measured value (Fig. 5 and Table 3).
These findings indicate that the ultrastructure of CshA_RD1-17 does not adhere to a standard "beads-on-a-string" configuration, wherein individual well-folded RDs are arranged in a defined sequence within the polypeptide chain, but rather a highly dynamic architecture wherein a subset of RDs fail to adopt a fully folded state, leading to a highly dynamic transitory structure dominated by the interplay of ordered, disordered, and partially ordered regions.

Discussion

Fibrillar adhesins are an important family of bacterial surface proteins that make significant contributions to environmental and host colonization, biofilm formation, host tissue invasion, and pathogenicity. As virulence factors, they represent attractive targets for the development of therapeutic strategies and interventions. Although many fibrillar adhesins have been identified in commensal and pathogenic bacteria, only a small number of these proteins have been subjected to detailed molecular-level characterization. Examples include SasG, M protein, and the AgI/II family polypeptides (10, 12, 26-33). Each of these adhesins exploits a startlingly disparate molecular mechanism to facilitate the formation of fibrillar structures on the bacterial cell surface. The S. gordonii fibrillar adhesin CshA plays an important role in host colonization. CshA possesses a distinctive modular architecture that comprises 17 β-sandwich domains fused in series by flexible linkers. Although there is diversity in the sequences of individual repeat domains, amino acid sequence analysis suggests that each retains a conserved hydrophobic core that forms the basis of a compact protein fold. The structure of the representative repeat domain CshA_RD13 has been elucidated and provides a valuable test subject for understanding CshA repeat domain structure and function. The high degree of mobility in CshA_RD13 made assignment and structure calculation for this protein challenging; nonetheless, the core globular part of the protein is well-defined (Fig. 4). DALI analysis of CshA_RD13 failed to identify any closely related structural homologues of the protein, and technically the domain exhibits a new fold. However, the flattened β-sandwich is reminiscent of the Ig domains found in many other repeat domain-containing proteins such as titin and cadherin (25). Folding studies of individual CshA RDs reveal remarkably variable stabilities considering their high sequence identities (Fig. 3). Five of the repeat domains (domains 1, 3, 5, 7, and 13) were expressed individually and subjected to equilibrium unfolding to assess their relative stabilities. Repeat domains 3, 7, and 13 all displayed a cooperative unfolding transition with a relatively small free energy of folding, although this is not unusual for small domains (for example, see the study by Gruszka et al. (27)). Equilibrium unfolding of CshA_RD1 revealed a weakly cooperative transition (m_D-N = 0.66 kcal mol⁻¹ M⁻¹) and only very marginal stability (0.64 kcal mol⁻¹), indicating that a significant proportion of the molecules are unfolded even in native conditions. Because CshA_RD1 is markedly divergent from all of the other repeat domains, it is difficult to relate differences in sequence to changes in stability. Interestingly, the sequences of the terminal repeat domains CshA_RD1 and CshA_RD17 differ considerably from those located centrally within the polypeptide.
This may reflect the fact that they have coevolved to be adjacent to a nonrepeat domain and the cell wall, respectively. Because CshA_RD1 follows the nonrepetitive region in the overall CshA structure, it may require the presence of that region to interact with and stabilize it. No transition could be observed at all for CshA_RD5, which is surprising because it shares 91% identity with CshA_RD13. An examination of the differences between the primary sequences of CshA_RD5 and CshA_RD13, with respect to the NMR structure of the latter, suggests several substitutions that may destabilize CshA_RD5 relative to CshA_RD13. Thr2077 on β1b and both Pro2118 and Thr2122 on β5 in CshA_RD13 are all solvent-exposed to some degree and have been substituted with valine, leucine, and isoleucine, respectively, in CshA_RD5, leading to unfavorable exposure of hydrophobic residues to the aqueous solvent. Pro2079, which forms part of a type II β-turn between strands β1b and β2, is substituted with a serine, which statistically has a greater preference for type I β-turns. Mapping amino acid conservation across all 17 repeat domains onto the structure of CshA_RD13 indicates partial conservation of the hydrophobic residues that reside within the hydrophobic core of each RD. The central section of CshA_RD1-17 comprises 12 of 13 serially arrayed repeat domains that possess a high degree of sequence identity and appear closely structurally related (Fig. 3A). This sequential arrangement of highly similar domains is at odds with the known sequence-to-folding relationships in tandemly arrayed protein domains, in which sequence disparity between neighboring domains is postulated to minimize protein misfolding (24). It is tempting to speculate that interdomain linker length and disorder play an important role in this process, ensuring that the spatial distance between neighboring domains is sufficient to allow each individual domain to adopt its fully folded conformation prior to translation of its succeeding neighbor. However, what is clear from our folding studies, and corroborated by hydrodynamic analysis of CshA_RD1-17, is that a subset of CshA RDs do not adopt a well-folded conformation either alone or in the context of the intact CshA polypeptide. This generalized loss of foldedness appears to arise from the acquisition of destabilizing mutations within the hydrophobic cores of some repeat domains. Significantly, these mutations appear to arise in instances where there is substantial amino acid sequence identity to neighboring domains (Fig. 3A and Fig. S1). This may represent a strategy to minimize the likelihood of interdomain misfolding events, thus mitigating adhesin aggregation on the bacterial cell surface. Alternatively, the solvent-exposed hydrophobic residues may help to mediate interactions between CshA polypeptides during assembly of the cell-surface adhesive layer. The functional significance of the dynamic transitory structure of CshA_RD1-17 is yet to be unambiguously established; however, it is unquestionable that the combination of folded and partially folded regions will confer a high degree of flexibility on the polypeptide. This may enable the optimal projection of CshA's adhesive tip from the S. gordonii cell surface and, in doing so, maximize the capture radius of the adhesin. In addition, the partially folded structure may provide a mechanism of force damping following fibronectin binding.
This could offer a mechanical advantage by mitigating the effects of shear forces following target engagement. This would be of particular significance in the bloodstream, where S. gordonii must maintain an intimate association with the surface of host cells while resisting the force of blood flow. The transitory structure of CshA_RD1-17 would provide a deformable tether with the capacity to dissipate the kinetic energy of binding under flow. In summary, we report the identification and characterization of an entirely new architecture for multidomain bacterial surface proteins, as typified by the S. gordonii adhesin CshA. This ultrastructure is characterized by the presence of fully and partially folded repeat domains, along with regions of intrinsic disorder, which affords a dynamic yet mechanically robust polymeric structure. Our study extends the diversity of natural protein architectures that are employed to enable microbial adherence to biotic and abiotic substrata and provides new insight into the capacity of bacteria to adhere and persist at sites exposed to shear forces. Moreover, this information establishes a foundation for the development of interventions that target CshA and related polypeptides, applicable to disease prevention and anti-biofouling strategies.

Gene cloning

DNA sequences encoding CshA_RD1-17, CshA_RD1, CshA_RD3, CshA_RD5, CshA_RD7, and CshA_RD13 were amplified from S. gordonii DL1 (22) chromosomal DNA using appropriate primers (Table S1), incorporating the consensus sequences required for subsequent cloning into the expression vector pOPINF (23), precut with HindIII and KpnI. Ligations were performed using the In-Fusion™ (Clontech) cloning system as per the manufacturer's instructions. The resulting constructs encode N-terminally hexahistidine-tagged variants of each of the proteins under investigation. All constructs were verified by DNA sequencing before being transformed into E. coli BL21 (DE3) cells for protein expression.

Protein expression

For the expression of unlabeled CshA_RD1-17, CshA_RD1, CshA_RD3, CshA_RD5, CshA_RD7, and CshA_RD13, cultures of E. coli BL21 (DE3) cells harboring the respective expression plasmid were grown with shaking (200 rpm) in 1 liter of LB (Luria-Bertani) broth supplemented with carbenicillin (50 µg ml⁻¹) at 37 °C to A600 = 0.4-0.6. Protein expression was induced by the addition of isopropyl β-D-thiogalactopyranoside (IPTG) to a final concentration of 1 mM, and the cell cultures were transferred to 20 °C with shaking at 200 rpm and grown for a further 16 h. For expression of ¹⁵N-labeled protein, cultures were grown in minimal medium containing ¹⁵NH₄Cl as the nitrogen source. The cells were grown with shaking at 37 °C to A600 = 0.4-0.6 and were then grown with shaking (200 rpm) at 20 °C for a further 16 h. For expression of ¹⁵N,¹³C-labeled CshA_RD13, a culture (100 ml) of E. coli BL21 (DE3) cells harboring CshA_RD13::pOPINF was grown overnight with shaking at 37 °C. The cells were harvested by centrifugation, washed in resuspension buffer, and used to inoculate 2 liters of M9 minimal medium (50 mM KH₂PO₄, 25 mM Na₂HPO₄, pH 6.8, 10 mM NaCl, 1 mM MgSO₄, 0.3 mM CaCl₂, 1 mg ml⁻¹ biotin, 1 mg ml⁻¹ thiamin) supplemented with carbenicillin (50 µg ml⁻¹), trace elements (5 ml/liter of a 100× stock), 0.5 g/liter ¹⁵NH₄Cl, and 2 g/liter ¹³C-glucose. The cells were grown to A600 = 0.8-0.9. Protein expression was induced by the addition of IPTG (1 mM), and the cell cultures were transferred to 25 °C with shaking at 200 rpm and grown for a further 16 h.
Protein purification

All recombinant proteins were purified using the same general strategy. The cells were harvested by centrifugation and lysed. Cell debris was removed by centrifugation, and the remaining supernatants were applied to a HiTrap Ni²⁺ affinity column (GE Healthcare). The proteins were eluted with an imidazole gradient of 10-500 mM over 15 column volumes. Fractions (2 ml) found to contain the target protein (as identified by SDS-PAGE analysis) were pooled and concentrated. Protein samples were subjected to further purification by SEC through either a Superdex 16/60 S75 column (CshA_RD1, CshA_RD3, CshA_RD5, CshA_RD7, and CshA_RD13) or a Superdex 16/60 S200 column (CshA_RD1-17), both from GE Healthcare. For unlabeled proteins, SEC was performed in 50 mM Tris-HCl, 150 mM NaCl, pH 7.5. For labeled proteins, SEC purification was performed in 20 mM phosphate, 50 mM NaCl, pH 7.5. Protein-containing fractions were pooled, concentrated to 20 mg ml⁻¹, and stored at 4 °C.

Analytical ultracentrifugation

Sedimentation velocity analytical ultracentrifugation experiments were performed using a Beckman Optima XL-I. Sedimentation of CshA_RD1-17 was monitored at 40,000 rpm and 20 °C using the UV-visible absorption system at a wavelength of 280 nm. The sample concentration was 6.22 µM in buffer (20 mM Tris-HCl, 150 mM NaCl, pH 7.5). The sedimentation profiles were fitted in SEDFIT using the continuous c(s) distribution Lamm equation model. The partial specific volume of CshA_RD1-17 (0.7279 cm³ g⁻¹) was calculated from the primary sequence using SEDFIT. The density and viscosity of the buffer were measured using an Anton-Paar rolling-ball viscometer (Lovis 2000 M/ME) and found to be 1.002921 g cm⁻³ and 1.0218 mPa·s, respectively.

Small-angle X-ray scattering

SAXS data for CshA_RD1-17 were collected at the Diamond Light Source synchrotron (Beamline B21) with a fixed camera length configuration (4.014 m) at 12.4 keV. Size-exclusion chromatography-coupled SAXS (SEC-SAXS) using an Agilent HPLC system was utilized to collect the data. The sample was measured at a concentration of 25.8 µM in buffer (20 mM Tris-HCl, 150 mM NaCl, 5 mM KNO₃, 1% sucrose, pH 7.5). Two-dimensional scattering profiles were reduced using in-house software. The data were scaled, merged, and background-subtracted using the ScÅtter software package (34). GNOM and BAYESAPP were used to generate pair distance distribution plots from the scattering curves. Form factor fitting was carried out with SASVIEW using a flexible cylinder model. The model describes a chain that is defined by the contour length (L) and the Kuhn length (b). The Kuhn length is defined as twice the persistence length, over which the chain can be described as rigid; values above those expected for a random coil can be ascribed to the range of possible torsional angles between residues and to folded structural elements within the polypeptide. The contour length is the linearly extended length of the particle without stretching the backbone. For completely disordered chains behaving as a random coil, b is between 18 and 20 Å. The theoretical contour length for a fully disordered protein is 3.84 Å per residue and is defined by the number of residues and the spacing between Cα positions. EOM was used to analyze the experimental data by ensemble optimization. RANCH was used to generate a pool of 10,000 independent conformational models based on the primary sequence and homology models of folded RD domains.
GAJOE was used to select an ensemble of models whose combined theoretical scattering profiles best approximated the measured data using a genetic algorithm.

Proteolytic His-tag cleavage

Following nickel affinity and size-exclusion purification of recombinant CshA_RD1, CshA_RD3, CshA_RD5, CshA_RD7, and CshA_RD13 proteins, their hexahistidine tags were cleaved off by 3C protease digestion (Pierce). This was carried out according to the manufacturer's protocol (Pierce): 3C protease (1 mg ml⁻¹) was incubated with His-tagged CshA protein (5 mg ml⁻¹) overnight at 4 °C with agitation. The cleaved CshA proteins were separated from uncleaved material by passage through a HiTrap Ni²⁺ affinity column (GE Healthcare) equilibrated with buffer (20 mM potassium phosphate, 100 mM NaCl, pH 7.0). Cleaved protein was eluted with 5 column volumes of the same buffer. Uncleaved protein was then eluted with elution buffer (20 mM potassium phosphate, 100 mM NaCl, 1 M imidazole, pH 7.0). Cleaved protein was concentrated to 5-10 mg ml⁻¹.

Equilibrium unfolding studies

Equilibrium unfolding studies were performed by monitoring the change in intrinsic tyrosine fluorescence as a function of increasing urea concentration. All spectra were collected using a Horiba-Jobin Yvon Fluorolog. Protein at a concentration of 10 µM in buffer (20 mM potassium phosphate, 100 mM NaCl, pH 7.0) was mixed with varying concentrations of urea, and samples were left to equilibrate for 1 h at 20 °C prior to analysis. All fluorescence experiments were performed at 23 °C. For each sample, an emission spectrum was measured over the range 290-320 nm using an excitation wavelength of 278 nm. For analysis, the fluorescence intensity at 306 nm was plotted as a function of urea concentration, and the data were fitted to a two-state equilibrium unfolding model.

NMR spectroscopy

NMR data sets were collected at 20 °C using a Varian VNMRS 600-MHz spectrometer with a cryogenic cold probe. All NMR data were processed using NMRPipe (35). Additional spectra were also recorded at 20 °C on a Varian INOVA 900-MHz spectrometer with a cryogenic cold probe (Henry Wellcome Building for NMR, University of Birmingham). Proton chemical shifts were referenced with respect to the water signal relative to DSS. Spectra were assigned using CcpNmr Analysis 2.4 (36). Structure calculations were conducted using ARIA 2.3 (37). Twenty structures were calculated at each iteration except iteration 8, in which 200 structures were calculated. The 20 lowest-energy structures from this iteration were water-refined, and the 15 lowest-energy structures were chosen as a representative ensemble. Network anchoring was used during iterations 0, 1, and 2, and all iterations were corrected for spin diffusion (38). Two cooling phases, each with 8000 steps, were used. Torsion angle restraints were calculated using both TALOS+ (39) and DANGLE (40). Restraints were included for residues where both programs gave an unambiguous result in the same area of the Ramachandran plot. The restraints were based on those provided by DANGLE but were extended if the TALOS+ restraints went beyond them. This process resulted in slightly fewer, looser restraints than either program alone but aimed to reduce the number of over-restrained angles.
χ1 angle restraints were introduced for Val2107 and Val2109 because the orientations of these side chains were clearly defined by their NOE patterns, although the selection of structures based on global energy scores meant that not all structures adopted these orientations unless the restraints were introduced. The hydrogen-deuterium exchange experiment showed 28 NH groups to be protected after 8 min, including two Gln side-chain amides (see Fig. S4). In addition, NOEs were observed to a Thr Hγ1 hydrogen, suggesting that this group was also involved in a hydrogen bond. Initial structure calculations were conducted without hydrogen bond restraints. Hydrogen bond donors were then identified, and the corresponding hydrogen bond restraints were included in later calculations. Structures were validated using the Protein Structure Validation Software suite 1.5 (41) and CING (42).
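As a companion to the equilibrium unfolding section above, the sketch below fits a two-state unfolding model with a linear free-energy dependence on denaturant (the standard linear extrapolation treatment) to synthetic fluorescence data generated from the CshA_RD1 parameters quoted in the discussion (ΔG = 0.64 kcal/mol, m = 0.66 kcal mol⁻¹ M⁻¹). The sloping native and unfolded baselines used in a full analysis are omitted for brevity, so this is an illustration of the fitting approach, not the authors' exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.987e-3  # gas constant, kcal mol^-1 K^-1
T = 296.0     # K (fluorescence measured at 23 C)

def two_state(urea, dG_h2o, m, yN, yU):
    """Two-state unfolding with a linear free-energy dependence on [urea].

    The observed signal is the population-weighted average of the native
    (yN) and unfolded (yU) fluorescence; sloping baselines are omitted.
    """
    K = np.exp(-(dG_h2o - m * urea) / (R * T))  # unfolding equilibrium constant
    fU = K / (1.0 + K)                          # fraction unfolded
    return yN * (1.0 - fU) + yU * fU

# Synthetic data built from the CshA_RD1 parameters quoted in the text;
# a real analysis would use the measured 306 nm intensities.
urea = np.linspace(0.0, 8.0, 25)
rng = np.random.default_rng(1)
y = two_state(urea, 0.64, 0.66, 1.0, 0.2) + rng.normal(0, 0.01, urea.size)

popt, _ = curve_fit(two_state, urea, y, p0=(1.0, 0.5, 1.0, 0.2))
print(f"fitted dG_H2O = {popt[0]:.2f} kcal/mol, m = {popt[1]:.2f} kcal/mol/M")
```

Note that with ΔG = 0.64 kcal/mol the model predicts roughly a quarter of molecules unfolded at 0 M urea, consistent with the marginal stability described for CshA_RD1.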
Optimal radio labelings of graphs

Let $\mathbb{N}$ be the set of positive integers. A radio labeling of a graph $G$ is a mapping $\varphi : V(G) \rightarrow \mathbb{N} \cup \{0\}$ such that the inequality $|\varphi(u)-\varphi(v)| \geq diam(G) + 1 - d(u,v)$ holds for every pair of distinct vertices $u,v$ of $G$, where $diam(G)$ and $d(u,v)$ are the diameter of $G$ and the distance between $u$ and $v$ in $G$, respectively. The radio number $rn(G)$ of $G$ is the smallest number $k$ such that $G$ has a radio labeling $\varphi$ with $\max\{\varphi(v) : v \in V(G)\} = k$. Das et al. [Discrete Math. $\mathbf{340}$ (2017) 855-861] gave a technique to find a lower bound for the radio number of graphs. In [Algorithms and Discrete Applied Mathematics: CALDAM 2019, Lecture Notes in Computer Science $\mathbf{11394}$, Springer, Cham, 2019, 161-173], Bantva modified this technique to find an improved lower bound on the radio number of graphs and gave a necessary and sufficient condition to achieve the improved lower bound. In this paper, one more useful necessary and sufficient condition to achieve the improved lower bound for the radio number of graphs is given. Using this result, the radio number of the Cartesian product of a path and a wheel graph is determined.

Introduction

The channel assignment problem is the problem of assigning channels to TV or radio transmitters such that the interference constraints are satisfied and the use of spectrum is minimized. The problem was first introduced by Hale [11] in 1980. The interference between transmitters is closely related to the geographic locations of the transmitters: the closer the transmitters are, the higher the interference, and vice versa. Hence, the frequency difference between two radio channels assigned to transmitters is in inverse proportion to the distance between the two transmitters. Initially only two levels of interference, namely high and low, were considered, and accordingly two transmitters are called very close and close, respectively. In a private communication with Griggs during 1988, Roberts proposed a variation of the channel assignment problem in which close transmitters must receive different channels and very close transmitters must receive channels that are at least two apart. The problem is studied by mathematicians using a graph labeling approach. In a graph, the transmitters are represented by vertices, and two vertices are adjacent if the two transmitters are very close and at distance two apart if they are close. The problem of assignment of channels to transmitters is thus associated with a graph labeling problem. Motivated by this problem, Griggs and Yeh introduced L(2, 1)-labeling (or distance two labeling) in [9] as follows: An L(2, 1)-labeling of a graph G = (V(G), E(G)) is a function ϕ from the vertex set V(G) to the set of non-negative integers such that |ϕ(u) − ϕ(v)| ≥ 2 if d(u, v) = 1 and |ϕ(u) − ϕ(v)| ≥ 1 if d(u, v) = 2, where d(u, v) is the distance between u and v in G. The span of ϕ is defined as span(ϕ) = max{|ϕ(u) − ϕ(v)| : u, v ∈ V(G)}. The λ-number, denoted by λ(G), is defined as the minimum span over all L(2, 1)-labelings of G. The L(2, 1)-labeling and other distance two labeling problems have been studied by many researchers over the past two and a half decades; for example, see the survey articles [4,20]. In [5,6], Chartrand et al.
extended the constraint on distance from two to the largest possible distance in a graph G, namely its diameter, and introduced the concept of radio labeling as follows: a radio labeling of G is a mapping ϕ from V(G) to the set of non-negative integers such that |ϕ(u) − ϕ(v)| ≥ diam(G) + 1 − d(u, v) is satisfied for every pair of distinct vertices u, v of G. The assigned integer ϕ(u) is called the label of u under ϕ, and the span of ϕ is defined as span(ϕ) = max{|ϕ(u) − ϕ(v)| : u, v ∈ V(G)}. The radio number of G, denoted by rn(G), is the minimum span taken over all radio labelings ϕ of G. A radio labeling ϕ is optimal if span(ϕ) = rn(G). It is clear that an optimal radio labeling always assigns 0 to some vertex, and hence the span of ϕ is the maximum integer assigned by ϕ. A radio labeling is a one-to-one integral function from V(G) to the set of non-negative integers; therefore, any radio labeling ϕ induces an ordering of the vertices of G by increasing label value. It is clear that if ϕ is an optimal radio labeling, then span(ϕ) ≤ span(ψ) for any other radio labeling ψ of G. The radio labeling problem is recognized as one of the tough graph labeling problems. In [5,6], Chartrand et al. gave an upper bound for the radio number of paths and cycles. Liu and Zhu determined the exact radio numbers of paths and cycles in [15]. Even determining the radio number for basic graph families like paths and cycles was challenging. In [16,17,18], Vaidya and Bantva determined the radio number of the total graph of paths, the strong product of P_2 with P_n, and linear cacti. The radio number of trees has remained a focus of many researchers in recent years. In [10], Halász and Tuza determined the radio number of level-wise regular trees. In [13], Li et al. determined the radio number of complete m-ary trees. In [14], Liu gave a lower bound for the radio number of trees and a necessary and sufficient condition to achieve the lower bound; the author presented a class of trees, namely spiders, achieving this lower bound. In [3], Bantva et al. gave a different necessary and sufficient condition to achieve this lower bound and presented banana trees, firecracker trees, and a special class of caterpillars achieving this lower bound. A further necessary and sufficient condition to achieve the lower bound was given in [2], where the radio number of line graphs of trees and block graphs was also discussed. Liu et al. also studied the radio k-labeling of trees in [7]. In [8], Das et al. gave a technique to find a lower bound for the radio number of graphs. In [1], Bantva improved this technique to find a lower bound for the radio number of graphs and gave a necessary and sufficient condition to achieve the improved lower bound. Using these results, the author determined the radio number of the Cartesian product of paths and the Petersen graph. In this paper, one more useful necessary and sufficient condition to achieve the improved lower bound for the radio number of graphs given in [1] is established. Some subgraphs of a given graph G are characterized such that if the radio number of G achieves the lower bound given in [1], then these subgraphs also achieve the lower bound. Using these results, the radio numbers of the Cartesian products of path graphs with the wheel, star, and friendship graphs are determined.

Preliminaries

The book [19] is followed for standard graph-theoretic terms and notation. Only simple finite connected graphs are considered throughout this paper. The distance d_G(u, v) between two vertices u and v is the least length of a path joining u and v in a graph G. The subscript is dropped whenever the graph G is clear from the context.
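Since the radio condition must hold for every vertex pair, a proposed labeling can be verified mechanically. The following Python sketch checks the condition and reports the span; the P_4 example at the bottom is a toy illustration (its labeling is valid but not claimed to be optimal).

```python
from itertools import combinations

def is_radio_labeling(phi, dist, diam):
    """Check |phi(u) - phi(v)| >= diam + 1 - d(u, v) for all distinct pairs.

    phi  -- dict mapping vertices to non-negative integer labels
    dist -- dict of dicts with pairwise distances d(u, v)
    diam -- diameter of the graph
    """
    return all(abs(phi[u] - phi[v]) >= diam + 1 - dist[u][v]
               for u, v in combinations(phi, 2))

def span(phi):
    """Span of a labeling: the largest label used (0 is always assigned)."""
    return max(phi.values())

# Toy example: the path P4 on vertices 0-1-2-3 (diameter 3).
dist = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
phi = {1: 0, 3: 2, 0: 5, 2: 7}  # a valid radio labeling of P4 with span 7
print(is_radio_labeling(phi, dist, 3), span(phi))  # True 7
```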
The diameter of a graph G, denoted by diam(G), is the maximum distance between any pair of vertices of G. The subgraph induced by S ⊆ V(G), denoted by G(S), is the subgraph of G whose vertex set is S and whose edge set consists of all edges of G with both end vertices in S. Let H be an induced connected subgraph of G with diam(H) = k. Define layers L_i of the graph G with respect to the subgraph H as follows: set L_0 = V(H) and L_1 = N(L_0), and recursively define L_{i+1} as the set of vertices adjacent to L_i that do not lie in any earlier layer. The largest index h with L_h nonempty is known as the maximum level in the graph G. Since G is connected, every vertex of G lies in some layer, and the level L(u) of a vertex u is the index of the layer containing it, which equals the distance from u to L_0. Define the total distance of layers of the graph G, denoted by L(G), as the sum of the levels of all vertices of G. For a graph G, the parameter δ = δ(G) is defined as in [1]. For any connected graph G and any u, v ∈ V(G), the distance between u and v satisfies d(u, v) ≤ L(u) + L(v) + k, since a u-v walk through L_0 uses at most L(u) steps to reach L_0, at most k steps within L_0, and at most L(v) steps to reach v. In [8], Das et al. gave a technique to find a lower bound for the radio number of graphs. In [1], Bantva improved this technique and gave a lower bound for the radio number of graphs, stated in the following theorem.

Theorem ([1]). Let G be a simple connected graph of order p, diameter d, and L_0 ⊆ V(G). Denote k = diam(L_0) and δ = δ(G). Then the lower bound (3) for rn(G) holds.

Although the lower bounds given in [8] and [1] seem identical in notation, the difference lies in how L_0 is fixed. In [8], Das et al. set a vertex or a clique of the graph G as L_0, while Bantva set all vertices of an induced subgraph H of G as L_0, with the property that two non-adjacent vertices of V(H) have distance equal to diam(L_0). The reader may notice that this improved technique gives a better lower bound for the radio number of graphs, which is sharp for some classes of graphs; the author of [1] presented one such class, consisting of the Cartesian products of path graphs with the Petersen graph. In this paper, the condition for fixing L_0 is further relaxed: L_0 is taken to be the vertex set of an induced subgraph of G with the property that the vertices of G can be ordered as in the main theorem below. In [1], Bantva also gave a necessary and sufficient condition (given in the next theorem) to achieve the lower bound (3) for the radio number of graphs.

Main Result

In this section, we give one more useful necessary and sufficient condition to achieve the improved lower bound for the radio number of graphs given in [1], which relies only on an ordering of the vertices of the graph.

Theorem 3.1. Let G be a simple connected graph of order p, diameter d ≥ 2, and let L_0 be fixed in G as described earlier. Denote k = diam(L_0) and δ = δ(G). Then (5) holds if and only if there exists an ordering O(V(G)) := (x_0, x_1, . . . , x_{p−1}) of V(G) satisfying conditions (a) and (b). Moreover, under conditions (a) and (b), the mapping ϕ defined by (7) and (8) is an optimal radio labeling of G.

Proof. Necessity: Suppose that (5) holds. An optimal radio labeling ϕ of G then induces an ordering of the vertices for which conditions (a) and (b) can be verified. Sufficiency: Suppose that an ordering O(V(G)) := (x_0, x_1, . . . , x_{p−1}) of V(G) satisfies conditions (a)-(b) of the hypothesis and that ϕ is defined by (7) and (8). It is enough to prove that ϕ is a radio labeling with span equal to the right-hand side of (5). Let x_i and x_j (0 ≤ i < j ≤ p−1) be two arbitrary vertices; then by (8) and using (6), we have ϕ(x_j) − ϕ(x_i) ≥ d + 1 − d(x_i, x_j), and hence ϕ is a radio labeling. The span of ϕ equals the right-hand side of (5); this, together with (3), implies (5).

A graph with no cycle is called an acyclic graph. A forest is an acyclic graph. A tree is a connected acyclic graph. A spanning subgraph of a graph G is a subgraph with vertex set V(G). The wheel W_n is obtained from the cycle C_n by adding a new vertex v_0 adjacent to all vertices of C_n; in the Cartesian product P_m ✷ W_n, write the vertices as (u_i, v_j), where u_1, . . . , u_m are the vertices of P_m and v_0, v_1, . . . , v_n are the vertices of W_n. Observe that the diameter of P_m ✷ W_n is m + 1.

Theorem 3.3. The radio number rn(P_m ✷ W_n) is given by (10).

Proof. We consider the following two cases.

Case 1: m is even. In this case, set {(u_{m/2}, v_0), (u_{m/2+1}, v_0)} of P_m ✷ W_n as L_0; then diam(L_0) = k = 1 and the maximum level in P_m ✷ W_n is h = m/2.
Case 2: m is odd. In this case, set {(u_{(m+1)/2}, v_0)} of P_m ✷ W_n as L_0; then diam(L_0) = k = 0 and the maximum level in P_m ✷ W_n is h = (m + 1)/2. The order of P_m ✷ W_n and L(P_m ✷ W_n) are given by (13) and (14). Substituting (13) and (14) into (3), we obtain the right-hand side of (10), which is a lower bound for rn(P_m ✷ W_n). We prove that this lower bound is tight. Let τ and σ be as defined earlier in Case 1, and let α be a permutation defined on {1, 2, . . . , n}. Using the permutations α, τ, and σ, we first rename (u_i, v_j) (1 ≤ i ≤ m, 0 ≤ j ≤ n) as (a_r, b_s) case by case: one image is used if 1 ≤ i ≤ m and j = 0, or if i = m and 1 ≤ j ≤ n; another if i = (m + 1)/2 and 1 ≤ j ≤ n; (u_i, v_{στ(j)}) if 2 ≤ i ≤ (m − 1)/2 and 1 ≤ j ≤ n; and (u_i, v_{σ(j)}) if (m + 3)/2 ≤ i ≤ m − 1 and 1 ≤ j ≤ n.

Claim 2: The ordering O(V(P_m ✷ W_n)) := (x_0, x_1, . . . , x_{p−1}) defined above satisfies (6). Let x_i and x_j (0 ≤ i < j ≤ p − 1) be any two arbitrary vertices, and denote the right-hand side of (6) by E(i, j). If b = a + 1, then we consider the following two cases: (i) j = i + 2n and (ii) j ≠ i + 2n. If j = i + 2n, then d(x_i, x_j) = 1, and in this case E(i, j) < 0 < d(x_i, x_j); if j ≠ i + 2n, then d(x_i, x_j) = 2, and in this case E(i, j) ≤ 2 ≤ d(x_i, x_j). Otherwise E(i, j) ≤ 2 ≤ d(x_i, x_j), which completes the proof of Claim 2.

An n-star, denoted by K_{1,n}, is a tree consisting of n leaves and one additional vertex joined to all leaves by edges. Denote the vertex set of K_{1,n} by V(K_{1,n}) = {v_0, v_1, . . . , v_n}, with E(K_{1,n}) = {v_0 v_i : 1 ≤ i ≤ n}. A friendship graph F_n is the graph obtained by identifying one vertex of each of n copies of the cycle C_3 at a common vertex; denote its vertex set accordingly.

Proof. Observe that P_m ✷ F_n can be regarded as a subgraph of P_m ✷ W_{2n} with identical L_0 = {(u_{m/2}, v_0), (u_{m/2+1}, v_0)} when m is even and L_0 = {(u_{(m+1)/2}, v_0)} when m is odd; hence, by Theorem 3.3, the radio numbers of P_m ✷ W_{2n} and P_m ✷ F_n are identical.

Example 3.1. In Table 1, an ordering of the vertices and the corresponding optimal radio labeling of P_7 ✷ W_7 are shown.

Table 1: An ordering and optimal radio labeling for the vertices of P_7 ✷ W_7.

Example 3.2. In Table 2, an ordering of the vertices and the corresponding optimal radio labeling of P_8 ✷ W_7 are shown.
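The constructions above all follow the same pattern: exhibit a vertex ordering, then assign each vertex the smallest label compatible with the radio condition against the vertices already labeled. The Python sketch below implements that greedy assignment for an arbitrary ordering; when an ordering satisfies conditions of the kind used in Theorem 3.1, the binding constraint is the consecutive pair, so the greedy step reduces to ϕ(x_{i+1}) = ϕ(x_i) + d + 1 − d(x_i, x_{i+1}). This is an illustrative reconstruction, not the paper's formulas (7)-(8), which are not reproduced here.

```python
def greedy_radio_labeling(order, dist, diam):
    """Label vertices along `order`, giving each vertex the smallest label
    satisfying the radio condition against every vertex already labeled.
    The max(..., 1) term keeps labels strictly increasing (hence distinct).
    """
    phi = {order[0]: 0}
    for v in order[1:]:
        phi[v] = max(phi[u] + max(diam + 1 - dist[u][v], 1) for u in phi)
    return phi

# Toy run on the path P4 (vertices 0-1-2-3, diameter 3).
dist = {u: {v: abs(u - v) for v in range(4)} for u in range(4)}
print(greedy_radio_labeling([1, 3, 0, 2], dist, 3))
# -> {1: 0, 3: 2, 0: 3, 2: 5}: a valid radio labeling of P4 with span 5
```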
CA-CAS-01-A: A Permissive Cell Line for Isolation and Live Attenuated Vaccine Development Against African Swine Fever Virus

African swine fever virus (ASFV) is the causative agent of the highly lethal African swine fever disease that affects domestic pigs and wild boars. In spite of the rapid spread of the virus worldwide, there is no licensed vaccine available. The lack of a suitable cell line for ASFV propagation hinders the development of a safe and effective vaccine. For ASFV propagation, primary swine macrophages and monocytes have been widely studied. However, obtaining these cells can be time-consuming and expensive, making them unsuitable for mass vaccine production. The goal of this study was to validate the suitability of the novel CA-CAS-01-A (CAS-01) cells, identified as a highly permissive cell clone for ASFV replication within the MA-104 parental cell line, for live attenuated vaccine development. In a screening experiment, maximum ASFV replication was observed in CAS-01 cells compared to other sub-clones of MA-104, with a Ct value of 14.89 and a titer of log10 7.5 ± 0.15 TCID50/ml. When CAS-01 cells were inoculated with ASFV, replication of ASFV was confirmed by the Ct value for ASFV DNA, the HAD50/ml assay, and the TCID50/ml assay, and cytopathic effects and hemadsorption were observed similar to those in primary porcine alveolar macrophages after the 5th passage. Additionally, we demonstrated stable replication and adaptation of ASFV over serial passages. These results suggest that CAS-01 cells will be a valuable and promising cell line for ASFV isolation, replication, and the development of live attenuated vaccines.

Introduction

African swine fever virus (ASFV) causes the highly contagious disease African swine fever (ASF), which results in high mortality rates in pigs. Early reports of ASF in Kenya in 1921 were associated with an ancient sylvatic cycle, which caused nearly 100% mortality in domestic pigs infected with acute hemorrhagic fever (Monteagudo et al., 2017). Following its introduction to Portugal from Angola in 1957, it spread across Europe and caused significant losses to the pig industry. However, European countries, except Sardinia, succeeded in eradicating the disease in 1995 through stringent disease control measures (Chathuranga & Lee, 2023; Martins et al., 2021; Turlewicz-Podbielska et al., 2021). A number of transcontinental occurrences have been reported since 2007, with the most significant occurrence in Georgia in 2007. Since then, further transcontinental occurrences have been reported in Africa, Europe, Asia, and Oceania, and ASF was most recently introduced in North
America (Sun et al., 2021). The World Organization for Animal Health documented a span from January 2020 to January 2022 during which ASF outbreaks were observed across 35 different countries or regions globally. These outbreaks resulted in the infection of 4767 domestic pigs, leading to the loss of 1,043,334 animals, as well as 18,262 cases in wild boars, causing the loss of 29,970 animals (Wang et al., 2022). The causative pathogen responsible for ASF is a sizeable double-stranded DNA virus classified within the Asfarviridae family (Cackett et al., 2020). Robust biosecurity measures and established sanitary practices implemented at both the farm and national scales have been employed in the ongoing effort to combat the dissemination of this virus. Nonetheless, the effectiveness of these approaches remains limited, particularly in regions where resource constraints pose challenges to their full implementation. Therefore, there is an urgent need to develop an effective ASFV vaccine. Over the past few decades, numerous vaccination tactics have been explored, encompassing inactivated vaccines, DNA-based vaccines, subunit vaccines, and viral-vector-based vaccines. However, most of these vaccines failed to induce effective protective immunity against ASFV (Chathuranga & Lee, 2023). From prior investigations, the most encouraging vaccine candidates to date are live attenuated African swine fever viruses (LA-ASFV), demonstrating remarkable efficacy and offering complete protection of up to 100% against homologous pathogenic ASFV challenges (Pérez-Núñez et al., 2022; Tran et al., 2022). Live attenuated viruses have reduced virulence as a result of deletion of virulence-associated genes, either naturally (King et al., 2011), through passage in tissue culture (Krug et al., 2015), or through genetic modification (Abkallo et al., 2021; Borca et al., 2018). However, for the development of an effective ASFV vaccine using LA-ASFVs, specific cells capable of producing stable ASFV vaccine strains are essential.

Pig-derived ASFV isolates have restricted cell tropism and normally replicate only in primary porcine cells such as blood-derived macrophages, monocytes, and pulmonary alveolar macrophages. Hence, primary monocytes or alveolar macrophages have been employed to investigate the isolation and amplification of ASFV, explore virus-host interactions, and replicate ASFV infections in in vivo models (Franzoni et al., 2017). However, primary cells have disadvantages, including low reproducibility, high batch-to-batch variation, time-intensive procedures, expensive cell extraction, and animal welfare considerations (Meloni et al., 2022). Hence, it is imperative to identify passaged cell lines that support robust ASFV replication, facilitating its isolation and purification, aiding biological investigations, and enabling the development of live attenuated vaccines. To date, a limited number of ASFV strains have been propagated, titrated, and passaged using established cell lines such as IPAM, COS-1, WSL, Vero, and PIPEC (Carrascosa et al., 2011; Meloni et al., 2022). However, there are no commercially available cell lines that have been shown to be suitable for passaging virus from field samples to produce LAVs. In this study, we derived a highly ASFV-permissive homogeneous cell clone from the MA-104 parental cell line, denoted CA-CAS-01-A (CAS-01), and validated it for ASFV replication and stable virus passage.
African Swine Fever Virus Isolation

African swine fever virus (ASFV)-positive spleen samples from wild boar were provided by the National Institute of Wildlife Disease Control and Prevention (NIWDC) in the Republic of Korea. Each spleen sample was immersed in PBS supplemented with 1% penicillin/streptomycin (P/S) and minced. The mashed tissue was centrifuged at 4 °C and 4000 rpm for 10 min, and the supernatant was separated and filtered through a 0.45 µm filter. Viral DNA was then extracted from the filtered suspension, virus positivity was reconfirmed by real-time polymerase chain reaction (RT-PCR), and the material was used immediately in experiments or stored at −80 °C (Fig. 1A). The two ASFV field strains used in this study were ASFV/INJE/11893/2021 and ASFV/INJE/13167/2021. All experiments dealing with ASFV were conducted in accordance with the Standard Operating Procedure (SOP) in the biosafety level 3 (BSL-3) laboratory of the NIWDC in Korea.

Cell Lines and CAS-01 Cell Cloning

Primary pulmonary alveolar macrophages (PAMs) (Optipharm Inc.) were cultured in RPMI-1640 medium (Hyclone™) supplemented with 10% fetal bovine serum (FBS) (Gibco™) and 1% penicillin-streptomycin (Gibco™) in an incubator at 37 °C with a 5% CO₂ atmosphere. The African green monkey kidney epithelial cell line (MA-104, CRL-2378.1™) was cultured in Alpha Modification of Eagle's Minimum Essential Medium (Gibco™) supplemented with 10% FBS and 1% penicillin-streptomycin at 37 °C in a 5% CO₂ environment. Cloning of MA-104 cell subpopulations was performed by the limiting-dilution method. Suspended MA-104 cells were diluted to a mean concentration of 1 cell/well in MEM containing 10% FBS and dispensed into 96-well cell culture plates. Cells were incubated at 37 °C in an atmosphere of 5% CO₂. The initial monolayers from cell cloning were subjected to subsequent sub-cloning for cell amplification. To select highly ASFV-permissive cell clones, monolayers of each cell clone were infected with ASFV for seven passages. Virus replication was determined for the 7th-passage virus by tissue culture infectious dose (TCID50) and RT-PCR assays. The MA-104 cell line was used as the reference, and the cell clone with the highest ASFV replication was further amplified, stocked, and denoted CA-CAS-01-A (CAS-01). CAS-01 cells (Choong Ang Vaccine Laboratories Co., Accession No. KCTC 14568BP) were cultured in Alpha Modification of Eagle's Minimum Essential Medium (Gibco™) supplemented with 10% FBS and 1% penicillin-streptomycin at 37 °C in a 5% CO₂ atmosphere.

Virus Infection and ASFV Passage in CAS-01

CAS-01 cells were cultured at 5 × 10⁵ cells/flask in T25 flasks in medium containing 2% FBS and 1% penicillin-streptomycin. The isolated ASFV/INJE/11893/2021 and ASFV/INJE/13167/2021 strains were then used to infect the cells separately, and the culture medium was replaced with virus production-serum free medium (VP-SFM) (Gibco™). Twenty-four hours later, the medium was changed to VP-SFM supplemented with 2% FBS, and the cells were incubated for 6 days in a 5% CO₂ atmosphere at 37 °C. The collected cell pellet was subjected to two freeze-thaw (F/T) cycles and re-suspended in the supernatant. Then, 2 ml of the suspension was continuously passaged in the same manner as above. The cycle threshold (Ct) value was measured at 7 days post-infection (dpi) for each passage, and viral infectivity was confirmed by immunocytochemistry (ICC) staining using an immunoperoxidase assay.
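The limiting-dilution step above seeds wells at a mean of 1 cell/well, so the number of cells per well is approximately Poisson-distributed. A small sketch (illustrative only, with the Poisson assumption made explicit) shows why sub-cloning rounds are needed to be confident of clonality:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 1.0  # mean cells per well used in the cloning step
p0 = poisson_pmf(0, lam)   # empty wells
p1 = poisson_pmf(1, lam)   # true single-cell (clonal) wells
p_multi = 1.0 - p0 - p1    # wells seeded with 2+ cells (non-clonal)

print(f"empty: {p0:.1%}, single cell: {p1:.1%}, multiple cells: {p_multi:.1%}")
print(f"clonal fraction among occupied wells: {p1 / (1 - p0):.1%}")
```

At a mean of 1 cell/well, roughly a third of wells are empty, a third receive a single cell, and about a quarter start from two or more cells, which is why the initial monolayers are sub-cloned before screening.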
Extraction of Viral DNA

Three hundred microliters of isolated ASFV or ASFV-infected cell lysate was mixed with 200 μl of cell lysis buffer and 20 μl of Proteinase K (Promega™, MC5005). The mixture was heated at 56 °C for 30 min. Viral DNA was extracted in 50 μl volumes using the Maxwell® RSC 48 Instrument with the Maxwell RSC Whole Blood DNA Kit (Promega, AS1520) according to the manufacturer's instructions.

Real-Time Polymerase Chain Reaction (RT-PCR)

The Ct value for the ASFV DNA template was determined using the commercially available VetMAX™ African Swine Fever Virus Detection Kit (ThermoFisher™, A28809) targeting the ASFV P72 gene (B646L). RT-PCR was performed on the QuantStudio™ 6 Flex Real-Time PCR instrument (Applied Biosystems™, 4485691) using the QuantStudio Real-Time PCR Software according to the manufacturer's instructions, for a total of 40 cycles.

Immunocytochemistry (ICC) Using an Immunoperoxidase Assay

ASFV-infected CAS-01 cells were subjected to the ICC assay. To fix the cells, 80% acetone was added after removing the media from the plate, and the cells were allowed to dehydrate and fix for 10 min. Afterward, the acetone was removed and the cells were dried for 10 min at room temperature. Next, cells were washed once with PBS, and rabbit anti-ASFV p30 polyclonal antibody (Creative-diagnostics™, CABT-RM033) was added as the primary antibody at a 1:500 dilution and incubated at 37 °C for 1 h. Following four washes with PBS, polyclonal goat anti-rabbit immunoglobulin (Enzo™, ADI-SAB-301-J) was used as the secondary antibody at a 1:1000 dilution and incubated at 37 °C for 1 h. Finally, cells were washed four times with PBS, and staining was detected using an alkaline phosphatase substrate (ImmPACT® Vector® Red Substrate, Alkaline Phosphatase, Vector, SK-5100). The substrate was washed away with distilled water to avoid over-staining.

Determination of Viral Titers by Hemadsorption Doses (HAD50)

Primary PAM cells were seeded at 5 × 10⁵ cells/ml and incubated at 37 °C in a 5% CO₂ atmosphere. Serial dilutions of the ASFV-infected sample were prepared in a 96-well U-bottom plate. Twelve hours later, primary PAM cells were infected with the tenfold-diluted ASFV-containing samples. At 2 h post-infection (2 hpi), 0.4% red blood cells (RBCs) were added to each well and incubated for 7 days. Wells showing hemadsorption were marked every day for 7 days. Finally, hemadsorption-positive wells were counted, and the titer was calculated using the Reed–Muench method.

Determination of Viral Titers by Tissue Culture Infectious Dose (TCID50)

CAS-01 cells were seeded at 1 × 10⁵ cells/ml in a 96-well plate and incubated at 37 °C in a 5% CO₂ atmosphere. After 12 h of cell incubation, ASFV was serially diluted in a 96-well U-bottom plate and used to infect the cells, which were cultured in 96-well flat-bottom plates for 7 days. Wells showing virus-induced CPE were marked every day for 7 days, and the titer was calculated from the CPE-positive wells using the Reed–Muench method. An immunoperoxidase assay was performed on plates on which CPE was confirmed; for this readout, wells in which more than 50% of the cells were stained were counted as positive in the Reed–Muench calculation.

Statistical Analysis

Statistical analysis was performed using GraphPad Prism software version 6 for Windows (GraphPad Software). Comparisons between CAS-01 and MA-104 TCID50 results were analyzed by the unpaired t-test. *p < 0.05 was regarded as statistically significant.
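Both titration endpoints above use the Reed–Muench method. A minimal sketch of the calculation is below; the well counts are hypothetical, and the function assumes a tenfold dilution series in which the cumulative percent infected crosses 50% somewhere inside the series.

```python
def reed_muench_log10_tcid50(exponents, infected, total):
    """Log10 TCID50 per inoculation volume by the Reed-Muench method.

    exponents -- log10 of each tenfold dilution, most concentrated first
                 (e.g. [-1, -2, ..., -8])
    infected  -- wells showing CPE/HAD at each dilution
    total     -- wells inoculated at each dilution
    """
    uninfected = [t - i for i, t in zip(infected, total)]
    # Cumulative pools: infected counts accumulate toward higher dilutions,
    # uninfected counts accumulate toward lower dilutions.
    cum_inf = [sum(infected[i:]) for i in range(len(infected))]
    cum_uninf = [sum(uninfected[: i + 1]) for i in range(len(uninfected))]
    pct = [100.0 * ci / (ci + cu) for ci, cu in zip(cum_inf, cum_uninf)]

    # Last dilution with >= 50% cumulative infectivity; assumes pct
    # actually crosses 50% within the series.
    i = max(j for j, p in enumerate(pct) if p >= 50.0)
    prop_dist = (pct[i] - 50.0) / (pct[i] - pct[i + 1])
    endpoint = exponents[i] - prop_dist  # tenfold series: one log10 per step
    return -endpoint

# Hypothetical CPE scores for 8 tenfold dilutions, 8 wells each.
print(reed_muench_log10_tcid50(
    exponents=[-1, -2, -3, -4, -5, -6, -7, -8],
    infected=[8, 8, 8, 7, 5, 2, 0, 0],
    total=[8] * 8,
))  # ~5.3, i.e. log10 5.3 TCID50 per inoculated volume
```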
Identification of an ASFV-Permissive Homogeneous Cell Subpopulation from the MA-104 Parental Cell Line

Recently, two research groups identified MA-104 cells as an ASFV-susceptible cell line (Kwon et al., 2022; Rai et al., 2020). However, isolated focal cytopathic effects were also observed in ASFV-infected MA-104 monolayers, pointing to the presence of highly ASFV-permissive subpopulations within the heterogeneous parental MA-104 cells. In a previous study, a homogeneous porcine reproductive and respiratory syndrome (PRRS)-susceptible cell population was identified and isolated from MA-104 cells and named MARC-145 (Kim et al., 1993). In this study, isolated MA-104 subpopulations were infected with ASFV for seven passages, and ASFV replication at the 7th passage was evaluated by real-time PCR (RT-PCR) for ASFV DNA and by TCID50 assays. As shown in Fig. 1B, susceptibility to ASFV infection differed between cell clones compared to the MA-104 parental cell line. Finally, we identified a clone with higher viral replication than the parental cell line, denoted CA-CAS-01-A (CAS-01). CAS-01 cells were deposited in the Korean Collection for Type Cultures (KCTC) under accession no. KCTC 14568BP.

RT-PCR to detect ASFV DNA, as well as HAD50 and TCID50 assays, were performed on the cells or viruses collected at each passage. During serial passaging of ASFV/INJE/11893/2021, the Ct value of ASFV DNA gradually increased from passage 1 to passage 3 (21.703 and 28.608 at passages 1 and 3, respectively). However, we observed a rapid decrease in the Ct value from the 5th to the 12th passage, indicating rapid viral propagation (20.653, 18.236, 13.988, and 13.348 at passages 5, 7, 9, and 12, respectively) (Fig. 2B). Furthermore, during serial passaging of ASFV, the viral titers in CAS-01 cells increased, and virus replication was stably maintained with increasing passages from the 5th passage onward, reaching a peak at the 12th passage with log10 6.8 HAD50/ml and log10 5.5 TCID50/ml (Fig. 2C, D). As shown in Fig. 2A, the infectivity of the isolated ASFV was verified in the initial passage through the ICC staining assay, and hemadsorption was distinctly confirmed in early passages; however, it remained uncertain whether it surpassed 50% of the total wells. Thus, we were unable to obtain TCID50 and HAD50 results for passages 1 and 3. Interestingly, ASFV/INJE/13167/2021 infection yielded a Ct value for ASFV DNA of 18.300 at the 1st passage, while virus replication was detected at 4.0 HAD50/ml and log10 1 TCID50/ml. Similar to infection with the ASFV/INJE/11893/2021 strain, serial passage of ASFV/INJE/13167/2021 exhibited a rapid increase in virus replication (Fig. 2E-G). Together, our data demonstrate that ASFV infection could be detected from the first passage via ICC, and virus titer and hemadsorption were distinctly confirmed in CAS-01 cells, while significantly higher virus replication was detected via TCID50 and HAD50 assays for the passaged virus, indicating the potential of this novel cell line for ASFV isolation, replication, and adaptation.

Next, we assessed and compared the propagation of ASFV in primary PAM and CAS-01 cells by TCID50 assay. The kinetics of P-2 ASFV replication in primary PAM cells showed that virus production increased from 12 hpi, reaching a maximum titer of log10 7.3010 TCID50/ml at 144 hpi (Fig.
3G). On the other hand, the kinetics of P-2 ASFV replication in CAS-01 cells showed that virus propagation increased from 12 hpi, reaching a maximum titer of log10 3.800 TCID50/ml at 48 hpi, after which viral production decreased slightly and fluctuated until 144 hpi (Fig. 3H). Because the early-passaged ASFV had a low virus concentration, the virus titer peaked at 48 hpi and was comparatively lower than that in primary PAM cells at the tested time points. However, as shown in Fig. 3I, the kinetics of P-12 ASFV replication in CAS-01 cells showed that virus production increased from 12 hpi (log10 3.2504 TCID50/ml), reaching a maximum titer of log10 7.0000 TCID50/ml at 144 hpi. Taken together, these results suggest that cell-adapted ASFV can efficiently replicate in CAS-01 cells.

Cytopathic and Hemadsorption (HAD) Properties of ASFV-Infected CAS-01 Cells

Further, to determine the infectivity of cell-passaged ASFV in CAS-01 cells, we observed its cytopathic effect and hemadsorption properties. As shown in Fig. 4A, ASFV-infected CAS-01 cells exhibited a prominent cytopathic effect (CPE) that was detectable at 6 days post-infection (dpi). ASFV-infected primary PAM cells, by contrast, have a rounded morphology with massive vacuolization of the cytoplasm and detach from the culture plate, so proper CPEs could not be observed in these cells. In contrast, a clear CPE was observed in CAS-01 cells upon ASFV infection at the same multiplicity of infection (MOI): as indicated by the arrows, CAS-01 cells acquired a rounded morphology and formed clusters. It is also informative to evaluate the infectivity of the produced ASFV by exploiting a characteristic feature of swine monocyte infection, the development of a rosette of erythrocytes around the infected cell; this "hemadsorption" is the basis of a conventional assay widely used both for diagnostic purposes and for virus titration. As shown in Fig. 4B, hemadsorption was observed at 6 dpi in CAS-01 cells, similar to that in primary PAM cells. Our results demonstrate that cell-passaged ASFV-infected CAS-01 cells form rosettes with a phenotype very similar to those typically observed in ASFV-infected primary PAM cells.

Discussion

Suitable in vitro systems for the detection, isolation, and manipulation of field isolates of pathogenic ASFV are available only for porcine primary macrophages and monocytes derived from peripheral blood or other tissues (Meloni et al., 2022). The quality of primary cell preparations may vary from batch to batch owing to differences in the health of donor animals and in preparation techniques, and the generation of primary macrophages is time-consuming, expensive, and prone to contamination leading to cell waste (Gao et al., 2022). Moreover, using primary cells to produce large-scale vaccines is ethically challenging and therefore not feasible. Consequently, the development of safe and effective ASFV vaccines and diagnostics has been hampered by the lack of continuous cell lines suitable for ASFV isolation and propagation (Gao et al., 2022; Masujin et al., 2021). In light of these obstacles, the development of sustainable cell lines susceptible to ASFV infection is urgently needed.
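Returning to the serial-passage Ct values reported in the results, a Ct shift can be translated into an approximate fold change in viral DNA. The sketch below assumes ideal amplification efficiency (a perfect doubling per cycle), which real qPCR runs only approximate:

```python
def fold_change_from_ct(ct_ref: float, ct_test: float,
                        efficiency: float = 1.0) -> float:
    """Approximate fold change in template abundance implied by a Ct shift.

    Assumes (1 + efficiency)-fold amplification per cycle; efficiency=1.0
    corresponds to perfect doubling.
    """
    return (1.0 + efficiency) ** (ct_ref - ct_test)

# Ct values for ASFV/INJE/11893/2021 across passages, from the text.
ct_by_passage = {1: 21.703, 3: 28.608, 5: 20.653, 7: 18.236,
                 9: 13.988, 12: 13.348}
baseline = ct_by_passage[1]
for p, ct in sorted(ct_by_passage.items()):
    fc = fold_change_from_ct(baseline, ct)
    print(f"passage {p:2d}: Ct {ct:6.3f} -> {fc:8.1f}x vs passage 1")
```

Under this idealized assumption, the drop from Ct 21.703 (passage 1) to 13.348 (passage 12) corresponds to roughly a 300-fold increase in ASFV DNA.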
In this study, we derived and validated highly ASFV-permissive CAS-01 cells capable of supporting ASFV replication and passage. First, we isolated the highly ASFV-permissive CAS-01 cell subpopulation from the MA-104 parental cell line by cell cloning. Second, we demonstrated stable replication and adaptation of the isolated ASFV/INJE/11893/2021 and ASFV/INJE/13167/2021 strains over serial passages, and we observed higher infectivity of the passaged virus than of un-passaged ASFV in CAS-01 cells. Third, replication of passaged ASFV in CAS-01 cells was confirmed by the Ct value for ASFV DNA, the HAD50/ml assay, and the TCID50/ml assay, and cytopathic effects and hemadsorption were observed similar to those in primary porcine cells. Taken together, our results strongly indicate that ASFV can be efficiently isolated and propagated using CAS-01 cells, which may be useful for the development of cell-adapted vaccines against ASFV.

ASFV isolation rates may vary with sample conditions, and the amount of virus at the beginning of passage is important for viral adaptation. Recently, the MA-104 cell line was used to identify infectious ASFV strains in clinical samples (Kwon et al., 2022; Rai et al., 2020). However, the parental MA-104 cell line has been shown to exhibit heterogeneous permissiveness to PRRS infection (Kim et al., 1993). Since we observed similar results in a preliminary study of ASFV infection in MA-104 cells, the current study was designed to clone a highly ASFV-permissive homogeneous cell population from the heterogeneous parental MA-104 cell line. As a result, we selected CAS-01 cells, which exhibit significantly higher ASFV replication during cell passage than other cell clones.

Serial passage of ASFV may lead to better adaptation and viral replication of ASFV field isolates. Previous studies have demonstrated that ASFV can be attenuated by serial passage in cell culture, suggesting the practical application of this technique for virology research, including vaccine development (Zhang et al., 2023). Owing to repeated passages and cell adaptation in vitro, the viral genome can undergo genetic and phenotypic changes (Borca et al., 2021; Krug et al., 2015). ASFV adapted to passage in cells typically carries prominent deletions in the genome, particularly in the variable regions at both ends (Pires et al., 1997; Tabares et al., 1987). Generally, cell-adapted ASFV strains show decreased virulence and immunogenicity in swine (Sanford et al., 2016). However, some cases showed the same level of attenuation and better protective efficacy (Borca et al., 2021). The exact relationship between genomic alterations in cell-adapted ASFV and immunogenic characteristics remains unclear and requires further study. Nevertheless, this strategy is an essential methodology for the development of live attenuated ASFV vaccines for commercial production. In this study, ASFV/INJE/11893/2021 and ASFV/INJE/13167/2021 were continuously passaged in CAS-01 cells for 12 passages, and stable virus propagation and maintenance were confirmed. As reported in the previous literature (Pires et al., 1997; Tabares et al., 1987), prominent genome alterations are possible in ASFV viruses passaged in CAS-01 cells. To validate this critical hypothesis, our future investigations will focus on analyzing the entire genome sequence of passaged ASFV, with the aim of verifying any genomic alterations that may occur during passage in CAS-01 cells, and on performing vaccine studies including cellular and humoral immune responses and protective efficacy following cell-passaged
ASFV challenge under in vivo conditions. Interestingly, upon infection with cell-passaged ASFV, CAS-01 cells showed clear cytopathic effects, including rounding and cluster formation with vacuolated cytoplasm, compared with primary PAM cell infection. In addition, clear rosette formation was confirmed in CAS-01 cells through the hemadsorption assay. These results suggest that CAS-01 cells are useful for ASFV isolation and virus passage. Moreover, the growth characteristics of cell-passaged ASFV in CAS-01 cells showed high viral replication efficacy, similar to ASFV replication in primary PAM cells.

In conclusion, we demonstrated that the CA-CAS-01-A (CAS-01) cell line is highly susceptible to ASFV infection and useful for the propagation of virulent ASFV strains. This cell line can be maintained in research or vaccine laboratories and exhibits numerous valuable properties for the isolation, replication, and adaptation of ASFV. Therefore, CAS-01 cells will be invaluable for advancing our understanding of ASFV and for developing technologies to combat it, such as live attenuated vaccines.

Fig. 1: Isolation of African swine fever virus and screening of MA-104 cell subpopulations for ASFV infection.
Multiple regulation pathways and pivotal biological functions of STAT3 in cancer

STAT3 is both a transcription activator and an oncogene that is tightly regulated under normal physiological conditions. However, abundant evidence indicates that STAT3 is persistently activated in several cancers, where it holds a crucial position in tumor onset and progression. In addition to its traditional role in cancer cell proliferation, invasion, and migration, STAT3 also promotes cancer by altering gene expression via epigenetic modification, inducing epithelial–mesenchymal transition (EMT) phenotypes in cancer cells, regulating the tumor microenvironment, and promoting cancer stem cell (CSC) self-renewal and differentiation. STAT3 is regulated not only by the canonical cytokines and growth factors, but also by G-protein-coupled receptors, cadherin engagement, Toll-like receptors (TLRs), and microRNAs (miRNAs). Despite the presence of diverse regulators and its pivotal biological functions in cancer, no effective therapeutic interventions are available for inhibiting STAT3 and achieving potent antitumor effects in the clinic. An improved understanding of the complex roles of STAT3 in cancer is required to achieve optimal therapeutic effects.

Toll-like receptors. STAT3 is also directly activated by TLR stimulation during the production of IgG by human B cells 10, and TLR-mediated STAT3 activation is required for antibody production and IL-10 production 10. As a classical activator of TLR4, lipopolysaccharide (LPS) can remarkably increase the level of phosphorylated STAT3 in the human bladder cancer T24 cell line, indicating activation of STAT3 by TLR4 signaling 11. The activation of TLR3 during oxidative stress protects photoreceptor survival and visual function; in this TLR3-mediated protection during injury, STAT3 is activated and has a critical role 12,13. STAT3 activation is also correlated with high expression of TLR2 in tumor tissue 14. In addition, TLR7 ligation can induce STAT3 activation and interact with Notch as well as the canonical NF-κB and MAP kinase pathways 15. Beyond cytokines and growth factors, CpG can directly activate STAT3 within minutes via TLR9. This finding reveals a second mechanism by which STAT3 mediates immunosuppression 16 while creating a potent checkpoint or inhibitor of the antitumor immune response 17. The mechanism by which TLR9 activates STAT3 was recently demonstrated: JAK2 is recruited by Frizzled 4 (FZD4) and then activated upon TLR9 engagement with CpG oligodeoxynucleotides (ODNs), linking CpG-TLR9-FZD4 signaling to subsequent STAT3 tyrosine phosphorylation 18.

miRNAs. Recent studies have indicated that miRNAs are critical regulators of STAT3 signaling in the pathogenesis of cancer. MiR-519d functions as a tumor suppressor in breast cancer by suppressing STAT3 expression 19. A low expression level of miR-20a, a negative regulator of STAT3, can de-repress STAT3 expression and activation and boost proliferation pathways in hepatocellular carcinoma, suggesting that miR-20a may represent a novel potential therapeutic target and a biomarker for the survival of cancer patients 20. Let-7 miRNA family members are widely considered to be tumor suppressors.
Let-7 re-expression in poorly differentiated PDAC cell lines can enhance the cytoplasmic expression of suppressor of cytokine signaling 3 (SOCS3), which blocks STAT3 activation by JAK2, and reduce the phosphorylation of STAT3 and its downstream signaling events, thereby reducing the growth and migration of PDAC cells 21. Iliopoulos and colleagues revealed that Src activation triggers a nuclear factor (NF)-κB-mediated inflammatory response that directly activates LIN28 transcription, which leads to let-7 inhibition and causes high expression of IL-6 coupled with the activation of STAT3. Their study demonstrated that the interaction of let-7 and IL-6-STAT3 completes a negative-feedback loop in cellular transformation and was the first to describe the importance of epigenetic regulation in promoting inflammation and cancer 22. Additionally, downregulation of miR-200 and let-7 via STAT3 can induce the EMT phenomenon in breast cancer; conversely, inactivation of STAT3 or re-expression of both miRNAs proved sufficient to induce mesenchymal-to-epithelial transition (MET) in mesenchymal breast cancer 23.

Tyrosine phosphatases. Tyrosine phosphorylation catalyzed by protein tyrosine kinases (PTKs) is critical for STAT3 activation. By contrast, the dephosphorylation of STAT3 by PTPs, including SHP2, SHP1, CD45, PTP1B, PTP2B, and PTPRT, is essential to ensure proper amplitude and kinetics of STAT3 activation 24. Negative regulation of STAT3 by SHP2 has been observed in melanoma cells and glioma cells 25,26, and morin inhibits STAT3 tyrosine 705 phosphorylation in tumor cells through activation of the protein tyrosine phosphatase SHP1 27. In addition, adiponectin significantly inhibits leptin-induced JAK2 activation and STAT3 transcriptional activity by increasing PTP1B protein levels and activity in oesophageal cancer cells 28.

PIAS protein family. The protein inhibitor of activated STAT (PIAS) proteins, which vary in length between 507 (PIASy) and 650 (PIAS1) amino acid residues, are encoded by four genes, namely PIAS1, PIASx (PIAS2), PIAS3, and PIASy (PIAS4). PIAS proteins regulate transcription through several mechanisms, including blocking the DNA-binding activity of transcription factors, recruiting transcriptional co-repressors, and promoting protein SUMOylation. Recent studies have shown that PIAS proteins can negatively regulate the activity of STAT3 29-31.

SOCS protein family. The SOCS (suppressor of cytokine signaling) family contains eight members (CIS, SOCS1, SOCS2, SOCS3, SOCS4, SOCS5, SOCS6, and SOCS7) 32. The SOCS proteins negatively regulate the JAK-STAT3 signaling pathway through three mechanisms: first, by inhibiting JAK kinases or targeting JAKs for degradation by the proteasome; second, by shielding the STAT3 binding sites on the cytokine receptor; and third, by targeting proteins for proteasomal degradation via ubiquitination. Among these proteins, SOCS1 and SOCS3 are the best characterized so far. A recent study showed that SOCS1 and SOCS3 can promote myogenic differentiation by inhibiting JAK1 and gp130, respectively 30. Platelet factor 4 (PF4) inhibits the IL-17/STAT3 pathway by upregulating the expression of SOCS3 33.

Other regulation patterns of STAT3. Apart from phosphorylation at Tyr705, STAT3 can also be activated by phosphorylation of Ser727. Various serine kinases, such as the MAPKs (p38 MAPK, ERK, and JNK), PKCδ, mTOR, NLK, and an H-7-sensitive kinase, have been reported to phosphorylate STAT3 at serine 727, which is required for maximal STAT3 transcriptional activity 34-39.
NF-κB activation is a well-known player that promotes the production of IL-6, which stimulates STAT3 activation. Interestingly, a recent study indicated that STAT3 is responsible for eliciting constitutive NF-κB activity in human melanoma and prostate cancer cells 40 . This finding reveals a STAT3 → NF-κB → IL-6 feed-forward signaling loop in carcinogenesis and gradually clarifies the molecular mechanism linking inflammation to cancer [41][42][43] . Other factors such as UV radiation or sunlight, carcinogens, stress, smoke, and infection are also known to have a significant role in STAT3 activation (Fig. 1).

STAT3 regulates gene expression through epigenetic modification during cancer progression

Although the persistently phosphorylated form of STAT3 has been found in several cancers, where it drives gene expression promoting cell proliferation and resistance to apoptosis, as well as tumor angiogenesis, invasion, and migration, unphosphorylated STAT3 also acts as a weak but potentially biologically relevant transcription factor that can activate a series of STAT3 target genes 44,45 through direct binding to a responsive GAS promoter and can promote the development of cancer. Unphosphorylated STAT3 binds target DNA by regulating chromatin organization and by binding to AT-rich DNA sequences, which play important roles in the regulation of gene expression and/or chromatin organization because their special structure, with a narrow minor groove, can be recognized by proteins 46 . Unphosphorylated STAT92E proteins can maintain heterochromatin via the regulation of histone H3 Lys-9 trimethylation (H3K9me3) in Drosophila 47,48 . The epigenetic modification function of unphosphorylated STAT3 was also identified by Timofeeva 1 , who suggested that cancer cells such as DU145 and MCF-7 cells have a more open, or at least more accessible, chromatin conformation than the non-transformed MCF-10A cells, thereby allowing unphosphorylated STAT3 binding, suppression of CHOP expression, and subsequent inhibition of the apoptosis of cancer cells, with the involvement of the N-terminal domain of STAT3. Epigenetic silencing of gene promoter regions, mediated through CpG methylation by DNA methyltransferase 1 (DNMT1) and other members of the DNMT family, has a key role in the inhibition of tumor-suppressor gene expression in cancer cells. STAT3 acetylation, another activated form of STAT3, can also regulate DNMT1 binding to several tumor-suppressor gene promoters, thereby promoting the methylation of these promoters and the development of cancer. STAT3 is acetylated on a single lysine residue, Lys685, by its co-activator p300/CREB-binding protein (CBP) in response to cytokine treatment, such as IL-6, LIF, and OSM [49][50][51] . Acetylated STAT3 induces promoter methylation of a major tumor-suppressor gene, ARHI, thereby lowering the expression level of ARHI and promoting cancer cell proliferation in ovarian cancer 2 . Several other tumor-suppressor genes, including CDKN2A, DLEC1, STAT1, and PTPN6, can also undergo promoter methylation induced by acetylated STAT3 in cancer cell lines 3 .

Pivotal biological functions of STAT3 in cancer

STAT3 is involved in EMT, promoting cancer invasion and metastasis.
As a phenotypic switch, EMT is characterized by cells losing epithelial polarity and acquiring mesenchymal characteristics, resulting in decreased cell-cell junctions and promoting the invasive and metastatic abilities of cells. Ample evidence shows that EMT phenotypes play a significant role in promoting the progression of many cancers, such as non-small cell lung cancer (NSCLC) 52 , ovarian carcinomas 53 , hepatocellular carcinoma (HCC) 54 , breast cancer 55 , and nasopharyngeal cancer 56 . Various studies have demonstrated that STAT3 can modulate the expression of EMT-related transcription factors (Twist, Snail, ZEB1, etc.) and thereby influence EMT phenotypes. For instance, in HCC cells, STAT3 was first revealed to bind to the promoter of Twist, mediate its transcriptional activity, and then promote the EMT process and increase cell invasion and migration 57 . In breast cancer, STAT3 activation by EGF treatment induced higher Snail expression, and this high expression of Snail was reversed by N-myc downstream-regulated gene 2 (NDRG2), which inhibits STAT3 binding to the Snail promoter and thereby suppresses the EMT process and cancer progression 58 . Additionally, prolonged activation of STAT3 leads to low expression of let-7 and miR-200 coupled with upregulation of ZEB1 in OSM-triggered EMT, which contributes to the acquisition of the mesenchymal phenotype and invasive capability as well as the promotion of breast cancer progression 23 . These findings suggest that STAT3 integrates signals from multiple extracellular stimuli that influence the EMT phenotype, regulates the levels of EMT-related transcription factors, and enhances the invasive and metastatic abilities of cancer cells. Therefore, targeting STAT3 may provide a means to reverse EMT phenotypes and prevent cancer invasion and metastasis.

STAT3 in the tumor microenvironment. As a major regulator of tumorigenesis, tyrosine-phosphorylated STAT3 has been detected mainly at the leading edge of tumors in association with stromal, immune, and endothelial cells 59 . This effectively suggests that STAT3 has a critical role in the communication between cancer cells and their microenvironment, which has the following aspects. (A) Production of humoral factors. For instance, paracrine IL-6 from cancer-associated fibroblasts, adipocytes, or myeloid cells at the edge of tumors and autocrine production of IL-6 can both activate STAT3. pSTAT3 in turn promotes the expression of IL-6, thus forming an amplification loop of IL-6 production, which can induce the broad expression of autocrine and paracrine cytokines and growth factors, including IL-8, CCL5, CCL2, CCL3, IL1-β, GM-CSF, VEGF, and MCP-1, which play an important role in the generation and development of cancer. (B) Interaction with fibroblasts, adipocytes, and macrophages. Cancer-associated fibroblasts (CAFs) can promote cancer progression via remodeling of the ECM, induction of angiogenesis, recruitment of inflammatory cells, and direct stimulation of cancer cell proliferation through the secretion of growth factors and mesenchymal-epithelial cell interactions; these effects are mainly regulated by the IL-6-STAT3-Twist signaling pathway through upregulation of the expression of CXCL12 60 , a Twist target gene associated with the regulation of the CAF phenotype.
When cancer-associated adipocytes isolated from breast cancer patients are co-cultured with the MCF7 and MDA-MB-231 breast cancer cell lines, the adipocytes revert to an immature, proliferative phenotype and promote cancer cell migration via high expression of IL-6 61 . Growing evidence also indicates that activated STAT3 participates in angiogenesis regulation, with a critical role 67 . VEGF and bFGF are known to be involved in endothelial cell proliferation, extracellular matrix degradation, endothelial cell migration, and modulation of junctional adhesion molecules; both have been described as leading mediators of angiogenesis that can be upregulated by activated STAT3 in glioblastoma stem cells 68 , papillary thyroid cancer 69 , and colorectal cancer 70 , thereby promoting the formation of new blood vessels and the development of cancer. STAT3 is thus involved in various aspects of the tumor microenvironment, cultivating a favorable environment for cancer development. Targeting STAT3 presents a feasible strategy to weaken the supporting function of the tumor microenvironment and improve the therapeutic effect for cancer.

STAT3 regulates CSCs. Given its important role in sustaining the self-renewal and differentiation of embryonic stem cells (ESCs) [71][72][73] , STAT3 is also evidently essential for regulating the CSCs of cancers such as ovarian cancer 74 , HCC 75 , breast cancer 76 , colorectal cancer 77 , glioblastoma 78 , lung cancer 79 , and prostate cancer 80 . The STAT3 regulatory mechanism of stem cell self-renewal and differentiation has mainly been studied through the ESC-specific roles of LIF. When LIF binds to LIFR and gp130, the heterodimerized complex can activate STAT3, which is then bridged by Bcl3 to Oct4 signaling to maintain the pluripotency of ESCs 71,81 . Other IL-6 family members, such as OSM, CNTF, CTF-1, and CLC, which induce heterodimerization of gp130 with the LIF receptor, can also maintain the self-renewal and differentiation of stem cells because of their shared signaling mechanisms converging on STAT3. As a multifunctional cytokine, IL-6 has been implicated in the maintenance of cancer stem cells through the IL-6/gp130/STAT3 signaling pathway. In gene expression profiles of CD44+/CD24− breast CSCs, IL-6 has been demonstrated to be upregulated 15 . Liu 79 showed that the IL-6/JAK2/STAT3 pathway upregulates DNMT1 and enhances cancer initiation and lung CSC proliferation via downregulation of p53 and p21, which results from DNA hypermethylation. In addition, the IL-6/STAT3 and NF-κB signaling pathways are both activated in CSCs and their microenvironment 82,83 . Activation of these pathways stimulates further cytokine production and generates positive feedback loops that in turn drive CSC self-renewal. Furthermore, the constitutive activation of STAT3/NF-κB signaling can regulate the Notch pathway, which appears to play a key role in the CSCs of a variety of cancers and controls cell fate determination, survival, proliferation, and the maintenance of stem cells 84 . Although activation of STAT3 via IL-6 has been identified as necessary for promoting CSC-like phenotypes, other STAT3 activators are also involved in the regulation of CSCs. Conti 85 first showed a role for TLR2 in mammary CSC self-renewal: binding of its ligand HMGB1 increases the secretion of IL-6 and subsequently activates the STAT3 signaling pathway.
Downregulation of miR-1181 can promote CSC-like phenotypes in human pancreatic cancer by promoting the STAT3 signaling pathway and the activation of the CSC transcription factor SOX2 86 . RhoC expression is found to be correlated with CSC formation in head and neck squamous cell carcinoma (HNSCC). RhoC elevates the expression level of IL-6 and then promotes the phosphorylation of STAT3 Ser727 and STAT3 Tyr705 as well as the high expression of Nanog, Oct3/4, and Sox2 in HNSCC 87 . In addition, a novel EGFR/STAT3/Sox-2 paracrine signaling pathway, which is required for macrophage-induced upregulation of Sox-2 and CSC phenotypes in tumor cells, has been identified 88 .

STAT3 inhibitors. STAT3 is considered an ideal molecular target for cancer therapy because it plays a pivotal role in tumorigenesis and cancer cell biology. As such, great efforts have been devoted to the discovery of potent and selective inhibitors that target STAT3. STAT3 inhibitors are divided into two types depending on whether the activity of STAT3 is inhibited indirectly or directly. Indirect inhibitors block upstream effectors, such as cytokines and kinases, involved in STAT3 activation. For instance, ALD518, a humanized anti-IL-6 antibody, helps NSCLC patients obtain therapeutic benefits against cachexia, anemia, and drug resistance 89 . WP1066, a JAK2 inhibitor, suppresses ovarian cancer growth, migration, and invasion; this inhibitor also enhances the chemosensitivity of ovarian cancer cells and decreases STAT3 phosphorylation 90 . Direct inhibitors directly block the SH2, DNA-binding, and N-terminal domains of STAT3 to suppress protein dimerization, inhibit DNA binding, and prevent nuclear translocation, respectively. Among these domains, the SH2 domain is the most commonly investigated site because of its critical involvement in STAT3 activation; furthermore, inhibitors targeting the SH2 domain constitute the largest class of direct inhibitors. Inhibitors are also divided into three classes of compounds on the basis of structure. (A) One of these classes comprises peptides and peptidomimetics. Although peptides and peptidomimetics can directly disrupt the dimerization of STAT3 and effectively inhibit its transcriptional activity, these inhibitors present several challenges related to low cell permeability and stability. (B) Another class comprises small-molecule inhibitors. With advances in medicinal chemistry and structure-based applications, including high-throughput virtual screening and site-directed, computational, fragment-based drug design approaches in silico, small-molecule STAT3 inhibitors, which overcome the problems related to cell permeability, have become a feasible means of inhibiting STAT3 activity. Novel synthetic and natural small-molecule STAT3 inhibitors have been evaluated in preclinical models. LY5, a novel non-peptide, cell-permeable small-molecule inhibitor of STAT3 dimerization, blocks STAT3 activation with low IC50 values (0.5-1.4 μM) and strong binding affinity to the STAT3 SH2 domain. LY5 selectively inhibits persistent STAT3 activation and induces the apoptosis of medulloblastoma cells, making it a promising therapeutic drug candidate for human medulloblastoma through inhibition of STAT3 signaling 91 . OPB-31121, an inhibitor assessed in active clinical trials, interacts with and exhibits high affinity for the SH2 domain of STAT3 92 and elicits significant antitumor effects on leukemia 93 and gastric cancer 94 .
Silibinin, a natural polyphenolic flavonoid extracted from the seeds of milk thistle (Silybum marianum), is a potent inhibitor of pSTAT3 in gastric 95 , breast 96 , and prostate 97 cancers in preclinical studies, and clinical trials related to this flavonoid have also been conducted. However, therapeutic effects in cancer patients remain unsatisfactory because the bioavailability of its flavonolignan structure is low 98 . Homoharringtonine (HHT), another natural compound, extracted from Cephalotaxus harringtonia, significantly inhibits STAT3 activity by suppressing the IL-6/JAK1/STAT3 signaling pathway and induces the apoptosis of gefitinib-resistant lung cancer cells. In vivo, HHT remarkably suppresses tumor growth in nude mice injected with H1975 cells, whereas gefitinib does not exhibit a comparable effect; this identifies HHT as a novel potential natural inhibitor for patients with NSCLC that acts in an EGFR-independent manner 99 . (C) The third class comprises oligonucleotides. With the application of advanced molecular techniques, oligonucleotide inhibitors targeting STAT3 appear viable for selectively inhibiting STAT3 activity. A decoy oligonucleotide (ODN) is usually a double-stranded 10-20 base-pair DNA containing a transcription factor's consensus sequence; it selectively inhibits STAT3 activity by competitively binding to the DNA-binding domain of STAT3, thereby effectively attenuating specific gene expression. A first-in-human trial of a STAT3 decoy oligonucleotide in head and neck tumors has recently been assessed. With improvements in cyclization, STAT3-targeting ODNs seem more amenable to systemic administration and can yield optimal effects by downregulating STAT3 target genes and suppressing tumor growth 100 . G-quartet oligonucleotides are G-rich oligodeoxynucleotides that form four-stranded, potassium-dependent intramolecular G-quartet structures and occupy sites within the SH2 domain of STAT3. These oligonucleotides effectively inhibit STAT3 activation and tumor growth in head and neck cancer 101 , NSCLC 102 , and prostate cancer 103 . Nonetheless, their large size and potassium dependence limit their cellular delivery and their prospects for assessment in clinical trials. Small interfering RNA (siRNA) exploits a natural post-transcriptional gene-silencing mechanism to turn off unwanted genes. Targeting STAT3 using siRNA represents a useful approach for the treatment of breast cancer 104 and lung adenocarcinoma 105 . However, further studies should be conducted regarding STAT3 silencing for cancer therapy (Fig. 2).

Conclusion and future directions. Although STAT3 is an ideal target for cancer therapy because of its multiple regulatory pathways and pivotal biological functions in cancer, and although various inhibitors targeting STAT3 have been developed, no candidate compound is potent enough to provide beneficial therapeutic effects for cancer patients. As such, new directions for cancer therapy by targeting STAT3 should be explored. For instance, small-molecule inhibitors of the GPCRs, TLRs, and miRNAs related to STAT3 regulation could be applied to treat cancer. Current targeted anticancer therapeutics mainly focus on inhibiting the tyrosine phosphorylation of STAT3; however, the epigenetic modification function of STAT3 may present a novel and powerful therapeutic approach for cancer treatment.
Therefore, further studies should be conducted to address the open questions regarding STAT3 in cancer and to find the most efficient strategies for inhibiting STAT3 activity and gaining optimal therapeutic effects.
Lipid Nanocapsule-Based Gels for Enhancement of Transdermal Delivery of Ketorolac Tromethamine Previous reports show ineffective transdermal delivery of ketorolac by nanostructured lipid carriers (NLCs). The aim of the present work was enhancement of transdermal delivery of ketorolac by another colloidal carrier, lipid nanocapsules (LNCs). LNCs were prepared by an emulsification-with-phase-transition method and mixed into a Carbomer 934P gel base with oleic acid or propylene glycol as penetration enhancers. Permeation studies were performed in Franz diffusion cells using excised rat abdominal skin. The Aerosil-induced rat paw edema model was used to investigate the in vivo performance. LNCs containing polyethylene glycol hydroxyl stearate, lecithin in Labrafac as the oily phase, and dilution of the primary emulsion with a 3.5-fold volume of cold water produced the optimized nanoparticles. The 1% Carbomer gel base containing 10% oleic acid loaded with the nanoparticles enhanced and prolonged the anti-inflammatory effects of this drug to more than 12 h in the Aerosil-induced rat paw edema model.

Introduction

Lipid nanocapsules (LNCs) are a new generation of biomimetic nanovectors composed of an oily core of medium-chain triglycerides of capric and caprylic acids, known under the commercial name Labrafac, surrounded by a shell composed of lecithin and a pegylated surfactant called Solutol HS 15. Solutol is a mixture of free PEG 660 and PEG 660 hydroxystearate and is oriented towards the water phase. Lecithin is composed of 69% soya bean phosphatidylcholine and is generally used in small proportions to significantly increase LNC stability [1,2]. The structure of LNCs mimics lipoproteins [3,4], while being a hybrid between polymer nanocapsules and liposomes. LNCs present great physical stability for up to 18 months, with sizes ranging from 20 to 100 nm. They are prepared by phase inversion of an oil/water emulsion through thermal manipulation, in the absence of organic solvents and with good monodispersion [5]. The aqueous phase consists of MilliQ water plus sodium chloride, which helps to decrease the phase-inversion temperature (PIT) [5,6]. Preparation of LNCs involves two steps. In the first step, all mixed components are heated from room temperature up to a temperature T2, above the PIT, to obtain a W/O emulsion. The temperature is then dropped to T1, below the PIT, by a cooling process that leads to the formation of an O/W emulsion. After several temperature cycles between T2 and T1, the temperature is set 1-3 °C below the onset of the O/W emulsion before dilution. In the second step, a sudden dilution with cold water added to the mixture causes an irreversible shock that breaks the microemulsion system, and stable nanocapsules are formed [7]. Three temperature cycles of heating and cooling at a rate of 4 °C/min are usually applied between 85 and 60 °C [5,8]. The gastrointestinal side effects of nonsteroidal anti-inflammatory drugs (NSAIDs) have limited their wide oral use as analgesics in the treatment of local inflammation. This has prompted researchers to investigate the feasibility of alternative dermal and/or transdermal drug delivery systems. Ketorolac is a pyrrolizine carboxylic acid derivative among the NSAIDs with potent analgesic and moderate anti-inflammatory activity, and it is a relatively favorable therapeutic agent for the management of moderate to severe pain [18].
Ketorolac tromethamine is administered intramuscularly and orally in divided multiple doses for the short-term management of postoperative pain. Its oral bioavailability is 90%, with very low first-pass metabolism. However, the drug is reported to cause severe gastrointestinal side effects such as gastrointestinal bleeding, perforation, peptic ulceration, and acute renal failure [19]. Because of the short half-life (4 to 6 h) of ketorolac, frequent dosing is required to alleviate pain. To avoid intramuscular injection and frequent dosing regimens, dermal and transdermal delivery of ketorolac is an attractive alternative. Additionally, the high analgesic activity and low molecular weight of ketorolac make it a good candidate for transdermal delivery. Several transdermal delivery strategies, such as the use of permeation enhancers [20], proniosomes [21], prodrugs [22], iontophoresis [23], ultrasound [24], cyclodextrins and liposomes [25], and nanostructured lipid carriers (NLCs) [26], have been developed so far. NLCs are mixtures of solid and liquid lipids (oils), which provide greater solubility for drugs than solid lipids. These ketorolac nanostructures were nonetheless ineffective in increasing the percutaneous absorption of the drug, owing to the high degree of mutual interaction between the drug and the carrier lipid matrix. For this reason, we propose another colloidal lipid nanocarrier, the LNC, for transdermal delivery of ketorolac; the high content of hydrophilic surfactants in LNCs may overcome the problem encountered with the previous nanoparticles of this drug and reduce the degree of interaction between the drug and the nanoparticles. The LNCs are prepared by an emulsification-phase inversion process with 10-40% or more of surfactants and contain no organic solvent.

Preparation and Optimization of LNCs Using Taguchi Design. Table 1 displays the four control factors that were selected in the optimization study. A standard orthogonal array L9 [27] was used to examine this four-factor system. L and the subscript 9 denote the Latin square and the number of experimental runs, respectively. A run involved the corresponding combination of levels to which the factors in the experiment were set. All studied factors had three levels. All experiments were performed in triplicate. The four studied responses were particle size, zeta potential, loading efficiency, and drug release efficiency percent at 65 min (RE65%). The experimental results were then analyzed with the Design Expert software (version 7, USA) to extract independently the main effects of these factors, followed by analysis of variance (ANOVA) to determine which factors were statistically significant. The emphasis was on identifying the controlling factors, quantifying the magnitudes of their effects, and identifying the statistically significant effects. The optimum conditions were determined by Taguchi's optimization method [28] to yield heightened performance with the lowest possible effect of the noise factor. To prepare the LNCs, 400 mg of drug was dissolved in 2.73 mL of aqueous phase containing 1.75% NaCl (relative to the aqueous phase) and different amounts of polyethylene glycol hydroxyl stearate as the surfactant (according to Table 1). The oily phase was Labrafac, which contained lecithin as the stabilizing agent. The amount of each variable is shown in Table 1. The two phases were added to each other on a magnetic stirrer, and the mixture temperature was raised gradually from room temperature to 85 °C over 15 min. It was then cooled to 25 °C.
Three temperature cycles (85-60-85-60-85 °C) were applied to reach the inversion process. The temperature of the mixture before dilution was set at 57 °C, in the O/W emulsion state. Step II was an irreversible shock induced by dilution (1.2-3.5-fold) with cold deionized water (0 °C) added to the mixture maintained at the previously defined temperature. This fast-cooling dilution process led to the formation of stable nanocapsules. Afterwards, slow magnetic stirring was applied to the suspension for 5 min [7].

Particle Size and Zeta Potential of the LNCs. The size and zeta potential of all drug-loaded LNC samples were measured by photon correlation spectroscopy (PCS, Zetasizer 3000, Malvern, UK). All samples were diluted 1:10 with deionized water to obtain the optimal 50-200 kilo counts per second (kcps) for measurement. The intensity Z-average particle size, polydispersity index, and zeta potential were measured.

Morphology Study. The morphology of the LNCs was characterized by scanning electron microscopy (SEM). The nanoparticles were mounted on aluminum stubs, sputter-coated with a thin layer of Au/Pd, and examined using an SEM (Seron Technology 2008, Korea).

Drug Loading Efficiency in LNCs. Entrapment efficiency percent (EE%) was determined by measuring the concentration of unentrapped free drug in the aqueous medium [29]. The aqueous medium was separated by centrifugation (Sigma 3K30, Germany). About 0.5 mL of the LNC dispersion was placed in Eppendorf Amicon Ultra centrifugal filters with a cut-off of 10 kDa and centrifuged at 15,000 rpm for 10 min. The drug encapsulated in the nanoparticles was thereby separated, and the amount of free ketorolac in the aqueous phase was estimated by UV spectroscopy at λmax = 319.3 nm. The EE% was calculated as

EE% = [(total drug − free drug)/total drug] × 100, (1)

and the loading percentage was calculated analogously.

Preparation and Optimization of LNC-Based Gels Using Factorial Design. Three different variables, each at 2 levels, were evaluated for the preparation of the gel bases (Table 2). The gel formulations were prepared using a 2-level factorial design. Carbomer was dispersed in water using an overhead stirrer at a speed of 600 rpm for 3 h. The Carbomer gels were diluted to a final concentration of 0.5-1% with the optimized formulation of LNCs and then neutralized using 0.5 w/w% triethanolamine. One of the absorption enhancers (oleic acid or propylene glycol) was then added at different concentrations (Table 2).

Skin Permeation through Excised Hairless Rat Skin. In vitro permeation of ketorolac from the various gel formulations was evaluated using full-thickness abdominal skin excised from adult Wistar rats weighing 150-180 g. The visceral side of the freshly excised skin was cleaned free of any adhering subcutaneous tissue. The skin samples were mounted on Franz diffusion cells with a diameter of 2.6 cm and a receptor volume of 28 mL, such that the dermal side of the skin was exposed to the receptor fluid and the stratum corneum remained in contact with the donor compartment. PBS (pH 7.4) was placed in the receptor compartment and stirred continuously with a magnetic stirrer. The receptor medium was water-jacketed at 37 °C. On the epidermal side of the skin, 1 g of the gel was spread evenly. Samples of 2 mL were withdrawn from the receptor medium and replaced with fresh medium at 0.5, 1, 2, 4, 16, and 17 h. The samples were analyzed spectrophotometrically for the content of ketorolac at 323 nm.
Blank formulations (without drug) were used as references for the determination of ketorolac to negate any possible interference from skin or formulation components. The cumulative amount of drug (Q) permeated through the skin was plotted as a function of time (t). The drug concentration in the donor cell (C_d) and the diffusional surface area (S) were used for the calculation of the permeability coefficient (P):

P = J_s/C_d. (2)

The flux (J_s) was calculated from (3), in which dQ/dt is the amount of drug flowing through a unit cross-section (S) of the skin in unit time (t):

J_s = (dQ/dt)/S. (3)

To obtain the diffusion coefficient (D) of the drug through the skin, (4) was used:

D = h^2/(6 t_L), (4)

in which t_L is the lag time of drug permeation and h is the thickness of the rat skin. Finally, the partition coefficient (K_m) of the drug between skin and vehicle was obtained from:

K_m = P h/D. (5)

All the experiments were performed in triplicate. After optimization of the gel formulation according to the highest skin permeability, the optimized gel was applied in in vivo studies of the alleviation of Aerosil-induced paw edema in rats.

Aerosil-Induced Paw Edema in Rats. Male Wistar rats were divided into 6 groups of six rats, each group receiving a different topical treatment. A volume of 0.1 mL of a 2.5% Aerosil suspension in distilled water was injected into the right hind foot of each rat. Immediately after the injection of Aerosil, the rats of the test groups were administered the developed optimized LNC-based gels containing 0.5% or 2% ketorolac, the 2 standard groups were treated with traditional gels of 0.5% or 2% free ketorolac in the same gel base as the LNCs, the control group received no treatment, and another group received the blank vehicle. Measurement of the foot volume was performed by the displacement technique using a plethysmograph (Ugo Basile, Italy) immediately before and 2, 4, 8, 12, and 24 h after the injection of Aerosil. The edema inhibition rate (I) after the different treatments was calculated using:

E = (V_t − V_o)/V_o, I(%) = [(E_c − E_t)/E_c] × 100, (6)

where V_o is the mean paw volume before Aerosil injection, V_t is the mean paw volume after Aerosil injection, E_c is the edema rate of the control group, and E_t is the edema rate of the treated group [30].

Statistical Analysis. SPSS software version 11.5 was used for all statistical analyses. One-way analysis of variance (ANOVA) followed by Tukey's post hoc test was used for comparisons between the cumulative percentages of drug released at the end of each release test. In vivo data are expressed as mean ± SD. Differences between mean values were analyzed using one-way ANOVA followed by Dunnett's post hoc test. A significance level of P < 0.05 denoted significance in all cases.

Results and Discussion

Different formulations of ketorolac LNCs were prepared according to Table 3 and were characterized for their physical properties, including particle size, surface charge (zeta potential), drug loading efficiency percent, and release efficiency at 65 min of the release test (RE65%). The results are shown in Table 3. Optimization was performed to obtain the optimal point subject to the constraints that the particle size be at its minimum level and the absolute value of the zeta potential at its maximum level, while the release efficiency percent and loading efficiency percent remained within their ranges (Table 3). The optimized formulation of LNCs was predicted by the Design Expert software to contain 20% polyethylene glycol hydroxyl stearate (coded as level I), 25% Labrafac, 3.25% lecithin, and dilution with cool water at 3.5-fold the volume of the primary emulsion (all coded as level III in Table 1).
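As a concrete illustration of the L9 main-effects analysis described above, the following minimal Python sketch builds the standard L9(3^4) orthogonal array and averages a response over the runs at each level of each factor. The particle-size numbers are hypothetical placeholders, not the study's data, and the sketch stands in for the Design Expert workflow rather than reproducing it.

```python
import numpy as np

# Standard Taguchi L9 orthogonal array: 9 runs, 4 factors, 3 levels each
# (levels coded 0, 1, 2); each row is one experimental run.
L9 = np.array([
    [0, 0, 0, 0],
    [0, 1, 1, 1],
    [0, 2, 2, 2],
    [1, 0, 1, 2],
    [1, 1, 2, 0],
    [1, 2, 0, 1],
    [2, 0, 2, 1],
    [2, 1, 0, 2],
    [2, 2, 1, 0],
])

# Hypothetical mean particle sizes (nm) for the nine runs
size = np.array([55.0, 48.0, 42.0, 60.0, 51.0, 47.0, 66.0, 58.0, 52.0])

# Main effect of a factor = mean response at each of its three levels;
# the level with the best mean is carried into the optimum setting.
for f in range(L9.shape[1]):
    level_means = [size[L9[:, f] == lvl].mean() for lvl in (0, 1, 2)]
    print(f"factor {f + 1}: level means = {np.round(level_means, 1)}")
```

Because the array is orthogonal, each level of each factor appears in exactly three runs, so these averages isolate one factor at a time.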
The optimized formulation of LNCs was prepared and characterized for its physical properties (Table 3). The nanoparticulate nature of the LNC dispersion was confirmed by SEM studies (Figure 1), although some aggregation of the nanoparticles is evident. The morphology of this optimized LNC formulation is seen in Figure 1. Drug release profiles of the LNC formulations are compared with the optimized formulation in Figure 2. As this figure shows, most of the studied LNCs release about 70% of the loaded drug within 65 minutes, with zero-order or Baker-Lonsdale kinetics, consistent with the particulate nature of the LNCs. Statistical analysis of the release efficiency (RE65%) of the different LNC formulations with the Design Expert software shows that increasing amounts of Solutol and Labrafac decrease the RE65%, whereas increasing the cold water volume during the dilution step increases the RE65%. The lecithin amount had no significant effect on RE65%. Considering the high solubility of the drug in both water and organic solvents, and as there is no burst effect in the release profiles (Figure 3), it seems that the drug is accommodated in the core of the LNC matrix. To optimize the gel base for loading the LNCs, Carbomer gels were loaded with the optimized LNC formulation, and the permeability of the drug through excised hairless rat skin was measured. The different gel base formulations were designed by an irregular 2-level factorial design, and 3 variables, each at 2 levels, were studied (Table 2). The results of the measurements of the permeability, flux, and partition coefficient of the drug between skin and vehicle are seen in Table 4. As shown in Table 4, skin permeation was significantly enhanced by the O10C1 gel. Oleic acid possesses skin-penetration-enhancing ability. Considering the skin structure, water solubility is an important parameter that drastically influences the drug permeation profile. The skin is composed of a comparatively lipophilic stratum corneum and a hydrophilic viable epidermis and dermis. On the basis of the permeability results, the O10C1 gel containing LNCs of ketorolac showed an optimal balance between lipophilicity and hydrophilicity. This behaviour could be explained by the balance between the high percentage of polyethylene glycol hydroxyl stearate (Solutol), the hydrophilic surfactant used in the LNCs, and the lipophilic oleic acid used in preparing the gel base. Comparing the results in Table 4 for the optimized gel containing LNCs of ketorolac with the traditional gel of free ketorolac indicates a 13-fold increase in permeability for the gel-based LNCs. This indicates a significant increase in the permeability and flux of ketorolac when encapsulated in the LNCs, reflecting a much better affinity of the drug for the stratum corneum. On the other hand, Table 4 shows that the K_m value for the gel-based LNCs is much higher than that reported for the NLCs of this drug by Puglia et al. [26]. Generally, a high K_m value indicates that the vehicle has a poor affinity for the drug, and a low K_m value indicates a high degree of mutual interaction and a tendency of the drug to remain in the vehicle [26]. This shows that the LNCs have a lower interaction with ketorolac tromethamine than the NLCs, owing to the higher hydrophilicity of the Solutol used in their formulation. Therefore, the vehicle could not sequester the drug, and the drug was available for diffusion, while in the other gel formulations this balance was not well achieved and skin permeation of the drug was drastically lower (Table 4).
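The permeation metrics compared in Table 4 all follow from equations (2)-(5). The sketch below shows the arithmetic for one Franz-cell run; the cumulative amounts, donor concentration, and skin thickness are invented for illustration and are not the paper's measurements.

```python
import numpy as np

# Cumulative permeated amount Q (ug) at the sampling times t (h);
# illustrative numbers only.
t = np.array([0.5, 1, 2, 4, 16, 17])
Q = np.array([5, 14, 35, 80, 350, 372])

S = np.pi * (2.6 / 2) ** 2   # diffusion area (cm^2) of the 2.6 cm cell
C_d = 5000.0                 # donor concentration (ug/mL), hypothetical
h = 0.07                     # rat skin thickness (cm), hypothetical

# Steady-state flux J_s = slope of the linear part of Q/S versus t,
# and lag time t_L = x-intercept of that line (equation (3)).
slope, intercept = np.polyfit(t[2:], Q[2:] / S, 1)
J_s = slope
t_L = -intercept / slope

P = J_s / C_d           # permeability coefficient, equation (2)
D = h ** 2 / (6 * t_L)  # diffusion coefficient, equation (4)
K_m = P * h / D         # skin/vehicle partition coefficient, equation (5)
print(f"J_s = {J_s:.1f} ug/cm^2/h, P = {P:.2e} cm/h, "
      f"D = {D:.2e} cm^2/h, K_m = {K_m:.2f}")
```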
The optimized gel formulation was considered to contain 1% Carbomer and 10% oleic acid. This gel was applied in the in vivo evaluation of the control of edema in the rat paw after Aerosil injection. Figure 3 depicts the inhibition percent of inflammation after application of free ketorolac (0.5% and 2%) in the gel base as a conventional gel and of the drug encapsulated in LNCs, compared with the vehicle and the control group of rats. As this figure shows, the conventional and LNC gels containing 0.5% ketorolac showed a significant difference (P < 0.05) from the control and vehicle groups in reducing the paw edema at all time points of the study except at 24 h. For these gels, the maximum inhibition% of edema occurred at 4 h. However, the effect of the LNC gel containing 0.5% ketorolac continued until 8 h, so that the area under the inhibition%-time curve (AUC 0-24) was significantly greater for the 0.5% optimized LNC gel than for the 0.5% conventional gel (P < 0.05) (Table 5). After 12 h, the inhibition% declined for both the 0.5% gel and the 0.5% LNC gel. The anti-inflammatory effect of both the 2% LNC and the conventional 2% ketorolac gels lasted for more than 12 h after drug administration. The difference in AUC 0-24 between these gels and the other groups was significant (P < 0.05), and the difference between the 2% LNC gel and the 2% gel was also significant (P < 0.05) (Table 5). This means that their effect lasts longer than the other treatments. The high activity of the 2% gel could be attributed to the presence of a high amount of oleic acid, a known skin-penetration enhancer, in the vehicle of the LNCs. However, the sustained activity of the LNC-based gel even at the end of 12 h can be explained by the drug encapsulated within the LNCs, while the fast onset is explained by the free drug in the outer phase of the dispersion.

Figure 3: Anti-inflammatory activity (inhibition %) of ketorolac on paw edema induced with Aerosil injection (0.1 mL of 2.5% w/w) in rats (control), after administration of transdermal gels of the optimized ketorolac LNCs (0.5% and 2%), the vehicle, and the gel containing 0.5% or 2% free ketorolac. The ketorolac LNC gels consisted of a 1% Carbomer gel base containing 10% oleic acid and 2% ketorolac loaded in the optimized LNCs (S20O25W3.5L3.25), prepared with 20% polyethylene glycol hydroxyl stearate, 3.25% lecithin, 25% Labrafac as the oily phase, and cold water at 3.5-fold the total volume of the primary emulsion.

Table 5: Area under the inhibition-time curve after administration of ketorolac tromethamine loaded in optimized lipid nanocapsules from different gel vehicles after induction of inflammation in rat paws by Aerosil injection (results are mean ± SD).

The studies of Abdel-Mottaleb et al. [31] showed that LNCs caused a higher permeation-enhancing effect than polymeric nanoparticles, while showing permeation similar to that of solid lipid nanoparticles (SLNs). On the other hand, LNCs had the advantages of lower intradermal drug accumulation and higher loading efficiency, combined with fewer stability problems, compared with SLNs.

Conclusion

Unlike the NLCs of ketorolac reported before, which could not enhance the transdermal anti-inflammatory effects of this drug, the LNCs reported in this paper showed a 13-fold increase in the permeability of ketorolac compared with conventional gels. The partition coefficient of the drug between the stratum corneum and the vehicle was significantly higher than that reported for the NLCs of this drug.
The results of the inhibition percent of the inflammation induced by Aerosil in the paws of the rats also allow us to conclude that encapsulation of ketorolac in an LNC-based gel can enhance and prolong the anti-inflammatory effects of this drug for more than 12 h.
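The inhibition% and AUC 0-24 comparisons above reduce to equation (6) plus a trapezoidal integration of the inhibition-time curve. A minimal Python sketch with hypothetical paw volumes (not the reported measurements):

```python
import numpy as np

times = np.array([2, 4, 8, 12, 24])   # h after Aerosil injection
V_o = 1.00                            # mean paw volume before injection (mL)
V_t_control = np.array([1.62, 1.70, 1.65, 1.55, 1.35])
V_t_treated = np.array([1.30, 1.28, 1.33, 1.36, 1.32])

# Edema rate E = (V_t - V_o)/V_o for each group, then equation (6)
E_c = (V_t_control - V_o) / V_o
E_t = (V_t_treated - V_o) / V_o
I = (E_c - E_t) / E_c * 100           # inhibition % at each time point

# Area under the inhibition%-time curve over the sampled interval
auc = np.trapz(I, times)
print(np.round(I, 1), f"AUC = {auc:.0f} %*h")
```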
Design, Synthesis, and Antimicrobial Evaluation of New Annelated Pyrimido[2,1-c][1,2,4]triazolo[3,4-f][1,2,4]triazines A series of 34 new pyrimido[2,1-c][1,2,4]triazine-3,4-diones were synthesized and fully characterized using IR, NMR, MS, and microanalytical analysis. In vitro investigation of 12 compounds of this series revealed promising antimicrobial activity of the conjugates 15a and 15f-j, which were tagged with electron-withdrawing groups, with sensitivities ranging from 77% to as high as 100% of the positive control. The investigation of antimicrobial activity included Bacillus subtilis ATCC 6633, Staphylococcus aureus ATCC 6535, Pseudomonas aeruginosa ATCC 27853, and Escherichia coli ATCC 8739 (EC), and the fungal strains Candida albicans ATCC 10231 and Aspergillus brasiliensis ATCC 16404.

Introduction

Genetic mutations are major contributors to the emergence of drug-resistant microbial strains [1]. These strains are, in most cases, able to detoxify drugs using mutant digestive enzymes like β-lactamases [2]. In addition, they are able to prevent the intracellular build-up of drugs to microbially toxic levels using mutant drug efflux proteins [3]. Spontaneous mutation, error-prone replication bypass, errors introduced during DNA repair, and induced mutations are the four main modes of mutation encountered in nature. Induced mutations, in particular, emerge after a gene has come into contact with a mutagen or environmental inducer [4]. Therefore, seeking alternatives to commercially available drugs that will, sooner or later, no longer be effective remains a pharmacological challenge. The discovery of new antibiotics and innovative pharmacophore architectures for synthesis, in particular those based on computer-aided drug design (CADD) programs [5], in addition to molecular library approaches, provides opportunities to develop new drug candidates [6].

Chemistry

As a part of our ongoing research toward the synthesis of a variety of nitrogen-bridgehead heterocycles, we report the utility of the hydrazine derivative 1 [25,26] for constructing the fused pyrimidotriazine 2 (Scheme 1). Treatment of the cyclic 1,2-bioxygen analogue 2 with thiosemicarbazide produced the thioureido analogue 3, as indicated mainly by mass and/or NMR analyses. The presence of the thiocarbonyl group was deduced from the 13 C-NMR singlet at 181.2 ppm and its IR absorption band at 1290 cm −1 , whereas the presence of the amino group was confirmed by a 1 H-NMR singlet at 8.56 ppm. The recorded mass at m/z 397.08 corresponds to the formula C17H15N7OS2. All these data support the formation of the thiosemicarbazone derivative 3 via simple condensation of one amino group to afford the thioureido analogue 3 without further cyclocondensation.

Scheme 1. Reagents and conditions for the synthesis of compounds 2 and 3.

The thioureido analogue 3 was subjected to a sequence of treatments to investigate the reactivity of its thioureido moiety in an attempt to attain the target thiazole and/or thiazine architectures. Thus, treatment of compound 3 with benzylidenemalononitrile in boiling dioxane led to the 1,3-thiazine-5-carbonitrile derivative 5 via the carbonitrile intermediate 4. This intermediate undergoes intramolecular cyclization via nucleophilic addition of NH2 to the nitrile group, affording the 1,3-thiazine analogue 5 in 75% yield (Scheme 2). Strong absorption bands at 3325 and 2219 cm −1 were observed for the NH2 and C≡N groups, respectively.
Upon treatment of the thioureido 3 with 3-chloropentane-2,4-dione in refluxing EtOH, the 2-substituted 4-methyl-5-acetylthiazole derivative 6 was obtained. The most characteristic 1 H-NMR signal of compound 6, due to the exchangeable thiazole (N-H) proton, appeared at 10.83 ppm; in addition, two new singlets were observed at 2.51 and 2.72 ppm, attributed to the methyl and acetyl protons, respectively. Taken together, these data confirmed the structure of compound 6. Similarly, compound 3 was cyclized with dimethyl but-2-ynedioate in refluxing dioxane to annulate the thiazole analogue 8 in 80% yield (Scheme 2). Formation of compound 8 can be explained on the basis of an initial Michael-type addition of the thiol function in the thioureido moiety to the activated triple bond of dimethyl but-2-ynedioate to afford the non-isolable intermediate 7, which undergoes intramolecular cyclization with loss of another MeOH molecule (route a) to yield the thiazole derivative 8. The carbothioamide absorption bands originally observed for 3 at 1290 and 3230 cm −1 disappeared after this reaction.

Scheme 2. Reagents and conditions for the synthesis of compounds 5, 6, and 8.

Chlorination of the 1,2-dioxo compound 2 with POCl3 afforded the dielectrophile 3-chloro-8-phenyl-6-(thiophen-2-yl)-6,7-dihydro-4H-pyrimido[2,1-c][1,2,4]triazin-4-one (9) in 75% yield (Scheme 3). The N-H stretching band and its 1 H-NMR signal for compound 2 disappeared after this step.
Hydrazinolysis of compound 9 with an excess of NH2NH2·H2O afforded the hydrazinyltriazine 10, whose reactive hydrazinyl tag can be exploited for further cyclization to develop triazolotriazine derivatives. Thus, cyclocondensation of intermediate 10 with phenacyl bromide, triethyl orthoformate, ethyl chloroformate, chloroacetyl chloride, and, finally, dimethylformamide dimethyl acetal (DMF-DMA) afforded the series of compounds 11, 13, 14, and 15 displayed in Scheme 2 under the given conditions. Further hydroxymethylation of compound 11 afforded derivative 12 in 75% yield. The structure of compound 12 was deduced from its spectral data: its mass spectrum recorded a molecular ion peak (C25H20N6O2S) at m/z 468.15, whereas the IR spectrum showed a characteristic absorption band at 3315 cm −1 due to stretching of the hydroxyl group.

Intermediate 10 was cyclocondensed with carbon disulfide to produce the Mannich base precursor 17, which, upon a classical one-pot three-component reaction, produced a set of Mannich bases (18a-j) in high yields (Scheme 4). The presence of these bases on different pharmacophores has unique potential for medical research [31]. The formation of compounds 18a-j is rationalized on the basis of the initial Mannich reaction, which proceeds in two steps: first, the reaction between HCHO and the amine leads to the formation of a non-isolable iminium ion intermediate, which loses a H2O molecule in situ; second, the thiocarbonyl compound tautomerizes to its thiol form, which attacks the iminium ion to finally yield the target β-amino-thiocarbonyl compounds (18a-j) [30]. The IR spectra of the isolated compounds (18a-j) displayed common characteristic absorption bands in the region 3165-3281 cm −1 due to the secondary amine groups.
This was further evidenced by their 1 H-NMR broad singlets at ~4.80 ppm (D2O exchangeable), whereas the methylene singlet ( 1 H-NMR) of their phenylaminomethyl moiety was observed at ~5.40 ppm. The presence of the nitro group in 18j was elucidated from the IR spectrum, which showed two characteristic absorption bands at 1390 and 1520 cm −1 due to the symmetric and asymmetric NO2 stretches, respectively. The mass spectrum of 18j displayed an ion peak at m/z 530.09 (M + , 30%) corresponding to the expected molecular formula C24H18N8O3S2.

Upon smooth cyclocondensation of compound 10 with KSCN, ethyl cyanoacetate, acetic anhydride, benzoyl chloride, and thionyl chloride, a series of pyrimido-[1,2,4]triazolo-[1,2,4]triazine derivatives (19-23) was obtained (Scheme 5). The 1 H-NMR spectra of these compounds lacked the signals corresponding to the hydrazinyl protons originally observed for 10 at 4.82 and 8.32 ppm. The formation of compound 20 was confirmed through its mass spectrum, which showed an m/z value of 387.08 corresponding to the expected molecular formula C19H13N7OS, whereas its IR spectrum showed strong absorption bands at 1686 and 2218 cm −1 attributed to the C=O and C≡N groups, respectively.
Scheme 7. Reagents and conditions for the synthesis of compounds 28 and 31.

Pharmacological Evaluation

Antimicrobial Impact. According to the disc diffusion method [32], compounds 18a-j, 10, and 17 were screened for their in vitro antimicrobial activity. This series was proposed for antimicrobial screening as it represents the largest homologous series suitable for structure-activity relationship (SAR) considerations. The investigations included two Gram-positive strains, Bacillus subtilis ATCC 6633 (BS) and Staphylococcus aureus ATCC 6535 (SA); two Gram-negative strains, Pseudomonas aeruginosa ATCC 27853 (PA) and Escherichia coli ATCC 8739 (EC); and two fungal strains, Candida albicans ATCC 10231 (CA) and Aspergillus brasiliensis ATCC 16404 (AB). Positive controls included ampicillin and gentamicin for Gram-positive and Gram-negative bacteria, respectively, and amphotericin B for fungi, while DMSO was used as the negative control. The minimum inhibitory concentration (MIC) was determined according to the reported method [32]. The inhibitory effects of the synthesized compounds against these organisms are presented in Table 1. The parent precursors 10 and 17 were less active than the triazole N1-substituted series 18a-j. This finding agrees with the activity of most azole-based antifungal drugs, for instance, fluconazole, ravuconazole, and rufinamide. Analyses of the MIC values and the inhibition zone diameters, as given in Table 1, show that the test organisms were generally sensitive to compounds 18a-j. The sensitivity ranged from 77% to as high as 100% of the positive controls. In the case of bacteria, congeners tagged with electron-withdrawing groups (18f-j) showed better activity than those modified with electron-donating groups. Electron-withdrawing substituents at the para position, as in compounds 18j and 18f, conferred higher activity than those at the ortho or meta positions, as in compounds 18i, 18h, and 18g. This trend was reversed for fungi, where derivatives bearing electron-donating groups (18a-e) displayed higher activity than compounds 18f-j. Compound 18d, with a p-methoxy tag, was the most potent of the tested series. The prominent antifungal profile of compounds 18a-j supports our hypothesis that new N1-substituted triazole architectures show potential as new antifungal agents. The antifungal activity of azoles is attributed to their ability to interfere with and disrupt fungal lanosterol biosynthesis [33], which is required for membrane permeability. In conclusion, a series of new aza-heterocycles was prepared by classical chemical methods. They are tripod and tetrapod pharmacophoric architectures that can enhance antimicrobial potency. The derivatives 18a-g displayed promising antimicrobial activities. Derivatives bearing electron-withdrawing groups (EWGs) displayed excellent antibacterial activities, whereas those tagged with electron-donating groups (EDGs) were better antifungals. These results support a second phase of biochemical research to elucidate their possible modes of action and determine whether or not these are in line with classical mechanisms.

General Information. Reagents were purchased from Sigma Aldrich (Bayouni Trading Co. Ltd., Al-Khobar, Saudi Arabia) and used without further purification.
The reaction progress was monitored by TLC on silica gel pre-coated F254 Merck plates (Merck, Darmstadt, Germany). Spots were visualized by ultraviolet irradiation. All melting points were determined on a digital Gallen-Kamp MFB-595 instrument (Gallenkamp, London, UK) using open capillary tubes and were uncorrected. IR spectra were recorded as potassium bromide discs on a Bruker Vector 22 FTIR spectrophotometer (Bruker, Manasquan, NJ, USA). The NMR spectra were recorded with a Varian Mercury VXR-300 NMR spectrometer (Bruker, Marietta, GA, USA) at 300 and 75 MHz for 1H and 13C NMR spectra, respectively, using DMSO-d6 as the solvent. Mass spectra were recorded on a Hewlett Packard MS-5988 spectrometer (Hewlett Packard, Palo Alto, CA, USA) at 70 eV. Elemental analyses were conducted at the Micro-Analytical Center of Taif University, Taif, KSA.

Methodology

The antimicrobial activity of the newly synthesized compounds was evaluated using the disc diffusion method [32]. Plates 90 mm in diameter containing either Müller-Hinton agar for the growth of bacteria or Sabouraud dextrose agar for the growth of fungi were prepared, and each plate was separately inoculated with different cultures of the test bacteria and fungi by aseptically swabbing the entire surface of the agar with cotton wool. A 6-mm-diameter filter paper disc was saturated with 200 µg/mL of the test compound in DMSO. The discs were air-dried and placed aseptically at the center of the plates. The plates were left in a refrigerator for 1 h before incubation to allow the extract to diffuse into the agar. Ampicillin and gentamicin were used as bacterial standards and amphotericin B as the fungal reference to evaluate the efficacy of the tested compounds, with DMSO used as a negative control. After incubation of the plates at a suitable temperature (37 °C for bacteria and 25 °C for fungi), the results were recorded for each tested compound as the average diameter (mm) of the inhibition zone (IZ) of bacterial or fungal growth around the discs. The minimum inhibitory concentration (MIC) was determined for compounds that exhibited significant growth inhibition zones of more than 15 mm, using the two-fold serial dilution method [34]. The MIC (µM) and IZ values are listed in Table 1.
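As an aside on the readout just described, here is a minimal sketch of how a two-fold serial dilution MIC is tabulated. The concentrations and growth pattern are hypothetical, and the helper names are ours, not from refs. [32,34]:

```python
# Minimal sketch of reading out a two-fold serial dilution MIC assay.
# Concentrations and well readings are hypothetical examples.

def twofold_dilutions(start_ug_ml: float, n_wells: int) -> list[float]:
    """Concentration series for a two-fold serial dilution."""
    return [start_ug_ml / 2**i for i in range(n_wells)]

def mic(concentrations: list[float], growth_observed: list[bool]):
    """MIC = lowest concentration showing no visible growth.

    Assumes wells are ordered from highest to lowest concentration and
    that growth is monotone (once growth appears, all lower wells grow).
    """
    no_growth = [c for c, g in zip(concentrations, growth_observed) if not g]
    return min(no_growth) if no_growth else None

if __name__ == "__main__":
    conc = twofold_dilutions(200.0, 8)   # 200, 100, 50, ... ug/mL
    growth = [False, False, False, True, True, True, True, True]
    print("dilution series:", conc)
    print("MIC =", mic(conc, growth), "ug/mL")
```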
CODIAGONALIZATION OF MATRICES AND EXISTENCE OF MULTIPLE HOMOCLINIC SOLUTIONS∗ The purpose of this paper is twofold. First, we use Lagrange's method and the generalized eigenvalue problem to study systems of two quadratic equations. We find exact conditions under which the system can be codiagonalized and can have up to 4 solutions. Second, we use this result to study homoclinic bifurcations for a periodically perturbed system. The homoclinic bifurcation is determined by 3 bifurcation equations. To the lowest order, they are 3 quadratic equations, which can be simplified by the codiagonalization of quadratic forms. We find that up to 4 transverse homoclinic orbits can be created near the degenerate homoclinic orbit. When µ ≠ 0, (1.4) may have bifurcations near γ. The case d = 1 has been extensively studied. In this case the breaking of the homoclinic orbit γ is restored by choosing the parameter τ, as in [5]. Hale [6] proposed to study the degenerate cases where d ≥ 2. The case d = 2 has been considered in [14]. The purpose of the present work is to treat the case d = 3. Using the method of Lyapunov-Schmidt reduction, we derive a system of bifurcation functions H_j, 1 ≤ j ≤ 3, the zeros of which correspond to the persistence of homoclinic solutions for (1.4). The last equation H_3 = 0 can be dealt with by selecting the parameter τ as usual, while H_j = 0, j = 1, 2, can be reduced to a system of quadratic equations. By Lagrange's method and codiagonalization of quadratic forms, we show that the quadratic system can have up to 4 solutions. Finally, if the solutions to the quadratic system are nondegenerate, then the bifurcation functions have nondegenerate zeros and the perturbed system has transverse homoclinic orbits. Codiagonalization of matrices has been used by Jibin Li and Lin [12] to study systems of coupled KdV equations. It may also be useful when studying 2×2 systems of hyperbolic conservation laws with quadratic nonlinearities [19,20], based on a personal conversation with Shearer. In [14], a method based on circular and hyperbolic rotations was used to codiagonalize two quadratic forms. The new method in this paper is easier to use if one wants to find conditions for the existence of 4 solutions to quadratic systems. Given a symmetric real matrix B ∈ R^{2×2}, there exists an invertible matrix M such that M^T B M is diagonal, so that the quadratic form becomes diagonal in the new variables, where (x_1, x_2)^T = M(y_1, y_2)^T. The symmetric transformation described above is also called the congruence diagonalization. It should not be confused with the similarity transformation of B, which is defined by M^{-1}BM. For example, the matrix diag(λ_1, −λ_2), λ_j > 0, can be reduced to diag(1, −1) by the matrix M = diag(λ_1^{-1/2}, λ_2^{-1/2}), which is a congruence (symmetric) reduction, not a similarity reduction. In §2, we introduce the notations used in this paper. We also present the reduced bifurcation functions which, to the lowest degree, represent the breaking of the homoclinic orbits under the periodic perturbations. In §3 we derive the bifurcation equations by using the Lyapunov-Schmidt reduction. To the lowest degree, they reduce to three quadratic equations. In §4, we introduce Lagrange's method and generalized eigenvalue problems to study solutions of two quadratic forms. The cases when one equation is elliptic are considered in §4.1. The other cases, when one equation is hyperbolic and none is elliptic, are considered in §4.2. In §4.3, we present the method of codiagonalization of two quadratic equations based on the cases studied in §4.1 and §4.2. In §5, we derive the reduced bifurcation function F(τ).
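As a concrete illustration of this construction, the following NumPy sketch codiagonalizes two example quadratic forms through the generalized eigenvalue problem B1 P = λ B2 P. The matrices are hypothetical, and the check relies on B2 being definite, so that the generalized eigenvectors are automatically B2-orthogonal:

```python
# Minimal numerical sketch of congruence codiagonalization of two
# symmetric 2x2 matrices via the generalized eigenvalue problem
# B1 P = lam B2 P. Example matrices are hypothetical.
import numpy as np
from scipy.linalg import eig

B1 = np.array([[2.0, 1.0], [1.0, -1.0]])   # F1(x, y) = x^T B1 x
B2 = np.array([[3.0, 0.5], [0.5, 2.0]])    # F2(x, y) = x^T B2 x (positive definite)

lam, P = eig(B1, B2)          # generalized eigenpairs: B1 P = lam B2 P
M = P.real                    # eigenvalues/eigenvectors are real here

# Congruence with the eigenvector matrix diagonalizes both forms.
print(np.round(M.T @ B1 @ M, 10))   # diagonal
print(np.round(M.T @ B2 @ M, 10))   # diagonal
```

Since P_i^T B2 P_j = 0 for distinct generalized eigenvalues, the same congruence diagonalizes F1 as well; this is the mechanism behind the codiagonalization results of §4.3.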
We show that a simple zero of F corresponds to the existence of a homoclinic solution near γ. In §6, we present an example showing that our conditions work consistently.

Notations and preliminaries

Notations. Since y = 0 is a hyperbolic equilibrium, from [17], (1.3) has exponential dichotomies on J = R± respectively. In particular, there exist projections onto the stable and unstable subspaces, P^s + P^u = I, and constants m > 0 such that the dichotomy estimates (2.1) hold for t ≥ s (or t ≤ s) on J. For the same m > 0, define the Banach space Z with the norm ‖z‖ = sup_{t∈R} |z(t)| e^{m|t|}. The linear variational system will be considered in Z. The adjoint operator for L is L*ψ := ψ̇ + (Df(γ))*ψ. From the theory of homoclinic bifurcations [17], L : Z → Z is a Fredholm operator with index 0, and the range of L is orthogonal to the null space of L*, that is, R(L) = N(L*)^⊥. We define some Melnikov-type integrals [16], for integers p, q = 1, 2 and i = 1, 2, 3, that will be used below. We look for conditions so that (1.4) can have homoclinic solutions near γ. Let β = (β_1, β_2)^T. We shall use the reduced bifurcation functions M_i : R² × R × R → R defined below in (2.5). To the lowest degree, (2.5) describes the jump discontinuity x(0−) − x(0+) along the direction of ψ_i(0); see [13]. We need to solve the following system of quadratic equations (2.6). Recall that L(u) = u̇ − Df(γ)u in the Banach space Z. As in [17], we define the subspace of Z which consists of the range of L in Z. Consider the nonhomogeneous equation ż − Df(γ)z = h, (3.3). Let Z⊥ be the subspace of Z consisting of z(t) with z(0) ⊥ γ̇(0). If h ∈ Z, then using the variation of constants there exists an operator K : Z → N(L)⊥ such that Kh is a solution of (3.3). Clearly, the general bounded solution of (3.3) is Kh plus an element of the null space of L. As in [17], one can prove that the projection P satisfies the required properties. We now apply the Lyapunov-Schmidt reduction to (3.1). Applying P and (I − P) to (3.1), we find that (3.1) is equivalent to the system (3.4)-(3.5). First, we solve (3.4) for z ∈ Z⊥. Then the bifurcation equations are obtained by substituting the solution z into (3.5). Through direct calculations, we can prove a lemma on the properties of H. The quadratic functions M_i : R² × R × R → R³ given by (2.5) represent the lowest-order terms of H_i(β, τ, µ). We are led to solving the system of quadratic equations (2.6).

Codiagonalization and solutions of two quadratic equations

We say that the quadratic equation F(x, y) = h, h ≠ 0, is of elliptic (or hyperbolic, or line) type if the graph of the equation is an ellipse (or two hyperbolas, or two lines). The graph of two symmetric parallel lines is a special case of two hyperbolas, where the normal direction to the two lines replaces the real axis of a hyperbola. The hyperbolic rotation is well known for its use in relativity theory [2]. We shall define various transformations that keep a quadratic form F(x, y) = ax² + 2bxy + cy² invariant. Consider the Hamiltonian system (4.1) with Hamiltonian F (i.e., ẋ = F_y, ẏ = −F_x) and its solution mapping T(t).

Definition 4.1. The solution mapping T(t) for (4.1) that maps the ray OP_1 to OP_2, where P_2 = T(t)P_1, will be called the quadratic rotation by the angle t. It will also be called the circular, elliptic or hyperbolic rotation if the graph of F(x, y) = h is a circle, ellipse or hyperbola. The angle θ is defined through the flow; if no t maps one ray to the other, then the angle between the two rays is undefined. Just as with polar coordinates, if P_0 is a point on the major axis (or semi-real, or semi-imaginary axis), then we define the angle coordinate of P_0 to be θ(P_0) = 0.
For any other P ∈ R², we define its angle coordinate θ(P) to be the angle from the ray OP_0 to the ray OP. The matrix R(t) = [[cos t, −sin t], [sin t, cos t]] defines the circular rotation in the counter-clockwise direction, while T(t) = [[cosh t, sinh t], [sinh t, cosh t]] defines the standard hyperbolic rotation in R². However, given two rays in R², the hyperbolic angle between them can be undefined. More precisely, the two lines y = ±x divide R² into 4 sectors S_1, ..., S_4. The hyperbolic rotation simply draws a hyperbola in the sector containing the initial point: if (x_0, y_0)^T ∈ S_1 or S_3, there exists an r_0 > 0 or r_0 < 0 such that (x_0, y_0) = r_0(cosh(t_0), sinh(t_0)); similarly, if (x_0, y_0)^T ∈ S_2 or S_4, then there exists an r_0 > 0 or r_0 < 0 such that (x_0, y_0) = r_0(sinh(t_0), cosh(t_0)), and the hyperbolic rotation draws a hyperbola in that sector. Notice that the circular and standard hyperbolic rotations satisfy the group property T(t_1)T(t_2) = T(t_1 + t_2). If T(t) is the solution mapping for (4.1), we always have F(T(t)P) = F(P). (1) If the vector field (4.1) corresponding to F(x, y) satisfies ẋ = 0 on the x-axis, or ẏ = 0 on the y-axis, then the matrix B is diagonal. We now study the system of two quadratic equations (4.2). (H5): Assume that the two quadratic forms F_1(x, y), F_2(x, y) are linearly independent, i.e., the two matrices B_1, B_2 are linearly independent. Consider the conditional maximum/minimum problems of F_1 subject to F_2(x, y) = h_2. We look for critical points of the Lagrangian; to find the critical points P_j = (x_j, y_j)^T, j = 1, 2, of the Lagrangian, we solve the generalized eigenvalue/eigenvector problem B_1 P = λ B_2 P, (4.5).

Solutions of (4.2) if one equation is elliptic

In this subsection we assume that F_2(x, y) = h_2 is of elliptic type; hence b_2² − a_2 c_2 < 0. By changing ψ_i to −ψ_i, we can change B^(i) to −B^(i). Hence for elliptic-type quadratic forms we assume a_2 > 0, c_2 > 0 and h_2 > 0. (i) (EE) type: Assume that F_1 reaches the minimum r_1 at P_1 and the maximum r_2 at P_2. System (4.2) has 4 solutions if r_1 < h_1 < r_2. (iii) (LE) type: In this case, the graph of F_1(x, y) = h_1 consists of two parallel lines symmetric about the origin. The eigenvalues are λ_1 = 0, with eigenvector P_1 on which F_1 = 0, and λ_2 ≠ 0, with eigenvector P_2 that solves the conditional minimum problem with F_1 = r_1 < 0, or the maximum problem with F_1 = r_2 > 0. System (4.2) has 4 solutions if r_1 < h_1 < 0 or 0 < h_1 < r_2.

Solutions of (4.2) if both equations are hyperbolic

For a given h_2 ≠ 0, the hyperbola defined by F_2(x, y) = h_2 does not encircle the origin as the ellipse in §4.1 does. Observe that for the (HH)-type systems, the equilibrium (0, 0) of (4.1) is hyperbolic and there exist stable and unstable eigenspaces for the equilibrium (0, 0). Before giving a counterexample, we introduce the following definition. Let L^(i)_j, i = 1, 2, be the stable and unstable eigenspaces of the equilibrium for (4.1), where (a, b, c) = (a_j, b_j, c_j). They are called the asymptotes for F_j(x, y) = h_j. The asymptotes L^(i)_j, i = 1, 2, divide R² into four sectors. We say (x, y) is in the positive (or negative) sector if F_j(x, y) > 0 (or F_j(x, y) < 0). Example 4.3 (A Counterexample). Assume that the asymptotes of the two hyperbolas are alternating. Following the curve F_2(x, y) = h_2, the values of F_1 are bounded neither below nor above. Therefore, a conditional max/min problem as in §4.1 is not well posed. It is easy to see that in such a case, (4.2) has exactly 2 solutions and the two quadratic forms cannot be codiagonalized. Although the general max/min problem is not well posed, for each of the cases listed below it is not hard to find a well-posed conditional max/min problem. Consider 4 sub-cases, as depicted in the four figures (parts of the graphs that can be obtained by symmetry are omitted for simplicity).
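At a critical point of F_1 constrained to {F_2 = h_2}, the Lagrange condition B_1 P = λ B_2 P gives F_1(P) = λ h_2, so the conditional extrema follow directly from the generalized eigenvalues. A minimal sketch of the resulting (EE)-type solution count (example matrices hypothetical):

```python
# Sketch: counting solutions of F1 = h1, F2 = h2 in the (EE) case,
# where F2 is elliptic. At a critical point B1 P = lam B2 P, the
# conditional extrema of F1 on {F2 = h2} are r_i = lam_i * h2.
import numpy as np
from scipy.linalg import eigvals

B1 = np.array([[1.0, 0.5], [0.5, -2.0]])  # F1, indefinite
B2 = np.array([[2.0, 0.0], [0.0, 1.0]])   # F2, elliptic (positive definite)
h1, h2 = 0.5, 1.0

lam = np.sort(eigvals(B1, B2).real)
r1, r2 = lam * h2                          # min and max of F1 on F2 = h2
if r1 < h1 < r2:
    print(f"4 solutions expected (r1={r1:.3f} < h1={h1} < r2={r2:.3f})")
else:
    print("fewer than 4 solutions")
```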
(HH i) The two sectors of F_1 > 0 are inside the sectors of F_2 > 0. (HH ii) The two sectors of F_1 > 0 are inside the sectors of F_2 < 0. (HH iii) The two sectors of F_1 < 0 are inside the sectors of F_2 > 0. (HH iv) The two sectors of F_1 < 0 are inside the sectors of F_2 < 0. Theorem 4.2. For cases (HH i) and (HH ii), and h_2 > 0 or < 0, consider the conditional maximum problem (4.6). Then for (4.6), there exists r_3 = max F_1. System (4.2) has 4 solutions if h_1 < r_3. For cases (HH iii) and (HH iv), and h_2 > 0 or < 0, consider the conditional minimum problem (4.7). Then for (4.7), there exists r_4 = min F_1. System (4.2) has 4 solutions if r_4 < h_1. Finally, after rescaling the generalized eigenvectors (P_1, P_2), we can assume that P_2 solves the max/min problems (4.6) or (4.7), and P_1 solves the complementary max/min problem, (4.6*) for cases (HH i) and (HH ii), or (4.7*) for cases (HH iii) and (HH iv), defined analogously with h_2 replaced by −h_2. Proof. Following the curve F_2(x, y) = h_2, or −h_2, the range of F_1(x, y) can be bounded above and unbounded below, or bounded below and unbounded above. Therefore, either a conditional max problem or a conditional min problem is well posed, but not both. The (LH) case can be treated just like the (HH) case. Consider 4 sub-cases: (LH i) F_1 ≤ 0 and the line F_1 = 0 is inside the sectors of F_2 > 0. (LH ii) F_1 ≤ 0 and the line F_1 = 0 is inside the sectors of F_2 < 0. (LH iii) F_1 ≥ 0 and the line F_1 = 0 is inside the sectors of F_2 > 0. (LH iv) F_1 ≥ 0 and the line F_1 = 0 is inside the sectors of F_2 < 0. Theorem 4.3. For cases (i) and (ii), consider the conditional maximum problem (4.8). Then for (4.8), there exists r_5 = max F_1. System (4.2) has 4 solutions if h_1 < r_5. For cases (iii) and (iv), consider the conditional minimum problem (4.9). Then for (4.9), there exists r_6 = min F_1. System (4.2) has 4 solutions if r_6 < h_1. Finally, after rescaling the generalized eigenvectors (P_1, P_2), we can assume that P_2 solves the max/min problems (4.8) or (4.9), and P_1 solves the complementary max/min problem, (4.8*) for cases (LH i) and (LH ii), or (4.9*) for cases (LH iii) and (LH iv), defined analogously. For the (LL) case, if the two families of lines are not parallel, there are 4 solutions. To simplify the paper, we shall not discuss the (LL) case in the sequel.

Codiagonalization of two quadratic equations

In this subsection, we consider the codiagonalization of two quadratic equations, but not the coexistence of real-valued solutions. The method is based on the generalized eigenvalue/eigenvector problems. For the cases listed in §4.1 and §4.2, we have the following results:

Theorem 4.4. If one equation of the quadratic system is elliptic, then the two quadratic forms can always be codiagonalized by real-valued matrices. If both equations are hyperbolic, then in all the cases (HH i)-(HH iv) the two quadratic forms can be codiagonalized by real-valued matrices. If F_1(x, y) is of line type and F_2(x, y) is hyperbolic, then in all the cases (LH i)-(LH iv) the two quadratic forms can be codiagonalized by real-valued matrices.

Proof. Let (P_1, P_2) be the generalized eigenvectors corresponding to the generalized eigenvalue problem (4.5). After rescaling, assume that P_2 solves the max/min problem. In all three cases, there exists an angle θ_0 such that T_2(−θ_0)P_2 coincides with the major axis or the minor axis of the graph of F_2(x, y) = h_2. Based on the results from the previous subsections, each generalized eigenvalue problem has two linearly independent eigenvectors. Thus, the eigenvalues are distinct.
This implies that P_1 and P_2 are conjugate with respect to both quadratic forms. Therefore, in all the cases listed in Theorems 4.1, 4.2 and 4.3, the image T_2(−θ_0)P_1 coincides with the minor axis or the major axis of F_2 = h_2. Assume that under the rotation T_2(θ_0), the quadratic form F_1(x, y) = h_1 becomes F_3(x, y) = h_1 while F_2(x, y) = h_2 is unchanged. Now apply a circular rotation R(−θ_0) to both F_3(x, y) = h_1 and F_2(x, y) = h_2 so that the major axis of F_2(x, y) = h_2 is mapped to the x-axis. The matrices that represent the two quadratic forms are then diagonal: clearly F_2(x, y) = h_2 has been diagonalized, and from Lemma 4.1, F_1(x, y) = h_1 has also been diagonalized. Proof. If not, then the solutions of the system lie on the lines spanned by OP_1 or OP_2, where the graphs are tangent to each other, contradicting the fact that the system has 4 solutions. Proof. The statement follows by direct observation, and the proof is complete. By Theorem 5.2, the bifurcation function H = (H_1, H_2, H_3) = 0 at (s(β^(j) + ω_j(s)), τ^(j) + η_j(s), s²µ). Then system (3.1) has the solution φ(β, τ, µ). Hence system (1.4) has 2 or 4 homoclinic solutions, obtained for 0 ≠ s ∈ I_j, 1 ≤ j ≤ 4 or 1 ≤ j ≤ 2. Clearly, lim_{s→0} γ_s = γ. We find that the solutions are robust with respect to small perturbations of g. This alone shows that each of the solutions obtained is a transversal homoclinic solution. The same argument was used by Mallet-Paret in [15] to show that the homoclinic orbits in some delay equations are transverse. Alternatively, it is shown in [13] that the functions H_i, 1 ≤ i ≤ 3, as in (3.10), measure the gap between the unstable manifold at t = 0− and the stable manifold at t = 0+, and the Jacobian of these functions at a nondegenerate zero is a nonsingular matrix. Therefore, the intersection of W^u(0) and W^s(0) is transverse.

An Example

Although the example given in this section is not from applications, it shows that the conditions given in this paper are consistent. Consider the system (6.1). For the unperturbed system, it is easy to check that 0 is an equilibrium and the eigenvalues of Df(0) are {−1, −1, −1, 1, 1, 1}. Hence 0 is a hyperbolic equilibrium. Let r(t) = sech(t) and γ = (0, 0, 0, 0, r, ṙ). By direct calculation, we see that γ is a homoclinic solution to the origin. Remark 6.1. The example is modified from [4]. At first look, it may seem unnatural to consider a homoclinic orbit with x_1 = x_2 = x_3 = x_4 = 0 in R⁶. However, if γ(t) is a homoclinic orbit that can be embedded in a smooth 2D submanifold, then by a change of variables we can assume that γ(t), −∞ < t < ∞, lies in the (x_5, x_6)-plane.
Realization of pitch-rotational torque wrench in two-beam optical tweezers

Pitch (out-of-plane) rotational motion has been generated in spherical particles by maneuvering the laser spots of holographic optical tweezers. However, since the spherical particles, which are required to minimise drag, are perfectly isotropic, a controllable torque cannot be applied with them: the particle remains free to spin about any axis even after the tweezers beams have been moved. Here we trap birefringent particles of about 3 μm diameter in two tweezers beams and then change the depth of one of the beam foci controllably to generate a pitch-rotational torque wrench and avoid free spinning of the particle. We also detect the rotation with a newly developed pitch-motion detection technique and apply controlled torques on the particle.

Introduction

Optical tweezers provide a simple, robust and highly efficient platform to manipulate microscopic particles [1-3] in both translational and rotational degrees of freedom. Translational manipulation of micron-sized particles and the application of piconewton forces are well-desired features in a wide variety of fields, from biology [4-6] to physics [7-10] and even chemistry [11-13]. In addition, trapped birefringent particles can be rotated by utilizing the polarization [14-16] of the trapping laser. A single biological specimen like a protein or a DNA molecule [17] can be attached to a trapped particle to disclose its dynamics and estimate torques [18] or forces [19]. A trapped particle has three degrees of rotational freedom, two of which correspond to out-of-plane rotations [20] (pitch, roll) and one to in-plane rotation [21] (yaw). Many techniques have been demonstrated to rotate trapped particles, using orbital-angular-momentum beams [22-24], holographic optical tweezers [25-27] and control of the ellipticity of the beams [14]. These techniques explore yaw rotational motion explicitly. On the other hand, controlled generation of pitch motion has also been realised in previous work [28,29], where video microscopy was employed to confirm the generated motion of asymmetric, non-birefringent particles [30,31]. Surfaces have been moved to turn particles optically trapped but adherently residing on them [32]. Entire cells have been rotated in the pitch sense, which is quite useful for tomography [27]. Moreover, pitch motion shows different dynamics in proximity to surfaces than yaw motion. When rotated in the pitch sense, non-birefringent particles can spin uncontrollably due to the lack of orientational confinement. Orientational confinement is very important in probing problems like quantum sensing using nitrogen-vacancy (NV) centers in diamonds. Further, if one has to apply a shear force on a membrane using a particle, this pitch mode is an alternative to translating the particle along the surface and can encounter less Stokes drag: the rotational case merely amounts to a fraction of one complete rotation of the particle, while the translational case amounts to the entire center of mass moving by a larger extent. This assumes significance when faster dynamics is being studied. Here, we apply controlled pitch torques and also employ a high-resolution pitch-detection system [33] to calibrate them in spherical birefringent particles. This scheme of two-beam confinement applies constant pitch torques of about 500 pN·nm on a trapped birefringent particle.
The detection system measures the asymmetry in the scattered pattern under cross-polarizers. This technique is notable as it provides control over an additional rotational degree of freedom and allows us to explore complex dynamics.

Theory

A non-birefringent spherical particle trapped with two traps, as depicted in figure 1(a), is not rotationally confined. Birefringent particles tend to orient to the direction of polarization of the trapping laser, breaking the rotational symmetry. The electric field E incident on the birefringent particle experiences different refractive indices n_o and n_e along the ordinary and extraordinary axes. The electric field E of the trapping laser induces a polarization P in the particle and interacts with the induced polarization to produce a torque τ [34]. A restoring torque per unit area τ is produced due to the change in angular momentum of the elliptically polarized light passing through the birefringent particle of diameter d_1, given by equation (2) [14], where f is the degree of ellipticity of the light, k is the free-space wavenumber, ε is the permittivity, θ is the angle between the birefringent axis and the trap polarization, and ω_1 is the angular frequency of the electric field. The expression is derived in detail in reference [14]. As f → 0, the torque for linearly polarized light reduces to equation (3), where τ_0 is the maximum torque on the particle and n̂ is the direction of the torque, perpendicular to both E and P. The particle experiences torques τ_1 and τ_2, given by equations (4) and (5), due to the interaction of the induced dipoles P_1 and P_2 with the corresponding electric fields E_1 and E_2 at each trap, respectively, where τ_01 and τ_02 are the maximum torques that can be applied and q̂ is the unit vector perpendicular to P_1,2 and E_1,2. The sin 2θ dependence of the torque indicates that the particle has a two-fold symmetry. In addition, the particle also experiences a torque τ_F, given by equation (6), due to the optical force F exerted by one trap arising from the intensity gradient, while the other trap acts as a pivot, as depicted in figure 2(b). Here r is the vector joining the two focal spots and q̂ is the unit vector perpendicular to r and to the gradient force F given by equation (7).

Figure 1. Schematics of two tightly focused laser beams with the same polarization (blue and black arrows) simultaneously trapping (a) a spherical non-birefringent particle, showing translational confinement but freedom to rotate (yellow), and (b) a spherical birefringent particle, confined in both translational and rotational degrees of freedom.

Note that the directions of τ_1 and τ_2 are opposite to that of τ_F, and the total torque τ_T is equal to the sum of all torques acting on the particle. In the limit that (θ − β) is small, the total torque becomes linear in (θ − β); thus there is a restoring torque trying to align the particle towards the angle β. This also implies that the magnitude of the torque is maximum when β = 0°, while it is negligible when β = 90°.

Experimental procedure

We show the experimental setup used to generate and detect pitch motion in figure 3. We use the OTKB/M optical tweezers kit from Thorlabs, USA. Birefringent particles suspended in 20 μl of distilled water were placed between a glass slide (Blue Star, 75 mm × 25 mm × 1.1 mm) and a coverslip (Blue Star, number 1 size, English glass) to form the sample chamber.
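A minimal numerical sketch of the restoring-torque picture from the Theory section above: we assume a net torque of the form τ_T = −τ_0 sin 2(θ − β), consistent with the sin 2θ dependence discussed there, with τ_0 echoing the ~500 pN·nm magnitude quoted in the introduction. The functional form and numbers are illustrative, not the paper's equations.

```python
import numpy as np

def total_torque(theta, beta, tau0=500.0):
    """Assumed net pitch torque (pN*nm): -tau0*sin(2*(theta - beta)).

    A sketch consistent with the sin(2*theta) dependence in the Theory
    section; not the paper's exact expression."""
    return -tau0 * np.sin(2.0 * (theta - beta))

beta = 0.0                                # equilibrium orientation set by the traps
theta = np.linspace(-0.2, 0.2, 5)         # rad, near equilibrium
linearized = -2.0 * 500.0 * (theta - beta)  # small-angle restoring torque
print(np.c_[theta, total_torque(theta, beta), linearized])
```

Near θ = β the exact and linearized torques agree, which is the restoring behavior that confines the particle orientation.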
An oil-immersion 100×, 1.3 NA objective from Olympus in an inverted microscope configuration is used to trap the birefringent particles, as shown in figure 3. The scattered beam is then collected using a condenser objective (E Plan 10×, 0.25 NA, air immersion; Nikon). A dichroic mirror and a polarizing beam splitter (PBS) direct most of the scattered light into a quadrant photodiode (QPD) to detect the translational motion of the trapped particle. Also, a white light source illuminates the sample chamber from the top, coupled with dichroic mirrors, and the light is directed to a CMOS camera as shown in figure 3.

Figure 2. (a) Schematics of a spherical birefringent particle in the stable configuration, depicting the torques (τ_1, τ_2 and τ_F) and force F when trapped by two tightly focused laser beams with the same polarization (blue and black arrows). The torques τ_1 and τ_2 arise due to the misalignment of the polarization (P) with the electric field (E), whereas the torque τ_F is generated by the optical force (F) applied by one trap while the other trap acts as a pivot. (b) Vector representation of the torques and force acting on the birefringent particle.

Two diode lasers of wavelengths 1064 nm and 980 nm were used in the experiment and are focused onto the sample plane via two different paths combined using a beam splitter. A set of lenses is mounted on a movable stage to change the position of the 1064 nm trap. A polarizing beam splitter and a half-wave plate were used in the beam paths of the 980 nm and 1064 nm lasers, respectively, to match the polarizations at the sample plane. A 980 nm filter was used after dichroic mirror 2 to isolate the 980 nm light from the 1064 nm light. The 980 nm beam is held fixed and acts as the reference for turning the particle. The 1064 nm beam focus is moved, which in turn moves the particle. The detection of the angle requires the scatter pattern under crossed polarizers at only one wavelength. Using both 980 nm and 1064 nm for detection can cause complications in the detection, which may give imperfect results. It is not known what the exact effects would be if both were used simultaneously, but they are expected to cause complications. Hence the 980 nm light is isolated by the filter and used. The pitch angle determination unit works by finding the asymmetry in the scatter pattern of a spherical birefringent particle under crossed polarizers. Finding the asymmetry requires the use of a quadrant photodiode to measure the difference in the intensities between the left half of the beam and the right half of the beam; thus, we use the edge mirror and the two photodiodes. The images in figures 3(b) and (c) make the particle look different from a sphere because the imaging has been performed under crossed polarizers, so the scatter pattern hardly shows the exact outline of the particle. The background also does not look perfectly dark, because we have used sheet polarizers, which have imperfections. Moreover, the differences in the scatter pattern are small, and hence the scattered tweezers light itself is used to ascertain the asymmetry. The typical scatter patterns with the birefringence axis symmetrically oriented and turned by 30 degrees are shown in figures 3(d) and (e). These patterns look different from the images because imaging in visible light requires polarizers for those wavelengths, which here are the inefficient sheet polarizers. Birefringent particles used in the experiment were synthesized using RM257 (Merck) nematic liquid crystal precursor powder.
The preparation protocol requires 99% pure ethanol (50 ml) and de-ionised water (150 ml) in a 1:3 ratio, which were heated to 55 °C and 75 °C, respectively, in separate beakers. The temperatures of the ethanol and water were monitored, and when the ethanol reached 55 °C, about 80 mg of RM257 powder was added. The solution was stirred with a small magnetic stirrer in the beaker, allowing the powder to dissolve uniformly. Subsequently, this RM257-ethanol solution was added to the de-ionised water in a dropwise fashion. A milky white solution formed in the beaker, which was then closed using a perforated aluminium foil. Ethanol evaporated through these perforations, leaving the overall solution at 150 ml. Once the solution had cooled to room temperature, it could be stored and used for experiments. The particle sizes synthesized using this method vary from 2 μm to 3 μm. The 980 nm laser has a polarizing beam splitter (PBS1) in front of the laser itself. The other polarizer (PBS2) is placed after the 980 nm filter. These two ensure that when the scattered light passes through the output polarizer, it provides angular information. The simulated scatter patterns of a birefringent particle rotated in the pitch sense are shown in figures 3(d) and (e). The methodology of the simulation is described in [35].

Results and discussions

We trap a birefringent particle simultaneously with two optical traps and move one of the traps, which makes the particle move in the pitch sense, as depicted in figure 2. The out-of-plane movement of a spherical particle is hard to observe with video microscopy. However, the pitch motion of a birefringent spherical particle can be detected as the amount of asymmetry in the scattered light collected with cross-polarizers [35]. The setup can thus be utilized to apply controlled torques on spherical particles. When a birefringent particle is trapped between crossed polarizers, the scattered light forms a symmetric four-lobe pattern. When the particle turns in the pitch sense, an asymmetry appears: two of the lobes glow brighter than the other two, without affecting the total intensity. The amount of asymmetry is measured by taking the difference of the intensities between the two halves, which is correlated with the pitch angle. Hence the pitch angle can be calculated from the intensity difference of the two halves of the scattered light [33]. The exact value of the rotation angle is found by multiplying the rotation signal obtained from the photodiode (in volts) by a calibration factor β derived from the power spectral density [36] of the particle's Brownian motion. The pitch power spectral density (PSD) of a trapped particle follows a Lorentzian and is fit to equation (10), with the calibration factor β given by equation (11) [36], where A and B are fitting parameters, f is the frequency and γ is the drag coefficient. The calibration factor β multiplied by the signal acquired from the photodiodes gives us the pitch angle θ as a function of time, as shown in figure 4(a). This controlled pitch motion is generated by changing the depth of one of the two traps using a movable lens, as shown in figure 3. The focus of the second trap is moved vertically in and then out, such that the particle first turns clockwise and then counterclockwise in the pitch sense.
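A sketch of the calibration chain just described: fit a Lorentzian to the pitch PSD, convert the photodiode voltage to an angle with a calibration factor β, then differentiate and apply the drag relation τ = γω used in the next section. The Lorentzian parametrization A/(B + f²), the placeholder β value and all numbers are assumptions for illustration, not the paper's fitted equations (10)-(13).

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, A, B):
    # Assumed parametrization of equation (10): A and B are fit parameters.
    return A / (B + f**2)

rng = np.random.default_rng(0)
f = np.linspace(1.0, 2000.0, 500)                  # Hz
psd = lorentzian(f, 1e-3, 200.0**2) * rng.exponential(1.0, f.size)
(A_fit, B_fit), _ = curve_fit(lorentzian, f, psd, p0=[1e-3, 1e4])

eta, r = 1e-3, 1.5e-6                              # water viscosity (Pa*s), ~3 um particle
gamma = 8 * np.pi * eta * r**3                     # rotational Stokes drag (N*m*s)
beta_cal = 1.0                                     # rad/V; placeholder for equation (11)

t = np.linspace(0.0, 1.0, 10_000)                  # s
theta = beta_cal * (0.05 * t)                      # constant-slope pitch angle (rad)
omega = np.gradient(theta, t)                      # rad/s
print(f"mean torque = {gamma * omega.mean() * 1e21:.2f} pN*nm")
```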
The pitch angular frequency ω = dθ/dt is calculated numerically from the pitch time series. The pitch angle θ of the spherical particle changes with a constant slope on the angle-versus-time graph, which implies that a constant torque is being applied to the particle. The torque τ applied to the particle can be calculated from the pitch angular velocity ω and the viscous drag coefficient of the medium γ, as given by equation (13). Here it may be noted that this is an overdamped system, where the inertial term is about 5 orders of magnitude smaller than the damping term; the torque felt by the system is therefore computed mainly from the drag. The plot shown in figure 5(b) corresponds to constant applied torque, which can be observed in the small regions highlighted with black lines; the constancy of the torque is confirmed by zooming in, as shown in figure 6. Hence the system acts as a torque wrench, applying constant torques on the spherical birefringent particle. We also show the execution of the pitch motion using two beams in the supplementary video S1, available at stacks.iop.org/JPCO/5/115016/mmedia. This technique only works if the size of the particle is larger than two times the mode waist of the optical trapping beams. In our case, since the approximate wavelength is 1 μm, the particle diameter has to be at least 2 μm.

Conclusions

To conclude, we have successfully designed a torque wrench to apply a constant torque of 500 pN·nm on a 3 μm spherical birefringent particle using two laser beams. We were also able to rotate the particle by a 40° angle. The system can be used wherever controlled pitch torques are needed. It yields different dynamics from yaw rotation, particularly in proximity to surfaces. Hence pitch torque can also have major applications in applying torques on soft surfaces like cell membranes and on single proteins or DNA molecules attached to surfaces. This can be envisaged as a different mode of applying controlled stresses on molecules or surfaces. Even for NV centers in diamonds, which have 4 precise orientations in the diamond lattice, simultaneous control over pitch and yaw may greatly enhance the sensing capabilities of the particle.
Optimization of photon storage fidelity in ordered atomic arrays

A major application for atomic ensembles consists of a quantum memory for light, in which an optical state can be reversibly converted to a collective atomic excitation on demand. There exists a well-known fundamental bound on the storage error when the ensemble is describable by a continuous medium governed by the Maxwell-Bloch equations. The validity of this model can break down, however, in systems such as dense, ordered atomic arrays, where strong interference in emission can give rise to phenomena such as subradiance and "selective" radiance. Here, we develop a general formalism that finds the maximum storage efficiency for a collection of atoms with discrete, known positions, and a given spatial mode in which an optical field is sent. As an example, we apply this technique to study a finite two-dimensional square array of atoms. We show that such a system enables a storage error that scales with atom number $N_\mathrm{a}$ like $\sim (\log N_\mathrm{a})^2/N_\mathrm{a}^2$, and that, remarkably, an array of just $4 \times 4$ atoms in principle allows for an efficiency comparable to a disordered ensemble with an optical depth of around 600.

Atomic ensembles constitute an important platform for quantum light-matter interfaces [1], enabling applications from quantum memories [2-5] and few-photon nonlinear optics [6-11] to metrology [12-15]. In typical experiments, ensembles consist of disordered atomic clouds, with the propagation of light through them modeled phenomenologically by the Maxwell-Bloch equations [16,17]. Within this description, the atoms are treated as a smooth density and the discreteness of the atomic positions is ignored. In addition, spatial interference that can arise from light scattering is neglected, and the emission into directions other than the mode of interest is treated as an independent atomic process. Within this formalism, one can derive standard limits of fidelity for applications of interest; for example, the storage error of a quantum memory scales inversely with the optical depth (D) of the ensemble [18]. Recently, novel experimental platforms have emerged where it is possible to produce small ordered arrays of atoms [19-23]. Intuitively, one expects that strong interference in light emission can emerge, which renders inoperable the typical theoretical approaches to modeling light-atom interfaces. Theoretically, there has been growing interest in novel quantum optical effects in arrays, such as subradiance [24-30], topological effects [31,32], and complete reflection of light [33-35]. Indeed, it has already been shown numerically that an ordered one-dimensional array of atoms coupled to a nanofiber allows for a storage error exponentially smaller than the previously known bound [29]. In that work, the exponential scaling was observed by considering a fixed spatial waveform for the optical pulse. However, two interesting questions that arise are (i) whether it is possible to develop a theoretical technique to bound the error which takes fully into account the atomic positions and the interference of emission in all directions, and (ii) whether an improved scaling is possible for atoms in free space, as opposed to coupled to a photonic structure. These questions are affirmatively answered in our work.
In particular, we provide a construction that enables the maximum storage efficiency to be found, given the atomic positions and the desired spatial mode of light. This procedure is based upon solving the dynamics of a "spin model", which encodes the multiple scattering and interference of light as it interacts with atoms, and then calculating the light emitted into the desired mode by an input-output equation. We show that the maximum efficiency is given by the maximum eigenvalue of a Hermitian matrix, whose elements are derived from the atomic positions and the optical mode. While this technique is completely general, we apply it specifically to the case of a two-dimensional square array of atoms. In particular, it has recently been shown that an infinite array can in principle form a 100% reflector for light [33-35] when the lattice constant d is smaller than the resonant wavelength λ0. While a mirror constitutes a "passive" optical system, it is natural to ask whether this implies a 100% success probability if the system were functionalized into a quantum memory. For a finite array, we show that the minimum error decreases like ∼(log Na)²/Na² for storage from a Gaussian-like mode and, remarkably, that a 4 × 4 array in principle already enables an error below 1%.

The spin model

The full dynamics of light emission and re-scattering of an arbitrary collection of atoms in free space, specified only by their discrete, fixed positions r_j, can be related to an effective model containing only the atomic degrees of freedom and the incident field [36-41]. We first review this formalism for two-level atoms with ground state |g⟩ and excited state |e⟩, with the dipolar transition |g⟩-|e⟩ coupled to the free-space optical modes. Within the Born-Markov approximation, these modes can be integrated out to yield effective dynamics for the atomic density matrix ρ, which evolves as ρ̇ = −(i/ℏ)[H, ρ] + L[ρ], where the Hamiltonian and Lindblad operators read as in Eqs. (1a)-(1b) [36-42]. Here H_in is associated with the input field that drives the atoms (which need not be specified for our purposes), d_eg and d̂_j are the dipole matrix element and unit atomic polarization vector associated with the transition, and σ_βγ = |β⟩⟨γ| are atomic operators with {β, γ} ∈ {e, g}. G0(r_j, r_l, ω_eg) is the electromagnetic Green's function tensor in free space, which is the fundamental solution of the wave equation (Eq. (2)), where the curl is taken with respect to r. The Green's function explicitly takes the form [43]

G0(r_j, r_l, ω_eg) = (e^{ik0R}/4πR) [ (1 + i/(k0R) − 1/(k0R)²) I + (−1 − 3i/(k0R) + 3/(k0R)²) R̂ ⊗ R̂ ],

where R = |r_j − r_l|, R̂ = (r_j − r_l)/R, and k0 = ω_eg/c is the wavevector associated with the atomic transition frequency ω_eg, with c being the speed of light. We note that the local term [i.e., G0(r_j, r_j, ω_eg)] is divergent. This term is responsible for the Lamb shift and is incorporated into a renormalized resonance frequency ω_eg. Physically, Eq. (1a) describes the coherent exchange of atomic excitations mediated by photons. On the other hand, Eq. (1b) describes the collective emission or dissipation of excited atoms, after integrating out the common reservoir of electromagnetic modes with which they interact (within the Born-Markov approximation). Instead of solving the density matrix evolution as governed by the master equation, one can equivalently work within the stochastic wave function or "quantum jump" formalism [44].
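For reference, the dyadic Green's function written above can be implemented directly; the sketch below uses the standard convention just quoted (prefactor conventions differ between references), with an arbitrary example wavelength:

```python
# Sketch of the free-space dyadic Green's function G0(ri, rj, w_eg)
# in the convention reconstructed above; treat as illustrative.
import numpy as np

def green_free_space(ri: np.ndarray, rj: np.ndarray, k0: float) -> np.ndarray:
    """3x3 dyadic Green's tensor between two dipole positions (ri != rj)."""
    rvec = ri - rj
    R = np.linalg.norm(rvec)
    rr = np.outer(rvec, rvec) / R**2          # projector R_hat (x) R_hat
    kR = k0 * R
    pref = np.exp(1j * kR) / (4 * np.pi * R)
    term_I = (1 + 1j / kR - 1 / kR**2) * np.eye(3)
    term_rr = (-1 - 3j / kR + 3 / kR**2) * rr
    return pref * (term_I + term_rr)

k0 = 2 * np.pi / 780e-9                        # e.g., a 780 nm transition
G = green_free_space(np.zeros(3), np.array([0.5e-6, 0.0, 0.0]), k0)
print(np.round(G, 3))
```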
In that case, the system is described by a wave function, which deterministically evolves under an effective, non-Hermitian Hamiltonian

H_eff = H_in − µ0 ω_eg² d_eg² Σ_{j,l} d̂*_j · G0(r_j, r_l, ω_eg) · d̂_l σ_eg^j σ_ge^l.

This Hamiltonian captures both the coherent evolution of Eq. (1a) and the last two terms of the Lindblad operator in Eq. (1b). In addition, one must also stochastically apply quantum jump operators to the wave function, to capture the population-recycling terms σ_ge^l ρ σ_eg^j of Eq. (1b). Formally, the jump operators of our system will consist of superpositions of σ_ge^l, i.e., atomic lowering operators, which physically encode the emission of a photon. In the following, we will be interested in initial states with just a single excitation; thus, any jump operator trivially takes the system to the ground state |g⟩^{⊗N}, where it cannot further evolve or contribute to observables of interest (e.g., the emission of a photon). Furthermore, the rate at which jumps occur is exactly equal to the rate of population loss of the wave function evolving under H_eff. Thus, in our case, jumps are effectively accounted for just by evolution under H_eff alone. Any loss of population from the single-excitation manifold implies that a corresponding population is building up in the manifold |g⟩^{⊗N}|1(r, t)⟩, where all the atoms are in the ground state and a single photon is emitted in some spatial-temporal pattern. We next discuss how the photon-emission pattern and its overlap with a mode of interest can be calculated. Given the evolution of the atomic state under H_eff, any observables associated with the total field operator Ê_out(r) can be derived from the input-output relation [37-41]

Ê_out(r) = Ê_in(r) + µ0 ω_eg² d_eg Σ_j G0(r, r_j, ω_eg) · d̂_j σ_ge^j.

Formally, this equation states that the total field is a superposition of the incoming field and the fields emitted by the atoms, whose spatial pattern is contained in the Green's function. Equation (5) enables the field to be calculated at any point r, based upon the evaluation of an atomic correlation function ∼ G0(r, r_j, ω_eg) · d̂_j σ_ge^j weighted by the Green's function. Evaluating the Green's function at each r and the corresponding atomic correlation function to construct the field everywhere can become tedious. However, in experiments one often cares about the projection of the field into a specific spatial mode, such as a Gaussian (see Fig. 1). It can be proven (see Appendix A) that this projection depends only on the amplitudes of the classical mode field E_det(r) at the positions of the dipoles. We can thus define the quantum operator Ê_det associated with the detector (Eq. (6)), where Ê_det,in is the input field in the detection mode and F_det = ∫_{z=const} d²r E*_det(r) · E_det(r) is a normalization factor. Here, the normalization is such that ⟨Ê†_det Ê_det⟩ represents the photon number per unit time emitted into the mode. Before discussing the specifics of the retrieval efficiency, we would like to briefly discuss the validity of the Born-Markov approximation, which allows one to trace out the photonic degrees of freedom and arrive at an atomic master equation, as well as to write equations for the field operators that depend instantaneously on the atomic operators. This approximation is valid whenever (1) the photon bath correlations decay much faster than the atomic correlations and (2) retardation can be ignored.
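In the single-excitation sector, H_eff reduces to an Na × Na matrix. The sketch below builds the dimensionless coupling matrix M_jl = 3πk0⁻¹ d̂*_j · G0(r_j, r_l, ω_eg) · d̂_l (defined later in the text) for an example atomic chain and reads off collective decay rates from the imaginary parts of its eigenvalues; the geometry and polarization are illustrative choices, not the paper's configuration.

```python
# Single-excitation sector: collective shifts and decay rates from the
# eigenvalues of M_jl, in units of Gamma0. Example chain geometry.
import numpy as np

def G0(ri, rj, k0):
    """Free-space dyadic Green's tensor (same convention as the sketch above)."""
    rvec = ri - rj
    R = np.linalg.norm(rvec)
    rr = np.outer(rvec, rvec) / R**2
    kR = k0 * R
    return (np.exp(1j * kR) / (4 * np.pi * R)) * (
        (1 + 1j / kR - 1 / kR**2) * np.eye(3)
        + (-1 - 3j / kR + 3 / kR**2) * rr
    )

def coupling_matrix(positions, dhat, k0):
    Na = len(positions)
    M = np.diag(np.full(Na, 0.5j))  # self-term: Im M_jj = 1/2 gives Gamma0 decay
    for j in range(Na):
        for l in range(Na):
            if j != l:
                M[j, l] = (3 * np.pi / k0) * (
                    dhat.conj() @ G0(positions[j], positions[l], k0) @ dhat
                )
    return M

lam0 = 1.0                                   # work in units of the wavelength
k0 = 2 * np.pi / lam0
positions = [np.array([0.6 * lam0 * n, 0.0, 0.0]) for n in range(10)]
M = coupling_matrix(positions, np.array([0.0, 0.0, 1.0]), k0)
rates = 2 * np.imag(np.linalg.eigvals(M))    # collective decay rates / Gamma0
print(np.sort(rates))                        # sub- and super-radiant branches
```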
Figure 1. Schematic of a quantum memory using a two-dimensional atomic array. An excitation initially stored in the |s⟩-manifold is retrieved as a photon by turning on the classical control field Ω_c (blue arrows), which then creates a Raman-scattered photon from the |g⟩-|e⟩ transition. The photon is detected in some given mode, illustrated here as a Gaussian beam.

The first condition is obviously satisfied for atoms in free space, as the vacuum Green's function has a frequency spectrum that is much broader than the atomic linewidth. Neglecting retardation in both the photon-mediated interactions between atoms and the field produced by the atoms requires the characteristic length L of the atomic system to be much smaller than that of a spontaneously emitted photon, which is ∼c/Γ0 ≤ 1 m [45-48], where Γ0 = µ0 ω_eg³ d_eg²/(3πℏc) is the single-atom spontaneous emission rate in vacuum. It should also be pointed out that for at most a single atomic excitation, the dynamics of atom-light interactions can readily be solved in an exact manner [46,48-51]. In this regime of linear optics, the dynamics can be analyzed for each frequency component in the Fourier domain, exploiting the fact that different frequency components do not couple to one another. However, the spin model presented above has a natural extension to the multi-excitation case (e.g., studying the storage of multiple photons and their subsequent nonlinear interaction [52-54]), whereas exact solutions are only available in a limited number of cases [55,47,56].

The retrieval efficiency

The typical quantum memory scheme consists of an ensemble of three-level atoms, where an additional metastable state |s⟩ is coupled to the excited state |e⟩ by a classical control field with Rabi frequency Ω_c(r, t) and detuning ∆ from the transition frequency ω_se (see Fig. 1) [18]. While the state |s⟩ is typically associated with another state in the ground-state hyperfine manifold, in our case this would deleteriously reduce interference effects in emission. For example, in storage where all atoms begin in |g⟩, there is no interference pathway to suppress spontaneous emission into |s⟩ once an incident photon excites an atom to |e⟩. Thus, we assume that our atoms have no hyperfine structure and there is a unique ground state, as would be the case for bosonic Sr or Yb atoms, and that level |s⟩ is a long-lived, higher-lying excited state. Dipole-dipole interactions on the |e⟩-|s⟩ transition have no effect, as they require at least two total excitations in the system. In the main text, we will furthermore take the conceptually simpler case where |e⟩ is the unique excited state coupled to |g⟩ (for concreteness, with polarization d̂_j = x̂). A more realistic model with three excited states |e_{x,y,z}⟩, providing an isotropic atomic response to light, is presented in Appendix C, but the results qualitatively remain the same. Instead of storage, it is mathematically more convenient to optimize the retrieval problem, in which an initial collective spin excitation |ψ(t = 0)⟩ = Σ_j s_j(t = 0) σ_sg^j |g⟩^{⊗Na} is emitted as an outgoing photon on the |g⟩-|e⟩ transition via a Raman process facilitated by the control field Ω_c. The initial state then evolves under the total Hamiltonian H = H_eff + H_c, where the Hamiltonian associated with the control field is H_c = ℏ Σ_j [−∆ σ_ee^j + Ω_c^j(t)(σ_es^j + H.c.)], and H_in = 0 as there is no external field driving the |g⟩-|e⟩ transition in retrieval. We take a spatially uniform but time-dependent control field Ω_c(t), although it is straightforward to generalize the following discussion to the case of a spatially varying control field.
Then, for a given detection mode and atomic spatial configuration, we want to find the initial spin amplitudes s_j(0) that maximize the retrieval efficiency. By time-reversal symmetry, the storage efficiency for an incoming photon in the same mode and for the same atomic configuration is identical, when optimized over the temporal shapes of the incoming photon and control field [18]. Writing the general state in time as |ψ(t)⟩ = Σ_j [e_j(t) σ_eg^j + s_j(t) σ_sg^j] |g⟩^{⊗Na}, one obtains equations of motion for the amplitudes (Eqs. (7)-(8)) that couple through the matrix M_jl = 3πk0⁻¹ d̂*_j · G0(r_j, r_l, ω_eg) · d̂_l. While we explicitly consider the model above, we note that it is straightforward to add a number of other effects (e.g., decay of the |s⟩ state or dephasing) into the analysis. From Eq. (6), we can evaluate the expected total photon number η = ∫₀^∞ dt ⟨Ê†_det(t) Ê_det(t)⟩ emitted into the detection mode. Assuming that the control field is turned on for long enough, it is guaranteed that one photon in total is emitted into all modes, and thus η also represents the retrieval efficiency. Evaluating the atomic operators in Eq. (6), we find the efficiency expression of Eq. (9), in which we have defined the local scalar field E_j = E_det(r_j) · d̂*_j at the atom positions, and S_λ0 = (3/2π)λ0² is the resonant atomic optical cross-section (λ0 = 2π/k0 being the resonant wavelength). Equation (9) can be simplified by noting that M_jl in Eq. (7) is a symmetric complex matrix. Thus, if M_jl is diagonalizable (as we numerically verify in our cases of interest), its eigenvalues λ_ξ are complex and its eigenmodes v_ξ are non-orthogonal in the quantum mechanical sense, but obey the orthogonality and completeness conditions v_ξ^T · v_ξ′ = δ_ξξ′ and Σ_ξ v_ξ ⊗ v_ξ^T = I [37]. Projecting the equations of motion into this basis results in Na decoupled pairs of equations (Eqs. (10)-(11)), where e_ξ = Σ_j v_ξ,j e_j and s_ξ = Σ_j v_ξ,j s_j. Provided that the atomic excitation has left the system as t → ∞, one can derive a closed relation between the initial amplitudes s_ξ(0) and the time-integrated emitted field. Inserting this equality into Eq. (9), we readily find that η takes the quadratic form η = Σ_{ξ,ξ′} s_ξ*(0) K_ξξ′ s_ξ′(0), whose elements involve E_ξ = Σ_m v_ξ,m E_m. Importantly, K is an Na × Na Hermitian matrix which depends only on the positions of the atoms and the detection mode, but not on the specific time dependence of the control field (for example, one could apply a π pulse that transfers all of the excitation from state |s⟩ to |e⟩ at time t = 0). The maximum retrieval efficiency is thus given by the initial configuration corresponding to the eigenvector of K with the largest eigenvalue. We should note that while the efficiency η of retrieval is independent of the particular profile Ω_c(t), the shape of the outgoing photon is completely determined by the control field. By time-reversal symmetry, if one wants to store an incoming photon with maximum efficiency, one must first consider its time-reversed shape (i.e., an outgoing photon), find the unique control field Ω_c(t) that generates such a shape in retrieval, and then apply the time-reversed field Ω̄_c(t) for storage. Before proceeding further, we briefly comment on the classical and quantum optical aspects of the calculation presented above. An equation analogous to Eq. (9) also applies if the atoms were replaced by classical oscillating dipoles with amplitudes e_j(t). Such an equation corresponds to the projection of the total classical radiated field into a particular spatial mode. The equivalence between classical and quantum equations is not surprising, given that the propagation of both classical and quantum fields is governed by Maxwell's equations. In our particular problem of interest, the quantum nature of the field manifests itself when considering field correlations.
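Before turning to photon statistics, the eigenmode bookkeeping used above (transpose orthogonality and completeness for a complex symmetric matrix) can be verified numerically; the matrix below is a random stand-in for M_jl:

```python
# Numerical check: a complex symmetric M has eigenvectors normalizable
# so that v_xi^T v_xi' = delta and sum_xi v_xi (x) v_xi^T = identity.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
M = A + A.T                                   # complex symmetric, M = M^T

lam, V = np.linalg.eig(M)
# Renormalize columns so v^T v = 1 (transpose, not conjugate transpose).
V = V / np.sqrt(np.diag(V.T @ V))[None, :]

print(np.allclose(V.T @ V, np.eye(6), atol=1e-8))   # orthogonality
print(np.allclose(V @ V.T, np.eye(6), atol=1e-8))   # completeness
```

The key point is that the relation v_i^T v_j = 0 for i ≠ j follows from M = M^T alone (for distinct eigenvalues), without Hermiticity.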
For example, using Eq. (6), one can calculate the second-order correlation function ⟨Ê_det†² Ê_det²⟩. As the atomic state that we consider contains at most one excitation, this correlation function is exactly zero, or perfectly "anti-bunched", reflecting the fact that only a single photon is emitted.

2D square array

While the formalism presented above is general to any ensemble of atoms with known positions, we now apply it to a 2D square array with lattice constant d. This case is particularly interesting, as an infinite array of two-level atoms can act as a perfect mirror for incoming light at normal incidence when d is smaller than the atomic resonant wavelength λ0 [33-35]. Physically, the incoming field guarantees that all the induced atomic dipoles oscillate with the same phase. While such a configuration can in principle emit into various diffraction orders, for d < λ0 all of the orders except the one perpendicular to the plane become evanescent and cannot radiate away energy. With only two channels of emission possible (forward and backward), the scattered field of the array perfectly interferes with an incident resonant photon in the forward direction, leading to complete reflection of light. Likewise, when an excitation is stored uniformly in the infinite array with d < λ0, it is "selectively radiant" [29], as interference guarantees that the retrieved photon is perfectly emitted into two plane waves normal to the array (we assume that this symmetric emission can be re-combined). While this simple argument hints that a finite array can also be very efficient, what remains is to quantify the error. We thus analyze the retrieval efficiency of an array made of Na = N × N atoms. As far as the detection mode is concerned, a common mode to project into is a Gaussian beam. There is a technicality, however, since a Gaussian beam is only an approximate (paraxial) solution to Maxwell's equations. While such an approximation usually suffices, here we anticipate that one can achieve nearly perfect storage and retrieval efficiencies. Consequently, it is not obvious a priori that the small (actual) retrieval errors are not overwhelmed by the error of the paraxial approximation itself. Thus, we consider an exact mode solution of Maxwell's equations (see Appendix B for details), which approaches the Gaussian solution in the limit of large beam waist w0. Before presenting the numerics, one can already intuitively argue the fundamental sources of error associated with a finite array by considering the reflectance problem. If the beam waist w0 is too large with respect to the array dimensions, then part of the incoming light will not see the atoms and will be transmitted or scattered in other directions by the edges of the array. If w0 is too small, the incoming mode contains a broad range of wavevectors with different propagation directions. Since different angles have maximum reflectance at different detunings relative to the bare transition frequency ω_eg [35], the overall reflectance for a near-monochromatic photon will be reduced. For a given array, an optimal beam waist thus maximizes the reflectance of an incoming photon (at optimal detuning). The situation is analogous for the retrieval problem, where the optimization over the photon frequency is replaced by an optimization over the initial spatial distribution of the collective s-excitation.
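This trade-off can be made quantitative with the error model of Eq. (15) below. The sketch minimizes that model over w0, using the fitted constant C = 2.4×10⁻³ for d = 0.6λ0 quoted there, and compares the result to the (log Na)²/Na² scaling; treat it as an illustration of the model, not a reproduction of the full electrodynamics calculation.

```python
# Sketch: minimize the modeled retrieval error of Eq. (15) over the
# beam waist w0 (lengths in units of lambda0).
import numpy as np
from scipy.special import erf
from scipy.optimize import minimize_scalar

C, d = 2.4e-3, 0.6

def error_model(w0, N):
    # [1 - Erf^2(N d / (sqrt(2) w0))] : mode energy missing the array
    # C / w0^4                        : finite angular spread of the mode
    return (1.0 - erf(N * d / (np.sqrt(2.0) * w0)) ** 2) + C / w0**4

for N in (4, 8, 16, 32):
    res = minimize_scalar(lambda w0: error_model(w0, N),
                          bounds=(0.5, N * d), method="bounded")
    Na = N * N
    print(f"{N}x{N}: w0_opt ~ {res.x:.2f} lambda0, error ~ {res.fun:.1e}, "
          f"(log Na / Na)^2 = {(np.log(Na) / Na) ** 2:.1e}")
```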
To check this behavior, we numerically calculate the minimum retrieval error ε = 1 − η as a function of the beam waist w0, for several different atom numbers. In Fig. 2(a), the error is plotted as a function of the ratio between the array area S_arr = d²Na and w0². Here, we have taken the retrieval mode to consist of a symmetric superposition of Gaussian beams emitted in opposite directions from the array, with the view that these beams can in principle be recombined. For concreteness, we consider a lattice constant of d = 0.6λ0, although other choices d < λ0 do not affect the general scalings. As S_arr/w0² grows, the error initially scales as ∼1 − Erf²(Nd/√2 w0) (illustrated by the dashed curve), where Erf(z) is the error function. Physically, this error corresponds to the fraction of the energy carried by the Gaussian beam beyond the array boundaries. In Fig. 2(b) we plot ε (in log-log scale) as a function of the ratio between w0 and λ0 (for values larger than one), again for different array sizes. Up to the point where the beam waist becomes comparable with the array dimension, the error scales roughly as ∼(λ0/w0)⁴ (dashed line). This error physically arises from the range of wavevector components that make up the detection mode, which is inversely proportional to w0. An analysis of the reflectance of a beam of finite waist from an infinite array in fact shows the same scaling, when considering the fraction of light that is not reflected. Overall, the minimum error is well approximated by

ε(w0) ≈ [1 − Erf²(Nd/√2 w0)] + C (λ0/w0)⁴.  (15)

The constant C can be obtained by fitting the error: for d = 0.6λ0 we find C ≈ 2.4·10⁻³. One can use Eq. (15) to find the optimal beam waist. After optimizing over w0, we find that the leading term of the error scales like ∼(log Na)²/Na² (Eq. (16)). In Fig. 2(c) this approximate expression for the minimum retrieval error is compared with the value obtained by numerical optimization; the associated optimal beam waist for the retrieval mode is also plotted for completeness. Interestingly, even a 4 × 4 array of atoms can in principle already enable a storage/retrieval efficiency above 99%. In comparison, an optical depth of nearly D ∼ 600 is needed to obtain the same error in a conventional ensemble [18]. In the case where the beam waist does not significantly diverge over the length of the ensemble, the optical depth is given by D ∼ S_λ0 Na/w0². For cold atoms, an atom number on the order of Na ∼ 10⁶-10⁷ might be required to achieve a value of D ∼ 600.

Analysis of disorder

In this section, we analyze the effects of various types of disorder in the array. One useful attribute of our efficiency calculation is that it enables different spatial configurations to be studied. Thus, we can easily include imperfections such as the absence of atoms ("holes") in the array, or classical position disorder. We first examine the case of some number N_def of holes in the array. Intuitively, one expects that the relative decrease in efficiency, (η − η_def)/η, will be proportional to the ratio of the intensity of the detection mode hitting the empty sites to the total intensity over the array. Here, η_def and η denote the maximum retrieval efficiency with and without the holes, respectively, with the beam waist w0 chosen to optimize η.
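This intuition can be checked with a quick Monte Carlo of the intensity fraction that enters Eq. (17) below. The sketch assumes a Gaussian detection mode sampled at the lattice sites; the geometry, mode waist and the slope α are placeholders, not the paper's fitted values.

```python
# Sketch: Monte Carlo of the intensity fraction appearing in Eq. (17)
# for random hole configurations in an N x N array under a Gaussian mode.
import numpy as np

rng = np.random.default_rng(2)
N, d, w0 = 10, 0.6, 2.0                    # lengths in units of lambda0
xs = (np.arange(N) - (N - 1) / 2) * d
X, Y = np.meshgrid(xs, xs)
I = np.exp(-2 * (X**2 + Y**2) / w0**2).ravel()   # mode intensity at sites

alpha = 1.0                                 # hypothetical proportionality
for frac in (0.05, 0.10, 0.20):
    n_def = int(frac * N * N)
    samples = [I[rng.choice(N * N, n_def, replace=False)].sum() / I.sum()
               for _ in range(100)]
    print(f"{frac:.0%} holes: mean intensity fraction = {np.mean(samples):.3f}, "
          f"predicted relative loss ~ {alpha * np.mean(samples):.3f}")
```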
Analysis of disorder In this section, we analyze the effects of various types of disorder in the array. One useful attribute of our efficiency calculation is that it enables different spatial configurations to be studied. Thus, we can easily include imperfections such as the absence of atoms ("holes") in the array, or classical position disorder. We first examine the case of some number N def of holes in the array. Intuitively, one expects that the relative decrease in efficiency, (η − η def )/η, will be proportional to the ratio of the intensity of the detection mode hitting the empty sites to the total intensity over the array. Here, η def and η denote the maximum retrieval efficiency with and without the holes, respectively, with the beam waist w 0 chosen to optimize η. In Fig. 3(a) we plot the relative loss as a function of Σ j∈def |E j | 2 / Σ l |E l | 2 , where the sums of the field intensities in the numerator and denominator run over the sites of holes and over all sites, respectively, sampling over 100 random configurations for different densities of holes (N def /N a up to 20%). One sees a clear statistical relation of the form (η − η def )/η ≈ α Σ j∈def |E j | 2 / Σ l |E l | 2 (Eq. (17)). While the constant of proportionality α in Eq. (17) is extracted for known hole configurations, which would be applicable if an experiment could resolve the positions of the holes in a single shot [21], we expect a similar scaling even if the positions of the holes are unknown. Classical disorder of the atomic positions consists of the atoms being displaced by random amounts δ j = (δ x,j , δ y,j ) from their positions in the perfect lattice. It is shown in Ref. [35], for the case of reflectance from an infinite array, that when the δ's are drawn from a Gaussian distribution with standard deviation σ, the decrease in reflectance introduced by the disorder scales as σ 2 /d 2 . We find numerically the same result for the retrieval error of the finite array. In particular, in Fig. 3(b) the error introduced by disorder is plotted as a function of σ for different array dimensions and fixed lattice constant. This error is defined as the difference between the optimized maximum retrieval efficiency η of a perfect lattice and the mean retrieval efficiency η dis (sampled over many configurations) with the same initial atomic wave function and beam waist but with disorder in the atomic positions. Finite detection time When calculating the retrieval efficiency, given by Eq. (9), we have implicitly assumed that the detection time is infinite, such that all the energy emitted into the detection mode is collected. Practically, it might also be relevant to consider the retrieval efficiency given a finite time window 0 < t < T d for photon collection, for instance if an experiment has other limiting time scales (e.g., atom trapping time, required fast readout, etc.). The efficiency η T d detected for an arbitrary detection time window T d is given by Eq. (18), where e j (t) is obtained by integrating Eqs. (7)-(8). In general, the temporal profile of the emitted field depends on the control field amplitude Ω c (t) and detuning ∆. If one wants to achieve a high efficiency in the shortest time, then the optimal strategy is essentially to use the control field to apply an instantaneous π-pulse at t = 0, thus instantly transferring the excitation stored in the metastable state |s⟩ to the rapidly emitting excited state |e⟩. In an array, this collective excitation in |e⟩ will emit a photon at a rate ∼ Γ 0 comparable to the single-atom emission rate, ensuring that the errors due to the finite time window T d become very small once T d is on the order of a few ∼ Γ −1 0 . In Fig. 4, we plot the relative error 1 − η T d /η due to the finite detection time, where η is the detection efficiency for an infinite time window, for an array of 10 × 10 atoms with d = 0.6λ 0 and optimal beam waist. We notice that for a detection time T d ∼ 10/Γ 0 the error is of the order of 10 −3 . The possibility of having a good retrieval efficiency even for a short detection time is a consequence of the fact that, while the array can support highly subradiant states [25,29,30,34,35], they form a negligible component of the optimized spin wave for storage and retrieval. This makes intuitive sense, as to interface with light efficiently, one should use radiant or "selectively radiant" atomic excitations rather than states that decouple from light.
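Returning to the hole-disorder relation of Eq. (17), the intensity-fraction predictor on its right-hand side is straightforward to sample; the sketch below does so for a Gaussian detection-mode intensity over the sites. This is our own illustration: the beam waist is an arbitrary example value and the proportionality constant α is left as a placeholder that would have to be fitted against full efficiency calculations.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, w0 = 10, 0.6, 2.0       # 10 x 10 array, units of lam0; w0 is illustrative
xs = (np.arange(N) - (N - 1) / 2) * d
X, Y = np.meshgrid(xs, xs)
intensity = np.exp(-2 * (X**2 + Y**2) / w0**2).ravel()  # |E_j|^2 of a Gaussian beam

alpha = 1.0   # proportionality constant in Eq. (17); placeholder, fit numerically
for frac in (0.05, 0.10, 0.20):
    n_def = int(frac * N * N)
    ratios = []
    for _ in range(100):                       # 100 random hole configurations
        holes = rng.choice(N * N, size=n_def, replace=False)
        ratios.append(intensity[holes].sum() / intensity.sum())
    print(f"hole fraction {frac:.0%}: mean intensity fraction {np.mean(ratios):.3f} "
          f"-> predicted relative loss ~ alpha * {np.mean(ratios):.3f}")
```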
Conclusions In summary, we have introduced a prescription to calculate the maximum storage and retrieval efficiency of a quantum memory, which fully accounts for re-scattering and interference of light emission in all directions. Our approach is in principle applicable to any system where the positions of the emitters are known (or can be reasonably modelled, such as by assigning random positions) and where the spatial and spectral response of the dielectric environment (i.e., the Green's function) is also known [2,3,4,5,57,58,29,59,37,60,61,62,63]. As one particular application, we have shown an improved scaling of errors for atoms in free space, compared to the result predicted by the one-dimensional Maxwell-Bloch equations. We speculate that it is possible to obtain an exponential reduction of errors versus atom number in free space by using arrays that are not completely periodic. The question of how to tailor the spatial positions is left to future work. More broadly, we expect that a significantly improved storage efficiency is possible whenever the excited-state emission is largely radiative and coherent, which includes not only atoms but also solid-state emitters with a large zero-phonon line and Fourier-limited linewidths [63]. Techniques to reversibly map between photonic and atomic excitations in arrays should find a variety of exciting applications. For example, they would allow for photonic quantum gates, given some form of spin interactions in the array (such as between Rydberg levels [64]), or would allow exotic spin states (like subradiant [24,25,26,27,28,29] or topological excitations [31,32]) to be detected optically. It would also be interesting to investigate whether the spin state itself could be engineered to produce a useful non-classical state of outgoing light. More broadly, the ability to formally map atom-light interactions to a long-range open spin model could provide new insights into quantum optical phenomena with atomic systems. Appendix A. Green's function expansion in plane and evanescent waves Here we derive Eq. (6) of the main text by using an expansion of the Green's function in terms of plane and evanescent waves. The Green's function of Eq. (3) can be written in the angular spectrum representation, i.e., as an integral over k x and k y in Fourier space, as Eq. (A.1) [43], with G ± 0 (r, r′, ω eg ) and the ± denoting the sign of z − z′. We can separate the integral in Eq. (A.1) into two separate integrals: for values of k x , k y lying inside and outside the disk defined by k 2 x + k 2 y = k 2 0 . This decomposition separates the plane waves from the evanescent waves, i.e., we can write G ± (r, r′, ω eg ) = G ± pl (r, r′, ω eg ) + G ± ev (r, r′, ω eg ). The integral in the plane-wave part can be rewritten in polar coordinates using k 0 = k 0 (sin θ cos φ, sin θ sin φ, cos θ), obtaining G ± pl (r, r′, ω eg ) = (i/8π 2 ) k 0 ∫ 2π 0 dφ ∫ π/2 0 dθ sin θ Q ± e ik 0 (sin θ cos φ(x−x′)+sin θ sin φ(y−y′)±cos θ(z−z′)) . In terms of the unit vectors orthogonal to k 0 and to each other, G + pl (r, r′, ω eg ) can be expressed as in Eq. (A.8), where we have defined the plane wave basis u k 0 ,θ,φ,α (r) = ê α k 0 e −ik 0 (sin θ cos φ x+sin θ sin φ y+cos θ z) , (A.9) with the normalization ∫ z=const d 2 r u * k 0 ,θ,φ,α (r) · u k 0 ,θ′,φ′,β (r) given in Eq. (A.10). Similarly one has Eq. (A.11), and an analogous expression can be found for the evanescent-wave part.
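For the reader's convenience, the angular-spectrum (Weyl) expansion referred to as Eq. (A.1) has a standard textbook form [43]; we reproduce that standard form here rather than the paper's own equation, so the notation for k̂± and k_z is our assumption:

```latex
\mathbf{G}^{\pm}_0(\mathbf{r},\mathbf{r}',\omega_{eg})
 = \frac{i}{8\pi^2}\iint \mathrm{d}k_x\,\mathrm{d}k_y\,
   \frac{1}{k_z}\left(\mathbb{1}-\hat{\mathbf{k}}^{\pm}\hat{\mathbf{k}}^{\pm}\right)
   e^{\,i\left[k_x(x-x')+k_y(y-y')\pm k_z(z-z')\right]},
\qquad
k_z=\sqrt{k_0^2-k_x^2-k_y^2},\quad
\hat{\mathbf{k}}^{\pm}=\frac{(k_x,\,k_y,\,\pm k_z)}{k_0}.
```

Inside the disk k_x^2 + k_y^2 ≤ k_0^2 the longitudinal wavenumber k_z is real (propagating plane waves), while outside it is imaginary (evanescent waves), which is exactly the decomposition used in the text.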
Here it is convenient to define the vector k̃ 0 = k 0 (cosh ξ cos φ, cosh ξ sin φ, i sinh ξ); in terms of the unit vectors orthogonal to k̃ 0 and to each other, one can indeed write an expansion analogous to Eq. (A.8), and similarly for the opposite propagation direction. Now let us consider a detection mode that, for simplicity, does not contain evanescent components, so that it can be expanded just in terms of monochromatic plane waves as E det (r) = (1/(2π) 2 ) Σ α ∫ 2π 0 dφ ∫ π 0 dθ sin θ c k 0 ,θ,φ,α u k 0 ,θ,φ,α (r). (A.18) The overlap between this mode and the field generated by a dipole is then obtained by using Eq. (5) (without input field) to express the field generated by the dipole through the Green's function, together with Eqs. (A.8) and (A.11) for the Green's function decomposition. Adding the input field and normalizing the detection mode, we finally obtain Eq. (6) of the main text. Appendix B. Gaussian detection mode Here we present the detection mode which we have chosen to study the retrieval efficiency of the 2D array. We choose a solution oscillating with frequency e −iω eg t , where the x-component of the electric field in wavevector space is a Gaussian multiplied by the Heaviside step function Θ(k 2 0 − k 2 x − k 2 y ). That is, E x has a Gaussian distribution for k 2 x + k 2 y ≤ k 2 0 while it is zero for k 2 x + k 2 y > k 2 0 , such that the field does not contain evanescent components. In the y direction, we take the field to be identically zero. The value of the z-component is then determined by Maxwell's equations [65]. The real-space profile of this mode can be obtained by Fourier transformation, yielding an expression in terms of the cylindrical coordinates (ρ, z) of r and the Bessel functions J 0 and J 1 . If evanescent components were included, the field in real space would consist of an exact Gaussian with beam waist w 0 in the z = 0 focal plane. The step function in wavevector space enforces the diffraction limit in real space, and distorts the beam so as to prevent a focal spot smaller than ∼ λ 0 . For large w 0 the mode tends to the paraxial solution, i.e., E z det vanishes and E x det assumes the form of a fundamental Laguerre-Gauss mode [43]. Appendix C. Spin model for isotropic atoms In the main text we have introduced a formalism to calculate the retrieval efficiency of an atomic ensemble of three-level atoms, with an excitation initially stored in a metastable state |s⟩ coupled to the excited state |e⟩ by a classical control field. Instead of a single excited state, a more realistic minimal model of an atom consists of three excited states |e α ⟩, where α = x, y, z denotes the three possible orientations of the dipole transition d. The effective Hamiltonian (4) generalizes to H eff = H in − µ 0 d 2 eg ω 2 eg Σ j,l Σ αβ G αβ (r j , r l , ω eg ) σ eg α,j σ ge β,l , (C.1) where the sums over α and β run over x, y, z. Here, σ ge β,l = |g⟩ l ⟨e β | l is the lowering operator on atom l, which takes the excited state |e β ⟩ to the ground state |g⟩. It should be noted that, in general, transitions with different orientations can mix (e.g., one atom could decay from |e y ⟩ and excite another atom from the ground state to |e x ⟩), as a photon emitted from a given dipole orientation does not have the same global polarization everywhere in space. In the case in which the state |s⟩ is coupled only to one of the three excited states, for concreteness |e x ⟩, it is straightforward to generalize the main result of the paper.
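The diffraction-limit behavior of the Appendix B mode is easy to see numerically. The sketch below back-transforms a truncated Gaussian angular spectrum to the focal plane; the specific spectral profile exp(−k²w 0²/4) is our assumption for what "Gaussian in wavevector space with waist w 0" means, and the 1/e² spot-size diagnostic is our own.

```python
import numpy as np
from scipy.special import j0

lam0 = 1.0
k0 = 2 * np.pi / lam0

def ex_focal(rho, w0):
    """E_x in the focal plane z = 0: Gaussian angular spectrum truncated
    at k0 (no evanescent part), back-transformed to real space."""
    k = np.linspace(0.0, k0, 4000)
    return np.trapz(k * np.exp(-(k * w0)**2 / 4.0) * j0(k * rho), k)

for w0 in (0.2, 0.5, 1.0, 3.0):
    rho = np.linspace(0.0, max(4.0 * w0, 2.0 * lam0), 801)
    prof = np.array([ex_focal(r, w0) for r in rho])**2
    prof /= prof[0]
    spot = rho[np.argmax(prof < np.exp(-2))]   # 1/e^2 radius of the focal spot
    print(f"w0 = {w0:.1f} lam0 -> focal-spot radius ~ {spot:.2f} lam0")
```

For large w 0 the spot radius tracks w 0 (the paraxial Gaussian limit), while for small w 0 it saturates near λ 0, reproducing the diffraction limit enforced by the step function.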
Figure C1. (a) Comparison between the optimal retrieval error ε opt (left axis, blue lines) and the corresponding optimal beam waist w̃ 0 (right axis, red lines) for the case of a single excited state discussed in the main text (continuous lines) and the case of three excited states (dashed lines), as functions of the linear array dimension N. (b) Relative difference (ε opt,iso − ε opt,TL )/ε opt,TL between the retrieval errors of the isotropic and two-level atomic structures plotted in (a). Eq. (13) indeed keeps the same form, but with the matrix K generalized to the isotropic case, where E ξ = Σ m v ξ,m E x det (r m ) and the sum over the index ξ of the eigenvectors runs over 3N a values. In Fig. C1(a) we compare the minimum retrieval error for an N × N square array of atoms versus N, for the cases of a single excited state and of three-fold degenerate excited states. We notice that, while the scaling of the error remains the same, a small reduction of the efficiency is observable in the isotropic case, a consequence of the fact that light polarized along y can be emitted from atoms in the state |e x ⟩, reducing the overlap between the output mode and the detection mode. The increase of the error is better quantified in Fig. C1(b), where the relative difference is plotted. We observe that for the range of array sizes considered here the error increases by between 50% and 90%. The value of the optimal beam waist is instead not particularly affected, as expected.
8,810.2
2017-10-17T00:00:00.000
[ "Physics" ]
Development of Single Nucleotide Polymorphism (SNP) Markers for Analysis of Population Structure and Invasion Pathway in the Coconut Leaf Beetle Brontispa longissima (Gestro) Using Restriction Site-Associated DNA (RAD) Genotyping in Southern China The ability to determine population genomic structure through high-throughput sequencing techniques has revolutionized research on non-model organisms. The coconut leaf beetle, Brontispa longissima (Gestro), is a widely distributed pest in Southern China. Here, we used restriction site-associated DNA (RAD) genotyping to assess the invasion pathway by detecting and estimating the degree of genetic differentiation among 51 B. longissima accessions collected from Southern China. A total of 10,127 single nucleotide polymorphisms (SNPs) were obtained; the screened SNP set was used for phylogenetic tree construction, FST analysis, principal component analysis, and population structure analysis. Genetic structure analysis was used to infer the population structure; the result showed that all accessions were divided into a Hainan population and a non-Hainan population. The Hainan population remained stable, with only the Sansha population differentiating, while the non-Hainan populations gradually differentiated into smaller sub-populations. We conclude that there are two sources of invasion of B. longissima into mainland China: Taiwan and Hainan. As the invasion time increased, the Hainan population remained relatively stable, and the Taiwan population differentiated into three sub-populations. Based on the unrooted phylogenetic tree, we infer that Taiwan and Hainan are the two invasive base points. The Taiwan population invaded Fujian, Guangdong, and Guangxi, while the Hainan population invaded Yunnan and Sansha. Our results provide strong evidence for the utility of RAD sequencing (RAD-seq) in population genetics studies, and our generated SNP resource could provide a valuable tool for population genomics studies of B. longissima in the future. Introduction The coconut leaf beetle, Brontispa longissima (Gestro) (Coleoptera: Chrysomelidae), is a serious pest of the coconut palm Cocos nucifera (L.) and other palm trees [1]. The species longissima was first described as Oxycephala longissima by Gestro (1885), collected from the Aru Islands, which are located in the Arafura Sea between New Guinea Island and Australia, and was then transferred to the genus Brontispa by Gestro (1907) [1,2]. The beetle is thought to be native to Indonesia and Papua New Guinea [3]. Shun-Ichiro Takano et al. (2011) showed that B. longissima represents two monophyletic clades, using mitochondrial DNA analysis and crosses between the two nominal species. One, named the Pacific clade, is distributed in a relatively limited area (Papua New Guinea, Australia, Samoa, and Sumba Island), whereas the other, named the Asian clade, covers a wide area, including Asia (i.e., Indonesia, Cambodia, Japan, Myanmar, the Philippines, Taiwan, Thailand, and Vietnam) as well as French Polynesia, New Caledonia, and Vanuatu [2]. Since the late 1930s, it has invaded Pacific islands [3,4]. In 1975, the beetle was introduced to Taiwan [5], and then spread to Hong Kong, Hainan, Guangdong, Guangxi, Yunnan, and Fujian provinces in China [6]. Larvae and adults of B. longissima are found in young folded leaflets of palms, where they feed on the soft leaf tissues [7]. Infestation with the beetles turns the leaves brown and decreases fruit production. Sustained heavy attack may ultimately kill the palm trees [3,7,8].
At present, the prevention and control of B. longissima are mainly accomplished via chemical and biological control [9]. Due to the large quantities of pesticides used, chemical methods are not only expensive but can also induce insecticide resistance and pollute the environment. The cost and effects of the biological control methods are not yet obvious; thus a new strategy is needed to directly, safely, and effectively control the damage caused by B. longissima. Related studies on population genetic structure and genetic diversity have identified genetic evolution and gene connectivity between different populations of pests, which has important theoretical and practical significance for comprehensive management [10][11][12]. Population differentiation and the genetic variation of pests directly affect the formulation and application of many environmentally friendly pest control strategies, such as infertility techniques, mating interference techniques, microbial pesticides, etc. Early studies mainly focused on the biology, ecology, and morphological classification of B. longissima, with few studies focused on molecular genetics. To date, the use of gene markers to analyze the genetic diversity and genetic structure of B. longissima has not been reported. Population genetics research is based on a relatively wide range of distribution [13]. The same species are distributed in a relatively wide range of different geographical environments. Due to factors such as climate and host, microevolution can be accelerated, and more genetic variation can be obtained in a short time. However, highly connected and recently differentiated populations with large, effective population sizes typically exhibit very weak genetic differentiation, reducing the ability of genetic tools to define management units and assign accessions to their origin [14]. Therefore, to obtain a high-resolution profile of the population structure, more nuclear and mitochondrial genetic markers are required [15]. To date, the available genetic markers for B. longissima have been limited to mitochondrial DNA, restriction fragment length polymorphism (RFLP) [2,16], and microsatellite techniques [4]. The advent of next-generation sequencing (NGS) has facilitated the identification of novel population genetic markers on an unprecedented scale, even in non-model organisms [17]. Restriction site-associated DNA sequencing (RAD-seq) is a promising technique widely used in population genomics. In particular, RAD-seq uses a repeatable method to generate a large number of nuclear markers, in which accession single nucleotide polymorphisms (SNPs) are detected by short NGS reads nearby or between restriction sites scattered throughout the nuclear genome [18]. In contrast to whole genome sequencing, RAD-seq only targets a subset of the genome. This not only improves the sequencing depth of each locus, but also allows more accessions to be included in a single sequencing run [19]. Thus, we hypothesized that the genetic diversity of B. longissima populations could be useful for the control of coleopterans in other regions of the world. In this study, we aimed to elucidate the genetic structure of B. longissima populations using RAD sequencing and SNP markers. The results of this study have important implications for understanding the genetic diversity of B. longissima populations and could be used to guide the integrated control of the beetle. Sampling B. longissima specimens were collected from May 2016 to August 2017. The sampling range was mainly in Southern China, including the Fujian (FJ), Guangdong (GD), Guangxi (GX), Yunnan (YN), Hainan (HN), Sansha (SS), and Taiwan (TW) regions, as shown in Table 1 and Figure 1. A total of 51 accessions were collected, all captured on coconut trees C. nucifera. Detailed descriptions with the original locations of specimen collection are provided in Table 1. The specimens we sampled were preserved in 95% ethanol (Xilong Scientific, Shantou, Guangdong, China) for DNA extraction. Then, the samples were maintained in 10% ethanol (Zhongshan Scientific, Nanjing, Jiangsu, China) for later identification. Voucher specimens were catalogued for further experimentation in the laboratory. RAD Library Preparation and Illumina Sequencing RAD-seq libraries were constructed following a modified protocol [17]. Genomic DNA (0.1-1 µg from accession samples) was extracted, and the restriction endonuclease EcoRI (New England Biolabs) was used to digest the genome, followed by heat inactivation of the enzyme. The EcoRI-cut site of each sample was ligated with barcoded P1 adapters (complementing the EcoRI-cut DNA gap). These adapters contained forward amplification and Illumina sequencing primer sites, as well as a nucleotide barcode 4 or 8 bp long for sample identification. The adapter-ligated fragments were subsequently pooled, randomly sheared, and size-selected. DNA was then ligated to a second adapter (P2), a Y adapter [20] that has divergent ends. The reverse amplification primer was unable to bind to P2 unless the complementary sequence was filled in during the first round of forward elongation originating from the P1 amplification primer. The structure of this adapter ensured that only P1 adapter-ligated RAD tags were amplified during the final PCR amplification step. During the QC (Quality Control) step, Agilent 2100 Bioanalyzer and qPCR methods were used to qualify and quantify the sample library. Then, the library products were used for sequencing. Sequencing was performed on the Illumina HiSeq 4000 platform (Illumina, San Diego, California, USA). Clean Reads Filtering and SNP Calling Raw reads from RAD-seq Illumina sequencing were processed using the Stacks pipeline [21,22]. Quality filtering was performed with the process_radtags function implemented in Stacks with default settings [15].
Reads were filtered according to the following standards: (1) reads containing the barcode adapter were removed, (2) low-quality reads were removed (i.e., reads in which more than 50% of bases had a quality value of 10 or less), and (3) reads with ≥10% unidentified nucleotides were removed. We identified the loci of each sample using the ustacks program in Stacks, then the loci of all accessions were integrated using the cstacks program to form a catalog. One-to-one searches and probability calculations were performed by the sstacks program between the loci appearing in each accession and the loci appearing in the catalog, which defined the alleles at each locus [23]. The final SNPs obtained underwent strict filtering and screening. We selected only SNPs matching the following criteria: (1) at least 20 accessions contained the locus in which the SNP was located, (2) the minimum minor allele frequency (MAF) of the SNP was 0.01, (3) the minimum depth of the locus in which the SNP was located was 7, and (4) if a site had a missing rate of more than 10% across all samples, the site was removed. The Stacks denovo_map.pl pipeline was implemented to identify candidate SNP markers for downstream analyses, as no reference genome for B. longissima was available. Population Structure Analysis Based on the candidate SNPs, we inferred the population structure of B. longissima using Structure 2.3.4 [24][25][26][27]. The statistic ∆K, which indicates the change in likelihood for different numbers of clusters, was calculated, and the cluster number with the highest ∆K value, indicating the most likely number of clusters in the population, was obtained by using Structure Harvester (available at http://taylor0.biology.ucla.edu/structureHarvester/). Based on the K value that we selected, pairwise FST values were estimated in Arlequin v.3.5 [28,29]. Principal Component Analysis (PCA) PCA was performed using PLINK [30,31], without prior information on accession groupings, to visualize broad-scale population structure. Clustering of Accessions and Populations A phylogenetic tree is a branching diagram that describes the order of evolution between groups, indicating the evolutionary relationships between them [32]. The main theoretical methods for constructing phylogenetic trees are Neighbor-joining (NJ) and Maximum likelihood (ML). The ML tree was generated based on the GTR (generalized time-reversible) substitution model using PhyML [33] with 1000 bootstraps. Tree topology (t), branch length (l), and rate parameters (r) were optimized in this analysis. The selected SNPs can be used to calculate the distance matrix of the phylogenetic tree. By analyzing the phylogenetic tree constructed by the ML method, we could determine the evolutionary relationships between populations. Selecting Candidate SNPs for Demographic Inference A genome scan of the 61,182 retained SNPs using ARLEQUIN [34] revealed that 39,735 SNPs had a MAF of < 0.05, and 10,127 had a missing rate of < 10%. Subsequent inferences of the genetic structure were conducted using the 10,127 candidate SNP markers (Table 2). Population Structure Analysis After the stringent filtering procedure, 10,127 loci were identified as candidate SNPs. The candidate SNPs were used for subsequent population inference. The STRUCTURE analysis software was used to analyze the population structure of B. longissima (Figure 2). Population structure analysis for each K value was performed, as well as ∆K analysis for different numbers of clusters (K) for the 51 B. longissima accessions.
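The four SNP-filtering criteria above are simple to express programmatically; the sketch below applies them to a toy genotype matrix. The data, depths, and thresholds-as-code are our own illustration of the stated criteria, not the authors' actual pipeline (which ran inside Stacks).

```python
import numpy as np

# Toy genotype matrix: 51 accessions x 5000 loci, allele counts 0/1/2, NaN = missing.
rng = np.random.default_rng(1)
geno = rng.choice([0.0, 1.0, 2.0, np.nan], size=(51, 5000), p=[0.55, 0.3, 0.1, 0.05])
depth = rng.integers(0, 40, size=geno.shape)       # toy per-genotype read depths

called = ~np.isnan(geno)
n_called = called.sum(axis=0)

c1 = n_called >= 20                                 # (1) called in >= 20 accessions
p = np.nansum(geno, axis=0) / (2 * np.maximum(n_called, 1))
c2 = np.minimum(p, 1 - p) >= 0.01                   # (2) minor allele frequency >= 0.01
masked_depth = np.where(called, depth, np.iinfo(np.int64).max)
c3 = masked_depth.min(axis=0) >= 7                  # (3) minimum depth of 7 at the locus
c4 = (1 - n_called / geno.shape[0]) < 0.10          # (4) missing rate below 10%

keep = c1 & c2 & c3 & c4
print(f"{keep.sum()} of {geno.shape[1]} toy SNPs pass all four filters")
```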
∆K showed a peak at 4, suggesting four clusters as the optimal option (Figure 3). F-statistics Based on the optimal K = 4, RAD-seq analysis revealed that the overall degree of genetic differentiation among B. longissima populations was not high (average pairwise F ST = 0.093 across all populations) on a reduced-representation genome scale (Table 3). However, the genetic differentiation of population 4 was significantly higher (average pairwise F ST = 0.141). This result is consistent with the genetic differentiation seen in the population structure analysis (Figure 2). The accessions of population 4 were collected from Taichung, Taiwan. We speculate that the high-altitude and low-temperature environment may have caused the rapid evolution of B. longissima. Principal Component Analysis The genetic split between the different populations was also discerned by PCA (Figure 4). Separation along the first principal component (PC1) shows that all accessions are divided into two populations, the Hainan population and the non-Hainan population, which is consistent with the K = 2 result of the population structure analysis and indicates the distinct genetic distance between accessions from Hainan and Taiwan. PC2 shows that the non-Hainan population differentiated into several subgroups. This is consistent with the results of the population structure analysis shown above. The filtered SNP information was used to construct a phylogenetic tree using the ML method (Figure 5). The results showed that the accessions collected from Hainan were clustered together, and this cluster was obvious (Figure 2, Figure 4, and Figure 5). B. longissima in Fujian was quite different from all other accessions, indicating that a recently distinct population was formed in Fujian. Samples collected in Taiwan were clustered into two major clades, which is consistent with the aforementioned population results, indicating several distinct groups of B. longissima detected in Taiwan and mainland China. All B. longissima accessions in Hainan, together with accessions from Sansha and Yunnan, were clustered together and showed an apparent genetic distance from the other accessions, which might have originated from Taiwan. With a comprehensive analysis of the genetic structure, PCA, FST, and the phylogenetic tree, we could infer the invasion pathway. These results indicated two potentially independent populations formed in Taiwan and Hainan, which then invaded mainland China by two independent pathways (Figure 1).
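For readers who wish to reproduce a PCA of this kind, the computation behind PLINK-style PCA plots reduces to an SVD of the centered genotype matrix. The following is a minimal sketch under our own assumptions (mean imputation of missing calls, optional per-SNP scaling); it is not the authors' script.

```python
import numpy as np

def pca_genotypes(geno, n_components=2):
    """PCA of an accessions x SNPs genotype matrix (0/1/2, NaN = missing)."""
    G = np.asarray(geno, dtype=float)
    mu = np.nanmean(G, axis=0)
    G = np.where(np.isnan(G), mu, G) - mu          # impute missing calls, then center
    sd = G.std(axis=0)
    G /= np.where(sd > 0, sd, 1.0)                 # optional per-SNP scaling
    U, S, _ = np.linalg.svd(G, full_matrices=False)
    return U[:, :n_components] * S[:n_components]  # sample coordinates (PC1, PC2, ...)

# coords = pca_genotypes(geno)
# If the structure described above holds, PC1 should separate the Hainan
# accessions from the non-Hainan accessions, and PC2 should split the latter.
```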
Discussion This study is the first attempt to understand the genetic variation patterns of B. longissima in southern China. In this study, we analyzed the genetic diversity and genetic structure among 18 geographic populations collected from 6 provinces in southern China. The use of SNP molecular markers to assess the genetic variability of B. longissima showed that its genetic diversity is not related to geographical distance and that it is affected by human factors. According to the analysis of the SNP data, the genetic diversity of B. longissima is low to medium. RAD sequencing is generally used in marine organism and plant research to develop SNP molecular markers, but it has been less often applied in insect research. Researchers have used this method to analyze the genetic structure and genetic diversity of the white perch Morone americana [15], the Nujiang catfish Creteuchiloglanis macropterus [35], the American lobster Homarus americanus [36], sweetpotato Ipomoea batatas [37], and other organisms, all with good results. For B. longissima, researchers have previously used mtDNA molecular markers [2]. In general, mtDNA is a good option for phylogenetic analysis, since it reflects only maternal inheritance and is not subject to recombination; thus, its variation is expected to be low within subpopulations of the same species, but higher at the species level. In addition, B. longissima invaded China only recently, about 40 years ago, and the genetic differentiation within the species is not obvious. SNP markers are highly polymorphic; thus, these markers should be good markers to estimate the population structure of B. longissima. The ML tree shows that samples collected in Taiwan were clustered into two major clades, indicating several distinct groups of B. longissima detected in Taiwan and mainland China. All B. longissima accessions in Hainan, together with accessions from Yunnan, were clustered together and showed apparent genetic distance from the other accessions, which might have originated from Taiwan. According to the results of the phylogenetic tree analysis, the invasion route of B. longissima in southern China can be inferred. Taiwan and Hainan are the two invasive base points. The Taiwan population invaded Fujian, Guangdong, and Guangxi, and the Hainan population invaded Yunnan and Sansha (Figure 1). B. longissima is thought to be native to Indonesia and Papua New Guinea. Shun-Ichiro Takano et al. (2011) showed that B. longissima represents two monophyletic clades, using mitochondrial DNA analysis and crosses between the two nominal species. One, named the Pacific clade, is distributed in a relatively limited area, whereas the other, named the Asian clade, covers a wide area, including Asia as well as French Polynesia, New Caledonia, and Vanuatu [2]. Java Island was the earliest recorded point for the spread of the Asian clade [38,39]. However, the historical data show that the B. longissima Asian clade appeared in New Caledonia and Vanuatu [40], Tahiti [41], Taiwan [42], Japan [43], the Indo-China peninsula and the Philippines [44], and Hainan, China.
Populations from outside China were not sampled; thus we cannot be sure which clade the Chinese populations belong to, and this is a shortcoming of our study. As an invading pest, B. longissima poses a high risk and can cause great harm. The results suggest that the most effective measures for the comprehensive management of B. longissima are to strengthen quarantine during the transport of B. longissima host seedlings to prevent its spread, and then to use chemical agents [9] or parasitic wasps [45] to kill it in areas where it occurs. Conclusions In this study, RAD sequencing technology was used for the development of B. longissima SNP genetic markers. After strict filtering and screening, 10,127 SNPs out of 61,182 markers were identified. Based on these SNPs, phylogenetic tree analysis, principal component analysis (PCA), and population structure analysis were used to analyze the evolutionary relationships among different B. longissima populations. The results showed that B. longissima in China can be divided into a Hainan population and a non-Hainan population. The Hainan population was relatively stable, while the non-Hainan population was divided into several subgroups. Based on the unrooted phylogenetic tree, we inferred that the Taiwan and Hainan B. longissima populations evolved as two invasion hubs. The Taiwan B. longissima population invaded Fujian, Guangdong, and Guangxi, while the Hainan B. longissima population invaded Yunnan and Sansha.
4,676
2020-04-01T00:00:00.000
[ "Biology" ]
Orientation Asymmetric Surface Model for Membranes: Finsler Geometry Modeling We study triangulated surface models with nontrivial surface metrics for membranes. The surface model is defined by a mapping ${\bf r}$ from a two dimensional parameter space $M$ to the three dimensional Euclidean space ${\bf R}^3$. The metric variable $g_{ab}$, which is always fixed to the Euclidean metric $\delta_{ab}$, can be extended to a more general non-Euclidean metric on $M$ in the continuous model. The problem we focus on in this paper is whether such an extension is well-defined or not in the discrete model. We find that a discrete surface model with a nontrivial metric becomes well-defined if it is treated in the context of Finsler geometry (FG) modeling, where the triangle edge length in $M$ depends on the direction. It is also shown that the discrete FG model is orientation asymmetric on invertible surfaces in general, and for this reason the FG model has a potential advantage for describing real physical membranes, which are expected to have some asymmetries under orientation-changing transformations. Introduction Biological membranes, including artificial ones such as giant vesicles, are simply understood as two-dimensional surfaces [1]. The well-known surface model for membranes is defined statistical-mechanically by using a mapping r from a two-dimensional parameter space M to R 3 [2]. This mapping r and the metric g ab (a, b = 1, 2), a set of functions on M, are the dynamical variables of the model. To discretize these dynamical variables, we use triangulated surfaces in both M and R 3 . On the discrete surfaces, the metric g ab is always fixed to the Euclidean metric δ ab [3][4][5], while the induced metric ∂ a r · ∂ b r is also used in theoretical studies on continuous surfaces [2]. These two-dimensional surface models are considered a natural extension of the one-dimensional polymer model [6], and many studies of membranes have been conducted [7][8][9][10][11]. Landau-Ginzburg theory for membranes has also been developed [12]. In Ref. [13], anisotropic morphologies of membranes are studied, and the notion of multi-component is found to be essential also for the metric function [14]. However, it is still unclear whether a non-Euclidean metric can be assumed or not for discrete models. In this paper, we study the metric g ab of Ref. [13] in more detail. We will show that models with the metric of Ref. [13], and their extension to a more general one, are ill-defined in the ordinary surface-modeling prescription; however, these ill-defined models turn out to be well-defined in the context of Finsler geometry (FG) modeling [15][16][17][18][19][20]. Moreover, it is also shown that the FG model becomes orientation asymmetric, where "orientation asymmetric" means that the Hamiltonian is not invariant under surface inversion [13]. In real physical membranes, orientation asymmetry is observed because of their bilayer structure [21]. Indeed, asymmetry such as the area difference between the outer and inner layers is expected to play an important role in the anisotropic shapes of membranes. Therefore, it is worthwhile to study the discrete surface model with non-trivial metric g ab more extensively. We should note that there are two types of discrete surface models; the first is the fixed connectivity (FC) model and the second is the dynamically triangulated (DT) surface model.
The FC surface model corresponds to polymerized membranes, while the DT surface model corresponds to fluid membranes such as bilayer vesicles. The polymerized and fluid membranes are characterized by nonzero and zero shear moduli, respectively. Numerically, the dynamical triangulation of the DT models is simulated by the bond-flip technique as one of the Monte Carlo processes on triangulated lattices [22][23][24], while the FC surface models are defined on triangulated lattices without the bond flips. According to this classification, the discrete models in this paper belong to the DT surface models and correspond to fluid membranes, because dynamical triangulation is assumed in the partition function, which will be defined in Section 3, just as in the model of [13]. In Section 2, a continuous surface model and its basic properties are reviewed, and a non-Euclidean metric, which we study in this paper, is introduced. In Section 3, we discuss why orientation asymmetry needs to be studied, and then we introduce a discrete model on a triangulated spherical lattice and show that this discrete model is ill-defined in the ordinary context of surface modeling. In Section 4, we show that this ill-defined model can be understood as a well-defined FG model in a framework which is slightly extended from the one in Ref. [15]. In Section 5, we summarize the results. Continuous surface model In this paper, we study a surface model which is an extension of the Helfrich and Polyakov (HP) model [25,26]. The HP model is physically defined by a Hamiltonian S which is a linear combination of the Gaussian bond potential S 1 and the bending energy S 2 , such that S = S 1 + κS 2 , where κ[k B T ] is the bending rigidity (k B and T are the Boltzmann constant and the temperature, respectively). The surface position is described by r(∈ R 3 ), and g ab is a Riemannian metric on the two-dimensional surface M, g ab = (g ab ) −1 is its inverse, and g = det g ab . Note that the surface position r is understood as a mapping r : M → R 3 , where the surface orientation is assumed to be preserved. The symbol n in S 2 denotes a unit normal vector of the image surface, where one of the two orientations is used to define n. It is well known that the Hamiltonian is invariant under (i) general coordinate transformations x → x ′ in M and (ii) conformal transformations of g ab such that g ab → g ′ ab = f (x)g ab with a positive function f on M [2]. The first property, under transformation (i), is called re-parametrization invariance and is expressed by S (r(x), g ab (x)) = S (r(x ′ ), g ab (x ′ )), where r(x ′ ) and g ab (x ′ ) are composite functions. The second property, under (ii), is expressed by S (r(x), g ab (x)) = S (r(x), g ′ ab (x)). The metrics g ab and g ′ ab are called conformally equivalent, written g ab ≃ g ′ ab , if there exists a positive function f such that g ′ ab = f g ab . Therefore, the second property, with respect to transformation (ii), implies that S depends only on conformally non-equivalent metrics. The metric g ab of the surface M is generally of the form g ab = diag(1/ρ(x), ρ(x)) with a positive function ρ on M [13]. This metric is in general not conformally equivalent to the Euclidean metric δ ab . We call a metric g ab trivial (non-trivial) if g ab is conformally equivalent (inequivalent) to δ ab , although surface models with g ab = δ ab and g ab = ∂ a r · ∂ b r are physically non-trivial [22][23][24][29][30][31][32][33][34]. First, we should comment on the surface orientation.
The unit normal vector n is directed from the inside to the outside of the material separated from the bulk by the membrane (see Fig. 1(a)). However, if the membrane self-intersects, then the direction of n changes from outside to inside (Fig. 1(b)). Otherwise (⇔ n is directed from inside to outside), n changes discontinuously at the intersection point. For this reason, we change the surface orientation by changing the local coordinate system from left-handed to right-handed while n remains unchanged (Fig. 1(b)). We should emphasize that our basic assumption is that the surface orientation is locally changeable. This means that the surface in R 3 is self-intersecting, or in other words the surface is not self-avoiding. Membrane orientation However, such an intersection process is not so easy to implement in numerical simulations (no numerical simulation is performed in this paper). Apart from this, it is unclear whether or not the implementation of such an intersection process is effective for simulating membrane inversion. Therefore, we assume that the surface is locally invertible without intersections; an inversion is expected to occur independently of whether the surface is self-intersecting or not. Indeed, real physical membranes are composed of lipid molecules, which have hydrophobic and hydrophilic parts. These lipids form a bilayer structure (Fig. 1(c)). In those real membranes, the bilayer structure is partly inverted, just as in Fig. 1(d), via the so-called flip-flop process. Such an inversion process without intersection is not always unphysical, because it can be seen in the process of pore formation. The pore-formation process is reversible and forms cup-like membranes, where the membranes are not always self-intersecting [27]. The cup-like membranes are stable [28] and are expected to play an important role as an intermediate configuration for cell inversion. It should be remarked that the surface orientation is also changeable in the processes of cell fission and fusion, where the surface self-intersects, in real physical membranes. To define a discrete model, we use a piecewise-linearly triangulated surface in R 3 [3][4][5]. In this paper, a spherical surface is assumed. Therefore, it is natural to assume that M is also triangulated and of sphere topology. Triangles in M can be smooth in general, and these smooth triangles are mapped to piecewise-linear triangles in R 3 by r (see Figs. 2(a) and 2(b)). We should note that a triangle ∆ in M has two different orientations. Let ∆ L,R denote the triangle that has the left-handed (right-handed) orientation, where L(R) corresponds to the left-handed (right-handed) local coordinate system. The symbol ∆ L is used for non-inverted parts of the surface, while ∆ R is used for inverted parts shown in Fig. 1(d). The direction of n is defined to be dependent on the orientation of ∆ L,R as mentioned in the previous subsection (see Fig. 2(c)). The surface inversion is given by Eq. (2), for example. The problem is whether the inverted surface is stable or not. As we will see below, the energy of the inverted surface is different from that of the original surface in a non-Euclidean metric model. This non-Euclidean metric model becomes well-defined if it is treated as an FG model. In the FG modeling (not in the standard HP modeling), we assume that the surface is locally invertible as in Fig. 1(d), which can be defined by the change of local coordinate orientation.
Thus, studies on the stability of inverted surfaces become feasible within the scope of FG modeling, although the transformation of the variables r i for this local inversion is not always given by Eq. (2); the vertex position remains unchanged under the change of triangle orientation. Discretization of the model In this subsection, the discretization of the Hamiltonian in Eq. (1) is performed on the triangles ∆ L and their image triangles r(∆ L ). The function ρ in g ab is defined on each triangle ∆ in M in the discrete model, and we denote the function ρ on ∆ by ρ ∆ . Thus, the discrete metric defined on triangle ∆ is given by g ab = diag(1/ρ ∆ , ρ ∆ ) (Eq. (3)). By replacing the integral in S 1 and S 2 with a sum over triangles ∆, and the partial derivatives with differences, we have the discrete expressions g 11 (r 2 −r 1 ) 2 +g 22 (r 3 −r 1 ) 2 and g 11 (n 0 −n 2 ) 2 +g 22 (n 0 −n 3 ) 2 corresponding to the discrete energies g i j (∂r/∂x i ) · (∂r/∂x j ) and g i j (∂n/∂x i ) · (∂n/∂x j ) of S 1 and S 2 on triangle ∆, where the local coordinate origin is assumed at vertex 1 (see Fig. 2(b)). The corresponding discrete expressions of S 1 and S 2 are given in Eq. (5). The index i of n i in this S 2 represents a triangle (see Fig. 2(b)). Since the coordinate origin can also be assumed at vertices 2 and 3 on triangle ∆, we have three possible discrete expressions, including those in Eq. (5), for g i j (∂r/∂x i ) · (∂r/∂x j ) and g i j (∂n/∂x i ) · (∂n/∂x j ). Summing these with a factor 1/3, we obtain Eq. (6). In these expressions, the suffix i of ρ i denotes the coordinate origin. The reason why the function ρ depends on the coordinate origin is that ρ is an element of the 2 × 2 matrix g ab , which depends on local coordinates in general. The expressions for S 1 and S 2 in Eqs. (5) and (6) correspond to those for ∆ L . In Eq. (6), the sum over triangles ∆ in S 1 and S 2 can be replaced by a sum over bonds i j. In this replacement, we should remind ourselves of the fact that the first terms of S 1 and S 2 in (6) are respectively replaced by the expressions in Eq. (7). In these expressions, ρ ± i denotes the function ρ on the triangles ∆ ± L , where the coordinate origin is at vertex i (see Fig. 2(d)), and n ± denote n for triangles ∆ ± L . The coefficient of ℓ 2 12 is different from that of (1−n + · n − ). Thus, we arrive at Eq. (8), where the factor 1/3 is replaced by 1/4 in the final expressions of S 1 and S 2 . The indices i j of γ i j and κ i j simply denote vertices i and j. We should note that γ i j = κ ji and γ i j ≠ κ i j in general in Eq. (8), as mentioned above. The partition function Z and Hamiltonian S of the model we start with in this paper are defined in Eq. (9), where the Ising-type energy S 0 with coefficient λ is included in S. This is a surface model for multi-component membranes [13]. The sum Σ ± in S 0 denotes the sum over all nearest neighbor triangles + and −, and σ ± denotes that σ is defined on the triangles ∆ ± L . The variable σ is an element of Z 2 = {1, −1}; however, S 0 (and σ) is not always limited to an Ising-type Hamiltonian. The variable σ ± is introduced to represent the components A and B, such as liquid-ordered and liquid-disordered phases [13].
The value of σ on each triangle ∆ remains unchanged; however, the energy S 0 does not remain constant, because the combination of nearest neighbor pairs of triangles ∆ ± changes due to triangle diffusion, which is actually expected on dynamically triangulated surfaces [13]. In the model of Ref. [13], the function ρ + i is independent of vertex i and depends only on triangle ∆ + , and therefore the value of ρ + is uniquely determined only by σ + if the dependence of ρ + on σ + is fixed. As a consequence, the metric g ab is determined by the internal variable σ. In the model of Eq. (9), the dependence of ρ + i on σ + is not explicitly specified, because this dependence of ρ on σ is in general independent of the well-definedness of discrete surface models with non-Euclidean metric, and this well-definedness is the main target in this paper. In Z, Σ σ and Σ T denote the sums over all possible configurations of σ and triangulations T , respectively. The sum over triangulations T can be simulated by the bond flips in MC simulations, and therefore the model is grouped into the fluid surface models, as mentioned in the Introduction. The symbol T in Σ T denotes the triangulation, which is assumed as one of the dynamical variables of the discrete fluid model. This means that a variable T corresponds to a triangulated lattice configuration. Therefore, the lattice configurations in the parameter space M are determined by T . On the other hand, a lattice configuration corresponding to a given T is originally considered as an ingredient of a set of local coordinate systems; two different T s correspond to two inequivalent coordinates which are not transformed to each other by any coordinate transformation. Recalling that the continuous Hamiltonian is invariant under general coordinate transformations, we can choose an arbitrary coordinate, such as an orthogonal coordinate, for each triangle of a given T . However, from Polyakov's string-theoretical point of view, the partition function is defined by the sum over all possible metrics Dg in addition to the sum over all possible mappings Dr. Since the metric g depends on coordinates, Dg is considered to correspond to the sum over local coordinates, which is simulated by Σ T in the discrete models. Therefore, from these intuitive discussions, the Euclidean metric, for example, is forbidden in a fluid model on triangulated lattices without DT; this Euclidean metric model without DT is simply an FC model for polymerized membranes, where the surface inversion is not expected. The symbol ∏ ′ N i=1 dr i denotes the 3(N − 1)-dimensional integration in R 3 under the condition that the center of mass of the surface is fixed to the origin of R 3 . The Hamiltonian S has the unit of energy [k B T ]. The coefficient κ[k B T ] of S 2 is the bending rigidity. Here, we comment on the property called scale invariance of the model [35]. This comes from the fact that the integration of r in Z is independent of the scale transformation r → αr for arbitrary positive α ∈ R. This property is expressed by Z({r}) = Z({αr}), and therefore, for the Hamiltonian S ′ = λS 0 +cS 1 +κS 2 , we have Eq. (10). In the second line of Eq. (10) we assume α = 1/ √ c, and then in the third line we have S ′ (αr) = λS 0 +S 1 +κS 2 , because S 0 and S 2 are scale independent and S 1 (αr) = α 2 S 1 (r). Thus, from the fact that the partition function is independent of a multiplicative constant, we find that the model with S ′ = λS 0 +cS 1 +κS 2 is equivalent to the model with S = λS 0 +S 1 +κS 2 .
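For completeness, the standard manipulation behind this scale-invariance argument (presumably the content of Eq. (11) quoted below; this is our reconstruction, not a formula copied from the paper) follows from D(αr) = α^{3(N−1)} Dr and S′(αr) = λS₀ + cα²S₁ + κS₂:

```latex
0=\left.\frac{\partial Z(\{\alpha \mathbf{r}\})}{\partial\alpha}\right|_{\alpha=1}
 =\Big[\,3(N-1)-2c\,\langle S_1\rangle\,\Big]\,Z
\quad\Longrightarrow\quad
\langle S_1\rangle=\frac{3(N-1)}{2c}.
```

Since S₁ is quadratic in the bond vectors, this fixes the mean squared bond length to scale as 1/c, consistent with the statement below that the surface size depends on c while the surface shape does not.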
"Equivalent" means that the shape of surface is independent of the value of c(> 0) although the surface size depends on c in general. The dependence of surface size on c is also understood from the scale invariant property of Z. Indeed, it follows from Z({r}) = Z({αr}) that ∂Z({αr})/∂α| α=1 = 0, and therefore we have [35] This final equation implies that the mean bond length squares ℓ 2 i j depends on c, because S 1 is given by For a specialized case that γ i j =constant, S 1 becomes proportional to ℓ 2 i j . On the other hand, the mean bond length squares in general represent the surface size for smooth surfaces, which are expected for sufficiently large κ. We should note that the model studied in Ref. [13] for a two-component membrane is obtained from the model of Eqs. (8) and (9) by the assumption that ρ ± i is independent of the local coordinate origin i and depends only on triangles ∆ ± . In this case, the model is orientation symmetric, and therefore the lower suffices L, R for the orientation of triangles ∆ L,R are not necessary. Then, we have γ i j = κ i j = (1/4) (ρ + +1/ρ + +ρ − +1/ρ − ), where + and − are the two neighboring triangles of bond i j which links vertices i and j. Thus, γ i j (and κ i j ) defined on bond i j depends only on ρ ± of the two neighboring triangles in the model of Ref. [13]. For this reason, the configuration (or distribution) of ρ on the surface remains unchanged if the triangulation is fixed. However, the model is defined on dynamically triangulated lattices, which allow not only vertices but also triangles to diffuse freely over the surface [22][23][24]. This free diffusion of triangles changes the distribution of ρ and hence γ i j and κ i j . Moreover, σ(∈ Z 2 ) is assigned on triangles (not on vertices) such that the value of ρ ± on each triangle is determined by σ ± (∈ Z 2 ). As a consequence, the corresponding energy S 0 = ± (1−σ + · σ − ) becomes dependent on the distribution of ρ, or in other words, the distribution of γ i j and κ i j is determined by the energy S 0 . This is an outline of the model in Ref. [13]. In this paper, ρ ± i depends on not only triangles ∆ ± but also the local coordinate origin i in contrast to that of the model in Ref. [13]. We should note that the relation between ρ ± and σ ± is not explicitly specified. Although the model is not determined without the explicit relation, the following discussions in this paper are independent of this relation. Well-defined model We start with the definition of trivial (non-trivial) model for a discrete surface model. Definition 1. Let us assume that Hamiltonian S of a discrete surface model is given by Eq. (8). Then, this discrete model is called trivial (non-trivial) if the following conditions are (not) satisfied: where the constants are independent of bond i j, and these constants are not necessarily be the same. We assume λ = 0 in S of Eq. (8) for simplicity. We should note that a model with S ′ = c 1 S 1 +κc 2 S 2 , for arbitrary coefficients c 1 and c 2 , is identical to the model defined by S = S 1 +κ ′ S 2 with κ ′ = κc 2 . Indeed, because of the scale invariance of Z discussed in the previous subsection using Eq. (10), the coefficient c 1 of S 1 in S ′ can be replaced by 1. Thus, we have S ′ = S 1 +κ ′ S 2 . If the metric is conformally equivalent to Euclidean metric, then the model is trivial. In this sense, this definition for trivial (non-trivial) model is an extension of the definition by the terminology conformally equivalent for g ab discussed in Section 3.1. 
However, there exists a metric that is conformally non-equivalent to δ ab while it makes the model trivial. An example of such a metric is g ab = diag(1/ρ, ρ) with constant ρ(≠ 1); more detailed information will be given below (in Remark 2). Next, we introduce the notion of the direction-dependent length L i j (and L ji ) of a bond i j, which is shared by two triangles, in the discrete model. Let ∆ ± L be the two nearest neighbor triangles of bond 12 in M (Fig. 3(a)). The length L 12 (∆ + L ) of bond 12 is defined by L 12 (∆ + L ) = ∫ dx 1 √(1/ρ + 1 ) = √(1/ρ + 1 ), where 1/ρ + 1 is the element g 11 of the metric g ab on ∆ + L when the local coordinate origin is at vertex 1; the symbol ∆ + L in L 12 (∆ + L ) denotes that L 12 is defined by g ab on triangle ∆ + L . It is also possible to define L 12 (∆ + L ) by L 12 (∆ + L ) = ∫ dx 2 √(ρ + 2 ) = √(ρ + 2 ), where ρ + 2 is the element g 22 on ∆ + L when the local coordinate origin is at vertex 2. Thus, L 12 (∆ + L ) is defined as the mean value of these two lengths, and the length L 21 (∆ − L ) of bond 12 is defined in exactly the same manner. Then, we have Eq. (13). These two lengths are different from each other in their expressions, and therefore it appears that the bond length depends on its direction. For the inverted surface (shown in Fig. 3(b)), we also have the two different lengths of Eq. (14). It is also possible to define the lengths of bond 12 as in Eq. (15), where L ′ 12 and L ′ 21 (L̄ ′ 12 and L̄ ′ 21 ) correspond to those in Eq. (13) (Eq. (14)). The following discussions remain unchanged if L ′ 12 , L ′ 21 and L̄ ′ 12 , L̄ ′ 21 are assumed as the definition of the bond lengths. For this reason, we use only the expressions in Eq. (13) and Eq. (14) for bond lengths in the discussions below. Now, let us introduce the notion of a well-defined model. Definition 2. A discrete surface model is called well-defined if the following conditions are satisfied: (A1) any bond length is independent of its direction; (A2) any bond length is independent of the surface orientation; (A3) any triangle area is independent of the surface orientation. We should note that these constraints (A1)-(A3) are not imposed on Finsler geometry models, which will be introduced in the following section. Using Eqs. (13) and (14), we rewrite the first and second conditions (A1) and (A2) as Eqs. (16) and (17). The condition (A3) is always satisfied because det g ab = 1 for the metric function in Eq. (3). Note that the constraint (A1) is imposed only on triangles ∆ L , and the equation corresponding to (A1) on triangles ∆ L is not independent of the three equations in Eqs. (16) and (17). If we instead demand bond-length consistency at every vertex, as in Eq. (18), then we have ρ + 1 = ρ − 1 = 1 (taking vertex 1 for simplicity). In this case, we have a trivial model because g ab = δ ab . The discrete expression of the induced metric g ab = ∂ a r · ∂ b r is given by the matrix with elements (r 2 −r 1 )·(r 2 −r 1 ), (r 2 −r 1 )·(r 3 −r 1 ), (r 3 −r 1 )·(r 2 −r 1 ), and (r 3 −r 1 )·(r 3 −r 1 ), which is defined on triangle 123 in R 3 with the local coordinate origin at r 1 (see Fig. 2(b)). This g ab is not of the form diag(1/ρ, ρ), and for this reason the induced-metric model is outside the scope of Definition 1. However, it is easy to see that the induced-metric model satisfies (A1)-(A3). Indeed, the bond length of this model is just the Euclidean length of bond 12 in R 3 . The other conditions are also easy to confirm. Orientation symmetric model The discrete model is defined by the Hamiltonian in Eq. (8), where g ab is a coordinate-dependent metric. Therefore, the Hamiltonian depends on the local coordinates on M, and it also depends on the orientation of M.
Orientation symmetric model The discrete model is defined by the Hamiltonian in Eq. (8), where $g_{ab}$ is a coordinate-dependent metric. Therefore, the Hamiltonian depends on the local coordinates on $M$, and it also depends on the orientation of $M$. For this reason, we define the notion of an orientation symmetric/asymmetric model on surfaces with $\Delta_L$. This simply means that the Hamiltonian of Eq. (8) can be used for a model in which the partition function allows the surface-inversion process. Indeed, a property of the model corresponding to symmetries of the Hamiltonian can in general be discussed without reference to the partition function. Thus, the Hamiltonian is called orientation symmetric if it is invariant under the surface inversion in Eq. (2), for example, for any configuration of $\mathbf r$, and we also have: Definition 3. A discrete surface model is called orientation symmetric if its Hamiltonian is orientation symmetric. In the Hamiltonian of Eq. (8), the quantities $\gamma_{ij}$ and $\kappa_{ij}$ in $S_1$ and $S_2$ depend on the surface orientation. Thus, the condition for the Hamiltonian to be orientation symmetric is
$$\frac{1}{\rho_1^-}+\rho_2^-+\frac{1}{\rho_2^+}+\rho_1^+=\frac{1}{\rho_2^-}+\rho_1^-+\frac{1}{\rho_1^+}+\rho_2^+\qquad(20)$$
for all bonds 12 and triangles $\Delta^\pm$. Indeed, the Gaussian bond potential $S_1(\ell_{12})$ of the bond 12 is given by $S_1(\ell_{12})=\frac14\big(1/\rho_1^-+\rho_2^-+1/\rho_2^++\rho_1^+\big)\ell_{12}^2$ (Fig. 4(a)), while on the inverted triangles the corresponding quantity $\bar S_1(\ell_{12})$ is given by $\bar S_1(\ell_{12})=\frac14\big(1/\rho_2^-+\rho_1^-+1/\rho_1^++\rho_2^+\big)\ell_{12}^2$. These $S_1(\ell_{12})$ and $\bar S_1(\ell_{12})$ are obtained by using the following expression for the inverse metric: $g^{ab}=\mathrm{diag}(\rho,1/\rho)$ (Eq. (21)). Thus, from the equation $S_1(\ell_{12})=\bar S_1(\ell_{12})$ for any bond 12, which is the condition for $S_1$ to be orientation symmetric, we obtain Eq. (20). We should note that the condition $S_2(\mathbf n^+\cdot\mathbf n^-)=\bar S_2(\mathbf n^+\cdot\mathbf n^-)$ for the bending energy $S_2$ leads to the same equation as Eq. (20). Remark 1. We have the following remarks: (a) all non-trivial models are orientation asymmetric; (b) all orientation asymmetric models are ill-defined. Proof of Remark 1. (a) The inverse metric $g^{ab}$ of a non-trivial model is given by Eq. (21), and therefore it is easy to see that there exists a bond 12 such that $S_1(\ell_{12})\neq\bar S_1(\ell_{12})$. Indeed, we can choose the $\rho$'s such that Eq. (20) is not satisfied. This inequality $S_1(\ell_{12})\neq\bar S_1(\ell_{12})$ implies that the condition in Eq. (20) is violated and that the model is orientation asymmetric. (b) This statement is equivalent to: all well-defined models are orientation symmetric, which can be proved as follows. If the model is well-defined, then Eqs. (16) and (17) are satisfied; it is then easy to see that Eq. (20) is satisfied, which implies that the model is orientation symmetric. From Remark 1 it is straightforward to prove the following theorem: Theorem 1. All non-trivial models are ill-defined. Here, we should clarify how well-defined models differ from the model with the Euclidean metric $\delta_{ab}$. This problem can be rephrased as: what type of $\rho$ is allowed for a well-defined model? The answer is as follows. Remark 2. We have the following remarks: (a) the function $\rho_i$ of any well-defined model satisfies
$$\rho_i^\pm+1/\rho_i^\pm=a,\qquad(22)$$
where the constant $a$ depends on neither the vertex $i$ nor the triangle $\Delta$. This follows from Eqs. (16) and (17): from the first equation in Eq. (17) we find $\rho_1^-=\rho_2^-\,(=\rho^-)$, and it is also easy to see that $\rho_1^+=\rho_2^+\,(=\rho^+)$ from the second equation in Eq. (17). Therefore, using these two equations and Eq. (16), we have $\rho_1^-+1/\rho_1^-=\rho_1^++1/\rho_1^+$. This implies that the combination $\rho+1/\rho$ is independent of the vertex and the triangle, and thus Eq. (22) is proved. (b) It is easy to see that $\rho^\pm=\big(a\pm\sqrt{a^2-4}\big)/2$, $(a\geq2)$, from Eq. (22). (c) Indeed, using Eq. (22), we have $\gamma_{ij}=\kappa_{ij}=\frac14(\rho^++1/\rho^++\rho^-+1/\rho^-)=a/2$, and therefore the model is trivial. It follows from Remark 2(a) that the model in Ref. [13] is ill-defined (in the context of the HP model). In fact, the metric function assumed in the model of Ref. [13] does not satisfy Eq. (22).
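Remark 2(b) is elementary to verify numerically. The following check is a sketch with an arbitrary $a\geq2$; it confirms that the two roots of $\rho+1/\rho=a$ are reciprocal and that every bond coefficient collapses to $a/2$, as in Remark 2(c).

```python
# Numeric check of Remark 2(b)/(c) (a sketch, not from the paper).
import math

a = 3.0                                    # any a >= 2
rp = (a + math.sqrt(a * a - 4)) / 2        # rho+
rm = (a - math.sqrt(a * a - 4)) / 2        # rho-

print(abs(rp + 1 / rp - a) < 1e-12)        # rho+ solves rho + 1/rho = a
print(abs(rp * rm - 1) < 1e-12)            # the two roots are reciprocal
print(abs(0.25 * (rp + 1 / rp + rm + 1 / rm) - a / 2) < 1e-12)  # gamma_ij = a/2
```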
The metrics corresponding to Remark 2(b) provide examples of metrics for the trivial model defined by Definition 1. More explicitly, the metrics $g_{ab}=\mathrm{diag}(1/\rho^\pm,\rho^\pm)$ with $\rho^\pm=(a\pm\sqrt{a^2-4})/2$ are conformally equivalent to $\delta_{ab}$, because $\rho^+\rho^-=1$, and therefore these also make the model trivial. We should remark that Remarks 2(a) and 2(c) also prove Theorem 1. Note also that if a model is well-defined and orientation symmetric in the sense of Definitions 2 and 3, then the inverted triangles $\Delta_R$ need not be included in the lattice configurations. However, from Theorem 1 the model introduced in Eq. (8) is orientation asymmetric, and this model turns out to be well-defined if it is treated as an FG model. Therefore, the inverted triangles $\Delta_R$ should be included as representative configurations of the model of Eq. (8) if it is to be understood as a well-defined model. For this reason, we have to extend the FG model introduced in Ref. [15] such that the Hamiltonian takes values on both $\Delta_L$ and $\Delta_R$. Finsler geometry model As we have demonstrated in the previous subsection, all non-trivial surface models ($\Leftrightarrow$ either $\gamma_{ij}$ or $\kappa_{ij}$ depends on $ij$) are ill-defined. The reason for this unsatisfactory result is that the bond length must not be direction dependent in any well-defined model (see Definition 2). To make these ill-defined models meaningful, we introduce the notion of Finsler geometry, in which the length unit is allowed to depend on the direction. In the context of Finsler geometry modeling, Theorem 1 does not hold. The problem is whether or not the above-mentioned ill-defined model (of Section 3) fits into Finsler geometry modeling. Let $\Delta_{L,R}$ be triangles in $M$, and let $x=(x^1,x^2)$ be a local coordinate on $\Delta_{L,R}$ with the coordinate origin at vertex 1. Let $y=(y^1,y^2)$ be defined by $y^i=dx^i/dt$, $(i=1,2)$, where $t$ is a parameter that increases toward the positive direction of the axes. It is also assumed that a positive parameter $v_{ij}$ is defined on the axis from vertex $i$ to vertex $j$, where $v_{ij}\neq v_{ji}$ in general. The Finsler functions on $\Delta_{L,R}$ are constructed from these $v_{ij}$ (Eq. (24)) and can also be written as bilinear forms (Eq. (25)). From these expressions, we obtain the metric functions $g_{ab,L}$ on $\Delta_L$ and $g_{ab,R}(x)$ on $\Delta_R$ (Eq. (26)). In general, $g_{ab}$ is a function of both $x$ and $y$; however, $g_{ab,LR}$ in Eq. (26) depends only on the local coordinate $x$ and is independent of $y$. Using the metric $g_{ab,LR}$ in Eq. (26) and summing over all possible coordinate origins on the triangle $\Delta_{L,R}$, just as in Eq. (6), we obtain the discrete Hamiltonian (see Fig. 2(b))
$$S_1=\sum_\Delta\big(\gamma_{12}\ell_{12}^2+\gamma_{23}\ell_{23}^2+\gamma_{31}\ell_{31}^2\big),$$
and the corresponding expression for $S_2$. The sum over the triangles $\Delta$ in $S_1$ and $S_2$ can also be expressed as a sum over bonds with a numerical factor 1/4. Thus, we have Eq. (28), where $\gamma_{12}^\pm$ and $\kappa_{12}^\pm$ are concrete examples of $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ for the bond 12 (see Fig. 5(c)). The symbol $\pm$ denotes that $\gamma_{ij}$ and $\kappa_{ij}$ are defined on the triangles $\Delta_{L,R}^\pm$ which share the bond $ij$ (Figs. 6(a),(b)). If the coefficients $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ are defined by quantities that live on the vertices $i$ and $j$ or on the bond $ij$, just like those in Eq. (28), then these coefficients are independent of the orientation of the triangles. Therefore, we have $\gamma_{ij,L}^\pm=\gamma_{ij,R}^\pm=\gamma_{ij}^\pm$ and $\kappa_{ij,L}^\pm=\kappa_{ij,R}^\pm=\kappa_{ij}^\pm$, and hence the model is orientation symmetric. On the contrary, if $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ depend on $\Delta_{L,R}$, then the model is orientation asymmetric.
Such an orientation asymmetric FG model will be studied in the following subsection. Finally in this subsection, we emphasize the difference between the models defined by Eqs. (28) and (8). In fact, the expressions of $S_1$ and $S_2$ in Eq. (28) differ from those in Eq. (8). This difference comes from the fact that $S_1$ and $S_2$ in Eq. (8) are simply obtained by discretizing an ordinary HP surface model with a non-Euclidean metric. More explicitly, we have the following facts: (i) not only $\Delta_L^\pm$ but also $\Delta_R^\pm$ is assumed in defining $S_1$ and $S_2$ in Eq. (28), while only $\Delta_L^\pm$ is assumed in defining those in Eq. (8); (ii) a Finsler function is assumed in defining $S_1$ and $S_2$ in Eq. (28), while it is not assumed in defining those in Eq. (8). Therefore, mainly because of the latter fact (ii), it is still unclear whether the model defined by Eq. (8) can be identified with the FG model of Eq. (28). Orientation asymmetric Finsler geometry model As we have discussed in the previous subsection, the FG model in Ref. [15] is extended such that inverted triangles are included in the lattices. The triangulated lattices are composed of both $\Delta_L$ and $\Delta_R$, where $\Delta_R$ corresponds to an inverted part of the surface (Fig. 1(d)). On these triangles $\Delta_L$ and $\Delta_R$, the coefficients $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ of $S_1$ and $S_2$ are defined. Therefore, orientation asymmetric states are in general allowed in the configurations of the FG model. In this subsection, we show that the ill-defined model constructed in the previous section by Eq. (8) turns out to be a well-defined model in the context of FG modeling. By comparing $g_{ab}$ in Eq. (26) with $g_{ab}$ in Eq. (3), we obtain the correspondence between the parameters $v_{12},v_{13},\dots,v_{42}$ and the functions $\rho_1^\pm,\rho_2^\pm,\rho_3^\pm$ on $\Delta_L^\pm$ (Eq. (29); see Figs. 4(a), 5(c) and 6(a)). The symbol $\rho_i^+$ is a function on the triangle $\Delta_L^+$ for the metric in Eq. (3) when the local coordinate origin is at vertex $i$ $(=1,2,3)$. We also have a contribution from $\Delta_R^\pm$ (Eq. (30)). By inserting these expressions into $\gamma_{ij}$ and $\kappa_{ij}$ in Eq. (28) ($v_{41}^{-2}$ and $v_{42}^{-2}$ in Eqs. (29) and (30) are not included in the list below), we obtain Eq. (31). The expressions of $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ on $\Delta_R^\pm$ are obtained by replacing $\rho$ with $1/\rho$ in the expressions in Eq. (31). We find from Eq. (31) that the coefficients $\gamma_{ij}^\pm$ and $\kappa_{ij}^\pm$ can also be written more simply by using the suffixes $ij$, as presented below. To incorporate the two types of triangles $\Delta_{L,R}$ into the lattice configurations, which are dynamically updated in the partition function, we need a new variable corresponding to these $\Delta_{L,R}$. Thus, we introduce a new dynamical variable $\chi$, which is defined on the triangles $\Delta$ and takes values in $Z_2$, just like $\sigma$ in Eq. (9), to represent the surface orientation:
$$\chi(\Delta)=\begin{cases}+1 & (\Delta=\Delta_L)\\ -1 & (\Delta=\Delta_R)\end{cases}\qquad(32)$$
If $\chi_i\,(=\chi(\Delta_i))=-1$ is satisfied for all triangles $\Delta_i$, then the surface is understood to be completely inverted. In contrast, mixed states, in which the value of $\chi_i$ is not uniform, are understood as a partly inverted membrane (see Fig. 1(d)). This implies that actual intersections like the one in Fig. 1(b) are not necessarily implemented in the model. If such intersections had to be taken into consideration in the numerical simulations, the simulations would be very time consuming, because every vertex-move step would have to be checked to monitor how the lattice intersects. Beyond the simulation cost, as mentioned in the previous section, real physical membranes are expected to undergo inversion by pore formation without self-intersection. By this new variable $\chi_i$ in Eq. (32), the FG model introduced in Ref. [15] is extended such that inverted surface states are included in the surface configurations. Indeed, for any given configuration, its inversion by Eq. (2) is included among the configurations, because the inverted configuration is obtained by the transformation $\chi_i\to-\chi_i$ for all $i$ together with a suitable translation and deformation of $\mathbf r$. The variable $\chi$ takes values in $Z_2$ just like $\sigma$ in the energy $S_0$ of Eq. (9); however, the role of $\chi$ is different from that of $\sigma$. The variable $\sigma$ serves to define the functions $\rho_i$ of the metric $g_{ab}$. In the context of the modeling in this paper, $\rho$ is determined independently of the surface orientation $\chi$. As mentioned at the end of Section 4, $S_0$ is not included in the Hamiltonian introduced below, although the role of $S_0$ is completely different from that of $S_3$. Including the partition function, we finally have
$$Z=\sum_\chi\int\prod_i d\mathbf r_i\,\exp\left[-S(\mathbf r,\chi)\right],\qquad S=S_1+\kappa S_2+\zeta S_3,\qquad(33)$$
where an Ising-model Hamiltonian $S_3$ is assumed for the variable $\chi$ with the coefficient $\zeta$. The value of $\chi^\pm\,(\in\{1,-1\})$ corresponds to $\Delta_{L,R}^\pm$ as in Eq. (32). For sufficiently large $\zeta$, one of the lowest-energy states of $S_3$ is realized, because both $S_1$ and $S_2$ are asymmetric under the surface inversion even though $S_3$ is symmetric. Thus, we have proved that the model introduced in Eq. (8) can be identified with the FG model defined by Eq. (28), in which the Finsler functions in Eq. (24) are assumed. We should note that an Ising-model Hamiltonian is not always necessary for $S_3$. Note also that this FG model in Eq. (33) has no constraint of well-definedness in the sense of Definition 2. In this sense, this model is well-defined even though the bond length in $M$ is direction dependent. Moreover, since the surface configurations include inverted triangles, this model is orientation asymmetric by Remark 1(a). Thus, we have Theorem 2. All non-trivial models, such as the one defined by Eq. (8) or Eq. (33), are orientation asymmetric and well-defined in the context of Finsler geometry modeling.
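As a concrete illustration of how the new variable $\chi$ in Eq. (33) can be simulated, the sketch below runs a Metropolis update of $\chi$ alone, with $S_3$ taken as the Ising-like energy $\sum(1-\chi_i\chi_j)$ over neighboring triangle pairs. This explicit form of $S_3$, the toy lattice, and the parameter values are assumptions for illustration (the text notes that an Ising Hamiltonian is not the only choice); $S_1$, $S_2$ and the vertex moves are omitted.

```python
# A minimal Metropolis sketch (assumptions, not the authors' code) for the
# orientation variable chi: chi lives on triangles and takes values in {+1, -1}.
import math
import random

def s3(chi, neighbors):
    """S3 = sum over adjacent triangle pairs (i, j) of (1 - chi_i * chi_j)."""
    return sum(1 - chi[i] * chi[j] for i, j in neighbors)

def metropolis_chi(chi, neighbors, zeta, steps=10_000, rng=random.Random(1)):
    """Metropolis updates of chi only (the vertex positions r are held fixed)."""
    e = s3(chi, neighbors)
    for _ in range(steps):
        t = rng.randrange(len(chi))
        chi[t] *= -1                      # trial flip: Delta_L <-> Delta_R
        e_new = s3(chi, neighbors)
        if e_new <= e or rng.random() < math.exp(-zeta * (e_new - e)):
            e = e_new                     # accept the flip
        else:
            chi[t] *= -1                  # reject: undo the flip
    return chi

# Four triangles on a ring; for large zeta the configuration relaxes toward one
# of the lowest-energy (uniform) states of S3.
print(metropolis_chi([1, -1, 1, -1], [(0, 1), (1, 2), (2, 3), (3, 0)], zeta=5.0))
```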
Summary In this paper, we confine ourselves to discrete surface models of Helfrich and Polyakov with a metric of the type $g_{ab}=\mathrm{diag}(1/\rho,\rho)$. The discrete model is defined on dynamically triangulated surfaces in $\mathbf R^3$, and therefore the model is aimed at describing properties of fluid membranes such as lipid bilayers. The result in this paper indicates that surface models with this type of non-Euclidean metric are well-defined in the context of Finsler geometry (FG) modeling, and moreover that the models are in general orientation asymmetric. Indeed, in the FG scheme for discrete surface models, the length of a bond of the triangles in the parameter space $M$ can be direction dependent, and no constraint is imposed on the bond lengths of inverted surfaces in FG modeling. This allows us to introduce a new dynamical variable corresponding to the triangle orientation, so as to incorporate the surface-inversion process into the model. Thus, the Hamiltonian of the models with non-trivial $g_{ab}$ takes values on locally inverted surfaces, and for this reason the Hamiltonian becomes dependent on the surface orientation.
This property is expected to be useful for studying real physical membranes, which undergo surface inversion. FG modeling for membranes and the corresponding numerical studies should be pursued more extensively. Acknowledgments: The authors acknowledge S. Bannai and M. Imada for comments and discussions. This work is supported in part by JSPS KAKENHI Grant Numbers 26390138 and 17K05149. Author Contributions: E.P. performed the calculations, and H.K. wrote the paper. Conflicts of Interest: The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Abbreviations The following abbreviations are used in this manuscript: HP, Helfrich and Polyakov; FG, Finsler geometry; FC, fixed connectivity; DT, dynamically triangulated.
The systematics of neutron reaction cross sections The parameterized theoretical formulae of excitation functions for the (n, 2n) and (n, γ) reactions have been established, and those for the (n, tot), (n, non), (n, 3n), (n, p), (n, d), (n, t), (n, ³He) and (n, α) reactions have been recommended. Based on these formulae, the SEF code has been developed for systematics calculations of these reactions. The SEF code can quickly provide systematics results for the corresponding reactions of any nucleus of interest within the applicable range. At the same time, a graphical comparison of the calculated results with experimental and evaluated data can be produced. Introduction The cross sections of neutron-induced reactions are important for nuclear science and technology. The evaluation of nuclear reaction cross sections is based on experimental measurement, theoretical calculation and systematics. All over the world, experimental measurement has never been abandoned, and the progress is significant. However, the available measured data are scarce and scattered for some nuclei, reactions or energy regions. Based on nuclear models, some codes [1, 2] have been developed and used to calculate the cross sections. However, there are obvious discrepancies in the unmeasured energy regions. Generally, systematics is a convenient and reliable way to predict neutron-induced reaction cross sections, compared to model calculations, when experimental data are scarce. Formulae Based on the constant-temperature evaporation model and the exciton model, taking the competition of other reactions and the contribution of pre-equilibrium emission into account, and under some assumptions and approximations, the parameterized theoretical formulae of the excitation functions for the (n, 2n) and (n, γ) reactions have been established. Only the most sensitive parameters are included in the formulae. To obtain the parameters, the available experimental data for these reactions were analyzed and fitted by means of the nonlinear least-squares method. The fitted results agree fairly well with the measured data in the covered energy and nuclide regions. On the basis of the parameters of every nucleus, the correlations between the parameters and some quantities of the target nucleus can be expressed as simple functions. Using the regional parameters, more accurate systematics predictions for unmeasured nuclei or energy ranges can be provided. The parameterized theoretical formulae for the (n, tot), (n, non), (n, 3n), (n, p), (n, d), (n, t), (n, ³He) and (n, α) reactions have been recommended. In the formulae below, A is the mass number of the target nucleus and E_n is the incident neutron energy. Neutron total cross sections For the (n, tot) reaction cross sections, the systematics of R. W. Bauer et al. [3,4] and S. M. Grimes et al. [5,6] are used. Under the assumptions of the Ramsauer model, the neutron total cross section can be expressed as
$$\sigma_{tot}=2\pi(R+\bar\lambda)^2\,(1-\alpha\cos\beta),$$
where the unit of the cross section is b, $R$ is the radius of the nucleus, $\bar\lambda$ is the reduced wavelength of the neutron, $\alpha$ is a parameter with magnitude between 0 and 1, and $\beta$ denotes the phase difference between the wave that passes through the nucleus and the wave that goes around it. The parameters a, b, c and k listed in Table 1 are taken from Ref. [6]; the applicable mass range is A ≥ 7 and the applicable energy range is from 6 to 60 MeV.
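The Ramsauer form above is simple enough to evaluate directly. In the sketch below, the radius parameterization r0·A^(1/3) and all numerical values (r0, alpha, beta) are illustrative assumptions only; the actual parameterizations of alpha and beta via a, b, c and k are those of Ref. [6].

```python
# A hedged sketch of the nuclear Ramsauer form quoted above:
# sigma_tot = 2*pi*(R + lambda_bar)^2 * (1 - alpha*cos(beta)).
import math

HBARC = 197.327          # MeV*fm
M_N = 939.565            # neutron mass, MeV

def sigma_tot_ramsauer(A, E_n, alpha, beta, r0=1.35):
    """Total cross section in barns for mass number A, neutron energy E_n (MeV)."""
    R = r0 * A ** (1 / 3)                     # nuclear radius, fm (assumed form)
    p = math.sqrt(2 * M_N * E_n)              # non-relativistic momentum, MeV
    lam_bar = HBARC / p                       # reduced wavelength, fm
    sigma_fm2 = 2 * math.pi * (R + lam_bar) ** 2 * (1 - alpha * math.cos(beta))
    return sigma_fm2 / 100.0                  # 1 b = 100 fm^2

print(f"{sigma_tot_ramsauer(A=56, E_n=14.0, alpha=0.2, beta=1.0):.2f} b")
```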
These parameters were obtained by fitting total neutron cross sections with the nuclear Ramsauer model for mass numbers A > 40 and neutron energies between 6 and 60 MeV, and are extended to nuclei of mass A < 40 [6]. Non-elastic cross sections For the (n, non) reaction cross sections, the systematics of A. Chatterjee et al. [7] for the neutron energy range from 1 to 50 MeV are used. In these formulae, the unit of the cross section is mb and the unit of the incident neutron energy is MeV. (n,2n) and (n,3n) reaction cross sections For the (n, 2n) reaction cross sections, based on the constant-temperature evaporation model and taking the competition of the (n, 3n) reaction and the contribution of pre-equilibrium emission into account, the systematics formulae of the (n, 2n) excitation function [8] have been established from the threshold energy up to 30 MeV in the mass region 45 ≤ A ≤ 210. In Eq. (4), the unit of the cross section is b, σ_ne is the non-elastic cross section from the empirical formula [9], and σ_n,M is the neutron emission cross section of the compound nucleus,
$$\sigma_{n,M}=\sigma_{n,n'}+\sigma_{n,2n}+\sigma_{n,3n}+\cdots\qquad(5)$$
δ denotes the contribution of pre-equilibrium emission; the subscript eq denotes the equilibrium cross section, which can be calculated by the evaporation model, and the subscript pre denotes the pre-equilibrium cross section, which can be calculated by the exciton model. The expressions for the (n, 3n) reaction cross sections are similar to Eq. (4). In the expressions for the (n, 2n) and (n, 3n) reaction cross sections there are two adjustable parameters, the nuclear temperature T and the ratio σ_n,M/σ_ne. Neutron capture cross sections The (n, γ) reaction cross section [10,11] can be written as
$$\sigma_{n,\gamma}=\sigma_{n,\gamma}(s)+\sigma_{n,\gamma}(d),$$
where σ_nγ(s) is the contribution of the statistical process and σ_nγ(d) is the contribution of the interaction between the direct and semidirect processes. Based on the evaporation model and under some assumptions and approximations, σ_nγ(s) can be expressed as Eq. (7), where the unit of the cross section is mb, the unit of the incident neutron energy is keV, and b_L can be expressed through V_L, the penetration factor of the L-th partial wave (Eq. (8)). There are two adjustable parameters, α and β. If we only consider pre-equilibrium emission in the first step of the equilibration process, which is characterized by the exciton number n = 3, σ_nγ(d) can be expressed as Eq. (9), where the unit of the incident neutron energy is MeV, and E_R and Γ_R are the giant dipole resonance parameters. There is only one adjustable parameter, C_γ. In Eqs. (7) and (9), S_n is the neutron separation energy of the compound system. The parameter α can be determined from the systematics of Ref. [12] around 25 keV. The fit to Eq. (9) has been carried out with the collected (n, γ) reaction cross sections for about forty nuclei, and the systematic feature of the parameter C_γ has been obtained (Eq. (10)). The fits to Eqs. (7) and (8) have been carried out with the collected (n, γ) reaction cross sections. For odd-A nuclei, the systematic feature of the parameter β has been obtained from about forty nuclei (Eq. (11)); for even-even nuclei, it has been obtained from about fifty nuclei (Eq. (12)). (n,charged particle) reaction cross sections For the (n, x) [x = p, α, d, t, ³He] reaction cross sections, the systematics of Zhao Zhixiang et al. [13,14] are used. The equilibrium cross sections are calculated by the evaporation model and the pre-equilibrium cross sections by the exciton model.
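For the (n, 2n) systematics of Eq. (4) above, the constant-temperature evaporation part reduces to a closed-form skeleton. The sketch below shows only that Pearlstein-like skeleton with the two adjustable quantities named in the text (the nuclear temperature T and the ratio σ_n,M/σ_ne), with illustrative numbers and without the pre-equilibrium correction δ; the complete formula is in Ref. [8].

```python
# A hedged sketch of the constant-temperature evaporation shape behind Eq. (4).
import math

def sigma_n2n(E_n, E_thr, T, sigma_ne, ratio_nM_ne):
    """(n,2n) excitation-function skeleton; same units as sigma_ne."""
    if E_n <= E_thr:
        return 0.0
    eps = E_n - E_thr                        # energy above the (n,2n) threshold
    emission = 1.0 - (1.0 + eps / T) * math.exp(-eps / T)
    return sigma_ne * ratio_nM_ne * emission

for e in (10, 12, 14, 16, 18):               # MeV, illustrative values only
    print(e, round(sigma_n2n(e, E_thr=9.0, T=1.1, sigma_ne=1.8, ratio_nM_ne=0.9), 3))
```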
Pre-equilibrium emission in the first step of the equilibration process, characterized by the exciton number n = 3, is considered. For (n, x) [x = p, α], the cross section can be written as Eq. (13), and for (n, x) [x = d, t, ³He] as Eq. (14). In Eqs. (13) and (14) there are two adjustable parameters, E_c^x and C_x. E_c^x represents the generalized height of the Coulomb barrier, and C_x is a constant proportional to the maximum of the cross section. For the (n, p) and (n, α) reactions, the parameters apply in the mass region 23 ≤ A ≤ 197. For the (n, d), (n, t) and (n, ³He) reactions, E_c^d and E_c^t are replaced by E_c^p, and E_c^³He is replaced by E_c^α. SEF code Based on the established and recommended systematics formulae and the regional parameters, the SEF code was developed to calculate the cross sections of neutron-induced reactions. Figure 1 shows the flow chart of the SEF code: A is the mass number and Z is the proton number of the target nuclide, P represents the reaction channel, E_n is the incident neutron energy, and CS is the value of the cross section. Figures 2 and 3 show the results of the SEF calculations for ²⁰⁸Pb(n, 2n) and ²⁰⁸Pb(n, γ). Results and discussion The systematics results for the corresponding reactions of any nucleus of interest can be provided by the SEF code within the limited mass-number and neutron-energy ranges. In Figures 4 to 10, the experimental data taken from EXFOR [15], the evaluated values and the systematics results for the (n, tot), (n, non), (n, 2n), (n, 3n), (n, γ), (n, p), (n, α), (n, d), (n, t) and (n, ³He) reaction cross sections are compared. The results indicate that the predicted cross sections are consistent with the measured and evaluated data within the errors for the (n, tot), (n, non), (n, 2n), (n, 3n), (n, γ), (n, p) and (n, α) reactions. For the (n, tot), (n, non), (n, 2n) and (n, 3n) reactions, the energy range is extended to more than 20 MeV. Hence more accurate systematics predictions for unmeasured nuclei or neutron energy ranges can be provided. The agreement between the predicted curves and the experimental data is only fair for the (n, d), (n, t) and (n, ³He) reactions, so further research on the systematics of these reactions is needed.
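The flow of the SEF code described above (inputs A, Z, reaction channel P and energy E_n; output the cross section CS) can be pictured as a simple channel registry. The sketch below is an assumed illustration of that flow, not the actual SEF implementation, and it registers only the (n, 2n) skeleton from the previous example with placeholder numbers.

```python
# A sketch of SEF-style dispatch: (A, Z, P, E_n) -> CS.
import math
from typing import Callable, Dict

def sigma_n2n_skeleton(A: int, E_n: float) -> float:
    """Placeholder (n,2n) channel: constant-temperature shape, illustrative numbers."""
    E_thr, T, sigma_ne, ratio = 9.0, 1.1, 1.8, 0.9
    if E_n <= E_thr:
        return 0.0
    eps = E_n - E_thr
    return sigma_ne * ratio * (1.0 - (1.0 + eps / T) * math.exp(-eps / T))

CHANNELS: Dict[str, Callable[[int, float], float]] = {
    "(n,2n)": sigma_n2n_skeleton,
    # "(n,tot)", "(n,non)", "(n,g)", ... would be registered analogously.
}

def sef(A: int, Z: int, P: str, E_n: float) -> float:
    """Dispatch (A, Z, P, E_n) to the systematics formula for channel P."""
    if P not in CHANNELS:
        raise ValueError(f"channel {P!r} not implemented in this sketch")
    return CHANNELS[P](A, E_n)

print(round(sef(208, 82, "(n,2n)", 14.0), 3))  # 208Pb(n,2n) at 14 MeV, placeholder numbers
```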
DeepRePath: Identifying the Prognostic Features of Early-Stage Lung Adenocarcinoma Using Multi-Scale Pathology Images and Deep Convolutional Neural Networks Simple Summary Pathology images are vital for understanding solid cancers. In this study, we created DeepRePath using multi-scale pathology images with two-channel deep learning to predict the prognosis of patients with early-stage lung adenocarcinoma (LUAD). DeepRePath demonstrated that it could predict the recurrence of early-stage LUAD with average area under the curve scores of 0.77 and 0.76 in cohort I and cohort II (external validation set), respectively. Pathological features found to be associated with a high probability of recurrence included tumor necrosis, discohesive tumor cells, and atypical nuclei. In conclusion, DeepRePath can improve the treatment modality for patients with early-stage LUAD through recurrence prediction. Abstract The prognosis of patients with lung adenocarcinoma (LUAD), especially early-stage LUAD, is dependent on clinicopathological features. However, its predictive utility is limited. In this study, we developed and trained a DeepRePath model based on a deep convolutional neural network (CNN) using multi-scale pathology images to predict the prognosis of patients with early-stage LUAD. DeepRePath was pre-trained with 1067 hematoxylin and eosin-stained whole-slide images of LUAD from the Cancer Genome Atlas. DeepRePath was further trained and validated using two separate CNNs and multi-scale pathology images of 393 resected lung cancer specimens from patients with stage I and II LUAD. Of the 393 patients, 95 patients developed recurrence after surgical resection. The DeepRePath model showed average area under the curve (AUC) scores of 0.77 and 0.76 in cohort I and cohort II (external validation set), respectively. Owing to low performance, DeepRePath cannot be used as an automated tool in a clinical setting. When gradient-weighted class activation mapping was used, DeepRePath indicated the association between atypical nuclei, discohesive tumor cells, and tumor necrosis in pathology images showing recurrence. Despite the limitations associated with a relatively small number of patients, the DeepRePath model based on CNNs with transfer learning could predict recurrence after the curative resection of early-stage LUAD using multi-scale pathology images. Introduction Among all cancer types, lung cancer is the leading cause of cancer-related deaths, accounting for 25% of cancer-related deaths [1]. Non-small cell lung cancer (NSCLC) is the most common type of lung cancer, and lung adenocarcinoma (LUAD) accounts for more than 50% of all cases of NSCLC. Recently, the clinical outcomes of LUAD patients have been greatly improved with the development of effective treatment approaches, including surgical or radiation techniques, and the introduction of targeted therapies and immunotherapies tailored to the molecular or immunological characteristics of primary tumors [2]. However, the survival rate for curatively resected LUAD remains low, ranging from 58% to 73% in stage I, 36% to 46% in stage II, and only 19% to 24% in stage IIIA [3]. To achieve better clinical outcomes, adjuvant chemotherapy is required for resected NSCLC; however, questions remain as to which patients benefit from adjuvant chemotherapy. Therefore, the accurate and timely identification of patients with a high risk of recurrence may provide opportunities to optimize clinical interventions for patients with early-stage LUAD. 
As with other cancers, the diagnosis of LUAD is completely dependent on histopathological findings. In particular, the histopathological features of LUAD provide crucial information related to prognosis as well as diagnosis [4,5]. The pathologic subtypes of LUAD (lepidic, acinar, papillary, solid, and micropapillary) can contribute to different recurrence and survival rates [6-10]. New histomorphological features of LUAD have been identified with increasing frequency, including tumor budding, lymphovascular invasion, and tumor spread through air spaces [11-14]. Although microscopic morphology has predictive and prognostic value, the interpretation of pathology images is time-consuming and error-prone for pathologists and subject to interobserver and intraobserver variability [15,16]. Recently, deep learning using histopathology images has emerged as a new tool to aid pathologists in various tasks in clinical settings. In comparison with human pathologists, artificial intelligence using deep learning and pathology images has advantages, such as improved reproducibility and consistency in recognizing diagnostic clues and pathological patterns, as well as continuous estimation of the immunolabeling index and cell count [17]. Notably, one of the benefits of using artificial intelligence in pathology is the identification of novel, that is, subvisual features that could be helpful for supporting decisions in patient management [17]. Several related studies on predicting the prognosis of patients with NSCLC have been published [18-27]. Previous studies used deep learning to evaluate prognosis-associated histopathological variables, such as tumor-infiltrating lymphocytes, necrosis, tumor stroma, and nuclear segmentation [19,20]. In these studies, deep learning was usually only used as part of the process of evaluating histopathological variables. Features such as tumor regions or nuclei were detected using deep learning, and dozens to hundreds of morphological features were extracted from the information obtained using deep learning to evaluate histopathological variables using several geometrical methods. The prognosis of patients with NSCLC was predicted with some degree of success on the basis of these morphological features, using existing machine learning methods such as support vector machines (SVM) and Cox proportional hazards (CoxPH) models [19,20]. However, the process of extracting a large number of morphological features from images and selecting the important ones is complicated and cumbersome. Therefore, in this study, we predicted recurrence directly from the histopathology images of LUAD, using only deep learning without extracting predefined morphological features (DeepRePath). We aimed to demonstrate that DeepRePath could predict the prognosis of patients with early-stage LUAD, which may facilitate treatment decision-making. Study Population and Baseline Characteristics The clinical and pathological data of NSCLC patients who had undergone curative resection between 2009 and 2017 at five St. Mary's hospitals affiliated with the Catholic University of Korea in Seoul, Incheon, Uijeongbu, Bucheon, and Yeouido were reviewed. The inclusion criteria were as follows: (i) pathologically confirmed stage I-II LUAD; (ii) availability of a pathology report; (iii) no preoperative radiation or chemotherapy; and (iv) at least 3 years of follow-up.
The clinicopathological characteristics of cohort I and cohort II (external validation set) are summarized in Supplementary Table S1. A total of 1104 patients underwent lung cancer surgery between 2009 and 2017, and of these, 393 patients who met the inclusion criteria were selected from the six hospitals. Of these patients, 302 were included in the training, validation, and test sets (cohort I), and the remaining 91 were included in the external validation set (cohort II) (Supplementary Figure S1 and Table S2). The median age of the patients was 64 years (range = 25-86 years), 50.1% of the patients were men, and 79.9% of the patients were classified as stage I. The baseline characteristics were not significantly different between cohort I and cohort II, except for sex and lymphovascular invasion. The median follow-up periods of cohort I and cohort II were 59.9 (range = 6.7-99.2) and 60.4 (range = 12.2-108.6) months, respectively, and 72 (23.8%) and 22 (24.2%) patients, respectively, experienced recurrence within 3 years after curative resection. This study was approved by the institutional review board of the Catholic Medical Center (no. UC17SESI0073) and was performed in accordance with the guidelines for human research. The requirement for written informed consent was waived by the institutional review board (Catholic Medical Center) because of the retrospective nature of this study. Data Preparation A total of 3923 hematoxylin and eosin (HE)-stained slides were collected from 393 patients. Three board-certified pathologists reviewed all the slides and selected one representative slide for each case. Representative slides at 40×, 100×, 200× and 400× magnification were captured by three pathologists (S.A.H., K.Y., and T.-J.K.). In total, 5 pathological images (40×, 2 images; 100×, 1; 200×, 1; 400×, 1 image) were obtained from each case using an Olympus DP74 (Olympus, Tokyo, Japan). Tumors with adjacent non-neoplastic tissue were captured in the 40× images. In the 100× tumor images, the pathologists focused on and captured the prevalent and aggressive architecture of the tumors. Tumor cells with aggressive cytological features, including nuclear hyperchromasia, pleomorphism, membrane irregularity, and a high nuclear-cytoplasmic ratio, were captured at 200× and 400× magnification. All images were re-examined and confirmed by two pathologists (S.A.H. and K.Y.) (Supplementary Figure S2). The 100× and 400× tumor images were finally selected as the input image data because DeepRePath showed the best performance (highest area under the curve (AUC) value) with the combination of 100× and 400× tumor images. A total of 302 patients were included in cohort I (training and validation sets). The patients in cohort I were randomized to maintain the ratio of the training (74%), validation (8%), and test (18%) sets. However, the training classes were unbalanced: LUAD recurred within 3 years in only 24% of patients in cohort I. To resolve this class imbalance problem, we used image data augmentation techniques, such as vertical flip, horizontal flip, and standardization, and we oversampled recurrence cases so that they made up 50% of the training set. The training and validation sets were used for model learning and optimal model selection, and the test set was used to evaluate the performance of the model. In addition, 91 cases (182 images with 5450 patches) were used in cohort II (external validation set). Data in cohort I were obtained from three St. Mary's hospitals affiliated with the Catholic University of Korea in Seoul, Incheon, and Uijeongbu; data in cohort II (external validation set) were obtained from two St. Mary's hospitals affiliated with the Catholic University of Korea in Bucheon and Yeouido.
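A minimal sketch of the augmentation and oversampling steps described in this section (vertical/horizontal flips, standardization, and growing the recurrence class to about 50% of the training set); the array conventions and function names are assumptions, not the released code.

```python
# Sketch of class-imbalance handling: flip/standardize and oversample positives.
import numpy as np

def augment(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """img: (H, W, C) array; random flips plus per-image standardization."""
    if rng.random() < 0.5:
        img = img[:, ::-1, :]                        # horizontal flip
    if rng.random() < 0.5:
        img = img[::-1, :, :]                        # vertical flip
    return (img - img.mean()) / (img.std() + 1e-8)   # standardization

def oversample_recurrence(images, labels, rng=np.random.default_rng(0)):
    """Duplicate augmented recurrence cases until they form ~50% of the set."""
    images, labels = list(images), list(labels)
    pos = [i for i, y in enumerate(labels) if y == 1]
    while sum(labels) / len(labels) < 0.5:
        i = int(rng.choice(pos))
        images.append(augment(images[i], rng))
        labels.append(1)
    return images, labels
```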
Pre-Training for Transfer Learning Training a deep learning model with insufficient data is highly likely to cause the model to be overfitted. When training data are insufficient, transfer learning using a pre-trained model is a common method employed to prevent overfitting. As we had insufficient histopathology image data to train the model for predicting the prognosis of lung cancer patients, we performed transfer learning using data from a similar domain. We obtained 1067 (tumor = 823, normal = 244) HE-stained whole-slide histopathology images (WSIs) of LUAD from the Cancer Genome Atlas (TCGA). We pre-trained a convolutional neural network (CNN) using the TCGA data to classify LUAD and normal images. Figure 1A shows the pre-training workflow. We extracted the 10 core tiles, excluding the relatively white areas, from each histopathology image. The core tiles were captured at 40× magnification, and their size was 1024 × 1024 pixels. We extracted 20 patches from each tile and trained the CNN (ResNet50) using these patches as inputs. We then fine-tuned this pre-trained model using the histopathology image data of the 302 patients from the St. Mary's hospitals. Model Architecture Both tumor cell patterns and structural patterns are critical factors in predicting recurrence. As the two patterns have different characteristics, it is not effective to train on both simultaneously with one network. Therefore, we constructed DeepRePath for multi-scale pathology images using two separate CNNs (ResNet50). Figure 1B shows the architecture of DeepRePath. One network was used for structural patterns, and its input comprised the images captured at 100× magnification. The other network was for tumor cell patterns, and its input comprised the images captured at 400× magnification. We concatenated the two feature vectors extracted from the two networks and performed classification using XGBoost. Figure 1C shows the DeepRePath classification workflow. The images were augmented using vertical flips, horizontal flips, and standardization, to fix the class imbalance caused by the low proportion of cases with recurrence. In the case of a 400× magnification image, 36 patches with a size of 224 × 224 pixels were extracted from one image of tumor cell patterns. In the case of a 100× magnification image, DeepRePath extracted 24 patches with a size of 500 × 500 pixels and then reduced them to a size of 224 × 224 pixels. Patches extracted from a 100× magnification image and patches extracted from a 400× magnification image were each passed through their CNN independently. The CNN extracted feature vectors with a size of 2048 from each patch, and the feature vectors were averaged element by element. The two averaged feature vectors from the 400× and 100× magnification images were concatenated. Finally, we predicted the probability of recurrence within 3 years using XGBoost.
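A minimal sketch of this two-channel design in PyTorch with an XGBoost head; the pretrained-weight choice, pooling, and hyperparameters below are assumptions for illustration (torchvision ≥ 0.13 weights API), not the released DeepRePath code.

```python
# Two ResNet50 streams (100x and 400x), concatenated features, XGBoost classifier.
import torch
import torchvision.models as models
import xgboost as xgb

def feature_extractor():
    # ImageNet-pretrained ResNet50; the fc layer is replaced so the network
    # emits 2048-d feature vectors per patch.
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = torch.nn.Identity()
    return net.eval()

cnn_100x = feature_extractor()   # structural patterns (100x patches)
cnn_400x = feature_extractor()   # tumor cell patterns (400x patches)

@torch.no_grad()
def case_feature(patches_100x: torch.Tensor, patches_400x: torch.Tensor):
    """patches_*: (n_patches, 3, 224, 224) tensors for one case; returns a 4096-d vector."""
    f100 = cnn_100x(patches_100x).mean(dim=0)   # element-wise average over 24 patches
    f400 = cnn_400x(patches_400x).mean(dim=0)   # element-wise average over 36 patches
    return torch.cat([f100, f400]).numpy()

clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
# Training:  clf.fit(stack_of_case_features, recurrence_labels)
# Inference: clf.predict_proba(new_case_features)[:, 1] -> 3-year recurrence risk
```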
Visualization of the DeepRePath Model To create a deep learning application that analyzes histopathology images, it is crucial to make the deep learning model interpretable so that pathologists can understand it. To visualize our CNN-based DeepRePath model, we used the gradient-weighted class activation mapping (Grad-CAM) algorithm, which provides an explainable heatmap of the CNN model [28]. The input image was forward-propagated through the CNN of DeepRePath to obtain the raw score of recurrence. Grad-CAM then back-propagated this signal to the convolutional layer of interest to obtain the gradient. Using both these values, Grad-CAM computed the convolutional feature maps and combined them to compute the heatmap.

Figure 1. (A) Pre-training workflow: A total of 10 core tiles (1600 × 1600 pixels) excluding white areas were captured at 40× magnification from whole-slide histopathology images of lung adenocarcinoma (LUAD) from the Cancer Genome Atlas (TCGA). A total of 50 patches (224 × 224 pixels) were extracted from the core tiles. ResNet50 was trained to classify adenocarcinoma vs. normal tissue using the patches. This pre-trained ResNet50 model was used for transfer learning of the DeepRePath model. (B) DeepRePath network architecture: For multi-scale pathology images, DeepRePath was constructed using two separate convolutional neural networks (CNNs) (ResNet50). One network was for tumor cell patterns, and its input was the images captured at 400× magnification. The other network was for structural patterns, and its input was the images captured at 100× magnification. The 2048 features produced by each network were concatenated into 4096 features. These features were used as inputs for the XGBoost classifier trained to predict the probability of recurrence within 3 years. For visualization of the DeepRePath model, the gradient-weighted class activation mapping (Grad-CAM) algorithm was used. (C) DeepRePath training workflow: Slide images whose class is in the minority are augmented by methods such as vertical flip, horizontal flip and standardization to fix class imbalance. In the case of a 400× magnification image, 36 patches with a size of 224 × 224 pixels were extracted from an image for tumor cell patterns. In the case of a 100× magnification image, DeepRePath extracted 24 patches with a size of 500 × 500 pixels and then reduced them to a size of 224 × 224 pixels. Patches extracted from a 100× magnification image and patches extracted from a 400× magnification image were each passed through the CNN independently. The CNN extracted feature vectors with a size of 2048 from each patch, and the feature vectors were averaged element by element. The two averaged feature vectors from the 400× and 100× magnification images were concatenated. Finally, the probability of recurrence within 3 years was predicted using XGBoost.

In addition to heatmap visualization analysis, morphometric analysis of the nucleus was performed using ImageJ software (National Institutes of Health, Bethesda, MD, USA) [29,30]. We selected 30 representative patches at a magnification of 400×. In each patch, all nuclei in the hotspots and the same number of coldspot nuclei were analyzed. After drawing along the nucleus, we obtained the area, primary and secondary axes, maximum and minimum Feret diameters, and the perimeter to evaluate nuclear size and length. Moreover, the shape factor and roughness were determined to evaluate nuclear irregularity, and the aspect ratio and roundness were determined to evaluate nuclear elongation. Statistical Analysis Disease-free survival (DFS) duration was defined as the time from the date of surgery until the first recurrence or death from any cause, whichever was observed first, and the survival curves were estimated using the Kaplan-Meier method and compared using the log-rank test. The performance of our models was measured and compared using AUC scores. To determine the AUC of the classification for 3-year recurrence, patients censored before 3 years were excluded from the test set because the recurrence classification for these samples was unclear. The evaluation metrics included accuracy, sensitivity, specificity, positive predictive value, and negative predictive value. The nuclear morphometric results were evaluated using an independent t-test. Survival curves of the external cohort were generated using the Kaplan-Meier method and compared using the log-rank test. In the multivariate analysis, CoxPH regression models from the external cohort were used to identify the significance of the prognostic factors. Survival rates and hazard ratios are shown with their respective 95% confidence intervals (CIs). All statistical analyses were performed using R statistical programming (version 3.4.1; http://www.r-project.org), the SPSS software package (version 23, IBM, Chicago, IL, USA) and GraphPad Prism 8.0 (GraphPad Software, Inc., San Diego, CA, USA). A two-sided p-value of <0.05 was considered statistically significant in all tests and models.
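For reference, a compact version of the Grad-CAM computation described above (following Ref. [28]) for one CNN stream: the recurrence score is backpropagated to a chosen convolutional layer, the gradients are globally averaged into channel weights, and the weighted feature maps give the heatmap. Hook placement and layer choice are assumptions, not the exact DeepRePath code.

```python
# A compact Grad-CAM sketch for a classification CNN (e.g., one ResNet50 stream).
import torch
import torch.nn.functional as F

def grad_cam(model, layer, x, class_idx):
    """x: (1, 3, H, W) input; layer: the convolutional module to visualize."""
    feats, grads = {}, {}
    h1 = layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))
    try:
        score = model(x)[0, class_idx]          # raw recurrence score
        model.zero_grad()
        score.backward()                        # gradients w.r.t. feature maps
        w = grads["a"].mean(dim=(2, 3), keepdim=True)   # global-average gradients
        cam = F.relu((w * feats["a"]).sum(dim=1))       # weighted sum of maps
        cam = cam / (cam.max() + 1e-8)                  # normalize to [0, 1]
    finally:
        h1.remove(); h2.remove()
    return cam  # upsample to input size for overlay, e.g. with F.interpolate
```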
Model Performance Patients in cohort I were randomized to maintain the ratio of the training (74%), validation (8%), and test (18%) sets. The training and validation sets were used for model learning and optimal model selection, and the test set was used to evaluate model performance. Five-fold stratified cross-validation was used for training and validation. The performance results of our DeepRePath model are presented in Table 1 and Figure 2. The model performance was evaluated by averaging the scores of the five-fold stratified cross-validation. In the DeepRePath model, the use of 100× magnification images showed a sensitivity of 65%, a specificity of 59%, an accuracy of 62%, and an AUC score of 0.6, while the use of 400× magnification images showed a sensitivity of 52%, a specificity of 78%, an accuracy of 71%, and an AUC score of 0.68. In contrast, the combined use of 100× and 400× magnification images showed the best single-model performance, with a sensitivity of 46%, a specificity of 94%, an accuracy of 82%, and an AUC score of 0.72. These findings indicate that the features of the structural and tumor cell images were complementary to each other in deep learning for predicting clinical outcomes. After data augmentation, the model performance improved to a sensitivity of 74%, a specificity of 78%, and an AUC score of 0.77, but the accuracy decreased to 77%. Table 1. Performance evaluation of cohort I (average of five-fold cross-validation, n = 302). To further analyze the robustness, reproducibility, and reliability of the model, we performed an additional validation using data from cohort II. To develop the final DeepRePath model for testing cohort II, we used all the data from cohort I, including the five-fold cross-validation test set, to train the model. Similar to the results of cohort I, the combined use of architectural and tumor cell images showed the best single-model performance compared with the use of only structural or tumor cell images (accuracy of 77% and AUC of 0.76; Table 2 and Figure 3). In addition, we compared the results with and without transfer learning. Table 3 presents the differences between the models with and without transfer learning; these models were trained without data augmentation. When transfer learning was not applied, the AUC score of cohort I was high at 0.87, while the AUC score of cohort II (external validation set) was very low (0.58) (Table 3). This result suggests overfitting. When transfer learning was applied, the AUC score of cohort I was slightly reduced to 0.72, while the AUC score of cohort II was relatively high at 0.75. Therefore, we could prevent overfitting and obtain a more generalized model through transfer learning. We also compared three boosting algorithms: XGBoost, Gradient Boosting, and Adaptive Boosting (AdaBoost). In the DeepRePath model, the use of XGBoost showed the best model performance compared to Gradient Boosting or AdaBoost (Supplementary Table S3). Table 2. Performance evaluation of cohort II (external validation set) (n = 91). Visualization To identify the area in the pathology image that is the most responsible for predicting recurrence in the DeepRePath model, Grad-CAM was used to visualize our CNN-based DeepRePath model by creating heatmaps (Figure 4). The images produced by Grad-CAM were examined and assessed by two pathologists (S.A.H. and K.Y.). Grad-CAM highlighted the atypical nuclei of tumor cells under a high-power view (400×), which contributed to recurrence (Figure 4A,B). Under a low-power view (100×) of the architectural patterns of the tumor, Grad-CAM revealed two main histological features that may be associated with a high probability of recurrence: the first was tumor necrosis (Figure 4C), and the second was discohesive tumor cells in the alveolar space (Figure 4D). Nuclear Morphometric Results of Hotspots and Coldspots in Heatmap Visualization To clarify the association between heatmap visualization and nuclear morphology, morphometric analysis of the nucleus was performed using ImageJ software. The results are summarized in Table 4.
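The descriptors reported in Table 4 follow the footnote definitions given below the results; as a sketch (assuming per-nucleus area, perimeter, convex area, and axis measurements exported from ImageJ), they can be computed as:

```python
# Nuclear shape descriptors, per the footnote definitions of Table 4.
import math

def nuclear_descriptors(area, perimeter, convex_area, major_axis, minor_axis):
    return {
        "shape_factor": 4 * math.pi * area / perimeter ** 2,   # 1 = perfect circle
        "roughness": area / convex_area,                       # 1 = smooth membrane
        "aspect_ratio": major_axis / minor_axis,               # larger = more elongated
        "roundness": 4 * area / (math.pi * major_axis ** 2),   # 1 = round, -> 0 = elongated
    }

print(nuclear_descriptors(area=50.3, perimeter=25.1, convex_area=51.0,
                          major_axis=8.0, minor_axis=8.0))
```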
The nuclear size, primary and secondary axes, Feret diameters, and perimeter were significantly greater in hotspots than in coldspots (all p < 0.001), whereas the shape factor was significantly lower in hotspots than in coldspots (p = 0.036). These findings indicate that nuclear enlargement and membrane irregularity could predict recurrence, while nuclear elongation was not associated with recurrence. Values are presented as the mean ± standard deviation. * Shape factor is defined as 4π × area/perimeter². "1" indicates a perfectly round nucleus, and a value approaching "0" indicates nuclear shape pleomorphism. † Roughness is defined as area/convex area. "1" indicates minimal irregularity of the nuclear membrane. ‡ Aspect ratio is defined as major axis/minor axis; the higher the value, the more elongated the nucleus. § Roundness is defined as 4 × area/(π × primary axis²). A value closer to "1" indicates perfect roundness, and a value closer to "0" indicates an elongated nucleus. p < 0.05 was considered statistically significant. All variables were compared using Student's t-test. Prognostic Significance of the DeepRePath Model To determine the clinical significance of the DeepRePath model, survival analysis for DFS was performed on patients in the external validation set (cohort II). We assumed that samples with a high probability score would have a high risk; hence, we sorted the samples according to the probability scores based on the optimal probability threshold from the receiver operating characteristic curve. Patients with high-risk scores demonstrated significantly shorter DFS in each individual stage (I and II), as well as in stages I and II together (total population, Figure 5A, p < 0.0001; stage I, Figure 5B, p = 0.0009; stage II, Figure 5C, p = 0.0005). Notably, patients with stage I disease are of special interest because postoperative treatment for stage I disease is controversial. In this study, despite the small number of patients with stage I disease, the DFS of patients with high-risk and low-risk scores in the DeepRePath model was significantly different (p = 0.0009, Figure 5). In the univariate analyses, patients with a high-risk score based on the DeepRePath model had statistically worse outcomes (Table 5; p < 0.001). In the multivariate CoxPH analysis, the DeepRePath model score remained a statistically significant predictor of recurrence, with a high score indicating an unfavorable prognosis (Table 5; hazard ratio = 5.564, 95% confidence interval = 2.245-13.789, p < 0.001). Discussion Although histopathology images provide clinicians with important information related to a patient's clinical outcomes, it is challenging for pathologists to predict recurrence from most images after the curative resection of early-stage LUAD. In this study, we developed a deep learning model (DeepRePath) to predict the recurrence of primary tumors. DeepRePath demonstrated good performance using multi-scale pathology images that show the tumor architecture and tumor cells. Our DeepRePath model could stratify early-stage LUAD into high-risk and low-risk groups. Wang et al. tried to predict recurrence in early-stage NSCLC through nuclear segmentation and nuclear feature extraction using CellProfiler. They categorized the extracted features using three popular classifiers (QDA, LDA, and SVM with a polynomial kernel) and predicted recurrence (AUC score = 0.69-0.84, accuracy = 75-82%) [23].
However, only 122 patients with LUAD were enrolled in that study, and the authors did not present data on the performance of their model according to histological type (adenocarcinoma vs. squamous cell carcinoma). In contrast to that previous study, we performed data augmentation to circumvent issues associated with the low incidence of early-stage lung adenocarcinoma recurrence [23]. Additionally, in our study, pathologists examined foci that were indicative of a high probability of recurrence, depicted them as heatmaps, and tried to interpret these features. As a result, nuclear atypia, tumor necrosis, and discohesive tumor cells were identified as potential prognostic features. Similarly, after nuclear segmentation and extraction with CellProfiler, Yu et al. used seven classifiers to differentiate malignant and normal tissues and adenocarcinoma from squamous cell carcinoma, and finally predicted survival in stage I LUAD (Kaplan-Meier curve, p = 0.0023-0.028) [24]. In addition, the nuclei were analyzed using machine learning with nuclear feature extraction: Luo et al. selected significant features using CoxPH analysis and then predicted survival using random forest methods (hazard ratio = 2.34 in LUAD and 2.22 in LUSC) [18]. However, the features identified from nuclear segmentation are difficult to utilize in clinical settings, and the removal of false positives is not feasible. Therefore, we allowed the deep learning model to predict recurrence freely. The nuclei were mainly indicated as hotspots on the heatmap. Recurrence was associated with an enlarged nucleus and an irregular nuclear membrane, but not with nuclear elongation (Table 4). Thus, we could eliminate potential false-positive findings and obtain results that are applicable to actual clinical practice. We used pathology images at different scales as the input data for DeepRePath. Depending on the scale, the images have their own strengths for extracting pathologic features, such as tumor-infiltrating immune cells, tumor cells, and tumor stroma. In our study, we found that DeepRePath extracted tumor necrosis and tumor cell patterns under a low-power view (original magnification, 100×). Tumor necrosis and discohesive tumor cells may be key features related to aggressive tumor behavior [35,36]. However, architectural features related to prognosis, such as micropapillary features, lymphovascular invasion, increased stroma, and tumor spread through air spaces, could not be identified by our model. Further studies integrating prognostic features identified by human pathologists and deep learning models could be helpful for accurately predicting prognosis. Adjuvant chemotherapy plays a significant role in the treatment of patients with resected LUAD and improves overall survival, resulting in a 4-5% absolute increase in 5-year survival [37]. Although adjuvant chemotherapy is limited to patients with NSCLC above stage I, in a previous study, 30% of patients at that stage showed disease recurrence [38]. In practice, clinicians experience difficulties in deciding on chemotherapy to prevent recurrence in patients with stage I disease. DeepRePath by itself might be limited with respect to detecting recurrence owing to its low performance. However, in combination with clinicopathological features (TNM stage, solid and micropapillary subtypes), DeepRePath can aid clinicians in identifying patients with a high risk of recurrence who might benefit from adjuvant chemotherapy in early-stage lung adenocarcinoma.
Additionally, DeepRePath may help determine the frequency of radiologic examinations and the length of follow-up periods for the early detection of recurrence. In our study, predicting the recurrence of lung cancer by analyzing pathology images yielded significant results; however, there were some limitations. First, the number of pathology images used for deep learning was relatively small in the context of developing a general model for predicting stage I and II LUAD. Further studies using more data are needed to determine whether DeepRePath can predict various LUAD stages. Second, in this study, pathologists identified a large section of tumor cells on histopathology slides and input the captured images into our deep learning model to determine lung cancer recurrence. To effectively train the model using WSIs without pathologist intervention, a fully automated model that can detect and segment the tumor area is needed. In a previous study, a model using WSIs and deep learning failed to predict disease-specific survival in lung adenocarcinoma (hazard ratio = 1.35, confidence interval = 0.87-2.08, p = 0.1824) [22]. However, Wu et al. predicted the recurrence of lung cancer using WSIs from TCGA and deep convolutional neural networks. They reported a relatively good prediction performance (AUC score = 0.79, sensitivity = 0.84, and specificity = 0.67), which was similar to that of DeepRePath [26]. Nonetheless, WSIs require high computing power, and there are problems with image quality and confounding non-neoplastic tissues outside the tumor area [39]. Practically, the selection and refinement of images by experienced pathologists can reduce these disadvantages. All our input pathology images were selected and refined by three pathologists. Third, to identify the area in the pathology image that is most responsible for predicting recurrence in the DeepRePath model, we used Grad-CAM to visualize the CNN results by creating heatmaps. Nevertheless, some heatmaps were difficult for the pathologists to interpret. To determine whether specific heatmap regions are representative of novel histopathological features, more data should be collected to establish the consistency of the results. Further studies may be warranted based on these results. Conclusions In conclusion, our findings show that a DeepRePath model with transfer learning using two separate CNNs could identify early-stage LUAD patients with a high risk of recurrence based on multi-scale pathology images, despite some limitations related to the small number of patients. Although DeepRePath is not suitable for use as an automated tool in clinical settings owing to its modest performance, differential risk classification using the DeepRePath model can support patient prognostication. Ultimately, our results demonstrate the usefulness of a deep learning model for clinically stratifying patients beyond the TNM stage. This contributes to the development of personalized treatments that can improve patient outcomes. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/cancers13133308/s1, Figure S1: Data criteria and specification. Figure S2: The process of acquiring the pathological tumor images. Table S1: Baseline characteristics of cohort I (training set) and cohort II (external validation set). Table S2: The institutes of cohort I and cohort II (external validation set).
Table S3: Comparison of the performance of models using XGBoost, Gradient Boosting, and AdaBoost for cohort I (training set) and cohort II (external validation set). Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki, and approved by the Institutional Review Board of the Catholic Medical Center (UC17SESI0073). Informed Consent Statement: The requirement for written informed consent was waived by the institutional review board (Catholic Medical Center) because of the retrospective nature of this study. Data Availability Statement: The program code of DeepRePath and data that do not infringe patients' personal information are available at https://github.com/deargen/DeepRePath, accessed on 17 May 2021.
8,107.4
2021-07-01T00:00:00.000
[ "Medicine", "Computer Science" ]
Impact of Treadmill Interval Running on the Appearance of Zinc Finger Protein FHL2 in Bone Marrow Cells in a Rat Model: A Pilot Study Although the benefits of physical exercise for preserving bone quality are now widely recognized, the intimate mechanisms leading to the underlying cell responses still require further investigation. Interval running training, for instance, acts as a generator of impacts on the skeleton, and particularly on the progenitor cells located in the bone marrow. Therefore, if this kind of stimulus initiates bone cell proliferation and differentiation, the activation of a dedicated signaling pathway by mechano-transduction seems likely. This study aimed to investigate the effects of an interval running program on the appearance of the zinc finger protein FHL2 in bone cells and their anatomical location. Twelve 5-week-old male Wistar rats were randomly allocated to one of the following groups (n = 6 per group): sedentary control (SED) or high-intensity interval running (EX, 8 consecutive weeks). FHL2 identification in bone cells was performed by immuno-histochemistry on serial sections of radii. We hypothesized that the impacts generated by running could activate, in vivo, a specific signaling pathway, through an integrin-mediated mechano-transductive process, leading to the synthesis of FHL2 in bone marrow cells. Our data demonstrated the systematic appearance of FHL2 (% labeled cells: 7.5%, p < 0.001) in bone marrow obtained from EX rats, whereas no FHL2 was revealed in SED rats. These results suggest that the mechanical impacts generated during high-intensity interval running activate a signaling pathway involving nuclear FHL2, similar to that observed with dexamethasone administration. Consequently, interval running could be proposed as a non-pharmacological strategy to contribute to bone marrow cell osteogenic differentiation. Introduction In the fight against degeneration of the musculoskeletal system, non-drug prophylactic approaches, particularly through physical activity, hold a very promising place [1], and their connection with good health and wellness is now accepted as part of an integrative approach [2]. The intimate mechanisms by which physical exercise induces structural musculoskeletal adaptations, however, need to be further investigated [3], and specifically the place of the mechano-transductive signalling pathways induced by mechanical stress of the musculoskeletal system needs to be better defined [4]. Indeed, the mechanical stresses applied to cells induce changes in their structure and shape by altering the balance of strains at the cell level, and these strains generate various cell responses, such as growth, differentiation, mobility, remodelling, and gene expression, thus determining the cell fate [5][6][7]. Cells implement the mechano-transductive process by converting perceived physical stimuli into intracellular biochemical signals [8,9]. These biochemical signals may consist of the activation of a specific signalling transduction pathway, which results in the adaptation of the cell to the physical stimuli [10]. Regarding bone specifically, physical exercise is usually known to have an anabolic effect on bone tissue, bone mineral density (BMD), and global bone status [11]. Different exercises have, however, different effects on bone status. In particular, the best osteogenic effects have been observed in sports in which bone strains result from impacts [12].
Outdoor running [13] or treadmill running [14] are consequently good candidates to induce bone adaptations. To explain the relationship between physical activities and mechano-transductive processes, various candidate actors have been investigated, including the pressure generated by the interstitial fluid bathing the osteocytes and its effects on osteoblast and osteoclast activities [15], and the role played by bone marrow cells [16]. Indeed, mesenchymal stem cells (MSCs) are multipotent stromal cells capable of multilineage differentiation. They contribute to the regeneration or repair of mesenchymal tissues such as bone, cartilage, muscle, ligament, tendon, and adipose tissue [17]. Various signal transduction pathways driving the differentiation of bone marrow MSCs toward the osteogenic lineage have been described [18]. More recently, it has been shown that zinc finger proteins, and especially their LIM domain [26], could be involved in a mechano-sensing response [27] and could induce gene-transcription regulation [28]. They would therefore be reasonable candidates to participate in a mechano-transductive process. Four and a half LIM protein type 2 (FHL2) belongs to this class of proteins. FHL2 is detected close to actin filaments and focal adhesions in the cytoskeletal subcellular compartment. According to the "Human Protein Atlas" [29], FHL2 is expressed in small amounts in hematopoietic cells of the bone marrow. It is a multifunctional intracellular adaptor protein that participates in various cell processes [30], including adjustments in signalling cascades and gene transcription activities [31]. This protein is involved in muscle development [32], cardiovascular system hypertrophy, atherosclerosis or angiogenesis [33], changes in chondrocyte morphology [34], mesenchymal cell osteogenic differentiation and bone formation [35], fibroblast activation [36], and adipocyte differentiation [37]. In vitro, in the absence of mechanical stresses, FHL2 appears in bone tissue in response to chemical osteoinductive stimulation (dexamethasone [38]), which mediates MSC differentiation into osteoprogenitor cells. FHL2 expression would therefore induce alkaline phosphatase, type I collagen and osteocalcin synthesis, as well as extracellular matrix mineralization [39]. Nakazawa et al. (2016) [40] suggested that FHL2 phosphorylation by FAK is a critical, mechanically dependent step in signalling from the extracellular matrix to the nucleus for gene expression and cell proliferation. Moreover, if FHL2 is important for bone marrow MSC differentiation, it must be underlined that it is also essential to the maintenance of bone hematopoietic stem cells (HSCs) in a quiescent state and to their survival under biochemical stress conditions [41]. Krüppel-like factor 8 (KLF8) is a zinc finger protein identified as a transcriptional factor [42]. It is also a target of the FAK signalling pathway for up-regulation of the cyclin D1 promoter [42]. This protein is localized in the nucleus of bone marrow cells according to the Human Protein Atlas [29], and in the cytosol and nucleoplasm according to the UniProt database [43]. It plays a crucial role in the differentiation or proliferation of several kinds of cells [44,45]. Thus, FAK plays a major role in mediating signal transduction by integrins [46], thereby transmitting signals from the extracellular matrix to the cytoskeleton [25,44] and to the nucleus. Moreover, FAK has several interactions with KLF8 or FHL2 activities.
Consequently, with reference to the in vitro signalling pathways listed by Langenbach and Handschel (2013) [47], which induce the osteogenic differentiation of stem cells by increasing the transcription of FHL2, we hypothesized that the mechanical impacts generated by interval running could activate, in vivo, a FAK-dependent signaling pathway. This mechanism could be expected to proceed through an integrin-mediated mechano-transductive process, leading to KLF8 up-regulation or activation and FHL2 synthesis or translocation, with consequences for bone marrow cell osteogenic differentiation [16]. Animal Experiment Twelve 5-week-old male Wistar rats, weighing 203 ± 10 g, were purchased from Elevage Janvier (Le Genet-St-Isle, France) and acclimated for 1 week to the new facilities and for 1 week to the treadmill. Rats were then randomly assigned to one of the 2 following groups (n = 6 each): sedentary control (SED) or running exercise (EX). At the beginning of the experiment, all rats were housed in controlled facilities (3 per standard cage) and maintained on a 12 h light/dark cycle, at a constant temperature of 21 ± 2 °C. A commercial standard diet (Genestil, Royaucourt, France) and tap water were provided ad libitum to all animals. The experimental study was carried out in strict accordance with the European Guidelines for Care and Use of Laboratory Animals (Directive 2010/63/EU). The experimental protocol received approval from the Ethics Committee on Animal Research of Lariboisière/Villemin (Paris, France) and the French Ministry of Agriculture (Paris, France; APAFIS # 9505). Maximal Aerobic Speed Test On the 5th day of acclimation, each rat underwent Maximal Aerobic Speed (MAS) testing, using a progressive running test [48] to determine subsequent running speeds. It started with a 5-min warm-up at a 10° inclination and a speed of 13 m/min, followed by increments of 4 m/min every 2 min until 17 min were reached, then increments of 4 m/min every 1 min 30 s. The test was conducted until the rats could no longer keep pace with the treadmill speed, despite 2 consecutive stimulations with air-compressed sprays. The test was then stopped, and the last fully sustained increment speed was defined as the MAS for the rat. The same MAS test was conducted at the end of the exercise protocols to validate the exercise program (the resulting speed schedule is written out programmatically in the sketch following this section). Training Protocol The running training protocol was performed on the treadmill: 0° inclination, 45 min per day, 5 days per week, for 8 consecutive weeks. It consisted of 7 repetitions of blocks of 3 min at a moderate speed (70% MAS), followed by 2 min of high-intensity running (i.e., 100% MAS) and 1 min of passive recovery. Bone Histology One day after the last training session, the animals were euthanized by exsanguination, and the radii were excised, cleared of connective soft tissues, and fixed in 4% v/v paraformaldehyde at 4 °C. Bone samples were slowly decalcified with EDTA 177 g/L, pH 7.0-7.3 (Osteosoft, Merck KGaA, Darmstadt, Germany), embedded in paraffin, and cut longitudinally with a microtome. Slides were mounted, and immunohistochemical labelling was performed with a primary anti-FHL2 antibody (EPR17860-23, Abcam) for the different bone tissues, or with an anti-KLF8 antibody (PAS-67196, Invitrogen) for the bone marrow, at a dilution of 1/100 for both antibodies. 3,3′-Diaminobenzidine (DAB) was added for stain revelation, and counter-staining was achieved with hematoxylin.
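For reproducibility, the MAS test speed schedule described above can be written out programmatically. This is a convenience sketch of the protocol as we read it, not software used in the study:

```python
def mas_test_schedule(max_minutes=30):
    """Yield (start_min, end_min, speed_m_per_min) stages of the MAS test."""
    stages, t, speed = [], 0.0, 13.0
    stages.append((t, t + 5, speed))        # 5-min warm-up at 13 m/min
    t += 5
    while t < max_minutes:
        speed += 4.0                        # +4 m/min per increment
        step = 2.0 if t < 17 else 1.5       # 2-min steps until min 17, then 90 s
        stages.append((t, t + step, speed))
        t += step
    return stages

for start, end, v in mas_test_schedule(22):
    print(f"{start:5.1f}-{end:5.1f} min: {v:.0f} m/min")
```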
Imaging Processing Slides were observed under an optical microscope with a camera at 20× magnification, and images were visualized on a computer screen using IC Capture® software (The Imaging Source Europe GmbH, Sommerstrasse, Germany) for image acquisition (v.2.4; exposure time 1/6410 s; 8.81 dB gain; acuity 0; gamma 48). Ten pictures each for the cortical, trabecular, and bone marrow compartments were defined for each rat. Qualitative analyses were performed for bone architecture and bone marrow composition, as reported by Lapidot et al., 2005 [49]. For quantitative analyses, pictures from the bone sections were digitally processed and the total number of cells was counted in each picture field using ImageJ software (National Institutes of Health, Bethesda, MD, USA) and its associated digital processing plug-ins. Image processing was performed as follows: the background noise of the image was first subtracted; the image was converted into a 16-bit image, and then into a binary image with the contrast adjusted to 27% of the total signal. The binary processing was set up with the "Fill Holes" mode, which allowed the removal of signals of too-low intensity (empty areas). Finally, the number of structures exceeding 150 pixels and delimited with the "analyse particles" mode was counted, excluding structures too near the edges of the image (a rough Python equivalent of this pipeline is sketched below). The number of marked cells (i.e., labelled by immuno-histochemistry) and the number of osteocytes (easily identified in their lacunae) were counted manually on printed pictures. Statistical Analysis The normality of the data distribution was first assessed visually for each condition using quantile-quantile ("Q-Q") plots, and then confirmed with the Shapiro-Wilk test ("Statistical Tools For High-Throughput Data Analysis" platform: sthda.com; accessed on 15 Jun 2019), with the null hypothesis that the data follow a normal distribution. Variance homogeneity was checked with Fisher's F-test. Our confidence interval was set at 95%. We then compared the mean percentages of cells labelled with each primary antibody. Since our samples came from different animals, the data were normally distributed, and the variances were homogeneous, we opted for a two-tailed parametric Student's t-test for independent groups (tests performed on the biostatgv.sentiweb.fr platform). Our null hypothesis was that the means were equal in both groups, and our confidence interval was set at 95%. Variables were expressed as means ± SEM.
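The ImageJ chain described under Imaging Processing (background subtraction, binarization, hole filling, size-filtered particle counting away from the borders) can be approximated as follows. This is a rough sketch, treating the 27% contrast adjustment as a simple intensity threshold; all names are ours:

```python
import numpy as np
from scipy import ndimage
from skimage import io, measure, morphology

def count_cells(path, min_size_px=150, threshold_frac=0.27):
    img = io.imread(path, as_gray=True).astype(float)
    img -= ndimage.uniform_filter(img, size=50)      # crude background subtraction
    binary = img > threshold_frac * img.max()        # stand-in for the 27% contrast cut
    binary = ndimage.binary_fill_holes(binary)       # ImageJ "Fill Holes"
    binary = morphology.remove_small_objects(binary, min_size=min_size_px)
    labels = measure.label(binary)
    # Exclude structures touching the image border, as in "analyse particles".
    border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
    return sum(1 for r in measure.regionprops(labels) if r.label not in border)
```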
Qualitative Observations Pictures A and B in Figure 1 display cortical bone from the SED and EX groups, respectively. Osteocytes appeared purple in their lacunae. A lower number of empty osteocyte lacunae was observed in the EX group (B) than in the SED group (A). In contrast, no differences in the number and size of vascular canals were observed between groups. These parameters served to locate FHL2 or KLF8 immuno-staining under the different experimental conditions. A slight visual difference in FHL2 immunostaining between the SED and EX groups was observed. Figure 1C,D display trabecular bone in the diaphyseal medullary canal, embedded in the marrow, which includes MSCs and HSCs (these cannot be distinguished in the pictures). Osteocytes in the trabecular bone were stained purple by hematoxylin. FHL2 labelling, although particularly rare, was also observed in the cortical bone. Figure 1E,F display bone marrow in the medullar canal. In the EX group, FHL2 immunostaining in bone marrow appeared stronger than in cortical bone (Figure 1B versus Figure 1F). Moreover, this labelling was not randomly distributed, but rather located in the diaphysis (Figure 1E,F), whereas only rare labelling was observed elsewhere (for example, in the epiphyses). The DAB labelling obtained with the FHL2 antibody and the hematoxylin counter-staining appeared specific to the same subcellular compartment (the cell nucleus). Compared to the SED group, a stronger labelling intensity of FHL2 was observed in the EX group (Figure 1F), whereas no difference in KLF8 immunostaining appeared between the SED and EX groups in the bone marrow (Figure 1G,H). Furthermore, labelling of the lining cells (Figure 1I) in the endosteal region was only observed in the diaphyseal part, not in the metaphyseal or epiphyseal cortical parts. No labelling was observed in the osteocytes. It seemed, however, that labelling could be observed in blood cells in the vascular canals. Quantitative Analyses In cortical bone, the percentage of FHL2-labelled cells in the EX group (1.3 ± 1.2%) was higher (p < 0.05) than in the SED group, where no FHL2 was stained. In the trabecular bone, the percentage of FHL2-labelled cells was 0.2 ± 0.08% and 0% in the EX and SED groups, respectively; the difference was, however, not statistically significant. In the bone marrow, the percentage of FHL2-labelled cells in the EX group was significantly higher (p < 0.05) than in the SED group (7.5 ± 1.6% vs. 0%). In the bone marrow, the percentage of KLF8-labelled cells in the EX group was not significantly higher than in the SED group (14 ± 4.2% vs. 12.7 ± 6%). Discussion FHL2 is an indicator of cell differentiation from MSCs to osteoblasts. To date, this protein's expression is low in HSC cytoskeletal sub-compartments and null in the nucleus [29]. FHL2 appeared here in bone marrow cells of radii obtained from rats after stimulation by high-intensity interval running. This bone marrow cell response to mechanical stress in vivo can be compared to the response induced in vitro by chemical stimulation (dexamethasone, for instance), in which FHL2 expression induces the differentiation of MSCs into osteoblasts [47]. Our study highlighted the presence of FHL2 in the bone marrow cells of running rats, whereas no FHL2 was observed in sedentary rats. This result indicates that when bone is not, or is insufficiently, exposed to external mechanical stresses, FHL2 is not expressed, whereas FHL2 is synthesized when bone is subjected to sufficient external mechanical stress. Consequently, seven repetitions of blocks of 3 min of running at moderate speed, followed by 2 min of intensive running and 1 min of passive rest, 45 min per day, 5 days per week, for 8 consecutive weeks, are deemed to cause sufficient mechanical impact on the locomotor system to induce an FHL2 response in bone marrow cells. Because FHL2 is a protein involved in MSC differentiation into osteoblasts, our results are consistent with the observations of Gonzalo-Encabo et al. (2019) [50] that exercise involving repeated impacts could warrant osteogenic effects on bone status in postmenopausal women.
We hypothesized that FHL2 could be produced endogenously in vivo in bone marrow cells when they are subjected to external mechanical stress, just as the osteogenic inducer dexamethasone provokes FHL2 production in vitro in bone marrow MSCs [38]. By way of mechanical stress, FHL2 synthesis in MSCs (as with dexamethasone) could initiate the expression of the transcription factor Runx2, leading to differentiation toward the osteoblast lineage and mineralization of the extracellular matrix. This could explain the immunostaining observed in cortical bone in the EX group, since differentiation was constantly taking place for osteoblasts close to the endosteal cortical bone. FHL2 could be an indicator of the activation of the mechanisms by which bone marrow MSCs differentiate into osteoblasts, before it becomes a transcription factor itself, translocating to the nucleus through its association with the cargo protein IGFBP-5 [51]. FHL2 is a metalloprotein of the zinc finger class. For others, such as the Muscle LIM Protein (MLP) in muscle cells, a translocation from the cytoskeleton to the nucleus has been identified [52]. It depends on an activation resulting from a post-translational response to stress, for example a mechanical stress. Moreover, Nakazawa et al., 2016 [40], reported an FHL2 movement from the cytoskeleton to the nucleus to activate p21 expression. So, even if p21 is involved in the inhibition of cell proliferation, we cannot exclude that FHL2 could also play another role, such as acting as a transcriptional cofactor, or a function mediated by FAK activation, with a translocation to the nucleus. How Could FHL2 Be Synthesized in Response to a Mechanical Stimulus? Among the mechanisms described for cells to achieve communication between the extracellular and intracellular media, tensegrity and mechano-transduction allow the translation of mechanical strains applied to the ECM into an intracellular mechanical stress [53] or an intracellular biochemical message [54]. Numerous transduction signalling pathways have been identified, often involving integrin complexes as the cell gateway. Moreover, integrins have been associated with the differentiation of MSCs into osteoblasts [55]. Consequently, based on a possible response of MSCs to mechanical nano-stimuli involving integrins [56], and with reference to the FHL2-dependent signalling pathways leading to the differentiation of MSCs into osteoblasts, we have tried to understand how bone marrow cells could induce FHL2 synthesis in response to an external mechanical stress (Figure 2; the left-hand frame depicts the signalling pathway leading to the synthesis of FHL2 by mechano-transduction: FAK would play a key role by activating two transduction signalling pathways that regulate Sp1 (PI3K/AKT and Src/ERK); Sp1 would bind to a KLF8 promoter to initiate KLF8 transcription, and KLF8 would initiate the transcription of FHL2 by binding to its promoter). To answer this question, the literature describes the KLF8 protein as a zinc finger protein with a possible role in transcription [57] and mechano-transduction [27]. Its cytoplasm-nuclear localisation [58] is ensured by the presence of an NLS (nuclear localisation signal) fragment [59].
It appears that the synthesis of KLF8 can be initiated at the focal adhesion plate by the activation of FAK [46], and then by the activation of two main signalling pathways, specifically PI3K/AKT [44] and Src/ERK [42], before the transcription factor Sp1 induces its synthesis. It could then be possible to link the expression of KLF8 and FHL2: KLF8 becomes a transcription factor by binding to the gene encoding FHL2 (binding to the GT box 1 of the promoter, at the GGGTG nucleic acid sequence between positions −55 and −50) [60,61]. These mechanisms could explain why a lack of KLF8 and a lack of FHL2 have similar effects. For instance, a decrease in KLF8 expression slows down the proliferation of differentiated cells (cancerous osteoblasts) or metastases in osteosarcomas [62], and FHL2 positively affects osteoblastogenesis, bone formation, and bone mass [35]. As displayed in Figure 2, a mechano-transductive pathway could activate the end of the signalling pathway that is known to be activated in vitro by dexamethasone. In contrast to our results for FHL2, we could not quantitatively differentiate, at the protein level, the presence of KLF8 in the BM of trained rats from that in the BM of untrained rats (14 ± 4.2% vs. 12.7 ± 6%), and this lack of statistical difference may appear surprising. In this context, it is necessary to specify that: (i) KLF8 is expressed at the cellular level for multiple reasons, being involved in a set of mechanisms for differentiating healthy cells (e.g., for adipocyte production [45]) and in the regulation of cell proliferation (e.g., cancerous tumours [61]); (ii) KLF8 is not exclusively nucleo-plasmatic [43] but is also cytosolic outside of nuclear adaptative processes; and (iii) immuno-histochemical analyses do not allow overexpression to be distinguished quantitatively from baseline expression. This explains, firstly, the identification of KLF8 in rat bone marrow cells without the mechanical stress induced by running exercise and, secondly, why we were unable to associate the presence of this protein in a cell sub-compartment with a nuclear adaptative process in response to exercise. However, even without being able to conclude that overexpression is related to mechanical stress, the presence of KLF8 in the cell nuclei of trained animals is compatible with an activation of the FAK-dependent signalling pathway involving KLF8/FHL2 cooperation. We emphasize that subcellular localization and quantification are necessary to validate with certainty that running can induce FHL2 transcription through the synthesis of nuclear KLF8 (cytoplasmic-nuclear translocation of FHL2 for MSC differentiation remaining a possible and independent mechanism). The mechanism proposed here could explain (i) why it is possible, when MSCs differentiate into osteoprogenitors, to observe the protein FHL2 in MSCs still engaged in the differentiation phase, and (ii) why, in healthy, differentiated or pre-differentiated breast (cancer) tissue, the cells are devoid of FHL2. Our hypothesis would also explain how, by mechano-transduction, FHL2 could be produced in bone marrow MSCs under mechanical stress and initiate the osteogenic differentiation of the MSCs. Our hypothesis regarding the mechanism of expression of FHL2 in MSCs is presented in Figure 2, in the framed section. Finally, our hypothesis is similar to that of Tsimbouri et al. (2015) [56], who concluded that bone marrow cells, and in particular MSCs, are able to respond to nano-stimulation, paving the way for adaptive responses.
Author Contributions: A.G., C.B., C.P., H.P., S.P. and P.G. equally contributed to this work. All authors have read and agreed to the published version of the manuscript. Informed Consent Statement: Not applicable. Data Availability Statement: The data presented in this study are available on request from the corresponding author.
5,308.8
2022-04-01T00:00:00.000
[ "Biology" ]
Quantitative transportation assessment in curved canals prepared with an off-centered rectangular design system The purpose of this study was to assess the ability of an off-centered rectangular design system [ProTaper Next (PTN)] to maintain the original profile of the root canal anatomy. To this end, ProTaper Universal (PTU), Reciproc (R) and WaveOne (WO) systems were used as reference techniques for comparison. Forty clear resin blocks with simulated curved root canals were randomly assigned to 4 groups (n = 10) according to the instrumentation system used: PTN, PTU, R and WO. Color stereomicroscopic images of each block were taken before and after instrumentation. All image processing and data analysis were performed with an open source program (Fiji v.1.47n). Evaluation of canal transportation was obtained for two independent regions: the straight and curved portions. Univariate analysis of variance and Tukey's Honestly Significant Difference test were performed, and the cut-off for significance was set at α = 5%. Instrumentation systems significantly influenced canal transportation (p < 0.001). Overall, R induced significantly lower canal transportation compared with WO, PTN and PTU (p < 0.001). The curved portion displayed greater canal transportation than the straight one (p < 0.001). The significance of the difference among instrumentation systems varied according to the canal level evaluated (p < 0.001). In the straight portion, R and WO exhibited significantly lower transportation than PTN, whereas in the curved portion, R produced the lowest deviation. PTU exhibited the highest canal transportation at both levels. It can be concluded that PTN produced less canal transportation than PTU and WO; however, R exhibited better centering ability than PTN. The most relevant reciprocation systems available on the market, R and WO, propose using a single file to prepare the root canal, so that it ultimately has a minimum taper-size shape.1,2,3 To achieve this, reciprocating techniques normally use large, rigid single files of increased taper, which can result in a higher incidence of canal transportation compared with the progressive increase in file size and taper proposed in multifile rotary systems.3,4,5 Furthermore, the lack of a preliminary coronal enlargement, the greater engagement of the flutes, and the higher torque and/or increased applied pressure in reciprocating techniques could contribute to greater canal transportation.1,2,3,4,5 In contrast, the multifile PTN system seeks to improve the strength and flexibility along the active part of the file by incorporating a progressive and regressive taper design and using an innovative off-centered rectangular design.6 This specific design enables an asymmetric rotary motion intended to decrease the screw-in effect, minimizing the contact area between the file and the dentinal walls.7 This could be especially important when navigating challenging curves in the apical region, thus minimizing canal transportation. Investigations of the shaping effects promoted by these new NiTi systems are becoming more important, because they help to understand how file design and different kinematics affect NiTi system performance.8
Simulated curved canals in resin blocks have traditionally been used to evaluate aspects of shaping ability, including the canal transportation and centering ability of different NiTi systems.9,10 However, a major limitation of most proposed evaluation methods is the need for operator intervention to preselect the evaluation points required to obtain the transportation measurements. An interesting root transportation analysis in simulated canals has recently been proposed. It uses an automatic approach that measures the entire simulated canal without operator intervention.11 This innovative method offers some improvements, including much less operator input and reduced bias, thus providing results for the evaluation of the whole canal length instead of just preselected slices. The present study was designed to assess the ability of the PTN system to maintain the original profile of root canal anatomy using simulated curved canals in resin blocks. ProTaper Universal (PTU; Dentsply Maillefer), R and WO systems were used as the reference techniques for comparison. A recently published methodological approach was used to automatically register the images before and after instrumentation and to apply a skeletonization algorithm to calculate the canal centering ability. The null hypothesis tested was that there are no significant differences in canal transportation between PTN and the other tested NiTi systems. Methodology Digital image acquisition Forty simulated curved root canals in clear resin blocks (Endo Training Blocks ISO 15; Dentsply Maillefer), with 2% taper, 10 mm radius of curvature, 70° angle of curvature and 17 mm length, were randomly assigned to 4 groups (n = 10) according to the instrumentation system used: PTN, PTU, R and WO. Before any instrumentation procedures, a round silicon base with a rectangular slot was fitted onto the microscope base of a color stereomicroscope (1005t Opticam stereomicroscope; Opticam, São Paulo, Brazil) coupled to a digital camera (CMOS 10 megapixels; Opticam, São Paulo, Brazil). The rectangular slot matched the exact dimensions of the simulated canal blocks. Each specimen was then inserted into the slot, and color images were taken and stored in TIFF format. Following the instrumentation procedures, all blocks were imaged again using the same protocol. Ten resin blocks were used as a control group in which no instrumentation was performed, in order to check the reliability and consistency of the repositioning method. In this group, one color stereoscopic image of each block was taken; the block was then removed and repositioned, and another image was taken. Instrumentation In all groups, stainless steel size 10 and 15 K-files (Dentsply Maillefer) were used to scout the canal up to the working length (WL), creating an initial, standardized glide path. R. The canals were prepared with R25 (25/0.08) instruments used according to the pre-set program (RECIPROC ALL), powered by a torque-controlled motor (VDW Silver). The instrument was gradually advanced into the root canal until it reached 2/3 of the WL, and then moved in a slow and gentle in-and-out pecking motion with a 3 mm amplitude limit. After every three complete pecking movements, the instrument was removed from the canal and its flutes were cleaned by insertion into a clean stand with a sponge. WO. WO Primary (25/0.08) files were used similarly to the R group, according to the WAVEONE ALL pre-set program.
All instrumentation procedures were performed by a single operator with experience in rotary and reciprocating motions, and only new instruments were used. Apical patency was confirmed between each preparation step using a size 10 K-file just beyond the WL, and the canals were irrigated with 1.0 mL sterile water using a 30-G side-vented needle (Max-i-Probe; Dentsply Rinn, Elgin, USA) placed to a depth just short of binding. After final irrigation with 1.0 mL sterile water, post-instrumentation images were acquired as described earlier. Image processing and analysis All image processing, registration, segmentation and extraction of attributes were performed with the FIJI open source software interface (Fiji v.1.47n; Madison, USA) or one of its associated plugins.12 Image processing and analysis were based on previously described methodology.11 Briefly, the images were first converted to 8-bit grayscale, after which each pair of images (baseline and after instrumentation) was registered using the "Rigid Registration" plugin. The baseline image was used as the template for the rigid transformation. A composite image of the baseline and the instrumented canal after registration can be seen in Figure 1A. Each canal (baseline and instrumented) was then segmented from the background using an iterative polygon tracing tool. Each line segment was defined by the user, following the geometry of the canal, and aided by an automatic segmentation algorithm to appropriately define the edges. After defining the polygon, a simple binarization scheme (0 for background, 255 for the defined polygon) was applied (Figure 1B). A skeletonization algorithm,11 which uses a symmetrical erosion procedure to find the centerline of the segmented images, was applied. An example of the final centerline of the baseline and instrumented canals is depicted in Figure 1C. The XY coordinates of each skeleton were exported to a spreadsheet, and the difference between each XY coordinate pair for the baseline and the instrumented skeleton images was calculated using the following formula: D = √[(xb − xi)² + (yb − yi)²], where xb and yb are the coordinates for the baseline canal, and xi and yi are the coordinates for the instrumented canal. Transportation measurements were obtained by converting the values obtained to millimeters (mm) with the aid of the microscope magnification scale. Transportation values were then averaged for the whole canal or for two independent regions (the straight and curved portions), as seen in Figure 2A, which shows the artificial canal image and the regions analyzed. Statistical analysis The two canal portions generated a large number of deviation values (straight = 26,360; curved = 33,600), corresponding to each pixel evaluated. Each pixel was considered a unit for statistical analysis purposes. Considering the data size, a bell-shaped distribution was assumed, and a univariate (two-way) analysis of variance procedure, with a cut-off significance level of α = 5%, was selected, considering the instrumentation systems and the root canal portion as independent variables and canal transportation (in mm) as the dependent variable. Tukey's Honestly Significant Difference test was used for pair-wise comparisons.
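Under the method just described, transportation at each canal position is the distance between corresponding centerline points before and after instrumentation. The following is a compact sketch of the same idea, assuming the two binary canal masks are already registered; we pair skeleton pixels by nearest neighbor for simplicity, whereas the published method pairs exported XY coordinates:

```python
import numpy as np
from skimage.morphology import skeletonize

def canal_transportation_mm(baseline_mask, instrumented_mask, mm_per_pixel):
    """Per-position deviation between baseline and instrumented centerlines."""
    yb, xb = np.nonzero(skeletonize(baseline_mask))
    yi, xi = np.nonzero(skeletonize(instrumented_mask))
    # For each baseline centerline pixel, distance to the nearest
    # instrumented centerline pixel: D = sqrt((xb-xi)^2 + (yb-yi)^2).
    d = np.sqrt((xb[:, None] - xi[None, :]) ** 2 +
                (yb[:, None] - yi[None, :]) ** 2).min(axis=1)
    return d * mm_per_pixel   # convert pixels to mm via the microscope scale

# Mean transportation over the whole canal, or over straight/curved
# sub-ranges of the centerline, can then be averaged from the returned values.
```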
Results No canal transportation was observed in the control group, confirming the reliability and consistency of the method. Instrumentation systems significantly influenced canal transportation (p < 0.001). Considering the overall canal length, R (0.061 ± 0.049) induced significantly lower canal transportation compared with WO (0.063 ± 0.060), PTN (0.072 ± 0.062) and PTU (0.082 ± 0.066) (p < 0.001). Canal transportation was more severe in the curved canal portions (0.091 ± 0.066) than in the straight portions (0.042 ± 0.037) (p < 0.001), as seen in Figure 2B. A significant interaction between the instrumentation systems and the root canal portion (p < 0.001) indicated different patterns of effect for the instruments according to the canal level, as follows: in the straight portion, similar canal transportation was observed for WO and R, which was significantly lower than that observed for PTN; in the curved portion, R produced the lowest canal transportation, while WO and PTN produced similar results. PTU exhibited the greatest transportation values in both canal portions (Table). Discussion The present study used a recently described methodology to study transportation in simulated root canals by comparing images registered before and after instrumentation with different systems.11 This method considerably reduces the bias related to a subjective, visually driven or operator-based image superimposition scheme and canal transportation evaluation,2,3,9,10 since it is virtually independent of user input, and also gives information on the whole canal length instead of only preselected slices. Although the two-dimensional approach represents a clear limitation of the method, it is important to point out that current three-dimensional techniques used to evaluate root canal transportation have not yet provided fully quantitative volumetric data,5,13,14,15,16,17 resulting in the evaluation of a limited number of selected slices and manual selection of gravity center points. Simulated artificial canals have already been validated as satisfactory models to study the shaping ability of endodontic instruments.2,3,4,9,10 These models are especially attractive because they fully standardize the canal anatomy. However, there are some limitations. These include the difference in microhardness between manufactured resin and root dentin,9 and the potential side effects created by heat generation during instrumentation, which may soften the resin material and bind the cutting blades of the instruments.2,3,4,9,10,11 For this reason, care should be taken before extrapolating these results directly to a clinical situation. During the investigation of the shaping ability of NiTi systems, it is important to standardize not only the tip size but also the taper of the last file used during root canal preparation. For this reason, this study used R R25 and WO Primary files for the reciprocation systems, while PTN X2 and PTU F2 instruments were used in the multifile systems. Canal preparation was therefore standardized to an ISO 25 tip size with a taper of 0.08 over the first 3 mm for all tested systems, except for PTN, which has a taper of 0.06 over the first 3 mm. Taper and cross-sectional design could explain the better results observed herein in the PTN group, as compared with the PTU and WO systems. The clinical applicability of preparing the root canal with only one instrument is indeed very attractive. Reciprocating motion is known to improve the canal centering ability and to reduce the risk of root canal aberrations.18,19
In this study, the R system showed significantly less canal transportation in relation to the overall canal length. Therefore, the null hypothesis tested was rejected. It is important to point out that the difference between the reciprocating systems (R and WO) was numerically very small (0.002 mm) compared with the numerical difference between the mean transportation values of the reciprocating and multifile rotary systems (0.01-0.02 mm) in relation to the entire canal length. Despite the fact that the R and WO instruments have some similarities, such as the reciprocating motion per se and the same special heat-treated M-wire alloy and tip size,20,21,22 the former showed significantly less transportation in the curved portion of the canal. These results may be explained by their different cross-sectional designs; whereas R has a double-cutting-edge S-shaped geometry, WO has a modified, convex, triangular cross-section with radial lands at the tip and a convex triangular cross-section in the middle portion of the file, with a larger cross-sectional area than R.20 This larger cross-sectional area influences the bending resistance of the instrument,23 making it less flexible and thus increasing the straightening tendency in curved canals. The larger cross-sectional areas of PTN and PTU may also substantiate the differences between these systems and R. In addition, other variables, such as the screw-in effect, which usually occurs with active instruments rotating under continuous rotation,24,25 and the total number of instruments used, may explain the results obtained by PTN and PTU herein. Overall, the PTU system showed the highest mean canal transportation (0.082 mm), which may be explained by its traditional NiTi alloy, which noticeably affects stress-strain distribution patterns and bending ability, making PTU much less flexible. The superior performance of the R system contrasts with some recent studies, which have shown no differences in shaping outcomes compared with other NiTi systems.20,21,22 Çapar et al.20 showed similar canal curvature modifications among the OneShape (Micro-Mega, Besançon, France), PTU, PTN, R, WO and Twisted File Adaptive (SybronEndo, Orange, CA) systems in the mesial root canals of mandibular molars, and Saber et al.21 showed no differences in the overall shaping ability of R and WO in curved root canals using digital radiographs. Bürklein et al.22 have also shown no differences among the R, WO, PTU and Mtwo (VDW) systems in extracted teeth using digital radiography to evaluate their shaping ability. These contradictory results could be attributed mainly to differences in the magnitude of the resolution employed by these studies, which was about 10⁻¹ mm, compared with the present study, in which the resolution was increased to about 10⁻³ mm; this can considerably increase the detectable effect of small differences among the instrumentation systems. Whether the difference achieved herein is of clinical significance is a matter for further debate. Other factors, including instrument design, alloy composition, instrumentation technique and root canal anatomy, are also known to have an impact on canal transportation26 and may account for the present results. Conclusion It can be concluded that PTN produced less canal transportation than the PTU and WO systems; however, R exhibited better centering ability than PTN.
Figure 1. (A) Composite image of the superposition of sound and instrumented canals after image registration; (B) segmented instrumented canal; (C) skeleton of the instrumented canal. Figure 2. (A) Schematic representation of the straight and curved canal regions evaluated in the present study. (B) Mean transportation values in simulated canals for each instrumentation group and canal portion. Table. Mean, standard deviation (SD) and 95% confidence interval (CI) for the interaction between instrumentation systems and canal portion. Different lowercase or capital letters indicate significant differences, as depicted from the 95% CI, at the straight and curved canal portions, respectively. Declaration of Interests: The authors certify that they have no commercial or associative interest that represents a conflict of interest in connection with the manuscript.
3,661
2016-01-01T00:00:00.000
[ "Materials Science", "Medicine" ]
Modification of Metal-Organic Framework-Derived Nanocarbons for Enhanced Capacitive Deionization Performance: A Mini-Review Capacitive deionization (CDI) is a promising electrochemical water treatment technology. The development of new electrode materials with higher performance is key to improving the desalination efficiency of CDI. Carbon nanomaterials derived from metal–organic frameworks (MOFs) have attracted wide attention for their porous nanostructures and large specific surface areas. The desalination capacity and cycling stability of MOF-derived carbons (MOFCs) have been greatly improved by means of morphology control, heteroatom doping, Faradaic material modification, etc. Although progress has been made in improving their CDI performance, many MOFCs are too costly to be applied on a large scale. It remains crucial to develop MOFCs with both high desalination efficiency and low cost. In this review, we summarize three modification methods for MOFCs, namely morphology control, heteroatom doping, and Faradaic material doping, and put forward some constructive advice on how to enhance the desalination performance of MOFCs effectively at low cost. We hope that more efforts can be devoted to the industrialization of MOFCs for CDI. INTRODUCTION With the increasing shortage of water resources worldwide, the exploration of new methods for water treatment has become one of the important ways to solve the problem (Xu et al., 2017b; Sun et al., 2020a,b). Capacitive deionization (CDI) is considered a promising water treatment technology with powerful competitiveness compared with reverse osmosis and electroosmosis owing to its advantages of low energy consumption, environmental friendliness, and low cost (Oren, 2008). It shows excellent performance in the fields of seawater desalination, brackish water desalination, heavy metal ion removal (Hou et al., 2018), and element enrichment. So far, numerous materials (especially carbon materials) have been developed for CDI electrodes, including activated carbon (Wang et al., 2013; Luo et al., 2019), activated carbon nanofiber (ACF) (Wang et al., 2012), carbon aerogel (CA) (Jung et al., 2007), carbon nanotubes (CNT) (Wang et al., 2011), graphene (Xu et al., 2016b; Huang et al., 2019), ordered mesoporous carbons (OMCs) (Xu et al., 2019c), etc. Among them, graphene is undoubtedly the most promising and most studied electrode material for CDI, mainly owing to its large specific area, low cost, and abundance (Li et al., 2012). However, its poor salt adsorption capacity (SAC) limits its further application. The development of CDI needs, first and foremost, low-cost and high-efficiency electrodes (AlMarzooqi et al., 2014). Carbon nanomaterials derived from metal-organic frameworks (MOFs) have attracted wide attention recently (Chaikittisilp et al., 2013; Xu et al., 2017a). Thanks to the porous structures and tailored compositions of the precursors (Yaghi and Li, 1995), MOF-derived carbons (MOFCs) show adjustable pore structures, large specific surface areas, and good conductivity, giving them unparalleled CDI performance. Since Yang et al. demonstrated that carbon derived from IRMOF-1 has potential as a high-performance CDI electrode material (Yang et al., 2014), more and more MOFs have been used for producing CDI electrodes, including the well-known zeolitic imidazolate frameworks (ZIFs) (Liu et al., 2015b; Wang et al., 2017; Gao et al., 2018), Materials Institute Lavoisier (MILs) (Xu et al., 2016a; Wang, K., et al., 2019), and MOF-5 (Chang et al., 2015).
Modifications, such as morphology control, heteroatom doping (Wang et al., 2014; Xu et al., 2015), and Faradaic material doping, have been further studied to construct nanomaterials with more rational structures and compositions. As a result, the SAC and cycling stability of MOFCs have been greatly improved. Nevertheless, a considerable portion of MOFCs are costly due to their complex synthesis and expensive precursors, which limits their application on a large scale. Efficient and low-cost modification of MOFCs still needs to be systematically explored. In this paper, the principle of CDI is given, including its adsorption mechanism and requirements for electrode materials. Thereafter, three common modification methods, in the aspects of morphology control by templates, element doping, and Faradaic material doping, are summarized (Figure 1). Moreover, we put forward some advice on cost control and discuss the future development direction of MOFCs for the desalination industry. THE PRINCIPLE OF CDI A typical CDI cell consists of two electrodes placed in parallel with saline water between them. The electrodes adsorb ions from the saline water when charged and release ions when discharged, so as to desalinate the feed water or regenerate the electrodes. The electrodes can be categorized into non-Faradaic electrodes and Faradaic electrodes according to the ion adsorption mechanism (Chen et al., 2020; Lu et al., 2020). In most carbon-based CDI processes, ions are usually stored in the electric double layers (EDLs) formed within the pores of porous electrodes without the occurrence of Faradaic reactions. For efficient and rapid desalination, electrode materials should therefore meet at least the following requirements: (1) a large specific surface area for ion storage and a suitable pore structure for rapid migration of ions, (2) high conductivity for rapid transfer of electrons within the electrodes, (3) stable electrochemical properties for cycling stability, and (4) good hydrophilicity (Yin et al., 2013; Liu et al., 2015a, 2017). To achieve these aims, morphology control and heteroatom doping have been frequently used. Aside from the commonly used non-Faradaic electrodes, Faradaic electrodes are also utilized to store ions, mainly through Faradaic reactions; these have attracted wide attention for their typically high SAC and cycling stability. Morphology Control With Templates Although MOFCs have high specific surface areas and high porosities, most MOF crystals are discrete solid particles, which can lead to poor electrical conductivity and low accessible surface area (Tang et al., 2016; Xu et al., 2020a). Morphology control with templates, including MOF templates and external templates (e.g., carbon materials, metal compounds, polymers), may be an effective method to optimize the nanostructures and composition of MOFCs (Dang et al., 2017; Xu et al., 2019b). ZIF-8 is a typical subfamily of MOFs that has been widely investigated for CDI application. Liu et al. prepared porous carbon polyhedrons (PCPs) through direct carbonization of ZIF-8, which showed improved desalination performance (with a SAC of 13.86 mg g⁻¹) and stability compared with commercial AC (Liu et al., 2015b). Subsequently, Xu et al. reported a hierarchical porous carbon nanotube (CNT)/PCP hybrid (hCNTs/PCP) fabricated via in situ insertion of CNTs in ZIF-8 with a subsequent pyrolysis process.
Thanks to its novel CNT-inserted-PCP porous structure, high specific surface area, and good electrical conductivity, the resultant hCNTs/PCP exhibited a high SAC of 20.5 mg g⁻¹ and stable cycling performance (Xu et al., 2016d). After that, Xu et al. synthesized integrated MOF tubes by the controlled growth of ZIF-8 nanocrystals on 3D polymeric fibers with subsequent dissolution of the template (Supplementary Figure 1). Afterwards, self-standing nitrogen-doped carbon tubes (NCTs) with an ultrahigh SAC of 56.9 mg g⁻¹ were obtained by thermal conversion of the resulting MOF tubes (Xu et al., 2020b). External templates can tune the morphology of MOFCs effectively; however, their market price might not be acceptable for practical application, and they sometimes require complicated template-removal operations (Dang et al., 2017). More versatile and cheaper templates that effectively control the morphology are needed (Dutta et al., 2016; Xu et al., 2016c). Heteroatom Doping Heteroatom doping is a common modification method for improving the electrochemical performance of carbon materials. Non-metallic elements or metal ions can be evenly doped into MOFCs by simple carbonization of MOF precursors containing the target elements, which contributes to enhancing the comprehensive properties of the carbon materials, including conductivity, hydrophilicity, and stability (Kurak and Anderson, 2009; Zheng et al., 2011; Cheng et al., 2019; Xu et al., 2019a). Gao et al. synthesized nitrogen-doped graphitic carbon polyhedrons (NGCPs) by direct carbonization of ZIF-8. The NGCPs show a maximum SAC of 17.73 mg g⁻¹, a high salt adsorption rate of 4.14 mg g⁻¹ min⁻¹, and good regeneration performance (Gao et al., 2018). Zhang et al. prepared N, P, S co-doped hollow carbon polyhedrons (denoted as ZIF-8@PZS-C) derived from ZIF-8-based core-shell nanocomposites (denoted as ZIF-8@PZS). The resultant ZIF-8@PZS-C displayed improved electrical conductivity, excellent hydrophilicity, and a high SAC of 22.19 mg g⁻¹ (Zhang et al., 2018). Considering the performance fading of conventional carbon materials caused by the formation of H₂O₂ due to the reduction of dissolved oxygen in natural saline water, the introduction of an oxygen reduction mechanism will effectively improve the stability of MOFCs. Xu et al. prepared nitrogen- and iron-doped carbon tubes (3D-FeNC tubes) derived from 3D interconnected MOF tubes (Supplementary Figure 2). Thanks to their well-defined structure and enhanced oxygen reduction ability, the 3D-FeNC tubes achieved both excellent salt removal ability and cycling performance in oxygenated saline water (Xu et al., 2020a). This research reveals that high-performance oxygen reduction catalysts, such as Fe-, N-, and other heteroatom-doped carbon materials (Zhang et al., 2020), can significantly improve the continuous desalination performance of CDI. Heteroatom doping endows MOFCs with higher desalination capacity, a faster adsorption rate, and, more importantly, better stability. Dissolved oxygen, ubiquitous in natural water, will eventually cause the performance fading of carbon materials. By simple doping, the stability of MOFCs can be greatly improved, which contributes to their practical application in the desalination industry. Faradaic Material Doping Even though great progress has been made in improving the CDI performance of MOFCs based on EDLs, further improvement of SAC seems hard to achieve due to the limitation of physical charge adsorption capacity (Suss et al., 2015; Zhao et al., 2019).
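For reference, the SAC and salt adsorption rate figures quoted throughout this review follow from a simple mass balance over the charging step. The following is a worked sketch with invented numbers, reflecting standard CDI practice rather than data from any cited study:

```python
def salt_adsorption_capacity(c0_mg_L, ct_mg_L, volume_L, electrode_mass_g):
    """SAC in mg of salt removed per gram of total electrode material."""
    return (c0_mg_L - ct_mg_L) * volume_L / electrode_mass_g

def average_salt_adsorption_rate(sac_mg_g, time_min):
    """ASAR in mg g^-1 min^-1 over the charging time."""
    return sac_mg_g / time_min

# Hypothetical example: 50 mL of 500 mg/L NaCl reduced to 400 mg/L
# by 0.36 g of electrode material over a 30-min charging cycle.
sac = salt_adsorption_capacity(500, 400, 0.050, 0.36)   # ~13.9 mg g^-1
print(sac, average_salt_adsorption_rate(sac, 30))
```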
Inspired by the booming field of energy storage, including sodium-ion batteries and supercapacitors, Faradaic materials have been investigated for CDI and proved to be promising candidates with high SAC and cycling stability (Tang, W., et al., 2019). Widely studied Faradaic materials include transition metal oxides (e.g., MnO2, TiO2, Na4Ti9O20), Prussian blue analogs, polyanionic phosphates [e.g., FePO4, NaTi2(PO4)3, Na3V2(PO4)3], conducting polymers (e.g., polypyrrole, polyaniline), MXenes, transition metal dichalcogenides, and so on (Qin et al., 2019, 2020; Yu et al., 2019). Yang et al. prepared hierarchically porous carbon-coated zirconium oxide nanocubes (HCZ) derived from a metal-organic framework (Zr-UiO-66) for CDI electrodes. The asymmetrical cell composed of an HCZ negative electrode and an AC positive electrode showed a remarkable SAC of 55.17 mg g−1 in 250 mg L−1 aqueous sodium chloride solution at 1.4 V (Yang and Luo, 2019). Ding et al. reported a titanium dioxide/porous carbon composite (TiO2@PC) derived from MIL-125 (Ti) for membrane CDI. A synergy of high pseudocapacitance and good oxidation resistance endows the anatase TiO2@PC (annealed at 600 °C) with an improved SAC of 46.7 mg g−1 at 10 mA g−1 and stable cycling performance over 50 cycles. A MIL-125 (Ti)-derived NaTi2(PO4)3/carbon (NTP/C) composite has also been reported as an electrode material for hybrid CDI (HCDI; Supplementary Figure 3). Due to the unique porous structure, high specific surface area, and good electrical conductivity of NTP/C, the HCDI system with an NTP/C composite cathode and an AC anode exhibited excellent desalination performance, with a high SAC of 167.4 mg g−1 and good desalination ability. These experimental results reveal that preparing MOF-derived Faradaic electrodes with good conductivity is an effective strategy to attain high CDI performance. To develop efficient, cheap, and safe Faradaic MOFC-based electrodes, more synthetic strategies for carbon materials combining MOFs with Faradaic materials need to be investigated. As discussed above, most MOFCs with high CDI performance involve controlled morphology, heteroatom doping, or Faradaic material doping. These modification methods are applied comprehensively in the synthesis of MOFCs to optimize the nanostructure and composition of the carbon materials, so as to achieve faster adsorption rates, higher SAC, and better cycling stability. The cases mentioned above, with synthesis procedures and CDI performances, are listed in Table 1.

CONCLUSIONS AND OUTLOOK

As a potential water treatment technology, CDI is progressively making its way into the desalination industry. In this process, the first and most important step is the development of high-efficiency and low-cost electrode materials. Nanocarbon materials derived from metal-organic frameworks have become one of the most promising candidates owing to their highly designable precursors. Thanks to the application of creative modification methods, breakthroughs have been made in the CDI performance of MOFCs. Nevertheless, the promotion of desalination efficiency is merely the first step toward industrialization; the next will be cost control. In general, the synthesis of MOFCs should rely on cheap raw materials and simple synthetic routes. For example, MILs composed of metal ions such as iron, titanium, and manganese, and organic ligands such as fumaric acid and terephthalic acid, may be an ideal choice due to their low cost, safety, and high specific surface area.
In terms of morphology control, methods other than the template strategies mentioned above need to be investigated. Nitrogen doping is a common modification method for MOFCs, with the main consideration being the nitrogen source. In addition to nitrogen-containing MOFs, cheap external nitrogen sources such as urea and ammonia are also worth considering. In the aspect of MOFC-based Faradaic electrodes, transition metal oxides (Kai et al., 2017) and polyanionic phosphates, which combine low price, high salt adsorption ability, and environmental friendliness, hold great potential. In summary, MOFCs are among the most promising electrode materials for CDI. The further development targets are higher SAC, faster desalination rate, higher cycling stability, environmental friendliness, and lower cost. Considering that recent studies have revealed the outstanding performance of hybrid CDI with Faradaic negative electrodes, Faradaic material doping might become a mainstream modification method. Moreover, since current CDI positive electrode materials are still carbon materials, it is vital to improve the non-Faradaic desalination performance of MOFCs through morphology control and element doping. It can be expected that combining the Faradaic and non-Faradaic mechanisms through appropriate modification of MOFCs will give CDI better desalination performance.

AUTHOR CONTRIBUTIONS

PL, ML, XS, and YW: proposal and writing. TY and XX: revising and guidance. All authors contributed to the article and approved the submitted version.
Circadian disruption induced by light-at-night accelerates aging and promotes tumorigenesis in young but not in old rats. We evaluated the effect of exposure to constant light, started at the age of 1 month or at the age of 14 months, on survival, life span, tumorigenesis, and the age-related dynamics of antioxidant enzyme activity in various organs, in comparison with rats maintained on the standard (12:12) light/dark regimen. We found that exposure to constant light started at the age of 1 month accelerated spontaneous tumorigenesis and shortened life span in both male and female rats compared with the standard regimen. At the same time, exposure to constant light started at the age of 14 months failed to influence the survival of male and female rats. While delaying tumors in males, constant light accelerated tumors in females. We conclude that circadian disruption induced by light-at-night started at the age of 1 month accelerates aging and promotes tumorigenesis in rats, but fails to affect survival when started at the age of 14 months.

INTRODUCTION

[...] Exposure to constant illumination started at the period of natural switching-off of reproductive function has no effect, or a protective effect, on the antioxidant defense system, survival, and tumorigenesis in rats.

Effect of light/dark regimen on life span in rats

In male rats, exposure to the LL regimen started at the age of 1 month failed to significantly influence the mean life span of all rats as well as that of the last 10% of survivors, whereas exposure to the LL regimen started at the age of 14 months increased the mean life span by 6.7% (p>0.05), increased the mean life span of the last 10% of survivors by 9.4% (p<0.01), and increased the maximum life span of male rats by 3 months (Table 1). At the same time, the rate of population aging (parameter α in the Gompertz equation) was slightly decreased in the LL-1 and LL-14 groups compared with the LD group males. The survival curve for males of the LL-1 group was significantly shifted to the left in comparison with the survival curve for the LD group (Figure 1A), whereas that of the LL-14 group was not (Figure 1A). In female rats, exposure to the LL regimen significantly decreased the mean life span (by 22.0%) and the population aging rate (by 27.0%) when started at the age of 1 month, and failed to change both the mean life span and the aging rate when started at the age of 14 months (Table 2). The survival curve for females of the LL-1 group was significantly shifted to the left in comparison with the survival curve for the LD group, whereas that of the LL-14 group was not (Figure 1D). According to the log-rank test, the conditional life span distributions of rats (given the animals survived to the age of 14 months) kept under the alternating day/night regimen and the two constant light regimens starting from 1 and 14 months of age differ insignificantly for males (p-value 1.58E-01, χ2=3.7 on 2 df) and significantly for females (p-value 6.31E-04, χ2=14.7 on 2 df). The difference between the two groups of male rats kept under constant light regimens (LL-1 and LL-14) does not reach conventional significance (p-value 1.02E-01, χ2=2.7 on 1 df). The life span distribution of females kept under constant light from the age of 1 month differs significantly from the control LD group (p-value 1.39E-03, χ2=10.2 on 1 df) and from the group subjected to constant light from the 14th month (p-value 1.26E-03, χ2=10.4 on 1 df).
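The survival comparisons above rely on the log-rank (Mantel-Cox) statistic. As a minimal illustration of how such a χ2 on 1 df is obtained for two groups, the Python sketch below implements the standard log-rank computation with NumPy; the toy survival times are invented for demonstration and do not reproduce the study's data.

```python
import numpy as np

def logrank_two_groups(time_a, event_a, time_b, event_b):
    """Two-sample log-rank test: chi2 = (O_a - E_a)^2 / Var, 1 df.
    time_*: observation times; event_*: 1 = death observed, 0 = censored."""
    times = np.unique(np.concatenate([time_a[event_a == 1], time_b[event_b == 1]]))
    o_minus_e, var = 0.0, 0.0
    for t in times:
        n_a = np.sum(time_a >= t)                      # at risk in group A
        n_b = np.sum(time_b >= t)                      # at risk in group B
        d_a = np.sum((time_a == t) & (event_a == 1))   # deaths in A at t
        d_b = np.sum((time_b == t) & (event_b == 1))   # deaths in B at t
        n, d = n_a + n_b, d_a + d_b
        o_minus_e += d_a - d * n_a / n                 # observed minus expected
        if n > 1:
            var += d * (n_a / n) * (n_b / n) * (n - d) / (n - 1)
    return o_minus_e**2 / var                          # ~ chi2 with 1 df

rng = np.random.default_rng(0)
t1 = rng.exponential(20, 50)   # toy life spans, months, group 1
t2 = rng.exponential(28, 50)   # toy life spans, months, group 2
chi2 = logrank_two_groups(t1, np.ones(50, int), t2, np.ones(50, int))
print(f"log-rank chi2 = {chi2:.2f}")
```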
According to the estimated parameters of the Cox regression model, in males constant light started at an older age decreases the relative risk of death compared with the group kept under the same regimen from earlier in life. Among females, the LL-1 regimen increases the risk of death compared with the control group, and the LL-14 regimen decreases the risk of death compared with the LL-1 group (Table 3).

Effect of light/dark regimen on spontaneous tumorigenesis in rats

Pathomorphological analysis showed that benign tumors were most frequent in all groups of males and females. A significant part of them was represented by testicular Leydig cell tumors in males and mammary fibroadenomas in females (Tables 4 and 5). Among malignant tumors, lymphomas were most common; however, some cases of hepatocellular carcinoma, soft tissue sarcomas, and sporadic carcinomas of other organs were detected. Exposure to the LL-1 regimen accelerated spontaneous tumor development compared with the LD group and did not influence the total tumor incidence in either male or female rats (Tables 4 and 5; Figures 1B and 1E). The first tumor in males of the LL-1 group was detected 5 months earlier than the first tumor in the LD group. Exposure to the LL-14 regimen did not influence the incidence of spontaneous tumors in male or female rats. According to the log-rank test, the difference in life span distributions among all three groups of male rats with fatal and non-fatal tumors is significant (p-value 4.85E-02, χ2=6.1 on 2 df). The pair-wise difference between the LD and LL-1 groups is insignificant; between LD and LL-14 it is significant (p-value 3.32E-02, χ2=4.5 on 1 df); between LL-1 and LL-14 it can be considered marginally significant (p-value 1.10E-01, χ2=2.6 on 1 df). There was no significant difference in life span distributions among the female tumor-bearing rats. According to the log-rank test, there is no significant difference in life span distributions among male rats with fatal tumors subjected to the different regimens. In females with fatal tumors the difference is significant among all three groups of rats (p-value 8.30E-03, χ2=9.6 on 2 df), between the LD and LL-1 groups (p-value 4.50E-03, χ2=8.1 on 1 df), and between the LD and LL-14 groups (p-value 1.91E-02, χ2=5.5 on 1 df).

Effect of light/dark regimen on free radical processes in rats

Age-related changes in free radical processes can generally be described as a desynchronization of antioxidant enzyme activity and a decreased antioxidant defense in the majority of organs. The changes in the functional activity of the pineal gland induced by constant illumination affect both the dynamics and the level of enzymatic activities. The most significant effects of the age at which exposure to constant light started on differences in enzymatic activities were detected in the liver. Thus, the activity of catalase revealed seasonal cyclicity in rats of the LD and LL-1 groups. In the LL-14 group, the activity of both catalase and SOD was cyclic and reached a higher level compared with the corresponding parameters in the LL-1 group. Maximum levels of enzymatic activity were detected at the age of 24 months, whereas in the LD and LL-1 groups they occurred at the ages of 12 and 18 months (Figures 2 and 3). There was an age-related decrease in catalase activity in the LD and LL-1 groups, but not in the LL-14 group. There were seasonal changes in the dynamics of antioxidant enzyme activity.
Seasonal variations in SOD activity were observed in the heart, lungs, and skeletal muscles, whereas seasonal variations in catalase activity were observed in the kidney and skeletal muscles. An age-related increase in catalase activity was observed in the skeletal muscles of rats in all three groups. The activity of SOD in the lungs and spleen of rats in the LL-14 group revealed a U-shaped pattern: it decreased at the age of 24 months and increased at the age of 30 months. In the LL-1 group, the decrease in SOD activity in the lungs and spleen was observed at the age of 12 months (Figures 2 and 3).

DISCUSSION

Our present data have shown that life-long maintenance of male and female rats on the LL regimen started at the age of 1 month accelerated aging, decreased survival, and promoted spontaneous tumorigenesis, whereas exposure to constant illumination started at the age of 14 months failed to reduce life span. Moreover, it seems that the LL-14 regimen had a rather protective effect on survival and delayed the age-related decrease in the activity of the antioxidant enzymes SOD and catalase. Experiments in female rodents have presented significant evidence that exposure to constant illumination (24 hours per day) leads to disturbances in estrus function (persistent estrus syndrome, anovulation) [16-18] and spontaneous tumor development [1,17,19,20]. In all these studies, exposure to constant illumination was started at young adult age. There is evidence that exposure to light at night inhibits pineal production and secretion of melatonin, the key pineal hormone [5,21,22]. It is worthy of note that old rodents are more susceptible to modifications of the photoperiod than young ones [23]. In postmenopausal women, light at night suppressed the serum melatonin level to a higher degree than in young cycling women. Exposure to constant illumination increases lipid peroxidation in tissues and decreases both the total antioxidant activity and SOD activity, whereas treatment with melatonin inhibits lipid peroxidation, in the brain particularly [19,24-27]. Pierpaoli and Bulian [28] surgically pinealectomized BALB/c mice at the ages of 3, 5, 7, 9, 14, and 18 months and evaluated their life span. The results showed that, while pinealectomy at the age of 3 or 5 months promoted acceleration of aging, no relevant effect of pinealectomy was observed when mice were pinealectomized at the age of 7 or 9 months. A remarkable life extension was observed when mice were pinealectomized at the age of 14 months. No effect was observed when the mice were pinealectomized at 18 months of age. The same aging-promoting or aging-delaying effects were confirmed in the hematological and hormonal-metabolic values measured. Evidence from the blood measurements showed that removal of the pineal gland in mice at the age of 14 months resulted in the maintenance of more juvenile hormonal and metabolic patterns at 4 and 8 months after pinealectomy [28]. On the contrary, a deleterious effect of pinealectomy was observed in mice subjected to the surgery at the age of 3 or 5 months. The authors suggest that the age of 14 months is the time when the pineal gland has accomplished its "aging program" and prevention of and/or recovery from aging becomes impossible. Our data on the effect of "physiological pinealectomy" induced by exposure to constant illumination started at the age of 1 or 14 months on survival are in accordance with the observations of Pierpaoli and Bulian [28].
The results of our experiments suggest that people of perimenopausal age could be less susceptible to the hazardous effect of constant illumination. This conclusion is not in contradiction with available data on age-related differences in susceptibility to carcinogenic agents in some tissues, which were discussed earlier [29-31].

MATERIAL AND METHODS

Two hundred sixty-seven male and 135 female outbred LIO rats [32] were born during the first half of May 2003. At the age of 25 days they were randomly subdivided into 4 groups (males and females separately) and kept under 2 different light/dark regimens: 1) standard alternating regimen (LD), 12 hours light (750 lux): 12 hours dark; 2) constant light regimen (LL), 24 hours light (750 lux). At the age of 14 months, part of the surviving rats kept under the LD regimen were moved into the room with the constant light regimen (LL). Thus, there were 3 final groups: 1) LD; 2) LL-1, under constant light since the age of 1 month; 3) LL-14, under constant light since the age of 14 months. Only rats in each group that survived to the age of 14 months were included in the protocols for calculations. The full data on survival and tumorigenesis in control LD rats and in rats exposed to LL since the age of 1 month have been presented elsewhere [15]. Some animals were sacrificed by decapitation, and the appropriate tissues (liver, kidney, heart, lung, spleen, and a skeletal muscle) were dissected, weighed, and kept frozen at -25°C before the analyses were carried out. Tissue samples of the LD and LL-1 groups were collected at the ages of 6, 12, 18, and 24 months, and of the LL-14 group at 14, 18, 24, and 30 months. Prior to enzyme determinations, thawed tissue samples were homogenized in 20 volumes of ice-cold 50 mM phosphate buffer (pH 7.4) and centrifuged at 6000 g for 15 min at 5°C. The supernatant fraction was used for antioxidant enzyme determinations. All animals were kept in standard polypropylene cages at 21-23°C and given ad libitum standard laboratory chow [33] and tap water. The study was carried out according to the recommendations of the Committee on Animal Research of Petrozavodsk State University on the humane treatment of animals. Total SOD activity was measured using the epinephrine-adrenochrome reaction, followed kinetically at 480 nm [34]. One unit of SOD was defined as the amount of enzyme required for 50% inhibition of the spontaneous epinephrine-adrenochrome transformation. Catalase activity was measured by the method of Beers and Sizer [35], following the decrease in the absorbance of hydrogen peroxide at 240 nm caused by its decomposition by catalase. Catalase activity was defined as the amount of hydrogen peroxide (in μmol) decomposed per 1 g of tissue per minute. All other rats were allowed to live until natural death and were autopsied. Tumors, as well as tissues and organs with suspected tumor development, were excised and fixed in 10% neutral formalin. After routine histological processing the tissues were embedded in paraffin. Histological sections 5-7 μm thick were stained with hematoxylin and eosin and examined microscopically. Tumors were classified as fatal or non-fatal and morphologically according to the IARC recommendations [36,37]. Experimental results were statistically processed by the methods of variation statistics and multifactor analysis of variance (MANOVA) using the STATGRAPH statistical program kit.
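For the catalase assay just described, the activity follows from the decay of H2O2 absorbance at 240 nm via the Beer-Lambert law. The Python sketch below shows one way to convert absorbance readings into μmol H2O2 decomposed per gram of tissue per minute; the extinction coefficient of 43.6 M−1 cm−1 for H2O2 at 240 nm is the commonly used literature value, and the sample numbers are ours, not from the study.

```python
# Hedged sketch: catalase activity from A240 decay (Beers & Sizer-type assay).
E_H2O2 = 43.6    # M^-1 cm^-1, extinction coefficient of H2O2 at 240 nm
PATH_CM = 1.0    # cuvette path length, cm

def catalase_activity(a240_start, a240_end, minutes,
                      assay_volume_ml, tissue_g_in_assay):
    """umol H2O2 decomposed per g tissue per minute."""
    delta_conc_M = (a240_start - a240_end) / (E_H2O2 * PATH_CM)
    umol_decomposed = delta_conc_M * 1e6 * assay_volume_ml / 1000.0
    return umol_decomposed / (tissue_g_in_assay * minutes)

# Example: A240 falls 0.450 -> 0.250 over 1 min in a 3 mL assay containing
# supernatant equivalent to 0.005 g tissue (hypothetical values).
print(catalase_activity(0.450, 0.250, 1.0, 3.0, 0.005))  # ~2752 umol/(g*min)
```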
The significance of the discrepancies was assessed by Student's t-criterion, Fisher's exact method, the χ2 test, and the non-parametric Wilcoxon-Mann-Whitney test. The Student-Newman-Keuls method was used for all pairwise multiple comparisons. The coefficient of correlation was estimated by the Spearman method [38]. Differences in tumor incidence were evaluated by the Mantel-Haenszel log-rank test. Parameters of the Gompertz model were estimated using the maximum likelihood method, a non-linear optimization procedure [39], and self-written code in Matlab; confidence intervals for the parameters were obtained using the bootstrap method [40]. For the experimental groups, the Cox regression model [41] was used to estimate the relative risk of death and tumor development under the treatment compared with the control group: h(t, z) = h0(t) exp(zβ), where h(t, z) and h0(t) denote the conditional hazard and baseline hazard rates, respectively, β is the unknown parameter for the treatment group, and z is an indicator variable taking values 0 and 1 for the control and treatment groups.
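The Gompertz model referenced above has hazard h(t) = R0 exp(αt) and survival S(t) = exp((R0/α)(1 − e^(αt))). As an illustration of the maximum likelihood estimation mentioned in the methods (the authors used self-written Matlab code; this Python/SciPy sketch is ours), the parameters can be recovered from fully observed death times as follows:

```python
import numpy as np
from scipy.optimize import minimize

def gompertz_neg_loglik(params, t):
    """Negative log-likelihood of Gompertz death times (alpha assumed > 0).
    Hazard h(t) = R0*exp(a*t); density f(t) = h(t)*S(t)."""
    log_r0, a = params
    r0 = np.exp(log_r0)                      # parameterize so R0 stays positive
    log_h = log_r0 + a * t                   # log hazard
    log_S = (r0 / a) * (1.0 - np.exp(a * t)) # log survival
    return -np.sum(log_h + log_S)

rng = np.random.default_rng(1)
# Simulate Gompertz life spans by inverse transform (hypothetical cohort)
r0_true, a_true = 0.002, 0.15
u = rng.uniform(size=300)
t = np.log(1.0 - (a_true / r0_true) * np.log(u)) / a_true

fit = minimize(gompertz_neg_loglik, x0=[np.log(0.01), 0.1], args=(t,),
               method="Nelder-Mead")
print("R0 =", np.exp(fit.x[0]), "alpha =", fit.x[1])  # close to 0.002, 0.15
```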
Statistical research and modeling of network traffic. The self-similarity properties of the considered traffic were checked on different time scales obtained from the available daily traffic data. An estimate of the tail heaviness of the distribution of self-similar traffic was obtained by constructing a regression line for the complementary distribution function on a logarithmic scale. The self-similarity parameter value, determined from the heaviness of the distribution tail, made it possible to confirm the assumption of traffic self-similarity. A review of models simulating real network traffic with a self-similar structure was made, and tools for generating artificial traffic in accordance with the considered models were implemented. The artificial network traffic generators were compared according to the least squares criterion for approximating the artificial traffic point values by the approximating function of the real traffic. Qualitative assessments of the generators, in the form of software implementation complexity, were also taken into account, although such assessments can be subjective. The comparative characteristics allow one to choose the generators that most faithfully simulate real network traffic. The proposed sequence of methods for studying network traffic properties is necessary to understand its nature and to develop appropriate models that simulate real network traffic.

Introduction

Models for estimating the characteristics of network traffic servicing remain actual scientific tasks. Reliable network traffic estimates are necessary for planning the development of telecommunications networks, choosing differentiated service policies, and sizing the computing resources that guarantee the required quality of service at the corresponding network load [1,2]. The heightened interest in studying the nature of network traffic is explained by the results of studies showing the presence of long-term dependence in the traffic, that is, self-similarity of the process. These changes in the traffic structure are associated with the implementation of the single multiservice network concept, involving the integration of voice, data, and multimedia [3,4]. To date, the theory of self-similar stochastic processes is not as well developed as the theory of Poisson processes. Given the known conclusions about network traffic self-similarity, the actual tasks are methods for its study and the development of tools for generating artificial traffic that adequately reflects real heterogeneous network traffic [5].

Properties and Characteristics of Self-Similar Processes

Self-similarity describes a phenomenon in which some statistical characteristics of a process are preserved when time is scaled. When averaging over the time scale in a self-similar process, there is no rapid "smoothing"; that is, the tendency to bursts persists. The properties that characterize the self-similarity of a process are slowly decaying variance, long-term dependence, and the presence of a heavy-tailed distribution [6]. The property of slowly decaying variance means that the variance of the sample mean decays more slowly than the inverse of the sample size, that is, Var(X̄n) ≈ σ² n^(2H−2) as n → ∞, where σ² is the variance of the process X(t), n is the sample size, and H is the Hurst parameter (self-similarity parameter), 0.5 < H < 1.
Note that for traditional random processes the variance of the sample mean decreases inversely with the sample size: Var(X̄n) = σ²/n. The presence of long-term dependence means that a self-similar process has a hyperbolically decaying correlation function, r(k) ≈ k^(−(2−2H)) L(k) as k → ∞, where L(k) is a slowly varying function at infinity. A typical example of a heavy-tailed law is the Pareto distribution, with distribution function F(x) = 1 − (b/x)^α for x ≥ b. The heavy-tail property means that a random variable X has a heavy-tailed distribution if P[X > x] ≈ c x^(−α) as x → ∞, where 0 < α < 2 is the shape parameter of the distribution and c is a positive constant.

Methods for Investigating the Self-Similar Process

There are a number of techniques that allow one to verify the self-similarity of the process under investigation. The self-similarity effect can be observed on graphs illustrating the change of time scale, in which the structure of the series obtained by averaging groups of elements remains the same as the structure of the original series. This fact is a prerequisite for the assumption of a self-similar structure of the process under consideration and the basis for further analysis. Next, it is necessary to estimate the heaviness of the distribution tail, that is, the parameter α. To assess α, one constructs a graph of the complementary distribution function F̄(x) = 1 − F(x) on a log-log scale; the slope of the regression line fitted to log F̄(x) against log x gives −α. The properties of heavy-tailed distributions are as follows:
─ if α < 2, the distribution has infinite variance;
─ if α ≤ 1, the distribution also has an infinite mean;
─ as α decreases, an arbitrarily large portion of the probability mass lies in the tail of the distribution.
In fact, a heavy tail means the presence of infinite variance; in other words, the random variable can take very large values, albeit with very small probability. The derived regression equation y = −1.29x + 0.5067 shows that α equals 1.29 ∈ (0; 2), from which it follows that the traffic distribution has the heavy-tail property. Knowing α, one can find the self-similarity parameter: H = (3 − 1.29)/2 = 0.855, which also confirms the self-similarity of the process under consideration, since H ∈ (0.5; 1). It is known that the Hurst parameter is a measure of persistence, the tendency of a process to follow trends. Using the example of a reservoir filled by an incoming flow and drained by an outgoing flow, one can show how the Hurst parameter was originally calculated: for the water level to be stationary, the output flow must equal the average input, so that the reservoir never empties or overflows (Fig. 9). Here R is the difference between the maximum and minimum values of the cumulative deviation S over N time units; thus, R is the value that best describes the variability of x. H is related to the normalized (rescaled) range R/S, where R is the range of the traffic over the entire time series and S is the standard deviation.

Simulations of Self-Similar Traffic

The traditional analysis of telecommunication systems, based on the assumption of Poisson flow, cannot accurately estimate the amount of computing resources and the system performance under bursty traffic [7].
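Before turning to the simulation models, the tail-estimation procedure of the previous section can be illustrated in a few lines of Python (our sketch, not the authors' code): the slope of a least-squares line fitted to the empirical complementary distribution function on a log-log scale estimates −α, and H then follows as (3 − α)/2.

```python
import numpy as np

def tail_index_and_hurst(x, tail_fraction=0.1):
    """Estimate the tail index alpha from the log-log slope of the empirical
    CCDF over the largest `tail_fraction` of observations; H = (3 - alpha)/2."""
    x = np.sort(np.asarray(x))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / (len(x) + 1.0)  # P[X > x]
    k = int(len(x) * (1.0 - tail_fraction))                 # start of the tail
    slope, _ = np.polyfit(np.log(x[k:]), np.log(ccdf[k:]), 1)
    alpha = -slope
    return alpha, (3.0 - alpha) / 2.0

rng = np.random.default_rng(7)
sample = rng.pareto(1.3, size=20000) + 1.0   # heavy-tailed data, alpha = 1.3
alpha, H = tail_index_and_hurst(sample)
print(f"alpha ~ {alpha:.2f}, H ~ {H:.2f}")   # expect roughly 1.3 and 0.85
```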
Tools are needed for generating artificial traffic that corresponds to the properties of real network traffic and that can be used when modeling the processes of transmission, storage, and processing of network traffic. There are only a few models designed to simulate self-similar traffic. This work implements tools for generating artificial traffic based on the models listed in Table 2 [8]. The comparative characteristics allow one to choose the generators that mimic real network traffic as plausibly as possible [9]. For the comparison, the least squares criterion Y = Σi (F(xi) − yi)² is used, where F(xi) are the values of the approximating function of the real traffic at the points xi of the artificial traffic, and yi is the specified array of source traffic values at the points xi. Every 60th minute is taken as a point, giving 24 points over 24 hours. One of the compared models, fractional Brownian motion, is defined by the integral representation B_H(t) = (1/Γ(H + 1/2)) ∫ K(t − t′) dB(t′), where Γ is the gamma function, H is the Hurst parameter, dB(t′) denotes the independent random displacements of the Brownian particle at time t′, and K(t − t′) is the memory function of the system. In addition to the quantitative assessment, Table 2 also provides qualitative assessments in the form of the laboriousness of implementing a software generator (the number of tunable parameters or the need for training). This is a subjective assessment that is difficult to quantify, for example, as the time spent programming the generator or the complexity of the algorithm, since everything depends on the size of the codebase, the time spent setting up each parameter or set of parameters, programming knowledge, and other factors. For example, although the neural network model showed the best result according to the Y criterion, most of the time was spent choosing the neural network architecture and then tuning the model (3 days), whereas the fractional Gaussian noise model was implemented in 40 min, albeit with a Y criterion 17.5 times greater than that of the neural network model. Moreover, to simulate traffic with a different Hurst parameter, the procedure of choosing a neural network architecture and training it would have to be repeated. Analysis of the above models allows one to concentrate on the last three presented in Table 2 and to use them in solving problems of modeling telecommunication systems and networks, together with the resulting global problems: planning the development of telecommunication networks, implementing differentiated services, and evaluating the characteristics of the computing resources that guarantee the required quality of service for the corresponding traffic [10-12].

Conclusion

The article presents the results of traffic research aimed at identifying its self-similarity property. The assumption of a self-similar traffic structure is based on the consideration of the available data on different time scales. Using the method of constructing the complementary distribution function on a logarithmic scale, the tail heaviness of the distribution and the self-similarity parameter are estimated. The results obtained allowed us to verify the self-similarity properties of the traffic in question according to the definition and thus confirm the traffic self-similarity assumption. Such studies are necessary for understanding network traffic behavior and for developing models that simulate the real traffic entering the network. A review of existing models simulating self-similar traffic was performed. It is assumed that model adjustment can be performed according to the Hurst parameter if recorded traces of real network traffic are available.
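As a closing illustration of the fractional Gaussian noise model discussed in the comparison above (a sketch of ours, not the authors' implementation), fGn with a prescribed Hurst parameter can be synthesized with exact covariance by circulant embedding of its autocovariance (the Davies-Harte method); partial sums of the output give fractional Brownian motion.

```python
import numpy as np

def fgn(n, hurst, rng):
    """Fractional Gaussian noise via circulant embedding (Davies-Harte)."""
    k = np.arange(n + 1)
    # autocovariance of fGn: g(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * ((k + 1) ** (2 * hurst) - 2 * k ** (2 * hurst)
                   + np.abs(k - 1) ** (2 * hurst))
    row = np.concatenate([gamma, gamma[-2:0:-1]])   # circulant first row, len 2n
    eig = np.fft.fft(row).real                      # eigenvalues of the circulant
    if np.any(eig < 0):
        raise ValueError("embedding not nonnegative definite for this n/H")
    m = len(row)
    w = np.sqrt(eig / m) * (rng.normal(size=m) + 1j * rng.normal(size=m))
    return np.fft.fft(w).real[:n]                   # unit-variance fGn sample

rng = np.random.default_rng(42)
noise = fgn(4096, hurst=0.85, rng=rng)
print(noise.mean(), noise.std())        # ~0 and ~1 by construction
traffic = np.cumsum(noise)              # fractional Brownian motion path
```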
Perspective on the Structural Basis for Human Aldo-Keto Reductase 1B10 Inhibition. Human aldo-keto reductase 1B10 (AKR1B10) is overexpressed in many cancer types and is involved in chemoresistance. This makes AKR1B10 an interesting drug target, and thus many enzyme inhibitors have been investigated. High-resolution crystallographic structures of AKR1B10 with various reversible inhibitors were deeply analyzed and compared to those of analogous complexes with aldose reductase (AR). In both enzymes, the active site includes an anion-binding pocket and, in some cases, inhibitor binding causes the opening of a transient specificity pocket. Different structural conformers were revealed upon inhibitor binding, emphasizing the importance of the highly variable loops, which participate in the transient opening of additional binding subpockets. Two key differences between AKR1B10 and AR were observed regarding the role of the external loops in inhibitor binding. The first corresponds to the alternative conformation of Trp112 (Trp111 in AR). The second concerns loop A mobility, which defines a larger and more loosely packed subpocket in AKR1B10. From this analysis, the general features that a selective AKR1B10 inhibitor should comply with are the following: an anchoring moiety to the anion-binding pocket, keeping Trp112 in its native conformation (AKR1B10-like), and not opening the specificity pocket in AR.

Introduction

Aldo-keto reductases (AKRs) constitute a superfamily of NADP(H)-dependent, monomeric, mostly cytosolic oxidoreductases catalyzing the reduction of carbonyl-containing compounds to their corresponding alcohols. To date, 15 human AKRs have been described, belonging to six different subfamilies: AKR1A, AKR1B, AKR1C, AKR1E, AKR6A, and AKR7A. There are three members of the human AKR1B subfamily, namely AKR1B1 (aldose reductase, AR), AKR1B10 (aldose reductase-like protein-1), and AKR1B15, which share 71% amino acid sequence identity and overlapping substrate specificities for aliphatic and aromatic aldehydes. AR is a ubiquitous enzyme and has been thoroughly investigated because it participates in glucose reduction under hyperglycemia, being involved in the secondary complications of diabetes. This has elicited a long-lasting search for AR inhibitors (ARIs) as antidiabetic drugs. AKR1B10 has a very high catalytic efficiency for all-trans-retinaldehyde and a more specific tissue expression (mostly in the gastrointestinal, GI, tract), although it is overexpressed in several cancer types and skin diseases. AKR1B15 is likely a mitochondrial protein, and its mRNA has been found in placenta, testis, skeletal muscle, and adipose tissue [1-3]. Fortunately, a large number of high-quality crystallographic AKR structures (AKR1B1, 156; AKR1B10, 20) are available from the Protein Data Bank (PDB), some with a resolution higher than 1 Å, and many including ternary complexes with inhibitors. This wealth of structural information forms the basis of the present perspective.

AKR1B10 Inhibition Strategies

Since the 1980s, AR has been deeply studied as a drug target [8,9] because it transforms cytosolic glucose into sorbitol (a reaction that AKR1B10 and AKR1B15 cannot perform [7,10]), though only under hyperglycemia. Despite many positive pre-clinical studies on ARIs, most clinical trial outcomes have been disappointing. The failure of ARIs as therapeutic agents has been mainly attributed to poor pharmacokinetic properties, lack of clinical efficacy, and/or unacceptable side effects.
Most ARIs contain either a cyclic imide group, such as a spirohydantoin or spirosuccinimide group, or an acetic acid moiety. The carboxylic acid-containing inhibitors have lower in vivo efficacy, which has been attributed to the relatively low pKa value of the carboxyl group, causing ionization at physiological pH and an inability to cross cell membranes. Conversely, cyclic imides have higher pKa values and are only partially ionized at physiological pH, allowing them to pass through cell membranes and therefore conferring better pharmacokinetic properties [6,11,12]. Recently, a novel approach using intra-site differential inhibitors against AR has been proposed. These inhibitors may act differentially on AR activity depending on the nature of the substrate, in such a way that they could interfere specifically with the transformation of some substrates while leaving the conversion of other substrates free to occur. This means that the damaging activity of AR (e.g., glucose reduction) could be diminished without compromising the detoxifying role of the enzyme. A few natural AR differential inhibitors from plant extracts have been reported [13-15]. Regarding the effect of ARIs on other enzymes (especially from the AKR superfamily), initially the only cross-inhibition target thoroughly analyzed had been human aldehyde reductase (AKR1A1) [6]. Nevertheless, AKR1A1 presents notable differences with respect to AKR1Bs: (i) it lacks the hyper-reactive active site cysteine (Cys298 in AR), and the Nε of the imidazole ring of the active site histidine interacts with the amide side chain of the nicotinamide ring of NADPH; (ii) its loop C is nine residues longer than that of AKR1Bs, determining a rather distinct substrate specificity and inhibitor selectivity [1,5,6]. As explained above, AKR1B10 is in fact the enzyme closest to AR (sharing 71% amino acid identity), and we and others surmised that the lack of selectivity of ARIs could be a relevant factor contributing to their failure as pharmacological drugs [10-12]. Furthermore, AKR1B10 is now established as a promising cancer target (except for gastric cancers, where it is downregulated) [12,16], and the ubiquitously expressed AR can represent a problematic off-target, given its overall similarity to AKR1B10. Next, we will provide an overview of the different AKR1B10 inhibitor types in the context of the available three-dimensional structures of AKR1B10 deposited in the PDB (Table A1). Note that an exhaustive listing and description of AKR1B10 inhibitors is beyond the scope of this review; we refer the reader to the revisions by Huang et al. [17] and, more recently, Endo et al. [16,18] for further details.

AKR1B10 Reversible Inhibitors

The first AKR1B10 inhibitors described were in fact non-selective ARIs, e.g., tolrestat (Figure 1, [10,19,20]). In general, most of these ARIs belonged to the carboxylic acid type, while most cyclic imide ARIs tested (e.g., fidarestat, Figure 1) were poor AKR1B10 inhibitors [21,22], except for minalrestat [23] (Figure 1). Like ARIs, AKR1B10 inhibitors exploit the hydrophilic nature of the enzyme active site, which contains the anion-binding pocket (ABP). This pocket involves the catalytic residues Tyr49 and His111, the key residue Trp112 (AKR1B10 numbering), and the positively charged nicotinamide moiety of the cofactor NADP+ ([10,11], Figure 2).
Therefore, all AKR1B10 inhibitors present a negatively charged or electronegative moiety that anchors them to the ABP, and they display an uncompetitive inhibition pattern despite binding to the same pocket as substrates [11,22,24]. This behavior is related to the conserved AKR kinetic mechanism, which is strictly ordered, with the cofactor binding first and leaving last: substrates bind with higher affinity to the AKR-NADPH complex, while inhibitors interact better with the AKR-NADP+ complex [1,25,26]. Hence, considering this anchoring moiety, we can broadly divide AKR1B10 inhibitors into two types: carboxylic acid- and non-carboxylic acid-containing inhibitors (hereinafter, CAIs and NCAIs, respectively). Section 3.2 will provide relevant examples and binding insights for each class whose structure in complex with the AKR1B10 holoenzyme (AKR1B10-NADP+) has been solved.

AKR1B10 Covalent Inhibitors

Despite their wide use in medicine (e.g., aspirin, penicillin, acetaminophen), there has been some reluctance to pursue covalent inhibitors in drug discovery until recently, because of off-target binding and potential toxicity. This tendency has reversed upon FDA approval of several covalent drugs [27]. Accordingly, covalent inhibition of AR or other human AKR1s such as AKR1B10 is an unexploited strategy. Both AR and AKR1B10 possess a reactive cysteine (Cys298/Cys299, respectively) in their active site [22,28]. Nevertheless, a thorough understanding of its in vivo role, if any, is lacking. Indeed, Cys298 in AR is highly nucleophilic and can be reversibly or irreversibly modified by different reactive species such as nitric oxide (NO), 4-hydroxynonenal (HNE, Figure 1), or oxidized glutathione (GSH, Figure 1), both in recombinant protein and in ex vivo models [29]. These modifications can reduce or increase the catalytic activity of AR, depending on the modifying moiety, and reduce its susceptibility to some non-covalent inhibitors, while increasing concentrations of the reduced cofactor NADPH protect Cys298 from modification [29,30].
This oxidized form of AR is called "activated AR", versus the "native AR" (with reduced Cys298). Balendiran and colleagues [29] generated the C298S AR mutant, a good surrogate of activated AR, and studied it biophysically. They solved the C298S AR holoenzyme structure (PDB ID 3Q67) and identified that Ser298 makes a hydrogen bond with Tyr209, restricting the flexibility of the mutant in comparison with the native holoenzyme, which has Cys298 and lacks this interaction. We and collaborators have recently generated another useful surrogate of activated AR by means of X-ray irradiation [31]. Its structure (PDB ID 6F8O) displays a similar interaction with Tyr209, and comparison with structures containing ARIs shows that the "locked" residue 298 in activated AR may cause steric hindrance, explaining the reduced inhibition of some ARIs against activated AR. Likewise, Balendiran and colleagues [22,29] observed similar trends of altered activity and reduced inhibitor potency with C299S AKR1B10, while Shen and colleagues probed "native" AKR1B10 with reactive oxygen species (ROS), GSH, and free cysteine, finding effects similar to those observed for AR. Considering that Tyr210 (Tyr209 in AR) is conserved, it seems that AKR1B10 in vivo may also be regulated by the redox state. Last, the crystal structure of the AKR1B10 holoenzyme with epalrestat (PDB ID 4JIH, [32]) presents a sulfenylated Cys299, probably due to the crystallization conditions, further supporting the redox regulation of this residue. This long prelude is necessary to understand the potential and the limitations of such an approach for covalent inhibition of AKR1B10 (and AR). The first covalent inhibitor of AKR1B10 was found by Pérez-Sala's laboratory in 2011 [28]. Using a proteomics approach, they found in mouse fibroblasts that AKR1B3 (a mouse ortholog of AR) and AKR1B8 (a close AKR1B10 mouse homolog, sharing some key features but diverging in others [33,34]) were covalently bound to PGA1-biotin (PGA1-B, Figure 1). Furthermore, they showed that AKR1B10 forms adducts with PGA1-B through Cys299, and that PGA1 inhibited its activity on the antitumoral drug doxorubicin in human lung adenocarcinoma A549 cells, preventing chemoresistance [28]. A follow-up study by the same research group [35] proved that AR also reacts covalently via Cys298 with PGA1 (Figure 1) and showed that, for both AKR1B10 and AR, the adduct could be reversed by high concentrations of GSH. Inhibition assays with recombinant proteins showed IC50 values for PGA1 of 38 and 16 µM, respectively [35,36]. More recently, the Cravatt laboratory found an additional couple of covalent leads, VC59 and VC63, with IC50 of ~1 µM in AKR1B10-transfected cell lysates (Figure 1). They have developed powerful chemical proteomics approaches to map Cys ligandability in mammalian cancer cell lines [37]. In the first work [37], they used a broadly reactive iodoacetamide alkyne (IA-alkyne, Figure 1) in lung cancer cell lines and identified three liganded proteins exclusive to KEAP1-mutant cells (KEAP1 is a negative regulator of the transcription factor NRF2, which in cancer cells induces the expression of metabolic enzymes such as AKR1B10 to restore redox homeostasis). In the second study [38], they developed a second type of broadly reactive (but less unspecific) electrophilic fragments ("scout" fragments, Figure 1) with the same purpose of mapping Cys ligandability.
AKR1B10 was used as a proof of concept, and again Cys299 was identified as a highly reactive Cys with the scout fragments. Next, they screened a panel of ~140 evolved analogues based on the scout fragments, obtaining the two leads mentioned. A common feature of both types of covalent inhibitors is that their discovery involved screening with cell lysates, not living cells [28,38]. Surprisingly, the most potent lead, VC59, did not bind to AKR1B10 in living lung cancer cells [38]. The researchers found that, in cell lysates, increasing concentrations of NADPH prevented the reactivity of Cys299. They argued that in living cells AKR1B10 is fully saturated with NADPH, which is expected according to the literature [12,39]. Balendiran and colleagues [29] observed that binding of C298S AR to NADPH was diminished in comparison with wild-type AR, while NADP+ binding was unaffected. Thus, the polarity of the mutated and "locked" Ser298 side chain is likely to be less compatible with the NADPH complex than with the NADP+ complex, and the unfavourability of such a complex may prevent the reactivity of Cys298 (or Cys299 in AKR1B10). Hence, this warrants further research and consideration of both "native" and "activated" forms in drug discovery campaigns against both enzymes, as well as screening of compounds in living cells under the different possible physiological and pathological redox scenarios.

Potential for AKR1B10 Catalytic Activators

AKR1B10 has a key role in protecting the GI tract from lipid peroxides and reactive aldehydes, and its expression is decreased in GI cancers [2,16]. Thus, finding small-molecule activators of its activity could potentially be beneficial in both precancerous and cancerous lesions of the GI tract in which AKR1B10 downregulation has been observed. Indeed, some inroads into small-molecule enzyme activators have been made through activity-based protein profiling or high-throughput screening, including for aldehyde dehydrogenase 2 [40], glucose-6-phosphate dehydrogenase [41], and the serine hydrolase LYPLAL1 [42]. In this regard, Endo and colleagues reported that various bile acids (Figure 1) activated the catalytic activity of rat AKR1B14 [43]. Through a combination of kinetics, mutagenesis, and structural analyses, they identified the likely mechanism of activation as acceleration of NADP+ dissociation, i.e., the rate-limiting step of the reaction catalyzed by AKR1Bs. This was surprising because most AKR1B and AKR1C enzymes are inhibited by bile acids [44,45]. However, His269 in AKR1B14 (a lysine in AKR1B10 and in most AKR1Bs apart from AKR1B15 [7]) was identified as a key residue for activation. Since the molecular basis for activation in AKR1B14 is well defined, and the differences with AKR1B10 are minimal, it is possible to envisage that a focused library of bile acid derivatives could help find specific AKR1B10 activators.

What We Have Learnt from 3D Structures

The long-standing interest in AR is also reflected in the impressive number of three-dimensional structures of the holoenzyme, by itself and in complex with many inhibitors (https://www.rcsb.org/uniprot/P15121, accessed on 8 December 2021), starting from the crystal structure of pig aldose reductase solved in 1994 [8]. Also of note is the availability of over 30 structures with a resolution of 1 Å or higher, including the record resolution (0.66 Å) for a structure of a macromolecular entity over 25 kDa, the complex of the AR holoenzyme with the carboxylic ARI IDD594 (PDB ID 1US0 and [46]).
Such a level of detail allowed the identification of the protonation states of the residues involved in inhibition and catalysis, and it was later complemented by a joint X-ray/neutron crystallography structure that elucidated the catalysis and inhibition mechanisms of AR in extraordinary detail [47]. As explained for the AKR1B10 inhibitors, the determination of structures of the AKR1B10 holoenzyme, by itself and in complex with inhibitors (20 structures; see https://www.rcsb.org/uniprot/O60218, accessed on 8 December 2021, and Table A1), had to wait a bit more than a decade (PDB ID 1ZUA and [10]), and it provides a fair number of complexes containing CAIs and NCAIs, which will be addressed below.

AKR1B10 Structure: Overview and Specific Features

The first three-dimensional structure solved for AKR1B10, the ternary complex AKR1B10/NADP+/tolrestat (PDB ID 1ZUA, Figure 2), was elucidated in a joint work by the groups of Parés/Farrés and Fita, and it is still the one with the highest resolution (1.25 Å). It illustrates the paradigm of a non-specific ARI binding to AKR1B10, with only positions 301 and 303 differing between AR and AKR1B10 among the residues interacting with the compound [10]. The structure showed the (α/β)8 TIM-barrel topology characteristic of the AKR superfamily, with the NADP+ cofactor bound in the interior of the barrel in an extended conformation (Figure 2A,B). Protruding from the barrel core, loops A (residues 112-136), B (residues 216-227), and C (residues 299-310), the most divergent in AKRs and the ones conferring substrate specificity (Figure 2B), form the "lid" of the active site. Tolrestat interacts, through its carboxylic acid (CA) moiety, with the anion-binding site residues Tyr49, His111 (both, along with Asp44 and Lys78, form the catalytic tetrad), and Trp112, near the positively charged nicotinamide ring of the cofactor (Figure 2C,D). The methoxy-trifluoromethyl-naphthalene moiety of tolrestat is lined by residues at the base of loops A and B (Trp112, Phe116, Phe123, Trp220) and by loop C (Cys299, Val301, Gln303). By comparison with AR, the latter pocket has also been named the "specificity pocket" (SP, Figure 2C). The determination of the structures of the AKR1B10 holoenzyme, alone and in complex with other ARIs and with specific AKR1B10 inhibitors, has allowed the identification of two key differences between AR and AKR1B10 (see Table A1 for detailed information). The first corresponds to the different conformation of Trp112, in comparison to Trp111 in AR, in the holoenzyme by itself and with specific AKR1B10 inhibitors such as UVI2008 (Figure 3E). This conformation, named the "native" or "1B10-like" conformation, is perpendicular to the "flipped" or "AR-like" conformation observed in the case of AKR1B10/NADP+/tolrestat and in complexes with other ARIs. As reported by us and by Hu's laboratory [32,48,49], AKR1B10 adopts the native Trp112 conformation through a hydrogen bond network involving Gln114 and the loop C residue Ser304 (Figure 2D). In AR, with Thr113 and Cys303 at the corresponding positions, this network cannot be established. In addition, the AR Trp111 conformation is always locked through a hydrophobic interaction with Leu300 (or by an ARI opening the SP) [50].
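Structural claims like these (the Gln114/Ser304 hydrogen-bond network stabilizing the native Trp112 conformation) can be checked directly on the deposited coordinates. Below is a minimal Biopython sketch of ours for such an inspection; the chain identifier, the residue numbering, and the choice of donor/acceptor atoms are assumptions that may need adjusting to the actual PDB entry.

```python
# Hedged sketch: inspect the Trp112/Gln114/Ser304 region in an AKR1B10
# structure (e.g., PDB 1ZUA). Chain ID and atom choices are assumptions.
from Bio.PDB import PDBList, PDBParser

path = PDBList().retrieve_pdb_file("1ZUA", pdir=".", file_format="pdb")
structure = PDBParser(QUIET=True).get_structure("akr1b10", path)
chain = structure[0]["A"]  # assumed chain ID

trp112 = chain[112]
gln114 = chain[114]
ser304 = chain[304]

# Candidate hydrogen-bond distances in the network discussed in the text;
# in Bio.PDB, subtracting two Atom objects returns their distance in angstroms.
print("Gln114 NE2 - Ser304 OG :", gln114["NE2"] - ser304["OG"])
print("Trp112 NE1 - Gln114 OE1:", trp112["NE1"] - gln114["OE1"])
```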
[Figure 3. Specific features of the AKR1B10 three-dimensional structure. (A) AR:NADP+:fidarestat and V301L AKR1B10:NADP+:fidarestat complexes shown in cartoon B-factor putty mode to reflect thermal flexibility (red: high B-factor; blue: low B-factor). (B) Zoom on part of the loop A region as an atomic model: AR:NADP+:tolrestat (PDB ID 2FZD) in orange, AR:NADP+:fidarestat (PDB ID 1PWM) in deep blue, V301L AKR1B10:NADP+:fidarestat (PDB ID 4GAB) in cyan, with distances in the same color as the protein (red if they represent a short contact), water molecules in magenta for AR and deep teal for the V301L AKR1B10 complex, and fidarestat in black and gray, respectively; HOH-198 is also represented with dots to convey its size. (C) AR:NADP+:fidarestat complex and (D) V301L AKR1B10:NADP+:fidarestat complex in cartoon tube representation, with the cofactor in orange sticks and fidarestat (black and gray, respectively) and key residues in space-filling mode. (E) Superimposition of the AKR1B10 holoenzyme-UVI2008 complex (PDB ID 5M2F) with the AR holoenzyme-fidarestat complex (PDB ID 1PWM); AKR1B10: white, AR: brown, NADP+: orange, UVI2008: violet, fidarestat: light pink. C3 halogen addition to the pan-RAR agonist TTNPB enables AKR1B10 selectivity, facilitated by the native conformation of AKR1B10 Trp112. Inhibition data are provided for both complexes and for the AR Thr113Gln mutant, which affects the hydrogen bond network composed of residues 112, 114, and 304 stabilizing the native conformation. Panels (A-D) adapted from Ref. [18]; panel (E) adapted from Ref. [48]. Created with PyMOL 2.3.0 and BioRender.]

The second main difference between AKR1B10 and AR again involves one of the external loops. When comparing any of the AKR1B10 and AR structures, it is consistently observed, as derived from the thermal B factors, that loop A in AKR1B10 is much more mobile than in AR (Figure 3A). Furthermore, AKR1B10 presents a larger and more loosely packed loop A subpocket (LAS) than AR, with consistently observed crystallographic water molecule(s) trapped within (in 10 out of the 20 structures; Table A1 and Figure 3B-D). This subpocket is normally absent in AR, flanked on the sides by the loop C residue Ser302 and the loop A residue Phe122. In AKR1B10, the flanking residues cannot come as close as in AR, due to the presence of the bulkier Gln303 side chain, resulting in an additional opening of ~2 Å of the Phe123 side chain. In addition, in AR, Phe115 (Phe116 in AKR1B10), Leu124 (Lys125 in AKR1B10), and Val130 (Ala131 in AKR1B10) stack and make the pocket more hydrophobic, locked, and compact, unable to accommodate any water molecule without clashes (Figure 3B). On the contrary, in AKR1B10, the occupation of this subpocket and the capability of displacing the buried water(s) (see Table A1) are important for inhibitor binding and selectivity (discussed below) [21,50,51].

Structural Bases for AKR1B10 Selectivity

Aside from the differences in the active site region due to the unique conformation of Trp112 and the specific, imperfectly hydrated LAS, our biophysical and computational studies [21,50-52] have shown that AKR1B10 has a different conformational landscape, hydration, and electrostatic properties than AR.
In the previous sections, we introduced the inhibitor types and the specific structural features of AKR1B10. In this section, we elaborate on the requirements for potent and selective AKR1B10 inhibitors through a careful look at the different conformations of the holoenzyme upon their binding. Regarding CAIs solved in complex with the AKR1B10 holoenzyme (Table A1), several of them are also ARIs and bind AKR1B10 very similarly to AR, although with some exceptions. Tolrestat binding has already been considered in the previous section. Zopolrestat also opens the SP in AKR1B10, analogously to AR, through a π-π stacking interaction with Trp112 (Figure 4A). Sulindac, a non-steroidal anti-inflammatory drug (NSAID) previously reported to inhibit cyclooxygenase-2 (COX-2), AR, and AKR1C3 [52,53], displays an essentially equivalent mode of binding in the two enzymes, stacking towards the base of loop A. However, the stacking interaction, given by Phe122/Phe123, is different, and two buried and ordered water molecules are present in the LAS in AKR1B10 but not in AR (Figure 4B). Regarding the mentioned exceptions, IDD388 and MK181 are known to open the SP in AR similarly to zopolrestat [54], but in AKR1B10 they instead occupy the LAS, bound in an extended conformation (Figure 4C). Lastly, epalrestat, in both structures with AR and AKR1B10 (PDB ID 4JIR and 4JIH, respectively), lacks precise coordinates for the phenyl moiety and part of the linker to the CA moiety. Despite this, in AR, with no open LAS in the structure, it is expected that epalrestat binds in a similar way to sulindac. Meanwhile, in AKR1B10, epalrestat can be manually modeled with its phenyl moiety occupying the LAS (Figure 4D). There are several CAIs that are selective AKR1B10 inhibitors for which the enzyme-NADP+-inhibitor structure has been solved (Table A1). Flufenamic acid is an NSAID and a specific AKR1B10 inhibitor vs. AR, but it also inhibits COX-2 and AKR1C3 [53]. Interestingly, in AKR1B10, it binds the holoenzyme with the aryl moiety stacking against Trp21 (Trp20 in AR), in a small loop near the active site. The selectivity is due to the steric clash that Trp111 in AR (always in the flipped position) would have with the benzoic acid moiety of the inhibitor, a clash that the native Trp112 position avoids in AKR1B10 (Figure 5A). The other two selective CAIs solved in complex with the AKR1B10 holoenzyme, JF0049 and MK204 (Figure 5B,C), have in common polybrominated aryl moieties that are too bulky to fit within the SP of AR [50,51]. Nevertheless, they interact differently with AKR1B10. The aryl moiety of JF0049 has a tight fit with the LAS, and we also observed an enthalpic signature by isothermal titration calorimetry upon its binding, consistent with the displacement of the water molecule trapped in the LAS in the holoenzyme structure.
Indeed, the LAS presents just one or two ordered water molecules, but it is likely that other disordered and mobile water molecules are present. While their release should not contribute a large entropy gain, the new hydrogen bonds they form with other water molecules in the bulk phase may add a significant enthalpic benefit [50]. On the other hand, MK204, with one additional bromine (Br) substituent in the aryl moiety and a three-atom linker between the CA and aryl moieties, was studied in the context of a series with an increasing number of Br atoms in the aryl moiety of compounds with identical CA moiety and linker [51]. We observed that the three bulkier ligands fit nicely into a novel AKR1B10 binding site conformer, mainly through a stacking interaction with the Trp112 native (but not the flipped) conformation (Figure 5B). Computational studies paired with the structures allowed us to surmise that ligand binding in this novel pocket requires a very hydrophobic aryl moiety able to displace unfavorable water molecules (accounting for a high desolvation penalty) observed in structures with fewer Br substituents than MK204 [51]. Furthermore, the latter (but not the other congeners) establishes a strong halogen bond with the main-chain carbonyl of Cys299.

Regarding NCAIs solved in complex with the AKR1B10 holoenzyme (Table A1), fidarestat and sorbinil (Figure 1), both cyclic imide ARIs, display an almost identical binding to the two enzymes (Figure 3A), not opening the SP but with a flipped Trp112. Next, we screened a library of synthetic polyhalogenated compounds lacking the usual CA or cyclic imide moieties (in collaboration with Biomar Microbial Technologies) and discovered JF0064, a pan-inhibitor of the human AKR1B enzymes (in order of potency, inhibiting AKR1B15 > AR > AKR1B10 [7,11]) with a new anchoring moiety. We determined Ki values and complexes with the AR and AKR1B10 holoenzymes, identifying it as a non-competitive inhibitor in which the acidic hydroxyl group binds the ABP, again not opening the SP but with a flipped Trp112. Of note, JF0064 binding triggers a slight opening of loop B (loop B subpocket, or LBS), the only instance in which this has been observed in AKR1B10 structures (Figure 5D and Table A1). Chatzopoulou and colleagues [55] independently developed a 2-fluoro-4-(1H-pyrrol-1-yl)phenol scaffold inhibiting AR that showed improved membrane permeation, in line with our in vitro data predicting better pharmacokinetic properties for JF0064 and potential congeners [11]. A great number of selective CAIs and NCAIs were developed in the period from 2010 to 2015 [17,18].
Most of them are (i) long aliphatic unsaturated compounds with terminal aryl moieties (caffeic acid derivatives, retinoids, etc.), or (ii) steroids. All these large inhibitors fit better the larger and more malleable (plastic) AKR1B10 active site and "lid" region (constituted by the three external loops A, B, and C), as opposed to the snugger AR counterpart. We will address three of these compounds solved in complex with the AKR1B10 holoenzyme that illustrate the mechanistic bases of selectivity. Regarding the first group, Hu's laboratory determined the structure of the NCAI lead caffeic acid phenethyl ester (CAPE) [49]. As we observed with JF0064, an acidic hydroxyl of the ligand is hydrogen-bonded to Tyr49 and His111 (Figure 5E), and Trp112 adopts the native conformation. CAPE itself would be compatible with the flipped conformation, but not CAPE derivatives bearing a 2-methoxy group in the catechol moiety (Figure 5E), which would clash with that conformation of Trp112 and which display extraordinary selectivity for AKR1B10. This is similar to what we observed with UVI2008 (Figure 2D). Both compounds have aryl moieties that occupy the LAS (Figures 2D and 5E). Regarding triterpenoid inhibitors, such as oleanolic acid (Figure 1), molecular docking suggested that they would interact in a similar fashion as CAPE or UVI2008. The last AKR1B10 structure determined so far (PDB ID 5Y7N) is the first and only one that contains a steroid inhibitor (an NCAI derivative from 5β-cholanic acid, androst-4-ene-3β,6α-diol (3a) [56]). While inhibition studies have been reported, the structure has not been published in a peer-reviewed journal. The complex with compound 3a shows a surprising feature: Phe123 is displaced inwards, blocking the entry of the LAS and stacking against the side chain of Leu302, thereby opening a novel subpocket that we name the base of loop A subpocket (BLAS), delimited by Phe123, Leu122 and Val48 (Figure 5F). It would be of interest to determine a structure of the AKR1B10 holoenzyme with oleanolic acid to see whether the latter binds similarly to 3a, in the BLAS, or whether it can open the LAS.

Conclusions

Different protein conformers may contribute to inhibitor selectivity for AKR1B10 over AR. Due to the flexibility of the AKR1B10 active site and the existence of transiently opening subpockets, the exact inhibitor-AKR1B10 interactions might need to be determined on a case-by-case basis using crystallographic methods. Upon close examination of the crystallographic structures of AKR1B10 with various inhibitors, distinct structural conformers were revealed. Here we summarize the general features with which a selective AKR1B10 inhibitor should comply:

(i) An anchoring moiety: As with ARIs, an AKR1B10 inhibitor must have an anchoring moiety, with a carboxylic acid or an acidic hydroxyl as the best choices. Cyclic imides, without the addition of an aryl moiety binding to either the SP or the LAS (as may be the case for minalrestat), are poor AKR1B10 inhibitors.

(ii) Keeping Trp112 in its native conformation (AKR1B10-like): Substituents or ligand conformations that are not compatible with the flipped (AR-like) conformation of Trp112 (e.g., flufenamic acid, UVI2008), and/or aryl moieties that provide an "optimal filling" of the LAS, are required for specificity. That is, they should displace the buried water molecule(s) in the LAS, have adequate shape complementarity, and establish interactions that are more favorable than those in the bulk water [50,51].
(iii) Not opening the SP in AR: Another recurrent feature of selectivity for AKR1B10 over AR is the inability of an inhibitor to induce the opening of the SP of AR, which normally occurs with ligands bearing a bulky aryl moiety, as in JF0049 or MK204. This can be observed in Table A1: among the solved structures of the AKR1B10 holoenzyme with inhibitors, no specific AKR1B10 inhibitor is able to open the SP in AR.

Funding: This research was funded by the Spanish Ministerio de Ciencia e Innovación, grant number PID2020-119424RB-I00.

Acknowledgments: We would like to express the sincerest recognition and gratitude to Alberto Podjarny, as well as to present and past members of Parés' and Podjarny's labs (with special mention to André Mitschler and Alexandra Cousido-Siah), for their invaluable contributions to the understanding of the structure, catalysis, and inhibition of the human AKRs AR and AKR1B10.

Conflicts of Interest: The authors declare no conflict of interest.
GATA4-targeted compound exhibits cardioprotective actions against doxorubicin-induced toxicity in vitro and in vivo: establishment of a chronic cardiotoxicity model using human iPSC-derived cardiomyocytes

Doxorubicin is a widely used anticancer drug that causes dose-related cardiotoxicity. The exact mechanisms of doxorubicin toxicity are still unclear, partly because most in vitro studies have evaluated the effects of short-term high-dose doxorubicin treatments. Here, we developed an in vitro model of long-term low-dose administration of doxorubicin utilizing human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs). Moreover, given that current strategies for prevention and management of doxorubicin-induced cardiotoxicity fail to prevent cancer patients developing heart failure, we also investigated whether the GATA4-targeted compound 3i-1000 has cardioprotective potential against doxorubicin toxicity both in vitro and in vivo. The final doxorubicin concentration used in the chronic toxicity model in vitro was chosen based on cell viability data evaluation. Exposure to doxorubicin at concentrations of 1-3 µM markedly reduced (60%) hiPSC-CM viability already within 48 h, while a 14-day treatment with a 100 nM doxorubicin concentration induced only a modest 26% reduction in hiPSC-CM viability. Doxorubicin treatment also decreased DNA content in hiPSC-CMs. Interestingly, the compound 3i-1000 attenuated the doxorubicin-induced increase in pro-B-type natriuretic peptide (proBNP) expression and caspase-3/7 activation in hiPSC-CMs. Moreover, treatment with 3i-1000 for 2 weeks (30 mg/kg/day, i.p.) inhibited doxorubicin cardiotoxicity by restoring left ventricular ejection fraction and fractional shortening in a chronic in vivo rat model. In conclusion, the results demonstrate that long-term exposure of hiPSC-CMs can be utilized as an in vitro model of delayed doxorubicin-induced toxicity and provide in vitro and in vivo evidence that targeting GATA4 may be an effective strategy to counteract doxorubicin-induced cardiotoxicity. Electronic supplementary material: The online version of this article (10.1007/s00204-020-02711-8) contains supplementary material, which is available to authorized users.

Introduction

Cardiotoxicity is a well-recognized devastating adverse outcome related to cancer therapy and can lead to long-term morbidity (Senkus and Jassem 2011). Its prevalence is increasing due to improved long-term survival of cancer patients. One of the most commonly used groups of anticancer drugs are the anthracyclines (e.g. doxorubicin, daunorubicin and idarubicin), which may cause acute cardiac damage that can be reversible, but more commonly cause late-onset toxicity that leads to heart failure. Anthracycline cardiotoxicity is dose-dependent, with heart failure incidence rates ranging from 0.14 to 48% (Conway et al. 2015). If the baseline cardiotoxicity risk is high, prophylactic cardioprotective treatment with angiotensin-converting enzyme inhibitors, angiotensin II receptor blockers, β-blockers and/or statins should be considered (Corremans et al. 2019; Zamorano et al. 2016). Other strategies to prevent left ventricular dysfunction and heart failure induced by anthracyclines include reduction in the cumulative dose, the use of continuous infusions to decrease peak plasma levels, liposomal formulations and less toxic analogues of anthracyclines, as well as the FDA-approved cardioprotective agent dexrazoxane (Zamorano et al. 2016).
Unfortunately, none of these strategies is efficacious enough to prevent a subset of cancer patients developing heart failure. The exact mechanisms of anthracycline-induced cardiotoxicity are still unclear, but may involve oxidative stress, interaction with DNA topoisomerase II beta, calcium dysregulation, iron accumulation, mitochondrial damage, structural changes, and premature senescence, as well as activation of the immune system (Maejima et al. 2008; Octavia et al. 2012; Renu et al. 2018; Rochette et al. 2015; Zhang et al. 2012). Anthracyclines (Aries et al. 2004; Bien et al. 2007; Esaki et al. 2008; Kim et al. 2003; Kobayashi et al. 2006, 2010; Koka et al. 2010; Riad et al. 2008), along with ischemia (Suzuki et al. 2004), have been shown to induce apoptosis and downregulation of the transcription factor GATA4 in the myocardium. Increased apoptosis has also been observed in adult cardiomyocytes in GATA4 knock-out mice and in mice with reduced GATA4 levels (Oka et al. 2006), as well as in neonatal cardiomyocytes when GATA4 has been depleted by adenoviral antisense transcripts (Aries et al. 2004). In agreement with these findings, GATA4 overexpression in vivo by intramyocardial delivery of a GATA4 adenoviral vector prevented myocardial infarction-induced apoptosis and adverse remodelling in rats (Rysä et al. 2010). Accordingly, overexpression of GATA4 in transgenic mice (Kobayashi et al. 2006) or by adenovirus-mediated gene transfer in vitro in neonatal cardiomyocytes and HL-1 cells prevented anthracycline-induced apoptosis (Aries et al. 2004; Kim et al. 2003; Kobayashi et al. 2006). The mechanisms of the doxorubicin-induced decrease in GATA4 protein levels may involve downregulation of GATA4 gene expression (Park et al. 2011) or caspase-1-dependent depletion of GATA4 protein levels (Aries et al. 2014). On the other hand, GATA4 is a transcriptional regulator of the anti-apoptotic genes Bcl-xL (Aries et al. 2004; Kitta et al. 2003; Park et al. 2007) and Bcl-2 (Kobayashi et al. 2006), for which GATA4 binding activity and Ser-105 phosphorylation are required (Kobayashi et al. 2006). Overall, these findings demonstrate the significance of GATA4 for cell survival signalling.

One reason for the lack of full understanding of the mechanisms of doxorubicin cardiotoxicity may be that traditional preclinical models are not appropriate or sufficiently clinically relevant (Madonna et al. 2015). The translatability of results from in vitro and in vivo models to humans is limited, as these models are unable to reproduce the complex pathophysiology of human disease. For instance, animal models do not generally take into consideration ageing or comorbidities that aggravate drug-induced cardiotoxicities in clinical situations. Moreover, most in vitro studies have evaluated the effects of short-term high-dose doxorubicin treatments (Corremans et al. 2019). Additionally, due to interspecies differences, in vitro and in vivo animal models do not accurately predict the toxic effects on the human heart. Thus, better experimental models of doxorubicin cardiotoxicity are needed to more appropriately simulate clinical circumstances as well as the actions of potential cardioprotective agents.

The present study had two main aims. First, we aimed to establish an in vitro model of long-term low-dose administration of doxorubicin utilizing human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs). Cell viability was studied with the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay.
High-content analysis (HCA) was used to study changes in DNA content, transcription factor GATA4 levels, expression of pro-B-type natriuretic peptide (proBNP), as well as caspase activation. Additionally, to compare different cardiomyocyte types, toxicity was studied in both hiPSC-CMs and primary neonatal rat ventricular myocytes (NRVMs). Second, as we have recently described a novel family of small-molecule compounds that affect the protein-protein interaction of the transcription factors GATA4 and NKX2-5 and improve cardiac function in experimental models of myocardial infarction and hypertension (Ferreira et al. 2017; Kinnunen et al. 2018; Välimäki et al. 2017), we aimed to investigate if the lead compound 3i-1000 has cardioprotective potential against doxorubicin cardiotoxicity in vitro and in vivo.

Cell cultures

Long-term toxicity was studied in hiPSC-CMs. Acute toxicity was studied in both hiPSC-CMs and primary NRVMs. Cell cultures were maintained at 37 °C in a humidified atmosphere of 5% CO2. To investigate if the compound 3i-1000 is cardioprotective against doxorubicin-induced toxicity, the cells were exposed simultaneously to both doxorubicin and 3i-1000. The selection of doxorubicin concentrations was based on plasma concentrations detected in patients undergoing treatment (Creasey et al. 1976; Greene et al. 1983; Muller et al. 1993; Speth et al. 1987). The selection of 3i-1000 concentrations was based on previous studies investigating the efficacy and toxicity of the compound in vitro (Karhu et al. 2018; Välimäki et al. 2017). For compound exposures, doxorubicin, 3i-1000, and equivalent vehicle dilutions were made separately in the growth medium. Compound exposures were started by aspirating the old growth medium and adding first the medium containing 3i-1000 (or an equivalent volume concentration of dimethylsulfoxide; DMSO) to the cells. Cells were incubated at 37 °C for 10-15 min, after which the medium containing doxorubicin (or an equivalent volume concentration of DMSO) was also added to the cells. During long-term exposures, the media were replaced with fresh growth media (containing doxorubicin and/or 3i-1000) every 3-4 days.

Human induced pluripotent stem cell-derived cardiomyocytes

The iPS(IMR90)-4 line (Yu et al. 2007) was purchased from WiCell (Madison, Wisconsin, USA). The stem cells were cultured in Essential 8™ medium (E8) on six-well plates coated with Matrigel® (1:50). For passaging, the cells were dissociated with Versene® and resuspended in E8 containing 10 µM ROCK inhibitor Y-27632. The cells were grown until 80-95% confluent. Cardiomyocytes were produced from hiPSCs using small-molecule induction, as described earlier (Burridge et al. 2014; Karhu et al. 2018). Differentiation was started by adding 6 µM CHIR99021 in RPMI 1640 medium supplemented with B-27 without insulin (RB-ins) to the cells (day 0). After 24 h, CHIR99021 was removed and replaced with fresh RB-ins (day 1). On day 3, the medium was changed to RB-ins containing 2.5 µM Wnt-C59 for 48 h. From day 5 to 11, the cells were maintained in RB-ins. To purify the cardiomyocyte cultures, on days 11 and 13, the cells were fed with RPMI 1640 without glucose with B-27 supplement. From day 15 onwards, the cells were maintained in RPMI 1640 supplemented with B-27 (RB+ins). Beating hiPSC-CMs were dissociated between days 15 and 17 by incubating them in cell dissociation solution containing 40% enzyme-free cell dissociation buffer, 40% RPMI 1640 and 20% trypsin-EDTA (final trypsin concentration 0.01%) for 7-8 min.
Trypsin was inactivated with RB+ins supplemented with 10% foetal bovine serum (FBS). After centrifugation, the cells were suspended in RB+ins with 10% FBS containing 10 µM ROCK inhibitor Y-27632 and seeded at 17,000-20,000 cells/well on gelatin-coated 96-well plates. In general, differentiation yielded almost pure (> 95%) cardiomyocyte cultures, indicating high differentiation efficiency. In the experiments, only differentiation batches that were > 95% pure cardiomyocyte cultures were used. The cells were allowed to attach for 2 days, after which they were maintained in RB+ins (without FBS) for approximately 1 week before treatments. For compound exposures, RB+ins (without FBS) was used.

Primary cardiomyocytes

Primary cultures of NRVMs were prepared from 1- to 3-day-old Wistar rats, as described earlier (Tölli et al. 2014). Animals were sacrificed by decapitation. Ventricles were dissected and cut into small pieces, which were then enzymatically digested by incubating them for 1-1.5 h at 37 °C under 600 rpm shaking conditions in a solution containing 100 mM NaCl, 10 mM KCl, 1.2 mM KH2PO4, 4.0 mM MgSO4, 50 mM taurine, 20 mM glucose, 10 mM 4-(2-hydroxyethyl)-1-piperazineethanesulfonic acid (HEPES), 2 mg/ml collagenase type 2, 2 mg/ml pancreatin, 100 U/ml penicillin, and 100 µg/ml streptomycin. The cell suspension was collected and centrifuged for 5 min at 160×g. The supernatant and the top layer of the pellet were discarded and the isolated cardiac cells were resuspended in Dulbecco's Modified Eagle Medium/Nutrient Mixture F-12 (DMEM/F-12) supplemented with 10% FBS, 100 U/ml penicillin, and 100 µg/ml streptomycin. To reduce the number of contaminating non-myocytes (25-45% on day 3 after cell isolation), the cells were pre-plated onto cell culture flasks and allowed to attach for 45-60 min in cell culture conditions. Unattached cells (enriched cardiomyocytes) were collected with the medium and seeded at 30,000-40,000 cells/well in gelatin-coated 96-well plates. The next day, the medium was changed to complete serum-free medium (CSFM; DMEM/F-12 supplemented with 2.5 mg/ml BSA, 5 µg/ml insulin, 5 μg/ml transferrin, 5 ng/ml selenium, 2.8 mM sodium pyruvate, 0.1 nM triiodo-L-thyronine (T3), 100 U/ml penicillin, and 100 µg/ml streptomycin) for 24 h prior to compound treatments. For compound exposures, CSFM was used.

Cell viability assay

The cells were exposed to doxorubicin and/or 3i-1000 for 2-21 days and cell viability was quantified with the MTT assay (Mosmann 1983). MTT was added to the cells at a final concentration of 0.5 mg/ml, followed by a 2-h incubation in cell culture conditions. The medium was aspirated and the formed formazan crystals were solubilized in DMSO. Absorbance was measured at 550 nm, and absorbance at 650 nm was subtracted as background.

Automated fluorescence microscopy and high-content analysis

The cells were exposed to doxorubicin and/or 3i-1000 for 1-14 days. For proBNP stainings, cells were additionally treated with Brefeldin A (1:1000) for 3 h prior to fixation. Alternatively, to study caspase activation, the cells were incubated with 7 µM caspase-3/7 detection reagent solution in phosphate-buffered saline (PBS) with 5% FBS for 60 min at 37 °C prior to fixation. The cells were fixed with 4% paraformaldehyde (PFA) for 15 min at room temperature (rt) and permeabilized with 0.1% Triton X-100 for 10 min. Non-specific binding sites were blocked with 4% FBS in PBS for 45 min at rt, after which the cells were incubated with anti-GATA4 (1:400) or anti-proBNP (1:500) antibody.
Additionally, a primary antibody against α-actinin (1:600) or cardiac troponin T (1:800) was used to identify myocytes. After a 60-min incubation with primary antibodies at rt, the cells were washed 3 × 5 min with PBS, followed by a 45-min incubation with Alexa Fluor-conjugated secondary antibodies (1:200, with the exception of Alexa Fluor 647 anti-rabbit, 1:250) and DAPI (1 µg/ml) at rt. The plates were imaged and analysed with the CellInsight CX5 High-Content Screening Platform (Thermo Scientific) using a 10× objective (Olympus UPlanFL N 10x/0.3). For quantification, the cells were first identified based on DAPI fluorescence, which defined the nuclear area. Non-myocytes were excluded based on the absence of α-actinin/cardiac troponin T staining. The threshold for α-actinin/cardiac troponin T fluorescence intensity was set manually in each experiment to allow optimal exclusion of non-myocytes. The data were collected only from α-actinin/cardiac troponin T positive cells. The intensity of GATA4 staining was analysed within the nucleus. The intensity of proBNP staining was analysed in the perinuclear area defined by a 4-pixel ring around the nucleus. The threshold for proBNP positive cells was set manually in each experiment to adjust for minor variation in staining intensity. The intensity of the fluorescent caspase-3/7 activity reporter was quantified within the nucleus. The threshold for caspase positive and caspase negative cells was also set manually in each experiment.

Doxorubicin-induced cardiotoxicity in rats

Doxorubicin was administered i.p. to 7-week-old male Sprague Dawley rats (average weight 216 g, range 189-245 g) at the dose of 1 mg/kg/day for 10 days (Hayward and Hydock 2007). Control animals received an equivalent volume of saline. Based on previous experiments, and due to its rapid metabolism, the compound 3i-1000 was administered i.p. at the dose of 15 mg/kg two times a day for 2 weeks, from week 7 to week 9. It was diluted in DMSO and administered to the animals as a 1:1 dilution in corn oil, with control animals receiving DMSO with corn oil in an equivalent volume. Transthoracic echocardiography was performed using the Vevo2100 high-frequency high-resolution linear array ultrasound system (FujiFilm VisualSonics, Toronto, Canada) and an MS-250 transducer (13-24 MHz, axial resolution 75 μm, lateral resolution 165 μm) by a trained sonographer blinded to the treatments, as described previously (Jurado Acosta et al. 2017). Rats were sedated with isoflurane or anesthetized with ketamine (50 mg/kg, i.p.) and xylazine (10 mg/kg, i.p.). Using two-dimensional imaging, a short axis view of the left ventricle (LV) at the level of the papillary muscles was obtained and a two-dimensionally guided M-mode recording through the anterior and posterior walls of the LV was acquired. End-systolic and end-diastolic LV dimensions (ESD and EDD) as well as the thickness of the interventricular septum and posterior wall were measured from the M-mode tracings. LV fractional shortening (FS) and ejection fraction (EF) were calculated from the M-mode LV dimensions using Eqs. 1 and 2. An average of three measurements of each variable was used. After the echocardiographic measurements at 9 weeks, the terminally anesthetized animals were decapitated, the hearts were excised, and the apex of the left ventricle was immersed in liquid nitrogen and stored at −70 °C for further analysis.
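The bodies of Eqs. 1 and 2 did not survive in this copy of the text. For reference, the standard M-mode definitions, which we assume were the ones used here (FS is definitional; for EF, the Teichholz-corrected volume commonly applied to M-mode dimensions is shown), are:

```latex
\mathrm{FS}\,(\%) = \frac{\mathrm{EDD}-\mathrm{ESD}}{\mathrm{EDD}}\times 100 \qquad (1)

\mathrm{EF}\,(\%) = \frac{V(\mathrm{EDD})-V(\mathrm{ESD})}{V(\mathrm{EDD})}\times 100,
\qquad V(D)=\frac{7.0}{2.4+D}\,D^{3} \qquad (2)
```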
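Looking back at the high-content quantification described at the start of this section, a minimal sketch of the per-cell analysis logic is given below. It is an illustration only: the study used the CellInsight platform's own software, and the Otsu thresholding plus all function and variable names here are our assumptions; only the 4-pixel perinuclear ring, the marker-based exclusion of non-myocytes, and the DNA-intensity readouts come from the text.

```python
# Sketch of the per-cell readouts (scikit-image >= 0.18 for expand_labels).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops
from skimage.segmentation import expand_labels

def quantify_cells(dapi, ctnt, probnp, ctnt_threshold):
    """dapi/ctnt/probnp: 2-D intensity images of the same field."""
    nuclei = label(dapi > threshold_otsu(dapi))   # nuclear area from DAPI
    rings = expand_labels(nuclei, distance=4)     # grow each label by 4 px
    rings[nuclei > 0] = 0                         # keep only the perinuclear ring
    results = []
    for region in regionprops(nuclei, intensity_image=ctnt):
        # exclude non-myocytes: no cardiac troponin T / alpha-actinin signal
        if region.mean_intensity < ctnt_threshold:
            continue
        nuc_mask = nuclei == region.label
        ring_mask = rings == region.label
        results.append({
            "dna_total": float(dapi[nuc_mask].sum()),                # DNA content
            "dna_cv": float(dapi[nuc_mask].std() / dapi[nuc_mask].mean()),
            "perinuclear_probnp": float(probnp[ring_mask].mean())
                                  if ring_mask.any() else np.nan,
        })
    return results
```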
RNA isolation from LV tissue and RT-PCR

The LV tissue was ground in liquid nitrogen to a powder, of which 1/3 was used for total RNA isolation using the guanidine thiocyanate-CsCl method (modified from Cathala et al. 1983). Briefly, the tissue powder was homogenised in 3 ml of lysis buffer containing 4 M guanidium thiocyanate, 0.1 M Tris-HCl (pH 7.5), 7% β-mercaptoethanol and 1.0-2.0% Na-lauroylsarcosine with an Ultra-Turrax® (IKA®), and cell debris was pelleted for 10 min at 3000 rpm (1791×g) at 4 °C. The supernatant containing RNA was stored at −80 °C for further treatment. RNA was isolated by ultracentrifugation overnight through a 5.7 M CsCl cushion at 4 °C. The resulting pellet was resuspended in lysis buffer and RNA was precipitated with 3 M sodium acetate (pH 5.2) (1/10 vol) and ice-cold absolute ethanol (3× vol) for at least 1 h at −20 °C. The precipitated RNA was pelleted by centrifugation for 15-20 min at 12,000 rpm (13,520×g) at 4 °C and washed with 70% ethanol in diethylpyrocarbonate (DEPC)-treated water, followed by another centrifugation for 5-10 min as described above. The washing was repeated and the RNA pellet was air-dried before dissolving in DEPC-H2O. For quantitative RT-PCR analyses, cDNA was synthesised from total RNA with a First-Strand cDNA Synthesis Kit (GE Healthcare Life Sciences) following the manufacturer's protocol. RNA was analysed by RT-PCR on an ABI 7300 sequence detection system (Applied Biosystems) using TaqMan chemistry. The results were quantified using the ΔΔCT method and normalised to 18S RNA quantified from the same samples. The following sequences of the primers and the fluorogenic probes were used in the assay: atrial natriuretic peptide (ANP; forward: GAA AAG CAA ACT GAG GGC TCTG, reverse: CCT ACC CCC GAA GCA GCT, probe: TCG CTG GCC CTC GGA GCC T) and B-type natriuretic peptide (BNP; forward: TGG GCA GAA GAT AGA CCG GA, reverse: ACA ACC TCA GCC CGT CAC AG, probe: CGG CGC AGT CAG TCG CTT GG).

Protein extraction from LV tissue and western blot

Two thirds of the ground LV tissue was homogenised in 4 ml of lysis buffer (20 mM Tris, 10 mM NaCl, 0.1 mM EDTA, 0.1 mM EGTA, pH 8.0) containing protease and phosphatase inhibitors (1 mM β-glycerophosphate, 1 mM Na3VO4, 10 µg/ml leupeptin, 10 µg/ml pepstatin, 10 µg/ml aprotinin, 2 mM benzamidine, 1 mM phenylmethylsulfonyl fluoride, 50 mM sodium fluoride, 1 mM dithiothreitol). Of the homogenate, 0.8 ml was used for total protein extraction and the rest for nuclear protein extraction. Then, 0.2 ml of lysis buffer (100 mM Tris-HCl, 750 mM NaCl, 5 mM EDTA, 5 mM EGTA, 5% Triton X-100, 12.5 mM sodium pyrophosphate, 5 mM β-glycerophosphate, 5 mM Na3VO4, pH 7.5) was added to the total protein homogenate and vortexed for 30 s. After a 20-min centrifugation at 12,500 rpm (14,670×g) at 4 °C, the supernatant containing total proteins was collected. For nuclear protein extraction, the homogenate was divided into two sets, which were later combined. The homogenate was kept on ice for 15 min, after which NP-40 was added at a final concentration of 0.6%. The sample was vortexed vigorously for 15 s and centrifuged for 30 s at 12,500 rpm (14,670×g) at 4 °C. The pellet was suspended in buffer (20 mM Hepes, 0.4 mM NaCl, 1 mM EDTA, 1 mM EGTA, pH 8.0) including the inhibitors mentioned above, and the parallel samples were combined. The samples were then vortexed vigorously for 45 min at 4 °C. After a final centrifugation at 12,500 rpm for 5 min at 4 °C, the supernatant containing nuclear proteins was collected.
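A brief aside on the ΔΔCT quantification cited in the RT-PCR paragraph above: in its standard (Livak) form, which we assume is the one meant, with 18S as the reference gene, it reads

```latex
\Delta C_T = C_T^{\text{ANP or BNP}} - C_T^{18S},\qquad
\Delta\Delta C_T = \Delta C_T^{\text{treated}} - \Delta C_T^{\text{control}},\qquad
\text{fold change} = 2^{-\Delta\Delta C_T}
```

so that, for example, a ΔΔCT of −1 corresponds to a two-fold increase over control.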
The protein concentrations were determined with the Bio-Rad Protein Assay. From each animal, 50 µg of total protein or 20 µg of nuclear protein was resolved on a 12% SDS-PAGE gel and transferred onto a nitrocellulose membrane. After blocking the non-specific background in 5% non-fat dry milk, the membranes were incubated with a 1:1000 dilution of primary antibodies, except for anti-GAPDH, which was used at a 1:100,000 dilution, at 4 °C overnight. After washing, the membranes were incubated for 1 h with an HRP-conjugated anti-rabbit or anti-mouse secondary antibody at a 1:2000 dilution. The protein amounts were detected by enhanced chemiluminescence with ECL Plus reagents (RPN2132, Amersham Biosciences), followed by digitalization of the chemiluminescence with a Luminescent Image Analyzer LAS-3000 (Fujifilm) and analysis with Quantity One software 4.6.6 Basic (Bio-Rad Laboratories). For a second immunoblotting, the membrane was stripped for 30 min at 60 °C in stripping buffer (0.16 M Tris-HCl, 6.5% SDS and 2.25% β-mercaptoethanol), blocked and probed with antibodies as described above.

Statistics

In vitro results are expressed as the mean from at least three independent experiments, with error bars representing the standard error of the mean (SEM). For statistical analysis, non-normalized raw data were used. Statistical analyses were performed using IBM SPSS Statistics 24 software. Statistical significance was evaluated with randomized block ANOVA (experiment and treatment as factors) followed by Tukey's HSD. In vivo results are expressed as the mean with error bars representing SEM. For the first series of in vivo results, Welch's t test was used to compare the NaCl and DOX groups at each time point separately. For the second series of in vivo results, Levene's test was used to analyse the equality of variances, after which an independent-samples t test was used to compare the groups NaCl + V and DOX + V, or DOX + V and DOX + 3i-1000. Differences at the level of P < 0.05 were considered statistically significant.

Doxorubicin-induced chronic toxicity in hiPSC-CMs

To study long-term toxicity in vitro, hiPSC-CMs were exposed to doxorubicin for up to 21 days. Exposure to doxorubicin at concentrations of 1 µM and 3 µM markedly reduced hiPSC-CM viability already within 48 h (approximately 60%, P < 0.001; Fig. 1a). Treatment of hiPSC-CMs with 300 nM doxorubicin was less toxic but resulted in severe cytotoxicity within 21 days. On the other hand, a 14-day exposure to 100 nM doxorubicin induced only a modest 26% (P = 0.201) reduction in hiPSC-CM viability. Based on these results, doxorubicin at the concentration of 100 nM for 14 days was chosen for further HCA experiments to explore long-term toxicity. To investigate the effects of the small-molecule compound 3i-1000 in hiPSC-CMs, the cells were exposed to the compound alone or in combination with doxorubicin for 7, 14, and 21 days (Fig. 1b). In the MTT assay, 3i-1000 alone at a 10 µM concentration reduced hiPSC-CM viability by 34% (P = 0.001), 50% (P < 0.001) and 65% (P < 0.001) after 7, 14 and 21 days of exposure, respectively. At the concentration of 3 µM, the decrease was only 16%, even after a 21-day exposure. Moreover, 3i-1000 at 3-10 µM concentrations had no effect on doxorubicin-induced reductions in hiPSC-CM viability.

Effects of doxorubicin and 3i-1000 on DNA content and GATA4 levels in hiPSC-CMs

DAPI is a fluorescent dye that binds to A-T rich sequences of double-stranded DNA; thus, the fluorescence depends on the amount of DNA in the cells (Kapuscinski 1995).
To evaluate the effect of doxorubicin and 3i-1000 on the DNA content of hiPSC-CMs, as well as hiPSC-CM density in culture, DAPI staining and HCA were utilized. Over long-term exposure, both doxorubicin and 3i-1000 decreased hiPSC-CM density in culture (Fig. 2a). After a 4-day exposure, the doxorubicin-induced reduction in cell density was 12% compared to control, and after a 14-day exposure, 49%. Additionally, a 14-day exposure to 10 µM 3i-1000 alone caused an 80% (P = 0.181) reduction in hiPSC-CM number compared to control, whereas at the concentration of 3 µM the decrease was 36%. A 4-day exposure to 100 nM doxorubicin decreased the average total intensity of DNA staining by 18% (P < 0.001) compared to control, and after a 14-day exposure this reduction was 28% (P = 0.003; Fig. 2b). We also measured variation in DNA staining intensity as an indication of DNA fragmentation leading to distribution of DNA fragments around nuclei (Darzynkiewicz et al. 2010; Doan-Xuan et al. 2013). Doxorubicin decreased the intranuclear variability of DNA staining intensity by 31% (P < 0.001), 37% (P < 0.001) and 44% (P = 0.009) after 4-, 7- and 14-day exposures, respectively (Supplementary Fig. S1). The compound 3i-1000 had no significant effect on the total intensity of DNA staining or the intranuclear variability, either alone or in combination with doxorubicin. To elucidate the effect of doxorubicin and 3i-1000 on GATA4 levels in hiPSC-CMs, the average GATA4 staining intensity in the nucleus was analysed using HCA (Fig. 2c). Neither doxorubicin nor 3i-1000 had a statistically significant effect on nuclear GATA4 staining, even after a 14-day exposure.

Effects of doxorubicin and 3i-1000 on proBNP expression in hiPSC-CMs

BNP is used for the diagnosis of heart failure and cardiac dysfunction (de Lemos et al. 2003; Ruskoaho 2003), and its synthesis in cardiomyocytes is induced by cellular stress such as mechanical stretch (Pikkarainen et al. 2003), hypoxia (Toth et al. 1994), and metabolic stress (Bistola et al. 2008), as well as various paracrine signals such as endothelin-1 (Bruneau et al. 1997) and cytokines (Ma et al. 2004). To evaluate the effect of doxorubicin and 3i-1000 on expression of the BNP precursor proBNP in hiPSC-CMs, proBNP staining and HCA were utilized. A 4-day exposure to 100 nM doxorubicin induced a 3.1-fold increase (P < 0.001) in the percentage of cardiomyocytes positive for proBNP compared to control (Fig. 3a, b), and this was paralleled by an increased average intensity of proBNP staining in the perinuclear region (Supplementary Fig. S2). When the hiPSC-CMs were exposed simultaneously to 100 nM doxorubicin and 10 µM 3i-1000 for 4 days, the percentage of proBNP+ cells decreased by 60% (P < 0.001). Correspondingly, at the 3 µM concentration of 3i-1000, the decrease was 20%. On the other hand, the percentage of proBNP+ cells was similar in the doxorubicin and doxorubicin plus 3i-1000 treated groups at day 7, as well as in the doxorubicin and doxorubicin plus 3 µM 3i-1000 groups at day 14. However, when the cells were exposed simultaneously to doxorubicin (100 nM) and 10 µM 3i-1000 for 14 days, the percentage of proBNP+ cells increased 19.3-fold, compared to a 7.7-fold increase in cells exposed to 100 nM doxorubicin only (Fig. 3b).

Fig. 1 The effect of long-term doxorubicin (DOX) exposure on the viability of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs). To study cell viability, the cells were exposed to DOX for 2-21 days, after which the MTT assay was performed.
a Cell viability after DOX treatment, expressed as mean ± SEM (n = 4).

Effects of doxorubicin and 3i-1000 on caspase activation in hiPSC-CMs

To investigate the effect of doxorubicin and 3i-1000 on cell death, caspase activation was analysed using HCA. Doxorubicin at 100 nM concentration had no effect on the percentage of hiPSC-CMs positive for the fluorescent caspase-3/7 activity reporter after a 4-day exposure, but this percentage tended to increase after a 7-day exposure, and doxorubicin produced a significant 3.1-fold increase (P = 0.001) in the percentage of cells with active caspase-3/7 after a 14-day exposure (Fig. 3d). The compound 3i-1000 alone at 10 µM concentration caused a 5.0-fold increase (P = 0.007) in cells positive for the caspase reporter after a 4-day exposure (Fig. 3c, d). Caspase-3/7 activity was significantly increased also after 7-day (2.5-fold, P = 0.007) and 14-day (2.9-fold, P = 0.003) exposures to 10 µM 3i-1000. At the 3 µM concentration of 3i-1000, the increase was 1.8-fold compared to control at day 14. Interestingly, when the cells were exposed to 10 µM 3i-1000 simultaneously with doxorubicin, the increases were only 1.7-fold, 1.1-fold and 1.9-fold (not statistically significant) compared to control at days 4, 7 and 14, respectively, indicating a cardiomyocyte-protective effect of 3i-1000 in hiPSC-CMs.

Fig. 2 The effects of doxorubicin and 3i-1000 on DNA content and GATA4 levels in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) after long-term exposure. For high-content analysis, the cells were exposed simultaneously to 100 nM doxorubicin and 3i-1000 for 4, 7 or 14 days, after which they were fixed and stained. Imaging and analysis were carried out using the CellInsight High-Content Screening Platform. a Cell density. b Intensity of DNA staining. c Intensity of GATA4 staining in the nucleus. The results are expressed as mean + SEM (n = 3-5). ***P < 0.001 vs. control; **P < 0.01 vs. control (randomized block ANOVA followed by Tukey's HSD)

hiPSC-CMs are more resistant to doxorubicin toxicity than primary cardiomyocytes

To compare different cardiomyocyte types, hiPSC-CMs and NRVMs were exposed short-term to doxorubicin and 3i-1000 treatments. Short-term exposures were used because NRVMs, unlike hiPSC-CMs, cannot be cultured for extended periods of time. In the MTT assay, a 48-h exposure to doxorubicin at 1 µM and 3 µM concentrations induced > 58% reductions (P < 0.001) in hiPSC-CM viability (Fig. 4a), whereas the viability of NRVMs decreased by > 79% (P < 0.001; Fig. 4b) (Supplementary Fig. S3 shows IC50 values for doxorubicin in both cell types). A 48-h exposure to 100 nM doxorubicin had no substantial effect on the viability of either cardiomyocyte type, while 300 nM doxorubicin decreased the viabilities of both cell types by 20%. The compound 3i-1000 (at 10 µM and 30 µM concentrations), alone or in the presence of 100 nM doxorubicin, tended to increase the viabilities of both cell types (by approximately 20%). Based on HCA results (Fig. 5), a 24-h exposure to 100 nM doxorubicin induced a 14% decrease (P = 0.039) in the average intensity of GATA4 staining in NRVM nuclei, but not in hiPSC-CMs. Also, 3i-1000 decreased nuclear GATA4 staining intensity in NRVMs by 15% (P = 0.030). Moreover, a 24-h exposure to 100 nM doxorubicin had no effect on the percentage of cardiomyocytes positive for the fluorescent caspase-3/7 activity reporter in either cell type.
However, a 24-h exposure to 10 µM 3i-1000 by itself induced a 2.5-fold non-significant increase in hiPSC-CMs and a 1.4-fold significant increase (P = 0.037) in NRVMs positive for the caspase reporter compared to DMSO, but not in the presence of 100 nM doxorubicin. It is also notable that in NRVM cultures the basal level of caspase-positive cardiomyocytes after a 24-h exposure to 0.1% DMSO (on day 3 after cell isolation) was 8%, whereas in hiPSC-CM cultures the basal level of caspase-positive cells was no more than 1%.

In vivo model of chronic doxorubicin toxicity

As the compound 3i-1000 showed cardioprotective effects in hiPSC-CMs in vitro, we next examined its effects on doxorubicin-induced cardiotoxicity in vivo. First, we carefully tested several rat and mouse models (single 15 or 20 mg/kg dose of doxorubicin), as described in previous GATA4 in vivo cardioprotection studies (Kobayashi et al. 2006) and in various other studies in which doxorubicin has been shown to affect GATA4 levels (Aries et al. 2004; Bien et al. 2007; Esaki et al. 2008; Koka et al. 2010; Riad et al. 2008). However, under our experimental conditions, using a high bolus dose of 15 mg/kg, or 7.5 mg/kg/week three times i.p., had no effect on LV ejection fraction; instead, doxorubicin induced acute diarrhoea and ascites, and serious weight loss (18-20%) was observed after 7 days in 2/3 of the rats. In our subsequent studies, we finally observed that the model developed for rats by Hayward and Hydock (2007), in which doxorubicin was administered at the dose of 1 mg/kg/day for 10 days (Fig. 6a), possessed many classical signs of doxorubicin-induced late-onset dilated cardiomyopathy, reflected as the decline in both LV ejection fraction and fractional shortening (Fig. 6b, c). Cardiac function was studied by echocardiography at 2, 7, 9, and 11 weeks (Fig. 6b, c). The cardiomyopathy, as a consequence of doxorubicin treatment, started to develop after 7 weeks. At week 9, the LV ejection fraction was 68.8 ± 3.4% (n = 3) in the saline group and 55.9 ± 3.8% (n = 6) in the DOX group (P = 0.044), and LV fractional shortening was also lower in DOX-treated animals. After week 9, the survival of DOX-treated animals decreased quickly (Fig. 6d).

Compound 3i-1000 restores cardiac function in doxorubicin-treated animals

To study the effect of 3i-1000 on doxorubicin-induced cardiotoxicity in vivo, heart failure was first induced with doxorubicin treatment as described in Fig. 6a, and the compound 3i-1000 (or an equal volume of vehicle) was injected i.p. at 30 mg/kg/day (the daily dose divided in two portions) for 2 weeks during weeks 8 and 9. Cardiac function was assessed by echocardiography at weeks 2, 7 and 9 (Supplementary Table S1). Interestingly, treatment with compound 3i-1000 significantly inhibited doxorubicin-induced cardiotoxicity by restoring the left ventricular EF (DOX plus 3i-1000 63.8 ± 2.6% vs. DOX 56.8 ± 1.8%, P = 0.041) and FS (DOX plus 3i-1000 36.4 ± 2.1% vs. DOX 31.2 ± 1.3%, P = 0.043) (Fig. 7a, b). There were no changes in left ventricular posterior wall thickness (LVPW) or internal dimension (LVID) (Fig. 7c, d). Doxorubicin-induced cardiac damage was associated with elevation of ANP and BNP mRNA expression, and these increases in gene expression were not significantly influenced by 3i-1000 treatment (Fig. 7e, f). We did not detect any significant changes in GATA4 protein levels by western blot analysis, but compound 3i-1000 inhibited the doxorubicin-induced decrease in phosphorylated p38 protein levels (Supplementary Fig. S4).
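Before moving to the Discussion, a minimal sketch of the Welch's t test used above for the NaCl vs. DOX echocardiography comparisons is given below, using SciPy. The ejection-fraction arrays are invented placeholders for illustration, not the study's measurements.

```python
# Welch's t test (unequal variances), as used for the NaCl vs. DOX comparisons.
# The values below are invented placeholders, NOT the study's data.
from scipy import stats

ef_nacl = [65.2, 72.1, 69.1]                    # hypothetical saline group (n = 3)
ef_dox = [52.0, 58.3, 61.2, 50.9, 57.4, 55.6]   # hypothetical DOX group (n = 6)

# equal_var=False selects Welch's variant of the independent-samples t test
t_stat, p_value = stats.ttest_ind(ef_nacl, ef_dox, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, P = {p_value:.4f}")
```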
Discussion

Doxorubicin is a widely used chemotherapeutic agent, but its clinical applications are limited by dose-dependent cardiotoxicity. Here, we have developed an in vitro model of long-term low-dose administration of doxorubicin utilizing hiPSC-CMs to more accurately mimic long-term doxorubicin dosing and the late effects of cardiotoxicity in clinical practice. We exposed hiPSC-CMs to 100 nM doxorubicin for up to 21 days, which differs from the other hiPSC-CM-based models that have recently been used to study doxorubicin toxicity (Burridge et al. 2016; Chaudhari et al. 2016a, b; Louisse et al. 2017; Zhao and Zhang 2017). The concentration of doxorubicin was lower and the exposure time longer, which can be expected to more accurately model chronic dosing in human cancer patients and the cardiotoxicity that ensues. At the same time, it should be taken into account that extended exposure times with repeated medium changes may cause doxorubicin accumulation in the nuclei and associated cardiotoxicity (Kawai et al. 1997). The 100 nM concentration was chosen based on evaluation of the cell viability data. Micromolar concentrations of doxorubicin caused severe acute toxicity leading to considerable cell death already after 2 days. This acute toxicity was even more severe in NRVMs than in hiPSC-CMs. Doxorubicin at 100 nM concentration, however, reduced cell viability over long-term exposure without causing excessive cell death. This model also allows the evaluation of the efficacy of novel cardioprotective or restorative therapies on chronic cardiomyocyte toxicity in vitro.

Doxorubicin can intercalate with DNA, directly affecting transcription and replication and leading to apoptosis of cancer cells (Yang et al. 2014). Doxorubicin-induced DNA damage and apoptosis contribute also to its cardiotoxicity (Arola et al. 2000; Lyu et al. 2007; Rochette et al. 2015; Zhang et al. 2012). Here we show that the doxorubicin-induced decreases in cell number, viability and DNA content were associated with increased caspase-3/7 activity in the chronic in vitro cardiotoxicity model, confirming that caspase-dependent apoptosis contributed to cardiomyocyte death and thus cardiotoxicity. These findings further validate long-term low-dose exposure of hiPSC-CMs as a novel model of doxorubicin-induced cardiotoxicity.

To investigate doxorubicin cardiotoxicity in vivo, we subjected rats to a once-a-day regimen of doxorubicin for a period of 10 days and then followed up the animals for 11 weeks. In previous studies of doxorubicin-induced cardiotoxicity, various animal models, doses and dosing regimens have been used (Aston et al. 2017). Here, our aim was to study the effect of the compound on cardiac function, and therefore an animal model of doxorubicin-induced cardiotoxicity in which the ejection fraction decreases was necessary. The onset of cardiotoxicity was assessed by means of echocardiographic evaluation of cardiac function and natriuretic peptide measurements, both recommended also in clinical practice as diagnostic tools to detect myocardial toxicity (Zamorano et al. 2016). An important characteristic of the model used herein was the use of low doses of doxorubicin that resulted in delayed development of cardiac dysfunction, as shown by the significant decrease in LV ejection fraction and fractional shortening only after week 7. This is in contrast to models that administer single or high doses of doxorubicin that rapidly damage the heart (Hayward and Hydock 2007).
The dose of doxorubicin used here, however, was sufficient to activate left ventricular ANP and BNP gene expression, which is consistent with previous reports showing their expression to increase in response to cellular stress (Kinnunen et al. 1992, 1993; Ogawa et al. 1991; Toth et al. 1994). Regarding the clinical value, it is also noteworthy to point out that studies with a single high bolus dose of doxorubicin have been questioned, since they simulate acute cardiotoxicity (Corremans et al. 2019; Gianni et al. 2008). Therefore, a more relevant experimental design for doxorubicin-induced cardiotoxicity is low-dose, repeated administration, as this is how the drug is used in clinics (Vejpongsa and Yeh 2014).

Regarding the translational value, it is important to also compare the doxorubicin dose used in the present experiments to the doses used in humans. In clinical use, doxorubicin is administered at doses of 40-90 mg/m2 as intravenous infusions lasting at least 15 min, every third week (Vejpongsa and Yeh 2014). For paediatric patients, and in combination with other chemotherapeutics, lower doses are used. When a single dose of 12 mg/kg i.p. was administered to mice, the doxorubicin plasma concentration was 60 ng/ml after 2 h and 20 ng/ml after 24 h (Johansen 1981). Correspondingly, in humans, a 60 mg/m2 dose i.v. resulted in doxorubicin plasma concentrations of 480 ng/ml after 1 h and 40 ng/ml after 24 h (Barpe et al. 2010). Thus, the present model in rats, with a total cumulative dose of 10 mg/kg over 10 days, roughly resembles a subchronic cardiotoxicity model. On the other hand, the cumulative dose of 10 mg/kg in rats has been estimated to correspond to 400 mg/m2 in humans (80 kg, 183 cm) (Hayward and Hydock 2007). Moreover, the in vitro concentrations of 3 and 1 µM compare to the initial plasma levels of doxorubicin detected in patients after a bolus administration, whereas concentrations of 300 and 100 nM compare to the plasma levels that are reached within a few hours after doxorubicin administration and are maintained by continuous infusion (Creasey et al. 1976; Greene et al. 1983; Muller et al. 1993; Speth et al. 1987).

Fig. 3 The effects of doxorubicin (DOX) and 3i-1000 on expression of pro-B-type natriuretic peptide (proBNP) and caspase activation in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) after long-term exposure. For high-content analysis, the cells were exposed simultaneously to 100 nM DOX and 3i-1000 for 4, 7 or 14 days, after which they were fixed and stained. Imaging and analysis were carried out using the CellInsight High-Content Screening Platform. a Representative images of proBNP staining after a 4-day exposure. b Proportion of proBNP positive hiPSC-CMs. c Representative images of caspase staining after a 4-day exposure. d Proportion of hiPSC-CMs positive for the fluorescent caspase-3/7 activity reporter. Adjustments of individual colour channels to enhance brightness and contrast were made identically to all representative images. The results are expressed as mean + SEM (n = 3-4). ***P < 0.001 vs. control; **P < 0.01 vs. control; *P < 0.05 vs. control (randomized block ANOVA followed by Tukey's HSD). cTnT cardiac troponin T (colour figure online)

Fig. 4 The effect of short-term doxorubicin exposure on the viability of human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) and neonatal rat ventricular myocytes (NRVMs). To study cell viability, a hiPSC-CMs and b NRVMs were exposed simultaneously to doxorubicin and 3i-1000 for 48 h, after which the MTT assay was performed. The results are expressed as mean + SEM (n = 3-4). ***P < 0.001 vs. control; **P < 0.01 vs. control (randomized block ANOVA followed by Tukey's HSD)

Fig. 5 The effects of doxorubicin (DOX) and 3i-1000 on GATA4 levels and caspase activation in human induced pluripotent stem cell-derived cardiomyocytes (hiPSC-CMs) and neonatal rat ventricular myocytes (NRVMs) after short-term exposure. For high-content analysis, the cells were exposed simultaneously to 100 nM DOX and 3i-1000 for 24 h, after which they were fixed and stained. Imaging and analysis were carried out using the CellInsight High-Content Screening Platform. Representative images of GATA4 staining in a hiPSC-CMs and b primary NRVM cultures. c Average intensity of nuclear GATA4 staining. d Proportion of cardiomyocytes positive for the fluorescent caspase-3/7 activity reporter. Adjustments of individual colour channels to enhance brightness and contrast were made identically to all representative images. The results are expressed as mean + SEM (n = 3-4). *P < 0.05 vs. control (randomized block ANOVA followed by Tukey's HSD) (colour figure online)

Using these in vitro and in vivo models, we studied whether the GATA4-targeted compound 3i-1000 has cardioprotective effects on doxorubicin-induced cardiotoxicity. GATA4 is a member of the GATA family of zinc finger transcription factors, and was originally discovered as a regulator of cardiac development and subsequently identified as a major regulator of cardiac hypertrophy and cell survival (Pikkarainen et al. 2004; Suzuki 2011; Tremblay et al. 2018). Several hypertrophic stimuli directly regulate GATA4 DNA-binding and transcriptional activity in vitro (Hasegawa et al. 1997; Majalahti et al. 2007; Morimoto et al. 2000; Morisco et al. 2001; Kerkelä et al. 2002) and in vivo (Hautala et al. 2001; Majalahti et al. 2007). Moreover, mechanical stretch transiently increases GATA4 DNA-binding activity and transcript levels, followed by increases in the expression of the BNP, ANP, and skeletal α-actin genes (Pikkarainen et al. 2003). Interestingly, GATA4 overexpression alone induces hypertrophic myocardial cell growth and hypertrophic gene expression in GATA4 transgenic mice. Similarly, overexpression of GATA4 in cell culture by adenoviral gene transfer induces cardiomyocyte hypertrophy and sarcomere reorganization as efficiently as endothelin-1 and phenylephrine (Charron et al. 2001). For the stress-induced cardiac hypertrophic response, phosphorylation of GATA4 at Ser-105 has been shown to be necessary (van Berlo et al. 2011). Recently, we have reported the identification of small molecules that either inhibit or enhance the GATA4-NKX2-5 transcriptional synergy (Jumppanen et al. 2019; Välimäki et al. 2017). The most potent inhibitor of the GATA4-NKX2-5 interaction, 3i-1000, had no influence on baseline GATA4 protein levels in NRVMs, whereas the phenylephrine-induced elevation in GATA4 Ser-105 phosphorylation was significantly inhibited by 3i-1000. Although the exact mechanisms of action remain to be established, the compound 3i-1000 inhibits BNP transcription, and stretch-, endothelin-1- and phenylephrine-stimulated gene expression of ANP and BNP, as well as hypertrophic cell growth in cardiomyocytes, while having no effect on GATA4 or NKX2-5 DNA binding or on the activity of protein kinases involved in the regulation of GATA4 phosphorylation (Välimäki et al. 2017; Kinnunen et al. 2018).
Moreover, enhanced cardiac function in vivo in experimental models of myocardial infarction and hypertension has been observed. Importantly, in our present experiments, the compound protected from doxorubicin-induced cardiac damage, as reflected by the restoration of LV ejection fraction and fractional shortening in doxorubicin-treated animals. Interestingly, the improvement in cardiac function by 3i-1000 was not associated with a decrease in left ventricular ANP and BNP mRNA levels, suggesting a direct effect of doxorubicin on LV natriuretic peptide gene expression.

The compound 3i-1000 showed cardioprotective effects also in vitro. It attenuated the doxorubicin-induced increase in proBNP expression in hiPSC-CMs after a 4-day exposure. Moreover, exposure to 3i-1000 at 3 µM and 10 µM concentrations attenuated the doxorubicin-induced increase in caspase activation for up to 14 days. The long-term exposures (up to 21 days), however, revealed toxic effects of 3i-1000 in cardiomyocytes. In our previous study (Karhu et al. 2018), the toxicity of eight compounds (3i-1000 and its derivatives) at concentrations ranging from 10 nM to 30 µM was studied in detail on the viability of eight different cell types. In those short-term experiments (24 h), 3i-1000 was non-toxic to cardiomyocytes (NRVMs, hiPSC-CMs), fibroblasts and H9c2 cardiac myoblasts. Interestingly, stem cells were very sensitive in detecting the toxicity of 3i-1000, and a structure-toxicity analysis of all compounds revealed a characteristic dihedral angle in the GATA4-targeted compounds that may cause stem cell toxicity (Karhu et al. 2018). Overall, our present in vitro results show that the protective effects of 3i-1000 on doxorubicin-induced cardiotoxicity are dependent on dose and treatment time, and also suggest distinct mechanisms of action for doxorubicin- and 3i-1000-induced cardiotoxicities. Doxorubicin had a direct effect on DNA content in cardiomyocytes, leading to caspase activation and apoptosis, whereas compound 3i-1000 had no direct effect on DNA content but increased caspase activity. Moreover, the present results not only show that 3i-1000 protected cardiomyocytes from the doxorubicin-induced elevation of proBNP expression but also that doxorubicin protected cardiomyocytes from 3i-1000-induced caspase activation. Cell viability data at the 7- and 14-day time points show the same effect: exposure to 3i-1000 at 10 µM concentration alone decreased hiPSC-CM viability, but this effect was attenuated by co-exposure to 100 nM doxorubicin. Thus, it is possible that targeting GATA4 with 3i-1000 may be detrimental to healthy cardiomyocytes in the long term. On the other hand, when cardiomyocytes are exposed to stressors (e.g. doxorubicin), co-treatment with 3i-1000 has protective effects in cardiomyocytes. Furthermore, together with our previous toxicological analysis of 3i-1000 and its derivatives, the present data support further development of 3i-1000 derivatives.

Interestingly, in the present study doxorubicin had no effect on GATA4 protein levels either in vivo or in hiPSC-CMs, even after long-term exposure. Statistically significant changes in GATA4 levels were detected only in NRVMs after short-term doxorubicin exposure. In previously published in vitro studies, in which doxorubicin was shown to decrease GATA4 mRNA and protein levels, doxorubicin concentrations were higher and exposure times shorter, ≤ 24 h (Aries et al. 2004, 2014; Kim et al. 2003; Kobayashi et al. 2006, 2010). Similarly, in the prior in vivo studies, mice were treated with a single high-dose injection of doxorubicin (Aries et al. 2004; Kobayashi et al. 2006). Therefore, it is possible that the changes in GATA4 levels are related to short-term high-dose doxorubicin treatments. However, interspecies differences both in vitro and in vivo cannot be ruled out. Furthermore, the maturity level of the cells and the potential limitations it may entail should be considered when utilizing hiPSC-CMs. Although more investigations are needed in the future to fully understand the exact mechanisms of action of doxorubicin as well as of the GATA4-targeted compound 3i-1000, our current results suggest that their mechanisms of action are not related to obvious changes in GATA4 protein levels.

Regarding preclinical drug development, our results highlight the importance of choosing an appropriate experimental model for compound testing already in the early phases of drug discovery projects. The delayed toxicity of the GATA4-targeted compound 3i-1000 demonstrates the significance of using longer exposure times in in vitro toxicity screening, which is possible when using hiPSC-CMs, as these cells can be cultured for significantly longer periods of time compared to primary cardiomyocytes. Utilizing differentiated human cells also eliminates the influence of interspecies differences and helps to reduce the use of experimental animals. Furthermore, choosing a suitable model is a key element also in investigating the mechanism of doxorubicin cardiotoxicity, as reflected by the lack of doxorubicin-induced GATA4 protein depletion in response to chronic low-dose treatments in hiPSC-CMs.

Fig. 6 The in vivo animal model of doxorubicin (DOX) cardiotoxicity. The rats received saline or DOX 1 mg/kg/day for 10 days and were followed up to 11 weeks. a The experimental design of chronic DOX cardiotoxicity in rats. DOX was administered at 1 mg/kg/day i.p. for 10 days. b, c Cardiac function was measured by echocardiography at 2, 7, 9 and 11 weeks. d The survival of the DOX-treated rats decreased quickly after 9 weeks. The results are expressed as mean ± SEM; *P < 0.05 vs. control (Welch's t test). Number of animals at the 2-, 7- and 9-week time points: NaCl = 3, DOX = 6; at the week 11 time point: NaCl = 3, DOX = 3. Panels b-d show data from animals that were not treated with 3i-1000. ECHO echocardiography

Fig. 7 The effect of 3i-1000 on doxorubicin (DOX) cardiotoxicity in vivo. The rats received saline or DOX 1 mg/kg/day for 10 days and were then treated with compound 3i-1000 30 mg/kg/day or DMSO vehicle (V) for 2 weeks (weeks 8 and 9). a-d Left ventricular functional and structural changes measured by echocardiography at the end of the experiments. e, f mRNA extracted from left ventricles and measured by RT-PCR. The levels of transcripts were normalised to ribosomal 18S quantified from the same samples. The results are expressed as mean + SEM; *P < 0.05 (independent-samples t test). Number of animals: NaCl + V = 10; DOX + V = 9 (except for e, f, DOX + V = 8); DOX + 3i-1000 = 8. LVPW left ventricular posterior wall thickness, LVID left ventricular internal dimension

In summary, long-term exposure of hiPSC-CMs is a useful in vitro model to investigate the mechanisms of delayed doxorubicin-induced cardiotoxicity and novel cardioprotective therapies. The GATA4-targeted compound 3i-1000 exhibited cardioprotective potential in vitro as well as in vivo.
Under chronic exposure, however, the compound was toxic to cardiomyocytes; hence, further structural optimization is required to develop non-toxic derivatives.
Effects of Poly(Vinylidene Fluoride-co-Hexafluoropropylene) Nanocomposite Membrane on Reduction in Microbial Load and Heavy Metals in Surface Water Samples: In this work, nanocomposite membranes were prepared using silver nanoparticles (Ag) attached to poly(amidoamine) dendrimer (P)-functionalised multi-walled carbon nanotubes (CNTs) blended with poly(vinylidene fluoride-co-hexafluoropropene) (PVDF-HFP) polymeric membranes (i.e., AgP-CNT/PVDF-HFP) via the phase inversion method. The nanocomposites were characterised and analysed via transmission electron microscopy (TEM), scanning electron microscopy (SEM), energy-dispersive spectroscopy (EDX), thermal gravimetric analysis (TGA) and Brunauer–Emmett–Teller (BET) analysis. The TEM and EDX analyses confirmed the presence of Ag nanoparticles on the nanocomposites, while the SEM and BET data showed the spongy morphology of the nanocomposite membranes with improved surface areas. The analysis of surface water samples collected from the Sekhukhune district, Limpopo Province, South Africa indicated that the water could not be used for human consumption without being treated. The nanocomposite membranes significantly reduced the physicochemical parameters of the sampled water, such as turbidity, TSS, TDS and carbonate hardness, to 4 NTU, 7 mg/L, 7.69 mg/L and 5.9 mg/L, respectively. Significant improvements in microbial load (0 CFU/mL) and BOD (3.0 mg/L) reduction were noted after membrane treatment. Furthermore, toxic heavy metals such as chromium, cadmium and nickel were remarkably reduced to 0.0138, 0.0012 and 0.015 mg/L, respectively. The results clearly suggest that the AgP-CNT/PVDF-HFP nanocomposite membrane can be used for surface water treatment.

Introduction Inadequate access to clean water remains a major challenge in many developing countries, affecting mostly rural areas. The majority of people living in developing and underdeveloped countries rely greatly on surface water as their primary source of water for domestic usage due to a lack of potable water supplies. The consumption of polluted water may cause waterborne diseases such as typhoid, diarrhoea, dysentery, cholera and organ damage, which lead to acute health challenges throughout society, heavily impacting children below five years of age [1,2]. These diseases are caused by the ingestion of water contaminated with microscopic organisms (such as viruses and bacteria), disinfection byproducts and heavy metals [3,4]. These contaminants have rapidly increased within surface water due to modern agricultural, household and industrial activities. Microbes such as coliforms, Escherichia coli (E. coli) and Enterobacteriaceae are used as water quality indicators when assessing the safety of water for human consumption [5]. The determination of the quality of potable water should be performed by assessing its physical, chemical and biological characteristics before usage. These are analysed against the standards for drinking water outlined by the World Health Organization (WHO) and the South African National Standard (SANS 241) [6-8].
Amongst the heavy metals associated with severe health effects [9] are nickel (Ni), cadmium (Cd), chromium (Cr), iron (Fe), zinc (Zn) and copper (Cu). Although the heavy metals Fe, Cu and Zn play an essential role in the human body, at high concentrations these metals can lead to severe health complications (such as diabetes, vomiting, liver damage and kidney disease), which are mainly associated with exposure to toxic metals like Cr(VI), Ni and Cd [10,11]. Recent reports from various studies have noted higher-than-normal concentrations of Cr, Fe, Ni and Zn in water collected from the Olifants [12] and Dzindi [13] rivers, as well as from wastewater treatment in Durban, South Africa [9].

Ahmed et al. [14] synthesised silver nanoparticles, and their results revealed a decrease in the concentrations of various physicochemical parameters (expressed as percentages) of textile effluents, such as pH (65%), electrical conductivity (55%), hardness (58%), total suspended solids (TSS) (75%), biological oxygen demand (BOD) (66%) and total dissolved solids (TDS) (76%). Mustapha et al. [15] showed that kaolin/ZnO nanocomposites had improved adsorption performance, with the largest reductions in the chemical oxygen demand (COD) (95%), BOD (94%), Fe(III) (98%), Cr(VI) (100%) and chloride (78%) [15]. In another study, by Fanta et al. [16], a synthesised copper-doped zeolite composite adsorbent effectively reduced Cd to 0.005 mg/L (99%), Cr to undetectable levels and BOD to 6.54 mg/L (70%) in wastewater collected from the Akaki river. Although some of the above studies achieved undetectable levels of chromium, the reported cadmium concentration levels are still above the acceptable limits indicated by the WHO [6] and SANS 241 [8] guidelines. It is, therefore, necessary to ensure that the developed composite material is able to reduce both the physicochemical parameters and the heavy metal levels to below the acceptable limits.

Although various water purification methods exist today, they are usually too expensive to be implemented by ordinary citizens due to their excessive usage rates and/or a lack of technical operational skills [2]. Common processes such as sand filtration, settlement, coagulation and chlorination are usually utilised for the treatment of contaminated water in various industries, although it remains a challenge for the purified water to meet drinking water standards [17]. This highlights the importance of extensive research to develop improved, low-cost water purification methods that utilise less energy [17]. A solution to the scarcity of safe drinking water would be to find economic methods of purifying surface water for domestic usage.

Somma et al. [18] reviewed a variety of water purification methods, such as adsorption, membrane separation and bioremediation, and noted that adsorption seemed to be the preferred method due to its simplicity and cost-effectiveness. Although the adsorption technique is useful, especially with regard to fast preparation processes, membrane technology also offers some advantages. For example, membrane technology is a useful technique for the removal of organic and inorganic water contaminants. It works by providing a selective barrier that allows certain substances, called the permeate, to pass through, while leaving behind other substances (the retentate). Zhang et al.
[19] utilised graphene oxide quantum dots to produce thin-film nanocomposite (TFN) membranes for the treatment of water polluted by the methylene blue and Congo red dyes. The results showed good antifouling properties and approximately 99% removal efficiency for both the methylene blue and Congo red dyes. Wei et al. [20] purified water polluted by several phthalates using hollow-fibre nanofiltration membranes. Good removal efficiency of up to 95% for both di-n-octyl phthalate and diethylhexyl phthalate was obtained.

In our previous studies [21], the Ag-MWCNT/PVDF-HFP nanocomposite membrane exhibited good fouling resistance, microbial load reduction (100%), non-leaching properties and excellent bactericidal effects in simulated water samples. The addition of poly(amidoamine) dendrimers, with their highly branched 3D structures, is an excellent choice for the functionalisation of carbon materials, as well as for metal ion complexing; these improve the overall surface areas of nanocomposite materials [22,23]. Studies have indicated that carbon materials functionalised with poly(amidoamine) dendrimer encapsulating silver nanoparticles show better solubility and antibacterial properties [24-26].

Herein, we prepared a nanocomposite membrane containing Ag nanoparticles attached to P-CNT/PVDF-HFP. The study aimed at evaluating the microbial reduction, selected physicochemical parameters and the heavy metals within surface water samples collected from the Sekhukhune Municipality, Limpopo Province, South Africa. The effects of the first-generation poly(amidoamine) dendrimer on the dispersity of the Ag nanoparticles and the performance of the entire nanocomposite membrane were investigated during the surface water analysis.

Preparation of AgPCNT/PVDF-HFP Nanocomposite Membranes For the preparation of PVDF-HFP blended with Ag and the poly(amidoamine) MWCNT nanocomposite (i.e., 1.8 wt.% AgPCNT), the procedure was as follows: PVDF-HFP (2 g) was dissolved in N,N-dimethylacetamide (15 mL) at 85 °C while stirring, followed by the addition of PVP (0.6 g). The mixture was stirred for 2 h at 85 °C to obtain a PVDF-HFP polymer solution. In a separate container, silver poly(amidoamine) multi-walled carbon nanotubes (AgPCNTs) (0.037 g) were sonicated in 5 mL of N,N-dimethylacetamide for a period of 30 min. The sonicated AgPCNTs were then added to the PVDF-HFP polymer solution, which was stirred for an additional 1 h. The resultant polymer was then hand-cast on a glass plate using a casting knife at 180 µm thickness. The membrane formed (AgP-CNT/PVDF-HFP) was pre-dried in an oven at 55 °C for about 35 s to pre-evaporate the solvent, and this was followed by coagulation in a water bath (6 °C) and drying on paper sheets.

Characterisation of Nanocomposite Membranes The TEM analysis of the nanocomposites was performed on a JEOL JEM-2100 transmission electron microscope operated at 200 kV. The morphologies of the membranes were examined using a focused ion beam scanning electron microscope (Auriga Zeiss-39-42 SEM with Gemini FE-SEM column). The EDS analysis of the nanocomposite membranes was undertaken using the same focused ion beam scanning electron microscope. The BET analysis was undertaken on a Micromeritics ASAP 2020 instrument to investigate the surface properties of the nanocomposite membranes. The thermal gravimetric analysis was performed using a TGA Q500 (V20.13 Build 39) by heating the samples from 30 to 800 °C at a ramping rate of 10 °C/min under nitrogen gas.
Permeation Tests of Nanocomposite Membranes To study the swellability of the PVDF-HFP nanocomposite membranes, the membranes were initially weighed on a weighing balance, followed by soaking in distilled water for 7 h before weighing again. The swellability ($Q_t$) was calculated as follows:

$$Q_t = \frac{m_w - m_c}{m_c} \times 100\%$$

where $m_c$ and $m_w$ are the masses of the membrane (in g) before and after soaking in distilled water, respectively. The water content and porosity measurements were performed by immersing the nanocomposite membranes in distilled water, followed by weighing the wet membrane ($W_0$) after 24 h. The wet membrane was dried in an oven for 24 h at 85 °C, followed by weighing again to determine the dry weight ($W_1$). The water content (WC) was calculated using the equation below:

$$WC = \frac{W_0 - W_1}{W_0} \times 100\%$$

Porosity (P) was calculated as follows:

$$P = \frac{W_0 - W_1}{A\,h\,d} \times 100\%$$

where A is the surface area of the nanocomposite membrane, h is the thickness of the membrane and d is the density of water at room temperature.

Water Sampling The collection of surface water (into sterile bottles) for analysis in this study was conducted at 3 different locations: the Makotswane dam, the Olifants river (near the Flag Boshielo dam) and a furrow at Apel Cross in the Sekhukhune district, Limpopo Province, South Africa. These water sources were used by the local and nearby communities for domestic purposes. The water samples were collected into previously sterilised bottles and placed on ice before analysis in the Microbiology Laboratory at the University of Limpopo. The pooled water samples collected from these areas were analysed within 4 h of sampling, and the pooled water samples for selected heavy metal analysis were preserved in 1% nitric acid.

The TSS was calculated as follows:

$$\mathrm{TSS}\ (\mathrm{mg/L}) = \frac{(B - A) \times 10^{6}}{C}$$

where A and B are the weights of the membrane (in g) before and after filtration, respectively, and C indicates the filtered water volume (in mL). A multi-parameter analyser (HI 991300 pH/EC/TDS meter) was used for the pH, total dissolved solids (TDS) and conductivity measurements. The TDS was calculated as follows:

$$\mathrm{TDS}\ (\mathrm{mg/L}) = \frac{(B - A) \times 10^{6}}{C}$$

where A and B are the weights of the beaker (in g) before and after evaporation, while the evaporated water sample volume (in mL) is indicated by C. The fold decrease (FD) was calculated as follows:

$$\mathrm{FD} = \frac{A}{B}$$

where A and B are the initial and final values of the parameter (i.e., concentration), respectively.

Microbiological and Elemental Analysis of Treated Water Using Synthesised Membranes The prepared nanocomposite membranes were used to filter 100 mL water samples. The microbiological analysis included the measurement of E. coli, total coliforms, Enterobacteriaceae and the aerobic count, performed before and after filtration. The BOD5 experiment was conducted following the manufacturer's instructions (UV/Vis Nanocolor user manual). The BOD5 measurement of the control mixture was performed immediately and that of the sample was performed after 5 days, also using the UV/Vis NANOCOLOR® spectrometer. The elemental analysis of the water samples before and after treatment was performed using atomic absorption spectroscopy (AAS). The metal percentage reduction was calculated as follows:

$$R\ (\%) = \frac{C_i - C_f}{C_i} \times 100$$

where $C_i$ and $C_f$ are the initial and final concentrations, respectively.
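To make the above bookkeeping concrete, the following is a minimal Python sketch of these metrics. It is illustrative only and not code from the study: the function and variable names are ours, the unit conversions assume weights in grams and volumes in millilitres as stated above, and the example numbers at the bottom are hypothetical.

```python
# Minimal sketch of the membrane/water-quality metrics defined above.

def swellability(m_c: float, m_w: float) -> float:
    """Swellability Q_t (%): mass gain after soaking in distilled water."""
    return (m_w - m_c) / m_c * 100.0

def water_content(w0: float, w1: float) -> float:
    """Water content WC (%) from wet (w0) and dry (w1) membrane weights."""
    return (w0 - w1) / w0 * 100.0

def porosity(w0: float, w1: float, area_cm2: float, thickness_cm: float,
             water_density: float = 1.0) -> float:
    """Porosity P (%): pore water volume over total membrane volume (d in g/cm^3)."""
    return (w0 - w1) / (area_cm2 * thickness_cm * water_density) * 100.0

def tss_mg_per_l(a_g: float, b_g: float, volume_ml: float) -> float:
    """Total suspended solids (mg/L) from filter weights before/after filtration."""
    return (b_g - a_g) * 1e6 / volume_ml  # g -> mg (1e3) and mL -> L (1e3)

def fold_decrease(initial: float, final: float) -> float:
    """Fold decrease FD of a parameter after treatment."""
    return initial / final

def percent_reduction(c_i: float, c_f: float) -> float:
    """Percentage reduction R (%) of a concentration after filtration."""
    return (c_i - c_f) / c_i * 100.0

if __name__ == "__main__":
    # Illustrative, hypothetical numbers only:
    print(f"Q_t = {swellability(0.20, 0.26):.1f} %")
    print(f"R   = {percent_reduction(0.060, 0.0012):.0f} %")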
TEM Analysis of AgCNT and AgP-CNT Nanocomposites Figure 1 shows the TEM images of the AgCNT and AgP-CNT nanocomposites. Both the AgCNT (Figure 1a1) and AgP-CNT (Figure 1b1) composites showed spherical dark spots of Ag nanoparticles attached to the MWCNTs and P-CNTs [29,30]. The Ag nanoparticles and the multi-walled CNT structures were clearly visible at the high TEM magnifications of the composites (Figure 1a2,b2). The Ag nanoparticles were homogeneously distributed on the P-CNT nanocomposite (Figure 1b1), as compared to the AgCNTs (Figure 1a1). The TEM images showed that the poly(amidoamine) dendrimer on the surface of the MWCNTs contributed to the high dispersity of the Ag nanoparticles, due to their attachment at the ends of the amine groups of poly(amidoamine). The dendrimer further contributed to the smaller Ag nanoparticle size of 5.4 nm (as measured from Figure 1b2), compared to the 6.7 nm recorded for the AgCNTs.

SEM Analysis of AgCNT/PVDF-HFP and AgP-CNT/PVDF-HFP Nanocomposite Membranes The SEM images in Figure 2 show the morphologies of the PVDF-HFP, AgCNT/PVDF-HFP and AgP-CNT/PVDF-HFP nanocomposite membranes. The SEM image of AgCNT/PVDF-HFP (Figure 2b1) shows some surface roughness with the addition of AgCNTs to PVDF-HFP; the roughness increased even further for AgP-CNT/PVDF-HFP (Figure 2c1) with the addition of the poly(amidoamine) dendrimer. Both composites had a rough surface when compared to that of the PVDF-HFP membrane (Figure 2a1), as noted in the literature [31]. A small percentage of the MWCNTs and Ag nanoparticles was embedded within the structure of the PVDF-HFP membrane; hence, they were not visible on the surface. The SEM cross-section of PVDF-HFP (Figure 2a2) showed finger-like pores linked to the spongy walls of the membrane. As observed from the cross-sections in Figure 2b2,c2, the presence of AgCNTs and AgP-CNTs on PVDF-HFP slightly increased the surface roughness of the membranes. This resulted in the slight suppression of macrovoids, which is beneficial for water purification, as reported in the literature [32,33]. The EDX analysis of AgCNT/PVDF-HFP (Figure 2d1) and AgP-CNT/PVDF-HFP (Figure 2d2) confirmed the presence of both Ag nanoparticles and functional groups, such as oxygen, attached to the surfaces of the MWCNTs.
The BET specific surface area and pore volume data for the PVDF-HFP, AgCNT/PVDF-HFP and AgP-CNT/PVDF-HFP nanocomposite membranes are presented in Table 1. The PVDF-HFP polymeric membrane had a low surface area of 3.61 m² g⁻¹; however, after modification with AgCNTs, the surface area increased to 3.71 m² g⁻¹, which could be associated with the improved hydrophilicity of the entire membrane structure. The addition of poly(amidoamine) to obtain 1.8 wt.% doping of AgPCNTs on PVDF-HFP resulted in a further increase in the BET surface area (3.82 m² g⁻¹), which could be linked to the improved dispersity of the Ag nanoparticles (as demonstrated by the TEM data) and the crosslinking of the P-MWCNTs with the PVDF-HFP moiety [23]. The nanocomposite membranes showed a type IV(a) isotherm and a sharp capillary condensation step, with pore sizes ranging between 2 and 50 nm (Figure 3 insert). These results indicated that the PVDF-HFP nanocomposite membranes had a mesoporous structure (Figure 3), which was further confirmed by the relative pressure ranging between 0.8 and 0.99 [34].

Figure 4 shows the TGA profiles of the AgCNT, AgP-CNT, PVDF-HFP, AgCNT/PVDF-HFP and AgP-CNT/PVDF-HFP nanocomposite membranes. The TGA profile of Ag/MWCNTs shows a weight loss of approximately 34%, with the remainder being the Ag nanoparticles. However, in the presence of poly(amidoamine), the AgP-CNT nanocomposite showed a weight loss of approximately 68% at 700 °C, which suggests a silver loading of about 32%. The decomposition of the poly(amidoamine) moiety and the carbon nanotube body was indicated by the weight losses at 330 and 460 °C, respectively, which is comparable to the work reported in the literature [35]. PVDF-HFP showed weight loss from 200 to 350 °C, followed by a third step at 400 °C.
Interestingly, the TGA profile of AgP-CNT/PVDF-HFP showed improved stability compared to that of AgCNT/PVDF-HFP, due to the presence of the poly(amidoamine) dendrimer. The two nanocomposite membranes exhibited a sudden weight loss at about 450 °C, which was attributed to the structural loss of the PVDF-HFP nanocomposite, as reported in the literature [36].

Permeation Tests of PVDF-HFP Nanocomposite Membranes The permeation studies were undertaken by monitoring the swellability, porosity, water content and contact angle of the PVDF-HFP membranes, as shown in Table 2. AgP-CNT/PVDF-HFP showed an increase in porosity, swellability and water content when compared to both the PVDF-HFP and AgCNT/PVDF-HFP membranes, which indicates an improvement in hydrophilicity. These results were further supported by a decrease in the contact angle from 78° to 64° after the addition of AgP-CNTs to PVDF-HFP.
Analysis of Surface Water Samples Table 3 shows the physicochemical properties of the surface water before and after treatment with the PVDF-HFP-based membranes. This was undertaken to evaluate the efficacy of the membranes during water purification. Although the conductivity of the raw surface water was already compliant with the SANS 241 [8] and WHO [6] guidelines, it was reduced by 2.6- and 4-fold after membrane filtration when using AgCNT/PVDF-HFP and AgP-CNT/PVDF-HFP, respectively (Table 3). Furthermore, the nanocomposite membrane filters significantly improved the colour of the surface water to produce transparent water, which is desirable, as colour is an important aesthetic property of drinking water [8]. The turbidity and total suspended solids were higher than the acceptable limits in all raw river water samples [37]. Interestingly, after membrane treatment, the particulate suspended materials and dissolved substances contained in the surface water were significantly reduced. This observation was supported by the large decreases in parameters such as turbidity, TSS and TDS (Table 3), as reported in the literature [14]. The pH was within an acceptable range, between 5 and 9.7 (Table 3), indicative of alkaline water. Treatment with the AgP-CNT/PVDF-HFP nanocomposite membranes neutralised the pH of the treated water samples. The TDS is directly proportional to the electrical conductivity and is influenced by the type and amount of dissolved inorganic salts [38]. The TDS and carbonate hardness were also significantly reduced to acceptable limits [6,8], which indicates the effectiveness of the membranes in improving the purity of the surface water.

The BOD measurements of the raw water were greatly reduced after filtration, which demonstrates the efficacy of the AgP-CNT/PVDF-HFP nanocomposite membranes. Water with a low BOD can remain safe for a longer time because the low level of organic matter provides few nutrients for microbial growth. A 5-day BOD between 1 and 2 mg/L indicates very clean water, 3 to 5 mg/L indicates moderately clean water and over 8 mg/L indicates severely polluted water [38]. The physicochemical evaluation of the treated water indicated that the water quality was improved by both the AgP-CNT/PVDF-HFP and AgCNT/PVDF-HFP nanocomposite membranes. Interestingly, water samples treated with the AgP-CNT/PVDF-HFP nanocomposite membrane gave even better results than those treated with the AgCNT/PVDF-HFP nanocomposite membrane.

Microbial Analysis The enteric bacteria, E. coli, total coliforms and aerobic count were analysed in the surface water samples collected from the Sekhukhune district and are recorded in Table 4. The microbiological quality of the surface water was found to be poor; hence, it was not suitable for home use and had to be treated before human consumption. Following treatment with the AgCNT/PVDF-HFP nanocomposite membrane, the levels of enteric bacteria, E. coli, total coliforms and aerobic count were reduced to 21, 0, 21 and >4.9 × 10⁵, respectively. However, upon treatment with the AgP-CNT/PVDF-HFP nanocomposite membrane, the levels of enteric bacteria, E. coli, total coliforms and aerobic count were all reduced to zero. These results indicate that the quality of the water samples was improved to acceptable levels for all classes of microorganisms when filtered with the AgP-CNT/PVDF-HFP nanocomposite membrane, which is attributed to its high BET surface area (Table 1) [16].
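As a small, self-contained illustration of the 5-day BOD banding quoted above [38], the helper below maps a BOD5 reading to its qualitative class. This is a hypothetical helper of our own, not part of the study's analysis; note that the quoted bands leave the 2-3 and 5-8 mg/L ranges unassigned, which the sketch makes explicit.

```python
def bod5_class(bod5_mg_per_l: float) -> str:
    """Classify water quality from a 5-day BOD reading (mg/L), per the bands in [38]."""
    if bod5_mg_per_l <= 2.0:
        return "very clean"
    if 3.0 <= bod5_mg_per_l <= 5.0:
        return "moderately clean"
    if bod5_mg_per_l > 8.0:
        return "severely polluted"
    return "unassigned band"  # 2-3 and 5-8 mg/L are not banded in the text

print(bod5_class(3.0))  # BOD after membrane treatment reported in this work
```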
Surface Water Heavy Metal Analysis Table 5 shows the elemental analysis of the water samples measured using atomic absorption spectroscopy (AAS) before and after membrane filtration. The data in Table 5 show that the levels of chromium, nickel and cadmium were above the acceptable levels of the SANS 241 and WHO guidelines [6,8]. The levels of chromium and nickel were relatively similar to those reported for the Dzindi [13] and Olifants [10] rivers and the Durban wastewater treatment plant [9]. The heavy metal analysis indicates that the surface water from the Sekhukhune district is not suitable for consumption if not treated. Fortunately, the concentrations of copper, iron and zinc in the water from the Sekhukhune district were within the recommended levels for drinkable water, although zinc was below the detection limit of the AAS. Interestingly, upon filtration with the AgCNT/PVDF-HFP nanocomposite membrane, the concentration levels of nickel, cadmium and chromium were reduced by 73, 95 and 92%, respectively. Further reductions in nickel, cadmium and chromium (85, 98 and 93%, respectively) were noted when using the AgP-CNT/PVDF-HFP nanocomposite membrane. This improved removal efficiency is associated with the higher specific surface area of the nanocomposite, due to the presence of the poly(amidoamine) dendrimer, as observed elsewhere [23]. The data thus far evidently support the efficacy of the membranes in improving the quality of water for household use.

Furthermore, a comparison with a variety of nanoparticles and nanocomposite materials is shown in Table 6. The AgP-CNT/PVDF-HFP nanocomposite membrane showed better heavy metal reduction, providing levels that fall within the limits set by the SANS 241 [8] and WHO [6] guidelines [15,16,39]. This nanocomposite membrane also demonstrated good microbial reduction, comparable with the work reported in the literature [16]. Following the filtration analysis, the surface of the AgP-CNT/PVDF-HFP nanocomposite membrane was further investigated using the SEM and EDX techniques (Figure 5a,b). The surface of the membrane appeared rougher, mainly due to suspended particles with sizes ranging between 5 and 20 µm (Figure 5a). This is consistent with the removal of the total suspended solids from the surface water, as depicted in Table 3. When the surface of the membrane was studied by EDX, most of the investigated metals were detected; these results correlated with the AAS data reported in Table 5. In our previous studies, it was shown that these types of membranes can be easily regenerated and are not easily fouled [21,40].
Conclusions The TEM and EDX analyses confirmed the presence of Ag nanoparticles with diameters ranging between 5 and 7 nm on the surfaces of the AgP-CNT/PVDF-HFP nanocomposite membranes. The presence of the poly(amidoamine) dendrimer improved the dispersity of the Ag nanoparticles, as well as the stability of the nanocomposite membrane. This was further confirmed by the TEM and TGA data, as well as the increased BET surface area. SEM images showed the spongy morphology of the nanocomposite membranes, with pores well distributed on the surface. The AgP-CNT/PVDF-HFP nanocomposite membrane demonstrated efficacy in improving the physicochemical and microbiological properties of contaminated river water in the water purification analysis, thus rendering the water potable and suitable for human use. The nanocomposite membranes significantly reduced the physicochemical parameters, such as the conductivity, colour, turbidity, TSS, pH, TDS and carbonate hardness, within the sampled surface waters. Furthermore, it is important to mention the improvements in the microbial load, BOD and heavy metal reduction after the membrane filtration of the surface water samples. In the present study, the AgP-CNT/PVDF-HFP nanocomposite membranes significantly reduced both the microbial load and the heavy metals in the surface water samples.

Table 3. Physicochemical properties of surface water samples before and after membrane filtration treatment. Table 4. Levels of bacteria in surface water samples before and after membrane treatment. Table 5. Elemental analysis of surface water before and after membrane treatment. Table 6. Comparison of some nanocomposites in the removal of microbes and heavy metals.
Controllability of Fractional Neutral Stochastic Integro-Differential Systems with Infinite Delay This paper is concerned with the controllability of a class of fractional neutral stochastic integro-differential systems with infinite delay in an abstract space. By employing fractional calculus and Sadovskii's fixed point principle, without assuming a severe compactness condition on the semigroup, a set of sufficient conditions is derived for achieving the controllability result.

Introduction It is well known that fractional calculus is a classical mathematical notion and a generalization of ordinary differentiation and integration to arbitrary (non-integer) order. Nowadays, studying fractional-order calculus has become an active research field [1-7]. Much effort has been devoted to applying fractional calculus to network control. For example, Chen et al. [8], Delshad et al. [9], and Wang and Zhang [10] studied synchronization for fractional-order complex dynamical networks; Zhang et al. [11] investigated a fractional-order three-dimensional Hopfield neural network and pointed out that chaotic behaviors can emerge in a fractional network; Kaslik and Sivasundaram [12] discussed local stability for fractional-order neural networks of Hopfield type by applying the linear stability theory of fractional-order systems.

One of the emerging branches of this study is the theory of fractional evolution equations, that is, evolution equations where the integer-order derivative with respect to time is replaced by a derivative of fractional order. The increasing interest in this class of equations is motivated both by their application to problems from the fluid dynamic traffic model, viscoelasticity, heat conduction in materials with memory and electrodynamics with memory, and also because they can be employed to approach nonlinear conservation laws (see [13] and references therein). In addition, neutral stochastic differential equations with infinite delay have become important in recent years as mathematical models of phenomena in both science and engineering, for instance in the theory developed in Gurtin and Pipkin [14] and Nunziato [15] for the description of heat conduction in materials with fading memory. It should be pointed out that deterministic models often fluctuate due to noise, which is random or at least appears to be so. Therefore, we must move from deterministic problems to stochastic ones. We mention here the recent papers [16,17] concerning the existence of mild solutions of fractional stochastic systems.

As one of the fundamental concepts in mathematical control theory, controllability plays an important role in both deterministic and stochastic control problems, such as the stabilization of unstable systems by feedback control. Roughly speaking, controllability generally means that it is possible to steer a dynamical control system from an arbitrary initial state to an arbitrary final state using the set of admissible controls. Controllability problems for different nonlinear stochastic systems in infinite-dimensional spaces have been extensively studied in many papers; see [18-22] and references therein. We would also like to mention that the controllability of stochastic systems with infinite delay has been investigated by Balasubramaniam et al. [23,24] and Ren et al.
[25] using some abstract spaces. Nevertheless, to the best of our knowledge, it seems that little is known about the controllability of fractional neutral stochastic differential equations with infinite delay, and the aim of this paper is to close this gap.

In this paper, we are interested in the controllability of a class of fractional neutral stochastic integro-differential systems with infinite delay of the following form: Here, J := [0, b], b > 0, and x_t = {x(t + θ), θ ∈ (−∞, 0]} belongs to the phase space B_h, which will be defined in Section 2. The initial data φ = {φ(θ), θ ∈ (−∞, 0]} is an F_0-measurable, B_h-valued random variable independent of W with finite second moments, and f : J × B_h → H and σ : J × J × H → L⁰₂(K, H) are appropriate mappings specified later (here, L⁰₂(K, H) denotes the space of all Q-Hilbert-Schmidt operators from K into H, which is defined below).

The structure of this paper is as follows. In Section 2, we briefly present some basic notations and preliminaries. The controllability result for system (1) is investigated by means of Sadovskii's fixed point theorem and operator theory in Section 3. The conclusion is given in Section 4.

Preliminaries For more details on this section, we refer the reader to Pazy [26], Da Prato and Zabczyk [27], and Samko et al. [28]. Throughout this paper, (H, |·|_H) and (K, ‖·‖_K) denote two real separable Hilbert spaces. We denote by L(K, H) the set of all linear bounded operators from K into H, equipped with the usual operator norm ‖·‖. In this paper, we use the symbol ‖·‖ to denote norms of operators regardless of the spaces potentially involved when no confusion possibly arises.

Let (Ω, F, {F_t}_{t≥0}, P) be a filtered complete probability space satisfying the usual condition, which means that the filtration is a right-continuous increasing family and F_0 contains all P-null sets. W = (W_t)_{t≥0} is a Q-Wiener process defined on (Ω, F, {F_t}_{t≥0}, P) with covariance operator Q such that Tr Q < ∞. We assume that there exist a complete orthonormal system {e_n}_{n≥1} in K, a bounded sequence of nonnegative real numbers {λ_n} such that Q e_n = λ_n e_n, n = 1, 2, …, and a sequence of independent Brownian motions {β_n}_{n≥1} such that

$$W(t) = \sum_{n=1}^{\infty} \sqrt{\lambda_n}\, \beta_n(t)\, e_n.$$

Let L⁰₂(K, H) be the space of all Hilbert-Schmidt operators from Q^{1/2}K to H with the inner product ⟨φ, ψ⟩ = Tr[φQψ*]. Suppose that 0 ∈ ρ(−A), where ρ(−A) is the resolvent set of −A; then the semigroup S(·) is uniformly bounded, that is to say, ‖S(t)‖ ≤ M, t ≥ 0, for some constant M > 0. Then, for α ∈ (0, 1], it is possible to define the fractional power operator A^α as a closed linear operator on its domain D(A^α). Furthermore, the subspace D(A^α) is dense in H, and the expression ‖x‖_α = |A^α x|_H defines a norm on H_α := D(A^α). The following properties are well known.

(a) If 0 < β < α ≤ 1, then H_α ⊂ H_β and the embedding is compact whenever the resolvent operator of A is compact.

(b) For every α ∈ (0, 1], there exists a positive constant C_α such that ‖A^α S(t)‖ ≤ C_α / t^α, t > 0.

Let us now recall some basic definitions and results of fractional calculus.

Definition 2. The fractional integral of order q > 0 with the lower limit 0 for a function f is defined as

$$I^{q} f(t) = \frac{1}{\Gamma(q)} \int_{0}^{t} \frac{f(s)}{(t-s)^{1-q}}\, ds, \quad t > 0,$$

provided the right-hand side is pointwise defined on [0, ∞), where Γ(·) is the gamma function.

Definition 3. The Caputo derivative of order q with the lower limit 0 for a function f can be written as

$$^{C}D^{q} f(t) = \frac{1}{\Gamma(n-q)} \int_{0}^{t} \frac{f^{(n)}(s)}{(t-s)^{q+1-n}}\, ds, \quad t > 0,\ n-1 < q < n.$$

If f is an abstract function with values in H, then the integrals that appear in the previous definitions are taken in Bochner's sense.
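As a concrete check of Definition 2, the sketch below numerically evaluates the Riemann-Liouville fractional integral and compares it with a known closed form. This is purely illustrative and not part of the paper; the function name is ours, and we use scipy's algebraic-weight quadrature to absorb the integrable kernel singularity at s = t.

```python
import math
from scipy.integrate import quad

def rl_fractional_integral(f, t, q):
    """Riemann-Liouville fractional integral (I^q f)(t), order q > 0, lower limit 0.

    The kernel (t - s)^(q - 1) is handled with quad's 'alg' weight, which
    integrates f(s) * (s - 0)^0 * (t - s)^(q - 1) over [0, t] accurately.
    """
    val, _err = quad(f, 0.0, t, weight="alg", wvar=(0.0, q - 1.0))
    return val / math.gamma(q)

# Sanity check: for f(s) = s, the closed form is I^q f(t) = t^(1+q) / Gamma(2+q).
q, t = 0.5, 2.0
numeric = rl_fractional_integral(lambda s: s, t, q)
exact = t ** (1.0 + q) / math.gamma(2.0 + q)
print(f"numeric = {numeric:.6f}, exact = {exact:.6f}")
```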
Assume that h : (−∞, 0] → (0, +∞) is a continuous function with l := ∫_{−∞}^{0} h(s) ds < +∞. Recall that the abstract phase space C_h is defined in terms of h. If C_h is endowed with the norm

$$\|\varphi\|_{C_h} = \int_{-\infty}^{0} h(s) \sup_{s \le \theta \le 0} |\varphi(\theta)|_H\, ds,$$

then (C_h, ‖·‖_{C_h}) is a Banach space.

At the end of this section, we recall the fixed point theorem of Sadovskii [30].

Lemma 4. Let Φ be a condensing operator on a Banach space H; that is, Φ is continuous, takes bounded sets into bounded sets, and α(Φ(S)) ≤ α(S) for every bounded set S of H with α(S) > 0. If Φ(N) ⊂ N for a convex, closed and bounded set N of H, then Φ has a fixed point in H (where α(·) denotes Kuratowski's measure of noncompactness).

Main Results In this section, we obtain the controllability of system (1). We first present the definition of mild solutions.

Definition 5. An H-valued stochastic process {x(t), t ∈ (−∞, b]} is said to be a mild solution of system (1) if (i) x(t) is F_t-adapted and measurable for each t ≥ 0; (ii) x(t) is continuous on [0, b] almost surely and, for each s ∈ [0, t), the function (t − s)^{q−1} T_q(t − s) f(s, x_s) is integrable, such that the corresponding stochastic integral equation is verified, where the operator families S_q(·) and T_q(·) are defined through a probability density function on (0, ∞).

Definition 6. System (1) is said to be controllable on the interval J if, for every initial stochastic process φ ∈ C_h defined on (−∞, 0], there exists a stochastic control u ∈ L²(J, U), adapted to the filtration {F_t}_{t≥0}, such that the mild solution x(·) of (1) satisfies x(b) = x*, where x* and b are the preassigned terminal state and time, respectively.

The following properties of S_q(·) and T_q(·), which appeared in Zhou and Jiao [7], are useful.

(A_5) Assume that the following relationship holds:

Denote by C((−∞, b], H) the space of all continuous H-valued stochastic processes {x(t), t ∈ (−∞, b]}. Let ‖·‖ be a seminorm defined on this space. We have the following useful lemma, which appeared in Liu et al. [29].

Lemma 8. Assume that x ∈ C; then, for all t ∈ J, x_t ∈ C_h, and the corresponding seminorm estimate holds.

The main object of this paper is to explain and prove the following theorem.

In what follows, we will show that, using the control u(·), the operator has a fixed point, which is then a mild solution of system (1). For φ ∈ C_h, define the operator accordingly, where the control is obtained by substituting φ into (24). Let C_0 be the associated space of processes with zero initial segment; for each x ∈ C_0, the seminorm is finite. Thus, (C_0, ‖·‖) is a Banach space. For r > 0, set B_r ⊂ C_0 to be the corresponding closed ball. Consider the map Π : C_0 → C_0 defined by (34). By a similar argument to (26), we can show that Π is well defined on B_r for each r > 0. Note that showing that the original operator has a fixed point is equivalent to showing that the operator Π has a fixed point. To this end, we decompose Π as Π = Π₁ + Π₂, where the operators Π₁ and Π₂ are defined on B_r, respectively, by (34). Thus, Theorem 9 follows from the next theorem.

Proof. The proof proceeds in several steps.

Step 1. There exists a positive number r such that Π(B_r) ⊂ B_r. If this is not true, then for each positive number r there exists a function x^r(·) ∈ B_r with Π(x^r) ∉ B_r. In view of Lemma 7 and Hölder's inequality, we obtain the first estimate, where the constants are defined in (18) and (19), respectively. Applying the Burkholder-Davis-Gundy inequality and assumption (A_2), we obtain a further bound. On the other hand, in view of (24) and (A_3), and by the same procedure as in (36)-(38), it follows that a similar bound holds, with the constants defined in (18) and (19). Combining the estimates (35) to (40) yields an inequality in r. Dividing both sides of (42) by r and letting r → ∞, we obtain a contradiction to assumption (A_5). Thus, for some positive number r, Π(B_r) ⊂ B_r.

Step 2.
Π₁ is a contractive mapping. Let x, v ∈ B_r. From the assumptions on the mappings involved, it is easy to verify that the required inequality holds. Thus, by the assumptions, we obtain the corresponding bound, where we have used the fact that x_0 = v_0 = 0. Hence, Π₁ is a contraction by (23).

Step 3. We show that the operator Π₂ is compact. Let r > 0 be such that Π₂(B_r) ⊂ B_r. The proof is divided into the following claims. Therefore, there are relatively compact sets arbitrarily close to the set {Π₂(x), x ∈ B_r}; hence, the set {(Π₂x)(t), x ∈ B_r} is also precompact in H. Thus, from the Arzelà-Ascoli theorem, together with the assumptions on f and σ, we conclude that Π₂ is a compact operator. Therefore, Π is a condensing map on B_r. This completes the proof.

Remark 11. In order to describe various real-world problems in the physical and engineering sciences that are subject to abrupt changes at certain instants during the evolution process, impulsive differential equations have been used to model such systems. The technique used here can be extended to establish the controllability of neutral fractional stochastic integro-differential systems with impulsive effects and infinite delay. The controllability result can be obtained by suitably introducing the impulsive effects defined in [19].

Conclusions In this paper, we have studied the controllability of fractional neutral stochastic integro-differential systems with infinite delay in an abstract space. Through fractional calculus and Sadovskii's fixed point principle, we have established sufficient conditions for the controllability of the system considered.
Integrating neural and ocular attention reorienting signals in virtual reality Objective. Reorienting is central to how humans direct attention to different stimuli in their environment. Previous studies typically employ well-controlled paradigms with limited eye and head movements to study the neural and physiological processes underlying attention reorienting. Here, we aim to better understand the relationship between gaze and attention reorienting using a naturalistic virtual reality (VR)-based target detection paradigm. Approach. Subjects were navigated through a city and instructed to count the number of targets that appeared on the street. Subjects performed the task in a fixed condition with no head movement and in a free condition where head movements were allowed. Electroencephalography (EEG), gaze and pupil data were collected. To investigate how neural and physiological reorienting signals are distributed across different gaze events, we used hierarchical discriminant component analysis (HDCA) to identify EEG- and pupil-based discriminating components. Mixed-effects general linear models (GLMs) were used to determine the correlation between these discriminating components and the durations of the different gaze events. HDCA was also used to combine EEG, pupil and dwell time signals to classify reorienting events. Main results. In both EEG and pupil, dwell time contributes most significantly to the reorienting signals. However, when dwell times were orthogonalized against other gaze events, the distributions of the reorienting signals were different across the two modalities, with the EEG reorienting signals leading the pupil reorienting signals. We also found that the hybrid classifier that integrates EEG, pupil and dwell time features detects the reorienting signals in both the fixed (AUC = 0.79) and the free (AUC = 0.77) condition. Significance. We show that the neural and ocular reorienting signals are distributed differently across gaze events when a subject is immersed in VR, but they can nevertheless be captured and integrated to classify target vs. distractor objects to which the human subject orients.

Introduction As humans, we constantly redirect our attention to different objects and stimuli in the environment. The complex set of neural and physiological adjustments we make is known as the reorienting response. The process underlying attention reorienting (i.e. the reorienting response) has been widely studied both in neuroscience and psychology [1-3]. Previous studies have identified neural and physiological signatures of attention reorienting, including pupil dilation and the P300 wave recorded via electroencephalography (EEG) [2,4,5]. These neural and physiological signatures are part of the larger attention networks in the brain, namely the dorsal and ventral attention networks, which have also been functionally linked to the locus coeruleus-norepinephrine (LC-NE) system [1,6,7]. While the relationship between the P300 signal and pupil dilation remains unclear, both have been shown to potentially reflect the phasic activity of the LC nucleus, with the P300 reflecting the cortical signatures of attention reorienting and pupil dilation serving as an index of subcortical LC-NE system activity [1,8,9].
Utilizing these neural and physiological signatures, recent neural engineering studies have developed brain-computer interfaces (BCIs) that can perform simple tasks based on the user's attention reorienting response, such as a P300-based speller and computer cursor control [10,11]. One of the major limitations of prior attention reorienting studies is the unnaturalistic environment in which the subjects performed tasks. These studies typically employed different variations of a cueing or an oddball task presented on a 2D screen to generate the reorienting response [12-14]. While these tasks are simple, well documented and well controlled, they do not represent how humans actually reorient their attention in the real world. Take the simple example of a person driving a vehicle down the street. The driver must constantly reorient their attention to different objects and events in the environment as the vehicle moves forward. These objects may be task-relevant, such as a pedestrian crossing the street, or task-irrelevant, such as an on-ramp sign. At the same time, the real-world field of view is much wider than that of a screen, requiring the person to move not only their eyes but also their head to constantly monitor the surrounding environment. To better understand the neural and physiological basis of attention reorienting in real-world scenarios, a more naturalistic experimental paradigm is needed. This understanding would potentially translate to more robust and reliable attention-based BCI systems that are not confined to a 2D screen and instead enable more natural eye and head movements. In this study, we employ an immersive 3D-based target detection paradigm presented in a head-mounted virtual reality (VR) display to study attention reorienting signals in a naturalistic and dynamic setting. Subjects travel through a simulated city environment in a moving vehicle with blank white billboards located between buildings on the left- and right-hand sides of the street. They are instructed to count the number of target images that appear on the billboards during each experimental run. Subjects perform the target detection task under two conditions, one without head movement as a control condition and one with head movement as a more naturalistic condition. We simultaneously collect the subjects' EEG, pupil diameter, gaze position and head rotation data. Our aims are twofold. First, we aim to better understand the relationship between eye movements and the reorienting response. In previous reorienting studies, the traditional experimental paradigm typically only allows for minimal or well-controlled eye movements. However, in more naturalistic conditions such as the one in the current study, the eye and head movements of the subjects are coupled to the reorienting process. This effectively decomposes the reorienting process across these movements. Therefore, we aim to investigate how the neural and ocular reorienting signals are reflected in this decomposition. To achieve this goal, we first employ temporal-based EEG-only and pupil diameter-only classifiers to identify the neural and ocular reorienting signatures that differentiate between target and distractor stimuli responses. We then perform general linear model (GLM) analysis to determine the correlation between the lengths of different gaze events and the reorienting signatures derived from the classifiers.
We show that while the dwell time contributes the most to the reorienting response, the distributions are different between the two modalities, with the EEG reorienting response leading the pupil reorienting response. Second, we aim to capture and integrate the neural and physiological response underlying attention reorienting in a naturalistic environment. We employ a hierarchical hybrid classifier combining EEG, pupil diameter and dwell time to classify the object the subject observes during each trial. We show that the hybrid classifier successfully captures neural and ocular reorienting signals and can classify the target object with relatively high accuracy even when the subject moves their head in a naturalistic environment.

Subjects Twenty healthy volunteer subjects (15 male, 5 female, aged 18-40 years old) were recruited for this study. Subjects did not report any neurological illness or medication and all had normal or corrected-to-normal vision. Informed consent was obtained in writing from all subjects prior to the experiment in accordance with the guidelines and approval of the Columbia University Institutional Review Board. Data from two subjects (1 male, 1 female) were excluded from the final analysis due to substantial artifacts in the EEG signals. Data from the eighteen remaining subjects (14 male, 4 female, aged 18-40 years old) were included in the final analysis.

Virtual environment The 3D virtual target detection paradigm was developed using the open-source suite Naturalistic Experimental Design Environment [15], which is built on the Unity3D game development software (Unity Technologies, CA). The virtual environment consists of a street in the middle of a simulated city. Buildings were placed on the left- and right-hand sides of the street, with blank white billboards placed in between the buildings. Images chosen from the CalTech101 database [16] appeared on the billboards as the subject approached them in the virtual environment. Four categories of images were selected: cameras, laptops, grand pianos and schooners. Each category consisted of a total of 50 images. The image that appeared on each billboard was chosen at random, with random placement to the left or right of the street for each trial.

Experimental paradigm During each experimental run, the subjects were navigated down the street at a constant speed in an autonomous vehicle. As the subjects approached each pair of billboards, one of them would display an image chosen at random from the four categories described in the earlier section. Prior to the start of the experiment, the subjects were informed which one of the categories was the 'target' category and that the rest were 'distractor' images. The subjects were instructed to internally count the number of target images displayed and to report the final number to the experimenter at the end of each session. Each subject performed the task under two conditions, fixed and free (figure 1(a)). In the fixed condition, the subjects were instructed to keep their head still throughout the whole experimental session while only using their eyes to saccade to the images displayed on the billboards before returning to center, marked by a grey square in the middle of the street (figure 1(b)). In the free condition, the subjects were instructed to turn both their head and their eyes to observe and categorize the images on the billboards before returning to center, similarly marked by a grey square (figure 1(c)).
The two conditions were designed to simulate a control condition (the fixed condition), where only eye movements were allowed, and a more naturalistic condition (the free condition), where both eye and head movements were allowed. A total of 40 images were displayed during each experimental block, and each block lasted approximately 200 s. Each subject performed four experimental blocks at a time of a single condition and a total of 16 experimental blocks, eight in the fixed condition and eight in the free condition. The order in which the subjects performed each set of four experimental blocks was chosen at random. A total of 640 images were displayed for each subject, of which approximately 25% were targets. The target category was randomly selected for each subject.

Data acquisition EEG data was collected using a Biosemi ActiveTwo amplifier (Biosemi, Amsterdam, The Netherlands) with 64 Ag/AgCl electrodes at a sampling rate of 2048 Hz. The electrodes were placed according to the international 10-20 system. All electrode impedances were less than 50 kΩ and a common average reference was used. Eyetracking data was collected using a built-in Tobii eyetracker (Tobii, Stockholm, Sweden) within the Tobii Pro headset. The eyetracker was used to collect eye position and pupil diameter data at a sampling rate of 120 Hz. A five-point calibration was performed every time the subject put on the headset prior to the start of the experiment. Re-calibration was performed if the calibration did not display an 'OK' sign at the end of the calibration session. An open-source software library known as lab streaming layer (LSL) was used to synchronize all the data streams across a local network [17]. All data acquisition was performed in an electromagnetically shielded room.

Data pre-processing Eye position and pupillometry data were analyzed using MATLAB (The Mathworks Inc., MA). Eye position data was first epoched from 0 to 3000 ms locked to image onset (IO). In order to study the relationship between gaze events and the reorienting signals, we first divided the continuous gaze data of each trial into distinct gaze events related to visual attention reorienting. For this purpose, we applied piece-wise linear modeling to divide the continuous eye position data into four distinct phases: (1) Peripheral: the time of fixation on the center of the display before any gaze movement was made; (2) First saccade (FS): the gaze shift from the center towards the image; (3) Dwell: the period of fixation on the image; and (4) Return saccade (RS): the gaze shift from the image back to the center. The trials that did not fit the model were discarded (about 15 percent of the total number of trials on average per subject), along with the corresponding pupillometry and EEG trials.

Traditionally, EEG and pupillometry data are epoched by time-locking to the time of stimulus onset. However, as a result of our piece-wise linear modeling of the gaze data, we also identified the times at which the saccades and fixations began and ended for each trial. This allows us to epoch our EEG and pupillometry data based not only on when the stimulus onset occurs but also on when the saccade towards the image and the fixation on the image occur for each trial. We therefore denote these times as our three different 'locking conditions': (1) time of IO, (2) time of FS and (3) time of first fixation (FF). Pupillometry data was first processed by removing any data during intervals in which the pupil was not detected. Blinks were then removed based on the speed of change of the pupil diameter. Any missing data was interpolated using cubic spline interpolation.
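The study's preprocessing was done in MATLAB; the snippet below is our own minimal Python re-sketch of the dropout/blink removal and cubic spline interpolation step just described. All names are ours, and the velocity threshold is a hypothetical placeholder that would be tuned per recording.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def clean_pupil(t, pupil, lost_value=0.0, vel_thresh=4.0):
    """Drop pupil samples lost by the tracker or changing at blink speed,
    then fill the gaps with a cubic spline over the remaining samples.

    t          : sample times (s), strictly increasing
    pupil      : pupil diameter samples
    lost_value : value the tracker reports when the pupil is not detected
    vel_thresh : |d(diameter)/dt| above which a sample is treated as a
                 blink edge (hypothetical default; tune per recording)
    """
    t = np.asarray(t, dtype=float)
    pupil = np.asarray(pupil, dtype=float)
    bad = pupil == lost_value                           # tracking dropouts
    bad |= np.abs(np.gradient(pupil, t)) > vel_thresh   # blink-speed changes
    good = ~bad
    spline = CubicSpline(t[good], pupil[good])          # fit on valid samples
    cleaned = pupil.copy()
    cleaned[bad] = spline(t[bad])                       # cubic spline fill
    return cleaned
```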
Each subject's pupillometry data was then downsampled to 20 Hz and standardized for each experimental run. Pupillometry data was then epoched from 0 to 3000 ms based on locking condition and baseline-corrected using the mean value from −200 to 0 ms. EEG data was pre-processed using the EEGLAB toolbox [18]. The 64-channel EEG data were band-pass filtered from 0.5 to 50 Hz and downsampled to 256 Hz. Noisy channels were removed using visual inspection (4 channels removed on average per subject). Independent component analysis (ICA) was performed to remove blinks and horizontal eye movement artifacts. EEG data was then epoched, relative to locking condition, from 0 to 1000 ms and baseline-corrected using the mean value from −200 to 0 ms. Principal component analysis was then performed on the remaining EEG data and only the top 20 PCs were retained in order to reduce the dimensionality of the feature space and avoid rank deficiency issues when performing classification. Temporal ICA was then performed on the data to ensure that the temporal patterns of the activity were statistically independent from each other. The resulting ICs were used as input for the classifier described in the following section and results prior to ICA removal are presented in Supplementary figure 7 (available online at stacks.iop.org/JNE/18/066052/mmedia).

Data analysis
2.6.1. Hierarchical discriminant component analysis (HDCA)
In order to capture and integrate the neural and ocular reorienting responses recorded by the EEG and eyetracking signals, we adapted the hierarchical discriminant component analysis of [19] to build our hybrid classifier. First, the epoched EEG IC data were divided into ten 100 ms bins from 0 to 1000 ms relative to the locking condition. Fisher linear discriminant analysis (FLDA) was performed on each bin to determine the within-bin weights across ICs:

$$w_j = (\Sigma_+ + \Sigma_-)^{-1} (\mu_+ - \mu_-)$$

where w_j is the vector of within-bin weights for bin j, µ and Σ are the mean and covariance of the EEG data in the current bin, and + and − subscripts refer to target and distractor trials, respectively. The weights w_j were then applied to the IC activations x_ij to determine the within-bin interest score z_ij for each trial i and bin j:

$$z_{ij} = w_j^\top x_{ij}$$

Similarly, FLDA was performed on the pupil diameter and dwell time data. The epoched pupil diameter data was divided into six 500 ms bins and averaged within each bin from 0 to 3000 ms based on locking condition. The averages were passed through FLDA to determine within-bin interest scores. The dwell time data was also passed through FLDA. The within-bin interest scores for each feature were then normalized by dividing by their standard deviation across trials. To construct the second-level feature vector, the EEG, pupil diameter and dwell time normalized interest scores were appended into a single column vector. To visualize the contributions of each EEG data channel to the discriminating components, we calculated and plotted the scalp topography of the forward models for each 100 ms bin of the EEG data. For each bin j, the z_ij values were appended across trials into a column vector z_j and the x_ij vectors into a matrix X_j (one column per trial). The forward model a_j can then be calculated as follows:

$$a_j = \frac{X_j z_j}{z_j^\top z_j}$$

For cross-bin classification, logistic regression was applied to the second-level feature vector z_i for each trial to determine the cross-bin weights v (across time bins and modalities):

$$v = \arg\min_v \sum_i \log\left(1 + e^{-c_i v^\top z_i}\right)$$

where c_i denotes the class (+1 for targets and −1 for distractors) for trial i. The cross-bin weights were then used to calculate the final single cross-bin interest score y_i for each trial:

$$y_i = v^\top z_i$$
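A compact sketch of this two-level pipeline, assuming pre-binned arrays, is shown below. The array shapes, ridge term and function names are illustrative; in the actual analysis the weights are learned on training folds only, as described next.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def flda_weights(X_pos, X_neg, ridge=1e-6):
    """FLDA: w = (Sigma_+ + Sigma_-)^{-1} (mu_+ - mu_-), with a small ridge
    term added for numerical stability."""
    mu_diff = X_pos.mean(axis=0) - X_neg.mean(axis=0)
    Sigma = np.cov(X_pos.T) + np.cov(X_neg.T) + ridge * np.eye(X_pos.shape[1])
    return np.linalg.solve(Sigma, mu_diff)

def within_bin_scores(X, c):
    """X: (trials, bins, features); c: labels in {+1, -1}. Returns a
    (trials, bins) matrix of interest scores, each bin normalized by its
    standard deviation across trials."""
    n_trials, n_bins, _ = X.shape
    Z = np.zeros((n_trials, n_bins))
    for j in range(n_bins):
        w = flda_weights(X[c == 1, j, :], X[c == -1, j, :])
        Z[:, j] = X[:, j, :] @ w                      # z_ij = w_j^T x_ij
    return Z / Z.std(axis=0, keepdims=True)

def hybrid_interest_scores(eeg, pupil, dwell, c):
    """Second level: append normalized per-bin scores from all modalities and
    learn cross-bin weights v by logistic regression; y_i = v^T z_i."""
    Z = np.hstack([
        within_bin_scores(eeg, c),                    # ten 100 ms EEG bins
        within_bin_scores(pupil, c),                  # six 500 ms pupil bins
        dwell[:, None] / dwell.std(),                 # dwell time (FLDA on a
    ])                                                # scalar is just a scaling)
    v = LogisticRegression().fit(Z, c)
    return v.decision_function(Z)                     # cross-bin interest scores
```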
Ten-fold cross validation was used to create the training and testing sets. The area under the receiver operating characteristic (ROC) curve (AUC) was used to quantify the performance of the classifier. For comparison, we also constructed single-modality classifiers using the same procedures as described above but using only single-modality within-bin interest scores (EEG only, pupil diameter only or dwell time only).

Gaze events-based epoch time-locking
In order to explore the temporal variations in the reorienting signals, the EEG and pupil diameter data were epoched based on the timing of the gaze events during each specific trial: IO, FS and FF. As the name suggests, IO refers to the time point at which the image first appeared on the billboard for that trial. The EEG and pupil diameter data were then epoched with zero starting at the time of IO for that trial. FS refers to the time point at which the subject's eye began moving from center towards the image on the billboard, while FF refers to the time point at which the subject's eye began fixating on the image on the billboard. Similarly, the EEG and pupil diameter data were then epoched with zero starting at the time of FS and FF for that trial, respectively.

General linear model (GLM) analysis
We further investigated the relationship between the orienting signals and gaze events by performing a general linear model (GLM) analysis. We fitted the discriminating components (i.e. the cross-bin interest scores y_i) derived from the EEG-only and pupil diameter-only classifiers for each trial with the following four measurements derived from the piece-wise modeling of gaze data: the initial fixation (peripheral) time, the time of FS, the dwell time and the time of RS. All measurements were normalized within each subject before the GLM was performed. We utilized a mixed-effects GLM in order to take into account the variability in the distributions of beta weights across subjects. The setup of our mixed-effects GLM is as follows:

$$Y_i = X_i \beta + Z_i b + \epsilon_i$$

where Y_i refers to the vector of the discriminating components y_i, X_i refers to the gaze-event time matrix, β refers to the gaze-event time-effects vector, Z_i refers to the inter-subject variability design matrix, b refers to the inter-subject variability-effects vector and ϵ_i to the random error term. We also performed a second set of GLM analyses by first orthogonalizing the four different regressors with respect to the dwell time of each trial before fitting them against the discriminating components derived from the EEG-only and pupil diameter-only classifiers. This was done in order to investigate the contributions of the three remaining time measurements (peripheral, FS and RS) without the effects of the dwell time. In the fixed condition, subjects' gaze travels to the image and tracks it during the dwell time before returning to the middle fixation with no head movement. However, in the free condition, subjects' gaze first travels to the billboard before their head rotation follows, resulting in longer FS and dwell times on the billboard. Furthermore, their gaze then returns to the middle fixation prior to their head rotation returning to the starting position, also leading to longer RS times. These results are in line with results found in previous eyetracking studies in which head movements were involved [20,21].
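As a concrete illustration of the mixed-effects GLM and the dwell-time orthogonalization described in the Methods above, consider the following sketch. The file and column names are hypothetical; statsmodels fits a per-subject random intercept as the inter-subject variability term.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per trial, with the per-trial
# discriminating component y and the four normalized gaze-event times.
df = pd.read_csv("gaze_glm_table.csv")

# Second analysis set: orthogonalize the other regressors against dwell time
# (projection removal) so their contributions are assessed without its effect.
for col in ["peripheral", "fs", "rs"]:
    beta = (df[col] @ df["dwell"]) / (df["dwell"] @ df["dwell"])
    df[col + "_orth"] = df[col] - beta * df["dwell"]

# Mixed-effects GLM: fixed effects for the gaze-event times, random intercept
# per subject to absorb inter-subject variability.
model = smf.mixedlm("y ~ peripheral + fs + dwell + rs",
                    data=df, groups=df["subject"])
print(model.fit().summary())   # beta estimates and significance
```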
Grand average pupil dilation and EEG ERP results
Grand average EEG event-related potentials (ERPs) for the three midline electrodes (Fz, Cz and Pz) are plotted in figure 3(a). The overall pattern and time course of the ERPs are in line with other target detection studies [22-24]. The separation between the ERPs for the target and distractor trials is more pronounced in the Cz and Pz channels than in the Fz channel. Qualitatively, the P300 peak appears sharper in the fixed condition than in the free condition, where it is more distributed over time. This result is expected due to the nature of the paradigm, in which the subjects move their head in the free condition and spend more time across different gaze events (figure 2). Grand average pupil dilation across subjects for target and distractor trials is plotted in figure 3(b). The overall time course of pupil dilation (around 1-2 s following stimulus onset) is in line with the results from other target detection studies [19,25]. Overall, the pupil dilates more for target trials than for distractor trials in both the fixed and the free conditions. The sharper pupil dilation in the fixed condition around 500 ms following stimulus onset may be explained by ocular muscle-related dilation from the wide-angle saccade the subjects made to see the images on the billboards [26,27].

Relationship between the orienting signals and gaze events
To determine the relationships between the EEG and pupil orienting signals and the different gaze event times, we first developed EEG-only and pupil-only classifiers using the HDCA algorithm described in the Methods section. The cross-bin weights of the EEG-only classifier are shown in figure 4(a). The cross-bin weights for both the fixed and free conditions peak roughly around 500-600 ms, corresponding to the peak time of the P300 signal. Similarly, the forward models calculated from the EEG-only classifiers (figure 4(b)) also show the pattern of the P300 signal peaking roughly between 500 and 600 ms after stimulus onset. Figure 4(c) shows the cross-bin weights of the pupil diameter-only classifier. The cross-bin weights for the pupil diameter-only classifier peak around 1700 ms for both the fixed and free conditions, which also corresponds to the time of grand average pupil dilation shown in figure 3(b). Based on the results of the EEG-only and pupil diameter-only classifiers, we used the cross-bin interest scores (i.e. the discriminating components) of each trial as representative of the strength of the orienting signals in that trial. We then performed a mixed-effects GLM fit between the EEG-only and pupil diameter-only discriminating components and the four different gaze event times. We also performed the same analysis after orthogonalizing the four different gaze event times against the dwell time for each trial. The GLM fit estimates for the EEG-only analysis are plotted in figure 5(a). The beta weight estimates (β) for both the fixed and free conditions are greatest for the dwell time. However, the beta weight for FS is only significant in the free and not the fixed condition. After the four regressors were orthogonalized against the dwell time of each trial, the beta weight estimates become negative for the peripheral and FS times in the fixed condition and only for the peripheral time in the free condition. These results suggest that subjects tend to move their eyes away from center (i.e.
lower peripheral time) during target trials in both the fixed and free conditions and saccade towards targets faster in the fixed condition. Similarly, for the GLM estimates for the pupil diameter-only discriminating components (figure 5(b)), the beta weights are highest for the dwell time in both the fixed and free conditions, with the beta weight for the second saccade (RS) being significant only in the free and not the fixed condition. The orthogonalized beta weight results for the pupil diameter-only discriminating components show significant negative values for FS and RS in the fixed condition and for peripheral and FS in the free condition. These results both demonstrate a shift forward in time compared to the orthogonalized EEG-only beta weight estimates.

Hybrid classifier performance
Following the development of the single-modality classifiers, we developed a hybrid classifier using the combination of EEG, pupil diameter and dwell time signals, whose performance is shown in figure 6. Figure 6(a) shows each subject's AUC for the hybrid classifier compared to the single-modality classifiers. The subjects are sorted in descending order of the EEG-only AUC to highlight the importance of the hybrid classifier. Overall, the AUC of the hybrid classifier tracks and exceeds the AUC of whichever single-modality classifier yields the highest AUC for that subject, in both the fixed and the free conditions. We show that the hybrid classifier performed significantly better than each of the single-modality classifiers in figure 6(d) (Student's paired-sample t-tests, p < .05). The cross-bin weights and the EEG forward models of the hybrid classifier are shown in figures 6(b) and (c), respectively. The patterns of the cross-bin weights for both the EEG and the pupil diameter are similar to those of the cross-bin weights derived from the single-modality classifiers shown earlier in figures 4(a) and (c), with the EEG weights peaking around 500-600 ms and the pupil diameter weights peaking around 1700 ms. Similarly, the forward models derived from the hybrid classifier also show the pattern of the P300 signal peaking at approximately 500-600 ms following IO. In addition, we also compared the performance of the hybrid and single-modality classifiers across the fixed and the free conditions, as shown in figure 6(e). We did not find any significant difference in the AUC for the hybrid or any of the single-modality classifiers across the two conditions (Student's paired-sample t-tests). This result demonstrates that the classifiers are able to capture the reorienting signals both in the control scenario and in the more naturalistic scenario of our experiment. Lastly, we compared the AUC results for the hybrid and single-modality classifiers across the different types of epoch time-locking (as described in the Methods section). We found no significant differences across the three locking types (i.e. IO-locked, FS-locked and FF-locked) for all classifiers in both the fixed and free conditions. This result demonstrates that the reorienting signals are not locked to one particular gaze-based event but are distributed across multiple different gaze events, which is consistent with other results presented earlier in this study.

Moving towards more naturalistic experimental environments
Attention reorienting is without a doubt a complex set of processes. It involves multiple neural and physiological systems working together to redirect our attention to new and novel stimuli in the environment.
Using standardized paradigms, typically with no head movement and minimal eye movement, previous studies have identified neural and physiological signatures associated with attention reorienting, namely the EEG P300 and pupil dilation [2,12,14,22]. The fixed condition of our study mimics these standardized paradigms by limiting the head movement of the subject and only allowing eye saccades to be made. Unsurprisingly, the grand average ERP and pupil diameter results of the fixed condition show clear and pronounced P300 and pupil dilation peaks. However, in the free condition, where both head and eye movements were allowed, the P300 and pupil dilation become much more spatially and temporally distributed (figure 3). This result coincides with the behavioral results shown in figure 2, where the subjects take significantly longer to saccade to and fixate on the stimuli when head movements were made. Considering that many BCIs utilize these neural and physiological signals as measures of a subject's attention, the greater spatial and temporal distribution of these signals poses a direct challenge to the performance of these BCIs in more naturalistic environments. To address this issue, we first explore the relationships between the neural and physiological signals associated with attention reorienting and the different gaze events that take place when subjects reorient their visual attention to the stimuli in the environment.

Relationship between gaze events and attention reorienting
In order to study the relationship between the orienting signals and different gaze events, we must first divide the continuous gaze information collected from each trial into concrete events. We chose to divide the continuous gaze data into four distinct gaze events (peripheral, FS, dwell time and RS) as they are generally applicable to how a person might observe an object in real-world environments and are understood to affect the reorienting response [28,29]. In realistic scenarios such as the task employed in the current study, the subjects must not only reorient their attention to the stimuli but also reorient their attention back to the center fixation prior to the arrival of the subsequent stimuli. Therefore, we consider the RS to be part of the reorientation loop. We performed the GLM analysis using the times of the four different gaze events as the regressors for the discriminating components derived from the EEG-only and the pupil diameter-only classifiers. The beta weight estimates in both the EEG-only and pupil diameter-only analyses and across both the fixed and free conditions suggest that the dwell time of each trial contributes most significantly to the reorienting signals. Considering that the dwell time by itself can be used to distinguish between target and distractor stimuli in most subjects (figure 6(a)), and similarly in previous studies [19,30,31], this result confirms the importance of dwell time in attention reorienting. Here we also performed the second set of GLM analyses by orthogonalizing the other three time regressors against the dwell time, the most important contributor to the reorienting signals. With the dwell time removed, the negative beta weight estimates suggest that while the other gaze events are still important to the reorienting signals, they are negatively correlated. The EEG-only results suggest that the subjects spend less time fixating in the middle (i.e.
lower peripheral gaze event time) when a target image appears in both the fixed and free conditions and also saccade to the target image faster (i.e. lower FS time) in the fixed condition. The slightly positive FS beta weights in the free condition may be explained by the longer FS time overall for that condition. Meanwhile, the pupil diameter-only results show negative beta weight estimates for FS and RS in the fixed condition and for peripheral and FS in the free condition. These results demonstrate a forward shift in time in comparison to the EEG-only results, suggesting that the neural and ocular reorienting responses might be processed by different but interconnected brain regions. This theory is in line with recent work connecting the cortical signatures of reorienting mediated by the ventral attention system (e.g. the P300 signal) to the subcortical signatures (e.g. pupil dilation) mediated by the LC-NE system [8,32]. It has been proposed that the activity of the LC is 'informed' by connecting cortical structures such as the posterior cingulate cortex (PCC) and the anterior cingulate cortex (ACC) [7,8,32]. The results of the current study, specifically the forward shift in time of the pupil reorienting signals compared to the EEG reorienting signals as indexed by the gaze events, provide support for this theory.

Capturing and integrating attention reorienting signals in naturalistic environments
One of the main aims of the current study is to capture and integrate the neural and physiological signals underlying attention reorienting in naturalistic environments. While the hybrid HDCA classifier has previously been shown to successfully classify target and distractor stimuli in a 2D screen-based environment [19], our study is the first application of the hybrid HDCA classifier in a VR-based 3D environment. The results of the current study show that not only was the hybrid classifier able to classify the type of stimuli the subjects observed in a more immersive and naturalistic environment, it was also able to perform equally well even when the subjects moved their heads in the free condition. The implication of this result is that despite the greater temporal distribution of the reorienting signals across trials in the more naturalistic condition, the hybrid classifier is still able to capture and integrate the information within these signals. We also demonstrate the benefits of utilizing multiple neural and physiological signal modalities to improve the classification performance of the classifier. While each single modality (EEG, pupil diameter and dwell time) contains the reorienting information on its own, combining the information across modalities significantly improves the classification performance both in the fixed and in the free condition. While the use of a hybrid classifier to classify target vs. non-target stimuli is still rare, the performance of our classifier is comparable to those of previous target detection studies typically done outside of a VR headset [19,33,34]. Our results therefore suggest that the hybrid HDCA classifier may potentially serve as a basis for the development of attention-based BCI applications that can perform well in realistic scenarios and not only in well-controlled experimental environments.

Limitations/future directions
While the current study has shed light on some of the questions surrounding the dynamics of attention reorienting signals in naturalistic environments, many of them still remain unanswered.
One of the major limitations of our study design is that, despite the subjects' ability to move their head, the movement is still limited to one plane of motion. With the use of HMD VR goggles, a study in which subjects are free to move in all planes of motion in a 'visual search' task may answer further questions regarding the orienting of attention in realistic scenarios [35,36]. In addition, while the current study attempted to divide the subjects' gaze direction into distinct events, gaze movements in realistic scenarios have been shown to be more complex, with saccade and fixation events constantly interleaving in time [37,38]. Lastly, while the hybrid HDCA classifier demonstrates good performance in the current work, further studies are required to investigate the possibility of applying it in a closed-loop system in order to serve as a basis for the development of a real-time BCI application.

Conclusion
In this study, we explored the relationship between gaze events and attention reorienting signals in a more naturalistic environment. We determined that dwell time contributes most significantly to both the ocular and neural reorienting signals. However, the distributions of the reorienting signals across the remaining gaze events, namely peripheral, FS and RS, are different across the two modalities. Specifically, the pupil reorienting signals show a forward shift in time in comparison to the EEG reorienting signals, consistent with the theory in which the cortical regions of the ventral attention network (e.g. ACC and PCC) modulate the activity of the subcortical regions associated with the reorienting process (e.g. the LC-NE system). Nevertheless, the hybrid classifier, which combines the EEG, pupil dilation and dwell time signals, was able to capture and integrate the reorienting signals across the different modalities and classify target vs. distractor stimuli with high accuracy. We expect the results of this study to provide the basis for the development of an attention-based BCI system that can operate in more naturalistic environments in the future.

Data availability statement
The data that support the findings of this study will be openly available following an embargo at the following URL/DOI: https://github.com/LIINC/LIINC_VR_Reorienting. Data will be available from 30 November 2021 [39].
Corneal Dystrophy-associated R124H Mutation Disrupts TGFBI Interaction with Periostin and Causes Mislocalization to the Lysosome*
The 5q31-linked corneal dystrophies are heterogeneous autosomal-dominant eye disorders pathologically characterized by the progressive accumulation of aggregated proteinaceous deposits in the cornea, which manifests clinically as severe vision impairment. The 5q31-linked corneal dystrophies are commonly caused by mutations in the TGFBI (transforming growth factor-β-induced) gene. However, despite the identification of the culprit gene, the cellular roles of TGFBI and the molecular mechanisms underlying the pathogenesis of corneal dystrophy remain poorly understood. Here we report the identification of periostin, a molecule that is highly related to TGFBI, as a specific TGFBI-binding partner. The association of TGFBI and periostin is mediated by the amino-terminal cysteine-rich EMI domains of TGFBI and periostin. Our results indicate that the endogenous TGFBI and periostin colocalize within the trans-Golgi network and associate prior to secretion. The corneal dystrophy-associated R124H mutation in TGFBI severely impairs interaction with periostin in vivo. In addition, the R124H mutation causes aberrant redistribution of the mutant TGFBI into lysosomes. We also find that the periostin-TGFBI interaction is disrupted in corneal fibroblasts cultured from granular corneal dystrophy type II patients and that periostin accumulates in TGFBI-positive corneal deposits in granular corneal dystrophy type II (also known as Avellino corneal dystrophy). Together, our findings suggest that TGFBI and periostin may play cooperative cellular roles and that periostin may be involved in the pathogenesis of 5q31-linked corneal dystrophies.
Corneal dystrophies are characterized by the progressive loss of corneal transparency as a result of extracellular amyloid and non-amyloid deposits, which accumulate in different layers of corneal tissues. 5q31-linked corneal dystrophies are pathologically heterogeneous, autosomal-dominant disorders caused by mutations in the TGFBI (transforming growth factor-β-induced) gene, which encodes the TGFBI protein (also known as keratoepithelin or βig-H3) (1, 2). To date, more than 30 different mutations leading to corneal dystrophies have been attributed to mutations in TGFBI, the most frequent of which are mutations within exons 4 and 12, which result in amino acid substitutions at Arg124 and Arg555, respectively (3, 4). The different mutations in TGFBI cause clinically distinct types of corneal dystrophies, which are classified according to the accumulation patterns of the deposits, including lattice corneal dystrophies type I and IIIA, deep stromal lattice corneal dystrophy, granular corneal dystrophies (GCDs) type I and II (also known as Avellino corneal dystrophy), Reis-Bucklers corneal dystrophy (also known as corneal dystrophy of Bowman's layer type I), and Thiel-Behnke corneal dystrophy (also known as corneal dystrophy of Bowman's layer type II) (reviewed in Refs. 5 and 6). Histological examinations of corneal tissues demonstrate the presence of amyloid deposits in lattice corneal dystrophies and GCD II, hyaline accumulations in GCDs, and subepithelial fibrous material in Reis-Bucklers corneal dystrophy and Thiel-Behnke corneal dystrophy (7-14). TGFBI was originally identified as a gene induced by transforming growth factor-β stimulation in adenocarcinoma cells and is expressed in many tissues (15). The human TGFBI consists of 683 amino acids, with the mature protein predicted to have a molecular mass of ~68 kDa. As shown in Fig. 1A, TGFBI contains an NH2-terminal signal peptide that targets it for insertion into the lumen of the endoplasmic reticulum (ER) for eventual secretion, a cysteine-rich EMI domain, four tandem repeats of fasciclin-1-like (FAS1) domains, and a COOH-terminal RGD sequence (15-19). The FAS1 domains of TGFBI display homology to the cell adhesion protein fasciclin-I in Drosophila, an axon guidance protein that is involved in neuronal development (20). Based on the presence of multiple FAS1 domains, TGFBI has been assigned to a larger family of proteins, which includes periostin, stabilin-1, and stabilin-2 (16, 21). To date, many TGFBI homologues have been reported in various vertebrates, including mouse, chicken, pig, and zebrafish, but no homologues have been identified in invertebrates (16, 19, 21). TGFBI has been shown to interact with a number of extracellular matrix (ECM) proteins, including fibronectin, biglycan, decorin, and several types of collagen (19, 22-25). Furthermore, TGFBI also functions as a ligand for several integrins, including α3β1, αvβ5, αvβ3, and αmβ2 (26-29). The COOH-terminal RGD domain of TGFBI is the putative integrin-binding motif. However, several studies have suggested that the interactions between TGFBI and integrins are mediated via the YH (tyrosine-histidine) motifs and DI (aspartate-isoleucine) motifs present in the TGFBI FAS1 domains (30). Although the precise roles of TGFBI are not fully understood yet, emerging evidence suggests a role for TGFBI as a secreted factor involved in cell adhesion, proliferation, and migration.
TGFBI and periostin show a high degree of similarity in amino acid sequence and in overall domain structure, diverging primarily at the COOH terminus (Fig. 1A) (16, 21). Similar to TGFBI, periostin contains an NH2-terminal secretory signal peptide followed by a cysteine-rich EMI domain, four tandem repeats of FAS1 domains, and a hydrophilic region in its COOH terminus (Fig. 1A) (16, 17, 31, 32). Periostin has been found to be ubiquitously expressed in multiple tissues in mammals (31, 33, 34). In addition, the expression of periostin has been implicated in the development of a variety of cancers, including neuroblastoma, head and neck cancer, and non-small cell lung cancer, possibly by regulating metastatic growth (32, 35). Periostin is also associated with epithelial-mesenchymal transition during cardiac development (36) and is induced during the proliferation of cardiomyocytes, thereby promoting cardiac repair after heart failure (37, 38). In addition, interleukin-4 and -13 have been found to induce the secretion of periostin from lung fibroblasts, implicating periostin in subepithelial fibrosis in bronchial asthma (39). Despite the similarities between TGFBI and periostin, it is not known whether periostin is involved in the pathogenesis of 5q31-linked corneal dystrophies. In this study, we find that periostin specifically interacts with TGFBI via the NH2-terminal cysteine-rich EMI domain and colocalizes with TGFBI in the trans-Golgi network of COS-7 and corneal fibroblast cells. In addition, corneal dystrophy-linked mutations in TGFBI disrupt its subcellular localization and impair its interaction with periostin. Furthermore, we find that periostin accumulates in extracellular corneal deposits in GCD II patients bearing homozygous R124H mutations in TGFBI. These findings provide new insights into the pathogenic mechanisms of TGFBI mutations in 5q31-linked corneal dystrophies and have important implications for understanding and treating corneal dystrophies.

EXPERIMENTAL PROCEDURES
Plasmids-pcDNA3-Periostin-GFP (34) and pcDNA3.1-Periostin-His (35) constructs were kind gifts from Dr. Hirokazu Inoue (Shiga University of Medical Science, Japan) and Dr. Xiao-Fan Wang (Duke University, Durham, NC). Full-length human TGFBI cDNA was cloned into the pcDNA3.1 mammalian expression vector (Invitrogen) with a V5 and His6 tag at the COOH terminus of TGFBI. Deletion and point mutants of TGFBI and periostin were generated using conventional PCR methods and the QuikChange site-directed mutagenesis kit (Stratagene), following the manufacturer's instructions. The sequences of all constructs were verified by direct sequencing.

Cell Culture and Transfections-HeLa, COS-7, HEK293, and human corneal fibroblast (HCF) cell lines were grown in Dulbecco's modified Eagle's medium (Invitrogen) supplemented with 2 mM L-glutamine, 100 units/ml penicillin, 100 μg/ml streptomycin, and 10% (w/v) fetal bovine serum (Invitrogen) at 37°C in a 5% CO2 incubator. The human corneal epithelial (HCE) cell line was grown in Dulbecco's modified Eagle's medium and F-12 (1:1) media supplemented with 2 mM L-glutamine, 100 units/ml penicillin, 100 μg/ml streptomycin, 10% (w/v) fetal bovine serum, and 10 ng/ml recombinant human epidermal growth factor (R&D Systems) at 37°C in a 5% CO2 incubator. Human corneal epithelial and fibroblast cell lines were kind gifts from Dr. Shigeru Kinoshita (Kyoto Prefectural University of Medicine, Japan) and Dr. James V. Jester (University of California, Irvine, CA).
Primary corneal fibroblasts were cultured from corneal buttons obtained from a 60-year-old control and a 27-year-old homozygous GCD II patient during penetrating keratoplasty. The endothelial and epithelial layers were removed from the corneas, and the stroma was used as explants to initiate corneal fibroblast cultures. The cells were maintained in Dulbecco's modified Eagle's medium supplemented with 2 mM L-glutamine, 100 units/ml penicillin, 100 μg/ml streptomycin, and 10% (w/v) fetal bovine serum at 37°C in a 5% CO2 incubator. Donor confidentiality was maintained according to the Declaration of Helsinki and was approved by the Severance Hospital IRB Committee (CR04124). Transfections were performed using GeneJammer (Stratagene) according to the manufacturer's instructions, analyses were conducted 24 h post-transfection, and immunoprecipitations were carried out as described previously (49).

Western Blot-Cells were washed with PBS, and extracts were obtained by passing the suspension through a 26-gauge needle in ice-cold lysis buffer (50 mM Tris-HCl, pH 7.4, 150 mM NaCl, 1% Nonidet P-40, 0.1% Triton X-100 supplemented with protease inhibitor mixtures (Applied Biological Materials Inc.)). Soluble supernatants were analyzed by SDS-PAGE under reducing conditions and transferred to nitrocellulose membranes (Millipore). The membrane was then blocked with 5% skim milk (Difco) in 1× TBST buffer (20 mM Tris-HCl, 137 mM NaCl, pH 7.6, 0.1% Tween 20) and incubated with the indicated antibodies. The SuperSignal West Pico chemiluminescent substrate kit (Thermo Scientific) was used for protein detection. The band intensities were quantified using the ImageJ program (version 1.38).

Human Corneal Epithelium Protein Extracts-Normal human corneal epithelial cells were obtained by scraping the epithelial layer during photorefractive keratectomy. Patient corneal epithelial cells from a GCD II patient were obtained by scraping the epithelial layer during deep lamellar corneal transplantation. After scraping the corneal surface using a blunt blade, samples were immediately placed into ice-cold lysis buffer, and proteins were extracted.

His Tag Pull-down Assays-For His tag pull-down assays, COOH-terminally His-tagged wild-type TGFBI was purified as described previously (25), and NH2-terminally His-tagged periostin was purchased from BioVendor. Twenty micrograms of His-tagged recombinant TGFBI or periostin was immobilized on nickel-agarose resin (Applied Biological Materials) and incubated overnight at 4°C with 500 μg of HCF cell lysates. Bound proteins were resolved by SDS-PAGE and detected by Western blotting with the indicated antibodies.

Immunofluorescence Microscopy and Immunohistochemistry-For immunofluorescence microscopy, cells were grown on coverslips, fixed in cold methanol/acetone (1:1, v/v) for 10 min at −20°C, and blocked with 2% bovine serum albumin for 30 min. Cells were incubated with primary antibodies in 2% bovine serum albumin for 1 h at room temperature. Cells were washed with PBS and subsequently incubated with secondary antibodies in 2% bovine serum albumin for 1 h at room temperature. After washing with PBS, cells were mounted using Vectashield (Vector Laboratories, Inc.). Images were acquired using a TCS SP5 confocal microscope (Leica). For immunohistochemistry analyses, corneas from a normal human and from R124H heterozygous and homozygous GCD II patients were fixed in 10% neutral-buffered formalin and embedded in paraffin.
The paraffin-embedded samples were sectioned on a microtome at a thickness of 5 μm, mounted on glass slides, deparaffinized in xylene, and rehydrated in ethanol. The sections were incubated in 0.3% H2O2 for 30 min and blocked with 2.5% normal horse serum for 20 min. The sections were then incubated with normal rabbit IgG serum and/or rabbit polyclonal anti-periostin antibody (1:500, v/v) in 2.5% normal horse serum and 0.1% bovine serum albumin for 1 h at room temperature. The sections were washed with PBS and incubated in ImmPRESS universal reagent (Vector Laboratories) for 30 min. After washing with PBS for 5 min, the sections were incubated with DAB solution and visualized according to the manufacturer's instructions (Vector Laboratories). Sections were washed with PBS three times and mounted using Vectashield (Vector Laboratories). Masson's trichrome staining was used to confirm the mutated TGFBI deposits in the corneal stroma. Images were acquired using a BX 40 light microscope (Olympus).

RESULTS
Periostin Is Expressed in Cornea-derived Cell Lines and Corneal Tissues-Despite the fact that TGFBI and periostin share several similarities in structure and expression patterns (Fig. 1A) (15, 33), little is known about the roles of periostin in corneal tissues. To examine the expression of periostin in cornea and cornea-derived cells, we first performed Western blot analysis with specific anti-periostin antibodies (C-20 and ab14041) (Fig. 1B). Western blot analysis revealed expression of periostin in all of the tested cells and tissues, including COS-7, HeLa, HEK293, and HCF (40) cells; primary cultured corneal fibroblasts from a normal human (NPCF) and the HCE cell line (41); and normal human corneal epithelium (Fig. 1B, top, lanes 1-7). In HeLa, COS-7, HEK293, HCF, and NPCF cells, endogenous periostin was detected primarily as a single band that migrated with an apparent molecular mass of ~85 kDa, consistent with the predicted molecular weight (Fig. 1B, lanes 1-5). A second high molecular mass band of ~170 kDa was observed in some cell lines. This band may represent the previously reported covalently linked periostin multimer (42) or perhaps some other covalent post-translational modification. However, in HCE cells and corneal epithelium, periostin was detected as a single band of ~60 kDa (Fig. 1B, top, lanes 6 and 7). The C-20 anti-periostin antibody was raised against a COOH-terminal periostin peptide (amino acids 725-775), and preabsorption with a periostin peptide completely abolished the immunoreactivity of the anti-periostin antibody (C-20), confirming the specificity of this antibody (Fig. 1B, second panel). To further determine the identity of the periostin-immunoreactive band, we performed additional Western blot analyses using an independent anti-periostin antibody generated against a separate epitope (ab14041, amino acids 22-669) and found that this antibody also recognized the ~60 kDa band in human corneal epithelium (supplemental Fig. 1). Together, these results suggest that periostin is expressed in cornea-derived fibroblast and epithelial cell lines as well as in corneal epithelium. In addition, the detection of a form of periostin of reduced molecular weight with two anti-periostin antibodies that recognize separate periostin epitopes raises the possibility of cell type-specific proteolytic processing of periostin or cell type-specific periostin splice variants.
Periostin Interacts with TGFBI in Vivo and in Vitro-Periostin has previously been shown to form dimers (42), and given the structural similarity between periostin and TGFBI, we next sought to determine whether the two proteins interact. We first performed pull-down assays using immobilized His-tagged TGFBI or periostin with HCF cell lysates. Bound proteins were separated by SDS-PAGE and visualized by Western blotting. As shown in Fig. 2A, His-tagged TGFBI efficiently pulled down endogenous periostin from HCF cell lysates. Consistent with previous reports (19, 23, 24), we found that collagen VI was readily pulled down from HCF cell lysates by His-tagged recombinant TGFBI (Fig. 2A). In addition, His-tagged TGFBI did not pull down the cytoskeletal protein actin, confirming the specificity of this experiment. In the reciprocal experiment, we found that His-tagged periostin efficiently pulled down endogenous TGFBI, but not actin or collagen VI. These in vitro binding studies indicate that periostin is able to interact with TGFBI. These results also show that periostin does not interact with the TGFBI-binding partner collagen VI (Fig. 2B), indicating that despite the large degree of sequence similarity, periostin and TGFBI are not interchangeable. To verify that the periostin-TGFBI interaction occurs in vivo, we performed co-immunoprecipitation experiments using antibodies specific for periostin and TGFBI. As shown in Fig. 2C, anti-TGFBI antibodies, but not the IgG control, efficiently co-immunoprecipitated endogenous periostin from HCF cell lysates. Furthermore, anti-periostin antibodies specifically co-immunoprecipitated endogenous TGFBI from HCF cell lysates (Fig. 2D, lane 3). Taken together, the pull-down assays and co-immunoprecipitation experiments demonstrate that periostin interacts with TGFBI in vitro and in vivo.

The Periostin-TGFBI Interaction Is Mediated by the Amino-terminal, Cysteine-rich EMI Domain-To map the binding sites mediating the interaction between periostin and TGFBI, we generated a series of COOH-terminally V5/His-tagged TGFBI deletion mutants and COOH-terminally GFP-tagged periostin deletion mutants (Fig. 3, A and C) and performed co-immunoprecipitation analyses. As shown in Fig. 3B, all of the NH2-terminal deletion mutants of TGFBI (ΔN1-ΔN4) abolished the interaction with GFP-tagged wild-type periostin (lanes 2-5). In contrast, full-length TGFBI and the ΔN5 deletion mutant of TGFBI, in which the first and second FAS1 domain regions are deleted, both efficiently precipitated GFP-tagged full-length periostin (Fig. 3B, lanes 1 and 6). These results suggest that the NH2-terminal, cysteine-rich EMI domain of TGFBI is critical for the interaction with periostin. In addition, using NH2-terminal deletions of periostin, we found that the NH2-terminal, cysteine-rich EMI domain of periostin is critically required for the interaction with TGFBI. As shown in Fig. 3, GFP-tagged full-length periostin efficiently co-immunoprecipitated V5/His-tagged full-length TGFBI (Fig. 3D, lane 1). In contrast, NH2-terminal deletion mutants of periostin (ΔN2-ΔN5) completely abolished the interaction with V5/His-tagged full-length TGFBI (Fig. 3D, lanes 2-6). To test whether the predicted binding regions (EMI domains) of TGFBI and periostin are responsible for the interactions, we generated COOH-terminally V5/His-tagged TGFBI-EMI and COOH-terminally GFP-tagged periostin-EMI constructs (Fig. 3, E and G) and performed co-immunoprecipitations using the indicated antibodies.
The results confirmed that the NH2-terminal EMI domains are sufficient for the interaction between TGFBI and periostin (Fig. 3, F and H). Taken together, these deletion mapping analyses provide evidence supporting a model in which the binding between TGFBI and periostin is mediated via the NH2-terminal, cysteine-rich EMI domains of both TGFBI and periostin (Fig. 3I).

Periostin Colocalizes with TGFBI in the trans-Golgi Network-To provide further evidence for an in vivo association of periostin with TGFBI, we employed immunofluorescence confocal microscopy to examine the subcellular localization of periostin and TGFBI. As shown in Fig. 4A, GFP-tagged periostin expressed in COS-7 cells localizes to a perinuclear region (a and d) that colocalizes with TGN38, a trans-Golgi network (TGN) marker (a-c). GFP-tagged periostin fluorescence did not colocalize with the late endosome and lysosome marker Lamp2 (d-f). The COOH-terminally V5/His-tagged full-length TGFBI showed a similar staining pattern that also colocalized with TGN38 (g-i) but not Lamp2 (j-l). Immunostaining of COS-7 cells co-expressing GFP-tagged full-length periostin and V5/His-tagged TGFBI revealed clearly overlapping subcellular distributions (m-o), indicating that periostin and TGFBI colocalize in COS-7 cells. To determine the subcellular localizations of endogenous periostin and TGFBI, we employed antibodies specific for periostin and TGFBI in HCF cells. Consistent with the above results, we found that endogenous TGFBI and periostin both colocalized with TGN38 immunostaining (Fig. 4B, a-f). In addition, endogenous periostin and TGFBI immunostaining showed a substantial amount of overlap (Fig. 4B, g-i). Taken together, these results demonstrate that endogenous periostin and TGFBI colocalize in the trans-Golgi network.

The GCD II-associated R124H Mutant TGFBI Impairs Binding of Periostin-Our biochemical and immunofluorescence microscopy results strongly suggest that periostin and TGFBI cooperate in the same pathways in cells. To further our understanding of the pathophysiology of 5q31-linked corneal dystrophies, we next examined the effects of 5q31-linked corneal dystrophy mutations in TGFBI on its interaction with periostin. COS-7 cells co-expressing GFP-tagged periostin and V5/His-tagged wild type and mutant TGFBI were subjected to immunoprecipitation with GFP antibodies. Several corneal dystrophy-associated TGFBI mutants were examined, including R124H (GCD II), R124C (lattice corneal dystrophy I), R124L (Reis-Bucklers corneal dystrophy), R555W (GCD I), and R555Q (Thiel-Behnke corneal dystrophy). Interestingly, we found that the R124H mutant TGFBI showed significantly decreased binding of periostin (lane 2) compared with wild type TGFBI as well as the other TGFBI mutants (Fig. 5, lanes 1 and 3-6). Quantification of three independent experiments confirmed that the R124H mutation in TGFBI disrupts the interaction between TGFBI and periostin (Fig. 5B). The R124H mutation causes granular corneal dystrophy type II. To further examine whether the interaction between TGFBI and periostin is indeed disrupted by the R124H mutation, we performed co-immunoprecipitation experiments using primary cultured corneal fibroblasts from a normal human control and homozygous R124H GCD II patients. Consistent with our results, endogenous wild-type TGFBI coprecipitated with an anti-periostin-specific antibody (Fig. 5C, lane 2). In contrast, binding of the R124H mutant TGFBI to periostin is dramatically reduced in corneal fibroblasts from GCD II patients (Fig. 5C, lane 4).

FIGURE 2. Periostin specifically interacts with TGFBI. A and B, in vitro His pull-down assays were performed by incubation of nickel-nitrilotriacetic acid beads alone, immobilized His-tagged recombinant TGFBI, or immobilized His-tagged periostin with HCF cell lysates. Bound periostin and TGFBI were detected by Western blotting using specific antibodies (first panel). Collagen VI and actin were used as positive and negative controls using specific antibodies as indicated (second and third panels). Immobilized His-tagged recombinant TGFBI and periostin were detected using anti-His antibody (fourth panel). C and D, co-immunoprecipitation of endogenous periostin and TGFBI in HCF cell lysates. Endogenous TGFBI or periostin was immunoprecipitated from HCF cell lysates with anti-TGFBI or anti-periostin antibodies, respectively, followed by Western blotting for periostin or TGFBI (lane 3). Normal mouse or rabbit IgG was used as negative control (lane 2). WB, Western blot; IP, immunoprecipitation.
Together, these results demonstrate that the R124H mutation in TGFBI, which is responsible for GCD II, impairs binding of periostin, suggesting the possibility that impaired binding of periostin may be one contributing factor in the pathophysiology of GCD II. To test whether 5q31-linked corneal dystrophy mutations in TGFBI affect its secretion, we analyzed both the intracellular TGFBI and the TGFBI secreted into the cell media from HEK293 cells expressing V5/His-tagged wild type and mutant TGFBI by Western blotting. As shown, we did not find any significant differences in the secretion of wild type and mutant TGFBI (Fig. 5D). Experiments employing corneal fibroblasts cultured from normal human and GCD II patients yielded similar data (data not shown), indicating that 5q31-linked corneal dystrophy mutations in TGFBI do not significantly affect its secretion.

The GCD II-associated R124H Mutant TGFBI Mislocalizes to Lysosomes-Since the R124H mutation of TGFBI severely impaired the interaction with periostin, we performed immunofluorescence confocal microscopic analyses to examine the subcellular localization of the R124H mutant TGFBI. Consistent with the above immunofluorescence results, V5/His-tagged wild type TGFBI showed a typical perinuclear localization pattern, which colocalized with TGN38 immunostaining (Fig. 6A, a-c) but not Lamp2 immunostaining (Fig. 6B, a-c). In contrast, the V5/His-tagged R124H mutant TGFBI showed significant changes in subcellular localization. In addition to a slight overlap with TGN immunostaining, the R124H mutant TGFBI was found to be predominantly associated with cytosolic vesicles that largely colocalized with Lamp2-immunoreactive puncta (Fig. 6B, d-f). Quantification of the subcellular distribution of the R124H mutant TGFBI revealed a significant shift from the TGN to late endosomes and lysosomes when compared with the distribution of wild type TGFBI (Fig. 6C). To confirm these results in the endogenous state, we examined the distribution of endogenous TGFBI in cultured corneal fibroblasts from a normal human control and homozygous R124H GCD II patients. As shown in Fig. 6D, we found that the number of TGFBI-positive cytosolic vesicles was increased in cultured corneal fibroblasts from the GCD II patient (bottom) when compared with the more typical TGN localization of wild type TGFBI in cultured corneal fibroblasts from the normal control patient (top).
Furthermore, the degree of overlap between TGFBI and periostin was reduced in GCD II cultured corneal fibroblasts (Fig. 6D). Interestingly, Western blot analyses of lysates prepared from normal and GCD II patient corneal fibroblasts indicate an increase in the levels of periostin but not TGFBI (data not shown). Together, these results indicate that the R124H mutation disrupts the normal TGFBI localization, resulting in the abnormal presence of a lysosomal pool of R124H mutant TGFBI.

Periostin Accumulates in R124H Mutant TGFBI Deposits in GCD II Corneal Tissues-The biochemical and cell biological analyses in our studies strongly suggest the possibility that periostin plays a role in the pathogenesis of GCD II. Therefore, we next examined the distribution of periostin in control as well as heterozygous and homozygous R124H GCD II patient corneal tissues. As expected, Masson's trichrome staining revealed the presence of large deposits in the corneal stroma from both heterozygous and homozygous R124H GCD II but not the control tissues (Fig. 7A). Immunostaining with anti-TGFBI (Fig. 7A, g) and anti-periostin (Fig. 7A, j) antibodies showed strong immunoreactivity in the corneal epithelium and a small amount of diffuse staining within the corneal stroma in the normal human control corneal tissue. In contrast, in the corneas from heterozygous and homozygous R124H GCD II patients, strong immunoreactivity was detected in the deposits in the corneal stroma by both the TGFBI-specific (Fig. 7A, h and i) and periostin-specific antibodies (Fig. 7A, k and l). Importantly, the deposits were not stained by the normal rabbit IgG control (Fig. 7A, d-f), indicating the specificity of these staining patterns. These findings indicate that periostin accumulates in mutant TGFBI corneal deposits and raise the possibility that periostin may co-aggregate with mutant TGFBI in GCD II patients. To examine this possibility, we performed Western blotting analyses of protein extracts from scraped corneal epithelial layers of normal human control and homozygous R124H GCD II patients. As shown in Fig. 7B, we found that the TGFBI antibody recognized monomeric TGFBI in control and GCD II patients. Moreover, there was an increase in the total amount of TGFBI protein and the appearance of high molecular weight forms of TGFBI in the GCD II patient tissue (Fig. 7B, first panel). We also found that the anti-periostin antibody strongly reacted with high molecular weight forms of periostin in the samples from the GCD II patient that were absent in control tissues (Fig. 7B, second panel). These TGFBI and periostin high molecular weight bands were completely absent in the control samples even when 20 times more sample was loaded (data not shown), indicating that these bands are specific to the disease state. In contrast to TGFBI and periostin, other extracellular matrix proteins previously reported to interact with TGFBI and periostin, such as fibronectin and tenascin C, did not show differential levels in normal or GCD II patient samples (Fig. 7B, panels 3-5). In addition, reverse transcription-PCR analyses of the TGFBI and periostin transcripts indicate that there is little change in the mRNA levels in normal and disease tissues (data not shown), suggesting that the increase in protein levels is due to accumulation within the extracellular deposits.
Taken together, these results strongly indicate the possibility that periostin co-aggregates in mutant TGFBI corneal deposits and raise the possibility that periostin is involved in the pathogenesis of 5q31-linked corneal dystrophies.

DISCUSSION
Despite many recent studies on TGFBI in 5q31-linked corneal dystrophies, the precise molecular mechanisms by which mutations in TGFBI cause the characteristic disease phenotypes remain poorly understood. In addition, although there is a high degree of overall similarity between TGFBI and periostin, periostin has not been previously implicated in corneal biology or in the pathogenesis of 5q31-linked corneal dystrophies. In this study, we show that periostin is expressed by human cornea-derived cells, and we identify a specific interaction between TGFBI and periostin. Moreover, our results demonstrate that the R124H mutation in TGFBI impairs the interaction with periostin and results in the mislocalization of a portion of TGFBI to lysosomes. Finally, we find that periostin accumulates in deposits of aggregated mutant TGFBI in the corneal stroma of GCD II patients. Periostin was originally identified as a ~90-kDa secreted protein in murine osteoblasts and was originally termed OSF-2 (osteoblast-specific factor-2) (31). Later, it was renamed periostin due to its expression in the periosteum and periodontal ligament (33). Although it has been shown that periostin is widely expressed in many different cell types, including connective tissue, bone, periodontal ligament, and several types of cancer (31-34), its expression in corneal cells and tissues has not been reported.

FIGURE 5. GCD type II-associated R124H mutant TGFBI disrupts the interaction with periostin. A, COS-7 cells coexpressing GFP-tagged periostin and V5/His-tagged wild type and 5q31-linked corneal dystrophy-associated mutant forms of TGFBI were subjected to immunoprecipitation with anti-GFP antibody, followed by Western blotting with anti-V5 and anti-GFP antibodies. B, quantification of the precipitated amounts of mutant TGFBI. Amounts of precipitated V5/His-tagged TGFBI were normalized to the amount of precipitated GFP-tagged periostin. Data represent mean ± S.E. from three independent immunoprecipitation experiments. C, primary cultured corneal fibroblasts from a normal human patient and a GCD II patient bearing homozygous R124H mutations in TGFBI were subjected to immunoprecipitation with anti-periostin antibody, followed by Western blotting with anti-TGFBI and anti-periostin antibodies. D, cell lysate or cell media from COS-7 cells coexpressing GFP-tagged periostin with V5/His-tagged wild type and 5q31-linked corneal dystrophy-associated mutant forms of TGFBI were analyzed by Western blotting with anti-V5 and anti-GFP antibodies. WB, Western blot; IP, immunoprecipitation.

Using two anti-periostin antibodies that recognize distinct periostin epitopes, we show that periostin is expressed by cornea-derived fibroblast and epithelial cells, suggesting the possibility that periostin plays a role in corneal cells. Interestingly, despite the predicted molecular mass of periostin, which is ~90 kDa, it was detected as a ~60-kDa species in HCE cells and human corneal epithelium by two different periostin-specific antibodies, C-20 and ab14041. The specificity of this lower band was confirmed by preabsorption experiments employing a periostin peptide. A lower molecular weight form of periostin has previously been reported by Kern et al.
Potential splicing events could give rise to this lower molecular weight periostin species. However, although several alternative splicing variants of periostin have been reported (32,39), all of the spliced forms are ~80-90 kDa (32,38,39). A second possibility is that the lower molecular weight form of periostin is the product of proteolytic processing; periostin may undergo a proteolytic processing event that is specific to corneal tissues. Our findings indicate that a novel, lower molecular weight form of periostin exists in human corneal epithelium, and further studies will be important to understand the precise molecular events that give rise to it. Our in vitro and in vivo biochemical analyses revealed a specific interaction between exogenously expressed TGFBI and periostin in COS-7 cells and between endogenous TGFBI and periostin in human corneal fibroblasts. These results raise the possibility that TGFBI and periostin function in the same regulatory pathways in the human cornea. Indeed, previous reports have shown that both TGFBI and periostin function as cellular adhesion molecules and are involved in the promotion of cancer metastasis (35,45). Our coimmunoprecipitation experiments using deletion mutants of TGFBI and periostin revealed that the TGFBI-periostin association is mediated by the NH2-terminal, cysteine-rich EMI domain of both proteins (Fig. 3, A-I). The EMI domain was first named after its presence in proteins of the EMILIN family and was suggested to be a protein-protein interaction motif (17,33,46). Interestingly, previous reports have shown that the interaction of periostin with the ECM proteins fibronectin, tenascin C, and collagen V is mediated via the FAS1 domain (39). Thus, an interaction of TGFBI and periostin via the EMI domain would leave the FAS1 domain free for other binding partners, suggesting that the TGFBI-periostin interaction would not necessarily preclude simultaneous binding to effector proteins. We further found that deletion of the FAS1 domain had no effect on the TGFBI-periostin interaction and that expression of the EMI domain alone was sufficient to recapitulate the interaction between the two molecules. In addition, we noted that despite the high degree of similarity between TGFBI and periostin, periostin does not interact with the TGFBI-interacting protein collagen VI (Fig. 2B). These results are intriguing and provide evidence that, although highly similar, TGFBI and periostin are not interchangeable. Both TGFBI and periostin contain NH2-terminal signal sequences, which are expected to be necessary for their cotranslational insertion into the endoplasmic reticulum, the port of entry into the cellular secretory system. After folding within the endoplasmic reticulum, proteins destined for secretion are transported to the Golgi apparatus prior to their secretion. Our immunofluorescence microscopic analyses are consistent with this folding and processing pathway and show that endogenous TGFBI and periostin colocalize in the TGN. In addition, we show that both proteins are efficiently secreted from cells and that corneal dystrophy-associated mutations in TGFBI have no effect on its secretion.
In fact, the levels of mutant TGFBI secretion were indistinguishable from those of wild type TGFBI, indicating a failure of the ER quality control mechanisms to recognize and degrade these mutant proteins. It is possible that these mutations do not result in gross misfolding of TGFBI, which would be expected to expose buried hydrophobic regions that allow quality control proteins to recognize and dispose of them. Instead, these mutations may disrupt local TGFBI surfaces that affect interactions with critical binding partners, such as periostin. Our findings indicate that several corneal dystrophy-associated mutations in TGFBI display reduced binding of periostin, with the R124H mutation causing the most severe impairment. We further confirmed this result using primary cultured corneal fibroblasts from a GCD II patient bearing homozygous R124H mutations. These results clearly show that the interaction between periostin and TGFBI was severely reduced by the R124H mutation in TGFBI (Fig. 5C, lane 4), providing the first evidence implicating periostin in GCD II. Our immunofluorescence analyses provide further support for the importance of the Arg-124 residue. We found that in COS-7 cells expressing R124H mutant TGFBI, a large portion was aberrantly localized to Lamp2-immunoreactive late endosomes and lysosomes. We also found that R124H mutant TGFBI showed a similar redistribution in primary cultured corneal fibroblasts from a GCD II patient bearing homozygous R124H mutations. The precise reason for this redistribution is currently unclear and will require further study. It is possible that a portion of the R124H mutant TGFBI is recognized as misfolded and is degraded via the lysosome through a specialized autophagic process termed ER-phagy. Indeed, this has been shown to occur for the Z-variant of α1-antitrypsin, which causes severe misfolding and aggregation in the ER (47). However, our analyses indicate that the R124H mutant is secreted normally, and a second possibility is that this mutant is endocytosed and trafficked to the lysosome for degradation. Further studies will be necessary to determine the molecular basis underlying the lysosomal localization of the R124H mutant TGFBI. It is interesting to note that the Arg-124 residue is found within the initial NH2-terminal segment of TGFBI near the EMI domain, which mediates the association with periostin. Mutations of the Arg-555 residue had no effect on the association with periostin, probably because Arg-555 lies within the fourth FAS1 domain and is spatially separated from the periostin-binding site. Interestingly, R124L and R124C also had no effect on periostin binding. One possibility is that the R124H mutation induces more severe structural changes in the NH2 terminus than the other TGFBI mutations and that these changes affect periostin binding. Previous structural analyses of the FAS1 domain indicate that the Arg-124 residue would be solvent-exposed, and distinct amino acid substitutions could have very different effects on TGFBI intermolecular contacts and local protein structure (50,51). A second possibility supported by our data is that the redistribution of R124H mutant TGFBI to lysosomes results in subcellular segregation of the two proteins, decreasing their overall ability to interact in the cell. Similar to TGFBI, mutant transthyretin is also a nonglycosylated, secreted protein that accumulates into extracellular deposits (43).
The secretory system has a robust quality control system that recognizes and degrades terminally misfolded proteins through a process called ER-associated degradation. Previous studies have established that mutant transthyretin is recognized and degraded via this pathway and displays reduced secretion (43). In contrast to transthyretin, our analyses indicate that disease-associated mutations in TGFBI have no effect on its secretion. These data indicate that mutant TGFBI eludes the quality control systems of the secretory pathway, resulting in the aberrant secretion of a mutant protein. Based on the interaction between TGFBI and periostin and the clearly disruptive effects of the R124H mutation, we analyzed the distribution and expression pattern of periostin in the corneas of normal and GCD II patients. We observed anti-periostin staining in the corneal epithelial layer of normal corneal tissue. Within the corneal epithelium, periostin appeared to be mostly within the cell body and was excluded from the nucleus. In contrast, in GCD II, periostin accumulated in mutant TGFBI stromal deposits, which were highly granular in appearance and stained bright red with Masson's trichrome stain. Western blot analyses of corneal tissues from control and GCD II patients revealed a significant increase in the overall amounts of TGFBI and periostin in the diseased tissue. In addition, both TGFBI and periostin accumulated into a high molecular weight smear, suggesting that these proteins are in an aggregated form that is resistant to SDS denaturation. Not all ECM proteins showed this pattern: tenascin C, fibronectin, and collagens I and VI (Fig. 7 and data not shown) did not exhibit any changes in levels or molecular weight, indicating that not all TGFBI-interacting ECM proteins accumulate into the corneal deposits. Our results demonstrate that the TGFBI-interacting protein periostin is a specific component of the mutant TGFBI deposits in GCD II. Our studies indicate that TGFBI and periostin are expressed in both corneal fibroblast and corneal epithelial cell types. Moreover, our corneal epithelial explants contain epithelial and stromal tissue and show a mix of both the large and small forms of periostin (Fig. 7), suggesting that periostin secreted from corneal fibroblasts and epithelial cells accumulates in the extracellular deposits observed in these patients. Together, these data support the validity and importance of our studies in these cell types. In the studies reported here, we have focused on TGFBI and periostin in corneal epithelial cells and in COS-7 cells as a model cell line; further studies with corneal epithelium would be of value for understanding the role of TGFBI and periostin. In summary, our findings reveal that periostin is a novel binding partner of TGFBI and that impairment of the interaction of TGFBI with periostin by corneal dystrophy-associated mutations in TGFBI may be involved in the pathogenesis of 5q31-linked corneal dystrophies.

FIGURE 6. GCD type II-associated R124H mutation in TGFBI causes mislocalization to the lysosome. A and B, COS-7 cells expressing V5/His-tagged wild type or R124H mutant TGFBI were immunostained with primary antibodies against V5 and TGN38 (a-c) or Lamp2 (d-f) and analyzed by confocal fluorescence microscopy. The yellow color indicates overlapping localization in the merged image. C, quantification of the subcellular distribution of V5/His-tagged wild type and R124H mutant TGFBI in COS-7 cells. COS-7 cells expressing V5/His-tagged wild type or R124H mutant TGFBI were immunostained with primary antibodies against V5 and TGN38 or Lamp2, and the cellular distribution of TGFBI was determined based upon its colocalization with TGN38 or Lamp2. Data represent mean ± S.E. from three independent experiments. D, primary cultured corneal fibroblasts from a normal control (a-c) and a GCD II patient (d-f) bearing homozygous R124H mutations in TGFBI were fixed, permeabilized, and immunostained with monoclonal anti-TGFBI (red) and polyclonal anti-periostin (green) antibodies. Cells were examined by confocal fluorescence microscopy. Scale bars, 50 μm.

FIGURE 7. Periostin accumulates in mutant TGFBI deposits in GCD II corneal tissues. A, corneal tissues from a normal control (a, d, g, and j), a heterozygous GCD type II patient after lamellar keratoplasty (b, e, h, and k), and a homozygous GCD II patient (c, f, i, and l) were stained with Masson's trichrome (a-c), normal rabbit IgG (d-f), anti-TGFBI (g-i), and anti-periostin antibody (j-l). The arrows indicate the region of the TGFBI deposit. Scale bar, 200 μm. B, analysis of TGFBI and periostin expression in corneal epithelium extracts from normal human control and homozygous GCD II patients. Lysates were examined by Western blotting (WB) using the indicated antibodies.
9,388.4
2009-05-28T00:00:00.000
[ "Medicine", "Biology" ]
THE EFFECT OF THE FINANCIAL RATIOS ON THE SHARE PRICE OF INSURANCE COMPANIES LISTED IN THE IRAQI STOCK EXCHANGE MARKET USING MULTIPLE REGRESSION ANALYSIS. APPLIED RESEARCH IN THE IRAQI STOCK EXCHANGE MARKET This research aims to identify the most important financial ratios affecting the share price of insurance companies listed in the Iraqi Stock Exchange Market (ISEM) during the period 2006-2015, and to indicate which ratios are more influential than others on share prices. The research population, consisting of (4) companies, comprises the insurance companies listed in the Iraqi Stock Exchange Market. The research sample consisted of one of these companies, the Iraqi International Insurance Company, representing 25% of the research population. In the statistical analysis, a multiple regression model was used to determine the relationship between the independent and dependent variables, and the results showed that there is a statistically significant relationship and effect between some financial ratios and the share price. The study gives a general background on financial markets and the Iraqi Stock Exchange Market in particular. What distinguishes this study from previous ones is that it demonstrates the effect of the financial ratios on the share price in the Iraqi setting for insurance companies listed in the Iraqi Stock Exchange using the multiple regression method. Introduction The financial ratios are considered among the most useful financial indicators in the field of financial analysis and of the overall performance of the company, which investors seek to know. Financial markets play a role in economic activity through their functions in both developed and developing countries. Investors always try to avoid risk in their investments, especially when investing in insurance companies, so that they can also transfer part of these investments into existing funds while ensuring the greatest possible return. Financial ratios are of significance to shareholders, potential investors in equities, stock exchange analysts, and investment banks. Shareholders and investors are interested in knowing the impact of the company's performance on the income generated from their investments in the shares of insurance companies. The financial ratios are therefore very important to the management of the company in terms of share prices in financial markets, especially as the primary objective of financial management is to maximize the wealth of shareholders by maximizing the market value of the share. The study consists of the following parts: The first part: Methodology of research and some previous studies Research Methodology The research methodology is as follows: Research problem The problem of the research is represented in the following questions: 1. Do the financial ratios of insurance companies measure the overall performance of the company, which the investors seek to know? 2. What is the effect of the financial ratios published in the financial statements on the share price of insurance companies in the Iraqi Stock Exchange Market? 3. What is the effect of these ratios on the share prices of insurance companies in the Iraqi Stock Exchange Market? Research Hypotheses The research hypotheses are as follows: 1. There is no significant correlation between the financial ratios and the share price of the Iraqi International Insurance Company listed in the Iraqi Stock Exchange Market. 2.
There is no statistically significant effect of the financial ratios on the share price of the Iraqi International Insurance Company listed in the Iraqi Stock Exchange Market. Research Objectives The objectives of the research are to: 1. Identify the dimensions of financial ratio analysis, its advantages, and how to utilize it in determining the share price of insurance companies. 2. Identify the implications of financial ratio analysis for the quality of information published in the financial statements of insurance companies whose shares are traded in the Iraqi Stock Exchange Market. 3. Develop a quantitative model, using multiple regression analysis, for determining the share price based on financial ratios. Research significance The importance of the research is that it discusses the dimensions of financial ratio analysis for the insurance companies whose shares are traded in the Iraqi Stock Exchange Market, as well as their effect on the share price using the regression analysis method. The financial market's activity in Iraq reflects the economic activity of companies, especially insurance companies, and among the factors impacting share prices are the financial ratios, whose effect on the share price this study explores using a quantitative statistical method, namely multiple regression analysis. Research population and sample The research population consists of the insurance companies listed in the Iraqi Stock Exchange Market, totaling (4) companies, and the research sample is one of these companies, the Iraqi International Insurance Company, which represents 25% of the research population. Previous studies First: Arab studies 1. A study whose sample consisted of (13) industrial companies. The study included a cross-sectional analysis for testing the independent factors, and the test results showed that they were not significant determinants. The test also covered the absence of unusual behavior during the underwriting period, in addition to giving a background on the financial markets, the system, and the growth of the volume of equity issues in Jordan. 2. The study of Badri and Al-Khoury, 1997 (Study of Stock Movements in the Amman Financial Market Using Econometric Models). The study aimed at identifying the movements of shares in the Amman Financial Market using standard models based on the analysis of quarterly information between 1978 and 1994. The standard estimates of the models presented in this study showed a statistically significant relationship between movements in share prices and some macroeconomic variables, without the possibility of relying on this information in making investment decisions. 3. The Hussein study, titled (Securities and their markets with reference to the Iraqi Stock Exchange Market - Theoretical Framing), aimed at explaining the role of the stock market as the main driving force for economic growth, as economic progress is closely linked to the existence of a thriving and developed stock exchange market. In addition, the increase in securities and their diversity leads to greater public interest from many categories, such as savers with surplus funds who wish to invest their money in the market for the long term through the various possible channels of issuance and exchange. Second: Foreign Studies 1. The study of Filion & Boyer 2004 (Common and fundamental factors in stock returns of Canadian oil and gas companies).
The study aimed at identifying how shares in the industrial sector were affected by a number of sector-specific factors and by the market returns for this sector. The study concluded that the returns of this sector are positively correlated with the market returns and oil prices, and inversely correlated with interest rates as well as with the Canadian exchange rate against the US dollar. 2. The study of Shin and Chin, 1997, "Open-market stock repurchase announcements and revaluation of prior accounting information". It aimed at showing the extent of the open market's interaction with published accounting information and its impact on share repurchases. A sample of 323 markets was used during the period from 1978 to 1992. The study concluded that the market is affected by the published accounting information, and that there is a relationship between share repurchases and the market's response to the accounting information. The theoretical framework Concept of shares (equities) The underwritten shares (and the added ones) are formed by the investors' financial contributions and determine their ownership of the company. Consequently, the declared capital, the actual capital, and the equity capital are all terms reflecting the total value of the shares acquired by the investors. The share gives its holder a right or a stake in the ownership of the company, and this stake is determined by the ratio of the number of shares he owns to the number of issued shares. The share is defined as a "financial document issued by a shareholding company with a nominal fixed value, which is the (par value), ensuring equal rights and obligations for its owners; it is offered to the public through underwriting in the essential (primary) market and is allowed to be traded in secondary markets" (Al-Jarjawi, 2008). Securities' markets These are financial institutions dealing with investment in securities, in terms of issuance and exchange, in which the buying and selling of financial securities such as shares and bonds are conducted. The operations of these markets entail returns and risks. Hence the financial market is the mechanism through which the purchasing, sharing, and exchange of financial assets, including shares, bonds, and currencies, are carried out (Al-Amiri, 2013: 574). (Al-Rubaie, 2009: 7) describes the market as the place where investors meet and transactions of securities, in terms of selling and purchasing, are carried out. These transactions form one of the channels through which money flows between individuals, institutions, and different sectors, which helps in mobilizing and developing funds and preparing them for investment. The concept of financial ratios The financial ratios are logical relationships between certain elements in the financial statements. Many of them can be calculated by linking a given element to another one, taking into account a logical relationship between the two elements for which the ratio is calculated. These two elements may appear in the same statement or in two different statements (Albadawi and Shahatah, 2003: 202). The financial ratios are viewed as better than the direct comparison of the elements of the financial statements, because comparing aggregates and details of the financial statements' elements is flawed in that growth or decline in the business activity of the company is not taken into account.
By calculating the financial ratios of the financial statements' elements and comparing them with their counterparts from previous years for the same company or for others, this defect can be overcome. Thus, the financial analyst can produce more accurate results when making comparisons in analytical procedures (Naim, 2008: 63). Objectives of Financial Ratios The most important objectives of the financial ratios can be explained as follows: 1. Understanding given data in the financial statements in order to assist the management in making various decisions, as the financial indicators resulting from financial analysis reveal the strengths and shortcomings in the financial position of the firm; using these indicators also enables the company to examine its previous achievements and, accordingly, to demonstrate its commitment to its financial policies. 2. Reducing the large volume of the financial statements to a small and useful number of clearly defined financial indicators (Al-Jerjawi, 2008: 58). Types of financial ratios First: Liquidity Ratios These ratios help in identifying the ability of the economic unit to meet its short-term financial obligations (Polis, 2009: 45), including: 1. Current ratio = current assets / current liabilities 2. Quick (rapid liquidity) ratio = (current assets - inventory) / current liabilities Second: Profitability ratios The profitability ratios reflect the overall performance of the company, as they combine the impact of most management decisions. They examine the company's ability to generate profits from sales, assets, and equity, and they measure the effectiveness of the company's investment, financing, and operating policies (Al-Amry, 2013: 88). 1. Net profit margin = net profit of the current operations / net sales 2. Return on total assets = net profit after taxes / total assets 3. Return on equity = net profit after taxes / total equity Third: Indebtedness ratios In general, indebtedness ratios focus on measuring the ability of a company to service long-term debts and repay them when they fall due. When those debts fall due, the firm must be able to repay them from its own funds (equity) or to borrow again. Certainly, the firm's ability to obtain or repay long-term debt often depends on, or is related to, its position with the crediting institutions (Lotfi, 2008: 348). These ratios include: 1. Ratio of current liabilities to equity = current liabilities / equity 2. Ratio of debt to equity = total liabilities / total equity Fourth: Operating and activity ratios From these ratios, efficiency or activity indicators are obtained, which are considered a measure of the efficiency and effectiveness of the economic unit in exploiting and managing its assets. These ratios include (Polis, 2009: 47): Total assets turnover = net sales / total assets Current assets turnover = net sales / current assets Net working capital turnover = net sales / net working capital Fifth: Market (share) ratios This is a set of financial ratios showing the relationship of the share's price to the profits, cash flows, and book value of the share, as well as giving investors an image of the company's past and expected positions.
These market-related ratios are viewed as the most important ratios for investors, creditors, investment banks, and financial analysts, who use them to achieve their aims and objectives (Brigham & Ehrhardt, 2005: 454). These ratios are as follows: 1. Earnings per ordinary share = net profit after interest and taxes / number of ordinary shares 2. Market value to book value = market price of the ordinary share / book value of the ordinary share 3. Price-earnings ratio = market price of the ordinary share / earnings per share 4. Book value of the ordinary share = total value of ordinary shares / number of ordinary shares The applied side of the research Al-Ameen Insurance Company was incorporated in Baghdad under incorporation certificate No. 7606 on 31/7/2000 with a capital of 150 million Iraqi dinars (ID). Its capital in 2015 was ID 3,410,100,000. The company aims to provide the best, most suitable, and most beneficial security coverage to protect the national economy and community members, as well as to invest the accumulated funds in the best available investment opportunities. The company achieved a net profit in 2015 of ID 326,040,231. From the table, the data were analyzed as follows: First: finding the multiple correlation between the dependent variable (Y), represented by the share price, and the independent variables, represented by the financial ratios. To establish the correlations between variables, the statistical analysis program (SPSS-20) was used, and the outputs shown in Table (2) were obtained. From Table 2 we conclude that: 1. The correlation between the variable (Y), represented by the share's price, and the current (trading) ratio (X1) is a weak positive relationship (0.260), with sig = 0.023, which is below the 0.05 significance level. This reflects the insurance sector's aim of maintaining high liquidity so that companies can pay their dues during the compensation process. 2. The correlation between the share's price (Y) and the return on assets (X2) is positive (0.42), with sig = 0.010, below the 0.05 significance level. 3. The correlation between the share's price (Y) and the return on equity (X3) is positive (0.34), with sig = 0.039, below the 0.05 significance level. 4. The correlation between the share's price (Y) and the ratio of current liabilities to equity (X4) is a negative (inverse) relationship (-0.48), with sig = 0.016, below the 0.05 significance level. 5. The correlation between the share's price (Y) and earnings per share (X5) is positive (0.774), with sig = 0.04, below the 0.05 significance level. 6. The correlation between the share's price (Y) and the ratio of market value to book value (X6) is positive (0.94), with sig = 0.00, below the 0.05 significance level. 7. The correlation between the share price (Y) and the book value of the ordinary share is positive (0.65), with sig = 0.020, below the 0.05 significance level. From what has been presented, it is clear that there is a significant correlation between the share's price and the financial ratios, so we can reject the null hypothesis H0, "there is no significant correlation between the financial ratios and the share's price of insurance companies listed in the Iraqi Stock Exchange Market", and accept the alternative hypothesis H1.
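For readers reproducing these pairwise tests outside SPSS, a minimal sketch with scipy follows; the arrays are placeholders standing in for the ten annual observations, not the study's actual data:

```python
import numpy as np
from scipy import stats

# Placeholder annual observations (2006-2015); the real series come
# from the company's published financial statements.
share_price = np.array([1.10, 1.25, 0.95, 1.40, 1.55, 1.30, 1.70, 1.65, 1.80, 2.00])
ratios = {
    "X1 current ratio":     np.array([1.9, 2.1, 1.8, 2.4, 2.6, 2.2, 2.8, 2.7, 3.0, 3.2]),
    "X6 market/book value": np.array([0.9, 1.0, 0.8, 1.2, 1.3, 1.1, 1.5, 1.4, 1.6, 1.8]),
}

# Pearson correlation and two-sided p-value (the "sig" reported by SPSS);
# a coefficient is judged significant when p < 0.05.
for name, x in ratios.items():
    r, p = stats.pearsonr(x, share_price)
    verdict = "significant" if p < 0.05 else "not significant"
    print(f"{name}: r = {r:+.3f}, sig = {p:.3f} ({verdict} at 0.05)")
```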
Second: Estimation of the Multiple Regression Model: The multiple regression model was estimated using the statistical method shown in the following table. Table 3 shows that the coefficient of determination (R2) demonstrates the high explanatory power of the independent variables (the financial ratios) over the dependent variable (the share's price): 94% of the changes in the dependent variable are explained by the independent variables, with the remaining 6% attributable to other explanatory variables not included in the model, while the value of (F) = 4.538. As for the Durbin-Watson test, its value of 2.170 indicates that there is no autocorrelation problem among the residuals of the model. The result of the regression analysis is illustrated in the following table: Table 4 shows the calculated value of (F), which is 4.583, with 7 degrees of freedom in the numerator and 2 in the denominator, at a significance level of 0.000. Thus, we obtained the coefficients of the regression equation and their standard deviations, which are reported in Table 5.
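The estimation itself is standard ordinary least squares; a sketch with statsmodels, using synthetic placeholder data rather than the study's inputs, reports the same diagnostics quoted in Tables 3 and 4 (R2, F, and Durbin-Watson). Note that with 10 observations and 7 regressors, the residual degrees of freedom are 2, as in Table 4:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)

# Placeholder data: 10 annual observations of 7 financial ratios (X1-X7)
# and the share price (Y); the real inputs are the published statements.
X = pd.DataFrame(rng.normal(size=(10, 7)), columns=[f"X{i}" for i in range(1, 8)])
y = X @ np.arange(1, 8) + rng.normal(scale=0.5, size=10)

model = sm.OLS(y, sm.add_constant(X)).fit()

print(f"R^2 = {model.rsquared:.3f}")                        # explanatory power
print(f"F = {model.fvalue:.3f}, p = {model.f_pvalue:.4f}")  # overall significance
print(f"Durbin-Watson = {durbin_watson(model.resid):.3f}")  # ~2 means no autocorrelation
```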
4,088
2018-07-31T00:00:00.000
[ "Economics", "Business" ]
Modeling on the Effect of Coal Loads on Kinetic Energy of Balls for Ball Mills This paper presents a solution for the detection and control of coal loads that is more accurate and convenient than those currently used. To date, no research has addressed the use of a grinding medium as the controlled parameter. To improve the accuracy of coal load detection based on the kinetic energy of balls in a tubular ball mill, a Discrete Element Method (DEM) model for ball kinematics based on coal loads is proposed. The operating process of a ball mill and the ball motion, as influenced by the coal quality and the coal load, were analyzed carefully. The relationship between the operating efficiency of a coal pulverizing system, coal loads, and the balls' kinetic energy was obtained. Origin and Matlab were utilized to plot the variation of parameters with increasing coal loads in the projectile and cascading motion states. The parameters include the balls' real-time kinetic energy, the friction energy consumption, and the mill's total work. Meanwhile, a method of balanced adjacent degree and a physical experiment were used to verify the considerable effect of the balls' kinetic energy on coal loads. The model and experiment results indicate that a coal load control method based on the balls' kinetic energy is feasible for the optimized operation of a coal pulverizing system. Introduction Ball mills, which grind coal to a target size prior to boiler combustion, are important auxiliary equipment in thermal power plants. Their coal grinding efficiency is closely related to the economy of the power plant, as discussed by Masiuk et al. [1]. The control requirement for a pulverizing system is to guarantee that the coal load in the ball mill is close to the optimum level. Therefore, accurately measuring and controlling the coal load of a ball mill is key to maintaining the proper boiler feed. Currently, the various detection methods include the differential pressure method, the vibration method, the noise method, the ultrasonic method, the power method, and different combinations of these methods. In the differential pressure method, the coal load is expressed in terms of the pressure difference between the inflow and outflow of a ball mill, and the measuring precision is limited and determined by the air rate. The fundamentals of the vibration method are to analyze the relationship between the vibration strength of the bearing and the coal load at a constant rotational speed of the ball mill. However, this method has poor linearity and low accuracy. In detecting the coal load, the noise method utilizes ball mill noise, which has poor interference immunity and large deviations due to the effect of environmental noise on the audio signals. The ultrasonic inspection method realizes coal detection by relating the sending-receiving interval between the ultrasonic sensor and the interface. The shortcomings of this method are a high system cost, demanding environmental requirements, and a lack of stability and reliability. In the power method, material levels are detected by the power transformation rule of the coal load. Nevertheless, the sensitivity of this method needs to be improved, and it may be difficult to estimate the coal load when the electric power of a ball mill decreases. The above-mentioned methods cannot truthfully reflect the coal load in ball mills because they have many limitations and low accuracies [2,3].
The motion of the ball mill's medium can directly influence the power consumption of grinding and is associated with the grinding mechanism [4,5]. Davis [6] and Lu et al. [7] studied projectile motion in ball mills and established the balls' motion equations by numerical modeling, and they developed a systematic theory for grinding coal. Ying [8] studied the influence of the mill's rotation rate, the ball filling ratio, and many other factors on the balls' motion. Afterwards, many domestic and overseas scholars performed numerical modeling and developed theories of the medium's movement states, such as the two-phase movement theory [9-11]. Although there is already a considerable amount of research on the medium's motion track and on how the mill's working parameters influence the medium's motion in different distribution areas, there are only a few studies on using the grinding medium as a controlled parameter. The above-mentioned findings and discussions reveal that there has not been a unified, rigorous, and complete mathematical theory for the ball mill grinding process, and this theory lacks a more complete and accurate method for supervising and controlling the coal load. Research on improving the mill's efficiency and lowering energy consumption has not achieved breakthrough progress. The aim of this study is to obtain the relationship between the ball motion and coal loads and to realize a better coal load control method based on the balls' kinetic energy. To improve the performance of the coal load control method, a Discrete Element Method (DEM) is used to analyze the kinematics of the balls under the influence of the coal load. A method of balanced adjacent degree and a physical experiment further confirm that the balls' kinetic energy can reflect the coal load more accurately. In this paper, the balls' kinetic energy is utilized to detect and control the coal load; this method avoids the influence of other factors and enhances the accuracy of coal detection. Results and Discussion The device architecture is described in detail in the Modeling Section. The force and boundary conditions for the balls and the coal are prescribed, and the accumulation of particles appears naturally. The kinetic energy, the friction energy consumption, and the mill's total work for any location and size are conveniently determined in the ball and coal accumulation system. A three-dimensional image of the balls and the coal can be directly generated. PFC3D tracks every particle's motion periodically and repeatedly, thus obtaining the motion of the overall granular mixture.
Based on the PFC3D model, the initial values of the coal load form an arithmetic progression with a(1) = 100, d = 200, and a(7) = 1300. The mill rotates uniformly at 5.6 rpm. The modeling simulation results for four revolutions of the cylinder are as follows. With increasing coal loads, the maximum and average values of the balls' real-time kinetic energy, the energy consumption from sliding friction, and the mill's total work are shown in Figures 2 and 3, where Kpj0/Ww0 represents the average value of the balls' kinetic energy as a percentage of the mill's total work when the mill rotates four revolutions without coal. In Figure 2, the balls' work on the coal particles with different diameters first increases and then decreases as the coal load increases. When Dm = 6 mm and Nm = 700, the balls' kinetic energy accounts for 9.2831% of the mill's total work, which is higher than in the situation without coal (6.4537%); that is, Kpj/Ww - Kpj0/Ww0 = 2.8294%. When Dm ≥ 12 mm, the increase in the kinetic energy of the ball load becomes smaller and smaller as the coal load increases. When Dm = 16 mm, the projectile motion disappears, and the motion of the balls is mainly grinding and squeezing. Therefore, a given ball mill and set of parameters are suitable only for a limited range of coal diameters. The maximum and average values of the balls' kinetic energy for different coal particle diameters, the energy consumption from sliding friction, and the mill's total work with increasing coal loads are clearly demonstrated in Figure 3. To reflect the variation of the curves in Figure 3, the maximum and average values for the balls are amplified by a factor of ten. The pink data points in Figure 3 represent the difference between the mill's total work and ten times the average value of the balls' kinetic energy (Wn = Ww - 10·Kpj). As a result, the variation of the friction energy consumption and the variation of the mill's total work were fundamentally the same. Beyond its minimum, Wn increased gradually: the mill's total work rose while the balls' kinetic energy remained almost the same. The minimum of the curve corresponds to the optimal coal load, which further indicates that when the coal load exceeds a certain value, both the mill's useful work and the use ratio of the balls' kinetic energy decrease. Therefore, the real-time kinetic energy of the ball motion closely relates to the coal load and the mill's operational efficiency.
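The percentages quoted above are plain ratios of simulation outputs; a short sketch of the bookkeeping, plugging in the values reported in the text:

```python
# Reported simulation outputs for Db = 0.03 m, Dm = 6 mm, Nm = 700:
# the balls' average kinetic energy as a share of the mill's total work,
# with and without coal (values taken from the text above).
kpj_over_ww = 0.092831    # Kpj/Ww, with coal
kpj0_over_ww0 = 0.064537  # Kpj0/Ww0, without coal

gain = kpj_over_ww - kpj0_over_ww0
print(f"increase in the use ratio of kinetic energy: {gain:.4%}")  # 2.8294%

# The quantity plotted as the pink points in Figure 3: the mill's total
# work minus ten times the balls' average kinetic energy, Wn = Ww - 10*Kpj.
def wn(total_work, avg_kinetic_energy):
    return total_work - 10.0 * avg_kinetic_energy
```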
Effects of Db = 0.04 m and N0.04 = 86 Figure 4a shows the PFC3D distribution and motion of coal particles with Dm = 6 mm and Nm = 700, corresponding to the 86 balls inputted, which reached their maximum kinetic energy when the mill cylinder rotated in a steady state. Figure 4b shows the real-time variation curve of the balls' kinetic energy in Figure 4a after the mill rotates. Figures 5 and 6 show the maximum and average values of the balls' kinetic energy, the friction energy consumption, and the mill's total work with increasing coal loads. Comparing Figures 2, 3, 5, and 6 shows that when the ball diameter is increased by 0.01 m, the growth rate of the balls' kinetic energy is reduced with increasing coal particle diameters. However, when Dm ≤ 12 mm, the balls' kinetic energy increases at first and then decreases with increasing coal loads. The larger the coal particle's diameter, the smaller the optimal coal load corresponding to the maximum of the balls' kinetic energy. Wn increases with increasing coal loads but has a minimum point. The motion space of balls inside the mill is therefore limited: when the coal load exceeds the optimum value, the balls' impact strength gradually decreases, and their grinding effect on coal loads plays the leading role. Because it is influenced by the ball diameter and the coal particle diameter, the optimal coal load takes different values [12,13]. Effects of Coal Size Distribution Coal particles are of different sizes during the grinding process of a ball mill. To further simulate the actual operating conditions of a ball mill, PFC3D models were conducted on coal particles (6-16 mm) with a uniform distribution and with a Gaussian distribution. Coal particle diameters between 6 mm and 16 mm were drawn at random for a given coal particle number (such as Nm = 100). In addition, the balls' strain energy was investigated. Figure 7a shows the PFC3D motion of the coal particles with Nm = 100 and a uniform distribution, corresponding to balls of Db = 0.03 m and N0.03 = 200, which reached their maximum kinetic energy when the mill was running in a steady state. Figure 7c shows the PFC3D motion of the coal particles with a Gaussian distribution and Nm = 500 under otherwise identical conditions. Figure 7b,d show the real-time variation curves of the balls' kinetic energies in Figure 7a,c, respectively. Figures 8 and 9 show the maximum and average values of the balls' kinetic energy, the strain energy, the friction energy consumption, and the mill's total work with increasing coal loads.
These results show that the strain energy of the balls was much smaller than the balls' kinetic energy and the mill's total work. Consequently, the strain energy of the balls was amplified a hundred times to reflect the variation of the parameter values in the figure. The balls were less affected by the kinetic energy of the cylinder's rotational movement. Assuming that dynamic and temperature effects can be neglected during deformation, the work performed on the balls by the mill would be stored in the coal in the form of strain and stress, i.e., transformed into strain energy; the balls' energy was almost entirely utilized to impact coal particles. In Figures 1, 4, and 7, as the balls' kinetic energy gradually reaches its maximum, the impact strength of the balls on the coal load is highest, the use ratio of the balls is also highest, and the mill does its maximum useful work to obtain qualified pulverized coal. Therefore, the projectile motion is a ball's optimal motion state, at which the mill carries the optimal coal load and attains its highest grinding efficiency. In Figures 2 and 5, when the coal particle diameter and coal load are proper, the average value of the balls' kinetic energy as a percentage of the mill's total work first increases and then decreases in comparison with the situation without coal. In Figures 3 and 8, when the coal particle diameter is proper and the coal load increases, the difference between the mill's total work and the balls' work on the coal passes through a minimum point and then increases. With increasing coal load, the balls and coal in the mill pass successively through a cascading motion, a projectile motion of a few balls, a projectile motion of most of the balls, and a circular motion of almost all balls. In addition, the use ratio of the balls' kinetic energy first increased and then decreased, and the mill's efficiency likewise first increased and then decreased. Comparing Figures 2, 3, 8, and 9 shows that, with increasing coal loads, the increase in the friction energy consumption and in the mill's total work for coal particle diameters with a uniform distribution and a Gaussian distribution was larger than for coal particles of equal size. The use ratio of the balls' kinetic energy similarly increased at first and then decreased with increasing coal loads. In conclusion, the balls' real-time kinetic energy can indicate the mill's coal load precisely, and the mill carries the optimal coal load when the balls reach maximum kinetic energy, corresponding to the highest grinding efficiency. The friction energy consumption and the mill's total work further indicate the use ratio of the balls' impact force, thus demonstrating the mill's grinding efficiency in indirect ways. Ball Kinematics The coal load inside the mill is a significant factor influencing the ball motion. Figure 10 is a spatial distribution diagram showing that, when the mill rotates uniformly around the central axis O, the balls inside the mill realize different motions as the coal load increases. The regions are as follows: 1.
Ω1 is the area where the balls undergo circular motion or cascading. When the coal load inside the mill is small or the mill is empty, the possibility of collisions among the balls is larger. The friction among the balls, the coal, and the mill's liner is enhanced, and grinding dominates in this area, leading to unnecessary abrasion of the balls and liner and to low grinding efficiency. 2. Ω2 is the projectile motion area. When the coal load is normal and does not exceed the dropping point B of the balls' outermost layer, the motion between the balls and the coal in the area underneath is mainly a striking motion. This motion realizes a periodic collision between the balls and the coal, and it has a high efficiency. Suppose that the volume of all the moving balls is Ω = Ω1 + Ω2, L is the mill's effective length, and R is the distance between a ball, regarded as a particle, and the center of the ball mill; the ball filling ratio Ψ = Ω/(πR²L) is then the ratio of the balls' loose volume to the mill's effective volume. For the ball in the i-th layer, σi is its central angle, αi is the included angle between OAi, the line connecting the mill's center and the departure point, and the positive vertical axis Y, and βi is the included angle between OBi, the line connecting the dropping point and the center of the mill, and the positive horizontal axis X. According to the arc length formula, the volume element of Ω1 is

$$d\Omega_1 = \frac{\pi L \sigma_i}{180^\circ}\, R_i \, dR_i, \tag{1}$$

and integrating over the radius between the innermost layer $R_1$ and the outermost layer $R$ gives

$$\Omega_1 = \int_{R_1}^{R} \frac{\pi L \sigma_i}{180^\circ}\, R_i \, dR_i, \tag{2}$$

from which Ω1, and hence the filling ratio Ψ, follows once σi is known as a function of Ri. 3. In the area around the mill's center, region Ω3, the balls' circular motion and projectile motion are blended. Because space is limited, the grinding and impact effects are weak. 4. In the empty area Ω4, balls do not move or only undergo circular motion. When the coal load inside the mill is excessive, energy is wasted and operational troubles arise because the balls' moving space is limited and the balls are almost at rest relative to the mill [14]. Correlation Degree and Balance Degree Method The correlation degree, combined with the balance degree, is utilized to verify that the balls' kinetic energy is applicable for the control of the coal load in a ball mill. The correlation degree is a measure of how closely the variation trends of factors in two systems track each other as time or another variable increases. We judge the correlation degree between the reference data sequence and a comparison data sequence according to the geometrical relation and similarity of the generated data curves: the more similar the variation trends of the curves, the larger the correlation degree of the corresponding sequence, and vice versa. The reference data sequence $X_0 = \{x_0(1), x_0(2), \ldots, x_0(n)\}$ reflects the behavioral characteristics of the system, and the comparison data sequences $X_j = \{x_j(1), x_j(2), \ldots, x_j(n)\}$, $j = 1, 2, \ldots, m$, represent the factors affecting the system's performance. The correlation coefficient of $X_j$ with respect to $X_0$ at point $k$ is

$$\xi_j(k) = \frac{\Delta_{\min} + \rho\,\Delta_{\max}}{\Delta_j(k) + \rho\,\Delta_{\max}}, \qquad \Delta_j(k) = \left|x_0(k) - x_j(k)\right|, \tag{6}$$

where $\Delta_{\max}$ and $\Delta_{\min}$ are the maximum and minimum absolute differences among all sequences, generally with $\Delta_{\min} = 0$, and $\rho \in (0,1)$ is the distinguishing coefficient.
The correlation degree between the reference data sequence and a comparison data sequence is calculated by the following formula [15]:

$$r_j = \frac{1}{n}\sum_{k=1}^{n}\xi_j(k). \tag{7}$$

To reduce the association tendency of local points, the balance degree is further adopted to measure and compare the correlation degree of each data sequence's correlation coefficient series [16,17]. Supposing the correlation coefficient series of the $j$-th comparison data sequence with respect to the reference data sequence is $\{\xi_j(1), \xi_j(2), \ldots, \xi_j(n)\}$, its distribution is

$$p_j(k) = \frac{\xi_j(k)}{\sum_{k=1}^{n}\xi_j(k)}, \qquad k = 1, 2, \ldots, n. \tag{8}$$

Based on Equation (8), we can define

$$H_j = -\sum_{k=1}^{n} p_j(k)\,\ln p_j(k), \tag{9}$$

which is the entropy of the correlation coefficients of the $j$-th comparison data sequence with respect to the reference data sequence, and the balance degree is

$$Ba_j = \frac{H_j}{H_{\max}}, \tag{10}$$

where $H_{\max} = \ln n$ ($n$ being the sequence length) is the largest entropy attainable by the $j$-th comparison data sequence. Thus, the balanced adjacent degree is

$$Bd_j = \sqrt{Ba_j \cdot r_j}. \tag{11}$$

The correlation degree between each comparison data sequence and the reference data sequence is ordered by the balanced adjacent degree. Finally, we can determine the relationship between the comparison parameter and the reference parameter and obtain an effective theory based on the meaning of the parameters. Correlation Analysis of the Coal Load and Balls' Kinetic Energy From the DEM simulation results, the analysis of ball motion with increasing coal loads for particles of different diameters clearly demonstrates that there is a close relationship between the coal load and the balls' real-time kinetic energy in the operation of a ball mill. Accordingly, a balanced adjacent degree between the coal load and the balls' kinetic energy is computed. The coal load in the modeling data was selected as the reference data sequence, while other parameters, such as the balls' kinetic energy, the energy consumption from sliding friction, and the mill's total work, were the comparison data sequences. Ba1(Xi,Xj) represents the balance degree between the coal load and the other parameters when Db = 0.03 m and the balls possess optimal projectile motion. Ba2(Xi,Xj) represents the balance degree between the coal load and the other parameters when Db = 0.04 m and the balls possess optimal projectile motion. The results in Table 1 show that every parameter's balanced adjacent degree exceeds 0.6 and that every parameter's sensitivity to the variation of the coal load was high. The balanced adjacent degree between the balls' kinetic energy and the coal load was slightly higher than those of the energy consumption from sliding friction and the mill's total work, so the balls' kinetic energy could better explain the coal load and reflect the working efficiency.
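The procedure is straightforward to implement; below is a minimal Python sketch following Equations (6)-(11) as reconstructed above, with ρ = 0.5 as a conventional choice of distinguishing coefficient and placeholder sequences standing in for the simulation data:

```python
import numpy as np

def balanced_adjacent_degree(reference, comparison, rho=0.5):
    """Grey relational analysis with an entropy-based balance degree.

    reference, comparison: 1-D sequences, pre-normalized to comparable scales.
    Returns (correlation degree r, balance degree Ba, balanced adjacent degree Bd).
    """
    x0 = np.asarray(reference, dtype=float)
    xj = np.asarray(comparison, dtype=float)

    delta = np.abs(x0 - xj)                   # pointwise absolute differences
    d_min, d_max = delta.min(), delta.max()

    # Correlation coefficient at every point k, Equation (6).
    xi = (d_min + rho * d_max) / (delta + rho * d_max)

    r = xi.mean()                             # correlation degree, Equation (7)

    p = xi / xi.sum()                         # coefficient distribution, Equation (8)
    h = -np.sum(p * np.log(p))                # entropy, Equation (9)
    ba = h / np.log(len(xi))                  # balance degree, H / ln(n), Equation (10)

    return r, ba, np.sqrt(ba * r)             # balanced adjacent degree, Equation (11)

# Example: coal load as reference, balls' kinetic energy as comparison
# (placeholder values normalized to [0, 1]).
coal_load = [0.1, 0.25, 0.4, 0.55, 0.7, 0.85, 1.0]
kinetic_energy = [0.2, 0.45, 0.7, 0.9, 1.0, 0.8, 0.5]
print(balanced_adjacent_degree(coal_load, kinetic_energy))
```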
Experimental Apparatus The practical ball motion is very complicated, so the validity of the DEM model for ball kinematics with increasing coal loads is assessed by physical experiments. The experimental apparatus is shown schematically in Figure 11. It is a ball mill of size Ø0.36 m × 0.2 m. Rough rubber lining plates are connected to 8 × 2 screw holes on the cylinder wall, so the number of lining plates can be changed. An electronic speed controller controls the working speed of the mill. An iron stand reduces vibration and increases stability by connecting the bearing part with the boards. The front cover and back cover are made of a thick flange and a ribbed flange, respectively. The front window glass and the mill cylinder are fixed by the front cover, and a rubber cradle is fitted in between to avoid damage to the front window. The diameter of the balls is Db = 0.008 m, and their bulk density is ρ = 4.9 t/m³. Coal particles are replaced with expanded polypropylene (EPP) particles in the experiment, which are similar to coal in physical characteristics, in order to reduce the wear of the mill cylinder and guarantee the visibility of the front glass window. Their bulk density is ρm = 0.75 t/m³, the rotation rate is Φ = 75%, and the optimum ball filling ratio is Ψzj = 40%. Ψm is the coal filling ratio. It is assumed that the ball filling ratio in the experiment is such that the balls can move in cascading or projectile motion. Based on the optimized configuration of the above operating parameters, the physical model of a ball mill was established. A high-speed camera was adopted to record the real-time variation of the kinetic energy of the balls and coal loads, with the following parameters: the camera body is a CANON EOS 5D, 12.8 million pixels; the lens is a CANON EF50MM, F1.4; the shutter speed is 1/256 s; and the focal length is 105 mm. The initial values of the coal load are a(1) = 10%, a(2) = 20%, and a(3) = 50%. The mill rotates uniformly at 20 rpm. The physical experiment results are shown in Figure 12. The use ratio of the balls' kinetic energy is realized by impacting and grinding the coal loads. When Ψm > 30%, the number of projectile balls tends to decrease with increasing coal loads. When Ψm = 50%, the motion space of the balls inside the mill is limited, the balls' impact strength gradually decreases, and grinding plays the leading role. It follows that the projectile motion reaches an obvious peak when the ball and coal loads reach certain values. Therefore, ensuring the optimal motion state of the balls is significant for the coal pulverizing efficiency at limited coal loads.
Experimental Results The position data of the balls and coal loads when the ball mill was running in a steady state are shown in Figure 13. They further indicate that the ball motion is closely related to the coal load under otherwise equal conditions. As the balls and coal loads in the mill pass through the cascading motion, the projectile motion of a few balls, and the projectile motion of most balls, their kinetic energy gradually reaches its maximum. The projectile motion is the balls' optimal motion state, and it corresponds to the mill's highest grinding efficiency. However, with excess coal loads and limited space, balls do not move or only undergo circular motion, and their kinetic energy decreases. Figure 14 demonstrates the variation of the balls' kinetic energy with the coal load in the ball mill physical experiment. It indicates that the more impact energy the balls obtain in the projectile motion state, the higher the grinding efficiency. A speed sensor and a torque sensor will be utilized to measure the balls' motion speed and the mill's torque, and the relevant experimental study will be designed in future work. Modeling Section A 0.4 m diameter by 1.2 m long ball mill with steel liner plates was modeled by a cylinder with special material characteristics according to the ball mill's performance. The cylinder's boundary was truncated to 1/6 of the initial 1.2 m length. Balls and the coal inside the mill were modeled by spherical discrete elements of given material characteristics and size. The balls' diameters were Db = 0.03 m and Db = 0.04 m, and their bulk density was ρgq = 4.9 t/m³. The coal particles' diameters are Dm = 6, 8, 10, 12, 14, and 16 mm, and their bulk density was ρm = 0.75 t/m³. In the particle and contact model, "ball-ball" contacts and "ball-wall" contacts were governed by the contact force. The parameters with the most influence on the contact force were the stiffness, the damping, and the friction factor. The stiffness represents the resistance of balls to elastic deformation, the friction factor affects the power consumption of ball mills, and the damping mainly influences the accumulation process and the course of energy dissipation [18]. Some properties of other parameters of the model are given in Table 2, which combines the operating characteristics of a ball mill with the different performance properties of the materials. The rotation rate Φ, the ratio of the working speed to the critical speed of the mill, was selected as Φ = 80%. With the parameters optimized by the estimation method for experiments, the ball mill model was established by setting the parameters related to the external appearance of ball mills, which included the mill cylinder and cylinder walls. DEM is a numerical computation method for discontinuous medium mechanics and is used for solving and analyzing the equations of motion and kinetic parameters of granular materials [19-21]. DEM separates granular mixtures into a set of discrete units, and the units themselves have certain geometrical, physical, and chemical properties. The medium's motion is governed by Newton's second law, and it can be solved iteratively by the dynamic or static relaxation method, which describes the whole medium's law of motion by observing all units' motion and location.
The equation of motion of particle $i$ according to Newton's second law is:

$$m_i\,\ddot{u}_i = \sum F_i, \qquad I_i\,\dot{\omega}_i = \sum M_i, \tag{12}$$

where $\ddot{u}_i$ and $\dot{\omega}_i$ are, respectively, particle $i$'s acceleration and angular acceleration, $m_i$ and $I_i$ are, respectively, its mass and rotational inertia, and $\sum F_i$ and $\sum M_i$ are, respectively, the joint force and the joint moment of force taken at the particle's center of mass. The central difference method is always used for the numerical integration of Equation (12), and it yields the updated velocities, expressed at the intermediate point of two iterative time steps:

$$\dot{u}_i^{(N+1/2)} = \dot{u}_i^{(N-1/2)} + \ddot{u}_i^{(N)}\,\Delta t, \qquad \omega_i^{(N+1/2)} = \omega_i^{(N-1/2)} + \dot{\omega}_i^{(N)}\,\Delta t, \tag{13}$$

where $\dot{u}_i$ and $\omega_i$ are, respectively, particle $i$'s velocity and angular velocity, $\Delta t$ is the time step, and $N$ corresponds to time $t$. Integrating Equation (13), we obtain Equation (14) for the displacements:

$$u_i^{(N+1)} = u_i^{(N)} + \dot{u}_i^{(N+1/2)}\,\Delta t, \qquad \theta_i^{(N+1)} = \theta_i^{(N)} + \omega_i^{(N+1/2)}\,\Delta t, \tag{14}$$

where $u_i$ and $\theta_i$ are, respectively, particle $i$'s displacement and angular displacement. We can therefore compute the new force from the particle's new displacement, satisfying the force-displacement relationship. The method tracks every particle's motion periodically and repeatedly at every time step, thus obtaining the motion of the overall granular mixture. Fundamental assumptions are very important prerequisites for DEM analysis. The fundamental assumptions of this study are as follows: (1) The particle unit is regarded as a rigid sphere. (2) Contacts occur over a tiny area, i.e., a point contact, and a maximum stress strength of the contact-bonded model applies at the contact. (3) The contact is a flexible contact: it allows a certain overlap, which is tiny in comparison with the particle size in the contact area and is related to the contact force. (4) The time step is small: any unit disturbance from indirect contact should be avoided, and the velocity as well as the acceleration within any time step is constant. DEM is used to simulate the ball mill with the optimized working parameters [22,23], and it records the real-time modeling simulation data of the balls' kinetic energy by directly observing the ball motion for different coal loads in the mill. The DEM model analyzes the ball kinematics with increasing coal loads under different parameter conditions. Recently [24,25], PFC3D was used to simulate the motion and interaction between the balls and the coal by DEM. The pulverized coal is discharged with dry hot air while coal particles are added simultaneously, which is a process of dynamic equilibrium. The balls inside the mill are discontinuous, and the PFC3D model, which is based on a command-driven mode, is good at processing discontinuous problems because it can demonstrate ball motion in a natural way, as discussed by Geng et al. [26]. The PFC3D model uses an explicit difference algorithm and the theory of discrete element simulation to calculate the balls' kinetic energy for coal particles with given pulverization and particle size distributions under increasing coal loads.
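Equations (12)-(14) amount to the standard central-difference (leapfrog) scheme; the following translational-only sketch for a single particle bouncing on a linear contact spring illustrates the update cycle (the mass, stiffness, and damping values are illustrative, not PFC3D's internal parameters):

```python
import numpy as np

# Illustrative parameters, not PFC3D's internal values.
m = 0.11           # ball mass, kg (roughly a 0.03 m steel ball)
g = np.array([0.0, -9.81])
k_n = 1.0e5        # linear contact stiffness, N/m
c_n = 50.0         # contact damping, N*s/m (dissipates impact energy)
dt = 1.0e-5        # small time step, per assumption (4)

pos = np.array([0.0, 0.15])   # initial position, m
vel = np.array([0.0, 0.0])    # velocity carried at the half step, m/s

for _ in range(200_000):      # simulate 2 s
    # Flexible point contact with a floor at y = 0: the force grows with
    # the (tiny) overlap, per assumptions (2) and (3).
    overlap = -pos[1]
    if overlap > 0.0:
        f_contact = np.array([0.0, k_n * overlap - c_n * vel[1]])
    else:
        f_contact = np.zeros(2)

    acc = g + f_contact / m   # Equation (12): acceleration from the joint force
    vel = vel + acc * dt      # Equation (13): update the mid-step velocity
    pos = pos + vel * dt      # Equation (14): update the displacement

print(f"resting height: {pos[1]:.6f} m")  # ~ -m*9.81/k_n, the static overlap
```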
The contact patterns in the PFC3D model include "ball-ball" and "ball-wall" contacts. By setting the contact model of the balls and the coal, the boundary conditions, the forces, and the particle properties, the three-dimensional motion of the particle system can be simulated [27], and there exists an optimum value of the ball filling ratio Ψzj corresponding to a given rotation rate Φ. The approximate equation for the number of balls [28-31] is the ratio of the ball charge volume to the volume of a single ball,

$$N \approx \frac{\Psi \cdot \frac{\pi}{4} D^2 L}{\frac{\pi}{6} D_b^3} = \frac{3 \Psi D^2 L}{2 D_b^3},$$

where L is the mill's effective length, D is the inner diameter of the mill, and Ψ is the ball filling ratio of the mill (a numeric check follows the figure list below). After the calculation, the numbers of simulated balls for diameters of 0.03 m and 0.04 m were N0.03 = 200 and N0.04 = 86, respectively. The variation with coal load of the kinetic energy, the strain energy, the energy consumption from sliding friction, and the mill's total work was analyzed for these two parameter sets: Db = 0.03 m with N0.03 = 200, and Db = 0.04 m with N0.04 = 86.

Conclusions The balls' kinetic energy is used as the controlled variable for the coal load. To study the relationship between the operating efficiency of the coal pulverizing system, the coal load, and the balls' kinetic energy, DEM simulations of ball motion were carried out for different coal weights, coal loads, and coal particle size distributions. Several important conclusions may be drawn: (1) The DEM modeling showed that the use ratio of the balls' kinetic energy first increases and then decreases with increasing coal load. Projectile motion is the optimum state for the balls to obtain the maximum kinetic energy, and a close relationship between the coal load of the ball mill and the ball movement was found. (2) The real-time kinetic energy of the balls was modeled in Origin and Matlab to reflect the coal load of the ball mill. As the coal load increased, the spatial distribution states of the balls were obtained, and the method of balanced adjacent degree further indicated that the effect of the balls' kinetic energy on the coal load is considerable. (3) A physical experiment verified the close correlation between the balls' kinetic energy and the coal load. A coal load control method based on the balls' kinetic energy is applicable to the control of coal loads in ball mills.

Figure 1. (a) The balls' distribution and motion in the mill running in a steady state after adding coal particles of Dm = 6 mm and Nm = 700; (b) the real-time variation curve of the balls' kinetic energy after rotation of the cylinder, which reaches a clear peak value when the projectile balls and the coal reach a certain value, with a regular fluctuation of kinetic energy over the circulating rotation of the cylinder.
Figure 2. Parameters of ball motion for Db = 0.03 m and N0.03 = 200 for coal particles of different diameters with increasing coal loads.
Figure 7. (a) Simulation model of ball motion with uniform distribution; (b) real-time kinetic energy of balls with uniform distribution; (c) simulation model of ball motion with Gaussian distribution; (d) real-time kinetic energy of balls with Gaussian distribution.
Figure 10. Distribution diagram of ball motion.
Figure 11. (a) General schematic drawing of the experimental apparatus; (b) ball mill physical experiment.
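As a numeric check of the ball-count approximation from the Modeling Section: the filling ratio Ψ is not stated in the extracted text, so Ψ = 0.11 is an assumed value, chosen here because it reproduces the reported counts for both ball sizes.

```python
import math

def n_balls(psi, D, L, Db):
    """Approximate ball count: ball charge volume / single-ball volume."""
    charge_volume = psi * math.pi / 4 * D**2 * L
    ball_volume = math.pi / 6 * Db**3
    return charge_volume / ball_volume

D, L = 0.4, 1.2 / 6          # inner diameter (m) and truncated effective length (m)
for Db in (0.03, 0.04):
    print(Db, round(n_balls(0.11, D, L, Db)))   # ~196 and ~83, close to N=200 and N=86
```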
7,404.6
2015-07-09T00:00:00.000
[ "Materials Science", "Engineering", "Physics" ]
Critical Percolation and the Incipient Infinite Cluster on Galton-Watson Trees We consider critical percolation on Galton-Watson trees and prove quenched analogues of classical theorems of critical branching processes. We show that the probability that critical percolation reaches depth $n$ is asymptotic to a tree-dependent constant times $n^{-1}$. Similarly, conditioned on critical percolation reaching depth $n$, the number of vertices at depth $n$ in the critical percolation cluster almost surely converges in distribution to an exponential random variable with mean depending only on the offspring distribution. The incipient infinite cluster (IIC) is constructed for a.e. Galton-Watson tree, and we prove a limit law for the number of vertices in the IIC at depth $n$, again depending only on the offspring distribution. Provided the offspring distribution used to generate these Galton-Watson trees has all finite moments, each of these results holds almost surely.

Introduction We consider percolation on a locally finite rooted tree $T$: each edge is open with probability $p \in (0, 1)$, independently of all others. Let $0$ denote the root of $T$ and $C_p$ be the open $p$-percolation cluster of the root. We may consider the survival probability $\theta_T(p) := P[|C_p| = +\infty]$ and note that $\theta_T$ is an increasing function of $p$. There thus exists a critical percolation parameter $p_c \in [0, 1]$ such that $\theta_T(p) = 0$ for all $p \in [0, p_c)$ and $\theta_T(p) > 0$ for $p \in (p_c, 1]$. If $T$ is a regular tree in which each non-root vertex has degree $d + 1$ (i.e., each vertex has $d$ children), then the classical theory of branching processes shows that $p_c = 1/d$ and $\theta_T(p_c) = 0$ (see, for instance, [AN72]). Since critical percolation does not occur, we may consider the incipient infinite cluster (IIC), in which we condition on critical percolation reaching depth $M$ of $T$ and take $M$ to infinity. The IIC for regular trees was first constructed and considered by Kesten in [Kes86b]. In that work, along with [BK06], the primary focus was on simple random walk on the IIC for regular trees. Our focus is on three elementary quantities for random $T$: the probability that critical percolation reaches depth $n$; the number of vertices of $C_{p_c}$ at depth $n$ conditioned on percolation reaching depth $n$; and the number of vertices in the IIC at depth $n$. For regular trees, these questions were answered in the study of critical branching processes. In fact, these classical results apply to annealed critical percolation on Galton-Watson trees. If we generate a Galton-Watson tree $T$ with progeny distribution $Z \geq 1$ with $E[Z] > 1$, we may perform $p_c = 1/E[Z]$ percolation at the same time as we generate $T$; this is known as the annealed process (in which we generate $T$ and percolate simultaneously) and is equivalent to generating a Galton-Watson tree with offspring distribution $\hat{Z} := \mathrm{Bin}(Z, p_c)$. Since $E[\hat{Z}] = 1$, this is a critical branching process and thus the classical theory can be used: the annealed conditional distribution of $|Y_n|/n$ given $|Y_n| > 0$ converges in distribution to an exponential law whose mean depends only on the offspring distribution. Under the additional assumption of $E[Z^3] < \infty$, parts (a) and (b) are due to Kolmogorov [Kol38] and Yaglom [Yag47], respectively; as such, they are commonly referred to as Kolmogorov's estimate and Yaglom's limit law. For a modern treatment of these classical results, see [LPP95] or [LP17, Section 12.4]. Although less widely known, Theorem 1.1 quickly gives a limit law for the size of the annealed IIC. Corollary 1.2.
If $E[Z^2] < \infty$, let $C_n$ denote the number of vertices at depth $n$ in the annealed incipient infinite cluster. Then $C_n/n$ converges in distribution to the random variable with density $\lambda^2 x e^{-\lambda x}$. This can be easily proven from Theorem 1.1 using an argument similar to the proof of Theorem 3.10, and thus the details are omitted. Our goal is to upgrade Theorem 1.1 and Corollary 1.2 to hold for the quenched process; that is, rather than generate $T$ and perform percolation at the same time as in the annealed case, we generate $T$ and then perform percolation on each resulting $T$. We then ask what properties hold for almost every $T$. For instance, a key quenched result is that of [Lyo90], which states that for a.e. supercritical Galton-Watson tree with progeny distribution $Z$, the critical percolation probability is $p_c = 1/E[Z]$; furthermore, for almost every Galton-Watson tree $T$, $\theta_T(p) = 0$ for $p \in [0, p_c]$ and $\theta_T(p) > 0$ for $p \in (p_c, 1]$. For a fixed tree $T$, let $P_T[\cdot]$ be the probability measure induced by performing $p_c$ percolation on $T$. When $T$ is random, this is a random variable, and we may ask about the almost sure behavior of certain probabilities. Our main results are summarized in the following theorem:

Theorem 1.3. Suppose $E[Z^p] < \infty$ for all $p \geq 1$, and let $Y_n$ be the set of vertices at depth $n$ of $T$ connected to the root in $p_c = 1/E[Z]$ percolation. Then for a.e. $T$ we have (a) $n \cdot P_T[|Y_n| > 0] \to W\lambda$ a.s., where $W$ is the martingale limit of $T$. (b) The conditional distribution of $|Y_n|/n$ given $|Y_n| > 0$ converges a.s. in distribution to an exponential random variable with mean $\lambda^{-1}$. (c) Let $C_n$ denote the number of vertices in the quenched IIC of $T$ at depth $n$. Then $C_n/n$ converges in distribution to the random variable with density $\lambda^2 x e^{-\lambda x}$ a.s.

Note that, surprisingly, the limit laws of parts (b) and (c) of Theorem 1.3 do not depend at all on $T$ itself but only on the distribution of $Z$. This is in sharp contrast to the case of near-critical and supercritical percolation on Galton-Watson trees, in which the behavior is dependent on the tree itself [MPR18]. One possible justification for this lack of dependence on $W$, for instance, is that conditioning on $|Y_n| > 0$ forces certain structure on the percolation cluster near the root; since $W$ is mostly determined by the levels of $T$ near the root, the behavior conditioned on $|Y_n| > 0$ for large $n$ doesn't depend on $W$. Part (a) of Proposition 3.8 corroborates this heuristic explanation. The three parts of Theorem 1.3 are Theorems 3.3, 3.5 and 3.10, respectively. The proof of part (a) utilizes its annealed analogue, Theorem 1.1(a), along with a law of large numbers argument. Part (b) is proven by the method of moments, building on the work of [MPR18]. Part (c) follows from there with a similar law of large numbers argument combined with two short facts about the structure of the percolation cluster conditioned on $|Y_n| > 0$ (this is Proposition 3.8).

Set-up and Notation We begin with some notation and a brief description of the probability space on which we will work. Let $Z$ be a random variable taking values in $\{1, 2, \ldots\}$ with $\mu := E[Z] > 1$ and $P[Z = 0] = 0$. Define its probability generating function to be $\varphi(z) := \sum_k P[Z = k] z^k$. Let $T$ be a random locally finite rooted tree with law equal to that of a Galton-Watson tree with progeny distribution $Z$, and let $(\Omega_1, \mathcal{T}, \mathrm{GW})$ be the probability space on which it is defined. Since we will perform percolation on these trees, we also let $(\Omega_2, \mathcal{F}_2, P_2)$ be the corresponding probability space. Our canonical probability space will be $(\Omega, \mathcal{F}, P)$ with $\Omega := \Omega_1 \times \Omega_2$, $\mathcal{F} := \mathcal{T} \otimes \mathcal{F}_2$ and $P := \mathrm{GW} \times P_2$.
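To make the annealed process concrete, here is a minimal Monte Carlo sketch (the offspring law, trial counts, and the population cap are illustrative assumptions, not choices from the paper): it grows the annealed cluster generation by generation as a critical branching process with offspring $\mathrm{Bin}(Z, p_c)$ and checks that $n \cdot P[|Y_n| > 0]$ stabilizes near a constant, as Kolmogorov's estimate predicts.

```python
import random

def offspring():
    """Progeny Z uniform on {1, 2, 3} (illustrative), so E[Z] = 2 and p_c = 1/2."""
    return random.randint(1, 3)

def annealed_survives(n, p_c=0.5):
    """Grow the annealed cluster: each surviving vertex contributes Bin(Z, p_c)
    percolated children. Return True if depth n is reached."""
    alive = 1
    for _ in range(n):
        alive = sum(1 for _ in range(alive)
                      for _ in range(offspring()) if random.random() < p_c)
        if alive == 0:
            return False
        alive = min(alive, 10_000)  # cap for speed; introduces a tiny downward bias
    return True

for n in (10, 20, 40):
    trials = 20_000
    p = sum(annealed_survives(n) for _ in range(trials)) / trials
    print(n, round(n * p, 2))   # n * P[survival] should stabilize near a constant
```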
We interpret an element $\omega = (T, \omega_2) \in \Omega$ as the tree $T$ with edge weights given by the $U_i$ random variables. To obtain $p$ percolation, we restrict to the subtree of edges with weight at most $p$. Since we are concerned with quenched probabilities, we define the measure $P_T[\cdot] := P[\cdot \mid \mathcal{T}] = P[\cdot \mid T]$. Since this is a random variable, our goal is to prove theorems GW-a.s. We employ the usual notation for a rooted tree $T$, Galton-Watson or otherwise: $0$ denotes the root; $T_n$ is the set of vertices at depth $n$; and $Z_n := |T_n|$. In the case of a Galton-Watson tree $T$, we define $W_n := Z_n/\mu^n$ and recall that $W_n \to W$ almost surely. Furthermore, if $E[Z^p] < \infty$ for some $p \in [1, \infty)$, we in fact have $W_n \to W$ in $L^p$ [BD74, Theorems 0 and 5]. In the Galton-Watson case, define $\mathcal{T}_n := \sigma(T_n)$; then $(\mathcal{T}_n)_{n=0}^{\infty}$ is a filtration that increases to $\mathcal{T}$. For a vertex $v$ of $T$, define $T(v)$ to be the descendant tree of $v$, and extend our notation accordingly. For percolation, recall that the critical percolation probability for GW-a.e. $T$ is $p_c := 1/\mu$ and that percolation does not occur at criticality [Lyo90]. For vertices $v$ and $w$ with $v \leq w$, let $\{v \leftrightarrow w\}$ denote the event that there is an open path from $v$ to $w$ in $p_c$ percolation; let $\{v \leftrightarrow (u, w)\}$ be the event that $v$ is connected to both $u$ and $w$ in $p_c$ percolation; for a subset $S$ of $T$, let $\{v \leftrightarrow S\}$ denote the event that $v$ is connected to some element of $S$ in $p_c$ percolation; lastly, let $Y_n$ be the set of vertices in $T_n$ that are connected to $0$ in $p_c$ percolation.

Moments. Here $C_j(k)$ denotes the set of $j$-compositions of $k$ and $m_r := E[\hat{Z}^r]$. We use the following result from [MPR18]: for each $k$, $M_n^{(k)}$ is a martingale with respect to the filtration $(\mathcal{T}_n)$ with uniformly bounded $L^2$ norm. While Theorem 3.1 isn't stated precisely this way in [MPR18], the martingale property follows, with convergence to $W$ almost surely and in $L^2$. Proof. By Theorem 3.1, $M_n^{(k)}$ is a martingale with uniformly bounded $L^2$ norm for each $k$. By the $L^p$ martingale convergence theorem, $M_n^{(k)}$ converges in $L^2$ and almost surely. We now proceed by induction on $k$. For $k = 1$, $E_T[|Y_n|] = W_n$, which converges to $W$. Suppose that the proposition holds for all $j < k$. Then by the convergence of $M_n^{(k)}$, the $o(1)$ term is both in $L^2$ and almost sure, and by induction the leading term gives the contribution.

Survival Probabilities. Throughout, define $\lambda := \frac{2}{p_c^2 \varphi''(1)}$. Our first task is to find a quenched analogue of Kolmogorov's estimate (Theorem 3.3). Before proving this exact limit, we first prove upper and lower bounds, where $R(0 \leftrightarrow T_n)$ is the equivalent resistance between the root and $T_n$ when all of $T_n$ is shorted to a single vertex and each edge branching from depth $k - 1$ to $k$ has resistance $\frac{1 - p_c}{p_c^k}$. Shorting together all vertices at depth $k$ for each $k$ gives the lower bound. Proof of Theorem 3.3: For each fixed $m < n$, the Bonferroni inequalities imply (3.1), whose right-hand side is, by Lemma 3.4 and Theorem 3.1, at most $Cm^2 n^{-4}$. Multiplying by $n$, the second moment of the right-hand side of (3.1) is bounded above by $Cm^2 n^{-2} = O(n^{-3/2})$, which is summable in $n$. By Chebyshev's inequality together with the Borel-Cantelli lemma, the right-hand side of (3.1) converges to zero almost surely. This implies (3.2). We want to show that the right-hand side of (3.2) converges to $W\lambda$, so we first calculate its second moment, where the last inequality is via Lemma 3.4. Since this is summable in $n$, Chebyshev's inequality and the Borel-Cantelli lemma again apply. Taking $n \to \infty$ and utilizing Theorem 1.1 together with (3.2) completes the proof.

Conditioned Survival. Theorem 3.5. Suppose $E[Z^p] < \infty$ for all $p \geq 1$.
Then the conditional random variable $(|Y_n|/n \mid |Y_n| > 0)$ converges in distribution to an exponential random variable with mean $\lambda^{-1}$ for GW-almost every $T$. Proof. The proof is via the method of moments. In particular, since the moment generating function of an exponential random variable has a positive radius of convergence, its distribution is uniquely determined by its moments. Thus, any sequence of random variables with each moment converging to the corresponding moment of an exponential random variable must converge in distribution to that exponential random variable [Bil95, Theorems 30.1 and 30.2]. Let $X_n$ be a random variable with distribution $(|Y_n|/n \mid |Y_n| > 0)$. It is sufficient to show $E_T[X_n^k] \to k!\lambda^{-k}$ GW-a.s., since $k!\lambda^{-k}$ is the $k$th moment of an exponential random variable; Proposition 3.2 and Theorem 3.3 imply this. More can be said about the structure of the open percolation cluster of the root conditioned on $0 \leftrightarrow T_n$, but we require two general, more or less standard lemmas first, obtained by taking absolute values and bounding the conditional probabilities $|P[A \mid \cdot\,]|$. Proof. This is a straightforward application of [Che09, Theorem 2.1], which states that for independent random variables $Y_i$ with $E[Y_i] = 0$ and $E[|Y_i|^p] < \infty$ for some $p > 2$, a moment bound holds in which $C_p$ is a positive constant. Setting $Y_i = X_i/n$ completes the proof. For a fixed tree and $m < n$, define $B_m(n)$ to be the event that $0 \leftrightarrow T_n$ through precisely one vertex at depth $m$. Proof. Note first that for the choice of $m$ as in part (a), we have $\frac{1}{2\mu} W n^{1/4} \leq Z_m \leq 2\mu W n^{1/4}$ for sufficiently large $n$. (a) Using Theorem 3.3 and Lemma 3.4, we obtain a bound for $n$ sufficiently large and some choice of $C > 0$ depending on the distribution of $Z$. Applying Lemma 3.7 with $p = 9$ gives a bound in which we use the trivial estimate $1 \leq Z_m$. Since this is summable in $n$, the Borel-Cantelli lemma implies that this event occurs only finitely often. In particular, this means that the bound holds for sufficiently large $n$ for some constant $C > 0$ depending only on the distribution of $Z$. (b) Apply Lemma 3.6 to the measure $P_T[\cdot \mid 0 \leftrightarrow T_n]$, recalling $B_m(n) \subseteq \{0 \leftrightarrow T_n\}$; the resulting error term is $O(n^{-1/4})$ by part (a). It is thus sufficient to bound $P_T[v \in Y_n \mid B_m(n)]$. For a vertex $v \in T_n$ and $m < n$, let $P_m(v)$ be the ancestor of $v$ in $T_m$. Conditioned on $B_m(n)$, there exists a unique vertex $w \in T_m$ such that $0 \leftrightarrow w \leftrightarrow T_n$; this vertex $w$ is chosen with probability bounded above by an expression in which the latter inequality follows by applying the bound of Lemma 3.4 to the numerator and arguing as in (3.3) to almost surely bound the denominator. In particular, the $o(1)$ term is uniform in $w$. We want to take the maximum over all possible $w \in T_m$, and note that for any $\alpha > 0$ the relevant tail probability is summable, implying that for any fixed $\alpha > 0$ we eventually have $\max_{w \in T_m} W(w) \leq n^{\alpha}$. It merely remains to bound the denominator of (3.5). Note that by Proposition 3.2, the lower bound given in Lemma 3.4 converges almost surely to $\frac{W\lambda}{2}$ as $n \to \infty$. In particular, this means that if we set $p_n$ accordingly, then $p_n \to 1$. By Hoeffding's inequality together with Borel-Cantelli, the number of vertices $u \in T_m$ satisfying the bound is almost surely at least half of $T_m$ for $n$ sufficiently large. Recalling that $Z_m = \Theta(W n^{1/4})$ and plugging the above into (3.5) completes the proof.

Incipient Infinite Cluster. As in [Kes86a], we sketch a proof of the construction of the IIC. For an infinite tree $T$, define $T[n]$ to be the finite subtree of $T$ obtained by restricting to vertices of depth at most $n$. The limits defining the marginals below exist almost surely for each tree $t$.
The random measure $\mu_T$ on subtrees of $T$ with these marginals has a unique extension to a probability measure on rooted infinite trees, GW almost surely. The IIC is thus the random subtree of $T$ with law $\mu_T$. Proof. Since each $T$ has countably many vertices, Theorem 3.3 assures that $nP_T[v \leftrightarrow T_{n+|v|}] \to \lambda W(v)$ for each vertex $v$ of $T$ a.s. When all of these limits hold, we then have the stated marginal convergence for each $t$. To show that the measure $\mu_T$ can be extended, we note that its marginals are consistent, as can be seen via the recurrence $W(v) = p_c \sum_w W(w)$, where the sum is over all children $w$ of $v$. Applying the Kolmogorov extension theorem [Dur10, Theorem 2.1.14] completes the proof. In light of Lemma 3.9, it is natural to guess that the number of vertices in the IIC at depth $n$ will asymptotically be the size-biased version of $(|Y_n| \mid 0 \leftrightarrow T_n)$: the sum $\sum_{v \in t_n} W(v)$ will be relatively close to $|t_n|W$, thereby biasing each choice of $t$ by a factor of $|t_n|$. In order to make this argument rigorous, we invoke Proposition 3.8, which shows that no single vertex has a high probability of surviving conditionally. Throughout, we use the notation $n(a, b) = (na, nb)$ for $a < b$, and $C$ to denote the IIC.

Theorem 3.10. For each $a < b$, the probability that $C_n \in n(a, b)$ converges to $\int_a^b \lambda^2 x e^{-\lambda x}\,dx$ almost surely. In fact, $C_n/n$ converges in distribution to the random variable with density $\lambda^2 x e^{-\lambda x}$ for GW-almost every $T$.

Proof. To see that convergence in distribution follows from the almost sure limit, apply the almost sure limit to each interval $(a, b)$ with $a, b \in \mathbb{Q}$; since there are only countably many such intervals, there exists a set of full GW measure on which these limits simultaneously exist for each rational interval, thereby implying convergence in distribution [Dur10, Theorem 3.2.5]. For a fixed $n$, write (3.6); we then calculate, taking $M \to \infty$ for fixed $n$ and utilizing Theorem 3.3 to get (3.7), which we plug into (3.6) to get the limit. Theorems 3.3 and 3.5 show that the latter two terms in (3.7) have almost sure limits $\int_a^b \lambda e^{-\lambda x}\,dx$ and $\lambda$ as $n \to \infty$, leaving only the first term. We note (3.9), want to show that it is summable, and thus look to bound the max term. Applying Lemma 3.6 to the measure $P_T[\cdot \mid |Y_n| \in n(a, b)]$ gives the required bound. Thus, by (3.9), the conditional variance is almost surely summable. For any fixed $\delta > 0$, Chebyshev's inequality then implies that

$$P\left[\,\Bigl|\sum_{v \in T_n} \frac{P_T[v \in Y_n \mid |Y_n| \in n(a, b)]}{n}\,(W(v) - 1)\Bigr| > \delta \;\Bigm|\; \mathcal{T}_n\right]$$

is summable almost surely. Applying a conditional Borel-Cantelli lemma (e.g. [Che78]) shows that (3.8) holds almost surely.
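As a numeric illustration of the constant $\lambda = \frac{2}{p_c^2 \varphi''(1)}$ appearing in these limit laws, take the illustrative offspring law $Z$ uniform on $\{1, 2, 3\}$ (an assumption for the example, not a distribution used in the paper): then $\mu = 2$, $p_c = 1/2$, $\varphi''(1) = E[Z(Z-1)] = 8/3$, and so $\lambda = 3$.

```python
from fractions import Fraction

# Illustrative offspring law: Z uniform on {1, 2, 3}.
pmf = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}

mu = sum(k * p for k, p in pmf.items())                   # E[Z] = 2
phi_pp_1 = sum(k * (k - 1) * p for k, p in pmf.items())   # phi''(1) = E[Z(Z-1)] = 8/3
p_c = 1 / mu                                              # critical parameter 1/2
lam = 2 / (p_c**2 * phi_pp_1)
print(p_c, lam)   # 1/2 and 3: quenched survival satisfies n * P_T[|Y_n| > 0] -> 3W
```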
4,554.2
2018-06-03T00:00:00.000
[ "Mathematics" ]
Synthesis, Characterization and Pressure Effect on Structural and Mechanical Properties of MgBi2O6: Solid-State Route and DFT Study Here we have prepared a good-quality crystalline sample of MgBi2O6 employing the solid-state reaction technique. The synthesized material was characterized by XRD and scanning electron microscopy (SEM). The structural study confirmed that MgBi2O6 possesses a tetragonal crystal configuration (JCPDS PDF No. 86-2492) with outstanding crystallinity and a grain size between 200 and 350 nm. The temperature-dependent electrical resistivity and conductivity were measured by the two-probe method and confirmed the semiconducting nature of this material. Using an impedance analyzer and a UV-visible spectrophotometer we studied the experimental electronic and optical properties of this material. To explore the theoretical features of MgBi2O6 we used first-principles methods based on the CASTEP code. The band structure analysis also confirmed the semiconducting nature of MgBi2O6, with a small band gap of 0.12 eV. The investigated elastic constants fulfilled Born's stability criteria and confirmed the mechanically stable nature of MgBi2O6. The response of the structural and mechanical properties of MgBi2O6 to pressure is discussed in detail. We have also studied the theoretical optical properties of MgBi2O6 with the CASTEP code.

...or pentavalent states. Recently, some Bi3+-containing materials, for example BiVO4 (Kudo et al., 1999), Bi2WO6 (Fu et al., 2005), and BiOCl (Wang et al., 2017), have been broadly explored as new candidates for visible-light-responsive photocatalysts due to the exceptional electronic configuration arising essentially from the hybridization of O-2p and filled Bi-6s orbitals. Additionally, the empty 6s orbital of Bi5+ leads to some Bi5+-containing compounds having tremendous photocatalytic activity. A number of Bi oxides with the remarkable pentavalent state (Bi5+) have therefore received a great deal of research attention. A famous example among these bismuth oxides is NaBiO3, which is a strong absorber of visible light and has significant application in the photo-oxidation of organic materials (Kako et al., 2007). Recently Gong et al. (2017) proposed that the compound AgBiO3 can create large quantities of reactive oxygen species without light illumination and has exceptional oxidizing activity. The compound BaBiO3, having both Bi3+ and Bi5+ states, can be used as a potential absorber for all-oxide photovoltaics (Chouhan et al., 2018) and shows photocatalytic behavior under visible-light irradiation (Liu et al., 2019). With its trirutile structure, the compound MgBi2O6 shows outstanding photocatalytic performance for methylene blue degradation (Takei et al., 2011). This compound was first synthesized by Kumada et al. (2003) employing the hydrothermal method. Mizoguchi et al. (2003) investigated the electrical and optical features of MgBi2O6 and reported that it is a degenerate n-type semiconductor with a relatively narrow band gap of about 1.8 eV. Owing to its special band configuration, the compound MgBi2O6 can be used as a visible-light-sensitive photocatalyst for the decomposition of carbonic species. The theoretically investigated band gap of MgBi2O6 is found to be 1.10 eV, obtained by the Heyd-Scuseria-Ernzerhof (HSE) functional method (Zhang et al., 2018).
In this work we have investigated the detailed physical properties of MgBi2O6 by a first-principles method with GGA and PBE. Earlier such calculations found that this phase shows metallic behavior, a characteristic also reported by Liu et al. in 2019; to obtain the band gap of this compound, they therefore used the HSE functional scheme instead of the GGA-PBE route. Fortunately, in our present work we have successfully observed the band gap of MgBi2O6 using the GGA-PBE route. In this work, we have also synthesized high-quality MgBi2O6 crystals via the solid-state reaction method and characterized the as-prepared sample by XRD, SEM, an impedance analyzer, and a UV-visible spectrophotometer. Furthermore, using the first-principles method we have calculated the structural and mechanical properties of this compound under different pressures for the first time. METHODOLOGY: 2.1 Experimental methodology - In this research work, the pure MgBi2O6 crystal was produced through the usual solid-state reaction method with high-purity (purity > 98%) powders of MgO and Bi2O5. To begin the synthesis of MgBi2O6, we first studied the phase development of this phase using a thermobalance (TG/DTA 630). A stoichiometric mixture of MgO and Bi2O5 was heated in an air atmosphere through the heating program shown in the inset of Fig 1. A characteristic TG curve obtained from the mixture of raw materials (MgO and Bi2O5) is also shown in Fig 1. From this TG curve, it was noticed that the weight loss at 100-580 °C is substantial. This confirms that at temperatures below 100 °C the chemical reaction between the raw materials is not yet active, and that there is no weight change above 600 °C, where the phase formation is complete. In the primary step of synthesis, the reactants were dried in an air oven at 100 °C for 12 h. The powder mixture of MgO and Bi2O5 was mixed well in an agate mortar with ethanol, then dried and calcined at 800 °C for 12 h in air. Before the next heat treatment the mixture was ground again to ensure homogeneity. The powder was calcined a second time at 850 °C for 12 h in air. After the second heat treatment, the powder was ground and pelletized to 12 mm diameter under a load of 80 kN using a pressure gauge. The pellet was then sintered in air at 900 °C for 12 h. During the heat treatment process the heating and cooling rates were fixed at 3 °C/min. The powder sample of MgBi2O6 was analyzed by X-ray powder diffraction with a CuKα (λ = 0.15418 nm) radiation source at room temperature at the Centre for Advanced Research in Sciences (CARS) in Bangladesh. The sample was scanned at diffraction angles (2θ) in the range 5° to 85°. The structural and morphological investigation of the prepared sample was carried out by scanning electron microscopy (SEM). To obtain the FTIR spectrum of the powder sample, we used a Fourier transform infrared (FTIR) spectrophotometer (Spectrum 100, Perkin Elmer). An Agilent precision impedance analyzer (Agilent Technologies, Model 4294A, Japan) was used for the measurement of the frequency-dependent ac conductance, impedance, dielectric constant, capacitance, inductance, and reactance. Theoretical methodology - The detailed physical properties of magnesium bismuth oxide, MgBi2O6, were investigated with the CASTEP computer code (Clark et al., 2005) within the framework of density functional theory (DFT).
By employing the Perdew-Burke-Ernzerhof (PBE) method (Perdew et al., 1996) we treated the exchange-correlation energy within the generalized gradient approximation (GGA). For the pseudo-atomic computations, Mg-2p⁶3s², Bi-6s²6p³, and O-2s²2p⁴ were taken as the valence electron states. A plane-wave basis set with a cut-off energy of 480 eV was employed to expand the wave functions. For sampling the Brillouin zone, a Monkhorst-Pack grid of 10 × 10 × 5 k-points was used for MgBi2O6. To obtain the equilibrium crystal structure of MgBi2O6, the Broyden-Fletcher-Goldfarb-Shanno (BFGS) minimization scheme was used. The convergence criteria for the geometry optimization were set to 5.0 × 10⁻⁵ eV/atom for the total energy, 0.01 eV/Å for the maximum force, 0.02 GPa for the maximum stress, and 5.0 × 10⁻⁴ Å for the maximum atomic displacement. The stress-strain method was used to determine the independent single-crystal elastic constants of MgBi2O6 (Kang et al., 2003; Mostari et al., 2020). We used the Voigt-Reuss-Hill approximations to compute the polycrystalline elastic constants of MgBi2O6. Experimental and Theoretical Structural Properties - The structural analysis of MgBi2O6 was performed by X-ray diffractometry (Rigaku Ultima IV X-Ray Diffractometer) with CuKα radiation (λ = 0.15418 nm) from 10° to 80°, with a scan speed of 5°/min. The unit cell refinement was performed with the Cell Call program employing the XRD data. The X-ray diffraction pattern of MgBi2O6 is displayed in Fig 2, and the refined lattice parameters are listed in Table 1. From Table 1, we see that our experimental lattice parameters are approximately equal to the standard lattice parameters obtained from the stated JCPDS data and are consistent with previous work. The optimized lattice parameters are very close to our experimental data, which ensured the reliability of the DFT-based simulation. The sharp and strong diffraction peaks (Fig 2) reveal the excellent crystallinity of MgBi2O6. A larger intensity ratio indicates better crystallinity (Hu et al., 2007); here, the intensity ratio of the highest peak (110) to the second highest peak (103) is 2.21, which is larger than the critical value of 1.2 (Hao et al., 2005) and confirms the good crystallinity of MgBi2O6. Higher crystallinity promotes higher photocatalytic activity (Zhong et al., 2018). To study the effect of external pressure on the crystal structure of MgBi2O6, we examined the variation of the lattice parameters, unit cell volume, and bulk modulus of MgBi2O6 at different pressures up to 50 GPa. For this investigation we used the generalized gradient approximation in DFT-based calculations implemented in the CASTEP code. The variations of the cell volume, lattice parameters, and bulk modulus of MgBi2O6 with pressure are presented in Fig 3, which shows that the lattice parameters and the cell volume of MgBi2O6 decrease with increasing pressure; consequently, the bulk modulus B0 increases with increasing pressure. The atomic distances are reduced with increasing pressure, so the repulsive interaction between atoms becomes strong, which makes the material increasingly difficult to compress under pressure. The lattice parameters, cell volume, and bulk modulus at different pressures are listed in Table 2.
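The pressure-volume trend just described is commonly summarized by fitting an equation of state. Below is a minimal sketch that fits the third-order Birch-Murnaghan equation to P-V points in order to extract the zero-pressure bulk modulus B0; the data values are placeholders, not the values in Table 2, and should be replaced with the optimized DFT results.

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state, returning P(V) in GPa."""
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta**7 - eta**5) * (1.0 + 0.75 * (B0p - 4.0) * (eta**2 - 1.0))

# Placeholder (P, V) data in GPa and Angstrom^3 (illustrative only).
P = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
V = np.array([155.0, 143.0, 135.0, 129.0, 124.0, 120.0])

(V0, B0, B0p), _ = curve_fit(birch_murnaghan, V, P, p0=(155.0, 150.0, 4.0))
print(f"V0 = {V0:.1f} A^3, B0 = {B0:.1f} GPa, B0' = {B0p:.2f}")
```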
Experimental Electronic Properties - Using the two-probe method from room temperature to 600 K, we measured the electrical resistivity and dc conductivity of MgBi2O6, which are shown in Fig 6. In the high-frequency region the dielectric constant decreases because the dipoles cannot rotate rapidly enough as the frequency increases, so their oscillations begin to lag behind the applied field. As the frequency is further increased, the dipoles become totally unable to follow the field and the orientation polarization stops (Sarkar et al., 2016). The frequency-dependent ac conductance of the synthesized MgBi2O6 sample was measured with the precision impedance analyzer in the frequency range 100 Hz to 2 MHz with an applied oscillating voltage of 500 mV. From Fig 7(b) we see that the conductance increases rapidly with increasing frequency; the mobile charge carriers contribute to this conductivity. Following ion-hopping rules, the ionic conduction of MgBi2O6 arises from transport through the exchangeable channels and cavities of the grains. The mobile charge carriers undergo some displacement between the two minimum potential energy states when they jump to a new site from their original position, owing to the polarization of dipoles (Usha et al., 2007). The maximum conductance is observed at high frequency (~2 MHz). The frequency-dependent capacitance of MgBi2O6, measured at 500 mV with the precision impedance analyzer, is shown in Fig 7(d). A high capacitance is observed in the low-frequency region, due to the involvement of all kinds of polarization at low frequencies. The capacitance decreases with increasing frequency and approaches an almost constant value above 1.0 MHz, owing to the change of the space charge, ionic, and orientation polarizations at higher frequencies. Fig 7(e) and (f) show the frequency-dependent reactance and impedance of the MgBi2O6 sample, respectively, from 100 Hz to 2 MHz. All these parameters are high in the low-frequency region and gradually decrease in the high-frequency region. The reactance is almost independent of frequency at higher frequencies (above 0.4 MHz), which is due to the resistance effect. Fig 8(a) and (b) illustrate the absorption and transmittance spectra of MgBi2O6. From Fig 8(a) we see two absorption peaks in the ultraviolet region, which confirms that this material absorbs in that region. No absorption peaks are found in the visible region; however, the absorption increases and the transmittance decreases with increasing wavelength in the visible region. The optical band gap energy can be calculated using the equation

$$E_g = \frac{hc}{\lambda},$$

where E_g is the optical band gap, h is Planck's constant, c is the velocity of light, and λ is the wavelength at the edge of the absorption peak. Here, λ = 522 × 10⁻⁹ m; therefore E_g ≈ 2.38 eV. This band gap indicates that the sample is a semiconductor material. This feature was also found in the dielectric and resistivity analyses, and the agreement supports the reliability of the present work. The elastic constants provide fundamental information about solid-state phenomena such as the rigidity, fragility, ductility, anisotropy, and stability of a material. It is therefore essential to study the stiffness constants of a material and to know how its elastic features vary under different pressures.
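Before turning to the elastic constants, a quick check of the band gap arithmetic above, using $E_g = hc/\lambda$ with the absorption edge at 522 nm:

```python
H = 6.62607015e-34          # Planck's constant, J s
C = 2.99792458e8            # speed of light, m/s
J_PER_EV = 1.602176634e-19  # joules per electron volt

wavelength = 522e-9         # absorption edge reported above, m
E_g = H * C / wavelength / J_PER_EV
print(f"E_g = {E_g:.2f} eV")   # ~2.38 eV
```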
The elastic constants of MgBi2O6 were calculated from a linear fit of the stress-strain relation according to Hooke's law. Since our synthesized material belongs to the tetragonal crystal system, it has six independent elastic constants, which are listed in Table 3. We are unable to compare our results with experiment owing to the absence of measured elastic constants in the literature; however, our results agree well with previous theoretical work (Liu et al., 2019), with some slight variation due to the use of different calculation methods. For a tetragonal crystal, the Born mechanical stability criteria require

$$C_{11} > 0,\quad C_{33} > 0,\quad C_{44} > 0,\quad C_{66} > 0,\quad C_{11} - C_{12} > 0,\quad C_{11} + C_{33} - 2C_{13} > 0,\quad 2(C_{11} + C_{12}) + C_{33} + 4C_{13} > 0.$$

From Table 3 we observe that the calculated independent elastic constants of our synthesized compound are positive and fulfill the above stability conditions, demonstrating that MgBi2O6 is mechanically stable. We also observe that C33 is higher than C11, signifying that the chemical bonding strength along the (001) direction is considerably stronger than along the (100) and (010) directions. Additionally, C44 is clearly smaller than C66, indicating that shear deformation occurs more easily along the (001) direction than along the (010) direction (Liu et al., 2019). The elastic constants satisfy the stability criteria at all pressures studied (Fig 9c), confirming the stability of MgBi2O6 up to 50 GPa. Consequently, the bulk modulus, shear modulus, and Young's modulus show a clearly increasing tendency as pressure increases (Fig 9b). For the tetragonal system, the Voigt and Reuss bulk moduli are

$$B_V = \frac{2(C_{11} + C_{12}) + C_{33} + 4C_{13}}{9}, \qquad B_R = \frac{C^2}{M},$$

with $M = C_{11} + C_{12} + 2C_{33} - 4C_{13}$ and $C^2 = (C_{11} + C_{12})C_{33} - 2C_{13}^2$. According to Hill, the average values of B and G are given by $B = (B_V + B_R)/2$ and $G = (G_V + G_R)/2$. The Young's modulus (E) and Poisson's ratio (ν) then follow from

$$E = \frac{9BG}{3B + G}, \qquad \nu = \frac{3B - 2G}{2(3B + G)}.$$

The universal anisotropy factor of a material can be calculated as (Ranganathan and Ostoja-Starzewski, 2008)

$$A^U = 5\frac{G_V}{G_R} + \frac{B_V}{B_R} - 6.$$

The polycrystalline elastic constants of MgBi2O6 at different pressures, calculated using Eqs. (2) to (10), are listed in Table 4. The ratio of the bulk to the shear modulus, B/G, is an indicator of the ductile or brittle character of a material: the bulk modulus B indicates the resistance to volume change under applied pressure, whereas the shear modulus G denotes the resistance to plastic deformation. A high B/G ratio indicates ductility, whereas a low value corresponds to brittle behavior; if B/G > 1.75, the material behaves in a ductile manner, otherwise it is brittle. From the values of B/G in Table 4, this material has some toughness at ambient conditions. The variation of B/G with pressure for MgBi2O6 is depicted in Fig 10(c): as the pressure increases from 0 to 50 GPa, B/G rises from 1.81 to 4.07, indicating that MgBi2O6 is strongly prone to ductility at high pressure. Another recognized parameter is Poisson's ratio, ν, proposed by Frantsevich et al. (1983) to separate brittle solids from ductile ones: a larger Poisson's ratio (ν > 0.26) indicates ductile behavior, whereas the compound is brittle when ν < 0.26. According to the values of ν evident from Table 4, this material is ductile, consistent with the Pugh B/G criterion. Our results are very similar to the previous study (Liu et al., 2019). Fig 10(b) also confirms that MgBi2O6 is slightly ductile at zero pressure and becomes strongly prone to higher ductility with increasing pressure.
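A small sketch of the polycrystalline averaging described above for a tetragonal crystal. The Cij values below are placeholders, not the Table 3 constants, and the tetragonal Voigt/Reuss shear expressions are the standard textbook forms, used here as an assumption since the paper's own Eqs. (2)-(10) are not reproduced in the extracted text.

```python
def polycrystal_moduli(c11, c12, c13, c33, c44, c66):
    """Voigt-Reuss-Hill averages for a tetragonal crystal (all moduli in GPa)."""
    M = c11 + c12 + 2 * c33 - 4 * c13
    Csq = (c11 + c12) * c33 - 2 * c13**2
    B_V = (2 * (c11 + c12) + c33 + 4 * c13) / 9
    B_R = Csq / M
    G_V = (M + 3 * c11 - 3 * c12 + 12 * c44 + 6 * c66) / 30
    G_R = 15 / (18 * B_V / Csq + 6 / (c11 - c12) + 6 / c44 + 3 / c66)
    B, G = (B_V + B_R) / 2, (G_V + G_R) / 2          # Hill averages
    E = 9 * B * G / (3 * B + G)                       # Young's modulus
    nu = (3 * B - 2 * G) / (2 * (3 * B + G))          # Poisson's ratio
    A_U = 5 * G_V / G_R + B_V / B_R - 6               # universal anisotropy factor
    return B, G, E, nu, B / G, A_U

# Placeholder constants for illustration only:
B, G, E, nu, pugh, A_U = polycrystal_moduli(220, 110, 100, 250, 60, 90)
print(f"B={B:.0f} GPa, G={G:.0f} GPa, E={E:.0f} GPa, nu={nu:.2f}, B/G={pugh:.2f}, A^U={A_U:.2f}")
# B/G > 1.75 and nu > 0.26 would indicate ductile behavior (Pugh / Frantsevich criteria).
```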
It is well recognized that elastic anisotropy is associated with anisotropic plastic deformation and with the activity of microcracks in solid materials. It is therefore essential to determine the elastic anisotropy of superhard materials, in order to understand these properties and, hopefully, to find mechanisms that improve their hardness and mechanical durability. An appropriate description of anisotropic behavior has a significant impact in the engineering disciplines as well as in crystal physics. For a purely isotropic material A^U is zero; otherwise the material is anisotropic. The values of A^U from 0 to 50 GPa for MgBi2O6 are shown in Table 4; they are greater than zero, confirming that this compound is anisotropic. From Fig 10(a) it is observed that A^U increases sharply with increasing pressure, because the elastic constants C11, C33, C66, C12, and C13 increase with pressure. Theoretical electronic and bonding properties - It is essential to study the electronic properties of a material in order to understand its physical properties and bonding character. For this reason, in this study we examined the detailed electronic properties, namely the electronic band structure, the total and partial density of states, and the Mulliken atomic populations of MgBi2O6 at zero pressure. The calculated electronic band structure of this compound is depicted in Fig 11. A clear separation between the valence band and the conduction band is observed in Fig 11, which confirms the semiconducting behavior of MgBi2O6. This characteristic is also observed in the resistivity analysis shown in Fig 6(a). The calculated electronic band gap of MgBi2O6 is about 0.121 eV, which differs from the experimental value of 1.6 eV (Mizoguchi et al., 2003). This happens because DFT-based calculations neglect the electron excitation effects and therefore underestimate the electronic band gap (Naefa and Rahman, 2020). The calculated partial and total density of states of tetragonal MgBi2O6 is shown in Fig 12. The valence bands are located from -20 eV up to the Fermi level and are mostly derived from the Mg-2p, Bi-6s, O-2s, and O-2p states. The conduction bands are located from 0 to 10 eV and are chiefly derived from the Bi-6p states. Near the Fermi level, the O-2p orbital contributes the most, which is a general feature of oxide semiconducting materials. From Table 5 we see that the total density of states of this material is 3.32 states/eV, where the contribution of the O-2p states dominates. To understand the chemical bonding nature of the compound MgBi2O6 we studied the Mulliken atomic populations, which are listed in Table 6. A low bond population indicates ionic behavior (for a perfect ionic bond the bond population is zero), whereas a high value indicates an increased level of covalency (Segall et al., 2003). The calculated bond populations of MgBi2O6 are shown in Table 6, from which we can see that the Mg and Bi atoms carry positive charges while the O atoms carry negative charges, indicating charge transfer from the Mg and Bi atoms to the O atoms. The decrease of the real part of the dielectric function with increasing photon energy arises because, when the photon energy reaches 0.121 eV (the band gap of this phase), the valence band electrons begin to be excited into the conduction bands.
Hence the carrier concentration increases, the degree of polarization is reduced, and consequently the real part of the dielectric function decreases (Liu et al., 2019). The non-zero region of the imaginary part indicates where light absorption occurs in this material. The imaginary part falls to zero at about 13 eV, indicating that this material becomes transparent beyond this energy range. The refractive index is an important optical function that describes the propagation of electromagnetic waves through an optical medium. From Fig 14(b) it is evident that the refractive index is high in the infrared and visible regions and slowly decreases in the ultraviolet region, demonstrating that MgBi2O6 has a strong refractive effect in the infrared and visible regions. The energy loss function of fast-moving electrons can be used to identify the resonant, or bulk plasma, frequency (Xu et al., 2006). From Fig 14(c) it can be seen that the effective bulk plasma frequency occurs at 13 eV, confirming that the plasma-frequency characteristics of MgBi2O6 are clear. This result agrees well with our previous study but not with the study of Liu et al. (2019). MgBi2O6 therefore shows transparent behavior when the incident photon energy is higher than this plasma frequency. The calculated absorption spectrum of MgBi2O6, depicted in Fig 14(d), illustrates that the light absorption edge starts at about 0.121 eV, which is comparable with the band gap determined by the PBE scheme. Only one major absorption peak, at 9 eV, is found in the absorption spectrum. It is thus interesting to note that this material absorbs ultraviolet radiation quite efficiently. The optical conductivity of MgBi2O6 starts at about 0.14 eV (Fig 14e), confirming again the semiconducting nature of this phase. Since MgBi2O6 absorbs strongly in the ultraviolet region, the maximum conductivity is observed in that region. The reflectivity spectrum of MgBi2O6 is shown in Fig 14(f). High reflectivity appears at around 13 eV, corresponding to the energy where the conductivity falls to zero and the absorption is strong. Since MgBi2O6 shows good reflectivity in the high-energy region, this compound could be used as a possible shield against ultraviolet radiation. CONCLUSION: In summary, a pure single-phase MgBi2O6 crystal has been successfully prepared via the solid-state reaction route. The polycrystalline MgBi2O6 sample was obtained after two calcinations, at 800 and 850 °C respectively. The powder XRD patterns reveal that the prepared sample is well crystallized and indexed to a trirutile-type tetragonal crystal structure. The large grain size of about 200-350 nm observed in the SEM images supports an increased efficiency of MgBi2O6 when it is used as a visible-light-sensitive photocatalyst. The decrease of the electrical resistivity and the increase of the electrical conductivity with temperature confirm the semiconducting behavior of MgBi2O6. This behavior is also observed in the electronic band structure calculations and in the dielectric constant measurements. A high dielectric constant, high capacitance, high resistance, high impedance, and low ac conductance are observed in the low-frequency region, with the reverse characteristics in the high-frequency region. We have also performed DFT-based calculations to study the structural configuration and the mechanical, electronic, and optical properties of MgBi2O6.
Furthermore, we examined the effect of pressure on the structural and mechanical properties of the prepared product. The geometrically optimized lattice constants are very close to our experimental values, which supports the accuracy of the present work. The lattice parameters and cell volume decrease with increasing pressure. The observed band gap of about 0.121 eV near the Fermi level confirms the semiconducting nature of MgBi2O6. The existence of both ionic and covalent features is evident from the Mulliken atomic population calculations. The investigated elastic constants satisfy Born's stability criteria and confirm the mechanical stability of MgBi2O6. All the elastic constants respond linearly to external pressure, with C33 showing the strongest response. The calculated B/G indicates slightly ductile behavior of MgBi2O6 at zero pressure, with the phase strongly prone to higher ductility at high pressure. Poisson's ratio and the anisotropy factor increase with increasing pressure. The large reflectivity in the ultraviolet region suggests that MgBi2O6 could be used as a coating material against ultraviolet radiation. ACKNOWLEDGEMENT: We would like to thank the Department of Physics, Rajshahi University, Bangladesh, and the Centre for Advanced Research in Sciences, Dhaka University, Dhaka, Bangladesh, for their laboratory support.
6,004.6
2020-09-29T00:00:00.000
[ "Materials Science" ]
DOING L2 SPEECH RESEARCH ONLINE: WHY AND HOW TO COLLECT ONLINE RATINGS DATA Abstract Listener-based ratings have become a prominent means of defining second language (L2) users' global speaking ability. In most cases, local listeners are recruited to evaluate speech samples in person. However, in many teaching and research contexts, recruiting local listeners may not be possible or advisable. The goal of this study was to hone a reliable method of recruiting listeners to evaluate L2 speech samples online through Amazon Mechanical Turk (AMT) using a blocked rating design. Three groups of listeners were recruited: local laboratory raters and two AMT groups, one inclusive of the dialects to which L2 speakers had been exposed and another inclusive of a variety of dialects. Reliability was assessed using intraclass correlation coefficients, Rasch models, and mixed-effects models. Results indicate that online ratings can be highly reliable as long as appropriate quality control measures are adopted. The method and results can guide future work with online samples. Most studies involving listener-based ratings have recruited local listeners to evaluate L2 speech samples in a controlled, laboratory setting. On the one hand, this approach is sensible for second language (SL) contexts because it allows researchers to sample listeners from the population of individuals with whom L2 speakers are most likely to interact. On the other hand, there are many scenarios where recruiting local listeners may not be ecologically valid or even possible. Such is the case for the foreign language (FL) context, where students typically interact with one another and their instructor in a classroom setting. Practically speaking, depending on the location where the teaching and research take place, there may not be (m)any local listeners to recruit. Even if local listeners are available, they may not be familiar with the L2 varieties to which learners have been exposed, and they may not represent the individuals with whom L2 learners envision themselves interacting. Thus, there is an immediate need to devise approaches that allow researchers to locate and recruit listeners beyond the boundaries of their local community. One promising approach is online listener recruitment through platforms such as Amazon Mechanical Turk (AMT). At least one study suggests that online ratings are reliable (Nagle, 2019), but more work is needed to understand the demographic characteristics of online listener samples, especially in L2s other than English, and how demographic characteristics and listener recruitment choices affect the reliability of the resulting data. The present study contributes to this area by comparing fully crossed ratings collected in person to ratings collected online in AMT using a pseudo-random-raters design. LISTENER-BASED RATINGS AS A WINDOW INTO ORAL COMMUNICATIVE COMPETENCE Listener-based ratings of fluency, comprehensibility, and accentedness have become a ubiquitous means of operationalizing L2 speakers' oral communicative competence. Fluency refers to the listener's perception of the rhythm and flow of speech; comprehensibility refers to ease of understanding, or how much effort the listener has to invest to understand the speaker; and accentedness refers to the extent to which speech deviates from a target variety of the L2 (see, e.g., Derwing & Munro, 2013). These three constructs, while interrelated, capture distinct facets of oral communicative competence.
For one, accented speech is often highly comprehensible (Munro & Derwing, 1995; Nagle & Huensch, 2020), and numerous studies have shown that different bundles of linguistic features predict comprehensibility and accentedness. For instance, Trofimovich and Isaacs (2012) found that rhythm (vowel reduction ratio) was the strongest predictor of accent, whereas type frequency was the strongest predictor of comprehensibility. In another study, Saito et al. (2017) reported that lexicogrammatical and pronunciation features accounted for roughly equal proportions of variance in comprehensibility (50% vs. 40%), but for accentedness, pronunciation was the primary predictor, accounting for 60% of variance compared with 28% for lexicogrammar. Based on these and similar results, pronunciation scholars have long recognized comfortable intelligibility, or comprehensibility, rather than accent reduction as the basic goal of pronunciation instruction (Levis, 2005). Listener-based ratings of fluency have also played an important role in the pronunciation and speech literature. Utterance-based fluency measures (e.g., articulation rate, number of filled and unfilled pauses) are often correlated with both comprehensibility and accentedness (e.g., Saito et al., 2017; Trofimovich & Isaacs, 2012), which suggests that listeners take fluency-based measures into account when evaluating speakers along the other two scales. Put another way, fluency-based variables appear to be an important dimension of comprehensibility and accentedness. In fact, some studies have shown significant overlap in the features that predict L2 speech ratings (O'Brien, 2014), which highlights the interconnected and multidimensional nature of the three constructs. Practically speaking, listener-based ratings of fluency, comprehensibility, and accentedness are easy to interpret and collect using simple rating scales, and the resulting data have been shown to be highly reliable. It comes as no surprise, then, that these constructs have had an important impact on L2 speech research and have been adapted and implemented in a range of research and teaching contexts (see, e.g., Foote & McDonough, 2017; Isaacs et al., 2017). These same reasons make listener-based ratings appealing for online research and a useful starting point for determining best practice in online data collection. APPROACHES TO LISTENER RECRUITMENT Recruiting an appropriate group of listeners to serve as raters means identifying the individuals with whom speakers are most likely to interact. In an SL context, the question of potential interlocutors is relatively straightforward because SL speakers routinely communicate with individuals in the local community. For instance, if SL speakers are university students, then university students can be recruited to serve as listeners (Kennedy et al., 2015), and if they are working professionals, then listeners who work in a similar professional context can be recruited (Derwing & Munro, 2009). Recruiting an appropriate listener group is more complex in FL contexts for several reasons. For one, there may not be a single or stable target variety of the FL because FL learners have been exposed to a range of models through their instructors and may not have a clear sense of the interlocutors with whom they would like to interact in the future. Even if they do have an idea of potential future interlocutors, their imagined interlocutor group is likely to change as their learning goals evolve.
In light of these considerations, various approaches to listener recruitment are possible. Recruitment could be guided by the L2 varieties to which FL learners have been exposed or by the types of interlocutors with whom they envision themselves interacting. Another alternative would be to recruit a diverse listener group because FL learners may end up interacting with many different types of interlocutors as they become more proficient L2 users. It bears mentioning here that listener recruitment choices have implications for construct definitions. For example, Derwing and Munro defined accentedness as "how different the speakers' accents are from a standard Canadian English accent" (2013, p. 185). However, for FL research, a standard local variety may not be a sensible anchor point because learners' speech patterns are likely an amalgamation of the diverse dialects to which they have been exposed and thus may not be aligned with any single native variety of the L2. In fact, eliciting accentedness ratings relative to a local standard may underestimate participants' pronunciation ability if, for example, a listener perceives a relatively nativelike accent as moderately to strongly accented because it does not coincide with the local norms with which that individual is familiar. Instead of accentedness relative to a local standard, the notion of foreign accent, or the degree to which the speaker's accent deviates from any native variety of the L2, may be suitable for FL studies. However, foreign accent ratings are not without their pitfalls. When provided by a diverse group of listeners, such ratings may be noisy because it is unlikely that all listeners would be equally familiar with the speech characteristics of other L2 dialects. There is also the issue of precisely how to define foreign accent so that listeners understand foreign to mean nonnative rather than a native speaker who is not from the local area. Once these methodological and operational issues are resolved, there is the practical task of actually locating and recruiting listeners. In SL contexts, this means turning to the local community. In FL contexts, listener recruitment can be difficult. In many locations, there may not be any native listeners to recruit, and even if native listeners are available, they may not match the target listener profile. FL researchers could travel to a location where the L2 is spoken or rely on their colleagues abroad, but those options are not time- and cost-effective. Recruiting listeners online is a practical alternative. APPROACHES TO THE RATING DESIGN AND PROCEDURE Although scalar ratings are relatively easy to implement, their apparent simplicity belies several complex decisions that must be made. How many points should the rating scales include? Should ratings be carried out simultaneously or sequentially? And should the rating design be fully crossed, such that all listeners evaluate all items, or can a random-raters design be used? Fortunately, researchers have begun to address these questions. With respect to scale length, Munro (2017) found that 18 of 21 listeners used at least nine choices in rating comprehensibility, suggesting that a 9-point scale would be the minimum number of steps needed for sufficient resolution (see also Southwood & Flege, 1999).
However, in another study, Isaacs and Thomson (2013) found that a 9-point scale resulted in fuzzier distinctions between steps than a 5-point scale, which they argued could have been due to the relative homogeneity of the speaker sample (i.e., the speakers did not show a wide enough proficiency range to warrant nine distinct options). Other researchers have used 1,000-point sliders and obtained results that fall in line with studies using shorter interval scales. In sum, then, appropriate scale length depends on other study features (e.g., the anticipated proficiency spread of the speakers), although for most research a scale of at least 9 points seems advisable. Regarding the rating procedure, in a study on sequential versus simultaneous ratings, O'Brien (2016) found that the two approaches yielded comparable results. The third question, on the rating design, has not been addressed in the literature. In a fully crossed design all listeners evaluate all items, whereas in a random-raters design a random subset of listeners evaluates each item, such that all items are evaluated many times, but the listener groups that rate each item are different. Fully crossed designs, which are common in L2 research, are advantageous because they allow for rater-by-item effects to be taken into account during data analysis. However, there are many instances in which fully crossed designs may not be feasible, such as studies that generate a large number of files to be evaluated. In that case, researchers could adopt a blocked design by randomizing files into blocks to be evaluated by different listener groups. Typically, each group rates a subset of common files shared across blocks, allowing for robust estimation of reliability, as well as a set of unique files available only to that group (see, e.g., Trofimovich et al., 2009; Wisniewska & Mora, 2020). In a completely random-raters design, each item would be evaluated by a random group of k raters, such that no two items share the same rater group. Thus, there is a range of options that researchers can leverage depending on their needs. EXPANDING THE TOOLKIT: ONLINE APPROACHES Online platforms such as AMT offer researchers several practical and methodological advantages. For one, they can connect researchers with a large pool of potential raters to whom they might not otherwise have access, which may be especially important for FL researchers. They are also readily scalable to the size of the study, insofar as researchers can collect a large number of ratings relatively quickly. At the same time, in an online design, researchers cannot directly oversee data collection and thus have limited insight into how listeners carry out the ratings. Therefore, appropriate quality control measures must be put into place. One common quality control measure is an instructional manipulation or attention check. This measure involves inserting directions on how to respond to the item into the item itself. Raters who follow the directions are classified as attentive, whereas raters who do not are classified as inattentive. The data from the latter group are then removed before analysis. Studies have shown that laboratory and online participants perform similarly on simple attention checks (Goodman et al., 2013; Paolacci et al., 2010). However, such post-hoc screening strategies can result in substantial data loss because data provided by inattentive workers must be excluded from analysis.
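Returning to the blocked rating design described above, here is a minimal sketch of how files might be randomized into blocks, with a small set of shared anchor files added to every block so that reliability can be estimated across listener groups; the file names and block counts are illustrative assumptions, not the study's actual materials.

```python
import random

def make_blocks(files, n_blocks, shared):
    """Split `files` into n_blocks roughly equal blocks, each augmented with
    the same `shared` anchor files for cross-block reliability estimation."""
    pool = [f for f in files if f not in shared]
    random.shuffle(pool)
    blocks = [pool[i::n_blocks] for i in range(n_blocks)]
    return [sorted(block + list(shared)) for block in blocks]

speech_files = [f"speaker_{i:03d}.wav" for i in range(90)]   # hypothetical file names
anchors = ["anchor_1.wav", "anchor_2.wav", "anchor_3.wav"]   # shared across all blocks
for b in make_blocks(speech_files + anchors, n_blocks=3, shared=anchors):
    print(len(b), b[:4])   # each block: ~30 unique files plus the 3 shared anchors
```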
Post-hoc strategies have also been criticized because they may fundamentally alter the demographic characteristics of the listener sample, which can have an impact on findings (Paolacci & Chandler, 2014). For these reasons, researchers have advocated for pre-screening measures, such as making tasks available only to online workers who have been vetted. For instance, AMT allows requesters to limit tasks to high-reputation workers whose overall approval rate for completed work exceeds a certain threshold (e.g., 90%). Data provided by high-reputation workers have been shown to be highly reliable, eliminating the need for attention checks (Peer et al., 2014). The challenge with recruiting only high-reputation workers is that such workers may not be available in all L2s because a large portion of the AMT userbase consists of English-speaking individuals (Ross et al., 2010). Thus, other pre-screening methods may need to be developed for studies that aim to recruit non-English-speaking workers. Researchers who work with AMT and other online platforms must also consider the ethical dimension of their approach. Historically, AMT workers have been characterized as hobbyists, individuals who participate in AMT for fun rather than income. This view has been debunked: the reality is that many AMT workers do rely on the wages they receive (Martin et al., 2014). Moreover, as Fort et al. (2011) pointed out in their critical assessment, AMT workers lack standard workplace protections. For example, when creating a task, requesters can set a review period during which they can review submitted work and decide to approve it or reject it without pay, but workers have little recourse to address concerns that they have about requesters. Researchers can mitigate some of these concerns by paying workers a fair wage commensurate with the complexity of the task (e.g., at least the federally mandated minimum wage, which in the United States is $7.25 per hour at the time of writing) and paying them promptly for all work completed, even if the work does not appear to be completed correctly. The latter is particularly important because incorrectly completed assignments could be due to several issues beyond the worker's control, such as the instructions that the researcher provided or the interface itself. This is why it is also useful to include an open-ended response box where workers can provide feedback and recommendations to guide future improvements.

ONLINE SPEECH RESEARCH
A growing body of work has explored the utility of AMT for linguistic research (e.g., Callison-Burch & Dredze, 2010). L2 speech researchers have used AMT to identify and grade mispronunciations, with the goal of using the crowdsourced data to improve computer-assisted pronunciation training systems (Peabody, 2011; Wang et al., 2013). Researchers have also crowdsourced intelligibility data using transcription tasks as well as accuracy and comprehensibility data using a scalar ratings interface (Loukina et al., 2015; Nagle & Huensch, 2020). Although researchers are increasingly turning to AMT and other online platforms, to date, only one study has reported on the steps needed to design the AMT interface, collect and process the data, and examine its reliability.
In Nagle (2019), 50 speech samples (39 L2 samples, 4 near-native anchor samples, and 7 attention checks) were paired with a simple AMT rating interface where workers were allowed to listen to the sample up to three times before evaluating it for comprehensibility, fluency, and accentedness using separate 9-point scales. A completely random-raters design was adopted, where each file was evaluated by a unique group of 20 workers, and the task was made available only to AMT workers whose Internet protocol (IP) address was located in a country where Spanish was an official language. Of 54 AMT workers, only 4 were classified as inattentive, but an additional 15 had to be excluded because they did not complete the minimum number of attention checks required to evaluate the quality of their work (12 workers) or because they did not rate the minimum number of near-native anchor clips required to determine that they had understood the instructions and rating scales (3 workers). Intraclass correlation coefficients (ICC) were used to estimate reliability, and Rasch models were fit to the data for each construct for the 35 workers who were retained after implementing the quality control measures. Results indicated excellent reliability for all three constructs (in all cases, ICC > .87), but Rasch modeling revealed some issues with scale use and structure. Namely, the 9-point scales did not yield sufficiently distinct steps, especially for accentedness. Based on these results, Nagle (2019) made several recommendations to improve data retention and validation in AMT, including creating a screening task to award workers a special qualification necessary to advance to the rating experiment and blocking files to avoid excluding workers who evaluated a small number of samples (e.g., <3).

THE CURRENT STUDY
Building upon Nagle (2019), the present study was borne out of the practical need to continue refining a robust approach to conducting L2 speech research online. Developing a method for online data collection offers researchers a complementary tool that they may choose to use under certain circumstances (e.g., if they cannot recruit an appropriate group of raters locally, or if they have collected a large number of samples to be evaluated). We implemented Nagle's (2019) suggestions by (1) creating a screening task where workers completed a comprehensive background survey and rated a small number of samples to familiarize themselves with the instructions and interface and (2) blocking files to ensure that all workers evaluated a similar number of files (i.e., leading to a pseudo-random-raters design). We also made improvements to the AMT interface, directly incorporating certain quality control measures (e.g., timers to ensure that raters moved through the task at a reasonable pace). We collected data from two AMT listener groups: a learner-guided dialect group composed of AMT workers representing the dialects to which learners had been exposed (i.e., the dialects that their instructors spoke) and an any-dialect group composed of AMT workers recruited from all Spanish-speaking countries. We also collected data from a group of local laboratory listeners for comparison. These listeners were necessarily drawn from many different Spanish dialects because it would have been difficult to recruit a homogeneous and ecologically valid rater group at the location where the research took place. This study was, therefore, guided by the following research questions:
1. What are the demographic characteristics of AMT workers based in Spanish-speaking countries?
2. How reliable are the comprehensibility, fluency, and foreign accent ratings provided by each group (lab, AMT learner-guided dialects, and AMT any dialect of Spanish)?
3. What is the minimum number of raters that needs to be recruited online to establish a reliable aggregate rating?
To answer the first research question, we descriptively analyzed the background characteristics of participating AMT workers. To answer the second research question, we conducted three separate analyses. First, we computed standard reliability coefficients for each listener group. Then, we fit separate Rasch models to each group to examine differences in rater severity and fit. The Rasch models provided an additional perspective on scale fit and reliability for each construct. Finally, we fit a linear mixed-effects model and carried out post-hoc comparisons to determine if there were significant between-group differences in the way listeners scored the speakers on each construct. To answer the third research question, we resampled our data at progressively smaller listener sample sizes (e.g., n = 20, 19, 18), recalculating reliability at each step.

SPEECH SAMPLES
Twenty-three speakers who were recruited from multiple sections of two intermediate-level Spanish language courses provided the speech samples used in this study. After watching a silent animated short film about a girl who had to defeat a monster that sprang out of her journal, speakers received a set of eight screenshots captured from the short and used them to retell the story. 1 Speakers were given five keywords to help them retell the story and had up to a minute to look over the screenshots before they were recorded. During the planning time, they were not allowed to take notes. Speakers were recorded using a high-quality, head-mounted microphone connected to a desktop computer in a sound-attenuated room. Following standard procedures, we prepared the audio files for listener evaluation by creating a 30-s sample from the beginning of each participant's full recording, excluding initial pauses and hesitations. We then normalized all samples to a comfortable listening volume. We used one of the 23 samples as a practice file and divided the remaining 22 samples into two blocks of 11 L2 audio files to be used in the AMT experiment. Three native speakers of Argentinian Spanish provided the control samples. We included samples from two of the speakers in the experimental blocks, reserving the sample from the third native speaker as a practice file.

LISTENERS
Three listener groups participated in this study: lab listeners (Lab); AMT listeners who were recruited from Spain, Argentina, and Mexico, the Spanish dialect regions to which learners had been exposed (AMT L-Guided); and AMT listeners representing a range of dialects (AMT Any Dialect). 2 We provide a detailed description of how we recruited and screened online listeners in the procedures section. Here, we focus on the characteristics of the listeners who participated in the experimental portion of the study after passing the screening task. The Lab listeners were 14 native Spanish speakers who were pursuing an advanced degree at the university where the research took place. They reported the following countries of origin: Colombia (4), United States (3), Peru (2), Spain (2), Ecuador (1), Costa Rica (1), and Mexico (1).
Although we recruited and approved AMT L-Guided listeners from Spain, Argentina, and Mexico, of the 25 listeners who completed the experimental rating task, 22 were from Spain, 2 were from Mexico, and 1 was born in Venezuela but was residing in Spain at the time. The fact that most listeners in this group were from Spain, and none were from Argentina, highlights one of the challenges of recruiting balanced listener groups (at least with respect to dialect) online, a point we return to in the discussion. Finally, the 23 AMT Any Dialect listeners who completed the experimental rating were from the following regions: Spain (11), Colombia (5), Chile (2), Mexico (2), Argentina (1), Ecuador (1), and Venezuela (1). The demographic characteristics of the listener groups are summarized in Table 1. All three groups reported exposure to English between the ages of 5 and 7 and rated their English speaking and listening skills in the upper range (M > 6.00) on the 9-point proficiency scale (extremely low proficiency-extremely high proficiency). As expected, the Lab listeners, who were living in the United States and pursuing a graduate degree at a US university, evaluated their English skills slightly more positively than the AMT listeners did. The Lab listeners reported interacting in English far more frequently than in Spanish, whereas the opposite was true for the AMT listeners. In fact, patterns of language use (percent daily use of English and Spanish) for the Lab and AMT listeners were near mirror images of one another. The Lab listeners also reported less familiarity with L2 Spanish speech, which is likely due to the fact that they spent most of their time interacting in English and would not have had much opportunity to interact with non-native Spanish speakers in Spanish in an English-dominant environment. Regarding context of interaction, there was a trend toward the personal domain for the Lab listeners, whereas the AMT listeners showed a relatively even spread across the categories, albeit with slightly fewer workers reporting interacting with non-native speakers in both personal and professional contexts. In all three groups, there were very few individuals who reported no interactions with non-native speakers. Lastly, a third to half of the listeners in the three groups reported some background in linguistics, but teaching experience was more common for the Lab listeners than for the AMT workers.

RATING TASK
All materials associated with the AMT rating interface, including the HTML code to generate the tasks (Spanish and English versions), a document outlining the task properties that were implemented when the tasks were deployed in AMT, and a guide for modifying the interface are available in the Online Supplementary Materials. Study materials can also be accessed at https://www.iris-database.org and https://osf.io/wazmc. Working with a computer programmer, we designed an AMT rating interface consisting of the following elements: (1) an informed consent document; (2) a comprehensive background survey; (3) an overview of the speaking task, including an embedded copy of the animated short and the screenshots speakers received, plus a summary of the constructs to be evaluated with instructions and information on the rating interface; (4) two practice files to be evaluated, one from an L2 speaker and one from a native speaker, neither of whom provided files for the experimental rating task; (5) the experimental files; and (6) a post-task survey. We adapted Derwing and Munro's (2013) construct definitions.
We adhered to their definitions for fluency and comprehensibility, but we modified the accentedness scale to target foreign accent, which we defined as any pronunciation feature that would not occur in native Spanish speech. We also instructed listeners that assigning the audio file the best possible score on the foreign accent scale would signify that the speaker could be a native speaker of Spanish. In this way, we aimed to sensitize listeners to the distinction between pronunciation features that would indicate a nonnative speaker versus pronunciation features that could correspond to a native variety of Spanish. We gave listeners the following definitions (backtranslated from Spanish; for the Spanish version, see the HTML task preview):
• Fluency: Fluency refers to the rhythm of the language, that is, whether the speaker expresses themselves with ease, or whether they have difficulty expressing themselves and pause often.
• Comprehensibility: Comprehensibility refers to how easy or difficult it is to understand what the speaker is saying. You may be able to understand everything the speaker says, but doing so may require a lot of attention and effort on your part. What we are interested in is how much effort you have to expend to understand the speaker.
• Foreign Accent: We all have an accent, but for our purposes, we are interested in foreign accent, that is, any pronunciation feature that does not occur in the speech of a native Spanish speaker. Keep in mind that foreign accent is different from comprehensibility: it could be that the speaker is easy to understand even if they speak with a strong foreign accent.
Each audio file was presented individually on a rating screen with a play button and 7-point fluency, comprehensibility, and foreign accent scales arranged horizontally. We selected 7-point rating scales to strike a balance between a lower-resolution 5-point scale, which may not have given listeners enough options, and a higher-resolution 9-point scale whose steps may have overlapped for the intermediate speakers who provided our speech samples (Isaacs & Thomson, 2013). Anchors were provided only at the extremes, where higher scores were always better on the 7-point scales: for fluency, not very fluent-very fluent; for comprehensibility, very difficult to understand-very easy to understand; and for foreign accent, very strong foreign accent-no foreign accent (could be a native speaker of Spanish). The instructions that appeared on the rating screen made it clear that the scales would only become active after the audio had finished playing. After the file played through, workers had up to 45 s to make their ratings before the page became inactive. These two timers were quality control measures that served a similar function to Nagle's (2019) attention checks, insofar as they ensured that workers listened to the entire audio file before rating it and moved through the task at a reasonable pace. All experimental audio files were presented in a unique random order to each worker. After completing the ratings, workers received the post-task survey, where they rated their understanding of the comprehensibility and foreign accent constructs and the difficulty the task posed using 100-point sliders, and optionally provided open-ended feedback on any aspect of the interface or procedure. It was not possible for the in-person raters to complete the AMT version of the rating task. We, therefore, developed a Qualtrics survey whose format mirrored that of the AMT interface.
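To make the file-blocking and per-worker randomization concrete, a minimal sketch in R under the design just described (the file names are placeholders, not the actual study files):

    set.seed(42)  # for a reproducible illustration
    l2_files     <- sprintf("l2_%02d.wav", 1:22)
    native_files <- c("native_1.wav", "native_2.wav")

    # Two blocks of 11 L2 files, each with one native control file
    block1 <- c(sample(l2_files, 11), native_files[1])
    block2 <- c(setdiff(l2_files, block1), native_files[2])

    # Each worker hears the files of their block in a unique random order
    presentation_order <- function(block) sample(block, length(block))
    presentation_order(block1)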
PROCEDURE
We recruited local listeners from the same large, public university where we recruited the speakers. We sent an email to 3,519 graduate students with information about the study and posted study information to relevant university message boards. We also relied on word of mouth and our professional networks. We were ultimately able to recruit 14 local listeners who were native speakers of Spanish. The second author met with the Lab listeners individually in a quiet space for data collection. As shown in Figure 1, we split the AMT interface into two tasks.
FIGURE 1. Overview of the structure and timing of the online ratings in Amazon Mechanical Turk.
The first task was a screening measure consisting of the informed consent document, background survey, instructions and information on the rating interface, and two practice files (one from an L2 speaker and the other from a native speaker). To recruit AMT L-Guided listeners, we deployed the task to 100 workers located in Argentina, Mexico, and Spain using AMT geographic filters. The task was active for 7 days before the 100-worker completion threshold was met. We used the screening data to validate workers, assigning them a study-specific qualification that allowed them to view and complete the experimental ratings task. Workers had to meet three criteria to receive the study-specific qualification: (1) Spanish had to be their native language (or one of their native languages), (2) they had to be born in a Spanish-speaking country, and (3) they had to rate the native speaker practice file better than the learner file on all three scales, which we took as an indicator that they had understood the instructions and used the scales properly (i.e., they did not reverse the directionality of the scales). Applying these criteria, we eliminated five workers whose L1 was not Spanish, six L1 Spanish workers who were not born in a Spanish-speaking country, and 11 L1 Spanish workers who assigned the learner practice file a higher score than the native speaker practice file. Regarding the latter, all 11 cases were related to the foreign accent scale, where workers reversed the scale, interpreting lower values as indicative of a better score (i.e., less foreign accent) than higher values, despite instructions to the contrary. Thus, 78 AMT L-Guided workers were approved to advance to the experimental task. The experimental task consisted of an informed consent document, the same instructions and information that workers had received on the screening task, the experimental files to be rated, and a post-task survey. Because we blocked the 24 samples into two groups of 12 files (11 L2 files and one native file), we deployed two versions of the experimental task, one per block, simultaneously, requesting 20 workers per block. Some workers completed both versions, rating all 24 samples. Of the 78 screened and approved AMT L-Guided workers, 25 completed the experimental ratings task. Following the same procedure, we created the AMT Any Dialect group by recruiting and validating AMT workers located in any Spanish-speaking country. We staggered data collection for the AMT L-Guided and AMT Any Dialect groups to prevent workers from participating in the experiment twice. Thus, after we had recruited workers for the AMT L-Guided group, we awarded all participating AMT L-Guided workers a special qualification to prevent them from retaking the screening task.
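In code, the three qualification criteria reduce to a simple filter over the screening-survey data. A minimal sketch in R (screening, l1, birth_country, spanish_countries, and the practice-file rating columns are illustrative names, not the study's actual variables):

    library(dplyr)

    approved <- screening %>%
      filter(
        l1 == "Spanish",                        # (1) L1 Spanish
        birth_country %in% spanish_countries,   # (2) born in a Spanish-speaking country
        native_comp > learner_comp,             # (3) native practice file rated
        native_flu  > learner_flu,              #     higher than the learner file
        native_acc  > learner_acc               #     on all three scales
      )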
We then deployed the screening task a second time to 100 workers located in any Spanish-speaking country. The second screening task was active for 17 days before 100 new workers were recruited. Of those 100 individuals, 6 were excluded because their L1 was not Spanish, 8 because they were not born in a Spanish-speaking country, and 24 because they did not score the native speaker practice file higher than the learner file on all three rated dimensions. Thus, 62 AMT Any Dialect workers were approved to advance to the experimental task. Of those 62 workers, 23 completed the experimental task. Because the experimental rating task involved a small number of speech samples, both Lab and AMT listeners evaluated all files in a single sitting without a break. The experiment was self-paced, insofar as listeners could move from one file to the next at a pace that felt comfortable, but the experimental rating session was time-controlled. AMT workers had up to 30 min to complete the experimental rating task before the task became inactive (AMT allows researchers to specify a time within which the task must be completed), and Lab listeners were kept on pace by a research assistant who supervised the experimental session. Lab listeners wore noise-canceling headphones while completing the task, and AMT workers were instructed to wear headphones while carrying out the ratings. 3 AMT workers were compensated at a rate of $7.25 per hour following the US federal minimum wage at the time of recruitment, and Lab listeners received a $10 honorarium.

WORKER DEMOGRAPHICS: SCREENED AND APPROVED AMT WORKERS
Because most AMT research has recruited native English speakers, we were interested in examining the demographic characteristics of AMT workers who passed the screening task (i.e., AMT workers whose L1 was Spanish, who were born in a Spanish-speaking country, and who correctly used the rating scales). As shown in Table 1, the two AMT groups began learning English relatively early in life (in both cases, mean age of onset < 7 years). Means for self-estimated listening and speaking proficiency in English exceeded 6.00 for both groups on the 9-point scale (anchors: extremely low proficiency-extremely high proficiency). As expected, the AMT workers, all of whom were located in a Spanish-speaking country, reported using mostly Spanish in their daily interactions, followed by English and some additional languages (e.g., Catalan, Galician, and Basque, the three regional languages spoken in Spain; Portuguese; German; and other L2s that they had learned). Both groups reported approximately the same degree of familiarity with L2 Spanish speech (M = 6.63 and 6.74 for the L-Guided and Any Dialect groups, respectively). The most common frequency of interaction with non-native speakers was once per month, but a sizable portion of workers in each group reported daily interaction. For the L-Guided group, this amounted to nearly half of workers compared with approximately a third of the Any Dialect workers. Context of interaction was largely balanced across the three categories. Most AMT workers had some background in linguistics (i.e., they had taken a course that dealt with linguistic topics), but few reported language teaching experience. Overall, then, the characteristics of the online AMT workers who passed the screening task were in line with the characteristics of the subset of workers who completed the experimental task (cf. Table 1).
DESCRIPTIVE STATISTICS: L2 SPEECH RATINGS
As a first step toward validating the data, we computed descriptive statistics for each group. As reported in Table 3, the two native speaker files received much higher ratings on average than the learner files, and, as shown in Figure 2, the modal response for the native speaker files was 7, the highest possible score, for all groups on all constructs. Figure 2 also shows that the overall distribution of scores for each construct was similar across the three listener groups: Comprehensibility scores were relatively distributed throughout the 7-point continuum, and fluency and foreign accent scores were slightly and strongly skewed, respectively, toward the less fluent/stronger foreign accent end of the continuum.
Note. Two workers who indicated that they never interacted with L2 speakers on the frequency-of-interaction item nevertheless reported both personal and professional interactions with non-native speakers on the context-of-interaction item.

RELIABILITY COEFFICIENTS
To examine reliability, we computed two coefficients: the two-way, consistency, single-measure intraclass correlation coefficient (ICC(C, 1)) and the two-way, consistency, average-measure intraclass correlation coefficient (ICC(C, k)). The single-measure ICC is an estimate of the reliability of ratings provided by a single individual, and the average-measure ICC is an estimate of the reliability of ratings provided by a group of k individuals. Cicchetti (1994) proposed the following cutoffs for the ICC: <.40 = poor, .40-.59 = fair, .60-.75 = good, and >.75 = excellent. As shown in Table 4, average-measure coefficients were all in the excellent range, whereas most of the single-measure coefficients were in the poor range. With respect to the online groups, the reliability coefficients for the AMT L-Guided group were slightly higher than the coefficients for the AMT Any Dialect group.
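Both coefficients can be obtained from standard reliability functions. A minimal sketch in R using the irr package (comp_wide, an items-by-raters matrix of comprehensibility scores, is an illustrative name; the authors do not report their exact code):

    library(irr)

    # ICC(C, 1): expected reliability of ratings from a single rater
    icc(comp_wide, model = "twoway", type = "consistency", unit = "single")

    # ICC(C, k): reliability of the mean rating across all k raters
    icc(comp_wide, model = "twoway", type = "consistency", unit = "average")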
RASCH MODELS
We fit three separate Rasch models to examine rater severity, fit indices, and scale use for each of the three listener groups. Each model included three facets: examinees (i.e., speakers), raters (i.e., listeners), and scale categories (i.e., comprehensibility, fluency, and foreign accent). A fixed chi-squared test of the null hypothesis that the Lab listeners were of the same severity level was significant (χ²(13) = 501.5; p < .001). In other words, the Lab listeners showed statistically different levels of severity. The logit measures associated with the rater facet provide information on individual listeners and their respective severity levels. The overall range was 3.21 logits, from -0.76 for the most lenient Lab listener to 2.45 for the most severe Lab listener. The separation index, which shows the number of severity levels, was 6.07 with a reliability estimate of .97, suggesting that there were approximately six statistically distinct levels of severity. As for rater fit statistics, a range of 0.50-1.50 for infit values can be interpreted as good internal consistency (Eckes, 2015), and all but one listener (infit = 2.08) fell within that range. With respect to the rating categories, foreign accent was the most severely rated (logit value of 0.78), which is in line with previous research (e.g., Munro & Derwing, 1995; Nagle & Huensch, 2020), while comprehensibility and fluency yielded more lenient ratings (logit values of -0.57 and -0.21, respectively). According to Eckes (2015), rating scale effectiveness may also be examined through fit statistics such as the mean-square outfit statistic, which should not exceed 2. For each of the three rating categories, the rating scale had an excellent model fit; values of the outfit mean-square statistic were 1.33, 0.95, and 0.77 for foreign accent, comprehensibility, and fluency, respectively. A fixed chi-squared test of the null hypothesis that the AMT Any Dialect listeners were of the same severity level was also significant (χ²(22) = 533.6; p < .001). This means that, like the Lab listeners, the AMT Any Dialect listeners showed different levels of severity. The logit measure range (4.60) was larger than the range for the Lab listeners, with a value of -0.34 for the most lenient AMT Any Dialect listener and a value of 4.26 for the most severe listener. Such a high upper logit indicates that this group of AMT listeners may have been more severe than the Lab listeners. Despite the larger logit range, the separation index of 5.48, with a reliability estimate of .97, indicates that there were five to six statistically distinct levels of severity. With respect to rater fit statistics, the mean rater infit value of 1.07 points to good internal consistency, but four AMT Any Dialect listeners were just outside the suggested range, with infit values of 0.45, 0.46, 1.67, and 1.70. As was the case for the Lab listeners, this group of AMT listeners was most severe when rating foreign accent (logit value of 0.82), followed by fluency (-0.16) and comprehensibility (-0.67). Outfit mean-square statistics of 1.13, 1.07, and 0.90 were obtained for the foreign accent, comprehensibility, and fluency scales, respectively, indicating excellent scale fit. The AMT L-Guided listeners also showed statistically different levels of severity (χ²(23) = 391; p < .001). Their logit measure range was 2.18, from -0.15 for the most lenient to 2.03 for the harshest rater in this group. The separation index of 3.75 with a reliability of .93 suggests that there were approximately four distinct levels of severity. Additionally, four AMT L-Guided listeners exhibited overfit, with infit values below 0.50 (0.26, 0.34, 0.42, and 0.41). Category severity showed a similar pattern to the other two listener groups: foreign accent (logit value of 0.62), fluency (-0.17), and comprehensibility (-0.46). Outfit values for the rating categories also pointed to an excellent model fit, with values of 1.30, 0.98, and 0.80 for foreign accent, comprehensibility, and fluency, respectively. To sum up, although distinct levels of severity were observed within each listener group, the Rasch models suggested good overall performance for raters and scales. Whereas approximately six distinct levels of severity were observed for the Lab and AMT Any Dialect listeners, the AMT L-Guided group showed only four severity levels, suggesting greater uniformity in their ratings.

MIXED-EFFECTS MODELING
To determine if the three listener groups rated the L2 speakers differently on each construct, we fit a linear mixed-effects model in R version 4.0.2 (R Core Team, 2020) using the lme4 package (Bates et al., 2015). The model included Group, Rating Type, and a Group × Rating Type interaction as fixed effects; by-speaker and by-listener random intercepts; and listener familiarity with L2 Spanish speech as a covariate.
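A minimal sketch of this specification in lme4 syntax (the data frame and column names are illustrative; this is a reconstruction from the description above, not the authors' code):

    library(lme4)
    library(emmeans)

    m <- lmer(
      score ~ group * rating_type + familiarity +  # fixed effects and covariate
        (1 | speaker) + (1 | listener),            # crossed random intercepts
      data = ratings_long
    )

    # Tukey-adjusted pairwise group comparisons within each construct
    emmeans(m, pairwise ~ group | rating_type, adjust = "tukey")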
We used the emmeans package (Lenth, 2020) for post-hoc comparisons to locate statistically significant between-group differences. This package uses the Tukey method to account for multiple comparisons. As shown in Table 5, none of the between-group comparisons reached significance, suggesting that the three listener groups rated the speakers similarly on all three dimensions.

SIMULATING RELIABILITY AT DIFFERENT ONLINE RATER SAMPLE SIZES
We were also interested in how the reliability of data collected through AMT would change depending on the number of raters recruited. We reasoned that this information could help researchers make their studies more efficient by recruiting only the number of listeners needed for reliable measurement (while also considering issues of statistical power, depending on how the ratings are used). We generated 100 samples of k raters at each sample size (n = 20, 19, 18 … 5) by randomly sampling raters within each group. For instance, at n = 20, we randomly sampled 20 raters from the AMT Any Dialect group and 20 raters from the AMT L-Guided group, repeating this process 100 times to create 100 distinct rater groups. We estimated ICC(C, k) to examine the mean and range of ICCs observed at each rater sample size for each construct and AMT group. As shown in Figure 3, this simulation suggests that comprehensibility and fluency could be estimated with excellent reliability at sample sizes of seven to eight listeners and with good reliability with samples as small as five listeners, given the other design features of this study (i.e., a blocked design in which each listener evaluates at least 11 items). On the other hand, a larger listener sample size would be required to obtain good reliability for the foreign accent ratings.
Note. Each sample-size simulation consists of 100 runs. These ICC estimates hold for the blocked design of the current study, in which listeners evaluated at least 11 L2 files and up to 22 files if they participated in both experimental blocks. According to Cicchetti (1994), ICC > .60 = good and ICC > .75 = excellent. Solid black lines have been added to the figure at these values.
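The resampling procedure can be expressed compactly. A minimal sketch in R (wide is an illustrative items-by-raters score matrix for one construct and one AMT group; this reconstructs the logic described above, not the authors' script):

    library(irr)

    sim <- expand.grid(k = 5:20, run = 1:100)   # 100 runs per sample size
    sim$icc <- apply(sim, 1, function(row) {
      raters <- sample(ncol(wide), row["k"])    # draw a random rater subset
      icc(wide[, raters], model = "twoway",
          type = "consistency", unit = "average")$value
    })

    aggregate(icc ~ k, data = sim, FUN = mean)  # mean ICC(C, k) at each k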
DISCUSSION
In this study, we set out to develop a more valid and robust approach to collecting L2 speech ratings online in AMT. We improved upon Nagle's (2019) method by including a screening task that we used to validate AMT workers. We also built timers into the interface that ensured that workers (1) listened to each audio file in its entirety before making their ratings and (2) moved through the task at a reasonable pace without backtracking. We scrutinized the reliability of the resulting data by (1) computing two-way consistency intraclass correlation coefficients, (2) fitting Rasch models to examine differences in rater severity and fit, and (3) fitting a mixed-effects model to examine between-group differences in scoring. We also simulated the reliability of the ratings data at different listener sample sizes to provide preliminary insight into the number of online listeners required to produce good to excellent scale reliability. In all of these analyses, we compared two AMT listener sampling strategies: sampling listeners from the dialects to which FL Spanish learners had been exposed through their instructors and sampling listeners from a broad range of dialects without considering the input FL learners had received. We compared these two groups to a group of US-based laboratory raters who were recruited locally at the university where the research took place. Overall, the results showed that the data collected from all three groups, when aggregated, were highly reliable. Reliability estimates were in the excellent range for comprehensibility and fluency and in the acceptable to good range for foreign accent. Thus, the results of this study corroborate Nagle's (2019) findings and provide further evidence that comprehensibility and fluency ratings can be collected reliably online. The fact that reliability was lower for the foreign accent scale is not entirely surprising. Many studies that use L2 speech ratings as a global pronunciation outcome measure focus on degree of accentedness in reference to a local variety of the L2. In that case, ratings may exhibit higher reliability because listeners have the same internal anchor point. In contrast, in the present study, listeners were asked to evaluate foreign accent, judging the sample not in relation to a single local variety but in relation to any native variety of the L2. Such an approach entails that listeners understand which speech characteristics would surface in nonnative speech versus those that might surface in another native variety of the L2. On the post-task survey, AMT workers indicated that they did not have difficulty completing the ratings (M = 83.88/100) and that they had understood the foreign accent and comprehensibility scales well (M = 93.08 and 93.09, respectively). At the same time, their open-ended comments suggested that they were sensitive to scaling issues, particularly with respect to foreign accent. For instance, one worker commented that "it would be helpful to give an example of each scale step because each listener will give different ratings depending on their perspective." Another said, "The foreign accent part needs some explanation. Since it's almost a binary answer, the intent isn't clear." This comment in particular signals that our attempts to orient listeners toward nonnative accents on the foreign accent scale (e.g., by including wording that assigning the best score indicated that the speaker could be a native speaker of Spanish) were not entirely successful. Providing additional files for evaluation at the screening stage along with feedback and/or more robust scale descriptors could help mitigate these concerns. Reliability analyses revealed surprisingly few between-group differences. Reliability coefficients were similar across the board, especially for comprehensibility and fluency, and the mixed-effects model and post-hoc comparisons revealed no significant differences in mean scores. There were, however, two areas where the learner-guided sampling strategy seemed to outperform the any-dialect strategy. First, Rasch modeling showed fewer statistically distinct levels of rater severity in the learner-guided group than in the any-dialect group (4 vs. 6), which could be interpreted as a sign of greater consistency among raters who were sampled from dialect regions that represented significant sources of input for the FL speakers included in this study. Second, reliability simulations at a variety of sample sizes suggested that higher reliability could be obtained for learner-guided samples, a trend that became more pronounced at the smallest sample sizes we simulated. Yet, this finding deserves qualification for two reasons.
First, the learner-guided sampling strategy could have yielded higher reliability simply because fewer dialects were sampled, making that group inherently less variable than the any-dialect group. Second, and to that point, although workers from Argentina, Mexico, and Spain were recruited at the screening stage and received the study-specific qualification granting them access to the experimental task, ultimately, most of the workers who completed the experimental task were from Spain, which could also account for the higher reliability observed for the learner-guided listener group. Put another way, the learner-guided group represented a narrow range of Peninsular Spanish dialects. This, coupled with the fact that approximately half of the speakers reported that they had taken Spanish courses from instructors who were native speakers of Peninsular Spanish, likely accounts for the differences between the two sampling strategies. Overall, then, the present findings do not necessarily show that learner-guided sampling is a superior sampling strategy, but they do inspire confidence in a variety of approaches to online listener recruitment. Of course, in addition to reliability, listener sampling practices should be informed by conceptual considerations. For example, if the goal is to help FL learners communicate successfully with a specific group of individuals with whom they will interact in the future (e.g., when they study or intern abroad), then, to the extent possible, listeners should be recruited from that group. Future research should test that approach, which might prove especially useful for upper-level language students who have cultivated a deeper understanding of why and with whom they plan on using the L2. Admittedly, targeting a very specific group of listeners may be difficult through online platforms. For one, the only geographic filter available in AMT at the time of testing was a country-level filter. Recruiting individuals from the same country to serve as listeners would likely result in a narrower range of target varieties, as previously discussed, but it would not guarantee that all listeners speak the same variety of the target language because there is often substantial dialectal variation within a single country or region. In addition to our primary goal of developing a more robust interface and checking the reliability of the resulting data, our secondary objective was to understand the demographic characteristics of AMT workers who were nonnative English speakers (in this case, Spanish speakers). Descriptively, the AMT workers we recruited were bi- or multilingual individuals with moderate to high proficiency in English who were accustomed to interacting with L2 Spanish speakers in both personal and professional contexts. For the most part, they were also university-educated; most had completed a 4-year degree, and many had an advanced degree in their field. It is also clear that these listeners were technologically literate. This group, therefore, represents one important subset of potential interlocutors that researchers can access online.

RECOMMENDATIONS FOR DOING ONLINE SPEECH RESEARCH
The findings from this study have implications for doing online speech research. First, they underscore the necessity of including a screening task, which allows researchers to validate participants' work and ensure that they meet inclusion criteria for the study.
Screening data can also provide researchers with insight into parts of the task or interface that are not functioning properly or that need further clarification. Another advantage of screening tasks is that they allow the researcher to begin creating a database of vetted workers who can be authorized to complete similar tasks in the future. The present study also confirms the utility of implementing a post-task survey to diagnose problems with the user interface and instructions. For instance, some workers indicated that they had trouble interpreting the foreign accent scale, which likely contributed to its lower overall reliability compared with the other two rated dimensions. Based on such feedback, the task could be updated in a future iteration to make that scale clearer, such as by providing additional descriptive information, examples, and so on. As with any study, some aspects of methodology must be specified clearly from the start, such as participant inclusion and exclusion criteria. In this study, we were fairly lenient in terms of inclusion criteria: participants had to indicate that they were native Spanish speakers and that they were born in a Spanish-speaking country, and they had to use the rating scales properly when evaluating the sample audio files that were part of the screening task. These criteria were easy to implement through geographic and study-specific filters. Geographic filters can be useful for recruiting workers from a certain region, but those filters guarantee only that workers presently reside in that region. Thus, it is important not to make assumptions about other demographic characteristics on the basis of residence (or, more precisely, the location of the user's IP address). For example, in the present study, one of the listeners included in the AMT L-Guided group indicated that he was born in Venezuela but was living in Spain at the time. Although that listener would undoubtedly be familiar with the characteristics of Peninsular Spanish, it would be inaccurate to classify him as a native speaker of that variety. Researchers should carefully consider how they classify workers based on demographic variables such as country of residence, country of origin, and so on, as well as how they use those variables to compose listener groups. Worth noting is that AMT offers researchers a variety of flexible options for implementing other inclusion and exclusion criteria, using both prebuilt AMT filters (some of which are associated with an additional fee) and in-house/study-specific filters that researchers can create. On a related note, in this study, we created a learner-guided group by recruiting raters from Argentina, Mexico, and Spain. After screening an initial group of raters, we made the experimental task available to the entire group. The unintended consequence of this decision was that most of the raters who completed the experimental task were from Spain. Another option would have been to deploy the task separately to each of those regions, in which case we could have ended up with, for instance, 10 listeners from each country (for an example, see Huensch & Nagle, 2021). Thus, we recommend that researchers consider whether it would be necessary and/or advantageous to deploy the experimental task multiple times to target several different listener groups, leading to a more balanced and representative listener sample.
The fact that most AMT L-Guided listeners were from Spain also underscores the dynamic, evolving nature of the AMT userbase. In some countries, such as Spain, the userbase seems to be quite large, and users seem to log on and complete tasks quite frequently, whereas in others, it may be difficult to recruit a sufficient number of workers. What's more, the userbase is constantly evolving as new workers join the platform and existing workers leave it. Because of that churn, it is unclear if AMT can be used for more complex, repeated-measures research designs. Future work should address this topic and should also explore the utility of AMT for collecting other types of L2 data. Last but certainly not least, researchers must consider the ethical dimension of online research, which includes considering who has the means to participate. Clearly, workers must have access to a device and a reliable Internet connection, which necessarily excludes a large number of individuals who lack access to one or both. It is also important to acknowledge that, in many countries, institutional review boards have developed policies and requirements for doing online research, including policies that specifically address AMT. For instance, in some cases, researchers may be asked to make participants aware of the fact that their data may be stored in jurisdictions where governments have expanded access to personal records such as IP addresses. Finally, one weakness of AMT is that it does not offer researchers and workers a convenient means of dialoguing with one another during the research process. This means that researchers must be intentional about including task elements, such as feedback forms, that allow workers to offer suggestions, voice concerns, and, if necessary, lodge complaints and notify researchers of adverse effects.

CONCLUSION
Online and distance research methods are becoming increasingly common. Online research comes with challenges, but it also offers some advantages over an in-person approach. For one, it can broaden potential participant pools. It also allows researchers to carry out studies that otherwise would be difficult or impossible to execute. Such is the case for L2 speech ratings. When there are no local listeners to recruit, either because there are few local listeners of the L2 or because local listeners do not match the target group that researchers need for their study, online recruitment and data collection can be a viable and even desirable alternative. The results of this study show that online ratings can be as reliable as those collected in person. This study also raises important questions about how raters should be recruited and how constructs should be adapted and defined unambiguously in a new research context. Ultimately, these questions can only be answered in light of other methodological considerations. What is certain, however, is that online data collection is here to stay and will likely become more prevalent in an increasingly digital world. Future work should, therefore, replicate the current procedure using a larger number of samples provided by speakers of varying proficiency before broad conclusions can be reached regarding online speech rating procedures. It would also be fruitful to explore the extent to which other types of data (e.g., writing, speech) can be reliably and ethically collected online. In short, there is far more work to be done in this area, including work targeting crowdsourcing platforms other than AMT.
SUPPLEMENTARY MATERIALS
To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S0272263121000292.
Big Data Support for Problem Solving Method in Mass Spectrometry Topic in Modern Analytical Chemistry Course
Extremely large volumes of data, digitized and stored in large repositories, are being generated by scientists, especially in modern analytical chemistry. This study aims to build a new approach in chemistry education by using Big Data sources to support the IDEAL (I-Identify the problem, D-Define the goal, E-Explore possible strategies, A-Anticipate outcomes and act, L-Look back and learn) problem-solving learning model. Modern analytical chemistry studies and uses instruments to analyze chemical compounds, up to and including structural analysis. Modern instruments, such as the mass spectrometer, generate information about compounds that is stored in large data banks, and this information should be accessible and usable in chemistry education. This report discusses the benefits of using Big Data in the learning process in the digital era through IDEAL problem-solving learning, and some preliminary progress is presented. The growing volume of data and resources will also change teaching and learning methodology in higher education; some highlights of this disruptive learning innovation are described.

Introduction
Chemistry is a natural science that studies processes based on chemical reactions and relies mainly on scientific research. Learning chemistry means being exposed to three worlds or levels: the real world (macroscopic), theoretical models that represent it (submicroscopic), and the world of representations (symbolic) [1]. One branch of chemistry is analytical chemistry, which studies and uses instruments and methods to separate, identify, and measure matter both qualitatively and quantitatively. Analytical chemistry is often described as the field of chemistry responsible for characterizing the qualitative and quantitative composition of matter [2]. Its scope can be divided into four parts, one of which is structural analysis. Structural analysis determines the structure of a compound in a sample using modern instruments such as the mass spectrometer. Mass spectrometry is used to determine the mass of atoms or molecules. The working principle of the instrument is the deflection of charged particles in a magnetic field: molecules that are initially uncharged (neutral) are converted into charged ions. Combined with separation methods, the technique can also be used to determine the content and composition of compounds in a sample, and it supports the analysis of reaction mechanisms, because the fragmentation pattern of a molecule reveals how the original compound breaks apart. The resulting mass spectrum reflects the fragmentation of the sample compound and, when analyzed, provides information about the initial compound [3]. Research on structural analysis, especially using mass spectrometry, continues to grow, and the resulting data are collected and placed in data warehouses. A survey conducted by IDC reports that the digital world will grow by a factor of 10 from 2013 to 2020, from 4.4 trillion gigabytes to 44 trillion. This indicates that the volume of data is more than doubling every two years [4].
The growth of data is massive, driving technological innovation; it is in this context that the term Big Data emerged. Big Data is commonly described by the following characteristics: Volume (the size of data in ever-increasing units: GB, TB), Velocity (the speed of streaming data), Variety (the sources of data: text, images, videos, audio), Veracity (the accuracy of data), and Value (the power to support decisions) [5]. Big data affects the education sector as it does every other sector, and big data analytics plays a major role in education. Big data might transform educational data studies by helping analysts become more efficient and draw more informative conclusions [6]. With big data available from various sources, educators are supported in conducting classroom learning. Appropriate organization and direction will make learning more meaningful and the learning process more effective and efficient. More detailed data can be processed and used not only for the development of education itself but also for understanding students more deeply, from their level of absorption to the learning models that match their character [7]. The need for information increases as internet information technology advances, and advancing technology inevitably affects the education sector, both administratively and in the learning process. Internet technology has shifted the learning process from a "chalk and talk" system to one "based on the internet." Such factors lead to the generation of big data in educational institutions [8]. Big data is a high-volume, high-velocity, and high-variety information asset that demands cost-effective, innovative forms of information processing for enhanced insight and decision making; this large amount of data makes it possible to analyze and provide better decision support [9]. One big data source in analytical chemistry is NIST (the National Institute of Standards and Technology), an institution and data center that, among other things, stores big data on mass spectrometry. The NIST Mass Spectrometry Data Center, a group in the Biomolecular Measurement Division (BMD), develops evaluated mass spectral libraries and provides related software. Its site offers information and access to NIST mass spectral data products. A collection of data products is presented to aid compound identification by supplying reference mass spectra for many molecules, including EI and tandem MS libraries (for small molecules as well as larger ones such as peptides), GC retention index collections, and numerous freely available spectral libraries. Data analysis tools are also available for free, including AMDIS (the Automated Mass Spectral Deconvolution and Identification System, for GC/MS), the Mass Spectrum Interpreter (to elucidate chemical structures from mass spectra), and the Mass Spectrum Digitizer program. A small demonstration library is available in the full version of the NIST MS Search Program. This big data source can be accessed through chemdata.nist.gov or webbook.nist.gov/chemistry [10]. The learning process in the classroom improves when technological advances, in the form of Big Data sources, are combined with the right learning model. According to Greeno (as cited in Sulasamono), learning that is triggered by a problem can trigger a thought process in learning [11]. Problems are gaps that occur in (cognitive) thinking.
Based on information-processing theory, a problem is a situation in which the knowledge stored in memory is not ready to be used to solve it [12]. The learning model that can be applied is IDEAL Problem Solving, a problem-solving learning model carried out in the IDEAL stages. IDEAL is a problem-solving model in education introduced by Bransford and Stein. There are five indicators for this model: (1) Identify the problem and frame it as a creative opportunity, (2) Define the goals to be set, (3) Explore potential solutions and approaches, (4) Act on the strategy found and anticipate the results, and (5) Look back and learn: review the actual process as well as the consequences of the experience gained [13]. This model enhances the ability to think and improves skills in the problem-solving process. Sometimes the answers do not accord with the objectives that were set; in IDEAL problem solving, the fifth step, looking back, allows learners to return to the stage where the error occurred if the answers have not met the desired goals. The ability to use thinking skills, or operational abilities, to solve problems or tasks is exercised extensively [14]. Problem-solving learning has several advantages. Problem solving is a good learning model for understanding lesson content because it challenges students' abilities and gives them the satisfaction of finding new knowledge. According to Elias and colleagues (as cited in [15]), the advantages can be described as follows: (1) it increases awareness of the given problem and of ideas for solving it, (2) it encourages positive expectations for problem solving and distracts attention from undesirable or preoccupying thoughts, (3) it encourages perseverance in the face of emotional stress and tough situations, and (4) it fosters a positive emotional state, especially within a group. The problem-solving model shows that every subject has a way of thinking that must be understood. Problem Solving is an innovative alternative learning model developed from the constructivist paradigm: students play an active role in constructing their knowledge so that they can develop their thinking skills. Problem Solving belongs to the family of problem-based learning approaches in which the teacher helps students learn to solve problems through learning experiences [16]. Learning with the IDEAL Problem Solving model has been shown to improve mastery of problem- and task-solving competencies, including identifying problems, formulating problems, finding alternative solutions, and choosing the best solution [17,18]. Every problem requires careful understanding of how the problem-solving process must be carried out. The problem-solving process plays an important role in developing and improving thinking skills. Problem solving is a core activity in classrooms at all levels of education around the world, since problems are the material of teaching and learning and the basis for intellectual activity in the classroom; thus, problems shape how students learn. In the end, anticipating, checking, and evaluating students' work on problems constitutes a large part of the teacher's or lecturer's task [19]. This is in accordance with research conducted by Jitendra et al., who found that when students are given proper instruction, difficulties in problem solving can be reduced and problem-solving performance improved proportionally [20].
Research on the Problem Solving learning model has been carried out widely and has found that the model can increase student activity and learning achievement. Tambunan [21] concluded that problem-solving strategies were more effective than scientific approaches for students' abilities in communication, creativity and mathematical reasoning. Another result, by Pinta, stated that the average percentage of misconceptions among students taught with the Problem Solving learning model is lower than among students taught with conventional learning [22]. Method The method used in this research was experimental, with a pre-experimental one-group pretest-posttest design. A pre-experimental design was used because external variables that influence the dependent variable remained, so the experimental results were not influenced solely by the independent variable; this happens because there was no control variable and the sample was not randomly selected [23]. The one-group pretest-posttest design was chosen because of limited access to classes due to the pandemic and the limited lecture and class materials available for conducting research. The population consisted of all students of the Biotechnology class, State University of Malang, Indonesia, in the Introduction to Spectroscopy and Microscopy course, of which mass spectrometry was one topic. The sampling technique was purposive sampling: the sample was determined by the research design so that the sample criteria truly matched the research. The sample was one class, chosen on the basis of the lecture material, suitable teaching hours and class limitations; 19 students were selected. The study had one independent and one dependent variable: the independent variable was the IDEAL Problem Solving learning model assisted by Big Data, and the dependent variable was the learning outcomes of the biotechnology students on the mass spectrometry topic. The research procedure was divided into three stages, namely preparation, implementation and completion. The preparatory stage consisted of preparing the syllabus, lesson plans, grids of learning-outcome test questions, research instruments such as problem handouts and mass spectrometry material, and planning the learning process. The implementation stage was carried out in lectures using the IDEAL Problem Solving model with Big Data support over five online meetings (one meeting for the pre-test, three meetings for the learning process and one meeting for the post-test). The IDEAL Problem Solving learning model divides the learning process into five stages, which, assisted by Big Data, are given in Table 1; the final stage includes guiding students in reviewing and correcting their ways of solving problems and in assessing the effect of the strategies used. The completion stage consisted of assessment, namely a test of learning outcomes, and data analysis.
Assessment in this type of research is the process of collecting data and then processing the information as a way to evaluate the achievement of student learning outcomes [24]. Data were collected through learning-outcome tests consisting of five essay questions that had been validated by experts. A normality test was used to determine whether the data were normally distributed, and hence which statistical techniques could be used for further analysis; here the Shapiro-Wilk test was run in SPSS version 16 at a confidence level of 95% (error rate 0.05), with data considered normally distributed if the significance value in the SPSS output exceeds 0.05. A homogeneity test was used to determine whether the data come from the same variance; it was also run in SPSS version 16, with sig. > 0.05 indicating that the data come from the same variance and sig. < 0.05 indicating that they do not. Hypothesis testing compared two related measurements, namely student learning outcomes before and after using the IDEAL Problem Solving learning model assisted by Big Data, using a two-tailed paired-samples t-test in the SPSS 16.0 software program. The normality test results, listed in the table above, show that the significance value of the pre-test data was 0.803 and of the post-test data 0.139; both exceed the 0.05 probability value, so the data are normally distributed and further processing could be carried out. Homogeneity Test of Pre-Test/Post-Test Questions. The homogeneity test was carried out on the question data to check whether the variances were homogeneous, using Levene's test in SPSS version 16; the results are shown in Table 3. From the output of the test of homogeneity of variances, the significance value was 0.065 > 0.05, which means that the pre-test and post-test scores come from the same variance (homogeneous). Hypothesis testing. Because the prerequisite tests showed the learning-outcome data to be normally distributed and homogeneous, a two-tailed paired-samples t-test was carried out in SPSS 16.0, comparing student learning outcomes before and after using the Big Data-assisted IDEAL Problem Solving learning model. The hypothesis test results are listed in Table 4. The SPSS output shows Sig. = 0.000 < 0.05, so it can be concluded that the IDEAL Problem Solving model assisted by Big Data had an effect on student learning outcomes.
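The same analysis pipeline (Shapiro-Wilk normality, Levene homogeneity, two-tailed paired t-test) can be reproduced outside SPSS, for example with SciPy. The sketch below is a minimal illustration only: the per-student score arrays are made up, since the paper reports just the class means (54.21 and 78.58).

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post scores for 19 students (illustrative; the paper
# reports only the class means, 54.21 and 78.58).
rng = np.random.default_rng(1)
pre = np.clip(rng.normal(54.21, 10, 19), 0, 100)
post = np.clip(rng.normal(78.58, 8, 19), 0, 100)

print("Shapiro-Wilk pre :", stats.shapiro(pre).pvalue)       # normality, keep if p > 0.05
print("Shapiro-Wilk post:", stats.shapiro(post).pvalue)
print("Levene           :", stats.levene(pre, post).pvalue)  # homogeneity of variances
print("Paired t-test    :", stats.ttest_rel(post, pre).pvalue)  # two-tailed by default
```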
Big data resources in this case brought more information to the class, which was a good sign for the teaching and learning process: as more knowledge is brought to a topic, better comprehension can be achieved. Besides the textbook provided for the class, students could access any further materials they needed, under the lecturer's supervision. The textbook is in Bahasa Indonesia and designed for chemistry students; the biotechnology students in this case needed to view their objects of study from a chemistry point of view for a basic understanding of chemical and biochemical analysis. Chemistry topics usually concern small molecules, whereas in biotechnology the objects are much bigger in dimension, so some "bridging knowledge" between chemistry, biology and biotechnology is needed. Open sources on the internet, including the databases provided by the lecturers, can fill this need for better information. The intensive use of mobile technology is helpful here for better processes in higher education; this tendency is currently being investigated alongside the many platforms and applications created during the Covid-19 pandemic, and progress in the digital era is accelerated by the databases provided so far. N-Gain Data Analysis. The N-gain analysis was used to test the hypothesis that student learning outcomes improve with the IDEAL Problem Solving learning model assisted by Big Data. Based on the criteria for the N-gain value on learning outcomes, the obtained value of 0.5322 satisfies 0.3 ≤ 0.5322 ≤ 0.7, so the improvement in learning outcomes is categorized as moderate, and the hypothesis that the improvement in student learning outcomes under the Big Data-assisted IDEAL Problem Solving model is moderate can be accepted. Further study of both factors is underway, since the problem-solving approach itself contributes much to concept understanding, as do big data and the applications in the databases; big data can be a good tool for modern teaching and learning in higher education institutions. The N-gain value admits several interpretations and must be read very carefully: gaining knowledge presupposes a meaningful teaching and learning process, which should result in better understanding and higher scores, but how far the gain was due to the treatment must be followed up with some confirmation, and interviews with some students yielded further explanations that can be analyzed. Results of the Learning Outcomes Test Scores. The results of the learning-outcome test using the IDEAL Problem Solving learning model assisted by Big Data are presented in Table 5. Learning outcomes here are the results of the pre-test and post-test on the mass spectrometry topic; the instrument consisted of several essay items, and the pre-test and post-test were the same test, already validated. The scores are presented in Table 5 together with a summary analysis. Based on these data, the pre-test mean score was 54.21, while the post-test mean score increased to 78.58, from which it can be concluded that using the IDEAL Problem Solving learning model assisted by Big Data had a positive effect on student learning outcomes. The big data resources contain several types of information, each of which requires special skill to dig the information out.
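As a quick sanity check on the normalized-gain figure above, the snippet below recomputes Hake's N-gain from the reported class means; the maximum score of 100 is an assumption, as the grading scale is not stated explicitly.

```python
# Hake's normalized gain: g = (post - pre) / (max_score - pre).
# Means are taken from Table 5; max_score = 100 is assumed.
pre_mean, post_mean, max_score = 54.21, 78.58, 100.0

g = (post_mean - pre_mean) / (max_score - pre_mean)
print(f"N-gain = {g:.4f}")  # -> 0.5322, in the moderate band 0.3 <= g <= 0.7
```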
Using the problem-based learning approach, students in groups could access the database and learn from it. Students were busy and focused on the assignments given, while also having to learn more data-digging skills; modern students manage to overcome technological difficulties when working in groups. For example, using the NIST database the students could extract information about given molecular structures in a practical way: the structures can be accessed and filtered using the provided software, and students learned and understood faster than with textbooks. Looking ahead, big data are being accessed by more researchers in many areas of interest. The available data enable researchers to analyze rather than merely collect data in the laboratory; the experimental part of scientific research can even be skipped, while data analysis becomes more pronounced and is approached from many different points of view. In this way the cost of scientific work can be reduced, and more so if the data are also used in teaching and learning, since practical work can be minimized too. Conclusion Based on the results of the data analysis and discussion, it can be concluded that: 1. The IDEAL Problem Solving learning model assisted by Big Data had an effect: student learning outcomes after using the model differed from those before using it. 2. There was an improvement in average student learning outcomes on the mass spectrometry material with the IDEAL Problem Solving learning model assisted by Big Data, as seen from the average pre-test score of 54.21 and post-test score of 78.58; the increase, with an N-gain value of 0.5322, is categorized as moderate. Authors Irmayanti Muis is currently a magister student at the Chemistry Education Study Program, Faculty of Mathematics and Science, State University of Malang, Indonesia. She is doing research in which chemistry classes are exposed to big data resources, including the NIST MS database; current real laboratory data from GC-MS equipment are analyzed using big data resources for the Analytical Instrumentation Method course in the chemistry department. Her email address is<EMAIL_ADDRESS>Surjani Wonorahardjo is a lecturer and researcher in the Chemistry Department, Faculty of Mathematics and Science, State University of Malang, Indonesia. Her current research concerns the development of analytical chemistry methods for characterization and application at the chemistry level, an area in which aid from big data resources, especially in spectroscopy, is much needed. She is also a member of the Center of Excellence (PUI-PT) Disruptive Learning Innovation, State University of Malang, which emphasizes modern information technology development for teaching and learning processes. Her email address is<EMAIL_ADDRESS>Endang Budiasih was a senior lecturer in the Chemistry Department, Faculty of Mathematics and Science, State University of Malang, Indonesia. She had expertise in conventional and modern analytical chemistry besides her main projects in chemistry education, and she developed instruments for assessing teaching and learning processes for the chemistry education research group at the university. She passed away in December 2020.
5,216.4
2021-05-04T00:00:00.000
[ "Chemistry", "Education", "Computer Science" ]
Launching system of helicopter aviation transient electromagnetic system Aviation electromagnetic transmitters are used to ascertain the distribution and reserves of underground mineral resources. Among the subsystems, the launch subsystem serves as the energy source of helicopter airborne transient electromagnetic exploration equipment, and its performance strongly affects the exploration depth and accuracy for mineral resources. The structure and principle of the power conversion circuit of the transmitting subsystem are introduced, the circuit is fabricated, and several of its main performance indicators are tested. Laboratory test results show that the power conversion circuit performs well and meets the design requirements. Introduction The Airborne Transient Electromagnetic Method (ATEM) is a transient electromagnetic detection method based on an aviation platform [1]. A pulsed electromagnetic field (the primary field) is emitted toward the ground through a launch loop mounted on the flight platform; under the excitation of the primary field, eddy currents are generated inside the earth; under the ohmic effect these eddy currents attenuate, thereby exciting a new electromagnetic field (the secondary field); by observing the secondary field and extracting and analyzing the geoelectric information it contains, underground geological structures can be detected [1]. The flexible and efficient character of helicopter transient electromagnetic survey systems has led to their growing use in groundwater surveys, large-scale engineering foundation surveys, geological mapping, mineral surveys, soil salinization surveys, the search for unexploded remnants, and other such survey missions [2]. The helicopter aviation transient electromagnetic system comprises the helicopter airborne hoisting launcher and the ground data interpretation system. The hoisting launcher includes two parts, an electromagnetic launching system and a receiving system, and the electromagnetic emission system in turn includes a power conversion circuit and an excitation-source pulse modulation circuit. The aviation transient electromagnetic system is thus quite complicated: it contains many subsystems that, like the organs of the human body, must cooperate and interconnect so that problems can be handled in an orderly and capable manner. The power topology circuit converts the energy output by the helicopter into the shape, frequency and amplitude required for the excitation-source pulse current, which the transmitting device then converts into excitation-source pulses of the corresponding frequency, radiating a magnetic field into the air for exploration [3]. The power topology circuit performs the power conversion and is the core of the helicopter aviation transient electromagnetic launch system [3]. It includes two parts: the power conversion circuit and the excitation-source pulse modulation circuit. The power conversion circuit converts the helicopter's output voltage into the adjustable voltage range required by the excitation-source pulse modulation circuit [3].
The excitation-source pulse modulation circuit receives energy from the power conversion circuit and generates the excitation-source pulse current waveform, which the transmitter finally converts into a pulsed primary magnetic field of the same frequency radiated into the air [3]. The power conversion circuit of the helicopter aviation transient electromagnetic transmission system converts the helicopter's output voltage into the voltage level required by the excitation-source pulse modulation circuit and thus plays an important role in the energy conversion, so the power conversion circuit and its control strategy directly affect the performance of the entire helicopter aviation transient electromagnetic launch system [3]. Structure The following describes the structure of the excitation-source power topology circuit at a fundamental frequency of 25 Hz. At this fundamental frequency the transmission waveform can adopt several excitation-source pulse current waveforms: half-sine waves of large magnetic moment and high energy can be used to explore deep mineral resources, while trapezoidal waves of small magnetic moment and low energy improve the resolution of shallow geological exploration near the surface [3]. This article, however, transmits a single-excitation-source 25 Hz half-sine pulse current. During exploration flight with the helicopter hoisting the launcher, the distance between the cabin and the launcher was 60 m and the peak excitation-source pulse current was as high as 800 A; a high-voltage, low-current transmission scheme therefore effectively reduces the loss and weight of the transmission line [3]. The DC boost circuit adopts a conventional full-bridge resonant converter. Thanks to the soft switching of its switching devices [4], the full-bridge resonant converter is well suited to high-voltage, high-power, high-frequency operating conditions and has the advantages of small size and low power consumption. The transformer turns ratio is as high as 1:20 and there are eight working modes in total; this stage boosts the 28 V DC output of the helicopter to 450 V DC, a boost ratio of about 1:16, realizing the conversion from low-voltage DC to high-voltage DC. The intermediate conversion circuit adopts a phase-shifted full-bridge PWM (PSFB-PWM) converter, which is widely used for medium- and high-power isolated DC-DC conversion owing to its easily realized soft switching, simple structure and good EMI characteristics. Experimental results A principle-prototype experimental platform was built, with a digital signal processing chip (TMS320F28335) as the controller; the key device parameters are shown in Table 1. Data obtained with an input voltage of 27 V and an emission current of 740 A are given in Table 2.
For each conversion stage, the transmission ratio of output current to input current is defined as k = Iout/Iin (Eqs. (1)-(2)) and the efficiency as η = Pout/Pin (Eq. (3)), with the overall efficiency being the product of the stage efficiencies, η = η1·η2. In the measurements, the resonant boost circuit raises the 27 V DC output of the helicopter to a high voltage of 432 V, the transmission ratio reaches 1:16 and the current drops to 5 A. The intermediate conversion circuit reduces the 432 V high voltage to an adjustable 20-40 V, with a current transfer ratio of 1:9. The efficiencies of both stages remain high, as does the overall emission efficiency. The launch waveform is shown in Figure 2 (Transmit waveform): the transmitted 25 Hz fundamental-frequency half-sine pulse current has a peak value of 740 A, a launch time of 4 ms and a stop time of 16 ms, meeting the requirements of the design indicators. Conclusions This paper presents a design scheme for the core part of the power topology circuit of the helicopter aviation transient electromagnetic system, the power conversion circuit, at a 25 Hz fundamental frequency. To reduce cable weight and wire transmission loss, a DC high-voltage, low-current transmission strategy is adopted. The resonant boost circuit uses an improved full-bridge converter, and the intermediate conversion circuit uses a phase-shifted full-bridge pulse-width-modulation converter. The power conversion circuit has been tested, showing that the scheme is basically feasible.
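As a quick cross-check of the ratios reported above, the arithmetic below recomputes the boost voltage ratio, the intermediate-stage output current and the pulse duty cycle from the stated operating point. It is illustrative only, not part of the authors' test procedure; the 45 A intermediate current is an inference from the reported 1:9 transfer ratio.

```python
# Cross-check of the reported conversion ratios (illustrative only).
v_in, v_boost = 27.0, 432.0        # helicopter bus and boosted DC voltage [V]
i_boost_out = 5.0                  # boost-stage output current [A]
i_mid_ratio = 9                    # reported intermediate current transfer ratio 1:9
t_on, t_off = 4e-3, 16e-3          # launch / stop times of the half-sine pulse [s]

print("boost voltage ratio ~ 1:%.0f" % (v_boost / v_in))             # ~1:16, as reported
print("intermediate output current ~ %.0f A" % (i_boost_out * i_mid_ratio))  # inferred
print("pulse duty cycle = %.0f %%" % (100 * t_on / (t_on + t_off)))  # 4 ms of every 20 ms
```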
1,607.2
2021-03-01T00:00:00.000
[ "Engineering", "Physics" ]
MODERN METHODS FOR DETECTION OF UNMANNED AERIAL VEHICLES The most recent Unmanned Aerial Vehicle (UAV) detection methods are discussed in this article. The principles of UAV detection are pointed out in the overview, the main advantages of each technique are covered and compared, and the key technological limitations of each technique are pointed out and discussed. Several recent UAV threat incidents are presented with an indication of the counter-UAV systems used. The new, upcoming threat of "kamikaze" (self-destructive) UAVs and the limitations on their detection are presented, and case studies of hybrid counter-drone technology interactions are covered. The article also covers the important civil and military types of UAV propulsion; design features and future consumer demands are analyzed with attention to the UAV components that are mandatory for flight, and energy sources and thrust power plants are analyzed on the basis of recently published articles. UAV detection principles, including audio signal signature analysis, aerial-object video tracking, thermal heat-signature analysis, radar systems, and radio-frequency spectrum and data-packet communication detection, are covered with their advantages and limitations pointed out. Conclusions are drawn taking into account future UAV technology developments and the upcoming threats of highest impact. The most recent articles are evaluated in order to identify weak points in counter-UAV system development. Finally, future UAV technology development is analyzed, the main safety-related threats are indicated, and the slowly developing UAV components are identified, putting more attention on possible detection methods for which the mandatory UAV components will not become obsolete. Introduction Unmanned Aerial Vehicles (UAVs) are entering our everyday life at lightning speed by solving complex tasks never available before. The most active sectors of UAV development are the military, scientific research, agriculture and recreational use. The state-of-the-art intelligent autonomous technologies used in UAVs capture information and perform search and rescue, military missions, firefighting and medical-aid operations. Recent UAV threats show that UAVs can carry out devastating, precise attacks on remote infrastructure and have a high-volume global impact on essential supplies worldwide. The last high-volume UAV attack, launched on 14 September 2019, cut Saudi oil production by 50%; the precision impacts are shown in Figure 1 (Results of a high-impact UAV attack on Saudi oil processing plant infrastructure; red rectangles indicate precise hits and UAV positioning control possibilities; CNBC News, 2019). The closure of the oil processing plant affected almost 5.7 million barrels of crude production per day, about 5% of the world's daily oil production (CNBC News, 2019). It is worth mentioning the rising threat of the "kamikaze drone" (a type of UAV packed with lethal explosives and shrapnel), which is becoming widely available to construct and for which there are no practical technological solutions for detection against terrorist attacks, especially in highly populated, dense urban areas.
Such UAVs are usually handmade and use specialized remote-control equipment that is not mass-produced and is not easily discovered in urban areas (Russell et al., 2019). Detection is even more complicated when fully autonomous autopilot systems take care of flight control, without the need for any radio communication and navigation. One of the UK's busiest runways, at Gatwick, was shut on 20 December 2018 after UAVs repeatedly flew over the airfield, delaying about 110,000 passengers on 760 flights (BBC News, 2018a). It is evident that UAV threats are among the major unresolved safety concerns of the near future. Detecting UAVs in no-fly zones is becoming a highly demanded but hardly solved task (Bunker, 2015). Each country has its own UAV no-fly zones for airport and strategic-infrastructure security, but with the development of intelligent UAV control technology and upcoming high-capacity communication links, such zones tend to be less reliable instruments for achieving the desired safety (Solomitckii et al., 2018). UAV propulsion types In general, all UAVs need propulsion energy to perform take-off, flight and landing. Thrust engines power every UAV in flight, with the exception of gliders and lighter-than-air systems; glider-type UAVs need no propulsion energy, as they fly on rising air streams, but they do need to be lifted or towed into the sky. The main UAV propulsion energy and engine types are: (1) chemical or nuclear energy powered engines and (2) electric thrust engines. Electric thrust engines tend to be the best choice for recreational UAV propulsion: they can be small, lightweight, reliable and easily controlled in flight. Nevertheless, it is the batteries that limit flight duration and range. Figure 2 shows a petrol-electric power plant, powered by a small gas engine, for longer UAV flight range (Hung & Gonzalez, 2012). Types of engines for UAV propulsion Propulsion is mandatory for all UAVs to perform flight. The main thrust engine types for UAV flight are introduced briefly below, focusing on internal combustion and electric engines. A piston engine, also commonly referred to as a reciprocating engine, is an internal combustion engine that uses one or more reciprocating pistons to convert pressure into rotational motion in order to obtain thrust power from a liquid fuel. An electric engine (motor) is an electrical machine that converts electrical energy into mechanical energy; shaft rotation is achieved through the interaction between the motor's magnetic field and the electric current in the motor's wire windings. A two-stroke engine is an internal combustion engine that performs two cycles in one shaft revolution: the end of the combustion stroke and the beginning of the compression stroke happen simultaneously, with the intake and exhaust functions occurring at the same time. The high power achieved in this way gives this engine type a high power-to-weight ratio compared with other internal combustion engines, and its simple construction makes it relatively lightweight and widely used for UAV propulsion. A four-stroke engine has a four-stroke working cycle: intake, compression, power and exhaust. Four-stroke engines tend to be heavy, as they use an oil pump and oil reservoir for lubricating the engine.
The valve system present in the engine also adds weight. A four-stroke engine delivers one power stroke for every two cycles of the piston (or four piston strokes), which is why it has a lower power-to-weight ratio; nevertheless, thanks to its built-in oil lubrication system this engine type is far more reliable and energy-efficient than two-stroke engines, and four-stroke engines are usually used to propel long-endurance military UAVs. Wankel engines have been redesigned for use in UAV propulsion as hybrid power-generating units. Jet-type engines are used in large-scale UAVs: propulsion is created by discharging liquid-fuel combustion products at high velocity, inducing thrust that acts as a pushing force for the UAV. Rocket engines are powered by liquid or solid fuels, obtaining propulsion power from the burning fuel. This engine type is of limited use for UAV propulsion, as there is no mechanism for thrust power control and no option to shut down the engine before all the fuel is burned. A jet engine with a compressor has several stages of rotating vanes: air enters each stage and is incrementally compressed, the flow path gradually reducing in area toward the latter stages; at the end of the compressor a diffuser slows the airstream and further increases its static pressure. A turboprop or turbofan engine is a turbine engine that drives an aircraft propeller or an enclosed turbofan. Some of the power generated by the turbine drives the compressor, while the rest is transmitted through reduction gears to drive the propeller or turbofan; the main difference from a turbojet is that the engine's exhaust gases do not contain enough energy to create powerful thrust, so almost all of the engine's power is used to drive the propeller or turbofan. Electric thrust motors are widely used to perform the take-off, flight, positioning and landing of a UAV. A fixed-wing UAV can be powered by even a single electric motor, since controlled flight surfaces are used to perform its take-off, flight and landing; a "copter" UAV needs a minimum of three thrust engines to perform take-off, flight, positioning and landing, its position and flight path being maintained by varying the thrust of the electric motors. A brushed direct-current (DC) electric motor can be used as the thrust motor of a tiny copter UAV; it can also power a fixed-wing UAV, which does not need the motor to produce lifting force thanks to its airfoil-type wing. Operating a DC motor requires no complicated electronic speed-controller circuits: it can be powered by DC current or controlled using pulse-width modulation. Brushless direct-current motors (BLDC) are advanced electric motors widely used in general-purpose, electrically powered small UAVs.
The main advantages of the motor are: -elimination of brushes, commutator and slip rings, which makes the design compact and robust; -a simple, lightweight and lossless rotor construction, leading to low inertia and high efficiency; -quick dynamic response due to the low rotor inertia; -high efficiency, allowing a reduction in the frame size of the machine; -a long life cycle, in special cases more than 30,000 hours; -motor windings that are part of the fixed stator, so that no moving electric parts or mechanical commutation are present; -high reliability, as the bearings are the only parts subject to friction. A BLDC motor consists of permanent magnets rotating around the fixed stator with its windings (Figure 3). Power to the stator windings is controlled by an electronic speed-regulator circuit, which also senses the rotation speed and direction of the rotor; motor speed is controlled with high-current pulse-width modulation, so the stator windings are energized for a varying fraction of time at a constant frequency. Because of their weight, acceleration momentum and lack of mechanical thrust control, internal combustion engines are not used on copter-type UAVs, and to date there is no mass-produced solution for using internal combustion engines in recreational copter UAVs. The majority of recreational copter UAVs are driven by electric thrust power plants; hybrid internal combustion engines serving as electric energy generators are likely the only internal combustion engines that could be used in copter-type UAVs, together with BLDC electric thrust motors. Common methods for UAV detection There is an increasing number of methods for solving the UAV detection task, differing in reliability, range, working conditions, accuracy and many other parameters. The main established and fast-developing methods for UAV detection are: -Acoustic signal footprint analysis: a microphone or microphone array receives the acoustic signal of the UAV thrust motors, which is analyzed in real time and compared with known UAV signal patterns; if the patterns match, a UAV is detected. -Optical footprint analysis: a single video camera or camera array captures the surrounding image, and the image data are processed in real time for flight-path signature recognition. -Heat signature analysis: a thermal camera or camera array captures the surrounding image, processed in real time for flight-path signature recognition. -Radar signature analysis: a UAV is detected by its radar signature, generated when the UAV body encounters RF pulses emitted by the detection element; the signal reflected from the UAV is processed in real time for flight-path detection. -Radio communication: the radio-frequency spectrum is received and analyzed in real time for modulation and for video, telemetry and control-data decoding; known communication data packets are decoded and valuable data retrieved. -Combined methods: two or more detection methods are combined to increase the accuracy, fault tolerance, stability, positioning and range of UAV detection. Features of common UAV detection methods -Acoustic signal signature analysis. Advantages: can be used in conjunction with many other UAV detection methods, and effectively increases the accuracy of optical detection when used in combination (Liu et al., 2017); a microphone array is easy to install for securing perimeters or buildings (Pechan & Sescu, 2015; Sinibaldi & Marino, 2013).
The system and its installation are cost-effective. Disadvantages: due to noise interference it is not suitable for urban areas, stadium security, mass events, airports or any location near noise sources; rain, wind and noise can reduce detection reliability; and there is no object-tracking capability (a toy spectral-matching sketch follows the comparison below). -Optical signature analysis. Advantages: instant and accurate object-tracking capability; can be used in conjunction with many other detection methods; capable of artificial-intelligence interaction (Chen et al., 2018). Disadvantages: the system is not effective in mist, fog, rain, snow or low light, and infrared illumination does not solve the low-light problems; gliding birds are difficult to distinguish from flying UAVs and generate false alarms. -Heat signature analysis. Advantages: can be used in conjunction with many other UAV detection methods to increase reliability. Disadvantages: detection accuracy depends strongly on weather conditions; birds emit a thermal signature, which generates false UAV detection alarms; UAV thermal shielding and ventilation can be used as countermeasures against thermal-signature detection. -Radar signature analysis. Advantages: long detection range; a passive radar can be used with other sources of transmitted signals (e.g. TV broadcasting signals); finding large winged drones is a task that traditional radar products can perform (Eriksson, 2018). Disadvantages: as an actively RF-emitting system it can be evaded, since UAVs can be programmed to overfly the detection area or fly near the surface; small UAVs made of special materials and shapes are hard to detect; small birds and UAVs are hard to tell apart; radar is not suitable in urban areas owing to the radiation's influence on health and to building shielding (Hinostroza et al., 2018); and equipment and operating costs are high. -RF communication signature analysis. Advantages: software-defined radio (SDR) is being adopted together with artificial intelligence for better reception in urban areas; a high-altitude moving UAV communication source could serve as a future detection system with object-tracking capabilities, enabling detection of UAV swarms (Ezuma et al., 2019). Disadvantages: early detection methods only detected communication in a known frequency range; UAV control signals tend to be low-power and can use public frequency ranges (Wi-Fi, GSM, DVB or any other), where their detection in an urban area is nearly impossible; only pre-stored sample data packets could be recognized, so newer UAV models were not detected unless uploaded to the system; and the evolution of UAV autopilots makes it possible to complete preprogrammed flight missions in full radio silence, without any RF transmission. -Combined detection methods. Advantages: any of the known methods can be combined by integrating a variety of different sensor types to provide a more reliable detection capability; an audio-assisted video camera array shows a significant increase in detection reliability (Liu et al., 2017). Disadvantages: interoperation between UAV detection systems from different manufacturers is complicated, and there is no common interfacing standard for connecting several systems from different manufacturers. Several widely used passive and active UAV detection systems' functionalities are compared in Table 1; as can be seen from the table, each manufacturer's UAV detection system has different detection capabilities.
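To make the acoustic-signature method above concrete, here is a toy sketch (not any vendor's algorithm): the magnitude spectrum of an audio frame is compared against a stored reference spectrum of a rotor tone by normalized correlation, and a detection is flagged above an arbitrary threshold.

```python
import numpy as np

def magnitude_spectrum(frame: np.ndarray) -> np.ndarray:
    """Windowed FFT magnitude of one audio frame."""
    w = np.hanning(len(frame))
    return np.abs(np.fft.rfft(frame * w))

def matches_signature(frame, reference_spec, threshold=0.8):
    """Normalized correlation between the frame's spectrum and a stored
    UAV rotor-tone reference; the threshold is an arbitrary illustration."""
    s = magnitude_spectrum(frame)
    s = s / (np.linalg.norm(s) + 1e-12)
    r = reference_spec / (np.linalg.norm(reference_spec) + 1e-12)
    return float(s @ r) > threshold

# Toy demo: a synthetic 180 Hz rotor tone sampled at 8 kHz, plus noise.
fs, n = 8000, 2048
t = np.arange(n) / fs
reference = magnitude_spectrum(np.sin(2 * np.pi * 180 * t))
noisy_rotor = np.sin(2 * np.pi * 180 * t) \
    + 0.3 * np.random.default_rng(0).standard_normal(n)
print(matches_signature(noisy_rotor, reference))  # True for this toy case
```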
One UAV detection system manufacturer includes all the common UAV detection methods in one serially manufactured system, integrating radar, video, audio and RF detection technologies to improve detection reliability. Using several types of detection sensors and technologies minimizes the false-alarm probability and guarantees extremely flexible system adaptation capabilities. Advantages of UAV thrust power plant detection Analyzing the history and perspective of small-UAV structural development, it is obvious that two components, the energy source and the thrust power plant, are mandatory for UAV flight operation, while many other UAV operation components develop rapidly and quickly become obsolete. The most frequent thrust power plant of a serially produced small UAV is the electrically powered BLDC motor. Electrically powered UAVs, with their high motor efficiency and dense energy storage capabilities, can be produced in small size and weight, which makes them hard to detect using common UAV detection methods. UAVs are already capable of performing fully autonomous flights without radio communication or GPS navigation; they are easily programmed to overfly any actively RF-emitting radar detection system and to complete a preprogrammed mission in full radio silence mode. With the rapidly increasing demand for payload and the improving technologies of lightweight battery energy storage, UAV manufacturers are racing to increase maximum UAV payload mass by increasing thrust motor power. Higher electric thrust power requires more powerful switching capabilities for motor control, which also increases the electromagnetic interference signature emitted by an electrically powered UAV (Blažek, 2015; Lipovský et al., 2018). This electromagnetic interference signature will grow with the maximum payload of any electrically powered UAV; it is only a matter of time before the increased payload results in electromagnetic interference noticeable enough for reception. Discussions and conclusions Trends in UAV thrust engine technology clearly indicate that the vast majority of mass-produced small recreational UAVs are powered electrically, leaving very little room for mass-produced small UAVs powered by any type of internal combustion engine. Because copter-type small UAVs lack electromechanical control mechanisms for positioning and suffer high thrust inertia, the direct use of internal combustion engines for small-UAV propulsion is not developing, with the exception of fixed-wing UAVs with high payload capacity. The fast-growing demand for UAV payload increases the overall power of the thrust engines used in small UAVs, and fast-developing lightweight electrical energy storage solutions allow UAV manufacturers to compete in achieving maximum payload and flight endurance in mass-produced UAVs. Hybrid solar and petrol-electric energy plants are joining the development race to increase UAV flight performance characteristics, while thrust engine development, given its performance, weight and control requirements, remains firmly tied to electrical energy sources. The use of fast-developing artificial intelligence shows great promise for improving the reliability and range of drone detection systems.
Present passive UAV detection technology development trends concentrate on low-power RF communication detection methods. It should be noted that, thanks to highly improved autonomous flight control capabilities, UAVs can perform remote flight missions in complete radio silence, without any type of radio communication, or overfly zones where active radar detection operates. RF spectrum signature analysis is hardly effective in highly populated urban areas, where radio communication channels are heavily used and a UAV can be programmed to use communication frequencies in the Wi-Fi, GSM, DVB-T or any other busy and hard-to-monitor frequency range. Early detection systems combined only a few types of detection methods to get better UAV detection results and fewer false alarms; at present most UAV detection systems tend to use artificial intelligence and adapt to the monitored environment. Advances in autonomous, radio-silent flight make covert low-terrain flights possible, and there are no widely available instruments ready to resist "kamikaze drone" attacks during mass events or to protect national airports and no-fly zones. These circumstances push us back to the essence of UAV flight, eliminating all the features without which UAV flight would not be possible. One mandatory component of small copter UAV flight is the electric thrust engines and their control peculiarities during take-off, positioning and landing. Given the rapidly increasing demand for UAV payload and flight endurance, UAV manufacturers compete in developing high-power electric BLDC motors, innovative electric energy sources and high-power BLDC motor speed regulators. The use of these high-power components increases the electromagnetic interference signature emitted by electrically powered UAVs, and this signature will keep growing with maximum payload. It is only a matter of time before the increasing payload of any electrically powered UAV produces an electromagnetic interference signature noticeable enough for its reliable reception and detection.
4,863.6
2020-02-19T00:00:00.000
[ "Computer Science" ]
The fractional Brownian motion and the halo mass function The fractional Brownian motion with index $\alpha$ is introduced to construct the fractional excursion set model. A new mass function with the single parameter $\alpha$ is derived within the formalism, of which the Press-Schechter mass function (PS) is a special case when $\alpha=1/2$. Although the new mass function is computed assuming spherical collapse, comparison with the Sheth-Tormen fitting function (ST) shows that the new mass function with $\alpha\approx 0.435$ agrees with ST remarkably well in the high-mass regime, while predicting more small-mass halos than ST but fewer than PS. The index $\alpha$ is the Hurst exponent, whose exact value in the context of structure formation is modulated by the properties of the smoothing window function and the shape of the power spectrum. It is conjectured that the halo merging rate and merging history in the fractional excursion set theory might be imprinted with the interplay between halos at small scales and their large-scale environment, and that the mass function in the high-mass regime can be a good tool to detect non-Gaussianity of the initial density fluctuations. INTRODUCTION Halo models are widely applied in the campaign of precision cosmological parameter estimation from the cosmic large-scale structure, as well as in the quest to understand structure formation. The mass function is a fundamental ingredient of halo models. The most famous analytical formula for the mass function, the Press-Schechter mass function (hereafter PS), was derived by Press & Schechter (1974) based on the spherical collapse model. The PS function can alternatively be derived with the random walk, or excursion set, formalism (Bond et al. 1991): by smoothing the linear density field on different scales with a sharp k-space filter, the density fluctuation within the characteristic scale can be regarded as a random walk against the variance of the smoothed field at that scale, and consequently the whole theory of random walks can be grafted onto the modelling of the density contrast field. This elegant theory provides a concise analytical framework for studying various processes in cosmic structure formation and has been embraced with great interest by the community. For instance, Lacey & Cole (1993) explicitly calculated the merger rate, halo formation time and related properties of galaxy clusters; Sheth & Tormen (2002) adopted the excursion set theory with a moving barrier to study the ellipsoidal collapse of halos; Zhang & Hui (2006) solved the excursion set theory with a moving barrier of arbitrary shape and discussed the HII bubble size during reionization; and the void phenomenon was explored within the framework by Furlanetto & Piran (2006). The success of the random walk formalism in cosmology is prominent, but the primary product of the excursion set theory, the PS mass function, is a poor description of simulations at all epochs (Reed et al. 2006). The common practice is to parameterize the PS function and then fit it to simulations to pin down the free parameters (e.g. Sheth & Tormen 1999; Jenkins et al. 2001; Reed et al. 2003; Warren et al. 2006). Many functional forms have been proposed by various authors to account for the different effects of ellipsoidal collapse (Sheth & Tormen 2002), angular momentum (Del Popolo 2006a) and the index of the power spectrum (Reed et al. 2006).
Betancort-Rijo & Montero-Dorta (2006) claim that the "all-mass-at-center" problem must be properly formulated to obtain the correct mass function in the high-mass regime, and Lee (2006) assumes a break in the hierarchical merging process and obtains a much shallower mass function in the low-mass regime. In this report, we construct a fractional excursion set theory by replacing the conventional random walk with the fractional Brownian motion of index $\alpha$; the standard excursion set theory is simply a special case of the new theory. The difference between the normal random walk and the fractional random walk lies in the fact that the latter takes the correlation between walking steps into account. A new mass function is derived within the fractional excursion set theory, containing one parameter $\alpha$ connected with the correlation between the steps of the random walk. Although the new mass function is derived with the boundary condition of a single fixed absorbing barrier, i.e. in the spherical collapse scenario, it agrees well with the Sheth-Tormen formula (Sheth & Tormen 1999, hereafter ST) for $\alpha \approx 0.435$ in the high-mass regime, while having more small-mass halos than ST and fewer than PS. The layout of this paper is as follows: first we briefly recall the excursion set theory in Section 2, then in Section 3 we introduce the fractional Brownian motion to develop the fractional excursion set theory and subsequently derive a new mass function; the last section is the discussion. THE EXCURSION SET THEORY The initial density fluctuation $\delta = \rho/\bar\rho - 1 \ll 1$ in the early universe is Gaussian and evolves linearly. If the density contrast in a region exceeds a critical value $\delta_c$, the mass in that region will collapse and be virialised to form a halo. As pointed out by Bond et al. (1991), at an arbitrary point in the universe the density contrast smoothed with a window function $W_M(R)$ of characteristic scale $R$ is a function of the underlying total mass $M(R) \sim \bar\rho R^3$ enclosed by the smoothing window; $M$ effectively represents the scale $R$. The variation of the smoothed density contrast $\delta(M)$ forms a trajectory in the $\delta(M)$-$M$ plane. The collapse condition $\delta_c$ becomes an absorbing barrier for the trajectory: at the largest $M$ where $\delta(M)$ first crosses the barrier, the trajectory is absorbed, i.e. an object will form. The task of finding how many objects form in the mass range $(M, M + dM)$ is converted into the problem of tracing the fraction of trajectories passing through the barrier. A quantity used to represent the smoothing scale instead of the mass $M$ is the variance of the smoothed field, $S \equiv \sigma^2(M) = \frac{1}{(2\pi)^3}\int d^3k\, |\delta_k|^2\, \widetilde{W}_M^2(k)$, where $\delta_k$ and $\widetilde{W}_M(k)$ are the Fourier transforms of $\delta$ and of the window function $W_M(r)$ respectively. The smoothed density fluctuation can be written as $\delta(M) = \frac{1}{(2\pi)^3}\int d^3k\, \delta_k\, \widetilde{W}_M(k)$, which tells us that $\delta(S)$ is the sum of the modes $\delta_k$ weighted by the window function $\widetilde{W}_M(k)$. If the smoothing scale $R$ is sufficiently large, $S$ and $\delta(S)$ will be zero. Once we decrease the smoothing scale $R$, since the window $W$ is a function of $R$, the weighting of the Fourier modes of $\delta$ changes. Naturally, the character of the $\delta(S)$ trajectories depends on the weighting pattern of the Fourier modes, i.e. on the properties of the window function (see examples in Bond et al. 1991). If the window function is sharp in k-space (a top-hat spanning $k = 0$ to $k \sim 1/R$), the increment $\delta(S+dS)-\delta(S)$ of a step from $S$ to $S+dS$ comes from a new set of Fourier modes in a thin shell $(k, k+dk)$.
The phases of $\delta_k$ are uniformly distributed in $[0, 2\pi]$, so the sum $\sum_{k}^{k+dk} \delta_k$ is a Gaussian random variable uncorrelated with previous increments (Bond et al. 1991; Lacey & Cole 1993): this is exactly a Brownian random walk. If we define $Q(\delta, S)$ as the number density of trajectories at $S$ within $(\delta, \delta + d\delta)$, the Brownian random walk satisfies the simple diffusion equation $\frac{\partial Q}{\partial S} = \frac{1}{2}\frac{\partial^2 Q}{\partial \delta^2}$, with the initial condition $\delta(S) = 0$ at $S = 0$. In the absence of a barrier we have the solution $Q(\delta, S) = \frac{1}{\sqrt{2\pi S}}\exp\left(-\frac{\delta^2}{2S}\right)$. According to Chandrasekhar (1943), a trajectory $\delta(S)$ that reaches the barrier $\delta_c$ at $S$ has equal probability of walking above or below the barrier, therefore the solution of the diffusion equation with an absorbing-barrier boundary condition is $Q(\delta, S) = \frac{1}{\sqrt{2\pi S}}\left[\exp\left(-\frac{\delta^2}{2S}\right) - \exp\left(-\frac{(\delta - 2\delta_c)^2}{2S}\right)\right]$. The probability of a trajectory being absorbed by the barrier $\delta_c$ must equal the reduction of trajectories surviving below the barrier in the interval $(S, S + dS)$, $f(S)\,dS = -\frac{\partial}{\partial S}\left[\int_{-\infty}^{\delta_c} Q(\delta, S)\,d\delta\right]dS$. Substituting the barrier solution into this relation gives $f(S) = \frac{\delta_c}{\sqrt{2\pi}\,S^{3/2}}\exp\left(-\frac{\delta_c^2}{2S}\right)$, which is the fraction of mass associated with halos in the range of $S$ and consequently of $M$. So the comoving number density of halos of mass $M$ at epoch $z$ is simply $n(M)\,dM = \frac{\bar\rho}{M}\,f(S)\left|\frac{dS}{dM}\right|dM$. This is the well-known Press-Schechter mass function. Motivation. It is clear that the validity of the Brownian random motion prescription for the trajectory of $\delta(S)$ is guaranteed by the sharp k-space filtering. The lack of correlation between a new increment and any previous step defines the Markov nature of the Brownian motion. In the context of structure formation it means that the formation of halos at small scales is not correlated with the density fluctuation smoothed at large scales; hence halo formation is completely independent of environment. If we choose a different smoothing window function, such as a Gaussian or a top-hat in real space, $\delta(S + dS)$ contains the same set of $\delta_k$ as $\delta(S)$, though in the summation each Fourier mode is weighted differently by the window function. In this circumstance $\delta(S)$ is apparently correlated with earlier steps, which can no longer be described by the Brownian random walk formalism; in general there is no analytical solution for these types of walks with correlation (Bond et al. 1991). Recently, with the emergence of high-resolution simulations, it has been revealed that the formation history and properties of halos, especially of small mass, are modulated significantly by the halos' large-scale environment (Sheth & Tormen 2004; Gao et al. 2005; Harker et al. 2006; Wechsler et al. 2006). There must therefore be considerable influence from mass accretion at large scales on the amplitude of the density fluctuation smoothed at small scales, i.e. $\delta(S)$ is correlated with $\delta(S' < S)$ even when the window function is sharp in k-space. Either mathematically or physically, the walk of $\delta(S)$ in a realistic density field is some kind of random motion with correlated steps, which is obviously not a Brownian random motion but rather partly random and partly deterministic. Walks with such "fractional" randomness are, fortunately, exactly the objects that the fractional Brownian motion (FBM) is designed to describe. The fractional Brownian motion. The FBM is a generalization of the normal Brownian random walk introduced by Mandelbrot & van Ness (1968). Though not well known in the astronomy community, FBM has been widely used to model the geometry and growth of many types of rough surfaces in nature, such as mountain terrain, clouds, percolation and diffusion-limited aggregation; interestingly, it also finds application in financial markets (cf. Meakin 1998).
Formally, with index $\alpha$ ($0 < \alpha < 1$), an FBM is defined as a random process $X(t)$ on some probability space such that: (i) with probability 1, $X(t)$ is continuous and $X(0) = 0$; (ii) for any $t \geq 0$ and $h > 0$, the increment $X(t+h)-X(t)$ follows a normal distribution with mean zero and variance $h^{2\alpha}$, so that $P\left(X(t+h)-X(t) \leq x\right) = \frac{1}{\sqrt{2\pi}\,h^{\alpha}}\int_{-\infty}^{x}\exp\left(-\frac{u^2}{2h^{2\alpha}}\right)du$. If $\alpha = 1/2$, FBM reduces to the normal Brownian motion (cf. Feder 1988). The index $\alpha$ is named the Hurst exponent, used originally in rescaled range analysis (R/S analysis) to portray the scaling behaviour of time series. It has a strong connection with the fractal dimensions of time series or spatial structures, but the exact relation is case-dependent (Meakin 1998). Here, $\alpha$ tells us how strongly a step increment is correlated with previous steps: the trajectory of an FBM with smaller index $\alpha$ is noisier than that of an FBM with higher index, so $\alpha$ is sometimes called the roughness exponent. It is very interesting that FBM has infinitely long-run correlations. For instance, past increments $X(0) - X(-t)$ are correlated with future increments $X(t) - X(0)$: as $X(0) = 0$, the correlation function of the "past" and "future" is $C(t) = \frac{\langle -X(-t)\,X(t)\rangle}{\langle X(t)^2\rangle} = 2^{2\alpha - 1} - 1$, which is invariant with the "time" $t$ and vanishes only when $\alpha = 1/2$! This impressive feature of FBM leads us to classify FBM into two types: (i) persistent FBM with $\alpha > 1/2$, meaning that an increasing trend in the past results in an increasing trend in the future for arbitrarily large $t$, i.e. a positive feedback process; (ii) anti-persistent FBM with $\alpha < 1/2$, for which an increasing trend in the past leads to a decreasing trend in the future, i.e. a negative feedback. Knowing the generation methods of FBM may help in understanding its characteristics. To simulate a one-dimensional FBM, the simplest method is the Mandelbrot & van Ness representation $X(t) = \frac{1}{\Gamma(\alpha + 1/2)}\left[\int_{-\infty}^{0}\left((t-s)^{\alpha-1/2} - (-s)^{\alpha-1/2}\right)dB(s) + \int_{0}^{t}(t-s)^{\alpha-1/2}\,dB(s)\right]$, discretised by replacing the Brownian increments $dB(s)$ at times $t$ and $s$ with uncorrelated random numbers $G(t)$ and $G(s)$ drawn from a normal distribution with zero mean and unit variance, and by truncating the lower limit at a practical cut-off $-n$, which shall be as large as possible. To generate a $(d+1)$-dimensional surface of FBM by Fourier transformation, we first place a grid in Fourier space and fill the grid with complex numbers $\delta(k)$ with Gaussian-distributed amplitudes and random phases. Spatial correlation is introduced by $\delta'(k) = k^{-(\alpha + d/2)}\,\delta(k)$, i.e. by imposing a power spectrum $\propto k^{-(2\alpha + d)}$; the Fourier transformation of the random field $\delta'(k)$ then gives a self-affine surface modelled by FBM. The fractional excursion set theory. The number density of trajectories $Q_\alpha(\delta, S)$ of the fractional Brownian motion with index $\alpha$ obeys the diffusion equation (cf. Lutz 2001) $\frac{\partial Q_\alpha}{\partial S} = \alpha S^{2\alpha - 1}\frac{\partial^2 Q_\alpha}{\partial \delta^2}$, which, in the absence of a barrier, has the solution $Q_\alpha(\delta, S) = \frac{1}{\sqrt{2\pi}\,S^{\alpha}}\exp\left(-\frac{\delta^2}{2S^{2\alpha}}\right)$. The distribution of $\delta(S)$ at $S$ is apparently still Gaussian, so the argument of Chandrasekhar (1943) remains valid, and the solution under the boundary condition of a fixed absorbing barrier $\delta_c$ is $Q_\alpha(\delta, S) = \frac{1}{\sqrt{2\pi}\,S^{\alpha}}\left[\exp\left(-\frac{\delta^2}{2S^{2\alpha}}\right) - \exp\left(-\frac{(\delta - 2\delta_c)^2}{2S^{2\alpha}}\right)\right]$. After a straightforward but tedious calculation, the halo mass function is $n(M)\,dM = \frac{\bar\rho}{M}\,f_\alpha(S)\left|\frac{dS}{dM}\right|dM$, with the kernel $f_\alpha(S) = \sqrt{\frac{2}{\pi}}\,\frac{\alpha\,\delta_c}{S^{\alpha + 1}}\exp\left(-\frac{\delta_c^2}{2S^{2\alpha}}\right)$. It is easy to see that this reduces to the PS function when $\alpha = 1/2$. For comparison, we reproduce the kernel of the Sheth-Tormen mass function here, $\nu f(\nu) = 2A\left(1 + \nu'^{-2p}\right)\left(\frac{\nu'^2}{2\pi}\right)^{1/2}\exp\left(-\frac{\nu'^2}{2}\right)$ with $\nu' = \sqrt{a}\,\delta_c/\sigma$, where $A = 0.3222$, $a = 0.707$ and $p = 0.3$. $f_\alpha$ for different Hurst exponents $\alpha$, in comparison with the PS and ST formulas, is displayed in Fig. 1. The mass function from persistent FBM is very different from that of anti-persistent FBM. It appears that the ST function is in good agreement with our new mass function of index $\alpha = 0.435$ in the large-mass regime $\ln\sigma^{-1} \gtrsim 0.3$, beyond which the mass function is very sensitive to the choice of $\alpha$.
Considering the fact that the ST mass function is obtained from fitting to simulations and has good accuracy in the large-mass regime, an immediate conclusion is that the trajectories $\delta(S)$ in our universe are actually anti-persistent. In the small-mass regime, the dependence of $f_\alpha$ on $\alpha$ is relatively weak. If $\alpha < 0.5$, the new mass function predicts fewer halos than the PS formula by ∼10-20%, but up to ∼30% more than the ST function gives. It is very difficult and unreliable to resolve halos with masses below $10^8 M_\odot$ in present-day simulations, so we have to leave it to future work to tell which mass function is better in the small-mass regime. A quick check indicates that $f_\alpha$ has a very different shape from the ST formula at $\ln\sigma^{-1} \lesssim 0.3$; we can only achieve a good fit to $f_{\rm ST}$ in the range $-0.5 < \ln\sigma^{-1} < 0.3$ with $\alpha \approx 0.35$. DISCUSSION The fractional Brownian motion of index $\alpha$ is introduced to construct the fractional excursion set theory. The new mass function computed with the theory is analytical and simple, and the PS mass function is only the special case $\alpha = 1/2$. Comparison with the ST function nurtured by N-body simulations demonstrates the excellent performance of the new mass function. In Fig. 1 it is observed that the high-mass halo abundance is very sensitive to the value of $\alpha$; the observed high-mass halo abundance can therefore potentially be a very powerful tool to detect non-Gaussianity of the initial density fluctuation field: non-Gaussianity will change the correlation between the walking steps of $\delta(S)$ and therefore effectively modify $\alpha$. The success of applying the FBM formalism to model structure formation is attributed to the inclusion of the correlation between density fluctuations at different scales. The correlation strength characterized by the Hurst exponent $\alpha$ could result from the properties of the window function and the intrinsic correlations of the cosmic density field. We know that non-sharp filtering in k-space will induce correlation (Bond et al. 1991), but it is unclear how $\alpha$ changes with the features of the window function. Of more interest is the relation between $\alpha$ and the power spectrum of the density field. The spectral synthesis method above provides some clues; however, there is the complication that the scaling of the trajectory $\delta(S)$ is defined relative to the variance $\sigma^2$, not the physical scale $R$. Numerical experiments with scale-free simulations should be able to improve our understanding of the effects of the window function and the power spectrum on $\alpha$. In this work only the mass function is computed. In principle, the fractional excursion set theory may have many applications; for example, the works of Lacey & Cole (1993), Mo & White (1996) and Zhang & Hui (2006) can all be revisited with FBM. Since $\alpha$ encodes the correlation of $\delta$ at different scales, the subsequently calculated halo merger rates and merger histories are marked with the stamp of the large-scale environment on halo formation at small scales. We might then be able to explain the dependence of halo clustering on halo formation history and environment (Gao et al. 2005; Wechsler et al. 2006). The new halo mass function is obtained assuming spherical collapse. To improve the accuracy of the model, ellipsoidal collapse has to be taken into account. The poor performance of the new mass function with $\alpha \approx 0.435$ in the low-mass regime (see Fig. 1) is very likely due to our simplification of adopting the spherical collapse model.
In essence, to incorporate ellipsoidal collapse we replace the fixed barrier $\delta_c$ with a moving barrier $B(S)$, as in Sheth & Tormen (2002) and Del Popolo (2006a,b), and then solve the diffusion equation with the new boundary condition. Technical details and comparison with simulations will be presented elsewhere (Fosalba & Pan, in preparation).
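As a numerical complement to the analytic kernel: because FBM is non-Markovian, the mirror-image construction used above is an approximation, so a direct Monte Carlo first-passage simulation is the natural cross-check of the $\alpha$-dependence. The sketch below (Python/NumPy; an illustration under our own parameter choices, not the paper's pipeline) generates exact FBM trajectories by Cholesky factorization of the FBM covariance and records their first crossings of the constant barrier; histogramming the returned crossing scales approximates the first-crossing distribution $f_\alpha(S)$ discussed above.

```python
import numpy as np

def fbm_first_crossing(alpha, delta_c=1.686, n=400, S_max=8.0,
                       n_paths=4000, seed=1):
    """Monte Carlo first-crossing scales of FBM trajectories (variance S^{2a})
    at a constant barrier delta_c; a numerical stand-in for f_alpha(S)."""
    S = np.linspace(S_max / n, S_max, n)
    tt, ss = np.meshgrid(S, S, indexing="ij")
    cov = 0.5 * (tt**(2*alpha) + ss**(2*alpha) - np.abs(tt - ss)**(2*alpha))
    L = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
    rng = np.random.default_rng(seed)
    paths = L @ rng.standard_normal((n, n_paths))     # shape (n, n_paths)
    above = paths >= delta_c
    crossed = above.any(axis=0)
    first = np.argmax(above, axis=0)
    return S[first[crossed]]

# Anti-persistent walks (alpha < 1/2) cross later and less often than
# persistent ones, shifting mass away from the rare, high-nu tail.
S_cross_anti = fbm_first_crossing(alpha=0.435)
S_cross_pers = fbm_first_crossing(alpha=0.6)
```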
4,128.6
2006-10-16T00:00:00.000
[ "Physics" ]
Neonatal Genetic Delivery of Anti-Respiratory Syncytial Virus (RSV) Antibody by Non-Human Primate-Based Adenoviral Vector to Provide Protection against RSV Respiratory syncytial virus (RSV) is one of the leading causes of lower respiratory tract infection in infants. Immunoprophylaxis with the anti-RSV monoclonal antibody, palivizumab, reduces the risk for RSV-related hospitalizations, but its use is restricted to high-risk infants due to the high costs. In this study, we investigated if genetic delivery of anti-RSV antibody to neonatal mice by chimpanzee adenovirus type 7 expressing the murine form of palivizumab (AdC7αRSV) can provide protection against RSV. Intranasal and intramuscular administration of AdC7αRSV to adult mice resulted in similar levels of anti-RSV IgG in the serum. However, only intranasal administration resulted in detectable levels of anti-RSV IgG in the bronchoalveolar lavage fluid. Intranasal administration of AdC7αRSV provided protection against subsequent RSV challenge. Expression of the anti-RSV antibody was prolonged following intranasal administration of AdC7αRSV to neonatal mice. Protection against RSV was confirmed at 6 weeks of age. These data suggest that neonatal genetic delivery of anti-RSV antibody by AdC7αRSV can provide protection against RSV. Introduction Respiratory syncytial virus (RSV) is one of the leading causes of lower respiratory tract infections in children, with high morbidity and mortality, especially in developing countries [1][2][3]. Most children are infected during their first year of life, and infants under six months of age are at the highest risk for severe disease, especially those with bronchopulmonary dysplasia or cyanotic heart disease [4,5]. Despite extensive efforts over decades, there are no effective vaccines or treatments for RSV. Immunoprophylaxis with the neutralizing monoclonal antibody, palivizumab, is the only available drug to prevent infections with RSV [6][7][8]. Although palivizumab is safe and well tolerated for prophylactic use, its use is limited to high-risk infants due to the high costs and the need for monthly intramuscular injections during the RSV season [9]. Since most deaths attributed to RSV occur in low-income countries where expensive prophylaxis is not affordable [3], more cost-effective strategies are needed to spare more infants at risk from severe RSV disease. Genetic delivery of neutralizing antibodies using gene transfer vectors is an alternative strategy to achieve sustained expression of neutralizing antibodies. Among all currently available viral vectors, adenovirus (Ad) is one of the most efficient gene delivery systems, but pre-existing immunity against common human serotypes, such as Ad5, is hampering its use in humans. Non-human primate-derived Ads, such as chimpanzee Ad serotype 7 (AdC7), are alternative vectors and are less likely to be affected by pre-existing immunity [10][11][12]. We have previously reported that anti-Ad5 neutralizing antibodies do not cross-neutralize AdC7, and that maternal immunization of mice with the AdC7 vector expressing RSV vaccine antigen can protect their pups against RSV infection [13]. Therefore, we adopted AdC7 as a gene transfer vector to deliver anti-RSV neutralizing antibody to infants, expecting successful application in the presence of maternal anti-human Ad antibodies.
Anti-Ad immunity induced by the vector limits long-term expression of the transgene and is thus another limitation of Ad vectors. However, immune tolerance may allow longer persistence when the vector is administered to neonates. The neonatal immune system is much less likely to develop a vigorous immune response to transgenic proteins [14,15]. In this study, we administered an AdC7 vector expressing anti-RSV antibody (AdC7αRSV) to neonatal mice and evaluated the efficacy of the delivered anti-RSV antibody in protecting against RSV infection. Mice BALB/c mice were purchased from The Jackson Laboratories (Bar Harbor, ME, USA), housed under specific pathogen-free conditions, and bred to obtain neonatal mice. Adult female BALB/c mice were used at 8 weeks of age, and neonatal mice were used between 24 and 48 h after birth. All animal studies were conducted in accordance with the protocols reviewed and approved by the Weill Cornell Institutional Animal Care and Use Committee (protocol number 2015-0011). All efforts were made to minimize the suffering of the animals. Generation of an AdC7 Vector Expressing Murine Anti-RSV IgG (AdC7αRSV) The recombinant Ad vectors used in this study are replication-defective E1-, E3-deleted Ad vectors based on the chimpanzee AdC7. The AdC7 plasmid pPan-GFP (kindly provided by JM Wilson, University of Pennsylvania, Philadelphia, PA, USA) was digested with I-CeuI and PI-SceI, and the expression cassette of the anti-RSV antibody carrying (5′ to 3′) the cytomegalovirus promoter/enhancer followed by cDNAs encoding the anti-RSV light chain, the poliovirus internal ribosomal entry site (IRES), the anti-RSV heavy chain, and the SV40 polyadenylation signal [17] was inserted into the E1 region using the same restriction enzyme sites. AdC7GFP, an AdC7 vector with the green fluorescent protein cDNA under the control of a prokaryotic promoter, which does not lead to transgene expression in mammalian cells, was used as a control. The AdC7αRSV vectors were propagated in HEK-293 cells and purified by centrifugation twice through a CsCl gradient as previously described [18], and the particle units (pu) were determined spectrophotometrically [19]. Western Blot Analysis To confirm the expression of anti-RSV IgG in vitro, supernatants of A549 cells infected with AdC7αRSV were separated by SDS-PAGE under both non-reducing and reducing conditions. Following transfer to a polyvinylidene difluoride (PVDF) membrane (Bio-Rad Laboratories, Hercules, CA, USA), murine IgG was detected using a horseradish peroxidase (HRP)-conjugated sheep anti-mouse IgG antibody (Sigma, St. Louis, MO, USA) and Immobilon Western Chemiluminescent HRP substrate (EMD Millipore, Burlington, MA, USA). The supernatant of mock-infected cells was used as a negative control. Mouse serum obtained 8 weeks following RSV infection was used as a positive control. Dot Blot To evaluate binding of anti-RSV IgG to RSV, RSV Line19 (1.2 × 10⁴ pfu/spot) was blotted onto a PVDF membrane (Bio-Rad Laboratories, Hercules, CA, USA) and then developed with culture supernatant of HEK-293 cells infected with AdC7αRSV, followed by the sheep anti-mouse IgG-peroxidase antibody as described above. Ad5 (2.0 × 10⁶ pu/spot) was blotted as a control. Plaque Reduction Assay Serial dilutions of supernatants from HEK-293 cells that had been infected for 48 h with AdC7αRSV were incubated with RSV Line19 (5 × 10³ pfu/mL) for 1 h at 37 °C and then added to Vero cells. The number of plaques was quantified after 4 days as previously described [16].
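For the plaque reduction readout, the neutralization percentage at each dilution is simply the plaque count relative to a virus-only control. A minimal sketch (Python; the counts and dilutions below are hypothetical examples, not data from this study):

```python
import numpy as np

def percent_reduction(plaques_sample, plaques_control):
    """Plaque reduction (%) relative to the virus-only control wells."""
    return 100.0 * (1.0 - np.asarray(plaques_sample, float) / plaques_control)

# Hypothetical counts for 1:10 ... 1:10000 supernatant dilutions against a
# control well with 50 plaques, giving a dose-dependent curve as in Figure 3B.
dilutions = [10, 100, 1000, 10000]
counts = [4, 12, 30, 46]
print(dict(zip(dilutions, percent_reduction(counts, 50.0))))
```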
Expression of anti-RSV IgG In Vivo AdC7αRSV or AdC7GFP (5 × 10¹⁰ pu each) diluted in 40 µL PBS were administered intranasally or intramuscularly to 8-week-old female BALB/c mice. Neonatal mice received either 3 × 10¹⁰ pu (5 µL) or 6 × 10¹⁰ pu (10 µL) of the vectors intranasally. The levels and kinetics of anti-RSV IgG following administration of AdC7αRSV were quantified in serum and bronchoalveolar lavage (BAL) by ELISA. Serial dilutions of serum and BAL were added to flat-bottomed 96-well EIA/RIA plates (Corning, Corning, NY, USA) coated with 1 µg/mL of human anti-palivizumab clone AbD23967 (HCA261, Bio-Rad Antibodies, Hercules, CA, USA), followed by PBST + 5% blotting-grade blocker (Bio-Rad Laboratories, Hercules, CA, USA). Detection was performed using an HRP-conjugated sheep anti-mouse IgG (Sigma, St. Louis, MO, USA) in PBS + 1% blotting-grade blocker and substrate (hydrogen peroxide/tetramethylbenzidine) (R&D Systems, Minneapolis, MN, USA), and the absorbance at 450 nm was measured. Titers were calculated with a log(OD)-log(dilution) interpolation model, with the detection cut-off equal to 2-fold the background absorbance. Half-life (t1/2) was calculated by the formula t1/2 = t × ln(2)/ln(N0/Nt), where t = time elapsed, N0 = titer at 1 week, and Nt = titer at 4 weeks after the administration of AdC7αRSV. Statistics Statistical analyses were performed using one-way ANOVA, followed by two-tailed unpaired Student's t-tests. Statistical significance was determined at p < 0.05.
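Both the titer interpolation and the half-life estimate described above are straightforward to reproduce. A minimal sketch (Python/NumPy; it assumes single-exponential titer decay, consistent with the formula above, and all example numbers are hypothetical, chosen only to land near the reported 19-day half-life):

```python
import numpy as np

def titer(od, dilutions, cutoff):
    """Endpoint titer by log(OD)-log(dilution) interpolation, with a
    detection cut-off (e.g., 2-fold the background absorbance)."""
    logod, logdil = np.log10(od), np.log10(dilutions)
    # np.interp needs ascending x; OD falls with dilution, so reverse both.
    return 10 ** np.interp(np.log10(cutoff), logod[::-1], logdil[::-1])

def half_life(n0, nt, t_elapsed):
    """t1/2 = t * ln2 / ln(N0/Nt), assuming single-exponential decay."""
    return t_elapsed * np.log(2) / np.log(n0 / nt)

# Hypothetical dilution series with background-derived cutoff OD of 0.3.
print(titer([2.0, 1.2, 0.5, 0.2], [100, 400, 1600, 6400], cutoff=0.3))
# Hypothetical titers of 3200 at week 1 and 1500 at week 4 (t = 21 days)
# give t1/2 = 21 * ln2 / ln(3200/1500) ~ 19 days, as reported.
print(half_life(3200, 1500, 21))
```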
Expression of Murine Anti-RSV In Vitro AdC7αRSV (Figure 1) was generated and propagated in HEK-293 cells. Figure 1. Schema of the chimpanzee adenovirus type 7 vector expressing murine anti-respiratory syncytial virus antibody (AdC7αRSV). The E1/E3 genes of AdC7 are deleted (∆E1/∆E3) and replaced by the expression cassette of the anti-RSV antibody cDNAs using the restriction enzyme sites I-CeuI and PI-SceI. The expression cassette includes the cytomegalovirus promoter (P CMV), followed by cDNAs encoding the anti-RSV light chain (LC), the poliovirus internal ribosome entry site (IRES), the anti-RSV heavy chain (HC), and the SV40 polyadenylation signal (SV40 pA). To confirm expression of murine anti-RSV IgG in vitro, A549 cells were infected with purified AdC7αRSV, and cell culture supernatants were assessed by Western blot analysis (Figure 2). Under non-reducing conditions, a complex of 150 kDa, corresponding to the size of the completely assembled murine IgG, was detected (Figure 2A, lane 1). Under reducing conditions, individual heavy chains (HC, 50 kDa) and light chains (LC, 25 kDa) were detected (Figure 2B; lane 4). The binding ability of the expressed anti-RSV IgG to RSV was confirmed by dot blot ELISA using RSV Line19 binding to cell culture supernatants of HEK-293 cells infected with AdC7αRSV (Figure 3A, lanes 1, 2). Plaque-reduction assay showed dose-dependent neutralization of RSV by the antibodies (Figure 3B). Assessment of Anti-RSV IgG Delivered by AdC7αRSV In Vivo The kinetics of anti-RSV IgG in the serum of adult mice following intranasal or intramuscular administration of AdC7αRSV showed that the anti-RSV IgG titer peaked at one week following administration and then gradually decreased to non-specific response levels by eight weeks (Figure 4A). The estimated half-life was 19 days. Although the titers were higher following intramuscular administration, the kinetics of anti-RSV levels over time were similar between the two routes of administration. In contrast, anti-RSV IgG in the BAL at one week was only detectable following intranasal administration (Figure 4B).
Protection against RSV Infection Following AdC7αRSV Administration in Adult Mice To evaluate if AdC7αRSV can deliver sufficient antibody to protect against RSV infection, AdC7αRSV, AdC7GFP or PBS (no AdC7 control) were intranasally administered to eight-week-old female BALB/c mice, followed by RSV Line19 infection three days later. RSV viral loads in the lungs of mice that had received AdC7αRSV were lower (below the detection level) compared to mice that had received AdC7GFP (Figure 5A; p < 0.01). Likewise, RSV viral genomes were also decreased in mice that had received AdC7αRSV compared to AdC7GFP (Figure 5B; p < 0.05). This suggests that intranasal administration of AdC7αRSV provides protection against RSV infection in adult mice.
Protection against RSV Following AdC7αRSV Administration to Neonatal Mice To evaluate if AdC7αRSV administered to neonatal mice can lead to protection against RSV, AdC7αRSV or AdC7GFP was intranasally administered to one-day-old mice. Anti-RSV IgG was detected in 4 out of 5 mice that had received AdC7αRSV at four and six weeks of age (Figure 6A). The kinetics of anti-RSV IgG in the serum of neonatal mice after four weeks of age following intranasal administration of AdC7αRSV showed that the titers were highest at four weeks and then gradually decreased, but stayed at detectable levels until nine weeks (Figure S1A). Serum anti-RSV IgG titers were also at detectable levels 14 weeks following intranasal administration of AdC7αRSV. The mice that had received a higher dose of AdC7αRSV tended to have higher anti-RSV IgG titers, particularly in lung homogenate supernatants (Figure S1B). Importantly, challenge with RSV at six weeks of age showed a reduction of viral loads only in the mice that had received AdC7αRSV, with undetectable RSV levels in three out of five mice, compared with the mice that had received AdC7GFP or that had not received any Ad vector (Figure 6B). Protection against RSV was also evaluated at a later time point; anti-RSV IgG expression was confirmed in the mice that had received AdC7αRSV at four and eight weeks of age (Figure S2A). Challenge with RSV at 10 weeks of age also showed a reduction of viral loads in three out of four mice that had received AdC7αRSV, compared with the mice that had received AdC7GFP or that had not received any Ad vector (Figure S2B). These data suggest that administration of AdC7αRSV to neonatal mice can lead to protection against RSV and can provide more prolonged protective immunity than administration to adult mice.
Discussion In this study, we showed that intranasal administration of AdC7αRSV could provide protection against RSV infection. The administration route of the Ad vector may greatly contribute to the efficiency of local delivery to the airway. Intravenous administration of palivizumab was shown to be effective in rodents and humans, effectively reducing RSV load in the lungs of cotton rats [20] and being detectable in human nasal washes [21]. It is known that intramuscularly injected palivizumab is slowly absorbed and maximum serum concentrations are reached at three to five days [22], but the kinetics of lung and airway levels are less well known. Our data showed that anti-RSV IgG was detected in the BAL after intranasal administration but not after intramuscular administration at the time of peak serum IgG levels (at one week). The intranasal administration route may be advantageous through the production of the neutralizing antibodies directly by the respiratory mucosal cells, the primary target of RSV. Future studies should examine the cellular source of the antibody within the respiratory mucosa, and include more analyses of the nose-associated lymphoid tissue (NALT) immune cells. Immune responses in the respiratory tract can be altered by prior viral infections [23,24]. A recent study [25] revealed that nasal priming by viruses can influence lung immunity and can induce protective immunity against heterologous viral infection. Our data in the adult mice showed some protection against RSV also in the mice that had received the control AdC7GFP vector (Figure 5A). Although the protection against subsequent RSV infection was much greater in the mice that had received AdC7αRSV, the AdC7 vector itself seemed to have a priming effect on lung immunity. Neonatal adaptive immune responses show a great deal of variability, ranging from non-responsiveness to fully mature function. It has been shown that newborn mice can be tolerized to Ad vectors, and that repeat administration of the Ad vector could be possible [26]. Thus, immune tolerance can be expected when vectors are administered to neonates. Administration of AdC7αRSV to one-day-old mice resulted in prolonged antibody expression compared to adult mice. Another potential advantage of neonatal gene delivery is the higher vector particle-to-cell ratio, requiring a lower relative dose [15]. In addition, Ad vector delivery seems to be more efficient in the neonatal lung compared to adult lungs [27]. Our data showed that the serum anti-RSV IgG titers are higher after neonatal administration compared to adult administration, albeit with the same dose.
To reduce the burden of disease caused by RSV, there is a strong consensus that focus should be placed on children in their first six months of life, when the risk of severe RSV-associated respiratory disease is highest [8]. Immunization providing protective immunity that lasts throughout the vulnerable period would be an ideal strategy to protect children from RSV infection. We focused on passive immunization in this study, but the combination of AdC7-based passive immunization and AdC7-based active immunization could be an attractive strategy to provide complete protective immunity throughout the vulnerable period. Since mature responses to vaccines can also be expected in the neonatal immune system [14], simultaneous administration of AdC7 carrying anti-RSV neutralizing antibody and RSV vaccine antigen may provide both passive and active immunization. An appropriate vaccine antigen that does not overlap with the antigenic site for palivizumab would be required to make this strategy successful. We did not include wild-type RSV or other RSV vaccine controls in this study. Since our primary interest was to determine if genetic delivery of anti-RSV antibody by the AdC7 vector can be effective in neonatal mice, we used an AdC7 control vector without transgene expression (AdC7GFP). Future studies should include wild-type RSV or vaccine controls to further evaluate the effectiveness of this vector strategy, even if wild-type RSV is not a realistic vaccine alternative for neonates. The half-life of the delivered antibody in our study was 19 days, which is compatible with the known half-life of palivizumab. Prolonged expression was seen when the vectors were administered to neonatal mice, but we could not estimate the half-life of the antibodies since serial collection of neonatal serum was not feasible. Several explanations are possible for the prolonged expression. The antibody may have had a longer half-life than in adult mice because of the unique immunological properties of neonatal mice. The efficiency of gene delivery may have been greater due to the higher susceptibility of neonatal lungs to Ad. Recently, the development of new anti-RSV monoclonal antibodies has moved toward prolongation of the serum half-life. Amino acid modifications have made it possible to extend the serum half-life of the antibody [28,29]. Modifying the expression cassette of AdC7 to deliver a more efficient anti-RSV monoclonal antibody with an extended half-life may itself enable further long-term protection against RSV. Conclusions In summary, our data showed that intranasal administration of AdC7αRSV to neonatal mice provided prolonged expression of anti-RSV antibody to protect against RSV infection. Neonatal immune responses have been mainly studied in mice, but there are some indications that a similar situation holds in humans in early life [14]. We demonstrated the potential of neonatal genetic delivery by a non-human primate-based Ad vector that could efficiently deliver anti-RSV neutralizing antibody to neonatal lungs and could provide protection against RSV.
Figure 2. Expression of murine anti-respiratory syncytial virus (anti-RSV) IgG in vitro. Anti-RSV IgG in supernatants of A549 cells infected with the chimpanzee adenovirus type 7 vector expressing murine anti-RSV IgG (AdC7αRSV) was detected by Western blot analysis. (A) Expression of the full-length murine IgG under non-reducing conditions. (B) Expression of the heavy chain (HC) and light chain (LC) of murine IgG under reducing conditions. The supernatant of mock-infected cells was used as a negative control (lanes 2, 5). Mouse serum 8 weeks post infection with RSV was used as a positive control (lanes 3, 6). Detection was with a horseradish peroxidase (HRP)-conjugated sheep anti-mouse IgG and HRP chemiluminescence substrate.
Figure 3. Assessment of anti-respiratory syncytial virus (anti-RSV) IgG expression in vitro. Supernatants of HEK-293 cells infected with AdC7αRSV were assessed for the presence of functional anti-RSV IgG. (A) Binding to RSV. Supernatants were incubated with RSV Line19 or Ad5 (control) immobilized on a polyvinylidene difluoride (PVDF) membrane, followed by an HRP-conjugated sheep anti-mouse IgG. Mouse serum 8 weeks post infection with RSV was used as a positive control (lane 4). (B) Plaque-reduction assay. Serial dilutions of supernatants were incubated with RSV Line19 (5 × 10³ pfu/mL) for 1 h, followed by infection of Vero cells. Data are shown as % reduction of plaques after 4 days, with mean ± SEM of 4 replicates. Figure 4. Assessment of anti-respiratory syncytial virus (anti-RSV) IgG expression in adult mice. (A) Kinetics of anti-RSV IgG in the serum following intranasal (i.n.) or intramuscular (i.m.) administration of AdC7αRSV (5 × 10¹⁰ pu). Titers were measured by ELISA. Data are shown as mean ± SEM of 4 mice per group. (B) Anti-RSV IgG in the bronchoalveolar lavage (BAL) 1 week following administration of AdC7αRSV (5 × 10¹⁰ pu). Titers were measured by ELISA. Data are shown with mean ± SEM. Figure 5. Protection against RSV infection following AdC7αRSV administration to adult mice. AdC7αRSV, AdC7GFP (5 × 10¹⁰ pu) or PBS (No AdC7 control) were intranasally administered to 8-week-old BALB/c mice, followed by RSV Line19 (10⁶ pfu) challenge 3 days later. (A) RSV viral loads in the lungs 4 days after the RSV challenge by plaque assay. (B) RSV genome expression in the lungs 4 days after the challenge by RT-qPCR. Data are shown with mean ± SEM. * and ** denote p < 0.05 and p < 0.01, respectively.
Figure 6. Protection against RSV infection following AdC7αRSV administration to neonatal mice. AdC7αRSV or AdC7GFP (6 × 10¹⁰ pu) were intranasally administered to 1-day-old BALB/c mice, followed by RSV A2 (10⁶ pfu) challenge at 6 weeks of age. (A) Anti-RSV IgG titer in serum before the RSV challenge. Serum was collected at 4 and 6 weeks, and titers were measured by ELISA. Data are shown with mean ± SEM. (B) RSV viral loads in the lungs 4 days after the RSV challenge by plaque assay. Data are shown with mean ± SEM. * and *** denote p < 0.05 and p < 0.005, respectively. Supplementary Materials: Figure S1: Assessment of anti-RSV IgG expression in neonatal mice; Figure S2: Protection against RSV 10 weeks following AdC7αRSV administration to neonatal mice. Author Contributions: S.W. and A.S. conceptualized the study; R.G. and W.W. performed experimental procedures. A.S. supervised the experiments. R.G. analyzed data and wrote the manuscript. S.W. reviewed and edited the manuscript. Funding: This research was funded by National Institutes of Health grant number R21 AI113801 to S.W.
7,644
2018-12-29T00:00:00.000
[ "Biology", "Medicine" ]
Quantum Dot-Driven Stabilization of Liquid-Crystalline Blue Phases Liquid crystals hosting nanoparticles comprise a fascinating research field, ranging from fundamental aspects of phase transitions to applications in optics and photonics. Liquid-crystalline phases exhibit topological defects that can be used for the assembly of nanoparticles in periodic arrays, and at the same time, the nanoparticles can increase the stability range of liquid-crystalline phases. This has been experimentally demonstrated over the past few years in the case of blue phases that are present in some strongly chiral liquid crystals. Experimental results on quantum dot-driven blue phase stabilization are presented here by means of high-resolution calorimetry and polarizing optical microscopy. It is demonstrated that quantum dots essentially stabilize the macroscopically amorphous blue phase III. Similarities and differences between the effects of spherical and anisotropic nanoparticles on blue phase stabilization are discussed; moreover, future prospects and trends in the field are addressed. INTRODUCTION Liquid crystals (LCs) form a fascinating class of soft materials exhibiting a plethora of mesophases between the isotropic liquid and the crystal phases. Discovered at the end of the 19th century [1], LCs made inroads into optical display technologies in the second half of the 20th century [2]. Upon reducing temperature, LCs undergo several phase transitions along which they progressively acquire long-range orientational and partial positional order. The competition between the incompatible structures of adjacent mesophases often results in the appearance of topological defects such as disclinations and screw and edge dislocations. In addition, LCs exhibit anisotropy in their elastic, dielectric, and optical properties. Hence, they form a multidisciplinary scaffold where fundamental science and envisioned applications meet [3][4][5][6][7]. The former consists of studies of quenched random disorder and inclusions on symmetry-breaking phase transitions [8][9][10][11], critical phenomena, and universality classes [12][13][14][15][16]. The latter is related to attempts at assembly and orientation of nanoparticles (NPs) [17], as well as to the search for tunable photonic crystals and lasers [18,19], soft magnetoelectrics [20][21][22], and metamaterials [23]. Moreover, liquid-crystalline cholesteric and blue phases comprise a testing ground for the study of active materials [24,25], with rapidly increasing interest in physics and biology [26]. Among the most interesting liquid-crystalline phases for applications in optics and photonics are the so-called blue phases (BPs). These phases are inherently present in some strongly chiral LCs and only within a narrow temperature range (in most cases from 1 to 3 K) between the isotropic (I) and chiral nematic (N*) phases. Three such phases have been identified, denoted as blue phase III (BPIII), blue phase II (BPII), and blue phase I (BPI) upon reducing temperature. The extension of the BP range as a function of increased chirality was demonstrated in phase diagrams of racemic-chiral mixtures [27]. Their thermodynamic stability was confirmed by identifying the distinct thermal signatures of the I-BPIII, BPIII-BPII, BPII-BPI, and BPI-N* transitions for cholesteryl nonanoate [28]. In the late 90s, the BP phase diagrams were revisited and the critical behavior as a function of chirality became well understood [29,30].
Regarding structure, BPI and BPII consist of double-twisted cylinders, with the director being parallel to the cylinder axis in the center and gradually changing from −45° to +45°, as schematically depicted in Figure 1. This structure emanates from a continuous competition between two factors: the chirality and the packing topology. The double-twisted cylinders are packed in such a way that BPII and BPI exhibit three-dimensional simple cubic and body-centered cubic defect lattices, respectively [31][32][33]. Between these cylinders, there is no LC molecular alignment, i.e., the cylinders coexist with lines of −1/2 disclinations, as seen in Figure 1. The structure of BPIII, in particular, remained elusive for many years [34]. Though it was initially considered to be locally similar to BPII [35], recent systematic theoretical work by Henrich et al. [36,37] yields a macroscopically amorphous network of disclination lines. These lines are interconnected in BPIII and BPII, whereas they do not intersect in the case of BPI. The cubic lattice of BPs, with periodicity at visible wavelengths, could be exploited toward the fabrication of tunable photonic crystals, as proposed by Etchegoin [38]. Follow-up work by Cao et al. [18] reported lasing in a three-dimensional photonic-bandgap BPII sample. Nevertheless, the narrow temperature stability range of BPs remained a long-standing, unsurpassed obstacle. The envisioning of applications revived the interest of the scientific community in exploring strategies to extend the BP temperature stability range. Kikuchi et al. [39] exploited bi-continuous phase separation phenomena in LC and polymer composites that, in the case of a BPI defect lattice, could drive the polymer chains into the space between the double-twist cylinders, i.e., assemble them along the disclination lines. Indeed, this work resulted in the first experimental demonstration of BP stabilization. Subsequent stabilization strategies were based on the mixing of chiral and non-chiral molecules [40] and fast quenching into super-cooled states [41,42]. The use of surface-functionalized NPs is a more recent, yet very effective, strategy toward expanding the temperature range of BPs. The first reports came out almost simultaneously for spherical Au nanoparticles dispersed in a LC/chiral dopant mixture [43] and CdSe quantum dots (QDs) [44,45] dispersed in single LC compounds. Subsequent studies used additional types of spherical NPs and QDs, varying the core composition and diameter, as well as the surface functionalization [46][47][48][49][50][51][52]. Apart from the stabilization effect, it has also been reported that NPs can improve the electro-optical performance of blue phase-based optical displays [53]. Experimental results are presented here on two mixtures of CdSxSe1−x quantum dots (QDs) dispersed in the chiral LC compound S-(+)-4-(2′-methylbutyl)phenyl 4′-n-octylbiphenyl-4-carboxylate, henceforth referred to as CE8. The thermal and optical properties of the mixtures have been investigated by means of high-resolution ac calorimetry and polarizing optical microscopy. It is shown that the presence of CdSSe increases the total BP range and especially promotes BPIII stabilization. In the succeeding sections, the materials and methods are presented, followed by the presentation of experimental results and a discussion with respect to other recent advances and trends.
MATERIALS AND METHODS CE8 of high purity has been purchased from Merck and exhibits all three BPs within a total range of 5 K [44]. QDs with a CdSxSe1−x core (where x = 0.5) have been synthesized at the National Center for Scientific Research "Demokritos" (Greece). They have a core diameter of 3.4 nm, and they are surface-functionalized with flexible oleyl amine and tri-octyl phosphine molecules [54]. The oleyl amine-based functionalization has proven very effective in obtaining high-quality dispersions of spherical and anisotropic NPs in LCs [44,45,[54][55][56][57]. Two mixtures of CE8 and CdSSe QDs have been prepared, with concentrations χ = 0.01 and χ = 0.05, where χ is defined as the ratio of the mass of QDs over the total sample mass (QDs and CE8). The mixing protocol has been described in previous studies [44,45]. High-resolution ac calorimetry and polarizing optical microscopy have been used to study the mixtures' properties. The calorimetric apparatus at the Jožef Stefan Institute (Slovenia) is home-made and fully automated. It achieves excellent thermal stability (better than 50 µK) and operates at slow scanning rates. This way, the samples are kept close to thermal equilibrium and the temperature profiles of the heat capacity Cp(T) are accurately derived. For calorimetric measurements, quantities of ∼30 mg of the mixtures are placed immediately after preparation in home-made, high-purity silver cells. A glass bead thermistor and a heater are attached to the cell prior to measurements. Polarizing optical microscopy yields the characteristic textures of liquid-crystalline phases. The apparatus at the National and Kapodistrian University of Athens (Greece) consists of a Leica DM2500P microscope equipped with a Leica DFG420 digital image-acquisition camera. An Instec HCS402 heating stage with temperature stability better than 10 mK is attached to the microscope, allowing for temperature scans. The samples were placed between glass slides at a thickness of 10 µm. The combination of ac calorimetry and polarizing optical microscopy provides a solid picture of the mixtures' phase transition behavior. By checking the thermal and optical properties with two methods, upon heating and cooling, the possibility that BPs are super-cooled, i.e., thermodynamically unstable, is ruled out. Moreover, calorimetry provides robust evidence that these phases exist in bulk (thick) samples and are not induced or stabilized by the interfaces in thin samples. The latter has been recently demonstrated by means of microscopy as a function of the cell thickness, the anchoring conditions, and the effective anchoring strength [58]. RESULTS The Cp(T) profiles of the two mixtures, χ = 0.01 and χ = 0.05, have been obtained at the same scanning rate of 0.25 K h⁻¹ upon cooling from the I down to the N* phase. They are shown in Figure 2 as the middle and top curves, respectively. At the bottom, the Cp(T) profile of pure CE8 [44] is shown over the same temperature range for comparison. The presence of CdSSe QDs has a strong impact on the phase transition behavior of the mixtures with respect to pure CE8. In particular, a widening of the total BP range is observed, from 5.0 K for CE8 to 7.2 K for χ = 0.05. Although the increase of the total BP range is moderate, an impressive increase of the BPIII range by a factor of six is found. All phase transitions appear suppressed in the presence of QDs. An interesting feature is the apparently stronger suppression of the I-BPIII anomaly for χ = 0.01 with respect to χ = 0.05.
This does not imply any abnormality regarding the enthalpic content of the phase transition behavior, taking into account that ac calorimetry in the conventional (so-called ac) mode of operation senses only the continuous part of the enthalpy change. In the case of first-order transitions (such as I-BPIII), latent heat is present and is the major contribution to the total enthalpy. The latter can in this case be measured by operating the calorimeter in a different, so-called relaxation or non-adiabatic scanning mode [59]. Note that ac runs are more precise for the determination of the transition temperatures, and they have been chosen for the construction of the temperature-concentration phase diagram. Nevertheless, only the size of the anomalies obtained by relaxation runs can be used for a comparison of the total enthalpic content of the first-order I-BPIII transition between pure CE8 and the mixtures. Such runs have not been performed, since the determination of enthalpy values is not the focus of the present work. Another interesting feature of the phase transition behavior is that BPII is already absent at χ = 0.01. The phase sequence I-BPIII-BPII-BPI-N* of pure CE8 has given way to I-BPIII-BPI-N* in the case of χ = 0.01. By additionally increasing the QD concentration to χ = 0.05, BPI also disappears and BPIII occupies the full temperature range of 7.2 K between the I and N* phases. Hence, the presence of CdSSe QDs strongly promotes the stabilization of BPIII and yields an I-BPIII-N* sequence, with an extended BPIII range. In order to further confirm the phase sequence derived by ac calorimetry, the optical textures have been examined upon sequential heating and cooling cycles. The temperature is slowly changed, using average scanning rates from 0.1 to 0.2 K min⁻¹. The images are captured at several temperatures in transmission mode and under crossed polarizers, for the χ = 0.01 mixture. Well-reproducible foggy blue textures of BPIII and vivid turquoise-green, large-size platelets attributed to BPI appear on both heating and cooling. Oily streaks, characteristic of the N* phase under planar anchoring conditions, are also found. All the aforementioned textures remain stable when leaving the sample for longer time scales at a fixed temperature. In Figure 3, the temperature-concentration (T-χ) phase diagram of the CE8 and CdSSe nanocomposites is presented, based on the combined results from calorimetry and microscopy, which agree well with each other. The three insets show the characteristic textures of the BPIII, BPI, and N* phases for χ = 0.01. The phase diagram clearly demonstrates that, by increasing the QD concentration, the range of the amorphous BPIII structure prevails over BPII and BPI. DISCUSSION This work shows that BPII and BPI gradually disappear and BPIII prevails, exhibiting a six-fold extended range in the presence of CdSSe QDs. BPIII has also been strongly stabilized in the case of CdSe QDs of almost identical size (3.5 nm) and similar surface treatment dispersed in CE8 [44], as well as in another chiral LC compound, CE6 [45]. In both these cases, BPII already disappeared at small concentrations of QDs (below χ = 0.02 for CE8 and below χ = 0.01 for CE6). However, BPI was mildly affected and remained present for higher QD concentrations than in the current work. In particular, in the case of CE8, the CdSe-driven stabilization reaches saturation at concentrations well above χ = 0.05 [44].
On the contrary, the CdSSe-driven stabilization effect of this work essentially saturates around χ = 0.01. With the LC compound being the same and the QD core size almost identical, the different impact must be related to density changes and surface functionalization. Indeed, as mentioned above, the composition of the QDs used here is CdSxSe1−x (where x = 0.5). In addition, oleyl amine binds to both Cd and S, whereas tri-octyl phosphine binds only to Se. Hence, the partial replacement of Se by S yields a larger coverage of the CdSSe surface with oleyl amine, combined with a reduced core density. The latter is reflected in the slower kinetics of CdSSe QDs in the LC volume, since they sense a more viscous medium with respect to their heavier CdSe counterparts. Hence, their trapping at the defect cores is slightly less effective and the overall stabilization milder. This reveals the great importance of NP chemistry, related to the modification of core composition and surface functionalization, for their trapping efficiency. The adaptive character of NPs is related to their size and coating; both play an important role in NP entrapment within the cores of defects. The trapping mechanism was originally proposed by Kikuchi et al. [39] for polymer-stabilized BPs, focusing on the energy gain when part of the defect volume is replaced by the guest (polymer) molecules. It was later generalized to NPs [44,60], focusing on the adaptive character of the latter, which should not significantly disrupt the surrounding LC ordering. This implies that the energy gain from the defect core replacement prevails over the energy cost of unfavorable boundary conditions at the interfaces between NPs and LC molecules [61]. Both CdSSe and CdSe QDs surface-functionalized with flexible molecules are highly adaptive to the defect lattices of LC structures. Apart from stabilizing BPs, they have also been shown to induce a stable twist-grain boundary phase, characterized by screw dislocations, in the cases of the CE8 [54] and CE6 LCs [61]. Au NPs with oleyl amine coating are also reported to stabilize BPIII and induce a twist-grain boundary phase in CE8 [54,62]. Note that the aforementioned mechanism for NP assembly in the defect cores could be extended by including the contribution of saddle-splay elasticity; preliminary efforts to create a more general phenomenological model can be found in our recent study [62], and additional work is in progress. A new theoretical approach has recently been proposed by Machon and Alexander [63] and by Selinger [64] for analyzing the director deformations in liquid-crystalline phases, based on splay, twist, bend, and saddle-splay contributions to the free energy. BP stabilization driven by spherical NPs is reported in several other studies in the literature. The NP cores are composed of ZnS, Ni, MnO2, Fe3O4, and SiO2, and the sizes range from 2 nm to over 100 nm [46][47][48][49][50][51][52]. The effect is mostly on LC materials exhibiting BPI and, in a few cases, BPII. According to theoretical predictions, BPI and BPII defect lattices could be used as matrices for the large-scale, stable assembly of large (up to 100 nm) NPs that should remain unaffected by thermal fluctuations [65]. Nevertheless, most of the existing size-dependent studies [46,48] suggest that the trapping becomes less effective and the stabilization effect milder as the NP size grows.
When the particle size becomes substantially larger, approaching the µm scale, the stabilizing effect may instead be associated with their assembly at the interfaces between platelets [66]. Moreover, it is worth investigating how the NPs affect the relaxation of a double-twist cylinder structure and whether larger sizes suppress the fluctuations, as theory suggests [65]. In the present ac calorimetric measurements, the frequency has been chosen to achieve a thermally thin sample, i.e., a sample without temperature gradients. On such slow timescales, the impact of QDs on the relaxation times of double-twist cylinder structures could not be assessed. The small spherical QDs of the present and previous studies [44,45] induce only a minor shift of the BPIII-I transition temperature to slightly higher or lower values (depending on the QD size, surface functionalization, and the LC host). By contrast, large anisotropic NPs, such as graphene, laponite, and MoS2 nanosheets, systematically upshift the BPIII-I transition temperature even at minute concentrations [55-57,67]. This trend, confirmed by other studies [68,69], is attributed to the ordering of LC molecules induced within the I phase by the large surfaces of the nanosheets. It also explains why large anisotropic NPs with similar surface functionalization promote the stabilization of the more ordered BPI structure over the less ordered, macroscopically amorphous BPIII [55-57,67].

SUMMARY

Soft nanocomposites of LCs and NPs constitute an exciting field of ongoing research. BPs hold a prominent position among liquid-crystalline phases due to their high potential for applications [70,71]. The defect lattices could be used for the assembly of NPs in regular templates or tuned by external fields [37,50,71]. Among various approaches, nanoparticle-driven BP stabilization has attracted considerable interest over the last 10 years. Spherical quantum dots and nanoparticles, as well as anisotropic nanosheets, have been tested as stabilizing agents. In this study, we have shown that small spherical QDs, surface-functionalized with flexible molecules, extend the total blue phase range of CE8 and strongly increase the stability of BPIII. The outcomes of this work have been compared to other studies on QDs and on other spherical and anisotropic NPs. The role of chemistry is very important, since even moderate changes in the core composition and surface functionalization [56] have a noticeable impact on the stabilization effect. The results obtained so far have revealed certain trends, as well as the key mechanisms behind BP stabilization. These mechanisms could be reformulated to explicitly include the saddle-splay contribution, using recently proposed mathematical formulations [63,64]. It is anticipated that additional studies, both experimental and theoretical, will clarify the precise role of nanoparticle size, shape, core density, and coating.

DATA AVAILABILITY STATEMENT

The datasets generated for this study are available on request to the corresponding author.
4,387.6
2020-08-31T00:00:00.000
[ "Physics", "Materials Science" ]
Thermal Effects and Small Signal Modulation of 1.3-μm InAs/GaAs Self-Assembled Quantum-Dot Lasers

We investigate the influence of thermal effects on the high-speed performance of 1.3-μm InAs/GaAs quantum-dot lasers in a wide temperature range (5-50°C). Ridge waveguide devices with 1.1 mm cavity length exhibit small signal modulation bandwidths of 7.51 GHz at 5°C and 3.98 GHz at 50°C. The temperature-dependent K-factor, differential gain, and gain compression factor are studied. While the intrinsic damping-limited modulation bandwidth is as high as 23 GHz, the actual modulation bandwidth is limited by carrier thermalization under continuous wave operation. Saturation of the resonance frequency was found to be the result of a thermal reduction in the differential gain, which may originate from carrier thermalization.

Introduction

High-temperature stability in laser operation is an essential characteristic required for long-wavelength semiconductor lasers in optical communication systems. Realization of uncooled high-speed operation of 1.3-μm quantum-dot (QD) lasers has attracted intensive research interest due to its application in optical communication. Over the past decade, promising dynamic properties of QDs, such as large differential gain, high cut-off frequency, and small chirp, were reported in devices with emission wavelengths below 1.2 μm [1]. Improved temperature characteristics of QD lasers, such as temperature-invariant threshold current [1], high characteristic temperature (T0) [2], and linewidth enhancement factor [3], have been realized through the p-doping technique. However, quantum dots emitting at 1.3 μm and above have not fulfilled the initial expectation of improved temperature-insensitive modulation bandwidths, which have largely remained below 12 GHz [4]. With the increase in QD size and the strain effect of the cap layer, self-assembled (SA) InAs/GaAs QDs can emit at 1.3 μm. The energy levels are still discrete. However, the number of energy levels increases and the level separation, especially for holes, becomes much narrower (8-11 meV for holes) than in short-wavelength QDs. This results in significant hole thermalization [5]. Other problems reported in the 1.3-μm SA QDs include the finite GaAs barrier and thin wetting layer [6]. These disadvantages consequently lead to the temperature-sensitive performance observed in 1.3-μm QD lasers, such as the low characteristic temperature at or above room temperature [7] and the strongly temperature-dependent maximum gain. Fiore et al. [8] have studied the effects of intradot relaxation on the K-factor and differential gain of quantum-dot lasers. Deppe et al. [9] have reported the role of the density of states, especially thermalization of holes due to their closely spaced discrete energy levels. This limits the modulation speed of QDs with deep confinement potentials, such as the 1.3-μm InAs/GaAs QDs. Many theoretical [8,9] and experimental [3,10,11] investigations have been performed to study the bandwidth limitations in long-wavelength QD lasers. According to these investigations, the K-factor [8,11] has been recognized as one of the limiting factors for the modulation bandwidth of QD lasers, which accounts for the effect of photon lifetime, differential gain, and nonlinear gain compression factor.
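For reference, the relations behind this statement are commonly written (in standard laser-dynamics notation, which we adopt here as an assumption rather than from this paper) as

\[
\gamma = K f_r^{2} + \gamma_0, \qquad K = 4\pi^{2}\left(\tau_p + \frac{\varepsilon}{v_g\,(\partial g/\partial n)}\right),
\]

where γ is the damping rate, f_r the resonance frequency, τ_p the photon lifetime, v_g the group velocity, ∂g/∂n the differential gain, ε the gain compression factor, and γ_0 the damping offset; a larger K thus directly lowers the achievable damping-limited bandwidth.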
Despite the theoretical and experimental investigations of the effect of differential gain on the DC performance of 1.3-μm QD lasers and of directly modulated uncooled 1.3-μm QD lasers [12], the effect of carrier thermalization on the high-speed performance of 1.3-μm QD lasers has not been analyzed systematically. Clearly, the modulation speed (or bandwidth) of 1.3-μm QD lasers should be temperature-dependent due to the temperature-sensitive gain profile of QDs. As there are few investigations of the effect of temperature on the bandwidth of 1.3-μm QD lasers, a study of this aspect will provide greater understanding of the differential gain and carrier dynamics in long-wavelength QD lasers. In this paper, we investigate the influence of thermal effects on the high-speed modulation characteristics of 1.3-μm InAs/GaAs QDs by studying the temperature-dependent small signal modulation behavior. The effects of temperature on the K-factor, differential gain, and nonlinear gain compression are presented.

Experimental Details

The ten-layer self-assembled InAs/GaAs QD laser structure, shown in Figure 1, was grown on a GaAs (100) substrate by molecular beam epitaxy (MBE). The structure consists of a QD active region sandwiched between two 1.5-μm C- and Si-doped Al0.35Ga0.65As cladding layers. The active layer comprises 2.3 monolayers (ML) of InAs QDs capped by a 5-nm In0.15Ga0.85As layer. A 33-nm GaAs layer is used to separate adjacent QD layers [13]. The wafer was processed into 4-μm-wide ridge waveguide (RWG) lasers by a standard photolithography process and wet chemical etching at room temperature (RT) [14]. A ridge height of approximately 1.3 μm was obtained before the pulsed anodic oxidation (PAO) process. A 200 ± 5 nm-thick oxide layer was formed by the PAO method, whose experimental setup can be found in [15]. Subsequently, p-contact layers (Ti/Au, 50/300 nm) were deposited by electron beam evaporation, while n-contact layers (Ni/Ge/Au/Ni/Au, 5/20/100/25/300 nm) were deposited on the backside of the substrate following lapping down to ~100 μm. Finally, the wafer was cleaved into laser bars and the cleaved facets were left uncoated. The devices were mounted p-side down on a heat sink for measuring the small signal modulation characteristics. The small signal modulation response of the QD lasers was measured under continuous wave (CW) biasing conditions using a vector network analyzer (VNA), a high-speed photoreceiver, and a laser diode current source. A thermoelectric temperature controller was used to regulate and monitor the device temperature during measurements.

Results and Discussion

The measured CW power-current performance of a device with a cavity length of 1.1 mm shows that the threshold current (I_th) and slope efficiency are 55 mA and 0.27 W/A at room temperature, respectively. A maximum output power of 96 mW occurred at an injection current of 395 mA. Figure 2 shows the lasing spectrum of the laser device at an injection current of 100 mA at RT for verification. The lasing wavelength is centered at 1,306.5 nm. Furthermore, no lasing at the excited state was observed. The characteristic temperature T0 is around 41 K from 5 to 50°C. The small signal modulation response under different injection current levels is shown in Figure 3. At room temperature, the highest bandwidth of 6.1 GHz was obtained at an injection current level of 390 mA. For injection currents above 395 mA, the resonance frequency f_r decreases with increasing injection current.
This is because there are two competing factors affecting the resonance frequency: (1) an increase in resonance frequency with injection current and (2) a decrease in resonance frequency due to internal heating. Therefore, when the injection current increases beyond 395 mA, the internal heating resulting from the increased current becomes dominant and leads to the decrease in resonance frequency. The small signal modulation response was further fitted to a transfer function that accounts for the intrinsic response of the laser as well as the extrinsic effects [16]:

|H(f)|^2 = f_r^4 / [(f_r^2 - f^2)^2 + (γ f / 2π)^2] × 1 / [1 + (f/f_p)^2]    (1)

where f_r is the resonance frequency, γ is the damping rate, and f_p is the parasitic cut-off frequency. From the fitting, we obtained values of the damping rate γ and resonance frequency f_r at different bias currents. The parasitic cut-off frequency is almost temperature-independent and restricts the bandwidth only minimally. From the plot of f_r vs. the square root of the normalized bias current, (I - I_th)^1/2, the slope (known as the D-factor or modulation efficiency) is obtained to be 0.28 GHz/mA^1/2 at RT. The relationship between resonance frequency and damping rate, γ = K f_r^2 + γ_0, defines the K-factor, which is 0.83 ns at RT. Furthermore, the K-factor is directly related to the damping-limited bandwidth (f_3dB,damping) by [16]:

f_3dB,damping = 2√2 π / K    (2)

The internal quantum efficiency (η_i) and internal optical loss (α_i) of the devices were estimated to be 51% and 4 cm^-1 by measuring lasers with different cavity lengths (1-3 mm) [17,18]. The internal quantum efficiency and internal optical loss exhibit a weak dependence on temperature. With the values of internal quantum efficiency and internal optical loss, the differential gain (dg/dn) and nonlinear gain compression factor (ε) are extracted. The derivative of the gain with respect to the carrier population defines the differential gain, while the nonlinear gain compression factor describes the dependence of the gain on the photon density. From the value of the D-factor, the differential gain is obtained to be 11.1 × 10^-15 cm^2 at RT, which is almost ten times higher than that reported in the literature [19] (differential gain of 1 × 10^-15 cm^2 at 300 K for a device emitting at 1,263 nm). The nonlinear gain compression factor is determined to be 12 × 10^-16 cm^3 at RT. Note that these results differ from those reported recently [20]. We believe that the differences are due to the different device dimensions considered, since the performance depends strongly on the device dimensions [21,22]. Measurements of direct small signal modulation of the QD laser were carried out from 5 to 50°C. Figure 4 shows the maximum measured bandwidth f_3dB,measured (triangles) as a function of temperature. The maximum measured bandwidth decreases almost linearly as the temperature increases from 5 to 50°C. The highest f_3dB,measured of 7.51 GHz occurred at 5°C. The D-factor is 0.36 GHz/mA^1/2 at 5°C and 0.15 GHz/mA^1/2 at 50°C, as shown in Figure 5 (solid circles). The differential gain decreases with increasing temperature from 5 to 50°C, as shown in Figure 6. Figure 7 shows the calculated K-factor of the QD laser as a function of temperature. There is a significant increase in the K-factor as temperature increases: it grows approximately by a factor of three over the temperature range of 5-50°C. From Eq. (2), the f_3dB,damping of the QD laser is 23 GHz at 5°C and 8.9 GHz at 50°C, which is limited by the carrier-capture time and modal gain via the K-factor [11,23].
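A minimal sketch of this extraction chain, assuming synthetic placeholder data rather than the authors' VNA traces (the arrays, seed values, and bias points below are illustrative only):

```python
import numpy as np
from scipy.optimize import curve_fit

def response_db(f, fr, gamma, fp):
    """|H(f)|^2 in dB: two-pole intrinsic laser response times a parasitic pole.
    f, fr, fp in GHz; gamma in 1/ns (numerically compatible with GHz)."""
    intrinsic = fr**4 / ((fr**2 - f**2)**2 + (gamma * f / (2 * np.pi))**2)
    parasitic = 1.0 / (1.0 + (f / fp)**2)
    return 10 * np.log10(intrinsic * parasitic)

# Placeholder "measured" trace at one bias current; a real VNA trace
# would be substituted here.
f = np.linspace(0.1, 12, 200)
meas = response_db(f, fr=4.0, gamma=15.0, fp=20.0) + np.random.normal(0, 0.2, f.size)
(fr, gamma, fp), _ = curve_fit(response_db, f, meas, p0=[3.0, 10.0, 15.0])

# Repeating the fit at several bias currents yields arrays fr_all, gamma_all.
# K-factor: slope of damping rate versus fr^2 (gamma = K*fr^2 + gamma_0).
fr_all = np.array([2.1, 2.9, 3.5, 4.0])          # GHz, placeholder values
gamma_all = np.array([8.0, 12.2, 15.1, 17.9])    # 1/ns, placeholder values
K, gamma0 = np.polyfit(fr_all**2, gamma_all, 1)  # K comes out in ns

# D-factor: slope of fr versus sqrt(I - Ith).
I = np.array([100.0, 150.0, 200.0, 250.0])       # mA, placeholder values
Ith = 55.0                                       # mA, from the L-I curve
D, _ = np.polyfit(np.sqrt(I - Ith), fr_all, 1)   # GHz / mA^(1/2)

print(f"K = {K:.2f} ns -> damping-limited f3dB = {2*np.sqrt(2)*np.pi/K:.1f} GHz")
print(f"D = {D:.2f} GHz/mA^0.5")
```

In practice the fit of Eq. (1) is repeated at each bias current, and K and D then follow from simple linear fits, which is what this sketch mimics.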
These damping-limited values show that 1.3-μm InAs/GaAs QD lasers can potentially operate at very high frequencies. However, our experimental data show a much lower bandwidth. This can be attributed to thermal effects, i.e., the thermal saturation of the photon number (S_0) at the roll-over injection current due to self-heating. It could also be caused by significant hole thermalization due to the closely spaced hole levels in 1.3-μm InAs QDs. Considering the dependence of the bandwidth on the resonance frequency, this suggests that the saturation of the bandwidth is caused by saturation of the photon density. Saturation of the photon density could possibly be caused by strong gain compression. However, the nonlinear gain compression factor is of the order of 10^-16 cm^3 and shows a relatively weak dependence on temperature [refer to Figure 5 (hollow circles)]. The ε · S_0 product is less than 0.1, which suggests that the effect of the gain compression on the resonance frequency is relatively small. For relatively small damping, the thermally limited bandwidth (f_3dB,thermal) is related to f_r by [16]:

f_3dB,thermal ≈ 1.55 f_r,max    (3)

where f_r,max is the maximum resonance frequency at a constant temperature. The f_r,max of 6.6 GHz at 5°C and 2.5 GHz at 50°C would give thermally limited bandwidths of 10 GHz and 3.9 GHz (squares in Figure 4), respectively. This suggests that the main limitation on the bandwidth might be the decrease in differential gain, which may result from thermal effects related to carrier thermalization in the multi-stack quantum dots. The origin of the temperature-dependent differential gain is currently under investigation. The incorporation of p-type modulation doping and tunnel injection might be useful to improve the QD laser performance by reducing the thermal effects. Finally, the calculated intrinsic damping-limited bandwidth (squares) and thermal-limited bandwidth (circles) are shown in Figure 4 in comparison with the experimental results f_3dB,measured (triangles). The thermal-limited f_3dB,thermal is in close agreement with the experimental results, indicating that the bandwidth measured in this study was limited by thermal effects.

Conclusion

In conclusion, we have studied the influence of thermal effects on the small signal modulation characteristics of undoped InAs/GaAs QD lasers. The role of the temperature-dependent differential gain and nonlinear gain compression factor in determining the frequency bandwidth was investigated. Calculation of the temperature-dependent bandwidth of the undoped QD laser shows close agreement between the thermal-limited bandwidth and the measurement results. The bandwidth of the undoped InAs/GaAs QD lasers is mainly limited by thermal effects, which may result from carrier thermalization in the undoped QD laser structure.
2,885.4
2010-09-26T00:00:00.000
[ "Physics", "Materials Science" ]
Fissile material detection using neutron time-correlations from photofission

The detection of special nuclear materials (SNM) in commercial cargoes is a major objective in the field of nuclear security. In this work we investigate the use of two-neutron time-correlations from photofission using the Prompt Neutrons from Photofission (PNPF) detectors in Passport Systems Inc.'s (PSI) Shielded Nuclear Alarm Resolution (SNAR) platform for the purpose of detecting ∼5 kg quantities of fissionable materials in seconds. The goal of this effort was to extend the secondary scan mode of this system to differentiate fissile materials, such as highly enriched uranium, from fissionable materials, such as low enriched and depleted uranium (LEU and DU). Experiments were performed using a variety of material samples, and data were analyzed using the variance-over-mean technique referred to as Y_2F or Feynman-α. Results were compared to computational models to improve our ability to predict system performance for distinguishing fissile materials. Simulations were then combined with empirical formulas to generate receiver operating characteristic (ROC) curves for a variety of shielding scenarios. We show that a 10 second screening with a 200 µA, 9 MeV X-ray beam is sufficient to differentiate kilogram quantities of HEU from DU in various shielding scenarios in a standard cargo container.

I. INTRODUCTION

Fissile materials are materials which, due to their nuclear structure, allow for sustained fission chains. The two most common isotopes that form the basis of fissile materials are U-235 and Pu-239. Methods for detecting the presence of fissile materials support the goals of national and international nuclear non-proliferation programs. An extensive literature exists on potential identification schemes using active interrogation, including neutron probes with delayed and prompt neutron signals [1-5], photon probes with delayed neutron signals [6-12], photon and neutron probes with delayed neutron signals [13-15], and photon probes with fission product radiation [16,17]. A general review of the various concepts can be found in Ref. [18].
All of these methods use single-particle signals. Multi-particle schemes, usually two- and three-neutron signals, have also been pursued, with photon [19,20] and neutron [21-25] beams as well as in passive interrogation [26-32], where ambient radiation is used to induce the signals. Here we report on an active interrogation method using 9 MeV Bremsstrahlung photons to induce the emission of time-correlated prompt neutrons, measured with the Prompt Neutrons from Photofission (PNPF) system developed at Passport Systems Inc. (PSI) [33-35].

II. TIME-CORRELATION OF FISSION-CHAIN NEUTRONS

A fission chain reaction occurs in fissile material when the neutrons from each fission diffuse through the material and induce subsequent fissions in other nuclei. The neutrons emitted from this process are highly correlated, resulting in neutron count distributions that deviate significantly from the Poisson distributions produced by random neutron events. The theory of fission-chain correlations was initially developed by Richard Feynman while at Los Alamos [36]. The aim of that research was to describe the neutron fluctuations in a reactor pile where the measured neutrons originate from fission chains and random decays. To measure deviations from the Poisson expectation, Feynman defined a normalized second moment,

Y_2F = (⟨n²⟩ − ⟨n⟩² − ⟨n⟩) / ⟨n⟩,    (1)

where n is the measured neutron count per unit time, and ⟨n⟩ and ⟨n²⟩ represent averages over a series of time-gates. Subtracting the mean from the variance enforces Y_2F = 0 for the special case of a Poisson distribution, and division by the mean ensures that the quantity is independent of rate. The subscript "2F", not used in the original paper, has been added by others to credit Feynman and to denote that this quantity is derived from the second moment. This statistic has been used extensively in the study of nuclear reactor cores [37-41], and it has been extended for time-varying sources of the fluctuations, e.g., in accelerator-driven cores [42-48]. In these systems the number of neutrons generated spontaneously from radioactive decay greatly exceeds those generated by the introduced beam. For neutron-induced fission chains from an idealized point source, a time-dependent expression for Y_2F has been developed by Snyderman and Prasad [54-56], of the form

Y_2F(T) = 2 ε ν₂ M² [1 − (1 − e^(−λT)) / (λT)],    (2)

where
λ = inverse of the neutron correlation time, a convolution of fission-chain and neutron transit times,
T = time-gate during which neutrons are counted,
ε = neutron detection efficiency,
M = neutron multiplication, 1/(1−k), where k = pν is k-effective and p is the neutron-induced fission probability,
ν = the first moment, equal to the mean number for the neutron distribution from a single induced fission,
ν₂ = the second combinatorial moment, half the variance for the neutron distribution from a single fission.

Eq. 2 illustrates the utility of Y_2F, which increases with the square of the multiplication and is therefore sensitive to the enrichment if the geometry of the object and the efficiency of the detector are known. One can construct higher-order moments (i.e., Y_3F, Y_4F) that depend on higher orders of the multiplication, but a corresponding dependence on higher orders of the efficiency makes these statistics more suitable for large-acceptance detectors and/or long integration times due to the reduced coincidence rates. It is also important to note that Eq. 2 is appropriate for neutron beams.
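A minimal sketch of how Y_2F is computed from a time-ordered list of neutron detection times, assuming synthetic Poisson data for illustration (the rates, gate width, and array sizes are placeholders, not values from this work):

```python
import numpy as np

def feynman_y(timestamps, gate_width):
    """Variance-over-mean statistic Y_2F = (var(n) - mean(n)) / mean(n),
    where n is the neutron count in non-overlapping time-gates of width
    gate_width (same unit as timestamps)."""
    t = np.sort(np.asarray(timestamps))
    edges = np.arange(t[0], t[-1] + gate_width, gate_width)
    counts, _ = np.histogram(t, bins=edges)
    mean = counts.mean()
    return (counts.var() - mean) / mean

# An uncorrelated (Poisson) stream should give Y_2F ~ 0; correlated
# fission-chain neutrons cluster in time and give Y_2F > 0.
rng = np.random.default_rng(0)
poisson_times = np.cumsum(rng.exponential(1e-6, 10_000))  # ~1 MHz stream, in s
print(feynman_y(poisson_times, gate_width=100e-9))         # ~0 for 100 ns gates
```

Sweeping gate_width reproduces the characteristic rise of Y_2F with T seen in Eq. 2 when the input stream contains genuine time-correlations.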
In Sec. V we will modify this expression for photon-induced fission chains in extended objects.

III. EXPERIMENTAL SETUP

To study the feasibility of using photo-induced fission chains to identify and characterize fissile materials, we use a subset of the components of the PSI SNAR platform. The photon beam is generated from electron bremsstrahlung on a water-cooled radiator. The electron current is adjustable and uniform with a high duty-cycle. For all experiments a 9 MeV electron kinetic energy was used. The beam currents are adjustable from 100 to 500 µA. For the results presented here, the beam current was set to 200 µA to avoid the non-Poisson effects of data acquisition (DAQ) dead-time losses. This corresponds to a rate of 2 × 10^12 Hz for photons with energies between 2 and 9 MeV, incident on an approximately 10 × 10 cm square spot at the target location. The photon beam energy spectrum for the 9 MeV Bremsstrahlung radiator is shown in Fig. 1. The full SNAR configuration is shown in Fig. 2. The PNPF detectors consist of EJ-309 5-inch-diameter liquid scintillator detectors coupled to 5" Hamamatsu photomultiplier tubes, arranged in two sets of 2 × 8 arrays placed on opposite sides of the cargo container. The PNPF arrays have ~5 cm thick high-density polyethylene (HDPE) inserts placed between the detectors to reduce adjacent-detector cross-talk that can lead to an artificial Y_2F signal. Before-and-after tests show that these inserts reduce the cross-talk component by a factor of four. Measurements of Y_2F were made for a set of objects designed to study the relationship between Y_2F and neutron multiplication. These objects, listed in Table 1, were constructed from discs of depleted uranium (DU) and highly enriched uranium (HEU), and blocks of low enriched uranium (LEU). The objects were arranged in geometries chosen to achieve a range of higher multiplication values. Fig. 3 shows the stacking arrangement for the three HEU discs. The highest multiplication values were achieved with an interleaved stack of LEU and HEU with HDPE moderators placed above and below the stack to provide neutron reflection. Multiplication values were calculated using MCNP 6.2 [57] run in kcode mode. The control benchmark was obtained with a beryllium block, in order to generate a large quantity of uncorrelated photo-neutrons anticipated to have no measurable Y_2F value. Data were collected in a series of 600-second exposures, which were then combined for each object during the data analysis. Approximately ten exposures were collected for each uranium configuration, with additional exposures taken for the beryllium object. Table 1 lists the objects exposed to the 9 MeV Bremsstrahlung beam within the Passport Systems SNAR facility.

A. Neutron Identification

The main component of the detection system consists of an array of EJ-309 scintillators coupled to Hamamatsu photomultiplier tubes (PMT). The choice of the PMTs was based on their < 3 ns rise time, thus retaining sensitivity to the difference between the two main fast components of the light production from triplet-triplet annihilation. The increase in triplet-triplet annihilation due to the much larger ionization density of neutron-recoiled proton tracks results in a distinct delay in light output. Neutrons can thus be distinguished from photons by their relative fractions of fast and slow light output using various pulse shape discrimination (PSD) techniques, and their deposited energies can be quantified.
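A hedged sketch of the charge-comparison flavor of such PSD (the PNPF system's actual gates and pile-up rejection [35] are more sophisticated; the window lengths and waveform below are illustrative assumptions):

```python
import numpy as np

def psd_tail_fraction(pulse, t_peak, short_gate=20, long_gate=120):
    """Charge-comparison PSD discriminant: tail integral / total integral.
    pulse      -- baseline-subtracted waveform samples
    t_peak     -- sample index of the pulse maximum
    short_gate -- samples after the peak where the tail window starts
    long_gate  -- samples after the peak where both windows end
    Proton-recoil (neutron) pulses carry more delayed light, so their
    tail fraction exceeds that of electron-recoil (photon) pulses."""
    total = pulse[max(t_peak - 5, 0) : t_peak + long_gate].sum()
    tail = pulse[t_peak + short_gate : t_peak + long_gate].sum()
    return tail / total

# Toy waveform with a fast and a slow scintillation component:
t = np.arange(300)
pulse = np.where(t >= 10,
                 np.exp(-(t - 10) / 10.0) + 0.05 * np.exp(-(t - 10) / 80.0),
                 0.0)
print(psd_tail_fraction(pulse, t_peak=10))
```

Histogramming this discriminant against deposited energy and fitting each energy slice with two Gaussians, as described next, yields the neutron/photon separation cut.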
The left panel of Fig. 4 shows a histogram of PSD vs. deposited energy, in electron equivalents, for a set of detector events. The distribution is then split into individual energy bins, and the PSD distribution for every energy bin is fitted with two Gaussians, as can be seen in the right panel of Fig. 4, which shows the 2D cut used to separate neutrons from photons. However, this is not sufficient to suppress the very large photon population created by the intense 200 µA photon beam. Therefore, more sophisticated pulse shape discrimination algorithms were developed to reject photon pile-up, which would otherwise contribute a significant background to the neutron population [35]. Overall, the combined pulse shape analysis allows a suppression of the photon population by a factor of 10^7, limiting the beam-on background to the cosmogenic neutrons only.

B. Multiplicity Analysis of Neutron Events

The experimental data were collected in a series of 600-second exposures, with some exposures ending prematurely due to disruptions within the data acquisition system caused by the extended run times and the need for data synchronization. The identified neutrons in each run were arranged in time-ordered sequence to facilitate binning in time-gates of varying duration. For clarity, we rewrite Eq. 1 above with explicit averages,

Y_2F = [Σ_n n² b_n − (Σ_n n b_n)² − Σ_n n b_n] / Σ_n n b_n,    (3)

where the sums are taken over sequential time-gates, n refers to the bin number corresponding to the neutrons detected within a specific time-gate, and b_n is the normalized probability for detecting n neutrons. The errors are calculated using the full covariance matrix for a multinomial distribution [58]. The Y_2F values and errors for each exposure were calculated separately and checked for outliers before combining the exposures to calculate a final value and error for each object. The Y_2F measurements are listed in Table 2 and plotted in Fig. 5 along with a set of corrected Y_2F values that will be discussed in the next section. We note that the uncorrected values illustrate the time dependence expected from Eq. 2. One can also see from the uncorrected beryllium measurements that a significant non-fission component contributes to the uncorrected Y_2F values.

C. Cross-talk Cuts and Corrections

Neutrons that deposit energy in more than one detector produce a correlated signal that contributes to the Y_2F term. This effect is shown clearly in Fig. 6. The left panel shows the distribution of pairs of neutron hits for one DU run as a function of the coincidence time and channel separation, for channels in the same array column. Adjacent channels within a single column are separated by a distance of 18.5 cm. For ∆chan = 1, 90% of these channel pairs are separated by this distance, with the remaining 10% separated by a distance of 2 meters or greater. For ∆chan = 2, 80% are separated by a distance of 37 cm. The corresponding distribution for a GEANT4 [59] simulation is shown on the right. The DAQ trigger logic rejects new signals within 256 ns after a given trigger, leaving zero entries in the ∆chan = 0 row of Fig. 6 and creating the empty bins in the top row, which is obscured from view. A significant cross-talk effect is evident in the vertically adjacent and next-nearest-neighbor channels (∆chan = 1, 2). These enhancements are also visible in the GEANT4 simulation shown in the right panel. Additional time-dependent structure in the data is attributed to the neutron correlations, which were not included in this simulation.
We have corrected for this cross-talk effect in adjacent channels by fitting the time dependence to a linear function at larger times and extrapolating into the cross-talk region, as shown in the left panel of Fig. 7. The fit was performed for time differences greater than 100 ns, and the integral of the fit was used to calculate the expected signal in the absence of cross-talk. A similar analysis was performed for next-nearest channels. The ratio of actual counts to extrapolated counts for adjacent and next-nearest channels is shown in the right panel of Fig. 7. Note that for ∆chan = 2 the correction returns to unity for times smaller than the transit time between next-nearest-neighbor channels. Note also that this correction was only applied for time differences less than 100 ns, and was not used in the regions above, where deviations from unity are dominated by statistical fluctuations. A cross-talk correction was also calculated for the horizontal neighbors separated by 2 meters (∆chan = 8) and was found to be negligible. The Y_2F values corrected for cross-talk are listed in the second row for each time-gate in Table 2 and plotted as solid symbols in Fig. 5. Note that the corrected Y_2F values for beryllium are consistent with zero, as expected. The difference between the Y_2F values for the DU and HEU stacked discs, already visible before the application of the correction, is still easily discernible in Fig. 5. Although the Y_2F difference grows larger with increasing time-gate, the most significant difference for this measurement, approaching 5-sigma, occurs in the region of 50-100 ns. A time-gate of 100 ns is therefore applied in the analysis of all subsequent measurements.

V. RESULTS AND SIMULATION BENCHMARKS

The Y_2F values with 100 ns time-gates and cross-talk corrections for the complete set of objects listed in Table 1 are plotted in Fig. 8; the values increase for the composite systems. These results are compared to MCNP simulations of the PSI beam incident on the fissile objects. The detector acceptance was approximated by applying a ±15-degree angular cut above and below the horizontal. The angular cut matches the angle subtended by the PNPF detector array, and it is required to account for the polar-angle dependence of the neutron emission rates, which varies with the object and is strongest for the LEU+HEU+HDPE composite objects. A minimum energy cut of 1.5 MeV is also applied to the neutrons to approximate the online energy cut implemented in the DAQ. Neutrons within this acceptance region were used to calculate Y_2F in the same manner as the data, using 100 ns time-gates, but without the need for cross-talk corrections. A geometric efficiency factor of 0.25% was applied to the simulated Y_2F values. This value was determined empirically and is comparable to the 0.243% geometric detector efficiency calculated in GEANT4. The MCNP results reproduce the overall trends of the data, although the simulations under-predict the decrease for DU and over-predict the increase for the LEU+HEU composite object. The highest-multiplication objects, consisting of the interleaved LEU and HEU discs with HDPE moderation, are reasonably well reproduced by the MCNP simulations. This complex dependence of the Y_2F values, observed in the data and qualitatively reproduced by the simulation, led us to reconsider some of the assumptions within Eq. 2. One important modification is to account for the fact that in photon-induced fission chains, the initial fission differs from the subsequent neutron-induced fissions in the chain.
This is analogous to the modification required for spontaneous fission chains developed by Prasad and Snyderman [55]. The initial photo-fission is especially important for small multiplication values, where the neutrons from the initial fission form a non-negligible contribution to Y_2F. We introduce ν_γ for the mean number of neutrons produced in a photo-fission, and ν_γ2 for the corresponding half-variance. We also account for the non-fission photo-production and absorption in the extended object by adding the term f_o for the fraction of neutrons emitted from fission over the total number of neutrons produced by all nuclear reactions, and ε_o for the neutron absorption within the object. The neutron detection efficiency is denoted separately as ε_d. Accounting for these additional factors yields a generalized expression for Y_2F for photon-induced fissions in an extended object (Eq. 5). The mean number of neutrons per photo-fission, ν_γ, has been measured by Caldwell et al. [60] and evaluated by Chadwick [61]; however, the value of ν_γ2 has not yet been measured. To estimate this value within the simulation, a separate simulation of the MCNP photo-fission package was performed using the PSI energy spectrum as input. The mean and half-variance values for photon-induced fissions obtained in this way are listed in Table 3, together with the corresponding values for neutron-induced fissions as reported by Zucker and Holden [62].

Table 3. First and second combinatorial moments for photon- and neutron-induced fission for various levels of uranium enrichment. Neutron-induced moments are calculated from the Holden and Zucker tables for 1-2 MeV neutrons reported in the documentation for the MCNP fission package. Photon-induced moments are calculated from the photo-fission model using the PSI beam energy spectrum.

To compare our simulations to Eq. 5, which does not include energy dependence, we remove all energy cuts from the analysis, and we lengthen the Y_2F time-gate for this simulation analysis to 1 ms, to encompass the correlation time for all neutrons, including thermals. The values for f_o and ε_o were taken from the simulations, and a perfect detector efficiency (ε_d = 1) was assumed. The full comparison between this MCNP simulation and Eq. 5 is shown in Fig. 9. Eq. 5 accounts for the full multiplication dependence observed in the simulations, but consistently under-predicts the Y_2F values by 20%. One possible explanation for this discrepancy is the lack of energy-dependent terms in Eq. 5. It is also possible that our estimate for the unmeasured ν_γ2 differs from the value used in the full simulation. The Y_2F values can be very sensitive to the value of ν_γ2 for objects with low multiplication. To illustrate this sensitivity, we vary the value of ν_γ2 by ±10%, as shown by the upper and lower bands.

VI. SYSTEM PERFORMANCE PREDICTIONS

With the MCNP simulations benchmarked by experimental results and qualitatively supported by the underlying theory, additional simulations of shorter exposures with larger masses were performed to predict the PNPF system performance for distinguishing between DU and HEU in shielded configurations. For this study we performed simulations for a set of 24 solid spheres of DU and HEU spanning a range of masses from 10 grams up to 50 kilograms. The masses, radii, and multiplication values are given in Table 4. MCNP simulations were performed for these objects using a 200 µA PSI beam and a 10-second exposure.
The emitted neutrons were collected over the entire acceptance and multiplied by the overall geometric efficiency factor of 0.25% to match the experiment. A lower minimum energy cut of E > 1 MeV was used, based upon potential improvements to the neutron identification, and a nominal time-gate of 100 ns was used for the calculated Y_2F values. Although the Y_2F values are calculated directly from the simulations, we chose to use experimental data to extrapolate the errors. Fig. 10 shows a uniform, linear dependence of the Y_2F error on N_counts^(-1/2), independent of object type. The data are well fit by

σ(Y_2F) = a + b · N_counts^(-1/2),    (6)

with fitted constants a > 0 and b. We note that this formula does not extrapolate to zero error in the limit of large counts, consistent with the fact that the errors are not strictly Poisson-distributed. This scaling is used to rescale our simulations for different objects and shielding configurations to predict the dependence on exposure time. Using the simulated Y_2F values and the errors extrapolated from data, we use a Gaussian classifier to establish receiver operating characteristic (ROC) curves for the purpose of distinguishing between HEU and DU objects. Using HEU as the alarm scenario and DU as the null scenario, we assign a "positive" value when the measured Y_exp exceeds a Gaussian probability threshold defined by the expected Y_2F and its extrapolated error σ (Eq. 7). The ROC curves for different unshielded object masses are shown in Fig. 11 for spherical objects of 2, 5, 10, and 20 kg. For every object the ROC values were determined for the current PSI detector configuration (solid lines) and for one with 4× increased geometric efficiency. The analysis shows that while detection of small objects, e.g., 2 kg, may be difficult, for larger objects the signal becomes large enough for fast detection. This is especially true for a larger detector array. Note that the positive detection of a 20 kg sphere with the 4× augmented detector array approaches 100% with a very low rate of false positives.

(Figure: fractional uncertainty of Y_2F versus 1/√N_counts for the 300 s and full-duration runs, with linear fits for the DU (blue diamonds) and HEU (red diamonds) samples; the uncertainty scales as 1/√N_counts independent of the efficiency or Y_2F value, as expected from Eq. 6, and the slopes of the fits are related to the efficiencies.)

In addition to characterizing performance for unshielded objects, the system was also studied for shielded configurations, where the 5 kg objects were surrounded by iron and HDPE blocks of varying thickness. The first shielding scenario has the primary effect of reducing the incident photon flux, thus reducing statistics in a fixed measurement time. The second scenario primarily moderates the fast neutrons, and thus also reduces neutron statistics. Figure 12 shows the ROC curves for the 5 kg objects for several thicknesses of full-density iron (left) and HDPE shielding (right).
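A hedged sketch of how such ROC curves follow from the Gaussian classifier, with illustrative (assumed) Y_2F means and error rather than the simulated values of this work:

```python
import numpy as np
from scipy.stats import norm

def roc_curve(y_null, y_alarm, sigma, thresholds):
    """True- and false-positive rates for a classifier that alarms when
    the measured Y_2F exceeds a threshold, assuming Gaussian errors.
    y_null, y_alarm -- expected Y_2F for DU (null) and HEU (alarm)
    sigma           -- measurement error extrapolated via Eq. 6"""
    tpr = 1.0 - norm.cdf(thresholds, loc=y_alarm, scale=sigma)
    fpr = 1.0 - norm.cdf(thresholds, loc=y_null, scale=sigma)
    return fpr, tpr

# Illustrative numbers for one object/shielding configuration:
thr = np.linspace(-0.05, 0.25, 501)
fpr, tpr = roc_curve(y_null=0.02, y_alarm=0.10, sigma=0.03, thresholds=thr)
print(f"TPR at 5% FPR: {np.interp(0.05, fpr[::-1], tpr[::-1]):.2f}")
```

Repeating this for each mass and shielding case, with σ rescaled by the simulated count rate through Eq. 6, yields families of curves like those in Figs. 11 and 12.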
VII. CONCLUSIONS

The objective of the experimental effort described in this work was to extend the already existing Prompt Neutrons from Photofission (PNPF) technique, described in Ref. [35], to distinguish between fissionable (e.g., DU) and fissile (e.g., HEU, WGPu) materials by using neutron multiplicity analysis. The goal of the initial stages of the experimental program was to develop the necessary statistical and mathematical formalism and the Monte Carlo simulation infrastructure, and to experimentally demonstrate the feasibility of the technique. The results showed a 5-sigma difference in the signal between the two types of objects, proving the feasibility of this methodology. The next stages of the program focused on additional experiments involving more complex target configurations using DU, LEU, and HEU. The Y_2F measurements were compared to simulations, which in turn were used to develop a theoretical-numerical model providing a basis for extrapolations to larger objects and shorter exposure times. The results showed that while the Y_2F signal is marginal for practical differentiation of the small objects used in the experiment, for 5 kg spheres the neutron multiplication M is large enough to differentiate fissile material types in shielded cargo configurations with reasonable scan times. While both the feasibility and the practicality of the methodology were demonstrated, significant additional work remains to be done. Measurements on a wider range of object sizes, both large and small, would be valuable for further understanding the dynamics of Y_2F. In addition, a high-statistics measurement of photo-fission neutron distributions with thin-foil U-235 and U-238 targets in a large-acceptance detector, enabling a precise and accurate determination of ν_γ2, would significantly reduce the uncertainties in future system performance studies.
5,805.8
2018-11-12T00:00:00.000
[ "Physics" ]
Discovery and characterization of a highly efficient enantioselective mandelonitrile hydrolase from Burkholderia cenocepacia J2315 by phylogeny-based enzymatic substrate specificity prediction

Background: A nitrilase-mediated pathway has significant advantages in the production of optically pure (R)-(−)-mandelic acid. However, unwanted byproducts, low enantioselectivity, and low specific activity reduce its value in practical applications. An ideal nitrilase that can efficiently hydrolyze mandelonitrile to optically pure (R)-(−)-mandelic acid without unwanted byproducts is needed.

Results: A novel nitrilase (BCJ2315) was discovered from Burkholderia cenocepacia J2315 through phylogeny-based enzymatic substrate specificity prediction (PESSP). This nitrilase is a mandelonitrile hydrolase that could efficiently hydrolyze mandelonitrile to (R)-(−)-mandelic acid, with a high enantiomeric excess of 98.4%. No byproduct was observed in this hydrolysis process. BCJ2315 showed a highest identity of 71% to other nitrilases at the amino acid sequence level. BCJ2315 possessed the highest activity toward mandelonitrile and took mandelonitrile as its optimal substrate based on the analysis of substrate specificity. The kinetic parameters V_max, K_m, K_cat, and K_cat/K_m toward mandelonitrile were 45.4 μmol/min/mg, 0.14 mM, 15.4 s^-1, and 1.1×10^5 M^-1 s^-1, respectively. The recombinant Escherichia coli M15/BCJ2315 had a strong substrate tolerance and could completely hydrolyze mandelonitrile (100 mM) with a small amount of wet cells (10 mg/ml) within 1 h.

Conclusions: PESSP is an efficient method for discovering an ideal mandelonitrile hydrolase. BCJ2315 has high affinity and catalytic efficiency toward mandelonitrile. This nitrilase has great advantages in the production of optically pure (R)-(−)-mandelic acid because of its high activity and enantioselectivity, strong substrate tolerance, and lack of unwanted byproducts. Thus, BCJ2315 has great potential for the practical production of optically pure (R)-(−)-mandelic acid in industry.

Several approaches have been developed to discover novel nitrilases active toward mandelonitrile [18-22]. Among these approaches, enrichment culture [19] and the metagenome approach [20] have been used successfully. However, these methods require screening a large number of clones and are thereby time-consuming. Considering that the number of genes in databases increases exponentially through automated genome annotation, genome mining has become increasingly popular in recent years. Researchers can easily find many genes with a defined function, such as nitrilases, in databases such as GenBank, Pfam, and Brenda. Nitrilases of interest can be discovered more efficiently by combining the existing methods with substrate specificity prediction. Zhu et al. [21] discovered a mandelonitrile hydrolase (nitrilase) by combining traditional mining with a functional analysis of the genes flanking this nitrilase. This nitrilase was organized in a mandelonitrile metabolic pathway and displayed high activity toward mandelonitrile. Seffernick et al. [22] also discovered a nitrilase and another mandelonitrile hydrolase from Burkholderia xenovorans LB400 using computational methods. However, these two nitrilases exhibited no or only slight enantioselectivity in producing (R)-(−)-mandelic acid.
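For reference, the enantioselectivity figures quoted throughout are enantiomeric excess values in the standard sense (a definition added here for clarity; it is not spelled out in the text):

\[
ee_{(R)} = \frac{[R] - [S]}{[R] + [S]} \times 100\%,
\]

where [R] and [S] are the concentrations of the (R)- and (S)-enantiomers of mandelic acid, here determined by chiral HPLC; ee = 100% corresponds to optically pure product.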
In our study, phylogeny-based enzymatic substrate specificity prediction (PESSP) was introduced for the efficient discovery of an ideal nitrilase that avoids the problems of unwanted byproduct formation, low enantioselectivity, and low specific activity. A novel nitrilase (BCJ2315) was discovered from Burkholderia cenocepacia J2315. BCJ2315 could efficiently hydrolyze mandelonitrile to (R)-(−)-mandelic acid with high enantioselectivity. No byproduct was observed in the hydrolysis process. BCJ2315 was cloned and overexpressed in Escherichia coli M15, and its catalytic properties were investigated by analyzing its substrate specificity and kinetic parameters. The catalytic efficiency of the recombinant E. coli M15/BCJ2315 was also tested in the biotransformation of mandelonitrile to (R)-(−)-mandelic acid to further investigate the potential of BCJ2315.

Results and discussion

Discovery of a predicted mandelonitrile hydrolase subgroup through PESSP

Based on the screening criteria described in the Database mining and sequence analysis section, a total of 39 proteins were chosen for the mandelonitrile hydrolase activity assay (Table 1). These proteins were annotated as nitrilases, putative nitrilases, aliphatic nitrilases, and unnamed protein products. Among the 39 proteins, 16 had been experimentally determined to have nitrilase activity with different substrate specificities. For example, the nitrilase from Rhodococcus rhodochrous J1 [23] was designated an aromatic nitrilase. The nitrilases from Synechocystis sp. PCC6803 [24] and Acidovorax facilis 72W [25] were specific to aliphatic (di)nitriles. The nitrilase from Pseudomonas fluorescens Pf-5 [26] had a regioselective activity toward an aliphatic dinitrile. The nitrilases from Alcaligenes faecalis JM3 [13], Pseudomonas fluorescens EBC191 [17], Bradyrhizobium japonicum USDA110 [21], Burkholderia xenovorans LB400 [22], and an uncultured organism (nitrilase I, 2A6) [20] were characterized as mandelonitrile hydrolases. Finally, a cluster containing all the defined mandelonitrile hydrolases was found based on the phylogenetic analysis (Figure 1). Seven proteins within this cluster had not been characterized experimentally, and their functions remained unclear. Based on the substrate specificities of the defined nitrilases, this cluster was designated the predicted mandelonitrile hydrolase subgroup. The seven uncharacterized proteins in this subgroup were studied further.

Identification of BCJ2315 from the predicted mandelonitrile hydrolase subgroup

The respective genes of the seven proteins were cloned and overexpressed in E. coli to verify whether these uncharacterized proteins in the predicted mandelonitrile hydrolase subgroup have mandelonitrile hydrolase activity. The resulting recombinant histidine (His)-tagged proteins were all soluble and were purified to homogeneity for the catalytic activity assay. All seven enzymes were active toward mandelonitrile and had relatively high enantioselectivity (Table 2). Among these enzymes, the closest previously characterized relatives were the mandelonitrile hydrolases from Burkholderia xenovorans LB400 [22] and from an uncultured organism (nitrilase I, 2A6) [20]. The highest identity, 71%, was with the nitrilase (2A12) from an uncultured organism discovered by Robertson et al. [20]. To the best of our knowledge, the current study is the first report of the nitrilase BCJ2315. BCJ2315 was chosen for further study because it had the highest activity and enantioselectivity toward mandelonitrile.
The remaining uncharacterized genes outside the predicted mandelonitrile hydrolase subgroup in the phylogenetic tree were also cloned and overexpressed in E. coli (Additional file 1: Table S1), so that no other nitrilase with good characteristics toward mandelonitrile would be missed. After optimizing the expression conditions (induction temperature, isopropyl-β-D-thiogalactopyranoside (IPTG) concentration, and expression vector/host), all the enzymes were expressed in a soluble form and showed activity toward at least one of the four assayed nitrile substrates (benzonitrile, phenylacetonitrile, acrylonitrile, and succinonitrile). The recombinant His-tagged proteins were purified for the mandelonitrile hydrolase activity assay. Little to no activity was observed in the high-performance liquid chromatography (HPLC) analysis after 12 h of hydrolysis (Additional file 1: Table S1). This result further proved the accuracy of the prediction based on the phylogenetic analysis.

Properties of the purified BCJ2315

The molecular weight of the purified native BCJ2315, estimated by gel filtration chromatography, was about 450 kDa. BCJ2315 showed a single band on SDS-PAGE with a molecular weight of 37 kDa (Figure 2). This result indicated that native BCJ2315 consists of 12 identical subunits, in agreement with most reported nitrilases, in which 6 to 26 identical subunits self-aggregate to form the active enzyme [32]. The optimum temperature and pH of the purified BCJ2315 were determined. The optimum temperature was 45°C, as shown in Figure 3a. When the temperature was above 45°C, the activity of BCJ2315 decreased sharply. This behavior is similar to that of nitrilases reported from mesophilic organisms, which have optimum temperatures ranging from 30°C to 50°C [1,11,12,17,33]. BCJ2315 showed the highest activity at pH 8.0 (Figure 3b). Only small changes in activity were observed between pH 6.4 and 9.6. These variations suggested that BCJ2315 has a relatively broad pH optimum, in contrast to other arylacetonitrilases, which have rather narrow optima at neutral or slightly alkaline pH values [11]. The catalytic activity of BCJ2315 toward 24 structurally diverse nitriles was investigated. Table 3 lists the relative activities determined by quantifying the amount of ammonia released during hydrolysis. A clear preference of BCJ2315 for arylacetonitriles as substrates indicated that this enzyme is an arylacetonitrilase. Lower activity was also observed with the aliphatic and heterocyclic nitriles. No detectable activity was observed with the aromatic nitriles. BCJ2315 showed the highest activity toward mandelonitrile (8.8 times that toward phenylacetonitrile), indicating that BCJ2315 is a highly active mandelonitrile hydrolase. By comparison, the activities of other arylacetonitrilases, such as those from Alcaligenes faecalis ATCC 8750 [10], Alcaligenes sp. ECU0401 [11], Alcaligenes faecalis ZJUTB10 [12], Alcaligenes faecalis JM3 [13], and Pseudomonas putida [33], toward mandelonitrile are only 12% to 50% of those toward phenylacetonitrile. The kinetic parameters of BCJ2315 were determined using mandelonitrile as the substrate. The obtained K_m and V_max were 0.14 mM and 45.4 μmol/min/mg, respectively. The low K_m value indicated that BCJ2315 has a high affinity toward mandelonitrile. The K_m values of other highly enantioselective mandelonitrile hydrolases are one to three orders of magnitude higher (above 3.4 mM) than that of BCJ2315 (Table 4) [11,12,16,33].
The K_cat and K_cat/K_m were 15.4 s^-1 and 1.1×10^5 M^-1 s^-1, respectively. BCJ2315 thus has a high catalytic efficiency, comparable with that of the nitrilase (bll6402) from Bradyrhizobium japonicum USDA110 (1.04×10^5 M^-1 s^-1) [21]. BCJ2315 exhibited a relatively high specific activity (27.79 U/mg) among the reported mandelonitrile hydrolases; it has the second-highest activity of all known highly enantioselective nitrilases in the literature (Table 4). The highest specific activity toward mandelonitrile was observed with nitrilase I (50 U/mg) from an uncultured organism discovered by Robertson et al. Nitrilase I also has an excellent enantioselectivity toward mandelonitrile (enantiomeric excess (ee), 98%) and mandelonitrile derivatives [20,34]. Interestingly, these two nitrilases share only 66% identity at the amino acid sequence level, probably because of the method we used (PESSP). The PESSP method discovers new enzymes based on their substrate specificity toward mandelonitrile rather than on sequence identity. Therefore, despite the low identity between these two nitrilases, they were successfully clustered together into the predicted mandelonitrile hydrolase subgroup. PESSP thus has an advantage in searching for enzymes with similar characteristics, even when these enzymes are quite different at the amino acid sequence level. Higher specific activities were also observed for the nitrilases from Pseudomonas fluorescens EBC191 (32.8 U/mg) and Bradyrhizobium japonicum USDA110 (24.38 U/mg). However, these two nitrilases had lower ee values, and the nitrilase from Pseudomonas fluorescens EBC191 also produced amide as a byproduct. The other highly enantioselective mandelonitrile hydrolases, from Alcaligenes sp. ECU0401, Alcaligenes faecalis JM3, Aspergillus niger CBS 513.88, Neurospora crassa OR74A, Pseudomonas putida, and Alcaligenes faecalis ATCC 8750, exhibited relatively low specific activities (Table 4). Thus, when specific activity, enantioselectivity, and the production of unwanted byproducts are taken into account, BCJ2315 demonstrates great potential for the industrial production of optically pure (R)-(−)-mandelic acid.

Conversion of mandelonitrile by the recombinant E. coli M15/BCJ2315

Whole-cell biocatalysis was performed using the recombinant E. coli M15/BCJ2315 as the biocatalyst to further assess the potential of BCJ2315 for mandelonitrile hydrolysis. The reaction conditions for M15/BCJ2315 were optimized by evaluating the effects of pH, temperature, cell concentration, and mandelonitrile concentration in a 10 ml reaction system. The optimal reaction system consisted of wet cells (100 mg) and mandelonitrile (100 mM) in 10 ml of phosphate buffer (100 mM, pH 8.0) (data not shown). Although the optimal temperature was 45°C, we conducted the reaction at 30°C in consideration of the thermal deactivation of the enzyme. The reaction course is shown in Figure 4. Considering that mandelonitrile decomposes into benzaldehyde and hydrogen cyanide in aqueous solution at pH 7.0 and above, the benzaldehyde concentration is also plotted in the figure to help elucidate the mechanism of the mandelonitrile hydrolysis mediated by M15/BCJ2315. Mandelonitrile (100 mM) could be hydrolyzed completely by M15/BCJ2315 within 1 h. No mandelamide was detected during the reaction. The ee value for the (R)-(−)-mandelic acid was constant at 97.6% during the whole hydrolysis process. Finally, the (R)-(−)-mandelic acid was recovered with a total yield of 93.5%.
The ee of the product was determined to be 99.8% by HPLC after recrystallization from benzene. The product was characterized as follows: Among the reported nitrilase-mediated hydrolyses of mandelonitrile, only the nitrilase from Alcaligenes sp. ECU0401 could enantioselectively hydrolyze mandelonitrile at a high substrate concentration [36]. When 100 mM of mandelonitrile was used, the yield of (R)-(−)-mandelic acid reached 100% with wet cells (100 mg/ml) in 2 h, and the ee was as high as 99%. In the current study, we used less biocatalyst (10 mg/ml of wet cells) to realize the hydrolysis of the same mandelonitrile concentration (100 mM) with high enantioselectivity (97.6%) in a 1 h reaction time. This result may be due to the high specific activity of BCJ2315 and its high soluble expression in E. coli. Biocatalyst preparation accounts for a large part of the total cost of an enzymatic process. Therefore, using M15/BCJ2315 for the production of (R)-(−)-mandelic acid is beneficial in two ways: the processing time is shorter and a smaller amount of biocatalyst (resting cells) is required, thereby significantly reducing the production cost compared with other reported systems.

Conclusions

A novel mandelonitrile hydrolase, BCJ2315, was discovered from Burkholderia cenocepacia J2315 through PESSP. BCJ2315 takes mandelonitrile as its optimal substrate and exhibits great advantages for the production of optically pure (R)-(−)-mandelic acid: high enantioselectivity and activity, strong substrate tolerance, and no byproduct formation. These advantages make BCJ2315 highly efficient in the hydrolysis of mandelonitrile. Thus, BCJ2315 has great potential for the practical production of optically pure (R)-(−)-mandelic acid.

Materials

All the bacterial strains used for the mandelonitrile hydrolase analysis were obtained from the China General Microbiological Culture Collection Center, the German Collection of Microorganisms and Cell Cultures, and the American Type Culture Collection (ATCC). E. coli BL21/pET28a(+) and M15/pQE30 were used for expressing the nitrilases. The nitrile substrates and carboxylic acids were purchased from Sigma-Aldrich (Milwaukee, USA).

Database mining and sequence analysis

Database searches of the sequence data were performed using the BLASTP program (http://blast.ncbi.nlm.nih.gov/Blast.cgi?PAGE=Proteins&PROGRAM=blastp&BLAST_PROGRAMSblastp&PAGE_TYPE=BlastSearch&SHOW_DEFAULTS=on&LINK_LOC=blasthome). The nitrilase (bll6402, NP_773042) from Bradyrhizobium japonicum USDA110 [21] was chosen as the identifier to detect mandelonitrile-specific nitrilases (mandelonitrile hydrolases). Bll6402 was the most active toward mandelonitrile and took mandelonitrile as its optimal substrate. The identity of the amino acid sequence was used as the first screening criterion for the BLASTP results. Sequences with identities greater than 90% or less than 30% were removed, to filter out enzymes with the same characteristics or with overly distinct characteristics. No more than two sequences with the same identity were kept for analysis. The second criterion was the source of the sequence. Sequences from the same species were chosen only once, because such sequences usually share high identities and the same characteristics with one another. Two or more clearly different sequences from the same strain were kept. The third criterion was the availability of the organisms harboring the nitrilases.
Independently of these three criteria, sequences with experimentally defined nitrilase substrate specificity were chosen as a priority to refine the phylogenetic analysis. Sequences annotated as unnamed protein products were checked for the Glu-Lys-Cys catalytic triad [32,37] using the ScanProsite tool in the ExPASy proteomics server. The alignment of the obtained sequences was conducted using ClustalW [38]. A bootstrap consensus tree was built using the neighbor-joining method packaged in MEGA version 4.0 [39].

Cloning and expression of nitrilase genes in E. coli
The primers for the nitrilase genes in the predicted mandelonitrile hydrolase subgroup used in this study are listed in Table 5. Recombinant DNA techniques were performed according to standard protocols [40]. All the recombinant expression plasmids were transformed into E. coli BL21 (DE3) or E. coli M15. The recombinant E. coli cells were cultivated in Luria-Bertani medium containing antibiotics at 37°C. IPTG was added to a final concentration of 0.1 mM to induce the cultures when the OD600 reached 0.6 to 0.8. The E. coli BL21 (DE3) or E. coli M15 cultures were then further incubated for 20 h at 20°C or 30°C, respectively. The induced cells were harvested by centrifugation (12,000 rpm, 10 min) at 4°C and stored at −20°C.

Protein purification
The cell pellets were resuspended in 10 ml of ice-chilled lysis buffer (50 mM NaH2PO4, 300 mM NaCl, 10 mM imidazole, 1 mM dithiothreitol (DTT), pH 8.0). Cell disruption was performed by sonication on ice, and the lysate was centrifuged at 10,000 × g for 30 min to remove the cell debris. The resulting supernatant was passed through a 0.22-μm filter and then applied to a Ni-NTA Superflow column (1 ml, Qiagen) previously equilibrated with the lysis buffer. The column was subsequently washed with 10 ml of wash buffer (50 mM NaH2PO4, 300 mM NaCl, 20 mM imidazole, 1 mM DTT, pH 8.0) to remove contaminating proteins. The fusion protein was then eluted with elution buffer (50 mM NaH2PO4, 300 mM NaCl, 250 mM imidazole, 1 mM DTT, pH 8.0). The eluted protein was desalted and concentrated by ultrafiltration using a 50-ml Amicon Ultra centrifugal filter device with a molecular weight cutoff of 10 kDa (Millipore, USA). The purified enzyme was resuspended in sodium phosphate buffer (pH 7.0) containing 1 mM DTT and 20% glycerol, and then stored at −40°C. The crude extract and the pure enzyme were analyzed by SDS-PAGE. Protein concentration was determined using the Bradford method, with bovine serum albumin as the standard. All purification steps were carried out at 4°C.

Enzyme assay
The standard reaction with the purified nitrilase was performed at 30°C in a reaction mixture (1 ml) containing 100 μmol sodium phosphate (pH 7.0), 20 μmol of nitrile substrate, and an appropriate amount of nitrilase. Aliquots (100 μl) were withdrawn at different time intervals, and 10 μl of 2 M HCl was added to quench the reaction. The production and optical purity of the mandelic acid were determined by HPLC analysis. In some cases, the amount of ammonia formed in the reaction was measured with the Berthelot assay [41]. All experiments were performed in triplicate. One unit of enzyme activity was defined as the amount of enzyme that produced 1 μmol of mandelic acid or ammonia per min under the standard assay conditions.
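The unit definition above translates directly into the specific activities quoted earlier. A minimal sketch of the arithmetic, with illustrative (assumed) assay numbers chosen to reproduce the reported 27.79 U/mg:

```python
def units_of_activity(product_umol: float, minutes: float) -> float:
    """One unit (U) = 1 umol of mandelic acid (or ammonia) per minute
    under the standard assay conditions."""
    return product_umol / minutes

def specific_activity(units: float, protein_mg: float) -> float:
    """Specific activity in U per mg of purified enzyme."""
    return units / protein_mg

# Hypothetical example: 20 ug of enzyme forming 5.56 umol product in 10 min
u = units_of_activity(5.56, 10.0)     # 0.556 U
print(specific_activity(u, 0.020))    # ~27.8 U/mg, matching the reported value
```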
Determination of the molecular weight
The molecular weight of the purified BCJ2315 was determined by gel filtration chromatography on a Superdex 200 10/300 GL column (GE Healthcare). The column was calibrated with the HMW Gel Filtration Calibration Kit (GE Healthcare) containing thyroglobulin (669 kDa), ferritin (440 kDa), aldolase (158 kDa), and conalbumin (75 kDa). The void volume was determined using Blue Dextran (2,000 kDa).

Effects of temperature and pH on the purified BCJ2315 activity
To determine the temperature and pH effects, the reaction was performed at different temperatures, or in buffers of different pH values, for 10 min with 20 μg of purified BCJ2315 and 20 mM mandelonitrile in 1 ml of reaction mixture. The optimum temperature of BCJ2315 was determined by incubating the enzyme with mandelonitrile at different temperatures (18°C to 70°C). The optimum pH of BCJ2315 was determined by measuring the enzyme activity in buffers with different pH values (4.0 to 10.6) using mandelonitrile as the substrate. Sodium citrate-citric acid buffer (pH 4.0 to 6.4, 0.1 M), sodium phosphate buffer (pH 6.4 to 7.6, 0.1 M), Tris-HCl buffer (pH 7.0 to 9.0, 0.1 M), and glycine-sodium hydroxide buffer (pH 8.5 to 10.6, 0.1 M) were used.

Measurement of the kinetic parameters
The kinetic parameters of BCJ2315 were determined over a wide range of mandelonitrile concentrations (0.1 mM to 15 mM) under standard assay conditions. The kinetic constants V_max and K_m were calculated from Lineweaver-Burk plots using standard linear regression techniques.

Substrate specificity
The specific activities of BCJ2315 toward structurally diverse nitriles were measured under standard conditions. The reactions were incubated at 30°C for 5 min to 240 min. The conversion was determined by measuring the amount of ammonia produced in the reaction using the Berthelot assay, as described previously.

Enantioselective mandelonitrile hydrolysis with the recombinant E. coli M15/BCJ2315
For the recombinant E. coli M15/BCJ2315, a standard reaction mixture (10 ml) containing wet cells (100 mg) and mandelonitrile (100 mM) suspended in phosphate buffer (100 mM, pH 8.0) was incubated in a rotary shaker (30°C, 200 rpm). Aliquots (100 μl) were withdrawn and quenched with 10 μl of 2 M HCl at different time intervals. The production and optical purity of the mandelic acid were determined by HPLC analysis. (R)-(−)-Mandelic acid was recovered using an ion-exchange process, as described by Xue et al. [42]. The product was recrystallized using benzene as the solvent.
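The Lineweaver-Burk procedure named above fits 1/v against 1/[S], since the Michaelis-Menten equation v = V_max[S]/(K_m + [S]) linearizes to 1/v = (K_m/V_max)(1/[S]) + 1/V_max. A minimal numerical sketch (the substrate grid matches the stated 0.1-15 mM range; the rate data and the "true" constants are assumptions for the demo, not the paper's values):

```python
import numpy as np

# Hypothetical assay data: substrate concentrations (mM) and initial rates (U/mg)
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 15.0])
km_true, vmax_true = 0.4, 27.8                 # assumed values for the demo
v = vmax_true * s / (km_true + s)              # noise-free Michaelis-Menten rates

# Lineweaver-Burk: 1/v = (Km/Vmax)*(1/s) + 1/Vmax, a straight line in (1/s, 1/v)
slope, intercept = np.polyfit(1.0 / s, 1.0 / v, 1)
vmax = 1.0 / intercept
km = slope * vmax
print(f"Km = {km:.2f} mM, Vmax = {vmax:.1f} U/mg")   # recovers the inputs
```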
4,740.4
2013-02-18T00:00:00.000
[ "Biology" ]
Measurements and simulations to investigate the feasibility of neutron multiplicity counting in the current mode of fission chambers
In two earlier papers [1], [2] we investigated the possibility of extracting traditional multiplicity count rates from the cumulants of fission chamber signals in current mode. It was shown that if all neutrons emitted from the sample simultaneously are also detected simultaneously, the multiplicity rates can be retrieved from the first three cumulants of the currents of up to three detectors, but the method breaks down if the detections of neutrons of common origin take place with a time delay spread wider than the pulse shape. To remedy these shortcomings, in this work we extended the theory to two- and three-point distributions (correlations). It was found that the integrals of suitably chosen two- and three-point moments with respect to the time differences become independent of the probability density of the time delays of the detections. With this procedure, the multiplicity rates can be retrieved from the detector currents for arbitrary time delay distributions. To demonstrate the practical applicability of the proposed method, a measurement setup was designed and built. The statistics (shape and amplitude distribution) of the detector pulse were investigated as important parameters of the theoretical model. Simulations were performed to estimate the expected values of the multiplicity rates in the built setup. Measurements were performed and two types of moments (the mean and the covariance function) of the recorded detector signals were calculated. Values of the singles rates were successfully recovered.

I. INTRODUCTION
The primary objective of multiplicity counting is to determine the mass of small fissile samples, which is extracted from the detection rates of single, double and triple coincidences (S, D and T count rates) with pulse counting techniques [3]. The development of a method that extracts these multiplicity rates from the first three cumulants of the current of fission chambers is the subject of this paper and its two predecessors [1], [2]. The basic idea was introduced in Ref. [1], in which the neutrons emitted simultaneously from the sample were assumed to be detected simultaneously. It was shown that the S, D and T count rates could be retrieved from the first three cumulants of the detector current, provided that certain parameters of the detector (such as its pulse shape and amplitude distribution) are known. In reality, though, detection of neutrons of common origin does not take place simultaneously. This phenomenon was incorporated into the theory in the sequel paper [2], by assuming a random arrival time at the detector. The results showed that when the width of the density function of the time delay is much wider than the pulse width (which is the typical case in thermal detection systems), the coefficients of the D and T rates (analogues of the doubles and triples gate factors in pulse counting methods) become vanishingly small, hence only the S rates can be unfolded from the cumulants. To remedy this problem, the theory has recently been extended to the use of the double and triple covariance functions of the detector signals, derived from their two- and three-point distributions (in time). In this paper, the main results of this extension are reported.
In order to demonstrate the use of the proposed method and to investigate its practical applicability, an experimental configuration has recently been prepared in the Training Reactor Facility at the Budapest University of Technology and Economics in Hungary. Extensive measurements are currently in progress in various configurations. At the end of the paper, some preliminary experimental results will be presented as well.

II. TRADITIONAL MULTIPLICITY COUNTING
In this section the traditional method of multiplicity counting is summarized briefly, using the terminology of [4]. In a multiplicity counting measurement, the detection rates of the first three k-tuplets (k detected neutrons originating from the same sample emission) are determined. These rates are called the singles (S), doubles (D) and triples (T) rates, respectively, and they can be written as in (1). Here F is the intensity of spontaneous fission in the sample, ε is the detection efficiency, and ν̃_i is a modified form of the so-called Böhnel moments, the factorial moments of the number of emitted neutrons per source event. f_d and f_t are the so-called doubles and triples "gate factors", which are introduced empirically in order to account for the non-coincident detection of neutrons of common origin and for the loss of detections outside the measurement windows. Using these equations, the sought sample quantities (including the fissile mass of the sample) can be obtained from the measured values of the S, D and T rates by algebraic inversion [4].

III. STATISTICS OF THE DETECTOR SIGNALS
The theory of the newly proposed method of multiplicity counting is based on a formalism describing the fluctuating signals of neutron detectors [5]. The key element of the formalism is a stochastic model of the detector response, which describes the pulse by a deterministic (constant) pulse shape f(t) with a random amplitude a, whose nth moment is denoted by ⟨a^n⟩. It is assumed that the detection of neutrons occurs with a random time delay τ, which is independent and identically distributed for each neutron, and is characterized by a density function u(τ). Using these quantities as building blocks, various low-order cumulants of the detector signals can be calculated with a master equation formalism. In [1] and [2] expressions were derived for the one-point cumulants. In this paper, calculations for the two- and three-point cumulants will be presented. We shall see that, as in the one-point case, the two- and three-point cumulants can also be expressed with the Böhnel moments, hence they are also related to the traditional multiplicity rates. As was already discussed in the previous papers, when interpreting the cumulants in terms of the multiplicity rates, with the purpose of unfolding the Böhnel moments from the cumulants, one has to substitute f_d = f_t = 1 in (1). This is because the effect of non-coincident detection of neutrons of common origin, which is described by the empirical gate factors in the traditional method, is explicitly included in the theory of the new method. Nevertheless, as will be seen, analogues of the gate factors appear in the cumulants as well.

A. One-point Distributions
To serve as a reference for comparison with the expressions of the two- and three-point cumulants presented in the following subsection, a brief overview of the theory concerning the one-point distribution is given here, based on [2].
Omitting the details of the derivations, the first three cumulants (the mean, variance and skewness) of the detector signal are given by (2)-(4). Following similar considerations, the expressions for the double and triple cross-covariances between two and three detectors read as (5) and (6). Here, I_n as well as ξ_{1,1}, ξ_{1,2} and ξ_{1,1,1} are integrals of the pulse shape f(t) and the delay density function u(τ); their definitions can be found in [2]. Because it will appear in the two- and three-point cumulants, we recall the definition of I_1, given in (7). We will refer to the ξ's as "gate factors" because they play a similar role in the above formulas as the traditional gate factors in (1). From equations (2)-(6) it is seen that if I_n and the ξ's are known from calibration, the detection rates can be obtained from the measured cumulants by simple algebraic inversion. However, as was also shown in [2], if the spread of the density of the time delay is much larger than the pulse width (which is typical in thermal detection systems), the gate factors become vanishingly small. In such a case, only the S rates can be extracted.

B. Two- and Three-point Distributions
To explore the temporal correlations in the detector signals, their distribution must be described at two or even three points in time: besides time t, a second time t − θ and a third time t − θ − ρ are considered as well. Regarding a single detector, our goal is to determine the integrals Cov_2 ≡ ∫ Cov_2(θ) dθ and Cov_3 ≡ ∫∫ Cov_3(θ, ρ) dθ dρ of the second- and third-order cumulants of the signal, i.e. of the covariance function Cov_2(θ) and the bi-covariance function Cov_3(θ, ρ). Detailed calculation of the moments can be found in [6]; here only the final results will be provided. One finds that the integral of the covariance function Cov_2(θ) has the form (12), whereas the integral of the bi-covariance function Cov_3(θ, ρ) has the form (13). Here I_1 is the same as (7) in the previous section, whereas the three-point doubles gate factors ξ_A, ξ_B and ξ_C are also integrals of the pulse shape and the delay density function; their definitions can be found in [6]. The corresponding integrals of the covariance function Cov_{1,1}(θ) of two detectors and the bi-covariance function Cov_{1,1,1}(θ, ρ) of three detectors read as (14) and (15). One can see that the expressions (12)-(15) have, with minor differences, the same form as the one-point cumulants (3)-(6). The difference is that, except in the doubles term of Cov_3, all the gate factors (hence the density of the unknown time delay distribution) have disappeared. With the results presented above, it is possible to design an experimental procedure such that all selected moments (cumulants and covariances) are independent of the time delay distribution. This makes the method applicable for multiplicity counting with thermalised neutrons, for which the detector efficiency is much higher than for fast neutrons. In particular, one can use the first cumulant κ_1 to determine the singles rate, the covariance Cov_{1,1} or Cov_2 to determine the doubles rate, and the bi-covariance Cov_{1,1,1} to determine the triples rate.

IV. EXPERIMENTAL INVESTIGATION
An experimental configuration has been prepared in the Training Reactor Facility at the Budapest University of Technology and Economics in Hungary in order to demonstrate the use of the newly proposed method. Strictly speaking, the applied measurement setup corresponds to the active version [7] of multiplicity counting, as opposed to the passive version described in the first part of this paper.
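To make the covariance-function machinery concrete, here is a rough numerical sketch of how an empirical cross-covariance Cov_{1,1}(θ) between two sampled detector signals can be estimated and integrated. The synthetic signals, the lag range and the estimator are ours, not the paper's; only the 48 ns sampling interval is taken from the setup description.

```python
import numpy as np

def cross_covariance(x, y, max_lag):
    """Empirical cross-covariance between two stationary, equally sampled
    signals for non-negative lags 0..max_lag (negative lags follow by
    swapping x and y, since Cov_xy(-k) = Cov_yx(k))."""
    x = x - x.mean()
    y = y - y.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], y[k:]) / (n - k) for k in range(max_lag + 1)])

# Synthetic stand-ins: white noise plus a shared, delayed component to mimic
# correlated pulses from neutrons of common origin
rng = np.random.default_rng(0)
common = rng.normal(size=1_000_000)
a = common + rng.normal(size=1_000_000)
b = np.roll(common, 3) + rng.normal(size=1_000_000)  # detector B sees a delayed copy

cov = cross_covariance(a, b, max_lag=50)
dt = 48e-9                       # sampling interval of the ADC, in seconds
integral = cov.sum() * dt        # Riemann-sum stand-in for the integral in (14)
print(integral)
```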
The reason for this is that no spontaneously fissioning materials are available at our facility; therefore the only option is to use low-enriched uranium samples irradiated by an isotopic neutron source to induce fission in the sample. Although the two versions of multiplicity counting provide different formulas for the detection rates [7], the relationship between the moments of the detector signals and the detection rates is expected to be the same in the two cases. Therefore, as our primary goal is not to unfold the fissile mass of an unknown sample, but only to show the possibility of extracting detection count rates from the time-resolved signals of fission chambers, we shall disregard the differences between the two versions and use the formulas for the signal moments as presented in the preceding sections. The central element of the setup is an EK-10 type fuel assembly. Different variants of the same assembly type are available in the facility, which differ in their shape, in the number of fuel rods they contain, and in the arrangement of the rods within the assembly. The one shown in the figures is a square-shaped assembly with 68 mm side length, containing 16 fuel rods in a square lattice. Each rod is filled with fresh fuel pins of 10% enriched uranium oxide in a metallic magnesium matrix, yielding 7.94 grams of 235U per rod. The rods have a 50 cm active length and a 7 mm active diameter. The cladding of the fuel pin is 99.5% purity aluminum alloy, with an outer diameter of 10 mm and a thickness of 1.5 mm.

A. Description of the measurement set-up
The fuel assembly is surrounded by three KNT-31-1 type fission chambers, labeled A-C in the figure. The fission chambers have a 17.6 cm length, a 32 mm outer diameter and a 500 cm2 sensitive layer covered with 90% enriched uranium. The thermal neutron sensitivity of the detectors is 0.25 pulses per one neutron/cm2. A 241Am-Be source, labeled S in the figure, is located close to the fuel assembly to provide neutrons that cause induced fission in the fuel. The isotopic source has an emission intensity of 2·10^6 neutrons/s. Both the source and the detectors are covered (at least partially) by a paraffin-wax coating which serves as a moderator for the fast neutrons originating from the isotopic source and the fuel. The thickness of the coating was chosen to maximize the fission intensity in the detectors; based on MCNP simulations with different thicknesses, 3 cm was found to be optimal. The signal of each detector is sent to an in-house-built high-frequency pre-amplifier which produces a voltage signal ranging between −1 and 1 V. The pre-amplifier circuit has a small time constant (compared to the charge collection time of the detector), hence the shapes of the amplified voltage pulses reflect the shapes of the current pulses in the detector. The voltage signals of the three detectors are then registered by a pair of Red Pitaya STEMLab 125-14 type FPGA-based A/D converters. Each converter has two analogue inputs and provides a 14 bit vertical resolution as well as a 125 MHz maximal sampling frequency (corresponding to an 8 ns maximal resolution in time). The investigation consists of three steps: 1) First, the statistics (shape and amplitude distribution) of the detector pulses are determined from recorded signals. 2) Next, by running MCNP simulations, the detection efficiency ε of the measurement configuration is determined; furthermore, the expected numbers of singles, doubles and triples detection events are estimated. 3) Finally, by running long measurements and analyzing the signals, the mean value and the covariance function of the detector signal are determined.
From the mean value, the singles rate is also recovered.

B. The properties of the detector pulse
The properties of the detector pulses (their shape and amplitude distribution) were determined by analyzing a large number of recorded pulses. Signals were recorded for 5 minutes with a time resolution of 48 ns (Fig. 3).

C. The efficiency of detection
The efficiency of detecting single neutrons, as well as doubles and triples, was investigated with MCNP simulations using the model shown in Fig. 1. Each simulation was performed with the PTRAC option, which provided listed information on (among many other things) the location and time of fission events. These data were then processed and used to determine the quantities discussed in this subsection. In order to estimate the efficiency of the detectors, that is, the probability of detecting a neutron coming from the fuel assembly after an induced fission, source neutrons were initiated from the fuel region. Their positions and directions followed uniform and isotropic distributions, respectively, whereas their energy followed the Watt spectrum specific to 235U. To eliminate the effect of internal multiplication, which is of no interest in determining the detection efficiency, the fuel pins were replaced by void space. After initiating 3·10^9 source particles in total, the number of fission events in the detectors was counted, from which the detection efficiency could be easily estimated. The results are summarized in Table I. One can see that the efficiency of all three detectors is around 0.07%, yielding an overall detection efficiency of around 0.21%. This is a very low value (especially when compared with the usual 40-60% efficiency of traditional multiplicity counters [3]), caused primarily by the unfavorable geometrical conditions in the system. As a consequence, recalling expressions (1) for the detection rates, we expect that the observed doubles and triples rates will be several orders of magnitude smaller than the singles rates, which can make their determination difficult or even impossible. There is an additional effect related to the detection of neutrons which is not included in the theoretical model presented in the first part of the paper, but which nevertheless affects the statistics of the measured signals. Namely, in a fission chamber the detection of a neutron generates further neutrons, which might then be detected in this particular detector or in a neighboring one, thus increasing the observed rates of singles, doubles and triples detections. To estimate the probability of such events, simulations were performed in which source neutrons were initiated from one of the detectors (distributed uniformly in space, isotropically in direction and with a Watt energy spectrum), and the number of fission events in all detectors was counted. The same procedure was repeated for all three detectors, and the probabilities of detecting neutrons originating from any of the three detectors were estimated. The results are shown in Table II, where the rows represent the detector in which the neutron starts, and the columns the detector in which it arrives. Considering a neutron generated during the detection process in one of the detectors, the probability that it will be detected by the same detector is around 0.033-0.035%, whereas the probability that it will be detected by another detector is around 0.016%.
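The efficiency estimate described above is a simple counting ratio; a minimal sketch, assuming hypothetical detector counts chosen to match the quoted ~0.07% per detector, with a binomial standard error attached:

```python
import math

def efficiency(counts: int, n_source: int):
    """Point estimate and binomial standard error of the detection efficiency
    from a Monte Carlo run with n_source starting particles."""
    eps = counts / n_source
    err = math.sqrt(eps * (1 - eps) / n_source)
    return eps, err

# Illustrative counts only; the actual per-detector values are in Table I
for det, c in zip("ABC", (2_100_000, 2_080_000, 2_150_000)):
    eps, err = efficiency(c, 3_000_000_000)
    print(f"detector {det}: eps = {100*eps:.4f}% +/- {100*err:.4f}%")
```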
Since these probabilities are quite low, the number of such "false counts", and hence their contribution to the singles, doubles and triples, is expected to be negligible. Nevertheless, in order to gain a better understanding of this process and its consequences, we are planning to incorporate it into the theoretical model in the near future. The detection probability presented above is a simple way to characterize the detection efficiency of the measurement setup. Another, more direct approach is to estimate the expected number of single, double and triple detection events in a given period of time. To get this information, neutrons were initiated from the 241Am-Be isotopic source region, and the numbers of events in which one, two and three neutrons were detected (excluding the detection of neutrons born in the fission chambers) were counted. The simulations were performed with 3·10^9 source neutrons in total which, taking into account the strength of the isotopic source, corresponds to a measurement time of 25 minutes. This simulation was repeated with four different numbers of fuel rods (8, 10, 13 and 16) in the assembly, in order to investigate the sensitivity of the detection rates to the amount of fissile material. The results are summarized in Table III. Two types of singles are shown: source singles are neutrons detected directly from the isotopic source, whereas sample singles originate from induced fission in the fuel assembly. No values are presented for the triples, because their number was zero in each case due to the very low detection efficiency of the system. As expected, with increasing amounts of fuel the numbers of sample singles and sample doubles increase, because the probability of inducing fission in the fuel also increases. The expected number of source singles shows the opposite tendency, which can be explained by the shielding property of the fuel: the more fuel is present, the lower the probability that source neutrons reach the detectors without causing induced fission in the fuel. In general, as expected from the detection probabilities shown earlier, the expected number of singles is several orders of magnitude higher than the expected number of doubles. Additionally, the expected number of source singles is much higher than that of the sample singles. As a result, one expects that the source single counts will have a dominating contribution to the measured detector current and to its moments, as we will see in the following subsection.

D. The mean value and the covariance function of the signals
As a simple preliminary demonstration of the proposed method, measurements were performed using three different fuel assemblies, containing 0, 5 and 16 rods. The mean values and the covariance functions of the registered signals were then determined which, according to (2) and (14), are related to the (sample) singles and (sample) doubles rates, respectively. The bi-covariance functions were not calculated due to the low expected number of triple events. Since, as we saw earlier, detection events (especially doubles) are rare, long measurements lasting several hours are necessary to gain usable statistics. Considering the 48 ns time resolution of the A/D converter, the recorded signals would require a large amount of disk space. In order to save space, a compression technique was developed and applied.
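The equivalence between the simulated particle budget and real measurement time quoted above is a one-line calculation; a sanity-check sketch using only numbers stated in the text:

```python
source_intensity = 2e6            # neutrons per second from the Am-Be source
n_simulated = 3e9                 # source particles in the MCNP run
equivalent_time_min = n_simulated / source_intensity / 60
print(equivalent_time_min)        # 25.0 minutes of real measurement
```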
The technique utilizes the fact that, due to the low detection intensity, neighboring pulses in the signal are often far from each other and separated by background noise which contains no useful information on the system. Therefore, a triggering procedure was implemented on the FPGA module of the A/D converter to disregard the unnecessary parts of the signal. The procedure is illustrated in Fig. 9 and is explained in the following. An arbitrary threshold level is defined for the input analogue signal coming from the detector to the converter. By default, the digitized signal values are not written to files; instead they are collected in a memory buffer within the FPGA. When the signal goes above the threshold, the values start being written to files from 40 samples earlier. When the signal goes below the threshold and stays there for at least 200 samples, they stop being written to files. During the analysis of the signals, these unrecorded segments were considered to be zero. Using the described compression technique, a single 8 hour long measurement was performed with each of the three assemblies. The threshold values for the triggering were determined individually for each detector based on their amplitude spectrum, like the one shown earlier. The mean values of the registered signals of all three detectors are shown in Fig. 10. One immediately sees that they show a decreasing tendency with increasing amounts of fuel, just as the source singles in Table III. This is not surprising if we recall that the mean value is proportional to the singles detection rate and, as shown in the previous subsection, singles events are dominated by the so-called source singles. Table IV lists the singles rates (number of singles per second) calculated with formula (2) from the mean values of detector A. To serve as a comparison with the simulated values shown in Table III, the number of singles per 25 minutes was also calculated. We see that the measured singles rates are of the same order as the simulated (and dominating) source singles. The covariance function of the signals of detectors A and B is shown in Fig. 11. Regardless of the number of fuel rods in the assembly, the covariance function appears to be a constant zero function buried in noise, whose integral will also practically be zero, making it impossible to recover the doubles detection rate using formula (14). This is most likely a result of the low rate of doubles events, which are suppressed by the much more frequent singles events and by the background noise.

V. CONCLUSIONS AND FUTURE PLANS
A new form of neutron multiplicity counting has been developed, with the possibility of extracting traditional multiplicity count rates (namely the singles, doubles and triples rates) from the cumulants of fission chamber signals in current mode. It was shown that, at least in theory, by using two- and three-point statistics of the currents of one to three fission chambers, the detection rates can be recovered even in the case when the detection of neutrons of common origin does not take place simultaneously. The proposed method has the advantage that it does not suffer from the dead time problem; on the other hand, it requires knowledge of the detector pulse statistics (shape and amplitude distribution), which are, however, properties of the detector alone and can be determined by calibration. An experimental setup was designed and built to demonstrate the practical usability of the method.
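An offline approximation of the FPGA triggering described above can be sketched as follows; the pre/post window semantics mimic the stated 40-sample pre-trigger and 200-sample hold-off, but the function, the toy signal and the threshold are our assumptions, not the firmware implementation.

```python
import numpy as np

def trigger_compress(signal, threshold, pre=40, post=200):
    """Keep only samples around above-threshold excursions: recording starts
    `pre` samples before a crossing and, approximately, stops once the signal
    has stayed below threshold for `post` samples. Discarded segments are set
    to zero, matching how they are treated in the analysis."""
    keep = np.zeros(len(signal), dtype=bool)
    for i in np.flatnonzero(signal > threshold):
        keep[max(0, i - pre):min(len(signal), i + post)] = True
    return np.where(keep, signal, 0.0), keep.mean()

# Toy signal: one isolated pulse on top of low-level noise
rng = np.random.default_rng(1)
sig = rng.normal(0, 0.005, 100_000)
sig[50_000:50_020] += 0.5
compressed, kept_fraction = trigger_compress(sig, threshold=0.05)
print(f"fraction of samples retained: {kept_fraction:.4f}")
```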
The properties of the detector pulse required by the theoretical model have been successfully determined. Monte Carlo simulations were performed to estimate the detection efficiency of the built setup. It was found that the probability of detecting a single neutron is much smaller than in a traditional multiplicity counter and, as a consequence, the expected numbers of doubles and triples events are almost negligible compared to that of the singles events. Measurements were performed and the mean values as well as the covariance functions of the signals were estimated. Values for the singles rates were successfully extracted from the mean current, but the measured covariance functions were found to be zero, making it impossible to recover any doubles rates. These latter results indicate that, in order to effectively demonstrate the use of the proposed method of multiplicity counting, the detection efficiency should be significantly increased. There are two obvious choices to achieve this: one possibility is to use three groups of detectors, where each group contains several small-sized detectors whose signals are combined; the other possibility is to use three large fission chambers. Coordination is in progress with partner institutes to realize such measurements.
5,819.8
2020-06-01T00:00:00.000
[ "Physics" ]
The Relationship Between the Economic Development Levels of the Countries and Their Sporting Achievements in the 2020 Tokyo Olympics
The purpose of this study is to investigate the relationship between the economic development and the order of success of the countries ranked in the top 20 at the 2020 Tokyo Olympics. In this context, the total number of medals of the countries in the top 20 of the total medal count at the Tokyo 2020 Olympics was selected as the measure of sporting success, while the countries' Gross Domestic Product (GDP) levels were considered as the indicator of development. In order to investigate the relationship between sporting success and economic development, the SPSS package program was used. The significance level was set at p < 0.05. Correlation analysis was performed by selecting the total number of medals as the dependent variable, gross domestic product as the independent variable, and population as the control variable. According to the findings of this research, a relationship was found between the Gross Domestic Product (GDP) of the countries and the number of medals obtained at the 2020 Tokyo Olympics. In other words, a relationship has been found between the economic development of countries and the number of medals won at the 2020 Tokyo Olympics, which we can see as an indicator of sporting success.

Introduction
In recent years, interest in sports has been increasing every day and, accordingly, competition in sports has been increasing. The phenomenon of competition in sports involves not only athletes; countries too are competing with each other. In most countries, the international sporting success of athletes has even become a source of prestige for the country and for the managers who govern it. For this reason, the sources of the international sporting achievements of countries have become a subject of research in academic circles. These studies show that the economic development levels of countries are among the foremost factors of international sporting success. The most important reason for this is the hypothesis that more developed countries can allocate more resources to the necessary sports infrastructure investments, and thus can be more successful in the international sports arena (Saatcioglu, 2012). The literature on the relationship between the economic development levels of countries and international sporting success is very rich, and many empirical studies have been carried out on this subject. Some of these studies are as follows. In underdeveloped countries, there is a shortage of funding for sports, and sports facilities and equipment are not at an adequate level. As a result of a Logit model forecast, it was concluded that an increase in the number of medals at the Olympics is associated with an increase in GDP and population; economic development is the basic remedy for sporting backwardness (Andreff, 2001). The economic situation of a country is quite important in sports, because countries with economic prosperity can develop their sports infrastructure more effectively. Countries with a strong sports infrastructure can develop their talented athletes further and bring success to their countries more easily in the international arena (Bernard, 2004). Hoffman and colleagues (2004), in their study of ASEAN countries at the 2000 Sydney Olympics, found that GDP had an impact on medals won at the Olympics.
However, they noted that this effect is limited. Regression analyses of four Olympic Games (Seoul 1988, Barcelona 1992, Atlanta 1996 and Sydney 2000) identified a positive relationship between GDP and the number of medals in all four (Bian, 2005). A correlation analysis was applied between the number of medals won at the Olympics held between 1952 and 2004 and the GDP amounts of the countries; as a result of this analysis, population and GDP size emerged as the main determinants of the total number of medals (Lui, 2008). Rathke and Woitek (2008) documented the relationship between GDP and the number of medals obtained at the Olympics in the model they created, which also included population and a communist-country variable. According to their results, GDP has a positive effect on the number of medals obtained at the Olympics. A 2009 study in China applied a Granger causality test between per capita GDP and investments in the sports industry, and determined that causality runs from per capita GDP toward sports industry investment (Li, 2013). Buts et al. (2013) were the first to examine the relationship between economic development and the number of Olympic medals within the framework of the Paralympic Games. As a result of their research, they concluded that GDP and population have a positive effect on the number of medals. When the literature is examined, it is noted that the level of economic development is an important determinant of sporting success at the Olympics. It is assumed that countries with a relatively large GDP will be able to transfer more resources to sports infrastructure investments and thus become more successful in the international arena. Most of the empirical studies conducted in the literature have also reached conclusions supporting this assumption. Source: IOC. Table 1 shows the total number of medals that countries received at the 2020 Tokyo Olympics. The main reason for ranking countries according to the total number of medals is that, for most countries, a medal in any sport branch, even a bronze medal, is considered an indicator of international sporting success for that country. For this reason, we believe that, when ranking countries by the number of medals, it is more appropriate to rank them not by the number of gold medals but by the total number of medals. The United States is the country with the most medals, having won 113 medals at the Tokyo 2020 Olympics. China is in second place with 88 medals. These two countries also appear in Table 2 with the same ranking. Table 2 shows the 20 highest-ranked countries according to the amount of their GDP. When ranking countries by their level of development, GDP amounts are usually used. If the GDP of a country is high, that country is seen to be more developed, in terms of its investments, infrastructure, production quantity, and the opportunities it offers its citizens, than countries with relatively low GDP (Ünsal, 2007). Source: World Bank. In Table 2, it can be seen that 16 of the countries in the top 20 are also included in Table 1.
Even looking only at Tables 1 and 2, we can assume that there is a relationship between the amount of GDP and the total number of medals; however, in order to strengthen this assumption further, statistical analyses are needed. Education underlies both countries' economic development levels and their international sporting achievements. One of the most important factors in international sporting success is sports education. One of the common features of developed countries is the importance they attach to sports education. When these countries are examined in detail, it is seen that they conduct sports and education together; in this way, their international sporting success levels are high. For example, in the United States, sports scholarships are provided, awarded on the basis of both the student's sporting success and his or her educational success. We can conclude that the United States, which is one of the most developed countries in the world (Table 2) and won the most medals at the 2020 Tokyo Olympics (Table 1), attained these successes partly through sports education. The aim of this study is to investigate the relationship between the international sporting success of the countries ranked in the top 20 at the Tokyo 2020 Summer Olympics and their levels of economic development. In this context, the total number of medals of the countries in the top 20 of the total medal count at the Tokyo 2020 Olympics was chosen as the measure of success, while the gross domestic product levels of the countries were considered as the indicator of development.

Method
In this study, the relationship between economic development levels and countries' sporting success in the international arena was investigated with the following model, established taking into account the relevant literature:

MS_i = α_0 + α_1 GDP_i + α_2 NU_i + ε_i

where MS denotes the number of medals, GDP is the gross domestic product, and NU is the population variable. α_0 in the model represents the constant term, and ε_i represents the error term. All variables were analyzed with logarithmic values. The total number of medals that countries achieved at the Tokyo 2020 Olympics is available on the official website of the International Olympic Committee. The GDP figures and population figures of the countries for the year 2020 were obtained from the official website of the World Bank. In the study, in order to investigate the relationship between sporting success and economic development levels, the SPSS package program was used, and the significance level was set at p < 0.05. Correlation analysis was performed by selecting the total number of medals (MS) as the dependent variable, the amount of gross domestic product (GDP) as the independent variable, and the population (NU) as the control variable. In statistical analyses, correlation analysis is performed to examine the relationships between variables (Bursal, 2019). The necessary assumption to test before the analysis is that the variables are normally distributed. In this regard, the variables were first subjected to a normal distribution conformity test.

Results
Some parametric tests require that the series to be analyzed meet the normal distribution conditions; correlation analysis also requires conformity with the normal distribution (Field, 2009). The hypotheses for the test of conformity to the normal distribution are established as follows: H0: The series follows the normal distribution. H1: The series does not follow the normal distribution.
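A minimal sketch of this normality check, using synthetic log-transformed series in place of the study's IOC and World Bank data (all numbers below are assumptions; note also that fitting the mean and standard deviation before a Kolmogorov-Smirnov test makes the p-values approximate):

```python
import numpy as np
from scipy import stats

# Hypothetical log-transformed medal, GDP and population series for 20 countries
rng = np.random.default_rng(42)
log_gdp = rng.normal(28, 1, 20)
log_pop = rng.normal(17, 1, 20)
log_medals = 0.5 * log_gdp + 0.3 * log_pop + rng.normal(0, 1, 20)

# Kolmogorov-Smirnov test of each standardized series against N(0, 1);
# p > 0.05 means H0 (normality) cannot be rejected
for name, series in [("MS", log_medals), ("GDP", log_gdp), ("NU", log_pop)]:
    z = (series - series.mean()) / series.std(ddof=1)
    stat, p = stats.kstest(z, "norm")
    print(f"{name}: KS statistic = {stat:.3f}, p = {p:.3f}")
```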
Since p > 0.05 in the Kolmogorov-Smirnov test, the H0 hypothesis cannot be rejected at the 5% significance level, so we can say that all of our series conform to the normal distribution. Since our series conform to the normal distribution, we can perform correlation analysis. Correlation analysis helps us to determine the mutual relationship between variables. When applying the correlation analysis, the population variable was treated as the control variable, so that its effects are held under control. The hypotheses of the correlation analysis are as follows: H0: There is no relationship between the variables. H1: There is a relationship between the variables. Since the calculated significance value is p < 0.05, the H0 hypothesis is rejected for this relationship, and it is concluded that the relationship between GDP and the number of medals is significant. When the population variable was controlled for, the correlation coefficient between GDP and the number of medals was determined as (r = 0.554; n = 18; p = 0.006). The correlation coefficient (r) varies between -1 and 1. A coefficient close to -1 indicates a strong inverse relationship between the variables, while a coefficient close to 1 indicates a strong relationship in the positive direction (Bursal, 2017). The r value obtained as a result of the analysis is 0.554. In other words, there is a strong positive relationship between GDP and the total number of medals.

Discussion
The most important factor that comes to the fore in the literature investigating which factors determine the success of countries in the Olympics is the level of economic development. Talent is a very important factor for success in sports, but the discovery of talented athletes and the development of their abilities require infrastructure investments in the sports area (Saatcioglu, 2012). From a theoretical point of view, it is suggested that economically developed countries can transfer more resources to sports infrastructure investments and thus become more successful. It is also accepted that the more effective integration of sports into education in rich countries, and the higher probability that individuals will find free time to devote to sports, have a positive impact on sporting success. Most of the empirical studies in the literature confirm this view and conclude that the level of economic development positively affects sporting success. Sixteen of the countries in the top 20 ranking in terms of GDP are also in the top 20 of the total medal ranking of the Tokyo 2020 Olympics. With the support of the literature, it is an expected result that these 16 countries appear in both tables. However, it is worth mentioning the countries that are not included in both tables. Although India, Mexico, Indonesia and Saudi Arabia are in the top 20 in terms of GDP, they are not in the top 20 in terms of the total number of medals (total medal ranking: India: 33, Indonesia: 42, Mexico: 47, Saudi Arabia: 77). There may be many reasons for this situation in each country.
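Controlling for population in a correlation analysis amounts to a partial correlation: correlate the residuals of the two variables of interest after regressing each on the control. A minimal sketch with synthetic data (the arrays and coefficients are illustrative assumptions; with the study's data this would return the reported r = 0.554 with n = 18):

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z: the Pearson
    correlation of the residuals of x and y after regressing each on z."""
    def residuals(a, b):
        slope, intercept = np.polyfit(b, a, 1)
        return a - (slope * b + intercept)
    return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]

rng = np.random.default_rng(0)
z = rng.normal(size=20)                        # log population (control variable)
x = 0.6 * z + rng.normal(size=20)              # log GDP
y = 0.5 * x + 0.3 * z + rng.normal(size=20)    # log medal count
print(partial_corr(x, y, z))
```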
Many factors may play a role here: the areas toward which resources and sports infrastructure investments are directed, the interest of the people and of the country's rulers in sport, the percentage of the young population within the total population, and political, cultural and climatic characteristics. New Zealand, Hungary, Ukraine, Cuba and Poland are not in the top 20 in terms of GDP. Despite this, they managed to get into the top 20 in terms of the total number of medals (total medal standings: New Zealand: 13, Hungary: 13, Ukraine: 16, Cuba: 18, Poland: 19). Although these countries are relatively behind in terms of economic development level, it can be surmised that they achieve success through their infrastructure investments in sports, the importance they attach to sports and to the athlete, and the place given to sports in their education systems. A common feature of Hungary, Ukraine, Cuba and Poland (though not of New Zealand) is that they were previously, or are still, governed by a socialist regime. Socialist regimes use sports, sports organizations and success in sports as political propaganda (Kılıç, 2016). Because sport is used for propaganda purposes, it occupies an important place, especially in the education sector. For this reason, since aspects such as the discovery of talented players and the development of their abilities are realized more easily, the impact on international sporting success is quite large. Although these countries, except Cuba, no longer implement a socialist regime, the discipline of sports in education inherited from the past remains active today. In examining the relationship between GDP and the total number of medals, we added the population variable to our model as a control variable, mainly because the population variable is also often considered when studying this issue in the literature: population is an important variable in terms of the ability to win Olympic medals (Moosa, 2004). In the correlation analysis that we applied, we therefore took the effects of the population variable under control by including it in the model as a control variable.
3,614.8
2021-12-26T00:00:00.000
[ "Economics" ]
Urinary 3-(3-Hydroxyphenyl)-3-hydroxypropionic Acid, 3-Hydroxyphenylacetic Acid, and 3-Hydroxyhippuric Acid Are Elevated in Children with Autism Spectrum Disorders
Autism spectrum disorders (ASDs) are a group of mental illnesses highly correlated with the gut microbiota. Recent studies have shown that some abnormal aromatic metabolites in autism patients are presumably derived from overgrown Clostridium species in the gut, which may be used for diagnostic purposes. In this paper, a GC/MS-based metabolomic approach was utilized to seek similar biomarkers by analyzing the urine of 62 ASDs patients, aged 1.5-7, compared with 62 non-ASDs controls in China. Three compounds, identified as 3-(3-hydroxyphenyl)-3-hydroxypropionic acid (HPHPA), 3-hydroxyphenylacetic acid (3HPA), and 3-hydroxyhippuric acid (3HHA), were found in higher concentrations in autistic children than in the controls (p < 0.001). After oral vancomycin treatment, urinary excretion of HPHPA (p < 0.001), 3HPA (p < 0.005), and 3HHA (p < 0.001) decreased markedly, which indicated that these compounds may also come from gut Clostridium species. The sensitivity and specificity of HPHPA, 3HPA, and 3HHA were evaluated by receiver-operating characteristic (ROC) analysis. The specificity of each compound for ASDs was very high (>96%). After binary logistic regression analysis, the optimal area under the curve (AUC, 0.962), sensitivity (90.3%), and specificity (98.4%) were obtained from the ROC curve of the prediction probability based on the three metabolites. These findings demonstrate that measurements of the three compounds are strong predictors of ASDs and support their potential clinical utility for identifying a subgroup of ASDs subjects.

Introduction
Autism spectrum disorders (ASDs) are neurodevelopmental disorders characterized by limited social interaction, abnormal use of language, and stereotypical behaviors, interests, and activities [1]. During the last decades, ASDs prevalence estimates have risen to as much as 113/10,000 children in the USA (2012) and 62/10,000 globally [2], corresponding to 1:88 and 1:161 children, respectively. Hence this once rare disease has now become one of the most frequent conditions in child neuropsychiatry, and it deserves more attention. The etiology and pathogenesis of ASDs are not precisely known, although genetic and environmental factors have been proposed as the two primary causes. Heritability estimates of ASDs have shown a decreasing trend in a recent study [3], leaving sufficient room for environmental contributions to explain ASDs. Among the environmental factors possibly relevant to the clinical features, the overgrowth of unusual gut microbial species in a sizable subgroup of autistic patients, reported in several recent studies [4][5][6][7][8], is of great interest. An excess of Ruminococcus and Clostridium species was initially reported in fecal samples from ASDs patients compared with controls [4]. Parracho found a higher incidence of the Clostridium histolyticum group (Clostridium clusters I and II) in the fecal flora of 58 ASDs children compared to 10 healthy children. Interestingly, 12 unaffected siblings of ASDs probands displayed intermediate levels. Several members of the C. histolyticum group are known toxin producers, which could lead to gut dysfunction [7]. Adams et al. found lower levels of bifidobacteria in 58 ASDs children compared to 39 controls. The growth of bifidobacteria may be inhibited by some unusual microbial species overgrown in the gut, such as Clostridium species [8].
Additionally, recent studies have documented elevated concentrations of abnormal aromatic metabolites, presumably derived from overgrown Clostridium species or other gut microbiota, in the urine of autistic individuals [9][10][11][12][13][14]. In this study, to seek similar markers and further explore the possible pathophysiological roles of the gut microbiota in ASDs, we developed a GC-MS based metabolomic approach for urine analysis in 62 autistic individuals and in 62 sex- and age-matched non-ASDs controls.

Patient Recruitment and Sample Collection. This prospective study was approved by the Ethics Committee of the Maternity and Child Care Hospital of Hunan Province. Informed consent was obtained from the parents of the patients. Sixty-two patients (48 males and 14 females, aged from 1.5 to 7 years) previously diagnosed with ASDs, and age/gender-matched non-ASDs controls (48 males, 14 females), were recruited from the Maternity and Child Care Hospital of Hunan Province. None of the children with ASDs had a history of food restriction. Controls with mental retardation, verbal disorders, attention deficit hyperactivity disorder, or tics were excluded, and the ASDs cases were diagnosed according to DSM-IV diagnostic criteria. Children included in the study had no history of anti-anaerobic drug use. Urine samples were collected into untreated vials during routine medical consultations, principally in the morning, and the exact time of collection was recorded. Each urine sample was aliquoted into 1.5 mL Eppendorf tubes and stored at −70°C immediately after collection until analysis.

Sample Pretreatment. The samples were pretreated as described in our previous work [15]. Briefly, urine samples were thawed at room temperature and centrifuged (at 3000 g) for 10 min; 100 μL urine samples (containing 2.5 mmol/L creatinine) were first treated with 30.0 μL urease (1.2 U/μL) at 37°C for 30 min to remove interfering urea, and then spiked with heptadecanoic acid (0.5 mg/mL, 50 μL). Proteins, including the added urease, were precipitated with 800 μL ethanol and removed after 15 min centrifugation (12000 r/min). Forty microliters of 0.04 mol/L hydroxylamine hydrochloride and 60 μL of 0.05 mol/L Ba(OH)2 were added to the deproteinized solution, and the mixture was then incubated at room temperature for 20 min. Subsequently, the mixture was evaporated to dryness, and the compounds in the dried residue were converted to TMS derivatives with 100 μL of BSTFA/TMCS (100:1) and analyzed by GC/MS. More experimental details can be found in our patented technology (ZL 201210114246.2).

GC-MS Analysis. An Agilent GC-MS system (7890-5975C) was used to analyze the derivatized samples. A sample (1 μL) was injected with a split ratio of 50:1 into the GC and separated on a fused silica HP-5 capillary column (30 m, 0.25 mm inside diameter, 0.25 μm film thickness). The chromatographic conditions were as follows. The injector temperature was set at 250°C. High-purity nitrogen was used as carrier gas at a constant flow rate of 1.5 mL/min. The column temperature was initially kept at 60°C for 4 min, ramped to 320°C at 6.5°C/min, and then held for 10 min. The parameters of the mass spectrometer were as follows. The interface temperature and ion source temperature were 300°C and 230°C, respectively. Ions were generated by electron impact (EI) at 70 eV. Masses were acquired from m/z 50 to 800.
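The oven program above fixes the length of each GC run; a quick sanity-check sketch of the timing, using only the values stated in the text:

```python
# GC oven program: initial hold, linear ramp, final hold
initial_T, final_T, rate = 60.0, 320.0, 6.5    # degC, degC, degC/min
hold_initial, hold_final = 4.0, 10.0           # min
ramp_time = (final_T - initial_T) / rate       # 40 min
total_run = hold_initial + ramp_time + hold_final
print(total_run)                               # 54 min per sample
```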
Drift of the retention time of each peak was minimized by locking heptadecanoic acid at 36.00 min with retention time locking technology (RTL, Agilent). GC/MSD ChemStation software was used for automatic acquisition of GC total ion chromatograms (TICs) and fragmentation patterns. Each compound had a fragmentation pattern composed of a series of fragment ions, whose mass-to-charge ratios and abundances were compared with standard mass spectra in the NIST (National Institute of Standards and Technology) mass spectra library by the ChemStation software. Peaks with a similarity index of more than 70% were assigned compound names. The chromatograms were subjected to noise reduction, and peaks with intensities higher than three times the noise level (S/N > 3) were recorded prior to peak area integration. The relative intensity of each peak was normalized against that of the internal standard in each GC/MS run. All known artifact peaks, such as peaks due to column bleed and BSTFA artifact peaks, were excluded from the final data analyses. Integrated peak areas of multiple derivative peaks belonging to the same compound were summed and considered as a single compound. Each sample was characterized by the same number of variables, and each of these variables was represented across all observations in the same sequence. Thus, a data matrix was generated from the intensities of the peaks common to all samples, characterizing the biochemical pattern of each sample. The resulting three-dimensional matrix, consisting of peak indices (retention time (RT)-m/z pairs), sample names (observations), and normalized peak areas (variables), was exported for principal component analysis (PCA).

Statistical Analysis. After GC/MS analysis, each sample was represented by a GC/MS TIC, and the ion peak areas of the compounds were integrated. The peak area ratio of each compound to creatinine was calculated as the response; the results are expressed as ratios to the urinary creatinine concentration. Statistical analysis was used to compare the metabolite levels and determine significant differences between the ASDs group and the control group. Differentially expressed compounds with p values of <0.05 were considered statistically significant. Principal component analysis (PCA) was used to differentiate the samples and was performed with Mass Profiler Professional software (Agilent). All of the data from the differentially expressed compounds were used for constructing the PCA models. The score plots of the first three principal components allowed the visualization of the data and the comparison of samples between the ASDs and control groups. The classification performance (specificity and sensitivity) was assessed by the area under the curve (AUC) of the receiver-operating characteristic (ROC) curves.

Metabolomic Profiling of Urine Samples. Representative GC/MS TIC chromatograms of urine samples from the ASDs group and the control group are displayed in Figure 1. The majority of the peaks in the chromatograms were identified as endogenous metabolites by the NIST mass spectra library, including amino acids, organic acids, carbohydrates, amides, and fatty acids (Table S1).

Pattern Recognition and Function Analysis. After normalization of the data using creatinine as internal standard, a number of differentially expressed compounds with p values of <0.05 were selected by statistical analysis to construct a PCA model for assessing the clustering of the ASDs group and the control group.
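The preprocessing just described reduces to two simple operations per sample: discard peaks below the S/N cutoff, then express the survivors relative to the creatinine (internal standard) peak. A minimal sketch with hypothetical peak areas (the function and all values are assumptions, not the authors' pipeline):

```python
import numpy as np

def normalize_peak_table(areas, creatinine_area, snr, snr_min=3.0):
    """Drop peaks below the S/N threshold and express the remaining peak
    areas as ratios to the creatinine peak area."""
    areas = np.asarray(areas, dtype=float)
    keep = np.asarray(snr) >= snr_min
    return areas[keep] / creatinine_area, keep

# Hypothetical integrated areas for one sample; the third peak fails S/N >= 3
areas = [1.2e6, 3.4e5, 8.0e3]
snr = [120.0, 15.0, 2.1]
ratios, kept = normalize_peak_table(areas, creatinine_area=2.0e6, snr=snr)
print(ratios, kept)
```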
The PCA scores plot showed a clear separation of the two groups apart from only a few ASDs cases (Figure 2(a)), which could be explained by the possibility that the etiology and pathogenesis of these cases are caused by genetic factors. According to previously reported studies, an elevated concentration of 3-(3-hydroxyphenyl)-3-hydroxypropionate (HPHPA) may be a catabolic product of phenylalanine by Clostridium species [9]. Interestingly, besides HPHPA, we also found two aromatic metabolites, 3-hydroxyphenylacetic acid (3HPA) and 3-hydroxyhippuric acid (3HHA), among these differentially expressed compounds. The three compounds were further qualitatively analyzed by comparing the retention times and fragment ions of the chromatograms between the urine samples and the corresponding standards (Figures S1-S4; see Supplementary Material). All of the data support the identification of the three compounds as 3HPA, HPHPA, and 3HHA, respectively. Figures 2(b), 2(c), and 2(d) graphically show the concentration distributions of urinary HPHPA, 3HPA, and 3HHA by age, respectively. The graphs clearly distinguish the ASDs cases from the controls. There was no correlation between any pair of HPHPA, 3HHA, and 3HPA (p values were all greater than 0.05; data not shown). Statistical results for the three compounds are summarized in Table 1. There were no statistical differences in the means for age. 3HPA (p < 0.001), HPHPA (p < 0.001), and 3HHA (p < 0.001) concentrations were significantly higher in ASDs children compared with age-matched controls. Effect of Vancomycin on Urinary Excretion of the Three Compounds. Some studies have tested vancomycin treatment for ASDs, since orally administered vancomycin is virtually not absorbed and is generally effective against gram-positive bacteria and Clostridium species [9,16,17]. To prevent the emergence of vancomycin-resistant strains, we made some modifications to the vancomycin treatment. Fifty HPHPA-positive autistic children (9/50 patients 3HPA-positive and 17/50 patients 3HHA-positive) were selected for oral vancomycin treatment at standard age-appropriate dosages (50 mg/kg/d, 30 days as one therapeutic course) followed by supplement therapy with a Bifidobacterium agent (Bifidobacterium BB-12, 2 pills a day). After one therapeutic course, the treatment was discontinued for 15 days and the next course began with the Bifidobacterium agent treatment only. The subsequent Bifidobacterium agent treatment followed this cycle and continued depending on the severity of the patients' condition, assessed by the Autism Behavior Checklist (ABC) and the excretion of the three compounds. Two months later, a paired-sample t-test was applied to test the change in the amounts of the three compounds before and after treatment. Significant decreases in the levels of HPHPA (mean value from 302.78 to 37.06 mmol/mol creatinine, p < 0.001), 3HPA (from 222.30 to 15.89 mmol/mol creatinine, p < 0.005), and 3HHA (from 56.59 to 5.95 mmol/mol creatinine, p < 0.001) were found following the oral administration of vancomycin. HPHPA, 3HPA, and 3HHA were completely eliminated in 35/50, 6/9, and 12/17 cases after vancomycin treatment, respectively. Following the cessation of this treatment, 3-6 months later the concentration of HPHPA had almost recovered to its initial level in 3 patients and had recovered to 0.08-0.45 times the initial values in 12 patients. The levels of 3HPA and 3HHA recovered to 0.1-0.3 times their initial values in 3 patients.
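The before/after comparison above can be reproduced in outline with a paired-sample t-test; the arrays below are illustrative stand-ins, not the study data.

```python
# Hedged sketch of the paired before/after test described above.
import numpy as np
from scipy import stats

before = np.array([302.8, 250.1, 410.5, 198.7, 355.2])  # mmol/mol creatinine
after = np.array([37.1, 20.4, 55.9, 12.3, 48.8])

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```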
This recovery after cessation may be consistent with the frequent recurrence of gastrointestinal Clostridium species due to germination of resistant spores following antibiotic treatment [9]. However, the recurrence was less severe than in the reported studies, which may be attributed to the supplement therapy with the Bifidobacterium agent, a probiotic bacterium that supports the dynamic equilibrium of the intestinal microecology. Furthermore, after two therapeutic courses, the ABC score decreased significantly (mean value from 73 to 59); 90% of the autistic children showed improved communication and eye contact, but no obvious improvement in stereotyped behavior was seen. These findings indicate that these compounds were probably derived from overgrown intestinal microbiota, and that the pathogenesis of ASDs may not be correlated solely with overgrown gut pathogenic bacteria. Additionally, the effect of this treatment on intestinal symptoms was also studied. 32/62 children with ASDs had frequent constipation; this gut symptom was positively correlated with the HPHPA level (Pearson correlation r = 0.253, p < 0.05). Of the 50 HPHPA-positive patients enrolled for treatment, 22 individuals had constipation. Interestingly, after two therapeutic courses, all 22 patients with constipation showed remarkable improvement, suggesting that gut symptoms in ASDs may also result from overgrown gut pathogenic bacteria. Suggested Pathway for the Metabolism of HPHPA, 3HPA, and 3HHA. Based on a preexisting hypothesis [9] and our experimental results, we speculate that these compounds derive from disordered phenylalanine metabolism by overgrown intestinal microbiota such as Clostridium species. As shown in Figure 3, dietary phenylalanine is first converted into m-tyrosine, o-tyrosine, and 2,3-dihydroxyphenylalanine by gut microbiota, for example, chloridazon-degrading bacteria [18]. It has been shown that m-tyrosine induces a characteristic behavioral syndrome in rats, consisting of forepaw padding, head weaving, backward walking, splayed hind limbs, wet dog shakes, hyperactivity, and hyperreactivity, and depletes the brain of catecholamines. Therefore, m-tyrosine might play a direct role in causing abnormal behaviors in ASDs [19]. It is also possible that m-tyrosine might form an analog of dopamine, if m-tyrosine is metabolized by the same enzymes that convert tyrosine to dopamine (Figure 3) [9]. m-Tyrosine is converted to m-tyramine and 3-hydroxyphenylpropionic acid by decarboxylation and deamination, respectively (Figure 3). It is documented that Escherichia coli can induce the activities of amine oxidase (MaoA) and phenylacetaldehyde dehydrogenase (PadA) for the catabolism of aromatic amines. Phenylethylamine, tyramine, and dopamine are substrates of MaoA and PadA, leading to formation of the corresponding aromatic acids, that is, phenylacetic acid, 4-hydroxyphenylacetic acid, and 3,4-dihydroxyphenylacetic acid, respectively [20,21]. Therefore, 3HPA would be expected to form if sufficient m-tyramine is available as a substrate for the two enzymes (Figure 3). The other metabolite of m-tyrosine, 3-hydroxyphenylpropionic acid, is converted sequentially into HPHPA and then 3-hydroxybenzoic acid, as reported by Shaw [9]; the latter product then conjugates with glycine to form 3HHA (Figure 3). This pathway is summarized as a directed graph in the sketch below.
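A minimal sketch, encoding only the conversions named in the text (Figure 3) as a small directed graph, with a depth-first search that prints the route from phenylalanine to each urinary marker:

```python
# Directed-graph restatement of the suggested pathway; edges follow the text.
PATHWAY = {
    "phenylalanine": ["m-tyrosine", "o-tyrosine", "2,3-dihydroxyphenylalanine"],
    "m-tyrosine": ["m-tyramine", "3-hydroxyphenylpropionic acid"],
    "m-tyramine": ["3HPA"],                        # via MaoA/PadA
    "3-hydroxyphenylpropionic acid": ["HPHPA"],
    "HPHPA": ["3-hydroxybenzoic acid"],
    "3-hydroxybenzoic acid": ["3HHA"],             # glycine conjugation
}

def routes(start, target, path=None):
    # Yield every path from `start` to `target` in the pathway graph.
    path = (path or []) + [start]
    if start == target:
        yield path
    for nxt in PATHWAY.get(start, []):
        yield from routes(nxt, target, path)

for marker in ("3HPA", "HPHPA", "3HHA"):
    for r in routes("phenylalanine", marker):
        print(" -> ".join(r))
```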
Sensitivity and Specificity. Receiver-operating characteristic (ROC) analysis, with sensitivity (true positives) plotted against 1 minus specificity (false positives), was used for HPHPA, 3HPA, and 3HHA to evaluate the possibility of using these markers for diagnosing ASDs. Selected sensitivity and specificity calculations for the three metabolite measures in detecting ASDs cases are presented in Table 2. High specificity (>96%) was obtained for each metabolite. After regression analysis combining the three metabolites, the optimal AUC (0.962), sensitivity (90.3%), and specificity (98.4%) were obtained from the ROC curve of the predicted probability (Figure 4), which means that the three metabolites are good discriminators between ASDs cases and non-ASDs controls. These results indicate that the measurements of the three metabolites are strong predictors of ASDs and support their potential clinical utility for identifying a subgroup of ASDs subjects in whom disordered phenylalanine metabolism may be a salient characteristic. Conclusions. The present metabolomic profiling approach provides a comprehensive analysis of metabolites in urine, and elevated levels of three aromatic acids, HPHPA, 3HPA, and 3HHA, were found in the ASDs group compared with the controls. In particular, vancomycin had a significant effect in decreasing the excretion of these compounds, indicating that they appear to be derived from the intestinal microbiota. Further studies will have to define the degree of overlap between elevated urinary HPHPA, 3HPA, and 3HHA and intestinal microbiota composition in ASDs patients, as well as their potential relationship with gastrointestinal symptoms, abnormal behavior, and personalized response to pharmacological treatments. Additionally, the sensitivity and specificity data assessed by ROC analysis demonstrate that the measurements of the three metabolites are strong indicators of ASDs.
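A hedged sketch of the combined-marker analysis above: a logistic model on the three metabolites and the AUC of its predicted probability, here on simulated stand-in data (the study used 62 cases and 62 controls; the group means below are placeholders).

```python
# Combined-marker ROC sketch: logistic model over three metabolites.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 62
cases = rng.lognormal(mean=[4.0, 3.5, 3.0], sigma=0.6, size=(n, 3))
controls = rng.lognormal(mean=[2.0, 1.8, 1.5], sigma=0.6, size=(n, 3))
X = np.vstack([cases, controls])
y = np.array([1] * n + [0] * n)          # 1 = ASDs, 0 = control

model = LogisticRegression().fit(X, y)
prob = model.predict_proba(X)[:, 1]      # predicted probability per subject
print(f"AUC = {roc_auc_score(y, prob):.3f}")
```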
3,949.4
2016-03-30T00:00:00.000
[ "Medicine", "Biology" ]
An integrated pan‐cancer analysis of TFAP4 aberrations and the potential clinical implications for cancer immunity Abstract Studies have shown that transcription factor activating enhancer binding protein 4 (TFAP4) plays a vital role in multiple types of cancer; however, the TFAP4 expression profile is still unknown, as is its value within a human pan‐cancer analysis. The present study comprehensively analysed TFAP4 expression patterns across 33 types of malignancies, along with the significance of TFAP4 for prognosis prediction and cancer immunity. TFAP4 displayed inconsistent levels of gene expression across the diverse cancer cell lines and abnormal expression within most malignant tumours, which closely corresponded to overall survival. More importantly, the TFAP4 level was also significantly related to the degree of tumour immune infiltration. TFAP4 expression was correlated with gene markers of tumour‐infiltrating immune cells and with immune scores. TFAP4 expression was correlated with tumour mutation burden and microsatellite instability in different cancer types, and enrichment analyses identified TFAP4‐associated terms and pathways. The present study comprehensively analysed the expression of TFAP4 across 33 distinct types of cancer, which revealed that TFAP4 may play a vital role during cancer formation and development. TFAP4 is related to differing degrees of immune infiltration within cancers, which suggests the potential of TFAP4 as an immunotherapy target. Our study demonstrated that TFAP4 plays an important role in tumorigenesis as a prognostic biomarker, which highlights the possibility of developing new targeted treatments. human malignancies. 2 Pan-cancer analysis is the analysis of the molecular abnormalities of various types of cancer, which can identify common features and heterogeneities in the vital biological processes that are dysregulated across diverse cancer cell lineages. Pan-cancer analysis projects, such as the Cancer Cell Line Encyclopedia (CCLE) and The Cancer Genome Atlas (TCGA), have been created based on the assessment of different human cancer cell lines and tissues at the epigenomic, genomic, proteomic and transcriptomic levels. [3][4][5] Recently, pan-cancer analysis has been used to identify certain functional and pathway genes, which allows for a comprehensive and thorough understanding of human cancers. For example, tumour hypoxia-associated multiomic molecular characteristics have been investigated, and it has been suggested that some molecular alterations can be correlated with drug sensitivity or resistance to antitumour agents. This helps to comprehensively understand tumour hypoxia at the molecular level and has certain implications for cancer treatment in clinical practice. 6 New data on FOXM1 up-regulation frequency, aetiology and outcomes in human cancers have been derived from 33 TCGA cancers. 7 The information obtained from these cancers has revealed lncRNA-mediated dysregulation in cancer at a system level, and provides a valuable approach and resources to investigate lncRNA functions in the context of cancer. 8 Characterizing the occurrence frequency and variability of immune phenotypes in a variety of types of cancer helps to understand the immune status of untreated cancers, and this approach has been applied to more than 9000 TCGA-derived cancer gene expression data sets.
9 Therefore, pan-cancer analysis can reveal patterns that are beneficial for developing combination and individualized therapies for the treatment of various cancers. Transcription factor activating enhancer binding protein 4 (TFAP4) is involved in cancer proliferation, metastasis, differentiation, angiogenesis and other biological functions. 10 In recent years, it has been suggested that the overexpression of TFAP4 may indicate a poor prognosis for various cancers, including hepatocellular carcinoma (HCC), non-small cell lung carcinoma (NSCLC), prostate cancer (PCa), colorectal cancer (CRC) and gastric cancer (GC). [11][12][13][14][15] According to our prior research, TFAP4 acts as an efficient prognostic biomarker and also activates the PI3K/AKT signal transduction pathway to enhance the metastasis and invasion of HCC. 16 Other studies have examined the proliferation, overexpression or mutation of TFAP4 in specific types of cancer, but those studies had small sample sizes and heterogeneous methods. Additionally, research on TFAP4 has mainly focused on an individual or limited number of types of cancer, and no available studies have comprehensively examined several types of cancer simultaneously to identify their similarities and differences. This information is of great importance for understanding the roles of TFAP4 in various cancers, so a comprehensive analysis is urgently needed. To that end, and taking advantage of the large data sets from TCGA, the present study aimed to examine TFAP4 expression profiles and their prognostic significance among human cancers. Additionally, the associations between TFAP4 and the levels of tumour infiltration, tumour mutational burden (TMB) and microsatellite instability (MSI) were analysed for different types of tumour using correlation analysis. Gene set enrichment analysis (GSEA) was conducted to investigate possible underlying mechanisms. The results of the present study can help to clarify the vital roles of TFAP4 in the context of tumours, reveal the possible association of TFAP4 with tumour-immune interactions and illustrate the potential mechanism. | Patient data sets and processing TCGA, a cornerstone of the cancer genomics projects, has characterized more than 20,000 primary cancer samples and corresponding non-carcinoma samples from 33 types of cancer. In the present study, the TCGA-processed level 3 RNA-sequencing data sets, along with the corresponding clinical annotations, were obtained using the University of California Santa Cruz (UCSC) cancer genome browser (https://tcga.xenahubs.net, accessed April 2020). The CCLE public project has comprehensively characterized a tremendous number of human tumour models both genetically and pharmacologically (https://portals.broadinstitute.org/ccle). To examine differential gene expression in cancers on a larger scale, the CCLE database, which contains RNA-sequencing data sets for over 1,000 cell lines, was used. For this research, only open-access data were used, so Ethics Committee approval was not required. | Screening of TFAP4 differential expression and its survival-associated cancers To compare gene expression levels between the cancerous and adjacent normal samples, data regarding TFAP4 gene expression were extracted from the 33 TCGA cancer types to form an expression matrix, as shown in Table S1. Thereafter, the expression matrix and clinical information were matched by patient ID.
Afterwards, a univariate Cox model was used to calculate the association between gene expression levels and patient survival, where P < .05 for TFAP4 in a specific cancer was deemed statistically significant. A survival-associated forest plot was drawn, and a Kaplan-Meier (KM) analysis was conducted to compare the overall survival (OS) of TCGA cancer patients stratified according to the median TFAP4 expression level, using the log-rank test. | TFAP4 and tumour immunity The Tumour Immune Estimation Resource (TIMER, https://cistrome.shinyapps.io/timer/) represents an integrated approach to systematically analysing the immune infiltrates of different types of cancer. 17 In TIMER, a deconvolution statistical approach is used to infer tumour-infiltrating immunocyte levels based on gene expression data. 18 Using the TIMER algorithm, we examined the associations between TFAP4 levels and six different immune infiltrate levels (Table S2). For the present study, the association of TFAP4 expression with each leucocyte phenotype across the 33 cancer types was computed. Additionally, we examined the associations of TFAP4 levels with tumour-infiltrating immunocyte gene markers selected based on previous research. [20][21][22] The correlation analysis generated the estimated statistical significance and Spearman's correlation coefficient. Then, an expression heat map was plotted for each gene pair within each type of cancer. The estimation of stromal and immune cells in malignant tumour tissues using expression data (ESTIMATE for short) is an approach that uses gene expression profiles to predict the purity of tumours and the infiltrating stromal cells/immunocytes within tumour tissues. 23 The ESTIMATE algorithm produces three scores on the basis of single-sample Gene Set Enrichment Analysis (ssGSEA): 1) the stromal score, which quantifies stromal cells within the tumour tissue, 2) the immune score, which assesses immunocyte infiltration within the tumour tissue, and 3) the estimate score, which infers tumour purity. In this study, we used the ESTIMATE algorithm to estimate both immune and stromal scores (Table S3) for tumour tissues from the corresponding transcription data. Then, we calculated the correlations between these scores and TFAP4 expression. TMB measures the number of mutations in a specific cancer genome. Numerous studies have explored the significance of using TMB as a biomarker for predicting which patients would be most responsive to checkpoint inhibitors. 24 We downloaded the somatic mutation data for all TCGA patients (https://tcga.xenahubs.net), calculated their TMB scores (Table S4) and then determined the correlation between TMB and TFAP4. (Figure 2 caption: box plot showing the association of TFAP4 expression with pathological stage for 21 types of cancer.) MSI is characterized by widespread length polymorphisms of microsatellite sequences due to DNA polymerase slippage. Recently, it has been suggested that patients with high-MSI cancers benefit from immunotherapy, and MSI has been utilized as an indicator of genetic instability in cancer detection. 25 We computed the MSI score for each patient, as shown in Table S5, and subsequently performed a correlation analysis between MSI and TFAP4.
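As an illustration of the survival screen described above, the sketch below fits a univariate Cox model and runs a median-split log-rank test with the `lifelines` package; the tiny DataFrame and its column names are hypothetical placeholders, not TCGA data.

```python
# Hedged sketch of a univariate Cox fit plus median-split log-rank test.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "time": [34, 120, 56, 210, 88, 15, 300, 64],       # days to event/censoring
    "event": [1, 0, 1, 1, 0, 1, 0, 1],                 # 1 = death observed
    "TFAP4": [8.2, 3.1, 7.5, 2.8, 4.0, 9.1, 2.2, 6.7], # expression (placeholder)
})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
print(cph.summary[["coef", "p"]])

high = df["TFAP4"] >= df["TFAP4"].median()             # median-split groups
res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                   event_observed_A=df.loc[high, "event"],
                   event_observed_B=df.loc[~high, "event"])
print(f"log-rank p = {res.p_value:.3f}")
```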
| Pan-cancer expression landscape of TFAP4 According to the CCLE analysis results, TFAP4 displayed inconsistent gene expression levels among the various cancer cell lines (P = 1.3e-11). | TFAP4 level was related to the level of immune infiltration Tumour-infiltrating lymphocytes (TILs) can serve as independent predictors of sentinel lymph node status and cancer survival. As a result, the present study examined the correlation between TFAP4 expression and immune infiltration levels; representative results are shown in Figure 7A, and pan-cancer associations of TFAP4 levels with the levels of immune infiltration are presented in Figure S1 and Table S6. Using CIBERSORT, the detailed immunocyte compositions of all TCGA patients were calculated, after which the correlations between 22 immunocyte types and TFAP4 expression were determined for 33 types of cancer, as seen in Table S7. We found that many immunocytes were significantly correlated with TFAP4 levels. As seen in Figure 7B, in CHOL, OV, UCS and UVM, only one type of immunocyte was correlated with the TFAP4 level, while at least two immunocyte types were correlated with TFAP4 levels in the other cancers. | Correlations of TFAP4 level with immune markers To investigate the association of TFAP4 expression with different immune infiltrating cells, the relationships between TFAP4 expression and immunocyte gene markers were analysed; the results suggest that TFAP4 might regulate the immune response in these cancer types. | Correlation analysis with ESTIMATE score, TMB and MSI The ESTIMATE method was developed to calculate the immune and stromal scores of cancer tissues. Using the ESTIMATE method, we calculated the immune, stromal and estimate scores, after which we evaluated the relationship between immune/stromal scores and TFAP4 expression. Figure 7D shows the typical results for HCC, in which TFAP4 expression is significantly correlated with both stromal and estimate scores. The detailed correlation results are summarized in Table 2. Moreover, the association between TMB/MSI and TFAP4 expression was also evaluated, as seen in Table 3. We found that TFAP4 expression was correlated with TMB and MSI in several cancer types. | Functional analysis The biological effect of TFAP4 expression was assessed using GSEA. In HCC, TFAP4 showed significant enrichment in metabolism-related GO terms and KEGG pathways, as seen in Figures 8C and 8D, respectively. The pan-cancer functional GO and KEGG lists for TFAP4 are available in Tables S9 and S10. TFAP4 was also found to be correlated with TIL gene markers, as seen in Figure 7C. ESTIMATE has been reported as a metric for evaluating cancer patient prognosis. 29 Recently, numerous studies have used the ESTIMATE method to assess various tumours, and it has been successfully applied to genomic data. For instance, ESTIMATE has been used to predict prognosis in glioblastoma and cutaneous melanoma patients. 30,31 Using the TCGA cohort, the ESTIMATE approach was utilized to generate immune and stromal scores. We found that TFAP4 was negatively correlated with the ESTIMATE scores. | Discussion Gene mutations are the primary cause of cancer formation. 32 Specific gene mutations may predict patient prognosis and treatment response. 33 However, further studies are required to determine whether TFAP4 can serve as a predictor for the efficacy of immunotherapy in these types of cancer. Taken together, the findings of the present study provide clues to the association between TFAP4 and cancer immunity. Collectively, our comprehensive pan-cancer analysis has characterized TFAP4 within cancer cell lines and tissues. Moreover, we have found that TFAP4 can serve as a valuable prognostic biomarker for some types of cancer.
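As a concrete illustration of the score correlations above, the minimal sketch below computes Spearman's rho between TFAP4 expression and an ESTIMATE-style immune score; both arrays are illustrative placeholders for a single cancer type.

```python
# Spearman correlation between expression and a per-patient score (sketch).
import numpy as np
from scipy import stats

tfap4 = np.array([2.1, 5.4, 3.3, 7.8, 4.6, 6.1, 1.9, 5.0])      # expression
immune_score = np.array([1200, -300, 800, -900, 150, -500, 1400, -100])

rho, p = stats.spearmanr(tfap4, immune_score)
print(f"Spearman rho = {rho:.3f}, p = {p:.3f}")
```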
Based on the results of the present study, the TFAP4 level is related to cancer immunity. Moreover, our new integrative omics-based workflow may be adopted to generate hypotheses about novel targets for cancers. CONFLICT OF INTEREST The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. DATA AVAILABILITY STATEMENT Publicly available data sets were analysed in this study. These data can be found at https://tcga.xenahubs.net and https://portals.broadinstitute.org/ccle.
2,970.8
2020-12-29T00:00:00.000
[ "Medicine", "Biology" ]
Influence of the bus waveguide on the linear and nonlinear response of a taiji microresonator We study the linear and nonlinear response of a unidirectional reflector where a nonlinear breaking of the Lorentz reciprocity is observed. The device under test consists of a racetrack microresonator, with an embedded S-shaped waveguide, coupled to an external bus waveguide (BW). This geometry of the microresonator, known as a "taiji" microresonator (TJMR), allows selective coupling of counter-propagating modes depending on the propagation direction of the incident light and, at the nonlinear level, leads to an effective breaking of Lorentz reciprocity. Here, we show that a full description of the device needs to consider also the role of the BW, which introduces (i) Fabry-Perot oscillations (FPOs) due to reflections at its facets, and (ii) asymmetric losses, which depend on the actual position of the TJMR. At sufficiently low powers the asymmetric loss does not affect the unidirectional behavior, but the FP interference fringes can cancel the effect of the S-shaped waveguide. However, at high input power, both the asymmetric loss and the FPOs contribute to the redistribution of the energy between the clockwise and counterclockwise modes within the TJMR. This strongly modifies the nonlinear response, giving rise to counter-intuitive features where, due to the FP effect and the asymmetric losses, the BW properties can determine the violation of the Lorentz reciprocity and, in particular, the difference between the transmittance in the two directions of excitation. The experimental results are explained by using an analytical model based on the transfer matrix approach, a numerical finite-element model, and intuitive interference diagrams. I. INTRODUCTION In the last decade, considerable effort has been devoted to implementing optical circuits which show different behavior depending on the propagation direction of the incident light [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18][19][20]. The realization of an integrated system capable of working as an optical isolator in the linear regime is prohibited by the Lorentz reciprocity theorem [21,22]. This ensures that transmission through any linear and non-magnetic medium does not depend on the direction of propagation. However, by properly engineering the optical system, it is possible to induce a non-Hermitian behavior and obtain direction-dependent properties [23][24][25]. A widely exploited non-Hermitian system is a racetrack microresonator with an embedded S-shaped waveguide (taiji microresonator, TJMR). The TJMR with a gain medium has been studied to achieve unidirectional behaviour in semiconductor ring laser devices [16][17][18] and, recently, in topological lasers [19,26]. In [25], we studied the unidirectional reflector behaviour of the TJMR. When a TJMR is coupled to a bus waveguide (BW), the transmission in both excitation directions is the same while the reflection can assume completely different values. Moreover, such a non-Hermitian design can be combined with the nonlinear material response to break the Lorentz reciprocity theorem, as was demonstrated in [8]. There, the breaking of reciprocity is observed both in a direction-dependent nonlinear shift of the TJMR resonances and in a direction-dependent optical bistability loop.
These results are strictly related to the role of the S-shaped waveguide, which selectively couples the counter-propagating modes in a direction-dependent way. Therefore, the energy stored within the TJMR differs between the two excitation directions. While the experiments in [8] were restricted to the simplest configurations and provided a pioneering understanding of the basic effect, here we proceed in our analysis by investigating in full detail the role of the BW in this physics. In fact, the reflections at the BW facets [27,28] and the BW propagation losses cause a redistribution of the internal energy in the TJMR which depends on the actual position of the microresonator along the waveguide. In particular, we report a joint experimental and theoretical study of the interference of the BW optical mode with the clockwise (CW) and counterclockwise (CCW) TJMR modes. We discuss the response of the system in the linear and nonlinear regimes, where the microresonator-position-dependent asymmetric propagation losses and the Fabry-Perot oscillations (FPOs) redistribute the internal energy between both modes, yielding a direction-dependent response. The structure of the paper is the following. In section II we report the experimental evidence of different transmission and reflection behaviors in the linear and nonlinear regimes. In section III we discuss the numerical simulations which reproduce the experimental observations. In section IV we draw the conclusions. II. EXPERIMENTS A. The device and the experimental setup The BW/TJMR coupled system (the device in the following) is built on single-mode channel waveguides made of a silicon oxynitride (SiON) film on a 6 inch silicon wafer; see [25] for more details. The TJMR consists of a racetrack resonator with an S-shaped waveguide across it, as shown in Fig. 1 (a). The tips of the S-shaped waveguide have a dark-cavity shape to trap the propagating mode and, consequently, to avoid back-reflections [29]. The coupling between the waveguides is ensured by three directional couplers: one for the BW (t_1, k_1) and two for the S-shaped branch (t_2, k_2 and t_3, k_3). The perimeter of the racetrack is p = z_1 + z_2 + z_3 = 810.24 µm (see Fig. 1), while the length of the S-shaped waveguide is z_4 = 391.12 µm. The BW has two polished end facets where light is input or output by butt-coupling tapered fibers. Its length is given by l_L + l_R, where l_L and l_R define the relative position of the TJMR along the BW. We measured two samples with equal TJMR parameters and l_L ≈ 0.431 mm but different l_R: l_R ≈ 5.52 mm and l_R ≈ 1.062 mm. More details on the device are reported in [25]. The experimental setup shown in Fig. 1 allows measuring the transmission and reflection spectra of the device. A continuous-wave tunable laser (Yenista Optics, TUNICS-T100S) operating in the IR range (1490-1640 nm) is fiber-coupled to an erbium-doped fiber amplifier (IPG Photonics). To prevent laser damage, its emission passes through an isolator, and the resulting signal is adjusted in polarization by means of a polarization control stage. After that, the light is coupled to a fiber circulator, which sends the light into a lensed tapered fiber. The light is then butt-coupled to the device using an xyz piezo-positioner for correct alignment. At the device output, another lensed tapered fiber collects the transmission response and sends the light into an InGaAs detector-T (Thorlabs, PDA20CS(-EC)).
At the same time, the light which is back-reflected at the device input facet is filtered out by the circulator and acquired by another InGaAs detector-R (Thorlabs, PDA20CS2). The detector-T and detector-R signals are then measured simultaneously with an oscilloscope (PicoScope 4000 Series). We note that at high input powers, only the transmission spectra are measured because of the damage threshold of the optical circulator. By turning the device on the sample holder, we input the light in either the forward or the reverse configuration. In the forward configuration, light is CCW-coupled to the TJMR (see blue arrows in Fig. 1 (a)). Therefore, neglecting the FPOs due to reflections at the BW facets, the light circulates in the outer path and the S-shaped waveguide is just a source of losses. In this case only a finite transmittance is recorded. In the reverse configuration, light is CW-coupled to the TJMR and part of it is coupled into the CCW direction by means of the S-shaped waveguide (see red arrows in Fig. 1 (a)). Therefore, in this case, we measure both finite transmission and reflection signals from the device. In the linear regime this leads to the unidirectional reflector behavior described in Ref. [25]. B. Experimental results in the linear regime The transmission and reflection spectra of two devices with different BW lengths are shown in Figure 2 for both the forward and reverse configurations. Panel (a) refers to l_R ≈ 5.52 mm while panel (b) to l_R ≈ 1.062 mm. In agreement with the Lorentz reciprocity theorem, the transmission spectra for the forward and reverse configurations are the same. They display a set of resonance dips at the TJMR resonances superimposed on short FPOs due to the reflections at the input and output facets of the BW. Each resonance dip exhibits a typical Lorentzian shape and never shows a doublet as in the case of backscattering [30,31]. This means that, in our case, the surface wall roughness is not a dominant source of intrinsic losses. Consequently, it does not contribute to the non-Hermitian dynamics induced by the presence of the embedded S-shaped waveguide [25]. It is worth noting that out of resonance the two reflections overlap perfectly. Therefore, the two bus waveguide facets contribute in the same fashion to the reflected component of the optical field. As expected, by decreasing the BW length, the number of resonances remains constant, while the FP period increases by about four times (see Fig. 2 (b)). In the forward orientation, this variation does not modify the reflection response of the device, which shows the usual FP fringes (see blue curves of Fig. 2). On the other hand, the reflection in the reverse configuration changes drastically. Specifically, in panel (a) the reflected intensity always shows clear resonance peaks, while in panel (b) it strongly varies as a function of the incident wavelength. This is due to the fact that the short interference fringes of the long device do not affect the optical mode reflected from the TJMR, while the long FP interference fringes of the short device cause significant interference between the taiji reflected mode and the BW modes. This interference may destroy the effect of the S-shaped waveguide in the device reflection. Specifically, as shown in the zoom of Fig. 2 (b), we observe three main cases: constructive-like (b1), Fano-like (b2) and destructive-like (b3) reflection lineshapes.
In the first case (denoted with the letter C), constructive interference generates a resonant peak and, therefore, the device behaves as a typical TJMR [25]. In the second case (denoted with the letter F), the interference gives rise to a sharp peak with the same height as the FPO. Interestingly, in the third case (denoted with the letter D), destructive interference cancels the resonant reflection peak. In this case, the efficiency of the taiji as a unidirectional reflection device is much reduced. C. Experimental results in the nonlinear regime The three interference cases described in II B also affect the nonlinear response of the device. As demonstrated in [8], the TJMR exhibits a higher internal power in the reverse than in the forward configuration. In fact, in the forward configuration, the light is partially lost at the ends of the S-shaped waveguide. On the other hand, in the reverse one, the S-shaped branch couples light from the CW to the CCW mode, increasing the stored energy. As a result, the transmission response of the device to strong fields shows a non-reciprocal behavior. Since the reflected intensity is strictly connected to the energy stored inside the taiji, the FP and the propagation losses of the BW strongly affect the nonlinearity-induced non-reciprocal response. First, we studied the role of the FP. We measured the transmitted spectra for different input powers (P). Figure 3 shows the transmission in the forward (Fig. 3 (a)) and in the reverse (Fig. 3 (b)) configurations for a resonance showing a constructive-like feature in reflection in the linear regime. At low P, the device exhibits the same Lorentzian resonance dips for both orientations. Increasing P, the resonance is pushed towards longer wavelengths due to the build-up of the internal energy in the TJMR and the thermo-optic nonlinearity, see Appendix 3. Also, the lineshape changes and takes the typical triangular shape of a microresonator under strong pumping [32][33][34][35]. To quantify this behavior we trace the resonance wavelength (λ_res) as a function of P. As the FPOs modify the wavelength at which the transmittance reaches its minimum value, λ_res is measured as the wavelength position of the transmission dip at low P, and as the threshold wavelength at which the transmittance switches to a value close to one after optical bistability [32,33,[35][36][37][38][39][40][41] for the higher values of P. Note that a larger stored energy in the TJMR gives rise to a larger λ_res shift, as shown in [8]. If we look at the experimental results and compare Figs. 3 (a) and (b), at sufficiently high P, we note that the transmission spectra differ substantially. In particular, there is a wavelength interval where the two transmissions are no longer equal, i.e. where the Lorentz reciprocity is broken [8]. We quantify the extent of this wavelength region by calculating the difference ∆λ_r − ∆λ_f(P) between the relative shift of λ_res for the reverse configuration, ∆λ_r(P) = λ_res,r(P) − λ_res,r(P → 0), and the forward configuration, ∆λ_f(P) = λ_res,f(P) − λ_res,f(P → 0), i.e. between the "hot" and "cold" resonant wavelengths. ∆λ_r − ∆λ_f vs P is shown in Fig. 4 (a) together with representative comparisons between the normalized transmittance spectra at maximum P for the forward and the reverse orientations (Fig. 4 (a1)-(a3)).
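The shift extraction just defined can be sketched numerically: locate λ_res as the transmittance minimum per spectrum and difference the forward/reverse shifts. The Lorentzian dips below are synthetic, purely to illustrate the bookkeeping, and the dip positions are arbitrary placeholders.

```python
# Sketch: extract lambda_res per spectrum and form d(lambda_r) - d(lambda_f).
import numpy as np

wl = np.linspace(1550.0, 1550.4, 4001)                 # wavelength grid, nm

def lorentzian_dip(center_nm, fwhm_nm=0.02, depth=0.9):
    # Synthetic transmittance with a single Lorentzian resonance dip.
    return 1.0 - depth / (1.0 + ((wl - center_nm) / (fwhm_nm / 2)) ** 2)

def resonance(transmittance):
    # lambda_res as the position of the transmission minimum (low-P case).
    return wl[np.argmin(transmittance)]

# "cold" (P -> 0) and "hot" (high P) resonance positions per orientation
d_fwd = resonance(lorentzian_dip(1550.25)) - resonance(lorentzian_dip(1550.20))
d_rev = resonance(lorentzian_dip(1550.29)) - resonance(lorentzian_dip(1550.20))
print(f"d(lambda_r) - d(lambda_f) = {d_rev - d_fwd:.3f} nm")
```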
In Fig. 4, the brown, green and orange colors refer to the different wavelength shifts for the constructive-like (C), Fano-like (F) and destructive-like (D) linear-regime reflection lineshapes, respectively. As already reported in [8], in the constructive-like case (brown symbols), ∆λ_r − ∆λ_f(P) is positive and, therefore, the reverse configuration shows a higher resonance shift (see Fig. 4 (a1)) for all P. Since this shift is proportional to the power stored inside the cavity, the reverse configuration is characterized by a high internal energy. Similarly, the destructive-like case shows positive but small ∆λ_r − ∆λ_f(P) values (orange symbols and panel (a3) of Fig. 4). On the contrary, the Fano-like case exhibits a negative detuning ∆λ_r − ∆λ_f(P), which implies a higher internal energy in the forward than in the reverse configuration (see panel (a2) of Fig. 4). This means that reflections at the BW facets can cancel the effect of the S-shaped waveguide. A. Linear regime In order to confirm the role of the FPOs in the linear and nonlinear regimes, we performed numerical simulations of the device. These simulations were based on the theoretical model reported in [25]. Here, the whole system is modeled by using the transfer matrix method, where the only source of back-reflection is the FP cavity formed by the end facets of the BW. (Figure 5 caption: l_R ≈ 1.062 mm; l_L ≈ 0.431 mm is constant in all maps. Panels (b3), (b4), (b5) and (b6) plot the transmission and reflection spectra for l_R = 1.0620 mm (top) and l_R = 1.0624 mm (bottom). The different types of rectangles connect these graphs with the maps (b1) and (b2); specifically, the dotted, dash-dotted/solid and dashed lines refer to the destructive-like (D), constructive-like (C) and Fano-like (F) cases. The plus and minus signs inside the graphs highlight where the difference of the internal energy between the reverse and forward orientations is positive or negative. Panels (c1) and (c2) show the interference diagrams; the red (blue) arrows label the CW (CCW) mode.) The three directional couplers of Fig. 1 (a) are schematized by three reciprocal and lossless beamsplitters characterized by their transmission and coupling amplitudes (t^2 + k^2 = 1). The parameters used in these simulations are reported in Fig. 9 of Appendix 1 and were determined by the geometry of the device and by a fit of the transmission spectrum of Fig. 2 (b). A wavelength-dependent effective refractive index as in [25] was also used. Note that, for these simulations, l_L = 0.431 mm is fixed. As we are interested in understanding the role of the FP fringes, we computed the device reflectivity R for the reverse (R_r) and the forward (R_f) configurations and for the case without coupling between the BW and the TJMR (t_1 = 1, as defined in Fig. 1 (a)). This last quantity describes the contribution to the reflectance of the device due to the FP in the BW and is labeled R_FP. Figures 5 (a) and (b1) show the λ vs l_R maps of R_r − R_f and of R_r − R_FP. A 2 µm range of l_R around a value of l_R = 5.524 mm (Fig. 5 (a)) and l_R = 1.062 mm (Fig. 5 (b1)) is mapped. Since interference effects affect the internal energy (I) in the TJMR, we also plot in Fig. 5 (b2) the λ vs l_R map of the difference between the internal energies in the reverse (I_r) and forward (I_f) orientations for the short device. More details on the calculation of I_r and I_f are reported in Appendix 2.
These various differences show the unidirectional behavior of the device and the role of the BW in this phenomenon. In particular, the clear lines that cut vertically through the maps represent the TJMR resonances. The colors reflect the different values of R_r − R_f, R_r − R_FP and I_r − I_f for each resonance. These take into account the spectral dispersion of the effective refractive index, of the propagation losses and of the coupling parameters, and the l_R dependence of the interference. For long l_R (Fig. 5 (a)), the fact that R_r − R_f always shows clear peaks is in agreement with the experimental data of Fig. 2 (a). For short l_R (Fig. 5 (b1)), the decrease of the BW length allows capturing all the experimental cases. These are highlighted by the rectangles in Fig. 5 (b1)-(b2). Specific examples of the simulated transmission and reflection lineshapes for the destructive-like (D), constructive-like (C) and Fano-like (F) cases are shown in Fig. 5 (b3), (b4)-(b5), and (b6), respectively, for l_R = 1.0620 mm (top) and l_R = 1.0624 mm (bottom). Let us start from the destructive-like case. This is characterized by a dip of the reflectance in the reverse configuration (Fig. 2 (b3)). This case is exemplified by the dotted rectangles in panels (b1) and (b2) and by the lineshapes in (b3) of Fig. 5. The reflectance dip is a consequence of the interference between the light that is reflected at the input facet of the BW (magenta arrow) and the light that, propagating in the CW mode (red arrows), is coupled into the CCW mode through the S-shaped waveguide (blue arrows), as shown in the sketch of panel (c1). When such interference is destructive, the reflected intensity can exhibit a dip. The condition for interference in the device is given by Eq. (1), where n_eff and m_I1 are, respectively, the effective refractive index and an integer number. Thus, the phase difference between the path followed by the light reflected from the TJMR (left-hand side of Eq. (1)) and the one followed by the light reflected from the input facet must be an odd multiple of π; that is, m_I1 must satisfy Eq. (2). This condition is satisfied in the example shown in panel (b3, top), where the reflectance (red line) reduces to zero at the resonant wavelength. However, as shown in panel (b3, bottom), a slight shift of the FP fringes due to a slight variation in l_R (from 1.0620 mm (top) to 1.0624 mm (bottom)) causes a different interference which yields a non-zero reflection. This interference pattern is also confirmed by the positive value of the internal energy difference shown in panel (b2), as evidenced by the dotted rectangle. Hence, a lower device reflection does not imply a lower internal energy in the reverse configuration with respect to the forward one. This can be understood by considering two other interference diagrams. The first one is defined by the path followed by the light in the BW. It gives rise to the typical constructive FP interference at the exit of the input facet, Eq. (3), where m_FPCR is an integer number. The second is more complex and is shown in panel (c2) of Fig. 5. It is given by the constructive interference, inside the TJMR, between the light which is transferred from the S-shaped waveguide to the CCW mode (from red to blue arrows) and the light that is reflected from the output facet of the BW (magenta arrows). Defining m_I2 as an integer number, this interference occurs when the following relation is satisfied: (2π/λ) n_eff 2l_L + π/2 = π/2 + (2π/λ) n_eff (2z_3 + z_4 + z_2) + π + 2π m_I2. (4)
These three numbers m_FPCR, m_I1 and m_I2 are strictly connected through Eq. (5): if m_FPCR and m_I1 are integer numbers, then m_I2 is also an integer number. In other words, if the FP interference exhibits a peak and the device reflection shows a dip, then a constructive interference occurs inside the TJMR with a build-up of internal energy (I_r − I_f > 0). This analytical model also explains the constructive-like case shown in Fig. 2 (b1). The solid and dash-dotted rectangles highlight regions characterized by a high reflection intensity (Fig. 5 (b1)) but different internal energies (Fig. 5 (b2)). Characteristic spectra are plotted in panels (b4) and (b5) for l_R = 1.0620 mm (top) and l_R = 1.0624 mm (bottom). In this case, the TJMR behaves as a unidirectional reflector. Therefore, the reflected intensity exhibits a maximum in the reverse configuration (red lines). Panel (b4) differs from panel (b5) in the difference between the internal energies of the forward and reverse configurations. In the first, the stored energy is higher in the reverse orientation than in the forward one. In the second, a lower energy is found in the reverse than in the forward configuration. The difference between the two situations is due to the wavelength dependence of the propagation losses in the BW (Appendix 1), as we will discuss in the following. Panel (b1) of Fig. 5 also shows the Fano-like case (see dashed rectangles), as highlighted by the graphs of panel (b6). This is an intermediate case between the constructive-like and the destructive-like cases. The TJMR loses its fundamental property of being a unidirectional reflector because of the FPOs. Simulating the response of the device in the absence of the FP (i.e., with zero facet reflectivity), we obtain the λ vs l_R maps in Fig. 6. Note that, for these simulations, l_L = 0.431 mm is fixed. In particular, Fig. 6 (a) shows R_r − R_f while Fig. 6 (b) shows I_r − I_f. The red, black, and blue rectangles highlight the regions around the values l_R = l_L = 0.431 mm, l_R = 1.062 mm, and l_R = 5.52 mm, respectively; in the latter two the TJMR is not placed in a symmetric position with respect to the two BW facets. In contrast with Fig. 5, no oscillations are observed, and at the resonances always R_r − R_f > 0 since R_f = 0. Note that R_r changes as l_R varies. In fact, as l_R increases, the BW propagation losses affect the amount of light coupled to the microresonator. Therefore, less energy is transferred from the CW to the CCW mode. As a function of l_R (see the rectangles), the reflected intensity in the reverse configuration increases with the wavelength. This is due to the spectral dependence of the BW propagation losses, which are large at 1540 nm and decrease monotonically as λ increases (see Appendix 1). Also I_r − I_f is affected by the relative position of the TJMR with respect to the BW. Indeed, by increasing l_R, more and more resonances present a negative I_r − I_f. Moreover, this negative value becomes larger as λ decreases, i.e. as the losses increase. To summarize the analysis of the linear regime, the interference between the fields reflected at the ends of the BW and by the TJMR generates different spectral responses. In particular, depending on the period of the FP fringes, the device may preserve or lose its unidirectional reflector nature. As a result, the difference between the internal energies in the reverse and forward configurations may assume both positive and negative values.
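To make the role of the bare-bus FP explicit, the sketch below sums the multiple facet reflections (an Airy summation, equivalent to the transfer-matrix treatment) for the decoupled case t_1 = 1 that defines R_FP; all parameter values are illustrative placeholders, not the fitted device values, and the sign convention for the first reflection only shifts the fringes.

```python
# Airy-summation sketch of the bare bus-waveguide Fabry-Perot (t_1 = 1).
import numpy as np

n_eff = 1.60                      # effective index (assumed constant here)
alpha = 50.0                      # power propagation loss, 1/m (placeholder)
L = 1.493e-3                      # bus length l_L + l_R, m (0.431 + 1.062 mm)
r = 0.23                          # facet amplitude reflectivity
t = np.sqrt(1 - r**2)             # lossless facet transmission amplitude

wl = np.linspace(1550e-9, 1551e-9, 4000)
gamma = 2 * np.pi * n_eff / wl + 1j * alpha / 2   # complex propagation constant
rt = np.exp(2j * gamma * L)                        # round trip: phase + loss

r_fp = r + t**2 * r * rt / (1 - r**2 * rt)         # summed multiple reflections
t_fp = t**2 * np.exp(1j * gamma * L) / (1 - r**2 * rt)
R_FP, T_FP = np.abs(r_fp)**2, np.abs(t_fp)**2
print(f"FP fringe range in R: {R_FP.min():.3f} - {R_FP.max():.3f}")
```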
B. Nonlinear regime The device is modelled in the nonlinear regime by following the finite-element model developed in Ref. [8]. The light propagation inside the device is obtained by solving the nonlinear Helmholtz equation while also taking into account the reflection at the BW facets. We took the thermal nonlinearity parameters from [40]. The set of employed parameters is shown in Appendices 1 and 3. Figures 7 (a) and (b) show the transmission spectra and the TJMR internal energies as a function of the input power (P_in) for l_R = 1.0624 mm, while scanning λ from low to high values. Panels (a1) and (b1) display the reverse configuration while (a2) and (b2) show the forward one. As expected, with increasing P_in, the resonances shift proportionally to the internal energy due to the nonlinear refractive index. This shift is towards longer λ, in agreement with the positive sign of the nonlinear coefficient (see Fig. 10 in Appendix 3). Notice that the FP fringes slightly shift to longer wavelengths too. The difference between the resonance and fringe shifts is explained by the larger energy stored in the microresonator than in the BW. In fact, a field enhancement factor of about 9 is computed for the TJMR. Within the maps, we can identify the different features seen in the experimental section, i.e. the constructive-like (C), destructive-like (D) and Fano-like (F) shapes. These are labelled with a + (-) when, in the linear regime, I_r − I_f > 0 (I_r − I_f < 0). Figures 7 (c1) and (c2) are the theoretical analogues of Figures 3 (b) and (a), which show the experimental transmission spectra: they display the transmittance for different input powers in the C case, with the wavelength scanned from low to high values. In particular, panels (c1) and (c2) show the reverse and forward orientations, respectively. The theoretical model reproduces the experimental behavior, and the Lorentz reciprocity breaking appears as a different resonance shift between the forward and reverse orientations as the input power increases. Comparing the nonlinear shift for the forward and reverse orientations, we do not observe a regular trend. Fig. 7 (d1) shows ∆λ_r − ∆λ_f as a function of P_in, computed from the (a1)-(a2) maps. Specifically, the dotted, dash-dotted/solid and dashed lines highlight the destructive-like (D), constructive-like (C) and Fano-like (F) cases, respectively. These resonances are the ones shown in Fig. 5 (b3)-(b6) for the linear regime and l_R ≈ 1.0624 mm (i.e. the bottom panels). It is observed that ∆λ_r − ∆λ_f shows different behaviors in the three cases, in agreement with the experimental results of Fig. 4. In fact, in both the experimental (labeled D in Fig. 4) and the theoretical case (labeled D+ in Fig. 7), the destructive-like case shows a positive value of ∆λ_r − ∆λ_f slightly greater than zero. The same agreement holds for the experimental (C) and theoretical (C-) constructive-like cases, where the detuning is always positive and reaches a maximum value around 0.07 nm. Similarly for the Fano-like case, where both the theoretical (F-) and experimental (F) shift differences show negative values. However, a clear relation between I_r − I_f in the linear and nonlinear regimes does not emerge. In fact, in the constructive case, the ∆λ_r − ∆λ_f vs P_in curve shows both a positive slope for the C- situation, where I_r − I_f < 0 in the linear regime, as well as an almost zero slope for the C+ situation, where I_r − I_f > 0 in the linear regime.
This lack of a direct relation between I_r − I_f in the linear and nonlinear regimes is also shown in Fig. 7 (d2). It displays ∆λ_r − ∆λ_f vs P_in for the resonances shown in the top panels (b3)-(b6) of Fig. 5 (i.e. when l_R ≈ 1.0620 mm). Here, the destructive-like case (D+) presents a negative ∆λ_r − ∆λ_f shift despite I_r − I_f > 0 in the linear regime. In addition, the constructive-like case with I_r − I_f < 0 (C-) exhibits a negative ∆λ_r − ∆λ_f shift, in contrast with Fig. 7 (d1). Therefore, depending on their spectral position, the resonances of the TJMR show a different ∆λ_r − ∆λ_f shift, which we attribute to the interplay between the FP and the asymmetric losses (l_L ≠ l_R) in the BW. This is evidenced in Fig. 8. When the FP effect is switched off by zeroing the reflection coefficients at the BW facets, ∆λ_r − ∆λ_f grows linearly with P_in. The different slopes are related to the values of the BW propagation losses. Negative and positive slope values are due to larger or smaller asymmetric losses. In fact, the maximum slope appears at longer wavelengths where the losses are smaller (see dashed and solid lines for 1564.6 nm and 1561.2 nm in Fig. 8 (a1)). When the losses are symmetric, i.e., when the TJMR is placed in a symmetric position, the ∆λ_r − ∆λ_f slopes are always positive (Fig. 8 (a2)). Furthermore, when the FP effect is switched on, a more complicated scenario appears (Fig. 8 (b)). ∆λ_r − ∆λ_f no longer shows a linear P_in dependence, and negative or positive values appear even when the losses are symmetric. Here, ∆λ_r − ∆λ_f shows variations strictly connected to the interference between the fields reflected by the end facets of the BW and the one reflected within the TJMR. The phase relation between these fields is given by the different variations of the nonlinear refractive index inside the TJMR and the BW. Interestingly, as shown in Fig. 8 (b), even with a symmetric BW the FP fringes can drastically change the shift of the resonances. As a result, a positive (negative) difference of the resonance shift may become negative (positive) with increasing input power (see dotted-dashed and dashed lines in Fig. 8 (b)). Therefore, we can conclude that the combined action of the FP and of the asymmetric losses in the BW can compensate for the effect of the S-shaped waveguide in the TJMR, leading to a higher internal energy in the forward configuration than in the reverse configuration. In fact, since l_R > l_L, more light attenuation is observed in the reverse than in the forward configuration. It is worth noticing that the presence of the FP effect increases the wavelength interval ∆λ_r − ∆λ_f where the Lorentz reciprocity is broken. This is observed by comparing Fig. 7 (d1) and Fig. 8 (a1): in the first, ∆λ_r − ∆λ_f ≈ 0.07 nm while in the second, ∆λ_r − ∆λ_f < 0.03 nm.
Indeed, the interference between the reflected field at the input facet of the bus waveguide and the one reflected within the taiji can also reduce the device reflectivity to zero. Furthermore, the Fabry-Perot can redistribute the taiji microresonator internal energy between the clockwise and counterclockwise modes and, thus, strongly modify the nonlinear response. In this nonlinear regime, the different powers stored inside the taiji microresonator are the base of the Lorentz reciprocity breaking in the device. The breaking appears as a distinct difference between the resonance shifts in the reverse and forward configuration. Depending on the specific configuration, the Fabry-Perot effect in the bus waveguide can either reduce or increase the wavelength region where the Lorentz reciprocity breaking is observed. Using a numerical finiteelement model we have explained the experimental observations in terms of a different shift between the resonant wavelengths and the Fabry-Perot fringes. Moreover, we demonstrated that a critical role is also played by the propagation losses in the bus waveguide. Indeed, when the taiji microresonator is placed in an asymmetric position with respect to the bus waveguide ends, a variation in the taiji microresonator internal energy also stems from the interplay between the asymmetric propagation losses and the field enhancement due to the microresonator. However, this asymmetry does not influence the unidirectional behavior of the taiji microresonator at sufficiently low input power, i.e. in the linear regime. Finally, let us note that the device we studied here can be understood as a sophisticated example of a pair of coupled resonators. Therefore, this work is a starting point towards the study of more complex structures, where an active control of the feedback between nonlinear resonators is used. This allows controlling the violation of the Lorentz reciprocity, and therefore, holds interesting promise for exploiting nonlinear non-Hermitian physics in integrated devices. Appendix: parameters of the simulations In order to model the experiments, we set the parameters of the device as follow. The perimeter of the taiji racetrack microresonator is fixed imposing z 1 206 µm, z 2 398 µm, z 3 206 µm, while the S-shaped waveguide length is z 4 391 µm. All these values were derived from the design of the TJMR. The effective mode index was extrapolated by slightly modifying the one reported in [25] to match the taiji experimental resonances (see Fig. 9 (a)). The BW length l L was measured from the design l L 0.431 mm. l R 1.062 mm and the reflection coefficients (r L, R = 0.23) were extrapolated from the experimental FP fringes. The spectral dependence of the transmission coefficients t 2 = t 3 = t S , and of the losses were estimated by measuring the transmittance, the reflectance and the propagation losses (see Fig. 9). t 1 = 0.868. By fitting the experimental spectra in the linear and nonlinear regimes, we observed lower propagation losses in the BW than in the TJMR. This difference is due to the bending loss in the microresonator. Appendix: taiji microresonator internal energy calculation To simulate the device in the nonlinear regime, it is needed to evaluate the internal energy in the following regions: microresonator, S-shaped waveguide and BW. Since the method is the same, we will describe only the calculation of the microresonator internal energy. Following [25], the TJMR can be analyzed through twelve different electric fields. 
Precisely, half of these fields propagate in the CW direction and the other half in the CCW one. All of these fields can be computed by solving the system of equations shown in [25] for the linear regime, or by iterating it until convergence for the nonlinear one. To determine the internal energy, it is first necessary to calculate the CW and CCW fields at each point of the microresonator. We start from the CW direction and use the fact that, from one coupling region to the next and along the wave propagation direction, the electric field can be described as E(z) = E(z 0 ) e^{iγ(z−z 0 )}, where z 0 is the starting position, z is the coordinate along the waveguide, and γ is a complex parameter that accounts for both the phase variation and the propagation losses (γ = (2π/λ) n eff + iα). By transfer matrix multiplication, we compute all the CW (E CW) and CCW (E CCW) fields. Then, the internal energy is the integral of |E CCW + E CW|² along the microresonator. Appendix: simulation model In the linear regime, we used the model presented in [25] to simulate the device. In the nonlinear regime, we extended the equations taking into account that the refractive index is not only wavelength dependent but varies also as a function of the intensity of the electromagnetic wave. As seen in [8], n eff = n L + n T I thermal + n K (|E CCW,CW|² + 2|E CW,CCW|²), where n L is the refractive index in the linear regime, n T is the coefficient of the thermo-optic nonlinearity, n K = 8×10⁻¹⁶ cm²/W is the Kerr nonlinear index and I thermal is the total electromagnetic intensity for the three different regions: microresonator, S-shaped waveguide and BW. To obtain the transmissions, reflections, and internal energies as a function of wavelength, we process the spectra of all electric fields within the system starting at shorter wavelengths and, for each wavelength, we evolve the system of field equations to convergence. In this model we consider the following relationship between n T and the propagation losses: n T ∝ 1 − e^{−αp}, where α and p are the propagation losses and the microresonator perimeter, respectively. By comparing experimental and simulated spectra, we obtained the estimate of n T reported in Fig. 10.
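As an illustration of the two computational steps described in these appendices, namely the propagation law E(z) = E(z 0 ) e^{iγ(z−z 0 )} and the fixed-point iteration of the intensity-dependent index, the following minimal Python sketch uses a single pair of counterpropagating fields in one region instead of the full twelve-field system of [25]; all numerical values and the crude intensity average are illustrative assumptions, not the paper's parameters.

import numpy as np

def propagate(E0, dz, lam, n_eff, alpha):
    # E(z) = E(z0) * exp(i*gamma*(z - z0)), with gamma = (2*pi/lam)*n_eff + i*alpha
    gamma = 2.0 * np.pi * n_eff / lam + 1j * alpha
    return E0 * np.exp(1j * gamma * dz)

def internal_energy(E_cw, E_ccw, dz):
    # integral of |E_ccw + E_cw|^2 along the resonator (simple Riemann sum)
    return np.sum(np.abs(E_ccw + E_cw) ** 2) * dz

lam = 1.5612e-6                       # wavelength (m), near a measured resonance
L = 810e-6                            # straight sections z1 + z2 + z3 (bends omitted)
zs = np.linspace(0.0, L, 2000)
dz = zs[1] - zs[0]
n_L, n_T, n_K, alpha = 1.83, 1e-14, 8e-20, 30.0   # illustrative values only
E_in = 1.0 + 0j

n_eff = n_L
for _ in range(500):                  # iterate the "linear" solve to convergence
    E_cw = propagate(E_in, zs, lam, n_eff, alpha)             # CW field along z
    E_ccw = propagate(0.2 * E_in, L - zs, lam, n_eff, alpha)  # CCW counterpart
    U = internal_energy(E_cw, E_ccw, dz)
    I_th = U / L                      # crude stand-in for the thermal intensity
    # n_eff = n_L + n_T*I_thermal + n_K*(|E_ccw|^2 + 2*|E_cw|^2), averaged here
    n_new = n_L + n_T * I_th + n_K * np.mean(np.abs(E_ccw) ** 2 + 2 * np.abs(E_cw) ** 2)
    if abs(n_new - n_eff) < 1e-12:
        break
    n_eff = 0.5 * (n_eff + n_new)     # damped update for stable convergence
print(n_eff, U)

In the paper's full model, such an iteration is performed for every wavelength, starting at shorter wavelengths, with separate intensities for the microresonator, the S-shaped waveguide and the BW.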
8,821.4
2021-06-17T00:00:00.000
[ "Physics", "Engineering" ]
The Multiverse in an Inverted Island We study the redundancies in the global spacetime description of the eternally inflating multiverse using the quantum extremal surface prescription. We argue that a sufficiently large spatial region in a bubble universe has an entanglement island surrounding it. Consequently, the semiclassical physics of the multiverse, which is all we need to make cosmological predictions, can be fully described by the fundamental degrees of freedom associated with certain finite spatial regions. The island arises due to mandatory collisions with collapsing bubbles, whose big crunch singularities indicate redundancies of the global spacetime description. The emergence of the island and the resulting reduction of independent degrees of freedom provide a regularization of the infinities which caused the cosmological measure problem. I. INTRODUCTION In the last two decades or so, we have learned a lot about the origin of spacetime in quantum gravity. A key concept is holography [1][2][3][4], which states that a fundamental description of quantum gravity resides in a spacetime, often non-gravitational, whose dimension is lower than that of the bulk spacetime. This concept has been successfully applied to understanding the dynamics of an evaporating black hole, in particular to address the information problem [5]; for recent reviews, see Refs. [6][7][8]. There are two distinct approaches to implementing the idea of holography. One is to start from the global spacetime of general relativity and identify independent quantum degrees of freedom [9][10][11] using the quantum extremal surface (QES) prescription [12][13][14][15]. When applying this prescription to a black hole, the existence of the interior is evident, whereas understanding unitary evolution requires non-perturbative gravitational effects [16,17]. The other approach is to begin with a description that is manifestly unitary (if all the relevant physics is included in the infrared) and understand how the picture of global spacetime emerges [18][19][20][21][22][23]. Specifically, in this approach the interior of an evaporating black hole arises as a collective phenomenon of soft (and radiation) modes [21][22][23][24]. While the two approaches appear radically different at first sight, they are consistent with each other in the common regime of applicability [25,26]. In this paper, we study the eternally inflating multiverse using the first approach, which begins with global spacetime. A key assumption is that for a partial Cauchy surface R in a weakly gravitating region, we can use the QES prescription [15]. In particular, the von Neumann entropy of the microscopic degrees of freedom associated with the region R is given by the island formula [11]

S(R) = min_I ext_I S_gen(I ∪ R), (1)

where I is a partial Cauchy surface spacelike separated from R.¹ Here, the boldface symbol R on the left-hand side is to emphasize that S(R) is the microscopic von Neumann entropy of the fundamental degrees of freedom, while

S_gen(X) = A(∂X)/(4l_P²) + S_bulk(X) (2)

is the generalized entropy for a partial Cauchy surface X calculated in the bulk semiclassical theory, where A(∂X) is the area of the boundary ∂X of X, and S_bulk(X) is the von Neumann entropy of the reduced density matrix of X calculated in the semiclassical theory. In this work, we show that when R is a sufficiently large region on a late-time hypersurface in a bubble universe, an island I appears which encloses the bubble universe. (A toy numerical illustration of the minimization structure of Eq. (1) follows.)
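The candidate islands, areas, and bulk entropies below are invented numbers for illustration only, not quantities computed in this paper. Among the extremal candidates (including the empty island, for which S_gen reduces to S_gen(R)), Eq. (1) selects the one of minimal generalized entropy:

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str      # label for an extremal candidate island I
    area: float    # A(dI), area of the island boundary, in Planck units
    s_bulk: float  # S_bulk(I u R) in the semiclassical theory

def s_gen(c: Candidate, l_p: float = 1.0) -> float:
    # generalized entropy: area term plus bulk entanglement entropy, Eq. (2)
    return c.area / (4.0 * l_p ** 2) + c.s_bulk

candidates = [
    Candidate("empty island", area=0.0, s_bulk=500.0),      # S_gen(R) itself
    Candidate("inverted island", area=800.0, s_bulk=10.0),  # large-R situation
]
best = min(candidates, key=s_gen)
print(best.name, s_gen(best))  # -> "inverted island 210.0": the island dominates

For a small region R, the bulk term of the empty candidate would be small and the empty island would win, reproducing S(R) = S_gen(R); a nontrivial island takes over only once the bulk entropy of R is large enough, which is the regime studied in this paper.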
Given that the semiclassical physics in I is fully reconstructed using the fundamental degrees of freedom in R, this implies that the full semiclassical physics of the multiverse needed to make cosmological predictions is encoded in the fundamental degrees of freedom of the region R, which has a finite volume! While one might feel that this is too drastic a conclusion, in some respects it is not. Even for a black hole, the interior region described as an island I can have an ever increasing spatial volume, which can even be infinite for an eternal black hole [27,28]. However, in quantum gravity, the number of independent states associated with this region is bounded by the exponential of the entropy of the system. This is because exponentially small overlaps between semiclassically orthogonal states lead to a drastic reduction in the number of basis states [25, 29-31]. What happens in the multiverse is an "inside-out" version of the black hole case. As anticipated in Refs. [32][33][34], this allows us to address the cosmological measure problem.

¹ In this paper, I refers to a spacelike codimension-1 surface. Although it is more standard to refer to the domain of dependence of I, D(I), as the island, we also refer to I as an island in this paper.

FIG. 1. The multiverse as an entanglement castle. On a given Cauchy surface Ξ, the physics of the multiverse can be described by the fundamental degrees of freedom associated with the region R ∪ (Ξ \ (R ∪ I Ξ)) = Ξ \ I Ξ, where I Ξ = D(I) ∩ Ξ, with I being the (inverted) island of a partial Cauchy surface R.

Entanglement Castle. In the black hole case, the region R encloses I, so I looks geographically like an island. However, in our setup, I encloses R, so it no longer appears as an island. Thus, we call I an inverted island. The geography for a Cauchy surface Ξ containing R is depicted in Fig. 1. It is customary to treat the regions R and I as "land" and everything else as "water." Following this convention, Ξ has a central land R surrounded by a moat, Ξ \ (R ∪ I Ξ), which separates R from I Ξ, where I Ξ = D(I) ∩ Ξ. To describe the multiverse at the semiclassical level, one only needs fundamental degrees of freedom associated with the complement of I Ξ on Ξ, i.e. Ξ \ I Ξ. This is the region corresponding to the castle: the multiverse lives in an entanglement castle.

Relation to Other Work. Entanglement islands in cosmological spacetimes have been discussed in the context of toy models, e.g., models in which a non-gravitational bath is entangled with a gravitational system as well as models in lower-dimensional gravity [40][41][42][43][44][45][46][47][48][49][50]. In this paper, we study them in a realistic scenario of eternal inflation. Several holographic descriptions of the multiverse have been proposed [32][33][34][51][52][53][54][55][56], mostly to address the measure problem. These correspond to the unitary description of a black hole, although the issue of unitarity at the fundamental level is not quite clear in cosmology.

Outline of the Paper. In Section II, we review the eternally inflating multiverse and describe some basic assumptions employed in our analysis. In Section III, we discuss how the bulk entanglement necessary for the emergence of an island can arise from accelerating domain walls, which are pervasive in the eternally inflating multiverse. Section IV is the main technical part of the paper, in which we show that a sufficiently large region R in a bubble universe has an inverted island that surrounds R. Implications of this result for the multiverse are discussed in Section V.
Finally, Section VI is devoted to conclusions. II. THE ETERNALLY INFLATING MULTIVERSE IN GLOBAL SPACETIME In this paper, we are concerned with eternally inflating cosmology. Eternal inflation occurs when the theory possesses a metastable vacuum which has a positive vacuum energy and small decay rates to other vacua [57,58]. If the universe sits in such a vacuum at some moment, there will always be some spacetime region that remains inflating for an arbitrarily long time. This scenario of eternal inflation is naturally realized in the string landscape [59][60][61][62]. In the string landscape, the number of local minima of the potential, i.e. false vacua, is enormous. Vacuum energies at these minima can be either positive or negative. Since exactly vanishing vacuum energy requires an infinite amount of fine-tuning, we expect that it is realized only in supersymmetric vacua. Spacetime regions in different vacua are created by nucleation of bubbles, each of which can be viewed as a separate universe. We assume that bubble nucleation occurs through Coleman-De Luccia tunneling [63], although we expect that our results also apply to other vacuum transition mechanisms such as the thermal Hawking-Moss process [64,65]. As explained in the introduction, we begin with the global spacetime picture, which is the infinitely large multiverse with a fractal structure generated by continually produced bubbles. We assume that the global quantum state on a Cauchy surface is pure. We are interested in studying the existence and location of the island corresponding to a partial Cauchy surface R in the global multiverse. To address this problem, we focus on a particular bubble, which we call the central bubble. We assume that the central bubble is formed in a parent de Sitter (dS) bubble. After being nucleated, it undergoes collisions with other bubbles [58]. Let us follow a timelike geodesic to the future along (and outside) the bubble wall separating the central bubble from other bubbles. The last bubble that this geodesic encounters must be either an anti-de Sitter (AdS) bubble or a supersymmetric Minkowski bubble, or else the geodesic still has an infinite amount of time to encounter another bubble. We assume that the last bubbles such geodesics encounter are all AdS bubbles and call them surrounding AdS bubbles. Since AdS bubbles generally end up with big crunch singularities [63], they are collapsing bubbles. Note that the choice of the central bubble was arbitrary, so all the bubbles have the feature of being surrounded by collapsing AdS bubbles. A typical example of the spacetime structure described here is illustrated in Fig. 2. (We have omitted an infinite number of bubbles that form a fractal structure in the asymptotic future infinity which are not relevant for the discussion here.) We postulate that the cosmological history we study takes place in the semiclassical regime. This implies that the characteristic energy scale E of the potential is sufficiently smaller than the cutoff scale, and hence the Planck scale. On the other hand, in the string landscape we expect that this energy scale is not much smaller than the string scale, e.g., E ∼ O(10 −5 -10 −1 )/l P , where l P is the Planck length. Note, however, that some of these bubbles could be associated with much smaller energy scales by selection effects. For instance, the bubble universe that we live in has a vacuum energy much smaller than the naive value of O(E 4 ) [66][67][68]. III. 
BULK ENTANGLEMENT FROM ACCELERATING DOMAIN WALLS In this section, we discuss the possible origin of the bulk entanglement S bulk leading to an island in eternally inflating spacetime. As discussed in Ref. [43], an island cannot be created by S bulk originating solely from entanglement between regular matter particles. In particular, the generation of S bulk must involve spacetime (vacuum) degrees of freedom. Examples of such processes include Hawking radiation and reheating after inflation. Here we discuss another such process: S bulk generated by Unruh radiation [69,70] from accelerating domain walls. Consider a domain wall in 4-dimensional flat spacetime which is extended in the x 2 -x 3 directions and is accelerating in the x 1 direction. In an inertial reference frame, the domain wall appears to emit radiation. This occurs because the modes of a light quantum field colliding with the domain wall from behind are (partially) reflected by it, which converts these modes into semiclassical excitations on top of the vacuum; see blue arrows in Fig. 3. (For a review and recent analyses, see Refs. [71][72][73].) An important point is that this process stretches the wavelength of the reflected modes. In particular, radiation emitted later corresponds to a shorter wavelength mode at a fixed early time. We postulate that, as in the case of Hawking radiation [74] and the generation of fluctuations in cosmic inflation [75][76][77][78], this picture can be extrapolated formally to infinitely short distances, below the Planck length. This allows for converting an arbitrary amount of short-distance vacuum entanglement to entanglement involving physical radiation. In particular, if we take a spatial region A that contains the radiation but not its partner modes, then we can obtain a large contribution to S bulk from this process. This is illustrated in Fig. 3. This mechanism of generating S bulk operates at any wall separating bubble universes. It converts entanglement in a semiclassical vacuum, which is assumed to take the flat space form at short distances [79], into that involving radiation emitted by the wall. There are two classes of walls relevant for our purpose. The first is a bubble wall separating a nucleated bubble from the ambient bubble (the parent dS bubble in our context). In this case, the bubble wall accelerates outward, so that the radiation lies inside the bubble. This radiation is homogeneous on a Friedmann-Robertson-Walker (FRW) equal-time slice and has a coarse-grained entropy density given by Eq. (3), where a(t) is the scale factor at FRW time t, and 1/√−κ is the comoving curvature length scale at an early stage of the bubble universe, when a(t) ≈ √−κ t. The second is a domain wall separating two bubbles colliding with each other. A domain wall relevant for our discussion is that separating the central bubble and one of the surrounding AdS bubbles colliding with it. In this case, the domain wall accelerates outward in the AdS bubble [80,81], so the mechanism described above applies to the AdS bubble; in Fig. 3 the regions left and right of the wall would correspond to the AdS and central bubbles, respectively. If the domain wall is also accelerating away from the central bubble, the radiation emitted into the central bubble also results in a large S bulk, although this is not relevant for our setup. IV. ENTANGLEMENT ISLAND FROM SURROUNDING COLLAPSING BUBBLES In this section, we argue that a sufficiently large spacelike region R in the multiverse has an island I.
We use the method of island finder [82] to demonstrate this. First, we locate a partial Cauchy surface I that (i) is spacelike separated from R, (ii) provides a reduction of generalized entropy S gen (I ∪ R) < S gen (R), and (iii) has the boundary ∂I that is quantum normal or quantum antinormal with respect to variations of the generalized entropy S gen (I ∪ R). We will find such an I which has a quantum antinormal boundary. We then argue that there is a partial Cauchy surface I 0 whose domain of dependence, D(I 0 ), contains I and whose boundary, ∂I 0 , is quantum normal with respect to variations of S gen (I 0 ∪R). Having such an I and I 0 guarantees the existence of a non-empty island I. We focus on (3 + 1)-dimensional spacetime throughout our analysis, although it can be generalized to other dimensions. In our analysis below, we assume that the central bubble is either a dS or Minkowski bubble, which simplifies the analysis [80,81]. We believe that a similar conclusion holds for an AdS central bubble, but demonstrating this requires an extension of the analysis. The argument in this section consists of several steps. In Section IV A, we identify a two-dimensional quantum antinormal surface ∂Σ in a surrounding AdS bubble for a region R in the central bubble. In Section IV B, we gather a portion of ∂Σ in each surrounding bubble and sew them together to form a closed quantum antinormal surface ∂I which encloses R. In Section IV C, we argue that appending I reduces the generalized entropy of R and hence it can serve as the I of Ref. [82]. In Section IV D, we find I 0 , establishing the existence of a non-empty QES for R. Finally, Section IV E contains some discussion about the (inverted) island I. While our argument applies more generally, in this section we consider a setup that involves only a central bubble and its surrounding AdS bubbles. We discuss more general cases in Section V. A. Quantum Antinormal Surface in a Colliding Collapsing Bubble Let us consider the central bubble and only one of the surrounding AdS bubbles. These bubbles are separated by a domain wall. This system preserves invariance under an SO(2, 1) subgroup of SO(3, 1) symmetry of a single Coleman-De Luccia bubble. The spacetime is thus given by a warped product of a two-dimensional hyperboloid H 2 with a two-dimensional spacetime M 2 . Consider a two-dimensional hyperbolic surface ∂Σ given by the SO(2, 1) orbit of a spacetime point as shown in Fig. 4. We denote the partial Cauchy surface which is bounded by ∂Σ and extending toward the AdS side by Σ . We focus on the region near the domain wall at late times. Given a ∂Σ in this region, let k µ and l µ be the future-directed null vectors orthogonal to ∂Σ , pointing inward and outward relative to Σ , respectively, as depicted in Fig. 4. We normalize them such that k · l = −2 and denote the corresponding classical and quantum expansions by θ k,l and Θ k,l , respectively. Here, Θ k,l are given by the changes in the generalized entropy S gen (Σ ∪R) under infinitesimal null variations of ∂Σ [84]. Suppose that a surface ∂Σ in the AdS bubble is located near the big crunch singularity but sufficiently far from the domain wall. This surface is classically trapped (θ k , θ l < 0). When ∂Σ is moved toward the central bubble, first it becomes normal (θ k < 0, θ l > 0) and then antitrapped (θ k , θ l > 0) [80,81]. What about the quantum expansions? 
In general, S bulk, and hence S gen, can only be defined for a closed surface, and its change δS bulk under a small variation of the surface depends non-locally on the entire surface. In our setup, however, the only relevant contribution to δS bulk (Σ ∪ R) comes from partner modes of the Unruh radiation emitted by the domain wall into the AdS bubble, and we can locally determine the signs of Θ k,l.² Suppose we locally deform ∂Σ in the ±l direction. Then, δS bulk receives a contribution from the reflected modes, depicted by blue arrows in Fig. 4. This contribution, however, is not strong enough to compete with the classical expansion, since the modes are spread out in the l direction. To see this explicitly, let us assume that every radiation quantum carries O(1) entropy, and that the rate of emission as viewed from the domain wall's frame is controlled by the Unruh temperature T = a w /2π, where a w is the acceleration of the domain wall. We then find the estimate of Eq. (4),³ where ℓ is the AdS radius in the bubble, (t, r) is the location of ∂Σ in the coordinates of Eq. (5) [80,81], δr is the change of r when we deform ∂Σ in the l direction, and Ω H is the coordinate area of the portion of the hyperboloid over which we deform ∂Σ. Also, λ is a parameter appearing in the trajectory of the domain wall, where τ is the proper time along the domain wall trajectory, with r 0 = r(τ = τ 0 ) and t ∞ = t(τ = ∞), and we have introduced the corresponding null coordinates. To derive the above expressions, we have assumed that λ ≪ 1 and that r is sufficiently larger than ℓ, so that f(r) ∼ r²/ℓ², which implies t ∞ ∼ ℓ²/r 0 (also t ∞ > ℓ²/r 0 ). The expression in Eq. (4) should be compared with the corresponding change in area, δA/4l P ². Assuming that the scalar potential responsible for the domain wall is characterized by a single energy scale E, we find ℓ ∼ 1/E²l P and λ ∼ a w ∼ E,⁴ so δS bulk ≪ δA/4l P ², where we have only considered ∂Σ satisfying t < t ∞. We indeed find that the quantum effect, δS bulk, is negligible compared to the classical contribution, δA/4l P ², for ℓ sufficiently larger than l P. On the other hand, if we vary ∂Σ in the ±k direction, δS bulk receives a contribution from the partner modes, depicted by red arrows in Fig. 4. If ∂Σ is far from the domain wall, this contribution is small, so that ∂Σ remains trapped at the quantum level: Θ k,l < 0. However, if ∂Σ is moved toward the null surface to which the domain wall asymptotes, x + = t ∞, the contribution becomes enhanced because the partner modes are squeezed there. Specifically, the quantum effect can be estimated in a similar way; here, we have assumed that the reflected modes, the partners of which ∂Σ crosses, all pass through Σ, which requires a condition involving c = (t ∞ − ℓ²/r 0 )/(t ∞ + ℓ²/r 0 ), a constant satisfying 0 < c < 1. We thus find the relevant ratio of the quantum to the classical contribution, and the quantum effect can indeed compete with the classical contribution when ∂Σ approaches the null surface x + = t ∞.⁵ Since the sign of δS bulk from this effect is such that S bulk gets reduced when ∂Σ is deformed in the −k direction, Θ k can become positive, making ∂Σ quantum antinormal:

Θ k > 0, Θ l < 0. (13)

We assume that this transition happens before ∂Σ changes from being classically trapped to normal.⁶

² The contribution from partner modes of Unruh radiation emitted into the central bubble is not relevant if R is sufficiently large such that it intersects most of the radiation, since then the contribution has the same sign as the variation of the area A(∂Σ).
³ We thank Adam Levine for discussion on obtaining the quantum contributions.
This behavior of quantum expansions is depicted in Fig. 4.

⁴ The second relationship holds for generic bubbles. For supersymmetric bubbles, we instead have λ ∼ a w ∼ 1/ℓ.
⁵ For supersymmetric bubbles, the numerator becomes ℓ⁴l P ². In this case, we need a more careful analysis to show that δS bulk can compete with δA/4l P ².
⁶ If this assumption does not hold, we still have an island, as will be shown in Section IV D.

B. Forming a Closed Quantum Antinormal Surface

In the previous subsection, we have shown that there is a quantum antinormal surface ∂Σ in the AdS bubble. If there were no other bubbles except for these two, this surface would extend infinitely in H 2 and would have an infinite area. However, this is not the case, because the central bubble is surrounded by a multitude of AdS bubbles, as shown in Fig. 5. The surface ∂Σ corresponding to a particular AdS bubble is cut off by the domain walls resulting from collisions with the neighboring AdS bubbles. Thus, we are left with a finite portion of ∂Σ. Such a finite-sized, quantum antinormal surface can be obtained in each AdS bubble, which we denote by σ i (i = 1, 2, · · ·). These surfaces σ i can be connected, with appropriate smoothing, in such a way that the resulting closed surface encloses the central bubble and is quantum antinormal everywhere. To see this, we note that we have some freedom in choosing the values of (t, r) for each σ i. Using this freedom, we can make two adjacent σ i 's intersect along a curve. The resulting "kink" can then be smoothed at a length scale smaller than that of the bulk entanglement. This smoothing retains quantum antinormalcy, so we end up with a closed, quantum antinormal surface. We label this closed surface as ∂I, and the partial Cauchy surface bounded by ∂I and outside it as I; see Fig. 5. Note that ∂I being quantum antinormal means that Θ k > 0 and Θ l < 0, where the quantum expansions are defined using S bulk (I ∪ R).

C. Reduction of the Generalized Entropy

We now move on to discuss the generalized entropy. For a sufficiently large R, we expect that the region I reduces the generalized entropy of R in the sense that⁷

S gen (I ∪ R) < S gen (R). (14)

To understand this, we first note that Unruh radiation from the bubble walls of the central and surrounding bubbles, as well as that from the domain walls separating the central and surrounding bubbles, contributes to entanglement between R and I. Appending I to R therefore reduces the S bulk contribution to S gen. To illustrate Eq. (14), let us take R to be a spherically symmetric region in the central bubble. We assume that the distribution of AdS bubbles surrounding and colliding with the central bubble is statistically spherically symmetric. We then append I to R and compare the decrease in S gen due to the change of S bulk with the increase in S gen coming from A(∂I). We do this comparison by focusing on an infinitesimal solid angle dΩ S in the central bubble. Using Eq. (3), we can estimate the differential change in S gen due to Unruh radiation from the central bubble wall, Eq. (15), where χ * is the coordinate radius of R in the hyperbolic version of the FRW metric. Here, we have used the fact that the global state is pure, so that S bulk (I ∪ R) equals the bulk entropy of the complement of I ∪ R. Moreover, we have assumed that S bulk (I ∪ R) is sufficiently smaller than S bulk (R) and have taken √−κ χ * ≫ 1.
These conditions can be satisfied if the bubble nucleation rates in the parent bubble are small, so that the collisions with AdS bubbles occur at large FRW radii in the central bubble. The corresponding area element of ∂I is given by Eq. (16), where r σ i is the location of σ i in the coordinate r defined by Eq. (5), and dΩ H is the hyperbolic solid angle. By matching the area element of the domain wall expressed in hyperbolic and FRW coordinates on the side of the central bubble, we find dΩ S ∼ dΩ H. This leads to Eq. (17). (To do this properly, we need to regulate the solid angle Ω AdS which an AdS bubble asymptotically occupies and take dΩ S sufficiently small so that this area element fits within the corresponding domain wall. We can then take the limit Ω AdS, dΩ S → 0 afterward.) The radius r σ i is microscopic and is controlled by l P and ℓ i, where ℓ i is the AdS radius in the bubble in which σ i resides. When a surface ∂Σ is moved from an AdS bubble to the central bubble, the radius r grows and becomes macroscopic. However, this transition occurs mostly in the region where ∂Σ is classically normal, and since σ i resides on the AdS side of it, r σ i is small. We thus find that for a sufficiently large region R satisfying the condition in Eq. (18), appending I to R reduces S gen, so Eq. (14) holds in this case. D. Existence of a Quantum Extremal Surface The existence of a surface ∂I satisfying Eqs. (13) and (14) is not sufficient to ensure that of a non-empty island I for R. The existence of an island, however, is ensured [82] if there is a partial Cauchy surface I 0 that (i) is spacelike separated from R, (ii) has a boundary ∂I 0 that is quantum normal with respect to S gen (I 0 ∪ R), and (iii) encloses I in the sense that I ⊂ D(I 0 ). To argue for the existence of such an I 0, let us consider a codimension-2 surface ∂Σ 0 similar to ∂Σ. Such a surface is specified by the coordinates (t, r) in Eq. (5). The analysis in Sections IV A and IV B then tells us that when ∂Σ 0 is moved from the near-singularity region to the central bubble, it changes from being quantum trapped to quantum antinormal (as viewed from the side opposite to the central bubble, which we denote by Σ 0 ). This occurs before the classical expansions become normal. As we move the surface further, we expect that the quantum effect becomes subdominant at some point, making the signs of the quantum expansions the same as those of the classical expansions. In Fig. 6, we depict possible behaviors of quantum expansions in this region by green Bousso wedges, which are consistent with the quantum focusing conjecture [84]. We can thus take ∂Σ 0 in the quantum normal region to construct the surface ∂I 0. Like ∂Σ, the surface ∂Σ 0 is truncated by AdS-AdS domain walls and becomes a finite surface σ 0. As earlier, we form a closed surface using these truncated surfaces σ 0,i (i = 1, 2, · · ·) from each surrounding AdS bubble. By using the freedom of locating each surface, these pieces can be sewn together to form a closed surface enclosing the central bubble. The resulting surface, however, has folds at the junctions between AdS bubbles, with angles opposite to those required for quantum normalcy. Nevertheless, the effect of these angles is suppressed by O(ℓ i /r) compared to that of the expansions of the σ 0,i 's in the interior of the AdS bubbles. Therefore, by locating the σ 0,i 's at large r, we can smooth out the folds to form a closed surface that is classically normal and hence quantum normal.
This surface can play the role of ∂I 0, where we define I 0 as the partial Cauchy surface bounded by and outside ∂I 0. It is easy to see that the smoothing can be done such that the resulting I 0 is spacelike separated from R and I ⊂ D(I 0 ). This guarantees the existence of an island for R. We note that the existence of I 0 is sufficient by itself to ensure the existence of an island if R is very large, satisfying Eq. (18) with max i (r σ i ) replaced with the radius of I 0. Our argument involving I, however, indicates that the island exists for much smaller R. E. Inverted Island and Entanglement Castle Given that the collisions between the central and surrounding bubbles play an essential role in the existence of I and I 0, we expect that ∂I is located in the region near the corresponding domain walls. In fact, it is reasonable to expect that the two possibilities for quantum expansions depicted in Fig. 6 are both realized, depending on the path along which a codimension-2 surface ∂Σ is moved. The edge of the island, ∂I, would then lie at the point where trajectories of ∂Σ bifurcate to behave in these two different ways. The structure of the Bousso wedges around this location is indeed consistent with ∂I being a quantum maximin surface [87,88]. Strictly speaking, this only implies that the surface ∂I is a QES. In order for this surface to be the boundary of an island, it must be the minimal QES. We assume that this is the case, which is true if R has only one nontrivial QES with S gen (I ∪ R) < S gen (R). Since the topology of I is the same as that of the surfaces I and I 0 constructed above, the island I for region R is an inverted island, and hence does not geographically look like an island. Let Ξ be a Cauchy surface containing R and I Ξ = D(I) ∩ Ξ the section of the inverted island on this surface. Given the geography, we may refer to the region Ξ \ I Ξ, the complement of I Ξ on Ξ, as an entanglement lake. However, R occupies a significant portion of Ξ \ I Ξ, so (regarding R as land, as other authors do) the region Ξ \ (R ∪ I Ξ), which corresponds to water, is more like a moat; see Fig. 1. In this sense, the region Ξ \ I Ξ in the present context may be called an entanglement castle. V. COSMOLOGICAL EVOLUTION Consider a Cauchy surface Ξ in the global spacetime. The existence of a non-empty island I for a subregion R of Ξ implies that the information about the semiclassical state in I Ξ = D(I) ∩ Ξ is encoded in the fundamental degrees of freedom associated with R. Therefore, physics at the semiclassical level can be fully described by the fundamental degrees of freedom associated with the partial Cauchy surface Ξ \ I Ξ. In the eternally inflating multiverse, an inverted island I appears for sufficiently large R. This implies that the semiclassical physics of the multiverse, which is all that we need to make cosmological predictions, is described by the fundamental degrees of freedom in a finite-volume portion of a Cauchy slice that involves R. We call such a surface an effective Cauchy surface. Here we make two general comments about effective Cauchy surfaces. First, the location of the island D(I), or ∂I, depends on the Cauchy surface. For example, since R is spacelike separated from I, a Cauchy surface describing the state of the parent bubble cannot have ∂I around the central bubble as seen in the previous section. However, in this case there exists a region R p in the parent bubble such that an island I p appears around the parent bubble, so that the effective Cauchy surface is given by Ξ \ (D(I p ) ∩ Ξ).
In general, when we consider a Cauchy surface describing the state of an earlier bubble, the relevant island appears around that bubble. Second, when two or more (non-surrounding) bubbles collide, we may want to consider Cauchy surfaces spanning all of these bubbles to describe the collision. In this case, we can choose a region R c spanning the colliding bubbles such that the island I c encloses all the colliding bubbles. This allows us to describe the bubble collision directly, without relying on reconstruction from microscopic information in the fundamental degrees of freedom in R. A sketch of the global multiverse illustrating the above points is given in Fig. 7, where possible effective Cauchy surfaces are depicted by red lines. For a given gauge choice, the state on an effective Cauchy surface Υ 1 can uniquely determine the state on an effective Cauchy surface Υ 2 that is in the future domain of dependence of Υ 1. In general, the final state of this time evolution is given by a superposition of states in different geometries M i, Eq. (20). Here, all the M i 's share the surface Υ 1 and the state on it, and Υ 2,i is an effective Cauchy surface on the geometry M i which is in the future domain of dependence of Υ 1. It is worth noting that the evolution equation in Eq. (20) takes the form that, once the knowledge of the current state, |Ψ(Υ 1 )⟩, is given, we can predict its future, more precisely what an observer who is a part of the state can in principle see in their future. Note that the equation does not allow us to infer from |Ψ(Υ 1 )⟩ the global state of the multiverse in the past. This structure is the same as the time evolution of states in the Schrödinger picture of quantum mechanics. Our approach solves the measure problem in the sense described above: once we are given the initial state on an effective Cauchy surface, we can in principle predict any future observations. The existence of the inverted island implies that the necessary information for this prediction, i.e. the physics of matter excitations over semiclassical spacetimes, is fully encoded in the microstate of the fundamental degrees of freedom associated with the effective Cauchy surface. As discussed in Ref. [22] for a dS spacetime, this information is expected to be encoded in quantum correlations between the matter and Unruh radiation degrees of freedom. VI. CONCLUSIONS In this paper, we have shown that a Cauchy surface Ξ in an eternally inflating multiverse has an entanglement island for a sufficiently large subregion R ⊂ Ξ. The island I Ξ on Ξ is, in fact, an inverted island surrounding the region R, implying that the semiclassical physics of the multiverse is fully described by the fundamental degrees of freedom associated with the finite region Ξ \ I Ξ, the complement of I Ξ on Ξ. This provides a regularization of the infinities which caused the cosmological measure problem. As in the case of a black hole, the emergence of an island is related to the existence of a singularity in the global spacetime; in the multiverse, this role is played by the big crunch singularities in the collapsing AdS bubbles. This picture is consistent with the interpretation of singularities in Refs. [21][22][23]: their existence signals that a portion of the global spacetime is intrinsically semiclassical, arising only as an effective description of more fundamental degrees of freedom associated with other spacetime regions. The result in this paper strongly suggests the existence of a description of the multiverse on finite spatial regions.
Proposals for such descriptions include Refs. [51][52][53] and Refs. [32,34,56] in which the fundamental degrees of freedom are associated with the spatial infinity of an asymptotic Minkowski bubble and the (stretched) cosmological horizon, respectively. It would be interesting to explore precise relations between these holographic descriptions and the description based on the global spacetime presented in this paper. ACKNOWLEDGMENTS We thank Raphael Bousso, Adam Levine, and Arvin Shahbazi-Moghaddam for useful conversations. This work was supported in part by the Department of Energy, Office of Science, Office of High Energy Physics under contract DE-AC02-05CH11231 and award DE-SC0019380 and in part by MEXT KAKENHI grant number JP20H05850, JP20H05860.
8,792
2021-06-09T00:00:00.000
[ "Physics" ]
Efficient Driving of Piezoelectric Transducers Using a Biaxial Driving Technique Efficient driving of piezoelectric materials is desirable when operating transducers for biomedical applications such as high intensity focused ultrasound (HIFU) or ultrasound imaging. More efficient operation reduces the electric power required to produce the desired bioeffect or contrast. Our preliminary work [Cole et al. Journal of Physics: Condensed Matter. 2014;26(13):135901.] suggested that driving transducers by applying orthogonal electric fields can significantly reduce the coercivity that opposes ferroelectric switching. We present here the experimental validation of this biaxial driving technique using piezoelectric ceramics typically used in HIFU. A set of narrow-band transducers was fabricated with two sets of electrodes placed in an orthogonal configuration (following the propagation and the lateral mode). The geometry of the ceramic was chosen to have similar resonance frequencies for the propagation and the lateral modes. The average (± s.d.) resonance frequency of the samples was 465.1 (± 1.5) kHz. Experiments were conducted in which each pair of electrodes was driven independently and measurements of effective acoustic power were obtained using the radiation force method. The efficiency (acoustic/electric power) of the biaxial driving method was compared to the results obtained when driving the ceramic using electrodes placed only in the pole direction. Our results indicate that the biaxial method increases efficiency by 50% to 125% relative to using a single electric field. Introduction In biomedical applications of ultrasound, the efficiency of a piezoelectric actuator is often defined as the ratio between the acoustic power obtained and the applied electrical power. More efficient driving of piezoelectric materials is desirable when designing and building transducers for biomedical applications such as high intensity focused ultrasound (HIFU), ultrasound imaging and ultrasonic motors. More efficient operation reduces the electric power needed to produce the desired bioeffect or contrast in imaging. For applications requiring continuous operation at high power (thousands of Joules), such as HIFU, more efficient energy conversion translates into less internal heat generation and consequently reduces the constraints on cooling, which is often needed in the design of actuators. Piezoelectric actuators are commonly driven by applying the electric field along the poling axis in order to maximize their mechanical response. To better understand the piezoelectricity phenomenon, and evaluate potential solutions to increase efficiency, our group has performed preliminary microscopic theoretical studies using modern polarization theory to establish a relation between atomic structure and dielectric dissipation of ferroelectric materials [1][2][3]. In this preliminary work, we adapted Density Functional Theory [4,5] to better understand the energetics of ferroelectric switching driven by an external electric field. This modelling is based on first-principles calculations and differs from many of the traditional macroscopic models of piezoelectric response [6][7][8][9]. The free energy profile for single-domain ferroelectric PbTiO 3 obtained from first-principles calculations is shown in Fig 1. The high energy barrier for the cubic structure and the much lower barrier for the orthorhombic structure below the Curie temperature create favourable conditions for polarization rotation [10][11][12].
Fu and Cohen [13] have shown that a polarization rotation can enhance the piezoelectric response. This rotation mechanism was proposed in Ref. [13] to explain the "giant" piezoelectric response in PZN-PT and PMN-PT materials, where a non-aligned field alternates the material strain vectors between tetragonal and rhombohedral configurations. During the structural transformation associated with polarization rotation, the polarization vector P does not vanish, but changes its direction while maintaining a magnitude almost identical to the spontaneous polarization P 0 (Fig 1). These arguments suggest that a uniaxial electric field along the poling direction may not be the optimal driving method for guiding polarization rotation along a curved path. The method of electrical excitation used in the present study exploits polarization rotation to further enhance the mechanical response of actuators. A possible strategy to facilitate polarization rotation along a curved path uses two-dimensional excitation with active modulation of the E x (t) and E z (t) components of the applied field. Our work in [3] suggested that the application of two orthogonal electric fields, instead of one as commonly done in most applications, can significantly reduce the coercivity that opposes the ferroelectric switching. A specific condition was predicted whereby a rotating electric field would result in a reduction in the coercivity. This rotation can be achieved by dephasing two sinusoidal electric fields (a short numerical illustration is given at the end of this passage). Theoretical studies based on the Landau-Ginzburg-Devonshire phenomenological model showed a similar reduction in the coercive field when a supplemental external mechanical stress is applied to the piezoelectric material. When this external stress is applied in the polarization direction, the coercive field is reduced and the piezoelectric response is enhanced [14]. In our previous numerical work, we obtained an enhanced response by modifying the pattern of the applied electric field rather than by applying a supplemental mechanical stress. Other experimental work has also obtained a similar enhancement by using a direct voltage field to produce a supplemental stress, and used the effect to achieve higher pressure output for lithotripsy pulses [15]. The goal of the present study is to validate the predictions of our theoretical work on the reduction of coercivity when driving piezoelectric actuators. For this purpose, a set of narrow-band transducers was fabricated with two sets of electrodes placed in an orthogonal configuration. Transducers were cut to have resonance frequencies as similar as possible in both orthogonal directions. A series of experiments was then conducted where each pair of electrodes was driven independently and measurements of effective acoustic power were obtained using a radiation force method. The outer diameter of the transducer was 12 mm, with a ring width of 3 mm and a height of 6 mm. In addition, the natural resonance frequency of the transducers was specified to be as close as possible to 500 kHz.

(Fig 1 caption, from [3]: The labels C, T and O refer to cubic, tetragonal and orthorhombic structures, respectively. The arrows indicate evolution of polarization in a rotational manner.)

Sample preparations. The transducers were configured as air-backed, using a cork layer below the bottom face of the ring (Fig 2a). A 0.05 mm-thick plastic film was used to isolate the cork from the transducer.
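Returning to the driving concept referenced above, the sketch below illustrates how dephasing two sinusoidal signals applied on orthogonal electrodes produces a rotating applied field; the amplitude, the 90° phase, and the assignment of the channels to the P and L electrode pairs are illustrative choices, not the experimental drive settings.

import numpy as np

f = 465.1e3                       # drive frequency (Hz), the average resonance
phase = np.pi / 2                 # 90 deg dephasing between the two channels
t = np.linspace(0.0, 2.0 / f, 400)            # two drive periods

E0 = 1.0                                       # normalized field amplitude
Ex = E0 * np.sin(2 * np.pi * f * t)            # field from one electrode pair
Ez = E0 * np.sin(2 * np.pi * f * t + phase)    # dephased, orthogonal field

# with a 90 deg dephasing the tip of (Ex, Ez) traces a circle: a rotating field
magnitude = np.hypot(Ex, Ez)                   # constant (= E0) for 90 deg
angle = np.unwrap(np.arctan2(Ez, Ex))          # advances linearly in time
rate = (angle[-1] - angle[0]) / (t[-1] - t[0]) / (2 * np.pi)
print(f"|E| spread: {np.ptp(magnitude):.2e}, rotation rate: {rate:.3e} Hz")

Setting phase = 0 collapses the trajectory onto a line (a purely oscillating field), the uniaxial case; intermediate phases give elliptical trajectories, consistent with the strong phase dependence reported in the experiments below.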
The ensemble was secured on a 3D-printed ABS support (Makerbot, Brooklyn, NY, USA) using epoxy glue (301, Epoxy Technology, Billerica, MA). As shown in Fig 2c and 2d, two pairs of electrodes were placed following the propagation (P) mode and the lateral (L) mode. Each pair was driven independently by its own power supply. The P-mode electrodes were placed on the top and bottom of the ring and the L-mode electrodes on the outer and inner walls of the ring. The transducers were poled along the propagation direction P. Each mode was electrically characterized independently using a network analyzer (8751A, Hewlett Packard, Kobe Instruments Division, Hyogo, Japan) and matching circuits were built using solenoids and capacitors to adapt each mode to 50 Ω. Measurements of acoustic power vs. phase. As shown in Fig 3, each transducer was characterized using a radiation force setup [16]. The principle of this setup assumes that all acoustic energy generated by the transducer is absorbed, resulting in a mechanical displacement that can be measured as a change of mass on a scale. A 6-cm diameter cylindrical absorber made specifically for radiation force measurements (HAM A, Precision Acoustics, Dorchester, Dorset, UK) was placed at the bottom of a water container, which sat on top of the plate of an analytical scale (PI-225D, Denver Instruments, Bohemia, NY, USA). The transducer was placed 2 cm above the absorber and the container was filled with deionized and degassed water (less than 1 ppm of oxygen). The P- and L-mode electrodes were driven using a dual-channel function generator (33522A, Agilent Technologies, Santa Clara, CA) programmed in continuous mode. Signals were amplified using linear amplifiers (A150, ENI, Rochester, NY). The signal driving the L-electrodes was programmed with a phase shift ϕ relative to the P-electrodes. The acoustic power as a function of the relative phase, W A (ϕ), was calculated using [16]

W A (ϕ) = m(ϕ) g c, (1)

where m is the mass (kg) measured using the analytical scale when the transducer is excited, g is the gravitational acceleration (9.81 m·s⁻²) and c is the speed of sound in water (1481 m·s⁻¹) at room temperature. The efficiency of the transducer, η(ϕ), was calculated with [16]

η(ϕ) = W A (ϕ) / W E (ϕ), (2)

where W E (ϕ) = W EP (ϕ) + W EL (ϕ) is the effective electrical power (forward minus reflected) on the P (W EP ) and L (W EL ) electrodes (a numerical sketch of Eqs (1) and (2) is given at the end of this passage). W EP and W EL were measured simultaneously using power meters (N1914A, Agilent Technologies, Santa Clara, CA) and −30 dB couplers (C5085-10, Werlatone, Patterson, New York). Power was configured individually to deliver 1 W of effective electrical power in continuous mode on each of W EP and W EL, for a total of 2 W of electrical power. To measure the gain in efficiency of the P+L configuration, a series of acquisitions was performed driving only the P electrodes, calibrated with W EP equal to 2 W. Using this configuration, the P+L driving mode can be compared to the P mode alone under the same initial electrical power conditions. For each transducer, three identical series of measurements were performed where ϕ was changed from 0 to 360° in steps of 5°. In each series a total of 72 radiation force acquisitions were performed and the order of the phase values was randomized. Because the dimensions of the container were small (total water volume of 180 cm³), evaporation effects were present and were compensated for. Each acquisition started only after the scale detected stable readings using its internal filter set to "normal" mode.
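A minimal numerical sketch of Eqs (1) and (2) follows; the mass value is illustrative and assumed to be already corrected for evaporation.

G = 9.81          # gravitational acceleration (m/s^2)
C_WATER = 1481.0  # speed of sound in water at room temperature (m/s)

def acoustic_power(mass_kg: float) -> float:
    """Eq (1): W_A = m * g * c for a plane wave fully absorbed by the target."""
    return mass_kg * G * C_WATER

def efficiency(mass_kg: float, w_ep: float, w_el: float) -> float:
    """Eq (2): eta = W_A / (W_EP + W_EL)."""
    return acoustic_power(mass_kg) / (w_ep + w_el)

# example: a 16.5 mg scale reading with 1 W effective power on each electrode pair
m = 16.5e-6  # kg
print(f"W_A = {acoustic_power(m):.3f} W, eta = {efficiency(m, 1.0, 1.0):.1%}")

A reading of about 16.5 mg corresponds to W A ≈ 0.24 W, which matches the maximal acoustic power reported below for the P+L mode at 2 W of total electrical power (η ≈ 12%).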
Scale measurements were also taken without ultrasound every 0.5 s for 10 s, and a linear fit of the weight loss over time caused by evaporation was calculated. This fit was then used to correct m(ϕ) at the time of transducer excitation. Since weight losses due to evaporation follow a linear relationship, the R² value of the fit was also used to determine the stability of the scale measurements. Only measurements of evaporation showing a linear fit with R² equal to or larger than 0.9 were kept for analysis (a code sketch of this correction is given at the end of this passage). After the evaporation measurements, the transducer was excited for 8 s and the scale reading was taken immediately prior to the end of the excitation. The value of m(ϕ) was corrected with the linear fit of evaporation and then used to calculate W A using Eq (1). The acquisition was controlled with a laptop computer (Latitude E6500, Core 2 P8600 at 2.4 GHz, 4 GB RAM, Dell Computers, Round Rock, TX) running Matlab R2009 (MathWorks, Natick, MA). Results. Table 1 shows the electrical characterization of the transducers. The resonance frequencies of the P and L electrodes were in general very similar, showing a global average (± s.d.) of 465.1 (±1.5) kHz. Because the resonance frequency of both modes for each transducer was not exactly the same, experiments were conducted under the following conditions: 1. Driving both the P and L electrodes using their average frequency, which was calculated on a per transducer basis. 2. Driving both the P and L electrodes using the P-resonance frequency. 3. Driving both the P and L electrodes using the L-resonance frequency. 4. Driving each of the P and L electrodes using their individual resonance frequency. Conditions 1 to 3 used the same frequency for both sets of electrodes, while condition 4 used a different frequency for each set. P and L electrodes driven at the same frequency. Results with this driving mode were compared to those obtained when driving only the P electrodes. Similar results were obtained when driving both the P and L electrodes with either the P-resonance or the L-resonance frequency. Plots indicate a sinusoidal-shaped relationship between the acoustic power W A and the phase ϕ applied on the L electrodes. W A almost tripled, from 0.09 W when using only the P-electrodes to 0.24 W when using the P+L mode and a phase ϕ of 352°. However, when ϕ was set at 182°, W A decreased to 0.03 W, which was a third of the baseline value. This result indicates that ϕ needs to be carefully selected to ensure an enhancement in the output acoustic power. It is worth noting that the measured effective electrical power W E also showed a sinusoidal-type dependence on ϕ, suggesting that the electrical response of the transducer in each channel changes when applying the two electric fields simultaneously. This effect was more pronounced for the P electrode, for which W EP ranged from 0.22 W with ϕ = 227° to 1.8 W with ϕ = 47°. For the L electrode, W EL changed from 0.95 W with ϕ = 177° to 1.09 W with ϕ = 347°. For both sets of electrodes, the difference in ϕ between the maximum and minimum was close to 180°. There was also a difference of 55° between the phases where W E and W A show their maxima. Our results indicated that the efficiency η could be doubled, from 4.9% when only driving the P electrodes to 11.4% when driving the P+L electrodes at a phase ϕ of 292°. Fig 5 shows the ensemble of results (W A, W E, η) for all transducers when their P and L electrodes were driven at their average frequency.
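As referenced above, here is a minimal sketch of the evaporation correction; the drift data are synthetic and the variable names and numbers are illustrative.

import numpy as np

def evaporation_drift(times_s, masses_kg, r2_threshold=0.9):
    """Fit m(t) = a*t + b to the pre-excitation readings and return the drift
    rate a (kg/s); reject the acquisition if the linear fit has R^2 below 0.9."""
    a, b = np.polyfit(times_s, masses_kg, 1)
    residuals = masses_kg - (a * times_s + b)
    r2 = 1.0 - np.sum(residuals ** 2) / np.sum((masses_kg - masses_kg.mean()) ** 2)
    if r2 < r2_threshold:
        raise ValueError(f"unstable scale reading (R^2 = {r2:.3f})")
    return a

rng = np.random.default_rng(1)
t = np.arange(0.0, 10.5, 0.5)                   # readings every 0.5 s for 10 s
m = 1.0e-4 - 2.0e-9 * t + rng.normal(0.0, 1e-10, t.size)  # synthetic drift
a = evaporation_drift(t, m)

t_exc = 18.0         # seconds between the drift fit and the scale reading
m_reading = 1.17e-4  # kg, reading taken just before the end of the excitation
m_corrected = m_reading - a * t_exc   # remove the predicted evaporation loss
print(f"drift = {a:.2e} kg/s, corrected mass = {m_corrected:.6e} kg")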
The transducers all showed similar results, producing higher efficiency when driven simultaneously with the P+L method than when driven with the P electrodes only. The maximal gain in efficiency observed for transducers #1, #2 and #3 was, respectively, +74%, +124% and +53%. In each of the 3 transducers, η was maximal for a value of ϕ where W EP and W EL were both close to 1 W, as they were individually configured prior to the experiment. Table 2 shows the summary of the results for the observed maximal and minimal efficiency when driving the transducers with the dual-mode technique under the four testing conditions indicated above. Table 3 shows a similar summary for the maximal and minimal acoustic power. The results in both tables indicate that the optimal phase was frequency dependent, and that the optimal value of ϕ translated into values of W EP and W EL close to 1 W, as they were individually configured. For each combination, when driving transducers at the same frequency, transducers #1 and #3 showed both maximal η and W A at the same value of ϕ. For transducer #2, the peak of η was found 60° off the peak observed for W A when driving electrodes at their average frequency. This difference in ϕ was reduced to 10° when driving both sets of electrodes at the P-mode resonance frequency, and to 20° when using the L-mode resonance frequency. P and L electrodes each driven at their individual resonance frequency. A constant increase in η was observed for transducers #1 and #2 when both electrodes were driven, but again no clear trend was observed, and this gain was inferior to that obtained by driving both electrodes at the same frequency. A summary of the results observed when driving electrodes P and L at their individual resonance frequency is included in Tables 2 and 3. (A sketch of extracting the optimal phase from such a phase scan is given at the end of this passage.) Discussion The results presented in this study indicate that a more efficient conversion of electrical power to forward acoustic power can be achieved in piezoelectric materials by applying two orthogonal electric fields. These measurements are in agreement with our previous numerical studies that predicted this phenomenon. Furthermore, the predictions in [3] indicated that the coercive field is highly anisotropic, and it was anticipated that the ferroelectric hysteresis would be sensitive to the direction of the applied electric field. The first-principles modelling used in [1][2][3] differs from most macroscopic phenomenological modelling since the total energy losses in the former model are linked to the coercive field that opposes the external electric field. In the macroscopic phenomenological approach, the mechanical losses and the electric losses are often approximated separately, and this approach has been of great value in characterizing the losses of resonant or non-resonant driving of piezoelectric actuators [7,9]. In first-principles modelling, materials are modelled as a perfect quasi-infinite layer; this type of approach is not suited to model differences between resonant and off-resonant driving techniques, since this difference depends on macroscopic properties such as the actuator thickness. However, the first-principles approach has brought some new insight into the energy losses that take place at the atomic level, opening new opportunities such as the biaxial driving method proposed here. The ring configuration was chosen for this test primarily for simplicity: it yields a device that is easy to fabricate, mount and operate in the frequency range of therapeutic ultrasound (around 500 kHz).
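As referenced above, the optimal phase can be extracted from the 5°-step phase scan by a linear least-squares fit of a sinusoid, η(ϕ) ≈ c0 + a sin ϕ + b cos ϕ; the data below are synthetic and the parameter values are illustrative.

import numpy as np

phi_deg = np.arange(0.0, 360.0, 5.0)          # 72 phase steps, as in the protocol
phi = np.radians(phi_deg)
rng = np.random.default_rng(0)
eta = 0.08 + 0.03 * np.sin(phi - np.radians(202.0)) + rng.normal(0.0, 0.002, phi.size)

# least-squares fit of eta(phi) = c0 + a*sin(phi) + b*cos(phi)
A = np.column_stack([np.ones_like(phi), np.sin(phi), np.cos(phi)])
(c0, a, b), *_ = np.linalg.lstsq(A, eta, rcond=None)

amp = np.hypot(a, b)
phi0 = np.degrees(np.arctan2(b, a))           # eta = c0 + amp*sin(phi + phi0)
phi_opt = (90.0 - phi0) % 360.0               # sine peaks where phi + phi0 = 90 deg
print(f"optimal phase ~ {phi_opt:.0f} deg, peak efficiency ~ {c0 + amp:.3f}")

With these synthetic parameters the fit recovers a peak near 292°, the optimal phase reported above for the efficiency of transducer #1.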
Our setup for radiation force assumed the plane wave conditions typically used for flat circular sources, which are not necessarily applicable to the ring configuration used in this experiment. The measurements obtained by the scale are thus a sum of the wave coming from the top face of the transducer and a partial contribution from the inner face of the ring transducer. Effects of clamping were potentially present but, as noted in Tables 2 and 3, those effects were uniform across all samples. Our study was limited to a set frequency and operational mode, and many conditions remain to be explored. For example, it may be worth driving transducers at higher harmonics and in combinations of modes, such as driving the P-mode at the fundamental frequency and the L-mode at the 3rd harmonic. It is also worth exploring the potential effects of driving the transducers under broadband conditions, as is done in imaging applications. The proposed method requires two independent power lines per transducer, which doubles the complexity of driving transducer arrays. However, given an optimal fixed value for the phase, it is possible to split one common signal to produce the required delay using a filter circuit in series with the L-electrodes. The proposed technique requires that both orthogonal dimensions be tailored to the central frequency of operation. In the case of the tested material, this requirement translated into a width of the ceramic that was half its height. However, using higher harmonics on the L-mode could facilitate the fabrication of larger ceramics; a ceramic 3 times larger in the L dimension should in principle resonate at its 3rd harmonic at the same frequency as the fundamental frequency of the P-mode. The requirement for the application of orthogonal electric fields limits the possibility of using ceramic shapes that do not allow the placement of two sets of orthogonal electrodes, such as a circular piston. Nevertheless, the proposed configuration of electrodes is well suited to a ring geometry, as used in this study, or prismatic shapes, which are ideal for linear arrays or even 2D arrays. Conclusions A new technique for driving piezoelectric materials was presented with the purpose of increasing operation efficiency under narrow-band conditions. The technique consists of applying two spatially orthogonal electric fields. In this study, three samples of ring-shaped piezoelectric ceramics were fabricated and driven using the technique. The ring-shaped piezoelectric ceramics were poled from top to bottom and the electrodes were placed on all four sides of the ring. Both the height and thickness of the ring transducer were optimized to obtain similar resonance frequencies in both dimensions. The average (± s.d.) resonance frequency among all samples was 465.1 (±1.5) kHz. A radiation force system was used to calculate the efficiency (conversion from electrical power to acoustic power) of the new driving method, and the result was compared to driving the ceramics using electrodes placed solely in the pole direction. Our results indicate that the biaxial method increases efficiency, depending on the sample, by 50% to 125% when compared to applying a single electric field in the direction of the pole.
Intelligent Assistant Decision-Making Method for Power Enterprise Customer Service Based on IoT Data Acquisition

The prevailing era of the Internet of Things (IoT) has transformed all fields of life and, especially with the advent of artificial intelligence (AI), has drawn researchers toward a new paradigm of living standards. This revolution has been embraced around the world for making life comfortable through the use of intelligent devices. AI-enabled machines are more intelligent and capable of completing specific tasks, which saves a lot of time and resources. Currently, diverse methods are available in the existing literature to handle different real-life issues based on AI and IoT systems. The role of decision-making is prominent in AI-enabled and IoT systems. In this article, an AI- and IoT-based intelligent assistant decision-making method is presented for power enterprise customer service. An intelligent model of the customer service data network is designed, and a method of collecting data from the IoT to assist decision-making is presented. Then, the semantic relationship of customer service data is defined, and the sharing scope of data transmission and resources is determined to realize intelligent assistant decision-making for customer service in power enterprises. Simulation results show that the proposed method improves the decision data transmission speed and shortens the transmission delay, and that the network performance of data interaction is better than that of existing methods.

Introduction

AI-enabled devices are more intelligent and capable of completing specific tasks, which saves a lot of time and resources. The use of the IoT and networks provides the principal way forward owing to low cost and adaptable features [1]. The main function of the IoT is to provide links to the available resources with effectiveness and reliability. The IoT is composed of three main components: digitization of resources, collection of data about the resources, and computational algorithms to control the system formed by the interconnected resources [2]. The IoT has many applications in real life which make life easier and more comfortable. Various methods, techniques, tools, and approaches are available to handle different real-life problems based on AI and IoT systems. The role of decision-making is significant in AI-enabled and IoT systems [3][4][5]. Several methods are available for AI-enabled decision-making in IoT systems. An intelligent decision-making system based on the IoT was presented in [6]. The system architecture was based on two steps, and criteria were devised for the selection of each part of the system. The performance of the system was validated with a case study of temperature monitoring. Chatfield and Reddick [7] designed a framework of smart government performance decision-making systems based on IoT-enabled techniques. The framework was used to conduct a study analysis of IoT cybersecurity policy at the US federal government level. Gill et al. [8] analyzed the impacts of artificial intelligence, blockchain, and the IoT on future cloud computing systems. The authors in [9] conducted a study to identify, perform, and evaluate AI approaches for securing the IoT environment. Parda [10] presented a study on the applications of the IoT in decision-making about the assessment and management of pain in different organs of the human body. Hansen et al.
presented a thorough investigation of AI and the IoT, identifying the existing opportunities and shortcomings for enabling predictive analytics. An overview of IoT- and AI-enabled systems along with four capabilities was provided, followed by a review of the literature and its analysis. Du et al. [11] presented an intelligent decision strategy for the iron ore sintering process based on abnormal condition prediction. Initially, a running-mode prediction model was established using a fuzzy-rules model, with inputs selected by one-way ANOVA. Based on this, an intelligent decision strategy for the running parameters was proposed. The experiment was carried out with actual operational data collected from the industrial field. The innovation of their model lies in the establishment of a prediction model based on fuzzy rules and the design of intelligent decision strategies based on priority to improve the abnormal operation mode. However, this approach has poor throughput performance. Niyama and colleagues [12] used a competitive ecosystem model, the competitive Lotka-Volterra (LV) model, to demonstrate the identification mechanism of the decision-making process. Based on the winner-take-all mechanism in the competitive LV model, non-optimal selections were eliminated and only the optimal selection was retained. In addition, a mean-field approximation mechanism was applied to the proposed decision method, and it was shown that the proposed method has good scalability compared to data selection. Bandaragoda et al. [13] conceptualized, designed, and developed an AI-based commuter behavior profiling framework to predict different commuter behavioral profiles and fluctuating and routine patterns among commuters using traffic-flow profiling and travel-trajectory analysis. Their system was capable of real-time decision-making for road infrastructure and of supporting decision-making by government and business entities to optimize operations. An IoT-inspired framework has been designed for real-time analysis of athlete performance [14]. IoT data were employed to quantify athlete performance in terms of the probability parameters of a probabilistic measure of performance and a level-of-performance measure. The proposed model showed improved performance in terms of temporal delay, classification efficiency, and reliability. Decision support technology [15], including computers, artificial intelligence, multimedia networks, and other high technologies, combines artificial thinking and decision calculation to assist and support the organization of decision-making activities or businesses. With the rapid development of communication technology, decision-making activities have entered a new era. In this scientific and programmed stage, intelligent decision support technology has emerged. Owing to its obvious advantages in qualitative analysis and uncertain reasoning [16], the technique makes full use of empirical knowledge, deepens the fusion of artificial thinking and decision computing, and is widely used in various fields. Therefore, based on an IoT data acquisition method, this paper studies an intelligent decision-making method for customer service centre data sharing. The intelligent service centre is mainly used for product consultation, feedback, presale, and customer protection, and has been widely used in the fields of finance and telecommunications [17].
In the case of a large number of consumers, some enterprises use intelligent service centre tools; however, due to limitations of the technical level, for some intelligent services the information is inaccurate and the service efficiency is low. To make intelligent decisions, it is necessary to provide accurate information resources according to the needs of customers [18]. The customer service centre is still a new field in electric power enterprises, and the electric power enterprise is an important part of the national economy. In this study, an AI- and IoT-based intelligent assistant decision-making system is proposed for power enterprise customer service centres. We designed an intelligent model of the customer service data network and devised a method of collecting data from the IoT to assist the decision-making process. Furthermore, the semantic relationship of the customer service data was identified, and the scope of data transmission was determined to realize intelligent assistant decision-making for customer service in power enterprises. The paper is organized as follows. Section 2 provides a framework of the customer service data network. Section 3 describes the recurrent neural network and backpropagation method for developing an intelligent decision-making system for power customer service enterprises. In Section 4, the experimental results are presented. Finally, the paper is concluded in Section 5.

Customer Service Data Network Model

To bring the technical value of the customer service data model of electric power enterprises into full function, the network model of electric power customer service data was designed based on the data collection terminals of the Internet of Things [19], to realize data integration, management, and resource sharing of electric power enterprise customer service data on the cloud. The customer service data network model is shown in Figure 1. The model framework is composed of an extended application module, a model module, a data module, a data source, and related software. The main functions of each module are described as follows:

(i) Extended Application Module. This module presents an interface for electric power enterprise users to operate through web browsers. After users complete the interface login process on the portal site, they can obtain the relevant data in real time. The role of the extended application module is to share the customer service model data of power enterprises using IoT techniques [20].

(ii) Model Module. The function of this module is to manage module browsing and database background services. The former presents the customer service data model through rendering technology, and the latter realizes dynamic management by managing data and cooperating with customer service participants. At the development stage, the model module should focus on three key points: reducing file memory while ensuring the display quality and operation speed of the power enterprise customer service data model in the web page; improving the data retrieval speed; and providing different login accounts and management authority for each user role, to ensure that the different users of the power enterprise customer service data network model can share data safely.

(iii) Data Module. This module adopts different storage management methods for each data type.
The structured data of the power enterprise customer service model are managed using basic standards [21], the data of unstructured documents are stored by a document management system, and the organization and process data are managed by the corresponding databases. The function of the module is to store the basic data, customer service model data, and engineering data.

(iv) Data Source. This module is used to convert multiple classes of initial data into an industrial basic standard format, reduce the complexity of data queries, and provide data sharing and other functions.

Proposed Model

Due to the complexity of the customer service structure and the uncertainty of customer demand in power enterprises, a recurrent neural network with strong fuzzy data processing capability was designed [22] to realize intelligent decision support for customer service in power enterprises.

Recurrent Neural Network Model. Using the nonlinear transformation unit of a recurrent neural network (RNN) neuron, a backpropagation training model was created. Backpropagation is the core of training a neural network: it is the method of tuning the weights of a neural network based on the error obtained in the previous epoch. Proper fine-tuning of the weights decreases the error rate and makes the model reliable by improving its generalization [23]. A recurrent neural network, also called a circular neural network, can traverse the data elements inside a sequence and save the current state every time an element is traversed, as the input state of the next iteration. The basic structure of the neural network is shown in Figure 2. The optimization and fine-tuning of the recurrent neural network depend on the following factors. The sigmoid function with gradient search capability is selected as the network transformation function, and its monotonically increasing nonlinear curve is used to reflect the saturation property of neurons.

Sample Selection and Normalization. Samples are selected based on the level of accuracy. To ensure that the variables can be effectively input to the network, the input variables are normalized to the range of 0 to 1. Since the sigmoid function saturates over the intervals beyond a and b, the convergence time is usually long; therefore, the normalized range of the samples is set within the domain excluding these two intervals.

Neural Network Model Learning. To design the proposed intelligent decision-making system, the neural network was trained using the backpropagation training technique. The model was also optimized using different hyperparameters. Before the explanation of the backpropagation process, the forward propagation process of the neural network is described in the following section.

Forward Propagation Supervised Learning Algorithm. Taking a four-layer neural network as an example, the forward propagation steps in its learning algorithm are as follows:

(i) Assuming the existence of N nodes in the input layer, equalize the input and output of any node i, namely O_i = x_i, where x_i represents the input of the input layer node i and O_i represents the output of that node.
(ii) Assuming that the number of nodes in the first hidden layer is N′, the connection weight between node j of the hidden layer and node i of the input layer is w_ij, the threshold value of node j is θ_j, and the sigmoid function is f(·), the input x_j and output O_j of node j of the first hidden layer are

x_j = Σ_{i=1}^{N} w_ij O_i − θ_j,  O_j = f(x_j).

(iii) If the number of nodes of the second hidden layer is N″, the input x_k and output O_k of any node k are

x_k = Σ_{j=1}^{N′} w_jk O_j − θ_k,  O_k = f(x_k),

where w_jk represents the connection weight between nodes j and k of the two hidden layers and θ_k represents the threshold of node k.

(iv) If the number of nodes in the output layer is N‴, the threshold value of any node l in the output layer is θ_l, and the connection weight between node l and node k of the hidden layer is w_kl, then the input x_l and output O_l of node l are

x_l = Σ_{k=1}^{N″} w_kl O_k − θ_l,  O_l = f(x_l).

(v) If the sample error function is a square function, p_l out of p samples are trained, the target value of node l for training sample p_l is t_l^{p_l}, and the output of this node after training p_l is O_l^{p_l}, then the decision deviation E is

E = (1/2) Σ_{p_l} Σ_l ( t_l^{p_l} − O_l^{p_l} )².

Backpropagation Supervised Learning Algorithm. We adjusted the network weights along the negative gradient direction of the deviation E, with the following backpropagation learning flow:

(i) If the network is trained M times, the weight gain (learning rate) coefficient is η, the input to node k after training p_l is x_k^{p_l}, and the error term of output node l is δ_l, then the new connection weight w′_kl between the output layer and the second hidden layer and the new threshold θ′_l of node l are

w′_kl = w_kl + η δ_l O_k,  θ′_l = θ_l − η δ_l.

(ii) With the error term of hidden node k given by δ_k = f′(x_k) Σ_l δ_l w_kl, the new hidden-layer threshold θ′_k of node k and connection weight w′_jk are

w′_jk = w_jk + η δ_k O_j,  θ′_k = θ_k − η δ_k.

(iii) After the error term δ_j between input layer node i and first hidden layer node j is obtained by solving equations (8)-(10), it is used to compute the new connection weight w′_ij and the new threshold θ′_j of the hidden layer node:

w′_ij = w_ij + η δ_j O_i,  θ′_j = θ_j − η δ_j.

According to the characteristics of the network, the decision factors are divided into non-sequential values and sequential values based on the network values. Among the factors affecting decision support, the non-sequential values include the precision of the customer service model, knowledge integrity, performance, and innovative technology, while the sequential value is the project price.

Data Node Transfer Protocol. To ensure the accuracy of the proposed customer service intelligent decision-making model, a data node transmission protocol was designed. The first node specifies the starting point of the network connection and, based on the broadband characteristics, determines the profile of the leaf node or the service node to obtain the node address data in the network after the node is connected. It also establishes and maintains transmission with the nearest node based on knowledge of other nodes and forwards the updated business resource intelligence data to the business node in real time; the neighbouring service node then sends the updated resource data. To adapt to the dynamic changes of the intelligent service in the resource, when the leaf node connected to a service node no longer needs the resource, the leaf node exits the network.
Service nodes receive messages, send output messages, disconnect from table nodes, delete the resource pointers whose table data need updating, and decrement the number of service table nodes [25]. In addition, if a leaf node does not send an update message to the service node within a specified time interval, a leaf node failure is detected. In this case, the service node sends a message to the checklist node; if no response is received, the exit of the leaf node is processed according to the above procedure. Based on the LEACH protocol, in the case of resource sharing requests, and from the perspective of effectively organizing resources, nodes are allowed to send connection requests to service nodes according to the LEACH protocol strategy, and service nodes are allowed to share resources among data nodes, forward transmission requests to neighbouring service nodes, and transmit resource sharing requests. When a leaf node finds that the connection to a service node does not work, the structure of the network self-organizes: the leaf node maintains a logical neighbourhood topology, and a request is sent to the shared service of another model so that the node can find the shared result in the adjacent region. Power enterprise customer service data are processed as the core of transmission between nodes; through the exchange of data in the header, the relevant routing data are recorded [26]. The message types and descriptions of the header are shown in Table 1, where the types are sorted in the positive order of the header. The receiver node is added to the routing list according to its situation to respond to the transmission request, which completes the description of the node transmission protocol in the model.

Effective Decision Data Interaction Based on Semantic Matching. To avoid data heterogeneity, a semantic matching algorithm was used to interpret and share heterogeneous data and to realize the interaction of effective decision data between nodes. The model adopts a mixed-ontology data interaction mode, constructs the metadata ontology of the customer service data resources of electric power enterprises, and creates a standard vocabulary of data resources. It provides a consistent semantic representation of the data of all nodes and finds concepts with the same or similar semantics when mapping semantics among the metadata ontologies and resource metadata ontologies of all nodes [27]. Moreover, it defines vocabulary through a linguistic ontology, expresses concept names and attribute names, divides vocabulary relations, determines the corresponding relationship weights, and quantifies the degree of relevance between node words. The association weights are given in Table 2. Based on Table 2, the reachable-path correlation strength A of terms in the semantic network is obtained as the product of the association weights along the path,

A = ∏_{k=1}^{m} a_k,

where the a_k are the associated weights a_ij of the edges along the path and m represents the length of the reachable path. Assuming there are several paths between two words, the maximum correlation intensity over these paths is chosen as the similarity H, which is computed from the metadata ontology similarity W_ij of words i and j and the corresponding concept similarity C_ij. The semantic relationship similarity Q is then calculated as a weighted combination, where a_J and a_n represent the weights of the semantic relations J and n, respectively.
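To make the path-based similarity concrete, the following sketch shows one plausible reading of these quantities (our own illustration; the relation weights and the term graph are hypothetical examples, not the values of the paper's Table 2):

```python
import numpy as np

# Minimal sketch of one plausible reading of the semantic matching step:
# the strength A of a reachable path is the product of the association
# weights a_ij along its edges, and the similarity H of two terms is the
# maximum strength over all reachable paths between them.

ASSOC_WEIGHT = {"synonym": 1.0, "hypernym": 0.8, "attribute": 0.6, "related": 0.4}

def path_strength(edge_relations):
    """A = product of the associated weights along a reachable path."""
    return float(np.prod([ASSOC_WEIGHT[r] for r in edge_relations]))

def term_similarity(paths):
    """H = maximum path strength over all reachable paths."""
    return max(path_strength(p) for p in paths)

# Two hypothetical paths between 'outage' and 'power failure':
paths = [
    ["synonym"],               # direct synonym link
    ["related", "hypernym"],   # two-hop path
]
print(term_similarity(paths))  # -> 1.0 (the direct synonym path wins)
```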
In this study, the semantic matching algorithm was employed to classify the weight of semantic association according to the similarity of lexical names, contexts, and semantic relations, and to map the semantics satisfying the conceptual conditions according to the different association concepts. Moreover, it was applied to determine the query scope of the customer service data resources of power enterprises, locate shared data resources, and then transmit interactive decision data between nodes.

Experimental Demonstration and Analysis

The proposed method was compared with the two common methods of customer-service data-aided decision-making mentioned in the Introduction (common methods 1 and 2 [10, 11], respectively) in terms of the throughput and transmission efficiency of data sharing.

Experimental Preparation. The customer service centre of an electric power enterprise was selected as the experimental object. The enterprise is a comprehensive electric power enterprise, including nuclear power plants, thermal power plants, coal-fired power plants, and other production projects. The actual data of the tools, consumables, and supply chain of the enterprise were simulated to obtain the metadata contributing resources. In a stand-alone environment, this study simulated the decision-making process of customer service data, including shared resource design, resource uploading, resource sending, and resource consensus. The simulation parameters are shown in Table 3. We used MyEclipse 10 as the development environment, XACML as the description language, and Java 1.8.0 as the running environment. A PC with 618 MB RAM and a 4.0 GHz CPU was used for simulating the proposed decision-making system. The service nodes, leaf nodes, and document data were extracted randomly, and each document was uniquely identified for data-aided decision-making.

Throughput. To examine the throughput of the proposed intelligent decision-making system for power enterprises, we set the maximum sharing distance of the customer service centre nodes of electric power enterprises to 250 m. The number of customer service nodes was adjusted, since the sparsity of customer service nodes may directly affect the stability of the network. While sharing data resources, we compared the data throughput of the three methods for different numbers of customer service nodes. The experimental results are shown in Figure 3. As can be seen from Figure 3, when the number of customer service centre nodes in the power enterprise increases, the number of data resource sharing paths and the data interaction volume also increase. As route establishment becomes more complete, the data throughput increases, reaching its peak at 40 nodes, when the network resources are fully utilized. When the network resources become congested, the throughput decreases again. However, the throughput of the proposed method remains higher than that of the two common methods. The average throughput of the proposed method is 4.56 M/s, higher than the average throughputs of common methods 1 and 2, i.e., 3.78 M/s and 3.45 M/s, respectively. When the number of customer service centre nodes was set to 40 and the sharing distance between the nodes was varied, the sharing distance directly affected the number of message forwardings, the number of adjacent nodes, and the competition intensity for the shared channel.
The experimental results are given in Figure 4. As evident from Figure 4, when the distance between customer service nodes of power enterprises is too small, the collision probability, message forwarding times, and message hops increase, and the network throughput is low. When the distance is too large, the number of transmission nodes decreases rapidly, the network resources are not fully utilized, and the network throughput decreases accordingly. However, the throughput of the proposed method is higher than that of the two common methods: the average throughput is 4.56 M/s, against 3.52 M/s and 3.62 M/s for common methods 1 and 2, respectively.

Transmission Delay. We compared the data transmission delay of the three methods for different numbers of customer service nodes. The comparison results are illustrated in Figure 5. With an increasing number of customer service centre nodes, the data transmission delay becomes irregular. However, the transmission delay of the proposed method is in all cases lower than that of the other two methods. The proposed method achieved the lowest average transmission delay of 0.41 s, compared to delays of 0.63 s and 0.69 s observed for methods 1 and 2, respectively. Figure 6 shows the effect of increasing the distance between customer service centre nodes on the transmission time. The average data transmission delays are 0.37 s, 0.60 s, and 0.73 s for the proposed method and methods 1 and 2, respectively. This shows that the transmission delay of the proposed method is lower than that of the other two methods even as the distance between nodes increases, which confirms the superiority of the proposed method.

Transmission Rate. Several experiments were carried out to measure the data transmission rates of the three methods. The results are given in Table 4. It can be seen from Table 4 that, when data resources are shared using the three methods, the average data transmission rate of the proposed method is 278.34 bit/s, higher than that of the two common methods.

Conclusion

The role of decision-making is prominent in AI-enabled and IoT systems. In this article, an intelligent assistant decision-making method is proposed for power enterprise customer service. We designed an intelligent decision-making model of the customer service centre and devised a method of collecting data from the IoT to assist decision-making. The semantic relationship among the customer service data is defined, and the sharing scope of data transmission and resources is determined to realize intelligent assistant decision-making for customer service in power enterprises. Simulation results show that the proposed method improves the decision data transmission speed and shortens the transmission delay, and that the network performance of data interaction is better than that of the existing methods. In future work, we will provide more flexible and efficient interoperability specifications for the interaction and integration of transmission languages between nodes, enhance data sharing, and improve the security and reliability of the functional components of the proposed system.

Data Availability

The data used to support the findings of this study are included within the article.
Toward a large bandwidth photonic correlator for infrared heterodyne interferometry

Infrared heterodyne interferometry has been proposed as a practical alternative for recombining a large number of telescopes over kilometric baselines in the mid-infrared. However, the current limited correlation capacities impose strong restrictions on the sensitivity of this appealing technique. In this paper, we propose to address the problem of transport and correlation of wide-bandwidth signals over kilometric distances by introducing photonic processing in infrared heterodyne interferometry. We describe the architecture of a photonic double-sideband correlator for two telescopes, along with the experimental demonstration of this concept on a proof-of-principle test bed. We demonstrate the a posteriori correlation of two infrared signals previously generated on a two-telescope simulator in a double-sideband photonic correlator. A degradation of the signal-to-noise ratio of 13%, equivalent to a noise factor NF = 1.15, is obtained through the correlator, and the temporal coherence properties of our input signals are retrieved from these measurements. Our results demonstrate that photonic processing can be used to correlate heterodyne signals with a potentially large increase of detection bandwidth. These developments open the way to photonic processing of wide-bandwidth signals for mid-infrared heterodyne interferometry, in particular for a large number of telescopes and for direct imager recombiners.

Introduction

Optical interferometry and Very Long Baseline Interferometry (VLBI) are the two techniques that currently achieve the highest angular resolution in astronomy. The scale-up of infrared interferometry to an imaging facility with milli-arcsecond resolution and below represents a long-term objective of major interest for astrophysics (Monnier et al. 2018). Such an instrument would require a large number of telescopes (N ≥ 12), in order to obtain a (u,v)-coverage that is compatible with imaging, and a kilometric baseline, in order to reach milli-arcsecond resolution in the near- and mid-infrared. At the present time, current facilities have the capacity to recombine up to four telescopes in the near- and mid-infrared at the Very Large Telescope Interferometer (VLTI) (Lopez et al. 2014) and up to six telescopes in the near-infrared at the CHARA array (Che et al. 2012); these two facilities have maximum baselines of 130 m and 330 m, respectively. However, the extension of this current direct detection scheme represents a major technical challenge, in particular because of the infrastructure required to operate the vacuum delay lines and to recombine a large number of telescopes, which cannot necessarily be extrapolated from current existing infrastructures. In this context, heterodyne detection, in which incident light is coherently detected on each telescope, has been proposed as a potential alternative in the mid-infrared (Townes 1984; Swenson 1986; Ireland & Monnier 2014). Although heterodyne detection is commonly used in the radio to submillimeter domain, its extrapolation to higher frequencies (1 THz to several tens of THz) is limited by a radically different instrumentation compared to radio and submillimetric interferometry and, more fundamentally, by its lack of sensitivity at higher frequency. There are two reasons for this lack of performance.
First, at equal bandwidth, there is a relative penalty in signal-to-noise ratio (S/N) between direct and heterodyne detection due to the fundamental quantum noise in heterodyne detection, a degradation that has been estimated to be on the order of ∼40 by Hale et al. (2000). Second, heterodyne detection has a very narrow instantaneous detection bandwidth (a few GHz typically) compared to the frequency (30 THz at 10 µm) of the incident radiation. On the other hand, heterodyne detection offers the advantage of recombining a large number of telescopes without a loss in S/N, in contrast to direct detection. The work presented in this paper should be placed in the context of a global effort to examine how present-day technology allows us to revisit the true performance of a mid-infrared heterodyne astronomical interferometer composed of tens of telescopes and how it can be fairly compared with a direct interferometry approach. In this work, we do not attempt a full comparison, which we reserve for a forthcoming paper. We do explore one novel approach to one of the building blocks of such an interferometer: the correlator. Following the idea laid out by Swenson et al. (1986) and Ireland & Monnier (2014), we propose that part of the sensitivity issue of the heterodyne concept related to the bandwidth limitation can be overcome by using synchronized laser frequency combs as local oscillators (LOs) and detectors with much higher bandwidths. In this framework, the incoming celestial light interferes with a frequency comb and is dispersed to sample tens to hundreds of adjacent spectral windows. In addition, progress in mid-infrared technology has recently led to a spectacular improvement of more than an order of magnitude in detection bandwidth, in particular with the emergence of graphene detectors (Wang et al. 2019), which have a frequency response of up to 40 GHz, and quantum well infrared photodetectors (QWIP) (Palaferri et al. 2018) demonstrated at 20 GHz. These developments bear the promise of even higher bandwidths of up to 100 GHz, more than an order of magnitude larger than what has been used on sky. As a consequence, as pointed out by Ireland & Monnier (2014), this detection scheme raises the formidable challenge of correlating thousands of pairs of signals. The three-beam Infrared Spatial Interferometer (ISI) was based on the use of an analog radio frequency (RF) correlator with an input bandwidth ranging from 0.2 GHz to 2.8 GHz, using passive RF components. In the same way, Cosmic Microwave Background (CMB) interferometry has a long history of developing analog RF wideband correlators (Dickinson 2012); an analog lag-correlator design recently reached a bandwidth of up to 20 GHz (Holler et al. 2011). Although these developments in CMB interferometry could constitute immediately attractive solutions, several difficulties inherent to wideband RF technology limit its use in the short and medium term for infrared interferometry. The 20 GHz correlator presented in Holler et al. (2011) requires a specific RF design based on a custom-made Gilbert cell multiplier and Wilkinson splitter tree at the limit of current technology, and this design is unlikely to go far above 40 GHz any time soon. Parasitic frequencies, although not a fundamental limit, could also turn out to be a disadvantage of wideband RF systems.
On the numerical side, the currently most advanced digital correlation systems are those developed for the Northern Extended Array (NOEMA) (Gentaz 2019) and for the Atacama Large Millimeter Array (ALMA) (Escoffier et al. 2007). For NOEMA, the PolyFIX correlator currently accepts the widest instantaneous bandwidth per antenna: it can process 32 GHz wide digitized signals coming from 12 antennas (8 GHz per receptor for the two polarizations and two sidebands). The ALMA correlator can process 8 GHz wide signals coming from up to 64 antennas. Both approaches are worth exploring for infrared interferometry when considering an array of a few telescopes with detector instantaneous bandwidths of a few tens of GHz and only a few spectral channels. However, their extrapolation to instantaneous bandwidths of 50 GHz to 100 GHz, such as the bandwidths expected with new-generation detectors, tens of telescopes, and tens to hundreds of spectral bands, calls for a different approach. As evaluated in Ireland & Monnier (2014), this requires custom-made developments with computing power at least two orders of magnitude greater than that of the existing correlators. In order to tackle this conundrum, we propose a photonic solution to the problem of the correlation of broadband RF signals. We exploit the old idea of transmitting RF signals over optical waveguides to encode the intermediate frequency (IF) beating between the incoming signal and the local oscillator onto a coherent optical carrier, which can then be processed by means of photonic operations. Remarkably, the past decade has seen an impressive development of microwave photonics, which aims precisely at generating, routing, and processing broadband RF signals using standard photonic techniques (Capmany & Novak 2007; Nova Lavado 2013). The ability to couple such analog processing with optical transport over fiber, compatible with standard telecom components, provides the building blocks of an analog correlator. In this paper we introduce the idea of a correlator for infrared heterodyne interferometry that makes use of photonic phase modulators to encode the RF beating signal onto a coherent carrier. Our scheme is based on commercially available electro-optic phase modulators and fiber-optic components. These can handle up to 50 GHz (off the shelf) and bear the promise of hundreds of GHz capability (Burla et al. 2019). In Sect. 2, we present the principles of a simple correlation architecture with two telescopes, which reproduces the equivalent function of the initial ISI analog correlator. The experimental results of a proof of principle of this concept are presented in Sect. 3, where two heterodyne beating signals have been generated experimentally and correlated a posteriori on a photonic correlator several months later. The perspectives and limits of this technique are discussed in Sect. 4. Conclusions are drawn in Sect. 5. A theoretical sensitivity study, taking into account the instrumental parameters of a practical infrastructure, the gain in detection bandwidth introduced by a photonic correlator, the extrapolation to a multiplexed architecture, and its comparison with a direct detection scheme for a large number of telescopes, will be the object of a follow-up paper.
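To give a sense of the scaling that motivates this photonic approach, the following back-of-envelope estimate shows the multiply rate of a brute-force digital lag correlator (our own illustration; the parameter values are assumptions, not specifications from the cited designs):

```python
# Back-of-envelope correlator cost (illustrative assumptions only).
def correlator_mult_rate(n_tel, bandwidth_hz, n_channels, n_lags=1):
    """Real-multiply rate of a brute-force lag correlator.

    Nyquist sampling at 2B, one multiply per sample per baseline,
    per spectral channel and per lag.
    """
    n_baselines = n_tel * (n_tel - 1) // 2
    return n_baselines * 2 * bandwidth_hz * n_channels * n_lags

# 12 telescopes, 100 GHz detector bandwidth, 100 spectral windows:
rate = correlator_mult_rate(12, 100e9, 100)
print(f"{rate:.2e} multiplies/s")  # ~1.3e15, far beyond current correlators
```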
Principles of infrared heterodyne interferometry

As in radio astronomy, a single-baseline heterodyne interferometer is composed of two distant telescopes on which the incident light is coherently detected by mixing it with a stable frequency reference, referred to as the local oscillator (LO), on a detector squaring the field. In the optical domain, the LO is a laser and the detector a fast photodiode. Assuming a local oscillator field E_LO and an incident field E_S, the heterodyne signal at each telescope k can be written as

i_k(t) ∝ |E_S|² + |E_LO|² + 2 E_S E_LO cos(ω_IF t + φ_S − φ_LO),   (1)

where ω_LO is the laser angular frequency, ω_IF the IF detected in the RF range, and φ_S and φ_LO the phases of the signal and LO, respectively. The signal angular frequency is denoted ω_S = ω_LO ± ω_IF to highlight the lower and upper sidebands of the signal, which are downconverted to the same IF ω_IF. The measured optical intensity thus contains a beating term proportional to the electric field, enabling the detection of the phase. After filtering through the detection chain, with transfer function H(ω), the beating term s_k(t) is written (Boyd 1983) as

s_k(t) ∝ |H(ω_IF)| E_S E_LO cos(ω_IF t + φ_S − φ_LO + arg H(ω_IF)).   (2)

As these signals coming from each telescope are proportional to the input electric field, their multiplication is proportional to the coherent flux of the source, whose phase contains the term φ_o + 2π (B_p · σ)/λ, where φ_o is the phase of the astrophysical object, B_p the projected baseline vector, σ the angular coordinate vector of the object from the phase center, and λ the central wavelength. More specifically, in the case in which the two sidebands ±ω_IF are not separated, the time-averaged product of the two voltages from each telescope is (Thompson et al. 2017; Monnier 1999)

⟨s₁(t) s₂(t)⟩ = C |G(τ)| cos(ω_c τ) cos(∆ω_LO t + φ_o + φ_G),   (3)

assuming the detection bandwidth can be modeled by a rectangular filter function of width ∆ω centered around ω_c, where τ is the relative delay between the two optical signals, ∆ω_LO the frequency difference between the LOs, C a constant, and |G(τ)| e^{iφ_G} = (1/2π) H_0² ∆ω · sin(∆ωτ/2)/(∆ωτ/2) the frequency response of the detection chain, of amplitude H_0. This expression corresponds to the signal of a double-sideband (DSB) correlator, in which the fringes are modulated at the frequency ∆ω_LO. Importantly, we assume in Eq. (3) that the relative phase between the LOs, ∆φ_LO = φ_LO1(t) − φ_LO2(t), is null and stable over the time of detection, that is, that the LOs are phase-locked to each other. In practice, this phase-locking can be obtained either by distributing the same LO or by measuring a beating signal between the distant LOs, in both cases over a phase-stabilized link. In addition, in the following, the object phase φ_o is assumed to be constant, that is, the atmospheric piston fluctuations are assumed to be negligible during an integration time.

Principles of a double-sideband photonic correlator

In its simplest form, the function required at the level of the correlator consists in multiplying two input signals with a very wide bandwidth. In this section, we show that this multiplication can be achieved with a simple photonic design. We consider a Mach-Zehnder interferometer, as represented in Fig. 1, in each arm of which is inserted a phase modulator with a characteristic voltage V_π. In a phase modulator, V_π is defined as the voltage for which a phase shift of π is introduced. Each phase modulator transposes the wide-bandwidth RF signal coming from a telescope onto a monochromatic optical carrier.
Assuming that the voltage amplitude is small compared to V_π, and writing β = π/V_π, the optical field after each phase modulator is

E_k(t) = E_0 e^{i(ωt + φ_k + β s_k(t))} ≈ E_0 e^{i(ωt + φ_k)} (1 + iβ s_k(t)).   (4)

If a total relative phase shift of ∆φ = φ_2 − φ_1 = π is applied between the arms, the interferometer is placed in a quadratic regime and the output intensity of the Mach-Zehnder can be simply written, to lowest order, as

I_out(t) ∝ β² [ s₁²(t) + s₂²(t) − 2 s₁(t) s₂(t) ].   (5)

We note that if, for example, a phase shift of π/2 were used, there would be no beat signal s₁ · s₂ between the two signals at the output. The two first quadratic terms appear as noise signals spread over the wide frequency range of the phase modulators. In turn, the last term is the product of the incident signals coming from the telescopes, which is proportional to the coherent flux, as described in Eq. (3). In the case in which ∆ω_LO ≠ 0, the DSB product signal is modulated at the frequency ∆ω_LO, and thus gives access to a measurement of the coherent flux of the interferometer. This fringe peak can be integrated over a very restricted frequency range around ∆ω_LO, in which the relative contribution of the quadratic terms s_k²(t) can be neglected. In the above developments, it is fundamental to note that the total bandwidth is now limited by the bandwidth of the phase modulators. In practice, current standard off-the-shelf fibered electro-optic modulators (EOMs) at telecom wavelengths reach a bandwidth of 50 GHz, and EOMs with a flat frequency response beyond 500 GHz have been demonstrated (Burla et al. 2019). Such bandwidths would represent a crucial improvement of the input bandwidth at the level of the correlator.

Signal distribution and phase stabilization

In this correlation scheme, the telescope signals are converted at the level of each telescope onto an optical carrier by means of an EOM. The signal can then propagate through telecom fibers over kilometric distances, avoiding the problem of bandwidth limitations. This scheme is only possible under the condition that the optical link is phase-stabilized over large distances to guarantee a stable operating point at the Mach-Zehnder null, which stresses the importance of a robust phase stabilization scheme. Given the similarities of photonic correlation with the principle of operation of a nulling interferometer, the phase stabilization scheme developed in that frame (Gabor et al. 2008) could be adapted to the present case. However, stabilization through phase modulation, by the use of EOMs or a fiber stretcher, can only be applied over a limited optical path difference (OPD) range, which may be a limitation for kilometric optical links. Alternatively, in Sect. 3.2.2, we detail the principle of a fast phase stabilization scheme of the null based on frequency modulation of the optical carrier, which can correct an arbitrary OPD amplitude variation.

Proof of concept and practical implementation

In this section, we present the proof of principle of a DSB photonic correlator dedicated to infrared heterodyne interferometry. In Sect. 3.1, we detail the test bed used with a broadband laboratory source to generate an equivalent heterodyne signal of a two-element interferometer. Sect. 3.2 describes the practical implementation of the photonic correlator and its phase stabilization scheme based on frequency modulation.
In Sect. 3.3, we finally present the a posteriori correlation through this photonic correlator of the two signals previously generated on the two-telescope test bed, together with a measurement of the temporal coherence of the broadband source initially used, and we provide an estimation of the S/N degradation through the correlator.

Two-telescope heterodyne signal generation

The general purpose of the two-telescope simulator is to produce a correlated signal on two separate detectors, reproducing the beating between a broadband source of radiation and two LOs with a stable relative phase. The experiment was carried out at telecom wavelengths for practical reasons, but could be generalized to other optical wavelengths, in particular the N band, which is the target of the present study. We emphasize that the purpose of this test bed was not to evaluate the sensitivity limit of a complete detection chain from the detectors to the output of the photonic correlator, which would necessitate dedicated mid-infrared detectors and LOs, but to produce representative correlation signals in terms of coherence properties at the entrance of the photonic correlator. We acknowledge that typical astrophysical sources in the near- (H band) or mid-infrared (N band) are significantly fainter than in this proof of concept. The characterization of a complete mid-infrared detection chain, on objects at the detection limit and at low S/N, would be the next step of this study. The test bed is described in Fig. 2.

Fig. 2. Scheme of the simulator of a two-telescope interferometer at telecom wavelength. A shift of ∆f = 7 MHz is introduced between the LOs by the use of AOMs driven at f₁ = 80 MHz and f₂ = 87 MHz. The signal registered at the output of each PD has then been regenerated and correlated a posteriori on the optical correlator.

The representative elements of a two-telescope interferometer in the test bed are the following.

Local oscillator: A laser at 1.55 µm is separated into two arms. Given the sub-kHz linewidth of the laser, the two equivalent LOs distributed on each arm are naturally in phase at the timescale of the measurement. In addition, as in ISI, a small frequency difference is applied between the arms by means of two acousto-optic modulators (AOMs). As this frequency difference is also the modulation frequency of the fringes, it was experimentally set to ∆f = f₂ − f₁ = 7 MHz, a spectral region in which parasitic RF frequencies were absent. Since these modulators are designed to operate at 80 ± 10 MHz, their frequencies were set to f₁ = 80 MHz and f₂ = 87 MHz.

Broadband source: We used an erbium-doped fiber amplifier (EDFA), without an input signal, as a broadband input source. An EDFA is a pumped gain medium, usually used in telecom to amplify incident radiation. Without any input, it emits a broadband light spectrum through amplified spontaneous emission (ASE). The ASE then passes through an optical tunable filter (OTF) adjusted to the few-GHz bandwidth of the detector to limit the shot noise associated with the incident source. This source is finally divided into two arms and distributed to the two detectors. Once again, we emphasize that this source of radiation was not used to evaluate the sensitivity limit of heterodyne detection in the near-infrared, but to reproduce representative coherence properties of a heterodyne signal.
Detection: The local oscillator and the input broadband source signal are combined and detected on two separate fast detectors. As a first step, a correlation peak at 7 MHz was directly observed with an RF mixer multiplying the outputs of the two detectors. Multiple RF cables were successively used to introduce a delay ∼1/∆ν, to scan the coherence length and to verify that the signal was not a parasitic frequency of the setup. In a second step, the outputs of the two detectors were simultaneously recorded on a fast oscilloscope at a sampling rate of 2 Gb/s, with an analog bandwidth ∆ν = 400 MHz, which is the upper bandwidth limit of our detection scheme.

A posteriori generation: Once registered, these two RF traces were regenerated electronically a posteriori by arbitrary-waveform generators (AWGs) to perform the a posteriori correlation on the photonic correlator. Given the limited memory of the AWGs, a set of 2^16 points was generated at a sampling rate of 50 MHz. Taking into account the dilation factor between registration and regeneration, the peak frequency was thus placed at 175 kHz after regeneration.

Experimental implementation of the photonic correlator and phase stabilization

In this subsection, we detail the experimental implementation of the photonic correlator described in Section 2. As this photonic processing is independent of the carrier wavelength, its implementation can greatly benefit from the development of fibered components from telecom standards, which were also used in this proof of principle.

Photonic processing

The actual implementation is represented in Fig. 1. A sub-kHz linewidth laser at 1.55 µm is equally divided into two arms with a 50:50 fibered splitter. Each arm is then modulated by an EOM, to which is applied the RF signal generated a posteriori from one of the two-telescope simulator traces, as described in Sect. 3.1. A feedback loop is used to stabilize the phase of the Mach-Zehnder, and the two arms are recombined with another 50:50 fibered coupler. Finally, at the null output of this fibered Mach-Zehnder, the flux is split in two parts with a 90:10 fibered splitter, where 90% of the flux is sent to the signal photodiode and 10% to a detector used in the stabilization loop. After the signal photodiode, the fringes are modulated at the frequency f = 175 kHz and can be registered on an ADC, a lock-in amplifier, or a Fourier-transform oscilloscope. We adopted the latter solution.

Phase stabilization loop

The general goal of the phase stabilization loop is to maintain the photonic correlator at the intensity null of the Mach-Zehnder, where the output intensity varies quadratically with the input voltage. The basic idea of the stabilization consists in generating a small-amplitude phase modulation signal at a defined frequency f_m in one arm, and using the real part of the first harmonic signal as an error signal to be minimized. Usually, the command signal is applied to a phase modulator (e.g., PZT, fiber stretcher, or EOM) to compensate for OPD variations. In our case, we set up a frequency modulation system composed of two AOMs, one of which is modulated in frequency by a proportional-derivative (PD) controller.
Integrated over a small interval of time dt, this frequency modulation ∆f_m acts as a phase modulation dΦ_m = ∆f_m dt, which is restricted neither in amplitude nor in speed, in contrast to an OPD modulator system. Fig. 3 represents the closure of the phase stabilization loop. We estimated its stability to a mean phase deviation of φ = λ/240 and an RMS deviation of σ_φ = λ/440. It has to be noted, though, that in such a frequency modulation scheme, large OPD drifts, on the order of a fraction of the coherence length l_c = c/∆ν, have to be corrected by a dedicated OPD offset. However, given a maximum spectral bandwidth of ∆ν ≈ 100 GHz, the coherence length is on the order of a millimeter, thus requiring only occasional offset corrections on minute or hour timescales.

Noise factor and temporal coherence

Once stabilized on the null, the signals generated a posteriori by the AWGs are applied to the phase modulators. According to Eq. (5), a fringe peak is observed at the modulation frequency (f_G/f_S) ∆f_LO = 175 kHz, where f_S is the recording sampling frequency and f_G is the generation sampling frequency. This fringe signal is easily visible in the power spectral density (PSD) of the photocurrent, as shown in Fig. 4. In order to estimate the degradation introduced by the photonic correlator on the signal, we measured the noise factor, defined as the ratio of input to output S/N:

NF = (S/N)_in / (S/N)_out.

We estimated the input S/N from the two recorded waveforms by computing numerically the interference term in Eq. (5). Fringe power and noise are estimated in two defined frequency windows, as shown in Fig. 4, by computing the integrated power in the peak and the standard deviation of the noise floor, respectively. The output S/N is then estimated with the same method on the PSD of the photodiode output, over the same frequency windows. This analysis provides a ratio of the output S/N to the input S/N of 1/NF = 87% ± 5%, that is, a S/N degradation of 13%, corresponding to NF = 1.15. This result is limited by a non-negligible oscilloscope dark current, as seen in the histogram of Fig. 3, and by a strong low-frequency 1/f contribution, visible in Fig. 4, which artificially degrade the S/N of the fringe peak but are not fundamentally due to the optical correlator. In addition, we assessed the temporal properties of our correlation signal to observe its coherence envelope and to give an additional verification that the observed fringe peak could not be produced by a parasitic signal. To do so, a numerical delay was introduced at the level of one AWG for each value at which we measured the fringe peak power. The coherence envelope is shown in Fig. 4, superposed on the coherence envelope computed numerically. The experimental profile fits a Gaussian with a full width at half maximum (FWHM) τ ≈ 20 ns, which corresponds to an equivalent bandwidth ∆f = 1/τ ∼ 50 MHz. This is consistent with the maximum bandwidth of our regenerated signal at a sampling frequency f_S = 50 MS/s. Moreover, this measurement excludes the possibility that the fringe peak is a parasitic signal.

Further developments

In this section, we discuss the further developments needed on the path towards a practical correlator dedicated to an imaging facility with kilometric baselines. In a second subsection, we discuss at a more general level the remaining open challenges of infrared heterodyne interferometry.
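Before turning to these developments, the preceding analysis can be illustrated numerically. The sketch below (a toy model of our own, with arbitrary parameters that do not reproduce the experiment) builds two traces sharing a common band-limited component, applies the small-signal quadratic null output of Eq. (5), and verifies that the cross term produces a fringe peak at the expected modulation frequency:

```python
import numpy as np

# Toy model of the double-sideband photonic correlator at the null
# (illustration only; parameters are arbitrary). Two traces share a
# common component; the LO frequency offset df_lo is carried by trace 2.
rng = np.random.default_rng(0)
fs = 50e6                # AWG sampling rate [Hz]
n = 2**16                # AWG memory depth [samples]
t = np.arange(n) / fs
df_lo = 175e3            # fringe modulation frequency after regeneration [Hz]

common = rng.normal(size=n)                      # correlated "source" part
s1 = common + 0.5 * rng.normal(size=n)           # telescope 1 beating trace
s2 = common * np.cos(2 * np.pi * df_lo * t) \
     + 0.5 * rng.normal(size=n)                  # telescope 2, offset LO

beta = 0.1                                       # modulation index pi*V/V_pi
# Quadratic-regime null output, Eq. (5): the cross term carries the fringe.
i_out = beta**2 * (s1**2 + s2**2 - 2 * s1 * s2)

psd = np.abs(np.fft.rfft(i_out))**2
freqs = np.fft.rfftfreq(n, d=1/fs)
peak = freqs[np.argmax(psd[1:]) + 1]             # strongest non-DC component
print(f"fringe peak near {peak/1e3:.0f} kHz (expected {df_lo/1e3:.0f} kHz)")
```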
Photometric calibration and delay compensation

In continuity with our demonstration of signal correlation, the next step of the development consists in measuring the spatial coherence of a laboratory object in the mid-infrared. For this purpose, a detailed photometric calibration procedure will have to be carried out in order to normalize the coherent flux measured on the source and to deduce an estimate of the visibility. Furthermore, in this work we did not address the problem of delay compensation and Earth rotation. Earth rotation translates into a phase velocity that can be computed and compensated at the level of a local oscillator by a dedicated frequency shift, which is also called lobe rotation. In addition, we did not address the group delay, which has to be compensated in order to track the maximum of the correlation envelope within a coherence length l_c. This delay can be covered using a combination of a switchable fibered delay, compensating the large delays, and a continuously adjustable fibered delay line, covering the small delays and relaxing the minimum resolution of the switchable module. We note that the design complexity of such a movable delay line, at telecom wavelength and over a very narrow spectral band, would be considerably lower than that of a direct mid-infrared vacuum delay line. As in telecom networks, dispersion could be managed with the use of dispersion-compensating fibers, over a bandwidth of 100 GHz in this case, but with propagation distances significantly smaller than those encountered in telecom, up to a few kilometers here. The speed requirement of the movable delay line could be relaxed by a careful control of the frequency shift of the lobe rotation.

Measurements with N ≥ 3

This measurement with two telescopes could then lead to a generalization of the method to more than two telescopes, and in particular to the measurement of closure phases with three telescopes, in a way analogous to the method performed on the ISI correlator (Hale et al. 2003). We note that in the perspective of achieving image reconstruction with a large number of telescopes (N ≥ 12), the encoding of the signal on an optical carrier also offers the possibility of recombining all the fibers into a homothetic pupil plane, as in a Fizeau configuration, which would enable the use of the array in a direct imager mode. The transposition of the technique presented in this paper, where fringes are modulated at a given frequency (7 MHz), may be adapted by lowering the modulation frequency to a rate compatible with a 2D-matrix acquisition rate (typically smaller than 1 kHz), although this method does not seem optimal. Instead, a direct phase stabilization scheme, as experimentally demonstrated in Blanchard et al. (1999), may possibly allow direct imager acquisitions without applying a frequency modulation. Signal-to-noise preservation has, however, not yet been considered for this scheme, nor has it been demonstrated in Blanchard et al. (1999).

Open challenges of infrared heterodyne interferometry

Although we addressed the problem of correlation and signal transportation by the introduction of a photonic correlator and photonic processing, several challenges remain open on the path to a practical mid-infrared heterodyne interferometer. The first problem we did not address in the heterodyne system is the synchronization of distant LOs separated by kilometric distances.
Open challenges of infrared heterodyne interferometry Although we addressed the problem of correlation and signal transportation by the introduction of a photonic correlator and photonic processing, several challenges remain open on the path to a practical mid-infrared heterodyne interferometer. The first problem we did not address in the heterodyne system is the synchronization of distant LOs separated by kilometric distances. We recall that this requirement concerns at least the relative phase between the different local oscillators, which has to remain constant during a coherent integration time. For this purpose, the beating of each LO with a master reference LO can be used to apply a correction to each LO through a dedicated phase-lock loop (PLL), a strategy that was implemented in ISI with up to three telescopes (Hale et al. 2000, 2003). In practice, such a stabilization scheme would imply propagating each mid-infrared local oscillator over kilometric distances in our case, which imposes strong constraints on a practical infrastructure. The possibility to stabilize in phase each mid-infrared LO through the distribution of a reference phase signal over a fiber link, in a way analogous to Chanteau et al. (2013) for example, would substantially simplify the infrastructure of a mid-infrared heterodyne interferometer. The second problem that we did not address concerns the limit in sensitivity imposed by atmospheric phase fluctuations, which severely restrict the maximum coherent integration time. Previous studies (Ireland & Monnier 2014; Ireland et al. 2016) already raised this limitation and proposed an out-of-band cophasing based on a companion instrument in the H band. Although this auxiliary instrument would be a direct interferometer, which is apparently in contradiction with the proposed heterodyne detection scheme, its implementation in the H band would surely be much easier than in the mid-infrared thanks to the use of fiber components. Were this atmospheric cophasing absent, the heterodyne interferometer would still be functional, but limited to bright objects. Concerning the improvement of sensitivity, the introduction of this paper was based on the observation that current detectors now enable us to reach several tens of gigahertz of bandwidth in the mid-infrared. These compelling demonstrations will need further development to consolidate the results, in particular regarding the exact characterization and optimization of their quantum efficiency. Finally, as proposed in Swenson (1986) and revived in Ireland & Monnier (2014), and from a more prospective view, a promising but difficult method to further increase the sensitivity of a heterodyne interferometer would consist in multiplexing a large number of LOs, with the associated number of detectors, to potentially obtain a spectral coverage comparable to direct detection. This method supposes the generation of mid-infrared frequency combs with sufficient power per tooth, which is currently an active field of research. We note that such a multiplexed architecture could be advantageously coupled to photonic correlation. Conclusions Within the context of infrared heterodyne interferometry, we have introduced the use of photonic correlation in order to overcome the bandwidth limitation of the correlators developed so far. We proposed the architecture of a DSB correlator for two telescopes based on a fibered Mach-Zehnder at telecom wavelength, precisely stabilized at the null of intensity, and demonstrated the a posteriori correlation of two signals previously generated on a dedicated two-telescope test bed in the near-infrared. For this purpose, we realized a dedicated phase stabilization loop based on frequency modulation. The final photonic processing chain exhibits an S/N degradation of 13%, corresponding to a noise factor NF = 1.15.
The coherence properties of the initial input signals were also retrieved by introducing an incremental temporal delay. The next step of this development will consist in measuring the spatial coherence of an object in the mid-infrared with two telescopes, and in generalizing this architecture to more than two telescopes and to the detection of closure phases. More generally, this proof of principle opens the way to the photonic processing and transportation of wide-bandwidth signals for infrared heterodyne interferometry, which could constitute a valuable advance towards kilometric-baseline interferometry with a large number of telescopes.
7,791
2020-06-08T00:00:00.000
[ "Physics", "Engineering" ]
New methods for robust continuous wave T1ρ relaxation preparation Measurement of the longitudinal relaxation time in the rotating frame of reference (T1ρ) is sensitive to the fidelity of the main imaging magnetic field (B0) and that of the RF pulse (B1). The purpose of this study was to introduce methods for producing continuous wave (CW) T1ρ contrast with improved robustness against field inhomogeneities and to compare the sensitivities of several existing and the novel T1ρ contrast generation methods with the B0 and B1 field inhomogeneities. Four hard-pulse and four adiabatic CW-T1ρ magnetization preparations were investigated. Bloch simulations and experimental measurements at different spin-lock amplitudes under ideal and non-ideal conditions, as well as theoretical analysis of the hard-pulse preparations, were conducted to assess the sensitivity of the methods to field inhomogeneities, at low (ω1 << ΔB0) and high (ω1 >> ΔB0) spin-locking field strengths. In simulations, the previously reported single-refocus and new triple-refocus hard-pulse and double-refocus adiabatic preparation schemes were found to be the most robust. The mean normalized absolute deviation between the experimentally measured relaxation times under ideal and non-ideal conditions was found to be smallest for the refocused preparation schemes and broadly in agreement with the sensitivities observed in simulations. Experimentally, all refocused preparations performed better than those that were non-refocused. The findings promote the use of the previously reported hard-pulse single-refocus ΔB0 and B1 insensitive T1ρ as a robust method with minimal RF energy deposition. The double-refocus adiabatic B1 insensitive rotation-4 CW-T1ρ preparation offers further improved insensitivity to field variations, but because of the extra RF deposition, may be preferred for ex vivo applications.
| INTRODUCTION Relaxation in the rotating frame under the presence of an external spin-locking radio frequency (RF) pulse, termed T 1ρ relaxation, 1 has been under active research for the quantitative assessment of different tissue types, such as the central nervous system, 2 liver, 3 and articular cartilage. 4,5 For instance, in articular cartilage, T 1ρ has been shown to be sensitive to the proteoglycan content, the collagen fiber network, and to degenerative changes in general. [5][6][7][8] T 1ρ relaxation depends on the amplitude of the spin-lock (SL) pulse, that is, the SL frequency, which in typical cases corresponds to the timescales of slow molecular motion. 9 In biological tissues, the processes affecting T 1ρ relaxation include dipolar interaction, chemical exchange, and the motion of spins through field gradients; broadly, any local fluctuations in the magnetic field that are on the same or lower frequency scale as the SL frequency. [8][9][10][11][12] The relative importance of each mechanism varies with the SL frequency and the strength of the main magnetic field. 13 The standard T 1ρ measurement uses on-resonance continuous-wave (CW) spin locking (CW-T 1ρ ) and consists of tilting the magnetization by 90° and then locking the spins with a continuous RF pulse. 1 Several methods to produce T 1ρ contrast at constant spin-locking amplitude have been proposed, with variable sensitivity to the inhomogeneities of the main field (B 0 ) and the RF field (B 1 ). Spin locking slows the relaxation process in the transverse plane by forcing the spins to rotate around the RF field. Because of the high sensitivity of the T 1ρ measurement to field inhomogeneities, the design of the SL pulse is essential for high-quality T 1ρ -weighted images and accurate quantification of the T 1ρ relaxation time. 14 Typically, in the clinical setting, the amplitudes of the SL pulses (ω 1 = γB 1 /2π, where γ is the gyromagnetic ratio) are between a few hundred and a thousand Hz, most often 400-500 Hz. To allow estimation of the T 1ρ relaxation time, the same SL amplitude is maintained while the SL durations are varied. The relaxation processes affecting T 1ρ are modulated by the molecular makeup of the tissue, and thus T 1ρ correlates with the properties of the tissue. 5 Various methods have been reported for compensating the inherent sensitivity of T 1ρ measurement to field inhomogeneities. 14-16 Witschey et al. 14 introduced a T 1ρ weighting method, which was demonstrated to be highly insensitive to variations in the B 0 and B 1 fields, in phantoms and in vivo human brains at 3 T. The sequence is a modification of the ΔB 0 insensitive SL sequence proposed by Zeng et al., 17 with a change to the phase of the final 90° pulse, effectively inverting the magnetization at the end of the preparation. While the pulse sequence was proven to be highly robust against B 0 and B 1 field inhomogeneities, the authors noted that its downside was that it would still require a perfect 180° refocusing pulse to fully compensate against field variations. Another attempt to alleviate the sensitivity of spin locking to field inhomogeneities with a single refocusing pulse, termed paired self-compensated SL (PSC-SL), was proposed by Mitrea et al.
15 In their version, the spin-locking periods were further split into pairs of opposite phases on either side of the refocusing pulse, making the SL pairs insensitive to B 1 inhomogeneities while tilting the magnetization back towards the positive z-axis. The study demonstrated the sequence with phantom and small-animal imaging at 7 T with gradient echo (GRE) and fast spin echo (FSE) readout sequences. A recent double-refocusing pulse sequence, termed balanced SL (B-SL), proposed by Gram et al., 18 applies an extra 180° refocusing pulse with opposite phase, compensating for both inhomogeneities. The sequence was evaluated with simulations and demonstrated with an agarose phantom at 7 T. The authors concluded that B-SL was superior in comparison with the existing single-refocus sequence in which the magnetization is returned to the +z axis, that is, the one presented by Zeng et al. 17 However, it remains unclear how the B-SL sequence performs in comparison with the sequence presented by Witschey et al., 14 which inverts the magnetization at the end of the preparation, as this sequence was also shown to be superior in comparison with the non-inverting T 1ρ preparation. Adiabatic pulses have also been used to improve the robustness of T 1ρ imaging. Various studies used adiabatic half passage (AHP) pulses coupled to CW spin locking to improve the B 1 robustness of the measurements. 16,[19][20][21][22][23] The AHP pulses were utilized in these studies for tilting the magnetization to the transverse plane for the CW SL, followed by a reverse AHP to bring the magnetization back to the longitudinal axis. A dual-acquisition method was proposed by Chen 16 to address the adverse effect of relaxation during the reverse AHP on T 1ρ quantification. The method was demonstrated with phantom and human liver imaging at 3 T. Similar methods, using pulsed, fully adiabatic T 1ρ preparation, have also been reported. [24][25][26] The purpose of this study was twofold: firstly, to perform a numerical, experimental, and partial theoretical comparison of the sensitivities of the different T 1ρ contrast generation methods to the inhomogeneities in the B 1 and B 0 fields; and secondly, to introduce additional ways of producing T 1ρ contrast with reduced sensitivity to the field inhomogeneities. We examined the different previously published and new T 1ρ preparation methods via both Bloch simulations and experiments. In the theoretical part, we focused on the different hard-pulse implementations for T 1ρ preparation. | CW-T 1ρ preparation schemes Here, we focus on the conventional non-refocused hard-pulse scheme, the single-refocused ΔB 0 and B 1 insensitive preparation scheme presented by Witschey et al., 14 the double-refocused B-SL preparation scheme presented by Gram et al., 18 and a novel triple-refocused hard-pulse CW-T 1ρ preparation scheme. The triple-refocused hard-pulse CW-T 1ρ scheme attempts to account for the reported inability of the single-refocus sequence presented by Witschey et al. 14 to fully compensate for the field variations if the single refocus is not a perfect 180° pulse (Figures 1, S1, and S2). Theoretical derivations on the sensitivities of the preparations are provided in the supporting information and in Witschey et al. 14 In addition, the ΔB 0 and B 1 insensitive T 1ρ preparation presented by Mitrea et al. 15 was considered in simulations. Adiabatic pulses are amplitude- and frequency-modulated RF pulses that are highly insensitive to B 1 inhomogeneity and off-resonance effects.
27 In adiabatic pulses, the amplitude of the effective field (ω eff (t)) of the pulse is the vectorial sum of the time-dependent B 1 and the off-resonance component. The flip angle (φ) is largely independent of the applied B 1 field, given that the adiabatic condition |ω eff (t)| >> |dφ/dt| is satisfied, that is, the sweep of the direction of the effective field (dφ/dt) is slow compared with its amplitude (ω eff ). During an adiabatic sweep, spins at different resonances are primarily affected at different times of the pulse, in contrast to the CW pulse, which simultaneously affects the spins within its frequency bandwidth. Adiabatic pulses can be categorized as excitation, refocusing, and inversion pulses. 28 AHP pulses (Figure 2A) are employed to generate uniform excitation with a 90° flip on a defined frequency band, leaving the magnetization in the transverse plane, while reverse AHP pulses bring the magnetization back to the z axis from the transverse plane. 19 With adiabatic excitation and CW SL, the SL continues from the same phase where the adiabatic excitation pulse ends, but the amplitude of the RF pulse is reduced to the desired spin-lock amplitude (i.e., unlike in the adiabatic CW T 1ρ reported by Chen, 16 where the amplitude of the SL equals the maximum amplitude of the AHP). Similarly, the reverse AHP starts from the phase where the SL ends, with the amplitude ramped up to the maximum of the AHP. 16,19,22,24 Besides AHP excitation pulses, either B 1 insensitive rotation (BIR-4) plane rotation pulses or adiabatic full passage (AFP) inversion pulses, such as hyperbolic secant (HSn) pulses, can be used for adiabatic refocusing/inversion during the spin-locking train, both providing largely B 1 -insensitive means for the refocusing/inversion. 28,29 As long as adiabaticity can be sufficiently maintained during the pulses, inhomogeneities in the B 1 field will not affect the resulting flip angles of the adiabatic pulses. Here, we investigated four different CW-T 1ρ preparations utilizing AFP, AHP, BIR-4, and HS1 adiabatic pulses, without refocusing 22 or using single or double BIR-4 refocusing, or double AFP inversion, in between the SL (Figure 2). | Numerical simulations Numerical Bloch simulations of the pulse trains were performed for ΔB 0 and B 1 field inhomogeneities of up to ±1 kHz and ±40%, respectively, to analyze the sensitivities of the sequences. The simulations for all the spin-locking schemes were performed using SL durations of 8, 32, and 128 ms and SL amplitudes of 100 and 400 Hz. The duration of each of the hard 90° and 180° pulses was 200 μs. The maximum amplitudes of the adiabatic pulses were set to 2.5 kHz, and the durations were 4, 3.03, and 5.17 ms for AHP, AFP, and BIR-4, respectively. Additionally, the conventional adiabatic CW T 1ρ was simulated with longer AHP pulses at a lower maximum RF amplitude of 600 Hz. 16 The following modulation functions were used for the adiabatic pulses: the AHP and BIR-4 pulses utilized tanh/tan modulations, 30 and the AFP pulse was an HS1 pulse with a time-bandwidth product value of R = 20. Relaxation effects were neglected in the simulations to focus on the effects of field inhomogeneity. | Sample preparation Cylindrical osteochondral plugs (n = 4, diameter = 6 mm) were prepared from the patellae of bovine knee joints obtained from a local grocery store. The samples were immersed in phosphate-buffered saline containing enzyme inhibitors and frozen at −20 °C.
Prior to imaging, the samples were thawed and transferred into a custom-built sample holder and a test tube filled with perfluoropolyether (Galden HS-240, Solvay Solexis, Italy). In addition to the osteochondral plugs, cherry tomatoes (n = 2) and an agarose phantom (n = 1) were used as test samples. The cherry tomatoes were chosen such that they neatly fit within the RF coil, and they were placed into the coil without immersion solution. The agarose phantom was prepared with 3% w/v agarose and water by heating the solution at 90 °C. The agar solution was then transferred to a test tube and placed into a refrigerator (at approximately 5 °C) for cooling and gel formation. The test tube was taken out of the refrigerator and allowed to settle to room temperature for 2 h prior to imaging. | MR imaging MRI studies were performed using a 9.4-T preclinical Varian/Agilent scanner (Vnmrj DirectDrive console v. 3.1) and a 19-mm quadrature RF volume transceiver (Rapid Biomedical GmbH, Rimpar, Germany). A set of RF shapes for all the methods shown in Figures 1 and 2 for generating T 1ρ contrast was created for the experiments. All the CW-T 1ρ measurements were conducted using a magnetization preparation block consisting of the RF train and a crusher gradient, coupled to an FSE readout sequence. For each of the CW-T 1ρ methods, five SL amplitudes (γB 1 /2π = 0, 50, 100, 200, and 400 Hz) were used. Hard 90° and 180° pulses were both set to have a duration of 200 μs, and the adiabatic refocusing/inversion pulses used were BIR-4 and HS1, with durations of 5.17 and 3.03 ms, respectively. The AHP pulse duration was 4 ms. All the adiabatic pulses (Figure 2) were set to have a maximum B 1 amplitude of 2.5 kHz. All the T 1ρ measurements were performed using SL (CW) durations of 0, 4, 8, 16, 32, 64, 128, and 192 ms. In addition to the T 1ρ measurements, a B 0 map was acquired using the same FSE readout sequence, coupled to a water saturation shift referencing (WASSR) 31 preparation module utilizing a saturation range of −300 to +300 Hz with a 50-Hz step and a saturation power of 30 Hz. Furthermore, the B 1 field was estimated using a set of hard-pulse saturation preparations around the expected 90° power (±40% from the expected power), coupled to a low-resolution scan with the same FSE readout. (Figure 2 caption: Adiabatic and CW SL preparations. (A) Conventional adiabatic CW-T 1ρ preparation, consisting of an AHP excitation, a SL of duration τ, and a reverse AHP. 19 Adiabatic CW-T 1ρ with (B) a single adiabatic BIR-4 refocusing pulse, (C) two BIR-4 refocusing pulses, or (D) double refocusing with HS1 pulses. The negative sign in front of τ indicates a phase shift of 180°. AHP, adiabatic half passage; BIR, B 1 insensitive rotation; CW, continuous wave; CW-T 1ρ , continuous wave T 1ρ ; HS, hyperbolic secant; SL, spin lock.) The scan time for each of the aforementioned T 1ρ setups was approximately 48 min, for WASSR approximately 8 min, and for the B 1 scan approximately 13 min. The parameters of the readout FSE sequence varied slightly depending on the sample and its size (Table 1). The samples were scanned under two nominal conditions: (i) with B 0 and B 1 as homogeneous as possible; and (ii) with altered B 0 and B 1 settings to introduce inhomogeneities. At the beginning of every session, manual shimming of B 0 and a calibration of the B 1 transmit power were performed.
The measurements were first conducted for case (i) with as good conditions and as homogeneous fields as possible, and subsequently for case (ii) with the shims deliberately set to an incorrect value along a specific axis to induce a B 0 variation of approximately ±250 Hz along the chosen direction (in-plane, across the cartilage surface for the osteochondral samples, and along the same axis for the other samples). Additionally, the B 1 amplitude was either set 20% lower or higher than the nominal calibrated value, or the specimen was pulled approximately 15 mm away from the RF center (approximately 50% of the RF visibility range) so that the B 1 field along the sample became inhomogeneous. For those specimens that exceeded the homogeneous region of the B 1 field, no additional B 1 inhomogeneities were introduced (Table 1). | Data analysis The results of the simulations were evaluated visually and semi-quantitatively. For the ΔB 0 response with a correct B 1 value and for the ΔB 1 response with correct B 0 , a semi-quantitative metric was estimated: the width of the flat region of the response, that is, the width of the relatively smooth and flat response around the on-resonance condition after applying a moving-average window of 50 Hz width and a threshold of 90% of the on-resonance amplitude. The averaging window width was changed to 10 Hz for the non-refocused schemes and for the simulations of the ΔB 0 response at 100-Hz SL amplitude to obtain reliable estimates. The results were calculated and visualized using the absolute values of the simulated z magnetization to facilitate comparison between the preparation schemes, because some of them deliberately take the magnetization to the −z axis. Relaxation time maps were fitted in a pixel-wise manner using a three-parameter monoexponential fit, using in-house developed plugins for Aedes (http://aedes.uef.fi) in Matlab (Matlab R2019b; MathWorks, Natick, MA, USA). B 0 maps were calculated using Lorentzian fits to the acquired WASSR saturation datasets, 31 and the B 1 maps were estimated via linear fitting to the acquired saturation datasets. To compare the reliability and robustness of the different T 1ρ preparation schemes, mean normalized absolute deviation (MNAD) values in large regions of interest (ROIs) were calculated for each of the preparation schemes between the relaxation times measured under ideal and non-ideal conditions. The large ROIs for each specimen were defined on an average T 1ρ map calculated over all the preparation schemes for the SL amplitude of 400 Hz. These ROIs, comprising areas with high SNR, were then used to extract the T 1ρ values from all the measurements under both conditions for further computations. The MNADs of the relaxation times were calculated as MNAD = (1/N) Σ_i |T 1ρ,non-ideal (i) − T 1ρ,ideal (i)| / T 1ρ,ideal (i), where i runs over the N individual voxels within the ROIs under ideal and non-ideal conditions. An MNAD value of 0.5 corresponds to a mean deviation of 50% of the T 1ρ relaxation times under the non-ideal conditions. For the comparison of the different T 1ρ preparation schemes, MNAD values from all the samples available for a given preparation were averaged.
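A minimal numerical rendering of this metric is given below; the per-voxel normalization by the ideal-condition value is our reading of the definition, chosen to be consistent with the statement that an MNAD of 0.5 corresponds to a 50% mean deviation.

```python
import numpy as np

def mnad(t1rho_ideal, t1rho_nonideal):
    """Mean normalized absolute deviation between T1rho values measured under
    ideal and non-ideal conditions over the same ROI voxels (1D arrays)."""
    ideal = np.asarray(t1rho_ideal, dtype=float)
    nonideal = np.asarray(t1rho_nonideal, dtype=float)
    return np.mean(np.abs(nonideal - ideal) / ideal)

# A uniform 10% deviation yields MNAD = 0.1 (hypothetical ROI values in ms).
ideal = np.full(100, 50.0)
print(mnad(ideal, ideal * 1.1))  # -> 0.1
```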
In addition to the primary spin-locking pulse, each of the T 1ρ preparation schemes requires other RF pulses to tilt and refocus the magnetization. Depending on the configuration, the RF power deposited by these additional pulses varies significantly. To assess the relative differences in RF energy deposition between the preparations, root mean square (RMS) integrals of the pulse trains with zero SL duration were calculated. To facilitate the comparison, the RMS values were normalized with that of the conventional CW-T 1ρ preparation. | RESULTS Numerical simulations demonstrated variable sensitivity of the sequences to a range of offsets in the B 0 and B 1 fields (Figures 3, 4, and S3). 2D plots of the simulated responses on both the ΔB 0 and B 1 offset axes demonstrate the differences in the sensitivities of the T 1ρ preparations: the adiabatic refocused schemes demonstrated the least B 1 -dependent variation, and the double-refocused versions in particular also showed minimal ΔB 0 -dependent variation at all simulated SL amplitudes (100 and 400 Hz) and SL times (8, 32, and 128 ms) (Figures 3F-H and 4F-H). Quantification of the flatness of the simulated ΔB 0 and ΔB 1 responses at the nominally correct B 1 and B 0 indicated that the non-refocused schemes had a very poor B 0 off-resonance response with almost no flat region even at the correct B 1 , while the refocused versions showed significantly improved responses (Figures 3C-H, 4C-H, S7 and S8). However, the adiabatic CW pulse simulated at 600-Hz maximum amplitude (Figure S3B) had a broader flat response for both B 0 and B 1 inhomogeneities at the higher SL amplitude (400 Hz) (Figures S3B and S9) when compared with the 2.5-kHz maximum amplitude simulations of the pulse (Figures 3, 4, and S7-S9). The adiabatic double-refocused schemes had the broadest ΔB 0 robustness, with the flat range essentially covering the entire simulated range from −1 to +1 kHz (and beyond), while the single- and triple-refocused preparations had the broadest flat responses among the hard-pulse preparation schemes (Figures 3C,E and 4C,E), but with a slight drop at B 1 amplitudes beyond ±31% of the nominally correct amplitude. The double-refocused hard pulse was highly insensitive to a wide range of B 1 offsets, but was more sensitive to B 0 inhomogeneities, being the least robust among the refocused schemes (Figures 3D, 4D, S7 and S8). For the experimental measurements under as ideal as possible conditions, the T 1ρ relaxation time maps of the cartilage-bone samples, cherry tomatoes, and phantom were visually artifact-free for all the preparation schemes for SL amplitudes above 100 Hz (Figures 5-7). Under the non-ideal conditions, however, the conventional and adiabatic non-refocused preparations showed severe banding artifacts in the T 1ρ relaxation time maps at SL amplitudes equal to and below ΔB 0 . At higher SL amplitudes (ω 1 > ΔB 0 , or ω 1 >> ΔB 0 ), the banding artifacts were minimal for all the schemes, unless B 1 variation was also present. The conventional hard-pulse CW-T 1ρ preparation with only two 90° pulses imposes the least additional RF energy deposition and thus produces the lowest specific absorption rate (SAR) (Figure 9). The preparations including adiabatic pulses add a constant adiabatic T 1ρ weighting in addition to the T 2 weighting from the finite TE of the readout, and these pulses induce significantly higher RF energy deposition (the RMS integral of the 0-ms SL pulse for the double-refocus BIR-4 is approximately 86 times that of the conventional T 1ρ preparation) (Figure 9, Table S1). However, for a plain SL pulse (i.e., without the 90° or 180° pulses) of 50-ms duration and 400-Hz amplitude, the RMS integral is approximately 40 times that of the 0-ms SL pulse of the conventional T 1ρ preparation with the least extra RF. For increasing SL durations and amplitudes, the relative differences in the energy deposition between the preparation schemes are reduced (the RMS integral ratio of an SL pulse of 64-ms duration and 400-Hz amplitude using double-refocus BIR-4 with respect to the conventional preparation is reduced from approximately 86 times to just under three times) (Figure 9, Table S1). The 0-ms SL adiabatic CW T 1ρ pulse, with longer-duration AHPs at a reduced maximum RF amplitude of 600 Hz, was observed to have approximately one-quarter of the RMS integral of the original pulse with a maximum amplitude of 2.5 kHz. With the same lower-power AHP pulses, the RMS integral of an SL pulse of 64-ms duration and 400-Hz amplitude was reduced by a factor of approximately 1.5 compared with the original using 2.5-kHz AHP pulses (Table S1, Figure S6).
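One plausible way to compute such relative RMS integrals from sampled pulse-train amplitude waveforms is sketched below; the exact definition used in the paper, as well as the illustrative waveforms, are our assumptions, so only relative ratios, not absolute values, are meaningful here.

```python
import numpy as np

def rms_integral(b1_waveform, dt):
    """One reading of the 'RMS integral' of a pulse train: the square root of
    the integral of |B1(t)|^2 (a monotonic proxy for deposited RF energy).
    b1_waveform holds B1 amplitude samples (Hz); dt is the sample spacing (s)."""
    b1 = np.asarray(b1_waveform, dtype=float)
    return np.sqrt(np.sum(b1 ** 2) * dt)

dt = 1e-6
hard90 = np.full(200, 1250.0)        # 200-us hard pulse; 1250 Hz gives a 90-deg flip
conventional = np.concatenate([hard90, hard90])   # 0-ms SL preparation
spin_lock = np.full(64_000, 400.0)   # 64-ms spin lock at 400 Hz
with_sl = np.concatenate([hard90, spin_lock, hard90])

# Relative energy cost of adding the spin lock (illustrative ratio only).
print(rms_integral(with_sl, dt) / rms_integral(conventional, dt))
```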
| DISCUSSION T 1ρ contrast remains interesting for various applications in the human body because of its sensitivity to low-frequency molecular interactions that are often biologically important. 5,9 The different T 1ρ contrast preparation methods, particularly at very low SL amplitudes, are however sensitive to imperfections of the imaging field and the RF field. In this study, we proposed four new methods for generating T 1ρ contrast and compared them experimentally and numerically with four existing methods for their sensitivity to the field inhomogeneities. The study builds on earlier reports introducing ΔB 0 and B 1 insensitive T 1ρ preparation schemes, 15,18,19,22 particularly the one by Witschey et al., 14 and utilizes the same theoretical examination for the proposed hard-pulse schemes (see the supporting information). The results of the study indicate that the methods employing a refocusing pulse are significantly more robust against field inhomogeneities than those which do not, and also that combining CW spin locking with fully adiabatic excitation and refocusing is the most robust approach against field inhomogeneities. However, the fully adiabatic schemes come at the additional cost of significantly increased RF energy deposition. Among the non-adiabatic hard-pulse refocusing schemes, the single- and triple-refocused preparations proved the most robust. Recently, there has been an increase in interest towards T 1ρ dispersion in cartilage, 13,32-36 because the measurement could provide information beyond a single-amplitude T 1ρ scan. However, lowering the SL amplitude especially requires methods that are robust against field inhomogeneities. If the B 0 variations exceed the spin-locking amplitude, the locking becomes inefficient, resulting in spurious signal loss, which is further amplified with methods that do not compensate for field variations. 1,12 The theoretical considerations regarding the triple-refocused hard-pulse CW-T 1ρ preparation lead to the same conclusions that were found for the single-refocused preparation scheme earlier by Witschey et al., 14 suggesting the methods should be approximately equal. The double-refocused pulse scheme brings the magnetization back to the positive z axis; however, it appears to require nearly perfect 90° and 180° pulses, while the single- and triple-refocused methods only require that the 180° pulses be nearly perfect. Because of this difference, the single- and triple-refocused schemes appeared more robust against field inhomogeneities, as confirmed by the simulations. In practice, however, all the refocused hard-pulse options were observed to be very similar in soft tissues. Adiabatic pulses are known for their excellent tolerance to RF inhomogeneity 28 and thus stand out as an interesting possibility to improve the robustness of CW T 1ρ preparation.
Furthermore, adiabatic T 1ρ could be measured in a fully adiabatic mode, using a train of AFP HS RF pulses instead of a constant-amplitude CW SL pulse in between the AHP pulses. 22,24,29,37,38 In comparison with a CW SL with fixed B 1 amplitude and orientation, the adiabatic T 1ρ SL varies between off-resonance and on-resonance T 1ρ during the adiabatic sweep, where the amplitude and frequency of the pulse are modulated during the time course of the pulse. 39 (Figure 6 caption: T 1ρ relaxation time maps of a cherry tomato sample, under as ideal as possible conditions and under non-ideal conditions with an inhomogeneous B 0 field, for SL amplitudes of 0-400 Hz acquired with the different methods. An anatomical reference (showing the MNAD analysis ROI with red shading) and the corresponding B 1 and B 0 maps are shown at the top. Under the ideal conditions, all the refocused methods provided largely artifact-free T 1ρ relaxation time maps at all SL amplitudes, while the non-refocused methods showed artifacts at the edges of the FOV at low SL amplitudes. Under the non-ideal conditions, the non-refocused T 1ρ methods in particular performed poorly at lower SL amplitudes, while the refocused methods provided mostly artifact-free relaxation time maps at all SL amplitudes. The differences between the ideal and non-ideal conditions can particularly be seen at the top and bottom edges with more significant field inhomogeneities. FOV, field of view; MNAD, mean normalized absolute deviation; ROI, region of interest; SL, spin lock.) From the simulations, it was evident that the refocused adiabatic methods presented here are highly insensitive to ΔB 0 and B 1 field inhomogeneities. The robustness of the refocused adiabatic methods exceeded the simulated range of variation for the RF power, while the robustness against B 0 variations depended on the specific scheme. The double-refocused adiabatic BIR-4 and HS1 versions were found to be the most robust in the simulations, while experimentally, the double-refocused BIR-4 scheme was found to be the most robust. The low-powered (600-Hz) adiabatic CW-T 1ρ , which had an AHP pulse approximately four times longer than the high-powered (2.5-kHz) AHP pulse, was highly insensitive to field inhomogeneities at the higher SL amplitude of 400 Hz in the simulations (Figure S3B). This simulation demonstrates that when the maximum B 1 amplitude of the AHP pulses is brought closer to the spin-locking amplitude, the adiabatic CW-T 1ρ becomes highly insensitive to B 0 inhomogeneities that are of the order of or smaller than the spin-locking amplitude.
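For intuition about these simulated responses, the core of a hard-pulse Bloch simulation can be sketched with rotation operators: tip, precession about the tilted effective field during the spin lock, and tip-back. The sketch below covers only the conventional non-refocused preparation, neglects relaxation, and uses simplified sign and phase conventions; it is our schematic illustration, not the authors' simulation code.

```python
import numpy as np

def rot(axis, angle):
    """3x3 rotation matrix about 'x', 'y', or 'z' by 'angle' (rad)."""
    c, s = np.cos(angle), np.sin(angle)
    if axis == "x":
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
    if axis == "y":
        return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def sl_mz(w1, db0, b1_scale, tsl):
    """Residual Mz after 90x - spin lock - 90(-x) with off-resonance db0 (Hz),
    SL amplitude w1 (Hz), B1 scaling error b1_scale, SL duration tsl (s)."""
    m = np.array([0.0, 0.0, 1.0])
    m = rot("x", (np.pi / 2) * b1_scale) @ m        # imperfect excitation
    tilt = np.arctan2(db0, w1 * b1_scale)           # eff.-field tilt from y to z
    theta = 2 * np.pi * np.hypot(w1 * b1_scale, db0) * tsl
    # Precession about the tilted effective-field axis in the y-z plane.
    m = rot("x", tilt) @ rot("y", theta) @ rot("x", -tilt) @ m
    m = rot("x", -(np.pi / 2) * b1_scale) @ m       # imperfect tip-back
    return m[2]

# On resonance with correct B1 the magnetization is perfectly locked (Mz = 1);
# db0 comparable to w1 produces the oscillatory Mz loss behind banding artifacts.
print(sl_mz(400.0, 0.0, 1.0, 0.032), sl_mz(400.0, 250.0, 1.0, 0.032))
```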
(Figure caption: Under the ideal conditions, all the refocused methods provided largely artifact-free T 1ρ relaxation time maps at all SL amplitudes, while the non-refocused methods showed artifacts at the edges of the FOV at low spin-lock amplitudes. Under the non-ideal conditions, the non-refocused T 1ρ methods performed poorly at lower SL amplitudes, while the refocused methods were able to mitigate the most severe artifacts, especially at the higher SL amplitudes. The arrows indicate locations where differences (artifacts) can be noted between the conditions. FOV, field of view; MNAD, mean normalized absolute deviation; RF, radio frequency; ROI, region of interest; SL, spin lock.) The differences in the sensitivities to field inhomogeneities between the preparation schemes were assessed by calculating the MNAD values between the measurements conducted under ideal versus non-ideal conditions. This approach, while potentially dependent on the changes in the experimental conditions, provides a handle on the sensitivities of the methods, summarizing the results over all the measured samples. Among the hard-pulse schemes, the non-refocused preparations stood out with the largest deviations between the ideal and non-ideal cases, while the refocused methods showed significantly smaller deviations between the cases at all SL amplitudes. The adiabatic refocused schemes were aligned with the hard-pulse alternatives, with similarly small deviations. However, these analyses were conducted only in the tissues that had high SNR and were not clearly off-resonance (unlike, for example, the fatty bone marrow tissue). Further experimental differences were seen in the extreme areas, such as the fat or the edges of the coil-visible region for the tomato specimen in Figures 5 and 6, and particularly in the phantom (Figure 7), where the non-refocused methods, the B-SL method, 18 and the double-refocus adiabatic HS1 preparations showed signal loss and banding artifacts. The experimental performance of the adiabatic double-refocus scheme incorporating HS1 inversion pulses was not as good as that of the BIR-4 approach, despite providing the most promising simulation results. This could be because of the flip-angle dispersion effects of the HS1-AFP pulse on the magnetization components not collinear with it, 28 as is the case here. Two HS1 pulses were utilized to compensate for this effect, but the result remained inferior to that achieved by using an adiabatic plane-rotation BIR-4 pulse. In the clinical setting, T 1ρ relaxation measurements could provide important insights into disease diagnosis and progression. 33 In this study, the double-refocus adiabatic BIR-4 preparation was found to be the most robust against field inhomogeneities for improving the T 1ρ quantification. However, the most significant problem with this method is its significantly increased RF energy deposition: as realized here, the baseline zero-SL pulse has a duration of approximately 18 ms at an RMS amplitude of 2.3 kHz, which is already well beyond what is typically achievable on a clinical scanner (often the maximum transmit power is below 1 kHz, even for local transmit coils). 43 Besides the increased power requirements, such pulses are also likely to exceed SAR safety limits, 14 further limiting the use of such T 1ρ preparations. Among the less RF-intensive, yet ΔB 0 and B 1 insensitive T 1ρ preparation schemes, the single-refocus scheme 14 with minimal RF energy deposition appears to be the most feasible for in vivo imaging. However, because the magnetization after this preparation lies along the negative z axis, a spin-echo type of readout sequence would be preferable over a gradient-echo sequence with relatively small tip angles, which would drive the magnetization through zero if longer echo trains are collected. Alternatively, for a gradient-echo readout sequence, an additional (adiabatic) inversion pulse could potentially be utilized at the end of the preparation to avoid this effect. Considering the overall scan duration, gradient-echo sequences with short TR and RF cycling 44 or tailored flip angles 45 could be utilized to enable faster scans. Other possibilities for improved T 1ρ have been presented previously, such as the one by Mitrea et al.
15 Initial tests (Figure S5), however, suggested it to be more sensitive to field variations than the single-refocus method reported by Witschey et al., 14 a finding further supported by the simulations (see the supporting information). Another very promising approach utilizes adiabatic excitation and rewinder pulses at the same amplitude as the target SL amplitude. 16,46,47 Simulations with a nearly matched-amplitude SL pulse 16,47 suggested that this non-refocused adiabatic scheme performs very well against the field inhomogeneities (see the supporting information). However, this sequence is more akin to the adiabatic T 1ρ method, 7,24,25,43 and reflects a combination of on-resonance and off-resonance T 1ρ relaxation. Another potential challenge with this method is maintaining the adiabatic condition at very low SL amplitudes. Utilizing fully adiabatic spin locking 22,24,29,37,38 can further mitigate the effects of field inhomogeneities and even provide slice selectivity 37 as well as reduced orientation/magic-angle dependence. 7 A variation of the double-refocused hard-pulse preparation scheme investigated here 18 was presented recently with promising results, but without direct comparison with other preparation methods. 48 Besides presenting a method for faster T 1ρ acquisition by using tailored variable flip-angle scheduling, Johnson et al. 45 also utilized a partially adiabatic variation of the single-refocus method by Witschey et al., 14 replacing the hard 90° pulses with adiabatic pulses. This variation presents another interesting option for T 1ρ preparation; however, no direct comparison with other T 1ρ preparations with respect to sensitivity to inhomogeneities was provided. The present study has certain limitations, including a limited selection of previously presented methods for the experimental generation of T 1ρ contrast. The number of samples was limited, and all the experiments were carried out at 9.4 T using a relatively high maximum B 1 amplitude. However, the differences between the methods were generally confirmed with the simulations; similar practical differences may be expected with B 0 and B 1 variations regardless of the main field strength, although the practical in vivo importance will ultimately be revealed by real measurements. In conclusion, artifacts arising from field inhomogeneities in CW-T 1ρ -weighted imaging can be efficiently suppressed by different refocused spin-locking pulse schemes. In this numerical, experimental, and theoretical comparison of different T 1ρ contrast preparation methods, the double-refocus adiabatic BIR-4 preparation was found to be the most robust. However, because of the excessive RF energy deposition of the adiabatic method, its use is likely restricted to the preclinical setting. Of the less RF-intensive methods, the ΔB 0 and B 1 compensated single-refocus hard-pulse CW-T 1ρ method reported by Witschey et al. 14 and the proposed triple-refocused method proved to be very robust against field inhomogeneities. The simulations confirm the increased robustness of the low-power AHP CW spin locking, and both the experimental and the simulation findings promote the use of the previously reported hard-pulse single-refocus ΔB 0 and B 1 insensitive method for clinical use, while the adiabatic double-refocused BIR-4 method could be preferred for ex vivo experiments.
8,537.2
2022-09-17T00:00:00.000
[ "Physics" ]
Auxiliary Information-Enhanced Recommendations: Sequential recommendations have attracted increasing attention from both academia and industry in recent years. They predict a given user's next choice of items by mainly modeling the sequential relations over a sequence of the user's interactions with the items. However, most of the existing sequential recommendation algorithms mainly focus on the sequential dependencies between item IDs within sequences, while ignoring the rich and complex relations embedded in auxiliary information, such as items' image information and textual information. Such complex relations can help us better understand users' preferences towards items, and thus benefit the recommendations. To bridge this gap, we propose an auxiliary information-enhanced sequential recommendation algorithm called memory fusion network for recommendation (MFN4Rec) to incorporate both items' image and textual information for sequential recommendations. Accordingly, item IDs, item image information, and item textual information are regarded as three modalities. By comprehensively modeling the sequential relations within modalities and the interaction relations across modalities, MFN4Rec can learn a more informative representation of users' preferences for more accurate recommendations. Extensive experiments on two real-world datasets demonstrate the superiority of MFN4Rec over state-of-the-art sequential recommendation algorithms. Introduction Recommender systems play an increasingly important role in our daily life, helping users effectively and efficiently find the items of their interest from a large number of choices. Sequential recommender systems, as a relatively new type of recommender system, have attracted much more attention in recent years. A sequential recommender system (SRS) aims at providing sequential recommendations, namely recommending the next item to a user by learning the user's preference from their recent historical interactions (e.g., clicks, purchases) with items. By effectively modeling the user's recent interactions, an SRS is able to capture a user's latest preference and thus generate accurate sequential recommendations. Although effective, existing SRSs still have some drawbacks. One typical case is the ignorance of auxiliary information. Specifically, in real-world e-commerce settings, in addition to the explicit or implicit user-item interactions, which are mainly indicated by item IDs, there are other types of information which can also reveal users' preferences, such as item attributes, appearance images, and description texts. In practice, item ID information and the corresponding various types of auxiliary information can be treated as multi-modal information, where each type of information serves as one modality. Some conventional recommendation algorithms, including collaborative filtering and content-based filtering, have utilized this auxiliary information to better characterize items and to complement the user-item interaction information. As a result, more specific user preferences towards items can be extracted for improving recommendation performance. In the sequential recommendation scenario, there are more complex relations embedded in the aforementioned multi-modal information.
To be specific, there are not only sequential relations within modalities, e.g., a user's implicit interactions (clicks) with items usually being sequentially dependent, but also interaction relations between different modalities, e.g., the correlations between item description texts and item appearance images. However, most of the existing SRS algorithms either ignore such auxiliary multi-modal information or simply model a single type of relation embedded within it. For example, the visual content-enhanced sequential recommender system (VCSRS) first learns an attentive item visual content representation and then incorporates it into an LSTM-based recurrent neural network (RNN) for next-item recommendations [1]. However, VCSRS not only ignores the richer textual description information of items (e.g., reviews), but also fails to model the sequential dependencies within each modality as well as the interaction relations across different modalities. The parallel recurrent neural network (p-RNN) first utilizes multiple recurrent neural networks to separately model the sequential dependencies over items embedded in user-item interactions (i.e., clicks), item description texts, and item images, and then integrates the modeled sequential dependencies from the different modalities for the downstream recommendations [2]. However, p-RNN only models the sequential dependencies within each modality while ignoring the complex interaction relations across different modalities. Multi-view RNN (MV-RNN) also employs both text and image information for sequential recommendations [3]. In MV-RNN, an auto-encoder-based multi-modal representation fusion module is designed to generate a compound representation for each item by integrating the item-related information from multiple modalities. The compound representation of a given item is then input into a gated recurrent unit (GRU) at the corresponding time step of an RNN to model the sequential dependencies among items. Finally, the final hidden state of the RNN is regarded as the user's preference for generating recommendations. Although effective, such a method mainly considers the interaction relations across different modalities, while the sequential dependencies within each modality are weakened. To address the aforementioned drawbacks of existing works, in this paper we aim at developing an accurate sequential recommendation algorithm by effectively extracting and aggregating useful information from multi-modal auxiliary information, as well as modeling the complex interaction relations embedded in it. To be specific, we devise a memory fusion network for recommendation (MFN4Rec) that effectively integrates the relevant information from three modalities, i.e., item IDs, item images, and item description texts, and models the complex relations between and within modalities. MFN4Rec builds on a representative work in multi-modal sequence representation learning, the memory fusion network for multi-view sequential learning [4], adapting it to multi-modal representation learning for sequential recommendations. To be specific, MFN4Rec contains a multi-GRU layer, a multi-view gated memory network (MGMN), and a prediction module. The multi-GRU layer contains three GRU-based RNNs, with each RNN modeling the sequential dependencies by taking the modal-specific representation of each item as the input of each step. MGMN is designed to model and extract the interaction relations across different modalities.
The outputs from both the multi-GRU layer and MGMN are combined as the input of the prediction module for the next-item prediction. Benefiting from the information memorization and spreading mechanism of the memory network, MFN4Rec is able not only to effectively handle the relations within and across modalities, but also to effectively model the dynamic sequential dependencies in sequences, and thus makes the multi-modal auxiliary information contribute more to the sequential recommendations. The contributions of this work are summarized below: • We propose a memory fusion network for recommendation (MFN4Rec) to effectively model auxiliary multi-modal information for accurate sequential recommendations. • A multi-GRU layer is designed to effectively model the sequential dependencies within each modality. • A multi-view gated memory network (MGMN) is particularly devised to effectively model the complex interaction relations across different modalities. Extensive experiments have been conducted on two real-world e-commerce transaction datasets. The results have demonstrated the superiority of our proposed SRS algorithm over state-of-the-art ones when performing sequential recommendations. Related Work In this section, we first review the existing work on conventional sequential recommendations and then review the existing work on auxiliary information-enhanced sequential recommendations. Sequential Recommendation Algorithms Generally speaking, according to the employed techniques, sequential recommendation algorithms can be roughly divided into traditional sequential recommendation algorithms and deep-learning-based sequential recommendation algorithms. Traditional sequential recommendation algorithms are built on traditional data mining or machine learning techniques, including sequential pattern mining, Markov chain models, matrix factorization, and neighborhood models. Yap et al. [5] introduced a personalized sequential pattern mining algorithm to first mine personalized sequential patterns and then utilize the mined patterns to guide the downstream recommendations. Pattern mining-based algorithms are simple and sometimes effective, but they easily lose infrequent, yet important, items and patterns, and thus reduce the recommendation accuracy. Feng et al. [6] proposed a Markov chain-based SRS algorithm called the Personalized Ranking Metric Embedding (PRME) model for next-POI recommendations. Markov chain-based algorithms can only model first-order dependencies while ignoring higher-order dependencies, which reduces the recommendation accuracy. Rendle et al. [7] proposed a classic matrix factorization model called the Factorized Personalized Markov Chains (FPMC) model to factorize the transition matrix over items from adjacent baskets into the latent factors of items. The latent factors are then utilized for next-basket recommendation. However, matrix factorization methods easily suffer from data sparsity issues. In recent years, deep learning models, including RNNs and CNNs, have shown great potential to capture the complex relations in sequences, and thus have been widely employed in sequential recommendations. Due to their powerful capability to model sequence data, RNNs are the most prominent deep models for sequential recommendations. Hidasi et al. [8] proposed a Gated Recurrent Unit (GRU)-equipped RNN-based model called GRU4Rec for next-item prediction. GRU4Rec was further improved by introducing a novel and tailored ranking loss function [9].
Some other similar works include Long Short-Term Memory (LSTM)-based SRS algorithms [10]. Later, hierarchical RNNs were employed in sequential recommendations to model both intra-sequence and inter-sequence dependencies for next-item recommendations [11]. However, the rigid order assumption over any two adjacent interactions employed in RNNs may generate false sequential dependencies [12]. In addition to RNNs, CNNs have also been applied to sequential recommendations to build CNN-based SRS algorithms. Tang et al. [13] developed a convolutional sequence embedding recommendation model called Caser. Caser employs horizontal and vertical convolutional filters to learn the item-level and feature-level dependencies, respectively, for sequential recommendations. Further, a 3D CNN model was developed for jointly modeling the sequential relations and item content features for next-item recommendations [14]. However, CNN-based SRSs may not be able to effectively capture long-range dependencies due to the limited receptive field of CNNs. Most recently, graph neural networks (GNNs), as an advanced deep architecture, have been applied to sequential recommendations. Typical GNN-based sequential recommendation algorithms include memory augmented graph neural networks (MA-GNN) [15] and RetaGNN [16]. Some other researchers employed attention mechanisms in sequential recommendations to improve the recommendation performance. Wang et al. [12] utilized an attention model to learn attentive item and session representations for next-item recommendations. Later, a self-attention mechanism was introduced to better capture the heterogeneous relations embedded in a sequence of interactions for accurate sequential recommendations [17][18][19]. Although these deep models have shown great potential in achieving good recommendation performance, they usually ignore the rich auxiliary information, which limits the further improvement of the recommendation performance. Auxiliary Information-Enhanced Sequential Recommendations In real-world scenarios, in addition to the commonly used item ID information, there is rich auxiliary information related to items, users, and interactions. Such auxiliary information can provide more contextual information for an in-depth understanding of users' sequential behaviors, and thus can benefit the subsequent sequential recommendations. For instance, Wang et al. [20] take both the item ID and the corresponding item attributes in a session as the input of a shallow neural network to learn a compound embedding for each item for the downstream next-item recommendations. A 3D convolutional neural network was proposed by Tuan et al. [14] to learn informative item representations from both item IDs and content features of items for next-item prediction. With the introduction of a neighborhood model, Garg et al. [21] incorporated the readily available position information of items within sequences for more accurate sequential recommendations. The occurrence timestamps of users' interactions in sequences were explored by Li et al. [22] and Ye et al. [23] for next-item recommendations. These works have taken a step forward in incorporating more auxiliary information to enhance sequential recommendations, but they ignore the important and representative item image and textual information. Only a limited number of works on sequential recommendations have taken item image and/or textual information into account.
An RNN-based sequential recommendation model called VCSRS was proposed by Qu et al. [1]. VCSRS first utilizes an attention-based visual feature representation learning component to learn a task-specific item visual representation, and then incorporates it into a single LSTM-based RNN to complement the item ID information for next-item recommendations. Although effective, on one hand, VCSRS ignores another important piece of auxiliary information, i.e., item textual information; on the other hand, VCSRS does not model different types of information as different modalities, and thus it fails to effectively capture the sequential dependencies within each modality (e.g., item IDs and visual features) and the interactions between different modalities. Therefore, VCSRS differs from this work in terms of the input data, the solution, and the model architecture. A parallel RNN-based model called p-RNN was developed by Hidasi et al. [2] to take item IDs, item images, and item textual features as the input to learn informative item representations for sequential recommendations. In the parallel RNN model, three RNNs are utilized to model the aforementioned three parts of information, respectively, and the outputs of all the RNNs are combined for the prediction task. Although p-RNN can improve the recommendation performance to some degree, it ignores the interaction relations between different modalities and thus cannot fully model the complex relations embedded in users' interaction sequence data. Another similar work is multi-view RNN (MV-RNN), which also employs both text and image information for sequential recommendations [3]. In MV-RNN, an auto-encoder-based multi-modal representation fusion module is developed to generate a compound representation for each item by integrating both the item image and textual information. The compound representation is then input to a GRU-based RNN for predicting the next item. Although effective, such a method mainly considers the relations across different modalities, while the sequential dependencies within each modality are weakened. In summary, although some works have tried to integrate multi-modal auxiliary information into sequential recommendations, they either ignore the interaction relations across different modalities or weaken the sequential dependencies within modalities. This has limited the further improvement of the recommendation performance. A more effective and reliable algorithm which can effectively incorporate different types of auxiliary information for sequential recommendations is needed, which motivates our work in this paper. The Proposed SRS Algorithm As shown in Figure 1, our proposed memory fusion network for recommendation (MFN4Rec) mainly contains three stages. (1) First, it extracts the feature embedding from each modality, i.e., item IDs, item images, and item description texts, and then imports these extracted feature embeddings into the multi-GRU layer, which comprises three GRU-based RNNs. The feature embedding from each modality is imported into the corresponding modal-specific GRU-based RNN for modeling the sequential dependencies within the modality. (2) Second, the output from each RNN is then imported into the multi-view gated memory network (MGMN) to learn the interaction relations across modalities.
(3) Finally, the outputs from both the multi-GRU layer and the MGMN are taken as the input of the prediction layer for the next-item prediction. Next, we introduce each stage of the MFN4Rec algorithm. Multi-GRU Layer Given a sequence of items interacted with by a user, s = ⟨v_1, v_2, ..., v_{|s|}⟩, we first extract the multi-modal features for each item in s. Particularly, for v_t ∈ s, we extract its ID embedding m_t ∈ R^{d_m}, its image feature embedding f_t ∈ R^{d_f} and its text feature embedding g_t ∈ R^{d_g}. The item ID embedding is obtained from a learnable ID-embedding matrix. The item image embedding is extracted via a 16-layer convolutional neural network (CNN) named VGGNet (shortened to VGG-16) [24] that was pre-trained on ImageNet [25]. The text feature embedding is obtained via the commonly used word-embedding algorithm GloVe [26]. We first use the pre-trained GloVe to obtain the word-embedding vector of each word in the item description texts, and then we use the commonly used TF-IDF algorithm to calculate a weight for each word in the item's text. The final item text embedding, with a dimension of 100, is calculated as the weighted sum of the embeddings of the words in the text. The dimensions of the item ID embedding and the image embedding are 25 and 1000, respectively. Once the embeddings in the three modalities are ready for each item in a sequence, they are imported into the modal-specific GRU-based RNNs to model the sequential dependencies within each modality. Given a sequence s, the embedding vectors of one modality over all items form a modal-specific embedding sequence. For the tth item v_t in s, its ID embedding m_t, image feature embedding f_t and text feature embedding g_t are taken as the input of the GRU at the tth step in the corresponding GRU-based RNN to output the corresponding hidden state. Accordingly, the multi-GRU layer conducts the following operations at the tth step: h_t^m = GRU_m(m_t, h_{t-1}^m), h_t^f = GRU_f(f_t, h_{t-1}^f), h_t^g = GRU_g(g_t, h_{t-1}^g), where GRU indicates the operations in a normal GRU cell [27]. GRU_m, GRU_f and GRU_g are the corresponding modal-specific GRU cells. h_t^m, h_t^f and h_t^g are the corresponding hidden states of the current step, and they keep the modal-specific sequential information of the sequence s. The dimension of the hidden state in each modality is equal to the dimension of the corresponding input in the same modality. Differentiated Attention Layer The differentiated attention layer is designed to extract the cross-modal interaction relations from the three modal-specific hidden states at each step. Specifically, at the tth step, we want to extract the interactions over h_t^m, h_t^f and h_t^g. Since different subspaces of a hidden state may have different cross-modal interaction strengths with the hidden states from other modalities, we need to differentiate the importance of the dimensions of the hidden state when extracting cross-modal interaction relations. For each hidden state, we devise an attention mechanism to emphasize the dimensions which have more interactions with other modalities. The hidden state of the multi-GRU layer at the tth step can be represented as h^t = [h_t^m ; h_t^f ; h_t^g], where [· ; ·] indicates the concatenation of vectors. The differentiated attention layer takes the hidden states from any two adjacent steps as the input to extract the cross-modal relations. At the tth step, the input is the concatenation of h^{t-1} and h^t, denoted as h^{[t-1,t]} ∈ R^{6·d_h}.
Such input is imported into a fully connected (FC) layer with softmax as the activation function to output the attention weights, and the weighted hidden state is obtained as ĥ^{[t-1,t]} = softmax(FC_a(h^{[t-1,t]})) ⊙ h^{[t-1,t]}, (4) where ⊙ denotes element-wise multiplication. The obtained ĥ^{[t-1,t]} can be seen as the latent representation of the cross-modal relations at the current tth step. In Equation (4), by comparing the information embedded in two adjacent hidden states, the attention mechanism can assign the weights accordingly when the hidden state changes from step t-1 to step t. The differentiated attention layer takes the hidden state as the input, which usually contains information from the past steps. Therefore, it can capture the interaction relations of different modalities across multiple time steps. This helps our model discover the complex relations embedded in the sequence data. Gated Multi-Modal Memory Network Once the cross-modal interaction relations are extracted at each time step, we utilize a memory network to handle such relations recurrently along the time steps to obtain the final multi-modal compound memory representation u ∈ R^{d_mem}. Specifically, the candidate memory at the current step t is obtained by û^t = FC_u(ĥ^{[t-1,t]}), where FC_u is a fully connected layer. Then, in the memory network, we use a gate mechanism to control how much of the previous memory is retained and how much of the candidate memory is taken up. Particularly, two gates, i.e., a remaining gate g_1 and an update gate g_2, are introduced as g_1 = σ(FC_{g1}(ĥ^{[t-1,t]})) and g_2 = σ(FC_{g2}(ĥ^{[t-1,t]})), where σ is the sigmoid function. g_1 and g_2 determine how much information should remain and be updated, respectively. The final memory is obtained as u^t = g_1 ⊙ u^{t-1} + g_2 ⊙ û^t. Prediction and Optimization The final multi-modal representation of the sequence s of length T is calculated based on the outputs of the multi-GRU layer, i.e., h_T^m, h_T^f, h_T^g, and the output of the gated multi-modal memory network, i.e., u^T. Specifically, h_out = FC_out([h_T^m ; h_T^f ; h_T^g ; u^T]), where FC_out is a fully connected layer. In the prediction layer, a softmax layer is used to map h_out into the probability distribution over all the candidate items. The candidate items are then ranked according to their probability, and the top-ranked ones form the recommendation list. To be specific, the probability is computed as p^T = softmax(W_p h_out + b_p). The cross-entropy loss is used as the loss function during the training of the model: L = -Σ_i p_i log p_i^T, where p^T is the predicted probability distribution and p is the one-hot vector of the ground-truth item to be predicted. In the model training, the Adam optimizer and batch gradient descent are used to optimize the model parameters. Dropout is used to avoid overfitting of the model parameters. We utilize grid search and cross-validation to adjust the hyperparameters of the algorithm, and the used hyperparameters are listed in Table 1.
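To make the pipeline above concrete, the following is a minimal PyTorch sketch of the three stages: modal-specific GRUs, the differentiated attention over adjacent hidden states, and the gated multi-modal memory. It is an illustration under stated assumptions, not the authors' released implementation: the exact parameterizations of FC_a, FC_u, FC_g1 and FC_g2, the tanh on the candidate memory, and the per-step Python loop are our choices.

```python
import torch
import torch.nn as nn

class MFN4RecSketch(nn.Module):
    """Illustrative sketch of MFN4Rec: multi-GRU layer, differentiated
    attention, gated multi-modal memory, and a softmax prediction layer."""

    def __init__(self, n_items, d_m=25, d_f=1000, d_g=100, d_mem=25):
        super().__init__()
        self.id_emb = nn.Embedding(n_items, d_m)         # learnable ID embeddings
        self.gru_m = nn.GRU(d_m, d_m, batch_first=True)  # one GRU per modality;
        self.gru_f = nn.GRU(d_f, d_f, batch_first=True)  # hidden size equals the
        self.gru_g = nn.GRU(d_g, d_g, batch_first=True)  # input size, as described
        d_h = d_m + d_f + d_g                            # size of h^t = [h_m; h_f; h_g]
        self.attn = nn.Linear(2 * d_h, 2 * d_h)          # FC_a of Eq. (4)
        self.fc_u = nn.Linear(2 * d_h, d_mem)            # candidate memory
        self.fc_g1 = nn.Linear(2 * d_h, d_mem)           # remaining gate
        self.fc_g2 = nn.Linear(2 * d_h, d_mem)           # update gate
        self.fc_out = nn.Linear(d_h + d_mem, n_items)    # prediction layer

    def forward(self, item_ids, img_feats, txt_feats):
        # item_ids: (B, T); img_feats: (B, T, 1000); txt_feats: (B, T, 100)
        h_m, _ = self.gru_m(self.id_emb(item_ids))       # within-modality dependencies
        h_f, _ = self.gru_f(img_feats)
        h_g, _ = self.gru_g(txt_feats)
        h = torch.cat([h_m, h_f, h_g], dim=-1)           # (B, T, d_h)
        u = h.new_zeros(h.size(0), self.fc_u.out_features)
        for t in range(1, h.size(1)):                    # recurrent memory update
            pair = torch.cat([h[:, t - 1], h[:, t]], dim=-1)        # h^{[t-1,t]}
            h_hat = torch.softmax(self.attn(pair), dim=-1) * pair   # Eq. (4)
            u_cand = torch.tanh(self.fc_u(h_hat))        # candidate memory
            g1 = torch.sigmoid(self.fc_g1(h_hat))        # how much memory remains
            g2 = torch.sigmoid(self.fc_g2(h_hat))        # how much is updated
            u = g1 * u + g2 * u_cand                     # gated memory update
        scores = self.fc_out(torch.cat([h[:, -1], u], dim=-1))
        return torch.log_softmax(scores, dim=-1)         # for cross-entropy training
```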
Data Preparation and Experiment Setup The "Clothing, Shoes and Jewelry" and "Phone" subsets of the Amazon dataset https://jmcauley.ucsd.edu/data/amazon/ (accessed on 16 July 2021) are used for our experiments, denoted as the Amazon Clothing dataset and Amazon Phone dataset, respectively, in this work. Intuitively, for these two categories of items, the item images may play a more important role in users' choices of items. Both datasets contain users' reviews from Amazon.com with timestamps. Following existing works [2,3], we regard each user's review of an item as their interaction with the item. The items reviewed by each user are sorted in chronological order to build the user's sequence of interactions with items. In addition to such interactions, each item has a corresponding image describing the item's appearance and review texts from users [28]. We removed sequences longer than 100 items to keep the sequence lengths from varying too much. To keep the dataset from being too sparse, we only kept the interactions which happened in the last two years. The statistics of the datasets are shown in Table 2. For each sequence, we take the last item as the target item to be predicted and use all the other items as the corresponding given context to predict the target item. For each user-item interaction sequence, we first rank the interactions according to their occurrence time. Then, we take the first 60% of interacted items as the training data, the following 20% as the validation data and the last 20% as the test data. Similar to [29,30], we tune the hyperparameters according to the performance on the validation set to obtain the optimal hyperparameters. Then, we use the whole training set to re-train the model. Finally, we test the model on the test set. The dimensions of the hidden state d_h and the memory d_mem are 20; the batch size is 64. The initial learning rate is 0.01. Performance Comparison with Baselines In the experiments, representative and state-of-the-art sequential recommendation algorithms are selected as the baselines. We compare our proposed algorithm with these baseline algorithms to evaluate its performance. Specifically, the baseline algorithms include two representative sequential recommendation algorithms, BPR [31] and LSTM [32], and four representative and/or state-of-the-art sequential recommendation algorithms which also incorporate auxiliary information such as item images, namely VBPR [33], p-RNN [2], MV-RNN [3] and VCSRS [1]. Within the same setting as this work, p-RNN, MV-RNN and VCSRS incorporate multi-modal auxiliary information, i.e., item images and text, to improve sequential recommendations. VCSRS is adapted to incorporate both item image and text information by concatenating the image representation vectors and the text representation vectors to form a unified item auxiliary information representation as the input of the feature-level attention module (FAM). All the baseline algorithms and our proposed algorithm are tested on the aforementioned experimental datasets for recommendation performance comparison. Two representative ranking-based measures, recall and mean average precision (MAP), are used as the evaluation metrics. They are commonly used to evaluate the performance of sequential recommendations. The experimental results are shown in Tables 3 and 4, where the values are percentages and the best ones are marked in bold. According to Tables 3 and 4, it is clear that our proposed algorithm MFN4Rec achieved the best performance w.r.t. all the evaluation metrics, which demonstrates the effectiveness of our proposed algorithm. BPR and LSTM only take the item ID as the input to model the single-modal sequential dependencies among user-item interactions, and they thus perform the worst. Based on BPR, VBPR adds the item image information as auxiliary information. Based on LSTM, p-RNN adds both the item image and text information for sequential recommendation. The performance improvement of both VBPR and p-RNN is limited; this is because they only model the sequential relations within modalities while ignoring the interaction relations across different modalities. Due to the effective learning of the item visual content representation and its careful incorporation into the LSTM-based RNN dominated by item ID information, VCSRS can improve the recommendation performance.
However, it fails to model the unique sequential dependencies within each modality and the interaction relations between modalities, e.g., between item IDs and item visual content. The reason is that only one RNN is utilized to model the sequential dependencies over items, without a particularly designed component to model the interactions between different modalities. Therefore, the aforementioned sequential dependencies within different modalities are mixed together, and the interaction relations between different modalities cannot be modelled effectively. Out of all the baseline algorithms, MV-RNN performs the best, demonstrating that the utilization of an auto-encoder can effectively capture the multi-modal information, especially the cross-modal interaction relations. However, it still fails to consider the sequential relations within modalities. In comparison, our proposed MFN4Rec is able to effectively capture the sequential dependencies embedded in multi-modal sequence data by simultaneously capturing both the sequential dependencies within modalities and the interaction relations across modalities. As a result, our algorithm achieves the best performance. Ablation Analysis To verify the effectiveness of each module in our proposed MFN4Rec algorithm, we conduct an ablation analysis to measure the contribution of each module to the performance improvement. To be specific, three simplified versions of MFN4Rec are designed: (1) one that only keeps the multi-GRU layer and combines the final hidden states of all GRU-based RNNs as the input for next-item prediction, denoted as MFN4Rec-g; (2) one that only keeps the gated multi-modal memory network and takes its output as the input for prediction, denoted as MFN4Rec-m; (3) one that removes the gated multi-modal memory network and adds up the final hidden states of all modalities as the multi-modal memory representation u^t, while the rest remains the same as in MFN4Rec, denoted as MFN4Rec-add. We compare the performance of these three simplified versions with that of MFN4Rec under the same experimental setting. The results are shown in Figure 2. The experimental results show that MFN4Rec-m performs the worst; it only considers the interaction relations across modalities while ignoring the sequential dependencies within modalities. Particularly, the lack of item ID information significantly reduces the recommendation performance. MFN4Rec-g and MFN4Rec-add consider the sequential dependencies within modalities while not fully capturing the complex interaction relations across modalities, and thus they cannot perform very well. In summary, these observations demonstrate the significance of effectively capturing both the sequential dependencies within modalities and the interaction relations across modalities. Conclusions Most of the existing sequential recommendation algorithms are not able to effectively utilize multi-modal auxiliary information to capture the complex dependencies and interaction relations embedded in users' sequential behaviours. To address this problem, we proposed a novel multi-modal sequential recommendation algorithm called MFN4Rec to effectively incorporate items' image and text description information. Thanks to its particular design, MFN4Rec can effectively model both sequential dependencies within modalities and the interaction relations across modalities for more accurate sequential recommendations.
The experiments on real-world e-commerce datasets demonstrate the effectiveness of MFN4Rec and the significance of modeling both sequential dependencies within modalities and interaction relations across modalities.
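As a reference for the evaluation protocol, recall and MAP over ranked recommendation lists can be computed as below. This is a minimal sketch assuming one held-out target item per test sequence and a full ranking over all candidate items, matching the leave-last-out setup described in the experiments.

```python
import numpy as np

def recall_at_k(ranked_items, target, k):
    # 1 if the held-out item appears in the top-k recommendations, else 0.
    return float(target in ranked_items[:k])

def average_precision(ranked_items, target):
    # With a single relevant item per sequence, AP reduces to the
    # reciprocal rank of the target item in the full ranking.
    return 1.0 / (list(ranked_items).index(target) + 1)

def evaluate(all_ranked, all_targets, k=20):
    recall = np.mean([recall_at_k(r, t, k) for r, t in zip(all_ranked, all_targets)])
    m_ap = np.mean([average_precision(r, t) for r, t in zip(all_ranked, all_targets)])
    return recall, m_ap  # mean recall@k and MAP over the test set
```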
6,382.6
2021-09-23T00:00:00.000
[ "Computer Science" ]
Deep Neural Network Analysis of Clinical Variables Predicts Escalated Care in COVID-19 Patients This study sought to identify the most important clinical variables that can be used to determine which COVID-19 patients will need escalated care early on, using deep-learning neural networks. Analysis was performed on hospitalized COVID-19 patients between February 7, 2020 and May 4, 2020 in Stony Brook Hospital. Demographics, comorbidities, laboratory tests, vital signs, and blood gases were collected. We compared data obtained at the time of arrival in the emergency department and at the time of intensive care unit (ICU) upgrade for: i) COVID-19 patients admitted to the general floor (N=1203) versus those directly admitted to the ICU (N=104), and ii) patients not upgraded to the ICU (N=979) versus those upgraded to the ICU (N=224) from the general floor. A deep neural network algorithm was used to predict ICU admission, with 80% training and 20% testing. Prediction performance was assessed with the area under the curve (AUC) of the receiver operating characteristic (ROC) analysis. We found that C-reactive protein, lactate dehydrogenase, creatinine, white-blood cell count, D-dimer, and lymphocyte count showed temporal divergence between patients who were upgraded to the ICU and those who were not. The deep learning predictive model ranked essentially the same set of laboratory variables as important predictors of needing ICU care. The AUC for predicting ICU admission was 0.782±0.013 for the test dataset. Adding vital sign and blood-gas data improved the AUC (0.861±0.018). This study identified a few laboratory tests that were predictive of escalated care. This work could help frontline physicians anticipate downstream ICU needs to more effectively allocate healthcare resources. Introduction Since it was first reported in Wuhan, China in December 2019 (1,2), the coronavirus disease 2019 (COVID-19) has infected over 27 million people and killed more than 880,000 people worldwide (September 6, 2020) (3). There are recent spikes in COVID-19 cases and there will likely be second waves (4). To date, it is challenging for emergency room physicians to determine which patients need escalated care (i.e., ICU admission) or to anticipate ICU needs downstream for effective allocation of healthcare resources, in part because much is still unknown about this disease. Many studies have reported a large array of clinical variables associated with COVID-19, which include, but are not limited to, patient demographics, clinical presentations, comorbidities, imaging data, vital sign data, and laboratory blood tests (5)(6)(7). A few studies have attempted to predict the need for escalated care and mortality, typically using data obtained at admission to the emergency department (ED) (8)(9)(10)(11). Current results are inconsistent and there is no consensus as to which variables are good predictors of escalated care. This is in part because COVID-19 patients come to the emergency department at various stages of disease severity, which could confound the results. It may be more informative to study patients who were subsequently upgraded to the ICU from the general floor. The goal of this study was to identify the most important clinical variables that can be used to determine which patients will need downstream ICU care early on.
We compared patients not upgraded to the ICU from the general floor with those subsequently upgraded to the ICU, and contrasted this with a comparison between COVID-19 patients admitted to the general floor and those immediately admitted to the ICU. Clinical variables were obtained at the time of arrival to the emergency department as well as at the time of ICU upgrade. A deep neural-network algorithm was developed to identify the most important clinical variables that informed the need for escalated care, and these variables were used to predict ICU admission. Patients presented to the Stony Brook Hospital ED from February 7, 2020 to June 30, 2020. There were 2,892 COVID-19 positive patients as determined by real-time polymerase chain reaction (RT-PCR) for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), of which 1430 were hospitalized. Patients <18 years old, patients who were still in the hospital at the time of this analysis, and patients who did not have full code status were excluded. Methods The final sample sizes included 1203 patients admitted to the general floor ("general floor", Group A) and 104 directly admitted to the ICU from the ED ("direct ICU", Group B); 979 patients remained on the general floor ("no upgrade", Group C) and 224 were upgraded from the general floor to the ICU ("upgrade ICU", Group D) (Figure 1). The clinical variables were collected for general floor admission (Group A) versus direct ICU (Group B) at ED admission. Data were collected for the no-upgrade versus upgrade groups at ED admission to the general floor. Data were also collected one day prior to ICU upgrade (Group D) or three days after hospitalization for the no-upgrade group (Group C). The "3-day" time point was chosen for comparison because the median time for patients to be upgraded to the ICU from the general floor was 3 days. Preprocessing and deep neural network prediction model: Bicarbonates, pCO2, pO2, pH, hematocrit and troponin were not used in the machine learning analysis because invasive blood gas samples and troponin were not routinely obtained in our hospital on general floor patients. For the rest of the laboratory variables, missing data (<25%) were imputed using standard methods (12). Two deep neural network models were built: one using laboratory tests (excluding vitals and blood gases) and the other using laboratory tests, vitals and blood gases. Both used Jupyter Notebook, TensorFlow, and Keras, and were constructed using two fully connected dense layers. The inputs consisted of the clinical variables for no-ICU versus ICU patients: namely those of Group A (floor) at ED admission and Group C (no upgrade) at the corresponding time of upgrade versus Group B (direct ICU) at ED admission and Group D (upgrade) at the time of upgrade. The output was ICU admission. For both, the dataset was randomly split into 80% training data and 20% testing data, and trained for 50 epochs with a batch size of 6. For the model using laboratory tests, a learning rate of 0.001 was used, whereas for the model using laboratory tests, vitals, and blood gases, a learning rate of 0.0009 proved optimal. A softmax activation function was used in the output layer. The clinical variables were ranked using SHAP (SHapley Additive exPlanations), a Python package that explains the output of machine learning models based on game theory.
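As a concrete illustration of this setup, the sketch below builds a two-dense-layer Keras classifier with a softmax output (learning rate 0.001, 50 epochs, batch size 6) and ranks the inputs by mean absolute SHAP value. The hidden-layer widths, the SHAP background-sample size, and the names X, y, and feature_names are assumptions, since the paper does not report them.

```python
import numpy as np
import shap
import tensorflow as tf
from sklearn.model_selection import train_test_split

def build_and_rank(X, y, feature_names):
    # X: imputed laboratory values, shape (n_patients, n_features); y: ICU (0/1).
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(X.shape[1],)),
        tf.keras.layers.Dense(32, activation="relu"),   # widths are assumptions
        tf.keras.layers.Dense(2, activation="softmax"), # softmax output layer
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(X_tr, y_tr, epochs=50, batch_size=6, verbose=0)
    # Rank clinical variables by mean absolute SHAP value over the test set.
    explainer = shap.DeepExplainer(model, X_tr[:100])
    shap_values = explainer.shap_values(X_te)
    importance = np.abs(shap_values[1]).mean(axis=0)    # class 1 = ICU admission
    return model, sorted(zip(feature_names, importance), key=lambda p: -p[1])
```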
Statistical analysis and performance evaluation: Statistical analysis was performed using SPSS v26 (IBM, Armonk, NY) and SAS v9.4 (SAS Institute, Cary, NC). Group comparisons of categorical variables in frequencies and percentages were performed using the Chi-squared test or Fisher exact test. Group comparisons of continuous variables in medians and interquartile ranges (IQR) used the Mann-Whitney U test. For all analyses, a p value < 0.05 was considered statistically significant. For performance evaluation of the deep neural network, data were split 80% for training and 20% for testing. Prediction performance was evaluated by the area under the curve (AUC) of the receiver operating characteristic (ROC) curve for the test data set. The average ROC curve and AUC were obtained from ten runs, and standard deviations were computed. A p value < 0.05 was taken to be statistically significant unless otherwise specified.
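A minimal scipy sketch of these group comparisons, using placeholder numbers rather than study data:

```python
import numpy as np
from scipy import stats

# Categorical variable (e.g., sex by group): chi-squared test on a 2x2 table.
table = np.array([[600, 603],   # placeholder counts, general floor group
                  [ 70,  34]])  # placeholder counts, direct ICU group
chi2, p_categorical, dof, expected = stats.chi2_contingency(table)

# Continuous variable (e.g., CRP): Mann-Whitney U test on the raw values.
rng = np.random.default_rng(0)
floor_crp = rng.lognormal(mean=1.5, sigma=0.6, size=979)  # placeholder values
icu_crp = rng.lognormal(mean=2.0, sigma=0.6, size=224)
u_stat, p_continuous = stats.mannwhitneyu(floor_crp, icu_crp,
                                          alternative="two-sided")
```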
Table 1A summarizes the demographics and comorbidities for the general floor (Group A, N=1203) versus direct ICU (Group B, N=104) groups. Compared to the general floor group, the direct ICU group had more males (p=0.005), smokers (p=0.008), diabetics (p=0.047) and patients with heart failure (p=0.016). Age, ethnicity, race, and the prevalence of hypertension, asthma, COPD, coronary artery disease, cancer, immunosuppression and chronic kidney disease were not statistically different between groups (p>0.05). Table 1B summarizes the demographics and comorbidities for the no-upgrade (Group C, N=979) versus upgrade (Group D, N=224) groups. Compared to the no-upgrade group, the upgrade ICU group had more males (p=0.005) and patients with asthma (p=0.008) but fewer patients with cancer (p=0.004). Race was different between groups. Age, ethnicity, and the prevalence of smoking, hypertension, diabetes, COPD, coronary artery disease, heart failure, immunosuppression and chronic kidney disease were not statistically different between groups (p>0.05). Results Laboratory tests: Figure 2 plots the laboratory tests for the general floor (Group A) versus direct ICU (Group B) groups at ED admission, and the no-upgrade (Group C) versus upgrade (Group D) groups at ED admission and at the time of upgrade. WBC, LDH, CRP, TNT, and ferritin were significantly different between the general floor and the direct ICU group at ED admission (red bars). Lymph, WBC, LDH, CRP, AST, Cr, ferritin, and ALT were significantly different between the no-upgrade and upgrade groups at the time of admission to the hospital (green bars). Lymph, WBC, and CRP were significantly different between the no-upgrade and upgrade groups on the day prior to upgrade (blue bars). Table 2 presents the results of Figure 2 in a simplified format for comparison. LDH, CRP and ferritin were significantly different for the general floor versus direct ICU group at ED admission, the no-upgrade versus upgrade group at ED admission, and the no-upgrade versus upgrade group at the time of upgrade (Table 3, rows 1-3). WBC stood out in that it was different for the general floor versus direct ICU group at ED admission and the no-upgrade versus upgrade group at the time of upgrade, but it was not different for the no-upgrade versus upgrade group at ED admission. WBC and CRP significantly decreased in the no-upgrade group (Table 3, 4th row). WBC, LDH, and Cr increased while lymph decreased in the upgrade group (Table 3, 5th row). Lymph, WBC, D-dimer, LDH, CRP, and Cr improved or did not deteriorate between the two time points in the no-upgrade group but deteriorated in the upgrade group (Table 3, 6th row). Ferritin, TNT, AST, BNP, procal, and ALT were not significantly different between the two time points in both the no-upgrade and upgrade groups (Table 3, 7th row). Vitals and blood gases: Figure 3 plots the vital signs and blood gases for the general floor versus direct ICU groups at ED admission, and the no-upgrade versus upgrade groups at ED admission and one day prior to upgrade. RR, SpO2, temperature, pO2, and pH were significantly different between the general floor and direct ICU groups (red bars). RR, HR, SpO2, temperature, pH, and pCO2 were significantly different between the no-upgrade and upgrade groups (green bars) at the time of admission to the hospital. HR, SpO2, DBP, SBP, and temperature were significantly different between the no-upgrade and upgrade groups (blue bars) on the day prior to upgrade. Table 3 simplifies the results of Figure 3. HR, SpO2, and temperature were significantly different for the general floor versus direct ICU group at ED admission, the no-upgrade versus upgrade group at ED admission, and the no-upgrade versus upgrade group at the time of ICU upgrade (Table 3, rows 1-3). pH stood out in that it was different for the general floor versus direct ICU group at admission and the no-upgrade versus upgrade group at the time of upgrade, but it was not different for the no-upgrade versus upgrade group at admission. For the no-upgrade group, RR, HR, DBP, and SBP significantly decreased and SpO2 and temperature increased (Table 3, 4th row), whereas for the upgrade group, HR and temperature decreased and SpO2 increased (Table 3, 5th row). Unlike the laboratory tests, none of the vitals and blood gases showed improvement in the no-upgrade group and deterioration in the upgrade group between the two time points (Table 3, 6th and 7th rows). Predictors of ICU admission The deep neural network model built using laboratory tests ranked CRP, LDH, Cr, WBC, D-dimer, and lymph (in order of importance) as the top predictors of ICU admission. This model yielded an accuracy of 86±5% and an AUC of 0.782±0.013 for the testing dataset. The deep neural network model built using laboratory tests, vitals and blood gases ranked RR, LDH, CRP, DBP, procal, WBC, D-dimer, and O2 (in order of importance) as the top predictors of ICU admission. This model yielded an accuracy of 88±7% and an AUC of 0.861±0.018 for the testing dataset. Adding vitals and blood-gas data improved prediction performance. Discussion This study investigated the clinical variables associated with direct ICU admission and upgrade to the ICU from the general floor. We found that lymphocyte count, white-blood cell count, D-dimer, lactate dehydrogenase, C-reactive protein, and creatinine (unranked) improved or did not deteriorate with time in patients who were not upgraded to the ICU, but deteriorated in patients who were upgraded to the ICU, showing temporal divergence. The deep learning predictive model using laboratory tests ranked C-reactive protein, lactate dehydrogenase, creatinine, white-blood cell count, D-dimer, and lymphocyte count (in order of importance), showing substantial overlap with the variables that exhibited temporal divergence. The predictive model using these top predictors yielded an AUC of 0.782±0.013 for predicting ICU admission on the test dataset. Adding vitals and blood-gas data further improved prediction performance (0.861±0.018). Compared to the general floor group, the direct ICU group had significantly more males, smokers, diabetics and patients with heart failure. Compared to the no-upgrade group, the upgrade ICU group had more males and patients with asthma but fewer patients with cancer. Smokers, diabetics and patients with heart failure were more likely to receive escalated care at ED admission.
Asthma was the only comorbidity associated with ICU upgrade. Some major comorbidities were important factors for ICU admission, especially at ED admission, but less so for ICU upgrade, suggesting that ED physicians might consider major comorbidities as factors indicating the need for escalated care. Many laboratory tests showed worse disease severity in the direct or upgrade ICU groups compared to the general floor and no-upgrade groups. However, we found that these laboratory tests by themselves were inadequate to reliably determine which patients required ICU admission. Oftentimes, there were no appreciable differences between those directly admitted or upgraded to the ICU and those admitted to the general floor. For example, LDH, CRP and ferritin were significantly different for the general floor versus direct ICU group at ED admission, and for the no-upgrade versus upgrade group at both ED admission and the time of ICU upgrade (Table 2, rows 1-3), suggesting they might not be useful to distinguish ICU upgrade despite being abnormal due to COVID-19. WBC stood out in that it was different for the general floor versus direct ICU group at ED admission and the no-upgrade versus upgrade group at the time of upgrade, but not for the no-upgrade versus upgrade group at ED admission, suggesting it is one of the most informative variables of ICU upgrade. Our innovative approach was thus to identify the laboratory tests that showed improvement or plateau between the two time points in the no-upgrade group but deterioration in the upgrade group. The laboratory tests that showed such temporal divergence were identified to be lymphocyte count, white-blood cell count, D-dimer, lactate dehydrogenase, C-reactive protein, and creatinine (unranked). By contrast, most vitals and blood gases did not show such temporal divergence between groups, suggesting that vital signs and blood gases might be overall less important than laboratory tests. This appears counterintuitive because vitals are readily available and are often informative in emergency room situations. Possible explanations are: i) SpO2 might be affected by supplemental oxygen inhalation, ii) RR, HR, SBP and DBP could be highly variable, iii) these vital signs were within normal physiological ranges although there were group differences. We concluded that vital signs and blood gases appear to be overall less informative in predicting ICU admission compared to laboratory tests. Deep learning analysis To further explore whether the above-mentioned laboratory variables are predictive of direct and upgrade ICU admission, we developed a deep-learning model, trained it on 80% of the data, and tested it independently on the 20% of data that the model had not seen before. Our deep neural network model identified C-reactive protein, lactate dehydrogenase, creatinine, white-blood cell count, D-dimer, and lymphocyte count (in order of importance) to be the top predictors of ICU admission. These variables showed substantial overlap with the variables exhibiting temporal divergence described above. The predictive model using these top predictors yielded an AUC of 0.782 for predicting ICU admission on the testing dataset. Adding vital and blood-gas data improved prediction performance, yielding an AUC of 0.861 for predicting ICU admission on the test dataset. It is worth noting that RR was one of the highly ranked variables. This is not surprising because COVID-19 patients usually exhibit respiratory distress.
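Schematically, the reported AUCs (mean ± SD over ten runs on held-out 20% test splits) correspond to the following evaluation loop; fit_predict_proba is a hypothetical wrapper around a classifier such as the one sketched earlier, not a function named in the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mean_test_auc(X, y, fit_predict_proba, n_runs=10):
    # Average ROC AUC on the test split over several random 80/20 splits.
    aucs = []
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                                  random_state=seed)
        p_icu = fit_predict_proba(X_tr, y_tr, X_te)  # predicted P(ICU) per patient
        aucs.append(roc_auc_score(y_te, p_icu))
    return float(np.mean(aucs)), float(np.std(aucs))
```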
Taken together, there is corroborative evidence that a few laboratory tests and vital signs are amongst the most important predictors of severe illness that warrants escalated care. Previous studies A few studies have previously identified some clinical variables to be associated with disease severity or mortality in COVID-19 infection. A few studies have attempted to identify important clinical variables that predicted critical illness and mortality using data at ED admission. There is, however, no consensus as to which clinical variables are good predictors. Jiang et al. used supervised learning and found mildly elevated alanine aminotransferase, myalgias, and hemoglobin at presentation to be predictive of severe ARDS of COVID-19 with 70% to 80% accuracy. That study had a small sample with non-uniform, heterogeneous clinical variables obtained from different hospitals (9). Ji et al. used logistic regression to predict stable versus progressive COVID-19 patients (N=208) based on whether their conditions worsened during hospitalization (10). They reported comorbidities, older age, lower lymphocyte count and higher lactate dehydrogenase at presentation to be independent high-risk factors for COVID-19 progression. Yan et al. utilized supervised machine learning to predict critical COVID-19 at ED admission using the presence of X-ray abnormality, cancer history, age, neutrophil/lymphocyte ratio, LDH, dyspnea, bilirubin, unconsciousness and a number of comorbidities (11). They reported an AUC of 0.88. By the time this paper is reviewed, more studies will have been published. Our study is innovative and unique in that we specifically addressed the need for escalated care of patients who were admitted to the general floor. Nonetheless, comparisons of different predictive models on the same datasets are warranted. Limitations This study has several limitations. This is a retrospective study carried out in a single hospital. These findings need to be replicated in a large, multi-institutional setting for generalizability. As in all observational studies, residual confounders may exist that were not accounted for in our analysis. Finally, it is important to note that the COVID-19 pandemic circumstance is unusual and evolving. The flow of patients (i.e., to the ICU) may depend on an individual hospital's patient load, practice, and available resources, which also differ amongst countries. Conclusions This study provided corroborative evidence that WBC, lymphocyte count, D-dimer, lactate dehydrogenase, C-reactive protein, and creatinine are amongst the most important predictors of severe illness requiring ICU care. This work could help frontline physicians to better manage COVID-19 patients by anticipating downstream ICU needs to more effectively allocate healthcare resources. Declarations Author contributions statements JL - collected data, analyzed data, drafted paper BM - analyzed data and drafted paper Figure 1 Patient selection flowchart. The final sample sizes included 1203 patients admitted to the general floor ("general floor", Group A) and 104 directly admitted to the ICU from the ED ("direct ICU", Group B); 979 patients remained on the general floor ("no upgrade", Group C) and 224 were upgraded from the general floor to the ICU ("upgrade ICU", Group D). Figure 2 Laboratory tests for group A (floor) and B (direct ICU) at ED admission, and group C (no upgrade) and group D (upgrade) at two time points (at ED admission and one day prior to upgrade or the equivalent time point).
SI conversion factors: To convert alanine aminotransferase and lactate dehydrogenase to microkatals per liter, multiply by 0.0167; C-reactive protein to milligrams per liter, multiply by 10; D-dimer to nanomoles per liter, multiply by 0.0054; leukocytes to ×10^9 per liter, multiply by 0.001. Error bars are SEM. * p<0.05, ** p<0.01, *** p<0.005. Sample sizes for each bar graph are shown. Note that a lower lymphocyte count, and higher values of the other laboratory variables, are associated with worse prognosis. Figure 3 Vital signs and blood gases for group A (floor) and B (direct ICU) at ED admission, and group C (no upgrade) and group D (upgrade) at two time points (at ED admission and one day prior to upgrade or the equivalent time point). Error bars are SEM. * p<0.05, ** p<0.01, *** p<0.005. Sample sizes for each bar graph are shown.
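The caption's SI conversions are simple multiplicative factors and can be applied programmatically; the source-unit comments below are our inference of the conventional units these factors imply, not statements from the paper.

```python
# SI conversion factors quoted in the figure caption above.
SI_FACTOR = {
    "alanine aminotransferase": 0.0167,  # assumed U/L -> microkatal/L
    "lactate dehydrogenase": 0.0167,     # assumed U/L -> microkatal/L
    "C-reactive protein": 10.0,          # assumed mg/dL -> mg/L
    "D-dimer": 0.0054,                   # assumed ng/mL -> nmol/L
    "leukocytes": 0.001,                 # assumed cells/uL -> 1e9 cells/L
}

def to_si(variable, value):
    # Multiply a conventional-unit value by its factor to obtain SI units.
    return value * SI_FACTOR[variable]
```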
4,768.4
2020-09-09T00:00:00.000
[ "Medicine", "Computer Science" ]
Increased static dielectric constant in ZnMnO and ZnCoO thin films with bound magnetic polarons. A novel small-signal equivalent circuit model is proposed in the inversion regime of metal/(ZnO, ZnMnO, and ZnCoO) semiconductor/Si3N4 insulator/p-Si semiconductor (MSIS) structures to describe the distinctive nonlinear frequency dependent capacitance (C-F) and conductance (G-F) behaviour in the frequency range from 50 Hz to 1 MHz. We modelled the fully depleted ZnO thin films to extract the static dielectric constant (εr) of ZnO, ZnMnO, and ZnCoO. The extracted enhancement of the static dielectric constant in magnetic n-type conducting ZnCoO (εr ≥ 13.0) and ZnMnO (εr ≥ 25.8) in comparison to nonmagnetic ZnO (εr = 8.3-9.3) is related to the electrical polarizability of donor-type bound magnetic polarons (BMP) in the several hundred GHz range (120 GHz for CdMnTe). The formation of donor-BMPs is enabled in n-type conducting, magnetic ZnO by the s-d exchange interaction between the electron spin of positively charged oxygen vacancies (V_o^+) in the BMP center and the electron spins of substitutional Mn2+ and Co2+ ions in ZnMnO and ZnCoO, respectively. The BMP radius scales with the Bohr radius, which is proportional to the static dielectric constant. Here we show how BMP overlap can be realized in magnetic n-ZnO by increasing its static dielectric constant, and we guide researchers in the field of transparent spintronics towards ferromagnetism in magnetic n-ZnO. The favourable electrical and optical properties of zinc oxide make it promising for applications in opto-electronics 1 , sensor technology 2 , UV light emitting diodes 3 , and photovoltaic devices. In the field of spintronics, special attention has been given to oxygen-deficient magnetic ZnO thin films with substitutional 3d transition metal ions [4][5][6]. Observed spontaneous magnetization has been related to the formation of stable bound magnetic polarons (BMP) 7 . The BMP concept was first introduced to explain the metal-insulator transition in oxygen-deficient EuO 8 . BMPs are formed by the s-d exchange interactions between the electron spin of a singly charged oxygen vacancy V_o^+ in the center of the BMP and the electron spins of substitutional 3d transition metal ions in a sphere with Bohr radius r_B [9][10][11]. The Bohr radius is proportional to the static dielectric constant. Due to the s-d exchange interaction between the spin of the singly charged oxygen vacancy V_o^+ and the spins of the 3d transition metal ions in the sphere with Bohr radius r_B, the spins of the 3d transition metal ions align in the same direction and sum up to the collective spin of the BMP. For example, spontaneous magnetization due to the collective spins of BMPs in CdTe with substitutional Mn ions was reported by Peter and Eucharista 12 . From magnetic n-CdS 13,14 and n-CdSe 15,16 there is abundant evidence that the electron localized at the impurity in the BMP center can induce sizable magnetization in its vicinity, often with magnetic moments exceeding 25 μB 17 . Interestingly, so far the focus of BMP research has been more on the formation of BMPs and less on the increase of the static dielectric constant in the dilute magnetic semiconductor in comparison to the semiconductor host without substitutional magnetic ions. For example, the static dielectric constant of ZnO amounts to 8.5-9.5 [18][19][20], and we have observed an increase of the static dielectric constant of ZnCoO up to 25.0 if 4 at.% Co is added 7 .
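To illustrate the stated scaling of the BMP radius with the Bohr radius, a hydrogenic-donor estimate r_B = ε_r (m_e/m*) a_0 can be evaluated as below; the effective mass m* = 0.28 m_e is an assumed literature-typical value for ZnO electrons, not a number from this paper.

```python
# Hydrogenic estimate of the donor Bohr radius, r_B = eps_r * (m_e/m*) * a_0.
A0_NM = 0.0529          # hydrogen Bohr radius in nm
M_EFF = 0.28            # assumed ZnO electron effective mass in units of m_e
for eps_r in (9.0, 13.0, 25.8):   # ZnO, ZnCoO, ZnMnO values from the abstract
    r_b = eps_r * A0_NM / M_EFF
    print(f"eps_r = {eps_r:5.1f}: r_B ~ {r_b:.2f} nm")
```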
Investigations of the dielectric constant of ZnCoO powders, modelled from the measured shift in bandgap, showed that no significant increase in the dielectric constant can be achieved. This may be due to the absence of singly ionised oxygen vacancies (V_o^+) in ZnCoO powders, which enable the s-d exchange interaction and bound magnetic polaron formation that would enhance the static dielectric constant of ZnCoO powders. In this work we determine the magnetic-species- and concentration-dependent static dielectric constant ε_r of two ZnO thin films and eight magnetic ZnO thin films with 2 at.% and 5 at.% substitutional Co2+ and Mn2+ ions from the analysis of capacitive metal/n-ZnO semiconductor/Si3N4 insulator/p-Si semiconductor (MSIS) structures. The oxygen partial pressure during growth of the magnetic n-ZnO films by pulsed laser deposition (PLD) mainly determines the concentration of oxygen vacancies, which are intrinsic donors and may form the centers of BMPs in magnetic ZnO. The intrinsic oxygen vacancy defects are donors whose concentration can be estimated from the room temperature sheet resistance. This work proposes an approach to determine the intrinsic defects from the measured sheet resistance and the volume of the bound magnetic polarons, which are the main ingredients guiding researchers towards ferromagnetism in transparent spintronics. The static dielectric constant has been modelled from the measured frequency dependent capacitance characteristics (C-F) of the MSIS structures. The simpler metal-insulator-metal (MIM) structure for the evaluation of the static dielectric constant of magnetic, n-type conducting ZnO layers would be problematic for modelling frequency dependent capacitance data. This is because even nominally insulating ZnO thin films in MIM structures are leaky insulators, and such MIM structures are not suitable for analysing non-linear frequency dependent impedance. In addition, the analysis of current-voltage (IV) and impedance (CV) data of Schottky diodes with completely depleted ZnO thin films involves too many unknown implicit parameters to extract the static dielectric constant of the ZnO thin film in a Schottky diode from the IV and CV data. Schottky diodes with n-type conducting Zn0.95Co0.05O thin films have been investigated by Kasper et al. 7 , who used a static dielectric constant of ε_r = 25 21 . It was not possible to extract the static dielectric constant of Zn0.95Co0.05O. Therefore, we chose an MSIS heterostructure in order to extract the static dielectric constant of magnetic, n-type conducting ZnO layers. Results Oxygen vacancies in n-ZnO are intrinsic donors and increase the concentration n of the electron majority charge carriers. If the carrier concentration n is small, the ZnO thin films in the metal/n-ZnO semiconductor/Si3N4 insulator/p-Si semiconductor MSIS structures are insulating. With decreasing n the carrier mobility μ increases and influences the dc transport properties of the ZnO in the MSIS structures. The ZnO, ZnCoO, and ZnMnO thin films have been grown by PLD on insulator-semiconductor (Si3N4/p-Si) structures for investigating the static dielectric constant of the magnetic ZnO thin films (Fig. 1). In the following we show how the measured impedance has been modelled and how the extracted capacitance of the magnetic ZnO thin films has been used to extract the static dielectric constant of magnetic ZnO in dependence on the species and concentration of the magnetic ions.
The polarity and strength of the bias applied to the Al/ZnO interface determine the ionization of the donor oxygen vacancies (V_o) (Fig. 1(a-c)). The mobile defects in Si3N4 are redistributed under a bias applied to the MSIS structure, namely towards the ZnO/Si3N4 interface for a large negative applied bias (accumulation, Fig. 1(a)) and towards the Si3N4/p-Si interface for a large positive applied bias (inversion, Fig. 1(c)). The flat band voltage lies in the negative bias range (Fig. S3 in the supplementary) for both ramping directions, namely from accumulation (Fig. 1a) to inversion (Fig. 1c) and from inversion to accumulation. This indicates the presence of positive charge defects in Si3N4 (Fig. S3 in the supplementary). Si3N4 contains both mobile (~) and fixed (▫) positive charge defects. The presence of fixed impurities and mobile positive charge defects in insulating Si3N4 can be recognized from the shifts of the flat band voltage and the midgap voltage in conductance and capacitance hysteresis measurements, respectively (Fig. S3). First, the distribution of mobile defects in Si3N4 is changed when the dc bias is ramped from +10 V to −15 V (accumulation in Fig. 1(a)) or when the dc bias is ramped from −15 V to +10 V (depletion-inversion in Fig. 1(b,c)). The positive fixed and mobile charge defects in the insulating Si3N4 layer cause a shift of the flat band voltage to larger negative bias. The mobility of the mobile defects in Si3N4 depends on the PLD growth temperature during deposition of the n-type semiconductor on the Si3N4 insulator, namely 550 °C for the deposition of ZnO in this work and 380 °C for the deposition of BiFeO3 in a previous work 22 . It has been reported that the threshold temperature for the formation of defects in Si3N4 lies at circa 500 °C 23 . The small signal analysis of the Al/n-ZnO semiconductor/Si3N4 insulator/p-Si semiconductor structures was performed to obtain the static dielectric constant of the completely depleted ZnO, ZnCoO, and ZnMnO thin films (Fig. 1(c)). The MSIS equivalent circuit model in strong inversion is shown in Fig. 2(b) and accounts for all RC elements in the interfaces and layers of the MSIS structure. The equivalent circuit model describes the measured nonlinear behaviour of the frequency dependent capacitance (C-F) and conductance (G-F) curves (Fig. S5 in the supplementary) of samples grown under different oxygen partial pressures, 6.5 × 10^−3 mbar (LP) and 3.91 × 10^−2 mbar (HP), for two top contact areas A1 (5.026 × 10^−7 m^2) and A2 (2.827 × 10^−7 m^2). The equivalent circuit model describes the impedance characteristics of each region of the MSIS structure, including each material and the interface regions between the materials. The small signal impedance of the MIS and of the MSIS structures is analyzed in strong inversion (s.a. supplementary). An equivalent circuit model describing the frequency dependent capacitance (C-F) and conductance (G-F) of the reference structure, namely of Al/Si3N4/p-Si/Au metal/insulator/semiconductor (MIS) structures, is presented in ref. 24 . In this work we extended the MOS equivalent circuit model describing the voltage dependent impedance (C-V and G-V) 24 to an MSIS equivalent circuit model with an n-ZnO semiconductor layer (Fig. 2(b)) (s.a. supplementary). The modelled parameters of the MIS structure (reference samples) have been used as estimates for the corresponding parameters of the MSIS structures (s.a. S3.1).
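As a minimal numerical sketch of why such a stack of parallel-RC elements produces a nonlinear frequency dependent capacitance, consider three elements in series over the measured 50 Hz-1 MHz range; the component values below are arbitrary placeholders, not the fitted parameters of the model.

```python
import numpy as np

def parallel_rc(r, c, omega):
    # Complex impedance of a resistor in parallel with a capacitor.
    return r / (1.0 + 1j * omega * r * c)

f = np.logspace(np.log10(50), 6, 400)      # 50 Hz .. 1 MHz, as measured
w = 2 * np.pi * f
z = (parallel_rc(1e6, 50e-12, w)           # e.g. depleted ZnO layer (C_ZnO)
     + parallel_rc(1e8, 30e-12, w)         # leaky Si3N4 insulator (C_i)
     + parallel_rc(1e7, 10e-12, w))        # e.g. slow surface states (C_Zss)
c_eff = np.imag(1.0 / z) / w               # effective parallel capacitance C(f)
# c_eff steps between the series limits of the capacitors as frequency rises,
# reproducing the kind of nonlinear C-F behaviour the model is fitted to.
```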
The modelling of small signal impedance of the MSIS structure always starts in the high frequency range where the leaky Si 3 N 4 does not dominate frequency dependent small signal impedance (s.a. S3.2). Afterwards the small signal impedance has been modelled in the whole frequency range (s.a. S3.3). C ZnO is the parameter which is finally used to extract the static dielectric constant of the ZnO layer in the MSIS structures. The equivalent circuit model of the MSIS structure is given in Fig. 2(b). It describes the impedance characteristics of each layer in the MSIS structure and the interface regions between each layer. The capacitor C i represents 25 and work function of Si (5.07 eV) 26 . The sharp interface between ZnO and Si 3 N 4 causes the formation of surface states in ZnO at the ZnO/Si 3 N 4 interface. The MSIS equivalent circuit model ( Fig. 2(b)) also accounts for the slow and fast surface states in ZnO with capacitance/resistance C Zss /R Zss and C Zfss /R Zfss in parallel with the depletion capacitance C ZnO in ZnO, respectively. Also, charges at the interface of top contact aluminium (Al) and ZnO are taken into account with capacitance C Al in parallel with the resistance R Al . Additional resistive elements R ZI and R IS (R ZI = R IS ) which describe the conductivity changes in the defective Si 3 N 4 at the ZnO/Si 3 N 4 and Si 3 N 4 /Si interfaces, respectively, have been incorporated into the MSIS equivalent circuit model to describe the defects in the Si 3 N 4 (S3.4). In Fig. 2, dotted vertical lines indicate the interface between each layer. We show arrows at the interface position of ZnO/Si 3 N 4 and Si 3 N 4 /ZnO to sketch that R ZI and R IS are finite and belong to the leaky Si 3 N 4 dielectric. We see a frequency dependent capacitance for Si 3 N 4 in small signal ac analysis. Also, a voltage dependent dc conduction is seen in leaky Si 3 N 4 . Therefore, Si 3 N 4 can be considered as a broken ac channel with same dc conduction and for small signal equivalent circuit. Analytically we considered a capacitor with reduction in effective thickness described by Beaumont and Jacobs model 27 . Because ac conduction does not go through the Si 3 N 4 at all frequencies and because of charge neutrality, the resistance change due to accumulation of charges at the interface ZnO/Si 3 N 4 (R ZI ) and at the interface Si 3 N 4 /p-Si (R IS ) the corresponding resistance change is the same, i.e. R ZI = R IS . Discussion The dielectric constant of the ZnO layer in the MSIS structure has been determined from the modelled C ZnO (Fig. 2(b)) using the area of the Al top contacts and the ZnO thickness from SEM measurements (Table S1 in supplementary). The static dielectric constant ε r (Table 1) calculated for ZnO, ZnCoO, and ZnMnO grown at 6.50 × 10 −3 mbar (LP), 3.91 × 10 −2 mbar (HP) oxygen partial pressure is plotted in Fig. 2(a) for contact area A1 and in Fig. 2(c) for contact area A2 (A1 = 5.026 × 10 −7 m 2 and A2 = 2.827 × 10 −7 m 2 ). The modelled static dielectric constant of ZnO ranges between 8.2 and 9.3 and is in good agreement with literature values in the range between 8.5 and 9.5. A strongly increased static dielectric constant has been deduced from C ZnO of MSIS structures with ZnCoO and ZnMnO thin films. We also see a slight increase of dielectric constant for ZnO_LP and ZnO_HP in comparison to bulk ZnO. However, it is not proven so far that the observed increase of dielectric constant in ZnO can be related with magnetism in ZnO, e.g. 
to magnetism due to the formation of bound magnetic polarons (BMPs). One could speculate that for ZnO_LP, which has been grown at low oxygen partial pressure and has a larger concentration of intrinsic donors, more donors are available as centres for BMPs. One possible type of ferromagnetic s-d exchange interaction in pure ZnO is the s-d exchange interaction between the 3d electrons of Zn ions and the electron spin of oxygen vacancies (V_o^+). Therefore, we expect an increased volume of bound magnetic polarons (Eq. (1)) in magnetic ZnO in comparison to nonmagnetic ZnO. The resistance of the ZnO has been measured, and the transport properties are classified 28,29 by ranges of resistance in Table 1. Insulating ZnO thin films have a lower ε_r, while low conducting and moderately conducting ZnO thin films have a higher ε_r, which is an indication of the dependence of the dielectric constant on the donor concentration. Here the donors are intrinsic donors formed in ZnO by oxygen vacancies (V_o), whose concentration depends on the oxygen partial pressure during PLD growth of the ZnO. One might expect a smaller dielectric constant in higher pressure (HP) samples in comparison to lower pressure (LP) samples, because the electrically polarizable BMPs represent a collective spin of the Mn2+ 3d spins in ZnMnO and of the Co2+ 3d spins in ZnCoO, mediated by the s-d exchange interaction between the 3d wavefunctions of these spins and the s wavefunction of the electron spin of V_o^+ in the centre of the bound magnetic polaron 30 , and more BMPs are expected for the larger number of oxygen vacancies in lower pressure samples. There exist three types of known native donors in ZnO, i.e., O vacancies (V_o), Zn interstitials (I_Zn), and H related defects (H_i) 31 , which play crucial roles in determining the transport and optical properties of zinc oxide. We investigated the species of shallow donors in ZnO thin films grown by pulsed laser deposition by assuming two different donors with two thermal activation energies in the ZnO. For example, in our previous work (Vegesna et al. 28) the existence of two different donors (E_a1 = 1.54 meV and E_a2 = 82.75 meV) could be proven by modelling the temperature dependent free carrier concentration. These thermal activation energies hint towards hydrogen related defects and zinc interstitials. Because the thermal activation energy of oxygen vacancies amounts to 300 meV (Hofmann et al. 32), it is not possible to prove the existence of oxygen vacancies in ZnO by temperature dependent transport measurements. Hofmann et al. used photoluminescence measurements and related the green emission from ZnO to the existence of oxygen vacancies. In a recent work, Liu et al. 33 showed that oxygen vacancies are the dominant defects in n-type conducting ZnO using oxygen isotope diffusion, which depends on the concentration of oxygen vacancies. Here we focus on native point defects providing a single electron spin for the formation of BMPs in magnetic, intrinsically n-type conducting ZnO. The only native donor in n-ZnO carrying a single electron spin is the O vacancy (V_o^+). Zinc interstitials occur exclusively in the 2+ charge state, i.e., I_Zn^{++} 34 . Therefore, only the oxygen vacancy can act as the center of a BMP in n-ZnO 37 . The BMP will increase the polarizability of magnetic ZnO. In our work, we have extracted the static dielectric constant from frequency dependent impedance data measured on ZnO coated MSIS structures. The model does not capture the frequency dependence of the dielectric constant of ZnO.
In the measured frequency region up to 1 MHz the dielectric constant of ZnO is expected to be constant. Therefore, a time dependent switching characteristic of the static dielectric constant in ZnO can only be studied if the switching is non-volatile. For example, the model could possibly be used to investigate the dynamics of the spin alignment in BMPs in magnetic n-ZnO if single magnetic field pulses of different lengths are applied before the measurement of the impedance data, in dependence on the magnetic field pulse length. Before applying the subsequent magnetic field pulse and before measuring the resulting frequency dependent impedance data, the spin alignment in the BMP has to be destroyed, e.g. by an ac magnetic field. We expect that the dynamics of the spin alignment in BMPs will depend on the volume and on the material dependent ferromagnetic s-d exchange parameter. A direct measurement of the spin dynamics in BMPs would be possible if the frequency dependence of the dielectric constant could be measured in the several hundred GHz frequency range, e.g. by microwave measurements. In the following we discuss the possible percolation of BMPs in ZnO in dependence on the static dielectric constant and the concentration of oxygen vacancies. Coey and Venkatesan 30 estimated the concentration of defects in ZnO needed for polaron percolation based on the static dielectric constant of ZnO (ε_r) and the Bohr radius (r_H). A threshold concentration of defects in ZnO of 4 × 10^19 cm^−3 has been obtained for ε_r = 4.0 and r_H = 0.76 nm from (n_crit)^{1/3} · r_H ≈ 0.26 38 . Overlapping BMPs can mediate ferromagnetism in magnetic ZnO at room temperature 39,40 if the orientation of the electron spin of the oxygen vacancy in the center of the BMP is stable and not continuously changing due to hopping transport of free carriers via oxygen vacancies.
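The quoted percolation threshold can be checked directly from the criterion (n_crit)^{1/3} · r_H ≈ 0.26:

```python
R_H_CM = 0.76e-7                        # Bohr radius r_H = 0.76 nm in cm
n_crit = (0.26 / R_H_CM) ** 3           # threshold defect concentration
print(f"n_crit = {n_crit:.1e} cm^-3")   # -> about 4.0e19 cm^-3, as quoted
```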
We describe the frequency dependent capacitance (C-F) behaviour of the Al/n-ZnO semiconductor/Si3N4 insulator/p-Si semiconductor MSIS structure with an equivalent circuit model in the strong inversion regime in which each layer and interface is described. The static dielectric constant of ZnO has been extracted from the modelled capacitance of the ZnO layer. The dielectric constant of ZnO lies in the expected range from 8.1 to 9.3. We observed a strongly increased static dielectric constant in magnetic ZnO in dependence on the concentration of magnetic ions and on the concentration of oxygen vacancies. The dielectric constant of ZnMnO with 5 at.% Mn is 28.3 and with 2 at.% Mn is 31.8. The dielectric constant of ZnCoO with 5 at.% Co is 17.7 and with 2 at.% Co is 22.0. The ferromagnetic s-d exchange interaction between the electron spin of the donors (V_o^+) in the center of the bound magnetic polaron (BMP) and the electron spins of the substitutional magnetic ions is partially superimposed by the anti-ferromagnetic coupling between nearest-neighbour substitutional magnetic ions. With increasing concentration of substitutional magnetic ions it is expected that the anti-ferromagnetic coupling, which excludes ferromagnetic s-d coupling, increases and weakens the formation of BMPs. This is the possible reason why we see a larger static dielectric constant in magnetic ZnO with 2 at.% substitutional magnetic ions in comparison to magnetic ZnO with 5 at.% substitutional magnetic ions. The observed trend is in agreement with the observations of Franco et al. 41 , who observed a maximum of the static dielectric constant in powdered ZnCoO around 2 at.% Co. We related the increased static dielectric constant in magnetic ZnO to the formation of partially overlapping bound magnetic polarons and their contribution to the electrical polarizability of magnetic ZnO. Finally, we estimated the contribution of the BMPs in ZnO to the polarizability of ZnO. The resonance of BMPs typically lies in the several hundred GHz range. Here we chose the same resonance for BMPs in magnetic ZnO as shown for the magnetic semiconductor CdMnTe, where an additional absorption due to BMPs has been observed at 120 GHz by Raman shift measurements (4 cm^−1) 42 . We assumed an additional polarizability of magnetic ZnO due to BMPs and added it to the modelled imaginary part (ε_2) of the dielectric constant (Fig. 4(b,d,f)): ε_2 = ε_2^BMP + ε_2^Phonon + ε_2^Electronic, where ε_2^BMP is the contribution due to BMPs, ε_2^Phonon is the contribution due to phonons in ZnO 43 and ε_2^Electronic is the contribution due to electronic transitions in ZnO 44 . ε_2^BMP has been described with a Lorentz oscillator model of the form ε_2^BMP(ω) = N_peak (Γ/2)^2 / ((ω − ω_o)^2 + (Γ/2)^2), where ω_o is the BMP peak position (ω_o = 120 GHz), N_peak is the peak strength and Γ is the FWHM. We calculated the real part (ε_1) of the dielectric constant (Fig. 4(a)) using the Kramers-Kronig relation, ε_1(ω) = 1 + (2/π) P ∫_0^∞ ω′ ε_2(ω′)/(ω′^2 − ω^2) dω′, (4) for ZnO with the electronic 44 and phonon 43 contributions to ε_2. Additionally, the FWHM of the Lorentz oscillator with a fixed peak strength (N_peak = 350) and fixed peak position has been varied to change the contribution of ε_2^BMP to ε_2 (Fig. 4(d,f)), and the corresponding ε_1 of magnetic ZnO (Fig. 4(c,e)) has been derived using the Kramers-Kronig relation (Eq. (4)), with Γ adjusted until the static dielectric constant ε_1 from Eq. (4) equalled the modelled static dielectric constant from the impedance measurements (ε_r). We expect that the dielectric constant peak position can be tuned via the material dependent ferromagnetic s-d exchange parameter. Here we rather focused on the amplitude of the additional absorption ε_2^BMP in the several hundred GHz range. We expect that the amplitude can be tuned via the volume of the BMP. The dielectric constant shown in Fig. 4 represents that of the magnetic ZnO layer in the MSIS structure. So far, we have not directly investigated the properties of BMPs in the several hundred GHz range. ZnO coated Si3N4/p-Si structures (MSIS) with nominal concentrations of 2 at.% and 5 at.% Co2+ and Mn2+ ions were grown at 6.50 × 10^−3 mbar and 3.91 × 10^−2 mbar oxygen partial pressure by pulsed laser deposition (PLD). Voltage dependent capacitance (C-V) and frequency dependent capacitance (C-F) characteristics have been measured. The thicknesses of the ZnO layer and the Si3N4 are obtained from scanning electron microscopy (SEM) cross section images. The measured C-F characteristics of the ZnO coated MSIS structures in the strong inversion regime show a nonlinear behaviour of the capacitance. To describe the nonlinear behaviour of the C-F characteristics we proposed an equivalent circuit model in the strong inversion regime. The RC equivalent circuit model describes each region of the Al/ZnO/Si3N4/p-Si/Au MSIS structure, i.e., metal, insulator and semiconductor, including the interface regions between the materials. The dielectric constant is obtained from the modelled ZnO capacitance value and the thickness of the ZnO from the SEM measurements. The dielectric constant for ZnO is obtained in the expected range ε_r = 8.17-9.34. We determined the static dielectric constant in magnetic, n-type conducting ZnO thin films with different Co and Mn concentrations.
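The oscillator-plus-Kramers-Kronig procedure can be sketched numerically as below; the fixed-peak-height Lorentzian is our assumed reading of the described oscillator (peak strength N_peak = 350 at 120 GHz, FWHM Γ varied), and the integral is evaluated at ω = 0 to obtain the static BMP contribution to ε_1.

```python
import numpy as np

def eps2_bmp(w, w0, n_peak, gamma):
    # Lorentzian of peak height n_peak and FWHM gamma centred at w0.
    return n_peak * (gamma / 2) ** 2 / ((w - w0) ** 2 + (gamma / 2) ** 2)

def static_eps1_contribution(w, eps2):
    # Kramers-Kronig relation (Eq. (4)) evaluated at omega = 0:
    # eps_1(0) - 1 = (2/pi) * integral of eps_2(w') / w' dw'.
    return (2.0 / np.pi) * np.trapz(eps2 / w, w)

w0 = 2 * np.pi * 120e9                     # BMP resonance at 120 GHz (rad/s)
w = np.linspace(1e9, 1e13, 200_000)        # integration grid
for frac in (0.02, 0.05, 0.10):            # vary the FWHM, as in the text
    de1 = static_eps1_contribution(w, eps2_bmp(w, w0, 350.0, frac * w0))
    print(f"Gamma = {frac:.2f}*w0: BMP contribution to eps_1(0) ~ {de1:.1f}")
# A FWHM of a few percent of w0 yields a static contribution of order 10-20,
# the magnitude of the enhancement reported for the magnetic ZnO films.
```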
With increasing concentration of oxygen vacancies at the surface, bound magnetic polarons formed with an oxygen vacancy as nucleus can overlap and provide ferromagnetic behaviour at room temperature 45. Davies et al. 46 and Kaspar et al. 7 suggest that ferromagnetic features arising from bound magnetic polarons can be used to develop magnetic sensors and non-volatile memories in spintronic devices, which are potentially expected to be energy-efficient. The application of a BFO-coated Si3N4 MIS structure as a photocapacitive detector has been studied by You et al. 22. Because ZnO is transparent and because the ZnO-coated Si3N4 MIS structure shows similar capacitance behaviour as the BFO-coated Si3N4 MIS structure, the ZnO-coated Si3N4 MIS structure is expected to reveal similar photocapacitive functionality to detect the intensity and color of visible light by impedance measurements. In addition, we suggest using the ZnO-coated Si3N4 MIS capacitor as a magneto-capacitive detector, where the presence of a magnetic field can be detected via the increase of the static dielectric constant due to the formation of BMPs with aligned spins of magnetic ions. We propose to study the change of the static dielectric constant in magnetic transparent conducting oxides (TCO) 47,48 by preparing metal/n-TCO/insulator/p-Si MSIS structures and by measuring and modelling the impedance in strong inversion. It is expected that other magnetic n-type conducting TCOs also reveal an increase of the static dielectric constant due to the formation of bound magnetic polarons and due to the contribution of BMPs to the polarizability of magnetic TCOs. Bound magnetic polarons strongly influence the transport, magnetization and magnetooptical properties of magnetic semiconductors within the confined volume of the BMPs. For example, ferromagnetic behaviour in magnetic ZnO at room temperature can be related to BMPs 45,49, and it has been suggested that ferromagnetic behavior related to BMP formation in magnetic n-type conducting TCOs can be used in developing magnetic sensors and non-volatile memories in spintronics devices with a low energy consumption 7,50. If BMPs coalesce, the strongest effect of BMPs on the transport, magnetization and magnetooptical properties 51 of magnetic semiconductors can be expected even at room temperature.

Methods

First, alpha silicon nitride (α-Si3N4) thin films with a nominal thickness of about 88 nm were deposited in a Roth and Rau AK1000 microwave PECVD reaction chamber. Afterwards, ZnO, ZnCoO, and ZnMnO thin films with nominal concentrations of 2 at.% and 5 at.% Co and Mn have been grown on top of the Si3N4/p-Si MIS structures by PLD, using 700 KrF excimer laser pulses at a repetition rate of 1 Hz and an energy density of 1.60 J cm⁻² to ablate ZnO, ZnMnO, and ZnCoO ceramic targets at a substrate temperature of 550 °C with a constant oxygen flux of 4.50 sccm. Two different oxygen partial pressures, 6.50 × 10⁻³ mbar and 3.91 × 10⁻² mbar, have been applied to control the concentration of oxygen vacancies in the magnetic ZnO thin films. The bottom of the p-Si has been coated with gold (Au) using dc magnetron sputtering at room temperature to form a bottom contact to the MIS structure. Circular dc magnetron sputtered aluminium dots of different sizes have been prepared on the ZnO films to form the top contacts on the MIS structure.
For the impedance measurements we have chosen Al contacts with an area of 5.026 × 10⁻⁷ m² (A1) and of 2.827 × 10⁻⁷ m² (A2). The structural properties of the ten investigated metal/n-ZnO semiconductor/Si3N4 insulator/p-Si semiconductor (MSIS) structures, mainly the thicknesses of the n-ZnO and Si3N4 layers, have been determined using scanning electron microscopy (SEM) cross-section measurements (Sect. S1). The impedance of the MSIS structures with ten different ZnO, ZnCoO, and ZnMnO thin films grown on Si3N4/p-Si was measured versus voltage (V) and versus frequency (F) using an Agilent 4294A precision impedance analyzer. We determined the bias range for the different regimes in the MSIS structure (accumulation, depletion, inversion, strong inversion) by voltage dependent impedance measurements (Sect. S2). The nonlinear behaviour of the frequency dependent capacitance (C-F) and conductance (G-F) of all MSIS structures in strong inversion has been modelled with an equivalent circuit model which accounts for all RC elements in the interfaces and layers of the MSIS structure. The static dielectric constant of n-ZnO has been extracted from the modelled capacitance (C_ZnO) of the completely depleted n-ZnO layer of the MSIS structure (Sect. S3).
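For illustration, extracting a static dielectric constant from a modelled layer capacitance amounts to inverting the parallel-plate relation C = ε₀ε_r A/d. The sketch below uses the contact area A1 quoted above; the capacitance and thickness values are assumed placeholders, not the paper's measured data.

```python
EPS0 = 8.854e-12          # vacuum permittivity, F/m

def static_dielectric_constant(c_layer, thickness, area):
    """Invert the parallel-plate relation C = eps0 * eps_r * A / d."""
    return c_layer * thickness / (EPS0 * area)

A1 = 5.026e-7             # m^2, contact area A1 from the text
C_ZnO = 4.0e-10           # F, assumed modelled ZnO capacitance (placeholder)
d_ZnO = 100e-9            # m, assumed ZnO thickness from SEM (placeholder)

print(static_dielectric_constant(C_ZnO, d_ZnO, A1))  # ≈ 9, within 8.17-9.34
```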
6,844.6
2020-04-21T00:00:00.000
[ "Physics", "Materials Science" ]
Short-term synaptic depression can increase the rate of information transfer at a release site The release of neurotransmitters from synapses obeys complex and stochastic dynamics. Depending on the recent history of synaptic activation, many synapses depress the probability of releasing more neurotransmitter, which is known as synaptic depression. Our understanding of how synaptic depression affects the information efficacy, however, is limited. Here we propose a mathematically tractable model of both synchronous spike-evoked release and asynchronous release that permits us to quantify the information conveyed by a synapse. The model transits between discrete states of a communication channel, with the present state depending on many past time steps, emulating the gradual depression and exponential recovery of the synapse. Asynchronous and spontaneous releases play a critical role in shaping the information efficacy of the synapse. We prove that depression can enhance both the information rate and the information rate per unit energy expended, provided that synchronous spike-evoked release depresses less (or recovers faster) than asynchronous release. Furthermore, we explore the theoretical implications of short-term synaptic depression adapting on longer time scales, as part of the phenomenon of metaplasticity. In particular, we show that a synapse can adjust its energy expenditure by changing the dynamics of short-term synaptic depression without affecting the net information conveyed by each successful release. Moreover, the optimal input spike rate is independent of the amplitude or time constant of synaptic depression. We analyze the information efficacy of three types of synapses for which the short-term dynamics of both synchronous and asynchronous release have been experimentally measured. In hippocampal autaptic synapses, the persistence of asynchronous release during depression cannot compensate for the reduction of synchronous release, so that the rate of information transmission declines with synaptic depression. In the calyx of Held, the information rate per release remains constant despite large variations in the measured asynchronous release rate. Lastly, we show that dopamine, by controlling asynchronous release in corticostriatal synapses, increases the synaptic information efficacy in the nucleus accumbens.

Let X be a discrete random variable with a finite sample space {x_1, x_2, ..., x_n}. The entropy of X, denoted by H(X), is the amount of uncertainty about the value of X and is calculated by

$H(X) = -\sum_{i=1}^{n} P(x_i)\,\log_2 P(x_i)$, (1)

where P(·) is the probability measure. For two discrete random variables X and Y, the conditional entropy of Y given X, denoted by H(Y|X), describes the remaining uncertainty about the value of Y provided that the value of X is known. The conditional entropy is derived from

$H(Y|X) = -\sum_{i}\sum_{j} P(x_i, y_j)\,\log_2 P(y_j \mid x_i)$. (2)

The mutual information between the two random variables X and Y is defined by

$I(X;Y) = H(Y) - H(Y|X)$ (3)

and quantifies the amount of information that can be obtained from X about Y. The notions of entropy and mutual information are extended to random processes as well. Let $X = \{X_i\}_{i=1}^{\infty}$ be a discrete time random process, where X_i is the random variable corresponding to the value of X at time i. We represent by X^n the first n instances of X,

$X^n \triangleq \begin{cases} (X_1, X_2, \ldots, X_n) & \text{if } n > 0 \\ 0 & \text{if } n \le 0 \end{cases}$ (4)

The entropy rate of X is defined by

$H(\mathcal{X}) = \lim_{n\to\infty} \frac{1}{n} H(X^n)$, (5)

if the limit exists. The mutual information rate of two random processes X and Y is defined by

$I(\mathcal{X};\mathcal{Y}) = \lim_{n\to\infty} \frac{1}{n} I(X^n; Y^n)$, (6)

provided that the limit exists. Assume that the random processes X and Y are the input and output of a communication channel.
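To make these definitions concrete before they are applied to release processes, here is a minimal Python sketch computing H(Y), H(Y|X) and I(X;Y) for a toy binary channel; the joint probability values are arbitrary placeholders, not taken from the paper.

```python
import numpy as np

# Joint distribution P(X, Y) of a toy binary channel (placeholder values).
P = np.array([[0.40, 0.10],    # X = 0
              [0.05, 0.45]])   # X = 1

def H(p):
    """Entropy (in bits) of a probability vector, ignoring zero entries."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

Px = P.sum(axis=1)             # marginal distribution of X
Py = P.sum(axis=0)             # marginal distribution of Y

# H(Y|X) = sum_x P(X = x) * H(Y | X = x), as in Eq. (2)
H_Y_given_X = sum(Px[i] * H(P[i] / Px[i]) for i in range(len(Px)))
I_XY = H(Py) - H_Y_given_X     # mutual information, as in Eq. (3)
print(H(Py), H_Y_given_X, I_XY)
```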
Let E_i be the (random) amount of energy that is consumed by the channel at time i. The energy-normalized information rate of the channel is defined by

$\tilde{I}(\mathcal{X};\mathcal{Y}) = \lim_{n\to\infty} \frac{I(X^n;Y^n)}{\sum_{i=1}^{n} E(E_i)}$, (7)

where E(·) is the expected value.

B. Recovery time constant and synaptic information efficacy

The speed of recovery from depression modulates the rate of information transfer at a release site. Slower recovery expands the impact range of the release history and, consequently, increases the effective memory length of the release site (Fig. S1A). For a synapse with a recovery time constant of τ = 100 msec (corresponding to e = 0.1), the relative variation of the mutual information rate caused by different initial states (various seed values u_0 in the algorithm in Fig. 1B) reduces to 10% after 160 msec. The effective memory length is reduced to 70 msec for a synapse with faster recovery, τ = 28 msec (equivalent to e = 0.3). Faster recovery increases both the mutual information rate and the energy-normalized information rate of the release site (Fig. S1B). The mutual information rate changes substantially with variations of the recovery coefficient, while the energy-normalized information rate is relatively robust. Specifically, by increasing the recovery coefficient, the capacity of the release site is attained at higher input spike rates. But the input spike rate that results in the optimal energy-normalized information rate is practically independent of the recovery time constant. From Fig. S1B and Fig. 3B, we conclude that release sites with different depression dynamics can work at their optimal energy-rate regime with the same input spike rate. If the recovery of the synchronous spike-evoked release is faster than the recovery of asynchronous release, then depression can increase the mutual information rate (Fig. S1C) and the energy-normalized information rate of the release site (Fig. S1D). We show that the differences in recovery coefficient among synapses (and release sites) create three distinct functional categories for short-term depression (Fig. S1E).

C. Model parameters

We set the parameters of the MRO model by establishing a correspondence with an updated version of the stochastic model of depression [1]. The release probability (synchronous or asynchronous) follows a first-order differential equation in the stochastic model,

$\frac{dp_r}{dt} = \frac{p_0 - p_r}{\tau} - u\, p_r\, \delta(t - t_r)$, (8)

where p_r, τ, p_0, u, and t_r are the release probability, recovery time constant, default (maximum) release probability, depression coefficient and the release timing. In the absence of release, the release probability recovers exponentially to its default value. Assuming that the release probability at time t = 0 is p_in,

$p_r(t) = p_0 - (p_0 - p_{in})\, e^{-t/\tau}$. (9)

Correspondingly, if we assume that in the MRO model the release probability at time index i = 0 is p_in, then it can be easily shown that after k steps of recovery (k successive quiescent intervals),

$p_k = p_0 - (p_0 - p_{in})(1 - e)^k$, (10)

where e is the recovery coefficient of the MRO model. The discrete time k and the continuous time t are related through the time unit, Δ, of the MRO model,

$t = k\Delta$. (11)

To have similar recovery dynamics in the two models,

$(1 - e)^k = e^{-t/\tau}$, (12)

and by substituting k from (11),

$e = 1 - e^{-\Delta/\tau}$. (13)

This equation shows the relationship between the recovery coefficient of the MRO model and the recovery time constant of the synapse. For example, if the recovery time constant of a synapse is τ = 100 msec, then for a time unit of Δ = 10 msec, the recovery coefficient of the MRO model should be e = 0.1.
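As a small numerical check of this correspondence (Eqs. (10)-(13) as reconstructed above), the following Python sketch converts recovery time constants into MRO recovery coefficients and steps the recovery forward; it reproduces the e ≈ 0.1 and e ≈ 0.3 values quoted in the text.

```python
import math

def recovery_coefficient(tau_ms, dt_ms=10.0):
    """MRO recovery coefficient e for recovery time constant tau, Eq. (13)."""
    return 1.0 - math.exp(-dt_ms / tau_ms)

print(round(recovery_coefficient(100.0), 3))   # 0.095 -> e ≈ 0.1 in the text
print(round(recovery_coefficient(28.0), 3))    # 0.3   -> e ≈ 0.3 in the text

def recover(p_in, p0, e, k):
    """Release probability after k quiescent steps, Eq. (10)."""
    return p0 - (p0 - p_in) * (1.0 - e) ** k

# Most of the gap to p0 has closed after 16 steps (~160 msec for dt = 10 msec)
print(recover(p_in=0.2, p0=0.7, e=0.1, k=16))  # ≈ 0.61
```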
After a release at time t_r, the release probability of the synapse (described by (8)) decreases to $(1-u)\, p_r(t_r)$ (14). Therefore, the depression multiplier of the MRO model, c, can be derived by matching this one-step decrease of the release probability in the two models (15). However, estimation of c from (15) leads to an estimation bias, due to the slight recovery of the release probability during a single time bin. Using a first-order approximation of this within-bin recovery (16) and assuming a uniform distribution for p_r, we can derive a more accurate estimation for the depression multiplier (17).

The memory length, L, of the MRO model represents the number of previous release outcomes that are used to determine the current release probability of the synapse. We define the effective memory length of a synapse, L_eff, as the minimum value of L for which the mutual information rate of the synapse becomes independent from its past (characterized by the seed value in the algorithm in Fig. 1B). We can find the effective memory of a synapse (in milliseconds) as L_eff × Δ. For example, if the recovery time constant of a synapse is 100 msec (e = 0.1) and its depression coefficient is u = 0.67 (c = 0.5), then the effective memory of the synapse is approximately 160 msec (corresponding to L_eff = 16). We use the context tree weighting algorithm [2] to calculate numerically the information rate of the synapse in a classical, stochastic model of depression [1]. In Fig. S2, we show that by increasing the memory length, L, the analytical mutual information rate of the MRO model converges to the numerical information rate estimates of the classical stochastic model of depression.

D. Proof of Theorems

Proof of Theorem 1: By definition of the mutual information rate (6), and using the chain rule [3], the rate can be decomposed term by term. For integer values a, b, we define

$X_a^b \triangleq (X_a, X_{a+1}, \ldots, X_b)$. (18)

The random variable Y_i depends on the release probabilities at time i, p_i and q_i, and the input spike variable at time i, X_i. Since p_i and q_i are functions of the last L release outcomes, $Y_{i-L}^{i-1}$, the conditional entropies can be decomposed accordingly.

(Figure S2: Comparison between the analytical mutual information rate of the MRO model and the numerical estimation of the information rate of a classical stochastic model of depression [1]. The mutual information rate is plotted as a function of the synchronous spike-evoked release probability (p_0) for various values of the memory length, L, and input spike rates. The recovery time constant of the stochastic model is 100 msec and the corresponding recovery coefficient of the MRO model is e = 0.1. In this synapse model, the asynchronous release probability is zero, and the release site is inactivated after each release (i.e., c = 0).)

Applying the chain rule to H(Y^n|X^n), and with a similar argument for H(Y^n), we obtain, from (19), (22) and (24) and based on the definition of conditional mutual information, an expansion of the mutual information rate over the states of the release site. The sample set of $Y_{i-L}^{i-1}$ consists of all the binary vectors of length L. For the sake of notational simplicity, instead of the binary vector we use its decimal representation, j. Let R_j represent the mutual information rate of the release site at state j (26). The release probabilities of the release site at state j, denoted by p(j) and q(j), are calculated from the algorithm in Fig. 1B. It can be easily shown that each state of the release site can transit to two other states, and the transition probabilities are fully determined by the current state (Fig. 1D). Therefore, a Markov chain is used to model the state transitions of the release site. The transition matrix of the Markov chain, denoted by M, is a 2^L × 2^L matrix and has two non-zero entries on each row. The pattern of the non-zero entries of M is shown in Fig. S3A.
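To illustrate the shift-register structure of M just described, here is a hedged Python sketch that assembles the 2^L × 2^L transition matrix for a small L and obtains its stationary distribution by power iteration (anticipating Eq. (37) below). The per-state release probability function is a made-up placeholder: in the paper the probabilities come from the algorithm in Fig. 1B, which is not reproduced here.

```python
import numpy as np

L = 4                                  # memory length (small, for illustration)
n_states = 2 ** L

def release_prob(state):
    """Placeholder per-state release probability; the paper's values come
    from the algorithm in Fig. 1B (not reproduced here)."""
    p0, c = 0.7, 0.5
    return p0 * c ** sum(state)        # more past releases -> more depression

def bits(j):
    """Binary vector (a_1, ..., a_L) for the decimal state index j."""
    return [(j >> k) & 1 for k in range(L - 1, -1, -1)]

# Shift-register structure: (a_1, ..., a_L) -> (a_2, ..., a_L, y), y in {0, 1},
# so M has exactly two non-zero entries per row.
M = np.zeros((n_states, n_states))
for j in range(n_states):
    r = release_prob(bits(j))
    next0 = (j << 1) & (n_states - 1)  # append outcome y = 0
    M[j, next0] = 1.0 - r
    M[j, next0 | 1] = r                # append outcome y = 1

# Power method: iterate x <- x M until convergence to the stationary pi.
pi = np.full(n_states, 1.0 / n_states)
for _ in range(1000):
    pi = pi @ M
print(pi.sum(), pi.argmax())           # pi sums to 1; most probable state
```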
It is shown in [4] that for an irreducible aperiodic finite-state Markov chain, regardless of the initial state, the probability of each state j will converge to a steady-state probability, denoted by π_j. We prove in Lemma 1 that the Markov chain of the release site in the MRO model is irreducible and aperiodic; therefore, the state probabilities converge to the stationary distribution. By interchanging the summations in (27), and using (30) together with the Cesàro mean theorem, the claim then follows from (34) and (35). We note that the stationary probability vector $\vec{\pi} = (\pi_0, \ldots, \pi_{2^L-1})$ is calculated using the power method. We start with a random probability vector $\vec{x}_0$, and in each iteration i ≥ 0 we calculate

$\vec{x}_{i+1} = \vec{x}_i\, M$. (37)

Then we substitute $\vec{x}_i$ with $\vec{x}_{i+1}$ and repeat (37). It is easily shown that the probability vector $\vec{x}_i$ converges to $\vec{\pi}$.

Lemma 1. In the MRO model, the Markov chain of the release site is irreducible and aperiodic.

Proof. Let j and j′ be two arbitrary states of the release site corresponding to the binary vectors (a_1, a_2, ..., a_L) and (b_1, b_2, ..., b_L). We show that the state j′ is always accessible from the state j in the Markov chain M. Assume that the Markov chain is in the state j at time i = 1. The release site can transit to the state (a_2, a_3, ..., a_L, b_1) with a non-zero probability P_1(b_1). Similarly, at each time i, 1 ≤ i ≤ L, the release site can transit from the state (a_i, ..., a_L, b_1, ..., b_{i-1}) to (a_{i+1}, ..., a_L, b_1, ..., b_i) with the non-zero probability P_i(b_i). Therefore, the probability of transition from (a_1, a_2, ..., a_L) to (b_1, b_2, ..., b_L) after L time steps is greater than or equal to $\prod_{i=1}^{L} P_i(b_i)$. This proves that the state j′ is accessible from the state j, and consequently, M is irreducible. Moreover, there is a non-zero transition probability from the state j = 0 to itself. Since every irreducible finite-state Markov chain with a self-loop is aperiodic [4], we conclude that M is aperiodic and the proof is complete.

Proof of Theorem 2: The energy-normalized information rate of the release site is defined in (7) (refer to Section A), where E_i is the energy consumed by the release site to release a vesicle at time i. By assumption, one unit of energy is consumed at each release; therefore, the expected energy consumed in a time step equals the probability of a release in that time step, and the result follows from (38), (40) and Theorem 1.

E. Quantized release probabilities

In the MRO model, the release probabilities of the release site at time i are determined by the last L release outcomes, $Y_{i-L}^{i-1}$. Alternatively, the release probabilities at time i can be derived recursively from the release probabilities and the release outcome at time i − 1. In this recursive approach, the state of the release site at time i is specified by the pair (P_i, Q_i), where P_i and Q_i are the random variables corresponding to the synchronous spike-evoked and asynchronous release probabilities. Since P_i and Q_i are continuous variables, the number of states goes to infinity with increasing i. To avoid the limitations of infinite-state models, we quantize the release probabilities. For a quantization level of δ, the sample space of the release probabilities is defined by

$S = \{k\delta : k = 0, 1, \ldots, [1/\delta]\}$,

where [·] is the floor function. Let $[x]_S$ represent the largest entry in S that is less than or equal to x, i.e., $[x]_S = \max\{y : y \in S,\ y \le x\}$. Also assume that p_0, q_0 ∈ S are the default (maximum) synchronous spike-evoked and asynchronous release probabilities of the release site.
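To make the quantization concrete, here is a tiny Python sketch of the sample space S and the operator [x]_S as defined above; the δ value is an arbitrary example.

```python
def sample_space(delta):
    """S = {k * delta : k = 0, 1, ..., floor(1/delta)}."""
    return [k * delta for k in range(int(1.0 / delta) + 1)]

def floor_S(x, delta):
    """[x]_S: the largest element of S that is <= x."""
    k = min(int(x / delta), int(1.0 / delta))
    return k * delta

print(sample_space(0.1))     # [0.0, 0.1, ..., 1.0] (up to float rounding)
print(floor_S(0.37, 0.1))    # 0.3 (up to float rounding)
```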
The quantized release probabilities at time i + 1 are calculated recursively from the release outcome at time i: after a release, $(P_{i+1}, Q_{i+1}) = ([cP_i]_S, [dQ_i]_S)$, and after a quiescent time bin, $(P_{i+1}, Q_{i+1}) = ([P_i + e(p_0 - P_i)]_S, [Q_i + f(q_0 - Q_i)]_S)$. We refer to this model as the binary asymmetric channel with Quantized Release Probabilities, abbreviated by QRP. We note that since the synchronous spike-evoked and asynchronous release probabilities do not exceed p_0 and q_0, the sample space of (P, Q) can be reduced from S × S to S_P × S_Q, where S_P and S_Q contain only the release probabilities reachable from p_0 and q_0 (Fig. S3B). For each state (p, q), the stationary probability, π_(p,q), is calculated using the power method. Also, the mutual information rate of the binary asymmetric channel, R_(p,q), is derived from (53).

Theorem. Let R_D and Ĩ_D be the mutual information rate and energy-normalized information rate of the release site in the QRP model. Then

$R_D = \sum_{(p,q)\in S_P\times S_Q} R_{(p,q)}\,\pi_{(p,q)}$ (54)

and

$\tilde{I}_D = \frac{\sum_{(p,q)\in S_P\times S_Q} R_{(p,q)}\,\pi_{(p,q)}}{\sum_{(p,q)\in S_P\times S_Q} (\alpha p + \bar{\alpha} q)\,\pi_{(p,q)}}$. (55)

Proof. Let X and Y be the input and output random processes of the QRP model (the top panel in Fig. S3B). By definition, and using the chain rule for H(Y^n) and H(Y^n|X^n), the mutual information rate can be expanded term by term. The vector of release probabilities, (P_i, Q_i), can be calculated from Y^{i-1}. Also, given X_i and (P_i, Q_i), the output Y_i is conditionally independent of the earlier inputs and outputs; a similar statement holds given (P_i, Q_i) alone. Hence, from the definition of conditional mutual information, the term $I(X_i; Y_i \mid (P_i, Q_i) = (p, q))$ is the mutual information rate of the binary asymmetric channel with release probabilities p and q, which is denoted by R_(p,q). Therefore, together with (56), (63) and (64), and by interchanging the summations and moving the limit inside, the expansion reduces to a sum over the states. The state of the release site at time i is given by (P_i, Q_i), and the state transitions of the release site are modeled by a Markov chain with a transition matrix M of order |S_P| × |S_Q|. We prove in Lemma 2 that the Markov chain M is uni-chain and its recurrent class is aperiodic. Therefore, it has a unique stationary distribution and the probability of each state (p, q) converges to its stationary probability π_(p,q) [4]. Applying the Cesàro mean theorem to (68), and proceeding as in the proof of Theorem 2, completes the argument.

Lemma 2. In the QRP model, the Markov chain of the release site is uni-chain and its recurrent class is aperiodic.

Proof. To show that the transition matrix of the QRP model is uni-chain, we need to prove that there exists only one recurrent class in M and that the other states (if any) are transient. Let (a_0, b_0) ≠ (0, 0) be an arbitrary state in M. Consider the path (a_0, b_0) → (a_1, b_1) → (a_2, b_2) → ..., in which every state (a_i, b_i) transits to its depressed state, i.e., $(a_{i+1}, b_{i+1}) = ([ca_i]_S, [db_i]_S)$. As long as (a_i, b_i) ≠ (0, 0), the transition probability to the depressed state, (a_{i+1}, b_{i+1}), is positive. Moreover, if a_i > 0 then a_{i+1} < a_i, and if b_i > 0 then b_{i+1} < b_i. Since the number of states in the Markov chain is finite and the sequences (a_i)_{i∈N} and (b_i)_{i∈N} are monotonically decreasing to zero, there will be a large enough integer N such that (a_N, b_N) = (0, 0). Therefore, there is a path from (a_0, b_0) to the state (0, 0) in M. This implies that the state (0, 0) is accessible from every state in the Markov chain. Now assume that there are two recurrent classes C_1 and C_2 in the Markov chain. Since the states in C_1 have access to (0, 0), from the definition of recurrent states, (0, 0) ∈ C_1. With a similar argument, (0, 0) ∈ C_2. Therefore, C_1 = C_2, and there is only one recurrent class in M, meaning that M is a uni-chain. Now we show that the period of the recurrent class is equal to one. Since the transition probability to the recovered state is always positive in the Markov chain M, we can consider the path (a_0, b_0) = (0, 0) → (a_1, b_1) → (a_2, b_2) → ...,
in which every state (a_i, b_i) transits to its recovered state, i.e., $(a_{i+1}, b_{i+1}) = ([a_i + e(p_0 - a_i)]_S, [b_i + f(q_0 - b_i)]_S)$. It is clear that for each i, a_{i+1} ≥ a_i and b_{i+1} ≥ b_i. Since the number of states is finite, there exists a finite integer N such that $(a_{N+1}, b_{N+1}) = (a_N, b_N)$. Therefore, the state (a_N, b_N) transits to itself with positive probability. On the other hand, (a_N, b_N) is accessible from (0, 0), meaning that it belongs to the recurrent class. Since a recurrent state with a loop is aperiodic [4], we conclude that (a_N, b_N), and consequently the recurrent class of M, is aperiodic and the proof is complete.

F. Comparison between the two models of short-term depression

We presented two models to calculate the mutual information rate of the release site during short-term depression: the MRO model and the QRP model. The mutual information rates and energy-normalized information rates of the two models are similar (compare Fig. S4A to Fig. 3B). We show in Fig. S4B that the relative difference between the calculated rates of the two models is negligible. (Figure S4B caption fragment: the dashed lines show the relative difference of the energy-normalized information rates; the parameters of the two models are similar to (A).) Each model, however, has its own advantages and disadvantages. The state of the release site in the MRO model is a binary vector of length L, which corresponds to the last L release outcomes. The Markov chain of the release site consists of 2^L states, and its transition matrix, M, grows exponentially with the memory length. The pattern of the non-zero entries in M is always fixed and does not depend on the depression dynamics (Fig. S3A). In contrast, the state space of the QRP model consists of the quantized release probabilities, which is modeled by a Markov chain, M, of order |S_P| × |S_Q|. Therefore, the size of the Markov chain in the QRP model can be much smaller than that of the MRO model. This will decrease the computational resources that are required for the calculation of the information rate in the QRP model. However, the pattern of the non-zero entries of M varies with the depression coefficients (Fig. S3B), making it more difficult to achieve further analytical advances.

G. Distinct pools of vesicles

Studies indicate that asynchronous and synchronous release rely on the same pool of vesicles, whereas spontaneous release may draw on a distinct pool. Here we consider the hypothetical scenario in which the rate of spontaneous release is similar to that of asynchronous release to explore the consequences of having distinct pools of vesicles. Although our framework is based on the notion of a shared pool of vesicles, it can be slightly modified to comprise these hypothetical cases as well. We calculate average depression multipliers, c̄ and d̄, by marginalizing c and d over the release modes, where p_a is the average synchronous release probability, q_a denotes the average asynchronous plus spontaneous release probability, and r determines the ratio of the number of asynchronous releases to the total number of releases in the inter-spike intervals. The average depression multipliers substitute c and d in the model and simulate the impact of distinct pools of vesicles. We study the extreme case for which the rates of spontaneous release and asynchronous release are identical, r = 0.5. If spontaneous releases come from a distinct pool of vesicles, the release of a vesicle after an action potential depresses both synchronous and asynchronous release, but does not change the spontaneous release probability.
For releases occurring in the inter-spike intervals, if the released vesicle belonged to the separate pool of spontaneous release, only the probability of spontaneous release is reduced and the other modes of release are not affected; otherwise, depression reduces the probabilities of synchronous and asynchronous release, but does not impact spontaneous release. The mutual information rate of the synapse is calculated numerically using the context-tree weighting algorithm [2]. We then update the MRO model with the average depression multipliers and calculate the mutual information rate of the synapse. As seen in Fig. S5, even in this extreme case (with an unrealistically high rate of spontaneous release), the mutual information rate of the synapse with a distinct vesicle pool for spontaneous release can be precisely estimated by the MRO model (with a shared pool of vesicles), provided that the depression multipliers are marginalized over the release modes. We should note that the same simulation can be used to study a hypothetical synapse in which half of the vesicles of asynchronous release are supplied from a separate pool of vesicles. Our results show that the MRO model, equipped with average depression multipliers, can provide an accurate estimation of the information efficacy of synapses with partially distinct pools of vesicles.

(Figure S5: Information efficacy of a synapse with separate pools of vesicles. The mutual information rate of a synapse with a distinct pool of vesicles for spontaneous release is calculated numerically using the context-tree weighting algorithm (filled circles). The average depression multipliers of the MRO model (with a shared pool of vesicles) are estimated and the mutual information rate of the synapse is calculated (solid lines). Information rates are plotted as a function of q_0 (the summation of asynchronous and spontaneous release probabilities) for different depression multipliers of synchronous release, c. The other parameters are d = 0.5, a = 0.2, p_0 = 0.7, and e = f = 0.1.)
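As an illustration of the shared-pool versus distinct-pool comparison, the sketch below runs a naive Monte Carlo version of the stochastic depression dynamics. The parameters d, a (the spike probability per bin), p_0 and e = f are taken from the Fig. S5 caption; c, q_0, the number of steps, and the simplification that a distinct-pool spontaneous release simply skips depression (ignoring the reduction of the spontaneous probability itself) are assumptions of this sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

p0, q0 = 0.7, 0.4   # default probabilities (q0 = async + spontaneous; assumed)
c, d = 0.5, 0.5     # depression multipliers (c assumed, d from Fig. S5)
e = f = 0.1         # recovery coefficients (from Fig. S5)
a = 0.2             # spike probability per time bin (from Fig. S5)
r = 0.5             # fraction of inter-spike releases that are spontaneous

def release_rate(separate_pool, n_steps=100_000):
    """Releases per time bin for a shared or partially distinct vesicle pool."""
    p, q = p0, q0
    releases = 0
    for _ in range(n_steps):
        if rng.random() < a:                     # spike bin
            if rng.random() < p:                 # synchronous release
                releases += 1
                p, q = c * p, d * q              # depress the shared pool
        else:                                    # inter-spike bin
            if rng.random() < q:                 # asynchronous or spontaneous
                releases += 1
                spontaneous = rng.random() < r
                if not (separate_pool and spontaneous):
                    p, q = c * p, d * q          # distinct-pool spontaneous
                                                 # releases skip depression
        p += e * (p0 - p)                        # exponential recovery
        q += f * (q0 - q)
    return releases / n_steps

print(release_rate(False), release_rate(True))
```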
5,770.8
2019-01-01T00:00:00.000
[ "Biology" ]
The miR-98-3p/JAG1/Notch1 axis mediates the multigenerational inheritance of osteopenia caused by maternal dexamethasone exposure in female rat offspring

As a synthetic glucocorticoid, dexamethasone is widely used to treat potential premature delivery and related diseases. Our previous studies have shown that prenatal dexamethasone exposure (PDE) can cause bone dysplasia and susceptibility to osteoporosis in female rat offspring. However, whether the effect of PDE on bone development can be extended to the third generation (F3 generation) and its multigenerational mechanism of inheritance have not been reported. In this study, we found that PDE delayed fetal bone development and reduced adult bone mass in female rat offspring of the F1 generation, and this effect of low bone mass caused by PDE even continued to the F2 and F3 generations. Furthermore, we found that PDE increases the expression of miR-98-3p but decreases JAG1/Notch1 signaling in the bone tissue of female fetal rats. Moreover, the expression changes of miR-98-3p/JAG1/Notch1 caused by PDE continued from the F1 to F3 adult offspring. Furthermore, the expression levels of miR-98-3p in oocytes of the F1 and F2 generations were increased. We also confirmed that dexamethasone upregulates the expression of miR-98-3p in vitro and shows targeted inhibition of JAG1/Notch1 signaling, leading to poor osteogenic differentiation of bone marrow mesenchymal stem cells. In conclusion, maternal dexamethasone exposure caused low bone mass in female rat offspring with a multigenerational inheritance effect, the mechanism of which is related to the inhibition of JAG1/Notch1 signaling caused by the continuous upregulation of miR-98-3p expression in bone tissues transmitted by F2 and F3 oocytes.

INTRODUCTION

Dexamethasone is a synthetic glucocorticoid that can easily cross the placental barrier to promote lung maturation in premature infants. Therefore, this drug is widely used in obstetric and pediatric diseases, especially in pregnant women at risk of preterm delivery 1,2. However, increasing evidence has shown that prenatal synthetic glucocorticoid (such as dexamethasone) exposure results in intrauterine growth retardation of fetuses, leading to susceptibility to multiple diseases in adulthood [3][4][5][6]. Several studies have reported that the influence of glucocorticoid exposure during pregnancy on endocrine function and behavioral changes in offspring is not limited to the current generation but can also be inherited over multiple generations 7. Our recent studies also showed that prenatal dexamethasone exposure (PDE) causes developmental toxicity in the ovaries of offspring rats, which could be passed down to the third generation (F3) of offspring 8,9. Furthermore, the potential toxic effects of dexamethasone on offspring bone development during pregnancy have attracted extensive attention. Clinical studies have shown that the birth weight and body length of infants receiving dexamethasone treatment are lower than those of infants of the same gestational age; moreover, dexamethasone is an important factor leading to a decrease in bone mineral content and bone mineral density after birth 10,11. Animal studies also found that exposure to dexamethasone during the last 24 days of the fetal period in piglets can significantly decrease bone density and mass 12. We previously confirmed that PDE has a toxic effect on the development of fetal bones, and this effect can continue into adulthood, causing susceptibility to osteoporosis 13,14.
However, it is not clear whether the effect of PDE on long bone mass in female offspring persists in the F3 generation, nor what the mechanism of such multigenerational inheritance might be. At present, it is believed that the mechanism of multigenerational inheritance is related to epigenetic modifications in somatic or germ cells 15,16. MicroRNAs (miRNAs), as epigenetic mediators, usually bind to the 3'-untranslated regions (3'-UTRs) of target gene mRNAs to promote mRNA degradation or inhibit mRNA translation, thereby regulating gene expression at the posttranscriptional level. Furthermore, miRNAs are involved in the epigenetic regulation of multigenerational inheritance [17][18][19]. For example, it was found that changes in sperm miRNAs mediated the multigenerational inheritance of obesity and insulin resistance in offspring caused by a paternal high-fat diet 20. Our recent findings demonstrated that the expression change of miR-320a-3p in oocytes mediates the multigenerational inheritance of inhibited ovarian estrogen synthesis induced by PDE 9. These studies suggest that miRNAs may participate in the multigenerational inheritance of bone mass changes caused by PDE, although the regulatory mechanism is still unclear. In this study, we established a rat offspring model induced by dexamethasone exposure during the middle and late pregnancy periods. The multigenerational inheritance phenomenon of bone mass changes induced by PDE in female offspring rats was investigated by detecting the changes in bone mass indices and functional genes of the long bone in the F1 to F3 generations. Furthermore, based on the miRNA sequencing analysis and the detection of miRNA expression in bone tissue and oocytes, we clarified the potential mechanism of the multigenerational inheritance of osteopenia caused by PDE. This study helps to reveal the long-term adverse effects of PDE on bone development and its early intervention targets and provides a theoretical basis for illuminating the multigenerational inheritance effect of adult diseases.

Animals and treatment

Specific pathogen-free Wistar rats (with weights of 200-240 g for females and 260-300 g for males) were purchased from the Experimental Center of the Hubei Medical Scientific Academy (No. 2017-0018, certification number: 42000600014526, Hubei, China). All animal experiments were performed in the Center for Animal Experiment of Wuhan University (Wuhan, China), which is accredited by the Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC International). The rats were housed in a temperature-controlled room (temperature: 18-22°C; humidity: 40%-60%; light cycle: 12 h light-dark cycle) and allowed free access to food and water. After 1 week of adaptive feeding, the animal experiment was started. All animal experimental procedures were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals of the Chinese Animal Welfare Committee. Two 12-week-old female rats were placed together with one 12-week-old male rat overnight in a cage for mating. The next day, female rats were examined for vaginal smears. The presence of sperm in vaginal smears confirmed mating, and the mating date was designated gestational day (GD) 0. Pregnant rats were randomly divided into the control and PDE groups. From GD9 to GD20, the rats in the PDE group were subcutaneously injected with 0.2 mg/kg.d dexamethasone at 9:00 a.m. every day, and the rats in the control group were given saline at the same volume.
Some pregnant rats were randomly selected from the two groups and sacrificed after administration of anesthesia with isoflurane at GD20 to obtain fetal rats of the F1 generation (n = 8 per group). Pregnant rats with litter sizes of 10 to 14 were considered qualified. The female fetuses were decapitated immediately to collect long bones. The left femurs and tibias were fixed with 4% paraformaldehyde overnight and embedded in paraffin for histological or immunohistochemistry analysis. The right femurs and tibias were stored in a refrigerator at −80°C for further analysis. It should be noted that three whole fetal long bones from three different fetal rats from each litter were randomly pooled as one sample and processed for gene analysis. The rest of the pregnant rats, including those in the control and PDE groups, went into spontaneous labor to produce F1 adult offspring. Pregnant rats with a litter size of 10 to 14 were retained, and the litters were then normalized to 10 pups (the male/female ratio was approximately 1:1). There were at least eight pregnant rats in each group. At postnatal week 8 (PW8), one female rat was randomly taken from each litter, anesthetized with isoflurane, and euthanized; then, the long bone tissue was rapidly collected. The left femurs and tibias were used for microcomputed tomography (micro-CT) analysis. The right femurs and tibias were used for subsequent analyses, including histological or immunohistochemistry analysis and gene expression analysis. The remaining F1 female offspring from the control and PDE groups were mated in adulthood with normal male rats to generate F2 offspring. The F2 female offspring were handled using the same protocol as the F1 generation to consecutively produce F3 offspring. At PW8, the female F2 and F3 generations underwent studies similar to those performed on the F1 generation. The experimental procedures and treatment methods in this study are described as follows (Fig. 1).

Isolation of oocytes

Pregnant mare serum gonadotropin (PMSG) hormones were intraperitoneally injected into the F1 and F2 female rats. After 24 h, the same dose of human chorionic gonadotropin (hCG) hormone was injected intraperitoneally. Fourteen to 16 h after hCG injection, the female rats were sacrificed after the administration of isoflurane anesthesia. The oviducts were dissected under a microscope, and oocytes were collected. The surrounding granulosa cells were removed, and the oocytes were stored in liquid nitrogen for subsequent detection.

Histological and immunohistochemistry analysis

The femurs of fetal and adult rats were soaked in 4% paraformaldehyde overnight and then paraffin-embedded. Serial longitudinal sections (5 µm thick) were cut. One out of every six sections was used for hematoxylin-eosin (H&E) staining to quantify the length of the primary ossification center. For Von Kossa staining, the sections were dewaxed and stained with 5% AgNO3 until they became dark brown. For immunohistochemical analysis, the sections were dewaxed and hydrated through a graded series of ethanol. Then, the sections were placed in 0.01 M sodium citrate buffer (pH 6.0) and boiled at approximately 95°C for 10-15 min for antigen retrieval. After antigen retrieval, the hydrated sections were incubated in 3% H2O2 for 15 min to quench endogenous peroxidase activity. Sections were then blocked in 3% bovine serum albumin (BSA) (Servicebio, Wuhan, China) at room temperature for 1 h and incubated with primary antibodies against JAG1 (1:100 dilution) and Notch1 (1:200) overnight at 4°C.
(Fig. 1: The animal experimental procedure. Prenatal dexamethasone exposure model and multigenerational phenotype via the maternal line.)

After the sections were rinsed with PBS, they were incubated with biotinylated secondary antibody (1:100 dilution) and then incubated with avidin-biotinylated horseradish peroxidase complex solution according to the manufacturer's instructions. Finally, peroxidase activity was detected by immersion in diaminobenzidine (DAB) substrate. The staining intensity was determined by measuring the mean optical density (MOD) in six random fields for each section.

Micro-CT scan

The obtained femur was immobilized with 70% ethanol. Then, the bone mass was scanned and analyzed with a VivaCT 40 μCT system (Scanco, Switzerland) as previously described 21. To measure bone volume/total volume (BV/TV), bone trabecular number (Tb.N), bone trabecular thickness (Tb.Th), and bone trabecular separation (Tb.Sp), we selected 0.5-5.5 mm below the lowest point of the growth plate as the area of interest. The scanning resolution of the cross-sectional image was 21 μm. After scanning, three-dimensional reconstruction and quantitative analysis of cancellous bone were performed.

Cell isolation, culture, and treatment

As previously described, bone marrow mesenchymal stem cells (BMSCs) were obtained from the femurs and tibias of female Wistar rats at 3-4 weeks of age and cultured in complete growth medium (α-MEM with 10% FBS, 100 μg/ml streptomycin, and 100 U/ml penicillin) 22. Then, we verified the multidirectional differentiation potential of the BMSCs by osteogenic, chondrogenic and adipogenic differentiation (Supplementary Fig. 1). For induction of osteogenic differentiation, third-passage BMSCs were seeded in 6-well plates and treated with osteogenic induction medium (α-MEM with 10% FBS, 100 μg/ml streptomycin, 100 U/ml penicillin, 10 mM β-glycerophosphate, 50 μg/ml ascorbic acid, and 10 nM dexamethasone). Then, the cells were treated with various concentrations of dexamethasone or cotreated with dexamethasone and miRNA inhibitor or JAG1 overexpression plasmid for further analysis.

Alkaline phosphatase and Alizarin red staining for BMSCs

Alkaline phosphatase (ALP) staining was performed using an Alkaline Phosphatase Color Development Kit according to the manufacturer's instructions. For Alizarin red staining (ARS), the cells were washed twice with PBS and fixed with 4% paraformaldehyde for 10 min, rinsed with double-distilled H2O, and stained with 0.1% Alizarin red dye (pH 4.2) for 20 min at room temperature. Then, they were washed again with double-distilled H2O and observed by microscopy.

Total RNA extraction and RT-qPCR

Total RNA was extracted with TRIzol reagent. The concentration and purity of total RNA were detected by a NanoDrop 2000 nucleic acid analyzer. For mRNA detection, cDNA was synthesized by using HiScript III RT SuperMix for qPCR (+gDNA wiper) (Vazyme) and then quantified by RT-qPCR with AceQ Universal SYBR qPCR Master Mix (Vazyme). For miRNA detection, cDNA was synthesized by using the miScript II RT Kit (Qiagen) and then quantified by RT-qPCR with SYBR Green PCR Master Mix (Qiagen). The primer sequences are all shown in Table 1. The relative expression of mRNA and miRNA was analyzed by the 2^(−ΔΔCt) method and normalized to the expression of GAPDH and U6, respectively.

MiRNA microarray analysis

Total RNA was isolated from fetal long bones at GD20 using Magzol Reagent (Magen, China) according to the manufacturer's protocol.
The quantity and integrity of the RNA yield were assessed by using a K5500 (Beijing Kaiao, China) and an Agilent 2200 TapeStation (Agilent Technologies, USA), respectively. Briefly, total RNA was ligated with a 3' RNA adapter, followed by 5' adapter ligation. Subsequently, the adapter-ligated RNAs were subjected to RT-PCR and amplified with a low cycle number. Then, the PCR products were size selected on a PAGE gel according to the NEBNext Multiplex Small RNA Library Prep Set for Illumina (Illumina, USA). Finally, the purified library products were evaluated using an Agilent 2200 TapeStation system and Qubit (Thermo Fisher Scientific, USA). The libraries were sequenced on an Illumina HiSeq 2500 system (Illumina, USA) with single-end 50 bp reads at RiboBio Co., Ltd.

Western blot analysis

The cells were washed with PBS and then lysed on ice with RIPA lysis buffer (Beyotime, Shanghai, China) containing 1 mM PMSF and protease inhibitor cocktail for 30 min to extract the total protein. A BCA protein assay kit (Beyotime, Shanghai, China) was used to detect the protein concentration of the samples. A total of 30 μg of protein was loaded into each lane, isolated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) (10% gels), and then transferred to polyvinylidene difluoride membranes. The membranes were blocked with 5% skim milk at room temperature for 1 h and then incubated overnight at 4°C with specific antibodies: JAG1 (1:200) and Notch1 (1:1000). On the second day, the membranes were incubated with horseradish peroxidase-conjugated secondary antibody at room temperature for 1 h. Finally, the protein bands were visualized by using chemiluminescent ECL reagent. The bands were quantified by measuring the mean optical density for each group.

Dual luciferase reporter assay

The JAG1 3'-UTR containing the conserved miR-98-3p binding sites or the corresponding mutated sites was synthesized by GeneChem (Shanghai, China) and amplified by PCR. The PCR fragment was subcloned into the SacI and HindIII sites downstream of the luciferase reporter gene in the psiCHECK™-2 vector (Promega, Madison, USA). The luciferase reporter vector was cotransfected with miR-98-3p mimic or miR-negative control (miR-NC) into HEK-293T cells using Lipofectamine 3000 (Invitrogen). The luciferase activities were measured 48 h after transfection using the Dual Luciferase Reporter Assay System (Promega) according to the manufacturer's instructions. The Renilla luciferase activity was normalized to the firefly luciferase activity for each transfected well.

Statistical analysis

The data were analyzed and graphed by SPSS and GraphPad Prism 7 software. All of the numerical results are presented as the mean ± standard error of the mean (S.E.M.). Significant differences between the control and treatment groups were identified using Student's t-tests. The differences among more than two groups were determined using one-way analysis of variance (ANOVA). P < 0.05 was considered statistically significant.

RESULTS

PDE delayed fetal bone development in female rats of the F1 generation

First, we observed the effects of PDE on fetal bone development. The H&E staining results showed that PDE significantly shortened the entire femur length and the length of the primary ossification center of the femur compared with the control (P < 0.05, P < 0.01, Fig. 2a, b).
Moreover, we analyzed the calcification of the primary ossification center by Von Kossa staining and found that PDE significantly reduced the mineralization area of the primary ossification center in F1 generation female fetal rats (P < 0.01, Fig. 2c, d). These results suggested that PDE could delay bone development and reduce bone mineralization in female fetal rats of the F1 generation.

PDE reduced bone mass in adult female rats of the F1-F3 generations

To investigate whether the effect of PDE on the bone mass formation of F1 female offspring can continue to adulthood or even to the F3 generation, we detected the bone mass of 8-week-old female offspring of the F1-F3 generations by micro-CT. The results showed that PDE significantly reduced the bone mass of femoral cancellous bone of the F1-F3 generations, which was characterized by decreased BV/TV, Tb.N, and Tb.Th and increased Tb.Sp (P < 0.05, P < 0.01, Fig. 3a, b). These results suggested that PDE reduced bone mass in female rat offspring and that there was a multigenerational inheritance phenomenon.

PDE increased miR-98-3p expression in bone tissues of F1-F3 generation female rats through oocytes

Next, we explored the multigenerational inheritance mechanism of PDE-induced bone dysplasia and low bone mass in female rat offspring. We performed a miRNA microarray analysis on female fetal rat bone tissue of the F1 generation from the control and PDE groups (Supplementary Fig. 2a). RT-qPCR was further used to verify the five candidate miRNAs with the most apparent expression differences. The results showed that PDE significantly increased the expression of miR-98-3p in the bone tissue of the F1 female fetal rats (P < 0.01, Fig. 4a), and the increased expression of miR-98-3p continued after birth and in the F2 and F3 generations (P < 0.05, P < 0.01, Fig. 4b). However, other miRNAs (e.g., miR-673-5p or miR-449a-5p) with significant changes during the intrauterine period could not be stably inherited by the F3 generation (Supplementary Fig. 2b). The correlation analysis results indicated that the expression of miR-98-3p in bone tissue of the F1-F3 generations was significantly correlated with the bone mass indices (P < 0.05, P < 0.01, Fig. 4c). These results suggested that PDE induced osteopenia in F1-F3 female offspring rats, which was related to the increased expression of miR-98-3p in bone tissue. Furthermore, we compared the mature sequences of miR-98-3p in multiple species. Sequence analysis of miR-98-3p revealed that it is highly conserved in Homo sapiens, Mus musculus, Rattus norvegicus, and other species (Supplementary Fig. 3a). To clarify whether the continuous increase in miR-98-3p expression in the bone tissues of the F1-F3 offspring induced by PDE was transmitted through oocytes, we detected the expression of miR-98-3p in the oocytes of the F1 and F2 generations. The results showed that the expression of miR-98-3p in F1 and F2 oocytes of the PDE group was also higher than that of the control group (P < 0.05, P < 0.01, Fig. 4d, e). Collectively, the above results revealed that PDE could increase the expression of miR-98-3p in the bone tissue of F1 female rats and transmit it to the F2 and F3 generations via oocytes, thereby mediating the occurrence of a multigenerational inheritance effect of osteopenia.

Upregulation of miR-98-3p expression mediated the inhibition of osteogenic differentiation induced by PDE in the F1-F3 female rats

Osteogenic differentiation is an important factor affecting bone development and bone mass formation 23.
Therefore, we further detected the expression of femoral osteogenic differentiation marker genes (Runx2, Osterix, Ocn, and Col1a1) in F1 fetal rats and in 8-week-old rats of the F1-F3 generations. The results showed that the expression of osteogenic differentiation marker genes in F1 fetal rats and F1-F3 female adult rats in the PDE group was significantly decreased (P < 0.05, P < 0.01, Fig. 5a). This result suggested that PDE could induce continuous inhibition of osteogenic differentiation in F1-F3 female rats. Next, we verified the role of miR-98-3p in dexamethasone-induced inhibition of the osteogenic differentiation of BMSCs. First, BMSCs were treated with different concentrations of dexamethasone (0, 20, 100, and 500 nM) in osteogenic differentiation culture, and then the expression of osteogenic differentiation marker genes was detected. We found that dexamethasone inhibited the expression of osteogenic differentiation marker genes in a concentration-dependent manner and simultaneously upregulated the expression of miR-98-3p (P < 0.05, P < 0.01, Fig. 5b, c). Subsequently, we observed the effect of high-concentration (500 nM) dexamethasone on the osteogenic differentiation of BMSCs transfected with miR-98-3p inhibitor. RT-qPCR results showed that the miR-98-3p inhibitor alleviated the inhibitory effect of dexamethasone on osteogenic differentiation (P < 0.05, P < 0.01, Fig. 5d). Similar results were also obtained by ARS staining and ALP staining (Fig. 5e, f). In contrast, when we transfected the miR-98-3p mimic as a positive control, we found that the miR-98-3p mimic could further enhance the osteogenic inhibitory effect of dexamethasone (Supplementary Fig. 4). These findings indicated that miR-98-3p is involved in the inhibition of osteogenic differentiation in F1-F3 female rats induced by PDE (dexamethasone).

miR-98-3p participated in dexamethasone-induced inhibition of osteogenic differentiation by targeting and decreasing JAG1/Notch1 signaling in F1-F3 female rats

To gain insights into the molecular mechanisms by which miR-98-3p regulates the osteogenic differentiation of BMSCs, we predicted the potential targets of miR-98-3p using bioinformatics tools (TargetScan) and found that the 3'-UTR of JAG1, a key ligand of the Notch signaling pathway, has a miR-98-3p binding site. Moreover, the binding site is highly conserved among vertebrates (Supplementary Fig. 3b). We next constructed luciferase reporters that had either a wild-type (WT) 3'-UTR or a 3'-UTR containing mutant sequences of the miR-98-3p binding site to confirm that miR-98-3p can directly target JAG1 (Fig. 6a). The results (Fig. 6b) showed that overexpression of miR-98-3p remarkably inhibited the luciferase reporter activity of the WT JAG1 3'-UTR (P < 0.01) but not that of the mutated 3'-UTR. These results indicated that miR-98-3p could directly regulate the expression of JAG1. Previous studies have suggested that JAG1 can activate the Notch signaling pathway by affecting Notch1 and thereby participate in the regulation of osteogenic differentiation 24,25. Then, we detected the protein expression of JAG1 and Notch1 by immunohistochemistry. The results showed that the expression of JAG1 and Notch1 was significantly decreased in the PDE group from the intrauterine period of the F1 generation to adulthood as well as in the subsequent F2 and F3 generations (P < 0.05, P < 0.01, Fig. 6c, d).
In addition, Western blot analyses indicated that dexamethasone could significantly reduce the expression of JAG1 and Notch1 in BMSCs, and the miR-98-3p inhibitor could partially reverse the inhibition of JAG1 and Notch1 expression induced by dexamethasone (P < 0.01, Fig. 6e). Furthermore, we explored the role of JAG1/Notch1 signaling in the inhibition of the osteogenic differentiation of BMSCs by dexamethasone in vitro. We found that overexpression of JAG1 alleviated the inhibitory effect of dexamethasone on the expression of osteogenic differentiation marker genes (P < 0.05, P < 0.01, Fig. 6f). This finding was consistent with the results of ARS and ALP staining (Fig. 6g, h). Taken together, these data demonstrated that miR-98-3p could participate in dexamethasone-induced inhibition of osteogenic differentiation by decreasing JAG1/Notch1 signaling.

(Fig. 6 caption fragment: f RT-qPCR analysis of the expression of osteogenic marker genes in the BMSCs treated with 500 nM dexamethasone and the JAG1 plasmid (n = 3). g Mineral deposition was indicated by Alizarin red staining (n = 3). h ALP staining was performed on Day 7 of osteogenic differentiation (n = 3). JAG1, Jagged1; Con, control; MOD, mean optical density; miR-NC, miRNA negative control; PDE, prenatal dexamethasone exposure; Dex, dexamethasone; Runx2, Runt-related transcription factor 2; OCN, osteocalcin; COL1A1, α1 chain of type I collagen gene. Mean ± S.E.M., * P < 0.05, ** P < 0.01 vs. the control.)

DISCUSSION

PDE caused the multigenerational inheritance of osteopenia in female offspring

The classic clinical application of dexamethasone in the treatment of premature infants is an intramuscular injection of 6 mg every 12 h, for a total of four doses, at 24-34 weeks of gestation. A repeat course should be considered 7 or more days after an initial course in women who remain at risk of preterm birth at less than 34 weeks of gestation 26,27. To simulate the role of dexamethasone in the treatment of threatened preterm labor in the clinic, we administered dexamethasone to Wistar rats in the second and third trimesters of pregnancy. Based on the dose conversion relationship between rats and humans (conversion coefficient of 6.16:1) 28, the dexamethasone administration dose of 0.2 mg/kg.d in rats in this study was comparable to a therapeutic dose of 0.03 mg/kg.d in humans. Since the standard dose for the clinical use of dexamethasone is 0.05-0.2 mg/kg.d 29, the dose of dexamethasone administered in this study can be reached in clinical practice. Moreover, due to the difficulty in the early diagnosis of preterm birth and the poor effect of a single course of treatment in some pregnant women, approximately 1/3 of pregnant women were changed from prophylactic drug administration to continuous drug administration 30. Therefore, dexamethasone was given continuously at GD9-20 (the second and third trimesters of pregnancy) in this study.

An epidemiological survey found that the incidence rate of osteoporosis in women over 45 years old is higher than that in men, accounting for 62%~83% of the total number of patients 31. Surveys from around the world have also found that the probability of osteoporotic hip fracture in older women is higher than that in men 32,33. These results suggested that women are more likely to suffer from osteoporosis and have more typical symptoms. Therefore, the selection of female rats as the research object is representative.

Transgenerational inheritance refers to the phenomenon in which epigenetic markers and phenotypes can still be transmitted to offspring without direct environmental exposure 34. The programming intervention of the mother (F0 generation) during pregnancy directly affects the development of her offspring in utero (F1 generation), and the germ cells (future gametes) that form the F2 generation are also directly exposed to the adverse environment during this pregnancy, while the F3 generation is not directly exposed. Therefore, only the effect on the F3 generation and above can be truly called transgenerational inheritance 35. Studies have shown that an adverse environment during pregnancy can lead to transgenerational inheritance effects of various disease phenotypes. For example, a low-protein diet during pregnancy can lead to abnormal pancreatic development and glucose metabolism in offspring and has transgenerational inheritance effects 36. In addition, zinc deficiency during pregnancy can lead to immunosuppression of offspring and be inherited by the F3 generation 37. In a rat model of PDE, we observed that PDE can lead to low peak bone mass in male offspring, which can be inherited by the F2 generation 38. On this basis, we further studied the effect of PDE on the bone mass of the F3 female generation and explored the possible mechanism of transgenerational inheritance.

(Fig. 5 caption fragment: b RT-qPCR was used to detect the expression of osteogenic differentiation marker genes in the BMSCs treated with different concentrations of dexamethasone during osteogenic differentiation (n = 3). c RT-qPCR was used to detect the expression of miR-98-3p in the BMSCs treated with different concentrations of dexamethasone (n = 3). d RT-qPCR was used to analyze the expression of osteogenic differentiation marker genes in the BMSCs transfected with miR-98-3p inhibitor under 500 nM dexamethasone and osteogenic differentiation conditions (n = 3). e Mineral deposition was indicated by Alizarin red staining (n = 3). f ALP staining was performed on Day 7 of osteogenic differentiation (n = 3). GD, gestational day; PDE, prenatal dexamethasone exposure; Runx2, Runt-related transcription factor 2; OCN, osteocalcin; COL1A1, α1 chain of type I collagen gene; Dex, dexamethasone; miR-NC, miRNA negative control. Mean ± S.E.M., * P < 0.05, ** P < 0.01 vs. the appropriate controls.)

First, we found that PDE can significantly shorten the absolute length of the primary ossification center of the fetal long bones, result in sparse bone trabeculae, and reduce the expression of osteogenic marker genes. The bone mass of the long bones and the expression of osteogenic differentiation marker genes were also significantly reduced at 8 postnatal weeks. This result suggested that the long bone dysplasia of female rat offspring induced by PDE has an intrauterine programming effect, which could last until after birth. Furthermore, we found that the long bone mass and osteogenic differentiation marker gene expression in the F2 and F3 generations of the PDE group were significantly reduced. This finding indicated that the inhibitory effect of PDE on bone development could continue through the maternal line to adulthood of the F2 and F3 generations and had a multigenerational inheritance effect.

miR-98-3p mediated the multigenerational inheritance of PDE-induced osteopenia in female offspring through germ cells

Studies have shown that epigenetic modifications may be related to the multigenerational inheritance of some diseases 39.
Studies have shown that an adverse environment during pregnancy can lead to transgenerational inheritance effects of various disease phenotypes. For example, a low-protein diet during pregnancy can lead to abnormal pancreatic development and glucose metabolism in offspring and has transgenerational inheritance effects 36 . In addition, zinc deficiency during pregnancy can lead to immunosuppression of offspring and be inherited by the F3 generation 37 . In a rat model of PDE, we observed that PDE 8). b RT-qPCR was used to detect the expression of osteogenic differentiation marker genes in the BMSCs treated with different concentrations of dexamethasone during osteogenic differentiation (n = 3). c RT-qPCR was used to detect the expression of miR-98-3p in the BMSCs treated with different concentrations of dexamethasone (n = 3). d RT-qPCR was used to analyze the expression of osteogenic differentiation marker genes in the BMSCs transfected with miR-98-3p inhibitor under 500 nM dexamethasone and osteogenic differentiation conditions (n = 3). e Mineral deposition was indicated by Alizarin red staining (n = 3). f ALP staining was performed on Day 7 of osteogenic differentiation (n = 3). GD Gestational day; PDE Prenatal dexamethasone exposure; Runx2 Runt-related transcription factor 2; OCN Osteocalcin; COL1A1, α1 chain of type I collagen gene; Dex Dexamethasone; miR-NC miRNA negative control. Mean ± S.E.M., * P < 0.05, ** P < 0.01 vs. the appropriate controls. can lead to low peak bone mass in male offspring, which can be inherited by the F2 generation 38 . On this basis, we further studied the effect of PDE on the bone mass of the F3 female generation and explored the possible mechanism of transgenerational inheritance. First, we found that PDE can significantly shorten the absolute length of the primary ossification center of the fetal long bones, result in sparse bone trabeculae, and reduce the expression of osteogenic marker genes. The bone mass of long bones and the expression of osteogenic differentiation marker genes were also significantly reduced at 8 postnatal weeks. This . f RT-qPCR analysis of the expression of osteogenic marker genes in the BMSCs treated with 500 nM dexamethasone and the JAG1 plasmid (n = 3). g Mineral deposition was indicated by Alizarin red staining (n = 3). h ALP staining was performed on Day 7 of osteogenic differentiation (n = 3). JAG1 Jagged1; Con control; MOD mean optical density; miR-NC miRNA negative control; PDE prenatal dexamethasone exposure; Dex dexamethasone; Runx2 Runt-related transcription factor 2; OCN osteocalcin; COL1A1 α1 chain of type I collagen gene. Mean ± S.E.M., * P < 0.05, ** P < 0.01 vs. the control. result suggested that the long bone dysplasia of female rat offspring induced by PDE has an intrauterine programming effect, which could last until after birth. Furthermore, we found that the long bone mass and osteogenic differentiation marker gene expression in the F2 and F3 generations of the PDE group were significantly reduced. This finding indicated that the inhibitory effect of PDE on bone development could continue through the maternal line to adulthood of the F2 and F3 generations and had a multigenerational inheritance effect. miR-98-3p mediated the multigenerational inheritance of PDE-induced osteopenia in female offspring through germ cells Studies have shown that epigenetic modifications may be related to the multigenerational inheritance of some diseases 39 . 
Adverse prenatal environmental exposure often leads to abnormal epigenetic modifications, which in turn give rise to adverse phenotypes in offspring 40,41. The main sites of action of these environmental factors are usually in the germline, which promotes the continued transmission of epigenetic modifications to the next generation and thereby provides a theoretical basis for some transgenerational diseases 42,43. For example, it has been reported that the offspring and even grandchildren of pregnant women with diabetes exhibit similar metabolic syndrome phenotypes, which may be related to epigenetic modifications of oocytes 44. As highly conserved epigenetic regulators, miRNAs play a crucial role in controlling gene expression and genomic stability and may be involved in multigenerational inheritance phenomena 15. For example, injection of miR-1 into fertilized mouse eggs resulted in a cardiac hypertrophy phenotype that was inherited for at least three generations, and the spermatozoa of all three generations showed persistently elevated miR-1 45.

In this study, high-throughput sequencing showed that the expression of miR-98-3p was significantly increased in the long bones of the F1 generation both before and after birth and in the F2-F3 generations. We also confirmed that miR-98-3p is highly conserved. Moreover, we measured miR-98-3p expression in F1- and F2-generation oocytes and found that it remained elevated in the PDE group. These results suggested that the multigenerational inheritance of PDE-induced osteopenia in female offspring is transmitted through F1- and F2-generation oocytes. The increased expression of miR-98-3p in the adult long bone tissue of the F3 generation is then maintained through the stable mitotic propagation of somatic cells derived from those germ cells. The change in miR-98-3p expression was thus retained in both germ cells and somatic cells, which together mediated the multigenerational inheritance of PDE-induced osteopenia in female offspring.

Fetal-originated adult diseases usually show sex differences 46,47; therefore, we also investigated changes in bone mass and miR-98-3p expression in male rats. We found that PDE still caused low bone mass in male F3 rats but did not affect miR-98-3p expression in male F1 fetal rats (Supplementary Fig. 5). This finding indicated that other mechanisms may be involved in the multigenerational inheritance of osteopenia in male rats; further research is needed.

Epigenetic markers undergo two rounds of reprogramming during the life cycle, one during gamete formation and one after fertilization, which would be expected to erase genome-wide epigenetic information. However, reprogramming is not complete, and some epigenetic markers may escape erasure at each stage 48. If environmental factors cause permanent changes in the epigenome of parental germ cells, transgenerational epigenetic inheritance can occur through germ cell transmission 49. In this study, the PDE-induced increase in miR-98-3p expression in oocytes was stably inherited, possibly because it escaped reprogramming, thereby preserving the abnormal change in miR-98-3p. However, the detailed mechanism of this epigenetic change, and how epigenetic modifications in oocytes are stably transmitted to the next generation, are not fully understood. Whether deeper mechanisms are involved, such as miRNA methylation and imprinted genes, remains to be studied. In addition, this study lacks in vivo miR-98-3p inhibition experiments in oocytes, which will be a focus of our future research.
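For readers unfamiliar with how candidate miRNAs such as miR-98-3p are flagged from expression profiles of the kind described above, the sketch below shows the generic fold-change-plus-significance filter such screens typically apply. This is an illustration under assumed data and thresholds, not the authors' actual sequencing or microarray pipeline; the expression values, sample sizes, and cutoffs are hypothetical.

```python
# Generic sketch of the fold-change + significance filter used to screen
# differentially expressed miRNAs (e.g., miR-98-3p) from expression
# profiles. Hypothetical data and thresholds; NOT the authors' pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical log2-normalized expression: rows = miRNAs,
# columns = samples (3 control vs. 3 PDE).
mirna_names = ["miR-98-3p", "miR-21", "let-7a", "miR-1"]
control = rng.normal(loc=5.0, scale=0.3, size=(4, 3))
shift = np.array([[2.0], [0.1], [-0.2], [0.0]])  # spike miR-98-3p only
pde = control + shift + rng.normal(scale=0.1, size=(4, 3))

for name, ctrl_vals, pde_vals in zip(mirna_names, control, pde):
    log2_fc = pde_vals.mean() - ctrl_vals.mean()   # log2 fold change
    _, p_value = stats.ttest_ind(pde_vals, ctrl_vals)
    if abs(log2_fc) >= 1.0 and p_value < 0.05:     # common cutoffs
        print(f"{name}: log2FC = {log2_fc:+.2f}, p = {p_value:.3g}")
```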
miR-98-3p/JAG1/Notch1 signaling contributed to the multigenerational inheritance of PDE-induced osteopenia in female offspring

As common epigenetic mediators, miRNAs typically regulate gene expression at the post-transcriptional level, promoting mRNA degradation or inhibiting mRNA translation by binding to the 3'-UTR of target mRNAs 19,50. In this study, we first screened the PDE-induced changes in the miRNA expression profile of fetal rat long bones by miRNA microarray analysis and confirmed that miR-98-3p participates in the multigenerational inheritance of PDE-induced osteopenia through oocyte transmission. We then studied the mechanism at the cellular level and found that dexamethasone increased miR-98-3p expression and inhibited the osteogenic differentiation of BMSCs in a concentration-dependent manner. The dexamethasone-mediated inhibition of osteogenic differentiation of BMSCs was partially reversed by a miR-98-3p inhibitor and enhanced by miR-98-3p mimics. These findings indicated that miR-98-3p directly participates in and regulates the dexamethasone-mediated inhibition of osteogenic differentiation of BMSCs. We also confirmed that dexamethasone regulates miR-98-3p by activating glucocorticoid receptors (Supplementary Fig. 6).

Furthermore, we examined the downstream signaling pathway of miR-98-3p. JAG1 is one of the crucial ligands of the Notch signaling pathway; its interaction with Notch1 receptors leads to the release of the Notch intracellular domain (NICD), which translocates into the nucleus and activates Notch-responsive genes important for cell differentiation and morphogenesis in different biological systems 51,52. JAG1/Notch1 signaling is also considered an essential factor in maintaining skeletal development and homeostasis in humans. A microdeletion encompassing the JAG1 gene causes Alagille syndrome, a disease characterized by bone abnormalities 53. Moreover, JAG1 has been shown to promote the osteogenic differentiation of ligamentum flavum cells and human bone marrow mesenchymal stem cells through activation of Notch signaling 54,55.

In this study, we predicted JAG1 to be a target gene of miR-98-3p by bioinformatics analysis and confirmed this prediction by dual luciferase reporter assays. Furthermore, in the multigenerational inheritance model of PDE, the expression levels of JAG1 and Notch1 were persistently decreased. Cell experiments also confirmed that dexamethasone increased miR-98-3p expression and decreased JAG1 and Notch1 expression, while a miR-98-3p inhibitor reduced the suppressive effects of dexamethasone on JAG1 and Notch1 expression. In addition, overexpression of JAG1 partly rescued the inhibitory effect of dexamethasone on the osteogenic differentiation of BMSCs. These findings demonstrated that miR-98-3p and its molecular target, JAG1/Notch1 signaling, are involved in the multigenerational inheritance of PDE-induced osteopenia in offspring.
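The bioinformatic target prediction mentioned above rests on seed-match logic: positions 2-8 of the mature miRNA must pair with a complementary site in the candidate 3'-UTR. The sketch below illustrates only this logic; the miR-98-3p and JAG1 3'-UTR sequences shown are placeholders, not the real sequences, which would have to be taken from miRBase and a genome annotation.

```python
# Illustration of the seed-match step behind miRNA target prediction
# (the kind of analysis used to nominate JAG1 as a miR-98-3p target).
# The sequences below are PLACEHOLDERS, not the real miR-98-3p or JAG1
# 3'-UTR sequences.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna: str, utr: str) -> list:
    """Return 0-based start positions in `utr` that pair with the
    miRNA seed (nucleotides 2-8 of the mature sequence)."""
    seed = mirna[1:8]  # positions 2-8, 5'->3'
    # A target site is the reverse complement of the seed.
    site = "".join(COMPLEMENT[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mir98_3p = "CUAUACAAUCUACUGUCUUUCC"      # hypothetical miRNA sequence
jag1_utr = "AAGUUGUAUAGCCAGAUUGUAUAGUU"  # hypothetical 3'-UTR fragment

print(seed_match_sites(mir98_3p, jag1_utr))  # -> [3, 16] for these inputs
```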
In summary, this study confirmed that PDE causes a multigenerational inheritance effect of osteopenia in female offspring rats. The mechanism involves the continuous PDE-induced upregulation of miR-98-3p in bone tissue, transmitted through oocytes, and the consequent targeted inhibition of JAG1/Notch1 signaling, leading to poor osteogenic differentiation (Fig. 7). This study provides new experimental evidence for the analysis of the bone developmental toxicity and multigenerational inheritance effects of prenatal dexamethasone exposure.

Fig. 7 miR-98-3p/JAG1/Notch1 signaling mediates the multigenerational inheritance effect of osteopenia caused by maternal dexamethasone exposure in female rat offspring. PDE, prenatal dexamethasone exposure; GR, glucocorticoid receptor; JAG1, Jagged1.