15021306
pes2o/s2orc
v3-fos-license
Phytochemical and Antioxidant Investigation of the Aerial Parts of Dorema glabrum Fisch. & C.A. Mey. Dorema glabrum Fisch. & C.A. Mey. (Apiaceae) is a monocarpic perennial plant distributed in the southern Caucasus. In the folk medicine of the Azerbaijan Republic, the gum-resin of this species is used as a diuretic and anti-diarrheal agent. It is also traditionally used for the treatment of bronchitis and catarrh. In the present study, the chemical constituents of the essential oil and extract of D. glabrum aerial parts were investigated and their free radical scavenging potentials were assessed. GC-MS and GC-FID analyses of the plant essential oil resulted in the identification of twenty compounds, of which elemicin (38.6%) and myristicin (14.3%) were the main compounds. Seven compounds, including daucosterol (1), chlorogenic acid (2), a mixture of cynarin (3) and 3,5-di-O-caffeoylquinic acid (4), isorhamnetin-3-O-β-D-glucopyranoside (5), isoquercetin (6) and astragalin (7), were also isolated from the ethyl acetate and methanol fractions of D. glabrum aerial parts using different chromatographic methods on silica gel (normal and reversed-phase) and Sephadex LH-20. Structures of the isolated compounds were elucidated using UV and 1H- and 13C-NMR spectra in comparison with those reported in the respective published data. Antioxidant activities of the crude extract, fractions and isolated compounds were evaluated using the DPPH free radical scavenging assay. Among the fractions, the methanol fraction (IC50 = 53.3 ± 4.7 μg mL-1) and, among the isolated compounds, the caffeoylquinic acid derivatives exhibited the highest free radical scavenging activity (IC50 = 2.2-2.6 μg mL-1). Introduction The genus Dorema D. Don of the Apiaceae (alt. Umbelliferae) family comprises 12 species, mainly distributed in southwestern and central Asia (1,2). In Iran this genus is represented by seven species, one of which is Dorema glabrum Fisch. & C.A. Mey. (3). Dorema glabrum is a monocarpic perennial plant native to the northwest of Iran, the Azerbaijan Republic (the Nakhichevan region) and Armenia (2,3). Like other species of the genus Dorema, D. glabrum exudes a gum-resin which is used in Azerbaijan Republic folk medicine as a diuretic and anti-diarrheal agent as well as for the treatment of bronchitis and catarrh (4). There are also some reports indicating that this plant is used by indigenous people for the treatment of some types of cancer (5). The GC oven temperature was programmed to 250 °C at a rate of 3 °C min-1. The injection temperature was 250 °C and the oil sample (1 μL) was injected with a split ratio of 1:90. The mass spectra were obtained by electron ionization at 70 eV. The retention indices (RI) of the compounds were calculated using a homologous series of n-alkanes injected under conditions identical to those of the samples. Identification of the compounds was based on computer matching with the Wiley7n.L library and on direct comparison of the retention indices and fragmentation patterns of the mass spectra with those of standard compounds published in the literature (9). Relative percentage amounts of the identified compounds were obtained using an Agilent HP-6890 gas chromatograph equipped with an FID detector. The FID detector temperature was 290 °C and the operation was performed under the same conditions as described above for the GC-MS analyses. Extraction and fractionation The air-dried and ground aerial parts (0.8 kg) were macerated with methanol (4 L × 5) at room temperature.
The obtained crude extract (180 g) was loaded onto a silica gel column (30-75 mesh, Merck, Germany) and eluted successively with petroleum ether, chloroform, ethyl acetate and methanol (4 L each) to give four main fractions. All the fractions were concentrated at a maximum temperature of 45 °C using a rotary evaporator. Phytochemical analyses In preliminary thin layer chromatography studies using various reagents (TLC, pre-coated silica gel GF254 and silica gel 60 RP-C18 F254s plates, Merck, Germany), the ethyl acetate and methanol fractions were found to contain a number of spots characteristic of phenolic compounds. These fractions were then subjected to further phytochemical investigation using various chromatographic and spectroscopic methods. The ethyl acetate fraction (20 g) was loaded onto a silica gel column (230-400 mesh, Merck, Germany) and eluted with MeOH-CHCl3 (0.5:9.5 to 3:7) to give nine fractions (E1-9). Fraction E3 afforded 120 mg of white crystals which were purified on a silica gel column. Previous pharmacological investigations have shown hypocholesterolemic and antioxidant activities of D. glabrum aerial parts and anti-proliferative effects of its fruits against the WEHI-164 mouse fibrosarcoma cell line (6,7). The essential oil of D. glabrum roots has also been reported to contain δ-cadinene (12.8%) and β-bisabolene (7.5%) as main compounds, with a weak antioxidant activity in the DPPH free radical scavenging assay (IC50 = 2.2 mg mL-1) (5). Unfortunately, extensive exploitation of this medicinal plant and the reduction of its natural population have led to it being considered an endangered species (8). The present study was an attempt to investigate the phytochemical constituents and antioxidant properties of the aerial parts of D. glabrum. To our knowledge, this is the first report on the essential oil composition and on the isolation of compounds with free radical scavenging activity from the aerial parts of this medicinal species. Plant material The aerial parts of D. glabrum were collected from the "Ghaflankuh" mountains located in East Azerbaijan (northwest Iran) during the flowering stage in June 2011. A voucher specimen of the plant (voucher no. 2120 MPIH) was deposited at the herbarium of the Institute of Medicinal Plants, ACECR, Karaj, Iran. Essential oil extraction The air-dried and comminuted plant material (100 g) was subjected to hydrodistillation for 4 h using a Clevenger-type apparatus to produce essential oil with a 0.2% yield. The obtained oil was dried over anhydrous sodium sulfate and stored at 4 °C until analysis. In all steps, column chromatography was monitored by TLC under UV at 254 and 366 nm and by spraying with anisaldehyde-H2SO4 reagent followed by heating (120 °C for 5 min), and fractions giving similar spots were combined. The structures of the compounds were determined by UV spectrophotometry (CE7250, Cecil) using various shift reagents (10) and by 1H-NMR, 13C-NMR and DEPT spectral analyses [Bruker Avance 400 DRX (400 MHz for 1H and 100 MHz for 13C)], as well as by comparison with the respective published data. DPPH free radical scavenging activity The crude extract, fractions and isolated compounds were evaluated for their free radical scavenging activities using the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method described by Sarker et al. with slight modifications (11). Briefly, the prepared sample solution (5 mg mL-1) in methanol was serially diluted to give concentrations ranging from 0.5 to 9.5 × 10-3 mg mL-1.
Diluted solutions (1 mL each) were mixed with 1 mL of DPPH (Sigma-Aldrich, Germany) solution (80 μg mL-1 in methanol) and kept for 30 min at 25 °C in the dark to allow the reaction to take place. UV absorbances of the mixtures were recorded on a Cecil CE7250 spectrophotometer at 517 nm. Butylated hydroxytoluene (BHT) was used as a positive control. All tests were performed in triplicate and IC50 values are reported as means ± SD. Results and Discussion Twenty compounds, representing 88.8% of the oil, were identified as a result of the GC-MS and GC-FID analyses of the oil obtained from the aerial parts of D. glabrum. The results showed that the tested oil was rich in oxygenated non-terpenes (56.3%), with elemicin (38.6%) and myristicin (14.3%) as main compounds (Table 1). A review of previous studies on the essential oils of Dorema species, including D. ammoniacum and D. aucheri aerial parts and D. glabrum roots, showed that elemicin and myristicin had not been detected in their chemical composition; therefore, this is the first report on the identification of these phenylpropanoid derivatives in the essential oil of plants of the genus Dorema (5,12,13,14,15). Elemicin and myristicin, however, have been reported at high levels in the essential oils of Ferula species, a genus classified in the same tribe (Scandiceae) as Dorema (16,17). Considering the reported antifungal and antibacterial effects of elemicin and the hepatoprotective, anti-inflammatory and insecticidal properties of myristicin, the essential oil of D. glabrum aerial parts might also possess similar pharmacological potential through the notable amounts of these two bioactive compounds in its chemical content (18)(19)(20)(21)(22). A previous study on D. aucheri has reported the isolation of four exudate methoxylated flavones (salvigenin, nepetin, cirsiliol and eupatorin) from its aerial parts (29). There is also a report on the isolation of three new sesquiterpene derivatives (kopetdaghins A-C) along with a known prenylated coumarin, daucosterol and stigmasterol-3-O-glucoside from the aerial parts of D. kopetdaghense (30). To our knowledge, the present study is the first report on the isolation of compounds 2-7 from plants of the genus Dorema. (Table 1 notes: a Compounds listed in order of elution from the HP-5MS column; b retention indices relative to C8-C24 n-alkanes on the HP-5MS column.) The results of the free radical scavenging activities of the crude extract, fractions and isolated compounds in the DPPH assay are summarized in Table 2. The methanol fraction exhibited the highest level of activity (IC50 = 53.3 ± 4.7 μg mL-1) among the tested fractions. The compounds isolated from the methanol fraction (3-7) also showed significant free radical scavenging activity (Table 2). These phenolic compounds (3-7) could thus be considered responsible for the notable antioxidant activity observed for the methanol fraction. Among the isolated compounds, the caffeoylquinic acid derivatives (2-4) were found to possess potent free radical scavenging activity (IC50 = 2.23 ± 7.3 and 2.61 ± 4.8 μg mL-1), higher than that of BHT, a synthetic commercial antioxidant (IC50 = 19.5 ± 0.8 μg mL-1). Considering the recognized important role of antioxidants in disease prevention and health promotion (39), our results introduce D. glabrum as a valuable source of natural phenolic antioxidants, especially flavonoids and caffeoylquinic acid derivatives. Occurrences of biologically active compounds in D.
glabrum aerial parts are indicative of further medicinal potential of this species and make it an appropriate candidate for further pharmacological and toxicological studies. The results of our study also emphasize the necessity of conserving D. glabrum as a valuable genetic resource for bioactive natural products. The results of our phytochemical studies also indicated that the major phloroglucinol glycosides reported from the roots of D. hyrcanum and D. aitchisonii (hyrcanoside, pleoside and echisoside) were not present (at least not at high levels) in the aerial parts of D. glabrum (31,32). A review of the literature demonstrated that the isolated compounds from D. glabrum aerial (1) (2)
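The DPPH readout described above reduces to simple arithmetic: percent scavenging at each concentration is computed from the absorbances at 517 nm relative to the DPPH control, and the IC50 is read off the dose-response curve. Below is a minimal sketch of that calculation; the absorbance values are hypothetical, and linear interpolation is just one common way to estimate the IC50:

```python
import numpy as np

# Hypothetical example data: concentrations (ug/mL) and A517 readings
conc = np.array([2.5, 5, 10, 25, 50, 100])            # serial dilutions
a_sample = np.array([0.71, 0.62, 0.50, 0.31, 0.18, 0.09])
a_control = 0.80                                       # DPPH + methanol only

# Percent scavenging relative to the DPPH control
inhibition = (a_control - a_sample) / a_control * 100

# IC50: concentration giving 50% scavenging, by linear interpolation
ic50 = np.interp(50, inhibition, conc)
print(f"IC50 = {ic50:.1f} ug/mL")
```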
2017-03-31T11:26:46.515Z
2015-06-01T00:00:00.000
{ "year": 2015, "sha1": "c994d8acf14c9af2b622f48ddd337ade17f11798", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "e3d6d4558655fe4f0e718173b33f7a7517e2fff2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
39926920
pes2o/s2orc
v3-fos-license
Software Project's Complexity Measurement: A Case Study Project management is a well-understood management method, widely adopted today in order to give predictable results to complex problems. However, quite often projects fail to satisfy their initial objectives. This is why studying the factors that affect the complexity of projects is quite important. In this paper, we present the complexity factors that are related to project time, cost and quality management and then apply them to a number of selected projects in order to compare the acquired results. The projects have been chosen in a way that allows the results to be easily compared. Introduction Research studies have shown that very often projects fail to meet their requirements in terms of quality, time and cost restrictions. It is widely accepted that among the main reasons for these failures is the increased complexity of modern projects due to their special characteristics. There is a lack of consensus in defining what project complexity is. This fact has resulted in the development of different approaches for classifying project management complexity. However, many researchers agree that complexity is "consisting of many varied and interrelated parts" [1]. Software projects are among the most complex ones. Many studies on various types of software project have shown that their outcomes are far from the complete fulfilment of the initial requirements [2]. Most studies measure complexity either by measuring the software project product based on its attributes, such as size, quality and reliability, or the characteristics of the software project process using attributes such as performance, stability and improvement [3]. As such, the need to establish a systematic way to evaluate software project complexity is important. Project complexity is a common concept recognized in a number of different ways. It is given a number of different interpretations based on the reference context or on each individual's experience. In many cases project complexity is used as a replacement for project size, or alternatively for project difficulty; or it is confused with the complexity of the project's product [4]. In most cases, the complexity of projects is measured either by measuring the attributes of project products or by measuring the characteristics of project processes. In our approach, it is suggested that the complexity of projects should be studied by applying structured project management techniques [5]. The purpose of this paper is to study how factors contribute to the complexity of projects. These factors are related to project time, cost and quality management and have been identified in [6] [7]. Based on these factors a complexity model is built. Subsequently, this model has been validated with a number of projects. Finally, the results and future work are presented. Background Complexity is part of our environment and appears in different domains. Many scientific fields have dealt with complex systems and have attempted to define the term complexity according to their domain. This implies that there is a different definition of complexity in computational theory, in information theory, in business, in software engineering, etc.,
and many times there are different definitions within the same domain. Schlindwein and Ison [8] state that there are two major approaches to complexity. The first one describes complexity as a property of a system, called descriptive complexity. The other approach describes complexity as perceived complexity and translates it as the subjective complexity that someone experiences through interaction with the system. This lack of consensus in defining what project complexity is has resulted in a variety of approaches to classifying project management complexity. One of the first researchers to deal with the concept of complexity was Baccarini [9]. He considers complexity as something "consisting of many varied and interrelated parts" and operationalized it in terms of "differentiation", the number of varied elements (e.g. tasks, components), and "interdependency", the degree of interrelatedness between these elements. Finally, he describes four types of complexity: 1) organizational complexity by differentiation, 2) organizational complexity by interdependency, 3) technological complexity by differentiation and 4) technological complexity by interdependency. Extending the work of Baccarini, Williams [10] added the dimensions of uncertainty in projects and the multi-objectivity and multiplicity of stakeholders. The definition of project complexity according to Williams is divided into structural complexity, stemming from the number and interdependence of elements, and uncertainty, stemming from uncertainty in goals and methods. Maylor et al. [14] focused on perceived managerial complexity under two dimensions, structural and dynamic, and identified five aspects of complexity. They defined a complexity model that is based on Mission, Organization, Delivery, Stakeholders and Team (MODeST). According to our previous work [6], in order to assess complexity we need a holistic framework that takes into account all areas of PM as they are defined in the PMBOK [15]. This framework should define factors that affect complexity and metrics associated with each factor. Subsequently, these metrics are evaluated by experts for their contribution to total project complexity, and from these a model is developed. It should be noted that the developed model is neither unique nor the same for all projects. Simply, each developed model represents the consensus of a group of experts, of a company, etc. The final outcome is a parametric model that gives a quantitative indication of the expected complexity of the project (Figure 1). The numerical representation of project complexity makes the relative comparison of projects' complexity easier, as well as the tracking of the complexity of a project itself. In addition, this approach allows the implementation of thresholds in project complexity that permit the classification of projects into categories according to their level of complexity. Relative comparison of complexity also allows the comparison of different management and implementation approaches for projects and the selection of the one with the lowest complexity. The main problem with this approach is that building such models requires laborious work, since the PMBOK defines ten subject areas (www.pmi.org), resulting in hundreds of factors and metrics. The same problem has been faced by other researchers, who attempted to limit the number of complexity categories and dimensions (factors) to a minimum. For example, Vidal et al.
[4] studied project complexity under the organizational and technological dimensions and identified four aspects for studying project complexity: project size, project variety, project interdependence and project context. For this reason, in our model it was decided to limit the number of subject areas and, instead of using PMBOK's ten subject areas, to use three subject areas, namely time, cost and quality [15]. These constitute the well-known project management iron triangle which, according to scholars and practitioners, defines the most influential factors for project success. A number of potential complexity factors have been identified as a result of an extensive literature review. Presentation of the Case Study After adopting the proposed factors that were found most frequently in the literature review, this initial set of factors was evaluated using a multi-criteria decision-making method, the Analytic Hierarchy Process (AHP) [16], for defining the relative weight of each factor. The AHP method was applied using an online AHP system that facilitated the whole process (http://bpmsg.com/academic/ahp.php). The primary objective of AHP is to rank a number of alternatives (e.g., a set of quality determinants) by considering a given set of qualitative and/or quantitative criteria, according to pairwise comparisons/judgments provided by the decision makers. AHP results in a hierarchical levelling of the quality determinants, where the upper hierarchy level is the goal of the decision process, the next level defines the selection criteria, which can be further subdivided into sub-criteria at lower hierarchy levels, and, finally, the bottom level presents the alternative decisions to be evaluated. The main advantages of applying the AHP method are: it is capable of providing a hierarchical decomposition of a decision problem that helps in better understanding the overall decision-making process; it handles both quantitative and qualitative criteria; and it is based on relative, pairwise comparisons of all decision elements. Instead of arbitrarily defining a percentage score and a weight for each decision element, AHP allows the decision maker to focus on the comparison of two criteria/alternatives at a time, thus decreasing the possibility of defining ratings based only on the personal perceptions of the evaluators or other external influences. AHP is based on three basic concepts (see Figure 2): • Analysis: a hierarchical tree is created with criteria, sub-criteria and alternative solutions as the leaves. • Calculation/Estimation: executed at every tree level based on a 1-to-9 scale in order to measure priorities. More specifically, a pairwise comparison takes place at every tree level with regard to the parent node. The goal node in the hierarchical tree exists only to highlight the top-down analysis of the methodology. • Synthesis: with the ultimate goal of extracting the final priorities of the alternatives. A concrete sketch of the calculation step is given below.
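The sketch below illustrates the calculation step on a single level of the tree: it derives priority weights from a pairwise comparison matrix via the geometric-mean approximation of the principal eigenvector and checks Saaty's consistency ratio. The 3×3 judgments are hypothetical, not the ones elicited in this study:

```python
import numpy as np

# Hypothetical pairwise comparison matrix on Saaty's 1-9 scale:
# entry [i][j] says how much more important criterion i is than j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# Geometric-mean (row products) approximation of the principal eigenvector
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

# Consistency check: CR < 0.1 is conventionally acceptable
lam = (A @ weights / weights).mean()   # estimate of lambda_max
n = A.shape[0]
ci = (lam - n) / (n - 1)
cr = ci / 0.58                         # Saaty's random index for n = 3
print(weights.round(3), f"CR = {cr:.3f}")
```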
As mentioned, AHP is a method that orders the priorities in a given situation, incorporating the elements of subjectivity and intuition, so that a final decision can be reached by making decisions on part-issues in a consistent way and gradually moving up the levels to deal with the given situation with a clear view of what it entails. In AHP, alternatives are paired and decision makers are called to note their preference between the two alternatives for a variety of issues (see Figure 2) on a scale of 1-9, assigning relative levels of priority to these judgments as they go along. Each element is compared to all other elements, using the scale presented, to define their relative importance. These judgments are quantified and calculated so that, when synthesized, they reveal the best alternative. AHP is relatively simple and logical and, given that a certain consistency in the part-decisions is maintained, AHP can help decision makers deal with complicated issues where often not only tangible but also intangible parameters affect their decision. It should be noted briefly at this point that AHP is only as effective as its design in each individual case, and analysts should exercise care and precision in capturing the true sub-elements and requirements of the case in question. A small number of project managers evaluated these factors and the ranking produced is presented in Table 1. The above factors lead to the decision tree presented in Figure 3. We have made the assumption that all top-level factors (cost, time and quality) contribute to the complexity with the same weight, 33.33%, since all factors are considered equivalent for project success. In order to evaluate the above model we defined three projects. The reason we decided to define these projects rather than evaluate real projects was simply to better demonstrate the validity of the proposed model, at least at this level. The profiles of these projects are the following: Project A: This is a project offering IT services to third parties. As such it is a long-duration project with many activities of diverse types, having many dependencies, since it has many stakeholders and a large number of deliverables. However, it is a well-planned project and therefore the number of critical activities is quite limited. Since this is a service project, additional service requests may arise, usually leading to budget increases. Project A is financed by a number of companies that share the same building infrastructure. A QA department is used to ensure the quality of the services delivered, and since it is a long-duration project all the procedures followed are documented in detail. Project B: Project B is a totally different case from Project A, since it is an IT services consultancy project. It is of short duration (three months) with few tasks. However, the required resources are rare, since this project requires high-end technical profiles. Project B's budget may vary significantly, in relation to the findings of the preliminary problem analysis. If a quick solution can be found, Project B will finish successfully and quickly. However, if technical problems persist, its duration might need to be extended considerably. Project B is financed by a single source, the company that ordered the consultancy services, and since Project B was the result of an urgent request due to technical problems, it started without any actual planning. Quality procedures have not been foreseen.
Project C: This is a software development project requiring complex software development. The duration of the project is average (one year), with an average number of dependencies between the activities and an average number of deliverables. Project C is a fixed-price project and a regular and constant cash flow is foreseen. Project C is financed by a single source, a public organization that is solely funded by the State. Since the project is funded by a public organization, there are bureaucratic procedures involved in all project management activities. All quality procedures are well documented and known, and the quality assurance positions are fully staffed. Since all projects are run by the same company, we consider that factors related to experience and tools influence the complexity of all three projects with the same weight. In Figure 4, we present the evaluation of the time-related factors, according to a small group of experts. As we can observe, according to project time management complexity the most complex project is A, which is really a large project (see the project profile above), followed by Project B, which is a project starved of resources. According to the classification of the time factors, the factor "Lack/Insufficient resources, especially rare" heavily influences Project B. The result is that Project B is perceived as more complex than Project C. The same analysis (pairwise comparisons) has been done for the cost (see Figure 5) and quality (see Figure 6) factors. In relation to cost management, Project B is the most complex, since the project started without any initial planning and the budget may change considerably. The least complex is Project C, which is funded by a public organisation, its funding is secured and it is a fixed-price contract. Similarly, if we evaluate the quality complexity factors, we see that the fact that Project B does not apply any quality procedures makes it the most difficult project to handle. The other two projects, according to their profiles, exhibit similar complexity. The final step is to combine the time, cost and quality complexity scores into one score. As we have already explained, we make the assumption that all three factors have equal weight, since their contributions to project success are valued equally. The end results are presented in Table 2, which demonstrates that even if Project B is considered smaller, with fewer tasks, etc., it has been evaluated as more complex. The results presented above give a logical and valid representation of project complexity. However, in order for this model to be used in practice, it has to be validated with real projects, possibly with adaptations to the influencing factors and their weights. Conclusions The need to measure complexity is well understood and sufficiently justified. Obviously, software project complexity is an area that needs to be studied further, and in detail.
We have presented a simple and straightforward model for the measurement of project complexity. Project complexity is a useful measure of how much attention we should pay to a project, taking into account not only the size but also the budget value of the project. It may be used together with other metrics to lower the risk undertaken in various projects. Of course, a lot of work remains to be done. Firstly, all presented elements have to be further analysed in order to produce a model that is able to robustly calculate project complexity by combining factual, dynamic and interaction elements. Secondly, we need to know how we can practically measure the evolution of project complexity over the project duration and what interventions are necessary for managing and controlling the complexity. Finally, we need to validate the model, in order to see whether measured and perceived complexity are similar. Figure 1. Project evaluation according to their complexity. Figure 4. Evaluation of time-related complexity factors. Figure 5. Evaluation of cost-related complexity factors. Figure 6. Evaluation of quality-related complexity factors.
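As a companion to the final combination step described in the text, the following sketch aggregates time, cost and quality complexity scores for the three projects using the equal 33.33% top-level weights assumed above; the local scores are illustrative only, not the values from Table 2:

```python
import numpy as np

# Hypothetical local complexity scores of projects A, B, C (rows) under
# the time, cost, quality sub-models (columns); each column sums to 1,
# as AHP priority vectors do.
local = np.array([
    [0.45, 0.25, 0.30],   # project A
    [0.40, 0.55, 0.55],   # project B
    [0.15, 0.20, 0.15],   # project C
])
top_weights = np.array([1/3, 1/3, 1/3])   # equal weights for time/cost/quality

total = local @ top_weights
for name, score in zip("ABC", total):
    print(f"Project {name}: complexity {score:.3f}")
```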
2017-08-15T17:00:24.465Z
2015-10-12T00:00:00.000
{ "year": 2015, "sha1": "78f301a7fbf4300e7d7d0500bc3b61e884cdf7ba", "oa_license": "CCBY", "oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=60585", "oa_status": "HYBRID", "pdf_src": "ScienceParseMerged", "pdf_hash": "78f301a7fbf4300e7d7d0500bc3b61e884cdf7ba", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
16324241
pes2o/s2orc
v3-fos-license
Evaluating Term Extraction Methods for Interpreters The study investigates term extraction methods using comparable corpora for interpreters. Simultaneous interpreting requires efficient use of highly specialised domain-specific terminology in the working languages of an interpreter, with limited time to prepare for new topics. We evaluate several terminology extraction methods for Chinese and English using settings which replicate real-life scenarios with respect to the task difficulty, the range of terms, the amount of materials available, etc. We also investigate interpreters' perception of the usefulness of automatic termlists. The results show that the accuracy of the terminology extraction pipelines is not perfect, as their precision ranges from 27% on short texts to 83% on longer corpora for English, and from 24% to 31% on Chinese. Nevertheless, the use of even small corpora for specialised topics greatly facilitates interpreters in their preparation. Introduction The study investigates term extraction methods using comparable corpora for interpreters. Simultaneous interpreting requires efficient use of highly specialised domain-specific terminology in the working languages of the interpreter. By necessity, interpreters often work in a wide range of domains and have limited time to prepare for new topics. To ensure the best possible simultaneous interpreting of specialised conferences where a great number of domain-specific terms are used, interpreters need preparation, usually under considerable time pressure. They need to familiarise themselves with concepts, technical terms, and proper names in the interpreters' working languages. However, there is little research into the use of modern terminology extraction tools and pipelines for the task of simultaneous interpretation. At the start of computer-assisted termbank development, Moser-Mercer (1992) overviewed the needs and workflow of practicing interpreters with respect to terminology and offered some guidelines for developing term management tools specifically for interpreters. That study did review the functionalities of some termbanks and term management systems, yet there was no mention of corpus collection (a fairly new idea at the time) or automatic term extraction. A few previous studies mentioned the application of corpora as potential electronic tools for interpreters. Fantinuoli (2006) and Gorjanc (2009) discussed the functions of specific online crawling tools and explored ways to extract specialised terminology from disposable web corpora for interpreters. Our work is most closely connected to Fantinuoli's work on the evaluation of termlists obtained from Web-derived corpora. However, that study relied on a single method of corpus collection and term extraction, and did not include an investigation into the integration of corpus research into the practice of interpreter training. Rütten (2003) suggested a conceptual software model for interpreters' terminology management, in which termlists are expected to be extracted (semi-)automatically and then revised by their users, the interpreters, who can concentrate on those terms which are relevant and important to remember. However, the study neither tested the functions of the term extraction tools nor further discussed interpreters' perception of the usefulness of the automatically generated lists in their preparation for interpreting tasks.
        FR0            FR1              FR2                 SM1              SM2
        En     Zh      En      Zh       En       Zh         En      Zh       En       Zh
Texts   1      1       9       9        81       86         9       12       74       84
Size    774    1,641   42,006  30,174   206,197  129,350    20,533  40,545   166,499  116,235
Table 1: Corpora used in this study (the size is in words for En, in characters for Zh). Based on the Rütten model, this paper will further test the functions of several term extraction tools for English and Chinese, and will discuss the interpreters' perception of the usefulness of the automatically generated lists in their preparation for interpreting tasks. In the remainder of this paper we describe the pipelines for corpus collection and terminology extraction (Section 2), present the results of their numeric evaluation (Section 3), and discuss options for future research, including the challenges for the term extraction pipelines in this setting (Section 4). Corpus collection and term extraction In this section we describe pipelines for interpreters' terminology preparation with the use of term extraction tools. We compare several approaches to corpus compilation and processing for specialised texts as well as several pipelines for terminology extraction. Description of the procedure Two specialised topics In this study MA student interpreters were invited to prepare for simultaneous interpreting tasks on two specialised topics: fast reactors (FR) and seabed minerals (SM). They were provided with two monolingual specialised corpora, in English and Chinese, for their advance preparation on each of the topics (FR & SM). Three term extractors The students started with the FR topic, and were asked to manually generate their own lists from the provided corpora (FR1 En & Zh) before their simultaneous interpreting exercise on the topic in both directions (English→Chinese and Chinese→English). After their interpreting tasks, they were asked to evaluate the relevance of two monolingual lists (En & Zh) which were automatically generated by one of three tools (TTC TermSuite, Syllabs Tools and TeaBoat). The purpose here was to see which tool could extract more relevant terms for the needs of trainee interpreters. We collected and compared the annotation results from the students to select a single tool with comparatively better performance. We then invited the students to prepare for the other topic (SM) using automatically generated lists in their simultaneous interpreting preparation. Corpus compilation There are two types of sources for the comparable corpora: 1. Conference documents and relevant background documents provided by the conference organisers 2. Specialised corpora collected from the internet using WebBootCat (Baroni and Bernardini, 2004) Table 1 presents all the corpora used in this study. FR0/SM0 has been created from a single relevant document, representing the speech that the trainee interpreters were asked to interpret from in this experiment. We also ran term extraction from this "corpus", since often a text of this length is the only source of information given to the interpreters in advance. We tried to balance the terminological difficulty for both languages, even if this was not always possible. After manual term selection, we found that FR0-Zh contains 147 terms per 591 seconds of delivery (15 terms per minute), FR0-En: 86 terms per 566 seconds (9 t/min), SM0-Zh: 157 terms per 604 seconds (16 t/min), and SM0-En: 169 terms per 750 seconds (14 t/min).
For instance, to produce FR2 we started with a set of ten relevant keywords in English and Chinese, as shown in Table 2, then used BootCat to retrieve online resources and generate two corpora (FR2 En & Zh). All the keyword seeds are from the English speech FR0 that the students were going to interpret from, and are therefore considered very relevant and important terms. The Chinese keywords are the translations of the English ones. Preprocessing included webpage cleaning (Baroni et al., 2008), as well as basic linguistic processing. Lemmatisation and tagging for English were done using TreeTagger (Schmid, 1994), while for Chinese we used "Segmenter", an automatic tokenisation tool (Liang et al., 2010), followed by TreeTagger for POS tagging. Lemmatisation is needed because the keywords in a glossary are expected to be in their dictionary form. Lemmatisation also helps in reducing nearly identical forms, e.g., sulphide deposit(s). However, lemmatisation also leads to imperfect terms, e.g., recognise type of marine resource, where the plurals and participles of the dictionary form would be expected (recognised type of marine resources). Automatic term extraction TTC TermSuite (Daille, 2012) is based on lexical patterns defined in terms of Part-of-Speech (POS) tags, with frequency comparison against a reference corpus using the specificity index (Ahmad et al., 1994); it extracts both single-word (SWT) and multi-word terms (MWT) and outputs their lemmas, part of speech, lexical pattern, term variants (if any), etc. The most important feature of TTC TermSuite is the fact that term candidates can be output with their corresponding term variants. Syllabs Tools (Blancafort et al., 2013) is a knowledge-poor tool, based on unsupervised detection of POS tags, following the procedure of (Clark, 2003), and on the Conditional Random Field framework for term extraction (Lafferty et al., 2001). Teaboat (Sharoff, 2012) does term extraction by detecting noun phrases using simple POS patterns in the IMS Corpus Workbench (Christ, 1994) and by applying log-likelihood statistics (Rayson and Garside, 2000) to rank terms by their relevance to the corpus in question against Internet reference corpora for English and Chinese (Sharoff, 2006); a sketch of this statistic is given below. Term extraction evaluation Fantinuoli (2006) used five categories to assess the level of specialisation and well-formedness of an automatically generated candidate termlist: 1. specialised terms that were manually extracted by the terminologist (and are contained in the reference term list); 2. highly specialised terms that were not detected by the terminologist; 3. non-specialised terms that are commonly used in the field of his study (medicine); 4. general terms that are not specific to the medical field; 5. ill-formed, incomplete expressions and fragments. Our annotation system extends Fantinuoli's study because the purpose of annotation in this project is to give the interpreters the possibility of extracting relevant terms from all the candidate terms regardless of their level of specialisation. Our premise is that interpreters may need relevant terms, both highly specialised and less specialised, in order to prepare themselves for a conference. The annotators are the end users of the list, i.e. the trainee interpreters who participated in this research.
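As promised above, here is a minimal sketch of the log-likelihood statistic of Rayson and Garside (2000) that Teaboat-style pipelines use to score a candidate term's frequency in the domain corpus against a large reference corpus; the counts in the example are hypothetical:

```python
from math import log

def log_likelihood(a, b, c, d):
    """Rayson & Garside (2000) LL for a term with frequency a in the
    domain corpus (size c) and frequency b in the reference corpus (size d)."""
    e1 = c * (a + b) / (c + d)   # expected frequency in the domain corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in the reference corpus
    ll = 0.0
    if a > 0:
        ll += a * log(a / e1)
    if b > 0:
        ll += b * log(b / e2)
    return 2 * ll

# Hypothetical counts: a candidate term occurs 42 times in a 206,197-word
# domain corpus and 3 times in a 100M-word reference corpus
print(log_likelihood(42, 3, 206_197, 100_000_000))
```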
Since the interpreters are tasked with translating speeches in the domain, they need to decide for themselves what is likely to be relevant, instead of relying on terminologists who describe the overall structure of the domain. The following is the five-category annotation system that we used in this research: R relevant terms (terms closely relevant to the topic), eg. breed ratio, uranium-238, decay heat removal system; P potentially relevant terms (a category between "I" and "R": they are terms, but annotators are not sure whether they are closely relevant to the topic of their assignment), eg. daughter nuclide, neutron poison, Western reactor; I irrelevant terms (terms not relevant to the topic), eg. schematic diagram, milk crate; G general words (rather than terms), eg. technical option, monthly donation, Google tag, discussion forum; IL ill-formed constructions (parts of terms or chunks of words), eg. var, loss of cooling, separate sample container, first baseline data, control ranging. It only took several minutes to generate a termlist after uploading the designated corpus to TTC TermSuite, Syllabs Tools or TeaBoat. Each of them automatically generated corresponding monolingual termlists sorted by their term specificity scores. For all the tools we set a threshold of 500 terms (if possible) as a practical limit for all evaluation experiments. The trainee interpreters were asked to annotate the lists using the above annotation system. Each of them reported that it took about 60 minutes to annotate both lists (in En & Zh) on each of the topics (FR & SM). All the annotators were briefed about what counts as a term and about the annotation system before they started their evaluation of the term lists. We aim for consistency, yet inter-annotator disagreement does exist and there is a certain degree of subjectivity in annotation. To measure the level of agreement we used Krippendorff's α over other measures, such as Fleiss' κ, because Krippendorff's α offers an extension of measures such as Fleiss' κ and Scott's π by introducing a distance metric for the pairwise disagreements, thus making it possible to work with interval-scale ratings, e.g., considering disagreement between R and P as less severe than between R and I (Krippendorff, 2004); a compact sketch of this computation is given below. The values of Krippendorff's α (see Table 3) are relatively low. The most common cases of disagreement are between R and P (the boundary between them often depends on the amount of knowledge on the side of the annotator), but also, quite surprisingly, between R and IL, when some annotators interpret ill-formed sequences as a contribution to useful terms. With the disagreement taken into consideration, our evaluation of the number of relevant terms was judged by the agreement of at least two annotators among the four to six annotators for the topic of FR. This established the gold standard lists reported in Table 4. The annotation results from Table 4 for English show that Syllabs generated more relevant terms than the other two tools from both FR0 and FR1. Both Syllabs and Teaboat also generated good numbers of general English words, which are likely to be already known by the trainee interpreters. The English termlists from all the tools contain a number of repetitions in the form of term variants, following Daille's definition of a term variant as "an utterance which is semantically and conceptually related to an original term" (Daille, 2005).
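The interval-scale Krippendorff's α mentioned above can be computed directly from the standard formula α = 1 − D_o/D_e with a squared-difference distance. The sketch below assumes the five categories are coded numerically (e.g. R=4, P=3, I=2, G=1, IL=0 — our coding, not the paper's), and the ratings in the example are invented:

```python
from itertools import combinations

def krippendorff_alpha_interval(units):
    """units: list of items, each a list of >=2 numeric ratings.
    Interval-scale Krippendorff's alpha = 1 - D_o / D_e; the factor for
    ordered vs. unordered pairs cancels in the ratio, so it is omitted."""
    units = [u for u in units if len(u) >= 2]
    n = sum(len(u) for u in units)                # number of pairable values
    # observed disagreement: within-unit squared differences
    d_o = sum(
        sum((a - b) ** 2 for a, b in combinations(u, 2)) / (len(u) - 1)
        for u in units
    ) / n
    # expected disagreement: squared differences over all value pairs
    values = [v for u in units for v in u]
    d_e = sum((a - b) ** 2 for a, b in combinations(values, 2)) / (n * (n - 1))
    return 1 - d_o / d_e

# Hypothetical: 4 annotators rate three candidate terms (R=4 ... IL=0)
print(krippendorff_alpha_interval([[4, 4, 3, 4], [2, 1, 2, 2], [4, 3, 0, 4]]))
```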
The automatically generated termlists contain the following types of term variation, which are counted as individual term candidates scattered through the termlists: Morphological variation: bathymetry vs bathymetric (not different when translated into Chinese); Anaphoric variation: polymetallic sulphide deposit vs deposit; Pattern switching: meltdown of the core vs core meltdown, level of gamma radiation vs gamma radiation level; Synonymy in variation: deep sea mining vs deep seabed mining, seabed vs seafloor, ferromanganese crust vs iron-manganese crust. On the one hand, these variations provide useful lexical information about the term, preparing the interpreters for what is possible in their assignment; on the other hand, the term variants need to be explicitly linked, which is possible only in the TTC TermSuite tool. The annotation results from Table 4 for Chinese show that both Syllabs' and Teaboat's lists offer noticeably fewer relevant terms from FR1 compared with the English lists. When we further investigate the distribution of the term classes in the annotations in Table 5, Syllabs' Chinese list on FR1 contains a large number of ill-formed constructions, including incomplete terms, eg. 水堆 'water reactor', 里岛核电站 'Mile Island nuclear plant', and longer chunks, eg. 最大程度上保证了钠, 可用压水堆后处理得到的钚作为核燃料. Teaboat's list contains a number of general words, eg. 开发 'development', 生产 'production' or 工程 'project'. Both categories (G and IL) are frequent in TTC's Chinese list. On the basis of these results, we selected the single tool (Syllabs) with comparatively better performance in both languages to generate termlists on SM1 (En & Zh) and asked 12 annotators to select the relevant terms and learn them during their interpreting preparation. Among the 500 candidate terms for English, 441 terms were agreed as relevant by at least two annotators, and 266 terms were agreed by five annotators. The precision rates are 88.2% and 53.2% respectively. On the other hand, only 130 terms were agreed as relevant by two annotators from the 500 Chinese candidate terms. The precision rate for the Chinese list is 26%. These results basically replicate the previous findings on FR1. The other pattern we observe from the current data is that the larger the corpus, the more relevant terms the tools can generate. If the corpus is of very limited size, some tools fail: TTC TermSuite fails to generate any list for the FR0-en 'corpus' of only 774 words, while the Syllabs and Teaboat tools produce shorter lists of 104 and 56 terms respectively. The situation is similar to other studies which used small (single-document) corpora, e.g., (Matsuo and Ishizuka, 2004). Reliability of the three term extractors The results show that the accuracy of the terminology extraction pipelines is not perfect, as their precision ranges from 27% on short texts to 83% on longer corpora for English, and from 24% to 31% for Chinese. Among the three term extractors (TTC TermSuite, Syllabs Tools and Teaboat), Syllabs is more reliable in generating relevant terms in English. All three tools perform less satisfactorily in generating relevant terms in Chinese. We hypothesise that at least three factors play an important role here: 1. Chinese is written without explicit word boundaries, while term extraction starts from already tokenised texts.
Errors in the tokenisation process lead to difficulties in obtaining proper terms, e.g., 一回路 'primary loop' becomes 一回 'once' 路 'road', and 和非能动安全性 'and passive security' becomes 和非 'and not' 能动 'active' 安全性 'security', which reduces the chances of detecting 非能动安全性 'passive security' as a term. 2. Word ambiguity in Chinese is high. This leads to POS tagging errors, for example, when nouns are treated as verbs, and this breaks the POS patterns for term extraction, e.g., 示范堆 'demonstration reactor' is treated as 示范/vn 堆/v. 3. Chinese exhibits more term patterns than are captured by the three term extraction tools we tested. For example, 并网发电 'connect to the grid' is potentially a useful term, which is correctly POS-tagged as 并网/v 发电/vn, but is not captured by the patterns in any of the tools. Two of the three causes of the results in Chinese concern text pre-processing. Further investigation might be helpful in finding out how the pre-processing steps affect the performance of the term extractors and which terms are affected by each source of errors. Manual selection vs automatic extraction of terms For the interpreters, manually selecting terms from a single document of limited size (eg. FR0-en = 774 words) is possible. However, when conference documents amount to the size of FR1 (FR1-en = 42,006 words), it took the trainee interpreters 9 hours on average to extract terms manually and produce initial termlists, since they had to spend the majority of their time reading through fairly complex documents, copying the terms from the texts onto their own termlists and searching for unfamiliar terms. With the use of automatically generated termlists on the same preparation task, students in the experiment group spent an average of 4 hours producing their initial bilingual termlists. Therefore, half of the time spent on reading could be saved, allowing the interpreters to get familiar with the concepts relevant to the terms and to further activate the terms for their simultaneous interpreting tasks. Furthermore, if interpreters are given limited time for preparation, they would not be able to read through larger corpora of the size of FR2 (FR2-en = 206,197 words) and produce termlists from them manually. That is probably where the tools discussed in this article have an obvious advantage over manual term extraction by the interpreters. Moreover, in other studies we have also demonstrated that, in addition to providing an automatically extracted termlist, it is beneficial to link the terms to their uses in the concordance lines of the corpus they have been extracted from. This is expected to give the interpreters easy access to the context of the terms, to see how they are used and to get more background knowledge about the domain. Feedback from students After doing the annotation, the students offered written feedback on the termlists generated by the three term extractors. They also commented on the usefulness of the Syllabs lists for their interpreting preparation. They generally reported that the termlists provided many relevant terms on the two topics, and that the use of the lists saved their precious preparation time. Some of them found the lists 'unexpectedly accurate and complete', and the presence of irrelevant words and repetitions in the lists 'tolerable' (even taking into account the 24% to 31% precision rate for Chinese). The students told us they used the lists as an important indicator of the content of the conference documents and relevant background documents.
The lists helped them prioritise their preparation on the most relevant terms and concepts. Most of them expressed the opinion that, if given very limited time, they would prefer to use the automatically generated lists for their preparation. On the other hand, students reported that the termlists in Chinese offered far fewer relevant terms and contained quite a number of ill-formed constructions compared with the lists in English; therefore they felt the lists in Chinese were less useful and less reliable. Extraction of proper names Proper names (including names of organisations, names of places, and names and titles of people) are equally important as terms for interpreters, if not more so, yet many of them are not included in the lists automatically generated by the three term extractors (TTC, Syllabs and Teaboat). Therefore, named entity extraction tools are needed in addition to term extraction in order to generate more complete lists for interpreters' use. This will be further explored in our future research. File formats, plain text, encodings All the tools we tested can only process plain text (including UTF-8). Nevertheless, meeting documents normally arrive in word-processing or presentation formats (.pdf, .doc, .xls or .ppt) rather than .txt. Interpreters need to take some time to convert all the files they obtain from their customers into plain text before they can use any of the tools mentioned above.
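As a concrete illustration of this conversion step, the sketch below batch-converts PDFs to UTF-8 plain text with poppler's pdftotext command-line tool; the folder name is hypothetical, and other formats (.doc, .ppt) would need a different converter such as LibreOffice's CLI:

```python
import subprocess
from pathlib import Path

# Convert a folder of PDFs to UTF-8 plain text with poppler's pdftotext;
# the resulting .txt files can then be fed to the term extraction tools.
for pdf in Path("meeting_docs").glob("*.pdf"):
    txt = pdf.with_suffix(".txt")
    subprocess.run(["pdftotext", "-enc", "UTF-8", str(pdf), str(txt)], check=True)
```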
2015-08-11T20:29:18.000Z
2014-08-01T00:00:00.000
{ "year": 2014, "sha1": "9b1c11c8af88f730a2a008706db5d22e83cee269", "oa_license": null, "oa_url": "http://anthology.aclweb.org/W/W14/W14-4811.pdf", "oa_status": "GREEN", "pdf_src": "ACL", "pdf_hash": "9b1c11c8af88f730a2a008706db5d22e83cee269", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
119657428
pes2o/s2orc
v3-fos-license
Smith Ideals of Operadic Algebras in Monoidal Model Categories Building upon Hovey's work on Smith ideals for monoids, we develop a homotopy theory of Smith ideals for general operads in a symmetric monoidal category. For a sufficiently nice stable monoidal model category and an operad satisfying a cofibrancy condition, we show that there is a Quillen equivalence between a model structure on Smith ideals and a model structure on algebra maps induced by the cokernel and the kernel. For symmetric spectra, this applies to the commutative operad and all Sigma-cofibrant operads. For chain complexes over a field of characteristic zero and the stable module category, this Quillen equivalence holds for all operads. This paper ends with a comparison between the semi-model category approach and the $\infty$-category approach to encoding the homotopy theory of algebras over Sigma-cofibrant operads that are not necessarily admissible. INTRODUCTION A major part of stable homotopy theory is the study of structured ring spectra. These include strict ring spectra, commutative ring spectra, A∞-ring spectra, E∞-ring spectra, En-ring spectra, and so forth. Based on an unpublished talk by Jeff Smith, in [Hov∞] Hovey developed a homotopy theory of Smith ideals for ring spectra and monoids in more general symmetric monoidal model categories. Let us briefly recall Hovey's work in [Hov∞]. For a symmetric monoidal closed category M, its arrow category →M is the category whose objects are morphisms in M and whose morphisms are commutative squares in M. It has two symmetric monoidal closed structures, namely the tensor product monoidal structure →M⊗ and the pushout product monoidal structure →M◻. A monoid in →M◻ is a Smith ideal, and a monoid in →M⊗ is a monoid morphism. If M is a model category, then →M⊗ has the injective model structure →M⊗inj, where weak equivalences and cofibrations are defined entrywise, and the category of monoid morphisms inherits a model structure from →M⊗inj. Likewise, →M◻ has the projective model structure →M◻proj, where weak equivalences and fibrations are defined entrywise, and the category of Smith ideals inherits a model structure from →M◻proj. Surprisingly, when M is pointed (resp., stable), the cokernel and the kernel form a Quillen adjunction (resp., Quillen equivalence) between →M◻ and →M⊗ and also between Smith ideals and monoid morphisms. Since monoids are algebras over the associative operad, a natural question is whether there is a satisfactory theory of Smith ideals for algebras over other operads. For the commutative operad, the first author showed in [Whi17] that commutative Smith ideals in symmetric spectra, equipped with either the positive flat (stable) or the positive (stable) model structure, inherit a model structure. The purpose of this paper is to generalize Hovey's work to Smith ideals for general operads in monoidal model categories. For an operad O we define a Smith O-ideal as an algebra over an associated operad →O◻ in the arrow category →M◻. We will prove a precise version of the following result in Theorem 4.4.1.
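The two monoidal structures on the arrow category just mentioned can be written out explicitly; the display below is a standard rendering of Hovey's setup, not a verbatim excerpt from this paper. For objects f : X_0 → X_1 and g : Y_0 → Y_1 of →M:

```latex
% Tensor product monoidal structure on the arrow category:
% the entrywise tensor, again an object of the arrow category
f \otimes g \;\colon\; X_0 \otimes Y_0 \longrightarrow X_1 \otimes Y_1

% Pushout product monoidal structure:
f \mathbin{\square} g \;\colon\;
  (X_0 \otimes Y_1) \coprod_{X_0 \otimes Y_0} (X_1 \otimes Y_0)
  \longrightarrow X_1 \otimes Y_1
```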
For example, this Theorem holds in the following situations: (1) O is an arbitrary C-colored operad, and M is (i) the category Ch(R) of bounded or unbounded chain complexes over a semi-simple ring containing Q (Corollary 5.2.4), (ii) the stable module category of k[G]-modules for some field k and finite group G (Corollary 6.2.5), or (iii) the category of classical, equivariant, or motivic symmetric spectra with the positive or positive flat stable model structure (Example 4.4.2). The rest of this paper is organized as follows. In Section 2 we recall some basic facts about model categories and arrow categories. In Section 3 we define Smith ideals for an operad and prove that, when M is pointed, there is an adjunction between Smith O-ideals and O-algebra morphisms given by the cokernel and the kernel. In Section 4 we define the model structures on Smith O-ideals and O-algebra morphisms and prove the Theorem above. We also include a discussion of what happens when there are only semi-model structures on Smith O-ideals and O-algebra morphisms. In Section 5 we apply the Theorem to the commutative operad and Σ_C-cofibrant operads. In Section 6 we apply the Theorem to entrywise cofibrant operads. In Section 7 we include a comparison between various approaches to encoding the homotopy theory of operad-algebras, including model categories, semi-model categories, and ∞-categories. This discussion holds in general, beyond the situation of Smith O-ideals and O-algebra morphisms. MODEL STRUCTURES ON THE ARROW CATEGORY In this section we recall a few facts about monoidal model categories and arrow categories. Our main references for model categories are [Hir03, Hov99, SS00]. In this paper, (M, ⊗, 1, Hom) will usually be a bicomplete symmetric monoidal closed category [Mac98] (VII.7) with monoidal unit 1, internal hom Hom, initial object ∅, and terminal object *. Since M is closed, ∅ ⊗ X = ∅ for any X. Monoidal Model Categories. A model category is cofibrantly generated if there are a set I of cofibrations and a set J of trivial cofibrations (that is, morphisms that are both cofibrations and weak equivalences) that permit the small object argument (with respect to some cardinal κ), and a morphism is a (trivial) fibration if and only if it satisfies the right lifting property with respect to all morphisms in J (resp. I). Let I-cell denote the class of transfinite compositions of pushouts of morphisms in I, and let I-cof denote retracts of such [Hov99] (2.1.9). In order to run the small object argument, we will assume the domains K of the morphisms in I (and J) are κ-small relative to I-cell (resp. J-cell). In other words, given a regular cardinal λ ≥ κ and any λ-sequence X_0 → X_1 → ⋯ formed of morphisms X_β → X_{β+1} in I-cell, the map of sets colim_{β<λ} M(K, X_β) → M(K, colim_{β<λ} X_β) is a bijection. An object is small if there is some κ for which it is κ-small. We will say that a model category is strongly cofibrantly generated if the domains and codomains of I and J are small with respect to the entire category. In Section 4, we will produce homotopy theories for operad-algebras valued in arrow categories equipped with some model structure. Depending on the colored operad and properties of M, sometimes we will only have a semi-model structure on a category of algebras. However, as shown in Section 7, it still encodes the correct ∞-category.
A semi-model category satisfies axioms similar to those of a model category, but one only knows that morphisms with cofibrant domain admit a factorization into a trivial cofibration followed by a fibration, and one only knows that trivial cofibrations with cofibrant domain lift against fibrations. To the authors' knowledge, every result about model categories has a corresponding result for semi-model categories, often obtained by first cofibrantly replacing everything in sight (see, for example, [BW20a]). The following is Definition 2.1 in [BW20a]. i Every morphism in M can be functorially factored into a cofibration followed by a trivial fibration. ii Every morphism whose domain is cofibrant can be functorially factored into a trivial cofibration followed by a fibration. If, in addition, M is bicomplete, then we call M a semi-model category. M is said to be cofibrantly generated if there are sets of morphisms I and J in M such that the class of (trivial) fibrations is characterized by the right lifting property with respect to J (resp. I), the domains of I are small relative to I-cell, and the domains of J are small relative to morphisms in J-cell whose domain is cofibrant. An adjunction with left adjoint L and right adjoint R is denoted by L ⊣ R. (1) We call L ⊣ R a Quillen adjunction if the right adjoint R preserves fibrations and trivial fibrations. In this case, we call L a left Quillen functor and R a right Quillen functor. (2) We call a Quillen adjunction L ⊣ R a Quillen equivalence if, for each morphism f ∶ LX / / Y ∈ N with X cofibrant in M and Y fibrant in N, f is a weak equivalence in N if and only if its adjoint f # ∶ X / / RY is a weak equivalence in M. Definition 2.1.3. Suppose M is a category with pushouts and pullbacks. (1) Given a solid-arrow commutative diagram in M in which the square is a pullback, the unique dotted induced morphism is denoted f ⧅ g and called the pullback corner morphism of f and g. (2) Given a solid-arrow commutative diagram in M in which the square is a pushout, the unique dotted induced morphism is denoted f ⊛ g and called the pushout corner morphism of f and g. In the next definition, we follow simplicial notation 0 / / 1 so the reader can distinguish source and target at a glance. Definition 2.1.4. Suppose (M, ⊗, 1) is a monoidal category with pushouts. Suppose f ∶ X 0 / / X 1 and g ∶ Y 0 / / Y 1 are morphisms in M. The pushout corner morphism f ◻g of f ⊗ 1 and 1 ⊗ g is denoted f ◻ g and called the pushout product of f and g. Definition 2.1.5. A symmetric monoidal closed category M equipped with a model structure is called a monoidal model category if it satisfies the following pushout product axiom [SS00] (3.1): • Given any cofibrations f ∶ X 0 / / X 1 and g ∶ Y 0 / / Y 1 , the pushout product morphism is a cofibration. If, in addition, either f or g is a weak equivalence, then f ◻ g is a trivial cofibration. Additionally, in order to guarantee that the unit 1 descends to the unit in the homotopy category, it is sometimes convenient to assume the unit axiom [Hov99] (4.2.6): if Q1 / / 1 is a cofibrant replacement, then for any cofibrant object X, the induced morphism Q1 ⊗ X / / 1 ⊗ X ≅ X is a weak equivalence. Since (−) ⊗ X is a left Quillen functor, if the unit axiom holds for one cofibrant replacement of 1, then it holds for any cofibrant replacement of 1. Arrow Categories. Definition 2.2.1. 
A lax monoidal functor F ∶ M / / N between two monoidal categories is a functor equipped with structure morphisms for X and Y in M that are associative and unital in a suitable sense, as discussed in [Mac98] (XI.2), where this notion is referred to simply as a monoidal functor. If, furthermore, M and N are symmetric monoidal categories, and F 2 is compatible with the symmetry isomorphisms, then F is called a lax symmetric monoidal functor. If the structure morphisms F 2 and F 0 are isomorphisms (resp., identity morphisms), then F is called a strong monoidal functor (resp., strict monoidal functor). property with respect to i, and β has the right lifting property with respect to α i if and only if Ev 0 f / / Ev 1 f × Ev 1 g Ev 0 g has the right lifting property with respect to i. Thus these sets generate the injective model structure. The pushout product axiom and the unit axiom on → M ⊗ inj follows from the same on M [Bar10] (4.51). Projective Model Structure. The following result about the projective model structure is from [Hov∞] (3.1). Theorem 2.4.1. Suppose M is a model category. (1) There is a model structure on → M, called the projective model structure, in which a morphism α ∶ f / / g as in (2.2.3) is a weak equivalence (resp., fibration) if and only if α 0 and α 1 are weak equivalences (resp., fibrations) in M. A morphism α is a (trivial) cofibration if and only if α 0 and the pushout corner morphism Note that this implies that α 1 is also a (trivial) cofibration. The arrow category equipped with the projective model structure is denoted by → M proj . (2) If M is cofibrantly generated, then so is Proof. For a category M with all small limits and colimits, recall from [Hov99] (Sections 1.1, 6.1) that M is pointed if the unique morphism ∅ / / * is an isomorphism. In such a category, we define the cokernel of a morphism f ∶ X 0 / / X 1 to be the morphism coker f ∶ X 1 / / Z defined by the following pushout: / / X 1 For the left adjoints L 0 and L 1 in (2.2.4), we note the following equalities for each object X. Most of the observations in Proposition 2.4.3 are from [Hov∞] (1.4, 4.1, 4.3). We provide proofs here for completeness. (1) The cokernel is a strictly unital strong symmetric monoidal functor from → M ◻ to → M ⊗ whose right adjoint is the kernel. (2) The strong symmetric monoidality of the cokernel induces a strictly unital lax symmetric monoidal structure on the kernel such that the adjunction (coker, ker) is monoidal. (3) If M is also a model category, then (coker, ker) is a Quillen adjunction. Proof. For (1), first note that coker preserves the units since the cokernel of ∅ / / 1 is Id 1 . Next, it is strong monoidal because, given f ∶ X 0 / / X 1 and g ∶ Y 0 / / Y 1 we can form the following commutative diagram: Vertical pushouts yield a span whose pushout is coker( f ◻ g). Horizontal pushouts yield a span whose pushout is coker f ⊗ coker g. Since pushouts commute, we obtain the natural isomorphism . We take this isomorphism as the ( f , g)-component of the monoidal constraint for coker. Using similar reasoning and the universal property of pushouts, one can show that the symmetric monoidal coherence diagrams commute. morphisms f and g. This diagram commutes because the adjoint of each composite is the identity morphism of coker( f ◻ g). For the long composite, this uses (i) the naturality of (coker 2 ) −1 and (ii) one of the triangle identities for the adjunction (coker, ker) [Mac98] (IV.1 Theorem 1). 
To prove that the counit ε ∶ coker ○ ker / / Id is a monoidal natural transformation, it remains to show that the following diagram commutes. This diagram commutes because, starting from the lower-left corner to f ⊗ g, each composite is adjoint to ker 2 f ,g . For (3), let α be a (trivial) cofibration and note that coker α is the colimit of a morphism of pushout diagrams. That morphism of pushout diagrams is a Reedy (trivial) cofibration. The colimit functor is left Quillen as a functor from the Reedy model structure to the underlying category [Hov99] (Section 5.2). Hence, coker α is again a (trivial) cofibration, so coker is a left Quillen functor. See Lemma 6.1.8 for an analogous proof. For (4), we must prove that, if f is cofibrant in → M ◻ (so, a cofibration of cofibrant objects) and g is fibrant in → M ⊗ (so, a fibration of fibrant objects), then α ∶ coker f / / g is a weak equivalence if and only if its adjoint β ∶ f / / ker g is a weak equivalence [Hov99] (1.3.12). We display both morphisms: In the homotopy category, these data give rise to fiber and cofiber sequences. Since M is stable, every fiber sequence is canonically isomorphic to a cofiber sequence [Hov99] (Chapter 7). We can extend to the right and realize α and β as giving a morphism of cofiber sequences in the homotopy category: If either α or β is a weak equivalence, then so is the other, by the two out of three property. Hence, coker and ker form a Quillen equivalence. SMITH IDEALS FOR OPERADS Suppose (M, ⊗, 1) is a cocomplete symmetric monoidal category in which the monoidal product commutes with colimits on both sides, which is automatically true if M is a closed symmetric monoidal category. In this section we define Smith ideals for an arbitrary colored operad O in M. When M is pointed, we observe in Theorem 3.4.2 that the cokernel and the kernel induce an adjunction between the categories of Smith O-ideals and of O-algebra morphisms. This will set the stage for the study of the homotopy theory of Smith O-ideals in the next several sections. Definition 3.1.1. Suppose C is a set, whose elements will be called colors. (1) A C-profile is a finite, possibly empty sequence c = (c 1 , . . . , c n ) with each c i ∈ C. (2) When permutations act on C-profiles from the left (resp., right), the resulting groupoid is denoted by Σ C (resp., Σ op C ). (3) The category of C-colored symmetric sequences in M is the diagram category M Σ op C ×C . For a C-colored symmetric sequence X, we think of Σ op C (resp., C) as parametrizing the inputs (resp., outputs). For (c; d) ∈ Σ op C × C, the corresponding entry of a C-colored symmetric sequence X is denoted by X d c . (4) A C-colored operad (O, γ, 1) in M consists of: • objects X c ∈ M for c ∈ C and • structure morphisms in M for all 1 ≤ i ≤ n with n ≥ 1, d ∈ C, and c = (c 1 , . . . , c n ) ∈ Σ C . These data are required to satisfy associativity, unity, and equivariant conditions similar to those of an O-algebra but with one input entry A and the output entry replaced by X. A morphism of A-bimodules is required to preserve the structure morphisms. As a consequence of (2.4.2) and (3.1.2), we have the following equalities. Example 3.1.5. Every strongly cofibrantly generated model category is operadically cofibrantly generated. The category of compactly generated topological spaces is not strongly cofibrantly generated. However, it is operadically cofibrantly generated. Indeed, the domains and codomains of I ∪ J are small relative to inclusions [Hov99] M ⊗ for all d ∈ C and c = (c 1 , . . . 
, c n ) ∈ Σ C . This structure morphism is equivalent to the commutative square The associativity, unity, and equivariance of λ translate into those of λ 0 and λ 1 , making (X, λ 0 ) and (Y, λ 1 ) into O-algebras in M. The commutativity of the previous square means that f ∶ (X, Remark 3.2.3. For the associative operad As, whose algebras are monoids, the identification of → As ⊗ -algebras (that is, monoids in → M ⊗ ) with monoid morphisms in M is [Hov∞] (1.5). Propositions 3.3.3 and 3.3.11 below unpack Definition 3.3.1. They should be compared with Proposition 3.2.2. For objects or morphisms A cs , . . . , A c t with s ≤ t, we use the abbreviation that are associative, unital, and equivariant. Since For n ≥ 1, the structure morphism λ is equivalent to the commutative diagram The domain of the iterated pushout product f c 1 ◻ ⋯ ◻ f cn is the colimit . The morphisms that define the colimit are given by the f c i 's. For each n-tuple of indices ǫ = (ǫ 1 , . . . , ǫ n ) ∈ {0, 1} n ∖ {(1, . . . , 1)}, we denote by the morphism that comes with the colimit. For each i ∈ {1, . . . , n}, we denote by the n-tuple with 0 in the ith entry and 1 in every other entry. The upper left quadrilateral is commutative because D is the colimit in (3.3.6). The other two triangles are commutative by the definition of λ ǫ i 0 and λ ǫ j 0 in (3.3.8). The argument above can be reversed. In particular, to see that the commutative diagram (3.3.4), which is the boundary of (3.3.9), yields the top horizontal morphism λ 0 in (3.3.5), observe that the full subcategory of the punctured n-cube {0, 1} n ∖ {(1, . . . , 1)} consisting of (ǫ 1 , . . . , ǫ n ) with at most two 0's is a final subcat- where X ′ becomes an A-bimodule via the restriction along h 1 , such that the square Proof. Following the proof of Proposition 3.3.3, we unravel the given morphism in M such that the square (3.3.12) commutes. The compatibility of h with the → O ◻ -algebra structure means the following diagram commutes in → M for all d, c 1 , . . . , c n ∈ C. (3.3.14) If n = 0, then (3.3.14) is the commutative diagram below. For n ≥ 1, using the abbreviation 3.14) becomes the following commutative cube. The six commutative faces of (3.3.16) are as follows. (1) The back face is (3) The right face is the square (3.3.12) for d ∈ C. (4) The bottom face and the n = 0 case (3.3.15) together express the fact that The left face imposes no extra condition because D is the colimit in (3.3.6) and similarly for D ′ . In more detail, for each n-tuple (ǫ 1 , . . . , is commutative because it is a tensor product of n commutative squares corresponding to the n tensor factors of the upper left corner. • For a tensor factor with ǫ i = 0, by definition In this case, we have the commutative square (3.3.12) for c i ∈ C. • For a tensor factor with ǫ i = 1, by definition Both f * and f ′ * are given by the identity in the respective tensor factors, while both h * and h 1 * are given by h 1 c i . Pre-composing the top face of the commutative cube (3.3.16) with the morphism Id ⊗ ι ǫ i in (3.3.8) yields the following commutative diagram. This commutative diagram expresses the fact that h 0 ∶ X / / X ′ is a morphism of A-bimodules, where X ′ becomes an A-bimodule via the restriction along h 1 . Thus, pre-composing the top face of (3.3.16) with the morphism Id ⊗ ι ǫ yields a diagram that factors into two sub-diagrams, one of which is (3.3.18). The other subdiagram commutes and imposes no extra condition by the same argument above for (3.3.17). 
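To fix ideas, here is the n = 2 instance of the punctured-cube colimit that appears throughout this subsection; this is a standard unwinding rather than a statement quoted from the text. For morphisms f ∶ X 0 → X 1 and g ∶ Y 0 → Y 1 , the punctured square {0, 1}² ∖ {(1, 1)} has colimit
\[
Q \;=\; (X_1 \otimes Y_0) \coprod_{X_0 \otimes Y_0} (X_0 \otimes Y_1),
\qquad
f \,\square\, g \;:\; Q \longrightarrow X_1 \otimes Y_1,
\]
where f ◻ g is induced by Id ⊗ g on the factor X 1 ⊗ Y 0 and by f ⊗ Id on the factor X 0 ⊗ Y 1 , and the structure morphisms ι (1,0) and ι (0,1) of the colimit are the two pushout legs.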
The description of Smith O-ideals and their morphisms in Propositions 3.3.3 and 3.3.11 imply the following result. Proof. Denote the first and the second copies of C in C ⊔ C by, respectively, C 0 and C 1 . For an element c ∈ C, we write c ǫ ∈ C ǫ for the same element for ǫ ∈ {0, 1}. The entries of O s are defined as follows for d, c 1 , . . . , c n ∈ C and ǫ 1 , . . . , ǫ n ∈ {0, 1}. The operad structure morphisms of O s are either those of O or the unique morphism from the initial object ∅. The identification of O s -algebra morphisms and Smith O-ideal morphisms follows similarly from Proposition 3.3.11. More explicitly, a morphism h of O s -algebras consists of a To see that these component morphisms make the diagram (3.3.12) commute, we use the fact that the components of f are the composites in (3.3.22) and similarly for f ′ . The desired diagram (3.3.12) is the boundary of the following diagram. This shows that the diagram (3.3.12) is commutative. The other two conditions in Proposition 3.3.11 are the following: This finishes the proof. The colored operad O s is somewhat similar to the two-colored operad for monoid morphisms in [Yau16] (Section 14.3). Operadic Smith Ideals and Morphisms of Operadic Algebras. In Proposition 2.4.3 we observe that, if M is a pointed symmetric monoidal category with all small limits and colimits, then there is an adjunction with cokernel as the left adjoint and kernel as the right adjoint. Since cokernel is a strictly unital strong symmetric monoidal functor, the kernel is a strictly unital lax symmetric monoidal functor, and the adjunction is monoidal. If M is a pointed model category, then (coker, ker) is a Quillen adjunction. If M is a stable model category, then (coker, ker) is a Quillen equivalence. in which the left adjoint, the right adjoint, the unit, and the counit are defined entrywise. Proof. To simplify the notation, in this proof we write C = coker and K = ker. First we lift the functors C and K. Then we lift the unit and the counit for the adjunction. Step 1: Lifting the Kernel and the Cokernel to Algebra Categories The functors in (3.4.1) lifts entrywise to the functors in (3.4.3) for the following reasons. • The functor becomes an → O ⊗ -algebra with structure morphism λ # given by the following composite for all d, c 1 , . . . , c n ∈ C, with C 2 = coker 2 the monoidal constraint of the cokernel in (2.4.4). The → O ⊗ -algebra axioms for (C f , λ # ) follow from the → O ◻ -algebra axiom for ( f , λ) and the symmetric monoidal axioms for the cokernel. The same reasoning also applies to the kernel. Thus there is a diagram of functors with both U forgetful functors and To see that this equality holds, suppose ( f , λ) is an → O ⊗ -algebra as in the proof of the → O ◻ -algebra structure morphism λ ′ is constructed from the monoidal constraint K 2 and Kλ. Since each U forgets the operad algebra structure morphism, we obtain the equalities The equality UK = KU holds on → O ⊗ -algebra morphisms because (i) both K apply entrywise to morphisms and (ii) both U do not change the morphisms. Next we show that the unit and the counit, .5) lift to the top between algebra categories. Step 2: Lifting the Unit To show that η defines a natural transformation for the top functors in (3.4.5), first we need to show that, for each 3), and K 2 = ker 2 the monoidal constraint defined in (2.4.5). (3.4.6) Kλ # To see that (3.4.6) is commutative, we consider the adjoint of each composite, which yields the boundary of the following diagram in → M. 
(3.4.7) The three sub-regions in (3.4.7) are commutative for the following reasons. • The left triangle is commutative by the naturality of the monoidal constraint C 2 = coker 2 of the cokernel. • The upper right region is commutative by the definition of λ # in (3.4.4). • To see that the lower right triangle is commutative, first note that the counit component morphism For each of the other n tensor factors in the lower right triangle, the composite ε C fc i ○ Cη i is the identity morphism by one of the triangle identities for the adjunction C ⊣ K [Mac98] (IV.1 Theorem 1). This proves that / / KC is a natural transformation for the top horizontal functors in (3.4.5) between algebra categories. Step 3: Lifting the Counit Next we show that the counit ε ∶ CK / / Id of the bottom adjunction C ⊣ K in (3.4.5) lifts to the top between algebra categories. First we need to show that, for each the → O ◻ -algebra obtained by applying the top functor K in (3.4.5). The → O ◻ -algebra structure morphism λ is the analogue of (3.4.4) for the kernel. In other words, it is the composite (3.4.10) The four sub-regions in (3.4.10) are commutative for the following reasons. • The top triangle is commutative by the definition of (−) # in (3.4.4). • The triangle to its lower right is commutative by the definition of λ in (3.4.9) and the functoriality of C. • The lower right quadrilateral is commutative by the naturality of the counit ε ∶ CK / / Id. Using the inverse of C 2 = coker 2 , the left triangle in (3.4.10) is equivalent to the following diagram. (3.4.11) The diagram (3.4.11) is commutative because the adjoint of each composite is K 2 = ker 2 defined in (2.4.5). This shows that (3.4.10) is commutative, and ε g is an (1) Pointed or unpointed simplicial sets [Qui67] and all of their left Bousfield localizations [Hir03]. (2) Bounded or unbounded chain complexes over a commutative ring containing the rationals Q [Qui67]. (3) Symmetric spectra built on either simplicial sets or compactly generated topological spaces, motivic symmetric spectra, and G-equivariant symmetric spectra with either the positive stable model structure or the positive flat stable model structure [PS18]. (4) The category of small categories with the folk model structure [Rez∞]. (5) Simplicial modules over a field of characteristic zero [Qui67]. (6) The stable module category of k[G]-modules [Hov99] (2.2), where k is a field and G is a finite group. We recall that the homotopy category of this example is trivial unless the characteristic of k divides the order of G (the setting for modular representation theory). The condition (♠) for (1)-(2) is proved in [WY18] (Section 8, which also handles symmetric spectra built on simplicial sets), and (4)-(5) can be proved using similar arguments. The condition (♠) for the stable module category is proved by the argument in [WY20] (12.2). For symmetric spectra built on topological spaces, motivic symmetric spectra, and equivariant symmetric spectra, we refer to [PS18] (Section 2, and the references therein) starting with C = Top, sSet G , Top G , and the A 1 -localization of simplicial presheaves with the injective model structure. In each of these examples except those built from Top, the domains and the codomains of the generating (trivial) cofibrations are small with respect to the entire category. So Proposition 2.4.6 applies to show that, in each case, the arrow category with either the injective or the projective model structure is strongly cofibrantly generated. 
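For ease of reference, here is the shape in which the condition (♠) enters the proof below; this is a paraphrase, and the official formulation should be consulted in the paper and in [WY18]. The condition asks for a subclass C of weak equivalences, closed under pushout and transfinite composition, such that
\[
X \otimes_{\Sigma_n} f^{\,\square n} \in \mathcal{C}
\quad\text{for all } n \geq 1,\; X \in \mathcal{M}^{\Sigma_n^{\mathrm{op}}},\; \text{and trivial cofibrations } f.
\]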
The category of (equivariant) symmetric spectra built on topological spaces is operadically cofibrantly generated by an argument analogous to that of Example 3.1.5, as are the arrow categories, by the remark below. Proof. Suppose M satisfies (♠) with respect to a subclass C of weak equivalences that is closed under transfinite composition and pushout. We write C ′ for the subclass of weak equivalences β in → M ⊗ inj such that β 0 , β 1 ∈ C. Then C ′ is closed under transfinite composition and pushout. Suppose We will show that f X ⊗ Σn α ◻n belongs to C ′ . The morphism f X ⊗ Σn α ◻n in → M ⊗ is the commutative square Since α 0 and α 1 are trivial cofibrations in M and since X 0 , X 1 ∈ M Σ op n , the condition (♠) in M implies that the two horizontal morphisms in the previous diagram are both in C. This shows that → M ⊗ inj satisfies (♠) with respect to the subclass C ′ of weak equivalences. The second assertion is now a consequence of Proposition 2.4.6, Example 3.1.5, and Theorem 4.1.1. When (♠) is not satisfied but the classes of morphisms above still define semimodel structures (e.g., Remark 5.1.6, Corollary 5.2.3, and Theorem 6.2.1), we still denote those semi-model structures by Alg Since there is an equality (3.4.5) U ker α = ker Uα right Quillen functor by Proposition 2.4.3 (3), we finish the proof by observing that Recall that a pointed (semi-)model category is stable if its homotopy category is a triangulated category [Hov99] (7.1.1). → M ◻ is a weak equivalence. So ker α is entrywise a weak equivalence in M, or equivalently U ker α ∈ ( → M ◻ proj ) C is a weak equivalence. We must show that α is a weak equivalence, that is, that Uα ∈ ( → M ⊗ inj ) C is a weak equivalence. The morphism Uα is still a morphism between fibrant objects, and In other words, we must show that Uη is a weak equivalence in Here the left vertical morphism is a trivial cofibration and is a fibrant replacement of U coker f X . The top horizontal morphism is a weak equivalence and is U applied to a fibrant replacement of coker f X . The other two morphisms are fibrations. So there is a dotted morphism α that makes the whole diagram commutative. By the 2-out-of-3 property, α is a weak equivalence between fibrant objects in ( C is a right Quillen functor, by Ken Brown's Lemma [Hov99] (1.1.12) ker α is a weak equivalence in ( → M ◻ proj ) C . We now have a commutative diagram where ε is the derived unit of U f X . To show that Uη is a weak equivalence, it suffices to show that ε is a weak equivalence. By assumption U f X is a cofibrant object in ( → M ◻ proj ) C . Since (coker, ker) is a Quillen equivalence between ( M ◻ proj is more subtle. We will consider this issue in the next two sections, proving this condition for (1) in Corollary 5.2.4 and for (2) in Corollary 6.2.5. For classical, equivariant, or motivic symmetric spectra, we must tweak the proof of Theorem 4.4.1. Let ( → M ◻ proj ) C refer to the projective model structure on the arrow category where M is the injective stable model structure on the relevant category of symmetric spectra. Since the weak equivalences of the injective stable model structure coincide with those of the positive (flat) stable model structure, in the last paragraph of the proof, it is enough to prove that ǫ is a weak equivalence with respect to the injective stable model structure on spectra. 
Hence, it suffices for U f X to be a cofibrant object in ( → M ◻ proj ) C , which follows from the proof of [WY18] (8.3.3), using our filtrations and the fact that the cofibrations of the injective stable model structure are the monomorphisms. We note that we cannot add the injective stable model structure on symmetric spectra to the list in Example 4.4.2, because it is not true that every operad is admissible there. For example, a famous obstruction due to Gaunce Lewis prevents the Com operad from being admissible.

SMITH IDEALS FOR COMMUTATIVE AND SIGMA-COFIBRANT OPERADS

In this section we apply Theorem 4.4.1 and consider Smith ideals for the commutative operad and for Σ C -cofibrant operads (Definition 5.2.1). In particular, in Corollary 5.2.3 we will show that Theorem 4.4.1 is applicable to all Σ C -cofibrant operads. On the other hand, the commutative operad is usually not Σ-cofibrant. However, as we will see in Example 5.1.3, Theorem 4.4.1 is applicable to the commutative operad in symmetric spectra with the positive flat stable model structure.

Commutative Smith Ideals. For the commutative operad, which is entrywise the monoidal unit and whose algebras are commutative monoids, we use the following definition from [Whi17] (3.4). The notation (−) Σ n means taking the Σ n -coinvariants.

Definition 5.1.1. A monoidal model category M is said to satisfy the strong commutative monoid axiom if, whenever f ∶ K / / L is a (trivial) cofibration, then so is f ◻n Σ n , where f ◻n is the n-fold pushout product (which can be viewed as the unique morphism from the colimit Q n of a punctured n-dimensional cube to L ⊗n ), and the Σ n -action is given by permuting the vertices of the cube.

The following result says that, under suitable conditions, commutative Smith ideals and commutative monoid morphisms have equivalent homotopy theories.

Corollary 5.1.2. Suppose M is a cofibrantly generated stable monoidal model category that satisfies the strong commutative monoid axiom and the monoid axiom, and in which cofibrant → Com ◻ -algebras are also underlying cofibrant in → M ◻ proj (this occurs, for example, if the monoidal unit is cofibrant). Then there is a Quillen equivalence, induced by the cokernel and the kernel, between commutative Smith ideals and commutative monoid morphisms, in which Com is the commutative operad in M.

Proof. First, [Whi17] (5.12 and 5.14) ensures that → M ⊗ and → M ◻ satisfy the strong commutative monoid axiom, and [Hov∞] (2.2 and 3.2) (also Theorems 2.3.1 and 2.4.1) ensures that they satisfy the monoid axiom. Hence, by [Whi17], commutative monoids in → M ⊗ inj and in → M ◻ proj , that is, commutative monoid morphisms and commutative Smith ideals, inherit transferred model structures. For the commutative operad, it is proved in [Whi17] (3.6 and 5.14) that, with the strong commutative monoid axiom and a cofibrant monoidal unit, cofibrant → Com ◻ -algebras are also underlying cofibrant in → M ◻ proj . So Theorem 4.4.1 applies.

Example 5.1.3 (Commutative Smith Ideals in Symmetric Spectra). Example 4.4.2 shows that the category of symmetric spectra with the positive flat stable model structure satisfies the hypotheses in Theorem 4.4.1. It also satisfies the strong commutative monoid axiom [Whi17] (5.7) and the monoid axiom [SS00]. Although the monoidal unit is not cofibrant, [Whi17] (5.15) shows that cofibrant commutative Smith ideals forget to cofibrant objects of → M ◻ . Therefore, Corollary 5.1.2 applies to the commutative operad Com in symmetric spectra with the positive flat stable model structure.

Example 5.1.4 (Commutative Smith Ideals in Algebraic Settings). Let R be a commutative ring containing the ring of rational numbers Q.
Corollary 5.2.4 shows that the category of (bounded or unbounded) chain complexes of R-modules satisfies the conditions of Theorem 4.4.1. These categories also satisfy the strong commutative monoid axiom and the monoid axiom [Whi17] (5.1). Hence, Corollary 5.1.2 applies and yields a homotopy theory of ideals of CDGAs. The same is true of the stable module category of R = k[G], where k is a field and G is a finite group, using Corollary 6.2.5. The result is a homotopy theory of ideals of commutative R-algebras. Of course, taking G trivial in Example 5.1.5, one obtains that Corollary 5.1.2 applies to orthogonal spectra with the positive flat stable model structure [Whi22] (Section 8).

Smith Ideals for Sigma-Cofibrant Operads. For a cofibrantly generated model category M and a small category D, recall that the diagram category M D inherits a projective model structure with weak equivalences and fibrations defined entrywise in M [Hir03] (11.6.1). We use this below when D = Σ op C × C is the groupoid in Definition 3.1.1. In this case, the category M D is the category of C-colored symmetric sequences.

Proof. The Quillen adjunction lifts to a Quillen adjunction of D-diagram categories by [Hir03] (11.6.5(1)), and similarly for (L 0 , Ev 0 ). If X ∈ M D is cofibrant, then L 1 X and L 0 X are cofibrant since L 1 and L 0 are left Quillen functors.

The following provides one source of applications of Corollary 5.2.3, and answers a question Pavel Safranov asked the first author. This result generalizes [Whi17] (5.1) and [WY18] (8.1), as it applies in particular to fields of characteristic zero.

Corollary 5.2.4. Suppose R is a commutative ring with unit and M is the category of bounded or unbounded chain complexes of R-modules, with the projective model structure. The following are equivalent: (1) R is a semi-simple ring containing the rational numbers Q. In particular, for such rings R, every C-colored operad in M is Σ C -cofibrant, so Corollary 5.2.3 is applicable for all colored operads in M. If R contains Q (but is not necessarily semi-simple), then every entrywise cofibrant C-colored operad in M is Σ C -cofibrant and admissible.

Proof. Assume (1). Maschke's Theorem [MS02] (3.4.7) guarantees that each group ring R[Σ n ] is semi-simple (since 1/n! exists in R, making n! invertible). This means every module M over R[Σ n ] is projective. In particular, M is a direct summand of a module induced from the trivial subgroup, and has a free Σ n -action. Hence, (2) follows. Conversely, if (2) is true, then it implies that, for every n, every module over R[Σ n ] is projective. This means each R[Σ n ] is a semi-simple ring. By [MS02] (3.4.7), this implies that R is semi-simple and n! is invertible in R for every n. It follows that Q is contained in R.

For such R, the projective model structure on (bounded or unbounded) chain complexes of R-modules has every object cofibrant (so, automatically, cofibrant operad-algebras forget to cofibrant chain complexes). Hence, any C-colored operad is entrywise cofibrant, and hence Σ C -cofibrant. Furthermore, Theorem 4.1.1 implies that all operads are admissible, since every X ∈ M Σ op n is Σ n -projectively cofibrant. If R contains Q but is not semi-simple, then there can be non-projective R-modules, but the argument of [MS02] (3.4.7) shows that an R[Σ n ]-module that is projective as an R-module is projective as an R[Σ n ]-module. It follows that Corollary 5.2.3 holds for entrywise cofibrant operads, including the operad Com.
Indeed, all operads are admissible thanks to Theorem 4.1.1, since for any trivial cofibration f and any X ∈ M Σ op n , maps of the form X ⊗ Σn f ◻n are trivial h-cofibrations, and this class of morphisms is closed under pushout and transfinite composition [Whi22] (Section 8).

Smith Ideals: The associative operad As, which has As(n) = ∐ Σn 1 as the nth entry and which has monoids as algebras, is Σ-cofibrant. In this case, Corollary 5.2.3 is Hovey's Corollary 4.4 (1) in [Hov∞].

Smith A ∞ -Ideals: Any A ∞ -operad, defined as a Σ-cofibrant resolution of As, is Σ-cofibrant. In this case, Corollary 5.2.3 says that Smith A ∞ -ideals and A ∞ -algebra morphisms have equivalent homotopy theories. For instance, one can take the standard differential graded A ∞ -operad [Mar96] and, for symmetric spectra, the Stasheff associahedra operad [Sta63].

Smith E ∞ -Ideals: Any E ∞ -operad, defined as a Σ-cofibrant resolution of the commutative operad Com, is Σ-cofibrant. In this case, Corollary 5.2.3 says that Smith E ∞ -ideals and E ∞ -algebra morphisms have equivalent homotopy theories. For example, for symmetric spectra, one can take the Barratt-Eccles E ∞ -operad EΣ * [BE74]. An elementary discussion of the Barratt-Eccles operad is in [JY∞] (Section 11.4).

Smith E n -Ideals: For each n ≥ 1, the little n-cubes operad C n [BV73, May72] is Σ-cofibrant and is an E n -operad by definition [Fre17] (4.1.13). In this case, with M being symmetric spectra with the positive (flat) stable model structure, Corollary 5.2.3 says that Smith C n -ideals and C n -algebra morphisms have equivalent homotopy theories. One may also use other Σ-cofibrant E n -operads [Fie∞], such as the Fulton-MacPherson operad ([GJ∞] and [Fre17] (4.3)), which is actually a cofibrant E n -operad. An elementary discussion of a categorical E n -operad is in [JY∞] (Chapter 13).

Corollary 5.2.3 also applies in the following settings:

(1) S-modules with the model structure from [EKMM97].
(2) G-equivariant orthogonal spectra, for G a compact Lie group.
(3) Mandell's model structure on G-equivariant symmetric spectra built on simplicial sets or topological spaces, where G is a finite group in the former case and a compact Lie group in the latter case [Man04].
(4) Model structures for (equivariant) stable homotopy theory based on Lydakis's theory of enriched functors [DRØ03]. For example, this includes the model category of G-enriched functors from finite G-simplicial sets to G-simplicial sets, where G is a finite group, from [DRØ03] (Theorem 2).
(5) Any model structure M on symmetric spectra built on (C, G), where C is a model category and G is an endofunctor, as long as M is an operadically cofibrantly generated, monoidal, stable model structure. For example, taking C to be the canonical model structure on small categories, and using the suspension discussed in [WY20] (Section 13), one obtains by [Hov01] (7.3) a combinatorial, stable, monoidal model structure on symmetric spectra of small categories, with applications to Goodwillie calculus. Using [PS18] (Section 2) one may obtain positive and positive flat variants. Another example is taking C to be the I-spaces or J-spaces of Sagave and Schlichtkrull, and building projective, positive, or positive flat spectra on them as in [PS18] (Section 2).
(6) The projective model structure on bounded or unbounded chain complexes over a commutative ring R [WY20] (Section 11).
(7) The stable module category of k[G], where G is a finite group and k is a principal ideal domain [WY20] (Section 12).
All of these examples are stable monoidal model categories, so Corollary 5.2.3 applies, once the requisite smallness hypothesis for the generating (trivial) cofibrations is checked. Symmetric spectra, motivic symmetric spectra, examples (6) and (7), and Mandell's model (3) of G-equivariant symmetric spectra built on simplicial sets are all combinatorial, as is the model structure on enriched functors (4) in simplicial contexts. Symmetric spectra as in (5) are combinatorial if C is combinatorial. S-modules, G-equivariant orthogonal spectra, Mandell's model (3) in topological contexts, and symmetric spectra built on topological spaces (another example of (5)) are operadically cofibrantly generated just as in Example 3.1.5, since they are built from compactly generated spaces. We recall that spaces are small relative to inclusions, and the morphisms in (O ○ (I ∪ J))-cell are inclusions [WY20] (5.10).

SMITH IDEALS FOR ENTRYWISE COFIBRANT OPERADS

In this section we apply Theorem 4.4.1 to operads that are not necessarily Σ C -cofibrant. To do that, we need to redistribute some of the cofibrancy assumptions (namely, that cofibrant Smith O-ideals are underlying cofibrant in the arrow category) from the colored operad to the underlying category. We will show in Theorem 6.2.1 that Theorem 4.4.1 is applicable to all entrywise cofibrant operads, provided that M satisfies the cofibrancy condition (♡) below. This implies that, over the stable module category [Hov99] (2.2), Theorem 4.4.1 is always applicable.

Proof. For simplicial sets with either model structure, a cofibration is precisely an injection, and the pushout product of two injections is again an injection. Dividing an injection by a Σ n -action is still an injection. The other cases are proved similarly.

Proof. The condition (♡) only refers to cofibrations, which remain the same in any left Bousfield localization.

The next observation is the key fact connecting the cofibrancy condition (♡) in M to the arrow category.

Proof. Suppose f X ∶ X 0 / / X 1 is an object in ( This means that f X is a morphism in M Σ op n that is an underlying cofibration between cofibrant objects in M. The condition (♣) cof for the pushout of the bottom row in the commutative diagram (6.1.9) (X 1 ⊗ Z) Σn Here the left square is commutative by definition, and the right square is X 0 ⊗ Σn (−) applied to α ◻ 2 n in (6.1.6). We consider the Reedy category D with three objects {−1, 0, 1}, a morphism 0 / / − 1 that lowers the degree, a morphism 0 / / 1 that raises the degree, and no other non-identity morphisms. Using the Quillen adjunction (1) The left and the middle vertical arrows are cofibrations in M. (2) The pushout corner morphism of the right square is a cofibration in M. The objects X 0 and X 1 in M Σ op n are cofibrant in M. The morphism ζ 1 = Ev 0 (α ◻ 2 n ) ∈ M Σn is an underlying cofibration in M. Indeed, since α ∈ → M ◻ proj is a cofibration, so is the iterated pushout product α ◻ 2 n by the pushout product axiom [WY19b]. In particular, Ev 0 (α ◻ 2 n ) is a cofibration in M. The condition (♡) in M (for the morphism ∅ / / X i ) now implies that the left and the middle vertical morphisms X i ⊗ Σn ζ 1 in (6.1.9) are cofibrations in M. Finally, since X 0 ∈ M Σ op n is cofibrant in M and since the pushout corner morphism of α ◻ 2 n ∈ ( → M ◻ proj ) Σn is a cofibration in M, the condition (♡) in M again implies the pushout corner morphism of the right square X 0 ⊗ Σn α ◻ 2 n in (6.1.9) is a cofibration in M. Lemma 6.1.10.
The pushout corner morphism of f X ◻ Σn α ◻ 2 n in (6.1.7) is a cofibration in M. Proof. The pushout corner morphism of f X ◻ Σn α ◻ 2 n is the morphism f X ◻ Σn (α ◻n 1 ⊛ f ◻n W ). This is the Σ n -coinvariants of the pushout product in the diagram is a cofibration in M. Underlying Cofibrancy of Cofibrant Smith Ideals for Entrywise Cofibrant where ∅ M is the initial object in M and the symbol ∅ in d ∅ is the empty C-profile. Since O is assumed entrywise cofibrant, it follows that each entry of the By Proposition 4.2.5, the semi-model structure on Alg where I and J are the generating (trivial) co- The three arrows in this diagram are as follows: • d 0 is induced by the composition of O. • d 1 is induced by the O-algebra structure on A. • The common section s is induced by the unit A / / O ○ A. Lemma 6.2.3. Under the hypotheses of Theorem 6.2.1, suppose α ∶ f / / g is a morphism in (L 0 I ∪ L 1 I) c for some color c ∈ C, and Proof. By the filtration in [WY18] (4.3.16) and the fact that cofibrations are closed under pushouts, to show that Uj ∈ ( → M ◻ proj ) C is a cofibration, it is enough to show that, for each n ≥ 1 and each color d ∈ C, the morphism Proof. The stable module category is a stable model category that satisfies the hypotheses of Theorem 6.2.1 in which every object is cofibrant [Hov99] (2.2.12), [WY20] (Section 12). There are several more examples where Theorem 4.4.1 likely applies to all entrywise cofibrant operads, but where (♡) has not been checked. For example, the positive flat stable model structure on symmetric spectra built on compactly generated spaces have the property that, for any entrywise cofibrant colored operad O, cofibrant O-algebras forget to cofibrant spectra [PS18] (Section 2), but the authors do not know a reference proving the same for → M ◻ proj . Conjecture 6.2.6. The positive flat stable model structure on symmetric spectra built on compactly generated spaces satisfies the conclusion of Theorem 6.2.1. Similarly, by analogy with the positive flat model structure on symmetric spectra, one would expect that the positive flat model structure on G-equivariant orthogonal spectra would satisfy this property. (1) Work out a positive complete flat stable model structure on GSp O . (2) Prove that it satisfies the condition that all colored operads are admissible. (4) Prove that this model structure satisfies the conclusion of Theorem 6.2.1. In a related vein, we have the following problem. (1) Prove that the positive injective stable model structure M + i is a monoidal model category. (2) Prove that all operads are admissible in M + i . If so, then automatically cofibrant O-algebras forget to cofibrant underlying objects. (3) Prove that M + i satisfies the conclusion of Theorem 6.2.1. (4) Do the same for symmetric spectra valued in a general base model category C, where stabilization is with respect to an endofunctor G. (5) Do the same for orthogonal spectra and equivariant orthogonal spectra, possibly restricting to ∆-generated spaces as is done in [Whi22] (Section 8). (6) Produce a model structure on the category of S-modules, Quillen equivalent to the one in [EKMM97], with the property that cofibrant commutative ring spectra are underlying cofibrant. Do the same for general entrywise cofibrant colored operads, and prove that the conclusion of Theorem 6.2.1 holds in this setting. SEMI-MODEL CATEGORIES AND ∞-CATEGORIES FOR OPERAD ALGEBRAS In this paper, we often transferred model structures, using (♠), or semi-model structures, using Def. 
6.1.1, or using Σ C -cofibrant operads O, to categories of O-algebras. The language of ∞-categories could also be used to study the homotopy theory of O-algebras. We work in the model of quasi-categories, i.e., everywhere we write ∞-category we mean quasi-category. The main results of this section, Theorems 7.3.1 and 7.3.3, show that the two approaches, namely semi-model categories and ∞-categories, are equivalent in a suitable sense for Σ C -cofibrant C-colored operads that are not necessarily admissible.

Lurie [Lur∞] (4.5.4.12) proves this property for the Com-operad and a restrictive class of model categories M: namely, combinatorial and freely powered (4.5.4.2) monoidal model categories. Lurie then deduces (4.5.4.7) that the underlying ∞-category of the model category of commutative algebras agrees with the ∞-category of commutative algebra objects in the underlying ∞-category of M. We extend this result in two ways. First, we will show that it holds when O is only semi-admissible instead of admissible (i.e., Alg(O; M) has a transferred semi-model structure). Second, we will show the same thing for the setting of enriched ∞-operads. For the latter, we work in a monoidal model category M (not necessarily simplicial) and consider a colored operad O valued in M. Note that if M is a V-model category for some monoidal model category V, and O is a colored operad valued in V, then there is a colored operad O ′ valued in M with the same algebras (obtained by tensoring the levels of O with the unit of M), so we focus on the case when O is valued in M. In this case, there is an associated enriched ∞-operad [CH20] as we now describe. First, we must restate [Hau∞] (4.1).

Definition 7.1.1. Let M be a monoidal model category. A subcategory of flat objects is a full symmetric monoidal subcategory M ♭ (which implies the unit is flat) that satisfies the following two conditions: (1) All cofibrant objects are flat (that is, are in M ♭ ). (2) If X is flat and f is a weak equivalence in M ♭ , then X ⊗ f is a weak equivalence.

If the unit of M is cofibrant, then the subcategory of cofibrant objects is a subcategory of flat objects [Hau∞] (4.2), by Ken Brown's lemma. We note that, if the unit of M is cofibrant, then the same is true for both → M ◻ proj and → M ⊗ inj . The purpose of the definition above is to avoid assuming the monoidal unit is cofibrant, as this would rule out positive (flat) model structures on spectra (which do admit a subcategory of flat objects, namely the cofibrant objects of the flat model structure, by [Hau∞] (4.11)). In [Whi17] and [Whi22], the first author gives many examples of model categories with a subcategory of flat objects (namely, the subcategory of cofibrant objects), including spaces, simplicial sets, chain complexes, diagram categories, simplicial presheaves, and various categories of spectra.

It is known that, for every cofibrantly generated monoidal model category M, every Σ C -cofibrant colored operad O in M is semi-admissible. In other words, there is a transferred semi-model structure on O-algebras [WY18] (6.3.1). An alternative approach assumes M satisfies (♣) and appeals to [WY18] (6.2.3) for such a semi-model structure. It is also known that there are Σ C -cofibrant colored operads O whose category of O-algebras does not admit a full model structure [BW21] (2.9). Hence, the results in this section really do apply to previously unknown examples, and complete the study of semi-model structures on operad-algebras set out in [WY18,WY20,WY19a,WY16].
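The key technical input for the comparison is the preservation property established next; in paraphrase (see Proposition 7.2.1 for the precise hypotheses): for a Σ C -cofibrant C-colored operad O, the forgetful functor

U ∶ Alg(O; M) → M C

preserves and reflects homotopy sifted colimits.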
For completeness, we handle the case of both symmetric and non-symmetric colored operads [Mur11], noting that for the nonsymmetric case, being Σ C -cofibrant is the same as being entrywise cofibrant. Proof. We follow the proof from [PS18] (7.9), which is itself based on the proof of [Lur∞] (4.5.4.12). First, as pointed out in [Lur∞], the reflection property is implied by the preservation property, and it is sufficient to prove that U preserves homotopy colimits indexed by a small category D such that the nerve N(D) is homotopy sifted. Consider the projective model structure (M C ) D , the projective semi-model structure Alg(O; M) D guaranteed by [Bar10] (3.4), and the forgetful functor The right hand side is canonically weakly equivalent to U(F Alg(O) (A)) because A is projectively cofibrant, and this is weakly equivalent to F(U D A) via α. At this point, the proof in [Lur∞] (4.5.4.12) requires a detailed analysis of so-called "good" objects and morphisms in (M C ) D . However, when O is Σ C -cofibrant, the situation is much simpler, because U takes cofibrant algebras to cofibrant objects of M C [WY18] (6.3.1) ([Mur11] (9.5) for the non-symmetric case). Furthermore, the D-constant operad O D , taking value O at every a ∈ D, is Σ Ccofibrant in Alg(O; M) D . This can be seen directly, as Σ C -cofibrancy for an operad P valued in M D is the condition that, for each a ∈ D and each (c; d) ∈ Σ op C × C, the object P a Remark 7.2.2. Following the model of [Lur∞] (or [PS18]), after establishing Proposition 7.2.1, the next step should be to prove that the semi-model category Alg(O; M) describes the ∞-category of N ⊗ O-algebras in the ∞-category associated to M, as discussed above. However, when Alg(O; M) is only a semi-model structure, an additional step is needed. We need to know that homotopy colimits (given by colimits of projectively cofibrant objects in Alg(O; M) D ) agree with ∞-categorical colimits. In the case of full model structures, one knows that the projective model structure on Alg(O; M) D describes the ∞-category of functors, and that a Quillen adjunction gives rise to an adjunction of ∞-categories. For the case of semi-model categories, we invoke [LoM∞] (A.10) for the latter. Remark 7.2.3. We conjecture that Proposition 7.2.1 remains true for entrywise cofibrant colored operads O, if M satisfies (♣), and if we replace appeals to [WY18] (6.3.1) above by appeals to [WY18] (6.2.3). However, the proof of this would require a detailed analysis of 'good' objects and would take us too far afield. For both cases, we handle the cases where O is a symmetric colored operad and where O is a non-symmetric colored operad simultaneously. We handle the enriched case first. Proof. The proof of [Hau∞] (4.10) goes through directly by replacing the appeal to [PS18] (7.8) with an appeal to Proposition 7.2.1. That is, we consider the forgetful functors from both categories to the ∞-category associated to M C , and appeal to the Barr-Beck theorem for ∞-categories [Lur∞] (4.7.3.16) to see that these forgetful functors are monadic right adjoints (this is where Proposition 7.2.1 is needed). We appeal to [Hau∞] (3.8), which occurs entirely on the ∞-category level, for the usual formula for free O-algebras and the observation that the two associated monads on M C have equivalent underlying endofunctors. This proof works for both symmetric and non-symmetric colored operads O, as both are known to inherit transferred semi-model structures from M C , and as Proposition 7.2.1 applies in both settings. 
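The formula for free O-algebras invoked above takes the form standard in this setting; we record it here as a paraphrase, with the official versions in [Hau∞] (3.8) and [WY18]. For X ∈ M C and d ∈ C,
\[
\bigl(F_{\mathcal{O}} X\bigr)_d \;=\; \coprod_{n \geq 0} \Bigl(\, \coprod_{c \in C^n} \mathcal{O}\tbinom{d}{c} \otimes X_{c_1} \otimes \cdots \otimes X_{c_n} \Bigr)_{\Sigma_n},
\]
with Σ n acting by permuting the profile c and the tensor factors simultaneously.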
Remark 7.3.2. The proof of [Hau∞] (4.10) relies on the observation that a Quillen adjunction F ∶ M ⇄ N ∶ G induces an adjunction between the underlying ∞-categories. We appeal to [LoM∞] (A.10) for the semi-model category analogue of this fact.

We turn now to the unenriched case, where the statement is that the canonical comparison functor, from the underlying ∞-category of the semi-model category of O-algebras to the ∞-category of algebras over the associated ∞-operad, is an equivalence of ∞-categories.

Proof. We deliberately phrased the proof of Theorem 7.3.1 so that, word-for-word, it proves this result as well (again with the critical step hinging on an appeal to Proposition 7.2.1). We only stated the two theorems separately to highlight the difference between enriched and unenriched ∞-operads, and the connection to where the colored operad O is valued. Results on rectification in this spirit may be found in [Whi17] (for Com rectifying to E ∞ ) and [WY19a] (for general colored operads), among other places.
An Automatic Loop Gain Enhancement Technique in Magnetoimpedance-Based Magnetometer

Ippei Akita, Member, IEEE, Takeshi Kawano, Hitoshi Aoyama, Shunichi Tatematsu, and Masakazu Hioki

Abstract-A low-power, low-noise, and high-bandwidth magnetometer that utilizes the magnetoimpedance (MI) element as a sensor head is presented. The MI element has a high sensitivity, and it can be implemented in the mm-scale through the MEMS process. The analog front-end (AFE) circuit of the magnetometer includes a digital calibration scheme that automatically enhances the loop gain of the system, resulting in high bandwidth and low-noise characteristics. The AFE circuit is designed based on a switched-capacitor (SC) approach, and its dedicated switching scheme can suppress the folded noise of an amplifier. A single-coil magnetic negative feedback architecture with correlated double sampling (CDS) enables a high dynamic range (DR) and a stable passband gain to be achieved, in addition to simplifying the structure of the MI element. The AFE chip of the magnetometer is implemented in a 0.18-µm CMOS process, and it achieves an 8-pT/√Hz noise floor within a 31-kHz bandwidth and a DR of 96 dB, where the power consumption is 1.97 mW.

I. INTRODUCTION

A BIOMAGNETIC sensing technique such as magnetomyography (MMG) or magnetoencephalography (MEG) is one solution for capturing biological information with a minimally invasive approach. Implantable MMG has the potential to acquire fast neuronal magnetic activity, which corresponds to the action potential of neurons close to skeletal muscle, with high spatiotemporal resolution [1], [2], [3], as opposed to an approach with an optically pumped magnetometer that achieves low noise but a relatively large size because of the optical system [4]. Magnetometers for such applications require a noise floor below 100 pT/√Hz, a bandwidth above 10 kHz, low power, and small size because they are to be implanted. Furthermore, a wide input range of over 100 μT is desired because the system needs to accept the geomagnetic field and artifacts without saturating the signal. A magnetic negative feedback approach can be applied to realize a high dynamic-range (DR) magnetometer because it provides a wide linear input range and a stable passband gain.
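As a rough sanity check on these requirements, the dynamic range implied by a given full-scale input, noise floor, and bandwidth can be estimated with the short Python sketch below. The numbers used are the illustrative targets quoted in this article, not measured values, and the result depends on how full scale and noise bandwidth are defined.

import math

def dynamic_range_db(b_max, noise_floor, bandwidth_hz):
    # Integrated RMS noise for an (assumed white) noise floor in T/sqrt(Hz).
    b_noise_rms = noise_floor * math.sqrt(bandwidth_hz)
    return 20.0 * math.log10(b_max / b_noise_rms)

# 100 uT full scale, 8 pT/sqrt(Hz) noise floor, 31 kHz bandwidth.
print(round(dynamic_range_db(100e-6, 8e-12, 31e3), 1))  # ~97.0 dB

The result lands near the 96-dB DR reported in the abstract.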
For a low-noise solution, magnetometers based on fluxgate (FG) [5], [6], [7] and fundamental-mode orthogonal FG [8], [9], [10] principles have achieved noise floors at the level of a few pT/√Hz for applications in aerospace, geomagnetic observatories, and nondestructive testing. However, these magnetometers cannot be implanted because the size of the sensor head tends to be large. Magnetometers with integrated FG (IFG) have been developed for small-sized realizations [11], [12], [13], [14], but their noise floors are relatively high, around a few nT/√Hz. FG-based magnetometers require a large excitation current to saturate the magnetization of the core in the FG sensor head. Approaches using magnetoresistance (MR) [15], such as giant MR (GMR) [16] and tunneling MR (TMR) [17], [18], can be implemented at the scale of a few tens of micrometers, and they have been designed for sensing magnetic nanoparticles and biomagnetic fields. TMR-based magnetometers with high sensitivity have achieved noise floors of a few tens of pT/√Hz. Their available input range, however, is less than 10 nT, which leads to signal saturation because of the geomagnetic field and artifacts [17]. Although a magnetometer using a hybrid architecture containing Hall and coil sensors has an extremely large input range of over a few mT with a MHz bandwidth [19], [20], a noise level of over 100 nT/√Hz is too high for biomagnetic applications.

The use of the magnetoimpedance (MI) element as a sensor head is an attractive approach for realizing compact, high-DR, low-noise, and low-power magnetometers because of its high sensitivity and the low excitation current into the sensor head [21], [22]. AFE circuits for MI elements have been developed with discrete components, and they have achieved noise floors of a few pT/√Hz using millimeter-scale sensor heads [23], [24], [25], [26]. A low-power, low-noise MI-based magnetometer with high DR is presented in this article, and we introduce three main techniques: 1) digital calibration for enhancing the loop gain in a magnetic negative feedback loop; 2) a switching scheme for lowering noise; and 3) a single-coil architecture with correlated double sampling (CDS) for both pickup and magnetic feedback. This article is the extended version of our previously published magnetometer [27], and the details of the theoretical analysis and simulation results are included. Furthermore, the prototype chip is refined in terms of digital and analog designs, and new measurement results are provided.

The rest of the article is organized as follows. Section II describes the proposed MI-based magnetometer by introducing the fundamentals of the MI element and the details of the technical features. The overall chip architecture and circuit details are presented in Section III. In Section IV, the measurement results of the prototype magnetometer are shown, discussed, and compared with other state-of-the-art designs. Finally, Section V concludes the article.

II. PROPOSED MI-BASED MAGNETOMETER

A. Magnetic Field Sensing Using the MI Element

The MI element comprises an amorphous alloy wire with a diameter of a few micrometers and a coil wound around the wire, as illustrated in Fig. 1(a) [22], [28], [29], [30]. If a current pulse I ex with a fast transition time t r is applied to the wire, the skin effect arises and the magnetization vector, which points in the circumferential direction at the wire surface, rotates, resulting in a magnetization change ΔM proportional to the external magnetic flux density B in.
The magnetic flux change Δφ on the surface is proportional to ΔM, and Δφ can be picked up by the coil as the induced voltage V_in = N Δφ/t_r, where N represents the number of coil turns. Therefore, the peak voltage of the obtained V_in has a linear relation to B_in, and it can be captured using a simple sample-and-hold circuit as shown in Fig. 1(b), where B_in can be detected parallel to the wire. If the switch S_SMPL is driven by a clock SMPL with an appropriate sampling timing corresponding to the moment of the peak voltage of V_in, a sampled peak voltage V_in,s on the sampling capacitor is obtained as illustrated in Fig. 1(c), and can be expressed as

V_in,s = G_0 (G_1/√t_r − 1) B_in    (1)

where G_0 and G_1 represent parameters depending on the fabrication of the device and the materials of the MI element [22]. The first and second factors of the right-hand side in (1) correspond to the intrinsic sensitivity of the MI-based magnetic sensor head, G = G_0 (G_1/√t_r − 1). As seen from (1), G can be increased by using an excitation current with a faster rising edge because G is inversely proportional to √t_r. This approach, which utilizes the current pulse and peak sampling shown in Fig. 1, saves power consumption for the excitation of the sensor head because it dissipates a large current of up to 50 mA only during a short period, less than a few tens of nanoseconds, to obtain one sampled signal corresponding to the external magnetic field.

B. Architecture

Fig. 2(a) shows the basic architecture of the designed MI-based magnetometer, which includes an MI element as a sensor head, a switched-capacitor (SC) integrator with a peak sampler, a clock generator to create clocks for the sampler and the SC integrator, a driver for the wire of the MI element, and a logic circuit for a calibration described later. The peak sampling is done by the sampler as described in the previous subsection, and the obtained V_in,s, proportional to B_in, is integrated in the charge domain at the subsequent SC integrator.

The magnetometer adopts magnetic negative feedback in which the output voltage V_out is fed back as a current I_fb through a resistor R_fb. This I_fb flows into the coil and creates a magnetic flux density B_fb, the direction of which is opposite to B_in, where I_fb is linearly converted to B_fb by the coil based on Ampère's law. This can be expressed using a coefficient β: B_fb = β I_fb. A simple linear model of the magnetometer is shown in Fig. 2(b), where the transfer function of the SC integrator is expressed as a continuous-time model, ω_0/s, for simplicity. Therefore, the subtraction of B_in and B_fb is performed in the magnetic field domain, and the difference B_err will settle to zero with a large loop gain because this architecture forms negative feedback. This implies that the passband gain is ideally determined almost entirely by the parameters of the components in the feedback path, and the transfer function becomes

H(s) = V_out(s)/B_in(s) = (R_fb/β) · 1/(1 + s R_fb/(β G ω_0))    (2)

where the intrinsic sensitivity G of the MI element, which tends to vary between devices, does not affect the passband gain directly. In addition, the nonlinearity of G is suppressed in the same manner as in general negative feedback. The corresponding frequency response is shown in Fig. 2(c), and the bandwidth is β G ω_0/(2π R_fb). In this design, the magnetic feedback is realized using only a single coil for both pickup and magnetic feedback, whereas two coils are generally used, one for each purpose. The single-coil approach has been utilized in some FG- and MI-based magnetometers for low-cost and small-size realizations [9], [11], [25].
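To make the roles of G, β, R_fb, and ω_0 in (2) concrete, the following minimal Python sketch evaluates the single-pole closed-loop model. All numeric values are illustrative assumptions, not parameters reported for this design.

```python
import numpy as np

# Minimal single-pole model of the magnetic negative feedback loop in (2).
# All numbers below are illustrative assumptions, not values from the chip.
R_fb = 100e3              # feedback resistor (ohm)
beta = 1e-4               # coil coefficient, B_fb = beta * I_fb (T/A)
G    = 5e3                # intrinsic MI sensitivity (V/T)
w0   = 2 * np.pi * 1e6    # SC-integrator gain constant (rad/s)

f = np.logspace(0, 6, 2000)                       # Hz
s = 1j * 2 * np.pi * f
H = (R_fb / beta) / (1 + s * R_fb / (beta * G * w0))

gain_dc = R_fb / beta                             # set only by feedback components
bw = beta * G * w0 / (2 * np.pi * R_fb)           # -3 dB bandwidth, scales with G
bw_num = f[np.argmin(np.abs(np.abs(H) - gain_dc / np.sqrt(2)))]
print(f"passband gain = {gain_dc:.3g} V/T")
print(f"bandwidth: analytic {bw:.3g} Hz, numeric {bw_num:.3g} Hz")
```

The sketch reproduces the key property of the architecture: the passband gain depends only on the feedback components R_fb and β, while the bandwidth scales with G, which is what motivates the loop-gain calibration described below.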
There are several challenges in designing the MI-based magnetometer using the single-coil architecture shown in Fig. 2(a). As shown in Fig. 1(c), the intrinsic sensitivity G depends on the sampling timing for peak sampling, which is the time duration from the rising edge of I_ex to the falling edge of SMPL; it is defined as the sampling delay t_sd. If t_sd deviates slightly from the moment of the peak of V_in, V_in,s decreases, which indicates a degradation of G. This optimum timing for the peak sampling changes between devices because of the variation of the resonance frequency at the coil terminals. Therefore, G is a function not only of t_r but also of t_sd. Although the influence of a varied G on the passband gain stability can be suppressed using magnetic negative feedback, G is directly involved in the bandwidth, as shown in Fig. 2(c). In addition, since G is located before the integrator, a larger G reduces the contribution of the integrator's noise at the output. If the input-referred noise power of the integrator is assumed to be V_n² as shown in Fig. 2(b), the input-referred noise power of the magnetometer, B_in,n², in the passband can be expressed as

B_in,n² = V_n²/G².    (3)

Therefore, it is important to keep G as large as possible, and this can be realized by finding the optimum t_sd at which the sampler captures the peak of V_in. This can be achieved by a digital calibration scheme, which is described in Section II-C.

The single coil is used for both pickup and feedback to achieve a low-cost and compact realization. However, an unintended drop voltage is sampled on a capacitor C_1 due to the parasitic resistance R_p of the coil and the feedback current I_fb, as shown in Fig. 3(a), in addition to the desired signal component from the input magnetic field. In this situation, the magnetometer with the MI element including R_p is modeled as shown in Fig. 3(b); thus, the passband gain is derived as

V_out/B_in = R_fb/(β − R_p/G)    (4)

where the variation of G affects the passband gain. This means that R_p deteriorates the effectiveness of the magnetic negative feedback; a solution for this issue is provided using a CDS technique in Section II-D.

C. Automatic Digital Calibration Technique for Enhancing Loop Gain

A digital calibration scheme is proposed to search for the optimum t_sd automatically, as shown in Fig. 4. Fig. 4(a) illustrates the circuit diagram during the calibration, where the magnetic negative feedback is removed and a constant magnetic field is applied to the wire through a resistor and a bias voltage V_REF. The SC integrator is reconfigured as an SC amplifier with high gain and high bandwidth. The configuration shown in Fig. 4(a) measures the sensitivity G determined only by t_sd because the front-end circuit amplifies V_in,s for a constant magnetic field. The presented calibration is based on an automatic trimming approach; the adjustment of the sampling phase, denoted by SMPL in Fig. 4(a), is accomplished using a delay-locked loop (DLL) circuit and a multiplexer (MUX). The calibration procedure is as follows. In the calibration mode, after locking the DLL, the first sampling is performed by SMPL with the minimum t_sd set through the MUX according to a digital code D_ctrl from the logic shown in Fig. 4(b). In the following steps, D_ctrl is swept sequentially, and the maximized sensitivity of the MI element is found by monitoring V_out. However, it is difficult to find the exact peak timing of V_in directly because the system assumes an analog output and does not have an ADC.
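The sweep can be visualized with a short behavioral model. The sketch below assumes the induced voltage rings as a sine at f_res (the same model used in the timing analysis that follows) and shows which quantized delay code lands nearest the peak. All values are illustrative assumptions, and the offline argmax search at the end is exactly what the chip cannot do without an ADC, which motivates the zero-crossing approach described next.

```python
import numpy as np

# Behavioral model of the D_ctrl sweep. The induced voltage after the
# excitation edge is approximated as V_in(t) = V_p*sin(2*pi*f_res*t); all
# numbers are illustrative assumptions, not the chip's actual values.
V_p    = 100e-3     # assumed peak induced voltage (V)
f_res  = 50e6       # assumed resonance frequency at the coil terminals (Hz)
t_step = 1.46e-9    # assumed delay resolution of one DLL cell (s)

codes = np.arange(13)                       # candidate D_ctrl codes
t_sd  = codes * t_step                      # quantized sampling delays
V_s   = V_p * np.sin(2 * np.pi * f_res * t_sd)

# Offline, the best code is simply the argmax; on-chip there is no ADC to
# run this search, hence the indirect zero-crossing detection used instead.
print(f"ideal delay = {1/(4*f_res)*1e9:.2f} ns, "
      f"best code = {codes[np.argmax(V_s)]}")
```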
Instead of using an ADC to detect the peak, the presented calibration scheme utilizes an indirect approach that employs a comparator as a zero-crossing detector at the output, where the comparator detects a half period of the resonant frequency of the impedance between the coil terminals, which is almost constant for each sample. If the output of the comparator is activated, the logic stops the t_sd sweep and finishes the calibration. Thus, the optimum t_sd can be obtained as nearly half of the zero-crossed timing. Although the detection accuracy of the presented calibration scheme relies on that of the comparator, its requirements on noise and offset can be drastically relaxed because the SC amplifier, which uses CDS to eliminate the offset voltage and flicker noise as shown in Fig. 4(c) [31], acts as a preamplifier of the comparator [32], resulting in a low-power and simple realization.

The time duration of each delay element in the DLL is an important design parameter for determining the intrinsic sensitivity of the MI element. Jitter in t_sd for SMPL should also be considered. Fig. 5 shows a timing model that expresses the effects of the limited time resolution of the DLL, t_DLL, and the jitter effect on the sampled voltage V_in,s. In the ideal case, V_in,s is acquired at the peak of V_in. If the expected resonance frequency of the induced voltage and the amplitude of V_in are defined as f_res and V_in,p, respectively, the optimum sampling delay t_sd,opt becomes almost 1/(4 f_res), where V_in,s ≈ V_in,p. However, since this timing is quantized with t_DLL, the actual instant is slightly different, and the error voltage from the ideal peak, V_e, can be obtained as

V_e = V_in,p [1 − cos(π f_res t_DLL)]    (5)

where the worst-case timing for SMPL is assumed to be t_sd = t_sd,opt + t_DLL/2. Therefore, the intrinsic sensitivity G is decreased by the ratio V_e/V_in,p, and it can be updated as

G′ = G(1 − V_e/V_in,p) = G cos(π f_res t_DLL).    (6)

In this design, the required t_DLL can be obtained by providing an acceptable change in G from the highest sensitivity G. This sensitivity change is associated with the noise change through (3); thus, if an acceptable noise change is provided as a specification, t_DLL can be specified. For example, if a 10% increase in the noise floor is allowed, G can be decreased by 1/1.1 through (6), and a t_DLL of almost 2.74 ns is calculated, where the maximum f_res is assumed to be 50 MHz. The simulated spot noise at 100 Hz for different t_sd is shown in Fig. 6, and a 10% noise degradation is confirmed within a 2.8-ns range, which is almost the same as the calculated value.

In addition to the above discussion of the systematic error due to t_DLL, the effect of jitter at SMPL on the sampled voltage V_in,s should be analyzed using the same model. As the rising edge of the driving current into the wire, denoted as MIE in Fig. 5, is synchronized with SMPL, the jitter of MIE can be merged with that of SMPL, and a random jitter with a standard deviation of σ_SMPL is assumed for SMPL in this analysis. The transfer gain G_σ from σ_SMPL to V_in,s can be simply modeled as the slope at t = t_sd,opt + t_DLL/2, and thus it becomes

G_σ = 2π f_res V_in,p sin(π f_res t_DLL).    (7)

If the input-referred noise power associated with σ_SMPL in the magnetometer is defined as B_σ², it can be derived as

B_σ² = [2π f_res B_err sin(π f_res t_DLL) σ_SMPL]²    (8)

where B_err = V_in,p/G is the magnetic flux density in the wire; therefore, this jitter-related noise depends on the signal amplitude.
Since B_err also corresponds to B_in − B_fb in the magnetic negative feedback, as shown in Fig. 2(b), it becomes almost zero if the loop gain is sufficiently large, which can easily be realized by a lossless SC integrator [33], [34]. Fig. 7 shows the simulated random jitter contribution to the input-referred noise, where B_σ is less than 5 pT_rms for σ_SMPL up to 1.4 ns. Since 5 pT_rms corresponds to a 22.4-fT/√Hz floor if a 50-kHz bandwidth is assumed, the contribution of σ_SMPL is quite small compared with the overall noise floor of the magnetometer, which is a few pT/√Hz, and it is therefore negligible in this design. Hence, the magnetic negative feedback plays an important role in drastically reducing the jitter requirement on SMPL.

The accuracy of the zero-crossing detector determines the effectiveness of the proposed calibration, and it is characterized by the input-referred offset and rms noise voltages, which are represented in the form of a standard deviation, V_z,n. Since the SC amplifier acts as a preamplifier of the comparator in the zero-crossing detector, V_z,n corresponds to the input-referred noise or the offset voltage of this amplifier. Therefore, it is important to specify the required V_z,n to achieve the desired accuracy of the calibration, and it can be used to design the SC amplifier and comparator. V_z,n is associated with the provided f_res and t_DLL, and the same parameters are considered as an example. Fig. 8 shows the waveforms of V_in,p and their relation to V_z,n for f_res = 50 MHz and t_DLL ≈ 2.74 ns. In this case, since the target D_ctrl should be 2, the zero-crossing detector should be activated at D_ctrl = 4 or 5 in the calibration procedure, where it is assumed that the fractional part is truncated upon determining the final D_ctrl. Therefore, activation at D_ctrl = 3 and deactivation at D_ctrl = 6 must be avoided for the calibration to succeed, and these situations occur when the offset and noise of the zero-crossing detector exceed |V_in,s|. In particular, since V_in,s at D_ctrl = 3 is closer to zero, an unintended activation will occur with a certain probability defined by V_z,n. If it is assumed that |V_in,s| should be three times larger than the standard deviation of the offset and noise of the zero-crossing detector to prevent this error, the condition can be formulated as

V_z,n < α_min V_in,p / 3    (9)

where α_min = 0.53, which is determined by the combination of f_res and t_DLL. This worst-case consideration with f_res and t_DLL provides the specification for the zero-crossing detector. Cases with different f_res and t_DLL need not be considered because a lower f_res or a finer t_DLL will relax the requirement on V_z,n.

As the calibration for automatically finding the peak timing of the induced voltage is a foreground calibration, the timing and frequency at which to perform it should be considered, which depends on the application. Although f_res varies between samples, it does not drift, because f_res is determined by the impedance of the coil, which is almost independent of temperature and supply voltage. Therefore, calibration at power-on is considered appropriate for many applications, and the calibration rate can be defined by the user if needed.

D. SC-Based AFE Circuit for the MI-Based Magnetometer

Fig. 9 shows the detailed AFE circuitry with its timing diagram. The AFE circuit operates in three phases — sampling, hold/CDS, and amplifying — as shown in Fig. 10, and it includes two important features. One is the switch S_ISO, introduced to isolate the sampling part from the opamp side, resulting in a low-noise characteristic.
The other is an additional CDS technique implemented during the amplifying phase to suppress the influence of the parasitic resistance R_p of the coil in the MI element. Detailed explanations and the effects of these two features are provided in this section using each phase shown in Fig. 10.

In the sampling phase shown in Fig. 10(a), the feedback current I_fb flows in the coil to form negative feedback in the magnetic field domain, while the peak of V_in is sampled at the instant of the negative edge of SMPL. Since I_fb is generated from V_out through R_fb, the noise of the opamp directly affects the signal quality of V_in,s — in particular, through the noise folding attributed to sampling by S_SMPL. Therefore, the noise spectrum around the sampling frequency f_s should be considered for a low-noise design. As shown in the proposed AFE circuit in Fig. 9, the switch S_ISO plays an important role in minimizing the folding noise as well as in avoiding the influence of the parasitic capacitance of C_1. During the sampling phase, C_2 holds a charge corresponding to the previously sampled signal. Furthermore, as shown in Fig. 10(a), if a parasitic capacitor C_p is assumed between node X and ground, a noninverting amplifier is formed around the opamp. Hence, the output noise power spectral density (PSD) S_out,n(f) at the moment of the peak sampling, assuming the opamp noise V_n is dominant, becomes

S_out,n(f) = (1 + C_p/C_2)² |1 − e^(−jπf/f_s)|² S_n(f)    (10)

where S_n(f) represents the noise PSD of the opamp corresponding to V_n. In addition, the opamp is modeled as an integrator with a gain-bandwidth product (GBW) of f_0, and the spot noise at f_s, √S_out,n(f_s), can be calculated as 18 nV/√Hz, where each parameter is assumed as follows: f_0 is almost over 10 MHz, C_p/C_2 = 0.1, f_s = 1.28 MHz, and √S_n = 8.1 nV/√Hz. The spot noise at f_s can be used to estimate the folding noise during the sampling phase as shown in Fig. 10(a). The noise contribution from the series-connected feedback resistor R_fb between the coil and the opamp should also be considered in addition to (10) if it is not negligible.

The sampled voltage V_in,s is held during the hold/CDS phase because the left terminal of C_1 is floating, as shown in Fig. 10(b). At this time, the nonideal components are sampled on C_0 because a CDS technique is adopted around the opamp to eliminate its offset voltage and flicker noise in the passband [31]. Therefore, the noise of the opamp is highpass-filtered by the CDS effect, as reflected in (10). The CDS technique also helps realize the lossless SC integrator with the limited dc gain of the opamp [33], which can relax the jitter requirement on SMPL, as discussed in Section II-C.

At the instant in the sampling phase defined as n − 1 [see Fig. 10(a)], an unintended drop voltage due to R_p and I_fb deteriorates the effectiveness of the magnetic negative feedback, as expressed in (4). In the designed magnetometer, an additional CDS technique is introduced to solve this issue; it works in the amplifying phase shown in Fig. 10(c) and eliminates the influence of the drop voltage. At the sampling instant, the sampled voltage V_in,s z⁻¹ becomes G(B_in − B_fb)z⁻¹ + R_p I_fb z⁻¹; the first term is the intended signal component and the second is the undesired drop voltage, which depends on the signal because I_fb z⁻¹ = V_out z⁻¹/R_fb, as described in Section II-B. Then, the switch S_SMPL turns on again during the amplifying phase, as shown in Fig. 10(c), and in this phase, because the wire of the MI element is not excited, V_in represents only the drop voltage R_p I_fb.
Therefore, the voltage across C_1 becomes G(B_in − B_fb)z⁻¹ + R_p I_fb(z⁻¹ − 1), and the second, nonideal term is suppressed within the signal bandwidth owing to this highpass filtering. Since this is equivalent to the effect of the CDS technique and only the desired charge on C_1 is transferred to C_2, the overall transfer function H(z) = V_out(z)/B_in(z), and hence the frequency response |H(f)|, can be written as the product of a passband gain term and a frequency-dependent factor; the latter provides the characteristic of a low-pass filter with a gain of one. The passband gain becomes R_eff/β, which is independent of the intrinsic sensitivity G of the MI element, resulting in a stable passband gain.

III. IMPLEMENTATION

The overall system diagram of the MI-based magnetometer is shown in Fig. 11; all components except for the MI element are integrated into a chip. The designed system includes a DLL/MUX, a logic circuit including a serial peripheral interface (SPI), a comparator for the calibration, a clock generator, a voltage/current reference circuit, an MI driver for the wire in the MI element, and the SC-based AFE circuit. The DLL for adjusting SMPL to an appropriate sampling timing is composed of a phase detector (PD), a charge pump (CP), and a voltage-controlled delay line (VCDL). In this design, the required range of delay adjustment is assumed to be almost 100 ns because the expected resonant frequency of the induced voltage is larger than a few MHz. Therefore, the DLL is designed with a two-stage cascaded configuration to minimize the number of delay cells, as shown in Fig. 12. The DLL is driven by a clock CLKD with a 25% duty cycle and a 1.28-MHz frequency generated by a divider circuit, and the first DLL outputs a clock O0 with a delay of one-fourth the period of the root clock, 1/(4 × 2.56 MHz), from the rising edge of a clock CLKS, which is divided by two from the root clock. The obtained O0 is used at the second DLL as a reference, and the rising edge of the delay chain output O2 is aligned to that of O0, where the required resolution t_DLL is determined by the number of delay cells in this stage. As discussed in Section II-C, since t_DLL can be specified from the acceptable variation in the loop gain or noise floor of the magnetometer, the number of delay cells is set to 68, where the same condition as discussed before (10% variation) is assumed and the margin is almost double, resulting in t_DLL ≈ 1.46 ns. The clocks for driving the analog part and the other blocks are created by the clock generator. The MI driver is implemented as an inverter with a large channel width to push a current of up to 50 mA into the wire of the MI element. The analog part for acquiring the peak of the induced voltage from the MI element is designed in a fully differential configuration, and the magnetic negative feedback is realized through the feedback resistor R_fb. In the calibration mode, the control signal CE is activated to reconfigure the analog part for the calibration. As shown in Fig. 4(a), the feedback is removed and R_fb is then reused, through CE, to generate a constant magnetic field in the MI element for the calibration. The SC integrator is reconfigured as a high-bandwidth SC amplifier that amplifies the sampled induced voltage.
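Stepping back to the DLL sizing just described, the delay-resolution budget of Sections II-C and III can be checked numerically. The sketch below uses the sensitivity relation G′ = G cos(π f_res t_DLL) reconstructed in (6); the 50-MHz worst-case f_res and the 10% noise budget are the values quoted in the text.

```python
import numpy as np

# Check of the DLL resolution budget. Relies on the reconstructed relation
# G' = G*cos(pi*f_res*t_DLL) from (6); f_res and the 10% noise budget are
# the values quoted in the text.
f_res_max    = 50e6     # worst-case resonance frequency (Hz)
noise_budget = 1.10     # allowed 10% increase in noise floor

# A 10% noise increase corresponds to G shrinking by a factor 1/1.10.
t_dll_req = np.arccos(1 / noise_budget) / (np.pi * f_res_max)
print(f"required t_DLL   ~ {t_dll_req * 1e9:.2f} ns")    # ~2.74 ns

# Implementation: ~100-ns adjustment range split over 68 delay cells,
# giving roughly a 2x margin against the requirement above.
t_dll_impl = 100e-9 / 68
print(f"implemented t_DLL ~ {t_dll_impl * 1e9:.2f} ns")  # ~1.47 ns
```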
This reconfiguration can be implemented by adding a reset switch S_RST, driven by RST, in parallel with the integration capacitor C_2; it operates in the hold/CDS phase shown in Fig. 9. Therefore, RST is obtained as the logical product of CDS and CE. Meanwhile, the capacitance of C_2 is also changed by CE to set an appropriate gain and bandwidth. As discussed in Section II-C, the zero-crossing detector comprising the SC amplifier and comparator needs to satisfy (9). The detector is designed with 130-µV rms input-referred noise and offset, which is considerably lower than the required V_z,n defined in (9) when it is assumed that V_in,p = 100 mV, f_res = 50 MHz, and t_DLL = 1.46 ns.

The opamp used in the SC integrator is based on a two-stage folded-cascode topology with an output buffer, as shown in Fig. 13, where the bias and common-mode feedback circuits are omitted for simplicity. In the first stage, the input differential pair is biased in the weak or moderate inversion region for a high transconductance/current efficiency [35] and hence a high transconductance, which is equivalent to low thermal noise. The source degeneration resistors R_sd are utilized to reduce the flicker noise contribution from the current-source transistors M_n. The transconductance of the power-rail-side transistors M_p, another dominant noise contributor in the first stage, is reduced by setting their gate-overdrive voltage higher. The output stage is added to reduce the output impedance and provide driving capability because the amplifier must drive the feedback resistor R_fb. This stage should be a wide-swing voltage buffer with a gain of one, and it is implemented by a differential-difference amplifier (DDA) [36] with a folded-mesh class-AB output stage [37], where the DDA is configured as a voltage follower to reduce the output impedance by the open-loop gain of the DDA. In addition to this local negative feedback effect, since the opamp is used in the SC integrator, the output impedance can be further reduced. This closed-loop DDA is designed to have a higher bandwidth than the unity-gain frequency of the first stage so that this stage does not affect the phase margin of the opamp. The designed opamp has approximately 10-MHz GBW for proper settling in the 1.28-MHz clocked SC circuit.

IV. MEASUREMENT RESULTS

The prototype AFE chip is fabricated in 0.18-µm CMOS technology. The chip area, including I/Os and pads, is 1.35 × 1.35 mm², as shown in Fig. 14, and the MI element chip occupies 0.6 × 6 mm² separately. Fig. 15 shows the measurement setup, where a magnetic field is generated by a custom Helmholtz coil with signal sources. A reference magnetometer is used to monitor the applied magnetic field at the device under test (DUT). The measured dc curve and linearity error are shown in Fig. 16(a) and (b), respectively. The total sensitivity, or gain, from the input magnetic flux density B_in to the output voltage V_out is 9.0 mV/µT, and the linearity error within the input range of ±120 µT is +0.38/−0.29%. The worst-case error with the same input range over ten samples is +0.43/−0.98%. The linearity error is considered to be due to the switches in the feedback path shown in Fig. 11, which are required to change the mode between magnetic sensing and calibration. The frequency response after the calibration is shown in Fig. 17, and the bandwidth is 31 kHz, where the passband gain is expressed in decibels, calculated from V_out/B_in.
The passband gain is determined by R_eff and β as described in Section II-D, and the measured gain variation among ten samples is 0.5 dB in the passband. Fig. 18 shows the noise spectral density of the magnetometer, where the in-band noise floor is input-referred using the gain. The low-frequency noise from the AFE circuit is suppressed by the CDS technique around the opamp, and the in-band noise floor is 8.0 pT/√Hz. Although in our previous design [27] there was a spur with an amplitude of almost 600 pT in the signal bandwidth, it is suppressed in this refined prototype. This in-band spur is attributed to the logic circuit, and the cause can be avoided by optimizing the digital part and strengthening its isolation from the analog part in the layout design. Fig. 19(a) and (b) shows transient responses for a 500-Hz sinusoidal input with magnitudes of 2 and 100 µT, respectively. The waveforms of the DUT for both small and large inputs are obtained without large noise or distortion compared with the current of the Helmholtz coil and the reference magnetometer output.

The designed magnetometer utilizes a magnetic negative feedback architecture, and it requires a relatively large compensation current to create a magnetic field in the direction opposite to the input one; in this design, this current is generated by the feedback resistor R_fb and the voltage V_out applied across it. This current, and hence the power consumption of the AFE circuit, is proportional to the signal magnitude, or the operating point at the output, owing to the class-AB output stage in the amplifier. The dependence of the power on the operating point, or the input B_in, is illustrated in Fig. 20, where ten samples are measured. The results indicate that the magnetometer consumes almost 8.0 mW when B_in = 120 µT.

To confirm the effectiveness of the proposed calibration scheme, the bandwidth and noise floor before and after calibration are plotted for ten samples in Fig. 21. The initial code D_ctrl before the calibration is 12, which corresponds to a t_sd of almost 17.5 ns. After the calibration, the code for each sample settled to 4, 5, or 6. It is confirmed that both the bandwidth and the noise floor are improved by the proposed calibration. Variation remains even after the calibration because the intrinsic sensitivity in the loop gain varies between devices. The bandwidth and noise floor are also measured for each sampling delay t_sd, which can be set by the code D_ctrl through the MUX as shown in Fig. 4(a) and swept manually through an SPI command. The dependence of the bandwidth and noise floor on t_sd is illustrated in Fig. 22(a) and (b), respectively, with results for ten samples. An inappropriate t_sd reduces the loop gain and loop bandwidth, and it directly degrades the signal bandwidth and the noise characteristic. These measurement results show that there is an optimum t_sd, corresponding to a code of around 4, and it can be found automatically using the proposed calibration scheme.

The typical characteristics of the prototype chip are summarized in Table I. The MI driver, SC-based AFE circuit, logic circuit, and clock generator are driven by a 1.28-MHz clock, which is generated from an external 2.56-MHz clock. Since the input range is ±120 µT, the DR becomes 96 dB for the in-band integrated noise. If the noise is instead integrated out of band up to 200 kHz, it is 1.8 nT_rms, corresponding to a DR of 93 dB. A comparison with prior works is summarized in Table II.
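Before turning to that comparison, the DR entries just quoted can be sanity-checked from the reported noise floor, bandwidth, and input range. The sketch below assumes DR is defined as the rms of a full-scale sine over the integrated rms noise, which is an assumption on our part; the paper does not spell out its definition.

```python
import numpy as np

# Reproduce the reported DR values from the reported noise and input range.
# Assumes DR = rms of a full-scale sine over integrated rms noise (our
# assumption, not the authors' stated formula).
B_fs  = 120e-6     # full-scale amplitude for the +/-120-uT input range (T)
floor = 8e-12      # in-band noise floor (T/sqrt(Hz))
bw    = 31e3       # signal bandwidth (Hz)

noise_inband = floor * np.sqrt(bw)                     # ~1.4 nT rms
dr_inband = 20 * np.log10((B_fs / np.sqrt(2)) / noise_inband)
print(f"in-band DR ~ {dr_inband:.1f} dB")              # ~95.6 dB vs reported 96 dB

noise_oob = 1.8e-9                                     # integrated to 200 kHz (reported)
dr_oob = 20 * np.log10((B_fs / np.sqrt(2)) / noise_oob)
print(f"out-of-band DR ~ {dr_oob:.1f} dB")             # ~93.5 dB vs reported 93 dB
```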
The FG magnetometers [5], [6] achieve lower noise floors of a few tens of pT/√Hz. However, their power consumption is relatively high because their excitation current tends to be large in order to saturate the magnetization of the core in the FG sensor. The MI-based magnetometer is driven by a short pulse, and therefore the excitation current can be reduced compared with the FG-based ones. Although magnetometers based on IFG [12], [14] can provide a chip-scale implementation and achieve a larger input range, they still have higher noise and power consumption.

V. CONCLUSION

An automatic digital calibration technique for enhancing the loop gain in an MI-based magnetometer has been presented. The designed magnetometer achieves low-noise and high-bandwidth characteristics. A dedicated switching scheme avoids the open-loop state in the SC circuit and reduces the folded-noise influence of the opamp. In addition, by adopting the CDS technique, a nonideality related to the parasitic resistance of the coil in the MI element is suppressed, recovering the effectiveness of the magnetic negative feedback. A prototype chip fabricated in a 0.18-µm CMOS process achieved a noise floor of less than 10 pT/√Hz and a 96-dB DR at a power consumption of 2 mW. A comparison with other state-of-the-art magnetometers shows that the presented chip achieves higher efficiency in terms of power, noise, DR, and bandwidth.
Lung tumor segmentation methods: Impact on the uncertainty of radiomics features for non-small cell lung cancer

Purpose: To evaluate the uncertainty of radiomics features from contrast-enhanced breath-hold helical CT scans of non-small cell lung cancer for both manual and semi-automatic segmentation due to intra-observer, inter-observer, and inter-software reliability.

Methods: Three radiation oncologists manually delineated lung tumors twice from 10 CT scans using two software tools (3D-Slicer and MIM Maestro). Additionally, three observers without formal clinical training were instructed to use two semi-automatic segmentation tools, Lesion Sizing Toolkit (LSTK) and GrowCut, to delineate the same tumor volumes. The accuracy of the semi-automatic contours was assessed by comparison with physician manual contours using Dice similarity coefficients and Hausdorff distances. Eighty-three radiomics features were calculated for each delineated tumor contour. Informative features were identified based on their dynamic range and correlation to other features. Feature reliability was then evaluated using intra-class correlation coefficients (ICC). Feature range was used to evaluate the uncertainty of the segmentation methods.

Results: From the initial set of 83 features, 40 radiomics features were found to be informative, and these 40 features were used in the subsequent analyses. For both intra-observer and inter-observer reliability, LSTK had higher reliability than GrowCut and the two manual segmentation tools. All observers achieved consistently high ICC values when using LSTK, but the ICC value varied greatly for each observer when using GrowCut and the manual segmentation tools. For inter-software reliability, features were not reproducible across the software tools for either manual or semi-automatic segmentation methods. Additionally, no feature category was found to be more reproducible than another feature category. Feature ranges of LSTK contours were smaller than those of manual contours for all features.

Conclusion: Radiomics features extracted from LSTK contours were highly reliable across and among observers. With semi-automatic segmentation tools, observers without formal clinical training were comparable to physicians in evaluating tumor segmentation.
Introduction

Precision medicine aims to customize cancer treatment for an individual patient by considering combined knowledge (i.e., conventional factors such as age and sex, genetics, proteins, and others) [1,2]. Precision medicine seeks to completely characterize the tumor to determine the optimal treatment based on patient-specific characteristics. In recent years, studies have shown that radiomics features have the potential to significantly improve our ability to stratify patients according to likely treatment response beyond conventional prognostic factors, thereby leading to truly personalized cancer care [3][4][5][6][7].

The generic workflow of radiomics studies includes four steps: (1) image acquisition, (2) tumor delineation, (3) feature extraction, and (4) feature analysis [8,9]. The tumor delineation can be drawn manually or generated with a semi-automatic tool. Once the tumor delineation has been established, radiomics features are extracted from the tumor-defined region within the image. Thousands of radiomics features can be calculated for one tumor, and each feature characterizes the tumor in a different way. For example, roundness is a radiomics feature that characterizes the tumor shape and can be used to predict how the tumor may spread to nearby locations. Lastly, features are evaluated to see whether they correlate with prognostic or predictive factors. Features that are shown to be predictive are then used to build outcome models that help predict how a patient will respond to a treatment. For different diseases, different radiomics features can be selected for outcome modeling to predict likely treatment response.

Before radiomics features can be clinically useful, it is necessary to investigate and understand their uncertainties. One major source of uncertainty comes from the tumor delineation. Manually delineating a tumor precisely is, in general, difficult. Tumors often lie adjacent to other organs that share similar characteristics with the tumor, making it difficult to distinguish the true tumor boundary. Additionally, medical images are far from perfect, as they have limited resolution (limiting our ability to see very small objects) and can contain artifacts (features in an image that do not represent a real aspect of the imaged object). Physicians may interpret the tumor differently, depending on their training and experience [10]. In addition, the different software tools that physicians use to draw the tumor contours may also affect the results, depending on user familiarity with the tool. Because radiomics features are calculated from the delineated tumor, uncertainty in tumor delineation can propagate to the radiomics features. Recent advances in computer-aided automatic and semi-automatic segmentation approaches have been shown to reduce the burden of manual delineation and lessen the inconsistency in tumor delineation [11,12]. To date, a small number of studies have been performed to relate this reduced uncertainty in tumor delineation to the quality and reproducibility of radiomics features [13][14][15][16][17].

In this study, we examined three specific factors that can influence the uncertainty of radiomics features for both manual and semi-automatic segmentation methods: (1) intra-observer, (2) inter-observer, and (3) inter-software.
Manual contours were generated by three independent physicians using MIM Maestro™ (MIM Software Inc., Cleveland, Ohio, USA) and 3D-Slicer [18]. Semi-automatic contours were generated by three trained observers using the GrowCut algorithm from 3D-Slicer [11] and the Lesion Sizing Toolkit (LSTK) [19]. While the segmentation accuracy of LSTK has been evaluated [19,20], to our knowledge the reliability of radiomics features extracted from LSTK-generated contours has not been studied. Additionally, we evaluated whether manual software tools and semi-automatic software tools can be used interchangeably for generating contours for feature extraction. The purpose of this study can be summarized into two main objectives. The first objective was to identify a reliable segmentation tool that produces lung tumor segmentations yielding reliable and robust radiomics features for the same observer, across multiple observers, and across multiple software tools. The second objective was to identify a group of reliable radiomics features for non-small cell lung cancer (NSCLC) primary tumors.

Materials and methods

Patient data and CT image acquisition

For this study, we retrospectively obtained patient data for 10 patients with histologically verified NSCLC. The Institutional Review Board (IRB) at The University of Texas MD Anderson Cancer Center approved the present retrospective study, and the requirement for informed consent was waived. The lung tumors included in this study had volumes ranging from 1.15 cm³ to 10.53 cm³. For each patient, breath-hold helical computed tomography (CT) scans were acquired with intravenous contrast. The CT scans were acquired on General Electric Healthcare CT scanners with a peak tube voltage of 120 kVp and tube current–time products ranging from 320 mAs to 570 mAs. Each scan was reconstructed with a slice thickness of 2.5 mm and a pixel spacing between 0.635 mm and 0.977 mm.

Manual segmentation

Manual segmentations were performed by three radiation oncologists using two different software tools: MIM Maestro™ (MIM Software Inc., Cleveland, Ohio) and 3D-Slicer (a free open-source software platform) [18]. Each physician manually segmented each of the 10 tumors using both manual software tools, following the RTOG 1106 contouring guideline [21,22]. This guideline recommends contouring the primary tumor volume on CT images using a standard lung window/level for distinguishing lung borders and using a mediastinal window/level for distinguishing borders adjacent to the mediastinum. This process was repeated twice at two different times, yielding two sets of contours (Fig 2). The time intervals between the two sets of contours were approximately 1 year for the first two physicians and 1 month for the third physician. In total, 120 manual tumor contours were generated (2 software tools × 3 observers × 2 contours × 10 tumors). For both manual software tools, tumors were contoured using a paintbrush tool (thresholding in 3D-Slicer) in a slice-by-slice fashion in the transverse plane. Physicians could also observe and edit the tumor in the coronal and sagittal planes, when desired.

Semi-automatic tumor segmentation

Semi-automatic segmentations were generated using two different software tools: LSTK (a level-set algorithm available from an open-source toolkit) and GrowCut (a region-growing algorithm implemented in 3D-Slicer). For the semi-automatic segmentations, three observers without formal clinical training were instructed to use the two semi-automatic tools to generate tumor segmentations.
Verbal step-by-step instructions were given to each observer on using each software tool. After that, observers practiced using each software tool on three lung tumors (outside the study). The entire process took less than 15 minutes, with instruction lasting 5 minutes and practice lasting less than 10 minutes. Once observers felt comfortable with a software tool, the segmentations for this study were collected. The contouring process used for the manual contours was repeated for the semi-automatic contours for the same 10 tumors (Fig 2). The time interval between the two sets was 1 to 2 months for each observer to lessen memory effects. Other studies have shown that 3 weeks between contouring runs are enough to mitigate the effects of memory [23].

For GrowCut, observers labeled foreground and background pixels with two clicks (Fig 3) in each view, totaling at least six clicks per tumor case. If the tumor was attached to the chest wall or mediastinum, additional clicks at appropriate locations were needed to help the algorithm differentiate the tumor from the chest wall or mediastinum. Once the labels were established, the GrowCut algorithm was run, followed by manual editing of the GrowCut-generated contours. The editing process took up to 2 minutes for some tumor cases. For LSTK, the only interaction was to pick a seed, which is a user-selected voxel within the tumor (Fig 3). Defining the maximum tumor radius was optional; however, defining an appropriate maximum tumor radius could save computation time in running LSTK. The LSTK algorithm has several preset parameters that can affect the segmentation result. We used the initial physician manual contours to guide us in selecting these parameters. Detailed discussions of the algorithms of GrowCut and LSTK can be found in other publications [19,20].

Validating tumor segmentation accuracy

We validated the accuracy of each semi-automatic segmentation. A group-consensus contour was generated as the ground truth, where the group-consensus contour is taken to be the intersecting tumor volume shared by a majority of experts [23][24][25]. In this study, the group-consensus contour consisted of the tumor region where at least four of the initial six manual physician contours overlapped. To assess the accuracy of each tumor segmentation, the Dice similarity coefficient (DSC) and Hausdorff distance (HD) were calculated between the group-consensus contour and each individual semi-automatic contour. The DSC quantifies the spatial overlap between two contours, while the HD quantifies the longest distance between the boundaries of two contours. While the DSC can detect incorrectly labeled voxels, the HD metric is better at detecting deviations (sharp spikes or tiny holes) that significantly alter the contour shape but do not substantially alter the volume.
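For readers who want to reproduce these two metrics, the following Python sketch computes the DSC and a symmetric Hausdorff distance for binary masks. The masks and grid are toy data, and voxel coordinates would need to be scaled by the physical spacing to express HD in centimeters.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two point sets (N x 3 arrays)."""
    return max(directed_hausdorff(points_a, points_b)[0],
               directed_hausdorff(points_b, points_a)[0])

# Toy example: two overlapping spheres on a 32^3 voxel grid.
z, y, x = np.ogrid[:32, :32, :32]
a = (x - 15) ** 2 + (y - 15) ** 2 + (z - 15) ** 2 <= 8 ** 2
b = (x - 17) ** 2 + (y - 15) ** 2 + (z - 15) ** 2 <= 8 ** 2

print(f"DSC = {dice(a, b):.3f}")
# Multiply by the voxel spacing (e.g., 2.5-mm slices) to report HD in cm.
print(f"HD  = {hausdorff(np.argwhere(a), np.argwhere(b)):.2f} voxels")
```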
Feature extraction

Features were calculated for all 240 tumor segmentations (120 manual + 120 semi-automatic). For this study, feature extraction was performed using the open-source Imaging Biomarker Explorer (IBEX) software [26]. A total of 83 features were calculated. We stratified the features into three main categories: geometric shape (SHP), intensity histogram (HIS), and texture (TXT). Co-occurrence matrix features (a subcategory of texture features) were calculated in four directions (0, 45, 90, and 135 degrees), and the final value was taken to be the average of these four directions to avoid directional bias [27].

A common pre-processing step used to refine contours before feature extraction is to remove voxels with intensity values corresponding to normal lung tissue, bone, or air that might be inside the tumor contour. Since the purpose of this study is to investigate the effect of segmentation uncertainty on radiomics features, we omitted this step to adhere to the original segmentation. We also did not correct for pixel size [28] or perform smoothing [29], to avoid introducing other uncertainties into this study.

Fig 3. (A) Defining the maximum tumor radius generates a 3D bounding box (green) centered about the seed, within which the segmentation result will be confined. (B) GrowCut requires the user to label foreground (blue) and background (yellow) pixels to initiate the segmentation algorithm. Once labels were established, the GrowCut algorithm was followed by manual editing of the GrowCut-generated contours. Note that only the transverse view is shown here. Observers also labeled foreground and background pixels in the coronal and sagittal planes for each tumor case.

Feature reduction

One common approach for narrowing the feature set is to apply a combination of different methods in a sequential manner [9,14,15,30,31] to remove features that are non-informative or redundant. In the current study, we applied two steps to reduce the initial feature set of 83 features to 40 informative and non-redundant features. The first step was to remove features that did not vary across different patients. For a feature to be informative, it must exhibit a range of values across different patients [9,14]. In other words, it must have a wide dynamic range to differentiate patients. Because multiple contours were generated for each patient, the average feature value was calculated for each patient. Before calculating the normalized dynamic range (NDR) for each feature, the average values for each feature were rescaled (across patients) to have a mean of 0 and a standard deviation of 1 using z-score normalization, so that features with values on different scales could be compared. The NDR for each feature, NDR_f, was calculated as

NDR_f = max(f̂_avg) − min(f̂_avg)

where max(f̂_avg) is the maximum normalized average feature value across all patients and min(f̂_avg) is the minimum normalized average feature value across all patients. Once the NDR is calculated for each feature, a cutoff value is chosen as a means to remove the least informative features. In general, the cutoff value is chosen arbitrarily and may be set to a higher or lower value [9,15]. For the second step, highly correlated features were removed. It is well known that many features are highly correlated [9]. To deal with this issue, we computed a correlation matrix to identify highly correlated features. In this step, Spearman correlation coefficients were computed to evaluate the correlation between all features.
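The two feature-reduction steps can be expressed compactly in Python. The sketch below uses hypothetical feature values; the 2.4 cutoff shown corresponds to the minimum NDR observed in this study rather than a universal threshold, and the 0.95 Spearman cutoff follows the text.

```python
import numpy as np
import pandas as pd

# Hypothetical per-patient average feature table: 10 patients x 83 features.
rng = np.random.default_rng(0)
feats = pd.DataFrame(rng.normal(size=(10, 83)),
                     columns=[f"feat_{i}" for i in range(83)])

# Step 1: z-score each feature across patients, then compute its NDR.
z = (feats - feats.mean()) / feats.std()
ndr = z.max() - z.min()                 # NDR_f = max(f_hat_avg) - min(f_hat_avg)
ndr_cutoff = 2.4                        # analyst-chosen; all study features exceeded 2.4
informative = feats.loc[:, ndr > ndr_cutoff]

# Step 2: for each pair with |Spearman rho| > 0.95, drop the member with
# the larger mean absolute correlation to all other features.
rho = informative.corr(method="spearman").abs()
to_drop = set()
cols = list(rho.columns)
for i, fa in enumerate(cols):
    for fb in cols[i + 1:]:
        if fa in to_drop or fb in to_drop:
            continue
        if rho.loc[fa, fb] > 0.95:
            to_drop.add(fa if rho[fa].mean() > rho[fb].mean() else fb)

reduced = informative.drop(columns=sorted(to_drop))
print(reduced.shape)                    # remaining non-redundant features
```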
Feature reliability analysis

In this study, we examined three specific factors that can influence feature reliability: intra-observer, inter-observer, and inter-software (Table 1). Intra-observer agreement is a reliability measure of repeatability, while inter-observer and inter-software agreement are reliability measures of reproducibility [32]. To assess feature reliability, intraclass correlation coefficients (ICCs) were calculated for each feature. There are ten different forms of the ICC [33], and selecting the appropriate form depends on the experimental setup. To assess intra-observer reliability, we used a one-way random-effects model in which the tumor cases are a random effect. To assess inter-observer and inter-software reliability, we used a two-way mixed-effects model in which the tumor cases are a random effect and the observers (for inter-observer) or the software tools (for inter-software) are a fixed effect. The specific ICC form used to assess each reliability relationship is shown in Table 1.

Table 1. ICC formulas used to assess feature reliability.
- Intra-observer: one-way random-effects model, single measure, absolute agreement — to determine whether features can be extracted reliably from tumor contours generated by a single physician/observer using a single software tool at multiple time points.
- Inter-observer: two-way mixed-effects model, single measure, absolute agreement — to determine whether features can be extracted reliably from tumor contours generated by multiple physicians/observers using a single software tool.
- Inter-software: two-way mixed-effects model, single measure, absolute agreement — to determine whether features can be extracted reliably from tumor contours generated by a single physician/observer using multiple software tools.

The ICC values, which can range from −1 to 1, were stratified into four classifications. ICC values less than 0.4, between 0.4 and 0.6, between 0.6 and 0.75, and greater than 0.75 represented the bounds for the classifications of poor, fair, good, and excellent reliability, respectively [23].
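To make the intra-observer form in Table 1 concrete, the following sketch computes ICC(1) (one-way random effects, single measure, absolute agreement) from first principles on hypothetical data; the two-way mixed-effects forms used for the other relationships follow analogously.

```python
import numpy as np

def icc_oneway(x):
    """ICC(1): one-way random-effects, single measure, absolute agreement.
    x has shape (n_subjects, k); here subjects are tumors and the k columns
    are repeat contour runs (the intra-observer case)."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Hypothetical feature values: 10 tumors, 2 contour runs with small
# run-to-run perturbations (illustration only).
rng = np.random.default_rng(1)
truth = rng.normal(size=(10, 1))
runs = truth + 0.1 * rng.normal(size=(10, 2))
print(f"ICC(1) = {icc_oneway(runs):.3f}")   # near 1 -> 'excellent' band
```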
Correlation between ICC and CCC. Concordance correlation coefficients (CCCs) were also calculated because other feature reliability studies have used the CCC metric in their analyses [14,29,34,35]. Spearman rank correlation coefficients and pairwise scatterplots were computed between the ICC and CCC estimates for each reliability relationship.

Identifying reliable feature categories. For this part of the analysis, we wanted to determine whether a specific feature category (shape, histogram, texture) was significantly more reproducible than another feature category. For this determination, Wilcoxon rank sum test (also known as the Mann-Whitney test) values were computed between each feature category combination (e.g., shape versus histogram) for each ICC relationship.

Feature range analysis

For segmentations from each software tool, we calculated the feature range (inter-patient variability) across observers for each radiomics feature. First, we normalized each feature using z-score normalization. This allowed us to more easily compare and plot features on different scales. Each normalized feature, f̂_i, was calculated as

f̂_i = (f_p,i − f̄_p)/σ_p,f

where f_p,i is the feature value for contour i from patient p, f̄_p is the mean value of feature f over all contours from patient p, and σ_p,f is the standard deviation of feature f over all contours from patient p. We then recorded the minimum and maximum normalized feature values for each segmentation method to assess the feature range of each segmentation method.

Results

Validating tumor segmentation accuracy

For the semi-automatic tools, the mean DSCs were 0.88 ± 0.06 and 0.88 ± 0.08 for LSTK and GrowCut, respectively (Fig 4). The mean HD values were 0.48 ± 0.17 cm and 0.43 ± 0.20 cm for LSTK and GrowCut, respectively. The DSC and HD results show that trained observers can achieve contours with these semi-automatic tools that are comparable to the group-consensus physician contour, and hence these semi-automatically generated contours can be used for feature extraction.

Feature reduction

To identify non-informative features, the NDR was calculated for each feature. A histogram showing the number of features within a range of NDR values is shown in Fig 5. All features had an NDR value greater than 2.4, and hence all features were considered to exhibit large enough inter-patient variability to remain in the feature set. To evaluate the correlation between all features, pairwise Spearman correlation coefficients were computed (Fig 6). Pairwise correlation coefficients with an absolute value larger than 0.95 were regarded as highly redundant [15]. For each correlated pair, the feature with the largest mean absolute correlation was removed, reducing the feature set to 40 non-redundant features (Fig 7).

Feature reliability analysis

Correlation between ICC and CCC. For each reliability relationship, the ICC and CCC estimates showed a statistically significant positive correlation (ρ > 0.965, p < 0.0001), indicating that the feature reliability ranking was nearly the same for these two reliability metrics. For the pairwise scatterplots, all reliability relationships could be modeled with a strong positive linear regression fit (R² > 0.982, p < 0.0001). These results indicate that the ICC and CCC metrics yield similar results for this analysis.

Feature repeatability: Intra-observer. For intra-observer reliability, we wanted to evaluate whether features could be extracted reliably from tumor contours generated by a single observer using a single software tool at multiple time points. For each feature, ICC values were calculated between the features generated from the first and second contour runs for each user and software tool combination. The results showed that intra-observer reliability was highly observer-dependent (Fig 8, Table 2). For the manual tools, the average ICC values were much lower for physicians 1 and 2 (MIM: 0.63, 0.17; 3DS: 0.72, 0.83) than for physician 3 (MIM: 0.96, 3DS: 0.96). This is likely because the time between the contour runs for physicians 1 and 2 was 1 year, whereas for physician 3 the elapsed time between contour runs was 1 month. For the semi-automatic tools, all observers achieved higher average ICC values with LSTK (0.97, 0.98, 0.85) than with GrowCut (0.94, 0.85, 0.75). This shows that LSTK can be used to minimize the effect of intra-observer variability compared with GrowCut, as was shown with observer 3, whose average ICC value improved substantially from 0.75 (for GrowCut) to 0.95 (for LSTK). LSTK requires less user interaction than GrowCut, which typically requires manual editing after the segmentation, thus leading to more consistent feature values.

Fig 8. Box plot of ICCs for each intra-observer relationship. ICC values were computed between contour run 1 and contour run 2 for each feature. Each physician/observer and software tool combination is plotted along the x-axis. Intra-observer reliability was observer-dependent. All observers achieved excellent feature reliability with LSTK.

Feature reproducibility: Inter-observer. For inter-observer reliability, we wanted to evaluate whether features could be extracted reliably from tumor contours generated by multiple observers using a single software tool. For each feature, ICC values were calculated between the features generated by multiple users for each contour run and software tool combination. For both manual tools, the average ICC was less than 0.79 for both contour runs (Fig 9, Table 2). For the semi-automatic tools, GrowCut (0.70, 0.85) had inferior feature reliability compared with LSTK (0.98, 0.96). Moreover, LSTK had average ICC values that fell within the excellent ICC classification for both contour run 1 and contour run 2.
This shows that LSTK has superior feature reliability across observers compared with the other software tools used in this study.

Feature reproducibility: Inter-software. For inter-software reliability, we sought to evaluate whether features could be extracted reliably from tumor contours generated by a single observer using multiple software tools. For each feature, ICC values were calculated between the features generated by multiple software tools for each user. For both manual and semi-automatic methods, the average ICC was less than 0.78 for all physicians and observers (Fig 10, Table 2). Although 0.78 falls within the good reproducibility bounds, it is important to note that the confidence intervals for these results are very large (which could be attributable to the small sample size used in this study) and that for many features the lower bound of the confidence interval overlaps with the bounds of the ICC classification for poor reproducibility. These results indicate that different software tools do not yield reproducible features and should not be used interchangeably. This has also been concluded by other studies looking specifically at lung nodule volumes [36,37].

Because the boxplots (Figs 8-10) show only the spread of ICC values for each ICC relationship, Fig 11 allows one to see the ICC classification of each feature for each ICC relationship. ICC values were sorted into their respective ICC classifications based on the lower bound of the 95% confidence interval of the ICC value (Fig 11). Koo et al. recommend using the 95% confidence interval to evaluate the level of reliability rather than the ICC estimate, as the ICC estimate is merely an expected value of the true ICC [38]. Once more, the results in Fig 11 further support the finding that LSTK has superior feature reproducibility, with 31 of the 40 features having lower bound values that fell within the excellent classification for all intra-observer and inter-observer relationships. These results showed that LSTK helps to improve feature reliability for many features across observers and for repeat measures performed by a single observer. Additionally, it can easily be noted that most features, irrespective of the segmentation method, contour run, or physician/observer, fell within the poor classification for feature reproducibility for all inter-software relationships.

Identifying reliable feature categories. In this part of the analysis, we wanted to evaluate whether a specific feature category was more reproducible than another feature category. The results of the Wilcoxon rank sum tests showed that for all ICC relationships, the reproducibility of shape features did not significantly differ from that of histogram features, and the reproducibility of histogram features did not significantly differ from that of texture features (Fig 12). When assessing whether the reproducibility of shape features was significantly different from that of texture features, only four ICC relationships had shape features that were significantly more reproducible than texture features, whereas three ICC relationships had shape features that were significantly less reproducible than texture features. Overall, no feature category was found to be more reproducible than another.

Feature range analysis

To assess the feature range for each feature, we plotted the minimum and maximum normalized feature values for each segmentation method (Fig 13).
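A minimal sketch of this normalization and range bookkeeping, on hypothetical data standing in for the per-contour feature table, is given below.

```python
import numpy as np
import pandas as pd

# Hypothetical long-format table: 10 patients x 6 contours of one feature.
rng = np.random.default_rng(2)
df = pd.DataFrame({
    "patient": np.repeat(np.arange(10), 6),
    "feature_x": rng.normal(size=60),
})

# z-score the feature within each patient, then record the min and max
# normalized values, i.e., the feature range plotted per method in Fig 13.
grp = df.groupby("patient")["feature_x"]
df["f_hat"] = (df["feature_x"] - grp.transform("mean")) / grp.transform("std")
print(df["f_hat"].min(), df["f_hat"].max())
```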
Feature range analysis

To assess the feature range for each feature, we plotted the minimum and maximum normalized feature values for each segmentation method (Fig 13). The semi-automatic contours had smaller feature ranges than the manual delineations for most of the features (except for 7 of the 40 features). Furthermore, when we compared the feature ranges for LSTK, all features had smaller ranges across observers than the manual delineations. Additionally, all but four features had ranges that overlapped with the manual ranges.

Discussion

Tumor delineation is an important aspect of the radiomics workflow. Variation in contouring can affect the extracted feature values, which would undoubtedly influence subsequent steps in the radiomics workflow. Identifying contouring software tools that improve feature reliability helps to mitigate feature uncertainties that arise from inconsistent contouring. In this study, we evaluated the uncertainty of radiomics features from both manual and semi-automatic segmentation due to intra-observer, inter-observer, and inter-software reliability. We found that, using semi-automatic segmentation such as LSTK, observers without formal clinical training can generate contours that are comparable to manually drawn contours generated by formally trained physicians (Fig 4). In terms of intra-observer reliability, we found that features extracted from LSTK contours were more reliable than those extracted from contours generated with other software tools for all observers (Fig 8, Table 2).

[Fig 12 caption: Wilcoxon rank sum results between intraclass correlation coefficients for different feature categories. Asterisks indicate that the median ICC was significantly different (p<0.05) between the two feature categories being compared. Blue cells indicate that the reproducibility of texture features was significantly less than that of shape features; red cells indicate that it was significantly greater.]

Of the two semi-automatic segmentation tools, LSTK showed better intra-observer reliability than GrowCut because less human interaction was needed to generate contours with LSTK, as exemplified by the improvement in intra-observer reliability for observer 3 (Table 2). For inter-observer reliability, we found that features extracted from LSTK contours were more reliable across observers than features extracted with all other software tools (Fig 9). Regarding inter-software reliability, we found that different software tools do not yield reproducible features, even when the same observer uses the two tools (Fig 10). In other words, segmentation tools cannot be used interchangeably if the contours will be used in subsequent radiomics studies. In addition, we found that the feature range was smaller across observers for all features generated from LSTK contours than for other contours (Fig 13), implying less uncertainty when the contours were generated with less human interaction. In other words, to minimize the uncertainty in radiomics studies, one should adhere to a single contouring approach and automate the contouring process as much as possible. Additionally, no feature category was found to be more reproducible than another (Fig 12).
Our findings agree with a previously conducted study which found that features were less reliable when extracted from segmentations generated with different algorithms (similar to our inter-software relationship) than when extracted from segmentations from repeat runs of the same algorithm (similar to our intra-observer relationship) [17]. The difference between our study and the study by Kalpathy-Cramer et al. is that we also looked at the effect of different observers using the same segmentation tool. This is an important interaction to assess because different observers, depending on their training and familiarity with the segmentation tool, may use the same tool differently, which can affect the final segmentation.

There are three main limitations of this study. The first limitation is that a small patient population was used. Sample size is an important factor to consider when using inferential statistics such as the ICC. Small sample sizes lack power and can result in large confidence intervals [39]. The negative ICC values observed in this study could be caused by the insufficient sample size as well. Future studies with larger sample sizes may help to reduce wide confidence intervals. Despite the small sample size, however, the width of the confidence intervals was narrower for all features extracted from LSTK contours compared with the other software tools for all intra-observer and inter-observer relationships. The second limitation is that the ICC (as is the case for any reliability measure) depends on the heterogeneity of the tumors of the patient population in the study [40,41]. Populations that are more heterogeneous (where the between-subject standard deviation is larger) will yield higher ICC values than more homogeneous populations. Because of these limitations, we reported confidence intervals of the ICC averages (Table 2), as well as the tumor volume range (1.15 cm³ to 10.53 cm³) for this patient population, to give an idea of the between-patient tumor heterogeneity. The third limitation is that we tested only the most popular radiomics features instead of an exhaustive list of radiomics features. One group of radiomics features that is worth mentioning is the edge sharpness features [42]. On the basis of their construction, we expect edge features to be highly correlated with the shape features tested here. For example, the shape features sphericity and compactness would be influenced by the smoothness of the tumor's boundary, with smoother boundaries yielding larger feature values and rougher boundaries yielding smaller feature values. Because both shape and edge features are calculated from the tumor boundary, we believe that edge features may exhibit feature variability due to segmentation differences similar to what we observed with shape features.

Although we showed that LSTK improves feature reliability (within and across observers), its effect on outcome modeling has not been evaluated. Radiomics features alone are not very meaningful. After feature extraction, features are often evaluated to see if they correlate with prognostic or predictive factors. An important future study would be to evaluate the effect that contouring can play in building outcome models. It has been shown in this study and other studies that semi-automatic tools improve feature reliability [13-16]; however, to the best of our knowledge, the effects of these tools on building outcome models have yet to be studied.
Also, semi-automatic tools that yield accurate segmentations and improve segmentation consistency within and across observers are not only helpful for feature reliability studies but can also help with subsequent studies that utilize tumor contours in their analysis. Examples of such studies include, but are not limited to, longitudinal radiomics studies (delta-radiomics) and longitudinal clinical studies [7,43] that assess tumor response, where contours may be generated by different observers or at different time points by a given observer.

Conclusion

Our findings showed that radiomics features computed from semi-automatically segmented volumes have better feature reproducibility and reliability than those computed from manually segmented volumes. Within semi-automatic segmentation, the tool requiring less human interaction (i.e., LSTK) also resulted in better feature reliability. Our results also showed that, with semi-automatic segmentation tools, observers without formal clinical training produced tumor segmentations comparable to those of physicians. Our findings suggest the need to develop fully automatic segmentation tools (without any user input) for radiomics studies in order to minimize the impact of contouring uncertainty and to improve feature reproducibility and repeatability for subsequent analysis such as radiomics outcome studies or longitudinal clinical studies that assess tumor response.
A Novel Colonial Ciliate Zoothamnium ignavum sp. nov. (Ciliophora, Oligohymenophorea) and Its Ectosymbiont Candidatus Navis piranensis gen. nov., sp. nov. from Shallow-Water Wood Falls

Symbioses between ciliate hosts and prokaryote or unicellular eukaryote symbionts are widespread. Here, we report on a novel ciliate species within the genus Zoothamnium Bory de St. Vincent, 1824, isolated from shallow-water sunken wood in the North Adriatic Sea (Mediterranean Sea), proposed as Zoothamnium ignavum sp. nov. We found this ciliate species to be associated with a novel genus of bacteria, here proposed as "Candidatus Navis piranensis" gen. nov., sp. nov. The descriptions of host and symbiont species are based on morphological and ultrastructural studies, the SSU rRNA sequences, and in situ hybridization with symbiont-specific probes. The host is characterized by alternate microzooids on alternate branches arising from a long, common stalk with an adhesive disc. Three different types of zooids are present: microzooids with a bulgy oral side, roundish to ellipsoid macrozooids, and terminal zooids that are ellipsoid when dividing or bulgy when undividing. The oral ciliature of the microzooids runs 1¼ turns in a clockwise direction around the peristomial disc when viewed from inside the cell and runs into the infundibulum, where it makes another ¾ turn. The ciliature consists of a paroral membrane (haplokinety), three adoral membranelles (polykineties), and one stomatogenic kinety (germinal kinety). One circular row of barren kinetosomes is present aborally (trochal band). Phylogenetic analyses placed Z. ignavum sp. nov. within clade II of the polyphyletic family Zoothamniidae (Oligohymenophorea). The ectosymbiont was found to occur in two different morphotypes, as rods with pointed ends and coccoid rods. It forms a monophyletic group with two uncultured Gammaproteobacteria within an unclassified group of Gammaproteobacteria, and is only distantly related to the ectosymbiont of the closely related peritrich Z. niveum (Hemprich and Ehrenberg, 1831) Ehrenberg, 1838.

Introduction

Morphological and phylogenetic analyses suggested that the ciliate represented a novel species of Zoothamnium and that the bacteria covering the ciliate represented a novel bacterial genus.

Ethics statement

No specific permissions were required for the listed locations as they are publicly accessible. Furthermore, we confirm that our field studies did not involve endangered or protected species.

Sample collection and fixation

Colonies of Zoothamnium ignavum sp. nov. were collected from sunken wood at a depth between 1 and 1.5 m in the North Adriatic Sea (Mediterranean Sea) at two locations in the vicinity of Piran, Slovenia: the Bernardin harbor in 2014 and the canal Sv. Jernej in 2015 (Figs 1 and 2). Colonies were frozen in liquid nitrogen and stored at −80°C for DNA extraction; or fixed and stored in 100% ethanol for DNA extraction and FISH, respectively; or fixed and preserved in a modified Trump's fixative (2.5% glutaraldehyde, 2% paraformaldehyde in sodium cacodylate 0.1 mol L−1; 1100 mOsm L−1; pH 7.2) for up to six months until further treatment for scanning electron microscopy.

Microscopic studies

Freshly collected colonies were studied with bright-field and differential interference contrast optics on a Leica DM2000 microscope. Measurements were taken from living colonies and individual cells. In order to reveal the kinetosomes and nuclei, the pyridinated silver carbonate impregnation technique after Fernández-Galiano [63] was used.
Bacteria were stained using the LIVE/DEAD BacLight Bacterial Viability Kit (Thermo Fisher Scientific). Drawings were made from photographs taken with the Leica DM2000 microscope equipped with a Leica DFC295 camera. Photographs of living colonies on wood were made with a Canon EOS 550D camera on a BMS 144 stereomicroscope.

DNA extraction, polymerase chain reaction (PCR) and sequencing

DNA was extracted from 13 individual colonies using the KAPA Express Extract Kit (KAPA Biosystems), with slight modifications of the reaction volume: the total volume was 20 μL, consisting of 2 μL Express Extract Buffer, 0.4 μL Express Extract Enzyme, and 17.6 μL dH2O. Lysis incubation was done at 75°C for 20 min, followed by an enzyme inactivation step at 95°C for 5 min. The 16S rRNA genes were amplified by PCR using the universal bacterial primers 27 forward and 1492 reverse [64]. The 18S rRNA genes were amplified using the universal eukaryotic primers 82 forward [65] and Medlin B reverse [66]. The obtained PCR products from each colony were cloned separately using the TOPO-TA cloning kit (Invitrogen) according to the manufacturer's instructions. For screening of the 16S rRNA and 18S rRNA genes, 10 to 15 clones each were picked and checked for the correct size by PCR with the M13 forward and M13 reverse primers (Invitrogen). PCR products of the correct size for the 16S rRNA gene (~1,500 nt) and for the 18S rRNA gene (~1,800 nt) were fully sequenced via Sanger sequencing and further analyzed using the program CodonCode Aligner (CodonCode Corporation; www.codoncode.com).

Host and symbiont phylogenetic analyses

The obtained 16S and 18S rRNA gene sequences were compared with the National Center for Biotechnology Information (NCBI; http://www.ncbi.nlm.nih.gov) database using BLAST [67]. For phylogenetic analyses of 18S rRNA gene sequences, all sequences available for Zoothamnium spp. plus some other Peritrichia were included (66 sequences). For phylogenetic analyses of 16S rRNA gene sequences, all BLASTn hits longer than 1,400 nt with sequence identities higher than 90% to "Ca. Navis piranensis" gen. nov., sp. nov., including the sequences of species belonging to the "Thiobios Group" as defined in a previous study [39], and four sequences belonging to the Thiotrichales were included. As outgroup, four sequences of non-Gammaproteobacteria were included. S1 Table provides accession numbers of sequences included in the phylogenetic analyses of 18S rRNA gene sequences; S2 Table provides those for the 16S rRNA gene sequences. Sequences were aligned with MAFFT using the Q-INS-i strategy, which considers the secondary structure of RNA [68], and alignments were checked manually. For phylogenetic analyses, we evaluated the optimal nucleotide substitution model based on the Akaike information criterion using MrModeltest2 [69]. The general time-reversible (GTR) model with invariable sites (I) and a Γ-correction for site-to-site rate variation was selected for all analyses. A 50% majority-rule Bayesian inference tree was constructed with MrBayes 3.2.6 [70]. The chain length was 10,000,000 generations with trees sampled every 100 generations. The first 2,500 trees were discarded as burn-in. The maximum likelihood (ML) analyses were carried out using the packages phangorn version 2.0.4 [71] and ape version 3.5 [72] in R version 3.2.2 [73].
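As an illustration of the sequence-identity comparisons used throughout this work (e.g., among cloned sequences or against database hits), a small Python sketch for computing pairwise percent identity from an alignment might look as follows; it uses only the standard library, and the file name is hypothetical:

    from itertools import combinations

    def read_fasta(path):
        """Minimal FASTA reader: returns {header: sequence}."""
        seqs, name = {}, None
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if line.startswith(">"):
                    name = line[1:]
                    seqs[name] = []
                elif name:
                    seqs[name].append(line.upper())
        return {k: "".join(v) for k, v in seqs.items()}

    def percent_identity(a, b):
        """Identity over aligned, ungapped columns of two equal-length sequences."""
        pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
        if not pairs:
            return 0.0
        return 100.0 * sum(x == y for x, y in pairs) / len(pairs)

    # aligned = read_fasta("18S_colonies_aligned.fasta")  # hypothetical file
    # for (n1, s1), (n2, s2) in combinations(aligned.items(), 2):
    #     print(n1, n2, f"{percent_identity(s1, s2):.1f}%")

Note that identity values depend on the alignment and on how gapped columns are treated, so such a sketch reproduces published percentages only under the same conventions.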
Node robustness was assessed by performing bootstrap analyses in ML and by calculating posterior probabilities in Bayesian inferences. Bootstrap support and posterior probabilities of at least 70% are indicated at the nodes of the trees. Trees were rooted by the mid-point technique.

16S rRNA symbiont-specific probe design and fluorescence in situ hybridization

The 16S rRNA bacterial gene sequences were added to the SILVA database [74] and two specific probes were designed with the ARB software package [75]. Probe specificity was checked against the ARB database and the Ribosomal Database Project using the implemented tool Probe Match [76]. Both probes showed at least one mismatch to all other 16S rRNA sequences available in the public databases. The nucleotide sequences of the newly designed probes ZIS645 and ZIS832 are available at probeBase ([77]; www.microbial-ecology.net/probebase). Colonies fixed and stored in 100% ethanol were embedded in LR White resin (London Resin Co.) and semi-thin sections (1 μm thickness) were prepared using an Ultracut E (Reichert-Jung) ultramicrotome. FISH probes were labeled on their 5' end with the fluorescent dyes Cy3 or Cy5 (Table 1). Optimal hybridization conditions for the newly designed specific probes ZIS645 and ZIS832 were determined by applying a series of formamide concentrations (0 to 35%) in the hybridization buffer [78]. Positive and negative hybridization controls were the EUBMix probe set, consisting of EUB338, EUB338II and EUB338III [79,80], targeting most Bacteria, and the probe Non338, complementary to EUB338 [81]. 4′,6-Diamidino-2-phenylindole (DAPI) was used as a counterstain. Microscopic analyses were performed with a Zeiss Axio Imager M2 epifluorescence microscope.

Scanning electron microscopy (SEM)

Thirteen colonies were fixed in a modified Trump's fixative and stored in the fixative for up to six months. A graded series of acetone was used for dehydration, 10 min per step, followed by 1:1 acetone/hexamethyldisilazane (HMDS) and 100% HMDS for 10 min each; the colonies were then air-dried. Afterwards, colonies were mounted on stubs and sputter-coated with gold/palladium. A JEOL IT300 (Germany) scanning electron microscope was used to view the colonies.

Nomenclatural Acts

The electronic edition of this article conforms to the requirements of the amended International Code of Zoological Nomenclature, and hence the new names contained herein are available under that Code from the electronic edition of this article. This published work and the nomenclatural acts it contains have been registered in ZooBank, the online registration system for the ICZN. The ZooBank LSIDs (Life Science Identifiers) can be resolved and the associated information viewed through any standard web browser by appending the LSID to the prefix http://zoobank.org/. The LSID for this publication is: urn:lsid:zoobank.org:pub:3B25B012-1ABF-4767-B2D7-0A95C26E5DF9. The electronic edition of this work was published in a journal with an ISSN, and has been archived and is available from the following digital repositories: PubMed Central, LOCKSS.

SYSTEMATICS

The ciliate classification follows Lynn [38].
Zoothamnium ignavum sp. nov.

Diagnosis. Zoothamnium species with alternately branched stalk; zooids alternate on branches; three different types of zooids: microzooids ("trophic stage"), macrozooids ("telotroch stage"), and terminal zooids; microzooids bulgy, inverted bell-shaped; macrozooids roundish to ellipsoid, located only on the most proximal part of the branches; top terminal zooid on the tip of the stalk, terminal zooids of the branches on the proximal end of each branch; undividing terminal zooids similar to microzooids in shape, dividing terminal zooids ellipsoid; macronucleus of microzooids S-shaped, showing irregular thickness; macronucleus of macrozooids extended through the whole cell, band-like with constant diameter; macronucleus of dividing terminal zooids S-shaped, regular thickness, filling almost the entire cell; in each zooid one orally located contractile vacuole present; a telotrochal band of one circular row of barren kinetosomes present aborally.

Type specimens. The holotype #5613 and nine paratypes #5614-#5622, fixed in a modified Trump's fixative (2.5% glutaraldehyde, 2% paraformaldehyde in sodium cacodylate 0.1 mol L−1, 1100 mOsm L−1; pH 7.2), were embedded in glycerol and mounted on microscope slides. Additionally, 10 paratypes #20443 were fixed in absolute ethanol. The type material was deposited at the Naturhistorisches Museum, Wien (Austria).

Gene sequence. The sequence of the 18S rRNA eukaryote gene of Zoothamnium ignavum sp. nov. was deposited in the GenBank database under accession number KX669262.

Etymology. The Latin adjective ignavus, -a, -um [m, f, n] refers to the contraction behavior of this species, as in comparison to its close relative Zoothamnium niveum it seems to be 'idle'.

Description. The colony is composed of three different types of zooids, which are connected by a common stalk: (i) microzooids ("trophic stage"), (ii) macrozooids ("telotroch stage") and (iii) terminal zooids (Figs 3 and 4). The microzooids are located along the entire branch, whereas the macrozooids are restricted to the most proximal parts of the branches. The terminal zooids are considered to be the only zooids capable of longitudinal fission. They are located on the distal end of the stalk (top terminal zooid) and produce the terminal zooids of each branch (terminal zooids of branches), which then produce the microzooids and, on the proximal part of the branches, the macrozooids. Due to the divisions, the colony can grow up to 1.8 mm in length. The number of macrozooids within each colony is variable, although colonies were often found having three to four macrozooids on one branch. The core of the stalk is a contractile spasmoneme that runs uninterrupted throughout the entire colony, through the stalk and branches into each individual cell (zooid). This spasmoneme allows a simultaneous contraction of the colony and of the oral side of each zooid. The end of the spasmoneme within the stalk splits up into bands, which bundle towards the proximal end of the stalk. Only the most basal part of the stalk and the adhesive disc lack a spasmoneme. The contraction occurs in a typical zigzag pattern and takes place rapidly, while the subsequent extension is much slower. The cilia of the oral apparatus were observed to beat only in the expanded condition of the colony. Furthermore, younger zooids at the distal end of the stalk were observed to be more active than older ones at the proximal end. The branches occur alternating on the stalk.
The stalk diameter increases from about 15 μm at the top end of the colony to about 23 μm at the location of the first and oldest branches, and decreases to about 11 μm at the basal end of the colony. Within the stalk, the spasmoneme diameter is 4.5 μm at the top end of the colony, increases to 5.5 μm at the level of the first branch, and decreases to 4 μm, where it ends at around 70% of the total stalk length. At its end, it splits up into bands and bundles towards the most proximal end of the colony. Dividing the stalk into (i) a part with branches, (ii) a branchless part containing a spasmoneme, and (iii) a branchless part lacking a spasmoneme, the relative lengths are 40%, 30%, and 30% (S1 Fig). The youngest and shortest branches are found at the top end of the colony. Throughout the colony, the distance between the branches varies between 48 and 100 μm. The average diameter of the branches is about 9.2 μm, with a corresponding spasmoneme diameter of about 2.9 μm.

On the branches, the microzooids occur alternating. The distance between microzooids varies from 18 to 34 μm. Typically, the extended microzooids have a bulgy, bell-shaped form (average length 39.4 μm, SD 3 μm; average oral width 28.8 μm, SD 3.1 μm; average aboral width 8.2 μm, SD 1.9 μm; n = 20; Fig 3). At the oral side, the microzooids are strongly asymmetric. In the retracted stage, the peristome with the peristomial disc and the single oral lip are withdrawn, giving the microzooid a more symmetrical appearance. The S-shaped macronucleus, having a variable number of constrictions, extends throughout the microzooid. A small, roundish micronucleus is found adjacent to the macronucleus. On the opposite side of the infundibulum, one contractile vacuole is present. The cytoplasm is packed with tiny, dense granules (average diameter 3 μm). The pellicula of the microzooids is plain, with a striped silver line system (width of the striae 0.2-0.4 μm). The oral ciliature consists of a paroral membrane (haplokinety), three adoral membranelles (am 1-3; polykineties), and one short stomatogenic kinety (germinal kinety). The paroral membrane lies outside the innermost adoral membranelle (am 1). Viewed from inside the cell, the paroral membrane and the adoral membranelle (am 1) run jointly 1¼ turns in a clockwise direction around the peristomial disc and run into the infundibulum, where they make another ¾ turn. There, a short stomatogenic kinety of barren kinetosomes is present outside the paroral membrane. The innermost adoral membranelle (am 1) extends to the posterior end of the infundibulum, where it is accompanied by two shorter adoral membranelles (am 2, am 3) (Figs 4 and 5). At around two-thirds of the distance from the peristomial disc, the somatic ciliature, consisting of a single irregular row of barren kinetosomes forming the telotrochal band, is found.

Similar to the microzooids, terminal zooids have a bulgy, bell-shaped form, resembling the microzooids in shape and morphological characteristics (average length 51.6 μm, SD 9.1 μm; average oral width 23.3 μm, SD 4.5 μm; average aboral width 11.3 μm, SD 3.2 μm; n = 20). However, some terminal zooids have a more ellipsoid shape (average core width 29.2 μm, SD 2.1 μm; n = 4). These are thought to be in a dividing stage, having a very large macronucleus filling up almost the whole cell body. The macrozooids are roundish to ellipsoid with a diameter between 35 and 86 μm. The macronucleus appears very thick and constant in diameter, filling up almost the whole cell. One micronucleus lies adjacent to it.
Orally, a large contractile vacuole is present. A cytopharynx does not appear to be developed, although some cytopharyngeal microtubuli are present. The pellicula of the macrozooids has bands transverse to the oral-aboral axis of the cell. The width of the striae is correlated with the size of the cell and ranges from 0.9 to 2 μm. Aborally, a telotrochal band with several circular rows of kinetosomes is present. It is found in the same position as the single circular row of kinetosomes in the microzooids. As long as the macrozooids remain attached to the colony, the telotrochal band is only partly ciliated. In all free-swimming macrozooids, the telotrochal kinetosomes are fully ciliated.

Remarks. Zoothamnium ignavum sp. nov. resembles Z. alternans Claparède and Lachmann, 1859, as redescribed from a population from Qingdao, China [44], in the shape of the colony, the branching pattern, and the size of the microzooids (Table 2). However, several characters are conspicuously different between Z. ignavum sp. nov. and Z. alternans and clearly distinguish these two species (Table 2, S1 Fig). The macronucleus in the microzooids is S-shaped in Z. ignavum sp. nov., while it is J-shaped in Z. alternans. In Z. alternans the infundibular polykineties in the microzooids perform a full turn around the infundibulum and extend posteriorly to the end of the infundibulum [44]. In contrast, they are much shorter in Z. ignavum sp. nov. and perform only a ¾ turn around the infundibulum, similar to the infundibular polykineties in Z. niveum ([82], S1 Fig). Distinguishing three parts of the stalk from top to bottom, namely 1) stalk with branches and spasmoneme, 2) stalk with spasmoneme and without branches, and 3) stalk without spasmoneme and without branches, the relative lengths in Z. ignavum sp. nov. colonies are about 40%, 30%, and 30%. In contrast, in Z. alternans they are about 80%, 10%, and 10%. Thus, the lower part of the stalk (from the adhesive disc to the lowest branch) is much shorter in Z. alternans (about 20% of the total stalk length) than in Z. ignavum sp. nov. (more than 50% of the total stalk length) (S1 Fig).

Besides Z. ignavum sp. nov. and Z. alternans, Z. niveum and Z. plumula Kahl, 1932 (syn. Z. plumosum Perejaslawzewa, 1858) also have an alternate branching pattern. However, in Z. plumula the microzooids are located regularly in pairs along the branches and macrozooids are completely absent [83-85]. In Z. niveum, the colony resembles a feather and can reach a length of up to 1.5 cm, making it by far the largest representative of this genus. In addition, the microzooids of this species are slightly larger than those of Z. ignavum sp. nov., are slender in shape, and exhibit a pronounced asymmetric lobe (Table 2). Also, the relative lengths of the stalk (60%, 35%, and 5%) differ from the proportions of Z. ignavum sp. nov. colonies (Table 2, S1 Fig). Furthermore, Z. niveum is characterized by an obligate association with the thiotrophic, ectosymbiotic bacterium "Ca. Thiobios zoothamnicoli". Due to the sulfur storage of these bacteria [33,35-37,86,87], the whole colony appears bright white, making it easily distinguishable from other Zoothamnium species, including Z. ignavum sp. nov. Z. pelagicum Du Plessis, 1891, in contrast, has no alternating but rather a pinnate pattern of branching and no adhesive disc.
This species is a planktonic ciliate and therefore easily distinguishable from other Zoothamnium species, which are found attached to various substrates or other living organisms [55,88-90].

The 18S rRNA eukaryote gene sequence and phylogenetic analyses

The 18S rRNA gene sequences of the 13 Z. ignavum sp. nov. colonies examined shared over 99.6% sequence identity, indicating that they all belonged to the same species. The obtained sequences had a total length of 1,653 nt. In all phylogenetic analyses, the Z. ignavum sp. nov. sequence falls within the class Oligohymenophorea of the phylum Ciliophora and forms a monophyletic group (clade II) with the Z. alternans populations from Qingdao (China) and the USA, Z. niveum, Z. pelagicum, and Z. plumula (Fig 6). Based on 18S rRNA gene sequence similarity, the closest relative is Z. alternans from Qingdao, with 96.7% sequence identity.

"Candidatus Navis piranensis" gen. nov., sp. nov.

Type locality. Same as for the host species Zoothamnium ignavum sp. nov.

Gene sequence. The sequence of the 16S rRNA bacterial gene of "Candidatus Navis piranensis" gen. nov., sp. nov. was deposited in the GenBank database under accession number KX669263.

Etymology. The Latin noun navis, -is [f] refers to the morphology of the symbiont, rod-shaped with pointed ends, similar to a boat. The species name refers to the location where the symbiosis was found (Piran, Slovenia) and is used as a Latin adjective, piranensis, -is, -e [m, f, n].

The 16S rRNA bacterial gene sequence

For the symbiont, the obtained 16S rRNA gene sequence had a total length of 1,460 nt. Phylogenetic analyses revealed that the ectosymbiont of Z. ignavum sp. nov. falls into the class Gammaproteobacteria, forming a cluster with two uncultured and unclassified Gammaproteobacteria isolated from environmental samples rather than with other ecto- or endosymbionts (Fig 7). The closest relative based on sequence similarity is an uncultured bacterium isolated from the Tao Dam hot spring in Thailand (92.1% sequence identity; accession number: FJ793190). Based on this 16S rRNA gene sequence similarity, the results clearly indicated that "Ca. Navis piranensis" gen. nov., sp. nov. represents a novel genus and species within a group of unclassified Gammaproteobacteria.

Fluorescence in situ hybridization (FISH)

FISH with both newly designed oligonucleotide probes ZIS645 and ZIS832 confirmed that the obtained sequence originated from the ectosymbiont of Zoothamnium ignavum sp. nov. The optimal formamide concentration in the hybridization buffer was found to be 20% for both specific probes. FISH signals from the ectosymbiont-specific probes and the general Bacteria probe set (EUBmix) were similar, indicating that no additional bacteria were present in the bacterial coat on the surface of the colonies, except for the most proximal parts of the stalk, which are apparently overgrown by various unspecific prokaryotes (data not shown). The specific ectosymbionts were found on the stalk, branches, and terminal zooids as well as on the macro- and microzooids (Fig 8). The application of probe NON338 (complementary to the bacterial probe EUB338) as a negative control yielded no detectable fluorescence signal (data not shown), demonstrating that the signals were not caused by autofluorescence or unspecific staining of the bacteria but rather by specific binding of the probes.
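A minimal sketch of the kind of in-silico specificity check described in the probe-design section, i.e., counting the smallest number of mismatches of a FISH probe against a 16S rRNA gene sequence, might look as follows in Python. The probe sequences themselves are available at probeBase and are left as placeholders here:

    def revcomp(seq):
        """Reverse complement (FISH probes bind the rRNA target strand)."""
        comp = {"A": "T", "T": "A", "G": "C", "C": "G", "U": "A", "N": "N"}
        return "".join(comp.get(b, "N") for b in reversed(seq.upper()))

    def min_mismatches(probe, target):
        """Smallest Hamming distance of the probe's binding site against
        all same-length windows of the target gene sequence."""
        site = revcomp(probe)
        k = len(site)
        return min(
            sum(a != b for a, b in zip(site, target[i:i + k]))
            for i in range(len(target) - k + 1)
        )

    # probes = {"ZIS645": "...", "ZIS832": "..."}   # sequences from probeBase
    # for name, p in probes.items():
    #     print(name, min_mismatches(p, target_16S))  # expect >= 1 for non-targets

This sliding-window check is only a rough stand-in for dedicated tools such as Probe Match used in the study, which additionally account for weighted mismatch positions and database-wide searches.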
FISH signals from the general EUBmix probe set and the ectosymbiont-specific probes were furthermore detected within the food vacuoles of several terminal zooids and microzooids (Fig 8). This indicates that Z. ignavum sp. nov. feeds on both free-living bacteria in the water column and the ectosymbiont.

Scanning electron microscopy (SEM)

The entire colony, except for the most proximal part of the stalk and the adhesive disc, was rather fragmentarily covered by symbionts (Fig 9). On the stalk and the branches, the symbiotic coat appeared to consist of a bacterial monolayer, while on the micro- and macrozooids the symbionts were mostly found overlapping each other in a multilayer. However, some microzooids also appeared to be completely aposymbiotic. Most cells were rod-shaped bacilli with pointed ends (average length 1.7 μm, SD 0.4 μm; average width 0.4 μm, SD 0.1 μm; n = 520). Occasionally, coccoid-shaped bacteria were found (average diameter 0.6 μm, SD 0.2 μm; n = 70) on the microzooids, especially on the oral side. Rod-shaped bacilli exhibited binary fission at an average length of 2.5 μm (SD 0.4 μm, n = 5), forming two equal daughter cells with an average length of 1.3 μm (SD 0.2 μm, n = 5). Nevertheless, dividing cells were rarely found throughout the colony. Apart from symbiotic bacteria matching in distribution and size those in the FISH sections, an overgrowth of various bacteria could be observed on the lower stalk and the lower branches.

Discussion

Here, we describe a novel Zoothamnium species that was found associated with epibiotic bacteria. The symbiosis was found on sunken wood at two different locations in the North Adriatic Sea. Whether the colonies also occur in fall and especially in winter at lower temperatures needs to be further investigated. During the second collection in July 2015, Zoothamnium ignavum sp. nov. was found on wood pieces co-occurring with Z. niveum and its thiotrophic gammaproteobacterium "Ca. Thiobios zoothamnicoli" [32-37]. While Z. niveum was found on the strongly degraded parts of the wood, where a strong smell of sulfide was noticeable, Z. ignavum sp. nov. was encountered on less degraded or intact parts of the wood. Occasionally, free-living white bacteria, most likely sulfide-oxidizing bacteria, could also be observed (Fig 2).

The genus Zoothamnium is characterized by colonial growth with individual cells connected by a common stalk. Furthermore, the core of the stalk is a continuous spasmoneme, leading to a contraction of the entire colony in a typical 'zigzag' pattern [38]. Besides Zoothamnium, Carchesium Ehrenberg, 1830 is a further colony-developing genus within the peritrich ciliates. However, this genus is characterized by the presence of a discontinuous spasmoneme, leading to a contraction in a helical deformation [55].

In addition to morphological studies, molecular analyses based on the 18S rRNA gene sequence were conducted. Phylogenetic analyses assigned this novel species to clade II of the family Zoothamniidae (Oligohymenophorea), with the Z. alternans population from Qingdao (China) being the closest relative (96.7% sequence similarity). Another Z. alternans population from the USA, however, is rather distantly related to the Qingdao population (96.7% sequence similarity) within clade II [61]. This may suggest that the two Z. alternans populations represent different species. Therefore, a revision of the current classification of Z. alternans with detailed morphological comparisons of populations from different geographic locations is necessary.
The 16S rRNA gene sequence analyses and microscopic studies presented in this work revealed that the symbiosis of Z. ignavum sp. nov. involves a single ectosymbiont species. The colony is rather fragmentarily covered by the symbiont, with certain zooids being fully covered by a monolayer or even a multilayer of symbionts, while others, particularly the most recently formed cells at the distal end of the colonies, are completely aposymbiotic. In the closely related peritrich Z. niveum, the ectosymbiont forms a strict monolayer covering the whole colony except for its most basal part [33]. To sustain a strict monolayer, host growth and symbiont population density must be finely coordinated to prevent overgrowth by, or loss of, the symbiont [39]. In Z. ignavum sp. nov. the bacterial layer is highly variable, indicating that the growth of host and symbiont are not well coordinated in this symbiosis.

With SEM, we observed two different morphologies of the ectosymbiont: rods with pointed ends and cocci. Coccoid-shaped symbionts were found especially on the oral side of the microzooids. Morphological polymorphism is widespread among symbiotic bacteria, e.g., the thiotrophic endosymbiont of the tubeworm Riftia pachyptila (Polychaeta), which is rod-shaped but changes by terminal differentiation into larger cocci, showing transitional stages between the two morphotypes [93,94]. Similarly, in Z. niveum coccoid rods are restricted to the oral side of the host's microzooids, while rods are found on all other parts of the host [33]. These cell form modulations are considered to be related to nutrition [95]. In the case of thiotrophic symbionts, nutrition means sulfide, oxygen and carbon dioxide for sulfide oxidation and carbon fixation. Differences in nutrient supply might also explain the different morphotypes of "Ca. Navis piranensis" gen. nov., sp. nov.
Snakes and ghosts in a parity-time-symmetric chain of dimers

We consider linearly coupled discrete nonlinear Schrödinger equations with gain and loss terms and with a cubic-quintic nonlinearity. The system models a parity-time (PT)-symmetric coupler composed of a chain of dimers. In particular, we study site-centered and bond-centered spatially localized solutions and show that each solution has a symmetric and an antisymmetric configuration between the arms. When a parameter is varied, the resulting bifurcation diagrams for the existence of standing localized solutions have a snaking behaviour. The critical gain/loss coefficient above which the PT-symmetry is broken corresponds to the condition when the bifurcation diagrams of symmetric and antisymmetric states merge. Past the symmetry breaking, the system no longer has time-independent states. Nevertheless, equilibrium solutions can be analytically continued by defining a dual equation that leads to so-called ghost states associated with growth or decay, which are also identified and examined here. We show that ghost localized states also exhibit snaking bifurcation diagrams. We analyse the width of the snaking region and provide asymptotic approximations in the limits of strong and weak coupling, where good agreement is obtained.

I. INTRODUCTION

Many nonlinear dynamical systems, such as spatially extended nonlinear dissipative systems [1], vertical-cavity semiconductor optical amplifiers [2], nematic liquid crystal layers with spatially modulated input beam [3], and magnetic fluids [4], exhibit spatially localized patterns and a snaking structure in their bifurcation diagrams in the plane of the length of the localized solution against a control parameter. This phenomenon of snaking is referred to as homoclinic snaking [5-7], where the spatial structure of such a localized state departs from and then returns to a uniform state. By definition, it has infinitely many turning points (i.e., saddle-node or saddle-centre bifurcations), which form the boundaries of the snaking region. Such a region is also called the pinning region, since the fronts at either end 'pin' or 'lock' to the structure within the localized state. An infinite number of localized states exist in the entire interval of the pinning region.

In most previous works devoted to localized states and snaking in continuous systems, the Swift-Hohenberg equation has been widely used as a model for pattern formation, since it is the simplest model equation that illustrates the pinning effect [6,8-11]. In general, the effect cannot be described by the conventional multiple-scale asymptotic method, due to the fact that the length of the pinning region is exponentially small in a parameter which is related to the pattern amplitude [7]. Recently the Swift-Hohenberg equation with quadratic-cubic nonlinearities and with cubic-quintic nonlinearities has been successfully studied with the help of exponential asymptotics [12-14]. The calculations, however, are rather cumbersome and require two fitting parameters. Alternatively, variational methods to obtain scaling laws for the structure of the snaking region have been proposed and demonstrated, for example, in the system modelled by the cubic-quintic Swift-Hohenberg equation [15].
Like spatially continuous systems, several discrete systems can display the snaking behavior, with the locking effect, however, being attributed to the imposed lattice. Examples include the discrete bistable nonlinear Schrödinger equation [16-18], which leads to a subcritical Allen-Cahn equation [19], optical cavity solitons [20,21], discrete systems with a weakly broken pitchfork bifurcation [22], and patterns on networks appearing due to Turing instabilities [23]. Pinning regions in lattices were studied analytically by Matthews and Susanto [24] and Dean et al. [25].

This paper is devoted to a detailed numerical and analytical study of homoclinic snaking in a parity-time (PT)-symmetric system. The physical problem is a chain of dimers that has two arms, with each arm described by a discrete nonlinear Schrödinger equation with gain or loss and with cubic-quintic nonlinearity. While the concept of PT-symmetry has gained a lot of attention in the last decade [26,27], to the best of our knowledge, the effect of the gain and loss terms in a PT-symmetric chain of dimers on the snaking regime has not been explored yet.

A system of equations is PT-symmetric when it is invariant with respect to the combined parity (P) and time-reversal (T) transformation [28-30]. In the context of Schrödinger Hamiltonians with a complex potential V(x), PT-symmetry requires the potential to satisfy the condition V(x) = V*(−x), where * is the complex conjugation, i.e., V(x) has a symmetric real part and an antisymmetric imaginary part. Such symmetry is of great interest as it forms a particular class of widely studied non-Hermitian Hamiltonians in quantum mechanics [31], which do not satisfy the standard postulate that the Hamiltonian operator be Dirac Hermitian and yet can have real eigenvalues up to a critical value of the complex potential parameter. Above that value, the symmetry is broken, i.e., the eigenvalues of the Hamiltonian become complex-valued. Among PT-symmetric systems, dimers are the most basic and important. The concept was first demonstrated experimentally on dimers, which are composed of two coupled optical waveguides [32,33] (see also [34] and references therein). In particular, when nonlinear dimers are put in arrays, where elements with gain and loss are linearly coupled to elements of the same type belonging to adjacent dimers, one obtains a distinctive feature in the form of solutions localized in space that exist as continuous families of their energy parameter [35]. The nonlinear localized solutions and their stability have been studied in [36-38] analytically and numerically (see also the references therein for localized solutions in systems of coupled nonlinear Schrödinger equations).
The continuum limit of the set-up studied herein was considered in [39,40]. In optical media, such nonlinearity can be obtained from a saturation of the Kerr response, which with the increase of the intensity will introduce a self-defocusing quintic term in the expansion of the refractive index [41,42]. In the continuous case [39,40], it was shown that the presence of gain/loss terms only influences the stability of the localized solutions. Here, it will be shown that the discrete set-up admits homoclinic snaking. We show that the critical gain/loss parameter corresponding to the 'broken PT-symmetry' phase is related to the merging of two snaking regions. Beyond the critical point, the system does not have time-independent states. Nevertheless, their continuation can be analytically obtained by defining a dual system. Here, we also identify and examine localized solutions of the dual equations, where interestingly we obtain that they also preserve the snaking region, including its width.

The report is outlined as follows. The PT-symmetric chain of dimers with cubic-quintic nonlinearity is discussed in Section II. In Section III, we study spatially uniform solutions and their stability. We obtain that symmetric states can become unstable due to pitchfork bifurcations. The emanating solutions are asymmetric. In the presence of the gain/loss parameter, such solutions are lost. By setting a complex-valued propagation parameter, they can be recovered. However, they are not actual solutions of the governing equations and are referred to as ghost states, which are discussed in Section IV. We study localized solutions, their stability, and the observation of homoclinic snaking numerically in Section V. When the PT-symmetry is broken, we can also define ghost states as continuations of the uniform and localized solutions discussed in the previous sections. We analyse these states in Section VI. Section VII is on the asymptotic expression of the snaking width that is obtained in the limits of small and large coupling, which is then compared with computational results, where good agreement is obtained. In the strong coupling region, we use a variational method following [24], but with a different approach yielding a simple expression of the width that was not obtainable in [24]. In the weak coupling case, we introduce a one-active-site approximation following [43]. Conclusions are given in Section VIII.

II. MATHEMATICAL MODEL AND STABILITY OF SOLUTIONS

The governing equations describing PT-symmetric chains of dimers are of the form

u̇_n = i[CΔ₂u_n + (|u_n|² − Q|u_n|⁴)u_n − ωu_n + v_n] + γu_n,
v̇_n = i[CΔ₂v_n + (|v_n|² − Q|v_n|⁴)v_n − ωv_n + u_n] − γv_n.    (1)

The derivative with respect to the evolution variable (i.e., the propagation distance, if we consider their application in fiber optics) is denoted by the overdot, u_n = u_n(t), v_n = v_n(t) are complex-valued wave functions at site n ∈ Z with the propagation constant ω ∈ R, C > 0 is the constant coefficient of the horizontal linear coupling (coupling constant between two adjacent sites), Δ₂u_n = (u_{n+1} − 2u_n + u_{n−1}) is the discrete Laplacian term in one spatial dimension, and the gain and loss acting on the complex variables u_n, v_n are represented by the parameter γ > 0. The cubic nonlinearity coefficient has been scaled to +1, while Q is the coefficient of the quintic nonlinearity. System (1) is PT-symmetric because it is invariant with respect to the action of the parity P and time-reversal T operators given by

P: (u_n, v_n) → (v_n, u_n),   T: t → −t, i → −i.    (2)

Next, we consider the equations for standing wave solutions of Eqs. (1), obtained from setting u̇_n = v̇_n = 0 and substituting u_n = A_n, v_n = B_n e^{iφ} into (1),

0 = i[CΔ₂A_n + (A_n² − QA_n⁴)A_n − ωA_n + B_n e^{iφ}] + γA_n,
0 = i[CΔ₂B_n + (B_n² − QB_n⁴)B_n − ωB_n + A_n e^{−iφ}] − γB_n.    (3)

Here, A_n, B_n, φ ∈ R.
We can assume that u_n is real-valued because of the phase invariance of the governing equations (1). Splitting the real and imaginary parts of the equations and simplifying them (which forces B_n = A_n) will yield

CΔ₂A_n − ΩA_n + A_n³ − QA_n⁵ = 0,    (4)

which is also known as the discrete Allen-Cahn equation, where Ω = ω ∓ √(1 − γ²), with φ = arcsin γ for the minus sign and φ = π − arcsin γ for the plus sign, which corresponds to the so-called symmetric and antisymmetric configuration between the arms, respectively. Note that (4) will have no real solution when γ > 1. This is the broken region of PT-symmetry.

The linear stability of a standing wave solution is determined as follows. Introducing the ansatz u_n = A_n + εũ_n(t), v_n = e^{iφ}[B_n + εṽ_n(t)], |ε| ≪ 1, and substituting it into Eq. (1) will yield from the terms at O(ε) the linear eigenvalue problem (5) for the perturbations (ũ_n, ṽ_n) ∝ e^{λt} and the eigenvalue λ. Generally the spectrum will consist of two types, i.e., the continuous spectrum and the discrete spectrum (eigenvalues). A solution is unstable when there exists λ with Re(λ) > 0. However, if λ is in the spectrum, so are −λ and ±λ* [36]. A solution is therefore (linearly) stable only when Re(λ) = 0 for all λ, i.e., it is neutrally stable. Nonlinear stability may be probed numerically by evolving a perturbed solution in Eqs. (1) for a long while; analytically it is still an open problem due to the absence of a Hamiltonian structure of the system (see, e.g., [44,45] for nonlinear stability analysis of a similar system, but with cross-dispersion and a different nonlinearity, which becomes possible because it has a Hamiltonian form via a cross-gradient symplectic structure).

Numerically we solve the steady-state equations (3) using a Newton-Raphson method in Matlab. A pseudo-arclength continuation scheme is implemented to do numerical continuation past a turning point. To model the infinite domain, we use a periodic boundary condition with a large number of lattice sites. The typical value we use is N = 100, but larger values were used as well to guarantee that the results are independent of the number of sites. After a solution is obtained, its stability is determined numerically by solving Eqs. (5) using a standard eigenvalue problem solver.

[Fig. 1 caption (fragment): Here, Q = 0.1 and ω = ω_r + iω_i. Branches 'a' and 's' are the antisymmetric and symmetric configurations between the arms, respectively, obtained from (6), i.e., u = A. Along the branches, ω_i = 0. Branch 'as' corresponds to asymmetric solutions, obtained from the nullclines of (1) with ω_i given by (11). Stable solutions are shown as solid lines and unstable ones as dashed lines. The insets show the linear spectrum of the indicated solutions.]

III. UNIFORM SOLUTIONS

Equation (4) has uniform solutions A_n ≡ A that are given by

A = 0 or A² = (1 ± √(1 − 4QΩ))/(2Q).    (6)

Besides γ being less than 1, the nonzero uniform solutions (6) also require 4QΩ < 1 to exist. Under competing cubic-quintic nonlinearities, i.e., Q > 0, we will have two branches of non-zero uniform solutions. The stability of the uniform solutions (6) can be determined by computing their continuous spectrum. Introducing the plane-wave ansatz (u_{r,n}, u_{i,n}, v_{r,n}, v_{i,n}) = (k̂, l̂, p̂, q̂)e^{ikn}, k ∈ R, and substituting it into (5) will yield a dispersion relation. The continuous spectrum of the equilibrium is then obtained by setting k = 0 and k = π in the dispersion equation.

The dispersion relation of the trivial equilibrium is λ = ±i[(ω − K) ± √(1 − γ²)], with K = 4C(cos k − 1), from which we obtain the continuous spectrum on the imaginary axis with the spectrum boundaries λ = ±i[ω ± √(1 − γ²)] at k = 0 and λ = ±i[ω + 8C ± √(1 − γ²)] at k = π. The equilibrium is therefore stable for 0 < γ² < 1 and unstable otherwise. Continuous spectra of the non-zero solution can be obtained similarly. When C = 0, bifurcation diagrams of the nonzero solutions are shown in Fig. 1(a,b) for two values of γ.
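The steady-state computations described above can be illustrated with a short sketch. The study used a Newton-Raphson scheme in Matlab with pseudo-arclength continuation; the following is a simplified Python version of the core Newton iteration for Eq. (4) with periodic boundaries, where the parameter values and the initial seed are illustrative only:

    import numpy as np

    def residual(A, C, Omega, Q):
        """F(A) = C*Lap(A) - Omega*A + A^3 - Q*A^5, periodic boundaries (Eq. (4))."""
        lap = np.roll(A, 1) - 2 * A + np.roll(A, -1)
        return C * lap - Omega * A + A**3 - Q * A**5

    def jacobian(A, C, Omega, Q):
        N = A.size
        J = C * (np.roll(np.eye(N), 1, axis=1) - 2 * np.eye(N)
                 + np.roll(np.eye(N), -1, axis=1))
        return J + np.diag(-Omega + 3 * A**2 - 5 * Q * A**4)

    def newton(A, C, Omega, Q, tol=1e-12, maxit=50):
        for _ in range(maxit):
            F = residual(A, C, Omega, Q)
            if np.linalg.norm(F) < tol:
                break
            A = A - np.linalg.solve(jacobian(A, C, Omega, Q), F)
        return A

    # Site-centred seed with an 11-site plateau on N = 100 sites (illustrative):
    N, C, Q = 100, 0.1, 0.1
    Omega = 3 / (16 * Q)                       # near the Maxwell point of Sec. VII
    Aplus = np.sqrt((1 + np.sqrt(1 - 4 * Q * Omega)) / (2 * Q))
    A0 = np.zeros(N); A0[45:56] = Aplus
    A = newton(A0, C, Omega, Q)

Plain Newton iteration converges only within a snaking branch; tracing the branch around its turning points requires the pseudo-arclength augmentation mentioned in the text.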
In this case, the chain is uncoupled and one obtains the dimer, which was studied in [34] (see also references therein) for Q = 0 and in [46] for nonzero Q. Consider antisymmetric solutions along branch 'a'. We obtain that the low-intensity solution, i.e., the lower branch, is stable, while the high-intensity one is not. Branch 's' generally corresponds to stable symmetric solutions, but there is a small portion of unstable branch due to pitchfork (i.e., spontaneous symmetry breaking) bifurcations. Solutions emanating from the branching points are asymmetric, i.e., |u_n| ≠ |v_n|, and denoted by branch 'as' in Fig. 1. However, they cannot be obtained from Eq. (6) because they do not satisfy the parity (P) symmetry. These will be discussed in Section IV below.

In panel (b), we consider γ = 0.8. As the gain/loss parameter increases towards the critical value γ = 1, branches 's' and 'a' become closer to each other. At the critical value, the two branches coincide, i.e., we obtain a turning point. This is due to the fact that, when studying time-independent solutions, the governing equation (1) reduces nicely to the discrete Allen-Cahn equation (4), which is rather independent of γ, and at the same time Ω for the symmetric and antisymmetric solutions becomes equal at γ = 1. Panel (c) shows the effect of the coupling constant, which clearly only affects the stability of the equilibrium. Now we obtain that branch 'a' and the lower part of branch 's' have become unstable.

IV. ASYMMETRIC SOLUTIONS AS GHOST STATES

In the classical cubic dimer, i.e., Eqs. (1) with C = Q = γ = 0, symmetric solutions are known to be unstable for ω > 2 due to a pitchfork (symmetry-breaking) bifurcation (see, e.g., [47-52] and references therein). At the bifurcation point, an asymmetric state pair emanates. It is a matter of a standard perturbation expansion to show that for 0 < Q ≪ 1, asymmetric solutions will bifurcate at ω = 2 + O(Q). This is in agreement with the result in Fig. 1(a). The bifurcation diagram of the asymmetric states denoted as branch 'as' can simply be obtained from (1) with γ = 0. However, in the cubic-quintic dimer they only exist in a finite interval. Even for larger Q, they may not exist at all, i.e., the symmetric states can be stable in their entire existence region.

When γ ≠ 0, symmetric solutions still can become unstable, but the bifurcating asymmetric ones will no longer exist [34,52]. This observation was first reported in [53]. Cartarius et al. [54] provided an analytic continuation of the asymmetric solutions that emerge as ghost states, namely, solutions of the steady-state problem with a complex (instead of real) valued parameter ω, which hence are not true solutions of the original system (1). Rodrigues et al. [52] used the proposal to obtain continuations of asymmetric states in a nonlinear Schrödinger equation with PT-symmetric double-well potentials and in the two-mode reduction in the form of a cubic dimer, i.e., (1) with C = Q = 0.
To obtain asymmetric states of our problem, consider again the time-independent equations of Eqs. (1) and their conjugates, where the propagation constant ω is now complex-valued, i.e., ω = ω_r + iω_i,

0 = i[CΔ₂u_n + (|u_n|² − Q|u_n|⁴)u_n − ωu_n + v_n] + γu_n,    (10a)
0 = i[CΔ₂v_n + (|v_n|² − Q|v_n|⁴)v_n − ωv_n + u_n] − γv_n,    (10b)

together with their complex conjugates, (10c) and (10d). The imaginary part ω_i needs to be determined from a consistency equation below. Multiplying Eqs. (10a)-(10d) with u_n*, v_n*, −u_n, and −v_n, respectively, summing up the infinite-dimensional vectors over n, and adding the resulting equations will lead to the equation for ω_i:

ω_i = γ Σ_n (|u_n|² − |v_n|²) / Σ_n (|u_n|² + |v_n|²).    (11)

It is clear that ω_i will vanish either when |u_n| = |v_n|, i.e., for symmetric and antisymmetric solutions, or when γ = 0. In Fig. 1(b,c) the branch of asymmetric solutions is obtained from the time-independent equations of (1) with (11). We have also determined the states' stability by solving the corresponding linear eigenvalue problem, even though they are not actual solutions of the original system (1).

V. LOCALISED SOLUTIONS

We consider discrete solitons of Eqs. (1) satisfying the localisation conditions u_n, v_n → 0 as n → ±∞. It is known that there are two fundamental localized solutions existing for any coupling constant C, i.e., an intersite (bond-centred) and an onsite (site-centred) discrete mode, with an even and odd number of high-intensity sites, respectively.

Fixing the coupling C and varying the propagation constant ω, we depict the bifurcation diagrams of the two types of discrete modes in Fig. 2. For each symmetric and antisymmetric configuration between u_n and v_n, there are two branches that correspond to the site-centred and bond-centred solutions.

In addition to symmetric solutions, there are also solutions that are asymmetric between the arms or asymmetric in the same arm. The former type of solutions corresponds to that giving the 'as' branches in Fig. 1, while the latter one constitutes the 'ladders' connecting the snaking branches of onsite and intersite modes in Fig. 2. Both types emanate from pitchfork bifurcations (see, e.g., [55] for relevant discussions on symmetry-breaking (pitchfork) bifurcations in generalized Schrödinger equations).

In Fig. 3 we plot profiles of several localized solutions and their spectrum in the complex plane. Unstable solutions are due to spectra with nonzero real part, which belong to the red dashed segments in Fig. 2.

Bifurcation diagrams in Fig. 2 form a snaking structure. Even though such structures have been reported before [16-19,24], the effect of the gain/loss parameter that yields different stability behaviours along the curves is novel. The region between the boundaries of the snakes is the pinning region. Comparing the two panels of Fig. 2, in agreement with the continuous case reported in [39,40], the gain/loss parameter tends to destabilise localized solutions, as shown by the dashed curve that tends to expand in the second panel.

Up in the snaking structures (represented by, e.g., point 4 in Fig. 2(a)), the stability of the branches is similar to that in Fig. 1. This is because the corresponding localized solutions have a long plateau of the nonzero uniform solution, i.e., the stability is mainly determined by the continuous spectrum of the nonzero uniform solution.

[Fig. 2 caption (fragment): Again, ω = ω_r + iω_i. We plot the norm ||u|| = (Σ_n u_n²)^{1/2} for varying ω_r. There are two pairs of principal snaking branches. Each pair is connected by 'ladders' of asymmetric solutions along the same arms. Except along the closed curve of asymmetric states between the arms (which looks like branch 'as' in Fig. 1), ω_i = 0. Solutions at the indicated points in panel (a) are plotted in Fig. 3.]
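For a given pair of discrete fields, the self-consistency value ω_i in Eq. (11) above is straightforward to evaluate; a short Python sketch (array values illustrative) is:

    import numpy as np

    def omega_i(u, v, gamma):
        """Self-consistency value of Im(omega) for a ghost state, Eq. (11)."""
        num = np.sum(np.abs(u)**2 - np.abs(v)**2)
        den = np.sum(np.abs(u)**2 + np.abs(v)**2)
        return gamma * num / den

    # A symmetric configuration (|u_n| = |v_n|) gives omega_i = 0, as stated above:
    u = np.array([0.1, 0.8, 0.1])
    v = u * np.exp(1j * 0.3)
    print(omega_i(u, v, gamma=0.5))  # -> 0.0

In a continuation code, this value would be recomputed at every Newton step so that ω = ω_r + iω_i remains consistent with the current field profiles.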
We show in Fig. 4 the typical time evolution of unstable solutions in Fig. 3. While Fig. 4(a) indicates a clear blow-up of the wave field with gain, which is common in PT-systems [34], Fig. 4(b) shows intensity oscillations. The fact that the oscillations persist for quite a while is interesting in itself, as PT-symmetric dimers with cubic nonlinearity are known to have oscillations that blow up [34]. Similar oscillations in the continuum limit C → ∞ were also reported in [39], where the bounded oscillations were attributed to the quintic nonlinearity that may have suppressed the blow-up. However, whether the long-lived oscillation is a genuine cycle is left for future work.

In the spatially uniform case, the branches of symmetric and antisymmetric solutions between the arms move towards each other as γ increases and merge at γ = 1. The same holds for the localized solutions, i.e., the two snaking bifurcation diagrams in Fig. 2 become closer with the increase of γ and coincide at the critical value.

VI. GHOST STATES IN THE PT-BROKEN PHASE

In the broken PT-symmetric region (γ > 1), the trivial state u_n = v_n = 0 is unstable. The typical time evolution is that u_n, as the field with gain, blows up, while v_n, which experiences loss, decays. The PT-phase transition (γ = 1) is characterized by the merger of symmetric and antisymmetric solutions in a fold bifurcation. A follow-up question is what becomes of them past the critical point. It was shown in [54] that it is possible to provide an analytic continuation for the original model in a nontrivial way. The continuation system is constructed by introducing a 'dual' system that 'forces' the solutions to mimic the PT-symmetry of the potential in the original system. In the broken PT-symmetric phase γ > 1, we therefore consider the dual system, Eqs. (12). The parameter ω is again complex-valued, where the imaginary part must satisfy a self-consistency equation. Doing the same calculation, we also obtain Eq. (11). Because of the complex ω, Eqs. (12) are not PT-symmetric and their solutions are also ghost states. The relation between (1) and (12) is that both systems yield the same symmetric and antisymmetric solutions at the phase transition γ = 1.

We have computed the continuation of the branches in Fig. 1 past the PT-phase transition point. We present bifurcation diagrams of the ghost states in Fig. 5. We have also computed their stability from the corresponding linear eigenvalue problem of the dual system (12). There are two uniform states that are mirror images of each other. Solutions with high intensity in |u_n| are stable (in the sense of (12)), while the other ones with low |u_n| are unstable, i.e., stable solutions correspond to ω_i > 0 (cf. Eq. (11)). In the sense of the original system (1), the stable solutions will lead to growth in time as the parameter ω_i is positive. On the other hand, the ones with negative ω_i decay in time and shall not be observed in direct numerical simulations. This thus means that ghost states of (12) may be interpreted as self-similar solutions of Eqs. (1).

In Figs. 6 and 7 we plot bifurcation diagrams of localized ghost states and their profiles and stability computed through the 'dual' equations (12) and (11). We observe that the homoclinic snaking persists and that solutions with larger |u_n| are stable (in the sense of the dual equations (12)). It is important to note that numerically the width of the pinning region of localized ghost states also does not depend on γ.
VII. ANALYTICAL APPROXIMATIONS

In this section, we study the width of the snaking region in Fig. 2 as a function of, e.g., the coupling constant C, and derive an asymptotic approximation of the width. The approach is split between two different regimes, i.e. small and large coupling. Because the width of the pinning region is independent of γ, our result is also applicable to the snaking region of localized ghost states in Fig. 6.

A. Small coupling case

When C is small, as we follow the snaking structure upward (see Fig. 2), at leading order there is only one site that is 'active', with the remaining sites being either at 0 or at the plateau of a nonzero uniform solution. Such behaviours were observed and exploited in many ways before, see, e.g., [58,59], but not in the context of snaking. From (6), we assume that up in the snaking diagram only the nodes in Eq. (13) are involved in the dynamics. Note that we only use the '+' sign for the uniform solution forming the plateau, which is the upper branch in Fig. 1. Substituting (13) into the time-independent discrete equation (4) yields the one-active-site approximation, Eq. (14). In general, (14) will have five roots. The roots relevant to our study are the positive ones. As Ω varies, two of the roots will collide in a saddle-centre bifurcation. This condition corresponds to the boundaries of the snaking region. The collision occurs when a local maximum or minimum of the function f(a) crosses the horizontal axis. The critical points of f(a) are given by f'(a) = 0, i.e., Eq. (16). Substituting (16) into (14) and solving the resulting equation for Ω asymptotically gives Eq. (17). The snaking width W is then given approximately by the difference between the two functions.

B. Large coupling case

Using Fourier series, we can write the summation Σ_{k=1}^{∞} cos(2πkx), which converges to the Dirac comb non-uniformly. Taking only the first harmonic, (18) then becomes Eq. (21), which can be expected to approximate (4) in the large-coupling case C ≫ 1 [60]. Without the periodic potential 2cos(2πx), Eq. (21) has a front solution, given by Eq. (22), when Ω is at the Maxwell point (23). Following [15,24], we approximate the solutions along the snaking structure by the ansatz (24), where φ is the phase shift distinguishing the two branches, i.e. φ = 0, X/(2x) for the on-site and intersite solutions, respectively, and L is the length of the plateau, which is presently an unknown variable. Using the standard variational argument, requiring (24) to be an optimal solution of (21) implies that L must satisfy Eq. (25) (see, e.g., [62]), where Ω is set to be near the Maxwell point (23), i.e. Ω = 3/(16Q) + ΔΩ. Equation (25) can be simplified at leading order for L ≫ 1 to Eq. (26). The width of the snaking region is then simply given by Eq. (27), which is exponentially small.
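The saddle-centre condition used above for the small-coupling boundaries, i.e. a local extremum of f(a; Ω) crossing zero as two roots collide, is straightforward to prototype. The sketch below substitutes a hypothetical quintic for the one-active-site polynomial (14), whose actual coefficients are not reproduced here; only the detection technique is genuine.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Hypothetical stand-in for the one-active-site polynomial (14):
# f(a; Omega) = Q a^5 - a^3 + 0.2 a + Omega (coefficients in increasing degree).
def f_coeffs(Omega, Q=0.1):
    return np.array([Omega, 0.2, 0.0, -1.0, 0.0, Q])

def extremal_values(Omega):
    """Values of f at its positive critical points, where f'(a) = 0."""
    c = f_coeffs(Omega)
    crit = P.polyroots(P.polyder(c))
    crit = crit[np.isreal(crit)].real
    crit = crit[crit > 0]
    return P.polyval(crit, c)

# Scan Omega; a sign change in any extremal value of f marks a saddle-centre
# bifurcation, i.e. a boundary of the snaking region. With these toy
# coefficients, one boundary falls inside the scanned window.
omegas = np.linspace(-1.0, 1.0, 4001)
prev = extremal_values(omegas[0])
boundaries = []
for Om in omegas[1:]:
    cur = extremal_values(Om)
    if prev.size == cur.size and np.any(np.sign(cur) != np.sign(prev)):
        boundaries.append(round(Om, 4))
    prev = cur
print("saddle-centre boundaries near Omega =", boundaries)
```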
As pointed out by one of the referees, the exponential factor in the approximation (27)-(28) is correct, which can be justified in the following way, explained in detail in [63]. The continuous problem (21) involves two scales: a fast scale variable, x, and a slow one, X. These two scales are infinitely separated in the limit C → ∞, i.e., X/x → 0. The front solution (22) contains singularities in the complex plane, the closest to the real axis being X_0 = iπ, which then leads to the exponential part of the pinning. This was used in [63] to derive the dependence of the pinning range on the front orientation in general 2D pattern-forming systems. As for the algebraic factor, C^{3/2}, only a proper exponential asymptotic treatment (see, e.g., [64] for an exponential asymptotic analysis of a nonlinear differential-difference equation similar to (1)) can establish its exponent. The exponential and algebraic scales in (28) are, however, in agreement with those obtained using exponential asymptotics in [25] for homoclinic snaking on a planar lattice, i.e., of two-dimensional fronts that are localized in one direction only.

We show in Fig. 8(a) the width of the snaking region computed numerically and our approximations (17) and (27). One can see good agreement between them. Note that in Fig. 8(b) we curve fit the numerical result with Eq. (29), where we obtain α = 405.03 and β = 1.71; the fit is indistinguishable from the numerical curve. There is a slight difference in the algebraic scale, which may be attributed to the step of taking only the first harmonic in Eq. (21).

VIII. CONCLUSION

Spatially uniform and localized solutions (site-centered and bond-centered modes) and their bifurcation diagrams, which form a snaking structure in a parity-time (PT)-symmetric coupler composed of a chain of dimers, have been discussed. It has been shown that the gain/loss coefficient does not influence the width of the snakes. In the broken PT-symmetry region γ > 1, we have also analysed the continuations of the time-independent solutions, which are called ghost states. Interestingly, localized ghost states have also been observed to exhibit homoclinic snaking in their bifurcation diagrams, with the same width of the pinning region as that of the localized solutions with 0 < γ < 1. Asymptotic approximations of the width of the snaking region have been derived in two different limits, i.e. strong and weak coupling between the dimers. The approximations have been compared with numerical results, where good agreement is obtained.

FIG. 3. Localised solutions on the bifurcation diagram shown in Fig. 2 and their spectrum in the complex plane. Panels (a,b): bond-centred solutions. Panel (c): asymmetric solution, which has an intermediate shape between onsite and intersite profiles. Panel (d): site-centred solution.

FIG. 7. Plot of the localized ghost states on the bifurcation diagram shown in Fig. 6 and their spectrum in the complex plane. Panels (a,c,e): unstable solutions. Panels (b,d,f): stable solutions. |u_n| and |v_n| are represented by circle and star points, respectively.

FIG. 8. The width of the snaking region as a function of the coupling constant C for Q = 0.1. The solid curve is obtained numerically, and the dashed and dash-dotted lines are the approximations (17) for 0 ≤ C ≪ 1 and (27) for C ≫ 1, respectively. In panel (b), we also plot the curve fit (29), which is indistinguishable from the numerical curve.
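For completeness, a fit of the kind reported for Fig. 8(b) can be set up in a few lines. The model form below, W(C) = α C^β exp(−c√C), is an assumption consistent with the algebraic-times-exponential structure of (28) but not necessarily the exact expression (29); the data arrays are placeholders for the numerically computed widths.

```python
import numpy as np
from scipy.optimize import curve_fit

def width_model(C, a, b, c):
    # assumed form: algebraic prefactor times an exponentially small factor
    return a * C**b * np.exp(-c * np.sqrt(C))

# Placeholders: C_data / W_data should be the numerically computed
# snaking widths (the solid curve in Fig. 8).
C_data = np.linspace(1.0, 10.0, 20)
W_data = width_model(C_data, 405.0, 1.7, 2.0)

popt, _ = curve_fit(width_model, C_data, W_data, p0=(100.0, 1.5, 1.0))
print("fitted (alpha, beta, c):", popt)
```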
2018-05-31T14:00:22.000Z
2018-05-31T00:00:00.000
{ "year": 2018, "sha1": "dd3f54e0d19ddae3eca1b79492c35327dc7608d7", "oa_license": null, "oa_url": "http://repository.essex.ac.uk/22330/1/1805.12478.pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dd3f54e0d19ddae3eca1b79492c35327dc7608d7", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Medicine", "Physics" ] }
5192870
pes2o/s2orc
v3-fos-license
Integration of an On-Axis General Sun-Tracking Formula in the Algorithm of an Open-Loop Sun-Tracking System

A novel on-axis general sun-tracking formula has been integrated into the algorithm of an open-loop sun-tracking system in order to track the sun accurately and cost effectively. Sun-tracking errors due to installation defects of the 25 m² prototype solar concentrator have been analyzed from solar images recorded with a CCD camera. With the recorded data, misaligned angles from the ideal azimuth-elevation axes have been determined and corrected by a straightforward change of the parameter values in the general formula of the tracking algorithm, improving the tracking accuracy to 2.99 mrad, which falls below the encoder resolution limit of 4.13 mrad.

Introduction

The sun-tracking system plays an important role in the development of solar energy applications, especially for high solar concentration systems that directly convert the solar energy into thermal or electrical energy [1]. Over the past two decades, various types of sun-tracking mechanisms have been proposed to enhance the solar energy harnessing performance of solar collectors. Although the degree of accuracy required depends on the specific characteristics of the solar concentrating system being analyzed, generally the higher the system concentration, the higher the tracking accuracy that will be needed [2]. To achieve good tracking accuracy, sun-tracking systems normally employ sensors to feed back error signals to the control system in order to continuously receive maximum solar irradiation on the receiver. The two common types of sensors used for this purpose are closed-loop sensors and open-loop sensors. Firstly, a closed-loop sensor, such as a CCD camera or photo-detector, is used to sense the position of the solar image on the receiver, and a feedback signal is sent to the controller if the solar image moves away from the receiver. Sun-tracking systems that employ closed-loop sensors are known as closed-loop sun trackers. Over the past 20 years or so, the closed-loop tracking approach has traditionally been used in the active sun-tracking scheme [3]-[6]. For example, Kribus et al. designed a closed-loop controller for heliostats which improved the pointing error of the solar image to 0.1 mrad, with the aid of four CCD cameras set on the target [7]. However, this method is rather expensive and complicated because it requires four CCD cameras and four radiometers to be placed on the target. The solar images captured by the CCD cameras must then be analysed by a computer to generate the control correction feedback for correcting tracking errors. In 2006, a sun-tracking error monitoring system that uses a monolithic optoelectronic sensor for a concentrator photovoltaic system was presented by Luque-Heredia et al. According to the results from the case study, this monitoring system achieved a tracking accuracy of better than 0.1°. The caveat is that this tracking system requires fully clear-sky days to operate, as the incident light has to be above a certain threshold to ensure that the minimum required resolution is met [8]. That same year, Aiuchi et al. developed a heliostat with an equatorial mount and a closed-loop photo-sensor control system. The experimental results showed that the tracking error of the heliostat was estimated to be 2 mrad during fine weather [9].
Nevertheless, this tracking method is not popular and can only be used for sun-trackers with an equatorial mount configuration, which is not a common tracker mechanical structure and is complicated because the centre of gravity of the solar collector is far off the pedestal. Furthermore, Chen et al. presented studies of digital and analogue sun sensors based on the optical vernier and optical nonlinear compensation measuring principles, respectively. The proposed digital and analogue sun sensors have accuracies of 0.02° and 0.2°, correspondingly, for entire fields of view of ±64° and ±62°, respectively [10,11]. The major disadvantage of these sensors is that the field of view, which is in the range of about ±64° for both the elevation and azimuth directions, is rather small compared to the dynamic range of motion of a practical sun-tracker, which is about ±70° and ±140° for the elevation and azimuth directions, respectively. Besides that, these sensors have only been implemented at the testing stage, to measure the position of the sun precisely, and have not yet been applied in any closed-loop sun-tracking system so far. Although a closed-loop sun-tracking system can produce much better tracking accuracy, this type of system will lose its feedback signal when the sensor is shaded or when the sun is blocked by clouds. As an alternative method to overcome this limitation of closed-loop sun-trackers, open-loop sun-trackers were introduced, using open-loop sensors that do not require any solar image as feedback. The open-loop sensor ensures that the solar collector is positioned at pre-calculated angles, which are obtained from a special formula or algorithm according to the date, time and geographical information. In 2004, Abdallah et al. designed a two-axis sun-tracking system operated by an open-loop control system. A programmable logic controller (PLC) was used to calculate the solar vector and to control the sun tracker so that it follows the sun's trajectory [12]. In addition, Shanmugam et al. presented a computer program written in Visual Basic that is capable of determining the sun's position and thus driving a paraboloidal dish concentrator (PDS) along the East-West axis or North-South axis for receiving maximum solar radiation [13]. In general, both sun-tracking approaches mentioned above have strengths and drawbacks, so some hybrid sun-tracking systems have been developed to include both open-loop and closed-loop sensors. Early in the 21st century, Nuwayhid et al. adopted both the open-loop and closed-loop tracking schemes in a parabolic concentrator attached to a polar tracking system. In the open-loop scheme, a computer acts as the controller to calculate two rotational angles, i.e., the solar declination and hour angles, and to drive the concentrator along the declination and polar axes. In the closed-loop scheme, nine light-dependent resistors (LDR) are arranged in an array of a circular-shaped "iris" to facilitate sun-tracking with a high degree of accuracy [14]. In 2006, Luque-Heredia et al. proposed a novel PI-based hybrid sun-tracking algorithm for a concentrator photovoltaic system. In their design, the system can act in both open-loop and closed-loop modes. A mathematical model that involves a time and geographical coordinates function as well as a set of disturbances provides a feedforward open-loop estimation of the sun's position.
To determine the sun's position with high precision, a feedback loop was introduced according to an error correction routine, derived from an estimation of the error of the sun equations caused by external disturbances at the present stage, based on its historical path [15]. One year later, Rubio et al. fabricated and evaluated a new control strategy for a photovoltaic (PV) solar tracker that operated in two tracking modes, i.e., normal tracking mode and search mode. The normal tracking mode combines an open-loop tracking mode based on solar movement models and a closed-loop tracking mode corresponding to the electro-optical controller, to obtain a sun-tracking error that is smaller than a specified boundary value and small enough for the solar radiation to produce electrical energy. Search mode is started when the sun-tracking error is large or no electrical energy is produced; the solar tracker then moves according to a square spiral pattern in the azimuth-elevation plane to sense the sun's position until the tracking error is small enough [16].

As a matter of fact, the tracking accuracy requirement is very much reliant on the design and application of the sun-tracker. The longer the distance between the solar concentrator and the receiver, the higher the tracking accuracy required, because the solar image becomes more sensitive to the movement of the solar concentrator. As a result, a heliostat or off-axis sun-tracker normally requires much higher tracking accuracy than an on-axis sun-tracker, due to the fact that the distance between the heliostat and the target is normally much longer, especially for a central receiver system configuration. In this context, a tracking accuracy in the range of a few milliradians (mrad) is in fact sufficient for an on-axis sun-tracker to maintain good performance when highly concentrated sunlight is involved [17]. Despite the many existing on-axis sun-tracking methods, the designs available to achieve a good tracking accuracy of a few mrad are complicated and expensive. It is worth noting that conventional on-axis sun-tracking systems normally adopt two common configurations, azimuth-elevation and tilt-roll (polar tracking), limited by the available basic mathematical formulas of sun-tracking systems. For an azimuth-elevation tracking system, the sun-tracking axes must be strictly aligned with both the zenith and real north. For a tilt-roll tracking system, the sun-tracking axes must be strictly aligned with both the latitude angle and real north. The major cause of sun-tracking errors is how well the aforementioned alignment can be done, and any installation or fabrication defects will result in low tracking accuracy. According to our previous study of the azimuth-elevation tracking system, a 0.4° misalignment of the azimuth shaft relative to the zenith axis can cause tracking errors ranging from 6.45 to 6.52 mrad [18]. In practice, most solar power plants all over the world use a large solar collector area to save on manufacturing cost, and this has indirectly made the alignment work of the sun-tracking axes much more difficult. In this case, the alignment of the tracking axes involves an extensive amount of heavy-duty mechanical and civil work due to the requirement for thick shafts to support the movement of a large solar collector, which normally has a total collection area in the range of several tens of square meters to nearly a hundred square meters.
Under such tough conditions, very precise alignment is a great challenge to the manufacturer, because a slight misalignment will result in significant sun-tracking errors. To overcome this problem, an unprecedented on-axis general sun-tracking formula has been proposed to allow the sun-tracker to track the sun about any two arbitrarily orientated tracking axes [18]. In this paper, we introduce a novel sun-tracking system that integrates the general formula into the sun-tracking algorithm so that the sun can be tracked accurately and cost effectively, even if there is some misalignment from the ideal azimuth-elevation or tilt-roll configuration. In the new tracking system, any misalignment or defect can be rectified without the need for any drastic or labor-intensive modifications to either the hardware or software components of the tracking system. In other words, even though the alignment of the azimuth-elevation axes with respect to the zenith axis and real north is not properly done during installation, the new sun-tracking algorithm can still accommodate the misalignment by changing the values of φ, λ and ζ in the tracking program. The advantage of the new tracking algorithm is that it can simplify the fabrication and installation work of solar collectors, with higher tolerance in terms of the tracking-axis alignment. This strategy allows great savings in cost, time and effort by omitting the more complicated solutions proposed by other researchers, such as adding a closed-loop feedback controller or a flexible and complex mechanical structure to level out the sun-tracking error [1,19]. To demonstrate the use of the general formula for improving sun-tracking accuracy, a prototype solar concentrator has been constructed and tested on the campus of Universiti Tunku Abdul Rahman (UTAR).

Methodology of Using the General Formula to Improve Sun-Tracking Accuracy

The derivation of the general formula for an on-axis sun-tracking system has been presented in our previous paper [18]. According to the general formula, the sun-tracking accuracy of the system is highly reliant on the precision of the input parameters of the sun-tracking algorithm: the latitude angle (Φ), hour angle (ω) and declination angle (δ), as well as the three orientation angles of the tracking axes of the solar concentrator, i.e., φ, λ and ζ. Among these values, the local latitude, Φ, and longitude of the sun-tracking system can be determined accurately with the latest technology, such as a global positioning system (GPS). On the other hand, ω and δ are both local-time-dependent parameters (please refer to the Appendix for details). These variables can be computed accurately with input from a precise clock that is synchronized with an Internet time server. As for the three orientation angles φ, λ and ζ, their precision is very much reliant on the care paid during the on-site installation of the solar collector, the alignment of the tracking axes and the mechanical fabrication.
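The two time-dependent inputs can be computed directly from standard solar geometry; a minimal sketch follows, using the common Cooper approximation for δ, the hour-angle definition from the Appendix, and, for reference, the textbook azimuth-elevation angles for the ideal case φ = λ = ζ = 0 (the general formula for misaligned axes from [18] is not reproduced here).

```python
import math

def declination_deg(N):
    """Solar declination for day number N (Cooper's approximation)."""
    return 23.45 * math.sin(math.radians(360.0 * (284 + N) / 365.0))

def hour_angle_deg(t_solar):
    """Hour angle: zero at solar noon, +15 degrees per hour (see Appendix)."""
    return 15.0 * (t_solar - 12.0)

def sun_elevation_azimuth(lat_deg, N, t_solar):
    """Ideal (perfectly aligned) azimuth-elevation angles from standard
    solar geometry; Equation (3) of the paper generalises this."""
    lat = math.radians(lat_deg)
    dec = math.radians(declination_deg(N))
    ha = math.radians(hour_angle_deg(t_solar))
    sin_alpha = math.sin(dec) * math.sin(lat) + math.cos(dec) * math.cos(lat) * math.cos(ha)
    alpha = math.asin(sin_alpha)
    cos_beta = (math.sin(dec) - math.sin(alpha) * math.sin(lat)) / (math.cos(alpha) * math.cos(lat))
    beta = math.acos(max(-1.0, min(1.0, cos_beta)))  # azimuth measured from north
    if ha > 0:                                       # afternoon: sun west of north
        beta = 2.0 * math.pi - beta
    return math.degrees(alpha), math.degrees(beta)

# Example: the UTAR site (latitude 3.22 deg) on 13 January (N = 13) at solar noon.
print(sun_elevation_azimuth(3.22, 13, 12.0))
```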
The following mathematical derivation attempts to obtain analytical solutions for the three orientation angles based on the daily sun-tracking errors induced by the misalignment of the sun-tracking axes. From our previous study [18], the unit vector of the sun, [S'], relative to the solar collector can be obtained from a multiplication of four successive coordinate transformation matrices, as given in Equation (1), where α is the elevation angle, β is the azimuth angle, ω is the hour angle, δ is the declination angle, Φ is the latitude at which the solar collector is located, and φ, λ and ζ are the three orientation angles of the two orthogonal driving axes of the solar collector. From Equation (1) we obtain Equation (2), which can be further resolved into Equation (3), expressing the sun-tracking angles through products of sines and cosines of ω, δ, Φ, φ, λ and ζ. The time dependency of ω and δ can be seen from Equation (3); therefore, the instantaneous sun-tracking angles of the collector vary only with the angles ω and δ. Given three different local times LCT1, LCT2 and LCT3 on the same day, the corresponding three hour angles ω1, ω2 and ω3, as well as the three declination angles δ1, δ2 and δ3, result in three elevation angles α1, α2 and α3 and three azimuth angles β1, β2 and β3 accordingly, as expressed in Equation (4), where the angles Φ, φ, λ and ζ are constants with respect to the local time. In practice, we can measure the sun-tracking angles, i.e., (α1, α2, α3) and (β1, β2, β3), during sun-tracking at three different local times via a recorded solar image of the target using a CCD camera.

Open-Loop Sun-Tracking System Design

To test the aforementioned methodology, a prototype of an on-axis solar concentrator with a total reflective area of 25 m² has been constructed on the campus of UTAR, Kuala Lumpur (located at latitude 3.22° and longitude 101.73°; see Figure 2). The prototype consists of 120 sets of mirrors that are arranged into a hexagonal array, and the target is placed at the focal point, at a distance of 4.5 m from the centre of the solar concentrator frame. This solar concentrator is designed to operate on the most common two-axis tracking system, the azimuth-elevation tracking system. The drive mechanism for the solar concentrator consists of stepper motors and associated gears. Two stepper motors, with 0.72° full steps, are coupled to the elevation and azimuth shafts with a gear ratio of 4,400, yielding an overall resolution of 1.64 × 10⁻⁴ °/step.

Figure 2. A prototype of the on-axis solar concentrator constructed at Universiti Tunku Abdul Rahman (UTAR).

A Windows-based control program has been developed by integrating the general formula into the open-loop sun-tracking algorithm. In the control algorithm, the sun-tracking angles, i.e., the azimuth (β) and elevation (α) angles, are first computed according to the latitude (Φ), longitude, day number (N), local time (LCT), and the three orientation angles φ, λ and ζ. The control program then generates digital pulses that are sent to the stepper motors to drive the concentrator to the pre-calculated angles along the azimuth and elevation movements in sequence. Each time, the control program activates only one of the two stepper motors through a relay switch. The executed control program of the sun-tracking system is shown in Figure 3. An open-loop control system is preferable for the prototype solar concentrator so as to keep the design of the sun-tracker simple and cost effective.
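As a small illustration of the drive arithmetic quoted above (0.72° full steps geared down by 4,400), converting a commanded change of angle into motor pulses is a one-line computation; the helper below is a hypothetical sketch, not part of the authors' control program.

```python
# Drive figures quoted for the prototype: 0.72 deg full-step motors geared
# down by 4,400, i.e. 0.72 / 4400 = 1.64e-4 deg per step.
FULL_STEP_DEG = 0.72
GEAR_RATIO = 4400
DEG_PER_STEP = FULL_STEP_DEG / GEAR_RATIO  # ~1.64e-4 deg/step

def steps_for(delta_angle_deg):
    """Number of pulses (and direction) needed to slew by delta_angle_deg."""
    steps = round(delta_angle_deg / DEG_PER_STEP)
    return abs(steps), ("CW" if steps >= 0 else "CCW")

# Example: advancing the azimuth by 0.25 deg requires about 1528 pulses.
print(steps_for(0.25))
```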
In our design, open-loop sensors, namely 12-bit absolute optical encoders with a precision of 2,048 counts per revolution, are attached to the shafts along the azimuth and elevation axes of the concentrator to monitor the turning angles and to send feedback signals to the computer if there is any abrupt change in the encoder reading [see the inset of Figure 4(b)]. Therefore, the sensors not only ensure that the instantaneous azimuth and elevation angles match the calculated values from the general formula, but also eliminate any tracking errors due to mechanical backlash, accumulated error, wind effects and other disturbances to the solar concentrator. With the optical encoders, any discrepancy between the calculated angles and the real-time angles of the solar concentrator can be detected, whereby the drive mechanism is activated to move the solar concentrator to the correct position. The block diagram and schematic diagram for the complete design of the open-loop control system of the prototype are shown in Figure 4. The estimated total electrical energy produced by the prototype solar concentrator and the total energy consumption of the sun-tracking system have also been calculated. Taking into account the total mirror area of 25 m², an optical efficiency of 85%, and a conversion efficiency from solar energy to electrical energy of 30% for a direct solar irradiation of 800 W/m², we obtain a rated generated output power of about 5.1 kW. Table 1 shows the energy consumption of 1.26 kWh/day for the prototype, which includes the tracking motors, motor driver, encoders and computer; this corresponds to less than 3.5% of the rated generated output energy. Among all these components, the computer consumes the most power (more than 100 W), and in the future a microcontroller could be used to replace the computer so as to reduce the energy consumption.

Performance Study and Experimental Results

Before the performance of the sun-tracking system was tested, 119 of the 120 sets of mirrors were covered with black plastic, leaving uncovered only the mirror located nearest to the centre of the concentrator frame, so that the tracking accuracy could be analysed by observing the movement of a single solar image at the target. To avoid sun-tracking errors due to a wrong estimation of the prototype's geographical location, a GPS was used to determine the latitude (Φ) and longitude of the solar concentrator. Initially, we assume that the alignment of the solar concentrator is perfectly done relative to real north and the zenith by setting the three orientation angles to φ = λ = ζ = 0° in the control program. To study the performance of the sun-tracking system on 13 January 2009, a CCD camera was employed to capture the solar image cast on the target every 30 minutes from 10 a.m. to 5 p.m. local time. A CCD camera with 640 × 480 pixel resolution was connected to a computer via a PCI video card for real-time transmission and recording of the solar image. Figure 5 illustrates some of the recorded solar images at different local times. According to the recorded results shown in Figure 6, the recorded tracking errors, ranging from 12.12 to 17.54 mrad throughout the day, confirm that the solar concentrator is misaligned relative to the zenith and real north. To rectify the sun-tracking errors due to the imperfect alignment of the solar concentrator during installation, we have to determine the three misaligned angles, i.e., φ, λ and ζ, and then insert these values into the edit boxes provided by the control program, as shown in Figure 3.
Thus, the computational program using the methodology described in Figure 1 was executed to compute the three new orientation angles of the prototype based on the data captured on 13 January 2009. The actual sun-tracking angles, i.e., α and β, can be determined from the solar image position relative to the central point of the target. Three sets of sun-tracking angles at three different local times from the previous data were used as the input values to the computational program for simulating the three unknown parameters φ, λ and ζ. The simulated results are φ = −0.1°, λ = 0°, and ζ = −0.5°. To substantiate the simulated results, these values were then used in the next sun-tracking session, performed on 16 January 2009 from 10 a.m. to 5 p.m. With the new orientation angles, the sun-tracking performance of the prototype was successfully improved to an accuracy of 2.99 mrad, as shown in Figure 7. This result has reached the limit of sun-tracking accuracy set by the resolution of the optical encoder, which corresponds to 4.13 mrad, unless higher-resolution optical encoders are used as sensors. Figure 8 shows the recorded solar images at the target for different local times ranging from 10 a.m. to 5 p.m. on 16 January 2009. In the experimental results, even though the misalignment on the azimuth axis is only in the range of 0.5°, the resulting sun-tracking error is significant, especially for applications in high concentration solar collectors and in particular for dense-array concentrator photovoltaic systems [17]. Since then, the prototype has been run for a period of more than six months to validate the sun-tracking results.

Conclusions

With the simulated parameters φ = −0.1°, λ = 0° and ζ = −0.5°, the sun-tracking performance of the prototype has been improved to a maximum pointing error of 2.99 mrad, which falls below the encoder resolution of 4.13 mrad. As a result, the general sun-tracking formula is confirmed to be capable of rectifying the installation error of the solar concentrator with a significant improvement in the tracking accuracy. In fact, there are many solutions for improving the tracking accuracy, such as adding a closed-loop feedback system to the controller [1], designing a flexible two-degree-of-freedom mechanical platform for fine adjustment of the azimuth shaft [19], etc. Nevertheless, all these solutions require more complicated sun-tracker engineering designs, which also make the whole system more costly. Instead of using a complicated sun-tracking method, the integration of an on-axis general sun-tracking formula into the open-loop sun-tracking system is a clever method of obtaining reasonably high precision in sun-tracking with a simple and cost-effective design. This approach can significantly improve the performance and reduce the cost of solar energy collectors, especially for high concentration systems.

Appendix

The hour angle expresses the time of day with respect to solar noon: it is the angle between the plane of the meridian containing the observer and the meridian that touches the earth-sun line. It is zero at solar noon and increases by 15° every hour: ω = 15° × (t_s − 12), where t_s is the solar time in hours. Solar time is a 24-hour clock with 12:00 as the exact time when the sun is at the highest point in the sky. The concept of solar time is to predict the direction of the sun's rays relative to a point on the earth. Solar time is location (longitude) dependent.
It is generally different from local clock time (LCT), which is defined by political time zones. The conversion between solar time and local clock time requires knowledge of the location, the day of the year, and the standards to which local clocks are set. The conversion between solar time t_s and local clock time (LCT) (in 24-hour rather than AM/PM format) takes the form t_s = LCT + EOT/60 − LC − D, where EOT is the equation of time in minutes, LC is a longitude correction, and D is daylight saving time. Daylight saving time was initiated in the spring of 1918 to "save fuel and promote other economies in a country at war" [20]. According to this concept, the standard time is advanced by 1 hour, usually from 2:00 a.m. on the last Sunday in April until 2:00 a.m. on the last Sunday in October.
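A compact sketch of this conversion is given below. It assumes the usual textbook form t_s = LCT + EOT/60 − LC − D (with LC and D in hours) and one common empirical series for EOT; both are stand-ins, since the exact expressions used by the authors are not reproduced in this extract.

```python
import math

def equation_of_time_min(N):
    """One common empirical approximation of the equation of time (minutes)."""
    B = math.radians(360.0 * (N - 81) / 364.0)
    return 9.87 * math.sin(2 * B) - 7.53 * math.cos(B) - 1.5 * math.sin(B)

def solar_time(lct_hours, N, longitude_deg, std_meridian_deg, dst_hours=0.0):
    """Solar time from local clock time: t_s = LCT + EOT/60 - LC - D."""
    eot = equation_of_time_min(N) / 60.0              # hours
    lc = (std_meridian_deg - longitude_deg) / 15.0    # longitude correction, hours
    return lct_hours + eot - lc - dst_hours

# Example: the Kuala Lumpur site (longitude 101.73 deg E; UTC+8, i.e. a
# 120 deg E standard meridian) on 13 January (N = 13) at 12:00 clock time.
print(solar_time(12.0, 13, 101.73, 120.0))
```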
2014-10-01T00:00:00.000Z
2009-09-30T00:00:00.000
{ "year": 2009, "sha1": "5746ac7e10a61635f83b59f8db97fbda9a6aab36", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1424-8220/9/10/7849/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5746ac7e10a61635f83b59f8db97fbda9a6aab36", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Engineering", "Computer Science", "Medicine" ] }
220665632
pes2o/s2orc
v3-fos-license
Learning Person Re-identification Models from Videos with Weak Supervision

Most person re-identification methods, being supervised techniques, suffer from the burden of a massive annotation requirement. Unsupervised methods overcome this need for labeled data, but perform poorly compared to the supervised alternatives. In order to cope with this issue, we introduce the problem of learning person re-identification models from videos with weak supervision. The weak nature of the supervision arises from the requirement of video-level labels, i.e. the person identities who appear in the video, in contrast to the more precise frame-level annotations. Towards this goal, we propose a multiple instance attention learning framework for person re-identification using such video-level labels. Specifically, we first cast the video person re-identification task into a multiple instance learning setting, in which person images in a video are collected into a bag. The relations between videos with similar labels can be utilized to identify persons; on top of that, we introduce a co-person attention mechanism which mines the similarity correlations between videos with person identities in common. The attention weights are obtained based on all person images instead of person tracklets in a video, making our learned model less affected by noisy annotations. Extensive experiments demonstrate the superiority of the proposed method over the related methods on two weakly labeled person re-identification datasets.

I. INTRODUCTION

Person re-identification (re-id) is a cross-camera instance retrieval problem which aims at searching for persons across multiple non-overlapping cameras [1], [2], [3], [4], [5], [6], [7], [8]. This problem has attracted extensive research, but most of the existing works focus on supervised learning approaches [2], [9], [10], [1], [11]. While these techniques are extremely effective, they require a substantial amount of annotations, which becomes infeasible to obtain for large camera networks. Aiming to reduce this huge requirement of labeled data, unsupervised methods have drawn a great deal of attention [12], [13], [14], [15], [16], [17]. However, the performance of these methods is significantly weaker compared to supervised alternatives, as the absence of labels makes it extremely challenging to learn a generalizable model. (Figure 1(d) demonstrates some semi-weakly labeled samples used in [5], in which strongly labeled tracklets, one for each identity, are required in addition to the weakly labeled data.) To bridge this gap in performance, some recent works have focused on the broad area of learning with limited labels. This includes settings such as the one-shot, active learning and intra-camera labeling scenarios. The one-shot setting [18], [19], [20], [21] assumes a single labeled tracklet for each identity along with a large pool of unlabeled tracklets, the active learning strategy [22], [23], [24] tries to select the most informative instances for annotation, and the intra-camera setting [25], [26] works with labels which are provided only for tracklets within an individual camera view. All of these methods assume smaller proportions of labeling in contrast to the fully supervised setting, but they assume strong labeling in the form of identity labels, similar to the supervised scenario. In this paper, we focus on the problem of learning with weak labels: labels which are obtained at a higher level of abstraction, at a much lower cost compared to strong labels.
In the context of video person re-id, weak labels correspond to video-level labels instead of the more specific labels for each image/tracklet within a video. To illustrate this further, consider Figure 1, which shows some video clips annotated with video-level labels, such as video 1 with {A, B, C, D, E}. This indicates that Persons A, B, C, D and E appear in this clip. By using pedestrian detection and tracking algorithms [27], [28], [29], we can obtain the person images (tracklets) for this video clip, but can make no direct correspondence between each image (tracklet) and identity due to the weak nature of our labels. Specifically, we group all person images obtained in one video clip into a bag and tag it with the video label, as shown in Figure 1(c). On the contrary, strong supervision requires identity labels for each image (tracklet) in a video clip, and thus annotation is a more tedious procedure compared to our setting. Thus, in weakly labeled person re-id data, we are given bags, with each such bag containing all person images in a video along with the video's label; our goal is to train a person re-id model using these bags which can perform retrieval during test time at two different levels of granularity. The first level of granularity, which we define as Coarse-Grained Re-id, involves retrieving the videos (bags) that a given target person appears in. The second level entails finding the exact tracklets with the same identity as the target person among all obtained gallery tracklets; this is defined as Fine-Grained Re-id. Moreover, we also consider a more practical scenario where the weak labels are not reliable: the annotators may not tag the video clip accurately.

Fig. 2. A brief illustration of our proposed multiple instance attention learning framework for video person re-id with weak supervision. For each video, we group all person images obtained by pedestrian detection and tracking algorithms into a bag and use it as the input of our framework. The bags are passed through a backbone CNN to extract features for each person image. Furthermore, a fully connected (FC) layer and an identity projection layer are used to obtain identity-wise activations. On top of that, the MIL loss based on the k-max-mean-pooling strategy is calculated for each video. For a pair of videos (i, j) with common person identities, we compute the CPAL loss by using the high and low attention regions for the common identity. Finally, the model is optimized by jointly minimizing the two loss functions.

In order to achieve this goal, we propose a multiple instance attention learning framework for person re-id which utilizes pairwise bag similarity constraints via a novel co-person attention mechanism. Specifically, we first cast the video person re-id task into a multiple instance learning (MIL) problem, a general idea used to solve weakly-supervised problems [5], [30]; in this paper, a novel k-max-mean-pooling strategy is used to obtain a probability mass function over all person identities for each bag, and the cross-entropy between the estimated distribution and the ground-truth identity labels for each bag is calculated to optimize our model. The MIL considers each bag in isolation and does not consider the correlations between bags. We address this by introducing the Co-Person Attention Loss (CPAL), which is based on the motivation that a pair of bags having at least one person identity (e.g.
Person A) in common should have similar features for the images which correspond to that identity (A). Also, the features from one bag corresponding to A should be different from the features of the other bag (of the pair) not corresponding to A. We jointly minimize these two complementary loss functions to learn our multiple instance attention learning framework for video person re-id, as shown in Figure 2. To the best of our knowledge, this is the first work in video person re-id which solely utilizes the concept of weak supervision. A recent work [5] presents a weakly supervised framework to learn re-id models from videos. However, it requires strong labels, one for each identity, in addition to the weak labels, resulting in a semi-weak supervision setting. In contrast, our setting is much more practical, forgoing the need for any strong supervision. A more detailed discussion on this matter is presented in Section IV-E, where we empirically evaluate the dependence of [5] on the strong labels and demonstrate the superior performance of our framework.

Main contributions. The contributions of our work are as follows:
• We introduce the problem of learning a re-id model from videos with weakly labeled data and propose a multiple instance attention learning framework to address this task.
• By exploiting the underlying characteristics of weakly labeled person re-id data, we present a new co-person attention mechanism to utilize the similarity relationships between videos with common person identities.
• We conduct extensive experiments on two weakly labeled datasets and demonstrate the superiority of our method on coarse and fine-grained person re-id tasks. We also validate that the proposed method is promising even when the weak labels are not reliable.

II. RELATED WORKS

Existing person re-id works can be summarized into three categories, depending on the level of supervision: learning from strongly labeled data (supervised and semi-supervised), learning from unlabeled data (unsupervised) and learning from weakly labeled data (weakly supervised). This section briefly reviews some person re-id works related to this work.

Learning from strongly labeled data. Most studies of person re-id are supervised learning-based methods and require fully labeled data [31], [10], [2], [9], [11], [32], [33], i.e., the identity labels of all the images/tracklets from multiple cross-view cameras. These fully supervised methods have led to impressive progress in the field of re-id; however, it is impractical to annotate very large-scale surveillance videos due to the dramatically increasing annotation cost. To reduce the annotation cost, some recent works have focused on the broad area of learning with limited labels, such as the one-shot settings [18], [19], [20], [21], the active learning strategy [22], [23], [24] and the intra-camera labeling scenarios [25], [26]. All of these methods assume smaller proportions of labeling in contrast to the fully supervised setting, but they assume strong labeling in the form of identity labels, similar to the supervised scenario.

Learning from unlabeled data. Researchers have developed unsupervised learning-based person re-id models [12], [13], [14], [15], [16], [17] that do not require any person identity information. Most of these methods follow a similar principle: alternately assigning pseudo labels to unlabeled data with high confidence and updating the model using these pseudo-labeled data.
It is easy to adapt this procedure to the large-scale person re-id task, since the unlabeled data can be captured automatically by camera networks. However, most of these approaches perform worse than supervised alternatives due to the lack of efficient supervision.

Learning from weakly labeled data. The problem of learning from weakly labeled data has been addressed in several computer vision tasks, including object detection [34], [35], [30], segmentation [36], [37], text and video moment retrieval [38], activity classification and localization [39], [40], [41], video captioning [42] and summarization [43], [44]. Three weakly supervised person re-id models have been proposed. Wang et al. introduced a differentiable graphical model [26] to capture the dependencies among all images in a bag and generate a reliable pseudo label for each person image. Yu et al. introduced weakly supervised feature drift regularization [6], which employs state information as weak supervision to iteratively refine pseudo labels for improving the feature invariance against distractive states. Meng et al. proposed a cross-view multiple instance multiple label learning method [5] that exploits similar instances within a bag for intra-bag alignment and mines potentially matched instances between bags. However, our weak labeling setting is more practical than these three works for video person re-id. First, we do not require any strongly labeled tracklets or state information for model training. Second, we consider a scenario in which the weak labels of the training data are not reliable. Our task of learning person re-id models from videos with weak supervision is also related to the problem of person search [45], [46], [47], whose objective is to simultaneously localize and recognize a person from raw images. The difference lies in the annotation requirement for training: the person search methods assume large amounts of manually annotated bounding boxes for model training. Thus, these approaches utilize strong supervision, in contrast to our weak supervision.

III. METHODOLOGY

In this section, we present our proposed multiple instance attention learning framework for video person re-id. We first present an identity projection layer used to obtain the identity-wise activations for the input person images in one bag. Thereafter, two learning tasks, multiple instance learning and the co-person attention mechanism, are introduced and jointly optimized to learn our model. The overview of our proposed method is shown in Figure 2; it may be noted that only the video-level labels of the training data are required for model training. Before going into the details of our multiple instance attention learning framework, let us first compare the annotation cost between weakly labeled and strongly labeled video person re-id data, and then define the notations and problem statement formally.

A. Annotation Cost

We focus on person re-id in videos, where labels can be collected in two ways:
• Perfect tracklets: The annotators label each person in each video frame with an identity and associate persons with the same identity (DukeMTMC-VideoReID [18]). The tracklets are then perfect and each tracklet contains one person identity. However, these labels are more time-consuming to obtain than ours, which requires only video-level labels.
• Imperfect tracklets: The tracklets are obtained automatically by pedestrian detection and tracking algorithms [27], [28], [29] (MARS [32]).
They are bound to have errors of different kinds, like wrong associations, missed detections, etc. Thus, human intervention is required to segregate individual tracklets into the person identities.

Our method uses only video-level annotations, reducing the labeling effort in both the above cases. We put all person images in a video into a bag and label the bag with the video-level labels obtained from the annotators. We develop our algorithm without any notion of tracklets, working instead with a bag of images. Further, we do not use any intra-tracklet loss, as one tracklet can contain multiple persons in the case of imperfect tracking. Table II and Table III show that our method is robust against the missing annotation scenario, where a person might be present in the video but not labeled by the annotators. Hence, our framework has remarkable real-world value, where intra-camera tracking will almost surely be performed by automated software and will be prone to errors.

Next, we present an approximate analysis of the reduction in annotation cost obtained by utilizing weak supervision. Assume that the cost to label a person in an image is b. Also, let the average number of persons per image be p and the average number of frames per video be f. The total number of videos from all cameras is n. So, the annotation cost for strong supervision is fpnb. Now, let the cost for labeling a video with video-level labels be b', where b' << b. Thus, the annotation cost for weak supervision amounts to nb'. This results in an improvement in the annotation efficiency of fpb/b' × 100%.

B. Problem Statement

Assume that we have C known identities that appear in N video clips. In our weak labeling setting, each video clip is conceptualized as a bag of person images detected in the video, and assigned a label vector indicating which identities appear in the bag. Therefore, the training set can be denoted as {(B_i, y_i)}_{i=1}^{N}, where B_i = {I_i^1, ..., I_i^{n_i}} is the ith bag (video clip) containing n_i person images. Using some feature extractor, we can obtain the corresponding feature representations for these images, which we stack in the form of a feature matrix X_i. y_i ∈ {0, 1}^C is the label vector of bag i containing C identity labels, in which y_i^c = 1 if the cth identity is tagged for X_i (person c appears in video i) and y_i^c = 0 otherwise. For the testing probe set, each query is composed of a set of detected images with the same person identity (a person tracklet) in a video clip. We define two different settings for the testing gallery set as follows:
• Coarse-grained person re-id tries to retrieve the videos that a given target person appears in. The testing gallery set has the same settings as the training set: each testing gallery sample is a bag with one or multiple persons.
• Fine-grained person re-id aims at finding the exact tracklets with the same identity as the target person among all obtained tracklets. It has the same goal as general video person re-id: each gallery sample is a tracklet with a single person identity.

C. Multiple Instance Attention Learning for Person Re-id

1) Identity Space Projection: In our work, the feature representation X_i is used to identify the person identities in bag i. We project X_i to the identity space (R^C, where C is the number of person identities in the training set). Thereafter, the identity-wise activations for bag i can be represented as

W_i = f(X_i; θ),   (1)

where f(·; θ) is a C-dimensional fully connected layer and W_i ∈ R^{C×n_i} is an identity-wise activation matrix. These identity-wise activations represent the likelihood that each person image in a bag is predicted to belong to a certain identity.
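A minimal PyTorch sketch of this projection step is given below; the feature dimension d = 2048 matches the feature extractor described later, while the number of identities (625) is a hypothetical placeholder.

```python
import torch
import torch.nn as nn

class IdentityProjection(nn.Module):
    """Sketch of Eq. (1): map per-image features of a bag (n_i x d) to
    identity-wise activations W_i (C x n_i) with one FC layer."""
    def __init__(self, d=2048, num_ids=625):  # num_ids is hypothetical
        super().__init__()
        self.fc = nn.Linear(d, num_ids)

    def forward(self, X):                      # X: (n_i, d) image features
        return self.fc(X).t()                  # W_i: (C, n_i)

proj = IdentityProjection()
X_i = torch.randn(100, 2048)                   # a bag of 100 person images
W_i = proj(X_i)
print(W_i.shape)                               # torch.Size([625, 100])
```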
2) Multiple Instance Learning: In weakly labeled person re-id data, each bag contains multiple instances of person images with person identities, so the video person re-id task can be turned into a multiple instance learning problem. In MIL, the estimated label distribution for each bag is expected to eventually approximate the ground-truth weak label (video label); thus, we need to represent each bag using a single confidence score per identity. In our case, for a given bag, we compute the activation score corresponding to a particular identity as the average of the top k largest activations for that identity (the k-max-mean-pooling strategy). For example, the identity-j confidence score for bag i can be represented as

s_i^j = (1/k) Σ topk(W_i[j, :]),   (2)

where topk is an operation that selects the top k largest activations for a particular identity, and W_i[j, :] denotes the activation scores corresponding to identity j for all person images in bag i. Thereafter, a softmax function is applied to compute the probability mass function (pmf) over all the identities for bag i:

ŷ_i^j = exp(s_i^j) / Σ_{c=1}^{C} exp(s_i^c).   (3)

The MIL loss is the cross-entropy between the predicted pmf ŷ_i and the normalized ground truth ȳ_i, which can then be represented as

L_MIL = −(1/N_b) Σ_{i=1}^{N_b} Σ_{j=1}^{C} ȳ_i^j log ŷ_i^j,

where ȳ_i is the normalized ground-truth label vector and N_b is the size of the training batch. The MIL considers each bag in isolation. Next, we present a co-person attention mechanism for mining the potential relationships between bags.

3) Co-Person Attention Mechanism: In a network of cameras, the same person may appear at different times and in different cameras, so there may be multiple video clips (bags) containing common person identities. That motivates us to explore the similarity correlations between bags. Specifically, for those bags with at least one person identity in common, we may want the following properties in the learned feature representations: first, a pair of bags with Person j in common should have similar feature representations in the portions of the bags where Person j appears; second, for the same bag pair, the feature representation of the portion where Person j occurs in one bag should be different from that of the other bag where Person j does not occur. We introduce the co-person attention mechanism to integrate the desired properties into the learned feature representations. In the weakly labeled data, we do not have frame-wise labels, so the identity-wise activation matrix obtained in Equation (1) is employed to identify the required person identity portions. Specifically, for bag i, we normalize the bag identity-wise activation matrix W_i along the frame index using a softmax function:

Ŵ_i[j, t] = exp(W_i[j, t]) / Σ_{t'=1}^{n_i} exp(W_i[j, t']),   (4)

where t indicates the index of the person images in bag i and j ∈ {1, 2, ..., C} denotes the person identity. Ŵ_i can be referred to as identity attention, because it indicates the probability that each person image in a bag is predicted to a certain identity. Specifically, a high value of attention for a particular identity indicates a high occurrence probability of that identity. Under the guidance of the identity attention, we can define the identity-wise feature representations of regions with high and low identity attention for a bag as

Hf_i^j = Σ_{t=1}^{n_i} Ŵ_i[j, t] x_i^t,   Lf_i^j = (1/(n_i − 1)) Σ_{t=1}^{n_i} (1 − Ŵ_i[j, t]) x_i^t,   (5)

where Hf_i^j, Lf_i^j ∈ R^d represent the aggregated feature representations of bag i for the high and low identity-j attention regions, respectively, and x_i^t is the feature of the tth person image. It may be noted that in Equation (5) the low attention feature is not defined if a bag contains only one person identity and the number of person images is 1, i.e. n_i = 1; this is also conceptually valid, and in such cases we cannot compute the CPAL loss.
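The following sketch collects the bag-level pieces above: the k-max-mean pooling of Eq. (2), the bag pmf and cross-entropy of Eqs. (2)-(3), and the attention-weighted high/low features, assuming the reconstructed forms of Eqs. (4)-(5); variable names are illustrative.

```python
import torch
import torch.nn.functional as F

def mil_pmf(W, k=5):
    """k-max-mean pooling (Eq. 2) followed by softmax over identities (Eq. 3).
    W: (C, n_i) identity-wise activations for one bag."""
    s = W.topk(k, dim=1).values.mean(dim=1)  # (C,) per-identity scores
    return F.softmax(s, dim=0)

def mil_loss(W_list, y_list, k=5):
    """Cross-entropy against normalized bag labels, averaged over a batch."""
    losses = []
    for W, y in zip(W_list, y_list):
        y_norm = y / y.sum()
        losses.append(-(y_norm * torch.log(mil_pmf(W, k) + 1e-8)).sum())
    return torch.stack(losses).mean()

def attention_features(W, X):
    """High/low attention features per identity, assuming the reconstructed
    Eqs. (4)-(5): softmax over images, then (1 - attention)/(n_i - 1)
    weights for the low-attention aggregate (requires n_i > 1)."""
    A = F.softmax(W, dim=1)                  # (C, n_i) identity attention
    Hf = A @ X                               # (C, d)
    n = X.shape[0]
    Lf = ((1.0 - A) @ X) / (n - 1)           # (C, d)
    return Hf, Lf
```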
We use a ranking hinge loss to enforce the two properties discussed above. Given a pair of bags m and n with person identity j in common, the co-person attention loss function may be represented as

L_{m,n}^j = (1/2) [ max(0, s(Hf_m^j, Lf_n^j) − s(Hf_m^j, Hf_n^j) + δ) + max(0, s(Hf_n^j, Lf_m^j) − s(Hf_m^j, Hf_n^j) + δ) ],   (6)

where δ = 0.5 is the margin parameter in our experiments and s(·, ·) denotes the cosine similarity between two feature vectors. The two terms in the loss function are equivalent in meaning: they represent that the high-attention-region features in both bags should be more similar than the high-attention-region feature in one bag and the low-attention-region feature in the other bag, as shown in Figure 3. The total CPAL loss for the entire training set may be represented as

L_CPAL = Σ_j (1/|S_j|_2) Σ_{m,n ∈ S_j} L_{m,n}^j,   (7)

where S_j is the set that contains all bags with person identity j as one of their labels, |S_j|_2 = |S_j|·(|S_j|−1)/2, and m, n are indexes of bags.

4) Optimization: The MIL considers each bag in isolation but ignores the correlations between bags, while CPAL mines the similarity correlations between bags. Obviously, they are complementary. So, we jointly minimize these two complementary loss functions to learn our multiple instance attention learning framework for person re-id:

L = λ L_MIL + (1 − λ) L_CPAL,   (8)

where λ is a hyper-parameter that controls the contributions of L_MIL and L_CPAL to model learning. In Section IV-C, we discuss the contribution of each part to the recognition performance.

D. Coarse and Fine-Grained Person Re-id

In the testing phase, each query is composed of a set of detected images in a bag with the same person identity (a person tracklet). Following our goals, we have two different settings for the testing gallery set. Coarse-grained person re-id finds the bags (videos) that the target person appears in; the testing gallery set is therefore formed in the same manner as the training set. We define the distance between probe and gallery bags using the minimum distance between the average-pooled feature of the probe bag and the frame features in the gallery bag. Specifically, we use the average-pooled feature x_p to represent bag p in the testing probe set, and x_{g,r} denotes the feature of the rth frame in the gth testing gallery bag. Then, the distance between bag p and bag g may be represented as

D(p, g) = min{d(x_p, x_{g,1}), d(x_p, x_{g,2}), ..., d(x_p, x_{g,n_g})},   (9)

where d(·, ·) is the Euclidean distance operator and n_g is the number of person images in bag g. Fine-grained person re-id finds the tracklets with the same identity as the target person. This goal is the same as that of general video person re-id, so the testing gallery samples are all person tracklets. We evaluate the fine-grained person re-id performance following the general person re-id setting.
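A corresponding sketch of the pairwise CPAL term and the joint objective is given below, assuming the reconstructed symmetric hinge form of Eq. (6) and the convex combination of Eq. (8); it is an illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def cpal_pair_loss(Hf_m, Lf_m, Hf_n, Lf_n, j, delta=0.5):
    """Ranking hinge loss for one bag pair sharing identity j, assuming the
    reconstructed symmetric form of Eq. (6) with cosine similarity."""
    s = lambda a, b: F.cosine_similarity(a, b, dim=0)
    hh = s(Hf_m[j], Hf_n[j])                       # high-high similarity
    t1 = torch.clamp(s(Hf_m[j], Lf_n[j]) - hh + delta, min=0.0)
    t2 = torch.clamp(s(Hf_n[j], Lf_m[j]) - hh + delta, min=0.0)
    return 0.5 * (t1 + t2)

def joint_loss(L_mil, L_cpal, lam=0.5):
    """Eq. (8): convex combination of the two complementary objectives
    (lam = 1.0 recovers plain MIL, matching the ablation in Sec. IV-C)."""
    return lam * L_mil + (1.0 - lam) * L_cpal
```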
It may be noted that only bag-level labels are available; the specific label of each individual is unknown. More detailed information on these two weakly labeled datasets is shown in Table I. We also consider a more practical scenario in which the annotator may miss some labels for a video clip, namely missing annotation. For example, a person may appear only for a short time and be missed by the annotator, leading to a situation in which the weak labels are not reliable. To simulate this circumstance, for each weakly labeled bag we randomly add 3-6 short tracklets with different identities, each containing 5-30 person images. The new bags thus contain the original person images plus the newly added ones, but the labels remain the original bag labels. In Section IV-B, we evaluate the proposed method in this situation.

2) Implementation Details: In this work, an ImageNet [49] pre-trained ResNet50 network [50], in which we replace the last average pooling layer with a d-dimensional fully connected layer (d = 2048), is used as our feature extractor. Stochastic gradient descent with a momentum of 0.9 and a batch size of 10 is used to optimize our model. The learning rate is initialized to 0.01 and changed to 0.001 after 10 epochs. We create each batch such that it contains a minimum of three pairs of bags, each pair having at least one identity in common. We train our model end-to-end on two Tesla K80 GPUs using PyTorch. We set k = 5 in Equation 2 for both datasets. The number of person images in each training bag is set to a fixed value of 100; if a bag contains more, we randomly select 100 images from it and assign the bag's labels to the selected subset. It may be noted that for the WL-DukeV dataset, we split each original person tracklet into 7 parts to increase the number of weakly labeled training samples. To evaluate the performance of our method, the widely used cumulative matching characteristics (CMC) curve and mean average precision (mAP) are used for measurement.

1) Coarse-Grained Person Re-id: We compare the performance of our method (MIL and MIL+CPAL) to existing state-of-the-art multiple instance learning methods - the weakly supervised deep detection network (WSDDN) [30] (Section 3.3 of their paper, which is relevant to our case), multi-label learning-based hard selection logistic regression (HSLR) [48] and soft selection logistic regression (SSLR) [48] - for the task of coarse-grained person re-id. It should be noted that we use the same network architecture for all five methods for fair comparison. From Table II, it can be seen that the proposed k-max-mean-pooling-based MIL method performs much better than the other compared methods: compared to WSDDN, rank-1 accuracy increases by 9.8% and the mAP score by 11.0% on the WL-MARS dataset. When combined with CPAL (OURS), the recognition performance improves further; compared to WSDDN, rank-1 accuracy and mAP score improve by 15.2% and 16.8% on the WL-MARS dataset, and similarly by 10.2% and 9.9% on the WL-DukeV dataset. In this subsection, we also evaluate our method under the missing annotation scenario. As shown in Table II, under missing annotation, rank-1 accuracy and mAP score on the WL-MARS dataset decrease by 0.5% (78.6% to 78.1%) and 4.4% (47.1% to 42.7%), respectively, and on the WL-DukeV dataset they decrease by 3.3% and 3.8%, respectively. Our method is thus not very sensitive to missing annotation for the coarse-grained re-id task.
Furthermore, we find that the proposed method with missing annotation still performs significantly better than the other methods with perfect annotation (where the annotator labels all appearing identities). For example, compared to HSLR on the WL-MARS dataset, rank-1 accuracy and mAP score improve by 8.5% and 7.3%, respectively.

2) Fine-Grained Person Re-id: In Table III, we compare our framework against methods that utilize strong labels, as well as other weakly supervised methods, for fine-grained person re-id. The proposed k-max-mean-pooling-based MIL method performs much better than most of the other compared methods, and combining it with CPAL (OURS) improves recognition performance further. In particular, compared to HSLR, our method obtains 8.6% and 10.2% improvement in rank-1 accuracy and mAP score, respectively, on WL-MARS, and similarly 8.8% and 10.2% improvement on the WL-DukeV dataset. The efficacy of using weak labels is strengthened by the improvement over methods that use strong labels, such as EUG (strong labeling: one-shot setting) [18] and UGA (strong labeling: intra-camera supervision) [19]. Weak labels also improve performance compared to unsupervised methods such as BUC [12], with gains of 6.4% and 8.0% in rank-5 accuracy and mAP score on the WL-MARS dataset, and similarly 6.1% and 3.0% on the WL-DukeV dataset. Compared to EUG, recognition performance improves from 74.9% to 81.5% (a 6.6% difference) in rank-5 accuracy on the WL-MARS dataset and from 84.1% to 87.2% (a 3.1% difference) on the WL-DukeV dataset. We also evaluate our method under the missing annotation scenario for fine-grained re-id. As shown in Table III, under missing annotation, rank-1 accuracy and mAP score decrease by 5.2% and 5.4% on the WL-MARS dataset, and by 1.0% and 1.2% on the WL-DukeV dataset. Our results under missing annotation remain very competitive with those of other methods under perfect annotation: for example, compared to HSLR, rank-1 accuracy and mAP score improve by 3.4% and 4.8% on the WL-MARS dataset, and by 7.8% and 9.0% on the WL-DukeV dataset. Compared to the unsupervised method BUC, we also obtain better results, especially in mAP score, where our method is 2.6% and 1.8% better on the WL-MARS and WL-DukeV datasets, respectively.

C. Weights Analysis on Loss Functions

In our framework, we jointly optimize MIL and CPAL to learn the weights of the multiple instance attention learning module. In this section, we investigate the relative contributions of the two loss functions to recognition performance. To do so, we perform experiments on the WL-MARS dataset with different values of λ (a higher value indicates a larger weight on MIL) and present the rank-1 accuracy and mAP score on the coarse- and fine-grained person re-id tasks in Figure 4. As may be observed from the plot, the proposed method performs best when λ = 0.5, i.e., when both loss functions have equal weight. Moreover, using only MIL, i.e., λ = 1.0, results in a decrease of 5.8% and 2.3% in mAP (5.4% and 1.4% in rank-1 accuracy) on the coarse- and fine-grained person re-id tasks, respectively. This shows that the CPAL introduced in this work contributes substantially to the performance of our framework.

D. Parameter Analysis

We adopt a k-max-mean-pooling strategy to compute the activation score corresponding to a particular identity in a bag. In this section, we evaluate the effect of varying k, which is used in Equation 2.
As shown in Table IV, the proposed multiple instance attention learning framework is evaluated with four different values of k (k = 1, 5, 10, 20) on the WL-MARS dataset for fine-grained person re-id. When k = 5, we obtain the best recognition performance: 65.0% rank-1 accuracy and 46.0% mAP score. Compared to k = 1, which selects only the largest activation for each identity, performance improves by 4.0% in both rank-1 accuracy and mAP score. We use this value of k = 5 in all the experiments.

E. Comparison with CV-MIML

In this section, we compare the proposed framework with CV-MIML [5], which was recently proposed for the weakly supervised person re-id task. Although [5] is presented as a weakly supervised method, it should be noted that it uses a strongly labeled tracklet for each identity (one-shot labels) in addition to the weak labels; this is not a true weakly supervised setting, and we term it semi-weakly supervised. In contrast, our method does not require strong labels and is more in line with the weakly supervised frameworks proposed for object recognition, activity recognition and segmentation [34], [35], [30], [36], [37]. Thus, CV-MIML is not directly applicable to our scenario, where one only has access to bags of person images. However, for the sake of comparison, we implemented CV-MIML without the probe set-based MIML loss term (L_p) and the cross-view bag alignment term (L_CA), since these require the one-shot labels to calculate the cost or the distribution prototype for each class. We refer to this variant as CV-MIML* and compare it to our method on the WL-MARS dataset for the coarse-grained re-id task. We also briefly compare our results with those reported in [5] on the MARS dataset. As shown in Table V, despite the lack of strong labels, our method performs comparably with CV-MIML and completely outperforms its label-free variant CV-MIML* (more than 300% relative improvement in mAP). In addition, comparing the recognition performance of CV-MIML* and CV-MIML, we find that the CV-MIML method relies heavily on strong labels.

F. Evaluation of Multiple Instance Attention Learning with Tracklet Setting

Our proposed method works with the individual frames of the tracklets given in a bag (video). In this section, we perform an ablation study in which we use tracklet features instead of frame-level features. Each training sample can then be denoted as a bag {T_i^1, T_i^2, ..., T_i^{m_i}} with weak label y_i, where the bag contains m_i person tracklets and T_i^k is the kth tracklet obtained in the ith video clip. Tracklet features are computed by mean-pooling over the frame features. Table VI reports the fine-grained person re-id performance on the WL-MARS dataset. Even in this setting, our method still performs better than the others. Compared to the multi-label learning-based HSLR [48], we achieve 6.8% and 8.3% improvement in rank-1 accuracy and mAP score, respectively. Compared to the state-of-the-art unsupervised BUC [12], we also obtain better recognition performance, including a 5% improvement in mAP score. Moreover, the proposed method is very competitive with semi-supervised person re-id methods such as EUG [18] and UGA [19] under the tracklet setting; in particular, the mAP score improves by 0.6% and 2.5% compared to EUG and UGA, respectively. Next, we present a more practical scenario (noisy tracking) in which each tracklet may contain more than a single identity due to imperfect person tracking in a video clip.

Noisy tracking.
Assuming correct tracking over the entire duration of a tracklet is a very strong and unrealistic assumption; in a practical setting, a tracklet may contain more than a single identity. Our method sidesteps this issue by using frame features. Here, we report performance using tracklets with noisy tracking. Specifically, we randomly divide the person images in the same bag into 4 parts and regard each of them as a person tracklet that may contain one or multiple person identities. Based on this setting, we compare the fine-grained person re-id performance of the proposed method to several other methods on the WL-MARS dataset; Table VII presents this comparison. Unsurprisingly, under the noisy tracking setting, recognition performance declines considerably for all methods compared to the results reported in Table VI. However, weak supervision-based methods consistently outperform the state-of-the-art unsupervised BUC [12] by a large margin; in particular, the proposed method obtains 12.4% and 11.9% improvement in rank-1 accuracy and mAP score, respectively.

G. Ablation Study

In this section, we conduct ablation studies to evaluate the advantages of our proposed MIL loss and CPAL loss. We validate our methods on the WL-MARS dataset under two different tasks: coarse-grained person re-id and fine-grained person re-id. From Table VIII, we can see that (1) adding CPAL to other methods (HSLR+CPAL, SSLR+CPAL and WSDDN+CPAL) consistently improves recognition performance by a large margin, e.g., 4.4% rank-1 accuracy and 6.9% mAP score improvement for HSLR-based coarse-grained re-id, and 6.8% rank-1 accuracy and 7.0% mAP score improvement for HSLR-based fine-grained re-id; (2) the MIL loss performs better than the other deep logistic regression-based methods - compared to HSLR-based coarse-grained re-id, rank-1 accuracy improves from 69.6% to 73.2% and the mAP score from 35.8% to 43.7%; and (3) combining MIL and CPAL (MIL+CPAL) yields the best recognition performance: 78.6% rank-1 accuracy and 47.1% mAP score on coarse-grained re-id, and 65.0% and 46.0%, respectively, on fine-grained re-id.

H. Matching Examples

For better visual understanding, we show some coarse- and fine-grained person re-id results achieved by our proposed multiple instance attention learning framework on the WL-MARS dataset in Figure 5. Figure 5(a) shows the coarse-grained person re-id results. Each query is a bag containing one tracklet with one person identity, and 4 returned bags (video clips) are shown in the figure. The bounding boxes indicate the frame in each bag that is most similar to the query person; blue and red indicate correct and wrong retrieval results, respectively, and yellow dots indicate the tracklets with the same identity as the query person. We find that the most similar frame can be wrong even when the retrieval result is correct, as shown in Figure 5(a): Bag 3. This may partly explain why coarse-grained rank-1 accuracy is better than fine-grained.

V. CONCLUSIONS

In this paper, we introduce the novel problem of learning a person re-identification model from videos using weakly labeled data. In the proposed setting, only video-level labels (the person identities who appear in the video) are required, instead of annotating each frame in the video; this significantly reduces the annotation cost.
To address this weakly supervised person re-id problem, we propose a multiple instance attention learning framework in which the video person re-identification task is converted to a multiple instance learning setting; on top of that, a co-person attention mechanism is presented to explore the similarity correlations between videos with common person identities. Extensive experiments on two weakly labeled datasets, WL-MARS and WL-DukeV, demonstrate that the proposed framework achieves state-of-the-art results on the coarse-grained and fine-grained person re-identification tasks. We also show that the proposed method remains promising even when the weak labels are not reliable.
Making sense of professional enablers' involvement in laundering organized crime proceeds and of their regulation

Money laundering has ascended the enforcement and criminological agenda in the course of this century, and has been accompanied by an increased focus on legal professionals as 'enablers' of crime. This article explores the dynamics of this enforcement, media and political agenda, and how the legal profession has responded in the UK and elsewhere, within a context that ignores the difficulties of judging the effectiveness of anti-money laundering. It concludes that legal responses are a function of the profession's lobbying power, the determination of governments to clamp down on the toxic impacts of legal structures, and different legal cultures. However, it remains unclear what effects controls on the professions have on the levels and organization of serious crimes for gain.

Introduction

Why should we be interested analytically in the financial flows and in the professionals and financiers connected with organized crime? There has been intermittent interest since the 1960s and indeed, even centuries earlier, in the "depth of field" of organized crime. 'The fence' has been an enduring factual and fictional theme for centuries, for example in Dickens' Oliver Twist, and it is sometimes said that without receivers of stolen goods, there would be less crime. Yet receivers and the conceptual Stolen

The international media focus subsequent to these revelations tended to concentrate on the banks' ongoing willingness to act for clients they had made reports on, rather than on the lack of public investment in following up the reports they had made, freezing the alleged crime proceeds, and prosecuting or taking other action against suspects. One reason for this may be that the inaction of private institutions such as bankers and professionals makes an attractive target for 'folk devilment', especially if there is official corruption to complement it. In parallel, alongside a journalistic undercover operation, Al Jazeera was able to shine a light on the Golden Passports scandal in Cyprus, offering Cyprus and therefore EU citizenship for a substantial fee to some less than impeccable non-Europeans, with senior officials allegedly 'on the take' and some senior political resignations in the aftermath of publicity [2]; Malta has some very similar publicized issues [3], but many other EU countries differ from Cyprus and Malta only as a matter of degree (which is not unimportant): they are less blatant in offering EU-wide passports without much due diligence (Global Witness 2020). The net effect of these revelations in 2020 might be to sharpen questions about the benefits of Anti-Money Laundering (AML) controls and about whom and what the controls bite on (and do not bite on) in different parts of the world [4].

Organized crime as a target of AML

The regular critiques do not properly lead to the conclusion that money laundering controls have no effect, but there is little research that shows how or to what extent they impact on which crimes and which forms of criminal organization. Organized crime scholars over the past decade have become interested in the 'scripts' of organized crime and in the ways in which crimes are organized and shaped by controls. Markets vary, however, in the extent to which we know about their extent and shape, and given the alleged trillions of dollars involved in money laundering, a valid question is how much of that huge criminal space is well described and understood?
Organized crime and money laundering have been put together in a model first developed by the US Presidential (Reagan) Commission on organized crime, which reported in 1986. This saw a focus on the money trail as one of the key routes through which its particular forms of organized crime could be combated. [5]

[2] https://www.aljazeera.com/news/2020/10/15/cyprus-house-speaker-resigns-following-al-jazeerainvestigation; https://timesofmalta.com/articles/view/eu-to-launch-legal-action-over-cyprus-malta-goldenpassports.825802.
[3] Even including the alleged involvement of the then Prime Minister (who had to resign) and his (arrested) chief of staff in covering up the networks involved in the killing of the investigative journalist Daphne Caruana Galizia ('Malta Murder Investigation Closes In on "Mafia State"', https://www.nytimes.com/2019/12/19/world/europe/malta-murder-daphne-caruana-galizia.html?action=click&module=RelatedLinks&pgtype=Article; https://www.theguardian.com/world/ng-interactive/2020/oct/15/justice-on-trial-three-years-after-murder-daphne-caruana-galizia).
[4] Increasingly, authoritarian regimes use banks' obligations to report 'suspicions' of crime and terrorism to require them to divulge financial transfers by political opponents of the regime - e.g. 'Banks in Hong Kong advised to report security law breaches to police', https://www.ft.com/content/4f52cb9c-b069-4b6c-9ef6-f980946f6eb3, 20 October 2020 - or to 'de-risk' them as clients. One technique of control is to require foreign institutions and charities to register and be monitored more intensively.
[5] See Jacobs (2020) for a recent, almost triumphalist, review of this conventional conception of organized crime in the US.

'If money laundering is the keystone of organized crime, these recommendations can provide the financial community and law enforcement authorities with the tools needed to dislodge that keystone, and thereby to cause irreparable damage to the operations of organized crime.' - The Cash Connection, US Presidential Commission on Organized Crime (1986: 63)

This model, focusing on the goal of integration or legitimization of the proceeds of crime, contains a paradox. If the crime syndicates are planning to finance future crimes, they are definitely not thereby legitimizing the proceeds of past crimes: rather the reverse, since they are both laundering (in the formal legal, if not in the analytical, sense of cleansing) and intending to commit whatever new predicate crime they are planning. This is highlighted to stress an important ambiguity: on the one hand, the dominant cultural image of 'laundering' among national bodies, Inter-Governmental Organizations (the IMF, World Bank and UN) and Non-Governmental Organizations is indeed the use of sophisticated methods to cleanse 'dirty money' (from an ever-increasing range of predicate offences, latterly including foreign bribery and both domestic and foreign tax evasion); but on the other, the offence of laundering applies in most jurisdictions to whatever anyone does to hide, transfer or transform the proceeds of any crime, whether or not this actually legitimizes the funds or is intended to do so. Some proceeds are moved around to conceal their illegitimate origins in such a way as to defeat a significant financial investigation by competent professionals (though because of resource constraints combined with secrecy havens, this is not likely to happen in practice in many cases).
But other proceeds are merely self-laundered, in simple fashion, into accounts in the offenders' own or friends' names, to fund possible future crimes as well as lifestyle expenditures and savings, rather than as precursors to integration into the mainstream economy. Thus, one finds many newspaper reports stating that people have been charged both with drugs trafficking and with money laundering when the police have found a large bundle of cash hidden behind a false wall in their home, buried in the garden, or concealed in their cars. Many launderers fall between these extremes. Once he (normally a male) generates a volume of business too large to spend immediately and/or to store physically in a place he considers safe (including a bank account or real estate that may be in the name of others), the drug dealer or other illicit trader will need someone with other skills to launder the revenues, at least on an intermittent if not regular basis, if he intends to continue a life of crime. Both will be guilty of money laundering. Though convictions of corrupt financial insiders are rare - as they are also for 'insiders' in cybercrimes (Williams et al. 2019) - there can be three parties: the money launderer could be an intermediary who recruits someone inside a financial institution to make the transaction and/or open accounts to facilitate transactions (knowingly, with willful blindness, or even naively), or a set of 'money mules' with existing accounts recruited (sometimes believing it is a genuine job) to put the crime proceeds through for onward transmission. Nor is it always the case that the 'organized criminal' makes the approach to the insider. It can be an insider who is rapacious and seeks out offenders for whom to launder money, whether he has always done this or is responding freshly to financial or other pressures. This corresponds to the distinction between 'grass eaters' and 'meat eaters' made by the Knapp Commission (1972) on New York police corruption: grass eaters are those who passively receive bribes, and meat eaters are those who proactively go in search of bribes. However, there is a constraint that results from an enforcement and regulatory focus: especially if they are going to launder money regularly, insiders within financial institutions need to neutralize internal vigilance by Compliance and/or Money Laundering Reporting Officers, who in theory could go to jail, as well as see large corporate fines imposed, for having inadequate money laundering controls. So despite the understandable cynicism when we observe big money sums in corporate sanctions for large-scale laundering, it would be very surprising if internal controls never had any effect in situational laundering prevention. Whistleblower accounts and criminal cases tell us only about the 'failures' to do so (or rather, intentional decisions to bypass controls), often involving kleptocrats and intermediaries in high-level corruption, but also involving drug dealers and human traffickers. In the 1MDB case, in which the then Prime Minister of Malaysia, Najib Razak, was given a 12-year prison sentence in 2020 for funneling part of the billions stolen from the country, senior executives at Goldman Sachs found ways to hide the involvement of the wealthy entrepreneur Jho Low, bypassing the strong objections of Goldman's compliance officers, who had sought to stop him becoming a private wealth client (Noonan 2020).
In that case, as with Deutsche Bank compliance efforts to reduce the bank's dealings with Donald Trump pre-Presidency (Enrich 2020), the efforts failed. However, in some (we have no idea what proportion of) other cases, they have succeeded in reducing particular crimes (though not necessarily in reducing 'crime' overall). In 2017, HSBC staff became suspicious about the proposed transfer of $500 million by the son of the former President of Angola, reported it to the authorities, and blocked the account. The filing of the suspicious activity report led to an international investigation, the return of the funds, and the imprisonment of the son (and, following 'Luanda Leaks', action against his sister Isabel dos Santos, Africa's richest woman, including the freezing of all her accounts): though see Engebretsen and de Oliveira (2020). In the long period since the first US criminalization measures in 1986, one might have expected a strong evidence base on what happens to the proceeds of crime. That is far from being the case. Indeed, governments and intergovernmental organizations have spent almost nothing on public research into these issues, while regular costly (pre-Covid-19) international meetings service the global fight against this ill-understood phenomenon and 'evaluate' these efforts. (These evaluations have been mainly procedural to date, though this is slowly changing to 'real world' assessment, at least in principle - see Levi et al. 2018; Ferwerda and Reuter 2019.) Meanwhile, especially this century, what many governments include in the category of 'financial crime' expands ever wider: for example, it is difficult to evade taxes without also being a money launderer, though very few are prosecuted unless they are political opponents of authoritarian governments. In this morass of perpetual control activity, it may not be surprising that there is no coherent 'theory of change' that explains and/or predicts what effects levels of financial investigation and asset recovery will have on the volume or forms of laundered money, nor on the separate question of how these interventions will impact on criminal markets, and under which circumstances. Instead of serious empirical work (whose absence is strongly critiqued by academics from Naylor 1999 to Van Duyne et al. 2019), there is a compelling cross-cultural narrative of 'follow the money', established initially by the use of tax evasion charges to jail Al Capone, that has become a law enforcement mantra since the mid-1980s (Halliday et al. 2020). The evidence on where the money goes, and most recently also on where the money comes from (Levi 2015), has been seen as a route into answering the questions of how concentrated organized crime is, how transnational it is, how 'complicated' it is (though by what criteria remains unexplicated), and whether there is any evidence to support the common view - developed in the US to fit the drugs market and recycled endlessly ever since - that money laundering always goes through the three stages of placement, layering and integration. The integration concept is nicely captured in the themes of The Godfather, but is also reflected, in an earlier era, in Bell's (1953) analysis of organized crime as a 'queer ladder of social mobility' in America.
In most cases - and not disregarding the important point that organized criminals in the UK (and likely elsewhere) seek local, or at most regional, rather than national or global control (Campana and Varese 2018) - how much appetite is there among most launderers or organized criminals in OECD countries to attain political control or to become titans of finance and industry? The anti-money laundering transnational legal order has been developing rapidly in recent years to incorporate all crimes great and small, and one of the difficulties for this paper (and for the more substantial review by Levi and Soudijn 2020) is how to differentiate the laundering of organized crime funds from other sources of criminal income, which include tax evasion, grand corruption and the financing of terrorism - which can sometimes involve committing 'organized crime' offences. For example, it is hard to maintain that the funneling of the billions allegedly stolen in the Malaysian 1MDB scandal was not well organized. It is simply that the principal people involved - including the then Malaysian Prime Minister and his entourage, Goldman Sachs and many intermediaries - would not be viewed by many respectable elites or by the police as 'organized crime actors'. We might extend this boundary problem to the deliberate falsification of data submitted to regulators: e.g. the 'diesel-gate' emissions scandal, primarily at the Volkswagen Group, and the falsification by the now defunct Takata of safety data for its worldwide manufacture of car airbags. Arguably that behavior involved several actors planning how to commit crimes and get away with them over a long period of time in the pursuit of profit and power. Yet many people (if not many readers of Trends in Organized Crime) would balk at the idea of labeling senior executives of major corporations as organized criminals and of describing their distribution of profits as organized crime money laundering, though others might complain if we did not so label them for their intentional deception (see Levi 2019 for a discussion). Other contexts of elite misconduct are less challenging to our stereotypical differentiation between white-collar and organized crime: e.g. the investigative media exposure of Operation Laundromat and subsequent scandals shows cross-ties between politicians in the former Soviet Union, organized criminals, professional crime enablers and bankers in the Baltic States, and international finance centers including London and New York. Conventional divisions between white-collar and organized crime are also challenged by accusations about the conduct of the Kazakhstan-based mining company ENRC, which has been involved in a long-running battle with the UK Serious Fraud Office over its transnational bribery investigation, which has so far lasted seven years without a decision whether or not to prosecute (Burgis 2020a, b). Many millions of pounds have been spent on lawyers' fees in this and similar cases, competently and aggressively defended as is their legal right. This included allegations that the now retired head of white-collar crime at the law firm Dechert procured and/or used hacked emails and was complicit in the torture (or 'robust interrogation') of a former senior lawyer in the UAE, in connection with defending ENRC; there are also allegations that potential witnesses were murdered (Beioley 2020a, b).
Dechert was also accused by ENRC of passing on information to the SFO against ENRC's interests, though the Court of Appeal ruled that litigation privilege would apply to documents shared between ENRC and its then legal advisers, and that the SFO could not get access to external or internal legal documents (SFO v ENRC [2018] EWCA Civ 2006; Kemp Little 2019). So some 'elite' corporate cases can be as murky as stereotypical 'organized crime cases', a fact depicted in movies and contemporary television series about whistleblowers and launderers, from Chinatown and The China Syndrome onwards. One useful way of thinking about it is to separate out full-time organized criminals; organized crime facilitation via otherwise legitimate or semi-licit legal and accounting professionals; and transport logistics for crime proceeds. How 'elite' the facilitators (including lawyers) of organized crime funds are, compared with those that facilitate the proceeds of Grand Corruption, is largely unknown and not explicitly examined. In some countries, there is a fusion, or at least a strong overlap, between some politicians and 'organized crime' activities as popularly understood. The involvement of these actors can be analyzed as routine activities undertaken (a) knowingly, (b) unknowingly, or (c) in ways where it is hard to tell whether (a) or (b) applies: but the nature of these 'markets for laundering' and how asymmetric they are is ill understood.

Professional enablers

Much of the white-collar crime literature indicates how difficult it is to 'folk devil' corporate elites and professionals, whether intentionally or simply as a by-product of media coverage or regulatory/criminal justice interventions (e.g. Levi 2009; Lord et al. 2019). It is not clear where and when this apparently neutral but often pejoratively used term was first used. However, a report by the World Economic Forum (2012) on Organised Crime Enablers contained a section on Money Laundering Enablers. (Full disclosure: the author was a member of the report drafting team.) The National Crime Agency's (2014: 1) strategy for tackling 'High End Money Laundering' somewhat bizarrely states: "For the purposes of this strategy, we are defining "high end" money laundering as the laundering of funds, wittingly or unwittingly, through the UK financial sector and related professional services." Though the document did distinguish crimes that needed to hide an audit trail from street-level crimes, many might have thought that most money laundering met that description of 'high end'. It plausibly and reasonably added (p. 1) that:

Although there are many ways to launder money, it is often the professional enabler who holds the key to the kind of complex processes that can provide the necessary anonymity for the criminal. Professionals such as lawyers, trust and company formation agents, investment bankers and accountants are among those at greatest risk of becoming involved, either wittingly or unwittingly.

Thus, lawyers are generically one set of enablers, and the construct was formalized in the UK's Serious Crime Act 2015, which criminalized membership of organized crime groups, in line with the EU Directive. In the US, the more common term for enablers is 'gatekeepers': readers may judge for themselves whether this is more or less pejorative as a construct, but the American Bar Association - which (like the Australians) has successfully resisted national regulation for AML - appears to embrace it.
In 2002, two years after the Law Society for England and Wales created a Money Laundering Task Force, the US created an ABA Task Force on Gatekeeper Regulation and the Profession, whose principal goal appeared to be 'hands off' (https://www.americanbar.org/groups/criminal_justice/gatekeeper/):

Our Task Force was created…to examine U.S. government and multilateral efforts to combat international money laundering and the implications of these efforts for lawyers and the profession. The legal profession, as well as certain financial sector professions, have been characterized as the "gatekeepers" to the international financial and business markets. The mission of the Task Force is to respond to initiatives by the U.S. Department of Justice and other organizations that will impact on the attorney-client relationship in the context of anti-money laundering enforcement. We are reviewing and evaluating ABA policies and rules regarding the ability of attorneys to disclose client activity and information, and developing a position on the Gatekeeper issue. We are developing educational programs for legal professionals and law students, and organizing resource materials to allow lawyers to comply with their anti-money laundering responsibilities. The goal is to preserve the integrity of the attorney-client relationship.

This is not the place for a detailed legal discussion of how different legal regimes have responded to demands from the Financial Action Task Force to control their gatekeeping function (see Levi 2020). Suffice it to say that the answers lie in political lobbying strengths and in different legal cultures struggling against pressures from the Financial Action Task Force and the European Union, within a context of massive general ignorance of both money laundering and the role of professionals, which does not appear to have gripped the public or politicians in the same way that some forms of street and household crime have. Dealing with kleptocracies has occasionally stimulated high-level international action plans (e.g. the Cameron Summit of 2016: https://www.gov.uk/government/speeches/anti-corruption-summit-2016-pms-closingremarks), and in the UK, the BBC television series (but not the decade-earlier book) McMafia stimulated greater political support for enquiries into Russian money of questionable provenance and its impact in Britain. In parallel, proselytizing by the mega-rich former fund manager Bill Browder has led the US, Canada and the UK to enact Magnitsky sanctions against what they politically deem Human Rights abuses, in memory of Sergei Magnitsky, the lawyer to a hedge fund who was allegedly killed in a Russian prison at the instigation of the Putin regime; and the Panama Papers and other campaigns have highlighted the overseas assets of Putin and some other prominent politicians (Belton 2020). However, bankers and governments, rather than lawyers, have been the primary focus of these campaigns. Nevertheless, in October 2020, the Köln (Cologne) prosecutors in Germany issued international arrest warrants for Jürgen Mossack and Ramón Fonseca to answer accusations of forming a criminal organization and complicity in tax evasion in Germany: they are also being sought by the US and are 'under investigation' in Panama, which does not extradite its citizens, so they are only prosecutable if they travel abroad.
Nougayrede (2019) has explicated the different cultural factors underlying lawyer regulation in France, the UK and the US, to which I would add that in the UK, the cultural tradition of less legally formalized public-private partnerships has been a strong feature of the lawyer regime, as of other aspects of anti-money laundering (see also Vogel and Maillart 2020). American and Australian lawyers have generated a minimalist approach to their legal obligations to analyze the riskiness of their clients and to report to national Financial Intelligence Units any suspicions about their transactions, while UK lawyers are at the other end of that international spectrum, reporting suspicions about their clients' transactions more often than the rest of the EU combined, despite theoretically having the same regulatory obligations. Notwithstanding that, criticisms of 'under-reporting' have continued to be made by senior members of the UK government, the National Crime Agency and NGOs over the past few years, leading to complaints by the legal and other professions that they have been unreasonably stigmatized (see e.g. Cross 2018; Walters 2016), as they have also been for actively defending Legal Aid and immigration issues, and as judges have been for making rulings supportive of Human Rights and critical of some Brexit legislation. There are multiple ways in which this issue can be examined. One is to focus solely on the legal, organizational and political responses to control. The second is to start with what we know about lawyer participation in money laundering and crime (what the UK National Crime Agency (NCA, 2014) refers to as a strategic 'knowledge gap') and to consider how we might fill the gaps with reasonably valid knowledge rather than with either (a) cynical assumptions about lawyer participation or (b) (by some legal professionals) naïve or faux-naïf denials accompanied by demands for proof. Some aspects of the evidence are dealt with in Levi and Soudijn (2020). For our present purpose in focusing only on laundering, we should set aside cases of alleged serious lawyer misconduct in assisting mostly legitimate corporations, because though it may help with an alleged corruption or fraud scheme, this is merely a precursor to the laundering phase. A lawyer or a sophisticated businessperson can probably launder the proceeds of their own crimes without recourse to a third-party 'enabler'. Someone getting a non-trivial amount of cash from drugs or other illicit services might spend some of it on having fun, and build and/or buy homes with it for himself and his family, either in his country of origin or locally. It depends on his or her savings ratio and income from crime. If self-laundering is not practical, the laundering can be outsourced to so-called professional money launderers (PMLs), who are contracted by the criminal to solve particular logistical bottlenecks (Benson 2020; Kleemans et al. 2002; Kruisbergen et al. 2014; May and Bhardwa 2018). A PML is a necessity if offenders want to be able to develop crimes at scale, and is distinguishable from the usually low-skilled and readily substitutable 'front men' or 'straw men', who might be nominees on property deeds or company documents but who do not plan or execute a laundering scheme. So the need for enablers depends on the scale and form of the profits from crime, and the seriousness with which the offenders want to conceal them.
The classic nail bar, sun-tanning or massage parlour may not require an elaborate corporate construction, though it will require an accountant (who need not be a member of a professional body) and a lawyer for property title and transfers. An international business would normally require a lawyer/notary, though not necessarily a complicit one. A 2019 civil confiscation order for £6 million targeted the network around the winner of millions in the UK National Lottery, where it appeared that the win offered partial cover for a web of illicit funds from frauds and from Russia, involving a multimillionaire businessman, a barrister, and others. [11] Once the transactions grew larger and more frequent, the lottery win should not have disarmed the banks and professionals, though it apparently did.

[11] https://www.thetimes.co.uk/article/lottery-winner-hit-with-6m-bill-over-money-laundering-h6sbdc0dl. The NCA uncovered an international money laundering network that saw hundreds of millions of pounds transferred through more than 100 bank accounts held globally, including in the UK, Russia, Hong Kong, and Switzerland. The lottery winner's husband's activities first came to light during another NCA civil recovery investigation, involving the convicted drug smuggler Amir Azam, in which £4 million in assets were recovered. He structured his wealth so that it was in his wife's name; they rented a London flat in Belgravia costing £2,000 a week. The couple hired a private jet for a foreign trip, holidayed in Cannes and Dubai, shopped at Harrods and built a swimming pool at their country home. So not all was saved and integrated! See https://nationalcrimeagency.gov.uk/news/eight-year-nca-investigation-results-in-multi-million-pound-assetrecovery-including-luxury-hotel-and-100k-bentley (accessed 12 May 2019). In another case, following civil proceedings and an Unexplained Wealth Order, the unprosecuted head of the 88 M business group gave up to the NCA 45 properties and four parcels of land in London, Leeds and Cheshire, plus £584,000 in cash that was the subject of an account freezing order. Some accountants and lawyers who worked with him or his businesses have been reported to their professional regulators, with outcomes as yet unknown. His business front made it not implausible that the money came from licit activities (The Times, 7 October 2020).

Levi (2005, 2015) examines cases of British lawyers who launder the proceeds of their own crimes, such as fraud, but also the smaller number identified of those who launder the proceeds of other people's crimes, perhaps after mutual attraction through vice and/or blackmail pressure. Changes in ethical legal culture, financial pressures from deterioration in levels of business, and the ownership of law firms may increase the money laundering opportunities and 'needs' of the firm and/or the individual. Benson (2020) analyzed 20 British cases between 2002 and 2013 in which lawyers or accountants were convicted of money laundering. The cases varied by the purpose of the transactions, the level of financial benefit gained by the professional, and the nature of their relationship with the predicate offender. Whereas acting in the purchase or sale of residential property and moving money through their firm's client account were the most common means by which the lawyers in these cases were involved with criminal funds, there were also lawyers who had: written to a bank to try to get them to unfreeze an account; paid bail for a client using what was considered to be the proceeds of crime; transferred ownership of hotels belonging to a client; written a series of profit and loss figures on the back of a letter; and witnessed an email, allowed the use of headed stationery and provided legal advice for a mortgage fraudster. Four lawyers were knowingly and intentionally involved, but in the majority of cases Benson concluded that there was no evidence of a deliberate decision to offend or of dishonesty on the part of the lawyer (which in my view did not always mean that there was none, but that none could be proven). Although such behaviour did not show that the lawyers were part of the 'crime group', their contribution to the goal of the crimes - the successful extraction of proceeds - was important. An American research study examined 123 case files of defendants who had been convicted and sentenced in 2009 in the US Second Circuit on federal money laundering charges (Cummings and Stepnowsky 2011). It noted that 98.4% of convictions were obtained through a plea, and found that 'lawyers facilitated money laundering, both wittingly and unwittingly, in 25% of the cases examined'. Of the 10 cases pertaining to lawyers in the final data set of 40 cases, four involved 'lawyer self-directed schemes' in which the lawyer had committed fraud or embezzlement and then laundered his/her own illicit gains. This left only six cases where lawyers were unwittingly involved in facilitating money laundering, five of which pertained to real estate transactions. Cummings and Stepnowsky (2011) concluded that there was no demonstrable evidence to support government proposals that lawyers act as agents of the government by filing suspicious activity reports (SARs) with a federal entity, since lawyers who deliberately launder their own illicit monies will not report suspicious transactions, and any suspicious reporting regime could only possibly be of value in cases where lawyers unwittingly facilitate money laundering! (Though if they were truly unwitting, why would the lawyer have made a SAR? Presumably via education to raise their consciousness.) In that study, only 6 out of 123 cases had lawyers who were unwitting dupes of money laundering, and more than 80% of these cases concerned real estate money laundering. But of course, this may do little more than demonstrate that attorney-client privacy and legal professional privilege make convicting or even prosecuting crooked lawyers too much of a challenge, except where they run the scams themselves and in essence are self-launderers!

Discussion and conclusions

Where offenders and would-be offenders expect lawyers and other professionals to be what Capone called 'the legitimate rackets', a plausible hypothesis is that they will make requests to assist in laundering more often, especially - but by no means only - if the professionals have vices for which they can be blackmailed. Conversely, if criminals think that lawyers are ethical and/or have nothing with which to blackmail or pressurise them, fewer requests will be made. These positions may be too binary to match the reality within and between jurisdictions: for example, what are the effects of a contraction of legitimate business on lawyers' suspiciousness of new business and on whether conduct fits money laundering typologies? Such 'differential association' models are difficult to test, and the data are too weak and anecdotal to enable general inferences to be made.
But there is no reason why the involvement of professionals in money laundering should be constant over place and time, and despite the high rhetoric about their flexibility and globalisation, the ease with which offenders who are not members of social elite circles can and do use lawyers in less regulated jurisdictions is largely unexamined, at least publicly. (It fits with the incorrect assertion that offenders who are frustrated in one jurisdiction can simply move their criminal operations elsewhere.) Moreover, it is important not to overstate the involvement of legal professionals in all forms of 'organised crime': Antonopoulos et al. (2019) have thrown light on money movement in counterfeit goods, which usually requires little lawyer engagement. Nevertheless, even excluding the misconduct of otherwise legitimate corporations, many higher-level offenders do use corporate and trust vehicles to transfer assets and money, especially in frauds; and they use lawyers or notaries when purchasing and selling homes and businesses. Unlike in many areas of transnational organised crime, technologies have played only a minor role in this account. Hacking (as in the data compromise of professionals' files at Mossack Fonseca in the Panama Papers); leaking (as in the FinCEN files); automated searches for background checks/'adverse media' on clients and on whether they are on international lists of sanctioned individuals and corporations; and technologies of data analysis and cross-matching by journalists such as the International Consortium of Investigative Journalists (all cases) have played their growing part in the defamation-avoiding exposure of matters normally held within what Van De Bunt (2010) evocatively described as the 'walls of secrecy and silence' (see also Morselli and Giguere 2006). But in the FinCEN files and some other whistleblowing exposés, only electronic funds and corporate vehicles moved, though sometimes clients and professionals giving legally privileged advice did move, as they went physically or electronically around the globe in search of increased trade in 'invisible earnings'. Data are not kept on how often lawyers turn down clients or accept deals that (reasonably or not) may be viewed as legally questionable in retrospect: but journalists for the Organized Crime and Corruption Reporting Project (https://www.occrp.org/en/) and Murray (2015, 2020) have illustrated how lawyers, accountants and corporate vehicles are used by East European and Scottish criminals to develop their economic and social power. Much attention has been paid to the symbolic struggle of getting lawyers to report suspicions outside of the representation of their clients in legal matters, so the attempts to legislate this become a goal in themselves, generating serious political conflicts in countries such as Switzerland - rowing back reforms that were made to please the Financial Action Task Force (2019 and 2020) - and constitutional conflicts in countries such as Canada and some European jurisdictions, where the compromise is for lawyers to report first, or even only, to their local Bar Associations (Levi 2020; Nougayrede 2019; Vogel and Maillart 2020). By contrast with the inactivity of lawyer regulation in many countries, in the aftermath of criticisms by the FATF in its Mutual Evaluation Report 2020, the UAE suspended the licenses to practice of 200 law firms for a month until they brought their systems into compliance.
However, some might argue that this was a largely symbolic gesture of 'show-and-tell' compliance, and many transactions of concern are undertaken by firms that tick all the compliance boxes. The UK situation suggests that unless there is a very marked uptick in investigative resources, many reports from lawyers and others will not receive more than cursory attention from financial investigators and law enforcement unless the clients are already under suspicion, and would not have done so over the past decades (Levi and Gelemerova 2020): this may be true elsewhere, except perhaps in jurisdictions such as Switzerland where more pre-reporting vetting occurs. In 2018-19, there were 478,437 Suspicious Activity Reports in the UK, including 2,774 reports from independent legal professionals, though data on how many firms report (or do not report) suspicions are secret, and such data are anonymous, highly classified and related to individual reports and sectors unless they are escalated, in rare cases, to the UK public-private partnership, the Joint Money Laundering Intelligence Taskforce. On the other hand, at a normative level, imposing on lawyers an obligation to consider the legality (even, to some, the legitimacy) of sources of funds and wealth, and the rationale for the legal constructions they put into effect, does not seem in principle to be wrong. If no one other than the lawyer knows about it, and the client is able to do what s/he wishes or to find another, more willing professional if one turns them down, there is little deterrence or prevention. The extension of overseas tax evasion and corruption as predicate offences is a potential game changer in the volume of crimes that touch upon the work of the transaction lawyer, which is one reason for the resistance in the US and some other legal professions to onerous Customer Due Diligence rules. However, no jurisdiction (nor the Financial Action Task Force or the European Union collectively) has grappled seriously with the problem of how to judge effectiveness in the regulation of enablers, beyond the massive reduction in 'crime' that has not yet happened and to which, if serious organised crime did fall (with whatever measurement disputes there might be), they might then make a heuristic 'causal' connection via 'denial of laundering opportunities'. In the absence of a 'solution' to the effectiveness of lawyer regulation, there has been a focus on the number of SARs filed by lawyers and on prosecutions/regulatory interventions against lawyers, which, in the eyes of NGOs fighting kleptocracy and of the FATF, is never 'enough'. Like bankers, lawyers are unpopular, and especially at times of high social inequality and economic crisis they make good targets for folk devilment; there is no reason to think that the struggle to expand the rules to include the legal profession will cease any time soon.

Research involving human participants and/or animals: Approval was obtained from the ethics committee of Cardiff University School of Social Sciences. The procedures used in this study adhere to the tenets of the Declaration of Helsinki.

Informed consent: Informed consent was obtained from all individual participants included in the study.
Cybersecurity in Online Learning: Innovations for Teacher Training and Empowerment Recent technological advances in teaching and learning provide dynamic tools required to meet the educational needs of the digital era. However, teachers and other educators are increasingly experiencing cyber threats while teaching online, thereby affecting the quality of teaching as well as learning outcomes. In recent times, there have been reports of learning disruption caused by cybercriminals using attacks like ransomware, denial of service and data theft. In this context, teachers need to be able to harness the right tools, resources, and instructional practices to ensure not only continuity but also quality and effective learning. In this paper, we use the Cybersecurity Training for Teachers course series as a detailed case study to determine the cybersecurity challenges that educators faced when they moved their classes online, and how the knowledge and skills gained from the training helped to address them. The course series, offered by COL over a period of 2 years, attracted more than 7000 participants from 96 countries. Drawing from participant surveys, the paper presents an assessment of the perceived pedagogical impact of specific cyber threats. Finally, we propose innovations for teacher training and on-going empowerment in cybersecurity, to minimise the negative impact of cyber threats on online teaching and learning. Introduction The use of ICT in the education sector has increased globally over the last few decades, becoming an integral part of teaching and learning (Fouad, 2021). This upward trajectory is expected to continue, with investments in education technology projected to hit $350 billion by 2025 (Li & Lalani, 2020). Developing countries have not been left behind in tapping into technological solutions for learning. The increasing affordability of ICT solutions has resulted in the adoption of digital learning in developing countries (von Solms & von Solms, 2014). However, the uptake of technology in schools has created new risks, particularly around the human factor (Richardson et al., 2020). This has been further exacerbated by rapid technological changes that teachers are not yet abreast of (Amankwa, 2021). The causes of attacks can be traced to human and technical lapses. Students, faculty, and staff sometimes use personal devices to handle school data. The security of these computers might not be sufficient to protect the information accessed (Richardson et al., 2020). Learners in cyberspace might not be aware of the dangers therein pertaining to their personal safety and data (Pusey & Sadera, 2011; von Solms & von Solms, 2014), given that awareness among internet users is, on average, moderate to low (Amankwa, 2021). Young students not only have knowledge gaps but also miss adult support in keeping them safe online (Pencheva et al., 2020). Technically, there are inadequate security measures in place to protect the diverse set of data (academic, research, medical, banking, accommodation, etc.) held by these institutions (Fouad, 2021; Impact Networking, 2021).
The consequences of cyberattacks include loss of data, slow or no access to computer systems, cyber-bullying, exposure to inappropriate content, disruption of classes, cancellation of exams, financial losses, and legal action (Cybersecurity and Infrastructure Security Agency, 2020; Fouad, 2021; Richardson et al., 2020). Repeat incidents have been observed, pointing to a potential failure to learn from previous attacks (Impact Networking, 2021). Despite holding invaluable information, schools have insufficient resources to handle cybersecurity (Ivy et al., 2019). They have inadequate funding, expertise, and capacity to prepare for cyber threats (Fouad, 2021). Thus, education ranks low in the security index by sector despite being among the top targeted segments (Impact Networking, 2021). It is clear that the use of technology can result in physical and emotional harm to users, their data and organisations (Pusey & Sadera, 2011). Therefore, teachers should be equipped with cybersecurity knowledge to safely apply their digital skills in teaching and learning (UNESCO, 2018). This paper uses the Cybersecurity Training for Teachers course series as a detailed case study to determine the cybersecurity challenges that educators faced when they moved their classes online, and how the knowledge and skills gained from the training helped to address them. The course series, offered by the Commonwealth of Learning (COL) over a period of 2 years, attracted more than 7000 participants from 96 countries. Drawing from participant surveys, the paper presents an assessment of the perceived pedagogical impact of specific threats. Finally, we propose innovations and policy considerations for teacher training and on-going empowerment in cybersecurity, to minimise the impact of cyber threats on online teaching and learning. The next section provides the background to the Cybersecurity Training for Teachers course series. Section 3 describes the design and development of the courses and the methodology used to collect and analyse data. Section 4 examines the pedagogical impact of cyberattacks while Section 5 assesses the impact of the training. Section 6 details the innovations for teacher training and empowerment, followed by a conclusion in Section 7. Background Digital learning has increased the use of technology among teachers and students (Sailer et al., 2021). Teachers have leveraged technology to take roll calls, interact with students efficiently and effectively, and share learning resources. Additional benefits have been observed in online learning, including higher retention of information and reduced teaching time. It is anticipated that online learning will be an integral part of school education, with some educators already adopting a blended approach of online and e-learning post-pandemic (Li & Lalani, 2020). Thus, it is essential that they use technology securely (Fraillon et al., 2014).
Teachers should understand the basics of cybersecurity as they use the digital space for teaching and learning (UNESCO, 2018). This is not only for their own safety but for that of their students as well, many of whom might not be fully aware of the dangers in cyberspace. Students tend to disregard online safety rules, thereby finding themselves in situations that teachers and parents do not fully understand (Pencheva et al., 2020). Most teachers have limited knowledge of cybersecurity, making it a challenge to enhance online safety in teaching and learning (von Solms & von Solms, 2014). While preservice teachers learn to integrate technology into instruction, they are not prepared to model or teach cybersecurity due to inadequate knowledge and teaching skills in this area. Consequently, they are unable to identify threats to themselves, their students, and institutions (Pusey & Sadera, 2011). At the same time, schools have limited budgets and resources, since government support for cyber safety in schools is lacking or minimal, especially in most developing countries. As a result, there are no cybersecurity curricula or extramural cyber safety education programmes (von Solms & von Solms, 2014). Covid-19 disrupted learning, affecting nearly 1.6 billion learners in more than 190 countries. The closure of learning spaces affected 99 per cent of low and lower-middle income countries (United Nations, 2020). The pandemic necessitated a shift to remote learning, thereby accelerating the adoption of online learning even though institutions were not prepared with the right skills and infrastructure (Fouad, 2021; Li & Lalani, 2020). Schools deployed learning management systems and social media for online teaching and learning (Garg et al., 2020), emphasising the role of technology in teaching and learning (Sailer et al., 2021). Teachers would opt for asynchronous learning so that they could focus on learners rather than on learning new pedagogy or technology. However, they would still need to post materials online for students to access in their own time. They would also schedule online appointments to engage students (Daniel, 2020). The demand for remote learning resulted in an increase in distributed denial of service (DDoS) attacks and viruses targeting online learning platforms (Fouad, 2021). Similarly, there was a spike in the disruption of classes held via video conferencing, where verbal harassment, the display of pornography and violent images, and the doxing of attendees were rife (Cybersecurity and Infrastructure Security Agency, 2020). It was, therefore, necessary to upskill teachers on cybersecurity, considering that social engineering was and is the leading cause of breaches (Impact Networking, 2021). Teachers should possess skills to handle cybersecurity issues in the classroom (Richardson et al., 2020). Therefore, it is vital to create cybersecurity courses with content that educators can easily grasp and put into use (Ivy et al., 2019). Some of the available courses, such as Future Learn's Introduction to Cybersecurity for Teachers (Future Learn, n.d.), are somewhat technical in nature and require payment. Educators with no relevant background and finances might miss out on such courses, as they would need to spend more time and money (Amankwa, 2021).
Supporting teachers' readiness is important in realising resilient education systems (United Nations, 2020). COL developed two cybersecurity courses, Cybersecurity Training for Teachers (CTT) and Advanced Cybersecurity Training for Teachers (ACTT), to equip teachers, teacher educators and other education practitioners with the skills and knowledge that they need to protect themselves and their students online, as well as to create awareness among parents and other stakeholders in digital learning. Both courses considered the pressing cybersecurity challenges, the skills needed by educators, and their knowledge level. The CTT and ACTT courses were each offered for free in two iterations. Each offer lasted a period of four weeks. Target Audience Most teachers may be unaware of cyber threats, as they have no knowledge of or experience in cybersecurity. Therefore, they should be grounded in cybersecurity, since they teach and advise students as well as observe changed behaviour (Pencheva et al., 2020; von Solms & von Solms, 2014). The CTT and ACTT courses targeted teachers in primary schools, secondary schools, and tertiary institutions. Education practitioners from ministries of education were also free to join. However, from the pre-course survey conducted, the course participants included those in early childhood education and a few individuals outside of the teaching profession. This demographic underscored the significance of the course and the importance of adapting content for a wider audience. Most of the participants were from developing countries in the Commonwealth. A considerable number of schools in these countries are underfunded and may not have the resources to train teachers on cybersecurity (von Solms & von Solms, 2014). They would benefit most from these courses to defend themselves, their learners and institutions, and inspire their students to join the cybersecurity workforce (Ivy et al., 2019; Richardson et al., 2020). Content A holistic approach was adopted in developing content for the courses, as teachers typically engage with students, parents and other stakeholders (Amankwa, 2021). The CTT course was designed for teachers, teacher educators and education practitioners who were likely to have college-level education but no experience in cybersecurity. The course was structured into four modules: i. Introduction to Cybersecurity: covered the basic concepts to ensure participants could connect cybersecurity principles to their classroom practice (Ivy et al., 2019). ii. Cybersecurity Threats and Mitigation: designed to help participants understand cybersecurity threats, vulnerabilities, attacks, and mitigation techniques. iii. Best Practices: it was vital to cover general best practices and those specific to video conferencing, as the technology was prevalent during the pandemic (Cybersecurity and Infrastructure Security Agency, 2020). iv. Cyber Safety for Students: focused on student online protection; online risks; the role of students, teachers, parents, and guardians; incorporating cybersecurity in the classroom; and laws on child online protection. The ACTT course was designed for teachers and teacher educators who had either completed the introductory course, CTT, or had other relevant background. The course was structured into four modules:
i. Advanced Cyber Attacks in Online Learning: covered attack vectors; wireless and mobile device attacks; application and web attacks; and internal threats. The aim was to acquaint participants with the prevalent attacks in online learning (Fouad, 2021; Impact Networking, 2021). ii. Protecting Data: provided appropriate measures for data security. Schools should plan for data security considering the information they hold and the risks they face. This involves implementing technical, administrative, and physical controls (Richardson et al., 2020). iii. Securing Online Communication and Learning Devices: focused on advanced techniques to secure data, devices, and communication between entities in educational institutions. iv. Cybersecurity Concerns in Emerging Educational Technologies: emerging technologies bring new risks, and teachers may face challenges keeping abreast of the threats posed by evolving solutions (von Solms & von Solms, 2014). A cybersecurity preparedness plan was included to ensure teachers could plan, develop and implement cyber safety strategies in schools (Cybersecurity and Infrastructure Security Agency, 2020; UNESCO, 2018). The instructional design included videos, audios, lesson transcripts, articles, case studies, discussions, polls, and webinars. Each module had a quiz and a module assessment to test the participants' understanding of the subject covered. An infographic that summarised the key takeaways of every module was included for ease of reference. Equally, a resource pack containing all the module resources was available for download. Participants could repurpose the OER for teaching (Pencheva et al., 2020). Delivery Platforms The first and second offers of CTT, as well as the first offer of ACTT, were delivered on the MOOCs for Development MooKIT platform. However, the second offer of ACTT was delivered on COL's Teacher Futures Moodle platform for better facilitation and learning experience. Both MooKIT and Moodle are open-source platforms, and their use aligns with COL's purpose of promoting quality learning sustainably using open technology (Commonwealth of Learning, 2021). Course webinars were held via a video conference platform. The technologies deployed were suitable for enhancing learner engagement. They supported lecture videos, self-assessment, networking and communication between learners, and course facilitation (Alturkistani et al., 2018). Evaluation of MOOC Experience Evaluation in MOOCs can help gauge their effectiveness and improve utilisation (Alturkistani et al., 2018). Data was collected using three surveys: pre-course, end-of-course, and a reflective tool known as 'Tell us your story'. The questions were structured to find out the impact of the courses at four levels (reaction, learning, behaviour and results) as outlined by Kirkpatrick & Kirkpatrick (2006). Additional datasets were obtained from the forums and webinar chats. Both quantitative and qualitative analyses were carried out, using Excel and NVivo respectively. The quantitative analysis focused on descriptive statistics. The qualitative analysis employed thematic analysis using the theoretical approach at a semantic level, to determine the cyberattacks participants had experienced, the challenges they faced during training, and the impact of the courses. The data was coded to answer the questions asked (Braun & Clarke, 2006).
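Since the quantitative side of this workflow is plain descriptive statistics, a minimal sketch of how such survey responses could be summarised is given below. It is illustrative only: the authors worked in Excel, Python is used here for convenience, and the file name, column names and Likert coding are hypothetical rather than taken from the actual COL surveys.

import pandas as pd

# Hypothetical end-of-course survey export; column names are illustrative only.
# 'usefulness' is a 5-point Likert item (1 = not useful ... 5 = very useful);
# 'kirkpatrick_level' tags each response with the evaluation level it addresses.
df = pd.read_csv("end_of_course_survey.csv")

# Descriptive statistics per Kirkpatrick level (reaction, learning, behaviour, results).
summary = df.groupby("kirkpatrick_level")["usefulness"].agg(["count", "mean", "std"])
print(summary)

# Share of participants rating the course useful (4 or 5), comparable in spirit
# to the 89-100% "in the affirmative" figures reported later in the paper.
useful_share = (df["usefulness"] >= 4).mean() * 100
print(f"Found the course useful: {useful_share:.1f}%")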
The results of the analyses included coverage (audience reached vs targeted), participation (engagement with the MOOCs), quality (of the MOOCs), achievement (certification and assessment results), and outcomes (changes in the organisation and individual) (Chapman et al., 2016). Pedagogical Impact of Cyberattacks Cyberattacks can disrupt learning in several ways, including slow or no access to learning materials, cancellation of classes, and delayed assessments (Fouad, 2021). Data collected from the second offer of the ACTT course highlighted some of the impacts of cyberattacks on teaching and learning, as follows. Disrupted teaching and learning: some participants had experienced 'Zoombombing', which can halt learning and, in some cases, expose attendees to explicit and violent content (Cybersecurity and Infrastructure Security Agency, 2020). Slow or no access to learning systems occasioned by viruses, malware, or denial of service attacks. "Our computer network does go down from time to time due to malware or suspected hacks" "I found ransomware attack on my office PC" Delayed assessments can result from loss of data or denied access, causing anxiety among learners (Daniel, 2020; Fouad, 2021). Providing prompt and meaningful feedback on assessment is important, as it ensures immediacy and encourages learner engagement, thereby enhancing their online learning experience (Ogange et al., 2018). "The computer holding data for the school of postgraduate studies crashed" Loss of teacher-student trust: impersonation of teachers online could cast them in disrepute, leading to loss of teacher-student trust and student-teacher interaction in the classroom (Carter, 2020). Furthermore, the reputation of the teacher's school might be affected, making it less attractive to talented teachers and prospective students (Endsleigh, n.d.). Some participants shared instances where their social media accounts were hacked and used to tarnish their name, collect money, and spam their network. Loss of teaching and learning materials could lead to more time spent developing them afresh. "My project on my tablet was attacked by a virus, so I had to start from scratch." Mental health: the mental health of teachers who are victims of cyberbullying could negatively affect how they engage with learners and their peers. In some cases, they may opt to leave the profession (Bester et al., 2017). "Someone hacked my Facebook and wrote certain things as if it was me and it almost destroyed me." Impact of CTT and ACTT on Teaching and Learning There is need for more evidence on the impact of MOOCs on learners' knowledge, skills, and attitudes (Alturkistani et al., 2018). This section discusses the impact of the cybersecurity course series. Knowledge Impact: Participants indicated that their knowledge had improved across all eight modules covered in the training. They found the course useful, with those in the affirmative ranging between 89% and 100% across the four offers of the courses. Impediments to Learning and Professional Development: The courses helped participants to think about the impediments they faced. Most indicated that the training helped them to address cybersecurity challenges. They also found it necessary to incorporate ICT in teaching online and in improving the learning experience of their students. Effective use of Cybersecurity Tools: Participants were empowered to use appropriate tools and strategies to protect their devices, data, and communication.
"I was able to apply the knowledge learned about best practices in cybersecurity, using VPNs, managing cookies, and blocking websites as I utilized learning platforms personally and the LMS with my students.Additionally, the encryption of sensitive data such as students' and parents' personal data and students' grades was meaningful to me as well." Readiness to Sensitise Learners and Institutions: A number of participants expressed readiness to sensitise their institutions, learners, and community using various means.Many indicated that they would conduct awareness campaigns through meetings, online learning platforms and classroom lessons. "I have actually started teaching this course to my students because it coincides with one of the topics I am required to teach, that is 'threats to computers and its users'.In fact, this course provided me with the resources needed for the lessons." Performance and Promotion: Participants indicated that the training would improve their teaching practice and performance.It would also be useful in their promotion or in getting alternative opportunities to apply their knowledge. "For sure the course will improve my teaching practice and performance.I have learned skills in document sharing with students and securing their scores and data.I can also help the institution understand the risk involved in the online classes and how to be protected." "It will improve my opportunity for promotion as now my school will rely on me on cybersecurity related matters in all departments." Facilitated MOOCs The MOOCs provide a low-cost solution as an innovative approach to teaching cybersecurity (Fraillon et al., 2014).Though internet connectivity and appropriate learning devices was a challenge to some (Li & Lalani, 2020), the content was delivered on lightweight platforms (Moodle and MooKIT) and in different formats (audio, text, video) accessible to a diverse group of learners. The CTT and ACTT MOOCs were designed to be easily understood by educators to ensure they would be able to apply the skills and knowledge gained in the classroom (Ivy et al., 2019).In fact, a new term was coined for this approach: 'teacherising cybersecurity'. Free and Open Source Tools and OER Several free and open source tools were used which teachers would exploit during and beyond the training.All the resources were available to the participants as OER, which they could repurpose for teaching (Pencheva et al., 2020). In addition, the courses also leveraged native capabilities in the learners' smartphones and computers to secure their data and devices. Cybersecurity Preparedness Plan Schools should plan, prepare, and implement security policies (Cybersecurity and Infrastructure Security Agency, 2020) and put an appropriate response plan in place in case of an incident (Richardson et al., 2020).Participants developed and shared a cybersecurity preparedness plan for their institutions as part of the activities in the ACTT course. Communities of Practice Educators trained in cybersecurity are expected to train others to build more capacity in the area (Amankwa, 2021).Some participants expressed the desire to train their colleagues, family, and community, thereby evolving local and regional communities of practice (Wenger, 1998).Consequently, facilitator guides for both courses were developed which would assist not only in teaching the course but also mentoring other trainers in their CoPs. 
AI-based Teacher Support Online learning should not replicate the traditional classroom. Different pedagogical approaches are required that incorporate collaboration tools and engagement methods which promote inclusion, personalisation, and intelligence (Li & Lalani, 2020). We view innovation in teacher education as the interfacing of knowledge sharing, learning analytics, and the application of AI towards improved learning outcomes. The data from the surveys was analysed to identify challenges that may possibly be resolved using AI. The main outcome of this undertaking was a taxonomy of needs that would lead to sustainable capacity building when on-demand personalised learning is realised, as shown in Figure 1. By integrating AI at all the levels identified in the taxonomy, educators can learn on-demand and access the resources and support they need when learning and teaching cybersecurity (Amankwa, 2021; Pencheva et al., 2020). Policy towards Secure Learning Spaces Even though the education sector does not meet the existential threshold to warrant serious government policy interventions, practitioners and institutions in the sector do face cybersecurity challenges that warrant attention, including financial losses, disruption of learning, and theft of intellectual property, which combined greatly hamper personal and national security (Fouad, 2021). Cybersecurity should be included in pre-service and in-service teacher training programmes (UNESCO, 2018). This would ensure educators adopt teaching models that incorporate cybersecurity, and proactively protect themselves, their learners, and institutions. To realise this, government support is essential in subsidising expenses related to the implementation of cybersecurity programmes in educational institutions and facilitating access to affordable and reliable internet connectivity (Amankwa, 2021; Pusey & Sadera, 2011). Teachers should upskill in AI in "a pedagogical and meaningful way" that is relevant to teaching, learning and research (Popenici & Kerr, 2017; UNESCO, 2019). The use of AI in teaching and learning will redefine the role of teachers, as teaching tools, learning methods, access to knowledge and teacher training are revolutionised. In the same way that ICT skills are important to today's teacher, AI skills will be crucial for tomorrow's educator (Higuera, 2019). Conclusion This paper highlighted the cybersecurity challenges affecting teachers in online learning, and explored innovative approaches to the problem, including MOOCs, AI, communities of practice and policy considerations towards safer learning spaces. Educators and learners can be exposed to cyberattacks that could disrupt teaching and learning in various ways. It is, therefore, imperative to prepare teachers to be aware of and to handle cyberattacks. Disclaimer The views represented in this paper are those of the authors and not those of the Commonwealth of Learning and eKRAAL Innovation Hub.
MOOCs offer a low-cost, innovative solution for teacher training and empowerment. Cybersecurity MOOCs for teachers should be tailored to the context of teaching and learning, with a focus on practicality in the classroom. Such training offers knowledge, tools, and skills that educators can use to sensitise their learners and institutions. It also improves their performance and chances of promotion. As part of teacher professional development, AI has the potential to scale cybersecurity training for teachers through on-demand personalised learning as well as by providing access to targeted OER. Promoting cybersecurity training and the integration of AI requires policy intervention. From a policy perspective, cybersecurity training for teachers should be included in pre-service training and continued after they join learning institutions. Government support is vital for schools and teacher training institutions to successfully implement cybersecurity awareness and training programmes, and to integrate AI in teacher development, teaching, and learning. "It helped me protect the school email account, learner digital devices, and teacher digital devices" "The advanced Cybersecurity Training for Teachers (ACTT) course has helped me to reflect further on interactive learning for both the general education child and children with special needs (Autism spectrum, speech delays, ADHD) and ways to enhance virtual field trips, problem solving, critical thinking, literacy, and numeracy activities through gamification."
2022-12-07T18:55:49.050Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "cdb0f6c6babe19795ec8eafe03ff3982a9eb1bc7", "oa_license": "CCBYSA", "oa_url": "https://doi.org/10.56059/pcf10.8823", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "65769970011083f2c17c6b908c345003da8d1c1d", "s2fieldsofstudy": [ "Computer Science", "Education" ], "extfieldsofstudy": [] }
262049095
pes2o/s2orc
v3-fos-license
Numerical simulation for stress analysis of functionally graded adhesively bonded composite patch repair system Three-dimensional finite element analyses have been carried out for an adhesively bonded composite patch repair of an aluminium alloy plate structure with a centre circular notch. The presence of a defect in the plate causes stress gradients in the bond region, which reduce the performance of the patched panel. The patterns of out-of-plane stress components at various interfaces (plate-adhesive-patch) at different angular positions with reference to the loading direction are evaluated for a mono-modulus adhesive. In the present research work, an attempt has been made to improve the strength of the adhesively bonded composite patch repaired system by reducing stress concentration at the critical regions. This has been achieved by smoothening/relieving stresses over the entire bond region by introducing a functionally graded adhesive (FGA) material in lieu of the conventional mono-modulus adhesive. A linear material gradation function profile has been used to tailor the adhesive layer in the bond region. The effects of the material gradation profile with different modulus ratios on the out-of-plane stresses at different interfaces of the composite patch repaired system have been studied. Numerical simulations based on FE analyses indicate a significant reduction in out-of-plane stress levels, by which the strength of the functionally graded composite patch repaired system will be enhanced. Introduction Aircraft structures are highly optimised structures. However, when they are subjected to severe loading due to wind gusts, ground reactions during take-off and landing, manoeuvres, accelerations, bird strikes, etc., the structures may be damaged. In order to increase the life of the structure, restoration is required. The main objective of a repair is to make the component functional so that it does not cause catastrophic failure of the whole assembly. There are many methods of restoration of damaged aircraft structures. One of the economical ways of restoration is mechanical repair. The modern trend in repairs of aircraft structures recommends the use of adhesively bonded joints. Bonded repairs have advantages such as reduced stress concentrations, improved fatigue life, and light weight. This is a very versatile method which can be used for joining a large range of materials, including dissimilar materials. It can be used for a variety of interfaces such as metal-to-metal, metal-to-composite and composite-to-composite. In adhesively bonded patch repair, the adhesive layer is used to connect the patch and the parent structure, which ensures stress and deformation transfer between them. Hence, the adhesive film experiences different types of stresses such as shear, tension, cleavage and peel. The adhesive film experiences high stress gradients at the overlap ends due to the change in stiffness [1]. The peel and shear stress distributions in the adhesive layer are not uniform. These stresses have maximum values at the overlap ends, which causes stress concentration. This results in reduced fatigue life and de-bonding, which ultimately reduces the overall patch repair efficiency and performance. Hence, to improve the performance of the patch repair system, the stress gradients need to be reduced. Stress concentration can be reduced by redistribution of the stresses over the bond area.
Gu et al. [2] determined the behaviour of bonded composite patch repair for an aluminium substrate through a numerical method. Boron/epoxy patches absorb dissipated energy. A reduction in adhesive thickness reduces the SIF in the panel due to greater transfer of load from panel to patch. Marioli-Riga et al. [3] explained an analysis methodology for the design of composite patch repairs required for damaged metallic components. This methodology resulted in a single design algorithm. It shows a systematic way to guide the designer through the design/analysis process. Mohammadi et al. [4] concluded that two-sided patch efficiency is higher than that of a one-sided patch. Similarly, composite patch efficiency is higher than that of a metal patch. Tsamasphyros et al. [5] carried out a study on composite patch repair by analytical and numerical methods. The Rose model is simple and limited to simple geometries, but it can still be useful for design approximations in the early stages of design and for calculating the remaining life of the component. Rodríguez et al. [6] observed that the load achieved in patched specimens is twice that of unpatched ones. Sand blasting of the bonding surface improves the adhesion between steel and composite. Yang et al. [7] observed that the angle of the repair plays an important role in composite patch repairs of the scarf joint. Lee et al. [8] studied the effects of different patch shapes in composite patch repairs. For an inverted triangular patch shape, bonding strength is improved. Wang et al. [9] studied an analytical method for stepped scarf patch repair. The model accurately predicts the stress distribution along the bondline. Cohesive zone modeling has been widely used in recent years to predict fracture in adhesive joints and composite laminates [10]. Cohesive process zone models can deal with the nonlinear zone ahead of the crack tip due to plasticity or micro-cracking present in many materials. Precise determination of the traction-separation laws is crucial to the completion of the cohesive zone model. Cohesive behavior specified directly in terms of a traction-separation law can be used to represent delamination at interfaces in composite laminates and enables the specification of material data such as the fracture energy as a function of the ratio of normal to shear deformation (mixed mode) at the interface. Within the cohesive model, the damage and fracture of a structure are modeled by a damage-free bulk material and special interface elements (cohesive elements). Ghabezi and Farahani [11] assessed the effects of nanoparticles on the bridging laws, cohesive mechanism and traction-separation parameters of nanocomposites in mode-I and mode-II fracture. To do this, the experimental data from double cantilever beam and end notched flexure tests were analysed, including construction of the R-curves (energy release rate versus crack length), reconstruction of these curves in terms of the pre-crack tip opening and sliding displacement, and calculation of the corresponding bridging and traction-separation laws through the J-integral approach. Vadean et al. [12] examined the effects of different parameters on the stress state in the patch adhesive, such as ply orientation, stacking sequence, ply thickness and scarf angle. They explored an adaptive design of the repair patch for orthotropic materials and composite laminates, using 3D finite element analysis. The optimization method and the improved shapes were presented and discussed with respect to the influencing parameters.
Shinde et al. [13] repaired thin panels of aluminium alloy 6061-T6 with a centre pre-crack with a one-sided asymmetrical CFRP patch through co-curing of epoxy at room temperature to obtain minimal residual stresses. The specimens were tested with a tension-tension fatigue load. The numerical simulation as well as the experiments showed that the fatigue life was improved considerably with increasing patch length when tested at room temperature. Borges et al. [14] discussed the progress in adhesive materials, divided into structural and non-structural adhesives, and highlighted advances in second- and third-generation acrylics, toughened epoxies, and two-component polyurethanes for structural applications, and pressure-sensitive adhesives and bio-adhesives in the non-structural category. Afterwards, novel joint designs and manufacturing concepts were discussed, including functionally graded joints, hybrid joints, self-dismantling joints, and additive manufacturing applied to adhesive joints. Uslu and Kaman [15] repaired carbon fiber-reinforced U-notched composite plates with composite patches made of the same material using adhesive bonding, and their mechanical performance was investigated experimentally and numerically. Repairs were made by applying a double-sided composite patch to the notched thin composite plates. The repaired composite plates were tested under tensile load. The effect of the variation in notch width and depth on the repaired plate failure load and type was investigated. Ghabezi and Farahani [16] performed an experimental investigation and comparison between different bridging laws. For mode-II fracture in the presence of nano-particles, these laws are calculated from three data reduction schemes for describing the bridging zone and the trapezoidal traction-separation law parameters. For the calculation of the energy release rate in mode-II fracture, three corresponding data reduction schemes (the compliance calibration method, corrected beam theory and the compliance-based beam method) were utilised for different percentages of nano-particles in the adhesives and the adherends. Moreira et al. [17] addressed the behaviour of aluminium adherends repaired with CFRP patches under quasi-static and fatigue loading. Three types of bonded repairs, single-strap, double-strap and scarf, were evaluated under three-point bending loading. Representative load-displacement curves in the quasi-static case and compliance versus number of cycles curves under fatigue loading were obtained. The purpose was to determine the static strength and fatigue lives for the three repair geometries. Numerical analyses considering two different cohesive zone models, suitable for quasi-static and fatigue loading, were carried out. The aim was to compare the numerical predictions with the experimental results for all analysed cases, in order to validate appropriate numerical procedures to deal with the quasi-static and fatigue behaviour of bi-material bonded repairs. Truong et al. [18] investigated the failure of scarf patch-repaired composite laminates with different scarf angles subjected to different bending loads. An experimental program was proposed, as well as a finite element model simulation with cohesive zone modeling. The failure loads, including the crack initiation and propagation process, which are extremely difficult to observe in experiments, were considered in detail with the finite element model simulation.
Carbas et al. [19] carried out a study on the repair of wood structures. They used a functionally graded bondline obtained through graded cure of the adhesive, which imparts a gradation of elastic modulus along the bondline. In areas of high stress concentration, adhesive with a low elastic modulus is used, while stiff adhesive is used in areas where high strength is required. They carried out numerical simulation, and the results were validated with an experimental set-up. Their results showed that gradation of the adhesive elastic modulus can improve the strength and reliability of the component. Marques et al. [20] carried out a simulation of a double lap joint of aluminium alloy specimens. They provided an internal taper at the edges and also used dual adhesives for gradation of the adhesive. The strength is improved for components provided with a taper compared to the taper-less configuration. The combination of a ductile adhesive (2015) and a brittle adhesive (AV138) imparts synergetic effects. Joints with dual adhesives have strength equivalent to that of the stiff adhesive alone. This can be useful during dynamic loading, where a ductile adhesive is required to withstand shocks while, at the same time, the stiff adhesive provides the strength. Breto et al. [21] varied the adhesive properties along the bondline. The study was carried out for improvement of strength of an aluminium-composite joint under shear loading. Material tailoring of the bondline is helpful even for discrete models. They were able to show an improvement in joint strength of the order of 70% compared to that of a mono-modulus adhesive. Paroissien et al. [22] observed that gradation of the elastic modulus can reduce the peak adhesive stresses. They proposed gradation of properties for the adhesive as well as for the adherend in order to improve the strength of single lap joints. Khan et al. [23] carried out a numerical simulation of a through-thickness defect repaired with a circular patch. They compared results of the considered repaired structure with a mono-modulus bondline (MMB) to those with a functionally modulus graded bondline (FMGB). By using FMGB, stress concentrations at the defect and at the edges of the patch are redistributed to ensure a uniform distribution of stress and to reduce peak adhesive stresses along the bondline. Srinivasan et al. [24] carried out finite element analysis and experimental tests for a single lap joint using a bi-adhesive. They used AF3109 and EA9696 for tailoring of the joint. The failure strength of the joint was improved by 51.64%. The failure mode changed from adherend fibre tear to cohesive failure of the adhesive due to tailoring of the bondline. Nimje and Panigrahi [25] carried out three-dimensional non-linear finite element analyses of an adhesively bonded double supported tee joint. Strain energy release rates (SERR) were used to determine damage growth parameters. Damage initiation takes place at the interface of the adhesive layer and the base plate. The use of functionally graded adhesives showed significant lowering of the damage growth parameters. An increase in modulus ratio causes a greater decrease in these damage growth driving forces. Dugbenoo et al. [26] proposed the use of an additive manufacturing (AM) technique for material tailoring of the joint. AM tailoring increases the surface area available for bonding. The AM-tailored joint showed a 5.5 times improvement in strength compared to the baseline single lap joint.
Nakanouchi et al. [27] used mixing of two types of acrylic adhesives for tailoring of the bondline. The ratios of the two adhesives were varied to check the adhesive properties. They used an adhesive applicator for the physical testing. An increase in the ratio of hard adhesive increases the modulus of elasticity while, on the other hand, it decreases the failure strain of the adhesive. Hence it is possible to manufacture an adhesive layer with a gradation of properties. Marques et al. [28] discussed the latest technologies in the design and manufacturing of FGA. They suggested gradation of the adhesive by embedding particles in the adhesive, differential curing of the bondline, and adhesive mixing. Gradation enhances mechanical properties such as strength, stiffness and toughness along the bondline. The ability to tailor the adhesive opens a new facet in engineering design through the use of sophisticated, high-performance adhesive bonds. Silva et al. [29] manufactured an adhesive layer containing iron micro-particles and used a magnetic field to create a graded particle distribution along the overlap region. Iron micro-particles can effectively be used to enhance the mechanical properties of the adhesive in a joint, although the effectiveness of this reinforcement is highly dependent on the particle amount and distribution. They also validated numerical simulation against experimental data. Rudawska [30] studied the effects of curing methods on the mechanical properties of the joint. She observed that high temperature curing is advantageous for the strength of the joint. From the above literature survey, it is clear that in composite patch repairs the use of a mono-modulus adhesive causes stress concentration. Very little literature is available on reducing stress concentration in the patch-repaired system. In the present work, numerical stress analyses are performed for an aluminium alloy (2024-T3) plate with a circular notch repaired by a functionally graded adhesively bonded composite patch using finite element analysis. The FE model of the patch repaired system has been validated against published results available in the literature. A systematic parametric study has been performed with various modulus ratios in order to investigate their effect on the out-of-plane stresses at different interfaces of the composite patch repaired system. Geometric modelling The geometry of the repair configuration of the plate is shown in Fig. 1. It consists of a rectangular plate of length (L) 160 mm, width (W) 40 mm and thickness (t1) 2 mm. The material of the plate is aluminium alloy 2024-T3. The plate contains a through-thickness circular notch of diameter (d) 5 mm. The plate is subjected to a uniform tensile load (F1) of 8 kN. The origin of the coordinate system is located at the centre of the hole and at the middle of the plate across the thickness. The material properties of the plate are given in Table 1. The plate is repaired with a circular composite patch. The diameter (D) of the patch is 25 mm and its thickness (t3) is 2 mm. The patch consists of 4 layers of boron/epoxy composite lamina, each having a thickness of 0.5 mm, with a fibre orientation of [45/-45/45/-45]. Layer-wise orthotropic material properties of the boron/epoxy lamina in the principal material directions are given in Table 2.
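For reference, the applied load and the plate cross-section stated above fix the nominal remote stress in the plate; this follows directly from the given values:

σ = F1/(W × t1) = 8000 N/(40 mm × 2 mm) = 100 MPa

so the adhesive peak stresses reported later for the mono-modulus case (about 37 MPa) can be read against a 100 MPa remote stress.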
The patch is bonded to the plate from both sides, as in a symmetric repair. An adhesive film of thickness (t2) 0.1 mm is used for bonding the patch to the plate. In the present work, two types of adhesives, namely a mono-modulus adhesive and a functionally graded adhesive, are used to bond the patch to the defective plate structure. The properties of the mono-modulus adhesive are given in Table 3. The details of the functionally graded adhesive considered in the present work are discussed extensively in the following section. Modeling of bond region with functionally graded adhesive (FGA) Many researchers [33-35] have considered linear and exponential function profiles for functionally graded materials. The same authors used FGA materials in earlier work [33]. One of the objectives was the optimization of the suitability of the graded profile (linear or exponential) by comparing the magnitude of the stress components for an out-of-plane joint structure. It was clearly spelt out that linear material gradation function profiles offer a better reduction in the magnitude of the peak values of peel stress, based on 3D FE analysis. Again, the same authors analysed the effects of exponential and linear material gradation profiles on stress intensity factors (SIF) in their earlier work [36]. It was observed that the SIF reductions are more pronounced for the linear material gradation profile compared to the exponential profile. Also, Chandran and Barsoum [34] suggested the advantages of using a functionally graded material with a linear function profile based on the SIF value for a finite-width functionally graded plate with an embedded crack. The aforesaid reasons justify the use of a linear function profile in the present research for improved structural performance of the patch repaired system, since the structural performance of a functionally graded adhesively bonded patch repaired system largely depends on the peak values of the stress levels and SERR. Significant reductions in those values are observed due to the use of the linear graded function profile. Hence, in the present investigation, a continuous variation of the elastic modulus of the adhesive along the bond region has been considered. The smooth variation of the bond region modulus has been implemented by applying a number of rings of adhesive of different moduli in the bond region, expressed by the linear function profile of Eq. (1) [34], where E1 is the modulus of elasticity of the stiff adhesive and E2 is the modulus of elasticity of the flexible adhesive. It was assumed that the relationship between the Young's modulus and the shear modulus remains constant with grading or, in other words, that Poisson's ratio ν remains constant [34]. Figure 2 shows the representative functionally graded adhesive layers in the bond region with the terminology used in the above equation. The elastic modulus ratios used in the present work are 1, 1.5, 2, 4 and 6. The elastic modulus ratio of 1 corresponds to the mono-modulus adhesive, where the elastic modulus is constant. The elastic modulus ratios of 1.5, 2, 4 and 6 correspond to the graded adhesive, where the modulus of the adhesive varies linearly over the different radii of adhesive, as shown in Fig. 3.
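The body of Eq. (1) did not survive text extraction, so the following is a hedged reconstruction rather than the authors' published formula. A linear ring-wise profile consistent with the results reported below (peak-stress relief at both the hole edge and the patch edge, and the conclusion that flexible adhesive belongs at the edges with stiff adhesive in the interior) would interpolate the modulus linearly in the bond radius r between E2 at the edges (ri = 2.5 mm, ro = 12.5 mm) and E1 at the mid-bond radius rm:

E(r) = E1 - (E1 - E2) × |r - rm| / (rm - ri),   rm = (ri + ro)/2,   R = E1/E2

with the shear modulus following as G(r) = E(r)/(2(1 + ν)) from the constant Poisson's ratio assumption stated above.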
Meshing scheme Plate The plate is modelled with solid elements. Two-dimensional elements are generated on the front face of the plate. Around the hole, 120 elements are created. In the patch region, the size of the element in the radial direction is kept constant. These 2D elements are then dragged along the thickness direction to generate the volume. In the thickness direction, six elements are generated. The eight-noded brick element SOLID185 from ANSYS is used for discretization of the geometry. Figure 4 illustrates views of the meshing scheme of the repaired plate structure. Composite patch The composite patch is modelled with three-dimensional solid elements (SOLID185). The pattern of the area meshing of the patch/adhesive interface is generated to be similar, so that the meshes can easily be coupled with respect to each other at the interface. Since the patch is made of a composite laminate having different lay-up orientations, the layer angles are defined by assigning the element coordinate system to the patch elements. Each layer is assigned one element in the thickness direction. It is assumed that the patch/adhesive interface is perfectly bonded; hence the nodes are coupled at the interfaces to reflect the perfectly bonded behaviour. Figure 5 illustrates the FE model of the composite patch. Functionally graded adhesive Similar to the composite patch, the functionally graded adhesive is modelled with three-dimensional solid elements (SOLID185). The pattern of the area meshing of the adhesive/patch and adhesive/plate interfaces is generated to be similar, so that the meshes can easily be coupled with respect to each other at the interface. It is assumed that the adhesive/patch and adhesive/plate interfaces are perfectly bonded; hence the nodes are coupled at the interfaces to reflect the perfectly bonded behaviour. The meshing scheme of the adhesive is highlighted in Fig. 6. For modelling of the patch repaired system with a functionally graded adhesive, the adhesive region is assumed to have a Young's modulus changing along the radial direction according to Eq. (1). In the finite element model, the changes in material property have been modelled discretely, by assigning the value of E(r) at the middle of each of the elements [34] within the adhesive layer. The smooth variation of the elastic moduli is ensured by keeping a fine mesh (mesh size tending to zero) along the bond region. One hundred material properties are created for the gradation of the adhesive elastic modulus. For this purpose, a MATLAB code was written; its output is used in the Mechanical ANSYS Parametric Design Language (Mechanical APDL) for creating the material models and assigning them to the respective elements based on their spatial coordinates (a simplified sketch of this step is given after this subsection). Boundary conditions The boundary conditions applied to the configurations of the FE model shown in Fig. 7 are summarised in Table 4. Validation of FE model In the present work, the FE model is validated by comparing the results with the available literature, considering an aluminium plate with a circular defect repaired by a circular patch made of graphite/epoxy. The out-of-plane shear stresses in the adhesive layer are evaluated and compared with those obtained by Madani et al. [37]. Referring to Fig. 8, the distribution of the out-of-plane shear stress at the mid-surface of the adhesive layer of the patch repair system shows good agreement with the available results.
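The sketch below illustrates the graded material-assignment step described above. It is a minimal illustration only: the authors used MATLAB, Python is used here for convenience; the gradation function is the hedged reconstruction of Eq. (1) given earlier; and the modulus values E1, E2 and ν are placeholders rather than the values of Table 3. Only the standard APDL material-property commands MP,EX and MP,PRXY are relied upon.

import numpy as np

# Bond region geometry (mm), from the paper: hole-edge and patch-edge radii.
r_i, r_o = 2.5, 12.5
r_m = 0.5 * (r_i + r_o)  # mid-bond radius

# Placeholder moduli (MPa) and Poisson's ratio; R = E1/E2 = 6, the largest ratio studied.
E1, E2, nu = 2400.0, 400.0, 0.32

def modulus(r):
    # Assumed linear profile: stiff (E1) at mid-bond, flexible (E2) at both edges.
    return E1 - (E1 - E2) * abs(r - r_m) / (r_m - r_i)

# One hundred adhesive rings, one APDL material model per ring,
# with E(r) evaluated at each ring's mid-radius as described in the text.
n_rings = 100
edges = np.linspace(r_i, r_o, n_rings + 1)
with open("fga_materials.inp", "w") as f:
    for mat_id, (ra, rb) in enumerate(zip(edges[:-1], edges[1:]), start=1):
        e_mid = modulus(0.5 * (ra + rb))
        f.write(f"MP,EX,{mat_id},{e_mid:.2f}\n")   # Young's modulus of ring mat_id
        f.write(f"MP,PRXY,{mat_id},{nu}\n")        # Poisson's ratio held constant

The generated file can be read into the APDL session with /INPUT, after which each adhesive element is pointed at the material whose ring contains its centroid radius.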
Stress analysis of patch repaired structure with mono-modulus adhesive The stresses in the adhesive film for the mono-modulus adhesive (R = 1) are not uniform. The adhesive stress is higher at the edges of the hole and of the patch. The stress values vary from 37.2 MPa at the edges to 2.9 MPa away from the edges. Such a large variation is observed in the adhesive layer, and it causes stress concentration in the adhesive layer. Similar behaviour was observed by Khan et al. [23]. Similarly, the stress values are not uniform with reference to the loading direction: the adhesive stresses vary with the angular position relative to the loading axis. The stress values also vary across the thickness of the adhesive. Variations of the out-of-plane normal stress (σz) and shear stress (τyz and τxz) components are obtained for the mono-modulus adhesive (R = 1). The stress components are plotted as three-dimensional graphs over the bond radius and the angular position with the loading axis. These plots are obtained for three different positions along the thickness of the adhesive, that is, the adhesive-plate interface, the mid-adhesive layer and the adhesive-patch interface. The stress patterns for the out-of-plane normal stress (σz) and shear stress (τyz and τxz) components are similar among all three interfaces, but the values are different. Of these three interfaces, Fig. 9 shows the stresses for the mid-adhesive layer. Referring to Fig. 9a, the out-of-plane adhesive stress (σz) varies from a maximum negative value at bond radius (r = 2.5) to a maximum positive value at bond radius (r = 12.5). These locations are the edge of the hole in the panel and the edge of the patch respectively, which causes stress concentration in the adhesive layer. The normal stress (σz) in the adhesive away from the stress concentration zone is nearly constant over the remaining bond region. At the extreme bond radii, that is, at the edge of the hole and the edge of the patch, peaks in the normal stress (σz) can be observed. Here, the variation of the adhesive normal stress (σz) at different angular positions can be seen clearly. Over the different angular positions with reference to the loading axis, the adhesive normal stress (σz) varies smoothly. Along the loading direction, that is, at angles of 0° and 180°, the stress value is higher, while it is lower in the direction perpendicular to the loading axis, that is, at angles of 90° and 270°. Away from the edges of the hole and the patch, the effect of the angular position with the loading axis on the adhesive normal stress (σz) is negligible. The out-of-plane shear stress (τyz) distribution is shown in Fig. 9b. The value of this stress component varies from a maximum negative value at an angular position of 0° with the loading axis to a maximum positive value at an angular position of 180° with the loading axis. Similar behaviour of this stress component is observed for different bond radii, but it has higher values at bond radii (r = 2.5) and (r = 12.5) and lower values at bond radius (r = 7.5). Also, at an angular position of 0°, the variation of this stress component against bond radius is a smooth, semicircle-like curve having a maximum negative value at bond radii (r = 2.5) and (r = 12.5) and a minimum value at the mean bond radius (r = 7.5). A similar pattern of stress is observed at an angular position of 180°, but there the stress values are positive. The variation of the shear stress (τxz) component with bond radius and angular position with the loading direction is seen in Fig. 9c.
As compared to the other two stress components (σz and τyz), the magnitude of this shear stress (τxz) is smaller and the stress pattern is different. At the extreme bond radii, the variation of this stress shows different behaviour at different angular positions. In Fig. 9, we have seen the adhesive stresses at the mid-adhesive layer. Interfacial stresses are also obtained at the adhesive-patch interface and the adhesive-plate interface. Figure 10 shows the out-of-plane normal stress (σz) at the extreme bond radii (r = 2.5 and r = 12.5) and at the different interfaces, namely the mid-adhesive layer and the adhesive-patch and adhesive-plate interfaces. The pattern and nature of the adhesive normal stress are similar along the thickness of the adhesive, but the values of the stresses are different. At the adhesive-plate interface, the value of the adhesive normal stress (σz) is higher compared to the other two positions along the thickness of the adhesive, and it has its lowest value at the adhesive-patch interface. The out-of-plane shear stress (τyz) at the edge of the hole (r = 2.5) and at the edge of the patch (r = 12.5) is shown in Fig. 11a, b respectively. At the edge of the hole (r = 2.5) there is not much variation in this stress component across the thickness of the adhesive, while at the edge of the patch (r = 12.5) the value of the adhesive shear stress (τyz) is slightly higher at the adhesive-patch interface compared to the other two positions along the thickness of the adhesive. Similarly, the variation of the shear stress (τxz) in the adhesive layer at the boundaries, that is, at the edge of the hole (r = 2.5) and the edge of the patch (r = 12.5), is shown in Fig. 12a, b respectively. The behaviour of this stress component (τxz) is not similar across the extreme bond radii, but the peak value of the stress is observed at the extreme bond radii only. At bond radius (r = 2.5), the shear stress (τxz) varies from a maximum in the negative direction at an angular position of 90° with the loading axis to a maximum in the positive direction at an angular position of 270° with the loading axis. At bond radius (r = 12.5), the shear stress (τxz) varies with the direction of the load but has peak stress values at angular positions of 90° and 270°, similar to those at bond radius (r = 2.5) but opposite in sign. Along the bond radius, for an angular position of 90°, this stress changes from a maximum negative value at bond radius (r = 2.5) to a maximum positive value at bond radius (r = 12.5). Similar behaviour is observed at an angular position of 270°, but with the opposite sign. Similar to the shear stress (τyz) component, the value of the adhesive shear stress (τxz) is slightly higher at the adhesive-patch interface along the thickness of the adhesive. As observed in the above discussion, the stress components (σz and τyz) have their maximum values in the direction of the load. Hence, this direction is considered for comparing stresses for the different elastic modulus ratios (R). Variations in the adhesive stresses were observed along the thickness of the adhesive layer. The normal stress (σz) has its maximum value at the adhesive-plate interface; the value is 18% higher at the edge of the hole compared to that at the other interfaces. The shear stress components (τyz and τxz) have their maximum values at the adhesive-patch interface; a variation of about 4-6% is observed for the shear stresses compared to the other interfaces. The adhesive stresses take intermediate values at the mid-adhesive layer.
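Three-dimensional stress surfaces like those of Fig. 9 are straightforward to regenerate from exported interface results. A minimal sketch is given below; the paper does not describe its plotting workflow, so the CSV file name and the column names (r, theta, sz) are hypothetical.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical export of mid-adhesive results: columns r (mm), theta (deg from
# the loading axis) and sz (out-of-plane normal stress, MPa).
df = pd.read_csv("mid_adhesive_sz.csv")

# Arrange onto a regular (theta, r) grid for a surface plot like Fig. 9a.
grid = df.pivot_table(index="theta", columns="r", values="sz")
Rg, Tg = np.meshgrid(grid.columns.values, grid.index.values)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(Rg, Tg, grid.values, cmap="viridis")
ax.set_xlabel("bond radius r (mm)")
ax.set_ylabel("angle from loading axis (deg)")
ax.set_zlabel("sigma_z (MPa)")
plt.show()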
Stress analysis of patch repaired structure with functionally graded adhesive

The earlier section discussed the stress distribution in a conventional mono-modulus adhesive (R = 1), whose properties are constant with respect to the space coordinates. The stresses within the adhesive layer are not uniform: they are higher at the boundaries of the adhesive, which causes stress concentration. With the availability of modern manufacturing techniques for producing functionally graded adhesives, it is feasible to change the properties of the adhesive with respect to the space coordinates. Hence, the effect of grading the adhesive elastic modulus is discussed in this section.

Figure 13 shows the out-of-plane normal stress (σz) distribution along the bond radius for different elastic modulus ratios (R) of the adhesive. The normal stress varies from a maximum negative value at bond radius r = 2.5 to a maximum positive value at r = 12.5 and is nearly constant, close to zero, between r = 5 and r = 10. For the elastic modulus ratio R = 1, the peak stress at the edges of the hole and the patch is highest; as the modulus ratio increases from R = 1 to R = 6, the peak adhesive stress at these edges reduces. The pattern of variation of the normal stress along the bond radius is similar for the different elastic modulus ratios (R). The reduction of the peak adhesive normal stress in the stress concentration zone is promising, while away from that zone, between r = 5 and r = 10, the adhesive stress changes little with the gradation of the adhesive elastic modulus. For higher elastic modulus ratios (R), the adhesive normal stress over the bond region between r = 2.5 and r = 12.5 becomes more uniform. Hence, a low-modulus adhesive can be used at the edges, where the stress concentration occurs, to reduce the high edge stresses, while a high-modulus adhesive can be used in the interior, away from the edges, where gradation has little effect; this allows more load to be carried.

The out-of-plane shear stress (τyz and τxz) distributions along the bond radius for different elastic modulus ratios are shown in Figs. 14 and 15, respectively. The shear stresses (τyz and τxz) are maximum at the edges for the mono-modulus (R = 1) adhesive, and the variation between the maximum and minimum stress along the bond region is also largest for this adhesive. As for the normal stress, the pattern of variation of the shear stress along the bond radius is similar for the different elastic modulus ratios (R). As the elastic modulus ratio increases, the shear stress (τyz and τxz) values at the edges decrease, together with the spread between the maximum and minimum stress values. Hence a higher elastic modulus ratio (R) gives a more uniform shear stress distribution over the bond region.

Table 5 lists the values of the adhesive stresses at the different adhesive interfaces for various modulus ratios at bond radii r = 2.5 mm and 12.5 mm. It also shows the reduction in peak adhesive stress for the higher modulus ratio (R = 6) relative to the mono-modulus (R = 1) adhesive. The percentage stress reduction due to the gradation of the adhesive elastic modulus is the same across the thickness of the adhesive. The two major stresses are the normal stress (σz) and the shear stress (τyz): the normal stress is reduced by 41% and 48% at r = 2.5 and r = 12.5, respectively, and the shear stress (τyz) component is reduced by 59% due to the gradation.
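The paper does not state the explicit gradation law used between the flexible edges and the stiff interior, so the sketch below is only illustrative: it assumes a smooth profile in which the modulus equals an edge value at r = 2.5 and r = 12.5 and rises to R times that value at the mean bond radius r = 7.5. The function name graded_modulus and the cosine-shaped profile are assumptions, not the authors' formulation.

```python
import numpy as np

def graded_modulus(r, e_edge=1.0, ratio=6.0, r_in=2.5, r_out=12.5):
    """Illustrative modulus profile for a functionally graded adhesive.

    The modulus equals e_edge at both bond edges (r_in, r_out) and rises
    smoothly to ratio * e_edge at the mean bond radius, mimicking a flexible
    adhesive at the stress-concentration zones and a stiff adhesive in the
    interior. The cosine shape is an assumption, not the paper's law.
    """
    r = np.asarray(r, dtype=float)
    t = (r - r_in) / (r_out - r_in)                  # normalized position, 0..1
    profile = 0.5 * (1.0 - np.cos(2.0 * np.pi * t))  # 0 at edges, 1 at mid-radius
    return e_edge * (1.0 + (ratio - 1.0) * profile)

# Sample the profile along the bond radius (mm), as in Figs. 13-15.
radii = np.linspace(2.5, 12.5, 11)
for r, e in zip(radii, graded_modulus(radii, ratio=6.0)):
    print(f"r = {r:5.2f} mm  ->  E/E_edge = {e:4.2f}")
```

With ratio = 1 the profile is flat, recovering the mono-modulus baseline case used for comparison in Table 5.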
The reduction in stress observed is significant for an elastic modulus ratio (R) of 6, and for higher modulus ratios (R > 6) further reduction in the adhesive stress is possible. Hence, tailoring the adhesive elastic modulus along the bond region reduces the stress concentration in the adhesive layer. This improves the joint strength and performance, and the onset of failure in the adhesive layer can be delayed considerably.

Conclusions

The present research work shows that a functionally graded adhesive has a significant effect on the out-of-plane stresses induced at the various interfaces of a patch repaired system. The specific conclusions of this work are as follows:

• For the mono-modulus (R = 1) adhesive, the out-of-plane normal and shear stresses in the adhesive layer are not uniform. The adhesive normal stress (σz) varies from a maximum negative value at bond radius r = 2.5 to a maximum positive value at r = 12.5; it is highest at the edges of the hole and the patch, which causes stress concentration in the adhesive layer. The normal stress is nearly constant, close to zero, between r = 5 and r = 10. The adhesive shear stresses (τyz and τxz) have their maximum values at the edges.

• The adhesive stresses vary with angular position relative to the loading axis. The adhesive normal stress (σz) is higher along the loading axis, while it is lower perpendicular to the loading axis; away from the edges of the hole and the patch, the effect of angular position on σz is negligible. The shear stress (τyz) component varies smoothly from a maximum negative value at an angular position of 0° to a maximum positive value at 180° with respect to the loading axis. The shear stress (τxz) component varies from a maximum negative value at 90° to a maximum positive value at 270°. Hence the out-of-plane normal stress (σz) and shear stress (τyz) components have their maximum values in the direction of the load, while the shear stress (τxz) component has its maximum perpendicular to the load.
• The pattern & nature of the adhesive stresses is similar along thickness of the adhesive but values of the stresses are different.At adhesive-plate interface, the value of adhesive normal stress (σ z ) is higher, while at adhesive-patch interface, the value of adhesive shear stress (τ yz & τ yz ) is slightly higher as compared to other two positions along the thickness of the adhesive.• Tailoring of adhesive across bond region reduces peak adhesive stresses.For mono-modulus adhesive, variation between maximum & minimum stress along bond region is higher.As the modulus ratio increases, the peak adhesive stress at the edges of hole and patch reduces.As the modulus ratio increases, the stress value at the edges decreases along with reduction in variation between maximum and minimum stress values.Hence, higher modulus ratio (R) causes uniform shear stress distribution in the bond region.• The effect of reduction in peak adhesive stress at the stress concentration zone is promising, while away from stress concentration zone that is between bond radius (r = 5) to (r = 10) there is smaller change in the adhesive stress due to gradation of adhesive elastic modulus.Hence, at the edges where there is stress concentration zone, flexible adhesive can be used to reduce peak adhesive stress.Similarly at the interior, away from the edgesstiff adhesive can be usedtocarry more load.• Due to functionally graded adhesive, reduction up to 59% in stress observed for elastic modulus ratio (R) of 6.For higher modus ratio (R > 6) the further more reduction in adhesive stress is possible.This reduces stress concentration in the adhesive layer and improves the strength and performance of patch repaired system.Furthermore, failure onset in adhesive layer can be delayed drastically.Hence, functionally graded adhesive is recommended for the designer of patch repaired system for its enhanced life and performance. Fig. 1 Fig. 1 Geometry of the patched plate structure Fig. 6 Fig. 6 FE model of adhesive; a full model, and b enlarged view Fig. 7 Fig. 7 Boundary conditions for the model Fig. 11 Fig. 12 Fig. 11 Out-of-plane shear stress (τ yz ) distribution for three interfaces at different angular positions along bond radius; a r = 2.5, b r = 12.5 Fig. 13 Fig.13 Out-of-plane normal stress (σ z ) distribution at midadhesive layer along bond radius for different elastic modulus ratios (R) Table 1 Modulus of elasticity at instantaneous bond radius (r).r o -Outer radius of the adhesive.r i -Inner radius of adhesive.r m -Mean radius of adhesive which is given by Material gradients are evaluated in terms of modulus ratio (R) which is expressed as follows:
The assessment of the serum levels of TWEAK and prostaglandin F2α in COVID-19

Background/aim: It is claimed that the aberrant immune response plays a more important role than the cytopathic effect of the virus in the morbidity and mortality of coronavirus disease 2019 (COVID-19). We aimed to investigate the possible roles of the tumor necrosis factor-like weak inducer of apoptosis (TWEAK)/Fn14 pathway and leukotrienes (LT) in the uncontrolled immune response that occurs in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection.

Materials and methods: This study included 25 asymptomatic patients and 35 patients with lung involvement who were diagnosed with COVID-19, as well as 22 healthy volunteers. Lung involvement was determined using computed tomography. Serum TWEAK, LTE4 and prostaglandin F2α (PGF2α) levels were determined.

Results: Compared with the healthy control group, TWEAK, LTE4 and PGF2α levels were higher in the group with SARS-CoV-2 infection without lung involvement. In the group with SARS-CoV-2 infection and lung involvement, age, fibrinogen, sedimentation rate, C-reactive protein, ferritin, TWEAK, LTE4 and PGF2α levels were higher, and lymphocyte levels were lower, compared with the asymptomatic group.

Conclusion: In this study, TWEAK and LTE4 levels were increased in cases with COVID-19. These results support the view that the TWEAK/Fn14 pathway and LT may be involved in the pathology of the aberrant immune response against SARS-CoV-2. Inhibition of each of these pathways may be a potential target in the treatment of COVID-19.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), which emerged in December 2019 and is a new member of the coronavirus family, has infected millions of people and caused thousands of deaths worldwide. No vaccine is yet available, and deaths continue to occur at the same pace. The many epidemiological studies conducted so far have yielded a considerable amount of data on the transmission route and clinical presentation of the disease; thus, methods of protection from SARS-CoV-2 and protocols for identifying possible cases have been established. It has been noted that the main cause of death associated with SARS-CoV is a cytokine storm [1]. Although the cytopathic effects of the virus are believed to be noteworthy in the severity of the disease, experience and clinical results from other coronaviruses, severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV), indicate that the aberrant host immune response causes an inflammatory cytokine storm and mortality [1]. Similar to the inflammatory cytokines in SARS and MERS, patients with SARS-CoV-2 infection also have increased plasma concentrations of inflammatory cytokines such as interleukin (IL)-6 and IL-10, tumor necrosis factor-α (TNFα) and granulocyte colony-stimulating factor (G-CSF) [2]. The effectiveness of the IL-6 inhibitor tocilizumab, which is used to stop the cytokine storm, supports this view [3]. TNF-like weak inducer of apoptosis (TWEAK) is a member of the TNF ligand family and is first synthesized as a transmembrane protein of 249 amino acids [4]. Although it was initially described as a stimulant of apoptosis [5], subsequent studies have shown that it is involved in many inflammatory and immunological processes [6,7].
TWEAK binds to fibroblast growth factor-inducible 14 (Fn14), its only known receptor [8], and stimulates the release of cytokines such as TNFα, IL-1, IL-6, G-CSF, interferon-γ, monocyte chemoattractant protein 1, macrophage inflammatory protein 1 alpha, intercellular adhesion molecule 1, vascular cell adhesion molecule 1 (VCAM-1) and interferon-γ-induced protein 10 (IP-10) from tissues in which TWEAK increases with inflammation [9][10][11]. These data show that the TWEAK/Fn14 pathway makes a considerable contribution to the inflammation occurring in the tissues, and excessive or persistent upregulation of this pathway plays an important role in the pathogenesis of some pathological inflammatory diseases such as systemic lupus erythematosus and rheumatoid arthritis (RA) [12][13][14].

Leukotrienes (LT) are generated from arachidonic acid metabolism and are lipid mediators of the inflammatory response. LTs are divided into two groups: the dihydroxy acid LT (LTB4) and the cysteinyl LTs (CysLTs: LTC4, LTD4 and LTE4). LTE4, one of the CysLTs, is the form that is more stable and more abundant in biological fluids than the others. CysLTs are known to play an important role in inflammatory diseases of the respiratory tract, such as asthma [15], pulmonary inflammation and fibrosis [16], and acute respiratory distress syndrome (ARDS) [17][18][19]. It is considered that the more morbid and mortal course of COVID-19 in some individuals is due not to the cytopathic effect of the virus but rather to the aberrant immune response developed by the host against the virus [3]. In this study, we aimed to investigate the possible roles of the TWEAK/Fn14 pathway and LTs in the immune response caused by SARS-CoV-2.

Patients

This study included patients who presented from March 30 to April 30, 2020, with a clinical presentation raising suspicion of SARS-CoV-2 infection and who were diagnosed with SARS-CoV-2 infection using polymerase chain reaction (PCR) of swab samples. Demographic, clinical, laboratory and radiological data of all patients were recorded. Based on computed tomography findings, the patients with SARS-CoV-2 infection were divided into two subgroups: those with lung involvement and those without lung involvement.

ELISA tests

Peripheral venous blood samples were collected at presentation. The blood samples were centrifuged at 3000 × g for 10 min, and the sera were stored at −80 °C. On the evaluation day, the sera were thawed at room temperature. Samples with higher concentrations were diluted and measured in duplicate. The serum concentrations of TWEAK (Human Tumour Necrosis Factor Related Weak Inducer of Apoptosis, Cat. No. E1820Hu, Bioassay Technology Laboratory, Shanghai, China), LTE4 (Human Leukotriene E4, Cat. No. CSB-E05176h, Cusabio Biotech Co Ltd., Wuhan, China) and prostaglandin F2α (PGF2α) (Human Prostaglandin F2α, Cat. No. CSB-E10142h, Cusabio Biotech Co Ltd., Wuhan, China) were determined using commercially available enzyme-linked immunosorbent assay (ELISA) kits. The enzymatic reactions were quantified in an automatic microplate photometer, and the concentrations were determined by comparing the optical density of the samples to the standard curve. All assays were conducted according to the manufacturers' instructions.
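The kits' standard-curve model is not stated in the paper, so the following is only a sketch of the usual ELISA read-out workflow, assuming a four-parameter logistic (4PL) calibration; the standard concentrations and optical densities are placeholders, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic curve commonly used for ELISA calibration."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Placeholder standard series (pg/mL) and measured optical densities.
std_conc = np.array([31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = np.array([0.12, 0.21, 0.38, 0.66, 1.05, 1.48])

params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=[0.05, 1.0, 250.0, 2.0], maxfev=10000)

def od_to_conc(od, a, b, c, d):
    """Invert the fitted 4PL curve to read a concentration from an OD."""
    return c * (((a - d) / (od - d)) - 1.0) ** (1.0 / b)

sample_od = 0.80
print(f"Estimated concentration: {od_to_conc(sample_od, *params):.1f} pg/mL")
```

Diluted samples would additionally be multiplied by their dilution factor before reporting, as implied by the duplicate measurement of high-concentration sera.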
Statistical analysis

Statistical analysis was performed using the Statistical Package for the Social Sciences (SPSS 19.0, Chicago, IL, USA). Quantitative data were expressed as mean ± standard deviation (SD). Normality of the distributions was tested with the Kolmogorov-Smirnov test with Bonferroni correction. Parametric data were analyzed using Student's t-test (age, hemoglobin, leukocyte, aspartate aminotransferase (AST), alanine aminotransferase (ALT), platelet, fibrinogen, TWEAK, LTE4, PGF2α). The Mann-Whitney U test was performed to compare nonparametric and skewed data (neutrophil, lymphocyte, C-reactive protein and ferritin). Bivariate Pearson correlation analysis was used for the linear variables. ROC analysis was performed for TWEAK, LTE4 and PGF2α, and their ROC curves were drawn. P values less than 0.05 were considered significant.

Results

Thirty-five patients with COVID-19 with lung involvement, 25 asymptomatic patients without lung involvement, and 22 healthy volunteers as the control group were included in the study. TWEAK (Figure 1A), LTE4 (Figure 1B) and PGF2α (Figure 1C) levels were significantly higher in the group with SARS-CoV-2 infection without lung involvement than in the healthy control group. Moreover, in the group with COVID-19 with lung involvement, TWEAK, LTE4 and PGF2α levels were significantly higher than in those without lung involvement and in the healthy control group (P < 0.001 for all; in Figure 1, significance is indicated as ***P < 0.001 versus control, and ###P < 0.001 and #P < 0.05 versus the group without lung involvement). Age, aspartate aminotransferase, alanine aminotransferase, fibrinogen, sedimentation, C-reactive protein and ferritin levels were significantly higher (Table), and lymphocyte levels significantly lower, in the COVID-19 patients with lung involvement compared with those without lung involvement.

Discussion

In this study, the possible roles of TWEAK and LTE4 in SARS-CoV-2 infection, which is considered to cause morbidity and mortality through an aberrant immune response rather than through its cytopathic effect, were investigated. TWEAK and LTE4 levels were higher in both disease groups than in the healthy control group. Compared with the asymptomatic SARS-CoV-2 infection cases, the COVID-19 patients with lung involvement were older, and their levels of acute phase reactants and of the oxidative stress marker PGF2α were higher, consistent with a more severe clinical presentation of the infection. Moreover, TWEAK levels, which have been shown to play an important role in the pathogenesis of autoimmune diseases, and LTE4 levels, which play a crucial role in hypersensitivity reactions such as asthma, were higher in the COVID-19 cases with lung involvement.

It has been demonstrated that the TWEAK/Fn14 pathway is involved in the pathogenesis of diseases in which the immune system does not have a protective role but is destructive against the host itself [12,14]. TWEAK, Fn14 and RANKL expressions are higher in the serum and synovial fluid of RA patients than in patients with osteoarthritis [20]. In a study by Park et al., it was found that as serum TWEAK levels increased, RA disease activity also increased [21]. These results suggested that blocking the TWEAK-Fn14 pathway could be effective in RA patients. Wisniacki et al. administered a TWEAK-blocking monoclonal antibody (BIIB023) to patients with RA and showed that inflammatory cytokines were downregulated; they claimed that TWEAK blockers can be effective in diseases in which TWEAK expression has been found to be high [12].
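As an aside on methods, the ROC analysis described in the statistics section can be illustrated as follows. This is a minimal sketch using scikit-learn rather than SPSS (an assumption made for illustration), and the arrays are placeholders standing in for serum TWEAK values and group labels, not the study's measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Placeholder data: 1 = lung involvement, 0 = asymptomatic infection.
labels = np.array([1] * 10 + [0] * 10)
tweak = np.concatenate([np.random.default_rng(0).normal(220, 40, 10),
                        np.random.default_rng(1).normal(150, 35, 10)])

fpr, tpr, thresholds = roc_curve(labels, tweak)
auc = roc_auc_score(labels, tweak)

# Youden's J picks the threshold maximizing sensitivity + specificity - 1.
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, cut-off = {thresholds[best]:.1f} "
      f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")
```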
In the present study, we have shown increased TWEAK expression in COVID-19, as seen in autoimmune diseases. This result suggests that the TWEAK/Fn14 pathway is possibly involved in the immune response to SARS-CoV-2; it therefore suggests that there is an aberrant immune response in COVID-19, and anti-TWEAK monoclonal antibodies may be an effective treatment method for COVID-19. An intense increase in proinflammatory cytokines has been reported in severe clinical presentations of, and deaths associated with, SARS-CoV-2 infection. The resulting cytokine storm leads to cardiovascular collapse, multiorgan failure and rapid death [2,22]. It has been noted that IL-6 plays the most important role in the pathogenesis of the cytokine storm that develops secondary to SARS-CoV-2 infection [23]. Therefore, tocilizumab is recommended in COVID-19 treatment protocols (Turkish Ministry of Health Guide for COVID-19), and the effectiveness of tocilizumab therapy has also been demonstrated in clinical trials [3,23]. Conversely, in experimental studies TWEAK/Fn14 activation increases the secretion of many proinflammatory cytokines including IL-6 [10], whereas anti-TWEAK antibodies have been shown to decrease the expression of IL-6 and TNFα in animal models [24]. This indicates that the TWEAK/Fn14 pathway may have an important role in the excessive proinflammatory cytokine release and the development of the cytokine storm in SARS-CoV-2 infection.

It is claimed that not only a cytokine storm but also an eicosanoid storm has a role in the pathogenesis of severe COVID-19 [25]. Massive cell death and cellular debris caused by SARS-CoV-2 stimulate the inflammasome complex [26] and initiate a macrophage-derived eicosanoid storm [27]. Proinflammatory bioactive lipid mediators such as prostaglandins and LTs fuel local inflammation [28]. Thus a hyperinflammatory clinical condition arises that does not resolve and is resistant to treatment. In the present study, the high LTE4 and PGF2α levels found in patients with SARS-CoV-2 infection support the view that hyperinflammation associated with an eicosanoid storm is involved in the pathogenesis of the aggravation of COVID-19. It is claimed that montelukast, a potent cysteinyl leukotriene receptor antagonist, could be a potential therapeutic agent in COVID-19 [29]; montelukast suppresses the leukotriene-mediated inflammatory response by inhibiting the binding of leukotrienes to their receptor [30].

The most important cause of COVID-19-related deaths is ARDS. The pathogenesis of ARDS, manifested by diffuse alveolar damage caused by intense inflammation, has not been fully understood; however, it is known that LTE4 and prostaglandins play an important role [31]. In ARDS patients, leukotriene levels increased 20-fold, and even 150-fold in complicated cases of ARDS, in comparison with healthy volunteers [31]. Subsequent studies have also supported the importance of leukotrienes in the pathogenesis of ARDS [17]. Moreover, experimental studies have shown that cysteinyl leukotriene blockers may have favorable effects on the clinical course of ARDS [17]. In the present study, the higher LTE4 and PGF2α levels in the COVID-19 patients with lung involvement compared with the asymptomatic group show the importance of LTE4 and PGF2α in lung inflammation and indicate that montelukast, a leukotriene receptor blocker, could contribute positively to the treatment of cases with SARS-CoV-2 infection-associated pneumonia and ARDS.
In conclusion, the immune system can sometimes respond in a hypersensitive manner to infectious agents, just as it does to allergens. In such cases, rather than the direct cytopathic effect of the infectious agent, the aberrant response of the immune system comes to the fore in the clinical presentation, and the destructive effects occur via the immune system. In the present study, the expression of the TWEAK/Fn14 pathway and of leukotrienes, which have important roles in aberrant immune response reactions, was found to be increased in COVID-19. When an aberrant immune response is triggered in SARS-CoV-2 infection, the probability of treatment success is significantly reduced, and the disease rapidly leads to death. Therefore, TWEAK/Fn14 pathway blockade and cysteinyl leukotriene receptor blockers may provide new hope for the treatment of COVID-19.
Efficacy of McKenzie Manipulative Therapy on Pain, Functional Activity and Disability for Lumbar Disc Herniation

Introduction: Lumbar disc herniation (LDH) is a common determinant of low back pain (LBP), and a cost-effective therapeutic approach is a priority. The objective of the study was to explore the effectiveness of McKenzie Manipulative Therapy (MMT) for patients with LDH.

Methodology: This was an assessor-blinded, 36-month RCT at the Center for the Rehabilitation of the Paralyzed (CRP) in Savaar, Bangladesh. Seventy-two subjects, aged 28-47 years and clinically diagnosed with MRI findings of LDH, were randomly recruited from hospital records, and sixty-eight were found eligible. The control group received stretching exercise and graded oscillatory mobilization, and the experimental group received McKenzie manipulative therapy, for 12 sessions over 4 weeks; both groups also received a standard set of care. Pain was the primary outcome, and the secondary outcomes were participation in functional activities and disability.

Results: Pain and disability improved significantly in both groups, with the McKenzie approach significantly superior to the control group (p < .05). Bothersomeness in activities (SBI) was significantly lower post-treatment compared with baseline in both groups (p < .01). The McKenzie group showed significantly superior outcomes for the fear-avoidance (FABQ) total score and the SBI item "feeling of abnormal sensation in the leg" compared with the control group (p < .05).

Conclusion: The McKenzie manipulative therapy approach was found to be effective for pain, disability and participation in activities for single- or multiple-level LDH patients within a short time, from day 1 to week 4, and the treatment effect extended to 6 months. Clinical Trial Registration No.: CTRI/2020/04/024667.

INTRODUCTION

In developed countries, more than 80% of the population is affected by low back pain (LBP) at some time in their life [1,2]. The severity of LDH depends upon the degree of disc displacement compressing the posterior or posterolateral aspect of the lumbar spinal segments. LDH causes central low back pain and/or radiating pain over the area of the buttocks or legs served by one or more spinal nerve roots of the lumbar vertebrae or sacrum, combined with neurologic deficits or associated symptoms of nerve root compression [6,7]; the phenomenon can also lead to motor deficits of the lumbosacral plexus and impairments in regular functions related to activities and livelihood [7]. LDH is one of the most common problems confronting outpatient physical therapists. It is extensively established that herniation is a multidimensional mechanical disorder that depends on physical, lifestyle and psychosocial factors [8]. The management of LDH depends on the severity of the disc displacement, which produces a spectrum of clinical presentations [9], and a conservative treatment approach is recommended for patients without red flags. Red flags include extreme pain, progressive neurological deficit and/or cauda equina syndrome. Conservative care includes a variety of pharmaceutical and non-pharmaceutical treatments such as patient education, analgesics, rest, exercise, traction, manipulation, mobilization and manipulative therapies; clinical guidelines [10] suggest prioritizing conservative therapy as the first line of management, although surgical or invasive therapies can be the treatment of choice [11,12].
The McKenzie method is widely prescribed by physical therapists to treat pain and increase flexibility in patients having definite mechanical characteristics of LDH symptoms [13,14]. McKenzie Mechanical Diagnosis and Therapy combines exercises based on directional preferences that are intended to "reduce derangements": it typically identifies one direction of repeated movement that decreases or centralizes referred symptoms and abolishes midline symptoms, and it emphasizes self-directed exercises performed by the patient together with a manipulative therapy approach applied by the clinician [15]. The McKenzie approach has been shown to be effective for low back pain, in terms of pain and disability, in both the short and the long term, and is considered cost-effective. However, a research gap remains concerning lumbar disc herniation specifically, namely whether McKenzie manipulative therapy is effective [15,16]. There are also recommendations for evaluating this therapeutic approach in low-resource countries [16]. The study is intended to report the effectiveness of the McKenzie manipulative approach for LDH patients, compared with stretching and a conventional manipulative therapy approach, on the outcomes of: (1) pain in different functional positions, (2) fear-avoidance behavior, (3) bothersomeness in functional activities, and (4) the low back disability index.

METHODS

The study was an assessor-blinded randomized clinical trial (RCT) carried out over 36 months at the Centre for the Rehabilitation of the Paralysed (CRP) in Bangladesh. The study was approved by the CRP ethical review board (CRP-R&E-0401-180). It is a fundamental feasibility study of the research project titled "Manipulative therapy for Prolapsed lumbar Intervertebral disc (PLID) patients and relation with infectious diseases: A Randomized Controlled Trial", approved by the Clinical Trials Registry India (CTRI/2020/04/024667), a primary registry approved by the WHO trial registry.

Patients, Sample Size Calculation and Randomization

From June 2017 to December 2019, 72 patients aged 25-50 years with complaints of low back pain and/or radiating pain and/or neurological symptoms in the lower limbs were initially enrolled in the study and then screened against the inclusion (diagnostic) criteria. Persons with MRI findings previously diagnosed as disc herniation, lumbar disc herniation (LDH) or prolapsed lumbar intervertebral disc (PLID) were also enrolled and screened a second time; persons without an MRI were advised to undergo one with proper justification. Samples were enrolled in the study through hospital randomization and voluntary participation. Sixty-eight (n = 68) patients met the eligibility criteria and were assigned after giving voluntary written consent; the sample size was calculated according to Miot [17]. Subjects were randomized into either the McKenzie group or the conventional physiotherapy group with computer-generated, concealed allocation. The inclusion criteria were: (1) a single or multiple levels of lumbar disc herniation evident on magnetic resonance imaging (MRI), (2) a positive Lasègue's sign or crossed Lasègue's sign, and (3) a diagnosis of derangement syndrome 1-3 on Mechanical Diagnosis and Therapy (MDT) assessment per the McKenzie Institute.
The exclusion criteria were: (1) any history of surgery for LDH, (2) comorbidity associated with endocrine disease, osteopenia, infection or carcinoma, (3) a history of fracture of the spine, ribs or upper limb within the last year, and (4) a pre-existing phobia of physiotherapy or manipulative therapy. Both groups received the interventions in two outpatient settings of a hospital. The interventions were given by experienced physiotherapists with 2-10 years of clinical practice, following in-service training by co-researchers on the specific treatment protocol. A single assessor was blinded to the assignment and performed all assessments. Data were collected before treatment and after 12 sessions (4 weeks) of treatment in the hospital setting; a follow-up was taken 6 months after discharge by phone call or physical visit.

Interventions

The experimental group received McKenzie manipulative therapy for the lumbar spine. The manipulative therapy included repeated movements, typically flexion in lying or standing, extension in lying or standing, and lateral movements of either side gliding or rotation, together with a manipulative approach to the lumbar spinal segments [18,19]. Patients performed these movements in therapy sessions and at home [20]. The repeated movements of the McKenzie manipulative therapy were prescribed as 10 repetitions of directed movements every 2-3 hours during 14 waking hours of the day, for 4 weeks. Manipulative therapy was performed by the physiotherapists as 10-15 repetitions in a single on/off maneuver lasting 5-7 minutes, for 6 sessions over 2 weeks. The control group received manual passive stretching of the lumbopelvic muscles (5-7 repetitions per muscle with 10-15-second holds, performed twice a day for 2 weeks) and graded oscillatory mobilization in the Maitland concept (5-7 minutes at 35-40 oscillations per minute), or static segmental mobilizations in the Maitland concept (35-50-second holds, 5-7 times) of the lumbar spine, for 6 sessions over 2 weeks. In addition, both groups received analgesics and hot compression of the lower back for 10 minutes for 2 weeks, and stabilization exercises of the lumbopelvic segment accompanied by a booklet indicating the proper way to perform different activities and lifestyle habits, for 4 weeks [21]. All interventions ended 4 weeks after the initial day of treatment.

Outcome Measurements

Pain was the primary outcome, and the secondary outcomes were participation in functional activities and disability. Pain was measured by the Dallas Pain Questionnaire (DPQ) across different activities and positions. Participation in functional activities was measured by the Fear-Avoidance Beliefs Questionnaire (FABQ) and the Sciatica Bothersomeness Scale (SBS), and disability was assessed with the Oswestry Low Back Disability Questionnaire (ODI). All of the outcome measurement tools have been found to have satisfactory sensitivity and reliability [22-26]. The outcomes were measured before the intervention (day 1) and after 12 sessions (4 weeks) of intervention in the rehabilitation center setting for all variables. A follow-up was measured 6 months after discharge, by phone call or physical visit, using the DPQ and ODI.

Statistical Analysis

Data entry and data quality were checked by an independent, non-associated researcher. Data were analyzed in a general linear model with paired and independent t-tests and mixed repeated-measures ANOVA in SPSS version 20. DPQ and ODI were analyzed using paired and independent t-tests for the time-fraction analysis and repeated-measures ANOVA for the repeated-measures analysis. FABQ and SBS were analyzed using paired t-tests for within-group measures and independent t-tests against baseline, with a 5% level of significance. The chi-square test and the independent-samples t-test were used to compare the clinical baseline characteristics between the groups.
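As a rough illustration of the core comparisons described above, the sketch below uses scipy in Python instead of SPSS (an assumption made purely for illustration; the study used SPSS version 20). The score arrays are placeholders, not trial data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Placeholder ODI scores (0-100) at baseline and after 4 weeks.
mckenzie_base = rng.normal(42, 8, 30)
mckenzie_week4 = mckenzie_base - rng.normal(18, 5, 30)
control_base = rng.normal(41, 8, 30)
control_week4 = control_base - rng.normal(10, 5, 30)

# Within-group change: paired t-test, as used for the DPQ and ODI.
t_w, p_w = stats.ttest_rel(mckenzie_base, mckenzie_week4)

# Between-group comparison of week-4 scores: independent t-test.
t_b, p_b = stats.ttest_ind(mckenzie_week4, control_week4)

print(f"within McKenzie: t = {t_w:.2f}, p = {p_w:.4f}")
print(f"between groups:  t = {t_b:.2f}, p = {p_b:.4f}")
```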
RESULTS

Socio-demographic Data

Sixty-eight (n = 68) respondents were enrolled and randomly allocated equally to the two groups. Three subjects dropped out of the control group and four withdrew from the experimental group (Fig. 1). At baseline (Table 1), the control group had a mean age, height and weight of 38.59 ± 10.891 years, 61.38 ± 5.205 inches and 63.97 ± 8.959 kg, and the experimental group 37.71 ± 8.803 years, 60.50 ± 5.160 inches and 64.06 ± 8.180 kg, respectively. The groups had similar occupational compositions, with service holders (control n = 7, experimental n = 8) and housewives (control n = 7, experimental n = 9) comprising the majority of respondents. The levels of disc herniation evident from the MRI readings were: L4/5 (control n = 9, experimental n = 8), L5/S1 (control n = 8, experimental n = 9) and more than one level (control n = 14, experimental n = 13). There were no significant differences in baseline characteristics between the groups (independent-samples t-test and chi-square test; level of significance < .05).

Pain and Disability

The Dallas Pain Questionnaire (DPQ) and Oswestry Disability Index (ODI) were analyzed with three distinct statistical measures. Within-group analyses of DPQ and ODI from baseline (day 1) to discharge (4 weeks) and from discharge to follow-up (6 months) were conducted with paired t-tests, and between-group analyses with independent t-tests (Tables 2-3). Changes across the repeated measures from baseline (day 1) to follow-up were calculated with repeated-measures ANOVA (Table 4). Excluding the drop-out data, both the control and the experimental group showed significant changes (P < .05) in all variables.

Discharge (4 weeks) to Follow-up (6 months)

Results from discharge to follow-up are given in Table 3.

Fear-avoidance and Bothersomeness in Activities from Baseline (day 1) to Discharge (4 weeks)

In the within-group analysis from baseline to discharge, fear-avoidance beliefs in physical activities and in work-related activities changed significantly (mean differences with 95% lower and upper bounds; p = .00). "Bothersomeness due to leg pain", "abnormal sensation in the leg", "weakness in the leg" and "leg pain in sitting" also changed significantly (mean differences with 95% lower and upper bounds).

Fig. (2a) Changes of disability in ODI at day 1.
Fig. (2b) Changes of disability in ODI after week 4.
Fig. (2c) Changes of disability in ODI after 6 months.

DISCUSSION

This research intended to explore the effectiveness of McKenzie manipulative therapy for LDH patients compared with a set of conventional physiotherapy treatments.
The statistical analysis showed a statistically significant difference between the two groups for the ODI, with the McKenzie group having a lower score (F = 107.1), which implies that the McKenzie intervention was more effective in reducing disability than the control intervention (F = 287.5, P < .001) within the twelve treatment sessions, as well as at the follow-up after six months. All the variables of the Dallas Pain Questionnaire showed similar results. The evidence recommends [27] using similar scales to measure disability states through physiotherapy interventions. The control and intervention groups had similar baseline characteristics in mean age, height and weight. Occupation varied within the groups, with service holders and housewives accounting for the majority of respondents. Two recent meta-analyses showed that subjects who were overweight or obese were at increased risk of both low back pain (LBP) and lumbar radicular pain [23]. Abdominal obesity is defined by waist circumference and has been associated with LBP in women [24]. As the study was conducted in a hospital setting, priority was given to the diagnosis and clinical presentation, and with the concealed allocation the groups showed no significant differences in baseline statistics.

The DPQ and ODI were analyzed by paired and independent t-tests and repeated-measures ANOVA from baseline to discharge, discharge to follow-up and baseline to follow-up, and statistically significant differences were found within each group. The between-group analyses found the McKenzie concept superior in several parameters at several distinct time points, and the McKenzie group reported significantly greater improvement than the control group. The interquartile range (IQR) for the control group was reported at the initial, discharge and follow-up assessments. Notable changes in the mean ODI were reported across the timeline in both groups, with the McKenzie group showing significantly better remission of disability than the control group.

Several studies have suggested that McKenzie therapy was more effective at short-term follow-up than most comparative treatments, including nonsteroidal anti-inflammatory drugs, an educational booklet, back massage with back care advice, strength training with therapist supervision, spinal mobilization, and general mobility exercises [25]. Six studies were reviewed by Clare and colleagues [26], and in 1 of the 6 the comparison treatment (massage/back care advice) was more effective than McKenzie therapy for both short-term and intermediate-term disability; no other comparative treatment was more effective than McKenzie therapy at any identified point in time. Most authors focus on the short-term effects of McKenzie therapy or report outcomes within 3 months of treatment, but this study adds new evidence of long-term effects as well. Moreover, one study [27] showed that McKenzie treatment reduced the level of disability, reaching statistical significance at the 2- and 12-month follow-ups.
This study holds unique features in that it explores changes in fear-avoidance beliefs in physical activities and work, and impairments in different functional positions. From baseline to discharge, the within-group analysis by paired t-test of fear-avoidance beliefs in physical activities, in work-related activities and in total, along with "irritability due to leg pain", "abnormal sensation in the leg", "weakness in the leg" and "leg pain in sitting", found significant changes in each group separately (mean differences with 95% lower and upper bounds). The between-group analysis by independent t-test of the FABQ and SBI found superior results in the McKenzie group for the FABQ activity and total scores, and for bothersomeness due to abnormal sensation in the leg.

In the study, the participants received controlled McKenzie manipulative therapy or the set of conventional approaches three days a week for four consecutive weeks. Similar studies have found [28] that six sessions over 3 weeks may bring benefits; this study shortened the duration and showed that the increased frequency benefits the patient. This study recruited 64 subjects with diagnosed LDH, allocated equally to the two physiotherapy intervention groups, and found significant differences in the outcomes of DPQ, ODI, SBI and FABQ. One comparative randomized controlled trial [29] with a 3-month follow-up period among 271 patients with chronic LBP compared a McKenzie therapy group (n = 134) with an electrophysical agents group (n = 137). Over 28 sessions, significant improvements, such as increased spinal motion and reduced pain and disability, were achieved within both groups, with greater improvement in the McKenzie group (p < 0.05); the present study found improvement in pain, disability, fear-avoidance and bothersomeness within 12 sessions. In that study, 271 samples were recruited, and McKenzie physiotherapy combined with different protocols, such as exercise or first-line care, produced significant results, similar to the present study with a minimal intervention time.

The study applied appropriate randomization despite limited resources and a scarcity of samples. The assessor was blinded, and the treatment providers applied the inclusion criteria separately and allocated subjects to groups per the randomization process. This minimized potential bias and ensured masking of the patients. There was no overlap between treatment providers; however, the intervention was a form of exercise that is difficult to blind from the intervention provider and the patient. The patients' participation was willing and voluntary. Because of the hospital-based randomization, the patient demographics were varied, so despite the small sample size the results have a degree of external validity.

The limitations of this study include the small sample size, the long duration of the study, and the difficulty of identifying qualified subjects with the specific diagnosis, supporting documents and eligibility criteria within the 2-year timeframe. Among the cases, 5 participants (3.4%) relapsed with minimal central symptoms within 6 months. A drop-out analysis could have improved the sample size, but the drop-outs were few in number, so the authors did not consider that analysis. Recording adverse events could add a new dimension, and extending the study to a long-term prospective cohort is recommended. Future multicenter studies, including comparisons with surgery, are recommended.
CONCLUSION

The results of this study show an overall statistically significant difference between the two intervention groups for pain and disability on the ODI and DPQ, but not for fear-avoidance beliefs and bothersomeness in functional activities on the FABQ and SBI. This provides insight that the McKenzie method may be more effective as an addition to a standard physiotherapy protocol for lumbar disc herniation.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

The study was approved by the CRP ethical review board (CRP-R&E-0401-180) and, as part of the research project "A Randomized Controlled Trial", by the Clinical Trials Registry India (CTRI/2020/04/024667), a primary registry approved by the WHO trial registry.

HUMAN AND ANIMAL RIGHTS

No animals were used in this research. All human research procedures followed the ethical standards of the committee responsible for human experimentation (institutional and national) and the Helsinki Declaration of 1975, as revised in 2013.

CONSENT FOR PUBLICATION

Written consent was obtained from the participants.

STANDARDS OF REPORTING

The CONSORT guideline was followed in this study.

AVAILABILITY OF DATA AND MATERIALS

The datasets generated in the current study are available from the corresponding author (Z.U.) on request.

CONFLICT OF INTEREST

Dr. Zakir Uddin is an editorial board member of The Open Sports Sciences Journal.
Social Learning Class Topper Optimization (SL-CTO) Based Hop Localization Technique for Wireless Sensor Network

To address the limitations of traditional DV-Hop, this paper proposes a DV-Hop localization based on a rectification factor using the Social Learning Class Topper Optimization (SL-CTO) algorithm. In the proposed method, a rectification factor is implemented to adjust the hop sizes of the beacon nodes. By computing the hop sizes of all the beacons at the dumb nodes, the proposed algorithm reduces the communication between unknown (dumb) nodes and beacon nodes. A network imbalance model is also considered, to demonstrate the applicability of the proposed approach in anisotropic networks. Simulations were performed in LabVIEW 2015, and the proposed algorithm was compared with conventional DV-Hop, particle swarm optimization-based DV-Hop and runner-root optimization-based DV-Hop. The simulation results show that, compared with existing localization methods, the proposed localization technique reduces the computing time, the localization error variance and the localization error.

Wireless sensor networks (WSNs) serve applications such as traffic calming and national security, in which the physical location of the relevant events matters [2,3]. Event data have no significance without their location information. One of the easiest methods for node localization is manual deployment of the sensor nodes, but it is not feasible for large-scale deployment in remote regions. The global positioning system (GPS) is the simplest system for localization, but it may increase the cost of the network and consume more energy [4]. One of the most cost-effective ways to locate sensor nodes is to activate GPS on only some nodes, known as beacon nodes. Manually deploying sensor nodes and recording their coordinates is another method of localization; however, when the network has a large number of nodes this technique is not general, and human participation in some monitoring regions is not practical. The widely used node localization scheme is therefore to obtain, via GPS or manual deployment, the coordinates of some nodes (called beacon or anchor nodes) and then use localization algorithms to estimate the coordinates of the other nodes (called unknown or dumb nodes). Such anchor nodes know their precise locations and make them available to the unknown nodes [5,6]. Many localization strategies have been suggested for WSNs [7] to solve these position estimation issues. These localization methods are classified as range-based and range-free. A range-based method uses an algorithm based on distance measurements and typically incurs higher distance-measurement costs [8]. The range-based approaches are of a few types: Time of Arrival (TOA) [9], Time Difference of Arrival (TDOA) [10], Received Signal Strength Indicator (RSSI) [11] and Angle of Arrival (AOA) [12,13]. These techniques demonstrate good localization accuracy but require additional hardware to make the estimates at the nodes. Range-free localization techniques, on the other hand, use approximate distances and the hop counts between beacon and dumb nodes to estimate the positions of the unknown nodes. This method lowers the cost because no additional hardware is required [14]. The DV-Hop algorithm is one of the most commonly used range-free localization algorithms [15]; this algorithm, however, suffers from low localization precision [16,17].
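To illustrate why the range-based methods mentioned above need calibration effort, here is a minimal sketch of the generic log-distance path-loss model often used with RSSI ranging. The reference RSSI at 1 m and the path-loss exponent are assumed values for illustration, not parameters from the paper.

```python
import math

def rssi_to_distance(rssi_dbm, rssi_at_1m=-40.0, path_loss_exp=2.7):
    """Invert the log-distance path-loss model:
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d), solved for d in meters."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

for rssi in (-40.0, -55.0, -70.0):
    print(f"RSSI {rssi:6.1f} dBm -> ~{rssi_to_distance(rssi):6.2f} m")
```

In practice the two model parameters vary with the environment, which is part of the hardware and calibration cost that range-free schemes such as DV-Hop avoid.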
Nature has contributed hugely to the success of modern optimization algorithms and has offered solutions to several complex engineering problems. There are numerous nature-inspired algorithms, such as the Genetic Algorithm (GA), Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Simulated Annealing (SA) and Ant Colony Optimization (ACO). PSO has been found to be a fast algorithm; however, it is difficult for the algorithm to escape once it is stuck in a local minimum. SA has the disadvantage of random initialization, and there is less chance of it finding global solutions. It has also been observed that GA has advantages over PSO, since the search remains within a user-defined boundary area; however, GA has the disadvantage of being sluggish despite its broader scope. The runner-root algorithm (RRA), inspired by strawberry plants, is another such algorithm, used to solve multivariable problems; at each iteration, half of the agents that perform poorly are discarded and the good agents are replicated. From the above discussion, it is observed that the usefulness of these strategies depends on the problem under consideration.

Social learning plays a significant role in the learning of behaviours among social animals. Social learning has the benefit of allowing individuals to learn behaviours from others without paying the cost of individual trial-and-error, as opposed to individual (asocial) learning. To build a social learning CTO, this paper integrates the social learning process into Class Topper Optimization (CTO). In the proposed SL-CTO, each student learns from better students, unlike CTO variants in which students are updated based on the knowledge of the best student of the entire class (the class topper) and the best student of each section (the section topper). Furthermore, the proposed SL-CTO implements a dimension-dependent parameter control method to ease the burden of parameter settings. We propose an SL-CTO-based enhanced DV-Hop algorithm for WSNs to resolve the disadvantages of conventional DV-Hop localization. The major contributions of this paper are as follows:

1. By calculating the hop sizes of all the beacons at the dumb nodes, the proposed algorithm reduces the number of messages sent between dumb (unknown) nodes and beacon nodes. This saves localization time and resources and also minimizes the communication cost of the proposed algorithm.
2. A rectification factor is included in the suggested technique to modify the hop sizes of the beacon nodes.
3. To model the impact of an anisotropic environment, an imbalance variable (degree of irregularity) is integrated.
4. The SL-CTO optimization algorithm is used to correct the approximate locations of the dumb nodes and thereby reduce the localization error.

The simulation results indicate that, compared with current range-free localization algorithms, the suggested localization algorithm reduces the position error, the variance in localization error and the time required for computation. The remainder of the paper is organized as follows: Section 2 reviews related research. The conventional DV-Hop localization is addressed in Section 3. The proposed SL-CTO-based DV-Hop localization is presented in Section 4. The radio irregularity effect is discussed in Section 5. Section 6 explains the performance metrics. Simulation results are shown in Section 7. Finally, conclusions are drawn in Section 8.
Related works

Numerous optimization-based algorithms have been developed to solve localization problems. In this section, we present a comprehensive survey of studies on optimization-based hop localization. It mainly covers the work in which the CTO method is employed in hop-based localization, as well as modifications of the DV-Hop algorithm. A number of DV-Hop-based range-free algorithms are reviewed here. To estimate the positions of the nodes, Chen and Zhang [18] distributed additional anchor nodes only at the boundary of the monitored area and, finally, used Particle Swarm Optimization (PSO) to reduce the localization error of the algorithm. For isolated areas, the deployment of anchor nodes only at the boundary of the sensor network in that paper is not sufficient. Kumar et al. [1] introduced an energy-efficient range-free technique to minimize the localization error; there, instead of unknown nodes asking for their coordinates, the anchor nodes broadcast their locations to the dumb nodes, and the hop sizes of the beacon nodes are computed at the dumb nodes, which makes the algorithm energy efficient. Zaidi et al. [19] suggested a new algorithm in which dumb nodes use locally available information to locate their positions, thus eliminating some of the redundant communication and energy costs incurred when data must be shared between nodes; that paper does not demonstrate the impact of network irregularities on the localization algorithm. An advanced range-free method using genetic algorithms was proposed by Sharma and Kumar [20]: they modify the average hop size of the anchor nodes by optimizing a correction factor, and the modified hop size is further tuned by a line-search algorithm. The localization efficiency is increased by the genetic algorithm; however, the performance of the suggested algorithm can degrade because of holes, non-uniform node distribution, network randomness and abrupt radio patterns. Kaur et al. [21] suggested grey wolf optimization to obtain a more accurate estimate of the average hop distance determined by each beacon node; the algorithm was shown to provide greater precision with a small increase in computational cost. An improved DV-Hop localization algorithm using PSO was proposed by Singh and Sharma [22]: in their algorithm, the hop count and minimum hop size are measured, the location is estimated and an error evaluation is performed. The developed method uses the hop size of the anchor from which the dumb node determines its range, and the positions of the unknown nodes are further refined with the PSO strategy. Kanwar and Kumar [6] suggested a range-free localization using the runner-root algorithm: they modify the average hop size of the anchor nodes by refining a correction factor, the modified hop size is further optimized by a line-search algorithm, and the localization precision is further improved using the runner-root method. However, the efficiency of that algorithm can also degrade as a result of holes, abnormal radio patterns and the sparsity of non-uniform node distributions. We infer from the above reviews that range-free localization algorithms still need improvement in terms of localization accuracy. This motivates us to propose a new SL-CTO-based DV-Hop localization algorithm.
Algorithm of conventional DV-Hop

DV-Hop is essentially a range-free localization technique based on a distance-vector routing protocol. The distance between dumb (unknown) nodes and beacon nodes is computed from the hop counts to the beacon nodes and the average hop distance in the WSN. Because the connectivity of wireless sensor nodes is non-uniform, the paths formed in the network topology between dumb nodes and beacon nodes are not straight lines; therefore, errors are introduced in the node positioning step of the algorithm [6].

Step 1: The minimum hop count between unknown nodes and beacon nodes is determined. Beacon nodes broadcast their locations to the neighbouring nodes using a distance-vector protocol. The information has the form (Hi, ai, bi, id), where id is the identity, (ai, bi) are the coordinates, and Hi is the hop count of the i-th beacon node; Hi is initially set to 0 [23]. Nodes receiving the broadcast record the hop count and location of the beacon nodes in the vector, increasing Hi by 1 in the process [20]. If, during this update process, a node receives data with the same id again, the newly received value is compared with the stored value of Hi, and the smaller hop count is retained.

Step 2: The minimum hop count and the average hop distance are used to find the distance between unknown nodes and beacon nodes. With the locations and hop counts of the beacon nodes obtained in the previous step, the average hop distance for the entire network can be determined and broadcast to the whole network. Each node retains the minimum hop distance information from the beacon node closest to it [24]. The average hop distance $jp_i$ of the beacon node $i(a_i, b_i)$ with respect to the other beacon nodes $j(a_j, b_j)$ can be computed as

$$jp_i = \frac{\sum_{j \ne i} \sqrt{(a_i - a_j)^2 + (b_i - b_j)^2}}{\sum_{j \ne i} H_{ij}} \qquad (1)$$

where $H_{ij}$ is the hop count between beacon nodes $i$ and $j$. The distance between a beacon node and a dumb node is then expressed as

$$p_i = jp_i \times Hop_{min} \qquad (2)$$

where $jp_i$ is the average hop distance and $Hop_{min}$ is the hop count between the $i$-th beacon node and the dumb node $u$.

Step 3: Let the coordinates of the dumb node $U$ be $(a, b)$ and the coordinates of the $i$-th beacon node be $(a_i, b_i)$, $1 \le i \le n$; correspondingly, $p_i$ is the estimated distance from beacon node $i$ to the dumb node $U$. The coordinates of the dumb node satisfy

$$(a - a_i)^2 + (b - b_i)^2 = p_i^2, \quad 1 \le i \le n \qquad (3)$$

Equation (3) can be arranged in matrix form as $SA = T$, with $A = [a,\, b]^T$, where, subtracting the $n$-th equation from the first $n-1$ equations,

$$S = 2\begin{bmatrix} a_1 - a_n & b_1 - b_n \\ \vdots & \vdots \\ a_{n-1} - a_n & b_{n-1} - b_n \end{bmatrix}, \qquad T = \begin{bmatrix} a_1^2 - a_n^2 + b_1^2 - b_n^2 + p_n^2 - p_1^2 \\ \vdots \\ a_{n-1}^2 - a_n^2 + b_{n-1}^2 - b_n^2 + p_n^2 - p_{n-1}^2 \end{bmatrix}$$

Finally, least-squares approximation is used to calculate the unknown node coordinates:

$$A = (S^T S)^{-1} S^T T \qquad (4)$$

A Social Learning Class Topper Optimization (SL-CTO) Algorithm

A short summary of the sociological background of the proposed SL-CTO is given below, followed by a detailed explanation of SL-CTO together with an analysis of computational complexity and convergence. Around 1921, several British birds were observed opening milk bottles in the small town of Swaythling. Such observations were continuously reported over the following 25 years from numerous other locations in the United Kingdom and even in other regions of continental Europe. This is among the earliest evidence of social learning [25], in which it is assumed that the birds learned to open milk bottles through observing the experience of other birds, rather than learning on their own [26].
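Before continuing with the social-learning background, the conventional DV-Hop procedure of the previous section can be made concrete. The sketch below implements Steps 2 and 3, under the assumption that the minimum hop counts from Step 1 are already available; the coordinates, the hop-count matrix and the helper name dv_hop_estimate are illustrative placeholders, not values from the paper.

```python
import numpy as np

def dv_hop_estimate(beacons, hops_between, hops_to_unknown):
    """Estimate an unknown node position from beacon coordinates,
    beacon-to-beacon hop counts, and beacon-to-unknown hop counts."""
    n = len(beacons)
    # Step 2: average hop size of each beacon, Eq. (1), then distances, Eq. (2).
    dists = np.linalg.norm(beacons[:, None, :] - beacons[None, :, :], axis=2)
    hop_size = np.array([dists[i].sum() / hops_between[i].sum() for i in range(n)])
    p = hop_size * hops_to_unknown

    # Step 3: linearize Eq. (3) by subtracting the n-th equation, solve Eq. (4).
    an, bn, pn = beacons[-1, 0], beacons[-1, 1], p[-1]
    S = 2.0 * (beacons[:-1] - beacons[-1])
    T = (beacons[:-1, 0] ** 2 - an ** 2 + beacons[:-1, 1] ** 2 - bn ** 2
         + pn ** 2 - p[:-1] ** 2)
    coords, *_ = np.linalg.lstsq(S, T, rcond=None)
    return coords

beacons = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
hops_between = np.array([[0, 4, 4, 6], [4, 0, 6, 4], [4, 6, 0, 4], [6, 4, 4, 0]])
print(dv_hop_estimate(beacons, hops_between, np.array([2, 2, 3, 3])))
```

The proposed algorithm of Section 4 replaces the plain hop size of Step 2 with a rectified, weighted hop size and then refines the least-squares estimate with SL-CTO.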
A Social Learning Class Topper Optimization (SL-CTO) Algorithm
This section first gives a short summary of the sociological background of the proposed SL-CTO; a detailed description of SL-CTO is then provided, together with an analysis of its computational complexity and convergence. In 1921, several British birds were observed opening milk bottles in the small town of Swaythling. Similar findings were reported continuously over the following 25 years from numerous other locations in the United Kingdom, and even from other regions of continental Europe. This is regarded as the first evidence of social learning [25], in which the birds are assumed to have learned to open milk bottles through the observation of, and experience with, other birds, rather than on their own [26].

Different mechanisms of social learning have been suggested and debated over the past decades, such as stimulus enhancement and local enhancement [27], observational conditioning [28], contagion [29], and social facilitation [30]. Among these mechanisms, imitation, which is considered distinct from the others [31], is the most interesting, since imitation occurring across a whole society may lead to behavioural similarities at the population level, such as culture or tradition [32]. Such population-level parallels can signify the convergence of a complex system, which gives the resulting evolutionary algorithm its practical applicability. In [33], the authors discussed the various definitions of imitation in depth; the definition by Mitchell is considered the most applicable to animals and machines [34], in which imitation is assumed to be a method of producing a similar copy of a model. In [32], imitation is defined as a procedure in which an imitator copies part of a demonstrator's behaviour through observation.

To replace the updating rules of Class Topper Optimization (CTO), we propose new learning methods that are inspired by social learning. Like the CTO, the proposed SL-CTO initializes a class of students divided into sections, together with the student size and a performance index (PI). Each student holds a randomly initialized learning behaviour, which represents a candidate solution to the optimization problem. Each student is assigned a knowledge-enhancement value calculated from its learning behaviour, acting as reward feedback from the environment. The students are then sorted in increasing order of their learning values, and each student corrects its behaviour by learning from better students (the demonstrators, i.e., the section topper ST and the class topper CT). The flowchart of SL-CTO is shown in Fig. 1. The learning mechanism can be summarized simply: first, every student, graded according to the PI, makes an effort to gain knowledge that brings it closer to the class topper's result; then every student, except for the class topper, learns from the best students. Knowledge enhancement is introduced at two levels, the section level and the student level, and a student's efficiency is enhanced by learning from the best student of its class [35]. An imitator in a social learning system learns the behaviours of various demonstrators [36] in the following manner:

L^{T+1}_{i,j} = L^T_{i,j} + ΔL^{T+1}_{i,j},   (4)

where L^T_{i,j} is the j-th dimension of student i's behaviour vector in generation T, with i ∈ {1, 2, …, m} and j ∈ {1, 2, …, n}, and ΔL^{T+1}_{i,j} is the behaviour correction. Taking into account that the desire to learn from better individuals in a society varies from person to person (typically, better individuals are less likely to learn from the others), we define a learning probability p^L_i for each student i; a randomly generated probability p_i satisfies 0 ≤ p_i ≤ p^L_i ≤ 1. Computationally, each student is described by its behaviour vector:

L^T_i = (L^T_{i,1}, L^T_{i,2}, …, L^T_{i,n}).   (5)

Each student S in a section gains knowledge from its relevant Section Topper (ST) as follows:

ΔL^{T+1}_{i,j} = φ_1 ΔL^T_{i,j} + φ_2 I^T_{i,j},   (6)

with

I^T_{i,j} = L^T_{ST,j} − L^T_{i,j},   (7)

where ST is the best student inside the section. The ST, in turn, gains knowledge from the CT and from the collective behaviour of the class, via the following expressions:

ΔL^{T+1}_{ST,j} = φ_1 ΔL^T_{ST,j} + φ_2 (L^T_{CT,j} − L^T_{ST,j}) + φ_3 ε C^T_j,   (8)

with

C^T_j = (1/m) Σ_{i=1}^{m} L^T_{i,j} − L^T_{ST,j}.   (9)

Within the above frameworks, modified under the influence of social learning, the behaviour correction ΔL^{T+1} consists of three components. The first factor of ΔL^{T+1} is the same as the inertia component of the CTO, while the other components differ, as given by Eq. (6) and Eq. (8).
From Eq. (6), each learner in a section learns from its appropriate ST; since this element reflects the imitation behaviour of natural social learning, it is referred to as the imitation part (I^T). From Eq. (8), the ST, being the section's top student, learns from the CT as well as from the collective behaviour of the whole class, i.e., the behaviour of all students; the corresponding term is the social influence component (C^T), weighted by the control parameter ε. For simplicity, three random coefficients φ_1, φ_2, and φ_3, generated uniformly in [0, 1] each time the update strategy is executed, replace the parameters of the original CTO (w, n_1, and n_2).

There are three parameters to be defined in the proposed SL-CTO: the student size m, the learning probability p^L_i, and the social influence factor ε. The student size m is the first variable to be specified; it is suggested that m be calculated as a function of the dimension n of the search space [37]:

m = M + ⌊n/10⌋,

where M is the basic student size required for the correct functioning of the SL-CTO. The second parameter, the learning probability p^L_i, is also inspired by natural social learning; the following learning probability was adopted:

p^L_i = (1 − (i − 1)/m)^{α·log(⌈n/M⌉)},

where the factor (1 − (i − 1)/m) indicates that, in the sorted class, the learning probability decreases with the performance index i of the student, the exponent α·log(⌈n/M⌉) specifies that the probability of learning is inversely related to the complexity of the search, and the log(·) function is used to smooth the effect of n/M. Empirically, a coefficient α < 1 is recommended; in this work, α = 0.6 was used. If the learning probability condition p_i(t) ≤ p^L_i is met, L^T_{i,j} is corrected according to Eq. (4); otherwise, the student keeps its current behaviour. Substituting Eq. (7) and Eq. (9) into Eq. (6) and Eq. (8), respectively, and replacing all random parameters with their expected values, the following expression is obtained:

E[ΔL^{T+1}_{i,j}] = (1/2) ΔL^T_{i,j} + (1/2) I^T_{i,j} + (1/2) ε C^T_j,

where 1/2 is the expected value of φ_1, φ_2, and φ_3. The last variable to be defined is the social influence factor ε. The convergence complexity is generally proportional to the dimension of the search space, since the convergence of the entire class requires the convergence of each dimension of each student's behaviour vector. On the basis of this observation, the social influence factor is defined as

ε = β·(n/M),

i.e., ε scales with the dimension of the problem. Because this parameter controls the influence of the class-level mean behaviour, setting its value too high could cause premature convergence towards the mean behaviour (instead of the best behaviour). A small value of β = 0.01 is therefore used in this work, which restrains the convergence of the proposed SL-CTO accordingly.
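Before turning to the proposed localization algorithm, the following Python sketch illustrates one generation of the SL-CTO update as reconstructed above. It is not the authors' implementation: the demonstrator choice (any better-ranked student standing in for the ST/CT) and the minimization convention are assumptions made for illustration.

```python
import numpy as np

def sl_cto_step(L, dL, fitness, eps, alpha=0.6, M=100):
    """One generation of the social-learning update (sketch; minimization).

    L, dL   : (m, n) behaviour vectors and their previous corrections
    fitness : callable mapping a behaviour vector to its performance index
    eps     : social influence factor, e.g. eps = 0.01 * n / M
    """
    rng = np.random.default_rng()
    m, n = L.shape
    order = np.argsort([fitness(x) for x in L])    # best (class topper) first
    L, dL = L[order].copy(), dL[order].copy()
    mean_behaviour = L.mean(axis=0)                # class-level mean per dimension
    pL = (1.0 - np.arange(m) / m) ** (alpha * np.log(np.ceil(n / M)))
    for i in range(1, m):                          # rank 0 is the CT, left unchanged
        if rng.random() > pL[i]:
            continue                               # this student skips learning
        phi1, phi2, phi3 = rng.random(3)
        demo = L[rng.integers(0, i)]               # a better-ranked demonstrator (assumed)
        dL[i] = (phi1 * dL[i]                      # inertia component
                 + phi2 * (demo - L[i])            # imitation component I
                 + phi3 * eps * (mean_behaviour - L[i]))  # social influence C
        L[i] = L[i] + dL[i]
    return L, dL
```

Iterating this step until a budget of generations is exhausted, and reading off the rank-0 behaviour vector, yields the optimized solution.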
Description of the proposed enhanced correction-factor DV-Hop localization algorithm using SL-CTO
In this section, a new SL-CTO-based, enhanced correction-factor DV-Hop localization algorithm is described; it consists of the following steps.

Step 1 Determine the beacon node coordinates and broadcast the node positions to the network. All sensor nodes in the network then obtain the minimum hop count values.

Step 2 The dumb node defines the modified hop size of the beacon nodes based on the number of hops, the weight w of the beacon node [38], and the distance information among the beacon nodes. The estimated distance between beacon nodes i and j is computed from the average hop size as

d̂_{ij} = jp_i × H_{ij},

while the true distance between beacon nodes i and j is

d_{ij} = √((a_i − a_j)² + (b_i − b_j)²).

The ranging error between beacon nodes i and j is then

e_{ij} = d_{ij} − d̂_{ij}.

We now introduce a rectification factor, defined as the error averaged over the beacon nodes:

τ_i = Σ_{j≠i} e_{ij} / (s − 1),

where s is the number of beacon nodes. The rectification factor τ is applied by adding it to the previous hop size, thereby modifying the hop size of the beacon node. The corrected distance between beacon node i and dumb node k is determined as [37]:

p_{ik} = (jp_i + τ_i) × H_{ik}.

Step 3 To calculate the locations of the dumb nodes, the 2D hyperbolic positioning technique is used. Let (a, b) be the location of the dumb node and (a_i, b_i) the position of the i-th beacon node. The distance between these nodes satisfies

p_i² = (a − a_i)² + (b − b_i)² = S_i − 2a·a_i − 2b·b_i + T,

where S_i = a_i² + b_i² and T = a² + b². This can be written in the matrix form GZ = H, with Z = [a, b, T]^T, the i-th row of G equal to [−2a_i, −2b_i, 1], and H_i = p_i² − S_i. Finally, with the aid of the least-mean-square estimation process, we obtain the value of Z:

Z = (G^T G)^{−1} G^T H.

Step 4 To refine the approximate positions of the dumb nodes, the SL-CTO algorithm is applied. The objective function of SL-CTO for DV-Hop localization is formulated as the residual between the corrected range estimates and the distances implied by a candidate position:

f(a, b) = Σ_{i=1}^{s} (√((a − a_i)² + (b − b_i)²) − p_i)².

The flow chart of the proposed localization algorithm is shown in Fig. 2.

Model of communication imbalance
In real deployments, the communication pattern of a sensor node is not identical in every direction; irregular broadcast paths can be caused by varying propagation losses in RF data transmission, and communication disturbance is a common issue in wireless sensor networks. We therefore consider this situation when formulating the model and investigate the effect of communication imbalance on the proposed localization algorithm. To represent the irregularity of the communication signal, the variable DOI (degree of irregularity) is introduced into the model; DOI denotes the maximum propagation-loss variation per unit degree of change in direction. VSP (Variance of Sending Power) accounts for the percentage variance of the transmission power among different devices. The distortion of the communication range is modelled as [37]:

C_f = C·(1 − DOI·γ),

where C_f is the maximum transmission range after the imbalance factor is imposed, C is the effective transmission range before the imbalance factor is imposed, γ is a uniformly distributed random number, γ ∼ U(0, 1), and the irregularity factor DOI ∈ [0, 0.5].

Metrics for performance
The efficacy of the suggested algorithm was determined on the basis of computation time, localization error, and variance of the localization error, as follows.

Time for computation
Computation time is the time taken to perform the computational process over the entire localization procedure; it is measured using the tic-toc function. We compare the computation time of the proposed algorithm with the original DV-Hop [17], PSO-based DV-Hop [22], and RRA-based DV-Hop [6] localization algorithms.

Error in localization
The localization error results from comparing the estimated position of a dumb node with its true position. The total number of dumb nodes, the number of beacon nodes, the imbalance factor, the communication range, and the deployment area all affect the localization error. The average localization error is calculated as

ALE = Σ_{u=1}^{t} √((â_u − a_u)² + (b̂_u − b_u)²) / (t·R),

where t is the number of dumb nodes and R is the communication range of the sensor nodes.
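A minimal NumPy sketch of the Step 2 rectification and of the ALE metric just defined is given below. It is illustrative only: the averaging form assumed for τ_i follows the reconstruction above, and the variable names are hypothetical.

```python
import numpy as np

def rectified_distances(beacons, hops_bb, jp, hops_bu):
    """Step 2 (sketch): rectify each beacon's average hop size jp_i with
    the mean ranging error tau_i, then re-estimate beacon-to-node distances."""
    s = len(beacons)
    true_d = np.linalg.norm(beacons[:, None] - beacons[None, :], axis=2)
    est_d = jp[:, None] * hops_bb                   # estimated inter-beacon distances
    err = true_d - est_d                            # e_ij = d_ij - d_hat_ij
    off_diag = ~np.eye(s, dtype=bool)
    tau = err[off_diag].reshape(s, s - 1).sum(axis=1) / (s - 1)  # assumed tau_i
    return (jp + tau) * hops_bu                     # p_ik = (jp_i + tau_i) * H_ik

def average_localization_error(est, true, R):
    """ALE normalized by the communication range R, as defined above."""
    le = np.linalg.norm(est - true, axis=1)         # per-node localization error
    return le.sum() / (len(true) * R)
```

The rectified distances p_ik would then be fed to the 2D hyperbolic solve of Step 3 and to the SL-CTO refinement of Step 4.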
Variance of localization error
The variance measures how much each error value deviates from the mean of the set:

Var = Σ_{u=1}^{t} (LE_u − ALE_s)² / t,

where LE_u is the localization error of the u-th dumb node and ALE_s is the mean localization error.

Simulations and outcomes
We investigated the effectiveness of the proposed method using LabVIEW 2015. This section provides an analysis of performance in terms of localization error, localization error variance, and computation time. The simulations assessed the quality of the suggested algorithm under various parameter changes, i.e., node size, beacon size, communication range, irregularity factor, and deployment area. The simulation parameters are summarized in Table 1. For the simulations, we considered a region of 100 × 100 m² with dumb and beacon nodes uniformly distributed, as illustrated in Fig. 3.

Effect of the total number of dumb nodes on computation time
The variation in computation time as the number of dumb nodes changes is given in Fig. 4. A region of 100 × 100 m² is considered, with 25 beacon nodes and a transmission range fixed at 25 m; the number of dumb nodes varies from 100 to 400. Computation time was shown to increase with the total number of dumb nodes. It can be observed from Table 2 that, compared to PSO DV-Hop and RRA DV-Hop, our algorithm needs less computation time.

Effect of the total number of beacon nodes on computation time
The variation in computation time caused by a change in the number of beacon nodes is illustrated in Fig. 5. A region of 100 × 100 m² is considered, with 200 dumb nodes and a 25 m communication range fixed for the simulation; the number of beacon nodes ranges between 20 and 160. Table 3 indicates that the computation time increases with the number of beacon nodes.

Effect of the total number of dumb nodes on localization error and variance
From Fig. 6 and Fig. 7, the localization error and its variance were found to decrease as the number of dumb nodes increases. A region of 100 × 100 m² is considered, with 30 beacon nodes and a transmission range fixed at 25 m; the number of dumb nodes varies from 100 to 400. Table 4 shows that the basic DV-Hop method has much lower localization accuracy compared to our method.

Effect of the number of beacon nodes on localization error and variance
Fig. 9: Changes in the variance of the localization error with the number of beacon nodes (legend: basic DV-Hop [17], algorithm in [22], algorithm in [6], proposed algorithm).

Effect of communication range on localization error and variance
The effect of the communication range on the localization error and its variance is illustrated in Fig. 10 and Fig. 11. A region of 100 × 100 m² is used, with 300 dumb (unknown) nodes and 25 beacon nodes fixed for the simulation; the transmission range varies between 15 and 35 m. The simulation results in Table 6 show that as the communication range increases, the localization error and variance decrease.

Effect of deployment area on localization error and variance
The effect of the deployment area on the localization error and variance is depicted in Fig. 12 and Fig. 13. For the simulation, 300 dumb nodes, 50 beacon nodes, and a 30 m communication range are assumed; the deployment area ranges from 100 × 100 to 300 × 300 m². The localization error was observed to increase with the deployment area. In addition, Table 7 shows that, compared to conventional DV-Hop, PSO DV-Hop, and RRA DV-Hop, the proposed method has lower localization error and variance.
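For completeness, the following sketch shows how the irregularity factor used in the simulations above might be drawn per node; the multiplicative form C_f = C(1 − DOI·γ) is the assumption stated in the imbalance model, not code from the paper.

```python
import numpy as np

def irregular_ranges(C, doi, n_nodes):
    """Assumed communication-imbalance model: each node's effective range C
    is shrunk by up to doi*100 %, with gamma ~ U(0, 1) drawn independently
    per node and doi in [0, 0.5]."""
    rng = np.random.default_rng()
    gamma = rng.random(n_nodes)
    return C * (1.0 - doi * gamma)      # C_f per node

# Example: nominal 25 m range, DOI = 0.2, for 300 dumb nodes
# ranges = irregular_ranges(25.0, 0.2, 300)
```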
Conclusion
In this work, we presented a correction-factor-based DV-Hop localization algorithm built on social learning class topper optimization (SL-CTO) to reduce the localization error and increase positioning precision. In the proposed technique, the hop size of the beacons is evaluated at the dumb nodes; this reduces the communication between beacon and dumb nodes, making the proposed algorithm energy efficient. The correction factor is used to adjust the hop size of the beacon nodes. A network imbalance model is also considered, illustrating the applicability of the proposed algorithm to anisotropic networks. Simulation results demonstrate that, relative to conventional localization methods, the proposed algorithm performs better in terms of computation time, localization error, and variance. Node localization remains an important problem in wireless sensor networks, and energy consumption and the time required along the future path are two further factors to consider for the proposed algorithm. One direction for future research is to apply the Social Learning Class Topper Optimization (SL-CTO) DV-Hop in a three-dimensional framework and on a real proving ground.
2021-09-27T19:53:19.372Z
2021-08-09T00:00:00.000
{ "year": 2021, "sha1": "8768f8d51a6d2590b9e70ce5d93dce8e562518b3", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-659985/v1.pdf?c=1631889941000", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "d7eeedda9268505ce1a7a84814ae665b7f57e243", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
251772738
pes2o/s2orc
v3-fos-license
The metabolic fate of oxaliplatin in the biological milieu investigated during in vivo lung perfusion using a unique miniaturized sampling approach based on solid-phase microextraction coupled with liquid chromatography-mass spectrometry
Adjuvant chemotherapy after pulmonary metastasectomy for colorectal cancer may reduce recurrence and improve survival rates; however, the benefits of this treatment are limited by the significant side effects that accompany it. The development of a novel in vivo lung perfusion (IVLP) platform would permit the localized delivery of high doses of chemotherapeutic drugs to target residual micrometastatic disease. Nonetheless, it is critical to continuously monitor the levels of such drugs during IVLP administration, as lung injury can occur if tissue concentrations are not maintained within the therapeutic window. This paper presents a simple chemical-biopsy approach based on sampling with a small nitinol wire coated with a sorbent of biocompatible morphology and evaluates its applicability for the near-real-time in vivo determination of oxaliplatin (OxPt) in a 72-h porcine IVLP survival model. To this end, the pigs underwent a 3-h left lung IVLP with three doses of the tested drug (5, 7.5, and 40 mg/L), which were administered to the perfusion circuit reservoir as a bolus after a full perfusion flow had been established. Along with OxPt levels, the biocompatible solid-phase microextraction (SPME) probes were employed to profile other low-molecular-weight compounds to provide spatial and temporal information about the toxicity of chemotherapy or lung injury. The resultant measurements revealed a rather heterogeneous distribution of OxPt (over the course of IVLP) in the two sampled sections of the lung. In most cases, the OxPt concentration in the lung tissue peaked during the second hour of IVLP, with this trend being more evident in the upper section. In turn, OxPt in supernatant samples represented ∼25% of the entire drug after the first hour of perfusion, which may be attributable to the binding of OxPt to albumin, its sequestration into erythrocytes, or its rapid nonenzymatic biotransformation. Additionally, the Bio-SPME probes facilitated the extraction of various endogenous molecules for the purpose of screening the biochemical pathways affected during IVLP (i.e., lipid and amino acid metabolism, steroidogenesis, or purine metabolism). Overall, the results of this study demonstrate that the minimally invasive SPME-based sampling approach presented in this work can serve as a precise (pre)clinical bedside medical tool.
Introduction
Oxaliplatin (OxPt) is a third-generation platinum derivative that is currently used in various poly-chemotherapy schemes to treat advanced colorectal cancer (Di Francesco et al., 2002). When administered in combination with fluorouracil and leucovorin, or together with folinic acid and fluorouracil (FOLFOX), OxPt provides potent antitumor activity, presumably through the mechanism of blocking DNA replication and transcription. However, the benefits of OxPt-based treatments are limited by several recognized adverse effects, such as peripheral neuropathy, hematologic toxicity, and hypersensitivity effects (including severe anaphylaxis), as well as the development of resistance in the tumour. As such, there is significant interest in developing new strategies that would improve the tolerability and efficacy of platinum-based therapies (Di Francesco et al., 2002; Martinez-Balibrea et al., 2015; Branca et al., 2021).
Over the last few decades, researchers have become increasingly interested in the use of platinum (IV) complexes, which are more kinetically inert than their platinum (II) counterparts and are characterized by lower reactivity towards biomolecules (Schueffl et al., 2021). Satraplatin has been the most thoroughly studied platinum (IV) pro-drug, having been tested in a clinical phase III trial for use in treating metastatic prostate cancer (Olszewski and Hamilton, 2010). While the findings showed that satraplatin significantly influenced progression-free survival, they also revealed that it was unable to achieve the primary endpoint of overall survival, which ultimately resulted in the denial of regulatory approval. This result was partially attributed to a lack of sufficient tumour specificity due to the premature reduction/activation of the drug in systemic circulation (e.g., in the red blood cells (RBCs)). Further attempts have been made to improve the tumour-targeting properties of platinum-based pro-drugs (Schueffl et al., 2021). To this end, researchers have proposed new albumin-targeted pro-drugs that demonstrate enhanced accumulation in malignant tissues, with the resultant overall increase in the concentration of intact drug in the cancer cells potentially serving to enhance the efficacy of OxPt therapy. Nonetheless, such approaches are still hampered by limitations associated with difficulties in reaching therapeutic levels within cancerous cells, or by the common side effects that arise when drug levels fall outside the therapeutic window (Li et al., 2019). Adding to the above, the lungs are the most frequent sites of extra-abdominal metastasis in patients with colorectal cancer. Thus, the development of new platforms that permit high doses of OxPt to be delivered locally to specific target organs, combined with analytical tools that enable precise drug-level monitoring to ensure that tissue concentrations fall within the therapeutic window, is critical for effective cancer management. Dr. Marcelo Cypel's research group has recently developed a technique for isolated in vivo lung perfusion (IVLP) that facilitates the localized delivery of high doses of antineoplastic drugs to the lungs during surgical resection, with the aim of preventing metastasis recurrence (dos Santos et al., 2014; Reck dos Santos et al., 2016). The IVLP uses the perfusion principles of the ex vivo lung perfusion (EVLP) platform (identical circuit and guidelines for perfusion), which was developed for the assessment and treatment of injured donor lungs prior to transplantation (Cypel et al., 2011). The optimized IVLP platform has already been used to administer sarcoma-based chemotherapy, specifically doxorubicin (within a 3-h isolated left lung perfusion), thus demonstrating that high doses of therapeutic drugs can be safely administered without causing lung injury or systemic toxicity. Currently, a phase I clinical trial is being conducted at University Health Network (UHN), with 9 patients having undergone surgery and adjuvant therapy via IVLP to date. Solid-phase microextraction (SPME) has emerged as a novel, miniaturized sample-preparation approach that has been shown to be useful for the extraction/analysis of a broad range of metabolites and lipid species in a variety of matrices (Reyes-Garcés et al., 2018; Reyes-Garcés and Gionfriddo, 2019).
The SPME procedure entails the insertion of an acupuncture-needle-sized microprobe (200 µm in diameter) coated with a biocompatible polymeric extraction phase (40 µm thickness) into tissue to the full length of the coating, followed by a short extraction period determined based on the compounds of interest. Since this approach does not require the removal of any tissue sample, it allows for repeated extractions/measurements, which is not feasible with conventional biopsy. The simple design of coated SPME devices, which utilizes tuneable extraction phases and dimensions while also providing negligible depletion, not only facilitates in vivo sampling, but also enables the integration of sample collection and preparation into a single step (Ouyang et al., 2011; Reyes-Garcés et al., 2018; Reyes-Garcés and Gionfriddo, 2019). In addition, SPME is a solvent-less extraction technique based on the equilibrium between the sample and the coating on the probe, which endows it with several advantages over traditional extraction methods, including simplicity, rapidity, and cleanliness. However, perhaps the most important feature of SPME is that it is sensitive enough to capture elusive pools of metabolites that are prone to rapid and extensive conversions, thereby allowing it to capture dynamic biochemical processes in the investigated system. Furthermore, coupling SPME with highly sensitive mass detectors permits the tracking of xenobiotics (such as therapeutic drugs) and the profiling of specific metabolic pathways or global metabolites, thus providing a snapshot of the entire metabolome in "one go." The main advantages of biocompatible SPME probes are: their suitability for direct exposure to complex biological matrices without a prior sample pre-treatment step; their high selectivity for small molecules; and, most importantly, their suitability for in vivo and non-destructive sampling (Ouyang et al., 2011; Piri-Moghadam et al., 2017; Reyes-Garcés et al., 2018). SPME's applicability for in vivo tissue sampling has been demonstrated in numerous pre-clinical and clinical studies involving the sampling of the brain, liver, lungs, and heart (Reyes-Garcés and Gionfriddo, 2019). For instance, Lendor et al. (2019) applied an interesting miniaturized SPME device (total probe diameter: 195 µm) with multisite measurement capabilities for the extraction of multiple neurotransmitters within a single sampling event in a macaque brain. With regard to targeted metabolite profiling in different brain areas, SPME microprobes have also been successfully applied for the in-depth profiling of up to 52 oxylipins in the brains of conscious, moving rats (Napylov et al., 2020). Furthermore, recent studies have demonstrated that not only can SPME facilitate the in vivo extraction of a narrow metabolite/lipid subset from the brains of freely moving animals, but it can also be applied to concomitantly monitor changes in multiple metabolite or lipid classes to provide novel information about the biochemical pathways affected by a given treatment (i.e., deep brain stimulation or fluoxetine administration) (Boyaci et al., 2020).
Nonetheless, the most compelling application of SPME for in vivo tissue analysis is non-destructive organ (liver, heart, or lung) sampling to support decision making prior to transplantation, or isolated lung sampling during chemo-perfusion to monitor spatial and temporal drug biodistribution or to identify potential markers associated with drug activity (Bojko et al., 2014; Bojko et al., 2021). In this context, Bojko et al. employed C8/SCX devices to perform spatial- and temporal-resolution mapping of the lung during local high-dose doxorubicin delivery via IVLP. In this paper, we propose a biocompatible SPME technology as a minimally invasive analytical strategy to assist in the near-real-time measurement of OxPt (and its metabolites) in tissue and perfusate during a porcine 3-day IVLP survival study. Additionally, a preclinical model is also investigated to further demonstrate the proposed technique's applicability for monitoring changes/alterations in metabolomic patterns throughout IVLP. Finally, comprehensive time-course metabolite profiling is conducted, and dysregulated biochemical pathways are identified, in an attempt to develop a deeper understanding of the manifold processes that occur in perfused lungs, thus enabling improvements in targeted treatments.
Chemicals and materials
OxPt and the internal standard (IS; carboplatin), as well as diaquo-DACH platinum and dichloro-DACH platinum (transient reactive species formed during nonenzymatic OxPt biotransformation), were purchased from Toronto Research Chemicals Inc. (North York, ON, Canada). Stock solutions (2 mg/ml) containing OxPt, its metabolites, or carboplatin were prepared by dissolving an appropriate amount of the relevant compound in DMSO, followed by storage at −80 °C until further use. The OxPt standard working solutions (which were further used to prepare the calibration standards and quality control samples) were prepared by serially diluting the OxPt stock solution with water. The SPME fibres used in this study consisted of a 200 µm nitinol wire with an extraction phase length of 8 mm and a coating thickness of 40 μm; mixed-mode (MM) particles, a combination of benzenesulfonic acid (a strongly acidic cation exchanger) and octyl-functionalized silicate (SCX/C8), were selected as the extraction phase. Additionally, hydrophilic-lipophilic balanced (HLB) particles and a C18 coating were investigated alongside the C8-SCX coating in the early optimization phase to determine the optimal conditions for the extraction/enrichment of the chemotherapeutic drug and its catabolites. The MM and C18 fibres (40 μm coating thickness, 5 μm particle size) were kindly provided by Millipore Sigma (Bellefonte, PA, United States), while the HLB probes (40 μm coating thickness) were manufactured in-house by repeatedly dipping nitinol wires in a slurry containing 5 μm HLB particles and polyacrylonitrile (PAN) as a binding agent dissolved in dimethylformamide (DMF).
SPME method development
SPME protocol
The extraction protocol was optimized using phosphate buffered saline (PBS) solution (for perfusate samples) and homogenized lamb's lungs (for tissue samples), which served as surrogate matrices. Prior to use (at the time of condition optimization and during lung sampling in the hospital facility), the fibres were sterilized for 45 min via steam autoclaving at 121 °C in a Market Forge Sterilmatic Sterilizer (model STM-E type C; Middleby Corp., Elgin, IL, United States).
Next, the sterile fibres were placed in a MeOH/H2O solution (1:1, v/v) to condition the extraction phase. After the conditioning step, the fibres were used to perform extractions under static conditions in either 300 µL of PBS solution or 10 g of homogenized lamb's lung spiked with OxPt. The probes were subsequently wiped with a Kimwipe (Millipore Sigma, Burlington, MA, United States) to remove any loosely attached cell components and rinsed manually in water for 5 s. Finally, the extracts for LC-MS analysis were obtained by desorbing the fibres in an organic-aqueous solvent for 60 min at 1,500 rpm agitation.
Investigation of the performance of SPME coatings containing different functionalities
8-mm MM, C18, and HLB fibres were exposed to PBS solution spiked with OxPt at concentrations of 1-40 µg/ml (mg/L) for 20 min at room temperature under static conditions. Three to six fibre replicates were used per coating chemistry for each concentration level. Following extraction, each fibre was wiped with a Kimwipe and quickly dipped in water, and the analytes were desorbed under constant vortex agitation (1,500 rpm) for 1 h.
Extraction time profile
The extraction time profiles for OxPt were evaluated to determine the extraction kinetics, the equilibration time, and the optimal extraction time that would enable the preservation of an abundant fraction of the intact drug during in vivo SPME sampling from lung tissue or perfusate. Sterile MM fibres were employed to perform extractions from 10 g of homogenized lamb's lung spiked with OxPt at a concentration of 20 μg/g. Extractions were performed in sextuplicate at intervals of 10, 20, 30, 60, and 120 min (two independent experiments in triplicate), while the extraction time profile in perfusate samples was determined by sampling PBS solution at 5, 10, 20, 30, and 60 min using an OxPt concentration of 20 µg/ml. The preconditioning of the extraction phases, rinsing, and desorption steps were carried out under the above-described conditions.
Red blood cell (RBC) partitioning/binding of the drug and the degree of drug-protein binding that may affect its efficiency
OxPt at 40 µg/ml was incubated in Steen Solution, raw perfusate (collected before drug administration), and porcine plasma at 37 °C for 3 h. At the relevant intervals (1, 2, and 3 h of incubation), the MM-SPME probes were used to perform extractions from aliquots of the incubated solution. After analyte recovery via desorption in an aqueous-organic solvent, the obtained extracts were subjected to LC/MS analysis. PBS solutions spiked with OxPt were also utilized as controls.
SPME sampling of the left lung and perfusion fluids throughout IVLP
Male Yorkshire pigs weighing an average of 40 ± 5 kg underwent a 3-h left lung IVLP procedure (Figure 1). OxPt (Pfizer, Kirkland, QC, Canada) was administered to the perfusion circuit reservoir directly as a bolus at doses of 5, 7.5, and 40 mg/L after full perfusion flow had been established. A detailed description of the left lung perfusion procedure, perfusion circuit, priming solution composition, and protective perfusion/ventilation strategy used in this study has been provided elsewhere (i.e., Ramadan et al., 2021). The experimental protocol was approved by the Institutional Review Board (IRB) at the University Health Network (UHN; Toronto, ON, Canada) and the University of Waterloo's Research Ethics Board (#40573).
For lung sampling, two MM-SPME fibres were placed in the upper and middle sections of the lung at predetermined time points over the course of the procedure (Figure 1), with sampling taking place prior to drug administration, hourly during IVLP, and once during blood reperfusion. The extraction of the drug/metabolites lasted 20 min. For the perfusate sampling, raw perfusate and supernatant (RBC-free) were subjected to on-site extractions. The perfusate samples were collected just before the chemotherapeutic drug was injected into the perfusion circuit and again at 1, 2, and 3 h after administration. Perfusate sampling was conducted using three MM and three C18 fibres to perform a 20-min static extraction for each type of collected fluid. After each SPME sampling had been completed, the fibres were wiped with a Kimwipe, rinsed manually in water for 5 s, and immediately placed in dry ice.
LC-ESI-MS operating conditions
Before instrumental analysis, the MM-SPME fibres were desorbed in 60 µL of an ACN/H2O (8/2, v/v) mixture in glass vials with inserts and caps. To facilitate the desorption of analytes from the fibres into the surrounding solution, mechanical agitation at 1,500 rpm was applied for 60 min.
FIGURE 1: Bio-SPME sampling of porcine lungs during in vivo lung chemo-perfusion (IVLP). (A-D) Photographs of the IVLP circuit along with the solid-phase microextraction (SPME) probes applied for in vivo and non-destructive sampling of small-molecular-weight molecules, including oxaliplatin and its metabolites. As can be seen, the microprobes, which are approximately the size of an acupuncture needle (0.28 mm diameter), are inserted into the upper lobe and lingula such that their entire 8 mm coating is fully immersed. (E,F) Preparation of matrix-matched calibration curve samples for accurate drug quantitation using homogenized lamb lung as a surrogate matrix.
The mobile phase flow was set to 0.400 ml/min, and the following gradient program was used: 0-1 min 95% B; 1-1.5 min 95-40% B; 1.5-3.5 min 40% B; 3.5-4 min 40-95% B; 4-7 min 95% B. An injection volume of 10 μL was used for all analyses. The effluent from the LC column was directed to the ion source of the mass spectrometer, which was operated in selected reaction monitoring (SRM) positive-ion mode. For the determination of OxPt, its metabolites, and the IS (carboplatin), the two most sensitive/specific ion transitions (one used for quantification and the other for confirmation) and the optimal collision energies (CE) were selected for each compound.
Metabolomic and lipidomic investigations
The SPME extracts initially subjected to targeted drug/metabolite determinations were subsequently used for untargeted/global metabolomic determinations. To this end, the relevant LC/MS analyses were performed using a Vanquish UHPLC system (Thermo Fisher Scientific, Waltham, MA, United States) interfaced to a high-resolution benchtop Exactive Orbitrap mass spectrometer (Thermo Fisher Scientific, Waltham, MA, United States). Data were collected in ESI+ (positive ion) and ESI− (negative ion) mode in two different analytical runs using conditions that have been detailed in a prior work (see Olkowicz et al., 2021). In order to monitor LC/MS performance across sample runs, a quality control (QC) sample was prepared as a pooled mixture of sample aliquots and injected along the sequence.
For the lipidomic investigations, the C18-SPME fibres were desorbed in 60 µL of MeOH/IPA/H2O (45:45:10, v/v/v) under mechanical agitation at 1,500 rpm for 60 min. Chromatographic separation was achieved with an XSelect CSH C18 column (2.1 × 75 mm, 3.5 µm; Waters Corporation, Milford, MA, United States) using a two-solvent system, which has been detailed elsewhere (i.e., Solvent A: 40:60 MeOH:H2O with 10 mM ammonium acetate and 1 mM acetic acid in positive mode, and 0.02% acetic acid in negative mode; Solvent B: 90:10 IPA:MeOH with 10 mM ammonium acetate and 1 mM acetic acid in positive mode, and 0.02% acetic acid in negative mode). The LC/MS data were initially processed with ProteoWizard, which includes a very handy tool (msconvert) for converting raw data into mzXML format (Chambers et al., 2012), and subsequently analyzed for peak extraction, grouping, retention-time correction, and peak filling using the XCMS software package (Huan et al., 2017). The XCMS parameters were optimized using the IPO package and adjusted to be slightly more inclusive (Libiseller et al., 2015; Albóniga et al., 2020). The xMSannotator Integrative Scoring Algorithm was employed to annotate the extracted peaks, with METLIN, KEGG, and LIPID MAPS being used as reference databases (Uppal et al., 2017). Only unique features with medium-to-high confidence matches annotated by METLIN/KEGG/LIPID MAPS were selected for further investigation. Unsupervised principal component analysis (PCA) was performed on log-transformed, mean-centred data to detect potential outliers, assess data quality, and visualize major structures in the data. PCA score plots were generated to show clusters of samples based on their similarity, while PCA loading plots were created to identify the components that contributed to the separation among the studied groups/conditions. Once the PCA score and loading plots had been constructed, supervised partial least-squares discriminant analysis (PLS-DA) was performed, followed by model validation and variable selection, the latter being achieved using a variable influence on projection (VIP) score greater than 1.5 and an absolute p(corr) value greater than 0.5. All statistical treatment of the data was carried out using the web-based MetaboAnalyst 5.0 software package (Chong et al., 2019).
Data analysis for targeted determinations of OxPt
All SRM data were processed and visualized using the Xcalibur 2.1 Quan Browser software package (Thermo Scientific), while the quantification of OxPt was performed using the matrix-matched calibration method. Eleven-point calibration curves (including blank samples plus IS, with triplicate determinations for each level) spanning a 1000-fold concentration range were constructed with linear regression analysis and 1/x (x = concentration) weighting. The covered ranges were 0.1-100 µg/ml (mg/L) and 0.1-100 µg/g for perfusion fluids and lung tissue, respectively. Data are presented as the mean ± SD for repeated determinations of the analyte per sampling time point (perfusate).
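As an illustration of the 1/x-weighted calibration described above, a generic weighted least-squares sketch in Python is given below (this is not the Xcalibur implementation, and the variable names are hypothetical):

```python
import numpy as np

def weighted_calibration(conc, ratio):
    """Matrix-matched calibration with 1/x weighting (sketch).

    conc  : known analyte concentrations of the calibration standards
    ratio : analyte/IS peak-area ratios measured for those standards
    """
    w = 1.0 / conc                                  # 1/x weights down-weight high levels
    X = np.vstack([conc, np.ones_like(conc)]).T     # design matrix [x, 1]
    W = np.diag(w)
    # Weighted least squares: minimize sum w_i * (ratio_i - slope*x_i - b)^2
    slope, intercept = np.linalg.solve(X.T @ W @ X, X.T @ W @ ratio)
    return slope, intercept

# Usage: back-calculate an unknown from its measured area ratio
# slope, b = weighted_calibration(std_conc, std_ratio)
# unknown_conc = (sample_ratio - b) / slope
```

The 1/x weighting is the usual choice when a calibration spans several orders of magnitude, as it prevents the highest standards from dominating the fit.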
Assessing protocol feasibility
The optimization framework for the early-stage design included evaluations of the following parameters: 1) the selectivity/performance of SPME coatings with various functionalities (C8/SCX, C18, or HLB particles); 2) the optimal settings for the extraction of intact OxPt that closely simulate in vivo SPME extraction conditions; 3) the time course of sample collection; 4) OxPt's RBC- and protein-binding behaviour; and 5) the optimal instrumental and data-acquisition conditions for capturing and identifying a broad range of metabolites, including OxPt and its metabolites. Of the three tested coatings, the HLB coating provided the broadest metabolite coverage and the greatest intensities for the significant detected features, including OxPt and its catabolites (Figures 2A,B). The MM coating exhibited superior recovery for the majority of polar and moderately hydrophobic compounds compared to the C18 coating, and was therefore deemed more suitable for the analysis of hydrophilic platinum-based anticancer drugs in the lung. Since HLB fibres are not yet commercially available, potentially making their implementation in routine metabolomic workflows tedious and time-consuming, MM fibres are a good option, as they offer an acceptable compromise between suitability for high-throughput applications and balanced metabolite coverage, including for OxPt and its bio-transformative environment. Next, static extraction at equilibrium conditions was proposed as the most reliable sampling strategy, as it can prevent matrix conditions from influencing SPME's quantitation capabilities in the absence of a reference standard (added to the investigated matrix) (Figures 1E,F). As indicated in Figure 2C, 30 min was identified as the minimum extraction time required to achieve equilibrium extraction (in homogenized lung tissue) and, therefore, maximum sensitivity. Nonetheless, it is critical to note that, compared to static extraction, the insertion of fibres into a living system promotes faster extraction, presumably due to blood-flow-related convection, which means that equilibrium will always be reached faster during in vivo extraction (Roszkowska et al., 2018). As such, a sampling time of 20 min was selected for the final analyte (OxPt) determinations in the porcine lung models. In contrast, equilibrium appeared to be reached more quickly in the PBS solution, at shorter extraction times (below 10 min) (Figure 2D). Overall, a sampling time of 20 min was selected for both cases (lung tissue, perfusate) to ensure that alterations in drug levels during chemotherapy could be monitored with high precision and reliability, thus providing a near-real-time profile of the biodistribution of OxPt in the lung tissue throughout IVLP alongside its levels in the priming fluid. Five sampling time points were selected for investigation: 1) before OxPt administration; 2) at the first, second, and third hour of IVLP; and 3) during blood reperfusion.
To minimize any possible organ stress associated with the insertion of the MM-SPME microprobes, the minimum number of fibres that would still yield sufficient biological information was used (i.e., two fibres, placed in the upper and middle sections of the lung). In addition, the perfusate was sampled in parallel with the lungs using three MM (OxPt and untargeted investigations) and three C18 (untargeted analysis) fibres for each sampling time point. To study OxPt's protein-binding properties with respect to albumin, and to measure its real-time concentration in the priming fluid, OxPt was incubated in Steen Solution at 37 °C for 3 h. The maximum protein binding rate was found to be 55-60%, with equilibrium for drug-protein binding being reached within the first hour of incubation (Figure 2E). Conversely, when OxPt was incubated in raw perfusate, its uptake into the erythrocytes was rapid, with an average of 25-30% of the drug being partitioned into the erythrocytes over a 1-h period (Figure 2F; Supplementary Figures S1-S4). In untargeted metabolomic analysis, by contrast, the objective is to separate as many compounds as possible in a single analytical run (Wishart, 2016; Azad and Shulaev, 2019; Jacob et al., 2019). Therefore, the stationary phase and chromatographic mode selected for such analyses should provide the broadest separation and selectivity for the widest possible range of metabolites. However, the required level of metabolite coverage may depend on the objectives of the study, and it is generally accepted that no single analytical method is capable of comprehensively profiling the entire metabolome. In the current work, three stationary phases were tested with the aim of providing expanded metabolite coverage, namely: 1) a PFP-bonded phase; 2) HILIC; and 3) the most common C18-bonded phase. Although the PFP and C18 phases offered similar reversed-phase (RP) selectivity, the PFP phase outperformed the C18 and HILIC phases in terms of the number of metabolic features retained, while the HILIC phase was able to retain the greatest proportion of highly polar metabolites. Ultimately, the PFP phase was deemed to provide the best performance and was selected for use in the subsequent metabolomic investigations in the lung. Moreover, it is possible to increase metabolome coverage by utilizing data from both ionization modes, as the detection of particular metabolite/lipid species, such as acylcarnitines, fatty acids, and specific lipid classes, occurs in either positive or negative ionization mode. Furthermore, to capture perturbations in the hydrophobic fraction of compounds (throughout IVLP), C18-based microprobes were employed for perfusate sampling, with the extracts being analysed in RP-LC/MS mode.
Tissue and perfusate OxPt levels
Tissue measurements of OxPt concentrations revealed a rather heterogeneous distribution of the drug (over the course of IVLP) in the different sections of the lung. This trend was most evident for the higher doses, with higher concentrations of OxPt being noted (in most cases) in the upper section of the lung (Figures 3A,B). Furthermore, OxPt concentrations in the upper lobe appeared to follow a dose-dependent relationship, peaking at 2 h of IVLP (in most cases) before rapidly declining to near-zero values at reperfusion. Conversely, drug levels in the lower lingula tended to be more inconsistent as the dosage increased; thus, the lingula might not be representative for sampling.
Apart from the heterogeneous biodistribution of OxPt in lung tissue, the findings also indicated relatively low tissue levels of the drug, which may be attributable, at least in part, to binding between OxPt and the albumin present in the Steen solution, its sequestration into erythrocytes, or its rapid and extensive nonenzymatic biotransformation. Perfusate levels of OxPt peaked at the second hour of IVLP in animals treated with the lower doses of the drug (5, 7.5 mg/L) (Figure 3C). In contrast, the perfusate levels of OxPt in animals treated with higher doses peaked at the first hour of IVLP, with concentrations subsequently declining over time. Furthermore, the OxPt in supernatant samples represented 25% of the entire drug 1 hour after injection. Similar to the supernatant samples, raw perfusate levels of OxPt peaked at the second hour of IVLP in animals treated with the lower doses of the drug, whereas concentrations peaked at the first hour for animals treated with the higher dose (40 mg/L) (Figure 3D). However, OxPt levels were lower in raw perfusate samples (vs. supernatant samples), thus confirming the tendency of the drug to bind to and accumulate in RBCs. OxPt levels during IVLP and reperfusion were also assessed in selected plasma samples. These results showed no systemic detection of OxPt, indicating an effective separation between the pulmonary and systemic circulations. The proposed analytical method was developed for the determination of intact OxPt and two other cytotoxic platinum species that form during nonenzymatic OxPt conversion (diaquo-DACH platinum and dichloro-DACH platinum), which comprise the dominant metabolic routes of biotransformation. However, given the highly reactive nature of these species, it is likely that they are present only temporarily before forming complexes with amino acids, proteins, DNA, and other macromolecules, as our determinations found only trace levels of these compounds (Supplementary Figures S1-S4).
Cellular metabolic landscape of the chemotherapy milieu at spatial and temporal resolution
As previous studies have demonstrated, SPME is suitable for the extraction and profiling of a broad range of metabolites and lipid species in different biomatrices, including elusive (unstable) fractions of compounds (Reyes-Garcés et al., 2018; Reyes-Garcés and Gionfriddo, 2019). The Bio-SPME extracts that were initially used to evaluate OxPt (and its metabolites) in tissue and perfusate samples were also used for the screening of other low-molecular-weight compounds using an LC-HRAM (High Resolution Accurate Mass) system. It should be noted that the global analyte investigations only used samples collected from cases treated with the 40 mg/L dose of OxPt. Figure 4 presents the PLS-DA results obtained in ESI+ (A,B) and ESI− (C,D) ionization modes for lung tissue samples obtained at different time points during the IVLP/surgical procedure. From the plots, it is evident that the samples collected at each time point generate relatively well-separated clusters, with the separation appearing as a transitionary pattern from left to right in the plot. These results indicate that the proposed method provided clear discrimination among the metabolomic and lipidomic patterns in the samples collected before commencing lung perfusion, during IVLP, and during blood reperfusion.
Additionally, 148 (out of 958 detected) and 153 (out of 907 detected) metabolic features with a VIP score >1.5 were identified in ESI+ and ESI− modes, respectively. The top 25 discriminative features, along with their corresponding VIP values, are presented in Figures 4E,F. It is further important to note that most dysregulated compounds increased in abundance during IVLP. In addition to lung tissue, MM and C18 SPME microprobes were also used to perform extractions on perfusate samples. Specifically, metabolomic and lipidomic evaluations were performed on both the supernatant and raw perfusate samples. As can be seen in Figures 5A-D and 6A-D, the samples were clearly divided into 4/3 groups with highly distinct metabolic profiles coinciding with the time of sampling. Furthermore, VIP scores were calculated to determine the features responsible for the variance in the PLS-DA prediction models. In total, 133 variables detected in MM/ESI+ mode and 120 variables detected in MM/ESI− mode met the VIP score threshold (i.e., >1.5), as did 87 variables detected in C18/ESI+ mode and 133 detected in C18/ESI− mode. The top 25 dysregulated features (for a given analysis mode) are presented in Figures 5E,F and 6E,F. Furthermore, the two-dimensional PCA score plots for the samples in both positive and negative ion modes revealed no outliers, with the tightly clustered QC samples (in the relevant PLS-DA score plots) confirming detection stability and the high quality of the collected data (see Supplementary Figures S5-S11). Next, the significant features were annotated based on the Metabolomics Standards Initiative guidelines (Sumner et al., 2007) and the International Lipid Classification and Nomenclature Committee recommendations (Liebisch et al., 2013). The putative annotations, which were based upon accurate-mass database matches and tandem MS fragmentation data (where the latter were available), are presented in Supplementary Tables S1, S2 in the Supplementary Information. Additionally, box plots comparing the abundances of the annotated compounds over the course of IVLP are shown in Figures 7-9. Further details of the putative compound annotations and their extracted-ion chromatograms (XIC) can be found in the Supplementary Information (Supplementary Tables S1-S2 and Supplementary Figures S12-S14). Finally, we sought to determine which biochemical pathways were affected by the administration of chemotherapy. These results indicated that the application of chemotherapy disrupted pathways related to lipid (specifically, free fatty acid) and amino acid metabolism, such as the tryptophan/kynurenine, histidine, and phenylalanine pathways. Other candidate pathways potentially related to lung toxicity or injury include accelerated oxylipin generation, amplified steroidogenesis, and the production of reactive oxygen species (ROS). Moreover, profound alterations were observed in the metabolism of purines and pyrimidines, as well as in the cellular lipid mediators. Thus, these results establish Bio-SPME as a well-suited tool for detecting alterations to the cellular lipidome in response to chemotherapy-induced stress. Despite these encouraging results, further research will be required to determine whether the membrane lipid remodelling was a primary cellular adaptation to detrimental environmental conditions (Supplementary Tables S1-S2 and Figures 7-9).
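For readers who wish to reproduce the VIP-based feature selection step, a minimal sketch using scikit-learn's PLS implementation is given below. It is not the MetaboAnalyst code; the two-class encoding of y and the component count are assumptions made for illustration.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(X, y, n_components=2):
    """VIP scores for a two-class PLS-DA model (sketch).

    X : (samples, features) feature-intensity matrix
    y : (samples,) 0/1 class labels (e.g., baseline vs. IVLP)
    """
    pls = PLSRegression(n_components=n_components).fit(X, y)
    T, W, Q = pls.x_scores_, pls.x_weights_, pls.y_loadings_
    p, A = W.shape
    # Explained sum of squares of y per PLS component
    ssy = np.array([(Q[:, a] ** 2).sum() * (T[:, a] ** 2).sum() for a in range(A)])
    wnorm = (W / np.linalg.norm(W, axis=0)) ** 2    # squared normalized weights
    return np.sqrt(p * (wnorm @ ssy) / ssy.sum())

# Keep features exceeding the VIP threshold used in the study
# selected = np.where(vip_scores(X, y) > 1.5)[0]
```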
FIGURE 8: Box plots of metabolic features identified in the perfusate via MM probes showing the strongest differences between samples obtained at different points in the IVLP procedure. SN_baseline: samples collected before OxPt administration. SN_IVLP_1h/2h/3h: samples collected at the first, second, and third hour of IVLP.
FIGURE 9: Box plots of metabolic features identified in the perfusate via C18 probes showing the strongest differences between samples taken at different points in the IVLP procedure. SN_baseline: samples collected before OxPt administration. SN_IVLP_1h/2h/3h: samples collected at the first, second, and third hour of IVLP.
Discussion
OxPt is a third-generation platinum-based anticancer drug that is highly effective in preventing the growth of colorectal cancer and other malignancies, including lung, gastric, ovarian, and prostate cancer (Di Francesco et al., 2002; Martinez-Balibrea et al., 2015; Qin et al., 2020), predominantly by forming intrastrand crosslinks with DNA and inhibiting DNA synthesis. Although DNA damage has been suggested as the main mechanism affecting the proliferation of neoplastic cells, more recent data indicate that OxPt may kill cells by inducing ribosome biogenesis stress (Bruno et al., 2017) or by influencing the mitochondrial respiratory chain and energy metabolism (Qin et al., 2020). Regardless of the mechanism of action, OxPt therapy also frequently results in numerous systemic side effects, including those affecting the central and peripheral nervous systems, which force physicians to reduce the dose of medication or discontinue treatment (Branca et al., 2021). Therefore, in recent years many efforts have been made to ameliorate these adverse drug effects and improve the safety profile of OxPt. The present study demonstrates that an IVLP platform can be used to deliver high doses of OxPt to the lung during surgical resection safely and without inducing systemic toxicity. Furthermore, this work presents precise, minimally invasive sampling tools for evaluating the pharmacokinetic profile of OxPt within the studied setting, providing sophisticated information regarding the distribution of the intact drug and its concentrations around the target area, both of which are critical to the efficacy of OxPt-based treatments. Additionally, a comprehensive mapping of cellular disruptions following the administration of chemotherapy was conducted, with the results identifying a number of early predictive biomarkers for cytotoxic chemotherapy-induced lung injury. A 72-h IVLP porcine survival model was used in this study. Details related to the setting, the experimental design (i.e., an accelerated titration dose-escalation study), and an assessment of the subacute toxicities of OxPt within the tested dosage range (5-80 mg/L) have been presented in our recent work (Ramadan et al., 2021). In the current study, three doses of OxPt were explored in detail, with the maximal tolerated dose of 40 mg/L being administered several times to confirm its safety. Cases at 40 mg/L showed only mild and subclinical lung injury, as manifested by minor histologic and gross changes, as well as limited consolidation on computed tomography, without compromising gas exchange.
The novel chemical biopsy approach proposed in this work, which couples SPME with sensitive/accurate analytical instrumentation, was also applied for the pharmacokinetic profiling of OxPt in a novel IVLP-based setup, as well as for the comprehensive profiling of disturbances in multiple metabolic pathways. Knowledge about the cellular pharmacokinetics and/or subcellular distribution of platinum-based anticancer drugs is critical to understanding their pharmacology and toxicity. Several studies on the pharmacokinetics of platinum analogues (cisplatin, carboplatin, and oxaliplatin) have attempted to quantify the intact concentrations, or the total Pt, of these drugs. However, these studies have focused exclusively on the metabolic routes taken by these drugs after intravenous (I.V.) administration (Graham et al., 2000; Qin et al., 2020). The present study documents the first comprehensive approach for profiling OxPt and its transient intermediates after administration via IVLP and for screening the metabolic pathways that are affected as a result. The known pharmacokinetic profile of OxPt during intravenous administration is as follows: within the first hour, the vast majority of the drug rapidly binds to plasma proteins, including albumin and gamma-globulins; such binding was found to be moderate and time-dependent, with 85-88% of the total Pt becoming bound within 5 h (Graham et al., 2000). Furthermore, Pt has been shown to irreversibly bind to and accumulate in RBCs. Even though Pt binds to blood cells, this is not considered clinically significant, because blood cells represent a minor compartment for drug distribution in patients. For instance, previous studies have demonstrated that approximately 15% of the administered Pt will still be present in the blood at the end of a 2-h infusion. Pt is rapidly cleared from systemic circulation by cellular uptake/covalent binding to tissues and by renal elimination. In the context of IVLP, the pharmacokinetics of OxPt is affected by the composition of the perfusion fluid, the residual blood in the perfusion circuit, and the absence of urinary excretion. Our findings showed significant binding between OxPt and the albumin in the Steen Solution, and a tendency for OxPt to accumulate in RBCs; however, the unbound (free) fraction of the drug seemed to be greater than after I.V. injection. Furthermore, IVLP administration results in OxPt exposure that is several times higher than that after I.V. injection (for a detailed explanation, see Ramadan et al., 2021), thus improving the effectiveness of the applied drug/strategy. IVLP administration also minimizes or eliminates some of the adverse effects of OxPt frequently observed with I.V. injection, such as haematological toxicity or neuropathy (Di Francesco et al., 2002; Branca et al., 2021). At this point, it is worth noting that there is a clear relationship between the degree of OxPt accumulation in RBCs and the tolerability of therapy (Peng et al., 2005). Indeed, prior analysis of erythrocytes in cancer patients has provided direct evidence that patient prognosis is inversely related to the fraction of haemoglobin (Hb) that binds to OxPt; that is, the more haemoglobin that binds to OxPt, the worse the patient's prognosis. Thus, Hb-OxPt adducts in RBCs can serve as a clinical indicator of toxic response and treatment efficacy.
Undoubtedly, the most intriguing aspect of this study is the proposed sampling approach's ability to simultaneously monitor spatial and temporal variations in the metabolic patterns of OxPt and several proximate cytotoxic species (monochloro-, dichloro-, and diaquo-DACH platin), along with other non-cytotoxic products. Measurements of OxPt levels in tissue captured with Bio-SPME probes showed a rather heterogeneous distribution of the drug (over the course of IVLP) in the two sampled sections of the lung. In particular, higher concentrations of OxPt were observed in the upper lobe, indicating a dose-dependent relationship. The maximal cellular absorption of the drug occurred at the second hour of IVLP (in most cases), followed by a rapid decline to near zero at reperfusion. However, the lower levels of OxPt observed in the lingula were inconsistent with the above-noted trend regarding increased dosages, which suggests that the lingula may not provide representative sampling. This finding is consistent with those of other studies evaluating the reliability and validity of donor tissue biopsies prior to lung transplantation. For instance, Chao et al. (2021) showed that a donor lung biopsy collected from an area other than the lingula can be considered representative of the overall condition of the lung, unless there is obvious localized injury possessing a unique inflammatory-related profile. Thus, compared to biopsies taken from other sites in donor lungs, the lingula may not provide representative results for diagnostic investigations, as it presents a different physiological pattern. Furthermore, in vivo SPME sampling was also able to provide a representative snapshot of the dynamic changes to the metabolome caused by the IVLP-OxPt treatment. The main advantages offered by SPME, particularly in tissue analysis, include: the ability to tune the geometry of the miniaturized devices to target specific sampling sites; its low invasiveness compared to standard tissue sampling approaches that require biopsy collection; and the non-destructive nature of the extraction procedure. SPME's miniaturized format, high selectivity for small molecules, and the variety of available biocompatible extraction phases have positioned it as a convenient and viable strategy for the analysis of a broad range of compounds in diverse matrices, particularly within the context of global analyte profiling. Indeed, this analytical strategy enabled the identification of multiple pathways that were altered due to the injection of higher doses of OxPt into the perfusion circuit. Specifically, the results indicated alterations to the metabolism of amino acids, free fatty acids, purine, and pyrimidine over the course of the lung chemo-perfusion, pointing to compromised energy substrate utilization and, presumably, energy deprivation (Pavlova and Thompson, 2016). Amplified oxidative stress, manifested by elevated levels of ubiquinone-1 and L-phenylalanine, was also recognized in the studied setting. Several previous studies have clearly demonstrated that phenylalanine upregulation in peripheral blood is often the consequence of chronic immune activation, inflammation, and oxidative stress in cancer patients (Lieu et al., 2020).
On the other hand, elevated levels of ubiquinone-1 or ubiquinol-6 (components of the mitochondrial respiratory chain) have been characterized as compensatory mechanisms against excessive ROS generation in various pathological conditions (Orr et al., 2013). Furthermore, the upregulation of several bioactive lipid mediators derived from arachidonic acid (pro-inflammatory prostaglandins) or generated from docosahexaenoic and eicosapentaenoic acids (pro-resolving mediators) over the course of IVLP strongly suggests acceleration of the inflammatory response (Serhan, 2014; Sansbury and Spite, 2016; Serhan and Levy, 2018). Our findings also demonstrate dysregulation of the steroidogenic system in the perfused lung, but further research is required to define its overall physiological relevance. Finally, profound alterations in lipid metabolism and/or signalling were observed. Perturbations in lipid metabolism and, consequently, lipid composition have important therapeutic implications, as they may affect the survival, membrane dynamics, and therapy response of tumour cells (Munir et al., 2019). Changes in the levels of many of the observed lipids may have been due to exposure to metabolically challenging conditions, which is significant, as such adaptation can help cells thrive in harsh microenvironments. Nonetheless, our study has several important limitations. The main limitation is a relatively small sample size, which is directly related to the accelerated titration design. Furthermore, this study's main goal was to demonstrate the proposed chemical biopsy tool's usefulness for the non-invasive comprehensive profiling of the metabolic route of anti-cancer drugs and for screening the evolving landscape of the chemotherapy-altered metabolome. Additionally, while the identification of biomarker profiles predictive of lung injury was beyond the scope of this paper, studies are currently in progress that use the proposed analytical approach and larger samples to make clinically relevant observations and to identify biomarkers that could aid in the detection of toxicity or inform on therapeutic outcomes. Finally, we explored the cellular distribution of OxPt and its levels in normal tissue and cells, but we did not explore this characteristic in the target cancer cells. Despite these limitations, we ultimately achieved the purpose of this study, which was to provide a rapid estimate of dose-limiting toxicity that can be used as a guide before initiating a safety clinical trial involving metastatic cancer patients. In conclusion, this paper presented a state-of-the-art analytical pipeline for the comprehensive metabolic profiling of in vivo perfused lungs. The proposed method was applied to study the pharmacokinetic behaviour of OxPt in detail in an animal model, including its distribution, effective concentrations, and metabolic route of biotransformation/elimination. In addition to enabling the quantification of the intact drug in diverse biological compartments (lung tissue, perfusate, plasma), Bio-SPME probes enabled us to capture a composite snapshot of the intracellular metabolome. Furthermore, when directly coupled to highly sensitive mass detectors, SPME microprobes can be an invaluable tool for rapid diagnostics or for tailoring treatments to individual patients.
The findings presented herein demonstrate that it is possible to achieve spatially and temporally resolved biochemical/molecular characterization of a living mammalian lung subjected to medical or scientific experimentation or treatment in a fast and minimally invasive manner, thus not disturbing other medical or surgical procedures. Furthermore, SPME technology makes it possible to tailor the analytical protocol to the metabolites or lipids of interest, as it allows researchers to employ more selective extractive phases and desorption parameters. In addition, owing to its miniaturized format, flexibility of design, and easy adaptation to proven analytical approaches, SPME has been positioned as a convenient and viable strategy for in vivo analysis, with possible future applications directed at providing new insights into the processes characterizing complex biological systems. The sampling devices, which can potentially be coupled to a variety of instruments for rapid determinations, might be customized into a personalized diagnostic tool to support surgeons' decision-making processes. Along with the evolution of analytical technologies, including portable readers based on ion mobility and optical spectroscopic techniques that might be directly hyphenated to SPME microprobes, this technology will continue to advance to address the very specific needs of the biomedical field. Altogether, this research can serve as an important reference for researchers conducting pharmacokinetic studies of other anticancer drugs using an IVLP-based route of administration.
Impact of cardiovascular disease on clinical outcomes in hospitalized patients with Covid-19: a systematic review and meta-analysis

Contrasting data have been published about the impact of cardiovascular disease on Covid-19. A comprehensive synthesis and pooled analysis of the available evidence is needed to guide prioritization of prevention strategies. To clarify the association of cardiovascular disease with Covid-19 outcomes, we searched PubMed up to 26 October 2020 for studies reporting the prevalence of cardiovascular disease among inpatients with Covid-19 in relation to their outcomes. Pooled odds ratios (OR) for death, for mechanical ventilation or admission to an intensive care unit (ICU), and for composite outcomes were calculated using random effects models, overall and in the subgroup of people with comorbid diabetes. Thirty-three studies enrolling 52,857 inpatients were included. Cardiovascular disease was associated with a higher risk of death both overall (OR 2.58, 95% confidence interval [CI] 2.12-3.14, p < 0.001, number of studies 24) and in the subgroup of people with diabetes (OR 2.91, 95% CI 2.13-3.97, p < 0.001, number of studies 4), but not with a higher risk of ICU admission or mechanical ventilation (OR 1.35, 95% CI 0.73-2.50, p = 0.34, number of studies 4). Four out of five studies reporting OR adjusted for confounders failed to show an independent association of cardiovascular disease with Covid-19 deaths. Accordingly, the adjusted OR for Covid-19 death in people with cardiovascular disease dropped to 1.31 (95% CI 1.01-1.70, p = 0.041). Among patients hospitalized for Covid-19, cardiovascular disease confers a higher risk of death, which was highly mitigated when adjusting the association for confounders.

Supplementary Information: The online version contains supplementary material available at 10.1007/s11739-021-02804-x.

Introduction

Since its spread in late 2019, Coronavirus disease 2019 (Covid-19) has caused more than 1 million deaths. Cardiometabolic risk factors, such as hypertension and diabetes, are among the most frequent comorbidities in patients hospitalized for Covid-19. The mounting literature describing clinical features of patients with Covid-19 initially suggested that pre-existing cardiovascular disease is also an important risk factor for severe disease and death [1]. Nevertheless, our group and others failed to show significant associations between a history of cardiovascular disease and poor Covid-19 outcomes, especially after adjustment for confounders [2-6]. Indeed, most of the available data are from small and underpowered studies differing in settings and in the features of the populations enrolled. Therefore, a comprehensive synthesis and a pooled analysis of the rapidly increasing number of studies conducted in patients with Covid-19 are needed to allow better risk stratification and more effective clinical care. In particular, it is important to disentangle whether and to what extent the presence of cardiovascular disease is associated with poor Covid-19 outcomes, and whether the impact of a history of cardiovascular disease varies by country and type of outcome. We also meant to understand whether the existing evidence supports an association between cardiovascular disease and Covid-19 outcomes independently of confounders, such as older age and sex. To these aims, we conducted a systematic review and meta-analysis of studies reporting clinical outcomes of subjects hospitalized for Covid-19 with and without a history of cardiovascular disease.
We secondarily aimed to investigate whether cardiovascular disease further increases the risk of poor Covid-19 outcomes in the high-risk group of people with diabetes mellitus, which may be considered a cardiovascular equivalent.

Search strategy and selection criteria

In this systematic review and meta-analysis, we searched PubMed for the term "covid-19", looking for observational studies published in English up to 26 October 2020 and reporting original clinical data about history of cardiovascular disease in Covid-19 inpatients aged > 18 years with and without at least one outcome among death, mechanical ventilation, admission to an intensive care unit (ICU), or a composite outcome including at least one of the above. The search was filtered to include only "clinical studies" and "observational studies". We excluded studies that were not original articles, randomized clinical trials testing the efficacy of therapeutic interventions on Covid-19 outcomes, whole-population studies, studies conducted in non-hospitalized people, and mathematical modeling, machine learning or computational studies. Four investigators (AS, CM, CL and RA) independently screened titles, abstracts and full-text articles reporting potentially eligible studies. Disagreements were resolved by consultation with two adjudicators (EM and LDO) when necessary.

Data collection

Results in studies' reports and their accompanying supplementary materials were used as the only source of information. Databases of the individual studies were not obtained from the sponsoring institutions, and analyses were performed at the study level. Data from each eligible article were independently extracted by one investigator (LDO, AS, CL, RA, CM) and entered in a structured spreadsheet. Data extraction was duplicated for all papers by two independent researchers (EM and IC). The following data were extracted: total number of participants, country of the hospital where patients were enrolled, definition of cardiovascular disease, outcomes of the study, number of patients with and without the study outcomes, and number of patients with and without cardiovascular disease among patients with and without the study outcomes. Absolute numbers were recalculated when percentages were reported. Adjusted odds ratios (OR) with the corresponding 95% confidence intervals (CI) were extracted if available.

Outcomes

The clinical outcomes evaluated in this meta-analysis were: death, mechanical ventilation or ICU admission, and a composite outcome including at least one of the above. If a study reported data for two of these outcomes, data for both outcomes were retrieved and analyzed. No study reported data for all three outcomes.

Effect measures

Crude OR and 95% CI for each study were recalculated based on the absolute numbers of patients with and without cardiovascular disease among those with and without the study outcome. Adjusted OR and the corresponding 95% CI were used instead of crude OR if available from the study.

Data analysis

The DerSimonian-Laird method for random effects [7] was used in the primary analyses to estimate the pooled OR for the three study outcomes, with history of cardiovascular disease (defined according to the definition reported in each study) as the exposure. The DerSimonian-Laird method was also used to evaluate the pooled OR for death in the subgroup of subjects with diabetes mellitus.
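The crude OR recalculation described under "Effect measures" can be sketched as follows; this is the standard Woolf log-OR method, assumed here rather than taken from the authors' code (the review used Stata), and the example counts are hypothetical.

```python
import math

def crude_or(a: int, b: int, c: int, d: int):
    """Crude odds ratio with a Woolf 95% CI from a 2x2 table of counts:
    a/b = patients with CVD with/without the outcome,
    c/d = patients without CVD with/without the outcome."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, (lo, hi)

# Hypothetical single-study counts: 30/70 deaths with CVD, 40/260 without.
print(crude_or(30, 70, 40, 260))  # OR ~ 2.79 with its 95% CI
```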
A separate meta-analysis including only studies reporting OR adjusted for confounders was also performed. I² was used to assess heterogeneity. Subgroup meta-analyses by country were conducted to explore heterogeneity. Countries represented in only one study (France, Greece, and Brazil) were grouped in the "other countries" subgroup. Publication bias was assessed visually using funnel plots and formally with Egger's test for the primary analyses if at least ten studies were included in the meta-analysis [8]. All meta-analyses were conducted using Stata version 12.1 (StataCorp, United States). p values < 0.05 were considered statistically significant.

Study selection

We identified 638 articles in the published literature according to the search strategy used for this systematic review and meta-analysis (Fig. 1). We excluded 446 articles at the title/abstract level because they were not in English, were not on humans, or reported results of randomized clinical trials or study protocols. Of the remaining 192 articles assessed for eligibility at the full-text level, 149 did not report information useful for the calculation of the OR for any of the relevant outcomes in people with previous cardiovascular disease, 6 studies were not conducted in hospitalized patients, 3 studies enrolled children, and 1 study did not define cardiovascular disease. Finally, 33 studies were included in this meta-analysis.

Cardiovascular disease and Covid-19 outcomes

Compared with Covid-19 hospitalized patients without a history of cardiovascular disease, those with such a history had a higher risk of death (pooled OR 2.56, 95% CI 2.12-3.10, p < 0.001, number of studies 24) but not of ICU admission or mechanical ventilation (pooled OR 1.35, 95% CI 0.73-2.50, p = 0.34, number of studies 4); the pooled OR for composite outcomes was 1.72 (95% CI 1.13-2.63, p = 0.011, number of studies 10) (Fig. 2A-C). The heterogeneity was considerable among studies investigating death (I² = 84.3%, p < 0.001), while it was lower among studies investigating ICU admission or mechanical ventilation (I² = 54.9%, p = 0.084) and among those investigating composite outcomes (I² = 53.3%, p < 0.001). No significant publication bias was found (Egger's tests: p = 0.18 for death, p = 0.16 for the composite outcome; funnel plots in supplementary figures S1, S2; publication bias was not formally tested for the outcome ICU admission or mechanical ventilation because fewer than ten studies were included). Five studies reported OR for poor Covid-19 outcomes adjusted for confounders. Among these, only Kim DW et al. reported a significant 2.38-fold increased risk of death (95% CI 1.03-5.49) among patients with previous acute myocardial infarction after adjusting for sex, age, type of district, high epidemic region and socio-economic status [14]. On the contrary, Zhou et al. [23] and Iaccarino et al. [19] did not find independent associations with death. Similarly, cardiovascular disease was not independently associated with either the respective composite primary outcomes or death in Petrilli et al. [21] and in Maddaloni et al. [2]. Accordingly, the meta-analysis showed that cardiovascular disease was not independently associated with the primary outcomes of these studies (pooled OR 1.20, 95% CI 0.87-1.66, p = 0.26), and the pooled adjusted OR for death among inpatients with cardiovascular disease decreased to 1.31 (95% CI 1.01-1.70, p = 0.041, I² = 19.6%, p = 0.29) (Fig. 3).
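For readers who wish to reproduce pooled estimates like those above, the following is a minimal sketch of the DerSimonian-Laird random-effects estimator with the I² heterogeneity statistic. It follows the standard published method, not the authors' Stata code, and the input effect sizes are hypothetical.

```python
import math

def dersimonian_laird(log_ors, variances):
    """Pool log odds ratios with the DerSimonian-Laird random-effects model;
    returns the pooled OR, its 95% CI, and the I^2 statistic (percent)."""
    k = len(log_ors)
    w = [1 / v for v in variances]                       # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, log_ors)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, log_ors))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, log_ors)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, i2

# Hypothetical per-study log(OR)s and their variances, not data from this review:
print(dersimonian_laird([0.9, 1.1, 0.6, 1.3], [0.04, 0.09, 0.06, 0.12]))
```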
Meta-analyses by country

To explore the heterogeneity found among studies evaluating the risk of death, pooled ORs by country were calculated and confirmed an increased risk of death among inpatients with cardiovascular disease hospitalized in all countries except Greece (OR 0.69, 95% CI 0.18-2.02) [20] and the USA (pooled OR 1.32, 95% CI 0.76-2.28). Of note, the analysis by country seemed to explain at least in part the heterogeneity found in the primary meta-analysis for death, which remained considerable only for studies from China (I² = 72.9%, p < 0.001) (Fig. 4).

Cardiovascular disease and Covid-19 in patients with type 2 diabetes

Four studies reported data about the prevalence of cardiovascular disease among Covid-19 survivors and non-survivors with comorbid type 2 diabetes [5, 24-26]. Overall, the presence of cardiovascular disease on top of diabetes was associated with a 2.9-fold higher risk of death (pooled OR 2.91, 95% CI 2.13-3.97, p < 0.001) (Fig. 5). Cariou et al. [5] also reported data about the risk of a composite outcome of mechanical ventilation or death within 7 days of admission among people with comorbid diabetes and found no significant association of cardiovascular disease with the composite outcome.

Discussion

This systematic review and meta-analysis of observational studies conducted among hospitalized patients with Covid-19 shows that those with a history of cardiovascular disease are, on average, 2.58 times more likely to die than those without, while no significant increase in the risk of mechanical ventilation or ICU admission was found. When restricting the analysis to studies adjusting results for confounders, this association was markedly attenuated. Crucial mechanisms that have been hypothesized to explain the high rates of Covid-19 progression towards critical scenarios, or even death, may be enhanced by cardiometabolic conditions. In particular, the pro-thrombotic and pro-inflammatory milieu predisposing cardiometabolic patients to cardiovascular events [37] may also promote the cytokine storm and the formation of multiple blood clots that can occur in the most severe Covid-19 cases [38,39]. Indeed, thrombotic complications are frequent and significantly contribute to morbidity and mortality among Covid-19 patients [40,41]. In this regard, differences may exist in thromboprophylaxis, which has been indicated in ICU patients, in those with acute respiratory insufficiency, and in the presence of mild-to-moderate respiratory symptoms and an elevated risk of venous thromboembolism [38,42]. However, most of the studies published so far did not adjust their observations for confounders, potentially leading to deceiving conclusions. Therefore, we also investigated this association gathering data only from studies conducting multivariate analyses, which allows us to understand the relevance of considering such confounders when evaluating the role of cardiovascular disease in Covid-19 progression. Of note, we found results corrected for confounders in only 5 of the 33 included studies (15.2%), and almost all of them (4 out of 5) failed to show independent associations of cardiovascular disease with Covid-19 deaths or composite outcomes. Accordingly, the adjusted pooled OR for death was more than 1 point lower compared with the crude pooled OR. However, the heterogeneity of adjustments between studies should be acknowledged as a limitation of this meta-analysis.
Another finding of this meta-analysis is the heterogeneity of the prognostic impact of cardiovascular disease on Covid-19 observed among different countries. Possible explanations for this result may lie in different secondary prevention strategies across healthcare systems, in different criteria used for hospitalizing people affected by Covid-19, or in a role for ethnicity. In contrast to what was observed for death, no association between cardiovascular disease and risk of ICU admission or mechanical ventilation was found. This observation may lead to the hypothesis that cardiovascular disease affects disease progression among patients with the most severe cases of Covid-19, who are at the highest risk of death, but not among people affected by moderate or mild Covid-19. However, we were not able to perform a sensitivity analysis by subgroups of Covid-19 severity because of the lack of such information in the available literature. Finally, we evaluated whether cardiovascular disease increases the risk of poor Covid-19 outcomes in subjects with type 2 diabetes, confirming the association found in the general population when using crude ORs. This result is not consistent with a previous study conducted by our group reporting that the presence of cardiovascular disease was not associated with Covid-19 hospitalization among people with type 2 diabetes [43]. However, the different outcome, and the fact that correction for confounders was not performed in any study reporting data in the subgroup of people with diabetes, may explain this apparent contrast. Strengths of this study include the systematic review of published papers with available data helping to disentangle the complex association between cardiovascular disease and Covid-19 outcomes [44], the gathering of data from a high number of studies from different countries including more than 50,000 inpatients, and the identification and separate analysis of studies reporting adjusted associations to better clarify the real impact of cardiovascular disease on Covid-19 outcomes. Nevertheless, some limitations should be acknowledged. Our search was limited to studies indexed in PubMed and, therefore, we might have missed papers indexed in EMBASE, the Cochrane Library, PROSPERO or other databases. Differences across papers with regard to populations and explored outcomes, together with the often vague definition of cardiovascular disease, resulted in high heterogeneity. However, this does not preclude pooling of data; it is consistent with other meta-analyses on Covid-19 [45], and heterogeneity was explored through subgroup analyses. Rather, our study provides a reliable outlook of the available data and highlights the heterogeneity across the Covid-19 literature and the need to improve the quality and standardization of research in this field. Specifically, a clearer definition of cardiovascular disease is needed when reporting data about the risk factors for poor Covid-19 prognosis. Indeed, our systematic review and meta-analysis shows that about half of the included studies do not clearly define "history of cardiovascular disease", possibly including a highly heterogeneous population within the group of people with the disease. In this regard, this study did not specifically investigate the impact of heart failure on Covid-19 outcomes, which deserves a separate meta-analysis.
It is also important to highlight the high heterogeneity we found in the literature with regard to the definitions of poor Covid-19 outcomes, calling for a widely agreed consensus to standardize the analysis of clinical data around the globe. Finally, despite this systematic review and meta-analysis being conducted in the late phases of the Covid-19 pandemic, we believe that these results are still of value to guide the prioritization of certain patients for primary and secondary Covid-19 prevention. Unfortunately, time is still needed before the pandemic is definitively defeated, and future infectious diseases caused by pathogens similar to SARS-CoV-2 could spread.

Conclusions

Among patients hospitalized for Covid-19, cardiovascular disease confers a higher risk of death, but not of mechanical ventilation or ICU admission, and this excess risk is largely attenuated when adjusting for confounders. Since the majority of the studies with multivariate analyses failed to show an independent role of cardiovascular disease in increasing the risk of Covid-19 progression towards poor outcomes, potential explanations for the higher prevalence of cardiovascular disease among patients suffering from severe Covid-19 should mostly be sought in cardiovascular risk factors rather than in cardiovascular disease itself. These may include ageing, the increased frailty of patients with comorbid cardiovascular disease or, most probably, the comorbidities often co-existing with and predisposing to cardiovascular events, such as obesity, diabetes and hypertension.

Funding: Open access funding provided by Università degli Studi di Roma La Sapienza within the CRUI-CARE Agreement. No funding source to declare.

Availability of data and material: All data used in this manuscript can be found in the online versions of the studies that were accessed. Our own data synthesis of these manuscripts is available from the corresponding author upon reasonable request.

Conflict of interest: The authors declare no conflict of interest related to this manuscript.

Human and animal rights and informed consent: This is an analysis of previously published research data; therefore, neither ethics approval nor patient consent was required.

Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ultrasound-assisted prompted voiding care for managing urinary incontinence in nursing homes: A randomized clinical trial

Aims: To determine whether ultrasound-assisted prompted voiding (USAPV) care is more efficacious than conventional prompted voiding (CPV) care for managing urinary incontinence in nursing homes.

Methods: Thirteen participating nursing homes in Japan were randomized to the CPV (n = 7) or USAPV (n = 6) care group. Residents of the allocated nursing homes received CPV (n = 35) or USAPV (n = 45) care for 8 weeks. In the CPV group, caregivers asked the elderly every 2-3 h whether they had a desire to void and prompted them to void when the response was yes. In the USAPV group, caregivers regularly monitored bladder urine volume with an ultrasound device and prompted residents to void when the volume came close to the individually optimized bladder capacity. A frequency-volume chart was recorded at baseline and after the 8-week intervention to measure daytime urine loss.

Results: The change in daytime urine loss was statistically greater in the USAPV (median, −80.0 g) than in the CPV (median, −9.0 g; P = .018) group. The proportion of elderly individuals whose daytime urine loss decreased by >25% was 51% and 26% in the USAPV and CPV groups, respectively (P = .020). Quality-of-life measures of elderly participants showed no significant changes in either group. The care burden scale score of caregivers was unchanged in the USAPV group (P = .59) but significantly worsened in the CPV group (P = .010) after the intervention.

Conclusions: USAPV is efficacious and feasible for managing urinary incontinence in nursing homes.

KEYWORDS: elderly, nursing homes, randomized clinical trial, ultrasound-assisted prompted voiding, urinary incontinence

1 | INTRODUCTION

Urinary incontinence (UI) is common in community-dwelling and institutionalized elderly populations, with a prevalence ranging from 25% to 50%. 1,2 UI is associated with multiple health problems, such as dermatitis, 3 falls, 4 lower activity of daily living, 5 longer hospital stays, and higher mortality, 6 thus impairing patients' quality of life (QOL) and dignity. 7 In addition, management of UI imposes a heavy burden on society because of the need for human and financial resources. 8,9 Changing diapers is one of the most stressful tasks for care workers. 9 Management of UI in elderly individuals usually involves the use of absorbents and toileting programs such as bladder training, habit retraining, timed voiding, and prompted voiding (PV). 10 In PV care, care workers regularly prompt elderly individuals to void and provide positive feedback for appropriate voiding. PV is recommended by the International Consultation on Incontinence as an effective modality for the care of nursing home residents and home-care clients.
However, the efficacy of PV is limited; 11 less than 30% of cases achieve a reduction in absorbent use. 12 More efficacious and feasible care for UI in elderly populations is an urgent need in the globally aging society. In the past decade, we developed an ultrasound-assisted prompted voiding (USAPV) care program for the management of UI in elderly individuals: care workers regularly monitor the intravesical urine volume of elderly individuals via ultrasonography and prompt them to void when the volume reaches a pre-fixed optimal value. Regular monitoring of intravesical urine volume may prevent delayed prompting or non-voiding in the toilet. In previous studies, the USAPV method was found to reduce absorbent consumption in 63% (50/80) of hospitalized incontinent adults 13 and 52% (40/77) of nursing home residents. 14 However, these studies lacked controls receiving conventional prompted voiding (CPV) care. In the present randomized clinical trial, we aimed to compare the efficacy and feasibility of USAPV and CPV for UI care in nursing home residents.

2 | MATERIALS AND METHODS

First, we recruited participating nursing homes. Then, using a random number table, we randomized the nursing homes into the CPV and USAPV groups at a ratio of 1:1 within the geographical clusters of the western and eastern parts of Japan. All residents were assessed for care-needs level (1 [mild] to 5 [severe]), as defined by the Long-Term Care Insurance system, a public social service for elderly individuals in Japan. 15 Elderly individuals with care-needs levels of 1-3 are usually ambulant or partially dependent, whereas those with care-needs levels of 4 or 5 are almost fully dependent. 16 We recruited residents to participate in the trial from September 1, 2015 to September 30, 2016. The inclusion criteria for residents were as follows: (a) aged 65 years and older; (b) care-needs level of 1 or more; (c) using pads or diapers because of UI; and (d) post-void residual urine volume of less than 300 mL. Individuals with serious consciousness disorder, acute-phase disease, end-of-life care, or a mass in the lower abdomen (eg, pelvic tumor, pelvic cystic disease, or abnormal ascites detected by ultrasonography) were excluded from recruitment. Written informed consent was obtained from each participant or a family member. Written informed consent was also obtained from care workers. Before the start of the study, co-morbidities, QOL, cognitive function, mental state, depression, physical function, and motivation level were assessed using the Charlson comorbidity index, 17 EQ-5D, 18 mini-mental state examination (MMSE), 19 geriatric depression score (GDS), 20 Barthel index, 21 and vitality index, 22 respectively. In the CPV group, we recorded a frequency-volume chart for 1 day during the daytime before the intervention to measure the volume of voided urine and urine loss. Urine loss was determined by weighing the pads on a scale. In the interventional phase, caregivers regularly asked the residents every 2-3 h whether they had a desire to void and prompted them to void when the response was yes. The residents were also allowed to void whenever they had a desire to void. Individuals who were physically unable to visit the toilet were provided physical assistance. In the USAPV group, we measured the residual urine volume at least twice during the recording of the frequency-volume chart using a portable ultrasound device (BladderScan BVI6100; Verathon, Bothell, WA).
The rental cost of the BladderScan BVI6100 was 95 USD per month. The sum of the mean voided volume and the mean residual volume was considered the participant's optimal intravesical urine volume for voiding. Subsequently, caregivers regularly monitored the urine volume in the bladder every 2-3 h with the ultrasound device and prompted the participant to void when the volume exceeded 75% of the individually prefixed bladder capacity. After the 8-week intervention of CPV or USAPV, the frequency-volume chart was recorded for 1 day to measure the daytime volume of voided urine and urine loss. Measures and indices to assess physical and mental conditions were also repeated. The participating care team of five care workers per institution was kept unchanged during the study. The care teams of all nursing homes held a protocol meeting before the start of the study to standardize the care. None of the family members were involved in the intervention. The primary outcome measures for efficacy were the changes in daytime urine loss and in QOL assessed using the EQ-5D from baseline to the end of the intervention. The secondary outcome measures were the scores for mental state, depression, physical function, and level of motivation assessed using the MMSE, GDS, Barthel index, and vitality index, respectively. We also assessed the change in caregivers' QOL with the SF-12v2 23 and their mental stress for care burden with a visual analog scale (VAS). 24 The VAS scores ranged from 0 (not stressful at all) to 100 (very stressful).

[Figure 1: Profile of the randomized clinical trial. CPV, conventional prompted voiding; USAPV, ultrasound-assisted prompted voiding.]

The sample size was calculated with the following assumptions: rates of urine loss reduction of 22% with CPV 12 and 52% with USAPV, 14 two-sided α = .05, 80% power, and a 1:1 ratio of CPV to USAPV. The simulation indicated a sample size of 92 elderly individuals (46 in each group). With an estimated dropout rate of 20%, 115 elderly individuals were to be recruited. Statistical analysis was performed using the JMP software, version 13.1.0 (SAS, Cary, NC). Randomization was performed at the facility level; however, we analyzed data at the individual level. The Wilcoxon rank sum test was used to compare the baseline characteristics, except sex, between the groups. The chi-squared test was performed to compare sex between the groups. The Wilcoxon signed rank test was used to compare the changes in daytime urine loss; daytime voided volume; and EQ-5D, MMSE, GDS, Barthel index, vitality index, SF-12v2, and VAS scores. We calculated a propensity score including facility and those baseline characteristics of the study participants with a P-value less than .20. The change in daytime urine loss between the groups was evaluated using the Wilcoxon rank sum test. We conducted an analysis of covariance using the propensity score as a covariate to compare the change in daytime urine loss. In a post hoc analysis, we used a chi-squared test to compare the proportion of elderly individuals whose daytime urine loss decreased by more than 25%. We also evaluated the effect of USAPV care by calculating a propensity score-adjusted odds ratio for improving urine loss by logistic regression analysis. The model performance for improving urine loss was evaluated by calculating the concordance index (c-index), which is a generalization of the area under the receiver operating characteristic curve. P-values less than .05 were considered statistically significant.
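The reported sample size of 46 per group can be reproduced with one standard formula for comparing two proportions with Fleiss' continuity correction; the trial does not state which exact formula was used, so the choice below is an assumption that happens to match the published numbers.

```python
import math
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for comparing two proportions, with Fleiss'
    continuity correction (a standard textbook formula; assumed here,
    since the trial does not name its exact method)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    p_bar = (p1 + p2) / 2
    n = ((za * math.sqrt(2 * p_bar * (1 - p_bar))
          + zb * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    # Continuity correction for the chi-squared comparison of proportions:
    n_cc = n / 4 * (1 + math.sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return math.ceil(n_cc)

# 22% improvement with CPV vs 52% with USAPV, two-sided alpha = .05, 80% power:
print(n_per_group(0.22, 0.52))  # 46 per group, i.e. 92 in total, as reported
```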
3 | RESULTS

Thirteen nursing homes participated in the study, and they were randomly assigned to the CPV (n = 7; four from the western part and three from the eastern part) and USAPV (n = 6; three from the western part and three from the eastern part) groups. Among the 13 randomized nursing homes, no participants were enrolled from two (one each in the CPV and USAPV groups). We excluded 39 of the 119 incontinent residents enrolled for the following reasons: no daytime urine loss during baseline frequency-volume chart recording (n = 18), discharge within 8 weeks (n = 17), and age younger than 64 years (n = 4), leaving one further CPV nursing home with no evaluable participants. The statistical power for the enrollees (n = 119) and the evaluable subjects (n = 80) was estimated as 90.3% and 72.8%, respectively. The remaining 80 participants (35 from 5 nursing homes in the CPV group and 45 from 5 nursing homes in the USAPV group) underwent the full-term 8-week intervention (Figure 1). Both arms included three western and two eastern facilities. The mean institutional capacity was 95 and 92 residents in the CPV and USAPV groups, respectively (P = .68). All the elderly participants were long-term care recipients. Mean care-needs levels were 3.2 in the CPV and 3.5 in the USAPV group (P = .25). The general characteristics of the participants at baseline (Table 1) were not statistically different between the groups, except for the EQ-5D score and Barthel index, which were significantly worse in the USAPV group than in the CPV group (P = .024 and P = .006, respectively). As for frequency-volume chart variables, daytime urine volume was equivalent between the groups, whereas daytime urine loss was significantly larger in the USAPV group than in the CPV group (median, 300 g vs 150 g; P = .001). We calculated a propensity score including covariates of age, baseline scores of EQ-5D and Barthel index, daytime urine loss at baseline, and institution. The c-index of our model was 1.000. When the parameters were compared at baseline and at the end of the study (Table 2), the median daytime urine loss in the USAPV group alone significantly decreased after the intervention (P = .008). The change in daytime urine loss, the primary outcome, was significantly larger in the USAPV group (median [IQR], −80.0 [−175 to +24] g) than in the CPV group (median [IQR], −9.0 [−60 to +80] g; crude P = .018 and propensity score-adjusted P = .048, respectively) (Figure 2). Four participants in both groups attained complete dryness (0 g daytime urine loss) at the end of the study. In a supplemental analysis, the proportion of elderly individuals whose daytime urine loss decreased by more than 25% was 51% (23/45) in the USAPV group and 26% (9/35) in the CPV group (P = .020). Of the 50 care workers participating in the study (5 from each facility), 49 (25 in the CPV group and 24 in the USAPV group) completed the SF-12v2 questionnaire and the VAS for care burden. Baseline SF-12v2 scores and the VAS score addressing care burden were not statistically different between the groups (P = .92, P = .35, P = .89, and P = .62, respectively). The subscales of SF-12v2 showed no significant changes after the intervention in either group. The VAS score was almost unchanged in the USAPV group (P = .59), but it significantly worsened after the intervention in the CPV group (P = .010).
The change in the VAS score was not significantly different between the groups (median [IQR], +9 [−1.5 to +25] in the CPV group and 0 [−5 to +14] in the USAPV group; P = .10). No adverse events related to the study protocol were observed in elderly individuals or care workers.

[Figure 2: Change in daytime urine loss. CPV, conventional prompted voiding; USAPV, ultrasound-assisted prompted voiding.]

4 | DISCUSSION

In this randomized clinical trial, USAPV care reduced daytime urine loss of incontinent nursing home residents compared with CPV care. We have previously shown that USAPV care decreases absorbent consumption in incontinent hospitalized adults 13 and incontinent elderly living in nursing homes. 14 This study is the first randomized controlled trial to confirm the significantly higher efficacy of USAPV than of CPV in managing UI, which is common in the elderly population. USAPV care could reduce urine loss because care workers can determine the intravesical urine volume, and its proportion relative to the individual's optimal bladder capacity, at the bedside. They can prompt elderly individuals to go to the toilet with certainty when the intravesical urine volume reaches the optimal value. Alternatively, the care workers can wait until the next monitoring, rather than prompting the elderly individuals to go to the toilet, when the intravesical urine volume is still low. This can prevent delayed prompting or non-voiding in the toilet. Given these positive experiences, elderly individuals and caregivers may be more motivated to void properly in the toilet. The participants' QOL, cognitive function, mental state, depression, physical function, and level of motivation were assessed using the EQ-5D, MMSE, GDS, Barthel index, and vitality index, respectively. All these indices showed no change after the 8-week intervention in either group. We previously reported that the vitality index did not change after 12 weeks of USAPV care. Extended intervention may detect changes in these variables, as UI negatively affects multiple aspects of physical, mental, and social health. 3-7 A concern with USAPV is the greater burden on care workers, because ultrasound monitoring may be too labor-demanding or bothersome. However, none of the QOL domains of the SF-12v2 showed significant changes. The VAS score assessing the burden of voiding care was almost unchanged in the USAPV group, but it worsened in the CPV group. We previously reported significant improvement in the two SF-36 subscales of emotional and mental health in care workers practicing USAPV. 14 These results consistently indicate that USAPV did not deteriorate the care workers' QOL. Worsening of the care burden VAS score in the CPV group may be related to the lack of improvement in urine loss among elderly individuals despite the additional tasks required to follow the study protocol. The study limitations include the cluster randomized trial design in a single country with a lower statistical power, and the lack of information on time spent on care, number of prompted voidings, nighttime urine loss, long-term outcomes, and cost-effectiveness. Further studies with larger samples in countries/areas with different socio-cultural backgrounds, or focusing on nighttime urine loss, long-term outcomes, and care cost, are warranted to support the results. Additional measures to assess work engagement or burnout of care workers should also be investigated. In conclusion, USAPV care is efficacious and feasible for managing UI in elderly individuals living in nursing homes. USAPV can potentially counteract the increasing demand for managing UI in dependent elderly individuals.
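As a compact illustration of the USAPV decision rule described in the Methods and revisited above, the sketch below encodes the prompting criterion; the variable names and the example volumes are illustrative, not trial data.

```python
def should_prompt(measured_ml: float, mean_voided_ml: float,
                  mean_residual_ml: float, threshold: float = 0.75) -> bool:
    """Return True when the ultrasound-measured bladder volume exceeds 75%
    of the individually optimized capacity, defined in the protocol as the
    sum of the mean voided volume and the mean post-void residual volume."""
    optimal_capacity = mean_voided_ml + mean_residual_ml
    return measured_ml > threshold * optimal_capacity

# Hypothetical resident: mean voided 180 mL, mean residual 40 mL -> capacity 220 mL.
print(should_prompt(170, 180, 40))  # True: 170 mL exceeds 165 mL (75% of 220 mL)
print(should_prompt(120, 180, 40))  # False: wait until the next 2-3 h check
```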
5 | CONCLUSION

Ultrasound-assisted care reduced the daytime urine loss of incontinent nursing home residents to a greater extent than conventional care, without increasing caregivers' care burden. Ultrasound-assisted prompted voiding is efficacious and feasible for managing urinary incontinence in elderly individuals living in nursing homes.

ACKNOWLEDGMENTS

We sincerely thank all the participants and care staff for their co-operation in this study. This study was fully supported financially by grants from the Japanese Society of Geriatric Urology (Tokyo, Japan). The funder had no role in the study design, data collection, data analysis, data interpretation, writing of the report, or the decision to submit the manuscript for publication.

ETHICAL STATEMENT

This randomized open-label clinical trial was approved by the institutional review board of the University of Tokyo (No. 10667) and registered in the UMIN Clinical Trials Registry (ID No. UMIN000017963).
Combination of genetic engineering and random mutagenesis for improving production of raw-starch-degrading enzymes in Penicillium oxalicum

Background: Raw starch-degrading enzyme (RSDE) is applied in the biorefining of starch to produce biofuels efficiently and economically. At present, RSDE is obtained via secretion by filamentous fungi such as Penicillium oxalicum. However, high production cost is a barrier to large-scale industrial application. Genetic engineering is a potentially efficient approach for improving production of RSDE. In this study, we combined genetic engineering and random mutagenesis of P. oxalicum to enhance RSDE production.

Results: A total of 3619 mutated P. oxalicum colonies were isolated after six rounds of ethyl methanesulfonate and Co60-γ-ray mutagenesis with strain A2-13 as the parent strain. Mutant TE4-10 achieved the highest RSDE production of 218.6 ± 3.8 U/mL with raw cassava flour as substrate, a 23.2% increase compared with A2-13. Simultaneous deletion of the transcription repressor gene PoxCxrC and overexpression of the activator gene PoxAmyR in TE4-10 resulted in the engineered strain GXUR001, with an RSDE yield of 252.6 U/mL, an increase of 15.6% relative to TE4-10. Comparative transcriptomics and real-time quantitative reverse transcription PCR revealed that the transcriptional levels of major amylase genes, including the raw starch-degrading glucoamylase gene PoxGA15A, were markedly increased in GXUR001. The hydrolysis efficiency of raw flour from cassava and corn by crude RSDE of GXUR001 reached 93.0% and 100%, respectively, after 120 h and 84 h with a loading of 150 g/L of the corresponding substrate.

Conclusions: Combining genetic engineering and random mutagenesis efficiently enhanced production of RSDE by P. oxalicum. The RSDE-hyperproducing mutant GXUR001 was generated, and its crude RSDE could efficiently degrade raw starch. This strain has great potential for enzyme preparation and further genetic engineering.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12934-022-01997-w.

Background

Conventional biorefining of starch into bioethanol proceeds through gelatinisation, liquefaction, saccharification and fermentation. Of these, gelatinisation and liquefaction require high energy input, accounting for 10-20% of the bioethanol price, which negatively affects the competitiveness of bioethanol against fossil fuels [2], and hence our ability to achieve carbon neutrality. Although a number of amylases have been identified and characterised, only a few RSDEs are known to contain starch-binding domains [5]. In general, RSDEs are mainly biosynthesised by filamentous fungi, such as members of the genera Penicillium and Aspergillus, but yields are quite low [2]. Several approaches, including physical and/or chemical mutagenesis, optimisation of cultivation parameters, and genetic modification, have been employed to improve enzymatic yields [6-8]. However, a single approach often has limited success. Previous work identified a potential RSDG, PoxGA15A, in Penicillium oxalicum strain GXU20 [9], which exhibited broad substrate specificity and high pH stability. Application of PoxGA15A in simultaneous saccharification and fermentation, alongside a commercial α-amylase, led to high fermentation efficiency (> 90%) with raw flour from either corn or cassava as feedstock [10]. Furthermore, engineering of PoxGA15A with a strong promoter and signal peptide, as well as mutagenesis by atmospheric and room temperature plasma (ARTP) and ethyl methanesulfonate (EMS), was used to enhance crude RSDE production by P. oxalicum [7,8].
However, RSDE production has not yet met the requirements of raw starch biorefining. Biosynthesis of amylase is precisely regulated by transcription factors (TFs). Several TFs regulate the expression of amylase genes, including the transcriptional activator AmyR and the repressor CreA in P. oxalicum [4]. Unfortunately, only a few TFs are known to regulate RSDE gene expression. For example, CxrC, NsdD and HmbB negatively regulate expression of the PoxGA15A gene in P. oxalicum [11-13], while POX01907 positively regulates PoxGA15A expression [14]. However, the effects of TF cooperation on RSDE production have not been reported. Here, we employed Co60-γ-ray and EMS mutagenesis combined with TF-based genetic engineering to improve production of RSDE, using P. oxalicum mutant A2-13 as the parent strain; A2-13 is derived from mutant OXPoxGA15A, in which PoxGA15A is overexpressed compared with wild-type strain HP7-1 [7,8]. We then evaluated the digestion efficiency of raw flours from both corn and cassava using crude RSDE from the resulting mutant.

Results and discussion

Combined Co60 and EMS mutagenesis and isolation of RSDE-hyperproducer TE4-10

In previous work, P. oxalicum mutant A2-13, a strain producing high levels of RSDE, was generated from parent strain OXPoxGA15A through multiple rounds of random mutagenesis with ARTP and/or EMS, using a two-layer agar gel diffusion method [7]. In OXPoxGA15A, the RSDG gene PoxGA15A, controlled by the cellulose-inducible promoter P_PoxEgCel5B, was overexpressed in the parental strain ∆PoxKu70 [8]. Here, we employed both random mutagenesis and genetic engineering to improve the yield of RSDE using A2-13 as the parent strain. Random mutagenesis included four rounds of EMS mutagenesis and two rounds of Co60-γ-ray mutagenesis. For EMS mutagenesis, conidia of mutant A2-13 were treated with a final concentration of 1.2% EMS for 10 h, resulting in a 96.05% lethality rate. Co60-γ-ray irradiation mutagenesis was carried out at five doses from 0.6 kGy to 3.0 kGy. At 0.6 kGy, the lethality rate of mutant TE2-23 reached 92.74%; by contrast, the lethality rate of mutant Co1-17 was 87.65% at 1.6 kGy (Additional file 1: Fig. S1). After several rounds of physical-chemical mutagenesis, 3619 colonies were obtained, of which 108 displaying a large ratio between colonies and clear zones were selected for further evaluation. Eventually, we identified three mutants (TE2-23, Co2-12 and TE4-10) producing more RSDE than A2-13. The ratio of the diameter between colonies and clear zones was largest for mutant TE4-10 (Fig. 1a, b). The RSDE yields of mutants TE2-23, Co2-12 and TE4-10 on day 8 of culture in medium containing wheat bran plus Avicel were 193.29 ± 3.80, 205.36 ± 7.61 and 218.55 ± 3.83 U/mL, respectively, using raw cassava flour as substrate, increases of 8.94%, 15.75% and 23.18%, respectively, compared with A2-13 (Fig. 1c). Avicel was added to the medium to induce the promoter P_PoxEgCel5B controlling the engineered RSDG gene PoxGA15A. Furthermore, the stability of mutant TE4-10 was assessed based on RSDE production. The results revealed that production of RSDE exhibited no significant alteration after six rounds of sub-culture (Fig. 1d).

[Figure 1 (caption, excerpt): (c) Raw starch-degrading enzyme (RSDE) production of mutants obtained by random mutagenesis after culture for 8 days on wheat bran plus Avicel. (d) RSDE production of mutant TE4-10 sequentially sub-cultured six times, for 8 days each time, on wheat bran plus Avicel. (e) RSDE production of engineered strains cultured for 8 days on wheat bran plus Avicel; strains A2-13 and TE4-10 served as controls. In panels c and e, capital and small letters indicate p < 0.01 and p < 0.05, respectively; different letters reveal significant differences between mutant and parental strains, evaluated by one-way ANOVA. (f) RSDE production of engineered strain GXUR001 sequentially sub-cultured six times, for 8 days each time, on wheat bran plus Avicel. Results are mean ± standard deviation; all tests were performed in triplicate.]
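The reported percentage gains are mutually consistent, as the short cross-check below shows; note that the A2-13 baseline yield is not stated in this excerpt and is back-calculated from TE4-10's +23.18%, so it is a derived figure rather than a measured one.

```python
# Cross-check of the reported yield gains over A2-13.
# The A2-13 baseline is back-calculated (a derived value, not reported here).
yields = {"TE2-23": 193.29, "Co2-12": 205.36, "TE4-10": 218.55}  # U/mL

a2_13 = 218.55 / 1.2318  # ~177.4 U/mL, inferred from TE4-10's +23.18%
for strain, y in yields.items():
    gain = (y / a2_13 - 1) * 100
    print(f"{strain}: {gain:.2f}% over A2-13")
# Prints ~8.94%, ~15.75% and ~23.18%, matching the reported increases.
```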
Genetic engineering of transcriptional regulators to improve RSDE production

Expression of enzyme genes can be enhanced significantly using strong promoters, and this depends on regulation by TFs. Genetic engineering based on the regulatory networks formed by numerous TFs is an efficient method for enhancing enzyme production. PoxCxrC and PoxAmyR respectively repress and stimulate the transcription of the key amylase genes PoxGA15A and PoxAmy13A [4,13]. Therefore, it is of interest to explore the effects of combining these TFs on RSDE production in P. oxalicum. We engineered P. oxalicum strain TE4-10 to obtain TE4-10ΔcxrC::amyR, renamed GXUR001, in which PoxCxrC was deleted and PoxAmyR was simultaneously overexpressed, and verified the strain by PCR analysis (Additional file 2: Fig. S2). When cultivated in medium containing wheat bran plus Avicel as carbon sources for 8 days, RSDE production of GXUR001 reached 252.58 ± 4.24 U/mL with raw cassava flour as hydrolysis substrate, 42.36% and 15.55% higher than that of mutants A2-13 and TE4-10, respectively (Fig. 1e). Moreover, the genetic stability of GXUR001 was evaluated based on RSDE production. The results revealed that RSDE production of GXUR001 showed no significant alteration after six successive sub-culture steps (Fig. 1f). Compared with the results of previous studies (Table 1), the RSDE production of GXUR001 in shake-flask culture was the highest yet reported with raw cassava flour as hydrolysis substrate. Notably, enzymatic activity specifically depends on the hydrolysis substrate; therefore, RSDE yields reported for other raw starch substrates are not directly comparable.

[Table 1 (excerpt): previously reported RSDE production includes Aspergillus fumigatus on raw corn starch, 25 [17], and Aspergillus sp. MZA-3 on raw cassava starch, 3.3 [18].]

However, unexpectedly, PoxCxrC was found to inhibit RSDE production more weakly in TE4-10 than in ∆PoxKu70. The ∆PoxCxrC mutant exhibited 1.5- to 1.8-fold enhanced production of RSDE relative to ∆PoxKu70 when cultured on soluble corn starch for 2 to 4 days, respectively [13]. Mutant TE4-10 was derived from ∆PoxKu70 through multiple rounds of physical-chemical random mutagenesis and overexpression of the PoxGA15A gene, which might have altered the regulatory network controlling the biosynthesis of amylases [7,8]. Additionally, the distinct effects of PoxCxrC deletion on RSDE production in ∆PoxKu70 and TE4-10 might result from induction by different carbon sources.

Properties of crude RSDE secreted by engineered strain GXUR001

To examine the effects of EMS and Co60-γ mutagenesis and genetic engineering on the features of the crude RSDE, the optimal pH and thermostability were determined. The optimal pH was 4.5 and the optimal temperature was 65 °C (Fig. 2a, b). The pH and thermostability of GXUR001 RSDE were in accordance with those of A2-13, apart from a stronger tolerance of alkaline conditions (Fig. 2c, d).

[Figure 2 (caption): Crude RSDE was prepared from P. oxalicum strains cultivated on wheat bran plus Avicel for 8 days under optimal culture conditions. In panels a and b, the highest RSDE activity of GXUR001 and A2-13 was set as 100%. In panels c and d, the RSDE activity of untreated GXUR001 and A2-13 was set as 100%. Results are mean ± standard deviation; each experiment included three biological replicates.]

Alteration of gene expression in GXUR001

The abundance of mRNAs plays a critical role in boosting cellulase and xylanase content in fungi.
Here, RNA sequencing (RNA-seq) and RT-qPCR were employed to probe alterations of gene expression in the genetically engineered strain GXUR001 cultured on wheat bran plus Avicel, especially in amylase-encoding genes, with mutant TE4-10 serving as a control. After pre-growth in glucose medium for 24 h, mycelia were collected and transferred to induction medium containing wheat bran plus Avicel for 24 h. RNA-seq yielded 22 million clean reads for each sample, each read ~50 bp in length. Over 98% of clean reads could be mapped to the genome of P. oxalicum wild-type strain HP7-1 [19]. [Table 1, in part: Aspergillus fumigatus, raw corn starch, 25 [17]; Aspergillus sp. MZA-3, raw cassava starch, 3.3 [18]] Quality evaluation of the RNA data revealed a very high Pearson correlation coefficient (r > 0.96) among the three biological replicates for each strain (Additional file 3: Fig. S3), indicating that the transcriptome data were credible. With a threshold of p < 0.05, 3292 differentially expressed genes (DEGs) were found in GXUR001 relative to the parental strain TE4-10 (Fig. 3a; Additional file 4: Table S1), of which 1524 were upregulated (0.2 ≤ log2 fold change ≤ 6.8) and 1768 downregulated (−11.9 ≤ log2 fold change ≤ −0.2). Analysis of metabolic pathways using the Kyoto Encyclopedia of Genes and Genomes database showed that DEG-encoded proteins were mainly related to metabolism (Fig. 3b). Moreover, 187 DEGs encoding putative TFs were identified, approximately two-thirds of which were downregulated in GXUR001 relative to TE4-10. As expected, expression of PoxAmyR exhibited a 62.75% increase, whereas PoxCxrC expression was not detected. The transcriptional repressor gene PoxCreA showed 47.6% reduced expression. The C2H2 protein CreA, which mediates carbon catabolite repression, impairs the transcription of amylase genes, either indirectly by repressing the expression of regulatory genes including amyR, or directly by binding the promoters of amylase genes [4]. [Fig. 2 caption, in part: Crude RSDE was prepared from P. oxalicum strains cultivated on wheat bran plus Avicel for 8 days under optimal culture conditions. In panels a and b, the highest RSDE activity of GXUR001 and A2-13 was set as 100%; in panels c and d, the RSDE activity of untreated GXUR001 and A2-13 was set as 100%. Results are mean ± standard deviation; each experiment included three biological replicates.] Surprisingly, some known transcriptional activator genes involved in amylase biosynthesis, including PoxPrtT [21], PoxHmbB [12] and PoxNsdD [11], displayed expression decreased by 22.6-32.2% in GXUR001 (Fig. 3e). Therefore, expression of the major amylase genes in GXUR001 was altered via the coordination of many regulatory genes when cells were cultured on wheat bran plus Avicel. To further confirm the RNA-seq results, the expression levels of four important genes, PoxGA15A, PoxAmy13A, POX_b02418 and PoxAmyR, were examined by RT-qPCR. P. oxalicum strains GXUR001 and TE4-10 were induced on wheat bran plus Avicel for 12-48 h after transfer from glucose, and the resulting mycelia were subjected to RNA extraction. The results revealed that the transcriptional abundances of all tested genes in GXUR001 were enhanced throughout the induction period, by 61.7-2492.0%, compared with those in TE4-10, except PoxAmyR at 12 h, which showed no significant alteration (Fig. 3f). Interestingly, although a remarkable increase in the expression of major amylase genes was achieved, RSDE production by GXUR001 remained unsatisfactory.
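To make the DEG-screening step above concrete, the following minimal Python sketch applies the same thresholds (p < 0.05, with the reported log2 fold-change ranges) to a differential-expression table; the file name and column names are hypothetical placeholders, not part of the study's actual pipeline.

import pandas as pd

# Hypothetical input: one row per gene with RNA-seq differential-expression
# results for GXUR001 vs. TE4-10 (file and column names are illustrative).
deg = pd.read_csv("gxur001_vs_te410_deg.csv")  # columns: gene_id, log2fc, pvalue

# Threshold used in the text: p < 0.05.
sig = deg[deg["pvalue"] < 0.05]

up = sig[sig["log2fc"] >= 0.2]     # upregulated (0.2 <= log2FC <= 6.8 in the text)
down = sig[sig["log2fc"] <= -0.2]  # downregulated (-11.9 <= log2FC <= -0.2)

print(f"DEGs: {len(sig)}, up: {len(up)}, down: {len(down)}")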
Previous studies have revealed that high mRNA levels are a prerequisite for enhanced amounts of secreted proteins, but a strong translation machinery [22] and transport system are also essential. Therefore, in future work, transcription, translation and transport should be investigated simultaneously to improve RSDE production by filamentous fungi. [Fig. 3 caption, in part: Results are mean ± standard deviation; **p < 0.01 and *p < 0.05 indicate significant differences between GXUR001 and TE4-10, analysed by Student's t-test. Each experiment included three biological replicates.] Phenotypic analyses of P. oxalicum mutants Colony phenotypes of the four P. oxalicum strains A2-13, TE4-10, TE4-10ΔcxrC and GXUR001 on solid plates with several carbon sources were comparatively analysed after culturing for 4 days. The results revealed no significant differences in colony diameter among the four strains on any of the tested plates. However, P. oxalicum mutant GXUR001 showed the largest hydrolysis zone on plates with raw cassava flour. Colonies of TE4-10, TE4-10ΔcxrC and GXUR001 were yellow-green on potato dextrose agar (PDA) and on plates containing raw cassava flour, whereas colonies of the parent strain A2-13 were cyan. Compared with A2-13, the colony colour of TE4-10, TE4-10ΔcxrC and GXUR001 was lighter on plates with wheat bran plus Avicel (Fig. 4). Saccharification of raw starch flour by crude RSDE from engineered strain GXUR001 Currently, more than 50% of global bioethanol is made from corn starch as feedstock [1,23], and saccharification of raw starch is the key step during fermentation. Therefore, the saccharification efficiency of crude RSDE produced by GXUR001 was evaluated using raw cassava flour and raw corn flour. In this study, the total starch contents of the raw cassava flour and raw corn flour were 75.6-78.7%. The results showed that the released glucose concentration following hydrolysis of raw cassava flour reached 117.2 g/L, with a starch conversion of 93.04% at 120 h, when hydrolysis was carried out with 150 g/L loading of raw cassava flour at 40 °C and 250 U/g substrate (Fig. 5a, b). By contrast, under the same hydrolysis conditions, the released glucose concentration from raw corn starch reached 126.1 g/L, with a starch conversion of 100% at 84 h at 150 U/g substrate (Fig. 5c, d). The hydrolysis ability of crude RSDE from GXUR001 was comparable to that of the parental strain A2-13 (Fig. 5e-h). Starch hydrolysis requires the appropriate and coordinated action of glucoamylase and α-amylase: α-amylase cleaves internal α-1,4-glycosidic linkages of starch chains, while glucoamylase breaks down α-1,4- and α-1,6-glucosidic bonds at non-reducing ends to produce glucose [3,4]. Raw starch granules are recalcitrant to amylase hydrolysis owing to α-glucan packing and crystal allomorphs, which depend on botanical origin. Interestingly, RSDE can digest raw starch granules to release glucose below the gelatinisation temperature by binding starch granules with a starch-binding domain. In general, cereal starch such as corn starch is susceptible, whereas unprocessed root starch such as cassava starch is resistant [20], and our results are consistent with this pattern. Additionally, the hydrolysis efficiencies of raw starch by crude RSDE from the engineered strain GXUR001 and the parental strain A2-13 showed no significant difference, suggesting similar coordinated action among the different amylases.
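The starch-conversion values quoted above follow from the standard mass balance for starch hydrolysis. A small Python sketch of that calculation is given below; the 0.9 factor (162/180), which converts released glucose back to its anhydroglucose equivalent, is the conventional correction and is an assumption here, as the paper does not state its exact formula.

def starch_conversion(glucose_g_per_l: float, flour_loading_g_per_l: float,
                      starch_fraction: float) -> float:
    """Percent of starch converted to glucose.

    The 0.9 factor (162/180) corrects for the water incorporated into
    glucose during hydrolysis of the anhydroglucose units of starch.
    """
    starch_g_per_l = flour_loading_g_per_l * starch_fraction
    return 100.0 * glucose_g_per_l * 0.9 / starch_g_per_l

# Raw cassava flour: 117.2 g/L glucose from a 150 g/L loading at ~75.6% starch.
print(round(starch_conversion(117.2, 150, 0.756), 1))  # ~93.0, matching the text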
Conclusion This study sequentially employed random mutagenesis and genetic engineering to enhance production of RSDE by P. oxalicum. The resulting strain GXUR001 achieved RSDE production of 252.6 U/mL with raw cassava flour as substrate, an increase of 42.4% relative to the parent strain A2-13. Both random mutagenesis and genetic engineering markedly upregulated the transcription of key amylase genes, including the RSDG gene PoxGA15A. Moreover, crude RSDE from GXUR001 efficiently hydrolysed raw cassava flour and raw corn flour into glucose, with conversion values of 93.0% and 100%, respectively, comparable to those of A2-13. This mutant strain provides a potential source of RSDE for starch biorefining to produce bioethanol. Materials and methods Fungal strains used in this study and culture conditions P. oxalicum strains, including the parent strain A2-13 [7], were grown on solid PDA for 5 days at 28 °C and used for short-term preservation at 4 °C or for propagation. To prepare crude RSDE, minimal modified medium containing the carbon sources wheat bran (2%, w/v) and Avicel (3%, w/v) [7] was used to culture P. oxalicum strains for 8 days at 28 °C. [Figure caption, in part: Results are mean ± standard deviation; each experiment was performed with three biological replicates.] Cultures were centrifuged for 10 min at 16,000 × g and 4 °C, and the supernatant served as crude RSDE. To extract total RNA for RT-qPCR assays, equal numbers of asexual spores from each P. oxalicum strain were inoculated into minimal modified medium containing glucose and cultured for 24 h. Hyphae were then transferred into minimal modified medium containing wheat bran plus Avicel and culture was continued for 4-48 h. Plant materials and their pretreatment Raw starch flours used as hydrolysis substrates were purchased from a local farmers' market in Nanning, China, and processed in accordance with a previous study [7]. Construction and verification of engineered P. oxalicum strains P. oxalicum strains were genetically engineered based on homologous recombination [24] and confirmed by PCR using specific primers (Additional file 5: Table S2). Determination of RSDE activity RSDE activity was measured in accordance with a previously published procedure [8]. One unit (U) of RSDE activity was defined as the amount of enzyme required to release 1 μmol of reducing sugars per minute when hydrolysing raw cassava flour under the specified conditions (pH 4.5 and 65 °C). RNA and DNA extraction Total RNA from P. oxalicum was extracted using a DP419 RNA Extraction Kit (Tiangen Biochemical Technology Co., Ltd., Beijing, China) according to the manufacturer's instructions. DNA was extracted as previously described [25]. RNA sequencing RNA samples from P. oxalicum strains were submitted to Fraser Gene Information Co., Ltd. (Wuhan, China) for sequencing and analysis according to previously described procedures [26]. Three biological replicates were included for each P. oxalicum strain. RT-qPCR RT-qPCR was performed as in a previous study [13], with the actin-encoding gene POX_c04656 serving as the internal reference for calculating relative expression levels of each target gene. Expression levels in engineered P. oxalicum strains were normalised against those in the parent strain. All experiments were repeated three times. Phenotypic survey Spore suspensions of P. oxalicum strains were spread on solid plates supplemented with different carbon sources and cultured for 4 days at 28 °C.
Carbon sources were raw cassava starch, soluble corn starch, wheat bran plus Avicel, PDA and glucose. Colony photographs were taken with an EOS-6D digital camera (Canon, Tokyo, Japan); a Cellsens imaging system (Olympus, Tokyo, Japan) was also employed. All assays were performed in triplicate. Properties of crude RSDE The optimum temperature and pH of crude RSDE, as well as the corresponding stabilities, were determined as previously reported [10]. Hydrolysis of raw starch flours Saccharification of raw cassava flour and raw corn flour was performed as previously described [10]. Statistical analysis Statistical analysis of the data was conducted by one-way analysis of variance using SPSS (IBM, Armonk, NY, USA) and by Student's t-test using Microsoft Excel (Microsoft, Redmond, WA, USA). Data availability Transcriptomic data were deposited in the Sequence Read Archive database under accession number GSE210161.
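As a worked illustration of the statistical analysis described above (one-way ANOVA followed by Student's t-test), the Python sketch below uses scipy; the triplicate values are hypothetical numbers echoing the reported day-8 means, not the study's raw data.

from scipy import stats

# Hypothetical triplicate RSDE activities (U/mL) for three strains;
# the means loosely echo the day-8 values reported in the text.
a2_13  = [177.0, 178.5, 176.8]
te4_10 = [218.6, 222.3, 214.8]
gxur   = [252.6, 256.8, 248.3]

# One-way ANOVA across strains (performed in SPSS in the study).
f, p = stats.f_oneway(a2_13, te4_10, gxur)
print(f"ANOVA: F = {f:.2f}, p = {p:.4g}")

# Pairwise Student's t-test (performed in Excel in the study).
t, p2 = stats.ttest_ind(gxur, te4_10)
print(f"t-test GXUR001 vs TE4-10: t = {t:.2f}, p = {p2:.4g}")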
Towards sentiment aided dialogue policy learning for multi-intent conversations using hierarchical reinforcement learning Purpose Developing a Dialogue/Virtual Agent (VA) that can handle complex tasks (needs) of the user pertaining to multiple intents of a domain is challenging, as it requires the agent to deal with multiple subtasks simultaneously. However, the majority of end-to-end dialogue systems incorporate only user semantics as inputs to the learning process and ignore other useful user behaviour and information. The sentiment of the user at the time of conversation plays an important role in securing maximum user gratification, so incorporating user sentiment during policy learning becomes even more crucial, all the more so when serving composite tasks of the user. Methodology As a first step towards enabling the development of a sentiment-aided VA for multi-intent conversations, this paper proposes a new dataset, SentiVA, collected from open-sourced dialogue datasets and annotated with intent, slot and sentiment labels (the latter considering the entire dialogue history). To integrate these multiple aspects, a Hierarchical Reinforcement Learning (HRL), specifically options-based, VA is proposed to learn strategies for managing multi-intent conversations. Along with task-success-based immediate rewards, sentiment-based immediate rewards are incorporated into the hierarchical value functions to make the VA user-adaptive. Findings Empirically, the paper shows that task-based and sentiment-based immediate rewards are cumulatively required to ensure successful task completion and attain maximum user satisfaction in a multi-intent scenario, rather than either of these rewards alone. Practical implications The eventual evaluators and consumers of dialogue systems are users; thus, ensuring a fulfilling conversational experience with maximum user satisfaction requires the VA to consider user sentiment at every time-step in its decision-making policy. Originality This work is the first attempt at incorporating sentiment-based rewards into the HRL framework. Contextualization Goal-oriented dialogue systems continue to be an area of immense interest for NLP researchers, and AI in particular, where VAs in the form of rational agents have to complete a predefined goal or retrieve information (related to booking of flights, restaurants, etc.) by interacting with users via natural language. Prominent works in the context of Dialogue Management (DM) include those of [1][2][3][4][5][6][7][8][9]. However, such works lack diversity, in that each serves a particular dialogue scenario or single intent of the user. In real-world applications, the user generally wants to accomplish tasks that involve getting several intents/subtasks fulfilled within a single dialogue conversation, with minimal effort and few dialogue turns. Thus, creating VAs to manage the composite goal of the user, pertaining to multi-intent conversations, in a unified manner is the need of the hour. Relevance The eventual evaluators and consumers of such dialogue systems are users, so a fulfilling conversational experience involving maximum user satisfaction requires the VA to consider user sentiment at every time-step in its decision-making policy.
The extra feedback from the user in the form of sentiment will steer the VA to be user-adaptive in learning an efficient dialogue policy [10], as user sentiment is a true reflection of user satisfaction. VAs of this kind have become immensely important at a time when the demand for automated and personalized VAs is at an all-time high. What makes the creation of such VAs even more difficult is the complexity involved, as the VA needs to solve composite queries of the user in a single dialogue conversation while also taking care of the user's sentiment. Research question Reinforcement Learning (RL) [11,12] has been utilized over the years to solve the problem of dialogue management and has proven quite effective at modelling the above task by treating it as an optimization problem. But as the ever-growing needs and complexities of the user are taken into consideration, there arises an imperative need to curate comprehensive dialogue managers capable of handling larger and more intricate dialogue state spaces and of supervising multiple dialogue scenarios with ease, accuracy and precision. These intense challenges make traditional RL models [1,[13][14][15][16] unscalable to such complex conversations. Hierarchical RL (HRL) [17][18][19], by contrast, provides a more principled way of learning dialogue management strategies or policies for complex problems. It mitigates the curse of dimensionality that afflicts the modelling of solutions for such complex tasks by dividing a composite task into a sequence of subtasks. Thus, it needs to be studied how HRL can be employed to provide a learning framework that caters to the requirement of handling various subtasks at the same time, while additionally taking into account other behavioural cues of the user, such as sentiment, to serve the user efficiently. Objective This paper proposes an HRL framework, specifically an options-based VA, to model the task of learning dialogue policies for multi-intent conversations for successful task completion. Along with it, the user's sentiment is incorporated into the hierarchical value functions to attain maximum user satisfaction. A unique representation of a Semi-MDP is proposed, with novel task-based and sentiment-based reward functions to guide the learning process of the VA. To address all these aspects together, a new dataset is introduced, collected from open-source dialogue datasets, containing multi-intent conversations with sentiment pertaining to the restaurant domain. Empirically, it is shown that, apart from user semantics, additional user behavioural information such as sentiment plays an important role in attaining maximum user satisfaction when creating complex VAs of a composite nature. The key contributions of this paper are the following: • Integration of hierarchical value functions with Deep Reinforcement Learning (DRL) for the VA to learn strategies for managing multi-intent conversations. Along with it, the sentiment of the user is incorporated into these hierarchical value functions to make the VA adaptive to the sentiment of the user. • It is shown empirically that task-based and sentiment-based immediate rewards are cumulatively required to ensure successful task completion and attain maximum user satisfaction in a multi-intent scenario, rather than either of these rewards alone.
• The first large-scale dataset, SentiVA, for multi-intent conversations, annotated with corresponding intent, slot and sentiment labels (the latter considering the entire dialogue history) for the restaurant domain, is made available. Structure of the paper Section 2 presents a brief overview of recent work on RL-based dialogue management strategies, followed by the motivation behind solving the current problem. Section 3 describes the process of dataset creation and its details. Section 4 presents the proposed methodology in detail. Section 5 lists the experimental details for the implementation of the proposed methodology. Section 6 presents the experimental results along with a detailed discussion and error analysis. Section 7 presents the conclusion and future work. Background In recent times, two prominent paradigms of research have emerged in goal-oriented dialogue systems. The first category includes sequence-to-sequence based supervised models [20], framed as a Natural Language Generation (NLG) task wherein a user utterance and its context are encoded in order to decode a VA response directly [21]. The data requirement for this category of models is huge, as they directly imitate the knowledge contained within the training data [8]. The second comprises frameworks based on Reinforcement Learning (RL) algorithms, such as Deep Q-Networks (DQN) [22], wherein supervised learning techniques are combined and applied to RL tasks [14]. These approaches require less data than the former because of their ability to simulate dialogue conversations; they explore various facets of the dialogue space efficiently by exploiting its sequential nature. The focus of this paper is on the latter category, developing VAs for what is popularly known as the Dialogue Management (DM) task. The concept of HRL is relatively old, with some works dating back to the early 1970s. [23] proposed an HRL approach based on the options framework to learn policies in different domains. In [24], the authors propose a divide-and-conquer approach for efficient policy learning, where a complex goal-oriented task is broken into simpler subgoals in an unsupervised manner and these subgoals are then used to learn a multi-level policy using HRL. Feudal Reinforcement Learning was combined with DQN in the work of [25] for learning policies in large domains. These works differ significantly in their problem statements, as their focus was to propose DM methodologies for multi-domain conversations with a single subtask/intent per domain, whereas our work focuses on handling multi-intent dialogue conversations pertaining to a single domain. In [10], the authors used only sentiment-based immediate rewards in an end-to-end dialogue system for a single intent. Apart from these, there are other significant works that propose methodologies to learn DM policies for a single intent pertaining to a domain. In [1], the authors developed a simple, open-sourced dialogue system using DRL for the restaurant domain; the system avoids hand-crafted features and learns an action-selection strategy without the use of a Natural Language Understanding (NLU) module, employing the Deep Q-Network (DQN) algorithm for its implementation. One limitation of this work is that even if the VA learns an optimal policy, its usability is restricted by its dependence on the vocabulary: the system falters on out-of-vocabulary words and hence does not scale to complex scenarios.
In [26], the authors proposed a fast DRL approach that uses a network of DQN agents and skips weight updates during the exploitation of actions. In [6], the authors proposed a variant of DQN in which the VA explores via Thompson sampling, drawing Monte Carlo samples from a Bayes-by-Backprop neural network that maintains a probability distribution over the weights of the network. In [7], the authors presented an adversarial advantage actor-critic based model that comprises a discriminator to differentiate actions generated by VAs from actions by experts; the discriminator is also added as another critic into the framework to encourage VAs to explore state-action regions where the agent takes actions identical to those of the experts. In [27], the authors presented Hindsight Experience Replay (HER) based dialogue policy learning from failed conversations. They tuned vanilla experience replay to incorporate two other HER mechanisms, namely trimming-based HER, which trims failed conversations to generate successful ones, and stitching-based HER, which computes the similarity between belief states and stitches segments together to create successful dialogues. They formulated their problem statement for a movie-booking task as a Markov Decision Process (MDP) and demonstrated their approach using DQNs. In [28], the authors developed a DRL approach based on the Dyna-Q framework, introducing a world model to simulate the environment. The VA thus learns both from direct RL using real experiences from the data and from simulated user experience generated by the world model (a multi-task network comprising two classification tasks and one regression task to simulate aspects such as user action and rewards), thereby combating the absence of large conversational datasets for training. They also formulated their problem statement for a movie-booking task as an MDP and demonstrated their approach using DQNs. In [29], the authors extended the Deep Dyna-Q framework [28] to counter the low quality of simulated user experience from the world model. They incorporated a discriminator (inspired by adversarial networks) to differentiate between real user experience and simulated experience; the simulated experiences that the discriminator failed, or found difficult, to distinguish from real ones were then used in the policy-learning phase of the VA. In [30], the authors presented another variant of the Deep Dyna-Q framework [28], called Switch-based Active Deep Dyna-Q, to counter both the low quality of the world model's simulated user experience and the sample inefficiency of the Dyna-Q framework. They incorporated a switcher and an active sampling strategy to determine when to use real or simulated user experience, depending on the phase of dialogue policy training, and to generate simulated user experiences that had not been fully explored by the VA. In [31], the authors presented yet another variant of the Deep Dyna-Q framework [28], called Budget-Conscious Scheduling-based (BCS) Deep Dyna-Q, to best utilize a fixed, small number of human interactions (budget) for learning dialogue policies. They incorporated a BCS module to manage the budget and to select the most effective way of generating real or simulated experiences for learning a dialogue policy within a fixed budget.
In all the works stated above, the focus was on proposing DQN-based variations that require less real user experience for training a VA for the single intent of a movie-booking task. Additionally, in [31], the authors' aim was to demonstrate how, in a fixed-budget setting (limited human experience), a cost-effective dialogue policy can be learnt, as obtaining high-quality dialogue data is a challenging task in itself. Thus, they proposed different ways of tweaking the DQN algorithm to incorporate aspects of their task at hand. The current work, by contrast, focuses on incorporating sentiment, an important user behaviour, into the learning process to handle multi-intent conversations with the help of HRL. Independently, several works in the literature have focused on developing supervised and unsupervised models for understanding sentiment from user utterances [32][33][34]. However, very little work utilizes this additional user-behaviour information in the decision-making process so that the VA is efficient and competent enough to converse and execute its goal appropriately. In [35][36][37], the authors used rule-based reactions to incorporate sentiment into dialogue policies in order to create interpersonal interactions. In [10], the authors used sentiment-based rewards instead of task-success-based rewards in the policy-learning process to establish that sentiment provides better reward assistance for the VA in achieving the user goal. However, their work focused on learning a dialogue policy for only a single intent throughout the conversation. In real-life scenarios such as multi-intent and multi-domain conversations, sentiment-based rewards alone cannot serve the purpose, since apart from keeping user sentiment in mind there are other complexities, such as subtask and multi-task completion. Motivation From the literature, it is evident that several earlier works in the context of dialogue had shortcomings. Applications developed on the traditional RL approach involved a tremendous amount of human labour and interference, from manual hand-crafting of rules to carrying out experiments to train the agent. Performing large-scale experiments to establish the robustness of the learnt strategy was a cumbersome process. State tracking was difficult because the representation of states in the MDP was complex, with many variables of varied ranges used to capture the information at each time-step. Recent research focuses on merging NLU and DM into a single module, eliminating the need for an NLU module and creating a single model in order to avoid the chance of NLU faults. Such models restrict the usability of the trained policy to situations where the dialogue vocabulary matches the training corpus; a change in vocabulary requires a new model to be trained from scratch, which becomes cumbersome for continually evolving, online systems. Recent works employing Deep RL techniques for the problem incorporate the vocabulary of the system into the state representation without an NLU module, so even if the VA learns an optimal policy, its usability is restricted by its dependence on the vocabulary and it is hence not scalable. Other proposed approaches require extensive dialogue data and demand huge computational cost for training such complex networks. Often, the scalability, reusability and reproducibility of these models are not achievable in real-life implementation scenarios.
Apart from user semantics, other useful information, such as sentiment, which depicts an aspect of user behaviour, was never integrated into the learning process to address multi-intent scenarios. Moreover, the majority of these works focus on serving a single intent or task of the user per dialogue conversation, which is undesirable in practical scenarios. Motivated by the inadequacy of existing systems and approaches, this paper presents an approach to serve multiple intents of the user in a single dialogue conversation, without discretizing information across intents, using hierarchical deep reinforcement learning. In addition, sentiment-based rewards are incorporated alongside task-success-based rewards so that the VA understands and mimics human behaviour as closely as possible and provides a gratifying user experience. Dataset To facilitate research in dialogue policy learning assisted by user sentiment and pertaining to multiple intents, this paper introduces a new dataset (SentiVA) consisting of dialogue conversations manually annotated with intent, slot and sentiment labels (the latter considering the entire dialogue history). Data collection For the current work, the dialog bAbI dataset [2] was used to curate conversations for the SentiVA dataset. The dialog bAbI dataset contains conversations for a set of six tasks for testing end-to-end dialog systems in the restaurant domain, each task testing a unique aspect of dialog. For each task, there are 1000 dialogues for training, 1000 for development and 1000 for testing. This dataset was chosen because of its task-oriented nature and because its conversations are primarily based on a slot-filling structure with user satisfaction taken into account. It contains conversations concerning several intents, such as restaurant_info, restaurant_book, restaurant_phone and restaurant_address, involving slots or entities such as <location>, <cuisine>, <no. of people>, <restaurant name> and <price>. Dialogues pertaining to tasks 4 and 5 were utilized to prepare conversations for the current work. To the best of our knowledge, no sizable, open-access dialogue dataset pertaining to multi-intent conversations annotated with corresponding intent, slot and especially sentiment labels existed at the time of writing. Thus, the dialog bAbI dataset was manually modified and annotated with the corresponding intent, slot and sentiment labels, making it suitable for developing a VA capable of learning strategies to converse with the user and accomplish its composite task while taking user sentiment into account, and enabling novel research in the field of sentiment-aided dialogue policy learning. Data annotation Initially, each conversation was modified to incorporate multiple intents, combinations of the intents mentioned above, up to a maximum of three intents per dialogue. Each of these conversations was then annotated with utterance-level intents and word-level slots. Following this, the conversations were also annotated for sentiment in three categories: positive, negative and neutral. For the sentiment annotation, the annotators were presented with the entire dialogue history and were explicitly asked to focus on the user's conduct and nature rather than the VA's. Three annotators, graduates in English linguistics, were assigned the task of annotating the sentiments. An inter-annotator (kappa) score above 0.80 was considered reliable agreement.
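As an illustration of the agreement check described above, the sketch below computes pairwise Cohen's kappa between two annotators with scikit-learn; the labels are hypothetical, and the paper does not state which kappa variant (pairwise Cohen's or multi-rater Fleiss') was used.

from sklearn.metrics import cohen_kappa_score

# Hypothetical sentiment labels from two of the three annotators for the
# same set of utterances (0 = negative, 1 = neutral, 2 = positive).
annotator_a = [1, 1, 0, 2, 1, 1, 0, 1, 2, 1]
annotator_b = [1, 1, 0, 2, 1, 1, 1, 1, 2, 1]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"kappa = {kappa:.2f}")  # agreement above 0.80 was treated as reliable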
SentiVA dataset The SentiVA dataset contains a total of 1286 dialogues, modified for the presence of multiple intents per dialogue and annotated with the corresponding intent, slot and sentiment labels. Tables 1 and 2 show statistics of the annotated dataset. The skewness of the sentiment distribution (seen in Table 2) can be attributed to the nature of the task: in task-oriented scenarios, users are less likely to express negative or positive sentiment except in extraordinary circumstances. Fig 1 shows a sample chat transcript from the annotated dataset with its corresponding intent, slot and sentiment labels. To the best of our knowledge, SentiVA is the first large-scale, open-access dataset of multi-intent conversations annotated with corresponding intent, slot and sentiment labels (the latter considering the entire dialogue history). In [10], the authors annotated conversations considering the dialogue history, but only for conversations pertaining to a single goal or intent. In [38], the authors created a similar dataset with emotion labels for a single intent, but those labels were not annotated considering the conversation context. Qualitative analysis The current work seeks to analyse the effect of sentiment on learning dialogue strategies for the VA, while also pursuing the goal of accomplishing the user's composite task, pertaining to multiple intents of a domain, within a single dialogue conversation. Below, instances from the proposed dataset are analysed to further support this claim, illustrating both sentiment-aided reasoning and multi-intent conversations. • Role of multi-intent conversation: In real-life scenarios, users generally take the assistance of a VA to fulfil complex, composite goals and do not restrict themselves to just one task per conversation. Incorporating such complex scenarios into dialogue conversations is thus the need of the hour, to make the VA more competent and efficient in handling such events. Here, multi-intent means addressing more than one intention of the user across the dialogue. As seen in Fig 1, the VA handles such scenarios by addressing multiple intents, restaurant, phone and address, across the dialogue. • Role of user sentiment: As explained above, incorporating user sentiment into the learning process helps attain maximum user gratification. As seen in Fig 1, the negative sentiment of the user prompted the VA to provide more options for the user's satisfaction; otherwise, its task of successfully filling all relevant slots along with a valid database query would have been considered attained at the first option turn. The user, however, was not satisfied with the provided option, and this is visible only through the sentiment. Other cases where sentiment plays a role include repetition and interruption [10] in every subtask, and the successful completion of each subtask queried by the user so that the goal is achieved as a whole with maximum user satisfaction. Repetitions are primarily of two types: (i) the user asks the VA to reiterate its prior action or utterance; (ii) the VA falls into a loop, asking or picking the same action continuously owing to its failure to understand some entity corresponding to the intent being served. Interruption means the user interrupting the VA while it is processing a subgoal or subtask. Note that subtask and subgoal are used synonymously in this paper.
Materials and methods This paper employs a well-known HRL formalism, options, which belongs to the class of decision problems called semi-MDPs [17]. The options framework fundamentally provides a hierarchical schema for decomposing a composite task into several subtasks at different levels of a hierarchy. We thus integrate hierarchical value functions with DRL so that the VA learns strategies for managing multi-intent conversations in a unified manner. Along with this, the sentiment of the user is incorporated into these hierarchical value functions to ensure higher user satisfaction and make the VA adaptive to the sentiment of the user. Hierarchical DRL agent The agent is a two-level HDRL agent comprising a top-level intent meta-policy, π_{i,d}, and a low-level controller policy, π_{a,i,d}. The intent meta-policy takes as input a state s from the environment and selects a subtask i ∈ I amongst the multiple subtasks identified from the user requirement, where I represents the set of all intents/subtasks of the domain. The controller policy π_{a,i,d} and its state space are shared amongst all the options/intents, thereby satisfying slot constraints amongst overlapping subtasks. It takes as input the state s and outputs a sequence of primitive actions a ∈ A, where A represents the set of all primitive actions of the domain. An internal critic in the VA gives task-based immediate rewards to the top- and low-level policies, respectively, at every time-step for the actions picked at different points in the conversation, ensuring successful task completion. To conceive the HDRL agent, a generic semi-MDP architecture is used, applicable to any domain with n intents and m slots. Incorporating sentiment Works in the literature have primarily focused on improving techniques or dialogue policies so that the VA is diverse enough to handle complex scenarios (such as multi-intent conversations) for task (user goal) fulfilment, e.g., [23,25]. As a result, only task-success-based immediate rewards were incorporated into the training phase of RL-based algorithms for learning policies. In this work, the focus is on integrating sentiment-based immediate rewards (identified from the user utterance) with the immediate rewards from the internal critic to assure higher user satisfaction and a better user experience. To accomplish this, a novel reward function is proposed that fuses in user sentiment so that the VA emulates human behaviour. These sentiment-based rewards are incorporated into the hierarchical value functions to make the VA adaptive to the sentiment of the user. What the VA really needs is to distinguish the negative cases from the neutral and positive cases as the dialogue evolves, in order to avert negative sentiment and end the conversation on a positive note for the user. Therefore, user sentiment scores at every time-step of the conversation are detected on the fly, using a sentiment classifier pre-trained on the dataset discussed above, and are used in the state space and the reward models of the semi-MDP for end-to-end dialogue training. Cases where sentiment actually plays a role include repetition, interruption [10] and user satisfaction in every subtask towards cumulatively completing the entire (multi-intent) task designated by the user. The utility of integrating these notions is demonstrated empirically in later sections.
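To make the two-level control flow concrete, the following minimal Python sketch shows how a meta-policy over options and a shared controller over primitive actions interact; the class, the environment interface and the random stand-in policies are illustrative assumptions, not the authors' implementation.

import random

class HDRLAgent:
    """Minimal two-level options-style agent sketch (not the paper's full model).

    The meta-policy picks an option (intent/subtask); the shared controller
    policy then emits primitive actions until the option terminates.
    """

    def __init__(self, intents, primitive_actions):
        self.intents = intents            # e.g. ["restaurant_info", "restaurant_book", ...]
        self.actions = primitive_actions  # e.g. ["ask(cuisine)", "confirm(location)", ...]

    def meta_policy(self, state):
        # Stand-in for the trained intent meta-policy (a Deep Q-Network
        # in the paper); here a random choice for illustration.
        return random.choice(self.intents)

    def controller_policy(self, state, option):
        # Stand-in for the shared controller policy over primitive actions.
        return random.choice(self.actions)

    def run_option(self, env, state):
        option = self.meta_policy(state)
        done = False
        while not done:
            action = self.controller_policy(state, option)
            state, done = env.step(option, action)  # hypothetical env interface
        return state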
Fig 2 shows the architectural diagram of the proposed hierarchical DRL agent fused with sentiment. State space A universal state space is used for both the intent meta- and controller policies; it is a tuple of n + m + 1 variables, z variables in total. For the intent meta-policy, the n variables are multi-hot encoded values representing the multi-label intents identified by the pre-trained Intent Classification (IC) module for a given user utterance, whereas for the controller policy the n variables are one-hot encoded values representing the current option/intent picked by the intent meta-policy to be served. The m variables store the confidence scores of the different slots, i.e., the probability values output by the pre-trained Slot-Filling (SF) module, representing the module's confidence in predicting the different slot labels. The task of the controller policy is then to pick primitive actions to fill the relevant slots among the m pertaining to the option in control. The z-th variable corresponds to the user's sentiment score (ss), i.e., the probability value (P_s) output by the pre-trained Sentiment Classification (SC) module for a given user utterance. Action space The action space consists of actions for the meta- as well as the controller policies. For the intent meta-policy, n + 1 options are available to serve the intents; the (n + 1)-th option executes the policy of asking the user whether he/she needs any other service from the VA once all previously queried tasks have been completed successfully. For the controller policy, 21 primitive actions are available, categorized into five classes: Ask, Reask/Confirm, Update, Option and Salutation. Reward model The task-based and sentiment-based reward functions for the different hierarchies at different time-steps of the dialogue are as follows (a minimal sketch of this state encoding and sentiment-based reward shaping is given below, after the case study): • Controller policy: The task-based reward TR_c at every time-step of the conversation, TR_c(s, a, i, s′), distinguishes non-terminating actions (nta) from terminating actions (ta) and is defined in terms of the following quantities. ‖S′‖₁ is the sum of the confidence scores of all the state variables in the state vector s′ obtained after taking action a in state s, and ‖S‖₁ is the corresponding sum for the state vector s. The weight w₁ encourages the agent to act so as to increase its confidence in the acquired slots, and w₂ encourages useful communication and discourages unnecessary iterations; here w₁ = the number of unique slots of the domain and w₂ = 1 in our experiments, with the specific values of w₁ and w₂ assigned through empirical analyses using parameter-sensitivity tests. ‖ẼV‖₁ is the sum of the maximum expected confidence scores of the different slots, which equals m for controller policies with m slots (the maximum expected confidence score for each slot being 1). The checking criterion check(s) is as follows: if, in a particular controller state S, the confidence scores of all the individual slots relevant to the option in control are ≥ a threshold (set to 0.7), the checking condition is True; otherwise it is False. • Intent meta-policy: TR_i for the intent meta-policy at every time-step of the conversation, TR(s, i, s′), is defined analogously, where ‖S′_i‖₁ represents the state vector S′ after completing subtask i and ‖S_i‖₁ represents the state vector S when beginning to serve intent i.
• Sentiment-based reward: The sentiment-based reward SR(s, a/i, s′), which accounts for repetition, interruption and user satisfaction, is granted at every time-step of the controller and intent meta-policies based on the identified sentiment tag. The proposed sentiment-based immediate reward banks on the fact that it utilizes user information in the form of sentiment scores, involving no manual labelling of the reward or the reward function once a sentiment classifier/detector is ready. It also requires no prior domain knowledge and can easily be generalized to other domains. Here, the S_z variable corresponds to the last variable of the state space (the sentiment score). Thus, the proposed reward at every time-step of the conversation, at the different hierarchies, combines the task-based and sentiment-based immediate rewards (TR + SR). Case study The working of the end-to-end system shown in Fig 2 is as follows. For example, let the conversation start with the user asking the VA: Which restaurant can I book for two people for today?. This query is passed through the components of the Natural Language Understanding module, comprising Intent Classification (IC), Slot-Filling (SF) and Sentiment Classification (SC), to extract the relevant information and semantics from the user input to be processed by the VA. The IC module (described below) takes this user query as input and returns the corresponding intent of the utterance, here restaurant_info. Similarly, the utterance is processed by the SF module (described below), which extracts relevant and useful information in the form of slots, here no. of people = two and date = today. It is also passed through the SC module to identify the user behaviour in terms of the sentiment associated with the query, here neutral. These extracted pieces of information, i.e., user semantics and behaviour, are then updated in the state space of the VA (described above). Based on the updated state space, the top-level hierarchy of the VA picks the relevant option (not known to the user) so that the identified intent or subtask can be processed by the low-level hierarchy (say option i = 2, serving intent restaurant_info). The low-level hierarchy then picks a primitive action to communicate with the user and serve the option/subtask in control (the option picked by the top-level hierarchy). Say the VA picked the action ask(category) to elicit information from the user. This action is passed through the Natural Language Generation (NLG) module (described below), which converts the VA's action into natural-language text to be presented to the user as the system's (VA's) response, say: Which category of restaurant are you looking for?. This concludes one time-step of the user-VA interaction. The conversation continues similarly until the subtask(s) are completed and the conversation terminates.
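The sketch below, referred to in the reward-model list above, assembles the n + m + 1 state vector and applies a simple confidence-scaled sentiment shaping; the exact SR magnitudes used in the paper are not reproduced here, so the reward values are illustrative assumptions.

import numpy as np

def build_state(intent_flags, slot_confidences, sentiment_score):
    """State tuple of n + m + 1 variables as described above.

    intent_flags: multi-hot (meta-policy) or one-hot (controller) over n intents.
    slot_confidences: m probabilities from the Slot-Filling module.
    sentiment_score: probability from the Sentiment Classification module.
    """
    return np.concatenate([intent_flags, slot_confidences, [sentiment_score]])

def sentiment_reward(sentiment_label, sentiment_score):
    # Illustrative shaping only -- reward positive sentiment, penalise
    # negative, leave neutral unshaped, scaled by classifier confidence.
    sign = {"positive": 1.0, "negative": -1.0, "neutral": 0.0}[sentiment_label]
    return sign * sentiment_score

s = build_state([0, 1, 0, 0], [0.82, 0.35, 0.0, 0.91, 0.0], 0.64)
print(s.shape)  # (10,) for n = 4 intents, m = 5 slots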
Experimentation details The Natural Language Understanding (NLU) module, comprising Intent Classification (IC), Slot-Filling (SF) and Sentiment Classification (SC), was pre-trained on the modified bAbI dataset. We trained separate deep learning models for IC, SF and SC on the developed SentiVA dataset to curate the NLU module for to-and-fro communication between the VA and the user. Training and testing Training RL algorithms requires feedback in the form of consequences from the environment, which in our case is the user. However, interacting with real users for training is highly expensive and sometimes infeasible (for large numbers of training episodes). Therefore, we developed a pseudo-environment, i.e., a user simulator based on a pseudo-random generator that mimics the confidence values and outputs of the SF, SC and IC modules, respectively, for the different intents in control. This is used as input to the state space of the different policies at the different levels of the hierarchy. This environment and training procedure are curated to represent real SF, SC and IC modules as closely as possible while making training faster and robust to the random noise that may exist in an NLU module. This gives the trained DM module the flexibility to be reused and to generalize, since it has not been trained on a particular corpus or conversational dataset for a task, preventing it from learning features and policies specific to a corpus. At the beginning of each episode/dialogue, the simulator is initiated with a goal consisting of multiple intents out of the four intents, each with the pre-defined entities and values mentioned above. The goal remains unchanged until the initiated multi-intents are completed; however, new goals can be added once no subtask remains to be completed by the VA, depending on the user requirement. To incorporate user sentiment in the simulation phase, we maintain a record, for every VA action, of how many times a particular entity has been queried by the VA during the course of a dialogue. This is done to counter repetition from the VA's side, as users exhibit strong sentiment when repeatedly asked about a certain entity. Also, after the relevant slots have been filled by the VA for a particular subtask, we allow a maximum of 3 time-steps for the VA to provide suitable options for the user to be satisfied and exhibit positive sentiment, after which the user sentiment automatically switches to positive so that the dialogue/episode can terminate. Based on these factors, confidence values are generated by the pseudo-environment to emulate user behaviour as input to the state space at the different hierarchies. Later, the learned policy, trained on the pseudo-environment, is tested against real IC, SC and SF modules trained on the dataset discussed above. Thus, real SC, IC and SF modules are integrated with the system, replacing the randomness in the state space of all the policies and thereby incorporating natural language to test the robustness of the learnt policy. The rest of the system functions exactly as during training, enabling slot constraints, user sentiment and optimal completion of the subtasks for a successful dialogue conversation.
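A minimal sketch of the user simulator described above follows, covering the repetition counter and the 3-turn option threshold; the interface and the sentiment-probability ranges are assumptions for illustration, not the authors' exact simulator.

import random

class SimulatedUser:
    """Minimal sketch of the pseudo-environment described above."""

    def __init__(self, goal_intents, max_option_turns=3):
        self.goal = list(goal_intents)   # e.g. ["restaurant_info", "restaurant_phone"]
        self.ask_counts = {}             # per-entity query counter (repetition tracking)
        self.max_option_turns = max_option_turns
        self.option_turns = 0

    def respond(self, va_action, slots_filled):
        # Repeatedly querying the same entity triggers negative sentiment.
        self.ask_counts[va_action] = self.ask_counts.get(va_action, 0) + 1
        if self.ask_counts[va_action] > 1:
            return "negative", random.uniform(0.7, 1.0)
        # Once all slots are filled, the user accepts an option within a
        # bounded number of turns, after which sentiment turns positive.
        if slots_filled:
            self.option_turns += 1
            if self.option_turns >= self.max_option_turns:
                return "positive", random.uniform(0.7, 1.0)
        return "neutral", random.uniform(0.5, 1.0)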
Algorithm 1 shows the procedure used to train the hierarchical dialogue manager with task-success and sentiment-based immediate rewards. Algorithm 1: Proposed hierarchical learning algorithm with task-success and sentiment-based immediate rewards (SR+TR). 1: Initialize: a set of Deep Q-Networks (intent and controller) with replay memories M_(i,ss) and M_(c,i,ss), action-value functions Q_(i,ss) and Q_(c,i,ss) with random weights θ_(i,ss) and θ_(c,i,ss), and target action-value functions Q̂_(i,ss) and Q̂_(c,i,ss) with weights θ̂_(i,ss) = θ_(i,ss), θ̂_(c,i,ss) = θ_(c,i,ss). 2: Initialize: sum-trees with maximal priority for the replay memories. 3: repeat 4: Reset the environment; initialize states S_(i,ss) and S_(c,i,ss), where ss = sentiment score. 5: r_e = 0, r_i^ss = 0. 6: repeat 7: intent_option i = argmax_{i∈I} Q_(i,ss)(S_(i,ss); θ_(i,ss)). 8: repeat 10: a = argmax_a Q_(c,i,ss)(S_(c,i,ss); θ_(c,i,ss)) ⊳ e.g., ε-greedy. 11: Execute action a and observe the task reward r_q^i, the sentiment reward r_c^ss and the next state; the controller network is trained on the squared TD error (r_q^i + r_c^ss + γ Q̂_(c,i,ss)(S′, argmax_{a′} Q_(c,i,ss)(S′, a′; θ_(c,i,ss))) − Q_(c,i,ss)(S, a; θ_(c,i,ss)))². This transition is appended to the replay memory M_(c,i,ss) in step 12. A mini-batch (say of size 32) of such experiences is sampled from the memory based on the maximal priority P_j in step 13. In step 14, the true action-value estimates of these samples are calculated in order to train the Deep Q-networks: if, for a sample, action a is a terminating action, its true action-value estimate becomes y_(c,i,ss) = r_q^i + r_c^ss. In the running example, a = 3 is not the terminating action, so the estimate is obtained from the target action-value function, say Q̂_(c,i,ss) = 2.66; this is scaled by the discount factor γ and added to the TR and SR, and the true estimate of the sample becomes y_(c,i,ss) = 3.15 + 0.5 + 0.7 × 2.66 = 5.51. The error of each sample in the mini-batch is the difference between the true estimate and the current estimate from Q_(c,i,ss); if the current estimate for this sample is 3.02, the error is 2.49. The cumulative squared error over the mini-batch is back-propagated through the Deep Q-Network Q_(c,i,ss) using gradient descent, updating the weights of all parameters to learn the desired behaviour, in step 16. After every C steps (say 100), the weights of Q̂_(c,i,ss) are equalized with the current estimate Q_(c,i,ss) in step 17. In step 18, r_e is updated as r_e = r_e + r_q^i + r_c^ss. The corresponding transition is appended to the replay memory M_(i,ss) in step 20, and a mini-batch (say of size 32) of such experiences is sampled based on the maximal priority P_j in step 21 for the top-level policy. In step 22, the true action-value estimates of these samples are calculated in order to train the Deep Q-networks: if, for a sample, option i is a terminating option, its true action-value estimate becomes y_(i,ss) = r_e + r_i^ss. In the running example, i = 2 is not the terminating option, so the estimate is obtained from the target action-value function, say Q̂_(i,ss) = 10.19, scaled by the discount factor γ and added to the TR and SR; the true estimate of the sample becomes y_(i,ss) = 15.63 + 0.8 + 0.7 × 10.19 = 23.56. The error of each sample in the mini-batch is again the difference between the true estimate and the current estimate from Q_(i,ss); if the current estimate for this sample is 18.52, the error is 5.04. The cumulative squared error over the mini-batch is back-propagated through the Deep Q-Network Q_(i,ss) using gradient descent, updating the weights of all parameters, in step 24. After every C steps (say 100), the weights of Q̂_(i,ss) are equalized with the current estimate Q_(i,ss) in step 25. In step 26, the next state S′_(i,ss) becomes the current state S_(i,ss). This process continues until no query or subtask is left for the VA to process and no new query comes in. Finally, the outer loop terminates in step 28 after the given number of episodes has been executed.
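The TD-target computation walked through above can be condensed as follows; γ = 0.7 is inferred from the worked numbers, and the function signature is illustrative.

def ddqn_target(r_task, r_sent, q_target_next, terminal, gamma=0.7):
    """TD target used to train the controller/meta Q-networks (sketch).

    For a terminating action/option the target is just the summed immediate
    rewards; otherwise the discounted bootstrap estimate from the target
    network is added (gamma = 0.7 is inferred from the worked example above).
    """
    if terminal:
        return r_task + r_sent
    return r_task + r_sent + gamma * q_target_next

# Reproduces the controller-level example above: 3.15 + 0.5 + 0.7 * 2.66
print(round(ddqn_target(3.15, 0.5, 2.66, terminal=False), 2))  # 5.51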
Intent Classification (IC) module The task of this module is to identify or predict one or more intents from the user's utterance. Its objective is thus to maximize the conditional probability P(i|x) of intent(s) i given an utterance x, where n represents the number of intents in a domain. For this, a two-layer Convolutional Neural Network (CNN) based deep learning model was trained. The input to the network is the word embeddings of the corresponding words in the utterance; GloVe word embeddings [39] of dimension 300 are used to represent words (for the SF and SC modules as well). CNNs of kernel sizes 4 and 5 with 64 feature maps are used, with softmax activation at the final layer for classification. This module thus identifies one or more intents at a time, which form the input to the state space of the intent meta-policy. Fig 3 shows the architectural diagram of the IC module. Case study The IC module takes as input the user utterance at every time-step and outputs one of the intents from the set of intent labels present in the dataset corresponding to the utterance. CNNs are a popular choice for classification tasks, and IC is treated here as a classification task. A CNN layer learns abstract representations of phrases, reflecting their semantic meaning, which finally span the entire sentence; it essentially captures abstract n-gram features. By using two convolution layers of filter sizes 4 and 5, the model identifies abstract 4-gram and 5-gram features spanning the sentence to capture context across longer sentences. The features from the two convolution layers of varying filter sizes capture different kinds of semantic features, which are concatenated and passed through a fully-connected layer to learn a sentence representation. This representation is then passed through a softmax layer to obtain the classified output, i.e., the intent. The motivation is that with a single-layer CNN we might miss semantic information ranging across longer sentences, while with a model more complex than a two-layer CNN the complexity increases without any significant increase in the accuracy or precision of the classified output. This is also evident from the empirical results shown in Table 3: with a single-layer CNN the model attained an accuracy of 83.64%, whereas a two-layer CNN attained 85.62%, an increase of about 2%; a three-layer CNN attained 85.86%, an increase of less than 0.5% over its two-layer counterpart. Additionally, we also provide results for other models such as Bi-LSTM, GRU, etc.
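A minimal Keras sketch of the described two-branch CNN intent classifier is given below; the kernel sizes (4, 5), 64 feature maps, GloVe-300 inputs and final softmax follow the text, while the sequence length, pooling, dense size, optimizer and number of intents are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, EMB_DIM, N_INTENTS = 30, 300, 4  # GloVe-300 inputs; 4 intents assumed

inp = layers.Input(shape=(MAX_LEN, EMB_DIM))           # pre-embedded utterance
conv4 = layers.Conv1D(64, 4, activation="relu")(inp)   # abstract 4-gram features
conv5 = layers.Conv1D(64, 5, activation="relu")(inp)   # abstract 5-gram features
pool4 = layers.GlobalMaxPooling1D()(conv4)
pool5 = layers.GlobalMaxPooling1D()(conv5)
merged = layers.Concatenate()([pool4, pool5])          # fused sentence features
dense = layers.Dense(64, activation="relu")(merged)    # sentence representation
out = layers.Dense(N_INTENTS, activation="softmax")(dense)

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])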
Slot-Filling (SF) module To extract relevant information from the user's utterance in the form of slots, an SF module was trained. It is a deep learning model with a single Bi-directional Long Short-Term Memory (Bi-LSTM) network [40] at its core; its objective is to maximize P(ỹ|x̃), where x̃ is the input word sequence and ỹ contains the corresponding slot labels. The number of hidden units used for the Bi-LSTM is 90, with softmax activation at the final layer. The slots identified, along with the probability scores of the predicted labels, are used by the state space of both the intent meta- and controller policies for further processing. Case study The SF module takes as input the entire word sequence in the form of word embeddings and outputs a slot label for each word in the sequence from the set of slot labels. Here, "O" refers to null, i.e., no relevant information is present in the respective word, whereas labels such as price, location and cuisine provide useful information to the VA about the user's preferences. As seen in Fig 4, x₁, x₂, . . ., xₙ refer to the words in the user utterance and y₁, y₂, . . ., yₙ to the corresponding slot labels. Sentiment Classification (SC) module To identify the implicit sentiment of a user utterance, an SC module was trained, again using a single Bi-directional Long Short-Term Memory (Bi-LSTM) network; its objective is to maximize P(ỹ|x̃), where x̃ is the input sentence representation and ỹ its sentiment label. The number of hidden units used for the Bi-LSTM is 90, with softmax activation at the final layer. The sentiment identified, along with the probability scores of the predicted sentiment labels, is used by the state space of both the intent meta- and controller policies for further processing. Table 4 shows a quantitative analysis of the SC module with varying architectures. Bi-LSTMs were used for the SC and SF modules because they are a popular choice for processing sequential information: they capture long-term dependency features across a sequence in both directions, one pass accessing past information in the forward direction while the other accesses future information in the reverse direction. For the two tasks at hand, long-term context throughout the sequence is of utmost importance, whereas vanilla RNNs suffer from vanishing gradients and cannot capture long-term context in practice. Bi-LSTMs also have the advantage of learning how and when to forget unnecessary information through the gates in their architecture, whereas GRUs use a simpler gating scheme without a separate memory cell and thus filter information across a sequence less selectively. Natural language generation A retrieval-based NLG framework is used that maps the action picked by the VA to its corresponding natural language for presentation to the user. Similarly, predefined sentence templates with slot placeholders, replaced according to the user goal for a dialogue, are defined for the user responses presented to the VA [1].
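A corresponding Keras sketch of the Bi-LSTM slot tagger described above is shown below, with 90 hidden units and a per-token softmax as in the text; the sequence length, number of slot labels and training configuration are illustrative assumptions, and the SC variant is noted in a comment.

import tensorflow as tf
from tensorflow.keras import layers

MAX_LEN, EMB_DIM, N_SLOT_LABELS = 30, 300, 6  # label count is illustrative

inp = layers.Input(shape=(MAX_LEN, EMB_DIM))  # GloVe-300 word vectors
bilstm = layers.Bidirectional(layers.LSTM(90, return_sequences=True))(inp)
out = layers.TimeDistributed(
    layers.Dense(N_SLOT_LABELS, activation="softmax"))(bilstm)  # label per token

sf_model = tf.keras.Model(inp, out)
sf_model.compile(optimizer="adam", loss="categorical_crossentropy")
# The SC model follows the same pattern but with return_sequences=False and
# a single softmax over the three sentiment classes.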
Natural language generation

A retrieval-based NLG framework has been used that maps the action picked up by the VA to its corresponding natural language to present to the user. Similarly, predefined sentence templates with slot placeholders, which are replaced by the user goal for a dialogue, have been defined for the user responses to present to the VA [1].

Model architecture

Results and discussion

The following metrics were used to analyze the performance of the various baselines and the proposed framework: 1. Learning curve during training: this gives a visual representation of the learning pattern and growth of the VA during training. 2. Average dialogue length/turn: this is basically the average number of system actions per dialogue; the VA should be able to complete its task in a small number of time-steps. 3. User satisfaction: this gives an estimate of the qualitative analysis of the conversations and determines whether the actions picked up by the VA helped the user attain maximum satisfaction and a good experience. It is computed by analyzing for how many dialogues the conversation ended on a positive note for the user, by monitoring the user sentiment score at the end of the dialogue.

The second and third metrics are computed by taking the average of 100 such executions of the policy during testing, with the intents picked at random. The values reported for the baselines and the state-of-the-art model are obtained by taking the mean of the values obtained by executing different intents sequentially (in the same order as the proposed system). To evaluate the performance of the proposed framework, we compare our model with the following baselines:

• Flat DRL: trained with a single state space encompassing all the intents and slots of a domain collectively, without any abstraction or hierarchies;

• HDRL(TR): contains only task-based rewards in the hierarchical framework, with different algorithms;

• HDRL(SR_partial): encompasses only sentiment-based rewards in the hierarchical framework, to ensure user satisfaction at the end of the conversation, without incorporating repetitions and interruptions;

• HDRL(SR): encompasses only sentiment-based rewards in the hierarchical framework, to ensure user satisfaction while incorporating repetitions and interruptions.

Fig 5 shows the learning curves of the VA trained with different learning algorithms, including a Random Agent, DQN variants, DDQN [43] and DDQN-PER [41]. As seen from the figure, the Random Agent performs the worst compared to all other training algorithms; this is because the random agent takes a random action at every time-step, with no learning algorithm guiding the VA. The DQN-based variations of the algorithms, in turn, do not converge at all: their policies do not improve over time. This is in line with [43], where the authors demonstrated that DQN has a problem of overestimating the Q-values because of the max operation. DDQN addresses this problem by using the Q-network (the one that is updated) to select the action a for which the target network computes the estimated reward. As seen in Fig 5, DDQN performs comparatively better than all its DQN counterparts. However, it is observed that DDQN-PER performs the best amongst all the learning algorithms, as the concept of PER stresses those samples whose error is large compared to other experiences. Thus, DDQN-PER is used as the learning algorithm for all the remaining experiments.
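To make the DDQN mechanism concrete, here is a hedged sketch of the target computation (function names and shapes are illustrative, not the paper's code): the online network picks the greedy action, and the target network evaluates it.

import torch

def ddqn_target(q_net, target_net, reward, next_state, gamma=0.99, done=False):
    # Double DQN: action selection by the online network, evaluation by the
    # target network, which curbs the overestimation caused by the max operator.
    with torch.no_grad():
        best_action = q_net(next_state).argmax(dim=1, keepdim=True)
        next_q = target_net(next_state).gather(1, best_action).squeeze(1)
        return reward + gamma * (1.0 - float(done)) * next_q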
Fig 6 shows the learning curves of the different models during training. It is seen that the flat DRL policy does not improve or learn over time, due to the increased complexity of the flat state space encompassing all the intents and slots together without any abstraction or hierarchies. Fig 7a and 7b show the performance of all these policies during testing (with 100 dialogues) in terms of user satisfaction and average turn. Here, user satisfaction includes successful task completion along with positive gratification from the user. All the reported results are statistically significant [44] at the 5% significance level. Out of all the policies, HDRL(SR+TR), i.e., the combination of both rewards, yielded the best results and the most efficient convergence of the policy, as is visible. This is due to the fact that, by taking into account the user sentiment, the VA is able to avoid unnecessary actions and make the conversation more effective. The importance of including repetitions and interruptions along with user satisfaction can be realised by viewing the difference between SR_partial and SR: by incorporating repetition, the VA encompassed and learned more data points, leading to the VA taking fewer time-steps to complete the conversation with higher user satisfaction. Detailed analysis of the policies revealed that with TR alone, the VA was not able to consider the sentiment of the user, thereby taking a larger number of dialogue turns to complete a given sub-task, whereas with SR alone, the user was not necessarily giving a negative sentiment to an irrelevant slot query in the multi-intent scenario, leading to unnecessary VA actions.

Statistical significance test

For the statistical significance test, we have performed Welch's t-test [44]. The test is performed at the 5% significance level. Welch's t-test is conducted between SR+TR and the remaining models, and the results are reported in Table 5. All the p-values reported in Table 5 are less than 0.05. These values establish that the improvements obtained by the SR+TR model over the other baselines are statistically significant.
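As a sketch only, such a comparison can be run with scipy; the score lists below are placeholder values, not the paper's data.

from scipy import stats

sr_tr = [0.82, 0.91, 0.78, 0.88, 0.85]      # per-dialogue scores, SR+TR (placeholder)
baseline = [0.71, 0.66, 0.74, 0.69, 0.72]   # per-dialogue scores, a baseline (placeholder)
# equal_var=False selects Welch's t-test (unequal variances)
t_stat, p_value = stats.ttest_ind(sr_tr, baseline, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")  # significant at the 5% level if p < 0.05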
Human evaluation

Three human users from the authors' affiliation were asked to rate the quality of the dialogues generated by the SR+TR VA. The users were presented with 100 simulated dialogues during testing. For each of the dialogues, users were then asked to rate the general quality of the conversation and the VA on two marking schema: (i) rating the dialogue on a scale of 1 (worst) to 5 (best) to get a detailed marking score based on coherence, sentiment awareness and naturalness. By coherence, we mean that the VA should ask questions or provide information based on the query of the user; the user's needs and the VA's actions should be coherent. Sentiment awareness refers to whether the VA takes into consideration the sentiment of the user during the dialogue and whether the conversation ends on a positive note for the user. Naturalness refers to the users' view on how suitable or successful the VA can be in its endeavor.

Error analysis

Some instances from the chat transcripts that portray the differences amongst the baseline and proposed policies during testing are shown in Fig 9. As is evident, when the SR+TR VA detected a negative sentiment from the user, irrespective of it having successfully filled all the irrelevant slots, it had the capability to recover from such a scenario by executing a more efficient strategy. There was no rule-based strategy to force the model to pick up actions after encountering such a situation; the model learned these fine differences by itself with the help of robust reward functions. The TR VA, in contrast, falls into a loop, unable to revive itself from such a scenario, thus stressing the role of incorporating sentiment for every sub-task in the hierarchical value functions. Also, detailed observation and analysis of the proposed HDRL (SR+TR) system revealed various scenarios where the system falters, which are discussed as follows:

• Sentiment identification error: sentiment score inputs to the state space of the intent meta and controller policies are managed by the SC module in order to achieve maximum user satisfaction. A mis-classification of the intended implicit sentiment, due to the limitation of the SC module, leads to the Dialogue Manager ignoring or misjudging the user's sentiment towards the attainment of user gratification. For example, for the user utterance do you have something else, the sentiment was incorrectly identified as neutral instead of negative, which led the Dialogue Manager to ignore the sentiment of the user, making the user dissatisfied by the actions subsequently picked up by the VA.

• Intent identification error: inputs to the state space of the intent meta-policy are managed by the IC module in terms of multiple subtasks to be completed in order to achieve the user goal. A mis-classification of the intended intent, due to an ambiguous user utterance or the limitation of the IC module, leads to the Dialogue Manager serving a wrong intent. For example, for the user utterance I need a table for two, the intent was incorrectly identified as restaurant_info instead of restaurant_book, which led the Dialogue Manager to execute a wrong controller policy based on the option picked up by the intent meta-policy, making the user dissatisfied by the information provided by the VA.

• Slot-filling error: similarly, a mis-identification of the relevant user information in the form of slots leads to the VA taking extra turns to retrieve the correct information, thereby increasing the dialogue length. For example, for the user utterance I want to eat sea food, the cuisine slot was wrongly identified as sea with a very low confidence. This prompted the VA to confirm the acquired slot with the user as per its controller policy, which the user denied, thereby taking extra turns to elicit the correct information from the user.

Quantitative analysis of all the above modules with respect to varying architectures is shown in Tables 3, 6 and 4 in terms of accuracy and F1-score. As seen from Table 4, the error rate of SC is about 2%, i.e., correct classification of sentiment indeed helps the VA in serving the user with a smaller number of dialogue turns. The error rate of IC, on the other hand, is about 15% (refer to Table 3), i.e., using the utterances from the dataset, intents were wrongly classified a significant number of times, leading to unsuccessful dialogue conversations and reduced user satisfaction. The SF module has an error rate of almost 19% (refer to Table 6), which is significantly larger. But the VA still has the capability to recover from the errors of the SF module by re-asking about a particular entity based on its confidence score. Though this increases the number of dialogue turns, it ensures user satisfaction at the end of the conversation.
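A hedged sketch of this confidence-based recovery strategy follows; the threshold value and data structures are assumptions for illustration only, not the paper's implementation.

CONFIRM_THRESHOLD = 0.6  # assumed cut-off below which a slot is re-confirmed

def next_action(predicted_slots):
    # predicted_slots: {slot_name: (value, confidence)} as output by the SF module
    for slot, (value, conf) in predicted_slots.items():
        if conf < CONFIRM_THRESHOLD:
            return f"confirm({slot}={value})"   # re-ask before committing the slot
    return "proceed"

print(next_action({"cuisine": ("sea", 0.31), "location": ("rome", 0.93)}))
# -> confirm(cuisine=sea): one extra turn, but the wrong value is caught early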
Discussion, implication and conclusion

This paper presents an HRL-based DM using the Options framework for managing multi-intent conversations. Sentiment-based immediate rewards are incorporated at every time-step of the hierarchical value functions to induce user-adaptive behavior in the VA. To enable research into these aspects, a novel dataset, SentiVA, was created that contains multi-intent task-oriented dialogue conversations in the restaurant domain annotated with intent, slot and sentiment labels. A unique representation of the Semi-MDP is presented, along with novel task-based and sentiment-based reward models. These rewards are induced in the hierarchical value functions (options here). This paper shows experimentally that sentiment-based rewards need to be incorporated along with task-based rewards to ensure successful task completion and acquire maximum user contentment, by taking into account several notions of sentiment from the user perspective, such as successful subtask completion, repetition and interruption.

Discussion on the compliance with the literature review

There exists a variety of works in the literature that make use of different HRL techniques to develop VAs [23,25]. However, all these works only incorporate user queries belonging to just one intent/subtask per domain. Also, works such as [1,3] utilize separate or individual DRL models for each subtask/intent of a domain, thus creating networks of DRL models for multi-domain conversations. In practice, however, these assumptions and techniques limit the usage of such heavy-weight models. It is to be noted that all these end-to-end frameworks also do not incorporate user sentiment as a guiding factor for the VA. In [10], the authors used only sentiment-based immediate rewards in an end-to-end dialogue system for a single intent; in multi-intent conversations, however, sentiment alone is not sufficient to learn the desired behavior. Thus, the current study shows how HRL can be employed to provide a learning framework that caters to the requirement of handling various subtasks at the same time while also taking into account other behavioral cues of the user, such as sentiment, to serve the user in an efficient manner.

Conclusion

As discussed above, the paper shows that it is crucial to include other behavioral cues of the user, such as sentiment, to ensure higher user satisfaction and the success of such composite task-oriented VAs. The paper demonstrates a methodology to induce sentiment and make VAs user-adaptive in the dialogue learning policy through the introduction of novel state space and reward models.

Academic implication

Several recent works have focused on developing task-oriented VAs grounded in various aspects such as sentiment, emotion and empathy in several modules of the dialogue system, in order to serve the user efficiently. The proposed approach benefits from the fact that it can be easily adapted to any other domain because of its task-independent methodology and training procedure, thus stressing the importance of such light-weight models for the complex yet one of the most important modules of the dialogue system, i.e., the DM.

Limitation

However, because of limited training data for multiple intents, the HDRL agent has been trained using a simulator (pseudo-environment). Training the VA with real-time data would surely make it much more diverse and relevant. Also, in its current form, the VA is unable to process an unknown slot or dynamic slot value given by the user. For example, if the user communicates a preference over, say, parking, i.e., a slot rarely found and not known to the VA, it deals with such a situation in a very minimalistic way (say, with reduced user satisfaction), as the VA is not equipped with a robust error-handling strategy in that context.

Future studies and recommendations

In future, we would like to extend this idea to managing conversations pertaining to multiple intents belonging to multiple domains with an increased level of hierarchy. Also, many chat bots have been deployed over the years but cannot be used across the globe because of language constraints, and the range of these facilities thus remains limited. Deploying the proposed framework to curate VAs in low-resource languages will also be addressed in future work, since this will increase its diversity and make it available to many more people. Also, we will focus on incorporating other channels of identifying sentiment in task-based scenarios, thus stressing the role of multi-modality.
Users relay their queries not only through text but also through other communication forms, such as images. Integrating these multi-modal dimensions of knowledge elicitation is becoming crucial with time and will be addressed in future work.
Protocol to evaluate a (magneto)caloric device with static thermal switches using a 1D numerical model

Summary

This protocol describes the use of a simple 1D numerical model to evaluate a single-stage (magneto)caloric refrigerating device with static thermal switches. The model can be used to find appropriate values of parameters that lead to a significant refrigerating effect and COP of the device. The modeled device can comprise any type of static thermal switch in combination with any kind of magnetocaloric or electrocaloric material. Simulation parameters need to be set with care for acceptable computational time. For complete details on the use and execution of this protocol, please refer to Klinar et al. (2022).

1. Make sure you have Python 3.0 or higher installed.

2. If needed, also install the following modules: math, numpy, os, time, multiprocessing.

3. Download the mc_switch.zip from GitHub (mc_switch) or Zenodo (10.5281/zenodo.6628383) and extract it to the mc_switch folder. It contains the code and datasets used as examples in the paper (Klinar et al. (2022)). Check that the folder contains all the modules listed in the Code preparation section, the data folder and the results folder.

Data collection

Timing: 10 min (assuming data from the accompanying files are used)

This section contains instructions on what data are needed for the protocol. Note: If the required data of some other (magneto)caloric material in the correct format are at your disposal, it takes 10 min to change the default files.

4. There is a text file in the directory mc_switch/data named s_total_Gd.txt that contains the data for the (magneto)caloric material (mcm) gadolinium. If you would like to use a different material, it might take some time to obtain the corresponding data.

Note: For magnetocaloric or electrocaloric materials, the data must contain a table of the total specific entropy values of the material with respect to the magnetic field density and temperature. The first row of the table includes the temperatures in K (kelvin) and the first column includes the magnetic fields in T (tesla) at which the total specific entropy values in J/kg (joules per kilogram) are given.

Note: In our example, we provided the total specific entropy of gadolinium for magnetic field densities between 0 and 2 T (with a step of 0.01 T) and temperatures between 265 and 326 K (with a step of 0.02 K). The specific-entropy data of gadolinium in mc_switch were calculated using mean-field theory (Kitanovski et al. (2015)). The specific heat capacity of the (m)cm is calculated from the same data and depends on the temperature and the magnetic field density; it is calculated at each node as the product of the node temperature and the derivative of the node entropy with respect to the temperature. The calculation is given in the module makematrix.py. Linear interpolation between the given discrete values is used.

5. The mc_switch modules contain exemplary material properties and parameters. The device consists of the following components (Figure 1A): two static thermal switches, a (magneto)caloric material, a heat source, and a heat sink. Note that you need to define the components and provide your own data for each of them (thickness, density, specific heat capacity, thermal conductivity, as well as total entropy for the (m)cm), so make sure you have all the data prepared.
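To illustrate the file format just described, the following numpy sketch (not part of mc_switch; the names, the assumption of a placeholder value in the top-left corner cell, and the constant-entropy inverse lookup are ours) loads such a table and estimates the adiabatic temperature change of the (m)cm:

import numpy as np

raw = np.loadtxt("data/s_total_Gd.txt")   # rectangular table; corner cell assumed a placeholder
temps = raw[0, 1:]                        # first row: temperatures in K
fields = raw[1:, 0]                       # first column: magnetic fields in T
s_tot = raw[1:, 1:]                       # body: total specific entropy in J/kg

def entropy(T, B):
    # Linear interpolation in T along the two bracketing field rows, then in B.
    i = int(np.clip(np.searchsorted(fields, B, side="right"), 1, len(fields) - 1))
    s_lo = np.interp(T, temps, s_tot[i - 1])
    s_hi = np.interp(T, temps, s_tot[i])
    w = (B - fields[i - 1]) / (fields[i] - fields[i - 1])
    return (1 - w) * s_lo + w * s_hi

def t_after_adiabatic_field_change(T0, B0, B1):
    # Adiabatic (de)magnetization conserves entropy: find T1 with s(T1, B1) = s(T0, B0)
    # by inverse interpolation along the isofield curve (s increases with T).
    s_curve = np.array([entropy(t, B1) for t in temps])
    return float(np.interp(entropy(T0, B0), s_curve, temps))

print(t_after_adiabatic_field_change(293.0, 0.0, 1.0))  # a rise of a few K for Gd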
Model description

Timing: 10 min

This section describes the model used in the protocol in detail.

6. The mc_switch simulates the operation of a (magneto)caloric refrigerating device or a heat pump, shown in Figure 1A. This section describes the preparation of the numerical model for the simulations: provide the properties of the device's components and the operating conditions of the (magneto)caloric refrigerating device or the heat pump (magnetic field values, magnetization time, frequency of switching).

a. The main part of the mc_switch directory consists of 11 Python files, of which two are the main scripts; one of them should be run to start the simulation, depending on whether the code is running on a PC (runcycles.py) or an HPC (runcycles_HPC.py), while the other nine are imported as modules into these two scripts. There are also two folders, one of which contains the (m)cm data while the other is for storing the simulation results. Additional files are required for running the simulation on an HPC (more information about these files is given in the section Preparation of code and files for simulations). The mc_switch makes it possible to run several simulations at the same time (depending on the available CPU cores on the computer) for different thicknesses of the components ((m)cm, thermal switches, heat sink, and heat source) under the same operating conditions and with the same properties.

b. The numerical model is based on the implicitly discretized, transient heat-conduction equation (Fourier's law), coupled with the (magneto)caloric effect. For the mc_switch, a single-stage, non-regenerative Brayton thermodynamic cycle is used, as shown in Figure 1B. It consists of four processes: 1) adiabatic temperature change inside the (m)cm due to the (magneto)caloric effect when the external (magnetic) field is turned on, 2) heat transfer during the high isofield while the external (magnetic) field is still on, 3) adiabatic temperature change inside the (m)cm due to the (magneto)caloric effect when the external (magnetic) field is turned off, and 4) heat transfer during the low isofield while the external (magnetic) field is still off.

c. Two thermal switches are used to control the heat flux between the heat source and the (m)cm (thermal switch 1) or the heat sink and the (m)cm (thermal switch 2). Heat flows through a switch during its "on" state, while no or very little heat flows through it during its "off" state. The operating frequency is calculated as the inverse of the time needed for one complete cycle (all four processes). Repeating the cycles should result in a temperature difference between the heat source (cold part) and the heat sink (hot part) such that heat is transported from the cold to the hot side of the device. The flowchart of the mc_switch code is presented in Figure 2.

d. The governing equation for the heat-transfer processes is thus:

ρ · c_p(T, F) · ∂T/∂t = ∂/∂x ( λ · ∂T/∂x )

where T stands for temperature, t for time, c_p for specific heat capacity, F for trigger type (magnetic field, electric field, force or pressure), ρ for density, λ for thermal conductivity and x for the spatial coordinate. The caloric effect is implemented by a temperature change for each node of the caloric material:

T_fi = T_in + ΔT_ad,app (field application)
T_fi = T_in − ΔT_ad,rem (field removal)

where "fi" stands for the final value, "in" for the initial value, "ad, app" for the adiabatic external field application, and "ad, rem" for the adiabatic external field removal. For the discretization, the finite-difference method is used with a finite number of control nodes for each device component, i.e.:

ρ · c_p · (T_k^(i+1) − T_k^i) / Δt = λ · (T_(k+1)^(i+1) − 2·T_k^(i+1) + T_(k−1)^(i+1)) / Δx²

where index i stands for the time discretization and index k for the spatial discretization.
On the boundaries, the chosen boundary conditions are used, for example, a constant incoming heat flux on the left and convection on the right. In each step, the equations are rewritten as a tridiagonal matrix using the coefficients a, b, c, z, in the following order:

a_k · T_(k−1)^(i+1) + b_k · T_k^(i+1) + c_k · T_(k+1)^(i+1) = z_k

and solved with the Thomas algorithm.

Note: Because the (de)magnetization process is considered adiabatic in our case (there is no heat transfer between any of the components during the process), this numerical model works only for cases where the duration of the (de)magnetization process is much shorter than the duration of the heat-transfer processes. The (de)magnetization process occurs as a step function of time at the point of total (magnetic) field change.

Note: In this original code, we use an example of a magnetocaloric device with the mcm gadolinium.

Note: The numerical model mc_switch is validated against the numerical model heatrapy (Silva et al. (2019)). For more details see Klinar et al. (2022). The numerical model is not validated experimentally.

STEP-BY-STEP METHOD DETAILS

Code preparation for your (m)cm device

Timing: 1 h

This section defines the Python modules and describes the corresponding free parameters for a particular device. Note: The following is a description of the modules in the mc_switch folder.

runcycles.py and runcycles_hpc.py are scripts for setting the thicknesses of the components and running the simulation; they are presented in detail later in the text.

The preparedata.py module contains a function for importing and initializing the (m)cm data to be used in the simulations.

The simulation.py module contains a class named Simulation with all the attributes needed for the simulation; this module is presented in detail later in the text.

The cycle.py module contains a function that defines the processes inside the cycle: magnetization, demagnetization, and both heat-transfer processes. It also updates and returns the temperatures of the nodes after each process.

The magnetization.py module contains a function to calculate the adiabatic temperature change of the nodes inside the (m)cm based on the current temperature and the predicted magnetic field change. It updates and returns the temperatures of the nodes inside the (m)cm after the magnetization (increased temperature) or demagnetization (decreased temperature) process.

The heat_transfer.py module contains functions that define the heat-transfer process and return the temperatures of all the nodes after the heat-transfer process.

The makematrix.py module contains the functions for building the system of equations for the process of heat transfer inside the device. It considers the locations of the components, the properties of the nodes inside the components, the thermal resistance between the components, the internal heat generation and the boundary conditions. The system of equations is arranged in a tridiagonal matrix and a vector of right-hand sides.

The myprint.py module contains the functions called by other modules for exporting different data (temperatures, properties, resulting temperatures, heat fluxes, etc.) to the console or in the form of output files.

The thomas.py module contains a function for solving the tridiagonal system of equations by the Thomas algorithm (tridiagonal matrix algorithm). The function takes lists of the coefficients a, b, c (under-diagonal, diagonal and above-diagonal coefficients) and z (right-hand sides), calculated with the functions from the makematrix.py module, and returns a new, modified set of temperatures for all the nodes of the device after a heat-transfer step.
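For orientation, a generic pure-Python version of the Thomas algorithm is sketched below; the actual thomas.py may differ in naming and details.

def thomas(a, b, c, z):
    # Solves a[k]*T[k-1] + b[k]*T[k] + c[k]*T[k+1] = z[k] for k = 0..n-1,
    # with a[0] and c[n-1] unused. Forward elimination, then back substitution.
    n = len(b)
    cp, zp = [0.0] * n, [0.0] * n
    cp[0], zp[0] = c[0] / b[0], z[0] / b[0]
    for k in range(1, n):
        m = b[k] - a[k] * cp[k - 1]
        cp[k] = c[k] / m
        zp[k] = (z[k] - a[k] * zp[k - 1]) / m
    T = [0.0] * n
    T[-1] = zp[-1]
    for k in range(n - 2, -1, -1):
        T[k] = zp[k] - cp[k] * T[k + 1]
    return T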
The magneticwork.py module contains a function for calculating the magnetic work from the entropy change of each node inside the (m)cm over the whole thermodynamic cycle. For the model to be thermodynamically accurate, the magnetic work of the (m)cm calculated in this way must equal the difference between the heat fluxes into and out of the device plus the eventual internally generated heat. Both values are exported at the end of the simulation in the quasi-steady state of the system to check the validity of the simulation.

1. Modify the code according to the device you want to simulate.

a. In the simulation.py file, input the values of the attributes listed in Table 1 according to your case (the thicknesses of the components are set in the runcycles.py or in the runcycles_hpc.py script).

b. Some exemplary values of the attributes are already given in the code and some recommended values are given in the comments. You can also refer to the two exemplary simulations, available at GitHub/Zenodo, for which the results are already calculated.

c. Note the values assigned to the attributes listed in Table 1.

CRITICAL: We strongly recommend that you only make changes to the simulation.py and runcycles.py or runcycles_hpc.py files, and leave the rest of the code unchanged, as any changes might lead to errors in the code.

Preparation of code and files for simulations

Timing: 1 h

This section contains instructions on how to prepare the code and accompanying files to run the simulations on a PC or an HPC.

2. Prepare the data and scripts in accordance with the hardware you are using.

a. When running the code on a PC: i. Insert your (m)cm data text (.txt) file into the data folder. ii. Open the runcycles.py script and set the thicknesses of the components of your device: sw_th (thickness of each switch), mcm_th (thickness of the (m)cm) and hex_th (thickness of the heat source and the heat sink). These three parameters are passed to the constructor of your Simulation object.

b. When running the code on an HPC: i. Create a folder on the HPC, named, for example, my_simulations_folder, where you will place the downloaded and extracted mc_switch folder. ii. As with running on a PC, put the (m)cm data in the data folder. iii. Open the runcycles_hpc.py script and navigate to line 102, where the function mp_handler is defined. This function prepares the threads for separate simulations with different combinations of thicknesses of your device components. The module used to do that is called multiprocessing.

Note: The processes to be run are saved in a pool and then run simultaneously as separate threads. The whole process exits when all the processes from the pool are finished. In the mp_handler, define three lists: sw_ths, mcm_ths and hex_ths. Combinations of the values in these lists define the different device structures. For example, if sw_ths, mcm_ths and hex_ths contain two values each, the function will prepare eight (2^3) simulations, each of them considering one of the eight combinations of the device components' thicknesses.
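A hedged sketch of such a handler (the thickness values and the worker body are placeholders; the real runcycles_hpc.py builds and runs Simulation objects) might look like:

import itertools
import multiprocessing as mp

def run_one(thicknesses):
    sw_th, mcm_th, hex_th = thicknesses
    # Stand-in for constructing and running a Simulation for this geometry.
    print(f"simulating sw={sw_th} mcm={mcm_th} hex={hex_th}")

def mp_handler():
    sw_ths = [0.001, 0.002]    # switch thicknesses in m (illustrative values)
    mcm_ths = [0.003, 0.005]   # (m)cm thicknesses in m
    hex_ths = [0.010, 0.020]   # heat source/sink thicknesses in m
    combos = list(itertools.product(sw_ths, mcm_ths, hex_ths))  # 2^3 = 8 runs
    with mp.Pool(processes=min(len(combos), mp.cpu_count())) as pool:
        pool.map(run_one, combos)   # returns once every simulation has finished

if __name__ == "__main__":
    mp_handler()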
Note: The number of combinations should not exceed two times the number of CPU cores available for your simulations.

iv. Create three additional files in the my_simulations_folder: an empty text file named error.txt (this is the error log), an empty text file named out.txt (here the list of simulations will form), and a job.sh file containing the execution instructions for the HPC. An example of a job.sh file is sketched at the end of this section.

CRITICAL: Follow the rules of the HPC provided by the HPC administrator (especially the reserved time, which partition to use, how many cores and nodes may be used, which Python version is available, etc.).

Running the simulation

Timing: 11 h

This section contains instructions on how to run the simulations. Note: The code execution time is given for one of the exemplary simulations, available at GitHub. The code was executed with a single thread on a PC with an Intel Core i7 processor. The execution time strongly depends on the chosen parameters, particularly the space and time discretization; it increases with the number of nodes of the device and with a decrease of the time step. When all the parameters and files are prepared and loaded on a PC or an HPC, you can run the simulation.

3. Start the simulation by running the runcycles.py script when on a PC or the runcycles_hpc.py script when running on an HPC.

a. The easiest way to run the script is by using an IDE like PyCharm, opening the mc_switch folder as a PyCharm project and running the runcycles.py script from within. But you can, of course, use any other method for running the script.

b. If everything works, you should see the results files in your results folder and you can proceed to step 5 of this section (for the analysis of results).

Note: You can use the runcycles_hpc.py script on your PC, but you should be careful not to run more threads than your CPU allows.

4. The workload on a Linux HPC is usually distributed and organized with SLURM. In the command line, navigate inside the folder my_simulations_folder. To run the simulations, start your job script (for example job.sh) with the sbatch command:

sbatch job.sh

If the job is submitted successfully, a notification like this one will appear:

Submitted batch job 289373

You can then check whether your job is active with the squeue command. If you want to cancel your job, you need its ID. For example, the command to cancel the job with ID 289373 is:

scancel 289373

Some potential problems are already accounted for in the code and will show immediately after submitting the job (see troubleshooting, problems 1, 2, and 3).

CRITICAL: Some combinations of device components' thicknesses will lead to slow convergence to the quasi-steady state. Depending on the discretization in time and space, this might lead to very long execution times. To predict the execution times, run some test simulations (on a PC or an HPC). For example, for your set of device-part thicknesses, try running the simulation with less accuracy, i.e., a larger tolerance (self.tolerance and self.end_tolerance) and just a few nodes. Then lower the tolerance and increase the number of nodes to see how the computational time changes. In this way, you can estimate how much time the simulations will take.

Note: It is possible to stop simulations and rerun them later. In that case, the simulations that were already finished will not run again and the "ALREADY DONE" message will appear for each of them.
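Where step iv above refers to an example job.sh, a minimal SLURM script might look like the following; the partition name, Python module and resource values are placeholders that must be adapted to your cluster's rules:

#!/bin/bash
#SBATCH --job-name=mc_switch
#SBATCH --output=out.txt
#SBATCH --error=error.txt
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=16       # enough cores for the planned thickness combinations
#SBATCH --time=24:00:00          # reserved time per your HPC's rules
#SBATCH --partition=compute      # placeholder partition name

module load Python               # module name depends on the cluster
python mc_switch/runcycles_hpc.py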
Note: On GitHub, there are two example codes with exemplary parameter values available. Running the script runcycles.py from the example folder will produce the results that are already generated in the results folder.

Results analysis

Timing: 2 h

This section contains an example of simulation results and their analysis. Figure 3 shows typical results files for one example inside the results folder.

5. Several files appear in the results folder:

a. Two files form for every simulation (based on the different thicknesses of the components). i. The first number in the file name denotes the thickness of each thermal switch, the second is the thickness of the (m)cm and the last is the thickness of both the heat source and the heat sink. ii. The file with the suffix _temps shows the temperatures of each node after the end of each completed thermodynamic cycle, while the other file shows the temperatures of all the nodes after each process in the cycle, a list of all the parameters at the beginning, and a short summary at the end of the file.

b. Result.txt contains a short summary of the quasi-steady-state operation of all the simulations.

c. resultFinished.txt is empty and appears only as a sign that all the simulations inside this folder are finished. It appears only when running the runcycles_hpc.py script.

6. Very important: Check whether a simulation gives correct results. One option is to compare the magnetic work with the difference between the input and output heat fluxes plus the eventual internally generated heat; these two can differ by a few percent (see troubleshooting, problem 4). Additionally, check the fluctuation of the temperature inside the heat sink and the heat source. If the fluctuations are so large that the temperature rises above and falls below the ambient temperature at some point, there is no cooling effect.

a. Figure 4 shows exemplary results. i. Figures 4A and 4C show the temperature evolution of the magnetocaloric material, the heat sink, and the heat source from the beginning of operation until the quasi-steady state is reached. ii. Figures 4B and 4D show the temperature profile in the magnetocaloric device over one complete cycle during quasi-steady-state operation.

b. In the quasi-steady state, the temperature of the magnetocaloric material fluctuates between 294.8 and 289.5 K (a temperature difference of 5.3 K), which is the maximum achievable cooling potential. i. Because thermal switches 1 and 2 are not ideal (the thermal conductivity during the off state is greater than zero and the thermal conductivity during the on state is finite), the temperature difference between the heat source and the heat sink is only 2 K. ii. Heat dissipation from the heat sink to the ambient is adequate, since the temperature of the heat source is slightly above the ambient. If the switches had better rectification ratios, the temperature of the heat source would approach 289.5 K.

c. The temperatures of the heat source and the heat sink are important refrigerating quality indicators. i. Figures 4A and 4B show the case where the temperatures of the heat sink and the heat source are constant during the quasi-steady state (approx. 293 and 291 K, respectively). ii. Figures 4C and 4D show a different case, where the temperatures of the heat sink and the heat source fluctuate even in quasi-steady-state operation: the temperatures of the heat sink and the heat source fluctuate between 291 and 291.4 K, and between 292.7 and 293 K, respectively. Such operation is not desired, because the temperature of the heat sink should be constantly above the ambient temperature to dissipate heat to the ambient.
d. Different magnetocaloric devices can be compared according to the established temperature span between the heat sink and the heat source. i. The higher the temperature span, the better the operation of the refrigerating device. The example in Figure 4 shows a case of zero cooling power, and therefore the highest achievable temperature span. If we increase the cooling power (add a thermal load to the heat source), the temperature span decreases. At some point, the temperature span will be zero: this corresponds to the maximum cooling power of the magnetocaloric device. ii. Usually, the cooling power is given per gram of magnetocaloric material, which enables further comparison with other devices and different magnetocaloric materials. The operation of the device can also be evaluated in terms of the COP, which is defined as: COP = cooling power / (heating power − generated heat − input work − cooling power).

e. The model also enables evaluation of the impact of the thermal contact resistance and the internal heat generation inside the thermal switches on the cooling performance of the device. Increasing one or both parameters decreases the temperature span and consequently the cooling power. The model can be used to determine the maximum allowable values of the thermal contact resistance and the internal heat generation. It is important to be aware of these values prior to designing the experimental setup. Further analysis of the results is presented in Klinar et al. (2022).

EXPECTED OUTCOMES

The modeling method described here allows you to analyze different static thermal switches implemented in a single-stage, non-regenerative (magneto)caloric device, for example, in terms of their COP or achievable temperature differences. The best cases can then be manufactured and experimentally tested. See Klinar et al. (2022) for information on the design of a regenerative device.

LIMITATIONS

Because the (de)magnetization process is considered adiabatic, this numerical model works correctly only for cases where the time for (de)magnetization is much shorter than the time for the heat-transfer processes. The heat conduction is based on Fourier's law of heat conduction, which only applies to bulk materials and does not consider phenomena occurring on the nanoscale; therefore, be cautious when using very small dimensions or fine discretizations.

TROUBLESHOOTING

Problem 1: When you run a simulation, you see the "ALREADY DONE" message (step 4 of running the simulation).

Potential solution: The simulation is either finished and no action is needed, or there are old results in the results folder with the same name that need to be deleted.

Problem 2: When you run the simulation, you see the "Time step too large!" message (steps 3 and 4 of running the simulation).

Potential solution: This means that the time step is larger than the length of the time interval for each heat-transfer process (my_sim.t_transfer < my_sim.time_step). Reduce the time step in simulation.py.

Problem 3: An error message (like this one: "ValueError: Some errors were detected! Line #513 (got 13 columns instead of 102)") appears when you try to rerun the simulations after you had to interrupt the execution (step 4 of running the simulation).

Potential solution: Before rerunning the simulations, delete all the lines that were not finished in the _temps.txt files.
Problem 4: At the end of the simulation, the magnetic work differs from the difference between the input and output heat fluxes plus the eventual internally generated heat by more than 5% (step 6 of results analysis).

Potential solution: This means that the system has not reached the quasi-steady state yet, or that the operation is not thermodynamically correct. If the steady state is not yet reached, try reducing the end tolerance for checking the steadiness in the simulation.py file. If, however, the steady state is reached, the reason for the discrepancy could be an inappropriate (most likely too large) discretization in time and/or space. Try adjusting the discretization in the simulation.py file.

Problem 5: In the quasi-steady state, all temperatures (including the temperatures of the heat source) are above ambient. There is no cooling effect (step 6 of results analysis).

Potential solution: The cooling power of the device is not sufficient, so there is no cooling effect. Possible causes, besides inappropriate thicknesses, are: the operating frequency is too high; the cooling power (left boundary condition) value is too large; the heat generated inside the thermal switches is too large; the thermal contact resistances are too large; the magnetic field change (or that of other field types for other caloric technologies) is too small; the temperature range of the simulation does not fit the temperature of the highest caloric effect of the chosen caloric material; or the caloric effect of the chosen caloric material is too small (which results in poor cooling power).

RESOURCE AVAILABILITY

Lead contact: Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Katja Vozel (katja.vozel@fs.uni-lj.si).

Materials availability: This study did not generate new, unique materials.

Data and code availability: Original data have been deposited at Zenodo: https://zenodo.org/badge/latestdoi/468418161. Exemplary data have been deposited at GitHub as part of the mc_switch repository and are publicly available as of the date of publication. The DOI of a version of record at Zenodo is listed in the key resources table. All original code has been deposited at GitHub and is publicly available as of the date of publication. The DOI of a version of record at Zenodo is listed in the key resources table. Any additional information required to reanalyze the data reported in this paper is available from the lead contact upon request.
Inkk Trial – Intraoperative ketamine for perioperative pain management following total knee endoprosthetic replacement in oncology: a double-blinded randomized trial

Background: There has been a growing interest in the use of ketamine following orthopedic surgeries. We hypothesized that low-dose intravenous ketamine during surgery would help in mobilization following total knee replacement (TKR) in oncology patients, as assessed by the Timed Up and Go (TUG) test at 72 hours post-surgery. Our secondary objectives were to compare the opioid requirement at the end of 72 hours, pain scores, satisfaction with pain management, adverse effects, the range of joint movement achieved in the postoperative period and the functional recovery at the end of 1 month.

Methods: After ethics committee approval, registration of the trial with the Clinical Trial Registry – India (CTRI), and informed consent, this double-blinded trial was conducted. Using a computer-generated randomization chart, an independent team randomized the patients into a ketamine group, which received at induction a ketamine bolus dose of 0.5 mg.kg-1 before the incision followed by a 10 μg.kg-1.min-1 infusion maintained intraoperatively until skin closure, and a saline group, which received an equivalent volume of saline. Postoperatively, patient-controlled morphine pumps were attached and the pain scores with morphine usage were recorded for 72 hours. The TUG tests and range of motion were assessed by the physiotherapists until 72 hours.

Results: Fifty-two patients were enrolled in the trial. Demographics were comparable. No significant intraoperative hemodynamic changes or postoperative adverse events were noted between the groups. A decrease in the TUG test time, along with decreased opioid usage and a better range of movement, was noted in the ketamine group, but this was not statistically significant. Day of discharge, patient satisfaction score, and functional recovery assessed by the Oxford Knee Score (OKS) were comparable between the groups.

Conclusion: Low-dose intraoperative ketamine infusion does not provide clinical benefit in perioperative pain management and postoperative rehabilitation following total knee endoprosthetic replacement in oncology.

Introduction and rationale

A pain-free postoperative period is imperative following total knee replacement (TKR) surgeries, as it aids early rehabilitation and faster recovery. 1 Currently available analgesic interventions during TKR include epidural analgesia, peripheral nerve blocks and opioids. 2 Epidural analgesia has failed to gain popularity because of incidences of hypotension, urinary retention, pruritus, motor weakness and increased transfusion and fluid requirements. 3,4 The use of opioids through intravenous patient-controlled analgesia (IV PCA) is associated with side effects including nausea, vomiting, constipation, sedation, and urinary retention. 5 Intra-articular local anesthetic infiltration has not gained popularity in our hospital. Additionally, peripheral nerve blocks are not favored, as there is a risk of femoral quadriceps weakness leading to an increased risk of falls, and a few cases of neuritis and femoral neuropathy following peripheral blocks have been documented, all of which can affect postoperative rehabilitation. 6 Hence arose the need for a suitable multi-modal analgesic regimen for these patients.
Ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist, has been used in a few orthopedic surgeries, including knee and spine surgeries, with results suggesting a decrease in perioperative opioid requirement. The literature is inconclusive about the optimum dose and duration for the continuation of ketamine infusion in the perioperative period. 7,8 There is also a lack of data on whether ketamine is equally effective in endoprosthetic knee replacement surgeries, which involve a longer procedure with more soft tissue and neurovascular dissection. Here, normal soft tissues are excised to achieve negative surgical margins, resulting in large structural defects which are reconstructed with a tumor endoprosthesis. 9 As tissue handling and trauma are greatest during surgery itself, we aimed to study the benefit of intraoperative use of ketamine in rehabilitation following endoprosthetic TKR, and we hypothesized that a low dose of intravenous ketamine during surgery would help in mobilization following endoprosthetic TKR in oncology patients, as assessed by the Timed Up and Go (TUG) test. 7,10,11 Our primary objective was to compare functional recovery using the TUG test at the end of 72 hours. Our secondary objectives were to compare the opioid requirement at the end of 72 hours, pain scores, satisfaction with pain management, the incidence of adverse effects and the range of joint movement achieved in the postoperative period. We also compared the functional recovery at the end of one month.

Methods

This prospective double-blinded randomized controlled trial was conducted in our hospital from September 2017 to October 2018. After Institutional Ethics Committee approval [IEC approval number: IEC/0817/1855/002], the trial was registered with the Clinical Trial Registry of India [CTRI/2015/08/006130] and written informed consent was obtained from each patient/guardian. Patients with American Society of Anesthesiologists (ASA) physical status I and II, aged above 13 years, undergoing total knee replacement for oncological indications were included. Patients undergoing reconstructive surgery with major plastic flaps, with preoperative opioid/drug abuse, on chronic pain medications, with a preoperative pathological fracture, with muscle weakness of the affected limb limiting mobility, pregnant patients, and patients with contraindications to ketamine, such as raised intracranial pressure, glaucoma medications, raised intraocular pressure, a history of vertigo, auditory/visual hallucinations, or antipsychotic medications, were excluded. Postoperative exclusion criteria included intraoperative common peroneal nerve damage and postoperative ventilation or hemodynamic instability preventing mobilization for more than 24 hours.

Previous observations by the physiotherapy team revealed that oncology patients after endoprosthetic TKR managed with the standard analgesic protocol at our center take an average of 142 seconds at 72 hours to complete the TUG test. The standard analgesic protocol at our center includes the use of an intraoperative opioid along with postoperative morphine PCA pumps (1 mg bolus and 10-minute lockout interval), and either intravenous (IV) paracetamol or diclofenac. Group sample sizes of 20 each were required for 80% power with a mean difference of 35.5 seconds (a 25% reduction in the day-3 TUG time) and a significance level (alpha) of 0.05. Permitting a 30% dropout (for postoperative exclusion), 52 was taken as the sample size.
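The stated sample-size reasoning can be reproduced as a sketch with statsmodels; the standard deviation of roughly 40 seconds is an assumed value (it is not reported above), chosen so that the calculation matches the stated group size of 20.

from statsmodels.stats.power import TTestIndPower

effect_size = 35.5 / 40.0                 # Cohen's d = mean difference / assumed SD
n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          power=0.80, alpha=0.05)
print(round(n_per_group))                 # ~21 per group; with a ~30% dropout
                                          # allowance, a total of 52 is reasonable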
Patients were preoperatively educated in the use of patient-controlled analgesia (PCA) pumps and familiarized with the use of the Numeric Rating Scale (NRS; a 0 to 10 scale where 0 = no pain and 10 = worst pain imaginable) for rating their postoperative pain at rest and on movement. On the morning of the surgery, patients were randomized into the ketamine group or the saline group. A team of residents who were not part of the research team randomized patients in accordance with a computer-generated randomization chart. This team prepared the study drug and labeled and handed over the syringes to the concerned anesthesiologist. This ensured that the theatre team, patients, and the study team were blinded to the nature of the study drug. The ketamine group received, at induction, a bolus dose of 0.5 mg.kg-1 followed by a 10 μg.kg-1.min-1 infusion, while the saline group received an equivalent volume of saline.

Induction of general anesthesia and intraoperative management were standardized. Upon arrival in the operating room, baseline parameters, i.e., heart rate (HR), blood pressure (BP) and oxygen saturation, were noted. In addition, the electrocardiogram was continuously monitored. Patients were induced with either propofol 2–3 mg.kg-1 or thiopentone sodium 5–7 mg.kg-1 intravenously; the need for neuromuscular blockade and airway management was decided by the theatre anesthesiologists. Intraoperative analgesia included fentanyl 2 μg.kg-1 IV at induction, followed by morphine 0.1 mg.kg-1 (lean body weight) IV after 30–45 minutes. If needed, fentanyl 1–2 μg.kg-1 could be repeated as and when required. The study drug bolus was administered after the airway was secured and was followed by the infusion as per the instructions given by the unblinded team. The study drug was continued until the completion of skin closure. The procedures were performed by the same surgical team. Perioperatively, steroids, tranexamic acid, and peri-articular anesthetic injections were not used. A single negative-suction drain was inserted in all patients. At the end of surgery, injection paracetamol 500 mg–1 g (> 50 kg: 1 g; 45–50 kg: 750 mg; less than 45 kg: 15 mg.kg-1, maximum of 500 mg) was given intravenously. In the post-anesthesia care unit (PACU), the PCA pump with morphine was initiated with a standard setting of a 1 mg bolus and a lockout interval of 10 minutes.

All patients were followed up by the acute pain service (APS), and resting pain was assessed using the NRS scale. The worst pain during exercise was recorded by the physiotherapist at the end of each exercise session. Adverse effects were recorded at 24, 48, and 72 hours as follows. Vomiting was recorded at 24, 48 and 72 hours as per a vomiting score (0: no nausea, no vomiting; 1: nausea alone; 2: one episode of emesis; and 3: two or more episodes of emesis). 12 Sedation was assessed using the 6-point Ramsay sedation scale (in which 1 = awake, anxious, agitated, restless; and 6 = asleep, no response to light glabellar tap or loud auditory stimulus). 13 Unpleasant feelings like hallucinations (auditory/visual), dizziness and nightmares were recorded on a score from 1–5, with 5 = worst imaginable. 14 At the 30-day follow-up in the outpatient department, details of ongoing painkillers and functional recovery were recorded using the Oxford Knee Score (OKS), a validated 12-item knee questionnaire that scores patients from 12 (best possible) to 60 (worst possible). 15 The scale is available in the English language; it was administered by the investigator and the patients' replies were recorded.
The TUG test measured the time it takes a patient to rise from an armed chair (with a seat at least up to knee height for the given patient), walk 3 meters, turn, and return to sitting in the same chair. 10 Patients were instructed to walk as quickly as they felt safe and comfortable. The use of the arms of the chair was permitted to stand up and sit down. A stopwatch was used to measure the time to complete the TUG to the nearest one-tenth of a second. Walking aids, if needed, were allowed only in the immediate postoperative period (24–48 hours).

All the raw data were entered and analyzed using SPSS Statistics version 25 software. Demographic data were expressed as mean ± standard deviation (age, weight, height, duration of surgery and anesthesia) or proportion (sex and ASA physical status). The continuous data were analyzed using Student's independent t-test when normally distributed (fentanyl use, morphine use, degrees of movement), and with the Mann–Whitney U test otherwise (heart rate [HR], blood pressure [BP], minimum alveolar concentration [MAC] and pain scores). All the analyses were two-tailed and the confidence level was 95%; p < 0.05 was considered statistically significant.

Results

A total of 102 patients were screened and 52 patients were randomized; 49 were included in the final TUG analysis (see the CONSORT diagram, Fig. 1). The general demographics, such as age, gender, weight, ASA physical status, and duration of surgery and anesthesia, were comparable between the two groups (Tables 1 and 2). We found that functional recovery assessed using the TUG test at the end of 72 hours was better in the ketamine group, with 103.25 ± 30.04 seconds, as compared to the saline group, with 125.91 ± 49.32 seconds, but this finding was not statistically significant (p = 0.1). The results of the TUG tests on each postoperative day, along with the degrees of flexion achieved, are shown in Table 3. The comparison of perioperative opioid requirements is enumerated in Table 4. Interventions were required intraoperatively for six patients for tachycardia and hypertension (2 in the saline group and 4 in the ketamine group); no statistical difference was seen in this regard. There was no discontinuation of the study drug due to any hemodynamic instability intraoperatively. The postoperative pain scores at rest and during exercise were comparable between the two groups. Figure 2 shows the trend of postoperative pain scores during exercise. The median pain score at 24 hours during exercise was 7 [5–8] in the saline group and 5 [4–7.5] in the ketamine group (p = 0.2). No significant postoperative adverse events such as nausea, vomiting, sedation, and dysphoric symptoms were noted between the groups. Day of discharge, patient satisfaction score and functional recovery assessed by the OKS at the one-month follow-up were comparable between the groups (Table 5).

Discussion

From this study we found that an intraoperative intravenous ketamine infusion at 10 μg.kg-1.min-1 following a bolus of 0.5 mg.kg-1 did not improve postoperative rehabilitation following endoprosthetic TKR in oncology. Though the ketamine group performed better on the TUG test at the end of 72 hours, the difference was not statistically significant.
The difference between knee replacement done for tumors and conventional knee replacement is that the part of the bone involved by the tumor (femur or tibia) is removed, keeping a safe margin, together with a cover of overlying muscles, 16 while in conventional TKR only the articular surface is removed and replaced. 17 In tumor reconstruction, emphasis is placed on safe resection; reconstruction is secondary, with the ligaments (collateral and cruciate) sacrificed in order to achieve complete resection. Postoperative rehabilitation is a challenge in tumor reconstruction. In distal femur reconstruction, patients can be started on full weight bearing and gradual knee flexion. In proximal tibia reconstruction, the patients, although started on full weight bearing, are advised to delay knee bending for up to 6 weeks in order to protect the ligament reconstruction. 18 Nevertheless, irrespective of the site of the tumor, we presumed that better functional scores at 48–72 hours could translate into better prolonged rehabilitation, which is most needed following these surgeries due to the extensive tissue dissection. Hence, a review of functional recovery was done again at the end of one month for all trial patients. We found no difference between the two groups with respect to functional recovery as assessed by the OKS.

Previous studies 19–21 suggest that the perioperative use of ketamine may benefit postoperative rehabilitation. Adam et al. 19 demonstrated statistically significantly better knee flexion in the study group when ketamine was used along with a continuous femoral nerve block. In that trial, the ketamine infusion was continued for 48 hours postoperatively at 1.5 μg.kg-1.min-1 after an intraoperative infusion run at 3 μg.kg-1.min-1, with no serious adverse effects. Two continuous infusions along with a PCA pump for postoperative pain management can be seen as cumbersome and not practical in all scenarios. The role of ketamine in preventing or reducing central sensitization due to tissue damage has been well established. 22,23 Since tissue damage is maximal during the intraoperative period of any surgery, we rationalized that a ketamine infusion during this period should work. In our trial, the ketamine group consistently had a better degree of flexion on all assessments postoperatively until 72 hours, although this was not statistically significant.

Similarly, significant opioid-sparing and analgesic effects have been observed with ketamine infusion in orthopedic surgeries, and many of these studies 19,24–27 continued the ketamine infusion postoperatively for varied periods of time, with a maximum recorded duration of 48 hours, and at different dosages. There remains a chance of dosing errors with continuous infusions, 28 and hence, as a policy, ketamine infusions are not used in inpatient wards at our hospital. Cengiz et al. 26 recorded a reduction of morphine consumption of up to 45% with an intraoperative ketamine infusion at 6 μg.kg-1.min-1 in total knee replacement surgeries. In our trial, the intraoperative fentanyl (205.00 ± 86.12 μg vs. 213.25 ± 76.75 μg) and the first 24 hours' postoperative morphine requirement (28.52 ± 20.84 mg vs. 32.13 ± 19.99 mg) recorded in the ketamine group were lower, though not significantly. Similarly, the pain scores in the ketamine group were lower than those of the saline group and of a different severity category (moderate versus severe in the case of saline); however, this was not statistically significant.
As in other trials, 19,25,26 no adverse effects of ketamine such as hallucinations or delusions were observed postoperatively. Thus, the question of whether continuing the ketamine infusion into the postoperative period would yield opioid sparing, better analgesia, and improved rehabilitation remains open. The intraoperative hemodynamic parameters were higher, though not significantly, in the ketamine group; whether this is attributable to the roughly 150 mL greater blood loss in the ketamine group is speculative (Table 1). Postoperative rehabilitation after TKR surgeries has been assessed using 2-minute walk tests, passive and active knee motion, performance measures such as the TUG and IALS (Iowa Level of Assistance Scale), and patient-reported outcome measures (PROMs). 11 We chose the TUG test for our assessment, as it is one of the most commonly used performance assessment tools. The TUG test is quick, less resource intensive, and does not rely on the clinician's perception, and studies show that PROMs are less reliable than performance measures in the immediate post-surgery period. 29 The literature shows that the TUG test has predictive value for both short-term 30 and long-term 11 functional recovery following arthroplasties. Studies suggest that the preoperative and acute TUG test is a better predictor of long-term functional outcome on the 6-minute walk test when not adjusted for age, sex, and preoperative functional outcomes. Bade et al. 11 also propose that postoperative day 2 range of motion is not a better predictor than preoperative ROM of long-term functional outcome following total knee arthroplasties for osteoarthritis. Nevertheless, whether these findings apply to TKR with endoprosthesis performed for oncosurgeries needs to be evaluated with a larger sample. We used the OKS for the PROM assessment. 31 We found that the cohort of patients who underwent preoperative chemotherapy had better pain relief and performed better on the preoperative OKS (26 [24–27] in patients who received preoperative chemotherapy vs. 22 [21–26]), though this was not statistically significant (p = 0.3). Postoperatively, as expected, at one-month follow-up the cohort that received preoperative chemotherapy had a median OKS of 35 [33–36] compared with 32 [30–34] in the non-receivers (p = 0.007). Items such as the ability to kneel and the feeling of a sudden "give way" were not applicable to all patients. The literature shows that preoperative chemotherapy can decrease inflammation of the tissues surrounding the tumor and reduce the size of the lesion, and responders to chemotherapy were found to have reduction or complete remission of pain and decreased tumor vascularity. This could translate into better surgical margins and hence better outcomes. 32 There were limitations to the trial: the ketamine infusion was restricted to the intraoperative period, when tissue handling and trauma are maximal, and the impact of the intervention was assessed only by clinical parameters, including rehabilitation and pain scores. We could also have examined inflammatory markers to gain a more complete understanding of the role ketamine played in the body's response to surgical trauma. In summary, we infer that intraoperative intravenous ketamine infusion at 10 µg.kg-1.min-1 following a bolus of 0.5 mg.kg-1 does not improve postoperative rehabilitation following total knee endoprosthetic replacement surgeries in oncological settings.
FINANCING STRATEGY TO SUPPORT A PRODUCT DEVELOPMENT OF ALUMINIUM FINISHED GOODS (CASE STUDY: PT. XYZ)

Nowadays, the use of wood materials in construction is being reduced because logging forests contributes to global warming. Construction is shifting toward aluminium materials such as frames and doors, and the resulting increase in demand for aluminium products is an opportunity for companies to develop new products. The company currently manufactures aluminium profile products (semi-finished goods), so it needs to develop aluminium finished goods to increase profits and brand image. The investment required by the company to develop aluminium finished goods is 11,232,000,000 rupiah. This research applies a financial feasibility study and then selects the best investment financing. Net Present Value (NPV), Profitability Index (PI), Internal Rate of Return (IRR), Payback Period (PP), and Return on Investment (ROI) are used as the investment decision variables. Four financing alternatives are considered: internal financing, bank financing, leasing, and an IPO. The results show that bank financing provides the highest NPV and ROI, with an NPV of 1,546,356,903, a Profitability Index (PI) of 1.14, an Internal Rate of Return (IRR) of 10.09%, a Payback Period of 4 years and 1 month, a Return on Investment (ROI) of 28.24%, and a US Index of 4.71. Thus, this research recommends using bank financing for the product development investment.

Introduction

Nowadays, the use of wood materials in construction is being reduced because it can have a negative environmental impact; in particular, cutting down forests contributes to global warming. Wood is therefore being replaced, and many construction service companies substitute aluminium, which is generally used for frames. Aluminium frames are used in housing, apartments, hotels, hospitals, and other buildings, and aluminium can also be made into doors. Aluminium has several advantages: it does not rust easily, it is more durable, its maintenance is easier, and its price is more affordable compared to wood. As a result, demand for aluminium frames is increasing while demand for wooden frames declines. This increasing demand is an opportunity for the company to increase revenue. So far, the company has focused only on producing and selling aluminium profiles of various shapes according to customer needs, one of which is the profile commonly used as a frame. This case study was conducted at PT. XYZ, a company engaged in the field of aluminium profiles. The company has several facilities, including raw material handling, extrusion, anodizing, anodizing lacquer, and powder coating, and can produce and sell aluminium profiles in various colors as needed. These aluminium profiles are semi-finished products. The company serves customers in several categories, including distributors, industry, projects, and individuals. The company actually has the capability to process semi-finished goods into finished goods. When sales of aluminium profiles decline, new innovations are needed as another source of income, so that the company can develop and remain sustainable. For this reason, it is necessary to develop other products so the company can sell finished goods: aluminium profiles made into frames or doors that can be used directly by consumers.
This is an opportunity for the company to increase revenue. If the company only sells semi-finished goods, the economic value captured is low because semi-finished goods sell for less than finished goods. The company should therefore develop its semi-finished goods into finished goods to obtain a higher selling value. The business issue in this development is to find a funding strategy for developing a new product line of aluminium finished goods. This study examines only the funding strategy for the company to make finished goods in the form of aluminium frames and doors. Based on the conceptual framework above, this research starts from the current condition of a company producing semi-finished goods that sees an opportunity to develop new products from semi-finished goods into finished goods. With this opportunity, the company can increase profits from the sale of finished goods.

Business Issue Exploration

To support the development and manufacture of finished goods, a dedicated fabrication division is needed. This division will prepare the required equipment and handle the production process through to sales to customers. Creating the fabrication division requires funding for the purchase of equipment, workers, and other facilities. The next step in this research is to prepare financial projections and a feasibility study for the new product development project, beginning with capital expenditures and operational expenditures, which are used to build cash flow projections with capital budgeting techniques. Thus, the main objective of this research is to determine the most appropriate funding alternative to finance the development of new products, in the expectation of making a profit. After the most appropriate funding alternative is selected, the implementation is planned. There is a business opportunity in housing construction, which requires doors as a complement to each development, so this new door product can be absorbed by the market. According to data from the Ministry of Public Works and Public Housing, the housing backlog reached 7.64 million house units at the beginning of 2020, consisting of 6.48 million units for low-income people (non-fixed income), 1.72 million units for low-income people (fixed income), and 0.56 million units for non-low-income people (Petriella, 2020). Furthermore, there is potential for the aluminium market in the property sector, where the government through the Ministry of Public Works and Public Housing will build houses for low-income people; the government plans to build a total of 102,500 cheap or subsidized housing units in 2020 (Indraini, 2019). Private developers will also build houses, hospitals, and apartments that require aluminium doors. Based on Bisnis.com's records, the property industry grows 5 percent annually on average (Budhiman, 2020), and looking at the previous year, a number of parties projected that this industry could grow in the range of 5 to 8 percent in 2020.
Due to the Covid-19 pandemic, housing sales throughout 2020 in Jabodebek-Banten, the national housing benchmark, decreased dramatically by 31.8 percent compared to sales in 2019, the lowest level of sales since the property cycle slowed down in 2013 (Lubis, 2021). According to the CEO of Indonesia Property Watch (IPW), Ali Tranghanda, sales in the Rp. 1 billion to Rp. 2 billion segment were notable in 2020, increasing 12.5 percent compared to the previous year. Sales in the price segment below Rp. 300 million were under the greatest pressure, with a 42.9 percent decrease in 2020, followed by the housing segment priced above Rp. 2 billion, which fell 41.1 percent. Meanwhile, home sales in the medium price segments of Rp. 301 million to Rp. 500 million and Rp. 501 million to Rp. 1 billion decreased by 34.2 percent and 25.6 percent, respectively (Lubis, 2021). However, if the Covid-19 pandemic subsides, the property business is likely to recover and economic growth will improve. A recovering property business will have a positive impact and great potential for the company to market finished aluminium products for new and existing homes. Therefore, to reach the desired business targets more easily, an appropriate strategy is needed. Before developing a business strategy, however, it is important for a company to conduct a situation analysis to identify its business capabilities, customers, and business environment. This study analyzes the business situation using Porter's Five Forces and a SWOT analysis to evaluate the company's strategy. Porter's five factors are the threat of new entrants, the bargaining power of suppliers, the bargaining power of buyers, the threat of substitute products, and the rivalry among similar companies within an industry (Porter, 1985). The analysis is based on discussions with 5 participants, consisting of the General Manager, the Sales Manager, and the sales team, about the company's prospects and challenges in developing finished goods products. In summary: first, the threat of new entrants is low to medium, because demand for frames and doors is consistently high, while investing in this field is neither easy nor especially difficult. Second, the bargaining power of suppliers is low because most materials are already available within the company. Third, the bargaining power of buyers is medium because the planned finished goods are still new products. Fourth, the threat of substitute products is medium because cheaper substitutes made of plastic or plywood exist. Finally, rivalry among competitors is medium to high because other companies already offer nearly identical products, although market share remains available. According to Pickton and Wright (Pickton & Wright, 1998), SWOT analysis (Strengths, Weaknesses, Opportunities, Threats) is useful because it is easy and practical to implement, and its value as a strategic management tool merits reassessment. The method analyzes the internal and external factors of the company's management plan. The Strengths are more than 10 years of experience in aluminium profiles, good aluminium raw materials, and a large aluminium profile production capacity.
The Weakness is that the finished aluminium door product is not yet widely known. The Opportunities are high demand for custom aluminium frames and doors, a still-wide potential market share in Indonesia, and a housing backlog that reached 5.4 million units in 2019 based on data from the Ministry of Public Works and Public Housing. The Threat is the presence of competitors such as aluminium shops and contractor services.

Literature Review

Capital structure is the combination of debt and equity used by the company to fund the investment of a project. The optimal capital structure minimizes the total, or weighted-average, cost of capital. Development of a project requires funds, typically financed at the cost of debt or the cost of the company's own equity. The after-tax cost of debt is calculated as:

ri = rd × (1 − T)

Where:
- ri = cost of debt after tax
- rd = interest rate (cost of debt before tax)
- T = income tax rate

The Capital Asset Pricing Model (CAPM) approach gives a cost of equity equal to the risk-free rate plus a risk premium that compensates for investment risk; the investment beta measures the sensitivity of the investment to the market. The CAPM formula is:

rs = Rf + β × (rm − Rf)

Where:
- rs = cost of equity
- Rf = risk-free rate
- β = beta
- rm = market rate of return

Beta measures how an asset moves when the stock market as a whole increases or decreases, and is divided into levered and unlevered beta. The company is not listed on the Indonesia Stock Exchange, so its own beta cannot be identified; therefore, the author uses the homebuilding industry unlevered beta of 1.18 released by Damodaran in 2021 (Damodaran, Betas By Sector (US), 2021). The Weighted-Average Cost of Capital (WACC) is the average estimated future cost of funds, calculated by weighting the company's cost of debt and cost of equity by the portion of each funding source used:

WACC = (ws × rs) + (wi × ri)

Where:
- wi = proportion of long-term debt in the capital structure
- ws = proportion of equity in the capital structure
- ri = cost of long-term debt
- rs = cost of equity

Investment analysis requires capital budgeting, the process of evaluating and selecting sustainable long-term investments with the aim of maximizing shareholder wealth (Gitman & Zutter, 2015). The analytical methods used to evaluate the financing alternatives and select the one with the best return are Net Present Value (NPV), Profitability Index (PI), Internal Rate of Return (IRR), Payback Period (PP), and Return on Investment (ROI). Net Present Value (NPV) is a capital budgeting method for evaluating the profitability of investment projects:

NPV = Σ (t = 1..n) [CFt / (1 + r)^t] − CF0

where CF0 = cash flow at year 0, CFt = cash flow at year t, and r = cost of capital. If NPV > 0, the investment project is accepted, i.e., the projected cumulative discounted cash flow exceeds the investment, and vice versa. The Profitability Index is another form of NPV, calculated as the present value of the cash inflows divided by the initial cash outflow:

PI = PV of cash inflows / initial investment

If PI > 1, the investment project is accepted, i.e., the projected cumulative discounted cash flow is greater than the investment, and vice versa.
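To make the relationships among these formulas concrete, the following minimal Python sketch computes the cost of capital and the NPV/PI decision rules defined above. The rate inputs follow the figures quoted later in this paper (risk-free rate 5.90%, beta 1.18, equity risk premium 1.84%, loan rate 8%, tax 25%); the yearly cash flows are hypothetical placeholders, not PT. XYZ's projections.

```python
# Minimal sketch of the cost-of-capital and capital-budgeting formulas above.
# Rate inputs follow the paper (Rf = 5.90%, beta = 1.18, ERP = 1.84%,
# rd = 8%, T = 25%); the yearly cash flows are hypothetical placeholders.

def cost_of_debt_after_tax(rd, tax_rate):
    return rd * (1 - tax_rate)

def capm_cost_of_equity(rf, beta, equity_risk_premium):
    # The paper applies Damodaran's risk premium directly,
    # i.e. rs = Rf + beta * ERP, which is the form reproduced here.
    return rf + beta * equity_risk_premium

def wacc(ws, rs, wi, ri):
    return ws * rs + wi * ri

def npv(rate, cf0, cash_flows):
    return -cf0 + sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def profitability_index(rate, cf0, cash_flows):
    pv_inflows = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv_inflows / cf0

rs = capm_cost_of_equity(0.059, 1.18, 0.0184)        # ~8.07%, internal financing
ri = cost_of_debt_after_tax(0.08, 0.25)              # 6.00%, bank financing
print(f"cost of equity: {rs:.2%}, after-tax cost of debt: {ri:.2%}")
print(f"WACC at 100% debt: {wacc(0.0, rs, 1.0, ri):.2%}")

investment = 11_232_000_000                          # initial outlay (Rp)
flows = [2.5e9, 3.0e9, 3.6e9, 4.1e9, 4.5e9]          # hypothetical free cash flows
print(f"NPV: {npv(ri, investment, flows):,.0f}")
print(f"PI : {profitability_index(ri, investment, flows):.2f}")
```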
The Internal Rate of Return (IRR) is the discount rate at which the NPV equals 0; in other words, it is the rate of return the company obtains from the investment. The IRR is the rate that solves:

0 = −CF0 + Σ (t = 1..n) [CFt / (1 + IRR)^t]

If the IRR exceeds the cost of capital, the investment project is accepted, and vice versa. Return on Investment (ROI) measures how effectively an investment generates profit, expressed as a percentage over a certain period. ROI can be used to compare the profitability of alternative investments and choose the more profitable one:

ROI = Net Income / Cost of Investment

In this study, the author analyzes the financing strategy the company can use to develop its business over a five-year horizon. One financing strategy is to use debt; when debt financing is used, an indicator is needed for consideration, and the US Index can provide insight into loan selection. The US Index is a theory from Dr. Ir. Uke Marius Siahaan, MBA, used as an indicator of a company's ability to pay its debts; it also assists in deciding whether to finance an investment through debt or equity (Dr. Ir. Uke Marius Siahaan, 2019). The US Index is calculated as:

US Index = Business Generic Profitability / Loan Interest Rate
Business Generic Profitability = (Earnings before Interest and Taxes / Total Assets) × 100%

If the US Index > 1, the company should use leverage; if the US Index < 1, the company should use equity; and if the US Index = 1, the company is free to choose either. Companies prefer to use internal financing whenever possible, which is an important element of pecking order theory (Ross et al., 2009). In the pecking order, financing is obtained first from retained earnings, then from debt, and finally from external equity (Gitman & Zutter, 2015). Under this theory, internal financing means the company invests funds generated from retained earnings. Alternatively, the company can use bank loans for investment financing: banks lend money to customers with interest. The advantage of a loan is that taxable income is reduced by the interest payments on the debt, which means the cost of debt is effectively subsidized by the government (Gitman & Zutter, 2015). However, the higher the proportion of debt, the greater the chance the company cannot pay its debts when they fall due, creating a risk of bankruptcy. Bank loans usually require collateral such as assets; if the company is unable to pay its debts, the assets are seized by the bank. Leasing is a lease agreement in which the lessee agrees to pay rent to the lessor for the use of an asset (Harrison Jr. et al., 2012). Leasing allows the lessee to use an asset without paying the large upfront amount required in a purchase agreement. According to the decision of the Minister of Finance, leasing is a financing activity in the form of providing capital goods, either as a finance lease or an operating lease, for use by customers for a certain period based on periodic payments (Keputusan Menteri Keuangan Republik No. 1169/KMK.01/1991).
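The IRR and US Index rules above can be sketched in the same way. Since the paper does not reproduce its cash-flow table here, the flows below are hypothetical, as are the EBIT and total-asset figures; the IRR is found by simple bisection rather than a financial library.

```python
# Minimal sketch of the IRR and US Index calculations described above.
# Cash flows and the EBIT/total-asset figures are hypothetical placeholders;
# only the 8% loan rate is taken from the paper's bank-financing scenario.

def npv(rate, cf0, cash_flows):
    return -cf0 + sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def irr(cf0, cash_flows, lo=-0.99, hi=10.0, tol=1e-9):
    # Bisection: find the discount rate where NPV crosses zero.
    # Assumes a conventional cash-flow pattern (one sign change).
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cf0, cash_flows) > 0:
            lo = mid          # NPV still positive -> IRR is higher
        else:
            hi = mid
    return (lo + hi) / 2

def us_index(ebit, total_assets, loan_interest_rate):
    business_generic_profitability = ebit / total_assets   # as a fraction
    return business_generic_profitability / loan_interest_rate

investment = 11_232_000_000
flows = [2.5e9, 3.0e9, 3.6e9, 4.1e9, 4.5e9]                # hypothetical
print(f"IRR: {irr(investment, flows):.2%}")
print(f"US Index: {us_index(ebit=4.0e9, total_assets=13.0e9, loan_interest_rate=0.08):.2f}")
# US Index > 1 suggests the company should finance with debt (go leverage).
```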
When a company wants to expand, it can use an initial public offering (IPO). An IPO aims to obtain additional funds by offering the company's shares to the public, and it is a form of equity financing. By going public, the company decides to sell some of its shares to the public and accepts being publicly valued. When a company first goes public, this is referred to as an Initial Public Offering (IPO) (Fahmi, 2011). Going public brings benefits to the company (PT. Bursa Efek Indonesia, 2015), such as access to funding in the stock market, added credibility for access to loans, greater company professionalism, an improved company image, liquidity and the possibility of profitable divestment for the founding shareholders, stronger employee loyalty, an increase in company value, and the ability to maintain the company's viability. According to the Indonesia Stock Exchange go-public information center, there are stages a company must pass to go public (PT. Bursa Efek Indonesia, 2016). The initial stage is the appointment of an underwriter and preparation of documents; at this stage the company forms an internal team and appoints underwriters and capital market supporting institutions and professionals to help prepare for going public. Next come the stages of applying for the listing of shares on the Indonesia Stock Exchange and submitting a registration statement to the Financial Services Authority, followed by the public offering of shares. The final stage is the listing and trading of the company's shares on the Indonesia Stock Exchange.

Business Solution

In selecting financing for the new product expansion project, the author draws on several assumptions from the company's internal data and a historical-data approach. Before choosing, each alternative must be assessed with a project and investment analysis using a financial feasibility study, and investment opportunities and investment costs must be identified. A financial analysis of the company's current resources is used to determine a strategic financial approach; the financial resources are based on the company's internal financial statements for 2018 and 2019. Four categories of ratios are used to analyze the company's financial position: liquidity, activity, profitability, and debt. The liquidity ratios in this analysis are the current ratio and the quick ratio. In 2018 the company had a current ratio of 1.4, which increased to 1.6 in 2019; a current ratio above one indicates healthy short-term finances. The quick ratio was 0.9 in both 2018 and 2019; being close to one, this is still considered fairly good. The activity ratios consist of inventory turnover, the average collection period (days), and the average payment period (days). Inventory turnover decreased slightly from 4.5 in 2018 to 4.2 in 2019, showing little change. The average collection period was 78.8 days in 2018 and 92.7 days in 2019; this lengthening indicates slower collection of receivables from customers, so the company needs to improve collections to strengthen cash flow. The average payment period was 153.4 days in 2018 and increased in 2019 to 164.9 days. This shows that the company's payments to suppliers are late, which can erode supplier confidence. The profitability ratios in this analysis are the gross profit margin and the operating profit margin. In 2018 the company had a gross profit margin of 23.1%, which declined sharply to 11% in 2019; this reduction in profit needs management attention. The operating profit margin was 4.2% in 2018 and 1.3% in 2019, a decrease of almost 3 percentage points, so operational efficiency needs to be improved. Two debt ratios are used: the debt-to-equity ratio and the debt-to-asset ratio. The debt-to-equity ratio was 55.4% in 2018 and decreased to 42.3% in 2019; the smaller the portion of debt relative to equity, the safer the company. The debt-to-asset ratio was 69.2% in 2018 and 63.7% in 2019, a slight decline that remains safe: the company still has assets greater than its debt.
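As a rough illustration of how these four ratio categories are computed, the sketch below encodes the standard formula for each ratio. All statement figures are hypothetical placeholders; the paper reports only the resulting ratio values.

```python
# Minimal sketch of the ratio analysis above, using standard formula definitions.
# All statement figures below are hypothetical placeholders, not PT. XYZ data;
# the paper reports only the resulting ratios (e.g., current ratio 1.4 -> 1.6).

def liquidity(current_assets, inventory, current_liabilities):
    return {
        "current_ratio": current_assets / current_liabilities,
        "quick_ratio": (current_assets - inventory) / current_liabilities,
    }

def activity(cogs, inventory, receivables, payables, annual_sales, annual_purchases):
    return {
        "inventory_turnover": cogs / inventory,
        "avg_collection_days": receivables / (annual_sales / 365),
        "avg_payment_days": payables / (annual_purchases / 365),
    }

def profitability(sales, gross_profit, operating_profit):
    return {
        "gross_margin": gross_profit / sales,
        "operating_margin": operating_profit / sales,
    }

def debt(total_debt, total_equity, total_assets):
    return {
        "debt_to_equity": total_debt / total_equity,
        "debt_to_assets": total_debt / total_assets,
    }

print(liquidity(current_assets=8.0e9, inventory=3.5e9, current_liabilities=5.0e9))
print(activity(cogs=15e9, inventory=3.5e9, receivables=4.0e9, payables=6.5e9,
               annual_sales=18e9, annual_purchases=14e9))
print(profitability(sales=18e9, gross_profit=2.0e9, operating_profit=0.25e9))
print(debt(total_debt=9.0e9, total_equity=5.2e9, total_assets=14.2e9))
```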
The process of making finished aluminium products requires several people to carry out production operations; these people will form the fabrication division. For the initial stage, an assumed total of 35 employees is required at a total salary of Rp. 198,000,000 per month. Furthermore, the investment projection for making the finished goods requires building a new production area and buying equipment to support day-to-day operations. The initial investment required is Rp. 11,231,000,000 (11.2 billion), consisting of a building worth Rp. 9,273,600,000, new equipment worth Rp. 1,829,000,000, and office equipment worth Rp. 128,600,000. Depreciation in this projection uses the straight-line method, calculated based on the Law of the Republic of Indonesia Number 36 of 2008: Article 11 paragraph 1 explains that depreciation costs follow the useful life, and Article 11 paragraph 6 sets the useful life of permanent buildings at 20 years; the useful life for the equipment is 8 years. The calculated depreciation expense for 2022–2026 is Rp. 695,861,250. Production is projected at 500 doors per month with a composition of 60% of the first model (CD-01) and 40% of the second model (CD-02), giving an assumed first-year output of 3,600 CD-01 doors and 2,400 CD-02 doors. Growth is estimated at 25% in the second year, 40% in the third year, and 25% and 10% in the fourth and fifth years. Assuming a profit margin of 30%, the first-year selling price is Rp. 2,420,000 per unit for the CD-01 model and Rp. 2,250,000 per unit for the CD-02 model, with an estimated average selling price increase of 5% in each following year. The projected sales revenue (in rupiah) is as follows:

Model    Year 1           Year 2           Year 3           Year 4           Year 5
CD-01    8,712,000,000    11,160,000,000   16,191,000,000   21,333,120,000   24,952,320,000
CD-02    5,400,000,000    6,900,000,000    9,996,000,000    13,192,560,000   15,411,240,000

Operating expenses consist of 8 activities.
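The volume, price, and depreciation assumptions above can be combined into a small projection script. The sketch below uses the paper's stated assumptions; note that its outputs will differ somewhat from the paper's revenue table and its Rp. 695,861,250 depreciation figure, which presumably reflect additional assumptions not spelled out in the text.

```python
# Minimal sketch of the production, price, and revenue projections described above,
# using the paper's stated first-year volumes, growth rates, prices, and 5% annual
# price increases, plus straight-line depreciation over the legal useful lives.
# Differences from the paper's own figures are expected (see lead-in note).

units = {"CD-01": 3600, "CD-02": 2400}              # first-year production (doors)
price = {"CD-01": 2_420_000, "CD-02": 2_250_000}    # first-year price (Rp/unit)
volume_growth = [0.25, 0.40, 0.25, 0.10]            # years 2-5
price_growth = 0.05                                 # annual selling-price increase

for model in units:
    qty, p, revenue = units[model], price[model], []
    for year in range(5):
        if year > 0:
            qty = qty * (1 + volume_growth[year - 1])
            p = p * (1 + price_growth)
        revenue.append(qty * p)
    print(model, [f"{r:,.0f}" for r in revenue])

def straight_line(cost, useful_life_years):
    return cost / useful_life_years

annual_depreciation = (
    straight_line(9_273_600_000, 20)                     # permanent building: 20 years
    + straight_line(1_829_000_000 + 128_600_000, 8)      # equipment: 8 years
)
print(f"annual depreciation: Rp {annual_depreciation:,.0f}")
```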
The analysis of financing alternatives uses four alternatives: internal financing, bank financing, leasing, and an IPO. This study uses the Weighted Average Cost of Capital as the discount rate; the WACC weights the composition of the cost of equity and the cost of debt. The first alternative, internal financing, uses the following variables:
1. The risk-free rate is based on the current yield on 5-year Indonesian bonds of 5.90% (PT. Penilai Harga Efek Indonesia, 2021).
2. Beta (β) uses the unlevered beta of 1.18 for the homebuilding industry from Damodaran, last updated in January 2021.
3. The equity risk premium for Indonesia applied in this research is 1.84%, according to Damodaran's Country Default Spreads and Market Risk Premium, last updated in January 2021 (Damodaran, Country Default Spreads and Risk Premiums, 2021).

Based on this information, the WACC for internal financing is:
- Cost of Equity (rs): 8.07%
- Equity to Capital Ratio (ws): 100%
- Cost of Debt (ri): 8.0%
- Debt to Capital Ratio (wi): 0%
- WACC = (rs × ws) + (ri × wi) = 8.07%

The five-year financial projection uses the company's free cash flow obtained from profit after tax. The second alternative, bank financing, uses the following variables:
- Income Tax (T): 25%
- Cost of Equity (rs): 8.07%
- Equity to Capital Ratio (ws): 0%
- Cost of Debt after tax (ri): 6.00%
- Debt to Capital Ratio (wi): 100%
- WACC = (rs × ws) + (ri × wi) = 6.00%

This five-year projection uses the company's free cash flow based on a bank loan with an assumed fixed interest rate. The third alternative, leasing, uses the interest rate of a leasing company, PT. Buana Finance Tbk., whose rates for investment financing range from 7.33% to 9.13%; the average of this range, 8.23% per year, is used (PT Buana Finance, 2021). Based on this information:
- WACC = (rs × ws) + (ri × wi) = 6.17%

This projection uses the company's free cash flow based on a five-year loan from the leasing company at an assumed fixed interest rate. The last alternative, the IPO, is projected using the cost of equity obtained from investors.

Implementation Plan

The implementation plan for the development of the new finished goods is proposed based on the analysis of the selected financing alternative. The company should take appropriate action in mid-2021 so that the implementation plan can be carried out properly; several time schedules are proposed for the implementation plan. The resource requirements are those of the selected financing: after the implementation plan is made, the resources needed for the investment to run smoothly must be prepared. Since bank financing has been selected, the company must prepare the documents required by the bank, consisting of business legality documents (deed of establishment, SIUP, TDP, company NPWP, company profile), company owner data (KTP, NPWP), company collateral data (SHM, IMB, PBB), and company financial data (company accounts, financial reports, inventory reports, accounts receivable reports, accounts payable reports).
Conclusions

Based on the preceding analysis, the following conclusions can be drawn:
1. The product development aims to open wider market opportunities for the company by focusing on household construction needs, so the company will earn additional profit from selling this product.
2. Four financing alternatives were proposed for the investment: internal financing, bank financing, leasing, and an IPO. After the financial projections for the four alternatives were calculated, internal financing was rejected because retained earnings were insufficient and the company's profits had decreased. IPO financing was rejected because the owner did not want to share ownership with other parties. Lease financing was rejected because its NPV, PI, IRR, payback period, and ROI were lower than those of the other options. Bank financing, the remaining option, has the highest NPV and ROI, which shows that the projected income exceeds the projected costs of the investment.
3. Based on the analysis, the company is recommended to choose bank financing for the initial investment, with a composition of 100% bank debt and an assumed interest rate of 8% per year. Bank financing achieves the highest NPV and ROI, with a Net Present Value (NPV) of 1,546,356,903, a Profitability Index (PI) of 1.14, an Internal Rate of Return (IRR) of 10.09%, a Payback Period (PBP) of 4 years and 1 month, a Return on Investment (ROI) of 28.24%, and a US Index of 4.71. A US Index greater than one indicates that the company should use leverage (debt).
4. The company will also find it easier to obtain bank approval for the loan because it has taken bank loans before, which makes the bank's evaluation easier.
Hemodynamic Response in Ascending Aorta Surgery Patients during Moderate Intensity Resistance Training

Background: In patients undergoing ascending aortic surgery (AAS), postsurgical physical exercise with a safe and effective exercise prescription is recommended. Resistance training is associated with blood pressure (BP) elevations that may increase the risk of new aortic dissection or rupture. However, the acute hemodynamic response to resistance training in this patient group is unknown.

Aim: The aim of this study was to investigate peak systolic BP (SBP) increases in AAS patients during moderate intensity resistance training.

Methods: SBP was measured continuously beat-to-beat with a noninvasive method during three sets of leg presses at moderate intensity. A 15-repetition maximum strength test was performed to estimate the maximal resistance a participant could manage 15 times consecutively (equivalent to approximately 60–65% of their maximum strength).

Results: The study had 48 participants in total, i.e., 24 cases and 24 controls. Both groups consisted of 10 females (42%) and 14 males (58%). The case group had a mean age of 60.0 (SD ± 11.9) years and a mean of 16.3 months since surgery (minimum 4.4 and maximum 39.6 months). 22 of the 24 cases received antihypertensive medication. The median baseline BP was 119/74 mmHg among cases and 120/73 mmHg among controls. During the first set of leg presses, the median peak SBP was 152 mmHg; in the second set, 154 mmHg; and in the third set, 165 mmHg. Corresponding values in controls were 170 mmHg, 181 mmHg, and 179 mmHg. The highest peak SBP registered in an AAS patient was 190 mmHg, and the highest in any healthy control was 287 mmHg.

Conclusion: The findings indicate that AAS patients in control of their BP have the endurance to perform 3 sets of resistance training at moderate intensity, as their SBP increases by a maximum of 39% from baseline compared to the 51% increase in the control group.

Introduction

Degenerative, genetic, and congenital conditions all predispose to aortic aneurysms, dilatations, dissection, or rupture, which result in elective or urgent surgery [1,2]. These conditions are associated with high mortality and are primarily treated with ascending aortic surgery (AAS), in which the aortic tissue is replaced with a graft [2,3]. With the increasing ageing of the population and improvements in surgical methods, an increasing number of patients undergo AAS [4–6]. Resting blood pressure is measured following ascending aortic surgery, with a goal of <120/70 mmHg achieved through medical treatment. The patients are monitored closely until 120/70 mmHg is reached [1,7].
The level of acute rise in blood pressure following ascending aortic surgery is not documented, and there is no documented limit for blood pressure increases during physical activity, e.g., a brisk walk, for this patient group. Despite a paucity of evidence, it is presumed that sudden acute elevations in blood pressure (as occur in heavy weightlifting) may increase the risk of a new aortic dissection or rupture [8]. Because of this assumption, and for safety reasons, a general recommendation for the upper limit of acute blood pressure increases in this patient group is lacking, and physical activity recommendations and restrictions are opinion- and experience-based rather than supported by clinical evidence or studies of safety [8]. A lifestyle survey revealed that the number of patients not engaged in any structured physical activity increased after aortic dissection due to fear of a new dissection [8]. Training-based rehabilitation of patients with coronary heart disease (CHD) reduces cardiovascular mortality and improves physical capacity and health-related quality of life [9,10]. An observational pilot study shows that some forms of cardiac rehabilitation are safe and helpful for Marfan syndrome patients who have undergone AAS or heart valve surgery [11]; resistance training was not offered in the training-based rehabilitation in that study. AAS patients are recommended to avoid isometric exercises and to be physically active with a safe and effective exercise prescription to prevent conditions associated with a physically inactive lifestyle, such as diabetes and arteriosclerosis [8,12,13]. Research indicates a lower BP after resistance training and a positive effect on age-related loss of balance, muscle strength, and muscle mass [14–17]. Guidelines from the European Society of Cardiology contain recommendations for cardiac fitness and resistance training in training-based rehabilitation [9,18,19]. The recommendations for CHF patients in NYHA class II–III are either low intensity (40–50% of maximum strength) or moderate intensity (60% of maximum strength) [9,18,19]. These guidelines were used to guide AAS patients in Denmark, as there were no specific recommendations for AAS patients other than to avoid isometric resistance training at the time this study was conducted [7,18,20]. A few studies have addressed the acute effect of endurance training on peak SBP in AAS patients. In a study of 26 AAS patients doing moderate intensity cycling, 75% of the patients had an SBP between 150 and 170 mmHg and the other 25% had an SBP <150 mmHg (50). In a study of 29 AAS patients, maximum SBP was measured during a VO2 peak test before and after exercise-based rehabilitation vs. no training [21]; mean maximum SBP was 207 ± 33 mmHg. A third study addressed the hemodynamic responses of AAS patients during cardiopulmonary exercise testing (n = 128) [22]; no serious adverse events were observed, and peak exercise systolic and diastolic blood pressures were 160/70 mmHg. In a systematic literature search, no studies were identified on AAS patients' acute SBP increase during resistance training. Therefore, the overall aim of this study was to investigate the peak systolic BP in AAS patients during moderate intensity resistance training (leg press) according to current European guidelines. Second, the study aimed to compare the peak systolic BP of AAS patients with that of a healthy gender- and age-matched control group to evaluate the differences between the groups.

Design.
A descriptive single-center intervention study with 24 AAS cases and 24 healthy controls.

Participants.

Participants who had undergone AAS were recruited from a cardiac rehabilitation unit in Copenhagen and fulfilled all of the inclusion criteria (adult AAS patients (18+ years), more than eight weeks postoperatively, approved for training by a cardiologist, able to perform a leg-press exercise, and able to understand instructions in Danish or English) and none of the exclusion criteria (medically treated type B dissection, resting SBP >150 mmHg, or left ventricular ejection fraction (LVEF) <15% assessed with ultrasound by a cardiologist). All the participants had taken part in a rehabilitation programme at the time the study was conducted. The participants in the healthy control group fulfilled all of the inclusion criteria (same sex and age as the matched case participant (±5 years), able to perform a leg-press exercise, and able to understand instructions in Danish or English) and none of the exclusion criteria (any heart disease or resting SBP >150 mmHg).

Hemodynamic Outcomes.

Continuous beat-to-beat measurements of the hemodynamics in real time were acquired with a Nexfin monitor (BMEYE, Amsterdam, Netherlands). The accuracy of the method has been investigated in various reports and is comparable with intra-arterial measurements, with a difference in systolic pressure measurements that is not of clinical relevance [23–27]. Participants had a finger clamp applied on the middle finger between the proximal and distal interphalangeal joints, which kept the artery at a constant volume by applying counter pressure [24].

Study Procedure.

Participants attended one test session at the cardiac rehabilitation unit in Copenhagen. Resting BP was measured after the participant had been lying down for five minutes in a quiet room. Thereafter, the participants did a 15-minute warmup on a bicycle ergometer, keeping a constant cadence of 60–80 rpm. After the warmup, they were positioned in a Technogym "leg-press horizontal/seated mechanical" leg-press machine with their feet placed in a parallel position.
A 15-repetition maximum (RM) strength test was performed to estimate the maximal resistance a participant could manage 15 times consecutively (equivalent to approximately 60–65% of their maximum strength) [19]. Once the correct resistance was determined, a finger clamp was applied to the middle finger of the participant's left hand to measure the BP continuously, indirectly, and noninvasively. The participants were instructed to place their left hand on their chest and their right hand on the right handle of the machine and to hold this position during the entire examination (Figure 1). Furthermore, the participants were instructed to refrain from speaking, to move nothing but their legs, and to avoid squeezing or moving their hands during the entire examination. Each participant performed three sets of 15 repetitions (each set lasting approximately 70 seconds), with 60-second rest periods in between sets to ensure adequate recovery time [19,28,29]. As a safety precaution, instructions in the correct breathing technique during resistance training were provided: participants were asked to exhale during the most strenuous phase and inhale during the less strenuous phase of each repetition. The participants were furthermore asked to avoid the Valsalva maneuver (forced exhalation against a closed glottis), which could lead to further systolic BP increases [23,29,30]. In addition, a pragmatic safety maximum was set at 200 mmHg, meaning that the entire examination stopped if the SBP reached 200 mmHg in the case group.

Statistical Analysis.

Data were extracted from the Nexfin monitor as beat-to-beat data (data points for each heartbeat) and continuous data (Figure 2). Beat-to-beat data were used for the analysis; continuous data were used to validate the beat-to-beat data, to avoid using artifacts/outliers in the analysis, and to avoid using data from calibration points. To select the correct data points from the large dataset collected from the 48 participants, a set of macros was coded to collect the data and combine them into a single file with mean, minimum, and maximum values for the baseline and for each of the peaks from the three sets. For each exercise set, the highest systolic and diastolic BP, HR, and workload were selected for further analysis. Analyses were conducted using IBM SPSS Statistics (version 24). Data are reported as medians, mean values ± standard deviation (SD), 95% confidence intervals and range, or frequency percentages. For comparison of continuous variables, the unpaired Student's t-test was applied; for nominal variables, a χ²-test was conducted. To investigate whether age, months since operation, or the number of antihypertensive prescriptions impacted the SBP increase, a linear regression analysis with Pearson correlation (r) was performed. Model assumptions were checked by visual inspection of residual plots. The threshold for statistical significance was set at p < 0.05.

Ethical Approval.

The Danish Data Protection Agency approved the study (j.nr.: 2012-58-0004). The Health Research Committee in the Capital Region deemed the study exempt from approval (H-17041675). The study was registered at https://clinicaltrials.gov/ before commencement of any study-related activities (NCT03424863) and was conducted in accordance with the World Medical Association Declaration of Helsinki. Subjects were informed orally and in writing about the study and afterwards signed informed consent.
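The peak-extraction step described under Statistical Analysis can be sketched as follows. The study's actual macros and file layout are not published, so the column names, set time windows, and toy data here are assumptions for illustration only.

```python
# Minimal sketch of the peak-extraction step described under Statistical Analysis:
# reduce beat-to-beat recordings to baseline statistics plus the peak SBP of each
# of the three leg-press sets. Column names and set time windows are assumptions;
# the study's actual macros and file layout are not published.
import pandas as pd

def summarize_session(beats: pd.DataFrame, set_windows):
    """beats: one row per heartbeat, columns 't' (s) and 'sbp' (mmHg).
    set_windows: [(start_s, end_s), ...] for the three exercise sets."""
    baseline = beats[beats["t"] < set_windows[0][0]]["sbp"]
    summary = {
        "baseline_mean": baseline.mean(),
        "baseline_min": baseline.min(),
        "baseline_max": baseline.max(),
    }
    for i, (start, end) in enumerate(set_windows, start=1):
        in_set = beats[(beats["t"] >= start) & (beats["t"] <= end)]
        summary[f"set{i}_peak_sbp"] = in_set["sbp"].max()
    return summary

# Toy data: 60 s rest, then three ~70 s sets separated by 60 s breaks.
beats = pd.DataFrame({
    "t": range(0, 450),
    "sbp": [120] * 60 + [150] * 70 + [130] * 60 + [155] * 70
           + [135] * 60 + [165] * 70 + [140] * 60,
})
print(summarize_session(beats, [(60, 130), (190, 260), (320, 390)]))
```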
Results

The study had 48 participants in total: 24 in the AAS group and 24 in the control group (flowchart in Figure 3). Both groups consisted of 10 females (42%) and 14 males (58%). The case group had a mean age of 60.0 (SD ± 11.9) years and the control group 60.2 (SD ± 11.8) years. The participants in the AAS group had a mean of 16.3 months since surgery (minimum 4.4 and maximum 39.6 months). 22 of the AAS patients received antihypertensive treatment; in the control group, 3 of the participants received the lowest possible dose of antihypertensive treatment. Resting SBP was significantly lower in patients than in controls (121 ± 12.9 versus 128 ± 11.8, P = 0.042), whereas resting HR and BMI were higher. All characteristics of the two groups are shown in Table 1. Beta-blockers, angiotensin II antagonists, ACE inhibitors, calcium antagonists, and diuretics are all antihypertensive drugs. In the case group, 31% (n = 7) were treated with only one antihypertensive drug, 31% (n = 7) with two drugs combined, and 31% (n = 7) with three or more drugs combined. The control group had 4 participants (16%) with comorbidities (such as Alzheimer's disease and arthrosis), whereas the AAS group had only four participants (16%) without comorbidities. The comorbidities in the AAS group were diabetes, COPD, asthma, and arrhythmia.

Hemodynamic Responses to Leg Press in AAS.

The hemodynamics before and during the three sets of leg presses are shown in Table 2. In the AAS group, the SBP increased by 34 mmHg (28%) from baseline to the highest peak during the first set of leg presses. The maximum increase in SBP was 47 mmHg (39%), measured at the peak of the third set. In the third set, 9 of the AAS patients (38%) had a peak SBP >170 mmHg; only 2 of them had a peak SBP >180 mmHg, and none of the AAS patients exceeded 200 mmHg during the intervention. The highest SBP measured in the case group was 190 mmHg, reached by only one patient. In all three sets of leg presses, HR and SBP increased in the last third of each set and decreased immediately in the 60-second break between sets. HR and BP decreased but did not reach baseline values, so every new set started with a higher SBP than the previous set, and the peak SBP increased over the 3 sets (pressure load summation).

Comparison to Healthy Controls.

The median peak SBP in the control group increased by 50 mmHg (42%) from baseline to the peak of the first set of leg presses, compared to 34 mmHg (28%) in the AAS group, and the maximum increase in SBP at the peak of the third set was 58 mmHg (51%), compared to 47 mmHg (39%) in the AAS patients (Figure 4). There was a statistically significant difference in peak SBP between the two groups in all three sets of leg presses. Five participants (20%) from the control group had a peak SBP >200 mmHg, whereas no one in the AAS group exceeded 200 mmHg. The mean peak SBP in the case group was significantly different from 200 mmHg (95% CI: −53.5; 33.3, P < 0.001) (Figures 4 and 5). The mean resting heart rate was significantly higher in the AAS group than in the control group. The AAS group had, as expected, a significantly lower resting SBP compared to the control group, probably as a result of medication use. No difference was seen in the resting DBP.
Discussion

The present study investigated peak systolic BP in AAS patients during moderate intensity resistance training (60% of maximum strength). The median peak SBP in AAS patients showed a maximum increase of 47 mmHg (39%) from baseline in the third set of leg presses. The highest SBP measured in the AAS group was 190 mmHg, in one patient. Compared to the healthy control group, the median peak SBP did not reach the same high level in the AAS patients. Our findings showed that the AAS patients in this study, who were in control of their blood pressure, did not exceed 200 mmHg SBP during moderate intensity resistance training. A new and conservative guideline from the Danish Society of Cardiology on VO2 peak tests [31] was published after this study was conducted; the recommendation for AAS patients is to avoid SBP >160 mmHg during VO2 peak tests. Since the study was conducted before these recommendations were published, blood pressures above 160 mmHg were accepted. Using the more conservative approach, with an upper limit of 160 mmHg, some of the AAS patients in this study would exceed the limit when performing resistance training at moderate intensity (Figure 3). In the present study, 13 participants in the AAS group (54%) exceeded 160 mmHg during the three sets of leg presses at moderate intensity. The mean peak SBP in the AAS group was significantly different from a maximum of 200 mmHg (95% CI: −53.5; 33.3, p < 0.001) but not from 160 mmHg (95% CI: −13.5; 6.7, p = 0.490). If AAS patients are to follow the recommendation to avoid SBP >160 mmHg, their blood pressure should be monitored during resistance training. Longer breaks are needed between sets because of the pressure load summation, and a lower resistance than 15 RM could perhaps be recommended as well. Corone et al. found that 75% of their participants had an SBP between 150 and 170 mmHg and the rest an SBP <150 mmHg during cycling at moderate intensity [32]. In the present study, 54% exceeded 160 mmHg, which indicates that the maximum SBP during resistance training at moderate intensity in this patient group is comparable with the SBP during cycling at moderate intensity. Fuglsang et al. measured maximum SBP during a CPET test in 29 AAS patients and documented a mean of 207 ± 33 mmHg [21]. This is much higher than the highest mean peak in the third set of 153 ± 23 mmHg in the present study, where only one participant reached 190 mmHg.
This illustrates (based on a relatively small sample) that it is possible for this patient group to stay below 200 mmHg when performing three sets of resistance training at moderate intensity with 15 RM. AAS patients are instructed to lift nothing heavier than they can lift while breathing normally, in order to avoid the Valsalva maneuver and unfavorably high SBP increases [7,18,20]. Several studies have shown that participants who performed the Valsalva maneuver during lifting increased the risk of peak SBP >200 mmHg [23,29,30]. That is why all participants were instructed to breathe the same way during the leg presses: to exhale during the most strenuous phase of the repetition and inhale during the less strenuous phase. The continuous monitoring of the BP with the Nexfin monitor was very useful for detecting unfavorably high increases in SBP during resistance training in this patient group. The monitor showed the BP at a given time, and during the intervention an instructor noted the highest measurements in all sets of leg presses; the noted values matched the values in the beat-to-beat data. Chaddha et al. [8] highlighted fear as the limiting factor for physical activity after aortic dissection. We sensed a fear of lifting heavy loads in some of our participants in the case group, maybe because they feared that unfavorably high increases in SBP could cause them harm. This could be a reason to use the Nexfin monitor for pedagogical interventions for AAS patients doing resistance training, as this group of patients may fear that SBP increases rapidly during physical activity [8]. Although invasive measurement of BP is superior to noninvasive methodology in terms of sensitivity, the noninvasive approach is acceptable for clinical rehabilitation purposes, as demonstrated in several studies [25–27]. The continuous monitoring of BP using a noninvasive finger arterial clamp method may be useful for detecting an unfavorably high increase in BP during resistance training. Using the Nexfin monitor in training sessions following AAS may have a positive influence on the fear of exercise and on the ability and motivation to perform resistance training; this hypothesis needs to be further investigated. There are certain limitations to this study. Though the Nexfin monitor was useful, it had some limitations: Schattenkerk et al. and Imholz et al. [24,26,29] published data indicating that the Nexfin monitor tends to overestimate systolic and underestimate diastolic pressure compared to traditional BP monitors. With this knowledge, and with no SBP measurements >200 mmHg in the AAS group, none of our participants were exposed to any danger concerning their SBP increases. A limitation of the arterial clamp was its sensitivity: in the current study, a couple of participants had arthritis or cold fingers, and the finger clamps had to be changed several times, moved to another finger or to the other hand, before a valid SBP could be measured. A limitation of our protocol was the continuous repetition of sets with only 60-second breaks in between [29]; this might have contributed to higher peak SBP in the leg presses, as the participants' SBP did not decrease to baseline during the 60-second break between sets.
Conclusion

In conclusion, our study indicates that AAS patients in control of their BP can endure resistance training at moderate intensity (60% of maximum strength). The resistance training examined in this study does not lead to SBP in excess of 200 mmHg if a maximum of three sets is performed with a break of 60 seconds between every set. However, to avoid SBP >160 mmHg, longer breaks between the sets are needed and the intensity of resistance training should be lowered to 40–50% of maximum strength. The results are based on a small sample and are limited to AAS patients in control of their SBP, and further investigations regarding safety and intensity levels are needed.

Figure 1: Positioning and movement in the leg-press device. (a) Legs bent and no resistance. (b) Legs extended and resistance lifted. Beat-to-beat SBP was measured continuously on the left hand, which rested on the participant's chest during the whole exercise.

Figure 2: Example of a continuous blood pressure measurement during three sets of leg press. Y-axis: systolic BP (mmHg); X-axis: seconds. The figure shows an example of a continuous dataset with three sets of leg press including breaks. The three red arrows indicate the three peak systolic BPs; these are the data intervals used for further analysis.

Figure 3: Flow of participants through the study. Three possible participants declined participation, one dropped out before the intervention due to stress, and one did not fulfil the inclusion and exclusion criteria.

Figure 4: Peak SBP measured in the third set of the leg-press exercise for all AAS participants. Y-axis: peak SBP (mmHg) of all AAS participants in the third set; X-axis: participants. The red line indicates the pragmatic safety limit at 200 mmHg, the light blue line 180 mmHg, and the dark blue line 160 mmHg.

Figure 5: The systolic blood pressure range and the percentage increase from the baseline to the 1st set, from the baseline to the 2nd set, and from the baseline to the 3rd set.

Table 1: Baseline characteristics of the participants. n: numbers, f: female, m: male, SD: standard deviation, BMI: body mass index, HR: heart rate, SBP: systolic blood pressure, DBP: diastolic blood pressure, *: lowest possible dose of medication.

Table 2: Hemodynamic outcomes and workload in the two groups during resistance training at moderate intensity. SBP: systolic blood pressure, SD: standard deviation, DBP: diastolic blood pressure, HR: heart rate, bpm: beats per minute, 95% CI: 95% confidence intervals. All hemodynamic outcomes were measured using the Nexfin device.
Photonics-Based Microwave Image-Reject Mixer

Recent developments in photonics-based microwave image-reject mixers (IRMs) are reviewed, with an emphasis on the pre-filtering method, which applies an optical or electrical filter to remove the undesired image, and the phase-cancellation method, which is realized by introducing an additional phase to the converted image and cancelling it through coherent combination, while the wanted component is combined without phase shift. Applications of photonics-based microwave IRMs in electronic warfare, radar systems, and satellite payloads are described. The inherent challenges of implementing photonics-based microwave IRMs to meet the specific requirements of radio frequency (RF) systems are discussed, as are developmental trends of the photonics-based microwave IRM.

Introduction

A frequency mixer is an essential module in modern microwave systems, such as radar, electronic warfare, wireless communication devices, and satellite payloads [1-4]. Frequency mixers facilitate frequency upconversion to generate a radio frequency (RF) signal at f_RF = |f_IF + f_LO|, where f_IF and f_LO are the frequencies of the intermediate frequency (IF) signal and the local oscillator (LO) signal, respectively. Frequency mixers also facilitate frequency downconversion to generate an IF signal, described as f_IF = |f_RF − f_LO|. In recent years, frequency mixers implemented with microwave photonic technologies have attracted great interest due to advantages such as large bandwidth, light weight, high isolation, and immunity to electromagnetic interference (EMI) [5-7]. In addition, the photonics-based microwave frequency mixer can be combined with radio-over-fiber (RoF) technology to realize remote fiber-optic antennas, avoiding any additional electrical-to-optical or optical-to-electrical conversion [8,9]. However, the photonic microwave mixer is usually implemented in a heterodyne structure to obtain a nonzero IF signal, which can easily be interfered with by image signals. As shown in Figure 1, the image signal at f_IM, with a frequency above or below the LO frequency by an amount equal to the IF, is converted to the same IF band, together with the desired IF signal downconverted from the RF signal at f_RF. The downconverted image therefore cannot be removed by filtering, due to spectrum aliasing. An image-reject mixer (IRM) is a downconverter that strongly suppresses the product (f_IMC = |f_IM − f_LO|) downconverted from the undesired image signal at f_IM while maintaining or enhancing the desired component. A typical electrical IRM structure is shown and analyzed in [10]. Due to the electronic bottleneck, the working frequency and bandwidth of electrical IRMs are limited (e.g., IF bandwidth of no more than 160 MHz and LO frequency no higher than 14.0-18.0 GHz) [11]. Future RF systems will use RF signals with large bandwidth and will operate in complicated electromagnetic environments. Image distortions will therefore be serious, which creates an urgent demand for high-performance photonics-based microwave IRMs.

Photonic IRMs Based on Pre-Filtering

A typical way to achieve image rejection is to directly remove the image from the input RF signal with a filter, either in the electrical [12] or the optical domain [13], as shown in Figure 2. As can be seen from Figure 2c, the performance of the image rejection depends on the response of the filter.
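To make the frequency relations above concrete, here is a minimal Python sketch; the frequency values are illustrative, not figures from the cited works.

f_LO = 20.0                      # LO frequency (GHz), illustrative
f_RF = 21.0                      # wanted RF signal (GHz)
f_IF = abs(f_RF - f_LO)          # desired downconverted IF: 1.0 GHz
f_IM = f_LO - f_IF               # image on the other side of the LO: 19.0 GHz

# Both the RF signal and the image downconvert to |f - f_LO| = f_IF,
# so once mixed, the image product cannot be filtered out at the IF.
assert abs(f_IM - f_LO) == f_IF
print(f_IF, f_IM)                # 1.0 19.0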
For a scenario using an electrical bandpass filter (EBPF) to select the required RF signal, as shown in Figure 2a, electrical filters with a high center frequency are required to realize high-frequency operation of IRMs. In addition, the slope of the filter must be sharp enough to suppress the image without affecting the desired signal, especially when the IF is very low (i.e., when the frequency difference between f_RF and f_IM is small). For example, the EBPF used in [12] has a passband of 8-9 GHz; thus, the instantaneous working bandwidth is limited to 1 GHz. For a required IF working frequency (10 GHz in [12]), the RF/LO working frequency will also be limited by the EBPF (20 GHz in [12]). When using an optical bandpass filter (OBPF) to select the required modulated RF signal in the optical domain, as shown in Figure 2b, optical filters with narrow bandwidth and sharp edge roll-offs are desirable. However, both electrical filters with a high center frequency and optical filters with narrow bandwidth and sharp edge roll-offs are difficult to realize, limiting the working bandwidth and system performance. For example, the optical filter described by Strutz and Williams [13] is a Fabry-Perot filter with a 3-dB bandwidth of 0.6 GHz, which is too wide for many applications. In addition, the filter bandwidths and center frequencies are usually fixed, and wide tunability is hard to realize, limiting the flexibility of the system.

To solve these problems, another pre-filtering method was realized, based on multiple-stage frequency conversion [14-17]. Figure 3 shows a schematic diagram of typical IRMs based on multiple-stage frequency conversion, and Figure 4 illustrates the principle. The original RF signal f_Received (containing the image f_IM), shown in Figure 4a, is first downconverted using LO1 at a photonics-based microwave mixer. The converted result is shown in Figure 4b. A bandpass filter is then used to select the desired RF signal and to remove the undesired image, as shown in Figure 4b. The selected RF signal enters the second photonics-based microwave mixer and is downconverted using LO2 to the desired IF band. The final downconverted result is shown in Figure 4c. As compared with the IRM based on direct pre-filtering, shown in Figure 2c, the frequency requirement on the EBPF is lowered by increasing the number of frequency-conversion stages. Thus, when filtering is performed in the electrical domain after optical-to-electrical conversion, good image rejection can be realized. It should be noted that in the first frequency-conversion step, the LO1 frequency can be carefully selected to downconvert the received RF signal into a band where an EBPF with narrow bandwidth and sharp edge roll-offs is easy to achieve [14,15]. For example, an image-rejection ratio of larger than 150 dB was achieved by Strutz and Williams [14] using two cascaded EBPFs with large out-of-band suppression, and in another paper [15] the image-rejection ratio reached 60 dB when the EBPF out-of-band suppression was 60 dB. Filtering after the first-stage frequency conversion can also be performed in the optical domain [16,17]. When using optical filters, achieving adequate edge sharpness and frequency/bandwidth tuning is still challenging. In addition, because multiple-stage frequency conversion needs multiple LO sources, the system always has a complex structure, with large volume and high cost.
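A rough Python sketch of such a two-stage frequency plan is given below; the frequencies are illustrative assumptions rather than values from [14-17].

def two_stage_downconvert(f_received, f_lo1, f_lo2):
    # First stage: mix with LO1 so the intermediate band lands where a
    # sharp, fixed EBPF is practical; the image is filtered out there.
    f_if1 = abs(f_received - f_lo1)
    # Second stage: mix with LO2 down to the final IF band.
    f_if2 = abs(f_if1 - f_lo2)
    return f_if1, f_if2

f_rf, f_lo1, f_lo2 = 30.0, 25.0, 4.0               # GHz, illustrative
print(two_stage_downconvert(f_rf, f_lo1, f_lo2))   # (5.0, 1.0)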
For photonic IRMs based on pre-filtering, one trend has been to employ microwave photonic filters [44,45], which are designed to undertake tasks equivalent to those of microwave filters while introducing the advantages brought by photonics, such as low loss, high bandwidth, immunity to EMI, tunability, and reconfigurability.

Photonic IRMs Based on Phase-Cancellation Techniques

The phase-cancellation technique is another way to realize photonics-based microwave IRMs [18-30], also known as the Hartley architecture. Figure 5 shows the principle. A pair of quadrature LO (or RF) signals is introduced to generate two quadrature IF outputs, which are then combined by a low-frequency 90-degree hybrid. The hybrid introduces an additional 90° phase difference, which makes the signals downconverted from the image out of phase and those downconverted from the wanted RF signals in phase. In this way, the wanted IF signals are enhanced, while the downconverted image in the same band is cancelled. Since this kind of photonics-based microwave IRM uses the phase differences of the signals to realize image rejection, any requirement for filtering or multi-stage frequency conversion is avoided. In addition, when the RF signal frequency changes, the IRM based on phase cancellation can still suppress the corresponding image, which enables broadband applications. Three typical IRM schemes based on phase cancellation have been proposed, using a 90° electrical hybrid, microwave photonic phase shifters, and a 90° optical hybrid, respectively.

Photonic IRMs Based on a 90° Electrical Hybrid

Methods in the first category apply an electrical 90° hybrid to generate a pair of quadrature LOs [18] or a pair of quadrature RF signals (containing the images) [19,20]. The RF and LO signals are then mixed in the optical domain. A pair of quadrature IF outputs is generated and combined by another electrical 90° hybrid to produce a downconverted IF signal, with the downconverted image rejected. In Lu et al.'s work [18], in-phase RF signals were modulated on two lasers by direct modulation and then injected into two separate electro-optic modulators driven by the pair of quadrature LOs from an electrical 90° hybrid. A separate study [19] describes two quadrature RF signals generated by an electrical 90° hybrid and two out-of-phase LOs produced by an out-of-phase divider; these formed two RF-LO pairs, which were mixed at two directly modulated lasers. The optical signals output from the two lasers were detected by two photodetectors (PDs), generating a pair of quadrature IFs. The two IFs were then combined by another electrical quadrature hybrid to achieve image-reject downconversion. In both approaches, the two RF-LO pairs were processed in two separate branches, and differences between the branches affected the system performance and stability. This problem can be avoided by using integrated modulators to perform modulation by the LO and RF signals simultaneously. For example, a single integrated dual-parallel Mach-Zehnder modulator (DPMZM) has been used [20], which greatly improved the stability of the system. The key problem associated with approaches in this category [18-20] is that they rely on an electrical 90° hybrid to generate quadrature RF signals or LOs.
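The phase-cancellation principle of Figure 5 can be illustrated with a small numerical sketch in Python. This is an idealized model with arbitrary frequencies, not a model of any specific system in [18-30]: an RF tone and its image are mixed with an ideal quadrature LO pair, low-pass filtered, and combined with a quarter-period delay standing in for the IF 90° hybrid, after which only the wanted tone survives.

import numpy as np

fs = 1000.0                           # sample rate (arbitrary units)
t = np.arange(0, 1.0, 1 / fs)
f_lo, f_if = 100.0, 10.0
f_rf, f_im = f_lo + f_if, f_lo - f_if

x = np.cos(2*np.pi*f_rf*t) + np.cos(2*np.pi*f_im*t)  # wanted RF tone plus image

# Ideal quadrature mixing (the 90-degree LO pair of Figure 5).
i_path = x * np.cos(2*np.pi*f_lo*t)
q_path = x * np.sin(2*np.pi*f_lo*t)

def lowpass(sig, cutoff):
    # Crude low-pass filter realized by zeroing FFT bins above the cutoff.
    spec = np.fft.rfft(sig)
    spec[np.fft.rfftfreq(len(sig), 1/fs) > cutoff] = 0
    return np.fft.irfft(spec, len(sig))

i_if = lowpass(i_path, 3 * f_if)
q_if = lowpass(q_path, 3 * f_if)

# IF 90-degree hybrid, modeled as a quarter-period delay of Q before summing:
# the image contributions cancel while the wanted IF contributions add.
# (Swapping the delay to the I path would select the image band instead.)
shift = int(round(fs / f_if / 4))
combined = i_if[shift:] + q_if[:-shift]

spec = np.abs(np.fft.rfft(combined))
freqs = np.fft.rfftfreq(len(combined), 1/fs)
print(freqs[np.argmax(spec)])         # ~10.0: only the wanted IF tone remains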
In practice, electrical 90° hybrids cannot maintain a precise 90° phase difference across a wide frequency range at high frequencies, so the bandwidth of such IRMs is usually very limited, or the image-suppression ratio is small (Figure 6).

IRMs Based on Microwave Photonic Phase Shifters

To avoid using a high-frequency electrical 90° hybrid, photonic IRMs based on microwave photonic phase shifters were proposed [21-24], since microwave photonic phase shifters can realize a 90-degree phase difference across a wide frequency range at high frequencies. The key component is an advanced modulator with two sub-modulators, which can be implemented by a dual-polarization dual-drive Mach-Zehnder modulator (DPol-MZM) [21], a DPMZM [22,23], or a polarization-division-multiplexing Mach-Zehnder modulator [24]. Each sub-modulator (i.e., a sub-dual-drive Mach-Zehnder modulator (sub-DMZM) [21] or a sub-Mach-Zehnder modulator (sub-MZM) [22-24]) performs single-ended mixing. Together with optical filters, a photonic microwave phase shift is achieved, controlled by either the bias voltages of the sub-modulators [21-23] or the polarization state at the output of the modulator [24]. Thus, a precise 90-degree phase shift over a wide frequency range can be introduced between the two parallel single-ended mixers (i.e., the two sub-modulators). After photodetection, the quadrature IFs are combined by a low-frequency electrical quadrature hybrid to realize image rejection. An image-rejection ratio as high as 60 dB is realized when the RF/LO frequency is tuned from 10 to 40 GHz [21]. However, system stability is affected by bias drift or polarization fluctuation, because the phase shift is controlled by the bias voltages [21-23] or the polarization state [24]. Additionally, although optical filters can remove most undesirable sidebands, leading to large suppression of the mixing spurs [21], their limited edge roll-offs can restrict IRM performance in the low-frequency regime.

Photonic IRMs Based on a 90° Optical Hybrid

The third class of photonic IRMs based on phase cancellation applies a 90° optical hybrid to introduce the quadrature phase [25-29]. Benefiting from a simple structure and the small phase imbalance of commercially available optical hybrids over a wide frequency range, a high image-rejection ratio can be achieved over a large bandwidth. Figure 7 shows a schematic diagram of a typical IRM based on a 90° optical hybrid [25]. The optical carrier is split into two branches. One branch is modulated by the RF signal and the other by the LO signal, both with carrier-suppressed single-sideband (CS-SSB) modulation. The CS-SSB modulation can be realized either by using optical filters placed in each branch [25-27] or by I/Q modulators operated in CS-SSB mode [28,29]. Signals from the two branches are sent to the signal port and the LO port of the 90° optical hybrid, which typically has two in-phase and two quadrature output ports. When one of the in-phase outputs and one of the quadrature outputs are sent to two PDs, an I/Q frequency mixer is obtained. By using a low-frequency electrical quadrature hybrid to combine the two electrical outputs of the I/Q mixer, a photonic IRM is achieved. It should be noted that if other outputs of the 90° optical hybrid are directed to PDs, single-ended and balanced mixers can also be realized.
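In all of these phase-cancellation schemes, the achievable image rejection is set by the residual amplitude and phase imbalance between the I and Q paths. The following Python sketch evaluates the standard textbook estimate for a Hartley mixer; it is a general relation, not a formula taken from the cited papers.

import math

def image_rejection_db(amp_ratio, phase_err_deg):
    # Image-rejection ratio of a Hartley mixer degraded by a gain imbalance
    # (amp_ratio, ideally 1.0) and a phase error (ideally 0 degrees)
    # between the I and Q paths.
    phi = math.radians(phase_err_deg)
    num = 1 + 2 * amp_ratio * math.cos(phi) + amp_ratio**2
    den = 1 - 2 * amp_ratio * math.cos(phi) + amp_ratio**2
    return 10 * math.log10(num / den)

print(round(image_rejection_db(1.0, 1.0), 1))    # ~41.2 dB for a 1-degree error
print(round(image_rejection_db(1.02, 0.0), 1))   # ~40.1 dB for a 2% gain error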
In these optical-hybrid schemes, parallel modulation, parallel filtering, and/or parallel amplification are required to obtain the independent RF and LO sidebands, so the system performance is vulnerable to any path-length difference between the two branches [26-29]. Environmental vibration, which causes path-length variations, is therefore a challenge for the practical application of such IRMs. To overcome this problem, a reconfigurable microwave photonic mixer that can perform several such functions, including image rejection, was achieved in [26] by using a DPol-MZM and an optical 90° hybrid. The parallel signal modulations are implemented in an integrated modulator whose outputs are polarization-multiplexed. Thus, the RF and LO sidebands can both be filtered and amplified by the same devices, guaranteeing link consistency for both signals. The lengths of the separate paths are minimized, and the setup can be easily packaged to avoid environmental vibration.

With the three aforementioned kinds of photonic IRMs based on phase-cancellation techniques, images can be rejected over a wide frequency range. On the other hand, mixing spurs are generated by the nonlinearity of the mixer at frequencies |k·f_RF + l·f_LO|, where k and l are integers. Since the mixing spurs might fall in the same band as the wanted signal, especially in a wideband operation scenario, they should be suppressed together with the image. To realize mixing-spur suppression, one can rely on advanced modulation formats. For example, suppression of the LO/RF leakages can be realized by introducing a carrier-suppressed double-sideband (CS-DSB) modulation format for both the RF and LO signals [20]. In addition, a CS-SSB modulation format can be applied to eliminate more high-frequency spurs [21-29]. However, the implementation of advanced modulation formats always requires a complex configuration, and typically needs optical filters or high-frequency electrical hybrids, which can limit the working frequency range. In addition, the previously reported photonics-based microwave IRMs were tested and analyzed by injecting an RF signal and an image separately. In practical applications, RF signals and images are received simultaneously and will generate other unwanted mixing spurs. For example, beating between the RF and image signals generates a mixing spur at 2f_IF, the second harmonic of the downconverted IF signal, which is very close to the wanted IF signal at f_IF. This beating is thus difficult to remove through filtering. For wideband applications, the mixing spurs at 2f_IF can overlap with the desired IF signal at f_IF. To deal with this problem, in one of our recent works a photonic IRM based on a 90° optical hybrid and two balanced photodetectors (BPDs) was proposed and demonstrated [30], which can suppress the mixing spurs of the RF and image signals. The RF signal (containing the image) and the electrical LO signal are both converted into the optical domain and then injected into the signal and LO ports of a 90° optical hybrid, respectively. The 90° optical hybrid introduces a 180° phase difference into the optical LO between the two in-phase (and between the two quadrature) optical outputs. After optical-to-electrical conversion, the IF electrical outputs at f_IF for the two in-phase optical outputs (or the two quadrature optical outputs) are out of phase, while the other mixing spurs are in phase. Thus, by injecting the two in-phase and the two quadrature optical outputs into two BPDs, respectively, the mixing spurs are suppressed at the output of each BPD.
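To see why the RF-image beat is problematic, the following Python sketch (with illustrative frequencies, not values from [30]) enumerates low-order mixing products |k·f_RF + l·f_LO| for both received tones and flags those landing at or below 2·f_IF.

f_rf, f_lo = 21.0, 20.0             # GHz, illustrative
f_if = abs(f_rf - f_lo)             # 1.0 GHz
f_im = 2 * f_lo - f_rf              # image received together with the RF: 19.0 GHz

spurs = set()
for k in range(-3, 4):
    for l in range(-3, 4):
        for f in (f_rf, f_im):
            spurs.add(round(abs(k * f + l * f_lo), 3))
spurs.add(round(abs(f_rf - f_im), 3))   # RF-image beat, landing at 2*f_IF

print(sorted(s for s in spurs if 0 < s <= 2 * f_if))   # [1.0, 2.0]
# 1.0 GHz is the wanted IF; the 2.0 GHz RF-image beat sits right next to it
# and cannot be filtered out in a wideband (multi-octave) receiver.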
In addition, the two IF outputs from the two BPDs are in quadrature, because the 90° optical hybrid also introduces a quadrature phase into the optical LO between the in-phase and the quadrature optical outputs. The two IF outputs are then combined by a low-frequency electrical hybrid, which introduces an additional 90° phase difference; as a result, the unwanted signals in the I and Q paths downconverted from the image are out of phase, while those downconverted from the wanted RF signals are in phase. In this way, the wanted IF signals are enhanced, while the downconverted image is eliminated. Thus, a photonics-based microwave IRM with largely suppressed mixing spurs, together with good image rejection, can be achieved. Multi-octave frequency downconversion can also be achieved, since the mixing spurs, including those at 2f_IF, are almost fully suppressed. For RF signals with a wide instantaneous bandwidth (e.g., 2 GHz [30]), the proposed photonics-based microwave IRM can deliver a clean downconverted IF signal with all mixing spurs suppressed, as shown in Figure 8a. For comparison, the corresponding mixing results using a different scheme [25] are shown in Figure 8b; the photonics-based microwave IRM of [30] showed superior performance compared with that scheme.

Photonics-Based Microwave IRM Applications

The photonics-based microwave IRM can find potential applications in photonics-based RF channelizers, electronic warfare scanning receivers, and multi-band integrated RF front-ends for satellite and radar systems. RF channelization is a technique that splits a broadband signal into several frequency channels whose bandwidth can be processed by state-of-the-art electronics. The RF channelizer is usually applied in broadband RF receivers working in a high-frequency regime, as required by modern RF systems such as high-resolution radar, electronic warfare, and multiband satellites [31]. By using photonics-based microwave IRMs, it is possible to downconvert RF components at multiple frequencies over a wide bandwidth to the same IF band simultaneously, with strong suppression of the in-channel interference. Previously, a typical coherent RF channelizer with large instantaneous bandwidth was proposed and demonstrated based on photonics-based microwave IRMs [31,32]. A pair of optical frequency combs (OFCs) provides multiple LOs to downconvert different parts of the RF signal to the same IF band, and the photonics-based microwave IRMs are employed to achieve large in-band interference suppression in each channel. In a proof-of-concept experiment [31], a Ku-band RF signal with a bandwidth of 5 GHz (13-18 GHz) was channelized into five channels of 1-GHz bandwidth simultaneously. The measured in-band interference suppression ratio was greater than 25 dB within the 1-GHz instantaneous processing bandwidth. The photonics-based microwave IRM can also play an important role in electronic surveillance, since an electronic warfare receiver needs to receive signals over a large instantaneous frequency range. Previously, a photonics-assisted electronic warfare scanning receiver incorporating a photonics-based microwave IRM was reported [33,34]. Figure 9 shows a schematic diagram. A tunable laser provides a master tone, which is then amplified and split into two arms.
On the upper arm, the tunable laser acts as the optical carrier and is modulated by the RF input signal with CS-DSB modulation. On the lower arm, a coherent OFC is generated and serves as the LO. By directing the OFC into a distributed-feedback slave laser, the laser is injection-locked, which leads to the selection and amplification of the comb line at the required LO frequency. Down-conversion and precise anti-alias filtering are then carried out through a photonics-based microwave mixer incorporating a 90° optical hybrid. Scanning is realized by wavelength tuning of the master laser via its driving current.

Figure 9. Scheme of a photonics-assisted electronic warfare scanning receiver using a photonics-based microwave mixer incorporating a 90° optical hybrid [34].

In addition, broadband, flexible, reconfigurable RF front-ends can be built based on photonics-based microwave IRMs [35]. This is of great importance for future RF systems, such as multi-functional radars, broadband electronic warfare, and intelligent satellites, which must receive and emit multi-band RF signals [36-38]. Previously, we proposed and demonstrated a photonics-based RF front-end using photonics-based microwave IRMs [35], which can realize photonic multi-frequency LO generation, RF channelization, and multi-band up-conversion. Figure 10 shows a schematic diagram of the microwave photonic RF front-end. Dual OFCs with different free spectral ranges (FSRs) are generated and serve as photonic multi-frequency LOs. For signal receiving, the signal OFC is used as an optical carrier, which is modulated by the received RF signals with a CS-SSB modulation format. The modulated OFC signal and the LO OFC are directed to the signal port and the LO port of a 90° optical hybrid, respectively. With a programmable multi-channel filter to select the desired signal, and with splitting of the LO pairs into multiple channels, simultaneous image-reject down-conversion and channelization are implemented with the help of PDs and low-frequency electrical 90° hybrids. For signal transmitting, a signal OFC is injected into a Mach-Zehnder modulator (MZM) to modulate the electrical baseband signals. The modulated signal is then combined with the LO OFC and injected into a programmable multi-channel filter, where the desired optical components are selected. With a PD performing optical heterodyning, frequency up-conversion to the required RF band is realized. A conceptual microwave photonic RF front-end covering the S, X, K, Ku, and Ka bands was successfully implemented, demonstrating simultaneous channelization with downconversion for signal receiving, and frequency up-conversion for signal emitting. Furthermore, based on this scheme, a multi-channel microwave photonic satellite repeater has been demonstrated. Another application of the photonics-based microwave IRM is in wideband or multi-band radar systems [39]. Generally, the key challenge in realizing multi-band radar systems is the bandwidth limitation of the electronic components. Microwave photonics provides a promising solution due to the advantages brought by photonics, including higher carrier frequencies and larger bandwidth [40-42]. Previously, a dual-band linear frequency-modulated continuous-wave (LFMCW) radar receiver was built based on a photonic IRM [28]. Integrated up- and down-chirp linear frequency-modulated (LFM) waveforms located in two different bands were used as the transmitted signals.
Without a photonics-based microwave IRM, the image components of the beat frequencies from the up- and down-chirp bands would overlap with each other, resulting in false targets. The photonic IRM scheme offers the possibility of independent target detection, and allows the sharing of hardware resources and joint dechirp processing of the dual bands. The image components can be further suppressed using subsequent digital I/Q imbalance compensation. Beyond the above microwave applications, an all-fiber continuous-wave coherent Doppler LiDAR based on photonic IRMs has also been reported [43].

Discussions and Conclusions

In conclusion, we have reviewed recent advances in photonics-based microwave IRMs. Two typical methods of realizing the photonic IRM, i.e., the pre-filtering method and the phase-cancellation method, were introduced. Applications of the photonics-based microwave IRM in electronic warfare, radar, and satellite systems were also described.
Galaxy Formation from Spectroscopy of Extragalactic Globular Cluster Systems

I discuss how spectroscopy of extragalactic globular clusters provides a powerful probe of the formation history and mass distribution of galaxies. One critical area is spectroscopy of objects which have been identified as candidate young globular clusters through HST imaging of galaxy mergers. I discuss how such data can constrain models of globular cluster and galaxy formation. As an example, I present new spectra which confirm the presence of young globular clusters in NGC 1275. A second way wide-field spectroscopy can be used to probe the formation history and mass distribution of galaxies is through spectroscopy of large numbers of globular clusters around elliptical galaxies. Metallicities obtained from such data place strong constraints on models of galaxy formation, and velocities determined from the same data provide kinematical tracers of the mass distribution out to distances of ~100 kpc.

Introduction

Globular clusters have long been used as important tracers of the history of chemical enrichment and mass distribution in our Galaxy and in the Local Group. Technological advances have now enabled these techniques to be applied to more distant galaxies. This paper concentrates on two ways in which spectroscopic study of extragalactic globular clusters has recently begun to constrain the formation history and mass distribution of elliptical galaxies. The first of these is spectroscopy of candidate young globular clusters discovered in high-resolution images of interacting and merging galaxies. The second is spectroscopy of the globular cluster systems around elliptical galaxies. I discuss how the initial results of each of these approaches are consistent with the hypothesis that mergers play an important role in the formation and evolution of elliptical galaxies.

Globular Cluster Formation in Galaxy Mergers

Some environments are much more favorable for the formation of globular clusters than others. For example, the disks of undisturbed spiral galaxies appear to be inhospitable to globular cluster formation, as evidenced by the absence of young globulars in the thin disks of the Galaxy and M31. More generally, spiral galaxies have fewer globular clusters per unit luminosity or mass than elliptical galaxies (Harris 1991; Zepf & Ashman 1993). This result suggests that the conditions during the formation of elliptical galaxies were more conducive to the formation of globular clusters than those typical of the disks of spiral galaxies. Ashman & Zepf (1992) proposed that the richer globular cluster populations of ellipticals relative to spirals can be understood if elliptical galaxies form from the mergers of gas-rich disk galaxies, and that globular clusters are formed during these mergers. They and many others (e.g. Schweizer 1987; Larson 1990; Kumai et al. 1993; Harris & Pudritz 1994) have argued that the physical conditions expected in such mergers are favorable for globular cluster formation. Ashman & Zepf (1992) therefore predicted that if elliptical galaxies form by merging, newly formed globular clusters should be observable in tidally interacting and merging galaxies. This prediction received dramatic support from HST imaging of the peculiar galaxy NGC 1275, which revealed objects with the luminosity, color, and size expected of young globular clusters.
The success of the Ashman-Zepf prediction was demonstrated even more strongly in the observation of similar bright, blue compact sources in the prototypical merging system NGC 7252 (Whitmore et al. 1993). These observations of roughly 100 young clusters in several galaxy mergers provide strong evidence that the physical conditions in tidally interacting and merging systems are favorable for globular cluster formation. This conclusion had previously been hinted at by the existence of at least a few young globular clusters in the LMC (e.g. Mateo 1993) and by the super-star clusters in several dwarf irregulars, which often appear to be interacting (e.g. Kennicutt & Chu 1988). In this context, a young globular cluster is an object that after 10 Gyr of stellar evolution will have the properties characteristic of Galactic globular clusters. Although HST imaging provides a strong argument that the bright, blue, compact objects observed in NGC 1275 and NGC 7252 are young globular clusters, spectroscopy of these objects is the final, critical step in confirming such an identification. The most basic aim of this spectroscopy is to confirm for individual objects that they are associated with the merging galaxy. The second critical component is to confirm that the optical emission of these objects comes from stars, and therefore that the models which transform a young cluster's luminosity and color to mass are at least roughly valid. With good spectra, it is also possible to use the strength of various absorption lines to provide better constraints on the ages of the young globulars. Better ages lead directly to improved estimates of the mass from stellar population models. Finally, with multi-object techniques, large telescopes, and excellent image quality, it will be possible to obtain spectra and determine velocities for a number of clusters. This will allow at least a rough determination of the kinematics of the young cluster population. In order to obtain spectroscopy of the candidate young globular clusters discovered in NGC 1275, we used the LDSS-2 on the WHT in October of 1993. Because of the excellent seeing and good performance of the spectrograph, the most luminous candidate clusters were clearly identifiable above the bright galaxy background. For the brightest of the candidate clusters, we have been able to obtain good spectra at two position angles. In Figure 1, we show one of these spectra and, for comparison, a spectrum of an A star obtained during the same night. This figure clearly demonstrates that the bright, blue object seen in the HST images is in fact a young star cluster in the galaxy NGC 1275. A more detailed analysis of these data is presented elsewhere. This spectral confirmation of the existence of young globular clusters in NGC 1275 is further evidence that globular clusters can form in tidal interactions and mergers. These data suggest that galaxy mergers are fertile ground for studying the astrophysics of globular cluster formation. Moreover, this result is consistent with the hypothesis that the greater specific frequency of globular clusters around ellipticals relative to spirals is the result of the formation of globular clusters in the mergers which make the elliptical galaxies. The next step is to test this hypothesis in more detail by determining the efficiency with which globular clusters form in mergers of various types.

Globular Cluster Systems of Elliptical Galaxies

Globular cluster systems are invaluable probes of the formation history and mass distribution of galaxies.
For all but a few nearby galaxies, globular clusters provide the most observationally accessible way to study the ages, metallicities, and kinematics of individual objects, rather than integrated properties. Since globular clusters are bound, coeval, and chemically homogeneous (at least to first order), they provide a distinct record of the physical conditions at the time of their formation. Because of this property, globular clusters can be used to test competing theories of the formation of elliptical galaxies (Ashman & Zepf 1992; Zepf & Ashman 1993). If elliptical galaxies form in a monolithic collapse, the metallicity distribution is generally expected to be smooth and single-peaked. More specific predictions can be made in the context of various models (e.g. Arimoto & Yoshii 1987; Matteucci & Tornambé 1987). In contrast, if elliptical galaxies form through mergers, their globular cluster systems will be a composite of at least two populations: the globular clusters associated with the progenitor spirals, and those formed during the merger itself. Since the globulars formed during the merger are formed from enriched disk gas, they will generally be of higher metallicity than the clusters associated with the halos of the progenitor spirals. As a result, the metallicity distribution of the globular cluster systems of ellipticals formed by mergers is expected to have at least two peaks. The difference between the metallicity distribution of the globular clusters predicted by the monolithic collapse model and by the merger model provides a test of which theory more correctly describes the formation of elliptical galaxies. An observationally efficient way to estimate the metallicity distribution is to study the color distribution of the globular clusters, since broadband colors are primarily driven by metallicity in old stellar systems. In Zepf & Ashman (1993), we first performed this test on the globular cluster systems of the elliptical galaxies NGC 4472 and NGC 5128, the systems with the best photometric data then available in the literature (Couture et al. 1991 for NGC 4472 and Harris et al. 1992 for NGC 5128). Using the KMM algorithm to analyze these distributions (cf. Ashman, Bird, & Zepf 1994), we found they were better fit by a distribution with two peaks than by a single one, at confidence levels of 98% and 95%, respectively. We and others have gone on to obtain better data for the globular cluster systems of other elliptical galaxies. Six elliptical galaxies have now been analyzed in this way, and all appear to have color distributions which are better fit by models with two or more peaks than by single-peaked ones (Secker et al. 1994; Lee & Geisler 1993; Ostrov, Forte, & Geisler 1992). Although the color distributions provide significant evidence that elliptical galaxies formed through a merging process, the case for or against a merging origin can be made considerably stronger when spectra are obtained for the globular clusters. Firstly, the contamination from foreground stars and background galaxies can be eliminated directly, rather than by the statistical estimates required when only photometry is available. Secondly, the metallicity of the globular clusters can be estimated from absorption-line indices, and compared to the metallicity estimates derived from the broadband colors. Thirdly, comparison of absorption-line indices arising from different elements can provide information about abundance ratios and therefore about the history of chemical enrichment.
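The bimodality test described above can be sketched numerically. The following Python snippet, with synthetic colors standing in for real photometry, fits one- and two-component Gaussian mixtures and compares them, in the spirit (though not the exact likelihood-ratio machinery) of the KMM algorithm:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
colors = np.concatenate([rng.normal(0.95, 0.05, 150),   # blue, metal-poor peak
                         rng.normal(1.20, 0.05, 100)])  # red, metal-rich peak
X = colors.reshape(-1, 1)

gmm1 = GaussianMixture(n_components=1).fit(X)
gmm2 = GaussianMixture(n_components=2).fit(X)

# A lower BIC for the two-component model favors the bimodal description,
# echoing the high confidence levels quoted for NGC 4472 and NGC 5128.
print(gmm1.bic(X), gmm2.bic(X))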
The spectroscopic study of large numbers of globular clusters around elliptical galaxies is aided greatly by the close match between the typical angular extent of rich globular cluster systems and the field of view of the latest generation of multislit spectrographs. An example of this good match is shown in Figure 2, which is a plot of the surface density of photometric globular cluster candidates around the elliptical galaxy NGC 3923. This plot demonstrates that for this typical globular cluster system of a bright elliptical at a distance of 1600 km s⁻¹, the derived surface density of globular clusters with R < 22 is a factor of several greater than the estimated background at the last data point, at a radius of 5.6 arcminutes. This extended spatial distribution, characteristic of the rich globular cluster systems of bright ellipticals, also makes globular clusters excellent kinematic tracers at large galactic radii. This is perhaps the most exciting prospect for wide-field spectroscopy of extragalactic globular cluster systems. Using NGC 3923 as an example, about 100-150 clusters are expected to a limit of R ≈ 22 within the annular region from 25 h⁻¹ kpc to 50 h⁻¹ kpc. The background contamination in a sample selected from images like ours of NGC 3923 is expected to be roughly 50% for these limits.
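As a back-of-the-envelope illustration of these numbers, the expected counts in such an annulus can be sketched in Python; the surface densities below are invented to reproduce the quoted totals and are not measurements from the paper.

import math

r_in, r_out = 25.0, 50.0               # annulus radii (h^-1 kpc)
area = math.pi * (r_out**2 - r_in**2)  # ~5890 (h^-1 kpc)^2

sigma_gc = 0.021                       # assumed cluster surface density per kpc^2
sigma_bg = 0.021                       # assumed contaminant surface density

n_gc = sigma_gc * area                 # ~124 clusters, within the quoted 100-150
contamination = sigma_bg / (sigma_gc + sigma_bg)
print(round(n_gc), contamination)      # 124 0.5 (roughly 50% contamination)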
The Play of International Practice

The core claims of the practice turn in International Relations (IR) remain ambiguous. What promises does international practice theory hold for the field? How does the kind of theorizing it produces differ from existing perspectives? What kind of research agenda does it produce? This article addresses these questions. Drawing on the work of Andreas Reckwitz, we show that practice approaches entail a distinctive view on the drivers of social relations. Practice theories argue against individualistic interest-based and norm-based actor models. They situate knowledge in practice rather than in "mental frames" or "discourse." Practice approaches focus on how groups perform their practical activities in world politics to renew and reproduce social order. They therefore overcome familiar dualisms, such as agents and structures, subjects and objects, and the ideational and the material, that plague IR theory. Practice theories are a heterogeneous family but, as we argue, share a range of core commitments. Realizing the promise of the practice turn requires considering the full spectrum of its approaches. However, the field primarily draws on trajectories in international practice theory that emphasize reproduction and hierarchies. It should pay greater attention to practice approaches rooted in pragmatism that emphasize contingency and change. We conclude with an outline of the core challenges that the future agenda of international practice theory must tackle.

The Practice Turn in International Relations

The practice turn has, after many fits and starts, arrived in International Relations (IR) theory (Pouliot 2008; Adler and Pouliot 2011b; Acuto and Curtis 2013; Adler-Nissen 2013b; Bueger and Gadinger 2014). But current work fails to adequately elaborate on the promise of the practice turn. It supplies partial, and sometimes unclear, answers to the question of what distinguishes international practice theories from alternative frameworks, such as rationalism and mainstream constructivism. For example, Adler and Pouliot (2011a:28) suggest not only that practice approaches constitute a new paradigm for the study of IR, but that practice theory provides a "big tent" capable of accommodating the whole wide range of ontological and epistemological stances found in the field. We argue, in contrast, that the practice turn entails a distinctive way of studying the world. Since they take practices as the core unit of analysis, practice approaches provide a different understanding of the international. They thereby move away from models of action that focus on the calculation of interests or the evaluation of norms. At the same time, practice theories adopt many of the same assumptions and sensibilities that IR scholars elsewhere describe as "cultural" (Lapid and Kratochwil 1996), "critical" (Ashley 1987), "cognitive" (Adler 1991), or "constructivist" (Guzzini 2000). In seeing "practices" as the stuff that drives the world and makes it "hang together," the everyday practices of diplomats, terrorists, environmentalists, or financial analysts become the object of investigation. Focusing on them allows us to better understand dynamics of order and change. Confusion about the practice turn, as well as the very idea of international practice theory, abounds. We seek to reduce this confusion by laying out its assumptions, promises, and challenges. To do so, we adopt a multilayered strategy. We first clarify how, in ideal-typical terms, practice theory differs from other social-theoretical frameworks.
We show that practice theory not only opposes rationalism and norm-oriented theories, but also distinguishes itself from common culturalist approaches. We then introduce and discuss six core commitments of practice theory at the level of ontology, epistemology, and methodology. Practice theory implies emphasizing process, developing an account of knowledge as action, appreciating the collectivity of knowledge, recognizing the materiality of practice, embracing the multiplicity of orders, and working with a performative understanding of the world. We talk about commitments, rather than principles or shared assumptions, in order to emphasize the heterogeneous character of practice theory. In other words, it is a diverse "family." Specific theorists interpret the commitments differently. Hence, we next discuss the spectrum of practice approaches. We argue, in particular, against the tendency to equate international practice theory solely with Bourdieu's praxeology. The field requires a broader understanding of international practice theory in order to make sense of the diverse phenomena found in world politics. This includes, most notably, sustained engagement with practice theoretical approaches rooted in the tradition of pragmatism. Such a broader understanding entails paying attention to core points of contention within practice theory, and recognizing the challenges that they raise for a practice theoretical research agenda. We discuss four major concerns that practice approaches will have to deal with in one way or another: questions of change, scale, methodology, and reflexivity. How scholars address these concerns will, we contend, prove critical to the fate of international practice theory.

(Mis)Understanding the Practice Turn

The idea of a practice turn in IR has already produced significant criticisms. In an article-length critique, Ringmar expresses extreme skepticism about international practice theory. As he argues, "practices of one kind or another are what scholars of IR always have studied" (Ringmar 2014:2). Indeed, practice has gradually emerged as a core category within constructivism. More than two decades ago, for example, Wendt (1992:413) invoked it as an intermediary between agents and structure. However, when Neumann (2002) suggested the need to pay greater attention to practice theory, he advanced a different argument: that we should promote the concept of practice from a supporting to a leading role.
Neumann's suggestion, as we demonstrate later, entails major implications for ontology, epistemology, and methodology in IR. More recently, the work of Adler and Pouliot (2011a,b) has become closely associated with the practice turn. They do not claim that practice theory constitutes a "universal grand theory" or a "totalizing ontology of everything social" (Adler and Pouliot 2011a:2). But their approach stresses practices as a concept capable of integrating a broad range of work in contemporary IR. This move obscures key wagers of practice-turn theories while minimizing their potential contributions to understanding international affairs. Not every IR theory is, or should be, a practice theory; many approach the world in ways incompatible with practice-driven research. Although they discuss practice, many IR scholars do not share the epistemological and ontological commitments that practice theories imply. Thus, while we praise Adler and Pouliot for beginning the discussion in IR and promoting the cause, there is a danger of turning practice theory into an overcrowded circus. The ontological and epistemological commitments that give practice theory its distinct value must be safeguarded. This is not an argument for isolation. It does not imply that practice theorists cannot (or should not) productively cooperate and converse with other IR theories. On the contrary, such cooperation and collaboration, notably in empirical work, holds a great deal of promise. The precondition for such cooperation is, however, a clear understanding of what practice theory is and what it is not. Theoretical rigor provides the foundation for dialogue. In contrast to the position of Adler and Pouliot, we argue for, and work from, what we call a cautious position of coherence. Such a position does not claim to find a definite core that represents the concept of practice (Kratochwil 2011:37); however, it draws attention to a number of core commitments that, despite being interpreted and implemented differently, are shared within the family of practice theory. The history of earlier "turns" in the discipline highlights the need to demarcate clearer boundaries of international practice theory. The rise of constructivism remains a well-known example of the difficulty of developing a productive research program on weak conceptual grounds. After the euphoria surrounding the emergence of the constructivist approach began to wane, the field witnessed a growing sense of disillusionment. Some of this stemmed from the dominant position of Alexander Wendt's articulation of constructivism and, concurrently, the increasing dilution of constructivism's basic premises (Fierke 2002). Scholars spent vast intellectual energy resolving the resulting epistemological and ontological confusion and designing consistent and coherent avenues for research (Kratochwil 2008). The same fate could befall the practice turn in IR. If we lose sight of the ontological and epistemological commitments that give practice theory its distinct value, then we render the practice turn vulnerable to precisely the kind of criticism leveled by Ringmar (2014:2), that "there is nothing truly new about this research," because the field has always studied the activities of people, states, and other actors in world politics.

Sorting Things Out: The Foundations of International Practice Theory

Reckwitz (2002a, 2004a,b, 2008, 2010) maps social theories in a manner that helps specify the distinctiveness of practice theories.
He sorts approaches into different streams and situates practice theory within them. Reckwitz identifies three major categories: rationalism, norm-oriented theorizing, and cultural theory (Table 1). Within the last, he locates three families: mentalism, textualism, and practice theory.

Classes of Social Theory: Interests, Norms, and Culture

One major class of contemporary social analysis builds upon assumptions of instrumental rationality. Theories within this class rely on methodological individualism and concentrate on individual action; they treat individuals as self-interested and equipped with subjective rationality. In consequence, they view the social sphere as essentially the product of individual actions (Reckwitz 2002a:245). In contrast, norm-oriented theories locate the social in rules that establish conditions of possibility for action. These theories assume that actors consent to normative rules, which enables them to distinguish between allowed, prohibited, worthwhile, and worthless behavior. Normative consensus guarantees social order (Reckwitz 2002a:245). Normative expectations and roles prevent a potentially endless confrontation of disparate interests. Despite their differences, both of these approaches deviate from culturalist theories in an important way: they "both dismiss the implicit, tacit or unconscious layer of knowledge which enables a symbolic organization of reality" (Reckwitz 2002a:246). Instead of understanding social order as the coordination of actions through norms and rules, culturalist approaches focus on understanding what makes actors believe that the world is ordered in the first place, and therefore renders them capable of acting within it. This capacity to grasp the world as ordered presupposes a layer of symbolic and meaningful rules, that is, culture. Culture regulates the ascription of meaning to objects and provides procedures for understanding them (Reckwitz 2002a:246). Culturalist approaches enable analysts to address questions of social order that elude alternative frameworks. Theorizing based on instrumental rationality reduces the challenges of social order to the unequal distribution of resources; it therefore omits collective patterns of action. For their part, norm-based theories claim to more fully explain collective actions and change. However, they struggle to explain the emergence and constitution of norms themselves. Culturalist approaches provide an elaborate solution to this problem. Rather than presuppose that norms guide acting subjects, they instead scrutinize the "how" and "why" of the prior ordering. In their view, it is precisely collectively shared orders of knowledge, systems of symbols, meanings, or cultural codes that generate rules for action. Culturalist theories locate the social within collectively meaningful orders and the symbolic organization of reality (Reckwitz 2002a:246-47). They understand social order as a product of collectively shared knowledge.

Three Families of Culturalist Theorizing: Ideas, Discourse, and Practice

Culturalist approaches differ in how they conceptualize collectively shared orders of knowledge. Reckwitz identifies three families of culturalist theorizing based on this difference: mentalist, textualist, and practice theoretical.[1] Mentalist accounts see shared orders of knowledge expressed in the human mind and its cognitions.
They understand culture as a mental and cognitive phenomenon; they therefore locate it in the human mind, mental structures, or the "head" of human beings (Reckwitz 2002a:247). Mentalist approaches treat shared cognitive-mental schemes as the smallest unit of the social and as their main object of analysis. Classical representatives of this perspective are Max Weber's world images (Weltbilder), the phenomenology of Alfred Schütz or Edmund Husserl, and French structuralism as represented by thinkers such as Ferdinand de Saussure and Claude Lévi-Strauss (Reckwitz 2002a:247). Whereas mentalists focus on the minds of individuals to study shared knowledge, textualists take the opposite route. They do not identify shared knowledge on the "inside," but rather on the "outside" (Reckwitz 2002a:248), that is, in symbols, discourses, communication, or in "texts" that lie outside the individual's mind. Post-structuralism, radical hermeneutics, constructivist systems theory, and the semiotics associated with scholars like Clifford Geertz, Michel Foucault, Jacques Derrida, Niklas Luhmann, Paul Ricoeur, or Roland Barthes mostly represent this mode of theorizing. Despite their divergences, these approaches are united in their focus on extra-subjective structures of meaning. They tend to rely on discourse analysis to decipher cultural codes and rules of formation. Foucault's (1972) early work The Archaeology of Knowledge and Geertz's (1973) The Interpretation of Cultures are paradigmatic in this regard. The third family, practice theory, embraces the importance of mentalist and textualist ideas, yet suggests locating shared knowledge in practices. The focus is neither on the internal (inside the heads of actors) nor on the external (in some form of structure). Instead, scholars see practice as ontologically in between the inside and the outside. They identify the social in the mind (since individuals are carriers of practices), but also in symbolic structures (since practices form more or less extra-subjective structures and patterns of action). Practice theorists foreground an understanding of shared knowledge as practical knowledge. They are interested in concrete situations of life in which actors perform a common practice and thus create and maintain social orderliness. For practice theorists, the intentions and motivations of actors are less relevant; their actual activities and practical enactments in concrete situations matter. In other words, situations become more significant than actors. As Reckwitz (2002a:249) defines it, "a practice is a routinized type of behavior which consists of several elements, interconnected to one another: forms of bodily activities, forms of mental activities, 'things' and their use, a background knowledge in the form of understanding, know-how, states of emotion and motivational knowledge." Performing a practice always depends on the interconnection of all these elements; we cannot reduce practice to any one of them (Reckwitz 2002a:250). Schatzki's (2012:2) understanding of practice as an "open-ended, spatially-temporally dispersed nexus of doings and sayings" emphasizes, in a similar way, the site of the social in practical activities. Sociologists provide examples such as the everyday practices of consumption, work, and family life (for example, Shove, Pantzar, and Watson 2012). In IR, such everyday practices obtain in diplomacy, international business transactions, and military activity.
Theorists of practice criticize the tendency of mentalists and textualists to overintellectualize the social. Although such a criticism should not be overstated, action, including political action, remains more banal than textualists and mentalists assume. In distancing themselves from practical activities, mentalists and textualists tend to overemphasize intellectual constructs at the price of practical human competencies and evaluations.

[1] Reckwitz (2002a:249) initially included intersubjectivism as a fourth family of culturalist theorizing. There the social is located not in mental qualities or symbolic orders, but in interaction and the use of ordinary language. Habermas's "theory of communication" is the paradigmatic case for an intersubjective understanding that is well established in IR (Deitelhoff 2009). As Reckwitz (2010) showed in later articles, this differentiation can however be neglected due to the strong convergence between intersubjectivism and the concerns of practice theory.

Reckwitz's Mapping and International Theory

Reckwitz's mapping provides a useful tool for situating practice theories in IR. Since the 1990s, a controversy between rationalist and norm-oriented approaches has driven IR theorizing (for example, Fearon and Wendt 2002), often presented as a debate between a logic of consequences and a logic of appropriateness. Yet a number of "via medias," "middle ground" constructions, and "hybrids" also thrive in the field, and often blur the lines between the two approaches or creatively combine elements (cf. the diagnoses of Guzzini (2000) and Patrick T. Jackson (2008)). In particular, this applies to the usage of terms such as "culture." Although some would claim that IR has seen a cultural turn (Lapid and Kratochwil 1996), scholars frequently reduce the cultural to an intervening variable added to an otherwise rationalist explanation (for example, Katzenstein 1996). Such an understanding has little to nothing in common with the notion of culture in social theory. What Reckwitz describes as "culturalist theorizing" often has other labels in IR. For instance, an early description of "critical theory" by Richard Ashley comes close to Reckwitz's understanding of cultural theorizing. He argued that:

approaches meriting the label 'critical' stress the community-shared background understandings, skills, and practical predispositions without which it would be impossible to interpret action, assign meaning, legitimate practices, empower agents, and constitute a differentiated, highly structured social reality (Ashley 1987:403).

Leaving problems of labeling aside, the Reckwitzian map of mentalism, textualism, and practice theory can usefully capture current international theory. We find expressions of the mentalist stream in IR, for instance, in early cognitive-psychological works and constructivist research on "ideas" (although much of this research is hybrid insofar as it remains committed to a positivist epistemology; Laffey and Weldes 1997). Studies operating with concepts such as "belief systems," "world views," "operational codes," or "frames" also rely on mentalist reasoning. They focus on mental "sense-making" events as the object of analysis and explore, for instance, the impact of past experiences on future action. Although based on individuals' cognitive acts of interpretation, such studies adopt a mentalist perspective. They focus on the shared knowledge and meaning structures that coexist in a group's mind.
Yet, they distance themselves from the rational actor models of methodological individualism (Goldstein and Keohane 1993:7). Studies, for example, analyze the shared effects that "experience" has on political actors in collective decision making (Hafner-Burton, Hughes, and Victor 2013) or draw on cognitive psychology to explain the link between the personality profiles and leadership styles of world leaders (Steinberg 2005) or the mental schemes of terrorists (Crenshaw 2000).

Textualism has had a sustained effect on international theory, notably in European and Canadian IR. Introduced in the late 1980s by the "dissidents in international thought" movement (Ashley and Walker 1990), expressions of textualism have become well anchored in the discipline. We find them under labels such as "post-structuralism," "discourse theory," or "discourse analysis." In the aftermath of the third debate (Lapid 1989), the study of textual structures became particularly influential in critical security, European integration, and foreign policy studies. A range of classical contributions draws on discourse analysis to study textual structures as preconditions for the actions of diplomats, regional cooperation, transnational identity, the identification of threats, or the development of security strategies. Although authors rely on different theorists, including Derrida and Foucault, their studies share the same objective. They want to understand world political phenomena by investigating extra-subjective structures of meaning through which agents achieve the capability to act. They show, for instance, that shared knowledge establishes authority and that textual genres render distinct forms of knowledge acceptable (Hansen 2006:7). Thus, language is "a site of inclusion and exclusion" and creates a "space for producing and denouncing specific subjectivities within the political realm" (Herschinger 2011:13).

International relations theories develop their own disciplinary understandings of the Reckwitzian categories. Yet, the framework allows us to capture the major lines in the field. This also becomes visible if we ask how practice theory was introduced to IR theory. Neumann (2002) introduced practice theory by contrasting it with textualism, while Pouliot (2008) did so by demonstrating how it differs from rationalist and norm-oriented approaches. The Reckwitzian map gives a sense of orientation. It allows for understanding practice theory through a strategy of "othering." Such a "negative" strategy, however, runs the risk of underplaying the commonalities among varieties of culturalist theorizing and neglecting the many links that exist between de facto expressions of mentalism, textualism, and practice theory. This is notably the case for different variants of post-structuralism that emphasize practice (Wodak 2011). Carving out intellectual space through othering is a helpful, but also dangerous tool. Hence, we also require a positive approximation of what practice theory is. This can be done by identifying the commitments that practice theories rely on.

Commitments of International Practice Theory

Understanding practice theory as composed of a number of core commitments provides a minimal definition of it. In consequence, our understanding of what should count as practice theory changes. The range is narrower than suggested by Adler and Pouliot. Put another way, not everyone who studies practices is a practice theorist. However, it is broader than what is conventionally understood in IR.
Notably, different variants of pragmatist theorizing are included. In adopting the notion of commitments, our claim is not to have found a definite core that every variant of practice theory or every practice theorist shares or "believes" in. Instead, we argue that conducting practice theoretical analysis involves engaging with a number of themes and concerns. The commitments concern what one can achieve with a practice theoretical approach and clarify the reasons for centering analysis on the unit of practice. Questions such as what a practice is, however, remain open to continual interpretation and reconstruction in the conduct of actual practices of research (Kratochwil 2011:37-43).

First, practice theories emphasize process over stasis. They stress the procedural dimension of practice and the fact that any process requires activity. Practice theorists hence prefer verbs such as "ordering," "structuring," and "knowing" over the respective (static) nouns of "order," "structure," or "knowledge." With such a "prioritization of process over substance, relation over separateness, and activity over passivity" (Guillaume 2007:742), practice theories interpret the international through relational ontologies (Jackson and Nexon 1999). As a consequence, scholars bypass essentialist and static notions of the international and sideline distinctions that emphasize these, such as the one between agency and structure.

Second, practice theories offer a distinct perspective on knowledge. They situate knowledge in practice and thereby develop a unified account of knowing and doing (Friedrichs and Kratochwil 2009). Connecting "practice," "acting," and "knowing" implies understanding knowledge as "knowing from within" (Shotter 1993:7). Such a conception of knowledge extends beyond conventional understandings of "knowing that" and "knowing how." Yet, practices cannot be reduced to background knowledge. While knowledge, its application, and its creation cannot be separated from action, "it would be wrong to see the concept of practice as merely a synonym for action" (Hajer and Wagenaar 2003:20). In practice, the actor, his beliefs and values, resources, and external environment are integrated "in one 'activity system', in which social, individual and material aspects are interdependent" (Hajer and Wagenaar 2003:20). As a result, knowledge cannot be essentialized, but is instead a spatiotemporally situated phenomenon.

Third, practice theories grasp knowing and the acquisition of knowledge by learning as inherently collective processes. Members of a distinct group (for example, medical professionals, football players, or children in a kindergarten) learn and internalize practices as "rules of the game" mostly through interaction. Practices as "repeated interactional patterns" achieve temporary stability because "the need to engage one another forces people to return to common structures" (Swidler 2001:85). In the medical sphere, for instance, formal rules and algorithms provide guidelines in medical operations to guarantee standard practices. These prevent doctors from having to make every decision anew in complicated situations. Yet, performing a practice does not necessarily presuppose an interactional dimension. Human collectiveness is not a general criterion for the sociality of practices. Practices can also involve an "interobjective structure," for example, when actors learn a practice through interaction with a machine or computer without necessarily communicating with other people (Reckwitz 2010:117).
Fourth, practice theorists submit that practices have materiality. Bodies are the main carriers of practices, but they are not the sole ones. Material artifacts or technologies can also be carriers of practices. The materiality and embodiment of the world are aspects that tend to be marginalized in other social and culturalist theorizing. For practice theorists, the world is "continually doing things, things that bear upon us not as observation statements upon disembodied intellects but as forces upon material beings" (Pickering 1995:6). To stress the impact of objects, things, and artifacts on social life is not merely to add the element of materiality; it is an attempt to give non-humans a more precise role in the ontologies of the world.

Fifth, social order is appreciated as multiplicity. Instead of assuming universal or global wholes, the assumption is that there are always multiple and overlapping orders (Schatzki 2002:87). There is never a single reality, but always multiple ones. This does not imply chaos, limitless plurality, or an atomized understanding of order. Orderliness is, however, an achievement. It requires work and emerges from routines and repetitiveness in the "situated accomplishments" of actors (Lynch 2001:131). As such, order is always shifting and emergent. The assumption is that actors are reflexive and establish social orders through mutual accounts. Thus, the permanent (re-)production of "accountability" is preserved through ongoing practical accomplishments. Practices therefore have a dual role, both creating order through accountability and serving to alter the "structure" through the innovativeness of reflexive agents.

Sixth, practice theories embrace a performative understanding of the world. The world depends on practice. This "world of becoming" is the product of the ongoing establishment, reenactment, and maintenance of relations between actors, objects, and material artifacts. The concept of enactment turns the focus away from the idea that objects or structures have assumed a fixed, stable identity and that closure is achieved at some point. Enactment stresses the genuine openness of any construction process. Construction is never complete. Objects, structures, or norms, then, exist primarily in practice. They are real because they are part of practices, and are enacted in them. Such a performative understanding avoids attempting "to tame" practice and to "control its unruliness and instability," as Doty (1997:376) noted early on. In practice theory, "[...] practice must entail an acceptance of its indeterminacy. It must entail a decentering of practice" (Doty 1997:376).

These six commitments stress that doing practice theoretical analysis implies engaging with a range of core themes and concerns. Laying out these commitments gives us a sense of how practice theory coheres and defines its limits. Our intention is, however, not to "police" what practice theory is and what it is not. Considering these commitments nevertheless clarifies some of the boundaries. Ringmar's (2014) general attack on the promises of practice theory, for instance, targets two studies. He criticizes Abrahamsen and Williams (2011) as being nothing more than rational choice theory (Ringmar 2014:10). Abrahamsen and Williams indeed combine different approaches and do not follow Bourdieu dogmatically.
But it is through this comprehensive practice-oriented approach that they successfully explain the growth of private security under globalization as a complex relational phenomenon and thus overcome the dualism of local and global. The study hence relies on the outlined commitments. We agree, however, with Ringmar's (2014:13) criticism of Patrick Morgan's study on practices of deterrence (Morgan 2011), which offers a "reconstruction of the intentions and aims of actors involved." Morgan's argumentation is rooted in methodological individualism and strategic action and has little in common with the concerns of practice theory. The outlined commitments provide general criteria to bring coherence to international practice theory. As discussed in the next section, one should not read the commitments as "shared assumptions and beliefs." Practice-driven approaches draw on the commitments and develop them in different ways.

The Spectrum of International Practice Theories

As several commentators have noted, practice theories are a heterogeneous set of approaches. To speak about practice theory in the singular is problematic. Reckwitz adopts the metaphor of a "family" to emphasize this heterogeneity and to indicate that the term "practice theory" does not have a definite meaning. Practice theories have family resemblance in the sense outlined by Ludwig Wittgenstein (Wennerberg 1967). Their commonality lies in the relations between them, the outlined commitments, and their contrast with other varieties of theory. If this is a challenge to conventional understandings of what a theory is, the heterogeneity of practice theories is their strength, not their weakness. It allows one to capture "practice" from different directions and put emphasis on a broad range of phenomena. Doing practice-driven analysis implies appreciating multiplicity. Practice approaches differ not only in terms of the traditions in which they are rooted (below, we distinguish between a critical and a pragmatist one); they also employ different conceptual vocabularies on top of the concept of practice and thereby interpret the aforementioned commitments differently.

Many IR scholars tend to equate the notion of practice theory with the thinking of Pierre Bourdieu. A vast majority of current practice theoretical work takes Bourdieu's approach as a starting point, to such a degree that "Bourdieusianism" dominates the discussion on practice in IR. The attraction of Bourdieu's praxeology in IR lies not least in the fact that it is "at its core a theory of domination" (Pouliot and Mérand 2013:36). This makes the approach compatible with a discipline historically concerned with power relations, conflicts, and hierarchical structures. In addition, his conceptual vocabulary of habitus, field, and capital seemingly corresponds to IR categories such as strategy, conflicts, and culture (Adler-Nissen 2013b). Equating practice theory with Bourdieu, however, is a peculiar development in IR, which might require an explanation in itself. In the wider practice turn debate in the social sciences, Bourdieu appears as a footnote rather than as the guiding approach (Spiegel 2005). While Bourdieu's work should have a prominent place, this rather odd development reduces the spectrum and hence the potential of practice accounts for IR. It forgets that practice theories have been developed from different traditions. It also leads to another problem: approaches that draw on a pragmatist tradition tend to be excluded from the practice theory debate in the field.
On top of Bourdieu's praxeology, a meaningful spectrum consists of at least four approaches that have started to thrive in IR: (i) studies of global governmentality following Foucault's later work (Walters 2012), (ii) the community of practice approach as outlined by Etienne Wenger and introduced to IR by Adler (2005), (iii) adoptions of actor-network theory (ANT) following Bruno Latour and other advocates (Best and Walters 2013), and (iv) assemblage approaches following Gilles Deleuze's emphasis on practice and relations (Acuto and Curtis 2013). Other, less established, approaches draw, for instance, on the practice theories of Luc Boltanski (Gadinger 2015), Michel de Certeau (Neumann 2002), Karin Knorr Cetina (Bueger 2015), Theodore Schatzki (Navari 2010), or Ann Swidler (Sending and Neumann 2011). Each of these approaches deserves to be discussed in its own right and situated within the practice theoretical debate. Here, we are interested in the relations between them and how they respond to a set of challenges that the practice perspective poses. Below, we discuss the spectrum of practice theories in the light of a set of challenges or points of contention. This set is certainly not conclusive, but these are core issues in the future agenda of international practice theory. We first relate the approaches to two different traditions: critical theory and pragmatism. Then, we show how they offer different responses to the problem of change and induce different positions on the regularity of practice. We address concerns over how to handle different scales of practice and the tendency to "containerize" practice. The next challenge concerns methodology: how can practices be studied in empirical research? The final challenge is how, in a thoroughly practice-oriented theoretical ontology, the relation between academic practice and the practices under study can be conceptualized, and what positions and reflexive standards follow.

Two Traditions: Critical Theory and Pragmatism

The family of practice theory is rooted in at least two different traditions, a fact that has largely gone unnoticed in IR but is widely established in sociology and social theory (Bénatouïl 1999; Celikates 2006; Bogusz 2014). A continental critical theory line of reasoning develops the understanding of practice from a Marxian tradition. Beginning with Marx, who suggested that societal life should be analyzed as human practice, theorists such as Michel Foucault and Judith Butler started from textualist assumptions and subsequently integrated a focus on practice: Foucault's later work on governmentality and Butler's understanding of performativity are prime examples of the practice wave in critical theorizing. In a nutshell, practice approaches in a critical tradition are primarily driven by concerns over power, domination, and resistance. Foucault's technologies of governance as well as Bourdieu's praxeology are the most prominent frameworks in IR in this line. What this tradition shares is its genuine interest in questions of hierarchical reproduction and resistance and in elaborating larger historical trends and forces. This is, for instance, reflected in Bourdieu's emphasis on understanding distinct social spheres as fields of practices, shaped by symbolic power struggles between different actors, each aiming to improve their position.
By drawing on Bourdieu's key concepts, "it is possible to map political units as spaces of practical knowledge on which diverse and often 'unconventional' agencies position themselves and therefore shape international politics" (Adler-Nissen 2013a:2). As the bulk of Bourdieu-inspired studies in IR demonstrate, his terms habitus, field, capital, and doxa provide a productive relational framework for studying international practices. An advantage of these studies, for instance, on European security (Berling 2012; Adler-Nissen 2014) or the emergence of private military companies (Leander 2005), is that actors are not studied in isolation, but through their practical relations to each other in dynamic configurations of fields. The concept of a "field" captures the objective component of a distinct hierarchical sphere such as art, economics, or even European security. The concept of habitus focuses on the experiences and strategies of individuals seeking to establish or achieve an advantageous position within it. The habitus is the origin of the practices that reproduce or change the existing structures of the field. These practices in turn shape the experiences of actors, form their habitus, and stabilize power structures in the field. It is fair to say that the emphasis of Bourdieu's praxeology is on the stability, regularity, and reproduction of practices and less on subversion and renewal.

A major strength of Bourdieu's framework therefore lies in its ability to dissect symbolic power struggles in politics. Studying these struggles reveals much more complexity and subtlety than the stories conventionally told in IR. As a result, studying power relations by drawing on Bourdieu moves IR research in new directions and contributes to the debate on the different faces of power (Barnett and Duvall 2005). This analytical strength, however, can also be turned into a criticism, which is articulated by scholarship rooted in pragmatism. Given the explicit focus on domination, power, and hierarchies, one could gain the impression that practice is always embedded in power struggles. Indeed, the focus of Bourdieu's vocabulary is on structures of power and domination and less on the vast range of other sociocultural practices.

A pragmatist tradition, on the other hand, develops the concept of practice from its Aristotelian roots and its notion of practical reasoning (phronesis). Instead of structures and routines, concepts such as problems, uncertainty, creativity, and situated agency are key issues in the pragmatist tradition. Classical American pragmatist authors like John Dewey are main points of reference. In contrast to sociology, IR has not recognized recent pragmatist theorizing as part of the practice theoretical family. There has been some suspicion that the renaissance of pragmatism has something in common with practice theory, and one finds some cross-references. Kratochwil (2011:38), for instance, suggests that recent works in international practice theory share core elements of "a generative grammar for approaching action and meaning" that American pragmatism had initially articulated. Yet pragmatist theorists, notably contemporary ones, are rarely recognized for their role within practice theory, and the interest in pragmatism is often understood as a separate project. The reasons for this lack of recognition are manifold.
Part of the explanation is certainly that IR scholars are primarily interested in classical pragmatism, that is, the work of Dewey, James, Mead, and Peirce, and understand pragmatism mainly as a philosophical program rather than a sociological or empirical one (Hellmann 2009). Second, it is part of the pragmatist habit to shy away from declarations of belonging to a certain turn, tradition, or perspective. Many contemporary pragmatists, like Latour or Boltanski, are not transparent in this regard, although the intellectual roots and resemblances are quite obvious (for example, Latour 2005:261; Boltanski 2011:27-29, 54-60). As observers from sociology point out, such authors are seen not only as pragmatists, but also as practice theorists (Blokker 2011; Nicolini 2013). In consequence, in the IR debate, many contemporary theorists have rarely been identified as either pragmatists or practice theorists.

Recognizing the pragmatist tradition is an important reminder that the commitments of practice theory can be interpreted quite differently. The pragmatist tradition aligns the concept of practice more closely with action and, as a result, the concept loses its structural connotations. Practice is formed in a continuous stream of acts and has "neither a definite beginning nor a definite end" (Franke and Weber 2012:675). Thinking of practice in terms of change is at the core of the pragmatist tradition and reflects the aim of reconsidering "agency" in a more substantial manner. The originality of the pragmatist approaches developed by Latour or Boltanski, and also, albeit in a more communitarian fashion, of Wenger's community of practice approach, lies in their reinterpretation of the concept of action. Following the commitments of practice theory, action is seen as taking place in multiplicity, in a combination of "common worlds," and in hybrid relations between subjects and objects, humans and non-humans. From this pragmatist point of view, the world of IR becomes one overflowing with a multitude of beings, things, objects, and artifacts. More strongly than the critical tradition, pragmatist vocabulary turns to fully relational, performative language and to describing the world as a continuous process of ordering, translating, engaging, producing, assembling, enacting, working, or constructing. Thus, studies in IR inspired by Latour, Boltanski, or Deleuze focus on the practical work at the "construction" sites where the social, the material, the factual, or the powerful is produced (for example, Walters 2002; Bueger and Bethke 2014). From a pragmatist point of view, "practices cannot be understood from an objective standpoint alone, because they are internally related to the interpretations and self-images of their participants that can only be grasped if one takes their perspective as fundamental" (Celikates 2006:21). Thus, human action is deeply implicated in situations or controversies, which are always in need of interpretation by the involved agents (Blokker 2011:252). To do practice research in a pragmatist tradition means describing and elaborating on these controversies as well as identifying the underlying practices following ethnomethodological premises. In sum, the pragmatist tradition stresses situations, contingency, creativity, and change. Hence, it starts out from almost the opposite direction to the critical theory tradition's focus on routines and structures. These differences become clearer if we now turn to the question of transformations and change.
Change

One of the initial motives for developing practice theories was to enable a better grasp of social change and contingency (Neumann 2002; Spiegel 2005). The vocabulary of practice theory stresses cultural contingency and historicity much more than textualist or mentalist accounts. Structure, in practice theory terms, is largely formed by routinization, which refers to its temporality (Reckwitz 2002a:255). Yet, the conception of the transformative and regularized patterns of practical reconfigurations remains a major point of contention within practice theory. How fluid and ephemeral is the world? While for some approaches change is a variation stemming from unexpected irritations and events in the reproduction process, for others change is constitutive of practice itself. For critical theorists like Bourdieu, repetition and reproduction are the norm. Shifts are therefore considered rare and require a revolutionary event. Those interested in larger formations of domination and historical processes tend to focus on regularity and tend to underplay the potential for transformation. In consequence, such perspectives have been criticized for not being capable of actually studying change (Joas and Knöbl 2009:395). Pragmatist perspectives, such as ANT or the assemblage framework, with their emphasis on process and relations, occupy a very different position. They claim that stability, rather than change, requires explanation. The world is seen as constantly emerging and shifting; practices are taken as inherently innovative, experimental, and erratic. Other approaches, such as the community of practice approach, attempt to take a middle ground position to deal with the tension between order and change. Adler's (2005:15) adoption of the concept to study community building beyond IR's norm-oriented approaches is driven by the aim of capturing the agential as well as the structural side of practice in order to reach a more comprehensive understanding of social change. The understanding of world politics through communities of practice, which are produced and reproduced in collective processes of learning, reinterprets the earlier promises of constructivism to provide adequate interpretations of change (Wendt 1992).

Every practice approach struggles with the inherent tension that practices can "range from ephemeral doings to stable long-term patterns of activity" (Rouse 2007:639). Practices are repetitive patterns, but they are also permanently displacing and shifting. Practices are dispersed, dynamic, and continuously rearranging in ceaseless movement; yet they are also reproducing, organized, and structured clusters (Schatzki 2002:101). This constellation forces practice theorists to be particularly aware of the tension between the dynamic, continuously changing character of practice on the one side, and the identification of stable, regulated patterns, routines, and reproduction on the other. The dual nature of practices requires attention to the interaction between both the emergent, innovative and the repetitive, reproducing sides of practice. This leads to one of the most disputed questions posed by practice theory scholars: can practice theory serve both analytical purposes and explain continuity as well as change? One should not, however, expect a definitive conceptual "solution." As Reckwitz (2004b:51) correctly points out, there is no theoretical reason why practice theorists should take either the reproductive or the erratic character of practice to be the norm.
Indeed, as he suggests, this issue needs to be turned into the analytical question of which practices, under which conditions, take on an erratic or a reproductive nature. In this sense, the different approaches of practice theory provide different analytical starting points. However, it is only when seen together that they generate a major empirical-analytical question.

Scale and Structural Metaphors

Scholars have proposed a confusing array of structural metaphors, and it appears that these are (intentionally or unintentionally) undertheorized. Bourdieu's "field" is certainly one of the most developed concepts and allows IR scholars to understand international relations beyond national boundaries in transnational spaces (Peter Jackson 2008:178). Drawing on the concept assumes a distinct structure that relies on a unique doxa and distribution of resources. Scholars hence argue that a fairly homogeneous structure with boundary and identity practices can be identified. This offers particular promise if the analyst wants to understand the distribution of power among different agents and their relative positionality (for example, Williams 2012). The logic of this structure then becomes an object of study. When compared to other concepts in practice theory, Bourdieu's structural metaphor assumes the most coherence. Significant similarities, in this sense, can be found in the community metaphor used by some, in particular Wenger's (1998) notion of communities of practice. Understanding practice as organized in community structures implies suggesting that a stable core (or repertoire, in Wenger's words) and a significant amount of boundary work drive the collectives of practice.

On the other side of the spectrum, we can identify notions of structure that draw on the pragmatist obsession with contingency, fluctuation, and situations. Schatzki's notions of "bundles" and "arrangements," the Latourian notion of "actor-networks," and the Deleuzian concept of "rhizomatic assemblages" are almost chaotic notions of structure and order. They center on notions of multiplicity, overlap, complexity, incoherence, and contradictions between structural elements. As Marcus and Saka (2006:102) phrase it, such conceptualizations are employed "with a certain tension, balancing, and tentativeness where the contradictions between the ephemeral and the structural, and between the structural and the unstably heterogeneous create almost a nervous condition for analytic reason." The advantage of such metaphors is their genuine openness to the various possibilities of orderliness. They should not be understood as anti-structural notions; yet they foreground the ephemeral and stress that weight has to be put on empirical, situation-specific research in order to understand how ordered (or disordered) the world is. The price that has to be paid for such notions is that it becomes almost impossible to lay out grand histories of panoramic scale and the power dynamics they entail. Employing such notions also creates inherent contradictions for the presentation of academic research, given that academic research becomes intelligible only if phrased in relatively coherent narratives.

The question of structure needs to be addressed in light of the importance of scale. One of the benefits of practice theories is that they do not take constructions of scale, such as micro (face-to-face interactions, and what people do and say), meso (routines), macro (institutions), or even local (situations), regional (contexts), and global (universals), as natural categories.
Practice theories intend to keep ontology flat and to conceptualize the ideas behind such constructions. Indeed, there is no such thing as micro, macro, local, or global. In reality, these are strategic constructs by social scientists. Practice theory hence aims at allowing "the transcendence of the division between such levels, such as that we are able to understand practice as taking place simultaneously both locally and globally, being both unique and culturally shared, 'here and now' as well as historically constituted and path-dependent" (Miettinen et al. 2009:1310). The question of scale has driven some substantial empirical research on how scales are made. Authors including Tsing (2005) and Latour (2005) have shown how actors combine heterogeneous elements to make the global and universal. They have foregrounded the work of bureaucrats, scientists, and activists in creating scale by framing things as universal and international. Other authors demonstrate the hybridity of scale, like Knorr Cetina (2005), who argues for the prevalence of what she calls "complex global micro-structures." For Knorr Cetina, these structures are driven by micro-interactions but are global in reach; transnational phenomena such as terrorism or financial markets can be studied and understood in such a manner. The empiricist route of focusing on the making of scale and the emergence of scale hybridity as the main object of study promises interesting insights. Yet not every practice-driven investigation will focus primarily on scale-making. Even if an analysis does not explicitly focus on scale, one needs to recognize that practice theorists not only challenge traditional understandings of scale; they also introduce their own politics of scale by creating structural concepts and situating practice in larger containers.

Methodology

Although scholars often perceive practice theory as an attempt to invent new vocabularies, it also implies a move to more empirical and descriptive work. Miettinen et al. (2009) provide a careful reminder that the practice turn was always primarily motivated by empirical concerns. Practice theorists across the spectrum stress that the theoretical vocabulary should be understood as offering "contingent systems of interpretation which enable us to make certain empirical statements" (Reckwitz 2002a:257). Practice theory has the status of "a heuristic device, a sensitizing 'framework' for empirical research in the social sciences. It thus opens up a certain way of seeing and analyzing social phenomena" (Reckwitz 2002a:257). It provides not only a particular vocabulary, but also a search-and-find strategy. Since such an approach falls in the realm of interpretative methodology, practice theorists draw on a mix of established methods (usually participant observation, interviews, and text analysis) and reinterpret these in light of practice theoretical concerns (see Nicolini 2009; Bueger 2014). Understanding practice theory as a heuristic device that provides sensitizing concepts emphasizes the importance of integrating methodology and theory. Indeed, practice theory and methodology should be considered as a coherent package (Nicolini 2013). The question of how practices can be studied empirically, however, has so far received the least attention from practice theorists. Methodological reflexivity is arguably weak.
Many practice theorists have primarily come up with negative methodological guidelines that argue against "objectivist" accounts and suggest how not to conduct research. Bourdieu, for instance, has argued vividly against both objectivist and what he calls subjectivist accounts (Nicolini 2013:62). A pragmatist scholar like Latour equally lays out largely negative guidelines, and posits that his methodology tells you, in the first place, what not to do (for example, Latour 2005:142). Participant observation, as the tool that allows for recording bodily movements, speech, and the handling of artifacts in real time, relates particularly closely to the concerns of practice scholars. Participant observation allows direct proximity to practice. The method finds its limitations under conditions of limited field access and resources, or when the object of study is historical practices whose bodily movements are no longer observable. Understanding practices will often require deciphering them from texts such as manuals, ego-documents, or visual representations, or from interviews centered on descriptions of activities (Nicolini 2009). Interviews and texts, however, do not provide direct access to practices; they provide representations of practices that have to be carefully interpreted.

The differences between critical and pragmatist versions of practice theory also play out in methodological choices concerning research strategies, data collection, and writing styles. Critical scholars tend to focus their strategy on interpreting structures and fields. They therefore prioritize large-scale genealogies of practices reconstructed through textual analysis or the mapping of fields through survey methodology or positioning analysis. Given the concern with larger formations, the writing styles adopted are more distant and objectifying and offer less descriptive detail. Pragmatists, by contrast, tend to initiate research by zooming in on a distinct practice, a crisis situation, or an object (Bueger 2014) and hence place more emphasis on participant observation, acquiring descriptions of detailed situations, and immersion in the action. Corresponding to the erratic understanding of practice, their writing follows a style that provides complex, often nonlinear and incoherent narratives that include multiple voices of practitioners and a high level of empirical detail. While critical scholars risk providing overly "clean" narratives of practice, pragmatists face the trap of producing incomprehensible cacophonies of voices. Given the status of empirical work for developing international practice theory, the question of which packages of theory and methodology and which writing styles best enable the capturing of practice remains a vital concern.

Positionality and Reflexivity

What is the relationship between academic practices and the practices under study? Methodology is one way to contemplate this relationship, yet practice theories also consider the broader set of relationships that academic practices have to other practices. The symmetrical perspective of practice theory implies not only considering the world studied as a practical configuration, but also conceiving of (academic) knowledge generation as practice.
Practice theory, then, provides a tool for studying scientific disciplines (such as IR), for understanding the multiple relations between scientific and other social and political practices, and for examining the practical activities involved in generating knowledge (Bueger and Gadinger 2007). The study of scientific practices has been crucial to developing practice theory. It is therefore no coincidence that the majority of authors in the seminal edited volume introducing the practice turn (Schatzki, Knorr Cetina, and von Savigny 2001) are science studies scholars. The symmetrical perspective of practice theory enables not only an understanding of which relations contribute to the construction of academic knowledge, but also the identification of the practical (performative) effects that academia has. The representations of practice generated by scholars have various effects on the practitioners and practices themselves. While practice theorists are united in recognizing the importance of such a form of practical reflexivity, its status in directing knowledge generation remains contested. Those close to a critical tradition use reflexivity as a device for ensuring the quality of knowledge, preserving the autonomy of the academic field, and maintaining a notion of academic superiority. For instance, Bourdieu stresses collective reflexivity, that is, the constant investigation of the conditions under which knowledge has been produced (Berling 2013). Practical reflexivity then provides the basis for intervening in societal concerns, debunking games of domination, and contributing to the emancipation of the subjects of domination. Thus, reflexivity and the study of academic practice exert power as an essential form of self-regulation and as a policing device. In contrast, pragmatist scholars interpret practical reflexivity as a constructive mode geared toward ensuring that academic knowledge production addresses societal concerns. Arguing against autonomy, this position draws on the classical pragmatist understanding of academia as part of a broader community of inquiry which constructs matters of concern, develops problematizations, and cultivates methods for mastering problems. Practicing reflexivity on academic practices strengthens the ways in which analysis can contribute to problematization and problem solving (Hellmann 2009). One of the expectations of turning to practice vocabulary is that it places scholars in a better position to contribute to real-world problems and to produce statements of relevance beyond a community of peers (Latour 2005:261). What such contributions will look like, what positions the academic will have to take, and what the status of reflexivity will be in maintaining this position are ongoing concerns for practice theory.

Conclusion: The Future of International Practice Theory

Is it meaningful, or even necessary, to speak of a "practice turn"? Regardless of how we answer that question, attention to practice theories now drives important research on international relations. Still, the development of international practice theory remains in its early stages. In this article, we sought to clarify the character and promise of practice theory. We rejected overly vague conceptualizations of the "practice turn," as well as claims that practice theory offers nothing new to the field. Of particular importance, we argued against attempts to cast international practice theory as the new grand theory of international relations. It is not.
Nor is it capable of integrating the discipline's diverse paradigms and methodologies. Indeed, international practice theory adds new vocabulary and methodological perspectives. It increases, rather than decreases, the pluralism of the field. This facilitates productive debate, as long as we remain clear about what different theories and approaches bring to the table. Moreover, we offered three layers of approximation concerning international practice theory. We started with a discussion of what belongs outside of practice theory: rational choice, norm-oriented constructivism, and the study of belief systems or of discourse. In social-theoretic terms, practice theory moves away from the study of intersubjective coordination. Its distinctiveness resides in taking patterns of activity as the smallest unit of analysis. This entails focusing on the study of bodily movements, the handling of artifacts, and practical knowledge. It concerns itself with the structures and situations in which actors perform shared practices and produce social order. We also laid out the core commitments of practice theory: its minimal ontological and epistemological wagers. These "thin" commitments provide the basis for mutual understanding both within and outside of international practice theory. As Reckwitz (2004b:52) suggests, practice theory is at its strongest when it remains as thin as possible with respect to its general conceptual requirements. We then surveyed the broader approaches that fit within this "thin" understanding of the practice turn. In particular, we emphasized the need to avoid conflating Bourdieusian approaches with international practice theory writ large. Rather, such approaches constitute part of an ongoing debate within practice theory. The future of international practice theory depends on the vibrancy of that ongoing debate. Its particular stakes involve unresolved problems for the practice turn: how to cope with tensions between the regulative and erratic character of practice, how to handle the politics of scale, what methodologies best allow for capturing and writing about practice, and how to reflexively situate practice researchers within the world they study. But these questions cannot be resolved simply through theoretical debates; they must be worked out in the context of empirical investigation.
Examining how organizational leaders perceive internet-delivered cognitive behavioural therapy for public safety personnel using the RE-AIM implementation framework

Background
Within Canada, internet-delivered cognitive behavioural therapy (ICBT) has recently been tailored by PSPNET to meet the needs of public safety personnel (PSP) to help address high rates of mental health problems within this population. Perceptions and outcomes of ICBT among PSP are promising, but it remains unknown how PSPNET is perceived by PSP organizational leaders. It is important to assess this gap because these leaders have significant potential to influence the uptake of ICBT.

Methods
In the current study, PSP leaders (n = 10) were interviewed to examine their perceptions of PSPNET and opportunities to improve ICBT implementation. The RE-AIM evaluation framework was used to assess PSP leaders' perceptions of PSPNET in terms of reach, effectiveness, adoption, implementation, and maintenance.

Results
The results evidenced that leaders perceived PSPNET as effective in reaching and serving PSP and PSP organizations. PSP leaders reported perceiving ICBT as effectively implemented, especially for being freely offered to individual PSP and for improving PSP's access to experienced therapists specifically trained to work with PSP. Participants indicated organizations have promoted and will continue promoting PSPNET longer-term, facilitating adoption and maintenance. Factors perceived as facilitating successful service delivery included building relationships and trust with PSP organizations and general support for PSP leadership mental health initiatives. PSP leaders identified perceived areas for improving ICBT implementation (e.g., ensuring leaders have access to data on PSPNET uptake and outcomes, creating promotional videos, expanding availability of PSPNET to other provinces, offering additional options for receiving therapist support).

Implications
Overall, the study provides insights into PSP leaders' perceptions of the implementation of ICBT among PSP and ideas for optimizing implementation efforts.

Introduction

Public safety personnel (PSP) are diverse occupational groups working to keep communities safe, including, but not limited to, border services officers, correctional services workers, firefighters (career and volunteer), Indigenous emergency managers, operational intelligence personnel, paramedics, police (municipal, provincial, and federal), public safety communicators, and search and rescue personnel (Canadian Institute for Public Safety Research and Treatment [CIPSRT], 2019). Elevated rates of mental health challenges have been observed among PSP worldwide (Benedek et al., 2007; Courtney et al., 2013; Maia et al., 2007; Motreff et al., 2020). Within Canada specifically, PSP across all sectors are at increased risk for developing symptoms of major depressive disorder, posttraumatic stress disorder (PTSD), generalized anxiety disorder, social anxiety disorder, panic disorder, and alcohol use disorder, with 44.5 % reporting clinically significant symptoms of one or more mental health disorders and 18.0 % reporting symptoms of three or more (Carleton et al., 2018). Unfortunately, despite high mental health needs, PSP experience many barriers to accessing mental healthcare services (e.g., stigma, time barriers, geographical barriers, confidentiality concerns; Jones et al., 2020; McCall et al., 2021a).
In response to the high rates of mental health challenges faced by PSP and barriers to care, Public Safety Canada invested in the development of a clinical research unit called PSPNET, tasked with developing, delivering, and conducting research on internet-delivered cognitive behavioural therapy (ICBT) tailored for PSP (Public Safety Canada, 2019). ICBT consists of cognitive behavioural treatment materials provided in the form of an online course, which can be self-guided or therapist-guided (Andersson, 2016). There is now a large evidence base for the effectiveness of both therapist-guided and self-guided ICBT for reducing symptoms of mental disorders (e.g., Andersson et al., 2019a; Andersson et al., 2019b; Lewis et al., 2019; McCall et al., 2021b). Moreover, ICBT can help overcome barriers to mental healthcare (e.g., geographical, time, and attitudinal barriers; Andersson, 2016). To date, PSPNET's research has focussed on examining ICBT from the perspective of PSP who have used therapist-guided ICBT services. Results have been encouraging, showing that PSP perceive therapist-guided ICBT to be beneficial (Beahm et al., 2021) and that those who participate in it experience significant reductions in symptoms of anxiety, depression, and posttraumatic stress (Hadjistavropoulos et al., 2021; McCall et al., 2023).

The objective of the current study was to extend research on ICBT tailored for PSP by examining perceptions of PSPNET among leaders within PSP organizations (i.e., administrative leaders, organizational leaders, management, and frontline supervisors). There is a need to identify leaders' perceptions of ICBT tailored for PSP because perceptions of program outcomes and implementation, and ideas for improvement, can vary between different types of interested parties (Lyles et al., 2021; Neher et al., 2022), and past research on mental health programs targeted to PSP has highlighted that PSP leaders and organizational policies supportive of mental health programs represent major facilitators of program uptake and implementation (Knaak et al., 2019; Milliard, 2020). Therefore, leaders' perceptions of ICBT tailored for PSP could impact decisions by policymakers or PSP organizations regarding program support (Damschroder et al., 2022; Shelton et al., 2018). Research has also shown that most prospective clients of PSPNET learn about PSPNET through employers, unions, colleagues, or professional associations, further highlighting the potential influence that leaders of PSP organizations may have on uptake (McCall et al., 2021c). Leaders may also be able to provide novel ideas for improving the implementation of ICBT tailored for PSP.
Prior to PSPNET being implemented, interviews conducted with 126 Canadian PSP (56 %; n = 70 in leadership positions) evidenced that 93 % perceived a need for ICBT tailored to PSP, and 62 % reported believing that PSPNET would be used by PSP (McCall et al., 2021a). A national survey conducted by PSPNET assessed perceptions of ICBT and insights into expanding access across Canada approximately two years after initial implementation (Landry et al., 2023). The results indicated that most PSP leaders believed having PSPNET available to members of their organizations should be a priority (80.9 %; n = 207) and would be effective for improving members' mental health (82.4 %; n = 210). The results of both studies (i.e., Landry et al., 2023; McCall et al., 2021a) suggested PSP leaders have positive perceptions of ICBT tailored to PSP and perceive a need for ICBT; however, previous studies did not comprehensively investigate PSP leaders' perspectives on PSPNET, barriers to and factors facilitating implementation, or opportunities to improve services.

The current qualitative study used the RE-AIM evaluation framework (Glasgow et al., 1999, 2019; Holtrop et al., 2021), which has recently been applied in other ICBT implementation research (e.g., Lundström et al., 2023; Sit et al., 2022), to assess PSP leaders' perceptions of PSPNET along five dimensions: reach, effectiveness, adoption, implementation, and maintenance. Quantitative data has typically been used during evaluations using the RE-AIM framework; however, there has recently been increased emphasis on using qualitative data to provide insights into the five dimensions (Holtrop et al., 2018). Qualitative data is helpful for understanding what has and has not worked and what can be improved in terms of both the intervention and the implementation approach. For example, qualitative data can help evaluators understand the best way to reach certain groups, why programs do not reach certain groups, and ways to better reach target populations (Holtrop et al., 2018). Qualitative data can also be used to understand various stakeholders' perceptions of program effectiveness, such as whether they view the results as meaningful and beneficial enough to make the program worthwhile (Holtrop et al., 2018). Adoption of an intervention can be explored using qualitative data to help understand why certain organizations chose to participate in an intervention or not (Holtrop et al., 2018). Qualitative data can be particularly important for understanding stakeholders' perceptions of implementation efforts, what was successful and unsuccessful, and areas for improvement (Holtrop et al., 2018). Qualitative data is also useful for identifying potential maintenance and sustainability problems with an intervention and for assessing whether organizations intend to continue to adopt or promote an intervention (Holtrop et al., 2018).
The current study uses a qualitative RE-AIM framework. In line with this framework, reach referred to examining leaders' perceptions of whether PSPNET was reaching those in need and whether certain groups were not reached. Effectiveness referred to assessing whether leaders had heard feedback about PSPNET and their perceptions of PSPNET's success in making a difference in the lives of PSP, making a difference in PSP organizations, and improving awareness of posttraumatic stress injuries. Adoption referred to the extent to which leaders had promoted PSPNET. Implementation referred to assessing leaders' perceptions of the strengths and weaknesses of PSPNET's services as they were implemented. Maintenance referred to leaders' perceptions of the consequences of not maintaining PSPNET and their willingness to advocate for the long-term sustainability of the program. In addition to assessing PSP leaders' perceptions of PSPNET along the five RE-AIM dimensions, we aimed to understand perceived facilitators and barriers to PSPNET's service delivery across the five dimensions and leaders' suggestions for optimizing PSPNET across those dimensions. There is increasing emphasis on developing digital or online mental health programs targeting the needs of PSP (Moghimi et al., 2022; Voth et al., 2022); accordingly, results from the current study are expected to inform the implementation of both PSPNET and other interventions and resources targeted to PSP.

Context

At the time of data collection, PSPNET had made available two therapist-guided ICBT courses, in both English and French, to residents of Saskatchewan (beginning in December 2019), Quebec (beginning in June 2020), and Nova Scotia, New Brunswick, and Prince Edward Island (beginning in November 2021). One ICBT course was transdiagnostic (i.e., The PSP Wellbeing Course; see Hadjistavropoulos et al., 2021 for details), and one course was PTSD-specific (i.e., The PSP PTSD Course; see McCall et al., 2023 for details). Both courses have demonstrated good outcomes, with high client engagement, high treatment satisfaction, and moderate to large reductions in most types of symptoms for clients reporting clinically significant symptoms at pretreatment (Hadjistavropoulos et al., 2021; McCall et al., 2023). The Self-Guided PSP Wellbeing Course, which was virtually identical to The PSP Wellbeing Course but did not include therapist support, was also made available to PSP residing anywhere in Canada (beginning in December 2021). An English version of The Self-Guided PSP Wellbeing Course tailored to spouses or significant others of PSP (i.e., The Spouse or Significant Other Wellbeing Course) was also released for use across Canada (beginning in August 2022). At the time of data collection for the current study, PSPNET had offered services to over 900 PSP and 115 spouses or significant others. Within Saskatchewan specifically, 318 PSP had enrolled in a therapist-guided PSPNET ICBT course, 12 PSP had enrolled in the Self-Guided PSP Wellbeing Course, and 26 spouses and significant others had enrolled in the Spouse or Significant Other Wellbeing Course. Over 4000 PSP in Saskatchewan had attended a presentation on PSPNET delivered by a PSPNET team member.
Sample and recruitment

Data for the current study were collected from 10 PSP leaders residing in Saskatchewan, Canada between November 2022 and January 2023. Leaders from Saskatchewan were selected because Saskatchewan was the first province in which PSPNET implemented its services, and sufficient time (i.e., three years) had passed since initial implementation to answer the research questions. The sample was recruited by contacting leaders of organizations that had previously been contacted to promote PSPNET. Previous PSPNET outreach activities with these leaders included individual meetings, presentations, webinars, and submitting newsletter articles to PSP organizations. For the current study, PSP leaders were sent an email invitation with a consent form and a one-page summary of PSPNET and its key research findings. Potential participants were asked to schedule an interview at their discretion using an online scheduling tool called Coconut Calendar.

Email invitations were sent to 23 PSP leaders. The final sample included 10 leaders from the following sectors: EMS/paramedics (n = 2), fire (n = 2), municipal police (n = 2), Royal Canadian Mounted Police (n = 1), and border services (n = 1). Two leaders worked for organizations that included multiple public safety sectors, and two had some responsibilities in other provinces. The sample was comprised of six men and four women. The sample size is adequate for qualitative research (Boddy, 2016) and similar to that of prior qualitative interview studies with a similar purpose (e.g., Melia et al., 2021).

Data collection

Leaders took part in a semi-structured phone interview with author JDB (Olson, 2011). Phone interviews are known to allow for the participation of hard-to-reach populations, such as those within PSP organizations who have busy schedules (Saarijärvi and Bratt, 2021; Sturges and Hanrahan, 2004). Previous research has demonstrated that the use of phones to conduct interviews does not affect the quantity, quality, or content of data collected (Saarijärvi and Bratt, 2021; Sturges and Hanrahan, 2004). Prior to beginning the interview, leaders were asked if they had reviewed the consent form and verbally consented to participate. The interview guide included 18 questions to assess leaders' perceptions of PSPNET and to identify areas for improvement along the five RE-AIM framework dimensions (see Appendix A). Interviews lasted approximately 20 to 30 min and were recorded and transcribed verbatim by a professional transcription service.
Analyses

After identifying information was removed, transcripts were uploaded into the qualitative analysis software NVivo (QSR International NVivo 20 Qualitative Data Analysis Software, 2020). Data were coded using a directed content analysis (Hsieh and Shannon, 2005), which refers to a content analysis that is guided by a pre-existing theory or framework (Hsieh and Shannon, 2005). The RE-AIM framework dimensions were used as domains, and the interview data were coded into categories and subcategories within these dimensions. Data were coded by meaning units (Graneheim and Lundman, 2004). The data were coded by author JDB (who was near completion of a PhD in psychology at the time of analyses) and then reviewed and checked by a PSPNET clinical research associate (ID; see Acknowledgements), who held a master's degree in social work and provided feedback on the coding structure and identified coding categories. A final review of the data was carried out by author HDH, who holds a PhD in clinical psychology and has expertise in qualitative analysis. Changes were applied to the final coding structure and codebook based on the feedback received.

Qualitative research requires researcher reflexivity (Lazard and McAvoy, 2020). Author JDB has no history of working as a PSP, which may have made stakeholders feel less comfortable sharing their perspectives on PSPNET. JDB and ID are both affiliated with PSPNET and hold positive attitudes toward ICBT and its effectiveness, which may have biased interpretations of stakeholder responses. The current study was designed to describe stakeholder perspectives rather than identify underlying systems of meaning. As such, data were coded by searching only for overt themes in responses, potentially limiting the impact of positivity biases.

Reach and adoption

Assessment of whether leaders perceived PSPNET as successful in reaching PSP was strongly conceptually related to whether leaders perceived that PSPNET had been adopted and promoted by organizations, and results in the reach and adoption categories are therefore presented together. Overall, most leaders indicated that they perceived PSPNET to be successful in reaching PSP because information had been sent out to everyone within their organization. Leader #4 stated, "I think from our organizational perspective, everybody is aware of it". A couple of leaders indicated it is difficult to determine whether the information was actually received by PSP or whether PSP engaged with the information. One leader stated, "I know the emails have been sent to everybody, but how many people have agreed to read those, who've clicked the links, who've reached out, that I don't know. But, the broad information dissemination has happened…" (Leader #7).
Some leaders reported perceiving that specific PSP groups have not been reached as effectively as others. Leader #6 reported perceiving a need to find ways to reach former PSP who are no longer directly connected with an organization: "Another group that I don't know if you've reached, or is aware of who you are, are former PSP". One leader reported the perception that there are specific groups within their organization that are not reached as successfully as others, such as those not traditionally associated with frontline work (e.g., inspectors; training division personnel). There were mixed perceptions about whether older or younger PSP were reached. For example, Leader #4 reported that younger PSP are less likely to perceive a need for the program, having experienced relatively fewer PPTEs. Leader #9 reported that older PSP are harder to reach because they are more likely to have an "old school mindset" toward mental health.

All PSP leaders suggested that their organization has promoted PSPNET. Having a PSPNET team member give a talk or webinar on PSPNET was among the most commonly cited types of promotion. Organizations often reported having promoted PSPNET during meetings (e.g., critical incident stress management meetings; peer support meetings; internal team meetings). Other ways leaders described promoting PSPNET included mentioning PSPNET in their newsletters and mental health minutes, hanging up posters, and providing a link to PSPNET's website on their own websites or resource pages on social media.

Effectiveness

All PSP leaders reported the perception that PSPNET was making a difference in the individual lives of PSP. Participants also reported perceiving that having PSPNET available was beneficial for their organization. Almost all feedback was positive. A couple of leaders reported having received a small amount of feedback indicating that the services did not meet certain PSP's needs. Themes related to effectiveness are shared below.

No negative reports

According to PSP leaders, the absence of negative feedback itself represented positive feedback about PSPNET:

We have not heard anything bad. So what we're taking that as is that those that are connecting are getting what they need from PSPNET. We are hearing, on the reverse, though, we are hearing that our [other] program is not hitting par. So, I mean, if this one wasn't hitting par, I think we'd be hearing about it (Leader #2).

Helpful program

Overall, PSP leaders reported perceiving that the program is beneficial because it is helping PSP with mental health challenges. For example, Leader #8 reported, "Provinces that don't have it up and running at the moment are really excited to receive the service, because it really, I think on a personal level, it really does help." Another leader echoed this sentiment: "Generally speaking, I've heard people say that it's a very helpful program" (Leader #1).
Positive perceptions of PSPNET's ICBT characteristics

Overall, PSP leaders reported hearing positive feedback about several aspects of ICBT. They stated that PSP found PSPNET beneficial because of factors such as accessibility and convenience, how PSPNET is tailored to all PSP, the confidentiality of the program, how PSPNET is free to use (for both the individual and PSP organizations), and how therapists are knowledgeable about PSP and also offer different options for providing support. For example, Leader #3 highlighted the importance of knowledgeable therapists: "One of the biggest things, I'll be honest with you, is that our people feel comfortable knowing that they're getting a bona fide clinician and somebody that's privy to the emergency services world." In terms of access, Leader #6 reported, "many like the... the convenient access to PSPNET." Regarding confidentiality, Leader #2 stated, "They really thought it was good and that they absolutely trusted that their information wouldn't go any further." In terms of costs, Leader #1 reported, "I think that financial piece is probably one of the best benefits. Just because of the fact that we have so many different types of workplaces within our sector. And so that is extremely helpful".

Bridge to other care

One leader suggested that they heard from a PSPNET client that the program was helpful and also helped them seek out other forms of mental healthcare. Leader #5 reported, "There was one person that we referred and they came back and said, yeah, you know what, it helped. And it was a good bridge to continuing their care."

Increases availability and options for mental health treatment

Most leaders reported the perception that PSPNET was beneficial because it allowed them to provide another treatment option in addition to other available options. Some PSP leaders, for example, indicated that they believed PSPNET is beneficial because it increases availability and options for mental health programs for PSP. Leader #6 stated, "I think it just makes sense. It's another resource that we can refer people to".

Fills a gap in treatment needs

A couple of PSP leaders reported perceiving that PSPNET was filling gaps in treatment that their current programs were not meeting. One leader reported that the program filled a need for more treatment options because current programs were not available for some employees (e.g., contract workers and volunteers) who lacked benefits:

[Our organization] has benefits that the paramedics can access, which allow them all kinds of different services for their mental health. However, a lot of our contracted services have either no benefits, or limited, and then of course the volunteers (Leader #1).

Another leader reported perceiving that PSPNET was beneficial for their organization because the program is external to their organization, whereas other programs are internal:

We're a very low-trust organization. I know some people will not reach out for help internally. Because they do not have confidence that will be kept confidential… So, the fact that you're completely external and have nothing to do with any of us directly, I think, is... is a really critical element for people (Leader #7).
Increases awareness

All of the PSP leaders stated their perception that PSPNET is one piece of a larger initiative that has played a vital role in increasing awareness within their organizations about posttraumatic stress injuries, including anxiety, depression, and posttraumatic stress disorder, and other mental health issues. One leader stated, "It has absolutely improved awareness. No question about it. You know? In combination with all of the other things that have been happening. This has absolutely increased awareness" (Leader #2).

A few PSP leaders reported the perception that PSPNET is helpful for increasing awareness because it provides a specific action that PSP can take. In the words of Leader #1:

People, I think, are more willing to listen. Because they know that there's a solution. They know that there's help. And before that there was an awful lot of awareness about all of these things. Folks would talk about it but that was where it ended. So this is awareness and action attached.

Does not meet all PSP's needs

The only negative feedback that PSP leaders reported hearing was that some individuals felt that they needed more therapist interaction or that they would prefer a face-to-face service over PSPNET. Leader #9 identified that they heard primarily positive comments from several individuals except for one who felt they would prefer face-to-face support:

I think there's only one person that I recall that said that it didn't really work for them, or they didn't really like it. Everyone else was very positive about their experience with it. And the person that said that they didn't enjoy it, or it didn't, just wasn't for them, they just said they prefer kind of face-to-face because they have a therapist who they see also.

Implementation

When asked about PSPNET implementation, PSP leaders generally reported the perception that implementation success was based on two factors: the nature of ICBT and the promotional activities that were undertaken. Most leaders identified at least one intervention characteristic of PSPNET's ICBT that they believed made the program successful. Intervention characteristics that were perceived as contributing to successful implementation included accessibility and convenience, quality of psychoeducational information provided, knowledgeable clinician support, and no cost for enrolling in the program. Secondly, several PSP leaders reported perceiving that the program was successful because the promotional activities carried out by PSPNET team members were engaging and provided a personal contact with the program:

Close second is the fact that [PSPNET team member] is, you know, very, very engaging and has taken the time to speak to whomever and do the presentations and give that information freely and openly and have those discussions. Because when you actually have more than just an email, people get more out of it (Leader #1).

Maintenance

PSP leaders were supportive of PSPNET and wanted to see PSPNET maintained. All PSP leaders reported that the discontinuation of PSPNET would affect their organization and leave a gap in the services available for PSP. Many PSP leaders claimed that PSPNET had filled a gap in services that would be hard to replace with other programs. For example, Leader #7 reported:

I feel like that would be a gaping hole. Because you do provide a service that's anonymous, that's third-party, that's accessible. And I don't think there's anything else that would meet that for anybody. So I feel like that would be a big gap.
A couple of PSP leaders also reported the perception that the loss of PSPNET would create further barriers to promoting mental health services. Participants described the time required to develop trust with a service and indicated that having to start the promotion process over with a new program would set back efforts for promoting mental health services. Overall, PSP leaders reported perceiving that the availability of PSPNET is important and is filling a need for mental health services.

All PSP leaders reported the perception that PSPNET is an important program and that they would be willing and eager to continue to promote and advocate for PSPNET. In the words of Leader #3, "100%. 100%. Like I said, you're right now, you're number one in what I'm promoting". Similarly, Leader #2 replied, "Absolutely. Both locally here and provincially. Actually, not willing to. Eager to. How's that?" For a summary of PSP leader perceptions pertaining to each domain of the RE-AIM framework, see Table 1.

Creating trust and relatability

A major perceived facilitator for PSPNET was developing trust and rapport with PSP and making the materials relatable to PSP. PSP leaders emphasized their perception that the personal aspect of the promotions was key to making PSPNET successful. Leader #5 reported, "It's nice to have the face-to-face or a Zoom type thing. I think those were way more beneficial than just a poster or an email". Another way to increase relatability was identified by Leader #7, who mentioned that she tries to personalize PSPNET messages: "I've shared the vast majority of the messages and I try to put a context on them how it links to us, so it links to our work… I try to, like, make it relatable to everybody and consumable for them."

Growing recognition of the need for mental health supports and supportive leadership

Some PSP leaders reported perceiving that a major facilitator for reaching PSP was the growing recognition that PSP organizations need mental health initiatives. Leader #2 reported, "[PSPNET has] come at a really good time when that's such a salient issue. And police leaders are looking for ways to allow the members to access the services they need." Moreover, increased leadership and management support of mental health initiatives was viewed as a facilitator for promoting PSPNET. Leader #5 stated, "I mean, management has been great about allowing us to take time out of the training slots to promote mental health and then to promote PSPNET". Recognition and support of mental health issues by the organization and management was one of the most frequently cited facilitators for promoting PSPNET.

General barriers to mental healthcare (e.g., stigma, time, confidentiality)

PSP leaders described several perceived barriers to the rollout of PSPNET that are not necessarily specific to PSPNET but reflect general barriers to mental health services. The perceived barriers included issues such as continuing stigma about mental health problems, concerns about confidentiality, and not having the time to engage with services. These perceived barriers are illustrated in the following quote:

The barriers are always in our members' perceptions of do they need to reach out? Is it confidential? That's always a concern for them. But other than that, the only barriers would be the self-imposed barriers that people would put on themselves (Leader #2).
Information overload

Some PSP leaders reported that their organization is sending out information but indicated that the information may not be received because PSP receive a lot of information and do not open all emails they receive. As one leader stated, "Information overload. They get emails; they get messages. You know? Some of them we read, some we don't read, some we just move on to others" (Leader #2).

Organization-specific barriers and availability issues

PSP leaders also reported having perceived organization-specific barriers that made it difficult to promote PSPNET. One perceived barrier that some leaders mentioned was that the decentralization or vastness of their organization, with varied reporting structures, made promotion difficult. For example, Leader #1 illustrated this concept: "There's a barrier that we can't necessarily follow up on everyone. [There are a vast number of] services in the province. We don't know whether or not everyone has the information posted or if they're talking about it." Another perceived organization-specific barrier was technological issues, such as the blocking of hyperlinks.

A couple of PSP leaders described difficulties promoting PSPNET within their organization because therapist-guided ICBT is not available in all provinces and because some programs are not available in both English and French. These leaders were responsible for organizations that operated in jurisdictions larger than Saskatchewan, making accessibility an organization-specific barrier to promotion. For example, Leader #7 reported:

When I send out the messages, like I said, they go region wide. I was like OK, if you're in Saskatchewan this is what you're entitled to… And in Manitoba you'd probably go and do the online courses. So I don't know if that gets confusing for people… So, broader availability would be helpful.

Improving PSPNET

PSP leaders provided suggestions for improving the promotion, reach, and implementation of PSPNET. This feedback is provided in Table 2, along with the current status of PSPNET's efforts to respond to leaders' suggestions.

Discussion

ICBT has recently been tailored to PSP and found effective for treating symptoms of several mental disorders (Hadjistavropoulos et al., 2021; McCall et al., 2023). The current study was designed to explore perceptions of PSPNET among PSP leaders along the five dimensions of the RE-AIM evaluation framework: reach, effectiveness, adoption, implementation, and maintenance (Holtrop et al., 2018). The study used a qualitative approach to the RE-AIM framework, as this approach can provide insights into why programs are or are not effective and can help guide improvements to interventions as well as implementation efforts (Holtrop et al., 2018). Using a qualitative method helps provide an in-depth understanding of stakeholders' experiences. Understanding perceptions of effectiveness of ICBT for PSP among PSP leaders is critical because leaders can influence the uptake, implementation, and sustainment of services for PSP (Damschroder et al., 2022; Knaak et al., 2019; Milliard, 2020). To improve understanding of ICBT for PSP, the current study also explored what PSP leaders perceived to be facilitators and barriers to the implementation of ICBT for PSP, along with their perceptions of how ICBT for PSP could be improved. The results have implications primarily for the rollout of ICBT to PSP but may also serve to inform implementation of other mental health services for PSP.
Primary results and implications

Leaders reported generally positive perceptions of PSPNET, suggesting an implementation context conducive to successful ongoing implementation of ICBT tailored for PSP. Leaders also provided suggestions to further improve implementation. Some leaders reported perceiving that some PSP may not be reached as successfully as others, including retired PSP or volunteer PSP. For some PSP, leaders suggested that therapist support by email or phone may be insufficient. Previous PSPNET research has evidenced that leaders supported ICBT for PSP prior to implementation (McCall et al., 2021a) and after two years of implementation of PSPNET (Landry et al., 2023). The current study contributes to extant research on PSP leaders' perceptions of ICBT by demonstrating that leaders perceive a need for ICBT tailored for PSP and believe PSPNET's services are effective and beneficial for PSP and PSP organizations after seeing results from the services. The results are promising because leader support of programs is integral for promoting, implementing, and sustaining programs (Damschroder et al., 2022; Shelton et al., 2018), particularly programs within PSP populations (Knaak et al., 2019; Milliard, 2020). The results suggest PSP leaders are supportive of digital mental health interventions, at least within the context of ICBT tailored to and implemented in collaboration with PSP.

The current paper contributes to needed research exploring stakeholder perceptions of ICBT. Previous research has examined the perceptions of ICBT service providers, managers, and ICBT users (e.g., Duffy et al., 2023; Folker et al., 2018). The current study provides insights into perceptions of leaders of organizations whose workers may benefit from the use of tailored ICBT. This research can inform implementation efforts.

In studying facilitators of ICBT implementation, one implication of the current study is that providers of ICBT and other digital mental health interventions can facilitate successful implementation by building relationships and establishing trust with PSP populations. Building trust can include promotional activities such as presentations that provide a personal connection to the program and word of mouth within an organization. The results align with a study on peer support during reintegration after an occupational stress injury in a police organization, which suggested word of mouth would be a facilitator for implementing and reaching police with the program (Jones et al., 2022). The current results reflect previous research indicating PSP are often skeptical about mental health services (Jones et al., 2020; McCall et al., 2021a), underscoring the need to build trust in services through direct relationship development. PSP leaders suggested that promoting PSPNET was also facilitated by shifting attitudes within PSP organizations, which are highlighting mental health challenges as a salient issue. Moreover, PSP leaders reported perceiving tailoring as a valuable aspect of ICBT for PSP.
The above results are encouraging, but PSP leaders also highlighted some perceived barriers to promoting ICBT services, such as ongoing problems with stigma on an individual level. The ongoing evidence of stigma as a barrier suggests that ICBT, as a highly private treatment option, may be well poised to provide treatment to PSP who may avoid other treatments due to concerns about stigma. PSP leaders reported believing PSPNET helps to raise awareness about posttraumatic stress injuries, but more work is needed to reduce self-stigma and reach more PSP who struggle with mental health problems. Continued promotion of PSPNET, or similar mental health services within PSP organizations, may help with overcoming self-stigma, or individuals' lack of perceived need for mental healthcare or fears of seeking treatment. Some organization-specific barriers to promoting PSPNET existed within PSP organizations (e.g., communication can be difficult in large, decentralized organizations). Working directly with organizational leaders may help to overcome such barriers.

PSP leaders cited several characteristics of PSPNET's ICBT as perceived strengths that made the services effective and the implementation successful. Other ICBT or digital mental health providers should pay particular attention to these characteristics when designing and implementing services, as not all ICBT programs include them. First, PSP leaders suggested that they viewed therapist support as beneficial. PSPNET made efforts to ensure that PSPNET therapists were aware of the occupational duties of PSP and the occupational stressors that they face. Service providers who do not train ICBT therapists in these areas may not have as much success, particularly given PSP's skepticism about therapist cultural competence, PSP's beliefs that therapists will not understand them (Jones et al., 2020), and PSP's reports of past negative experiences with counsellors and therapists (McCall et al., 2021a). Second, PSP leaders reported perceiving that a strength of PSPNET was the fact that there were no costs for clients to access the program. Therefore, service providers may be able to improve reach by seeking out grants or other external funding rather than requiring clients or PSP organizations to pay for services. Third, the results showed that leaders perceived the tailoring of the program to PSP as beneficial. PSPNET put effort into tailoring the course (e.g., case stories and examples) through interviews and incorporating PSP feedback on the case stories. Potential service providers should note the effort it takes to make stories and examples relatable, ensure they allot the time and resources required for tailoring, and use input from actual PSP throughout this process. Fourth, PSP leaders reported perceiving that the confidentiality of PSPNET was a significant strength. Providers seeking to offer services to PSP should ensure that they have secure encryption in place and ensure confidentiality among all team members. Selling of data should be strictly prohibited (e.g., Hurler, 2022).

The current study results were also used to make iterative changes to PSPNET, which supports the use of feedback from leaders as a method of improving ICBT. Previous PSPNET research has used data from clients to improve PSPNET courses but did not provide insights into outreach or promotion (Beahm et al., 2022). Data from leaders in the current study provide complementary insights into facilitators, barriers, and improvements to outreach and promotion efforts.
Limitations and future research

The current study results may be influenced by selection biases. First, PSP leaders were contacted based on their previous connections with PSPNET. Therefore, the leaders selected were more likely to be supportive of PSPNET and to have faced fewer barriers to being able to promote PSPNET. The selection process may have limited opportunities to identify certain barriers or areas for improvement. Second, given our sample size and convenience sampling methods, the current study was neither able nor intended to identify broad tendencies in the favorability of perceptions of ICBT that might be generalized to other PSP leaders across Canada; rather, we sought to explore nuances in perceptions among our sample. Third, PSP leaders' responses may have been influenced by a response bias, as leaders may have been hesitant to report negative aspects of the program to a member of the PSPNET team. Throughout the interview process the interviewer attempted to manage response biases by emphasizing that feedback on areas for improvement was important to continuous improvement of PSPNET programs. The results nonetheless identified several factors acting as facilitators or strengths of ICBT for PSP. Future research can expand on identifying barriers by seeking out PSP leaders who have not promoted PSPNET in the past.

The current study's sample consisted of leaders from Saskatchewan, where PSPNET was first implemented, allowing for exploration of leader perspectives several years after initial implementation. There may be differences in perceptions of PSP leaders in other provinces, which warrants future research. For instance, there may be facilitators or barriers for reaching PSP that are regionally specific. There is also a need to assess PSP leaders' perceptions of ICBT or digital mental health interventions in other countries, as implementation climates and attitudes may vary.

The current study used a qualitative approach to the RE-AIM framework, which helped to highlight PSP leaders' perceptions of the program and identify areas for improvement. Future research may address the domains of the RE-AIM framework using quantitative data. For example, research could evaluate the reach domain by considering the percentage of PSP who report mental health concerns compared to those who use PSPNET. PSPNET outreach data could also be used to assess the percentage of PSP organizations within Saskatchewan, or other provinces, that have accepted promotional materials, met with PSPNET members, or booked presentations by a PSPNET team member. Utilizing a quantitative approach could complement the results of the current study. Finally, future research could explore how tailored ICBT is perceived by leaders in various types of PSP and other occupations.
Conclusion

PSPNET has recently tailored ICBT to meet the needs of PSP. Prior research has shown that PSP have reported favorable perceptions of ICBT tailored for PSP and demonstrated promising clinical outcomes. The current study expanded on prior research on ICBT for PSP by exploring perceptions of ICBT among PSP leaders using the RE-AIM framework. It was important to explore leaders' perceptions of ICBT because they have significant potential to influence PSP's uptake of ICBT and were well poised to provide insights and suggestions to help facilitate more successful implementation efforts. The study results suggested PSPNET is perceived by leaders as reaching and effectively serving PSP and that PSP organizations have been promoting (adopting) and are eager to continue to promote (maintain) PSPNET. Leaders also perceived implementation as successful, especially in terms of offering the service for free and increasing access to therapists who have specialized knowledge in working with PSP. Perceived facilitators for reaching PSP included building relationships with PSP and an organizational environment that is supportive of mental health initiatives. Despite support for mental health initiatives from PSP leaders, leaders reported perceiving that PSP still experience stigma, which prevents uptake of the program. PSP leaders have reported the perception that promotions of PSPNET have helped reduce stigma and increase awareness about posttraumatic stress injuries and mental health issues, suggesting that continued promotions may further reduce stigma and lead to increased uptake and reach of services. In terms of effectiveness, PSP leaders reported viewing PSPNET as effective for individual PSP and beneficial for their organizations. According to PSP leaders, the greatest strengths of PSPNET are the characteristics of PSPNET's ICBT and outreach presentations by the PSPNET team. PSP leaders also offered ideas for improving PSPNET. Some of the changes are already underway, demonstrating a need for improved communication with leaders so that they better understand ways PSPNET services are being improved. The current study results can benefit other service providers seeking to offer ICBT or digital mental health services to PSP, as the results indicate ICBT is viewed as beneficial and filling a service gap. The results also provide insights for reaching PSP and promoting services.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Table 1 Summary of leader perceptions pertaining to each domain of the RE-AIM framework. Maintenance: Discontinuation of PSPNET would leave a gap in services available to PSP; loss of PSPNET would create further barriers to promoting mental health services; PSP organizations would be willing and eager to promote and advocate for maintenance of PSPNET.

Table 2 Suggestions for improving PSPNET. Among the team's responses: The PSPNET team offers a flexible timeline from 8 to 16 weeks of therapist support and has found that most clients complete the course within this timeline; therefore, the team does not intend to adjust the timeline but will ensure therapists remind clients of the flexibility of the program. Audio content is being added to all programs.
CoviLearn: A Machine Learning Integrated Smart X-Ray Device in Healthcare Cyber-Physical System for Automatic Initial Screening of COVID-19

The pandemic of novel Coronavirus Disease 2019 (COVID-19) is widespread all over the world, causing serious health problems as well as a serious impact on the global economy. Reliable and fast testing for COVID-19 has been a challenge for researchers and healthcare practitioners. In this work, we present a novel machine learning (ML) integrated X-ray device in a Healthcare Cyber-Physical System (H-CPS), or smart healthcare framework (called "CoviLearn"), to allow healthcare practitioners to perform automatic initial screening of COVID-19 patients. We propose convolutional neural network (CNN) models of X-ray images integrated into an X-ray device for automatic COVID-19 detection. The proposed CoviLearn device will be useful in detecting whether a person is COVID-19 positive or negative by considering the chest X-ray image of the individual. CoviLearn will be a useful tool for doctors to detect potential COVID-19 infections instantaneously without taking more intrusive healthcare data samples, such as saliva and blood. COVID-19 attacks the endothelium tissues that support the respiratory tract, and X-ray images can be used to analyze the health of a patient's lungs. As all healthcare centers have X-ray machines, it could be possible to use the proposed CoviLearn X-ray approach to test for COVID-19 without special test kits. Our proposed automated analysis system CoviLearn, which has 98.98% accuracy, will be able to save the valuable time of medical professionals, since conventional X-ray interpretation has the drawback of requiring a radiology expert.

Introduction

Coronavirus disease (COVID-19) is a respiratory tract infectious disease that has spread across the world [1]. It belongs to a family of viruses whose infection can cause complications that vary from the typical cold to shortness of breath [2]. Patients also develop pneumonia, termed Novel Coronavirus Pneumonia (NCP), that results in acute respiratory failure with a very poor prognosis and high mortality [3,4]. Subsequently, the pandemic nature of the coronavirus and the absence of reliable vaccines make COVID-19 diagnosis an urgent medical crisis. At present, the standard testing method for COVID-19 diagnosis is the real-time Reverse Transcription Polymerase Chain Reaction (rRT-PCR) test. In this test, a nasal swab is collected from the patient and kept in a special medium called the "virus transport medium" to protect the RNA. Upon reaching the lab, the swab is further processed to determine whether or not the patient is positive for the coronavirus [5]. The entire process takes several hours, and the results generally arrive after a day or two depending on the time taken for the swab to reach the lab. The spread of the COVID-19 virus at this point advocates the requirement of its quick diagnosis and treatment. Studies such as [6,7] have proved that the COVID-19 virus infects the lungs and creates smooth and thick mucus in the patient's affected lungs that is visible when chest X-rays and CT scans are performed. However, the analysis of X-ray images is a tedious task and requires expert radiologists. In this endeavor, several computer algorithms and diagnosis tools such as [8,9] have been proposed to get detailed insights from X-ray images. Although these studies have performed efficiently, they lack in terms of higher accuracy, generalization, computational time, and error rate.
To mitigate the shortcomings, recent studies such as [10-13] have incorporated machine learning (ML) and deep learning (DL) tools to investigate chest X-ray images. The selection of a proper DL-based automated analyzer and predictor for coronavirus patients will be very beneficial and helpful for the medical department and society. Additionally, ML-DL approaches can provide test results faster and more economically as compared to laboratory-based tests. Furthermore, as COVID-19 is spreading rapidly through person-to-person contact, hospitals and healthcare professionals are becoming increasingly overburdened, sometimes to the point of complete breakdown. Clearly, an alternative, remote-based, online diagnostic and testing solution is required to fill this urgent and unmet need. The Internet of Medical Things (IoMT) could be extended to achieve this healthcare-specific solution. With this motivation, the present work proposes an AI-based Healthcare Cyber-Physical System (H-CPS) that incorporates convolutional neural networks (CNNs) (see Fig. 1). The model allows healthcare practitioners to promptly and automatically screen positive and negative COVID-19 patients by considering their chest X-ray images.

The organization of the paper is as follows: "Related prior research works" discusses the working of existing COVID-19 detection models, their shortcomings, and our contributions in the H-CPS framework. "Proposed CoviLearn model for automatic initial screening of COVID-19" explains the proposed solution and its functioning, followed by "Performance evaluation", which validates the model using real-life data. Finally, "Conclusions and future scope" gives a compact conclusion and mentions areas of future study.

How Existing Research Models Function

Over the course of 2 years, many techniques have been proposed for effective COVID-19 detection [14]. However, from the exhaustive list of works, we have selected some of the state-of-the-art methods focusing only on deep learning based COVID-19 detection. A CNN called COVIDNet was trained in [15] using more than 15,000 chest radiography images of COVID-19 positive and negative cases. The deep neural network (DNN) reported an accuracy of 92.4% and a sensitivity of 80%. A three-dimensional convolutional ResNet-50 network, termed COVNet, was proposed in [16] that utilized volumetric chest CT images consisting of community-acquired pneumonia (CAP) and other non-pneumonia cases. The AUC metric reported by the model was 0.96. A similar ResNet-50 model proposed by [17] reported an AUC of 0.996, although it was tested on a much smaller dataset. In [18], a location-attention network using ResNet-18 was proposed using disparate CT samples from COVID-19 patients, influenza-A infected, and healthy individuals to classify COVID-19 cases, which reported an accuracy of 86.7%. Samples from 4 classes (healthy, bacterial pneumonia, non-COVID-19 pneumonia, and COVID-19) were used in [19] to train dropweight-based Bayesian CNNs, which reported an accuracy of 89.92%. In [20], a modified Inception transfer-learning model was proposed that reported an accuracy of 79.3%, a specificity of 0.83, and a sensitivity of 0.67. In [21], a multilayer perceptron combined with an LSTM neural network was implemented, trained using clinical data collected from 133 patients, of which 54 belonged to the critical care domain.
The authors in [22] implemented a two-dimensional deep CNN architecture, while the authors in [23] combined three-dimensional UNet and ResNet-50 architectures. Both were trained using volumetric CT-scanned data of patients categorized as COVID-19 positive and negative. The method in [24] used a pre-trained ResNet-50 network with chest X-ray images from 50 COVID-19 positive and 50 COVID-19 negative patients and reported an accuracy of 98%. In [25], four state-of-the-art DNNs (AlexNet, ResNet-18, DenseNet-201, and SqueezeNet) were ensembled. The model also used chest X-ray images of normal, viral pneumonia, and COVID-19 cases. A novel CNN augmented with a pre-trained AlexNet using transfer learning was proposed in [26]. The model was tested on both X-ray and CT-scanned images with reported accuracies of 98% and 94.1%, respectively.

Shortcomings in the Existing Research Works

Although the domain is very new and many studies pertaining to deep learning-based methodology have been proposed, most of them suffer from shortcomings such as lower accuracy, poor model generalization, high computational cost, and high error rate. Even when certain research works achieve higher accuracy, they either suffer from lower sensitivity or specificity or have a small test dataset. Moreover, the prospect of augmenting IoMT frameworks with COVID-19 diagnosis is new, and its incorporation can further assist the existing healthcare system to cope in these difficult times. Also, the training dataset for certain methods is limited because of class imbalance, that is, a smaller number of coronavirus images as compared to normal lung images. This problem of dataset imbalance results in lower model accuracy and reduced efficiency. Table 1 provides a comprehensive comparison of the existing research works.

Our Vision of CoviLearn in the H-CPS Framework

We propose an AI-based H-CPS framework termed "CoviLearn" to provide healthcare professionals the leverage to perform automatic screening of COVID-19 patients using their chest X-ray images. With a deep neural network (DNN) at its core, the CoviLearn model is implemented on the server for ubiquitous deployment. The hyperparameters of the DNN have been adjusted to make its functioning reliable, accurate, and specific. By just uploading the X-ray images, the model automatically identifies the symptoms and reports unbiased results. CoviLearn augmented with H-CPS brings patients, doctors, and the test lab into a single smart healthcare platform, as illustrated in Fig. 2. The reported results can be uploaded to the IoMT platform, from where they may be transferred to nearby COVID-care hospitals, the Center for Disease Control (CDC), and state and local health bureaus. Hospitals could subsequently offer online health consultations based on the patient's condition and monitor vital equipment and quarantine requirements. Therefore, the proposed H-CPS gives people the leverage to dynamically monitor their disease status, receive proper medical care, and eventually curb the spread of the virus.

Novel Contributions of CoviLearn

The major contributions of the work are:

-An architecture of the H-CPS framework augmented with a next-generation smart X-ray machine architecture at the interface is proposed to combat the spread of COVID-19.
-An efficient heuristic search technique is incorporated which automatically finds an optimal feature subset present in the input chest X-ray images.
-An end-to-end automatically functioning DNN model that extracts the features from X-ray images is incorporated.
-The CNN blocks are reliable, accurate, and very specific, which makes the overall model very effective. Furthermore, the model can be easily integrated into embedded and mobile devices, thereby assisting health practitioners to effectively diagnose COVID-19.

Proposed CoviLearn Model for Automatic Initial Screening of COVID-19

The CoviLearn Device for Next-Generation X-ray Screening

As discussed in the earlier sections, COVID-19 and other related pneumonia diseases can be screened and diagnosed by analyzing chest X-ray images. However, existing X-ray diagnosis suffers from limited access and a lack of experienced personnel. To address this issue, we propose a next-generation X-ray system from the H-CPS perspective. The H-CPS and IoMT together bring all the necessary agents of smart healthcare into a universal communication and connectivity platform. This linking of technologies extends efficient services such as telemedicine and teleconsultation, and endorses smart medical care. Figure 3 shows the system-level block diagram of the next-generation X-ray machine integrated with CoviLearn for automatic screening of infectious diseases. It identifies most of its components, such as the X-ray apparatus (tube), flat panel detector, onboard memory, DICOM protocol converter, image processing, CoviLearn diagnosis, wired/wireless data communication, and display or user interface, along with the system controller. In the proposed X-ray machine, the X-ray image is captured by an array of sensors in the digital radiography flat panel detector. The flat panel also includes the devices for communication with the next stages. The image is then saved and converted to a DICOM X-ray image. Subsequently, the image is processed, and based on the quality and requirements, the exposure of the X-ray tube is adjusted. The captured image is stored temporarily in the local memory, after which it is displayed on the monitor screen with the help of the controller. After the quality-assured image is acquired, it is transferred to the CoviLearn model, which automatically classifies the image as either normal or COVID-19 affected. The image classification is performed either locally, in the presence of sufficient resources, or on the cloud by transmitting the images over the network. The test results automatically synchronize with the H-CPS platform for necessary medical and administrative actions. The controller unit is responsible for controlling the entire sequence of events.

Fig. 3: The proposed next-generation X-ray device of CoviLearn integrated with machine learning models.

Dataset Used for Validating the Proposed CoviLearn System

To overcome the problem of class imbalance, we have manually collected chest X-rays of patients having coronavirus. These images are from various resources such as pyimagesearch, radiopedia, sirm, and eurorad. For the normal chest X-rays, we have used the chest X-ray dataset from the National Institutes of Health (NIH), USA [27]. The count of images from both sources was 250. Subsequently, the dataset has been divided into two classes: patients diagnosed as COVID-19 positive and negative. For training, 80% of the dataset (~200 images) is used, from which 30% is used for validation (~60 images). The testing of the model is performed on 20% (~50 images) of the dataset. Based on this validation dataset, the loss and validation graphs have been plotted. All the images are processed and mixed to prevent undue biasing, as discussed in the following subsections.
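To make the split just described concrete, the following is a minimal sketch, not from the paper, of the 80/20 train-test division with a 30% validation subset carved from the training portion; the array names and the use of scikit-learn are illustrative assumptions.

```python
# Hypothetical sketch of the dataset split described above.
# `images` and `labels` are assumed to hold the 250 chest X-rays and their
# binary classes (0 = normal, 1 = COVID-19); scikit-learn is assumed available.
from sklearn.model_selection import train_test_split

# Hold out 20% (~50 images) for testing, stratified to keep class balance
x_train, x_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.20, stratify=labels, random_state=42)

# Carve 30% of the training portion (~60 images) out for validation
x_tr, x_val, y_tr, y_val = train_test_split(
    x_train, y_train, test_size=0.30, stratify=y_train, random_state=42)
```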
Data Pre-processing

All the captured images have different sizes, and therefore data pre-processing was essential before further analysis. The pre-processing is performed in three stages: first, the individual data are normalized by subtracting the mean RGB values; second, all the pixels in the input image data are scaled within the range of 0 to 1; finally, the tensor is reshaped appropriately so that it fits the model (in this case, the tensor is reshaped into 224 × 224 pixels).

Data Augmentation

Deep learning models are ravenous for data, and since our model only has around 250 images for each class, the volume of our data needs to be increased, which can be achieved through data augmentation. Therefore, similar to the process mentioned in [28], the input images are augmented by random crop, contrast adjustment, flip, rotation, brightness adjustment, horizontal-vertical shift, aspect ratio change, random shear, zoom, and pixel jitter. As a result of this augmentation, the proposed CoviLearn system became more efficient.
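As an illustration of the two subsections above, here is a minimal TensorFlow/Keras sketch of such a pre-processing and augmentation pipeline; the specific parameter values are assumptions for illustration, not the authors' exact settings.

```python
# Illustrative pre-processing and augmentation, assuming TensorFlow/Keras.
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

IMG_SIZE = (224, 224)  # target tensor size used by CoviLearn

def preprocess(image):
    """Resize, subtract the mean RGB value, and rescale pixels to [0, 1]."""
    image = tf.image.resize(image, IMG_SIZE).numpy().astype("float32")
    image -= image.mean(axis=(0, 1))  # per-channel mean-RGB subtraction
    image = (image - image.min()) / (image.max() - image.min() + 1e-8)
    return image

# Augmentation covering several of the listed operations (values illustrative)
augmenter = ImageDataGenerator(
    rotation_range=15,
    width_shift_range=0.1,   # horizontal shift
    height_shift_range=0.1,  # vertical shift
    shear_range=0.1,
    zoom_range=0.1,
    brightness_range=(0.8, 1.2),
    horizontal_flip=True,
)
```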
The Proposed Transfer Learning for Deep Neural Network in CoviLearn

CoviLearn uses transfer learning to predict the classification results. Transfer learning substitutes for the requirement of a large dataset and has been used in different applications, such as healthcare and manufacturing. It uses the knowledge learned in training on a large dataset and transfers that same knowledge to a different and smaller dataset. In the present work, four different DNNs are used: ResNet-50, ResNet-101, DenseNet-121, and DenseNet-169, along with different blocks to train the individual networks. The hyperparameters have been adjusted to report the highest accuracy. The detailed structural organization of the network layers is illustrated in Fig. 4, where each network is divided into phases, starting from an image input, followed by training the model by sequentially passing the set of images into convolutional networks, to finally predicting the results using a classification layer. The following subsection discusses the base classifiers and the differences between them.

Deep Neural Base Classifiers

The CoviLearn model uses four deep neural networks as the base classifiers. Two of these belong to the ResNet family [29] (ResNet-50 and ResNet-101) and the remaining two belong to the DenseNet family [30] (DenseNet-121 and DenseNet-169). As convolutional neural networks become deeper, the back-propagated error from any layer is required to traverse the entire depth, where repeated weight multiplications occur. As a result of these multiplications, the original error significantly diminishes and the neural network's performance is adversely affected. To combat this, researchers have proposed many architectures, of which the current state of the art includes the DenseNet and ResNet models.

In a traditional CNN, the output feature map P_l of the l-th layer is obtained from the preceding layer alone through a transformation T_l, as in Eq. (1):

P_l = T_l(P_{l-1}).   (1)

DenseNet, or Dense Convolutional Network, solves the vanishing-error problem using shorter connections between the layers; inside the DenseNet network, each layer is connected to all its higher layers, so that

P_l = T_l([P_0, P_1, P_2, …, P_{l-1}]).   (2)

This arrangement allows feature reuse without the features having to travel the entire depth of the network. In comparison to a traditional CNN, DenseNet requires fewer parameters, because features learned in one layer are sent to the higher layers, thereby eliminating redundancy. A typical DenseNet architecture involves a convolution layer followed by a pooling layer, which are followed by 4 dense blocks and 3 transition blocks placed one after the other. Inside the dense block, there are two convolutional layers with filters of different sizes, while the transition layer involves an average pooling layer. The dissimilarity between the DenseNet-121 and DenseNet-169 networks is with respect to the number of hidden layers: for the former, the total number of convolution layers in the four dense blocks is 121, while for the latter it is 169. Increasing the layers does not necessarily improve the accuracy and depends upon the particular situation.

Residual Networks, or ResNet, solve the problem of vanishing gradient descent by utilizing a skip connection between the original input and the final convolution layers. Overlooking the in-between layers and attaching the given input directly to the output provides an additional path for the back-propagated error to flow, thereby solving the problem of vanishing gradient descent. For a ResNet, the equation changes to

P_l = T_l(P_{l-1}) + P_{l-1}.   (3)

A typical ResNet architecture involves four stages. The first stage is responsible for performing a zero-padding operation on the input data. The second stage is made up of convolutional blocks that perform convolution along with batch normalization and max pooling. The penultimate stage consists of identity blocks augmented with filters, followed by the final stage that comprises a GAP layer, a fully connected dense layer, and a classifier function. All convolution layers use ReLU as the activation function. Similar to DenseNet, the two types of ResNet, that is, ResNet-50 and ResNet-101, differ in the depth of the network. It has been observed that certain variations of ResNet have redundant layers that barely contribute; their presence results in ResNet handling larger numbers of parameters and weights. On the other hand, DenseNets are relatively narrow (fewer filters) and simply add the new feature maps. Another difference between the DenseNet and ResNet models is that the former does not sum the output feature maps of the preceding layers but rather concatenates them, unlike the latter, where summation happens. This is evident from Eqs. (2) and (3).

Training and Testing of the Proposed Model

The CoviLearn model takes the input image, swaps the color channels, and resizes it to 224 × 224 pixels. Afterwards, the data and label lists are converted into arrays, while the pixel intensities are normalized between 0 and 1 by dividing the entire input image by 255. Subsequently, one-hot encoding is performed on the labels, following which the various models are loaded one at a time by freezing a few upper layers, and a base layer is created with dropout. Finally, the input tensor of size 224 × 224 is loaded onto the model and compiled using the Adam optimizer and binary cross-entropy loss.

Experimental Setup

To compare the performance of the different models, three evaluation parameters have been considered: accuracy, sensitivity, and specificity (see Table 2). As the test images are converted into 224 × 224 tensors, the model predicts the above-mentioned three metrics.

Result Analysis

In the context of coronavirus detection, True Positive (TP) is when the patient has coronavirus and the model detects coronavirus, and True Negative (TN) is when the patient does not have coronavirus and the model also predicts the same. False Positive (FP) is when the patient is not infected with coronavirus but the model predicts coronavirus, and False Negative (FN) is when the patient is infected but the model predicts otherwise. Accuracy is defined by Eq. (4):

Accuracy = (TP + TN) / (TP + TN + FP + FN).   (4)

Additional metrics such as sensitivity (the ability to identify coronavirus patients correctly) and specificity (the ability to identify non-coronavirus patients correctly) are defined by Eqs. (5) and (6), respectively:

Sensitivity = TP / (TP + FN),   (5)

Specificity = TN / (TN + FP).   (6)
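For clarity, Eqs. (4)-(6) translate directly into a small helper such as the following; it is a hypothetical illustration, not part of the paper's code, and the example counts are made up.

```python
# Hypothetical helper implementing Eqs. (4)-(6).
def classification_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)  # Eq. (4)
    sensitivity = tp / (tp + fn)                # Eq. (5): true positive rate
    specificity = tn / (tn + fp)                # Eq. (6): true negative rate
    return accuracy, sensitivity, specificity

# Example with made-up counts from a 50-image test set
print(classification_metrics(tp=24, tn=24, fp=1, fn=1))
```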
Figure 5 shows the confusion matrices of COVID-19 and normal test results for the different pre-trained models. The graphs show a well-defined pattern of the training-validation accuracy, which increases, and the training-validation loss, which decreases, with increasing epochs. Because of the limited computational resources, the comparison between the different parameters is done for 25 epochs only. Besides the confusion matrices, receiver-operating characteristic (ROC) curve plots and areas for each model are given in Fig. 6. The DNNs trained with DenseNet pre-trained blocks perform considerably better than the DNNs trained with ResNet blocks, with DNN III having the highest AUC of 99%. One of the interesting findings is that the DNN built on the DenseNet model achieves higher sensitivity and specificity, which ensures the reduction of false positives for both the COVID-19 and the healthy classes. As is evident from the relationship between accuracy and epochs, DNN-III shows the highest accuracy, followed by DNN-IV, DNN-II, and DNN-I. The accuracy increases with each subsequent epoch except at a few, as illustrated in Fig. 7. A similar trend is shown in the loss graphs, where the loss decreases with each subsequent epoch and the same ordering is followed; that is, DNN-III shows the lowest loss, followed by DNN-IV, DNN-II, and DNN-I (see Fig. 8).

The results reported by the proposed CoviLearn model are compared with the existing research works in Table 3. The model in [18] detects COVID-19 by classifying CT samples with CNN models, with an accuracy of 86.7%, sensitivity of 98.2%, and specificity of 92.2%. COVIDNet in [15] reported an accuracy of 93.3%. The CNN-based DarkCovidNet model [31], which detects COVID-19 from chest X-rays, has an accuracy of 98.08%. In comparison, the proposed model has an accuracy of 98.98%, sensitivity of 0.984, and specificity of 0.965. CoviLearn has significantly outperformed existing deep learning-based COVID-19 detection techniques such as [15, 18-20, 23]. The proposed model has also outperformed existing models such as [15, 20, 23] in terms of both sensitivity and specificity. The models in [17, 21, 24] achieved similar accuracy; however, their test dataset sizes are relatively smaller than the one used in the current work. The deep neural architectures proposed in [25, 26] involved many hyperparameters, estimation of which increased the overall computation cost and hindered ubiquitous deployment. On the other hand, CoviLearn, because of its transfer-learning ability and selected deep neural networks, has the advantage of rejecting redundant parameters and thereby reducing the overall computational cost. Finally, all these models lacked a smart healthcare framework, which has been proposed and implemented in CoviLearn in the form of H-CPS. The comparison with existing research works is compactly summarized in Tables 3 and 4.

Effectiveness of the Transfer-Learning Concept

The initial neural network, when trained, reported accuracy, sensitivity, and specificity values of 0.5981, 0.6041, and 0.5923, respectively. To improve these substantially, we used the concept of transfer learning. This is done by freezing the layers of the existing models and replacing the penultimate layer (the layer responsible for performing classification) with that of state-of-the-art neural networks trained on larger datasets to perform the final classification. This step improved the accuracy, sensitivity, and specificity metrics to 0.9225, 0.9319, and 0.9135, respectively. Following this step, fine-tuning is performed on the model's hyperparameters to further improve the model's performance by ~5%. Therefore, despite a small training dataset of 250 images, embedding transfer learning helped improve the model's classification performance significantly. Table 5 compares the metrics obtained in each of the stages.
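A minimal sketch of this freeze-and-replace step follows, assuming a Keras DenseNet-121 backbone pre-trained on ImageNet and the Adam/binary cross-entropy setup described earlier; the layer sizes and learning rate are illustrative assumptions rather than the authors' exact configuration.

```python
# Hypothetical freeze-and-replace transfer-learning sketch (Keras assumed).
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional blocks

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),                    # dropout head, as described
    layers.Dense(2, activation="softmax"),  # COVID-19 vs. normal (one-hot)
])

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(x_tr, y_tr_onehot, validation_data=(x_val, y_val_onehot), epochs=25)
```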
Conclusions and Future Scope

The study presents CoviLearn, a DNN-based transfer-learning approach in a Healthcare Cyber-Physical System framework to perform automatic initial screening of COVID-19 patients using their chest X-ray image data. An architecture of a next-generation smart X-ray machine for automatic screening of COVID-19 is proposed at the interface of the H-CPS. The accuracies of the models built on ResNet-50 and ResNet-101 are close to 97%, and the highest specificity, that of DNN III, is 98%. Therefore, all these results clearly indicate the ability to classify the deadly coronavirus correctly. The present CoviLearn platform will be a very useful tool for doctors to diagnose coronavirus disease at a lower cost, being both economical and automatic. However, additional study and medical trials are required to establish the features extracted by machine learning as reliable biomarkers for COVID-19. Furthermore, these machine learning models can be extended to diagnose other chest-related diseases, including tuberculosis and pneumonia. A limitation of the study is the use of a limited number of COVID-19 X-ray images. Therefore, in the future, a larger dataset and a cloud-based system can be ventured to make the model ubiquitous and more robust. In fact, the results can be used to detect highly prone corona-positive patients for timely application of quarantine measures until the rRT-PCR test results are obtained. The proposed CoviLearn can be added to our healthcare CPS framework CoviChain for reliable information sharing from the source to the destination end while accommodating various stakeholders [34].

Comparison of existing models with the proposed CoviLearn (accuracy and dataset size):

Study                  | Accuracy (%)           | Images
Wang et al. [15]       | 92.4                   | 15,000
Xu et al. [18]         | 86.7                   | 618
Ghoshal et al. [19]    | 89.92                  | 5,941
Wang et al. [20]       | 79.3                   | 1,065
Jin et al. [22]        | 94.98                  | 2,355
Narin et al. [24]      | 98                     | 100
Chowdhury et al. [25]  | 98.3                   | 2,876
Maghdid et al. [26]    | 98 (X-ray), 94.1 (CT)  | 531
CoviLearn              | 98.98                  | 250
Coagulopathy and Extremely Elevated PT/INR after Dabigatran Etexilate Use in a Patient with End-Stage Renal Disease

Abstract. Introduction. Dabigatran is an oral direct thrombin inhibitor which has been approved for prophylaxis of stroke in patients with atrial fibrillation. The use of dabigatran etexilate increased rapidly due to its many benefits. However, questions have been raised constantly regarding the safety of dabigatran etexilate. Case. A 58-year-old Caucasian male with a history of recurrent paroxysmal atrial fibrillation status post pacemaker placement and end-stage renal disease on hemodialysis came to the Emergency Department with the complaint of severe epistaxis. He had been started on dabigatran 150 mg twice a day about 4 months earlier as an outpatient by his cardiologist. His prothrombin time (PT) was 63 seconds with an international normalized ratio (INR) of 8.8, and his activated partial thromboplastin time (aPTT) was 105.7 seconds. Otherwise, all labs were unremarkable, including the liver function tests. Dabigatran was stopped immediately. His INR and aPTT trended downward, reaching normal levels 5 days after admission. Conclusion. Dabigatran is contraindicated in patients with severe kidney insufficiency as it is predominantly excreted via the kidney (~80%). Elderly patients over 75 and patients with chronic renal impairment should be carefully evaluated before starting dabigatran. Despite studies showing only a mild increase in aPTT and PT/INR in patients receiving dabigatran, close monitoring may be reasonable in patients with renal insufficiency.

Introduction

Dabigatran etexilate is a novel oral anticoagulant approved by the Food and Drug Administration (FDA) for stroke prophylaxis in patients with nonvalvular atrial fibrillation (AF). Since approval, the use of dabigatran etexilate has increased substantially. Nearly 17 percent of patients with nonvalvular AF were started on dabigatran etexilate within just one year of approval [1]. A recent study showed that approximately 725,000 patients in the United States have been on dabigatran etexilate [1]. However, questions have been raised consistently regarding the safety of dabigatran etexilate. Here, we present a case of dabigatran etexilate-induced coagulopathy with extremely increased PT/INR in a patient with end-stage renal disease (ESRD).

Case Presentation

A 58-year-old Caucasian male with a history of recurrent paroxysmal AF came to the Emergency Department (ED) with the complaint of epistaxis. He had a history of end-stage renal disease (ESRD) on hemodialysis. His cardiologist had started him on dabigatran etexilate 150 mg twice a day about 4 months earlier. He was previously on warfarin, but side effects including multiple episodes of minor epistaxis and gastrointestinal bleeds requiring transfusions warranted the switch to dabigatran etexilate. His CHADS2 score was 5, supporting the need for anticoagulation to prevent future stroke events [2]. Since being started on dabigatran etexilate, he had been tolerating it except for minor epistaxis. On the day of ED presentation, the patient awoke to find himself in a pool of blood. His vital signs were unremarkable on arrival to the hospital. Because of persistent epistaxis, an inflatable balloon epistaxis device was placed in the right nostril in the ED, with good hemostasis. He was admitted to the hospital for monitoring and further work-up.
Abnormal labs at the time of admission included a prothrombin time (PT) of 63 seconds, an INR of 8.8, an activated partial thromboplastin time (aPTT) of 105.7 seconds, and elevated BUN and creatinine of 73 mg/dL and 4.12 mg/dL, respectively. His hemoglobin and hematocrit were frequently checked, and they remained stable around 12 g/dL and 37%, respectively, not requiring any pRBC transfusions. The patient had not missed any dialysis session prior to admission. The supratherapeutic INR was thought to be secondary to dabigatran etexilate, and the medication was held. Other possible causes of supratherapeutic INR were excluded, including Vitamin K deficiency and severe liver disease, as laboratory values showed normal liver function tests (LFT), albumin, and Vitamin K levels. He was given fresh frozen plasma (FFP), and ENT was consulted for additional packing. As dabigatran etexilate was a new anticoagulation agent at the time, the hospital did not have a reversal protocol for dabigatran etexilate toxicity in place, and thus FFP was used. He remained stable clinically, and the INR and aPTT trended downward after holding the dabigatran and continuing his scheduled dialysis session the following day. The INR was 1.7 at the time of discharge, and his aPTT had normalized. After a 5-day hospital stay, he was discharged. He went home without anticoagulants, as his recurrent bleeds were thought to be a substantial morbidity risk outweighing the benefit of stroke prevention.

Discussion

Oral anticoagulation is an important part of long-term AF management to prevent embolic stroke and other systemic thromboembolic diseases. For decades, warfarin or other oral Vitamin K antagonists were the main anticoagulants used. However, with the narrow therapeutic index and multiple drug and food interactions associated with warfarin, an alternative was needed. Dabigatran etexilate was the first novel oral anticoagulant approved by the FDA for stroke prophylaxis in nonvalvular AF [8]. Since its approval, dabigatran use has increased substantially. Nevertheless, concern about its safety has been raised consistently. Dabigatran etexilate is absorbed across the gastrointestinal (GI) wall by p-glycoprotein [9] and consequently converted by esterases to dabigatran, the active form of dabigatran etexilate [9]. The bioavailability of dabigatran is low (6-7%) compared to other oral anticoagulants such as the Xa inhibitors. However, its plasma concentration peaks in 1.25-1.5 hours, which allows for a more rapid onset of action compared to Vitamin K antagonists (VKA) [10]. The half-life of dabigatran etexilate in patients without renal impairment is 14 to 17 hours [11], and as it is primarily excreted by the kidney (80%), dosage reductions are necessary for those who have renal impairment [11]. Dabigatran etexilate has many advantages when compared to oral VKAs. One of the major benefits of dabigatran etexilate is its predictable pharmacokinetic profile [11]. The absorption of dabigatran etexilate is constant, with less interindividual variability [12], and this unique characteristic obviates the need for frequent laboratory monitoring [10]. Furthermore, dabigatran etexilate is not metabolized by the cytochrome P450 enzyme system, thereby reducing drug interactions [10]. Due to its rapid onset of action, bridging with unfractionated or low molecular weight heparin is not needed, which considerably decreases the burden on the patient and the healthcare system. However, there are some concerns associated with using dabigatran etexilate. One major concern is the absence of an antidote to reverse its action.
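With no antidote, elimination is effectively the only route of reversal, so the drug's half-life governs how long the coagulopathy persists. A rough first-order decay sketch illustrates why recovery took days in this case; the 14-17 hour half-life is from the text, whereas the roughly 30-hour figure for severe renal impairment is our own illustrative assumption, not a value reported in this case:

```python
# Fraction of drug remaining under first-order elimination:
#   remaining = 0.5 ** (elapsed_hours / half_life_hours)
def fraction_remaining(hours: float, t_half: float) -> float:
    return 0.5 ** (hours / t_half)

scenarios = [
    (17.0, "normal renal function (upper end of 14-17 h)"),
    (30.0, "assumed prolonged half-life in severe renal impairment"),
]
for t_half, label in scenarios:
    f = fraction_remaining(5 * 24, t_half)  # the 5-day hospital stay
    print(f"{label}: {f:.1%} of the last dose remains after 5 days")
```

Hemodialysis removes additional drug on top of this decay, consistent with the observed normalization of the aPTT and the INR of 1.7 by discharge.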
The bleeding rate associated with dabigatran etexilate is not higher than that of VKA, based on clinical trials and postmarket assessment by the FDA [12,13]. Nevertheless, bleeding is still a serious, potentially life-threatening complication. Dabigatran etexilate was suspected to be the main culprit behind the deaths of 542 patients in 2011 [14,15]. In the case of severe, clinically significant bleeding, dabigatran etexilate has to be stopped immediately. Dialysis should be considered in the case of active, potentially fatal bleeding [16]. Patients with renal impairment have an increased risk of bleeding when taking dabigatran etexilate. More than 80% of absorbed dabigatran etexilate is excreted by the kidney [11,17]. Thus a reduced dose is required for patients who have renal impairment. The FDA has approved a 75 mg twice daily dose of dabigatran etexilate for patients with a creatinine clearance of 15-30 mL/min [13]. In the United States, dabigatran etexilate is not indicated if the creatinine clearance is less than 15 mL/min, as in patients with acute renal failure or ESRD. However, there are no outcome data for the newer anticoagulants in patients with a creatinine clearance less than 30 mL/min, and the current European Society of Cardiology (ESC) guidelines advise against their use in this patient population [18]. However, because dabigatran etexilate is mainly prescribed by primary care physicians and cardiologists, not all patients' renal function is assessed properly before starting dabigatran etexilate, as seen in our case. Another concern with dabigatran etexilate is the difficulty of assessing its anticoagulant effect. It is important to determine the anticoagulant effect in the cases of acute, life-threatening bleeding, preoperative evaluation, and suspected overdose [19]. The thrombin clotting time (TCT) is a sensitive test for dabigatran etexilate, but it is not useful for monitoring patients with possible dabigatran etexilate-induced coagulopathy because it is too sensitive. The Ecarin clotting time (ECT) is also a sensitive test and may have a dose-related response, but it is not available as a routine coagulation test and has not been approved by the FDA for monitoring the activity of dabigatran etexilate [20]. Some studies indicate that if a patient on dabigatran has an aPTT > 90 seconds and an INR > 2, one must consider overdosing or dabigatran accumulation [21]. However, most studies have found PT and aPTT to be insensitive to therapeutic doses of dabigatran etexilate, since their relationship to the drug level is not linear [16]. Currently, there is no laboratory study available to confirm dabigatran etexilate-induced coagulopathy in the hospital setting [20]. This case demonstrates a dabigatran-induced coagulopathy with a very high PT/INR. Dabigatran should be avoided in patients with severe renal insufficiency and in ESRD patients on hemodialysis. In our case, the patient had significant epistaxis with an increased PT/INR, and the bleeding was controlled only after nasal packing and administration of one unit of FFP. The PT/INR and aPTT decreased after holding the dabigatran as well as continuing his scheduled dialysis. Studies suggest that an increased INR is not correlated with the activity of dabigatran, but as of yet there are insufficient data.

Conclusion

There have been multiple reported cases of bleeding related to dabigatran use (Table 1). However, to the best of our knowledge, this is the first report of an extremely elevated PT/INR with the use of dabigatran in a patient with ESRD.
The current guideline indicates that routine PT/INR follow-up is not necessary for patients taking dabigatran. However, since there is no reliable laboratory study to monitor the anticoagulant effect of dabigatran, it may be beneficial to check the coagulation panel, including PT and aPTT, to reduce the risk of bleeding, especially in patients at high risk for bleeding. It is also imperative that patients have their renal function checked prior to starting therapy and that the drug is dosed based on creatinine clearance. Warfarin should be regarded as an option in populations with decreased renal function, to decrease the risk of bleeding and to allow better control in case of bleeding. Studies are warranted to find a safe dose of dabigatran in patients with renal impairment and to develop a better way of monitoring the activity of dabigatran.
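The creatinine-clearance-based dosing rules cited above lend themselves to a simple decision aid. The sketch below estimates creatinine clearance with the Cockcroft-Gault equation (a standard estimate; the report does not specify which formula to use) and applies the FDA thresholds quoted in the Discussion. The patient's weight is a hypothetical value, and Cockcroft-Gault is unreliable in dialysis-dependent patients, for whom dabigatran is not indicated regardless of the computed number:

```python
def cockcroft_gault(age: int, weight_kg: float, scr_mg_dl: float,
                    female: bool = False) -> float:
    """Estimated creatinine clearance (mL/min) via Cockcroft-Gault."""
    crcl = (140 - age) * weight_kg / (72 * scr_mg_dl)
    return 0.85 * crcl if female else crcl

def dabigatran_dose(crcl_ml_min: float, on_dialysis: bool) -> str:
    """FDA thresholds as quoted in the text; not a substitute for judgment."""
    if on_dialysis or crcl_ml_min < 15:
        return "not indicated (ESRD, acute renal failure, or CrCl < 15 mL/min)"
    if crcl_ml_min <= 30:
        return "75 mg twice daily"
    return "150 mg twice daily"

# Values loosely resembling the case (the 80 kg weight is assumed).
crcl = cockcroft_gault(age=58, weight_kg=80.0, scr_mg_dl=4.12)
print(f"CrCl ~ {crcl:.0f} mL/min -> {dabigatran_dose(crcl, on_dialysis=True)}")
```

For this patient, the dialysis flag alone rules the drug out, which is exactly the screening step the case argues was skipped.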
Contextual Processing and the Impacts of Aging and Neurodegeneration: A Scoping Review

Abstract

Contextual processing (or context processing; CP) is an integral component of cognition. CP allows people to manage their thoughts and actions by adjusting to surroundings. CP involves the formation of an internal representation of context in relation to the environment, maintenance of this information over a period of time, and the updating of mental representations to reflect changes in the environment. Each of these functions can be affected by aging and associated conditions. Here, we introduced contextual processing research and summarized the literature studying the impact of normal aging and neurodegeneration-related cognitive decline on CP. Through searching the PubMed, PsycINFO, and Google Scholar databases, 23 studies were retrieved that focused on the impact of aging, mild cognitive impairment (MCI), Alzheimer's disease (AD), and Parkinson's disease (PD) on CP. Results indicated that CP is particularly vulnerable to aging and neurodegeneration. Older adults had a delayed onset and reduced amplitude of electrophysiological response to information detection, comparison, and execution. MCI patients demonstrated clear signs of impaired CP compared to normal aging. The only study on AD suggested decreased proactive control in AD participants in maintaining contextual information, but seemingly intact reactive control. Studies on PD were restricted to non-demented older participants, who showed a limited ability to use contextual information in cognitive and motor processes, exhibiting impaired reactive control but more or less intact proactive control. These data suggest that the decline in CP with age is further impacted by accelerated aging and neurodegeneration, providing insights for improving intervention strategies. This review highlights the need for increased attention to research in this important but understudied field.

Introduction

Age-associated cognitive change can be a common part of normal aging, with declines in processing speed, inhibitory control, and working memory capacity being the archetypes. 1 In accelerated aging with neurodegeneration such as Alzheimer's disease (AD), the most common form of late-life dementia, cognitive changes are more drastic, affecting multiple domains including attention, executive function, decision-making, and memory. 2,3 Similarly, Parkinson's disease (PD), a neurodegenerative disorder that leads to rigidity, bradykinesia, and imbalance, can also involve executive dysfunction and working memory deficits. 4,5 Importantly, dementia often manifests from intermediate changes due to normal aging, known as mild cognitive impairment (MCI), 6 which is a target for early detection and more effective early interventions. Multiple genetic and modifiable lifestyle factors have been associated with long-term adverse health outcomes in aging. These include an unhealthy diet, smoking, alcoholism, obesity, and inactivity; even the impact of adverse childhood experiences (ACE) can increase the risk of neurodegeneration and neuropsychiatric consequences in late adulthood. 7,8 More directly related to the present paper, research has shown that contextual processing (CP, sometimes also referred to as context processing) is another important cognitive domain that can be greatly impaired during senescence. 9
CP entails the ability to process different streams of information and to select the response that is most relevant for a certain context while inhibiting others, allowing people to adapt to changing scenarios. CP reflects cognitive flexibility in that it depends on the functional integrity of the prefrontal cortex (eg, the dorsolateral prefrontal cortex), 4,10 operating through context activation/updating and context maintenance. 11 The former is the ability to reactivate/update context information on a trial-by-trial basis (ie, comparing the current to previously exposed information), whilst the latter reflects the working memory capacity to retain/activate/update the learned information. CP is utilized in various situations such as orienting one's self in space, adapting to novel scenarios, and facilitating decision-making processes, based on the general knowledge of certain objects, previously exposed information, and additional cues. 5,9,11 The capability of CP also allows people to select the responses for dealing with particular tasks with flexible behaviour adaptation. 12 There is a gap in the literature regarding the impact of aging and neurodegeneration on CP. An apparent primary reason for this is that CP presents complexly and is closely linked to many other well-studied cognitive skills. When evaluating cognitive performance, it may be difficult to differentiate a deficit originating in CP from one in other cognitive skills such as executive function and memory retrieval. Even so, there are individual studies targeting a possible difference in CP performance between normal and accelerated aging. It is hypothesized that aging can lead to a reduction in CP performance, while neurodegeneration and dementia can cause a marked weakening in this cognitive domain. The current lack of reviews synthesizing this information motivated our present research. Such knowledge is important in order to truly understand the relationship between neural substrates and disease sequelae, paving the way for more effective preventative and management strategies. The understanding is also critical for the development of portable technologies allowing for effective detection of brainwave changes in aging and dementia at the point of care. [13][14][15] The objective of this paper is to summarize the current literature on how normal and accelerated aging processes affect context processing. To better prepare readers for the results, we start by introducing the key aspects of CP research.

Proactive Control and Reactive Control of Contextual Processing

Based on the "dual mechanism of cognitive control" (DMC) model, cognitive flexibility with CP is achieved through proactive control and/or reactive control, depending on the situational demands. 16 Proactive control is a sustained, anticipatory form of control that allows individuals to respond efficiently and rapidly. Task-relevant information is held in working memory (ie, the identity of previous stimuli and task instructions) to anticipate the upcoming stimuli. 16 Proactive control is essentially maintaining contextual information in mind in order to respond appropriately to a certain scenario or task. For example, if a person is told to respond only when they see a certain cue-probe pair, ie, the word "animal" (cue) followed by the word "dog" as a probe, they must remember what the cue was while determining if it matches the newly presented probe. Being able to remember what the cue was allows individuals to respond faster when the correct probe is presented.
Reactive control, on the other hand, is used in situations where anticipating the upcoming stimuli does not yield optimal results or when the cue's predictability for the upcoming probe is unreliable. 16 Reactive control helps individuals respond appropriately when facing unexpected stimuli and enables them to recognize incorrect information so that they can act accordingly in light of the novel information. For instance, a person would expect to see the word "dog" appear when the word "animal" is presented as a cue; however, the probe could be an irrelevant object, eg, house, car, or tree. When the cue is misleading, the individual must use reactive control to suppress inappropriate actions. The two modes of control are supported by distinct patterns of neural activity. [16][17][18] Proactive control is associated with the sustained activation of the lateral prefrontal cortex (PFC), reflecting the active maintenance of task goals and instructions. The hippocampus/medial temporal lobe is another neural region associated with proactive control because it helps maintain information online during working-memory tasks and binds task-relevant information to specific regions of the brain to elicit an appropriate response based on the presented stimulus. 16 Previous research has also suggested that proactive control is linked to the dopaminergic system. More specifically, when a task-relevant stimulus is presented, phasic bursts of dopamine are synchronously released within the PFC. These phasic bursts of dopamine enable the PFC to be activated for longer periods of time, allowing contextual information to be maintained online for longer whilst protecting it from interference effects caused by task-irrelevant inputs. [16][17][18] Reactive control, on the other hand, activates the lateral prefrontal cortex transiently whenever interference is detected, reflecting reactivation. Reactive control is also linked with the dopaminergic system, but dopamine is not released in a phasic manner as seen in proactive control. 16 The anterior cingulate cortex (ACC), which is involved in attention, conflict detection, and monitoring, has also been documented to be important for both proactive and reactive control. 16,18 The dorsolateral PFC, together with its association networks, is key for CP regulation, due to several characteristics. 4,[19][20][21][22][23][24][25] The PFC network regulates top-down processing by using contextual constraints or cognitively stored information to guide behaviours, and it modulates activity in other task-relevant areas for the selection of appropriate responses. The PFC is highly interconnected with other cortical and subcortical areas such as the parietal cortices, temporal lobes, and the basal ganglia, which are associated with sensory perception and movement initiation. 19,22 Contextual information influences working memory via the lateral PFC, by extracting and transforming task-relevant information into context representations. 5,21,22,26 These multi-modal context representations are maintained in the PFC to control both motor and sensory processes and enable the selection and implementation of appropriate actions depending on the context. 22

Electrophysiological Basis of Contextual Processing

The event-related potentials (ERP) derived from electroencephalography (EEG) brainwaves are commonly used to understand the electrophysiological basis of CP. The ERP component P300 (P3), or more specifically the P3b component indicative of target detection, is a known electrophysiological marker for CP.
[26][27][28][29] Using the oddball target detection task, where infrequent targets are embedded in a series of frequent standard stimuli, 30 a posterior-parietal scalp distribution of the P3b is elicited every time a target stimulus is detected. This indicates that the P3b plays a role in comparing environmental cues to contextual information, in that the person compares the current stimulus to the previous stimuli using working memory, ie, does the current stimulus match the cue that was previously seen. 27,31 The P300 has also been associated with stimulus categorization and template matching of targets in working memory. [32][33][34] There may be a recurrent link between working memory, CP, and the P3b component, affected by the allocation of attention to a stimulus, stimulus-task relevance, and decision confidence. 35,36 P3b latency is associated with the stimulus evaluation time to determine if the stimulus is task-relevant, [37][38][39] and with the mediation between perceptual analysis and response initiation to identify the stimulus and accordingly initiate a response. 40 P3b amplitude is modulated by attentional allocation and the task relevance of a stimulus. 35,36 Prolonged latencies and attenuated amplitudes of the P3b have been shown in patients with neurologic conditions. 41 Another ERP component that has been linked to CP is the N2, correlated with contextual encoding 42 and reflecting the degree of attention required to process stimuli and conflict monitoring. 43,44 The contingent negative variation (CNV) is another known ERP component associated with CP, representing the maintenance of task-related information, 45 and indexing stimulus expectation and motor preparation. 46,47 Other ERP markers associated with CP include the lateralized readiness potential (LRP, which is used to measure motor processes) and the N2cc (an ERP component, engaged in the Simon task, that reflects the prevention of responses based on stimulus position). 48,49 Moreover, researchers have explored the utility of portable EEG-ERP technologies in the establishment of the brain vital sign framework in order to rapidly capture and evaluate important brainwave markers, including consistent N400 characteristics during semantic information processing. [13][14][15] By linking the previously reported N400 responses acquired using traditional laboratory-based experiments with rapid bedside detection, this innovative research supports the development of rapid physiological measurements of higher cognitive functions such as CP, without reliance on lab-based experimental probes, for potential clinical translation. [13][14][15]

Behavioral Tests for Contextual Processing

The expectancy AX continuous performance test (AX-CPT) has been validated to index CP capacity. 50 In the AX-CPT, subjects are asked to detect targets (X) and non-targets (non-X) within a stream of presented letters. The goal is to press the target button only when the X is preceded by a certain letter, eg, "A", which is the cue. For any non-A letters (referred to as "B cues"), subjects must refrain from pressing the target button even if they are followed by the letter X. Other versions of the AX-CPT include the "BX" and "AY" trials, which test context maintenance and the capacity to overcome automatic responses, respectively.
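To make the AX-CPT trial taxonomy concrete, the following is a small illustrative sketch (our own, not taken from any of the reviewed studies) that classifies cue-probe pairs into the four standard trial types and applies the target rule:

```python
# AX-CPT sketch: a trial is a (cue, probe) pair of letters.
def trial_type(cue: str, probe: str) -> str:
    cue_kind = "A" if cue == "A" else "B"      # any non-A letter is a "B cue"
    probe_kind = "X" if probe == "X" else "Y"  # any non-X letter is a "Y probe"
    return cue_kind + probe_kind               # "AX", "AY", "BX", or "BY"

def correct_response(cue: str, probe: str) -> str:
    # Target rule: respond only to an X probe preceded by an A cue.
    return "target" if (cue, probe) == ("A", "X") else "non-target"

# BX trials probe context maintenance (the remembered "B" cue must inhibit
# the habitual response to X); AY trials probe overcoming automatic responses.
for cue, probe in [("A", "X"), ("A", "K"), ("G", "X"), ("G", "K")]:
    print(trial_type(cue, probe), "->", correct_response(cue, probe))
```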
The Stroop task is also often used to index CP, evaluating dynamic control, rule generation, and task switching. 12 The task presents either a color or a word based upon a cue that precedes the stimulus, or a rule that is established at the beginning of the task, eg, "color" or "word". 51,52 The color of the stimulus may be different from the word's meaning, ie, the word "blue" displayed in a red font color. The garden path sentence task, the go no-go paradigm, the stop-signal paradigm, and flanker interference are all used to examine inhibitory control. [53][54][55] In the garden path sentence task, participants are instructed to remember a low-probability word ending, ie, the word "hair" instead of the word "teeth" for the phrase: "Before bed, remember to brush your [. . .]." 53 After a delay interval, they are asked to recall the low-probability ending word for the phrases that were shown to them prior to the delay. In the go no-go task, participants are asked to refrain from responding to a low-frequency target stimulus, with fewer errors signifying better response inhibition. 55 A variation of the go no-go task asks the participants to alternate between a letter categorization task (deciding if a vowel is present) and a number categorization task (deciding if an even number is present), and to respond only if a vowel or an even number is present. 56 In a similar version of the go no-go paradigm, participants are asked to reach for a target when they see a specific cue appearing on the screen. 57 In the stop-signal task, participants are asked to suppress an action when instructed, ie, when a certain tone is presented. 54 In the flanker interference test, participants are asked to press a keyboard button that corresponds to the direction the central target is facing; sometimes the central target faces a direction that is opposite to the peripheral items (flankers), and the goal is to respond as quickly as possible without being distracted. 58 In the counting distraction-attention task, 59 participants are asked to press a letter that corresponds to the correct number of digits presented. For instance, in the congruent condition, one of the digits 1-4 is presented, ie, "10"; the number and the amount of correct digits present match (in this case, the cue contains the number 1 and only one of the four allowed numbers is present). 59 In the incongruent-eligible condition, the number and identity of the digits do not match, ie, "33"; the number 3 is correct, but this cue has only two of the allowed numbers. 59 In the auditory-visual distraction-attention task, participants are presented with auditory and visual stimuli and are asked to focus solely on one stimulus, testing their task-switching and execution abilities. 60 In the predictive sequence visual task, participants are instructed to use the preceding information (a series of triangles moving from left, upwards, and right) to anticipate the target (a downward-facing triangle). 24 The multi-finger sequencing task, on the other hand, is used to study the ability to overcome automatic responses. 61 Participants press a keyboard button that corresponds to the color of a block, ie, the "m" key for a red box and the "n" key for a blue box, and the colored blocks are presented in different orders. 61 In the Simon task, subjects are asked to respond to a non-spatial feature of a lateralized stimulus while ignoring its position (ie, when the word "left" is presented on the right side of the screen, they need to press the keyboard button that corresponds to the word irrespective of its position). 62
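As a concrete illustration of the Simon-task logic, here is a small sketch (our own, not from the reviewed studies) that labels trials as congruent or incongruent and applies the respond-to-identity, ignore-position rule:

```python
# Simon task sketch: respond to the word's identity, ignore its screen side.
def correct_key(word: str) -> str:
    return {"left": "left key", "right": "right key"}[word]

def congruency(word: str, side: str) -> str:
    # Congruent when the word's meaning matches the side it appears on;
    # incongruent trials require suppressing the position-driven response.
    return "congruent" if word == side else "incongruent"

for word, side in [("left", "left"), ("left", "right")]:
    print(f"word '{word}' shown on the {side}: {congruency(word, side)} "
          f"trial, press the {correct_key(word)}")
```

Slower or less accurate responses on incongruent trials index the cost of suppressing the position-driven response.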
Contextual Processing Execution in Relation to Other Aspects of Cognition

CP is a component of executive function and working memory. 4,5 When there is a delay between task-relevant stimuli and the generation of a response, contextual information is maintained over time and facilitates working memory. 5,63 In the daily environment, sequences of events separated in time are integrated and actively maintained in working memory to help guide actions. 19 Contextual information also mirrors the series of events that are separated in time, which are then integrated by working memory into predictive goal-relevant information. These are short predictive sequences of stimuli that precede the target event, ie, symbols, words, or patterns, which the individual then uses to react accordingly. 19 When engaging in CP, people first comprehend and detect the task-relevant (predictive) stimuli, translate this information into a self-guided cue, and utilize this information to generate a response (eg, whether to click the target button or the non-target button in the AX-CPT task). According to Baddeley's model of working memory, contextual information is analogous to the sequential visual or auditory stimuli that are stored in the visuospatial sketchpad and phonological loop, which are transformed by the central executive system into goal-relevant predictive information. 64

Methods

Two reviewers (K.H.T and A.M) independently conducted a literature search using PubMed (MEDLINE), Google Scholar, and PsycINFO up to September 2020. We focused on these databases because of their established reputation and coverage of biomedical and clinical research. Studies were reviewed, and any contentions were resolved by involving a third reviewer (X.S). The majority opinion of the reviewers was used for further analysis. The sets of keyword search terms were used in combination and included ("context processing" OR "contextual processing" OR "proactive control" OR "reactive control") AND ("aging" OR "ageing" OR "senior" OR "elderly" OR "older adults" OR "mild cognitive impairment" OR "MCI" OR "dementia" OR "vascular dementia" OR "frontotemporal dementia" OR "Parkinson's" OR "PD" OR "Lewy body dementia" OR "LBD" OR "Alzheimer's Disease" OR "AD"). The "*" was used to indicate multiple words of the same meaning but different endings (Figure 1). The search yielded a total of 803 articles. After filtering for titles/abstracts containing contextual processing key terminology, language (English), and age (older adults), 137 articles remained in filtered set I. A further filtering step through article reading excluded studies focusing on memory, attention, visuospatial, language, or semantic processing, or on neurodegenerative diseases unrelated to dementia. The final filtered set contained 23 articles, including 11 on normal aging, 1 on AD, 5 on PD, and 6 on MCI (Figure 1). This study applied narrative descriptions to each of the final articles, while no quality appraisal was performed given the relatively small number of studies found and the varied research methods and objectives of the studies.
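For reproducibility, the combined boolean query can be assembled programmatically. The sketch below is illustrative (not the authors' actual search script) and composes the two keyword groups described above into a single search string:

```python
# Compose the boolean search string from the two keyword groups.
cp_terms = ["context processing", "contextual processing",
            "proactive control", "reactive control"]
population_terms = ["aging", "ageing", "senior", "elderly", "older adults",
                    "mild cognitive impairment", "MCI", "dementia",
                    "vascular dementia", "frontotemporal dementia",
                    "Parkinson's", "PD", "Lewy body dementia", "LBD",
                    "Alzheimer's Disease", "AD"]

def or_group(terms):
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_group(cp_terms) + " AND " + or_group(population_terms)
print(query)  # paste into PubMed, PsycINFO, or Google Scholar
```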
Aging and Contextual Processing - Electrophysiology Data

Studies suggested differences in the ERP components (P300 and CNV) between older adults and adolescents/younger adults (Table 1A). Older adults exhibited a delayed P3b onset, indicating a deficit in making an anticipatory response towards a stimulus, 65 while taking a longer time to execute a response. 40,66,67 Older adults showed comparable P3b amplitudes on context-dependent and context-independent trials, suggesting an inability to differentiate contextually relevant from irrelevant information. 10 A reduced amplitude and a delayed onset of the P3b were also observed when participants were presented with conflicting stimuli, confirming difficulties in processing and responding to unexpected stimuli. 65 Older adults also exhibited a P3a component after a non-cue stimulus (an irrelevant item), which was not seen in younger participants, demonstrating involuntary and transient allocation of attention to unexpected or novel stimuli, 30,68,69 and indicating increased susceptibility to attentional distraction with a longer reaction time. 70 When there was a delay between the presentation of the cue and the probe, older adults showed a lower CNV amplitude compared to younger adults, demonstrating a decline in the neural correlates of task maintenance and motor preparation in anticipation of a stimulus. 71 Hajra et al (2018) used portable EEG/ERP technology and demonstrated the correlation of context-related information processing with increased N400 amplitude and increased temporoparietal activity while developing the brain vital sign framework across a wide age range. 13,14

Aging and Contextual Processing - Behavioral and Neuroanatomical Data

Studies suggested that aging involves different neural activation patterns in the inhibitory processes concerning proactive control and reactive control (Table 1A). When suppressing irrelevant information or engaging in conflict resolution, older adults exhibited increased activity in multiple brain regions involved in inhibitory processes (eg, the left inferior frontal triangularis, left inferior frontal operculum, left inferior temporal cortex, and right anterior striatum) and displayed reduced efficiency; 72,73 whereas younger adults only saw an increase in activity in their left posterior superior temporal cortex. 74 When older adults engaged in proactive control, they exhibited decreased activity in the anterior cingulate cortex bilaterally (associated with conflict detection) 75,76 and increased activity in the middle frontal gyrus (MFG), 74 linked with the maintenance of task-relevant information. 17 The trade-off in activity (increased MFG activity at the expense of ACC activity) was interpreted as beneficial for older adults in tasks where proactive control is required. 17 In reactive control, older adults recruited the left inferior frontal operculum (known to play a role in inhibitory processes) more than the younger subjects. 74 Studies also showed that older adults tend to rely more on prefrontal structures (Table 1A). Younger adults only recruited frontal structures during mixed block trials, in which they alternated between multiple cognitively challenging tasks. 71 In contrast, older adults recruited the same amount of frontal structures when completing a simpler task. 71 Similarly, when completing the AX-CPT, older adults used the lateral PFC more than younger adults. 77
In addition, older adults showed variable PFC activation patterns when engaging in proactive and reactive control: in proactive control, the activation of the right dorsolateral PFC (key for memory encoding and goal maintenance) was decreased; in reactive control, the activation in the ventral PFC and inferior frontal junction (important for reactive control) was increased. 16,74 This suggests that normal aging involves reactive control more than proactive control. 10,17,77-81

MCI on Contextual Processing - Electrophysiology Data

The six studies each examined amnestic MCI (Table 1B). Patients with multiple domain amnestic mild cognitive impairment (mdaMCI), in which memory is affected in conjunction with other cognitive aspects, exhibited longer reaction times and fewer correct responses on several cognitive control tasks (eg, the Simon task, go no-go task, and auditory-visual distraction-attention task). [82][83][84][85][86][87] Patients with single domain amnestic mild cognitive impairment (sdaMCI), in which only memory is impaired, performed at an intermediate level relative to mdaMCI patients and age-matched controls. [82][83][84][85][86][87] It also took the sdaMCI patients a longer time to evaluate and classify the items compared to the controls, as shown by their longer P3b latencies. 83 Both mdaMCI and sdaMCI patients had lower LRP amplitudes than the control group, indicating a deficit in their ability to select and prepare for a motor response. 85 In behavioural and EEG data from the go no-go task, aMCI patients had longer reaction times and lower accuracy than the control group. aMCI patients also had lower N2 amplitudes in the go no-go task compared to controls, which indicated that they were less skilled at detecting task-relevant stimuli and inhibiting inappropriate responses.

MCI on Contextual Processing - Behavioral and Neuroanatomical Data

We were unable to find any studies in the literature that examined the effects of MCI on CP from a neuroanatomical perspective. However, from a behavioral perspective, mdaMCI patients took a longer time than sdaMCI patients and age-matched controls in eliciting a motor response when presented with a task-relevant stimulus. 85

Alzheimer's Disease on Contextual Processing - Electrophysiology Data

We were unable to find any electrophysiology-related studies in the literature that examined the effects of AD on CP.

Alzheimer's Disease on Contextual Processing - Behavioral and Neuroanatomical Data

The current literature suggests a complete lack of neuroanatomical studies in understanding CP in AD. The sole study in this research line that examined the effects of AD on CP was purely behavioral and enrolled 26 AD patients and 43 age-matched control participants. 88 The researchers observed that AD patients were unable to maintain contextual information for 5000 ms (Table 1C). In the study, AD patients made more errors on BX trials than on AX, AY, and BY trials, and exhibited no increase in response latency on BX trials, compared with age-matched controls. This was interpreted by the authors as follows: instead of taking the needed extra processing time on BX trials to inhibit the inappropriate response tendencies associated with the X probe, AD patients simply succumbed to the probe-related interference and generated an error response.
The authors further interpreted that AD patients were unable to utilize contextual information to execute task-related behaviours because of impaired proactive control, given that the BX trial of the AX-CPT is used to index proactive control. 88 This suggests that proactive control is further impaired in AD compared to normal aging.

Parkinson's Disease on Contextual Processing - Electrophysiology Data

Five studies explored the effects of PD on contextual processing, and each restricted enrolment to PD patients without dementia (Table 1D). Three studies repeatedly showed that PD patients were able to detect targets but were unable to utilize the contextual information (ie, a predictive sequence to help them generate a faster response in a subsequent trial). 25,89,90 Based on the authors' interpretation, the extensive connections in PD patients' frontal networks inhibited the rate at which information was relayed, hence the underperformance in the processing of contexts in this population. 89,90 Fogelson et al also compared CP performance involving PD and schizophrenic patients, and reported that both patient groups had abnormal network changes when processing context-dependent stimuli, specifically weaker frontal-temporal-parietal connections. 25 Interestingly, another study reported that proactive control was preserved amongst PD patients, as they showed the ability to adjust control mechanisms to better adapt to future response conflict. 91 The study also showed that as motor symptom severity increased, online cognitive control decreased in the PD participants, although proactive control remained unaffected. 91 Yet another study observed that proactive inhibition (the ability to shape response strategies according to the context) was preserved amongst early-stage PD patients, whereas reactive inhibition was reduced. 92 Collectively, these studies suggest that proactive control is spared amongst PD patients.
Parkinson's Disease on Contextual Processing - Behavioral and Neuroanatomical Data

We were unable to find any studies in the literature that examined the effects of PD on CP from a neuroanatomical perspective. However, from a behavioral perspective, the non-demented PD patients showed robust context maintenance abilities (proactive control), but weakened context adjustment abilities (reactive control). For instance, Di Caprio et al observed that PD patients struggled with inhibiting a response when presented with contradicting information. 92 Fogelson et al noted that PD patients had issues differentiating task-relevant from task-irrelevant information, which could contribute to the decline in reactive control abilities seen in this population. 89

Discussion

We studied the literature on contextual processing concerning the impacts of normal and accelerated aging. The data available to date have revealed some important findings. As summarized in Figure 2, older adults had a delayed onset and reduced amplitude of electrophysiological response to information detection, comparison, and execution. CP is further impaired in AD, specifically in terms of the proactive control mechanism, whereas PD largely affects the reactive control mechanism of CP. Depending on the subtype, the effect of MCI can be more heterogeneous, although slower initiating, processing, and motor responding appear to be typical. The information has clinical and healthcare implications. As an integral component of executive control, CP is fundamental in support of daily living, allowing individuals to internally interpret environmental cues to guide their thoughts and behavior through the formation of an internal representation of context, the maintenance of the information in working memory, and the updating of context to adjust to the environment. 11,93,94 Studies consistently suggested that aging is associated with marked changes in the P3a, P3b, and CNV waveforms induced by CP tasks, with delayed onsets and reduced amplitudes being common (Table 1). 10,30,40,[65][66][67][68]70,72,73,95,96 Based on these differences, it has been suggested that electrophysiological markers may be developed in support of clinical decision making. 14 Aging is also associated with reduced inhibition of irrelevant stimuli, and older adults recruit additional neural resources to perform CP tasks. 63,74,78 It is clear that older adults are more reliant than younger adults on their frontal structures during CP, especially the PFC, 68,77 in accordance with the "guided activation theory of PFC function", which suggests that the frontal dopamine system aids in setting and achieving goals; this system is less efficient in older adults, so further engagement of PFC coordination is seen in this population. The only study that examined the effects of AD on CP was based on behavioral data and reported that proactive control was severely impaired in the AD patients, whilst reactive control remained relatively stable. 88 Indeed, proactive control may be more effortful and cognitively demanding than reactive control, in that the latter is only active on an "as-needed" basis, specifically when an interference is encountered. 97,98 Furthermore, the neural regions supporting proactive control (ie, the anterior attention system, including the frontal eye field) deteriorate faster than those supporting reactive control (ie, the posterior attention system, including the parietal cortex and the temporoparietal junction). 78,[99][100][101]
The hallmark hippocampal and medial temporal atrophy in AD can affect information maintenance in working memory. Neuroimaging and electrophysiology research is needed to determine whether AD involves further changes in the P3a, P3b, CNV, and N2 waveforms beyond those of normal aging. Compared to AD, CP has been better studied in PD, although limited to older patients without dementia, with results spanning both behavioral and electrophysiological aspects. It is clear that PD affects the utilization of contextual information to prepare and execute a response, due to excessive frontal network connections that prevent effective cross-communication between cortical regions. 25,89,90 As a result, PD patients without dementia exhibited a decline in their reactive control performance, most likely related to the loss of dopaminergic neurons in the basal ganglia, which impairs effective motor responding. On the other hand, proactive control remained relatively intact for these PD patients, 91,92 opposite to what is observed in AD and mdaMCI patients, who experience declines in memory and other cognitive domains. CP in MCI may be more complicated to study due to the highly heterogeneous conditions. The studies under review have all been on amnestic MCI patients, who showed a general slowness in contextual information processing in contrast to normal aging. Several studies contrasted the mdaMCI and sdaMCI subtypes: typically, mdaMCI patients exhibit more profound brain activity changes, explained by the decreased attentional resources in mdaMCI for processing task-relevant stimuli. [82][83][84][85][86][87] The pace of initiating a motor response was also slower than in normal aging, as seen in relatively lower LRP amplitudes (an ERP marker of motor processes). 84 The observation that a prolonged N2cc latency characterizes multi-domain amnestic MCI also suggests a deficit in the executive function of the MCI group. 86 The information can be useful for early differential diagnosis and effective intervention, considering that MCI represents the greatest risk for dementia. Several caveats apply to our study. First, the data presented were based on a small number of publications, most of which enrolled a small number of participants. We also did not carry out a systematic review and may not have complete coverage of the topic. For instance, only one study was found for CP and AD, and it was purely behavioural with no EEG or imaging data, demonstrating how novel this topic is. When more data become available, more sophisticated screening methodologies and data analyses can become feasible. Additionally, given the current paucity of available clinical data, it is not understood whether the present findings can be reliably generalized. It is anticipated that the needed future research will reveal additional data to improve the understanding of disease impact, with increased sample sizes and more sophisticated study designs. Also, we limited the scope of the review to CP and excluded publications on other related cognitive aspects, such as semantic processing, sensory perception, and attention. Clearly, cognitive domains are associated with each other, and all of them are critical to our daily living activities. Similarly, we cannot expect neurocognitive disorders or the deficits of normal aging to affect particular cognitive domains in isolation.
Meanwhile, studies typically have the focused purpose of investigating a specific cognitive domain, and indeed several domains, including memory, attention, and executive function, have each been well reviewed previously. 3,102 In terms of CP, it is a primary part of multiple cognitive processes and shares common PFC substrates, such as the dorsolateral PFC and lateral PFC, 4,10,16-18 making focused investigation even more difficult. For this reason, the studies under review are particularly valuable, owing to their careful designs and data curation that isolate the critical information. Further development and clinical translation of CP research will also rely on future methodology and technology breakthroughs. Even with the limitations, our work contributes to the research field by providing the first review that synthesizes recent findings on CP. The study suggests that CP declines with age and is further impaired by neurodegenerative diseases, including AD, PD, and MCI, with characteristic patterns (Figure 2). This knowledge can potentially benefit clinical decision-making in the realm of aging and neurocognitive disorders. Moreover, our study draws attention to a clear need for future research on CP, such as implementing neuroimaging technologies. Further, it revealed a knowledge gap about the effects of dementias other than AD on CP (Figure 2). Many of these, including frontotemporal dementia, Lewy body dementia, and vascular dementia, have unique neuropathology and clinical presentations, and we hypothesize that they would affect CP. For instance, vascular dementia is characterized by widespread disruption of white matter connectivity, 103 and this will likely affect the information transmission underlying CP. Taken together, previous research has clearly demonstrated the importance of contextual processing for older adults' adaptation to their daily living environment. A reduction in CP has been found to affect widespread aspects of their personal and social lives, including speech and communication, reasoning, recognition, memory, judgment, and decision-making. [103][104][105][106][107] This raises the need to expand this important but largely understudied area of CP in aging. It is anticipated that future research will be able to apply valid innovative methods and technologies [17][18][19] and produce the needed data to investigate CP. The needed data on CP in MCI, AD, and other neurodegenerative conditions will help provide important new insights for clinical practice, enabling early diagnosis, control of symptoms and risk factors, effective disease management, and prevention of cognitive consequences. The information will also be valuable for improved care of older adults within sensibly supportive contexts.

Conclusion

Contextual processing is a unique component of working memory and executive function critical for daily living. The available data have revealed characteristic behavioral, neural activation, brain waveform, and structural changes of CP in normal aging, while its impairments in aging-related neurodegenerative disorders remain little known, other than reduced proactive control in AD and reduced reactive control in PD. A general trend for CP performance in MCI patients is slower processing and movement initiation. The current situation calls for future research to enrich the knowledge in this field for improved intervention and preventative strategies.
UNBLOCK: Low Complexity Transient Blockage Recovery for Mobile mm-Wave Devices

Directional radio beams are used in the mm-Wave band to combat the high path loss. The mm-Wave band also suffers from high penetration losses from drywall, wood, glass, concrete, etc., and also the human body. Hence, as a mobile user moves, the Line of Sight (LoS) path between the mobile and the Base Station (BS) can be blocked by objects interposed in the path, causing loss of the link. A mobile with a lost link will need to be re-acquired as a new user by initial access, a process that can take up to a second, causing disruptions to applications. UNBLOCK is a protocol that allows a mobile to recover from transient blockages, such as those caused by a human hand, another human walking into the path, or other temporary occlusions by objects, which typically disappear within the order of 100 ms, without having to go through re-acquisition. UNBLOCK is based on extensive experimentation in office-type environments, which has shown that while a LoS path is blocked, there typically exists a Non-LoS (NLoS) path, i.e., a reflected path through scatterers, with a loss within about 10 dB of the LoS path. UNBLOCK proactively keeps such an NLoS path in reserve, to be used when blockage happens, typically without any warning. UNBLOCK uses this NLoS path to maintain time-synchronization with the BS until the blockage disappears, as well as to search for a better NLoS path if available. When the transient blockage disappears, it re-establishes LoS communication at the epochs that have been scheduled by the BS for communication with the mobile.

I. INTRODUCTION

Next-generation wireless communication technologies (IEEE 802.11ay, 5G, and beyond) can enable extremely high throughput applications due to their operation in the mm-Wave spectrum. They promise low latency and ultra-reliable packet delivery. To overcome high path loss, mm-Wave devices use directional radio beams for communication. The device modems typically use small-sized arrays, as they offer very high gains and a multitude of radiation patterns. Deploying a large number of Base Stations (BS) and Access Points can therefore provide Line of Sight (LoS) communication with the mobiles. However, the mm-Wave spectrum not only suffers from high path loss, but also from high penetration loss [1], [2]. Common media such as drywall, wood, glass, concrete, and the human body cause severe signal degradation [3]. Penetration losses increase with the frequency of operation and can completely disrupt LoS communication. During a blockage event, the Received Signal Strength (RSS) of the mobile reduces drastically and can result in link outage. If the BS has to reacquire the link, the re-connection to a 5G New Radio (NR) Base Station can take up to a second [4]. Such high connection latency severely impacts applications [5]. In addition, the network reconnection process is also power-hungry. Therefore, it is important to minimize the need for such re-acquisitions. In this paper we distinguish between "transient" blockages and "permanent" blockages. Transient blockages are defined as blockages which last on the order of a hundred milliseconds and which clear once the blocking event passes. Transient blockages can occur as the user walks past obstacles or other users walk across the directional mm-Wave link.
Permanent blockage occurs when the LoS beam between the mobile and the BS is permanently (i.e., for a very long time) blocked, and the only solution for maintaining a connection is to switch to a LoS beam to a different BS that is not blocked, i.e., handoff. The focus of this paper is on sustaining a communication link for control messages during a transient blockage. Poor RSS during blockage events results in the mobile losing time synchronization with the BS. This results in the mobile missing future scheduled epochs for communication with the BS, during which the BS would have been able to align its directional beam towards the mobile. The mobile relies on the signals transmitted during such epochs to combat the high phase noise in the mm-Wave spectrum. It is therefore critical to sustain the control-plane communication link during such blockage events so as to maintain time synchronization with the BS. The ability to electronically steer the direction of beams can help the device sustain a communication link by using alternate or Non-Line of Sight (NLoS) paths when the direct LoS beam is blocked. NLoS paths are reflections from scatterers that exist indoors and in urban outdoor settings. In our extensive experiments in office-like environments, we have found that there typically are usable NLoS paths available between the BS and the mobile. By maintaining the time synchronization and control-plane communication with the base station using NLoS paths, mobile devices can sustain a control link through a transient blockage. This then allows recovery to a LoS link as soon as the blockage disappears. (During initial access, the mobile performs a spatial scan; it searches for the BS beam with the best RSS. The BS also periodically sweeps through all its beams. After acquisition, both the BS and the mobile communicate in scheduled epochs using the beam acquired during the initial access. A sudden onset of blockage results in the mobile losing communication with the BS over these epochs.) Our experiments have shown that the RSS on the NLoS paths is about 10 dB less than the LoS path in an indoor environment. This is sufficient for sustaining an NLoS link over which control packets are transmitted. To employ an NLoS link during blockage events, both the BS and the mobile need a mechanism to identify the viable NLoS paths. As the scattering environment changes with user mobility, a one-time environment scan to extract NLoS paths is not sufficient for mobiles. Moreover, such an approach increases the memory usage of the modem. Thus it is necessary to dynamically determine NLoS paths. The UNBLOCK protocol is a low-complexity blockage recovery protocol which enables both the BS and the mobile to dynamically identify NLoS paths, and to then utilize them to maintain time synchronization so as to preserve the link without the need for re-acquisition. UNBLOCK's design is based on extensive mobility experiments using mm-Wave software-defined radios in indoor office environments. It uses an appropriate beam scanning interval that helps in conserving the mobile battery. UNBLOCK employs the right NLoS beam discovery interval for both the BS and the mobile by observing the duration of transient blockage events and the beam coherence time of NLoS paths from pedestrian mobility experiments. The protocol fits into the framework of 5G NR standards, on which we elaborate in Section VI. The rest of the paper is organized as follows. Section II presents the challenges to overcome to avoid outages to mobiles caused by blockages.
The testbed and experimentation are elaborated in Section III. Our protocol is described in Section IV. Section V provides implementation details and the performance of the protocol. Section VI maps the protocol to 5G cellular standards. Section VII presents existing work in the domain and Section VIII concludes our work. II. CHALLENGES Communication devices operating in the mm-Wave spectrum, whether BS or mobile, utilize narrow radio beams to overcome high path loss, and need a beam alignment protocol to handle user mobility [6]. Several in-band and sensor-assisted beam alignment protocols requiring the angle of arrival/departure predict the next best-aligned beam [7], [8], [9], [10], [11]. Although these works address beam misalignment for static links, such predictive mechanisms do not recover the link RSS during LoS blockage events while the user is moving. Maintaining the communication link's RSS during an LoS blockage event perforce requires an NLoS/reflected path. As the scattering environment changes with user mobility, a one-time environment scan is not sufficient to identify usable reflected paths. Such reflected paths can therefore be discovered only by active probing. Let N_BS and N_MS be the number of beams available at the BS and the mobile respectively. For a given BS transmit beam, a mobile might discover that one or more of these N_MS beams (used in receive mode) are NLoS paths. Similarly, for a given receive beam of the mobile, the BS might find one or more of its transmit beams to be NLoS paths to the mobile. Both the BS and the mobile must periodically search for and store the discovered NLoS paths while communicating on the LoS beams. Storing the discovered paths for later use is necessary since blockage events are unpredictable. When blockage happens, the NLoS beams stored in memory are employed to revive the link RSS. Taking the example of a static user, where the scattering environment remains the same throughout, for a given LoS transmit beam of the BS, the mobile sweeps through all its N_MS receive beams to identify available NLoS beams. Due to the presence of a rich scattering environment indoors, especially due to the walls, the mobile potentially discovers multiple NLoS paths. From our extensive experiments, we found that the RSS of these NLoS beams is typically about 10 dB less than that of the LoS beams. By listening on the discovered NLoS beam, the mobile can help the BS discover its available NLoS transmit beams corresponding to the receive beam discovered by the mobile. To accomplish this, the BS transmits over all the N_BS beams while the mobile makes measurements on the NLoS beam. At the end of this process, an NLoS BS-mobile beam pair is found. The BS and the mobile thereby store in their memory at least N_BS NLoS beam pairs, one for each BS transmit beam, and in case of blockage use the stored pair corresponding to the particular BS transmit beam in use at the onset of blockage. Due to mobility, the scattering environment changes over time, and so the above protocol is periodically employed to discover NLoS beams at appropriate intervals of about 100 ms. An important additional aspect to consider while designing a protocol for battery-driven mobiles is the reduction of modem power consumption. Frequent measurements using the beams to discover NLoS paths not only consume wireless resources, but persistent usage of the radio front end also consumes a significant amount of device power.
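As a concrete illustration of the two-stage discovery just described, the sketch below is our own paraphrase in Python, not code from the paper: measure_rss is a hypothetical measurement primitive standing in for the radio's RSS report, the 25-beam codebook size anticipates the testbed described in Section III, and the 10 dB margin follows the reported NLoS-to-LoS gap.

from typing import Callable, Optional, Tuple

N_BS = 25  # transmit beams at the base station (testbed codebook size)
N_MS = 25  # receive beams at the mobile

def discover_nlos_pair(bs_los_beam: int,
                       ms_los_beam: int,
                       los_rss: float,
                       measure_rss: Callable[[int, int], float],
                       margin_db: float = 10.0) -> Optional[Tuple[int, int]]:
    """Return the best (bs_beam, ms_beam) backup NLoS pair, or None.

    Stage 1: the mobile sweeps its receive beams (skipping its LoS beam)
    while the BS stays on its current LoS transmit beam, keeping candidates
    whose RSS is within margin_db of the LoS RSS.
    Stage 2: the mobile listens on the best candidate while the BS sweeps
    its transmit beams, and the strongest BS beam is reported back.
    """
    candidates = []
    for ms_beam in range(N_MS):
        if ms_beam == ms_los_beam:
            continue
        rss = measure_rss(bs_los_beam, ms_beam)
        if rss >= los_rss - margin_db:
            candidates.append((rss, ms_beam))
    if not candidates:
        return None  # no usable reflected path found this round
    _, best_ms_beam = max(candidates)

    best_bs_beam = max(range(N_BS),
                       key=lambda b: measure_rss(b, best_ms_beam))
    return best_bs_beam, best_ms_beam  # stored as the backup pair

In practice the returned pair would be stored keyed by the BS transmit beam in use, and refreshed on each periodic re-scan.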
To discover an NLoS beam pair, the mobile must first perform a spatial scan, measure RSS, and identify a good NLoS receive beam. Then it initiates a BS transmit beam sweep to discover a good NLoS beam for the BS. With a large number of beams both at the BS and the mobile, this process requires the mobile to make a significant number of measurements. For example, with a beamwidth of 5° to cover a 120° sector, at least 24 beams are needed; therefore the NLoS beam discovery process may require up to 48 measurements. As the scattering environment changes with mobility, this process must be repeated periodically. The periodicity of this process depends on the Beam Coherence Time (BCT) [12], defined as the duration after which a beam change is necessary to restore the link RSS to the previous maximum. The NLoS beam discovery protocol must run at least once within each BCT on the mobile. The BCT is dependent on motion: the faster a user is moving, the shorter the BCT. To summarize, the complete protocol to recover the link RSS during blockage events has the following features: • A dynamic mechanism to discover NLoS beams both at the BS and the mobile. • Storage of the most appropriate backup NLoS beam pairs. • Optimization of the device power consumption by choosing the optimal NLoS beam pair re-scanning interval. • Most importantly, employing the discovered NLoS beam pair when blockage happens, to avoid the need for network re-connection after transient blockage events. The UNBLOCK protocol's parameters have been tuned based on extensive measurement data and it has been validated in various indoor office environments. We first ran extensive measurements in the Wisenbaker Engineering Building, a three-storied, 30,000 sq. ft. department building [13]. Experiments were performed in several classrooms, corridors, and graduate offices to study the RSS of paths reflected from various surfaces. We found that the RSS of the reflected beams is around 10 to 14 dB less than that of the LoS beams in these environments. Along with the measurement studies, we conducted experiments to study the BCT using NLoS beams for several pedestrian motion patterns. Based on these observations, we have designed the UNBLOCK protocol to be a low-complexity, power-efficient, and dynamic blockage recovery protocol. III. TESTBED AND EXPERIMENTATION The fundamental goal of our experiments is to understand both the signal and temporal characteristics of multipath/NLoS paths in the mm-Wave spectrum. Our experiments have focused on NLoS path behaviour in indoor environments, where, as our experiments have shown, reflected paths are typically present. An accurate understanding of multipath characteristics is important for designing a protocol to recover the link RSS during blockage events. In crowded indoor environments like conference halls and classrooms, humans act as the primary blockers, whereas in household and workplace environments, the mobility of the user can result in transient blockage. We conducted experiments in various locations of our large department building, as a representative of office building environments, using a 60 GHz software-defined radio testbed [14]. A. Testbed The testbed is built around a National Instruments software-defined mm-Wave transceiver system with two nodes [14]. Each node has a chassis with high-speed backplane interconnections between several slots. The FPGA cards inserted in these slots communicate with each other through the backplane interconnections and are programmed to form the transmit and receive chains of mm-Wave radios.
A SiBeam phased array is interfaced with each node. These phased arrays have 24 antenna elements, 12 each for transmit and receive beamforming. With independent transmit and receive RF chains, this provides analog beamforming. Although it is possible to create a large number of beams with the arrays, we used 25 equally spaced narrow beams, roughly covering a sector of 120 degrees, obtained by programming 2-bit phase weights for each element. The element phase weights are predetermined to obtain the desired beam patterns of approximately 20 degrees beamwidth. The 25 predetermined beams form the beamforming codebook. For ease of understanding, we refer to the two bidirectional nodes as "base station" and "user". In the measurement experiments, we transmit single-carrier symbols of 2 GHz bandwidth at a 60 GHz carrier frequency in a slot that has a duration of 100 microseconds. A frame of 10 ms duration is used for time-division-duplexed transmission. A frame is divided into 100 slots, with the first 50 slots for downlink and the rest for uplink operation. Four slots in every 50 carry reference signals and are used to time-synchronize the base station and user nodes. During our measurement studies, the user node measures the RSS and signal-to-noise ratio (SNR) of transmissions from the base station node. A beam index from the chosen codebook can be changed every slot; therefore, our codebook, which has 25 different indices, can be swept in 25 slots, i.e., 2.5 ms. We performed experiments with two different codebooks, covering azimuth and full space. Beams in each codebook approximately cover a 120° sector. Our narrow-beam codebook has an approximate beamwidth of 20°. B. RSS of Reflected Paths Upon encountering a medium, electromagnetic radiation may be transmitted, reflected, or absorbed. The reflectivity of radiation from a surface usually increases as the frequency increases. Based on the penetration depth, radiation may transmit through or be completely absorbed by the medium. Because of its shorter wavelength, mm-Wave radiation is reflected by common building materials. Penetration and reflection losses [15] from our experiments for some of the common obstacles are tabulated in Table I. The reflected radiation from typical building surfaces is measurable and usually above the noise threshold. The total noise power from all the sources in our system at room temperature, for a 2 GHz bandwidth, is −73 dBm; in scenarios where the RSS is below this threshold, we indicate Noise Floor (NF). Therefore we can rely on these reflected paths to maintain the link, perhaps at a reduced rate, during a temporary blockage. To quantify the RSS of reflected signals, measurements were taken at more than 50 locations inside the building. In particular, several measurements were taken in office-like environments, corridors, and spaces with concrete walls; Fig. 1 shows some of the test environments. We observed that, on average, the RSS of paths reflected from wooden walls is 10 dB less than that of the LoS path, whereas it is 13 dB less from concrete walls. To make accurate measurements of the RSS on the reflected beams, it was ensured that the additional path loss between the reflected path and the LoS path was negligible. Through observations, it was found that narrow NLoS beams have better RSS than wider beams, but the system designer must also take into account the trade-off between NLoS beam discovery latency and the gains offered for a given beamwidth. As each narrow beam covers a smaller spatial area, more measurements are required to discover NLoS beams.
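As a quick plausibility check on the timing and noise figures above, the following arithmetic sketch uses only values quoted in the text; attributing the gap between the thermal floor and the reported system noise to receiver noise figure and implementation losses is our inference, not a statement from the paper.

import math

# One beam index per 100 us slot; the codebook has 25 beams.
SLOT_US = 100
N_BEAMS = 25
sweep_ms = N_BEAMS * SLOT_US / 1000
print(f"one codebook sweep: {sweep_ms} ms")          # 2.5 ms
print(f"mobile + BS sweeps: {2 * sweep_ms} ms")      # 5.0 ms per discovery round

# Thermal noise floor for a 2 GHz receiver bandwidth at room temperature:
# kTB in dBm = -174 dBm/Hz + 10*log10(bandwidth in Hz).
thermal_dbm = -174 + 10 * math.log10(2e9)
print(f"thermal floor: {thermal_dbm:.1f} dBm")       # about -81 dBm

# The text reports about -73 dBm total system noise; the ~8 dB gap is
# consistent with receiver noise figure plus implementation losses
# (our inference, not a figure stated in the paper).
print(f"implied excess noise: {-73 - thermal_dbm:.1f} dB")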
From the experiments, it was observed that to recover from blockage during mobility, it suffices to keep just one beam, the best reflected/NLoS beam, in memory. Another important observation from the experiments is that during blockage events, NLoS beam operation at the mobile alone can recover the RSS and maintain the link in scenarios where the RSS is above the noise floor of the mobile's receiver. Optimizing the NLoS beams at both the BS and the mobile, however, can improve the link RSS by up to 10 dB over using an NLoS beam at the mobile alone. C. Beam Coherence Time It is equally important to study how the RSS of a particular reflected beam changes with user mobility. As the user moves from one location to another, the receive beam of the mobile has to be changed to better align with the BS beam. As the alignment is disrupted over time, the RSS of the beam degrades. How long a particular mobile beam remains aligned with that of the BS depends on the mobility of the user. As we rely on scatterers for blockage recovery, which are more common in indoor than outdoor environments, the focus was on pedestrian mobility. The temporal changes in RSS for both translational and rotational motion patterns of the user were studied. For any beam pattern, the high-gain region is within 3 dB of the maximum, i.e., between the half-power points of the main lobe. Beyond the 3 dB beamwidth, the gain in the angular direction falls off significantly from the peak and is often unpredictable. Fig. 2 shows the radiation pattern of the beams used in the experiments. Therefore, it is necessary to change the beam after misalignment results in more than 3 dB loss. We formally define this duration as the "Beam Coherence Time" [16], [17]: the duration after which the RSS on a beam drops by 3 dB from the maximum. Similar to the measurement experiments, mobility experiments were performed at several locations in the building. Different mobility patterns, like human walking, hand movements, and array rotation at multiple angular speeds, were studied. Knowledge of the BCT helps in determining the NLoS beam discovery interval. Choosing the right interval for beam discovery not only helps in reliably preserving communication during the blockage but also reduces the device power consumption. Table IIa presents the BCT for users walking at distances of 5 m and 10 m from the BS, with the mobile using a fixed NLoS beam to communicate with the BS. Table IIb shows the BCT for multiple angular speeds using an NLoS beam, when the distance between the BS and the mobile is 5 m. The fundamental idea behind using the BCT to determine the NLoS beam re-scanning interval is that there is likely to be a LoS beam update after this duration, either at the BS or the user, to overcome the misalignment caused by user mobility. If the LoS beam at either the BS or the user is changed, then the previously discovered NLoS beam is no longer valid for recovering from blockage, and must be updated to account for the mobility of the user. From the experiments, it was observed that rotational motion of the user quickly degrades the RSS, and hence the BCT is shorter than for lateral user motion patterns like walking. Also, the use of narrow beams demands frequent updates of NLoS beams, just as it does for LoS beams.
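The following sketch shows one way a re-scan interval could be derived from such BCT measurements; the numbers are placeholders standing in for Table II, which is not reproduced here, and the worst-case rule is our illustration of the reasoning rather than a routine from UNBLOCK itself.

# Placeholder BCT measurements in seconds, standing in for Table II;
# the real values come from the walking and rotation experiments.
bct_measurements = {
    "walk_5m": 0.35,
    "walk_10m": 0.60,
    "rotate_slow": 0.12,
    "rotate_fast": 0.10,   # worst case: rapid rotational motion
}

def rescan_interval_s(bcts):
    """Re-scan at least once per BCT, so the smallest BCT governs."""
    return min(bcts.values())

print(f"NLoS re-scan interval: {rescan_interval_s(bct_measurements) * 1000:.0f} ms")
# -> 100 ms, matching the interval adopted by UNBLOCK in Section IV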
It is expected that rapid hand movements while users are interacting with the mobile result in faster angular speeds, and hence shorter BCT durations. This can especially happen while users are playing mobile games or running Virtual Reality (VR) applications. Other movements, such as swinging hands, head roll, and body rotation, although slower, can cause intermittent blockage. While NLoS beam operation at the BS in conjunction with an NLoS beam at the user improves the RSS, it leaves the BCT unchanged, as the BCT is purely dependent on the motion pattern of the user. Additionally, the BCT becomes larger when the user moves farther from the BS because of the smaller subtended angle. IV. UNBLOCK Before describing the protocol for blockage recovery, we elaborate on the NLoS beam discovery procedure at both the BS and the mobile. First, the mobile, while listening to a BS beam using its LoS beam, performs a spatial scan during the available communication epochs with the BS. The mobile measures the RSS on all its beams for a particular BS LoS transmit beam, and discovers NLoS beams with RSS within 10 dB of its LoS beam. At the end of this procedure, the mobile has obtained an NLoS path to the BS. Next, the BS needs to optimize its NLoS beam; this can be achieved by the mobile measuring the RSS using the discovered NLoS beam while the BS sweeps over its transmit beams. The mobile communicates back to the BS the identity of the transmit beam that has the highest RSS. Thereby, both the BS and the mobile discover their respective NLoS beams that can be used in case of blockage. Based on the BCT data in Table II, the smallest BCT is around 100 ms. Therefore, the NLoS beam discovery procedure is repeated every 100 milliseconds. Fig. 3 shows the state machine of the UNBLOCK protocol. To limit the discussion to the scope of this work, we assume that both the BS and the mobile run a beam alignment protocol to align their LoS beams to account for user mobility. This happens in the "Beam Adaptation" (BA) state. The "NO" state denotes Normal Operation, in which both the mobile and the BS continue to use an aligned LoS beam pair. These LoS beams are acquired during the initial acquisition. By the end of the initial acquisition process, both the BS and the mobile have determined their respective LoS beams. Over time, this alignment gets disturbed because of user mobility. Whenever the mobile finds that its current RSS has dropped by 3 dB from the previous measurement occasion, it moves to the "Beam Adaptation" (BA) state. In the BA state, the mobile first tries to recover the RSS by aligning its LoS beam, and, later, the BS adapts its LoS beam on an as-needed basis. Upon successful alignment, they continue in the "NO" state. Periodically, in our case every 100 ms, the mobile moves to the "MS NBD" state, i.e., "Mobile-Side NLoS Beam Discovery". In this state, the mobile performs a spatial scan to find an NLoS receive beam for the current LoS beam of the BS. The mobile measures the RSS on each of its beams and identifies beams with RSS within 10 dB of the current LoS beam. Among all the NLoS beams discovered, it stores the beam index with the highest RSS. While the mobile is listening to the BS on the NLoS beam, "Base-Station NLoS Beam Discovery" (BS NBD) is triggered. In "BS NBD", the BS sweeps over all its beams, one at a time, and the mobile measures the RSS on its discovered NLoS beam. Upon finding the BS beam with the highest RSS, the mobile communicates the beam information to the BS. The BS stores the NLoS beam information. This process is repeated every 100 ms to account for the changing scatterers caused by user mobility. If at any time during "NO" operation the current RSS at the mobile drops by 10 dB from that of the previous measurement, the mobile moves to the "NBO" state. In this "NLoS Beam Operation" state, the mobile first reverts to the stored NLoS beam and informs the BS via control packet communication to switch the BS beam to the NLoS beam. Upon detecting the blockage event from the control packets, the BS switches to the NLoS beam, retrieving the stored information from its memory.
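A condensed sketch of this state machine follows, as our paraphrase of the description above; the radio object and its callbacks (rss_dropped_by, align_los_beam, and so on) are hypothetical stand-ins for radio-specific operations, and the NBO exit condition is simplified.

from enum import Enum, auto

class State(Enum):
    NO = auto()      # Normal Operation on the aligned LoS beam pair
    BA = auto()      # Beam Adaptation: realign the LoS beams
    MS_NBD = auto()  # Mobile-Side NLoS Beam Discovery (every ~100 ms)
    BS_NBD = auto()  # Base-Station NLoS Beam Discovery
    NBO = auto()     # NLoS Beam Operation during a blockage event

RESCAN_MS = 100   # from the smallest measured BCT
ALIGN_DB = 3.0    # RSS drop that triggers LoS realignment
BLOCK_DB = 10.0   # RSS drop treated as the onset of blockage

def step(state, radio, now_ms, last_scan_ms):
    """One decision step of the (simplified) UNBLOCK state machine.

    `radio` is a hypothetical object exposing RSS-drop checks, LoS beam
    alignment, the two discovery sweeps, and the stored backup NLoS pair.
    Returns the next state and the updated last-scan timestamp.
    """
    if state is State.NO:
        if radio.rss_dropped_by(BLOCK_DB):      # sudden deep fade: blockage
            radio.switch_to_backup_pair()       # revert to the stored NLoS pair
            return State.NBO, last_scan_ms
        if radio.rss_dropped_by(ALIGN_DB):      # mild fade: misalignment
            return State.BA, last_scan_ms
        if now_ms - last_scan_ms >= RESCAN_MS:  # periodic NLoS re-scan
            return State.MS_NBD, now_ms
        return State.NO, last_scan_ms
    if state is State.BA:
        radio.align_los_beam()                  # mobile first, BS as needed
        return State.NO, last_scan_ms
    if state is State.MS_NBD:
        radio.scan_mobile_nlos_beam()           # sweep receive beams, keep best
        return State.BS_NBD, last_scan_ms
    if state is State.BS_NBD:
        radio.scan_bs_nlos_beam()               # BS sweep; mobile reports best
        return State.NO, last_scan_ms           # backup pair now stored
    if state is State.NBO:
        if radio.los_recovered():               # transient blockage has cleared
            return State.NO, last_scan_ms
        return State.NBO, last_scan_ms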
There are ample measurement opportunities in the 3GPP NR standards to discover NLoS beams. Section VI presents how UNBLOCK can run in adherence to the existing standards. V. IMPLEMENTATION AND EVALUATION We implemented the UNBLOCK protocol on a 60 GHz software-defined radio testbed. In the testbed, a beam can be changed every 100 µs. With 25 beams in the beamforming codebook, it takes 2.5 ms each at the mobile and the BS to search over all the beams. Every 100 ms, the UNBLOCK protocol on the testbed needs 5 ms to identify an NLoS backup beam pair. The beam alignment protocol proposed in [18] for LoS beam adaptation was implemented to evaluate UNBLOCK under user mobility. The UNBLOCK protocol was programmed in LabVIEW software at both the BS and the mobile nodes. UNBLOCK was evaluated in two different scenarios. • The BS and the mobile are stationary, while a human walks around and blocks the LoS beam pair. • The BS and the blockers are stationary, while a user with a phased array walks in such a way that it obstructs the LoS path to the BS. The experiments were repeated in several locations in the building. It was observed that in both scenarios, UNBLOCK succeeds in recovering the link during blockage, thereby preserving time synchronization. Fig. 5 shows a blockage event in an experiment in which a human walks across the path between a static BS and a mobile and briefly blocks the link. It can be observed that the blockage occurs quickly; therefore, it is not possible to predict such events. In Fig. 5, the RSS of the LoS link, which is −58 dBm, falls to −72 dBm within 35 milliseconds. The noise floor, i.e., the total noise power from all the sources in our transceiver with 2 GHz bandwidth, is −74 dBm. In a communication system with 2 GHz operating bandwidth, the thermal noise at room temperature alone is −80.8 dBm. When the RSS hits the NF, the mobile loses time synchronization and connectivity with the BS. Without the UNBLOCK protocol, one can observe that the mobile would lose synchronization with the BS and would need to re-initiate the initial search process. However, when the UNBLOCK protocol is running, it detects the RSS drop at around 50 ms, and the backup beam pair in memory maintains the link RSS to preserve time synchronization. The experiments were repeated at 50 different locations with the BS remaining static, while the user with the phased array in hand either turns away from the BS to obstruct the LoS link with his body, or walks past a metal board that blocks the LoS link. It was observed that the UNBLOCK protocol preserves time synchronization in 96% of the experiments. VI. UNBLOCK FOR 5G NR Beam management at the base station, called the gNodeB, and the user equipment (UE) in the 5G New Radio standards can be broadly divided into two phases: • The Beam Selection phase. During this phase, a mobile performs Acquisition/Re-Acquisition. • The Beam Refinement phase. During this phase, the gNodeB and UE adapt their beams
periodically, determine a good reflected beam, and perform blockage recovery upon the onset of blockage. The 3GPP standards [4] make provisions to complete these two phases at the gNodeB and UE to identify an aligned beam pair. We describe how UNBLOCK can be implemented using the mechanisms available in the standards for these phases. The gNodeB transmits synchronization signal blocks (SSBs), control information, and the broadcast information necessary for UEs to discover and connect to it. This is done with up to 64 different beams within a sector, every two frames. Communication between the gNodeB and several UEs happens within a frame, which is of 10 ms duration. Each frame is further divided into slots. Based on the deployment configuration, the number of slots in a frame can be chosen from the set {10, 20, 40, 80, 160}. mm-Wave 5G standards are time-duplexed, meaning the gNodeB and the UE communicate in a synchronized order, but not simultaneously. In a slot, either downlink/uplink data or control information is transmitted. Within each slot, each SSB is transmitted on a particular beam, and there can be up to 4 SSB blocks within a slot. Fig. 4 shows a 5G NR frame. When a UE is turned on, it first searches for these SSB beams and time-synchronizes with the gNodeB. As there is no prior synchronization with the gNodeB, the schedule of these SSBs is unknown and the UE has to wait at least 20 ms on a receive beam. As there are 64 SSBs, it takes 1.28 seconds to test all the gNodeB beams. However, upon successfully connecting to the gNodeB, the UE is time-synchronized with the gNodeB, and is also aware of the SSB beams' temporal locations. In addition, the gNodeB changes the SSB periodicity from 20 ms to 5 ms for the particular UE after the initial access procedure. Hence, the UE has ample opportunity to harvest the reflected beams of the transmitting gNodeB beam every 100 ms.
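The search-time arithmetic quoted above reduces to a few lines; the figures (64 SSB beams, 20 ms bursts before access, 5 ms after) are those given in the text.

# SSB-based search timing, using the values quoted above.
N_SSB_BEAMS = 64
PRE_ACCESS_PERIOD_MS = 20    # SSB burst period before initial access
POST_ACCESS_PERIOD_MS = 5    # periodicity after the UE connects

# Before synchronization the UE dwells one burst period per gNodeB beam
# tested, so exhausting all 64 beams takes:
print(N_SSB_BEAMS * PRE_ACCESS_PERIOD_MS / 1000, "s")       # 1.28 s

# After access, a full 64-beam sweep arrives every 5 ms, so UNBLOCK's
# 100 ms NLoS re-scan window contains:
print(100 // POST_ACCESS_PERIOD_MS, "burst opportunities")  # 20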
A. Beam Selection in 5G New Radio As the first step in the network connection procedure, a user device operating in the mm-Wave spectrum initiates gNodeB discovery by searching for synchronization signals. This is a directional search using several beams to identify the direction of the best possible RSS. The gNodeB sweeps all 64 SSBs consecutively, and repeats the sweep every 20 ms. The UE discovers one or multiple SSB beams and uses one of those beams to communicate back to the gNodeB to complete the connection. The gNodeB changes the periodicity of these SSB bursts to 5 ms for UEs after acquisition. Therefore, using the UNBLOCK protocol, the UE has enough opportunities not only to make measurements on the current receive beam but also to identify the reflected NLoS beams of a transmitting beam of the gNodeB. Upon hearing the UE's connection request on one of its SSB beams, the gNodeB designates that SSB beam as the communicating beam for that UE. By the end of this procedure, both the gNodeB and the UE have identified the transmit-receive beam pair for communication. The UE receives all subsequent SSB, channel state information reference signal (CSI-RS), and scheduling information once it connects to the gNodeB. B. Beam Refinement in 5G New Radio After Beam Selection, the gNodeB assigns each user a set of beams with either SSB or CSI-RS resources for channel quality reporting. This helps the gNodeB manage its beams as and when necessary, since user motion or hand movements quickly disrupt beam alignment. In the beam refinement phase, the gNodeB and UE adapt the transmit-receive configuration identified during the initial connection to handle mobility and to improve the quality of service. The UE also reports channel state information to the gNodeB, which the gNodeB uses to adapt its transmit beams. As the user moves, the previously identified beams might become stale, either through misalignment or blockage, and new beams are necessary to sustain connectivity. When beam misalignment is detected, either at the gNodeB through reports from the UE or at the UE from its own measurements, any beam adaptation protocol can be employed to manage the transmit and receive beams. UNBLOCK at the mobile side can identify NLoS beams for a given gNodeB transmit beam using either the SSB resources or the CSI-RS signals scheduled on that beam. To find the NLoS transmit beam of the gNodeB, UNBLOCK can use the resources available for either Beam Selection or Beam Refinement. In the case of the Beam Selection phase, the gNodeB sweeps over all its 64 transmit beams in 5 ms. UNBLOCK can also identify the NLoS gNodeB transmit beam from an exclusive set of beams allocated by the gNodeB. It is also possible to use beams in both the Beam Selection and Beam Refinement phases to quickly narrow down the NLoS beams. Unlike in our implementation, beams can be changed on a per-symbol basis. Symbols within an SSB have a duration of either 4.46 or 8.92 microseconds, and the CSI-RS duration can be 8.92 microseconds. Therefore, the mobile can use a large number of narrow beams to harvest reflected paths. Although faster search methods like hierarchical search can be employed, to present the fundamental idea of UNBLOCK we limit the discussion to an exhaustive search. VII. RELATED WORK The authors of [19] proposed a measurement system and conducted human blockage experiments using a similar testbed [14]. Our experiments and the NYU measurement campaign [19] confirm the existence of rich multipath in indoor environments at mm-Wave frequencies. The work in [20] modelled the common causes of packet decoding errors in mm-Wave communications as a linear dynamical system and proposed tests to identify the cause of an error. It identifies blockage after its occurrence, and is thus a reactive solution, unlike the proactive UNBLOCK protocol, which always maintains a backup beam pair. Using a multihop mm-Wave network to avoid blockage by forwarding packets to the destination via unblocked links is studied in [21]. In [22], hybrid beamforming is explored to recover instantaneously from blockage. That work proposes antenna diversity to design and maintain several beams in different directions at the transmitter and receiver. In case an antenna beam suffers from blockage, the proposed method invokes one of the available beams. Despite requiring simultaneously active beams, the method still needs a search to identify the unblocked beams. Another reactive approach to overcoming blockage is presented in [23], in which several beams are used simultaneously once blockage occurs. Most of the works in the literature propose reactive mechanisms to overcome blockage; in contrast, the UNBLOCK protocol is a proactive, dynamic approach that avoids outage during user mobility and always keeps a backup beam pair to preserve time synchronization for immediate use when blockage happens. VIII. CONCLUSION mm-Wave signals are highly susceptible to blockage. Avoiding link outage and preserving time synchronization during blockage events is important for conserving mobile device power and circumventing high reconnection latency. The UNBLOCK protocol harvests viable NLoS paths in a timely fashion and immediately employs them upon the onset of blockage.
It thereby ensures that mobile devices maintain time synchronization with the BS during transient blockage events. When the transient blockage disappears, the LoS link is immediately restored by communicating at the epochs dedicated by the BS to the particular mobile. Measurement experiments and an evaluation on software-defined radios are reported to assess the efficacy of the UNBLOCK protocol.
2021-02-21T14:23:14.639Z
2021-01-05T00:00:00.000
{ "year": 2021, "sha1": "d504a31cc04d569b6cf7cbc813064fe94bdbca3d", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2104.02658", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d81888649003218bd7717022768d6c8b2931d24e", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science", "Engineering" ] }
3067979
pes2o/s2orc
v3-fos-license
Towards More Nuanced Classification of NGOs and Their Services to Improve Integrated Planning across Disaster Phases Nongovernmental organizations (NGOs) are being integrated into U.S. strategies to expand the services that are available during health security threats like disasters. Identifying better ways to classify NGOs and their services could optimize disaster planning. We surveyed NGOs about the types of services they provided during different disaster phases. Survey responses were used to categorize NGO services as core (critical to fulfilling their organizational mission) or adaptive (implemented during a disaster based on community need). We also classified NGOs as being core or adaptive types of organizations by calculating the percentage of each NGO's services classified as core. Service types classified as core were mainly social services, while adaptive service types were those typically relied upon during disasters (e.g., warehousing, food services, etc.). In total, 120 NGOs were classified as core organizations, meaning they mainly provided the same services across disaster phases, while 100 NGOs were adaptive organizations, meaning their services changed. Adaptive NGOs were eight times more likely to report routinely participating in disaster planning as compared to core NGOs. One reason for this association may be that adaptive NGOs are more aware of the changing needs in their communities across disaster phases because of their involvement in disaster planning. Introduction The contribution of local nongovernmental organizations (NGOs) in the U.S. and abroad to disaster planning, response, and recovery has been well demonstrated [1]. For example, countries like China and Japan are frequently affected by natural disasters and have therefore developed similar strategies to engage NGOs across disaster phases [2][3][4][5]. NGO participation in relief activities and long-term support of victims is an advantage, given that resources to support these types of activities can be stretched thin, particularly when disasters are increasing in frequency and growing in scale. Consequently, the U.S. government has integrated NGOs into national strategies, including the National Health Security Strategy and Implementation Plan 2015-2018 [6], which promotes a framework in which local, state, and federal agencies collaborate with NGOs and businesses to advance national health security. This framework strongly emphasizes integrated planning as an activity that includes NGOs, public health agencies, emergency management, faith-based groups, and others across communities. Prior evidence of progress toward better integrated planning includes an expansion of regional planning alliances and participation of organizations in coalitions for emergency planning. Also of note is NGO integration into the National Disaster Recovery Framework [7], which specifies the role of NGOs in pre-disaster planning, contributions from NGOs as part of Voluntary Organizations Active in Disaster (VOAD), and the types of services they might provide during a disaster. Study Design The study team collected data from NGOs using a cross-sectional survey, which was in the field during March and April of 2015. The survey was conducted as part of a larger study to gain a better understanding of how NGOs participate in disasters across nations, in this case the U.S. and China. However, only the U.S. data is analyzed and presented here.
More information about the larger study is provided in a toolkit developed to facilitate better NGO-NGO and NGO-government coordination in disaster response and recovery [17]. The purpose of the survey was to answer three questions: (1) What services do NGOs provide during disaster response? (2) How do these disaster services differ from services provided during routine times or long-term disaster recovery? (3) How is the NGO provision of services during times of disaster associated with their regular participation in disaster planning in their communities? These research questions were developed to begin to test the model proposed by Acosta and Chandra (2013), which hypothesized that during a disaster, NGOs ramp up at least their routine services. We were interested in understanding the extent to which NGOs provide a set of core services, those that are critically important or central to fulfilling their organizational mission, during a disaster. If NGOs provide a set of core services, planners will understand what NGOs bring to the table, when their services are needed, and when NGOs will likely need to pull back support. As an example, an NGO whose mission is to ensure that outreach and health workers are staffed where and when they are most needed in the community may report the identification and coordination of volunteers as its core service. However, if NGOs adopt new services during a disaster based on the needs of the community (we refer to these as adaptive services), replacing or adding to their core services, this may strengthen their role in a disaster. But it would then be critical for them to engage disaster planners to ensure clarity on what services NGOs are offering and at what point during the response/recovery cycle those services will change. Gathering information from NGOs about their core and adaptive services can help emergency planners better engage NGOs by providing a new frame by which to understand and inventory the assets that NGOs bring to disaster response and recovery. Study Sample and Informed Consent We assembled a list of NGOs for the survey by reviewing member lists for the state chapters of Voluntary Organizations Active in Disaster (VOAD) and identifying contact information for VOAD member organizations. VOADs are typically coalitions of organizations covering a range of administrative geographies (e.g., city, county, regional, state, multi-state) whose mission is to mitigate and alleviate the impact of disasters. A link to an online survey was emailed to 576 potential VOAD organization respondents in March 2015, and three email reminders were sent to encourage participation. We asked respondents to complete the survey on behalf of their entire organization. Informed consent was obtained electronically prior to participation in the survey. Respondents were offered the option to receive a USD 20 gift card to reimburse them for the time spent completing the survey. The survey methods and content were reviewed and approved by the RAND Corporation's Human Subjects Protection Committee. At the time the survey was closed in April of 2015, 241 organizations had responded to the survey, a response rate of 42%, which is comparable to recent response rates for web survey administration [18]. Three surveys were missing responses to most or all questions and were dropped from the sample. An additional 18 respondents reported that they did not provide any services and were also dropped, resulting in a final analytic sample of N = 220.
Based on the respondents who answered a survey item on the organization's geography, all U.S. regions were represented by the organizations: the Northwest (20.5%); Southwest (25.9%); Midwest (27.3%); Northeast (36.4%); and Southeast (41.8%). Measures and Statistical Analyses All analyses were conducted using SAS 9.3 (SAS Institute, Cary, NC, USA). NGO Characteristics We asked NGOs questions about whether their organization had members and how many they had, as well as about their membership structure (individuals vs. organizational members), history (e.g., length of existence), staffing situation (e.g., paid employees or volunteers), geography (e.g., rural vs. urban, U.S. region), and populations served (e.g., age, racial/ethnic groups, income level, etc.). Simple frequencies (means or percentages) were calculated for each of the NGO characteristics. NGO Community Activities We asked NGOs to select, from a pre-populated list, all of the activities their organization undertook to (1) build resilience in the community; (2) partner with government and nongovernment agencies; (3) facilitate transition from disaster response to recovery; and (4) engage and serve the community. We also asked NGOs to select, from a pre-populated list, (5) the types of information they used for planning and decision making during recovery. Simple frequencies were calculated for each of the activities or information types and are presented in Table 1; the ones with asterisks were found to be associated with the dependent variable "NGO routinely participating in disaster planning" at p < 0.05 in univariate logistic regression and are therefore used as covariates for the multivariable logistic regression described below. * These items had statistically significant univariate associations (p < 0.05) with the dependent variable "NGO routinely participating in disaster planning" discussed below. NGO Disaster Services We asked NGOs to select, from a pre-populated list, all of the services they provided during: disaster response (within one month of the disaster), immediate recovery (one to three months after the disaster), and long-term recovery or routine times (more than three months after the disaster). Services they could select were: clothing; food services; animal services; warehousing (e.g., storing food, clothes, and other goods); mental health or counseling; spiritual support; job and unemployment assistance; housing (temporary or permanent); medical care; medication or pharmacy; case management, information or referral services; transportation; child services, child care, other child support; senior services; family violence (e.g., domestic violence, child abuse, interpersonal violence); immigrant services; financial assistance, including referrals for financial assistance; legal, insurance, and mediation services; construction or infrastructure development; volunteer opportunities; community liaison (e.g., representing community needs or interests); and preparing community members for the next disaster. This list was generated from a prior survey of NGO disaster services after Hurricane Sandy [19]. 
Classifying NGO Disaster Services as Core or Adaptive We then classified each disaster service as either core or adaptive by first calculating the proportion of NGOs that self-reported offering the service during all three phases of disaster (disaster response, short-term recovery, and long-term recovery or routine times) and the proportion of NGOs that reported offering the service during only one or two phases of disaster. We then ran one-proportion Z-tests to determine whether the proportions differed. If a higher proportion of NGOs reported that the service was offered during all three phases of disaster and the Z-test was significant at p < 0.05, the service was classified as "core". If a higher proportion of NGOs reported that the service was offered during only one or two phases of disaster and the Z-test was significant at p < 0.05, the service was classified as "adaptive" (Table 2). Our reasoning was that if a service shifts by disaster phase and/or is not available at all times during disaster response and recovery, then it is not a core service for NGOs. * If more NGOs reported that the service was offered during all three phases of disaster and the Z-test was significant at p < 0.05, the service was classified as "core". If more NGOs reported that the service was offered during only one or two phases of disaster and the Z-test was significant at p < 0.05, the service was classified as "adaptive". Classifying NGOs as Core or Adaptive Organizations In addition to the classification of each service as core or adaptive, we also classified NGOs as core or adaptive types of organizations overall by calculating the percentage of each NGO's services that were classified as core. We conducted a sensitivity analysis on the core services variable to identify the optimal percentile cutoff for classifying NGOs as core vs. adaptive organizations. Optimal was defined as a cutoff in core service percentiles that resulted in minimally overlapping distributions of core service percentiles between core vs. adaptive organizations. Figure 1 shows that there is minimal overlap in core service percentiles between core and adaptive organizations when the cutoff is set at 75%. This lack of overlap in core service percentiles means that the core NGO category is distinct in its definition compared to the adaptive NGO category and should therefore maximize our ability to detect differences between the NGO types. As a result, NGOs were classified as core organizations if 75-100% of their services were classified as core. NGOs with 74% or lower core services were classified as adaptive organizations. The one service type (financial assistance) that was classified as both core and adaptive was treated as core in this analysis.
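A minimal sketch of this classification procedure is given below. One plausible reading of the one-proportion Z-test, since among NGOs offering a service the two proportions sum to one, is a two-sided test of the all-three-phases share against 0.5; that reading, and the helper names, are our assumptions rather than the authors' SAS code.

from math import sqrt
from scipy.stats import norm

def classify_service(n_all_three, n_one_or_two, alpha=0.05):
    """Label one service 'core', 'adaptive', or 'unclassified' from the
    sample-wide counts of NGOs offering it in all three phases vs. in only
    one or two phases (one plausible reading of the paper's Z-test)."""
    n = n_all_three + n_one_or_two
    if n == 0:
        return "unclassified"
    p_hat = n_all_three / n
    z = (p_hat - 0.5) / sqrt(0.25 / n)      # one-proportion Z-test, null p = 0.5
    p_value = 2 * (1 - norm.cdf(abs(z)))    # two-sided
    if p_value >= alpha:
        return "unclassified"
    return "core" if p_hat > 0.5 else "adaptive"

def classify_ngo(service_labels, core_cutoff=0.75):
    """Label an NGO 'core' if at least 75% of the services it offers carry
    the 'core' label (the sensitivity-analysis cutoff), else 'adaptive'."""
    core_share = service_labels.count("core") / len(service_labels)
    return "core" if core_share >= core_cutoff else "adaptive"

# Example: 150 NGOs offer a service in all three phases, 60 in one or two.
print(classify_service(150, 60))                            # -> "core"
print(classify_ngo(["core", "core", "adaptive", "core"]))   # -> "core"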
Identifying NGOs that Routinely Participate in Disaster Planning Respondents indicated their level of agreement (strongly disagree to strongly agree) with the statement "Our organization routinely participates in disaster planning with government and nongovernmental partners in our community." We then dichotomized this to create the dependent variable by categorizing respondents as being routinely involved in disaster planning (i.e., answered "agree" or "strongly agree" to the statement) or not being routinely involved in disaster planning (i.e., answered "disagree" or "strongly disagree" to the statement). Modeling Predictors of an NGO Routinely Participating in Disaster Planning A multivariable logistic regression examined the association between NGO type (core vs. adaptive) and NGO report of routine participation in disaster planning (the dependent variable) and included NGO participation in community activities (see Table 1) as covariates. Logistic regression was first used to identify statistically significant univariate associations (p < 0.05) between community activities and the dependent variable. All covariates significantly associated with the dependent variable were included in the multivariable logistic regression. Likelihood ratio (LR) testing was used to identify concise models, in which each covariate was tested to see whether its inclusion resulted in a significantly different model compared to one without it. Because the purpose of the model is to identify key activities associated with routine participation in disaster planning in communities, LR testing of models set significance at 0.10, which allows the inclusion of activities or drivers that are meaningful but might not be identified at stricter significance levels. NGO Characteristics Nearly 84% of the NGOs surveyed reported that they had been in existence for more than ten years. Over 60% indicated that they were an organization with members, either individual members (e.g., congregants, grassroots volunteers) or organizational members (i.e., other organizations with formalized relationships to them). A vast majority of organizations reported serving children (81%), the elderly (86%), families (88%), racial and ethnic minorities (77%), low-income populations (85%), and non-English-speaking populations (72%).
In terms of past disaster experience, all NGOs surveyed reported some experience with disasters: 84% reported past participation in all three disaster phases (planning, response, and recovery), while 16% reported participating in only one or two phases. Table 2 presents the percentage of NGOs that reported offering each service type during each of the three disaster phases: disaster response, short-term recovery, and long-term recovery. Table 2 also presents the results of an analysis that classifies services as core (i.e., services that NGOs offer consistently across phases of disaster) or adaptive (i.e., services offered during only one or two phases of disaster). Service types classified as core are comprised mainly of social services (e.g., family violence, senior services, immigrant services, etc.), while adaptive service types generally reflect services that are typically relied upon during disaster (e.g., warehousing, food services, clothing, etc.). Medical care was classified as an adaptive service, which is expected, since most NGOs in this sample are not health clinics or other medical facilities. Whether NGOs Are Core or Adaptive We found that 120 NGOs were core organizations, meaning that they mainly provided the same services across all phases of disaster (see Table 2 for service types), and 100 NGOs were adaptive organizations, meaning they tended to provide different services across disaster phases. Core organizations provided an average of five types of services (SD 3.9), and 96% of those services were core services, whereas adaptive organizations provided an average of six types of services (SD 4.0), of which only 27% were core services (see Table 2 for which services were classified as core vs. adaptive). Whether Participation in Disaster Planning Differs for Core vs. Adaptive NGOs Adjusted odds ratios for NGO type (the independent variable) and NGO participation in community activities (covariates) predicting NGO routine participation in disaster planning are presented in Table 3. Likelihood ratio testing results showed that the model in Table 3 contains only essential covariates; that is, dropping each covariate listed in Table 3 resulted in a significantly different model compared to one that included it. One key finding relates to the primary question of NGO type and its relationship to the outcome of routine participation in disaster planning: adaptive NGOs were nearly eight times more likely to report routinely participating in disaster planning compared to core NGOs. Additionally, a specific set of key community activities was found to be independently associated with the outcome of routine participation in disaster planning: NGOs that reported training their program staff in emergency preparedness skills were 6.4 times more likely to report the outcome, and NGOs communicating information to constituents/community members on where to go in an emergency were 6.8 times more likely to report the outcome.
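The modeling strategy described in the Measures section can be sketched as follows, assuming a pandas DataFrame `data` with binary 0/1 columns; the column names are hypothetical, and this statsmodels formulation is our illustration, not the authors' SAS 9.3 implementation.

import numpy as np
import statsmodels.api as sm
from scipy.stats import chi2

def fit_logit(data, outcome, predictors):
    # data: pandas DataFrame, one row per NGO, binary 0/1 columns.
    X = sm.add_constant(data[predictors])
    return sm.Logit(data[outcome], X).fit(disp=False)

# 'plans_routinely' is the outcome; 'adaptive_ngo' encodes NGO type;
# candidate community activities (hypothetical names):
candidates = ["trains_staff_preparedness", "communicates_emergency_info"]

def build_model(data):
    # Step 1: univariate screening of community activities at p < 0.05.
    kept = [c for c in candidates
            if fit_logit(data, "plans_routinely", [c]).pvalues[c] < 0.05]

    # Step 2: multivariable model with NGO type plus screened covariates.
    full = fit_logit(data, "plans_routinely", ["adaptive_ngo"] + kept)

    # Step 3: likelihood-ratio test per covariate at alpha = 0.10:
    # refit without it and compare log-likelihoods (1 df).
    for cov in kept:
        reduced = fit_logit(data, "plans_routinely",
                            ["adaptive_ngo"] + [c for c in kept if c != cov])
        lr_stat = 2 * (full.llf - reduced.llf)
        p = chi2.sf(lr_stat, df=1)
        print(cov, "LR p =", round(p, 3), "-> keep" if p < 0.10 else "-> drop")

    # Adjusted odds ratios (e.g., ~8 for adaptive vs. core in the paper).
    print(np.exp(full.params))
    return full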
Discussion Findings from this survey indicate that NGOs provide many types of services to community residents and that these services can vary across the phases of disaster. Information from this survey was used to categorize NGO disaster services as either core or adaptive, as well as to categorize NGOs themselves as either core or adaptive based on the disaster services they delivered. Examples of core services include senior services, spiritual support, and providing volunteer opportunities. These services can be considered important and unique contributions that emergency planners can readily engage NGOs around and depend on them to deliver throughout the disaster phases and possibly back to the steady state. These may also be services that, because of their durability, can be considered by policymakers as core for establishing contracts with NGOs, thus improving a region's ability to quickly access those NGOs' resources if a disaster occurs. Long delays in reimbursement processes have caused financial difficulties for NGOs (e.g., see our report on disaster case management after Katrina). If core services were under an earlier contract, a major hurdle to agile disaster response could be overcome. Our analyses also explored the concept of both adaptive services (e.g., food services, animal services, transportation, etc.) and adaptive organizations. The idea that NGOs can be flexible with their services, and therefore responsive to the needs of their community, is important for planners to consider, because those organizations may be the best ones to deploy for certain needs as conditions on the ground change. What is particularly compelling is that our analyses also showed that adaptive NGOs were significantly more likely than core NGOs to report routine participation in disaster planning in their communities. One reason for this association may be that adaptive NGOs are more aware of the changing needs in their communities between response, recovery, and routine times because of their involvement in disaster planning. However, a number of details about this finding remain unclear. We are unclear about how NGOs participate in disaster planning and the extent to which that planning is integrated across key community sectors. We do not know the direction of the association between adaptive service models and disaster planning, due to the cross-sectional nature of the survey. We also do not know the quality or amount of the adaptive services delivered by NGOs, precisely the services that are not offered consistently across the disaster phases. However, based on the results of the survey, some NGOs are clearly aware that they change their services across disaster phases. Therefore, planners can leverage this and other findings from this paper to better engage NGOs. Future research should also delve more deeply into understanding how core and adaptive organizations may work differently with the government agencies leading disaster response, and what the advantages and disadvantages are from both the NGO and lead-agency perspectives. It is critical that planners effectively and efficiently engage NGOs; the findings from this paper suggest that planners should ask NGOs not only about their services, but also about how those services change across phases of disaster, what their capacity is for delivering adaptive services, what information they receive from the community that triggers the initiation of adaptive services, and their past experiences in delivering adaptive services. This paper offers a more robust and nuanced taxonomy for classifying NGOs and their services, which should support more reliable disaster resilience-building efforts. Gathering this information should allow planners to develop a more comprehensive landscape of disaster services and a more accurate timeline of when these services begin and end.
Further research should test the assumptions and findings from this analysis by observing how effectively core and adaptive NGOs function in future disaster response and recovery. This paper highlights one crucial area for improving NGO engagement during disasters: better classification of NGOs and their services. However, there are other areas as well, including improving trust and communication between NGOs and government agencies, for which work should be done between disasters. In the context of a trend of limited resources for addressing disasters, improving relationships and efficiency may be the best path toward well-coordinated, cost-managed responses. Limitations The survey was cross-sectional in nature, so the logistic regression was not able to establish causality between NGO types or activities and the outcome. Furthermore, the survey respondents were a convenience sample of NGOs known to participate in disaster-related work, so the results of the study are not necessarily generalizable to all types of NGOs. The results may be more representative of NGOs participating in disaster-related work, but we still recommend caution in generalizing even to this NGO subgroup because of the non-random nature of the sample. In addition, the classifications that we developed for the NGOs (core and adaptive) were based mainly on the theory that service types and models were the most relevant drivers of how NGOs become involved in disaster-related work; there are other ways to categorize NGOs that we did not explore. However, data on NGOs and their disaster experience are limited in their ability to specify the most relevant ways to categorize NGOs. Conclusions Overall, the diversity across NGOs identified here suggests that the government may need to provide different types and levels of support, training, and resources to ensure optimal NGO integration as partners with lead government agencies. For example, it may be useful for local planners to map all key NGO services to community geography and then work with groups of NGOs that provide specific services, using this framework of core and adaptive services. In this way, local planners will know exactly which NGOs will be working together on food services, which will be working on temporary housing, and so on. Given that NGOs will continue to play central roles in disaster response and recovery, their meaningful and appropriate integration into disaster planning is essential. The research presented here provides local emergency planners with more information about the types of NGO disaster services and about key differences in NGO approaches to disaster service delivery.
Correction of Inverted Nipple Using Subcutaneous Turn-Over Flaps to Create a Tent Suspension-Like Effect
Background Many techniques have been reported for the correction of inverted nipples. However, the conventional methods may be insufficient, especially for moderate to severe inversions. We propose a modification of Elsahy's method and report satisfactory results. Methods A single-institutional retrospective review was performed for all patients who received the modified operation. Patient charts were reviewed for demographic data, pertinent preoperative factors such as Han and Hong classification, and clinical outcomes including postoperative nipple height and sensation. Surgical details are described within the main text. Results The review identified 26 female patients amongst whom 47 inverted nipples were corrected using the modified method. The mean nipple height was 9 mm with an average follow-up period of 14 months. Brush stimulation elicited nipple contraction in all patients. There was no recurrence of nipple inversion, nor were there any surgical complications to report. Conclusion The suspension technique is a simple, reliable method for correcting grade II and III nipple inversions.
Introduction
An inverted nipple is a condition in which a portion of or the entire nipple is buried below the plane of the areola. The condition was first described by Cooper in 1840, and the first corrective operation was reported by Kehrer in 1879. In patients with inverted nipple, a relatively short lactiferous duct is attached to the nipple via dense and highly inelastic connective fibers [1,2]. This deformity can pose aesthetic, psychological, and functional problems such as difficulty with breastfeeding. Nipple inversion is not rare, with reported prevalence ranging from 1.8 to 3.3% [1,3]. Over time, a number of methods have been used to correct the condition. Elsahy originally proposed the use of bilateral triangular dermal flaps that cross under the nipple [4,5]. Due to its simplicity and effectiveness, this method came to be used widely both in its original and modified forms [6-11]. Despite the variety in corrective operations, postoperative recurrence of inversion remains a problem. In the current study, we introduce a modified operation with rotated, buried flaps and report clinical outcomes of this technique, along with a review of the literature.
Patients and Methods
All research involving human participants had been approved by the institutional review board at Myong-Ji Hospital (Goyang, South Korea). All patient records and information were anonymized and de-identified prior to analysis. Additionally, all patients whose clinical photographs have been included in this article gave written approval of the use of their photographs for research, presentation, and publication. The retrospective study was performed for all patients undergoing the described inverted nipple correction from 1999 to 2010. Medical charts of identified patients were reviewed for demographic data, pertinent preoperative factors such as Han and Hong classification, and clinical outcomes including postoperative nipple height and sensation. Additionally, charts were reviewed for complications such as nipple necrosis, permanent numbness, hematoma, infection, or breastfeeding difficulty.
Surgical technique
Operations were performed under topical and local anesthesia. Upon surgical preparation, a modified Elsahy incision was designed over the inverted nipple.
The modification was such that the triangular flaps had wider bases (each one quarter of the circumference of the nipple base) with flap lengths equal to the nipple diameter (Fig 1). The nipple was pulled anteriorly with a 5-0 nylon stay suture, and the nipple base was circumferentially incised to the superficial dermis. Each of the triangular flaps was de-epithelialized and elevated to include the areolomammillary muscle layer and subcutaneous tissue just above the breast parenchyma (Fig 2). With the nipple under gentle traction, a vertical tunnel was created deep to the central portion of the nipple. This dissection was carried out bluntly to avoid injury to the lactiferous duct (Fig 3). The triangular flaps were rotated about the axis and downturned into this pocket such that the deeper surfaces of the flaps were in contact with each other, while the superficial surfaces were in touch with the walls of the dissection pocket (Fig 4). The flaps were fixed to each other at three of the respective vertices with 5-0 polyglactin sutures. The subcutaneous layers were approximated with 5-0 polyglactin, and the areola skin was closed with 6-0 nylon. The nipple base was then re-draped to adjust for the tension from twisting of the triangular flaps and primary closure of the areola skin. The protracted nipple was dressed in a light compressive dressing. The stay suture was maintained and the nipple was protected in a paper cup for a week. For two months following this, patients were instructed to wear a one-cup-larger brassiere along with a cotton stent made from a finger stockinet and Fixomull (BSN Medical, Hamburg, Germany) (Fig 5).
Results
The review identified 26 female patients among whom 47 inverted nipples were corrected using the method described above (Figs 6(A)-6(C), 7(A)-7(D) and 8(A)-8(B)). Twenty-one patients had inverted nipples bilaterally, and five patients had unilateral inverted nipples. Thirty-seven of the nipples were grade II inversions, according to the Han and Hong classification, with the remaining 10 nipples being grade III inversions [12]. The mean patient age was 34 years (range: 16-64 years) at the time of operation. Twelve patients were planning to breastfeed in the future (n = 17; 45%). The remaining patients were neither breastfeeding nor planning to breastfeed (n = 9; 35%). The postoperative course was uneventful for all of the patients, and there were no recorded instances of wound infection, hematoma, or nipple necrosis. Nipple height was measured with calipers immediately after the operation and during routine clinic visits. The immediate post-op nipple height was 10.8 ± 0.8 mm, which had decreased to 9.0 ± 1.0 mm by the mean follow-up visit (14.03 months). In 45 of the cases, postoperative nipple projection remained at 90-100% of the nipple height achieved at the time of operation. In one patient, 40% of projection was lost in both nipples. All of the patients were satisfied with nipple contour and projection. The brush test revealed that all 47 postoperative nipples had retained enough sensory function to elicit a contraction response [13]. Of the whole, five of the patients subsequently had children and were successful in breastfeeding.
Discussion
Postoperative nipple height reductions have ranged from 10 to 50% for the purse-string suture method [14-16].
While procedures such as thick and full-thickness pennant flaps, dermoglandular flaps, or dermal elongation using a telescope at the base appear to be appropriate for correcting severely inverted nipples, the recurrence rate remains significant for grade II and III inversions [14,15,17]. In certain techniques, the lactiferous duct is transected to correct severely inverted nipples [6,12,17,18]. However, the nipple is innervated by the lateral cutaneous branches of the fourth intercostal nerve along the major duct system [19], and preserving the lactiferous duct is important not only for breastfeeding but also for nipple sensation. In our series, the postoperative nipple height remained at 9 mm. Nearly all of the corrected nipples had maintained adequate nipple height, the exception being one patient in whom the postoperative height had decreased by 40% in both nipples. Additionally, inversion had not recurred in any of the nipples. Nipple sensation can be evaluated by a number of methods. The postoperative nipple has been examined most frequently by subjective patient report. The Semmes-Weinstein monofilament test and pressure-specified sensory device may have the advantage of producing quantifiable data [20]. However, such tests designed to measure pressure or dimensional sensitivity may not be as clinically significant in the assessment of the nipple. Whereas tactile gnosis is important to the hand and pressure sensitivity is important to the foot, the relevant physiologic functions for the nipple are sexual arousal, oxytocin secretion, and lactation. One of the earliest clinical studies on nipple sensation revealed that erectility of the nipple upon stimulation is most relevant to nipple function, with more objective methods of sensation assessment failing to account for what is physiologically relevant to a patient's sexual and reproductive needs [21,22]. Hence, we rely on the contractile response upon brush stimulation to most closely represent the ability to generate either a sexual or suckling response from the nipple. Thus far in our series of patients undergoing the modified method of nipple correction, all of the operated nipples had either retained or recovered the type of sensory perception required for the pilomotor-like response. In review, we have summarized four of the most widely reported inverted nipple correction methods in comparison to our modified method (Table 1). Our method differs from previous methods in four main ways. The first modified component is that the bases of the triangular flaps were rotated, downturned, and fixated such that the nipple base was mechanically cross-linked in the dermal plane defined by the areola mound. This procedure evenly disperses the fixation tension laterally and translates the anterior-posterior vector laterally into the areola tissue. In contrast, Elsahy fixed the triangular flap tips to one side of the areolar plane by transposition, which does not provide a similar mechanical linkage between the nipple base and the areola dermis [5]. The second component is that the longitudinal length of each triangular flap was equal to the diameter of the nipple base, a length that maintains tension and provides strong suspension after fixation. The third component of our design is that the bases of the triangular flaps are wider than those described by Elsahy. Each base was approximately one quarter of the circumference of the nipple base. This wider base had two purposes.
The first was to create a greater purse-string effect at the nipple base at the time of primary closure of the subcutaneous layer and areola skin. The second was to provide a reliable blood supply to the triangular flaps. Our fourth modification is the adoption of thick and reliable subcutaneous flaps, which included the areolomammillary muscle layer. These downturned and buried flaps were designed to occupy the vertical tunnel just deep to the nipple base. The bulk of these flaps acts as supporting pillars below the nipple, fills the dead space, and thereby discourages retraction of the nipple. Two methods incorporating thick subcutaneous flaps have previously been introduced. Hugo et al. used a double-opposing pennant flap, and Taylor et al. employed areola-based dermoglandular advancement using rhomboid flaps [14,17]. These methods differ from our support and push-out approach. We applied a circumferential incision around the nipple base. The thick subcutaneous flap is also complementary to the circumferential incision around the base, which is helpful for creating and maintaining a naturally symmetric and upright position for the nipple and makes repositioning easier without compromising blood supply. This circumferential incision was an original design by Elsahy but is not widely adopted for fear of violating the blood supply to the nipple [5-11,13]. Postoperative care is also important in the prevention of recurring nipple inversion, and wearing a prefabricated cotton stent and a larger brassiere is a simple, tolerable solution for patients. In conclusion, the suspension technique of crossing bulky subcutaneous triangular flaps with a circumferential dermal incision at the nipple base is a simple, reliable method for correcting nipple inversion.
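The projection-retention metric used in the Results above is simple arithmetic; the following sketch (not part of the study) shows how it could be computed. The two group means come from the reported Results, while the per-nipple value pairs are hypothetical.

# Mean retention from the reported group means (10.8 mm immediately post-op, 9.0 mm at follow-up).
immediate_mean = 10.8  # mm
follow_up_mean = 9.0   # mm
print(f"Mean projection retention: {follow_up_mean / immediate_mean * 100:.1f}%")  # ~83.3%

# Per-nipple retention, e.g. to count how many stayed in the 90-100% band; pairs are hypothetical.
heights = [(11.0, 10.5), (10.2, 9.8), (10.8, 6.5)]  # (immediate, follow-up) in mm
in_band = sum(1 for imm, fu in heights if 90 <= fu / imm * 100 <= 100)
print(f"{in_band} of {len(heights)} nipples retained 90-100% of projection")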
CORE Teaching Model Based Mnemonic Technique Impact Students' Mathematical Creative Thinking Ability and Metacognitive Awareness
The purpose of this study was to determine the effect of the Connecting, Organizing, Reflecting, and Extending (CORE) learning model with mnemonic techniques on students' creative thinking skills and metacognitive awareness. The research method was a quasi-experimental design. Data on creative thinking skills were collected using essay test instruments, and metacognitive awareness using questionnaires. This research was conducted on the eighth-grade students of a secondary school in Tulang Bawang, Lampung, with a sample of 60 students taken using the cluster random sampling technique. The data analysis technique was two-way ANOVA. Based on the test results, the first hypothesis showed F(2) = 12.92, p < .05: the CORE learning model with mnemonic techniques affects students' creative thinking abilities. The second hypothesis showed F(2) = 19.97, p < .05: the high, medium, and low metacognitive awareness categories affect students' creative thinking abilities. The third hypothesis showed F(4) = 1.65, p > .05: there is no interaction between the CORE learning model with mnemonic techniques and metacognitive awareness on students' creative thinking abilities. The impact of this research is that the model can serve as a solution for mathematics teaching and learning.
Introduction
Consciously or unconsciously, humans frequently engage in thinking activities (Hermiati et al., 2021). In working on math problems, students cannot be separated from the thinking process (Mursari, 2020). In the twenty-first century, special thinking skills such as creative thinking, critical thinking, problem-solving, communication, collaboration, metacognition, innovation, creation, and literacy are required for learning activities (Zubaidah, 2016). Creative thinking is one of the abilities that can be developed through mathematics education. Creative thinking is a mental process that generates an infinite number of unique, effective, and applicable ideas. These characteristics are critical in the mathematics learning process (Karim, 2014; Rahma et al., 2017; Wahsheh, 2017). Yazar Soyadı (2015) defines creative thinking as a collection of cognitive activities individuals engage in in response to specific objects, problems, and conditions, or types of efforts directed at specific events and problems based on their capacities (Adiastuty et al., 2020). Thus, students must develop their capacity for creative thinking through their teaching and learning activities. Along with specialized skills, awareness is a critical component of learning, such as students' metacognitive awareness. Metacognitive awareness is defined as the phenomenon by which a person experiences situations and events in their life and world in various ways (Maftoon & Fakhri Alamdari, 2020). Learners who are aware of metacognition appear more strategic and perform better than students who are unaware, and such awareness develops students' ability to solve math problems in everyday life (Harrison & Vallin, 2018; Mujib, 2019). Metacognitive awareness refers to the presence or absence of awareness that enables an individual to plan, sequence, and monitor learning in such a way that progress can be seen immediately (Abdelrahman, 2020; Mistianah, 2020; Tamsyani, 2016). Even though student involvement is critical and influential in learning activities at the secondary school level, many students are still less enthusiastic and less active in mathematics.
Each educator or mathematics teacher has their own unique method of instructing students on the presented material. Mathematics education is highly dependent on how a teacher conveys the material to his or her students. There are numerous strategies for ensuring that the subject matter delivered is well understood and accepted by students so as to enhance their cognitive abilities, including thinking creatively. The following results were obtained from the pre-survey conducted at SMP Nurul Iman: Table 1 shows the data from the pre-survey in grade 8 of SMP Nurul Iman, which is still relatively low, with 36 out of 56 students scoring below the minimum mastery criteria. In this case, the learning process is still teacher-centered, which results in students having low creative thinking abilities, a lack of metacognitive awareness, and many students believing mathematics is a difficult subject to learn. The low creative ability of students can be interpreted as a lack of, or inability to express, their ideas in the context of problem-solving. As a critical component of enhancing educational quality, teaching and learning activities must be modified to help students develop their capacity for creative thinking. One way to improve thinking ability is to use more engaging and innovative learning models. According to research conducted by Gustiara Dova Maya, the CORE learning model has an effect on students' ability to think creatively and collaborate. The disadvantage of this research is that the researcher uses only the model without supplementing it with other techniques and thus misses out on other factors that may affect students (Gustiara Dova Maya, 2020). Maftukhah et al. (2017) and Arifah et al. (2016) found that using the CORE learning model in conjunction with case studies improved students' ability to think creatively more than using the expository learning model. Further, students' mathematical creativity can achieve mastery when using the CORE learning model in conjunction with LKPD, and students' mathematical creativity skills are superior to those achieved with conventional learning models. That study examined students' creative abilities using a learning model aided by LKPD but did not examine other factors that could influence them (Beladina et al., 2013). This research is novel in that it combines the CORE learning model and mnemonic techniques to train students' creative thinking skills in terms of metacognitive awareness, comparing classes that use the CORE learning model and mnemonic techniques with classes that use conventional learning models. To date, no research has used CORE-based mnemonic techniques. Thus, the purpose of this study was to examine students' ability to think creatively based on their metacognitive awareness through the use of the CORE learning model and mnemonic techniques.
Population and Sample
The population of this research was 127 eighth-grade students (female = 75, male = 52) at SMP Nurul Iman in Tulang Bawang, in district Jaya Makmur, Banjar Baru, Lampung, with a cluster random sampling technique used to select the research sample. The sample for this study consisted of two different class categories: experimental and control. Experimental class 1 used the CORE model, experimental class 2 used the CORE-based mnemonic techniques, and the control class used traditional learning.
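As a concrete illustration of the sampling step just described, the sketch below (hypothetical class names; not the authors' procedure) draws intact classes, rather than individual students, as the sampling units, which is the defining feature of cluster random sampling.

import random

random.seed(42)  # fixed seed only to make the illustration reproducible
classes = ["VIII-A", "VIII-B", "VIII-C", "VIII-D"]  # hypothetical cluster (class) list

experiment_1, experiment_2, control = random.sample(classes, 3)
print("CORE model class:        ", experiment_1)
print("CORE + mnemonics class:  ", experiment_2)
print("Conventional class:      ", control)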
Material and Instrument
The CORE-based mnemonic techniques enable students to develop their activity and memory to the point where they can recall exactly what they want to remember (Sohimin, 2016; Wang et al., 2019). The research instrument was a set of essay questions designed to assess students' mathematical creative thinking abilities using indicators. Figure 1 illustrates the indicators of mathematical creative thinking ability. The mathematical creative thinking indicators cover four elements (i.e., fluency, flexibility, originality, and elaboration).
Research Procedure
The ability to think creatively while learning in the CORE-based mnemonic technique model can be achieved by analyzing and collecting data from educator-provided problems. The research procedure for implementing the learning model is as follows: the use of models and techniques in the classroom can serve as a link between students and the material being presented (Auliani et al., 2018). Figure 2 illustrates a technique for assisting students in remembering material presented by the teacher. The steps of the learning model run from connecting and organizing through reflecting to extending. The CORE-based mnemonic technique requires students to connect prior knowledge with new knowledge, formulate ideas using keywords, describe what they know in front of other groups, and repeat the material.
Data Analysis
In this study, the test was administered as a final posttest and consisted of 6 essay questions based on indicators of mathematical creative thinking ability. The validity was 0.89 and the reliability was 0.96. Hypothesis testing was performed with SPSS 25 software using a two-way ANOVA test, with normality and homogeneity as testing prerequisites.
Results and Discussion
The researchers divided the subjects into three groups: the first experimental group received treatment using the CORE model, the second experimental group received treatment using the CORE model with mnemonic techniques, and the third group received treatment using the lecture method (classical model). Descriptive statistics were computed for the mathematical creative thinking ability data, comparing the highest and lowest scores and summarizing measures of variation, including the range (r) and standard deviation (s), as well as measures of central tendency, including the mean, median, and mode. Table 4 summarizes the descriptive statistics for metacognitive awareness in the conventional and experimental groups. The mean score for experimental class 2 is 92.90, while the mean scores for experimental class 1 and the control class are 86.50 and 88.90, respectively. The normality of the mathematical creative thinking ability data is as follows: Table 5 shows the results of the normality test. The test statistic was .129 in experimental class 1, .103 in experimental class 2, and .150 in the control class. As each statistic falls below the critical value, each sample is considered to be drawn from a normally distributed population. The following table summarizes the results of the Metacognitive Awareness questionnaire's normality test. Table 7 shows the homogeneity test of creative thinking abilities. The results indicate homogeneity at a significance level of 0.05 with dk = 2 (critical value 5.991), so the data are homogeneous.
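Before the remaining homogeneity results, the prerequisite checks described above can be sketched as follows. This is a minimal illustration, not the SPSS procedure the paper used: it substitutes the Shapiro-Wilk and Levene tests, which serve the same purposes (normality and homogeneity of variance), and the score lists are hypothetical.

from scipy import stats

exp1 = [78, 82, 90, 85, 73, 88, 92, 80]      # hypothetical posttest scores, experimental class 1
exp2 = [85, 91, 94, 88, 90, 96, 83, 89]      # experimental class 2
control = [70, 75, 68, 80, 72, 77, 74, 69]   # control class

for name, scores in [("exp 1", exp1), ("exp 2", exp2), ("control", control)]:
    w, p = stats.shapiro(scores)
    print(f"{name}: Shapiro-Wilk p = {p:.3f} -> {'normal' if p > 0.05 else 'non-normal'}")

stat, p = stats.levene(exp1, exp2, control)
print(f"Levene p = {p:.3f} -> {'homogeneous' if p > 0.05 else 'heterogeneous'}")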
The following are the results of the Metacognitive Awareness questionnaire's homogeneity test: Table 8 shows the results of the homogeneity test for metacognitive awareness. The data are homogeneous at a significance level of 0.05 with dk = 2 (critical value 5.991). The two-way ANOVA results are as follows: Table 9 shows the results of the two-way ANOVA. As demonstrated by p-values below 0.05, there is an effect of the learning models on students' mathematical creative thinking abilities and an effect of students' metacognitive awareness levels on students' mathematical creative thinking abilities (Fcount > Ftable). On the other hand, the learning model has no interaction with metacognitive awareness (F(4) = 1.65, p > 0.05). The CORE learning model and the CORE learning model with mnemonic techniques are superior to conventional learning models in developing students' mathematical creative thinking abilities. Based on the steps of the CORE and CORE-based mnemonic learning models, students are invited to practice recalling prior knowledge, developing their curiosity, and motivating themselves for future learning (Birenbaum et al., 2015). CORE learning teaches students to connect previously acquired knowledge in order to develop strategies for acquiring new knowledge. After acquiring new knowledge, students learn to critically examine their findings in order to apply them to a problem (Miller & Calfee, 2004). The experimental class learning process starts with the preparation stage. The researcher provides motivation or positive suggestions to students before they begin learning. After receiving motivation, students become more enthusiastic and ask a lot of questions, fostering student curiosity. In the delivery stage, the researcher presents the topic using more relaxed language, avoiding boring expressions, and connecting with students so that they remain attentive and are not confused. During the training phase, the researcher splits students into groups and then gives them worksheets on the SPLDV topic (systems of linear equations in two variables) to discuss. Students debate the subject with their groups before working on the worksheets provided by the researcher. Questions and answers at this stage are one of the components that make students appear highly active, help them understand the content better, keep the class atmosphere from becoming monotonous, and can support students psychologically. Students' interest in the CORE learning model with mnemonic techniques can be seen in the atmosphere of the learning process: participants feel happy, comfortable, and active in participating in learning, communicate well in delivering material, and are motivated to learn by using the CORE learning model with mnemonic techniques on this SPLDV material. There are still students who are less active when the CORE learning model is used with mnemonic techniques, specifically when conveying the results of the discussion, because some students lack confidence, but students respond well to this learning model and can understand the material provided in general. After the learning materials are completed, students are given posttest questions to determine whether the CORE learning model with mnemonic strategies influences students' mathematical creative thinking abilities. Student reactions to the CORE model with mnemonic techniques are good, indicating that students are interested in using the CORE learning model with mnemonic techniques on SPLDV material.
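The hypothesis tests above follow a standard two-way ANOVA layout; the following is a minimal sketch of an equivalent analysis (not the authors' SPSS output) with the same factor structure: learning model (three levels) crossed with metacognitive awareness (high/medium/low) on creative-thinking scores. All scores below are hypothetical.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Balanced hypothetical design: 3 models x 3 awareness levels x 2 observations per cell.
df = pd.DataFrame({
    "score":     [88, 92, 80, 85, 90, 78, 72, 75, 70, 95, 89, 82, 84, 79, 74, 91, 86, 77],
    "model":     ["core", "core_mnemonic", "conventional"] * 6,
    "awareness": (["high"] * 3 + ["medium"] * 3 + ["low"] * 3) * 2,
})

fit = smf.ols("score ~ C(model) * C(awareness)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))  # rows: both main effects and the interaction term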
The CORE learning model comprises several stages, the first of which involves an educator activating students' prior knowledge (Saregar et al., 2021). The CORE learning model is a model of learning that is expected to enable students to connect (Connecting) and organize (Organizing) knowledge, then rethink the concept being studied (Reflecting), as well as to expand their knowledge during the learning process (Extending) (Budianto, 2016). However, relying solely on learning models is insufficient to enhance students' mathematical creative thinking abilities. As a result, the CORE learning model is combined with mnemonic techniques to help students retain and sharpen the knowledge they already possess. In the conventional model, by comparison, the teaching and learning process is highly monitored and verbal (Fahrudin et al., 2021). A student will accept only what an educator offers. In practice, this model emphasizes lecture and question-and-answer sessions. According to the findings of this study, the CORE learning model and the CORE learning model with mnemonic techniques have a greater impact on students' mathematical creative thinking abilities than the conventional learning model.
Conclusion
The research objectives and the analysis of the research data indicate that the CORE-based mnemonic techniques and the level of metacognitive awareness affect students' mathematical creative thinking abilities. Students' creative thinking abilities improve more when they use the CORE-based mnemonic techniques than when they use conventional learning models. The CORE model places a premium on memory and higher-order thinking skills training.
PREPARATION AND CHARACTERIZATION OF A BIOARTIFICIAL POLYMERIC SYSTEM (SCAFFOLD) MADE OF CHITOSAN AND POLYVINYL ALCOHOL WITH POTENTIAL IN THE VIABILITY OF BONE CELLS
Scaffolds are widely used in tissue engineering because their manufacture is based on natural and synthetic polymers, which allows them to have properties such as biocompatibility and biodegradability, creating an ideal environment for cell growth on their surface. In this context, among the polymers studied in tissue engineering are chitosan (CH) and polyvinyl alcohol (PVA). CH is a versatile polymer obtained from the de-acetylation of chitin, which is used for its high biodegradability and biocompatibility, although its mechanical properties must be improved. It has been found that one of the ways to improve the mechanical properties of CH is to mix it with other synthetic polymers such as PVA. PVA is known for its biocompatibility, biodegradability, zero toxicity and ease of preparation due to its solubility in water, and for excellent mechanical properties, such as tensile strength and ease in the formation of films and barriers. In this study we evaluated the capacity of scaffolds made with CH and PVA in different concentrations (2:1, 1:1, 1:2) as a possible application in bone regeneration. This was done through different characterization tests such as infrared spectroscopy, AFM, a swelling test and a porosity test, from which we obtained information about their structural and physicochemical properties. Additionally, a cellular quality control was performed on the material through the MTT assay. The Fourier transform infrared spectroscopy (FTIR) study showed that there are strong intermolecular hydrogen bonds between the chitosan and polyvinyl alcohol molecules. The swelling and porosity tests showed favorable results, with maximum values of 5519% and 72.17%, respectively. MTT tests determined that the prepared materials are not cytotoxic. These findings suggest that the scaffolds possess properties suitable for use in tissue engineering.
Introduction
Tissue engineering is an important alternative in the development and future of medicine and focuses on the study and development of biological substitutes to repair or regenerate tissues and organs thanks to biomaterials [1]. A biomaterial is defined as any material used to make devices that replace a body part or function in a safe, reliable, economical, and physiologically acceptable manner [2]. These can be used without incompatibilities in biomedical applications and can be placed in contact with living tissues without causing damage or alterations. Therefore, they must meet the requirements of the specific application for which they are designed, in addition to presenting high biocompatibility when evaluating their in vitro and in vivo behavior [3]. At present, the use of biomaterials has application in almost all systems of the human organism and there is a wide range of useful materials for all types of biomedical applications. These can be classified according to their chemical nature into metals, polymers, ceramics and composites. Polymers are widely used for their variety of compositions and ease of production with different geometric patterns and with certain properties; they can be rigid or soft, and some are suitable for temporary replacements due to their biodegradable composition [4].
Polymeric materials have a wide variety of applications in the biomedical field since they have physical, chemical and mechanical properties that are closer to those of living tissues. In addition, they are easy to process and can be obtained in various forms. Currently, there are numerous polymers used in the biomedical field, among which we can find synthetic and natural ones. The former are obtained by controlled polymerization processes from low molecular weight raw materials, and the latter come directly from nature [5]. One of the strategies used for tissue regeneration is the use of scaffolds, which are highly porous structures that serve as a template to guide the formation of new tissue [6]. The scaffold must have special characteristics that allow it to perform its function without creating an adverse response in the body. These characteristics are biocompatibility with the tissue, biodegradability (at an ideal rate that corresponds with the formation of the new tissue), zero toxicity, optimal mechanical properties and an adequate morphology that allows the transport of cells, gases, nutrients and metabolites [1]. To develop a scaffold with these properties, a broad range of biodegradable polymers of natural and synthetic origin has been studied. This is because natural polymers are known to have better biological properties than synthetic polymers, while synthetic polymers stand out for their mechanical properties [7]. The combination of these materials is known as Bioartificial Polymeric Systems (BPS) [8]. In this context, for this work a scaffold made of two polymers, chitosan (CH) and polyvinyl alcohol (PVA), was prepared and characterized. CH is a natural polymer known for its biological properties such as biocompatibility, biodegradability, zero toxicity, and antibacterial activity [7]. An important property of CH in the context of tissue engineering is the ease with which it can be functionalized. The functional groups present in CH readily join with other groups, peptides or amino acids, allowing it to combine with other materials, which improves CH for different applications in tissue engineering [9]. CH is considered a biodegradable biomaterial because it is easily hydrolyzed and metabolized by enzymes (chitosanases and lysozymes) [9]. CH-based scaffolds must meet a series of parameters such as biodegradability and biocompatibility and, in the same way, they must mimic the environment of the tissue to be recovered in order to provide an adequate environment where cells can proliferate and differentiate [10]. PVA is a synthetic hydrophilic polymer known for its easy preparation, zero toxicity, biodegradability, high chemical resistance and physical properties. It is produced by partial or total hydrolysis of polyvinyl acetate to remove acetate groups. The degree of hydroxylation determines the physical, chemical, and mechanical properties of PVA. This polymer is highly soluble in water, but resistant to most organic solvents [11]. The higher the degree of hydroxylation and polymerization of PVA, the lower its solubility in water and the more difficult it is to crystallize [12]. Due to its solubility in water, PVA needs to be cross-linked to form scaffolds. These crosslinks, whether chemical or physical, provide the structural stability the scaffold needs after it swells in the presence of water or biological fluids. An advantage in the application of this material is its polarity, due to the hydroxyl groups in its chemical structure.
These groups tend to form hydrogen bonds between hydroxyl groups, which makes mixing with other materials easier and improves the resulting structure [13]. The scaffold was prepared through a process known as lyophilization. This process is one of the most widely used because the level of porosity can be controlled by the freezing time; afterwards, the samples are subjected to a vacuum process in which the solvent residues evaporate by sublimation and leave spaces in the samples known as pores. Investigations on the use of PVA in the manufacture of scaffolds have been reported due to its hydrophilicity; excellent chemical stability (stable pH), which allows cells to grow in a favorable environment; and semipermeability, which favors the transport of oxygen and nutrients necessary for cell survival. In 2005, a study was reported in which scaffolds were made with PVA and alginate (a natural hydrophilic and biodegradable polymer) at different concentrations of PVA (10, 30, 50 wt%). The porous bodies were manufactured by a lyophilization process. Porosity values around 85% were obtained for the three concentrations prepared, with pore sizes between 190 and 290 µm. It was concluded from the porosity and pore size values that the scaffold attained an ideal surface area on which cells can interact. The alginate and PVA scaffolds were shown to have not only better mechanical properties, but also better adhesion and cell growth than the control scaffold with only alginate [14]. In 2013, another study was carried out in which PVA was mixed with two different concentrations (0.5% and 4%) of gelatin (a biocompatible and biodegradable natural polymer). The scaffold was manufactured through a freeze-drying process. The degree of swelling was 1500% for the 4% concentration, while the 0.5% concentration obtained an absorption of 1000% of its original weight. The ideal pore size was obtained with a concentration of 4%, with a range between 100 and 120 µm. It was concluded that properties such as porosity and pore size increased when the gelatin concentration increased, which influences the water absorption capacity of the material [15]. On the other hand, CH has been used to synthesize scaffolds due to its biocompatibility and biodegradability. In a study from 2010, scaffolds with CH and acetic acid were prepared, varying the CH concentration between 1.25%, 1.5% and 1.8%. Porous bodies were obtained by a lyophilization process. The porosity values obtained were 85%, 90% and 95% for the concentrations 1.8%, 1.5% and 1.25%, respectively. The effects of variations in the CH concentration on the morphology of the scaffold were observed through the SEM test, with a pore size range between 50 and 100 µm. The pore size increased as the CH concentration decreased. The swelling values were 800%, 1200% and 1600% for the concentrations of 1.8%, 1.5% and 1.25%, respectively. It was concluded that, with a reduced CH content, the scaffold properties improved. Additionally, the CH-based scaffold showed potential for tissue engineering applications [16]. In 2013, CH was mixed with three different concentrations (0.0025%, 0.005%, 0.01%) of carbon nanotubes (a biomaterial with excellent mechanical properties). The scaffold was manufactured through a freeze-drying process. The porosity obtained was 87.7% for the lowest concentration (0.0025%), while increasing the concentration to 0.005% and 0.01% yielded values of 88.5% and 88.8%, respectively.
The degree of swelling was 600% for the 0.0025% concentration, while the remaining two concentrations (0.005% and 0.01%) obtained an absorption of 1000% with respect to their original weight. In the SEM test, the CH scaffold showed favorable values, with pore sizes in the 100 to 120 µm range. It was concluded that the CH concentration allowed the scaffold to have a surface area on which cells can interact [17]. Studies using these two materials together have also been reported. In 2015, CH and PVA scaffolds were synthesized with methylcellulose (MC, a biodegradable cellulose ether that presents good solubility in water at low temperature). Three concentrations were prepared, varying the amount of MC (25%, 50% and 75%) and keeping the amounts of CH and PVA constant. The method used to manufacture these scaffolds was lyophilization. The degree of swelling increased for the concentration of 25%; the hydroxyl groups of CH and PVA have a positive influence on the improvement of this property. The maximum value of porosity was 88% for the three concentrations. The morphology of the scaffolds was observed through the SEM test; the pore sizes obtained were 200 and 500 µm. It was concluded that the scaffold had a porous surface with different pore size distributions, facilitating the entry of nutrients and cell products, which is beneficial for cell growth. Furthermore, with the swelling values obtained, the mechanical properties of the scaffold and the liquid retention capacity can be improved [18]. The combination of these materials has been reported in several studies, and by means of swelling tests, porosity tests, FTIR and a cell viability study through the reduction of the MTT compound, the physicochemical and cell viability properties of the prepared scaffold can be described.
Scaffold Synthesis
A 2% m/v solution of chitosan (SQT) in 5% v/v glacial acetic acid was prepared. CH was dissolved by continuous stirring for 24 h on a Heidolph MR 3001 heating plate. Subsequently, a 2% m/v solution of PVA (SPVA) was prepared with distilled water at 80 °C by continuous stirring for 24 h. Both solutions were mixed in three different proportions: (i) 25% SQT - 75% SPVA, (ii) 50% SQT - 50% SPVA, (iii) 75% SQT - 25% SPVA. Subsequently, the solutions were poured into cylindrical molds 2 cm in diameter and 0.5 cm high, and they were frozen for 24 h at -20 °C. Finally, solutions (i), (ii) and (iii) were subjected to a lyophilization process at approximately 10^-2 atm in a Supermodulyo Freeze Dryer (Thermo Electron Corp) over a period of 48 h until the porous bodies were obtained.
Fourier Transform Infrared Spectroscopy (FTIR)
This test is based on the fact that most molecules absorb light in the infrared range of the electromagnetic spectrum, and this energy is converted into molecular vibration. This absorption is specific to the bonds between the atoms present in the molecule. Using a spectrometer, this absorption of infrared light through the sample material is measured as a function of wavelength. Spectra for CH, pure PVA, and mixtures (i), (ii) and (iii) were obtained using a Thermo Fisher Nicolet iS5 spectrometer equipped with an attenuated total reflectance (ATR) accessory with a Zn-Se crystal. Each spectrum represents 64 co-added scans referenced against an ATR cell. The range used was 4000 to 650 cm-1 with a resolution of 1.92 [19].
Swelling
The ability of a biomaterial to absorb water influences the conservation of the material's structure and cell growth [4,20]. Three samples were taken from each specimen, for a total of 9 samples evaluated. These were weighed after the freeze-drying process using a KERN & Sohn GmbH analytical balance, model D-72336. Subsequently, each specimen was immersed in distilled water at 20 °C and every 3 s it was removed, the surface moisture was dried with filter paper, and it was weighed immediately. Finally, to determine the degree of swelling, the following equation was used:

Swelling (%) = ((Ws - W0) / W0) x 100

where Ws is the sample weight after immersion and W0 is the initial sample weight.
Porosity
Porosity plays an important role in scaffold synthesis because pores provide nutrition for cells and tissue [21]. The percentage of porosity was calculated as follows:

Porosity (%) = ((V1 - V3) / (V2 - V3)) x 100

where V1 is the volume of distilled water used, V2 is the total volume of distilled water after submerging the sample, and V3 is the volume of distilled water remaining after sample removal.
Assay of cell viability (metabolic activity) by reduction of the MTT compound
The cell viability test is divided into two parts.
Sterilization of scaffolds. In order to avoid the presence of bacteria or microorganisms acquired from the medium that could alter the result of the assay, specimens (i), (ii) and (iii) were placed in sterile 12-well NEST Biotech plates, to which three series of 2 mL of ethanol at concentrations of 70%, 50% and 30% v/v were added. Each ethanol series was added at 12 h intervals. Subsequently, PBS was added to each specimen in order to obtain a neutral pH and a medium in which the cells can survive.
Cell culture and MTT. For cell culture, cells were trypsinized, centrifuged, and resuspended in DMEM culture medium. 12-well plates were used, in which osteoblasts were incubated. The incubation was carried out in a BINDER incubator at 37 °C and 5% CO2. After 48 h of incubation, MTT solution was added to each well and incubated again for 100 to 120 min. Subsequently, all the suspended medium in each well was removed and DMSO detergent was added in order to solubilize the formazan crystals that form once the MTT is added. The absorbance values were measured in a Biotek Model 800 TS ELISA reader at 590 nm [22].
FTIR Characterization
The FTIR characterization was used to evaluate the functional groups of the polymers. Figure 3 shows the FTIR spectrum of low molecular weight chitosan (QT). In the band from 3000 to 2840 cm-1, the stretching of the C-H bonds of the alkyl groups is evidenced. The peaks observed in the range from 1651 to 1346 cm-1 show the presence of amide I, amide II and amide III, which are characteristic groups of QT. Also, the pronounced peaks at 1068 to 889 cm-1 demonstrate the stretching of C-O bonds present in the QT [20]. Figure 5 shows the FTIR spectrum corresponding to specimens (i), (ii) and (iii). In the range of 3100 to 2800 cm-1, stretching of C-H bonds is observed. In the range of 3447 to 3200 cm-1, the vibrations of the N-H and O-H groups can be observed [5,20]. Each interval can be seen in sections (a) and (b) of Figure 5.
Swelling Characterization
One of the most relevant aspects to consider in the study of scaffolds is their maximum degree of water absorption, since the capacity to absorb and retain water is directly related to the capacity to diffuse nutrients and oxygen [23].
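Before turning to the measured values, the two calculations defined in the methods above can be expressed directly; the sketch below simply encodes those equations, and the example readings are hypothetical.

def swelling_percent(w_s: float, w_0: float) -> float:
    """Degree of swelling: ((Ws - W0) / W0) x 100."""
    return (w_s - w_0) / w_0 * 100

def porosity_percent(v1: float, v2: float, v3: float) -> float:
    """Liquid-displacement porosity: ((V1 - V3) / (V2 - V3)) x 100."""
    return (v1 - v3) / (v2 - v3) * 100

# Hypothetical readings for one specimen (weights in g, volumes in mL):
print(swelling_percent(w_s=1.12, w_0=0.02))        # 5500.0, cf. the 5519% maximum reported
print(porosity_percent(v1=10.0, v2=10.5, v3=9.2))  # ~61.5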
In Figure 6 the swelling percentage of the scaffolds can be observed for the specimens (i) 75% CH - 25% PVA, (ii) 50% CH - 50% PVA and (iii) 25% CH - 75% PVA. The three specimens absorbed a large percentage of water with respect to their weight in the initial 30 s, with values of 4839%, 3980% and 3589% for (i), (ii) and (iii), respectively. Likewise, after 60 s, the swelling percentage for each specimen reached a maximum of 5519%, 4148% and 3775% for (i), (ii) and (iii), respectively. The results show that CH greatly influences the swelling volume, which falls from 5519% to 3775% as the PVA proportion increases. The reduction in the swelling volume can be attributed to a more rigid network formed by the intermolecular and intramolecular interactions of the polymers [24]. The increase in the swelling value can be explained by the high availability of amino groups, responsible for non-covalent bonds with water, which occupy the free space between chains and increase the volume of the sample. The degree of reduction also depends on factors such as the molecular weight of the components, pH and temperature [25]. Porosity plays an important role in scaffolds used in tissue engineering because pores allow cell migration and blood vessel growth through the scaffold. Furthermore, pores ensure the effective exchange of nutrients and waste between cells and their microenvironment [59]. The porosity found for specimens (i), (ii) and (iii) is shown in Table 1. It can be seen that the maximum porosity was that of specimen (i), with the highest QT concentration. Likewise, the minimum value of porosity was noted in specimen (ii), with equivalent concentrations of QT and PVA. No major difference was obtained in the porosity percentages of specimens (i), (ii) and (iii); this is due to the freeze-drying process used in the synthesis of the scaffold. In the literature it can be found that running more lyophilization cycles increases the percentage of porosity [14], while in the methodology of the present investigation a single lyophilization cycle was used for all specimens. This is related to the fact that the samples, once frozen, are subjected to reduced pressure, and the frozen solvent passes directly from the solid to the gas phase, leaving a space where the pore is formed [14]. Generally, a high degree of porosity is a characteristic of an ideal scaffold used in tissue engineering. A scaffold with a high level of porosity provides ample space and nutrition for the cells [24].
Cell viability assay (metabolic activity) by reduction of the MTT compound
In tissue engineering, an ideal scaffold is one that, in addition to having excellent mechanical properties, does not produce toxic or adverse effects when applied to living tissue. The effect of the scaffolds on osteoblast proliferation and survival was studied by means of the MTT cell viability assay.
Figure 7. MTT test results.
The absorbance level and standard deviation obtained for each specimen can be seen in Figure 7. Each absorbance value is the average of the absorbance values in each well, as obtained from the ELISA reader. It can be seen that the absorbance value for specimens (i), (ii) and (iii) is considerably higher than that of the control, which demonstrates the presence of cells adhered to the scaffold at the end of the assay (Figure 8). The absorbance values of the different specimens did not show major variation.
Furthermore, it can be seen that the lowest absorbance value was obtained in the specimen with the highest quantity of QT. This can be attributed to the shape of the pores in the scaffold, which influences the adhesion, proliferation and migration of cells [20]. Likewise, the high absorbance of the specimens can be explained by the high level of biocompatibility that QT has with osteoblasts [7] and an adequate sterilization process.
Conclusions
In this investigation, a scaffold was synthesized with two polymers, CH and PVA. The study showed that by varying the concentrations of QT and PVA, in combination with the lyophilization method used, it is possible to control properties that will allow the scaffold to be applied in bone regeneration. In the first place, the porosity values obtained were similar because a single lyophilization stage was used for the three samples, a process which allows this property to be controlled. Additionally, the results of the degree of swelling indicate that at a concentration with a higher quantity of CH, the scaffold obtains better swelling properties. This result is consistent with others in the literature and can be explained by the lower presence of hydrophilic groups in a PVA-QT mixture when the concentration of PVA is increased. As mentioned above, the MTT assay was performed in order to quantify the formation of formazan crystals through the changes in absorbance that the samples presented at a wavelength of 590 nm, values that proportionally represent cell viability. First, the scaffold had to be sterilized so that the cell viability assay would not be affected. This was achieved by means of series of ethanol and PBS, and it proved to be an effective method for achieving a sterile environment in which the cells can survive, given that no turbidity or color changes were observed in the medium where the cells were grown. Furthermore, it is a simple procedure which can be implemented in future research. The three specimens showed the presence of cells at the end of the test, and the variation in absorbance did not reflect significant changes, which allows us to say that the cells managed to proliferate in a favorable environment. Finally, the three specimens were characterized through these tests; however, the one with the highest amount of CH presented better properties than the others, which means that this research can be expanded by adding more characterization tests to explore more thoroughly the properties of a BPS synthesized with different concentrations of QT and PVA. It is recommended to study the degradation rate of the scaffolds and to perform a SEM test in order to learn more about the structure of the pores.
Funding: This research received no external funding.
Homologues of the RNA binding protein RsmA in Pseudomonas syringae pv. tomato DC3000 exhibit distinct binding affinities with non-coding small RNAs and have distinct roles in virulence
Summary Pseudomonas syringae pv. tomato DC3000 (PstDC3000) contains five RsmA protein homologues. In this study, four were functionally characterized, with a focus on RsmA2, RsmA3 and RsmA4. RNA electrophoretic mobility shift assays demonstrated that RsmA1 and RsmA4 exhibited similar low binding affinities to non-coding small RNAs (ncsRNAs), whereas RsmA2 and RsmA3 exhibited similar, but much higher, binding affinities to ncsRNAs. Our results showed that both RsmA2 and RsmA3 were required for disease symptom development and bacterial growth in planta by significantly affecting virulence gene expression. All four RsmA proteins, especially RsmA2 and RsmA3, influenced γ-amino butyric acid utilization and pyoverdine production to some degree, whereas RsmA2, RsmA3 and RsmA4 influenced protease activities. A single RsmA, RsmA3, played a dominant role in regulating motility. Furthermore, reverse transcription quantitative real-time PCR and western blot results showed that RsmA proteins, especially RsmA2 and RsmA3, regulated target genes and possibly other RsmA proteins at both transcriptional and translational levels. These results indicate that RsmA proteins in PstDC3000 exhibit distinct binding affinities to ncsRNAs and have distinct roles in virulence. Our results also suggest that RsmA proteins in PstDC3000 interact with each other, with RsmA2 and RsmA3 playing a major role in regulating various functions in a complex manner.
INTRODUCTION
The Gac/Rsm signal transduction system has been elaborately studied in many bacterial species (Babitzke and Romeo, 2007; Lapouge et al., 2008). It has been widely reported that the GacS/GacA two-component system (TCS) regulates pleiotropic phenotypes, including virulence, stress responses, biofilm formation, production of extracellular enzymes and secondary metabolites, and quorum sensing (Heeb and Haas, 2001; Lapouge et al., 2008). The GacA homologues specifically initiate the transcription of non-coding regulatory small RNAs (ncsRNAs), such as csrB and csrC in Escherichia coli (Jonas and Melefors, 2009; Martínez et al., 2014), and rsmW, rsmY, rsmZ and rsmV in Pseudomonas aeruginosa (Janssen et al., 2018). These ncsRNAs contain numerous GGA motifs and bind and sequester the RNA binding protein CsrA (carbon storage regulator) or its homologues RsmA and RsmE (repressor of secondary metabolites) (Reimmann et al., 2005; Vakulskas et al., 2015). As important post-transcriptional regulators, the RsmA/CsrA family proteins inhibit translation or stability of transcripts of target genes by binding specific GGA motifs within apical loops of the RNA secondary structures in the 5' untranslated regions (UTRs), one of which overlaps or is close to the Shine-Dalgarno (SD) sequence or ribosome binding site (RBS) of target mRNAs, thus blocking ribosome access (Blumer et al., 1999; Vakulskas et al., 2015). On the other hand, the RsmA/CsrA family proteins can also positively regulate the expression of target genes. CsrA protects flhDC mRNA by inhibiting the 5' end-dependent RNase E cleavage pathway in E. coli (Yakhnin et al., 2013). RsmA activates the expression of the hrpG gene by directly binding to the 5' UTR and stabilizing its mRNA in Xanthomonas citri (Andrade et al., 2014). The CsrA protein was first reported in
E. coli, where it affects glycogen biosynthesis and gluconeogenesis (Romeo et al., 1993). Later, CsrA came to be regarded as a global regulator that controls multiple important pathways in many bacteria (Timmermans and Van Melderen, 2010). RsmA and RsmE are highly conserved CsrA homologues and play a major role in the regulation of virulence in diverse pathogenic bacteria. In X. citri subsp. citri and X. campestris pv. campestris, an rsmA mutant caused significantly reduced virulence in the host plant, and a delayed or completely abolished hypersensitive response (HR) in the non-host plant tobacco (Andrade et al., 2014; Chao et al., 2008). In Erwinia amylovora, a csrA mutant did not induce HR on tobacco or cause disease on immature pear fruits. It was compromised in motility and had reduced exopolysaccharide (EPS) amylovoran production and expression of type III secretion system (T3SS) genes. In addition, CsrA in E. amylovora may indirectly affect the uptake of antibiotics through the Rcs system (Ancona et al., 2016; Ge et al., 2018; Lee et al., 2018). In Pectobacterium carotovorum, overexpression of rsmA inhibited motility, biofilm formation, EPS production and the production of secondary metabolites, antibiotics and pigments (Mukherjee et al., 1996). Absence of rsmA resulted in less tissue maceration (soft rotting) in plant hosts by affecting the quorum sensing required for extracellular lytic enzymes (Chatterjee et al., 1995, 2003; Cui et al., 2001). RsmA influenced expression of the adhesion synthesis operon and indirectly affected virulence of Yersinia pseudotuberculosis (Heroven et al., 2008). Loss of csrA greatly down-regulated SPI-1-mediated intestinal epithelial cell invasion and other virulence gene expression in S. typhimurium (Lawhon et al., 2003). In Pseudomonas protegens, RsmA and RsmE controlled metabolism and antibiotic biosynthesis (Reimmann et al., 2005; Wang et al., 2017). In Pseudomonas putida, RsmA, RsmE and RsmI negatively affected c-di-GMP pools and biofilm formation (Huertas-Rosales et al., 2016). Pseudomonas syringae pv. tomato DC3000 (PstDC3000) causes bacterial speck disease on tomato and Arabidopsis thaliana by secreting effectors through the T3SS and producing the phytotoxin coronatine (Zhao et al., 2003). Extracellular protease, pyoverdine siderophore and alginate EPS also contribute to its virulence (Swingle et al., 2008; Vargas et al., 2013). Earlier bioinformatic studies showed that P. syringae pv. tomato possesses seven small RNAs, i.e. rsmX homologues (rsmX1-5) as well as rsmY and rsmZ (Moll et al., 2010). However, the molecular mechanism and virulence-related target genes of the CsrA/RsmA proteins in PstDC3000 remain elusive. A recent study identified five alleles of CsrA (RsmA) proteins present in PstDC3000 and characterized three single mutants (csrA1 to csrA3) (Ferreiro et al., 2018). It showed that CsrA2 (RsmA2) is the most conserved member among the 250 prokaryotic genomes examined, and that CsrA3 (RsmA3) is nearly identical to CsrA2 but is present only in the Pseudomonas fluorescens group (RsmE). The authors further reported that CsrA3 and CsrA2 play roles in motility, syringafactin and alginate production, and promote growth in planta, but not symptom development (Ferreiro et al., 2018). Here, we label these CsrA proteins in PstDC3000 as RsmA proteins because the deduced amino acid sequences of the RsmA proteins in PstDC3000 share relatively higher identities and similarities with RsmA proteins from other Pseudomonas strains than with CsrA proteins from E. coli and Erwinia species (Fig. S1; Table S1).
In our study, our major goal was to determine the roles of the rsmA genes in virulence and other virulence-related phenotypes. We also explored the interactions between different RsmA proteins in PstDC3000 and determined the binding affinities of these RsmA proteins to ncsRNAs.

RESULTS

In vitro characterization of the rsmA mutants

We characterized four rsmA genes in PstDC3000 by generating four overexpression strains (DC3000(pRsmA1), DC3000(pRsmA2), DC3000(pRsmA3) and DC3000(pRsmA4)), four single mutants (rsmA1, rsmA2, rsmA3, rsmA4), three double mutants (rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4), one triple mutant (rsmA2/rsmA3/rsmA4) and one quadruple mutant (rsmA1/rsmA2/rsmA3/rsmA4). Our results using the overexpression strains indicate that overexpression of RsmA2, RsmA3 and RsmA4 in PstDC3000 resulted in reduced pyoverdine production (Fig. S2A) and protease activities (Fig. S2B). Overexpression of RsmA2 and RsmA3 led to a decreased ability to utilize γ-amino butyric acid (GABA), whereas overexpression of RsmA1 and RsmA4 resulted in enhanced GABA utilization (Fig. S3A). On the other hand, motility was slightly decreased only in PstDC3000 overexpressing RsmA3 (Fig. S3B). Overexpression of CsrA from E. amylovora in PstDC3000 led to phenotypic changes similar to those caused by overexpressing RsmA2 of PstDC3000 (Figs S2 and S3). The deduced amino acid sequence of CsrA of E. amylovora shared relatively higher identity and similarity with that of RsmA2 than with that of RsmA3 of PstDC3000 (Fig. S1; Table S1). We also observed that mutation of the rsmA1 gene alone did not affect GABA utilization, protease activity or pyoverdine production (apart from a minor increase in motility) (Figs S4 and S5), suggesting that RsmA1 plays a very minimal role, as previously reported (Ferreiro et al., 2018). Based on these results, we mainly focused on RsmA2, RsmA3 and RsmA4 for the majority of our studies.

RsmA2 and RsmA3 are required for virulence and bacterial growth in planta

We inoculated plants using an infiltration method with a very low concentration of initial inoculum and found that PstDC3000, all three overexpression strains, the three single mutants (rsmA2, rsmA3, rsmA4) and two double mutants (rsmA2/rsmA4 and rsmA3/rsmA4) exhibited similar disease symptoms (Fig. 1A,B). Interestingly, the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant exhibited dramatically reduced symptoms, including both necrotic spots and the amount of chlorosis (Fig. 1B). Virulence of the rsmA2/rsmA3/rsmA4 triple mutant could be restored by complementation with either the rsmA2 or the rsmA3 gene, but not with the rsmA4 gene (Fig. 1C). Virulence of the rsmA2/rsmA3 double mutant could be partially recovered by either rsmA2 or rsmA3 (Fig. 1C). These findings suggest that both RsmA2 and RsmA3 are required for virulence.

Fig. 1 Virulence of Pseudomonas syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains. (A) Disease symptoms caused by PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains in tomato leaves. (B) Symptoms caused by PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants in tomato leaves. (C) Symptoms caused by complementation strains of the rsmA2/rsmA3 and the rsmA2/rsmA3/rsmA4 mutants in tomato leaves. Pictures were taken at 7 days post-inoculation. The experiment was repeated three times and similar results were obtained.

We also monitored bacterial growth in planta at 0, 1, 3 and 5 days post-inoculation (dpi).
No significant difference was found among PstDC3000, the three overexpression strains and the three single mutants, as well as the rsmA2/rsmA4 and rsmA3/rsmA4 double mutants (Fig. 2A,B). Bacterial growth of the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant was about 5-10-fold lower than that of all other strains (Fig. 2B), and bacterial growth could be completely rescued in the triple mutant by expression of either the rsmA2 or the rsmA3 gene, but not the rsmA4 gene (Fig. 2C). Expression of the rsmA2 or rsmA3 gene in the rsmA2/rsmA3 double mutant could partially restore bacterial growth in tomato leaves (Fig. 2D). In order to rule out the possibility that the defect in bacterial growth in planta was due to defects in the ability to grow in vitro, we determined bacterial growth in KB medium. Although some mutants showed a delay in growth, all the mutants reached a level similar to the wild type after 24 h of growth (Fig. S6). All the overexpression strains and mutants could still elicit HR on non-host tobacco leaves (Fig. S7). Overall, these results support the finding that RsmA2 and RsmA3 play an important role in the interaction of PstDC3000 with tomato plants, and suggest that these proteins might have functional redundancy.

Expression of T3SS, coronatine and alginate genes in PstDC3000 is regulated by RsmA2, RsmA3 and RsmA4 to varying degrees

In order to determine how mutation of the rsmA genes affects virulence, expression of selected virulence-related genes was quantified in five mutants. First, expression of avrE and hrpL was reduced in all five mutants (Fig. 3), especially in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant, where expression of avrE and hrpL was more than 30-50- and 250-1000-fold lower than that of the wild type (WT), respectively (Fig. 3). Among the single mutants, the effect of the rsmA3 mutation on expression of the hrpL and avrE genes was much stronger than that of the rsmA2 and rsmA4 mutations. These results suggest that RsmA2, RsmA3 and RsmA4 synergistically influence the expression of T3SS genes in PstDC3000, and further indicate that RsmA3 plays a major role. Expression of the coronatine-related genes corR and cfl was also reduced in four of the five mutants. Expression of the corR and cfl genes in the rsmA2/rsmA3 double mutant was decreased about 3- and 2.2-fold compared to the wild type, respectively, while in the rsmA2/rsmA3/rsmA4 triple mutant they were down-regulated about 6.7- and 4.8-fold, respectively (Fig. 3). Both corR and cfl gene expression was slightly down-regulated in the rsmA2 and rsmA3 mutants, but not in the rsmA4 mutant (Fig. 3). These results suggest that RsmA2 and RsmA3 synergistically regulate coronatine gene expression in PstDC3000, whereas RsmA4 might also have a minor role when both RsmA2 and RsmA3 are absent. Expression of the algQ gene was reduced in the rsmA3, rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutants (Fig. 3). However, expression of the algK gene was increased in all three mutants lacking the rsmA3 gene. Expression of algK was up-regulated about 11-fold in the rsmA3 mutant and more than 30-fold in both the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant compared to the WT (Fig. 3). These results suggest that the algQ and algK genes in PstDC3000 are regulated by the RsmA proteins through different mechanisms.
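The fold changes above come from the relative quantification (ΔΔCt) method described in Experimental Procedures, with rpoD as the endogenous control. A minimal sketch of that arithmetic is given below; the Ct values are invented for illustration and are not data from this study.

```python
# Minimal sketch of the 2^-ddCt calculation behind the qRT-PCR fold
# changes; all Ct values here are invented for illustration only.

def fold_change(ct_target_mut, ct_rpod_mut, ct_target_wt, ct_rpod_wt):
    """Relative expression of a target gene in a mutant vs. the wild type,
    normalized to the rpoD endogenous control."""
    d_ct_mut = ct_target_mut - ct_rpod_mut  # normalize mutant Ct to rpoD
    d_ct_wt = ct_target_wt - ct_rpod_wt     # normalize wild-type Ct to rpoD
    return 2 ** (-(d_ct_mut - d_ct_wt))

# A target whose Ct rises by 5 cycles in the mutant (rpoD unchanged)
# is expressed 2^-5 = 0.03125-fold, i.e. more than 30-fold lower.
print(fold_change(ct_target_mut=28.0, ct_rpod_mut=18.0,
                  ct_target_wt=23.0, ct_rpod_wt=18.0))
```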
Fig. 2 Bacterial growth of Pseudomonas syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains in tomato. (A) PstDC3000, PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. (C) PstDC3000, the rsmA2/rsmA3/rsmA4 mutant and its complementation strains. (D) PstDC3000, the rsmA2/rsmA3 mutant and its complementation strains. Bacterial growth was monitored at 0, 1, 3 and 5 days post-inoculation. Vertical bars represent standard deviations. The experiment was repeated three times and similar results were obtained.

Protease activities are influenced by RsmA2, RsmA3 and RsmA4

Strains of PstDC3000 overexpressing RsmA proteins exhibited reduced protease activities compared to wild-type PstDC3000 (Figs 4A and S8A). In contrast, the rsmA2, rsmA3 and rsmA4 deletion mutants all exhibited increased protease activities compared to PstDC3000 (Figs 4B and S8B). The protease activities of the rsmA2/rsmA4 and rsmA3/rsmA4 mutants were also slightly increased (Figs 4B and S8B), whereas the protease activities of the rsmA2/rsmA3, rsmA2/rsmA3/rsmA4 and rsmA1/rsmA2/rsmA3/rsmA4 mutants were similar to each other and increased to the level of the rsmA3 single mutant (Figs 4B, S4B and S8B). Complementation of the rsmA2/rsmA3/rsmA4 and rsmA2/rsmA3 mutants with the rsmA2 gene partially restored protease activities, whereas complementation with the rsmA3 gene led to reduced protease activities (Figs 4C,D and S8C). In conclusion, these results indicate that RsmA3 plays a major role in regulating protease activity, whereas RsmA2 and RsmA4 also negatively influence protease activity in PstDC3000.

The effect of RsmA proteins on pyoverdine production

One characteristic of fluorescent pseudomonads is the production of pyoverdine as a siderophore and virulence-related signal molecule (Imperi et al., 2009). Similar to the protease activity results, overexpression of rsmA2, rsmA3 or rsmA4 led to reduced pyoverdine production to varying degrees, with overexpression of the rsmA3 gene exhibiting the strongest negative effect (Figs 5A and S9A). In contrast, deletion of the rsmA2 and rsmA3 genes, but not the rsmA4 or rsmA1 genes, led to increased pyoverdine production (Figs 5B, S4A and S9A). Furthermore, pyoverdine production in the rsmA2/rsmA4 and rsmA3/rsmA4 double mutants was similar to that in the rsmA2 and rsmA3 single mutants, respectively (Figs 5B and S9B). When both rsmA2 and rsmA3 were deleted, pyoverdine production in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant was significantly lower compared to PstDC3000, but similar to that of the rsmA3 overexpression strain (P < 0.05, Figs 5A,B and S9A,B). However, the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant exhibited increased pyoverdine production compared to the rsmA2/rsmA3 double and rsmA2/rsmA3/rsmA4 triple mutants (Fig. S4A). Complementation of the rsmA2/rsmA3/rsmA4 mutant with either rsmA2 or rsmA4, but not the rsmA3 gene, partially restored pyoverdine production to the wild-type level (Figs 5C and S9C). Similarly, complementation of the rsmA2/rsmA3 mutant by expressing the rsmA2, but not the rsmA3, gene partially recovered its pyoverdine production (Figs 5D and S9C). These results indicate that all four RsmA proteins influenced pyoverdine production in PstDC3000 to some degree.
These results also suggest that the expression levels of RsmA2 and RsmA3, especially RsmA3, might be important in pyoverdine production, and that RsmA1 might also play a role in pyoverdine production when RsmA2, RsmA3 and RsmA4 are all deleted.

RsmA3 negatively regulates motility in PstDC3000

In contrast to protease activity and pyoverdine production, motility was only slightly decreased in PstDC3000 overexpressing the rsmA3 gene and slightly increased in the single rsmA1 and rsmA3 mutants and the rsmA3/rsmA4 double mutant (Figs S3B, S5B, S10A,B and S11A,B). Similar to pyoverdine production, when both rsmA2 and rsmA3 were deleted, motility of the rsmA2/rsmA3 double mutant, the rsmA2/rsmA3/rsmA4 triple mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant was significantly reduced compared to the other strains (P < 0.05, Figs S5B, S10B and S11B). Motility could be rescued in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant by the rsmA2 or rsmA4 gene, but not the rsmA3 gene, as described above for pyoverdine production (Figs S10C and S11C,D). These results indicate that, although RsmA1 may also suppress motility, RsmA3 plays a dominant role in regulating motility in PstDC3000, and also suggest that RsmA2 could affect motility when interacting with RsmA3.

The effect of RsmA on GABA utilization

The non-protein amino acid GABA is highly abundant in the tomato apoplast, and PstDC3000 can utilize GABA as a sole carbon and nitrogen source; thus, utilization of GABA might affect its survival in planta (Rico and Preston, 2008). Overexpression of the rsmA2 and rsmA3 genes in PstDC3000 led to a decreased ability to utilize GABA, whereas overexpression of the rsmA4 gene resulted in enhanced GABA utilization (Fig. 6A). In contrast, deletion of the rsmA2 and rsmA3 genes increased or decreased GABA utilization, respectively, whereas no effect was found in the rsmA1 and rsmA4 single mutants, in the rsmA2/rsmA4 and rsmA3/rsmA4 double mutants or in the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant (Figs 6B and S5A). Similar to motility and pyoverdine production, when both the rsmA2 and rsmA3 genes were deleted, GABA utilization in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant was significantly decreased compared to the other strains, but was similar to that of the rsmA3 overexpression strain (P < 0.05, Fig. 6A,B). Complementation of the rsmA2/rsmA3/rsmA4 mutant with either the rsmA2 or the rsmA4 gene increased GABA utilization to the same level as the rsmA4 overexpression strain (Fig. 6A,C), whereas complementation with the rsmA3 gene led to GABA utilization as low as that of the rsmA3 overexpression strain (Fig. 6A,C). Surprisingly, when the rsmA2/rsmA3 double mutant was complemented with either the rsmA2 or the rsmA3 gene, it showed a significantly decreased ability to utilize GABA (P < 0.05, Fig. 6D). These results indicate that all four RsmA proteins influence GABA utilization to varying degrees in PstDC3000. Furthermore, when RsmA2, RsmA3 and RsmA4 were all absent, RsmA1 could also influence GABA utilization, suggesting that the interaction between these four RsmA proteins in influencing GABA utilization is very complicated.

Fig. 3 Expression of selected virulence genes of Pseudomonas syringae pv. tomato DC3000 and the rsmA mutants as compared to the wild type. Expression of the avrE, hrpL, corR, cfl, algQ and algK genes in the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains as compared to that of PstDC3000 grown in a hrp-inducing minimal medium at 6 h, determined by qRT-PCR. The rpoD gene was used as a control. Vertical bars represent the standard deviations of the mean ratio. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times, and three technical replicates were included for each of the two biological samples per experiment.

Fig. 4 Diameter of halo zones of protease activities. (A) PstDC3000, PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. (C) PstDC3000, the rsmA2/rsmA3/rsmA4 mutant and its complementation strains. (D) PstDC3000, the rsmA2/rsmA3 mutant and its complementation strains. All strains were grown on NYG agar plates containing 0.75% skimmed milk at room temperature. Diameters were measured after 24 h of incubation. Vertical bars represent standard deviations. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times with three replicates and similar results were obtained.

Fig. 5 Pyoverdine production by Pseudomonas syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains. (A) PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. (C) PstDC3000, the rsmA2/rsmA3/rsmA4 mutant and its complementation strains. (D) PstDC3000, the rsmA2/rsmA3 mutant and its complementation strains. Pyoverdine production was quantified by measuring the absorbance at 405 nm of culture supernatants diluted 2:1 in 100 mM Tris-HCl (pH 8.0) and normalized to the OD600 of the bacterial suspensions. Data are presented as relative fluorescence levels (A405/A600). All strains were grown in MG medium at 28 °C for 24 h. Vertical bars represent standard deviations. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times with three replicates and similar results were obtained.

Expression of the rsmA genes in PstDC3000

In order to gain insight into the complex interactions among RsmA proteins in PstDC3000, we first determined the expression of the rsmA genes in HMM medium using reverse transcription quantitative real-time PCR (qRT-PCR). Expression of rsmA2 was decreased about 2-fold in the rsmA3 single mutant compared to that of the wild type and the rsmA4 mutant (Fig. 7A). No significant change was observed for the rsmA3 gene in the mutant strains tested. However, in the rsmA2/rsmA3 double mutant, expression of the rsmA4 and rsmA1 genes was slightly down-regulated, whereas expression of the rsmA1 gene was significantly down-regulated in the rsmA2/rsmA3/rsmA4 triple mutant (P < 0.05, Fig. 7A). These results suggest that RsmA3 positively regulates rsmA2 expression. Furthermore, the results also suggest that RsmA2 and RsmA3 might synergistically activate the expression of the rsmA4 gene, whereas RsmA2, RsmA3 and RsmA4 synergistically promote the expression of the rsmA1 gene. Western blot analyses showed that the abundance of the RsmA2 and RsmA4 proteins was decreased by about 30% and 50%, respectively, in the rsmA3 single mutant strain, whereas the abundance of the RsmA3 protein remained mostly unchanged in the rsmA2 and rsmA4 single mutants (Figs 7B and S12A).
Furthermore, the abundance of the RsmA2 protein was significantly decreased (70% less) in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant as compared to the WT, whereas the abundance of the RsmA4 protein was not detectable in the rsmA2/rsmA3 double mutant and the rsmA2/rsmA3/rsmA4 triple mutant (Figs 7C and S12B). Interestingly, the abundance of the RsmA3 protein was also decreased by about 30% in the rsmA2/rsmA3 double mutant, but only slightly decreased in the rsmA2/rsmA3/rsmA4 triple mutant (Figs 7C and S12B). These results suggest that RsmA3 influences the expression of the RsmA2 and RsmA4 proteins. These results also suggest that RsmA2 might regulate its own expression and reciprocally influence RsmA3 expression, whereas RsmA2 and RsmA3 together might also affect the expression of the RsmA4 protein.

RsmA proteins have distinct binding affinities to ncsRNAs

To compare RNA-binding affinities to different ncsRNAs, the four RsmA proteins (RsmA1, RsmA2, RsmA3 and RsmA4) were purified and subjected to RNA gel shift assays (Fig. 8). Since previous sequence analysis of the five rsmX ncsRNAs revealed the same secondary structure with five GGA motifs in the hairpin loops (Moll et al., 2010), only rsmX1 and rsmX5, as well as rsmY and rsmZ, were selected for the analysis. For all four ncsRNAs tested, a band shift was observed at 40 nM for RsmA2 and RsmA3, at 320 nM for RsmA1 and at 640 nM for RsmA4. These results indicate that all ncsRNAs of PstDC3000 exhibit similar binding affinities to a given RsmA homologue, while the RsmA homologues have distinct binding affinities to ncsRNAs in the following order from strongest to weakest: RsmA2 = RsmA3 > RsmA1 > RsmA4.

DISCUSSION

Since 1993, the CsrA/RsmA homologues have been extensively studied in human and plant pathogens as well as in plant-associated microorganisms. In a previous report, three paralogues in PstDC3000, i.e. csrA1, csrA2 and csrA3, were evaluated for their roles in motility, alginate biosynthesis, syringafactin production and virulence, and the authors concluded that CsrA1 to CsrA3 were not required for virulence (Ferreiro et al., 2018). However, in our study we demonstrated that RsmA2 and RsmA3 were required for virulence of PstDC3000 and for bacterial growth in planta, and that RsmA4 might also play a minor role in virulence. We also provided evidence that RsmA2 and RsmA3 regulate genes at both the transcriptional and post-transcriptional levels and that the interactions among the RsmA proteins are very complicated. Moreover, we showed that RsmA proteins in PstDC3000 exhibit distinct binding affinities to ncsRNAs, which might explain the distinct role each RsmA protein plays in affecting various phenotypes. RsmA (CsrA) plays a critical role in virulence in many pathogenic bacteria (Ancona et al., 2016; Andrade et al., 2014; Barnard et al., 2004). It has been reported that deletion of a single csrA gene in PstDC3000 did not affect virulence, but that bacterial growth was reduced in the csrA2 and csrA3 mutants (Ferreiro et al., 2018). In our study, we confirmed that deletion of a single rsmA gene did not change the ability of PstDC3000 to cause disease. However, we found that deletion of both rsmA2 and rsmA3 significantly affected disease symptom development and bacterial growth, suggesting that both RsmA2 and RsmA3 are required for PstDC3000 virulence and bacterial growth in planta. These results also suggest that RsmA proteins in PstDC3000 exhibit functional redundancy in controlling virulence factors.
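The binding-affinity order reported in the gel shift results above follows directly from the lowest protein concentration at which a band shift appeared: the less protein needed to shift the probe, the stronger the binding. A trivial sketch of that inference, using the shift concentrations reported above:

```python
# Rank the RsmA homologues by apparent RNA-binding affinity: the lowest
# protein concentration (nM) producing a band shift in the EMSA is used
# as a proxy, with lower concentration meaning stronger binding.
shift_threshold_nm = {"RsmA1": 320, "RsmA2": 40, "RsmA3": 40, "RsmA4": 640}

for protein, conc in sorted(shift_threshold_nm.items(), key=lambda kv: kv[1]):
    print(f"{protein}: band shift first observed at {conc} nM")
# Strongest to weakest: RsmA2 = RsmA3 > RsmA1 > RsmA4
```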
The T3SS and the phytotoxin coronatine are major pathogenicity and virulence factors in PstDC3000, respectively (Zhao et al., 2003). The ability of PstDC3000 to multiply in plant tissue and promote symptom development is dependent on the translocation of many effector proteins that target specific host proteins and interfere with plant innate immune signalling systems (Feng and Zhou, 2012; Mudgett, 2005). The non-host-specific toxin coronatine increases disease severity by suppressing stomatal closure and promoting the jasmonic acid (JA) signalling pathway to suppress salicylic acid (SA)-mediated defence responses (Brooks et al., 2004; Elizabeth and Bender, 2007; Uppalapati et al., 2007; Zhao et al., 2003).

Fig. 7 Expression of the rsmA genes and abundance of the RsmA2, RsmA3 and RsmA4 proteins in Pseudomonas syringae pv. tomato DC3000 and the rsmA mutants. (A) Expression of the rsmA1, rsmA2, rsmA3 and rsmA4 genes in the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains as compared to that of PstDC3000 grown in a hrp-inducing minimal medium at 6 h, determined by qRT-PCR. The rpoD gene was used as a control. Vertical bars represent the standard deviations of the mean ratio. Bars marked with the same letter are not significantly different (P < 0.05). (B) Abundance of RsmA2-His6, RsmA3-His6 and RsmA4-His6 in the rsmA2, rsmA3 and rsmA4 mutant strains as compared to that of PstDC3000 grown in HMM medium for 24 h at 18 °C. (C) Abundance of RsmA2-His6, RsmA3-His6 and RsmA4-His6 in the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains as compared to that of PstDC3000 grown in HMM medium for 24 h at 18 °C.

Our results show that transcription levels of avrE, hrpL, corR and cfl are positively regulated by RsmA proteins to various degrees, indicating that RsmA proteins synergistically regulate virulence factors and contribute to virulence, and further suggesting that RsmA proteins in PstDC3000 have functional redundancy. We also demonstrated that RsmA2 and RsmA3 play a major role, whereas RsmA1 and RsmA4 play a minor role, in virulence. In addition, our results show that RsmA3 negatively regulates the algK gene, which is responsible for alginate synthesis, whereas RsmA2 positively affects algK expression. However, expression of algK increased drastically in mutants lacking both rsmA2 and rsmA3. Our results are consistent with previous reports that algD is up-regulated in both the rsmA mutant of P. aeruginosa and the csrA3 mutant of PstDC3000 (Burrowes et al., 2006; Ferreiro et al., 2018). In contrast, expression of algQ, encoding a global regulatory protein of alginate biosynthesis (Ambrosi et al., 2005; Kim et al., 1998; Schlictman et al., 1995), was significantly decreased when both rsmA2 and rsmA3 were deleted, indicating that RsmA3 and RsmA2 synergistically regulate AlgQ at the transcriptional level. It is possible that AlgK is negatively regulated by AlgQ, which needs to be further verified. The potential interplay between different Rsm proteins has been investigated in some Pseudomonas strains (Morris et al., 2013; Zha et al., 2014). In P. fluorescens, RsmE expression was negatively regulated by RsmA and by RsmE itself (Reimmann et al., 2005). Both RsmA and RsmE, which are closely related to RsmA2 and RsmA3, respectively, negatively affect their own expression in P. putida (Huertas-Rosales et al., 2016). Furthermore, RsmA and RsmF translation is repressed by specific binding of RsmA to rsmA and rsmF mRNAs in vitro at the post-transcriptional level in P. aeruginosa (Marden et al., 2013).
In our study, one of the novel findings is that RsmA3 positively regulates RsmA2 at the transcriptional level. This result might explain why many phenotypes were most pronounced when both RsmA2 and RsmA3 were absent. In other words, RsmA3 probably sits at the top of the RsmA regulatory cascade in PstDC3000. Interestingly, our results suggest that expression of the rsmA1 gene, which is present in most Pseudomonas strains, might be suppressed by the synergistic action of RsmA2, RsmA3 and RsmA4 at the transcriptional level. In addition, we demonstrated that in the rsmA3 mutant the abundance of the RsmA2 and RsmA4 proteins was significantly decreased, indicating that RsmA3 positively affects RsmA2 and RsmA4 at the post-transcriptional and translational levels. Furthermore, the abundances of the RsmA2, RsmA3 and RsmA4 proteins were all decreased in the rsmA2/rsmA3 double mutant as compared to the wild type and the rsmA3 single mutant, further suggesting that RsmA2 and RsmA3 might synergistically and reciprocally influence the expression of the RsmA2, RsmA3 and RsmA4 proteins at the post-transcriptional level. Future studies should focus on elucidating the exact interactions among these RsmA proteins in PstDC3000. It is assumed that RsmA/CsrA proteins bind to target mRNAs at conserved GGA motifs located in the loops of hairpin structures within the 5ʹ UTR, whereas ncsRNAs, which contain numerous GGA motifs, sequester their functions (Vakulskas et al., 2015). We demonstrated that the RsmA protein homologues have distinct binding affinities to ncsRNAs, with RsmA2 and RsmA3 in PstDC3000 exhibiting similar binding affinities that are much stronger than those of RsmA1 and RsmA4. These results are consistent with previous findings that RsmA and RsmE in P. fluorescens have similar binding affinities to ncsRNAs (Reimmann et al., 2005). These results also provide evidence as to why RsmA2 and RsmA3 are more important than RsmA1 and RsmA4 in regulating various phenotypes. The question that remains unanswered is: since the major residues involved in binding the GGA motif are well conserved among these RsmA proteins, why do they display distinct binding affinities to ncsRNAs? In summary, the RsmA/CsrA family proteins have long been deemed important and pleiotropic post-transcriptional regulators in many bacteria and are extensively involved in gene regulatory networks (Vakulskas et al., 2015). In our current study, we demonstrated that RsmA proteins in PstDC3000 modulate virulence and bacterial growth in planta, and regulate protease activity, pyoverdine production, utilization of GABA and motility. We also demonstrated that RsmA proteins in PstDC3000 exhibit distinct binding affinities to fine-tune the expression of target genes, both negatively and positively. We further provided evidence for the existence of regulatory interactions between different RsmA proteins at both the transcriptional and translational levels; however, the exact regulatory mechanism remains unknown. In the future, it is worth exploring potential direct or indirect regulatory pathways as well as interactions among the RsmA proteins. Furthermore, identifying the direct targets of RsmA proteins in the regulation of virulence factors should be a priority.

EXPERIMENTAL PROCEDURES

Bacterial strains, plasmids and culture conditions

The bacterial strains and plasmids used in this study are listed in Table 1. Pseudomonas syringae pv. tomato strains were cultured on King's medium B (KB).
For T3SS gene expression, a hrp-inducing minimal medium (HMM) supplemented with 10 mM fructose as carbon source was used (Chatnaparat et al., 2015). Luria-Bertani (LB) broth was used for routine growth of E. coli strains at 37 °C. Antibiotics were used at the following concentrations when appropriate: 100 μg/mL rifampicin, 50 μg/mL kanamycin, 100 μg/mL ampicillin, 15 μg/mL tetracycline and 100 μg/mL spectinomycin. All the primers used are listed in Supplementary Table S2.

Complementation of mutants and generation of overexpression strains

For complementation of the rsmA mutants, a 1 kb fragment containing the native promoter and the rsmA gene was amplified by PCR and cloned into the pUCP18 vector to yield plasmids pRsmA1, pRsmA2, pRsmA3 and pRsmA4. The resulting plasmids were sequenced at the University of Illinois at Urbana-Champaign core sequencing facility. The final plasmids were introduced into the corresponding marker-less deletion mutants and into PstDC3000 by electroporation. In addition, the promoter and the csrA gene from the E. amylovora Ea1189 strain were also amplified, cloned into the pUCP18 vector and introduced into PstDC3000.

Motility, pyoverdine production, protease activity and GABA utilization assays

For the motility assay, cells were grown overnight in KB medium, harvested and washed in phosphate-buffered saline (PBS). Cells were resuspended in PBS, and 2 μL of bacterial suspension (OD600 = 2.0) was spotted onto the centre of motility plates (0.3% KB agar). The plates were incubated for 48 h at room temperature, and the motility of bacterial cells was visually examined at 24 and 48 h post-inoculation. The experiments were performed three times, with three biological replicates per treatment. Pyoverdine production was detected in mannitol-glutamate (MG) medium as previously described (Ambrosi et al., 2005; Chatnaparat et al., 2015; Imperi et al., 2009; Park et al., 2010). Bacterial cells from overnight cultures in KB were washed in PBS, resuspended to an OD600 of 0.05 in MG medium and incubated with shaking at 28 °C for 24 h. Pyoverdine was quantified by measuring the absorbance at 405 nm of culture supernatants diluted 2:1 in 100 mM Tris-HCl (pH 8.0) and normalized to the OD600. To visualize pyoverdine production, bacterial cells were resuspended in PBS to a final concentration of OD600 = 0.3, and 2 μL of bacterial suspension was spotted onto MG agar plates. Plates were incubated at 28 °C and observed under UV light after 48 h. The experiments were performed three times, with three biological replicates per treatment. For protease activity, all strains were grown in KB overnight, rinsed and resuspended in PBS to a density of OD600 = 1. Bacterial suspensions (2 μL) were applied onto NYG (peptone yeast glycerol medium; 5 g/L peptone, 3 g/L yeast extract, 20 g/L glycerol) agar plates containing 0.75% skimmed milk. Plates were incubated at room temperature for 3 days prior to examination and measurement of the diameter of the halo zones. The experiments were repeated three times, with three biological replicates each. For GABA utilization assays, bacterial cells from overnight cultures in KB were washed and resuspended to OD600 = 0.02 in modified MG medium, in which the mannitol and l-glutamic acid of MG medium were replaced with 10 mM GABA (Chatnaparat et al., 2015). Cells were grown overnight at 28 °C, and bacterial growth was monitored by measuring the OD600. The experiments were repeated three times, with three biological replicates each.
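As a worked example of the pyoverdine readout just described, the sketch below computes the relative fluorescence level (A405/A600). Whether to correct for the 2:1 dilution (two parts supernatant to one part buffer, a 1.5-fold dilution) is our assumption; the figure legends report the ratio directly, and all readings below are toy values.

```python
# Sketch of the pyoverdine quantification: absorbance of the diluted
# supernatant at 405 nm normalized to culture density (OD600). The 1.5x
# dilution correction is an assumption, since the paper reports the
# relative level simply as A405/A600; the readings are toy values.

def relative_pyoverdine(a405_diluted, od600, dilution_factor=1.5):
    a405 = a405_diluted * dilution_factor  # undo the 2:1 dilution in Tris-HCl
    return a405 / od600

print(relative_pyoverdine(a405_diluted=0.42, od600=1.8))  # 0.35
```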
Virulence assay and bacterial growth in tomato

Tomato (Solanum lycopersicum 'Big Daddy Hybrid') plants were grown in a greenhouse. About 3-4 weeks after transplanting, leaves were infiltrated with bacterial suspension at about 5 × 10^4 CFU/mL (diluted from an original suspension at OD600 = 0.1) using a needleless syringe. Bacteria were recovered from plants by taking three samples from three leaves at the site of infiltration using a disk punch (three disks per strain) at 0, 1, 3 and 5 days post-inoculation (dpi). Leaf disks were homogenized by mechanical disruption using pestles in PBS. Serial 10-fold dilutions of the tissue homogenates were plated on LB plates, and the number of CFUs per disk (cm^2) was calculated. For the virulence assay, disease symptoms were recorded at 7 dpi. The experiment was repeated three times.

RNA isolation and reverse transcription quantitative real-time PCR

After 6 h of incubation in HMM at 18 °C, 4 mL of RNAprotect reagent (Qiagen, Hilden, Germany) was added to 2 mL of bacterial culture, mixed by vortexing and incubated at room temperature for 5 min. Cells were harvested by centrifugation, and RNA was extracted using the RNeasy® mini kit (Qiagen, Hilden, Germany) according to the manufacturer's instructions. DNase I treatment was performed with the TURBO DNA-free kit (Ambion, TX, USA), and RNA was quantified using a NanoDrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE, USA). One microgram of total RNA was reverse transcribed using SuperScript III reverse transcriptase (Invitrogen, Carlsbad, CA, USA) following the manufacturer's instructions. One microgram of cDNA was used as the template for reverse transcription quantitative real-time PCR (qRT-PCR). PowerUp SYBR® Green PCR master mix (Applied Biosystems, CA, USA) was used to detect the expression of selected genes. qRT-PCR amplifications were performed using the StepOnePlus Real-Time PCR system (Applied Biosystems, CA, USA) under the following conditions: 50 °C for 2 min and 95 °C for 2 min, followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min. The dissociation curve was measured after the programme was completed, and gene expression was analysed with the relative quantification (ΔΔCt) method using the rpoD gene as an endogenous control. The experiment was repeated three times, and three technical replicates were included for each of the two biological samples per experiment.

Western blot

DNA fragments containing the native promoters and coding sequences of the rsmA2, rsmA3 and rsmA4 genes, with a 6-His tag at the C-terminus, were cloned into pUCP18. The resulting plasmids were transformed by electroporation into PstDC3000 and the mutants. For western blots, equal amounts of bacterial cells grown in HMM containing 10 mM fructose at 18 °C for 24 h were collected. Cell lysates were resolved by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and transferred to a polyvinylidene fluoride membrane (Millipore, Billerica, MA, USA). After blocking with 5% milk in PBS, membranes were probed with 1.0 μg/mL rabbit anti-His antibodies (GenScript, Piscataway, NJ, USA), followed by horseradish peroxidase-linked anti-rabbit IgG antibodies (Amersham Bioscience, Uppsala, Sweden) diluted 1:10 000. Immunoblots were developed using enhanced chemiluminescence reagents (Pierce, Rockford, IL, USA) and visualized using an ImageQuant LAS 4010 CCD camera (GE Healthcare, South Plainfield, NJ, USA).
As a loading control, a duplicate protein gel was incubated in staining solution with shaking overnight and then incubated in destaining solution with shaking until the bands could be observed clearly. The experiment was performed at least twice.

Protein expression and purification

The rsmA1, rsmA2, rsmA3 and rsmA4 genes were amplified by PCR (Table S2). The PCR products were cloned into the pET-42b vector with a C-terminal hexahistidine tag to construct the plasmids pET42b-RsmA1, pET42b-RsmA2, pET42b-RsmA3 and pET42b-RsmA4. The plasmids were electroporated into E. coli BL21 (DE3), and the transformants were induced with 0.5 mM isopropyl β-d-1-thiogalactopyranoside (IPTG) for 3 h at 37 °C to produce the recombinant proteins. Cells were harvested and lysed, and the lysates were clarified by centrifugation at 8635 g for 45 min at 4 °C. The supernatant was passed through a Ni-NTA affinity chromatography column (GE Healthcare, Uppsala, Sweden) to purify the proteins.

RNA electrophoretic mobility shift assays

Full-length sequences of each ncsRNA (rsmX1, rsmX5, rsmY and rsmZ) were PCR-amplified and cloned into the pGEM-T Easy vector (Promega, Madison, WI, USA). RNA probes were prepared from the cloned vector as a template using the MEGAshortscript kit (Thermo Fisher Scientific, Waltham, MA, USA) and labelled with biotin using the Pierce RNA 3ʹ end biotinylation kit (Thermo Fisher Scientific), according to the manufacturer's instructions. RNA gel shift assays were performed using the LightShift® chemiluminescent RNA EMSA kit (Thermo Fisher Scientific) with slight modifications. Briefly, reaction mixtures were prepared in volumes of 10 μL, containing 2 nM biotin-labelled target RNA, 1× binding buffer, 5% glycerol and 0.4 units of RNase inhibitor. After 20 min of incubation with different amounts of protein at room temperature, 5× loading buffer was added to the binding reaction. Protein-RNA complexes were separated on a 6% native polyacrylamide gel in 0.5× TBE buffer (44.5 mM Tris base, 44.5 mM boric acid and 1 mM EDTA) and UV-crosslinked to a nylon membrane. Chemiluminescence was visualized using an ImageQuant LAS 4010 CCD camera (GE Healthcare, Piscataway, NJ, USA).

Statistical analysis

Statistical comparisons among different strains or conditions were performed by one-way ANOVA followed by the Student-Newman-Keuls test (P = 0.05). The funders had no role in study design, data collection and analysis, decision to publish or preparation of the manuscript.

Fig. S3 Effect of RsmA1 of P. syringae pv. tomato DC3000 and CsrA of E. amylovora on GABA utilization and swimming motility. (A) Growth of the PstDC3000(pRsmA1) and PstDC3000(pCsrA) overexpression strains as compared with the PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains in GABA. All the strains were grown in modified MG medium (replacing mannitol and l-glutamic acid in MG medium with 10 mM GABA) at 28 °C, and bacterial growth was monitored by measuring the OD600 at 24 h. Vertical bars represent standard deviations. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times and similar results were obtained. (B) Motility of the PstDC3000(pRsmA1) and PstDC3000(pCsrA) overexpression strains compared with the PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. Vertical bars represent standard deviations. One-way ANOVA and the Student-Newman-Keuls test (P = 0.05) were used to analyse the data.
Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times and similar results were obtained.

Fig. S4 Effect of the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant of P. syringae pv. tomato DC3000 on pyoverdine production and protease activity. (A) Pyoverdine production by the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant compared with PstDC3000, PstDC3000(pRsmA1), and the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains. (B) Protease activity of the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant compared with PstDC3000, PstDC3000(pRsmA1), and the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains. Vertical bars represent standard deviations. One-way ANOVA and the Student-Newman-Keuls test (P = 0.05) were used to analyse the data. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times and similar results were obtained.

Fig. S5 Effect of the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant of P. syringae pv. tomato DC3000 on GABA utilization and motility. (A) Growth of the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant as compared with PstDC3000, PstDC3000(pRsmA1), and the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains. All the strains were grown in modified MG medium (replacing mannitol and l-glutamic acid in MG medium with 10 mM GABA) at 28 °C, and bacterial growth was monitored by measuring the OD600 at 24 h. Vertical bars represent standard deviations. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times and similar results were obtained. (B) Motility of the rsmA1 mutant and the rsmA1/rsmA2/rsmA3/rsmA4 quadruple mutant as compared with PstDC3000, PstDC3000(pRsmA1), and the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutant strains. Vertical bars represent standard deviations. One-way ANOVA and the Student-Newman-Keuls test (P = 0.05) were used to analyse the data. Bars marked with the same letter are not significantly different (P < 0.05). The experiment was repeated three times and similar results were obtained.

Fig. S6 Growth of P. syringae pv. tomato DC3000, rsmA overexpression and mutant strains. (A) PstDC3000, PstDC3000(pUCP18), PstDC3000(pRsmA1), PstDC3000(pRsmA2), PstDC3000(pRsmA3), PstDC3000(pRsmA4) and PstDC3000(pCsrA) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3 and rsmA4 mutants. (C) PstDC3000 and the rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. All the strains were grown in KB at 28 °C. Overnight cultures of PstDC3000, mutants and overexpression strains as well as complementation strains were harvested and resuspended to OD600 = 0.01 in fresh KB medium. Bacterial strains were grown at 28 °C, and aliquots of the culture were taken every 2 h for 24 h. Bacterial growth for each strain was determined by measuring the OD600. The experiments were performed in triplicate and repeated three times, and similar results were obtained. Vertical bars represent standard deviations.

Fig. S7 HR assay on tobacco leaves. PstDC3000 and the rsmA overexpression and rsmA mutant strains were infiltrated into 8-week-old tobacco leaves. PBS was used as a negative control. Photographs were taken 24 h post-infiltration. The experiment was repeated three times and similar results were obtained. Overnight cultures of bacterial strains were harvested by centrifugation, resuspended in 1/2× PBS and adjusted to OD600 = 0.1.
Bacterial suspension was infiltrated into tobacco leaves (Nicotiana tabacum) using a needleless syringe. Infiltrated plants were kept in a humid growth chamber, and HR symptoms were recorded at 24 h post-infiltration. The experiment was repeated three times.

Fig. S8 Protease activity of P. syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains. (A) PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. (C) PstDC3000, the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutants and their complementation strains. Protease activity was measured at room temperature using NYG agar plates containing 0.75% skimmed milk, where halo zones are indicative of protease activities. Pictures were taken at 72 h post-incubation. The experiment was repeated three times and similar results were obtained.

Fig. S9 Pyoverdine production by P. syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains. (A) PstDC3000(pUCP18), PstDC3000(pRsmA2), PstDC3000(pRsmA3) and PstDC3000(pRsmA4) overexpression strains. (B) PstDC3000 and the rsmA2, rsmA3, rsmA4, rsmA2/rsmA3, rsmA2/rsmA4, rsmA3/rsmA4 and rsmA2/rsmA3/rsmA4 mutants. (C) PstDC3000, the rsmA2/rsmA3 and rsmA2/rsmA3/rsmA4 mutants and their complementation strains. Pyoverdine production was visualized on MG plates under UV light, where the intensity of fluorescence is indicative of pyoverdine production. All strains were grown on MG plates at 28 °C. Pictures were taken at 48 h post-incubation. The experiment was repeated three times and similar results were obtained.

Fig. S10 Motility of P. syringae pv. tomato DC3000, rsmA overexpression, rsmA mutants and complementation strains.
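The legends above repeatedly apply one-way ANOVA with the Student-Newman-Keuls test at P = 0.05, with bars sharing a letter being statistically indistinguishable. The sketch below reproduces that workflow in Python with invented toy measurements; Tukey's HSD from statsmodels is used as a widely available stand-in for SNK, which these libraries do not implement.

```python
# Sketch of the statistical workflow from the figure legends: one-way
# ANOVA followed by pairwise post hoc comparisons at alpha = 0.05.
# Tukey's HSD stands in for the Student-Newman-Keuls test; the three
# "strains" and their replicate values are toy data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

wt = [1.00, 0.95, 1.05]      # wild type
mut_a = [0.52, 0.48, 0.55]   # e.g. a double mutant with a clear defect
mut_b = [0.98, 1.02, 0.97]   # e.g. a single mutant similar to the WT

f_stat, p_val = f_oneway(wt, mut_a, mut_b)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_val:.4f}")

values = np.concatenate([wt, mut_a, mut_b])
groups = ["WT"] * 3 + ["mutA"] * 3 + ["mutB"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
# Pairs whose difference is not rejected would share a letter in the plots.
```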
2019-06-21T23:04:10.154Z
2019-06-20T00:00:00.000
{ "year": 2019, "sha1": "6cc894da3b05bd48b953352002e5a7ac37bb1d0c", "oa_license": "CCBY", "oa_url": "https://bsppjournals.onlinelibrary.wiley.com/doi/pdfdirect/10.1111/mpp.12823", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67cf08fe1ccf535545047c9986ba8b29c77596a1", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
90254296
pes2o/s2orc
v3-fos-license
Preventive, Diagnostic and Therapeutic Applications of Baculovirus Expression Vector System

Different strategies are being worked out for engineering the original baculovirus expression vector (BEV) system to produce cost-effective clinical biologics at commercial scale. To date, thousands of highly variable molecules in the form of heterologous proteins, virus-like particles, surface display proteins/antigen carriers, heterologous viral vectors and gene delivery vehicles have been produced using this system. These products are being used in vaccine production, tissue engineering, stem cell transduction, viral vector production, gene therapy, cancer treatment and the development of biosensors. Recombinant proteins that are expressed and post-translationally modified using this system are also suitable for functional and crystallographic studies, microarrays and drug discovery-based applications. So far, four BEV-based commercial products (Cervarix®, Provenge®, Glybera® and Flublok®) have been approved for humans, and a myriad of others are in different stages of preclinical or clinical trials. Five products (Porcilis® Pesti, BAYOVAC CSF E2®, Circumvent® PCV, Ingelvac CircoFLEX® and Porcilis® PCV) have been approved for veterinary use, and many more are in the pipeline. In the present chapter, we have emphasized both approved and other baculovirus-based products produced in insect cells or larvae that are important from a clinical perspective and are being developed as preventive, diagnostic or therapeutic agents. Further, the potential of recombinant adeno-associated virus (rAAV) as a gene delivery vector is described. This system, owing to its relatively extended gene expression, lack of pathogenicity and ability to transduce a wide variety of cells, gained extensive popularity soon after the approval of the first AAV-based gene therapy drug, alipogene tiparvovec (Glybera®). Numerous AAV-based products that are presently in different clinical trials have also been highlighted.

Introduction

Baculovirus (family: Baculoviridae) derives its name from the Latin word "baculum", meaning "stick". Baculoviruses are rod-shaped (30-60 × 250-300 nm) large enveloped viruses with circular, supercoiled double-stranded DNA genomes, approximately 80-180 kb in size. While most baculoviruses infect their natural hosts, i.e. butterflies and moths (Lepidoptera), a few are also known to infect sawflies (Hymenoptera) and mosquitoes (Diptera) (King et al. 2011). They have not been linked with any disease in any organism outside the phylum Arthropoda (Kost and Condreay 2002). Baculoviruses are well known for their role as biopesticides and are efficient tools for heterologous protein production in insect cells (Summers 2006). Morphologically, these enveloped viruses have been classified into two phenotypes: occlusion-derived viruses (ODVs), which are embedded in a paracrystalline matrix forming polyhedral occlusion bodies (OBs) responsible for horizontal transmission between insects, and budded viruses (BuVs), which are present in the haemolymph and spread infection from cell to cell (Luckow and Summers 1988). Occlusion body morphology was initially used to define two major groups of baculoviruses: the nucleopolyhedroviruses (NPVs) and the granuloviruses (GVs).
NPVs obtain their envelope from the host nuclear membrane and are occluded within the main occlusion protein polyhedrin, forming large (1-15 μm) polyhedral inclusion bodies, while GVs obtain their envelope from the cell membrane to make oval-shaped single-virion structures called granules or capsules, with diameters in the range of 0.2-0.4 μm (King et al. 2011). NPVs are further distinguished as single nucleopolyhedroviruses or multiple nucleopolyhedroviruses based on the number of nucleocapsids in a polyhedral inclusion body (O'Reilly et al. 1994). OBs allow virions to remain infectious for long periods due to their highly resistant and stable structure. The baculovirus-infected insect cell expression system has been used for the routine production of recombinant proteins, including several proteins of a therapeutic nature, over the last three decades. The establishment of this system began with the production of human beta interferon (IFN-β), a protein normally not produced in cultured human cells. It was produced with a recombinant Autographa californica multiple nucleopolyhedrovirus (AcMNPV) by exploiting its polyhedrin promoter (Smith et al. 1983). In this system, the protein coding sequence of the human interferon gene was linked to the AcNPV polyhedrin gene promoter. The interferon gene was inserted at different positions relative to the AcNPV polyhedrin transcriptional and translational signals. The interferon-polyhedrin hybrid plasmid was then transferred into infectious AcNPV expression vectors by recombination within S. frugiperda insect cells, where more than 95% of the biologically active glycosylated interferon was produced in secreted form. At the same time, another group successfully expressed the Escherichia coli β-galactosidase gene in insect cells using this system. A 9.2 kb plasmid construct was made by fusing the β-galactosidase gene (1 kb) with the N-terminal region of the polyhedrin gene (1.2 kb) of AcNPV. Co-transfection of this fused plasmid construct with wild-type AcNPV genomic DNA (134 kb) was performed in order to insert the foreign gene into the polyhedrin locus of the AcNPV genome by homologous recombination. Finally, recombinant viruses were selected as blue plaques on medium containing the β-galactosidase indicator X-gal. These discoveries marked the beginning of the baculovirus expression system, facilitating the engineering and improvement of baculovirus vectors, modification of the sugar moieties of glycoproteins expressed in insect cells and scale-up of cell culture processes. A baculovirus expression vector (BEV) platform has been tailored by taking advantage of baculoviruses' natural tendency to infect insect cells. There are almost 500 different types of baculoviruses, all of which specifically infect invertebrates. For laboratory research and manufacturing purposes, the most commonly studied baculovirus is Autographa californica multiple nucleopolyhedrovirus (AcMNPV), which is often considered the prototype baculovirus. It has a double-stranded circular DNA genome of 134 kb inside a rod-shaped nucleocapsid of size 25 × 260 nm (Fauquet et al. 2005). Its large genome can accommodate large foreign DNA fragments or multiple genes together. Typically, recombinant BEVs are constructed by co-transfecting a mixture of a transfer plasmid and a modified, non-infectious, linearized AcMNPV that lacks the parental polyhedrin gene and a portion of ORF1629.
The transfer plasmid contains the gene of interest (GOI), flanked upstream by the strong polyhedrin or p10 promoter and downstream by an essential portion of ORF1629 of AcMNPV, for high-level protein expression in insect cells. The transfer plasmid and the modified linearized AcMNPV DNA undergo homologous recombination to generate de novo recombinant baculoviruses. After plating of these baculoviruses, a single pure plaque of recombinant baculovirus is selected. Subsequently, this plaque is passaged through multiple rounds of insect cell infection to generate a high-titre stock. This creates a working virus bank (WVB) for utilization during downstream processes. The system has been further enhanced for manufacturing and commercialization purposes in multiple ways and through several technologies. Bacmid technology (Bac-to-Bac®, Life Technologies) is employed for the generation of recombinant AcMNPV genomes in the bacterial host E. coli. The flashBACTM (Oxford Expression Technologies Ltd.), BacMagicTM (Merck) and BaculoOneTM (PAA) technologies are used to avoid bacterial sequences in the final vector or to allow rapid production of multiple recombinant viruses in a one-step procedure. The MultiBac system is used for the synthesis of multisubunit protein complexes, and OmniBac serves as a multigene transfer vector for the universal generation of recombinant baculoviruses. The Sleeping Beauty and PiggyBac transposon systems are being exploited for highly efficient, seamless excision of transposons from genomic DNA and for their potential to target integration events to desired DNA sequences. For the production of AcMNPV vectors and recombinant proteins, the Spodoptera frugiperda Sf21 cell line, its subclone Sf9 and the High Five cell line are used. These insect cells exhibit several properties, such as rapid growth, stress resistance and robust expression of recombinant proteins, that make them suitable for the production of clinical biologics and commercial products. Initially, the insect-derived baculovirus expression vector (BEV) was recognized as a safe system for the routine production of recombinant proteins in both insect and mammalian cells. During the last three decades, it has emerged as an effective tool for research as well as for various applications in the field of biotechnology. It has shown tremendous potential as a preventive, diagnostic and therapeutic agent against a myriad of diseases in the form of vaccination, tissue engineering, stem cell transduction, viral vector production and gene therapy (Airenne et al. 2013). It has been extensively used for functional studies, crystallography, biosensors, protein microarrays and drug discovery. All these applications are based on different baculovirus-derived products such as heterologous proteins, proteins/antigens displayed on baculovirus particle surfaces, heterologous viral vectors and gene delivery vehicles for mammalian cells (van Oers et al. 2015). In this chapter, we have presented the application of these products from a clinical point of view in three main categories, viz. preventive, diagnostic and therapeutic agents. Most of the approved biomolecules produced using the baculovirus expression system in insect cells have been discussed. As thousands of other products are being developed with BEVs, it is not feasible to include the entire list within the ambit of the present chapter; however, a few of them have been mentioned to give an understanding of the scope of this powerful expression system in the near future.
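As a toy illustration of the transfer-plasmid layout described earlier in this section (the GOI placed downstream of the polyhedrin promoter and upstream of the ORF1629 homology fragment), the sketch below checks feature ordering in a hypothetical annotated construct; the feature names, coordinates and the check itself are ours for illustration and are not part of any vendor kit.

```python
# Toy check of the transfer-plasmid layout described in the text: the
# gene of interest (GOI) should sit downstream of the polyhedrin (polh)
# promoter and upstream of the ORF1629 homology fragment. All names and
# coordinates are hypothetical.

construct = [
    ("polh_promoter", 1, 120),        # strong very-late promoter
    ("GOI", 150, 1650),               # gene of interest
    ("ORF1629_fragment", 1700, 2300)  # essential AcMNPV homology arm
]

def layout_ok(features):
    start = {name: s for name, s, _ in features}
    return start["polh_promoter"] < start["GOI"] < start["ORF1629_fragment"]

print("layout OK" if layout_ok(construct) else "layout wrong")
```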
BEVs As Disease Preventive Agents

The BEV system exhibits many characteristics that make it suitable for the production of heterologous proteins in insect cells. It can be easily handled in BSL-1/2 laboratories because it is harmless to non-target organisms. These viruses are environmentally safe due to their instability outside the laboratory. The system is used to produce high levels of proteins in insect cells or larvae, where the eukaryotic environment provides the appropriate post-translational modifications. The insect cells that host BEVs are mostly free of human pathogens and do not require a controlled oxygenic environment for their growth. Insect cells can be grown in serum-free medium, and heterologous protein production can be scaled up to the level of pilot plants or larger bioreactors. Therefore, the proteins obtained with BEVs can be used as vaccines, either in the form of heterologous subunit proteins or as virus-like particles (VLPs) formed by viral subunit proteins. Subunit vaccines are relatively safe as they are devoid of viral genetic material, but they can exhibit poor immunogenicity, which might be due to incorrect folding of the target protein. Structural proteins of viruses, such as capsid and envelope proteins, assemble into particulate structures similar to the naturally occurring virus or subviral particles. Therefore, virus-like particles (VLPs), which are non-infectious and non-replicating due to the absence of viral genetic material, can be produced in heterologous systems (Yamaji 2014). VLPs are highly effective in eliciting both humoral and cellular immune responses because of their densely repeated display of viral antigens in the right conformation (Roy and Noad 2008). VLPs exhibit a comparatively wide spectrum of clinical applications, such as prevention of disease as vaccines, diagnostics as antigens for the detection of antibodies, and therapeutics in the form of therapeutic vaccines and delivery agents. The use of heterologous proteins and VLPs as preventive agents in the form of vaccines against different diseases is described below (Table 9.1). A decade ago, only two veterinary products were manufactured using BEVs, to prevent classical swine fever in pigs. Now, five more new vaccines have been approved, two of which are for humans, and many more products are in the development phase. Here, approved vaccines as well as the development of other vaccines in preclinical stages have been highlighted.

Veterinary Vaccines

(a) Subunit marker vaccine for classical swine fever: Classical swine fever virus (CSFV) infection invariably induces antibodies against the virus envelope proteins Erns and glycoprotein E2 and the non-structural protein NS3 in swine (Paton et al. 1991). However, injection of glycoprotein E2 alone is reported to provide sufficient protection against CSFV in pigs (Van Rijn et al. 1996). Therefore, a subunit vaccine based on the conserved glycoprotein E2 has been produced with a baculovirus vector in insect cells (Moormann et al. 2000). Since glycoprotein E2 is expressed as an envelope protein, its C-terminal transmembrane domain was removed so that it would be secreted into the medium, and the residual baculovirus was inactivated with 2-bromoethyl-imine-bromide. This vaccine was manufactured and commercialized as "Porcilis Pesti®" by MSD Animal Health. The same vaccine was also commercialized as "BAYOVAC CSF E2®/Advasurea" by Bayer AG/Pfizer Animal Health but was later discontinued.
(b) Virus-like particle (VLP)-based vaccine for porcine circovirus type 2: A porcine circovirus type 2 (PCV2) vaccine was developed based on VLPs. PCV2 is the primary causative agent of postweaning multisystemic wasting syndrome (PMWS) in swine. Two major open reading frames (ORF1 and ORF2) have been identified in PCV2. ORF2 encodes a major structural protein with type-specific epitopes and is found to be highly immunogenic. Therefore, ORF2, which encodes the capsid protein, was used to develop the vaccine with a baculovirus in Tn5 insect cells (Liu et al. 2008). MSD Animal Health licensed the same products under two names in different geographical areas. (c) VLP-based vaccine for porcine parvovirus (PPV): PPV, a non-enveloped DNA virus, causes major reproductive failure in swine. Its viral capsid is made up of 50-60 molecules of VP2, the major structural protein, which is being targeted for vaccine development. The VP2 gene was expressed under the control of the late p10 promoter of baculovirus, and the LacZ gene under the control of the Drosophila hsp70 promoter. The recombinant baculovirus AcAs3-PPV was used to infect the Sf21 insect cell line to express VP2, which leads to self-assembled empty PPV VLPs in serum-free medium, chosen from a safety point of view (Maranga et al. 2002). Earlier, it was also produced in Sf9 cells in the presence of serum proteins. However, its commercialization at large scale still needs more developmental effort. (d) VLP-based vaccine for bluetongue virus (BTV): Monovalent and bivalent VLP vaccines are being developed for the two BTV serotypes 1 and 4. BTV-1 confers more protection against a virulent live BTV strain than BTV-4. Earlier, VLPs expressing the capsid proteins VP2 and VP5 were developed by co-transfection of dual transfer vector DNA (pAcVC3/BTV-10-2/BTV-10-5) with wild-type AcNPV DNA in insect cells (French et al. 1990). Strong developmental efforts and further research are needed to commercialize a robust and effective BTV vaccine. Many more VLP veterinary vaccines have been developed with the baculovirus expression system in insect cells, such as avian influenza (AI) (H5N3)-VLPs, consisting of the haemagglutinin (HA), neuraminidase (NA) and matrix protein (M1) subunits of AI virus, for ducks (Prel et al. 2008); chimeric infectious bursal disease virus (IBDV)-VLPs, consisting of the structural proteins VP2, VP3 and VP4 with varying amounts of the capsid protein VP2 tagged with histidine, for chickens (Hu and Bentley 2001); rabbit haemorrhagic disease virus (RHDV)-VLPs, made up of the VP60 capsid protein, for rabbits (Laurent et al. 1994); and simian immunodeficiency virus (SIV)-VLPs, consisting of the precursor protein Pr56gag, for vaccine testing in non-human primates (Yamshchikov et al. 1995). Human Vaccines (a) Subunit vaccine for influenza: Influenza, generally called "the flu", is caused by RNA influenza viruses, designated types A to C. Both type A and type B influenza viruses possess haemagglutinin (HA) and neuraminidase (NA) glycoprotein spikes in their envelope, which act as key antigens in the host immune response and are therefore targeted for vaccine development. However, HA and NA exhibit antigenic drift due to continuous mutations in the genetic material, and vaccines based on these glycoproteins must be updated annually. Type C influenza viruses are not included in the annual influenza vaccine, as they cause only mild respiratory disease in humans.
With the advent of successful cases of approved VLP-based vaccines, researchers are indeed redirecting their efforts towards the development of such products. A number of vaccines have therefore been produced against many viral diseases of humans; however, many of them are still in preclinical or clinical trial stages. Prominent VLPs made up of multimeric proteins expressed in insect Sf9 cells include those for Ebola, from VP40 and glycoproteins (Sun et al. 2009); enterovirus, from P1 and 3CD (Chung et al. 2010); human parvovirus B19, from B19 VP1 and VP2 (Roldão et al. 2010); Norwalk virus (NV), from capsid proteins (Jiang et al. 1992; Ball et al. 1999; Atmar et al. 2011; Frey 2011); polyomavirus, from VP1 (Montross et al. 1991); severe acute respiratory syndrome-associated coronavirus (SARS-CoV), from SP, EP, MP and EN (Mortola and Roy 2004); and simian virus 40 (SV40), from VP1 (Kanesashi et al. 2003). VLPs for rotavirus were prepared by using two (VP2 and VP6) to three (VP2, VP6 and VP7) capsid proteins expressed in both Sf9 and High Five insect cells. Rotavirus VLPs have also been expressed in Sf larvae with the two capsid proteins VP2 and VP6 (Roldão et al. 2010). Combinations of capsid proteins from different strains of influenza were used in both Sf9 and High Five insect cells, such as HA (H1N1) with M1 (H3N2) and HA (H3N2) with M1 (H1N1), to produce higher amounts of influenza A-VLPs. Other influenza A-VLPs, formed by co-expression of M1 and ESAT6-HA, were produced only in High Five cells. Strain-specific influenza HA and M1 capsid proteins were used to prepare influenza A H1N1-VLPs and influenza A H3N2-VLPs in both insect cell lines (Krammer et al. 2010; López-Macías et al. 2011). A respiratory syncytial virus (RSV) vaccine was produced using the RSV F protein (Mazur et al. 2015; Neuzil 2016). HIV VLPs were produced by targeting the Gag protein and tested in rodents and rhesus macaques in preclinical trials (Pillay et al. 2009; Wagner et al. 1996). BEVs as Diagnostic Agents In principle, both heterologous subunit proteins and VLP-based subunit vaccines can be used as vaccines as well as antigens for the detection of antibodies, provided that they satisfy the various diagnostic parameters such as sensitivity, specificity, predictive values and likelihood ratios. These parameters have been well evaluated and found acceptable for diagnostic purposes for numerous BEV-derived products. However, commercialization of these vaccines/proteins demands further standardization and evaluation. Here, we summarize some of the diagnostic molecules for human as well as veterinary use produced by the BEV system in insect cells (Table 9.2).
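The diagnostic parameters just named (sensitivity, specificity, predictive values and likelihood ratios) follow directly from a 2x2 confusion matrix. The minimal Python sketch below shows the standard formulas; the example counts are invented for illustration, not data from any assay mentioned in this chapter.

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "PPV": tp / (tp + fp),            # positive predictive value
        "NPV": tn / (tn + fn),            # negative predictive value
        "LR+": sensitivity / (1 - specificity),
        "LR-": (1 - sensitivity) / specificity,
    }

# hypothetical evaluation of a BEV-derived antigen ELISA against a reference test
print(diagnostic_metrics(tp=88, fp=3, fn=2, tn=107))
```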
(Eshaghi et al. 2004). Its further use as a diagnostic reagent for humans needs to be explored. VLPs derived from the P1 and 3CD protein genes of swine vesicular disease virus (SVDV) were also developed as antigens for the detection of antibodies against SVDV in pigs by ELISA (Ko et al. 2005). (c) Horse: A recombinant baculovirus expressing the equine infectious anaemia virus (EIAV) core proteins Gag and p26 as antigens was found to possess high specificity and sensitivity in ELISA and agar gel immunodiffusion (AGID) for detecting antibodies in infected horse sera (Kong et al. 1997). Haemagglutinin (HA) was likewise expressed as a diagnostic antigen (Sugiura et al. 2001), and its efficiency was further tested using the HA1 subunit of HA (Sguazza et al. 2013). (d) Cattle: A baculovirus-derived antigen capture competitive ELISA (Ag Cap c-ELISA) for the diagnosis of bluetongue and epizootic haemorrhagic disease virus infection in cattle offers advantages in terms of easy production, standardization, reduced downstream processing requirements and its non-infectious nature compared with the commercially available c-ELISA (Mecham and Wilson 2004). A blocking ELISA was developed with BEVs for the detection of antibodies against foot-and-mouth disease virus type A in cattle, pigs and goats, with a specificity of 99% (Ko et al. 2010). Bovine respiratory syncytial virus (BRSV) infection, which causes lower respiratory tract disease in calves 1-3 months old, can be detected by immunofluorescence analysis with recombinant F protein as the antigen (Pastey and Samal 1998). (e) Bird: A variant of ELISA known as E-ELISA, using eukaryotically expressed E protein as the antigen for the detection of Tembusu virus (TMUV) in ducks, was developed with 93.2% specificity and 97.8% sensitivity (Yin et al. 2013). Recombinant avian paramyxovirus type 2 haemagglutinin-neuraminidase (APMV2-HN) has been found to be a useful alternative to APMV-2 antigens in the haemagglutination inhibition (HI) test for the detection of APMV-2 infection in birds (Choi et al. 2014). Whole virions of Sendai virus as well as VLPs are used as antigens for the detection of antivirus antibodies for diagnostic purposes; for example, VLPs of the major capsid protein VP1 of goose haemorrhagic polyomavirus (GHPV) are used for the detection of GHPV-specific antibodies in sera from flocks with haemorrhagic nephritis and enteritis of geese (HNEG) (Zielonka et al. 2006). Application in Humans Most of the recombinant proteins used as antigens have been expressed by the baculovirus expression system in Sf9 insect cells unless otherwise stated. Some of them are mentioned here. Lassa virus infection causes Lassa fever, which is mainly endemic in West Africa. Recombinant nucleocapsid protein acts as the antigen for the detection of antibodies in Lassa virus-infected patient sera by ELISA (Barber et al. 1990; Saijo et al. 2007). Rubella virus (RV) normally causes a self-limiting disease, but infection during the first trimester of pregnancy may cause foetal damage. Therefore, a serological diagnostic test was developed by expressing E1, E2 and the polyprotein precursor of rubella virus as antigens for enzyme immunoassay (EIA) and immunoblot analysis of patient sera (Seppänen et al. 1991). Trypanosoma cruzi causes Chagas' disease in Latin America. The flagellar repetitive antigen (FRA) of T. cruzi formed part of an improved diagnostic assay developed for Chagas' disease (dos Santos et al. 1992). Full-length human glutamic acid decarboxylases (GAD65 and GAD67) with histidine tags were produced in their natural conformations for the development of an immunoassay for the diagnosis of insulin-dependent diabetes mellitus (Mauch et al. 1993). The nucleocapsid protein of hantavirus strain SR-11 (rNP-SR-Sf9), an agent of haemorrhagic fever with renal syndrome (HFRS), was used as the antigen for an indirect immunofluorescence antibody (IFA) diagnostic test that detects three serotypes (Hantaan 76-118, SR-11 and Puumala) of hantavirus (Yoshimatsu et al. 1993). Purified human papillomavirus (HPV) E2 protein was used to develop an ELISA to detect IgG and IgA responses in cervical neoplasia patients (Rocha-Zavaleta et al. 1997). Houston/90 (Hou/90) is a human calicivirus (HuCV) strain in one of the three clades of Sapporo-like HuCVs that cause acute gastroenteritis in children.
The viral capsid gene of Hou/90 was used to produce the antigen for immunoprecipitation and EIA (Jiang et al. 1998). Herpes simplex virus (HSV) infection is caused by two viruses, HSV-1 and HSV-2. A diagnostic test that can distinguish between the two strains has been developed; it utilizes both type-specific and type-common HSV antigens in a single-step assay format to perform an accurate diagnosis (Burke 1999; Wald and Ashley-Morrow 2002; Liu et al. 2015). The capsid proteins of eight different strains of human caliciviruses (HuCVs) have been used to develop highly specific antigen-antibody detection assays by EIA (Jiang et al. 2000). For the causative agent of tick-borne encephalitis (TBE), a C-terminally truncated form of protein E (Etr) of a TBE complex virus tagged with histidine was used to develop a sensitive and specific ELISA as well as an immunoblot assay to detect TBE virus-specific antibodies in infected individuals (Marx et al. 2001). Fel d1, the major allergen from cats, consists of two polypeptide chains, chain 1 (Ch1) and chain 2 (Ch2), which are usually linked by a disulphide bond. A recombinant Fel d1 (rFel d1 Ch1 + Ch2) protein construct, in which the two chains are linked together with a glycine/serine linker, was used as a more potent antigen than bacterially derived proteins for the detection of IgE and IgG antibodies by radioimmunoassay (RIA) and ELISA (Guyre et al. 2002). Coeliac disease (CD) is characterized by antibodies against the autoantigen transglutaminase (TG). Recombinant human tissue TG (hu-tTG) expressed with the baculovirus system was used as the antigen for an ELISA that showed a sensitivity of 100% and a specificity of 98.6% (Osman et al. 2002). The envelope glycoproteins gB, gD, gC, gE and gG are thought to be the primary targets of the IgG antibody response in patients with herpes B virus (HBV) infection. Therefore, an ELISA test with high sensitivity and specificity was developed using a cocktail of these recombinant glycoproteins along with other capsid proteins (Perelygina et al. 2005). Similarly, recombinant proteins in single or multiple subunits have been developed with the baculovirus expression system in insect cells for the diagnosis of different types of viral infections in humans. BEVs as Therapeutic Agents BEVs express products such as growth factors, cytokines, chemokines, enzymes, hormones and monoclonal antibodies that can be used for human therapeutic purposes. More recently, BEV has also been exploited as an effective tool for gene therapy. For simplicity, the applications of these products have been divided into two major groups: biological drug therapy and gene therapy. Thousands of such biomolecules have been developed in this system to date; a few of them are discussed here (Table 9.3). Biological Drug Therapy BEVs have been utilized as eukaryotic expression vectors in insect cells for the production of therapeutic or immunotherapeutic proteins, such as monoclonal antibodies, cytokines and chemokines, and growth factors, that require post-translational modifications, most importantly glycosylation. The baculovirus expression system has been accepted as one of the most efficient and powerful technologies for the production of biological recombinants in terms of achievable quantity, purity and ease of eukaryotic processing (Luckow and Summers 1988). Therapeutic recombinant protein production is considered an essential sector of the emerging biotechnology industries, and this system has the potential to support an industry of high commercial value.
(a) Immunotherapy: Over the years, numerous tumour immunotherapies achieved early-stage successes but failed in Phase III clinical trials (Goldman and DeFrancesco 2009). Baculovirus-derived Provenge (sipuleucel-T; Dendreon, Seattle) for prostate cancer is among the first therapeutic cancer vaccines to complete a Phase III trial successfully and to receive FDA approval. Provenge (sipuleucel-T) is an autologous active cellular immunotherapy that has shown evidence of reducing the risk of death among men with metastatic castration-resistant prostate cancer (Kantoff et al. 2010). It consists of autologous peripheral-blood mononuclear cells (PBMCs), including antigen-presenting cells (APCs), that have been activated ex vivo with a recombinant fusion protein (PA2024). The fusion protein PA2024 contains a prostate antigen, prostatic acid phosphatase, fused to an immune-cell activator, granulocyte-macrophage colony-stimulating factor. PA2024 is produced by a BEV in Sf21 insect cells. The monoclonal antibody CO17-1A was prepared against colorectal cancer cells by using pFastBac vectors (Park et al. 2011). BEV-expressed proteins are also being utilized for the production of monoclonal antibodies against Bcl-2 (B-cell lymphoma/leukaemia-2), an integral membrane oncoprotein that regulates programmed cell death (apoptosis) in haematolymphoid cells (Reed et al. 1992). Single-domain antibodies (sdAbs) prepared against rotavirus infection are also known as nanobodies or VHHs. They have characteristically high stability, solubility and very high affinity for their antigens. These antibodies were first produced in larvae of the insect Trichoplusia ni, which serve as living biofactories for the production of such biomolecules (Gómez-Sebastián et al. 2012). Anti-breast cancer monoclonal antibodies (mAb) BR55 were prepared with or without fusion to the KDEL (Lys-Asp-Glu-Leu) endoplasmic reticulum retention signal. The heavy chain (HC) and light chain (LC) genes of mAb BR55 were cloned into the pFastBac Dual vector under the control of the polyhedrin (PPH) and p10 promoters, respectively, for expression in Sf9 insect cells (Lee et al. 2014). The antibody response against two recombinant subunit vaccines was enhanced by tagging the vaccines with the adjuvant recombinant single-chain antibody APCH1, which recognizes MHC class II DR and was produced in Trichoplusia ni insect cells (Gil et al. 2011). Human interleukin 2 (IL-2) was prepared in insect larvae of T. ni by placing the IL-2 gene under the p10 promoter of a BEV (Pham et al. 1999). Human granulocyte-macrophage colony-stimulating factor (hGM-CSF) was prepared with the Bombyx mori (silkworm) nuclear polyhedrosis virus (BmNPV) (Shi et al. 1996). Other cytokines and chemokines are being produced using this expression system in a similar manner. Recently, intravesical instillation of transgene-devoid baculovirus was found to elicit local immune stimulation by upregulating a set of Th1-type cytokines in orthotopic bladder tumours in mice (Ang et al. 2016). However, the application of such a strategy for non-muscle-invasive bladder cancer (NMIBC) in humans is awaited. (b) Enzyme and hormone therapy: The enzyme human adenosine deaminase, a key purine salvage enzyme required for immune competence, has been produced both in Trichoplusia ni and Spodoptera frugiperda insect cells as well as in larvae. This enzyme possessed a specific activity of 70 units/mg in crude homogenate, 70-350 times higher than in its two most abundant natural sources, thymus and leukaemic cells.
Such rapid, inexpensive and large-scale production of biologically active enzymes by this baculovirus system opens up avenues for other biologically active molecules. Human parathyroid hormone (hPTH) was produced both in Bombyx mori cells and in larvae. Both host systems have been reported to be suitable for the efficient synthesis and secretion of correctly processed hPTH (Mathavan et al. 1995). Similarly, recombinant full-length human growth hormone (hGH) was produced with a Bombyx mori nuclear polyhedrosis virus (vBmhGH) (Sumathy et al. 1996). (c) Growth factor therapy: Growth factors are natural signalling molecules required for a myriad of biological processes, for which consistent, cost-effective and clinically efficient production technologies are indispensable. Wound healing is one such complex biological process, requiring the collaborative efforts of different tissues, cells and molecules. The repair process of wounds after injury is initiated by the release of various growth factors (GFs). GFs act as functional messenger molecules between cells that control the cellular processes of the regulatory network, and they sometimes require recombinant protein therapies. Current approaches to wound healing focus on GFs and/or human skin substitutes, which are required to decrease healing time by modifying inflammation and accelerating the proliferative phase. The beneficial effects of GFs in attracting different kinds of cells to the site of wound healing have been demonstrated by many studies. Wider clinical and commercial application of such GFs depends on their scalable, cost-efficient production, and BEVs have been successful in unblocking the bottlenecks to meeting these needs. Three fully functional human GFs, human epidermal growth factor 1 (huEGF1), human fibroblast growth factor 2 (huFGF2) and human keratinocyte growth factor 1 (huKGF1), have been produced with BEVs in Trichoplusia ni insect larvae (Dudognon et al. 2014). The expression of huKGF1 was found to be further enhanced when it was expressed as a fusion with the human antibody IgG fragment crystallisable region (Fc). Human prepro-beta-nerve growth factor, which has been suggested as a therapeutic agent for the treatment of Alzheimer's disease, was produced in insect cells with a recombinant virus, yielding mature human beta nerve growth factor (rhNGF) that was found to be biologically active in promoting cholinergic cell survival (Barnett et al. 1990). Similarly, different strategies are being worked out with BEVs in insect cells or larvae for the biologically active, cost-effective, therapeutic and commercial-scale production of numerous, highly variable molecules. Gene Therapy Today, the potential of gene therapy has reached the point where it can be exploited to treat many diseases that were earlier thought to be untreatable. The requisite modalities for such gene drugs, such as safety, ease of generation, immune response, duration of expression and gene delivery capacity, are being successfully realized by baculovirus-based vectors. Baculoviruses have been found to deliver genes into a wide range of vertebrate cells and species. However, the exact mechanism of baculovirus entry into host cells is still not fully understood; recently, a phagocytic-like mechanism of entry into mammalian cells was found to be more convincing than pinocytosis (Long et al. 2006). Baculovirus progeny are produced in two forms, budded virus (BuV) and occlusion-derived virus, which differ only in their envelopes.
BuV derives its envelope from the cell membrane and spreads the infection within a host, whereas the occlusion-derived virus envelope is derived from the nuclear membrane and spreads infection between hosts. BuV is the form most widely used in biotechnology; it enters insect and other host cells through endocytosis, although the exact endocytic mechanism remains to be established. AcMNPV is the prototype baculovirus and is widely used for different applications, including gene therapy. It is able to transduce both dividing and non-dividing mammalian cells and to activate, in the target cells, the transgene that it carries under the control of a specified promoter. This indicates that the nucleocapsid of the baculovirus transports its genome across the intact host cell nuclear membrane through the nuclear pore complex. However, the detailed molecular mechanism of baculovirus transduction in mammalian cells demands further investigation for efficient gene delivery. The gene delivery capability of BEVs has been exploited in understanding the mechanism of vertebrate cell transduction, in preclinical studies, vaccination, cartilage and bone tissue engineering, cancer gene therapy, assay development, drug screening and the generation of other gene therapy vectors (Airenne et al. 2013). We would like to emphasize the use of recombinant adeno-associated vectors (rAAV) as gene therapy tools, which are highly important from bioprocess and therapeutic perspectives. BEV-Derived Recombinant Adeno-Associated Viruses (rAAVs) for Gene Therapy Recombinant AAVs that carry therapeutic DNA have turned out to be attractive gene delivery vectors because of their suitability for in vivo gene therapy, relatively long-term gene expression, lack of pathogenicity and ability to transduce a wide variety of both dividing and non-dividing cells. Nine different serotypes of rAAV are used for gene therapy, with each serotype exhibiting a different propensity for tissue-specific infection and different infection kinetics (Zincarelli et al. 2008). The major limitation of low production quantity was recently addressed by optimizing the BEV platforms and adjusting parameters such as multiplicity of infection, cell density and fermentation mode, which markedly increased the yield of vector genomes per litre (Mena et al. 2010). The strategy for rAAV production requires the production of the three AAV capsid proteins VP1, VP2 and VP3. These capsid proteins assemble within BEV-transduced insect cells to produce icosahedral VLPs (Aucoin et al. 2007). Efficient rAAV production requires co-infection of insect cells with three different kinds of baculoviral vectors. The first is Bac-Rep, expressing the major AAV replication enzymes Rep78 and Rep52, essential for AAV genome replication and packaging, respectively. The second is Bac-Cap, expressing the AAV virion capsid proteins (VP1, VP2 and VP3), and the third is Bac-GOI, expressing the gene of interest flanked by the AAV inverted terminal repeat elements required for the rescue, replication and packaging of the heterologous gene. Co-infection of insect cells with these three vectors efficiently produces replicated and encapsidated single-stranded AAV vector genomes (Weyer and Possee 1991). Further enhancement of AAV production in terms of stability, robustness, scalability and high titre involves expression of both the Rep and Cap proteins from a single baculovirus (Bac-RepCap), i.e. expression of both Rep78 and Rep52 from a single mRNA, together with genetic modifications of the original Bac-Rep and Bac-Cap constructs (Virag et al. 2009).
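As a rough illustration of planning the triple co-infection described above, the following Python sketch computes how much of each baculovirus stock to add so that Bac-Rep, Bac-Cap and Bac-GOI each reach a target multiplicity of infection (MOI). The MOIs, titres and culture size are hypothetical placeholders, not values from the text.

```python
def coinfection_volumes_ml(cells: float, plan: dict) -> dict:
    """plan maps vector name -> (target MOI, stock titre in pfu/mL).
    Returns the volume (mL) of each stock giving that vector its target MOI."""
    return {name: moi * cells / titer for name, (moi, titer) in plan.items()}

volumes = coinfection_volumes_ml(
    cells=1e9,                      # e.g. 1 L of Sf9 culture at 1e6 cells/mL
    plan={
        "Bac-Rep": (3.0, 5e8),      # Rep78/Rep52 replication enzymes
        "Bac-Cap": (3.0, 4e8),      # VP1/VP2/VP3 capsid proteins
        "Bac-GOI": (3.0, 6e8),      # ITR-flanked gene of interest
    },
)
for name, ml in volumes.items():
    print(f"{name}: {ml:.1f} mL")   # Bac-Rep: 6.0, Bac-Cap: 7.5, Bac-GOI: 5.0
```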
The development of such robust gene delivery vehicles was based on the fact that the AAV genome is efficiently replicated in Sf9 and Sf21 insect cell lines in a Rep-dependent fashion. Some of the diseases being targeted by gene therapy using rAAV are discussed below: (a) Gene therapy against lipoprotein lipase deficiency (LPLD): LPLD is a rare autosomal recessive genetic and metabolic disorder in which the familial lipoprotein lipase enzyme is inactivated by mutation in the LPL gene. Functional lipase is required for plasma triglyceride hydrolysis under normal conditions. The inactivated enzyme results in hypertriglyceridaemia, characterized by frequent abdominal pain and fatty deposits in the skin and retina, which in severe cases can lead to fatal pancreatitis, diabetes and the onset of cardiovascular disease. Earlier therapies targeted at lowering plasma triglycerides have not proved very effective. Alipogene tiparvovec (known as AAV1-LPL S447X in the early phases of clinical trials) is the first adeno-associated virus (AAV)-mediated gene therapy, manufactured by uniQure, to receive market authorization and government approval in Europe. It is an AAV1 (serotype 1) vector expressing the naturally occurring LPL transgene variant LPL S447X, which is linked with an improved lipid profile, and it is commercialized under the name Glybera (Gaudet et al. 2010). It is injected by the intramuscular route into patients, resulting in a natural gain of function of the LPL gene variant in muscle tissue. Glybera significantly lowers plasma triglycerides by increasing lipoprotein lipase enzyme activity. The major concern in using such vector-based gene therapy is to prevent both the humoral and the cell-mediated immune responses elicited against the viral capsid proteins, which may impact the efficacy and safety of these drugs. Intramuscular injection of Glybera has proved to be clinically safe and efficient, eliciting no additional systemic or local immune response harmful to humans. This approach has been found to be relevant and promising for the treatment of the thousands of single-gene disorders. Similar strategies are being investigated in a diverse range of therapeutic areas, and many products for the treatment of human diseases are at different stages of clinical development. These AAV gene therapy drugs at different clinical development phases are discussed here. (b) Haemophilia: Haemophilia B is a blood clotting disorder caused by mutations in the clotting factor IX gene. Presently, four clinical trials are ongoing that involve rAAV serotype 2 or 8 designed to express factor IX. Haemophilia A, the most common severe inherited bleeding disorder, caused by mutations in the factor VIII gene, is significantly more problematic for this treatment because the larger size of its cDNA prevents achievement of adequate levels of transgene expression and elicits an anti-factor VIII immune response (High et al. 2014). (c) Retinal degeneration: Recombinant AAV has been used to treat a number of animal models but is limited by its carrying capacity, slow onset of expression and limited ability to transduce some of the retinal cell types from the vitreous.
Next-generation AAVs have been produced to address these issues through the creation of self-complementary AAV vectors for faster onset of expression and specific mutation of surface-exposed residues to increase transduction. Such vectors were further improved for broader applicability and advantageous characteristics by directed evolution through an iterative process of selection (Day et al. 2014). Age-related macular degeneration (AMD), which leads to central vision loss in elderly individuals due to choroidal neovascularization, is marked by proliferation of blood vessels and retinal pigment epithelial (RPE) cells, leading to photoreceptor death and fibrous disciform scar formation. Treatment of AMD patients requires neutralization of vascular endothelial growth factor (VEGF), for which expression of a modified soluble Flt1 receptor was designed and delivered with the AAV2-sFLT01 vector. Presently, this study is in a Phase 1 trial (MacLachlan et al. 2011). Leber congenital amaurosis (LCA) is an autosomal recessive blinding disease that occurs due to mutations in the RPE65 gene. Subretinal administration of AAV2-hRPE65v2 has been reported to be both safe and efficient for at least 1.5 years after injection. Currently, six clinical trials, in either stage 1 or 2, are underway to treat this retinal disease (Simonelli et al. 2010). (d) Neurological diseases: rAAV has been used as an effective gene delivery system for the treatment of central and peripheral nervous system disorders, with almost no adverse effects in many clinical trials. Its first clinical use in the human brain was to treat Canavan disease, a childhood leukodystrophy also known as Van Bogaert-Bertrand disease, caused by a deficiency of the enzyme aspartoacylase (ASPA). The treatment involves neurosurgical administration of approximately 10 billion infectious particles of recombinant adeno-associated virus (AAV) containing the aspartoacylase gene (ASPA) directly to the affected regions of the brain (Janson et al. 2002). To treat Alzheimer's disease, the gene encoding nerve growth factor (NGF), which is essential for healthier nerve cells, is transduced by an adeno-associated virus vector (CERE-110) (Bakay et al. 2007). Transduction of glutamic acid decarboxylase (GAD) and the trophic factor neurturin was assessed successfully in different Phase 1 and 2 clinical trials for the treatment of Parkinson's disease (Marks et al. 2010; Kaplitt et al. 2007). (e) Duchenne muscular dystrophy (DMD): DMD is a severe recessive X-linked muscle disorder caused by mutations in the gene encoding dystrophin. Gene therapy to treat DMD is a challenge because of the large size of the DMD gene. However, alternative gene delivery strategies such as exon skipping, trans-splicing, and micro- and mini-dystrophin constructs, in Phase II/III clinical trials, have been found to be promising (Jarmin et al. 2014). A number of Phase I/II/III clinical trials are underway for the treatment of numerous diseases, such as acute intermittent porphyria, alpha-1 antitrypsin deficiency, aromatic amino acid decarboxylase deficiency, Becker muscular dystrophy, choroideremia, chronic heart failure, gastric cancer, HIV, inflammatory arthritis, late infantile neuronal ceroid lipofuscinosis, Leber's hereditary optic neuropathy, limb girdle muscular dystrophy, macular degeneration, Pompe disease, spinal muscular atrophy, etc. (Felberbaum 2015). The future prospects of baculovirus gene delivery applications in stem cell transduction, cancer gene therapy and cartilage and bone tissue engineering are also quite optimistic.
Great interest in regenerative medicine began with advances in the identification, isolation and derivation of human stem cells, specifically the generation of human induced pluripotent stem cells. Prolonged expression of transgenes has been demonstrated in multiple multipotent stem cell types, such as mesenchymal, neural, umbilical cord, bone marrow and adipose tissue stem cells, as well as in human embryonic stem cells (hESCs) and pluripotent stem cells. These baculoviruses have also been customized for stable gene expression in stem cells by genomic integration for downstream therapeutic applications, for example, deriving unlimited numbers of genetically corrected functional adult cells for cell replacement therapy (Kotin et al. 1991). De-differentiated chondrocytes transduced with a baculovirus vector (Bac-CB) expressing bone morphogenetic protein-2 (BMP-2) show sustained expression of BMP-2 in passaged chondrocytes in vitro, which was further improved by co-expression of transforming growth factor beta from baculovirus vectors (Chen et al. 2008). These chondrocytes were further used to grow cartilage-like tissues in rotating-shaft bioreactors, which demonstrated the potential of baculovirus in cartilage tissue engineering, although their clinical utility in humans is yet to be proved. Bac-CB-based BMP-2 transduction into human bone marrow-derived mesenchymal stem cells (BMSCs) has also been demonstrated to direct the ontogeny of naïve BMSCs. Implantation of these transduced cells induced ectopic bone formation in nude mice and promoted calvarial bone repair in immunocompetent rats (Chuang et al. 2009). For large-scale bone repair, sustained expression of genes promoting osteogenesis (BMP-2) and angiogenesis (VEGF) in adipose-derived stem cells (ASCs) was achieved with a dual baculovirus vector system. Transplantation of these ASCs into NZW rabbits resulted in accelerated healing, improved bone quality and angiogenesis. The same technique was also tested in further rabbit models, and the results altogether support the viability of baculoviruses for stem cell engineering and bone formation (Luo et al. 2011). The propensity of baculoviruses for effective high-level transgene expression has also been exploited for cancer gene therapy. Baculovirus vectors have been tailored with suicide, tumour suppressor, pro-apoptotic, immune-potentiating and anti-angiogenesis genes and studied in animal tumour models under in vivo conditions in many anticancer strategies (Luo et al. 2012; Wang and Balasundaram 2010). Recently, stem cells transduced with suicide genes have proved beneficial for curbing primary, solid and metastatic tumours (Zhao et al. 2012). Today, baculovirus technology has matured to the level at which it can be used for a plethora of applications. The studies conducted on model organisms in the context of therapeutic applications are encouraging and support the further development of baculoviruses from preclinical applications to clinical trials and the treatment of human diseases. A deeper and more holistic understanding of the molecular mechanisms of antigenicity and target cell transduction will be helpful in enhancing the clinical utility of this unique and powerful gene delivery system.
Memantine can protect against inflammation-based cognitive decline in geriatric depression Introduction Geriatric depression is frequently accompanied by cognitive complaints and inflammation that increase risk for treatment-resistant depression and dementia. Memantine, a neuroprotective drug, can improve depression and inflammation and help prevent cognitive decline. In our six-month clinical trial, escitalopram/memantine (ESC/MEM) improved mood and cognition compared to escitalopram/placebo treatment (ESC/PBO; NCT01902004). In this report, we examined the impact of baseline inflammation on mood and cognitive outcomes. Materials and methods We measured a panel of inflammatory cytokine markers using Human 38-plex magnetic cytokine/chemokine kits (EMD Millipore, HCYTMAG-60K-PX38) in 90 adults 60 years and older with major depression enrolled in a 6-month double-blind placebo-controlled trial of escitalopram + memantine (ESC/MEM) in depressed older adults with subjective memory complaints. Four cytokine factors were derived, and linear models were estimated to examine the ability of cytokine levels to predict treatment-induced change in depression and cognition. Results Of the 90 randomized participants, 62 completed the 6-month follow-up assessment. Both groups improved significantly on depression severity (HAM-D score), but not on cognitive outcomes, at six months. Cytokine factor scores were not significantly different between ESC/MEM (n = 45) and ESC/PBO (n = 45) at baseline. Pro-inflammatory biomarkers at baseline predicted a decline in executive functioning in the ESC/PBO group but not in the ESC/MEM group, interaction F(1,52) = 4.63, p = .04. Discussion In this exploratory analysis, the addition of memantine to escitalopram provided a protective effect on executive functioning in older depressed adults. Future studies are needed to replicate the association of cytokine markers with antidepressant and neuroprotective treatment-related change in cognition in geriatric depression. Introduction Geriatric depression and cognitive dysfunction are often comorbid (Charlton et al., 2014; Lee et al., 2007). Evidence of cognitive impairment has been found in up to two thirds of non-demented older adults with depression (Lanza et al., 2020). Underlying inflammation is linked to increased risk for Alzheimer's disease and treatment-resistant depression (Zwicker et al., 2018). Existing antidepressants are able to reduce peripheral inflammation in humans and in animal models, while anti-inflammatory agents have been tried as add-on antidepressant treatment strategies with some promise (Eyre et al., 2016, 2017; Kohler et al., 2016; Lindqvist et al., 2017; Lu et al., 2019; Sun et al., 2020). Specifically, escitalopram can influence the metabolic pathways responsible for oxidative stress and inflammatory mechanisms of depression (Bhattacharyya et al., 2019). Additionally, neuroprotective agents like memantine have been reported to produce antidepressant and neuroprotective effects in animal models via neuroplastic effects on hippocampal cell proliferation and decreased neuroinflammation (Takahashi et al., 2018; Wei et al., 2016). Neuroinflammation and excitotoxicity contribute to the pathophysiology of both depression and neurodegeneration (Bauer and Teixeira, 2019; Bhalla et al., 2009; Conwell et al., 1998; Lavretsky et al., 1998, 2020; Pelton et al., 2016; Reynolds et al., 2006; Rush et al., 2006; Steffens, 2008; Vega et al., 2016).
Antidepressants coupled with drugs that target glutamate transmission and excitotoxicity therefore offer a promising novel "mood plus cognitive enhancer" neuroprotective approach to treatment. Memantine, an NMDA antagonist, inhibits calcium influx and excitotoxicity while preserving the physiological activation of the receptor. We recently conducted a randomized, double-blind, placebo-controlled trial of escitalopram combined with placebo (ESC/PBO) or memantine (ESC/MEM) in depressed elderly with subjective memory complaints (NCT01902004). No differences were observed in the depression remission rate at 6 or 12 months following initiation of treatment. However, compared to ESC/PBO, ESC/MEM treatment produced improvements in delayed recall and executive functioning at 12-month follow-up. The current report examines the effects of inflammation at baseline on clinical and cognitive outcomes at 6-month follow-up. Several studies have highlighted an association between depression symptoms and increased markers of peripheral inflammation (Dantzer et al., 2008; Miller and Raison, 2016), which theoretically spurs neuroinflammation downstream that manifests as depression or cognitive decline (Franceschi and Campisi, 2014). The evidence linking inflammation to increased cognitive dysfunction in aging is more mixed and may vary as a function of the presence and/or stage of neurodegenerative disease progression (Lai et al., 2017; Lassale et al., 2019; Ng et al., 2018; Yang et al., 2015). Nonetheless, inflammation remains a commonly suspected mechanism of cognitive impairment in aging (Franceschi and Campisi, 2014). Despite the fact that inflammation is a targeted mechanism in both depression and cognitive impairment among older adults, there are surprisingly few studies that approach these symptoms as concurrent outcomes of a similar mechanistic process. The parent clinical trial of the current exploratory study offered a unique opportunity to investigate inflammation as a predictor of treatment response for both depression and cognitive outcomes. We examined baseline markers of peripheral inflammation and change scores following study treatment, testing for differential treatment response between groups for either depression or cognitive function. The rationale was that our findings could contribute uniquely to a mechanistic understanding of these aging-related symptoms by manipulating neurotransmitter systems intricately connected to neuroinflammation (Haroon et al., 2017). Methods Study methods have been previously described and will be briefly summarized. This study was approved by the University of California, Los Angeles (UCLA) Institutional Review Board, and all participants signed informed consent. Between October 2013 and May 2018, we recruited individuals from the UCLA Neuropsychiatric Hospital inpatient and outpatient services and through community advertising. Three hundred and sixty-one individuals were assessed via phone screening, yielding 115 participants for in-person diagnostic interview. Of these, 95 met inclusion criteria and underwent randomization to one of the treatment arms. The sample used in the present study included n = 45 participants randomized to receive escitalopram with placebo and n = 45 randomized to receive escitalopram with memantine who also had baseline inflammation data available (please see CONSORT diagram; Fig. 1).
The Structured Clinical Interview for the Diagnostic and Statistical Manual of Mental Disorders (SCID DSM-5) was administered by a study psychiatrist or a trained, masters-level research associate to diagnose Major Depressive Disorder (MDD) and rule out other diagnoses (e.g., psychosis). Inclusion criteria were: 1) presence of MDD according to DSM-5 criteria; 2) score of ≥16 on the 24-item Hamilton Rating Scale for Depression (HAM-D) (Hamilton, 1960); 3) subjective memory complaints (affirmative response to the question, "Have you experienced memory problems over the past six months?" during phone screening); 4) absence of dementia (described below); and 5) age ≥60 years. Exclusion criteria were: 1) lifetime history of any psychiatric disorder (except MDD, co-morbid anxiety, or insomnia); 2) recent and/or current unstable medical or neurological disorders; 3) diagnosis of dementia; or 4) known allergic reaction to escitalopram or memantine. Participants were free of psychotropic medications for at least two weeks before starting the trial (four weeks in the case of fluoxetine). No participants were taking a cognitive enhancer at study entry. Screening for dementia Participants were screened for dementia using the following procedures: 1) administration of the Clinical Dementia Rating Scale (Berg, 1988), with scores of >0.5 excluded; 2) review of a standard battery of hematologic studies, blood chemistries, liver and thyroid function tests, B12 and folate levels, and RPR test; 3) a neurological and psychiatric examination; 4) review of neuropsychological scores on the study test battery; and 5) a score of ≥24 on the Mini-Mental State Examination (Folstein et al., 1975, 1985). Those who met a diagnosis of dementia were excluded. Diagnosis of mild cognitive impairment (MCI) Whether or not eligible participants met criteria for MCI was determined using established guidelines (Langa and Levine, 2014; Petersen, 2004). MCI was defined as: 1) a stage between normal cognition and dementia (Clinical Dementia Rating Scale (CDR) score of 0.5 (Hughes et al., 1982)); 2) patient-reported decline in cognition; 3) objective impairment on neurocognitive testing; and 4) no significant functional impairment. Objective impairment on neurocognitive testing was defined as scoring one standard deviation (SD) below age- and education-specific norms on at least two screening memory tests (Hopkins Verbal Learning Test, Revised [either Total or Delayed scores] and Wechsler Memory Scale Third Edition, WMS-III, verbal paired associates [either Total or Delayed scores]). Participants who met this criterion and had a CDR score of 0.5 were classified as having amnestic MCI (either single or multiple domains) (Winblad et al., 2004). Randomization Eligible participants were randomized in a 1:1 ratio to escitalopram/placebo or escitalopram/memantine using a computer-generated random assignment scheme. A block randomization strategy (with randomly selected blocks of length 4 and 6) was used to maintain balance throughout the trial, as sketched below. Intervention procedures All study participants received a 14-day supply of the study medications, including 10-20 mg of escitalopram daily, open-label, throughout the trial. Matching capsules containing memantine (MEM) or placebo were given and titrated from 5 mg/day up to 10 mg twice daily (i.e., 20 mg per day) during the first four weeks.
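A minimal Python sketch of the 1:1 block randomization described above, with block lengths of 4 and 6 selected at random; this is illustrative only and not the trial's actual allocation code.

```python
import random

def block_randomize(n: int, arms=("ESC/MEM", "ESC/PBO"),
                    block_sizes=(4, 6), seed=0) -> list:
    rng = random.Random(seed)
    schedule = []
    while len(schedule) < n:
        size = rng.choice(block_sizes)             # randomly pick a block length
        block = list(arms) * (size // len(arms))   # 1:1 balanced block
        rng.shuffle(block)
        schedule.extend(block)
    return schedule[:n]

print(block_randomize(12))
```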
Depending on Clinical Global Impressions (CGI) scale ratings at the end of week 4, participants were either titrated up to 20 mg escitalopram/day (CGI ≥3) or continued on the same dose (CGI rating of 1 or 2). If participants reported side effects attributed to the study medications, they were instructed to decrease their dosage. The minimum allowed dosages were 5 mg once daily for MEM and 10 mg once daily for escitalopram. Neuropsychological battery The following test battery was administered at baseline and 6 months. We transformed raw scores to z-scores using the sample mean and standard deviation, reversing z-scores when necessary so that higher z-scores represent better performance for all measures. These z-scores were aggregated into domains, chosen a priori based on the general cognitive processes involved, according to standard neuropsychological practice and consistent with our prior report from this sample. The executive functioning measures included the Trail Making Test (Heaton et al., 2004; Reitan, 1958), Stroop interference (CJ and SM, 1978), and the Controlled Oral Word Association test [FAS] (Heaton et al., 2004; Strauss et al., 2006). Statistical approach Data were inspected for outliers, homogeneity of variance and other assumptions to ensure their appropriateness for parametric statistical tests. The cytokine concentration levels were log-transformed, and in order to reduce the number of cytokine markers in analyses, we used the iterated principal factor method with varimax rotation to obtain factor scores from the log-transformed cytokine concentrations at baseline. The number of factors was determined using two criteria: (1) a scree plot (eigenvalues on the y-axis and the number of factors on the x-axis) to determine the point where the slope of the curve leveled off, indicating the number of factors that should be kept, and (2) the total amount of variability of the original items explained by each factor solution. Following Hair et al. (2010), a factor loading of 0.5 and above was chosen as the cut-off (Hair et al., 2010). We first used general linear models to examine the association of the cytokine factor scores with cognitive domain scores at baseline, controlling for age, sex, BMI and batch. We also examined models including depression (HAM-D) scores to evaluate whether any shared variance with depression washed out emergent relationships between inflammation and cognition, or whether these relationships were distinct from depression. We then estimated similar general linear models to examine whether the factor scores at baseline were associated with 6-month change in cognitive domain scores; a sketch of this pipeline is given below. As above, age, sex, BMI and batch were used as covariates. Given that this is the first study to examine the association of cytokine markers with antidepressant treatment-induced change in cognition in older depressed patients, we set the significance level at p ≤ .05 for all analyses. Sample Baseline demographic and clinical variables are presented in Table 1. Treatment groups did not differ on any of these measures. Sixty-two subjects completed the study: 33 in ESC/MEM (out of 45 at baseline; 73% completers) and 29 in ESC/PBO (out of 45 at baseline; 64% completers); see Fig. 1. Completers and drop-outs did not differ on any of the baseline measures (see Supplementary Table 1). The mean daily escitalopram dose was 9.9 mg (SD = 1.5; range: 5-20 mg). The mean daily memantine dose was 19.3 mg (SD = 2.6; range: 10-20 mg). Measures of tolerability and dropouts due to side effects did not differ between the groups.
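The factor-extraction and regression steps described under "Statistical approach" can be sketched as follows. This assumes the Python factor_analyzer and statsmodels packages, synthetic stand-in data, and hypothetical column names; the 'principal' method here only approximates the iterated principal factor procedure used in the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame(rng.lognormal(0.0, 1.0, (n, 38)),
                  columns=[f"cyt_{i}" for i in range(38)])   # stand-in 38-plex panel
df["age"], df["BMI"] = rng.normal(70, 6, n), rng.normal(27, 4, n)
df["sex"], df["batch"] = rng.integers(0, 2, n), rng.integers(0, 2, n)
df["delayed_recall"] = rng.normal(0, 1, n)                   # stand-in domain z-score

X = np.log(df[[c for c in df.columns if c.startswith("cyt_")]])
fa = FactorAnalyzer(n_factors=4, rotation="varimax", method="principal").fit(X)
print(pd.DataFrame(fa.loadings_, index=X.columns))           # keep loadings >= 0.5
df[["F1", "F2", "F3", "F4"]] = fa.transform(X)               # factor scores

# baseline association of a factor with a cognitive domain, with stated covariates
print(smf.ols("delayed_recall ~ F4 + age + sex + BMI + batch", data=df).fit().summary())
```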
Remission rate within ESC/MEM was 47.9%, compared to 31.9% in ESC/PBO, at 6 months (χ²(1) = 2.0, p = .15). Changes in HAM-D, learning, delayed recall and executive functioning scores were not significantly different between groups. Both groups improved significantly in HAM-D, and neither group improved in cognitive outcomes at the end of the 6-month intervention. Please refer to our earlier paper describing the results of the parent clinical trial for more detailed results. Cytokine factor analysis Four factors were chosen as the optimal number of factors to be retained, accounting for 74% of the variance. The factor loadings are presented in Table 2. Interestingly, three of the four factors identified mirrored clear biological functions. All the cytokines included in Factor 1 are typically associated with T cell responses, particularly type 1 and type 17 helper T cells (Th1 and Th17) (Damsker et al., 2010). Factor 2 includes prototypical pro-inflammatory cytokines and chemokines (IL-8, MIP-1β, IL-6 and TNF-α) together with their prototypical regulators IL-10 and IL-1RA, thus bearing an innate inflammatory signature (Turner et al., 2014; Zhang and An, 2007). Factor 3 includes only chemokines, primarily those driving the recruitment of eosinophils/basophils, T cells and monocytes (Turner et al., 2014; Zhang and An, 2007). Factor 4 was the only one without a clear biological function; it included sCD40L and fractalkine, both involved in vascular inflammation, while all four of its analytes are associated with neuroinflammation. Baseline analyses Cytokine factor scores were not significantly different between treatment groups at baseline. Factors 1, 2 and 3 were not significantly associated with any cognitive domain. However, increased scores on Factor 4 were associated with worse Learning and Delayed Recall scores: F(1,81) = 3.92, p = .05 and F(1,81) = 4.31, p = .04, respectively (see Supplementary Table 2 for beta coefficients, standard errors and 95% confidence intervals for all associations). When depression (HAM-D) scores were added to the models, the results were sustained for Delayed Recall, F(1,80) = 3.89, p = .05, and to a lesser extent for Learning, F(1,80) = 3.46, p = .07. Longitudinal analyses Cytokine factor scores at baseline were not associated with change in depression (HAM-D). We found that Factor 2 was differentially associated with change in Executive Function scores as a function of treatment group: interaction term F(1,52) = 4.63, p = .04 (a model specification is sketched below). As seen in Fig. 2, baseline Factor 2 scores predicted decline in Executive Function only in the ESC/PBO group (slope = −0.13, p = .003), while the ESC/MEM group exhibited no relationship between baseline inflammation scores and Executive Function decline (slope = 0.0, p = .9), despite a similar distribution of Factor 2 baseline scores (see Supplementary Table 3 for beta coefficients, standard errors and 95% confidence intervals for all associations). No other associations between baseline factor scores and change in cognitive domain scores were significant.
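The group-by-inflammation moderation test reported above corresponds to an interaction term in a linear model. A hedged sketch with synthetic data and assumed variable names (not the study data) follows.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 62                                             # completers
df = pd.DataFrame({
    "F2": rng.normal(0, 1, n),
    "group": rng.choice(["ESC/MEM", "ESC/PBO"], n),
    "age": rng.normal(70, 6, n), "sex": rng.integers(0, 2, n),
    "BMI": rng.normal(27, 4, n), "batch": rng.integers(0, 2, n),
})
# simulate the reported pattern: decline with higher F2 only under ESC/PBO
df["exec_change"] = np.where(df["group"] == "ESC/PBO",
                             -0.13 * df["F2"], 0.0) + rng.normal(0, 0.3, n)

m = smf.ols("exec_change ~ F2 * group + age + sex + BMI + batch", data=df).fit()
print(m.params.filter(like="F2"))                  # F2 and F2:group interaction terms
```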
Second, we found that increased pro-inflammatory factors (Factor 2) at baseline predicted decline in executive function, but only in the ESC/PBO group, with no such relationship observed in the ESC/MEM group. Notably, we did not find relationships among the same factors in both sets of analyses and in each set of analyses only one of the four factors was found to be related to cognition. Elevated inflammation has been implicated in both depression and cognitive decline in aging (Ownby, 2010;Rosenblat et al., 2014), and this report extends this literature in a few key ways. Most previous studies tended to focus on the relationship of inflammation to either depression or cognitive impairment as an outcome (Elderkin-Thompson et al., 2012) not taking into account frequent comorbidity and shared underlying mechanisms (Kanchanatawan et al., 2018;Morimoto and Alexopoulos, 2013). In addition, most studies examined the role of isolated inflammatory markers, most commonly C-reactive Protein (CRP), Il-6, TNF-α (Lindqvist et al., 2017;Strawbridge et al., 2015;Yang et al., 2019), while we aggregated factors from a panel of cytokines that appear to represent concurrent function. Most but not all cytokines included in the panel have been studied with respect to mood or cognition. Factor 2 contains a set of well-known pro-inflammatory markers often described in cognition and mood studies (da Fonseca et al., 2014;Elderkin-Thompson et al., 2012;Lai et al., 2017;Ng et al., 2018). While the cytokines that comprise Factor 4 are less well-studied than other markers in terms of cognitive outcomes, there is a growing literature uncovering their role in cognition and neurodegeneration. In mice models, IFN-a in CSF is associated with cognitive impairment (Sas et al., 2009). Higher levels of soluble CD40L has been associated with HIV-associated neuroinflammation (Ramirez et al., 2010). GRO is also implicated in inflammatory response to reactive oxygen species in mice models (Shen et al., 2010). Fractalkine moderates microglia activity in the CNS, its receptor CX3CL1 protects neurons from microglial neurotoxicity (Limatola and Ransohoff, 2014), and is linked to cognition and neurodegenerative diseases; however, the nature of these relationships requires further study (Finneran and Nash, 2019). Our finding of the lack of influence of Factor 2 on cognitive outcomes in the ESC/MEM is intriguing and suggests that memantine may protect against pro-inflammatory cognitive decline. This observation further elaborates on the role of peripheral inflammation in geriatric depression and cognitive decline, and adds to our recent report from the same study using the functional enrichment transcriptome analysis that demonstrated that escitalopram-based remission was associated with functions related to cellular proliferation, apoptosis, and inflammatory response . Remission in the ESC/MEM group, however, was characterized by processes related to cellular clearance, metabolism, and cytoskeletal dynamics. Both treatment arms modulated inflammatory responses, albeit via different effector pathways. Memantine is an NMDA receptor agonist used to treat moderate to severe Alzheimer's disease by reducing glutamatergic excitotoxicity (Cacabelos et al., 1999;Rogawski and Wenk, 2003). 
Dysfunctional glucose metabolism is emerging as a key player in the development of Alzheimer's disease (Kuehn, 2020), which has been given the moniker "Type III diabetes" (de la Monte and Wands, 2008), and it can increase oxidative stress and subsequently drive up neuroinflammation (-Corral et al., 2015). A similar mechanism has been proposed in the development of depression (Dantzer and Walker, 2014). Inflammation can lead to glutamatergic excitotoxicity, and some have posited that this is the pathway that links inflammation to susceptibility for depression (Haroon et al., 2017). In line with our findings, studies of ketamine, another NMDA antagonist, have also suggested that it not only prevents glutamatergic excitotoxicity but also may have anti-inflammatory properties (Hudetz et al., 2009; Proescholdt et al., 2001). Taken together, our findings point to a possible role of memantine in protecting against inflammation-driven cognitive decline in depressed older adults. This study has limitations. The study was not specifically powered or designed for the presented analyses, and the findings will require replication and further study. The sample is relatively homogeneous with respect to demographic variables that might relate to the relatively preserved cognition or the level of inflammation in this cohort. In addition, cognitive function varied somewhat: although none of the participants had dementia, some met criteria for mild cognitive impairment. Therefore, it will be important to extend these findings in studies with longitudinal follow-up to identify those who may be developing Alzheimer's-type or other neurodegenerative disorders. Participants were not required to fast before blood collection took place, introducing the possibility of uncontrolled biologic variability. While blood-based markers of inflammation are widely used as a proxy to study neuroinflammation given their easier accessibility, their signal is diluted in the circulation compared with the peripheral site of inflammation; thus, it would be helpful to study the response to memantine treatment in measures more proximal to the central nervous system (e.g., cerebrospinal fluid) (Bettcher et al., 2018). Alternatively, ultra-sensitive methods for cytokine detection, such as Simoa, may be used to improve detection of diluted signals of tissue inflammatory responses in the circulation. Several publications have confirmed the ultra-sensitivity of this technology and support its utility for the development of immune signatures and the identification of disease biomarkers (Rissin et al., 2010). Importantly, we did not find a consistent pattern with respect to inflammatory factors and cognition. The most well-studied inflammatory markers related to cognition (i.e., Factor 2) did not correlate with cognitive functioning at baseline, but only in the analysis of change scores. Furthermore, associations varied among cognitive domains, with relationships found for learning and memory at baseline and for change in executive function over time. Given the exploratory nature of this study and the higher likelihood of spurious findings compared with hypothesis-driven analyses, it is critical to interpret our findings with caution. Furthermore, we acknowledge the uncertainty of drawing links between highly complex, microscopic immune responses and more abstract downstream performance on cognitive testing.
While numerous studies take a similar theory-driven approach, our goal for the present study was to begin exploring these relationships to generate hypotheses for future research. In summary, the present study highlights intriguing links between inflammation and cognitive outcomes in depressed older adults who were treated with escitalopram combined with memantine or placebo. The adverse effects of increased inflammation are well studied in both depression and neurodegenerative disease, and it is important to consider overlapping neurobiological pathways for these burdensome disorders of aging. Our findings also implicate a role of modulating glutamate activity in protecting older adults with depression from inflammation-related cognitive decline, a novel finding that requires further study. Declaration of competing interest None reported.
Nonredundant Roles for CD1d-restricted Natural Killer T Cells and Conventional CD4+ T Cells in the Induction of Immunoglobulin E Antibodies in Response to Interleukin 18 Treatment of Mice Interleukin (IL)-18 synergizes with IL-12 to promote T helper cell (Th)1 responses. Somewhat paradoxically, IL-18 administration alone strongly induces immunoglobulin (Ig)E production and allergic inflammation, indicating a role for IL-18 in the generation of Th2 responses. The ability of IL-18 to induce IgE is dependent on CD4+ T cells, IL-4, and signal transducer and activator of transcription (stat)6. Here, we show that IL-18 fails to induce IgE both in CD1d−/− mice that lack natural killer T (NKT) cells and in class II−/− mice that lack conventional CD4+ T cells. However, class II−/− mice reconstituted with conventional CD4+ T cells show the capacity to produce IgE in response to IL-18. NKT cells express high levels of the IL-18 receptor (R)α chain, produce significant amounts of IL-4, IL-9, and IL-13, and induce CD40 ligand expression in response to IL-2 and IL-18 stimulation in vitro. In contrast, conventional CD4+ T cells express low levels of IL-18Rα and respond poorly to IL-2 and IL-18. Nevertheless, conventional CD4+ T cells are essential for B cell IgE responses after the administration of IL-18. These findings indicate that NKT cells might be the major source of IL-4 in response to IL-18 administration and that conventional CD4+ T cells demonstrate their helper function in the presence of NKT cells. Introduction IL-18, an IL-1-like cytokine that requires cleavage by caspase-1 to become active, was originally identified as a factor that enhances IFN-γ production by Th1 cells in the presence of anti-CD3 Ab plus IL-12 (1,2). Later studies have revealed that IL-18 and IL-12 directly and synergistically induce IFN-γ production by Th1 cells, nonpolarized T cells, B cells, NK cells, macrophages, and dendritic cells (3)(4)(5)(6)(7)(8). However, our recent studies and those of others have demonstrated that in the absence of IL-12, IL-18 promotes Th2 cytokine production by T cells, basophils, and mast cells (9)(10)(11)(12)(13). In the presence of IL-3, IL-18 stimulates basophils and mast cells to produce IL-4 and IL-13 even without FcεR cross-linkage (9). CD4+ T cells cultured with IL-2 and IL-18, without TCR engagement, express CD40 ligand (L), produce IL-4 and IL-13, and induce B cells to secrete IgE in vitro (10). Consistent with these findings, the administration of relatively high doses of IL-18, which have the ability to induce Th1 diseases when administered with IL-12 to WT mice (14), results in striking increases in serum IgE that are dependent on CD4+ T cells, IL-4, and signal transducer and activator of transcription 6 but are independent of TCR engagement (10). Transgenic (Tg) mice that overexpress IL-18 or caspase-1 in their keratinocytes (KIL-18Tg and KCASP1Tg, respectively) spontaneously produce IgE in an endogenous IL-18-dependent manner (10,15,16). Taken together, these results suggest that IL-18 promotes allergic disorders, particularly intrinsic type atopic diseases characterized by the absence of sensitivity to a particular antigen. Although it is well established that IL-18 stimulates CD4+ T cells to produce IL-4 in vivo and in vitro, the CD4+ T cell subset that responds to IL-18 in vivo by inducing IL-4 production and CD40L expression remains unknown.
NKT cells express the NK cell marker NK1.1 and an invariant TCR Vα14-Jα281 chain, preferentially associated with a Vβ8.2 chain (17). NKT cells are positively selected by the nonpolymorphic MHC class I-like molecule CD1d (17) and recognize glycolipids, such as α-galactosylceramide (α-GalCer) presented by CD1d (18). About 60% of all NKT cells express CD4 whereas the remaining cells are CD4−CD8− (17). NKT cells exert regulatory functions, which are most likely mediated by their capacity to promptly release large amounts of IL-4 and IFN-γ upon TCR engagement by anti-CD3 or NKT cell Ls such as α-GalCer (19)(20)(21)(22). Modulation of NKT cells may not only determine the outcome of the host immune response, but also be applicable for the treatment of immunological diseases. Furthermore, it is important to determine the stimulus that selectively induces NKT cells to produce Th1 or Th2 cytokines. Here we demonstrate that IL-18 treatment of mice induces CD40L expression and IL-4 production by NKT cells in vivo. NKT cell-deficient (CD1d-deficient; CD1d−/−) mice or conventional CD4+ T cell-deficient (class II-deficient; class II−/−) mice fail to produce IgE in response to injection of IL-18, whereas class II−/− mice reconstituted with conventional CD4+ T cells produce a substantial amount of IgE. Culturing NKT cells with IL-2 and IL-18, without TCR engagement, causes a striking increase in CD40L expression and production of significant amounts of IL-4, IL-9, and IL-13 by these cells. Although conventional CD4+NK1.1− T cells respond poorly to IL-18 with relatively modest induction of IL-4 and CD40L, they are required for IgE production in response to IL-18 in vivo, as shown by the failure of class II−/− mice reconstituted with conventional CD4+ T cells from IL-4−/− mice to produce IgE when treated with IL-18. These results indicate that NKT cells are a critical subset of CD4+ T cells that respond to IL-18 by expression of Th2 cytokines and CD40L in vivo, and conventional CD4+ T cells act as Th cells together with NKT cells in IL-18-induced IgE responses. In Vitro Culture. Sorted CD4+NK1.1+ and CD4+NK1.1− T cells from C57BL/6 mice (10^5/0.2 ml/well) were cultured with medium alone or various combinations of 200 pM IL-2, 10 ng/ml IL-12, and 50 ng/ml IL-18 for 4 d in RPMI 1640 supplemented with 10% FBS, 50 μM 2-ME, 2 mM L-glutamine, 100 U/ml penicillin, and 100 μg/ml streptomycin. Supernatants were harvested and tested for IL-4, IL-9, IL-13, and IFN-γ contents by ELISA. The collected T cells were also examined for their expression of CD40L and capacity to induce B cells to produce IgE by incubation with highly purified B cells as previously described (10). In Vivo Treatment of Mice. Mice were injected on a daily basis with PBS buffer or IL-18 (2 μg/day) for 13 d. For adoptive transfer experiments, class II−/− mice were transferred with 10 × 10^6 purified CD4+NK1.1− T cells from either WT or IL-4−/− mice intravenously. From the day after cell transfer, mice were treated daily with 2 μg IL-18 as described above. They were bled 0, 7, 10, and 14 d later and serum IgE, IL-4, and IL-13 were measured by ELISA. In some experiments, CD4+NK1.1− T cells from WT mice were labeled with carboxyfluorescein diacetate succinimidyl ester (CFSE; Molecular Probes, Inc.; reference 25) and transferred to class II−/− mice. Cells containing transferred cells were stained with PE-anti-CD4, and transferred cells were identified by their CFSE fluorescence and CD4 expression.
The frequency of repopulated CD4+ T cells in class II−/− mice was calculated by (number of CFSE+CD4+ cells) / (number of transferred cells); a toy numerical sketch of this calculation appears at the end of this article. Flow Cytometry. To detect IL-4-producing cells from mice treated with IL-18, total spleen cells derived from C57BL/6 mice that had been injected with IL-18 for 10 d were first stained with CyChrome-anti-CD4 and PE-anti-NK1.1, followed by fixation with 4% (wt/vol) paraformaldehyde in PBS and permeabilization of the cell membrane with ice-cold PBS containing 1% FCS plus 0.1% saponin. Resultant cells were further stained with 0.5 μg FITC-anti-mouse IL-4 or isotype-matched control Ab and analyzed for their proportion of cytoplasmic IL-4+ cells by FACSCalibur® (Becton Dickinson). To detect CD40L+ cells in mice treated with IL-18, total spleen cells were stained with FITC-anti-CD4, PE-anti-NK1.1, and the combination of biotinylated anti-CD40L and tri-color streptavidin (Caltag), and then analyzed on a FACSCalibur®. CD1−/− Mice Are Defective in the Production of Th2 Cytokines and IgE in Response to IL-18 Administration. IL-18 treatment of BALB/c mice induces IgE in a CD4+ T cell-dependent manner (10). Consistent with our previous report (10), this IgE response is not associated with induction of Th2 cells (not depicted), suggesting that TCR engagement might not be required for IgE induction in IL-18-injected mice. To further substantiate this observation that the IgE response is independent of TCR engagement by endogenous Ags, we examined the capacity of BALB/c mice expressing a transgene-encoded TCR specific for OVA peptide (DO11.10 mice) to produce IgE in response to IL-18. These mice received daily injections of IL-18 (2 μg/day) for 13 d. Like normal BALB/c mice, they produced IgE in response to this treatment (Fig. 1 A), although this IL-18 treatment again did not induce a Th2 response (not depicted). Furthermore, this IL-18-induced IgE production was entirely resistant to cyclosporin A treatment (not depicted), further excluding the involvement of TCR-mediated T cell activation in this T cell-dependent IgE response. To identify the IL-18-responsive T cells that are relevant in the IL-18-induced IgE response, we compared the capacity of C57BL/6 and C57BL/6 background CD1−/− mice lacking CD4+NK1.1+ T cells (23) to produce IgE in response to IL-18. As shown in Fig. 1 B, IL-18 caused a striking increase in serum IgE levels in WT mice but not in CD1−/− mice. Furthermore, administering IL-18 to WT mice caused the production of a significant amount of IL-4 and IL-13 whereas CD1−/− mice produced no IL-4 and diminished amounts of IL-13 (Fig. 1, C and D). These results suggest that CD4+NK1.1+ T cells produce both IL-4 and IL-13 in response to IL-18. NKT Cells Produce IL-4 and Express CD40L in Response to In Vivo Treatment with IL-18. To determine the roles of NKT cells in the IL-18-induced IgE response, we directly tested whether CD4+NK1.1+ T cells produce IL-4 and increase CD40L expression when WT mice are injected with IL-18. As shown in Fig. 2 A, CD4+NK1.1+ T cells obtained from IL-18-injected mice, compared with PBS-injected mice, showed a significant increase (P < 0.01) in the proportion of T cells producing IL-4 ex vivo (7.1%). In contrast, few, if any, CD4+NK1.1− T cells contained cytoplasmic IL-4. Similarly, CD4+NK1.1+ T cells in IL-18-injected WT mice showed a significant increase (P < 0.01) in the proportion of CD40L-expressing T cells (Fig. 2 B).
Although IL-18 also caused an increase in the proportion of CD4+NK1.1− T cells that expressed CD40L, the frequency of positive cells was significantly less than that among the CD4+NK1.1+ T cells. We also tested the capacity of CD4+NK1.1+ T cells and CD4+NK1.1− T cells stimulated with IL-2 and IL-18 for 4 d to induce IgE in resting B cells in vitro. IL-2 plus IL-18-stimulated CD4+NK1.1+ T cells were able to induce B cells to secrete IgE whereas conventional CD4+NK1.1− T cells failed (Table I). Taken together, these results indicate that NKT cells constitutively expressing the IL-18Rα chain are a crucial subset of CD4+ T cells that respond to IL-18 by producing IL-4 and expressing CD40L, which in combination induce IgE production by B cells both in vivo and in vitro. NKT Cells but Not Previously Activated CD4+NK1.1− T Cells Produce IL-4 in Response to IL-18. In Fig. 3, we demonstrated that NKT cells but not CD4+NK1.1− T cells are highly responsive to IL-18. To determine if the induction of Th2 cytokines and CD40L after IL-18 is a unique property of NKT cells or a property of all previously activated or memory T cells, we compared IL-18 responsiveness of NKT cells to that of previously activated conventional CD4+ T cells. For this purpose, we used CD44high CD4+ T cells as a control population. Before comparison, we examined the expression of CD44 on total CD4+ T cells, CD4+NK1.1+ T cells, and CD4+NK1.1− T cells obtained from normal C57BL/6 mice. As shown in Fig. 4 A, almost all freshly prepared CD4+NK1.1+ T cells (R2) expressed high levels of CD44 whereas only 12.9 and 21.6% of CD4+NK1.1− T cells (R3) and total CD4+ T cells (R1) expressed CD44, respectively. Because a substantial proportion of CD44high CD4+ T cells express NK1.1, we compared IL-18 responsiveness of NKT cells to that of CD44high NK1.1− CD4+ T cells or CD44int NK1.1− CD4+ T cells. As NKT cells sometimes lose their NK1.1 expression during or after stimulation (26), we sorted CD44high NK1.1− CD4+ T cells from splenic CD4+ T cells, already depleted of other cell populations, particularly NKT cells (refer to Materials and Methods). We also sorted CD44int NK1.1− T cells and CD44high NK1.1+ CD4+ T cells (NKT cells) from total CD4+ T cells. We stimulated these three populations with IL-2 and/or IL-18. Consistent with the results shown in Fig. 3 A, only NKT cells produced both IL-4 and IL-13 in response to IL-2 plus IL-18 (Fig. 4 C). These results support our conclusion that NKT cells but not previously activated conventional T cells have the capacity to induce IgE from B cells. CD4+NK1.1− T Cells Are Required in IL-18-induced IgE Responses. Although conventional CD4+NK1.1− T cells responded poorly to IL-18 in vivo and in vitro with relatively modest induction of IL-4 and CD40L, these results leave open the question of whether conventional CD4+ T cells are also required for the observed effect of IL-18 on IgE production. Thus, we tested IL-18 responsiveness of class II−/− mice expressing almost the same number of CD4+NK1.1+ T cells as were expressed by WT mice, although CD4+ T cells constituted only 4 to 5% of their spleen cells (Fig. 5 A; reference 20). As shown in Fig. 5 B, class II−/− mice completely failed to demonstrate induction of IgE in response to IL-18 treatment. However, class II−/− mice reconstituted with conventional CD4+ T cells from WT mice mounted a small but significant IgE response to IL-18 (Fig.
5 C) whereas those mice reconstituted with conventional CD4+ T cells from IL-4−/− mice failed to do so in response to IL-18 (Fig. 5 C). Compared with WT mice, class II−/− mice reconstituted with CD4+ T cells showed weak IgE responses, suggesting that the host spleen was only partially repopulated. Indeed, only 0.94, 1.58, and 0.64% of transferred conventional T cells were repopulated at days 3, 7, and 10, respectively (Fig. 5 D). These results provide direct evidence that IL-4-producing conventional CD4+ T cells are needed for IgE production by B cells. Discussion Here we show that the administration of IL-18 results in increases in serum levels of IgE, IL-4, and IL-13 in normal mice but not in CD1−/− mice, which lack NKT cells (Fig. 1). In addition, NKT cells, which are strongly positive for the IL-18Rα chain, produce IL-4, IL-9, and IL-13 and express CD40L in response to IL-18 plus IL-2 in the absence of TCR engagement (Figs. 2 and 3). Furthermore, NKT cells that are stimulated with IL-18 and IL-2 for 4 d promote class switching to IgE in B cells (Table I). However, class II−/− mice, which have NKT cells but lack conventional T cells, fail to produce IgE in response to IL-18 treatment, suggesting the importance of conventional T cells in the IL-18-induced IgE response (Fig. 5). Indeed, these mice were able to produce IgE after reconstitution with conventional T cells from WT but not from IL-4−/− mice. Taken together, these results demonstrate that NKT cells are relevant cells for IL-18-induced IL-4 production and suggest that they recruit the action of conventional T cells in the induction of IgE. In atopic individuals, it is well known that allergens give rise to a polarization to Th2 responses and that enhanced secretion of IL-4 promotes IgE production (27). Allergen binding to IgE cross-links FcεR on basophils and mast cells, which causes them to produce IL-3, IL-4, IL-5, IL-9, IL-13, and a variety of chemical mediators, most notably histamine (28). The combination of these products induces allergic inflammation, highlighting the importance of IgE for activating basophils and mast cells. Thus, we could designate this IgE-dependent allergic disease as "acquired type allergic response." However, we have recently demonstrated an alternative, IgE-independent activation pathway of mast cells/basophils ("innate type allergic response"; reference 16). IL-18 in combination with IL-3 directly stimulates these cells to produce IL-4, IL-13, and histamine in an IgE-independent manner (9). IL-18 also stimulates CD4+ T cells to produce IL-4 and express CD40L, which can induce in vitro class switching to IgE in B cells in an Ag-independent manner (10). Our previous study also demonstrated that IL-18-treated mice or IL-18-producing caspase-1 Tg mice express high serum levels of IgE, which is CD4+ T cell-, IL-4-, and signal transducer and activator of transcription 6-dependent, but Th2 cell independent (10). Here we show that IL-18 stimulates IgE production in OVA-specific TCR Tg mice without OVA administration (Fig. 1 A). Furthermore, we found that this IgE response is Ag nonspecific and resistant to cyclosporin A treatment. However, IL-18 is not necessarily essential for induction of an IgE response because, like WT mice, IL-18-deficient mice generate a Th2 response and produce IgE when inoculated with the helminth Nippostrongylus brasiliensis (unpublished data). Thus, N. brasiliensis-induced Th2 responses, which require TCR engagement, can be generated in the absence of IL-18.
These results indicate that an IgE response to antigen via the TCR is possibly not affected by blocking IL-18 as a therapeutic strategy, but is susceptible to treatment with cyclosporin A. In other words, IL-18-induced IgE responses become apparent only in the absence of TCR engagement. Thus, IL-18 may induce allergic disorders, particularly intrinsic atopic diseases characterized by the absence of particular allergen-specific IgE. Our results suggest that it is important to determine whether an observed IgE response is dependent on TCR- and/or IL-18-mediated signaling. The novel lymphoid lineage, NKT cells, which express both NK receptors and a TCR encoded by the Vα14 and Jα281 gene segments, has been suggested to play an important role in the regulation of immune responses (17). Cytokines such as IL-12 can stimulate NKT cells to release IFN-γ and exhibit natural cytotoxicity (29,30). Recent studies demonstrated that NKT cells could transactivate NK cells via IFN-γ production upon stimulation with CD1d-bound glycolipid L (α-GalCer; reference 18). This IFN-γ production by NKT cells in response to α-GalCer is predominantly mediated by IL-12 produced by dendritic cells (31). Moreover, α-GalCer-stimulated NKT cells inhibit antigen-induced allergic responses by production of IFN-γ (32). [Figure 5. Transfer of conventional CD4+ T cells from WT mice partially reconstitutes IgE production by class II−/− mice injected with IL-18. (A) Frequency of CD4+NK1.1+ T cells in total spleen cells (top) and CD4-enriched spleen cells (bottom) from C57BL/6, CD1−/−, and class II−/− mice. Percentages of cells in selected quadrants are indicated. C57BL/6 and class II−/− mice received either nothing (B) or 10^7 CD4+NK1.1− T cells from WT or IL-4−/− mice (C; five mice per group) and were injected daily with IL-18 (2 μg/day) for 13 d. Mice were bled on days 0, 7, 10, and 14 and serum IgE was measured by ELISA. (D) Class II−/− mice (three mice per group) received an intravenous injection of 10^7 CFSE-labeled CD4+ T cells from C57BL/6 mice. Spleen cells were isolated from recipients at various times after transfer and transferred cells were identified by CFSE fluorescence as described in Materials and Methods.] However, NKT cells also exert regulatory functions, most likely through their capacity to promptly release large amounts of IL-4 upon stimulation with anti-CD3 or α-GalCer, thus promoting the acquisition of a Th2 phenotype (19)(20)(21)(22). Therefore, NKT cells can promote Th1 responses in certain situations and Th2 responses in others. The data presented here show that among CD4+ T cells, NKT cells have the unique capacity to respond to IL-18 by IL-4 production and CD40L up-regulation and to directly help B cells produce IgE in vitro and in vivo without antigenic stimulation. However, our studies with class II−/− mice reveal that conventional CD4+ T cells are also required for the observed effect of IL-18 on IgE production (Fig. 5). Class II−/− mice reconstituted with conventional CD4+ T cells from WT mice but not from IL-4−/− mice mounted a small but significant IgE response to IL-18 (Fig. 5 C), suggesting that conventional T cells participate in IgE induction by secreting a small amount of IL-4. However, this IgE production in WT CD4+ T cell-reconstituted mice failed to increase beyond day 10 after IL-18 administration. We suspect that the failure to completely reconstitute IL-18-induced IgE responses in these animals is due to incomplete repopulation of CD4+ T cells in class II−/− hosts (Fig.
5 D). We could increase the number of repopulated cells by increasing the number of transferred cells, although we could not increase the repopulation rate (unpublished data). Thus, in the absence of class II expression, CD4+ T cells may not survive efficiently and are unable to completely reconstitute IgE production. Nevertheless, our finding indicates that IL-4 production by both NKT and conventional CD4+ T cells is critical for IL-18-induced IgE production. However, as NKT cells and conventional T cells produce large and small amounts of IL-4, respectively, the precise role of IL-4 from conventional T cells in B cell activation remains uncertain. Furthermore, we do not know the appropriate ratio of NKT cells to conventional T cells in effective B cell IgE responses. We have shown that KIL-18Tg and KCASP1Tg mice, which overexpress the mature IL-18 or caspase-1 gene, respectively, in keratinocytes, spontaneously produce IgE in an endogenous IL-18-dependent manner (10,15,16). Interestingly, our recent examination of KCASP1Tg mice revealed that they exhibited a two- to threefold increase in splenic NKT cells, although short-term in vivo treatment with IL-18 did not affect the number of splenic NKT cells (unpublished data). These increases in the proportion of splenic NKT cells and serum IgE levels in KCASP1Tg mice are somewhat similar to those in the Vα14-Jα281 Tg mice established by Bendelac et al. (33). These Vα14-Jα281 Tg mice exhibit a selective increase in serum IgE (sixfold above controls on average) and IgG1 (twofold above controls). Here, we have provided strong evidence that CD1d-restricted NKT cells are critically important for the induction of IgE in response to IL-18. However, as NKT cells that express TCRs other than the Vα14-Jα281 TCR are also selected by CD1d (34), it remains possible that such nonclassical NKT cells also contribute to the IL-18-induced IgE response. Future studies with Jα281−/− mice should be able to resolve this issue. Recently, Leite-de-Moraes et al. (35) have reported that IL-18 enhances IL-4 production by L-activated NKT cells but not by conventional T cells. Thus, IL-18 enhances anti-CD3 Ab- or α-GalCer-induced IL-4 production by NKT cells. However, IL-18 also enhanced IFN-γ production by NKT cells stimulated with anti-CD3 or α-GalCer. These results suggest that IL-18 exerts its action on IL-4 production by NKT cells by amplifying the signaling pathway initiated by TCR/CD3 cross-linkage or the cognate L α-GalCer. In sharp contrast, our results reveal that IL-18 and IL-2 synergistically and directly exert IL-4- and CD40L-inducing activities on NKT cells even in the absence of TCR engagement. Furthermore, these activated NKT cells in collaboration with IL-4-producing conventional T cells can induce class switching to IgE in B cells, indicating a new function of NKT cells in innate immunity. These findings suggest that IL-18 and NKT cells might be potential targets in the effort to develop agents that regulate Th2-independent allergic disorders as an innate type allergic response. Furthermore, as IL-18 predominantly enhances production of Th2 cytokines both in vitro and in vivo, injection of IL-18 might be promising for the treatment of Th1 diseases such as insulin-dependent diabetes mellitus.
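The following is a toy sketch (our addition, with hypothetical counts) of the repopulation-frequency bookkeeping described in Materials and Methods, namely frequency = (number of CFSE+CD4+ cells) / (number of transferred cells):

def repopulation_frequency(cfse_pos_cd4_cells: float, transferred_cells: float) -> float:
    """Fraction of transferred CD4+ T cells recovered in the recipient spleen."""
    return cfse_pos_cd4_cells / transferred_cells

# With 10^7 cells transferred, a day-3 recovery of 9.4 x 10^4 CFSE+CD4+ events
# (a hypothetical count) would reproduce the 0.94% figure reported in Results:
print(f"{repopulation_frequency(9.4e4, 1e7):.2%}")  # -> 0.94%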
Oral health condition and the use of dental services among older adults living in a rural area in the south of Brazil Abstract Objectives: To evaluate the oral health, the use of dental services and associated factors among individuals aged 60 years or more living in a rural area. Method: This is a population-based, cross-sectional study carried out in the rural area of a medium-sized municipality in the extreme south of Brazil. The outcome was having used dental services in the 12 months before the date of the interview. The analysis included a description of the sample, the prevalence of the use of dental services for each category of independent variables, and multivariate analysis through Poisson regression. Results: In total, 1,030 older adults were interviewed, of which 49.9% were totally edentulous and 13.9% had dental visits in the last year. The probability of visits was higher among females, those with a partner, those with higher schooling, those of the highest economic levels and those who reported some oral health problem. On the other hand, older adults who reported being former smokers or current smokers had fewer visits. Conclusions: Health planning should be reorganized to prioritize population groups with more significant difficulties in the use of dental services. Keywords: Dental services, Oral health, Older adult, Rural population, Dental care introduction Although Brazilian population aging requires greater health care, existing services have not adequately met the needs of seniors. It is believed that dental visits are unnecessary for this age group due to the high rates of edentulism 1,2 . This context can be attributed to a care model that for a long time was focused on mutilating practices and resulted in poor oral health, and to dental services that do not consider this group a priority [2][3][4] . The use of dental services through early interventions and frequent, periodic follow-ups brings several benefits to oral health, enabling actions aimed at health promotion, prevention, diagnosis, treatment and rehabilitation 2,[5][6][7] .
Several factors lead individuals to seek medical or dental visits, including demographic, economic, educational and psychological characteristics, morbidity profiles, as well as patterns of popular culture and traditions that may be affected by current health policies and the characteristics of the health system [7][8][9] . Public dental services were reorganized and improved with the implementation of the National Oral Health Policy to change the reality of the oral health condition of Brazilians. The combination of guidelines and actions at the individual and collective levels, encompassing the insertion and expansion of oral health at all levels of care in the Unified Health System (SUS), facilitated access to dental procedures that were previously exclusive to the private sector [10][11][12] . However, comparing the last two national epidemiological surveys, namely the National Oral Health Surveys conducted in 2003 and 2010 (SBBrasil), even with the significant improvement of the DMFT (decayed, missing and filled teeth) rate in the young population, among the elderly from 65 to 74 years of age this rate remained practically unchanged, reaching 27.5 teeth in 2010, whereas in 2003 the average was 27.8 teeth, mostly corresponding to "extracted" or "missing" 13,14 . This dental loss of older adults is unfortunately still popularly seen as part of the aging process, not as a shortcoming of public policies, which are not geared toward the adult population so that it can reach senility with its natural teeth 15 . Beyond the fact that Brazilian oral health care has historically been restricted to a limited range of dental procedures provided in large urban centers, which present a higher concentration of public and private health services, Brazilian rural areas have worse indicators of income, basic sanitation, and schooling levels 12,16 . Such a setting may favor an increased burden of morbidities and health problems. The recognition of the needs of this population, through epidemiological studies, is essential for the planning of realistic interventions aimed at improving access and quality of health care, reorganization of services and redistribution of care resources 12,17 . With the intention of increasing information on the pattern of dental visits in rural areas, this study aimed to describe oral health, the use of dental services and associated factors among individuals aged 60 years or more residing in the rural area of a municipality in the extreme south of Brazil. Material and methods This study was carried out in the rural area of Rio Grande, Rio Grande do Sul, and was part of a more extensive study (a research consortium) covering several health aspects of certain segments of the rural population. In 2017, the population was estimated at 209,378 inhabitants, of which 4% lived in the rural area 18 . This is a cross-sectional, population-based study which included individuals 60 years of age and over who lived in the rural area. Individuals institutionalized in nursing homes or hospitals were excluded. Older adults with an intellectual impairment that prevented their understanding of the questions were not interviewed. In order to estimate the prevalence of dental services utilization in the last year, a prevalence of 20%, an error of 2 p.p. and a 95% confidence level were used in the calculation of sample size, with a 10% increase for losses and refusals, resulting in 679 individuals.
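A sketch of this prevalence sample-size calculation (our illustration, not the authors' code). The standard formula with z = 1.96, p = 0.20 and e = 0.02 gives roughly 1,537 individuals, so a figure near 679 presupposes a finite-population correction; the target population size N below is our assumption (the paper does not state the figure used), chosen to show how the reported 679 can arise.

import math

z, p, e = 1.96, 0.20, 0.02
n0 = z**2 * p * (1 - p) / e**2        # infinite-population estimate (~1536.6)
N = 1030                               # assumed rural population aged 60+ (hypothetical)
n_fpc = n0 / (1 + n0 / N)              # finite-population correction (~616.7)
n_final = math.ceil(n_fpc * 1.10)      # add 10% for losses and refusals
print(n0, n_fpc, n_final)              # -> ~1536.6, ~616.7, 679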
The following parameters were defined to calculate the associated factors: statistical power of 80% to find a relative risk (RR) of at least 2, 95% confidence level, prevalence in the non-exposed of at least 20%, and a non-exposed to exposed ratio of at least 4:1, including a 10% increase for losses and refusals and 20% for control of possible confounding factors (n = 722). The rural area of the municipality of Rio Grande consists of 24 census tracts with approximately 8,500 inhabitants distributed around 2,700 permanently inhabited households 19 . The sampling process was random and systematic, selecting 80% of households based on the draw of a number between "1" and "5". The number drawn corresponded to the address considered a skip. For example, if the number "3" was drawn, every household with the number "3" in a sequence of five households was not sampled, that is, it was skipped. This procedure ensured that four out of five households were sampled. Fieldwork was conducted from April to October 2017 by a team of interviewers and field supervisors. After the study subject was explained and the older adult agreed to participate, he or she signed the Informed Consent Form, and then the questionnaire was applied. Caregivers signed the form on behalf of seniors with disabilities. The study was approved by the Research Ethics Committee of the Federal University of Rio Grande, and confidentiality of individual information of the participants was assured. The collection tool used was an electronic questionnaire, previously tested in a pilot study performed in households excluded from sampling. Data were collected through tablets using the REDCap® program 20 . Data stored on tablets were sent daily to the FURG server (redcap.furg.br) via an internet connection. On a weekly basis, data quality control (data quality tool) was performed on the server to identify variables with no response or errors. After correction, data were sent back to the server. Also, a weekly database backup was performed on a Microsoft Excel® worksheet to ensure no loss of information. A short version of the tool was applied to 10% of the individuals interviewed, and data agreement was analyzed by the Kappa statistic. The dependent variable was the use of dental services in the 12 months before the interview (yes or no), from the question "From <MONTH> last year to this date, have you visited a dentist?" Information on whether the older adult had ever used the services was collected through the question "Have you ever visited a dentist in your life?" Independent variables included gender (male or female); age (in full years); self-reported skin color (white, black, yellow, indigenous or brown); marital status (without or with a partner); schooling (in full years); economic class according to the Brazilian Association of Research Companies (ABEP) 21 ; reason for the last visit (urgent visit, common treatment and revision); perception of the need to use dentures; report of an oral health problem in the 12 months before the interview (difficulty in eating, sleeping or participating in social activities); type of service used in the last visit (public health post, public service other than health post, health insurance plan and private service); health plan; tobacco use (never smoked, has smoked or currently smokes); alcohol consumption in the last month; depression; total number of teeth reported in the upper and lower arches (in quartiles); use of dentures; and self-perceived oral health (very poor/poor, fair and good/very good).
Individuals with black, brown, indigenous and yellow skin color were grouped into a category called "other" because each of these groups was small. In the variable "reason for the last visit", common treatment refers to follow-up care of two or more visits that did not fit into the other categories. The variable depression was collected by the PHQ-9 (Patient Health Questionnaire) tool, with a cut-off point ≥ 9 points. Statistical analyses were performed in the Stata® program, version 14.0 22 . A descriptive analysis of the independent variables was performed. The prevalence of the outcome and its respective confidence interval (95% CI), and the prevalence according to the associated factors, were calculated using the Chi-square test of heterogeneity (bivariate analysis) in this stage. Then, Poisson regression with robust adjustment of variance 23 and the backward stepwise method was used to estimate the crude and adjusted prevalence ratios and their respective confidence intervals (95% CI). The multivariate analysis followed a hierarchical theoretical model of determination by levels, as described in Figure 1. This model establishes a chain of determinants organized in levels of determination that influence the outcome distally or proximally 24 . The first level included the variables gender, age, self-reported skin color, schooling, economic class and marital status. Tobacco use, alcohol consumption and depression were inserted at the second level. The third level included the variables health plan, oral health problem and self-perceived oral health. The variables of each level were adjusted for those at the same level and at the higher level. Those with p-value < 0.20 were kept to control for possible confounding. Statistical significance was measured by Wald's test of heterogeneity and linear trend, with a p-value < 0.05 in a two-tailed test. results Of the 1,785 households sampled, 1,131 older adults were identified in the rural area of the municipality of Rio Grande in 2017. Of this total, 1,030 participated in the survey, which corresponds to a rate of 8.9% of losses and refusals. The prevalence of the use of dental services in the 12 months before the interview was 13.9% (95% CI, 11.8-16.2) and the prevalence of non-use of services was 86.1% (95% CI, 83.8-88.2). A 6.6% share of the seniors reported never having visited a dentist. Table 1 shows the description of the main characteristics of the sample. There was a predominance of men (55.2%), white individuals (91.6%), those belonging to economic class C (51.2%) and those using some denture (74.8%). Approximately half of the individuals were totally edentulous (49.9%) and 73% had up to eight teeth in both arches. The prevalence of the outcome according to the independent variables and the crude and adjusted prevalence ratios are described in Table 2. After adjustment, it was observed that women were 90% more likely to have visited the service in the last 12 months when compared to men. Older people with 8 or more years of schooling visited 155% more than those with no schooling. Individuals of economic classes A/B used services 289% more than those of the D/E classes; and those who reported having a partner had a 77% higher likelihood of seeing the dentist. In turn, former smokers or smokers consulted 40% less. Older adults who reported a dental health problem that interfered with eating, sleeping, or participating in social activities had a 121% higher likelihood of using dental services in the last year.
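A minimal sketch (our illustration, with hypothetical variable names and simulated data, not the study's code) of prevalence-ratio estimation via Poisson regression with robust (sandwich) variance, as described in the Methods:

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1030
df = pd.DataFrame({
    "used_dental": rng.binomial(1, 0.14, n),     # outcome: visit in last 12 months
    "female": rng.binomial(1, 0.45, n),          # hypothetical covariates
    "schooling_8plus": rng.binomial(1, 0.20, n),
})
# Poisson GLM on a binary outcome; robust HC0 errors give valid inference
fit = smf.glm("used_dental ~ female + schooling_8plus", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))      # exponentiated coefficients = prevalence ratios
print(np.exp(fit.conf_int()))  # 95% CIs for the prevalence ratios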
Discussion This study identified a high proportion of total edentulism (49.9%) in the population of the rural area of Rio Grande, and a prevalence of the use of dental services in the last year of 13.9%. Gender, marital status, educational level, economic level, tobacco use and oral health problems influenced the use of services. The use of health services is linked to access barriers, which can prevent or hinder people from using these services 6 . In a systematic review, Moreira et al. 3 pointed to barriers to the use of dental services due to low schooling and low income; our findings are in agreement with this literature 31 . Low adherence to health plans that cover dental visits (10.3%) may also be a factor leading older adults to seek a greater proportion of independent private services. Another factor that denotes access is the proportion of people who have never been to the dentist 32 . Approximately 6.6% of seniors in this study reported never having gone to the dentist. This value was lower when compared to the SBBrasil 2010 14 figure, which was 14.7%, but similar to that of the South region (5.1%), confirming the disparity between the Brazilian regions 14,33,34 . The prevalence of the use of dental services in Anglo-Saxon countries is almost five times higher than in our study, denoting differences in the health system, contextual values regarding the use of services, and health behavior 27-29 . Our findings, in agreement with a study carried out in Pelotas 1 , showed no association between the self-perceived need for dental treatment and the use of dental services, differing from previously reported results, which may be due to a high edentulism rate in both populations 26,30 . Still, almost 40% of older adults reported not requiring the use of dentures. One reason to explain this negative relationship is the high cost of prosthetic treatment 5 . Other studies further suggest that the absence of teeth is not perceived by the elderly as a significant oral health problem [35][36][37] . Moreover, unlike a dental surgeon's assessment of the quality of a prosthesis, many older adults consider their prostheses maladapted due to the difficulties of adaptation and retention of new prostheses 2 . While not a significant result of this study, the recent use of dental services has been inversely associated with older age, suggesting decreased regular use of dental services among seniors, which may generate reverse causality with high rates of edentulism 4,12 . In these and other studies 27,29 , senior women were 90% more likely to visit a dentist than men, which may be because men seek health services less due to cultural and occupational factors 14,26 . Older adults who had a partner also visited more, perhaps because they had someone to support the search for health care 27 . On the other hand, being a current smoker or former smoker reduced the likelihood of using dental services. Although this is not a variable commonly described in the literature, it is known that smokers take less care of their health and use fewer health services in general 29 . As expected, the elderly who reported having dental problems visited the dentist more in the last year and were more likely to do so than those who did not have oral problems 8,12,29 . Still, around 83.6% of the elderly mentioned common treatments as reasons for visits, compared with only 9.5% for urgent visits.
This proportion may also indicate better coverage of dental services, since historically most Brazilian municipalities developed oral health actions only for the school-age group, assigning to seniors only access to emergency services, which were often mutilating 32 . The increased access of older adults to dental services can be attributed to the gradual incorporation of oral health professionals into the Family Health Strategy Teams (ESF) and to the Ministry of Health program called Brasil Sorridente ("Smiling Brazil"), which, by establishing the National Oral Health Policy, facilitated greater attention and financing for oral health 2,10,12 . There was a greater effort to promote the integration of oral health into health services in general, combining knowledge and practices oriented toward health promotion, prevention and surveillance, and revising care practices to incorporate the family approach and life protection 38 . Specialized care was expanded and qualified, in particular with the establishment of Dental Specialties Centers and Regional Dental Prosthesis Laboratories. Possible methodological limitations may affect the observed results. Recall bias tends to affect self-reports; however, as the outcome was measured dichotomously, it may be easier to remember whether or not a dentist was visited. Another possible limitation refers to the answers about the elderly's self-perceived oral health condition given in the presence of a caregiver. This situation occurred in only 46 cases; in the remaining questionnaires, the answer was given by the respondent. Therefore, any under- or overestimation of this prevalence must have been slight, not affecting the results found. Still, the findings regarding the number of teeth may not be accurate because they were self-reported and not obtained by clinical examination. However, a Brazilian cohort study suggests that information obtained from self-reports on oral health shows good sensitivity when compared to clinical examination 39 . As a positive aspect, it can be pointed out that the study was carried out in a medium-sized Brazilian municipality. Its findings can be extrapolated to similar municipalities and may provide subsidies on the characteristics of oral health care in a rural area. In conclusion, the results of this study indicate poor oral health conditions among Brazilian older adults living in rural areas. The rates of utilization of dental services are low, especially among illiterate men of lower economic level, without a partner, former or current smokers, and those who did not report oral health problems. Health planning should be reorganized with the aim of prioritizing these population groups, improving the available health care model. Also, intersectoral public policy actions should seek better education and income indices to reduce the inequalities of these social determinants, which remain considerable barriers to access to dental services. collaborations FMM Schroeder participated in the conception, design, analysis and interpretation of data and writing of the article. RA Mendoza-Sassi participated in the analysis and interpretation of the data, its critical review and approval of the version to be published. RD Meucci participated in the design, writing of the article, its critical review and approval of the version to be published.
Resonance, linear syzygies, Chen groups, and the Bernstein-Gelfand-Gelfand correspondence If \A is a complex hyperplane arrangement, with complement X, we show that the Chen ranks of G=\pi_1(X) are equal to the graded Betti numbers of the linear strand in a minimal, free resolution of the cohomology ring A=H^*(X,\k), viewed as a module over the exterior algebra E on \A: \theta_k(G) = \dim_\k Tor^E_{k-1}(A,\k)_k, where \k is a field of characteristic 0, and k\ge 2. The Chen ranks conjecture asserts that, for k sufficiently large, \theta_k(G) = (k-1) \sum_{r\ge 1} h_r \binom{r+k-1}{k}, where h_r is the number of r-dimensional components of the projective resonance variety R^1(\A). Our earlier work on the resolution of A over E and the above equality yield a proof of the conjecture for graphic arrangements. Using results on the geometry of R^1(\A) and a localization argument, we establish the conjectured lower bound for the Chen ranks of an arbitrary arrangement \A. Finally, we show that there is a polynomial P(t) of degree equal to the dimension of R^1(\A), such that \theta_k(G) = P(k), for k sufficiently large. 1. Introduction 1.1. Orlik-Solomon algebra. Let A = {H_1, . . . , H_n} be an arrangement of complex hyperplanes in C^\ell. A fundamental question in the subject is to decide whether a given topological invariant of the complement, X(A) = C^\ell \setminus \bigcup_{H \in A} H, is determined by the intersection lattice, L(A) = { \bigcap_{H \in A'} H | A' \subseteq A }, and, if so, to find an explicit combinatorial formula for such an invariant. For example, in [26], Orlik and Solomon showed that the cohomology ring of the complement is entirely determined by L(A). More precisely, the Orlik-Solomon algebra A = H^*(X(A), Z) is the quotient of the exterior algebra E = \bigwedge(Z^n) on generators e_1, . . . , e_n in degree 1 by the ideal I generated by all elements of the form \partial e_{i_1 \dots i_r} := \sum_q (-1)^{q-1} e_{i_1} \cdots \widehat{e_{i_q}} \cdots e_{i_r}, for which codim H_{i_1} \cap \cdots \cap H_{i_r} < r. Notice that I is generated in degrees 2 and higher; in particular, A^0 = E^0 = Z and A^1 = E^1 = Z^n. For each element a \in A^1, the Orlik-Solomon algebra can be turned into a cochain complex (A, a). The i-th term of this complex is simply the degree-i graded piece of A, and the differential is given by multiplication by a: (1.1) (A, a) : 0 \to A^0 \xrightarrow{a} A^1 \xrightarrow{a} A^2 \xrightarrow{a} \cdots \xrightarrow{a} A^\ell \to 0. This complex arose in the work of Aomoto [1] on hypergeometric functions, and in the work of Esnault, Schechtman and Viehweg [15] on cohomology with coefficients in local systems. In [33], Yuzvinsky showed that, for generic a, the complex (1.1) is exact. 1.2. Resonance varieties. In [16], Falk initiated the study of the cohomology jumping loci for an arrangement complement. Fix a field \k; abusing notation, we will also denote by A = H^*(X(A), \k) the Orlik-Solomon algebra over \k. The resonance varieties of A are the loci of points a = \sum_{i=1}^n a_i e_i \leftrightarrow (a_1 : \cdots : a_n) in P(A^1) \cong P^{n-1} for which (A, a) fails to be exact. More precisely, for each k \ge 1, R^k(A) = { a \in P(A^1) | H^k(A, a) \ne 0 }. The resonance varieties of A lie in the hyperplane \sum_{i=1}^n a_i = 0, and depend only on the lattice-isomorphism type of L(A). In [16], Falk also introduced the notion of a neighborly partition: a partition \Pi of A is neighborly if, for any rank two flat Y \in L_2(A) and any block \pi of \Pi, the condition \mu(Y) \le |\pi \cap A_Y| implies A_Y \subseteq \pi, where A_Y = {H \in A : H \supseteq Y} and \mu denotes the Möbius function of the lattice. Now assume char \k = 0. Falk showed that all components of R^1(A) arise from neighborly partitions of subarrangements of A.
In particular, each flat Y \in L_2(A) gives rise to a "local" component of dimension \mu(Y) - 1; see Example 2.3 for how this works. Falk also conjectured that the components of R^1(A) are projective linear subspaces; this was proved in [8], and generalized to R^k(A) in [5]. Libgober and Yuzvinsky [22] showed that the components of R^1(A) are in fact disjoint, and positive-dimensional. These facts will be used in an essential way later in the paper. We note that the characteristic zero assumption is necessary; the aforementioned results depend on this. A thorough treatment of resonance varieties over arbitrary fields (and even commutative rings) can be found in a recent paper of Falk [17] (see also [24]). 1.3. Lower central series and Chen ranks. The fundamental group of the complement, G(A) = \pi_1(X(A)), is not necessarily determined by the intersection lattice. Even so, the ranks of the lower central series quotients, \phi_k(G) = rank gr_k(G), turn out to be combinatorially determined. In fact, due to the formality of X(A), the LCS ranks depend only on the \k-algebra A = H^*(X(A), \k), where char \k = 0. Explicit formulas for the LCS ranks of an arrangement group are available in some cases (most notably, when L(A) is supersolvable), but a general, all-encompassing formula remains elusive. See [18], [31], [34] for surveys of the problem, and [30], [29] for recent developments. More manageable topological invariants are the Chen ranks, \theta_k(G) = rank gr_k(G/G''). Introduced by K.T. Chen in his thesis [4], these are the LCS ranks of the maximal metabelian quotient of G. For example, if F_n is the free group of rank n, then \theta_1(F_n) = n, and \theta_k(F_n) = (k-1)\binom{k+n-2}{k}, for k \ge 2. The study of Chen ranks of arrangement groups was started in [6], [7]. The ranks \theta_k(G(A)) were determined in a number of cases, including the pure braid groups. They proved to be quite subtle and useful invariants, distinguishing in some instances between arrangement groups with the same LCS ranks. 1.4. Resonance and Chen ranks. For about a decade, it was an open question whether the Chen ranks of an arrangement are combinatorially determined, see e.g. [18, §2.3]. This question was recently settled in the affirmative in [28]. There remained the question of computing explicitly the Chen ranks of an arrangement group G(A) in terms of the intersection lattice L(A). Based on the work in [6], [7], a precise combinatorial formula for the Chen ranks was conjectured in [31]: Conjecture A (Resonance formula for Chen ranks). Let G = G(A) be an arrangement group, and let h_r be the number of components of R^1(A) of dimension r. Then, for k \gg 0: (1.3) \theta_k(G) = (k-1) \sum_{r \ge 1} h_r \binom{r+k-1}{k}. In other words, \theta_k(G) = \sum_{r \ge 1} h_r \theta_k(F_{r+1}). This formula can easily be verified for a pencil of n lines (with G = F_{n-1} \times Z), a near-pencil (with G = F_{n-2} \times Z^2), or a product of such arrangements. Much less obviously, the conjecture holds for the braid arrangements [6], and for "decomposable" arrangements [7], [29]. Our methods give a unified proof of Conjecture A for all these classes of arrangements, and in fact apply more generally. It was originally conjectured in [31] that equality (1.3) holds for all k \ge 4, but Example 6.3 below shows this is false. It turns out that the value for which \theta_k(G) is given by a fixed polynomial in k depends on the Castelnuovo-Mumford regularity of the linearized Alexander invariant. 1.5. Resonance and linear syzygies. In [30] we observed that there was a close connection between the resonance variety R^1(A) and the linear syzygies of A, where the Orlik-Solomon algebra A is viewed as a module over the exterior algebra E.
We define the linear strand in a minimal free resolution of A over E as the subcomplex of the form \cdots \to E(-k)^{\beta_{k-1,k}} \to \cdots \to E(-3)^{\beta_{2,3}} \to E(-2)^{\beta_{1,2}} \to E \to A \to 0. While \beta_{1,2} is determined by the Möbius function of L(A), a purely combinatorial formula for \beta_{i,i+1} is unknown. However, many examples suggested to us that: Conjecture B (Resonance formula for the linear strand). For k \gg 0, the graded Betti numbers of the linear strand are given by \beta_{k-1,k} = (k-1) \sum_{r \ge 1} h_r \binom{r+k-1}{k}. 1.6. Outline and Results. We now outline the structure of the paper, and state our main results. We start in §2 with a review of the linearized Alexander invariant B. First considered in [7], this graded module over the symmetric algebra S = Sym(A_1^*) is closely related to both the resonance variety R^1(A) [8], and to the Chen ranks \theta_k(G) [28]. In §3, we express the linearized Alexander invariant as an Ext module. Key to our approach is the Bernstein-Gelfand-Gelfand correspondence, which gives (for our purposes) a relationship between linear complexes over the exterior algebra E (on generators e_1, . . . , e_n) and graded modules over the symmetric algebra S (on generators x_1, . . . , x_n). In Theorem 3.2, we show that B can be expressed as an Ext module of F(A); here F(A) is the S-module arising from the Eisenbud-Popescu-Yuzvinsky resolution of [14]. The formulation of the BGG correspondence in [13] shows that the local cohomology modules of F(A) determine the free resolution of A over E. As a first application, we prove in Corollary 3.3 that Conjecture A and Conjecture B are equivalent: \theta_k(G) = \dim_\k Tor^E_{k-1}(A, \k)_k, for all k \ge 2. As another application, we prove in Theorem 3.4 that the Chen ranks conjecture holds for graphic arrangements. If \Gamma is a graph and A = A(\Gamma) the corresponding arrangement, we show that \theta_k(G) is given by an explicit formula ((1.7)) in terms of the numbers \kappa_s of cliques of size s+1 in \Gamma. On the other hand, the components of R^1(A) are all 2-dimensional, and there is one component for each triangle or complete quadrangle in \Gamma. Thus, formula (1.7) agrees with (1.3). In §4 we use the results of [8], [22], [28] and some commutative algebra to prove that the Chen ranks have polynomial growth, controlled by the resonance variety: there exists a polynomial P(t) \in Q[t], of degree equal to the dimension of R^1(A), such that \theta_k(G) = P(k), for all k \gg 0. In particular, this implies that the asymptotic growth of the Chen ranks is governed by \dim R^1(A). As an easy corollary of this, we compute the complexity of the Orlik-Solomon algebra A, viewed as a module over E, in the case when \ell = 3. In §5, we use a localization argument to give a (sharp) lower bound on the Chen ranks, thereby proving one direction of Conjecture A. For an arbitrary arrangement group G, we show (in Corollary 5.6) that \theta_k(G) \ge (k-1) \sum_{r \ge 1} h_r \binom{r+k-1}{k}, for k \gg 0. In §6, we give some examples illustrating the fact that the Chen ranks formula does not hold for small values of k. This phenomenon can be interpreted in terms of certain local cohomology modules, which reflect subtle combinatorial behavior in L(A). 2. Dramatis Personae. Let A be an arrangement of complex hyperplanes in C^\ell, with complement X(A). Since we are primarily interested in the fundamental group G(A) = \pi_1(X(A)), we may restrict our attention to affine line arrangements in C^2. Indeed, if A is an arbitrary arrangement, let A' be a generic two-dimensional section of A. Then, by the Lefschetz-type theorem of Hamm and Lê [21], the inclusion X(A') \hookrightarrow X(A) induces an isomorphism of fundamental groups; see [27]. In view of the above, we will assume throughout that A is a central arrangement. Most often, this will be an arrangement in C^3, with projectivization a line arrangement in P^2. 2.1. Chen groups and Alexander invariant. Let G' = [G, G] be the derived subgroup of G, and G'' = (G')' the second derived subgroup. The group G/G' is the maximal abelian quotient of G, whereas G/G'' is its maximal metabelian quotient.
The $k$-th Chen group of $G$ is, by definition, the $k$-th lower central series quotient of $G/G''$. Let $\theta_k(G) = \operatorname{rank} \operatorname{gr}_k(G/G'')$ be its rank. For example, if $G = F_n$, the free group of rank $n$, then, as shown by Chen [4]:
\[
\theta_k(F_n) = (k-1)\binom{n+k-2}{k}, \quad \text{for } k \ge 2 .
\]

The Alexander invariant, $B = B(A)$, is the abelian group $G'/G''$, endowed with the module structure over $\Lambda = \mathbb{Z}[G/G']$ induced by the conjugation action in the extension $0 \to G'/G'' \to G/G'' \to G/G' \to 0$. As shown in [7], the module $B$ admits a finite presentation of the form $\Lambda^p \xrightarrow{\ \Delta\ } \Lambda^q \to B \to 0$, for suitable $p$ and $q$. Let $\operatorname{gr} B = \bigoplus_{k \ge 0} I^k B / I^{k+1} B$ be the associated graded module, where $I$ is the augmentation ideal of $\Lambda$. A basic observation of W.S. Massey [23] asserts that $\operatorname{gr}_k(G/G'') = \operatorname{gr}_{k-2} B$, for $k \ge 2$. Consequently, $\theta_k(G) = \operatorname{rank} \operatorname{gr}_{k-2} B = \dim_\Bbbk (\operatorname{gr}_{k-2} B \otimes \Bbbk)$, for any field $\Bbbk$ of characteristic 0.

2.2. Linearized Alexander invariant. In general, the module $B$ is hard to compute, depending as it does on finding a (braid monodromy) presentation for the group $G$, and carrying out a laborious Fox calculus algorithm. The resulting presentation matrix $\Delta$ typically involves very complicated Laurent polynomials. On a conceptual level, it is not at all clear whether $B$ is combinatorially determined, since $G$ is not always determined by $L(A)$. For all these reasons, it is convenient to look at a simplified version of the Alexander invariant, which carries all the essential information we want to extract from this module.

The linearized Alexander invariant, $B = B(A)$, is the graded $S$-module presented by the "linearization" of the matrix $\Delta$:
\[
B = \operatorname{coker}\big(\Delta^{\operatorname{lin}} \colon S^p \to S^q\big) .
\]
The matrix $\Delta^{\operatorname{lin}}$ appears in the statement of Theorem 4.6 and in Remark 4.7 from [8]; see also [24] and [28] for more general contexts. The reason for the terminology is as follows: Viewing $\Lambda \otimes \mathbb{C}$ as the coordinate ring of the algebraic torus $(\mathbb{C}^*)^n$ and $S \otimes \mathbb{C}$ as the coordinate ring of $\mathbb{C}^n$, the entries of $\Delta^{\operatorname{lin}}$ are the derivatives at $1 \in (\mathbb{C}^*)^n$ of the corresponding entries of $\Delta$.

Let us describe the linearized Alexander matrix in a concrete fashion, following [8]. If $\delta_i$ denotes the $i$-th differential in the Koszul complex, then $\Delta^{\operatorname{lin}} = \begin{pmatrix} \delta_3 & \alpha_2 \end{pmatrix}$, where $\alpha_2$ is the adjoint of the canonical projection $\gamma_2 \colon E_2 \to A_2$.

2.3. The module $B$, resonance, and Chen ranks. When $\Bbbk$ is a field of characteristic zero (which will be our standing assumption from now on), the linearized Alexander invariant $B := B \otimes \Bbbk$ is related in a very concrete way to both the resonance variety $R_1(A)$, and to the fundamental group $G = G(A)$.

The first important fact about the module $B$ is the identification of the variety defined by its annihilator ideal: by [8], $V(\operatorname{ann}(B)) = R_1(A)$.

The second important fact about the module $B$ is the following linearized version of Massey's result.

Theorem 2.2 ([28]). The Chen ranks, $\theta_k = \theta_k(G)$, $k \ge 2$, are equal to the dimensions of the graded pieces of the linearized Alexander invariant: $\theta_k = \dim_\Bbbk B_{k-2}$.

In particular, the Chen ranks are combinatorially determined, and the Hilbert polynomial $P(B, t) \in \mathbb{Q}[t]$ gives the asymptotic Chen ranks: for $k \gg 0$, $\theta_k = P(B, k)$.
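As a quick illustration of Chen's formula and Theorem 2.2, the first few Chen ranks of small free groups come out to:
\[
\theta_k(F_2) = (k-1)\binom{k}{k} = k-1, \qquad
\theta_k(F_3) = (k-1)\binom{k+1}{k} = k^2 - 1, \qquad k \ge 2 ,
\]
so, for instance, $\theta_2(F_3) = 3$, $\theta_3(F_3) = 8$, and $\theta_4(F_3) = 15$.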
Example 2.3 (Braid arrangement). Let $A$ be the braid arrangement in $\mathbb{P}^2$, with defining polynomial $Q = xyz(x-y)(x-z)(y-z)$. From the matroid (see Figure 1), it is easy to see that the Orlik-Solomon algebra $A$ is the quotient of the exterior algebra $E$ on generators $e_0, \ldots, e_5$ by the ideal $I = \langle \partial e_{145}, \partial e_{235}, \partial e_{034}, \partial e_{012}, \partial e_{ijkl} \rangle$, where $ijkl$ runs over all four-tuples; it turns out that the elements $\partial e_{ijkl}$ are redundant. The (minimal) free resolution of $A$ as a module over $E$ begins as follows. The resonance variety $R_1(A) \subset \mathbb{P}^5$ has 4 local components, corresponding to the triple points, and 1 essential component (i.e., one that does not come from any proper sub-arrangement), corresponding to the neighborly partition $\Pi = (05|13|24)$. The linearized Alexander invariant $B = \operatorname{coker}(\Delta^{\operatorname{lin}} \colon S^{31} \to S^{15})$ is a module over the ring $S = \Bbbk[x_0, \ldots, x_5]$; the presentation matrix $\Delta^{\operatorname{lin}}$ can be reduced by row and column operations to a matrix $\vartheta$. A computation shows that $\theta_1 = 6$, $\theta_2 = 4$, and $\theta_k = 5(k-1)$, for $k \ge 3$. This agrees with the computations in [6], and with the values predicted by Conjecture A. A generalization will be given in Theorem 3.4.

3. The Bernstein-Gelfand-Gelfand correspondence and $H^*(A, a)$

3.1. The BGG correspondence. In this section, we connect our cast of characters. Let $V$ be a finite-dimensional vector space over a field $\Bbbk$. The key tool is the Bernstein-Gelfand-Gelfand correspondence, which is an isomorphism between the category of linear free complexes over the exterior algebra $E = \bigwedge(V)$ and the category of graded free modules over the symmetric algebra $S = \operatorname{Sym}(V^*)$. An introduction to the BGG correspondence may be found in Chapter 7 of [12]; additional sources are [13], [9], and [14]. Notice that if we take $\operatorname{Sym}(V^*)$ to be generated in degree one, then $\bigwedge(V)$ is generated in degree $-1$. The convention in arrangement theory (and the convention of this paper) is that the exterior algebra is generated in degree 1. To distinguish between gradings, we write $E'$ for an exterior algebra with generators in degree $-1$, and $E$ if the generators are in degree 1.

Let $L$ denote the functor from the category of graded $E'$-modules to the category of linear free complexes over $S$, defined as follows: for a graded $E'$-module $P$, $L(P)$ is the complex
\[
\cdots \longrightarrow P_i \otimes S \longrightarrow P_{i-1} \otimes S \longrightarrow \cdots, \qquad p \otimes 1 \mapsto \sum_{j=1}^{n} e_j p \otimes x_j .
\]
We will be applying the functor $L$ to the shifted Orlik-Solomon algebra $A' = (E'/I) \otimes E'(-\ell)$, where $I$ is the Orlik-Solomon ideal; tensoring with $E'(-\ell)$ shifts degrees so the unit of the algebra is in degree $\ell$, and the generators are in degree $\ell - 1$, with $\ell$ the dimension of the ambient space of the arrangement. Similarly, let $R$ denote the functor from the category of graded $S$-modules to the category of linear free complexes over $E'$: for a graded $S$-module $M$, $R(M)$ is the complex with terms $E' \otimes M_j$ and differential $1 \otimes m \mapsto \sum_{j=1}^{n} e_j \otimes x_j m$.

In Theorem 4.3 of [13], Eisenbud, Fløystad and Schreyer show that if $M$ is a graded $S$-module, with linear free resolution given by $L(P)$, then the dimension of $\operatorname{Tor}^{E'}_i(P, \Bbbk)$ can be computed from the dimensions of the graded pieces of the local cohomology modules of $M$. This result will be the main tool in our application of BGG; for completeness (and because of the grading differences) we prove a variant of their result in §3.3 below.

3.2. The Eisenbud-Popescu-Yuzvinsky resolution. Let $A = \{H_1, \ldots, H_n\}$ be a central arrangement in $\mathbb{C}^\ell$, with complement $X(A)$. Identify the cohomology ring $H^*(X(A), \Bbbk)$ with the Orlik-Solomon algebra $A = E/I$, where $E$ is the exterior algebra on $V = \Bbbk^n$. The BGG correspondence was used in this context by Eisenbud-Popescu-Yuzvinsky [14] to establish that $H^*(X(A), \Bbbk) \cong \operatorname{ann}(I)$ has a linear free resolution over $E$; results on the first differentials in this resolution appear in [10]. Fix a basis $e_1, \ldots, e_n$ for $V$, and let $x_1, \ldots, x_n$ be the dual basis for $V^*$. From [14], Corollary 3.2, we have an exact sequence of $S$-modules obtained by applying $L$ to $A'$. The key point here is that the complex obtained by applying BGG to $A$ is in fact exact, hence a free resolution of $F(A)$.
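To see the shape of this complex in the simplest case, take $n = 2$ and apply $L$ to $P = E'$ itself (a side computation with our own sign bookkeeping, not taken from [14]):
\[
L(E')\colon\ 0 \longrightarrow S \xrightarrow{\ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}\ } S^2 \xrightarrow{\ \begin{pmatrix} -x_2 & x_1 \end{pmatrix}\ } S \longrightarrow 0 ,
\]
which is the Koszul complex on $x_1, x_2$; the sign appears because $e_1 e_2 = -e_2 e_1$. This is the mechanism behind the remark in §3.4 below that the middle row of the relevant diagram is a truncated Koszul complex.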
Notice that the differential $d \colon p \otimes 1 \mapsto \sum_{i=1}^{n} e_i p \otimes x_i$ is precisely the differential in the Aomoto complex $(A, a)$, where the maps are given by multiplication by a generic linear form of the exterior algebra; the $x_i$ are simply the coefficients of this form. With the grading convention that $E$ is generated in degree one, $F(A)$ is generated in degree $-\ell$. With this choice, $\operatorname{Ext}^{\ell-1}_S(F(A), S)$ is generated in degree $\ell - 1$; we will see in a bit that this is consistent with the topological formula (2.7).

3.3. Relating $\operatorname{Tor}_E$ and $\operatorname{Ext}_S$. The following lemma is a restatement of Theorem 4.3 of [13]. For our purposes it is preferable to use $\operatorname{Ext}^i(\bullet, S)$ rather than the local cohomology modules $H^i_{\mathfrak m}(\bullet)$; by local duality ([11], Theorem A.4.2) these modules encode essentially the same information.

Lemma 3.1. With notation as above, the dimensions of the graded pieces of the modules $\operatorname{Ext}^i_S(F(A), S)$ compute the graded Betti numbers of $A$ over $E$.

Proof. As before, write $A'$ for the Orlik-Solomon algebra, with unit in degree $\ell$, and generators in degree $\ell - 1$. A straightforward translation relates the two gradings. By Proposition 7.9 and Exercise 7.7 of [12], $H^k(\operatorname{Hom}_S(L(P), S))_{i+k}$ is dual to $H^i(P \otimes C)_{-i-k}$. If we apply this with $P = A'$, then [14] tells us that $L(A')$ is a free resolution of $F(A')$, so that ([11], A3.11) $H^k(\operatorname{Hom}_S(L(A'), S)) = \operatorname{Ext}^k_S(F(A'), S)$. The reason for tensoring so that $A'$ has no components of negative degree is that it keeps the indexing simple; in particular, the graded pieces match up as claimed. Finally, since $F(A')$ is generated in degree zero, $F(A) = F(A') \otimes S(\ell)$ is generated in degree $-\ell$, as claimed. Putting together (3.4), (3.7), and (3.9) finishes the proof.

Recall that the regularity of an $E$-module $M$ is the smallest integer $n$ such that $\operatorname{Tor}^E_i(M, \Bbbk)_j = 0$ for all $i, j$ with $j \ge i + n + 1$. Lemma 2.3 of [30] shows that $A$ is $(\ell - 1)$-regular; this also follows from Lemma 3.1 and the EPY resolution of $F(A)$. For example, if we write $e_{ij}$ for $\dim_\Bbbk \operatorname{Ext}^i_S(F(A), S)_j$, then the minimal free resolution of $A$ over $E$ (for $\ell = 3$) can be written out explicitly in terms of the $e_{ij}$.

3.4. The module $B$ as an Ext module. We now connect the players. Recall that the linearized Alexander invariant (over $\Bbbk$) is the $S$-module defined via an exact sequence, which fits into a commutative diagram whose rows are EPY resolutions. Notice that the middle row is just a truncation of the Koszul complex over $S$. All columns but the last one (with solid arrows) are exact by definition of the Orlik-Solomon algebra $A = E/I$. The exactness of the two bottom rows, combined with the long exact sequence in homology, shows that the rightmost column (marked with dotted arrows) is exact. This short exact sequence yields a long exact sequence of Ext modules. The Koszul complex is exact and self-dual, so $\operatorname{Ext}^i_S(F(E), S) = 0$, for all $i < \ell$; thus $\operatorname{Ext}^{i+1}_S(F(A), S) \cong \operatorname{Ext}^i_S(F(I), S)$, for $i < \ell - 1$. Since all the vertical exact sequences (except the last one) consist of free $S$-modules, the dual sequences are also exact, and so $E_k/(\operatorname{im} \alpha_k) \cong (I_k \otimes S)^*$. If $\operatorname{char} \Bbbk = 0$ (which recall is our standing assumption), then we obtain:

Corollary 3.3. The Chen ranks of an arrangement group $G$ equal the graded Betti numbers of the linear strand of the cohomology ring $A$ over the exterior algebra $E$:
\[
(3.12) \qquad \theta_k(G) = b'_{k-1,k}, \quad \text{for all } k \ge 2 .
\]
In particular, this shows that Conjectures A and B are equivalent. We note that it is possible to assemble a different proof of Corollary 3.3 using recent results of Fröberg and Löfwall [19] on Koszul homology and homotopy Lie algebras.
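Written out, the translation between the two conjectures is just a shift of index:
\[
b'_{k,k+1} \;\overset{(3.12)}{=}\; \theta_{k+1}(G) \;\overset{\text{Conjecture A}}{=}\; \sum_{r \ge 1} h_r\, \theta_{k+1}(F_{r+1}), \qquad k \gg 0 ,
\]
which is precisely the statement of Conjecture B.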
3.5. Chen groups of graphic arrangements. We conclude this section with an application to a particularly nice class of arrangements. Given a simple graph $\Gamma$, with vertex set $\mathsf{V} = \{1, \ldots, n\}$ and edge set $\mathsf{E}$, the corresponding graphic arrangement, $A(\Gamma) = \{H_e\}_{e \in \mathsf{E}}$, consists of the hyperplanes in $\mathbb{C}^n$ of the form $H_e = \{z \in \mathbb{C}^n \mid z_i - z_j = 0\}$ for which $e = (i, j)$ belongs to $\mathsf{E}$. For example, if $\Gamma = K_n$ is the complete graph on $n$ vertices, then $A(K_n)$ is the braid arrangement in $\mathbb{C}^n$. The resonance components of $A(K_n)$ are in one-to-one correspondence with the 3-vertex and 4-vertex subsets of $\mathsf{V}$, and are all 1-dimensional; see [8, §6.8], and compare with Example 2.3. Now, since every graph on $n$ vertices is a subgraph of the complete graph $K_n$, every graphic arrangement is a sub-arrangement of the braid arrangement $A(K_n)$. It follows that $R_1(A(\Gamma))$ has precisely one 1-dimensional component for each triangle or complete quadrangle in $\Gamma$. Combining Corollary 3.3 with our previous results from [30], where the linear strand of a graphic arrangement is determined, we see that the Chen ranks conjecture holds for graphic arrangements. More explicitly, we have the following.

Theorem 3.4. Let $\Gamma$ be a graph, $A = A(\Gamma)$ the corresponding graphic arrangement, and $G = \pi_1(X(A))$ its group. Then
\[
(3.13) \qquad \theta_k(G) = (k-1)(\kappa_2 + \kappa_3), \quad \text{for } k \ge 3 ,
\]
where $\kappa_s$ denotes the number of complete subgraphs on $s + 1$ vertices.

Example 3.5. If $A = A(K_n)$ is the braid arrangement in $\mathbb{C}^n$, then $G$ is isomorphic to $P_n$, the pure braid group on $n$ strings. Applying formula (3.13), we find:
\[
\theta_k(P_n) = (k-1)\left[\binom{n}{3} + \binom{n}{4}\right], \quad \text{for } k \ge 3 .
\]
The Chen ranks of the pure braid groups were first computed in [6], using an arduous Gröbner basis computation. The above computation recovers the result of [6].
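For instance, for $\Gamma = K_5$ the counts are $\kappa_2 = \binom{5}{3} = 10$ triangles and $\kappa_3 = \binom{5}{4} = 5$ complete quadrangles, so (3.13) gives:
\[
\theta_k(P_5) = (k-1)\Big[\binom{5}{3} + \binom{5}{4}\Big] = 15(k-1) = (k-1)\binom{6}{4}, \qquad k \ge 3 ,
\]
in agreement with Conjecture A: fifteen 1-dimensional components, each contributing $\theta_k(F_2) = k - 1$.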
4. The rate of growth of the Chen groups

It follows from Theorem 2.2 that for $k \gg 0$ the Chen rank $\theta_k$ is given by a rational polynomial $P(k)$. In this section, we prove that the degree of $P(k)$ is equal to the dimension of $R_1(A)$. As an application, we compute the complexity of the Orlik-Solomon algebra $A$, viewed as a module over the exterior algebra $E$, in the case when $A$ is a central arrangement in $\mathbb{C}^3$.

4.1. Primary decomposition of $\operatorname{ann}(B)$. We know that the Fitting ideal of the presentation matrix of $B$ has the same radical as the annihilator of $B$, and by [8], the corresponding variety is precisely $R_1(A)$. This variety has a number of nice properties: as noted earlier, Libgober and Yuzvinsky [22] showed that it is the union of disjoint projective subspaces, each of dimension at least one. For each component $L \subset R_1(A)$, denote by $\mathfrak{p} = \mathfrak{p}_L$ the corresponding minimal prime ideal in $S$. Slightly abusing notation, we write $\operatorname{Ass}(R_1(A))$ for the set of such ideals. For each $\mathfrak{p} \in \operatorname{Ass}(R_1(A))$, denote by $L = L_{\mathfrak{p}}$ the corresponding linear subspace of $R_1(A)$. Then:
\[
\sqrt{\operatorname{ann}(B)} = \bigcap_{\mathfrak{p} \in \operatorname{Ass}(R_1(A))} \mathfrak{p} .
\]
For each $\mathfrak{p} \in \operatorname{Ass}(R_1(A))$, there is at least one homogeneous element $m \in B$ whose annihilator is $\mathfrak{p}$, i.e., $S \cdot m \cong S/\mathfrak{p}$, as $S$-modules. Let $M(\mathfrak{p})$ denote the submodule of $B$ annihilated by $\mathfrak{p}$:
\[
M(\mathfrak{p}) = \{\, b \in B \mid \mathfrak{p}\, b = 0 \,\} .
\]
It is easy to see that there is almost no interaction between these (finitely-generated) submodules.

Lemma 4.1. Let $N$ be a finitely-generated, graded $S$-module, and suppose $n_1, n_2 \in N$ satisfy $n_1 \notin \langle n_2 \rangle$ and $n_2 \notin \langle n_1 \rangle$. Suppose $\operatorname{ann}(n_i) = P_i$ are prime ideals with $P_i \ne \mathfrak{m}$ and $P_1 + P_2 = \mathfrak{m}$, where $\mathfrak{m}$ is the maximal homogeneous ideal. Then $\langle n_1 \rangle \cap \langle n_2 \rangle = 0$.

Proof. Suppose $x = a_1 n_1 = a_2 n_2$, with $a_i \in S$. Then obviously both $P_1$ and $P_2$ annihilate $x$, so the maximal ideal $\mathfrak{m}$ kills $x$. By degree considerations, we must have $a_i \in \mathfrak{m}$. Since $\mathfrak{m} \cdot a_1 n_1 = 0$, we find that $a_1^2 n_1 = 0$. But $P_1$ is prime, so $a_1^2 \in P_1$ implies $a_1 \in P_1$. Hence $x = 0$.

4.2. Local cohomology and regularity. Associated to any finitely generated, graded module $N$, there is an exact sequence (see Eisenbud [11]):
\[
0 \to H^0_{\mathfrak{m}}(N) \to N \to \bigoplus_{d \in \mathbb{Z}} H^0(\widetilde{N}(d)) \to H^1_{\mathfrak{m}}(N) \to 0 ,
\]
where $H^*_{\mathfrak{m}}(N)$ denotes the local cohomology of $N$, supported at the maximal ideal $\mathfrak{m}$, and $\widetilde{N}$ is the sheaf associated to $N$. In [33], Yuzvinsky showed that the module $F(A)$ is annihilated by an explicit element $f$. The modules $H^0_{\mathfrak{m}}(B)$ and $H^1_{\mathfrak{m}}(B)$ vanish in high degree (see Chapter 9 of [12]), so the exact sequence above yields a coarse approximation to the Chen ranks conjecture for $k > \operatorname{reg}(B)$. Now fix $\mathfrak{p} \in \operatorname{Ass}(R_1(A))$, and let $m \in B$ be a homogeneous element with $S \cdot m \cong S/\mathfrak{p}$. The Hilbert polynomial of $S/\mathfrak{p}$ has degree $r = \dim L_{\mathfrak{p}} - 1$, and so $\deg P(\langle m \rangle, k) = r$. Since $\langle m \rangle$ is a submodule of $M(\mathfrak{p})$, this implies $\deg P(M(\mathfrak{p}), k) \ge r$. On the other hand, $M(\mathfrak{p})$ is generated by finitely many such submodules, and so (4.7) also implies $\deg P(M(\mathfrak{p}), k) \le r$.

4.3. The complexity of the OS-algebra. Let $M$ be a finitely generated module over a local or graded ring $R$ with residue field $\Bbbk$, and let
\[
(4.9) \qquad \beta_i(M) = \dim_\Bbbk \operatorname{Tor}^R_i(M, \Bbbk)
\]
be the rank of the $i$-th free module in a minimal free resolution of $M$ over $R$. We then have the following measure of the growth of the numbers $\beta_i(M)$ (see [3]): the complexity $\operatorname{cx}_R(M)$ is the least integer $c$ such that $\beta_i(M) \le a \cdot i^{c-1}$ for some constant $a > 0$ and all $i \gg 0$.

Now let $A$ be a central arrangement of $n$ planes in $\mathbb{C}^3$. We wish to compute the complexity of the Orlik-Solomon algebra $A = E/I$, viewed as a module over $E$. We start with an observation about the resonance variety $R_1(A)$.

Lemma 4.5. If $\dim R_1(A) \ge n - 2$, then $A$ is a pencil.

Proof. If $\dim R_1(A) \ge n - 2$, then since $R_1(A)$ is the union of a subspace arrangement in $\mathbb{P}^{n-1}$, it must contain a hyperplane $H \cong \mathbb{P}^{n-2}$. But since the components of $R_1(A)$ are linear subspaces of dimension at least one, Bézout's theorem says any component $L$ of $R_1(A)$ different from $H$ must meet $H$, contradicting the fact that $R_1(A)$ consists of disjoint subspaces (notice that applying Bézout makes sense, because linear subspaces are disjoint over $\Bbbk$ iff they are disjoint over $\bar{\Bbbk}$). Thus $R_1(A) = H$, and so $R_1(A)$ contains a single component (necessarily local), of dimension $n - 2$. Hence $A$ is a pencil.

We noted in [30] that the regularity of $A$ as an $E$-module is two. Using the (minimal, free) resolution of $A$ over $E$, we can then write the Hilbert series of $A$ in terms of the Betti numbers $b_i = \beta_i(A)$ and the graded Betti numbers $b'_{ij}$ of the linear strand. Since $b_0 = 1$, $b_1 = n$, and $b_3 = b_2 - n + 1$ (see [27]), we obtain an identity (4.10) between the two sides. Expanding the left-hand side of (4.10), assume $m$ is odd, and expand the coefficient of $t^m$ as a polynomial in $m$. We find that it is equal to $\frac{-b_2 + 2n - 3}{(n-2)!}\, m^{n-2} + \cdots$. On the other hand, the coefficient of $t^m$ on the right-hand side of (4.10) is $b'_{m-1,m} - b'_{m-2,m}$. Now recall from (3.12) that $\theta_k = b'_{k-1,k}$, for all $k \ge 2$. Thus, we obtain the leading term of the Chen ranks in terms of $b_2$ and $n$. A similar argument applies when $m$ is even. Notice that $b_2 - 2n + 3 = 0$ iff $A$ is a near-pencil, i.e., $n - 1$ planes through a line, and one additional plane in general position.

Proposition 4.6. If $A$ is a central arrangement of $n$ planes in $\mathbb{C}^3$, and $A$ is not a near-pencil, then $\operatorname{cx}_E(A) = n - 1$. If $A$ is a near-pencil, then $\operatorname{cx}_E(A) = n - 2$.

Proof. If $A$ is a near-pencil, then the resolution of $A$ over $E$ is linear, and we know that $\beta_m(A) = \theta_m$ is a polynomial in $m$ of degree $n - 3$. If $A$ is a pencil, then the resolution is also linear, and $\beta_m(A) = \theta_m$ is a polynomial of degree $n - 2$. If $A$ is not a pencil or near-pencil, then by Lemma 4.5, $\dim R_1(A) < n - 2$. It follows from Theorem 4.3 that $\theta_m$ is a polynomial of degree at most $n - 3$, so $\beta_m(A)$ is a polynomial of degree $n - 2$, for $m \gg 0$.
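As a consistency check on Proposition 4.6 (granting, as in §1.4, that the central $\mathbb{Z}^2$ factor does not affect the Chen ranks in degrees $m \ge 2$):
\[
\theta_m\big(F_{n-2} \times \mathbb{Z}^2\big) = \theta_m(F_{n-2}) = (m-1)\binom{m+n-4}{m} ,
\]
a polynomial in $m$ of degree $n - 3$, matching $\operatorname{cx}_E(A) = n - 2$ for a near-pencil.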
5. A lower bound for the Chen ranks

In this section we prove a lower bound on the Chen ranks of an arrangement group. We continue to use the notation of the previous section. The idea of the proof is as follows. First, we choose an irreducible component of $R_1(A)$ (we do not distinguish between the local and non-local components), which corresponds to a sub-ideal of $I_2$, generated by decomposable elements. A short exact sequence relates this sub-ideal of decomposable elements to $I$. After appealing to the Bernstein-Gelfand-Gelfand correspondence and dualizing, we obtain a long exact sequence of Ext-modules. Finally, knowledge of the geometry of $R_1(A)$, combined with a localization argument, yields the bound.

5.1. The modules $B(\mathfrak{p})$. Let $\mathfrak{p} \in \operatorname{Ass}(R_1(A))$ be a minimal prime ideal, and $L_{\mathfrak{p}} = V(\mathfrak{p}) \subset E_1$ the corresponding linear subspace. As shown by Falk in [16], Corollary 3.11, the following holds: If $a, b \in L_{\mathfrak{p}}$, then $a \wedge b \in I_2$. Thus, if we define
\[
I(\mathfrak{p}) = \langle\, a \wedge b \mid a, b \in L_{\mathfrak{p}} \,\rangle
\]
to be the ideal of $E$ generated by wedge products of pairs of elements from $L_{\mathfrak{p}}$, then $I(\mathfrak{p})$ is a sub-ideal of $I$. Recall from Theorem 3.2 that $B \cong \operatorname{Ext}^{\ell-1}_S(F(A), S)$. By analogy, define the linearized Alexander invariant at $\mathfrak{p}$ to be the $S$-module $B(\mathfrak{p}) = \operatorname{Ext}^{\ell-1}_S(F(E/I(\mathfrak{p})), S)$. Suppose, for instance, that $\mathfrak{p} = \langle \sum_i x_i, x_{r+1}, \ldots, x_n \rangle$ and $I(\mathfrak{p}) = I$. After a linear change of variables in $E$, the ideal $I$ corresponds to $I_0 = \langle e_1, \ldots, e_n \rangle_2$, while the linear subspace $L_{\mathfrak{p}}$ corresponds to $L_0 = \operatorname{span}\{e_1, \ldots, e_n\}$. A standard calculation (compare [7]) shows that the module $B(\mathfrak{p})$ has Hilbert polynomial $P(B(\mathfrak{p}), k) = (k-1)\binom{n+k-2}{k}$.

Example 5.2. Let $A$ be the braid arrangement, discussed in Example 2.3. The generators of $I_2$ are $f_1 = \partial e_{145}$, $f_2 = \partial e_{235}$, $f_3 = \partial e_{034}$, and $f_4 = \partial e_{012}$. These give rise to various "local" data. For example, $\mathfrak{p}_1 = \langle x_0, x_2, x_3, \sum_i x_i \rangle$, $L_{\mathfrak{p}_1} = \operatorname{span}\{e_1 - e_4, e_1 - e_5\}$, $I(\mathfrak{p}_1) = \langle (e_1 - e_4) \wedge (e_1 - e_5) \rangle$, and $B(\mathfrak{p}_1) = \operatorname{coker}\big(\vartheta_{\mathfrak{p}_1} \colon S^4 \to S\big)$, where $\vartheta_{\mathfrak{p}_1} = \begin{pmatrix} x_0 & x_2 & x_3 & \sum_i x_i \end{pmatrix}$ (checked below). Now recall $A$ supports a non-trivial neighborly partition, $\Pi = (05|13|24)$. The corresponding non-local component $L_\Pi \subset R_1(A)$ is spanned by $\eta_1 = e_0 - e_1 - e_3 + e_5$ and $\eta_2 = e_0 - e_2 - e_4 + e_5$. The associated prime is $\mathfrak{p} = \langle x_0 - x_5, x_1 - x_3, x_2 - x_4, \sum_i x_i \rangle$, whereas $I(\mathfrak{p}) = \langle \eta_1 \wedge \eta_2 \rangle$. Finally, $B(\mathfrak{p}) = \operatorname{coker}\big(\vartheta_{\mathfrak{p}} \colon S^4 \to S\big)$, where $\vartheta_{\mathfrak{p}}$ is the analogous row of the four generators of $\mathfrak{p}$.

Lemma 5.3. If $\dim L_{\mathfrak{p}} = \dim L_{\mathfrak{q}}$, there is a linear automorphism of $E$ taking $I(\mathfrak{p})$ to $I(\mathfrak{q})$.

As noted in §1.4, the original form of Conjecture A assumed that taking $k \ge 4$ would suffice to ensure equality. The non-vanishing of $H^0_{\mathfrak{m}}(B)_4$ in Example 6.3 shows that a larger $k$ is needed, in general.
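As promised in Example 5.2, here is a check of the local computation there (using the reading of $\vartheta_{\mathfrak{p}_1}$ given above):
\[
B(\mathfrak{p}_1) \cong S/\langle x_0,\, x_2,\, x_3,\, \textstyle\sum_i x_i \rangle \cong \Bbbk[u, v],
\qquad\text{so}\qquad
\dim_\Bbbk B(\mathfrak{p}_1)_{k-2} = k - 1 = \theta_k(F_2) ,
\]
consistent, via Theorem 2.2, with each triple point of the braid arrangement contributing the Chen ranks of $F_2$, as in the computation $\theta_k = 5(k-1)$ of Example 2.3.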
Assessing the impact of the Royal Canadian Mounted Police (RCMP) protocol and Emotional Resilience Skills Training (ERST) among diverse public safety personnel

Background: Public safety personnel (PSP; e.g., border services personnel, correctional workers, firefighters, paramedics, police, public safety communicators) are frequently exposed to potentially psychologically traumatic events. Such events contribute to substantial and growing challenges from posttraumatic stress injuries (PTSIs), including but not limited to posttraumatic stress disorder.

Methods: The current protocol paper describes the PSP PTSI Study (i.e., design, measures, materials, hypotheses, planned analyses, expected implications, and limitations), which was originally designed to evaluate an evidence-informed, proactive system of mental health assessment and training among Royal Canadian Mounted Police for delivery among diverse PSP (i.e., firefighters, municipal police, paramedics, public safety communicators). Specifically, the PSP PTSI Study will: (1) adapt, implement, and assess the impact of a system for ongoing (i.e., annual, monthly, daily) evidence-based assessments; (2) evaluate associations between demographic variables and PTSI; (3) longitudinally assess individual differences associated with PTSI; and (4) assess the impact of providing diverse PSP with a tailored version of the Emotional Resilience Skills Training originally developed for the Royal Canadian Mounted Police in mitigating PTSIs, based on the Unified Protocol for the Transdiagnostic Treatment of Emotional Disorders. Participants are assessed pre- and post-training, and then at a follow-up 1 year after training. The assessments include clinical interviews, self-report surveys including brief daily and monthly assessments, and daily biometric data. The current protocol paper also describes participant recruitment and developments to date.

Discussion: The PSP PTSI Study is an opportunity to implement, test, and improve a set of evidence-based tools and training as part of an evidence-informed solution to protect PSP mental health. The current protocol paper provides details to inform and support translation of the PSP PTSI Study results, as well as informing and supporting replication efforts by other researchers.

Trial registration: Hypotheses Registration: aspredicted.org, #90136. Registered 7 March 2022 (prospectively registered). Trial registration: ClinicalTrials.gov, NCT05530642. Registered 1 September 2022 (retrospectively registered). The subsequent PSP PTSI Study results are expected to benefit the mental health of all participants and, ultimately, all PSP.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40359-022-00989-0.

Supplemental Table 1 (excerpt), Time 1 (T1): Week 1 includes the First Daily Assessment, Group 1 (i.e., "DA1") and the First Full Survey, Group 1 (i.e., "F1"); Week 2 includes the First Clinical Interview, Group 1 (i.e., "C1").

a Participants from separate sectors (i.e., fire, police, paramedics, public safety communicators) were brought into the study in a staggered approach. Fire was onboarded in November 2021; police were onboarded in January 2022; paramedics were onboarded in February 2022; and public safety communicators were onboarded at the end of April 2022. b Participants within each sector were onboarded and interviewed in two separate groups to accommodate shift work and clinician workload. c ERST training sessions were offered twice weekly and occasionally more than 1 week apart due to shift work concerns.
Trainers adhered to the above schedule as closely as possible, while accommodating shift work and the specific logistical needs of each sector community.

Supplemental Table 2: Supplemental Psychometrics and References for Self-Report Measures (Alphabetically)

Alcohol Use Disorders Identification Test (AUDIT; (1)). The AUDIT is a 10-item self-report questionnaire comprised of items assessing alcohol intake, alcohol dependence, and adverse consequences of alcohol use over the past 12 months. Items such as "How many drinks containing alcohol do you have on a typical day?" are reported on a 5-point Likert-type scale ranging from 0 (never) to 4 (daily or almost daily). A positive screen for AUD was determined based on total score (i.e., scores greater than 15 can be used to identify clinically significant hazardous drinking and dependence; (2)); see the scoring sketch below. Psychometric evaluation of the AUDIT has demonstrated good internal consistency (α = .85) and good test-retest reliability (r = .83 to .95) in the general population (3,4) and in police populations (α = .81; (5)).

Anxiety Sensitivity Index-3 (ASI-3; (6)). The ASI-3 is an 18-item self-report measure assessing the tendency to fear anxiety symptoms based on the belief that they may have harmful consequences. Items such as "When my chest feels tight, I get scared that I won't be able to breathe properly," are rated on a 0 (agree very little) to 4 (agree very much) Likert scale. Higher scores indicate greater sensitivity to anxiety symptoms. Factor analysis supports a three-factor structure (i.e., somatic, cognitive, and social fears), which corresponds to the three theorized dimensions of anxiety sensitivity (i.e., fear of somatic sensations, fear of cognitive dyscontrol, and fear of socially observable signs of anxiety, respectively). The ASI-3 has been found to have better factorial validity and internal consistency relative to the original Anxiety Sensitivity Index (7) and has displayed convergent, discriminant, and criterion validity (6). Psychometric evaluation of the ASI-3 has indicated good internal consistency (αs = .83, .86, and .79 for the somatic, cognitive, and social fears subscales, respectively, as well as α = .89 for the ASI-3 total score) and good test-retest reliability (rs = .45, .51, and .39 for somatic, cognitive, and social fears, respectively, as well as r = .31 for the ASI-3 total score; (8)).

Beliefs about Emotions Scale (BES; (9)). The BES is a 12-item self-report scale designed to measure respondents' beliefs about the unacceptability of experiencing and expressing emotions. Each item is measured on a seven-point Likert scale, ranging from 0 (totally disagree) to 6 (totally agree). Higher scores indicate greater beliefs that it is unacceptable for respondents to experience or express emotion. The scale has high internal consistency (α = .91). Measures of dysfunctional attitudes, self-sacrifice, and problematic perfectionism, as well as symptoms of depression, anxiety, and fatigue, have also been significantly correlated with scores on the BES.
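As flagged above, the AUDIT screen reduces to a fixed cutoff on a summed total. The following is a minimal sketch in Python; the function name and interface are illustrative only, not part of the study materials:

```python
def audit_positive_screen(items, cutoff=15):
    """AUDIT screen: ten items scored 0-4; a total above the cutoff flags
    clinically significant hazardous drinking and dependence."""
    if len(items) != 10 or any(not 0 <= i <= 4 for i in items):
        raise ValueError("AUDIT expects ten items scored 0-4")
    return sum(items) > cutoff

# Example: a respondent scoring 2 on every item totals 20 -> positive screen.
print(audit_positive_screen([2] * 10))  # True
```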
Brief Experiential Avoidance Questionnaire (BEAQ; (10)). The BEAQ is a 15-item self-report scale created to provide a shortened alternative to the Multidimensional Experiential Avoidance Questionnaire (MEAQ; (11)). The BEAQ assesses a broad range of experiential avoidance dimensions relevant to psychopathology and quality of life. Items such as "I'm quick to leave any situation that makes me feel uneasy," are rated on a 6-point Likert scale ranging from 1 (strongly disagree) to 6 (strongly agree). Higher scores indicate greater avoidance. Psychometric evaluation of the BEAQ has demonstrated good internal consistency (α = .84) in veterans seeking outpatient treatment and adequate consistency (α = .77) in veterans seeking residential treatment for PTSD (12).

Brief Fear of Negative Evaluation Scale - Straightforward Items (BFNE-S; (13)). The BFNE-S is comprised of the eight straightforwardly worded items from the original BFNE (14) and assesses fears of negative evaluation with 5-point Likert scales from 0 (not at all characteristic of me) to 4 (extremely characteristic of me). Higher scores indicate greater fear of negative evaluation. Use of only the straightforward items has been supported by recent comparative analyses (15). The BFNE-S has demonstrated excellent internal consistency, factorial validity, and construct validity in undergraduate (αs = .94 to .96; (16,17)) and clinical (αs = .90 to .96; (13)) samples.

Brief Resilience Scale (BRS; (18)). The BRS is a 6-item self-report measure designed to assess resilience, or a person's ability to bounce back or recover from stress. Items such as "I tend to bounce back quickly after hard times," are rated on a scale from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicate a greater sense of resiliency. The BRS has demonstrated good test-retest reliability and internal consistency (αs = .80 to .91) across clinical and non-clinical samples, and has been independently determined to be among the most psychometrically sound of available resilience measures (19).

Canadian Armed Forces Recruit - Mental Health Service Use Questionnaire (CAF-R-MHSUQ; (20)). The CAF-R-MHSUQ is a 4-item self-report questionnaire designed to measure a participant's willingness to seek mental health services. Items such as "If I developed mental health problems, I would expect to seek mental health treatment from a professional," are rated on a 7-point Likert scale ranging from 1 (strongly agree) to 7 (strongly disagree). Higher scores indicate greater willingness to seek mental health services.

Cannabis Use Disorders Identification Test - Revised (CUDIT-R; (21,22)). The CUDIT-R is an 8-item self-report questionnaire designed to measure cannabis use and misuse. Items such as "How often do you use cannabis?" are rated on a 5-point Likert scale ranging from 0 to 4, with anchors changing based on the item. A positive screen for CUD is determined based on total score (i.e., a cut-off score of 13 or higher indicating clinically significant hazardous use and dependence). The CUDIT-R is well supported, with high sensitivity (i.e., 91%) and specificity (i.e., 90%) for identifying problematic use (21,22). Psychometric evaluation of the CUDIT-R has demonstrated good internal consistency among college students (α = .83; (23)).

Childhood Stressors Screen (CSS). The CSS is a 22-item self-report questionnaire designed to measure aversive childhood experiences. Items such as "When you were growing up, how often did your family run out of money or find it hard to pay for basic necessities like food or clothing?" are rated on a Likert-style scale ranging from 0 (never) to 4 (very often). Most CSS items were derived from the Canadian Community Health Survey: Mental Health, 2012 (24). Items three, four, and five were derived from the Childhood Experiences of Violence Questionnaire (25).
A shortened version was introduced at F1 for fire and for all police, paramedic, and public safety communicator milestone assessments, based on a subset of items suggested by Afifi (personal communication, November 15, 2021). Items 1, 2, 5, and 14 to 22 were retained from the original version.

Chronic Pain Questionnaire (CPQ; (26)). The CPQ is a self-report questionnaire designed to measure the location, intensity, and duration of physical chronic pain. Items such as "Do you experience chronic pain?" are answered with face-valid options (e.g., yes, no). Additionally, items such as "What caused the chronic pain that most interfered with your life?" are answered with face-valid categorical options (e.g., injury related to active duty, injury related to work other than active duty). The CPQ is a new measure and psychometrics will be available as soon as possible.

Depression Anxiety Stress Scale - 21 (DASS-21; (27,28)). The DASS-21 is a 21-item self-report questionnaire designed to measure the negative emotional states of depression, anxiety, and stress. Given that symptoms of MDD and GAD were measured with other questionnaires (i.e., the PHQ-9 and GAD-7), only the Stress subscale was used. The Stress subscale is sensitive to levels of chronic non-specific arousal and assesses difficulty relaxing, nervous arousal, and being easily upset/agitated, irritable/over-reactive, and impatient. Respondents are asked to use 4-point severity/frequency scales ranging from 0 (did not apply to me) to 3 (applied to me very much) to rate the extent to which they have experienced each state over the past week. Higher scores indicate greater subjective experiences of stress. Psychometric assessment of the Stress subscale has indicated good internal consistency (α = .78; (29)) among medical students and excellent internal consistency (α = .91; (30)) among a community sample.

Dimensions of Anger Reactions - 5 (DAR-5; (31)). The DAR-5 is a 5-item questionnaire assessing participants' self-reported levels of anger. Items such as "When I got angry at someone, I wanted to hit them," are rated on a 1 (none or almost none of the time) to 5 (all or almost all of the time) Likert scale. Higher scores indicate greater self-reported levels of anger. The DAR-5 was adapted from the original Dimensions of Anger Reactions measure (32), which displayed strong psychometric properties but was lengthy and overcomplicated. The resulting DAR-5 has displayed concurrent validity with the commonly used State Trait Anger Expression Inventory 2 (33) and is predictive of changes in PTSD; further, it displays strong internal reliability (α = .90), a robust one-factor structure, and is recommended for screening anger in long questionnaire batteries (34).

Discrimination Questions. Institutional discrimination and harassment were evaluated using 4 items: 1) "Have you experienced sexual harassment in relation to your work (for example, in a work setting, from a colleague but outside of work, etc.)?"; 2) "Have you experienced sexual assault in relation to your work (for example, in a work setting, from a colleague but outside of work, etc.)?"; 3) "Have you experienced harassment (non-sexual) in relation to your work (for example, in a work setting, from a colleague but outside of work, etc.)?"; and 4) "Have you experienced discrimination in relation to your work (for example, in a work setting, from a colleague but outside of work, etc.)?". The items included face-valid response options (i.e., yes, no, prefer not to answer).
Participants were also asked to identify the grounds on which they were discriminated against (e.g., race, sex, gender identity). The items were adapted from previous work done by the International Women's Media Foundation (https://www.iwmf.org/wp-content/uploads/2018/06/Violence-and-Harassment-against-Women-in-the-News-Media.pdf).

Drinking Motives Questionnaire - Short Form (DMQ-SF; (35)). The DMQ-SF is a 4-item self-report questionnaire that assesses motives for drinking behaviours. Items such as "How often do you use alcohol to manage physical pain?" are rated on a 5-point Likert scale ranging from 0 (never) to 4 (daily or almost daily). The DMQ-SF was designed for large-scale screenings as part of demographic history and is entirely dependent on participant self-report.

Dyadic Adjustment Scale - 4 (DAS-4; (36)). The DAS-4 is a 4-item self-report questionnaire that assesses marital satisfaction. Items such as "In general, how often do you think that things between you and your partner are going well?" are rated on a 1 (never) to 6 (all the time) Likert scale. Higher scores indicate greater subjective marital satisfaction. The DAS-4 has displayed very good psychometric properties (36).

Emotion Regulation Questionnaire (ERQ; (37)). The ERQ is a 10-item questionnaire designed to assess how an individual regulates positive and negative emotions. The ERQ has two subscales: 1) Cognitive Reappraisal, which evaluates a participant's ability to change their thinking (e.g., "When I want to feel more positive emotion (such as joy or amusement), I change what I'm thinking about."); and 2) Emotion Suppression, which evaluates the extent to which participants repress their emotions (e.g., "When I'm feeling negative emotions I make sure not to express them."). Items are rated on a 1 (strongly disagree) to 7 (strongly agree) Likert scale. The ERQ has displayed good internal reliability (αs range from .73 to .79) and test-retest reliability across three months (37), as well as a robust factor structure (37).

Expression of Moral Injury Scale - Military - Short Form (EMIS-M-SF; (38)). The EMIS-M-SF is a 4-item self-report measure designed to swiftly assess for warning signs of a moral injury in military populations. Items (e.g., "I feel guilt about things that happened during my military service that cannot be excused.") are rated on a 1 (strongly disagree) to 5 (strongly agree) Likert scale. Higher scores indicate greater experience of moral injury. The EMIS-M-SF has been psychometrically validated in a military sample, with good internal consistency (α = .84).

Generalized Anxiety Disorder Scale - 7 (GAD-7; (39)). The GAD-7 is a 7-item self-report measure assessing for symptoms of anxiety and worry. Participants are asked to rate their experiences of symptoms over the last two weeks (e.g., "Feeling nervous, anxious, or on edge") on a 0 (not at all) to 3 (nearly every day) Likert scale. A positive screen for generalized anxiety disorder (GAD) was determined based on total score (i.e., scores greater than 9 can be used to identify persons reporting clinically significant symptoms; (40)); see the sketch below. The GAD-7 has good reliability, and construct, criterion, procedural, and factorial validity (39), as well as good internal consistency (α = .89) and inter-item correlations (.45 to .65) in a community sample (41).
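The GAD-7 screening rule noted above works the same way as the AUDIT cutoff. A minimal sketch follows; as before, the function name and interface are ours, not part of the study materials:

```python
def gad7_positive_screen(items, cutoff=9):
    """GAD-7 screen: seven items scored 0-3; totals greater than the cutoff
    identify respondents reporting clinically significant anxiety symptoms."""
    if len(items) != 7 or any(not 0 <= i <= 3 for i in items):
        raise ValueError("GAD-7 expects seven items scored 0-3")
    return sum(items) > cutoff

# Example: five items rated 2 and two items rated 0 total 10 -> positive screen.
print(gad7_positive_screen([2, 2, 2, 2, 2, 0, 0]))  # True
```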
HEXACO Personality Inventory - 100-item scale (HEXACO-100; (42)). The HEXACO-100 is a self-report measure which corresponds to the six personality dimensions identified in the HEXACO model (43). Items such as "People sometimes tell me that I am too critical of others," are ranked on a 1 (strongly disagree) to 5 (strongly agree) Likert scale. The HEXACO model of personality is comprised of six personality dimensions: honesty/humility, emotionality, extraversion, agreeableness, conscientiousness, and openness (44). The HEXACO-100 is psychometrically sound, with good internal consistency in college and community samples (αs range from .81 to .85), inter-factor correlations ranging from |.02| to |.42|, and convergent validity with other measures of personality (42). Based on participant and clinician feedback, a shortened form of the HEXACO-100 (HEXACO-60; (45)) was introduced after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

HEXACO Personality Inventory - 60-item scale (HEXACO-60; (45)). The HEXACO-60 is a short version of the 100-item HEXACO personality inventory, with 10 items from each of the 6 personality dimensions in the HEXACO model (43). Items (e.g., "I often push myself very hard when trying to achieve a goal.") are rated on a 1 (strongly disagree) to 5 (strongly agree) Likert scale. The HEXACO-60 has been psychometrically validated with acceptable internal consistency in both an undergraduate (αs range from .77 to .80) and a community sample (αs range from .73 to .80). In addition to the standard HEXACO-60 items, items 97 to 100 of the HEXACO-100 were retained, as these constitute an interstitial facet absent from the standard 60-item version. The HEXACO-60 plus these four interstitial facet items replaced the HEXACO-100 after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

Illness/Injury Sensitivity Index - Revised (ISI-R). The ISI-R is a 9-item revision of the original Illness/Injury Sensitivity Index (46) designed to measure fears of illness and injury (e.g., "I worry about my physical health."). Items are rated on a 5-point Likert scale ranging from 0 (agree very little) to 4 (agree very much). Two factors, Fear of Illness (e.g., "I worry about becoming physically ill.") and Fear of Injury (e.g., "I am frightened of being injured."), are represented within the ISI-R (47); however, the total summed score is used in most analyses, with higher scores indicating greater fear. The ISI-R has excellent internal consistency (α = .86), convergent validity with other measures related to injury and illness (r > .65), and correlates highly with the original index, r = .96 (48).

Insomnia Severity Index (ISI; (49)). The ISI is a 7-item self-report questionnaire that assesses difficulties with falling or staying asleep. Items such as "How satisfied/dissatisfied are you with your current sleep pattern?" are rated on a 5-point Likert scale ranging from 0 (very satisfied) to 4 (very dissatisfied). Higher scores indicate greater sleep difficulties. The ISI has solid psychometric support including adequate internal consistency (i.e., α = .74 to .78), sensitivity (94%), specificity (94%), and convergent validity (50).

Institutional Betrayal and Support Questionnaire (IBSQ; (51-53)). The IBSQ is a 29-item self-report questionnaire that assesses for feelings of support versus lack of support by an institution. The questionnaire was modified to measure perceptions of support received by the PSP following exposures to diverse potentially psychologically traumatic events (PPTEs).
PSP are asked whether their organization played a role following exposure by responding in any of several different ways, such as "Meeting your needs for support and accommodations," and "Responding inadequately to the experience/s, if reported." Response options include yes, no, and N/A. Preliminary research with an earlier version of this measure has revealed a one-factor solution. The questionnaire has been modified to ask specifically about experiences with the institution participants are employed by at the time of research (51,52). Based on participant and clinician feedback, a short form of the IBSQ (IBQ2; (54)) was introduced after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

Institutional Betrayal Questionnaire - 2 (IBQ2; (55)). The IBQ2 is a 12-item self-report questionnaire that measures feelings of betrayal towards an institution after experiencing a potentially psychologically traumatic event (e.g., sexual assault, motor vehicle accident, sudden death). Respondents are asked to consider larger institutions (e.g., church, military unit, workplace) to which they belong or have belonged and consider whether or not the institution played a role in a previously identified event. Items include various actions an organization could take, such as "Not taking proactive steps to prevent this type of experience," and "Denying your experience in some way." Response options include yes, no, and N/A. The IBQ2 has been validated in a sample of sexual assault survivors and showed good convergent and discriminant validity (55). The IBQ2 replaced the IBSQ after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

Intolerance of Uncertainty Scale - Short Form (IUS-12; (56)). The IUS-12 is a 12-item questionnaire that measures responses to uncertainty, ambiguous situations, and the future. Items are rated on a 5-point Likert scale ranging from 1 (not at all characteristic of me) to 5 (entirely characteristic of me). Higher scores indicate greater intolerance of uncertainty. The IUS-12 has a continuous latent structure and has two factors (56,57), prospective IU (7 items; e.g., "I can't stand being taken by surprise.") and inhibitory IU (5 items; e.g., "When it's time to act, uncertainty paralyses me."). The IUS-12 has sound psychometric properties (56,58) and strong internal consistency for the total and subscale scores (αs range from .85 to .91; (56)).

Life Event Checklist for DSM-5 (LEC-5; (59,60)). The LEC-5 is a commonly used tool for assessing self-reported exposures to diverse PPTE. The LEC-5 presents respondents with 17 different PPTE types, each with six response options including happened to me, witnessed it, learned about it, part of my job, not sure, or doesn't apply (60). The LEC for DSM-IV has demonstrated good convergent and discriminant validity, test-retest reliability over a 7-day period, and concurrent validity with other measures of PPTE exposures (59,61). The only difference between the LEC-5 and the LEC for DSM-IV is that the LEC-5 allows respondents to report PPTE exposures that occurred "as part of my job," which corresponds with contemporary PTSD diagnostic criteria (62). In the current study, participants were asked specifically about PPTE exposures that occurred "as part of my public safety job," to avoid confounds with other employment.
For the First Full Survey, participants were asked to report on PPTE during their "entire life (growing up, as well as adulthood)"; in contrast, for subsequent surveys, participants were asked to report on PPTE "since you last completed this questionnaire."

McGill Pain Questionnaire - Short Form (MPQ-SF; (63)). The MPQ-SF is a commonly used tool for the measurement of pain experience. The MPQ-SF includes a pain rating index (PRI) of 15 of the most commonly used adjectives that describe sensory and affective aspects of pain (64). The MPQ-SF also includes a visual analogue scale (VAS) to help assess pain intensity. The checklist is rated on a 4-point intensity scale ranging from 0 (none) to 3 (severe). The MPQ-SF has been found to correlate highly with the original MPQ (64), and demonstrates good factorial validity for both sensory and affective components (.78 and .76, respectively; (64)). Based on participant and clinician feedback, the PRI and present pain intensity (PPI) items were removed after F1 for fire and from all police, paramedic, and public safety communicator milestone assessments, retaining only the VAS.

Medications and Drug Use Scale (MDUS; Carleton, Duranceau, & LeBouthillier, 2016; unpublished scale). The MDUS is a 5-item self-report questionnaire that assesses self-reported use of medications and drugs not otherwise assessed by demographics. Items such as "Do you regularly use any prescription or over-the-counter medications?" are responded to with face-valid options (e.g., yes, no). The MDUS was designed for large-scale screenings as part of demographic history and is entirely dependent on participant self-report.

Mental Health Continuum - Short Form (MHC-SF; (65-67)). The MHC-SF is a 14-item scale designed to measure emotional, psychological, and social well-being. The MHC-SF was derived as a shortened version of the Mental Health Continuum Long Form and has good internal reliability (α = .74; (65)). The MHC-SF measures the degree of 1) emotional well-being, 2) psychological well-being, and 3) social well-being (65). Items are rated on a 6-point Likert scale ranging from 0 (never) to 5 (every day). Higher scores indicate greater perceived well-being in each of the three areas. The MHC-SF has also demonstrated good internal reliability in French (αs for subscales range from .78 to .90; (67)) and in Dutch (α = .89 and αs for subscales range from .74 to .83; (66)).

Mental Health Knowledge Schedule (MAKS; (68)). The MAKS is a 15-item self-report questionnaire designed to measure mental health literacy and stigma. Items such as "Most people with mental health problems want to have paid employment," are rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicate greater mental health literacy. The MAKS is a relatively new measure, but the available psychometric data support the internal consistency (α = .65) and test-retest reliability (.57 to .87) of the measure (68); in addition, the measure appears sensitive to changes based on interventions (69). Based on participant and clinician feedback, the MAKS was identified as redundant with other measures and was removed after F1 for fire and from all police, paramedic, and public safety communicator milestone assessments.

Opening Minds Survey of Workplace Attitudes (OMSWA; (70)). The OMSWA is a 22-item self-report questionnaire designed to measure mental health stigma and workplace attitudes.
Items such as "I would be upset if a co-worker with a mental illness always sat next to me at work," are rated on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Higher scores indicate greater stigmatizing attitudes towards mental health conditions in the workplace. The OMSWA is a relatively new measure, but the available psychometric data support the internal consistency of the measure (70). The measure is currently in use as a standard metric by the Mental Health Commission of Canada. Based on participant and clinician feedback, a short form of the OMSWA was introduced after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

Opening Minds Survey of Workplace Attitudes - Short Form (OMSWA-SF; (71)). The OMSWA-SF is a 9-item self-report questionnaire designed to briefly assess stigmatizing attitudes in the workplace. Items (e.g., "I would not be close friends with a co-worker who I knew had mental illness.") are rated on a 1 (strongly disagree) to 5 (strongly agree) Likert scale. Higher scores indicate greater stigmatizing attitudes towards mental health conditions in the workplace. The OMSWA-SF has been psychometrically validated in a sample of PSP, with good internal consistency (α = .89). The OMSWA-SF replaced the OMSWA after F1 for fire and for all police, paramedic, and public safety communicator milestone assessments.

Pain Anxiety Symptoms Scale - 20 (PASS-20; (72)). The PASS-20 is a short form of the original Pain Anxiety Symptoms Scale (PASS; (73)) used to measure pain-related anxiety. Each of the 20 items (e.g., "When I feel pain I am afraid that something terrible will happen.") is rated on a 6-point Likert scale ranging from 0 (never) to 5 (always). Each of four 5-item subscales (i.e., Cognitive, e.g., "I can't think straight when in pain;" Fear, e.g., "Pain sensations are terrifying;" Escape/Avoidance, e.g., "I will stop any activity as soon as I sense pain coming on;" and Physiological, e.g., "Pain makes me nauseous.") provides a score that can be considered separately or, when summed, as a general measure of pain-related anxiety. Higher scores indicate greater pain-related anxiety. Factorial validity and internal consistency for both the total and subscale scores have been demonstrated in clinical (α = .83; (74)) and non-clinical (α = .91; (75)) samples.

Panic Disorder Severity Scale (PDSS; (76)). The PDSS is a 7-item self-report measure designed to assess symptoms of panic disorder (e.g., "During the past week, were there any activities that you avoided or felt afraid of because they caused physical sensations like those you feel during panic attacks?"). The items assess panic frequency, distress during panic, panic-focused anticipatory anxiety, phobic avoidance of situations, phobic avoidance of physical sensations, impairment in work functioning, and impairment in social functioning. Items are rated on a 5-point Likert scale ranging from 0 (none) to 4 (extreme). A positive screen for panic disorder was determined based on total score (i.e., scores greater than 7 can be used to identify persons reporting clinically significant anxiety and distress; (77)). The self-report version of the PDSS has displayed excellent psychometrics, with one study finding strong internal consistency (α = .92) and a correlation of .81 with the original measure (78).

Parental Assessment of Childhood Stress (PACS; Carleton, Duranceau, & Wright, 2016; unpublished scale).
The PACS is a 26-item self-report questionnaire that assesses the degree to which a parent serving in public safety believes their child is experiencing distress that may be associated with the realities of such service. The PACS is a new measure and psychometrics will be available as soon as possible.

Posttraumatic Growth Inventory - Short Form (PTGI-SF; (84)). The PTGI-SF is a 10-item self-report questionnaire briefly assessing growth in response to traumatic events. Items (e.g., "I learned a great deal about how wonderful people are.") are rated on a 0 (I did not experience this change as a result of my crisis) to 5 (I experienced this change to a very great degree as a result of my crisis) Likert scale. Higher scores indicate greater posttraumatic growth. The PTGI-SF has been psychometrically validated in a large sample, with good internal consistency (α = .89), and can be reliably used in place of the full version with little loss of information.

PTSD Checklist for DSM-5 (PCL-5; (60)). The PCL-5 is a 20-item self-report measure used to assess symptoms of posttraumatic stress disorder (PTSD) experienced in the past month and to screen for persons reporting clinically significant symptoms. Participants use a Likert scale ranging from 0 (not at all) to 4 (extremely) to rate how bothered they had been by different PTSD symptoms (e.g., "Repeated, disturbing memories, thoughts, or images of the stressful experience") over the past month. A positive screen for PTSD is determined based on total score (i.e., a score greater than 32 is used to identify clinically significant symptoms), as well as meeting criteria on each individual symptom cluster (60); see the sketch below. Psychometric evaluation has found the PCL-5 to be a reliable and valid measure of PTSD symptoms as described in the Diagnostic and Statistical Manual of Mental Disorders, 5th ed. (62), with strong internal consistency (α = .94) and test-retest reliability (r = .82) in PPTE-exposed populations (85).

Public Safety Personnel Stressors (PSP-Stress; (86)). The PSP-Stress is a 40-item self-report questionnaire designed to measure environmental stressors specific to public safety officers. The scale was created by combining the 20-item Operational Police Stress Questionnaire (PSQ-Op) and the 20-item Organizational Police Stress Questionnaire (PSQ-Org). Items such as "The feeling that different rules apply to different people (e.g., favouritism)," are rated on a 7-point Likert scale ranging from 1 (no stress at all) to 7 (a lot of stress). Higher scores indicate greater subjective levels of stress. The PSP-Stress is a new measure and psychometrics will be available as soon as possible. The PSQ-Op scale has adequate reliability, with a coefficient alpha of .93 and corrected item-total correlations ranging from .50 to .70. The PSQ-Org scale has adequate reliability, with a coefficient alpha of .92 and corrected item-total correlations ranging from .41 to .73 (86).

Public Safety Personnel Support (PSP-Support; Carleton, 2015; unpublished scale). The PSP-Support is a 15-item self-report questionnaire designed to measure environmental supports specific to public safety officers. Items such as "Your family" are rated on a 5-point Likert scale ranging from 1 (I feel undermined) to 5 (I feel as supported as I could ever hope to be). Higher scores indicate greater subjective levels of support. The PSP-Support is a new measure and psychometrics will be available as soon as possible.
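The PCL-5 screen described above combines a total-score cutoff with per-cluster criteria. The sketch below assumes the conventional DSM-5 cluster mapping (items 1-5 = Criterion B, 6-7 = C, 8-14 = D, 15-20 = E) and the common convention that an item rated 2 or higher counts as endorsed; both are assumptions for illustration, not details stated in the study materials:

```python
# Conventional DSM-5 cluster mapping (assumption): slice bounds and the
# minimum number of endorsed items required per cluster.
CLUSTERS = {"B": (0, 5, 1), "C": (5, 7, 1), "D": (7, 14, 2), "E": (14, 20, 2)}

def pcl5_positive_screen(items, cutoff=32, endorsed=2):
    """PCL-5 screen: twenty items scored 0-4. Positive when the total score
    exceeds the cutoff AND every symptom cluster has at least the required
    number of items rated at or above the endorsement threshold."""
    if len(items) != 20 or any(not 0 <= i <= 4 for i in items):
        raise ValueError("PCL-5 expects twenty items scored 0-4")
    total_ok = sum(items) > cutoff
    clusters_ok = all(
        sum(1 for i in items[lo:hi] if i >= endorsed) >= need
        for lo, hi, need in CLUSTERS.values()
    )
    return total_ok and clusters_ok

# Example: moderate ratings (2) on every item give a total of 40 and satisfy
# every cluster criterion, so the screen is positive.
print(pcl5_positive_screen([2] * 20))  # True
```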
The RSSS is a 14-item self-report questionnaire designed to measure environmental supports specific to Royal Canadian Mounted Police members. Items were adapted to fit all PSP for the current study (e.g., changing "RCMP officer" to "PSP"). Items such as "Do you feel that your supervisor would support you if you developed a mental illness/injury?" are rated on a 5-point Likert-type scale ranging from 1 (strongly disagree) to 5 (strongly agree). The RSSS is a new measure and psychometrics will be available as soon as possible.

Self-Care and Mental Health Access for Public Safety (SCMHA-PS; Carleton, Duranceau, & LeBouthillier, 2016; unpublished scale). The SCMHA-PS is a 25-item self-report questionnaire designed to measure environmental supports specific to public safety officers. The questionnaire contains two scales and one open-ended question. Items such as "Spouse" are rated on a 7-point Likert scale ranging from 1 (I can and would access as an early resource) to 7 (I don't know if I have access). The questionnaire contains one item with multiple open-ended sub-questions which asks, "How many days per week do you do each of the following activities?" There are 10 sub-questions, with items such as "Socializing with other First Responders or other Public Safety Personnel?" The questionnaire also contains items such as "Contact mental health professionals for well-being (e.g., psychologists, therapists)" which are rated on a 5-point Likert scale ranging from 1 (never) to 5 (annually). The SCMHA-PS is a new measure and psychometrics will be available as soon as possible.

Social Interaction Phobia Scale (SIPS; (87)). The SIPS is a 14-item self-report measure designed to assess symptoms specific to social anxiety disorder (SAD; e.g., "When mixing socially I am uncomfortable."). Each item is measured on a 5-point Likert scale, ranging from 0 (not at all characteristic of me) to 4 (entirely characteristic of me). Higher scores indicate greater symptoms of social anxiety. The items were derived as a subset of items from the Social Interaction Anxiety and Social Phobia Scales (88). The SIPS is designed to measure three symptom dimensions of SAD: social interaction anxiety; fear of overt evaluation; and fear of attracting attention. SIPS total and subscale scores account for equivalent or greater variance relative to the original SIAS and SPS total scores (87). The SIPS total score has demonstrated excellent internal consistency (α = .92), with adequate internal consistency (αs = .76 to .86) exhibited by all three subscales among undergraduate students (89). Similar results were found among patients with principal SAD and patients with principal GAD, and slightly lower values but still good internal consistency were observed in a healthy control sample (90). Use of the total score typically provides sufficient sensitivity and specificity for discerning clinical and nonclinical samples (i.e., scores greater than 20 can be used to identify persons reporting clinically significant distress). Subsequent research has replicated the psychometric properties, as well as convergent and discriminant validity, of the SIPS in a large and independent sample (89). The SIPS is included as a measure of dimensional SAD symptoms (87,91). Based on participant and clinician feedback, the SIPS was identified as redundant with other measures and removed after F1 for fire and from all police, paramedic, and public safety communicator milestone assessments.

Social Provisions Scale - 10 (SPS-10; (92,93)). The SPS-10 is a 10-item short form of the original measure designed to measure perceived social support (92).
Items such as "There are people I can depend on if I really need it," are rated on a 4-point Likert scale ranging from 1 (strongly disagree) to 4 (strongly agree). Higher scores indicate greater feelings of social support. The SPS-10 has demonstrated excellent internal consistency (α = .88), concurrent validity (r = .93), and factorial validity (92). Based on participant and clinician feedback, the SIPS was identified as redundant with other measures and removed after F1 for fire and from all police, paramedic, and public safety communicators milestone assessments. Southampton Mindfulness Questionnaire (SMQ; (94)). The SMQ is designed to provide a measure of mindful awareness of distressing thoughts and images. The SMQ is a 16-item scale with items such as, "Usually when I experience distressing thoughts and images, I am able to accept the experience," rated on a 7-point Likert scale ranging from 0 (strongly disagree) to 6 (strongly agree). Higher scores indicate greater ability to manage emotional reactions to distressing thoughts and images. Psychometric properties of the SMQ have exhibited an excellent internal consistency (α = .89) for the total sample and an acceptable consistency for the community (α = .89) and clinical (α = .82) groups; with corrected itemtotal correlations of r = .54 (94). Tobacco Use Questionnaire (TUQ; (95)). The TUQ is an 8-item self-report questionnaire that assesses self-reported tobacco use. Items such as "Do you use tobacco (e.g., cigarettes, smokeless)?" are responded to with face-valid options (e.g., yes, no). The TUQ was designed for large-scale screenings as part of demographic history and is entirely dependent on participant self-report. Items were derived following review of the World Health Organization's Global Adult Tobacco Survey (95). Unified Protocol Behavioral Avoidance Questionnaire (UPBAQ; (96)): The UPBAQ is a 5-item measure that was developed explicitly to assess the skill of approach-oriented behavior as it is taught in the Unified Protocol (UP). Participants rated items (e.g., "The way I acted in situations was driven by my distressing emotions," "I tried to avoid distressing emotions by avoiding situations that might cause them.)" by indicating how often they use each skill on a scale from 1 (never) to 5 (always or when needed). Higher scores indicate greater levels of emotional avoidance. The UPBAQ has demonstrated good internal consistency and validity (97). Unified Protocol Cognitive Questionnaire (UPCQ; (98)). The UPCQ is a 7-item measure that was developed explicitly to assess the skill of cognitive flexibility as it is taught in the Unified Protocol (UP) as existing questionnaires assessing cognitive coping either included skills that are not emphasized in the UP or excluded key concepts covered in this module. Participants rated items (e.g., "I evaluated my thinking when I experienced a distressing emotion," "I understood that my thoughts can have an effect on my feelings and behaviors.") by indicating how often they use each skill on a scale from 1 (never) to 5 (always or when needed). Higher scores indicate greater use of cognitive coping skills. The UPCQ has demonstrated good internal consistency and validity (97). (UPKA; (97)). The UPKA questionnaire consists of 13 true/false items designed to assess core concepts taught during the PSP Emotional Resilience Skills Program. 
Example items include, "The goal of your emotional resilience course is to learn how to eliminate unwanted emotions like fear, anxiety, and sadness," and "How we currently feel can affect the way we interpret many situations." Several randomized controlled trials have assessed the efficacy of the psychological intervention on which the PSP ERST was based (e.g., (98)). Utrecht Work Engagement Scale -9 (UWES-9; (99)). The UWES-9 is a 9-item questionnaire assessing a person's work-related state of fulfillment. Questions such as "I am proud of the work that I do," are rated on a 7-point scale from 0 (never) to 6 (always). Higher scores indicate greater levels of work-related fulfillment. The UWES-9 has evidenced adequate internal consistency (αs = .75 to .85) and a more temporally stable factor structure than longer versions of the UWES (100).
2022-12-11T05:08:11.829Z
2022-12-09T00:00:00.000
{ "year": 2022, "sha1": "87efc9660c7001c42638225a1b689b8fab5509bd", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "87efc9660c7001c42638225a1b689b8fab5509bd", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
119712395
pes2o/s2orc
v3-fos-license
Regularity of maximal functions on Hardy-Sobolev spaces We prove that maximal operators of convolution type associated to smooth kernels are bounded in the homogeneous Hardy-Sobolev spaces $\dot{H}^{1,p}(\mathbb{R}^d)$ when $1/p<1+1/d$. This range of exponents is sharp. As a by-product of the proof, we obtain similar results for the local Hardy-Sobolev spaces $\dot{h}^{1,p}(\mathbb{R}^d)$ in the same range of exponents. Introduction Let $\varphi : \mathbb{R}^d \to \mathbb{R}$ be a nonnegative function such that $$\int_{\mathbb{R}^d} \varphi(x)\,dx = 1.$$ The maximal operator associated to $\varphi$ is defined as $$M_\varphi f(x) = \sup_{t>0} \big(|f| * \varphi_t\big)(x),$$ where $\varphi_t(x) = t^{-d}\varphi(x/t)$ and $f \in L^1_{\mathrm{loc}}(\mathbb{R}^d)$. The simplest example of such an operator is the Hardy-Littlewood maximal operator, which from this point on we denote by $M$. It occurs when $\varphi = \frac{1}{|B_1|}\mathbf{1}_{B(0,1)}$, where $B(x,r)$ denotes the $d$-dimensional ball of radius $r$ centered at $x \in \mathbb{R}^d$ and $|B_r|$ its Lebesgue measure. The operator $M$ evaluates the supremum of all averages of $|f|$ on balls centered at $x$, and for different functions $\varphi$, the operator $M_\varphi$ can be interpreted as a weighted average variant of $M$. It was established by Kinnunen [13] that, for $p > 1$, $M$ defines a bounded operator on the Sobolev spaces $W^{1,p}(\mathbb{R}^d)$, i.e., there is $C = C_p > 0$ such that $$\|Mf\|_{W^{1,p}(\mathbb{R}^d)} \le C\,\|f\|_{W^{1,p}(\mathbb{R}^d)}. \tag{1.1}$$ In his paper [13], Kinnunen obtains the bound (1.1) by proving that a function $f \in W^{1,p}(\mathbb{R}^d)$ satisfies, for almost every $x \in \mathbb{R}^d$, $$|\nabla Mf(x)| \le M(|\nabla f|)(x), \tag{1.2}$$ and this last inequality readily implies $$\|\nabla Mf\|_{L^p(\mathbb{R}^d)} \le C\,\|\nabla f\|_{L^p(\mathbb{R}^d)}, \tag{1.3}$$ which, combined with the well known $L^p$-boundedness, implies the $W^{1,p}$-boundedness. Although Kinnunen's work in [13] is stated in terms of the classical Hardy-Littlewood case, his proof extends to all $M_\varphi$ of convolution type that are $L^p$-bounded, i.e., such that $$\|M_\varphi f\|_{L^p(\mathbb{R}^d)} \le C\,\|f\|_{L^p(\mathbb{R}^d)},$$ and hence one has the analogue of (1.3) for $M_\varphi$ and, as a consequence, the $W^{1,p}(\mathbb{R}^d)$-boundedness. When $p = 1$, Kinnunen's result cannot hold because $M_\varphi f \notin L^1(\mathbb{R}^d)$, and this completely rules out the possibility of $M_\varphi f$ belonging to $W^{1,1}(\mathbb{R}^d)$. This means his result is sharp in the sense of the range of exponents. Despite that, one could still ask what happens when examining only the derivative level of the inequality, i.e., could $M_\varphi$ satisfy an inequality like (1.3) for $0 < p \le 1$? A natural way to address what happens in this regime is switching from the Lebesgue spaces $L^p(\mathbb{R}^d)$ to the Hardy spaces $H^p(\mathbb{R}^d)$. For $0 < p \le \infty$, a distribution $f : \mathbb{R}^d \to \mathbb{C}$ lies in the Hardy space $H^p(\mathbb{R}^d)$ if its nontangential Poisson maximal function lies in $L^p(\mathbb{R}^d)$. Given a kernel $\psi : \mathbb{R}^d \to \mathbb{C}$, the nontangential maximal function associated to $\psi$ of a function $f$ is defined as $$\widetilde{M}_\psi f(x) = \sup_{t>0,\ |x-y|<t} |(f * \psi_t)(y)|;$$ here $P(x) = c_d\,(1+|x|^2)^{-(d+1)/2}$ is the Poisson kernel, and we set $\|f\|_{H^p(\mathbb{R}^d)} = \|\widetilde{M}_P f\|_{L^p(\mathbb{R}^d)}$. For $p > 1$, as a consequence of the $L^p$-boundedness of the nontangential maximal functions and the Lebesgue differentiation theorem, one has $H^p(\mathbb{R}^d) = L^p(\mathbb{R}^d)$. When $0 < p \le 1$ the scenario is different, and the spaces $H^p(\mathbb{R}^d)$ enjoy structural properties which make them natural substitutes for the $L^p$ spaces in this range of exponents. A distribution $f \in \mathcal{S}'(\mathbb{R}^d)$ belongs to the homogeneous Hardy-Sobolev space $\dot{H}^{1,p}(\mathbb{R}^d)$ if for $j = 1, \dots, d$ it has a weak derivative $\partial_j f$ in the space $H^p(\mathbb{R}^d)$, and in this case we set $$\|f\|_{\dot{H}^{1,p}(\mathbb{R}^d)} = \sum_{j=1}^d \|\partial_j f\|_{H^p(\mathbb{R}^d)}.$$ These spaces were first studied by Strichartz [26] and, when $1/p < 1 + 1/d$, every distribution $f \in \dot{H}^{1,p}(\mathbb{R}^d)$ is known to coincide with a locally integrable function. In particular, one can always make sense of $M_\varphi f$, as well as its distributional derivatives, whenever $\varphi$ is sufficiently regular, which raises the natural question of boundedness of $M_\varphi$ in these spaces on this range of exponents. We answer this question for $\varphi \in \mathcal{S}(\mathbb{R}^d)$. Theorem 1. Let $\varphi \in \mathcal{S}(\mathbb{R}^d)$ and $1/p < 1 + 1/d$. Then $M_\varphi$ is a bounded operator from $\dot{H}^{1,p}(\mathbb{R}^d)$ to $\dot{H}^{1,p}(\mathbb{R}^d)$.
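For the reader's convenience, the step from (1.2) to (1.3) is simply monotonicity of the $L^p$ norm followed by the $L^p$-boundedness of the Hardy-Littlewood maximal operator (valid since $p > 1$): $$\|\nabla M f\|_{L^p(\mathbb{R}^d)} \le \| M(|\nabla f|) \|_{L^p(\mathbb{R}^d)} \le C_p\, \|\nabla f\|_{L^p(\mathbb{R}^d)}.$$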
Theorem 1 offers a new way to obtain a derivative-level boundedness result such as (1.1) which avoids (1.2) and introduces Hardy space regularity of maximal functions into the fold for the first time. There are three main steps in the proof of Theorem 1. Let $\psi \in C^\infty_c(B(0,1))$ be such that $\|\nabla\psi\|_{L^\infty}, \|\psi\|_{L^\infty} \le 1$ and $\psi \ge 0$. Given $x, y \in \mathbb{R}^d$ satisfying $|x-y| \le t$, one has the estimates (1.5) and (1.6), and in order to obtain Theorem 1 the first step is the choice of an appropriate $c \in \mathbb{R}$ for each $t$ in (1.5). We then split $B_{2t}$ into two sets, a local and a non-local piece. The second step is the analysis of the local piece and has two main ingredients: a characterization of Hardy-Sobolev spaces by Miyachi [24], which is given in terms of the maximal operator $N_p$ introduced in (1.7) below, and a self-improvement lemma from [17]. The third step is the study of the non-local piece, in which we will get a bound in terms of the nontangential maximal function associated to $\varphi$. At this point the aforementioned quasi-norm equivalence (1.6) will come into play. Lastly, Theorem 1 is sharp in terms of the range of exponents, and we show this in the last section. As pointed out, Hardy spaces are a natural extension of the Lebesgue spaces when $0 < p \le 1$, and although this result is the first of its kind, another very natural question is what happens in the $W^{1,1}$ case. Given a maximal operator $M_\varphi$, one can ask whether (1.3) extends to $p = 1$, in the sense that there is a constant $C > 0$ such that $$\|\nabla M_\varphi f\|_{L^1(\mathbb{R}^d)} \le C\,\|\nabla f\|_{L^1(\mathbb{R}^d)} \tag{1.8}$$ for every function $f \in W^{1,1}(\mathbb{R}^d)$. There has been a lot of effort towards understanding this question in the last few years, as well as the problem of determining the optimal constant in (1.8). The first work in this direction is due to Tanaka [27], who studied the case of $\varphi(x) = \mathbf{1}_{[0,1]}(x)$, the one-sided Hardy-Littlewood maximal operator, and obtained (1.8) with $C = 1$. Later, Kurka proved the same result for the one-dimensional Hardy-Littlewood maximal operator, with $C = 240{,}004$. Still in the one-dimensional setting, the same results for the heat and the Poisson kernels were obtained by Carneiro and Svaiter [8] with $C = 1$. Other interesting results related to the regularity of maximal operators are [1,2,4,5,6,7,11,12,14,18,19,25,28]. Recently, Luiro [20] proved that inequality (1.8) is true in any dimension for the uncentered Hardy-Littlewood maximal function, provided one considers only radial functions. Later Luiro and Madrid [21] extended the radial paradigm to the uncentered fractional Hardy-Littlewood maximal function. As a straightforward consequence of Theorem 1, we obtain partial progress towards the understanding of the $W^{1,1}$ scenario. Corollary 2. Let $\varphi \in \mathcal{S}(\mathbb{R}^d)$ and let $f$ be such that $\partial_j f \in H^1(\mathbb{R}^d)$ for $j = 1, \dots, d$. Then $$\|\nabla M_\varphi f\|_{L^1(\mathbb{R}^d)} \le C\,\|f\|_{\dot{H}^{1,1}(\mathbb{R}^d)}. \tag{1.9}$$ In the same spirit of [20,21], Corollary 2 implies that $|\nabla M_\varphi f| \in L^1(\mathbb{R}^d)$ under stronger conditions than just $f \in W^{1,1}(\mathbb{R}^d)$, which sheds new light on the question whether one might have (1.8) for general $f \in W^{1,1}(\mathbb{R}^d)$. 1.1. A word on forthcoming notation. We denote by $d \ge 1$ the dimension of the underlying space. We represent the characteristic function of $E$ by $\mathbf{1}_E$, and averages of $f \in L^1(E)$ are denoted as $$f_E = \frac{1}{|E|}\int_E f.$$ If $a = 1$ or $p = 1$, they will be suppressed from the notation. Note that the definition of $\widetilde{M}^a_\varphi f$ makes sense for $f$ a tempered distribution provided that $\varphi$ is a Schwartz function. Preliminaries Given a function $f \in L^1_{\mathrm{loc}}$, let $N_p(f)$ be the maximal function defined in (1.7). This operator was first considered by Calderón [3], when $p > 1$, to characterize functions with weak derivatives in $L^p$ spaces, and later studied by Miyachi [24], for $0 < p \le 1$, in order to obtain similar characterizations for Hardy spaces.
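Since Corollary 2 is stated as a straightforward consequence of Theorem 1, it may help to record the short chain of estimates presumably behind it; the only extra ingredient is the continuous embedding $H^1(\mathbb{R}^d) \hookrightarrow L^1(\mathbb{R}^d)$: $$\|\nabla M_\varphi f\|_{L^1(\mathbb{R}^d)} \lesssim \sum_{j=1}^{d} \|\partial_j M_\varphi f\|_{H^1(\mathbb{R}^d)} = \|M_\varphi f\|_{\dot{H}^{1,1}(\mathbb{R}^d)} \lesssim \|f\|_{\dot{H}^{1,1}(\mathbb{R}^d)},$$ where the last inequality is Theorem 1 with $p = 1$, admissible since $1 < 1 + 1/d$.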
As our first ingredient, we use his characterization in the form of the next result. Lemma 3 (Miyachi [24]). Let $1/p < 1 + 1/d$ and $f \in L^1_{\mathrm{loc}}$. Then $f \in \dot{H}^{1,p}(\mathbb{R}^d)$ if and only if $N_p(f) \in L^p(\mathbb{R}^d)$, with $\|N_p(f)\|_{L^p(\mathbb{R}^d)} \sim \|f\|_{\dot{H}^{1,p}(\mathbb{R}^d)}$. In the first inequality above we have used that $\mathrm{supp}(\varphi) \subset B(0,1)$ and the definition of $E_1$. Since for any $h \in L^1_{\mathrm{loc}}$ and $r > 1$ one has the corresponding weak-type estimate, for any choice of $q \in (d/(d+1), p)$ we obtain the required bound, the last inequality being due to the boundedness of $M$ on $L^{r,\infty}(\mathbb{R}^d)$ when $r > 1$. Now we appeal to Lemma 4, applied both to the display above and to the quantity (3.1), to obtain the estimate for this term. We move on to estimate the integral over $E_2$. Let $y \in E_2$, $z \in B(x,2t)$ and $r > t$, and let $e = \frac{y-z}{|y-z|}$. Then the relevant term is controlled by $\sup_{|\eta| \le |y-z|} |f * \partial_j \varphi_r(z+\eta)|$, since $|(z+\eta) - z| = |\eta| \le |y-z| \le 4t < 4r$ and $|f * \partial_j \varphi| = |\partial_j f * \varphi|$. Since $p/q > 1$, it follows from the boundedness of $M_q$ on $L^p$, the already mentioned quasi-norm equivalence (1.6) in $H^p$ applied to both $\psi$ and $\varphi_{1/8}$, and Lemma 3 that the required bound holds, which is the desired result. 3.2. The case of a general support. Given $\varphi \in \mathcal{S}(\mathbb{R}^d)$, for some constant $C = C(\varphi)$ one has a pointwise bound that allows us to proceed as in the case of compact support and divide $B(x,2t)$ into $E_1$ and $E_2$. In $E_2$ the support does not play a role in the proof. In $E_1$ one just has to observe that the same estimate still applies. On the other hand, for any kernel $\varphi \in C^\infty_c(\mathbb{R}^d)$, one has for $j = 1, \dots, d$ that $|\partial_j M_\varphi(f)(x)| \gtrsim |x|^{-(d+1)}$ when $|x| \to \infty$, which implies it does not belong to $L^{d/(d+1)}(\mathbb{R}^d)$, and therefore not to $H^{d/(d+1)}(\mathbb{R}^d)$. To see this is true, we use the following observation due to Luiro [18]: if for some $t > 0$ one has $M_\varphi(f)(x) = (|f| * \varphi_t)(x)$, then the derivative of $M_\varphi f$ at $x$ can be computed through this maximizing $t$. Now, when $|x| \to \infty$, any admissible $t$ will be roughly the size of $|x|$, and by standard considerations this implies $|\partial_j M_\varphi(f)(x)| \gtrsim |x|^{-(d+1)}$. 4.3. Local Hardy spaces. One can consider similar questions on the local Hardy spaces $h^p(\mathbb{R}^d)$ introduced by Goldberg [9]. They are defined similarly to the spaces $H^p(\mathbb{R}^d)$, but with a truncated nontangential maximal operator, i.e., $f \in h^p(\mathbb{R}^d)$ when $\widetilde{M}^1_P f \in L^p(\mathbb{R}^d)$, where $$\widetilde{M}^1_P f(x) = \sup_{|x-y| < t \le 1} |(P_t * f)(y)|.$$ A function belongs to $\dot{h}^{1,p}(\mathbb{R}^d)$ if $\partial_1 f, \dots, \partial_d f \in h^p(\mathbb{R}^d)$, and we set $$\|f\|_{\dot{h}^{1,p}(\mathbb{R}^d)} = \sum_{j=1}^d \|\partial_j f\|_{h^p(\mathbb{R}^d)}.$$ If one considers the operator $$m_\varphi f(x) = \sup_{0 < t \le 1} \big(|f| * \varphi_t\big)(x),$$ we have the following result. Theorem 5. Let $\varphi \in \mathcal{S}(\mathbb{R}^d)$ and $1/p < 1 + 1/d$. Then $m_\varphi$ is a bounded operator from $\dot{h}^{1,p}(\mathbb{R}^d)$ to $\dot{h}^{1,p}(\mathbb{R}^d)$. The proof of this result follows the same lines as Theorem 1, since one has the analogue of Lemma 3 (see [24]) for the $\dot{h}^{1,p}$ spaces, as well as the norm equivalence with any truncated nontangential maximal operator associated to a Schwartz kernel, so we omit the details.
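The display following Luiro's observation is not reproduced above; in the form this identity is usually stated (recorded here as an assumption, not a quotation from [18]), at a point $x$ where the supremum defining $M_\varphi f(x)$ is attained at some $t > 0$ one has $$\nabla M_\varphi f(x) = \nabla\big(|f| * \varphi_t\big)(x) = \big(|f| * \nabla \varphi_t\big)(x), \qquad \nabla\varphi_t = t^{-(d+1)}(\nabla\varphi)(\cdot/t),$$ so once every admissible $t$ is comparable to $|x|$, the $t^{-(d+1)}$ scaling of $\nabla\varphi_t$ is exactly what produces the $|x|^{-(d+1)}$ decay invoked in the sharpness argument.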
2018-04-22T22:35:39.000Z
2017-11-04T00:00:00.000
{ "year": 2018, "sha1": "cbe84c8364e570e79db572512a8666e3b94add99", "oa_license": "publisher-specific, author manuscript", "oa_url": "https://rss.onlinelibrary.wiley.com/doi/am-pdf/10.1112/blms.12195", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "cbe84c8364e570e79db572512a8666e3b94add99", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
7353432
pes2o/s2orc
v3-fos-license
The Development of Self Structures and Active Coping In addition to coping with the usual stressful circumstances at work, it is nowadays important to examine what kind of mental capacities of medical staff are adaptive with respect to a new type of stress - job insecurity. Special focus is put upon self structures as personality determinants and the role they have in coping. The aim of the study was to determine the role of the self structures in active coping with job insecurity. It was supposed that increasing integration of self structures leads to increasing use of active coping strategies. Perceived job insecurity was measured by the Job insecurity perception scale (Knežević and Majstorović, 2013). The Ego Functioning Questionnaire (Majstorović, Legault and Green-Demers, 2008) was used to evaluate types of ego-functioning; coping strategies were assessed by the Cybernetic coping scale (Edwards and Baglioni, 1993). In order to test the hypothesis, a multivariate regression analysis was conducted with self-regulation as predictor and an active coping strategy as criterion. A significant model, F(3, 306) = 26,73, p < 0,001, was obtained with all the predictors selected as significant. The prediction directions were as expected: integrated and ego-investing self were positive predictors (β = 0,35, p < 0,001, and β = 0,16, p < 0,01, respectively), while the impersonal self singled out as a negative predictor (β = -0,13, p < 0,05). The results have shown that the development of self structures is a valid predictor of active coping of medical staff when facing job insecurity. INTRODUCTION The work of medical staff is extremely stressful; it requires developed mental capacity to overcome everyday stress and to function adaptively in stressful circumstances. Nowadays, it is especially important to examine what kind of mental capacities of health care workers are adaptive with respect to a new type of daily intense stress: the stress caused by rising job insecurity. JOB INSECURITY According to Vander Elst et al. [1], job insecurity is defined as frustration of basic psychological needs. Klandermans and van Vuuren [2] state that job insecurity can be seen as an objective and a subjective construct. Subjective job insecurity represents a psychological phenomenon which is primarily characterized by the experience of uncertainty connected to one's employment future, where there might be significant differences among the employees within the same organization [3]. Job insecurity is a threat that each individual perceives, and to which each responds, differently once it reaches a critical level. A great number of situational and individual factors determine the threshold of threat and influence the readiness of an individual to perceive and react to the threat: the type of affective attachment [4], defensive self-respect [5], authenticity and consciousness [6], etc. The perception of job insecurity is also influenced by certain demographic and status variables. Research has most commonly examined the connection to gender, age, education level, length of work experience and previous experiences with the perception of job insecurity [7]. Hodgins and Knee [8] speak about the personality variables that influence the perception of job insecurity - first of all, types of self structures and dispositional mood factors. SELF STRUCTURES AND THREAT The self represents the personality instance which is responsible for the allocation of conative and cognitive resources which help the individual to establish, up to a specific degree, an integrated experience of the world and himself in it. Hodgins and Knee [8] present a humanistic conceptualization of the self in accordance with self-determination theory; they point out that human beings possess an inherited organismic self, which consists of the main motivational apparatus and cognitive developmental dispositions. The developmental process is initiated by three basic psychological needs: the need for relatedness, competence and autonomy. However, the social surroundings can encourage or obstruct the natural tendency of the self to accomplish its potential completely. According to these authors, the quality of ego functioning becomes directly dependent on how successfully the system integrates outer and inner experiences into existing structures. It also becomes dependent on the adaptability of structures in situations when they are faced with new upcoming experiences (primarily of a threatening character). Starting from the motivational orientation and the dominance of one of the personal self-regulation styles (the autonomous, the controlled and the impersonal), Hodgins and Knee [8] describe three types of the self - the integrated self, the ego-investing self and the impersonal self - which influence the perception and the processing of unpleasant experiences. The integrated self refers to a harmonized self system; it originates in individuals who received the required social support in their efforts to satisfy their three basic psychological needs completely during their development. Individuals with an integrated self succeed in learning to value themselves for what they really are, to recognize the importance of their own authentic inner impulses, and to develop unconditional self-evaluation and self-respect. While conducting most of their activities these individuals are intrinsically motivated; they have stable self-confidence, they enjoy social contacts and they are spontaneous in their reactions. Their self system is open to changes and innovations, and it is ready to explore and to receive content from both the inner and the outer reality. With regard to the other ego functioning types, the integrated self gives the opportunity for a more objective and more timely perception. Hodgins and Knee [8] point out that the ego-investing self develops within circumstances where autonomy lacks support. When this happens, internalized social pressure and limitations stimulate the development of self-evaluation based on a constructed (fake) image of oneself, which is founded on acquiring the approval of others instead of authentic self-affirmation. As a result, these individuals are energetically driven by extrinsic goals, like money, fame and power; they behave rigidly, their perception of reality is selective, and they are eager to gain confirmation and acknowledgement for their actions and behavior. The impersonal self represents the lowest level of self-integration. This type of self appears when the individual's development includes personal experience in which the three basic psychological needs were to a great (critical) extent unsatisfied [8]. The vitality of these individuals is low, which points to a general lack of motivation. These individuals display a lack of intention in reacting, and when they do react, they wish to terminate the action as fast as possible; they get agitated easily, and they are often cluttered with information and overwhelmed by negative thoughts and feelings. As a consequence, these individuals withdraw from new experiences, turn towards routines and repetitive activities, and engage in social self-isolation in order to protect their own unstable functioning. Given that the situation of job insecurity requires effective self-regulatory processes - enabling a high level of directed attention and regulation of cognitive effort, careful self-reflection, and monitoring and evaluation of implemented efforts and actions - it is supposed that only good self-regulatory processes and developed self structures have the potential to operate successfully and to cope actively in a stressful circumstance like job insecurity. In this direction, the aim of this study is to determine the role of the self structures in active coping with job insecurity. It was supposed that increasing integration of self structures leads to increasing use of active coping strategies in a situation of job insecurity. METHOD SAMPLE The research was conducted on a sample of 102 health workers: men (46; 45,1%) and women (56; 54,9%), from three public hospitals in the Autonomous Province of Vojvodina. The research was conducted in the period January-March 2013. INSTRUMENTS The Scale of the Perception of Job Insecurity (SPJI) [9] was created according to similar scales [10]. The scale consists of 22 items, all expressed on a five-point Likert scale. The perception of job insecurity was conceptualized as the accessibility of the working role for the employee for a certain time in the future, and it comprises three qualitative dimensions: the feeling of helplessness (affective dimension), the strength of the threat (affective) and the valuation of the possible job loss (cognitive dimension). The Ego Functioning Questionnaire [11] was designed to measure three types of self: the integrated self, the ego-investing self and the impersonal self [8]. The questionnaire consists of 30 items that measure the different types of self (10 items for each type). All items are expressed on a seven-point Likert scale. The coping concept in this article is placed within the framework of the cybernetic theory of stress and coping formulated by Edwards [12], according to which coping is conceptualized as an attempt to reduce or eliminate the negative effects of stress on the psychological well-being of an individual. The following coping behaviours (five forms) were assessed by the Cybernetic coping scale [13]: (a) attempts to reduce symptoms and to directly improve psychological well-being; (b) change in the situation - active problem solving that tries to bring the situation into line with personal preferences; (c) customization - adjusting personal preferences to fit the situation; (d) devaluation - overcoming the disagreement between one's desires and perceptions by devaluing its importance for the individual; and (e) avoiding - diverting attention from the situation. According to Edwards [12], of these five coping strategies the only active strategy is change in the situation.
METHOD OF DATA ANALYSIS Descriptive statistics and multiple regression analyses were used. Table 1 shows the descriptive indicators of the variables in the research. The analysis shows that no variable deviates significantly from the normal distribution (as seen from the skewness and kurtosis indices, which fall within the values -1 and +1), and that only the Customization subscale of the CCS instrument has reliability below the lower limit of acceptability (0,70). The other scales have satisfactory reliability indices. RESULTS To test the hypothesis, a multivariate regression analysis was conducted with the three types of self-regulation as predictors (integrated, ego-investing and impersonal self) and one active coping strategy (change in the situation) as the criterion. When predicting the strategy change in the situation, a significant model, F(3, 306) = 26,73, p < 0,001, was obtained with all the predictors selected as significant, as shown in Table 2. Since the more integrated self structures (integrated and ego-investing self) encourage active coping, while the undeveloped structure (impersonal self) is negatively associated with active coping of medical staff in an unstable employment situation, we can say that the formulated hypothesis is supported. DISCUSSION The results of the multivariate regression suggest a statistically significant positive relationship between integrated self-regulation and the use of change in the situation, and between ego-investing self structures and the use of this active coping strategy; impersonal self-regulation is a negative predictor of the strategy change in the situation. Different types of self structures generate different levels of self-confidence, vulnerability and tolerance towards threat (threshold of threat), thereby conditioning the level of a person's defensiveness and his openness to the current experience. The integrated self enables a completely authentic perception and processing of an unpleasant experience - such as it really is - with full self-confidence directed towards overcoming the threat, while the highest level of defense (the impersonal self) leads to the greatest reality distortions, withdrawal, avoidance and the like. These self structure characteristics determine the manner and level of activity in relation to the threat. It could be said that the development of self structures and the increased integration of self-regulation encourage an active relationship towards stress in the situation of job insecurity, and active coping with the intention of establishing control over the situation.
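As an illustration of the analysis just described, the following Python sketch fits the same kind of three-predictor linear model on standardized scores. The data here are simulated (the study's raw data are not reproduced in the paper), so the coefficient values printed are placeholders, not the reported βs.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 102  # sample size reported in the study

# Simulated predictor scores standing in for the integrated,
# ego-investing and impersonal self-regulation scales.
X = rng.standard_normal((n, 3))
# Simulated criterion roughly matching the reported direction of effects.
y = 0.35 * X[:, 0] + 0.16 * X[:, 1] - 0.13 * X[:, 2] + rng.standard_normal(n)

def zscore(a):
    """Standardize columns so the slopes read as standardized betas."""
    return (a - a.mean(axis=0)) / a.std(axis=0, ddof=1)

model = sm.OLS(zscore(y), sm.add_constant(zscore(X))).fit()
print(model.fvalue, model.f_pvalue)  # overall model F test
print(model.params[1:])              # standardized betas for the 3 predictors
```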
Taking into account the differences in the quality and quantity of self structures and self-regulatory processes, as well as the type of motivational orientation that lies at their basis, it can be assumed that the integrated self in a situation of job instability will be directed towards action that implies the realization of personal goals of authentic value. Implementation of the strategy change in the situation will be aimed at trying to minimize the risks associated with job instability, and the threat generated by job insecurity will be transformed into a challenge that calls for direct action. In contrast to the integrated self, the ego-investing self does not provide a good goal structure or a sense of personal ownership, has no capacity for strategic planning, and is reactive rather than proactive in character. Due to the lower quality of its self-regulatory process, ego-investing self-regulation lacks basic information and goals within specific, operational self-processes, and relies on social comparison for the evaluation of outcomes. Active coping implemented by ego-investing self structures will be directed towards restoring impaired psychological well-being and repairing the damage to the self caused by job insecurity. Therefore, both types of developed self structures encourage active coping - change in the situation: the active coping of the integrated self will be directly aimed at preserving the job and reducing the risks associated with job instability, while the active coping of the ego-investing self will be indirectly aimed at restoring the psychological well-being of the individual in a situation of job insecurity. Other research results also show that good quality of self-regulation is a positive predictor, and poor quality of self structures a negative predictor, of problem-focused coping strategies [14]. It was also confirmed that developed self structures are positively associated with active forms of coping [15]. Identified regulation, which is the basis of the integrated self, allows a person to be persistent in achieving the goals that are important to her in the long run; it encourages the person to persist in and foster activities that are not of immediate interest to her, but which are important in achieving her goal [14]; and it is associated with positive outcomes in an academic setting, such as commitment, psychological adjustment to school, perseverance and concentration [15]. In a similar way, Blascovich and Tomaka [16] point out that the negative arousal that accompanies the response to threat - which is characteristic of the impersonal and partly of the ego-investing self - reduces the capacity for coping. CONCLUSION The aim of this study was to determine the role of the self structures in active coping with job insecurity. The results have shown that increasing integration of self structures leads to increasing use of the active coping strategy - change in the situation. The most integrated self structure (the integrated self) shows the potential for the highest level of active coping with the situation of an unstable job. These structures have the ability to confront the stressor directly - for them, job instability is a challenge. The less integrated self structures (the ego-investing self) indicate the potential for a lower level of active coping - they have the ability to confront the stressor indirectly, through rebuilding the damage that occurred at the level of personality in a stressful transaction with an unstable job. The research results increase the insight into the phenomenon of job instability and offer a basis for creating organizational interventions aimed at strengthening the resilience of medical staff to stress caused by the perception of job insecurity. Knowing the role of self structures as personality determinants in coping with the threat of job insecurity, it is possible to create organizational interventions directed towards the resilience of the employees. Taking into account that support from the social surroundings influences the nature of the self structures [8], by supporting the social context, conditions will be created for the self to develop openness towards experience and to realize an autonomous regulation of behavior. In other words, the support from the social surroundings will influence the nature of the self functioning, directing it and making it more or less open to new life experiences, and more or less self-determined and active in behavior regulation. Limitations of the research are connected with the structure of the coping process. Due to the fact that job insecurity represents an intense chronic stress, with coping mechanisms that vary through the stressful transaction, individual assessment relates to a (separate) specific stressful episode and the coping mechanism that is currently used in this episode. In the study of stress it is very important which part of the process is "caught" by the research; the optimal situation would mean that all research subjects assess the same stressful episode in the experience of job instability and stress of the same degree (magnitude). One way of overcoming difficulties of this kind is certainly the application of a longitudinal research design. Table 2. The significance of the model and partial contributions of predictors in the prediction of change in the situation.
2016-07-05T18:43:41.079Z
2016-03-14T00:00:00.000
{ "year": 2016, "sha1": "3fe666210d6135c95983f7aaf68f1bdcf5a2652a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.7906/indecs.14.2.13", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "78abf22d5d51504996168a2ea2ea74ed6b2e6a9e", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology" ] }
119098391
pes2o/s2orc
v3-fos-license
A comparative study of resists and lithographic tools using the Lumped Parameter Model A comparison of the performance of high resolution lithographic tools is presented here. We use extreme ultraviolet interference lithography, electron beam lithography, and He ion beam lithography tools on two different resists that are processed under the same conditions. The dose-to-clear and the lithographic contrast are determined experimentally and are used to compare the relative efficiency of each tool. The results are compared to previous studies and interpreted in the light of each tool-specific secondary electron yield. In addition, the patterning performance is studied by exposing dense line/spaces patterns and the relation between critical dimension and exposure dose is discussed. Finally, the Lumped Parameter Model is employed in order to quantitatively estimate the critical dimension of line/spaces, using each tool-specific aerial image. Our implementation is then validated by fitting the model to the experimental data from interference lithography exposures, and extracting the resist contrast. I. INTRODUCTION For these reasons, in this work we compare and discuss the performance of three different tools: EUV interference lithography (EUV-IL), EBL, and HIBL in patterning a periodic layout of densely packed lines and spaces (l/s). L/s are ubiquitously used in integrated circuit architecture such as crossbar memory devices, metal lines, programmable logic arrays, word & bit lines; and they will likely also be used in future 3-D devices. 17 Moreover, dense l/s represent the ultimate resolution testing condition for resists and tools where the proximity effect is accounted for. In the present work, we focus on the amount of energy (or dose) required to print a resist feature of given size (critical dimension, CD), also known as the CD vs. dose function, which is a relevant figure of merit for lithographers. Although the exposure mechanism is radically different in these tools, the exposure chemistry is always triggered by the (primary or secondary) electrons in all three tools. We discuss how these differences affect the final result, also in the light of existing studies. II. EXPERIMENTAL Here, we describe the characteristics of the three lithographic tools and resist materials used in this work. The conventional geometry adopted here is that the l/s patterns are printed in the resist along one direction parallel to the surface of the sample, with z taken along the normal to the surface of the sample. The aerial image intensity I generated by each exposure tool is thus invariant along the direction of the lines and is fully described as a function of x only. The CD of lines was experimentally measured by top-down SEM imaging of the patterned resist l/s and by quantitative metrology image analysis using a commercial software suite (SuMMIT, Lithometrix). A. Extreme Ultraviolet Interference Lithography The extreme ultraviolet interference lithography (EUV-IL) tool at the XIL-II beamline uses light at 13.5 nm wavelength from the Swiss Light Source. Masks featuring transmission diffraction gratings produce two-beam interference patterns on the surface of the sample to form dense l/s, as detailed elsewhere. 18 The main advantages of this technique are the high resolution (down to 6 nm half-pitch) and the fast speed in patterning large areas. 19
The shape of the aerial image is dictated by the constructive and destructive interference of the two diffracted beams and it is given by: I(x) = I0 cos²(πx/p), (1) where p is the pitch of the l/s array. For the two-beam interference l/s patterning, pairs of gratings of different pitches were fabricated on SiN membranes. For producing the contrast curves, an aperture of 0.5×0.5 mm² was used to expose an array of increasing doses. B. Electron beam lithography An electron beam lithography tool (Vistec EBPG 5000) with 100 keV acceleration voltage and a beam current of 500 pA at a 400 μm aperture was used for the present study. The direct write with a raster-scan focused beam has a relatively low speed and can print only about a few μm² per second, depending on a variety of settings. For the sake of this comparison, we are interested in the effect of beam shape, and for this reason all software proximity effect corrections were disabled during this experiment. Abundant research has been put into the measurement of the beam size and shape using the point spread function (PSF) method. 20 Here, we adopt the widely used notation and define the beam shape as the sum of two Gaussians, the first representing the highly collimated primary beam, and another representing the contribution from backscattered electrons: PSF(r) = [1/(π(1+η))]·[(1/2α²)·exp(−r²/2α²) + (η/2β²)·exp(−r²/2β²)], (2) where α is the square root of the variance of the forward scattering beam, β is the square root of the variance of the backscattering electrons and η is defined as a correction factor. Experimentally measured values for α range from 4 to 14 nm, whereas β is about several microns; the correction factor η is typically ≈ 0.7-1. 21,22,23 Based on literature data, the amplitude of the primary Gaussian (1/2α²) can be estimated to be about 100 to 1000 times larger than the amplitude of the secondary Gaussian (η/2β²). The expression of Eq. (2) therefore describes the combination of a sharp beam profile with a very broad and low-intensity tail. C. He ion beam lithography A He ion beam was generated by field ionization from a gas source in a ZEISS microscope column, and accelerated to 30 keV. The raster scan was controlled by an Orion Nanofab pattern generator to perform direct write lithography of arbitrary user-defined layouts. The beam aperture was 7 μm and the write current was 0.19 pA for all samples. Although the source and the column are capable of supplying higher currents, these settings were dictated by the need for high resolution and by the maximum allowed frequency of the beam blanking unit. In this equipment, the raster scan speed is possibly the main disadvantage of the direct write with focused beam, as less than a square micron per second can be patterned using these settings for high resolution. Early works demonstrated that the large mass and large scattering cross section of helium ions lead to a much shallower penetration depth in matter and a smaller interaction volume than occurs with an electron beam, thus bringing remarkably superior imaging performance. 24,25 Similar to the electron beam, the PSF of the He ion beam has been determined by previous investigations and it has been conventionally modeled as the sum of two Gaussians. 26,27 As these studies found, the experimental measurement of the PSF was significantly more challenging than for the electron beam, because the He beam size is well below the resolution limit of printing detection and its proximity effect is weaker.
Among the few works on this topic, one estimated the PSF indirectly by Abel inversion of the experimentally measured line spread function of a chemically amplified resist. 28 Reported r.m.s. width values are 0.9 nm and 150 nm for the primary and secondary beams, respectively. 29 Notably, the intensity of the secondary beam was estimated to be six orders of magnitude weaker than that of the primary beam. For this reason, in this work we modeled the aerial image of the ion beam as a single Gaussian: PSF(r) = [1/(2πσ²)]·exp(−r²/2σ²), (3) where σ is the Gaussian standard deviation. D. Resist materials Poly-methylmethacrylate (PMMA, molecular weight 950k, 1% w/w in ethyl lactate) and hydrogen silsesquioxane (HSQ, 1% in methyl isobutyl ketone, MIBK) were chosen because they are sensitive resists over a broad range of energies and patternable by both photons and particle beams. For the sake of comparison, the processing (e.g., thermal treatments) was kept the same throughout all lithographic tools. III. MODELING The Lumped Parameter Model (LPM) 15 relates the CD printed in the resist with the exposure intensity distribution on and in the resist. As a consequence, this model is very compact and has a low computational burden. In the specific case of two-dimensional l/s symmetry, its exact analytical form is given by the expression of Eq. (4), reported in Ref. 30, where E is the energy (or dose) required to print a line of CD x from an aerial image of normalized intensity I(x) on a resist of contrast γ and dose-to-clear E0; x0 indicates the center location of the line. Because this model was devised specifically for photolithography, an effective thickness Deff in Eq. (4) is defined as: Deff = [1 − exp(−αγD)]/(αγ), (5) and it accounts for the finite transmissivity of the material via the actual resist thickness D, its optical absorption coefficient α, and the resist contrast γ. The LPM is, therefore, a compact and accurate method to quantitatively estimate the CD. A difference in exposure statistics between tools was demonstrated by a comparative study of chemically amplified resists exposed by EBL and EUV: those authors found that in the former, ionization occurs mostly in spatially isolated reactions, whereas in the latter, the probability of multiple ionization events in a confined space is about three-fold higher. 31 The LPM does not account for the spatial distribution of ionization events because only the aerial image intensity (i.e., its spatial energy) is provided as an input. The LPM was employed to calculate the CD vs. dose for the three aerial images. IV. RESULTS A. Resist Dose-to-Clear (E0) and Contrast (γ) parameters The E0 and γ measured for each resist and tool are reported in Table I. In agreement with the known properties of these materials, our data indicate that HSQ is significantly less sensitive than PMMA: the E0 was more than one order of magnitude higher, regardless of the tool. As for the 'relative sensitivity' of these tools when exposing the same resist, a considerable gap can be detected in the amount of charge per unit area required to clear. The EBL required an almost two orders of magnitude higher dose than the HIBL did. Several studies had previously reported a similar observation: the dose-to-clear obtained by the former is about 4-fold to hundred-fold higher than the latter. 5,8,9,32 The exposure efficiency is described by the secondary electron yield (SEY) parameter, which indicates the amount of secondary electrons generated by each absorbed primary photon, electron or ion. This definition is widely used in the photoresist community to describe the efficiency of SE generation: it is related not to the total incident dose nor to surface effects, but to the dose absorbed throughout the resist.
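As a quick numerical check of Eq. (5) as reconstructed above, the short Python sketch below evaluates Deff; the α, γ and D values are hypothetical, chosen only to show the limiting behaviour (Deff approaches D for thin, weakly absorbing films):

```python
import numpy as np

def d_eff(D, alpha, gamma):
    """Effective resist thickness, Eq. (5): (1 - exp(-alpha*gamma*D)) / (alpha*gamma)."""
    ag = alpha * gamma
    return (1.0 - np.exp(-ag * D)) / ag

# Hypothetical values: 40 nm resist, absorption 1-10 um^-1, contrast 2
D = 0.040  # um
for alpha in (1.0, 5.0, 10.0):  # um^-1
    print(alpha, d_eff(D, alpha, gamma=2.0))
# When alpha*gamma*D << 1 the output approaches D itself, i.e. a thin,
# weakly absorbing film is exposed almost uniformly through its depth.
```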
According to our experimental data, the dose ratios suggest that EUV exposure achieves a higher probability of multiple ionization events in a confined volume of resist than the EBL can accomplish. 40 The experimental dose ratio reported in the present work is higher than the theoretical predictions, which we ascribe to a higher lithographic efficiency of EUV than would be expected from pure dose equivalency. Finally, the broad differences in the dose ratio across different resist platforms reported in one of the previous works 39 suggest that the acid generation probability is also a resist-specific variable in the exposure kinetics. While most of the above mentioned values are extracted from l/s patterning at the 1:1 duty cycle dose (i.e., the dose-to-size), our dose ratios are instead calculated from dosage curves; this difference does not change our argument. As for the resist contrast, this quantity is strongly dependent on the processing conditions, and previous studies reported values of γ ranging from 1 to 3.5 for both PMMA and HSQ materials developed using conventional methodology. 9,27,41 According to our data, the contrast did not change significantly across the lithographic tools, which is a good indication of the consistency of the processing, despite the patterns being produced by different equipment. B. Dense lines/spaces The performance in patterning dense l/s was tested using EUV-IL, EBL and HIBL on PMMA (pitch 80 and pitch 60 nm) and HSQ (pitch 44 nm). The CD of lines was measured by metrological scanning electron microscopy. The resulting CD, normalized to the total pitch, is plotted for these tools as a function of the normalized dose E/E0 in Fig. 3. A consistent trend can be observed from these plots. At normalized dose below 1, that is, when E < E0, no lines are printed: E0 represents a minimum threshold energy. As the dose is increased, in a positive tone resist such as PMMA, trenches begin to be printed and the measured CD decreases with increasing dose: the measured CD represents here the width of the remaining resist lines between the trenches (measured by the SEM). As for HSQ, the resulting CD indicates a similar behavior as PMMA; in the case of this negative tone resist, the CD of lines increases with increasing dose. The LPM implementation was then fitted to the EUV-IL exposures at 13.5 nm wavelength. 42 The doses were normalized to the E0 of the two materials at EUV, as measured in Section IVa. The regression algorithm provided the γ which best fitted the data. The resulting best-fit LPM curves are shown alongside the experimental CD data in Fig. 4, and the values are summarized in Table II. V. SUMMARY AND CONCLUSIONS A comparative study of extreme ultraviolet interference lithography, electron beam lithography and He ion beam lithography has been presented here. Despite the different working principles, the similarity in the exposure mechanism in broadband resists made it possible to conduct this comparison. Preliminarily, we determined the dose-to-clear and the resist contrast of PMMA and HSQ for each tool, which were used to normalize the exposure doses. In agreement with previous studies, it was found that He ion beam lithography required about a hundred-fold lower dose than electron beam lithography did. From the dosage point of view, our findings indicate that EBL is more than four orders of magnitude less lithographically efficient than EUV-IL and more than three orders of magnitude less than HIBL, owing to the weak interaction with the resist and the high kinetic energy of the electron beam.
The ratio between the exposure doses by EBL and by EUV-IL, found experimentally in the present work, was higher than theoretical estimates reported in previous studies. We discussed this discrepancy in the light of the difference in ionization efficiency of these lithographic tools. The critical dimension of dense l/s patterns, which is a relevant figure of merit for lithographic processing and for the fabrication of integrated circuits in general, was studied in detail. The beam shape had a remarkable effect on the CD vs. dose relationship, as it accounts for the specific features of each tool. A numerical formulation of the exact lumped parameter model and a nonlinear least-squares regression were also implemented. This model estimated quantitatively the effect of different aerial images on the CD vs. dose, consistently with the experimental findings. Finally, the LPM fit to the EUV-IL exposure data extracted the parameters γ and E0, thus validating our implementation. Our findings quantitatively explain the peculiar features of each tool which make it suitable for different purposes. The EUV-IL makes it possible to pattern large areas of dense features such as line/spaces with relatively good resolution. Electron beam lithography is effective in exposing high resolution arbitrary patterns and performs better for isolated structures than for dense ones. HIBL is a promising technique for the lithography of dense high resolution patterns due to the almost negligible backscattered secondary electrons from the beam-substrate interaction, which possibly makes it ideal for both isolated and dense high resolution patterning.
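The qualitative CD vs. dose trends discussed in Section IVb can be reproduced with a much cruder model than the LPM: a simple threshold resist, in which the resist clears wherever the local dose E·I(x) exceeds E0. The sketch below uses the sinusoidal EUV-IL aerial image of Eq. (1); it is a simplified stand-in for illustration only, not the model actually fitted in the paper.

```python
import numpy as np

def cd_threshold_model(E_over_E0, pitch=1.0):
    """Normalized CD of the remaining line for I(x) = 0.5*(1 + cos(2*pi*x/p)).

    Resist clears where (E/E0) * I(x) >= 1; returns line width / pitch
    of what survives, i.e. the positive-tone (PMMA-like) case.
    """
    x = np.linspace(-pitch / 2, pitch / 2, 20001)
    I = 0.5 * (1.0 + np.cos(2.0 * np.pi * x / pitch))
    remaining = (E_over_E0 * I) < 1.0   # resist survives below threshold
    return remaining.mean()             # fraction of the pitch still covered

for dose in (0.9, 1.0, 1.5, 2.0, 4.0):
    print(dose, round(cd_threshold_model(dose), 3))
# Below E/E0 = 1 nothing clears (CD/pitch = 1); above it the cleared trench
# widens with dose, so the remaining line shrinks -- the trend seen for
# PMMA in Fig. 3, mirrored for the negative-tone HSQ.
```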
2019-04-13T17:39:24.738Z
2016-11-21T00:00:00.000
{ "year": 2017, "sha1": "f0c35239670d24efba9799200ef7ca4e33a28601", "oa_license": null, "oa_url": "https://www.dora.lib4ri.ch/psi/islandora/object/psi:8353/datastream/PDF/Fallica-2016-Comparative_study_of_resists_and-(published_version).pdf", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "f0c35239670d24efba9799200ef7ca4e33a28601", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
54137207
pes2o/s2orc
v3-fos-license
Targeting macrophage scavenger receptor 1 promotes insulin resistance in obese male mice Abstract Immune components can bridge inflammatory triggers to metabolic dysfunction. Scavenger receptors sense lipoproteins, but it is not clear how different scavenger receptors alter carbohydrate metabolism during obesity. Macrophage scavenger receptor 1 (MSR1) and macrophage receptor with collagenous structure (MARCO) are scavenger receptors that have been implicated in lipoprotein metabolism and cardiovascular disease. We assessed glucose control, tissue-specific insulin sensitivity, and inflammation in Msr1- and Marco-deficient mice fed with obesogenic diets. Compared to wild-type (WT) mice, Msr1−/− mice had worse blood glucose control that was only revealed after diet-induced obesity, not in lean mice. Obese Msr1−/− mice had worse insulin-stimulated glucose uptake in the adipose tissue, which occurred in the absence of overt differences in adipose inflammation compared to obese WT mice. Msr1 deletion worsened dysglycemia independently from bacterial cell wall insulin sensitizers, such as muramyl dipeptide. MARCO was dispensable for glycemic control in obese mice. Oral administration of the polysaccharide fucoidan worsened glucose control in obese WT mice, but fucoidan had no effect on glycemia in obese Msr1−/− mice. Therefore, MSR1 is a scavenger receptor responsible for changes in glucose control in response to the environmental ligand fucoidan. Given the interest in dietary supplements and natural products reducing inflammation or insulin resistance in metabolic disease during obesity, our results highlight the importance of understanding which ligand-receptor relationships promote versus those that protect against metabolic disease factors. Our results show that ligand or gene targeting of MSR1 exacerbates insulin resistance in obese mice.
Introduction Low-grade chronic inflammation can contribute to aspects of the metabolic syndrome, including altered endocrine control of metabolism. Inflammation can reduce the ability of insulin to alter carbohydrate metabolism in tissues that can lower blood glucose, which is often called insulin resistance (Hotamisligil et al. 1993, 1995). Higher levels of circulating and tissue-resident cytokines, chemokines, and proinflammatory immune cells are typically associated with tissue insulin resistance, which is a factor that predicts and participates in whole body dysglycemia (Hotamisligil et al. 1993, 1995; Wellen and Hotamisligil 2005). For example, increased numbers of adipose tissue-resident macrophages and inflammatory cytokines coincide with obesity-related adipose tissue expansion (Hotamisligil et al. 1993; Weisberg et al. 2003). Furthermore, increased adipose-resident macrophages are polarized to an inflammatory phenotype during obesity, and adipose-resident macrophages that are skewed toward proinflammatory characteristics correlate with body mass index (BMI) and indices of insulin resistance (Weisberg et al. 2003; Xu et al. 2003). Innate and adaptive immune responses in the adipose tissue and the intestine have been shown to connect inflammation with insulin resistance (Winer and Winer 2012; McPhee and Schertzer 2015; Winer et al. 2016). Many sources of this metabolic inflammation have been characterized, including microbial or dietary components (Cani et al. 2007; Oliveira et al. 2013; Chi et al. 2014; Henriksbo et al. 2014; Caesar et al. 2015), endogenous metabolites (Mills et al. 2016; Liu et al. 2017), xenobiotics (Pestana et al. 2017), and therapeutic drugs (Henriksbo and Schertzer 2015). Pattern recognition receptors (PRRs) can bridge potential triggers of inflammation to metabolic outcomes by acting as sensors of pathogen-associated molecular patterns (PAMPs) and/or damage-associated molecular patterns (DAMPs). There are many examples of PRRs propagating metabolic inflammation and promoting insulin resistance and defects in carbohydrate metabolism during aging, obesity, or other stressors (Shi et al. 2006; Schertzer et al. 2011; Vandanmagsar et al. 2011; Henriksbo et al. 2014; Bauernfeind et al. 2016; McBride et al. 2017). However, some PRRs protect against metabolic inflammation and insulin resistance during obesity (Denou et al. 2015; Cavallari et al. 2017). It has now been shown that PRRs can reprogram cellular metabolism and propagate inflammation as opposed to being direct sensors for obesity-associated inflammatory "ligands", such as saturated fatty acids (Lancaster et al. 2018). There is still much to learn about how obesity-related triggers of inflammation engage elements of the immune system to alter cellular and systemic metabolism. PRRs can respond to ingested nutrients, and scavenger receptors are well known as receptors for various lipoproteins (Parthasarathy et al. 1986; Babitt et al. 1997; Febbraio et al. 1999). For example, the Class B scavenger receptor SCARB1 has been implicated in lipoprotein metabolism and the Class B scavenger receptor SCARB3/CD36 has been implicated in long-chain fatty acid metabolism (Babitt et al. 1997; Febbraio et al. 1999). Class A scavenger receptors have been shown to detect and respond to modified low-density lipoproteins (mLDL) (Parthasarathy et al. 1986).
It is clear that different scavenger receptors are involved in macrophage foam cell formation, atherosclerosis, and cardiovascular disease (Ben et al. 2015; Zani et al. 2015), but the role of these scavenger receptors in glucose metabolism is not as well defined. Macrophage scavenger receptor 1 (MSR1) and macrophage receptor with collagenous structure (MARCO) are Class A scavenger receptors, and these PRRs are predominantly expressed in macrophages (Bowdish and Gordon 2009). MSR1 and MARCO sense lipoproteins that have been implicated in cardiovascular disease, but the roles of different Class A scavenger receptors in carbohydrate metabolism and insulin resistance are ill-defined. Despite the connection between cardiovascular disease and diabetes, the roles of these scavenger receptors in glucose metabolism may be distinct. It is known that deletion of Msr1 attenuates macrophage uptake of mLDL and limits atherosclerotic lesions in mice prone to atherosclerosis (Suzuki et al. 1997). Conversely, obese Msr1-deficient mice display exacerbated insulin resistance and augmented inflammation characterized by polarization of macrophage populations toward more inflammatory subsets (Zhu et al. 2014). Less is known about the role of MARCO in obesity and insulin resistance, and it is not clear if MARCO has a similar protective role in obesity-induced insulin resistance. This is worth testing since MARCO has been shown to be necessary for both toll-like receptor 2 (TLR2)- and nucleotide oligomerization domain 2 (NOD2)-mediated bacterial pathogen sensing and clearance (Dorrington et al. 2013). Deletion of Tlr2 can protect against insulin resistance, whereas deletion of Nod2 can exacerbate obesity-induced insulin resistance in obese mice (Ehses et al. 2010; Denou et al. 2015). Furthermore, specific bacterial cell wall components, such as muramyl dipeptide (MDP), are NOD2-dependent insulin sensitizers (Cavallari et al. 2017). These results warrant testing how MARCO alters insulin sensitivity and blood glucose control in comparison to other Class A scavenger receptors, such as MSR1. In this study, we used Msr1- and Marco-deficient mice fed obesogenic diets to assess the role of these scavenger receptors in regulating glucose control, tissue-specific insulin sensitivity, and inflammation. We demonstrate that MARCO is dispensable for glycemic control in obese mice. Genetic deletion of Msr1 or feeding fucoidan, a natural product ligand of MSR1, both worsened insulin resistance and blood glucose control. We found that poor blood glucose control coincided with impaired insulin-stimulated glucose uptake in the adipose tissue, which occurred in the absence of overt adipose inflammation in obese Msr1−/− mice. Animals and diets All animal procedures were approved by the Animal Research Ethics Board of McMaster University and performed according to institutional guidelines. All mice were male and were maintained on a 12-h light/dark cycle. Wild-type (WT) C57BL/6J mice were from The Jackson Laboratory (Cat# 000664). Msr1−/− mice on a C57BL/6J background were originally from the laboratory of S. Gordon (Suzuki et al. 1997). Marco−/− mice on a C57BL/6N background were originally from the laboratory of K. Tryggvason (Arredouani et al. 2004). For all studies, mice were 9-10 weeks old prior to starting experiments or switching diets.
Mice were fed a control diet (17% kcal from fat, 29% kcal from protein, 54% kcal from carbohydrate; Cat# 8640 Teklad 22/5; Envigo, Huntington, United Kingdom) or an obesogenic, high-fat, low-fiber diet (60% kcal from fat, 20% kcal from protein, 20% kcal from carbohydrate; Cat# D12492; Research Diets, New Brunswick, NJ, USA) as indicated for each experiment. Our study was designed to test glycemic perturbations of an obesogenic diet that is higher in fat and lower in fiber compared to a standard rodent diet. Muramyl dipeptide (MDP; Cat# tlrl-mdp; Invivogen, San Diego, CA, USA) was administered via intraperitoneal injection at a dose of 100 µg/mouse for 3 days prior to metabolic tests. Fucoidan from Fucus vesiculosus (Cat# F5631; MilliporeSigma, Burlington, MA, USA) was administered by oral gavage at a dose of 40 mg/kg three times per week for 4 weeks.

Genotyping

Mouse liver was digested in buffer containing 100 mmol/L Tris-HCl, 5 mmol/L ethylenediaminetetraacetic acid, 200 mmol/L NaCl, 0.2% w/v SDS, and 1.5 units of proteinase K (Cat# EO0491; Thermo Fisher Scientific, Waltham, MA, USA) at 37°C overnight. DNA was precipitated from the tissue lysate by adding an equal volume of isopropanol and gently agitating the mixture. DNA pellets were washed twice with a 75% ethanol/25% ultrapure water solution and suspended in ultrapure water. PCR amplification of isolated DNA was performed using primer sequences targeting the Msr1 and Marco genes. Msr1 was amplified from DNA samples with the following primer sequences: "WT Forward" ACC TTA TAG ACA CGG GAC GCT TCC AGA A, "WT Reverse" GAC TCT GAC ATG CAG TGT TTC TGT A, "KO Forward" ACC TTA TAG ACA CGG GAC GCT TCC AGA A, and "KO Reverse" AGG AGT AGA AGG TGG CGC GAA GG. Marco was amplified from DNA samples with the following primer sequences: "WT Forward" CAG CTG GGT CCA TAC CAG C, "WT Reverse" CTG GAG AGC CTC GTT CAC C, "KO Forward" CCA CGC TCA TCG ATA ATT TCA C, and "KO Reverse" GCC TGC AGT GGC CGT CGT TTT A. Amplified sequences were separated on a 1% agarose gel.

Glucose and insulin tolerance tests

Glucose tolerance tests (GTTs) and insulin tolerance tests (ITTs) were performed in 6-h-fasted, conscious mice. D-(+)-Glucose (Cat# G7021; MilliporeSigma, Burlington, MA, USA) and insulin aspart (NovoRapid; Novo Nordisk, Bagsvaerd, Denmark) were delivered via intraperitoneal injection at doses indicated in the figure legends. Blood glucose was determined by tail vein blood sampling at the indicated time points using a handheld glucometer (Accu-Chek Performa; Roche, Basel, Switzerland).

Glucose uptake and adiposity imaging

Insulin-stimulated uptake of 2-deoxy-2-(18F)fluoro-D-glucose into various tissues was measured by positron emission tomography as previously described (Jorgensen et al. 2013). Adiposity of these mice was determined by computed tomography as previously described (Cavallari et al. 2017).

Gene expression

Total RNA was obtained from frozen mouse white adipose tissue via mechanical homogenization at 4.5 m/sec for 30 sec using a FastPrep-24 tissue homogenizer (MP Biomedicals, Santa Ana, CA, USA) and ceramic beads, followed by guanidinium thiocyanate-phenol-chloroform extraction. RNA was treated with DNase I (Cat# 18068-015; Thermo Fisher Scientific, Waltham, MA, USA) and cDNA was prepared using 1000 ng total RNA and SuperScript III Reverse Transcriptase (Cat# 18080-044; Thermo Fisher Scientific, Waltham, MA, USA).
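Because blood glucose is sampled at discrete tail-vein time points during a GTT or ITT, the cumulative area under the curve reported in the figures follows naturally from trapezoidal integration. A minimal Python sketch of that arithmetic is given below; the time points and glucose values are hypothetical, and whether the published AUCs were baseline-corrected is left as an option rather than asserted.

```python
import numpy as np

def glucose_auc(time_min, glucose, baseline_corrected=False):
    """Cumulative area under the curve for a GTT or ITT.

    time_min : sampling times in minutes (e.g., 0, 15, 30, 60, 90, 120)
    glucose  : blood glucose at each time point (e.g., mmol/L)
    If baseline_corrected is True, the t = 0 value is subtracted first so
    the AUC reflects the excursion above fasting glucose.
    """
    t = np.asarray(time_min, dtype=float)
    g = np.asarray(glucose, dtype=float)
    if baseline_corrected:
        g = g - g[0]
    # Trapezoidal rule: mean of adjacent readings times the interval width
    return float(np.sum((g[1:] + g[:-1]) / 2.0 * np.diff(t)))

# Hypothetical GTT trace for one mouse (values are illustrative only)
t = [0, 15, 30, 60, 90, 120]
g = [8.0, 18.5, 16.2, 12.4, 10.1, 9.0]
print(f"AUC = {glucose_auc(t, g):.0f} mmol/L x min")
```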
Transcript expression was measured using TaqMan Assays with AmpliTaq Gold DNA polymerase (Cat# N8080247; Thermo Fisher Scientific, Waltham, MA, USA) in a Rotor-Gene Q real-time PCR cycler (QIAGEN, Hilden, Germany), and target genes were compared to the geometric mean of the Rplp0 and Rn18s housekeeping genes using the ΔΔCT method. Gene expression was analyzed in WT mice treated with vehicle (n = 7), WT mice treated with fucoidan (n = 5), SRA−/− mice treated with vehicle (n = 6) and SRA−/− mice treated with fucoidan (n = 8).

Data analyses

Values are reported as mean ± standard error of the mean (SEM) and P < 0.05 was considered statistically significant. Comparisons of each result were analyzed by unpaired two-tailed t-test or one- or two-way ANOVA with Tukey post hoc testing, as indicated. Nonparametric tests were utilized on data sets that were not normally distributed. GraphPad Prism 6 software was used. Values of n represent different mice for each experiment and are represented by symbols in the figures.

Data availability

The datasets generated and analyzed during this study are available from the corresponding author on reasonable request.

Results

Deletion of Msr1, but not Marco, worsened HFD-induced insulin resistance

We used mice that had a genetic deletion of Msr1 or Marco to determine if different Class A scavenger receptors were relevant to obesity-induced insulin resistance (Fig. 1A and B). Wild-type (WT), Msr1−/− and Marco−/− mice that were fed a control diet containing ~17% energy from fat had no difference in body mass or blood glucose levels during an insulin tolerance test (ITT) or glucose tolerance test (GTT) (Fig. 1C-E). We then showed that WT, Msr1−/−, and Marco−/− mice all had similar body mass after 6 weeks of feeding an obesogenic, low-fiber, high-fat diet (HFD) (Fig. 1F). However, 6 weeks of HFD revealed that only Msr1−/− mice had higher blood glucose during an ITT compared to WT mice or Marco−/− mice (Fig. 1G; P = 0.0068). In further support of Msr1 deletion worsening HFD-induced insulin resistance, we found that 10 weeks of HFD feeding caused higher blood glucose during a GTT in Msr1−/− mice compared to WT and Marco−/− mice (Fig. 1I; P = 0.0322). This glycemic effect was independent of changes in body mass between different genotypes of HFD-fed mice (Fig. 1H). These results show that Msr1 deletion worsens HFD-induced insulin resistance, whereas MARCO is dispensable during diet-induced obesity in mice.

Msr1 deletion worsens adipose tissue insulin resistance in obese mice

We next used radiolabeled glucose tracer and whole-body imaging after insulin injection in 6-week HFD-fed WT and Msr1−/− mice in order to determine tissue-specific insulin resistance (Fig. 2A and B). We found that insulin-stimulated glucose uptake was lower in the white adipose tissue (WAT) of HFD-fed Msr1−/− mice compared to WT mice (Fig. 2C; P = 0.0485). We found that Msr1−/− mice and WT mice had similar insulin-stimulated glucose uptake in all other tissues that were analyzed, including liver, skeletal muscle, heart, kidney, lungs, and brown adipose tissue (Fig. 2C and D). These results show that Msr1 deletion worsens WAT insulin resistance during diet-induced obesity in mice.

Bacterial insulin sensitizers lower glucose independent of MSR1 in obese mice

We next hypothesized that Msr1−/− mice had worse insulin resistance because of defective bacterial cell wall muropeptide/peptidoglycan sensing, similar to Nod2−/− mice (Denou et al. 2015).
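The ΔΔCT quantification described above normalizes each target gene to the geometric mean of two housekeeping genes; because Ct values are already on a log2 scale, the geometric mean of reference-gene expression corresponds to the arithmetic mean of their Cts. A minimal sketch with hypothetical Ct values follows (the function and the calibrator choice are illustrative, not the authors' code):

```python
import numpy as np

def relative_expression(ct_target, ct_refs, ct_target_cal, ct_refs_cal):
    """2^-ddCt relative expression against the geometric mean of several
    housekeeping genes (here Rplp0 and Rn18s).

    ct_target / ct_refs         : Cts for the sample of interest
    ct_target_cal / ct_refs_cal : Cts for the calibrator (e.g., WT + vehicle)
    Because Ct is a log2-scale quantity, the geometric mean of the
    reference genes' expression is the arithmetic mean of their Cts.
    """
    d_ct = ct_target - np.mean(ct_refs)              # normalize the sample
    d_ct_cal = ct_target_cal - np.mean(ct_refs_cal)  # normalize the calibrator
    return 2.0 ** -(d_ct - d_ct_cal)                 # fold change vs. calibrator

# Hypothetical Ct values (illustrative only)
fold = relative_expression(ct_target=26.1, ct_refs=[18.2, 9.7],
                           ct_target_cal=27.4, ct_refs_cal=[18.0, 9.9])
print(f"relative expression: {fold:.2f}-fold vs. calibrator")
```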
We therefore tested how glycemic control was altered by MDP, a bacterial cell wall muropeptide known to promote insulin sensitivity via NOD2 (Cavallari et al. 2017). We found that MDP treatment lowered glucose during a GTT in both WT (Fig. 3A and B; P = 0.0499) and Msr1−/− (Fig. 3C and D; P = 0.0014) mice without changing the body mass of obese mice fed a HFD for 16 weeks. We found that MDP lowered the cumulative area under the curve for glucose during a GTT to a greater extent in Msr1−/− compared to WT mice (Fig. 3E; P = 0.048). These results show that MDP improves glycemia independent of MSR1. Deletion of Msr1 actually potentiates glucose clearance in response to the NOD2 ligand, MDP.

The polysaccharide fucoidan worsens insulin resistance via MSR1 in obese mice

We next tested if a suspected ligand for MSR1 would alter glucose control. We found that oral delivery of fucoidan (40 mg/kg body mass, three times per week for 4 weeks) caused higher blood glucose during a GTT in WT mice fed a HFD for 4 weeks despite no change in body mass (Fig. 4A and B). In contrast, fucoidan did not alter body mass or blood glucose during a GTT in Msr1−/− mice fed a HFD for 4 weeks (Fig. 4C and D). These data show that engaging MSR1 with the polysaccharide fucoidan worsens glucose control in an MSR1-dependent manner without changing body mass. We also tested if genetic deletion of Msr1 or natural product targeting of MSR1 altered adipose tissue inflammation. We hypothesized that adipose tissue inflammation could underpin the adipose insulin resistance observed in HFD-fed fucoidan-treated WT mice or Msr1−/− mice. We found that neither fucoidan-treated mice nor Msr1−/− mice showed overt signs of inflammation, as suggested by no differences in transcript levels of inflammatory cytokines, chemokines, immune cell markers, or ER stress markers (Fig. 4E-G).

[Figure 1 caption: Confirmation of genetic deletion in WT and Msr1−/− mice (A) and WT and Marco−/− mice (B). Body mass (C), blood glucose, and cumulative area under the curve (AUC) during an insulin tolerance test (0.5 IU/kg; D) and glucose tolerance test (2 g/kg; E) of WT, Msr1−/−, and Marco−/− mice fed a control diet. Body mass (F) and blood glucose and cumulative AUC during an insulin tolerance test (1 IU/kg; G) of WT, Msr1−/−, and Marco−/− mice fed a high-fat diet (HFD) for 6 weeks. Body mass (H) and blood glucose and cumulative AUC during a glucose tolerance test (1 g/kg; I) of WT, Msr1−/−, and Marco−/− mice fed a HFD for 10 weeks. Body mass and AUC graphs were analyzed by one-way ANOVA with Tukey post hoc testing. Data are means ± SE. * indicates significance with P < 0.05. Numbers of mice analyzed for each condition are represented above each bar and by symbols.]

Discussion

Components of the immune system such as PRRs can link inflammation to metabolic dysfunction. We sought to understand if different Class A scavenger receptors influenced blood glucose control during diet-induced obesity in mice. We found that genetic deletion of Msr1 exacerbated adipose tissue insulin resistance. These results further reinforce the concept that not all immune components or PRRs promote metabolic inflammation and insulin resistance. We add further evidence that loss of Msr1 worsens insulin resistance during obesity, which is consistent with previous reports showing that Msr1 (i.e., SR-A) deletion deteriorates adipose tissue insulin sensitivity in obese mice (Zhu et al. 2014). Our results build on this previous work by showing that MSR1 provides unique protection from excessive insulin resistance compared to other Class A scavenger receptors, since MARCO was dispensable for changes in insulin sensitivity in obese mice.
In humans, MSR1 transcript levels in adipose tissue showed a stronger correlation with insulin sensitivity than SCARB3/CD36, a Class B scavenger receptor that has been intensely studied at the nexus of lipid metabolism and insulin resistance (Goudriaan et al. 2003; Elbein et al. 2011; Wilson et al. 2016). Given that MSR1 transcript levels are increased, mainly in the stromal vascular fraction of adipose tissue of insulin-resistant subjects, it is important to consider whether this association represents a potential cause of, or a protective response to, excessive insulin resistance. Hyperinsulinemia is a key factor in driving insulin resistance, and it is already known that insulin lowers Msr1 levels in human macrophages (Park et al. 2012). Hence, our data showing that Msr1 deletion exacerbates insulin resistance in adipose tissue of obese mice are consistent with a model where MSR1 acts as a compensatory or protective response that limits excessive insulin resistance. It is not yet clear what stimuli underpin changes in MSR1 levels during obesity, but many factors have been proposed, including scavenging of cellular debris in expanding adipose tissue. It is known that MSR1 levels do not change during lipid infusion-induced insulin resistance (Kashyap et al. 2009), and it remains possible that changes in MSR1 levels simply represent changes in macrophage numbers in the adipose tissue during obesity, since this is the cell type that predominantly expresses MSR1. Previous work shrewdly identified lysophosphatidylcholine as an obesity-relevant ligand that engages MSR1 to promote polarization of adipose tissue-resident macrophages away from inflammatory phenotypes (Zhu et al. 2014). We tested if a naturally occurring environmental ligand of MSR1, fucoidan, altered inflammation and/or glycemia in obese mice. Fucoidan is a sulphated polysaccharide found in brown seaweeds and is used as a dietary supplement. Fucoidan has been shown to lower blood glucose, insulin resistance, steatosis, ER stress, and inflammation in obese mice and rats (Jeong et al. 2013; Wang et al. 2016a,b), which supported our initial hypothesis that fucoidan would improve glycemia in obese mice and that MSR1 would mediate its efficacy. Surprisingly, fucoidan worsened insulin resistance in an MSR1-dependent manner. We found no overt change in markers of adipose tissue inflammation or ER stress due to fucoidan treatment or Msr1 deletion. This was surprising given the well-known role of MSR1 in immunity. It is not clear how deletion of this scavenger receptor reduces the ability of insulin to promote glucose uptake into adipose tissue. It is likely that reduced glucose uptake in the adipose tissue of Msr1−/− mice is due to adipocyte insulin resistance. Future experiments defining whether MSR1 alters local signals beyond inflammation, such as adipokines, and how this impacts steps in the insulin signaling cascade are warranted. Previous studies that showed improved glucose control with fucoidan treatment used higher doses of this compound (80-100 mg/kg), whereas our study used 40 mg/kg. This could explain the discordant results, but we propose that the most important advance provided by our study is the ligand-receptor specificity of the actions of fucoidan on MSR1. Sulphated polysaccharides have many potential cellular targets that depend on dose-response relationships and, to the best of our knowledge, no ligand-receptor relationship has previously been established for the actions of fucoidan on blood glucose.
This is an important consideration since human work supports the concept of fucoidan worsening insulin resistance (Hernández-Corona et al. 2014). In fact, a daily oral dose of 500 mg of fucoidan given to overweight/obese humans for 3 months increased both insulin levels and a marker of insulin resistance (HOMA-IR). This fucoidan-mediated worsening of insulin resistance in obese humans occurred despite a reduction in blood pressure and a lowering of LDL. Defining ligand-receptor relationships relevant to glucose control should help inform immunometabolic approaches aiming to treat and/or prevent metabolic diseases.

[Figure 4 caption: Fucoidan-mediated worsening of glucose tolerance during obesity requires functional macrophage scavenger receptor 1. Body mass (A) and blood glucose and cumulative area under the curve (AUC) during a glucose tolerance test (1.5 g/kg; B) of WT mice fed a high-fat diet for 4 weeks and treated with vehicle or fucoidan three times per week for 4 weeks. Body mass (C) and blood glucose and cumulative AUC during a glucose tolerance test (1.5 g/kg; D) of Msr1−/− mice under the same feeding and treatment protocol. Transcript levels of cytokines and chemokines (E), immune cell receptors (F), and endoplasmic reticulum stress markers (G) of WT and Msr1−/− mice fed a high-fat diet for 4 weeks and treated with vehicle or fucoidan three times per week for 4 weeks. Body mass and AUC graphs were analyzed by unpaired two-tailed t-tests. Transcript levels were analyzed by two-way ANOVA with Tukey post hoc testing. Data are means ± SE. * indicates significance with P < 0.05. Numbers of mice analyzed for each condition are represented above each bar and by symbols.]
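For the transcript panels above, the legend specifies two-way ANOVA with Tukey post hoc testing across the 2 × 2 design (genotype × treatment; group sizes n = 7, 5, 6 and 8 as noted in the methods). A minimal sketch of that analysis with statsmodels follows; the expression values are hypothetical and only illustrate the test structure.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical relative transcript levels for the 2 x 2 design:
# genotype (WT vs. Msr1-/-) x treatment (vehicle vs. fucoidan).
df = pd.DataFrame({
    "expr": [1.00, 1.20, 0.90, 1.10, 1.05, 0.80,
             1.00, 1.30, 0.95, 1.15, 1.02, 1.10],
    "genotype": ["WT"] * 6 + ["Msr1KO"] * 6,
    "treatment": (["vehicle"] * 3 + ["fucoidan"] * 3) * 2,
})

# Two-way ANOVA with interaction (genotype x treatment)
model = ols("expr ~ C(genotype) * C(treatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post hoc comparisons across the four groups
groups = df["genotype"] + ":" + df["treatment"]
print(pairwise_tukeyhsd(df["expr"], groups, alpha=0.05))
```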
2018-12-02T19:39:54.774Z
2018-11-01T00:00:00.000
{ "year": 2018, "sha1": "f20c5a00ad6ca0f9ac8176cf58d55203f629abad", "oa_license": "CCBY", "oa_url": "https://physoc.onlinelibrary.wiley.com/doi/pdfdirect/10.14814/phy2.13930", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f20c5a00ad6ca0f9ac8176cf58d55203f629abad", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
224822938
pes2o/s2orc
v3-fos-license
Phosphorylation of Msx1 promotes cell proliferation through the Fgf9/18-MAPK signaling pathway during embryonic limb development

Abstract

Msh homeobox (Msx) is a subclass of homeobox transcriptional regulators that control cell lineage development, including the early stage of vertebrate limb development, although the underlying mechanisms are not clear. Here, we demonstrate that Msx1 promotes the proliferation of myoblasts and mesenchymal stem cells (MSCs) by enhancing mitogen-activated protein kinase (MAPK) signaling. Msx1 directly binds to and upregulates the expression of fibroblast growth factor 9 (Fgf9) and Fgf18. Accordingly, knockdown or antibody neutralization of Fgf9/18 inhibits Msx1-activated extracellular signal-regulated kinase 1/2 (Erk1/2) phosphorylation. Mechanistically, we determined that the phosphorylation of Msx1 at Ser136 is critical for enhancing Fgf9 and Fgf18 expression and cell proliferation, and cyclin-dependent kinase 1 (CDK1) is apparently responsible for Ser136 phosphorylation. Furthermore, mesenchymal deletion of Msx1/2 results in decreased Fgf9 and Fgf18 expression and Erk1/2 phosphorylation, which leads to serious defects in limb development in mice. Collectively, our findings establish an important function of the Msx1-Fgf-MAPK signaling axis in promoting cell proliferation, thus providing a new mechanistic insight into limb development.

INTRODUCTION

Vertebrate limb development relies on the activity of signaling pathways that control patterning and growth of the limb bud along three orthogonal axes. Among them, fibroblast growth factor (Fgf) signaling is one of the dominant elements controlling the elongation of the limb bud along the proximo-distal (P-D) axis, promoting limb bud growth and progressive distalization (1). How Fgf signaling is regulated, however, remains to be further studied. Extracellular signal-regulated kinase 1/2 (Erk1/2, also known as p44/42 mitogen-activated protein kinase, MAPK) can be activated by a variety of growth factors and mitogens (2-7). Growth factor-induced activation of the MAPK signaling pathway participates in most processes of vertebrate embryonic development, and in most cases, it functions in the regulation of proliferation and differentiation (8-11). For example, during myogenesis, MAPK signaling is crucial for the growth factor-induced cellular proliferation of myoblasts, and inactivation of MAPK is required for the initiation of myogenesis (8,12,13). The way in which gene regulation of growth factors couples with MAPK activation during limb development is not yet well understood. Homeoproteins are one of the major classes of transcription factors that regulate the development of tissues and organs in vertebrates (14). Msx (including Msx1, Msx2 and Msx3) comprises one of the subfamilies of homeoproteins that control cellular differentiation during development. In vertebrates, Msx is expressed in diverse spatial and temporal domains and participates in the formation of limbs, neurotubes, craniofacial structures, mammary glands and other structures (15-25). Although Msx is important for diverse tissues during early development, it is mainly expressed in proliferating cells and is downregulated upon differentiation (17,23). For example, in the developing limb, Msx1 is expressed in a zone of undifferentiated proliferating mesenchymal cells destined to form structural elements of the limb but not in the differentiating cells forming these structures (15-18).
These and other observations have led to the postulation that Msx1 may be responsible for driving cellular proliferation (15,22,26-29), although the underlying mechanisms are not known. In this study, we first observed that Msx1 is indeed able to promote the proliferation of mouse C2C12 myoblasts and C3H10T1/2 mesenchymal stem cells (MSCs). Significantly, the MAPK signaling pathway is markedly activated upon overexpression of Msx1. We then found that Msx1 directly binds to and upregulates Fgf9 and Fgf18 expression, which subsequently triggers MAPK signaling activation. Importantly, we identified a phosphorylation site of Msx1, Ser136, and observed that mutation of Msx1 Ser136 to Ala (S136A) compromises its function, whereas mutation of Ser136 to Asp (S136D) enhances its function in upregulating Fgf9 and Fgf18 expression and activating MAPK signaling, which is consistent with a role for phosphorylation of Msx1 at Ser136 in promoting cell proliferation. Furthermore, we showed that cyclin-dependent kinase 1 (CDK1) is the kinase that phosphorylates Msx1 at Ser136. Significantly, in vivo, Fgf9, Fgf18 and p-Erk1/2 levels were downregulated in the developing limb buds when Msx1 and Msx2 were conditionally knocked out in bone, which resulted in developmental defects in limbs. In summary, our findings provide evidence of a novel mechanism by which Msx1 regulates gene expression and promotes cell proliferation and limb development.

Plasmids and site-specific mutagenesis

The expression plasmid pcDNA3 (Invitrogen, Carlsbad, CA, USA) was used for transient transfection, and pLZRS-IRES-GFP was used for retroviral gene transfer. Sequences corresponding to mouse Flag-tagged Msx1 were cloned into pcDNA3 or pLZRS-IRES-GFP. Site-directed mutagenesis at Ser136, Ser152 and Ser160 was performed by overlap extension PCR with minor modifications (30-32). The point mutation primer information is shown in Supplementary Table S1. All plasmids used were sequenced for verification.

Cell culture and myogenic differentiation

Murine myoblast C2C12 cells were obtained from the American Type Culture Collection (ATCC) and were cultured in Dulbecco's modified Eagle's medium (DMEM) (Gibco, Grand Island, NY, USA) supplemented with 10% fetal bovine serum (FBS) (Gibco) (growth medium). C3H10T1/2 cells (ATCC), as well as bone marrow-derived MSCs extracted from the femurs and tibiae of mice at 4-6 weeks after birth, were cultured in α-MEM (Gibco) supplemented with 10% FBS.

5-Ethynyl-2′-deoxyuridine staining

The fraction of proliferating cells was determined using Click-iT™ EdU Alexa Fluor Imaging Kits (Thermo Fisher Scientific). Briefly, 20,000 C2C12 cells were seeded into each well of a 12-well plate and incubated for 48 h. Cells were incubated with 10 µM 5-ethynyl-2′-deoxyuridine (EdU) for 3 h prior to fixation with 4% paraformaldehyde (PFA) (Sangon Biotech, Shanghai, China) and permeabilized with 0.25% Triton X-100. EdU-positive nuclei were labeled according to the manufacturer's protocol, followed by labeling of all nuclei with 500 ng/ml 4′,6-diamidino-2-phenylindole (DAPI, Thermo Fisher Scientific). Thereafter, the cells were observed, and images were obtained using a fluorescence microscope (ZEISS, Jena, Germany).

Cell cycle analysis

Differently treated cells were washed with phosphate-buffered saline (PBS). Cells were then resuspended in PBS containing propidium iodide (10 mg/mL), Triton X-100 (3‰) (Sigma-Aldrich) and RNase A (50 µg/ml) and incubated in the dark for 15 min.
The fractions of viable cells in G1, S and G2 phases of cell cycle were measured with a FACStar flow cytometer (BD Biosciences, San Jose, CA, USA). The Flow cytometry was performed as previously described (37). Cell proliferation assay Cell proliferation assays were performed using Cell Counting Kit-8 (CCK-8) reagent (Dojindo Laboratories, Kumamoto, Japan). Briefly, differently treated cells were transferred into 96-well plates with 4 × 10 3 cells in 100 ul growth medium per well and examined at 0, 24, 48, 72 and 96 h respectively. At each time point, 10 ul CCK-8 reagent was added into each well and incubate in 37 • C for 1 h. The absorbance at 450 nm was measured using a multimode microplate reader (BioTek, Vermont, USA). Western blotting For western blotting, cells were sonicated in lysis buffer (50 mM Tris-HCl, pH 7.5, containing 150 mM NaCl, 0.1 mM EDTA, 1% Triton X-100, 1 g/ml aprotinin, 10 g/ml leupeptin, and 1 mM PMSF) and centrifuged at 20 000 × g for 10 min. The supernatants were subjected to SDS-polyacrylamide gel electrophoresis (SDS-PAGE), followed by western blotting analysis with indicated antibodies. Enhanced Chemiluminescence Plus Western Blotting Detection Kits (Bio-Rad) and a luminescent image analyzer (Tanon, Shanghai, China) were used to visualize protein bands according to the manufacturer's instructions. The list of antibodies used for western blotting is available in Supplementary Table S2. Immunofluorescence Cell immunofluorescence assays were performed as described previously (38). Briefly, cells were washed with PBS, fixed in 4% PFA, permeabilized with 0.2% Triton X-100 for 10 min, blocked with 2% bull serum albumin (BSA) in PBS for 1 h, and then stained with the indicated antibodies. The list of antibodies used for immunofluorescence staining is available in Supplementary Table S2. Tissue immunofluorescence assays were performed as previously described (39). Freshly dissected bones were fixed in 4% PFA for 48 h and incubated in 15% DEPC-EDTA (pH 7.8) for decalcification. Then, the specimens were embedded in paraffin or OCT compound (SAKURA, CA, USA) and sectioned at a 10-m thickness. Samples were blocked in PBS with 10% HS for 1 h and then incubated with mouse anti-Sox9 or rabbit anti-PCNA antibodies, followed by incubation with the corresponding conjugated secondary antibodies. Sections were counterstained with DAPI to visualize the nuclei. RNA-seq Total RNA was isolated with TRIzol from Msx1overexpressing C2C12 cells (n = 3) or control cells (n = 3). cDNA sequencing libraries were prepared with an NEBNext ® Ultra™ RNA Library Prep Kit for Illumina. The RNA-Seq FASTQ raw data were trimmed to remove low-quality reads and adapters using Trimmomatic (40). The trimmed reads were aligned to the mouse reference genome UCSC GRCm38/mm10 with HISAT2. Gene and transcript quantification was performed using StringTie. The results of the mapping of the RNA-seq reads, transcript assembly and abundance estimation were reported as fragments per kilobase of exon per million fragments mapped (FPKM). To identify genes that were differentially expressed, the fold changes of each gene were calculated by dividing the average FPKM for the case by the average FPKM for the control. To avoid infinite values, a value of 0.01 was added to the FPKM value of each gene before log 2 transformation. Hierarchical clustering analysis (HCA) and principal component analysis (PCA) were performed using the relevant functions in R packages. 
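The fold-change rule described above (mean case FPKM over mean control FPKM, with 0.01 added to every FPKM value to avoid infinite values upon log2 transformation) is easy to make concrete. The paper's clustering and PCA were run in R; the sketch below mirrors only the fold-change step in Python, using hypothetical FPKM replicates and a >2-fold cutoff.

```python
import numpy as np

def log2_fold_change(case_fpkm, control_fpkm, pseudo=0.01):
    """log2(mean case / mean control) after adding a 0.01 pseudo-count to
    each FPKM value, mirroring the paper's guard against infinite values."""
    case = np.asarray(case_fpkm, dtype=float) + pseudo
    ctrl = np.asarray(control_fpkm, dtype=float) + pseudo
    return float(np.log2(case.mean() / ctrl.mean()))

# Hypothetical FPKM replicates (3 Msx1-overexpressing vs. 3 control samples)
fpkm = {
    "Fgf9":  ([55.1, 60.3, 58.2], [20.4, 22.1, 19.8]),
    "GeneX": ([0.0, 0.1, 0.0],    [0.0, 0.0, 0.0]),
}
for gene, (case, ctrl) in fpkm.items():
    lfc = log2_fold_change(case, ctrl)
    flag = "DEG" if abs(lfc) > 1 else "-"   # >2-fold differential expression
    print(f"{gene}: log2FC = {lfc:+.2f} {flag}")
```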
Enrichment analysis of KEGG signaling pathways was conducted by using DAVID. LC-ESI-MS/MS analysis We performed the identification of Msx1 phosphorylation by overexpressing Msx1 in C2C12 cells. The protein samples were digested by the FASP method (41), followed by LC-ESI-MS/MS analysis using a nanoflow EASY-nLC 1000 system (Thermo Fisher) coupled to an LTQ Orbitrap Elite mass spectrometer (Thermo Fisher). All analyses were performed with a two-column system. Samples were first loaded onto an Acclaim PepMap100 C18 Nano Trap Column (5 m, 100Å, 100 m i.d. × 2 cm, (Thermo Fisher)) and then analyzed on an Acclaim PepMap RSLC C18 column (2 m, 100Å, 75 m i.d. × 25 cm (Thermo Fisher)). The mobile phases consisted of Solution A (0.1% formic acid) and Solution B (0.1% formic acid in ACN). The peptides were eluted using the following gradients: 5-35% B for 0-58 min, 35-90% B for 10 min and 90% B for 5 min at a flow rate of 200 nl/min. The MS analysis was performed using data-dependent analysis; the 15 most abundant ions in each MS scan were automatically selected and fragmented in HCD mode. For data analysis, the raw data were analyzed by Proteome Discoverer (version 1.4, Thermo Fisher) using an in-house Mascot server (version 2.3, Matrix Science) (42). The mouse protein database (20170427) was downloaded from UniProt. Data were searched using the following parameters: trypsin/P as the enzyme; up to two missed cleavage sites were allowed; mass tolerances of 10 ppm for MS and 0.05 Da for MS/MS fragment ions; Carboxyamidomethylation of cysteine as a fixed modification; oxidation of methionine and phosphorylation of serine, threonine and tyrosine as variable modifications. The incorporated Target Decoy PSM Validator in Proteome Discoverer was used to validate the search results and identify the hits with a FDR ≤0.01. Msx1 knockout cell line Msx1 knockout cell lines were established using a previously reported CRISPR/Cas9 system (43). The gRNAs were designed at http://crispr.mit.edu/ website, and the relevant information is provided in Supplementary Table S3. qRT-PCR analysis Total RNA was prepared using a HiPure Total RNA Mini Kit (Magen, Guangzhou, China). RNA was reversetranscribed using the All-in-One First-Strand cDNA Synthesis Super Mix for qPCR (One-Step gDNA Remover) Kit (TransGene Biotech, Beijing, China). qRT-PCR was performed with 2× SYBR UltraSYBR Mix (Cwbio, Beijing, China) using a Light Cycler 480 system (Roche, Basel, Switzerland). The amplification procedure was as follows: 95 • C for 5 min, followed by 40 cycles of 95 • C for 10 s and 60 • C for 20 s. The cycle threshold (Ct) of each sample was used for the calculation. The expression levels of mRNA were quantified using Relative Quantification Software with GAPDH as an internal control. Primer information is provided in Supplementary Table S4. Enzyme-linked immunosorbent assay The enzyme-linked immunosorbent assays (ELISA) were performed using the ELISA Kit for Mouse Fgf9 or Fgf18 (DL Develop, Wuxi, China). Briefly, C2C12 cells overexpressing Msx1 or the control were cultured overnight with the serum-free DMEM. The supernatants were obtained and Fgf9 and Fgf18 concentrations were quantified with the ELISA Kits respectively according to the instructions of the manufacture. RNA interference Small interfering RNAs (siRNAs) used for the knockdown of Fgf9 and Fgf18 and nonspecific siRNA negative controls were designed and synthesized by Genepharma (Suzhou, China). 
The oligonucleotide sequences used in this study are shown in Supplementary Table S6. Transfection of siRNAs was performed in six-well plates. The cells were seeded into a six-well cell culture plate and cultured in growth medium on the day before transfection. The transfections of siRNAs were carried out using Lipofectamine RNAiMAX Reagent (Invitrogen) according to the manufacturer's instructions. Cells were harvested for qRT-PCR or western blotting 48 h later. Animals All procedures involving mice were approved by the Fudan University Animal Care and Use Committee. Mice were housed in the animal facility with free access to standard rodent chow and water. Germline Msx1 knockout mice were obtained from Richard Mass at Brigham and Women's Hospital (44). Prx1-Cre mice (45) were kindly provided by Professor Weiguo Zou at CAS. Msx flox/flox mice (46) were purchased from the Jackson Laboratory (Bar Harbor, ME, USA). PCR genotyping was performed using protocols described by the supplier. Both male and female mice that were 0-6 months old were used. Embryos were collected after timed mating, and noon on the day of plug discovery was considered to be embryonic day 0.5 (E0.5). Analysis of bone phenotypes Skeletal preparations were double-stained with alcian blue (Sigma) and Alizarin Red S (ARS) (47,48). Briefly, the carcasses were skinned and eviscerated, fixed with 95% ethanol for 3 days and stained with alcian blue for 3 days. Then, the skeletons were fixed with 95% ethanol three times for 1.5 h each, followed by clearing with 2% KOH for 3-4 h. After staining with ARS for 3-4 h, the skeletons were cleared in 1% KOH/20% glycerol and stored in glycerol. For histological analysis, bone tissues were fixed in 4% PFA and then embedded in paraffin. Tissue sections (10 m) were used for alcian blue and alkaline phosphatase (ALP) staining. Drug formulations The MEK inhibitor PD0325901 was purchased from Selleck Chemicals. The CDK1 inhibitor RO-3306 was purchased from APExBio Technology. The drugs were prepared in dimethyl sulfoxide (DMSO) (10 mM stock) and diluted to their final concentrations in cell culture medium prior to in vitro assays. Statistical analysis At least three independent replicates were performed for each assay. The average values from the parallel experiments are given as the mean ± SD. The comparison of differences among the groups was carried out by Student's t-test. Significance was defined as P < 0.01 (***P < 0.0001, **P < 0.001, *P < 0.01). Msx1 promotes the proliferation of myoblasts and MSCs To understand the mechanisms underlying the role of Msx1 in limb development, we first analyzed the effects of Msx1 on myoblast cell growth. Flow cytometry analysis revealed that overexpressing Msx1 in C2C12 cells increased the proportion of cells in S and G2 phase and decreased the percentage of cells in G1 phase ( Figure 1A and B), suggesting that Msx1 may promote C2C12 cell cycle progression. We further conducted cell proliferation assays and found that cells overexpressing Msx1 grew more rapidly compared with those with the empty vector control ( Figure 1C). In addition, the EdU-positive cell numbers were increased in the Msx1 overexpression group compared with those in the control group ( Figure 1D and E), and an increase in the proliferation marker PCNA was detected by western blotting in Msx1-overexpressing cells compared to that in control cells ( Figure 1F). These data indicate that overexpression of Msx1 promoted C2C12 cell proliferation. 
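Group comparisons in this paper use Student's t-test with the starring convention defined in the statistical analysis section (***P < 0.0001, **P < 0.001, *P < 0.01). A minimal sketch with scipy follows; the CCK-8 absorbance readings and the star() helper are illustrative only.

```python
from scipy import stats

def star(p):
    """Significance labels per the paper's convention:
    *** P < 0.0001, ** P < 0.001, * P < 0.01, n.s. otherwise."""
    if p < 0.0001:
        return "***"
    if p < 0.001:
        return "**"
    if p < 0.01:
        return "*"
    return "n.s."

# Hypothetical CCK-8 absorbance readings at 48 h (illustrative only)
control = [0.42, 0.45, 0.40, 0.44]
msx1_oe = [0.61, 0.58, 0.64, 0.60]

res = stats.ttest_ind(msx1_oe, control)  # unpaired two-tailed Student's t-test
print(f"t = {res.statistic:.2f}, P = {res.pvalue:.2e} {star(res.pvalue)}")
```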
To further verify the effect of endogenous Msx1 on cell proliferation, a C2C12 cell line with Msx1 knocked out (KO) was established using a transient transfection-based CRISPR/Cas9 method (43). In contrast to the results of overexpressing Msx1, both cell cycle progression and cell proliferation were slowed when Msx1 was deleted in C2C12 cells in comparison with the control group (Figure 1G and H). Accordingly, the level of PCNA was also lower in Msx1-KO than in control C2C12 cells (Figure 1I, with quantification in Figure 1J). Considering its important role in limb development, the function of Msx1 was also analyzed in the C3H10T1/2 mesenchymal stem cell line, another relevant multipotent cell line often used in musculoskeletal research. Similarly, PCNA expression was increased when Msx1 was overexpressed in C3H10T1/2 cells (Supplementary Figure S1A and B), and cells proliferated more rapidly as a result (Supplementary Figure S1C). Overall, the effects of Msx1 in C2C12 and C3H10T1/2 cells indicate that Msx1 promotes the proliferation of musculoskeletal progenitor cells.

Msx1 promotes cellular proliferation by activating the MAPK signaling pathway

To investigate how Msx1 promoted myoblast proliferation, we carried out RNA profiling analysis to compare C2C12 cell lines overexpressing Msx1 with the control cell line. The down- or upregulated genes with >2-fold differential expression are shown in the heatmap (Figure 2A). In the KEGG enrichment analysis, we found that the differentially expressed genes (DEGs) were involved in the Rap1, Ras and MAPK signaling pathways, which are all closely related to cell proliferation (8,12,49) (Figure 2B). We further performed gene set enrichment analysis (GSEA) of the DEGs; MAPK signaling was again enriched, and Msx1 expression was positively correlated with MAPK signaling activation (Figure 2C). Since MAPK signaling is a common downstream effector of the Rap1 and Ras signaling pathways (50-52), we performed western blotting analysis and found that Msx1 greatly enhanced Erk1/2 phosphorylation in C2C12 cells, while the total Erk1/2 (t-Erk1/2) abundance was unchanged (Figure 2D). In contrast, the phosphorylated Erk1/2 (p-Erk1/2) level was decreased in the Msx1-KO C2C12 cell line (Figure 2E), with quantification of three experiments in Figure 2F. Therefore, Msx1 can activate the MAPK signaling pathway in myoblast cells. To further verify whether Msx1 promotes C2C12 cell proliferation by activating the MAPK signaling pathway, we utilized the MAPK-specific inhibitor PD0325901 (53) to treat Msx1-overexpressing and control cells. Msx1 was unable to promote cell cycle progression when C2C12 cells were treated with PD0325901 (Figure 2G and H), and the increase in the proportion of cells in S and G2 driven by Msx1 also decreased to a level comparable to that found in the control cells treated with PD0325901 (Figure 2G and H). Meanwhile, the enhanced proliferation of C2C12 cells by Msx1 was reduced by PD0325901 to a rate similar to that of the control cells treated with the inhibitor (Figure 2I). Western blotting showed that when the phosphorylation of Erk1/2 was blocked by PD0325901, the proliferation marker Ki67 was no longer increased in Msx1-overexpressing cells (Figure 2J). These results demonstrated that MAPK signaling is likely the pathway through which Msx1 promotes C2C12 cell proliferation. The importance of the MAPK signaling pathway was further verified in C3H10T1/2 cells.
Once again, the p-Erk1/2 level was dramatically increased when Msx1 was overexpressed (Supplementary Figure S2A). In addition, inhibition of the MAPK signaling pathway by PD0325901 compromised the function of Msx1 in promoting C3H10T1/2 cell proliferation, reducing both the proliferation rate and PCNA level (Supplementary Figure S2B and C). Taken together, these results suggest that Msx1 promotes the cellular proliferation of both C2C12 and C3H10T1/2 cells by activating the MAPK signaling pathway. Msx1 activates the MAPK signaling pathway by directly binding and upregulating Fgf9 and Fgf18 expression To understand the molecular mechanism through which Msx1 activates cellular proliferation, we analyzed the RNAseq data of Msx1-expressing cells and found that a set of Fgf family genes, including Fgf3, Fgf9, Fgf14, Fgf15, Fgf18, Fgfr1 and Fgfbp1, were upregulated by Msx1 overexpression ( Figure 3A and B). The mRNA levels of these genes were validated by qRT-PCR, and Fgf9 and Fgf18 were found to be most prominently upregulated when Msx1 was overexpressed ( Figure 3C). The upregulation of Fgf9 and Fgf18 by Msx1 at the protein level was also confirmed by western blotting ( Figure 3D). In contrast, the expression levels of Fgf9 and Fgf18 were decreased in the Msx1-KO C2C12 cell line ( Figure 3E and F). At this point, we are not certain whether Msx1 activates MAPK signaling prior to Fgf upregulation or whether Msx1 activates the expression and secretion of Fgf first, which then leads to MAPK activation. So, we firstly examined the secretion of Fgf9 and Fgf18 by ELISA. The results showed that the abundance of Fgf9 and Fgf18 in the cell culture medium was both increased when overexpressing Msx1 in C2C12 cells ( Figure 3G), suggesting that MAPK signaling could be activated by the secretion of Fgf9 and Fgf18. We then performed an experiment in which the activity of Fgf9 and/or Fgf18 was blocked by adding an antibody against Fgf9 or Fgf18 into the culture medium of C2C12 cells to determine the impact of the neutralization of Fgf activities. As a result, we observed that the increased p-Erk1/2 level induced by Msx1 was greatly reduced by the addition of Fgf9 and Fgf18 antibodies, to a level that was comparable to that of the control cells when equal amounts of IgG were added ( Figure 3H). This result suggests that the Msx1-enhanced expression and secretion of Fgf9/18 likely precede and lead to the MAPK activation. For further validation, we used the synthesized siRNAs to knock down Fgf9 and Fgf18, and the level of p-Erk1/2 was examined. As expected, in Msx1-overexpressing cells, the level p-Erk1/2 decreased significantly when either Fgf9 or Fgf18 was knocked down; moreover, p-Erk1/2 almost decreased to the same level as that found in cells without Msx1 overexpression when both Fgf9 and Fgf18 were knocked down ( Figure 3I), suggesting that Msx1 could not promote the phosphorylation of Erk1/2 without Fgf9 and Fgf18. On the other hand, the downregulation of p-Erk1/2 by Msx1 deletion in C2C12 cells was recovered by treating the cells with recombinant Fgf9 or/and Fgf18 protein ( Figure 3J), and the quantification of three experiments was shown in Figure 3K. Thus, these results suggest that Msx1 activates the MAPK signaling pathway through upregulating Fgf9 and Fgf18 gene expression. To further understand how Msx1 activates Fgf9/18, we re-examined our previous ChIP-seq data (GSE26711) (54), and found that Msx1 binds to the promoters of Fgf9 and Fgf18 ( Figure 3L). 
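The ChIP-qPCR analyses built on these binding peaks report enrichment relative to input, with the control value defined as 1 (per the Figure 5 legend). A minimal Python sketch of that arithmetic follows; the exact input fraction is not stated in the text, so the 1% input used here is an assumption, as are the Ct values.

```python
import math

def percent_input(ct_ip, ct_input, input_fraction=0.01):
    """Percent of input recovered for one ChIP-qPCR amplicon.
    The input Ct is first adjusted to represent 100% of the chromatin."""
    adjusted_input = ct_input - math.log2(1.0 / input_fraction)
    return 100.0 * 2.0 ** (adjusted_input - ct_ip)

# Hypothetical Cts for the Fgf9 promoter amplicon (illustrative only)
msx1_ip = percent_input(ct_ip=27.5, ct_input=25.0)   # anti-Msx1/Flag ChIP
control = percent_input(ct_ip=31.2, ct_input=25.1)   # control (e.g., IgG) ChIP

# As in the figure legend: the control enrichment is defined as 1
print(f"fold enrichment over control: {msx1_ip / control:.1f}")
```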
Specifically, we found one Msx1 binding peak at Fgf9 and three at the Fgf18 gene promoter ( Figure 3L). We accordingly designed primers for the ChIP-qPCR assays and found that the relative enrichment of Msx1 at the binding site of Fgf9 and the second binding site of Fgf18 was significantly higher than that in the control ( Figure 3M). Therefore, Msx1 seems to bind to and upregulate Fgf9 and Fgf18 to activate MAPK signaling. Phosphorylation of Ser136 is critical for Msx1 promotion of myoblast proliferation Increasing evidence suggests that posttranslational modifications, especially phosphorylation, impact transcriptional activities (55). To further understand the molecular mechanism through which Msx1 regulates cell growth, we performed mass spectrometry (MS) analysis and identified three novel phosphorylated serine residues, Ser136, Ser152 and Ser160, in Msx1 ( Figure 4A). Multiple sequence alignment of amino acids showed that all three serine residues described above are conserved in mice, humans, rats and zebrafish ( Figure 4B). We constructed various retroviral plasmids expressing Msx1 harboring a single mutation or a combination of mutations at the three phosphorylation sites and checked whether these phosphorylation sites may affect the capability of Msx1 to promote myoblast proliferation. Interestingly, the C2C12 cells expressing Msx1(S152/160A) show no change in cell cycle phases compared with the C2C12 cells expressing wild-type Msx1 ( Figure 4C and D); however, the cells overexpressing Msx1(S136/152/160A) exhibit altered cell cycle progression; specifically, the number of cells in G1 phase increased to a level similar to that of the control cells ( Figure 4C and D). This suggested that Ser136 phosphorylation may play a key role in the promotion of myoblast proliferation by Msx1. We then mutated Ser136 in Msx1 to either Ala or Asp to mimic the dephosphorylated and phosphorylated forms of Ser136. Remarkably, while the Ser136 to Ala (S136A) mutant compromised the function of Msx1 in promoting cell cycle progression, the Ser136 to Asp (S136D) mutant retained the cell cycle promotion function of Msx1 ( Figure 4C and D). When analyzing cell proliferation, the C2C12 cells overexpressing the Msx1 S152/160A double mutant produced no changes compared with the cells expressing wild-type Msx1, but the cells with Msx1 in which the three phosphorylation sites were mutated to Ala (S136/152/160A) showed much slower growth ( Figure 4E). Significantly, Msx1 with the single mutation S136A did not promote C2C12 cell proliferation compared to wild-type Msx1, whereas Msx1 with the single mutation S136D did promote C2C12 cell proliferation at a similar level as wild-type Msx1 ( Figure 4E). Consistently, the proliferation marker Ki67 was also upregulated in C2C12 cells expressing wild-type Msx1, Msx1 (S152/160A) or Msx1 (S136D) but not in cells expressing Msx1 (S136/152/160A) or Msx1 (S136A) ( Figure 4F). These results demonstrated that the phosphorylation of Msx1 Ser136, but not Ser152 or Ser160, is critically required for the promotion of cell proliferation. We sought to determine whether these newly identified phosphorylation sites of Msx1 may have any effects on myoblast differentiation. Therefore, C2C12 cells expressing Msx1 with various combinations of the three mutated phosphorylation sites were subjected to myogenic differentiation. 
As the microscopy data showed in Figure 4G, C2C12 cells in the control group started to fuse to each other 3 days postinduction (dpi) and to form complete myotubes at 7 dpi. By contrast, C2C12 cells expressing either wildtype Msx1 or any mutant (S152/160A, S136/152/160A, S136A or S136D) failed to differentiate during the induction period. Consistently, the level of the C2C12 differentiation marker, myosin heavy chain (MHC) increased upon induction in the control cells, but failed to do so in cells expressing Msx1 or any of the mutants during differentiation ( Figure 4H). While the three phosphorylation sites of Msx1 show distinctive effects on cell proliferation, they all show inhibitory effect on C2C12 differentiation. Thus, the ability of Msx1 to regulate proliferation or differentiation seems to be uncoupled, and the two functions appear to be independent of each other. To understand how the phosphorylation of Msx1 Ser136 activate Fgf-MAPK signaling, we further performed ChIP assays in C2C12 cells overexpressing wild-type Msx1, Msx1(S136A), and Msx1(S136D). By examining the Msx1 binding of Fgf9 and Fgf18, we detected that the enrichment of Msx1 at the Fgf9 and Fgf18 gene promoters was dramatically decreased in cells expressing Msx1(S136A) but remained comparable to that of wild-type Msx1 in cells expressing Msx1(S136D) ( Figure 5C). Therefore, the binding of Msx1 to the Fgf9 and Fgf18 genes was dramatically weakened when Ser136 of Msx1 was not in its phosphorylated state. These observations suggest that Msx1 may promote C2C12 cell proliferation by phosphorylating Ser136 to enhance its binding to Fgf9 and Fgf18, upregulating the expression of Fgf9 and Fgf18 and consequently activating MAPK signaling. Similar observations were made in C3H10T1/2 cells. Likewise, overexpression of Msx1 upregulated the expression of Fgf9 and Fgf18 in C3H10T1/2 cells (Supplementary Figure S3A and B), while phosphorylation of Msx1 at Ser136 seems to be a key mechanism that upregulates Fgf-MAPK signaling (Supplementary Figure S3C). As a result, the proliferation of C3H10T1/2 cells was increased by Msx1(S136D) (Supplementary Figure S3C and D). CDK1 is the kinase that phosphorylates Msx1 at Ser136 We further investigated which kinase phosphorylates Msx1 at Ser136. The online software KinasePhos predicted that the only kinase that may phosphorylate Ser136 of Msx1 is CDK1 (Supplementary Table S7). The substrate peptide sequence, which CDK1 is inclined to catalyze, corresponds to the sequence of residues 132-140 in Msx1, as shown in Figure 5D. To verify this prediction, we utilized the specific inhibitor of CDK1, Ro3306 (56), to analyze its effect on C2C12 cells. By western blotting, we found that the phosphorylation of Erk1/2 in C2C12 cells was reduced by Ro3306 treatment compared with that in control cells treated with DMSO ( Figure 5E). Thus, Msx1 could not promote Erk1/2 phosphorylation when the activity of CDK1 was blocked. Remarkably, the cell line expressing the Msx1(S136D) mutant, a mimic form for constitutive phosphorylation, showed increased p-Erk1/2 levels, which is independent of Ro3306 treatment ( Figure 5E); however, the cell line harboring the Msx1(S136A) mutant exhibited no increase in Erk1/2 phosphorylation regardless of the presence or absence of Ro3306 ( Figure 5E). This is because the Msx1(S136A) is resistant to the activity of CDK1, thus unable to promote the phosphorylation of Erk1/2. 
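The KinasePhos prediction rests on CDK1's preference for proline-directed sites: the minimal CDK consensus is S/T-P, with a full consensus of S/T-P-x-K/R. A short sketch of scanning a peptide for such sites follows; the peptide standing in for Msx1 residues 132-140 is a hypothetical placeholder, since the actual sequence shown in Figure 5D is not reproduced in the text.

```python
import re

# Minimal CDK consensus is S/T-P; the full consensus is S/T-P-x-K/R,
# where x is any residue (a general property of proline-directed CDKs).
FULL = re.compile(r"[ST]P.[KR]")
MINIMAL = re.compile(r"[ST]P")

def scan_cdk_sites(seq, offset=1):
    """Return (residue number, motif class) for candidate CDK
    phospho-acceptor residues; `offset` is the residue number of seq[0]
    in the full-length protein."""
    hits = []
    for m in MINIMAL.finditer(seq):
        kind = "full" if FULL.match(seq, m.start()) else "minimal"
        hits.append((m.start() + offset, kind))
    return hits

# Hypothetical peptide standing in for Msx1 residues 132-140
peptide = "APGTSPRKR"
print(scan_cdk_sites(peptide, offset=132))  # e.g., [(136, 'full')]
```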
For further validation, we generated an antibody that specifically recognizes the Ser136-phosphorylated form of Msx1. We first demonstrated that this antibody does recognize the phosphorylated form of Msx1: the phosphorylation signal was lost when Msx1 was dephosphorylated by treatment with calf intestinal alkaline phosphatase (CIAP) (Figure 5F). Using this antibody, we further found that in C2C12 cells, phosphorylation at Msx1 Ser136 (p-Msx1) was reduced by Ro3306 treatment compared with that in control cells treated with DMSO (Figure 5G). These results suggest that CDK1 is responsible for the phosphorylation of Msx1 at Ser136.

[Figure 5 caption: Phosphorylation of Msx1 at Ser136 is essential for its binding to and upregulation of Fgf9 and Fgf18 and further activation of the MAPK signaling pathway, and CDK1 appears to be the kinase that performs this phosphorylation. (A) Western blotting to assess the effect of the Msx1 phosphorylation sites on the increases in p-Erk1/2, Fgf9 and Fgf18 levels produced by wild-type Msx1. (B) qRT-PCR assays of Fgf9 and Fgf18 levels for the same constructs; values are means ± SD, ***P < 0.0001. (C) ChIP-qPCR analysis of the impact of the Ser136 mutants of Msx1 on Fgf9/18 binding, performed in C2C12 cells overexpressing wild-type Msx1, Msx1(S136A), Msx1(S136D) or the control; enrichment of the Msx1-binding fragments of the Fgf9 and Fgf18 genes was calculated relative to input, with the control values defined as 1 to obtain the fold enrichment of Msx1 or its mutants over the control; values are means ± SD, ***P < 0.0001, **P < 0.001, *P < 0.01. (D) Diagram of the conserved residues of the CDK1 catalytic domain and the corresponding phosphorylated peptide sequence of Msx1. (E) Western blotting to examine the effect of Msx1 or its mutants and Ro3306 on the level of p-Erk1/2 in C2C12 cells; cells overexpressing wild-type Msx1, Msx1(S136A), Msx1(S136D) or the control were treated with 10 µM Ro3306 or DMSO and harvested for western blotting 20 h later. (F) Western blotting to verify the specificity of the Ser136-phosphorylated Msx1 (p-Msx1) antibody; lysates from C2C12 cells overexpressing Flag-tagged Msx1 were incubated with calf intestinal alkaline phosphatase (CIAP) at 37°C for half an hour and subjected to immunoblot analysis with the indicated antibodies. (G) Western blotting to examine the effect of Ro3306 on the level of endogenous p-Msx1 in C2C12 cells treated with 10 µM Ro3306 or DMSO for 20 h.]

The molecular Msx1-Fgf9/18-MAPK axis is required for limb development

Based on these new molecular insights into the promotion of cell proliferation by Msx1, we further explored whether these mechanisms apply to limb development. The expression profiles of Msx1, Fgf9, Fgf18, Ki67 and PCNA and the level of p-Erk1/2 in forelimb buds of mouse embryos from E9.5-14.5 were determined by qRT-PCR analysis or western blotting. As shown in Figure 6A and B, the variation patterns in the expression of all these molecules were strikingly similar during early limb development.
In brief, they all gradually increased from E9.5 to E12.5, peaked at E12.5 and E13.5 and began to decrease at E14.5 ( Figure 6A and B). To determine the developmental regulation of the Fgf9/18-MAPK signaling axis by Msx1 during in vivo development, we next generated germline Msx1-KO mice by heterozygote inbreeding. The forelimb buds of embryos at E13.5, when Msx1 is mostly expressed, were collected for further analysis. In the limb buds of the Msx1-KO mice, the levels of Fgf9, Fgf18, Ki67 and p-Erk1/2 were all decreased to different degrees in comparison with those in the buds of the wild-type mice ( Figure 6C and D). Thus, the in vivo results found in the developmental embryos agreed with our observations in vitro, indicating that Msx1 indeed participates in promoting cell proliferation in limb development by upregulating Fgf9/18 and activating MAPK signaling. To further verify the role of Msx1 in the commitment of MSCs to an osteoblast fate in limb development, Msx1 MSC-specific knockout mice were generated by crossing Prx1-Cre mice with Msx1 flox/flox mice. As reported, due to the functional redundancy of Msx1 and Msx2, Msx1 or Msx2 homozygous mutants (Msx1-/-or Msx2-/-) do not display gross limb abnormalities (21,44,57). We further generated Msx1/2 MSC-specific double-knockout mice using Prx1-Cre mice and Msx1/2 flox/flox mice (Supplementary Figure S4A). As a result, unlike germline Msx1/2 KO, which was perinatally lethal (21), Msx1/2 MSC- specific knockout produced mice (Prx1-Cre; Msx1 flox/flox ; Msx2 flox/flox ) that were viable but relatively smaller in size at both 2 days and 6 weeks after birth compared with the controls (wild-type and Msx1 or Msx2 MSC-specific knockout mice; Supplementary Figure S4B-D). The Msx1/2 MSCspecific knockout mice had an approximately 30% lower body weight at 3 weeks after birth compared with the controls (Supplementary Figure S4D). In addition, the Msx1/2 MSC-specific knockout mice displayed more severe defects in forelimb development. In appearance, the forelimbs of Msx1/2 MSC-specific knockout mice were shorter and smaller than those of the controls and could not function normally ( Figure 7A). Using CT, we found the complete absence of the radius and a decreased ulna size in Msx1/2 MSC-specific knockout mice compared with those in wildtype mice ( Figure 7B and C). Defects were not limited to the radius and ulna, as finger truncation, polydactyly, and oligodactyly were also observed ( Figure 7C). To determine whether the abnormality of the limbs in Msx1/2 MSC-specific knockout mice resulted from a pri-mary defect in osteoblast or chondrocyte development, we analyzed limbs isolated from Msx1/2 MSC-specific knockout mice and compared them in terms of the corresponding elements of the wild-type. The staining results showed that chondrogenesis ( Figure 7D) and osteogenesis ( Figure 7E) were both reduced in the limbs of Msx1/2 MSC-specific knockout mice compared with those in wild-type mice. The stem cells in the marrow cavities are key for osteochondrogenesis. To explore the cause of the reduction of osteochondrogenesis, we evaluated the levels of the proliferation marker PCNA and the chondrogenic marker Sox9 in marrow cavities using immunofluorescence staining. 
The results showed that both the Sox9 and PCNA levels were much lower in the studied tissues of Msx1/2 MSC-specific knockout mice compared with those in wild-type mice ( Figure 7F), and the numbers of both PCNA-and Sox9-positive cells in the marrow cavities were decreased by >80% in double knockout mice compared with those in controls (Figure 7G and H). These observations suggested that MSCspecific knockout of Msx1/2 impaired the proliferation of bone marrow stem cells and further led to a reduction in osteochondrogenesis. For further validation, primary bone marrow MSCs from Msx1/2 MSC-specific knockout mice and wild-type mice were obtained and cultured in vitro, and cellular proliferation was observed by CCK-8 assays. As shown in Figure 7I, compared with that of wild-type bone marrow MSCs, the proliferation of bone marrow MSCs with Msx1/2 doubleknockout was significantly slowed down. Meanwhile, the levels of PCNA, p-Erk1/2 and Fgf9/18 were dramatically decreased in Msx1/2 double-knockout bone marrow MSCs ( Figure 7J). Thus, in developing limbs, Msx1 and Msx2 function redundantly to promote bone marrow MSC proliferation by upregulating Fgf9/18 and further activating MAPK signaling. In conclusion, our studies demonstrate a coupling mechanism of Msx with Fgf/MAPK signaling to promote cell proliferation and embryonic limb development, thus providing novel molecular insights. DISCUSSION As shown in the working model in Figure 7K, CDK1 first phosphorylates Msx1 at Ser136, empowering Msx1 with the capability to bind and upregulate the Fgf9/18 genes. The increased Fgf9 and Fgf18 proteins are exocytosed to the extracellular matrix, where they activate the MAPK signaling pathway by binding to Fgfrs in an autocrine or paracrine fashion, leading to the increased phosphorylation of Erk1/2. Phosphorylated Erk1/2 then promotes myoblast proliferation in various ways. The well-established function of Msx1 in development is the inhibition of the differentiation of cells, such as those of the myogenic lineage (33,54). Our current findings focus on another function of Msx1 in promoting cell proliferation. In mice, limb development first begins with the induction of the forelimb buds at E9.5, followed by the formation of the hindlimb buds at E10 on both flanks of the embryo (marking the future forelimbs and hindlimbs, respectively) (58,59). During mouse embryonic limb development, the period from E9.5-13.5 is the fast-growing phase for the limbs, which exactly corresponds to the time period of Msx1 expression. As we have shown in this study, Msx may play a key role in driving cell proliferation during this period, allowing for the rapid growth of limb buds. We show here that Msx1 enhances cell proliferation by activating the MAPK signaling pathway by targeting Fgf9 and Fgf18 gene expression. Evidence has suggested a critical role for Erk1/2 in myoblast proliferation (60,61). Erk1/2 activity can be stimulated by a variety of growth factors in myoblasts, including Fgf, hepatocyte growth factor (Hgf) and insulin-like growth factor (Igf) (5,(7)(8)(9)(10)60). Here, we reveal a particular way by which Erk1/2 is stimulated by Fgfs that are activated by the phosphorylated Msx1 in myoblasts. Regarding how p-Erk1/2 promotes myoblast proliferation, it has been reported that p-Erk1/2 prevents cell cycle exit during G1 (49) and promotes entry into S phase (8,62). In addition, only when p-Erk1/2 is shuttled into the nucleus can it promote cell proliferation (63). 
For Msx1 to regulate Fgf9 and Fgf18 gene expression, the phosphorylation of Msx1 at Ser136 was carefully examined. We demonstrated that the non-phosphorylatable Msx1 mutant S136A is unable to stimulate Fgf/MAPK activation, whereas the phosphorylation-mimic Msx1 mutant S136D constitutively activates the Fgf/MAPK signaling pathway and promotes cell proliferation. Interestingly, a previous study showed that phosphorylation of Msx2 at Thr135 and Thr141 is the key mechanism allowing its regulation of target genes (64). Together, these studies demonstrate that phosphorylation of homeoproteins could be key to their functions in controlling transcription and development. In addition to skeletal muscle and bone, Msx1 is also expressed in many other tissues and organs, such as the heart, craniofacial derivatives, neural tube, mammary gland and a few types of tumors (65-72). It was recently found that Msx1 plays an important role in human odontogenesis. Specifically, a frame-shift mutation in Msx1 in human dental pulp stem cells weakened the activity of the MAPK signaling pathway and decreased the proliferation of cells, leading to the developmental deficiency of teeth (73). This mutation disrupted the Msx1 protein from the amino acid residue Met43 onward; therefore, Ser136 was also disrupted. In addition, we have obtained preliminary results indicating that Msx1 promotes the proliferation of ZR-75-30 breast cancer cells and PC-3 prostate cancer cells. In these cancer cells, MAPK signaling is also activated by Msx1 overexpression, and a cell line with an Msx1(S136A) knock-in mutation showed reduced Fgf/MAPK signaling and cellular proliferation. Overall, the mechanisms by which Msx1 promotes cell proliferation may function in different tissues and organs, which deserves to be further investigated.

DATA AVAILABILITY

The RNA-seq datasets used in this study have been deposited with the NCBI GEO under accession number GSE150349.
A Novel Surface Parametric Method and Its Application to Aerodynamic Design Optimization of Axial Compressors

A novel parametric control method for the compressor blade, the full-blade surface parametric method, is proposed in this paper. Compared with traditional parametric methods, it offers good surface smoothness and construction convenience while maintaining low-dimensional characteristics, and compared with the semi-blade surface parametric method, it provides a larger degree of geometric deformation freedom and can account for changes in both the suction surface and the pressure surface. Relative to the semi-blade surface parametric method, it adds only four control parameters per blade, so it does not significantly increase the optimization time. The effectiveness of this novel parametric control method has been verified in the field of compressor aerodynamic optimization by an optimization case of Stage35 (a single-stage transonic axial compressor) under multi-operating conditions. The optimization case brought the following results: the adiabatic efficiency of the optimized blade at design speed is 1.4% higher than that of the original one and the surge margin 2.9% higher, while at off-design speed, the adiabatic efficiency is improved by 0.6% and the surge margin by 1.3%.

Introduction

In recent decades, there have been many important developments in compressor aerodynamic optimization design methods. Compared with traditional compressor design methods, the optimization method not only reduces the dependence on expert experience but also overcomes the limitations of traditional design. However, aerodynamic optimization design of compressors has three typical characteristics: high dimensionality, expensive computational cost and black-box behavior, abbreviated as the HEB problem. Since the possible design space of blade geometry rises sharply with the number of optimization control variables, the optimization process can easily fall into a "dimensional disaster" [1-3]. It is often impossible to obtain an optimal solution within an amount of time acceptable in engineering. How to effectively break through the HEB problem of compressor aerodynamic optimization is one of the research hotspots in this field. One of the key aspects of the optimization design of compressor blades is geometry parameterization. The parametric method determines the size and structure of the blade geometry deformation space. One of the most promising methods for solving the HEB problem is parametric dimensionality reduction [4]. The objective of this method is to reduce the number of optimization control variables while retaining the optimal solution in the original design space, so as to shrink the design space. For the aerodynamic optimization of compressors, establishing a geometric parametric method for the compressor blade with few parameters and good performance is an effective way to reduce the dimension. Traditional geometric parametric methods for compressor blades can be divided into three categories. In the first category, the geometry of each radial section remains unchanged, and their axial and circumferential changes are used as control parameters (bow and sweep) of the 3D blade geometry [5,6].
Although this parametric method has fewer control parameters, it often leads to excessive modification of the original blade and is unable to fine-tune the blade geometry. Because most optimization targets are blades already designed in detail by experts, this parametric method often fails to meet the aerodynamic constraints in the optimization process; thus, it is rarely used in actual optimization projects. The second category takes as optimization variables the control parameters of free curves that fit the camber line of the 2D cascade geometry [7-10] or the profiles of the suction and pressure sides [11-14]. These control parameters govern the geometric changes of the 3D blades during the optimization process. The third category is a combination of the first and second parametric methods. The core disadvantages of the latter two traditional methods are the large number of geometric control parameters, the enormous design space and the fact that the parameterization process requires a fitting back-calculation, which raises the problem of fitting accuracy. The recently developed free-form deformation (FFD) parametric method [15-17] is suitable for modeling complex geometry and for local fine-tuning, but it still suffers from the disadvantage of too many control parameters in most cases, and most importantly, the method is not geometrically intuitive, which makes it difficult to obtain a suitable variable range. In 2003, the semi-blade surface parametric method was utilized by Stephane et al. [18] to modify the shape of the suction surface of a transonic rotor. The semi-blade surface parametric method first regards the blade geometry as a combination of the suction surface and pressure surface, and it is a 3D surface parametric method in the true sense. The method has good low-dimensional characteristics and produces a smooth surface, which makes it easier to guarantee the mechanical strength of the rotor; this is one of the important development directions of compressor blade parameterization [19]. However, this method limits the degree of freedom of geometric deformation. The specific problems are as follows: (1) the modified area only covers one surface (suction or pressure surface) of the compressor blade, so the effect of the other surface on the aerodynamic performance cannot be accounted for at the same time; (2) the leading and trailing edges cannot be adjusted. This reduces the geometric exploration ability of the parametric method and greatly reduces the probability that the optimal solution exists in the original design space. Aiming at the defects of the semi-blade surface parametric method, this paper proposes a new full-blade surface parametric method and uses it to parameterize the rotor and stator blades of Stage35. The full-blade parameterization method not only retains the smoothness and convenience of surface parametric control but also overcomes the shortcomings of the semi-blade parametric method. It takes into account both the suction surface and the pressure surface of the blade, so that the leading edge and trailing edge of the blade have a certain geometric variability. Thus, this method can broaden the ability of geometric exploration and improve the probability that the optimum solution exists in the original design space.
An improved artificial bee colony (IABC) algorithm is adopted for global optimization, and the strategy of "coarse grid to optimize and fine grid to analyze" is used in the optimization process to save time. Combined with the verified computational fluid dynamics (CFD) numerical method, Stage35 is aerodynamically optimized under multi-operating conditions, and an optimized solution is obtained within a relatively acceptable engineering time cost. This result verifies the superiority of the optimization method based on the full-blade surface parametric method in exploring an acceptable approach for multistage compressors with constraints and multi-objective optimization.

Full-Blade Surface Parametric Control Method

As shown in Figure 1, the full-blade surface parametric method considers the suction and pressure surfaces of the blade as a whole surface and superimposes Bezier surfaces on the entire surface to form a new blade. The control parameters of the Bezier surface are the optimization design variables that control the blade geometry in the optimization cycle, and the deformation of the blade geometry is controlled by the superimposed Bezier surface. During the superposition process, the four vertices of the Bezier surface correspond one-to-one with the four vertices of the original blade surface. The proposed full-blade surface parametric control method includes the following four steps, as shown in Figure 2:

Step 1. The original blade is unfolded along the radial line of the leading edge. As a result, the trailing edge line becomes the middle position of the unfolded blade surface.

Step 2. Chord length parameterization. Since the Bezier surface is a unit surface in the calculation domain, it cannot be directly overlaid on the three-dimensional unfolded surface of the original blade. Therefore, in order to put each point of the physical domain and of the computational domain into one-to-one correspondence in the superposition process, each point of the profile of the three-dimensional unfolded blade surface must be normalized. The method adopted is chord length parameterization, as shown in Equations (1) and (2), where ξ_{i,j} refers to the horizontal coordinate in the unit domain and η_{i,j} to the vertical one, with i ∈ (1, n_p) and j ∈ (1, n_s), where n_p is the number of points of each section profile along the chordwise direction and n_s is the total number of sections of the blade. L_C_j refers to the total length of the j-th section profile of the blade (the sections being distributed along the radial direction) and L_S_i to the total arc length of the i-th profile line of the blade (the lines being distributed along the chordwise direction); l_c_m is the length of the m-th segment along the arc direction and l_s_n the length of the n-th segment along the radial direction.

Step 3. Use the Bezier surface generation function to calculate the amount of variation at each point of the computational domain of the unfolded surface of the original blade. The value of the Bezier surface function is determined by Equations (3)-(5). The function value R(u, v) of Equation (3) represents the circumferential variation distance at each point of the unfolded surface of the original blade, where B_l^m(v) and B_k^n(u) are the Bernstein basis functions, calculated by Equation (4), and u and v are two independent variables in the calculation domain of the Bezier surface, both ranging over [0, 1].
P_{k,l} is a control vertex of the Bezier surface, and the numbers of control vertices in the horizontal and radial directions are (m + 1) and (n + 1), respectively. The binomial coefficient C_n^k is obtained from Equation (5).

Step 4. The value R is superimposed circumferentially on each point of the original unfolded surface to obtain the new blade.

The full-blade surface parametric method has some outstanding advantages. Firstly, there is no need to back-calculate the surface control points and there is no fitting accuracy problem, because the surface is superimposed rather than fitted. Secondly, the higher-order continuity of the Bezier surface [20] ensures that the new blade surface will be no less smooth than the original one. Finally, it reduces the dimensionality of the design variables and has excellent low-dimensional properties due to its inherent constraints on both the chordwise and radial control points.

Because the full-blade surface parametric control method is developed from the semi-blade surface parametric control method, it is necessary to compare the two. Compared with the semi-blade surface parametric method, the full-blade surface parametric method differs in three aspects: (1) The full-blade surface parametric method regards the suction and the pressure surface as a whole, and the moving distances of the corresponding points of the Bezier surface are superimposed on the unfolded surface of the original blade. Then, the suction and the pressure surface are connected along the leading edge to form a new blade. (2) The moving direction of the points of the original blade is no longer along the normal direction, as in the semi-blade surface method, but in the circumferential direction, as shown in Equation (6): y_new = y_old + Δs, where y_new refers to the circumferential coordinate of each point of the new blade, y_old to that of the original blade, and Δs to the moving distance of each point given by the Bezier surface. (3) As shown in Figure 3 below, the red and green points are the active points in the blade geometry deformation process. To maintain first-order continuity at the junction of the suction side and pressure side at the leading edge, the green points at ξ_1, ξ_2, ξ_6, and ξ_7 must all change by the same amount. These synchronous changes of the green points cannot change the shape of the leading edge, but they can change the inlet metal angle, which affects the aerodynamic performance. Since the aerodynamic performance of the blade is sensitive to the leading edge geometry, denser control points are required in this region.

The above three differences bring corresponding advantages to the full-blade surface parametric method: (1) both the suction and the pressure surface can be parameterized to deform the blade geometry without significantly increasing the number of optimization control variables; (2) it has a good parametric reshaping capability for the leading-edge part, which provides freedom of deformation in this region; (3) because the circumferential superposition direction is driven by the leading edge modification, the geometric changes of the blade body and trailing edge are larger than those of the semi-blade surface method, which makes it easier to break through the design limitations of the original blade.
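To make the four steps concrete, the following is a minimal sketch in Python of the chord length parameterization (Equations (1) and (2)), the Bernstein/Bezier offset evaluation (Equations (3)-(5)) and the circumferential superposition (Equation (6)). The function names, the array layout (radial sections along the first axis, chordwise points along the second) and the assumption that the circumferential coordinate is stored in the second vector component are all illustrative choices, not taken from the paper.

```python
import numpy as np
from math import comb

def chord_length_parameterize(pts):
    """Equations (1)-(2): normalize the unfolded blade surface.

    pts: (n_s, n_p, 3) array -- n_s radial sections, each with n_p
    points along the chordwise direction. Returns (xi, eta) in [0, 1].
    """
    # cumulative arc length of each section profile (chordwise direction)
    d_c = np.linalg.norm(np.diff(pts, axis=1), axis=2)
    s_c = np.concatenate([np.zeros((pts.shape[0], 1)), np.cumsum(d_c, axis=1)], axis=1)
    xi = s_c / s_c[:, -1:]                       # Equation (1)
    # cumulative arc length of each profile line (radial direction)
    d_r = np.linalg.norm(np.diff(pts, axis=0), axis=2)
    s_r = np.concatenate([np.zeros((1, pts.shape[1])), np.cumsum(d_r, axis=0)], axis=0)
    eta = s_r / s_r[-1:, :]                      # Equation (2)
    return xi, eta

def bernstein(n, k, t):
    """Bernstein basis function B_k^n(t) of Equation (4); comb(n, k) is C_n^k of Equation (5)."""
    return comb(n, k) * t**k * (1.0 - t)**(n - k)

def bezier_offset(P, u, v):
    """Equation (3): circumferential offset R(u, v) of the Bezier surface.

    P: (m+1, n+1) array of control-vertex offsets (the design variables),
    chordwise index first; u, v: parameter arrays in [0, 1].
    """
    m, n = P.shape[0] - 1, P.shape[1] - 1
    R = np.zeros_like(u, dtype=float)
    for k in range(m + 1):
        for l in range(n + 1):
            R += P[k, l] * bernstein(m, k, u) * bernstein(n, l, v)
    return R

def deform_blade(pts, P):
    """Step 4, Equation (6): y_new = y_old + Δs, applied circumferentially."""
    xi, eta = chord_length_parameterize(pts)
    new_pts = pts.copy()
    new_pts[..., 1] += bezier_offset(P, xi, eta)  # component 1 assumed circumferential
    return new_pts
```

Setting P to all zeros reproduces the original blade exactly, which gives a quick consistency check of the superposition (rather than fitting) formulation.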
Optimization Case

To verify the effectiveness of the full-blade surface parametric method in solving the HEB problem of compressor aerodynamic optimization, a global optimization engineering task was built on a commercial supercomputing platform. It contains three parts: the full-blade surface parametric method, the IABC algorithm and the CFD flow field calculation program. The general single-stage axial-flow transonic compressor Stage35 was selected as the test case to obtain the optimized compressor blade shape under multi-operating conditions at design speed and off-design speed (N = 0.8). Reference [21] gives detailed information about the geometric data and experimental data of Stage35. Stage35 has a low aspect ratio and was designed by NASA in 1978 [21]; the rotor and stator rows have 36 and 46 multi-arc blades, respectively.

Parameter Setting of Blade Parameterization

The key to the engineering optimization design of compressors is to balance the design space against the computational cost. As the design space increases exponentially with the number of optimization variables, it is easy to fall into a "dimensional curse" that can lead to the failure of the optimization. In practice, engineering optimization requires that the optimization variables be minimized while ensuring a sufficient design space. In other words, the parametric method needs to ensure low-dimensional characteristics. In the literature [18,19], the semi-blade surface parametric method only selects the suction surface as the optimization object; however, this selection is applicable only to the case in which the loss of the suction surface far exceeds that of the pressure surface, whereas the influence of both surfaces on the aerodynamic performance is accounted for by the full-blade surface parametric control method. In principle, the number of control points can be increased to improve the control accuracy. However, due to the global deformation characteristics of the Bezier surface, too many control points easily interfere with each other and enlarge the design space. As a rule of thumb, seven points in the chordwise direction are sufficient to control the geometric deformation, while the number of control points in the radial direction can be increased as needed. In this case, a 6 × 3 order (seven points chordwise, four points radial) Bezier surface is adopted to parameterize the rotor and stator blades of Stage35 by the full-blade surface parametric method. The distribution of control variables is shown in Figure 3. Seven points are distributed in the chordwise direction of the unfolded blade surface, at relative positions 0.0, 0.1, 0.3, 0.5, 0.7, 0.9 and 1.0. Four points are set along the radial direction of the blade. Due to the large end-wall flow loss in the hub area, the radial control points must be clustered more densely near the hub, at relative positions 0.0, 0.2, 0.5 and 1.0. The range of the optimization variables must be appropriate: if it is too small, it will be difficult to break through the limitations of the original blade geometry; if it is too large, the flow rate and pressure ratio requirements will easily be violated. Based on trial and error, the variation range of each optimization variable is set to [−6.0, 6.0] mm.
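For concreteness, the control net and bounds for one blade row could be set up as follows; recall from the previous section that the four leading-edge points (ξ_1, ξ_2, ξ_6, ξ_7) at each radial station change in step, which is what reduces the 7 × 4 = 28 control vertices to 16 design variables. The names and layout below are illustrative, and in the pure offset formulation sketched earlier only the P values enter the Bezier evaluation.

```python
import numpy as np

# placement of the control vertices on the unfolded surface (from the text)
xi_stations  = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])  # chordwise
eta_stations = np.array([0.0, 0.2, 0.5, 1.0])                 # radial, clustered at hub

# per-variable bounds, +/- 6.0 mm, found by trial and error in the paper
X_LOW, X_UP = -6.0, 6.0

def expand_design_vector(x):
    """Map the 16 optimization variables of one blade row onto the 7 x 4
    control net: per radial station, one tied value for the leading-edge
    group (chordwise indices 0, 1, 5, 6) and three free values for the
    interior points (indices 2, 3, 4), i.e. 3 * 4 + 1 * 4 = 16 variables.
    """
    x = np.asarray(x, dtype=float).reshape(4, 4)  # one row per radial station
    P = np.zeros((7, 4))
    for j in range(4):
        P[[0, 1, 5, 6], j] = x[j, 0]              # green points move in step
        P[[2, 3, 4], j] = x[j, 1:]                # red points move independently
    return P
```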
Due to the endpoint tangency of the Bezier surface, keeping the leading edge first-order continuous and smooth during the geometric modification requires that the first two points and the last two points in the ξ direction, i.e., those on the suction and pressure sides of the leading edge, be changed in step. Therefore, the total number of optimization variables of a single blade row is 3 × 4 + 1 × 4 = 16, whereas in the traditional parametric method of directly fitting the suction and pressure surfaces, the number of control parameters of a single blade row is about 60 [22]. Even if the thickness distribution is left unchanged and only the mean camber lines of the four radial cross-sections are modified, about 30 parameters are required, and it is not easy to ensure radial smoothness. The essence of the Bezier surface parametric modification is that the corresponding control points of different cross-sections are radially constrained in the same way, resulting in a radially smooth surface on the one hand and a smaller design space on the other.

Optimization Algorithm

The use of adjoint algorithms and surrogate modeling techniques is an important route to solving the HEB problem of compressor aerodynamic optimization. At the end of the 1980s, the adjoint algorithm emerged and developed rapidly in the field of aerodynamic optimization design [23,24]. Soon, this new method was taken up by researchers in the field of turbomachinery and produced many research results [25-27]. The adjoint method has an obvious advantage: the computational cost of the optimization is independent of the number of optimization variables, and the gradient (and hence the sensitivity) of each design variable can be quickly obtained by solving the adjoint equations. However, its disadvantages are that the optimization results are only local, and it is difficult to deal with multiple constraints and multiple objectives [28]. The essence of the surrogate model method is to train on samples in the design space to obtain a fitting function that can be evaluated quickly, so as to greatly reduce the computational cost [29-31]. However, this method has two defects. First, the number of samples required for a sufficiently precise surrogate model increases sharply with the number of optimization variables, which itself costs a great deal of computational time, so the method is not suitable for high-dimensional problems. Second, in high-dimensional problems, the difficulty of fitting the samples greatly increases; even if the sample size is sufficient, it is difficult to obtain an accurate surrogate function, and wrong optimization solutions are easily produced. In recent years, deep reinforcement learning has been applied in the field of engineering optimization [32]. This method is a combination of deep learning and reinforcement learning and has good perception and decision-making abilities. However, it has not yet been well applied to the aerodynamic optimization of 3D blades. The evolutionary algorithm is a widely used method in the field of engineering optimization [33-35], with good robustness and global convergence. For a better global search, the optimization process uses the IABC algorithm proposed in the literature [36].
The IABC algorithm is used because it has the following three advantages: (1) it is global, robust, and does not depend on the initial solution; (2) the individuals of each generation are independent of each other, which allows for multitask concurrency and shortens the optimization time by taking advantage of the enormous computational power of commercial supercomputing platforms [37]; (3) compared with general-purpose genetic algorithms and the standard artificial bee colony (ABC) algorithm, the IABC algorithm has better global search and faster convergence. Its disadvantage is that convergence requires more iterations than the adjoint algorithm and surrogate techniques. However, engineering optimization often does not pursue the globally converged solution; it needs a relatively optimal solution that meets the engineering requirements within an acceptable time range. Therefore, the IABC algorithm is used as the optimization algorithm, and the time to exit the cycle can be determined by the designer (usually by setting an upper limit on the number of times the bees exploit food sources).

The IABC algorithm is outlined below. In the standard ABC algorithm, both onlooker and employed bees explore new food sources in the way shown in Equation (7), but that equation only uses local random information about the bees. The IABC algorithm improves on this by letting employed bees explore new food sources with global-best information, via Equation (8), and onlooker bees with local-best information, via Equation (9). Here V_i^j indicates the location of the new food source explored by the employed bee, ξ_i^j and λ_i^j are random numbers in [−1, 1], X_i^j refers to the j-th component of the i-th individual, and X_best^j is the location of best fitness among successive generations of food sources. (X_Neib^j)_best represents the location of best fitness within the neighborhood, and (V_Neib^j)_best represents a new food location explored by the onlooker bees. The local-best information in Equation (9) requires the Chebyshev distance, which is expressed in Equation (10) as d(i, t) = max_j |X_i^j − X_t^j| with j ∈ {1, 2, ..., D}, where d(i, t) is the Chebyshev distance between X_i and X_t. The neighborhood of X_i is determined by Equation (11): X_t is in the neighborhood of X_i when the Chebyshev distance between X_i and X_t is less than or equal to the product of the neighborhood radius r and the average Chebyshev distance, and not otherwise. Here md_i is the average Chebyshev distance between X_i and the entire onlooker bee population, S is the neighborhood of X_i, and r is the radius of the neighborhood; experience shows that the algorithm converges best when r is 1. (X_Neib)_best is computed from Equation (12), where fit(·) is the fitness value of each individual bee. In benchmark function tests, the IABC algorithm showed better global search accuracy and convergence speed than the ABC and GA algorithms; details can be found in Ref. [32]. In this case, the colony size of the IABC algorithm is 80, the number of generations is 20, and the upper limit of exploitation attempts is 3. A total of approximately 1700 evaluations is performed on a commercial supercomputing platform. Each evaluation task requires seven cores in parallel, and a maximum of 24 tasks can run concurrently. Each task takes 15 min, and the total optimization time is 48 h.
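The neighborhood machinery of Equations (10)-(12) is simple to express in code. The sketch below implements the Chebyshev-distance neighborhood and the local best exactly as described above; for the employed-bee move, however, the precise coefficients of Equations (8) and (9) are not reproduced in the text, so a common gbest-guided form consistent with the quantities defined above is used as a stand-in and should not be read as the paper's exact update rule.

```python
import numpy as np

def chebyshev(a, b):
    """Equation (10): Chebyshev distance, the max absolute component difference."""
    return np.max(np.abs(a - b))

def neighborhood(X, i, r=1.0):
    """Equation (11): indices t with d(i, t) <= r * md_i, where md_i is the
    average Chebyshev distance from X[i] to the whole population."""
    d = np.array([chebyshev(X[i], X[t]) for t in range(len(X))])
    return np.where(d <= r * d.mean())[0]

def local_best(X, fit, i, r=1.0):
    """Equation (12): the fittest food source within the neighborhood of X[i]."""
    nb = neighborhood(X, i, r)
    return X[nb[np.argmax(fit[nb])]]

def employed_move(X, fit, i, rng):
    """Gbest-guided stand-in for Equation (8): a random-neighbor term plus
    an attraction toward the global best; xi, lam lie in [-1, 1]."""
    t = rng.choice([k for k in range(len(X)) if k != i])
    xi = rng.uniform(-1.0, 1.0, X.shape[1])
    lam = rng.uniform(-1.0, 1.0, X.shape[1])
    return X[i] + xi * (X[i] - X[t]) + lam * (X[np.argmax(fit)] - X[i])

# usage: rng = np.random.default_rng(0); X is an (80, 16) population array
```

With the colony settings quoted above (size 80, 20 generations, exploitation limit 3), each candidate food source corresponds to one CFD evaluation, which is what makes the concurrent multitask mode of the supercomputing platform so effective.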
Objective Function and Constraints

To further reduce the optimization time, experience shows that it is often possible to improve both the adiabatic efficiency and the surge margin by choosing the optimization operating point between the design point and the near-surge point. Therefore, a back pressure of 140,000 Pa at design speed is selected as optimization operating point 1, and a back pressure of 120,000 Pa at off-design speed (N = 0.8) as optimization operating point 2. The multi-objective optimization adopts the weighted summation method, and the objective function is expressed in Equation (13) as f = ω_1 · eff_1 + ω_2 · eff_2, with the constraint conditions given in Equation (14). Here eff_1 and eff_2, respectively, refer to the isentropic efficiency at design speed and off-design speed; ω_1 and ω_2 are the weighting factors of the isentropic efficiency at the two rotating speeds, both set to 0.5; TPR_1 means the total pressure ratio and TPR_1^ori the original total pressure ratio; mass_1 refers to the updated flow rate and mass_1^ori to the original flow rate; minus represents an artificially assigned minimum value; x_i denotes a design variable; and x_i^L, x_i^U refer to its lower and upper variation limits, respectively. The maximization of the weighted sum of the isentropic efficiencies of the operating conditions at design speed and off-design speed is taken as the optimization goal. The strong constraint conditions are set as follows: the relative variation of the flow rate of the new blade must not exceed 0.5% of that of the original one, and the total pressure ratio must not decrease. If the requirements are not satisfied, the objective function f is set to the minimum value to exclude that food source from the optimization process. Under such strong constraints, in order to generate a larger feasible solution space, more bees should be employed to enhance the exploration ability of the algorithm.

Numerical Method

Flow field calculation is an important part of performance evaluation in the optimization process. The CFD evaluation tool used is the FINE/Turbo module of the commercial software NUMECA. The CFD calculation parameters are set as follows: the turbulence model is the Spalart-Allmaras (S-A) one-equation model, the mixing plane method is used for the interface between rotor and stator, the fourth-order Runge-Kutta method is used for the time discretization, the central difference method is used for the spatial discretization, and artificial viscosity is used to suppress spurious oscillations at the shock wave. Multigrid technology and an implicit residual smoothing scheme are set up to accelerate convergence. The S-A turbulence model has been successfully applied in engineering. Although it is a one-equation model, its accuracy is not inferior to that of two-equation models. In addition, the S-A turbulence model is not derived and simplified from a two-equation model, but was built directly, step by step, using experience and dimensional analysis. It has three advantages: (1) no restriction on grid form and topology; (2) fewer mesh points are required in the near-wall region; (3) good numerical stability and convergence. As shown in Figure 4, the inlet boundary conditions of the calculation are set as follows: the total pressure is set to 101,325 Pa, the total temperature to 293.15 K, and the incoming airflow direction to axial; the no-slip boundary condition is used for the walls, and the non-reflecting 2D condition is used for the rotor/stator interface.
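As a concrete restatement of the penalized objective of Equations (13) and (14), the following sketch shows how an infeasible design is excluded by assigning it the artificial minimum; the numerical value of `MINUS` is an illustrative placeholder, not a figure from the paper.

```python
MINUS = -1.0e6  # placeholder for the paper's artificially assigned minimum

def objective(eff1, eff2, tpr1, tpr1_ori, mass1, mass1_ori, w1=0.5, w2=0.5):
    """Weighted-sum objective f = w1*eff1 + w2*eff2 (Equation (13)) under the
    strong constraints of Equation (14): the flow-rate variation must stay
    within 0.5% of the original, and the total pressure ratio must not drop.
    """
    if abs(mass1 - mass1_ori) / mass1_ori > 0.005:
        return MINUS                      # flow-rate constraint violated
    if tpr1 < tpr1_ori:
        return MINUS                      # pressure-ratio constraint violated
    return w1 * eff1 + w2 * eff2
```

In the IABC loop, food sources whose fitness equals the minimum are simply never selected, which is how the strong constraints steer the search toward the feasible region.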
The outlet boundary conditions are set as follows: the average static pressure is prescribed at the exit, and during the simulation the back pressure is gradually changed from low (choked point) to high (surge point).

Numerical Method Verification

For the purpose of accurate simulation, verifying the accuracy of the numerical simulation is a necessary step before optimization. Here, the above-mentioned Stage35, with its detailed experimental data, is selected as the verification case. Figure 5 compares the pressure and efficiency characteristic lines obtained by simulation with the experimental values. From the comparison of the pressure characteristic line, the maximum pressure ratio of the experiment is 3.7% higher than the calculated value, while the experimental value of the choked flow is 1.4% lower than the calculated value. For the efficiency characteristic line, the experimental value of the highest efficiency is larger than the calculated value. At the same time, the experimental value of the surge margin is 8% larger than the calculated value. The relative error between the CFD calculation and the experimental data is attributed to the turbulence model and to oscillation error near the stall point, especially the flow deviation there. On the whole, however, the trends of the experimental and calculated values are consistent, and the flow field analysis is not performed at the stall point. Relatively speaking, the calculation accuracy is good, which ensures the reliability of the flow field calculation in the aerodynamic optimization cycle. Although a gap remains between the calculated and experimental values, the focus of this paper is to verify the optimization method, and that verification is mainly based on the comparison of calculated performance before and after optimization. It should be noted that the simulations before and after optimization use exactly the same grid template and calculation parameter settings; the comparison is therefore between two simulated values and has little to do with the difference between simulated and experimental values. We can thus consider the simulation used in this paper adequate for measuring the performance of the optimization method; in the field of turbomachinery design optimization, such calculated-performance comparisons are common practice.

Grid Setting and Independence Verification

The general case Stage35 is also utilized to perform the grid independence verification, and its grids are automatically generated by the AutoGrid5 module of the commercial software NUMECA. The mesh details are shown in Figure 6. The grid generation parameters are set as follows: the first-layer grid spacing is 0.001 mm to ensure that the first-layer y+ ≤ 5, the rotor blade tip clearance is 0.408 mm, the stator blade hub clearance is 0.4 mm, and the grid topology is 4HO. The higher the quality of the grid, the higher the accuracy of the flow field calculation; at the same time, the larger the number of grid cells, the longer the convergence time of the flow field calculation. Therefore, before the actual optimization process, it is necessary to make a trade-off between the quality and quantity of the grid [38,39]. As shown in Figure 7, four sets of different grid templates are utilized to generate the grid for Stage35.
The main difference among these grid templates is the grid density, while the quality of each set of grids is guaranteed to meet the calculation requirements. The grid counts of the four sets for the rotor blade are 320,000, 680,000, 1,020,000 and 1,840,000, respectively, and those for the stator blade are 340,000, 750,000, 1,100,000 and 1,880,000, respectively. It is obvious that the performance of the third set of grids is very close to that of the fourth set, so the third set is accurate enough to be adopted for the flow field calculation. In order to ensure a certain computational accuracy while saving computational cost, this paper adopts the method of "coarse grid optimization and fine grid verification" [40,41], in which the second set of grids is used for the optimization and the third set is then used for the flow field analysis. As a result, about 1/3 of the time can be saved.

Figure 8 shows each step of the optimization process. The first step is to read the design variables of the blade geometry as input parameters, then initialize the food sources (design space) to obtain the initial solutions, and then calculate each fitness value. The fitness evaluation contains three parts: parametric modeling, mesh generation and flow field calculation. When the performance of the new blade is good enough, the optimization loop finishes and the optimal blade geometry is output; otherwise, the optimization algorithm (IABC) is used to explore the design space and propose new feasible solutions. All the above steps constitute one iteration. This cycle repeats until the exit condition (the upper limit on the number of generations) is satisfied, upon which the iteration is completed and the optimized blade is obtained; a compact code sketch of this loop is given below, after the results comparison.

Tables 1 and 2 compare the performance of Stage35 at the working points at design speed and off-design speed before and after optimization. It can be noted from Table 1 that at design speed, the adiabatic efficiency of the optimized blade is 1.4% higher and the surge margin 2.9% higher, while the total pressure ratio and the flow rate remain within the constraints. As seen from Table 2, at the off-design-speed working point, the adiabatic efficiency improves by 0.6% and the surge margin by 1.3%. As shown in Figure 10, the performance of the optimized blade at the design point is not the highest; the peak efficiency is reached at a higher back pressure. At back pressures above the peak-efficiency point, the optimized efficiency improves by an average of 2%. Overall, the flow rate range of the optimized compressor is reduced, but the comprehensive surge margin (including the effect of the pressure ratio) is still improved. At off-design speed, the adiabatic efficiency of the optimized blade above the back pressure of the working point is higher, which not only improves the efficiency at the working point but also broadens the comprehensive surge margin.

Figure 10. Comparison of the performance before and after optimization at different speeds.

Figure 11 shows the corresponding position of each control parameter. Combined with Table 3, the optimized change of each design variable can be read off, where "R_" refers to the design variables of the rotor blade and "S_" to those of the stator blade.

Figure 11. Control parameter distribution.
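Putting the earlier fragments together, the loop of Figure 8 can be sketched as follows. Only the employed-bee phase of the IABC is shown; `run_cfd`, `TPR1_ORI` and `MASS1_ORI` are hypothetical placeholders for the mesh-generation/CFD stage and the original-blade reference values, and everything else reuses the illustrative functions defined above.

```python
import numpy as np

def evaluate_fitness(x, blade_pts):
    """One fitness evaluation of Figure 8: parametric modeling, mesh
    generation and flow field calculation. The CFD stage is a stub here;
    in the paper it is a FINE/Turbo run on the coarse (second) grid set.
    """
    P = expand_design_vector(x)                    # sketched earlier
    new_blade = deform_blade(blade_pts, P)         # sketched earlier
    eff1, eff2, tpr1, mass1 = run_cfd(new_blade)   # hypothetical CFD wrapper
    return objective(eff1, eff2, tpr1, TPR1_ORI, mass1, MASS1_ORI)

def optimize(blade_pts, rng, n_bees=80, n_gen=20):
    """Skeleton of the IABC-driven optimization loop of Figure 8."""
    X = rng.uniform(X_LOW, X_UP, (n_bees, 16))     # initialize food sources
    fit = np.array([evaluate_fitness(x, blade_pts) for x in X])
    for _ in range(n_gen):
        for i in range(n_bees):
            v = np.clip(employed_move(X, fit, i, rng), X_LOW, X_UP)
            fv = evaluate_fitness(v, blade_pts)
            if fv > fit[i]:                        # greedy selection
                X[i], fit[i] = v, fv
    return X[np.argmax(fit)]
```

With 80 bees over 20 generations this skeleton performs about 1600 CFD evaluations, consistent with the roughly 1700 evaluations and 48 h wall time reported above.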
Figure 12 shows the comparison of the geometric changes of the rotor and stator blades before and after optimization; the contour plot shows the distribution of the variation value at each point of the unfolded surface. It can be seen from the figure that the leading edge angle of the optimized rotor blade increases in the hub and tip regions and changes only slightly over the blade body. In the region from the hub to 70% of the rotor blade span, the rear half of the airfoil becomes somewhat convex, which reduces the camber angle of the corresponding airfoil, especially at the 30% span position. The comparison also shows that the leading edge angle of the optimized stator blade increases significantly, leading to a larger camber angle of the airfoil, with the most significant geometric change in the tip region of the blade. It is also found that the blade thickness at the tip and hub of both the rotor and stator blades becomes thinner. However, this paper only considers the aerodynamic performance and does not, for the time being, consider the strength.

Analysis of the Optimization Results at Design Speed

As shown in Figure 13, at design speed, the performance improvement of the optimized rotor is mainly concentrated in the radial region from 10% to 85% of the blade span, while the performance decreases in the tip and hub regions of the rotor blade. Meanwhile, the comparison also shows that the performance of the stator blade changes much less than that of the rotor blade after optimization. Figure 13b shows that the total pressure ratio decreases over the rotor span except in the 25-50% span region, where it remains unchanged. From the limiting streamline comparison shown in Figure 14, it can be seen that the separation line on the suction surface of the rotor blade is extended at the hub, but the separation line as a whole is significantly pushed toward the trailing edge, so the separation loss in the hub region of the optimized blade increases while the separation loss in the area above the hub decreases. The same conclusion can be drawn from the entropy distributions shown in Figures 15 and 16. From Figure 15, it can be seen that the high-entropy region becomes larger in the hub and tip regions of the optimized blade, while in the main flow channel at 10-80% of the blade span the entropy value decreases and the flow is smoother. From Figure 16c, the entropy value on the suction surface at 0.95 blade span is relatively high, owing to the complex flow in this region, where a leakage vortex interacts with the main flow. To further analyze the reasons for the decrease in efficiency at the hub and tip of the rotor blade and the increase in efficiency in the middle region, we extracted the Mach number on the blade-to-blade (B2B) surface before and after optimization, shown in Figure 17, and the static pressure distribution before and after optimization, shown in Figure 18, for comparative analysis. From Figure 17, it can be seen that in the hub region of the optimized blade, the shock wave position on the suction surface is significantly pushed back, but the separation area near the trailing edge behind the shock wave is significantly increased, thus increasing the separation loss there.
Combining Figures 12 and 18a, the reason for the above phenomenon is as follows: the increases in the leading edge blade angle at the hub and in the camber angle of the optimized rotor blade reduce the positive incidence angle of the incoming flow. This lengthens the acceleration region of the airflow on the suction surface, so that the pressure-recovery region behind the shock wave shortens and the adverse pressure gradient increases. As a result, the airflow separation region grows and the corresponding separation loss increases. Therefore, the airflow turning angle at the hub does not increase, and the total pressure ratio still decreases slightly even though the camber angle at the hub of the rotor blade increases. The airflow development in the tip channel of the blade is similar to that at the hub. For the middle section of the blade, pushing the shock wave back does not increase the adverse pressure gradient, because the camber angle of the blade's middle section is reduced. In this case, the separation region behind the shock wave decreases and the corresponding loss decreases. As shown in Figure 19, low-energy fluid flows from the pressure surface through the tip gap onto the suction surface and sinks at the leading edge, mixing with the main flow in the channel, rolling up into the leakage vortex and progressing toward the trailing edge and the pressure surface of the adjacent blade. In this process, there is a complex interaction between the secondary flow at the end wall and the leakage vortex. As can be seen from Figure 19, the tip leakage vortex dispersion angle of the optimized blade is smaller, so more airflow can swirl through to the channel outlet, reducing the corresponding airflow loss. Additionally, more of the leakage vortex in the original blade channel collides with the pressure surface of the adjacent blade; thus, the loss is larger there. At the same time, the rotational strength of the leakage vortex of the optimized blade is smaller, and the corresponding viscous dissipation and mixing losses are also reduced. At design speed, although the tip leakage vortex loss of the optimized rotor blade decreases, this reduction is outweighed by the loss increase in the separation region behind the shock wave, so the overall efficiency of the tip section decreases. In contrast, the increase in losses at the hub and tip of the optimized rotor blade is outweighed by the efficiency increase in the middle region, so that the overall performance at the design point is improved at design speed.

Analysis of the Optimization Results at Off-Design Speed

As shown in Figure 20a, the efficiency of the stator blade at off-design speed changes little before and after the optimization, as at design speed, so the following analysis focuses on the rotor blade. Apart from a reduction in efficiency at 15-35% of the rotor blade span, the efficiency of the other parts of the optimized rotor blade is improved, with the best optimization effect at 90% span. It can be seen from Figure 20b that the total pressure ratio at 30-75% of the span of the optimized rotor blade is slightly increased, while that at 75-95% of the span is slightly decreased.
From the limiting streamlines on the suction surface of the rotor blade in Figure 21, the limiting streamlines after optimization are significantly shortened and the separation line in the 0-40% span region disappears, so the corresponding separation loss is reduced. A reattachment line can be observed on the suction surface both before and after optimization, which is due to separation bubbles on the suction surface of the rotor blade caused by the large positive incidence angle of the incoming flow at off-design speed. The presence of a separation bubble means that there is a separation loss there, and at 60-70% of the blade span the optimized separation bubble grows, increasing the corresponding airflow loss. The flow field at off-design speed is analyzed in more depth by combining the Mach number on the B2B surface in Figure 22 with the surface static pressure distribution in Figure 23. The Mach number at the leading edge at the hub and tip of the optimized rotor blade is reduced, which weakens the shock wave intensity at these locations, leading to a reduction in the corresponding shock losses as well as in the losses from the interaction between the shock wave and the boundary layer. We also note that the separation area at the trailing edge of the rotor blade hub is reduced after optimization, and thus the corresponding separation losses are reduced. For the middle section, the intensity of the shock wave at the leading edge of the optimized blade increases, and the corresponding shock loss increases. At the same time, the positive incidence angle of the incoming flow in the middle of the rotor blade increases, while the camber angle and the trailing edge angle of the blade change little here. This increases the turning angle of the airflow and the corresponding work input, so the total pressure ratio increases there. We then study the tip leakage flow of the rotor blade shown in Figure 24. Similar to the leakage flow at design speed, the leakage vortex dispersion angle of the optimized rotor blade at off-design speed decreases. The optimized leakage vortex can develop over a longer distance in the chordwise direction, and the vortex strength decreases significantly. These two effects reduce the viscous dissipation loss and the mixing loss between the low-energy fluid and the mainstream. Therefore, the performance improvement of the optimized rotor blade at off-design speed comes mainly from three aspects: (1) the disappearance of the separation line on the suction surface near the hub; (2) the decrease in separation loss near the hub trailing edge; (3) the reduction in the tip leakage loss.

Conclusions

In this paper, a new parametric control method for the compressor blade, the full-blade surface parametric control method, is proposed. The method is applied to the single-stage transonic axial-flow compressor Stage35 for aerodynamic optimization under multi-operating conditions for validation, and the following conclusions are reached: (1) The full-blade surface parametric method has good construction convenience, blade surface smoothness and low-dimensional characteristics. In terms of low-dimensional characteristics, the number of control variables of the full-blade surface parametric method is reduced by more than half compared with that of the traditional parametric method. Thus, from the perspective of parametric dimension reduction, it helps avoid the "dimension disaster".
The optimized result is successfully obtained for the different speed cases in only 48 h on a supercomputing platform using the multitask concurrent mode of the IABC algorithm. (2) Compared with the semi-blade surface parametric method, the full-blade surface parametric method has better characteristics. It considers the influence of both the suction surface and the pressure surface of the blade geometry on the aerodynamic performance. Further, it improves the geometric exploration ability by increasing the geometric freedom of the leading and trailing edges. (3) The full-blade surface parametric method is suitable for the aerodynamic optimization of Stage35, which indicates that it can be applied to general aerodynamic optimization. After the optimization, the design-point efficiency at design speed increases by 1.4% and the surge margin by 2.9%, while the adiabatic efficiency of the operating point at off-design speed increases by 0.6% and the surge margin by 1.3%. (4) The optimized blade improves the shock wave structure of the flow field and reduces the shock intensity at both design speed and off-design speed. Additionally, it narrows the separation area at the trailing edge of the blade, shortens the suction-surface separation line, and decreases the dispersion angle and rotational intensity of the blade tip leakage vortex, thus enhancing the efficiency of the optimized blade. Although the full-blade surface parametric method has the above advantages, the following defects still exist: it currently cannot perform a sweep deformation, which limits the geometric exploration ability of the blade in the optimization process to a certain extent. At the same time, the method can make the blade thinner during the geometric changes, which affects the strength of the rotor blade and requires additional constraints. The parametric method may therefore be more suitable for turbine blades, which have larger thicknesses. Further research can be conducted on the above problems.
An axiomatic characterization of the Brownian map

The Brownian map is a random sphere-homeomorphic metric measure space obtained by "gluing together" the continuum trees described by the x and y coordinates of the Brownian snake. We present an alternative "breadth-first" construction of the Brownian map, which produces a surface from a certain decorated branching process. It is closely related to the peeling process, the hull process, and the Brownian cactus. Using these ideas, we prove that the Brownian map is the only random sphere-homeomorphic metric measure space with certain properties: namely, scale invariance and the conditional independence of the inside and outside of certain "slices" bounded by geodesics. We also formulate a characterization in terms of the so-called Lévy net produced by a metric exploration from one measure-typical point to another. This characterization is part of a program for proving the equivalence of the Brownian map and Liouville quantum gravity with parameter γ = √(8/3).

Overview

In recent years, numerous works have studied a random measure-endowed metric space called the Brownian map, which can be understood as the n → ∞ scaling limit of the uniformly random quadrangulation (or triangulation) of the sphere with n quadrilaterals (or triangles). We will not attempt a detailed historical account here. Miermont's recent St. Flour lecture notes are a good place to start for a general overview and a list of additional references [Mie14]. This paper will assemble a number of ideas from the literature and use them to derive some additional fundamental facts about the Brownian map: specifically, we explain how the Brownian map can be constructed from a certain branching "breadth-first" exploration. This in turn will allow us to characterize the Brownian map as the only random metric measure space with certain properties. Roughly speaking, in addition to some sort of scale invariance, the main property we require is the conditional independence of the inside and the outside of certain sets (namely, filled metric balls and "slices" formed by pairs of geodesics from the center to the boundary of a filled metric ball) given an associated boundary length parameter. Section 1.5 explains that certain discrete models satisfy discrete analogs of this conditional independence; so it is natural to expect their limits to satisfy a continuum version. Our characterization result is in some sense analogous to the characterization of the Schramm-Loewner evolutions (SLEs) as the only random paths satisfying conformal invariance and the so-called domain Markov property [Sch00], or the characterization of conformal loop ensembles (CLEs) as the only random collections of loops with a certain Markov property [SW12]. The reader is probably familiar with the fact that in many random planar map models, when the total number of faces is of order n, the length of a macroscopic geodesic path has order n^{1/4}, while the length of the outer boundary of a macroscopic metric ball has order n^{1/2}. Similarly, if one rescales an instance of the Brownian map so that distance is multiplied by a factor of C, the area measure is multiplied by C^4, and the length of the outer boundary of a metric ball (when suitably defined) is multiplied by C^2 (see Section 4).
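Since this scaling relation recurs throughout the paper, it may help to restate it in display form (this is only a rephrasing of the preceding sentence; C > 0 is the factor by which distances are multiplied, and ℓ denotes the suitably defined boundary length of a metric ball):

\[ d \;\mapsto\; C\,d \qquad\Longrightarrow\qquad \nu \;\mapsto\; C^{4}\,\nu, \qquad \ell \;\mapsto\; C^{2}\,\ell. \]

Equivalently, a surface of area A has diameter scale A^{1/4} and boundary-length scale A^{1/2}, the continuum counterpart of the n^{1/4} and n^{1/2} exponents of the discrete models.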
One might wonder whether there are other continuum random surface models with other scaling exponents in place of the 4 and the 2 mentioned above, perhaps arising from other different types of discrete models. However, in this paper the exponents 4 and 2 are shown to be determined by the axioms we impose; thus a consequence of this paper is that any continuum random surface model with different exponents must fail to satisfy at least one of these axioms. One reason for our interest in this characterization is that it plays a role in a larger program for proving the equivalence of the Brownian map and the Liouville quantum gravity (LQG) sphere with parameter γ = √(8/3). Both √(8/3)-LQG and the Brownian map describe random measure-endowed surfaces, but the former comes naturally equipped with a conformal structure, while the latter comes naturally equipped with the structure of a geodesic metric space. The program provides a bridge between these objects, effectively endowing each one with the other's structure, and showing that once this is done, the laws of the objects agree with each other. The rest of this program is carried out in [MS15a,MS15b,MS16a,MS16b], all of which build on [She10,MS12a,MS12b,MS12c,MS13a,MS13b,DMS14] (see also Curien's work on the discrete side of this question [Cur13]). After using a quantum Loewner evolution (QLE) exploration to impose a metric structure on the LQG sphere, the papers [MS15a,MS16a] together prove that the law of this metric has the properties that characterize the law of the Brownian map, and hence is equivalent to the law of the Brownian map.

Relation with other work

There are several independent works which were posted to the arXiv shortly after the present work that complement and partially overlap the work done here in interesting ways. Bertoin, Curien, and Kortchemski [BCK15] have independently constructed a breadth-first exploration of the Brownian map, which may also lead to an independent proof that the Brownian map is uniquely determined by the information encoding this exploration. They draw from the theory of fragmentation processes to describe the evolution of the whole countable collection of unexplored component boundaries. They also explore the relationship to discrete breadth-first searches in some detail. Abraham and Le Gall [AL15] have studied an infinite measure on Brownian snake excursions in the positive half-line (with the individual Brownian snake paths stopped when they return to 0). These excursions correspond to disks cut out by a metric exploration of the Brownian map, and play a role in this work as well. Finally, Bettinelli and Miermont [BM15] have constructed and studied properties of Brownian disks with an interior marked point and a given boundary length L (corresponding to the measure we call µ^{1,L}_DISK; see Section 4.2), including a decomposition of these disks into geodesic slices, which is related to the decomposition employed here for metric balls of a given boundary length (chosen from the measure we call µ^L_MET). They show that as a point moves around the boundary of the Brownian disk, its distance to the marked point evolves as a type of Brownian bridge. In particular, this implies that the object they call the Brownian disk has finite diameter almost surely. We also highlight two more recent works. First, Le Gall in [Le16] provides an alternative approach to constructing the object we call the Lévy net in this paper and explores a number of related ideas.
Roughly speaking, the approach in Le Gall's paper is to start with the continuum random tree used in the construction of the Brownian map and then take the quotient w.r.t. the equivalence class that makes two points the same if they belong to the closure of the same disk cut out by the metric exploration. This equivalence relation is easy to describe directly using the Brownian snake, which makes the Lévy net construction very direct. We also make note of a recent work by Bertoin, Budd, Curien, and Kortchemski [BBCK16] that studies (among other things) the fragmentation processes that appear in variants of the Brownian map that arise as scaling limits of surfaces with "very large" faces.

Theorem statement

In this subsection, we give a quick statement of our main theorem. However, we stress that several of the objects involved in this statement (leftmost geodesics, the Brownian map, the various σ-algebras, etc.) will not be formally defined until later in the paper. Let M_SPH be the space of geodesic metric spheres that come equipped with a good measure (i.e., a finite measure that has no atoms and assigns positive mass to each open set). In other words, M_SPH is the space of (measure-preserving isometry classes of) triples (S, d, ν), where d : S × S → [0, ∞) is a distance function on a set S such that (S, d) is topologically a sphere, and ν is a good measure on the Borel σ-algebra of S. Denote by µ^{A=1}_SPH the standard unit area (sphere homeomorphic) Brownian map, which is a random variable that lives on the space M_SPH. We will also discuss a closely related doubly marked Brownian map measure µ^2_SPH on the space M^2_SPH of elements of M_SPH that come equipped with two distinguished marked points x and y. This µ^2_SPH is an infinite measure on the space of finite volume surfaces. The quickest way to describe it is to say that sampling from µ^2_SPH amounts to first choosing a real number A from the infinite measure A^{−3/2} dA, then independently choosing a measure-endowed surface from µ^{A=1}_SPH, then choosing two marked points x and y independently from the measure on the surface, and then "rescaling" the resulting doubly marked surface so that its area is A (scaling area by A and distances by A^{1/4}). The measure µ^2_SPH turns out to describe the natural "grand canonical ensemble" on doubly marked surfaces. We formulate our main theorems in terms of µ^2_SPH (although they can indirectly be interpreted as theorems about µ^{A=1}_SPH as well). Given an element (S, d, ν, x, y) ∈ M^2_SPH, and some r ≥ 0, let B(x, r) denote the open metric ball with radius r and center x. Let B•(x, r) denote the filled metric ball of radius r centered at x, as viewed from y. That is, B•(x, r) is the complement of the y-containing component of the complement of B(x, r). One can also understand S \ B•(x, r) as the set of points z such that there exists a path from z to y along which the function d(x, ·) stays strictly larger than r. Note that if 0 < r < d(x, y) then B•(x, r) is a closed set whose complement contains y and is topologically a disk. In fact, one can show (see Proposition 2.1) that the boundary ∂B•(x, r) is topologically a circle, so that B•(x, r) is topologically a closed disk. We will sometimes interpret B•(x, r) as being itself a metric measure space with one marked point (the point x) and a measure obtained by restricting ν to B•(x, r).
For this purpose, the metric we use on B•(x, r) is the interior-internal metric that it inherits from (S, d), defined as follows: the distance between two points is the infimum of the d-lengths of paths between them that (aside from possibly their endpoints) stay in the interior of B•(x, r). (In most situations, one would expect this distance to be the same as the ordinary internal metric, in which the infimum is taken over all paths contained in B•(x, r), with no requirement that these paths stay in the interior. However, one can construct examples where this is not the case, i.e., where paths that hit the boundary on some (possibly fractal) set of times are shorter than the shortest paths that do not. In general, the interior-internal metric is less informative than the internal metric: given either metric, one can compute the d-lengths of paths that remain in the interior; however, the interior-internal metric does not determine the d-lengths of curves that hit the boundary an uncountable number of times.) Whenever we make reference to metric balls or slices (as in the statement of Theorem 1.1 below) we understand them as marked metric measure spaces (endowed with the interior-internal metric induced by d, and the restriction of ν) in this way.

We will later see that in the doubly marked Brownian map, if we fix r > 0, then on the event that d(x, y) > r, the circle ∂B•(x, r) almost surely comes endowed with a certain "boundary length measure" (which scales like the square root of the area measure). This is not too surprising given that the Brownian map is a scaling limit of random triangulations, and the discrete analog of a filled metric ball clearly comes with a notion of boundary length. We review this idea, along with more of the discrete intuition behind Theorem 1.1, in Section 1.5. We will also see in Section 2 that there is a certain σ-algebra on the space of doubly marked metric measure spaces (which induces a σ-algebra F^2 on M^2_SPH) that is in some sense the "weakest reasonable" σ-algebra to use. We formulate Theorem 1.1 in terms of that σ-algebra. (In some sense, a weaker σ-algebra corresponds to a stronger theorem in this context, since if one has a measure defined on a stronger σ-algebra, one can always restrict it to a weaker σ-algebra. Theorem 1.1 is a general characterization theorem for these restrictions.) We will also need to have some discussion in Section 2 to explain why the assumptions in the theorem statement are meaningful (e.g., why objects like B•(x, r), viewed as a metric measure space as described above, are measurable random variables), and to explain the term "leftmost" (which makes sense once one of the two orientations of the sphere has been fixed). However, let us clarify one point upfront: whenever we discuss geodesics in this paper, we will refer to paths between two endpoints that have minimal length among all paths between those endpoints (i.e., they do not just have this property in some local sense).

Theorem 1.1. The (infinite) doubly marked Brownian map measure µ^2_SPH is the only measure on (M^2_SPH, F^2) with the following properties. (Here a sample from the measure is denoted by (S, d, ν, x, y).)

1. The law is invariant under the Markov operation that corresponds to forgetting x (or y) and then resampling it from the measure ν. In other words, given (S, d, ν), the points x and y are conditionally i.i.d. samples from the probability measure ν/ν(S).

2. Fix r > 0 and let E_r be the event that d(x, y) > r.
Then µ^2_SPH(E_r) ∈ (0, ∞), so that µ^2_SPH(E_r)^{-1} times the restriction of µ^2_SPH to E_r is a probability measure. Under this probability measure, the following are true for s = r and also for s = d(x, y) − r.

(a) There is an F^2-measurable random variable that we denote by L_s (which we interpret as a "boundary length" of ∂B•(x, s)) such that given L_s, the random metric measure spaces B•(x, s) and S \ B•(x, s) are conditionally independent of each other. In the case s = r, the conditional law of S \ B•(x, s) depends only on the quantity L_s, and does so in a scale invariant way; i.e., there exist some fixed a and b such that the law given L_s = C is the same as the law given L_s = 1 except that areas and distances are respectively scaled by C^a and C^b. The same holds for the conditional law of B•(x, s) in the case s = d(x, y) − r.

(b) In the case that s = d(x, y) − r, there is a measurable function that takes (S, d, ν, x, y) as input and outputs (S, d, π, x, y), where π is a.s. a good measure (which we interpret as a boundary length measure) on ∂B•(x, s) (which is necessarily homeomorphic to a circle) that has the following properties:

i. The total mass of π is a.s. equal to L_s.

ii. Suppose we first sample (S, d, ν, x, y), then produce π, then sample z_1 from π, and then position z_2, z_3, . . . , z_n so that z_1, z_2, z_3, . . . , z_n are evenly spaced around ∂B•(x, s) according to π, following an orientation of ∂B•(x, s) chosen by tossing an independent fair coin. Then the n "slices" produced by cutting B•(x, s) along the leftmost geodesics from z_i to x are (given L_s) conditionally i.i.d. (as suggested by Figure 1.2 and Figure 1.3) and the law of each slice depends only on L_s/n, and does so in a scale invariant way (with the same exponents a and b as above).

We remark that the statement that we have a way to assign a boundary length measure to ∂B•(x, s) can be reformulated as the statement that we have a way to randomly assign a marked boundary point z to ∂B•(x, s). The boundary length measure is then L_s times the conditional law of z given (S, d, ν, x, y). Among other things, the conditions of Theorem 1.1 will ultimately imply that L_r can be viewed as a process indexed by r ∈ [0, d(x, y)], and that both L_r and its time-reversal can be understood as excursions derived from Markov processes. We will see a posteriori that the time-reversal of L_r is given by a certain time change of a 3/2-stable Lévy excursion with only positive jumps. One can also see a posteriori (when one samples from a measure which satisfies the axioms in the theorem, i.e., from the Brownian map measure µ^2_SPH) that the definition of the "slices" above is not changed if one replaces "leftmost" with "rightmost" because, in fact, from almost all points on ∂B•(x, s) the geodesic to x is unique. We remark that the last condition in Theorem 1.1 can be understood as a sort of "infinite divisibility" assumption for the law of a certain filled metric ball, given its boundary length.

Before we prove Theorem 1.1, we will actually first formulate and prove another closely related result: Theorem 4.6. To explain roughly what Theorem 4.6 says, note that for any element of M^2_SPH, one can consider the union of the boundaries ∂B•(x, r) taken over all r ∈ [0, d(x, y)]. This union is called the metric net from x to y and it comes equipped with certain structure (e.g., there is a distinguished leftmost geodesic from any point on the net back to x).
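In symbols, and merely as an orienting heuristic (the precise definition of the metric net, together with this extra structure, is given in Section 4), one can picture the metric net as the closure of the union of the filled-ball boundaries:

$$\mathrm{Net}(x, y) \;=\; \overline{\bigcup_{r \in [0,\, d(x,y)]} \partial B^{\bullet}(x, r)},$$

where each boundary ∂B•(x, r) carries its boundary length L_r and the leftmost geodesics supply the tree structure connecting the different levels.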
Roughly speaking, Theorem 4.6 states that µ^2_SPH is the only measure on (M^2_SPH, F^2) with certain basic symmetries and the property that the infinite measure it induces on the space of metric nets corresponds to a special object called the α-(stable) Lévy net that we will define in Section 3.

Outline

In Section 2 we discuss some measure theoretic and geometric preliminaries. We begin by defining a metric measure space (a.k.a. mm-space) to be a triple (S, d, ν) where (S, d) is a complete separable metric space, ν is a measure defined on its Borel σ-algebra, and ν(S) ∈ (0, ∞). Let M denote the space of all metric measure spaces. Let M^k denote the set of metric measure spaces that come with an ordered set of k marked points. As mentioned above, before we can formally make a statement like "The doubly marked Brownian map is the only measure on M^2 with certain properties" we have to specify what we mean by a "measure on M^2," i.e., what σ-algebra a measure is required to be defined on. The weaker the σ-algebra, the stronger the theorem, so we would ideally like to consider the weakest "reasonable" σ-algebra on M and its marked variants. We argue in Section 2 that the weakest reasonable σ-algebra on M is the σ-algebra F generated by the so-called Gromov-weak topology. We recall that this topology can be generated by various natural metrics that make M a complete separable metric space, including the so-called Gromov-Prohorov metric and Gromov's □_1 metric [GPW09, Löh13]. We then argue that this σ-algebra is at least strong enough so that the statement of our characterization theorem makes sense: for example, since our characterization involves surfaces cut into pieces by ball boundaries and geodesics, we need to explain why certain simple functions of these pieces can be understood as measurable functions of the original surface. All of this requires a bit of a detour into metric geometry and measure theory, a detour that occupies the whole of Section 2. The reader who is not interested in the details may skip or skim most of this section.

In Section 3, we recall the tree gluing results from [DMS14]. In [DMS14] we proposed using the term peanosphere to describe a space, topologically homeomorphic to the sphere, that comes endowed with a good measure and a distinguished space-filling loop (parameterized so that a unit of area measure is filled in a unit of time) that represents an interface between a continuum "tree" and "dual tree" pair. Several of the constructions in [DMS14] describe natural measures on the space of peanospheres, and we note that the Brownian map also fits into this framework. Some of the constructions in [DMS14] also involve the α-stable looptrees introduced by Curien and Kortchemski in [CK13], which are in turn closely related to the Lévy stable random trees explored by Duquesne and Le Gall [DLG02, DLG05, DLG06, DLG09]. For α ∈ (1, 2) we show how to glue an α-stable looptree "to itself" in order to produce an object that we call the α-stable Lévy net, or simply the α-Lévy net for short. The Lévy net can be understood as something like a Peano carpet. It is a space homeomorphic to a closed subset of the sphere (obtained by removing countably many disjoint open disks from the sphere) that comes with a natural measure and a path that fills the entire space; this path represents an interface between a geodesic tree (whose branches also have well-defined length parameterizations) and its dual (where in this case the dual object is the α-stable looptree itself).
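For readers less familiar with the stable processes underlying these looptrees, we recall (as standard background, not something established in this paper) that a spectrally positive α-stable Lévy process X with α ∈ (1, 2) can be normalized so that its Laplace transform is

$$\mathbf{E}\big[e^{-\lambda X_t}\big] \;=\; e^{t \lambda^{\alpha}}, \qquad \lambda > 0,$$

and that its jumps occur with intensity proportional to x^{-1-α} dx. In the looptree these jumps correspond to the loops, and in the Lévy net to the boundaries of the removed disks.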
We then show how to explore the Lévy net in a breadth-first way, providing an equivalent construction of the Lévy net that makes sense for all α ∈ (1, 2). Our results about the Lévy net apply for general α and can be derived independently of their relationship to the Brownian map. Indeed, the Brownian map is not explicitly mentioned at all in Section 3.

In Section 4 we make the connection to the Brownian map. To explain roughly what is done there, let us first recall recent works by Curien and Le Gall [CL14a, CL14b] about the so-called Brownian plane, which is an infinite volume Brownian map that comes with a distinguished origin. They consider the hull process L_r, where L_r denotes an appropriately defined "length" of the outer boundary of the metric ball of radius r centered at the origin, and show that L_r can be understood in a certain sense as the time-reversal of a continuous state branching process (which is in turn a time change of a 3/2-stable Lévy process). See also the earlier work by Krikun on reversed branching processes associated to an infinite planar map [Kri05]. Section 4 will make use of finite-volume versions of the relationship between the Brownian map and 3/2-stable Lévy processes. In these settings, one has two marked points x and y on a finite-diameter surface, and the process L_r indicates an appropriately defined "length" of ∂B•(x, r). The restriction of the Brownian map to the union of these boundary components is itself a random metric space (using the shortest path distance within the set itself) and we will show that it agrees in law with the 3/2-Lévy net. Given a single instance of the Brownian map, and a single fixed point x, one may let the point y vary over some countable dense set of points chosen i.i.d. from the associated area measure; then for each y one obtains a different instance of the Lévy net. We will observe that, given this collection of coupled Lévy net instances, it is possible to reconstruct the entire Brownian map. Indeed, this perspective leads us to the "breadth-first" construction of the Brownian map. (As we recall in Section 4, the conventional construction of the Brownian map from the Brownian snake involves a "depth-first" exploration of the geodesic tree associated to the Brownian map.) The characterization will then essentially follow from the fact that α-stable Lévy processes (and the corresponding continuous state branching processes) are themselves characterized by certain symmetries (such as the Markov property and scale invariance; see Proposition 3.11) and these correspond to geometric properties of the random metric space. An additional calculation will be required to prove that α = 3/2 is the only value consistent with the axioms that we impose, and to show that this determines the other scaling exponents of the Brownian map.

Discrete intuition

This paper does not address discrete models directly. All of our theorems here are formulated and stated in the continuum. However, it will be useful for intuition and motivation if we recall and sketch a few basic facts about discrete models. We will not include any detailed proofs in this subsection.

Infinite measures on singly and doubly marked surfaces

The literature on planar map enumeration begins with Mullin and Tutte in the 1960s [Tut62, Mul67, Tut68]. The study of geodesics and the metric structure of random planar maps has roots in an influential bijection discovered by Schaeffer [Sch97], and earlier by Cori and Vauquelin [CV81].
The Cori-Vauquelin-Schaeffer construction is a way to encode a planar map by a pair of trees: the map M is a quadrangulation, and a "tree and dual tree" pair on M are produced from M in a deterministic way. One of the trees is a breadth-first search tree of M consisting of geodesics; the other is a type of dual tree. In this setting, as one traces the boundary between the geodesic tree and the dual tree, one may keep track of the distance from the root in the dual tree, and the distance in the geodesic tree itself; Chassaing and Schaeffer showed that the scaling limit of this random two-parameter process is the continuum random path in R^2 traced by the head of a Brownian snake [CS02], whose definition we recall in Section 4. The Brownian map is a random metric space produced directly from this continuum random path; see Section 4.

Let us remark that tracing the boundary of a tree counterclockwise can be intuitively understood as performing a "depth-first search" of the tree, where one chooses which branches to explore in a left-to-right order. In a sense, the Brownian snake is associated to a depth-first search of the tree of geodesics associated to the Brownian map. We mention this in order to contrast it with the breadth-first search of the same geodesic tree that we will introduce later.

The scaling limit results mentioned above have been established for a number of types of random planar maps, but for concreteness, let us now focus our attention on triangulations. According to [AS03, Theorem 2.1] (applied with m = 0, see also [Ang03]), the number of triangulations (with no loops allowed, but multiple edges allowed) of a sphere with n triangles and a distinguished oriented edge is given by

$$\frac{2^{n+1}(3n)!}{n!\,(2n+2)!} \;\approx\; C\,(27/2)^{n}\, n^{-5/2}, \qquad (1.2)$$

where C > 0 is a constant. Let µ^1_TRI be the probability measure on triangulations such that the probability of each specific n-triangle triangulation (with a distinguished oriented edge, whose location one may treat as a "marked point") is proportional to (27/2)^{-n}. Then (1.2) implies that the µ^1_TRI probability of obtaining a triangulation with n triangles decays asymptotically like a constant times n^{-5/2}. One can define a new (non-probability) measure on random metric spaces µ^1_{TRI,k}, where the area of each triangle is 1/k (instead of a constant), but the measure is multiplied by a constant to ensure that the µ^1_{TRI,k} measure of the set of triangulations with area in the interval (1, 2) is given by ∫_1^2 x^{-5/2} dx, and distances are scaled by k^{-1/4}. As k → ∞ the vague limit (as defined w.r.t. the Gromov-Hausdorff topology on metric spaces) is an infinite measure on the set of measure-endowed metric spaces. Note that we can represent any instance of one of these scaled triangulations as (M, A) where A is the total area of the triangulation and M is the measure-endowed metric space obtained by rescaling the area of each triangle by a constant so that the total becomes 1 (and rescaling all distances by the fourth root of that constant). As k → ∞ the measures µ^1_{TRI,k} converge vaguely to the measure dM ⊗ A^{-5/2} dA, where dM is the standard unit volume Brownian map measure (see [LG13] for the case of triangulations and 2p-angulations for p ≥ 2 and [Mie13] for the case of quadrangulations); a sample from dM comes equipped with a single marked point. See Figure 1.1. The measure dM ⊗ A^{-5/2} dA can be understood as a type of grand canonical or Boltzmann measure on the space of (singly marked) Brownian map instances.
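As a quick sanity check on the exponent −5/2 (a routine computation recorded here only for the reader's convenience; the precise constant plays no role in what follows), Stirling's formula n! ∼ √(2πn)(n/e)^n applied to the left side of (1.2), together with (2n+2)! ∼ 4n²(2n)!, gives

$$\frac{2^{n+1}(3n)!}{n!\,(2n+2)!} \;\sim\; \frac{2\cdot 2^{n}\,\sqrt{6\pi n}\,(3n)^{3n}}{\sqrt{2\pi n}\,n^{n}\cdot\sqrt{4\pi n}\,(2n)^{2n}\cdot 4n^{2}} \;=\; \frac{\sqrt{3}}{4\sqrt{\pi}}\Big(\frac{27}{2}\Big)^{n} n^{-5/2},$$

so one finds C = √3/(4√π) in (1.2).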
Now suppose we consider the set of doubly marked triangulations such that in addition to the root vertex (the first point on the distinguished oriented edge), there is an additional distinguished or "marked" vertex somewhere on the triangulation. Since, given an n-triangle triangulation, there are (by Euler's formula) n/2 other vertices one could "mark," we find that the number of these doubly marked triangulations is (up to constant factor) given by n times the expression in (1.2), i.e.

$$\frac{n\cdot 2^{n+1}(3n)!}{2\,n!\,(2n+2)!} \;\approx\; C\,(27/2)^{n}\, n^{-3/2}.$$

[Figure 1.1: Up to constant factor, the right graph is the C → ∞ limit of the left graph rescaled (squeezed by a factor of C horizontally, stretched by a factor of C^{5/2} vertically). This explains why the "total surface area" marginal of the measure µ^1_{TRI,k} converges vaguely to the infinite measure A^{-5/2} dA as k → ∞. One can deduce from this that the area marginal of µ^2_{TRI,k} converges vaguely to A^{-3/2} dA. (Note the relationship between µ^2_{TRI,k} and the "slice law" suggested by Figure 1.3.)]

Let µ^2_TRI denote this probability measure on doubly marked surfaces, and let µ^2_{TRI,k} denote the doubly marked analog of µ^1_{TRI,k}. Then the scaling limit of µ^2_{TRI,k} is an infinite measure of the form dM ⊗ A^{-3/2} dA, where M now represents a unit area doubly marked surface with distinguished points x and y. Note that if one ignores the point y, then the law dM in this context is exactly the same as in the one marked point context. Generalizing the above analysis to k marked points, we will write µ^k_SPH to denote the natural limiting infinite measure on k-marked spheres, which can be understood (up to a constant factor) as the k-marked point version of the Brownian map. To sample from µ^k_SPH, one may

1. Choose A from the infinite measure A^{-7/2+k} dA.

2. Choose M as an instance of the standard unit area Brownian map.

3. Sample k points independently from the measure with which M is endowed.

4. Rescale the resulting k-marked sphere so that it has area A.

[Figure 1.2: Shown is a triangulation of the sphere (the outer three edges form one triangle) with two marked points: the blue dots labeled x and y. The red cycles are outer boundaries of metric balls centered at x (of radii 1, 2, 3) and at y (of radii 1, 2, 3, 4, 5). From each point on the outer boundary of B•(x, 3) (resp. B•(y, 5)) a geodesic toward x (resp. y) is drawn in white. The geodesic drawn is the "leftmost possible" one; i.e., to get from a point on the circle of radius k to the circle of radius k − 1, one always takes the leftmost edge (as viewed from the center point). "Cutting" along white edges divides each of B•(x, 3) and B•(y, 5) into a collection of triangulated surfaces (one for each boundary edge) with left and right boundaries given by geodesic paths of the same length. Within B•(x, 3) (resp. B•(y, 5)), there happens to be a single longest slice of length 3 (resp. 5) reaching all the way from the boundary to x (resp. y). Parts of the left and right boundaries of these longest slices are identified with each other when the slice is embedded in the sphere. This is related to the fact that all of the geodesics shown in white have "merged" by their final step. Between B•(x, 3) and B•(y, 5), there are 8 + 5 = 13 slices in total, one for each boundary edge. The white triangles outside of B•(x, 3) ∪ B•(y, 5) form a triangulated disk of boundary length 13.]

Of the measures µ^k_SPH, we mainly deal with µ^1_SPH and µ^2_SPH in this paper.
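As a consistency check on the exponent in step 1 (elementary arithmetic, spelled out only for convenience), note that the formula A^{-7/2+k} dA reproduces the two special cases derived above:

$$k = 1:\quad A^{-7/2+1}\,dA = A^{-5/2}\,dA, \qquad k = 2:\quad A^{-7/2+2}\,dA = A^{-3/2}\,dA.$$

Each additional marked point weights the measure by a factor proportional to the area, which is why the exponent increases by 1 with k, just as each additional marking contributed a factor of order n in the discrete count above.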
As mentioned earlier, we also sometimes use the notation µ^{A=1}_SPH to describe the standard unit-area Brownian map measure, i.e., the measure described as dM above.

Properties of the doubly marked Brownian map

In this section, we consider what properties of the measure µ^2_SPH on doubly marked measure-endowed metric spaces (as described above) can be readily deduced from considerations of the discrete models and the fact that µ^2_SPH is a scaling limit of such models. These will include the properties contained in the statement of Theorem 1.1. Although we will not provide fully detailed arguments here, we note that together with Theorem 1.1, this subsection can be understood as a justification of the fact that µ^2_SPH is the only measure one can reasonably expect to see as a scaling limit of discrete measures such as µ^2_TRI (or more precisely as the vague limit of the rescaled measures µ^2_{TRI,k}). In principle it might be possible to use the arguments of this subsection along with Theorem 1.1 (and a fair amount of additional detail) to give an alternate proof of the fact that the measures µ^2_TRI have µ^2_SPH as scaling limit. But we will not do that here. Let us stress again that all of the properties discussed in this subsection can be proved rigorously for the doubly marked Brownian map measure µ^2_SPH. But for now we are simply using discrete intuition to argue (somewhat heuristically) that these are properties that any scaling limit of the measures µ^2_TRI should have.

Although µ^2_SPH is an infinite measure, we have that µ^2_SPH[A > c] is finite whenever c > 0. Based on what we know about the discrete models, what other properties would we expect µ^2_SPH to have? One such property is obvious; namely, the law µ^2_SPH should be invariant under the operation of resampling one (or both) of the two marked points from the (unit) measure on M. This is a property that µ^2_TRI clearly has. If we fix x (with its directed edge) and resample y uniformly, or vice-versa, the overall measure is preserved. Another way to say this is the following: to sample from dM, one may first sample M as an unmarked unit-measure-endowed metric space (this space has no non-trivial automorphisms, almost surely) and then choose x and y uniformly from the measure on M.

Before describing the next properties we expect µ^2_SPH to have, let us define B•(x, r) to be the set of vertices z with the property that every path from z to y includes a point whose distance from x is less than or equal to r. This is the obvious discrete analog of the definition of B•(x, r) given earlier. Informally, B•(x, r) includes the radius r metric ball centered at x together with all of the components "cut off" from y by the metric ball. It is not hard to see that vertices on the boundary of such a ball, together with the edges between them, form a cycle; examples of such boundaries are shown as the red cycles in Figure 1.2. Observe that if we condition on B•(x, r), and on the event that d(x, y) > r (so that y ∉ B•(x, r)), then the µ^2_{TRI,k} conditional law of the remainder of the surface depends only on the boundary length of B•(x, r), which we denote by L_r(x, y), or simply L_r when the choice of x and y is understood. This conditional law can be understood as the standard Boltzmann measure on singly marked triangulations of the disk with boundary length L_r, where the probability of each triangulation of the disk with n triangles is proportional to (27/2)^{-n}.

[Figure 1.3: Upper left: a filled metric ball can be decomposed into "slices" by drawing geodesics from the center to the boundary.
Upper right: the slices are embedded in the plane so that along the boundary of each slice, the geodesic distance from the black outer boundary (in the left figure) corresponds to the Euclidean distance below the black line (in the right figure). We may glue the slices back together by identifying points on the same horizontal segment (leftmost and rightmost points on a given horizontal level are also identified) to recover the filled metric ball. Bottom: the lower figures explain the equivalence of the slice measure and µ^2_SPH.]

From this we conclude in particular that L_r evolves as a Markovian process, terminating when y is reached at step d(x, y). This leads us to a couple more properties one would expect the Brownian map to have, based on discrete considerations.

1. Fix a constant r > 0 and consider the restriction of µ^2_SPH to the event d(x, y) > r. (We expect the total µ^2_SPH measure of this event to be finite.) Then once B•(x, r) is given, the conditional law of the singly marked surface comprising the complement of B•(x, r) is a law that depends only on a single real number, a "boundary length" parameter associated to B•(x, r), that we call L_r.

2. This law depends on L_r in a scale invariant way: that is, the random singly marked surface of boundary length L and the random singly marked surface of boundary length CL differ only in that distances and areas in the latter are each multiplied by some power of C. (We do not specify for now what power that is.) To partially justify this, note that it is not hard to see that if one has a limit of the sort shown in Figure 1.1, then the right hand graph has to be a power law (since for every C, the graph must be preserved when one rescales horizontally by C and vertically by some value). Thus if the µ^2_{TRI,k} have a scaling limit of the form dM ⊗ f(A) dA (as one would expect if the n-triangle triangulations, each rescaled to have area 1, have dM as a limit) then f(A) has to be a power law. A similar argument applies if one replaces the area parameter A with the diameter, or with the distance between x and y (in the doubly marked case).

3. The above properties also imply that the process L_r (or at least its restriction to a countable dense set) evolves as a Markov process, terminating at time d(x, y), and that the µ^2_SPH law of L_r is that of the (infinite) excursion measure associated to this Markov process.

The scale invariance assumptions described above do not specify the law of L_r. They suggest that log L_r should be a time change of a Lévy process, but this still leaves an infinite dimensional family of possibilities. In order to draw further conclusions about this law, let us consider the time-reversal of L_r, which should also be an excursion of a Markov process. (This is easy to see on a discrete level; suppose we do not decide in advance the value of T = d(x, y), but we observe L_{T−1}, L_{T−2}, . . . as a process that terminates after T steps. Then the conditional law of L_{T−k−1} given L_{T−k} is easily seen to depend only on the value of L_{T−k}.) Given this reverse process up to a stopping time, what is the conditional law of the filled ball centered at y with the corresponding radius? On the discrete level, this conditional law is clearly the uniform measure (weighted by (27/2)^{-n}, where n is the number of triangles, as usual) on triangulations of the boundary-length-L disk in which there is a single fixed root and all points on the boundary are equidistant from that root.
A sample from this law can be obtained by choosing L independent "slices" and gluing them together; see Figure 1.2. As illustrated in Figure 1.3, we expect to see a similar property in the continuum: namely, that given a boundary length parameter L, and a set of points along the boundary, the evolution of the lengths within each of the corresponding slices should be an independent process. This suggests that the time-reversal of an L_r excursion should be an excursion of a so-called continuous state branching process, as we will discuss in Section 3.5. This property and scale invariance will determine the law of the L_r process up to a single parameter that we will call α.

In addition to the spherical-surface measures µ^k_SPH and µ^{A=1}_SPH discussed earlier, we will in the coming sections consider a few additional measures on disk-homeomorphic measure-endowed metric spaces with a given fixed "boundary length" value L. (For now we give only informal definitions; see Section 4.2 for details.)

1. A probability measure µ^L_DISK on boundary length L surfaces that in some sense represents a "uniform" measure on all such surfaces, just as µ^k_SPH in some sense represents a uniform measure on spheres with k marked points. It will be enough to define this for L = 1, as the other values can be obtained by rescaling. This L = 1 measure is expected to be an m → ∞ scaling limit of the probability measure on discrete disk-homeomorphic triangulations with boundary length m, where the probability of an n-triangle triangulation is proportional to (27/2)^{-n}. (Note that for a given large m value, one may divide area, boundary length, and distance by factors of m^2, m, and m^{1/2} respectively to obtain an approximation of µ^L_DISK with L = 1.)

2. A measure µ^{1,L}_DISK on marked disks obtained by weighting µ^L_DISK by area and then choosing an interior marked point uniformly from that area. In the context of Theorem 1.1, this is the measure that should correspond to the conditional law of S \ B•(x, r) given that the boundary length of B•(x, r) is L.

3. A measure µ^L_MET on disk-homeomorphic measure-endowed metric spaces with a given boundary length L and an interior "center point" such that all vertices on the boundary are equidistant from that point. In other words, µ^L_MET is a probability measure on the sort of surfaces that arise as filled metric balls. Again, it should correspond to a scaling limit of a uniform measure (except that as usual the probability of an n-triangle triangulation is proportional to (27/2)^{-n}) on the set of all marked triangulations of a disk with a given boundary length and the property that all points on the boundary are equidistant from that marked point. This is the measure that satisfies the "slice independence" described at the end of the statement of Theorem 1.1.

Suppose we fix r > 0 and restrict the measure µ^2_SPH to the event that d(x, y) > r, so that µ^2_SPH becomes a finite measure. Then one expects that given the filled metric ball of radius r centered at x, the conditional law of the component containing y is a sample from µ^{1,L}_DISK, where L is the boundary length of the ball. Similarly, suppose one conditions on the outside of the filled metric ball of radius d(x, y) − r centered at x. Then the conditional law of the filled metric ball itself should be µ^L_MET. This is the measure that one expects (based on the intuition derived from Figures 1.2 and 1.3 above) to have the "slice independence" property.
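To make the branching heuristic slightly more concrete, we recall (as standard background on continuous state branching processes, used here only for orientation) that slice independence corresponds to the branching property: if Y^{(ℓ)} denotes the boundary length process started from ℓ, then

$$Y^{(\ell_1 + \ell_2)} \;\overset{d}{=}\; Y^{(\ell_1)} + \tilde{Y}^{(\ell_2)}, \qquad Y^{(\ell_1)},\ \tilde{Y}^{(\ell_2)} \ \text{independent},$$

and that by the Lamperti transform any such process is a time change of a spectrally positive Lévy process X, namely Y_t = X(θ_t) with θ_t = inf{ s ≥ 0 : ∫_0^s du / X_u > t }. Scale invariance then forces the Lévy process to be stable, leaving only the one-parameter family of α-stable processes; the additional calculation mentioned above singles out α = 3/2.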
Metric measure spaces

A triple (S, d, ν) is called a metric measure space (or mm-space) if (S, d) is a complete separable metric space and ν is a measure on the Borel σ-algebra generated by the topology generated by d, with ν(S) ∈ (0, ∞). We remark that one can represent the same space by the quadruple (S, d, ν̄, m), where m = ν(S) and ν̄ = m^{-1}ν is a probability measure. This remark is important mainly because some of the literature on metric measure spaces requires ν to be a probability measure. Relaxing this requirement amounts to adding an additional parameter m ∈ (0, ∞).

Two metric measure spaces are considered equivalent if there is a measure-preserving isometry from a full measure subset of one to a full measure subset of the other. Let M be the space of equivalence classes of this form. Note that when we are given an element of M, we have no information about the behavior of S away from the support of ν.

Next, recall that a measure on the Borel σ-algebra of a topological space is called good if it has no atoms and it assigns positive measure to every open set. Let M_SPH be the space of geodesic metric measure spaces that can be represented by a triple (S, d, ν) where (S, d) is a geodesic metric space homeomorphic to the sphere and ν is a good measure on S. Note that if (S_1, d_1, ν_1) and (S_2, d_2, ν_2) are two such representatives, then the a.e. defined measure-preserving isometry φ : S_1 → S_2 is necessarily defined on a dense set, and hence can be extended to the completion of its support in a unique way so as to yield a continuous function defined on all of S_1 (similarly for φ^{-1}). Thus φ can be uniquely extended to an everywhere defined measure-preserving isometry. In other words, the metric space corresponding to an element of M_SPH is uniquely defined, up to measure-preserving isometry.

As we are ultimately interested in probability measures on M, we will need to describe a σ-algebra on M. We will also show that M_SPH belongs to that σ-algebra, so that in particular it makes sense to talk about measures on M that are supported on M_SPH. We would like to have a σ-algebra that can be generated by a complete separable metric, since this would allow us to define regular conditional probabilities for all subsets. We will introduce such a σ-algebra in Section 2.4. We first discuss some basic facts about metric spheres in Section 2.2.

Observations about metric spheres

Let M^k_SPH be the space of elements of M_SPH that come endowed with an ordered set of k marked points z_1, z_2, . . . , z_k. When j ≤ k there is an obvious projection map from M^k_SPH to M^j_SPH that corresponds to "forgetting" the last k − j coordinates. We will be particularly interested in the set M^2_SPH in this paper, and we often represent an element of M^2_SPH by (S, d, ν, x, y) where x and y are the two marked points. The following is a simple deterministic statement about geodesic metric spheres (i.e., it does not involve the measure ν).

Proposition 2.1. Suppose that (S, d) is a geodesic metric space which is homeomorphic to S^2 and that x ∈ S. Then each of the components of S \ B(x, r) has a boundary that is a simple closed curve in S, homeomorphic to the circle S^1.

Proof. Let U be one such component and consider the boundary set Γ = ∂U. We aim to show that Γ is homeomorphic to S^1. Note that every point in Γ is of distance r from x. Since U is connected and has connected complement, it must be homeomorphic to D.
We claim that the set S \ Γ has only two components: the component U and another component that is also homeomorphic to D. To see this, let us define Ũ to be the component of S \ Γ containing x. By construction, ∂Ũ ⊆ Γ, so every point on ∂Ũ has distance r from x. A geodesic from x to any point in Γ \ ∂Ũ would have to pass through ∂Ũ, and hence such a point would have to have distance greater than r from x. Since all points in Γ have distance r from x, we conclude that ∂Ũ = Γ. Note that Ũ has connected complement, and hence is also homeomorphic to D.

The fact that Γ is the common boundary of two disjoint disks is not by itself enough to imply that Γ is homeomorphic to S^1. There are still some strange counterexamples (topologist's sine curves, exotic prime ends, etc.). To begin to rule out such things, our next step is to show that Γ is locally connected.

Suppose for contradiction that Γ is not locally connected. This implies that there exist z ∈ Γ and s > 0 such that for every sub-neighborhood V ⊆ B(z, s) containing z the set V ∩ Γ is disconnected. Note that since Γ is connected, the closure of every component of Γ ∩ B(z, s) has non-empty intersection with ∂B(z, s). Since these components are closed within B(z, s), all but one of them must have positive distance from z. Moreover, for each ε ∈ (0, s), we claim that the number of such components which intersect B(z, ε) must be infinite. Indeed, otherwise Γ would be locally connected at z, which would contradict our assumption that Γ is not locally connected at z (and this latter statement, the reader may recall, is the one we were assuming for the purpose of deriving a later contradiction).

Now (still assuming that Γ is not locally connected), the above discussion implies that there must be an annulus A (i.e., a difference between the disk-homeomorphic complements of two concentric filled metric balls) centered at z such that A ∩ Γ contains infinitely many connected components. Let δ be equal to the width of A (i.e., the distance between the inside and outside boundaries of A). It is not hard to see from this that both A ∩ U and A ∩ Ũ contain infinitely many distinct components crossing A, each of diameter at least δ. Let A_I be the inner boundary of A and let A_M be the image of a simple loop in A which has positive distance from ∂A and surrounds A_I. Fix ε > 0. Then the above implies that we can find w ∈ A_I ∩ B(x, r) and points z_1, z_2 ∈ A_M ∩ ∂Ũ with d(z_1, z_2) < ε such that a given geodesic γ which connects w and x necessarily crosses a given geodesic η which connects z_1 and z_2. Since w ∈ B(x, r), we have that γ is contained in B(x, r). Let v be a point on γ ∩ η. Then d(x, w) = d(x, v) + d(v, w). We claim that d(v, w) < ε. Indeed, if d(v, w) ≥ ε then, as d(z_j, v) < ε for j = 1, 2, we would have that

$$d(x, z_j) \;\le\; d(x, v) + d(v, z_j) \;<\; d(x, v) + \varepsilon \;\le\; d(x, v) + d(v, w) \;=\; d(x, w) \;\le\; r.$$

This contradicts that z_1, z_2 ∉ B(x, r), which establishes the claim. Since d(v, w) < ε, we therefore have that

$$\mathrm{dist}(A_M, A_I) \;\le\; d(z_1, v) + d(v, w) \;<\; 2\varepsilon.$$

Since ε > 0 was arbitrary and A_I, A_M are closed, we therefore have that A_M ∩ A_I ≠ ∅. This is a contradiction since we took A_M to be disjoint from A_I. Therefore Γ is locally connected.

Note that the image of Γ under a homeomorphism S → S^2 must be locally connected as well. Moreover, there is a conformal map ϕ from D to the image of U, and a standard result from complex analysis (see e.g. [Law05, Proposition 3.6]) states that since the image of Γ is locally connected, the map ϕ must extend continuously to its boundary. This tells us that Γ is given by the image of a continuous curve ψ : S^1 → S.
It remains only to show that ψ(z_1) ≠ ψ(z_2) for all distinct z_1, z_2 ∈ S^1. This will complete the proof because then ψ is a simple curve which parameterizes ∂U. Assume for contradiction that there exist z_1, z_2 ∈ S^1 distinct so that ψ(z_1) = ψ(z_2). We write [z_1, z_2] for the counterclockwise segment of S^1 which connects z_1 and z_2. Then we have that ψ restricted to each of [z_1, z_2] and S^1 \ (z_1, z_2) is a loop and the two loops touch only at ψ(z_1) = ψ(z_2). Therefore the loops are nested and only one of them separates U from x. We assume without loss of generality that ψ|_{S^1 \ (z_1, z_2)} separates U from x. Fix w ∈ (z_1, z_2), let η be a path from x to ψ(w), and let t_1 (resp. t_2) be the first time that η hits ∂U (resp. ψ(w)). Then we have that t_1 < t_2. Applying this to the particular case of a geodesic from x to ψ(w), we see that the distance from x to ψ(w) is strictly larger than r, the distance from x to ∂U. This is a contradiction since every point of ∂U ⊆ Γ has distance exactly r from x, which completes the proof.

As mentioned earlier, given a doubly marked geodesic metric space (S, d, x, y) which is homeomorphic to S^2, we let B•(x, r) denote the filled metric ball of radius r centered at x, as viewed from y. That is, B•(x, r) is the complement of the y-containing component of S \ B(x, r). Fix some r with 0 < r < d(x, y), and a point z ∈ ∂B•(x, r). Clearly, any geodesic from z to x is a path contained in B•(x, r). In general there may be multiple such geodesics, but the following proposition gives us a way to single out a unique geodesic.

Proposition 2.2. Suppose that (S, d, x, y) is a doubly marked geodesic metric space which is homeomorphic to S^2, that 0 < r < d(x, y), and that B•(x, r) is the radius r filled ball centered at x and z ∈ ∂B•(x, r). Assume that an orientation of ∂B•(x, r) is fixed (so that one can distinguish the "clockwise" and "counterclockwise" directions). Then there exists a unique geodesic from z to x that is leftmost viewed from x (i.e., furthest counterclockwise) when lifted and understood as an element of the universal cover of B•(x, r) \ {x}.

Proof. Note that for each r′ ∈ (0, r), the lifting of ∂B•(x, r′) to this universal cover is homeomorphic to R (since R is the lifting of the circle to its universal cover) and one can find the leftmost (i.e., furthest counterclockwise) point in this universal cover reachable by any geodesic. It is not hard to see that the union of such points (over all r′) forms the desired leftmost geodesic.

We next establish some "rigidity" results for metric spaces. Namely, we will first show that there is no non-trivial isometry of a geodesic closed-disk-homeomorphic metric space which fixes the boundary. We will then show that the identity map is the only orientation-preserving isometry of a triply marked geodesic sphere that fixes all of the marked points. (Note that there can be many automorphisms of the unit sphere that fix two marked points if those points are on opposite poles.) We will note that it suffices to fix two points if one also fixes a distinguished geodesic between them.

Proposition 2.3. Suppose that (S, d) is a geodesic metric space such that there exists a homeomorphism ϕ : D → S. Suppose that φ : S → S is an isometry which fixes ∂S := ϕ(∂D). Then φ(z) = z for all z ∈ S.

Proof. Fix an orientation of ∂S. Then for x ∈ ∂S and z ∈ S, we have a well-defined leftmost geodesic γ connecting z to x with respect to this orientation. Since φ fixes ∂S, it preserves the orientation of ∂S.
In particular, if it is true that φ(z) = z then it follows that φ must fix γ (for otherwise we would have more than one leftmost geodesic from z to x). We conclude that {z : φ(z) = z} is connected and connected to the boundary, and hence its complement must have only simply connected components. Brouwer's fixed point theorem implies that none of these components can be non-empty, since there would necessarily be a fixed point inside. This implies that φ(z) = z for all z ∈ S.

Proposition 2.4. Suppose that (S, d, x_1, x_2, x_3) is a triply marked geodesic metric space with x_1, x_2, x_3 distinct which is topologically equivalent to S^2. We assume that S is oriented so that we can distinguish the clockwise and counterclockwise directions of simple loops. Suppose that φ : S → S is an orientation-preserving isometry with φ(x_j) = x_j for j = 1, 2, 3. Then φ(z) = z for all z ∈ S. Similarly, if (S, d, x_1, x_2) is a doubly marked space with x_1, x_2 distinct and γ is a geodesic from x_1 to x_2, then the identity is the only orientation-preserving isometry that fixes x_1, x_2, and γ.

Proof. The latter statement is immediate from Proposition 2.3 applied to the disk obtained by cutting the sphere along γ. To prove the former statement, we assume without loss of generality that R = d(x_1, x_2) ≤ d(x_1, x_3). Consider the filled metric ball B•(x_1, R) (relative to x_3) so that x_2 ∈ ∂B•(x_1, R). Since we have assumed that S is oriented, we have that ∂B•(x_1, R) is oriented, hence Proposition 2.2 implies that there exists a unique leftmost geodesic γ from x_1 to x_2. Since φ fixes x_1, x_3 and φ is an isometry, it follows that φ fixes ∂B•(x_1, R). Moreover, φ(γ) is a geodesic from φ(x_1) = x_1 to φ(x_2) = x_2. As φ is orientation preserving, we must in fact have that φ(γ) = γ. Therefore the latter part of the proposition statement implies that φ fixes all of S.

We remark that the above argument implies that the identity is the only map that fixes x and the restriction of γ to any neighborhood about x. In other words, the identity is the only map that fixes x and the equivalence class of geodesics γ that end at x, where two geodesics are considered equivalent if they agree in a neighborhood of x. This is analogous to the statement that a planar map on the sphere has no non-trivial automorphisms (as a map) once one fixes a single oriented edge. We next observe that Proposition 2.3 can be further strengthened.

Proposition 2.5. In the context of Proposition 2.3, if the isometry φ : S → S is orientation preserving and fixes one point x ∈ ∂S, then it must be the identity.

Proof. By Proposition 2.3, it suffices to check that φ fixes the circle ∂S pointwise (since φ is a homeomorphism, it clearly fixes ∂S as a set). Note that the set {y ∈ ∂S : φ(y) = y} is closed and non-empty. Suppose for contradiction that {y ∈ ∂S : φ(y) = y} is not equal to all of ∂S. Then there exists I ⊆ ∂S connected which is relatively open in ∂S such that φ fixes the endpoints z_1, z_2 of I but does not fix any point in I itself. Fix ε > 0 small so that there exists z ∈ I with d(z, z_1) = ε. Then there is a well-defined first point w ∈ I starting from z_1 with d(z_1, w) = ε/2. Since φ fixes I as a set, it must be that φ(w) = w. This is a contradiction, which gives the result.

We now return to our study of leftmost geodesics.

Proposition 2.6. Suppose that we are in the setting of Proposition 2.2. Suppose that a ∈ ∂B•(x, r) and that (a_j) is a sequence of points in ∂B•(x, r) which approach a from the left.
For each j, we let γ_j be the leftmost geodesic from a_j to x, and let γ be the leftmost geodesic from a to x. Then we have that γ_j → γ uniformly as j → ∞. Moreover, for all but countably many values of a (which we will call jump values) the same is true when the a_j approach a from the right. If a is one of these jump values, then the limit of the geodesics from a_j, as the a_j approach a from the right, is a non-leftmost geodesic from a to x.

Proof. Suppose that the (a_j) in ∂B•(x, r) approach a ∈ ∂B•(x, r) from the left and γ_j, γ are as in the statement. Suppose that (γ_{j_k}) is a subsequence of (γ_j). It suffices to show that (γ_{j_k}) has a subsequence which converges uniformly to γ. The Arzelà-Ascoli theorem implies that (γ_{j_k}) has a subsequence which converges uniformly to some limiting path γ̃ connecting a to x. This path is easily seen to be a geodesic connecting a to x which is non-strictly to the left of γ. Since γ is leftmost, we conclude that γ̃ = γ. This proves the first part of the proposition.

Suppose now that the (a_j) approach a from the right and let γ_j, γ be as in the previous paragraph. The Arzelà-Ascoli theorem implies that every subsequence of (γ_j) has a further subsequence which converges uniformly to a geodesic connecting a to x. That the limit does not depend on the subsequence follows by monotonicity. To prove the second part of the proposition, note that each jump value a is associated with the non-empty open set J_a ⊆ B•(x, r) which is between the leftmost geodesic from a to x and the uniform limit of leftmost geodesics along any sequence (a_j) approaching a from the right. Moreover, for distinct jump values a, a′ we must have that J_a ∩ J_{a′} = ∅. Therefore the set of jump values is countable.

As in the proof of Proposition 2.6, if a is a jump value, we let J_a denote the open set bounded between the (distinct) left and right limits described in Proposition 2.6, both of which are geodesics from a to x. Recall that if a, a′ are distinct jump values then J_a, J_{a′} are disjoint. Moreover, observe that the union of the J_a (over all jump values a) is the complement of the closure of the union of all leftmost geodesics. As the point a moves around the circle, the leftmost geodesic from a to x may vary continuously (as it does when (S, d) is a Euclidean sphere) but it may also have countably many times when it "jumps" over an open set J_a (as is a.s. the case when (S, d, ν) is an instance of the Brownian map, see Section 4).

We next need to say a few words about "cutting" geodesic metric spheres along curves and/or "welding" closed geodesic metric disks together. Before we do this, let us consider the general question of what it means to take a quotient of a metric space w.r.t. an equivalence relation (see [BBI01, Chapter 3] for more discussion on this point). Given any metric space (S, d) and any equivalence relation ≅, one may define a distance function d̄ between equivalence classes of ≅ as follows: if a and b are representatives of distinct equivalence classes, take d̄(a, b) to be the infimum, over even-length sequences a = x_0, x_1, x_2, . . . , x_{2k} = b with the property that x_m ≅ x_{m+1} for odd m, of the sum

$$\sum_{m \text{ even}} d(x_m, x_{m+1}).$$

This d̄ is a priori only a pseudometric on the set of equivalence classes of ≅ (i.e., it may be zero for some distinct a and b). However, it defines a metric on the set of equivalence classes of ≅*, where a ≅* b whenever d̄(a, b) = 0.
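As a simple illustration of this quotient construction (our own example, not drawn from the surrounding text), consider the interval S = [0, 1] with the Euclidean metric and the relation 0 ≅ 1 gluing the two endpoints. A sequence a = x_0, x_1, x_2, x_3 = b with {x_1, x_2} = {0, 1} is admissible, so

$$\bar d(a, b) \;=\; \min\big(\,|a - b|,\ a + (1 - b),\ b + (1 - a)\,\big),$$

which is exactly the arc-length metric on a circle of circumference 1. Here no two distinct classes end up at distance zero, so ≅* coincides with ≅; the relation ≅* becomes strictly coarser only in more pathological situations.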
It is not hard to see that d̄ is the largest pseudometric such that d̄(a, b) ≤ d(a, b) for all a, b and d̄(a, b) = 0 when a ≅ b. The procedure described above is what we generally have in mind when we speak of taking a quotient of a metric space w.r.t. an equivalence relation.

Now let us ask what happens if a geodesic metric sphere is cut along a simple loop Γ, to produce two disks. Note that on each disk, there is an internal metric, where the distance between points a and b is defined to be the length of the shortest path that stays entirely within the given disk. This distance is clearly finite when a and b are in the interior of the disk. (This can be deduced by taking a closed path from a to b bounded away from the disk boundary, covering it with open metric balls bounded away from the disk boundary, and taking a finite subcover.) However, when either a or b is on the boundary of the disk, it is not hard to see that (if the simple curve is windy enough) it could be infinite.

Let us now ask a converse question. What happens when we take the two metric disks and try to "glue them together" to recover the sphere? We can clearly recover the sphere as a topological space, but what about the metric? Before we address that point, note there is always one way to glue the disks back together to create a new metric space: namely, we may consider the disjoint union of the pair of disks to be a common metric space (with the distance between points on distinct disks formally set to be infinity) and then take a metric quotient (in the sense discussed above) w.r.t. the equivalence relation that identifies the boundary arcs. This can be understood as the largest metric compatible with the boundary identification. In this metric, the distance between a and b is the length (in the original metric) of the shortest path from a to b that only crosses Γ finitely many times. However, although we will not prove this here, it appears that one can actually construct a geodesic metric sphere with a closed curve Γ and points a and b such that the shortest path from a to b that crosses Γ finitely many times is longer than the shortest path overall. In other words, it appears that there may be situations where cutting a sphere into two disks and gluing the disks back together (using the quotient procedure described above) does not reproduce the original sphere.

On the other hand, it is easy to see that this type of pathology does not arise if Γ is a curve comprised of a finite number of geodesic arcs, since one can easily find a geodesic γ between any points a and b that crosses no geodesic arc of Γ more than once. (If it crosses an arc multiple times, one may replace the portion of γ between the first and last hitting times by a portion of the arc itself.) The same applies if one has a disk cut into two pieces using a finite sequence of geodesic arcs. This is an important point, since in this paper we will frequently need to glue together disk-homeomorphic "slices" whose boundaries are geodesic curves. The following proposition formalizes one example of such a statement.

Proposition 2.7. Suppose that (S, d, x, y) is a doubly marked geodesic metric space which is homeomorphic to S^2. Suppose that γ_1, γ_2 are distinct geodesics which connect x to y and that S \ (γ_1 ∪ γ_2) has two components U_1, U_2. For j = 1, 2, let x_j (resp. y_j) be the first (resp. last) point on ∂U_j visited by γ_1 (or equivalently by γ_2).
We then let (Ū_j, d_j, x_j, y_j) be the doubly marked metric space where d_j is given by the internal metric induced by d on U_j. Let S̃ be given by the disjoint union of Ū_1 and Ū_2 and let d̃ be the distance on S̃ which is defined by d̃(a, b) = d_j(a, b) if a, b ∈ Ū_j for some j = 1, 2, and otherwise d̃(a, b) = ∞. We then define an equivalence relation ≅ on S̃ by declaring that a ≅ b if either a = b or if a ∈ ∂U_1 and b ∈ ∂U_2 correspond to the same point of S. Let d̂ be the largest metric compatible with S̃/≅. Then d̂ = d. That is, the metric gluing of the (Ū_j, d_j, x_j, y_j) along their boundaries gives (S, d, x, y).

For future reference, let us remark that another instance where this pathology will not arise is when (S, d, x) is an instance of a Brownian map with a marked point x and Γ is the boundary of a filled metric ball centered at x. In that case, the definition of d given in Section 4.1 will imply that the length of the shortest path between points a and b is the infimum over the lengths of paths comprised of finitely many arcs, each of which is a segment of a geodesic from some point to x. By definition, such a path clearly only crosses Γ finitely many times. Note that the two situations discussed above (cutting along geodesics and along boundaries of filled metric balls) are precisely those that are needed to make sense of the statements in Theorem 1.1.

In this article we will not rule out the possibility that the internal metric associated with S \ B•(x, r) defines an infinite diameter metric space. Let us note, however, that one can recover the entire collection of geodesics back to x (hence d) from the interior-internal metrics associated with S \ B•(x, r) and B•(x, r). In particular, if z ∈ S \ B•(x, r) then by the very definition of B•(x, r) we have that the distance between z and ∂B•(x, r) is finite and given by d(x, z) − r. Moreover, the shortest paths from z to ∂B•(x, r) in S \ B•(x, r) consist of the initial (d(x, z) − r)-length segments of the geodesics from z to x. It is clearly the case that the remaining r-length segments of the geodesics from z to x are contained in B•(x, r).

Update: Pathologies of the aforementioned type were ruled out in other settings for natural gluing operations one can perform for Brownian and √(8/3)-LQG surfaces in [GM16b], which together with [GM16a, GM16c] has led to a proof that the self-avoiding walk on random quadrangulations converges to SLE_{8/3} on √(8/3)-LQG.

A consequence of slice independence/scale invariance

At the end of Section 1.5, the measure µ^L_MET is informally described, along with a notion of "slice independence" one might expect such a measure to satisfy. Although we have not given a formal description of µ^L_MET yet, we can observe now some properties we would expect this measure to have. For concreteness, let us assume that L = 1 and that a point on the boundary is fixed, so that the boundary of a sample from µ^L_MET can be identified with the interval [0, 1]. We "cut" along the geodesic from 0 to x and view a sample from µ^L_MET as a "triangular slice" with one side identified with [0, 1] and the other two sides forming geodesics of the same length (one from 0 to x and one from 1 to x). We define d(a, b) to be the distance from the boundary at which the leftmost geodesic from a to x and the leftmost geodesic from b to x merge.
Now, no matter what space and σ-algebra µ^L_MET is defined on, we would expect that if we restrict to rational values of a and b, then the d(a, b) should be a countable collection of real-valued random variables. Before we even think about σ-algebras on M or M_SPH, we can answer a more basic question. What would "slice independence" and "scale invariance" assumptions tell us about the joint law of these random variables d(a, b)? The following proposition formalizes what we mean by scale invariance and slice independence, and shows that in fact these properties characterize the joint law of the random variables d(a, b) up to a single real parameter. As we will see in the proof of Theorem 1.1, this will allow us to deduce that the metric net associated with a space which satisfies the hypotheses of Theorem 1.1 is related to the so-called Lévy net introduced in Section 3 below.

Proof. The lemma statement describes two ways of choosing a random d and asserts that the two laws agree. It is immediate from Lemma 2.9 (stated and proved just below) that the laws agree when one restricts attention to {0, 1/k, 2/k, . . . , 1}^2, for any k ∈ N. Since this holds for all k, the result follows.

Lemma 2.9. Suppose for some β > 0, a real-valued random variable A has the following property: when A_1, A_2, . . . , A_k are i.i.d. copies of A, the law of k^{-β} max_{1≤i≤k} A_i is the same as the law of A. Then A agrees in law (up to some multiplicative constant) with the size of the maximum element of a Poisson point process chosen from the infinite measure x^α dx, where α = −1/β − 1 and dx denotes Lebesgue measure on R_+.

Proof. Let F(s) = P[A ≤ s]. Then

$$F(s) \;=\; P\Big[k^{-\beta} \max_{1 \le i \le k} A_i \le s\Big] \;=\; F(k^{\beta} s)^{k}.$$

Thus F(k^β s) = F(s)^{1/k}. Set r = k^β so that 1/k = r^{-1/β}. Then when r has this form we have F(rs) = F(s)^{1/k} = F(s)^{r^{-1/β}}. Applying this twice allows us to draw the same conclusion when r = k_1^β / k_2^β for rational k = k_1/k_2, i.e., for all values r which are a βth power of a rational. Since this is a dense set, we can conclude that F(rs) = F(s)^{r^{-1/β}} holds in general. If we set

$$G(s) = -\log F(s), \quad \text{so that} \quad G(rs) = r^{-1/\beta} G(s) \ \text{ for all } r, s > 0,$$

then G(s) = G(1) s^{-1/β}. It is then straightforward to see that this implies that (up to a multiplicative constant) A has the same law as the Poisson point process maximum described in the lemma statement. (See, e.g., [Sat99, Exercise 22.4].)

A σ-algebra on the space of metric measure spaces

We present here a few general facts about measurability and metric spaces, following up on the discussion in Section 1.4. Most of the basic information we need about the Gromov-Prohorov metric and the Gromov-weak topology can be found in [GPW09]. Other related material can be found in the metric geometry text by Burago, Burago, and Ivanov [BBI01], as well as Villani's book [Vil09, Chapters 27-28]. As in Section 1.4, let M denote the space of metric measure spaces, defined modulo a.e. defined measure-preserving isometry. Suppose that (S, d, ν) ∈ M. If we choose points x_1, x_2, . . . , x_k i.i.d. from ν (normalized to be a probability measure), we obtain a random k × k matrix M_k whose (i, j) entry is the distance d(x_i, x_j). If ψ is any fixed bounded continuous function on R^{k²}, then the map which sends (S, d, ν) to the expectation of ψ(M_k) is a real-valued function on M. The Gromov-weak topology is defined to be the weakest topology w.r.t. which the functions of this type are continuous. In other words, a sequence of elements of M converges in this topology if and only if the laws of the corresponding M_k (understood as measures on R^{k²}) converge weakly for each k. We denote by F the Borel σ-algebra generated by this topology. Since we would like to be able to sample marked points from ν and understand their distances from each other, we feel comfortable saying that F is the weakest "reasonable" σ-algebra we could consider.
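In integral form (one convenient way to write the test functions just described, with ν̄ = ν/ν(S) the normalized measure), the maps generating the Gromov-weak topology are

$$\Phi_{\psi}(S, d, \nu) \;=\; \int_{S^{k}} \psi\big( (d(x_i, x_j))_{1 \le i, j \le k} \big)\; \bar\nu(dx_1) \cdots \bar\nu(dx_k),$$

for k ∈ N and ψ : R^{k²} → R bounded and continuous. By Gromov's reconstruction theorem, this family of values, together with the total mass ν(S), determines the element of M up to the equivalence considered here.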
We will sometimes abuse notation and use (M_SPH, F) to denote a measure space, where in this context F is understood to refer to the intersection of F with the set of subsets of M_SPH. (We will apply a similar notational abuse to the "marked" analogs M^k, M^k_SPH, and F^k introduced below.) It turns out that the Gromov-weak topology can be generated by various natural metrics that make M a complete separable metric space: the so-called Gromov-Prohorov metric and the Gromov-box metric [GPW09, Löh13]. Thus, (M, F) is a standard Borel space (i.e., a measure space whose σ-algebra is the Borel σ-algebra of a topology generated by a metric that makes the space complete and separable). We do not need to discuss the details of these metrics here; we bring them up only in order to show that (M, F) is a standard Borel space. One useful consequence of the fact that (M, F) is a standard Borel space is that if G is any sub-σ-algebra of F, then the regular conditional probability of a random variable, conditioned on G, is well-defined [Dur10, Chapter 5.1.3].

We can also consider marked spaces; one may let M^k denote the set of tuples of the form (S, d, ν, x_1, x_2, …, x_k) where (S, d, ν) ∈ M and x_1, x_2, …, x_k are elements ("marked points") of S. Given such a space, one may sample additional points x_{k+1}, x_{k+2}, …, x_m i.i.d. from ν and consider the random matrix M_m of distances between the x_i. One may again define a Gromov-weak topology on the marked space to be the weakest topology w.r.t. which expectations of bounded continuous functions of M_m are continuous. We let F^k denote the Borel σ-algebra of the marked space. Clearly for any m > k one has a measurable map M^m → M^k that corresponds to "forgetting" the last m − k points. One can similarly define F^∞ on the space of (S, d, ν, x_1, x_2, …) with an x_j defined for all positive integers j. The argument that these spaces are standard Borel is essentially the same as in the case without marked points.

One immediate consequence of the definition of the Gromov-weak topology is the following: if (S, d, ν) ∈ M and x_1, x_2, … are chosen i.i.d. from ν, then one obtains approximations of (S, d, ν) by letting S_k = {x_1, …, x_k} (the first k points sampled from ν), letting d_k be the restriction of d to this set, and letting ν_k assign mass 1/k to each element of S_k. Then (S_k, d_k, ν_k) converges to (S, d, ν) a.s. in the Gromov-weak topology. A similar statement holds for marked spaces. If m < k and (S, d, ν, x_1, x_2, …, x_m) ∈ M^m then one may choose x_{m+1}, x_{m+2}, …, x_k i.i.d. from ν and consider the restriction of d to {x_1, …, x_k} with uniform measure, and x_1, …, x_m marked. Then these approximations converge a.s. to (S, d, ν, x_1, …, x_m) in the Gromov-weak topology on M^m.

Let N' be the space of all infinite-by-infinite matrices (entries indexed by N × N) with the usual product σ-algebra, and let N be the subset of N' consisting of those matrices with the property that for each k, the initial k × k submatrix describes a distance function on k elements, and the limit of the corresponding k-element metric spaces (endowed with the uniform probability measure on the k elements) exists in M. We refer to this limit as the limit space of the infinite-by-infinite matrix. It is a straightforward exercise to check that N is a measurable subset of N'.

Proposition 2.11. There is a one-to-one correspondence between

1. real-valued F-measurable functions φ on M, and

2. real-valued measurable functions φ' on N' whose value depends only on the limit space.

The relationship between the functions is the obvious one:
1. If we know φ', then we define φ by setting φ(S, d, ν) to be the a.s. value of φ'(M_∞) when M_∞ is the infinite matrix of pairwise distances of an i.i.d. sequence chosen via (S, d, ν).

2. If we know φ, then we define φ'(M_∞) to be φ of the limit space of M_∞ (for M_∞ ∈ N, say, and 0 otherwise).

Moreover, for each k ∈ N the analogous correspondence holds with (M^k, F^k) in place of (M, F).

Proof. We will prove the result for (M, F); the case of (M^k, F^k) for general k ∈ N is analogous. Suppose that φ' is a bounded, continuous function on N' which depends only on a finite number of coordinate entries. Then we know that the map (S, d, ν) ↦ E[φ'(M_∞)] is continuous, hence F-measurable, by the definition of the Gromov-weak topology, and a monotone class argument gives the same measurability when φ' is only assumed to be bounded and measurable. In particular, this holds if φ' is a bounded, measurable function on N' which depends only on the limit space; in this case φ'(M_∞) is a.s. constant, so the map above recovers that constant. This proves one part of the correspondence.

On the other hand, suppose that φ is an F-measurable function of the form (S, d, ν) ↦ E[ψ(M_k)] for some k ∈ N and some bounded continuous ψ on R^{k^2}. By the strong law of large numbers, φ((S, d, ν)) is the a.s. limit as m → ∞ of the averages of ψ evaluated on the distance matrices of the successive disjoint k-tuples (x_{(j−1)k+1}, …, x_{jk}), 1 ≤ j ≤ m, and each such average is a continuous function of finitely many entries of M_∞. Therefore the map which associates M_∞ with φ((S, d, ν)), where (S, d, ν) is the limit space of M_∞, is measurable as it is the limit of continuous maps. By a monotone class argument, this proves the other part of the correspondence.

We are now going to use Proposition 2.11 to show that certain subsets of M are measurable. We begin by showing that the set of compact metric spaces in M is measurable. Throughout, we let C consist of those elements of N whose limit space is compact.

Proposition 2.12. The set of compact metric spaces in M is measurable. More generally, for each k ∈ N we have that the set of compact metric spaces in M^k with k marked points is measurable.

Proof. We are going to prove the first assertion of the proposition (i.e., the case k = 0); the result for general values of k is analogous. For each ε > 0 and n ∈ N, we let N_{n,ε} be the set of those elements (d_ij) in N such that for every j there exists 1 ≤ k ≤ n such that d_jk ≤ ε. That is, (d_ij) is in N_{n,ε} provided the ε-balls centered at the points in the limit space which correspond to the first n rows (or columns) of (d_ij) cover the entire space. As N_{n,ε} is measurable, we have that both N_ε = ∪_n N_{n,ε} and ∩_{ε∈Q_+} N_ε are measurable. By Proposition 2.11, it therefore suffices to show that ∩_{ε∈Q_+} N_ε is equal to C.

Suppose that (d_ij) ∈ C. Fix ε > 0. As the limit space associated with (d_ij) is compact, it follows that there exists n ∈ N such that the union of the ε-balls centered at the points associated with the first n columns (or rows) of (d_ij) covers the entire space. Therefore C ⊆ ∩_{ε∈Q_+} N_ε, so we just need to establish the reverse inclusion.

Suppose that (d_ij) ∈ ∩_{ε∈Q_+} N_ε. We are going to show that (d_ij) ∈ C by showing that the limit space of (d_ij) is sequentially compact. Suppose that (j_k) is any sequence in N. It suffices to show that there exists a subsequence (ĵ_k) of (j_k) such that d_{ĵ_k ĵ_{k+1}} ≤ 2^{−k} for all k, because this implies that the corresponding sequence in the limit space is Cauchy hence convergent (recall that the limit space is complete). We construct this subsequence diagonally from (j_k) as follows. By assumption, there exists n_1 such that for every j there exists 1 ≤ k ≤ n_1 such that d_jk ≤ 2^{−2}. Therefore there exists 1 ≤ k_1 ≤ n_1 such that d_{j_k k_1} ≤ 2^{−2} for infinitely many k. Let (j^1_k) be the subsequence of (j_k) with d_{j^1_k k_1} ≤ 2^{−2} for all k. Then we note that d_{j^1_k j^1_ℓ} ≤ 2^{−1} for all k, ℓ. Assume that we have defined subsequences (j^1_k), …, (j^m_k) of (j_k). By assumption, there exists n_{m+1} such that for every j there exists 1 ≤ k ≤ n_{m+1} with d_{jk} ≤ 2^{−(m+2)}, and we may extract as above a further subsequence (j^{m+1}_k) of (j^m_k) with d_{j^{m+1}_k j^{m+1}_ℓ} ≤ 2^{−(m+1)} for all k, ℓ. Passing to a diagonal subsequence of the sequences (j^m_k) implies the result.
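The covering sets N_{n,ε} in the proof above have a direct finite-sample analogue: given a matrix of sampled pairwise distances, one can test whether the first n points form an ε-net. A small sketch (ours; the function name is our own):

```python
import numpy as np

# Finite-sample analogue (ours) of the sets N_{n,eps} from the proof above:
# test whether the first n sampled points form an eps-net of all m points.
def in_N_n_eps(d, n, eps):
    """d: m-by-m matrix of pairwise distances."""
    return bool(np.all(d[:, :n].min(axis=1) <= eps))

rng = np.random.default_rng(1)
pts = rng.uniform(size=(500, 2))            # i.i.d. samples from the unit square
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
# Compactness of the underlying space shows up as small eps-nets:
print(min(n for n in range(1, 501) if in_N_n_eps(d, n, 0.1)))
```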
To prove the measurability of certain sets in M, we will find it useful first to show that they are measurable with respect to the Gromov-Hausdorff topology and then use that there is a natural map from C into the Gromov-Hausdorff space which is measurable. In order to remind the reader of the Gromov-Hausdorff distance, we first need to remind the reader of the definition of the Hausdorff distance. Suppose that K_1, K_2 are closed subsets of a metric space (S, d). For each ε > 0, we let K^ε_j be the ε-neighborhood of K_j. Recall that the Hausdorff distance between K_1, K_2 is given by

d_H(K_1, K_2) = inf{ε > 0 : K_1 ⊆ K^ε_2 and K_2 ⊆ K^ε_1}.

Suppose that (S_1, d_1), (S_2, d_2) are compact metric spaces. The Gromov-Hausdorff distance between (S_1, d_1) and (S_2, d_2) is given by

d_GH((S_1, d_1), (S_2, d_2)) = inf d_H(ϕ_1(S_1), ϕ_2(S_2)),   (2.3)

where the infimum is over all metric spaces (S, d) and isometric embeddings ϕ_j : S_j → S. We let X be the set of all compact metric spaces equipped with the Gromov-Hausdorff distance d_GH. More generally, for each k ∈ N, we let X^k be the set of all compact metric spaces (S, d) marked with k points x_1, …, x_k ∈ S. We equip X^k with the distance function

d^k_GH((S_1, d_1, x^1_1, …, x^1_k), (S_2, d_2, x^2_1, …, x^2_k)) = inf ( d_H(ϕ_1(S_1), ϕ_2(S_2)) ∨ max_{1≤j≤k} d(ϕ_1(x^1_j), ϕ_2(x^2_j)) ),

where the infimum is as in (2.3). We refer the reader to [Vil09, Chapter 27] as well as [BBI01, Chapter 7] for more on the Hausdorff and Gromov-Hausdorff distances.

We remark that in (2.3), one may always take the ambient metric space to be ℓ^∞. Indeed, this follows because every compact metric space can be isometrically embedded into ℓ^∞. We will use this fact several times in what follows. We also note that there is a natural projection π : C → X. Moreover, if we equip N with the ℓ^∞ topology (in place of the product topology), then the projection π : C → X is continuous. Indeed, this can be seen by using the representation of d_GH in terms of the distortion of a so-called correspondence between metric spaces; see [Vil09, Chapter 27]. Since the product topology generates the same σ-algebra as the ℓ^∞ topology on N, it follows that π is measurable. This observation will be useful for us for proving that certain sets in N are measurable. We record this fact in the following proposition.

Proposition 2.13. The projection π : C → X which associates an element of C with its (compact) limit space is measurable.

In the following proposition, we will combine Proposition 2.11 and Proposition 2.13 to show that the set of compact, geodesic metric spaces in M is measurable.

Proposition 2.14. The set of compact, geodesic spaces is measurable in M.

Proof. That the set of geodesic spaces is closed, hence measurable, in X follows from [Vil09, Theorem 27.9]; see also the discussion in [BBI01, Chapter 7.5]. Therefore the result follows by combining Proposition 2.11 and Proposition 2.13.

We note that it is also possible to give a short proof of Proposition 2.14 which does not rely on the measurability of the projection π : C → X. The following proposition will imply that the set of good measure endowed geodesic spheres is measurable in M.

Proposition 2.15. For each k ≥ 0, the set of elements of M^k whose underlying metric space is a compact, geodesic metric space homeomorphic to S^2 is measurable.

We will prove Proposition 2.15 in the case that k = 0 (i.e., we do not have any extra marked points). The proof for general values of k is analogous. As in the proof of Proposition 2.14, it suffices to show that the set of geodesic metric spaces (S, d) which are homeomorphic to S^2 is measurable in X. In order to prove this, we first need to prove the following lemma.

Lemma 2.16. Suppose that (S, d) is a geodesic metric space homeomorphic to S^2 and suppose that γ is a non-space-filling curve on S. Let U be a connected component of S \ γ and let A = S \ U. For every ε > 0, γ is homotopic to a point inside of the ε-neighborhood of A.

Proof. Since γ is a continuous curve, it follows that U is topologically equivalent to D.
Let ϕ : D → U be a homeomorphism. Then there exists δ > 0 so that Γ = ϕ(∂(1 − δ)D) is contained in the ε-neighborhood of A. Since Γ is a simple curve, it follows that there exists a homeomorphism ψ from D to the component V of S \ Γ which contains γ. Let γ' = ψ^{−1}(γ). Then γ' is clearly homotopic to a point in D, hence γ is homotopic to ψ(0) in V, which implies the result.

Proof of Proposition 2.15. For simplicity, we will prove the result in the case that k = 0; the case of general values of k is established in an analogous manner. We are going to prove the result by showing that the set Y of geodesic metric spaces in X which are homeomorphic to S^2 is measurable in X. The result will then follow by invoking Proposition 2.11 and Proposition 2.13. For δ > 0 and (S, d) ∈ X, we let f(δ, (S, d)) denote the supremum, over closed curves γ in S with diam(γ) ≤ δ, of the infimum of diam(A) over sets A ⊆ S inside of which γ can be contracted to a point.

We are first going to show that Y' = Y, where Y' denotes the closure of Y in X. We clearly have that Y ⊆ Y', so we just need to establish the reverse inclusion. Suppose that (S, d) ∈ Y'; we assume without loss of generality that diam(S) = 1. Then there exists a sequence (S_n, d_n) in Y which converges to (S, d) in X. We note that we may assume without loss of generality that both S and the S_n's are subsets of ℓ^∞ such that d_H(S_n, S) → 0 as n → ∞ and that diam(S_n) = 1 for all n. Fix ε > 0. It suffices to show that there exists δ'' > 0 such that f(δ'', (S_n, d_n)) ≤ ε + 3δ/2 for all n ∈ N and for δ > 0 as fixed below. Indeed, this implies that the (S_n, d_n) converge to (S, d) in X regularly which, by [Beg44], implies that (S, d) is in Y.

Fix δ > 0 such that f(δ, (S, d)) < ε. We assume that n_0 ∈ N is sufficiently large so that

d_H(S_n, S) ≤ δ/16 for all n ≥ n_0.   (2.5)

We note that for each 1 ≤ n ≤ n_0 there exists δ_n > 0 such that f(δ_n, (S_n, d_n)) < ε. We set δ_0 = min_{1≤n≤n_0} δ_n. We are now going to show that f(δ/4, (S_n, d_n)) ≤ ε + 3δ/2 for all n ≥ n_0. Upon showing this, we will have that, with δ'' = δ_0 ∧ (δ/4), f(δ'', (S_n, d_n)) ≤ ε + 3δ/2 for all n.

Fix n ≥ n_0 and suppose that γ_n : S^1 → S_n is a path in S_n with diam(γ_n) ≤ δ/4. Then we can construct a path γ in S as follows. We pick times 0 ≤ t^n_0 < ⋯ < t^n_j ≤ 2π such that, with x^n_i = γ_n(t^n_i), we have

diam(γ_n([t^n_i, t^n_{i+1}])) ≤ δ/16 for each i (indices taken mod j + 1).   (2.6)

By (2.5), for each 1 ≤ i ≤ j there exists x_i ∈ S ⊆ ℓ^∞ such that ‖x^n_i − x_i‖_∞ ≤ δ/16. We then take γ to be the path S^1 → S which is given by successively concatenating geodesics in S from x_i to x_{i+1}. Consequently, by (2.5) and (2.6) we have that

‖x_i − x_{i+1}‖_∞ ≤ 3δ/16 for each i,

and since on the compact set S the d-distance between points is controlled by their ℓ^∞-distance, we may assume (decreasing δ if necessary) that this implies diam(γ) < δ. Moreover, we have that the d_H-distance between the ranges of γ_n and γ is at most δ/2. By assumption, we can contract γ to a point in S inside of a set A ⊆ S of diameter at most ε. Let x^n ∈ S_n be a point within ℓ^∞-distance δ/16 of a point x ∈ S whose distance from A is maximal.

We claim that the component B_n of S_n \ γ_n containing x^n has diameter at least 1 − ε − δ. Indeed, suppose that u, v ∈ S are such that there exists a path η connecting u, v which has distance at least δ/2 from γ. Arguing as above, we can find a path η_n in S_n whose range has Hausdorff distance at most δ/2 from the range of η, so that the range of η_n is disjoint from the range of γ_n. This proves the claim. Thus, as diam(S_n) = 1, with A_n = S_n \ B_n we have that A_n is contained in a set of diameter at most ε + 3δ/2 inside of which γ_n can be contracted to a point. Therefore f(δ/4, (S_n, d_n)) ≤ ε + 3δ/2 for all n ≥ n_0. This finishes the proof that Y' = Y.

To finish proving the result, we will show that Y (hence Y') can be written as an intersection of sets which are relatively open in the closure of geodesic spheres in X, hence is measurable. It follows from the argument given just above that, for each fixed δ > 0, the map (S, d) ↦ f(δ, (S, d)) restricts to a continuous map on Y'. It therefore follows that, for each ε > 0 and δ > 0, the set {(S, d) ∈ Y' : f(δ, (S, d)) < ε} is relatively open in Y', so that ∩_{ε∈Q_+} ∪_{δ∈Q_+} {(S, d) ∈ Y' : f(δ, (S, d)) < ε} is a Borel set in X. The result follows since this set is equal to Y.
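The Hausdorff distance used repeatedly in the last two proofs is straightforward to compute for finite subsets of a common ambient space; here is a minimal sketch (our own, for R^2 with the Euclidean metric):

```python
import numpy as np

# Sketch (ours): the Hausdorff distance recalled above, computed for finite
# subsets A, B of R^2 with the Euclidean metric.
def hausdorff(A, B):
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.1], [1.0, 0.0]])
print(hausdorff(A, B))  # 0.1: each set lies in the 0.1-neighborhood of the other
```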
Proposition 2.17. Fix a constant r > 0 and let M^2_SPH,r be the set of elements (S, d, ν, x, y) ∈ M^2_SPH such that R = d(x, y) − r > 0 (and note that this is a measurable subset of M^2_SPH). Then the space which corresponds to B•(x, R) (with its internal metric) is in M^1. The function M^2_SPH,r → M^1 given by associating (S, d, ν, x, y) to this space is measurable. Moreover, if we have a measurable way of choosing z_1, z_2, …, z_k ∈ ∂B•(x, R) and an orientation of ∂B•(x, R) that only requires us to look at S \ B•(x, R), then the map to the set of k slices (i.e., the metric measure spaces which correspond to the regions between the leftmost geodesics from each z_j to x) is measurable as a map M^2_SPH,r → (M^3)^k. (The three marked points in the jth slice are given by z_j, z_{j+1}, and the point where the leftmost geodesics from z_j and z_{j+1} to x first meet.)

If there is a unique geodesic from x to y, one example of a function which associates S \ B•(x, R) with points z_1, …, z_k is as follows. Assume that we have a measurable way of measuring "boundary length" on ∂B•(x, R). Then we take z_1, …, z_k ∈ ∂B•(x, R) to be equally spaced points according to boundary length, with z_1 given by the point on ∂B•(x, R) which is first visited by the geodesic from x to y.

Proof of Proposition 2.17. That the space which corresponds to B•(x, R) is an element of M^1 is obvious. We are now going to argue that the map which associates (S, d, ν, x, y) ∈ M^2_SPH,r with the metric measure space associated with B•(x, R) is measurable. To see this, we note that a point w is in S \ B•(x, R) if and only if there exist ε > 0 and y_1, …, y_ℓ ∈ S such that the following hold:

1. d(x, y_j) > R + ε for each 1 ≤ j ≤ ℓ (so that each B(y_j, ε) is disjoint from B(x, R)),

2. y ∈ B(y_1, ε) and w ∈ B(y_ℓ, ε), and

3. B(y_j, ε) has non-empty intersection with both B(y_{j−1}, ε) and B(y_{j+1}, ε) for each 2 ≤ j ≤ ℓ − 1.

Suppose that x_1 = x, x_2 = y, and x_3, x_4, … is an i.i.d. sequence chosen from ν, and suppose that d_ij = d(x_i, x_j). The above tells us how to determine those indices j such that x_j ∈ S \ B•(x, R). In particular, it is clear from the above that the event that x_j ∈ S \ B•(x, R) is a measurable function of (d_ij) viewed as an element of N.

Suppose that we are on the event that x_i, x_j ∈ B•(x, R). Then the event that the internal distance between x_i and x_j is at most δ is equivalent to the event that there exist ε > 0 and indices j_1 = i, j_2, …, j_{k−1}, j_k = j such that d_{j_ℓ j_{ℓ+1}} < ε for each 1 ≤ ℓ ≤ k − 1, (k − 1)ε < δ, and B(x_{j_ℓ}, ε) ⊆ B•(x, R) for each 1 ≤ ℓ ≤ k (which we can determine using the recipe above). Thus it is easy to see that the element of N which corresponds to the matrix of distances between the (x_i) which are in B•(x, R), equipped with the internal metric, is measurable. Thus the measurability of the metric measure space corresponding to B•(x, R), viewed as an element of M^1, follows by applying Proposition 2.11.

To see the final claim of the proposition, we note that a point w is in the slice between the leftmost geodesics from z_i and z_{i+1} to x if and only if there exists δ > 0 such that for every ε > 0 there exist points y_1, …, y_ℓ which, in addition to chain conditions analogous to conditions 1-3 above (with the balls B(y_j, ε) now required to lie in B•(x, R), and the chain connecting w to a point within ε of z_i), satisfy the following properties:

4. B(y_j, ε) has non-empty intersection with B(y_{j−1}, ε) and B(y_{j+1}, ε) for each 2 ≤ j ≤ ℓ − 1,

5. no geodesic from z_{i+1} to x passes through the B(y_j, ε), and

6. no geodesic from a point on ∂B•(x, R) which is infinitesimally to the left of z_i to x passes through the B(y_j, ε).

Property 5 holds if and only if there is no point u with d(z_{i+1}, u) + d(u, x) = d(z_{i+1}, x) and d(u, y_j) < ε for some 1 ≤ j ≤ ℓ, a condition which can be checked from the matrix of distances between sampled points. The last property can be checked in an analogous way, so the result thus follows in view of Proposition 2.11 and the argument described in the previous paragraph.
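The chain conditions in the proof above are directly algorithmic: from a finite matrix of sampled pairwise distances one can approximate which points lie in S \ B•(x, R) by a graph search. A sketch (entirely ours; the helper name is hypothetical, and the approximation is only meaningful for small ε and dense samples):

```python
import numpy as np

# Sketch (ours): approximate, from a matrix of sampled pairwise distances,
# which points lie in S \ B_filled(x, R) via the chain conditions above.
# Index 0 plays the role of x and index 1 the role of y.
def outside_filled_ball(d, R, eps):
    n = d.shape[0]
    node = d[0] > R + eps       # condition 1: eps-balls disjoint from B(x, R)
    adj = d < 2 * eps           # consecutive eps-balls overlap
    reached = np.zeros(n, dtype=bool)
    reached[1] = True           # the chain is anchored at y
    stack = [1]
    while stack:
        i = stack.pop()
        for j in np.flatnonzero(adj[i] & node & ~reached):
            reached[j] = True
            stack.append(j)
    return reached              # True: joined to y by a chain avoiding B(x, R)
```

The internal-distance criterion in the proof can be implemented the same way, with the breadth-first search replaced by a shortest-path search over the same graph.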
Proposition 2.18. Let ψ be the map that sends an element (S, d, ν, x, y) of M^2_SPH to the element of X^2 that represents the metric net from x to y. Then ψ is a measurable map from M^2_SPH to X^2.

Proof. We are going to prove the result by showing that the map from the set of doubly marked geodesic spheres in X^2 to itself which associates (S, d, x, y) with the metric net from x to y is continuous. This, in turn, implies the result by combining with Proposition 2.11 and Proposition 2.13.

Fix ε > 0 and suppose for i = 1, 2 that (S_i, d_i, x_i, y_i) is an element of X^2 which is a geodesic sphere and that the d_GH-distance between the two spaces is at most ε/2. Then we may assume without loss of generality that (S_1, d_1) and (S_2, d_2) are isometrically embedded into ℓ^∞ such that d_H(S_1, S_2) < ε, ‖x_1 − x_2‖_∞ < ε, and ‖y_1 − y_2‖_∞ < ε. For each r > 0 and i = 1, 2 we let U_{i,r} be the component of S_i \ B(x_i, r) which contains y_i. We are going to show that for each 2ε < r < d(x_1, y_1) − 2ε we have that ∂U_{1,r} is contained in the 7ε-neighborhood of ∂U_{2,r−2ε} and vice-versa. This, in turn, implies that the d_H-distance between the metric net in (S_1, d_1) from x_1 to y_1 and the metric net in (S_2, d_2) from x_2 to y_2 is at most 7ε, so the same is also true for the d_GH-distance.

Fix 2ε < r < d(x_1, y_1) − 2ε and suppose that v_1 ∈ ∂U_{1,r}. Then there exists u_1 ∈ U_{1,r} with d_1(u_1, v_1) < ε. Let γ_1 be a path in U_{1,r} connecting u_1 and y_1. Arguing as in the proof of Proposition 2.15, there exists a path γ_2 in S_2 terminating at y_2 such that the d_H-distance between the ranges of γ_1 and γ_2 (viewed as paths in ℓ^∞) is at most ε and ‖u_1 − u_2‖_∞ < ε, where u_2 = γ_2(0). In particular, the distance between any point on γ_2 and x_2 is at least r − 2ε. It thus follows that γ_2 is in U_{2,r−2ε}. In particular, u_2 ∈ U_{2,r−2ε}. Moreover, if u_2 had distance greater than 5ε from ∂U_{2,r−2ε}, then transferring paths back from S_2 to S_1 in the same manner would show that the ε-neighborhood of u_1 is contained in U_{1,r}, contradicting d_1(u_1, v_1) < ε with v_1 ∈ ∂U_{1,r}. Therefore u_2 is in the 5ε-neighborhood of ∂U_{2,r−2ε}. Thus, since ‖v_1 − u_2‖_∞ < 2ε, we have that v_1 is in the 7ε-neighborhood of ∂U_{2,r−2ε}, as desired.

3 Tree gluing and the Lévy net

Section 3.1 and Section 3.2 briefly recall two tree-mating constructions developed in [DMS14], one involving a pair of continuum random trees, and the other involving a pair of α-stable looptrees. These very brief sections are not strictly necessary for the current project, but we include them to highlight some relationships between this work and [DMS14] (relationships that play a crucial role in the authors' works relating the Brownian map and pure Liouville quantum gravity). The real work of this section begins in Section 3.3, which describes how to construct the α-Lévy net by gluing an α-stable looptree to itself (or equivalently, by gluing an α-stable looptree to a certain related real tree derived from the α-stable looptree, namely the geodesic tree of the Lévy net). The reader may find it interesting to compare the construction in Section 3.3, where a single α-stable looptree is glued to itself, to the one in Section 3.2, where two α-stable looptrees are glued to each other. In Section 3.4 we present a different but (it turns out) equivalent way to understand and visualize the Lévy net construction given in Section 3.3.
We give a review of continuous state branching processes in Section 3.5, then give a breadth-first construction of the Lévy net in Section 3.6, and finally prove the topological equivalence of the Lévy net constructions in Section 3.7. We end this section by showing in Section 3.8 that the embedding of the Lévy net into S^2 is determined up to homeomorphism by the geodesic tree and its associated equivalence relation in the Lévy net.

Gluing together a pair of continuum random trees

There are various ways to "glue together" two continuum trees to produce a sphere decorated by a space-filling path (describing the "interface" between the two trees). One approach, which is explained in [DMS14, Section 1.1], is the following: let X_t and Y_t be independent Brownian excursions, both indexed by t ∈ [0, T]. Thus X_0 = X_T = 0 and X_t > 0 for t ∈ (0, T) (and similarly for Y_t). Once X_t and Y_t are chosen, choose C large enough so that the graphs of X_t and C − Y_t are disjoint, and let R be the smallest rectangle containing both graphs, viewed as a Euclidean metric space. Let ≅ denote the smallest equivalence relation on R that makes two points equivalent if they lie on the same vertical line segment with endpoints on the graphs of X_t and C − Y_t, or they lie on the same horizontal line segment that never goes above the graph of X_t (or never goes below the graph of C − Y_t). Maximal segments of this type are shown in Figure 3.1. As explained in [DMS14, Section 1.1], if one begins with the Euclidean rectangle and then takes the topological quotient w.r.t. this equivalence relation, one obtains a topological sphere, and the path obtained by going through the vertical lines in left-to-right order is a continuous space-filling path on the sphere, which intuitively describes the "interface" between the two identified trees. In fact, this remains true more generally when X_t and Y_t are not independent, and the pair (X_t, Y_t) is instead an excursion of a correlated two-dimensional Brownian motion into the positive quadrant (starting and ending at the origin), as explained in detail in [DMS14, MS15b].

Gluing together a pair of stable looptrees

Also discussed in [DMS14, Section 1.3] is a method of obtaining a sphere by gluing together two stable looptrees (with the disk in the interior of each loop included), as illustrated in Figure 3.2. In the setting discussed there, each of the grey disks surrounded by a loop is given a conformal structure (that of a "quantum disk"), and this is shown to determine a conformal structure on the sphere obtained by gluing the trees together; given this structure, the interface between the trees in Figure 3.2 is shown to be an SLE_κ process for κ = 16/γ^2 ∈ (4, 8). In a closely related construction, the interface between the trees in the left side of Figure 3.1 is shown to be a space-filling form of SLE_κ in which the path "goes inside and fills up" each loop after it is created. As explained in [DMS14], one obtains a range of different values of κ by taking the trees to be correlated with each other and varying the correlation coefficient.

3.3 Gluing a stable looptree to itself to obtain the Lévy net

Figure 3.3 illustrates a procedure for generating a sphere from a single stable looptree, which in turn is generated from the time-reversal of a Lévy excursion with only upward jumps.
Precisely, Proposition 3.4 below will show that the topological quotient of the rectangle, w.r.t. the equivalence relation illustrated, actually is a.s. homeomorphic to the sphere.

Figure 3.1 (caption): There is a standard discrete analog of the construction shown on the left that produces a planar triangulation (with distinguished tree and dual tree) from a finite walk (X_n, Y_n) in Z^2_+ that starts and ends at (0, 0). The bottom figure is obtained by collapsing the horizontal red and blue lines to produce two trees, connected to each other by black edges. See [Mul67, Ber07, She11] for details.

The process Y_t illustrated there is sometimes known as the height process of the α-stable process X_t (or, more precisely, of the time-reversal of X_t). The fact that this Y_t is well-defined and a.s. has a continuous modification (along with Hölder continuity and the exact Hölder exponent) is established for example in [DLG05, Theorems 1.4.3 and 1.4.4] (see also [LGLJ98]). In this construction the upper tree in the figure is not independent of the lower tree (with holes); in fact, it is strictly determined by the Lévy excursion below, as explained in the figure caption. Note that every jump in the Lévy excursion (corresponding to a bubble) comes with a "height" which is encoded in the upper tree.

If one removes from the constructed sphere the grey interiors of the disks shown, one obtains a closed subset of the sphere; this set, together with its topological structure, can also be obtained directly without reference to the sphere (simply take the quotient topology on the set of equivalence classes in the complement of the grey regions in Figure 3.3). It is important to note that after a given time t, the set of record infima achieved after time t looks locally like the range of a stable subordinator with index α − 1 [Ber96, Chapter VIII, Lemma 1], and that in particular it a.s. has a well-defined Minkowski measure [FT83], which also corresponds to the time parameter of the stable subordinator.⁶

Figure 3.2 (caption): Gluing stable looptrees to each other. Left: X_t and Y_t are i.i.d. Lévy excursions, each with only negative jumps. Graphs of X_t and C − Y_t are sketched; red segments indicate jumps. Middle: Add a black curve to the left of each jump, connecting its two endpoints; the precise form of the curve does not matter (as we care only about topology for now) but we insist that it intersect each horizontal line at most once and stay below the graph of X_t (or above the graph of C − Y_t) except at its endpoints. We also draw the vertical segments that connect one graph to another, as in the left side of Figure 3.1, declaring two points equivalent if they lie on the same such segment (or on the same jump segment). Shaded regions (one for each jump) are topological disks. Right: By collapsing green segments and red jump segments, one obtains two trees of disks with outer boundaries identified.

We now give the formal definition of the Lévy net.

Definition 3.1. The (α-stable) Lévy net is the random doubly marked compact topological space which is constructed as follows. Fix α ∈ (1, 2) and suppose that X_t is the time-reversal of an α-stable Lévy excursion and let Y_t be its associated height process. Fix C > 0 large so that the graphs of X_t and C + Y_t are disjoint and let R be the smallest Euclidean rectangle which contains both the graphs of X_t and C + Y_t. We then define an equivalence relation on R as follows.
We declare points of R which lie above the graph of C + Y_t to be equivalent if they lie on a horizontal chord which does not cross the graph of C + Y_t. For each t, we declare the points of R on the vertical line segment from (t, X_t) to (t, C + Y_t) to be equivalent. Finally, we declare points of R which lie below the graph of X_t to be equivalent if they lie on a horizontal chord which does not cross the graph of X_t. Let π be the corresponding quotient map. We call the image of the graph of C + Y_t under π the geodesic tree, and the looptree associated with X_t the dual tree associated with the Lévy net. The Lévy net quotient is then marked by the roots of the geodesic tree and the dual tree.

⁶ ρ = 1/2 + (πα)^{−1} arctan(tan(πα/2)) = 1/2 + (πα)^{−1}(πα/2 − π) = 1 − 1/α. (Recall that for x ∈ (π/2, π) we have arctan(tan(x)) = x − π.) Thus in this case the index of the stable subordinator is αρ = α − 1. This value varies between 0 and 1 as α varies between 1 and 2. The dimension of the range is given by the index α − 1 (a special case of [Ber96, Chapter III, Theorem 15]).

Figure 3.3 (caption, fragment): … (This quantity corresponds to a "distance" to the dual root, in the sense of [DLG02].) Red and green lines indicate equivalences. Note that whenever the lower endpoints of two vertical red segments are connected to one another by a green segment, it must be the case that the upper endpoints have the same height (which may be hard to recognize from this hand-drawn figure). Right: Once the green lines are collapsed, one has a tree and a tree of loops (which we will refer to as either the dual tree or looptree). The tree above is the geodesic tree. The orange dot is the root of that tree. The blue dot is a "dual root" (a second marked point). The horizontal green lines above the graph of Y_t "wrap around" from one side of the rectangle to the other; these lines correspond to the points on the geodesic tree arc from the orange dot to the blue dot.

Although a priori we do not put a full metric space structure on the Lévy net, we define a "distance from the root" of a point in the Lévy net to be the distance inherited from the geodesic tree. Also, from every point in the Lévy net, one has either one or two distinguished "geodesics" from that point to the root, which correspond to paths in the geodesic tree. When there are two, we refer to them as a left geodesic and a right geodesic.

We now establish a few basic properties of the Lévy net.

Proposition 3.2. Suppose that Y_t is the height process associated with the time-reversal of an α-stable Lévy excursion with only upward jumps. It is a.s. the case that Y_t does not have a decrease time; that is, it is a.s. the case that there does not exist a time t_0 and δ > 0 such that Y_s ≥ Y_{t_0} for all s ∈ [t_0 − δ, t_0] and Y_s ≤ Y_{t_0} for all s ∈ [t_0, t_0 + δ].

Figure 3.4 (caption): Shown is the behavior of the geodesic tree and dual tree if Y did have a decrease time t_0. The middle blue line on the graph of C + Y_t corresponds to the decrease time, and the blue dots to its left and right are points which are all glued together by the Lévy net equivalence relation. Observe that every point in the Lévy net which corresponds to a point in the graph of C + Y_t which lies below the blue line would have more than one geodesic back to the root. This is a contradiction in view of Lemma 3.22, because then we would have a positive measure of points in the geodesic tree from which there is more than one geodesic to the root.

See Figure 3.4 for an illustration of the proof of Proposition 3.2. We will postpone the detailed proof to Section 3.6, at which point we will have collected some additional properties of the height process Y_t. We emphasize that Proposition 3.2 will only be used in the proof of Proposition 3.4 stated and proved just below, so the argument is not circular.
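For readers who wish to experiment, the height process has a standard discrete analogue (see [LGLJ98, DLG05]): for a walk S with integer steps of size at least −1, H_n counts the indices k < n at which S_k = min(S_k, …, S_n). The sketch below is our own (and, unlike Definition 3.1, is not time-reversed); it computes H naively in O(n^2) time.

```python
import numpy as np

# Discrete analogue (our sketch) of the height process:
# H_n = #{k < n : S_k = min(S_k, ..., S_n)} for a walk S with steps >= -1.
rng = np.random.default_rng(2)
steps = rng.choice([-1, 0, 1, 2], size=1500, p=[0.55, 0.2, 0.15, 0.1])
S = np.concatenate([[0], np.cumsum(steps)])

def height_process(S):
    H = np.zeros(len(S), dtype=int)
    for n in range(len(S)):
        m = np.minimum.accumulate(S[n::-1])[::-1]   # m[k] = min(S_k, ..., S_n)
        H[n] = int(np.count_nonzero(S[:n] == m[:n]))
    return H

H = height_process(S)   # H plays the role of Y_t for the walk S
```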
Proposition 3.3. Suppose that Y_t is the height process associated with the time-reversal of an α-stable Lévy excursion with only upward jumps. It is a.s. the case that Y_t has countably many local maxima, and each of these local maxima occurs at a distinct height (and hence, in particular, each local maximum is isolated).

Proof. This is established in the first assertion in the proof of [DLG05].

Proposition 3.4. If one glues a topological disk into each of the loops of the looptree instance associated with an instance of the Lévy net, then the topological space that one obtains is a.s. homeomorphic to S^2.

Proposition 3.4 implies that the quotient of the rectangle shown in Figure 3.3, w.r.t. the equivalence relation induced by the horizontal and vertical lines as illustrated, is topologically equivalent to S^2. We will prove Proposition 3.4 using Moore's theorem [Moo25], which for the convenience of the reader we restate here. Recall that an equivalence relation ≅ on S^2 is said to be topologically closed if and only if whenever (x_n) and (y_n) are two sequences in S^2 with x_n ≅ y_n for all n, x_n → x and y_n → y as n → ∞, then x ≅ y. Equivalently, ≅ is topologically closed if the graph {(x, y) : x ≅ y} is closed as a subset of S^2 × S^2. The topological closure of a relation ≅ is the relation whose graph is the closure of the graph of ≅. (Note that it is not true in general that the topological closure of an equivalence relation is an equivalence relation.) The following statement of Moore's theorem is taken from [Mil04].

Proposition 3.5. Let ≅ be any topologically closed equivalence relation on S^2. Assume that each equivalence class is connected and not equal to all of S^2. Then the quotient space S^2/≅ is itself homeomorphic to S^2 if and only if no equivalence class separates S^2 into two or more connected components.

Proof of Proposition 3.4. Proposition 3.2 implies that no vertical line segment corresponding to an equivalence class in Definition 3.1 (or Figure 3.3) has an endpoint on two distinct (non-zero-length) horizontal segments which correspond to an equivalence class in Definition 3.1. Thus no equivalence class contains a non-empty horizontal chord of both the upper and lower graphs. The equivalence classes can thus be classified as:

Type I: Those containing neither upper nor lower chords. These are isolated points (on the boundaries of the grey regions in Figure 3.3) or single vertical lines connecting one graph to the other.

Type II: Those containing an upper (but not lower) chord. By Proposition 3.3, such a chord can hit the graph of C + Y_t either two or three times, but not more. Thus these equivalence classes consist of a horizontal line segment attached to either two or three vertical chords.

Type III: Those containing a lower (but not upper) chord. Since stable Lévy processes with only downward jumps have a countable collection of unique local minima, such a chord must hit the black curves in either two or three places. In the (a.s. countable) set of places where the latter occurs, it is not hard to see that the rightmost point is a.s. in the interior of one of the boundaries of the grey regions.
(One can see from this that the path tracing the boundary of the looptree hits no point more than twice.) Thus the number of vertical line segments is either one (if one of the two endpoints lies on the boundary of a grey region) or two (if neither endpoint lies on the boundary of a grey region).

From this description, it is obvious that all equivalence classes are connected, fail to disconnect the space, and do not contain the entire space. It only remains to check that the equivalence relation is topologically closed. To do this we use essentially the same argument as the one given in [DMS14, Section 1.1]. Suppose that x_i and y_i are sequences with x_i → x and y_i → y, and x_i ≅ y_i for all i. Then we can find a subsequence of i values along which the equivalence classes of x_i and y_i all have the same type (of the types enumerated above). By compactness, we can then find a further subsequence along which the collection of segment endpoints converges to a limit. It is not hard to see that the resulting limit is necessarily a collection of vertical chords and horizontal chords (each of which is an equivalence class) that are adjacent at endpoints; since x and y are both in this limit we must have x ≅ y.

We next briefly remark that the Lévy net can be endowed with a metric space structure in various ways. The approach that we use in Definition 3.1 is to use the distance inherited from the leftmost-geodesic tree; given any two points x and y, one may follow their leftmost geodesics until they merge at a point z and define the distance to be the sum of the geodesic arc lengths from x to z and from y to z. Another is to consider the geodesic tree (as described by Y_t) with its intrinsic metric structure and then take the quotient (as in Section 2.2) w.r.t. the equivalence relation induced by the gluing with the looptree. Note that when two points in the upper tree are equivalent, their distance from the root is always the same; thus, the distance between any point and the root is the same in the quotient metric space as it is in the tree itself. This implies that the metric space quotient defined this way is not completely degenerate, i.e., it is not the case that all points become identified with each other when one takes the metric space quotient in this way. It would be natural to try to prove a stronger form of non-degeneracy for this metric structure: namely, one would like to show that a.s. no two distinct points in the Lévy net have distance zero from each other in this quotient metric. This is not something that we will prove for general α in this paper; however, in the case that α = 3/2, it will be derived in Section 4 as a consequence of the proof of our main theorem.

We will see in Section 3.8 that, given the structure described in Definition 3.1, one can recover additional structure: namely an embedding in the sphere (unique up to homeomorphism of the sphere), a cyclic ordering of the points around each metric ball boundary (which is homeomorphic to either a circle or a figure 8) with a distinguished point where the geodesic from x to y intersects the metric ball boundary, and a boundary length measure on each such boundary.

A second approach to the Lévy net quotient

We are now going to give another construction of a topological space with the height process Y_t as the starting point, which we will show just below is equivalent to the Lévy net.
Figure 3.5 (caption): Left: Illustration of Definition 3.6, the second approach to the Lévy net quotient. Shown is the graph of Y_t together with all horizontal lines, both above and below the graph, drawn as chords. The points on a horizontal chord that lies strictly above or below the graph (except for its two endpoints) are considered to be equivalent. The equivalence class corresponding to a given chord is either the chord itself or a pair of such chords above the graph with a common endpoint (a local maximum). The two horizontal purple segments correspond to sets of local minima of the same height, each indicated with a purple dot, which in turn correspond to jumps of the Lévy process. Only two such segments are drawn, but in fact there are infinitely many; the endpoints of such segments occupy a dense set of points on the graph of Y_t. Each such segment contains an uncountable collection of equivalence classes, including uncountably many single points (purple dots), countably many closed chords that lie strictly under the graph except at endpoints, and the pair of endpoints of the whole black segment (which is its own equivalence class). Each purple segment becomes a circle in the topological quotient. Right: Same graph with a horizontal stripe of "extra space" inserted at each purple segment. The height of the stripe can be chosen so that the sum of the heights of all of the (countably many) stripes is finite. At each of the (uncountably many) places where Y_t intersects the purple segment, a corresponding red vertical "bridge" is added crossing the green stripe; points on the same bridge are considered equivalent. Points on the closure of the same green rectangle (bounded between successive bridges) are also considered equivalent. The bottom, left, and right edges of each grey rectangle together constitute a single equivalence class, so that the topological quotient of each grey rectangle's boundary is a circle (as in the left figure).

Definition 3.6 (Second definition of the Lévy net). Let R be the smallest rectangle which contains the graph of the height process Y_t. We let ≅ be the equivalence relation on R given by declaring points which lie on a horizontal chord which is entirely above or below the graph of Y_t to be equivalent. See the left side of Figure 3.5 for an illustration of ≅.

The next proposition suggests an arguably simpler way to understand Definition 3.1 (or Figure 3.3), which only involves the upper graph C + Y_t (or equivalently just Y_t). The implications of this are discussed further in the caption to Figure 3.5.

Proposition 3.7. In the setting of Definition 3.6, it is a.s. the case that two distinct points on the graph of Y_t are equivalent in ≅ if and only if one of the following holds:

1. There is a horizontal chord above or below the graph of Y_t that connects those two points and intersects the graph of Y_t only at its endpoints.

2. There is a horizontal chord above the graph that intersects the graph of Y_t at exactly one location, in addition to its two endpoints.

3. The two points are the left and right endpoints of the (uncountable) set of local minima of a given height corresponding to a jump time for X_t.

Proof. This is immediate from the proof of Proposition 3.4.

The right hand side of Figure 3.5 illustrates an alternate way to represent the topological sphere shown in Figure 3.3. On the left hand side of Figure 3.5 (i.e., in Definition 3.6), two distinct points are considered to be equivalent if and only if either:

Case 1: The line segment connecting them is horizontal and intersects the graph of Y_t in at most finitely many points.
(Recall that it is a.s. the case that there can be at most three such intersection points, counting the endpoints themselves; and if one of these points is in the interior of the segment, it must be a local maximum of Y_t.)

Case 2: They are a pair representing the leftmost and rightmost local minima of a given height (which in turn corresponds to a jump in the Lévy process).

This is interesting because at first glance it looks like any two points of the same horizontal line in the left side of Figure 3.5 should be equivalent. But of course, this is not the case if the segment between them intersects the graph of Y_t infinitely often.⁷ It is straightforward to verify that the right side of Figure 3.5 (modulo the given equivalence relation) is homeomorphic to the middle image of Figure 3.3 (modulo the given equivalence relation). We remark that it is also straightforward to check directly that the relation on the right hand side of Figure 3.5 satisfies the conditions of Moore's theorem (Proposition 3.5), since each of the equivalence classes is a single point, a single line segment (horizontal or vertical), a solid rectangle, or the union of the left, right, and lower sides of a grey rectangle. Therefore the spaces defined in Definition 3.1 and Definition 3.6 are equivalent.

⁷ If one begins with the tree obtained by gluing along horizontal chords above the graph (the tree we call the geodesic tree), then each of the two types of equivalence classes described above produces an equivalence relation on this tree in which each equivalence class has exactly one or two elements. The smaller equivalence relation obtained by focusing on either one of these two cases is a dense subset of the full equivalence relation, so the full relation can be understood as the topological closure of either of these two smaller relations.

Characterizing continuous state branching processes

To study the Lévy net in more detail, we will need to recall some basic facts about continuous state branching processes, which were introduced by Jiřina and Lamperti several decades ago [Jiř58, Lam67a, Lam67b] (see also the more recent overview in [LG99] as well as [Kyp06, Chapter 10]). A Markov process (Y_t, t ≥ 0) with values in R_+, whose sample paths are càdlàg (right continuous with left limits), is said to be a continuous state branching process (CSBP for short) if the transition kernels P_t(x, dy) of Y satisfy the additivity property

P_t(x, ·) ∗ P_t(x', ·) = P_t(x + x', ·) for all x, x' ∈ R_+ and t ≥ 0,   (3.1)

where ∗ denotes convolution.

Remark 3.8. Note that (3.1) implies that the law of a CSBP at a fixed time is infinitely divisible. In particular, this implies that for each fixed t there exists a subordinator (i.e., a non-decreasing process with stationary, independent increments) A^t with A^t_0 = 0 such that A^t_x has the law of Y_t started from Y_0 = x. (We emphasize though that Y does not evolve as a subordinator in t.) We will make use of this fact several times.

The Lamperti representation theorem states that there is a simple time-change procedure that gives a one-to-one correspondence between CSBPs and non-negative Lévy processes without negative jumps (stopped when they reach zero), where each is a time-change of the other: writing J(t) = ∫_0^t Y_s ds, the Lamperti transform of Y is L(Y)_t = Y_{J^{−1}(t)}, where J^{−1} denotes the right-continuous inverse of J. The statement of the theorem we present below is lifted from a recent expository treatment of this result [CLUB09].

Theorem 3.9. The Lamperti transformation is a bijection between CSBPs and Lévy processes with no negative jumps stopped when reaching zero.
In other words, for any CSBP Y, L(Y) is a Lévy process with no negative jumps stopped whenever it reaches zero; and for any Lévy process X with no negative jumps stopped when reaching zero, L^{−1}(X) is a CSBP. Informally, the CSBP is just like the Lévy process it corresponds to except that its speed (the rate at which jumps appear) is given by a constant times its current value (instead of being independent of its current value). The following is now immediate from Theorem 3.9 and the definitions above:

Proposition 3.10. Suppose that X_t is a Lévy process with non-negative jumps that is strictly α-stable in the sense that for each C > 0, the rescaled process X_{C^α t} agrees in law with C X_t (up to a change of starting point). Let Y = L^{−1}(X). Then Y is a CSBP with the property that Y_{C^{α−1} t} agrees in law with C Y_t (up to a change of starting point). The converse is also true: namely, if Y is a CSBP with the property that Y_{C^{α−1} t} agrees in law with C Y_t (up to a change of starting point), then Y is the CSBP obtained as a time-change of the α-stable Lévy process with non-negative jumps.

Proposition 3.10 will be useful on occasions when we want to prove that a given process Y is the CSBP obtained as a time change of the α-stable Lévy process with non-negative jumps. (We refer to this CSBP as the α-stable CSBP for short.⁸) It shows that it suffices in those settings to prove that Y is a CSBP and that it has the scaling symmetry mentioned in the proposition statement. To avoid dealing with uncountably many points, we will actually often use the following slight strengthening of Proposition 3.10:

Proposition 3.11. Suppose that Y is a Markovian process indexed by the dyadic rationals that satisfies the CSBP property (3.1) and such that Y_{C^{α−1} t} agrees in law with C Y_t (up to a change of starting point) when C^{α−1} is a power of 2. Assume that Y is not trivially equal to 0 for all positive time, nor equal to ∞ for all positive time. Then Y is the restriction (to the dyadic rationals) of an α-stable CSBP.

Proof. By the CSBP property (3.1), the law of Y_1, assuming Y_0 = a > 0, is infinitely divisible and equal to the law of the value A_a, where A is a subordinator with A_0 = 0 (recall Remark 3.8). Fix k ∈ N and pick C > 0 such that C^{1−α} = 2^{−k}. By scaling, we have that Y_{C^{1−α}} agrees in law with C^{−1} A_{Ca}. By the law of large numbers, this law is concentrated on a E[A_1] when k is large; we observe that E[A_1] = 1, since otherwise (by taking the k → ∞ limit) one could show that Y is equal to 0 for all positive time or equal to ∞ for all positive time, contrary to our assumption. From this we deduce that Y is a martingale, and the standard upcrossing lemma allows us to conclude that almost surely Y has only finitely many upcrossings of the interval (x, x + ε) for any x and ε > 0, and that Y is a.s. bounded above. This in turn guarantees, for all t ≥ 0, the existence of left and right limits of Y_{t+s} as s → 0. It implies that Y is a.s. the restriction to the dyadic rationals of a càdlàg process, and there is a unique way to extend Y to a càdlàg process defined for all t ≥ 0. Since left limits exist almost surely at any fixed time, it is straightforward to verify that the hypotheses of Proposition 3.10 apply to Y.

CSBPs are often introduced in terms of their Laplace transform [LG99], [Kyp06, Chapter 10], and Proposition 3.10 is also immediate from this perspective. We will give a brief review of this here, since this perspective will also be useful in this article.
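As an illustration of Theorem 3.9 and Proposition 3.10, one can simulate the α-stable CSBP by running an α-stable Lévy process at a speed proportional to the current value. The Euler-type sketch below is entirely our own (it assumes scipy's default parameterization of stable laws, with β = 1 giving jumps of one sign only):

```python
import numpy as np
from scipy.stats import levy_stable

# Euler-type sketch (ours) of Y = L^{-1}(X): over [t, t + dt] the CSBP moves by
# an increment of the Levy process X run for duration Y_t * dt.
def csbp_path(alpha, y0, T, dt, seed=0):
    rng = np.random.default_rng(seed)
    y, ys = y0, [y0]
    for _ in range(int(T / dt)):
        if y <= 0:
            ys.append(0.0)               # absorbed at zero
            continue
        tau = y * dt                     # Levy time elapsed in one CSBP step
        dx = levy_stable.rvs(alpha, 1.0, scale=tau ** (1 / alpha),
                             random_state=rng)
        y = max(y + float(dx), 0.0)
        ys.append(y)
    return np.array(ys)

path = csbp_path(alpha=1.5, y0=1.0, T=1.0, dt=1e-3)
```

The scaling in Proposition 3.10 can then be checked empirically by comparing C times the path at time t with a path started from C y_0 at time C^{α−1} t.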
In the case of an α-stable CSBP Y_t, this Laplace transform is explicitly given by

E[e^{−λ Y_t} | Y_0 = x] = e^{−x u_t(λ)},  where  u_t(λ) = (λ^{1−α} + (α − 1)t)^{−1/(α−1)}.   (3.2)

More generally, CSBPs are characterized by the property that they are Markov processes on R_+ whose Laplace transform has the form given in (3.2), where u_t(λ), t ≥ 0, is the non-negative solution to the differential equation

∂u_t(λ)/∂t = −ψ(u_t(λ)),  u_0(λ) = λ.   (3.3)

The function ψ is the so-called branching mechanism for the CSBP and corresponds to the Laplace exponent of the Lévy process associated with the CSBP via the Lamperti transform (Theorem 3.9). In this language, an α-stable CSBP is called a "CSBP with branching mechanism ψ(u) = u^α." One of the uses of (3.2) is that it provides an easy derivation of the law of the extinction time of a CSBP, which we record in the following lemma.

Lemma 3.12. Suppose that Y is an α-stable CSBP with Y_0 = x and let ζ = inf{t ≥ 0 : Y_t = 0} be the extinction time of Y. Then we have that

P[ζ ≤ t] = exp(−x ((α − 1)t)^{−1/(α−1)}).   (3.5)

Proof. Since {ζ ≤ t} = {Y_t = 0}, we have

P[ζ ≤ t] = lim_{λ→∞} E[e^{−λ Y_t}] = lim_{λ→∞} exp(−x u_t(λ)) = exp(−x ((α − 1)t)^{−1/(α−1)}),

which proves (3.5).

As we will see in Section 3.6 just below, it turns out that the boundary length of the segment in a ball boundary between two geodesics in the Lévy net evolves as a CSBP as one decreases the size of the ball. The merging time for the geodesics corresponds to when this CSBP reaches 0. Thus Proposition 2.8 together with Lemma 3.12 allows us to relate the structure of geodesics in a space which satisfies the hypotheses of Theorem 1.1 with the Lévy net.

A breadth-first approach to the Lévy net quotient

Now, we would like to consider an alternative approach to the Lévy net in which we observe loops in the order of their distance from the root of the tree of loops (instead of in the order in which they are completed when tracing the boundary of the stable looptree). Consider a line at some height C + s, as depicted in Figure 3.6. As explained in the figure caption, we would like to define Z_s to be in some sense the "fractal measure" of the set of points at which this line intersects the graph of C + Y_t (which should be understood as some sort of local time) and then understand how Z_s evolves as s changes. A detailed account of the construction and properties of Z_s, along with Proposition 3.14 (stated and proved below), appears in [DLG05]. We give a brief sketch here.

First of all, in what sense is Z_s defined? Note that if we fix s, then we may define the set E_s = {t : Y_t > s}. Observe that within each open interval of E_s the process X_t evolves as an α-stable Lévy process, which attains the same value at its endpoints and is strictly larger than that value in the interim. In other words, the restriction of X_t to that interval is (a translation and time-reversal of) an α-stable Lévy excursion. If we condition on the number N_ε of excursions of this type that reach height at least ε above their endpoint height, then it is not hard to see that the conditional law of the set of excursions is that of an i.i.d. collection of samples from the Lévy excursion measure used to generate X_t (restricted to the positive and finite measure set of excursions which achieve height at least ε). The ordered collection of Lévy excursions agrees in law with the ordered collection one would obtain by considering the "reflected α-stable Lévy process" (with positive jumps) obtained by replacing an α-stable Lévy process R_t by R'_t = R_t − inf{R_s : 0 ≤ s ≤ t}. (See [Ber96] for a more thorough treatment of local times and reflected processes.) The process R'_t then has a local time describing the amount of time it spends at zero; this local time is given precisely by R'_t − R_t = −inf{R_s : 0 ≤ s ≤ t}.
The set of excursions of R'_t explored during the local time interval [0, Q] (i.e., during the time before R'_t − R_t first reaches Q) can be understood as a Poisson point process corresponding to the product of Lebesgue measure on [0, Q] and the (infinite) Lévy excursion measure. In particular, one can deduce from this that as ε tends to zero (and β is the appropriate constant) the quantity ε^β N_ε a.s. tends to the local time; this can then be taken as the definition of Z_s.

Definition 3.13. We refer to the process Z_s constructed just above from the height process Y_t associated with X_t as the boundary length process associated with a Lévy net instance generated by X_t.

Note that the discussion above in principle only allows us to define Z_s for almost all s, or for a fixed countable dense set of s values. We have not ruled out the possibility that there exist exceptional s values for which the limit that defines Z_s is undefined. To be concrete, we may use the above definition of Z_s for all dyadic rational times and extend to other times by requiring the process to be càdlàg (noting that this definition is almost surely equal to the original definition of Z_s for almost all s values, and for any fixed s value; alternatively, see [DLG05] for more discussion of the local time definition).

Figure 3.6 (caption, fragment): … for each jump in X there is a corresponding upward jump in Z_s of the same magnitude. This is due to the fact (not obvious in this illustration) that all points on the corresponding looptree are identified with points on the upward graph of the same height; the local time of this set of points is the magnitude of the jump. The amount of this local time in the orange/black intersection that lies to the right of the point (t, s) is a quantity that lies strictly between 0 and the height of X at the lower end of that jump (see [DLG05, Proposition 1.3.3]); this quantity is encoded by the height of the red dot (one for each of the countably many jumps) shown in the center graph. Another perspective is that the jumps in Z_s correspond to loops observed in the tree on the right as one explores them in order of their distance from the boundary, where the distance between two macroscopic loops is the measure of the set of cut points between those loops. The orange circle on the right encloses the set of loops explored up until time s. Each red dot in the middle graph indicates where along the boundary a new loop is attached to the already-explored looptree structure, as defined relative to the branch in the geodesic tree connecting the root and dual root. Conditioned on Z_s, the vertical locations of the red dots are independent and uniform.

This allows us to use Proposition 3.11 to derive the following, which is referred to in [DLG05, Theorem 1.4.1] as the Ray-Knight theorem (see also the Lévy tree level set discussion in [DLG02, DLG05]):

Proposition 3.14. The process Z from Definition 3.13 has the law of an α-stable CSBP.

Proof. The CSBP property (3.1) follows from the derivation above because if the process records L + L' units of local time at height s, then the amount of local time it records at height t > s in the first L units of local time at height s is independent of the amount of local time it records at height t in the last L' units of local time. Moreover, the scaling property required by Proposition 3.11 follows from the scaling properties of X and Y.
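Before continuing, here is a quick numerical sanity check (ours, not from the paper) of the formulas (3.2)-(3.3) and the extinction law (3.5) for ψ(u) = u^α: Euler-integrating the ODE reproduces the closed form for u_t(λ), and letting λ → ∞ gives P[ζ ≤ t].

```python
import numpy as np

# Numerical sanity check (ours) of (3.2)-(3.3) and (3.5) for psi(u) = u**alpha.
alpha, lam, T = 1.5, 2.0, 1.0
n = 200_000
u, dt = lam, T / n
for _ in range(n):                 # explicit Euler for du/dt = -u**alpha
    u -= dt * u ** alpha
closed = (lam ** (1 - alpha) + (alpha - 1) * T) ** (-1 / (alpha - 1))
print(u, closed)                   # the two values agree closely

x, t = 1.0, 1.0                    # extinction law (3.5): the lambda -> oo limit
print(np.exp(-x * ((alpha - 1) * t) ** (-1 / (alpha - 1))))
```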
Related to Proposition 3.14 is the following correspondence between the jumps of the Z and X processes shown in Figure 3.6.

Proposition 3.15. The (countably many) jumps in the process Z from Definition 3.13 are a.s. in one-to-one correspondence with the (countably many) jumps in the process X used to generate the corresponding Lévy net instance. Namely, it is a.s. the case that whenever a jump in Z occurs at a time s we have s = Y_t for some t value at which the process X has a jump, and vice-versa; in this case, the corresponding jumps have the same magnitude.

Proof. When a jump occurs in Z_s, the line at height s intersects the graph of Y_t at all points at which X_t (run from right to left) reaches a record minimum following the jump, up until X_t (run from right to left) again reaches the value on the lower side of the jump. Using the description of local time above (in terms of R and R'), we see that the amount of local time added due to the appearance of the jump is precisely the height of the X_t jump.

In what follows, we will make use of the following setup. Let Z_s be as in Definition 3.13, and let [0, D] denote the interval on which it is defined. Let ∂U_s be the set of points which have distance equal to D − s from the root (so that ∂U_s corresponds to a horizontal line in Figure 3.7). In view of Definition 3.13 and Figure 3.6, we note that if x, y ∈ ∂U_s then it makes sense to talk about the clockwise (resp. counterclockwise) segments of ∂U_s which connect x and y. The boundary length of such a segment is determined by the local time of the intersection of the line at height s with the graph of Y_t which is to the left and right of the preimage of such a point under the quotient map. Fix r, t > 0 and assume that we are working on the event that D > t and Z_t ≥ r. Let γ be the branch of the geodesic tree which connects the root and the dual root. We can then describe each point x ∈ ∂U_s in terms of the length of the counterclockwise segment of ∂U_s which connects x and the point x_s on ∂U_s which is visited by γ.

Definition 3.16. For each s which is a jump time for Z and t such that s = Y_t, we refer to the amount of local time in the intersection of the line at height s with the graph of Y which lies to the right of the point (t, s) as the attachment point associated with the jump.

As explained in the caption of Figure 3.6, the attachment point associated with a given jump records the boundary length distance in the counterclockwise direction of the loop in the stable looptree encoded by X from the branch in the geodesic tree that connects the root of the geodesic tree to the root of the looptree.

Next, we make a simple observation:

Proposition 3.17. Suppose that A_s is a subordinator with A_0 = 0 and P[A_1 > 0] = 1. Suppose also that A'_s is an independent instance of the same process. Then for any fixed values a and b we have

E[ A_a / (A_a + A'_b) ] = a / (a + b).   (3.6)

By writing A_{a+b} in place of A_a + A'_b in the denominator in (3.6), Proposition 3.17 can be seen as equivalent to the elementary fact that A_a/a is a backward martingale. See, for example, the proof of [Ber96, Chapter III, Proposition 8]. We will give an independent proof of this simple fact below.

Proof of Proposition 3.17. First, suppose that a = mδ and b = nδ for some small δ > 0 and m, n ∈ N. Then A_a is the sum of m i.i.d. copies X_1, …, X_m of a random variable and A'_b is the sum of n such copies X_{m+1}, …, X_{m+n}. Imagine that we sample X_1, …, X_{m+n} in two steps:

1. Condition on the sequence of m + n values P = (X_1, …, X_{m+n}); and then

2. Randomly decide which of the elements of P contribute to A_a (as opposed to A'_b).
Proof of Proposition 3.17. First, suppose that a = mδ and b = nδ for some small δ > 0 and m, n ∈ N. Then A_a is the sum of m i.i.d. copies X_1, ..., X_m of a random variable and A′_b is the sum of n such copies X_{m+1}, ..., X_{m+n}. Imagine that we sample X_1, ..., X_{m+n} in two steps:

1. Condition on the sequence of m + n values P = (X_1, ..., X_{m+n}); and then
2. Randomly decide which of the elements of P contribute to A_a (as opposed to A′_b).

Note that P determines Q = A_a + A′_b = X_1 + ··· + X_{n+m}. Moreover, given P, we have that the conditional probability that a given element of P is part of the sum that makes up A_a is

m/(m + n) = a/(a + b).

In particular, E[A_a | P] = (a/(a + b))Q, so that E[A_a/(A_a + A′_b)] = a/(a + b). This proves (3.6) in the special case that a = mδ and b = nδ. The general statement of (3.6) is easily obtained by sandwiching the expectation between two approximating rationals. (Note that rounding a down to the nearest multiple of δ and b up to the nearest multiple of δ only decreases the expectation; rounding a up to the nearest multiple of δ and b down to the nearest multiple of δ only increases the expectation.)

Proposition 3.17 now implies another simple but interesting observation, which we record as Proposition 3.19 below (and which is related to the standard "confluence-of-geodesics" story); see Figure 3.8. We let η be the geodesic starting from the point on ∂U_t such that the length of the counterclockwise segment of ∂U_t to x_t is equal to r. For each s ≥ t, we let A_s (resp. B_s) be the length of the counterclockwise (resp. clockwise) segment of ∂U_s which connects η ∩ ∂U_s to x_s. Note that A_t = r, B_t = Z_t − r, and A_s + B_s = Z_s for all s ∈ [t, D].

Proposition 3.19. When the processes A, B, and Z and the values t and D are as defined just above, the following holds for the restrictions of these processes to the interval s ∈ [t, D].

1. The processes A_s and B_s are independent α-stable CSBPs.
2. The process A_s/Z_s = A_s/(A_s + B_s) is a martingale. (This corresponds to the horizontal location of the trajectory illustrated in Figure 3.8 when parameterized using distance.)
3. The process A_s/Z_s almost surely hits 0 or 1 before time D.

Figure 3.7 (caption): Recovering topological structure from bubbles. Shown is a representation of a Lévy net using a width-1 rectangle R. The top (resp. bottom) line represents the root (resp. dual root/target). The left and right sides of R are identified with each other and represent the branch γ in the geodesic tree connecting the root and dual root. If r is not one of the countably many values at which a jump in boundary length occurs, then each point z on the Lévy tree at distance r from the root is mapped to the point in the rectangle whose horizontal location is the length of the counterclockwise radius-r-ball boundary segment from γ to z divided by the total length of the radius-r-ball boundary; the vertical distance from the top of the rectangle is the sum of the squares of the boundary-length jumps that occur as the radius varies from 0 to r. Each of the green stripes represents the set of points whose distance from the root is a value r at which a jump does occur. Every red line (going from the top to the bottom of a stripe) is an equivalence class that encodes one of these points. The height of each green stripe is equal to the square of the jump in the boundary length corresponding to the grey triangle (the sum of these squares is a.s. finite since the sum of the squares of the jumps of an α-stable Lévy process is a.s. finite; see, e.g., [Ber96, Chapter I]). The top (resp. bottom) of each green stripe represents the outer boundary of the metric ball infinitesimally before (resp. after) the boundary length of the metric ball jumps. Each red line is a single closed equivalence class (except that when two red lines share an end vertex, their union forms a single closed equivalence class). The uppermost horizontal orange line is also a single closed equivalence class. Also, each pair of left and right boundary points of the rectangle (with the same vertical coordinate) is a closed equivalence class. Any point that does not belong to one of these classes is in its own class.

Definition 3.18. We refer to the quotient of the rectangle R by the closed equivalence classes described in the caption of Figure 3.7 as the breadth-first construction of the Lévy net quotient.
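Before the proof, items 1-3 can be visualized with a quick simulation (ours; the Euler-type Lamperti discretization, the scipy levy_stable sampler, and all parameter values are illustrative assumptions): the ratio A_s/(A_s + B_s) fluctuates as a martingale and is eventually absorbed at 0 or 1.

import numpy as np
from scipy.stats import levy_stable

def csbp_step(w, dt, alpha, rng):
    # one Euler-type Lamperti step of an alpha-stable CSBP, absorbed at 0
    if w <= 0.0:
        return 0.0
    xi = levy_stable.rvs(alpha, 1.0, random_state=rng)
    return max(w + (w * dt) ** (1 / alpha) * xi, 0.0)

rng = np.random.default_rng(1)
alpha, dt = 1.5, 1e-3
A, B = 1.0, 2.0              # boundary lengths on the two sides of eta
ratio = [A / (A + B)]
while 0.0 < ratio[-1] < 1.0 and len(ratio) < 10**5:
    A, B = csbp_step(A, dt, alpha, rng), csbp_step(B, dt, alpha, rng)
    if A + B == 0.0:
        break
    ratio.append(A / (A + B))
# ratio starts at 1/3, evolves as a martingale, and hits 0 or 1.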
Proof. The first point is immediate from the construction; recall the proof of Proposition 3.14.

Lemma 3.20. Conditionally on the process Z from Definition 3.13, the attachment points of Definition 3.16 are independent, and the conditional law of the attachment point associated with a jump of Z at time s is that of a uniform random variable on [0, Z_{s−}].

In the context of Figure 3.6, Lemma 3.20 states that conditionally on the process Z_s, the red dots in the bottom left of Figure 3.6 are conditionally independent and uniform on each of the vertical orange lines.

Proof of Lemma 3.20. This follows because the CSBP property (3.1) implies that for each fixed s we can write Z_{s+t} for t ≥ 0 as a sum of n independent α-stable CSBPs, each starting from Z_s/n, and the probability that any one of them exhibits a given jump within ε > 0 units of time is the same for each.

Theorem 3.21. The σ-algebra generated by the process Z as in Definition 3.13 and the attachment points defined in Definition 3.16 is equal to the σ-algebra generated by X. (In other words, the information encoded by the graph in the bottom left of Figure 3.6 a.s. determines the information encoded by the first graph.) That is, these definitions yield (as illustrated in Figure 3.6) an a.e.-defined one-to-one measure-preserving correspondence between

1. α-stable Lévy excursions and
2. α-stable Lévy excursions (which are naturally reparameterized and viewed as CSBP excursions) that come equipped with a way of assigning to each jump a distinguished point between zero and the lower endpoint of that jump (as in Definition 3.16 and illustrated in the bottom left graph of Figure 3.6).

Before we give the proof of Theorem 3.21, we first need the following lemma.

Lemma 3.22. Let W_t be a process that starts at W_0 = ε, then evolves as an α-stable CSBP until it reaches 0, then jumps to ε and continues to evolve as an α-stable CSBP until again reaching zero, and so forth. Then as ε tends to zero, the process W_t converges to zero in probability.

Proof. Since W_t evolves as a martingale away from the times that it hits zero, we expect to have order ε^{−1} of these jumps before W_t reaches 1. However, by scaling, on the event that the process reaches zero before reaching 2ε, the law of the time this takes is a random constant times ε^{α−1}. Since α ∈ (1, 2), we have that ε^{−1} · ε^{α−1} = ε^{α−2} tends to infinity as ε → 0, which implies that as ε → 0, the amount of time until W_t first goes above any fixed positive constant tends to infinity; from this the proposition follows.
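Lemma 3.22 is also easy to see numerically. The sketch below (ours; the same illustrative Lamperti discretization and parameters as above) measures how long the restarted process takes to exceed a fixed level; as ε decreases the time blows up, roughly like ε^{α−2}:

import numpy as np
from scipy.stats import levy_stable

def time_to_exceed(alpha, eps, level=0.5, dt=1e-3, seed=0, max_steps=10**6):
    # The Lemma 3.22 process: start at eps, evolve as an alpha-stable CSBP,
    # restart at eps whenever 0 is hit; return the first time the process
    # exceeds `level`.
    rng = np.random.default_rng(seed)
    w, t = eps, 0.0
    for _ in range(max_steps):
        if w >= level:
            return t
        if w <= 0.0:
            w = eps                      # restart at eps
            continue
        xi = levy_stable.rvs(alpha, 1.0, random_state=rng)
        w = max(w + (w * dt) ** (1 / alpha) * xi, 0.0)
        t += dt
    return float("inf")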
Proof of Theorem 3.21. We claim that the trajectory η considered in Proposition 3.19 is a.s. uniquely determined by the boundary length process Z_s together with the attachment points (i.e., the information in the decorated graph Z_s, as shown in the bottom left graph of Figure 3.6). Upon showing this, we will have shown that the geodesic tree is almost surely determined by Z_s and the attachment points, which in turn implies that the entire α-stable Lévy net is almost surely determined. To prove the claim, we choose two such trajectories η and η′ conditionally independently, given Z_s and the attachment points, and show that they are almost surely equal. We begin by noting that the length of the segment which is to the left of η evolves as an α-stable CSBP and the length which is to the right of η evolves as an independent α-stable CSBP. The same is also true for η′. It follows from this that in the intervals of time in which η is not hitting η′ we have that the lengths A_s (resp. C_s) of the segment which is to the left (resp. right) of both trajectories evolve as independent α-stable CSBPs. Our aim now is to show that the length B_s which lies between η and η′ also evolves as an independent α-stable CSBP in these intervals of time. Fix an interval of time I = [a, b] in which η does not collide with η′. Then we know that both A|_I and C|_I can be a.s. deduced from the ordered set of jumps they have experienced in I along with their initial values A_a, C_a (since this is true for α-stable CSBPs and α-stable Lévy processes). That is, if we fix s ∈ I and let J^ε_s be the sum of the jumps made by A|_{[a,s]} with size at least ε, then A_s is almost surely equal to A_a + lim_{ε→0}(J^ε_s − E[J^ε_s]), and the analogous fact is likewise true for C|_I. Since this is also true for (A + B + C)|_I, as it is an α-stable CSBP (Proposition 3.14), we see that B|_I is almost surely determined by the jumps made by B|_I and B_a in the same way. To finish showing that B|_I evolves as an α-stable CSBP, we need to show that the law of the jumps that it has made in I has the correct form. Lemma 3.20 implies that each time a new bubble comes along, we may sample which of the three regions it is glued to (with probability of each region proportional to its length). This implies that the jump law for B|_I is that of an α-stable CSBP, which implies that B|_I is in fact an α-stable CSBP. The argument is completed by applying Lemma 3.22 to deduce that since B_s starts at zero and evolves as an α-stable CSBP away from time zero, it cannot achieve any positive value in finite time.

We have now shown that it is possible to recover X and Y in the definition of the Lévy net from Z together with the attachment points. That is, it is possible to recover the top left graph in Figure 3.6 from the bottom left graph almost surely. We have already explained how to construct Z and the attachment points from X and Y, which completes the proof.

We now have the tools to give the proof of Proposition 3.2.

Proof of Proposition 3.2. See Figure 3.4 for an illustration of the argument. We suppose for contradiction that Y has a decrease time t_0. Then there exists h > 0 such that Y_s ≥ Y_{t_0} for all s ∈ (t_0 − h, t_0) and Y_s ≤ Y_{t_0} for all s ∈ (t_0, t_0 + h). Let u_0 (resp. v_0) be the supremum (resp. infimum) of times s before (resp. after) t_0 such that Y_s < Y_{t_0} (resp. Y_s > Y_{t_0}). As h > 0, we have that u_0 < t_0 < v_0. Let π be the quotient map as in Definition 3.1. By the definition of the geodesic tree in Definition 3.1, we have that π((t_0, X_{t_0})) = π((v_0, X_{v_0})). Consequently, it follows that π((u_0, X_{u_0})) = π((t_0, X_{t_0})). Since π((t, C + Y_t)) = π((t, X_t)) for all t, we conclude that π((u_0, C + Y_{u_0})) = π((t_0, C + Y_{t_0})). That is, there are two distinct geodesics from the root of the geodesic tree to π((t_0, X_{t_0})) = π((v_0, X_{v_0})). Therefore the projection under π of the graph of C + Y over [t_0, v_0] is a positive measure subset of the geodesic tree from which there are at least two geodesics in the geodesic tree back to the root.

We will now use Lemma 3.22 to show that the subset of the geodesic tree from which there are multiple geodesics back to the root a.s. has measure zero. It is shown in Proposition 3.19 that the boundary length between two geodesics in the Lévy net evolves as an α-stable CSBP as the distance from the dual root increases. Suppose that x is a fixed point in the Lévy net and that η is the branch in the geodesic tree from x back to the root. Fix ε > 0 and let τ_0 = τ′_0 = 0.
We let η_0 (resp. η′_0) be the branch in the geodesic tree back to the root which starts from clockwise (resp. counterclockwise) boundary length distance ε from x = η(τ_0). We let τ_1 (resp. τ′_1) be the time at which η first merges with η_0 (resp. η′_0). Assuming that η_0, ..., η_j and η′_0, ..., η′_j as well as τ_0, ..., τ_j and τ′_0, ..., τ′_j have been defined, we let τ_{j+1} (resp. τ′_{j+1}) be the first time that η merges with η_j (resp. η′_j) and let η_{j+1} (resp. η′_{j+1}) be the branch of the geodesic tree starting from ε units in the clockwise (resp. counterclockwise) direction along the boundary relative to η(τ_{j+1}) (resp. η(τ′_{j+1})). Suppose that there are at least two geodesics from x = η(0) back to the root of the geodesic tree. Then it would be the case that there exists δ > 0 such that for all sufficiently small ε > 0 there is a j such that either τ_{j+1} − τ_j ≥ δ or τ′_{j+1} − τ′_j ≥ δ. By Lemma 3.22, this a.s. does not happen, from which the result follows.

We will later also need the following lemma, which gives an explicit description of the time-reversal of the Lévy process whose corresponding CSBP is used to generate a Lévy net.

Lemma 3.23. Suppose that α ∈ (1, 2) and W_t is an α-stable Lévy excursion with positive jumps (indexed by t ∈ [0, T] for some T). That is, W_t is chosen from the natural infinite measure on excursions of this type. Then the law of W_{T−t} is also an infinite measure, and corresponds to an excursion of a Markov process that has only negative jumps. When the process value is c, the jump law for this Markov process is given by a constant times a^{−α−1}(1 − a/c)^{α−2}.

Proof. This is a relatively standard sort of calculation about time-reversals of Lévy excursions. The Lévy excursion can be understood as a limit of measures obtained by starting an ordinary α-stable Lévy process with negative jumps at ε, and renormalizing the measure by a constant so that the probability of exceeding 1 before reaching zero is of constant order. The conditional law of the time-reversal, given the process up to a stopping time, is (roughly speaking) that of an α-stable Lévy process with negative jumps conditioned to have a record minimum of value zero (i.e., not to "jump past" zero first). Intuitively, this means that when considering the probability of a negative jump of magnitude a, one has to weight by the probability that an α-stable Lévy process from the new location, i.e. from c − a, will have a record minimum at exactly zero. Of course this probability is zero, but one may instead consider the probability that it has a record minimum within (0, ε) and compare the rate of scaling as ε → 0. The dimension of the record minima range for an unconstrained α-stable process with negative jumps is given by the index α − 1 (see [Ber96] and the footnote in Section 3.3). Essentially we are conditioning on the event that this range includes zero, which we approximate by the probability that it includes a point in [0, ε]. If we start at height c − a then the probability of this scales like (ε/(c − a))^{2−α}. (Note that for a random fractal subset of [0, 1] of dimension d, we would expect the probability that it intersects an ε-length interval to scale like ε^{1−d}.) This is an ε-dependent factor times (c − a)^{α−2}, which is precisely the extra factor that appears in the statement of Lemma 3.23.
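The heuristic can be condensed into a single weighting computation (our rendering of the argument just given, with ν denoting the α-stable jump measure, so ν(da) is proportional to a^{−α−1} da):

ν_c(da) ∝ a^{−α−1} da · ε^{2−α}(c − a)^{α−2} ∝ a^{−α−1}(1 − a/c)^{α−2} da,

where the middle factor approximates the probability that the record-minimum range (of dimension α − 1) started from height c − a meets [0, ε]; the ε-dependent factor is common to all jump sizes, and the c-dependent constant c^{α−2} is absorbed into the normalization.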
Topological equivalence of Lévy net constructions

We have so far given three different descriptions of the Lévy net quotient, namely in Definition 3.1 (illustrated in Figure 3.3), Definition 3.6 (illustrated in Figure 3.5), and Definition 3.18 (illustrated in Figure 3.7). Moreover, we explained in Section 3.4 that the quotients in Definition 3.1 and Definition 3.6 yield an equivalent topology. The purpose of this section is to show that the topology of the quotient constructed in Definition 3.18 is equivalent to the topology constructed in Definition 3.1.

Proposition 3.24. The topology of the Lévy net quotient constructed as in Definition 3.1 is equivalent to the topology of the quotient constructed in Definition 3.18. In particular, the quotient constructed in Definition 3.18 is a.s. homeomorphic to S^2.

We remark that it is also possible to give a short, direct proof that the quotient described in Definition 3.18 is a.s. homeomorphic to S^2 using Moore's theorem (Proposition 3.5), though we will not do so in view of Proposition 3.24.

For each r > 0, we let Z^r_s be the local time of the intersection of the graph of Y with the horizontal line at height s and width r (i.e., the segment connecting (0, s) with (r, s)). Note that Z_s = Z^T_s where T is the length of the Lévy excursion. In order to show that the topology of the breadth-first construction of the Lévy net quotient from Definition 3.18 (illustrated in Figure 3.7) is equivalent to that associated with the constructions from Definition 3.1 (illustrated in Figure 3.3) and Definition 3.6 (illustrated in Figure 3.5), we first need to construct a modification of Z^r_s which has certain continuity properties. We will then use this modification to construct the map which takes the construction described in Figure 3.5 to the breadth-first construction.

Proposition 3.25. The process (r, s) → Z^r_s has a jointly measurable modification which almost surely satisfies the following two properties (for all r, s simultaneously).

We need to collect several intermediate lemmas before we give the proof of Proposition 3.25. We begin with two elementary estimates for α-stable CSBPs.

Lemma 3.26. Suppose that W is an α-stable CSBP with W_0 > 0 and let W^* = sup_{s≥0} W_s. There exist constants c_0, β > 0 depending only on α such that

P[W^* ≥ M W_0] ≤ c_0 M^{−β} for all M ≥ 1.

Proof. Assume that W_0 = 1. By the Lamperti transform (Theorem 3.9), it suffices to prove the result in the case of an α-stable Lévy process with only upward jumps starting from 1 and stopped upon first hitting 0 in place of W. Let S_t (resp. I_t) be the running supremum (resp. infimum) of the Lévy process. Then we in particular have for each M ≥ 1 that the probability that S exceeds M before the process first hits 0 is at most a constant times M^{−β}, which yields the result.

Lemma 3.27. Suppose that W is an α-stable CSBP. There exists a constant c_0 > 0 depending only on α such that

P[W_t > 0] ≤ c_0 W_0 t^{1/(1−α)} for all t > 0.

Proof. Using the representation of the Laplace transform of an α-stable CSBP given in (3.2), (3.3), we have for λ > 0 that

E[exp(−λ W_t)] = exp(−W_0 u_t(λ)), where u_t(λ) = (λ^{1−α} + (α − 1)t)^{1/(1−α)}.

Taking λ = t^{1/(1−α)} yields the result.

For each s, u ≥ 0, we let T^u_s be the smallest value of r such that Z^r_s ≥ u. On the event that T^u_s < ∞, we note that the same argument used to prove Proposition 3.14 implies that Z^{T^u_s}_t evolves as an α-stable CSBP for t ≥ s with initial value u.
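The extinction computation used in the proof of Lemma 3.27 (and again in (3.29) below) can be sanity-checked numerically. In the snippet (ours; the function name is illustrative), letting λ → ∞ in u_t(λ) recovers the extinction probability exp(−W_0((α − 1)t)^{1/(1−α)}):

import numpy as np

def u_t(lam, t, alpha):
    # Laplace exponent of an alpha-stable CSBP at time t, cf. (3.2)-(3.3):
    # E[exp(-lam * W_t)] = exp(-W_0 * u_t(lam)).
    return (lam ** (1 - alpha) + (alpha - 1) * t) ** (1 / (1 - alpha))

alpha, w0, t = 1.5, 1.0, 2.0
p_extinct = np.exp(-w0 * ((alpha - 1) * t) ** (1 / (1 - alpha)))
print(p_extinct, np.exp(-w0 * u_t(1e12, t, alpha)))  # the two agree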
Lemma 3.28. There exists a constant c_0 > 0 such that the following is true. Fix s > 0. For each u ≥ 0 and w, v > 0 we have that

P[T^{u+v}_s − T^u_s ≥ w] ≥ 1 − exp(−c_0 v w^{−1/α}).  (3.12)

Proof. Let n be the excursion measure associated with an α-stable Lévy process with only upward jumps from its running infimum. As explained in [Ber96, Chapter VIII.4], there exists a constant c_α > 0 depending only on α such that n[ζ ≥ t] = c_α t^{−1/α}, where ζ denotes the length of the excursion. This implies that in v units of local time, the number N of excursions with length at least t is distributed as a Poisson random variable with mean c_α v t^{−1/α}. Note that on the event that we have at least one such excursion, it is necessarily the case that T^{u+v}_s − T^u_s ≥ t. Consequently, (3.12) follows from the explicit formula for the probability mass function of a Poisson random variable evaluated at 0.

We turn to describe the setup for the proof of Proposition 3.25. We first assume that we have taken a modification of Z^r_s so that the processes Z^{T^u_s}_t for u, s ∈ Q_+ are càdlàg. Fix s_0 > 0. Then we know that Z_t for t ≥ s_0 evolves as an α-stable CSBP starting from Z_{s_0}. Fix δ > 0. We inductively define stopping times as follows. First, we let n_1 = ⌈4δ^{−1} Z_{s_0}⌉ and δ_1 = Z_{s_0}/n_1, and we let Z^{1,j}_t = Z^{T^{jδ_1}_{s_0}}_t − Z^{T^{(j−1)δ_1}_{s_0}}_t, so that the Z^{1,j} for 1 ≤ j ≤ n_1 are independent α-stable CSBPs defined on the time-interval [s_0, ∞), all with initial value δ/5 ≤ δ_1 ≤ δ/4 (unless n_1 = 1). We then let τ_1 denote the resulting stopping time. Assume that stopping times τ_1, ..., τ_k and CSBPs Z^{j,1}, ..., Z^{j,n_j} have been defined for 1 ≤ j ≤ k. We then let n_{k+1} = ⌈4δ^{−1} Z_{τ_k}⌉ and δ_{k+1} = Z_{τ_k}/n_{k+1}, and define the Z^{k+1,j} analogously. Then the Z^{k+1,j}_t are independent α-stable CSBPs defined on the time-interval [τ_k, ∞), all with initial value δ/5 ≤ δ_{k+1} ≤ δ/4 (unless n_{k+1} = 1), and we let τ_{k+1} denote the resulting stopping time. By further modifying Z if necessary, we may also assume that the processes Z^{k,j}_t are càdlàg. We set n^* := sup_k n_k. Combining (3.13) and Lemma 3.26, we see for constants c_0, β > 0 that on the event {Z_{s_0} ∈ [a, b]} the bound (3.14) holds.

Lemma 3.29. For each δ > 0 and δ < a < b < ∞ there exist a constant c_0 > 0 and a universal constant β > 0 such that the estimate (3.15) holds on the event {Z_{s_0} ∈ [a, b]}.

Proof. Throughout, we shall assume that we are working on the event {Z_{s_0} ∈ [a, b]}. By (3.14), we know that there exist constants c_0, β > 0 such that (3.16) holds. We take M = n^{1/2} Z_{s_0}/δ so that the error term on the right hand side of (3.16) is at most a constant times n^{−β/2}. Let F_t be the σ-algebra generated by Z^{T^u_s}_r for all s ≤ r ≤ t with u, s, r ∈ Q_+. We claim that, given F_{τ_k}, we have that τ_{k+1} − τ_k is stochastically dominated from below by a random variable ξ_k such that the probability that ξ_k is at least 1/n_{k+1} is at least some constant p_0 > 0 (which may depend on δ but not on n). Upon showing this, (3.15) will follow by combining (3.16) with binomial concentration. We note that the claim is clear in the case that n_{k+1} = 1, so we now assume that n_{k+1} ≥ 2. Since τ_{k+1} ≤ τ̃_{k+1}, it suffices to prove the stochastic domination result for τ̃_{k+1} − τ_k in place of τ_{k+1} − τ_k.

Proof of Proposition 3.25. We assume that we are working with the modification of Z^r_s as defined just after the statement of Lemma 3.27. We will prove the result by showing that r → Z^r_· for r ∈ Q_+ is almost surely uniformly continuous with respect to the uniform topology. Throughout, we assume that s_0, δ_0, δ > 0 are fixed and we let H_{s_0,δ_0} = {Z_{s_0} ∈ [δ_0/2, δ_0]}. Also, c_j > 0 will denote a constant (which can depend on s_0, δ_0, δ). For each ℓ ∈ N and ∆ > 0 we let F^δ_{ℓ,∆} denote the corresponding event. Lemma 3.26 and Lemma 3.28 together imply the bound (3.17). By optimizing over M, it follows from (3.17) that (3.18) holds. Let ζ = inf{s > 0 : Z_s = 0}.
By performing a union bound over ℓ values, from (3.18) and Lemma 3.12 we have, with F^δ_∆ = ∩_ℓ F^δ_{ℓ,∆}, that (3.20) holds. Therefore the Borel-Cantelli lemma implies that with ∆ = e^{−j}, for each δ > 0 there almost surely exists j^δ_F ∈ N (random) such that j ≥ j^δ_F implies that F^δ_∆ occurs. We also let G^δ_{ℓ,∆} be the event that the corresponding estimate holds for every s ∈ Q with s ∈ [s_0 + (ℓ − 1)∆, s_0 + ℓ∆] and t_1, t_2 ∈ Q_+. We claim that it suffices to show that (3.21) holds. Letting G^δ_∆ = ∩_ℓ G^δ_{ℓ,∆}, we have from (3.21), by performing a union bound over ℓ values (and applying Lemma 3.12 as in the argument to prove (3.20)), that (3.22) holds. Thus the Borel-Cantelli lemma implies that with ∆ = e^{−j}, for each δ > 0 there almost surely exists j^δ_G ∈ N (random) such that j ≥ j^δ_G implies that G^δ_∆ occurs. In particular, this implies that the corresponding estimate holds for every s ≥ s_0 with s ∈ Q and t_1, t_2 ∈ Q_+, for ∆ = e^{−j} and j ≥ j^δ_G.

Assume that j ≥ j^δ_F ∨ j^δ_G so that with ∆ = e^{−j} we have that both F^δ_∆ and G^δ_∆ occur. Suppose that t_1, t_2, s are such that Z^{t_2}_s − Z^{t_1}_s ≥ δ. With ℓ as in (3.22), it must be true that Z^{t_2}_{s_0+ℓ∆} − Z^{t_1}_{s_0+ℓ∆} ≥ 2δ^2. This implies that there exists k such that

T^{kδ^2}_{s_0+ℓ∆} ≤ t_2 and T^{(k−1)δ^2}_{s_0+ℓ∆} ≥ t_1.  (3.23)

Rearranging (3.23), we thus obtain a bound which implies that r → Z^r_·|_{[s_0,∞)} for r ∈ Q_+ has a certain modulus of continuity with respect to the uniform topology. In particular, r → Z^r_·|_{[s_0,∞)} for r ∈ Q_+ is uniformly continuous with respect to the uniform topology, hence extends continuously. The result then follows (assuming (3.21)) since s_0, δ_0, δ > 0 were arbitrary.

To finish the proof, we need to establish (3.21). For each j, we let E_j denote the corresponding event. We first claim that G^δ_{1,∆} ⊇ ∩^n_{j=1} E_j. To see this, fix a value of s ∈ [s_0, s_0 + ∆] and suppose that Z^{t_2}_s − Z^{t_1}_s ≥ δ. Let j be such that τ_j ≤ s < τ_{j+1} and let k be the first index so that Z^{j,1}_s + ··· + Z^{j,k}_s ≥ Z^{t_1}_s. The claim follows because we have that Z^{j,k+1}_{s_0+∆} ≥ 2δ^2 on ∩_j E_j. Thus to finish the proof, it suffices to show (3.25). Applying a union bound together with (3.26) in the second step below, we have for each n ∈ N that (3.27) holds. Applying Lemma 3.26, we therefore have that (3.28) holds. Optimizing over n and M values implies (3.25).

Proof of Proposition 3.24. As we remarked earlier, it suffices to show the equivalence of the quotient topology described in Figure 3.5 with the quotient topology described in Figure 3.7. We will show this by arguing that Z^r_s induces a continuous map Z̃^r_s from Figure 3.5 to Figure 3.7 which takes equivalence classes to equivalence classes in a bijective manner. This will prove the result because this map then induces a bijection which is continuous from the space which arises after quotienting as in Figure 3.5 to the space which arises after quotienting as in Figure 3.7, and a bijection which is continuous from one compact space to another is a homeomorphism.

Fix a height s as in Definition 3.6 (Figure 3.5) and let t be the corresponding height as in Definition 3.1 (Figure 3.3). If t is not a jump height for Z, then we take Z̃^r_s = Z^r_t/Z_t. Suppose that t is a jump height for Z. If s is the y-coordinate of the top (resp. bottom) of the corresponding rectangle, we take Z̃^r_s = lim_{q↓t} Z^r_q/Z_q (resp. Z̃^r_s = lim_{q↑t} Z^r_q/Z_q). Suppose that s is between the bottom and the top of the corresponding rectangle. If (s, r) is outside of the interior of the rectangle, then we take Z̃^r_s = Z^r_t/Z_t. Note that in this case we have that the limit lim_{q→t} Z^r_q/Z_q exists and is equal to Z^r_t/Z_t.
Let s_1 (resp. s_2) be the y-coordinate of the bottom (resp. top) of the rectangle. If (s, r) is in the rectangle, then we take Z̃^r_s to be given by linearly interpolating between the values of Z̃^r_{s_1} and Z̃^r_{s_2}. That is,

Z̃^r_s = ((s_2 − s) Z̃^r_{s_1} + (s − s_1) Z̃^r_{s_2}) / (s_2 − s_1).

By the continuity properties of Z given in Proposition 3.25 and the construction of Z̃, we have that the map (s, r) → Z̃^r_s is continuous. Observe that Z̃ is constant on the equivalence classes as defined in Definition 3.6 (Figure 3.5). This implies that Z̃ induces a continuous map from the topological space one obtains after quotienting by the equivalence relation as in Definition 3.6 (Figure 3.5) into the one from Definition 3.18 (Figure 3.7, not yet quotiented). As Z̃ bijectively takes equivalence classes as in Definition 3.6 (Figure 3.5) to equivalence classes as in Definition 3.18 (Figure 3.7), it follows that Z̃ in fact induces a bijection which is continuous from the quotient space as in Definition 3.6 (Figure 3.5) to the quotient space as in Definition 3.18 (Figure 3.7). The result follows because, as we mentioned earlier, a bijection which is continuous from one compact space to another is a homeomorphism.

Recovering embedding from geodesic tree quotient

We now turn to show that the embedding of the Lévy net into S^2 is unique up to a homeomorphism of S^2. Recall that a set is called essentially 3-connected if deleting two points always produces either a connected set, a set with two components one of which is an open arc, or a set with three components which are all open arcs. In particular, every 3-connected set is essentially 3-connected. Suppose that a compact topological space K can be embedded into S^2 and that φ_1 : K → S^2 is such an embedding. It is then proved in [RT02] that K is essentially 3-connected if and only if for every embedding φ_2 : K → S^2 there exists a homeomorphism ψ : S^2 → S^2 such that φ_2 = ψ ∘ φ_1.

Proposition 3.30. For each α ∈ (1, 2), the Lévy net is a.s. 3-connected. Hence by [RT02] it can a.s. be embedded in S^2 in a unique way (up to a homeomorphism). (Footnote 9: It is clear from our construction that when K is a Lévy net there exists at least one embedding of K into S^2. More generally, it is shown in [RRT14] that a compact and locally connected set K is homeomorphic to a subset of S^2 if and only if it contains no homeomorph of K_{3,3} or K_5.)

Proof. Suppose that W is an instance of the Lévy net and assume for contradiction that W is not 3-connected. Then there exist distinct points x, y ∈ W such that W \ {x, y} is not connected. This implies that we can write W \ {x, y} = A ∪ B for A, B ⊆ W disjoint and A, B ≠ ∅. We assume that W has been embedded into S^2. Let Ã (resp. B̃) be given by A (resp. B) together with all of the components of S^2 \ W whose boundary is entirely contained in A (resp. B). Then Ã, B̃ are disjoint and we can write S^2 as a disjoint union of Ã, B̃, {x}, {y}, and the components of S^2 \ W whose boundary has non-empty intersection with both A and B. Suppose that C is such a component. Then there exists a point w ∈ ∂C which is not in A or B. That is, either x ∈ ∂C or y ∈ ∂C. Note that S^2 \ (Ã ∪ B̃ ∪ {x, y}) must have at least two distinct components C_1, C_2 (for otherwise A, B would not be disjoint). If either x or y is in ∂C_1 ∩ ∂C_2 then we have a contradiction because the distance of both ∂C_1 and ∂C_2 to the root of W must be the same but (in view of Figure 3.7) we know that the metric exploration from the root to the dual root in W does not separate more than one component from the dual root at any given time. If ∂C_1 ∩ ∂C_2 does not contain either x or y, then there must be a third component C_3 of S^2 \ (Ã ∪ B̃ ∪ {x, y}). This leads to a contradiction because then (by the pigeonhole principle) either ∂C_1 ∩ ∂C_3 or ∂C_2 ∩ ∂C_3 contains either x or y.
We are now going to use the fact that the Lévy net a.s. has a unique embedding into S^2 up to homeomorphism to show that the Lévy net almost surely determines the Lévy excursion X used to generate it.

Proposition 3.31. For each α ∈ (1, 2), the α-stable Lévy excursion X used in the construction of the Lévy net is a.s. determined by the Lévy net together with an orientation.

Proof. By Proposition 3.30, we know that the embedding of the Lévy net into S^2 is a.s. determined up to homeomorphism; we assume throughout that we have fixed an orientation so that the embedding is determined up to orientation preserving homeomorphism. Recall that the jumps of Z_s are in correspondence with those made by X_t. Thus, if we can show that the jumps of Z are determined by the Lévy net, then we will get that the jumps of X are determined by the Lévy net. More generally, if we can show that the processes Z^{T^u_s}_t are determined by the Lévy net, then we will be able to determine the jumps of X and their ordering. This will imply the result because X is a.s. determined by its jumps and the order in which they are made. For simplicity, we will just show that Z_s is a.s. determined by the Lévy net. The proof that Z^{T^u_s}_t is a.s. determined follows from the same argument.

Let x (resp. y) denote the root (resp. dual root) of the Lévy net. Fix r > 0 and condition on R = d(x, y) − r > 0. We let ∂B(x, R) be the boundary of the ball of radius R centered at x in the geodesic tree in the Lévy net. Fix ε > 0. We then fix points z_1, ..., z_N ∈ ∂B(x, R) as follows. We let z_1 be the unique point on ∂B(x, R) which is visited by the unique geodesic from x to y. For j ≥ 2 we inductively let z_j be the first clockwise point on ∂B(x, R) (recall that we have assumed that the Lévy net has an orientation) such that the geodesic from z_j to x merges with the geodesic from z_{j−1} to x within distance ε of ∂B(x, R). As the embedding of the Lévy net into S^2 is a.s. determined up to (orientation preserving) homeomorphism, it follows that z_1, ..., z_N is a.s. determined by the Lévy net.

Conditional on the boundary length L_r of ∂B(x, R), we claim that N is distributed as a Poisson random variable with mean m_ε^{−1} L_r, where m_ε = ((α − 1)ε)^{1/(α−1)}. The desired result will follow upon showing this, because then m_ε N → L_r in probability as ε → 0, so that L_r is a.s. determined by the Lévy net. To compute the conditional distribution of N given L_r, it suffices to show that the boundary lengths of the spacings are given by i.i.d. exponential random variables with mean m_ε given L_r. We will establish this by using that L_r evolves as an α-stable CSBP as r varies. Fix δ > 0 and let (Z^δ_j) be a sequence of i.i.d. α-stable CSBPs, each starting from δ. Then the CSBP property (3.1) implies that the process s → L_{r+s} is equal in distribution to Z^δ_1 + ··· + Z^δ_n + Z̃^δ, where n = ⌊L_r/δ⌋ and Z̃^δ is an independent α-stable CSBP starting from L_r − δn < δ. We then define indices (j^δ_k) inductively as follows. We let j^δ_1 be the first index j such that the amount of time it takes the α-stable CSBP Z^δ_1 + ··· + Z^δ_j (which starts from jδ) to reach 0 is at least ε.
Assuming that j^δ_1, ..., j^δ_k have been defined, we take j^δ_{k+1} to be the first index j such that the amount of time that it takes the α-stable CSBP Z^δ_{j^δ_k+1} + ··· + Z^δ_j (which starts from δ(j − (j^δ_k + 1))) to reach 0 is at least ε. Note that the random variables δ(j^δ_{k+1} − j^δ_k) are i.i.d. We claim that the law of δ j^δ_1 converges in distribution as δ → 0 to that of an exponential random variable with mean m_ε. To see this, we fix u > 0, let u_δ = δ⌈u/δ⌉, and let W be an α-stable CSBP starting from u_δ. Then we have that

P[δ j^δ_1 > u] = P[W_ε = 0] = lim_{λ→∞} E[exp(−λ W_ε)].  (3.29)

As in the proof of Lemma 3.27, using the representation of the Laplace transform of an α-stable CSBP given in (3.2), (3.3), the Laplace transform on the right hand side of (3.29) is given by exp(−(λ^{1−α} + (α − 1)ε)^{1/(1−α)} u_δ). Therefore the limit on the right hand side of (3.29) is given by exp(−m_ε^{−1} u_δ). This, in turn, converges to exp(−m_ε^{−1} u) as δ → 0, which proves the result.

4 Tree gluing and the Brownian map

4.1 Gluing trees given by Brownian-snake-head trajectory

We now briefly review the standard construction of the Brownian map (see e.g. [Le 14, Section 3.4]). Our first task is to identify the measures µ^1_SPH and µ^2_SPH discussed in Section 1.5 with certain Brownian snake excursion measures. In fact, this is the way µ^1_SPH and µ^2_SPH are formally constructed and defined.

Let S be the set of all finite paths in R beginning at 0. An element of S is a continuous map w : [0, ζ] → R for some value ζ = ζ(w) ≥ 0 that depends on w. We refer to S as the snake space and visualize an element of S as the (y-to-x coordinate) graph {(w(y), y) : y ∈ [0, ζ]}. As illustrated in Figure 4.1, such a graph may be viewed as a "snake" with a body beginning at (0, 0) and ending at the "head," which is located at (w(ζ), ζ). From this perspective, ζ = ζ(w) is the height of the snake, which is also the vertical head coordinate, and w(ζ) is the horizontal head coordinate. A distance on S is given by

d_S(w, w′) = |ζ(w) − ζ(w′)| + sup_{y≥0} |w(y ∧ ζ(w)) − w′(y ∧ ζ(w′))|.  (4.1)

There is a natural way to create an excursion into S beginning and ending at the zero snake. To do so, let Y_t be a Brownian excursion into [0, ∞) (starting and ending at zero). Then Y_t encodes a continuum random tree (CRT) T [Ald91a, Ald91b, Ald93], together with a map φ : [0, T] → T that traces the boundary of T in order. Once one is given the Y_t process, one may construct a Brownian process Z_τ indexed by τ ∈ T and write X_t = Z_{φ(t)}. Precisely, we take X_t to be the Gaussian process for which X_0 = 0 and

Cov(X_s, X_t) = inf{Y_r : r ∈ [s, t]}.  (4.2)

An application of the Kolmogorov-Chentsov theorem implies that X has a Hölder continuous modification; see, e.g., [Le 14, Section 3.4]. The RHS of (4.2) describes the length of the intersection of the two tree branches that begin at φ(0) and end at φ(s) or φ(t).

Figure 4.1 (caption): As in Figure 3.1 except that the pair (X_t, Y_t) is produced from a Brownian snake excursion instead of a Brownian excursion. In this setup Y_t is chosen from the (infinite) Brownian excursion measure and X_t is a Brownian motion indexed by the corresponding CRT. The process (X_t, Y_t) determines a trajectory in the snake space S. Left: At a given time t, the "snake" has a body that looks like the graph of a Brownian motion (rotated 90 degrees). The blue vertical line represents the leftmost point reached by the process (X_t, Y_t). The (single) time at which the blue line is hit corresponds to the Brownian map root.
At all other times, distance from the blue line represents distance from the root in the Brownian map metric. Middle: Suppose inf{X_·} < a < 0 and consider the vertical line through (a, 0). This divides the snake space S into the subspace S_{>a} of snakes not hit by the red line (except at the origin if a = 0) and the complementary subspace S_{≤a} = S \ S_{>a} of snakes that are hit. Right: If a snake is hit by the red line, then it has a unique "ancestor snake" whose body lies entirely to the right of the red line and whose head lies on the red line. A snake lies on the boundary of S_{>a} if and only if it has this form. The distance from a snake in S_{≤a} to S_{>a} (in terms of the metric on S, not the Brownian map metric) is the difference in head height between itself and this ancestor. This distance evolves as a Brownian motion in the snake space diffusion.

Given the (X_t, Y_t) process, it is easy to draw the body of the snake in Figure 4.1 for any fixed time t ∈ [0, T]. To do so, for each value b < Y_t, one plots the point (X_s, b), where s is the last time before t at which the Y process reached height b. Note also that if one takes s′ to be the first time after t when the Y process reaches b, then we must have X_{s′} = X_s. Intuitively speaking, as Y_t goes down, the snake head retraces the snake body; as Y_t goes up, new randomness determines the left-right fluctuations; see the discrete analog in Figure 4.2. As discussed in the captions of Figure 4.1 and Figure 4.2, this evolution can be understood as a diffusion process on S.

Figure 4.2 (caption): A discrete analog of the snake space diffusion process. In this model one tosses a fair coin at each step to decide whether the snake shrinks (we delete the top edge) or grows (an independent fair coin is tossed to decide whether we add a left or right directed edge to the top). A number of planar map models are known to be encoded by close variants of the discrete snake shown here. The microscopic rules depend on the model (triangulations, quadrangulations, etc.) but the scaling limit is the snake space diffusion in each case.

We now consider two natural infinite measures on the space of excursions into S. The first is the measure described informally in the caption to Figure 4.1. To construct this, first we define n to be the natural Brownian excursion measure (see [RY99, Chapter XII] for more detail on the construction of n). Each such excursion comes with a terminal time T, and Y_t = 0 for all t ≥ T. We recall that the excursion measure is an infinite measure that can be constructed as follows. Define n_ε to be ε^{−1} times the probability measure on one-dimensional Brownian paths started at ε, stopped the first time they hit zero. Note that this measure assigns unit mass to the set of paths that reach 1 before hitting zero. The measure n is obtained by taking the weak limit of the n_ε measures as ε → 0 (using the topology of uniform convergence of paths, say). Note that for each a > 0 the n measure of the set of paths that reach level a is exactly a^{−1}. Moreover, if one normalizes n to make it a probability on this set of paths, then one finds that the law of the path after the first time it hits a is simply that of an ordinary Brownian motion stopped when it hits zero.

Now that we have defined n, we define Π to be a measure on excursions into S such that the induced measure on Y_t trajectories is n, and given the Y_t trajectory, the conditional law of X_t is that of the Brownian process indexed by the CRT encoded by Y_t (i.e., with covariance as in (4.2)). Given a sample from Π, the tree encoded by X_t is the tree of geodesics drawn from all points to a fixed root, which is the value of φ at the point t that minimizes X_t.
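The discrete snake of Figure 4.2 takes only a few lines to implement. The following rendering is ours (the data structure and coin conventions are illustrative; we adopt the reflection convention that an empty snake must grow):

import random

def discrete_snake(steps, seed=0):
    # Discrete analog of the snake space diffusion (Figure 4.2): the body is
    # a stack of +-1 horizontal edges; each step a fair coin decides whether
    # to shrink (pop the top edge) or grow (push an independent +-1 edge).
    rng = random.Random(seed)
    body, head_x, trajectory = [], 0, []
    for _ in range(steps):
        if body and rng.random() < 0.5:
            head_x -= body.pop()        # shrink: the head retraces the body
        else:
            e = rng.choice([-1, 1])     # grow: fresh left/right randomness
            body.append(e)
            head_x += e
        trajectory.append((head_x, len(body)))  # (X_t, Y_t) analog
    return trajectory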
The tree T described by Y_t (the dual tree) has the law of a CRT, and Y_t describes the distance in T from the dual root (which corresponds to time 0 or equivalently time T, which is the time when Y_t is minimal). Note that for any time t, we can define the snake to be the graph of the function from y ∈ [0, Y_t] to x that sends a point y to the value of the Brownian process at the point on T that is y units along the branch in T from φ(0) to φ(t).

As in Figure 4.1, for each a we let S_{>a} be the subspace of S which consists of those snakes w such that w(t) > a for all t ∈ [0, ζ]. That is, w ∈ S_{>a} if and only if its body lies to the right of the vertical line through (a, 0). We also let S_{≤a} = S \ S_{>a}.

Now that we have defined Π, we remark that it is natural to consider a related measure Π_+ on the set of excursions into S_{>0}, i.e., into the space of snakes whose bodies lie completely to the right of the vertical line through zero, except at their base point (0, 0). This can be constructed in two ways: one is to consider a sample from Π, find the location on the corresponding CRT at which the X_t value is minimal, and then re-root the tree at that point. The other way is to consider the measure n restricted to those excursions for which the minimum value of X_t is obtained within ε time units of zero, and then take a limit (appropriately normalized) as ε → 0. It is possible to check that these two approaches lead to an equivalent definition of Π_+. See, e.g., the work [LGW06] as well as [CU14] which consider a related question. In what follows, we will actually not make use of the measure Π_+ so we will omit the details of this correspondence here.

We next proceed to remind the reader how to associate an (X, Y) pair with a metric measure space structure. This will allow us to think of Π and Π_+ as measures on M. Roughly speaking, the procedure described in the left side of Figure 3.1 already tells us how to obtain a sphere from the pair (X, Y). The points on the sphere are the equivalence classes from the left side of Figure 3.1. The tree described by X alone (the quotient of the graph of X w.r.t. the equivalence given by the chords under the graph) can be understood as a geodesic tree (which comes with a metric space structure), and we may construct the overall metric space as a quotient of this metric space (as defined in Section 2.2) w.r.t. the extra equivalence relations induced by Y.

An equivalent way to define the Brownian map is to first consider the CRT T described by Y, and then define a metric and a quotient using X as the second step. This is the approach usually used in the Brownian map literature (see e.g. [Le 14, Section 3]) and we give a quick review of that construction here. Consider the function d° on [0, T]^2 defined by

d°(s, t) = X_s + X_t − 2 max( inf{X_r : r ∈ [s ∧ t, s ∨ t]}, inf{X_r : r ∈ [s ∨ t, T] ∪ [0, s ∧ t]} ),

and, for a, b ∈ T, set d°(a, b) = min{d°(s, t) : ρ(s) = a, ρ(t) = b}, where ρ : [0, T] → T is the natural projection map. Finally, for a, b ∈ T, we set

d(a, b) = inf { Σ_{i=1}^k d°(a_{i−1}, a_i) },

where the infimum is over all k ∈ N and a_0 = a, a_1, ..., a_k = b in T. We get a metric space structure by quotienting by the equivalence relation ≅ defined by a ≅ b if and only if d(a, b) = 0 and we get a measure on the quotient space by taking the projection of Lebesgue measure on [0, T].
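For an (X_t) sampled on a grid, the function d° above is a one-line computation. The following transcription is ours and purely illustrative; s and t are grid indices and [0, T] is treated as a circle:

import numpy as np

def d_circ(X, s, t):
    # d°(s, t) = X_s + X_t - 2 * max(m1, m2), where m1 is the minimum of X
    # over [s, t] and m2 the minimum over the complementary circular arc.
    s, t = min(s, t), max(s, t)
    m1 = X[s:t + 1].min()
    m2 = min(X[t:].min(), X[:s + 1].min())
    return X[s] + X[t] - 2.0 * max(m1, m2)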
As mentioned in the introduction, it was shown by Le Gall and Paulin [LGP08] (see also [Mie08]) that the resulting metric space is a.s. homeomorphic to S^2, and that two times a and b are identified if and only if the corresponding vertical red lines in the left side of Figure 3.1 (where X_t and Y_t are Brownian snake coordinates) belong to the same equivalence class as described in the left side of Figure 3.1. Thus the topological quotient described in the left side of Figure 3.1 is in natural bijection with the metric space quotient described above.

Given a sample from Π, the corresponding sphere comes with two special points corresponding to a snake whose head is at the leftmost possible value (the root), and the origin snake (the dual root). Indeed, if we let S denote the set of points on the sphere, ν the measure, x the root, and y the dual root, then we obtain a doubly marked metric measure space (S, d, ν, x, y) of the sort described in Section 2.4. We note that by construction Π_+ is supported on those snakes such that these two points coincide; in this case the sphere induced by a sample from Π_+ comes with only a single marked point x.

In fact, we claim that Π induces a measure on (M^2_SPH, F^2), and Π_+ induces a measure on (M^1_SPH, F^1). These measures are precisely the doubly and singly marked grand canonical ensembles of Brownian maps: i.e., they correspond to the measures µ^2_SPH and µ^1_SPH discussed in Section 1.3. There is a bit of an exercise involved in showing that the map from Brownian snake instances to (M^k, F^k) is measurable w.r.t. the appropriate σ-algebra on the space of Brownian snakes, so that µ^1_SPH and µ^2_SPH really are well-defined as measures on (M^1_SPH, F^1) and (M^2_SPH, F^2), respectively. In particular, one has to check that the distance-function integrals described in Section 2.4 (the ones used to define the Gromov-weak topology) are in fact measurable functions of the Brownian snake; one can do this by first checking that this is true when the metric is replaced by the function d° discussed above, and then extending this to the approximations of d in which the distance between two points is the infimum of the length taken over paths made up of finitely many segments of the geodesic tree described by the process X. This is a straightforward exercise, and we will not include details here.

Note that switching from Π to Π_+ corresponds in some sense to "conditioning" to have the two points coincide; intuitively, the probability that two randomly chosen points coincide should be inversely proportional to the area (i.e., the length of the excursion), so this should correspond to unweighting by the total area measure. (This can be made precise using approximations as briefly discussed above.) Similarly, if one considers the measure Π_+, weights the measure by total area, and samples an extra marked point uniformly from that area, one obtains the measure Π.

Given a snake excursion s chosen from Π, we define the snake excursion s̃ so that its associated surface is the surface associated to s rescaled to have total area 1. In other words, s̃ is the snake whose corresponding head process is

(X̃_t, Ỹ_t) = (ζ^{−1/4} X_{tζ}, ζ^{−1/2} Y_{tζ}), t ∈ [0, 1], where ζ = ζ(s).

Here we have scaled t by a factor of ζ, we have scaled Y_t by a factor of ζ^{−1/2}, and we have scaled X_t by a factor of ζ^{−1/4}. An excursion s can be represented as the pair (s̃, ζ(s)) where ζ(s) represents the length of the excursion, or equivalently, the area of the corresponding surface.
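As a consistency check on these exponents (our remark, with X̂, Ŷ denoting the unit-length versions): Brownian scaling gives Y_{tζ} =_d ζ^{1/2} Ŷ_t, and then by (4.2),

Cov(X_{sζ}, X_{tζ}) = inf{Y_r : r ∈ [sζ, tζ]} =_d ζ^{1/2} inf{Ŷ_r : r ∈ [s, t]},

so that X_{tζ} =_d ζ^{1/4} X̂_t. Hence (X̃_t, Ỹ_t) is indeed the head process of a unit-length snake excursion.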
Since a sample from the Brownian excursion measure n is an excursion whose length has law ζ^{−3/2} dζ [RY99, Chapter XII], where dζ is Lebesgue measure on R_+, we have the following:

Proposition 4.1. If we interpret Π as a measure on pairs (s̃, ζ), then Π can be written as Π̃ ⊗ t^{−3/2} dt, where dt represents Lebesgue measure on R_+, and Π̃ is a probability measure on the space of excursions of unit length.

We remark that it is also possible to show that Π_+ similarly has the form Π̃_+ ⊗ ζ^{−5/2} dζ, where dζ denotes Lebesgue measure on R_+. Moreover, the marginal law of the labeled CRT re-rooted at the root of the geodesic tree is the same under Π̃ and Π̃_+.

Figure 4.3 (caption): The snake trajectory corresponds to a path that traces the boundary of a (space-filling) tree of geodesics in the doubly marked Brownian map. The figure illustrates several branches of the geodesic tree (the tree itself is space-filling) along with the outer boundary (as viewed from the dual root) of a radius-r metric ball centered at the root. From a generic point on the doubly marked Brownian map, there is a unique path in the dual tree back to the dual root. The distances from the root vary as one moves along that path; this variation encodes the shape of the body of the snake, and the total quadratic variation along this path encodes the height of the snake's head. During the snake trajectory (as the snake itself changes) the first and last times that the horizontal coordinate of the snake's head reaches a = inf{X_t} + r correspond to the intersection point (shown in orange) on the dual-root-to-root geodesic whose distance from the root is r. Intuitively, as one traces the boundary of the space-filling geodesic tree (beginning and ending at the dual root), the orange dot is the first and last point that the path visits within the closed orange disk.

4.2 Brownian maps, disks, and Lévy nets

The purpose of this subsection is to prove that the metric net of the doubly marked Brownian map has the law of a 3/2-stable Lévy net. We will refer to the (countably many) components of the complement of this net as "bubbles" and will describe a one-to-one correspondence between these bubbles and the "holes" in the corresponding Lévy net. We will also introduce here the measure µ^L_DISK on random disks, which gives the law of the complementary components of a metric exploration in the Brownian map.

The two jump processes in Figure 3.6 correspond to different orders in which one might explore these holes. The first explores holes in a "depth-first" order, i.e., the order in which they are encountered by a path that traces the boundary of the geodesic tree; the second explores holes in a "breadth-first" order, i.e., in order of their distance from a root vertex. We will see what these two orderings look like within the context of the Brownian map, as constructed from a Brownian snake excursion.

In order to begin understanding the metric net of the Brownian map, we need a way to make sense of the boundary length measure on a metric ball within the Brownian map. Observe that for any real number a < 0, the snake diffusion process has the property that if the snake lies in S_{≤a} at time t, then its distance (in the snake space metric as defined in (4.1)) from the boundary of S_{>a} is given by Y_t − Y_s, where s is the supremum of the set of times before t at which the snake was in S_{>a}; see Figure 4.1 for an illustration. This distance clearly evolves as a Brownian motion until the next time it reaches zero. Let us define i_a(t) to be the total time before t that the snake process spends inside S_{>a}, and o_a(t) = t − i_a(t) the total amount of time before t that the snake process spends in S_{≤a}.
Then we find that when we parameterize time by o_a(t), this process is a positive, reflected Brownian motion, and hence has a well-defined notion of local time ℓ_a(t) for any given value of a (see [RY99, Chapter VI] for more on the construction of Brownian local time). In fact, it is not hard to see that a sample from Π may be obtained in two steps:

1. First sample the behavior of the snake restricted to S_{>a}, parameterized by i_a(t). We claim that this determines the local time ℓ_a(t) as parameterized by i_a(t). To see this, let Y^1_t be the difference between Y_t and the height of the ancestor snake head at time t (as in Figure 4.1), and define Y^2_t = Y_t − Y^1_t.

2. Second, given ℓ_a, sample the excursions that the snake makes into S_{≤a} according to a Poisson point process determined by the product of the local time measure dℓ_a and Π.

Note that each excursion is translated so that it is "rooted" at some point along the vertical line through (a, 0), instead of at (0, 0). (We note that the process ℓ_a(t) described and constructed just above is a special case of the so-called exit measure associated with the Brownian snake. See, e.g., [LG99] for more on exit measures.) From this discussion, the following is easy to derive:

Proposition 4.2. As a decreases, the process ℓ_a(T) evolves as a 3/2-stable CSBP.

Proof. The proof is nearly the same as the proof of Proposition 3.14. One has only to verify that the process satisfies the hypotheses of Proposition 3.11. Again, the scaling factor is obvious (one may rescale time by a factor of C^2, the Y_t process values by a factor of C and the X_t process values by a factor of C^{1/2}); the value of the ℓ_a(T) process then scales by C and its time to completion scales by C^{1/2}, suggesting that the scaling hypothesis of Proposition 3.11 is satisfied with α − 1 = 1/2, so that α = 3/2. The CSBP property (3.1) is also immediate from construction.

Proposition 4.3. The jumps in ℓ_a(T) are in one-to-one correspondence with the bubbles of the metric net from the root to the dual root of the Brownian map. If one keeps track of the location along the boundary at which each bubble root occurs together with the total boundary length process, one obtains an object with the law of the process Z_s as in Definition 3.13 together with the attachment points of Definition 3.16 (as shown in Figure 3.6). In particular, conditioned on the process ℓ_a(T), the attachment points are independent random variables, with the law associated with a jump occurring at a given value a being that of a uniform random variable in [0, ℓ_{a−}(T)].

Proof. If one conditions on the Brownian snake growth within the set S_{>a}, one can resample the locations of the excursions into S_{≤a}. In particular (taking limits as a approaches a bubble root time, along the dyadic rationals say) we find that the location at which each bubble occurs with respect to the a.s. unique point on the boundary which is visited by the geodesic connecting the root and dual root is uniform, independently of everything else.

We will now introduce the measure µ^L_DISK on random disks, which is one of the key actors in what follows, as it describes the law of the complementary components of a metric exploration in the Brownian map. The key ideas in understanding its definition and construction are illustrated in the captions of Figures 4.1-4.4. The correspondence between the metric net and the 3/2-stable Lévy net follows from Proposition 4.2 and Proposition 4.3. The problem is to condition on the metric net and consider the conditional law, given the metric net, of what happens inside each of the disks in the complement of the metric net. Each such disk can be understood as follows; see Figure 4.4.
Write [t_1, t_2] for the interval of time during which the snake trajectory traces the corresponding bubble. The snake process makes a number of excursions into S_{≤a} (as described in Figure 4.1) during this interval, and there is a well-defined local time corresponding to the time spent on the boundary of S_{>a} during this interval, which corresponds to the length of the bubble boundary. One can produce a truncated snake process by "excising" from [t_1, t_2] all the time intervals in which the snake lies in S_{≤a}. The truncated process corresponds to an excursion into S_{>a} that begins at (a, b) and reflects off the boundary of S_{>a} (when the head is at position a) for some amount of local time. It is not hard to see that, conditioned on this amount of local time, one may resample this truncated process independently of the set of excursions into S_{≤a} (where the latter are interpreted as excursions modulo vertical translation). We call µ^L_DISK the law of the metric disk produced by this truncated process. From the definition, it is clear that this law depends only on the boundary length L and possibly one additional marked point on the boundary. This marked point corresponds to when the path which traces the geodesic tree first enters the corresponding disk in the Brownian map. (This is the point which corresponds to the snake on the right side of Figure 4.4.) However, the argument in Lemma 4.13 given below implies that the law µ^L_DISK is given by unweighting the law of the complement of the metric ball which contains the dual root by the square of its boundary length. Since this latter law obviously does not have a marked boundary point, neither does µ^L_DISK. From the above discussion, we obtain the following.

Proposition 4.4. The metric net of a sample from µ^2_SPH has the law of a 3/2-stable Lévy net. In this correspondence, the 3/2-stable CSBP excursion ℓ_a(T) described above for a sample from µ^2_SPH agrees with the 3/2-stable CSBP excursion Z_s of Definition 3.13 (recall also Figure 3.6), up to an affine transformation relating a and s. Indeed, a sample (S, d, ν, x, y) from µ^2_SPH can be generated as follows: first sample an instance of the 3/2-stable Lévy net to be the metric net between x and y. Then glue in a conditionally independent disk from a probability measure µ^L_DISK for each hole in the Lévy net (which occurs in the canonical embedding of the Lévy net into S^2), where L is the boundary length of that hole and the gluing is done in a length-preserving way (and one defines the metric quotient in the usual way: as the largest metric compatible with the identification, see the end of Section 2.2).

Remark 4.5. In the case that α = 3/2, we now have that, up to time parameterization, both the process ℓ defined for Brownian maps and the process Z defined for the Lévy net can be understood as descriptions of the natural boundary length measure L_t discussed in Section 1.

4.3 Axioms that characterize the Brownian map

Most of this subsection will be devoted to a proof of the following Lévy net based characterization of the Brownian map. At the end of the section, we will explain how to use this result to derive Theorem 1.1.

Theorem 4.6. The doubly marked Brownian map measure µ^2_SPH is the unique (infinite) measure on (M^2_SPH, F^2) which satisfies the following properties, where an instance is denoted by (S, d, ν, x, y).

1. Given (S, d, ν), the conditional law of x and y is that of two i.i.d. samples from ν. In other words, the law of the doubly marked surface is invariant under the Markov step in which one "forgets" x (or y) and then resamples it from the given measure.

2. The law of the metric net from x to y is that of an α-stable Lévy net for some α ∈ (1, 2).
3. Fix r > 0 and consider the circle that forms the boundary ∂B•(x, r) (an object that is well-defined a.s. on the finite-measure event that the distance from x to y is at least r). Then the inside and outside of B•(x, r) (each viewed as an element of M^1) are conditionally independent, given the boundary length of ∂B•(x, r) (as defined from the Lévy net structure).

Let us emphasize a few points before we give the proof of Theorem 4.6.

• Recalling Proposition 4.4, in the case of µ^2_SPH one has α = 3/2. Moreover, Proposition 4.4 implies that µ^2_SPH satisfies the second hypothesis of Theorem 4.6, and the discussion before Proposition 4.2 implies that µ^2_SPH satisfies the third assumption.

• The second assumption together with Proposition 3.31 implies that the boundary length referenced in the third assumption is a.s. well-defined and has the law of a CSBP excursion (just like the CSBP used to encode the Lévy net). In particular, this implies that for any r > 0, the measure of the event d(x, y) > r is positive and finite.

• In the coupling between the metric net and the Lévy net described above, we have made no assumptions about whether every geodesic in the metric net, from some point z to the root x, corresponds to one of the distinguished left or right geodesics in the Lévy net. That is, we allow a priori for the possibility that the metric net contains many additional geodesics besides these distinguished ones. Each of these additional geodesics would necessarily pass through the filled ball boundaries ∂B•(x, r) in decreasing order of r, but in principle they could continuously zigzag back and forth in different ways. We also do not assume a priori that the distinguished geodesics in the metric net of (S, d, x, y) (i.e., the ones that correspond to the left and right distinguished Lévy net geodesics) are actually leftmost or rightmost when viewed as geodesics in (S, d). We similarly make no assumption about the lengths of the shortest paths in the metric net that connect points in the metric net that are both distinct from x. That is, we allow a priori for the possibility that there might be a path in the metric net between two endpoints that is strictly shorter than the shortest path obtained by concatenating finitely many segments of distinguished geodesics.

• The measurability results of Section 2.4 imply that the objects referred to in the statement of Theorem 4.6 are random variables. In particular, Proposition 2.17 implies that the inside and the outside of B•(x, r) (viewed as elements of M^1) are measurable functions of an element of M^2_SPH and Proposition 2.18 implies that the metric net (viewed as an element of X^2) is a measurable function of an element of M^2_SPH.

Now we proceed to prove Theorem 4.6. This proof requires several lemmas, beginning with the following.

Lemma 4.7. If µ^2_SPH satisfies the hypotheses of Theorem 4.6, and (S, d, ν, x, y) denotes a sample from µ^2_SPH, then it is a.s. the case that the metric net from x to y has ν measure zero. That is, the set of (S, d, ν, x, y) for which this is not the case has µ^2_SPH measure zero.

Proof. Suppose that the metric net has positive ν measure on an event of positive µ^2_SPH measure. Then if we fix x and resample y from ν to obtain ỹ, there is some positive probability that ỹ is in the metric net from x to y. Let L_r be the process that encodes the boundary length of the complementary component of B(x, r) which contains ỹ. Then we have that L_r does not a.s. tend to 0 as ỹ is hit.
This is a contradiction as, in the Lévy net definition, we do have that L r almost surely tends to 0 as the target point is reached.

If µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6, then we let µ̃ 1,L DISK denote the conditional law of S \ B • (x, r), together with its internal metric and measure, given that the boundary length of ∂B • (x, r) is equal to L. Once we have shown that µ̃ 2 SPH agrees with µ 2 SPH , we will know that µ̃ 1,L DISK agrees with µ 1,L DISK , which will imply in particular that µ̃ 1,L DISK depends on L in a scale invariant way. That is, we will know that sampling from µ̃ 1,L DISK is equivalent to sampling from µ̃ 1,1 DISK and then rescaling distances and measures by the appropriate powers of L. However, this is not something we can deduce directly from the hypotheses of Theorem 4.6 as stated. We can however deduce a weaker statement directly: namely, that at least the probability measures µ̃ 1,L DISK in some sense depend on L in a continuous way. Note that given our definition in terms of a regular conditional probability, the family of measures µ̃ 1,L DISK is a priori defined only up to redefinition on a Lebesgue measure zero set of L values, so the right statement will be that there is a certain type of a continuous modification.

Lemma 4.8. Suppose that µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6. Let µ̃ 1,L DISK denote the conditional law of S \ B • (x, r), together with its internal metric and measure, given that the boundary length of ∂B • (x, r) is L. For L 1 , L 2 > 0, define ρ( µ̃ 1,L 1 DISK , µ̃ 1,L 2 DISK ) to be the smallest ε > 0 such that one can couple a sample from µ̃ 1,L 1 DISK with a sample from µ̃ 1,L 2 DISK in such a way that with probability at least 1 − ε the two metric/measure-endowed disks agree when restricted to the y-containing component of the complement of the set of all points of distance at most ε from the disk boundary (and both such components are non-empty). Then the µ̃ 1,L DISK (after redefinition on a zero Lebesgue measure set of L values) have the property that as L 1 tends to L 2 the ρ distance between the µ̃ 1,L i DISK tends to zero. In other words, the map from L to µ̃ 1,L DISK has a modification that is continuous w.r.t. the metric described by ρ.

Proof. It is clear that a sample from µ̃ 1,L DISK comes equipped with an instance of a time-reversed CSBP starting from L and stopping when it hits zero (corresponding to a continuation of the L r process corresponding to the Lévy net from a point at which it has value L). If L 1 and L 2 are close, then we can couple the corresponding time-reversed CSBPs that arise from µ̃ 1,L 1 DISK and µ̃ 1,L 2 DISK so that they agree with high probability after some small amount of time. Let us define ρ̄(L 1 , L 2 ) to be the smallest ε so that the two time-reversed CSBPs, started at the different heights L 1 and L 2 , can be coupled in such a way that, with probability at least 1 − ε, they agree and are both non-zero after ε units of time. It is easy to see that ρ̄(L 1 , L 2 ) is continuous in L 1 and L 2 and zero when L 1 = L 2 . Now using the Markov property assumed by the hypotheses of Theorem 4.6, we find ρ( µ̃ 1,L 1 DISK , µ̃ 1,L 2 DISK ) ≤ ρ̄(L 1 , L 2 ) for almost all L 1 and L 2 pairs. Thus, if a countable dense set Q of L values is obtained by i.i.d. sampling from Lebesgue measure, then this bound a.s. holds for all L 1 and L 2 in Q. Then for almost all other L values, we have that with probability one, ρ( µ̃ 1,L′ DISK , µ̃ 1,L DISK ) → 0 as L′ approaches L with L′ restricted to the set Q.
We obtain the desired modification by redefining µ̃ 1,L DISK , on the measure zero set of values for which this is not the case, to be the unique measure for which this limiting statement holds. (It is clear that the limiting statement uniquely determines the law of the disk outside of an ε-neighborhood of the boundary, and since this holds for any ε > 0, it determines the law of the overall disk.)

Lemma 4.9. Suppose that µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6 and that τ is a stopping time for the exploration process r ↦ B • (x, r) which a.s. satisfies τ < d(x, y). Then the conclusion of the third hypothesis of Theorem 4.6 still holds with τ in place of the deterministic radius r: given the boundary length L of the boundary of the y-containing component of S \ B • (x, τ ), the conditional law of that component is µ̃ 1,L DISK .

Proof. This is simply an extension of the theorem hypothesis from a deterministic stopping time to a specific type of random stopping time. The extension to random stopping times is obvious if one considers stopping times that a.s. take one of finitely many values. In particular this is true for the stopping time τ δ obtained by rounding τ up to the nearest integer multiple of δ, where δ > 0. It is then straightforward to obtain the result by taking the δ → 0 limit and invoking the continuity described in Lemma 4.8.

Proof. This is immediate from the definition of the Lévy net and Proposition 2.1.

If µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6, and τ is a stopping time as in Lemma 4.9, then we can now define µ̃ L DISK to be the conditional law of the disk cut out at time τ given that the boundary length of that disk (i.e., the size of the jump in the L r process that occurs when r = τ ) is L. The following lemma asserts that this conditional law indeed depends only on L and not on other information about the behavior of the surface outside of this disk.

Lemma 4.11. Assume that µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6. Then the conditional probability µ̃ L DISK described above is well-defined and indeed depends only on L.

Proof. If one explores up until the stopping time τ , one can resample the target point y from the restriction of ν to the union of the two disks pinched off at time τ . Since ν is a.s. a good measure, there will be some positive probability that y ends up on each of the two sides. The theorem hypotheses imply that the conditional law of each of the two disks bounded by the figure 8, on the event that y lies in that disk, is given by µ̃ 1,L DISK , independently of any other information about the surface outside of that disk. This implies in particular that the two disks are independent of each other once it has been determined which disk contains y. Now, one can resample the location of y, resample the disk containing y from µ̃ 1,L DISK , resample the location of y, resample the disk containing y again, etc., and it is not hard to see that this process is mixing, so that these assumptions determine the form of µ̃ L DISK . The explicit relationship between µ̃ L DISK and µ̃ 1,L DISK will be derived in the proof of Lemma 4.13 just below.

Lemma 4.12. Given the L r process describing the boundary length of ∂B • (x, r), the conditional law of the disks in the complement of the net are given by conditionally independent samples from µ̃ L i DISK where L i are the lengths of the hole boundaries (which in turn correspond to the jumps of L r ).

Proof. This is a consequence of Lemma 4.11.

Lemma 4.13. Assume that µ̃ 2 SPH satisfies the hypotheses of Theorem 4.6, and that µ̃ L DISK and µ̃ 1,L DISK are defined as above. Let A be the total area measure of a sample from µ̃ L DISK . Then the µ̃ L DISK expectation of A is given by a constant times L 2α−1 . Moreover, the Radon-Nikodym derivative of µ̃ 1,L DISK w.r.t. µ̃ L DISK (where one ignores the marked point, so that the two objects are defined on the same space) is given by A divided by this same expectation, and is hence given by a constant times A L 1−2α .

Proof.
Suppose that we evolve L r from a positive initial value of L up to a stopping time at which a jump occurs - for example, the first time at which a jump occurs that would decrease the total boundary length by at least an ε fraction of its total (where 0 < ε < 1/2). At such a jump time, the boundary length c is divided into two components, of lengths a and b with a + b = c. (That is, c is the value of L r just before the downward jump; one of the two {a, b} values is the value of L r just after the jump and the other is determined by a + b = c.) At this point in the proof, let us relabel slightly and set L ∈ {a, b} to be the boundary length of the component surrounding y. By Lemma 4.9, the conditional law of the disk in this component is given by µ̃ 1,L DISK . Following Lemma 4.11, we let µ̃ L DISK denote the probability measure that describes the conditional law of the metric disk inside the loop that does not surround y, when L ∈ {a, b} is taken to be the length of that loop. (Again, we have not yet proved this is equivalent to the µ L DISK defined from the Brownian map.)

If we condition on the lengths of these two pieces - i.e., on the pair (a, b) - then what is the conditional probability that y belongs to the a loop versus the b loop? We will address that question in two different ways. First of all, if p is that probability, then we can write the overall measure for the pair of surfaces as the following weighted average of probability measures:

p ( µ̃ 1,a DISK × µ̃ b DISK ) + (1 − p) ( µ̃ a DISK × µ̃ 1,b DISK ). (4.6)

Now, observe that if we condition on the pair of areas A 1 , A 2 , then the resampling property for y implies that the conditional probability that y is in the first area is A 1 /(A 1 + A 2 ). This implies the following Radon-Nikodym derivative formula for the two (non-probability) measures appearing in (4.6):

d ( µ̃ 1,a DISK × µ̃ b DISK ) / d ( µ̃ a DISK × µ̃ 1,b DISK ) = (1 − p) A 1 / (p A 2 ).

From this, we may deduce (by holding one of the two disks fixed and letting the other vary) that the Radon-Nikodym derivative of µ̃ 1,L DISK w.r.t. µ̃ L DISK (ignoring the marked point location) is given by a constant times the area A of the disk; since both objects are probability measures, this Radon-Nikodym derivative must be the ratio A/E µ̃ L DISK [A]. Plugging this back into (4.6), we find that

p / (1 − p) = E µ̃ a DISK [A] / E µ̃ b DISK [A]. (4.7)

In other words, the probability that y lies in the disk bounded by the loop of length L ∈ {a, b} (instead of the other disk) is given by a constant times the µ̃ L DISK -expected area of a disk bounded by that loop.

Next, we note that there is a second way to determine p. Namely, we may directly compute the relative likelihood of a jump by a versus a jump by b in the time-reversal of an α-stable Lévy excursion, given that one has a jump of either a or b. By Lemma 3.23, the ratio of these two probabilities is a 2α−1 /b 2α−1 . Plugging this into (4.7) gives

E µ̃ a DISK [A] / E µ̃ b DISK [A] = a 2α−1 / b 2α−1 .

Since this is true for generic values of a and b, we conclude that E µ̃ L DISK [A] is given by a constant times L 2α−1 .

We define a big jump in the process L r associated to µ̃ 1,L DISK to be a jump whose lower endpoint is less than half of its upper endpoint. A big jump corresponds to a time when the marked point lies in the disk bounded by the shorter of the two figure 8 loops.

In what follows, it will sometimes be useful to consider an alternative form of exploration in which the endpoint y is not fixed in advance. We already know that if we let y 1 , y 2 , . . . be independent samples from ν, then the metric nets targeted at those points should be in some sense coupled Lévy nets, which agree up until the first time at which those points are separated.
Indeed, there will be countably many times at which one of those points is first disconnected from the other, as illustrated in Figure 4.5. This union of all such explorations can be understood as a sort of branching exploration process, where each time the boundary is "pinched" into two (forming a figure 8, as in Figure 4.5) the exploration continues on each of the two sides.

In what follows, it will be useful to consider an alternative form of exploration in which, at each such pinch point, the exploration always continues in the longer of these two loops, rather than continuing in the loop that contains some other predetermined point y. That is, we choose the exploration so that the corresponding boundary length process L r has no "big jumps" as we defined them above. It is clear that each y i will almost surely fail to lie in the bigger loop of a figure 8 at some point, and hence a.s. all of the points y i will lie in disks that are cut off by this exploration process in finite time.

Let A r denote the unexplored disk that remains after r units of exploration of this process. Then A r is a closed set, which is the closure of the set of points y i with the property that the Lévy net explorations targeted at those points have no big jumps before time r. The intersection of A r , over all r, is thus a closed set that we will call the center of the disk. We do not need to know this a priori, but we expect that the center contains only a single point. Note that the center can be defined if the surface is sampled from either µ̃ L DISK or µ̃ 1,L DISK (and in the latter case its definition does not depend on the marked point y). We refer to the modified version of the Lévy net as the center net corresponding to the surface. We are now going to prove an analog of Lemma 4.12 for the center net.

Lemma 4.14. Given the M r process describing the center net corresponding to a sample from µ̃ L DISK , the conditional law of the disks in the complement of the net are given by conditionally independent samples from µ̃ M i DISK where M i are the lengths of the hole boundaries.

Proof. We can condition on the positive probability event that the center net exploration agrees with the exploration with a marked point up to a fixed time. Note that this is a positive probability event and, on this event, Lemma 4.12 implies that the conditional law of the disks cut off given M r up to this time is given by i.i.d. samples from µ̃ M i DISK where M i are the lengths of the hole boundaries. The result follows because the disks cut off up to this fixed time are conditionally independent of the unexplored region given their boundary lengths.

We now would like to discuss the relationship between the laws of the following processes:

1. The process L r obtained by exploring the metric net from a sample from µ̃ 1,L DISK , starting with L 0 equal to some fixed value L.

2. The process M r obtained by exploring a sample from µ̃ L DISK toward the center (again starting with M 0 = L).

3. The process M 1 r obtained by exploring a sample from µ̃ 1,L DISK toward the center (again starting with M 1 0 = L).

We already know that the Radon-Nikodym derivative of µ̃ 1,L DISK w.r.t. µ̃ L DISK is given by a constant times the area of the disk. This immediately implies the following:

Lemma 4.15. The Radon-Nikodym derivative of the process M 1 r w.r.t.
the process M r is given by the expected disk area given the process, which (by Lemma 4.13 and Lemma 4.14) is given by a constant times Σ K K 2α−1 , where K ranges over the jump magnitudes corresponding to the countably many jumps in the process. Moreover, if L r and M 1 r are coupled in the obvious way (i.e., generated from the same instance of µ̃ 1,L DISK ) then they agree up until a stopping time: namely, the first time that L r experiences a big jump.

As a side remark, let us note that the stopping time τ of the process M 1 r , as defined in Lemma 4.15, can be constructed in a fairly simple way that roughly corresponds to, each time a new figure 8 is created, tossing an appropriately weighted coin to decide whether y is in the smaller or the larger loop, and then stopping when it first lies in the smaller loop. To formulate this slightly more precisely, suppose that for each r ≥ 0 we let χ r be the product of

b 2α−1 / (a 2α−1 + b 2α−1 )

over all jumps of M 1 | [0,r] , where a is the size of the jump and b is equal to the value of M 1 immediately after the jump. Suppose that we choose p uniformly in [0, 1]. Then we can write τ = inf{r ≥ 0 : χ r < p}.

We next claim the following:

Lemma 4.16. If one explores the center net of an instance of µ̃ L DISK up to some stopping time τ , then the conditional law of the central unexplored disk (i.e., the one in which exploration will continue) is given by an instance of µ̃ L′ DISK where L′ = M τ is the boundary length at that time. In particular, this implies that the process M r is Markovian.

Lemma 4.17. The jump law for the process M r is given by a constant times

(ab/c) −α−1 , (4.9)

where c is the value of M r just before the jump, a is the size of the jump, and b = c − a.

Proof. If the jump law for M r were given by (4.9), then by Lemma 4.15 each jump of M 1 r producing a figure 8 with loop lengths a and b would receive an additional weighting: it would be weighted by the expected area in the corresponding figure 8, namely by a constant times

(a/c) 2α−1 + (b/c) 2α−1 . (4.10)

But we know by Lemma 3.23 that the jump law for L r is given by a constant times a −α−1 (b/c) α−2 . Since a jump of size a in M 1 r can correspond to two kinds of jumps in L r (one of size a and one of size b = c − a) we find that the jump law for M 1 r is given by a constant times

a −α−1 (b/c) α−2 + b −α−1 (a/c) α−2 ,

which is indeed the product of (4.9) and (4.10), which implies that the jump law described by (4.9) must have been the correct one.

We remark that from the point of view of the discrete models, the jump law for M r described in Lemma 4.17 is precisely what one would expect if the overall partition function for a boundary-length a disk were given by a constant times a −α−1 . Indeed, in this case a −α−1 b −α−1 would be the weighted sum of all ways to triangulate the loops of a figure 8 with loop lengths a and b, which matches the law described in the lemma statement. It is therefore not too surprising that the jump law for the µ̃ L DISK exploration toward the center has to have this form. Furthermore, we may conclude that the M r process can be a.s. recovered from the ordered collection of jumps (since this is true for Lévy processes, hence true for CSBPs, hence true for time-reversals of these processes, hence true for this modified time-reversal that corresponds to µ̃ L DISK ) and the reconstruction procedure is the same as the one that corresponds to the L r process.

As explained in Figure 4.6, now that we have constructed the law of the exploration of a sample from µ̃ L DISK toward the center, we may iterate this construction within each of the unexplored regions and repeat, so that in the limit, we have determined the joint law of the metric net toward all points in some countable dense subset of the metric disk.

Proof of Theorem 4.6. By Lemma 4.7 there is a.s. no area in the metric net itself.
This implies that if we explore the center net of a sample from µ̃ L DISK up until a given time, then the center net also a.s. contains zero area. Let M r be the boundary length process associated with a sample from µ̃ L DISK . By Lemma 4.13, Lemma 4.14, and Lemma 4.16, if we perform an exploration towards the center of a sample produced from µ̃ L DISK up until a given time s then the conditional expectation of the total area is given by a constant times

M s 2α−1 + Σ i a i 2α−1 , (4.11)

where the a i are an enumeration of the jumps in the process M r up to time s. Thus, (4.11) must evolve as a martingale in s. Proposition 4.22 (stated and proved in Section 4.6 below) implies that (4.11) evolves as a martingale if and only if α = 3/2. Thus, the fact that α = 3/2 is a consequence of the properties listed in the theorem statement. For the remainder of the proof, we may therefore assume that α = 3/2.

Let A be the overall area measure of a surface sampled from µ̃ L DISK . Let A k denote the conditional expectation of A given the σ-algebra G k generated by k exploration iterations, where each iteration corresponds to adding an exploration toward the center of each unexplored component, as described above. Note that since the hypotheses of Theorem 4.6 apply to µ 2 SPH with α = 3/2, all of the lemmas above apply if we use µ L DISK and µ 1,L DISK in place of µ̃ L DISK and µ̃ 1,L DISK , respectively. We know that the joint law of the processes encoding the iterations A k , and the law of the conditional expectation of the area in the unexplored regions, is the same in each case. This implies that µ̃ 2 SPH and µ 2 SPH can be coupled in such a way that their branching boundary length explorations agree, that their conditional expected amounts of area in the not-yet-explored regions agree, and that their metric net structures are compatible.

But does this imply that the overall areas agree in this coupling? To prove this, it would suffice to show that A k → A almost surely. By the martingale convergence theorem, to prove that A k → A a.s. it suffices to show that A is a measurable function of the σ-algebra G generated by the information encoded by all of the countably many exploration iterations. To prove this we will in fact prove a stronger claim: that G encodes all of the information about the random metric measure space.

Let us first consider this claim in the case of µ 2 SPH . That is, we will show that the entire doubly marked Brownian map instance is G-measurable. Note that if a point z is chosen uniformly from the measure on the surface, then it is almost surely the case that, as one explores towards z, there are only countably many "large jumps" (where the target point lies in the component with the smaller boundary). Thus, as k → ∞, the exploration a.s. gets arbitrarily close to z. In particular, as k → ∞ one obtains the entire Lévy net targeted toward z. Indeed, this holds for a countable collection of points z i chosen i.i.d. from the measure on the surface, and one then recovers the structure of the tree that describes the union of the geodesics from all of the z i to the root. Since the z i are a.s. a dense set (as the area measure on the Brownian map is a good measure), one obtains from this procedure the entire geodesic tree structure. This, in turn, determines (up to time change) the horizontal component of the snake process that encodes the Brownian map.
One then also obtains the vertical component (by observing the quadratic variation along the dual paths of the geodesic) and the time parameter (by observing the quadratic variation of the vertical component).

We now turn to establish the claim in the case of µ̃ 2 SPH . We know that an instance of µ̃ 2 SPH determines a sample from µ 2 SPH in a unique way, and that the expectation of the area measure corresponding to µ̃ 2 SPH given G is equivalent to the area measure ν corresponding to the sample from µ 2 SPH . It could still be the case that the ν corresponding to a sample from µ̃ 2 SPH differed from its expected value given G and, in particular, from the corresponding Brownian map measure. For example, this would be the case if the ν corresponding to µ̃ 2 SPH was chosen as some sort of Poisson point process of atoms, whose expectation was the Brownian map measure; however, since by assumption there are no atoms (as ν is required to be a good measure), this type of pathology cannot arise. To make this precise, we fix ε > 0 and we let G k,ε be the event that the total amount of area in each of the individual complementary components after performing k iterations of the exploration is at most ε. Under µ̃ 2 SPH , we know that ν is a good measure, hence does not have atoms. Recall from the discussion just after the statement of the theorem that, for each fixed r > 0, the event d(x, y) > r has positive and finite µ̃ 2 SPH mass. Therefore it follows that the µ̃ 2 SPH mass of G c k,ε ∩ {d(x, y) > r} tends to 0 as k → ∞ (with ε fixed). For each j, let X j denote the area of the jth component (according to some ordering) after performing k iterations of the exploration. Then we have that the total variation distance between the law of Σ j X j 1 X j ≤ε and the law of Σ j X j under µ̃ 2 SPH conditioned on d(x, y) > r tends to 0 as k → ∞ (with ε fixed). As the conditional variance of the former given G k obviously tends to 0 as k → ∞ and then ε → 0, it thus follows that the latter concentrates around a G-measurable value as k → ∞. This proves the claim.

Finally, now that we have coupled an instance of µ̃ 2 SPH with an instance of µ 2 SPH in such a way that the measures and iterated center net explorations agree, we would like to argue that the distance functions also agree. We already know that the distances between points y i in a countable dense set (sampled i.i.d. from ν) and the root can be made to agree on the two sides. And we know, by definition of distance on the µ 2 SPH Brownian map side, that the distance between any two such points is the infimum over the lengths of continuous paths between those points made by concatenating finitely many segments of distinguished geodesics to the root (recall (4.3)-(4.5)). This is clearly an upper bound on the associated length on the µ̃ 2 SPH side. However, it is easy to see on the Brownian map side, that if one conditions on the total area of the surface, then the expected distance between x and y is finite. (This follows from the fact that the µ A=1 SPH expectation of the diameter is finite.) If we condition on the total area of the surface, then the expected distance from x to y must be the same as the expected distance from y 1 to y 2 when y 1 and y 2 are chosen uniformly from the overall measure. Thus, the one-sided bound described above implies almost sure equality.

We are now ready to prove Theorem 1.1. The main ideas of the proof already appeared in the proof of Theorem 4.6.

Proof of Theorem 1.1.
The beginning of the proof of this result appears in Section 2.3 with the statement of Proposition 2.8. In particular, the combination of Proposition 2.8 and Lemma 3.12 implies that for each fixed value of r the merging times of the leftmost geodesics of (S, d, x, y) from ∂B • (x, s) for s = d(x, y) − r to x have the same law as in a Lévy net (when the starting points for the geodesics have the same spacing in both). Thus in view of the proof of Proposition 3.31, we have that L r is almost surely determined by the metric space structure of (S, d, x, y).

Figure 4.6 (caption): Once the Lévy net is given, the disks are conditionally independent unmarked Brownian disks with given boundary lengths. As shown below, even an unmarked disk of given boundary length L has a special interior point called the center. Once one conditions on the exploration net toward that point, the holes are again conditionally independent unmarked Brownian disks with given boundary lengths.

This combined with the second assumption in the statement of Theorem 1.1 implies that L r is a non-negative Markov process which satisfies the conditions of Proposition 3.11. That is, L r evolves as a CSBP excursion as r increases, stopped when it hits zero. This discussion almost implies that the hypotheses of Theorem 4.6 are satisfied for some α ∈ (1, 2). It implies that the intersection of a metric net with B • (x, s) looks like a portion of a Lévy net. However, it does not rule out the possibility that the boundary length process L r might not tend to zero as r approaches d(x, y). As explained in the proof of Lemma 4.7, this can be ruled out by showing that the metric net from x to y almost surely has ν measure zero. If the metric net failed to have measure zero, then the expression (4.13) from Proposition 4.22 would have to fail to be a martingale, which would imply by Proposition 4.22 that we must have α ≠ 3/2. However, the expression (4.13) would have to be a supermartingale, and it would have to become a martingale if an appropriate non-increasing function were added (corresponding to the accumulated amount of mass in the portion of the metric net observed thus far). The Doob-Meyer decomposition implies that the form this function would have to have is uniquely determined. Moreover, it can be determined explicitly from the expression for the drift term associated to (4.13), which is derived in the proof of Proposition 4.22. Indeed, one finds that the accumulated metric net mass would have to be the integral of a power of L r , as r varies from 0 to d(x, y). However, it is not hard to see that if this power is anything other than 1, there must be a violation of the independence of slices assumption (since the amount being added would not be a linear function of the slices taken individually). On the other hand, if the power is 1, then the overall scaling exponent would be wrong, since the duration of time scales like L α−1 and the integral would have to scale like L α−1 · L = L α , and not as L 2α−1 .

Tail bounds for distance to disk boundary

It will be important in [MS16a] to establish tail bounds for the amount of time that it takes a QLE(8/3, 0) exploration starting from the boundary of a quantum disk to absorb all of the points inside of the quantum disk.
This result will serve as input in the argument in [MS16a] to show that the metric space defined by QLE(8/3, 0) satisfies the axioms of Theorem 4.6 (and therefore we cannot immediately apply Theorem 4.6 in the setting we have in mind in [MS16a] to transfer the corresponding Brownian map estimates to 8/3-LQG). However, in the results of [MS15a] we already see some of the Brownian map structure derived here appear on the 8/3-LQG sphere. Namely, the evolution of the boundary length of the filled metric ball takes the same form, the two marked points are uniform from the quantum measure, and we have the conditional independence of the surface in the bubbles cut out by the metric exploration given their quantum boundary lengths. The following proposition will therefore imply that the results of [MS15a] combined with the present work are enough to get that the joint law of the amount of time that it takes for a QLE(8/3, 0) starting from the boundary of a quantum disk to absorb all of the points in the disk and the quantum area of the disk is the same in the case of both the Brownian map and 8/3-LQG.

Proposition 4.18. Suppose that we have a probability measure on singly-marked, disk-homeomorphic metric measure spaces (S, d, ν, x) where ν is an almost surely finite, good measure on S such that the following hold.

1. The conditional law of x given (S, d, ν) is given by ν (normalized to be a probability measure).

2. For each r which is smaller than the distance d(x, ∂S) of x to ∂S, there is a random variable L r , which we interpret as the boundary length of the x-containing component of the complement of the set of points with distance at most r from ∂S. As r varies, this boundary length evolves as the time-reversal of a 3/2-stable CSBP stopped upon hitting 0. The time at which the boundary length hits 0 is equal to d(x, ∂S).

3. The law of the metric measure space inside of such a component given its boundary length is conditionally independent of the outside.

4. There exists a constant c 0 > 0 such that the expected ν mass in such a component, given that its boundary length is ℓ, is c 0 ℓ 2 .

Let d * = sup z∈S dist(z, ∂S). Then the joint law of d * and ν(S) is the same as the corresponding joint law of these quantities under µ 1,L DISK , where L is equal to the boundary length of ∂S under (S, d, ν, x). In particular, for each 0 < a, L 0 < ∞ there exists a constant c < ∞ such that for all L ∈ (0, L 0 ) and r > 0 we have

P[d * ≥ r | ν(S) ≤ a] ≤ c exp( −(3/2)(1 + o(1)) r 4/3 ), (4.12)

where the o(1) term tends to 0 as r → ∞. Moreover, the tail bound (4.12) also holds if we use the law with Radon-Nikodym derivative given by (ν(S)) −1 with respect to the law of (d * , ν(S)).

We note that the law in the final assertion of Proposition 4.18 corresponds to µ L DISK . We will need to collect two lemmas before we give the proof of Proposition 4.18.

Lemma 4.19. For each 0 < a < b < ∞ there exists a constant c > 0 such that the following is true. For an instance (S, d, ν, x, y) sampled from µ 2 SPH , we let d * be the diameter of S. Conditionally on ν(S) ∈ [a, b], the probability that d * is larger than r is at most c exp( −(3/2)(1 + o(1)) r 4/3 ), where the o(1) term tends to 0 as r → ∞.

Proof. It follows from [Ser97, Proposition 14] that the probability that the unit area Brownian map has diameter larger than r is at most a constant times exp( −(3/2)(1 + o(1)) r 4/3 ), where the o(1) term tends to 0 as r → ∞. The assertion of the lemma easily follows.

Lemma 4.20. Fix 0 < a, L 0 < ∞.
There exists a constant c > 0 depending only on a and L 0 such that for all L ∈ (0, L 0 ) the following is true. Suppose that we have an instance (S, d, ν) sampled from µ L DISK conditioned on ν(S) ≤ a, and let d * be the supremum over all z ∈ S of the distance of z to ∂S. Then the probability that d * is larger than r is at most c exp( −(3/2)(1 + o(1)) r 4/3 ), where the o(1) term tends to 0 as r → ∞. The same holds with µ 1,L DISK in place of µ L DISK .

Proof. Suppose that we have a sample (S, d, ν, x, y) from µ 2 SPH conditioned on the positive and finite probability event that:

1. There exists an r and a component U of S \ B(x, r) with y ∉ U such that the boundary length of U is equal to L.

2. The ν mass of U is at most a.

Then we know that the law of U (viewed as a metric measure space) is given by µ L DISK conditioned on having area at most a. The amount of time that it takes the metric exploration starting from ∂U to absorb every point in U is bounded from above by the diameter of (S, d). Thus the first assertion of the lemma follows from Lemma 4.19. The second assertion follows from the first because the Radon-Nikodym derivative between µ 1,L DISK and µ L DISK is bounded on the event that ν(S) ≤ a (by a constant depending on a and L).

Proof of Proposition 4.18. This follows from a simplified version of the argument used to prove Theorem 4.6. The second assertion of the proposition follows by combining the first with Lemma 4.20.

Figure 4.7 (caption): Given L 1 and L 2 , one may decompose the metric balls as in Figure 1.3 (the first L 1 units of time describing the first ball, the second L 2 units the second ball). The right figure is an independent unmarked Brownian disk, which represents the surface that lies outside of the two metric balls in the second figure. Given the disk, the first blue dot is uniform on the boundary; the second is L 1 units clockwise from the first. The measure that µ 2+1 SPH induces on the pair (L 1 , L 2 ) is (up to multiplicative constant) the measure (L 1 + L 2 ) −5/2 dL 1 dL 2 . This follows from the overall scaling exponent of L and the fact that given L = L 1 + L 2 the conditional law of L 1 is uniform on [0, L].

Adding a third marked point along the geodesic

In this section, we present Figure 4.7 and use it to informally explain a construction that will be useful in the subsequent works [MS15a, MS16a] by the authors to establish the connection between the 8/3-Liouville quantum gravity sphere and the Brownian map. This subsection is an "optional" component of the current paper and does not contain any detailed proofs; however, the reader who intends to read [MS15a, MS16a] will find it helpful to have this picture in mind, and it is easier to introduce this picture here.

Roughly speaking, we want to describe the continuum version of the Boltzmann measure on figures such as the one in Figure 1.2, where one has a doubly marked sphere together with two filled metric balls (centered at the two marked points) that touch each other on the boundary but do not otherwise overlap. Clearly, the Radon-Nikodym derivative of such a measure w.r.t. µ 2 TRI should be D + 1, where D is the distance between the two points, since the radius of the first ball can be anything in the interval [0, D].
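The density quoted in the caption of Figure 4.7 can be recorded as a one-line computation (a sketch only, with all constants suppressed; nothing in it is used elsewhere): if L = L 1 + L 2 has density L −3/2 dL and, given L, the value L 1 is uniform on [0, L], then

\[
d\mu(L_1, L_2) \;=\; L^{-3/2}\,dL \cdot \frac{dL_1}{L}
\;=\; (L_1 + L_2)^{-5/2}\, dL_1\, dL_2, \qquad L = L_1 + L_2,
\]

since the change of variables (L, L 1 ) ↦ (L 1 , L 2 ) has Jacobian 1.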
In the discrete version of this story, it is possible for the two metric balls in Figure 1.2 to intersect in more than one point (this can happen if the geodesic between the two marked points is not unique) but in the continuum analog discussed below one would not expect this to be the case (since the geodesic between the marked points is a.s. unique). To describe the continuum version of the story, we need to define a measure µ 2+1 SPH on continuum configurations like the one shown in Figure 4.7. To sample from µ 2+1 SPH , one first chooses a doubly marked sphere from the measure whose Radon-Nikodym derivative w.r.t. µ 2 SPH is given by D. Then, having done so, one chooses a radius D 1 for the first metric ball uniformly in [0, D], and then sets the second ball radius to be D 2 := D − D 1 . Now µ 2+1 SPH is a measure on Brownian map surfaces decorated by two marked points and two touching filled metric balls centered at those points. Let L 1 and L 2 denote the boundary lengths of the two balls and write L = L 1 + L 2 . It is possible to verify two scaling properties of µ 2+1 SPH (which hold up to a constant multiplicative factor), and these properties suggest that µ 2+1 SPH induces a measure on (L 1 , L 2 ) given (up to constant multiplicative factor) by (L 1 + L 2 ) −5/2 dL 1 dL 2 . The measure on L itself is then L −3/2 dL. If we condition on the metric ball in Figure 4.7 of boundary length L 1 , we expect the conditional law of the complement to be that of a marked disk of boundary length L 1 , i.e., to be a sample from µ 1,L 1 DISK with L 1 playing the role of the boundary length. This suggests the following symmetry (which we informally state but will not actually prove here).

Proposition 4.21. Given L 1 , the following are equivalent:

1. Sample a marked disk of boundary length L 1 from the probability measure µ 1,L 1 DISK (with L 1 as the boundary length). One can put a "boundary-touching circle" on this disk by drawing the outer boundary of the metric ball whose center is the marked point and whose radius is the metric distance from the marked point to the disk boundary.

2. Sample L 2 from the measure (L 1 + L 2 ) −5/2 dL 2 (normalized to be a probability measure) and then create a large disk by identifying a length L 2 arc of the boundary of a sample from µ L DISK with the entire boundary of a disk sampled from µ L 2 MET . The interface between these two samples is the "boundary-touching circle" on the larger disk.

Interestingly, we do not know how to prove Proposition 4.21 directly from the Brownian snake constructions of these Brownian map measures, or from the breadth-first variant discussed here. Indeed, from direct considerations, we do not even know how to prove the symmetry of µ 2 SPH with respect to swapping the roles of the two marked points x and y. However, both this latter fact and Proposition 4.21 can be derived as consequences of the fact that µ 2 SPH is a scaling limit of discrete models that have similar symmetries (though again we do not give details here). We will see in [MS15a, MS16a] that these facts can also be derived in the Liouville quantum gravity setting, where certain symmetries are more readily apparent. We will also present in [MS15a, MS16a] an alternate way to construct Figure 4.7 in the Liouville quantum gravity setting. In this alternate construction, one begins with a measure µ 2 LQGSPH on doubly marked LQG spheres. Given such a sphere, one may then decorate it by a whole plane SLE 6 path from one marked point to the other.
Such a path will have certain "cut points" which divide the trace of the path into two connected components. It is possible to define a quantum measure on the set of cut points. One can then define a measure µ 2+1 LQGSPH on path-decorated doubly marked quantum spheres with a distinguished cut point along the path. This is obtained by starting with the law of an SLE 6 -decorated sample from µ 2 LQGSPH , then weighting this law by the quantum cut point measure, and then choosing a cut point uniformly from this cut point measure. We will see in [MS15a, MS16a] that a certain QLE "reshuffling" procedure allows us to convert a sample from µ 2+1 LQGSPH into an object that (once an appropriate metric is defined on it) looks like a sample from µ 2+1 SPH .

4.6 The martingale property holds if and only if α = 3/2

Proposition 4.22. Fix α ∈ (1, 2) and suppose that M r is the process associated with an exploration towards the center of a sample produced from µ L DISK , where µ L DISK is as in Section 4.3. Let

A r = M r 2α−1 + Σ a∈J r |a| 2α−1 , (4.13)

where J r is the set of jumps made by M | [0,r] . Then A r is a martingale if and only if α = 3/2.

We will need two intermediate lemmas before we give the proof of Proposition 4.22.

Lemma 4.23. Suppose that X t is a non-negative, real-valued, continuous-time càdlàg process such that there exists p > 1 with

sup 0≤t≤T E|X t | p < ∞ for all T > 0. (4.14)

Let τ = inf{t ≥ 0 : X t = 0} and let (F t ) be the filtration generated by (X t∧τ ). Suppose that q : R + → R + is a non-decreasing function such that q(∆)/∆ → 0 as ∆ → 0. Assume that Y t is a càdlàg process adapted to F t with E|Y t | < ∞ for all t and that a is a constant such that

| E[ Y t+∆ − Y t | F t ] − a ∆ X t∧τ | ≤ q(∆) (1 + X t∧τ ) for all t, ∆ > 0.

Then Y t is a martingale if and only if a = 0.

Proof. Fix ∆ > 0, s < t, and let t 0 = s < t 1 < · · · < t n = t be a partition of [s, t] with ∆/2 < t j − t j−1 ≤ ∆ for all 1 ≤ j ≤ n. Then we have that

E[ Y t | F s ] = Y s + Σ j=1..n E[ E[ Y t j − Y t j−1 | F t j−1 ] | F s ].

We are going to show that the right hand side above tends to Y s + a ∫ s t E[X u∧τ | F s ] du in L 1 as ∆ → 0. This, in turn, implies that there exists a positive sequence (∆ k ) with ∆ k → 0 as k → ∞ sufficiently quickly so that the convergence is almost sure. This implies the result because if s < τ then a ∫ s t E[X u∧τ | F s ] du = 0 if and only if a = 0. We begin by noting that

Σ j=1..n | E[ Y t j − Y t j−1 | F t j−1 ] − a (t j − t j−1 ) X t j−1 ∧τ | ≤ q(∆) Σ j=1..n (1 + X t j−1 ∧τ ),

and the right hand side tends to 0 in L 1 as ∆ → 0, since n ≤ 2(t − s)/∆ and q(∆)/∆ → 0. This implies the claim because the càdlàg property implies that Σ j=1..n a (t j − t j−1 ) X t j−1 ∧τ → a ∫ s t X u∧τ du as ∆ → 0 which, combined with the integrability assumption (4.14), implies that Σ j=1..n a (t j − t j−1 ) E[X t j−1 ∧τ | F s ] → a ∫ s t E[X u∧τ | F s ] du as ∆ → 0.

Lemma 4.24. Fix α ∈ (1, 2) and suppose that M r is the process associated with an exploration towards the center of a sample produced from µ L DISK where µ L DISK is as in Section 4.3. There exist constants c 0 , c 1 > 0 such that

P[M r ≥ u] ≤ c 0 e −c 1 r −1/α u for all u, r > 0. (4.15)

In particular,

E|M r | p < ∞ for all r, p > 0. (4.16)

Proof. We first note that (4.15) in the case of an α-stable process with only downward jumps follows from [Ber96, Chapter VII, Corollary 2]. The result in the case of M r follows by comparing the jump law for M r as computed in Lemma 4.17 with the jump law for an α-stable process (which we recall has density x −α−1 with respect to Lebesgue measure on R + ).

Proof of Proposition 4.22. We assume without loss of generality that L = 1. Let J r be the set of jumps made by M | [0,r] and, for each ε, δ > 0, let J ε r (resp. J ε,δ r ) consist of those jumps in J r with size at least ε (resp. size in [ε, δ]). Let J̄ ε r (resp.
J̄ ε,δ r ) be the sum of the elements in J ε r (resp. J ε,δ r ). We also let A ε r be given by

A ε r = (M r ) 2α−1 + Σ a∈J ε r |a| 2α−1 .

We note that

A r − A ε r = Σ a∈J r \J ε r |a| 2α−1 , (4.17)

and that the expectation of (4.17) tends to 0 as ε → 0. Using that A 0 = M 0 = 1, we set

I α = lim r→0 r −1 E[ A r − A 0 ]. (4.19)

We will show later in the proof that the limit in (4.19) converges, compute its value, and show that I α = 0 precisely for α = 3/2. Assuming for now that this is the case, we are going to prove the result by showing that the hypothesis of Lemma 4.23 holds with Y t = A t , X t = M t α , and a = I α , where I α is as in (4.19). This suffices because then we can invoke Lemma 4.23.

Let E 0,δ r (resp. E 1,δ r ) be the event that M | [0,r] does not make a (resp. makes exactly 1) jump of size at least δ, and let E 2,δ r be the event that M | [0,r] makes at least two jumps of size at least δ. Combining (4.32), (4.34), and (4.35) (with p ∈ (1, 2) so that 2/p > 1), and taking a limit as ε → 0, we see that the hypothesis of Lemma 4.23 indeed holds. Indeed, this follows because each of the error terms which have a factor of r also have a positive power of δ as a factor, except for the term with I α . Thus we can make these terms arbitrarily small compared to r by taking δ small. The remaining error terms have a factor with a power of r which is strictly larger than 1, so we can make these terms arbitrarily small compared to r by taking r small.

Therefore to finish the proof we need to show that I α = 0 precisely for α = 3/2. The indefinite integral

∫ ( x 2α−1 + (1 − x) 2α−1 − 1 ) Π(dx) − (2α − 1) ∫ x −α dx (4.36)

can be directly computed (most easily using a computer algebra package such as Mathematica) to give an explicit expression in terms of the hypergeometric function 2 F 1 . In particular, the limit in (4.19) can be expressed in terms of the incomplete beta function B x (a, b) = ∫ 0 x u a−1 (1 − u) b−1 du. Direct computation shows that this expression achieves the value 0 when α = 3/2 and (since it is an increasing function of α) is non-zero for other values of α ∈ (1, 2). Thus, I α is equal to zero if and only if α = 3/2, and as noted above, the result follows from this.
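The 3/2-stable CSBPs that encode the boundary length processes throughout this section can also be illustrated numerically. The following sketch (an illustration only, not a construction used above; it assumes scipy's levy_stable in its default parametrization, where beta = 1 gives increments with only upward jumps, and the step count and step size are our own choices) runs a crude Euler scheme for the Lamperti time change of a spectrally positive 3/2-stable Lévy process:

import numpy as np
from scipy.stats import levy_stable

alpha = 1.5              # the stability index singled out by Proposition 4.22
n, dt = 5000, 1e-4       # number of Euler steps and step size (assumed values)

# Standard alpha-stable samples with only upward jumps (beta = 1).
S = levy_stable.rvs(alpha, 1.0, size=n,
                    random_state=np.random.default_rng(0))

# Lamperti/Euler step: over a time interval dt, a CSBP at level Z moves like
# the stable Levy process run for Z*dt units of time, and stable increments
# over a time interval h scale like h**(1/alpha).
Z = [1.0]                # initial mass / boundary length
for k in range(n):
    step = (Z[-1] * dt) ** (1.0 / alpha) * S[k]
    Z.append(max(Z[-1] + step, 0.0))
    if Z[-1] == 0.0:     # absorbed at zero
        break

# Reading the resulting path backwards from its hitting time of zero gives a
# caricature of the time-reversed CSBPs coupled in Lemma 4.8.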
Minitwistor spaces, Severi varieties, and Einstein-Weyl structure

In this paper we show that the space of nodal rational curves, the so-called Severi variety (of rational curves), on any non-singular projective surface is always equipped with a natural Einstein-Weyl structure, if the space is 3-dimensional. This is a generalization of the Einstein-Weyl structure on the space of smooth rational curves on a complex surface, given by N. Hitchin. As geometric objects naturally associated to Einstein-Weyl structure, we investigate null surfaces and geodesics on the Severi varieties. Also we see that if the projective surface has an appropriate real structure, then the real locus of the Severi variety becomes a positive definite Einstein-Weyl manifold. Moreover we construct various explicit examples of rational surfaces having 3-dimensional Severi varieties of rational curves.

Introduction

In the paper [3], N. J. Hitchin established a kind of twistor correspondence which provides a bijection between 3-dimensional Einstein-Weyl manifolds and non-singular complex surfaces which have non-singular rational curves with normal bundle O(2). The latter complex surfaces are called the minitwistor spaces, and the rational curves in the spaces are called the minitwistor lines. In this Hitchin correspondence, Einstein-Weyl manifolds appear as parameter spaces of the minitwistor lines. The parameter space has a natural complex conformal structure and holomorphic geodesics, by which the space becomes an Einstein-Weyl 3-fold. Conversely, when an Einstein-Weyl 3-fold is given, the minitwistor space is obtained as the leaf space for a certain foliation on a conic bundle naturally constructed over the Einstein-Weyl 3-fold. Here, in order to obtain the minitwistor space, we need to assume in general that the Einstein-Weyl 3-fold is sufficiently small. In this sense, the Hitchin correspondence is local in nature. It is also remarkable that there are essentially only two compact minitwistor spaces, while there are many non-compact minitwistor spaces, as constructed by H. Pedersen and K. P. Tod ([9, 10]). The two compact minitwistor spaces correspond to the two standard Einstein-Weyl 3-folds (i.e. the Euclidean space and the hyperbolic space).

In this paper, we show that if we allow the minitwistor lines to be nodal rational curves, then their parameter space still carries a natural Einstein-Weyl structure, provided the parameter space is 3-dimensional. When the complex surface is projective algebraic, the parameter space of nodal curves (of any genus) in a linear system is called a Severi variety in algebraic geometry, which is known to have a natural structure of a (non-complete) algebraic variety. Thus our result can be stated precisely as follows: any Severi variety of rational curves has a natural Einstein-Weyl structure, if it is 3-dimensional. If C denotes any of the nodal rational curves, the last condition is equivalent to the condition that the self-intersection number of C and the number of the nodes of C are 2m and (m − 1) respectively, for some positive integer m. So when m = 1 our result reduces to Hitchin's original Einstein-Weyl structure.

In Section 2.1 we recall the above Hitchin correspondence, with some emphasis on null plane bundles over Einstein-Weyl 3-folds, a plane distribution on the bundle, and its integrability.
In Section 2.2 we first recall fundamental results on Severi varieties in general (Proposition 2.4), and show that if the nodal curves (parametrized by the Severi variety) are rational, then the Severi variety is non-singular and its dimension is expressed by the self-intersection number of the rational curves and the number of the nodes (Proposition 2.6). In Section 2.3, by using Hitchin's result, we show that the Severi variety of rational curves has a natural Einstein-Weyl structure, if the variety is 3-dimensional (Theorem 2.10). In Section 2.4, motivated by this result, we define minitwistor spaces to be a pair of a non-singular projective surface and a linear system on it which has a 3-dimensional Severi variety of rational curves (Definition 2.11). Our definition involves a positive integer m which is one greater than the number of the nodes, and we call this integer the index of the minitwistor space. Then the original minitwistor spaces of Hitchin are exactly the minitwistor spaces of index one. As any blow-up of a minitwistor space (in the above sense) becomes again a minitwistor space, we also introduce the notion of minimality of the minitwistor spaces (Definition 2.13). Then any minimal minitwistor space of index one is isomorphic to either CP 1 × CP 1 or the Hirzebruch surface P(O(2) ⊕ O) (Proposition 2.14).

In Section 3 we investigate certain subvarieties of the 3-dimensional Severi variety (of rational curves) naturally arising from the Einstein-Weyl structure. Namely we investigate null surfaces and (null and non-null) geodesics on the Severi varieties. Just as in the Hitchin case, null surfaces are formed by minitwistor lines going through a point on the minitwistor space, and geodesics are formed by those going through two points on the minitwistor space. (In particular, they are automatically algebraic subvarieties.) But a significant difference in our case is that both of them are non-normal subvarieties. The singular locus of these subvarieties is formed by minitwistor lines which have nodes at the prescribed point(s).

In Section 4 we show that if the minitwistor space (in our sense) has a real structure and a real minitwistor line whose real points are exactly its nodes, then the real locus of the (3-dimensional) Severi variety has a natural structure of a real, positive definite Einstein-Weyl 3-manifold (Theorem 4.3). These are obtained as real slices of the complex Einstein-Weyl structure obtained in Section 2.

In Section 5 we provide various examples of the minitwistor spaces. In Section 5.1, for any m ≥ 2, we construct minimal minitwistor spaces of index m. They are obtained from the product surface CP 1 × CP 1 by blowing up 2m points. In this example, the configuration of the 2m points can be taken generically, so that they constitute a 4m-dimensional family, while the number of effective parameters is 4m − 6. Also we see that if we specialize the configuration of the 2m points in a certain way, then we obtain minitwistor spaces with C * -action, or even toric minitwistor spaces (of any index). In Section 5.2 we provide examples of minimal minitwistor spaces of any index which have a real structure enjoying the conditions in Section 4. This creates real, positive definite Einstein-Weyl 3-manifolds. These minitwistor spaces are obtained as the canonical quotient spaces of the twistor spaces of Joyce's self-dual metrics on the connected sum of complex projective planes.

Notations and Conventions. For a complex space X, Sing X means the singular locus of X.
For a sheaf F on X, we put h i (X, F ) = dim H i (X, F ). By a rational curve, we mean a reduced irreducible curve whose normalization is isomorphic to a complex projective line (as usual). A nodal curve is a reduced curve which has ordinary nodes as all of its singularities. If Y is a non-singular submanifold in X, the normal bundle is denoted by N Y/X . (This is used only when Y ∩ Sing X = ∅.) The base locus of a linear system |D| is denoted by Bs |D|.

A Hitchin correspondence and Einstein-Weyl structure on Severi varieties

2.1. Hitchin correspondence. First we recall the definition of Einstein-Weyl structure on complex manifolds. Let M be a complex manifold, and T M and T * M the holomorphic tangent and cotangent bundles respectively. A complex metric on M is a holomorphic section of the symmetric tensor product S 2 T * M such that the induced quadratic form on T x M is non-degenerate for any x ∈ M . Two complex metrics g and g′ are said to be conformal if there is a non-vanishing holomorphic function f on M satisfying g′ = f g. A conformal class of a complex metric g is denoted by [g]. An affine connection on M is a holomorphic connection on T M . An affine connection ∇ on M is said to be compatible with a conformal structure [g] if for each g ∈ [g], there is a (holomorphic) 1-form a on M such that

∇g = a ⊗ g. (1)

A pair ([g], ∇) of a conformal structure and a compatible torsion-free affine connection is called a Weyl structure, and it is said to be Einstein-Weyl if the symmetric part of the Ricci tensor of ∇ is proportional to the metrics in [g]; namely, if there is a holomorphic function Λ on M such that

Ric(∇) sym = Λ g for g ∈ [g]. (2)

For real manifolds, a Weyl structure and the Einstein-Weyl condition on it are defined in a similar way by the equations (1) and (2). We say that a Weyl structure ([g], ∇) on a real manifold is positive-definite or negative-definite if [g] is so.

In this paper, we are concerned with Einstein-Weyl structures on 3-dimensional manifolds. So let M be a complex 3-fold and ([g], ∇) a Weyl structure on M . A 2-dimensional subspace V in a tangent space T x M (x ∈ M ) is called a null plane if [g] degenerates on V . The set of null vectors in a tangent space T x M is called the null cone (at x ∈ M ). A 2-dimensional subspace V ⊂ T x M is a null plane iff V is tangent to the null cone of [g]. A 2-dimensional submanifold Σ ⊂ M is called a null surface if T x Σ is a null plane for any x ∈ Σ. Then the Einstein-Weyl condition for ([g], ∇) is characterized in terms of null surfaces as follows.

Proposition 2.1. ([3]; see also [8]) Let M be a 3-dimensional complex manifold and ([g], ∇) a torsion-free Weyl structure. Then ([g], ∇) is Einstein-Weyl if and only if, for any x ∈ M and any null plane V ⊂ T x M , there is a null surface Σ in a small neighborhood of x such that V is tangent to Σ.

Proof. Since we later require some detail of the proof, we briefly recall it. Let P(T * M ) → M be the projectivization of the holomorphic cotangent bundle, and consider the null plane bundle

Q(M ) := { (x, [ϕ]) ∈ P(T * M ) | g(ϕ, ϕ) = 0 }, (3)

where g ∈ [g], and ϕ ∈ T * M is considered to be an element of T M by the identification T * M ≃ T M induced by g.

As explained in the introduction, we shall define an Einstein-Weyl structure on a certain kind of complex 3-folds arising from some complex surfaces (which will be called the minitwistor spaces). Our method is based on the following construction, called the Hitchin correspondence, established by Hitchin in [3]. Let S be a non-singular complex surface. A non-singular rational curve C ⊂ S is called a minitwistor line if N C/S ≃ O C (2) holds.
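Before proceeding, it may help to record the elementary model computation behind the null cone that appears in the proof below (an illustration; the identification of sections of O CP 1 (2) with quadratic polynomials is the standard one): a section of O CP 1 (2) has a double zero exactly when its discriminant vanishes,

\[
s(z) = a_0 + a_1 z + a_2 z^2, \qquad
s \ \text{has a double zero} \iff a_1^2 - 4 a_0 a_2 = 0,
\]

so the double-zero locus is a non-degenerate quadric cone in the 3-dimensional space H 0 (O CP 1 (2)) — which is exactly the shape of a null cone of a complex conformal structure.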
As H 1 (N C/S ) = 0, by Kodaira's theorem, the parameter space W of minitwistor lines becomes a 3-dimensional complex manifold such that for any minitwistor line D ∈ W , there is a canonical isomorphism

T D W ≃ H 0 (D, N D/S ).

The complex 3-fold W is naturally equipped with certain families of 2- and 1-dimensional submanifolds as follows. First, for any p ∈ S define W p := {D ∈ W | p ∈ D}. Then by considering the blow-up of S at p, it can be shown ([3]) that W p becomes a 2-dimensional complex submanifold in W if it is non-empty. Next, for any two points p, q ∈ S we define W p,q := {D ∈ W | p, q ∈ D}. Then W p,q becomes a 1-dimensional complex submanifold of W . Note that W p,q naturally makes sense even when q is an infinitely near point of p. Here, for any point p ∈ S, an infinitely near point of p is a point on the exceptional curve E p of the blow-up of S at p. When q is an infinitely near point of p ∈ S, then W p,q is defined as

W p,q := { D ∈ W p | the strict transform of D on the blow-up of S at p goes through q }. (4)

Then on the parameter space W an Einstein-Weyl structure is defined as follows:

Proposition 2.2. ([3]; see also [8]) There is a natural Einstein-Weyl structure ([g], ∇) on W such that (i) the null cone of [g] at each point D ∈ W is given by (5) below, (ii) each non-empty W p is a totally geodesic null surface, and (iii) each W p,q is a geodesic.

Proof. Since we will again require some details of the proof, we give an outline. The conformal structure [g] on W is defined in such a way that, for each D ∈ W , the null cone N D ⊂ T D W ≃ H 0 (O D (2)) is given by

N D = { s ∈ H 0 (O D (2)) | s has a double zero } ∪ {0}. (5)

Then W p gives a null surface for [g] if it is non-empty. Moreover the null surface W p is totally geodesic for [g]. Indeed, for each C ∈ W p , the family {W p,q } q∈C gives a CP 1 -family of geodesics on W passing through the point C and contained in W p .

Let ̟ : Q(W ) → W be the null-plane bundle defined by (3). Then we have an isomorphism

Q(W ) ≃ { (D, p) ∈ W × S | p ∈ D }.

Hence we obtain the double fibration W ← Q(W ) → S given by ̟ and the map f , where f is the restriction of the projection to the second factor. From the proof of Proposition 2.1, null surfaces on W naturally lift to Q(W ), and Q(W ) is foliated by such surfaces. Moreover, the leaves of this foliation coincide with fibers of f by construction. Note that W p = ̟(f −1 (p)) and that Q(W ) is nothing but the universal family of minitwistor lines (which are close to C). Using the fact that each null surface W p is totally geodesic, we can show that there exists a unique torsion-free affine connection ∇ such that ∇ is compatible with [g], and such that the conditions (ii) and (iii) are satisfied. Then ([g], ∇) is Einstein-Weyl by Proposition 2.1.

Finally in this subsection we recall some facts concerning the null surfaces.

Remark 2.3. (i) If p and q are two points in S which are not infinitely near, then W p,q = W p ∩ W q holds. Hence the intersection of any two null surfaces is a non-null geodesic if it is not empty. (ii) For each p ∈ S and C ∈ W p , there is a unique null geodesic passing through the point C and contained in W p . Indeed, the tangent line T p C ⊂ T p S determines a point q ∈ E p , where E p is the exceptional curve of the blow-up of S at p. Then W p,q is the required null geodesic. Notice that (i) and (ii) above are also based on the following basic facts respectively: (i)' the intersection of any two null planes at a point is a non-null complex line, and (ii)' each null plane V bijectively corresponds to a null complex line L such that L ⊂ V .

2.2. Severi varieties. First we recall a definition of Severi varieties and their basic properties. The most useful reference on Severi varieties is the book by E. Sernesi [11], especially §4.7. Let S be a non-singular projective algebraic surface and L a line bundle over S satisfying |L | ≠ ∅. We often identify the complete linear system |L | and its parameter space PH 0 (S, L ) * (the dual projective space).
For any integer δ > 0, define a subset of |L | by W |L |, δ := {C ∈ |L | | C is reduced, irreducible and Sing C consists of δ ordinary nodes}. This is called the Severi variety of δ-nodal curves in |L |. If C ∈ W |L |, δ , we often write W |C|, δ instead of W |L |, δ . A fundamental result on Severi variety is the following. where T C W |L |, δ means the Zariski tangent space of W |L |, δ at the point C, and I Sing C ⊂ O C is the ideal sheaf of Sing C. (iii) If H 1 (O C (C) ⊗ I Sing C ) = 0, W |C|, δ is smooth at C and its dimension is equal to h 0 (O C (C) ⊗ I Sing C ). (Namely H 1 (O C (C) ⊗ I Sing C ) is the obstruction space for deforming C in S preserving all the nodes.) Remark 2.5. (i) The closure of W |L |, δ in the projective space |L |, which becomes an algebraic variety, is also called the Severi variety. But in this paper we do not adapt it. (ii) In general, Severi varieties can become singular, or even non-reduced. (iii) It is not necessarily easy to determine non-emptiness of the Severi variety. Also, even one can show the non-emptiness, it is not easy to determine its connectedness (or irreducibility of the closure). By the adjunction formula, the geometric genus of a member C ∈ W |L |, δ is independent of a choice of C. (Namely, it is given by (1/2)(L 2 + K S · L ) + 1 − δ.) If it is zero, namely when a Severi variety parametrizes (nodal) rational curves, it becomes non-singular and has an expected dimension as follows. Proposition 2.6. Suppose a non-singular projective surface S has a rational curve C which has δ (> 0) nodes as its all singularities. If C 2 + 1 − 2δ > 0, the Severi variety W |C|, δ is non-singular and (C 2 + 1 − 2δ)-dimensional. Moreover, under this assumption, S is a rational surface. Proof. By the Riemann-Roch formula (see [1, (3.1) Theorem]), we have Let ν :C → C be the normalization of C, so thatC ≃ CP 1 . The nodes of C determine 2δ distinct points onC. We put k := C 2 for simplicity. Then as ν * O C (C) ≃ OC (k), we have On the other hand, as C has δ nodes, we have Hence by (7), we obtain H 1 (O C (C)⊗ I Sing C ) = 0. Then by Proposition 2.4 (iii) we obtain that the component of W |C|,δ containing C is non-singular and its dimension is given by h 0 (O C (C)⊗ I Sing C ) = C 2 + 1 − 2δ. Next we show that S is a rational surface. If we K S denotes the canonical bundle on S, by adjunction formula we have Hence we have CK S = −(C 2 + 1 − 2δ) − 1 < 0. Therefore since C actually moves in S, the Kodaira dimension of S is −∞. Hence S is birational to a ruled surface. If q := h 1 (O S ) = 0, S is a rational surface and we are done. If q > 0, let α : S → T be the Albanese map, so that T is a Riemannian surface of genus q. Then obviously a general nodal curve C cannot be contained in a fiber of α. But at the same time, α| C cannot be surjective since otherwise we obtain a non-trivial map from CP 1 to T via the normalization of C. This means q = 0. Hence S is rational. A more direct and geometric proof of the first claim of Proposition 2.6 can be given by taking "the normalization of a tubular neighborhood of C ". See Section 2.3 for this. As an immediate consequence of Proposition 2.6, we obtain the following characterization of 3-dimensional Severi varieties of rational curves: Proposition 2.7. If W is a 3-dimensional Severi variety of δ-nodal rational curves on a nonsingular projective surface S, there exists an integer m > 1 satisfying C 2 = 2m and δ = m − 1, where C is any one of the curves represented by points of W . 
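As a worked check of the count behind Proposition 2.7 (a sketch using only the two formulas quoted above, namely the expected dimension and the genus formula):

\[
\dim W_{|C|,\delta} \;=\; C^2 + 1 - 2\delta \;=\; 3
\;\Longrightarrow\; C^2 = 2(\delta + 1),
\]
% setting m := delta + 1, with m > 1 precisely because delta > 0:
\[
C^2 = 2m, \qquad \delta = m - 1,
\]
% and genus zero pins down the canonical degree:
\[
0 \;=\; \tfrac12\,(C^2 + K_S\cdot C) + 1 - \delta
\;\Longrightarrow\; K_S \cdot C \;=\; 2\delta - 2 - C^2 \;=\; -4,
\]
which agrees with the relation $C\cdot K_S = -(C^2+1-2\delta)-1$ obtained in the proof of Proposition 2.6.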
Conversely, if a non-singular projective surface S has a (m − 1)-nodal rational curves C satisfying C 2 = 2m, the Severi variety W |C|,δ is 3-dimensional. As for the linear system |C| on S, we have the following Proposition 2.8. Let S and C be as in Proposition 2.6, and suppose C 2 > 2δ − 2. Proof. We again write k = C 2 . Let ω C be the dualizing sheaf. Then as C is a rational curve with exactly δ nodes, we have deg ω C = 2δ − 2. Hence by the assumption we have deg(ω C ⊗ O C (−C)) = 2δ − 2 − k < 0. Therefore by duality we have H 1 (O C (C)) = 0. Then by Riemann-Roch formula we obtain Hence by the cohomology exact sequence of and the rationality of S, we obtain h 0 (O S (C)) = k + 2 − δ, and obtain (i). Next in order to show (ii) and (iii) we explicitly give a basis of H 0 (O C (C)). For this, let ν :C → C be the normalization as before. Take a non-homogeneous coordinate z onC ≃ CP 1 . Let p 1 , · · · , p δ be the nodes of C andp 1 i ,p 2 i ∈C (1 ≤ i ≤ δ) the 2 points determined by the 2 branches at p i . We write z = a i and z = b i forp 1 i andp 2 i respectively, where we can obviously suppose a i = ∞ and b i = ∞ for any i. (We do not make distinction forp 1 i andp 2 i .) Then by the isomorphism ν * O C (C) ≃ OC(k), there is an isomorphism OC(k)p1 i ≃ OC(k)p2 i between 2 fibers of OC(k) →C. From these, we obtain an isomorphism If we write f (z) = c k z k +c k−1 z k−1 +· · ·+c 0 , then the equation f (a i ) = f (b i ) gives a homogeneous linear equation for c 1 , c 2 , · · · , c k . For each δ < i ≤ k, we can choose f i ∈ V such that deg f i = i (by a dimensional reason). Then {1, f δ+1 , f δ+2 , · · · , f k } are obviously linearly independent. Hence by (9) these form a basis of H 0 (O C (C)). Then since the zero locus of the two sections 1 and f k are disjoint, it follows that the system |O C (C)| is base point free. Hence, since the ) is surjective as above, we obtain that |C| is also base point free, meaning (ii). Let φ : S → CP N (N = C 2 +1−δ) be the morphism associated to |C|. Then for (iii), since C ∈ |C|, it suffices to show that the morphism φ| C is generically 1 : 1. The last morphism is exactly the morphism induced by the system |O C (C)|. Suppose that this morphism is generically d : Then by the universality of the normalization, the compositionC → C → φ(C) factors as a surjective morphismC → D and the normalization D → φ(C). The morphismC → D is clearly d : 1. Let w be a non-homogeneous coordinate on D ≃ CP 1 . Then the mapC → D can be written , and the degree of the polynomials g i (f (z)) is a multiple of d. Since we have suppose d > 1, this contradicts our choice of the above basis {1, f δ+1 , f δ+2 , · · · , f k }. Hence we have d = 1, and φ is a birational morphism. This implies that the degree of φ(S) equals to C 2 = k, as desired. 2.3. Einstein-Weyl structures on the Severi varieties. For the purpose of proving the main result in this section, we first introduce notions of 'tubular neighborhood' of a nodal curve in a complex surface, and the 'normalization' of the tubular neighborhood. To this end, we first consider a local model of a nodal curve and its normalization. Let (x 1 , x 2 ) be the usual coordinate on C 2 , and consider a nodal curve Y = Y 1 ∪Y 2 where Y l = {x l = 0}. Then for small ε > 0 and l = 1, 2, we put U l = {(x 1 , x 2 ) ∈ C 2 | |x l | < ε} and call the union U 1 ∪U 2 as a tubular neighborhood of the nodal curve Y . 
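Before continuing with the local model, the gluing description used in the proof of Proposition 2.8 can be recorded in displayed form (a sketch; the exact sequence is our reading of Lemma 2.9 below, specialised to the curve C, and is an assumption rather than a quotation):

% sections of O_C(C) are the degree-<=k polynomials on the normalization
% whose values agree at the two preimages of each node:
\[
H^0(\mathcal O_C(C)) \;\simeq\; V \;=\; \bigl\{\, f \in \mathbb C[z],\ \deg f \le k \;\bigm|\; f(a_i) = f(b_i),\ 1 \le i \le \delta \,\bigr\},
\]
% each matching condition is one independent linear equation, as the basis
% {1, f_{delta+1}, ..., f_k} constructed above shows, so
\[
\dim V \;=\; (k + 1) - \delta \;=\; h^0(\mathcal O_C(C)),
\]
% sheaf-theoretically, with the third arrow the difference of values at the
% two branches over each node:
\[
0 \longrightarrow \mathcal O_C(C) \longrightarrow \nu_*\,\nu^*\mathcal O_C(C) \longrightarrow \bigoplus_{i=1}^{\delta} \mathbb C_{p_i} \longrightarrow 0 .
\]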
Then the disjoint union U 1 U 2 can be regarded as a tubular neighborhood (in the usual sense) of a non-singular non-connected curve Y 1 Y 2 , and the natural map This is a local model of a tubular neighborhood of nodal curves and its normalization. For a global situation, let S be a compact complex surface, C ⊂ S a nodal curve, and p 1 , · · · , p n the set of nodes of C. Then we can take a covering {U 0 , U 1 , · · · , U n } of C by open subsets of S which satisfies the following conditions: is a coordinate on U i such that p i = (0, 0). Next for any 1 ≤ i ≤ n letŨ 1 i andŨ 2 i be two copies of U i , equipped with the same coordinate ( i be the origins respectively. LetŨ 0 be a copy of U 0 andC 0 ⊂Ũ 0 be the curve corresponding to C ∩ U 0 . Then by the above conditions 1) -3), the open sets Figure 1. the normalization of a tubular neighborhood of a nodal curve andC 0 also glue together. LetŨ andC be the resulting non-singular complex surface and the non-singular curve inŨ respectively. Let ν :Ũ → U := 0≤i≤n U i be the natural projection. Then the restriction ν|C gives a normalization of C. We call U a tubular neighborhood of C, and the map ν :Ũ → U the normalization of the tubular neighborhood. (This construction is illustrated in Figure 1). The relation between the sheaf O C (C) (= O S (C)| C ) and the normal bundle NC /Ũ ≃ OC(C) is described by the following Lemma 2.9. In the above situation, there exists the following exact sequence: where C p means the skyscraper sheaf supported at p. In particular, there is an isomorphism Proof. We use the above notations prepared for the construction of ν :Ũ → U . We put C := ν −1 (C) and D := C −C (subtraction as a divisor). Then D is a non-compact curve consisting of 2n connected components. Then if we note an isomorphism ν * O U (C) ≃ OŨ ( C), the sequence (11) is exactly the third low of the following obvious commutative diagram of exact sequence of sheaves onŨ : , which is a consequence of the fact that the two curves D andC intersect transversally atp 1 i andp 2 i ). The isomorphism (12) is directly deduced from the exact sequence (11). Now we are ready to prove the main result in this section. Theorem 2.10. Any Severi variety W of nodal rational curves admits a natural torsion-free Proof. By Proposition 2.7, there exists an integer m > 1 such that any member C of W is a nodal rational curve in S with (m − 1) nodes and satisfies C 2 = 2m. For each nodal curve C ∈ W , let ν :Ũ C → U C be the normalization of a tubular neighborhood of the nodal curve C. As C 2 = 2m we have ν * O C (C) ≃ OC(2m). Therefore, since the third arrow in the exact sequence (11) is just the evaluation map at sufficiently small open neighborhood of the point C such that any D ∈ O C is a nodal rational curve contained in U C . Notice that each D ∈ O C is naturally lifted to a non-singular curveD onŨ C (see Figure 2). Conversely, for each rational curveD onŨ C which is sufficiently close toC, the image ν(D) gives a nodal rational curve which is a member of O C . In this way, O C coincides with the set of non-singular curves obtained as small deformations ofC inŨ C . Hence by the Hitchin correspondence (Proposition 2.2) the neighborhood O C has a natural torsion-free Einstein-Weyl structure. We claim that the constructed Einstein-Weyl structures on O C 1 and O C 2 agree on O. Let ν i :Ũ C i → U C i be the normalization of the tubular neighborhood U C i for i = 1, 2. Let us fix an arbitrary point C ∈ O. 
We can take a tubular neighborhood U of the nodal curve C such that any D ∈ O is contained in U and U ⊂ U C 1 ∩ U C 2 . Let ν :Ũ → U be the normalization of the tubular neighborhood U . Then a natural Einstein-Weyl structure on the intersection O is induced from U . By our construction ofŨ → U , after shrinking U if necessary, we can define embeddings ι 1 and ι 2 so that the diagram Figure 2. the natural lift of the nodal rational curve D near C commutes. Since the Einstein-Weyl structure induced by the Hitchin correspondence is characterized by the local structure of the corresponding complex surface, the above diagram indicates that the three surfaces ν −1 2.4. The minitwistor spaces. In view of Theorem 2.10, it seems natural to introduce the following Definition 2.11. Let m > 0 be any integer. Then by a minitwistor space of index m, we mean a pair (S, |C|) of a non-singular projective algebraic surface S and a complete linear system |C|, where C is a nodal rational curve C satisfying C 2 = 2m which has exactly (m − 1) ordinary nodes as its all singularities. Evidently, minitwistor spaces of index 1 are nothing but a (compact) minitwistor spaces in the sense of Hitchin [3]. In Section 5 we will provide examples of minitwistor spaces of index m for arbitrary m > 1. In general, if (S, |C|) is a minitwistor space of index m, there can exist a (nodal rational) curve C ′ such that (S, |C ′ |) is a minitwistor space of index m ′ = m. Then the two Severi varieties W |C|, m−1 and W |C ′ |, m ′ −1 are not biholomorphic in general. This is a reason why we define the minitwistor space as a pair of S and |C|. If (S, |C|) is a minitwistor space of index m, by the results obtained so far, we have the following: S is a rational surface by Proposition 2.6 (since C 2 + 1 − 2δ = 2m + 1 − 2(m − 1) = 3 > 0). The Severi variety W |C|, m−1 (⊂ |C|) is a 3-dimensional non-singular complex manifold by Propositions 2.6 and 2.7. Furthermore, it is equipped with a torsion-free Einstein-Weyl structure by Theorem 2.10. We will call any nodal rational curve C ∈ W |C|, m−1 as a minitwistor line. Next we show that a blowing up of a minitwistor space is again a minitwistor space. For this we just need the following Lemma 2.12. Let (S, |C|) be a minitwistor space of (any) index m. Then for any point p ∈ S, there is a member C ∈ W |C|, m−1 such that p ∈ C. (Namely, W ⊂ CP m+2 is non-degenerate.) Proof. If m = 1, this can be readily seen from Kodaira's theorem on displacement of submanifolds. So let m > 1. Suppose that there exists a point p ∈ S such that p ∈ C for any C ∈ W |C|, m−1 . Let µ : S ′ → S be the blowing up at p. First assume that general members of W |C|, m−1 have a node at p. Let C ′ be the strict transform of a general member C ∈ W |C|, m−1 . Then (C ′ ) 2 = 2m − 4 holds. If m = 2, C ′ is a non-singular rational curve satisfying (C ′ ) 2 = 0. Hence |C ′ | is a pencil. Since we have supposed that C ′ is a general member (of W |C|, m−1 ), Let (S, |C|) be a minitwistor space of index m, and p ∈ S any point. Then by Lemma 2.12 we have p ∈ C for a general member C ∈ W |C|, m−1 . Let µ : S ′ → S be the blowing up at p. Then C ′ := ν −1 (C) is still an (m − 1)-nodal rational curve satisfying (C ′ ) 2 = 2m. Further we have C ′ ∈ µ * |C|. Therefore the pair (S ′ , µ * |C|) becomes a minitwistor space of index m. Let E be the exceptional curve of µ. Take any member C ′ ∈ W |µ * C|, m−1 . Then C ′ ∩ E = ∅ holds since C ′ · E = 0 and C ′ is irreducible by the definition of Severi variety. 
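The intersection-number bookkeeping behind the last two sentences can be displayed (a sketch; we read the text's ν^{-1}(C) as the total transform µ^{-1}(C) = µ*C under the blow-up, which for a member C avoiding p coincides with the strict transform):

\[
(C')^2 \;=\; (\mu^* C)^2 \;=\; C^2 \;=\; 2m,
\qquad
C' \cdot E \;=\; \mu^* C \cdot E \;=\; 0,
\]
and since C' is irreducible and distinct from E, the vanishing intersection number forces C' ∩ E = ∅.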
This means µ(C ′ ) ∈ W |C|, m−1 . Thus we obtain a natural inclusion (as an open subset) by the definition of Severi variety. To exclude those minitwistor spaces (obtained by blowing up of another minitwistor space), we introduce the following Definition 2.13. We say that a minitwistor space (S, |C|) is minimal if it cannot be obtained from another minitwistor space by blowing-up a point; more precisely, if there exist no minitwistor space (S, |C|) and a point p ∈ S such that S is obtained from S by blowing-up p and such that |C| = µ * |C|, where µ : S → S is the blow-up at p. Of course, this minimality does not imply that S does not have a (−1)-curve. For minitwistor spaces of index 1, the following classification result seems more or less well known: Proposition 2.14. Let (S, |C|) be a minimal minitwistor space of index 1. Then one of the following holds. Proof. When the index is 1, a standard argument readily implies dim |C| = 3 and Bs |C| = ∅. Let φ : S → CP 3 be the morphism associated to |C|. If |C| is composed with a pencil, general member of |C| becomes reducible which cannot happen. Hence dim φ(S) = 2. Since C 2 = 2 (and φ(S) cannot be a plane), φ must be birational over φ(S) which is a quadratic surface. If φ(S) is a full-rank (i. e. non-singular), it is isomorphic to CP 1 × CP 1 , and φ(C) is a hyperplane section for any C ∈ |C|. Hence φ(C) ∈ |O(1, 1)|. On the other hand since φ is a birational morphism φ is decomposed as a composition of the usual blow-ups. Hence by minimality of (S, |C|), we obtain that φ is isomorphic. Hence (i) holds. Alternatively, if φ(S) is not a fullrank, φ(S) must be a (quadratic) cone Σ 2 with a unique vertex and φ(C) is again a hyperplane section. Hence φ factors through as S → Σ 2 → Σ 2 , where Σ 2 → Σ 2 is the minimal resolution of the cone. But again by minimality, the map S → Σ 2 must be isomorphic and C must be the pull-back of the hyperplane section. The last curve is a (+2)-section, which implies (ii). We note that the proof of (ii) of the proposition means that even if a minitwistor space (S, |C|) is minimal, the morphism φ associated to |C| can be non-isomorphic (over the image) in general. If the morphism φ is non-isomorphic, the minimality means that the (birational) image φ(S) necessarily has singularities. (See Remark 5.3 for this.) In Section 5 we will use the following criterion for the minimality. Geometry of Einstein-Weyl structure on the Severi varieties In the previous section we showed that on the 3-dimensional Severi variety of a minitwistor space (in the sense of Definition 2.11) there exists a natural Einstein-Weyl structure (Theorem 2.10). In this section, we investigate null surfaces and geodesics on these 3-dimensional complex Einstein-Weyl manifolds. Throughout this section, (S, |C|) denotes a minitwistor space of (any) index m, W denotes the Severi variety W |C|, m−1 (which is non-singular and 3-dimensional), and ([g], ∇) denotes the natural Einstein-Weyl structure on W . 3.1. The conformal structure on the Severi varieties. In this subsection, for any point C ∈ W , we first identify the tangent space T C W as a subspace of H 0 (OC(2m)) and also with H 0 (OC(2)), whereC is the normalization of C. Next we represent the conformal structure [g] in terms of polynomials onC. We fix any C ∈ W . First, by Proposition 2.4 (i), we have a canonical isomorphism Let p 1 , · · · , p m−1 be the nodes of C, and put ν −1 (p i ) = {p 1 i ,p 2 i }. 
Next we define V C ⊂ H 0 (OC(2m)) to be the image of the composition of the following two canonical injections: Then we have an isomorphism T C W ≃ V C . Also we have Let (z 0 , z 1 ) be any homogeneous coordinate onC ≃ CP 1 . Suppose that in this coordinate the two pointsp 1 i andp 2 i are represented as Of course we have (a i , b i ) = (c i , d i ) as points on CP 1 . If we put then by (16) each element s ∈ V C can be written as Here, since (a i , b i ) and (c i , d i ) are determined only up to scale, the coefficients (a, b, c) are also determined up to scale. Thus we obtain isomorphisms Using the normalization of a tubular neighborhood of C explained in the previous section, the tangent space T C W can be alternatively described as follows. Let U be a 'tubular neighborhood' of the curve C ∈ W , and letŨ → U be the normalization, which is the extension of the normalization ν :C → C. By the proof of Theorem 2.10 and the Hitchin correspondence explained in Section 2.1, we have an isomorphism where we have NC /Ũ ≃ OC(2). Using the homogeneous coordinate onC as above, each section θ ∈ H 0 (NC /Ũ ) is written as a quadratic polynomial θ(z 0 , z 1 ) = az 2 0 + bz 0 z 1 + cz 2 1 . By Lemma 2.9, the relation between the two isomorphisms T C W ≃ V C and T C W ≃ H 0 (NC /Ũ ) is clearly given by which is uniquely determined up to scaling. The null cones of the conformal structure [g] on W can be readily written down using these isomorphisms. Because [g] was locally defined by regardingŨ as Hitchin's minitwistor space, the conformal structure [g] is defined, as in (4), in such a way that the null cone N C ⊂ T C W ≃ H 0 (NC /Ũ ) is given by (2)) | the equation θ = 0 has a double root}. By the identification (23), N C is also written as N C = {s ∈ V C | the quadratic equation s(z 0 , z 1 )/f (z 0 , z 1 ) = 0 has a double root }. If we use the notation of (20), then we can write 3.2. Null surfaces in the Severi varieties. In general, Severi varieties are naturally embedded in a projective space by the definition. If W is a Severi variety of the minitwistor space (S, |C|), W is embedded in CP m+2 since dim |C| = m + 2 by Proposition 2.8 (i). In this subsection we investigate particular hyperplane sections of W , which will turn out to be null surfaces of the Einstein-Weyl manifold (W, [g], ∇). First, for arbitrary p ∈ S we set which is a hyperplane in |C| since Bs |C| = ∅ by Proposition 2.8 (ii), and define a hyperplane section by W p := W ∩ |C| p . By definition, W p = {D ∈ W | p ∈ D}. By Lemma 2.12 we have W p = W . However, since W is not closed in the projective space CP m+2 , W p can be empty. Actually, we will later show that Proof. Let C ∈ W p , and letŨ → U be the 'normalization of a tubular neighborhood' of C, that is, the extension of the normalization ν :C → C to the tubular neighborhoods ofC and C. Then noting that two different null surfaces always intersect transversally along a non-singular non-null geodesic in general (Remark 2.3 (i)), (27) directly implies (ii) and (iii) of the proposition. In the above proof, if C ∈ W p \W 1 p , we obtain by the Hitchin correspondence. Similarly, if C ∈ W 1 p , then we obtain Then any null plane V on W has a unique point p ∈ S such that V is tangent to the null surface W p as follows: Proposition 3.2. For any point C ∈ W and any null plane V ⊂ T C W , there exists a unique point p ∈ S which satisfies exactly one of the following: (i) C ∈ W p \W 1 p and T C W p = V , or (ii) C ∈ W 1 p and V is the tangent space of one of the two branches of W p at C. 
Proof. LetŨ , U,C be as in the proof of Proposition 3.2. By the definition of the conformal structure [g], we can readily see that for each null surface V in T C W ≃ H 0 (NC /Ũ ), there exists a unique pointp ∈C such that We put p = ν(p). If p ∈ C\Sing C, then (i) follows from (28). If p ∈ Sing C, then the pointp coincides with somep k ∈ ν −1 (Sing C) and (ii) follows from (29). Next we explain another way to see that W 1 p is exactly the singular locus of W p . Take any point C ∈ W . We have a natural isomorphism T C W ≃ H 0 (O C (C) ⊗ I Sing C ) as in (14). We also have a natural isomorphism T C (|C| p ) ≃ H 0 (O C (C) ⊗ I p ). Therefore at the tangent space level, we have If p ∈ Sing C (namely if C ∈ W 1 p ), the right-hand-side is equal to H 0 (O C (C)⊗I Sing C ) = T C W . This means that the hyperplane |C| p is tangent to W at the point C. Hence the hyperplane section W p is singular at C. Therefore W 1 p ⊂ Sing W p holds. On the other hand, if p ∈ C\Sing C (namely if C ∈ W p \W 1 p ), the right-hand-side of (30) becomes H 0 (O C (C)⊗ I Sing C ⊗ I p ), which is readily seen to be 2-dimensional. This means that |C| p and W intersect transversally at C. Therefore W p = W ∩ |C| p is non-singular at C. Hence we obtain the required coincidence W 1 p = Sing W p . Using the isomorphism (30), we can also explain that W p has ordinary nodes along W 1 p . Fix any C ∈ W 1 p and let ν −1 (p) = {p 1 ,p 2 }. Now we takeq ∈C sufficiently close top 1 , and put q = ν(q). Then the surface W q is non-singular at the point C since q ∈ Sing C. Moreover, similarly to (30), we have Hence we obtain lim q→p 1 which is a 2-dimensional subspace of V C ≃ T C W . Obviously, the subspace (33) coincides with the tangent space T C Σ 1 , where Σ 1 is the null surface in the proof of Proposition 3.1. The same argument works for another pointp 2 . Remark 3.3. The subvarieties W p and W 1 p in W are naturally considered as Zariski open subsets of certain Severi varieties. Indeed, let µ : S ′ → S be the blowing-up at p, and E p the exceptional curve. We fix C ∈ W p \W 1 p and let C ′ be the strict transform of C under µ. Then the map gives an (open) embedding of W p to the Severi variety W |C ′ |, m−1 ⊂ |C ′ | whose dimension is (2m − 1) + 1 − 2(m − 1) = 2 by Proposition 2.6. A similar argument works for W 1 p . When m = 2, W 1 p is isomorphic to a Zariski open subset of CP 1 , since the Severi variety in which W 1 p is embedded is just a pencil. 3.3. Geodesics on the Severi varieties. In this subsection, we investigate geodesics on W . Let p and q be arbitrary two distinct points on S, and set |C| p,q := |C| p ∩ |C| q = {D ∈ |C| | p, q ∈ D}, W p,q := W ∩ |C| p,q = W p ∩ W q . Obviously if W p = ∅ or W q = ∅, then W p,q = ∅. Moreover, we will see later (Proposition 3.5) that W p,q is empty if φ(p) = φ(q), which is the case when the coincidence |C| p,q = |C| p = |C| q occurs . The following proposition means that if W p,q is non-empty, W p,q becomes a non-null geodesic on W with respect to [g] which has self-intersections. Proof. LetŨ → U be the 'normalization of a tubular neighborhood' of C. First suppose p, q ∈ Sing C. If we putp = ν −1 (p) andq = ν −1 (q), then we have By Proposition 2.2 (ii) and (iii), this is a non-null geodesic in O. Hence (i) holds. Next suppose p ∈ Sing C and q ∈ Sing C. Putting Further L 1 and L 2 are non-null geodesics by Proposition 2.2 (iii). Moreover we have L 1 ∩ L 2 = {C} sinceC is the unique curve in O which containsp 1 ,p 2 andq ∈D. 
Hence O ∩ W p,q is a union of two non-null geodesics intersecting at C. Finally, suppose p and q are distinct nodes of C. Putting ν −1 (p) = {p 1 ,p 2 } and ν −1 (q) = {q 1 ,q 2 }, we have Hence again by Proposition 2.2 (iii), O ∩ W p,q is a union of four non-null geodesics intersecting at C. Next we show that the hyperplane section W p can be empty in general. Proposition 3.5. Let φ : S → CP m+2 be the rational map associated to the system |C|. If p and q are distinct two points on S satisfying φ(p) = φ(q), then the sets W p , W q and W p,q are all empty. Proof. If φ(p) = φ(q), then we have |C| p,q = |C| p = |C| q . Hence W p,q = W p = W q by definition. On the other hand, by Proposition 3.4, W p,q is 1-dimensional as long as it is nonempty, while W p and W q are 2-dimensional if they are non-empty. Therefore since W p,q ⊂ W p and W p,q ⊂ W q , W p,q = W p = W q happens only when W p,q = W p = W q = ∅. Suppose W p,q is non-empty. By Proposition 3.4, a point C ∈ W p,q is a singular point of W p,q if and only if (Sing C) ∩ {p, q} = ∅. We can also prove this in the following way. If (Sing C) ∩ {p, q} = ∅, C is a smooth point of W p and also of W q by Proposition 3.1. Moreover we have The dimension of the last space is (2m + 1) − 2(m − 1) − 2 = 1. This means that the two surfaces W p and W q intersect transversally at the point C. Hence W p,q is non-singular at the point C. If (Sing C) ∩ {p, q} = ∅, at least one of W p and W q has singularities at the point C by Proposition 3.1. Since W p,q is a hyperplane section of W p and W q , W p,q has singularity at C. So far in this subsection we have supposed p = q. Next we define W p,q when q is an infinitely near point of p. Let p ∈ S, µ : S ′ → S be the blowing-up at p, E p the exceptional curve, and q a point on E p . Then we set where D ′ is the strict transform of D. By definition we have W p,q ⊂ W p . In particular, W p,q is empty if W p is empty. Proposition 3.6. Let p ∈ S and q ∈ E p be as above. Then if W p,q = ∅, the set W p,q is a non-singular null geodesic on W . Proof. Take any C ∈ W p,q . Let ν :C → C be the normalization andŨ → U the extension of ν to the tubular neighborhoods. Then we can uniquely define the natural liftp ∈C and its infinitely near pointq in the following way. If p ∈ Sing C, then putp = ν −1 (p). Then we define the liftq to be the point corresponding to the tangent direction TpC. If p ∈ Sing C, then ν −1 (p) consists of two points. We define the liftp ∈ ν −1 (p) in such a way thatp lies on the branch of which the image by ν tangents to the direction determined by q. The liftq is defined by the direction TpC (See Figure 6). If O is a sufficiently small open neighborhood of the point C ∈ W , then by construction, for both cases we have whereD ′ is the strict transform ofD under the blowing-up ofŨ atp. Hence by Proposition 2.2 (iii) W p,q is non-singular everywhere, and is a null geodesic. Remark 3.7. Let p ∈ S and q ∈ E p be as above. If W p = ∅ and q ∈ Bs |µ −1 (D) − E p | for some D ∈ W p , then we have |C| p,q = |C| p . However this does not happen. Indeed, we obtain W p,q = W p = ∅ by an argument similar to the proof of Proposition 3.5, and this is a contradiction. Finally in this subsection we illustrate the way how a nodal curve moves when a point on the Severi variety moves along a geodesic. Take any C ∈ W . As in Section 3.1, we have T C W ≃ H 0 (NC /Ũ ). Take any tangent direction at C, which is represented by a non-zero section θ ∈ H 0 (NC /Ũ ). (θ is uniquely determined up to scaling.) 
Letp,q ∈C be the zeros of θ. Putting ν(p) = p and ν(q) = q, the geodesic γ which satisfies T C γ = Cθ is determined (in a neighborhood of C) in the following way: (a1) If p = q, p ∈ Sing C and q ∈ Sing C, then γ coincides with W p,q locally, and C is a smooth point of W p,q . (This is the most general case.) If the point C moves along the geodesic W p,q , the nodal curve C moves as illustrated in Figure 3. (a2) If p = q, p ∈ Sing C and q ∈ Sing C, then γ is one of the two branches of W p,q . If the point C moves along the geodesic W p,q , the nodal curve C moves as illustrated in Figure 4, depending on the two branches of W p,q . (a3) If p = q, p ∈ Sing C and q ∈ Sing C. then γ is one of the four branches of W p,q . (b) If p = q andp =q, then p ∈ Sing C andp andq correspond to the two branches at p. Hence if D ∈ W p,q , D has a node at p. Therefore γ = W 1 p . In this case the nodal curve C moves as in Figure 5. (c) Ifp =q, then we can consider q as an infinitely near point of p, and γ = W p,q . Depending on whether p ∈ Sing C or p ∈ Sing C, the nodal curve C moves as illustrated in Figure 6. 3.4. Double fibration. Finally in this section we explain a double fibration associated to our construction. As our Severi varieties W (of a minitwistor space (S, |C|)) carry Einstein-Weyl structure, we have a null plane bundle ̟ : Q(W ) → W defined by (3), which is a conic bundle. For each point u ∈ Q(W ), let V u ⊂ T ̟(u) be the null plane corresponding to u. By Proposition 3.2, there exists a unique point p ∈ S such that V u is tangent to W p , where if ̟(u) ∈ W 1 p , 'tangent to W p ' means 'tangent to one of the branch of W p '. Let f (u) := p. Namely, f (u) ∈ S is the unique point satisfying ̟(u) ∈ W f (u) and T ̟(u) W f (u) ⊃ V u . (When p ∈ Sing C, T C W p means the union of two tangent spaces of the two branches of W p at C.) This way we obtain a map f : Q(W ) → S. This insists that W p is the tautological lift of W p . In particular, the restriction ̟| f Wp : W p → W p gives the resolution of the singularity W 1 p of W p . Clearly { W p | p ∈ S} foliates Q(W ). If we denote Q(W ) C := ̟ −1 (C) for the fiber, then we can write as Hence the image f (Q(W ) C ) is the nodal rational curve C itself. The restriction f | Q(W ) C : Q(W ) C → C gives the normalization of C. On the other hand, since the Severi variety W parametrizes curves on the surface S, there is the universal family which is concretely given by For any C ∈ W , the fiber over the point C is exactly the nodal curve C. Obviously the total space R(W ) has ordinary nodes along the locus {(C, p) ∈ R(W ) | p ∈ Sing C}. Then we can define a natural map Ψ : Q(W ) → R(W ) by Ψ(u) = (̟(u), f (u)), which makes the diagram where V σ C is the fixed subspace under σ. We prove that the restricted conformal structure [g]| V σ C naturally defines a definite real conformal structure. Let (z 0 , z 1 ) be a homogeneous coordinate onC such that the lifted involution is given by (z 0 , z 1 ) → (−z 1 ,z 0 ). We can use a weighted homogeneous coordinate (z 0 , z 1 , v) on OC(2m) with the equivalence relation where the projection OC (2m) →C is given by (z 0 , z 1 , v) → (z 0 , z 1 ). The involution σ oñ C naturally induces an anti-linear bundle automorphismσ : OC(2m) → OC(2m) covering σ, which can be written asσ using a holomorphic function h on C 2 . We have, however, h(z 0 ,z 1 ) = h(λz 0 , λz 1 ) by the welldefinedness. Therefore h(z 0 ,z 1 ) is a non-zero constant h. Moreover, sinceσ is an involution, we obtain |h| = 1. 
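Before the general computation resumes, the index-one model of this reality argument can be written out (a sketch, using the lifted antipodal involution (z_0, z_1) ↦ (−\bar z_1, \bar z_0) stated above; the general index-m case carries the extra unimodular factor h):

% induced action on quadratic sections theta = a z0^2 + b z0 z1 + c z1^2:
\[
\hat\sigma(\theta)(z_0, z_1) \;=\; \overline{\theta(-\bar z_1, \bar z_0)}
\;=\; \bar c\, z_0^2 \;-\; \bar b\, z_0 z_1 \;+\; \bar a\, z_1^2,
\]
% fixed (real) sections satisfy c = \bar a and b = -\bar b, so writing
\[
a = x_2 + i x_3, \qquad c = x_2 - i x_3, \qquad b = 2 i x_1 \qquad (x_1, x_2, x_3 \in \mathbb R),
\]
% the discriminant form defining the null cone restricts to
\[
b^2 - 4ac \;=\; -4\,(x_1^2 + x_2^2 + x_3^2),
\]
which is definite, so the null cone meets the real slice only at the origin. This is the model case for the positive-definiteness of [g]|_{V^σ_C} derived below.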
From now on we use the same notation as (17), (18) and (19) . Take any s ∈ V C ≃ H 0 (OC(2m)). In the above coordinate, the polynomial s defines a section of the line bundle OC (2m) given by (z 0 , z 1 ) → (z 0 , z 1 , s(z 0 , z 1 )). By (40), this section is mapped byσ to the section represented by the polynomial s ′ (z 0 , z 1 ) given by Now, since the two pointsp 1 i andp 2 i (over the node p i ∈ C) are antipodal each other, we can set (c i , d i ) = (−b i ,ā i ). This means Substituting this to (41), and using the isomorphisms T C W ≃ V C ≃ {(a, b, c) | a, b, c ∈ C}, we obtain where we put (−1) m h = e 2iθ . Then if we put we have The restriction of the complex conformal structure [g] is represented by Hence the real slice [g]| V σ C is defined by the quadratic form x 2 1 + x 2 2 + x 2 3 which is actually positive-definite. Now, let us define a connection∇ on T W bỹ Here,∇ is actually a connection, since we have, for example, We claim that: (i)∇ is torsion-free, (ii)∇ is contained in the projective class [∇], and (iii)∇ is compatible with [g]. Then since ∇ is the unique connection which satisfies these conditions (see Proof of Proposition 4.1 of [8]), we obtain∇ = ∇. Therefore the statement of the lemma follows from the definition (49). The claim (i) can be directly checked by using an obvious equality [σ(X),σ(Y )] =σ[X, Y ]. To check (ii), it is enough to see that every geodesic of ∇ is also a geodesic of∇. (Here, a 'geodesic' means an unparametrized geodesic.) Take an arbitrary C ∈ W and a small neighborhood O ⊂ W of C. Let p, q ∈ C be two points which may be infinitely near, and consider the geodesic γ : is also a geodesic. If X is a tangent vector field of γ, thenσ(X) is a tangent vector field of σ(γ). So ∇σ (X)σ (X) is proportional toσ(X). This means that∇ X X is proportional to X, which is an equivalent condition for γ to be a geodesic of∇. So∇ is compatible with [σ(g)]. Hence, to prove (iii), it is enough to show the coincidence [σ(g)] = [g]. For this, notice that each null geodesic is mapped to another null geodesic by σ. Hence for a tangent vector X, g(X, X) = 0 if and only ifσ(g)(X, X) = 0. Therefore the null cones of g and those ofσ(g) coincide, soσ(g) and g are conformally equivalent. Now we can show the main result in this section. For each p ∈ S we denote (W σ 0 ) p := W σ 0 ∩ W p = {C ∈ W σ 0 | p ∈ C}. Obviously we have (W σ 0 ) p = (W σ 0 ) σ(p) . Proposition 4.4. For each p ∈ S, the locus (W σ 0 ) p is a geodesic on the real Einstein-Weyl manifold W σ 0 if it is not empty. Proof. Take any C ∈ W σ 0 and any complex null plane V ⊂ T C W ≃ T C W σ 0 ⊗ C. Here, we claim V = σ(V ) Let L ⊂ V be the complex null line (see (ii)' written after Remark 2.3). Suppose V = σ(V ), then we obtain L = σ(L). This means that there exists a real null line L R ⊂ T C W σ 0 such that L = L R ⊗ C. However, such a real null line can not exist because the conformal structure is definite. Hence we obtain V = σ(V ), as required. Therefore the intersection V ∩ σ(V ) is a complex line. Since l = V ∩ T C W σ 0 is the real line satisfying V ∩ σ(V ) = l ⊗ C, we find that for each null surface Σ ⊂ W , the real locus Σ ∩ W σ 0 is a real curve if it is not empty. Now we putp = σ(p). If p =p, then p ∈ Sing C by the condition 1 • . Hence (W σ 0 ) p does not intersects the singular locus W 1 Since W p,p is a geodesic on W and ∇ is real on W σ 0 , (W σ 0 ) p is a real geodesic. If p =p, then each C ∈ W σ p has a node at p by the conditions 1 • and 3 • . 
Here, notice that W 1 p is a σ-invariant complex geodesic in this case. Hence its real locus (W σ 0 ) p is also a geodesic. Finally in this section, we explain what happens for the basic diagram (37) if we take real structure into account. Let P (W σ 0 ) := P(T W σ 0 ) be the projectivization of the tangent bundle of the real manifold W σ 0 . Then we can define a natural map given by j : ) and [ϕ] = [φ] by the conditions 1 • -3 • , j is a double cover. Combined with the universal family R(W ) → W obtained in Section 3.4, we obtain the following commutative diagram: x x r r r r r r r r r r r Recall that there is an integrable complex two-plane distribution D on Q(W ) consisting of the fiber directions of f : Q(W ) → S. Then D descends by j to a real 1-dimensional distribution on P (W σ 0 ). The integral curves of the last distribution is the natural lift of a geodesic on W σ 0 . Namely, the distribution is the geodesic spray on W σ 0 . Explicit examples of the minitwistor spaces In this section, for any m ≥ 2 we construct a family of minimal minitwistor spaces of index m. By the results we have obtained so far, the relevant Severi varieties of these minitwistor spaces have a natural Einstein-Weyl structure. 5.1. Construction of the minitwistor spaces by blowing-up CP 1 × CP 1 . Let m and k be integers satisfying m ≥ 2 and 1 ≤ k < m. Let D 1 and D 2 be any irreducible curves on CP 1 × CP 1 whose bidegrees are (k, 1) and (m − k, 1) respectively. (Of course, these are nonsingular rational curves.) We suppose that the reducible curve D 1 + D 2 has only ordinary nodes; in other words, we suppose that D 1 and D 2 intersect transversally at any intersection points. (So D 1 + D 2 has exactly m nodes.) Next we choose arbitrary 2k points p 1 , · · · , p 2k on D 1 \D 2 , and also 2(m − k) points p 2k+1 , · · · p 2m on D 2 \D 1 . Then let µ : S → CP 1 × CP 1 be the blowing-up at p 1 , · · · , p 2m . Let C 1 ⊂ S and C 2 ⊂ S be the strict transforms of D 1 and D 2 respectively. We readily have C 2 1 = C 2 2 = 0 and C 1 and C 2 intersect transversally at m points. Here, we are allowing (subsets of) the points {p 1 , · · · , p 2k } or {p 2k+1 , · · · , p 2m } to be infinitely near: in such a case, the blowing-up is always performed at the intersection point of the strict transforms of D 1 or D 2 with the exceptional curves. We show that this surface S satisfies the required property as follows: It follows from Theorem 2.10 that the Severi variety W |C|, m−1 (for S) is a 3-dimensional complex Einstein-Weyl manifold. Proof of Proposition 5.1. We prove the proposition by showing that for the reducible curve C 1 +C 2 any one of the m nodes can be smoothed, while all other nodes remain nodes. Since S, C 1 and C 2 are rational satisfying C 2 1 = C 2 2 = 0, both of the systems |C 1 | and |C 2 | are base point free pencils. Therefore the system |C 1 + C 2 | is base point free. By the cohomology exact sequences Further, we easily have H 2 (O S (C 1 + C 2 )) = 0. On the other hand, by the Riemann-Roch formula we have Hence we have dim |C 1 + C 2 | = m + 2. Let φ : S → CP m+2 be the morphism associated to the system |C 1 + C 2 |. We claim that φ is birational to its image. Take different two points p and q on S which are not on C 1 ∪ C 2 . Then as the pencil |C 1 | is base point free, for general p and q, there is a curve C ′ 1 ∈ |C 1 | which satisfies p ∈ C ′ 1 and q ∈ C ′ 1 . Then the curve C ′ 1 + C 2 satisfies p ∈ C ′ 1 + C 2 and q ∈ C ′ 1 + C 2 . This means φ(p) = φ(q). Therefore φ is birational, as claimed. 
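The Riemann-Roch step quoted in the proof above can be made explicit (a sketch; adjunction on the rational curves C_1, C_2 with C_1^2 = C_2^2 = 0 gives K_S·C_1 = K_S·C_2 = −2, and χ(O_S) = 1 since S is rational):

\[
D := C_1 + C_2, \qquad
D^2 = C_1^2 + 2\,C_1\!\cdot\! C_2 + C_2^2 = 2m, \qquad
K_S \cdot D = -4,
\]
\[
\chi(\mathcal O_S(D)) \;=\; \chi(\mathcal O_S) + \tfrac12\, D \cdot (D - K_S)
\;=\; 1 + \tfrac12\,(2m + 4) \;=\; m + 3,
\]
% with h^1(O_S(D)) = h^2(O_S(D)) = 0 as shown above:
\[
h^0(\mathcal O_S(D)) = m + 3, \qquad \dim |C_1 + C_2| = m + 2 .
\]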
Then by Bertini's theorem, general members of the system |C 1 + C 2 | are irreducible (and nonsingular). Further, by adjunction formula, we have K S C 1 = K S C 2 = −2 < 0. Therefore, any of the nodes of the curve C 1 + C 2 can be smoothed in the system |C 1 + C 2 | independently ([2, page 251]). By taking a smoothing which makes exactly one of the nodes smooth, we obtain a curve C ∈ |C 1 + C 2 | which has exactly (m − 1) nodes as its all singularities. It is obvious that C is irreducible. Further C must be a rational curve by topological reason. Finally, as (C 1 + C 2 ) 2 = 2m, we have C 2 = 2m. Thus we obtain that the curve C satisfies the required properties for the pair (S, |C|) to be a minitwistor space. Therefore the above minitwistor spaces constitute 2 · 2m = 4m-dimensional family. By noting dim Aut(CP 1 × CP 1 ) = 6, the number of effective parameters for the family is 4m − 6. As m ≥ 2, when the 2m points p 1 , · · · , p 2m are in a general position, the automorphism group of S is readily seen to be 0-dimensional. Next by locating the 2m points in some special position, we provide examples of S which admit an effective C * -action, or even an effective C * × C * -action. For the configuration that allows C * -action, we choose distinct two curves of bidegree (0, 1). Next take m points p 1 , · · · , p m on one of the (0, 1)-curves and other m points q 1 , · · · , q m on another (0, 1)-curve. Here, we are allowing the case that p i = p j or q i = q j for some i = j; in that case, the blowing-up is always performed on the strict transform of the (0, 1)-curve on which p i = p j or q i = q j belongs. Then the surface S clearly admits an effective C * -action which is a lift of the C * -action on the first factor of CP 1 × CP 1 . In order for the surface S to have the required nodal rational curves, we suppose that the 2m points satisfy the following (genericity) condition: ( * ) Let a i := π 1 (p i ) ∈ CP 1 and b i := π 1 (q i ) ∈ CP 1 , where π 1 : CP 1 × CP 1 → CP 1 denotes the projection to the first factor. Then after possible renumbering for the indices of a i and b i independently, there exists a subset I ⊂ {1, 2, · · · , m} with I = ∅ and I = {1, · · · , m} such that Note that if the 2m points {p i , q i | 1 ≤ i ≤ m} satisfies this condition, then {p ′ i , q ′ i | 1 ≤ i ≤ m} also satisfies the condition if p ′ i and q ′ i are sufficiently close to p i and q i respectively, and if p ′ i and q ′ i are lying on the same (0, 1)-curves respectively. Proposition 5.2. Assume that the 2m points on CP 1 × CP 1 satisfy the above condition ( * ). Then the surface S (with C * -action) has a rational curve C such that the pair (S, |C|) is a minimal minitwistor space of index m. Since the C * -action on S clearly induces a non-trivial C * -action on the set of nodal curves, the Severi variety W |C|,m−1 on S has a non-trivial C * -action. Namely, W |C|, m−1 is an Einstein-Weyl manifold which admits a C * -action. The last C * -action preserves the Einstein-Weyl structure, since the action clearly preserves both of the set of null cones (24) (or (25)) and the set of geodesics W p,q . Proof of Proposition 5.2. As in the proof of Proposition 5.1, it suffices to find 2 non-singular rational curves C 1 and C 2 on S satisfying C 2 1 = C 2 2 = 0 and C 1 C 2 = m, and intersecting transversally. We give these curves explicitly. Let u and v be non-homogeneous coordinates on the first and second factor of CP 1 × CP 1 respectively. 
In these coordinates we can suppose that the 2m points are explicitly defined by (u, v) = (a i , 0) for respectively, where c ∈ C * . By the condition ( * ) we have {a i | i ∈ I} ∩ {b i | i ∈ I} = ∅ and {a i | i ∈ I}∩{b i | i ∈ I} = ∅. Therefore the denominators and numerators in 55 have no common factor. Hence if k means the number of elements of I, so that 1 ≤ k < m, the bidegrees of D 1 and D 2 are (k, 1) and (m − k, 1) respectively. Clearly, these are irreducible curves, {p i , q i | i ∈ I} ⊂ D 1 , and {p i , q i | i ∈ I} ⊂ D 2 . By the condition ( * ), {p i , q i | i ∈ I} ∩ {p i , q i | i ∈ I} = ∅ holds. In particular, D 1 and D 2 do not intersect on the two (1, 0)-curves on which p i and q i belongs. Hence, for general choices of c ∈ C * , D 1 and D 2 intersect transversally (at m points). Then as the two curves C 1 and C 2 , it suffices to choose the strict transforms of D 1 and D 2 respectively. Remark 5.3. For the surface S and the curve C in Proposition 5.2, the linear system |C| is base point free and (m + 2)-dimensional by Proposition 2.8. If φ : S → CP m+2 still denotes the morphism associated to |C|, φ contracts two rational curves which are the strict transforms of the (0, 1)-curves on which the 2m points lie. The image of these two curves become cyclic quotient singularities of φ(S). For general choices of the 2m points satisfying the condition ( * ), the identity component of the automorphism group of the minitwistor space S is clearly C * . By a similar consideration for the case of general configurations (without C * -action), the number of effective parameters for the family of the minitwistor spaces in Proposition 5.2 is given by 2m − 3. Among all configurations of the 2m points satisfying ( * ), the surface S becomes a toric surface iff the set {a i , b i | 1 ≤ i ≤ m} (⊂ CP 1 ) consists of exactly 2 points; in other words, iff p i = (0, 0) for 1 ≤ i ≤ k, p i = (∞, 0) for k < i ≤ m, q i = (0, ∞) for k < i ≤ m and q i = (∞, ∞) for 1 ≤ i ≤ k (after renumbering the indices and changing the coordinates). By looking the self-intersection numbers of the irreducible components of the unique C * × C * -invariant anticanonical curves (which determines the toric surface uniquely), it can be readily verified that the structure of these toric surfaces is independent of k. Therefore, for each m ≥ 2 we have obtained exactly one toric surface whose Severi variety of (m − 1)-nodal rational curves is a 3-dimensional complex Einstein-Weyl manifold. Exactly as in the case with C * -action, C * × C * acts on these manifolds preserving the Einstein-Weyl structure. Therefore for each m ≥ 2 we obtain a complex Einstein-Weyl 3-fold which admits a C * × C * -action. Next we show the minimality of the minitwistor spaces obtained so far: Proof. Let µ : S → CP 1 × CP 1 be the blowing-up, E 1 , · · · , E 2m the exceptional curves which satisfy E i · E j = −δ ij for 1 ≤ i, j ≤ 2m, and φ : S → CP m+2 the birational morphism induced by |C| as in the proof of Proposition 5.1. (In particular, E i 's are not necessarily irreducible.) For the minimality, by Proposition 2.15 it suffices to show that φ does not contract any (−1)curves on S. Suppose that E is a (−1)-curve which is contracted by φ. If E = E j for some 1 ≤ j ≤ 2m, E j is irreducible, and we have Hence (together with the fact that Bs|C| = ∅) we have dim φ(E) = 1. Hence E = E j for any 1 ≤ j ≤ 2m. So we can write where ∼ denotes linearly equivalence, k, l and m i satisfy k ≥ 0, l ≥ 0, k + l > 0, m i ≥ 0 and 2m i=1 m i > 0. 
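The displays (57)-(59) referred to in this minimality argument can be plausibly reconstructed as follows (a sketch; the ruling classes f_1, f_2 with f_1^2 = f_2^2 = 0, f_1·f_2 = 1, and the class C ∼ m f_1 + 2 f_2 − Σ E_i, are our notational assumptions, chosen to reproduce the conclusion l(m − 2) = −1 stated below; note also that "Hence E = E_j for any 1 ≤ j ≤ 2m" above is to be read as E ≠ E_j for every j, since E_j · C = 1 > 0 shows E_j is not contracted):

% (57): class of a hypothetical (-1)-curve E contracted by phi:
\[
E \;\sim\; k f_1 + l f_2 - \sum_{i=1}^{2m} m_i E_i,
\qquad k, l \ge 0,\ \ k + l > 0,\ \ m_i \ge 0,\ \ \sum_{i=1}^{2m} m_i > 0.
\]
% (58): E^2 = -1 and, by adjunction for a (-1)-curve, E . K_S = -1:
\[
2kl - \sum_i m_i^2 \;=\; -1,
\qquad
-2k - 2l + \sum_i m_i \;=\; -1.
\]
% (59): phi contracts E, i.e. dim phi(E) = 0, so E . C = 0:
\[
2k + lm - \sum_i m_i \;=\; 0 .
\]

Eliminating Σ_i m_i between the last two displays gives l(m − 2) + 1 = 0, that is, l(m − 2) = −1, exactly the relation used in the text.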
Then as E is a (−1)-curve, we have On the other hand, as we have supposed dim φ(E) = 0, we have By (58) and (59) we obtain l(m−2) = −1. As l and m−2 are non-negative integers, this means l = m = 1. This contradicts our assumption m > 1. Therefore there exists no (−1)-curve E on S which is contracted by φ. 5.2. Examples of the minitwistor spaces with a real structure and real twistor lines. The complex surfaces we shall consider next are the rational surfaces given in [6, Section 2]. Though they can be shown to be included in the examples in the last subsection, an advantage of the surfaces in this subsection is that they are equipped with a natural real structure which is induced from that of the twistor spaces (of real self-dual 4-manifolds), and we can find real nodal rational curves satisfying the conditions we have considered throughout Section 4. By the result in Section 4, this means that the real locus of the Severi varieties become real (3-dimensional) Einstein-Weyl manifolds. First we briefly recall some of the results in [6]. We consider (arbitrary) effective U (1) 2 -action on nCP 2 and take any one of Joyce's self-dual metrics on nCP 2 which are invariant under the U (1) 2 -action [7]. Let Z be the twistor space of the self-dual metric. The U (1) 2 -action on nCP 2 naturally induces a holomorphic G := C * × C * -action on Z. On the other hand, the U (1) 2action on nCP 2 has exactly (n + 2) invariant two-spheres. As the next step for obtaining the required complex surface, choose any one of these U (1) 2 -invariant spheres and let K 1 ⊂ U (1) 2 be the isotropy subgroup which fixes any points on the sphere. K 1 is isomorphic to U (1). Let G 1 ⊂ G be the complexification of K 1 . We have G 1 ≃ C * . Let F be the canonical square root of the anticanonical line bundle of Z. Then the G-action on Z naturally lifts on F , so that also on the tensor product kF , k > 0. Let H 0 (Z, kF ) G (resp. H 0 (Z, kF ) G 1 ) be the subspace consisting of all G-invariant (resp. G 1 -invariant) sections, and |kF | G (resp. |kF | G 1 ) the corresponding linear system. |F | G is a pencil whose members are smooth toric surfaces. Under this situation, we have the following. Proposition 5.6. ([6, Section 2]) There exists a unique integer m satisfying the following. (i) If k < m, the system |kF | G 1 is composed with the pencil |F | G , so that dim |kF | G 1 = k. (ii) dim |mF | G 1 = m + 2. (iii) If Φ G 1 m : Z → CP m+2 denotes the rational map associated to the system |mF | G 1 , the image Φ G 1 m (Z) is a normal rational surface whose degree (in CP m+2 ) is 2m. Note that the integer m is explicitly computable through the algorithm given in [6, Section 2, Procedure (A)]. As in [6, Def. 2.9] we write T := Φ G 1 m (Z). Recall that T can be regarded as a quotient space with respect to the G 1 -action: namely for general points of T , the inverse image under Φ G 1 m are the closure of the G 1 -orbits. (More invariantly, T is exactly the 'canonical quotient space' of the Moishezon twistor space Z under the G 1 -action, as proved in [6,Appendix]). Also recall that T has a natural real structure induced by that on Z. Although T always has isolated singularities as long as m ≥ 2 ([6, Prop. 2.14]), we have the following. Proof. First we recall the structure of Z more closely. By [6,Prop. 2.11], the system |mF | G is composed with the pencil |F | G . 
As in [6, Diagram (13)] we have the following commutative diagram of meromorphic maps: (60) where Ψ m is the map associated to the system |mF | G , Λ m is the image of Ψ m so that Λ m ≃ CP 1 , and π m is the linear projection associated to the obvious inclusion H 0 (mF ) G ⊂ H 0 (mF ) G 1 whose fibers are CP 2 . Here, ι embeds Λ m ≃ CP 1 as a rational normal curve, and fibers of the restriction π m | T : T → ι(Λ m ) are conics, by which T has a structure of (rational) conic bundle over ι(Λ m ) ≃ CP 1 . To find the curve C in the proposition, take a real twistor line L which is disjoint from Bs |F | G . (Recall that Bs |F | G is exactly the cycle of rational curves which is the unique Ginvariant anticanonical curve on a smooth member of |F | G .) We will show that if L is sufficiently general then the image C := Φ G 1 m (L) satisfies the required properties in the proposition. For verifying the property (i), recall that Sing T consists of (a) one conjugate pair {P ∞ , P ∞ } of cyclic quotient singularities and (b) real singularities ([6, Prop. 2.14]). For the former ones, we have (Φ G 1 m ) −1 (Φ G 1 m (P ∞ )) ⊂ Bs |F | G , and hence also (Φ G 1 m ) −1 (Φ G 1 m (P ∞ )) ⊂ Bs |F | G . For the latter ones, the inverse images (under Φ G 1 m ) of the singularity is one of the G-invariant twistor lines, so that it intersects Bs |F | G . Since L ∩ Bs |F | G = ∅ by our assumption, these imply C ∩ Sing T = ∅. Hence C satisfies (i). Next to show (iii) we claim that C is contained in a hyperplane in P(H 0 (mF ) G 1 ) * . This can be proved in the same way as in [6,Prop. 4.4 (d)]. Actually as F · L = 2, Ψ m | L is 2 : 1 over Λ m . Then by the commutativity of the diagram (60), (π m • Φ G 1 m )| L is 2 : 1 over ι(Λ m ). Hence π m | C is either generically 1 : 1 or 2 : 1. But since C intersects any irreducible component of reducible fibers of π m | T at least once as L intersects any irreducible component of reducible members of |F | G exactly once, and the last irreducible component is mapped to an irreducible component of a reducible fiber of π m | T , we deduce π m | C is 2 : 1 over ι(Λ m ). Furthermore, by the property (i), we have C ∩ Sing T = ∅. On the other hand, as π m | T : T → Λ m is a conic bundle, we see that hyperplane sections of T also have the same intersection numbers with irreducible components of any fibers of π m | T . Moreover, general hyperplane sections do not intersect Sing T . These imply that the curve C and hyperplane sections of T determine the same element in H 2 (T , Z). Hence C is contained in a hyperplane section, as claimed. Therefore we have C 2 = deg T = 2m, and we obtain (iii). Remark 5.10. As we have explained, the minitwistor spaces T in Proposition 5.7 are obtained as quotient spaces of the twistor spaces of Joyce metrics on nCP 2 by C * -action. It might be worth mentioning that for any one of these minitwistor spaces, there exist Moishezon twistor spaces on nCP 2 with C * -action (for some n) whose quotient space is exactly the given minitwistor space but whose self-dual metric is not conformal to any Joyce metrics. This is proved in [6,Theorem 4.3]. In the papers [4] and [5], detailed structure are studied for some of such twistor spaces. Finally we give a comment on the number of effective parameters involved in the construction of the surfaces T (which are the quotient spaces of the twistor spaces of the Joyce metrics). As in the proof of Proposition 5.7, there is a canonical map π m | T : T → ι(Λ m ) ≃ CP 1 whose fibers are conics. 
For a general T (more precisely, if T has no real singularities), the complex structure of T is uniquely determined by the discriminant locus of π_m|_T, and the number of elements of the last locus is exactly 2m. Subtracting the dimension of the automorphism group of CP^1, the number of effective parameters is given by 2m − 3. This coincides with the number obtained in the last subsection. But of course, the location of the discriminant locus is subject to a reality condition, and the last number is the dimension over the real numbers. On the other hand, for a surface T which may have real singularities, if ν_i denotes the number of real A_i-singularities (see [6, Proposition 2.14]), then the number of reducible fibers of π_m|_T decreases and is given by $2m - \sum_i i\,\nu_i$.
Lactogenic hormones in relation to maternal metabolic health in pregnancy and postpartum: protocol for a systematic review

Introduction
Maternal metabolic disease states (such as gestational and pregestational diabetes and maternal obesity) are reaching epidemic proportions worldwide and are associated with adverse maternal and fetal outcomes. Despite this, their aetiology remains incompletely understood. Lactogenic hormones, namely, human placental lactogen (hPL) and prolactin (PRL), play often overlooked roles in maternal metabolism and glucose homeostasis during pregnancy and (in the case of PRL) postpartum, and have clinical potential from a diagnostic and therapeutic perspective. This paper presents a protocol for a systematic review which will synthesise the available scientific evidence linking these two hormones to maternal and fetal metabolic conditions/outcomes.

Methods and analysis
MEDLINE (via OVID), CINAHL and Embase will be systematically searched for all original observational and interventional research articles, published prior to 8 July 2021, linking hPL and/or PRL levels (in pregnancy and/or up to 12 months postpartum) to key maternal metabolic conditions/outcomes (including pre-existing and gestational diabetes, markers of glucose/insulin metabolism, postpartum glucose status, weight change, obesity and polycystic ovary syndrome). Relevant fetal outcomes (birth weight and placental mass, macrosomia and growth restriction) will also be included. Two reviewers will assess articles for eligibility according to prespecified selection criteria, followed by full-text review, quality appraisal and data extraction. Where possible, meta-analysis will be performed; otherwise, a narrative synthesis of findings will be presented.

Ethics and dissemination
Formal ethical approval is not required as no primary data will be collected. The results will be published in a peer-reviewed journal and presented at conference meetings, and will be used to inform future research directions.

PROSPERO registration number
CRD42021262771.

INTRODUCTION
Pregnancy entails profound maternal physiological and metabolic adaptations to accommodate the needs of the growing fetus and to prepare for lactation. An increase in insulin resistance of 50%-60% between prepregnancy and the late third trimester is a physiological change in every pregnancy (regardless of glucose tolerance) and is essential to prioritise the delivery of glucose across the placenta for fetal development. 1 This is paralleled, in a normal pregnancy, by adaptive changes in the islets of the maternal endocrine pancreas to allow increasing insulin synthesis and secretion, including an increased beta-cell mass. Overall, this results in maintenance of maternal glucose homeostasis. 1 Gestational diabetes mellitus (GDM) may develop when there is failure to balance insulin secretion with the composite of prepregnancy and pregnancy-induced insulin resistance, and is an increasingly prevalent condition (affecting between 2% and 38% of pregnant women worldwide). 2 GDM is associated with multiple adverse maternal and fetal outcomes, including macrosomia, pre-eclampsia and gestational hypertension, polyhydramnios, stillbirth and neonatal hypoglycaemia, as well as an increased lifetime risk of obesity and dysglycaemia in the offspring. 3
In women with pre-existing diabetes mellitus (type 1 or type 2), superimposed pregnancy-induced insulin resistance exacerbates established pregestational insulin resistance and/or deficiency, with similar potential complications.

Strengths and limitations of this study
► Novel and relevant research area linking lactation hormones to maternal metabolic health, with particular relevance to pregnancies affected by obesity and/or diabetes.
► Protocol is for the first systematic review in this area.
► Employs rigorous, standardised methodology and will involve an exhaustive literature search and quality appraisal.
► Limitations include the anticipated heterogeneity in study designs, most of which will likely be observational in nature and hence unable to establish causality.

Lactogenic hormones, chiefly human placental lactogen (hPL) and prolactin (PRL), are well recognised for their roles in the antenatal preparation of the breast for lactation, and, in the case of PRL, in establishing and maintaining lactation after delivery. However, these hormones also have central roles in maternal metabolism: during gestation, both contribute to insulin resistance but are also likely to act as stimuli for the adaptation of maternal pancreatic islet function. Postpartum, the hormonal control of lactation (primarily mediated by PRL) may fundamentally alter carbohydrate and lipid metabolism and adipocyte biology, guarding lactating postpartum women against progression to type 2 diabetes. 4

Human placental lactogen (hPL) is a peptide hormone produced by the placenta. It is detectable as early as 6 weeks' gestation and increases across gestation, peaking at around 30 weeks. The secretion rate of hPL near term is about 1 g/day (a rate considerably greater than that of any other protein hormone), 5 and the peak concentration of hPL is at least 25-fold that of PRL. 4 hPL binds with high affinity to the PRL receptor and is increasingly recognised as playing a major role in the modulation of maternal metabolism to meet the energy requirements of the growing fetus. 6 It is also involved in lactogenesis I (secretory initiation), supporting alveolar and ductal growth in the breast in preparation for milk production. 5 As one of the major 'diabetogenic' hormones of pregnancy (alongside placental growth hormone, progesterone, cortisol and PRL), hPL increases maternal insulin resistance and reduces maternal glucose utilisation, elevating maternal blood glucose levels (supporting transplacental glucose transfer and adequate fetal nutrition). 4 However, this appears to be matched by parallel upregulation of insulin secretory capacity. In rodent models, placental lactogens significantly increase glucose-induced insulin secretion, beta-cell proliferation and survival in isolated pancreatic islets. [7][8][9] In humans, in vitro evidence using human islet cell tissue suggests that hPL also acts (likely via the PRL receptor) on the endocrine pancreas to promote maternal beta-cell function, enhancing insulin synthesis and glucose-stimulated insulin secretion. 9 The net effect of this is, in a healthy pregnancy, maintenance of maternal normoglycaemia. hPL also increases lipolysis and release of free fatty acids (FFAs). With maternal fasting, hPL release increases the availability of FFAs to the mother for use as fuel, sparing glucose and amino acids for placental transport and fetal nutrition.
10 hPL is also likely to play a role in inducing and maintaining the state of physiological hyperleptinaemia, yet relative leptin resistance, seen in pregnancy, which provides a maternal appetite stimulus even with increasing adipose deposition. 4 hPL (and PRL) also seems to increase appetite and food intake via other mechanisms, with widespread distribution of PRL receptors in the hypothalamus and induction of hyperphagia after intracerebroventricular administration, suggesting a central mode of action. 11 Being placentally derived, hPL is also positively correlated with birth weight and placental mass, with potential clinical application in the antenatal prediction of macrosomia and/or fetal growth restriction in both metabolically normal and abnormal pregnancies. 12

Prolactin (PRL) is a peptide hormone produced by lactotrophs in the anterior pituitary gland and has close structural homology to hPL. Basal serum PRL increases progressively during normal pregnancy, with peak values in late gestation approximately 10-fold higher than preconception. 4 While best known for its lactogenic effect on the female mammary gland, PRL also alters insulin sensitivity and lipid metabolism. PRL may induce insulin resistance outside of pregnancy (as demonstrated in non-pregnant patients with prolactinoma and pathological PRL elevation) 13 and, like hPL, is likely to contribute to the insulin-resistant state of pregnancy, ensuring the availability of glucose for the fetal-placental compartment. However, the physiological contribution of PRL to glucose tolerance in pregnancy and postpartum is thought to differ from other states of relative or absolute hyperprolactinaemia. 4 In vitro evidence suggests that PRL (like hPL) can directly enhance insulin secretion from human islets, although the latter hormone may have the dominant effect during human pregnancy due to its higher concentrations. 9 It is worth noting that rodent evidence for the effect of PRL on maternal beta-cell function during pregnancy is striking: knockout mice specifically lacking PRL receptors on pancreatic beta cells have normal glucose tolerance outside of pregnancy but become progressively glucose intolerant with gestation due to a corresponding failure of beta-cell proliferation, essentially developing GDM. 14 15

Postpartum, physiological hyperprolactinaemia is the key endocrine change responsible for the initiation and maintenance of lactation. PRL concentrations during lactation are intermediate between those in the non-pregnant state and those in late pregnancy, and the pulsatile nature of secretion (lost during pregnancy) is restored. PRL surges occur following nursing, and peaks are higher in women who exclusively breast feed their infants than in those who supplement with formula or only feed formula. In women who do not breast feed, PRL falls to non-pregnant concentrations within 3 weeks postpartum. 4 Lactation, under the chief control of PRL, is a unique metabolic state associated with an elevation of plasma FFAs and with the mobilisation of lipids from diet and adipose stores to the breast for milk production. Observational evidence suggests that lactation is associated with maternal metabolic benefits, with consistent findings of lower rates of persistent postpartum dysglycaemia and progression to type 2 diabetes in women who breast feed compared with those who do not (both in the general population 16 and following GDM pregnancy 17).
As such, PRL may link effective and sustained lactogenesis to improved maternal metabolic status postpartum. Whether this is primarily mediated by improved insulin secretory capacity or reduced insulin resistance remains unclear, as there are putative biological mechanisms for both in the postpartum context. 4 18 19 Regardless, lactation may present a particular window of opportunity for women with postpartum insulin resistance (relevant to many women following a GDM pregnancy) to significantly improve long-term health outcomes by improving insulin secretion and/or sensitivity. Indeed, some authors have argued that lactation (quite apart from its other benefits to mother and offspring) may be seen as a therapeutic intervention in this patient cohort, analogous to the prescription of an insulin-sensitising medication. 4 It is also increasingly apparent that the relationship between impaired glucose/insulin metabolism and poor lactation outcomes may be bidirectional. While lactation outcomes are not the focus of this review, women with obesity and/or diabetes are at increased risk of lactogenesis delay and persistent poor milk supply, 20 21 reasons for which may include a suboptimal PRL response to infant suckling 22 and impaired insulin-receptor dynamics at the level of the lactocyte. 23 Authors linking PRL to glucose dynamics during lactation have suggested that 'good beta-cell plasticity' in metabolically healthy women may exert a permissive effect on lactation, allowing PRL to play its primary evolutionary role. 18 As such, the women who stand to benefit most from the metabolic benefits of sustained lactation may face the most barriers to achieving it. A more complete understanding of lactogenic hormone action, as well as how it is altered in metabolically abnormal pregnancies, is essential to promote and support lactation in this population. Narrative reviews (which constitute the majority of the existing work in this area and have produced many of the current mechanistic hypotheses) are often incomplete or reach subjective conclusions. Systematic reviews focused on key physiological questions are uncommon in the contemporary endocrine literature but provide an opportunity to move toward extensive synthesis with objective, evidence-based conclusions. This review aims to systematically examine the relationship between hPL and PRL and maternal metabolism in pregnancy and postpartum, particularly in relation to common gestational metabolic conditions, as well as the association between hPL and PRL and key fetal outcomes. It also aims to provide mechanistic insights and to examine the clinical implications of these findings from both diagnostic and therapeutic perspectives.

SYSTEMATIC REVIEW QUESTION
In pregnant women (participants), what is the relationship between hPL/PRL levels (exposures) and:
► Maternal gestational metabolic status/outcomes?
► Relevant fetal outcomes?
► Maternal metabolic outcomes up to 12 months postpartum?

METHODS/DESIGN
Rigorous international gold-standard methodology will be adopted in this review, which will conform to the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines. 24 This review has been registered with the International Prospective Register of Systematic Reviews (PROSPERO). We used the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols checklist when writing this protocol paper. 25
Any future amendments to this protocol will be reported on PROSPERO and published with the results of the review.

Eligibility criteria
Selection criteria using a modified version of the participant, exposure, comparison, outcome and study type framework 26 (table 1), established a priori, will be used to determine the eligibility of articles to include in this review. All articles published prior to 8 July 2021 will be eligible, but only articles with full text available in English will be included. It should be noted that the review aims to elucidate the relationship between maternal serum hPL/PRL levels and metabolic/fetal conditions/outcomes without assuming causality or directionality. The designation of hPL and PRL levels as 'exposure' and the listed outcomes as 'outcomes' is somewhat arbitrary and may not apply to all studies: some may work in the opposite direction. For example, studies that enrol women with pre-existing diabetes or GDM (relevant metabolic exposure) and look at PRL and hPL levels across gestation (outcome) would warrant inclusion. It is acknowledged by the reviewers that the relationship between lactogenic hormones and maternal metabolism is likely bidirectional, and the inclusion criteria will reflect this.

Search strategy
A systematic search strategy using relevant search terms, in accordance with the selection criteria (table 1), has been developed (see online supplemental material 1) in consultation with expert subject librarians. A combination of keywords and database-specific subject headings will be used. The following electronic databases will be searched:
► MEDLINE via OVID.
► MEDLINE ePub ahead of print, in-process, in-data-review and other non-indexed citations via OVID.
► CINAHL.
► Embase.
Bibliographies of relevant studies identified by the search strategy and relevant reviews/meta-analyses will also be manually searched for identification of additional eligible studies. Given that we intend to conduct an in-depth synthesis of a large body of research spanning several decades, only peer-reviewed published data with all results available will be considered eligible for inclusion (conference abstracts will be excluded, and grey literature will not be searched).

Inclusion of studies
References will be screened and managed using EndNote X9 and Covidence software. Two reviewers will scan the titles, abstracts and keywords of every record retrieved by the search strategy, assessing eligibility according to the inclusion and exclusion criteria in table 1 (and in consultation with a third reviewer where required). A pilot test of the selection criteria will be conducted on 20-30 article titles and abstracts in order to refine and clarify the criteria prior to the formal commencement of screening. If initial information suggests that an article meets the selection criteria for eligibility, the full text will be retrieved for further assessment by two reviewers. Disagreement between reviewers as to whether a study meets the inclusion criteria will be resolved by discussion, with referral to a third reviewer if consensus cannot be reached. Studies excluded based on full-text review will be tabulated, along with reasons for their exclusion. Following PRISMA guidelines, 24 a flow diagram will be created to illustrate the selection process.
†Regarding classification of diabetes type: include studies referring clearly to type 1 or type 2 diabetes, or gestational diabetes or impaired glucose tolerance; include studies which refer to 'insulin-dependent', 'juvenile-onset' or 'insulin-requiring' diabetes (inside or outside of pregnancy) only if the supporting data clearly suggest type 1 diabetes; exclude studies which refer to 'diabetic' pregnancies, 'diabetes', 'chemical diabetes', or 'DM' in pregnancy without further definition, or 'pregestational' diabetes without further definition, or 'insulin-treated' diabetes without further clarification; exclude studies which define diabetes only according to White's classification (A/B/C/D) for diabetes in pregnancy. If one group within a study is considered adequately defined and another inadequately defined, include the study but only extract data for the groups meeting definition requirements.
ART, assisted reproductive technologies; DM, diabetes mellitus; FGR, fetal growth restriction; GDM, gestational diabetes mellitus; hPL, human placental lactogen; IGT, impaired glucose tolerance; OGTT, oral glucose tolerance test; PECOT, participant, exposure, comparison, outcome and study type; PRL, prolactin.

Quality appraisal of the evidence
Methodological quality of the included studies will be assessed by two independent reviewers using criteria established a priori, outlined in the Monash Centre for Health Research and Implementation Evidence Synthesis Programme critical appraisal template 27 (see online supplemental material 2). Individual quality items will be investigated using a descriptive component approach. Assessment will be based on criteria relating to external validity (population, setting, clarity of study objectives, inclusion and exclusion criteria, appropriateness of study design and follow-up) and internal validity (selection, performance and detection bias, attrition, exposure and outcome measurement, reporting bias and potential confounders). Other domains for assessment will include potential conflicts of interest, study power and appropriateness/quality of statistical methodology. Any disagreement or uncertainty will be resolved by discussion among review authors. Using this approach, we will allocate a risk of bias rating for each study.

Data extraction
Data will be extracted from all included studies by two independent reviewers using a specifically developed data extraction form. Pilot testing of the form will be conducted using three to five studies of different formats to ensure all required data are captured, particularly given the anticipated heterogeneity in study design. Key anticipated domains for extraction are shown in table 2. Relevant data which are not reported in published studies will be requested from corresponding authors.

Statistical analysis
Analysis for the two lactogenic hormones of interest, hPL and PRL, will be undertaken separately. Key exposure/outcome associations for each hormone will be determined based on the number of studies available. It is anticipated that hPL will be analysed primarily in relation to maternal metabolic/glycaemic status during pregnancy and to fetal outcomes (birth weight, macrosomia, growth restriction and placental mass) in pregnancies affected by diabetes. For PRL, it is anticipated that key outcomes will be maternal metabolic/glycaemic status and related maternal metabolic indices (measures of insulin secretion, sensitivity and beta-cell function) during both pregnancy and postpartum.
After data extraction, the reviewers will determine whether meta-analysis is appropriate (based on the number of studies for each hormone/outcome relationship and the heterogeneity of their designs and participant groups). If meta-analysis is possible, Review Manager statistical software will be used for analysis, with random effects models employed to generate weighted mean differences. Statistical heterogeneity will be assessed using the I² test, with I² values of >50% indicating moderate to high heterogeneity. Sensitivity analyses will be performed where applicable to explore the effects of studies with high risk of bias on the overall results. Subgroup analyses will also be performed where possible (eg, by type of diabetes). Where meta-analysis is not possible, a narrative synthesis of results will be performed. Data will be presented in summary tables and in narrative format to describe the populations, exposures and key outcomes of the included studies. Forest plots and funnel plots will be used to present results from meta-analyses (where applicable) and publication bias assessments, respectively. Meta-analysis results will be reported according to PRISMA guidelines. 24
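Although the protocol specifies Review Manager for any pooling, the heterogeneity and random-effects calculations described above are simple enough to sketch directly. The following Python snippet is a minimal, illustrative implementation of DerSimonian-Laird random-effects pooling and the I² statistic; the study effects and variances passed to it are fabricated placeholders, not data from any included study.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study-level effects
    (e.g., mean differences). Returns the pooled estimate, its standard
    error, and the I^2 heterogeneity statistic (%)."""
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)       # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)            # fixed-effect estimate
    q = np.sum(w * (effects - fixed) ** 2)             # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                      # between-study variance
    w_star = 1.0 / (1.0 / w + tau2)                    # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, se, i2

# Fabricated example: three studies reporting mean differences and variances.
pooled, se, i2 = random_effects_pool([0.40, 0.10, 0.60], [0.04, 0.09, 0.05])
print(f"pooled = {pooled:.2f} +/- {1.96 * se:.2f}, I^2 = {i2:.0f}%")
```

By the >50% rule stated above, an I² above 50 would flag moderate to high heterogeneity and prompt the planned subgroup and sensitivity analyses.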
Ethics and dissemination
This project will collate aggregate data from published studies (or aggregate data provided by study investigators on request), and thus ethical approval will not be required. Findings will be disseminated via publications in peer-reviewed journals and presentations at scientific meetings. If deemed appropriate, findings will also be communicated to relevant stakeholders to guide clinical practice and public health actions in this area.

Data availability statement
No data have been generated or analysed in this paper.

Patient and public involvement statement
It was not feasible or appropriate to involve patients or members of the public in the design, planning or conduct of the planned research.

DISCUSSION
The proposed review will be the first, to our knowledge, to systematically collate and synthesise the existing scientific literature linking two key lactogenic hormones, hPL and PRL, to maternal metabolic health in pregnancy and postpartum (and, by extension, to infant outcomes). Systematic reviews which evaluate biomarkers or aim to explore physiological questions are rare in the endocrine literature, and represent an opportunity to move beyond subjective, narrative work towards inclusive, extensive reviews with the potential for objective and evidence-based conclusions. While these hormones have long been recognised for their roles in the antenatal preparation of the breast for lactation and (in the case of PRL) for the postnatal initiation and maintenance of lactation, their metabolic roles have been relatively underappreciated. Both hormones contribute to the insulin resistance associated with the pregnant state, but also potentially have central roles in the adaptation of the maternal pancreas during gestation, stimulating beta-cell adaptation and increasing beta-cell mass and insulin secretion. 1 9 During a normal pregnancy, this may allow compensation for pregnancy-induced insulin resistance, resulting in overall maintenance of euglycaemia.

[Table 2, continued (key data extraction domains): relationship of the above outcomes to hPL/PRL levels (as t-test result, OR, regression coefficient, etc), unadjusted and after adjustment (with the list of covariates included in the models); conclusions regarding the aforementioned; key infant metabolic outcomes* of interest for pregnancies affected by GDM or pre-existing diabetes: birth weight (absolute/centiles), macrosomia, growth restriction, placental mass; key infant outcomes assessed (from list); relationship of said outcomes to hPL/PRL levels, unadjusted and after adjustment; conclusions regarding the aforementioned. *Due to the likely bidirectional nature of the lactogenic hormone/maternal metabolism relationship, some studies will consider hPL/PRL as 'exposure' and a metabolic parameter (eg, postpartum glucose tolerance) as 'outcome'; others may consider a metabolic parameter (eg, maternal pregestational diabetes) as exposure, with hPL/PRL levels during pregnancy, in comparison to healthy controls, as outcome; the extraction template will accommodate both. BMI, body mass index; GDM, gestational diabetes mellitus; HDL, high-density lipoprotein; hPL, human placental lactogen; LDL, low-density lipoprotein; OGTT, oral glucose tolerance test; PRL, prolactin; T1DM, type 1 diabetes mellitus; T2DM, type 2 diabetes mellitus.]

Despite likely playing a key role in the regulation of glucose and insulin dynamics during pregnancy, the relationship between hPL levels and the pathophysiology of GDM remains unclear. Several studies have investigated possible links, with some reporting no association between maternal hPL levels and GDM status [28][29][30][31] and others reporting higher hPL in GDM subjects than controls, 32 33 particularly if insulin-treated. 34 For hPL levels during pregnancies affected by pre-existing diabetes (type 1 diabetes mellitus (T1DM)/type 2 diabetes mellitus), the majority of authors report serially higher hPL throughout gestation in diabetic women compared with controls, 29 32 35-37 although other studies in T1DM have shown lower levels in the setting of poor control. 38 Furthermore, higher hPL levels are clearly related to increased placental weight and macrosomia, and several authors have suggested that increased levels of hPL in many diabetic pregnancies may simply reflect higher placental mass. 4 32 35 This does not mean that hPL is aetiologically unimportant, as it is possible that the placentomegaly seen in maternal diabetes causes higher hPL levels, stimulating maternal and fetal beta-cell expansion and increasing fetal insulin production, thus promoting glycogenesis, fat deposition and further fetal growth. 6 Importantly, however, this area of the literature is particularly dated, with many studies performed well prior to the 21st century and prior to contemporary diagnostic definitions of diabetes in pregnancy. As such, the exact type of maternal diabetes among study participants is often unclear (they are simply deemed to be 'diabetic', are defined according to the now-historical White classification of diabetes in pregnancy 39 or are termed 'insulin-dependent'). 30 32 35-38 Such studies provided valuable basic insights into the pathophysiology of the lactogen-maternal metabolism relationship, but comparison to the available better-described contemporary cohorts 28 is not possible. In this systematic review, a sufficiently clear definition of diabetes type (or adequate detail for this to be confidently deduced) is thus mandated for inclusion, as we believe this is a minimum requirement if our review findings are to be applicable to modern obstetric populations.
Acknowledging these challenges, a better understanding of the role of hPL in metabolically abnormal pregnancies has potential clinical application. For example, accurate antenatal prediction of fetal macrosomia remains challenging, and current macrosomia prediction strategies (including physical examination and ultrasound assessment) are both resource-intensive and imprecise. There is thus a clear requirement for maternal serum biomarkers in improving antenatal macrosomia prediction, particularly in women at high risk of the outcome (such as those with pregestational diabetes or GDM). While several candidate maternal biomarkers have been assessed for their association with birth weight or macrosomia (both in diabetic and non-diabetic pregnancies), evidence is mixed and uncertainties around clinical utility persist. 40 hPL (which was used clinically in some settings to assess the well-being of the fetoplacental unit in the 1970s and 1980s, prior to the widespread availability of obstetric ultrasound) 41 has recently been largely overlooked as a candidate biomarker in this capacity, but previous work suggests it may have significant potential if revisited. For instance, one 1998 study measured hPL at the time of GDM screening (n=257) and found that among the subset of women with a normal glucose challenge test but whose infants ultimately weighed >4000 g (n=11), mean hPL at the time of testing had in fact been similar to the mean hPL found in women with GDM. 42 This suggests that hPL may warrant evaluation as a biomarker for macrosomia prediction, both in women with diagnosed diabetes and those without. Such an application would require the marker to be validated in modern cohorts where the underlying aetiology of maternal diabetes was adequately understood and described. Unlike hPL (which, as a placentally derived hormone, is washed from the circulation following delivery), PRL has probable influence in maternal metabolism during both pregnancy and postpartum, particularly if lactation ensues. The literature here is similarly conflicting. For example, maternal serum PRL levels during GDM pregnancy have been examined by several groups, with the majority reporting levels similar to those of normal pregnancies. 28 31 43 However, more recent studies have directly contradicted this. Two groups have shown that higher PRL levels in the first 44 and third 45 trimesters of pregnancy are associated with reduced glucose tolerance on OGTT, with both groups suggesting that PRL may be independently involved in GDM pathogenesis. A third study has demonstrated an opposite result, showing an inverse association between third-trimester PRL and GDM risk. 46 This lack of consensus highlights the need for effective evidence synthesis followed by further research. Postpartum, lactation (under the chief control of PRL) appears to confer maternal metabolic benefits, but the mechanism by which this occurs is unclear. One group found that maternal serum PRL in late pregnancy was significantly higher in women who progressed to normal glucose tolerance postpartum than in those who progressed to postpartum pre-diabetes/diabetes; and that higher antepartum PRL independently predicted improved postpartum insulin secretion capacity. That group suggested that these findings may reflect a postpartum extension of the beneficial effects of PRL on beta-cell mass and islet adaptation that are thought to occur during gestation. 
Another group which measured PRL postpartum presented different findings and discussion: women with higher circulating PRL in the context of lactation in their study had reduced beta-cell function and lower insulin secretion indices but were less insulin resistant. 18 Authors have suggested that this improvement in insulin resistance may result from the mobilisation of muscle and liver lipids into breast milk under the control of PRL, 4 an action that may be particularly beneficial in women who are insulin resistant at baseline (women with recent GDM are known to have increased intramyocellular lipid content at 4-6 months postdelivery compared with controls). 47

There is thus a clear need for a systematic review of the literature in this field: both lactogenic hormones clearly have central roles in the regulation of maternal metabolism (both during pregnancy and postpartum, and for women with normal and abnormal pregnancies). However, to date, the evidence has not, to our knowledge, been effectively synthesised.

Some limitations of the review process should be noted. First, owing to the intentionally broad scope of the review, included studies will be heterogeneous in their design, methodology and research questions. In the analysis phase, hPL and PRL will thus be considered separately and studies will be grouped according to similar outcomes, but it is possible that marked heterogeneity will preclude meaningful conclusions and/or statistical meta-analysis. Second, some of the basic clinical work on hPL and PRL levels in normal and diabetic pregnancies is now very dated, extending back to the 1970s and 1980s. While robust and worthy of inclusion, differences in experimental design and (in particular) the classification and treatment of maternal diabetes will present challenges when comparing such studies to modern cohorts. As such, clear requirements for the adequacy of maternal diabetes definitions have been stipulated in our inclusion and exclusion criteria. Where possible, we will endeavour to conduct a subgroup analysis by publication year range or otherwise perform a narrative comparison between older and newer studies. We will also extract and tabulate variables such as the exact GDM diagnostic criteria used and the assay methodology employed in each case, as such details are likely to vary according to era of publication (in particular, many older studies involve the routine use of radioimmunoassay, now largely superseded by modern enzyme-linked immunoassay techniques). Finally, as previously described, the relationship between lactogenic hormones and maternal metabolism is almost certainly bidirectional, whereby some studies examine the effects of lactogenic hormones (exposure) on metabolic conditions (outcome), while in others, exposure and outcome are reversed. The review is designed to capture both, but, particularly in the postpartum context, the bidirectional nature of the relationship can bias observational studies. While this cannot be directly addressed in our review methodology, it will be acknowledged in the synthesis and interpretation of the findings.

CONCLUSION
In summary, this systematic review will rigorously and systematically collate and synthesise current evidence linking the key lactogenic hormones hPL and PRL to maternal metabolic health in pregnancy and postpartum. Both hormones have key roles in the maintenance of glucose homeostasis during pregnancy, including direct actions on the beta cells of the maternal endocrine pancreas.
However, the exact roles of these hormones, particularly in metabolically abnormal pregnancies, remain unclear, and evidence is conflicting. Further, hPL may have untapped potential clinical application in the antenatal prediction of macrosomia, while lactation, under the hormonal control of PRL, may regulate glucose and lipid metabolism and help to guard postpartum women against persistent dysglycaemia. Through this review process, the available scientific evidence will be synthesised to clarify these relationships and to inform future research in the field of maternal metabolic and endocrine health.

Contributors
KLR was the project lead, conceptualised and designed the protocol, wrote the first draft of the paper, and coordinated and conducted the systematic review process along with co-reviewer RG. AMM has contributed to the design of the search strategy and provided support with evidence synthesis. AM, AJ and HJT reviewed and edited the paper, and provided oversight and supervision for the systematic review process. All authors contributed substantial intellectual input to the paper in line with ICMJE criteria for authorship and approved the final version for publication.

Funding
The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests
None declared.

Patient consent for publication
Not applicable.

Provenance and peer review
Not commissioned; externally peer reviewed.

Supplemental material
This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Low-Cost Fiber Chopped Strand Mat Composites for Compressive Stress and Strain Enhancement of Concrete Made with Brick Waste Aggregates

Given the excessive demolition of structures each year, the issues related to the generated structural waste are striking. Bricks, being a major constituent in the construction industry, also hold a significant proportion of the construction waste generated annually. The reuse of this brick waste in new constructions is an optimal solution considering cost-effectiveness and sustainability. However, the problems related to the substandard peak stress and ultimate strain of concrete constructed with recycled brick aggregates (CRAs) limit its use to non-structural applications. The present study intends to improve the unsatisfactory mechanical characteristics of CRAs by utilizing low-cost glass fiber chopped strand mat (FCSM) sheets. The efficacy of FCSM sheets was assessed by wrapping them around CRA specimens constructed with different concrete strengths. A remarkable increase in the peak compressive stress and the ultimate strain of the CRA specimens was observed. For low, medium, and high strength CRAs, the ultimate strain improved by up to 320%, 308%, and 294%, respectively, as compared to the respective control specimens. Several existing analytical models were utilized to predict the peak compressive stress and ultimate strain of the CRAs strengthened using FCSM sheets. None of the considered models reproduced experimental results accurately. Therefore, equations were formulated using regression predicting the peak stress and ultimate strain of the CRAs confined with FCSM sheets. The predicted values were found to correlate well with the experimental values.

Introduction
The rapid urbanization and the consequent demolition of existing buildings have raised some serious concerns regarding the proper and safe disposal of construction waste. The risk of an increased carbon footprint looms if proper and adequate measures are not taken regarding the generated construction waste. One possible solution to prevent the costs related to the disposal of construction waste is to reuse it. This not only increases the economic feasibility of the project but also lowers the demand for rapidly depleting natural resources. The possibility of using existing compressive stress-strain analytical models for concrete externally confined with FRP in predicting the mechanical characteristics of CRA is also investigated. For this purpose, this study presents experimental findings of the monotonic compression tests applied to concrete constructed with CRA and externally confined with low-cost FCSM wraps. Three concrete strengths were considered, and eight rectilinear specimens were tested for each concrete strength. For each concrete strength, two, three, and four wraps of FCSM were applied.

Test Matrix
Twenty-four specimens were constructed and tested in this study. Specimens were categorized into three groups depending on the concrete strength (see Table 1). The design strength of the concrete in groups 1, 2, and 3 was 15, 20, and 25 MPa, respectively. Specimens in each group were of four types, and two specimens belonged to each particular type to assess the consistency of the results. The first type comprised two control specimens, the second type comprised two specimens strengthened using two wraps of FCSM confinement, the third type was strengthened using three wraps, and the fourth type was strengthened using four FCSM wraps.
The notation for each specimen recognized its concrete strength, the presence of FCSM sheets, and the number of their wraps. The first part corresponded to the 15, 20, or 25 MPa concrete strength. The second part was either CON or FCSM, corresponding to the control or strengthened specimens, respectively. The last part described the number of FCSM layers. For instance, 20-FCSM-2L represented a specimen constructed with 20 MPa concrete strength and strengthened using two layers of FCSM wraps.

Table 1. Test matrix.
Specimen | Design strength (MPa) | FCSM wraps | Number of specimens
15-CON | 15 | None | 2
15-FCSM-2L | 15 | 2 | 2
15-FCSM-3L | 15 | 3 | 2
15-FCSM-4L | 15 | 4 | 2
20-CON | 20 | None | 2
20-FCSM-2L | 20 | 2 | 2
20-FCSM-3L | 20 | 3 | 2
20-FCSM-4L | 20 | 4 | 2
25-CON | 25 | None | 2
25-FCSM-2L | 25 | 2 | 2
25-FCSM-3L | 25 | 3 | 2
25-FCSM-4L | 25 | 4 | 2

Material Properties
Aggregates were recycled by crushing solid clay bricks (see Figure 1a) using a brick crushing machine, as shown in Figure 1b. Screening of the crushed bricks was performed, resulting in brick aggregates with sizes from 5 mm to 20 mm. The recommendations of ASTM C1314-21 and ASTM C140/C140M-22a [43,44] were used to measure the mechanical characteristics of the bricks, such as water absorption, density, and compressive capacity. The density of the bricks was estimated at 120 kg/m3, the compressive capacity at 3.14 MPa, and the water absorption at 23.27%. Concrete was prepared by substituting 50% of the natural coarse aggregates with recycled brick aggregates. The mix proportions of the concrete for the three design strengths are presented in Table 2. In this study, the FCSM sheet comprised a non-woven glass fiber mat manufactured by randomly spreading a continuous filament roving, 50 mm in length, in combination with a polyester binder (Figure 2). The density of the FCSM sheet was 600 g/m2. The thickness of the FCSM sheet was 0.5 mm and the width of the FCSM roll was 1.0 m. The mechanical properties of the FCSM wraps were estimated by following the recommendations of ASTM D3039M-08 [45]. The ultimate tensile strength and modulus of elasticity of the FCSM composite sheet were estimated as 180 MPa and 7470 MPa, respectively.

Typical Specimen Details, Fabrication, and Strengthening Process
In this study, rectilinear concrete specimens of dimensions of 150 mm × 150 mm × 300 mm were constructed, as shown in Figure 3. The sharp corners were rounded off to a 13 mm radius in accordance with ACI 440.2R-17 [46] to improve the efficiency of the FCSM wraps by reducing the stress concentrations near the sharp corners. All specimens were constructed in laboratory environments. Steel molds were prepared to cast the specimens, as shown in Figure 4a. Concrete pouring was performed in three equal layers. Each individual concrete layer was compacted using vibration tables to achieve uniform compaction. Steel molds were taken off following one day of casting, whereas the curing of the specimens was maintained for 28 days. Each specimen was strengthened after the complete curing of 28 days. Specimens were prepared by thoroughly cleaning their surfaces using cloth, and rough patches were removed before the application of the FCSM wraps. Further, a brush was used to apply epoxy, and then a roller was used to remove the entrapped air between the concrete surface (see Figure 4b) and the FCSM composite. For the next layer, the surface was thoroughly soaked with resin, followed by the application of the FCSM wrap, as shown in Figure 4c. FCSM sheets were tightened during their application to ensure uniform contact with the concrete surface. An analogous process was performed to attach the subsequent FCSM wraps.
Typical FCSM strengthened specimens are shown in Figure 4d. The interfacial interactions between the concrete and the FCSM, as well as between FCSM layers, were assumed to be perfectly bonded because the concrete surface and/or the FCSM were thoroughly soaked with the resin prior to the next layer of the FCSM sheet.

Test Setup and Instrumentation
A universal testing machine (UTM) with a 1000 kN capacity was utilized to apply a compressive monotonic load. The end surfaces of each specimen were properly cleaned and smoothened prior to the testing. Steel plates were attached above and beneath the specimen to guarantee a uniform load application. A load cell with a 500 kN capacity was utilized to measure the load intensity, whereas a logger was used to record the measured data. Two linear variable displacement transducers (LVDTs) were employed to measure the compressive shortening of the specimens (see Figure 5).

Failure Modes
The failure types of specimens in each group are shown in Figure 6. Specimen 15-CON failed due to the splitting and crushing of the concrete. The crushing was concentrated within its upper half. Specimen 15-FCSM-2L exhibited a delayed and less brittle failure as compared to Specimen 15-CON (further discussion of this delayed behavior is provided in Section 3.3). The failure of Specimen 15-FCSM-2L accompanied the tearing of the FCSM wraps in the hoop direction, whereas the rupture was mainly concentrated near the corners. This indicates that the 13 mm corner radius was insufficient to mitigate the stress concentrations completely. Specimen 15-FCSM-3L also failed due to the rupture of the FCSM wraps near the corners. However, the concrete crushing was less than that of Specimen 15-FCSM-2L, and the failure mode was less brittle as well. Finally, Specimen 15-FCSM-4L exhibited the least brittle failure among the group 1 specimens, and the least concrete crushing was observed. However, the rupture of the FCSM wraps was still concentrated in the corners. Specimen 20-CON failed in a brittle manner similar to Specimen 15-CON. However, the crushing and splitting of the concrete were detected along its full height. The failure of the strengthened specimens in group 2 (i.e., with a 20 MPa designed concrete strength) also accompanied the rupture of the FCSM wraps. However, this rupture was observed in the center of the vertical sides. This suggests that the 13 mm corner radius was sufficient in the higher strength concrete. The ultimate failure modes of the group 3 specimens were similar to those in group 2, as shown in Figure 6.

Peak Stress and Ultimate Strain
The experimental peak compressive stresses and ultimate strains are presented in Table 3. The increase in the peak compressive stress as a result of two, three, and four FCSM wraps in the first group was 61%, 98%, and 140%, respectively. The increase in the ultimate strain was 188%, 270%, and 320%, respectively, for the same specimens. For the second group, two, three, and four FCSM wraps increased the peak compressive stress by 53%, 74%, and 102%, respectively, whereas the improvement in the ultimate strain was 163%, 255%, and 308%, respectively. Similarly, the increase in the peak compressive stress of the third group specimens as a result of two, three, and four FCSM wraps was 46%, 65%, and 83%, respectively, whereas the ultimate strain improved by 135%, 235%, and 294%, respectively.
Both the peak compressive stress and the ultimate strain were substantially increased due to FCSM confinement, regardless of the strength of concrete and the number of FCSM wraps. The effects of the concrete strength and the quantity of FCSM wraps on the efficacy of the confinement are discussed in the subsequent sections.

Compressive Stress-Strain Curves
Continuous recording of the compressive load and axial shortening was conducted using a data logger. The recorded compressive load was converted to the compressive stress using the cross-sectional area of the specimens, whereas the compressive shortening was converted to the strain using the height of the specimens. The measured compressive stress and strain curves of the group 1 specimens are illustrated in Figure 7. The control Specimen 15-CON exhibited a typical stress versus strain response of unconfined concrete. A steep ascending branch was observed up to a peak value of about 16.0 MPa, followed by an abrupt drop due to the brittle failure. Specimen 15-FCSM-2L was able to sustain high ultimate strains, up to a value of 0.024. At this point, the sudden rupture of the FCSM wraps led to a drop in its compressive load capacity. The specimens strengthened with three and four FCSM wraps exhibited a bilinear stress-strain response with high ductility up to the ultimate strains of 0.0308 and 0.035, respectively. The stress versus strain graphs of the group 2 specimens are shown in Figure 8. The control Specimen 20-CON failed in a brittle manner, dropping its load capacity abruptly, and did not exhibit any ductility. All the strengthened specimens in group 2 demonstrated a bilinear response. Both the peak compressive stress and the sustained ultimate strain improved with the number of FCSM wraps. The ductility of the CRA was observed to increase with the number of FCSM wraps as well. Unlike Specimen 15-FCSM-2L, Specimen 20-FCSM-2L did not drop its capacity, which can be attributed to the higher unconfined concrete strength in group 2. Finally, the stress versus strain graphs of the group 3 specimens are presented in Figure 9. The stress-strain response was similar to those of the specimens in group 2. The strengthened specimens depicted a bilinear response, whereas the control Specimen 25-CON failed abruptly. It is clear that the FCSM sheets provided sufficient compressive ductility to the CRA. For the 25 MPa concrete strength, two FCSM wraps enhanced the peak load up to a certain strain level only. Apart from Specimen 15-FCSM-2L, all the confined specimens exhibited bilinear stress versus strain behavior.
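The load-to-stress and shortening-to-strain conversion described above is straightforward to reproduce. The Python sketch below assumes the specimen geometry reported earlier (150 mm × 150 mm × 300 mm prisms with 13 mm rounded corners) and, as one plausible choice not stated explicitly in the text, discounts the rounded corners from the cross-sectional area; the logged readings are fabricated examples.

```python
import math

# Geometry from the paper: 150 x 150 x 300 mm prisms, 13 mm corner radius.
B, D_SEC, H, RC = 150.0, 150.0, 300.0, 13.0            # mm
AREA = B * D_SEC - (4.0 - math.pi) * RC ** 2           # mm^2, corners discounted

def to_stress_strain(loads_kn, shortenings_mm):
    """Convert logged compressive loads (kN) and axial shortenings (mm)
    into stress (MPa) and strain (mm/mm)."""
    stresses = [1000.0 * p / AREA for p in loads_kn]   # kN -> N, divided by mm^2
    strains = [d / H for d in shortenings_mm]
    return stresses, strains

# Fabricated readings for illustration only:
stress, strain = to_stress_strain([100.0, 300.0, 550.0], [0.3, 1.5, 7.2])
for s, e in zip(stress, strain):
    print(f"{s:6.1f} MPa at strain {e:.4f}")
```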
Effect of the Number of FCSM Wraps and Concrete Strength
The effect of the number of FCSM wraps and concrete strength on the increase in the peak compressive stress is shown in Figure 10. It can be seen that by increasing the number of FCSM wraps, a clear improvement was detected in the peak compressive stress. For the low-strength specimens, this increase was 61%, 98%, and 140% for two, three, and four FCSM wraps, respectively. The corresponding increase in the peak stress for the group 2 specimens was found to be lower than that for the group 1 specimens. A further reduction in the increase in the peak compressive stress was observed for the group 3 specimens. As indicated in Figure 10, for two wraps of the FCSM, the increase in the peak stress for the medium and high-strength concrete was 8% and 15% lower than that of the low concrete strength specimens. This difference increased as the number of FCSM wraps increased to three, where the medium and high-strength specimens experienced an increase in the peak stress that was lower than that of the low-strength specimen by 24% and 33%, respectively. This difference further increased as the number of FCSM wraps increased to four, where the medium and high-strength specimens experienced a 38% and 57% lower increase in the peak stress as compared to the low-strength specimen. The impact of the strength of concrete and the number of FCSM wraps on the increase in the ultimate strain is shown in Figure 11. The maximum gain in the ultimate strain was detected for the low concrete strength specimens, irrespective of the number of FCSM wraps. This was followed by the medium and high concrete strength specimens, respectively. This observation is analogous to the one made for the peak stress improvement in Figure 10. However, the difference in the gain in the peak stress increased as the number of FCSM wraps increased (see Figure 10), whereas this difference decreased for the case of the ultimate strain (see Figure 11). For instance, the difference in the gain of the ultimate strain between the low and medium concrete strength specimens for two FCSM wraps was 25%, whereas this difference was reduced to 15% and 12% for the case of three and four FCSM wraps. In general, both the peak compressive stress and ultimate strain improved as the number of FCSM wraps increased, whereas this improvement was reduced as the unconfined concrete strength increased.

Existing Analytical Models
In this section, existing analytical models were evaluated in approximating the peak compressive stress and ultimate strain. To the authors' knowledge, no analytical models for FCSM confined concrete are available in the literature. However, several researchers have proposed confinement models for synthetic and natural fiber-reinforced polymer (FRP) confined concrete [25,47-55]. Since FCSM and FRP wraps exert confinement pressures mainly through their in-plane stiffness, it is assumed that existing analytical models can be applied to FCSM confined concrete. In the existing analytical studies, the general form of Equation (1) [56] is used to relate the peak compressive stress of strengthened concrete f_cc to the lateral pressure f_l applied by external wraps:

f_cc = f_co + k_1 f_l    (1)

where f_co is the unconfined compressive strength and the constant k_1 is proposed from the regression. From the equilibrium between the outward bursting pressure f_l under the compressive loads and the resulting forces f_t × t_f in the FCSM wraps shown in Figure 12, an expression for f_l can be derived in the form of Equation (2) [47,48,50-55]. This is the general form of the equation for confining pressure and has been extensively used in previous studies [47,48,50-55]:

f_l = 2 f_t n_f t_f / D    (2)

where D is the length of the diagonal of the rectilinear section given in Equation (3) [46], f_t is the ultimate tensile capacity of the FCSM sheet, t_f is the thickness of the FCSM sheet, n_f is the number of wraps, and ρ can be determined using Equation (4) [57]:

D = sqrt(b^2 + d^2)    (3)

where b and d are the cross-sectional sizes of the section defined in Figure 12, r_c is the corner radius, and A is the gross area defined in Equation (5) considering the corner radii:

A = b d - (4 - π) r_c^2    (5)

Several existing peak stress and ultimate strain models are described in Table 4.
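To make the confinement calculation concrete, the sketch below evaluates Equation (2) with the diagonal of Equation (3), using the FCSM properties reported earlier (f_t = 180 MPa, t_f = 0.5 mm) and the 150 mm square section. The k_1 = 4.1 used to illustrate Equation (1) is the classical Richart et al. constant, adopted here purely as an assumption; the paper fits its own coefficients by regression.

```python
import math

def confining_pressure(f_t, t_f, n_f, b, d):
    """Lateral confining pressure f_l (MPa) per Equation (2), with the
    section diagonal D from Equation (3)."""
    diag = math.hypot(b, d)                # D = sqrt(b^2 + d^2), in mm
    return 2.0 * f_t * n_f * t_f / diag    # f_t in MPa, t_f in mm

for n in (2, 3, 4):
    fl = confining_pressure(180.0, 0.5, n, 150.0, 150.0)
    # Equation (1) with an assumed k1 = 4.1 and f_co = 16 MPa (group 1):
    fcc = 16.0 + 4.1 * fl
    print(f"{n} wraps: f_l = {fl:.2f} MPa, f_cc = {fcc:.1f} MPa")
```

With these inputs the two-wrap prediction (about 23 MPa) falls short of the roughly 61% experimental gain over the 16 MPa control, which is consistent with the underestimation by existing models reported in the following section.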
The accuracy of the analytical models in Table 4 is evaluated by the mean value (referred to as the AVG) of the ratios of the predicted to the experimental peak compressive stresses and the corresponding standard deviations (STDs). The averages and standard deviations for the peak compressive stresses are presented in Table 5. The considered models underestimated the peak compressive stresses of the CRA confined with the FCSM. This is indicated by their AVG values of less than 1.0 in Table 5. However, it was observed that the AVG values increased for a particular model as the unconfined compressive strength increased. For instance, the model of Shehata et al. [25] resulted in AVG values of 0.56, 0.62, and 0.65 for groups 1, 2, and 3, respectively. The inconsistency of the existing models in approximating the peak compressive stress of the FCSM strengthened CRA with different unconfined concrete strengths suggests that there is a need for an analytical model that could consider the impact of the unconfined concrete strength without compromising the consistency of an accurate prediction of the peak compressive stresses. The existing models were also evaluated to predict the ultimate strain of the CRA confined with FCSM wraps. For the low-strength specimens in group 1, the models of ACI-440.2R-17 [46] and Lam and Teng [51] produced AVG values of 0.81 and 1.02, respectively. The corresponding standard deviations were 0.107 and 0.030. For group 2, the best prediction was provided by Ilki and Kumbasar [58], with an AVG value of 1.00 and a standard deviation of 0.049. Finally, the ultimate strain of the group 3 strengthened specimens was best predicted by the models of ACI-440.2R-17 [46], Lam and Teng [51], and Ilki and Kumbasar [58]. However, they produced standard deviations of 0.071, 0.123, and 0.097, respectively. This suggests that none of the considered models predicted the ultimate strain of the strengthened specimens of all groups consistently.

Proposed Models
The experimental results were utilized to propose expressions to estimate the peak compressive stress and ultimate strain of the CRA strengthened with FCSM wraps. A nonlinear regression was performed to propose Equations (6) and (7), where f_co is the compressive strength of the unconfined concrete, ε_co is the ultimate strain of the unconfined concrete, and f_l is the lateral pressure exerted by the FCSM wraps, computed from Equation (2). The accuracy of the proposed equations is illustrated in Figure 13a and Figure 13b for the peak compressive stress and ultimate strain, respectively. Pearson's coefficient was utilized to measure the accuracy of the proposed equations and is defined in Equation (8):

r = Σ(x_i - x̄)(y_i - ȳ) / sqrt( Σ(x_i - x̄)² Σ(y_i - ȳ)² )    (8)

where x_i is the ith observed value, y_i is the ith predicted value, x̄ is the sample mean of the observed values, and ȳ is the sample mean of the predicted values. An r of 0.99 and 0.97 was obtained for Equations (6) and (7), respectively, indicating that high accuracy in predicting the experimental results was obtained.
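Equation (8) can be verified with a few lines of code. The sketch below implements Pearson's coefficient exactly as defined above; the observed/predicted peak-stress pairs are fabricated for illustration and are not the paper's data.

```python
import math

def pearson_r(observed, predicted):
    """Pearson's correlation coefficient r, as in Equation (8)."""
    n = len(observed)
    xm = sum(observed) / n                 # sample mean of observed values
    ym = sum(predicted) / n                # sample mean of predicted values
    num = sum((x - xm) * (y - ym) for x, y in zip(observed, predicted))
    den = math.sqrt(sum((x - xm) ** 2 for x in observed) *
                    sum((y - ym) ** 2 for y in predicted))
    return num / den

# Fabricated observed/predicted peak stresses (MPa), for illustration:
obs = [25.8, 31.7, 38.4, 30.6, 35.0, 40.4]
pred = [26.1, 31.2, 38.9, 30.1, 35.6, 39.8]
print(f"r = {pearson_r(obs, pred):.3f}")
```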
Further, the mean values of the ratios of the predicted to the experimental values and the corresponding standard deviations are presented in Tables 5 and 6. It can be seen in Table 5 that AVG values of 1.01, 0.99, and 1.00 were obtained for the group 1, 2, and 3 specimens, respectively, whereas the corresponding standard deviations were 0.027, 0.013, and 0.022, respectively. This suggests that the proposed Equation (6) accurately predicted the peak compressive stress of the CRA strengthened with FCSM wraps. Similarly, the AVG values and standard deviations for Equation (7) are presented in Table 6. AVG values of 1.03, 0.97, and 0.99 were produced by Equation (7) for the group 1, 2, and 3 specimens, respectively, whereas the corresponding standard deviations were 0.032, 0.058, and 0.090, respectively.

Conclusions
1. This study presented experimental findings of the monotonic compression tests applied to concrete constructed with recycled brick aggregates (CRAs) and externally confined with low-cost FCSM wraps. Three concrete strengths were considered, and eight rectilinear specimens were tested for each concrete strength. For each concrete strength, two, three, and four wraps of FCSM were applied. The subsequent important inferences can be made:
2. The peak compressive stress of the specimens was increased by 61%, 98%, and 140% as compared to the reference specimen for the two, three, and four wraps of the FCSM applied to the low strength (i.e., a 15 MPa design strength) CRA specimens. For the medium strength CRA (i.e., a 20 MPa design strength), an up to 102% improvement in the peak stress was observed, whereas the peak stress was improved up to 83% for the high strength CRA (i.e., a 25 MPa design strength). The peak stress was found to increase as the number of FCSM wraps increased.
3. The FCSM wraps were efficient in enhancing the compressive ductility of the CRA. For the low, medium, and high strength CRA, the ultimate strain improved up to 320%, 308%, and 294%, respectively, as compared to the respective control specimens.
4. In particular, three and four wraps of the FCSM resulted in a bilinear stress-strain behavior irrespective of the concrete strength.
5. The improvement in the peak stress and ultimate strain as a result of the FCSM wrap confinement varied in inverse relation to the unconfined concrete strength, irrespective of the number of FCSM wraps.
6. Various existing analytical models of confined concrete were assessed to predict the peak compressive stress and ultimate strain of the CRA strengthened with the FCSM wraps. None of the existing models was found to estimate the peak stress and ultimate strain for all the groups consistently. Therefore, equations for the peak stress and ultimate strain were formulated from a nonlinear regression analysis. The accuracy of the proposed equations was assessed using Pearson's coefficient r. An r value of 0.99 and 0.97 was observed for the equation of the peak stress and ultimate strain, respectively, indicating that a good agreement existed between the experimental and predicted values.
2022-11-06T16:20:46.080Z
2022-11-01T00:00:00.000
{ "year": 2022, "sha1": "36fb8ec78c849e03068c14a05eb76f2b8e7cc531", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4360/14/21/4714/pdf?version=1667891206", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "971ac59a31c7d4a6cae04e0db3705e421294a7f9", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
215777020
pes2o/s2orc
v3-fos-license
Justice and Righteousness in Post-Communist Societies - The Case of Croatia
Can Croatia, as a transition country, be put in the category of democratic states? Yes, if by democracy we mean only the implemented mechanisms that provide a formal democratic procedure for the election of power holders and guarantee certain political rights and freedoms, such as freedom of speech, opinion, assembly or association. However, if we describe the concept of democracy as a political system that serves the people through a democratically elected government, guaranteeing every citizen the right to work, prosperity, equality before the law, justice, righteousness and equal opportunity for every member of society, while the government at the same time remains subject to the citizens' control, then Croatia is still at the beginning of its democratic development. There are numerous obstacles along the way. The aftermath of the war, and the poorly implemented transformation of the inherited socialist political order and of an economy based on the self-management of social property, opened the way for the development of corruption into a disease of the system, as well as for political clientelism and conflicts of interest as a way of functioning of the political elites. The consequence is the stratification of Croatian society in material terms, and its division over ideological and world-view issues. Structural reforms in all areas of society, starting from the political system, the public administration system, and the health, education and justice systems, are a precondition for Croatia's development towards true democracy, in which every individual will be guaranteed the so-called rights of the first, second and third generation. In order to carry out the reforms, the political will of the authorities is crucial; the imposed system of values, partly inherited from the period of socialism and partly formed in the early years of the functioning of the newly founded state, must be changed through reforms.
Introduction
The campaign for the re-election of the President of the Republic of Croatia, Dr. Ivo Josipović, was based on a program that emphasized the need for the further development of Croatian society, based on the principles of justice and righteousness [1]. Dr. Josipović failed to win a new term, and the program, which was called New Justice, remained only a dead letter on paper. The author of this article will identify the main problems and causes that are slowing down Croatia's development into a modern, European, economically prosperous state based on the principles of justice and righteousness. The analysis must start from certain specifics which had a decisive influence on the beginning of the functioning of the newly founded state. These specifics relate to the transition process of an inherited socialist political and economic system, implemented in the context of leading a defensive war. According to Horowitz, the negative effects of war on the democratization of society and on the implementation of market reform through the transition process are visible in the form of the accumulation of political power, a lack of tolerance towards the political opposition and the media, restrictions on human and political freedoms, the strengthening of political repression, and the economic favoring of individuals or groups, combined with resistance to the implementation of structural reforms [2].
Also, in accordance with the aforementioned conditions in which the transition processes were conducted, international institutions rank Croatia in the group of transitional post-conflict countries [3]. This group is made up of states that emerged after the breakup of the former federations and had to defend their newly acquired independence and state borders with weapons; their transition process therefore simultaneously involved building the state and the nation, a market economy, and democracy [4]. Consequently, in the case of Croatia, together with the other countries of the former Yugoslavia and some former Soviet republics, we can speak of a so-called "quadruple transition" [2,5]. Namely, a conditional division into a "double", "triple" and "quadruple" transition has emerged in the literature. The "double" transition covers the process of economic and political transformation (Poland, Hungary, the Czech Republic); the "triple" transition covers democratization/marketization/state-building (other countries of Central-Eastern Europe); while the "quadruple" transition includes democratization/marketization/state-building/nation-building, as was the case with the countries of the former Yugoslavia, the USSR, and Slovakia [6,7]. It is therefore necessary to explain the internal socio-political circumstances in which this "quadruple" transition took place, because they decisively influenced the further direction of Croatia's development. These circumstances concern the existence of weak political institutions, which were only in the initial stages of construction, with no experience or tradition of functioning on the principles of "good government"; the structure and composition of the members of the political elite of the time, as the bearers of the transition process; the lack of awareness among the public and the political elites of the necessity of changing the ways of thinking and acting of the past, typical of the inherited socialist political and economic systems; and the problem of implementing a policy of economic transition based solely on the category of property. By analyzing the above-mentioned circumstances, the author of this article will try to explain the causes of today's weaknesses in Croatian society. Political clientelism, conflicts of interest of the holders of political and economic power, systemic corruption, and the absence of a value system based on democratic principles are the main obstacles to Croatia's development today. The World Bank has warned about the emergence of certain forms of corruption during the process of transforming the political and economic system. The very fact that this process took place under the conditions of a newly established legislative and institutional framework enabled an uncontrolled concentration of economic power in the hands of particular interest groups closely related to political centers of power [3].
Transition Processes
At the beginning of the 1990s, Croatia, like the other countries of the former socialist socio-political system, was confronted with the need to carry out transition processes, through which it had to ensure the transformation of society in two directions simultaneously: from a political and economic order of so-called self-governing socialism, with social ownership as a prominent value, into a model of society based on democratic mechanisms and market conditions of business.
Here, the notion of self-governing socialism should be distinguished from the notion of so-called real socialism, typical of the other post-communist countries. "Real socialism was created on the basis of the negation of the market and democracy", while "self-governing socialism developed on the foundations of the market, labor democracy, social ownership and self-government" [8]. Welsh explains the term transition period as "the interval between an authoritarian political regime and a democratic one" [9]. Transition is a long-term process, consisting of changing the political and the economic system. In the case of the post-communist countries, this change had to democratize society and introduce market rules in the economic sector simultaneously. Changes in the economic system are conditioned by changes in the political sphere. According to Horowitz, "In the post-communist world, democratization was typically a prerequisite for dismantling planned or socialized economies and instituting market-based ownership" [2]. Croatia is an example of a transition country which has been in economic recession for many years precisely because of the lack of key reforms of the political system.
Transformation of the Political System
In the process of transformation of the inherited socialist socio-political system, a framework had to be set for the implementation of all those democratic mechanisms typical of societies with a long democratic tradition, such as pluralism of opinion, political activity, and freedom of assembly, speech, writing and voting. This was supposed to be the foundation upon which the democratic process of electing government at the local, parliamentary and presidential levels would rest. However, true democracy is not represented by these characteristics alone. The fundamental meaning of democracy is that democratically elected officials serve for the benefit of all citizens, promoting the public interest as a whole. Otherwise, these democratic mechanisms may serve political elites in gaining power in order to protect exclusively the particular interests of individuals and groups. This opens the door to an undemocratic way of governing and a turn towards kakistocracy, the notion of "bad government or simply government by the most unscrupulous or unsuitable people" [10]. Consequently, when talking about democracy, one should bear in mind that a democratic state implies the existence of a democratic government, but that a democratically elected government does not in itself guarantee the existence of a democratic state. Distinguishing the understanding of democracy as a political system from democracy as a form of government is the factor that separates countries with a long tradition of democracy from other states, which have yet to develop such a political system of government. In a democratic political system, a responsible government acts in the interest of all citizens, enabling the application of democratic mechanisms for the formulation of their interests by ensuring "freedom of expression, right to vote, right of political leaders to compete for support, alternative sources of information, free and fair elections, right of political leaders to contest for votes, institutions for making government policies depend on votes and other expressions of preference" [11].
If we compare the above with the process of transition of the political system in Croatia, we can see a gap between the implemented mechanisms, through which the citizens could democratically elect the holders of power, and the actual rule of the political elites. The actions of the political elites are not always in line with all the democratic principles of good governance. The notion of good governance not only defines the way government works, but also applies to all other actors in society, namely political parties, parliament, the judiciary, the media and civil society, and their interactions with each other, all with the aim of improving citizens' standard of living. The tripartite formula for good governance encompasses the elements of "State capability - the extent to which leaders and government are able to get things done; Responsiveness - whether public policies and institutions respond to the needs of citizens and uphold their rights; Accountability - the ability of citizens, civil society and the private sector to scrutinize public institutions and governments and hold them to account" [12]. The type of party system, the type of political parties and the way they function are key to the functioning of the overall political system. If they operate under the conditions of an unsettled legislative and institutional framework of society, they become the main source of corruption and political clientelism. According to Bandelj and Radu, "political transformations in postcommunist Europe need to take into account the specific historical and socio-economic context of large-scale post-socialist transformations in this region" [7]. As in the case of the other countries of Eastern and Southeastern Europe, the Croatian political scene was marked in 1989 and 1990 by the beginnings of a change in the political system from a one-party system, dominated for decades by the Communist Party, to a multi-party system [7]. The peculiarity of the first democratic elections, held in May 1990, is that they were held at a time when Croatia had not yet gained its independence, because it was still part of the former Yugoslavia, and the elections themselves were held in an atmosphere of war threats on the territory of the former Yugoslavia. Ekiert et al. emphasize that "the bloody civil war in the former Yugoslavia and its legacies are a stark reminder of the potential difficulties faced by divided societies in their quest to build democracy" [13]. In the long run, however, the results of these elections would influence the shaping of the Croatian political space. Its basic feature is the existence of bipolar multipartism, in which two political parties of the left and right center have been alternating in power for the past 25 years. As a rule, only two parties have won the largest numbers of votes in the parliamentary elections, HDZ and SDP, which formed governments with their coalition partners: from 1990 to 2000, HDZ; from 2000 to 2003, SDP; from 2003 to 2011, HDZ; from 2011 to 2016, SDP; and from 2016 to the present, HDZ. Also, by analyzing the results of the first multi-party elections for members of the Croatian Parliament, certain comparisons can be made with other countries of the former socialist political system, which indicate that the first free elections in those countries turned into plebiscites of a kind against the Communist Party and protests against the existing political system: "the elections were mainly protest votes against the former regimes" [14].
On the Croatian political scene, among the 33 political parties and 16 different associations registered for the elections, two main parties, or two poles, stood out, and they persist to this day [15]. The largest number of seats in the first multi-party elections was won by the Croatian Democratic Union (HDZ, with 205 seats or 58%), followed by the reformed communists, the League of Communists of Croatia - Party for Democratic Change (SKH-SDP, with 107 seats or 30%). Compared to ideologically "sister" parties in Czechoslovakia (where the Communist Party won 14% of the vote), Hungary (where the Socialist Party won 11%) and East Germany (where the Party of Democratic Socialism won 16.4%), only the reformed communists in Croatia and in Bulgaria (where the Socialist Party won 44%) achieved good results [14]. The Croatian Democratic Union (HDZ), as the strongest political party, emerged from the national independence movement, taking on the character of a mass party led by a charismatic leader. The reformed Communist Party of Croatia, the Social Democratic Party of Croatia (SDP), has profiled itself as the largest opposition political party since the early 2000s. Through the activities of these two main political parties, one can trace the overall process of democratization of the Croatian political system. This process can be provisionally divided into two periods: from 1990 to 2000, and from 2000 to the present. To draw parallels in the presentation of the transformation of the political system over these periods, it is necessary to analyze the structure of the membership and the leadership of the newly formed political parties. From the aspect of influence on the creation of party and national politics, there are two basic categories of members of political parties: former members of the League of Communists of Yugoslavia, and members of the Croatian diaspora. By joining the newly formed political parties, former members of the League of Communists of Yugoslavia took prominent positions in the new pluralist party system [16]. After the breakup of the former Yugoslavia and the establishment of new states within the territorial borders of the former republics, the League of Communists of Croatia changed its name to the Party of Democratic Change (SKH-SDP), and then to the Social Democratic Party (SDP). Simultaneously with the process of transformation of the SDP, as the sole successor to the former Communist Party, a large number of political parties were formed, which many former Communists joined, for example, the Croatian Democratic Union (HDZ); the Croatian People's Party (HNS); and the Serb ethnic minority party, the Serb Democratic Party (SDS), which ceased to exist after the end of the Homeland War. Some of its members are now politically active in the Independent Democratic Serbian Party (SDSS). So, regardless of the political programs of the new political parties, they had one thing in common: former communists became their members, and their influence after the first multi-party elections was not weakened. The described pattern of rapid transformation of politicians with many years of experience in the one-party system is found in other post-communist transitions.
According to Stoica, "Romania is a country marred by former communist politicians' survival", in which the positioning of former communist elites in the new authorities has led to the establishment of a so-called "mock democracy", controlled by an incompetent, highly politicized and excessive bureaucracy, as well as an "economic system that rewards politically-connected individuals or firms and punishes honest, hard-working entrepreneurs" [17]. The second category of party members was represented by members of the Croatian diaspora, especially part of the political emigration, who returned to Croatia in the early 1990s. The term "diaspora" itself does not refer exclusively to "the objective group of people but always the result of social or political mobilization, with foundation myths, rituals and representative organizations" [18]. In order to understand their role in the construction of the new political and economic system of Croatia, it is necessary to explain the genesis of their emergence. During the 20th century, the Croatian diaspora, as part of the total Yugoslav emigration, experienced three major emigrant waves, whose actors can be divided into three categories: "old emigrants", "political emigrants" and "guest workers" [18,19]. The "old emigrants" were emigrants, mostly members of the peasantry and the working class, who left the Austro-Hungarian Monarchy, which at that time comprised the areas of present-day Croatia, between 1880 and 1914. The next group (provisionally speaking, itself consisting of two subgroups) was represented by the "political emigrants", Croatian emigrants who left Yugoslavia after 1945 and the collapse of the quisling state formation of the Independent State of Croatia. This group of emigrants was marked by the Calvary of the "Cross Road" and the mass liquidations at Bleiburg [20]. It was represented by members of the middle and upper classes, who were treated as enemy emigration by the Yugoslav Communist leadership. The second subgroup of political emigrants emerged after the collapse of the so-called MASPOK (mass national movement) in Croatia, that is, the "Croatian Spring", in late 1971 and during 1972 [21]. This movement was led by Communist leaders of the younger generation and of liberal orientation, who "made genuine efforts to broaden the regime's social base to increase Croatia's autonomy within the Yugoslav Federation" [16]. The consequences of its collapse were felt throughout Croatian society in the form of an "ideological dictatorship" imposed by the League of Communists of Yugoslavia in all areas of social life, from the social and cultural to the educational [16]. This was followed by the persecution, intimidation and imprisonment of prominent members of the movement, and the elimination from political life of its major leaders, from Dr. Savka Dabcevic-Kuchar and Mike Tripal to Dr. Franjo Tudjman and other high-ranking members of the communist elite. The third group of emigrants consisted of "guest workers", temporary workers abroad. They were one of the "consequences" of the Communist leadership's attempt to carry out economic reforms during the 1960s by introducing certain elements of a market economy. The reform failed because the Yugoslav economy, with the exception of the Croatian and Slovenian economies, was not ready to function on a market basis.
The result of this experiment was a large increase in unemployment, which the Communist leadership "regulated" by liberalizing travel abroad, thus solving two problems at once: it reduced both unemployment and the number of political opponents of the regime, by allowing them to go abroad temporarily. After winning the first elections in 1990, the HDZ held power until 2000, winning two consecutive election cycles for the Croatian Parliament. This decade of HDZ rule was marked by the Homeland War (1991-1995) and the transition process, during which the rule of the leading political party was based solely on the issue of national identity as a mobilizing factor, instead of on the priority task of implementing structural political and economic reforms. According to Bandelj and Radu, "Democratic consolidation will be faster when the elites in power have a pro-democratic reform orientation, defined quite minimally as a government, where the ruling party is not a nationalist nor communist in orientation" [7]. The ten-year rule of the HDZ adversely affected the transition process of Croatian society and opened up the space "toward greater authoritarianism and greater corruption" [13]. It was the absence of political will for consistent and thorough transformation processes during the 1990s that favored the spread of corruption and clientelism, and in that period placed Croatia in the group of countries with "illiberal democracies", together with Slovakia, Bulgaria and Romania [22]. During this period, the main opposition party, SKH-SDP, as the successor to the former Communist Party, reformed itself into a center-left democratic party, renamed the Social Democratic Party (SDP). In the next parliamentary elections, in 2000, the SDP won power with a center-left coalition of six political parties, which ushered in a new era of intense democratization, a result that placed Croatia in the group of post-communist countries with consolidated democracy over the following decade. With the onset of democratization, the process of accession negotiations for Croatia's full EU membership opened. These processes influenced the main opposition political party, HDZ, to begin its internal reform from a political party based on national ideology into a modern European Christian Democratic party, a member of the European People's Party, which won the next parliamentary elections in 2003 and 2007. After the opening of accession negotiations with the EU, the democratic processes of Croatian society intensified, in accordance with the dynamics of the opening of negotiation chapters with well-defined conditions set by the EU. A particularly significant shift was noted in the implementation of the anti-corruption strategy. However, after gaining EU membership (in July 2013), Croatia stalled in implementing the key reforms necessary for the further democratization of society, primarily because of the lack of political will to carry out structural reforms systematically and thoroughly. This is due to the crisis of the party system. As in the developed European countries, the transformation of the mass political party into a type of electoral-professional party is at work in Croatia. The characteristic of mass parties is the mobilization of a particular stratum of citizens on questions of ideology or nationality, thereby creating "a large base of dues-paying members, hierarchically structured party organizations linking the national and local levels" [23].
Today, the tendency in the Croatian political space is for political parties to separate themselves from society and attach themselves firmly to the state (or rather to the state budget, from which they are largely financed). Contact with their own electoral base is lost and elements of democracy disappear, as party leaderships represent narrow elitist groups of people, led by an undisputed leader with an authoritarian way of governing. The decision-making process is in the hands of a few people, the president of the party and his closest associates, while the "ordinary members" are used as "voting machinery", which is activated and motivated exclusively on the eve of each election cycle. It is precisely this exclusive right of a few party people to create party politics that directly produces the party's lack of internal democracy, since the electoral base is deprived of any possibility of controlling the actions of the party leadership, which opens the space for the arbitrariness of party leaders and their associates. Thus, a common feature of all political parties in Croatia is the persevering cultivation of a kind of dirigiste democracy, in which the party's activities in the national political space are controlled by the party's leader and his closest associates [24]. Furthermore, the closest associates are, as a rule, recruited from circles of like-minded people, who base their political existence on unconditional obedience and loyalty to the party leader. The consequences of this are visible in the identity crisis of the political parties, because, in the words of Walter Lippmann, "where all think alike, no one thinks very much" [25]. Also, in order to make themselves more efficient on the aforementioned grounds, the political parties have built strong and expensive professional apparatuses. Such functioning of the party leadership, by a "domino effect", also affects the behavior of the other party members. Namely, when they join a political party, they see an opportunity to realize certain personal gains, material or career-motivated (securing employment or leadership positions in the public administration or in public companies for themselves and family members, membership in the supervisory and management boards of state-owned companies, and other forms of bargaining). As a result, non-core party members become mere followers, obedient to their party leaders, with the ultimate goal of securing their own gains. Consequently, a layer of people with average or below-average abilities and knowledge, above-average ambitions, and the very important characteristic of "political" obedience is recruited into various institutions within the public administration system (state administration, public services, and local and regional self-government units). Looking back at the stages of the development of the political system in Croatia to date, one can see a certain pattern in its functioning, in the sense that each step in the democratization of the Croatian political system, as a rule, depends on the degree of democracy within the leading political parties. Given the conditions in which political parties function in Croatia, it is precisely their undemocratic organization that generates political clientelism, conflicts of interest at all levels of society, and institutional or systemic corruption, which has become a generally accepted way of life. This is also evident in the perception of corruption in post-communist societies; see
Sajó, who explains the increased perception of corruption in post-communist countries from several points of view (moral, cultural, historical, institutional and political) and states that it is not the result of the absence or loss of citizens' trust in public servants, but rather the result of "the needs of the political structures" [26]. Writing about political corruption, Issacharoff states that "the existence of public power is an opportunity for motivated special interests to seek to capture the power of government, not to create public goods, but to realize private gains through subversion of state authority" [27]. The term political clientelism, on the other hand, can be explained as a patron-client relationship, in which a political exchange takes place between the politician ("patron") and the client, in the sense of patronage being given in return for the client's vote or any other kind of support [28]. A conflict of interest "exists when a public employee's public responsibilities clash, or appear to clash, with his or her private economic affair" [29]. It can be said to overlap with corruption in the sense that corruption cannot exist without a conflict of interest, because each and every corrupt act is driven by an underlying conflict [30]. It can be concluded that the absence of a tradition of Western-style political pluralism is a common "childhood illness" of all post-communist countries, although developed countries are not immune to these "democracy faults" either [9,31].
Transformation of the Economic System
In the process of transforming the inherited socialist economic system into a market economy, there was a certain difference between Croatia and the other Central and Eastern European countries. Namely, the transition of the economic system in Croatia took place in two phases: through the transformation of socially-owned enterprises (with the aim of converting social ownership into state ownership) and privatization (with the aim of converting state ownership into private ownership). This difference stems from the fact that Croatia, together with the other republics of the former Yugoslavia, had based its economy on the already mentioned model of so-called self-governing socialism. The main emphasis in the conversion and privatization processes was on the key issues of the ownership of the socially-owned enterprises, the major discrepancies between their real market value and their selling price, and, consequently, the issue of selling them to "buyers who have no money" [32]. It should be emphasized that the transformation of the economic system was carried out only partially, based solely on the change of ownership of enterprises, while reforms in other areas closely related to the economy were left out. Under the overall transition processes of the economic system, Welsh includes the implementation of a group of reforms: "macroeconomic stabilization (e.g. reform of monetary and fiscal policies), price reform (e.g. price liberalization, currency convertibility), structural reform (e.g. privatization, trade liberalization), institutional reform (e.g. reform of legal and banking systems) and educational reform (e.g. management training)" [9]. In addition, the Croatian transition has been characterized by several other problems common to post-communist countries: delays in the implementation of market economy mechanisms, failure to respect the principles of justice and equity, and social insensitivity to the problems and needs of vulnerable groups in society [13]. According to Ekiert et al.,
transition processes had to be carried out in such a way that the political elites took into account the transparency and legality of the procedures, with clear and realistically set ultimate goals of the transition, and with a built legal and institutional framework that would ensure the rule of law [13]. Analyzing the transition models through which the post-communist countries have more or less successfully implemented market economy mechanisms, Izyumov and Claxon identify three basic models: democratic capitalism (the Baltic countries, Hungary, Poland, the Czech Republic, the Slovak Republic, Slovenia); autocratic capitalism (the Central Asian countries); and clan capitalism (Albania, Bulgaria, Romania, Armenia, Moldova, Russia, Georgia, and the countries of the former Yugoslavia) [33]. As the basic difference between democratic capitalism and the other two models, Izyumov and Claxon point out "the strong role of the civil society and effective separation of economy and polity found in democratic capitalism", and conclude that the absence of these circumstances allows interest groups in the countries where the autocratic and clan capitalism models apply to control the economy completely or almost completely, pursuing their own interests [33]. Kosals explains the term "clan capitalism" using the example of the transition processes in Russia [34]. Kosals describes a clan as a "social entity united by the common interest of survival in the hostile social Soviet environment and bound by shadow relations regulated by hidden norms", emphasizing that "clans formed a system of 'clan capitalism' as a result of daily interactions with each other and the government promoting policy of market transformation" [34]. As a rule, these clans are a union of business people, politicians, members of the intelligence system, and members of the criminal milieu, who are not fully subject to the control of the institutional authorities. It should be emphasized that some of the business people belonged to the so-called economic elite of the socialist era (former directors of socially-owned companies), while some of the politicians and intelligence officials, as already mentioned in the previous chapter, were members of the former socialist political and intelligence nomenclature. According to Kosals, their power was based on the control of economic, administrative or political, as well as judicial, resources, enabling the above-mentioned clan structure to control all spheres of society in order to prevent the implementation of market economy mechanisms [34]. The consequence of such an imposed structure of relations within political-entrepreneurial-criminal groups is the creation of a model of "crony" capitalism, which is a common characteristic of a number of countries in transition. In the case of Bulgaria, this model has been described as a "nonplanned and nonmarket system", functioning under conditions of underdeveloped institutions of government, a lack of market mechanisms, a post-socialist lobbying culture (rent-seeking), and the emergence of quasi-entrepreneurs and quasi-owners [35].
Transition
The process of economic transition in Croatia involved about 4,000 socially-owned enterprises, with a total value of $20 billion, which were subject to conversion as part of the process of the overall reform of the Croatian economic system [36].
The law established the following four conversion models: the sale of all or part of the enterprise to a natural or legal, domestic or foreign, person; investment in the enterprise (through the issue or payment of shares); the conversion of earlier investments in, and receivables from, the enterprise into a stake; and the transfer of all shares/stakes to government funds free of charge. For the most part, the conversion was carried out according to the first model, in such a way that the majority of the stocks/shares of the socially-owned enterprises were sold under favorable conditions to managers, employees and former employees within these enterprises, which made these categories of persons privileged in relation to other citizens. This model introduced the so-called system of "small shareholders", while the other dominant form of conversion was the sale of shares/stocks to outside investors, so it can be concluded that the prevailing conversion model was a combination of so-called internal and external privatization. The aim of the conversion, which had to be completed by the statutory deadline (30 June 1992), was to turn all socially-owned enterprises (except the group of those of strategic national importance) into state-owned legal entities, which would subsequently enter the next stage, privatization. However, over time, the problem of the growing suppression of small shareholders in the ownership structure of the companies emerged in the transformation process, as the "new managers" came from the ranks of the former directors of these same companies from the time of socialism, who, with the help of political connections and long-established economic ties, slowly took over the ownership of the companies, something which would become fully apparent in the second stage, privatization. The conversion started without careful preparation, which should have included a market assessment of the value of the socially-owned enterprises that entered the conversion process. Considering that Croatia was a war zone at the time, it was also necessary to assess whether foreign capital would consider any investment in a country affected by aggression worthwhile. Moreover, the conversion itself could not be regulated solely by a single legal act without other legal frameworks having been prepared and adopted to normalize the other segments of society inherited from socialism (the banking system, land ownership, the social housing problem, etc.). Another problem related to the fact that natural persons (employees), as owners of shares/stocks, did not have ownership rights over the company, but only management rights in proportion to the paid-up amount of the purchased shares/stocks. Also, if one takes into account that the employees and former employees of the converted company received favorable loans for the purchase of shares/stocks, which they had to repay within 5 years, then they could not fully own the shares/stocks, because, in the event that the loan could not be repaid within the given time, the outstanding shares were transferred to the state fund.
Upon completion of the conversion process, it became clear that the process itself had not brought any improvement to the Croatian economy; instead, it created the conditions for the stratification of society into those who had created capital in the companies through many years of work, only to be tricked and pushed to the margins in the privatization process, and the small group of individuals who used their political and economic ties, upgraded them with new ones, and paved the way for the "tycoonization" of the Croatian economy.
Privatization
The second phase, privatization, with the aim of transferring state ownership into private ownership, was carried out through two models: by selling, and by transferring without compensation, shares, stocks, rights and assets to natural and legal persons through so-called coupon or "voucher" privatization [37]. The second model, coupon privatization, under which shares were distributed free of charge to certain categories of natural persons through privatization investment funds established for this purpose, was a key moment in this segment of the overall transformation of the economic system. The course of privatization would show that these funds, as intermediaries in the sale of shares through the securities market, were not needed, because after the stage of collecting coupons and exchanging them for privatization shares, which were placed on the market, the true owners no longer had any information about the fate of their shares, which deprived them of any possibility of control and supervision until the shares were paid off. Also, it was legally possible for the accumulated funds to be managed by the owners of several funds, who thus practically came into newly acquired capital with no investment of their own, apart from the initial, legally prescribed capital in the form of a guarantee deposit, while realizing many times greater benefits through coupon privatization.
The Results of the Audit and the Consequences of Transformation and Privatization
In the period from 2001 to 2004, an audit of the transformation and privatization was carried out, covering 1,556 socially-owned enterprises. According to the data from the 2004 Transformation and Privatization Audit Work Report, the findings were devastating: only 75 companies had been properly converted and privatized [38]. The key irregularities related to: 1. the initial failures (intentional and unintentional) in the evaluation of the assets of socially-owned enterprises before entering the conversion and privatization process; 2. the negative business policies of these companies after conversion, in the sense of their failure to cope with market conditions and the unlawful withdrawal of capital from them, which resulted in the dismissal of employees and the initiation of bankruptcy; 3. the reduction of the rights and influence of "small shareholders" in the decision-making process, as a consequence of the fact that employees and former employees, for various reasons (ignorance, disinformation, inability to settle credit obligations), sold or transferred their shares to other acquirers; 4.
certain omissions in the legislation, which lacked protection of the rights of "small shareholders" and employees from abuses in the changing of the ownership structure and from dismissals, as well as provisions on criminal and misdemeanor sanctions for violations of the legal regulations governing the transformation and privatization process, in particular in cases of breach of the contractual obligations of large shareholders (e.g. failure to comply with recapitalization obligations or repayment deadlines, misjudgment of company value, etc.); and 5. the absence of an independent institution to oversee the transformation and privatization process in place of the Croatian Privatization Fund, which had been established with the task of implementing and controlling these procedures. The Fund itself, as the main supervisory body, was subject to political influence by the very fact that the president of the Fund and his deputies and vice-presidents were appointed and dismissed by the Government, which also appointed the members of the Fund's Supervisory and Management Boards. The audit of the identified omissions revealed that Croatia had entered the process of conversion and privatization unprepared, since this process should have followed only after the basic legislative and political framework for introducing a market economy had been established. "Five key aspects of the institutional environment are vital for the implementation of a privatization program: property rights, contract law, entry and exit laws, securities laws, and political stability" [39]. The failure to construct such an institutional environment opened the door to the criminalization of conversion and privatization. In the absence of independent control mechanisms, and given the impossibility of sanctioning criminal actors, whose privileged position was reflected in their firm connection with political centers of power, a message was sent to the whole society "that political connections and loyalty, and a sometimes covert and sometimes overt disregard for the market rules of the game, bring about the fastest entrepreneurial effect" [40]. In these processes, the public recognized an opportunity for the rapid enrichment of certain interest groups, while the majority of participants were left duped, thus creating "a unique type of system, capitalism with a Croatian face. This system was supposed to rely on 200 Croatian families, that would take over the economy and begin the initial accumulation of capital, thus creating a new entrepreneurial class that would lead the country towards a profound transformation. This transformation could occur only by using political power and by eliminating all competitors... and by discouraging foreign investors that could appear and aspire to buy already well established economic ventures" [41]. The trend of enriching certain groups has continued, as is evident from the data of Knight Frank, published in The Wealth Report in 2017, which show that in 2006 there were 90 registered citizens in Croatia with assets in excess of $30 million, a figure which rose to 120 persons in 2015 and 2016, out of a total population in Croatia of 4,130,304. Thus, in the period from 2006 to 2016, the number of the richest citizens increased by a third, while projections show that by 2026 their number will have risen to 130 persons with assets exceeding $30 million. A comparison with other Central and South-Eastern European transition countries is shown in Table 1.
The consequences of the conversion and privatization have had a negative impact on the economy, but also on society as a whole, undermining the legitimacy and credibility of these processes, especially in light of the fact that the privatization process is not yet complete and public resistance to further privatization is increasing.
Conclusion
The prerequisite for the implementation of structural reforms in the systems of public administration, justice, health and education is the reform of the political system. The course of the transformation of the political system, which has been under way in Croatia for the past 29 years, has shown that the democratization of the party system (through the reform of political parties) is a precondition for the democratization of society. The undemocratic organizational structure and the current way of functioning of the political parties are generators of the social and economic crises in Croatia. The prevalence of corruption, which today represents a systemic phenomenon, the constant conflicts of interest, and political clientelism as a pattern of action of the political elite all have their source in the existing party system. During the transformation of the political system, Croatia built a legislative and institutional framework that provides for a democratic way of electing those in power; in its totality, however, the political system itself is not fully democratized in the way specific to Western countries with a long democratic tradition. As a rule, the two strongest political parties, which alternate in power from one election cycle to the next, are burdened with an oligarchic way of functioning within their own ranks, and they project the same mode of governance onto society itself, ruling in the interests of individuals and groups rather than for the welfare of all citizens. It is precisely the ability to govern in the interest and for the benefit of the entire community that makes a political system democratic, providing the conditions for building a society with a new value system, based on the principles of justice and fairness. The transformation of the political parties into democratic organizations requires the political will of their leaders and officials, as well as an awareness of the need to change the existing political culture. Croatia made its biggest progress in the democratization of the political system during the EU membership negotiation process, when it was forced to implement the acquis in national law. Practice has shown, however, that after joining the EU, Croatia stopped further democratization. This fact points to the conclusion that the imposition of the "rules of the game" from the outside, without the political will of the domestic power-holders to implement structural reforms, and without their awareness of accountability to their own electorate, does not produce the expected results.
2020-04-16T04:43:39.923Z
2020-04-14T00:00:00.000
{ "year": 2020, "sha1": "8544aec2e96a657e86897be23ff7bf2dd91ee43b", "oa_license": "CCBY", "oa_url": "http://article.sciencepublishinggroup.com/pdf/10.11648.j.jpsir.20200301.13.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "8544aec2e96a657e86897be23ff7bf2dd91ee43b", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
215745615
pes2o/s2orc
v3-fos-license
Surface rheotaxis of three-sphere microrobots with cargo
Abdallah Daddi-Moussa-Ider, Maciej Lisicki, and Arnold J.T.M. Mathijssen
Institut für Theoretische Physik II: Weiche Materie, Heinrich-Heine-Universität Düsseldorf, Universitätsstraße 1, Düsseldorf 40225, Germany; Institute of Theoretical Physics, Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw, Poland; Department of Bioengineering, Stanford University, 443 Via Ortega, Stanford, CA 94305, USA
(Dated: April 14, 2020)
INTRODUCTION
For unicellular microorganisms, motility is an essential feature of life [1]. To overcome or benefit from fluid drag forces, these microbes have devised numerous swimming strategies [2]. Besides rich collective dynamics [3-5], hydrodynamics can gravely affect microbial life even for isolated swimmers [6,7], through surface trapping [8], circular motion [9], boundary- [10] and shear-induced accumulation [11,12], and swimming reorientation [13,14]. Some microorganisms have also evolved to respond to flows, such as N. scintillans dinoflagellates, which exhibit bioluminescence to reduce grazing by predators that generate flows [15], and S. ambiguum ciliates, which perform hydrodynamic communication [16]. However, so far only circumstantial evidence exists concerning the behavioural response to flow [17]. It is therefore important to elucidate the inherent hydrodynamic mechanisms at play in microbiology.
In a bulk shear flow, one of the sources of complex behaviour is the geometry of the cells. Classical Jeffery orbits [18] of elongated particles also apply to swimmer dynamics [19], as seen in experiments with E. coli bacteria [20]. The chirality of their flagella was also shown to induce cross-streamline migration [21-23]. Interestingly, a rheotactic response leading to upstream swimming in bulk flows can also arise from viscoelasticity [24]. Conversely, surfaces are known to alter hydrodynamic interactions in their vicinity, thus affecting the shear response significantly even for rigid particles [25]. Surfaces may enhance rheotaxis by providing a strong environmental coupling, in which swimmers react to an external shear flow by orienting upstream. In particular, shear has been argued to aid navigation in mammalian sperm cells [26-28] and to govern the contamination dynamics of bacteria in channel flows [29-31]. The dominant mechanism behind this upstream reorientation, termed the 'weathervane effect', relies on the anisotropic and distance-dependent drag forces acting on the swimmer close to the surface. Far from walls this effect vanishes, which has also been confirmed numerically [32,33]. Even though this mechanism of surface rheotaxis is fairly well understood, studies that couple this knowledge with other effects, such as confined Jeffery orbits, hydrodynamic wall attraction and chirality, still lead to new discoveries, such as oscillatory rheotaxis [34] and long-tailed distributions of run-tumble dynamics that can cause 'super-contamination' [35].
Understanding the influence of flow on microorganism behaviour has opened the exploration of artificial rheotaxis, using synthetic nanoparticles and micro-robots. For these, upstream swimming in response to shear has also been observed in a variety of contexts and for different propulsion mechanisms, including chemical and acoustic effects [36], photocatalytic autophoretic systems of colloidal rollers [37], and rod-shaped Janus particles [38,39].
A generic swimming mechanism for natural swimmers involving elastohydrodynamic coupling is also strongly related to the dynamics of the environment and the flow conditions [40]. In this contribution, we explore the transport of cargo by a simple model Najafi-Golestanian swimmer [41,42] in an external shear flow close to a planar boundary, where one sphere is larger to hold the payload. Depending on the swimming mechanism (cargo pusher or puller), we observe different reorientation mechanisms that all lead to a positive rheotactic response. Hence, after reorienting upstream, the full three-dimensional dynamics reduce to a two-dimensional motion in the shear plane. This allows us to quantify the swimmer dynamics in a phase space spanned by the wall-separation distance and the head orientation. By analysing the fixed points in these phase diagrams we identify the rheotactic states, and their stability for the different swimmer geometries. Finally, we map out the upstream migration speed as a function of the imposed shear rate, and find that pushers and pullers perform optimally in completely different external flow conditions.
MODEL
We consider the dynamics of a Najafi-Golestanian swimmer [41,43] near a planar no-slip boundary subject to an externally applied shear flow [Fig. 1a]. Throughout the paper, all quantities are non-dimensionalised by scaling lengths with the mean swimmer arm length, L, and scaling velocities with the free swimming speed in the absence of flow and boundaries, V_0. The total mean length of the swimmer is thus 2L. The surface is located at z = 0 in Cartesian coordinates, and the flow is given by u = γ̇ z x̂ in terms of the shear rate γ̇. So, for clarity, the dimensional shear rate is γ̇* = γ̇ V_0 / L. The swimmer is neutrally buoyant, and is composed of three spheres joined by thin arms, all aligned along the swimming direction, t̂. The two arm lengths oscillate with frequency ω, π/2 out of phase with each other. We consider both cargo-pushing swimmers, with the larger sphere at the front, and pullers, with the cargo at the back [Fig. 1b]. The hydrodynamic signature, the far-field flow generated by such a three-sphere cargo pusher (puller), corresponds to an extensile (contractile) Stokes dipole [44-46]. The radius of the two smaller spheres is a = 0.1 and the larger sphere has radius a_+ = 0.12.
The swimming dynamics are found by solving for the hydrodynamic interactions between the spheres and the wall, including the external shear flow [see Supporting Information (SI)]. We use the Rotne-Prager-Yamakawa approximation to account for the different-sized particles at low Reynolds numbers [47]. We also treat the hydrodynamic interactions between the spheres and the wall [48], and the external flow, at the same level of accuracy, using a shear disturbance tensor formalism. Hence, a generalised mobility tensor is constructed that relates the translational and rotational velocities of each sphere to the hydrodynamic forces and torques. By enforcing that the total external force and torque vanish, and by prescribing an oscillating distance between the three spheres as usual, the swimming motion is uniquely solved.
UPSTREAM SWIMMING DYNAMICS
The three-dimensional dynamics of these pullers and pushers are first described for different initial orientations t̂_0 parallel to the surface [Fig. 1c,d]. Indeed, we observe that all swimmers eventually align with the shear plane, such that the component t̂ · ŷ → 0, for both swimmer types [also see Supplementary Videos S1, S2].
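To illustrate the prescribed-stroke, force-free solution procedure in miniature, the following Python sketch integrates a collinear three-sphere swimmer in unbounded fluid. It is not the paper's solver: it uses the simpler Oseen approximation instead of the Rotne-Prager-Yamakawa tensors, omits the wall, the external shear and the torque balance, and the parameter values are illustrative only.

import numpy as np

eta = 1.0                        # fluid viscosity
L = 1.0                          # mean arm length
a = np.array([0.1, 0.1, 0.12])   # sphere radii; larger cargo sphere at one end
u0, omega = 0.1 * L, 2 * np.pi   # stroke amplitude and frequency

def mobility(x):
    # Longitudinal mobility for collinear motion: Stokes self-mobility
    # on the diagonal, Oseen cross terms 1/(4 pi eta r) off the diagonal.
    M = np.diag(1.0 / (6 * np.pi * eta * a))
    for i in range(3):
        for j in range(3):
            if i != j:
                M[i, j] = 1.0 / (4 * np.pi * eta * abs(x[i] - x[j]))
    return M

def step(x, t, dt):
    # Prescribed arm-length rates, pi/2 out of phase as in the model.
    g1dot = -u0 * omega * np.sin(omega * t)
    g2dot = -u0 * omega * np.sin(omega * t - np.pi / 2)
    M = mobility(x)
    # Solve for the forces F from the stroke constraints
    # v2 - v1 = g1dot, v3 - v2 = g2dot, plus the force-free condition.
    A = np.vstack([M[1] - M[0], M[2] - M[1], np.ones(3)])
    F = np.linalg.solve(A, np.array([g1dot, g2dot, 0.0]))
    return x + dt * (M @ F)          # sphere velocities v = M F

x = np.array([-L, 0.0, L])           # initial sphere positions
dt, T = 1e-3, 2 * np.pi / omega
for n in range(int(5 * T / dt)):
    x = step(x, n * dt, dt)
print("mean sphere position after 5 strokes:", x.mean())

In the full model, the mobility matrix above is replaced by the wall-corrected Rotne-Prager-Yamakawa tensors together with the shear disturbance terms, and the torque-free condition is enforced alongside the force-free one.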
This alignment also occurs for different shear rates [Fig. 1e,f]. As expected, stronger flows reorient the swimmers more quickly. Of course, at very strong shear the swimming speed no longer exceeds the local flow strength, leading to downstream advection [Videos S3, S4], but the swimmers can still be oriented upstream. As a result of this alignment with the shear plane, the 3D trajectories reduce to two dimensions over time. This is true in all tested cases, regardless of initial conditions, shear rate or swimmer type, as long as the swimmers come close enough to interact hydrodynamically with the surface. The orientation of the swimmer in the shear plane is then given by the pitch angle, θ ∈ (−π, π], where negative (positive) values indicate upstream (downstream) orientations.
Still, the mechanism of rheotaxis is not trivial. Both pullers and pushers tend to swim upstream in weak flows, but they do so in completely different fashions. On the one hand, consider the rheotaxis of pullers at low shear, as shown in Videos S5-S6 in the laboratory frame and the co-moving frame, respectively. The three-sphere pullers tend to swim almost parallel to the surface, θ ≈ −π/2, with the director t̂ pointing slightly towards the surface. Hence, the back sphere with the larger radius tends to stick out into the liquid, where the flow is stronger at larger z values, so the puller can rotate against the flow. This reorientation is referred to as the 'weathervane effect', as described for example in Refs. [29,34]. The pullers align with the shear plane rather slowly, taking tens to hundreds of oscillation periods. On the other hand, consider the pusher dynamics at low shear, as shown in Videos S7-S8. The three-sphere pushers tend to swim almost perpendicular to the surface, θ ≈ −π, with the director t̂ pointing slightly upstream. While the front sphere almost touches the surface, the back sphere sticks out into the flow, where it is advected downstream, leading to an upstream orientation. Because the tail of the perpendicular pusher sticks out much further than that of the parallel puller, the 'weathervane effect' is stronger, so the pushers have a much faster reorientation rate and only require a few three-sphere oscillations to turn upstream. Rather than a burden, the cargo can therefore also be exploited to enhance rheotaxis. This fundamental difference in the steady-state orientation also affects the velocity at which the two swimmer types can move against the flow, as described in detail below in the fixed-point analysis.
SWIMMING STATE DIAGRAMS
Until now we have described the upstream motion at low shear, which is already fairly complex, but more intricate dynamics emerge in stronger flows. We aim to quantify this systematically for different shear rates and initial conditions. Because the 3D dynamics reduce to 2D over time, we can cast them into a dynamical system in which the relevant variables are the pitch angle, θ, and the position of the central sphere, z. Figure 2 shows the evolution of these dynamics in (θ, z) phase-space diagrams, where the top row shows the behaviours of pullers and the bottom row those of pushers. The steady-state swimming behaviours correspond to stable fixed points in these phase portraits, which change for different flow rates. In weak flows, at γ̇ = 1/3 [Fig. 2(a)], the pullers mostly tend to swim upstream parallel to the surface (red), a stable fixed point around (−π/2, 1/2), in agreement with the observations in Fig. 1.
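As a concrete illustration of this reduction, a 3D trajectory can be post-processed into the (θ, z) plane as sketched below. The pitch-angle convention here is our assumption, chosen so that θ = −π/2 corresponds to upstream swimming parallel to the wall and θ ≈ −π to the head-down, nearly wall-normal state described above; the toy trajectory is purely illustrative, not simulation output.

import numpy as np

def pitch_angle(t_hat):
    # theta in (-pi, pi], with the flow along +x and the wall normal +z:
    # theta = -pi/2 -> upstream, parallel to the wall (assumed convention);
    # theta -> -pi  -> head-down, nearly wall-normal, slightly upstream.
    return np.arctan2(t_hat[..., 0], t_hat[..., 2])

def reduce_to_phase_plane(t_hat, pos, tol=1e-2):
    # Keep only the aligned part of the trajectory, |t . y| < tol, and
    # return the (theta, z) coordinates of the central sphere.
    aligned = np.abs(t_hat[:, 1]) < tol
    return pitch_angle(t_hat[aligned]), pos[aligned, 2]

# Toy data: orientation relaxing into the shear plane while the swimmer
# settles toward an upstream state near the wall (illustrative only).
n = 400
s = np.linspace(0.0, 6.0, n)
t_hat = np.stack([-np.cos(0.1 * np.exp(-s)),
                  0.3 * np.exp(-s),
                  -0.1 * np.ones(n)], axis=1)
t_hat /= np.linalg.norm(t_hat, axis=1, keepdims=True)
pos = np.stack([np.zeros(n), np.zeros(n), 0.5 + 0.2 * np.exp(-s)], axis=1)

theta, z = reduce_to_phase_plane(t_hat, pos)
print(f"late-time state: theta = {theta[-1]:.2f} rad, z = {z[-1]:.2f}")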
A small fraction of initial conditions also leads to downstream swimming parallel to the surface (blue), a stable fixed point around $(\pi/2, 1/2)$. The phase portraits corresponding to $\dot\gamma = 2/3$ and $\dot\gamma = 1$ are essentially the same as panel (a). At strong flows, at $\dot\gamma = 2$ [Fig. 2(b)], almost all pullers are first advected downstream during a transient 'toppling' motion. However, over time they will end up in a stable state on the surface, oriented upstream. If the external flow is stronger than the self-propulsion, this leads to downstream advection in the upstream orientation (green). The transition of the final state from moving upstream (red) to downstream (green) occurs at $\dot\gamma \approx 1.33$, as discussed below. The pushers show very different dynamics, because the two fixed points around $(\pm\pi/2, 1/2)$, of orientations parallel to the surface, are both unstable. Instead, at $\dot\gamma = 1/3$ [Fig. 2(c)], the pushers tend to orient themselves almost normal to the wall (brown), but still a little directed upstream. This corresponds to a fixed point around (−0.8π, 0.9), which is marked with an orange star. Regardless of the initial conditions, all pushers end in this state, for all cases tested. As the flow strength grows, the phase portraits corresponding to $\dot\gamma = 0.4$ and $\dot\gamma = 0.5$ remain essentially the same as panel (c). At even larger shear rates, however, at $\dot\gamma = 1$ [Fig. 2(d)], the orange-star fixed point also becomes unstable, so the pushers tend to detach from the wall and topple downstream indefinitely (cyan). These are the arc-like trajectories depicted in Fig. 1(f).

FIXED POINT ANALYSIS

Having identified the stable fixed points of the phase diagrams, we can determine the properties of these steady-state swimming modes as a function of shear rate. In particular, we compute the velocity component $V_x(\dot\gamma)$, which is negative for upstream swimming, the pitch angle $\theta(\dot\gamma)$, and the vertical position $z(\dot\gamma)$. These quantities evolve very differently for pushers and pullers. Pullers in weak flows can move upstream very fast, $V_x \approx -V_0$, almost at their free swimming speed [Fig. 3(a)]. As the shear rate increases, $V_x$ increases linearly [blue line]. This trend is also enhanced because the vertical position gradually increases [green line], exposing the swimmer to more flow. Therefore, the upstream swimming velocity tends to zero around $\dot\gamma_0 \approx 1.35$. At higher shear the pullers are still oriented upstream, but they are advected downstream. Surprisingly, the pushers show the opposite behaviour [Fig. 3(b)]. Their vertical position decreases with shear rate [filled circles], and the pitch angle changes from swimming perpendicular to parallel to the surface [open circles], so the swimmer is less exposed. As a result, $V_x$ is almost zero in weak flows, but it decreases with shear rate, leading to faster upstream motion. Moreover, around $\dot\gamma_c \approx 0.38$ there is a sharp transition. The vertical position suddenly drops even further, so the upstream swimming speed also jumps up to $-V_x/V_0 \approx 0.8$. At higher shear it stays relatively constant, until the pushers detach due to the toppling instability. The critical shear rate at which this occurs is $\dot\gamma_t \approx 0.56$.
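The stability assignments above follow from linearising the stroke-averaged dynamics about each fixed point. As a generic illustration (not the paper's actual equations of motion, which require the full hydrodynamic solve), the snippet below classifies a candidate fixed point of a two-dimensional (θ, z) system by the eigenvalues of a numerically estimated Jacobian; the right-hand side used here is a hypothetical placeholder that merely mimics a stable state near (−0.8π, 0.9).

import numpy as np

def rhs(y):
    # Placeholder (theta, z) dynamics, NOT the swimmer model itself:
    # a linear flow with a stable spiral at (-0.8*pi, 0.9).
    theta, z = y
    return np.array([-(theta + 0.8 * np.pi) + 0.5 * (z - 0.9),
                     -0.3 * (theta + 0.8 * np.pi) - (z - 0.9)])

def jacobian(f, y, eps=1e-6):
    # Central-difference Jacobian of f at y.
    J = np.zeros((2, 2))
    for k in range(2):
        dy = np.zeros(2)
        dy[k] = eps
        J[:, k] = (f(y + dy) - f(y - dy)) / (2 * eps)
    return J

y_star = np.array([-0.8 * np.pi, 0.9])     # candidate fixed point (orange star)
eigvals = np.linalg.eigvals(jacobian(rhs, y_star))
print("eigenvalues:", eigvals)
print("stable" if np.all(eigvals.real < 0) else "unstable")

In this picture, the loss of stability of the orange-star state at larger shear rates would show up as an eigenvalue crossing into the right half of the complex plane.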
DISCUSSION

In summary, the rheotactic performance can be enhanced by exploiting the cargo, by tuning the swimmer geometry for a given shear rate. Indeed, both cargo pushers and pullers tend to swim upstream near surfaces, but in a very different manner. Pullers move almost parallel to the wall, so they are less susceptible to flow. As a result, it takes longer to reorient against the flow, but their upstream swimming speed is generally large. This speed decreases in strong currents, but even when detached they tend to return to the surface and move upstream. Pushers, however, move almost perpendicular to the wall, so they are more susceptible to currents. Consequently, they can reorient against the flow much faster, but their upstream swimming speed is poor at low shear. Interestingly, this speed dramatically improves at intermediate shear, to the extent that the pushers actually outperform the pullers. In even stronger flows, however, the pushers detach from the wall and are washed downstream. Thus, each cargo configuration has its own advantages, which may be optimised for different applications. For example, if the swimmer were to be used to transport cargo [49] upstream in fluctuating flow environments, it may be beneficial to use a puller for its robustness, while in strong but stable flows a pusher can be more expedient. When comparing our work with related literature, some important insights are revealed. For autophoretic Janus (Au/Pt) nanorods, the pullers assume a larger tilt angle compared to pushers and they reorient faster against the flow [38], while we see that the opposite is true for three-sphere swimmers. Therefore, the far-field hydrodynamic signature (dipole moment) is not a good classifier of surface rheotaxis, and near-field flows must be considered. For spherical squirmers [32], the pullers ($B_2/B_1 > 0$) also feature two stable fixed points facing upstream and downstream, both almost parallel to the surface, but unlike three-sphere swimmers the majority of initial conditions leads to escape from the surface or downstream motion. Spherical catalytic Janus particles can also move upstream near surfaces [37]: a high catalyst coverage results in orientations almost perpendicular to the wall, while half coverage results in motion almost parallel to the wall [32]. These pitch angles may be observed in holography experiments [50]. A natural extension of our work would be to include effects of chirality, as observed in the dynamics of spermatozoa [26,27] or bacterial flagella [27]. This chirality induces an additional torque that leads to circular motion in the absence of flow [9], but in flows it can lead to different dynamical regimes separated by critical shear rates [34]. The universality of these predictions could be tested with three-sphere swimmers by introducing a counter-rotation to the head and tail spheres. Moreover, navigation strategies for complex flow environments may be designed by tuning the swimmer shape and stroke.
Role of Proteostasis Regulation in the Turnover of Stress Granules

RNA-binding proteins (RBPs) and RNAs can form dynamic, liquid droplet-like cytoplasmic condensates, known as stress granules (SGs), in response to a variety of cellular stresses. This process is driven by liquid–liquid phase separation, mediated by multivalent interactions between RBPs and RNAs. The formation of SGs allows a temporary suspension of certain cellular activities such as translation of unnecessary proteins. Meanwhile, non-translating mRNAs may also be sequestered and stalled. Upon stress removal, SGs are disassembled to resume the suspended biological processes and restore normal cell functions. Prolonged stress and disease-causal mutations in SG-associated RBPs can cause the formation of aberrant SGs and/or impair SG disassembly, consequently raising the risk of pathological protein aggregation. The machinery maintaining protein homeostasis (proteostasis) includes molecular chaperones and co-chaperones, the ubiquitin-proteasome system, autophagy, and other components, and participates in the regulation of SG metabolism. Recently, proteostasis has been identified as a major regulator of SG turnover. Here, we summarize new findings on the specific functions of the proteostasis machinery in regulating SG disassembly and clearance, discuss the pathological and clinical implications of SG turnover in neurodegenerative disorders, and point to the unresolved issues that warrant future exploration.

Introduction

Stress granules (SGs) are phase-separated biomolecular condensates of RNA-binding proteins (RBPs) and mRNAs, which form liquid droplet-like, membraneless cytoplasmic compartments in response to stress. The primary function of SGs is to promote cell survival in stress by providing a temporary reservoir for storing translationally stalled mRNAs, RBPs, and ribosomal proteins. The low-complexity domain contained in many SG-associated RBPs tends to be intrinsically disordered and serves as a driving force for the liquid–liquid phase separation (LLPS) that initiates the assembly of SGs [1,2]. Meanwhile, the composition and concentration of RBPs, the species and abundance of RNAs, the interactions between RBPs and between RBPs and RNAs, and the post-translational modifications (PTMs) of RBPs, as well as factors in the micro-environment such as pH, ionic concentration, temperature, and metabolites, can also regulate or modify the process of phase separation and SG assembly [1,3-5]. In normal cells, SGs are promptly disassembled when stress is relieved. In diseased conditions, aberrant SG assembly and/or liquid-to-solid phase transition may occur, triggering the formation of solid protein aggregates that are considered pathogenic in neurodegenerative diseases (Figure 1a). Disease-causal mutations in the genes encoding SG-associated RBPs are shown to alter the protein properties, making them less soluble and inclined to aggregate [4,6,7]. In addition, protein misfolding is increased during cellular stress and proteostasis disturbance. Misfolded proteins appear to accumulate in SGs, making the latter lose their liquid-like dynamics and form protein aggregates [2].
Thus, while the assembly and function of SGs have been a hot topic for research in the past decade, the role of SG turnover in preventing pathological protein aggregation has become increasingly clear and the molecular mechanisms regulating SG disassembly and clearance are emerging.

Figure 1. (b) Upon removal of stress, dynamic SGs are promptly disassembled by molecular chaperones, the UPS and VCP, whereas aberrant SGs and solid protein aggregates are cleared via the autophagy pathway. Abbreviations: RBPs, RNA-binding proteins; SGs, stress granules; UPS, ubiquitin-proteasome system; VCP, valosin-containing protein.

Protein homeostasis (proteostasis) refers to a balanced state in which proteins are maintained in the proper conformations, concentrations, and subcellular locations so that they can execute their cellular functions to maintain the integrity and functionality of a cell [8]. A sophisticated system has evolved to regulate proteostasis in cells, which controls the entire life cycle of proteins from synthesis to disposal. The proteostasis regulation system involves a variety of components, including the translational machinery, molecular chaperones and co-chaperones, the ubiquitin-proteasome system (UPS), and the autophagy pathway. Proteostasis disturbance is evident in normal aging and is associated with age-related neurodegenerative diseases [9,10]. In particular, malfunction of the UPS and/or autophagy can lead to accumulation and aggregation of misfolded proteins and impair organelles as well as biomolecular condensates such as SGs, which may further accelerate the degeneration process [10]. The alterations of the micro-environment and chronic stress during aging can lead to SG assembly and accumulation, which in turn may promote aging and age-related diseases. The related topics have been reviewed elsewhere [4,11,12] and are not examined here, as this mini-review focuses on the recent advances in our understanding of the molecular players and mechanisms regulating SG turnover. In this review, we first go through the major players regulating the disassembly of SGs, including the molecular chaperones, the UPS, the ubiquitin-dependent segregase valosin-containing protein (VCP), and other factors. Next, we summarize the recent findings on autophagy-mediated clearance of aberrant SGs and SG-derived protein aggregation (Figure 1b). Finally, we discuss the unsolved key questions in SG turnover with the prospect of developing novel therapeutic strategies.

Molecular Chaperones

Molecular chaperones are a class of proteins that assist in protein folding and refolding as well as the assembly of protein complexes.
Heat shock proteins (Hsps) are probably the most extensively studied chaperones; they are divided into sub-families according to their molecular weight, including Hsp90s, Hsp70s, Hsp40s, and small Hsps. Hsps play a vital role in refolding, degradation, and sequestration of misfolded proteins in either an ATPase-dependent or ATPase-independent manner [13]. Mutations in the genes encoding Hsps, such as DNAJC6, DNAJC9, and HSPB1, are reported to cause Parkinson's disease, autosomal recessive spastic ataxia of Charlevoix-Saguenay, and Charcot-Marie-Tooth neuropathy [14,15]. Furthermore, overexpression (OE) of Hsps in a variety of cell and animal models of neurodegenerative diseases is shown to reduce pathological protein aggregation [16]. In addition, recent advances in the research of chaperones have highlighted a new layer of their regulation in proteostasis and cellular homeostasis. This is related to the ability of Hsps to regulate protein phase separation and/or phase transition, thereby regulating SG disassembly and preventing misfolded proteins from accumulating in SGs (Table 1).

Table 1. Molecular chaperones regulate SG disassembly and clearance.
Hsp70: prevents misfolded proteins from accumulation [17,18]; prevents liquid-to-solid phase transition of FUS and TDP-43 [19,20]
Hdj1: prevents liquid-to-solid phase transition of FUS [21]
HspB8: prevents misfolded proteins from accumulation [17]
HspB1: inhibits LLPS of FUS and its association with SGs [22]
Hsp90: enhances DYRK3 activity that promotes SG disassembly [23]

Hsp70 is one of the first Hsps shown to facilitate SG disassembly, and it plays a central role in this regulation. Upregulation of Hsp70 promotes SG disassembly and translation restoration in cells after release from heat shock in Drosophila melanogaster [24], whereas deficiency of Hsp70 delays SG disassembly in both yeast [25] and mammalian cells [17,18]. Ydj1 and Sis1, two Saccharomyces cerevisiae Hsp40 molecular chaperones, are co-chaperones of Hsp70, which determine the substrate specificity and enhance the ATPase activity of Hsp70. Hsp70 as well as the Ydj1 and Sis1 proteins are found accumulating in SGs, and defects in the latter two reduce the disassembly and/or clearance of SGs [25]. Pharmacological activation of Hsp70 has been shown to reduce aggregation of huntingtin and alpha-synuclein. For example, YM-1 is a pharmacological mimetic of Hip (a co-chaperone that enhances binding of Hsp70 to its substrates), which could allosterically activate Hsp70 and rescue polyglutamine toxicity in a Drosophila model of spinobulbar muscular atrophy [26]. Likewise, activation of Hsp70 with YM-1 also modulated huntingtin proteostasis by reducing aggregation of huntingtin, and hence holds potential for treating Huntington's disease [27]. MAL1-271, a synthetic molecule directly increasing the ATPase activity of Hsp70, reduced synuclein aggregation in a model of Parkinson's disease [28]. As mentioned above, a fundamental role of molecular chaperones is that they facilitate protein folding and prevent the accumulation of misfolded proteins. This function is also essential for maintaining the assembly-disassembly dynamics of SGs. For instance, VER-155008, a potent small-molecule inhibitor of the Hsp70 family, induces substantial SG-localized accumulation of misfolded proteins resulting in aberrant SG formation, and the disassembly of these SGs requires the functional HspB8-BAG3-Hsp70 chaperone complex [17,18].
Of note, HspB8 is a small Hsp that binds misfolded proteins and subsequently transfers them to Hsp70, while BAG3 is a nucleotide exchange factor that endows Hsp70 with functional specificity. The HspB8-BAG3-Hsp70 complex not only helps with the autophagic degradation of misfolded proteins [29], but also assists in removing misfolded proteins from SGs to facilitate SG turnover [17]. In addition, another small Hsp, HspB1, has been shown to inhibit the LLPS of fused in sarcoma (FUS), which prevents the localization and association of FUS with SGs, suggesting that the LLPS capability of RBPs may be required for their partitioning into SGs [22]. In addition to regulating the LLPS of SG-associated RBPs, a few recent studies have demonstrated that some molecular chaperones can phase separate on their own and/or co-phase separate with RBPs, thereby preventing liquid-to-solid phase transition of SGs. For example, human Hsp40 proteins such as Hdj1 (DNAJB1) and Hdj2 (DNAJA1) display an intrinsic property of LLPS, and mutations in Hdj1 that disrupt its LLPS capability decrease its co-LLPS with FUS, reduce its association with SGs, and promote maturation of FUS into solid fibrils [21]. Likewise, Hsp70 exhibits the capability to phase separate with TDP-43 [19] and with FUS [20], thereby stabilizing them in the phase-separated, liquid-like state and preventing the progression to toxic aggregation. The chaperone Hsp90 is thought to function downstream of Hsp70 in regulating protein folding [30]. Although inhibition of the ATPase activity of Hsp90 barely elicits any accumulation of misfolded proteins inside SGs, Hsp90 can promote SG disassembly through its interaction with and stabilization of the dual-specificity tyrosine-phosphorylation-regulated kinase 3 (DYRK3) [23], as active DYRK3 promotes SG disassembly and restores mTORC1 signaling and translation [23,31].

The Ubiquitin-Proteasome System (UPS)

The UPS is the primary ubiquitin-mediated proteolytic pathway that is responsible for the elimination of over 80% of damaged or misfolded proteins in eukaryotic cells [32]. It comprises proteasomes, ubiquitin, various protein adaptors, and enzymes that regulate ubiquitination and deubiquitination of substrate proteins. The UPS can recognize ubiquitinated misfolded proteins and subject them to proteasomes for timely degradation [33]. When the functional capacity of the UPS is impaired in diseased conditions, or when misfolded proteins somehow escape from the protein quality control system, they accumulate and form pathological aggregation. Recent research in proteasome biology has demonstrated that genetic or pharmacological enhancement of the proteasome function can alleviate the neurodegenerative phenotypes in animal models [33]. The UPS also plays a pivotal role in regulating the assembly-disassembly of SGs. First, interruption of the UPS function, such as by the proteasome inhibitor MG132, induces proteostasis stress, which can elicit SG formation in cells [17,34]. Secondly, pharmacological inhibition of the ubiquitin-activating enzyme or proteasomes delays the disassembly of heat shock- and arsenite-induced SGs after the stress is relieved [35-37]. Thirdly, the deubiquitinases USP5 and USP13 are recruited to heat shock-induced SGs [38], and the recovery of heat shock-induced SGs is repressed by genetic depletion or pharmacological inhibition of these deubiquitinases [37,38]. The proteasome can provide an on-site degradation machinery for SG-localized misfolded proteins.
For example, AN1-type zinc finger protein 1 (ZFAND1), which delivers substrates to proteasomes under cellular stress [39], can be mobilized to arsenite-induced SGs, where it then recruits proteasomes [35]. Consistent with this function, the impairment of ZFAND1 or inhibition of proteasomes leads to the accumulation of misfolded proteins in SGs, subsequently eliciting the formation of aberrant SGs and/or protein aggregates that are subject to autophagic clearance [35]. Moreover, proteasome foci can exhibit properties of liquid droplets [40], and stress can trigger the routing of protein clients to the degradation condensates [41]. Notably, the UPS also regulates the recruitment of RBPs into SGs. This regulation involves the PTM of SUMOylation, which covalently attaches a small ubiquitin-like modifier (SUMO) protein to the substrate proteins [42]. Protein SUMOylation is found in SGs, and SUMOylation of RBPs modulates the processes of both assembly and disassembly of SGs [43]. In particular, RING finger protein 4 (RNF4), a mammalian SUMO-targeted ubiquitin ligase, mediates SUMO-primed ubiquitination and degradation of SG-associated RBPs in the nucleus during proteotoxic stress, and its impairment not only precludes the entry of a disease-associated FUS mutant into SGs but also dramatically delays SG disassembly upon stress relief [44]. In summary, the functional UPS maintains cellular proteostasis. Upon stress, cytoplasmic proteasomes are mobilized to SGs to enable on-site degradation of SG-localized proteins. Meanwhile, nuclear proteasomes are also present with misfolded RBPs in the nucleus, whose timely degradation prevents their translocation and deposition into cytoplasmic SGs.

Valosin-Containing Protein (VCP)

The initial link of VCP to SG turnover came from the observation that depletion of or pathogenic mutations in VCP, as well as inhibition of autophagy, reduced SG clearance [47]. Later, VCP was shown to be co-recruited with the 26S proteasome to SGs, which promoted SG disassembly [35]. Phosphorylation of VCP by unc-51-like autophagy activating kinases 1 and 2 (ULK1/2) activated VCP and enhanced its ability to disassemble heat shock-induced SGs; however, loss-of-function of autophagy-related genes (Atgs) such as Atg7 did not impair SG disassembly in mouse embryonic fibroblast cells or cause the same muscle pathology elicited by ULK1/2 deficiency in mice [48], thereby suggesting an autophagy-independent mechanism. A recent study on the Ras-GTPase-activating protein-binding protein 1 (G3BP1) has provided novel insights into the function and molecular mechanism of VCP in regulating SG turnover [36]. Specifically, heat shock induces ubiquitination of G3BP1, and ubiquitinated G3BP1 interacts with VCP, which dissociates G3BP1 from SGs. This process is mediated by the endoplasmic reticulum (ER)-associated VCP adaptor protein, FAS-associated factor 2 (FAF2), which recognizes ubiquitinated G3BP1 and recruits VCP to SGs. As G3BP1 is a nucleating protein of the SG network of RBPs and RNAs, extraction of G3BP1 from SGs by VCP results in SG collapse and disassembly (Figure 2). In addition, SG turnover is context-dependent: acute heat shock-induced SGs are dismissed via the above-described mechanism by VCP, whereas SGs formed during prolonged heat stress are cleared via the autophagy pathway [36].

Figure 2. VCP extracts G3BP1 from SGs and triggers SG disassembly.
G3BP is an essential protein and the core of the interaction network of SGs. Upon heat shock, G3BP1 in SGs undergoes massive ubiquitination. The ER-associated protein FAF2 recognizes ubiquitinated G3BP1 and delivers it to VCP. The "extraction" of G3BP1 from SGs by VCP triggers the dissociation of the other SG proteins, leading to the disassembly of SGs. Abbreviations: ER, endoplasmic reticulum; FAF2, FAS-associated factor 2; G3BP1, Ras GTPase-activating protein-binding protein 1; SGs, stress granules; Ub, ubiquitin; VCP, valosin-containing protein.

Autophagy

Autophagy is a fundamental and evolutionarily conserved cellular degradation pathway, by which protein aggregates, damaged organelles, and other unnecessary or dysfunctional cellular components are removed via lysosome-mediated degradation [49]. Autophagy receptors such as p62, also known as sequestosome 1 (SQSTM1), contain LC3-interacting regions, which recognize the substrates via ubiquitin- and lipid-based signals. The phagophore grows and engulfs the targets, forming a closed, double-membrane vesicle known as the autophagosome, which fuses with a lysosome for degradation and recycling [50]. Autophagy is crucial for stress adaptation and proteostasis regulation [51], and autophagic and endolysosomal dysfunction is linked to various human diseases [52]. Furthermore, multiple lines of genetic and pharmacological evidence have demonstrated the prominent role of autophagy in SG clearance [47,53-57]. Thus, pharmacological activation of autophagy has been proposed as a potential therapeutic means to restore proteostasis and exert beneficial effects in neurodegenerative disorders [58].

p62/Sequestosome 1 (SQSTM1)

Delivery of SGs to autophagic degradation relies on autophagy receptors such as p62/SQSTM1.
Notably, SQSTM1 is a causative gene in patients with amyotrophic lateral sclerosis (ALS) [59] and frontotemporal dementia (FTD) [60,61], and its protein product p62 is found in the pathological protein inclusions of patients with ALS/FTD [62-64] and in SGs colocalized with the autophagosome marker LC3-II [35]. The association of p62 with SGs is enhanced in persisting SGs [65] and in SGs containing an ALS/FTD-linked FUS mutant [53]. Meanwhile, it has been shown that a K63 polyubiquitin (poly-Ub) chain can induce p62 phase separation in vivo and in vitro, which recruits LC3-II and fosters autophagic degradation of p62 [66]. Given that chaperones such as Hsp27, Hsp40, and Hsp70 are recruited to SGs by co-phase separation with RBPs [19-22], it is possible that p62 is partitioned into SGs by poly-Ub-induced phase separation, which promotes autophagic clearance of p62-associated aberrant SGs. p62 can recognize methylated proteins in addition to ubiquitinated proteins. SG-associated RBPs such as FUS are symmetrically methylated on arginines, which are recognized by another ALS-linked protein, survival motor neuron (SMN). SMN then brings p62 to arginine-methylated RBPs, triggering the p62-mediated autophagic clearance of SGs [56]. Patients with C9ORF72 repeat expansions, the major genetic cause of ALS [67], accumulate arginine-dimethylated proteins that co-localize with p62, whereas mice lacking p62 accumulate arginine-methylated proteins [56]. These findings suggest that C9ORF72 associates with the autophagy receptor p62 and affects autophagy-dependent elimination of SGs. Given that SGs are rich in arginine-containing RBPs [68,69], protein methylation at arginines may serve as a unique signal for p62-mediated autophagic clearance of SGs.

Chaperonin-Containing TCP-1 Subunit 2 (CCT2)

The TRiC (chaperonin TCP-1 ring complex) subunit chaperonin-containing TCP-1 subunit 2 (CCT2) is a newly identified aggrephagy receptor in mammals, which specifically mediates the elimination of solid aggregates, but not liquid-like condensates, via the autophagy pathway. Additionally, it has been shown that CCT2 functions independently of ubiquitin or the TRiC complex to facilitate the autophagic clearance of solid protein aggregates [70]. Although a direct role of CCT2 in SG clearance has yet to be demonstrated, multiple lines of evidence suggest its possible involvement. First, the subunits of TRiC are abundantly expressed in SGs [69]. Secondly, the TRiC complex functions to prevent protein aggregation [71] and CCT2 is associated with aggregation-prone proteins [72,73], while SGs are enriched with aggregation-prone RBPs [74-79]. Thirdly, CCT2 interacts with FUS, and OE of CCT2 enhances autophagic clearance of mutant FUS only when it forms solid aggregates [70]. Thus, it is possible that different autophagy receptors govern the clearance of SGs at different phases: p62 recognizes protein ubiquitination and arginine methylation in aberrant, less dynamic SGs, whereas CCT2 mediates the clearance of solid protein aggregates that are derived from liquid-to-solid phase transition of SGs.
Concluding Remarks

As summarized in Figure 3, to maintain the liquid-like, dynamic property of phase-separated SGs, various factors and pathways in cells participate in the regulation of SG disassembly, such as the molecular chaperones (which co-phase separate with SG-associated RBPs and prevent the liquid-to-solid phase transition of SGs), the UPS (which performs on-site degradation of SG-localized misfolded proteins), and VCP (which extracts G3BP1 from SGs and causes the subsequent SG disassembly). When aberrant SGs form or SGs become solid aggregates, the autophagy receptors p62 and CCT2 recognize and target aberrant SGs and solid aggregates for autophagy-mediated clearance, respectively.

Figure 3. The turnover of SGs: disassembly and clearance. (a) The disassembly of liquid-like, dynamic SGs is mediated by molecular chaperones, the UPS and VCP. Chaperones prevent liquid-to-solid phase transition of SGs by co-phase separation with SG-associated RBPs; the proteasome provides an on-site degradation machinery for elimination of misfolded proteins in SGs; and VCP dissociates ubiquitinated G3BP1 from SGs, triggering SG disassembly. (b) Aberrant, persisting SGs and solid protein aggregates are recognized by the autophagy receptors p62 and CCT2, respectively, and are subsequently eliminated by autophagic degradation. Abbreviations: CCT2, chaperonin-containing TCP-1 subunit 2; G3BP1, Ras GTPase-activating protein-binding protein 1; RBP, RNA-binding protein; SGs, stress granules; UPS, ubiquitin-proteasome system; VCP, valosin-containing protein.

Although the UPS enables a highly efficient degradation of SG-localized misfolded proteins, it is unclear how exactly these proteins are recognized within SGs and sorted out for degradation by the UPS.
G3BP1/2 are so far the only SG-associated RBPs that have been proven essential for SG assembly, as their knockout disables the formation of SGs [80-82] and their removal by VCP is sufficient to trigger SG disassembly [36]. What makes G3BP1/2 so unique, sequence- and structure-wise? Does VCP extract any other SG-associated protein, or any protein associated with other ribonucleoprotein granules such as P-bodies and nuclear bodies? The autophagy pathway plays a major role in the clearance of aberrant SGs and SG-derived protein aggregation. In chronic neurodegenerative diseases, however, the liquid-to-solid phase transition and maturation of SGs into pathological aggregates often develop gradually. So, how do cells surveil the states of SGs and promptly recognize aberrant SGs and aggregates? Along this line, protein ubiquitination appears to be a molecular signal used for both degradation of misfolded proteins and turnover of aberrant SGs. As such, when an SG-associated RBP, e.g., G3BP1/2, is ubiquitinated, how do cells distinguish whether the ubiquitination of G3BP1/2 labels it for routine protein degradation or signals the SGs for disassembly and/or clearance? Finally, given the association of proteostasis regulation and SGs in neurodegenerative diseases, there have been therapeutic attempts targeting molecular chaperones, the UPS, and autophagy [16,33,58]. With recent advances in the understanding of the regulation of SG disassembly and clearance, novel intervention strategies should be considered, such as enhancing the co-LLPS of Hsps with RBPs, improving the function of VCP in removing G3BP1/2, and promoting precise recognition and efficient clearance of aberrant SGs via autophagy. Together, we expect that further elucidation of the regulatory mechanisms of SG turnover will assist in the development of effective treatments for neurodegenerative diseases.
Unusual scenario of the temperature evolution of magnetic state in novel carbon-based nanomaterials

Two porous carbon-based samples doped with Au and Co are investigated. The neutron diffraction study reveals an amorphous structure of both samples. The Co-doped sample exhibits a long-range ferromagnetic (FM) ordering at 2.6 K. The NMR investigations demonstrate that the samples are obtained by a partial carbonization of the initial aromatic compounds and do not reach the state of glassy carbon. The magnetization, the longitudinal nonlinear response to a weak ac field, and the electron magnetic resonance data give evidence for the presence of FM clusters in the samples well above 300 K. The short-range character of the FM ordering in the Au-doped sample transforms below T_C ≈ 210 K into another inhomogeneous FM state. Besides the FM clusters, this state contains a subsystem with a long-range FM ordering (matrix), formed by paramagnetic centers existing outside the clusters. The nonlinear response data suggest a percolative character of the long-range FM matrix, probably connected with the porous sample structure. The magnetization data give evidence for the formation of an inhomogeneous state in the Co-doped sample similar to that in the Au-doped one. However, this state is formed at higher temperatures, lying well above 350 K, and exhibits a more homogeneous arrangement of the FM nanoparticles and the FM matrix. The temperature dependence of the magnetization in the Au-doped sample is attributable to changes of the domain formation regime in the FM matrix on cooling, connected with the inhomogeneous character of its FM state. Such a peculiarity is absent in the Co-doped sample below 350 K, which is in agreement with the formation of the FM state in this sample at much higher temperatures. Further cooling below T ∼ 3 (10) K leads to a steep increase of the magnetization in both samples. This is attributable to the domain rearrangement in the inhomogeneous FM state at low temperatures.

Introduction

Nowadays, it is evident that carbon-based nanomaterials represent a novel class of ferromagnetic (FM) matter, which does not contain basically any FM metal components [1]. Such materials attract considerable attention due to the high-temperature FM behavior observed in various carbon structures, accompanied by magnetic hysteresis and remanent magnetization. These features make the materials above quite attractive for applications not only in technology, but in biology and medicine as well, owing to their low toxicity due to the vanishing concentration of metallic elements [1,2]. Experimental investigations establishing intrinsic magnetism of defect-rich carbon structures [1,2] have been supported by extensive theoretical work. Namely, the FM behavior has been predicted in such structures as (i) graphite surfaces with negative Gaussian curvature [3]; (ii) a mixture of carbon atoms with alternation of sp²-sp³ bonds [4]; (iii) structures containing graphene zigzag edges [5,6]; (iv) disordered graphite with random single-atom defects [7]. In turn, theoretical values of local magnetic moments µ ∼ 1-2 µ_B, connected with intrinsic defects or disorder [8,9], have been supported by experimental investigations of highly oriented pyrolytic graphite yielding µ ∼ 0.2-1.5 µ_B per defect at a distance between defects of ∼ 0.5-4 nm [10].
Magnetic properties of powder and glassy carbon-based nanomaterials, including those doped with metals, have been investigated recently [11-13], leaving, however, many important details unclear. These include the structure and local structure of the compounds, the character of the FM ordering, a possible similarity of the local structure to that of fullerenes or other carbon-based materials and, eventually, the distribution of the magnetization over the sample volume. In this work, we have studied two carbon-based compounds doped with Au and Co (note that Au is nonmagnetic, whereas Co ions may possess a local magnetic moment), which are similar to the Au-doped and Co-doped samples, respectively, investigated in [11-13]. We have obtained information on their structural and magnetic properties, using several independent methods to clarify some of the issues mentioned above. The magnetic behavior of the Au-doped sample gives evidence for the presence of a short-range FM order near room temperature. Taking into account the observed amorphous structure of the Au-doped sample, this implies the formation of FM nanoparticles at higher temperatures, in agreement with the results of [11-13]. However, a subsystem of paramagnetic (PM) centers (referred to below as the "matrix"), not involved in the formation of the FM clusters, has been found in the Au-doped sample on cooling. This matrix exhibits a magnetic ordering, leading eventually to an inhomogeneous FM state in the Au-doped sample. The obtained results suggest the formation of such a state in the Co-doped sample at much higher temperatures. At low temperatures, a complex temperature evolution of the FM spin arrangement of this state, accompanied by the appearance of a long-range FM order, is observed in both samples. The distinctions in the nonlinear response to a weak ac field, measured in different large-scale parts of the initial Co-doped sample, suggest corresponding differences in their magnetic state, as well as in the properties and density of the FM nanoparticles entering them.

Experimental details

The porous glassy carbon samples doped with 0.004 mass % of Au (S-Au) and with 0.117 mass % of Co (S-Co), which had been prepared and studied earlier in [11-13], were investigated. The preparation details have been described in Refs. 11-13. The atomic force microscopy (AFM) investigations of the samples doped with Ag, Au and Co gave evidence for the presence of carbon particles with a broad size distribution, characterized by the average, R_av, and the maximum, R_max, particle radii [11-13]. The values of R_av ∼ 60 nm and R_max ∼ 110 nm were found to be close in all the samples above. In this work the structure and the magnetic state of the samples were studied with neutron diffraction, whereas the local structure was obtained with solid-state NMR investigations. Magnetic properties were investigated by measurements of the dc magnetization with a SQUID magnetometer, by registration of the second harmonic of the longitudinal nonlinear response to a weak ac magnetic field, and by recording the electron magnetic resonance spectra. The neutron powder diffraction study was carried out using the PNPI superpositional diffractometer equipped with 48 counters in four sections at the WWR-M reactor, beam line 9. Neutron diffraction patterns were measured at 2.6 and 300 K in the superposition mode, using monochromatic neutrons with a wavelength λ = 1.7526 Å in the angular range of 4° ≤ 2θ ≤ 145° with a step of 0.1°.
Solid-state NMR spectra were recorded under magic angle spinning (MAS) conditions at ambient temperature, using an AVANCE II-500WB spectrometer (Bruker) operated at 125.8 MHz for the 13C nucleus. Samples were packed in a 4 mm zirconium rotor and spun at a 7 kHz frequency. A single-contact 1H→13C cross-polarization (CP) technique with 3 ms contact time was applied for recording the 13C CP spectra, with high-power proton decoupling at a frequency of 100 kHz. To increase the CP efficiency, a sample was blended with a proton-containing, chemically inert material (Al(OH)3) in a 1:1 weight composition. For the 13C direct polarization spectrum, a 2.3 µs pulse (π/4) was used with a repetition time of 6 s and proton decoupling at a frequency of 100 kHz. The number of scans was 4k or 8k to obtain a reasonable signal-to-noise ratio. All chemical shifts are given in ppm from tetramethylsilane. Deconvolution of the obtained spectra was performed using the DMFIT software [14].

Magnetization, M(B), was measured with a SQUID magnetometer in magnetic fields B up to 5 T on increasing and decreasing the field. The dependence M(T) was measured in a constant magnetic field B between 1 mT and 5 T, after cooling the sample from 300 K down to 3 K in zero field (zero-field-cooled magnetization, M_ZFC) or in the applied field (field-cooled magnetization, M_FC). Thermoremanent magnetization (TRM) was investigated after cooling the sample from 300 K to 3 K in a non-zero magnetic field and then reducing the field to zero. The magnetization data are presented below after subtraction of the diamagnetic contribution.

The measurements of the second harmonic of the magnetization, M_2, of the longitudinal nonlinear response were performed in parallel steady and alternating magnetic fields, H(t) = H + h sin(ωt) (where h ≈ 14.3 Oe and f = ω/2π ≈ 15.7 MHz), under the condition M_2 ∝ h². The latter permits us to analyze the results in the framework of perturbation theory. The real and imaginary phase components of M_2, Re M_2 and Im M_2 respectively, were recorded simultaneously as functions of H at different sample temperatures between 100 and 350 K. The field H was scanned symmetrically with respect to the origin to control the magnetic field hysteresis of the signal. According to the symmetry properties of M_2, its presence in the response is connected with the existence of a spontaneous FM moment in a sample. The amplitude of H was 300 Oe. The installation and the method of separation of the M_2 phase components are described in [15]. The sensitivity of these measurements was ∼ 10⁻⁹ emu. The applied method permits us to detect the formation of FM clusters in a PM medium and to trace the temperature evolution of the cluster subsystem, owing to its extreme sensitivity to the FM fraction of a sample. This has been demonstrated by our investigations of the Nd1−xBaxMnO3 and La0.88MnOx manganite perovskites [16,17].
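To illustrate what the condition M_2 ∝ h² means in practice, the following toy calculation (our own sketch, not the measurement software) compares the second-harmonic amplitude extracted by FFT from the response to H(t) = H + h sin(ωt) with the quasi-static perturbative estimate |M_2| ≈ (h²/4)|d²M/dH²|; a saturating model magnetization curve is assumed, and relaxation effects (entering through Eq. (1) discussed below) are ignored.

import numpy as np

def M(H, chi=1.0, H0=50.0):
    # Toy saturating magnetization curve (arbitrary units).
    return chi * H0 * np.tanh(H / H0)

h = 14.3                                          # ac amplitude (Oe), as in the text
t = np.linspace(0.0, 1.0, 4096, endpoint=False)   # one period of the ac field

def second_harmonic(H):
    # Amplitude of the 2-omega Fourier component of M(H + h sin(wt)).
    m = M(H + h * np.sin(2 * np.pi * t))
    c = np.fft.rfft(m) / len(t)
    return 2 * np.abs(c[2])

H, dH = 120.0, 1e-2
m2_fft = second_harmonic(H)
m2_pert = (h**2 / 4) * abs(M(H + dH) - 2 * M(H) + M(H - dH)) / dH**2
print(f"FFT: {m2_fft:.4f}   (h^2/4)|M''|: {m2_pert:.4f}")

The two estimates agree closely, confirming that in this regime the second harmonic directly probes the curvature of M(H); in the experiment, the dissipative relaxation channel additionally contributes to the out-of-phase component Im M_2.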
Measurements of the electron magnetic resonance (EMR) spectra were performed with a special home-made X-band (f = 8.37 GHz) ESR spectrometer, which provided highly sensitive registration of wide lines [18]. It is supplied with a cylindrical two-mode balanced cavity with the H111 type of electromagnetic oscillations. The steady magnetic field H was directed along the cylinder axis (z axis). A sample was placed at the bottom of the cavity, where it was acted on by the linearly polarized ac field h of the excitation mode, directed along the x axis. The recorded signal in the receiving mode was proportional to the off-diagonal component of the magnetic susceptibility tensor, M_y(ω) = χ_yx(ω) h_x(ω).

Neutron powder diffraction

The diffraction patterns for both investigated samples at room temperature, and for the Co-doped sample at T = 2.6 K, are displayed in Fig. 1. It can be seen that the lines for the Co-doped sample are appreciably narrower. This suggests a larger size of the structural clusters formed in the Co-doped sample with respect to those of the Au-doped sample. At the same time, the positions of the main maxima are almost identical for both samples, implying a close internal cluster structure. For the Co-doped sample, the peak amplitudes increase on cooling from 300 K down to T = 2.6 K, as follows from the differential signal (curve 4). This testifies to the presence of magnetic scattering, connected with a long-range FM order in the Co-doped sample. The effect of the temperature factors on the neutron diffraction patterns is much smaller than the observed difference.

NMR with magic-angle spinning

For a better understanding of the composition and molecular structure of the samples, we obtained the NMR spectra of hydrogen (1H) and carbon (13C) nuclei under the MAS conditions. The main peaks in the 1H spectrum (see Fig. 2) correspond to the aliphatic (2.3 ppm) and aromatic (6.8 ppm) groups. The signal arising from the aromatic groups exhibits a considerable anisotropy of the chemical shift (CSA), as expected for aromatic protons. An aromatic-to-aliphatic signal ratio of ∼ 4 was found. The 13C CPMAS spectrum displayed in Fig. 3 reveals four main lines, which can be interpreted as follows: (i) the 126 ppm line is connected with polynuclear aromatics, and the half-width of the peak, constituting only 15 ppm, looks quite narrow for such systems, which points to a high isotropy of the environment; (ii) the 137 ppm line is attributed to substituted aromatics with a bent structure; (iii) the 153 ppm line is connected with an oxygen-substituted aromatic, belonging probably to carbonyls; (iv) the 36 ppm line is attributed to aliphatic chains (corresponding to the peak at 2 ppm of the 1H spectrum in Fig. 2). Using a CPMAS spectrum at a lower spinning frequency (4.5 kHz) [19], we obtained the main components of the CSA tensor of the 126 ppm line: δ11 = 219.17, δ22 = 145.18 and δ33 = 11.25. Such magnitudes are typical of carbon atoms with the sp² type of chemical bond in aromatic compounds (such as benzene, graphite, etc.), which confirms the aromatic origin of the investigated material. In addition, the obtained spectra permit us to conclude that the material has no fullerene fragments, because no lines have been detected either at 143 ppm (corresponding to C60, with its narrow characteristic line) or within the range of 130-150 ppm (corresponding to C70 and higher fullerenes). To estimate the composition of the material, a direct-acquisition spectrum with proton decoupling (hpdec) has been obtained. The separation of the obtained signal into spectral components permits us to obtain their integral intensities and thus to estimate the contributions of different structural fragments to the composition of the investigated material (see Table 1). To summarize, the NMR data demonstrate that the investigated material consists of partially polymerized aromatic compounds (or polynuclear aromatics).
Because the initial materials for the synthesis of the glassy carbon samples were just aromatic compounds of various types (furfuryl alcohol, ether isooctylphenol, and dibutyl phthalate) [11-13], it is natural to conclude that the basis of the final material is derived from the partially transformed initial components. It is worth mentioning that the spectra are very similar to those of the products of carbonization of polyfurfuryl alcohol after an incomplete heat treatment [20], but do not correspond completely to the spectrum of the commercial glassy carbon [21]. According to the NMR data, our samples are composed of aromatic and aliphatic organic fragments. Their random distribution, typical of an amorphous structure, is found from the neutron diffraction investigations. This implies the presence of multiple intrinsic defects acting as PM centers, which are well stabilized in aromatics and govern their magnetic properties.

Second harmonic of magnetization of the longitudinal nonlinear response

The field and temperature dependences of the phase components, Re M_2(H, T) and Im M_2(H, T), of the second harmonic of magnetization, M_2, of the longitudinal nonlinear response were obtained in the regimes of slow cooling and slow heating of a sample. The stabilization time of the sample temperature before the signal recording was not less than 300 s at any T. Because the neutron diffraction data discussed in Subsection 1 indicate the presence of the long-range FM order at least in S-Co, we used samples in the form of a plate, orienting the magnetic field H(t) along its plane to decrease a possible demagnetization effect. A typical in-plane size exceeds the thickness of the plate by more than five times for both samples. The dependences Re M_2(H) and Im M_2(H) for the Au-doped sample at several characteristic temperatures are displayed in Fig. 5. At T = 293.3 K, both phase components of M_2 reveal a typical signal from FM clusters, with an extremum in a weak steady field H_ext ≈ 20 Oe, as well as field hysteresis. The latter resembles the hysteresis in doped manganites, where the FM clusters are formed already above T_C due to the magneto-electronic phase separation [16,17]. On lowering the temperature down to 235.5 K, another signal is added to the FM cluster signal of the Re M_2 component, exhibiting a dependence on H close to a linear one (see Fig. 5(a), panel 235.5 K). Such a signal is typical of a PM matrix in the critical regime. This suggests that a part of the PM centers is not involved in the FM clusters during their formation, but exhibits a short-range exchange interaction. We attribute such centers to the matrix. Therefore, the sample can be characterized by a state of magnetic phase separation. When the second-order susceptibility can be introduced under the condition M_2 ∝ h², the response of exchange magnets in the PM region is given by Eq. (1). The first term in the right-hand side of Eq. (1) is connected with the nonlinearity of the magnetization curve, M(H), and the second term with the influence of the external magnetic field on the relaxation processes. When the latter is absent, one has ∂Γ/∂ω_0 = 0 and the second term vanishes. Eq. (1) can be applied also to the analysis of the M_2 data (obtained under the condition M_2 ∝ h²) for an ensemble of single-domain (SD) magnetic particles in a superparamagnetic (SP) regime [23].
A hysteretic response of M_2(H) indicates the presence of a strong magnetic anisotropy, characterized by an inequality involving the anisotropy constant K, the particle volume V and the blocking temperature T_b. However, in such a case Eq. (1) can be used only for a qualitative analysis of the M_2 data. In the weak-field limit gµH << Ω(τ) = k T_C τ^{5/3}, where Ω(τ) is the energy of critical fluctuations, one obtains the scaling expressions χ ∝ τ^{−γ} and Re M_2 ∝ h² H τ^{−γ2} (Eq. (3)). Here, γ and γ2 are the critical exponents of the linear susceptibility and of M_2, respectively, with the values of γ = 4/3 and γ2 = 14/3 predicted for a cubic ferromagnet. Therefore, Re M_2(H, τ) is characterized by a linear dependence on H in the PM region, with Re M_2(H=0) = 0. The appearance of a hysteretic signal with Re M_2(H=0) ≠ 0 indicates the presence of a remanent magnetization, which is related to the formation of a spontaneous FM moment in a sample. Note that the M_2 response of the cubic ferromagnet CdCr2Se4 in the critical PM region, 2T_C > T > T_C, is well described by Eq. (1) [22]. The crossover from the weak-field limit above to the strong-field limit, gµH >> Ω(τ), is accompanied by the appearance of an extremum in the dependence Re M_2(H). The position of this extremum, H_ext, is shifted towards low fields with decreasing T, because the energy of critical fluctuations, Ω, depends on τ, reaching a minimum at T_C. At the same time, the extremum amplitude increases on lowering the temperature, exhibiting a maximum near T_C. Below T_C, but before the onset of the domain formation, H_ext is shifted towards strong fields. Hence, the dependences of H_ext and Re M_2ext on T exhibit extrema near T_C, indicating a qualitative similarity to a second-order magnetic transition. To control the magnetic state of the samples, below we use the temperature dependences of the signal parameters, which are presented in Fig. 7(b). As follows from the plots at 235.5 K and 214.7 K in Fig. 5(a), a second extremum (a minimum at H > 0) appears in the Re M_2(H) component with decreasing temperature. The position of the minimum, H_min2, is shifted towards low fields with decreasing temperature, masking the weak-field signal from the FM clusters (see the plot at 212.2 K in Fig. 5(a)), and reaches a minimum near 210 K, as can be seen in Fig. 6(b). In addition, the signal amplitude at the extremum, Re M_2min2(T), exhibits a maximum at the same temperature, as follows from Fig. 6(a). According to the arguments above, this temperature is addressed to the onset of magnetic ordering of the matrix, that is, T_C ≈ 210 K. The extremum amplitude of the signal resulting from the FM clusters also exhibits some enhancement near T_C, which is more expressed in the Re M_2 component. This is connected with a small but increasing contribution of the signal coming from the matrix to the total response. Indeed, the corresponding extremum on cooling is shifted towards weak fields, whereas its amplitude increases. This leads to an increasing contribution of the matrix signal in weak fields at T → T_C (see Fig. 6). On the other hand, such behavior gives evidence for some interaction of the FM clusters with the matrix (and for an intercluster coupling mediated by the matrix), suggesting that a part of the FM clusters is involved in the FM ordering along with the matrix. At the same time, the presence of the characteristic cluster signal well below T_C, following from the Re M_2(H) data at 114 K in Fig. 5(a), indicates that some part of the clusters is not involved in the FM ordering together with the matrix.
Let us now discuss the behavior of the Im M_2(H, T) component of the response. It exhibits an opposite (positive) sign with respect to Re M_2(H, T) at high temperatures, where a contribution from the matrix is not observed. This indicates that the main contribution to Im M_2(H) is connected with the effect of H(t) on the magnetic relaxation, given by the second term in the right-hand side of Eq. (1). As follows from Fig. 5(b), the signal from the matrix with a linear dependence on H does not appear in the Im M_2 component on cooling, even down to T close to T_C. This means that the magnetic relaxation of the matrix is effective, leading to the strong inequality 2ω/Γ << 1. Therefore, according to the first term in the right-hand side of Eq. (1), the contribution of the matrix signal to Im M_2 ∝ (2ω/Γ) Re M_2 is negligible. Hence, the Im M_2(H, T) data are preferable for tracing the evolution of the cluster subsystem. The temperature dependence of the positions, H_min1 and H_max, of the weak-field extrema in the Re M_2(H) and Im M_2(H) components, respectively, is displayed in Fig. 6(b). It can be seen that both parameters H_min1 and H_max are practically independent of T, both above and below T_C. This means that the critical properties of the FM cluster subsystem, which is responsible for the weak-field signal, are not changed within the investigated temperature interval. Such behavior is characteristic of a system of non-interacting single-domain FM particles. Taking into account the small variation of the signal amplitude at the extremum, Im M_2max, this behavior suggests that the interaction of the FM cluster subsystem with the matrix is rather weak in the temperature range above, probably due to the specific porous structure of the samples [11]. A similar temperature behavior of the extremum positions of the M_2(H) response from the FM clusters has been observed by us in manganites in the regime of phase separation above T_C [16,17]. The lack of evidence for an inter-cluster interaction permits us to identify the clusters with the carbon particles (C-particles) found in porous carbon samples doped with Ag, Au or Co, obtained by the same method as in our case [11-13]. Indeed, according to the AFM results, the C-particles observed on the sample surface are characterized by the average radius R_av ∼ 60 nm and practically do not touch each other [11], which suggests a vanishing exchange interaction between them. Note that an FM cluster can occupy only a part of a C-particle, within its core, whereas the shell of the C-particle is obviously characterized by a larger structural disorder with respect to the core, due to the lacking external bonds. This implies different core and shell structures, typical of any usual nanoparticles (see e.g. Ref. 24). The FM clusters and the matrix make the principal contributions to the signal in weak and relatively strong fields, respectively. Therefore, for the analysis of the behavior of these magnetic subsystems it is convenient to use the cross sections of the Re M_2(T, H) data at some values of the magnetic field, H_j, taken below and above H_min1, respectively. It is interesting to obtain also the cross section of the Re M_2(T, H) data at H = 0, related to the remanent magnetization. The temperature dependences of Re M_2(T, H_j) at H_j = 0, 10 and 200 Oe are displayed for the S-Au sample in Fig. 7(b).
T at H_j = 200 Oe (where the main contribution comes from the matrix) contains an extremum near T ≈ 210 K, similar to that of Re M2min2(T) in Fig. 6 (a). The scaling law of Eq. (3) describes the temperature behavior above 210 K reasonably well, yielding T_C = 209(1) K, in good agreement with the value found above from Re M2min2(T). However, the obtained value of the critical index, γ2 = 0.7(1), is much smaller than the value predicted for cubic ferromagnets (14/3). This implies a percolative character of the FM ordering, connected again with the porous structure of the sample.
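As an illustration of the scaling analysis just described, the following minimal sketch extracts T_C and γ2 by least squares. The explicit power-law form of Eq. (3), Re M2 ∝ τ^(−γ2) with τ = T/T_C − 1, is assumed here (the display is not reproduced in the text), and the data array is a synthetic stand-in for the measured cross section.

```python
# Sketch: fit a critical power law Re M2(T) = A * tau^(-gamma2), tau = T/T_C - 1,
# to a cross section taken above T_C. Functional form and data are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(T, A, Tc, gamma2):
    tau = T / Tc - 1.0            # reduced temperature
    return A * tau ** (-gamma2)   # growth of Re M2 on approaching T_C from above

T = np.array([215.0, 220.0, 230.0, 240.0, 260.0, 280.0, 300.0])  # K
rng = np.random.default_rng(0)
m2 = scaling_law(T, 1.0, 209.0, 0.7) * (1 + 0.03 * rng.standard_normal(T.size))

# Bound T_C below the lowest data point so tau stays positive during the fit.
popt, pcov = curve_fit(scaling_law, T, m2, p0=(1.0, 205.0, 1.0),
                       bounds=([0.0, 150.0, 0.1], [10.0, 214.0, 5.0]))
errs = np.sqrt(np.diag(pcov))
print(f"T_C = {popt[1]:.0f}({errs[1]:.0f}) K, gamma2 = {popt[2]:.2f}({errs[2]:.2f})")
```

With clean data this recovers values close to T_C = 209(1) K and γ2 = 0.7(1) quoted above; the quality of the fit deteriorates quickly if points too close to T_C, where the weak-field cluster signal interferes, are included.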
Note that the maximum value of Re M2(T, 200 Oe) at T = T_C is much smaller (by ∼3 orders of magnitude) than that of the doped cobaltites La1−xSrxCoO3 (x = 0.18 and 0.2), both of which reveal a long-range percolative FM order in the ground state [25]. One order of magnitude of this difference can be attributed to the smaller moment of a paramagnetic center in S-Au, ∼1 µB (against ∼3/2-2 µB in manganites). However, the large residual part of this difference suggests that (i) only a small part of our sample exhibits FM order, or the matrix and the FM cluster moments partly compensate each other, e.g., due to an antiferromagnetic (AF) coupling; and/or (ii) the magnetic moment per magnetic center is smaller than 1 µB. The dependence of Re M2(T, 0), which is related to the remanent magnetization, exhibits a monotonic increase with T, whereas its value is rather close to the value of the M2 response at the extremum. In addition, Figs. 6 and 7 (a) give evidence for an insignificant influence of the different regimes of temperature treatment (slow cooling or slow heating) on the M2 parameters within the investigated temperature interval. This indicates a weak coupling of the magnetic moment with structural defects. As can be seen in the inset to Fig. 6 (a), the "coercive force", H_C2, determined by the condition Re M2(H_C2) = 0, gradually increases with lowering temperature. The behavior of H_C2 between room temperature and T_C is determined by the FM clusters, because the matrix in this temperature interval is still in the PM regime. The steep decrease of H_C2 near T_C is connected with the contribution of the non-hysteretic matrix signal, which exceeds the cluster one and masks it. Below T_C, the onset of domain formation should take place in the matrix. However, the development of the domain structure on cooling, usually accompanied by pinning, occurs in our heterogeneous magnetic system over an expanded temperature region. This is evident in the plots of Fig. 5 (a) at 205.3 and 114 K, which show the transformation of the hysteresis loop in the wings of the Re M2(H) curve on cooling. The plot at 114 K also clearly demonstrates the presence of the characteristic M2 signal (with an extremum in a weak field) from FM clusters below T_C. This suggests a weak coupling of some part of the FM clusters with the matrix, as mentioned above. The process of domain formation in S-Au, stretched out in temperature, can be explained as follows. On the one hand, the porous structure of this sample hinders the FM ordering of the matrix, in agreement with the above discussion. On the other hand, the increase of magnetostatic energy upon FM ordering stimulates domain formation in a sample with a single magnetic phase, so as to decrease this energy. However, in S-Au, which consists of two magnetic subsystems, a decrease of this energy can be provided by opposite orientations of the FM matrix and FM cluster moments. This should lead to a change of the domain-formation process with respect to a sample having a single magnetic phase. As will become evident below, this assumption is in agreement with the static magnetization data. Next, we discuss the M2 data for sample S-Co. In Fig. 7 (b) the Re M2(H) signals obtained at room temperature from two bits (bit1 and bit2, with masses 9.8 and 14.5 mg, respectively) cut from different parts of S-Co are displayed. The signals exhibit quite different dependences on H. In addition, the signal amplitude of bit2 is about two times higher than that of bit1 (both signals are normalized to the bit mass). This gives evidence for a large-scale magnetic (and probably structural) inhomogeneity of the sample, related presumably to a non-uniform Co ion distribution across the sample. The M2 response of bit1 exhibits a characteristic extremum in a weak field, which indicates the presence of FM clusters in this sample, similar to S-Au as discussed above. This is observed against a background of the hysteretic signal from the matrix, suggesting a phase-separated magnetic state of the sample formed at higher temperatures (above 350 K). The signals from the FM clusters and the matrix in bit1 of S-Co do not reveal any noticeable changes in the field dependences of either the Re M2 or the Im M2 component of the response on cooling from 350 K down to 250 K. This implies that the magnetic states of the matrix and the cluster subsystem (as well as their mutual arrangement) do not change within this temperature interval. The Re M2(H, T) response from bit2 in Fig. 8 (a) exhibits a larger magnitude and a smoother field dependence with enhanced field hysteresis, containing no typical signal from non-interacting FM clusters. These peculiarities suggest an enhanced contribution of the matrix to the response and the formation of a more homogeneous magnetic ordering, which stimulates interest for possible applications. Therefore, the M2 data for this sample are discussed below in more detail. Although a signal from the FM clusters is not observed in the Re M2 component displayed in Fig. 8 (a), the presence of FM clusters in bit2 is evident in Fig. 8 (b), which displays the Im M2(H) component; the matrix contribution to this component, given by the first term of Eq. (1), is small, because the relaxation rate Γ of the matrix magnetization is large. Therefore, the magnetic state of this bit is also characterized by phase separation. A noticeable feature of the Im M2(H) response from the FM cluster system is the different amplitudes of the signals recorded at direct and reverse H-scans, as evident in Fig. 8 (b). This implies a violation of the symmetry property of the M2 response, given by the equality Im M2(−H) = −Im M2(H). Such behavior suggests that the directions of the FM cluster moments are partially conserved, instead of being reversed when the direction (sign) of H changes, which can be attributed to a "magnetic memory" effect. This phenomenon is absent in the Re M2(H) component, to which the contribution of the FM clusters is negligible; this indicates that it is precisely the FM clusters that are responsible for the irreversibility. Possible reasons for it include a coalescence of the FM clusters (accompanied by an increase of their average size) and/or an increase of their magnetic anisotropy on cooling.
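The violation of the antisymmetry Im M2(−H) = −Im M2(H) just described can be quantified directly from a recorded scan. The sketch below uses a synthetic scan (an odd response plus a small symmetric admixture mimicking partially conserved cluster moments) as a stand-in for the measured data:

```python
# Sketch: quantify the deviation from Im M2(-H) = -Im M2(H) ("magnetic memory").
# The scan below is a synthetic stand-in, not measured data.
import numpy as np

H = np.linspace(-300.0, 300.0, 601)  # field grid, Oe

antisym = H / (1.0 + (H / 80.0) ** 2)               # perfectly odd component
memory = 0.15 * np.exp(-((H - 60.0) / 90.0) ** 2)   # symmetry-breaking admixture
im_m2 = antisym + memory

def antisymmetry_violation(H, im_m2):
    """Relative deviation from Im M2(-H) = -Im M2(H) over the scan."""
    im_at_minus_H = np.interp(-H, H, im_m2)
    residual = im_at_minus_H + im_m2  # vanishes for a perfectly antisymmetric signal
    return np.max(np.abs(residual)) / np.max(np.abs(im_m2))

print(f"relative violation: {antisymmetry_violation(H, im_m2):.2f}")
```

Applied to the direct and reverse scans separately, such a measure would allow the temperature dependence of the "magnetic memory" to be tracked alongside the extremum parameters.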
Similar behavior of the FM cluster response, observed in La0.78Ca0.22MnO3 in the vicinity of the insulator-metal transition temperature, has been attributed to the formation of a percolative network [26]. Further investigations are required to clarify the nature of the irreversibility observed in our sample. As evident in Fig. 8 (a), the Re M2(H) signal exhibits a hysteresis loop with only one well-defined maximum at the direct H-scan, having amplitude Re M2max and position H_max at H < 0. The Re M2 "coercive force", H_C2, cannot be defined in the interval H > 0 at the direct H-scan either; this indicates that H_C2 exceeds the H-scan amplitude (i.e., H_C2 > 300 Oe). Therefore, below we use a modified value, H*_C2, defined at H < 0. Because the FM cluster contribution to the Re M2 component is absent, the parameters H_C2 and H*_C2 characterize the temperature behavior of the major part of the sample, related to the matrix. As can be seen in Fig. 8 (b), the dependence of Im M2(H) at the direct H-scan exhibits a maximum in the field interval H > 0, Im M2max, which is connected mainly to the FM clusters, similar to sample S-Au. Temperature dependences of the positions and values of all the characteristic extrema are shown in Fig. 9. As evident in Fig. 9 (b), the positions of the extrema of Re M2(H) and Im M2(H), found at the direct H-scan, are practically independent of temperature. The values of the corresponding extrema exhibit a slight decrease on cooling down to ∼200 K, accompanied by a shift of H*_C2 towards higher fields, as can be seen in Fig. 9 (a) and in the inset to Fig. 9 (b), respectively. Below 200 K, all the parameters above become almost independent of T. This confirms that the formation of the magnetic state of sample S-Co responsible for the observed M2 response occurs appreciably above room temperature, as already conjectured above. Only a minor modification of this state (connected with an increase of pinning) is observed on cooling down to 200 K, where it stabilizes and does not change at least down to 100 K. The temperature behavior and the value of Re M2(H=0) are quite similar to those of Re M2max(T), which indicates a large value of the remanent magnetization. Note that the temperature evolution of the M2 response in bit1 of sample S-Co in the interval 350 K ≥ T ≥ 250 K is similar to that of bit2, exhibiting only a minor signal variation without any qualitative transformation. This suggests that the magnetic state of both bits does not change on cooling within the investigated temperature interval. Comparison of the M2 data for the samples doped with Au and Co demonstrates that the amplitude of the M2 response in S-Co is enhanced by a factor of ∼5. Such enhancement is attributable to the larger sizes of the structural clusters formed in sample S-Co at synthesis, which is supported by the neutron diffraction data. The difference between the amplitude of the M2 response in the Co-doped sample and in the typical cobaltite La0.8Sr0.2CoO3 with an FM ground state [24] is reduced with respect to the Au-doped sample, but still amounts to ∼2 orders of magnitude.

Electron magnetic resonance

In Fig. 10 the EMR spectra of the samples S-Au and S-Co (bit2) at room temperature are displayed. As evident in Fig. 10 (a), the Au-doped sample exhibits a single narrow line with the Landé factor g = 2.0324(8).
This indicates the presence of PM centers not involved in the formation of the FM clusters. These centers can be attributed to the PM matrix, in agreement with the M2 data. The absence of signals from the FM clusters is probably connected with their insufficient density. Another possible reason is the large transverse magnetic relaxation of the cluster subsystem, related to a wide spatial distribution of the magnetic anisotropy axes and to fluctuating values of the anisotropy constants in different parts of the sample with amorphous structure. The S-Au sample also exhibits a contribution to the EMR spectrum with a linear dependence on the external steady field, H. The origin of this linear contribution is the Hall effect at microwave frequency due to conduction electrons. The Hall signal at microwave frequency is detectable by our home-made spectrometer, since it is equipped with a cylindrical balanced cavity supporting the H111 mode of electromagnetic oscillations. This possibility was confirmed earlier by registration of the Hall signal from the material (Cu) of the cavity [18]. The presence of such a signal from the S-Au sample indicates its electrical conductivity. In Fig. 10 (b) the EMR spectrum of bit2 of the Co-doped sample at room temperature is displayed. The spectrum contains a single wide line, which can be interpreted as an FM resonance signal from this sample. The signal suggests an almost homogeneous FM ordering of bit2, which has been established above 300 K. Hence, the spectrum in Fig. 10 (b) confirms the conclusion about the magnetic state of sample S-Co made above from the analysis of the M2 data.

Magnetization measurements

The temperature dependence of the dc magnetization in the Au-doped sample, measured in fields of ∼50 mT and 1 T, is characterized by the deviation of M_ZFC(T) from M_FC(T), and by the thermoremanent magnetization, TRM (see Fig. 11 (a)). The absolute value of the difference between M_ZFC(T) and M_FC(T) initially grows with increasing field, up to a maximum at B ∼ 0.2 T; then it decreases and vanishes at B above ∼1 T. The relative difference of M_ZFC(T) and M_FC(T) with respect to M_FC decreases monotonically with increasing B, revealing a faster variation below a field of ∼0.2 T. TRM shifts gradually towards higher values as B is increased up to 0.2 T. The dependences TRM(T) obtained between B ∼ 0.2 and 1 T practically coincide with each other (see Fig. 12 (a)), whereas M_ZFC(T) and M_FC(T) continue to increase. This means that after switching off the field, the system tends to an equilibrium determined only by temperature. Below B ∼ 0.2 T, the state of the sample after switching off the field depends on T as well as on the field value. The latter indicates some quasi-equilibrium state of the magnetic system. The magnitude of TRM exhibits a monotonic increase with decreasing T between 300 and 140 K, and a crossover to a faster upturn below T_cr ≈ 140 K. Eventually, a steep increase of TRM(T) is observed below T ∼ 3-10 K at B between ∼0.2 and 1 T. The dependence of M_ZFC(T), measured in the low field of B = 46.5 mT, does not reveal the increase below T_C expected upon domain formation. In turn, the value of M_ZFC at T ∼ 140 K, coinciding with the crossover temperature T_cr of TRM(T), exhibits a rather steep decrease in all fields below B ∼ 0.5 T. These features reflect the unusual character of the magnetic ordering in the S-Au sample.
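The ZFC/FC irreversibility described here is straightforward to locate programmatically: one looks for the highest temperature at which the relative FC-ZFC splitting exceeds a noise threshold. A minimal sketch with toy curves standing in for the measured data:

```python
# Sketch: locate the ZFC/FC irreversibility temperature from M_ZFC/M_FC data.
# The curves below are toy stand-ins, not the measured magnetization.
import numpy as np

T = np.linspace(5.0, 300.0, 300)             # temperature grid, K
m_fc = 1.0 / (T + 20.0)                      # toy FC magnetization
m_zfc = m_fc * np.clip(T / 140.0, 0.0, 1.0)  # merges with FC above ~140 K

def irreversibility_temperature(T, m_zfc, m_fc, tol=1e-3):
    """Highest temperature where the relative FC-ZFC splitting exceeds tol."""
    split = (m_fc - m_zfc) / m_fc
    open_idx = np.nonzero(split > tol)[0]
    return T[open_idx].max() if open_idx.size else None

t_irr = irreversibility_temperature(T, m_zfc, m_fc)
print(f"T_irr ≈ {t_irr:.0f} K")
```

Repeating this for each measuring field would reproduce the field dependence of the splitting discussed above (growth up to B ∼ 0.2 T, vanishing above ∼1 T).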
A possible reason for such "antiferromagnetic" (AF) behavior of M_ZFC(T) can be sought in a peculiar AF interaction of the ferromagnetically ordered matrix and the FM cluster subsystem, which is attributable to the C-particles (see Subsection 3). The AF interaction between the FM core of a C-particle and the adjoining part of the matrix may be mediated by the C-particle shell; the latter probably has a different structure and different magnetic properties, as in usual magnetic nanoparticles [24]. Above T_C, the AF interaction induces AF correlations in the matrix regions near the C-particles, which compete with the FM correlations and oppose their fast growth. This leads to a reduction of the critical indices of the PM-FM transition in the matrix, in agreement with the small value of γ2 following from the analysis of the M2 data. The AF correlations continue to grow on cooling below T_C, leading to a peculiar "AF ordering" of the matrix and the FM cluster moments below the crossover temperature T_cr ∼ 140 K. This explains the decrease of M_ZFC observed at B = 46.5 mT below T_cr, as shown in Fig. 11 (a). Below T_C, the S-Au sample can be considered a "peculiar ferrimagnet" containing two unusual magnetic sublattices. The state of the first sublattice (related to the matrix) is close to complete FM order, whereas the second sublattice consists of randomly distributed and very weakly interacting FM clusters. Slightly below T_C, the matrix is probably built from ferromagnetically ordered fragments, surrounded by FM clusters with moments oriented mainly opposite to those of the matrix fragments due to their AF correlations. Such a composite structure can be considered a peculiar "domain", since its formation is accompanied by a decrease of the magnetostatic energy. The AF ordering below 140 K leads to an almost collinear alignment of the cluster moments and the FM moment of the matrix fragment in each "domain". This takes place along with a weakening of the already weak FM coupling between such "domains" in the porous amorphous structure. Certainly, a complete compensation of the corresponding FM moments in a "domain" does not occur. Therefore, a weak moment still exists, which is an uncompensated remainder either of the moment of a matrix fragment or of the total moment of the FM clusters surrounding this fragment. The resulting moments of the "domains" can have opposite signs, since the number of surrounding clusters, as well as the size of the matrix fragment and its moment, can differ. Besides, these uncompensated moments can be oriented almost randomly due to the weak coupling of the "domains" in the porous amorphous structure of the S-Au sample. The proposed AF "ordering" leads to a decrease of M_ZFC on cooling below 140 K. Note that only a part of the FM clusters is involved in the formation of the "domains". This follows from the Re(Im) M2(H) dependences, which exhibit the characteristic signal with an extremum in a weak field, pertinent to FM clusters, at T = 114 K below T_cr (Fig. 5). The formation of the "domains" is accompanied by pinning, which appears below 140 K along with the increase of TRM (Fig. 12 (a)) and the field hysteresis of the M2 response, as evident in Fig. 5 (a) at T = 114 K. Such a peculiar AF "ordering" explains, at least partly, the small magnitude of the M2 signal connected with the matrix (see Subsection 3). On cooling, even a weak external field of B = 46.5 mT makes this mechanism less effective, as can be seen in Fig. 11 (a).
In the FC regime, the external field provides a partial alignment of the FM cluster moments along the field above T_C, as well as an alignment of the uncompensated moments of the "domains" below T_C. This explains the predominance of M_FC over M_ZFC. An increase of the external field leads to a corresponding increase of M_FC(T) due to a better alignment of the moments of the FM clusters and of the "domains". On the other hand, application of the same increased field for measurements of M_ZFC(T) hinders the pinning of the randomly oriented domains below T_C, while above T_C it leads to a better orientation of the FM moments of the C-particles along the field. Therefore, increasing B decreases the difference between M_ZFC(T) and M_FC(T) at any T. Note that the anisotropy of magnetic nanoparticles is of the easy-axis type [23]. In zero external field the magnetic moment of a nanoparticle is directed along this axis, providing a divergence between M_ZFC(T) and M_FC(T) at temperatures below the blocking temperature T_b [23]. Their coincidence is achieved at T > T_b, and our data permit us to estimate the mean anisotropy field, B_a, of the C-particles as satisfying the relation 0.2 T < B_a < 0.5 T. Indeed, the relative difference of M_ZFC(T) and M_FC(T) below an external field of B = 0.2 T is practically constant, whereas at B > 0.2 T it decreases steeply both above and below T_C. On the other hand, the difference of M_ZFC(T) and M_FC(T) at B = 0.5 T is close to zero above T_C. In addition, the coincidence of M_ZFC(T) and M_FC(T) below T_cr, which is evident in Fig. 11 (a) at B ∼ 1 T near T ∼ 20 K, permits us to estimate a mean effective pinning field within the interval B ∼ 0.2-1 T. On cooling down to T ∼ 3 K, M_ZFC(T) and M_FC(T) in Fig. 11 (a) begin to increase, indicating the onset of a magnetic rearrangement in S-Au. The latter can be interpreted as a transition from an almost opposite orientation of the matrix and FM cluster moments in the "domains" to a nearly parallel alignment. This takes place along with the formation of a new domain system, which looks more similar to that of an ordinary ferromagnet. Evidently, such an arrangement is more favorable for minimizing the free energy of the sample at the temperature T_MT ∼ 3 K, resembling a transition from a ferrimagnetic to an FM state in a compound with two FM subsystems having different magnetic moments [27]. This transition depends on the applied magnetic field, and its onset shifts towards higher temperature with increasing B, which again suggests a quasi-equilibrium magnetic state of the sample. The unusual magnetic state of the S-Au sample is indicated also by the change of sign of the field hysteresis, observed in the dependence of M(B) at B ∼ 1 T and T = 5.1 K > T_MT, as can be seen in the inset to Fig. 11 (b). Here, the difference between M_ZFC(T) and M_FC(T) becomes negligible, as follows from Fig. 11 (a). Several M(B) curves, measured at temperatures from the different characteristic intervals T > T_C, T_C > T > T_cr, T_cr > T > T_MT, and at T ∼ T_MT, are displayed in Fig. 11 (b). A steep increase of M(B), up to approximately the same value, is observed in low fields of B ≤ 0.8 T at T = 225 K > T_C, at T_C > T = 180 K > T_cr, and even at T = 100 K < T_cr.
This implies that the main contribution to the magnetization in the indicated field interval is given by individual FM nanoparticles of different sizes, which are not involved in the magnetic arrangement discussed above. The close values of M(B) at any B ≥ 1 T for the curves obtained at T = 225 and 180 K give evidence for the presence of AF correlations above T_cr. At lower temperatures, T = 100, 50 and 5 K below T_cr, a crossover to a moderate and approximately linear increase of M(B) in the interval B ≥ 1 T is observed up to 5 T. It is worth mentioning that a similar dependence of M(B), with a steep increase in a weak field followed by a crossover to linear behavior up to high fields B ∼ 10 T, is observed in ferrites with a canted configuration of sublattices [27]. Such a configuration is quite plausible for the peculiar sublattices in our porous S-Au sample, supporting our model. Cooling the sample between 100 and 5 K leads to a shift of M(B) towards higher values, which is probably connected with the continuation of the "domain" formation. On further cooling down to the onset of the magnetic transition to an FM alignment of the matrix and FM cluster subsystem moments in the S-Au sample (at T ∼ T_MT ∼ 3 K), a considerable growth of M(B) is observed. However, M(B) is still far from saturation at the highest field B ∼ 5 T, as is evident in Fig. 11 (b). As follows from the dependences of M_ZFC(T) and M_FC(T) at B = 46.5 mT displayed in Fig. 11 (a), the rearrangement of the magnetic ordering in the S-Au sample only begins at T_MT ∼ 3 K. This suggests that the magnetic organization of any "domain" differs from that described above only insignificantly; it is precisely this that can lead to the absence of saturation of the magnetization in high fields. At the same time, the onset of the transition leads to a considerable enhancement of M(T) and to a smoother (crossover-free) behavior of M(B), as can be seen in Fig. 11 (b) at 3.3 K. Hence, the M(B) data support our interpretation as well. The presence of AF correlations above T_C and the formation of the AF "domains" below T_C can explain the small amplitude of the M2 response in the S-Au sample (see Subsection 3). Similar peculiarities can also be seen in the dependences of M_ZFC(T), M_FC(T) and TRM(T) of the undoped powder carbon-based sample [11], suggesting a magnetic state similar to that of our S-Au sample and a weak effect of the Au doping. In the Co-doped sample, the crossover of M_ZFC(T) and TRM(T) is not found below 350 K. However, such features as the divergence of M_ZFC(T) and M_FC(T) and the sensitivity of these parameters and of TRM(T) to the applied magnetic field are present. Finally, a steep increase of M_ZFC(T) and M_FC(T) on cooling down to T ∼ 10 K, depending on B, is observed in the S-Co sample, as can be seen in Fig. 12 (b). These results suggest the formation of a specific heterogeneous magnetic state of the S-Co sample, similar to that of the S-Au sample, at substantially higher temperatures, in agreement with the M2 data discussed above. This assumption permits us to explain the small value of the M2 response in the S-Co sample in comparison with doped cobaltites (see Subsection 3) by the formation of AF "domains" similar to those in the S-Au sample. The latter occurs, however, at higher temperatures exceeding 350 K, in agreement with the absence of the crossover to a faster decrease in M_ZFC(T), as can be seen in Fig. 12 (b).
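One way to make the canted-sublattice reading of M(B) discussed above quantitative is to decompose the curve into a rapidly saturating cluster term plus a linear high-field term. The Langevin form used in the sketch below is our illustrative assumption (the text does not specify a model for the cluster contribution), and the data are synthetic:

```python
# Sketch: decompose M(B) into a saturating cluster part and a linear part,
# as expected for canted-sublattice-like behavior. Model form is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def langevin(x):
    """Numerically safe Langevin function L(x) = coth(x) - 1/x."""
    x = np.where(np.abs(x) < 1e-6, 1e-6, x)
    return 1.0 / np.tanh(x) - 1.0 / x

def m_model(B, M_s, b0, chi):
    return M_s * langevin(B / b0) + chi * B  # clusters + linear matrix term

B = np.linspace(0.05, 5.0, 60)               # field, T
rng = np.random.default_rng(1)
M = m_model(B, 1.0, 0.2, 0.05) + 0.01 * rng.standard_normal(B.size)

popt, _ = curve_fit(m_model, B, M, p0=(1.0, 0.3, 0.02))
print("M_s = {:.2f}, b0 = {:.2f} T, chi = {:.3f} per T".format(*popt))
```

Tracking the fitted linear slope chi against temperature would then separate the high-field susceptibility of the canted arrangement from the saturating nanoparticle contribution dominating below B ∼ 0.8 T.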
Comparison of the TRM(T) data at B = 1 kG in Figs. 12 (a) and (b), as well as of the M_FC(T) and M_ZFC(T) data in similar fields of B = 0.5 and 1 kG and at close temperatures, demonstrates that the values of all these parameters in the Co-doped sample are much smaller than in the Au-doped one. This disagrees with the results of the nonlinear response discussed above, as well as with the magnetization data for similar samples in [11-13], where the corresponding relations are inverted. These contradictions are attributable to the large-scale spatial inhomogeneity of the Co-doped sample, which follows from the M2 data discussed above, since our magnetization measurements were carried out on yet another bit of this sample. Indeed, comparison of the M2 data obtained in different parts of sample S-Co (bit1 and bit2) demonstrates (see Subsection 3) that the temperature evolution of their magnetic states on cooling is similar, and the relative difference of the signal amplitudes is conserved at high temperatures. In addition, the observed behavior of the magnetization of the Co-doped sample on cooling is similar to that observed in [11-13]. These results suggest that the disagreements above do not qualitatively change the scenario of the temperature evolution of the magnetic state in different parts of the Co-doped sample. Therefore, we have used the magnetization data of the S-Co sample obtained here in the comparative qualitative analysis above.

Conclusions

According to the neutron diffraction data, the structure of both nanocarbon samples investigated in this work is amorphous. This corresponds to the well-known concept of the organization of carbon-metal nanocomposites containing nanoporosity. The Co-doped sample exhibits a more regular distribution of pores and probably larger average sizes of the sample material outside the pores with respect to the sample doped with Au. This is accompanied by a more homogeneous short-range magnetic arrangement, as well as by the formation of a ground magnetic state with long-range FM ordering, which is well detected by the neutron diffraction study. NMR investigations of the local structure of the samples permit us to conclude that (i) they are products of partial carbonization of the initial aromatic compounds and (ii) these products have not reached the state of glassy carbon. The main result of the magnetic investigations of the composite samples doped with Au and Co is the establishment of their inhomogeneous, phase-separated magnetic state, which depends on temperature. This state comprises the system of FM clusters and the magnetic matrix; the latter is formed by paramagnetic centers located outside the FM clusters. The magnetic characteristics and their temperature behavior, as well as the structure of the compounds, depend appreciably on the doping material. In the sample doped with nonmagnetic Au, the onset of the matrix ordering occurs at a lower temperature (T_C ≈ 210 K), whereas in the Co-doped sample this ordering takes place at a higher temperature, above 350 K. The S-Co sample exhibits a remanent magnetization and a coercive force that considerably exceed those of the S-Au sample. In addition, the Co-doped sample displays inhomogeneous magnetic properties on a long-range spatial scale, characterized by a larger magnitude of the mean magnetic moment.
The complex temperature behavior of the magnetization in the Au-doped sample suggests a change in the mutual arrangement of the magnetic moments of the matrix and the FM cluster system, from an almost opposite orientation below T_C to an almost parallel one at low temperatures. Only the last stage of this process has been observed in the S-Co sample within the investigated temperature interval. This stage is probably accompanied by the formation of an almost homogeneous FM state, as follows from the neutron diffraction investigations. Generally, the results obtained by the different techniques permit us to clarify the peculiarities of the structure and to obtain important information about the subtle processes of magnetic ordering in carbon-based porous nanomaterials doped with Au and Co.
Anticipatory plastic response of the cellular immune system in the face of future injury: chronic high perceived predation risk induces lymphocytosis in a cichlid fish

Vertebrate cellular immunity displays substantial variation among taxa and environments. Hematological parameters such as white blood-cell counts have emerged as a valuable tool to understand this variation by assessing the immunological status of individuals. These tools have long revealed that vertebrate cellular immune systems are highly plastic and respond to injury and infection. However, cellular immune systems may also be able to anticipate a high risk of injury from environmental cues (e.g., predation-related cues) and respond plastically ahead of time. We studied white blood-cell (leukocyte) profiles in African cichlids Pelvicachromis taeniatus that were raised for 4 years under different levels of perceived predation risk. In a split-clutch design, we raised fish from hatching onwards under chronic exposure to either conspecific alarm cues (communicating high predation risk) or a distilled water control treatment. Differential blood analysis revealed that alarm cue-exposed fish had twice as many lymphocytes in peripheral blood as did controls, a condition called lymphocytosis. The presence of a higher number of lymphocytes makes the cellular immune response more potent, which accelerates the removal of invading foreign antigens from the bloodstream and, therefore, may be putatively beneficial in the face of injury. This observed lymphocytosis after long-term exposure to conspecific alarm cues constitutes first evidence for an anticipatory and adaptive plastic response of the cellular immune system to future immunological challenges.

Introduction

To protect themselves against pathogens, vertebrates have evolved a highly effective cellular immunity, of which white blood cells, also called leukocytes, are an important component. There are different types of leukocytes, ranging from cells with phagocytotic activity (neutrophils) to those that produce proteins such as antibodies (specialized lymphocytes called B cells). Hence, both the absolute amount and the relative frequency of different leukocytes characterize the cellular immune system response. Therefore, hematology, the study of blood, has been developed since the 1920s as a valuable and highly informative medical diagnostic tool (Wintrobe et al. 1974). Researchers have since used differential leukocyte counts for studying variation in wildlife immune responses (Davis et al. 2008), but this variation is still not fully understood (Maceda-Veiga et al. 2015). Most previous studies have been conducted in a medical, toxicological, and animal ethics context, and thus focus on the consequences of exposure to environmental factors that disturb physical integrity, such as toxins (Eeva et al. 2005; Villa et al. 2017), parasites and pathogens (Davis et al. 2004; Lobato et al. 2005; Burnham et al. 2006), and suboptimal nutrition, temperature, or humidity levels (Bennett and Daigle 1983; Altan et al. 2000; Brown and Shine 2018). However, to our knowledge, no previous study has considered that vertebrate cellular immune systems may also respond adaptively to non-integrity-disturbing cues that are indicative of an environment with increased injury risk.
In the face of possible future injury, a cue-induced proliferation of cellular immune system components has the potential to fight off pathogens early and may thereby vastly reduce disease-related fitness costs. This may constitute another case of how adaptive phenotypic plasticity allows individuals to adapt to changing environments (West-Eberhard 2003; Scheiner et al. 2020), similar to how prey animals respond plastically to the key ecological factor predation (Lima and Dill 1990; Nosil and Crespi 2006). During antipredator phenotypic plasticity, cues that communicate high predation risk induce plastic modifications in the behavior, morphology, and life history of prey animals, which increase individual fitness in a predatory habitat (Ghalambor et al. 2007; Kishida et al. 2010; Bourdeau and Johansson 2012). As predation is an environmental factor that substantially increases injury risk in any given environment (e.g., Reimchen 1988), it also provides a well-suited context for research on the adaptive plasticity of cellular immune systems. In vertebrates, antipredator phenotypic plasticity was first discovered in a fish species, the crucian carp Carassius carassius. In this species, exposure to predators (Brönmark and Miner 1992) or to conspecific alarm cues (Stabell and Lwin 1997) triggers the development of a deeper body morphology (i.e., increased dorsoventral height) that decreases the risk of being swallowed by gape-limited piscivores such as the pike Esox lucius (Nilsson et al. 1995). Similar patterns of morphological antipredator plasticity have since been confirmed in many other fish species (Eklöv and Jonsson 2007; Januszkiewicz and Robinson 2007; Frommen et al. 2011; Meuthen et al. 2018a). While there is also a lot of evidence for behavioral (Kim 2016; Meuthen et al. 2019c, d) and life-history antipredator phenotypic plasticity (Reznick and Endler 1982; Belk 1998; Johnson and Belk 2001; Dzikowski et al. 2004) across fish taxa, no single study has considered that the fish cellular immune system may likewise respond with adaptive plasticity to perceived predation risk. Fish hematology has a long history (Hesser 1960; Blaxhall and Daisley 1973), and this is why fish are a well-studied, non-human vertebrate group in terms of their leukocyte responses (Davis et al. 2008; Burgos-Aceves et al. 2019). Ichthyologists consider fish leukocyte responses one of the most sensitive indicators of stress (Wedemeyer et al. 1990). Hence, many researchers have studied changes in fish leukocyte frequencies following exposure to stressors. Some of these researchers suggest that exposure to stress increases neutrophil numbers (neutrophilia) and decreases lymphocyte counts (lymphopenia), which leads to an elevated neutrophil:lymphocyte ratio (Larsson et al. 1980; Pulsford et al. 1994; Witeska 2005; Campbell 2012; Grzelak et al. 2017). In contrast, other studies report that exposure to stressful environmental factors induces an increase in lymphocyte frequency (lymphocytosis) and a decrease in neutrophils (neutropenia) (Johansson-Sjöbeck and Larsson 1978; Nussey et al. 1995). Although they had diverging results, these studies were similar in that they performed acute exposure to environmental factors that disturb individual physical integrity. Even when a chronic exposure protocol is mentioned, this refers to a period of no more than 9 weeks, and a 9-week exposure period was applied in only a single study (Johansson-Sjöbeck and Larsson 1978).
However, because fish are ectothermic, the time course of fish leukocyte patterns is lengthy (Davis et al. 2008), and hence they reflect long-term stress more accurately than short-term stress, as directly shown in a study on the channel catfish Ictalurus punctatus (Bly et al. 1990). Hence, there is a clear need for more long-term research to understand patterns of phenotypic plasticity in fish leukocytes. Here, we study differential leukocyte profiles in response to long-term perceived predation risk in the Western African cichlid Pelvicachromis taeniatus (Lamboj 2004), also known as P. kribensis (Lamboj 2014). This socially monogamous, stream-dwelling fish with complex mutual mate choice (Thünken et al. 2012) and biparental care (Thünken et al. 2010) is a prime example of antipredator phenotypic plasticity. In this species, predation risk is communicated through alarm cues that are detected by conspecifics (Meuthen et al. 2014, 2018b). Long-term exposure to high perceived predation risk, as communicated through these cues during development, plastically induces generalized neophobia (Meuthen et al. 2016). In adult fish, high perceived risk during development induces male-specific morphological modifications (Meuthen et al. 2018a), alters loser strategies during intrasexual competition (Meuthen et al. 2019a), and plastically adjusts mate preferences by lowering investment into mate choice (Meuthen et al. 2019b). Our aim here was to study the impact of the same developmental environment on the cellular immune system in the P. taeniatus individuals from the studies by Meuthen et al. (2016), Meuthen et al. (2018a), Meuthen et al. (2019b), and Meuthen et al. (2019a). To ensure that we studied antipredator plasticity in the differential leukocyte profiles of P. taeniatus rather than a short-term response to environmental modification, we investigated the immune response of P. taeniatus after individuals had completed more than half of their lifetime under high perceived predation risk. P. taeniatus reaches sexual maturity at 1-1.5 years of age and can live up to 6 years (D. Meuthen, personal observation); hence, we sampled fish at 4 years of age. At this time point, we obtained blood samples from P. taeniatus that had been raised under continuous exposure to either alarm cues or a water control treatment. With these samples, we then prepared stained peripheral blood smears and obtained differential leukocyte counts with light microscopy. Lymphocytes, the immune cells that have cytotoxic capabilities and produce antibodies (Campbell 1996), are the most common leukocytes in fish (Campbell 2012). Because they play a crucial role in host defense against pathogens (e.g., Rouse and Babiuk 1975; Gautreaux et al. 1994), an increased lymphocyte frequency (lymphocytosis) is a common response to infections across vertebrates, and in fish it also occurs in response to a high-quality diet (Fagbenro et al. 2013; Rashidian et al. 2020). The fact that vertebrates with immunodeficient mutations causing lymphopenia are particularly susceptible to infections (mice: Bosma and Carroll 1991; Rozengurt and Sanchez 1993; humans: Buckley et al. 1997; Villa et al. 2001) demonstrates the protective role of lymphocytes. Accordingly, a higher number of lymphocytes may accelerate the removal of pathogens from the bloodstream and is therefore putatively beneficial in the face of injury.
However, increased lymphocyte production is not without costs: it requires a higher resource investment, and it is also likely to accumulate DNA replication errors, which may ultimately lead to cancerous growth (Stetler-Stevenson 2005; Vineis et al. 2010; Greaves and Maley 2012). Hence, only in individuals that inhabit an environment with an elevated risk of injury, such as an environment with high perceived predation risk, would lymphocytosis constitute a putatively beneficial plastic response of the cellular immune system. Consequently, we predict a higher number of lymphocytes in alarm cue-exposed P. taeniatus as opposed to controls. Alternatively, as a typical stress response, we would expect lower lymphocyte and higher neutrophil numbers in peripheral blood, which causes an elevated neutrophil:lymphocyte ratio (Larsson et al. 1980; Pulsford et al. 1994; Witeska 2005; Campbell 2012; Grzelak et al. 2017). Because leukocyte patterns might be sex-dependent (Evans 2008) and previous research highlights the relevance of sex-specific plasticity in the study species (Meuthen et al. 2018a) and other fishes (Meuthen et al. 2019e), we also considered the sex of the experimental fish in our analyses.

Rearing and treatment protocol

The fish used in the present experiment were derived from 60 wild-caught individuals collected in June 2007 from the Moliwe river in Cameroon (04°04′ N, 09°16′ E) that were afterwards bred in captivity. In 2012, adult F1 fish were paired up in different combinations so as to set up 12 outbred pairs, from which we derived the clutches used in the present study. After collecting the clutches, we split them into two equally sized groups and then exposed fry, from hatching onwards, for 5 days a week over 3 years to two different chemical cues that communicated different levels of perceived predation risk. First, to control for possible effects of frequent water disturbance, we applied a low-risk control treatment that consisted of exposure to distilled water. Second, we exposed the other half of each clutch to conspecific alarm cues derived from ground whole conspecifics (a combination of four male and four female donor fish in every instance) at a concentration of 7.2 mg/l as a proxy for high perceived predation risk; alarm cue preparation has been described in more detail in Meuthen et al. (2019b). The applied alarm cue concentration has previously been shown to induce behavioral (Meuthen et al. 2016, 2019a) and morphological (Meuthen et al. 2018a) antipredator phenotypic plasticity in P. taeniatus and in other fish species (Chivers and Smith 1994). The benefit of using conspecific alarm cues to generate high perceived predation risk is that fish do not habituate to them even after chronic exposure, while they do in response to predator odors (Imre et al. 2016). Furthermore, exposure to conspecific alarm cues is known to generate phenotypes similar to those of fish from natural water bodies that house predators (Stabell and Lwin 1997; Laforsch et al. 2006; Meuthen et al. 2019d). Throughout rearing, fish were kept in mixed-sex groups of up to ten individuals per tank; we increased tank sizes sequentially to conform to the increased space requirements of growing fish (age 22-220 days: 20 × 30 × 20 cm; age 220-1664 days: 50 × 30 × 30 cm). Furthermore, we matched food amounts to fish number and ontogenetic stage, as antipredator plasticity has been suggested to be limited by nutrient availability (Chivers et al.
2008); stated here are the days from which onwards the given food amounts were supplied: 8-13 days: 10 µl/fish; 22-27 days: 20 µl/fish; 50-55 days: 40 µl/fish; 78-83 days: 60 µl/fish; 115-122 days: 80 µl/fish; 150-157 days: 100 µl/fish; 185-192 days: 120 µl/fish; 220-227 days: 140 µl/fish; 255-262 days: 160 µl/fish; 297-304 days: 180 µl/fish; 339-346 days: 200 µl/fish. At first, food consisted exclusively of Artemia nauplii; from 115-122 days onwards it was replaced by a mix of frozen Artemia sp. and Chironomus, Culex, as well as Chaoborus larvae in a ratio of 2:1:0.25:1. Throughout rearing, fish in different tanks had no visual or olfactory contact, water temperature was kept constant at 24.5 ± 1.5 °C, and illumination was provided by full-spectrum fluorescent tubes (Lumilux Cool Daylight 36 W/865, Osram, Germany) in a 12:12 light:dark cycle (from 8 am to 8 pm). In 2017, we derived 4-year-old fish (age 1488-1664 days) from this split-clutch design to study variation in cellular immune system responses between treatments.

Experimental procedure

To collect blood samples, we individually removed fish from their home tank and first assessed fish size (standard length: distance from the snout tip to the base of the tail fin) to the nearest millimeter with graph paper, as well as fish body mass to the nearest milligram using a digital precision scale (LC221S, Sartorius, Göttingen, Germany). Afterwards, we immediately killed the fish by hypothermal shock, induced by immersion in ice slurry at 0-4 °C, to collect blood samples. P. taeniatus did not show any signs of distress during this procedure, and hypothermal shock is a well-established method of euthanasia that is less stressful for small, tropical fish relative to benzocaine and MS-222 exposure (Wilson et al. 2009; Blessing et al. 2010; Lidster et al. 2017). Furthermore, exposure to MS-222 is known to modify blood properties and leukocyte histology (Palic et al. 2006; Popovic et al. 2012) and is therefore unsuitable for the study of leukocyte profiles. Blood samples were then collected by puncturing the heart from below the gill covers with a 10 µl syringe (Microliter 701, Hamilton, USA). A small drop of blood was then put on a standard microscope slide (soda-lime glass with frosted edge, H868, Carl Roth, Germany). Afterwards, we placed a second slide (edge ground at a 45° angle) at a 40° angle against the surface of the first slide and drew it back to contact the drop of blood, which then spread over the interface of the slides through capillarity. Then, we quickly pushed the slide in the opposite direction, which created a blood smear. We did not use anticoagulants so as to prevent modification of the morphology of certain leukocytes, which would make their classification difficult (Ellis 1977). We always prepared several slides per individual fish, which were then labeled with fish identity codes. Blood smears were left to dry for at least 2 days. Afterwards, we conducted differential staining by May-Grünwald-Giemsa (Pappenheim stain). The staining protocol consisted of first submerging slides for 3 min in an eosine methylene blue solution with at least 80% methanol for fixation (May-Grünwald's solution, T863, Carl Roth, Germany). Then, slides were rinsed with distilled water and afterwards submerged in an azure, eosine, methanol, and glycerin solution (Giemsa stock solution diluted at a ratio of 1:20, T862, Carl Roth, Germany). Afterwards, slides were again rinsed with distilled water and then left to dry.
After all blood smears were stained and dried, the best slide (i.e., the slide that had the fewest signs of coagulation and the most intact cells) was selected for each individual, and blood smears were examined with an Axiolab light microscope (Carl Zeiss, Jena, Germany) at 400× magnification by a hematologist (IM) who was naïve as to individual treatment. First, we conducted an initial qualitative differentiation of the different white blood cells in this species (Fig. 1). Afterwards, to quantify cellular immunity levels, for each slide we first estimated absolute leukocyte counts at an accuracy of ± 50 leukocytes/µl. Then, thin areas of the blood smears, where erythrocytes overlapped for a maximum of 1/3 of cell volume or, alternatively, did not overlap at all, were examined for differential blood analysis. Here, we counted 100 randomly selected leukocytes per slide and assigned counts to their respective cell type. We followed a standard leukogram procedure by counting lymphocytes, neutrophils, eosinophils, basophils, monocytes, and erythroid/neutrophil precursors. As basophils, eosinophils, and precursors were very rare (found to be present in only 11.24%, 1.24%, and 0% of all blood smears, respectively, and equally distributed across treatments), we excluded them from our analysis. From these relative values, absolute blood counts were then calculated for each individual fish, as well as the proportion of neutrophils:lymphocytes, as this ratio is suggested to be a reliable indicator of stress (Davis et al. 2008).

Fig. 1 (caption): Peripheral blood smears were stained by May-Grünwald-Giemsa (Pappenheim stain). E erythrocyte, L lymphocyte, T thrombocyte, N neutrophil, and M monocyte. To allow a better comparison between different cell types, one lymphocyte (in the bottom image), the thrombocyte, and the neutrophil were copied from a photograph taken from a different area of the same blood smear at the same magnification and inserted into the above images with an image editor. The scale bar equals 10 µm.

Observed lymphocytes were polymorphic (different cell sizes, core sizes, core:cytoplasm ratios, and chromatin structures) throughout. In total, we collected blood from 44 alarm cue-exposed fish (21 females and 23 males) and from 45 control fish (27 females and 18 males). At the point of sampling, males from the different treatments did not differ in body size (median, interquartile range, IQR; alarm cue-exposed fish: 8.3 cm, 8.1-8.6 cm; control fish: 8.2 cm, 8.0-8.7 cm; Wilcoxon signed-rank test: W = 225, p = 0.644) or weight (alarm cue-exposed fish: 7.182 g, 6.073-8.108 g; control fish: 6.937 g, 6.254-7.943 g; Wilcoxon signed-rank test: W = 214, p = 0.866). Likewise, females did not differ in body size (alarm cue-exposed fish: 5.8 cm, 5.7-6.0 cm; control fish: 5.9 cm, 5.7-6.0 cm; Wilcoxon signed-rank test: W = 253.5, p = 0.535) or weight (alarm cue-exposed fish: 2.876 g, 2.681-2.977 g; control fish: 2.897 g, 2.555-3.118 g; Wilcoxon signed-rank test: W = 275.5, p = 0.876) between treatments.
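For illustration, the conversion from a 100-cell differential count to absolute counts, as described in the leukogram procedure above, can be written out as follows; the numbers below are made up for the example and are not data from this study:

```python
# Sketch: convert a 100-cell differential count plus the estimated total
# leukocyte density into absolute counts, and compute the N:L ratio.
# Illustrative values only, not data from the study.

def absolute_counts(differential, total_per_ul):
    """differential: counts per cell type out of the examined leukocytes."""
    n = sum(differential.values())
    return {cell: total_per_ul * k / n for cell, k in differential.items()}

diff_count = {"lymphocytes": 78, "neutrophils": 14, "monocytes": 8}
total = 9500.0  # estimated leukocytes per µl (±50 accuracy in the protocol)

abs_counts = absolute_counts(diff_count, total)
nl_ratio = abs_counts["neutrophils"] / abs_counts["lymphocytes"]
print(abs_counts)
print(f"neutrophil:lymphocyte ratio = {nl_ratio:.2f}")
```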
Statistical analysis

For statistical analysis, we used R 3.2.5 (R Core Team 2016). After log-transformation, all variables met assumptions of normality according to Shapiro-Wilk tests (function "shapiro.test" in R package "stats"), and hence we applied parametric tests throughout. We constructed linear mixed-effects models (function "lme" in R package "nlme", Pinheiro et al. 2016) with maximum-likelihood parameter estimation throughout. Here, we always entered "fish family" as a random intercept so as to account for genetic effects. All results are based on likelihood-ratio tests (LRT), which assessed whether the removal of a variable caused a significant decrease in model fit according to the Akaike information criterion; hence, degrees of freedom differed by one in all models. The reported p values refer to the increase in deviance when the respective variable was removed. To determine how leukocyte profiles differed between individuals, we constructed a model with the respective blood parameter (leukocytes, lymphocytes, neutrophils, monocytes, and proportion neutrophils:lymphocytes) as dependent variable and "sex" (male, female) as well as "treatment" (alarm cue-exposed, control) as explanatory variables. To determine whether the sexes differed in their response to the treatment, we analyzed the "sex × treatment" interaction. When no significant interaction was present, we first tested for general effects of sex, while treatment remained in the model as a covariate. Finally, when general sex effects were absent as well, we determined which blood parameter variation was affected by the treatment by testing treatment effects in the absence of any covariates. All initial and final models are available in the supplementary material (Online Resource 1).

Results

We found no significant sex × treatment interactions or general sex effects (Online Resource 1); however, we found significant treatment effects (Table 1). Fish from the alarm cue exposure treatment had approximately 30% more leukocytes (LRT, χ2 = 5.693, p = 0.017), which was caused by a doubling of lymphocyte counts in alarm cue-exposed individuals (LRT, χ2 = 9.512, p = 0.002, Fig. 2). In contrast, the other blood parameters did not differ significantly between treatments: neutrophils (LRT, χ2 = 2.767, p = 0.096); monocytes (LRT, χ2 = 1.997, p = 0.158); proportion neutrophils:lymphocytes (LRT, χ2 = 0.222, p = 0.638).

Table 1 (caption): Leukocyte profiles (mean ± SE) in peripheral blood smears of 4-year-old P. taeniatus that were lifelong subject to different levels of perceived predation risk: alarm cue-exposed fish (N = 44) and control fish (N = 45). All values are accompanied by the results of our final linear mixed-effects models, which analyzed whether treatment explained variation in blood parameters, while fish family was included as a random intercept to account for our split-clutch design with multiple families. Columns: cell type; control-exposed; alarm cue-exposed; χ2; p.
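As a sketch of the model-comparison workflow described in the statistical analysis above (the original used R's nlme::lme fitted by maximum likelihood, followed by likelihood-ratio tests), here is a Python analogue using statsmodels' MixedLM; the column names and data frame are hypothetical stand-ins, not the study's data:

```python
# Sketch: random-intercept mixed model + likelihood-ratio test, as a Python
# analogue of the R (nlme::lme) procedure. Data are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 89
df = pd.DataFrame({
    "log_lymphocytes": rng.normal(8.0, 0.4, n),
    "treatment": rng.choice(["alarm", "control"], n),
    "family": rng.integers(1, 13, n),  # 12 fish families as random intercept
})

full = smf.mixedlm("log_lymphocytes ~ treatment", df,
                   groups=df["family"]).fit(reml=False)  # ML, as in the paper
null = smf.mixedlm("log_lymphocytes ~ 1", df,
                   groups=df["family"]).fit(reml=False)

lrt = 2 * (full.llf - null.llf)   # likelihood-ratio statistic
p = chi2.sf(lrt, df=1)            # 1 df: a single variable was removed
print(f"chi2 = {lrt:.3f}, p = {p:.3f}")
```

Dropping one term at a time and comparing nested fits in this way reproduces the one-degree-of-freedom LRTs reported in the results.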
Discussion

Our results revealed that alarm cue-exposed fish had a significantly higher absolute number of leukocytes (i.e., total white blood cells), which was caused by a significantly greater number of lymphocytes in alarm cue-exposed P. taeniatus relative to the water control. In contrast, we did not find evidence for changes in the frequency of other blood-cell types or in neutrophil:lymphocyte proportions. Given the crucial role of lymphocytes in host defense against pathogens (e.g., Rouse and Babiuk 1975; Gautreaux et al. 1994), having a higher number of lymphocytes likely benefits vertebrates in the face of injury, which is more likely to occur in an environment with high predation risk (Reimchen 1988). Hence, the observed lymphocytosis is first evidence for putatively beneficial phenotypic plasticity in a vertebrate cellular immune system. More generally, it is also the first evidence for a preceding putatively beneficial immunological response in an environment with increased injury risk. While in our study we used non-integrity-disturbing cues that communicate high perceived predation risk, lymphocytosis has previously been observed as a response to dietary supplementation in the rainbow trout Oncorhynchus mykiss (Rashidian et al. 2020), to copper exposure in the Mozambique tilapia Oreochromis mossambicus (Nussey et al. 1995), and to cadmium exposure in the flounder Pleuronectes flesus (Johansson-Sjöbeck and Larsson 1978). Likewise, in humans, chronic stress (Pereira et al. 2012), cigarette smoking (Chan et al. 1990; Tollerud et al. 1991; Delannoy et al. 1993; de Haan and Pouwels 2006), and chronic viral and bacterial infections (Speight et al. 1999; Halim and Ogbeide 2002; Sever-Prebilic et al. 2002; Chabot-Richards and George 2014) have all been suggested to induce lymphocytosis. At first glance, our observation of an induced lymphocytosis in response to chronic exposure to high perceived predation risk appears contradictory to previous research. That is because, similar to other stressors (Barcellos et al. 2011), perceived predation risk is suggested to induce an increase in the levels of the stress hormone cortisol (a glucocorticoid), as has previously been suggested in studies on fish transgenerational antipredator plasticity (Giesing et al. 2011; Sopinka et al. 2015). Elevated glucocorticoid levels then trigger a redistribution of leukocytes between body compartments (Davis et al. 2008): a rapid release of neutrophils from the head kidney into peripheral blood (which causes neutrophilia in the blood) and a mobilization of lymphocytes from circulating blood into compartments such as the skin, the spleen, and lymph nodes (which causes lymphopenia in the blood: Dhabhar et al. 1996; Dhabhar and McEwen 1997). This process then results in an elevated neutrophil:lymphocyte ratio in peripheral blood, as has been shown multiple times as a consequence of exposing fish to other stressors (metals: Larsson et al. 1980; Witeska 2005; forced upside-down position: Pulsford et al. 1994; higher temperature and longer photoperiods: Campbell 2012; exposure to air: Grzelak et al. 2017). Despite potential short-term benefits of having more lymphocytes in specific body compartments as a preparation for injury (Johnstone et al. 2012), other researchers consider stress-induced lymphopenia in peripheral blood to be an immunosuppressive condition that impairs wound healing, as showcased in mice (Padgett et al. 1998; Padgett and Glaser 2003). However, cellular immune responses to glucocorticoid exposure are different when it comes to chronic stress, where these hormones are released continuously. Under these conditions, glucocorticoid receptor levels are typically downregulated (Svec and Rudis 1981; Vedeckis et al. 1989; Cohen et al. 2012) so as to avoid the negative effects on the vertebrate body that are associated with prolonged glucocorticoid exposure (Russell and Lightman 2019). Because lymphocytes also carry glucocorticoid receptors, lymphocyte sensitivity to glucocorticoid exposure decreases as well (Wodarz et al. 1991; Bauer et al. 2000). Likewise, neutrophil-secreted pro-inflammatory cytokines such as interleukin-8 are known to adjust the relative amounts of glucocorticoid receptors on other neutrophils so as to make them less sensitive to glucocorticoids, which avoids glucocorticoid-induced cell death (Strickland et al. 2001).
Hence, under chronic stress, despite continued glucocorticoid release, both lymphocyte and neutrophil numbers in peripheral blood are expected to reach normal levels again, and this is likely the reason why we did not observe an elevated neutrophil:lymphocyte ratio as is typical of most studies on the consequences of acute stress. However, the effect of glucocorticoids on the vertebrate cellular immune system is now known to be more complex than anticipated; they have not only anti-inflammatory effects such as lymphopenia but, contradictorily, can also have pro-inflammatory effects such as lymphocytosis, a phenomenon that researchers have only recently started to understand (Cruz-Topete and Cidlowski 2015). Additionally, lymphocyte frequencies are known to be more sensitive to glucocorticoid levels than neutrophils (Cole et al. 2009). Hence, the putatively beneficial lymphocytosis that we observed in our study may still have been triggered by chronic predator-related glucocorticoid release.

Fig. 2 (caption): Absolute lymphocyte numbers (mean ± SE) in peripheral blood smears of 4-year-old P. taeniatus that were subject to a lifelong difference in levels of perceived predation risk (alarm cue-exposed fish, dashed bar, N = 44; control fish, white bar, N = 45). **p = 0.002.

On the other hand, the plasticity-mediated maintenance of a chronic lymphocytosis is not without potential costs. This is because, as the probability of mutations increases with each cell replication event, a chronically increased production of lymphocytes is likely to accumulate DNA replication errors. Clonal selection and tumor progression models (Stetler-Stevenson 2005; Vineis et al. 2010; Greaves and Maley 2012) predict that such mutations then have the potential to cause a switch from a beneficial lymphocytosis to a malignant lymphocytosis such as, for example, a monoclonal B-cell lymphocytosis (MBL). In humans, MBL is an asymptomatic precursor condition for malignant chronic lymphocytic leukemia (Shim et al. 2010; Mowery and Lanasa 2012). This theoretical tumor progression is confirmed by studies on humans suggesting that persistent reactive polyclonal B-cell lymphocytosis can develop into malignant disorders such as lymphomas (de Haan and Pouwels 2006; Xochelli et al. 2015). As these malignant diseases are lethal, a shorter lifespan induced by the observed chronic lymphocytosis is likely to constitute one of the costs of cellular immune system plasticity, a cost that is outweighed only in environments with high injury risk. In line with the theory that traits only evolve to be plastic if they are costly (Ghalambor et al. 2007), this may be why an elevated proliferation of lymphocytes has evolved as a plastic rather than a fixed response. Future studies are required to expand on our findings. Because of the low amount of blood that we could collect from our experimental fish (∼0.5 to 5 µl per individual), we could not measure glucocorticoid concentrations, as such an analysis requires approximately 30-60 µl of blood. Hence, it is important to set up studies that measure how vertebrate glucocorticoid concentrations change over time in an experiment with chronic (i.e., over 50% of an individual's lifetime) exposure to stress. Additionally, researchers should aim to reveal on a cellular level why chronic exposure to stress only affects lymphocyte but not neutrophil numbers or neutrophil:lymphocyte ratios.
Furthermore, attempts should be made to directly determine the adaptive benefit of the observed lymphocytosis as induced by chronic exposure to an environment with high perceived predation risk. To do so, one would have to artificially injure fish that had previously been chronically exposed to the same treatments as here, and afterwards statistically compare wound healing speed, probabilities of developing diseases, and mortality rates between treatments. Further follow-up studies should also aim to directly measure the costs associated with chronic lymphocytosis by comparing the probability of leukemia occurrence as well as maximum lifespan between fish from the same treatments. More generally, future research should attempt to find additional examples of anticipatory plasticity of vertebrate cellular immune systems and, to do so, expand the hitherto lacking research on the consequences of chronic exposure to stressors that are associated with increased future injury probability. At the same time, immunological research should focus more on the impact of environmental cues that do not disturb physical integrity, which have been underrepresented to date.
On the freeness and projectiveness of Breuil-Kisin modules

In this work we consider Breuil-Kisin modules over the ring of Witt vectors $W(\kappa)$ of a residue field $\kappa$ of characteristic $p$, together with a finite flat $\mathbb{Z}_p$-algebra $R$. Given a Breuil-Kisin module $M$ over $W(\kappa)$ and taking the action of $R$ into account, we again obtain a Breuil-Kisin module $M$ over the ring $R \otimes_{\mathbb{Z}_p} W(\kappa)$. We study the freeness and projectiveness of this module.

A $p$-divisible group of height $h$ over $R$ is an inductive system $(G_v, i_v)_{v \geq 0}$, where (1) $G_v$ is a finite group scheme over $R$ of order $p^{vh}$, and (2) for each $v \geq 0$, the sequence
$$0 \to G_v \xrightarrow{\,i_v\,} G_{v+1} \xrightarrow{\,p^v\,} G_{v+1}$$
is exact (i.e., $G_v$ can be identified via $i_v$ with the kernel of multiplication by $p^v$ in $G_{v+1}$).

Definition 1.4. [6] Let $G$ be a $p$-divisible group over an integral domain $\mathcal{O}$ whose field of fractions $K$ has characteristic 0. Let $(G_n, i_n)$ be the inverse system associated to the $p$-divisible group $G$. Then the Tate module of $G$ is
$$T_p(G) = \varprojlim_n G_n(\overline{K}),$$
where we take the limit over the $i_n$ maps.

The ring $R \otimes_{\mathbb{Z}_p} W(\kappa)$ is not necessarily local; for example, $W(\kappa) \otimes_{\mathbb{Z}_p} W(\kappa)$ decomposes as a direct product of $[\kappa : \mathbb{F}_p]$ copies of $W(\kappa)$. So we can only give special cases and criteria providing an affirmative answer to the above question, which we prove in the next section.

Results

Theorem 2.1. If $R = \mathcal{O}_K$ is the ring of integers of a finite extension of $\mathbb{Q}_p$ (in particular, $R$ is regular, hence Gorenstein), then $M$ is free as an $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$-module.

Proof. Since $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$ is a regular local ring, all finitely generated modules have finite projective dimension. By the Auslander-Buchsbaum formula, we have
$$\operatorname{pd}(M) + \operatorname{depth}(M) = \operatorname{depth}\big((R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]\big).$$

Proof. It is sufficient to show that if $R \to R'$ is a finite flat morphism between regular local rings and if $M$ is an $R'$-module that is finite free over $R$, then $M$ is a finite free module over $R'$. But this follows using the same argument as in Theorem 2.1.

If we emphasize projectiveness rather than freeness, we get a positive answer.

Proof. Since $M/uM$ is projective over $W(\kappa)$, it is $u$-torsion free, and therefore projective over $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$. Decomposing $M = \bigoplus_\sigma M_\sigma$, where $\sigma$ runs over the embeddings of $W(\kappa)$ into $R$, and using the same reasoning, since $M$ is projective, each summand $M_\sigma$ is projective and hence free. Finally, the Frobenius permutes the $M_\sigma$ transitively, and so each $M_\sigma$ has the same rank, which implies that $M$ is actually free as an $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$-module.

Theorem 2.5. If $R$ is a ring with the property that $R \otimes_{\mathbb{Z}_p} W(\kappa) \cong \bigoplus R'$, where $\bigoplus R'$ denotes a direct sum of finitely many copies of the ring of integers $R'$ in the compositum of the fraction fields $\operatorname{Frac}(R)$ and $\operatorname{Frac}(W(\kappa))$, then the $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$-module $M$ is free.

Proof. In the proof we use the property that the Frobenius $\varphi$ permutes the components of $\operatorname{Spec}(R \otimes_{\mathbb{Z}_p} W(\kappa))$. By the stated property of $R$, any finite projective $R'[[u]]$-module is free, and so the module $M$ is free over each component of $(R \otimes_{\mathbb{Z}_p} W(\kappa))[[u]]$; that is, $M$ is a free module of some rank over each component of $R \otimes_{\mathbb{Z}_p} W(\kappa)$. Now the Frobenius $\varphi$ permutes the components of $\operatorname{Spec}(R \otimes_{\mathbb{Z}_p} W(\kappa))$ cyclically and, since the pullback $\varphi^*$ is injective, all the ranks must be equal. Hence $M$ is a free $(R \otimes_{\mathbb{Z}_p} W(\kappa))$-module in this case.

We now turn our attention to the $p$-adic Tate module $T_p(G)$ of the $p$-divisible group $G$, which will be a motivation for further work in this direction. We can recover the Tate module from the Breuil-Kisin module.
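As a concrete illustration of the non-locality claim above, the following LaTeX sketch spells out the idempotent decomposition, assuming $\kappa = \mathbb{F}_{p^f}$; it is a worked example under that assumption, not part of the original argument.

    % A minimal worked example of the splitting, assuming kappa = F_{p^f}.
    % Write W(kappa) = Z_p[x]/(g(x)) with g monic of degree f, irreducible mod p.
    % Since kappa contains all the roots of g mod p, Hensel's lemma lifts them,
    % so g splits into f distinct linear factors over W(kappa), and the Chinese
    % remainder theorem gives
    \[
      W(\kappa)\otimes_{\mathbb{Z}_p}W(\kappa)
      \;\cong\; W(\kappa)[x]/\big(g(x)\big)
      \;\cong\; \prod_{i=1}^{f} W(\kappa),
    \]
    % a product of [kappa : F_p] = f copies of W(kappa), whose factors are
    % permuted cyclically by the Frobenius --- the mechanism used in Theorem 2.5.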
The magnetic ground state of Sr2IrO4 and implications for second-harmonic generation

The currently accepted magnetic ground state of Sr2IrO4 (the −++− state) preserves inversion symmetry. This is at odds, though, with recent experiments that indicate a magnetoelectric ground state, leading to the speculation that orbital currents or more exotic magnetic multipoles might exist in this material. Here, we analyze various magnetic configurations and demonstrate that two of them, the magnetoelectric −+−+ state and the non-magnetoelectric ++++ state, can explain these recent second-harmonic generation (SHG) experiments, obviating the need to invoke orbital currents. The SHG-probed magnetic order parameter has the symmetry of a parity-breaking multipole in the −+−+ state and of a parity-preserving multipole in the ++++ state. We speculate that either might have been created by the laser pump used in the experiments. An alternative is that the observed magnetic SHG signal is a surface effect. We suggest experiments that could be performed to test these various possibilities, and also address the important issue of the suppression of the RXS intensity at the L2 edge.

I. INTRODUCTION

The physical properties of layered iridates, in particular Sr2IrO4, have been thoroughly investigated since the seminal paper of B. J. Kim and collaborators [1] and their suggested analogy with the physics of cuprate superconductors. The formation of a half-filled t2g doublet by the strong Ir spin-orbit interaction, which is then gapped by correlations, mimics what is seen in the cuprates, making Sr2IrO4 an insulator despite its Ir4+ ionic configuration with five occupied t2g electrons [1,2]. More recent experiments on doped iridates point to the emergence of a pseudogap [3] and, at low temperatures, a d-wave gap [4], therefore strengthening the analogy with cuprates. Most recently, a new experiment based on second-harmonic generation (SHG) [5] claimed the detection of an odd-parity, magnetic hidden order in Sr2IrO4, suggesting the presence of orbital currents as proposed by Varma for cuprates [6]. This followed an earlier bulk property study indicating a giant magnetoelectric effect in Sr2IrO4 [7]. Despite these analogies, there are also significant differences between Sr2IrO4 and La2CuO4. First, the insulating gap has a different character, spin-orbit plus Mott versus charge transfer; therefore, doped holes in Sr2IrO4 go into the Ir 5d states, and not into the oxygen 2p ones as in La2CuO4. Second, the 5d states of Ir are much more spatially extended than the 3d states of Cu, making the on-site Coulomb and exchange terms significantly weaker. Therefore, the physical motivation for orbital currents, based as it is on the near degeneracy of the transition metal d and oxygen p states [6], seems unlikely in the iridate case, where the oxygen 2p bands lie more than 3 eV below the Ir t2g doublet [8]. The existence of an SHG signal [5] points to a reduction of the magnetic space group symmetry 2/m1′ previously indicated by neutron and resonant x-ray measurements. Whether this reduction is due to orbital currents or another mechanism remains to be seen. If we analyze the relative stacking along the c-axis of the ferromagnetic (FM) in-plane component of the moment in each of the IrO2 planes, we find that three inequivalent configurations are possible. They can be labeled as −++−, ++++ and −+−+ (Fig. 1), where ± refer to the projection of the FM component in each plane along the b-axis, with the first configuration being that identified in Sr2IrO4.
As detailed in Section III, both ++++ and −+−+ lead to a symmetry reduction of 2/m1′ and, for different reasons, can explain the SHG results. The former was found by resonant x-rays in Ref. 9 in a 0.3 T magnetic field and has also been identified by both neutrons and resonant x-rays upon doping with Rh [10-12]. Since the inter-plane spin exchange has been estimated to be as small as 1 µeV [13], one could speculate that the inter-plane magnetic pattern might be disrupted by a laser pulse of 1 mJ/cm^2 fluence as used in Ref. 5. We argue that any such pattern breaking should in general lead to an SHG signal because of the resulting symmetry reduction. Another possibility is that the SHG arises from a magnetized surface, which also has the desired symmetry. For these reasons, we believe the SHG signal can in principle be explained by magnetism. The aim of the present paper is to critically revisit several aspects of Ref. 5, looking for alternative explanations of the SHG signal, and as a byproduct to address some important issues concerning the suppression of the resonant x-ray scattering (RXS) intensity at the Ir L2 edge. To reach our goals, the present article is organized as follows: in Section II, we review the details of the crystal and magnetic symmetries of Sr2IrO4, and show how RXS data on Rh-doped samples below T_N might be explained in terms of the −+−+ state as well as the previously suggested ++++ state [10-12]. We propose further RXS and neutron experiments to clearly identify the actual magnetic pattern. Section III is devoted to an analysis of the SHG experiment from the quantum-mechanical microscopic expressions of the tensors involved. This allows us to show that only two magnetic space groups are consistent with the SHG experiment. The first is 2′/m, advocated in Ref. 5, which is also the magnetic space group of the −+−+ magnetic pattern. The second is 2′/m′, which is the magnetic group corresponding to the ++++ pattern. We characterize the multipole ranks of the order parameters identified by the SHG experiment for each magnetic group. In particular, for the 2′/m magnetic space group of the −+−+ state, the allowed order parameters have the symmetry of inversion-odd magnetic multipoles of rank one, two and three: toroidal dipole, magnetic quadrupole and toroidal octupole (the magnetic quadrupole, though, does not contribute in an SS polarization geometry). Instead, for the 2′/m′ magnetic space group of the ++++ state, the allowed order parameters have the symmetry of inversion-even magnetic multipoles up to rank three: magnetic toroidal monopole, magnetic dipole, magnetic toroidal quadrupole and magnetic octupole. In this Section, we also speculate on whether the SHG signal is induced by the laser pump, or rather is a surface effect (the magnetic point group of the surface being 2′). Several experiments are suggested to test these possibilities. In Section IV, we address the important issue of the suppression of the RXS intensity at the L2 edge and the results obtained in the literature on the J_eff = 1/2 doublet of Sr2IrO4. We discuss some of the critical aspects of this doublet and clarify its connection to the RXS experiments.
Finally, in Section V, ab initio simulations for some key x-ray absorption spectroscopy (XAS) experiments are presented, with the dual goal of confirming (or modifying) the J_eff = 1/2 doublet picture, and of testing whether the two proposed magnetic symmetries could explain the SHG experiment [5]. Some conclusions are offered in Section VI.

II. CRYSTAL AND MAGNETIC SYMMETRIES IN Sr2IrO4: ANALYSIS OF RESONANT STRUCTURE FACTORS

The original analysis of the crystal space group of Sr2IrO4 suggested I41/acd [15], with the 8 Ir ions in this unit cell related by the symmetry operations of the 8a site of I41/acd, as detailed in Table I. For future reference, we note that, though the point symmetry of the Ir site is -4, the further reduction from the 4/m point symmetry, and therefore the breaking of inversion symmetry, is determined only by the oxygens in the IrO2 planes above (below) that of the Ir, and these are quite distant (> 6.5 Å). For this reason, the effect of inversion breaking at an Ir site, though non-zero, is extremely small. Of course, inversion symmetry in the unit cell is restored for the global symmetry I41/acd. Very recently, the crystal structure has been resolved by neutron scattering to be rather I41/a [11], explaining the observation of forbidden Bragg peaks in earlier experiments [16,17] and in keeping with recent SHG data [18]. Within this space group, the 8 Ir atoms in the unit cell split into two nonequivalent groups of 4 atoms, each with the same point group -4 (sites 4a and 4b). The same remark concerning the inversion breaking made above still applies, as the symmetry reduction from I41/acd to I41/a, leading to the loss of the two glide planes containing the c-axis, is determined by just a tiny displacement of the planar oxygen atoms (< 0.1%). As all the symmetry analysis in the literature up to now has been performed with the I41/acd space group, in what follows we shall use both the I41/acd and I41/a space groups, highlighting any differences between the two.

A. Resonant structure factors for I41/acd and I41/a

In the I41/acd space group (setting 2), the 8 Ir atoms occupy the positions shown in Table I, in fractional units (with a = b = 5.4846 Å, c = 25.804 Å) [15]. They are characterized by a surrounding distorted oxygen octahedron, as shown in Fig. 1 for the basal (ab) planes, with apical oxygens along the c-axis at 2.057 Å and planar oxygens at 1.979 Å, giving a tetragonal distortion of 4% [15]. The planar oxygens are rotated by about 12° around the c-axis: this rotation is at the basis of the loss of the I4/mmm space-group symmetry that characterizes the analogous compound, Ba2IrO4. Below T_N ≈ 230 K [13], an antiferromagnetic state develops, characterized by magnetic moments lying in the basal plane and forming an angle of about 12° with the a-axis, as shown in Fig. 1. The in-plane magnetic pattern is such as to have an antiferromagnetic order parameter along the a-axis and a ferromagnetic component along the b-axis (smaller by the ratio sin 12°/cos 12°), leading to the loss of tetragonal symmetry. The ferromagnetic component is, however, compensated when summed over the four IrO2 layers of the unit cell. In the undoped compound, the ferromagnetic component along the b-axis has the pattern −++− [9], as shown in Fig. 1. In order to write down the resonant x-ray structure factor at the Ir L3 edge, we need to know the magnetic symmetry relations among the 8 Ir atoms of the 8a site.
In order to allow for comparison with previous work, we list in Table I the Ir atoms with the −++− pattern and the symmetry operations that need to be applied to the first Ir atom in order to obtain the others (for the first atom, this column corresponds to its point-group symmetries). For future convenience, we also list two other patterns of the ferromagnetic component, the ++++ and the −+−+ patterns, as shown in Fig. 1 and Table I. The ++++ structure has been identified as the magnetic structure of Sr2IrO4 in a magnetic field (H ≥ 0.3 T) directed in the ab-plane [9,19] and has also been suggested as the magnetic structure in Rh-doped samples [10,11]. The −+−+ magnetic structure has not been suggested in the literature up to now, but we claim that it can describe some of the experimental results on Rh-doped samples, as detailed below [12]. Both patterns can explain the SHG experiment, as we shall see. Here T̂, Î and Ê are the time-reversal operator, the inversion and the identity, and m̂_i and Ĉ_2i are the mirror symmetry and the two-fold rotation around the axis i (x, y, z parallel to a, b, c). Taking into account these symmetries, the resonant x-ray structure factor for the −++− pattern, summed over the 8 Ir atoms, can be written as in Eq. (1), where f_1 is the resonant atomic scattering amplitude for Ir atom 1 (see, e.g., Ref. 20). Below we write, for future use, the analogous structure factors for the ++++ and −+−+ patterns of Fig. 1. In both cases, the time-reversal symmetry relating Ir 1 and Ir 5 in Eq. (1) is replaced by the identity (see Eqs. (2) and (3)). The difference between the two structures lies in the way (for example) layer z = 7/8 is related to layer z = 5/8. For the ++++ structure, Ir 1 and Ir 2 atoms are related by inversion, as for the −++− structure, whereas for the −+−+ structure, Ir 1 and Ir 2 atoms are related by T̂Î. This is the second factor on the right-hand side of Eqs. (2) and (3). We remark that the last factor, (1 + T̂ m̂_x (−1)^(h+k)), is unchanged in all three patterns because it relates the two in-plane Ir atoms, whose relative behavior is not affected by the overall stacking along the c-axis. This term is, however, responsible for changes in the structure factor when the symmetry is reduced to I41/a. In fact, such a reduction is determined by the breaking of the m̂_x (and m̂_y) symmetry. This implies that (for example) Ir 1 and Ir 4 atoms in Table I now belong to two inequivalent sites (4a and 4b). This in turn leads to the altered structure factors of Eqs. (4)-(6). We notice, for the next section on the SHG interpretation, that all the previous equations remain valid for the SHG experiment by putting h = k = l = 0 (in the optical regime, only the zone center is involved). Interestingly, this shows that optical reflections in the −++− pattern are only sensitive to time-reversal and parity-even observables, otherwise the (1 + T̂)(1 + Î) prefactor of Eq. (4) would be zero. Likewise, optical reflections in the ++++ pattern are only sensitive to parity-even quantities (both magnetic and non-magnetic, see Eq. (5)), and in the −+−+ pattern they are sensitive to T̂Î-even observables (i.e., either magnetic, parity-odd multipoles, or non-magnetic, parity-even ones). There are two groups of resonant x-ray reflections that have been studied in the literature at the Ir L3 edge, and they deliver two independent pieces of information. In the first group, we have those reflections which served to identify the −++− magnetic space group of the stoichiometric compound.
They are of the kind (1,0,4n), (0,1,4n+2) [21] and (0,0,2n+1), which we analyze within the I41/acd space group to compare with the existing literature. Later, we shall highlight the differences induced by the reduction to I41/a. From Eq. (1), as h+k+l is odd, these three reflections must be magnetic in the −++− state, and they vanish for the other two patterns. Their presence is therefore a signature of the −++− state. Notice also that for SP scattering [22], f_1 is proportional to k_o · m ≡ m_(k_o) (with k_o the outgoing wave-vector and m the magnetic moment). The selection rule imposed by the term (1 + T̂ m̂_x (−1)^(h+k)) gives a signal proportional to (1 + m̂_x) m_(k_o) ∝ m_a for both F^(−++−)_(1,0,4n) and F^(−++−)_(0,1,4n+2). Here m_a is the projection of the magnetic moment along the a-axis. Instead, for F^(−++−)_(0,0,2n+1), the signal is proportional to (1 − m̂_x) m_(k_o) ∝ m_b, as noted earlier [23]. This method allows one to obtain the direction of the magnetic moment (the rotation angle of 12° reported in the literature) by means of the ratio of the m_a to the m_b components. However, this assumes the I41/acd space group. The reduction to I41/a implies the breakdown of the (1 + T̂ m̂_x (−1)^(h+k)) selection rule due to the inequivalence of Ir 1 and Ir 4. Though this reduction is small, as stated above, it could lead to changes in the rotation angle, and it might be worthwhile to repeat this analysis for I41/a, as now the magnetic moments of the two in-plane Ir atoms become inequivalent (though the argument of Ref. 2 should still impose a locking with the rotation of the oxygen octahedra). As a last remark on the I41/acd to I41/a symmetry reduction, we emphasize that it does not play any role for the coupling along the c-axis, as the symmetry that it breaks is the one linking the two Ir atoms in the same plane. The mirror selection rules above can be checked with the short numerical sketch below.
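This is a minimal check that the projectors (1 ± m̂_x)/2 acting on an axial vector isolate the m_a and m_b components; the moment direction is illustrative (12° from the a-axis), and the 1/2 normalization is ours.

    import numpy as np

    phi = np.radians(12.0)
    m = np.array([np.cos(phi), np.sin(phi), 0.0])   # moment ~12 deg from the a-axis

    # Mirror m_x acting on an axial (magnetic) vector: the component normal to
    # the mirror plane survives, the in-plane components change sign.
    mx_axial = np.diag([1.0, -1.0, -1.0])

    print(0.5 * (m + mx_axial @ m))   # (1 + m_x)/2: keeps m_a -> [cos(phi), 0, 0]
    print(0.5 * (m - mx_axial @ m))   # (1 - m_x)/2: keeps m_b -> [0, sin(phi), 0]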
A second group of reflections, more interesting for our work as they are key to interpreting the SHG experiment, are of the kind (1,0,4n±1) (similarly (0,1,4n±1)). As is clear from Eq. (4), these reflections cannot have a magnetic origin, and their appearance below T_N indicates an alteration of the −++− configuration. They were found either by applying a small magnetic field in the stoichiometric material [9] or by Rh doping [10,11]. In both cases they were suggested as being signatures of the ++++ state. We show below that this is not necessarily the case, as the −+−+ configuration can give rise to the same magnetic or charge reflections, and further investigation is needed to disentangle the two patterns, at least in the case of Rh doping [12]. As before, we start our analysis with the I41/acd space group. Though h+k+l is even, reflections (1,0,4n±1) are Bragg forbidden for the −++− configuration, since h+k is odd (Eq. (1)). They can, however, be explained as magnetic in origin (but can also be non-magnetic in the I41/a space group, see below) if the system undergoes a phase transition to either the ++++ configuration or the −+−+ configuration. If this is the case, the corresponding structure factors lead to the same intensity, so from a purely magnetic analysis the two patterns cannot be differentiated. We should notice here that care is needed in identifying the orthorhombic a and b axes, as reversing them might lead to an incorrect pattern identification. In fact, switching the h and k Miller indices corresponds to switching the ++++ and −+−+ patterns. In all these expressions, m_a is the a-axis component of the magnetic moment at sites Ir 1 and Ir 4 (entering through the difference of f_1 and f_4). In order to disentangle the two magnetic patterns, we need to exploit the differences between the T̂Î and Î operators that relate f_1 to f_2. This can only be done by allowing an interference with the charge scattering. In fact, the charge scattering in resonant conditions is not only given by the (scalar) Bragg scattering, but also by the anisotropic scattering that does not obey the extinction rule (1 + m̂_x (−1)^(h+k)) = 0, because the mirror symmetry is not necessarily +1. The intensity of the charge scattering at these (1,0,4n±1) reflections will be increased by the symmetry reduction to the I41/a space group. As stated above, such a group breaks the m̂_x symmetry, violating the above extinction rule even at the level of the scalar charge scattering; in fact, the existence of these reflections in neutron scattering above T_N has been taken as a signature of this space group [16]. For RXS, magnetic and non-magnetic terms are out of phase by π/2. This implies that, writing the non-magnetic atomic scattering factor as f_c (in the I41/a space group, from Eqs. (5) and (6), it is f_c of Ir 1 minus that of Ir 4) and the magnetic one as f_m (likewise the difference between Ir 1 and Ir 4), apart from an overall phase factor, the structure factors take the form f_c + i f_m for the ++++ pattern at both (1,0,4n+1) and (1,0,4n−1), and f_c ± f_m for the −+−+ pattern. From these expressions, we see that the interference of magnetic and non-magnetic terms allows one to differentiate the two patterns. In the case of the ++++ pattern, the signals at (1,0,4n+1) and (1,0,4n−1) are identical (apart from the different geometrical factors due to the different Q). But they are different for the −+−+ pattern, due to the constructive/destructive interference seen in these expressions (see the numerical sketch below). A numerical simulation of these findings by the FDMNES code (see Section V) is reported in Fig. 2. We should remark here on an important warning for this analysis. In the previous expressions, we have treated the atomic scattering factors f_c and f_m as if they were real quantities, whereas, close to an edge, they are complex because of the energy denominator (ℏω − ΔE + iΓ), where ΔE measures the energy difference of the two levels related by the photon transition and Γ measures the inverse of the core-hole lifetime. If, however, we are very close to a resonance, then |ℏω − ΔE| ≪ Γ, and the denominator becomes purely imaginary, so all the previous discussion on interferences keeps its validity (Fig. 2, being at the L3 maximum, corresponds to this case). Of course, the same is true in the opposite, non-resonant case: |ℏω − ΔE| ≫ Γ. It is not, however, true when |ℏω − ΔE| ~ Γ, or if two resonances are sufficiently close to one another. Even though magnetic neutron scattering differs from magnetic RXS (in particular, there is no resonant denominator), in Ref. 11 the authors find a different signal at (101) and at (103) in the spin-flip versus non-spin-flip ratios. In the light of our previous analysis, this would seem to point to the −+−+ configuration. In RXS, this shows up as an alternation of the (10L) (L odd) intensity versus L for −+−+ (due to the above-mentioned interference), as opposed to the smooth dependence for the ++++ configuration, which, as we show in Fig. 2, is quite pronounced in ab initio calculations.
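The alternation argument can be checked with a few lines of arithmetic. The amplitudes below are arbitrary illustrative numbers (not Sr2IrO4 values), used only to contrast the two structure-factor forms, first with real amplitudes and then with a complex resonant denominator.

    import numpy as np

    f_c, f_m = 1.0, 0.6   # illustrative charge and magnetic amplitudes (arbitrary units)

    # -+-+ pattern: real interference, so (1,0,4n+1) and (1,0,4n-1) differ.
    print(abs(f_c + f_m)**2, abs(f_c - f_m)**2)   # 2.56 vs 0.16: strong alternation with L

    # ++++ pattern: the pi/2 relative phase adds in quadrature, so both
    # reflections have the same intensity.
    print(abs(f_c + 1j*f_m)**2)                   # 1.36 for (1,0,4n+1) and (1,0,4n-1) alike

    # Near an edge each amplitude acquires a complex resonant denominator
    # 1/(hw - dE + i*Gamma); if the charge and magnetic resonances sit at
    # slightly different energies, the clean +/- contrast is partly scrambled.
    hw, Gamma = 11215.0, 5.25                     # eV; roughly the Ir L3 regime
    fc = f_c / (hw - 11213.0 + 1j*Gamma)
    fm = f_m / (hw - 11216.0 + 1j*Gamma)
    print(abs(fc + fm)**2, abs(fc - fm)**2)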
A smooth behavior versus L was seen by Clancy et al. for their Rh-doped samples [10], which is probably why they advocated the ++++ state over the −+−+ state. One issue is that their azimuthal plots indicate multi-domain effects. Another is that the energy at which the measurement was performed might have induced extra interference due to the energy denominator, as discussed above. We suggest that additional RXS and neutron experiments on single domains be performed to check which pattern (−+−+ or ++++) is actually induced by Rh doping.

Fig. 2. The alternating intensities (at the RXS maximum for L3) for the magnetic pattern −+−+ as opposed to ++++, of (a) the (1,0,4n−1) and (1,0,4n+1) reflections, and (b) the (0,1,4n−1) and (0,1,4n+1) reflections. These simulations were performed with the FDMNES code for a cluster radius of 6.5 Å and a Hubbard U on the Ir sites of 2 eV. For (10L), the azimuthal angle was 0°, and for (01L), it was 90°. In all cases, the polarization geometry was SP, with an assumed core-hole width of 5.25 eV. Similar oscillations are found for a cluster radius of 3 Å, though they are less pronounced.

Regardless, both the −+−+ and ++++ configurations can explain the SHG experiment [5], without the need to invoke exotic magnetic symmetries. As shown in Table II, the −+−+ state has the magnetic space group 2′/m, and the ++++ state the 2′/m′ one. As detailed in the next Section, both are compatible with the SHG result.

III. A REANALYSIS OF THE SYMMETRIES OF THE SHG EXPERIMENT

In this Section, we reanalyze the SHG experiment [5], listing all the symmetries that allow one to address the experimental results. In subsection A, we discuss the two susceptibility tensors, χ^(e) and χ^(m) (defined in Appendix A.1), possibly involved in the interference pattern with the high-temperature signal, which is determined by χ^(q) [18]. Both tensors should be investigated on the same footing because it is known, e.g., for Cr2O3, that the magnetic parts [24] of χ^(e) and χ^(m) are of the same order of magnitude [25]. We also identify the symmetry of the order parameters associated with χ^(e) and χ^(m) for linear polarizations, and their multipolar ranks. Then, in subsection B, we evaluate the different azimuthal dependences of χ^(e) and χ^(m). This, together with a full analysis of the magnetic symmetries of Sr2IrO4 and the findings of subsection A, allows us to point towards the following interpretation of the SHG experiment [5]: only two magnetic space groups can explain the interference pattern of the SHG experiment. The former is 2′/m (as already suggested in Ref. 5). The associated order parameters are all inversion and time-reversal odd, with the symmetry of either a toroidal dipole, a magnetic quadrupole or a toroidal octupole (the magnetic quadrupole cannot be observed in the SS geometry). The latter is 2′/m′, which is characterized by an order parameter with the symmetry of either a magnetic toroidal monopole, a magnetic dipole, a magnetic toroidal quadrupole or a magnetic octupole. In either case, we suggest that the observed symmetry reduction is not determined by exotic magnetic patterns, but by a transition to the −+−+ state (2′/m group) or to the ++++ state (2′/m′ group). How this transition could happen is discussed at the end of this Section, along with an alternative explanation that what is observed is surface magnetic SHG.
A. Order parameters associated with SHG tensors

Second-harmonic generation is a third-order process in the matter-radiation interaction, determined by two absorptions of a photon ℏω and the emission of a photon 2ℏω [26], as pictorially described in Fig. 3. The full cross-section and its explicit derivation are reported in Appendix A.1. The total scattering amplitude, A_SHG, is obtained, as for RXS [20], by a scalar coupling of tensors representing the properties of the sample with the corresponding tensors describing the electromagnetic field. The full SHG amplitude, reported in Eq. (A3), is then composed of scalar contractions of the susceptibility tensors with the polarization vectors ε and the wave-vectors k of the electromagnetic field. As is clear from the definitions in Appendix A.1, χ^(e) is characterized by all three transitions (the two at frequency ω and the one at frequency 2ω) being of electric-dipole character (it is therefore inversion odd), while χ^(m) is characterized by one magnetic-dipole and two electric-dipole transitions (it is therefore inversion even). They can both interfere with the electric-quadrupole tensor χ^(q) (characterized by one electric-quadrupole and two electric-dipole transitions), responsible for the SHG signal in the high-temperature phase of Sr2IrO4 [18]. From Eq. (A15) in Appendix A, η^(q), the transition-matrix element associated with the susceptibility χ^(q), contains an extra factor of i coming from the expansion of e^(i k·r), and this determines the phase shift of π/2 of these matrix elements with respect to those of the η^(e) term. Following the analysis reported in Appendix A.2, we therefore have that the time-reversal even part of η^(q) is imaginary (and the time-reversal odd part of η^(q) real); the time-reversal even part of η^(e) is real (and its time-reversal odd part is imaginary); and, finally, the time-reversal even part of η^(m) is imaginary (and its time-reversal odd part is real). This implies that, in the non-resonant regime where the denominators are real, the time-reversal even part of χ^(q) can interfere with both the time-reversal odd part of χ^(e) and the time-reversal even part of χ^(m), but not with the time-reversal even part of χ^(e) or the time-reversal odd part of χ^(m). However, the presence of the imaginary damping factors iΓ_n in the resonant denominators scrambles this analysis, as detailed in Appendix A.2, so that all terms can interfere among themselves. Yet this analysis in terms of the η^(e,m,q)_(αβγ;ln) without the resonant denominator is fundamental, as in the case of RXS, in order to identify the time-reversal properties of the order parameters associated with this process, which are determined just by the matrix elements. In fact, the physical origin of the complex numerators and denominators in second-order (RXS) or third-order (SHG) expressions is profoundly different. The imaginary unit in the numerator is a consequence of magnetism: in its absence, the eigenstates Φ_j of Eq. (A2) can be chosen real, and all matrix elements η^(e,m,q)_(αβγ;ln) (see Appendix A.2) would be real as well. Therefore, the time-reversal symmetry of the matrix elements η^(e,m,q)_(αβγ;ln) is related to their real and imaginary parts. The imaginary unit in the denominator is, instead, a consequence of damping due to spontaneous emission [27]: that sign cannot be changed, because damping is irreversible. The previous classification is reminiscent of the simpler RXS case [20,28].
There are, however, several differences, because of the intrinsic asymmetry of the SHG amplitude (two photons ω in; one photon 2ω out). For example, it is well known that the electric dipole-electric dipole (E1-E1) approximation in RXS leads to a time-reversal odd, imaginary part of the matter tensor, which is proportional to the magnetic dipole and is scalarly coupled to the vector product of the incoming and outgoing polarizations; as this product vanishes for parallel polarizations, no magnetic signal occurs in the SS channel. This is not the case for SHG, because of the asymmetry of the denominators Δ^(i)_(l,n) with respect to the exchange n ↔ l, as shown in detail in Appendices A.1 and A.2: indeed, a magnetic SS signal is quite common in SHG [26]. The general classification of the order parameters associated with the transition matrix elements in the SHG susceptibilities is quite lengthy and will be treated in a future publication. As already reported in Ref. 29 for second-order susceptibilities, the order parameters in the optical regime, differently from the x-ray regime, are correlation functions, which are much harder to analyze. Here we focus on the SHG experiment of Ref. 5 and, in particular, consider just the symmetry of the order parameter associated with the χ^(e) and χ^(m) tensors when the incoming and outgoing electric fields are linearly polarized, and therefore real. Consider for example χ^(e) and Eq. (A42): the amplitude A_SHG is a scalar quantity. We can take advantage of this property to decompose the susceptibility χ^(e) into spherical tensors, as each of them must be scalarly coupled to an equivalent spherical tensor representing the polarization properties. Each spherical tensor derived from χ^(e,m,q)_(αβγ) is an irreducible representation of the rotation group whose symmetry can be identified with that of a given multipole. We can rewrite Eq. (A42) in the coupled form of Eq. (13). Though there are several ways to couple three vectors into irreducible components, Eq. (13) suggests the most natural one: the symmetry of the ε_i^γ ε_i^β part implies that only 6 out of 9 cartesian components contribute in this coupling; they form a scalar tensor (T^(0)) and a second-rank spherical tensor (T^(2)_m, m = −2 to 2). In turn, these two spherical tensors couple to the remaining outgoing polarization vector ε_o. The coupling of the scalar (zeroth-rank spherical tensor) with the vector ε_o (a first-rank spherical tensor) gives a first-rank spherical tensor (O^(1)). The coupling of the second-rank spherical tensor T^(2)_m with the vector ε_o leads to three spherical tensors, O^(1), O^(2) and O^(3) (as in the usual coupling of angular momenta). The explicit expression for all these tensors is given in Appendix A.3. The case of χ^(m) is less straightforward because of the substitution of ε_(i,o)^α with (ε_(i,o) × k_(i,o))^α and the associated symmetrization over all three terms, as reported in Appendix A.3. However, mutatis mutandis, the order parameters are in this case spherical tensors of rank i = 0, 1, 2, 3: Õ^(i). In this case, more tensors of the same order can appear, as detailed in Appendix A.3. Of course, the O^(i) are all inversion-odd and the Õ^(i) are all inversion-even. The former are associated with order parameters with the symmetry of a toroidal dipole, a magnetic quadrupole and a toroidal octupole (for the time-reversal odd part), and with the symmetry of an electric dipole, an axial toroidal quadrupole and an electric octupole (for the time-reversal even part).
The latter are associated with order parameters with the symmetry of a magnetic toroidal monopole, a magnetic dipole, a magnetic toroidal quadrupole and a magnetic octupole (for the time-reversal odd part), and with the symmetry of an electric charge, an axial toroidal dipole, an electric quadrupole and an axial toroidal octupole (for the time-reversal even part). As stated above, we can only speak of an order parameter 'with the symmetry of', because in the optical regime, differently from the x-ray regime, all the states involved are band-like and the "order parameters" are rather many-body correlation functions [29,30]. In the following subsection, we specialize this analysis to the magnetic symmetries of Sr2IrO4.

B. SHG symmetry analysis applied to Sr2IrO4

The magnetic space group of the −++− state of Sr2IrO4 associated with the I41/a crystal symmetry is 2/m1′, as is clear from Table II, when only the 4 equivalent Ir atoms of the I41/a group are considered (e.g., Ir 1, Ir 2, Ir 5, Ir 6). The behavior of the two tensors χ^(e) and χ^(m) under the magnetic symmetry group 2/m1′ and its subgroups is analyzed here and in Appendix A.4, the aim being to find out which magnetic subgroups allow for both the interference with the time-reversal even χ^(q) signal of the non-magnetic phase (found in Ref. 18) and the odd ψ-dependence seen in the experimental data (Figs. 1 and 3 of Ref. 5). As shown in Eq. (14) below, the key feature for having an odd ψ-dependence is to have allowed cartesian tensors with an even dependence on z (which means an odd dependence on x and y, as both χ^(e) and χ^(m) are third-rank cartesian tensors). This statement follows from the expressions for the electromagnetic field (in = incoming, frequency ω; out = outgoing, frequency 2ω; θ is the angle between the outgoing beam and the c-axis; ψ is the azimuthal angle around the c-axis, with ψ = 0 when the in-plane projection of the incoming wave-vector is along the a-axis):

    ε_S^in = ε_S^out = (−sin ψ, cos ψ, 0),
    ε_P^in = (cos θ cos ψ, cos θ sin ψ, sin θ),                    (14)
    ε_P^out = (−cos θ cos ψ, −cos θ sin ψ, sin θ),

and H_P^in = H_P^out. For future use, we also write k_in = (sin θ cos ψ, sin θ sin ψ, −cos θ) and k_out = (sin θ cos ψ, sin θ sin ψ, cos θ). For the eight magnetic groups discussed below, those allowing for a third-rank tensor with an odd ψ-dependence can easily be picked out from Tables 4 and 7 of Birss [31]. For the time-odd case of interest, there are only two possibilities: a polar tensor for 2′/m (that is, χ^(e)) and an axial tensor for 2′/m′ (that is, χ^(m)). For the unlikely time-even case, there is only a polar tensor for m1′ and an axial tensor for -11′. A detailed demonstration of these properties is provided in Appendix A.4. We remark that the surface electric-dipole contributions for the relevant point groups, as listed in Ref. 18 (supplemental material), have an even dependence on ψ and can be excluded for this reason. A quick numerical check of this geometry is given below.
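The sketch below verifies transversality for the geometry above and shows the definite parity of the cartesian components under ψ → −ψ; the explicit sign conventions in Eq. (14) are our reconstruction, so only convention-independent properties are checked.

    import numpy as np

    def geometry(theta, psi):
        """Wave-vectors and polarizations for the reflection geometry of Eq. (14)."""
        k_in  = np.array([np.sin(theta)*np.cos(psi), np.sin(theta)*np.sin(psi), -np.cos(theta)])
        k_out = np.array([np.sin(theta)*np.cos(psi), np.sin(theta)*np.sin(psi),  np.cos(theta)])
        e_S   = np.array([-np.sin(psi), np.cos(psi), 0.0])          # shared by in and out
        e_P_in  = np.array([ np.cos(theta)*np.cos(psi),  np.cos(theta)*np.sin(psi), np.sin(theta)])
        e_P_out = np.array([-np.cos(theta)*np.cos(psi), -np.cos(theta)*np.sin(psi), np.sin(theta)])
        return k_in, k_out, e_S, e_P_in, e_P_out

    th, ps = np.radians(20.0), np.radians(35.0)
    k_in, k_out, e_S, e_P_in, e_P_out = geometry(th, ps)

    # Transversality: every polarization is orthogonal to its wave-vector.
    print(np.dot(k_in, e_S), np.dot(k_in, e_P_in), np.dot(k_out, e_P_out))

    # Under psi -> -psi each cartesian component has a definite parity (x even,
    # y odd for the wave-vectors), which is what makes a given chi contraction
    # odd or even in psi.
    k_in2 = geometry(th, -ps)[0]
    print(np.isclose(k_in2[0], k_in[0]), np.isclose(k_in2[1], -k_in[1]))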
It is interesting to identify the allowed components of the order parameters for the two space groups 2′/m and 2′/m′. For the latter, the magnetic-dipole order parameter is the in-plane ferromagnetic component along the b-axis, noting that the SHG signal is actually determined by higher-order correlation functions with the same symmetry (Appendix A.3), as opposed to those determined from core-hole spectroscopies. For the 2′/m space group, the calculation of the toroidal dipole and magnetic quadrupole is slightly more complex: we can explicitly calculate their values for the −+−+ pattern by taking, respectively, the antisymmetric and the symmetric traceless parts of the cartesian tensor

    t_(αβ) = Σ_i r_i^α m_i^β .

Here the sum is performed over the 8 Ir atoms in the unit cell, and r_i (m_i) is the position (magnetic moment) of atom i. Table I shows that the only components different from zero for the −+−+ pattern are the toroidal dipole Ω_x (antisymmetric) and the magnetic quadrupole M_yz (symmetric, traceless). All other components are zero. The absolute value of both Ω_x and M_yz is (1/2)|m||c| sin φ, where |m| is the value of the magnetic moment at the Ir sites, |c| is the value of the c-axis length (25.804 Å) and φ ≈ 12° is the angle of the magnetic moment with the a-axis. Again, we remind the reader that the SHG signal is actually associated with higher-order correlation functions with the same symmetry as Ω_x and M_yz (Appendix A.3). This bookkeeping is illustrated numerically in the sketch below.
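The quoted magnitude (1/2)|m||c| sin φ can be reproduced with a toy stacking model: eight moments, two per layer at heights z = c/8, 3c/8, 5c/8, 7c/8, with the FM b-components following the −+−+ sign pattern. In-plane offsets and the AFM a-components are ignored here, so this is only a sketch of the bookkeeping, not the full Table I calculation.

    import numpy as np

    m_mag, c_len, phi = 1.0, 25.804, np.radians(12.0)   # |m| (arb.), c (Angstrom), moment angle
    m_b = m_mag * np.sin(phi)                           # FM component along b

    z    = np.repeat(c_len * np.array([1, 3, 5, 7]) / 8.0, 2)   # two Ir per layer
    sign = np.repeat([-1, +1, -1, +1], 2)                       # the -+-+ stacking

    r = np.column_stack([np.zeros_like(z), np.zeros_like(z), z])           # positions (0, 0, z_i)
    m = np.column_stack([np.zeros_like(z), sign * m_b, np.zeros_like(z)])  # moments along +/- b

    t     = r.T @ m              # t_{ab} = sum_i r_i^a m_i^b
    Omega = 0.5 * (t - t.T)      # antisymmetric part: toroidal dipole
    M     = 0.5 * (t + t.T)      # symmetric (here traceless) part: magnetic quadrupole

    print(Omega[2, 1], M[1, 2], 0.5 * m_mag * c_len * np.sin(phi))   # all three coincide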
Some brief comments on the azimuthal dependence of the experimental data are in order. The high-temperature data originate from a χ^(q) signal, since χ^(e) is not allowed and χ^(m) does not have the correct angular dependence [18,31]. These data cannot be fit by functions of just 4/mmm symmetry [18]. In fact, the coefficients of the 4/m terms not in 4/mmm are larger than the 4/mmm term, despite the weak nature of the 4/m symmetry breaking (Fig. 4(a)). Similar observations apply as well to the third-harmonic signal [18]. Whether this is a real effect, or due to other factors not taken into account in the analysis, remains to be seen. In the low-temperature phase, where C4 symmetry is broken down to C1 [5], we find that dipole terms alone are not sufficient to fit the change in the azimuthal dependence (in SS geometry, quadrupole terms do not enter, and the dipole term goes as sin(ψ)). This means that octupole terms play a significant role (see caption of Fig. 4(b)), as is often the case. However, the reason why the −+−+ or ++++ magnetic pattern replaces the −++− pattern remains to be found. Here we advance some hypotheses and propose new experiments to check them. We begin with the nature of the SHG process in Sr2IrO4. Zhao et al. [5] propose a doubly non-resonant virtual transition for the incoming 1.55 eV photon (λ = 800 nm) from the O 2p band, since it is more than 3 eV below the Fermi energy (Fig. 5). We propose instead that the incoming 1.55 eV photons undergo a doubly resonant transition from the filled J_eff = 3/2 band to the empty partner of the J_eff = 1/2 doublet, and then to the lower part of the e_g band (Fig. 5). The existence of the first resonance is demonstrated by a previous pump-probe experiment [32], and the position of the second one can be inferred from O K-edge measurements in x-ray absorption [8] and x-ray inelastic scattering [33]. This level scheme has been advocated by a recent optics measurement as well [34]. The intensity of the doubly resonant path will be enhanced by a factor ~500 (for a typical damping Γ ~ 0.1 eV), due to the resonant denominator of Eq. (A3). Whereas in reflection geometry, as in Ref. 5, both processes generate an SHG signal, measurements in a transmission geometry would unambiguously identify one or the other [35]: the strong damping due to real absorption would deplete the doubly resonant mechanism, contrary to the non-resonant process involving O 2p states. We also speculate that the laser pump itself might modify the SHG signal. After all, only a 0.3 T field or a few % Rh doping is needed to stabilize the ++++ state (or possibly the −+−+ state for Rh doping). In support of our conjecture, it is known that laser-induced non-thermal changes in the magnetism can occur that follow the time profile of a short laser pulse (48 fs), as demonstrated in Ref. 36, where the effect was attributed both to the coupling of the electric and magnetic fields of the laser to the spins, and to the alteration of the electric field of the ions due to the photodoped carriers [37]. In FeBO3, this leads to a change in the anisotropy of the probe polarization that has C1 symmetry [38]. The laser can also generate changes in the symmetry of the lattice on this time scale, again due to the photodoped carriers, as demonstrated recently for Cr2O3 [39]. There, the symmetry corresponded to an even-parity mode, but coupling of excitations to an odd-parity mode was recently suggested in Sr2IrO4 [34] for an energy corresponding to the pump energy of Zhao et al. [5], which would again lead to a C1 distortion of the SHG signal. Moreover, a non-thermal transition from an antiferromagnetic state to a ferromagnetic state was generated on the time scale of the laser pulse in a manganite [40], though this required a critical fluence and also an external field to align the ferromagnetic moments. In this context, Dean et al. [14] monitored the (-3,-2,28) magnetic Bragg peak associated with the −++− ground state and found that it was strongly suppressed by the laser pump. Moreover, the SHG energy of 3.1 eV is close to a maximum in the conductivity [34]. This implies that many of the SHG photons are absorbed, meaning the resulting signal is dominated by the surface. On the other hand, there is a pronounced minimum at 2 eV [34], meaning that experiments with a pump energy of 1 eV (SHG energy of 2 eV) should be less surface sensitive and the interference reduced. We now turn to an analysis of the ground state of Sr2IrO4, which we shall use in Section V.

IV. KRAMERS DOUBLET GROUND STATE IN Sr2IrO4

The breakthrough idea in the original paper of B. J. Kim et al. [1] was the identification of the ground state of Sr2IrO4 as an octahedral Kramers doublet. As shown in the last section, a proper knowledge of the ground and excited states of Sr2IrO4 is necessary in order to explain the SHG experiment. As a consequence, we focus here on the theoretical description of this Kramers doublet and on the experimental evidence for its existence. We believe, in fact, that in spite of a number of papers published on the subject [9,19,43,44], there are some experimental consequences of this state that have not (or not correctly) been stated. As authors in the iridate literature have sometimes followed different conventions for the definition of spherical orbitals, we list our own definitions in Appendix B. One of the key pieces of experimental evidence leading to the octahedral Kramers doublet was the absence of magnetic RXS intensity at the Ir L2 edge for reflections that showed a large resonant intensity at the L3 edge [9,19]. This feature has been confirmed in several other non-stoichiometric compounds [19,45,46]. Initially interpreted as definitive evidence for this Kramers doublet [9], the absence of a magnetic signal at the L2 edge was later considered inconclusive, because an in-plane magnetic moment could lead to the same conclusion [43] even if the octahedral limit were not realized.
However, the fact that for some doped samples the magnetic moments point out of the xy-plane while the magnetic RXS intensity at the Ir L2 edge is still strongly depleted [45,46] represents yet another hint towards the physical realization of an octahedral Kramers doublet. As this last piece of experimental evidence referred to Ru-doped [46] and Mn-doped [45] samples, however, it is still an open question to find definitive experimental evidence of the octahedral Kramers doublet in Sr2IrO4 itself. We want to show here, by revisiting some of the calculations, that direct experimental evidence for the realization of this Kramers doublet in Sr2IrO4 is possible by looking at the Ir L2 edge with x-ray absorption in partial-fluorescence yield, in such a way as to reduce the effective core-hole width to a value below 2 eV. The reason is that two low-lying empty states are present in the spectrum: the empty partner of the Kramers doublet, within 1 eV of the Fermi level, and the e_g states (around ~3 eV above the Fermi level). The two cannot be resolved in usual XAS measurements, because of the L2 core-hole width of around 5 eV. If an octahedral Kramers doublet existed, it would be a pure j = 5/2 state (see Fig. 6 and the calculations below) and no dipole transition at the L2 edge would be allowed. Therefore, in this case, the first, low-lying peak would disappear and only the e_g peak at higher energies would be present, as e_g states are a mixture of both j = 5/2 and j = 3/2 states (Appendix B). Indeed, a hint towards the presence of the two peaks was highlighted by Boseggia et al. [19], who noticed two bumps in their magnetic spectra at the L2 edge. We think that at least the higher-energy peak corresponds to the e_g states which, though only slightly magnetically polarized, can contribute to the magnetic intensity. In what follows, we explain the details of our calculations. Note that, as our main objective is to write down the L2,3-edge cross-section, we shall not work with the effective angular momentum for the t2g states often employed in the literature, but with the real one. We feel that this representation is more transparent when analyzing core-level transitions, as the expression of the Kramers doublet in terms of the |j = 5/2, j_z⟩ states directly informs us about whether a transition is dipole-forbidden or not at the L2 edge (we recall that dipole transitions are characterized by Δj = 0, ±1, and therefore the |j = 1/2⟩ core states of the L2 edge cannot be promoted to |j = 5/2⟩ states above the Fermi level). This would not be the case with |J_eff = 1/2⟩. In Appendix B, we give the formulas to pass from one representation to the other, in order to compare with the existing literature. If we solve the crystal-field plus spin-orbit Hamiltonian at an Ir site, as done several times in the literature [2,47,48] and as reported in detail in Appendix B, but then express the solution in terms of the spin-orbit coupled j = 5/2 and j = 3/2 states that can be associated with the 5d electrons of Ir, we obtain Eqs. (15) and (16), where the coefficients R(η) and N(η) depend solely on the ratio η = Δ_t/λ between the tetragonal crystal field, Δ_t, and the spin-orbit coupling, λ. Notice that in the octahedral limit, η = 0, R(η = 0) = 1/√2, the |j = 3/2⟩ terms become zero and no signal at the L2 edge can be detected. A numerical illustration of this limit is given below.
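The η-dependence of the doublet can be generated numerically. The sketch below diagonalizes the standard single-particle t2g Hamiltonian λ L·S plus a tetragonal term in the {yz, zx, xy} ⊗ {↑, ↓} basis; the sign conventions for Δ_t and the orbital phases are our assumptions, so only the convention-independent η = 0 limit (equal t2g weights in the top Kramers doublet) should be compared against the statements above.

    import numpy as np

    lam, eta = 1.0, 0.0     # spin-orbit coupling (energy unit); eta = Delta_t/lambda

    # Orbital angular momentum projected onto the t2g manifold, basis (yz, zx, xy).
    Lx = np.zeros((3, 3), complex); Lx[2, 1] = 1j; Lx[1, 2] = -1j
    Ly = np.zeros((3, 3), complex); Ly[0, 2] = 1j; Ly[2, 0] = -1j
    Lz = np.zeros((3, 3), complex); Lz[1, 0] = 1j; Lz[0, 1] = -1j
    Sx = 0.5 * np.array([[0, 1], [1, 0]], complex)
    Sy = 0.5 * np.array([[0, -1j], [1j, 0]], complex)
    Sz = 0.5 * np.array([[1, 0], [0, -1]], complex)

    H = lam * (np.kron(Lx, Sx) + np.kron(Ly, Sy) + np.kron(Lz, Sz))
    H += eta * lam * np.kron(np.diag([0.0, 0.0, 1.0]), np.eye(2))  # tetragonal shift of xy

    E, V = np.linalg.eigh(H)
    print(np.round(E, 3))   # eta = 0: fourfold -lam/2 (J_eff = 3/2), twofold +lam (J_eff = 1/2)

    # Weight of each basis state (yz up, yz dn, zx up, zx dn, xy up, xy dn) summed
    # over the top Kramers doublet: all equal to 1/3 in the octahedral limit.
    print(np.round((np.abs(V[:, -2:])**2).sum(axis=1), 3))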
What should be underlined here is that the absence of intensity at the L2 edge is not limited to the magnetic signal: all of the L2-edge signal associated with the empty part of the Kramers doublet would be zero if the octahedral limit is satisfied, even the non-magnetic absorption. In fact, in this case |ψ_±⟩ would be a pure |j = 5/2⟩ state (see Fig. 6) and no dipole transition can occur between 2p_1/2 and 5d_5/2 (Δj = 2 is dipole-forbidden). If we rewrite Eqs. (15) and (16) in the cartesian representation, we obtain the form usually given in the literature for the half-filled Kramers doublet [49]. This expression, with R(η = 0) = 1/√2, gives the usually quoted doublet with equal weights of the t2g states. The expression corresponds to the case where the magnetic moment is along the c-axis, as |ψ_+⟩ and |ψ_−⟩ are eigenstates of both L_z and S_z, with eigenvalues ±2/3 and ∓1/6, respectively, when R = 1/√2. If we want the moment in an arbitrary direction, we should make a linear combination of the two, |ψ(β,γ)⟩ = cos β |ψ_+⟩ + e^(iγ) sin β |ψ_−⟩. In this expression, β = 0 and β = π/2 correspond to the magnetic moment oriented along ±c, whereas the magnetic moment lies in the ab-plane, as detailed in Appendix B.3, when cos β = sin β. In this case, γ corresponds to the angle with respect to the local x-axis in the direction of a planar oxygen (e.g., for Ir 1 in Fig. 1, β = γ = π/4). The experimental configuration, where the magnetic moment lies in the ab-plane at 45° from the local octahedron axes [50], then leads to explicit expressions for the Kramers doublet; the same expressions in the |j, j_z⟩ basis are reported in Appendix B.1. Notice that the expression given in Ref. 44 for the moment along a does not appear to be correct: in their expression in the octahedral limit, only d_(yz,↑) and d_(xz,↓) appear, violating the phase relation of the J_eff = 1/2 subspace. Their state therefore brings in some admixture of components from the J_eff = 3/2 subspace [51]. We can now evaluate the matrix elements that appear in both magnetic RXS and XAS. The details are shown in Appendix B.2, and we draw here the conclusions of such calculations, Eqs. (B8) and (B9). Where the L2 edge is concerned, Eq. (B8) clearly shows that in the octahedral limit, R(η = 0) = 1/√2 and all matrix elements are zero. This is to be expected: in this limit, the Kramers doublet becomes a pure |j = 5/2⟩ state and no transition is possible at the L2 edge. Eq. (B8) also tells us that when the magnetic moment is in the ab-plane, whatever its orientation, cos(2β) = 0 and the off-diagonal elements are zero, independent of R. This implies, in particular, that the magnetic RXS signal is zero. We recall that this conclusion is valid because magnetic RXS is proportional to the antisymmetric part of the L^(2)_(αβ) tensor (Eq. (B8)) and the latter is an irreducible tensor of rank one: if it is zero in one frame, it will be zero in any other rotated frame. The opposite is also true, and valid at the L3 edge, Eq. (B9): if at least one component is non-zero in a given frame, there will be at least one non-zero component in any other rotated frame. This is sufficient to affirm that there is always a magnetic RXS signal at the L3 edge, whatever R and β are (this rule, of course, does not consider possible extinctions due to the structure factor).
At the L3 edge, moreover, Eq. (B9) shows us that the absorption coefficient for the empty part of the Kramers doublet is always different from zero for any incoming polarization (2R² + √2 R + 1 > 0 for any R) and for any direction of the magnetic moment. We finally notice that, contrary to what was stated in Ref. 44, the L2-edge magnetic RXS in the π−π channel is zero whatever R is, if the magnetic moment is confined within the ab-plane, as Eq. (B8) implies that no magnetic signal exists in this case in any frame (i.e., for any incoming and outgoing polarizations).

V. KEY EXPERIMENTS FOR Sr2IrO4

This Section is focused on the description of some key experiments with the double aim of finding the fingerprint of a) the octahedral Kramers doublet in the stoichiometric material and b) the magnetic space groups, ++++ and −+−+, of interest for the SHG experiment [5]. As explained in the previous Section, one key experiment to confirm whether the J_eff = 1/2 doublet is realized in Sr2IrO4 is to compare XAS experiments at the L2 and L3 edges of Ir, using the high-resolution capabilities of partial fluorescence detection to reduce the value of the core-hole width at these edges. Typical values of the core-hole width for Ir at the L2 and L3 edges are 5.69 eV and 5.25 eV, respectively [53]. We remark in this respect that, as depicted in Fig. 6, the e_g orbitals have both j = 3/2 and j = 5/2 character, as does the J_eff = 3/2 submanifold of t2g [54]. This is evident from a simple inspection of the transformation formulas given in Appendix B. In the octahedral limit, the only purely j = 5/2 state is the J_eff = 1/2 Kramers doublet. Therefore, dipole transitions at the L2 edge can reach the empty e_g states but not the empty partner of the J_eff = 1/2 Kramers doublet. This is not the case for the L3 absorption transitions, which can reach both the e_g states and the empty partner of the Kramers doublet. A comparison of the two high-resolution XAS spectra could therefore provide a definitive confirmation of this issue for the stoichiometric compound. In Fig. 7 we show the results of a numerical simulation of L2-edge XAS by means of the FDMNES code [55]. We remind the reader that the FDMNES code is based on a spin-polarized multiple-scattering approach including spin-orbit coupling, akin to LSDA+SO (local spin density approximation plus spin-orbit) calculations [56]. We performed two XAS calculations, one with a core-hole width of 5.69 eV and the other with a core-hole width of 2 eV. We used a cluster radius of 6.5 Å, containing 85 atoms [57]. With Γ = 2.0 eV, the doublet structure of what appeared to be a single peak for Γ = 5.69 eV clearly emerges. The lower-lying structure, labelled as the A-peak in Fig. 7, should not be there if the octahedral Kramers doublet is realized (i.e., R = 1/√2). Its absence is a fingerprint of the octahedral Kramers doublet. Unfortunately, the simulation does not reproduce all of the features of the data: for example, the energy splitting of the e_g 5d_(3z²−r²) and 5d_(x²−y²) states is underestimated (only ~0.6 eV, as compared to 1.6 eV experimentally [33]), though it would take very high resolution to see this splitting in XAS. We also performed the same calculation with a Hubbard U = 2 eV, but it did not qualitatively change the results of Fig. 7, except for a shift of 0.5 eV of the higher-energy shoulder to still higher energies (a difference that would be difficult to observe experimentally). The role played by the core-hole width in resolving the two peaks is illustrated in the sketch below.
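The resolution argument can be illustrated with a two-Lorentzian toy spectrum; the peak positions (the doublet partner within 1 eV of the Fermi level, the e_g weight near 3 eV, as discussed above) and the amplitudes are illustrative placeholders, not FDMNES output.

    import numpy as np

    def spectrum(E, peaks, gamma):
        """Sum of Lorentzians of full width gamma (a stand-in for core-hole broadening)."""
        out = np.zeros_like(E)
        for E0, amp in peaks:
            out += amp * (gamma / 2) / ((E - E0)**2 + (gamma / 2)**2)
        return out

    E = np.linspace(-10.0, 15.0, 2001)
    peaks = [(0.5, 1.0), (3.0, 2.0)]       # (position in eV, weight): doublet partner, e_g

    broad = spectrum(E, peaks, 5.69)       # standard L2 core-hole width: one merged hump
    sharp = spectrum(E, peaks, 2.0)        # partial-fluorescence-yield width: two peaks resolve

    # Count the resolved maxima in each case.
    for name, s in (("Gamma = 5.69 eV", broad), ("Gamma = 2.0 eV", sharp)):
        n_max = np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:]))
        print(name, "-> maxima:", n_max)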
As shown in the literature in the case of actinides, it is possible to select photons emitted from decay channels characterized by longer lifetimes, i.e. a smaller core-hole width, with resolutions of ∼2.0 eV at L edges, or even down to 1.2 eV for the M2 edge in uranium compounds [58]. These resolutions would clearly allow for the detection of the presence or absence of the Jeff = 1/2 peak at the L2 edge, thereby providing the final word on this issue [59]. A further experiment to double-check the behavior of the Kramers doublet is suggested by Eq. (B8). There it is shown that when XAS is measured with incoming polarization along z, one should get a null L2 signal for the Kramers doublet, regardless of whether one is in the octahedral limit or not. That is, z polarization only picks up the eg states. This would allow one to fix experimentally the energy level(s) of the eg states. If, starting from this configuration, we rotate the polarization towards, say, the x-direction, any signal that develops at lower energies would necessarily imply that we are filling in the unoccupied partner of the Kramers doublet and therefore we are not in the octahedral limit. The results of Moon et al. [8] at the O K edge and our simulations by FDMNES at the Ir L1 edge, shown in Fig. 8, allow us to confirm that any signal of x character developed below the lowest z peak cannot be of eg origin (e.g., 5d x²−y² orbitals), because the latter are higher in energy than the 5d 3z²−r² states. Here as well, the experiment would strongly rely on a high resolution to resolve the three peaks. Further independent pieces of information on the nature of the low-lying energy states and their orbital distribution come from polarized XAS spectra at the Ir L1 edge, where it is possible to play with both incoming polarization and wave-vector through electric-quadrupole (E2) transitions. The main results are shown in Fig. 8. In particular, we show some results that would be measured by high-resolution XAS (Γ ∼ 2 eV). The comparison with the curves characterized by Γ = 8.3 eV (the typical core-hole width for L1 XAS [53]) highlights the necessity for high-resolution XAS. For all curves with Γ = 2.0 eV, the E2 signal is clearly visible in the region up to 8 eV above the Fermi energy (the zero of our energy scale). The E2 origin of the signal is demonstrated by the different behavior of the black and red curves in the region from 0 to 4 eV, and the behavior of the green and light-blue curves in the region from 0 to 8 eV. In particular, setting ε ∥ c and k ∥ a (black curve) makes XAS blind to both the 5d x²−y² and 5d 3z²−r² states and only sensitive to the t2g 5d xz ones. This allows the identification of the lowest-lying peak, between 0 and 1 eV, as due to these 5d xz states. We remark that the E2 nature of this peak is demonstrated by the difference with the purely E1 calculation (red curve). Experimentally, such a feature can be shown by rotating k. When we rotate the polarization to the a direction and the wave-vector to the b direction (both in-plane), E2 transitions become allowed for both the 5d x²−y² and 5d 3z²−r² states (the latter, because of the x² + y² part contained in the r² term). This is seen by the green curve in Fig. 8.
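The orbital selectivity exploited here follows from the angular part of the E2 operator, (ε·r)(k·r). As a minimal sketch (Python with sympy), the configuration with polarization along c and wave-vector along the local x-axis gives an operator proportional to zx, whose angular overlap is non-zero only with the d_xz orbital. For simplicity the crystal a-axis is identified with the local octahedral x-axis; in the real structure the two differ by the octahedral rotation, so this is an illustration of the selection rule, not of the full geometry.

import sympy as sp

theta, phi = sp.symbols('theta phi', real=True)
x = sp.sin(theta) * sp.cos(phi)
y = sp.sin(theta) * sp.sin(phi)
z = sp.cos(theta)

# Real d-orbital angular parts (unnormalized polynomials on the unit sphere).
d_orbitals = {'xy': x*y, 'yz': y*z, 'xz': x*z,
              'x2-y2': x**2 - y**2, '3z2-r2': 3*z**2 - 1}

# E2 transition operator for polarization along c and wave-vector along x:
# (eps . r)(k . r) ~ z * x
operator = z * x

for name, orb in d_orbitals.items():
    overlap = sp.integrate(operator * orb * sp.sin(theta),
                           (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
    print(f"<{name} | z*x> =", sp.simplify(overlap))
# Only the d_xz overlap is non-zero: this polarization/wave-vector choice is
# blind to x2-y2 and 3z2-r2 and picks out the t2g d_xz states.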
Finally, the dark-blue curve, with the polarization along a and the wave-vector along c, allows one to double-check a) the E2 nature of the lowest-lying peak (because it is symmetric in the a ↔ c exchange, as only E2 transitions are) and b) the partly E1 nature of the features around 4 eV (of course, the latter is evident also from the purely E1, light-blue curve). We remark also that a smooth extrapolation of the E1 peaks around 14 eV shows that the Ir 6p density of states is present down to the Fermi energy and, in correspondence with the E2 features, further p density of states, probably due as well to the hybridization of the Ir 5d and O 2p states. The possibility that this inversion-odd density of states is magnetized, so as to be T̂Î-invariant, as required in the 2′/m group suggested by the SHG experiment, can be analyzed by non-reciprocal x-ray linear dichroism, or x-ray magnetochiral dichroism (XMχD) [60,61]. Such a technique would highlight the presence of a toroidal, magnetic signal of E1-E2 origin, therefore providing an independent confirmation of the presence of toroidal multipoles. By symmetry, this is only possible for the − + −+ state. Unfortunately, as the inversion breaking at the Ir sites is only determined by further oxygen neighbors (see Section II), E1-E2 radial matrix elements are extremely small, less than 10⁻⁴ of the E2-E2 radial matrix elements leading to the XAS pre-edge peak shown in Fig. 8. We have verified this by FDMNES, where we find an XMχD peak at the pre-edge for k parallel to the a-axis that is of order 10⁻⁴ of the L1 XAS maximum (this requires a cluster radius of at least 7 Å). The same order of magnitude applies to the K edge. This implies that, were the toroidal nature of the SHG signal confirmed, SHG would be a much more sensitive tool than dichroism in XAS to detect a tiny T̂Î-breaking order parameter. In order to identify the actual magnetic state indicated by the SHG experiment, a more sensitive tool than XMχD is x-ray magnetic circular dichroism (XMCD), whose signal is zero for the − + −+ and − + +− states and non-zero for the + + ++ state. An XMCD signal has been reported in Ref. 62 in the presence of a magnetic field (therefore inducing the + + ++ state) with an intensity about 1% of the XAS intensity [63]. As shown already in Section II in Fig. 2, the two magnetic configurations − + −+ and + + ++ can also be identified by RXS experiments at the L3 edge, with the aim of determining whether the − + −+ state or the + + ++ state is realized in Sr2IrO4 in the case of Rh doping. Though challenging, it would be interesting to perform both the RXS and the XMCD experiments under the influence of a laser beam, so as to check whether the experimental conditions of the SHG experiment in Ref. 5 could have possibly induced a phase transition from the − + +− state to the + + ++ or − + −+ states. We remind that a relatively small applied magnetic field (H ∼ 0.3 T) has been shown to induce the + + ++ state [9]. In fact, it would be very interesting to repeat the SHG experiments in the presence of a magnetic field that would induce the + + ++ state. Finally, we have tried to highlight the difference between the + + ++ and − + −+ states also at the Ir L1 edge, by interference of E1 and E2 signals at the pre-edge (experimentally, they could be disentangled with the technique of phase plates developed in Ref. 65). Unfortunately, our FDMNES calculations find that charge scattering is the dominant term in this energy range, so no qualitative differences between the two states could be determined.
VI. CONCLUSIONS

To conclude, we summarize here the main achievements of the present paper:

1) We have shown that the SHG experiment [5] can be explained by either the 2′/m or the 2′/m′ magnetic space group. The former was already identified in Ref. 5, where it was interpreted in terms of toroidal moments induced by orbital currents. However, an order parameter with the symmetry of a toroidal dipole is not sufficient to explain the azimuthal scan, as noted in Section III: the octupole is needed as well.

2) We have demonstrated that it is not necessary to invoke exotic orbital currents to obtain an SHG signal: an induced transition to the − + −+ state or to the + + ++ state would have the same effect. The latter was explicitly excluded in Ref. 5, probably because only χ(e) was considered in that paper, not χ(m). Two other magnetic space groups might explain the odd-ψ azimuthal dependence of the interference SHG signal: m1′ and 11′. The former was also identified in Ref. 5. However, both of them are characterized by time-reversal even order parameters, and it appears implausible that they play a central role below the magnetic transition temperature. Finally, the − + −+ state has a magnetoelectric space group and could also explain the results of Ref. 7, which invoke the breaking of both spatial parity and time-reversal symmetry. Either effect could also arise from surface magnetic SHG, and we have suggested that the photon energy could be changed to test for surface sensitivity of the SHG signal.

3) We have suggested new experiments to highlight the interplay of the three states, − + +−, + + ++, and − + −+, characterized by different magnetic space groups. Though neutron and x-ray diffraction data clearly show that − + +− is the ground state for stoichiometric Sr2IrO4, the three are very close in energy (the inter-plane exchange is ∼ µeV [13]). A laser pulse of 1 mJ/cm² fluence is known to suppress the − + +− state at fs timescales [14]. Whether this is what produces the additional C1 distortion of the SHG signal below TΩ remains to be determined by future experiments, several of which were outlined in Section III.

4) From the calculations of Appendix B, we have highlighted a new experimental criterion, based on high-energy-resolution XAS, to identify the octahedral nature of the Kramers doublet in stoichiometric Sr2IrO4. Such an XAS measurement is independent of the direction of the Ir magnetic moments (in-plane or out-of-plane), as it only relies on the purely j = 5/2 nature of the Kramers doublet in the octahedral limit. It can provide, therefore, an independent confirmation of the realization of the octahedral Kramers doublet in the stoichiometric compound.

Acknowledgments

The authors would like to thank Dr. Liuyan Zhao and Prof. David Hsieh for providing the data plotted in Fig. 4 and for several discussions about this paper. They also thank Dr. Feng Ye for clarification concerning the neutron scattering results for Rh-doped samples, as well as Mark Dean for discussions about their pump-probe experiments. Finally, we would like to thank one of the Referees for suggesting the possibility of surface magnetic SHG. Work by MRN was supported by the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, US DOE.
Appendix A: Technical details on the SHG calculations The total SHG amplitude In the most general case, the transition probability per unit time from a state Φ g to a state Φ f can be written in term of the transition operator T I as follows: where ρ f is the density of the final states and T I = H I + H I G H T H I , with H I the matter-radiation interaction Hamiltonian and G H T = (Σ g − H T + iΓ) −1 the Green's function related to the total Hamiltonian H T = H 0 + H I . Σ g is the total energy associated with Φ g . Here H 0 is the sum of the matter Hamiltonian and the radiation Hamiltonian, separately. With the usual Dyson expansion G H T = G H0 + G H0 H I G H0 + ..., we can replace in the above expression for T I and rewrite Eq. (A1) up to any order in H I . The scattering crosssection is obtained from the transition probability per unit time by dividing by the incoming flux (c/V for photons, when the vector potential is normalized to one photon per unit volume, V [27]). We can then pick out the third order for the scattering cross-section, of interest for SHG, σ SHG = 2πV c ρ f A SHG 2 , where the amplitude reads (with Φ f = Φ g ): We can now separate in H I the radiation part from the matter part, as Σ i = E i + m ω. Here E i is the energy of the material alone (without radiation) for state i and ω measures the photon energy associated with a given matrix element. In particular, for SHG, m = 1 in absorption and m = 2 in emission, because of the absorption of two photons of energy ω and the emission of one photon of energy 2 ω. Three different terms are possible, as represented in Fig. 3, and they correspond to the three processes: 1) absorption ( ω) -absorption ( ω)emission (2 ω): this is a doubly resonant process; 2) absorption ( ω) -emission (2 ω) -absorption ( ω): this is a singly resonant process; 3) emission (2 ω) -absorption ( ω) -absorption ( ω): this is a non-resonant process. Three different energy denominators are associated with each term, as shown below. For the interaction Hamiltonian, we suppose that we can perform a multipole expansion of the vector potential contained in H I [20] and consider electric-dipole (E1), magnetic-dipole (M1) and electric-quadrupole (E2) terms, only. With this hypothesis, Eq. (A2) can be written as (a sum over repeated variables α, β, γ = x, y, z is employed): where the denominators are: ∆ (1) l,n = ((E lg − 2 ω − iΓ l )(E ng − ω − iΓ n )) −1 , ∆ (2) l,n = ((E lg + ω)(E ng − ω − iΓ n )) −1 , and ∆ (3) l,n = ((E lg + ω)(E ng + 2 ω)) −1 . The only difference is that in the quantum-mechanical approach, all the processes depicted in Fig. 3 for χ l,n weight the three quantum-mechanical transition amplitudes leading to the SHG signal (two resonant processes, only one resonant process, no resonant processes), as shown in Fig. 3. An important element of our analysis is the recognition that, analogously to the case of RXS [20], each η (e,m,q) tensor is characterized by a time-reversal odd and a timereversal even part, due to the matrix elements and independent of the complex energy denominators. They can be found by looking for the real and imaginary parts of each tensor: η (e) = η (e) +i η (e) , η (m) = η (m) +i η (m) and η (q) = η (q) +i η (q) . Notice that we did not consider in our analysis the common imaginary unit multiplying the E1-E1-E1 transition matrix elements in Eq. (A4). Of course, we factorized it also in Eq. (A6) and Eq. (A15), so that η (q) is always phase shifted by π/2 compared to η (e) and η (m) . 
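Before proceeding, it is worth quantifying the hierarchy among the three terms. Restoring ħ explicitly, the denominators defined above read Δ(1)_{l,n} = [(E_lg − 2ħω − iΓ_l)(E_ng − ħω − iΓ_n)]⁻¹, Δ(2)_{l,n} = [(E_lg + ħω)(E_ng − ħω − iΓ_n)]⁻¹ and Δ(3)_{l,n} = [(E_lg + ħω)(E_ng + 2ħω)]⁻¹. The minimal numerical sketch below (Python; the level energies and widths are hypothetical placeholders) evaluates them when ħω is tuned to the doubly resonant condition E_ng ≈ ħω, E_lg ≈ 2ħω.

import numpy as np

hbar_omega = 1.0             # photon energy (arbitrary units)
E_lg, E_ng = 2.0, 1.0        # hypothetical excitation energies
Gamma_l, Gamma_n = 0.1, 0.1  # inverse lifetimes of the intermediate states

# The three energy denominators of the SHG amplitude:
d1 = 1.0 / ((E_lg - 2*hbar_omega - 1j*Gamma_l) * (E_ng - hbar_omega - 1j*Gamma_n))
d2 = 1.0 / ((E_lg + hbar_omega) * (E_ng - hbar_omega - 1j*Gamma_n))
d3 = 1.0 / ((E_lg + hbar_omega) * (E_ng + 2*hbar_omega))

for label, d in (("doubly resonant", d1),
                 ("singly resonant", d2),
                 ("non-resonant", d3)):
    print(f"{label:16s} |Delta| = {abs(d):8.2f}")
# At full resonance the doubly resonant term exceeds the non-resonant one
# by roughly a factor (hbar*omega / Gamma)**2, here several orders of magnitude.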
We remark that this analysis corresponds to Birss' separation into i-tensors and c-tensors [31]. Starting from Eq. (A42), we report the full expression only for η (e) (in order not to overburden the notation, in the following we remove the label (e) , superfluous as we just deal with η (e) ): where we defined∆ lng andη (glng) αβγ = φ g |r α |φ l φ l |r β |φ n φ n |r γ |φ g . Following the notation of Ref. 28, we labelled the energy spectrum of the timereversed configuration as En. So, if time-reversal is a symmetry for our system, then En = E n , so that ∆ We have also highlighted the order of appearance of intermediate states in the matrix elements through (glng), which plays a fundamental role in the analysis. In fact, the complex conjugate ofη (glng) αβγ isη (gnlg) γβα : this implies that we should not only reverse the order of the cartesian indices (which, alone, would have led to the antisymmetrization of the γ and α labels), but also keep track of the order of the intermediate states l, n (or n, l). The latter point is what makes the profound difference, mathematically, with the RXS case, where the imaginary part of the cartesian, third-rank tensor is antisymmetric in two labels α and γ (see Ref. 28 for the analysis of the analogous E1-E2 third-rank cartesian tensor in the case of RXS) and therefore only couples with the corresponding antisymmetric part of the polarization, leading to a powerful selection rule based on time-reversal. In SHG this is not possible, for this technical reason: η (glng) αβγ is not antisymmetric in α ↔ γ. Physically, this is related to the order of the absorption and emission processes (through the denominators∆ (i) lng , which have no definite symmetry in the exchange l ↔ n), that does not allow time-reversal to be a symmetry of the SHG process, as it is of RXS. In an analogous way, we can write similar expressions also for χ (m) and χ (q) and deduce that η (e) , η (m) and η (q) are time-reversal even tensors (non-magnetic or itensors in Birss' notation [31]), whereas η (e) , η (m) and η (q) are time-reversal odd (magnetic or c-tensors in Birss' notation). These cartesian tensors can then be analyzed in terms of the irreducible spherical decompositions, as shown in the next subsection and in Section III. In the light of our findings, it turns out that, of our two candidates to explain the SHG signal, one (for the 2 /m case) is the time-reversal odd part of η (e) , which is imaginary and therefore can interfere with the timereversal even part of η (q) , which is imaginary as well. The other (for the 2 /m case) is the time-reversal odd part of η (m) , which is real. Yet, it can still interfere with the time-reversal even part of η (q) , which is purely imaginary only far from resonance. In fact, sufficiently close to the resonance (within ∼ Γ), the complex resonant denominators of the above SHG expressions for η (e) , η (m) and η (q) scramble the previous imaginary/real separation based on the time-reversal properties of the matrix elements in the numerator. In fact, whatever is the numerator N = a + ib (with a and b real), we get: So, a and b matrix elements interfere, unless we are in one of the two extreme situations: 1) out of resonance (i.e., ω mi −2ω Γ m and ω ni −ω Γ n ), so that Γ is negligible and a and b in Eq. 
(A44) do not interfere any more; 2) in the case of a single, well separated resonance (within ∼ Γ), exactly at resonance (i.e., ω mi − 2ω Γ m and ω ni − ω Γ n ), so that the whole expression reduces to − a+ib ΓmΓn , and again a and b do not interfere. Identification of the SHG order parameters Here we give the explicit spherical and cartesian components of some of the polarization tensors that couple with the order parameters identified in Section III. We first list all the tensors of E1-E1-E1 origin (χ (e) ), associated with the 2 /m magnetic group and then those of E1-E1-M1 origin (χ (m) ), associated with 2 /m . For χ (e) , we have that the polarization dependence is determined by: a) two first-rank tensors,Ō (1) andÕ (1) , both coupled to an order parameter with the symmetry of a toroidal dipole, b) a second-rankÕ (2) , coupled to an order parameter with the symmetry of a magnetic quadrupole and c) a third-rankÕ (3) , coupled to an order parameter with the symmetry of a magnetic toroidal octupole. Their explicit expression can be obtained from the scalar product in Eq. (A42), that can be recoupled in spherical tensors as: o (i) ·χ (i) . If, for a simpler comparison with Eq. (14), we express the spherical polarization tensors in cartesian components, we have: here and below α is any of x, y or z. Here, for example,Õ yz couples to the corresponding susceptibilityχ yz , with the symmetry of a magnetic quadrupole. Eqs. (A45) to (A48) allow us to associate with each multipole component a well-determined azimuthal scan which constitutes a quantitative basis for our statements in Section III. For example, using the above equations with Eq. (14), we remark that firstrank tensors, with the symmetry of a toroidal dipole, contribute to the SHG signal with just a sin ψ (cos ψ) dependence. However, in order to explain the SHG azimuthal scans, sin 3 ψ (cos 3 ψ) terms are necessary. This implies that, as noted in Section III, we cannot neglect the signal determined by the magnetic toroidal octupole, Eq. (A48). We remark also that the azimuthal-scan technique employed for SHG by Ref. 5 proves to be a very powerful tool to extract the relative weight of each multipole order parameter, in full analogy with the RXS case [66,67]. We can look at an analogous treatment of the χ (m) terms of the 2 /m group, though the algebra is slightly more involved, given the number of terms to be treated in the E1-E1-M1 case (see Fig. 3). Consider the case of the E1-E1-M1 transition amplitude, A We can now analyze, as for χ (e) before, the properties of the polarization tensors: O As above, we can write the two terms in the last line of Eq. (A49) as a scalar product of spherical tensors: Here we analyze in detail the transformation properties, under rotation, of the polarization spherical tensor, that can be formally derived in a simple way from Eqs. (A45), (A46), (A47) and (A48) by the replace- and by the replacements: In this way, we double the number of tensors that we had in the χ (e) case and obtain the cartesian components of eight tensors (that we callŌ (1) ,Õ (1) ,Õ (2) ,Õ , P (1) ,P (1) ,P (2) ,P (3) ). Three further allowed polarization tensors are obtained from O (2) (a symmetric traceless second-rank tensor). These three tensors only contribute in SP, PS and PP geometry, as in SS geometry o × i is zero. This also implies that the scalar contribution is not allowed in SS geometry for any of the E1-E1-M1 terms (Ȏ (0) is the only scalar term). 
In detail, again here and below α is any of x, y, z.Ȏ Now we list the cartesian components of the former eight tensors. They are, forŌ (1) ,Õ ,Õ ,Õ : To finish, we list the terms coming from the second polarization term: O (m,2,1) αβγ A25)). In this case, as stated above, there is no symmetry between the two polarizations associated with the electric-dipole transitions, o β i γ . This implies that, besides the zeroth and second-rank tensors, analogous to the previous case, there is also the possibility of an antisymmetric coupling of o β and i γ , listed above. All three (zeroth, first and second-rank) tensors must then be coupled to the last vector, ( i × k i ). We have: Magnetic subgroups of Sr2IrO4 To follow the conclusions of Section III.B in more detail, we make use of the following table for the symmetry behavior of the P and M components under the 2/m1 magnetic group (in this subsection, we use the notation P α = er α and M α = µ B (L α + 2S α ), which is more often employed in the SHG community [25,26]): Table III, we can extract the allowed tensors for 2/m1 and each of its subgroups. For 2/m1 , the total signal needs to be invariant under the sum over all its symmetries. If the sum is calculated for each of the above components, P z , P x , P y , M z , M x , M y , we get, respectively, 0, 0, 0, 8i M z , 0, 0. This result simply expresses the fact that no matrix element for P α is allowed in the above group (as a consequence of inversion symmetry that forbids matrix elements of polar vectors). Therefore χ (e) is zero in 2/m1 . However, this is not the case for the imaginary part of χ (m) , time-reversal even, because of the 8i M z term. Such a term can interfere with the χ (q) signal. However, the only components of χ (m) that are different from zero are χ xyz . Therefore, from Eq. (14), this tensor is associated with a 2ψ azimuthal dependence, in disagreement with the experimental data. We can therefore disregard such a magnetic group and look for all possible symmetry reductions. A cautionary note is necessary: in principle, we should study the symmetry behavior of third-rank tensors with 27 cartesian components. We can simplify it and just study the sum of each line of Table III as done above for the following reasons: a) the square of all symmetry operations of Table III is +1; b) the couples (P x , P y ) and (M x , M y ) have the same behavior and c) the products P x(y) P z and M x(y) M z are always zero. Putting all this together implies that studying the behavior of symmetries A i B j C k (where A, B and C are any of P and M ) is the same as studying symmetries C k alone and then 'add' the non-zero A i B j part only to the non-zero symmetries C k -terms, as in the previous case. Turning to subgroups of 2/m1 , we have 7 subgroups with 4 symmetry elements, 2/m, 2 /m, 2/m , 2 /m , 21 , m1 , 11 . We can repeat the above analysis for each subgroup (keeping the order P z , P x , P y , M z , M x , M y ) and get: • 2/m: The sum over the symmetry elements gives 0, 0, 0, 4M z , 0, 0. No χ (e) is allowed. We have both the real and imaginary parts of χ (m) , but again, as for the full 2/m1 group, no odd dependence on the azimuthal angle ψ. Therefore, this magnetic subgroup is excluded. • 2 /m: The sum over the symmetry elements gives 0, 4i P x , 4i P y , 4i M z , 0, 0. This implies a signal from the time-reversal odd part of χ (e) when an odd number of x, y components is considered, meaning an even number of z components. As correctly recognized in Ref. 
5, this can explain the interference with the χ (q) signal, being ψ-odd. The χ (m) signal has the same behavior as for the 2/m1 group, a 2ψ-azimuthal dependence. Therefore it is not responsible for the signal. We remark that this magnetic symmetry is the one of the − + −+ pattern studied in the previous section. We shall analyze this case more below. • 2 /m : The sum over the symmetry elements gives 0, 0, 0, 4i M z , 4 M x , 4 M y . No χ (e) is allowed and the imaginary, time-reversal even χ (m) tensor has no odd-ψ terms. However, the real, time-reversal odd χ (m) tensor has the desired oddψ terms (e.g., χ (m) zzx ) and it can interfere with the imaginary part of χ (q) (through the damping factor iΓ). Its order parameter is either a magnetic dipole or a magnetic octupole. Interestingly, this magnetic group corresponds to the + + ++ state. It is further discussed below. • m1 : The sum over the symmetry elements gives 0, 4 P x , 4 P y , 4i M z , 0, 0. The imaginary, time-reversal even χ (m) tensor does not have odd-ψ terms, because of the odd number of z components and does not contribute to the signal. An oddψ dependence for χ (e) is possible (even number of z components), but only for its real, time-reversal even part. Again, the real part of χ (e) can interfere with the imaginary part of χ (q) because of the damping factor iΓ. The order parameter associated with this magnetic group has the symmetry of a time-reversal even electric polarization (or octupole). On the basis of the experimental evidence, it appears as highly implausible that it determines the SHG signal because such an order parameter implies the displacement of atoms, so as to break the global inversion, and this would be detectable by other means. Moreover, the order parameter is time-reversal even and should have contributed also in the high-temperature phase, as no crystal distortion is detected in passing from the hightemperature phase to the low-temperature, magnetic phase. This is against the experimental evidence. • 11 : The sum over the symmetry elements gives 0, 0, 0, 4i M z , 4i M x , 4i M y . No χ (e) is allowed. The imaginary, time-reversal even χ (m) tensor is allowed with an even number of z components and can have the right odd-ψ dependence. It can interfere with the imaginary part of χ (q) , but, as for the m1 group, it seems implausible since its associated order parameter is time-reversal even (symmetry of an axial toroidal dipole or of an electric quadrupole): either it should have been different from zero also in the high-temperature, tetragonal phase, against the experimental evidence, or it should be a secondary order-parameter, induced by the ordered magnetic moments. In the latter case, however, magnetic moments should break the 2 symmetry by tilting along the c-axis, against the experimental evidence as well. |j = 5 2 , j z states. However, it is true that the Kramers doublet in an octahedral crystal field (i.e., J eff = 1/2, in the absence of a tetragonal distortion), is entirely due to the |j = 5 2 , j z states, as we shall see below. This is expressed in Fig. 6. As the transition operator r α only changes the orbital angular momentum, it is more convenient to rewrite ||j, j z states in terms of their 2p orbital counterparts. We have: Finally, the last step before performing the calculation is to remember the expression for r α in terms of spherical harmonics: z = crY 10 ; x = cr(Y 1,−1 − Y 11 )/ √ 2; y = cri(Y 1,−1 + Y 11 )/ √ 2. 
Here c = 4π/3 is a normalization constant (not important for the following, as it can be absorbed into the radial part, r). Having all the coefficients, the transition matrix elements are now easily calculated after noting that the spin is not changed in the transition (so, only equal-spin states are coupled during the x-ray excitation), and given the expressions for the only two Gaunt coefficients that appear in the calculation: 1 2 , − 1 2 ||Y 10 |ψ any = 0 These matrix elements are sufficient to derive the full scattering matrix, noting that j, j z ||Y 1m |ψ any = −( ψ any |Y 1,−m ||j, j z ) * , because of the spherical harmonics phase rule Y 1,m = −Y * 1,−m . From this, we get the following expressions for the total matrix L (2) αβ (α, β = x, y, z) at the L 2 edge (with a common constantC): Interestingly, as already highlighted after Eq. (16), we get that in the octahedral limit, R(η = 0) = 1 √ 2 , all the matrix elements at the L 2 edge are zero, not just the magnetic ones. In particular, the XAS signal should be zero in this limit. Therefore, if an XAS signal is confirmed at the energy of this edge, this would show that R deviates from 1 √ 2 , as discussed in Section IV. Notice however that, differently from magnetic RXS, XAS can also see the other empty states that are higher in energy, in particular the e g states, that are accessible from the L 2 edge because they have a sizable component in the J = 3/2 subspace as seen above. Therefore, the XAS signal should be reanalyzed with a better energy sensitivity, so as to clearly disentangle the unoccupied t 2g states from the e g ones, as discussed in Section V. If such a signal is clearly detected at the edge itself, this is a definitive proof that the half-filled Kramers doublet deviates significantly from the octahedral limit. If such a signal is not detected, to the contrary, this is clear proof that the doublet is composed purely of J = 5/2 states, as is the case of |ψ any above (Eq. (B4)) in the octahedral limit (R(η = 0) = 1 √ 2 ). In the latter case, this also implies the absence of a signal in the magnetic RXS at the L 2 edge. In the former case, instead, absence of the magnetic signal at the L 2 edge can also be explained by an in-plane magnetic moment, that makes cos(2β) = 0, since the magnetic signal originates from the off-diagonal matrix element, xy. It should be noted, however, that this latter case would not permit one to explain the lack of a RXS signal that is also seen when the magnetic moment is along c, as in Mn-doped [45] and Ru-doped [46] samples. At L 3 , we obtain the following values for the matrix elements: 20πN sin(β)e −iγ From this, we get the following expressions for the total matrix L (3) αβ (α, β = x, y, z) at the L 3 edge (with a common constantC, differing from that at the L 2 edge): R sin(2β) sin(γ) R sin(2β) sin(γ) R sin(2β) cos(γ) R sin(2β) cos(γ) We see that at least one off-diagonal matrix element always differs from zero, whatever R and β are. This implies that the magnetic RXS signal is always different from zero, as explained in Section IV. Finally, using L x = (L + + L − )/2 and L y = (L + − L − )/(2i) and analogously for the spins S x and S y , it is possible to show that the condition M x any = M y any implies γ = π/4 (modulo nπ).
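Both ingredients of this appendix, the dipole-operator decomposition quoted above and the vanishing of the 2p1/2 → 5d5/2 channel, can be checked symbolically. The sketch below (Python with sympy) verifies the expressions for x, y, z in terms of Y_1m with c = √(4π/3), and confirms that every 3j symbol coupling j = 1/2 to j = 5/2 through a rank-1 (dipole) operator vanishes, which is the Δj = 2 selection rule invoked in Section IV.

import sympy as sp
from sympy.physics.wigner import wigner_3j

theta, phi = sp.symbols('theta phi', real=True)
c = sp.sqrt(4*sp.pi/3)
Y = lambda m: sp.Ynm(1, m, theta, phi).expand(func=True)

# Check the dipole-operator expressions quoted above (with r = 1):
z_expr = c * Y(0)
x_expr = c * (Y(-1) - Y(1)) / sp.sqrt(2)
y_expr = c * sp.I * (Y(-1) + Y(1)) / sp.sqrt(2)

print(sp.simplify(z_expr - sp.cos(theta)))                                # 0
print(sp.simplify((x_expr - sp.sin(theta)*sp.cos(phi)).rewrite(sp.cos)))  # 0
print(sp.simplify((y_expr - sp.sin(theta)*sp.sin(phi)).rewrite(sp.cos)))  # 0

# Delta j = 2 is dipole-forbidden: the triangle rule makes every 3j symbol
# coupling j = 1/2 and j = 5/2 through a rank-1 operator vanish.
half, fivehalf = sp.Rational(1, 2), sp.Rational(5, 2)
vals = [wigner_3j(half, 1, fivehalf, mz, q, -mz - q)
        for mz in (-half, half) for q in (-1, 0, 1)]
print(all(v == 0 for v in vals))                                          # True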
Necessity of integrated genomic analysis to establish a designed knock-in mouse from CRISPR-Cas9-induced mutants

Masahide Yoshida, Tomoko Saito, Yuki Takayanagi, Yoshikazu Totsuka & Tatsushi Onaka

The CRISPR-Cas9 method for generation of knock-in mutations in rodent embryos yields many F0 generation candidates that may have the designed mutations. The first task for selection of promising F0 generations is to analyze genomic DNA which likely contains a mixture of designed and unexpected mutations. In our study, while generating Prlhr-Venus knock-in reporter mice, we found that genomic rearrangements near the targeted knock-in allele, tandem multicopies at a target allele locus, and mosaic genotypes for two different knock-in alleles occurred in addition to the designed knock-in mutation in the F0 generation. Conventional PCR and genomic sequencing were not able to detect mosaicism nor discriminate between the designed one-copy knock-in mutant and a multicopy-inserted mutant. However, by using a combination of Southern blotting and the next-generation sequencing-based RAISING method, these mutants were successfully detected in the F0 generation. In the F1 and F2 generations, droplet digital PCR assisted in establishing the strain, although a multicopy was falsely detected as one copy by analysis of the F0 generation. Thus, the combination of these methods allowed us to select promising F0 generations and facilitated establishment of the designed strain. We emphasize that focusing only on positive evidence of knock-in can lead to erroneous selection of undesirable strains.

Gene targeting, which transduces a mutation in a specific endogenous gene, has been broadly used to generate animal models for understanding physiological or pathological mechanisms. Homologous recombination in embryonic stem (ES) cells has been classically used to obtain gene-targeted rodents 1. However, with this method, it takes nearly one year to obtain a genetically targeted mouse, including the time required to produce correctly targeted ES cell clones and acquiring animals capable of reproduction from chimeric mice.
In contrast, genome editing methods such as zinc finger nuclease (ZFN) 2, transcription activator-like effector nucleases (TALEN) 3, and CRISPR-Cas9 ribonucleoprotein complexes 4,5 are promising approaches for obtaining gene-targeted rodents in a limited period of time and for accelerating biological and medical research. Development of the CRISPR-Cas9 method is particularly remarkable because of its simplicity and efficiency 6. During homologous recombination in ES cells, a gene targeting vector is electroporated into ES cells and drug-resistant ES cells are screened. Southern blotting or PCR analysis is then used to further select ES cell clones that have been recombined correctly. Lastly, the ES cell clones are microinjected into blastocysts to obtain germline-transmitted chimeric rodents 7. In contrast, in the CRISPR-Cas9 method, a combination of CRISPR RNA (crRNA), including the recognition sequence complementary to the targeted genomic sequence, and trans-activating CRISPR RNA (tracrRNA), or single guide RNA (sgRNA), together with Cas9 nuclease is co-injected into fertilized eggs, and the target gene is directly modified within embryos 8,9. While it only takes three months to obtain adult candidate rodents, genetic variation must be investigated using genomic DNA extracted from somatic tissues of individual rodents. In the past few years, targeted knock-in methods have been developed which involve administering donor DNA template along with a component of the CRISPR-Cas9 complex. The CRISPR-Cas9 complex recognizes a specific sequence and cleaves the double-stranded DNA (dsDNA) adjacent to the PAM sequence. In mammalian cells, site-specific dsDNA breaks are repaired by nonhomologous end joining (NHEJ), homology-directed repair (HDR), or microhomology-mediated end joining (MMEJ) mechanisms 10,11. Knock-in events can occur when repair of dsDNA breaks is performed using donor DNA template with two homology arms on either side of the transgene of interest 12. Many researchers in the genome editing community have been endeavoring to improve the efficiency and specificity of CRISPR-Cas9-based knock-in methods in rodent embryos 13-21. However, some obstacles remain in producing an accurate and rapid knock-in rodent model using these techniques. It has recently been reported that multiple tandem integrations at a target locus are frequently observed in F0 generation mice derived from zygotes injected with a combination of two guide RNAs and one donor DNA containing two loxP sequences for generation of conditional knock-out mice. In addition, the mosaic genotypes of the F0 generation have been revealed by analysis of F1 generation mice 22,23. It was also noted that conventional PCR analysis, which is frequently used for genotype confirmation, failed to identify such multiple integration events in most cases, leading to a high rate of erroneous identification of mutants as correct single-copy recombination events 23. The knock-in method using CRISPR-Cas9 can enable the insertion of reporter genes and the generation of preclinical rodent models, such as models of triplet repeat diseases and cancer, by insertion of causative sequences 24-26. It is thus critical to obtain the designed and precise knock-in rodents in order to reproduce these human pathological conditions and to elucidate the molecular mechanisms underlying diseases and physiological functions.
With the CRISPR-Cas9 method, many candidate rodents that may have the designed mutation can be generated. In previous reports, targeted integration was identified in up to 67% of the F0 generation by conventional PCR analysis 17,18 . Analysis of the F1 generation, bred by mating all candidate F0 mice, results in unnecessary breeding of research animals and requires a lot of time, effort, and space, which creates a bottleneck for the CRISPR-Cas9 method. Conversely, if the number of F0 generation candidates is small, a decision must be made whether to perform additional injections to obtain more F0 generation animals. If the need for additional F0 mice could be determined without having to wait for the results from F1 generation analysis, it would save a great deal of time and effort. Whole genome sequencing seems to be the best method to validate the results of genome editing; however, it has been noted that whole genome sequencing using only the tail genome is not effective in mosaic animals 27 . It has also been noted that single-cell genome sequencing with somatic or germline cells requires further technical improvements in genome coverage, accuracy, and throughput 28,29 . In general, harvesting F0 generation germline cells requires invasive procedures that carry the risk of reduced fertility. Therefore, the first choice is to analyze genomes obtained from mosaic somatic cells. Although there are several methods for analyzing genome structure, the only way to identify rodents with the desired knock-in allele from a mixture of designed and unpredictable mutants is to combine multiple methods. However, it is not clear which combination of methods is most effective for identifying possible mutations produced by the CRISPR-Cas9-based knock-in method and for establishing a strain that has the designed mutation. In this study, we generated prolactin-releasing peptide receptor (Prlhr)-Venus knock-in reporter mice by a CRISPR-Cas9 method with one guide RNA and one long single-stranded DNA (lssDNA) making use of HDR mechanisms. We performed five different genome structure analyses, including the recently developed Rapid Amplification of Integration Sites without Interference by Genomic DNA contamination (RAISING) method, and compared the results over three generations to establish the desired strain. These analyses revealed that genomic rearrangement in the vicinity of the targeted knock-in allele as well as multicopy and mosaic genotypes occurred in the F0 generation; however, we were able to obtain a promising F0 generation by eliminating undesired mutations to establish the designed strain. Results Generation of CRISPR-Cas9-based Prlhr-Venus knock-in mice with lssDNA and one sgRNA. We sought to generate Prlhr-Venus knock-in mice by inserting a Venus-SV40 polyadenylation signal cassette into the gene encoding Prlhr (Fig. 1A). The target site of sgRNA was 24 bp downstream of the Prlhr initiation codon to produce 9 amino acids of a Prlhr and Venus fusion protein (Fig. 1B). We microinjected a mixture of human codon-optimized Cas9 nuclease (hCas9) mRNA, sgRNA, and lssDNA into 347 embryos and transferred 334 two-cell embryos into pseudopregnant female mice. From this, we obtained 42 pups of the F0 generation from 11 mothers (Fig. 1C). Conventional PCR analysis. When using the CRISPR-cas9 method for gene targeting in embryos, the initial step in mutation confirmation is conventional PCR of the F0 generation because it is convenient and sensitive. 
We first performed PCR analysis with an internal primer pair within the Venus-SV40 polyadenylation signal cassette. The PCR analysis revealed that 4 of the 42 pups obtained showed positive amplification (upper panel of Fig. 1D, Internal-1). We then used primers external to the targeting vector and internal primers in combination. PCR products of the 5' and 3' regions representing a potentially targeted allele were detected in F0 animals 5 and 24 (second panel of Fig. 1D, 5' ext-1 and 3' ext-1). The PCR product generated by the primer pair external to the targeting vector was only found in animal 24 of the F0 generation (second panel of Fig. 1D, Full ext-1). We crossed F0 numbers 5 and 24 with wild types to obtain the F1 generation (Fig. 1E). The F1 and F2 progeny of F0 number 5 showed precisely the same detection pattern as F0 number 5. However, the progeny of F0 number 24 were divided into two types. The PCR products of the 5' and 3' regions were detected in number 1 but not in number 6 of the F1 (left third panel of Fig. 1D, 5' ext-1 and 3' ext-1). We performed additional analysis of the offspring using other primer pairs. The PCR products of the 5' regions from the external-internal primer pairs were not detected in numbers 5, 6 and 7 of the F1 (right third panel of Fig. 1D, 5' ext-1 and 2). The PCR product amplified from the primer pair external to the targeting vector was found in number 1 of the F1 (left third panel of Fig. 1D, Full ext-1). The detection pattern of PCR products in the F2 was exactly the same as that of their parents (fourth panel of Fig. 1D). Two candidate F0 mice with a Venus insertion in the Prlhr locus were detected by conventional PCR. However, analysis of their F1 and F2 generation progeny showed that F0 number 24 was a mosaic with at least two types of Venus insertions that could be separated by PCR analysis.

Droplet digital PCR analysis. We conducted droplet digital PCR to determine the copy number of the transgene. We first confirmed whether the copy number of the Venus gene was accurately detected using droplet digital PCR. Oxytocin receptor-Venus knock-in heterozygous mice generated by traditional ES cell methods were used as a positive control for a single copy of the Venus gene (Fig. 2A) 30. The average copy number of the Venus gene in the oxytocin receptor-Venus knock-in heterozygous and wild-type mice was calculated to be 1.00 and 0.00, respectively (Fig. 2B). These results confirmed that the copy number of Venus gene integration in the mouse genome could be accurately detected. We then conducted droplet digital PCR for the two candidates of the F0 generation. The copy number of the Venus gene in numbers 5 and 24 of the F0 was calculated to be one (left panel of Fig. 2C). In the F1 generation, numbers 1, 5, 6, and 7 were also calculated to have one copy. These results indicate that the offspring of number 24 of the F0 had a copy number similar to that of its parent. In contrast, although number 5 of the F0 had one copy, numbers 11, 13, 15, and 16 had two copies (middle panel of Fig. 2C). The Venus copy number in the F2 was similar to that of their parents (right panel of Fig. 2C). Based on droplet digital PCR analysis, both of the two F0 generation mice that were candidates by conventional PCR were confirmed to have one copy. The results from the F1 and F2 generations suggest that some germ cells from F0 number 5 had two copies of the transgene and those from F0 number 24 had one insertion copy.
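The copy-number call in this assay rests on Poisson statistics of droplet partitioning. The sketch below (Python) illustrates the calculation with made-up droplet counts; the droplet volume and the counts are placeholders, not values from this study, and the reference is assumed to be a two-copy autosomal assay run on the same sample.

import math

def ddpcr_concentration(n_positive, n_total, droplet_volume_ul=0.00085):
    # Poisson correction: the mean number of template copies per droplet is
    # recovered from the fraction of negative droplets, then converted to
    # copies per microliter (0.85 nL is a typical droplet volume).
    lam = -math.log(1.0 - n_positive / n_total)
    return lam / droplet_volume_ul

# Hypothetical droplet counts for the Venus assay and the reference assay.
venus_conc = ddpcr_concentration(n_positive=2500, n_total=15000)
ref_conc   = ddpcr_concentration(n_positive=4600, n_total=15000)

# Copies per diploid genome = 2 * target / reference; these counts give
# ~1.00, the value expected for a heterozygous one-copy animal.
copy_number = 2.0 * venus_conc / ref_conc
print(f"estimated Venus copies per diploid genome: {copy_number:.2f}")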
These results demonstrate that droplet digital PCR analysis using only the tail genome of the F0 generation was unable to accurately estimate the copy number observed in the F1 generation.

Southern blot analysis. We performed Southern blot analysis to characterize Prlhr locus-specific targeting. Restriction enzymes BamHI and HpaI were selected from a putative designed knock-in allele for digestion of genomic DNA (Fig. 3A). 5' and 3' probes external to the targeting vector and an internal Venus probe were used to distinguish the wild-type allele (9.4-kbp band for the 5' and 3' probes, and no band for the Venus probe) and the designed target allele (4.1-kbp for the 5' and Venus probes and 6.2-kbp for the 3' probe; Fig. 3A). None of the F0 mice showed the designed band pattern by Southern blot analysis. In F0 number 5, the 5' probe detected the expected band (red arrowhead, left upper panel of Fig. 3B), but the 3' probe detected a band that was larger in size than expected (blue arrowhead, left middle panel of Fig. 3B), and the Venus probe detected two bands, one expected and another unexpected (red and blue arrowheads, respectively, left middle panel of Fig. 3B). The F1 and F2 generation progeny of F0 number 5 showed exactly the same band pattern (middle and right panels of Fig. 3B). For F0 number 24, all probes detected two bands of the mutation, one expected and another unexpected (red and green arrowheads, respectively, left panels of Fig. 3B). Based on the conventional PCR results, the progeny of the F1 and F2 generation could be divided into two types: those that showed the expected band pattern for all probes (F1 number 1 and F2 number 30, middle and right panels of Fig. 3B) and those that showed unexpected bands for all probes (F1 number 6 and F2 number 37, middle and right panels of Fig. 3B). Unexpected bands were detected with the 5' and 3' probes designed outside the targeting vector sequence, indicating that an unexpected insertion of the Venus gene occurred between the two BamHI sites in the vicinity of the Prlhr locus.

Next-generation sequencing-based RAISING analysis. We also performed random integration analysis with next-generation sequencing. This method was developed for sensitive detection of clonality in cells infected with Human T-cell leukemia virus type-1, which causes adult T-cell leukemia/lymphoma 31. In number 5 of the F0 generation, two types of sequences including Venus were detected (Fig. 4A, Supplemental Fig. 1 and Supplemental Table 1). Type (a) contained both the endogenous genomic sequence and the knock-in vector-containing sequence. The Venus integration site was at the designed location of the Prlhr gene on chromosome 19. Type (b) consisted of only the knock-in vector-containing sequence, and parts of the 3' and 5' arms were inverted. Among the total of 463,463 reads, the proportions of type (a) and (b) were 78.2% and 21.6%, respectively (Fig. 4B and Supplemental Table 1). In number 24 of the F0, type (a) was detected, and the proportion of type (a) was 99.8% out of a total of 456,263 reads. Results for the F1 generation were similar to those of their parents (Fig. 4B and Supplemental Table 1). Even in the F2 generation, results similar to those of their parents were obtained (Fig. 4B and Supplemental Table 1). By RAISING analysis, two different sequences that included a Venus sequence were detected in F0 number 5.
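Conceptually, the RAISING read tally reduces to classifying junction-spanning reads by diagnostic sequence. The toy sketch below (Python) illustrates this bookkeeping; the signature sequences are invented for illustration (a Venus 3'-end fragment joined either to a mock genomic flank or to its reverse complement, mimicking an inverted arm) and do not correspond to the actual reads of this study.

from collections import Counter

# Hypothetical diagnostic junction sequences (not the actual study primers):
# type (a) reads cross from Venus into a mock endogenous flank, type (b)
# reads cross from Venus into the same flank in inverted orientation.
SIGNATURES = {
    'type_a': 'GGCATGGACGAGCTGTACAAG' 'TTTGCCAGGA',
    'type_b': 'GGCATGGACGAGCTGTACAAG' 'TCCTGGCAAA',  # reverse-complement flank
}

def classify_reads(reads):
    counts = Counter()
    for read in reads:
        for label, sig in SIGNATURES.items():
            if sig in read:
                counts[label] += 1
                break
        else:
            counts['unassigned'] += 1
    return counts

# Toy read set whose proportions echo the ~78% / ~22% split seen in F0 no. 5.
reads = ['AACTGGCATGGACGAGCTGTACAAGTTTGCCAGGATTG'] * 78 + \
        ['AACTGGCATGGACGAGCTGTACAAGTCCTGGCAAATTG'] * 22
counts = classify_reads(reads)
total = sum(counts.values())
for label, n in counts.items():
    print(f"{label}: {n} reads ({100 * n / total:.1f}%)")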
This suggests that a tandem two-copy occurred at one Prlhr locus or that the designed one-copy knock-in occurred at one Prlhr locus and an inverted insertion of the targeting vector occurred at the other Prlhr locus. However, these two sequences did not separate in the F1 and F2 generation progeny of F0 number 5, suggesting that insertion of a two-copy tandem at one Prlhr locus had occurred. Although the F1 and F2 generations from F0 number 24 were divided into two strains based on the results of conventional PCR, only type (a) was detected in both strains. These results suggest that F1 number 6 and F2 number 37 have a Prlhr locus in which Venus was inserted and genome rearrangement occurred in the vicinity.

Genomic sequencing analysis. We performed sequence analysis of numbers 5 and 24 of the F0 generation and numbers 11 and 1 of the F1 generation. We used three primer pairs external to the targeting vector and internal primers in combination as shown in Fig. 1. The sequences of the PCR products in the 5' and 3' joint areas were the designed sequences for numbers 5 and 24 of the F0 and their offspring, numbers 11 and 1 of the F1 (Fig. 5A-C, Supplemental Figs. 2 and 3). PCR products containing the full-length targeting vector were analyzed, and sequences of the designed mutant alleles were detected in number 24 of the F0 and number 1 of the F1 (Fig. 5A,D and Supplemental Fig. 4). In number 1 of the F1, PCR products were also detected by conventional PCR using primer pairs located outside of the primers used for the Full ext-1, as well as the primer located downstream of exon 1 and the internal primer (Supplemental Fig. 5A and B). No unexpected mutations in the confirmed range, from upstream of exon 1 to downstream of the knock-in allele, were detected by sequence analysis of the PCR products (Supplemental Figs. 5C,D, 6 and 7). These results indicated that two candidate mice of the F0 generation had Venus inserted in the Prlhr locus and that each sequence was correctly transmitted to their offspring.

Discussion

In this study, we conducted a detailed analysis of the process for generating a Prlhr-Venus knock-in mouse line using five genotyping methods spanning three generations. After the comprehensive analysis, we found that one F0 generation mouse had the designed knock-in allele, and we were able to establish a knock-in mouse line with that designed mutant. Number 5 of the F0 had a tandem two-copy mutant allele. F0 number 24 was a mosaic mouse with the designed mutant allele and an allele that underwent reconstruction near the Prlhr locus with the targeted knock-in. These two mutant alleles were separated in the F1 generation.

Validation of conventional PCR. In conventional PCR, the use of primers external to the targeting vector is effective because they can confirm knock-ins to the target locus. However, in the present study PCR products from the 5' and 3' regions were detected using the external and internal primer pairs even in the two-copy mutant, suggesting that conventional PCR alone did not guarantee a designed one-copy mutant. The multicopy event occurred with the one guide RNA-one donor DNA method as well as in the previously reported conditional knock-out method using two guide RNAs and one donor DNA containing two loxP sequences 23. In this study, PCR products containing the full-length targeting vector were detected in mice with a one-copy mutant allele using external primer pairs.
The detection of this PCR product seems to be the most reliable method for detecting a one-copy mutant allele by conventional PCR. However, in a different study on the generation of a conditional knock-out using the two guide RNA-one lssDNA method, no PCR products were detected using external primer pairs despite the designed one-copy integration 23. In addition, the majority of studies on improved knock-in methods using CRISPR-Cas9 did not determine whether PCR products could be detected using external primer pairs 14-17,19-21, although the correct product was reported in one paper 18. PCR amplification with external primer pairs is important for confirmation of a single copy of the transgene, although the true capability of PCR amplification may depend on the characteristics of the targeting vector and the target locus. Thus, just because the product is not detected does not necessarily mean that the expected mutation did not occur. For example, number 24 of the F0 was a mosaic genotype with two different knock-in mutant Prlhr alleles. This mosaicism was not detected by conventional PCR using the genome of number 24 of the F0, and could only be detected using multiple primer pairs for the genomes of the F1 and F2 generations. Conventional PCR is most often used for genotyping for strain maintenance. However, at least for the F1 generation, multiple external primer pairs should be used to select rodents with the designed mutation and to remove unexpected mutants.

Validation of droplet digital PCR. We first confirmed the accuracy of copy number detection by droplet digital PCR using oxytocin receptor-Venus knock-in heterozygous mice that were established using ES cells via homologous recombination. In these mice, copy number analysis for the Venus gene was able to detect the one-copy mutant with high confidence. In the Prlhr-Venus knock-in mice, number 5 of the F0 generation was identified as a one-copy mutant. Conversely, all heterozygous progeny from number 5 of the F0 were identified as two-copy mutants over the F2 generation. HDR-mediated repair occurs during the S and G2 phases, when sister chromatids are formed 32. Thus, four Prlhr loci can temporarily exist in a one-cell fertilized egg. In the case of number 5 of the F0, the two targeting vectors were inserted into one locus in tandem, and thus, when droplet digital PCR was performed on the genomic DNA extracted from the F0 mouse, it was identified as having only one copy (Fig. 6A). Number 24 of the F0 was identified as a one-copy mutant, and all heterozygous progeny of that mouse were also detected as one-copy mutants. In this case, one targeting vector was inserted into two of the four loci; this resulted in the droplet digital PCR calculating it as only one copy (Fig. 6B). For the F1 and F2 heterozygous mice, the copy number obtained by droplet digital PCR is quite reliable because the knock-in allele and the wild-type allele are present in a one-to-one ratio (Supplemental Fig. 8). Droplet digital PCR is not decisive for the F0 mice because copy number cannot be accurately calculated due to the potential for multicopy or mosaic genotypes. However, estimation of the knock-in events occurring in F0 embryos, performed by comparing copy number results in the F0 and F1 with droplet digital PCR, is valuable for the development of efficient and appropriate knock-in methods.

Validation of Southern blotting.
Although Southern blotting is a classic method, we were able to detect both designed and unexpected mutant loci. In order to obtain sufficient information from the genomic DNA of the F0 generation, it was necessary to use all 5' and 3' probes external to the targeting vector and internal Venus probe. In number 5 of the F0, the band pattern was detected as expected in the 5' probe. However, the band detected with the 3' probe was larger than expected, and the Venus probe also detected a band other than the expected. These results suggested that knock-in occurred at the Prlhr locus but with an unexpected multicopy mutation, which could not be detected by the 5' probe alone. For F0 number 24, all probes detected both unexpected and expected bands. This suggests that the mice had two different mutations at the Prlhr loci. Based on the PCR results, the progeny were divided into two genotypes, and it was determined that the F0 had a mosaic of two knock-in genotypes. A larger band than that of the wild type was detected using the external probes, thus confirming that rearrangement occurred in the vicinity of the Prlhr locus. Southern blotting in the F1 generation should be performed to confirm the presence of the designed mutant. External probes can detect gross genomic changes near the target locus, while internal probes can distinguish integration events including off-target insertions from the entire genome. The weakness of Southern blotting is that it cannot detect small insertion or deletion mutations around the target locus, which often occur with the CRISPR-Cas9 method 15,21 , if they occur at the same time as the knock-in event to the target locus. Validation of next-generation sequencing-based RAISING. The RAISING method was used in the present study to detect knock-ins within the target locus and off-target insertions. Two different sequences were detected for number 5 of the F0 generation. Type (a) contained sequences outside the targeting vector and showed the designed knock-in. Type (b) contained sequences with partially inverted 5' and 3' homology arms but did not include genomic sequences outside the targeting vector. This result was attributed to either a tandem two-copy at one Prlhr locus or a designed one-copy knock-in at one Prlhr locus and a reverse insertion of the targeting vector at the other Prlhr locus. Southern blot results for F0 number 5 showed that the designed knock-in-derived band was not obtained with the 3' probe, suggesting that a tandem two-copy likely occurred. Both types of sequences were also detected in the progeny of number 5 from the F0, and the proportions of their sequence reads were the same as in the F0. The fact that type (a) and type (b) did not separate in the F1 and F2, that the sequence of type (b) contained the sequence of the 3' arm region, and that all heterozygous progeny of number 5 of the F0 were also identified as two-copy mutants by droplet digital PCR suggested that the sequences containing these two Venus genes were arranged in tandem at the Prlhr locus. In number 24 of the F0, only the type (a) sequence was detected. Based on results from conventional PCR, the strain was divided into two groups, but only type (a) was detected in all F1 and F2 mice. Number 6 of the F1 and number 37 of the F2 were predicted to have off-target insertions because no PCR products were detected by conventional PCR using external primers. 
However, combined with the Southern blot results, in which the mutant allele was detected with the 5' and 3' probes, the targeted knock-in was found at the Prlhr locus but with rearrangement occurring in the vicinity. Unlike conventional PCR, for which results depend on the selection of primer pairs, the RAISING method provides information on the insertion of exogenous gene sequences throughout the entire genome at the sequencing level. The data analysis is expected to be much simpler than whole genome sequencing using a next-generation sequencer because only the regions associated with the inserted mutations are sequenced. Interpretation of unexpected bands due to two-copy mutants in Southern blotting of the F0 generation is easier when combined with sequences from the RAISING method. However, the weakness is that the maximum sequence length per read is 300 bp; unless the homology arm sequence is less than 250 bp, the sequence cannot reach the endogenous genomic sequence and the insertion position cannot be identified. Moreover, while it is less expensive than whole genome sequencing, it is more expensive than conventional sequencing. In the F1 generation, where mosaicism is resolved, droplet digital PCR and genomic sequencing are more useful for copy number detection and sequence confirmation.
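The read-length constraint just described can be expressed as a simple design check. This sketch is illustrative rather than from the paper; the 50-bp genomic anchor is an assumed margin implied by the 300-bp read length and the stated 250-bp arm limit.

MAX_READ_LEN = 300       # maximum sequence length per read (bp)
MIN_GENOMIC_ANCHOR = 50  # assumed genomic bases needed beyond the arm to map the site

def insertion_site_identifiable(arm_len_bp: int) -> bool:
    """True if a read starting inside the insert can cross the homology arm
    and still retain enough endogenous sequence to map the integration site."""
    return arm_len_bp + MIN_GENOMIC_ANCHOR <= MAX_READ_LEN

print(insertion_site_identifiable(100))  # True: the 100-bp 5' arm used in this study
print(insertion_site_identifiable(300))  # False: a 300-bp arm consumes the whole read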
Validation of genomic sequencing. Sequence confirmation is particularly important for the CRISPR-Cas9 method, which can induce insertion and deletion mutations. These mutations can occur in the junction region along with the knock-in of the targeting vector 15,21. In the PCR products of the 5' and 3' regions using external primers, number 5 of the F0 generation and its offspring, number 11 of the F1, showed all of the designed junction sequences. These results indicate that correct junctions were generated at the 5' and 3' ends in tandem multicopy mutants and that, even if the sequences of the 5' and 3' junctions are correct, they cannot guarantee a single-copy mutant. In number 24 of the F0, the PCR product containing the full length of the targeting vector using external primers was confirmed to be the designed sequence. In the present study, lssDNA was used as the donor DNA, and no unexpected mutations were found in any of the sequences analyzed, including the Venus-polyA signal cassette. However, in a previous study using two gRNAs and one lssDNA to generate conditional knock-out mice, unexpected point mutations occurred in the sequence within the targeting vector 22. Thus, it is necessary to verify the sequence within the targeting vector as well as the sequences at the 5' and 3' junctions in the F1 generation. Four possible knock-in events using the CRISPR-Cas9 method. In the generation of knock-in mice by homologous recombination of ES cells, chimeric mice can be obtained using a single clone with the correct recombination selected from a large number of ES cell clones. Therefore, chimeric mice of the F0 generation have a unitary knock-in allele. The knock-in allele of the F1 mice obtained by crossing chimeric mice with wild type is also identical to that of the injected ES cells. However, careful analysis is required for F0 mice obtained by the CRISPR-Cas9-based knock-in method, because the somatic genomic DNA is a mixture of designed and unexpected mutations. In order to select promising F0 animals and establish the designed strains, we used five genome analysis methods to detect knock-ins at the target locus, genomic rearrangements in the vicinity of the knock-in allele, and multicopy and mosaic genotypes (Fig. 7A). For detection of knock-ins within the target locus, conventional PCR and Southern blot analysis provided accurate information; the RAISING method and genome sequencing were particularly informative because they provided sequence data. This was similar for both the F0 generation and the F1 and F2 generations. Genomic rearrangements occurring in the vicinity of the knock-in allele could only be detected by Southern blotting, and this was similar for the F0, F1, and F2 generations. For detection of multicopies, Southern blotting and the RAISING method were effective in the F0 generation, while in the F1 and F2 generations, in addition to these methods, the increased accuracy of droplet digital PCR was useful. Mosaicism was detected by Southern blotting in the F0 generation, whereas in the F1 and F2 generations mosaicism was eliminated and consequently could also be identified by conventional PCR. Conclusion. Candidate selection should be made based on the assumption that F0 generation mice generated by the CRISPR-Cas9-based knock-in method contain a mixture of designed and unexpected mutations. Analytical methods that can reveal both unexpected and designed mutations enable a more confident selection of promising F0 mice. Southern blotting was particularly useful for detecting unexpected mutations in the whole genome when all external 5', 3', and internal probes were used. Interpretation of unexpected bands due to multicopy variants by Southern blotting was easier when combined with sequences from the RAISING method, although the RAISING method is limited in that one side of the homology arm in the knock-in vector must be less than 250 bp. For the F0 generation, the combination of four methods, conventional PCR, Southern blotting, RAISING, and genome sequencing, was very effective (Fig. 7B). Moreover, this series of analyses can be completed before the F0 generation becomes fertile. For the F1 generation, the results of droplet digital PCR, in addition to conventional PCR, Southern blotting, and genome sequencing, were beneficial for establishing the designed strain (Fig. 7C). A combination of these methods is sufficient, and the RAISING method was not necessary in this case. On the other hand, in the case of a tandem multicopy, regardless of the generation, conventional PCR, Southern blotting, and sequencing yielded results that were partially identical to those of a one-copy insertion. These results demonstrate that focusing only on positive evidence can lead to erroneous selection of undesigned strains. Of course, if we can determine whether additional F0 mice should be obtained without waiting for analysis of the F1 generation, a great amount of time and effort can be saved. The most important consideration, however, is to establish a strain with the designed knock-in mutation. Careful analysis of the F1 generation, which eliminates mosaicism, is essential to this goal. Materials and methods. Animals. Mice. Preparation of lssDNA. A DNA fragment containing a Venus-SV40 polyadenylation signal cassette and two homology arms was cloned into a pUC plasmid. A 100-bp fragment was used as the 5' homology arm and a 300-bp fragment was used as the 3' homology arm (Supplemental Table 2).
The target sequence in the plasmid was amplified by PCR with a primer pair containing a nuclease-resistant primer, and the PCR product was digested with a 5'-3' exonuclease to produce lssDNA. After digestion of the template plasmid, the product was sequenced and confirmed to be lssDNA by capillary electrophoresis. The lssDNA was stored at −80 °C until use. Microinjections into mouse embryos. Female mice were superovulated by injection of pregnant mare serum gonadotropin (PMSG, ASKA Pharmaceutical Holdings Co., Ltd., Tokyo, Japan) and human chorionic gonadotropin (hCG, ASKA Pharmaceutical Holdings Co., Ltd.). Pronuclear-stage embryos were then collected from the superovulated females. The embryos were cultured in KSOM medium (ARK Resource, Kumamoto, Japan) before and after microinjections. A mixture of 200 ng/mL Cas9 mRNA, 100 ng/mL sgRNA, and 50 ng/mL lssDNA was microinjected into the male pronuclei of embryos using a micromanipulator (Narishige, Tokyo, Japan). Swelling of the pronuclei due to the injection (approximately 1-2 pL) was used as confirmation of successful injection 34. The embryos were cultured in KSOM medium and then transferred into pseudopregnant female mice. Conventional PCR and sequencing analysis for detection of genomic mutations. Approximately 10 ng of genomic DNA extracted from the tail was used. Genomic PCR was performed in a 25-µL reaction volume containing HotStarTaq DNA polymerase (Qiagen, Hilden, Germany) or Q5 High-Fidelity DNA polymerase (New England Biolabs), genomic DNA, and 12.5 pmol of each primer. The primers used are shown in Supplemental Table 3. Oxytocin receptor-Venus knock-in heterozygous and wild-type mice were used as positive and negative controls, respectively. PCR products were directly sequenced using BigDye terminator v3.1 and an Applied Biosystems 3130xl DNA Sequencer (Thermo Fisher Scientific, Inc.) according to the manufacturer's standard protocol. Southern blot analysis for detection of mutants. Five µg of genomic DNA extracted from the tail was digested with BamHI and HpaI (New England Biolabs, Massachusetts, USA) and loaded on 0.8% agarose gels. The digested DNA samples were subjected to electrophoresis and transferred to Hybond-XL membranes (Cytiva, Tokyo, Japan). The membranes were hybridized to 32P-labeled DNA probes. The probes were obtained by digestion with restriction enzymes and labeled with DNA polymerase I, Large (Klenow) Fragment (New England Biolabs) and random primers (Takara Bio Inc., Shiga, Japan) with [32P]dCTP (PerkinElmer, Massachusetts, USA). The probes used are shown in Supplemental Table 4. Droplet digital PCR for determination of transgene copy number. Droplet digital PCR was performed using a QX200 droplet digital PCR system (Bio-Rad Laboratories, Inc., California, USA). Genomic DNA extracted from the tail was digested with the restriction enzyme TaqαI (New England Biolabs). The mouse oxytocin receptor gene was used as a reference gene for normalization of Venus copy number. The assay was performed in a 20-µL reaction volume containing 2 ng of digested genomic DNA, ddPCR Supermix for Probes (Bio-Rad Laboratories, Inc.), gene-specific primers, and hydrolysis probes. Each reaction was performed in duplicate. The hydrolysis probe sets used are shown in Supplemental Table 3. The hydrolysis probe set for the mouse oxytocin receptor gene was designed in exon 4. Oxytocin receptor-Venus knock-in heterozygous mice generated by embryonic stem cells were used as one-copy positive controls.
In these mice, part of exon 3 was replaced with a Venus-polyadenylation signal cassette, but exon 4 is intact. Droplet digital PCR data were analyzed with QuantaSoft version 1.7 software (Bio-Rad Laboratories, Inc.), and the number of Venus gene copies was calculated using the OXTR gene as a two-copy-per-genome reference. Next-generation sequencing. Rapid amplification of integration sites was performed according to a previous report with minor modifications 31. Genomic DNA extracted from the tail was used. Specific primers used for the amplification in this study are shown in Supplemental Table 3. The final PCR products were purified using the Agencourt AMPure XP kit (Beckman Coulter, California, USA) and were quantified with a Qubit dsDNA HS assay kit (Thermo Fisher Scientific, Inc.) and an Agilent BioAnalyzer with a High-Sensitivity DNA chip (Agilent Technologies, California, USA). Next-generation sequencing was performed using the MiSeq Reagent Kit v3 (600-cycle) on the Illumina MiSeq system (Illumina, California, USA) according to the manufacturer's protocols. For data analysis, amplicon-sequence reads of less than 50 nucleotides and low-quality sequencing reads were excluded using fastp software 35 (Supplemental Table 5). Adapter sequences were also trimmed with fastp (RRID:SCR_016962). A homology search was then performed using Magic-BLAST (RRID:SCR_015513), and trimmed sequence reads that had both 20 or more nucleotides of Venus sequence and 90% or greater identity with the Mus musculus genome were extracted. Genomic locations of Venus were determined on the Mus musculus genome GRCm39 for all extracted sequence reads. The extracted sequence reads were grouped on the basis of Venus insertion sites and were analyzed with SnpEff software (RRID:SCR_005191), which annotates functional effect predictions (Supplemental Table 1).
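The read-filtering criteria just described can be summarized in a few lines. This is a minimal sketch and not the authors' pipeline; the record fields are hypothetical placeholders for parsed fastp and Magic-BLAST output.

from dataclasses import dataclass

@dataclass
class Read:
    length: int              # read length after trimming (nt)
    venus_match_nt: int      # longest alignment to the Venus sequence (nt)
    genome_identity: float   # % identity of the GRCm39 alignment (0-100)

def keep_read(r: Read) -> bool:
    """Apply the stated criteria: discard reads shorter than 50 nt, then keep
    reads with >= 20 nt of Venus sequence and >= 90% identity to the mouse genome."""
    if r.length < 50:
        return False
    return r.venus_match_nt >= 20 and r.genome_identity >= 90.0

reads = [Read(280, 35, 98.5), Read(45, 30, 99.0), Read(300, 12, 95.0)]
print([keep_read(r) for r in reads])  # [True, False, False]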
Data availability. The raw reads of next-generation sequencing in this study are available from the DDBJ/EMBL/NCBI Sequence Read Archives under the accession number DRA014567. The data supporting the findings of this study are available from the corresponding author upon request.
PREDNISOLONE VERSUS PREDNISOLONE COMBINED WITH ACYCLOVIR FOR THE TREATMENT OF BELL'S PALSY: A COMPARATIVE STUDY IN A PERIPHERAL REFERRAL CENTRE Shrestha K, Shah RK, Sapkota S, Giri S. BJHS 2018;3(2)6:443-446. Objective: To evaluate whether prednisolone with acyclovir provides a better degree of facial muscle recovery than prednisolone alone in patients with Bell's palsy. Methodology: This is a hospital-based longitudinal cross-sectional study conducted in Birat Medical College and Teaching Hospital and Birat Hospital Pvt. Ltd., Biratnagar, Nepal, from January 2017 to May 2018. A total of 42 patients diagnosed with Bell's palsy were included in this study; 21 patients were treated with prednisolone and the remaining 21 patients were treated with a combination of prednisolone and acyclovir. The House-Brackmann grading scale was used for recording the initial presentation of patients with Bell's palsy and their early recovery on follow-up visits. The collected data were analysed using SPSS 18.0. Results: The total number of patients included in this study was 42. The mean age of patients was 27.1 ± 10 years. Among them, 25 (59.5%) were male and 17 (40.5%) were female, a male to female ratio of 1.5:1. Prednisolone plus acyclovir given in combination to Bell's palsy patients yielded a 76.2% recovery, while prednisolone given alone yielded a 57.1% recovery (P = 0.195; odds ratio 2.400, 95% confidence interval 0.638-9.028). Conclusions: The combined therapy of prednisolone and acyclovir appears more effective than prednisolone alone in the treatment of Bell's palsy, but this requires confirmation with a randomized controlled trial. INTRODUCTION Bell's palsy is named after the British physician Sir Charles Bell, who described the onset, physical findings, and course of the disease in 1821 1. The incidence rate is 20 per 100,000 per year and is equal in both genders. Bell's palsy can occur at any age, but the median age is 40, and both sides may be affected equally 2. Bell's palsy is defined as an idiopathic, sudden-onset peripheral facial nerve palsy 3. The exact cause of Bell's palsy remains unclear, although ischaemic neuropathy, viral infection (usually herpes simplex virus), and autoimmune disorders such as sarcoidosis have been proposed as causes 4. The pathophysiology of Bell's palsy involves inflammation and compression of the seventh cranial nerve around the area where it exits the skull via the stylomastoid foramen. The facial nerve travels through the fallopian canal and then enters the parotid gland, where it divides into five terminal branches that are responsible for innervating the muscles of facial expression. The oedema and the inability to expand beyond the inelastic bony fallopian canal lead to a pressure effect and demyelination of axons, resulting in weakness or paralysis of everything that the nerve innervates 5.
Many viruses, including herpes simplex virus type 1 (HSV-1), herpes simplex virus type 2 (HSV-2), human herpes virus, varicella zoster virus (VZV), adenovirus, influenza B virus, coxsackievirus and Epstein-Barr virus (EBV), have been linked to the development of Bell's palsy, but it is believed that HSV-1 is the one responsible for idiopathic facial palsy 6. HSV may remain latent in the geniculate ganglia, and increasing evidence implies that Bell's palsy is caused by latent HSV being reactivated from the cranial nerve ganglion, causing inflammation of the facial nerve 7,8,9. The majority of patients with Bell's palsy recover completely without intervention. Complete recovery typically occurs within 6 months. Approximately 30% of patients do not recover completely and are left with residual symptoms such as contracture, synkinesis and paresis 10. Due to its unknown etiology, treatment of Bell's palsy remains controversial, frequently debated and variable. Steroids and antivirals are the two main types of pharmacological treatment that have been used for Bell's palsy 3. The rationale for these treatments is based on the presumed pathophysiology of Bell's palsy: steroids are used to counteract the inflammatory process, and antivirals are aimed at eradication of viruses such as HSV-1, so antiviral therapy seems logical 4,11. Most surgeons would advocate a combination of steroid and antiviral drugs. The usual recommended regime is oral prednisolone 1 mg/kg/day for 7 days followed by a ten-day taper, and oral acyclovir 200-400 mg 5 times daily for 7 days. METHODOLOGY This is a hospital-based longitudinal cross-sectional study. A total of 42 patients diagnosed with Bell's palsy who visited the OPD of Otorhinolaryngology of Birat Medical College and Teaching Hospital and Birat Hospital Pvt. Ltd., Biratnagar, Nepal, from January 2017 to May 2018 were included in this study. Permission to conduct this study was taken from the institution. All patients with Bell's palsy above the age of 10 years and of either sex were enrolled in the study. All patients were randomly divided into two groups. The first group, Group A, of 21 patients was treated with prednisolone, and the remaining Group B of 21 patients was treated with a combination of prednisolone and acyclovir. In Group A, oral prednisolone 1 mg/kg/day was given for 7 days followed by a ten-day taper, and Group B was treated with a combination of oral prednisolone 1 mg/kg/day and oral acyclovir 400 mg five times per day for 7 days. Patients with facial palsy due to Ramsay Hunt syndrome, chronic suppurative otitis media, systemic infection, vasculopathy, secondary causes of 7th nerve palsy, sensitivity to acyclovir, Bell's palsy with more than 3 days of symptom onset, or other cranial nerve paralysis, and patients lost to follow-up, were excluded. This is therefore a comparative study of recovery outcomes in patients with Bell's palsy treated either with prednisolone alone or with a combination of prednisolone and acyclovir. The treatment of patients with Bell's palsy depends on a number of variables. Steroid treatment has been shown to be effective in many studies of patients with Bell's palsy 17,18. The purpose of adding antiviral drugs to the treatment of Bell's palsy is to eradicate the virus, while steroids reduce swelling and inflammation of the nerve 19. Use of an antiviral agent in addition to a steroid in the treatment of Bell's palsy has been shown to improve the recovery of facial function when compared to corticosteroid treatment alone 20,21. Kawaguchi et al. showed that the recovery rate in patients treated with a combination of prednisolone and valacyclovir was significantly greater than with prednisolone alone 7. de Almeida JR et al. suggested that the combination of antiviral and glucocorticoid treatment reduced the risk of unfavourable recovery compared with glucocorticoid treatment alone 22. Lockhart P et al. showed that treatment with antiviral agents alone was unsatisfactory, while the combination of corticosteroid and acyclovir therapy was significantly better 20. The study by Hato et al. reported a significant benefit of adding valaciclovir and showed that the benefit of valaciclovir was greater in patients with severe facial paralysis at presentation than in those with moderate paralysis 23. Minnerop et al. performed a subgroup analysis of patients who presented with severe facial muscle paralysis (House-Brackmann grade of 5 or 6) and found significantly better recovery in patients who received famciclovir plus steroids than in those on steroids alone (72% v 47%, respectively, achieved normal function) 24. In a double-blind, placebo-controlled, randomized study, early treatment with prednisolone significantly improved Bell's palsy; however, no significant advantage was found for acyclovir alone or in combination with prednisolone 25. Steroids are effective in patients whose Bell's palsy started recently, and antiviral therapy does not significantly improve facial nerve function 26. The recovery rate with combination therapy increases only slightly compared to treating with prednisolone alone, according to Numthavaj et al. 27; prednisolone is the basis of Bell's palsy treatment. On the other hand, one of the most recently published trials, by Engstrom et al., is in opposition to this argument. Patients in this trial had a median House-Brackmann grade of 4 at presentation, and the authors convincingly showed no benefit of adding valaciclovir to steroids 28. However, other studies may underestimate the efficacy of treatment with added acyclovir. In our study, although the response seems to be better with combined acyclovir and prednisolone than with prednisolone alone, the difference was found to be statistically insignificant.
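The reported effect size can be reproduced from the 2x2 recovery table implied by the results (16 of 21 recovered with the combination, 12 of 21 with prednisolone alone). This is a minimal sketch using the standard Woolf log method for the 95% confidence interval of the odds ratio.

import math

a, b = 16, 5   # combination group: recovered, not recovered (76.2% of 21)
c, d = 12, 9   # prednisolone-alone group: recovered, not recovered (57.1% of 21)

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.3f}, 95% CI {lo:.3f}-{hi:.3f}")
# OR = 2.400, 95% CI 0.638-9.029, matching the reported values up to rounding;
# the interval spans 1, consistent with the non-significant P value above.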
CONCLUSION The combination therapy of prednisolone along with acyclovir was found to be better than prednisolone alone, yet the difference was statistically insignificant. For confirmation of this finding we recommend a randomized controlled trial with a larger sample size. RECOMMENDATION The study recommends the combined therapy of prednisolone and acyclovir as an effective treatment for Bell's palsy. LIMITATION OF THE STUDY In this study the sample size was small; therefore, multicentre studies with large sample sizes are required. ACKNOWLEDGEMENT I would like to thank all the faculty from the Department of Otorhinolaryngology of Birat Medical College and Teaching Hospital and others who were involved directly and indirectly in making this study a success, as well as all the patients who were enrolled in this study.
THE INFLUENCE OF HEDONIC VALUE AND UTILITARIAN VALUE ON BRAND TRUST AND LOYALTY This study aims to examine and analyze the influence of hedonic value and utilitarian value on brand trust and loyalty for shoe products. The research was conducted by distributing questionnaires to 100 respondents who use shoe products. The analysis technique used is a quantitative analysis technique with the path analysis method. The results of this study indicate that hedonic value has a significant positive effect on brand trust, utilitarian value does not have a significant influence on brand trust, and brand trust has a significant positive effect on loyalty. References And Hypotheses Based on previous research conducted by Gary Hedonic Value Hedonic values are based on a purchase motivation: consumers buy because they like the product, driven by the desire to achieve a form of pleasure, freedom, delusion, and escape from problems. Hedonic consumption refers to the needs of consumers in using a product to create a sense of fantasy, produce feelings based on the senses, and produce emotional stimuli to satisfy themselves. Hedonic value is an assessment of the quality of shopping seen in terms of enjoyment, the feeling of attraction to the eyes (visual appeal) of a product, and escape from reality (escapism) (Subagio, 2011). Hedonic value is the overall assessment of a consumer as a result of the fulfilment of pleasure based on the use of a product (Yistiani, Yasa, & Suasana, 2012). The hedonic value of a product also greatly influences consumer behavior, especially regarding consumer emotions and feelings. In practice, before using or buying something, consumers must first like the product, have freedom in deciding, express joy, and feel happiness, which will produce satisfaction (Chitturi, Raghunathan, & Mahajan, 2008). According to Kim (2006) there are six dimensions for measuring a consumer's hedonic level, namely adventure, social, gratification, idea, role, and value shopping: a. Adventure shopping: consumers shop for the experience, and by shopping consumers feel as if they have their own world. b. Social shopping: consumers assume that the pleasure of shopping is greater when shopping is done together with family or friends. c. Gratification shopping: shopping is an alternative way to reduce stress, overcome bad moods, and forget the problems being faced. d. Idea shopping: consumers shop to follow new fashion trends and to see new products or something new. e. Role shopping: consumers prefer shopping for others rather than for themselves, so consumers feel that shopping for other people is a fun thing to do. f. Value shopping: consumers regard shopping as a game of bargaining over prices, or look for shopping places that offer discounts, closeouts, or low prices. Hedonistic consumption is a consumer behavior related to aspects of fantasy and emotion in experiences based on various benefits, such as pleasure when using products (Park & Kim, 2006). Utilitarian Value Utilitarian value is a person's willingness to consider the motives for obtaining a quality product efficiently in terms of time and energy (Subagio, 2011). Utilitarian products are generally not related to feelings or emotional states. The utilitarian aspect of consumer value plays a very important role in shaping customer satisfaction and behavioral intentions (Ryu, Han, & Jang, 2010).
Utilitarian motives emphasize the value of shopping that has benefits related to the task at hand, reasonableness, caution, and efficiency of activities (Fallefi & Siregar, 2018). From some of the above definitions, it can be concluded that utilitarian value is the use or benefit felt by someone when using a product (Somba, Sunaryo, & Mugiono, 2018). Brand Trust Trust is the expectation of the parties in a transaction, together with the risks associated with assuming and acting on such expectations (Lau & Lee, 1999). Trust is defined as the perception of reliability from the point of view of consumers, based on experience, or more precisely on interactions characterized by the fulfilment of expectations of product performance and satisfaction (Tanojohardjo, Kunto, & Brahmana, 2014). A brand is a name or symbol used by companies to identify a product and distinguish it from other products so that it is easily recognized by consumers when they want to buy a product (Sangadji & Sopiah, 2013). A brand can also be defined as a distinguishing name or symbol (such as a logo, stamp, sign, slogan, words or packaging) used to identify goods or services from the seller or brand holder (S.A., 2008). The understanding of a brand itself can be classified into six levels, namely: 1. Brand as an attribute: a brand can remind someone of certain attributes. 2. Brand as benefits: a brand is not just about attributes, it is also about benefits, because when a customer buys a product, not only the attributes are bought but also the benefits. 3. Brand as a value: the brand represents something, not only the value of the product but also the value of the brand holder and the value to the customer. For example: Mercedes, which means high performance, safety and prestige. 4. Brand as culture: in addition to representing a company, a brand also acts to represent a certain culture. For example: Mercedes represents German culture that is organized, efficient, and of high quality. 5. Brand as personality: the brand reflects a certain personality. For example: Mercedes reflects a reasonable leader. 6. Brand as user: the brand can indicate the type of consumer who buys or uses the product. So, it can be concluded that a brand is a name or symbol used by a company as a characteristic of the goods or services it produces, making it easier for consumers to distinguish them. The selection of a mark for a type of item needs to be thought through because, however small, the brand or mark that has been chosen has an influence on the smoothness of sales (Alma, 2011). The branding of products must be done carefully so as not to deviate from the situation and from the quality and capabilities of the company. The selection of a brand name must be done carefully because, however small, the brand that has been chosen by the company can affect the smoothness of sales. Every company should therefore establish a brand that can give a positive impression, while paying attention to the conditions for choosing a brand, including: 1. Easy to remember: in brand selection it is better if the chosen brand is easy for consumers to remember, both in its words and in its pictures, so that consumers or prospective consumers can easily recall it. 2. Giving a positive impression: in giving a brand to a product, companies should endeavor to give a positive impression of the goods or services produced. 3.
Appropriate for promotion: in addition to the two conditions above, it is better to choose a brand that works well in promotion. Brands with attractive names and images play an important role in promotion, so companies should choose a brand that is easy to remember, so that consumers can easily pronounce and recall it. From the explanation above, brand trust is defined as the willingness of customers to believe in a brand, with all the risks that must be borne, because expectations of the brand will lead to positive results. Brand trust is the ability of a brand to be trusted by consumers in the sense that the product can meet the needs and interests of consumers. Brand trust is the consumer's belief that a product provides certain benefits; this belief arises from repeated perceptions and from learning and prior experience (Arief, Suyadi, & Sunarti, 2017). Brand Loyalty Brand loyalty is consumer behavior in which customers show a consistent attitude towards a brand, reflected in the repurchase of a good or service from a particular company. Brand loyalty is a form of consumer loyalty to a brand that has been purchased and consumed. This loyalty is shown by whether or not a customer will switch to other brands offered by competitors, especially if the brand has changed (Arief et al., 2017). Chaudhuri and Holbrook (2001) define brand trust as the willingness of the average consumer to depend on the ability of a brand to carry out all its uses or functions. Loyalty includes the likelihood of ongoing purchases or the likelihood that customers will switch to other service providers or brands. Building loyalty is one way to maintain the sustainability of the company, because the formation of loyal customers will benefit the company as a producer. When individuals trust another party in an interpersonal relationship, they will rely on that party and will commit to the relationship. This commitment will give individuals the intention to maintain the relationship. Likewise, if what is trusted is the brand, then the individual has the intention to maintain the relationship with the brand (Juari, 2010). Loyalty can be interpreted as a deep commitment to repeat purchases of products or services consistently in the future from the same brand, despite situational influences and marketing efforts that could cause switching behavior (Sari, Kumadji, & Latief, 2013). Customer loyalty is a very important element for every company, because customer loyalty can affect the survival of the company. Therefore, what must be considered by the company is not only how to retain existing customers but, more importantly, how to make them loyal to the brand that the company produces. Companies that have customers with high brand loyalty profit greatly, because loyal customers reduce the company's marketing costs: the cost of retaining customers is much cheaper than that of acquiring new ones (Riana, 2008). Research Methods On the basis of the description of the research, this type of research is explanatory research using a quantitative approach. A population is a unit of individuals or subjects in a certain area and time that will be observed or examined, while a sample is a small part of the population taken to represent the population to be studied.
Based on this study, because the population is not greater than 100 respondents, the authors took 100% of the total population. Data were obtained through a questionnaire. The use of an entire population, without drawing a research sample as the observation unit, is called a census technique. Measurement of the variable indicators used a Likert scale: the response "strongly agree" was given a score of 5, "agree" a score of 4, "neutral" a score of 3, "disagree" a score of 2, and "strongly disagree" a score of 1. Each statement item was tested for validity and reliability in the research model. Data analysis was performed through measurement of the model constructs and the relationships between variables with the Partial Least Squares (PLS) technique. The variables used in this study include free or exogenous variables (X), consisting of Hedonic Value (X1) and Utilitarian Value (X2), as well as bound or endogenous variables (Y), consisting of Brand Trust (Y1) and Loyalty (Y2). Relationship Between Variables Relationship between Hedonic Value and Brand Trust Previous research on the relationships among hedonic and utilitarian values, satisfaction and behavioral intentions in the fast-casual restaurant industry (Ryu et al., 2010) shows that hedonic value does not have a significant influence on brand trust. Relationship between Utilitarian Value and Brand Trust Researchers who previously discussed the relationships among hedonic and utilitarian values, satisfaction and behavioral intentions in the fast-casual restaurant industry (Ryu et al., 2010) found that utilitarian value has a positive influence on brand trust. The Relationship between Brand Trust and Loyalty Previous research conducted by Arief et al. (2017) on the effect of brand trust and brand commitment on brand loyalty shows that brand trust has a positive and significant effect on brand loyalty. A study conducted by Widodo and Tresna (2018), titled The Influence of Brand Trust on Brand Loyalty, shows that customer trust in a brand will affect the level of loyalty to the brand. So it can be concluded that brand trust has a significant effect on loyalty. Test Validity The validity test is done to check whether the answers from respondents obtained through the questionnaire are really valid. The validity of the data for each variable is shown by an Outer Loading value of at least 0.70. Reliability Test Reliability is the extent of the accuracy, precision, or consistency shown by the research instrument. The reliability test is conducted to find out whether the answers of the respondents obtained through the questionnaire are really stable in measuring symptoms or events. The reliability test is done by testing the Cronbach Alpha (CA) value; a variable is said to be reliable if it has a CA value > 0.60. Path Analysis The research hypotheses were tested using a path analysis model, with data processing using the SmartPLS program. Path analysis was originally developed by Sewall Wright, who developed the method as a tool to study the direct and indirect effects of variables, where some variables are considered to be the causes of other variables.
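To make the hypothesis test concrete, here is a minimal sketch with simulated data. It uses ordinary least squares rather than the SmartPLS bootstrap, so it is an analogue of the procedure rather than a reproduction; the variable names and simulated effect size are assumptions for illustration only.

import numpy as np

rng = np.random.default_rng(0)
n = 100
hedonic = rng.normal(size=n)
brand_trust = 0.6 * hedonic + rng.normal(scale=0.8, size=n)

# Standardize so the slope is a path coefficient comparable across variables.
x = (hedonic - hedonic.mean()) / hedonic.std()
y = (brand_trust - brand_trust.mean()) / brand_trust.std()

beta = (x @ y) / (x @ x)          # standardized path coefficient
resid = y - beta * x
se = np.sqrt((resid @ resid) / (n - 2) / (x @ x))
t_stat = beta / se

T_TABLE = 1.985                   # two-tailed critical value used in this study
print(f"path = {beta:.3f}, t = {t_stat:.3f}, significant: {abs(t_stat) > T_TABLE}")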
Results And Discussion Data that have been entered into the construct model in SmartPLS are then calculated (run) to find out their validity and reliability. This process is repeated until the loading factors of all indicators are above the validity requirement of 0.70, while indicators with a loading factor value below 0.70 are removed, so that the validity and reliability of the model are improved. Hypotheses are tested on the basis of the path coefficients, so that significant influence between constructs is established by looking at the value of the parameter coefficient and the t-statistic (t-count). Testing is done in two (2) directions, with limits for rejecting or accepting the proposed hypotheses, using an α value of 5% and a T-table value of 1.985. Research Results a) Effect of Hedonic Value (X1) on Brand Trust (Y1): From the results of the data analysis in this study, it can be seen that the hedonic value variable (X1) has a significant influence on brand trust (Y1) of 0.598, with T-statistic = 9.999 > T-table 1.985. This shows that (H1) is accepted, meaning that there is a significant influence of the hedonic value variable (X1) on brand trust (Y1). This shows that the better the hedonic value of shoe products, the higher the level of brand trust in shoe products. b) Effect of Utilitarian Value (X2) on Brand Trust (Y1): From the results of the data analysis in this study, it can be seen that the utilitarian value variable (X2) influences brand trust (Y1) by 0.052, with T-statistic = 0.587 < T-table 1.985. This shows that (H2) is rejected, meaning that utilitarian value (X2) has no direct effect on brand trust (Y1). c) Effect of Brand Trust (Y1) on Loyalty (Y2): From the results of the data analysis in this study, it can be seen that the brand trust variable (Y1) has a direct influence on buyer loyalty (Y2) of 0.698, with T-statistic = 13.804 > T-table 1.985. This shows that (H3) is accepted, meaning that brand trust (Y1) has a direct and significant effect on loyalty (Y2). This shows that the better the brand trust in the product, the higher the level of loyalty associated with purchases of shoe products.
Analgesic-Like Activity of Essential Oil Constituents: An Update The constituents of essential oils are widely found in foods and aromatic plants, giving them their characteristic odor and flavor. Pharmacological studies, however, evidence their therapeutic potential for the treatment of several diseases and their promising use as compounds with analgesic-like action. Considering that pain affects a significant part of the world population and that new analgesics need to be developed, this review reports on current studies of essential oil chemical constituents with analgesic-like activity, including a description of their mechanisms of action and chemical aspects. Introduction Constituents of essential oils are commonly found in foods, giving them characteristic aroma and flavor. Several plants and their fruits can be recognized by the aroma provided by volatile substances, which are generally alcohols, aldehydes, ketones, esters, hydrocarbons, phenols, and other chemical classes. For example, the alcohol linalool can be found in mango, papaya and pineapple fruit, while the hydrocarbon limonene is present in orange, lemon and guava [1]. In cooking, the aroma of spices is due to the presence of these volatile compounds, such as the phenol eugenol found in clove (Syzygium aromaticum (L.) Merrill & Perry) [2]. Therefore, people are constantly in contact with these constituents through food. Essential oils are a class of natural products with promising biological properties and are traditionally used in aromatherapy for various purposes. Pharmacological and clinical studies have demonstrated the profile of these compounds as drug candidates [3]. For example, the monoterpene perillyl alcohol has preventive and therapeutic effects in a wide variety of preclinical tumor models and is currently under phase I and phase II clinical trials, including against glioblastoma multiforme. Elemene and D-limonene are essential oil constituents also tested in patients with cancer [3]. In fact, several reviews have suggested the therapeutic potential of this group in multiple areas, including analgesics, whose activity presents a large number of published studies [4,5], anticonvulsants [6], anti-inflammatories [7-9], anticancer agents [10,11], anxiolytics [12], and antiulcer agents [13]. Studies of anxiolytic effects have scientifically supported the therapeutic use of several essential oils in aromatherapy. The inhalation route is an interesting route for several therapeutic approaches using these natural products [3,12]. The chemical diversity found in essential oils may be responsible for this variety of pharmacological activities and possibly for the various mechanisms of action of their chemical constituents. These findings not only support the traditional use of aromatic plants and their essential oils but also highlight analgesic-like uses of these natural products. In addition, antitumor essential oils with pharmacological activity in animal models of pain may have a dual effect in the therapeutic approach to patients with cancer. Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage [14]. This symptom affects many people in the world, mainly patients with chronic pathologies, causing loss of quality of life. In response to the demand for powerful analgesics with fewer side effects, many studies have been conducted to discover bioactive substances with the profile of analgesic drug candidates.
Therefore, with the aim of contributing to the record of natural compounds with antinociceptive activity, the purpose of this review was to conduct a systematic investigation of studies on essential oil constituents in experimental models related to antinociceptive activity. This article is an update of our latest review on the topic [4] and describes new studies conducted over the past six years. p-Cymene The aromatic monocyclic monoterpene p-cymene [1-methyl-4-(1-methylethyl)benzene] is the biological precursor of carvacrol and is abundantly found in essential oils from many plant species such as Protium heptaphyllum (Aubl.) Marchand (Burseraceae) [15] and Hyptis pectinata (L.) Poit. (Lamiaceae) [16]. It also occurs naturally in a wide variety of foods, including orange juice, carrots, tangerine, butter and oregano [17]. The antinociceptive and anti-inflammatory activities of p-cymene have been evaluated in different behavioral tests of nociception in rodents, which showed that this monoterpene exerted both peripheral and central antinociceptive action. For instance, the antinociceptive effect of p-cymene was demonstrated in an orofacial nociceptive response model through tests involving the subcutaneous administration of formalin, capsaicin, and glutamate into the upper lip of p-cymene-pretreated male Swiss mice (25, 50 or 100 mg/kg, i.p.). p-Cymene markedly decreased the rubbing behavior induced by all three agents, an effect counteracted by the nonselective opioid receptor antagonist naloxone, indicating the participation of the opioid system in the antinociceptive response [18]. A study developed by Bonjardim and collaborators [19] revealed that, in the acetic acid-induced writhing and formalin tests, exposure of male Swiss mice to p-cymene (50 or 100 mg/kg, i.p.) significantly decreased the number of writhes and the licking time in the first and second phases of the formalin test. In the hot plate test, it increased the latency time of the licking and jumping behavior in response to the thermal stimulus. Additionally, p-cymene (25, 50 or 100 mg/kg) counteracted the inflammatory reaction induced by carrageenan (CG), producing a marked reduction in leukocyte migration. In mice, intraplantar injection of CG leads to hypernociception and an inflammatory response that involves the release of cytokines by resident or migrating cells, initiated by the production of bradykinin [20]. This is followed by the secretion of prostanoids and sympathomimetic amines, such as dopamine [21], which stimulate Aδ and C fiber nerve terminals and the release of substance P and neurokinin A, accentuating local blood flow and vascular permeability [22]. In CG-induced hypernociception, the release of tumor necrosis factor α (TNF-α) and keratinocyte-derived chemokine (KC), for example, is accompanied by the secretion of interleukin 1β (IL-1β) [23], with the subsequent induction of cyclooxygenase-2 (COX-2) expression and the production of prostanoids such as prostaglandin E2 (PGE2) [24]. Other studies have provided further evidence of the antinociceptive and anti-inflammatory properties of p-cymene and the possible role of the opioid system and cytokines in these responses.
The antinociceptive effect of p-cymene (25-100 mg/kg) in male Swiss mice was demonstrated in the tail flick test, showing a dose-dependent increase in reaction time, an effect that lasted for five hours and was antagonized by naloxone and by the δ-, κ- and µ-opioid receptor antagonists naltrindole, nor-binaltorphimine (Nor-BNI) and CTOP (D-Phe-Cys-Tyr-D-Trp-Orn-Thr-Pen-Thr amide), respectively [25]. In the assessment of anti-inflammatory activity, treatment with p-cymene (25, 50 or 100 mg/kg, i.p.) decreased the mechanical hyperalgesia induced by CG, TNF-α, PGE2, and dopamine. In the CG-induced pleurisy test, p-cymene reduced leukocyte (100 mg/kg) and neutrophil (50 and 100 mg/kg) migration to the pleural cavity, and decreased the levels of TNF-α in pleural exudates (25, 50 and 100 mg/kg). Neutrophils, in particular, play an important role at the onset of inflammatory hypernociception by secreting pro-inflammatory cytokines (e.g., TNF-α and IL-1β) and mediators such as prostaglandins [26,27]. p-Cymene was also shown to diminish nitric oxide (NO) production in murine macrophages incubated with lipopolysaccharide (LPS) (25, 50 and 100 µg/mL), a component known to stimulate toll-like receptor 4 (TLR-4), leading to the activation of the transcription factor NF-κB [25]. NF-κB contributes to the production of inflammatory (pro-nociceptive) molecules and enhances inducible nitric oxide synthase (iNOS) activity, thereby increasing NO production [28], which is believed to act as a mediator of inflammation and to sustain hyperalgesia after CG injection [29]. Furthermore, p-cymene significantly increased the number of c-Fos immunoreactive neurons in the periaqueductal gray [25], a midbrain region activated by opioid agonists and involved in pain modulation [30]. Together these findings indicate an anti-inflammatory and antinociceptive action of p-cymene and suggest the involvement of descending pain suppression mechanisms, since its antinociceptive role through the opioid system was increased by the activation of the periaqueductal gray [25]. The assessment of the antinociceptive properties and redox profiles of p-cymene and two other monoterpenes, namely (+)-camphene and geranyl acetate, revealed that p-cymene possessed the strongest antinociceptive action (50, 100 and 200 mg/kg, i.p.) while (+)-camphene and geranyl acetate (200 mg/kg) displayed a moderate analgesic effect in male Swiss mice tested in the acetic acid-induced writhing and formalin models [31]. In contrast, (+)-camphene exhibited the most relevant antioxidant effect in vitro, detected by two specific assays: the thiobarbituric acid-reactive species (TBARS) assay, employed to quantify lipid peroxidation [32], and the total reactive antioxidant potential (TRAP)/total antioxidant reactivity (TAR) assay, employed to estimate the nonenzymatic antioxidant capacity of samples [33]. It also showed the highest scavenging activity against different free radicals, including hydroxyl and superoxide radicals [31]. Carvacrol Carvacrol (5-isopropyl-2-methylphenol) is a phenolic monoterpene found in essential oils of plants from the genera Origanum and Thymus (Lamiaceae) [34,35]. Its pharmacological properties include acetylcholinesterase inhibition [36] and anticonvulsive [37], anxiolytic [38], and antinociceptive [39] actions. The antinociceptive activity of carvacrol was demonstrated in male Swiss mice tested in animal models of pain (acetic acid-induced writhing, formalin and hot plate tests).
The data obtained after oral treatment with single doses of carvacrol showed a decrease in the number of constrictions (50, 100 and 200 mg/kg) and in the paw-licking time (50 mg/kg, first phase of the formalin test; 100 mg/kg, first and second phases), and an increase in the reaction time at 60 min (50 and 100 mg/kg) in the hot plate test. These effects were not reversed by naloxone or L-arginine, suggesting that the antinociceptive action of carvacrol may not be related to the opioid system [40]. On the other hand, the antinociceptive activity of carvacrol has been associated with the inhibition of prostaglandin synthesis [39], as it possesses an effective ability to suppress COX-2 expression and to activate the peroxisome proliferator-activated receptors (PPAR) α and γ [41]. In a study by Guimarães and collaborators [42], the role of carvacrol in the attenuation of mechanical hypernociception and inflammation was investigated in models of hypernociception induced by CG, TNF-α, PGE2 and dopamine, in models of CG-induced pleurisy and paw edema, and in LPS-induced nitrite production in murine macrophages. The administration of carvacrol (50 or 100 mg/kg, i.p.) to male Swiss mice significantly suppressed the mechanical hypernociception and paw edema induced by CG and TNF-α (but not by PGE2 and dopamine), markedly reduced TNF-α levels in the pleural lavage, blocked leukocyte recruitment, and decreased LPS-induced nitrite production in vitro (carvacrol: 1, 10 or 100 µg/mL). Additionally, Guimarães and collaborators [43] also demonstrated the antinociceptive effect of carvacrol in the formalin-, capsaicin-, and glutamate-induced orofacial nociception tests, in which male Swiss mice exhibited reduced face-rubbing behavior in both phases of the formalin test and reduced nociception induced by capsaicin and glutamate (carvacrol: 25, 50 or 100 mg/kg, i.p.). The antinociceptive action of carvacrol was further corroborated by a study developed by Luo and collaborators [44] assessing its activity on glutamatergic spontaneous excitatory transmission in substantia gelatinosa neurons of the spinal dorsal horn, a region believed to modulate nociceptive transmission from the peripheral to the central nervous system [44,45]. Using the patch-clamp method in adult rat spinal cord slices, it was verified that exposure to carvacrol increased the release of L-glutamate from nerve terminals by activating transient receptor potential cation channel, subfamily A, member 1 (TRPA1), and produced membrane hyperpolarization, an effect that could be contributing to its anti-inflammatory action. Several studies have recognized TRP channels as important analgesic targets in inflammatory and neuropathic pain [46]. Another contribution was given by Joca and collaborators [47], who examined possible mechanisms involved in the effects of carvacrol on the peripheral nervous system. Carvacrol reversibly and dose-dependently suppressed the excitability of the rat sciatic nerve (IC50 value of 0.50 ± 0.04 mM) and prevented the generation of action potentials (IC50 = 0.36 ± 0.14 mM) in intact dorsal root ganglion (DRG) neurons without altering the resting potential and input resistance. Carvacrol also suppressed neuronal excitability by direct inhibition of the voltage-gated sodium current of dissociated DRG neurons (IC50 = 0.37 ± 0.05 mM), suggesting a local anesthetic effect of this compound.
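IC50 values such as those just cited are typically estimated by fitting a concentration-response curve. The following minimal sketch uses hypothetical data points (chosen only to be roughly consistent with an IC50 near 0.37 mM) and a simple Hill equation fitted with SciPy; it is an illustration of the procedure, not the authors' analysis.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n_h):
    """Fraction of the response remaining at a given concentration (mM)."""
    return 1.0 / (1.0 + (conc / ic50) ** n_h)

# Hypothetical normalized current amplitudes at increasing carvacrol doses.
conc = np.array([0.01, 0.1, 0.3, 0.5, 1.0, 3.0])       # mM
response = np.array([0.98, 0.78, 0.55, 0.40, 0.22, 0.06])

(ic50, n_h), _ = curve_fit(hill, conc, response, p0=[0.3, 1.0])
print(f"IC50 = {ic50:.2f} mM, Hill coefficient = {n_h:.2f}")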
Linalool (−)-Linalool is an enantiomeric monoterpene present in the essential oils of various aromatic plants, such as lavender, rosewood and bergamot [48], and possesses several pharmacological activities, including anti-inflammatory, anxiolytic, anticonvulsant and antinociceptive [49-53]. The effects of (−)-linalool, extracted from the essential oil of Ocimum basilicum L. (Lamiaceae) leaf, on orofacial nociception were addressed in formalin, glutamate and capsaicin tests and in an electrophysiological protocol, which involved the evaluation of the neuronal excitability of the hippocampal dentate gyrus. (−)-Linalool (50, 100 and 200 mg/kg, i.p.) administered to male Swiss mice effectively inhibited the nocifensive face-rubbing behavior in the first and second phases of the formalin test. At high doses, it also reduced nociceptive behavior in neurogenic inflammatory nociception induced by capsaicin and glutamate injection in the perinasal area (right upper lip) [53]. It is believed that these effects are related to a possible inhibition of substance P release or a blocking effect on its receptor neurokinin-1 (NK-1) [54]. In addition, the electrophysiological analysis revealed that (−)-linalool inhibited the field potentials activated by antidromic stimulation of the hilus, suggesting that this compound affects the activation of the voltage-dependent sodium channels present in the granular neurons of the hippocampal dentate gyrus [53,55]. Similar results were observed with the O. basilicum leaf essential oil, indicating that both the oil and (−)-linalool display modulatory action on neurogenic and inflammatory pain, and that the antinociceptive effect could be related to reduced peripheral and central nerve excitability [53]. The antinociceptive activity of (±)-linalool was evidenced in the paclitaxel-induced acute pain model in male ddY-strain mice. Intraplantar injection of (±)-linalool (5 and 10 µg/paw) effectively and dose-dependently suppressed the behavioral responses of paclitaxel-induced mechanical allodynia and hyperalgesia. (±)-Linalool injected into the ipsilateral paw produced antiallodynic and antihyperalgesic effects, whereas no such action was detected in the linalool-injected contralateral paw, suggesting that the effects exerted by this monoterpene may be mediated locally rather than systemically. Moreover, (±)-linalool's effects were reversed by local (paw plantar surface) administration of naloxone hydrochloride (an opioid antagonist) and by naloxone methiodide (a peripherally acting opioid receptor antagonist), indicating that (±)-linalool's peripheral antiallodynic and antihyperalgesic activities could partly involve peripheral opioid mechanisms [48]. Bergamot essential oil, extracted from Citrus bergamia (Risso, Rutaceae), is a rich source of linalool. The investigation of their effects on neuropathic hypersensitivity induced by partial sciatic nerve ligation (PSNL) in male ddY-strain mice showed that intraplantar injection of these components into the ipsilateral hindpaw decreased PSNL-induced mechanical allodynia dose-dependently, whereas no antinociceptive activity was observed after intraplantar injection into the contralateral hindpaw, further suggesting a local effect of linalool and of bergamot essential oil [56].
The possible involvement of spinal extracellular signal-regulated protein kinase (ERK) in the antinociception against mechanical stimuli induced by bergamot essential oil and linalool indicates that the observed effects entailed inhibition of spinal ERK phosphorylation, since intraplantar injection of bergamot essential oil or linalool effectively blocked the spinal ERK activation induced by PSNL [56]. Activation of ERK has been demonstrated in dorsal horn neurons in persistent CG- and Freund's adjuvant-induced inflammatory hyperalgesia [57,58]. Previous studies have shown that injection of capsaicin into the hindpaw produced ERK activation in the spinal cord, while blockade of spinal ERK1/2 activity via i.t. injection of the MEK inhibitor U0126 decreased nocifensive responses induced by formalin, capsaicin, CG or complete Freund's adjuvant [59-61]. Corroboration of the local action of bergamot essential oil and linalool was provided by Katsuyama and collaborators [62]. The nocifensive response to formalin (licking and biting) was considerably decreased in both phases of the formalin test following intraplantar administration of bergamot essential oil or linalool into the ipsilateral, but not the contralateral, hindpaw of male ddY-strain mice. These findings show the peripheral antinociceptive action of both compounds, which was antagonized by intraplantar and i.p. injection of naloxone hydrochloride and naloxone methiodide, and confirm previous reports suggesting the involvement of peripheral opioid receptors in the antinociception induced by bergamot essential oil and linalool [62]. In traditional Chinese medicine, frankincense from Boswellia carterii is commonly used for the topical treatment of pain and inflammation [63]. A study carried out to investigate the antinociceptive and anti-inflammatory action of frankincense oil and water extracts and three of their main components, i.e., linalool, α-pinene and 1-octanol, via xylene-induced ear edema and a formalin-inflamed hindpaw model in male Kunming mice, showed consistent evidence of their anti-inflammatory and analgesic effects. The frankincense oil extract, which contains more linalool, α-pinene and 1-octanol than the frankincense water extract, produced a faster and more effective reduction of swelling and pain than the water extract. In addition, the combination of linalool, α-pinene and 1-octanol exhibited a stronger biological effect on hindpaw inflammation and COX-2 overexpression than the three compounds used separately, indicating that they contribute to the topical antinociceptive and anti-inflammatory properties of frankincense by inhibiting COX-2 activation [64]. A study by Tashiro and collaborators [65] reported the antinociceptive effect of linalool in a different experimental protocol using vapour exposure, mediated by hypothalamic orexin neurons, one of the main mediators of behavioral responses to pain [66]. The involvement of these cells was evidenced by a significant increase in the number of c-Fos-expressing orexin neurons, and by the fact that linalool odour-exposed and odourless air-exposed orexin neuron-ablated mice exhibited similar pain behavior in the first and second phases of the formalin test. Confirmation of the contribution of orexinergic transmission was obtained in orexin peptide-deficient mice exposed to linalool vapour, in which linalool failed to evoke antinociceptive effects after formalin-induced insult, suggesting the participation of orexinergic transmission in the linalool odour-induced antinociceptive response.
Moreover, linalool odour exposure significantly decreased the pain response in both phases of the formalin test in wild-type (C57BL/6) mice and, in the hot plate test, increased the latency of hindpaw withdrawal following an injurious heat stimulus when compared with the odourless air control. In the investigation of the participation of olfactory processing in the analgesic effects of linalool against a chemical nociceptive stimulus (formalin test), pain behavior in olfactory bulbectomized mice under linalool vapour exposure did not differ markedly from the odourless air group in either phase of the test. In the anosmic model using mice with a nonfunctional olfactory epithelium, no effects of linalool vapour were observed, providing further evidence that the olfactory response produced by linalool vapour may play a key role in inducing analgesic effects [65]. Despite the biological properties of (−)-linalool, its use in the treatment of painful and inflammatory disorders is still limited due to poor oral bioavailability [67,68]. A comparative study using experimental pain models (i.e., acetic acid-induced writhing, formalin and hot plate) in male Swiss mice examined the antinociceptive effect of (−)-linalool and β-cyclodextrin (β-CD)-complexed (−)-linalool (20 or 40 mg/kg, p.o.). Both preparations effectively reduced the nocifensive response in all chemical and heat-induced tests, suggesting the involvement of peripheral and central antinociceptive mechanisms. In the writhing test, the antinociceptive effects were antagonized by naloxone, implying the involvement of the opioidergic neurotransmission pathway. (−)-Linalool and the (−)-linalool/β-CD complex also inhibited total leukocyte migration and TNF-α levels in peritoneal fluid in the CG-induced peritonitis protocol. However, the (−)-linalool/β-CD complex exhibited a stronger antinociceptive effect than (−)-linalool alone, indicating once again that cyclodextrin may become a relevant tool to improve the biological activity of water-insoluble monoterpenes [67]. Furthermore, the antinociceptive effect of (−)-linalool and β-CD-complexed (−)-linalool was demonstrated in an animal model of chronic noninflammatory muscle pain (a fibromyalgia animal model) [69], corroborating the findings of Quintans-Júnior and collaborators [67]. After exposure of male Swiss mice to (−)-linalool and the (−)-linalool/β-CD complex (25 mg/kg, p.o.), the animals were tested for mechanical hyperalgesia (von Frey), motor coordination (rotarod) and muscle strength (grip strength meter) for 27 days. Both preparations markedly suppressed mechanical hyperalgesia in the fibromyalgia model, an effect persisting for 24 h only in the β-cyclodextrin-complexed linalool group. Additionally, assessment of the central nervous system areas involved in the antihyperalgesic activity, by immunofluorescence labeling of Fos protein, showed that both preparations effectively activated neurons of the locus coeruleus, nucleus raphe magnus and periaqueductal gray, suggesting the participation of descending pain pathways in the improved antinociceptive effect of the (−)-linalool/β-CD complex [69].

Eugenol

Eugenol (2-methoxy-4-(2-propenyl)phenol) is a phenylpropanoid found as the main constituent of Eugenia aromatica (L.) Baill. (clove oil, Myrtaceae) [70,71], commonly used as an analgesic and anti-inflammatory in dental procedures, e.g., for pulpitis and dentinal hypersensitivity [71][72][73].
Other pharmacological properties of this compound include neuroprotective [74], anticonvulsant [75] and antipyretic [76] activities, and reduction of neuropathic [77] and orofacial pain [78]. The administration of eugenol (1-10 mg/kg, p.o.) has been reported to produce dose-dependent antinociceptive effects in male ICR mice (a strain of albino mice originating from the Institute of Cancer Research in the United States) tested in the acetic acid-induced writhing test, an effect that lasted for at least 30 min, and to inhibit nociceptive behavior in the second phase of the formalin test as well as the nocifensive response time (reduced licking, scratching and biting of the lumbar or caudal region) following intrathecal injection of substance P (a neuropeptide associated with inflammatory processes and pain) or glutamate. Intraperitoneal pretreatment with naloxone and yohimbine (an α2-adrenergic receptor antagonist) antagonized the analgesic effect of eugenol in the writhing test, whereas no such action was observed after pretreatment with methysergide (a 5-hydroxytryptamine (5-HT) serotonergic receptor antagonist) [73]. Bó and collaborators [71] provided more information about the mechanisms involved in the effect of eugenol on acute pain by indicating the participation of glutamatergic and TNF-α pathways. In the acetic acid writhing test, exposure to eugenol (3-300 mg/kg, p.o., 60 min or i.p., 30 min) suppressed 82 ± 10% and 90 ± 6% of the nociceptive response of male Swiss mice (ID50 values of 51.3 and 50.2 mg/kg, respectively) while, in the glutamate test, eugenol (0.3-100 mg/kg, i.p.) decreased the response behavior by 62 ± 5% (ID50 of 5.6 mg/kg), an effect that was reversed by naloxone. The administration of eugenol (10 mg/kg, i.p.) inhibited the nociception induced by intrathecal (i.t.) injection of glutamate (37 ± 9%), kainic acid (kainate) (41 ± 12%), α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) (55 ± 5%) and substance P (SP) (39 ± 8%), as well as the biting behavior induced by TNF-α (65 ± 8%). These findings give further evidence for the involvement of opioid receptors in the antinociceptive action of eugenol and suggest that the mechanism of action also includes the modulation of glutamatergic receptors (i.e., kainate and AMPA) and the inhibition of TNF-α. Eugenol is also the major constituent of the essential oil of Ocimum gratissimum L. (Lamiaceae), a plant popularly used in the treatment of painful diseases. In a model of neuropathic pain induced by chronic constriction of the sciatic nerve, oral exposure to eugenol and the monoterpene myrcene (5 or 10 mg/kg, for 14 days after surgery) produced antihypernociceptive effects in male C57BL/6J mice tested in mechanical (von Frey) and thermal (hot plate) tests [79]. In addition, the administration of eugenol (1, 5 and 10 mg/kg) markedly decreased IL-1β levels, whereas no significant effect was caused by treatment with myrcene. Similar antihypernociceptive activity was obtained with O. gratissimum essential oil (20 and 40 mg/kg) in the neuropathic pain models, providing evidence for the biological action that supports its popular use [79]. The evaluation of the antinociceptive effect of eugenol in a monoiodoacetate-induced osteoarthritis model revealed that daily administration of eugenol (40 mg/kg, p.o.)
to Sprague Dawley rats for four weeks significantly changed gait parameters (e.g., swing speed, swing phase duration and duty cycle) of the treated hindlimb and reduced secondary mechanical allodynia in the first and third weeks of treatment (measurement of the withdrawal threshold in response to von Frey filaments). Spinal pain-related peptide analysis showed reduced expression of substance P and calcitonin gene-related peptide (CGRP) and increased levels of dynorphin (an opioid peptide) in animals exposed to eugenol [80]. It is known that pharmacological inhibition of Transient Receptor Potential Vanilloid 1 (TRPV1) lowers the secretion of substance P [81] and, as eugenol acts in the central nervous system, this may account for the decrease in substance P content [77]. Furthermore, the decrease in CGRP could be related to reduced activity of the rat knee joint afferent fibres since, like substance P, CGRP is also present in these cells [80,82]. These results indicate an effective antinociceptive action of eugenol in attenuating osteoarthritis-related pain [80]. In an experimental procedure using a half-tongue model in humans, Klein and collaborators [83] demonstrated that eugenol and carvacrol induced temporally desensitizing patterns of oral irritation and increased innocuous warmth and noxious heat sensation on the tongue. The irritant sensation caused by both compounds was reduced during repeated applications at a 1 min interstimulus interval (self-desensitization), which lasted for at least 10 min. Cross-desensitization of capsaicin-evoked irritation was also observed. Eugenol and carvacrol elicited a significant increase in the magnitude of perceived innocuous warmth for at least 10 min, and briefly (less than 5 min) intensified heat pain at a 49 °C stimulus. It was suggested that the short-lived hyperalgesia after eugenol exposure may be associated with TRPV3-mediated enhancement of the thermal gating of TRPV1 present in lingual polymodal nociceptors [83]. Xixin (Asari Radix et Rhizoma) is a traditional herbal medicine used in China, Japan and Korea as a local anesthetic, in inflammatory diseases, and to relieve toothache and headache [84]. Methyl eugenol (4-allyl-1,2-dimethoxybenzene), a structural analogue of eugenol, is the main active component isolated from Xixin, known to possess antinociceptive, anesthetic, anticonvulsant, hypothermic and myorelaxant properties [75,85,86]. An in vitro study using a cDNA clone of the pNaEx8 plasmid encoding the Nav1.7 α subunit transiently expressed in Chinese hamster ovary (CHO) cells showed the inhibitory effect of methyl eugenol on Nav1.7 channels as a mechanism involved in its antinociceptive and anesthetic actions [86]. Methyl eugenol tonically suppressed peripheral nerve Nav1.7 currents in a dose- and voltage-dependent manner in the whole-cell patch-clamp method (IC50 of 295 µmol/L at a −100 mV holding potential). Functionally, methyl eugenol showed higher affinity for Nav1.7 channels in the inactivated and/or open state: in the presence of methyl eugenol, Nav1.7 channels presented decreased availability for activation in a steady-state inactivation protocol, strong use-dependent inhibition, increased binding kinetics, and slow recovery from inactivation compared with untreated channels. This suggests that the antinociceptive and anesthetic properties of methyl eugenol may be a consequence of its inhibitory effect on peripheral sodium channels [86].
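As a side note, IC50 values such as the 295 µmol/L reported above are typically obtained by fitting a Hill (concentration-response) curve to normalized current amplitudes. The following Python sketch illustrates the fitting step only; the data points are invented for demonstration and are not taken from [86].

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, n):
    # Fraction of current remaining at a given blocker concentration
    return 1.0 / (1.0 + (conc / ic50) ** n)

# Hypothetical blocker concentrations (umol/L) and normalized peak currents
conc = np.array([10, 30, 100, 300, 1000, 3000], dtype=float)
i_norm = np.array([0.97, 0.90, 0.75, 0.49, 0.22, 0.08])

(ic50, n_hill), _ = curve_fit(hill, conc, i_norm, p0=[300.0, 1.0])
print(f"Estimated IC50 = {ic50:.0f} umol/L, Hill coefficient = {n_hill:.2f}")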
In addition to methyl eugenol, another compound known as ortho-eugenol, a synthetic isomer of eugenol, has also been reported to produce antinociceptive and anti-inflammatory effects. In a study by Fonsêca and collaborators [87], the administration of ortho-eugenol (50, 75 and 100 mg/kg, i.p.) to male Swiss mice tested in the acetic acid writhing and glutamate tests caused a significant reduction in the number of writhes and in the licking time, respectively. The animals presented increased reaction time to the thermal stimulus in the hot plate test, but treatment with yohimbine antagonized the antinociceptive activity, indicating a possible mechanism of action involving the adrenergic system. The anti-inflammatory action of ortho-eugenol was evidenced in inflammatory protocols, including acetic acid-induced peritoneal permeability and CG-induced peritonitis, which showed a suppressive effect on vascular permeability and leukocyte migration and a subsequent reduction of TNF-α and IL-1β due to inhibition of the phosphorylated forms of NF-κB and p38 observed in the peritonitis test.

Menthol

The central mechanisms of the analgesia induced by menthol were investigated by Pan and collaborators [94] in an in vitro assay using primary cultures of spinal cord superficial dorsal horn neurons obtained from 2- to 4-day-old CD-1 mouse pups, and in vivo using male CD-1 mice (50 and 100 mg/kg, i.p.). The results showed that exposure to menthol caused dose-dependent reduction of ipsilateral and contralateral pain hypersensitivity induced by complete Freund's adjuvant; reduction of nociceptive behavior in both phases of the formalin test; dose-dependent and Cl−-mediated generation of inward and outward currents in cultured dorsal horn neurons, indicating the activation of γ-aminobutyric acid type A receptors; blockage of voltage-gated sodium channels and voltage-gated calcium channels in a voltage-, state- and use-dependent manner; decrease in repetitive firing and action potential amplitude; reduced neuronal excitability; and interruption of spontaneous synaptic transmission of cultured superficial dorsal horn neurons. Among the peripheral mechanisms involved in the effects displayed by menthol, activation of transient receptor potential cation channel subfamily M member 8 (TRPM8) and TRPA1 channels is believed to play an important role in menthol-induced cold hyperalgesia [97,98]. In fact, previous studies have shown that menthol reversed cinnamaldehyde-induced heat hyperalgesia, an effect that may have been a consequence of TRPA1 blockage by this compound [99,100]. In a study by Roberts and collaborators [46], the sensory effects and interactions of the topically applied TRP agonists menthol (TRPM8), capsaicin (TRPV1) and cinnamaldehyde (TRPA1) on the skin of 14 healthy humans were assessed through changes in thermal sensitivity and contact heat-evoked potentials (CHEP). The application of menthol provoked cold hypersensitivity, while cinnamaldehyde and capsaicin produced heat hyperalgesia. Furthermore, menthol and cinnamaldehyde did not exert any effect on evoked potentials, but the amplitude of CHEP and evoked pain ratings were negatively correlated after capsaicin exposure. Other studies revealed the participation of TRPM8 as the main mediator of menthol- (or L-arginine-) induced analgesia of acute and inflammatory pain [88,101]. For instance, L-arginine (10 and 20 mg/kg, i.p.)
significantly reduced pain behavior in the hot plate, acetic acid writhing and tail flick tests and in an inflammatory (complete Freund's adjuvant) test. It effectively inhibited the nocifensive behavior (licking, flinching or biting) caused by specific pharmacological activation of TRPV1 when administered in combination with capsaicin (capsaicin, 5 nmol; menthol, 50 nmol; intraplantar injection) in wild-type mice, but not in Trpm8−/− mice. Similar results were obtained following exposure of wild-type and Trpm8−/− mice to L-arginine (50 nmol) coinjected with acrolein (25 nmol; intraplantar), an agonist of TRPA1 in sensory nerves. More importantly, the antinociceptive activity of L-arginine was completely eliminated by genetic deletion of TRPM8 and was likewise abolished in mice treated with AMG2850, a selective TRPM8 inhibitor. The selective activation of TRPM8 with WS-12 (a menthol derivative and specific TRPM8 agonist) in cultured sensory neurons and in vivo also produced TRPM8-dependent antinociception in acute and inflammatory pain. The effects of L-arginine and WS-12 were counteracted by naloxone, indicating the participation of endogenous opioid-dependent analgesia pathways [88]. The effects of menthol on human embryonic kidney-derived 293T cells expressing human (h)TRPV1, and of capsaicin on hTRPM8, were addressed in vitro: the activation of hTRPV1 currents by heat and capsaicin was suppressed by menthol, whereas the activation of hTRPM8 currents was suppressed by capsaicin [101]. Moreover, an in vivo sensory irritation test carried out in Japanese males (20-30 years old; n = 10) demonstrated that menthol exhibited an antinociceptive effect on the sensory irritation caused by a capsaicin analogue. These results suggest an interaction between TRPV1 and TRPM8 agonists and both of these channels and, since TRPM8 is not usually coexpressed with TRPV1 in primary afferent neurons [102,103], it is possible that the information transmitted by TRPM8- and TRPV1-expressing neurons could affect each other. Therefore, it is believed that menthol-induced, TRPM8-mediated cold sensation could be enhanced by the inhibition of TRPV1 and that capsaicin-induced, TRPV1-mediated heat sensation could be increased by the inhibition of TRPM8 [101].

(−)-α-Bisabolol

In the inflammatory models of dextran- and CG-induced paw edema, edema formation after intraplantar injection of these agents was effectively suppressed by (−)-α-bisabolol (100 and 200 mg/kg, p.o.) pretreatment in male Swiss mice. (−)-α-Bisabolol (100 and 200 mg/kg) significantly inhibited myeloperoxidase (MPO) activity and decreased TNF-α levels in the peritoneal fluid of rats with induced peritonitis. This result indicates that (−)-α-bisabolol blocks neutrophil migration to the peritoneal cavity, as indicated by MPO, a marker of the presence of these cells [115]. Moreover, the edematogenic response triggered by croton oil, arachidonic acid and phenol in the mouse ear edema model was significantly decreased by topically applied (−)-α-bisabolol (0.7 and 1.4 mg/ear); however, it did not effectively reduce edema induced by capsaicin, indicating that the mechanism of action of (−)-α-bisabolol does not involve activation of the TRPV1 receptor [114]. The antinociceptive action of (−)-α-bisabolol (50, 100 and 200 mg/kg, i.p.) was also reported by Leite and collaborators [115] in visceral nociceptive models induced by acetic acid, formalin, capsaicin and mustard oil.
To investigate the mechanisms involved in its effects, male Swiss mice were treated with N(G)-nitro-L-arginine methyl ester (L-NAME), yohimbine, glibenclamide, ondansetron or ruthenium red prior to exposure to (−)-α-bisabolol (50 mg/kg) in the acute model of visceral nociception induced by intracolonic instillation of mustard oil, in order to assess the participation of nitrergic, noradrenergic, KATP channel, 5-HT3 and TRPV1 pathways in the effect of this compound. Treatment with (−)-α-bisabolol and ruthenium red (a non-competitive antagonist of TRPV1) in combination resulted in an additive antinociceptive effect, while the results obtained with L-NAME (a NO synthase inhibitor) and ondansetron (a 5-HT3 antagonist) were inconclusive. Yohimbine (an α2-adrenoceptor antagonist) failed to antagonize the action of (−)-α-bisabolol, indicating that the α2 adrenoceptor is not involved in the attenuation of visceral nociception. In addition, (−)-α-bisabolol (50 mg/kg) did not suppress capsaicin-induced visceral nociception, corroborating previous findings showing that this compound does not act as a TRPV1 agonist [114]. Using imaging, electrophysiological and biochemical methods, Nurulain and collaborators [116] showed that (−)-α-bisabolol reversibly and dose-dependently suppressed α7-nicotinic acetylcholine receptor (α7-nAChR)-mediated currents in oocytes of Xenopus laevis. These receptors are found in the peripheral and central nervous systems and are characterized by rapid desensitization and high permeability to calcium [117]. The results revealed that the suppressive effect of (−)-α-bisabolol (IC50 = 3.1 µM) was not altered after injection of the calcium chelator BAPTA or perfusion with a calcium-free solution containing barium, which is indicative of the non-involvement of endogenous calcium-dependent Cl− channels in the activities of (−)-α-bisabolol. Furthermore, the effect of (−)-α-bisabolol on α7-nAChRs was investigated in CA1 stratum radiatum interneurons of rat hippocampal slices (whole-cell patch-clamp method) and was shown to consist of an inhibitory action on choline-induced currents in these interneurons [116,117]. Stachys lavandulifolia Vahl (Lamiaceae) is a plant used in Turkish and Iranian folk medicine as an analgesic and anti-inflammatory [118]. A study by Barreto and collaborators [118,119] provided information about the antinociceptive and anti-inflammatory effects displayed by the main compound of S. lavandulifolia essential oil, i.e., (−)-α-bisabolol, in models of orofacial nociception. Male Swiss mice exhibited reduced face-rubbing behavior after exposure to (−)-α-bisabolol (50 mg/kg, p.o., first phase; 25 and 50 mg/kg, p.o., second phase) in the formalin test. Further analysis of the pharmacological profile showed an effective inhibitory effect (25 and 50 mg/kg) on the nociceptive response in animals tested in the capsaicin- and glutamate-induced orofacial pain tests. The data from the pain models indicated a stronger effect of (−)-α-bisabolol in comparison with S. lavandulifolia essential oil. In CG-induced pleurisy, (−)-α-bisabolol exhibited a significant anti-inflammatory activity, possibly related to the significant decrease in the level of TNF-α in the pleural inflammatory exudate. No considerable alteration was observed in the level of IL-1β whereas, in contrast, S. lavandulifolia essential oil markedly reduced both pro-inflammatory cytokines. These findings support the folk use of S.
lavandulifolia and relate its antinociceptive and anti-inflammatory actions to (−)-α-bisabolol [119].

Cinnamaldehyde

Cinnamaldehyde (CIN) is a naturally occurring phenylpropanoid and has been described as the most important component present in the volatile oil of different cinnamon species [120]. This component contributes to the fragrance and to several biological properties observed in Cinnamomum species, including antioxidant, antipyretic, antimicrobial and anti-inflammatory activities [121][122][123][124]. For example, Cinnamomum zeylanicum essential oil exhibited antinociceptive properties in acute and chronic pain in mice, and CIN seems to be involved in the antinociceptive effect of C. zeylanicum [124]. In 2011, Roberts and collaborators [46] investigated the sensory effects, with emphasis on thermal sensitivity, of CIN together with capsaicin (CAP) and menthol (MEN) in a human experimental pain model. Fourteen healthy human participants received, topically on the skin, an unguent containing one of three concentrations of CIN (1%, 5% and 10%), CAP (0.075%, 1% and 3%) or MEN (2.5%, 5% and 10%). Topical CAP caused marked heat hyperalgesia. MEN provoked a cooling sensation, whereas CIN caused heat hypersensitivity at all tested concentrations. However, CIN and MEN had no effect on evoked potentials. Further, the intensity of CIN-induced heat hyperalgesia was amplified by the secondary compound CAP, indicating an additive effect [46]. Another study investigated the antinociceptive effect of CIN in both peripheral and central pain models (acetic acid-induced writhing and Eddy's hot plate methods, respectively), as well as its combination with two standard drugs (diclofenac sodium and pentazocine), in mice. CIN reduced nociception in a dose-dependent manner, decreasing the number of writhes by 54% and 81% at doses of 100 and 200 mg/kg, respectively. A significant reduction in writhing (84.43%) was also observed when CIN (100 mg/kg) was co-administered with diclofenac sodium (2.5 mg/kg). In Eddy's hot plate method, CIN exhibited hyperalgesic behavior when given alone and decreased the antinociceptive effect of pentazocine in the combination group (CIN 100 mg/kg + pentazocine 2.5 mg/kg). These data demonstrate that CIN considerably enhanced the antinociceptive effect of diclofenac sodium while inhibiting the antinociceptive action of pentazocine [125].

Citronellal

Citronellal (CTAL), also named rhodinal, is an acyclic monoterpenoid aldehyde known for its capacity to repel insects [126]. CTAL is one of the main compounds responsible for the lemon scent of many plants of the Cymbopogon genus (Poaceae), especially the species C. nardus [127,128], C. winterianus [127] and C. citratus [128]. In a previous study, C. winterianus essential oil demonstrated noteworthy antinociceptive, anti-inflammatory and antioxidant properties, and the monoterpene CTAL seems to be involved in these effects [129,130]. For this reason, the antinociceptive effect of CTAL (50, 100 or 200 mg/kg, i.p.) was investigated in three experimental nociception models: the formalin test, the capsaicin test and glutamate-induced nociception. CTAL, at all tested doses, caused a dose-dependent reduction in pain-related behaviors during both phases of the formalin test, and this effect was naloxone-sensitive.
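Percent-inhibition figures like those quoted above (e.g., the 54% and 81% reductions in writhes reported for CIN) are simple ratios of group means. The short Python sketch below shows the computation; the raw counts are hypothetical, chosen only to reproduce the quoted percentages, and are not taken from any cited study.

def percent_inhibition(control_mean: float, treated_mean: float) -> float:
    # Percent reduction of the nociceptive response relative to control
    return 100.0 * (control_mean - treated_mean) / control_mean

control_writhes = 50.0             # hypothetical mean writhes, vehicle group
treated_writhes = {100: 23.0,      # dose (mg/kg) -> hypothetical mean writhes
                   200: 9.5}

for dose, writhes in treated_writhes.items():
    pct = percent_inhibition(control_writhes, writhes)
    print(f"{dose} mg/kg: {pct:.0f}% inhibition")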
In the same study, CTAL also significantly decreased the face-rubbing behavior induced by administration of capsaicin or glutamate, suggesting that CTAL possesses antinociceptive action [130]. Another study examined the effect of CTAL on inflammatory nociception induced by different stimuli in mice, as well as the involvement of the NO-cGMP-ATP-sensitive K+ channel pathway. CTAL (25, 50 or 100 mg/kg, i.p.) produced a significant reduction of the mechanical nociception induced by tumor necrosis factor α (TNF-α) and carrageenan at all studied doses. This monoterpene also significantly decreased mechanical nociception in the dopamine (DA) test at doses of 25 and 100 mg/kg, and in the prostaglandin E2 (PGE2) test only at the highest dose (100 mg/kg). Interestingly, pretreatment with L-NAME or glibenclamide reversed the antinociceptive effect of CTAL (100 mg/kg) on PGE2-induced mechanical nociception, suggesting that CTAL inhibits mechanical nociception through the NO-cGMP-ATP-sensitive K+ channel pathway. Taken together, these results show the potential of CTAL for the treatment of pain [131].

Citronellol

Citronellol (CTOL), or dihydrogeraniol, is a natural monoterpene alcohol found in the essential oils of various aromatic plant species [132,133]. CTOL exists in nature as two enantiomers, designated R-(+) and S-(−). R-(+)-CTOL is widely found in citronella oils, such as that of Cymbopogon winterianus, and is the more common isomer [134]. S-(−)-CTOL, on the other hand, is commonly found in rose [135] and geranium oils [136]. Different essential oils containing this monoterpene have been described in the literature as possessing antinociceptive and anti-inflammatory effects, including those of C. winterianus, C. citratus and Pelargonium graveolens [130,137,138]. With this background, Brito and collaborators [139] evaluated the antinociceptive and anti-inflammatory activities of CTOL in mice using different experimental models of pain and inflammation: acetic acid-induced abdominal constrictions, formalin-induced nociception, the hot plate test and carrageenan-induced pleurisy. CTOL, at all tested doses (25, 50 or 100 mg/kg, i.p.), reduced the total number of writhes in the acetic acid-induced abdominal constriction test. This monoterpene also decreased paw licking times during both the early and late phases of the formalin test at doses of 25, 50 and 100 mg/kg. In the hot plate test, CTOL caused a marked increase in the latency time of the animals only at the highest dose. Finally, pretreatment with CTOL was also capable of reducing, in a dose-dependent fashion, both neutrophil infiltration and the levels of TNF-α in the exudates from carrageenan-induced pleurisy. These data indicate that CTOL exhibits interesting antinociceptive and anti-inflammatory effects, and its mechanism of action probably involves inhibition of peripheral mediators as well as central inhibitory mechanisms [139]. In a follow-up study, Brito and collaborators [140] evaluated the antinociceptive effects of CTOL on orofacial nociception in mice, as well as a possible central nervous system (CNS) involvement. Pretreatment with CTOL at doses of 25, 50 and 100 mg/kg (i.p.) reduced nociceptive behavior in both phases of the formalin test and in the capsaicin test. Similarly, CTOL, at all assayed doses, decreased nociceptive face-rubbing behavior in the glutamate-induced orofacial nociception model.
Additionally, to investigate the action of CTOL on the CNS, an immunofluorescence protocol for Fos protein was performed. The results revealed that CTOL induced a significant increase in the average number of labeled neurons in the piriform and retrosplenial cortex, olfactory bulb and periaqueductal grey. Taken together, these data suggest that CTOL decreases orofacial nociceptive behavior and that this effect involves, at least in part, the activation of CNS regions, mainly the periaqueductal grey and retrosplenial cortex [140]. Another study investigated the antihyperalgesic effect of CTOL in mice using several experimental models of hyperalgesia. Mechanical hyperalgesia was induced by four hyperalgesic agents: carrageenan (CG), TNF-α, PGE2 or dopamine. Pretreatment with CTOL, at all tested doses (25, 50 or 100 mg/kg, i.p.), attenuated the mechanical hyperalgesia induced by CG, TNF-α, PGE2 and DA in the acute models of inflammatory nociception, and also reduced edema formation. Additionally, the involvement of spinal cord lamina I in this antihyperalgesic effect was evaluated. The immunofluorescence protocol showed that CTOL significantly decreased the average number of neurons expressing Fos protein, indicating that the action of CTOL on mechanical hyperalgesia occurs, at least in part, via inhibition of spinal cord lamina I [141].

Citronellyl Acetate

Citronellyl acetate (CAT), known for its pleasant smell, belongs to the family of fatty alcohol esters and is frequently used as a flavor and fragrance agent [142]. CAT is present mostly in Eucalyptus citriodora [143], but is also found in minor quantities in the volatile extract from the dried pericarp of Zanthoxylum schinifolium [144]. Since there are few studies investigating the biological potential of this monoterpene, and E. citriodora essential oil possesses antinociceptive and anti-inflammatory effects [145], Rios and collaborators [142] investigated the antinociceptive effect of CAT in both physically and chemically induced acute pain models, as well as the possible antinociceptive mechanisms involved. CAT (25, 50, 75, 100 or 200 mg/kg, i.g.), at the two highest doses, caused a significant reduction of acetic acid-induced abdominal constrictions in mice. In the formalin test, CAT (100 or 200 mg/kg, p.o.) reduced nociceptive behavior in both the early and late phases. Similarly, in the glutamate test, CAT decreased nociceptive behavior after pretreatment at doses of 100 and 200 mg/kg (p.o.). Regarding the mechanism of action, the results showed that protein kinase C (PKC) and protein kinase A (PKA), transient receptor potential vanilloid 1 (TRPV1), TRPM8, acid-sensing ion channel (ASIC) and glutamate receptors are, at least in part, involved in the antinociceptive effect of CAT.

α-Terpineol

α-Terpineol (α-TPN) is a monoterpene alcohol that has been isolated from a variety of natural sources, such as the essential oils of Melaleuca leucadendra [157], Citrus aurantium [158] and Nepeta dschuparensis [159]. There are three isomers, α-, β- and γ-TPN, the latter two differing only in the location of the double bond. Quintans-Júnior and collaborators [160] studied the antinociceptive action of α-TPN using heat-induced (hot plate test) and chemical-induced (acetic acid, formalin, glutamate and capsaicin) nociception models in mice at different doses (25, 50 or 100 mg/kg, i.p.).
α-TPN, at all tested doses, reduced nociceptive behavior in the early and late phases of paw licking and reduced the writhing reflex in mice (formalin and writhing tests, respectively). In the glutamate and capsaicin tests, α-TPN also markedly reduced the nociceptive response, and this antinociceptive effect was dose-related in the capsaicin-induced nociception test. Finally, α-TPN significantly increased the latency time in the hot plate test (only at the highest dose). Taken together, these results demonstrate the potential antinociceptive properties of α-TPN [160].

Vanillin

Vanillin (VAN), or 4-hydroxy-3-methoxybenzaldehyde, is a phenolic aldehyde with the molecular formula C8H8O3. This organic compound contains three highly reactive functional groups in its structure: aldehyde, phenol and ether [161]. VAN is one of the primary chemical constituents extracted from the seedpods of Vanilla planifolia, a monocotyledonous orchid native to Central America, and is broadly employed as a flavoring agent in foods, cosmetics, beverages and pharmaceuticals. Synthetic vanillin is commonly used instead of natural vanilla, since natural vanilla extract is expensive and in high demand [161,162]. VAN is known to have several biological activities, including antimutagenic [163], antidepressant [162], antioxidant, hepatoprotective [164] and antitumor [165] effects. Further, the antinociceptive potential of VAN has been demonstrated in acetic acid-induced visceral inflammatory pain models [166]. With this background, Rathnakar and collaborators [167] investigated the antinociceptive effect of VAN using Eddy's hot plate method. Pretreatment with VAN produced a significant increase in the latency period at both tested doses (10 or 100 mg/kg). As this experimental model is employed to evaluate central pain, the results indicate a potential central antinociceptive activity, probably mediated via opioid receptors [167]. In 2012, Srikanth and collaborators [168] examined the effect of VAN on acute inflammation induced by the phlogistic agent carrageenan in rats. These authors observed that pretreatment with VAN (10, 100 or 200 mg/kg, p.o.) significantly reduced carrageenan-induced rat paw edema at the 2nd, 3rd and 4th hours, and only at the higher doses (100 and 200 mg/kg). Further, there were no significant differences between the antioedematogenic effects observed at doses of 100 and 200 mg/kg [169]. Another study evaluated the antinociceptive and anti-inflammatory effects of VAN in the tail flick method and the carrageenan-induced rat paw edema model, respectively. Pretreatment with VAN (50 or 100 mg/kg) produced a significant inhibition of pain, suggesting that the antinociception in the mouse tail flick test is probably mediated at the level of the spinal cord. In the carrageenan-induced paw edema test, VAN caused a significant reduction in paw volume at doses of 50 and 100 mg/kg, indicating an anti-inflammatory action [169].

Borneol

Borneol (BOR) belongs to the family of bicyclic monoterpene alcohols and is found in the essential oils of several medicinal plants, such as Lavandula officinalis, Matricaria chamomilla and Valeriana officinalis [170,171]. There are three different isomers of BOR: D-(+)-BOR, L-(−)-BOR and isoborneol. Natural BOR contains 98% (+)-BOR. (+)-BOR is broadly employed in food and is also used in analgesic and anesthetic preparations in traditional Chinese and Japanese medicine [172].
Recent studies have reported that this monoterpenoid possesses a variety of pharmacological effects, including anti-inflammatory [173], vasorelaxant [174] and neuroprotective [175] activities. Until now, little has been known about the specific role of BOR in pharmaceutical preparations used to treat painful and inflammatory conditions. With this background, Almeida and collaborators [175] evaluated the antinociceptive and anti-inflammatory activities of BOR, measuring nociception and inflammation in five experimental models in rodents: acetic acid-induced abdominal writhing, formalin-induced nociception, the hot plate test, the grip strength test and carrageenan-induced peritonitis. BOR (5, 25 or 50 mg/kg, i.p.) prevented visceral pain in the acetic acid-induced abdominal writhing test at all tested doses. This monoterpene also reduced nociceptive behavior in both the early and late phases of the formalin test at doses of 5, 25 and 50 mg/kg. In the hot plate test, BOR caused a marked increase in the latency time (only at the highest dose). Additionally, BOR did not cause any significant motor performance alteration in the grip strength meter test. Finally, pretreatment with BOR (5, 25 or 50 mg/kg, i.p.) decreased leukocyte migration to the peritoneal cavity in the carrageenan-induced peritonitis model. These findings indicate that BOR exhibits significant central and peripheral antinociceptive effects as well as anti-inflammatory activity, without producing motor deficits [175]. Another study investigated the antihyperalgesic activity of BOR in neuropathic and inflammatory pain in different animal models, as well as its possible mechanisms of action. BOR (125, 250 or 500 mg/kg, p.o. or i.t.) decreased mechanical hypersensitivity in both the segmental spinal nerve ligation-induced neuropathic pain (SNL) and complete Freund's adjuvant-induced chronic inflammatory pain (CFA) models in a dose-dependent manner. Further, the antihyperalgesic action of this monoterpene in both the SNL and CFA models was totally reversed by the convulsant alkaloid bicuculline, a selective γ-aminobutyric acid A receptor [GABA(A)R] antagonist. This result suggests that BOR attenuates mechanical hyperalgesia through activation of GABAergic transmission in the spinal cord, making it a potential candidate for treating chronic pain [176].

Myrtenol

Myrtenol (MYR) belongs to the family of bicyclic monoterpene alcohols. This chiral alcohol contains two stereogenic centres and is present in the volatile oils of various aromatic species, including Aralia cachemirica [177] and Tanacetum vulgare [178]. MYR can also be obtained through the selective oxidation of α-pinene [179], and has been reported in the scientific literature for its bioactivity [180][181][182]. Considering that this monoterpene presents important biological properties and great therapeutic potential, its antinociceptive and anti-inflammatory activities were evaluated in mice using classical models of nociception (acetic acid-induced writhing, the hot plate test and paw licking induced by formalin, glutamate and capsaicin) and inflammation (paw edema induced by different agents, carrageenan-induced peritonitis, myeloperoxidase levels and cytokine measurement). Pretreatment with MYR (25-75 mg/kg, i.p.)
effectively inhibited acetic acid-induced nociception; decreased paw licking time after injection of the phlogistic agents glutamate, capsaicin and formalin (in the latter, only in the second phase); and did not change the latency reaction time in the hot plate test. In addition, MYR inhibited paw edema induced by carrageenan, histamine, serotonin, compound 48/80 and PGE2, and also decreased the cell counts, myeloperoxidase activity and cytokine levels in the peritoneal cavity following carrageenan injection. These results suggest that MYR attenuates nociceptive and inflammatory responses by inhibiting cell migration and also the signalling pathways of receptors involved in pain transmission [183].

Pulegone

Pulegone (PUL) is a naturally occurring organic compound present in the essential oils of several members of the mint family (Lamiaceae), such as Minthostachys spicata [184], Mentha longifolia [185] and M. pulegium [186]. In nature, PUL occurs in both (+)- and (−)-forms and is classified as a monoterpene ketone [4]. Further, this monoterpenoid has been recognized as being responsible for most of the pharmacological effects described for the species M. longifolia [185]. In 2011, De Sousa and collaborators [3] investigated the antinociceptive potential of PUL in chemical (formalin test) and thermal (hot plate test) models of nociception. PUL (31.3, 62.5 and 125 mg/kg, i.p.) dose-dependently inhibited both phases of the formalin test, and this effect was not blocked by the opioid antagonist naloxone. In the hot plate test, PUL significantly increased the latency reaction time of mice at all tested doses (31.3, 62.5 or 125 mg/kg), confirming that this monoterpene ketone has a central antinociceptive effect [3].

Citral

Citral (CIT) is a mixture of two isomers, the cis-isomer neral and the trans-isomer geranial, and has been described as the most important member of the open-chain monoterpenoids. This monoterpene is found in the volatile oils of several aromatic herbs, such as Cymbopogon citratus, a species commonly known as lemongrass [28,187]. Lemongrass tea possesses various biological properties described in the literature, such as anti-inflammatory, antioxidant, anxiolytic, cytotoxic and antinociceptive activities [188]. The antinociceptive action of CIT was demonstrated in mice submitted to different experimental models of acute and chronic nociception. Pretreatment with CIT (25, 100 or 300 mg/kg, p.o.) inhibited formalin-induced licking in both the neurogenic and inflammatory phases (inhibition of 54% and 65% at 300 mg/kg, respectively); prevented and reduced mechanical hyperalgesia without producing any significant motor dysfunction, with a maximum effect at a dose of 100 mg/kg; inhibited the nociceptive response (CIT 100 mg/kg) induced by glutamate (inhibition of 49%) and phorbol 12-myristate 13-acetate (PMA; inhibition of 54%); and markedly attenuated the pain response (CIT 100 mg/kg) induced by N-methyl-D-aspartic acid (NMDA; inhibition of 54%), trans-1-amino-1,3-dicarboxycyclopentane (ACPD; inhibition of 77%), substance P (inhibition of 42%) or the cytokine TNF-α (inhibition of 72%). The antinociception produced by CIT (100 mg/kg) appeared to involve significant activation of serotonergic systems (via the 5-HT2A receptor). Together, these results display the potential of CIT for the treatment of inflammatory and neuropathic pain [189].

Thymol

Thymol (THY) is a natural monoterpene phenol, a derivative of cymene and isomeric with carvacrol [190].
This monoterpene is found mainly in thyme (Thymus vulgaris) essential oil (approximately 47%) [191,192]. THY presents various biological properties, including an antinociceptive effect [191] and inhibition of the inflammatory response [192]. It is also known that THY inhibits nerve conduction [193], but there were no studies on how this monoterpenoid influences synaptic transmission. For this reason, Xu and collaborators [194] investigated the effect of THY on spontaneous excitatory transmission by applying the whole-cell patch-clamp technique to substantia gelatinosa (SG) neurons of adult rat spinal cord slices, aiming to understand how THY modulates synaptic transmission, with an emphasis on transient receptor potential (TRP) channel activation. It was found that THY increased the frequency of spontaneous excitatory postsynaptic currents, a measure of the spontaneous release of L-glutamate onto SG neurons, by activating TRPA1 channels, while producing a membrane hyperpolarization without TRP activation in SG neurons [194].

Limonene

Limonene (LIM) is a colorless liquid hydrocarbon belonging to the family of cyclic monoterpenes. There are two isomers, D- and L-LIM, and the more common D-isomer possesses a strong orange smell. LIM is the major chemical component of citrus oils [195], but is also found in other aromatic plant species, including Lippia alba [196] and Artemisia dracunculus [197]. It has been reported that LIM has anti-inflammatory properties, inhibiting lipopolysaccharide (LPS)-induced production of nitric oxide, PGE2 and pro-inflammatory cytokines in RAW 264.7 cells [198]. For this reason, Kaimoto and collaborators [199] investigated the effects of LIM on mouse sensory neurons and heterologously expressed mouse TRP channels in vitro, as well as its nociceptive effects in vivo. The results showed that, when topically applied, LIM directly stimulated primary sensory neurons to provoke acute pain through activation of the TRPA1 channel. In addition, its systemic application reduced nociceptive behaviors linked to H2O2-induced TRPA1 activation, an effect related to inflammatory pain [199].

Nerol

Nerol (NER) belongs to the family of acyclic monoterpene alcohols and was originally isolated from neroli oil, hence its name. NER is found in many essential oils, such as those of Agastache mexicana [200] and Citrus aurantium [201]. In a previous study, González-Ramírez and collaborators [202] reported the antinociceptive effect of a hexane extract from A. mexicana aerial parts in the acetic acid-induced writhing model in rodents. As NER has been reported to be abundant in this species [203], the authors suggested that this monoterpene is partially responsible for the antinociceptive and anti-inflammatory activities of A. mexicana [202]. With this background, González-Ramírez and collaborators [203] evaluated the influence of NER on the emergence of pathological markers and hyperalgesia in oxazolone-induced colitis, as well as whether this monoterpene protects against gastric damage induced by ethanol. Pretreatment with NER (30-300 mg/kg, p.o.) significantly alleviated the pathological markers observed in the oxazolone-induced colitis model (accelerated body weight gain, amelioration of macroscopic damage, decreased myeloperoxidase activity, and reduced inflammatory parameters such as the disease activity index and intestinal tissue damage).
It was also observed that NER (30 mg/kg) exhibited an antinociceptive effect and led to a significant reduction in the expression of some pro-inflammatory cytokines, such as IL-13 and TNF-α. Further, NER was effective in protecting the gastric mucosa against ethanol-induced damage, starting at a dose of 10 mg/kg (p.o.). These findings give evidence of the therapeutic potential of NER for the treatment of important gastrointestinal tract disorders, such as ulcerative colitis and gastric ulcers [203].

Anethole

Anethole (ANT), or trans-anethole, is an organic compound frequently used as a flavoring substance. It belongs to the family of phenylpropanoids (C6-C3), a class of aromatic compounds that occurs widely in essential oils [204,205]. ANT is the main constituent of many essential oils [206], such as those of Illicium verum [207] and Pimpinella anisum [208], and seems to play a key role in the biological effects attributed to these oils. Previous studies have shown that ANT exhibits antioxidant [209], anti-inflammatory [210] and anesthetic [211] activities. With this background, Ritter and collaborators [212] examined the effects of ANT on carrageenan-induced acute inflammation and on persistent inflammation induced by complete Freund's adjuvant, two pain models of inflammatory origin. Pretreatment with ANT (125, 250 and 500 mg/kg, p.o.) provoked a marked reduction of mouse paw edema at doses of 250 and 500 mg/kg. Similarly, ANT also significantly decreased the hypernociceptive response induced by carrageenan at doses of 250 and 500 mg/kg, but was not able to alter PGE2-induced mechanical hypernociception. Further, this phenylpropanoid reduced the levels of some cytokines (TNF-α, IL-1β and IL-17) at doses of 250 and 500 mg/kg, and inhibited myeloperoxidase activity at all tested doses. Taken together, these findings show that ANT exhibits antioedematogenic and antihypernociceptive effects. In another study, Ritter and collaborators [213] examined the antinociceptive activity of ANT in five experimental models of nociception: acetic acid-induced writhing, the formalin test, complete Freund's adjuvant-induced pain (CFA), the hot plate test and the glutamate test. ANT reduced the total number of writhes in the abdominal constriction model at all assayed doses (62.5, 125, 250 or 500 mg/kg, p.o.). This phenylpropanoid also decreased paw licking times during the second phase of the formalin test, but only at the higher doses (125 and 250 mg/kg), and did not affect the nociceptive response in the first phase. Further, pretreatment with ANT significantly reduced paw edema in the glutamate test (62.5, 125 and 250 mg/kg) and decreased peripheral nociception induced by CFA (250 mg/kg). On the other hand, ANT, at the different doses, did not alter the latency time in the hot plate test, indicating that ANT exhibits no central effect. These data demonstrate that ANT possesses peripheral antinociceptive action and that this antinociception occurs, at least in part, by decreasing the synthesis or release of inflammatory mediators [213].

Nerolidol

Nerolidol (NROL), also known as peruviol, is a naturally occurring sesquiterpene alcohol present in various plants with a floral odor. NROL is the allylic isomer of farnesol (FAR) and exists as two geometric isomers, a trans and a cis form, differing only in the geometry about the central double bond [214]. This sesquiterpenoid is found in the volatile oils of many aromatic species, including Canarium schweinfurthii [215] and Baccharis dracunculifolia [216].
With this background, Fonsêca and collaborators [217] investigated the antinociceptive and anti-inflammatory activities of NROL, as well as its possible mechanisms of action, in different experimental mouse models of pain and inflammation. In the acetic acid-induced writhing test, NROL (200, 300 or 400 mg/kg, p.o.) exhibited a significant antinociceptive effect at all tested doses. In the formalin test, NROL (300 or 400 mg/kg) reduced nociceptive behavior in both the first and second phases. On the other hand, this sesquiterpene did not increase latency at any of the observed time points in the hot plate test, suggesting an antinociceptive action in chemical nociception models (the acetic acid-induced writhing and formalin tests) but not in the thermal nociception model (the hot plate test). Further, pretreatment with NROL decreased carrageenan-induced paw edema at doses of 200, 300 and 400 mg/kg, and inhibited the production or action of the pro-inflammatory cytokines TNF-α and IL-1β. Regarding the mechanism of action, the antinociceptive activity of NROL involves the GABAergic system, but not the opioidergic system or ATP-sensitive potassium (KATP) channels [217].

(−)-Carvone

Carvone (CAR) belongs to the family of monocyclic monoterpene ketones and exists as two optical isomers (with different orientations of the isopropenyl group), D- and L-CAR. In general, these individual enantiomers elicit distinct biological responses, particularly toward olfactory receptors [218]. (−)-CAR is found in substantial quantities in the volatile oils of the Mentha genus, such as the species M. spicata [219,220]. This monoterpene is known to have a promising antinociceptive effect, exerting distinct effects on both the central and peripheral nervous systems [221]. Until now, few studies have attempted to elucidate the potential mechanisms involved in the antinociceptive action of (−)-CAR. For this reason, Gonçalves and collaborators [222] investigated the pharmacology of (−)-CAR in dorsal root ganglia (DRG) neurons and TRPV1-expressing HEK293 cells to verify whether this compound activates TRPV1 channels. (−)-CAR did not provoke any membrane damage, presenting low cytotoxicity in both neural and epithelial cells. This monoterpene also promoted an elevation of cytosolic calcium levels in DRG neurons through activation of TRPV1 channels. Further, the activity of (−)-CAR on TRPV1 channels was examined in HEK293 cells expressing recombinant human TRPV1 channels, revealing that the increase in calcium levels occurs in a concentration-dependent manner [222].

Farnesol

Farnesol (FAR) is a natural 15-carbon acyclic sesquiterpene alcohol. This sesquiterpenoid is commonly found in propolis (a resinous beehive product), citrus fruits and various plant essential oils, such as those of Tetradenia riparia [223] and Citrus sp. [224]. In a study performed by Qamar and Sultana [225], FAR showed protective efficacy against massive lung inflammation, oxidative stress and injuries induced by cigarette smoke toxicants. With this background, the antinociceptive activity of FAR was investigated in two classic behavioral models of analgesia: the acetic acid-induced writhing test and formalin-induced nociception. Pretreatment with FAR (50, 100 and 200 mg/kg, i.p.) caused a marked reduction in the number of writhes in the acetic acid-induced writhing test at all assayed doses. In the formalin test, FAR was capable of inhibiting both phases of the pain stimulus at doses of 100 and 200 mg/kg.
These results indicate that FAR possesses antinociceptive activity, with an effect similar to that found with centrally acting analgesic drugs such as morphine and tramadol [226].

β-Caryophyllene

β-Caryophyllene (β-CARY), a natural bicyclic sesquiterpene, is the main volatile constituent found in the essential oils of many common spices and food plants, such as Cinnamomum spp. [227], Origanum vulgare [228] and Piper nigrum [229]. In nature, three isomers are found, named (E)-β-CARY, (Z)-β-CARY (or isocaryophyllene), and α-humulene (formerly α-caryophyllene), a ring-opened isomer [230]. Further, β-CARY is known to be the main constituent of Cannabis sativa essential oil [231], to possess antiarthritic effects [232], and to show noteworthy anti-inflammatory activity against carrageenan- and PGE1-induced edema in rats [233]. With this background, one study [234] examined the contribution of the opioid and peripheral cannabinoid (CB) systems to the antinociceptive action produced by β-CARY, as well as its action in combination with the opioid agonist morphine. β-CARY (9.0 µg/paw or 18.0 µg/paw, intraplantar (i.pl.)) attenuated the capsaicin-induced nociceptive behavioral response in a dose-dependent manner. Further, β-CARY-induced antinociception was mediated by peripheral CB2 receptor activation, which stimulates the local release of β-endorphin, an endogenous opioid, from keratinocytes. Finally, a synergistic antinociceptive interaction between β-CARY and morphine was also observed, a finding that may offer an interesting therapeutic alternative to minimize the risk of undesirable side effects caused by this opioid analgesic [234].

α,β-Epoxy-Carvone

α,β-Epoxy-carvone (ECAR) is a naturally occurring monocyclic monoterpene containing an epoxy group instead of the α,β-unsaturated ketone group present in CAR [235]. This monoterpene is present in the essential oils of various aromatic species, such as Kaempferia galanga [236] and Carum carvi [237], but can also be obtained by organic synthesis [238]. ECAR exhibits a depressant effect on the CNS [180], as well as antimicrobial [239] and anticonvulsant [240] activities. In 2013, Da Rocha and collaborators [241] investigated the antinociceptive and anti-inflammatory effects of this monoterpenoid in four experimental mouse models: acetic acid-induced writhing, formalin-induced nociception, the hot plate test and acetic acid-induced peritoneal permeability. ECAR promoted a significant antinociceptive effect in the acetic acid-induced abdominal writhing test at doses of 100, 200 or 300 mg/kg (i.p.). In the formalin test, ECAR inhibited nociception in both the first phase (300 mg/kg) and the second phase (200 and 300 mg/kg). In the hot plate test, pretreatment with ECAR caused a significant latency prolongation at 30 min (100, 200 and 300 mg/kg) and at 60 and 120 min (300 mg/kg), and this effect was reversed by naloxone. Finally, ECAR was capable of inhibiting acetic acid-induced peritoneal capillary permeability at a dose of 300 mg/kg [241].

Materials and Methods

The compounds presented in this review were selected based on the effects shown in specific animal models for the evaluation of antinociceptive activity. Table 1 summarizes the essential oil constituents with antinociceptive activity. The search was conducted in the scientific database PubMed, focusing on works published during the last six years (January 2011 to December 2016). The data were selected using the following terms: "essential oils", "monoterpene" and "phenylpropanoids", refined with "analgesic" or "antinociceptive".
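For readers wishing to reproduce or update this search, the sketch below shows one way to run an equivalent query programmatically against PubMed using Biopython's Entrez wrapper. The exact query string is our assumption, assembled from the terms stated above; NCBI requires a contact e-mail address.

from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address

query = ('("essential oils" OR monoterpene OR phenylpropanoids) '
         'AND (analgesic OR antinociceptive)')

handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                        mindate="2011/01", maxdate="2016/12", retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")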
(Table 1, which lists each constituent together with the nociception models used, the main findings, the animal species and the corresponding references, is not reproduced here.)

Conclusions

The increasing number of studies on the antinociceptive activity of essential oil constituents shows the therapeutic potential of this chemical class. Effective in various animal models of pain and acting via different mechanisms of action, these compounds are interesting molecules for study in clinical approaches. Although many antinociceptive constituents occur in essential oils only in small amounts, some of them, such as the monoterpenes α,β-epoxy-carvone [180] and hydroxydihydrocarvone [242,243], can be easily synthesized using low-cost reactions. The use of these bioactive constituents as prototypes to synthesize analogous compounds is another interesting way forward in the development of new analgesic drugs. It is also necessary to investigate the toxicological aspects of essential oils. Only a few publications have addressed the possible toxicological effects of essential oils in humans. For example, constituents such as linalool, whose antinociceptive activity in animals is well established [244][245][246][247], have been the subject of few toxicological studies in humans [248]. The standardization of experimental protocols is also essential to establish better doses and routes of administration; in this way, a more appropriate comparative analysis between the oils becomes possible. In addition, the investigation of the chemical composition of essential oils is important to complement the pharmacological and toxicological approaches. The present review makes it possible to conclude that the structural diversity of the bioactive constituents does not allow the establishment of a single chemical characteristic responsible for the antinociceptive action. Advanced studies on the mechanisms of action of these constituents, together with computational medicinal chemistry approaches, may be a more efficient way to understand the chemical requirements for this pharmacological activity.
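As a minimal illustration of the kind of computational comparison alluded to above, the Python sketch below computes Morgan-fingerprint (Tanimoto) similarity between eugenol and two related compounds discussed in this review, using RDKit. The SMILES strings are the standard structures of these compounds; the choice of fingerprint radius and bit size is an illustrative assumption.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

smiles = {
    "eugenol":        "COc1cc(CC=C)ccc1O",
    "methyl eugenol": "COc1cc(CC=C)ccc1OC",
    "carvacrol":      "Cc1ccc(C(C)C)cc1O",
}

# Morgan fingerprints (radius 2, 2048 bits) for each structure
fps = {name: AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
       for name, s in smiles.items()}

for name in ("methyl eugenol", "carvacrol"):
    sim = DataStructs.TanimotoSimilarity(fps["eugenol"], fps[name])
    print(f"eugenol vs {name}: Tanimoto = {sim:.2f}")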
Green Synthesis of Silver Nanoparticles
Development of reliable and eco-friendly methods for the synthesis of nanoparticles is a vital step in the field of nanotechnology, motivated by the remarkable chemical, physical, and biological characteristics of silver nanoparticles.

INTRODUCTION
Nanotechnology is a new and emerging technology with a wealth of applications. It is one of the most cutting-edge technologies in many different scientific disciplines, including biology, chemistry, and material science [1]. It entails the creation and use of materials with one or more dimensions in the range of 1-100 nm [2]. Given their small sizes, huge surface areas with free dangling bonds, and higher reactivity than their bulk counterparts, such materials have surprising and fascinating features [3]. Researchers are becoming more interested in biological processes due to the development of effective green syntheses that use natural reducing, capping, and stabilizing agents rather than hazardous, expensive, and energy-intensive chemicals [4,5]. Silver has been used for treating many illnesses for ages [6]. "The silver nanoparticles demonstrate better antibacterial" [7], "antifungal and antiviral properties compared with metallic silver and silver compounds" [8,9]. Utilizing biological processes to create nanoparticles is biocompatible because plants, in particular, release functional biomolecules that actively reduce metal ions [10]. NPs can be created by both "top-down" and "bottom-up" methods. "An appropriate bulk material is divided into tiny particles by size reduction using various methods, such as pulse laser ablation, evaporation-condensation, ball milling, etc., in a top-down approach. The bottom-up method allows for the chemical and biological synthesis of NPs through the self-assembly of atoms into new nuclei that develop into nanoscale particles" [11]. In comparison to physical and chemical processes, green synthesis is more environmentally benign, economically advantageous, and readily scaled up for the production of large quantities of nanoparticles (NPs). Green synthesis also does not involve the use of hazardous chemicals, high temperatures, or high energy inputs [12]. Today, extracts from plant parts like fruit, leaves, bark, seeds, and stems have been successfully employed to create nanoparticles. The traditional processes for making NPs are pricey, hazardous, and unfriendly to the environment. To get around these issues, researchers have identified precise green routes, i.e., the naturally occurring sources and their byproducts that can be employed for the synthesis of NPs.

GREEN SYNTHESIS
The process of creating nanoparticles using microorganisms and plants that have biomedical implications is known as "biosynthesis of nanoparticles." This strategy is economical, biocompatible, green, and safe for the environment [13]. "The two basic ingredients for the ecologically friendly production of AgNPs are a silver metal ion solution and a reducing biological agent. External capping and stabilizing agents are typically not required since reducing agents or other cell components act as stabilizing and capping agents the majority of the time" [14].

Metal Ion Solution
The main ingredient needed to create AgNPs is the Ag+ ion, which can be found in a variety of water-soluble silver salts. Most commonly, aqueous silver nitrate solution has been utilized, with Ag+ ion concentrations ranging from 0.1 to 10 mM; a worked preparation example follows.
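For concreteness, the mass of AgNO3 needed for a stock in this concentration range is simple to compute. The sketch below is illustrative only: it uses the standard molar mass of AgNO3 (about 169.87 g/mol), and the target concentrations and volume are chosen arbitrarily, not taken from the review.

```python
# Illustrative stock-solution arithmetic; values are examples, not a protocol.
MOLAR_MASS_AGNO3 = 169.87  # g/mol (Ag 107.87 + N 14.01 + 3*O 16.00)

def agno3_mass(conc_mM: float, volume_mL: float) -> float:
    """Mass of AgNO3 (g) for a solution of conc_mM millimolar in volume_mL."""
    moles = (conc_mM / 1000.0) * (volume_mL / 1000.0)  # mol = M * L
    return moles * MOLAR_MASS_AGNO3

for conc in (0.1, 1.0, 10.0):  # the 0.1-10 mM range cited above
    print(f"{conc:5.1f} mM in 100 mL -> {agno3_mass(conc, 100):.4f} g AgNO3")
```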
Biological Reducing Agents
Silver nanoparticles have been made using biopolymers, microbial cell biomass, or cell-free growth media, along with plant extracts. From algae to angiosperms, plants are employed to make AgNPs. The manufacture of silver nanoparticles has utilized components like leaves, bark, roots, and stems [14]. Medicinally important plants such as Aloe vera [15], Azadirachta indica [16] and Cocos nucifera [17] have been utilized to create silver nanoparticles in a green approach. With the exception of a few instances, all plant extracts served as both potential reducing and stabilizing agents. It was discovered that the plant extracts' metabolites, proteins, and chlorophyll serve as capping agents for synthesized silver nanoparticles [18].

MECHANISM OF AgNPs SYNTHESIS
The existence of several organic chemicals, including those that can provide an electron for the reduction of Ag+ ions to Ag0, such as carbohydrates, fats, proteins, enzymes and coenzymes, phenols, flavonoids, terpenoids, alkaloids, and gum, is what allows biological matter to produce Ag nanoparticles.

SEPARATION OF AgNPs
Researchers typically utilize the centrifugation approach to obtain synthesized silver nanoparticles in pellet or powder form. Additionally, AgNP suspensions have been dried in the oven to produce the product in powder form [20].

Cloud-point Extraction
"CPE is simple to perform based on the non-ionic surfactants' solubilization properties and cloud points. This extraction technique, in short, consists of three steps: first, a non-ionic surfactant is added to the sample solution at a concentration greater than its critical micelle concentration (CMC); second, the mixture becomes turbid when the external conditions (such as temperature, pressure, pH, or ionic strength) are changed because it reaches the cloud point (i.e., incomplete solubilization); and third, centrifugation or prolonged standing causes the micelle solution to easily separate into two phases, allowing the analytes to be concentrated and extracted into the surfactant-rich phase due to the analyte-micelle interaction" [21]. Given its numerous advantages, including its low cost, ease of handling, high extraction efficiency, and preconcentration factor, CPE is a good method for removing contaminants from a variety of environmental and biological samples.

Field-flow Fractionation
The hydrodynamic separation method known as "field-flow fractionation" (FFF) was created to separate complicated macromolecules, colloids, and particles. It is comparable to a field-driven method and to liquid chromatography, with the exception that a stationary phase is not required. In essence, an external field applied perpendicular to the axis of the fractionation channel, while the flowing stream containing the samples migrates through the FFF channel, induces the retention of the analytes. Particles have unique diffusion coefficients due to their diverse physicochemical features and wide size distribution. The retention duration varies because particles remain at varied distances from the accumulation wall to balance this diffusibility against the external field [22].

Chromatographic Methods
Hydrodynamic chromatography (HDC) is a technique for size-based separation. Nonporous microparticles are tightly packed in the column, and separation is accomplished by the flow velocity and the velocity gradient across them [23].
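Both FFF and HDC exploit the size dependence of particle diffusion noted above. As a back-of-the-envelope illustration (not from the review), the Stokes-Einstein relation D = kBT/(6πηr) shows how strongly the diffusion coefficient falls with particle radius; the temperature and water viscosity used below are assumed values.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
T = 298.0            # assumed temperature, K
ETA = 8.9e-4         # assumed viscosity of water at 25 C, Pa*s

def stokes_einstein(radius_m: float) -> float:
    """Diffusion coefficient (m^2/s) of a sphere of given radius in water."""
    return K_B * T / (6.0 * math.pi * ETA * radius_m)

for r_nm in (5, 20, 50):  # typical AgNP radii
    D = stokes_einstein(r_nm * 1e-9)
    print(f"r = {r_nm:3d} nm -> D = {D:.2e} m^2/s")
```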
Electrophoresis and Capillary Electrophoresis
"Particle size, shape, and surface-chemical alteration of NPs are the key factors that influence electrophoretic separation of NPs. While the electrophoresis of functionalized NPs with surface functional-group modifications is influenced by the quantity, chemical nature, and ionization of these functional groups, the electrocharge of NPs without surface modification arises primarily from ion adsorption, and the electrophoretic separation greatly depends on particle size" [24].

Density-gradient Centrifugation
"The density-gradient centrifugation technique, which was used to separate biomacromolecules, has a lot of potential for isolating nanoparticles. The development of density-gradient centrifugation has made it possible to employ it with organic solvents, such as non-hydroxylic solvents, which have been used to purify Ag, Au, and CdSe nanoparticles. In one study, cyclohexane and tetrachloromethane mixes with varying densities (by volume 50%, 60%, 70%, 80%, and 90%) were layered to create a five-layer gradient. Following the addition of AgNPs and centrifugation, distinct coloured zones were seen, visible under a TEM" [25].

Miscellaneous Methods
Ag nanoparticles have also been separated using a number of additional methods, such as membrane filtration, ultrafiltration, and dialysis. Centrifugation is a popular method for removing residues from freshly synthesized NPs since it is inexpensive and simple to perform. Membrane filtration, also known as ultrafiltration, has become an effective method for the separation of AgNPs of various sizes due to its straightforward process and lack of need for additional separating agents. However, during centrifugation or filtering, unwanted aggregation or filter clogging may happen, which could affect the outcomes. Ag+ and AgNPs may be separated using some methods for determining Ag ions. For the purpose of finding labile metal ions, the diffusive gradients in thin-films (DGT) method, which is based on Fick's first law of diffusion, has generated a lot of interest. In the presence of AgNPs, free Ag ions have been successfully measured by DGT [26,27].

CHARACTERIZATION OF AgNPs
AgNPs are frequently characterized using several techniques, such as UV-Vis spectroscopy, SEM, TEM, FTIR, XRD, and EDAX (EDX/EDS). AgNPs are too small to be detected by conventional optical microscopy due to their nanoscale size. In order to visualize and characterize nanomaterials, many researchers use electron microscopy (EM) techniques, which are based on the application of an electron beam and have a much greater resolution; transmission electron microscopy (TEM) and scanning electron microscopy (SEM) are the two techniques that stand out the most. In addition to the particles' size and form, TEM images also reveal the morphology and aggregation state of the particles [20]. "UV-Vis spectral analyses have been used to examine the dependence of AgNP formation on pH, metal ion concentration, and extract content, by showing a red shift in the SPR peak with an increase in nanoparticle size and a blue shift for a decrease in size" [28]. Most studies' SEM morphological analyses showed spherical Ag nanoparticles, while a few reported irregular [29], triangular [30], flake [31], flower [32], pentagonal [33] and rod-like structures [34].
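Among the techniques listed, XRD also yields an average crystallite size via the Scherrer equation, D = Kλ/(β cos θ). The sketch below is an illustrative calculation rather than part of the review: the shape factor K ≈ 0.9 and the Cu Kα wavelength are assumed values, and β is the peak's full width at half maximum converted to radians.

```python
import math

WAVELENGTH_NM = 0.15406  # assumed Cu K-alpha X-ray wavelength, nm
K = 0.9                  # assumed Scherrer shape factor

def scherrer_size(fwhm_deg: float, two_theta_deg: float) -> float:
    """Crystallite size (nm) from peak FWHM and position (both in degrees 2-theta)."""
    beta = math.radians(fwhm_deg)            # FWHM in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle
    return K * WAVELENGTH_NM / (beta * math.cos(theta))

# Example: a hypothetical Ag(111) reflection near 38 degrees 2-theta
print(f"D = {scherrer_size(fwhm_deg=0.5, two_theta_deg=38.1):.1f} nm")
```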
FACTORS AFFECTING AgNPs SYNTHESIS
The reaction temperature, metal ion concentration, extract content, pH of the reaction mixture, reaction time, and agitation are the main physical and chemical factors that influence the synthesis of AgNPs. The size, shape, and morphology of the Ag nanoparticles are substantially influenced by variables such as metal ion concentration, extract content, and reaction time [35]. The important parameters in a reaction include the temperature and stirring time. Many researchers employed biopolymers and plant extracts to synthesize AgNPs at temperatures as high as 100 °C. The rate of AgNP production increased as the temperature rose (30-90 °C) [36], and higher temperature also promoted the synthesis of smaller AgNPs. Most studies have created AgNPs in the room-temperature range (25 °C to 37 °C) [37].

APPLICATIONS OF AgNPs
"Numerous antibacterial and antifungal applications exist for Ag nanoparticles. Ag nanoparticles have been widely used as antibacterial coatings in medical applications, including heart implants, catheters, wound dressings, orthopedic implants, dental composites, nano-biosensing, and agriculture engineering" [38]. The health sector, food storage, textile coatings, and a number of environmental applications have all made substantial use of Ag nanoparticles as antibacterial agents [39]. Ag nanoparticles were used as antibacterial agents in a variety of applications, including water treatment, sanitizing household and medical equipment, and home appliances [40,41]. Ag nanoparticles also exhibited antifungal action against various fungi. The actual mechanism behind the antifungal activity is not fully understood. The antifungal effect has been related to the disruption of the cell membrane's structure by destroying the integrity of the membrane, which inhibits the budding process in C. albicans species. The shape of the Ag nanoparticles has a significant effect on the antimicrobial activity [42,43]. Ag nanoparticles have also proven effective as larvicidal agents against the dengue vector Aedes aegypti [44] and the malarial vector A. subpictus [45]. No attempt has been made to suggest a suitable mechanism for Ag nanoparticles' anti-parasitic effect. Several publications have been presented on the application of Ag nanoparticles in medicine. Ag nanoparticles have been used as therapeutic agents [38], for disease diagnosis [46], and as nanocarriers for drug delivery [47].

[Table: authors and reducing agents employed in reported green syntheses; the table body is not recoverable in this copy.]

CONCLUSION
It is determined that over the past ten years, significant efforts have been undertaken to advance green synthesis. Green synthesis advances over chemical and physical approaches because it is affordable, environmentally friendly, and can be scaled up successfully for large-scale synthesis [48]. Ag nanoparticles have a wide range of important pharmacological properties, and they are more affordable than local medications. In plant-mediated green synthesis, Cu nanoparticles with an average size of 48 nm and high crystallinity were produced by A. indica leaf broth and remained stable for two months at 4 °C [49,50]. In addition to plant-mediated green synthesis, AgNPs' numerous bioassay capabilities have received particular attention. The form and size of Ag nanoparticles produced employing biological reducing and capping agents vary greatly. The antimicrobial effect of Ag nanoparticles has been extensively researched among other applications.
Polyphenol-based polymer nanoparticles for inhibiting amyloid protein aggregation: recent advances and perspectives
Polyphenols are a group of naturally occurring compounds that possess a range of biological properties capable of potentially mitigating or preventing the progression of age-related cognitive decline and Alzheimer's disease (AD). AD is a chronic neurodegenerative disease known as one of the fastest-growing diseases, especially in the elderly population. Moreover, as the primary etiology of dementia, it poses challenges for both familial and societal structures, while also imposing a significant economic strain. There is currently no pharmacological intervention that has demonstrated efficacy in treating AD. While polyphenols have exhibited potential in inhibiting the pathological hallmarks of AD, their limited bioavailability poses a significant challenge to their therapeutic application. Furthermore, in order to address these therapeutic constraints, several polymer nanoparticles are being explored as improved delivery systems to optimize the pharmacokinetic characteristics of polyphenols. Polymer nanoparticles have demonstrated advantageous characteristics in facilitating the delivery of polyphenols across the blood-brain barrier, resulting in their efficient distribution within the brain. This review focuses on amyloid-related diseases and the role of polyphenols in them, in addition to discussing the anti-amyloid effects and applications of polyphenol-based polymer nanoparticles.

Introduction
Between the years 2000 and 2020, there was a decrease in deaths attributed to stroke, heart disease, and human immunodeficiency virus, whereas reported deaths from Alzheimer's disease (AD) saw a notable increase of over 145% (1). The incidence of deaths attributed to AD was further exacerbated in 2020 due to the global COVID-19 pandemic. Concurrently, COVID-19 posed significant challenges and intricacies for caregivers of individuals with dementia. During the past two decades, the primary therapeutic approaches for AD have encompassed interventions targeting neurotoxic β-amyloid (Aβ) aggregates, anti-Tau agents, anti-neuroinflammatory agents, neuroprotective agents, and brain stimulation, among others (2)(3)(4). The accumulation and deposition of Aβ, particularly Aβ42, serve as the primary and initiating etiological factor in the pathogenesis of AD (5,6). Consequently, investigations involving Aβ42 aggregates in both in vivo and in vitro settings have consistently formed the cornerstone of research in this area. Due to the metastability and heterogeneity of Aβ (mainly Aβ42) aggregates, the pathogenesis of Aβ is inevitably diverse and complex (7,8).
In recent years, there has been significant advancement in the field of nanotechnology, particularly in the development of emerging nanomaterials for the construction of nanoparticle-based drug delivery systems. This has garnered increasing attention in the fields of medicine and biology for potential applications in disease diagnosis and treatment (9,10). Nanoparticles exhibit a dense structure, enabling the attachment of drugs through mechanisms such as adsorption, dissolution, encapsulation, or covalent bonding (11)(12)(13). Nanoparticles have the capability to stabilize labile pharmaceutical compounds, enhance their aqueous solubility, extend the duration of drug or contrast agent presence in the circulatory system, and mitigate the inherent shortcomings of pharmaceutical compounds (14,15). Furthermore, specialized targeting molecules can be employed to modify the nanoparticles for enhanced penetration of the blood-brain barrier, allowing for targeted action on specific cells or intracellular compartments (16,17). Therefore, nanoparticles show great potential for enhancing the delivery of pharmaceutical compounds to specific central nervous system targets and facilitating their release at sites of disease.

Polyphenols are commonly present in natural plant sources and exhibit diverse biological properties, particularly in the mitigation of AD and other neurodegenerative conditions (18,19); however, their absorption and utilization within the human body are limited (20,21). Hence, it is imperative to develop a strategy for harnessing the potential of polyphenols within the organism. This review provides an overview of the characteristics of amyloid and associated diseases, the inhibitory properties of natural polyphenols on amyloid fibrils, and the therapeutic effects and potential applications of polyphenol-based polymer nanoparticles.

Amyloidogenic property and diseases
Proteins are fundamental biomolecules that constitute the building blocks of life, serving as essential components for executing biological functions within organisms and contributing to various physiological processes. The preservation of proper protein structure is essential for the normal physiological functioning of proteins. Any deviation from the correct structure of a protein molecule is likely to result in the impairment of its function and the onset of organic pathology (22). Amyloid refers to a series of fibrillar proteins with abnormal conformation due to misfolding, which tend to aggregate with each other and form insoluble amyloid fibrils that are deposited in the body, thus triggering diseases, i.e., clinically recognized protein conformation diseases (23,24). Currently, there exist over 20 types of protein conformational diseases, such as the association of Type 2 diabetes (T2D) with pancreatic islet amyloid polypeptide; AD with Tau protein and β-amyloid protein (25,26); Parkinson's disease with α-synuclein (27,28); transmissible spongiform encephalopathy with prion protein; and Huntington's syndrome with polyglutamine aggregation, among others (29). The amyloid proteins associated with each of these diseases have fibrillar properties in a β-sheet structural conformation and are deposited intracellularly or extracellularly to form amyloid plaques (30).
The investigation of these amyloid-related diseases has emerged as a prominent area of pharmacological research. These types of diseases share the following common characteristics: (1) while the sequences of the individual proteins may vary, the resulting aggregates exhibit a high degree of similarity, characterized by a structured and insoluble nature; (2) the formed aggregates contain a large amount of β-sheet structure. These diseases are commonly classified under the umbrella term of amyloidosis (31-33). Proteinaceous deposits with a starch-like appearance accumulate in various organs and tissues, leading to the formation of insoluble aggregates that are typically detectable through Congo red staining (34). Various factors, including genetic predisposition, environmental influences, and individual behavior, can contribute to the development of amyloidosis. In addition to traditional diagnostic methods, physicians may employ tissue biopsy and sectioning techniques to assess the presence of amyloid proteins within affected tissues (35). Although amyloid proteins possess similar pathogenic characteristics, their distribution within the body may differ, resulting in unique clinical presentations (36).

Islet amyloid polypeptide and T2DM
Diabetes mellitus is a chronic metabolic disease characterized by abnormally high blood glucose concentrations, and it is widely recognized that diabetes mellitus is genetically determined and triggered by acquired factors. Diabetes mellitus is often associated with serious complications, such as blindness, dyslipidemia, cardiovascular disease and renal failure. Currently, type 2 diabetes mellitus (T2DM) presents a significant global health concern, with the Centers for Disease Control and Prevention reporting approximately 40 million individuals in the United States affected by pre-diabetic conditions (37). A study conducted in the U.S. investigating the efficacy of lifestyle modifications or metformin in preventing the progression of impaired glucose tolerance (IGT) to diabetes revealed that 11% of individuals with IGT transition to diabetes annually, resulting in an estimated 2-4 million new cases of diabetes each year (38). A recent study predicts that the global prevalence of diabetes will increase from 2.8% in 2000 to 4.4% in 2030, meaning that nearly 366 million people will develop T2DM (39); a quick arithmetic check of this projection follows.
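As a sanity check of the cited projection (our arithmetic, not from the review): the case count divided by the prevalence gives the implied world population, and the ratio of the two prevalence figures gives the relative growth.

```python
# Back-of-the-envelope check of the cited diabetes projection.
cases_2030 = 366e6        # projected T2DM cases
prevalence_2030 = 0.044   # 4.4%
prevalence_2000 = 0.028   # 2.8%

implied_population = cases_2030 / prevalence_2030
relative_growth = prevalence_2030 / prevalence_2000 - 1

print(f"Implied world population in 2030: {implied_population/1e9:.1f} billion")
print(f"Relative increase in prevalence: {relative_growth:.0%}")
```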
Human islet amyloid polypeptide is secreted by pancreatic islet β-cells and contains 37 amino acid residues, referred to as hIAPP37 (40). The aggregation of human islet amyloid polypeptide into amyloid fibril aggregates has been shown to have deleterious effects on pancreatic islet cells, leading to damage and ultimately contributing to the development of T2DM (41,42). The facile formation of toxic amyloid fibril aggregates is a prominent pathological characteristic of T2DM. Numerous clinicopathologic studies of patients with T2DM determined that a significant proportion of individuals exhibited islet amyloid deposits (43). Normally, insulin and islet amyloid polypeptide are co-secreted in response to blood glucose stimulation. In the atypical scenario of insulin resistance, the heightened demand for insulin prompts an elevation in insulin secretion, consequently resulting in an augmented co-secretion of islet amyloid polypeptide (44). It has been shown that the concentration of islet amyloid polypeptide in the blood during this process increases from the normal 1-20 pM to 50 pM (45), and the increased concentration raises the risk of generating amyloid deposits. The aggregates generated cause damage to the islet cells, which in turn exacerbates the insulin deficiency, and a vicious circle is set in motion.

β-amyloid peptide and AD
AD is a neurodegenerative disease in which the cognitive functions of the cerebral hemispheres, such as memory, language and reasoning, are impaired (46). This degenerative brain disease, characterized by dementia, is considered a significant health concern impacting human longevity in contemporary society (47). Globally, the number of AD patients has now exceeded 30 million and is increasing by about 5 million per year; it is expected that by 2040 the number of people suffering from AD will be more than 80 million, with the number in China reaching about 15 million, a three-fold increase compared with the beginning of this century (48). The disease is anticipated to impose significant economic strain and distress upon the families of afflicted individuals, thereby escalating into a pressing social issue of great concern.

AD patients have two main types of abnormal lesions in the brain: (1) the presence of neurofibrillary tangles (NFTs) in the nerve cells of the brain, and (2) the appearance of senile plaques in the brain (49,50). The primary components of the two lesions are abnormally phosphorylated Tau protein and β-amyloid polypeptide, the latter derived from the hydrolysis of an amyloid precursor protein called APP. APP contains a total of 770 amino acids, and its transmembrane region can become a component of amyloid deposits through cleavage (51). Another major pathology underlying NFTs is the formation of paired helical filaments (PHFs), which are mainly composed of hyperphosphorylated Tau protein (52). The number of NFTs in a patient's brain has been reported to be a very important indicator of the severity of AD.
Prion protein and spongiform encephalopathy
Transmissible spongiform encephalopathies (TSEs) are fatal neurodegenerative diseases affecting both animals and humans, characterized by an infectious agent known as the structurally abnormal, self-propagating prion protein (53). Examples of such diseases include mad cow disease, Creutzfeldt-Jakob disease, and scrapie in sheep (54). The pathology of TSEs is characterized by the accumulation of amyloid plaque deposits, leading to degeneration, loss, vacuolar degeneration, death, and disappearance of neuronal cells in the cerebral cortex. This process ultimately results in the replacement of affected cells by vacuoles and stellate cells, causing a spongy state characterized by thinning of the cerebral cortex (gray matter) and relatively pronounced white matter, hence the term spongiform encephalopathy (55). Prion proteins (prions) are a class of small, non-immunogenic, hydrophobic proteins that can infect animals and replicate in host cells. The prion molecule is incapable of inducing disease in its normal form; rather, it must undergo a conformational change in order to adopt the infectious prion conformation and subsequently inflict damage upon neurons (56). Prions have the ability to induce conformational alterations in normal prion proteins, leading to their aggregation and subsequent interaction with additional normal prion proteins, ultimately resulting in a cascade effect that culminates in neuronal cell apoptosis (57). Normal prion proteins are present in both humans and animals and are labeled PrPC, while the prions that are not normally present in the body but are infectious are labeled PrPSc (58). There is a specific hydrophobic sequence, the 106-126 fragment (denoted PrP106-126), in the prion protein. This sequence has an important role in the conversion of PrPC to PrPSc and in the generation of associated neurotoxicity (59). PrP106-126 is frequently utilized as an in vitro model to investigate the aggregation process and pathomechanisms of prion proteins due to its structural similarities with PrPSc, including a high prevalence of β-sheet structure, a propensity for aggregation that resists protease degradation, and significant cytotoxicity. To date, extensive research has been conducted on prions and the PrP106-126 model; however, the precise mechanism underlying PrPSc-induced cytotoxicity and the pathogenic basis of aggregation remain elusive.
Amyloid fibrillation inhibition by natural polyphenols
Toxic amyloid fibrils pose a significant threat to human health, yet a definitive treatment remains elusive. Up to now, researchers have screened dozens of inhibitors, which are broadly categorized into three groups: peptide inhibitors, antibodies, and small molecule inhibitors (60). Peptide inhibitors and antibodies offer the potential for targeted therapy; however, their limited utilization is primarily attributed to issues related to stability and cost. Small molecule inhibitors exhibit advantageous pharmacological properties and minimal cytotoxicity, with a wide range of natural polyphenol inhibitors standing out as particularly notable in this respect (61,62). The structural formulae of common polyphenols for inhibiting amyloid protein aggregation are depicted in Figure 1. Furthermore, certain natural polyphenols not only hinder amyloid fibril formation but also demonstrate neuroprotective properties, including the mitigation of neuroinflammation, resistance to oxidative stress and apoptosis, restoration of mitochondrial damage, and enhancement of fibril deposit clearance (63,64).

Natural polyphenols have shown promising results in the treatment of a number of age-related diseases. Epigallocatechin gallate (EGCG) is a natural polyphenol compound found in green tea. The anti-amyloid effect of EGCG was identified in SH-SY5Y neuronal cells and an AD rat model in 2007 (65,66). It has been shown that EGCG can convert mature protofibrils and toxic oligomers of Aβ and α-synuclein into non-toxic small protein aggregates (67). Engel et al. (68) and Palhano et al. (69) have demonstrated that EGCG can inhibit hIAPP aggregation through hydrophobic effects. Resveratrol is another natural polyphenol, particularly abundant in grapes; it is a natural non-flavonoid polyphenol that protects the cardiovascular system and prevents atherosclerosis (70). Resveratrol is also capable of directly interfering with amyloid aggregation of different peptides and reducing their toxicity (71). One study found that resveratrol effectively suppresses hIAPP aggregation by inhibiting the early formation of oligomeric intermediates (71). Resveratrol was also found to dose-dependently inhibit Aβ peptide amyloid fibrillation (72) and lysozyme amyloid fibrillation (73), and to break down already mature amyloid fibrils.

Curcumin, a polyphenol compound consisting of a β-diketone structure and two o-methylated phenols, is derived from turmeric and has been utilized in traditional Chinese and Indian medicine for centuries (74). The significant curcumin content found in curry has been linked to a decreased risk of AD in the Indian population, as well as beneficial effects on cognitive function in elderly individuals (75). Traditional Chinese medicines such as turmeric (Curcuma longa) and Poria cocos contain abundant curcumin. Curcumin exhibits neuroprotective properties by inhibiting the aggregation of Aβ peptides, thereby altering the structure of Aβ fibrils, ameliorating the toxic effects induced by Aβ oligomers, and favoring the formation of non-toxic Aβ oligomers (76). Tannic acid possesses numerous hydroxyl and other functional groups, enabling it to engage in interactions with diverse proteins, and these interactions can function as inhibitors of β-site amyloid precursor protein cleaving enzyme-1 (BACE1), Aβ, and Tau proteins (77).
Formation of polyphenol-based polymer nanoparticles
Polyphenols have garnered significant interest due to their distinctive physicochemical characteristics (78-81). Polyphenols can be categorized into different groups based on the number of phenolic rings they contain and the underlying structural elements that bind these rings, and can be classified into several subclasses such as phenolic acids, flavonoids, stilbenes, and lignans (82), where flavanones, flavonoids, flavonols, anthocyanins, and phenolic acids contain at least one benzene ring, an aldehyde group, and several phenolic hydroxyls (83), which are essential structural components that facilitate the biological activity of polyphenols.

Numerous polyphenols exhibit limited water solubility and are vulnerable to environmental factors, leading to diminished bioavailability. The incorporation of composite nanoparticles has been shown to enhance the encapsulation efficiency of polyphenols and contribute to the improved stability of these bioactive compounds (84).

Polyphenols possess distinctive structural and chemical characteristics, notably the inclusion of functional groups such as catechol and galloyl groups, enabling them to engage in diverse non-covalent and covalent interactions with a broad range of materials. These interactions make polyphenols applicable in various fields, encompassing inorganic materials like metal ions, metals, metal oxides, semiconductors, carbon, and silicon dioxide, as well as organic materials such as small molecules and synthetic polymers, and even bioactive biomolecules and active microorganisms (85-87) (Figure 2). For example, Sorasitthiyanukarn et al. (88) prepared chitosan/sodium alginate nanoparticles loaded with curcumin glutaric acid by O/W emulsification and ionic gelation, and optimized the conditions using response surface methodology. The study showed that curcumin glutaric acid-loaded chitosan/sodium alginate nanoparticles exhibited favorable stability, controlled release properties, and enhanced activity against multiple cancer cell lines. Zhou et al. (89) synthesized metal-polyphenol nanoparticles using EGCG in aqueous solution with CuCl2, with low toxicity and high biocompatibility, improved their lysosomal escape efficiency by doping with cell-membrane-penetrating peptides, and achieved targeted delivery to mitochondria. In addition, Ma et al. (90) used Cu2+-chelated EGCG nanoparticles to functionalize collagen scaffolds in a spherical configuration. This structure exhibits slow-release properties and possesses the capacity to scavenge free radicals, exhibit anti-inflammatory effects, inhibit bacterial growth, and promote angiogenesis.
Green tea polyphenol-based nanoparticles
Green tea polyphenols consist of a diverse range of polyphenols present in green tea, with catechins constituting 70%-80% of the overall polyphenol composition. The four main types of catechins found in green tea are epicatechin, epigallocatechin, epicatechin gallate, and epigallocatechin gallate (EGCG). Among these, EGCG is the most abundant and bioactive compound in green tea; it has been extensively researched for inhibiting amyloid fibrillation and has been observed to reduce amyloid cytotoxicity induced by huntingtin, α-synuclein, and Aβ (91)(92)(93). EGCG was found to directly bind to unfolded proteins and prevent the formation of β-sheet structures, an initial step in the cascade leading to amyloid formation. At the molecular level, autoxidized EGCG reacts with free primary amine groups of proteins to form Schiff bases and induce protofibril remodeling (94). The catechol structure of EGCG confers metal-chelating, anti-inflammatory, antioxidant and neuroprotective activities that are crucial in the therapeutic management of AD (95). The authors of (96) synthesized a 25 nm EGCG nanoparticle that was 10-100 times more efficient than native EGCG in inhibiting protein aggregation, breaking down mature protein aggregates, and reducing amyloidogenic cytotoxicity. This result suggests that EGCG-based polymer nanoparticles are more potent in preventing and treating protein aggregation-derived diseases. Zhang et al. (97) attached EGCG to selenium nanoparticles (EGCG@Se) and synthesized EGCG-stabilized selenium nanoparticles encapsulated with a Tet-1 peptide (Tet-1-EGCG@Se); EGCG@Se and Tet-1-EGCG@Se could label Aβ protofibrils with high affinity, and the Tet-1 peptide could significantly enhance the cellular uptake of Tet-1-EGCG@Se in PC12 cells. In addition, studies have been conducted to prepare EGCG-derived carbonized polymer dots; the prepared polymers interacted with Aβ through hydrogen bonding, electrostatic interactions, and hydrophobic interactions, leading to alterations in the Aβ aggregation pathway. This provides an important reference for the development of multifunctional EGCG-based polymer nanomaterials against neurodegenerative diseases and other protein conformation diseases (98).

Curcumin-based nanoparticles
Turmeric is widely produced in India, as well as in Pakistan, Bangladesh, China, Indonesia, and South America. The most active component of turmeric is curcumin, which makes up 2-5% of the spice (99). Numerous studies have demonstrated that curcumin can effectively inhibit amyloid fibrillogenesis. For example, Pal et al. (100) explored the inhibition of amyloid fibrillogenesis by Al(III)- and Zn(II)-curcumin mixtures; the results showed that metal-curcumin mixtures can inhibit the transition of oligomers to β-sheet protofibrils, and the Al(III)-curcumin mixture was more effective than the Zn(II)-curcumin mixture in inhibiting β-protein aggregation. In addition, experimental evidence has shown that curcumin has the ability to inhibit the formation of Aβ amyloid fibrils by modifying the structure of protofibrils and altering their aggregation pathway (101). Furthermore, native curcumin can inhibit the formation of amyloid fibrils by suppressing the generation of primary nuclear structures of amyloid peptides through hydrogen bonding, hydrophobicity, and cationic interactions with Aβ peptides (102). Brahmkhatri et al.
(103) successfully synthesized curcumin-coated polymeric gold nanoparticles (PVP-C-AuNP) that demonstrated inhibition of Aβ1-16 aggregation by targeting the N-terminal region of the Aβ amyloid peptide, and this study revealed that PVP-C-AuNP exhibited the ability to degrade mature amyloid fibrils. Mirzaie et al. (104) prepared stearoyl phosphatidylethanolamine-methoxy polyethylene glycol polymer nanocolloids with and without curcumin, and then explored their effects on the amyloid fibrillation of bovine serum albumin; the results showed that the curcumin-loaded nanocolloids had an inhibitory effect on the formation of amyloid fibrils. Research has also been conducted on the potential inhibition of amyloid fibrillation through structural modifications of curcumin. Solid lipid curcumin particles (SLCP) provide an alternative strategy for curcumin delivery, and intraperitoneal injection of SLCP in AD mice demonstrates superior therapeutic effectiveness compared to free curcumin. Specifically, the lipid bilayer of SLCP facilitated crossing the blood-brain barrier. Within the brain, SLCP was found to bind to Aβ plaques in the prefrontal cortex and dentate gyrus, resulting in a decrease in the formation of Aβ42 oligomers and protofibrils, while also improving neuronal morphology (105). Giacomeli et al. (106) prepared curcumin-loaded lipid-core nanocapsules (LNC) that demonstrated notable neuroprotective properties in mitigating Aβ1-42-induced behavioral and neurochemical alterations in an AD model.

Resveratrol-based nanoparticles
Resveratrol is a polyphenolic compound widely found in grapes, tiger nuts, peanuts, cassia and other plant foods or medicines, and is a fat-soluble phytoalexin (plant antitoxin) (107,108). Resveratrol exhibits a diverse array of physiological effects, including inhibition of cell membrane lipid peroxidation, protection against cardiovascular disease, anti-inflammatory properties, neuroprotection, and estrogenic activity (109,110). Recently, nanoparticles have been used as effective carriers to enhance the oral bioavailability of resveratrol (111). Selenium nanoparticles (about 100 nm) surface-functionalized with resveratrol have been shown to be more effective than native resveratrol in inhibiting Aβ aggregation and reactive oxygen species (ROS) formation (112). Li et al. (113) prepared resveratrol-selenopeptide nanocomposites that interacted with Aβ, reduced Aβ aggregation, effectively inhibited Aβ deposition in the hippocampus, and ameliorated cognitive deficits; they reduced Aβ-induced ROS and enhanced antioxidant enzyme activities in PC12 cells and in vivo, and also reduced Aβ-induced neuroinflammation in BV-2 cells and in vivo by regulating the nuclear factor κB/mitogen-activated protein kinase/Akt signaling pathway. Yang et al. (114) prepared resveratrol-loaded selenium/chitosan nanoparticles, which could alleviate cognitive deficits by restoring the balance of the intestinal flora, and thus suppress oxidative stress, neuroinflammation, and metabolic disorders in AD mice.
Anthocyanidin-based nanoparticles
Anthocyanidins are polyphenolic compounds found in fruits, grains, and flowers with antioxidant, anti-inflammatory, and antiapoptotic properties (115,116). Previous studies have shown that anthocyanins extracted from Korean black soybeans prevent neuroinflammation, neuronal apoptosis, and neuronal degeneration (117,118). The findings indicate that anthocyanins may hold therapeutic potential for neurodegenerative disorders. Amin et al. (119) encapsulated anthocyanins in a biodegradable nanoparticle formulation based on poly(lactide-co-glycolide) (PLGA) with polyethylene glycol (PEG)-2000 as a stabilizer. The findings demonstrate the therapeutic promise of anthocyanins in mitigating Alzheimer's disease pathology and suggest that the efficacy of anthocyanins can be enhanced by utilizing nanomedicine delivery systems. Kim et al. (120) encapsulated anthocyanins in polyethylene-glycol-functionalized AuNPs to enhance their bioavailability in Aβ1-42-injected mice and to control the release of anthocyanins; the anthocyanin-loaded PEG-AuNPs reduced Aβ1-42-induced markers of neuroinflammation and apoptosis through inhibition of the p-JNK/NF-κB/p-GSK3β pathway and were more potent than native anthocyanins alone. These results imply the possibility of PEG-coated AuNPs loaded with anthocyanins as therapeutic agents for neurodegenerative diseases, especially AD.

Quercetin-based nanoparticles
Quercetin is a flavonoid found in many vegetables, fruits, and traditional Chinese medicines such as white onion bulbs, lingonberries, cranberries, kudzu root, and Polygonum multiflorum (121)(122)(123). Quercetin exhibits promise in attenuating the advancement of degenerative neurological disorders through the modulation of cellular pathways associated with Aβ-induced neurotoxicity and the alleviation of its adverse effects on neuronal cell lines and neurons (124)(125)(126). Nevertheless, the limited water solubility and extensive metabolism of quercetin pose challenges to its biological utility. A recent study has demonstrated that quercetin released from a nanopreparation maintains its biological activity: quercetin-loaded modified core-shell mesoporous silica nanopreparations with a polyethylene glycol 3,000 surface-modified magnetite core interfered with the aggregation of Aβ peptide and reduced the cytotoxicity of Aβ and Aβ-induced generation of ROS (127). An additional approach encapsulates quercetin within lipid nanoparticles functionalized with transferrin, enabling the targeted delivery of quercetin to the brain; this approach has shown enhanced efficacy in inhibiting Aβ aggregation while maintaining minimal cytotoxic effects (128). The utilization of quercetin-based nanoparticle systems presents a novel approach to enhance current therapeutic interventions, offering compelling evidence and renewed optimism for the treatment of Alzheimer's disease through quercetin administration.
Potential opportunities of polyphenol-based polymer nanoparticles
In recent years, nanotechnology-based delivery systems for pharmaceutical compounds have received much attention for their potential to prolong drug residence and circulation in the bloodstream as well as to enhance the stability and solubility of pharmaceutical compounds. Advantages of the nanoparticle form include increased water solubility of amyloid-targeting molecules and enhanced binding affinity to amyloid structures (96). An optimal nanoparticle should possess non-toxic and biodegradable properties, with additional consideration given to factors such as material composition, preparation technique, nanoparticle dimensions, and surface modifications, all of which significantly impact the successful targeted delivery of pharmaceutical compounds, including penetration into the central nervous system. The potential function of nanoparticles is to stabilize labile pharmaceuticals, enhance their water solubility, prolong the presence of drugs or contrast agents in the circulatory system, and address the inherent limitations of pharmaceutical compounds (129). The application of nanotechnology in polyphenol delivery systems improves the bioavailability and kinetic properties of natural polyphenols in biological systems, and advances in nanotechnology help to target natural polyphenols to specific sites or molecular targets and deliver them safely to specific sites of action, especially for natural polyphenols targeting the central nervous system, as in AD. The sustained release of polyphenol-based polymer nanoparticles enhances the controlled release properties of the loaded polyphenols, thereby reducing the dosage regimen, minimizing the (toxic) side effects of the polyphenol, and maximizing safety, which greatly improves the applicability and feasibility of natural polyphenols.

Furthermore, the utilization of polyphenol-based polymer nanoparticles to target the central nervous system enhances the efficacy of natural polyphenols in traversing the blood-brain barrier, resulting in a synergistic therapeutic effect. Thus, the sustained effectiveness of natural polyphenol candidates in the treatment of AD can be enhanced by polyphenol-based polymer nanoparticles. Currently, chitosan, poly(alkyl cyanoacrylate) (PACA), PLGA, and polyethylene glycol-modified PLGA (PEG-PLGA) are the most commonly used polymers in polyphenol-based polymer nanoparticle applications; PEG-PLGA in particular plays a great auxiliary role in brain-targeted delivery of natural polyphenols and in improving their utilization. At present, it is also important to prepare safe and effective polyphenol-based polymer nanoparticles.

Conclusion and perspectives
Dr. Alois Alzheimer first described AD in 1906, but there are still no appropriate therapeutic drugs to treat it. Moreover, in the context of AD, existing treatments offer solely symptomatic relief and do not effectively halt the progression of the disease. No drug has been approved by the U.S. Food and Drug Administration for the treatment of AD since 2003. Therefore, it is important to improve the efficacy of currently available pharmaceutical compounds through the use of delivery technologies, including nanomedicines, and to develop new polymer nanoparticles that block all possible mechanisms of disease pathogenesis in order to treat patients with AD.
Natural polyphenols are widely found in plants in nature. Due to their diverse biological activities, including antioxidant, anti-inflammatory, and intestinal flora regulation properties, they are utilized in the food industry as primary nutritional components in functional foods (Figure 3). Numerous studies have shown that natural polyphenols effectively mitigate the deleterious impact of amyloid peptides on neuronal cells and neurons; however, these compounds are constrained by various limitations, including the necessity for higher doses to achieve therapeutic efficacy, suboptimal absorption rates, restricted bioavailability, peripheral side effects, and challenges in traversing the blood-brain barrier. Natural polyphenols have the chemical property of providing both covalent and non-covalent bonds, and thus they have great potential in engineering-materials applications for constructing inorganic compound-phenolic fractions and generating functional polyphenol-based inorganic hybrid nanoparticles. In addition, by controlling and stabilizing the physicochemical interactions between molecules, polyphenols can not only self-assemble into particles in confined spaces, but also form coatings on the surfaces of preformed particles or serve as templates for the secondary growth of other functional materials. Thus, these special physicochemical properties make polyphenols of practical use in various fields of polymer nanoparticle application.

Research on polyphenol-based polymer nanoparticles for inhibiting amyloid protein aggregation in the field of traditional Chinese medicine (TCM) also holds great potential (Figure 4). The major directions include: (1) development of TCM resources: TCM possesses abundant plant resources, many of which are rich in polyphenols; through research on polyphenols in TCM, it is possible to develop polyphenol-based polymer nanoparticles capable of inhibiting amyloid protein aggregation. (2) Application of polyphenol-based nanoparticles: TCM ingredients have been proven to possess the efficacy of inhibiting amyloid protein aggregation, and the therapeutic effects of TCM can be enhanced through the preparation of polyphenol-based nanoparticles, which improve the bioavailability and stability of the medications. (3) Compatibility application of TCM polyphenol-based nanoparticles: TCM emphasizes the compatibility of medications, aiming to enhance therapeutic effects through the combination of multiple pharmaceutical compounds; the incorporation of polyphenol-based polymer nanoparticles in conjunction with other TCM components has been shown to augment the suppression of amyloid protein aggregation. (4) TCM's multi-target treatment strategy: TCM focuses on the holistic concept and adopts a multi-target treatment strategy; polyphenol-based polymer nanoparticles can interact with multiple targets, inhibiting abnormal aggregation of amyloid proteins and intervening in amyloid-related diseases at multiple levels. (5) Personalized treatment: TCM emphasizes personalized treatment, providing targeted therapies based on the patient's constitution and condition; polyphenol-based polymer nanoparticles can be customized according to the specific situation of the patient, offering personalized treatment plans.
[Figure 1. Chemical structures of polyphenols in common nanoparticles for inhibiting amyloid protein aggregation.]
[Figure 2. Mechanism of polyphenol nanoparticle formation. (A) Mechanism of selenium nanoparticle formation. (B) Mechanism of metal oxide nanoparticle formation.]

Although polyphenol-based polymer nanoparticles have shown potential in inhibiting amyloid protein aggregation in the field of TCM, further research and clinical experiments are still needed to verify their safety and effectiveness. Simultaneously, further research is needed to investigate the integration of TCM with modern scientific principles in order to advance the utilization of polyphenolic compounds for inhibiting amyloid protein aggregation in therapeutic applications.
Pinching estimates for negatively curved manifolds with nilpotent fundamental groups

Let $M$ be a complete Riemannian manifold of sectional curvature within $[-a^2,-1]$ whose fundamental group contains a $k$-step nilpotent subgroup of finite index. We prove that $a \ge k$, answering a question of M. Gromov. Furthermore, we show that for any $\epsilon>0$, the manifold $M$ admits a complete Riemannian metric of sectional curvature within $[-(k+\epsilon)^2,-1]$.

2000 Mathematics Subject Classification: Primary 53C20. Keywords: collapsing, horosphere, negative curvature, nilpotent group, pinching.

1. Introduction

If the fundamental group of a complete pinched negatively curved manifold is amenable, it must be finitely generated and virtually nilpotent [BS87, Bow93, BGS85]. In this paper we relate the nilpotency degree of the group to the pinching of the negatively curved metric.

Theorem 1.1. Let $M$ be a complete Riemannian manifold of $\sec(M) \in [-a^2,-1]$ whose fundamental group contains a $k$-step nilpotent subgroup $\Gamma$ of finite index. Then $a \ge k$.

If the cohomological dimension $cd(\Gamma)$ of $\Gamma$ equals $\dim(M) - 1$, which if $\dim(M) > 2$ is equivalent to assuming that $\Gamma$ acts cocompactly on horospheres, Theorem 1.1 follows from the proof of Gromov's theorem on almost flat manifolds (see [BK81, Corollary 1.5.2]), by combining the commutator estimate in almost flat horosphere quotients with the displacement estimate coming from the exponential convergence of geodesics. More recently, Gromov sketched in [Gro91, p. 309] a proof of a more general estimate, where $[x]$ denotes the largest integer $\le x$. If $k \le r+1$, the estimate gives no information, so Gromov asked [Gro91, p. 309] whether it can be improved to an estimate that is nontrivial for all $cd(\Gamma) < \dim(M)$. Theorem 1.1 provides a satisfying answer that involves no dimension assumptions whatsoever.

The proof of Theorem 1.1 follows Gromov's original idea in [BK81], except that the commutator estimate is run in a "central" orbit of an N-structure given by the collapsing theory of J. Cheeger, K. Fukaya, and Gromov [CFG92]. In [BK] we proved the following classification theorem:

Theorem 1.2. [BK] A smooth manifold $M$ with amenable fundamental group admits a complete metric of pinched negative curvature if and only if it is diffeomorphic to the Möbius band, or to the product of a line and the total space of a flat Euclidean vector bundle over a compact infranilmanifold.

The "if" direction in Theorem 1.2 involves an explicit warped product construction of a negatively pinched metric on the product of $\mathbb R$ and the total space of a flat Euclidean bundle over a closed infranilmanifold. By improving this warped product construction, we show that the pinching bounds provided by Theorem 1.1 are essentially optimal.

Theorem 1.3. If $M$ is a pinched negatively curved manifold such that $\pi_1(M)$ has a $k$-step nilpotent subgroup of finite index, then $M$ admits a complete Riemannian metric with $\sec(M) \in [-(k+\epsilon)^2,-1]$ for any $\epsilon > 0$.

The metric constructed in Theorem 1.3 has cohomogeneity one; specifically, $M/\mathrm{Iso}(M)$ is diffeomorphic to $\mathbb R$ (with the only exception when $M$ is the Möbius band equipped with a hyperbolic metric). We do not know whether $M$ in Theorem 1.3 always admits a complete metric with $\sec(M) \in [-k^2, -1]$. This does happen for $k = 1$, since, as we show in [BK], any complete pinched negatively curved manifold with virtually abelian fundamental group admits a complete hyperbolic metric.

Another way to phrase the optimality of Theorem 1.1 is via the concept of pinching. Given a smooth manifold $M$, we define $\mathrm{pinch}_{\mathrm{diff}}(M)$ to be the infimum of $a^2 \ge 1$ such that $M$ admits a complete Riemannian metric with $-a^2 \le \sec(M) \le -1$.
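Anticipating the computation mentioned below, here is the worked consequence of Theorems 1.1 and 1.3 in this notation (our restatement for concreteness, not a verbatim corollary from the original):

```latex
% If \pi_1(M) has a k-step nilpotent subgroup of finite index, then
% Theorem 1.1 gives the lower bound (any metric with sec \in [-a^2,-1]
% forces a \ge k), and Theorem 1.3 gives the upper bound:
\[
  k^2 \;\le\; \operatorname{pinch}_{\mathrm{diff}}(M)
      \;\le\; \inf_{\epsilon>0}\,(k+\epsilon)^2 \;=\; k^2,
\]
% hence
\[
  \operatorname{pinch}_{\mathrm{diff}}(M) \;=\; k^2.
\]
```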
If $M$ admits no complete metric of pinched negative curvature, it is convenient to let $\mathrm{pinch}_{\mathrm{diff}}(M) = +\infty$. We then define $\mathrm{pinch}_{\mathrm{top}}(M)$ to be the infimum of all $\mathrm{pinch}_{\mathrm{diff}}(N)$ where $N$ is homeomorphic to $M$, and define $\mathrm{pinch}_{\mathrm{hom}}(M)$ to be the infimum of the $\mathrm{pinch}_{\mathrm{diff}}(N)$'s where $N$ is a manifold with $\dim(N) = \dim(M)$ that is homotopy equivalent to $M$. Of course, $\mathrm{pinch}_{\mathrm{hom}}(M) \le \mathrm{pinch}_{\mathrm{top}}(M) \le \mathrm{pinch}_{\mathrm{diff}}(M)$. In general, the pinching invariants are hard to estimate and even harder to compute (see [Gro91] and [Bel01, Section 5] for surveys). Combining Theorems 1.1-1.3, we compute the invariants in case $\pi_1(M)$ is virtually nilpotent.

This work was partially supported by the NSF grants # DMS-0352576 (Belegradek) and # DMS-0204187 (Kapovitch). We are thankful to J. Cheeger for a discussion on collapsing.

2. Proof of Theorem 1.1

A Riemannian metric is called $A$-regular if $A = \{A_i\}$ is a sequence of nonnegative reals such that the norm of the curvature tensor satisfies $\|\nabla^i R\| \le A_i$. We call a metric regular if it is $A$-regular for some $A$. The collapsing theory works best for regular metrics, and the Ricci flow can be used to deform any metric with bounded sectional curvature to a complete Riemannian metric that is close to the original metric in the uniform $C^1$ topology, has almost the same sectional curvature bounds, and is regular. (This fact has been known to some experts, but the first written account only recently appeared in [Kap].) Thus we fix an arbitrary $\delta > 0$ and replace the given metric on $M$ by a nearby $A$-regular metric $g$ with $\sec_g \in [-(a+\delta)^2, -1]$, and then prove that $a + \delta \ge k$, which would imply $a \ge k$ because $\delta$ is arbitrary. Since the Riemannian covering of $(M, g)$ corresponding to $\Gamma \le \pi_1(M)$ has the same curvature bounds as $(M, g)$, we can assume that $\pi_1(M) = \Gamma$. Denote the universal cover of $M$ by $X$. If $k = 1$, all we assert is $a \ge 1$, which is trivially true, so we assume from now on that $k > 1$. Then $\Gamma$ fixes a unique point at infinity of the universal cover $X$ of $M$ (see e.g. [BS87]); let $c(t)$ be a ray asymptotic to this point. Since $\sec(X)$ is bounded below, the family $(X, c(t), \Gamma)$ has a subsequence $(X, c(t_i), \Gamma)$ that converges in the equivariant Gromov-Hausdorff topology to $(X_\infty, c_\infty, \Gamma_\infty)$. Now $\sec(X)$ is also bounded above, the metric is regular, and $X$ has infinite injectivity radius, hence the convergence is smooth.

We now review the main results of [CFG92] as they apply to our situation; we refer to [CFG92] for terminology. Fix $\epsilon, \lambda$ with $0 < \epsilon \ll 1 \ll \lambda$. By [CFG92, Theorems 1.3, 1.7, Proposition 7.21], there are positive constants $\rho, \kappa, \nu, \sigma$, depending only on $n, \epsilon, A$, such that for each large $i$, the manifold $M$ carries an N-structure $\mathcal N_i$ and an invariant metric $g_i$; if $\tilde V_i \to V_i$ is the Riemannian universal cover, then $\tilde V_i$ admits an isometric effective action of a connected nilpotent Lie group $G_i$ that acts transitively on the fibers. The above results are stated in [CFG92] in a different form, and their proofs are often omitted or merely sketched, so for the reader's convenience we briefly explain in the appendix how to deduce (i)-(iv). For (v) see [CFG92].

Now we show that the inclusion $V_i \to M$ is $\pi_1$-surjective for all large $i$. Indeed, let $\bar V_i$ be a connected component of the preimage of $V_i$ under the cover $X \to M$, and as before let $\tilde V_i, \tilde O_i$ be the universal covers of $V_i, O_i$, respectively. Fix $\tilde q_i \in \tilde O_i$ and its projections, where $\nu_\kappa$ is the number of normal subgroups of index $\le \kappa$. (Since $\Gamma$ is nilpotent of $cd(\Gamma) < n$, it can be generated by $< n$ elements, so there is a surjection from a rank $n$ free group $F_n$ onto $\Gamma$, and $\nu_\kappa$ is at most the number of normal subgroups of $F_n$ of index $\le \kappa$, i.e.
the number of elements in Hom(F_n, Z_κ), which is at most nκ.) Denote d(q_i, γ(q_i)) by d_γ. Below this notation is used for different distance functions, and each time we specify which metric we use. Since |Γ : Γ_0| < ∞, the nilpotency degree of Γ_0 is k. Thus there are γ_j ∈ Γ_0, j = 1, …, k, such that the k-fold commutator γ = [γ_1, [γ_2, […, γ_k]…]] is nontrivial. Since Γ_0 lies in the image of π_1(O_i) → Γ, we can think of each γ_j as acting on Ō_i ⊂ X, where Ō_i is the preimage of O_i under the cover X → M. Note that one can choose the γ_j's so that their displacements in the intrinsic metric on Ō_i tend to zero (if this were not possible, it would follow that any k-fold commutator in Γ_0 is trivial, so its nilpotency degree would be < k, a contradiction). In particular, for the intrinsic metric induced on Ō_i by g_i the displacements of the γ_j's satisfy d_{γ_j} → 0 as i → ∞. By (i) and (iv) we see that each O_i with the intrinsic metric induced by g_i is almost flat, so the commutator estimate of [BK81, Proposition 3.5 (iii), Theorem 2.4.1 (iii)] for the intrinsic metric on O_i induced by g_i gives

(2.1)  d_γ ≤ C · d_{γ_1} ⋯ d_{γ_k},

where the constant C depends only on n, a. By Rauch comparison for Jacobi fields, the normal exponential map is bi-Lipschitz on the ρ-neighborhood in the normal bundle to O_i, with Lipschitz constants depending on a, n, ρ. Hence the nearest point projection of the ρ-tubular neighborhood of Ō_i onto Ō_i is K-Lipschitz for K = K(a, n, ρ), so any g_i-geodesic of length ≤ 2ρ with endpoints on Ō_i is projected by the nearest point projection to a curve of length ≤ 2ρK. Since the intrinsic displacements of the γ_j's are < 2ρ for all large i, the estimate (2.1) holds, with a different C, for the distance function of the extrinsic metric g_i, and again C only depends on n, a, ε, λ. Finally, since the distance functions of g and g_i are bi-Lipschitz on B_1(p_i), we get the same estimate (2.1) for the original metric g, with C depending on n, a, ε, k, λ. For the rest of the proof we work with displacements in the metric g. Passing to a subsequence of the p_i's, we can find j such that d_{γ_j} ≥ d_{γ_l} for all l, i. Taking logs in (2.1) we get

ln d_γ ≤ ln C + ln d_{γ_1} + … + ln d_{γ_k} ≤ ln C + k ln d_{γ_j}.

Since ln d_{γ_j} < 0 and lim_{i→∞} d_{γ_j} = 0, we deduce lim sup_{i→∞} (ln d_γ / ln d_{γ_j}) ≥ k. On the other hand, by exponential convergence of geodesic rays, for any two elements of Γ, and in particular for γ, γ_j, we get lim sup_{t→∞} (ln d_γ / ln d_{γ_j}) ≤ a + δ, so a + δ ≥ k, which completes the proof.

Remark 2.2. The weaker conclusion a ≥ k − 1 can be obtained by the following easier argument that does not use collapsing theory. The collapsing theory was used in the above proof to get the commutator estimate (2.1), which is a combination of two independent estimates in [BK81]: (a) an estimate of the displacement of a commutator in terms of the displacements and rotational parts of its entries, and (b) a bound on the rotational part of an isometry in terms of its displacement. An alternative way to get (b) in our case is via the rotation homomorphism φ : Γ → O(n), introduced by B. Bowditch [Bow93], which is the holonomy of a Γ-invariant flat connection on X. A key property of φ is that φ(γ) approximates the rotational part of any γ ∈ Γ with error ≤ d_γ. Now since any nilpotent subgroup of O(n) is abelian, φ must have a kernel of nilpotency degree ≥ k − 1. Hence there is a (k−1)-fold commutator in Γ whose entries lie in the kernel of φ, and hence their rotational parts are bounded by their displacements. Repeating the argument at the end of the proof of Theorem 1.1 for this commutator, we get a ≥ k − 1.

3. Infranilmanifolds are horosphere quotients

Let G be a simply-connected nilpotent Lie group acting on itself by left translations, and let K be a compact subgroup of Aut(G), so that the semidirect product G ⋊ K acts on G by affine transformations.
The quotient of G by a discrete torsion free subgroup of G ⋊ K is called an infranilmanifold. We showed in [BK] that any pinched negatively curved manifold with amenable fundamental group is either the Möbius band or the product of an infranilmanifold with R, and conversely, each of these manifolds admits an explicit warped product metric of pinched negative curvature. This section contains a slight improvement of the warped product construction that yields Theorem 1.3. Consider the product of the above G ⋊ K-action on G with the trivial G ⋊ K-action on R. For the G ⋊ K-action on G × R, we prove the following.

Theorem 3.1. If G has nilpotency degree k, then for any ε > 0, G × R admits a complete G ⋊ K-invariant Riemannian metric of sectional curvature within [−(k+ε)^2, −1].

Proof. The Lie algebra L = L(G) can be written as a direct sum L = L_1 ⊕ … ⊕ L_k of subspaces with [L_i, L_j] ⊆ ⊕_{l ≥ i+j} L_l. Indeed, assume i ≤ j and argue by induction on i. The case i = 1 is obvious, and the induction step follows from the Jacobi identity and the induction hypothesis. The group K preserves each L_i, so we can choose a K-invariant inner product ⟨· , ·⟩_0 on L. Let ⟨· , ·⟩_r be the inner product for which the subspaces L_i are orthogonal and ⟨X, Y⟩_r = h_i(r)^2 ⟨X, Y⟩_0 for X, Y ∈ L_i, where the h_i are some positive functions defined below. This defines a G ⋊ K-invariant Riemannian metric g_r on G. Let α_i = i for i = 1, …, k, and a = k. Given ρ > 0, we define the warping function h_i to be a positive, smooth, strictly convex, decreasing function that is equal to e^{−α_i r} if r ≥ ρ, and is equal to e^{−ar} if r ≤ −ρ; such a function exists since a ≥ α_i for each i. Thus h_i′ < 0 < h_i″, and the functions |h_i′|/h_i are uniformly bounded away from 0 and ∞. Define the warped product metric on G × R by g = s² g_r + dr², where s > 0 is a constant; clearly g is a complete G ⋊ K-invariant metric. A straightforward tedious computation (mostly done e.g. in [BW]) yields formulas for the components of R_g in g-orthonormal vector fields Y_s ∈ F_s.

Correction (added on August 28, 2010): The above formula for ⟨R_g(∂/∂r, Y_i)Y_j, Y_l⟩_g is incorrect. A correction can be found in Appendix C of [Bel], where it is explained why the mistake does not affect other results of the present paper.

The above choice of the α_i's implies that if r ≥ ρ, then the structure constants of g_r in a ⟨· , ·⟩_r-orthonormal frame are bounded by a constant C that only depends on the structure constants of L, so we conclude that if r ≥ ρ, then the norm of the curvature tensor of g_r is bounded in terms of C, k [CE75, Proposition 3.18]. The same conclusions trivially hold for r ≤ −ρ, because then g_r is the rescaling of g_0 by the constant e^{−ar} > 1, and also for r ∈ [−ρ, ρ] by compactness, since g_r is left-invariant and depends continuously on r. Since s|Y|_{g_r} = 1 for any g-unit vector Y tangent to G, it follows that as s → ∞, R_g uniformly converges to a tensor R̄ whose nonzero components are

R̄(∂/∂r, Y_i, Y_i, ∂/∂r) = −h_i″/h_i,  R̄(Y_i, Y_j, Y_j, Y_i) = −(h_i′/h_i)(h_j′/h_j).

Thus g has pinched negative curvature for all large s. Finally, we show that for any ε > 0 there exists ρ such that sec_g ∈ [−(k+ε)^2, −1]. Note that h_i″/h_i = ((ln h_i)′)² + (ln h_i)″. By construction |(ln h_i)′| ≤ k. Also let ρ be large enough, so that one can choose h_i on [−ρ, ρ] to satisfy |(ln h_i)″| ≪ ε. Then for all sufficiently large s, the sectional curvature of g is within [−(k+ε)^2, −1].

Proof of Theorem 1.3. By [BK], if a pinched negatively curved manifold has a virtually k-step nilpotent fundamental group, then it is diffeomorphic to the quotient of G × R by a discrete torsion free subgroup of G ⋊ K. Thus we are done by Theorem 3.1.

Appendix A. On collapsing theory

The purpose of this appendix is to outline the proof of the claims (i)-(iv) made in the proof of Theorem 1.1. Some details can be found in [CFG92].
Since g is regular, so is the corresponding metric g̃ on the frame bundle FM. The balls (FB_1(x), g̃) form an O(n)-GH-precompact family, where FB_1(x) denotes the frame bundle over the unit ball B_1(x), x ∈ M. By [Fuk88] the closure of the family consists of regular Riemannian manifolds. So for an arbitrary sequence p_i ∈ M, the manifolds (FB_1(p_i), g̃) subconverge in the O(n)-GH-topology to a pointed regular Riemannian manifold (Y, y). By the local version of Fukaya's fibration theorem, for some sequence δ_i > 0 satisfying δ_i → 0 as i → ∞, there exists for each large i an O(n)-equivariant δ_i-almost Riemannian submersion FB_1(p_i) → Y with nilmanifolds as fibers, which is also an O(n)-δ_i-Hausdorff approximation. Furthermore, each FB_1(p_i) carries an O(n)-invariant N-structure Ñ_i whose orbits are the nilmanifold fibers of the above submersion, and because of the O(n)-invariance, the structure descends to an N-structure N_i on B_1(p_i). By [CFG92, Proposition 7.21], FB_1(p_i) carries a metric g̃_i that is ε-close to g̃ in the C^λ topology, and is both O(n)-invariant and Ñ_i-invariant. Hence g̃_i induces unique Riemannian submersion metrics ḡ_i on Y and g_i on B_1(p_i). To see (ii)-(iv), note that if l ≤ λ − 2, then ‖∇^l R_{ḡ_i}‖ is bounded independently of i, so the sequence ḡ_i is precompact in the C^{λ−2} topology. Then by [PT99, Lemma 2.7], ḡ_i is precompact in the O(n)-C^{λ−2} topology, i.e. after pulling back by self-diffeomorphisms of Y, the metrics smoothly subconverge and share the same isometric O(n)-action. Thus there exists ρ > 0 such that for each large i, the point y ∈ (Y, ḡ_i) lies in a ρ-neighborhood of an O(n)-orbit that has normal injectivity radius ≥ ρ. The preimage O_i of the O(n)-orbit under the Riemannian submersion (FB_1(p_i), g̃_i) → (Y, ḡ_i) satisfies (ii)-(iii). Finally, (iii) implies the second fundamental form bound in (iv), which by the Gauss formula gives a bound on |sec(O_i)|. To see (i), note that the g̃-diameter of any orbit of Ñ_i is ≤ δ_i, so since g̃ and g̃_i are bi-Lipschitz, the g̃_i-diameter of any orbit of Ñ_i tends to zero as i → ∞, and the same holds for orbits of N_i because FB_1(p_i) → B_1(p_i) is distance nonincreasing. Finally, the ambient diameter bound implies the intrinsic diameter bound, because Rauch comparison for Jacobi fields gives bounds on the bi-Lipschitz constants of the normal exponential map of O_i; in particular, the Lipschitz constant of the nearest point projection of the ρ-tubular neighborhood of O_i onto O_i depends only on a, n, ρ, ε, λ.
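For reference, the quantitative core of the proof of Theorem 1.1 can be written out compactly. The product form of the commutator estimate (2.1) below is a reconstruction from the logarithmic inequalities used in the proof, so it should be read as a sketch rather than a quotation of the original display:

```latex
\[
  d_\gamma \;\le\; C\, d_{\gamma_1}\cdots d_{\gamma_k} \;\le\; C\, d_{\gamma_j}^{\,k},
  \qquad d_{\gamma_j} := \max_{l}\, d_{\gamma_l},
\]
\[
  \frac{\ln d_\gamma}{\ln d_{\gamma_j}}
  \;\ge\; k + \frac{\ln C}{\ln d_{\gamma_j}} \;\longrightarrow\; k
  \quad (i \to \infty),
  \qquad
  \limsup_{t\to\infty}\frac{\ln d_\gamma}{\ln d_{\gamma_j}} \;\le\; a+\delta
  \;\Longrightarrow\; a+\delta \;\ge\; k .
\]
```

Dividing by ln d_{γ_j} < 0 reverses the inequality, and ln C / ln d_{γ_j} → 0 because d_{γ_j} → 0; this is exactly the step that converts the collapsing-theory estimate into the pinching bound.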
2014-10-01T00:00:00.000Z
2004-05-30T00:00:00.000
{ "year": 2004, "sha1": "171c83c0f1a7b5d2a79dfbd787a92fed72d8c6d9", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0405579", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "171c83c0f1a7b5d2a79dfbd787a92fed72d8c6d9", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
55282629
pes2o/s2orc
v3-fos-license
Bridging the Gaps Between Students' Prior Knowledge and Skills in Writing and the Expected Thesis Outcomes

This research aimed to shed light on how advisors made use of feedback during supervisory panels and to find out how different ways and types of feedback impacted student-writers' thesis outcomes. A qualitative study was applied, involving three tenured lecturers at a College of Islamic Studies in Java as the research subjects. Two data collection techniques, interview and documentation, were applied to trace evidence on what types of feedback were used, how students noticed them, and how they impacted subsequent drafts. This study revealed that indirect feedback using error codes and commentary was the most frequent form used during the advisory sessions. However, the mere use of feedback could only serve a short-term impact on the development of writing; it even seemed only to spoon-feed the students, which could create burdens in writing. It was quite evident that engaging them in self-regulated and interdependent group work, through problem-solving discussion and peer review, was much more worthwhile than only asking them to process the feedback themselves.

Introduction

Thesis writing is a piece of scientific work that should be composed by undergraduate students in Indonesia, including in colleges of Islamic studies, as a compulsory requirement to complete their degree. Fundamentally, it provides basic knowledge, skills, and experience in research that can help students solve academic problems scientifically and report on the process and results of a study in order to develop science. With the assistance and feedback of advisors, they are required to be able to access information and references from various sources, to organize and make use of them to support ideas, and to articulate their own ideas in order to produce cogent and coherent thesis reports. While advisors' feedback may be of great importance in assisting them to write theses, the pitfalls of feedback practices are often noticed. Hence, it is undoubtedly interesting to examine the strategies and kinds of feedback used by advisors during the supervisory panel and to find out the most suitable types or ways that meet the particular needs of students in writing research reports, particularly in the context of colleges of Islamic studies.

It cannot be denied that most students have problems in thesis writing, especially when they are required to express and develop their own ideas, even though they have taken crucial courses such as academic writing, advanced grammar, research methodology, and other related subjects. Mistakes and errors frequently occur in sentence structure, lexis, spelling, content, or the coherence between the ideas in the paragraphs developed. In response to these problems, feedback is mostly used to assist students so that they can notice the errors and improve the quality of their written drafts. Feedback is given in various types and ways, depending on advisors' own preferences or on students' needs. While some students benefit from getting valuable information about their writing errors and how to deal with those problems, others may get frustrated by feedback they find hard to understand. Therefore, the effectiveness of corrective feedback in writing is one of the controversial issues that never ceases to be discussed by advisors, and it has attracted much attention from researchers.
Generally, it is widely believed that feedback plays an important role in the negotiation effort to help students produce a better piece of writing. In this void, direct or indirect corrective feedback is mostly used to improve the quality of students' writing in various aspects, such as content, lexis, grammar, organization, referencing, and other related components of thesis writing. Some studies have noted that corrective feedback is effective in helping them improve their written works (Sheen, 2007; Bitchener, 2008; Bitchener & Knoch, 2008). It is reported that students who are given feedback have better skills in improving subsequent errors compared with those who are not (Ferris, 1999). In fact, if the feedback is clearly noticed and easily understood by students, it is likely to assist them in increasing their awareness in writing and developing their thesis reports. In short, feedback is necessarily used by advisors so that students know the mistakes they make, and it helps them revise their written works.

While some believe that students benefit from corrective feedback, it is hard to justify to what extent the feedback significantly contributes to the development of students' thesis writing and improves their problem-solving skills. In fact, it is quite often found that students struggle to process feedback and encounter problems in how to fix up the mistakes in their writing based on the feedback they have received from advisors. Several studies indicate that feedback is empirically ineffective (Hyland & Hyland, 2006; Russell & Spada, 2006) and even destructive to the writing skills of learners (Truscott, 1996). It is the case that students occasionally make mistakes in the same areas in their subsequent drafts of writing even though they have received feedback. Moreover, a further drawback is that the amount of feedback can put them under pressure, with predictable results: they might postpone writing, commit plagiarism, get depressed, or even drop out.

Reflecting upon the studies above, it seems that research on feedback is still inconclusive. The inconsistency of findings is inevitable, and it is mainly caused by the inconsistency of the designs or research methodologies used in those studies (Gue'nette, 2007). This may happen because this issue involves at least three inter-related factors which are not constant: students as the feedback users, advisors as the subjects, and the feedback itself. The failure to notice feedback can rest with students. Some problems faced by students in processing feedback have been widely recognized, such as lack of writing skills and practice, insufficient grammar mastery, inability to conduct research, or inadequate critical thinking and effort in solving writing problems. Besides, advisors' ways or techniques of providing feedback may influence whether the feedback is comprehensible for students to process. Moreover, the type of feedback may also serve as a predetermining factor, as particular feedback can be preferable in a certain context as compared to another type.

Regardless of the pitfalls of feedback reported in the studies above, it is believed that corrective feedback is of great importance for students. Feedback can assist them in noticing the flaws or errors they make, for instance in terms of sentence structure, content, punctuation, or research methods, and it assists them in revising and developing their thesis writing.
Furthermore, it also serves as a medium used by advisors to negotiate meaning with students about their writing, to make important confirmations or clarifications, and to provide valuable suggestions to improve the quality of their theses. However, feedback should be adjusted so that it fits students' needs in writing and meets the particular conditions. In one of the State Islamic Colleges in East Java, for instance, huge gaps have been found between the prior knowledge of thesis writing among the majority of student-researchers and the requirements of the thesis outcomes they should produce within a short deadline for final submission. Moreover, a single-supervisor policy has been established under which the advisor should assure the quality of theses in all facets, such as language, mechanical aspects, literature review, research design, and other related aspects. Accordingly, the policy has influenced the supervision process and the quality of the thesis reports composed by students. Besides, feedback serves an important role during the supervision panel, and it has been given in various ways.

While students' prior knowledge of writing and research skills are likely to be problematic issues in the supervisory process, the advisors' ways of giving feedback might be even more crucial to investigate. Hence, this study examined the strategies and kinds of feedback used by the advisors to assist students in writing research reports during the supervisory panel. In particular, it attempted to find out how different ways and types of feedback impact students' thesis outcomes.

Methods

A qualitative approach was applied in this study, as it provides ample opportunity to depict and describe the behavior of the subjects in giving feedback in greater depth, and their perspectives could be maintained to complement the data obtained (McMillan, 2008). This study was conducted in the second semester of the 2015/2016 academic year, and it involved three tenured English lecturers and advisors (US, PR, and WK) in one of the Islamic State Universities in a developing area of East Java. The data obtained were in the forms of words, phrases, and sentences related to the types and tactics used by the advisors in providing feedback on the students' thesis drafts and the factors considered in applying the feedback.
To obtain the required data, two data collection techniques were applied: interviews and documentation. A semi-structured face-to-face interview format was used in order to ensure that the interview activities could run more flexibly, create interest and involvement of the subjects, and obtain the data in greater depth (Robson, 2000). The interview was conducted with the three subjects at their offices, and each subject was interviewed separately. They were asked questions from a list that had been designed around the feedback practices. To get deeper and more comprehensive answers about the types and ways of feedback given by the interviewees during supervisory panels, the questions were detailed to complement, clarify, and specify the data, and any information given by the subjects was noted. In the interview process, the researchers did not use a tape recorder because in some respects it could put pressure on the subjects, affect their openness in providing information, and reduce the validity and consistency of the data obtained. Moreover, this study also used documentation to complement the data generated from the interviews. Some students' thesis drafts were taken to trace evidence, particularly on the types of feedback given, the way students noticed them, and the impact on their subsequent drafts of revision. This technique was conducted by carefully reading and analyzing the feedback given on the students' written work in the drafts and the corrections made by the students in response to the feedback provided. To ease the recording of the data, a noticing table, adopted from Mufanti (2012), was used to trace students' noticing process in feedback.

After the data related to the research subjects' feedback strategies and consideration factors were collected, the data were analyzed qualitatively. The data analysis included a series of activities: identifying data patterns, finding the relationships among the data, making detailed explanations and interpretations, and generalizing the findings based on the theoretical basis used in this study. The data obtained were analyzed in depth to explain the research problems through the three stages of the interactive analysis process adopted from Miles, Huberman, & Saldaña (1994), namely data reduction, data display, and conclusion drawing.

Data reduction was a process of selecting, focusing, simplifying, summarizing, and transforming the data from the field notes or transcripts of conversations. In this study, the data collected from the interviews with the subjects were sorted and classified by gathering similar information together with the data from the document observation. As a result, detailed and focused data were obtained to ease further analysis. After the reduction process was done, the next step was data display. In this phase, the reduced data were compiled, integrated, and systematically presented. Furthermore, the data were then collated and interpreted based on the theoretical basis and the formulation of the problems. Once the data were presented and the interpretation was done, the final stage of the analytical work was drawing conclusions. In this final stage, the research findings were cross-checked against the existing theories and related previous studies to draw conclusions.
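To make the data-reduction stage concrete, the following is a minimal sketch of how feedback annotations transcribed from drafts could be tallied by type. The draft names, records, and legend entries are hypothetical (the codes anticipate the legend reported in the Findings below), and the sketch does not reproduce the actual noticing table adopted from Mufanti (2012):

```python
# A minimal sketch of the tallying part of the data-reduction stage: counting
# advisor feedback by type across annotated drafts. The draft names, records,
# and code legend are hypothetical.
from collections import Counter

CODE_LEGEND = {
    "Ag": "agreement error", "Art": "misuse of article", "F": "wrong word form",
    "W": "word-choice error", "L": "linking-word problem", "C": "collocation error",
    "S": "spelling error", "P": "punctuation error", "R": "wrong register",
}

# Hypothetical error codes transcribed from the margins of three drafts.
annotated_drafts = {
    "draft_IS": ["W", "Art", "P", "L", "F"],
    "draft_RI": ["P", "W", "F"],
    "draft_WI": ["W", "S"],
}

# Aggregate the codes over all drafts and print the most frequent types first.
totals = Counter(code for codes in annotated_drafts.values() for code in codes)
for code, n in totals.most_common():
    print(f"{code:>3} ({CODE_LEGEND[code]}): {n}")
```

Aggregating codes this way is one simple operationalization of the 'selecting, focusing, simplifying, summarizing' activities described above.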
Findings and Discussion

Results of the interview and document analysis showed that most of the subjects relied on the use of feedback to help students notice the errors they made and improve the quality of their writing. While different subjects might make use of feedback in different ways, corrective feedback was the preferable form applied during the supervisory panel. In addition, this type of feedback was given in various forms, in either indirect or direct modes, depending on particular needs or circumstances. The document analysis revealed that indirect corrective feedback was used more frequently than the direct form. In this case, error correction was provided by the advisors using error codes or signs. To indicate where an error existed, techniques such as underlining, circling, or putting an error code over the error were used. The codes were placed within the words or phrases, between the margins, or on the right or left of the margins, and they were mostly written in blue or red to help students find where the error was and identify what type of mistake they had made.

Some common error codes were:
• Ag - agreement error,
• Art - misuse of article,
• F - wrong form of the word,
• W - error in word choice,
• L - problem with linking words,
• C - collocation error,
• S - spelling error,
• P - punctuation error,
• R - wrong register, i.e., too informal.

Besides, some signs were also used to indicate errors, such as:
• ^ - a missing word or expression,
• [(…)] - an unnecessary word that could be omitted, or the words referred to in a footnote,
• ? - the phrase or clause was confusing, could not be understood, or was not logical,
• [Underlined phrases] - syntax was out of control,
• [delete] (struck through) - delete the unnecessary word or phrase, and other related marks.

The extent to which the error codes were used and how they were given in response to writing errors is exemplified in the following excerpts. The three excerpts below were respectively taken from student-writers' thesis drafts (IS, RI, and WI). Excerpt 1 showed that indirect corrective feedback was given in the form of codes to assist the student in noticing the errors. It was obvious that the sentence was quite hard to understand, as it consisted of several different ideas with many grammar errors. For instance, the advisor marked [W] to prompt the student to replace the word 'new', because this word was unlikely to be appropriate to modify the term 'technique', as the writer did not offer a really novel method. This indirectly suggested that the writer replace the word with a more appropriate term, e.g., 'alternative technique'. In addition, other error codes were used, such as 'Art' to show the wrong use of articles; 'P' to mark a punctuation error; the symbol 'L' to indicate that a linking word was wrong; the marker [^] to show that a word (be) was missing; and 'F' to mark an error in word formation where the present participle 'conducting' was required. In addition, underlining the phrase 'teach vocabulary and the make students interested in vocabulary and make them can memorize' indicated that the clause was difficult to understand, as a syntax error.
Furthermore, in Excerpt 2, RI made a very basic error in sentence structure, which made the sentence hard to interpret. Some error codes were placed at the errors to help her easily find where they were, for instance 'P' to show an error in punctuation; 'W' to avoid the use of a personal reference and to indicate an error in word choice; 'F' to show that the use of the noun 'confidence' was not appropriate because the sentence required the adverb 'confidently'; and crossing out the word ('/') to indicate that the sentence did not require the preposition 'with'. Since the sentence contained several serious errors, particularly in syntax, the error code '?' was given in the hope that the writer would realize this problem. Not many other error codes were given, although the sentence contained various other errors.

Similarly, Excerpt 3 presents an indirect form of feedback on WI's draft. It seemed that the sentence contained errors particularly in sentence structure, which might have been affected by the writer's knowledge of her first language. The phrase had inverted wording and did not require a preposition for its auxiliary 'can'. Besides, feedback was also given to suggest that the writer avoid using a personal reference.

Moreover, indirect feedback was also used in the form of comments. This type of feedback was given when the advisors found that students had missed major information that should be included in the writing. The comments were sometimes written in the left or right margins or even on the back side of the paper. The advisors commented on the students' writing to clarify or to suggest improvements to the draft. One instance was IS's draft, where she got feedback in the form of a comment from the advisor: the advisor wanted her to write introductory paragraph headings that could well explain the background of the statements of the problem. This type of comment was beneficial in guiding her to improve the draft, as it provided detailed information on what to write and how she should do it. Besides, comments were also given when the advisors found that the errors were complex, as illustrated by the comment from an advisor on ST's draft, where the sentence consisted of mixed ideas with poor grammatical structure. In response to this, the advisor underlined the most problematic sentence to indicate the error and provided a comment under the margin to help the student identify the problem.

While the results revealed that indirect feedback was dominantly used by the advisors, direct feedback was given some of the time during the supervisory panel. The document analysis showed that direct feedback was frequently given in students' earlier drafts, whereas this immediate correction was found far less often in later drafts.

Excerpt 6 illustrates an instance of immediate correction given by the advisor on IM's draft. This form of feedback was simply used by replacing the incorrect forms of words with the correct ones. In subsequent writing, direct feedback was also given by reformulating students' sentences.

Overall, feedback was used by the subjects to help and assist students in fixing the errors in their drafts and improving the quality of the thesis outcomes. Both indirect and direct forms of feedback were used, depending upon students' needs, knowledge of research, and proficiency levels. While corrective feedback had necessarily been used by the advisors to increase students' writing accuracy, its effectiveness in improving their skills in writing and researching was still questionable, as errors were frequently found in their subsequent writing. Major pitfalls of feedback were noticed, and these needed to be further analyzed.

Factors that Impact on How Students Process Corrective Feedback

The extent to which feedback provides assistance for students in writing has been widely recognized. Corrective feedback is essentially given to help them know their errors and refine writing problems. While the benefit is real, there are some drawbacks found in this study. Since students had different levels of language proficiency, writing experience, and research skills, they processed the feedback in different ways.
One of the main factors which affected students' ability to respond to feedback accurately was their level of proficiency in English. High-proficiency students (HPS) might be better able to process the feedback they received from advisors as compared to low-proficiency students (LPS). With their English skills, HPS could notice the gaps in their writing by observing the indicated errors more easily than LPS could, and this helped them revise the errors appropriately based on the feedback. On the other hand, lacking English skills, LPS might find it difficult to process particular feedback and resolve their problems. Excerpt 7 presents an instance where an LPS unsuccessfully processed the feedback. It was obvious that she did not make any revision at the tick code, and it appeared that she failed to use the article. While the direct feedback gave a clear indication of the error in prepositional use ('about'), the indirect feedback seemed to go unnoticed. The lack of grammar knowledge might have caused her to process the feedback improperly, making an unintended change.

Some feedback might be comprehensible enough to notice, so students could resolve the case. However, it could also be the case that the feedback was simply hard to understand, which caused them to process it incorrectly or to ignore it altogether. Excerpt 8 illustrates the extent to which a student failed to process indirect feedback accurately. The revised version showed that the writer was unable to catch the point of the commentary given by the advisor; as a result, the revised sentence she made was problematic, as it contained errors in sentence structure as well as in the idea. It seemed that little input could be gained from the feedback given, and perhaps her lack of ability in writing a topic sentence was the reason the feedback could not be acted upon. In short, irrespective of students' proficiency levels, it is reasonable to state that the quality of the feedback by itself could determine whether it was comprehensible for students or not.

Moreover, students' prior knowledge related to the topic or to research methodology can also affect how they process the feedback and revise their subsequent writing.

Excerpt 9 shows that students' background knowledge may impact how they process the feedback and revise their drafts. It was obvious that the writer attempted to change the underlined phrase with regard to the comment given by the advisor. A sentence reformulation was made in the subordinate clause, and a change of lexical choice ('the best approach' changed to 'the most appropriate research design') was made, whereas the writer overlooked the feedback and did not provide any supporting reasons to justify her writing. This problem happened not only because of the writer's lack of writing skill or grammar but also due to her lack of background knowledge about research methodology, since she could not provide any justification or reasoning in response to the feedback she received.

The extent to which corrective feedback was effective in assisting students to notice errors and improve the accuracy of their subsequent drafts remained inconclusive. There were some major factors that could influence its effectiveness, such as language proficiency level, writing experience, and background knowledge and research skills. Besides, another significant finding was that students' critical thinking and reasoning skills might also contribute considerably to increasing their awareness in realizing the errors and fixing the problems based on the feedback they received. It was reasonable to conclude that students who had engaged more actively with the advisors or peers to discuss their writing problems or the feedback they received were better able to revise their subsequent drafts than those who had not. They were likely more cognizant of the existing errors in the drafts and had a self-directed awareness to respond to them properly. The feedback gave them valuable information about what they should do and how, and such higher-order thinking and reasoning skills assisted them in fixing the errors and helped them elaborate their ideas.
To sum up, while such corrective feedback was beneficial in informing students of the problematic issues in their writing and in serving immediate corrections, this effort seemed only to provide a short-term impact. Some drawbacks of the use of corrective feedback, such as making similar mistakes in subsequent writing, ignoring the feedback, or making unintended changes, showed that this kind of feedback contributed only a little to the students' development of writing skills, higher-order thinking, or problem-solving skills. Moreover, it was found that engaging students in problem-solving discussions among peers on a regular basis could better increase their awareness in responding to the advisors' feedback and help them improve the quality of their drafts.

Engaging Student-Writers in Problem-Solving Discussion and Peer Review

It could not be denied that corrective feedback played an important role in assisting students to develop their thesis drafts. However, the mere use of corrective feedback might also have negative results, such as less awareness in analyzing their own mistakes, a lack of critical analysis skills, depending too much on the advisors' feedback, encountering similar errors in subsequent writing, confusion and frustration, and other related burdens. Realizing that corrective feedback alone could serve only short-term impacts on students' drafts and writing skills, a valuable measure applied by some of the research subjects to overcome these problems was to involve the students as active participants in problem-solving discussion and peer review sessions.

Engaging students in problem-solving discussion and peer review benefitted them by making them highly critical of the writing problems they encountered and by enabling meaning negotiation. By involving students in small group discussions, they had a wide range of opportunities to share their problems in writing, get suggestions from various points of view, and also learn from others' works. It was worthwhile to work in a group discussing the feedback, as they could notice the errors more comprehensively and fix them more accurately compared to working alone. Moreover, students got great chances to negotiate their writing problems deliberately, and this was beneficial for increasing their critical analysis skills. As a result, students could resolve more writing problems and increase the quality of their drafts.

To engage them in such problem-solving groups and peer review, there were several stages. Firstly, several training sessions were conducted in advance, and the students were grouped. Reflective feedback was given by utilizing the errors made by the group, for instance focusing on subject-verb agreement, as the sources of discussion. In turn, students were asked to present their writing problems to be discussed in the group with the advisors' assistance. While one presented, the other members were required to pay attention and give suggestions. In order to strengthen their comprehension, clarification was provided by the advisors. Based on the suggestions, the students were assigned to revise their drafts.

Peer review, in addition, was done to help students learn from each other's strengths and weaknesses. Students were assigned to regulate the peer review sessions independently, drawing on the peer review training they had received beforehand. To ensure that the small group discussions and peer review ran effectively, the advisors monitored and checked the progress while asking for improvements.
In sum, engaging students in both problem-solving groups and peer review brought them several positive effects, such as increased awareness in processing feedback, more time for meaning negotiation leading to increased comprehension, enhanced critical analysis skills, and improved accuracy and quality of writing. While such efforts seemed complicated and cost considerable time and energy within a very tight schedule, this was likely the most reliable measure the research subjects could take so that the feedback would be comprehensible for students and they could improve the quality of their thesis writing.

Discussion

The present study was carried out in order to figure out how the research subjects made use of feedback to bridge the existing gaps between student-writers' ability in thesis writing and the expected thesis outcomes they should produce within a short time period. Most significantly, it was found that corrective feedback, in either indirect or direct form, was frequently used. Since it is much easier to maintain, indirect corrective feedback is preferable to other types. It is provided by underlining, circling, or putting error codes or markers over the errors to indicate where they are. In responding to more sophisticated problems, indirect feedback with commentary is given to guide the students in what and how they should revise. This is in line with Hyland's suggestion (1990: 283) that the writer can see how someone actually responds to their writing as it develops: where an idea gets across, where a confusion arises, where logic or structure breaks down.

While the thesis supervisory process relied more on using corrective feedback to help students recognize errors and improve writing accuracy, the findings revealed that the mere use of it, particularly in the direct way, could serve only a short-term effect on students' writing development. Some essential abilities that students need in writing and researching, such as critical analysis and problem-solving skills, did not increase. Although corrective feedback was given, students tended to make similar mistakes in their subsequent writing, to ignore feedback, to show less self-awareness in analyzing their own mistakes, and to rely solely on the advisors' feedback.

Furthermore, since not every piece of feedback could be comprehended and processed by students, it was possible that they had negative attitudes towards the feedback they received. When analyzed, it is quite evident that the mere use of corrective feedback suffers from several major drawbacks, such as less awareness in analyzing one's own mistakes, a lack of critical analysis skills, and over-reliance on the advisors' feedback (Truscott, 1996).

Interestingly, corrective feedback was still essential, and most students preferred to have it. However, it is hard to justify what and how feedback is comprehensible and meaningful for students. The findings revealed that there were some considerations applied in giving feedback so that it could satisfy most students: advisors should be aware of when and how to give it, the type of errors and its focus, the context, and students' proficiency levels and preferences; this is also asserted by Mufanti (2014). These considerations are reasonable, since each student has a different noticing skill in processing corrective feedback, encounters different writing problems, and processes the corrective feedback differently as well. Therefore, the type and the form of the errors should be considered when giving feedback; for instance, in the first draft feedback is only focused on the content, and then grammar and lexis become the focus.

Another finding confirmed the association between engaging students in negotiation activities, through problem-solving discussion and peer review sessions, and improvement in processing feedback and in writing accuracy. A likely explanation is that problem-solving discussion and peer review provided a wide range of opportunities for students to critically analyze their own or others' strengths and weaknesses, and they worked together to solve writing problems and improve the quality of their writing.
It is almost certain that feedback can only benefit students if they are involved in self-regulated and interdependent group work where they can learn from each other. While it seems hard to set up and maintain these activities for the first couple of sessions, it becomes much easier to manage after students have engaged in several meetings, as they become more aware of the importance of group discussion and peer review for themselves. This adds to the study conducted by Min (2006) into the effects of trained peer review on students' revision types and writing quality.

Conclusions

Corrective feedback, in either direct or indirect form, was inevitably used during the supervisory panel. The findings demonstrated that indirect feedback using error codes was the most frequent form used by advisors, and it was occasionally complemented with commentary when they found the writing problems sophisticated. To make it more beneficial for student-writers, some considerations should be taken into account, such as understanding when and how to give it, the type of errors and its focus, the context, proficiency levels, needs in writing, and preferences.

However, the mere use of corrective feedback could contribute less and serve only a short-term impact on the development of their writing. The excessive use of feedback would only spoon-feed them; as a result, it could create burdens in writing, such as ignoring the feedback, having less self-awareness in analyzing their own mistakes, and counting only on the advisors' feedback. In this void, it was quite evident that engaging them in self-regulated and interdependent group work was more meaningful and beneficial as compared to only asking them to process it themselves. Therefore, problem-solving discussion and peer review sessions were the most reasonable measures to bridge the existing gaps in thesis writing, as they provided huge chances for students to critically analyze their own and others' writing and to work collaboratively to process the feedback and improve the quality of writing.

This study was limited by the lack of document analysis with a broader focus of questions, and its results might not be generalizable to other contexts. Further research can involve students' perceptions of the efficacy of feedback to triangulate the data so that the consistency of the findings can be maintained.
2018-12-05T14:04:28.026Z
2017-10-10T00:00:00.000
{ "year": 2017, "sha1": "a49c1f8e0339f1a0660c6cc5e688c153d34442f8", "oa_license": "CCBY", "oa_url": "https://jees.umsida.ac.id/index.php/jees/article/download/1600/1803", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "a49c1f8e0339f1a0660c6cc5e688c153d34442f8", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
246889906
pes2o/s2orc
v3-fos-license
Bisphenol A-Related Effects on Bone Morphology and Biomechanical Properties in an Animal Model

Bisphenol A (BPA), which is contained in numerous plastic products, is known to act as an endocrine-disruptive, toxic, and carcinogenic chemical. This experimental series sought to determine the influence of BPA exposure on the femoral bone architecture and biomechanical properties of male and female Wistar rats. BPA was applied subcutaneously by using osmotic pumps. After 12 weeks, the bones were analyzed by micro-computed tomography (micro-CT) and a three-point bending test. Comparing the female low- and high-dose groups, a significantly greater marrow area (p = 0.047) was identified in the group exposed to the higher BPA concentration. In addition, the trabecular number tended to be higher in the female high-dose group when compared to the low-dose group (p > 0.05). The area moment of inertia also tended to be higher in the male high-dose group when compared to the male low-dose group (p > 0.05). Considering our results, BPA-related effects on bone morphology in female Wistar rats are osteoanabolic after high-dose exposure, while in male rats a tendency toward negative effects on bone morphology, in terms of a reduced cross-sectional cortical area and total area, could be demonstrated.

Introduction

Bisphenol A (BPA) is one of the most frequently produced chemical compounds worldwide, and it is a substance harmful to the human organism, which is why it is part of the American National Toxicology Program and the subject of numerous studies [1]. BPA was first synthesized in 1891 by Dianin; 45 years later, Dodds and Lawson showed that it had estrogenic effects [2][3][4]. Today, BPA is part of the synthesis of various plastic polymers, polycarbonates, and epoxy resins [5]. Thus, humans are in constant and close contact with this chemical, especially because BPA can enter many food products through wrapping and canning as part of the packaging process [5,6]. Lorber et al. estimated that the average amount of BPA ingested by adults per kg of body weight is between 0.03 and 0.07 µg per day, and it may be even higher in Asian countries [7,8].

BPA acts as an endocrine-disruptive chemical and has antiandrogenic effects. It binds to the estrogen receptors (ER alpha and beta) and interacts with estrogen-related pathways [9][10][11]. Furthermore, BPA-related toxic effects decrease sperm quality and fertility, and BPA can encourage the development of malignant diseases such as prostate and breast cancer, which is the subject of a controversial discussion [10][12][13][14][15][16]. In 2018, Chin et al. highlighted the effects of BPA and its derivatives on bone in a review [17]. By affecting the endocrine system, BPA causes stress on osseous tissue, leading to cell death and DNA damage [18]. BPA inhibits osteoclast formation in vitro and induces apoptosis of both osteoblasts and osteoclasts by interfering with their differentiation [19]. Ovariectomized rats showed trabecular bone loss after BPA exposure, which could not be demonstrated in male rats, where the bone density was higher in the BPA-exposed groups. In contrast, bisphenol A diglycidyl ether (BADGE), a derivative of BPA, led to an increase in trabecular number in female and male rats, as well as to an increased bone density in female rats [17]. In addition, Li et al. concluded that BADGE improves bone quality in rat animal models and is osteoinductive in non-ovariectomized rats.
There was no significant bone formation in ovariectomized rats. According to a study by Thent et al., the molecular mechanism underlying the decrease in bone density occurs via binding of BPA to the estrogen-dependent gamma-type receptors [20]. This binding reduces bone morphogenic protein 2 (BMP-2) formation and alkaline phosphatase activity [20]. BPA disrupts bone metabolism via the receptor activator of NF-κB ligand (RANKL), apoptosis, and the Wnt/beta-catenin signaling pathway [20]. Thent et al. also described BPA exposure as resulting in a loss of bone mass due to decreased plasma calcium levels and inhibition of calcitonin secretion. Facing these different and somewhat contradictory studies, our aim was to identify the effects of low- and high-dose BPA on the structure and the biomechanical properties of bone in growing male and female Wistar rats via continuous application of BPA.

Experimental Setup

In this study, 36 Wistar rats that were 10 weeks old (Charles River Laboratories, Research Models and Services; Sulzfeld, Germany) were randomly assigned to one of three different groups: control (n = 12), low-dose BPA (n = 10), and high-dose BPA (n = 12). With three groups each for male and female rats, there were six groups in total. The animals were housed in our local facility (Institute for Experimental Surgery; Rostock, Germany) under a 12 h day/night cycle and had ad libitum access to food and water. The rats were housed in a climatized room at a temperature of 22 °C and a relative humidity of 51%. The daily dose in the low-dose group was 2.55 µg of BPA per rat, while that in the high-dose group was 39 µg of BPA per day and rat. Transferring the low-dose (10.2 µg of BPA per kg bodyweight per day) and high-dose (156 µg of BPA per kg bodyweight per day) administration to an adult human with 70 kg body weight, this would be equal to 714 µg of BPA per day in the low-dose group and 10,920 µg of BPA per day in the high-dose group. The different dosages of BPA that were administered are based on the scientific opinion of the European Food Safety Authority (EFSA) in 2015. The rats weighed approximately 0.25 kg at the beginning of the experiments. The low-dose group reflects a dosage above the tolerable daily intake (TDI) most recently defined by the EFSA in 2015 (4 µg of BPA per kilogram bodyweight per day) [21]. BPA was dissolved in DMSO, which is ubiquitously used as a solvent [22]. The control group was exposed to DMSO only, delivered by osmotic pumps from the same manufacturer, and was not exposed to BPA. The osmotic pumps were chosen in order to apply either BPA or the control solution evenly and with a constant flow rate, achieving maximum consistency of administration; the manufacturer guaranteed constant application for 4 weeks for each pump. The subcutaneous administration of BPA is known to have effects on the BPA plasma concentration identical to those of the oral route while being more practical [23][24][25]. During the 12 weeks of survival, the animals underwent two additional surgeries for an exchange of the pumps 4 and 8 weeks after the initial implantation. As part of the last operation, the rats were euthanized, and their femora were harvested. An illustration of the experimental setup is shown in Figure 1. The tissue samples were stored in an 8% formaldehyde solution in a freezer at −18 °C.
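As a quick check of the dose conversion above, the arithmetic is plain per-kilogram linear scaling, the same scaling used in the text (no allometric correction is applied); the body weights are the stated approximations:

```python
# Reproducing the dose arithmetic reported above (values from the text;
# body weights are the stated approximations, so results are illustrative).
RAT_WEIGHT_KG = 0.25    # approximate rat weight at the start of the experiments
HUMAN_WEIGHT_KG = 70.0  # reference adult body weight used in the text

for label, daily_dose_ug in [("low-dose", 2.55), ("high-dose", 39.0)]:
    per_kg = daily_dose_ug / RAT_WEIGHT_KG     # µg per kg bodyweight per day
    human_daily_ug = per_kg * HUMAN_WEIGHT_KG  # µg per day for a 70 kg adult
    print(f"{label}: {per_kg:g} µg/kg/day -> {human_daily_ug:g} µg/day")

# Prints:
# low-dose: 10.2 µg/kg/day -> 714 µg/day
# high-dose: 156 µg/kg/day -> 10920 µg/day
```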
This experimental study was performed according to the German Laws on Animal Protection and was approved by the local animal care and use committee (Landesveterinär- und Lebensmitteluntersuchungsamt Mecklenburg-Vorpommern, Germany; reference/protocol number: 7221.3-1-055/16).

Osmotic Pump Implantation

The rats were anesthetized by intraperitoneally injecting ketamine hydrochloride 10% (100 mg per kg bodyweight, Bela-Pharm; Vechta, Germany) and xylazine hydrochloride 2% (25 mg per kg bodyweight, Rompun®, Bayer Vital; Leverkusen, Germany). While undergoing surgery, the rats were positioned prone on a heating pad (Klaus Effenberger, Medizinische Geräte; Pfaffing, Germany) to counteract thermal dysregulation due to anesthesia. Postoperatively, oral analgesia was administered by adding metamizole to the drinking water (100 mg per kg bodyweight, Novaminsulfon-ratiopharm®, Ratiopharm; Ulm, Germany). From a posterior approach, a 2 cm skin incision between both medial scapula margins was performed after shaving and sterilizing an area of 3 × 3 cm with povidone-iodine solution (Betaisodona®, Mundipharma GmbH; Frankfurt am Main, Germany). After blunt preparation, the osmotic pumps were implanted epifascially. The incision was closed intracutaneously (Vicryl® 3-0, Johnson & Johnson; New Brunswick, NJ, USA) and disinfected again with the povidone-iodine solution. No wound infections were observed, and it was never necessary to widen the skin incision when exchanging the pump, since the pump could always be removed through the existing scar from the previous surgery. The osmotic pumps (5.1 cm length, 1.4 cm diameter; ALZET® MODEL 2ML4; DURECT Corporation; Cupertino, CA, USA) were used for the continuous application of BPA and of the control solution DMSO. The pump rate was 2.5 µL/h (standard deviation ± 0.05 µL/h). DMSO does not affect rat bone, and it is frequently used in experimental studies with rats due to its generally good tolerability [20,26]. The interventions took 5 min each. As mentioned above, after 12 weeks the final surgery was performed, in which the rat femora were collected and the rats were euthanized by anesthesia overdose followed by cervical dislocation.

Micro-Computed Tomography (Micro-CT)

Micro-CT analysis was used to visualize the structural properties of the cortical and trabecular bone. The scans were performed using a high-resolution device (µCT 35, Scanco Medical AG; Brüttisellen, Switzerland; 70 kVp, 114 µA, 400 ms integration time). The isotropic voxel dimension was adapted to the different sizes of the animals by sex: 12 × 12 × 12 µm for female and 20 × 20 × 20 µm for male rats.
By using a constrained Gaussian filter (trabecular bone: support = 1, sigma = 0.8; cortical bone: support = 2, sigma = 0.8), grayscale CT images were segmented; accordingly, the background noise of the original data could be reduced. For assessment of trabecular bone, 75 (1.5 mm, males) or 125 (1.5 mm, females) slices were evaluated in the distal femur with the volume of interest 2.5 mm below the growth plate, which included only the secondary spongiosa. The structure of the cortical bone was analyzed at the mid diaphysis. In doing so, 1 mm of bone, resulting in 83 slices in female and 50 slices in male rats, was included. In order to extract the trabecular bone, data were globally thresholded at 26%.
For the cortical bone, a threshold of 30% was used. For the trabecular bone, the following parameters were obtained: bone volume fraction (BV/TV), trabecular number (Tb.N, 1/mm), trabecular separation (Tb.Sp, mm), and trabecular thickness (Tb.Th, mm). For the cortical bone, the following parameters were obtained: total cross-sectional area inside the periosteal envelope (Tt.Ar, mm²), cortical thickness (Ct.Th, mm), cortical area (Ct.Ar, mm²), bone marrow area (mm²), and the ratio of cortical area to total area (Ct.Ar/Tt.Ar, %). In addition, the length of the femora (mm) and the connectivity density (mm⁻³) were measured. All analyses followed the guidelines for the assessment of bone microstructure in rodents using micro-computed tomography by Bouxsein et al. [27].

Biomechanical Testing

The servo-controlled electromechanical testing machine Z2.5/TN1S (ZwickRoell; Ulm, Germany) with a 2.5 kN load cell was used to analyze the biomechanical properties of the femoral rat bones by recording load-displacement and stress-strain curves until the breaking point. The two support points (4 mm diameter) on which the bone rested were 15 mm (females) or 20 mm (males) apart, and the probe applied the force perpendicularly to the middle of the specimen in an anterior-posterior direction. The preload was 0.1 N (0.1 mm/s), and the loading rate was 10 mm/min. The following parameters were obtained until the bones broke: ultimate load (N), ultimate displacement (mm), energy (mJ), energy density (mJ/mm³), stiffness (N/mm), ultimate stress (MPa), ultimate strain, and Young's modulus (MPa). The Young's modulus and the stiffness were defined by the slope of the linear part of the stress-strain and load-deformation curves, respectively. For the calculation of the stress-strain curve, peripheral quantitative computed tomography (pQCT, XCT Research SA, Stratec Medizintechnik; Pforzheim, Germany) was used to determine the area moment of inertia (mm⁴). All analyses of the biomechanical testing were based on the guide "Basic Biomechanical Measurement of Bone: A Tutorial" by Turner and Burr [28].

Statistical Analysis

All results were evaluated with SPSS Statistics (Version 25; IBM, Armonk, NY, USA). First, the data were tested for normal distribution with the D'Agostino-Pearson normality test, which did not show a consistent normal distribution for all of the values. Hence, a nonparametric Kruskal-Wallis test (H-test) was performed to determine whether the values, considered as independent samples, differed in terms of central tendency; this test does not require normally distributed data [29]. Subsequently, Mann-Whitney U tests were performed between each pair of groups; this test is also valid when the data are not normally distributed [29]. No adjustment for multiple testing was performed. A difference was considered significant when the p-value was less than 0.05. Tendencies were defined by p-values between 0.05 and 0.1.
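To make the derivation of the mechanical parameters concrete, the following minimal Python sketch computes the stiffness, ultimate stress, and Young's modulus from a load-displacement curve using the standard beam formulas for three-point bending described by Turner and Burr [28]. All numerical values, array names, and the choice of the elastic region below are illustrative assumptions, not measured data from this study.

```python
import numpy as np

# Illustrative three-point bending data (force in N, displacement in mm)
L = 20.0   # support span for male rats (mm)
I = 8.5    # area moment of inertia from pQCT (mm^4), assumed value
c = 1.9    # distance from the neutral axis to the bone surface (mm)

force = np.array([0.0, 20.0, 40.0, 60.0, 80.0, 95.0])          # N
displacement = np.array([0.0, 0.05, 0.10, 0.15, 0.21, 0.30])   # mm

# Stiffness = slope of the linear part of the load-deformation curve
elastic = slice(0, 4)  # indices assumed to lie in the elastic region
stiffness = np.polyfit(displacement[elastic], force[elastic], 1)[0]  # N/mm

# Beam formulas for three-point bending (N and mm give stress in MPa)
ultimate_stress = force.max() * L * c / (4.0 * I)        # MPa
youngs_modulus = stiffness * L**3 / (48.0 * I)           # MPa

print(f"stiffness       = {stiffness:.1f} N/mm")
print(f"ultimate stress = {ultimate_stress:.1f} MPa")
print(f"Young's modulus = {youngs_modulus:.1f} MPa")
```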
Results

Two rats of the female high-dose group died during the first anesthesia; therefore, a total of 34 instead of 36 animals were included for further examination of the bones and statistical analysis. When comparing the three different groups of each sex with the Kruskal-Wallis test, no significant differences were detected. The reported p-values refer to the Mann-Whitney U tests that were performed in parallel.

Significant differences were found when comparing the male and female control groups, in which only the control solution DMSO was administered. They differed significantly in terms of length of the femora (p = 0.004), bone volume fraction (p = 0.018), connectivity density (p = 0.004), trabecular number (p = 0.01), trabecular thickness (p = 0.025), trabecular separation (p = 0.01), total cross-sectional area inside the periosteal envelope (p = 0.004), cortical area (p = 0.004), and bone marrow area (p = 0.004). The three-point bending test found significant differences regarding the area moment of inertia (p = 0.004), ultimate load (p = 0.01), ultimate displacement (p = 0.016), and energy (p = 0.004) when comparing both control groups. Except for the bone volume fraction, the connectivity density, and the trabecular number, which were higher in the female control group, all other parameters were higher in the male control group. Figure 2 shows cross-sectional micro-CT images of the femoral meta- and diaphysis which were used for the measurements of the aforementioned morphological parameters.

Since these clear differences were already apparent when comparing the male and female control groups, the comparison of male and female rats was limited in its significance when comparing the low-dose and high-dose groups with each other. Nevertheless, the control groups were crucial when estimating dose-related effects of BPA within the groups of male and female rats.

Micro-Computed Tomography

Analyzing the bone morphology by micro-CT revealed several group-specific differences. The bone volume fraction is the ratio of bone volume to total tissue volume in a cross-sectional micro-CT slice, expressed as a percentage. The bone volume fraction of the female control group was significantly higher when compared with the male control group (p = 0.01). We could show a tendency not only when comparing the female low-dose and high-dose groups (p = 0.076), but also when comparing the female high-dose and control groups (p = 0.076). In both cases, the mean bone volume fraction was higher in the high-dose group (Figure 3).

Further statistical tendencies could be shown when comparing the female low-dose and high-dose groups with respect to the number of trabeculae (p = 0.076) and the total cross-sectional area inside the periosteal envelope (p = 0.076); however, both the trabecular number and the total cross-sectional area inside the periosteal envelope were higher in the low-dose group. In male rats, statistical tendencies could be shown by comparing the low-dose group with the high-dose group regarding the length of the femora (p = 0.062), the total cross-sectional area inside the periosteal envelope (p = 0.062), and the cortical area (p = 0.088). The femora were longer in the high-dose group, but with lower values of the cortical area of the same femora. Cross-sectional areas of the bone marrow were significantly higher in the female high-dose group than in the female low-dose group (p = 0.047, Figure 4).

Three-Point Bending Test

The evaluation of the biomechanical properties of the female femoral rat bones showed tendencies toward a decrease in the ultimate displacement (p = 0.076), as well as the ultimate strain (p = 0.076), in the groups exposed to higher BPA concentrations (Figures 5 and 6). Another tendency could be found between the male low-dose and high-dose groups regarding the area moment of inertia (p = 0.062), which was higher in the low-dose group (Figure 7).
Figures 4-7 report measurements obtained 12 weeks after administration of the control solution (CTL; n = 6 CTL-f, n = 6 CTL-m), low-dose BPA (LD; n = 5 LD-f, n = 5 LD-m), and high-dose BPA (HD; n = 5 HD-f, n = 7 HD-m). The low-dose group was administered 10.2 µg BPA per kg bodyweight per day, and the high-dose group was administered 156 µg BPA per kg bodyweight per day. The data are visualized with boxplots; the mean is marked with a cross (+) and tendencies (p < 0.1, but ≥ 0.05) are marked with a pound sign (#). Differences between male and female rats are not marked.

Discussion

In this experimental study, 18 healthy male and 16 female rats were kept under controlled conditions for 12 weeks in an animal facility to reveal any effects of continuous subcutaneous application of low and high dosages of bisphenol A on bone morphology and biomechanics. The statistically significant differences in the micro-CT analysis and the biomechanical properties between the male and female control groups described above were due to sex-specific differences in height and weight. This resulted in significantly larger bones in male rats and, subsequently, a higher biomechanical load-bearing capacity in the three-point bending test and larger trabecular and cortical bone structures in the micro-CT analysis. Considering the results and related tendencies, as well as the significant difference in the bone marrow area when comparing the female low-dose and high-dose groups, these differences were conspicuously seen more often in female rats. Additionally, the statistical tendencies were all found between the low-dose and high-dose groups, except for the bone volume fraction, where the high-dose and control groups differed. In male rats, no statistically significant differences were found in relation to the control group, and the tendencies were all between the low-dose and high-dose groups. The differences between the micro-CT parameters, as seen in Figures 3 and 4, were sex-specific and related not only to endogenous hormones but also to the body mass of the rats [27,30]. We could show a tendency regarding the bone volume fraction in the female high-dose group compared with the control group, as well as when comparing the female low-dose group with the high-dose group.
In both cases, the proportion of bone tissue with respect to the total tissue was higher in the group exposed to higher BPA concentrations. In addition, a greater number of trabeculae was found in the female high-dose group when compared with the low-dose group; hence, we assume that BPA might increase trabecular structures in rat bone. This was underlined by the tendency toward a higher total cross-sectional area in the female high-dose group when compared with the female low-dose group. Most importantly, the cross-sectional area of the bone marrow was significantly higher in the female high-dose group than in the female low-dose group. This is similar to the findings of Lejonklou et al., who identified a significantly higher bone mineral content in female rats after high-dose exposure to BPA compared with low-dose exposure [31]. It also suggests that high-dose BPA exposure might rather have an anabolic effect on bone tissue, which corroborates our hypothesis. Possible explanations for the significantly greater bone marrow area in the female high-dose group compared with the low-dose group are BPA-induced thicker trabecular bone structures resulting in a greater marrow space, as well as a thicker cortical bone. Lind et al. observed BPA-induced bone marrow fibrosis in female rats, which might also be a reason for the significantly increased bone marrow area in our rats [32]. However, the cortical thickness and area did not show any tendency or statistically significant increase in any of the groups. When the cortical cross-sectional area alone was considered, no statistically significant difference was found when comparing the female cohorts. In contrast, one tendency indicated a higher cross-sectional area inside the periosteal envelope in the male low-dose group when compared with the male high-dose group. Taking the BPA-related effects on female rat bone into account, a possible threshold of BPA could be discussed, above which BPA was osteoinductive in female rats but negatively influenced bone metabolism in male rats. Previously, this correlation could only be proven in part by other studies [17,31-33].

Tendencies in the biomechanical analyses could be shown between the female low-dose and high-dose groups. Lower ultimate strain and lower ultimate displacement in the female high-dose group compared with the low-dose group indicated a possibly higher resistance of the femora of the female low-dose group and, subsequently, a positive effect of low-dose BPA on bone stability. Tendencies toward a higher total cross-sectional area, as well as a higher cortical cross-sectional area and femur length in the low-dose group, underlined this assumption. According to a study by Miki et al., BPA affects human fetal and adult osteoblasts via steroid receptors [34]. Their study showed an activation of the cell proliferation rate and an accumulation of collagen in osteoblasts; thus, the authors hypothesized that BPA affects bone metabolism negatively in humans. Another study showed a reduced femur length and trabecular area in male offspring of pregnant rats who were exposed to low-dose BPA concentrations in their drinking water [33]. This conflicts with the observations made in our experiment when comparing the male low-dose and high-dose groups.
After re-evaluation of these results with 52-week-old rats, the male cohort that was prenatally exposed to increased levels of BPA showed a normalization of the parameters collected to assess the bone structure and was comparable to the control group [32]. Lejonklou et al. demonstrated elongated rat femora in the female offspring of pregnant rats exposed to BPA via tube feeding, and thicker cortical bone in the male offspring, with no significant change in biomechanical parameters [31]. When analyzing our data, numerous tendencies could be shown in the comparison of the low- and high-dose groups within the male and female groups. Our results are mostly based on tendencies that indicate an osteoinductive effect of BPA after high-dose exposure in the female cohort, as well as a negative effect on bone metabolism after high-dose exposure in male rats only. This partially corroborates our hypothesis and the findings of Lind et al. described above [32,33]. Nevertheless, these findings are mostly based on weak tendencies between two of the three groups of each sex.

Regarding the methods of our study, the subcutaneous application of BPA by osmotic pumps, which release their content via diffusion of molecules through a semipermeable membrane, represented a reliable procedure [35,36]. The amount of BPA in the high-dose group was based on the adverse effects of BPA that other studies found when administering the same or lower BPA dosages [37]. Adverse effects include toxicity to the reproductive system [38,39], as well as the estrogen-mimicking and endocrine-disrupting effects of BPA [17]. Regarding the advantages of micro-CT, it seemed favorable compared with procedures addressing histomorphology only, as it creates a 3D reconstruction of trabecular bone structures. Accordingly, the evaluation of trabecular bone structures was not based on previously obtained two-dimensional data. Micro-CT imaging is also advantageous due to the large area that can be analyzed, the speed of the analysis, and the fact that the objects are not damaged by imaging them [27,40]. In addition to micro-CT imaging, we used the three-point bending test to detect BPA-related changes in the rat bone. Both methods can be used to reliably assess the microarchitecture, as well as the biomechanical properties, of bones. However, they cannot provide information about cellular processes or bone metabolism on a molecular level.

One weakness of our study was the sex-related difference in size and weight and the associated differences in the measurement results of the biomechanical tests, as well as the micro-CT examination. As a result, it was not possible to pool the four BPA-exposed male and female groups into one low-dose and one high-dose group. Furthermore, even if osmotic pumps deliver drugs reliably, they cannot provide constant dosages per kilogram bodyweight, because the rats gain weight over time, which would require the dosage to increase with the natural increase in bodyweight. Daily bodyweight-dependent dosages could be realized by subcutaneous injections in future studies. Furthermore, due to sex-specific weight differences at the very beginning of our experiments, male rats were 14% heavier than the females (302 g vs. 259 g); hence, the BPA dose per kilogram bodyweight was higher in female rats than in the male cohort. One should also take into account that the BPA-mediated effects on bone are not linear [17].
The dosages of daily applied BPA in other studies varied from 0.25 to 50,000 µg per kilogram bodyweight. The heterogeneity of former study designs makes it harder to compare them with our findings [31,33], and, as a result, inconsistencies are more likely to occur. Additionally, adding a mid-dose group might have been advantageous to further elucidate the dose-dependent effect of BPA. Furthermore, we focused on certain bone properties in terms of micro-computed tomographic morphology and biomechanics only. We found a significantly greater bone marrow area in female rats after high-dose BPA exposure when compared with the female low-dose group, which could indicate a possible osteoanabolic effect of BPA in female rats. In the same group, the trabecular number and the bone volume fraction were also higher, underlining this hypothesis. Nevertheless, further studies are needed to provide additional data, since our results only showed tendencies, except for one significant difference regarding the bone marrow area of the female cohort.

Conclusions

BPA is a ubiquitously used chemical with various adverse effects not only on rodents but also on human metabolism and the human hormonal system. We found an increase in the bone marrow area after high-dose exposure to BPA in female rats, as well as nonsignificant tendencies toward an increase in the number of trabeculae and the bone volume fraction in the micro-CT analysis. In male animals, only a negative tendency of a change in bone morphology could be identified. Considering our data, the effects of BPA on the biomechanical properties and the morphological changes quantified by micro-CT were limited. Further and longer-lasting studies are needed to analyze the molecular interaction of different dosages of BPA with bone and to clarify BPA-related long-term effects.
Dynamic Interactions of CdSe/ZnS Quantum Dots with Cyclic Solvents Probed by Femtosecond Four-Wave Mixing

We studied dynamic interactions between CdSe/ZnS quantum dots (QDs) and cyclic solvents probed by femtosecond four-wave mixing. We found that the dynamic interactions of QDs strongly depend on the existence of π-bonds in the solvent molecules.

Introduction

Colloidal semiconductor quantum dots (QDs) have attracted enormous interest in the past two decades, since they show excellent photoluminescence (PL) properties such as a narrow PL linewidth and high quantum efficiency. The PL of semiconductor QDs can be tuned over the entire visible region by changing the size of the QDs. These excellent properties of semiconductor QDs enable a wide range of applications such as the imaging of biomolecules [1]. Much research focuses on further improving the PL properties by suppressing blinking phenomena in QDs [2].

Another property, the optical coherence of excitons in QDs, has been of great interest from the viewpoints of fundamental physics and applications. The optical coherence properties of QDs result from the fluctuation of the transition frequency induced by the dynamic interactions of the QDs with the surrounding environment. We studied the dynamic interactions of QDs with cyclic solvents through femtosecond four-wave mixing (FWM) signals, and found that QDs in cyclic solvents show a particular dephasing behaviour depending on the existence of π-bonds in the solvent molecules. In contrast to previous works on optical coherence in QDs [3,4], we could observe unexpectedly long dephasing times in solvents with no π-bonds, meaning much weaker dynamic interactions of the QDs with those solvent molecules. The measured dynamic interactions were compared with static interactions of the QDs, such as the peak shifts of the PL and absorption bands.

In order to study the dynamic interactions of QDs with cyclic solvents, femtosecond FWM spectroscopy on QD solutions was performed using the second-harmonic light (at 1.92 eV) of the output from an optical parametric amplifier pumped with a femtosecond regenerative amplifier. The pulse duration of the second-harmonic light was measured to be approximately 47 fs. We employed the conventional two-beam (k1, k2) excitation geometry in the femtosecond FWM experiments, and the FWM signals in the 2k2 − k1 direction were measured at room temperature as a function of the delay time τ between the two beams [5].

Results and discussion

Figure 1 displays the results of the femtosecond FWM measurements performed for CdSe/ZnS colloidal QDs in (a) benzene and toluene and (b) cyclohexane (CH) and tetrahydrofuran (THF) solutions, where the autocorrelation trace of the exciting femtosecond pulses is denoted by a dotted line as a reference. As shown in Fig. 1(a),
it is observable that the dominant components of the FWM signals in benzene and toluene decay very fast, with time constants close to or less than our time resolution of about 50 fs. These findings mean that the dynamic interactions of the QDs with benzene and toluene molecules are very strong. As these signal decays almost follow the autocorrelation trace of the femtosecond pulses, a numerical simulation would be needed to obtain precise dephasing times; however, we do not carry out such a simulation because it is outside our major scope. In addition, one can find much smaller contributions to the FWM signals with longer dephasing times for τ > 200 fs in both solutions. The optical dephasing times for these contributions are tentatively estimated to be about 604 and 720 fs for the benzene and toluene solutions, respectively.

In sharp contrast to the FWM signals in benzene and toluene, Fig. 1(b) definitely reveals that the FWM signals in the CH and THF solutions show extremely slow single-exponential decays. The optical dephasing times of the QDs are obtained as 1160 and 720 fs for the CH and THF solutions, respectively, under the assumption that the FWM signal intensity I(τ) decays following a single exponential function exp(−4τ/T2) with dephasing time T2. Since these findings contradict previous works on the optical dephasing of QDs, which indicate that the optical dephasing times of QDs at room temperature should be less than 10 fs, we confirmed the reproducibility of the FWM signals of the QDs in the CH and THF solutions by repeating the sample treatment and the FWM measurements. We always obtained the same FWM signal traces as in Fig. 1(b) in these confirmation steps. Therefore, we believe that the slow decays of the FWM signals in the CH and THF solutions manifest the unexpectedly weak dynamic interactions of the QDs with CH and THF molecules.

To further examine the origin of the FWM signal of the QDs in the CH and THF solutions, the spectrum of the FWM signal at τ = 0 was measured and compared with those of the excitation pulses and of the absorption in the QDs, as shown in Fig. 2. As the spectral peak of the FWM signal in the THF solution is located at the absorption peak of the exciton band in the QDs, it is concluded that the FWM signal originates from the exciton transition in the QDs, not from surface states or photoproducts.

The dynamic interaction between QDs and solvent molecules can be modelled by employing a stochastic transition frequency fluctuation δω(t) of the QDs induced by the solvents, which obeys a Gaussian process with ⟨δω(t)⟩ = 0 and ⟨δω(t)δω(t′)⟩ = Ω² exp(−R|t − t′|), with Ω and R being the magnitude and the rate of the fluctuation, respectively. The fluctuation δω(t) gives a decay of the FWM signal intensity expressed as I(τ) ∝ exp(−4Ω²τ/R). According to our FWM measurements described above, it is inferred that the cyclic solvents with benzene rings, i.e., with π-bonds, give a much stronger dynamic interaction with the QDs than those without π-bonds, which leads to large values of Ω and, as a result, to the very fast decays of the FWM signals in the benzene and toluene solutions. This is probably because the density distribution of π-electrons in a cyclic molecule extends out of the molecular plane, while the density distribution of σ-electrons is strictly confined within the molecular plane.

Fig. 1. The FWM signals of QDs observed in four solvents: (a) benzene (solid) and toluene (dot-dashed); (b) CH (solid) and THF (dot-dashed). The dotted lines denote the autocorrelation trace of the exciting pulses. The straight solid and dot-dashed lines in (b) show the exponential fits used to obtain the optical dephasing times in the CH and THF solutions.

Fig. 2. The spectrum (dot-dashed line) of the FWM signal from QDs in THF at τ = 0, compared with the absorption spectrum (solid line) of the QDs and the spectrum (dotted line) of the exciting femtosecond pulses.

Conclusion

We have studied the dynamic interactions of QDs with the cyclic solvents benzene, toluene, CH and THF by femtosecond four-wave mixing and found a particular dephasing behaviour depending on the existence or absence of π-bonds in the cyclic solvent molecules. The dependence is interpreted in terms of the density distributions of π- and σ-electrons in the solvent molecules.
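As an illustration of how a dephasing time can be extracted from such a slow single-exponential decay, the following short Python sketch fits the model I(τ) = I₀ exp(−4τ/T₂) to a synthetic FWM trace. The data are simulated with an assumed T₂ of 1160 fs (the value reported above for the CH solution) and an arbitrarily chosen noise level; the trace is illustrative, not measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic FWM trace: delay times in fs, decay with an assumed T2
tau = np.linspace(100, 1500, 30)           # delay times (fs)
T2_true = 1160.0                           # assumed dephasing time (fs)
signal = np.exp(-4 * tau / T2_true)
signal += np.random.default_rng(0).normal(0, 0.005, tau.size)  # noise

def fwm_decay(t, i0, t2):
    # Single-exponential FWM decay model, I(tau) = I0 * exp(-4*tau/T2)
    return i0 * np.exp(-4 * t / t2)

popt, _ = curve_fit(fwm_decay, tau, signal, p0=(1.0, 500.0))
print(f"fitted T2 = {popt[1]:.0f} fs")     # should recover ~1160 fs
```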
Generalized epidemic model incorporating non-Markovian infection processes and waning immunity

The Markovian approach, which assumes exponentially distributed interinfection times, is dominant in epidemic modeling. However, this assumption is unrealistic, as an individual's infectiousness depends on its viral load and varies over time. In this paper, we present a Susceptible-Infected-Recovered-Vaccinated-Susceptible epidemic model incorporating non-Markovian infection processes. The model can be easily adapted to accurately capture the generation time distributions of emerging infectious diseases, which is essential for accurate epidemic prediction. We observe noticeable variations in the transient behavior under different infectiousness profiles and the same basic reproduction number $R_0$. The theoretical analyses show that only $R_0$ and the mean immunity period of the vaccinated individuals have an impact on the critical vaccination rate needed to achieve herd immunity. A vaccination level at the critical vaccination rate can ensure a very low incidence among the population in case of future epidemics, regardless of the infectiousness profiles.

I. INTRODUCTION

The widely used formulation of compartmental epidemic models in terms of ordinary differential equations (ODEs) implicitly assumes both a constant probability per unit of time of leaving the infectious state (recovery rate) and a constant transmission probability per unit of time (transmission rate). This is analogous to the setting where the sojourn times in the infectious state (infectious period) and the generation (or interinfection) times are exponentially distributed. Following [1,2], we define generation times as the time between the infection of a secondary case and the infection of the corresponding primary case. However, many empirical studies have shown that the exponential distribution does not fit well the clinical data on sojourn times in several compartments of an infectious disease model. For example, several studies have shown that the generation times for the spreading of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are not exponentially distributed [3-5]. This necessitates the development of proper epidemic models that consider nonexponential sojourn times.

Already in the foundational paper by [6], the formulation of the Susceptible-Infected-Recovered (SIR) model assumed a transmission probability depending on the time after infection, also called the age of infection. The reason for that is clear: an individual's infectiousness depends on their viral load, which, in turn, varies over time. Similar ages are also introduced when the probability of processes like the loss of immunity depends on the time after entering the recovered state (time since clearance). In such cases, the dynamics are described by non-Markovian processes, as the current status of individuals depends on their complete history within a given compartment. Consequently, the sojourn time in each state and the generation time no longer follow an exponential distribution. In a deterministic context, this fact leads to the formulation of epidemic models in terms of partial differential equations (PDEs), where a population is described by densities with respect to one or more of those times or ages. For the age-of-infection SIR model, the PDE corresponds to the so-called McKendrick-von Foerster equation (see [7] for a model with several ages, including the age of vaccination).
Such a formulation, equivalent to renewal equations under sufficient regularity conditions [8], allows the analysis of the impact of non-Markovian processes on the epidemic spread. Recently, a PDE formulation at the node level has also been used to model epidemic spread on complex networks [9,10]. Staged-progression epidemic models are an alternative way to model non-Markovian epidemics. These models are halfway between simple ODE compartmental models and PDE models because they consider a sequence of infectious stages (compartments) of different lengths, each with its own recovery rate and transmission rate [11,12]. So, they can be considered a sort of discretization of the PDE models [13]. Indeed, these models have been used in the literature to approximate nonexponential infectious periods by subdividing the infectious compartment into several subcompartments with exponentially distributed infectious periods. The original distribution is then approximated by a sum of exponential distributions [14].

In this paper, we formulate a Susceptible-Infected-Recovered-Vaccinated-Susceptible (SIRVS) epidemic model and provide theoretical analyses of the model regarding the equilibria and the critical vaccination rate. Following [15], the latter is obtained from the bifurcation from the disease-free equilibrium where susceptible and vaccinated individuals are present. We perform PDE numerical integration and agent-based simulations to examine the impact of infectiousness profiles and vaccination rates on the epidemic dynamics under these two approaches. In particular, agent-based simulations allow us to assess the impact of population sizes on the occurrence of secondary waves. The contributions of the paper are summarized as follows:

• We present a general method to model non-Markovian infection processes from rate-based transitions. In the agent-based simulations, transitioning from Markovian to non-Markovian infection processes is achieved by adjusting the value of the infectiousness parameter, which results in the desired generation time distributions. This implementation option provides a straightforward way to create agent-based models comparable to PDE models.

• We include the effects of recovery while calculating the infectiousness profiles, which is more realistic compared with previous models, which model the infectiousness profiles independently of recovery.

• With the same $R_0$, we observe significant differences in the transient phase between non-Markovian and Markovian models, and the magnitude of the differences is affected by the infectious period. The transient phase refers to the early stages of the epidemic dynamics, when the number of infections changes and the system is far from the steady state.

• We provide equilibrium analyses of the model and conclude that only $R_0$ and the mean immunity period of the vaccinated individuals have an impact on the critical vaccination rate needed to achieve herd immunity.

• A continuous vaccination of the population at the predicted critical rate ensures a very low incidence among the population in case of future epidemics, regardless of the infectiousness profiles.

• To the best of the authors' knowledge, this work for the first time explores the potential contribution of agent-based models contrasted with PDE models in non-Markovian epidemic modeling. We observe that the median values of the simulation results with secondary waves are close to the results of the deterministic PDE model for sufficiently large population sizes.
In contrast, simulations produce patterns not predicted by the PDE model when population sizes are sufficiently small and the stochastic extinction of the disease becomes an important factor after an initial outbreak.

II. THE REPRODUCTION NUMBER AND GENERATION TIMES

Suppose the recovery rate γ and the infectiousness (per-contact transmission probability) β are both functions of the age of infection τ, i.e., γ = γ(τ) and β = β(τ), and assume a constant contact rate c per individual in a randomly mixed population. In that case, the basic reproduction number $R_0$ is the sum of the infections caused by an infected individual at each age of infection in a totally susceptible population, conditioning on the probability of being infectious at each age. So, we have

$$R_0 = \int_0^\infty c\,\beta(\tau)\, e^{-\int_0^\tau \gamma(s)\,ds}\, d\tau, \qquad (1)$$

where the exponential term is the probability of being infectious at time τ since infection, and $\eta(\tau) = c\,\beta(\tau)\, e^{-\int_0^\tau \gamma(s)\,ds}$ is the infectivity of an individual at the age of infection τ [14]. In other models, the contact rate c is included in the definition of β, which is then called the effective contact rate [16]. A simple but important remark follows from (1), namely, if β is constant, then $R_0$ depends on the mean infectious period $\bar\tau_I$ but not on its particular distribution: $R_0 = c\,\beta\,\bar\tau_I$. A similar result follows for staged-progression models if β is constant in each compartment, $R_0 = c \sum_{i=1}^{n_{ic}} \beta_i\, \bar\tau_I^i$, with $n_{ic}$ being the number of infectious compartments [11].

Normalizing the infectivity η(τ) by $R_0$, we obtain the probability density function (PDF) of the generation times during the initial phase of an epidemic [2,17]:

$$w(\tau) = \frac{\eta(\tau)}{R_0} = \frac{c\,\beta(\tau)\, e^{-\int_0^\tau \gamma(s)\,ds}}{R_0}. \qquad (2)$$

The interinfection times generated according to this time-independent PDF have been called intrinsic generation times, to distinguish them from the realized generation times as the epidemic progresses [18]. The realized generation time distribution changes over time due to changes in individuals' contact patterns, the depletion of the susceptible population, and the competition among infectors [18-21].

From the relationship between the transmission probability β(τ), the recovery rate γ(τ), and the generation time distribution w(τ) given by Eq. (2), it follows that an equivalent approach to studying non-Markovian infection processes is one based on the distribution itself of the generation times during the epidemic spread. For instance, such an approach has been used to simulate stochastic epidemics on networks. This relationship clearly shows that changing the profile of w(τ) will affect the epidemic threshold, because it implies a change in the profile of β(τ), even if the mean infectious period and the mean transmission rate are kept the same. This is what was observed in [22].

Empirical knowledge of w(τ) at the beginning of an epidemic helps to estimate $R_0$ from the initial epidemic growth rate r by means of the relation [2,14,17]

$$\frac{1}{R_0} = \int_0^\infty e^{-r\tau}\, w(\tau)\, d\tau, \qquad (3)$$

obtained from the Euler-Lotka equation after replacing η(τ) by $R_0\, w(\tau)$. This expression also says that, if we set $R_0$ to a fixed value, then different generation time distributions w(τ) will lead to different initial epidemic growth rates r and, hence, to different transient behaviors of the epidemic.

III. THE SIRVS MODEL

In this paper, we generalize the SIRVS epidemic model with waning immunity for recovered (R) and vaccinated (V) individuals considered in [15] by introducing an age of infection for the individuals in the I compartment, and an age of immunity (time since clearance) for the individuals in the R and V compartments.
The population in each compartment at time t is then described by the densities I(t, τ), R(t, τ) and V(t, τ) with respect to the corresponding sojourn time τ in the compartment. As in [15], the epidemic time scale is supposed to be much faster than the time scale of demographic processes (growth, births, and deaths), which allows us to consider that the population remains constant and equal to N, that is,

$$S(t) + \int_0^\infty I(t,\tau)\, d\tau + \int_0^\infty R(t,\tau)\, d\tau + \int_0^\infty V(t,\tau)\, d\tau = N.$$

Moreover, we assume that the recovery rate γ(τ) satisfies $\lim_{\tau\to\infty} \tau\, e^{-\int_0^\tau \gamma(s)\,ds} = 0$. The same condition is satisfied by the rates δ(τ) and $\delta_v(\tau)$ of immunity loss in the R and V compartments, respectively. This hypothesis guarantees a finite mean sojourn time in any of these compartments; for instance, for the I compartment, $\bar\tau_\gamma = \int_0^\infty e^{-\int_0^\tau \gamma(s)\,ds}\, d\tau < \infty$. Here, v ≥ 0 stands for the vaccination rate of susceptible and recovered individuals. According to the previous assumptions, the equations governing the dynamics of the SIRVS model are given by

$$\partial_t I(t,\tau) + \partial_\tau I(t,\tau) = -\gamma(\tau)\, I(t,\tau),$$
$$\partial_t R(t,\tau) + \partial_\tau R(t,\tau) = -(\delta(\tau) + v)\, R(t,\tau),$$
$$\partial_t V(t,\tau) + \partial_\tau V(t,\tau) = -\delta_v(\tau)\, V(t,\tau),$$
$$\frac{dS}{dt} = -\phi(t)\,S(t) - v\,S(t) + \int_0^\infty \delta(\tau)\,R(t,\tau)\, d\tau + \int_0^\infty \delta_v(\tau)\,V(t,\tau)\, d\tau,$$

where φ denotes the force of infection (the rate at which a susceptible individual becomes infected) and is given by

$$\phi(t) = \frac{c}{N} \int_0^\infty \beta(\tau)\, I(t,\tau)\, d\tau.$$

These equations are endowed with the boundary conditions at τ = 0

$$I(t,0) = \phi(t)\,S(t),\qquad R(t,0) = \int_0^\infty \gamma(\tau)\, I(t,\tau)\, d\tau,\qquad V(t,0) = v\left(S(t) + \int_0^\infty R(t,\tau)\, d\tau\right).$$

Note that, if all the rates are constant, we obtain the original ODE model in [15] by integrating the first three equations of the SIRVS model with respect to τ.

IV. EQUILIBRIA AND THE CRITICAL VACCINATION RATE

Setting the time derivatives equal to 0, it follows that the equilibrium densities satisfy

$$I^*(\tau) = \phi^* S^*\, e^{-\int_0^\tau \gamma(s)\,ds},\qquad R^*(\tau) = R^*(0)\, e^{-\int_0^\tau (\delta(s)+v)\,ds},\qquad V^*(\tau) = V^*(0)\, e^{-\int_0^\tau \delta_v(s)\,ds},$$

where $\phi^* = \frac{c}{N}\int_0^\infty \beta(\tau)\, I^*(\tau)\, d\tau$ is the equilibrium force of infection, and $\bar\tau_\delta = \int_0^\infty e^{-\int_0^\tau (\delta(s)+v)\,ds}\, d\tau$ is the mean sojourn time in the R compartment. Note that $\bar\tau_\delta$ takes into account that an R individual can lose its immunity and become susceptible or, alternatively, move to the V compartment if vaccinated. Introducing the expression of I*(τ) into that of φ* and using (1), it follows that

$$\phi^* = \phi^*\, R_0\, \frac{S^*}{N}.$$

So, either φ* = 0, which corresponds to the disease-free equilibrium (DFE), or φ* > 0 and then $R_0 S^*/N = 1$, which corresponds to the unique endemic equilibrium. The DFE is then given by I*(τ) = 0, R*(τ) = 0, and

$$S^* = \frac{N}{1 + v\,\bar\tau_{\delta_v}},\qquad V^*(\tau) = v\,S^*\, e^{-\int_0^\tau \delta_v(s)\,ds},$$

where $\bar\tau_{\delta_v}$ is the mean immunity period of vaccinated individuals and it is used that $\int_0^\infty V^*(\tau)\, d\tau = v\, S^*\, \bar\tau_{\delta_v}$.

At the endemic equilibrium (I*(τ), V*(τ), R*(τ)), the fraction of susceptible individuals at equilibrium is

$$s^* = \frac{S^*}{N} = \frac{1}{R_0},$$

which is the same well-known relationship between s* and $R_0$ as for the standard SIS (and SIRS) models [23]. Note that, to have an endemic equilibrium (s* < 1), it must hold that $R_0 > 1$. The value of φ* is obtained from the condition that the total population equals N, which amounts to

$$\frac{1}{R_0}\Big(1 + \phi^*\,\bar\tau_\gamma + \phi^*\,\bar\tau_\delta + v\,(1 + \phi^*\,\bar\tau_\delta)\,\bar\tau_{\delta_v}\Big) = 1,$$

where $\bar\tau_\gamma$ is the mean infectious period. So, the force of infection at the endemic equilibrium is given by

$$\phi^* = \frac{R_0 - 1 - v\,\bar\tau_{\delta_v}}{\bar\tau_\gamma + (1 + v\,\bar\tau_{\delta_v})\,\bar\tau_\delta}. \qquad (4)$$

Note that, with vaccination, φ* > 0 requires $v < (R_0 - 1)/\bar\tau_{\delta_v}$. So, assuming this condition and dividing the equilibrium densities by the total population N, it follows that the normalized equilibrium densities i*, w* and r* in the I, V and R compartments, respectively, are given by

$$i^*(\tau) = \phi^*\, s^*\, e^{-\int_0^\tau \gamma(s)\,ds},\qquad r^*(\tau) = \phi^*\, s^*\, e^{-\int_0^\tau (\delta(s)+v)\,ds},\qquad w^*(\tau) = v\, s^*\,(1 + \phi^*\,\bar\tau_\delta)\, e^{-\int_0^\tau \delta_v(s)\,ds}.$$

The condition for a bifurcation from the DFE is obtained by imposing that the right-hand side of (4) is equal to 0. In particular, using v as a tuning parameter, the resulting critical vaccination rate is

$$v_c = \frac{R_0 - 1}{\bar\tau_{\delta_v}}.$$

Note that, since this paper considers waning immunity, continuous vaccination campaigns are required to preserve herd immunity. The critical vaccination rate defines the minimum supply of vaccine that ensures that the system always reaches the DFE after the introduction of new cases. In other words, this vaccination rate confers herd immunity on the population and thus prevents future major epidemic outbreaks. Interestingly, only the mean immunity period of vaccinated individuals (but not the distribution of its duration) is relevant for $v_c$.
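As a worked check, using the parameter values adopted later in Sec. VI ($R_0 = 2.5$ and a constant rate of immunity loss $\delta_v = 0.0055$ day⁻¹, so $\bar\tau_{\delta_v} = 1/\delta_v \approx 181.8$ days):

$$v_c = \frac{R_0 - 1}{\bar\tau_{\delta_v}} = (R_0 - 1)\,\delta_v = 1.5 \times 0.0055 = 0.00825\ \text{day}^{-1},$$

which is exactly the critical vaccination rate quoted in the general setup below.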
In particular, the threshold condition obtained in [15] follows from the expression for $v_c$ after replacing $\bar\tau_{\delta_v}$ by $1/\delta_{v_0}$, the mean duration of immunity arising from vaccination when $\delta_v$ is constant and equal to $\delta_{v_0}$.

V. AGENT-BASED STOCHASTIC SIMULATIONS

To perform stochastic simulations, we reconceptualize the mathematical formulation from an agent-based perspective. The PDE model adopts an aggregate representation of the entire population. In comparison, agent-based models (ABMs) enable us to analyze the overall system behavior emerging from the behaviors and interactions of autonomous agents. In the model, each person agent follows the SIRVS transition process shown in Fig. 1. At the start of the simulation, all individuals are equally susceptible, except for a small fraction of the population that is randomly selected to enter the infectious state and start the epidemic. Each person agent i records the time when it transitioned to the infectious state (became infected), denoted by $t_0(i)$. Each person contacts c other agents on average per day. Every time an infectious person agent i executes a contact event, it fires an infection event with probability β(τ), where τ is its age of infection (current time $t - t_0(i)$). If the infection event happens, person agent i randomly selects a person agent j from the whole population to transmit the infection to. If the selected person j is in the susceptible state, person j immediately transitions to the infectious state; otherwise, person j remains in its current state. Later, person i leaves the infected state and transitions to the recovered state at the recovery rate $\gamma(\tau) = 1/\bar\tau_\gamma$, leading to exponentially distributed infectious periods. In addition, a person agent in the recovered or susceptible state transitions to the vaccinated state according to the same rate v as defined in the PDE model. As immunity wanes over time, a person in the vaccinated or recovered state transitions back to the susceptible state at the immunity loss rate $\delta = \delta_v$.

VI. RESULTS

In this section, we compare the epidemic dynamics of the SIRVS models with different infectiousness profiles, infectious periods, and vaccination rates.

A. General setup

Results are obtained from both agent-based simulations and the PDE model formulation. The ABMs are implemented in the AnyLogic 8 university researcher version. The PDE system of the model is numerically integrated using a finite difference scheme based on the one introduced in [24]. Agent-based simulations are performed with the same parameter values as the mathematical model, with 500 simulation runs for each scenario. In all scenarios, the basic reproduction number $R_0$ is set to 2.5, and the contact rate c equals 10. In addition, we assume that recovered and vaccinated individuals have perfect protection against infection for a mean immunity period of six months, based on references [25,26]. In particular, the rates of immunity loss are assumed to be constant and equal to $\delta = \delta_v = 0.0055$. For these values of $R_0$ and $\delta_v$, the corresponding critical vaccination rate is $v_c = 0.00825$. Finally, the recovery rate γ(τ) is also assumed to be constant and, hence, equal to $1/\bar\tau_\gamma$. For scenarios with constant infectiousness, β(τ) is constant and equal to $R_0\gamma/c$. In this case, since γ is also constant, the generation time is exponentially distributed: $w(\tau) = \gamma\, e^{-\gamma\tau}$ (cf. Eq. (2)).
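The contact-and-infection step of the ABM described in Sec. V can be summarized in the following minimal Python sketch. It is a schematic re-implementation only (the actual model was built in AnyLogic): the class layout, the Poisson draw for the daily number of contacts, and all function names are our own illustrative assumptions.

```python
import numpy as np
from dataclasses import dataclass

rng = np.random.default_rng(1)

@dataclass
class Agent:
    state: str = "S"   # one of S, I, R, V
    t0: float = 0.0    # time of entry into the infectious state

def daily_contacts(i, population, t, beta, c=10):
    """One day of contact events for an infectious agent i.

    beta(tau) is the age-of-infection transmission probability from
    Eq. (2); tau = t - t0 is the agent's current age of infection.
    """
    tau = t - i.t0
    for _ in range(rng.poisson(c)):       # on average c contacts per day
        if rng.random() < beta(tau):      # rate-based infection event
            j = population[rng.integers(len(population))]
            if j.state == "S":            # only susceptibles change state
                j.state = "I"
                j.t0 = t                  # record the infection time
```

Recovery, vaccination, and immunity loss would be handled analogously as rate-based transitions with rates γ, v, and δ = δᵥ, respectively.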
For varying infectiousness profiles, following [3], we assume that w(τ) follows a Weibull distribution,

$$w(\tau) = \frac{k}{\lambda}\left(\frac{\tau}{\lambda}\right)^{k-1} e^{-(\tau/\lambda)^k},$$

with the shape parameter k kept unchanged and the scale parameter λ adjusted to yield the desired value of the mean generation time (MGT). More precisely, from the expression for the mean of a Weibull distribution, it follows that $\lambda = \mathrm{MGT}/\Gamma(1 + 1/k)$, where Γ denotes the gamma function. Then, β(τ) follows from Eq. (2) with $R_0 = 2.5$ and c = 10. So, the generation time distribution w(τ) is introduced only to obtain β(τ), which is then used to trigger an infection event once an infectious contact has occurred. In other words, the simulations do not use timeout-triggered infection transmissions based on w(τ), but rather rate-based transitions (see Fig. 1). The realized generation times are then recorded to check the accuracy of the procedure.

There exists a variety of estimated epidemiological parameters for COVID-19. For example, the MGT of the alpha and delta SARS-CoV-2 variants is estimated between 3.44 and 7.5 days [19]. Similarly, variations exist regarding the duration of the infectious period [27]. The central values reported for the Weibull shape parameter k and the scale parameter λ in [3] are 2.826 and 5.665, respectively. In this paper, we consider MGT varying between 4 and 8 days at intervals of one day, and $\bar\tau_\gamma = 14$ or $\bar\tau_\gamma = 7$ days. Figure 2 shows the infectiousness profiles corresponding to the different MGTs and $\bar\tau_\gamma$. In the ABM, we record the infection times between infector and infectee for all simulation runs associated with index cases and plot their distributions in Fig. 3. Index cases refer to those individuals infected at the beginning of the epidemic, who are used to introduce the disease into the population. The mean infectious period $\bar\tau_\gamma$ used to generate Fig. 3 is equal to 14 days. However, its precise value is irrelevant to the measured Weibull generation time distribution as long as it is far enough from 0. This fact, numerically verified for several values of $\bar\tau_\gamma$, confirms that the computation of β(τ) from Eq. (2) counterbalances the recovery effects (see Fig. 2).

B. Scenarios without vaccinations

In this section, 0.01% of the total population is initially infected, and the remaining population is susceptible at the beginning of the epidemic. Without vaccinations, the results from the ABM and the PDE model are presented in Figs. 4 and 5. Overall, the model with time-varying infectiousness profiles (Fig. 5) leads to more oscillations, and of greater amplitude, when compared with the model with constant infectiousness profiles (Fig. 4). Due to their stochastic nature, ABMs provide extra patterns not observed in the PDE model. As ABMs treat each individual as an agent, whereas PDE models can have fractions of an individual, all curves obtained with the PDEs are associated with secondary waves resulting from a damped oscillatory approach to the endemic equilibrium. At the same time, in the ABM (Fig. 5), 454 out of 500 (90.80%) simulation runs result in secondary waves, and the rest die out after the first epidemic wave. Accordingly, we present the median values of the simulation runs with secondary waves together with the PDE results in Fig. 4(a) and Fig. 5(a). With a population size of 500,000, the ABM results resemble the PDE results very well. Figure 6 shows the impact of the population size and the infectiousness profiles on the risk of secondary waves. We can see that the chance of secondary wave occurrences rises along with the increase in MGT.
This is probably due to the fact that individuals who remain infectious for a long time have higher infectiousness towards the end of their infectious period as the MGT increases, because the infectiousness profile gets stretched out to the right. On the other hand, the percentage of simulation runs with secondary epidemic waves in populations of size 20,000 is lower than that in populations of size 100,000 and 500,000. So, with the same fraction of infected cases, larger populations have a higher chance of secondary outbreaks than small populations. Simulation runs without secondary waves may be associated with epidemics dying out after the first peak, with initial extinctions, or with no index cases. Focusing on whether secondary waves appear after the first peak, we calculate the risk of secondary waves in Fig. 6 without taking into account those simulation runs associated with initial (stochastic) extinctions or those without selected index cases. At the beginning of each simulation run, each agent enters the infected state with probability 0.0001. This leads to stochasticity in the number of index cases and thus to simulation runs that fail to introduce index cases in populations of size 20,000, for which the expected number of index cases is only 2. Let us denote the percentages of total simulation runs with initial extinctions and without index cases as $p_{ex}$ and $p_{no}$, respectively. The values of $p_{ex}$ and $p_{no}$ become substantially smaller as the population size increases. For example, with populations of size 20,000, $p_{ex} = 10.40\%$ ($p_{no} = 11.00\%$) for MGT = 4 days, and $p_{ex} = 15.80\%$ ($p_{no} = 12.60\%$) for MGT = 8 days. With populations of size 100,000, $p_{ex} = 0.4\%$ ($p_{no} = 0$) for MGT = 8 days, and, with populations of size 500,000, $p_{ex} = 0$ ($p_{no} = 0$) for MGT = 8 days.

As shown in Fig. 7(a), with the mean infectious period $\bar\tau_\gamma = 14$ days, when the MGT increases from 4 to 8 days, the peak time of the epidemic waves is postponed (from day 41 to day 79 for the first peak) with the peak height reduced by 29.95%. This shift in the time of the first peak is consistent with the lower initial epidemic growth rates predicted by Eq. (3) for larger MGTs. Precisely, the predicted values of the initial growth rate are as follows: r(MGT=4) = 0.2464, r(MGT=5) = 0.1971, r(MGT=6) = 0.1642, r(MGT=7) = 0.1408, r(MGT=8) = 0.1235, and r = 0.1071 for constant β. All of them are in agreement with the initial growth rates estimated from the simulated epidemic curves. In comparison, the differences between epidemic curves due to the shift in the infectiousness profiles are less pronounced when the range of MGT gets closer to the mean infectious period, e.g., $\bar\tau_\gamma = 7$ in Fig. 7(b). Remarkably, the only initial growth rate that changes is the one for constant β, which is now higher than those corresponding to the largest MGTs (predicted initial growth rate for constant β: r = 0.2143). This is due to the fact that, for constant β, the generation time distribution is equal to the distribution of the length of the infectious period (cf. Eq. (2)). The rest of the generation time distributions are independent of the recovery rate and, so, the corresponding curves are arranged in the same order in both panels. This indicates that the differences between models based on time-varying and constant infectiousness profiles depend heavily on the recovery processes. When the recovery processes interfere less with the infection processes, we can see noticeable effects of the infectiousness profiles.
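The growth rates listed above can be reproduced numerically. The following Python sketch builds the Weibull w(τ) for a given MGT, recovers β(τ) by inverting Eq. (2) for a constant recovery rate, and solves the Euler-Lotka relation (3) for r by root bracketing. The function names and numerical tolerances are our own choices; the parameter values ($R_0 = 2.5$, c = 10, k = 2.826, $\bar\tau_\gamma = 14$ days) are those stated in the text.

```python
import numpy as np
from math import gamma as gamma_fn
from scipy.integrate import quad
from scipy.optimize import brentq

R0, c, k = 2.5, 10, 2.826        # values used throughout Sec. VI
gam = 1.0 / 14.0                 # constant recovery rate (mean 14 days)

def weibull_w(mgt):
    """Intrinsic generation time PDF with the desired mean MGT."""
    lam = mgt / gamma_fn(1.0 + 1.0 / k)   # Weibull scale from the mean
    return lambda t: (k / lam) * (t / lam) ** (k - 1) * np.exp(-(t / lam) ** k)

def beta_from_w(w):
    """Invert Eq. (2): beta(tau) = R0 * w(tau) * exp(gamma*tau) / c."""
    return lambda t: R0 * w(t) * np.exp(gam * t) / c

def growth_rate(w):
    """Solve the Euler-Lotka relation (3) for the initial growth rate r."""
    f = lambda r: quad(lambda t: np.exp(-r * t) * w(t), 0, np.inf)[0] - 1.0 / R0
    return brentq(f, 1e-6, 2.0)

for mgt in (4, 5, 6, 7, 8):
    w = weibull_w(mgt)
    r = growth_rate(w)                          # ~0.2464 ... 0.1235
    tau = np.linspace(1e-3, 60, 5000)
    # sanity check: c * beta * survival probability integrates back to R0
    R0_check = np.trapz(c * beta_from_w(w)(tau) * np.exp(-gam * tau), tau)
    print(f"MGT={mgt}: r={r:.4f}, R0 check={R0_check:.3f}")
```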
C. Scenarios with vaccination

Figure 8 shows the fractions of infected individuals, computed again as $\int_0^\infty I(t,\tau)\,d\tau / N$, with a uniform vaccination rate v equal to $0.5\,v_c$ and $v_c$, and different infectiousness profiles. Here, the critical vaccination rate $v_c = (R_0 - 1)/\bar\tau_{\delta_v}$ equals 0.00825. All individuals are initially susceptible, except for 0.01% of the total population, which is set as index cases to start the epidemic. We can see that the infectiousness profiles and the mean infectious period affect the epidemic dynamics in the transient phases but have no impact on the critical vaccination rate as long as the parameters achieve the same $R_0$. On the other hand, we consider scenarios (Fig. 9) where a vaccinated population is present at the beginning of the epidemic, and 0.01% of the susceptible population is initially infected. More specifically, the fraction of the susceptible population is obtained through the DFE condition $S^*/N = 1/(1 + v\,\bar\tau_{\delta_v})$, and the rest of the population is vaccinated ($\int_0^\infty V^*(\tau)\,d\tau / N = 1 - S^*/N$). Accordingly, 40% and 57.14% of the total population is susceptible at the beginning of the epidemic for the scenarios with $v = v_c$ and $v = 0.5\,v_c$, respectively. Figure 9 shows the evolution of the median fractions of infected cases resulting from simulations where vaccinated people are initially present. As expected, when new cases are introduced, outbreaks are contained very well in the scenarios with $v = v_c$, with only a very small fraction of infections. In comparison, there are large outbreaks in the scenarios with $v = 0.5\,v_c$. For example, with MGT = 4 days, the peak fraction of infected cases in Fig. 9(a) is 0.0889, while the peak fraction of infected cases in Fig. 9(b) is $1.16\times10^{-4}$. Considering a population size of 500,000, a peak fraction equal to $1.16\times10^{-4}$ means that only 58 individuals are infected at the peak time. So, at the critical vaccination rate $v = v_c$, the introduction of new infections at the start of the simulation leads to only minor outbreaks. Fig. 9(a) also shows that the initial growth rates are lower than without vaccination, but are ordered in the same way. As additional information, Fig. 9(c) depicts $R_0^*$ measured from the simulations, which is the number of secondary cases divided by the number of index cases. Since the initial fractions of vaccinated and susceptible individuals are given by the DFE, $R_0^*$ can be interpreted as the basic reproduction number at the DFE considering vaccination. At the critical vaccination rate $v_c$, the mean value of $R_0^*$ ranges from 0.9895 to 1.0182. At $v = 0.5\,v_c$, the mean value of $R_0^*$ varies between 1.4069 and 1.5645. Under the scenarios of $v = v_c$ and $v = 0.5\,v_c$, the predicted values of $R_0^*$, namely $R_0 S^*/N$, are 1 and 1.4285, respectively, which are within the observed ranges.
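As a worked check of the initial fractions quoted above (using $\bar\tau_{\delta_v} = 1/0.0055 \approx 181.8$ days):

$$\left.\frac{S^*}{N}\right|_{v=v_c} = \frac{1}{1 + 0.00825\times 181.8} = \frac{1}{2.5} = 0.40,\qquad \left.\frac{S^*}{N}\right|_{v=0.5\,v_c} = \frac{1}{1.75} \approx 0.5714,$$

so the predicted reproduction numbers at the DFE are $R_0 S^*/N = 2.5\times 0.40 = 1$ and $2.5\times 0.5714 \approx 1.4285$, matching the values given in the text.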
VII. DISCUSSION

This paper presents a general SIRVS model considering waning immunity and age of infection. We analyzed how variations in infectiousness profiles under the same $R_0$ affect the epidemic dynamics. Compared with Markovian models, non-Markovian models with time-varying infectiousness profiles create more damped oscillations, with peak times affected in the transient phases. Remarkably, the magnitude of this difference between the two types of models depends heavily on the recovery processes. When the recovery process interferes more with the infection processes, the variations between models become less pronounced. Such an interference is possible because, in the standard formulation of epidemic models with age of infection (see, for instance, [28]), recovery and infectiousness are modeled as independent of each other. This modeling assumption, however, is clearly questionable if infectiousness is interpreted in terms of viral load and recovery occurs only once a low viral load is reached.

We have also seen that different combinations of infectiousness profiles and infectious periods have no impact on the critical vaccination rate $v_c$ as long as they lead to the same $R_0$. Indeed, given $R_0$, the mean duration of the recovery period is the only feature of its profile that determines the value of $v_c$. However, when vaccination rates are lower than the critical rate $v_c$, models with time-varying infectiousness still show transient behavior with damped oscillations of higher amplitude than Markovian models and retain the same ordering of the initial growth rates. This echoes the findings of [1], which found that vaccination reduces the reproduction number without changing the generation time distribution during the epidemic. Besides, with susceptible and vaccinated people present at the beginning of the epidemic, a population at the predicted critical vaccination rate is resilient to future epidemics, regardless of the particular infectiousness profile.

Loss of immunity is one of the causes of the oscillations observed in epidemic models. For instance, if there is a constant period of temporary immunity, destabilization of the endemic equilibrium of the SIRS model is possible through a Hopf bifurcation ([29]). As for damped oscillations, they occur in the standard (Markovian) SIRS model, and an approximation of their period is also well known ([23]). Here we have explored the impact of the infectiousness profile on the occurrence and shape of these oscillations. We have found that ABMs not only can produce results close to the PDE formulation for large population sizes, but also provide additional insights into the risk of secondary waves that cannot be obtained under the latter formulation. They suggest that, even in large populations, epidemics can die out after an initial epidemic peak if the decline in prevalence is fast enough. The occurrence of secondary waves then depends on both the population size and the infectiousness profile (through the assumed mean generation time). Moreover, since these waves are always associated with an endemic equilibrium, convergence towards the endemic equilibrium always occurs once stochastic extinction after the first peak is avoided, because the damped behavior of the oscillations prevents a return to very low levels of prevalence. Besides, at the same population size, the percentage of simulations with secondary waves is higher with constant infectiousness than with time-varying infectiousness. Given the importance of reducing the risk of secondary waves emerging during the course of an epidemic, this highlights the importance of selecting the appropriate modeling approach and estimating the generation time distributions to tackle future epidemics.

ACKNOWLEDGEMENT

The work of Q.Y. and C.S. has been supported by the National Science Foundation under grant award no. CMMI-1744812. J.S. has been partially supported by Grant No. PID2019-104437GB-I00 of the Agencia Estatal de Investigación, Ministerio de Ciencia e Innovación of the Spanish government, and is a member of the Consolidated Research Group 2021 SGR 00113 of the Generalitat de Catalunya.
Any opinions, findings, and conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.

Appendix: Simulation results with non-Markovian recovery processes

Denote by $\gamma(\tau)$ the age-dependent recovery rate and by $\psi_{ip}(\tau)$ the infectious period distribution. This distribution is determined by the recovery process and can be expressed as

$$\psi_{ip}(\tau) = \gamma(\tau)\, e^{-\int_0^\tau \gamma(s)\, ds}. \tag{A.1}$$

In the following, we consider infectious periods (from the I to the R compartment) following the Weibull distribution, $\psi_{ip}(\tau) = \frac{\alpha}{\mu} \left(\frac{\tau}{\mu}\right)^{\alpha-1} e^{-(\tau/\mu)^{\alpha}}$, where $\alpha$ is the shape parameter and $\mu$ is the scale parameter of the infectious period distribution. The mean of this distribution is $\mu\,\Gamma(1 + 1/\alpha)$; in the simulations, we set $\alpha = 2.82$ and $\mu = 15.72$ to obtain the mean infectious period $\tau_\gamma = 14$ days. The parameters of $w(\tau)$ are the same as those in the main text.

Without considering vaccinations, the results from agent-based simulations are plotted in Fig. A.1. We record the recovery times for all simulation runs associated with index cases, from which we compute the corresponding infectious periods and plot their distribution in Fig. A.1(d). In Fig. A.1, 493 out of 500 (98.6%) simulation runs result in secondary waves, while the rest die out after the first epidemic wave. In comparison, with the same MGT (5 days) and a constant recovery rate equal to the inverse of the mean infectious period $\tau_\gamma = 14$ days, 90.8% of simulation runs result in secondary waves. This suggests that, with the same mean infectious period, the percentage of secondary wave occurrences increases when the recovery rate changes from constant to nonconstant values. In Fig. A.2, we present the median values of simulation runs with secondary waves. Consistent with the patterns observed in Fig. 7, we observe that, as the MGT increases from 4 to 8 days, the peak time for the epidemic waves is postponed (from day 42 to day 79 for the first peak) with the peak height reduced by 35.91%. When comparing these results with the constant recovery rate results shown in Fig. 7, we notice that disease prevalence increases when infectious periods are Weibull distributed with the same $\tau_\gamma = 14$ days.

FIG. A.1. Simulation results with time-varying infectiousness profiles (MGT = 5 days) and non-Markovian recovery processes. The Weibull infectious period distribution has shape parameter $\alpha = 2.82$ and scale parameter $\mu = 15.72$, leading to a mean infectious period $\tau_\gamma = 14$ days. Panels (a)-(c) plot the fractions of infected, recovered, and susceptible cases, respectively, for all simulation runs; different colors represent different simulation runs. Panel (d) presents the infectious period distribution measured from the ABM, with the red curve showing the theoretical infectious period distribution. Initially, 0.01% of the susceptible population is infected. Five hundred simulation runs are performed for each scenario with a population size of 500,000.
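As a quick numerical check on the Weibull parameterization used in this appendix, one can sample infectious periods and verify that the chosen shape and scale give the intended mean, $\mu\,\Gamma(1+1/\alpha) \approx 14$ days. The sketch below mirrors the measured distribution in Fig. A.1(d) but is an illustration, not the paper's simulation code; it uses NumPy's standard Weibull sampler, whose draws are multiplied by the scale parameter.

```python
import numpy as np
from math import gamma

alpha, mu = 2.82, 15.72            # shape and scale from the appendix

# Theoretical mean of a Weibull(alpha, mu) distribution
theoretical_mean = mu * gamma(1.0 + 1.0 / alpha)

# Sample infectious periods as an agent-based simulation would assign them
rng = np.random.default_rng(0)
periods = mu * rng.weibull(alpha, size=500_000)

print(theoretical_mean)                          # ~14.0 days
print(periods.mean())                            # sample mean, close to 14.0
print(np.quantile(periods, [0.05, 0.5, 0.95]))   # spread of individual infectious periods
```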
2023-03-28T01:22:38.129Z
2023-03-27T00:00:00.000
{ "year": 2023, "sha1": "e623eff0a709e25a25f5019963bbf0195c5700d0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e623eff0a709e25a25f5019963bbf0195c5700d0", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Mathematics" ] }
45221034
pes2o/s2orc
v3-fos-license
Biosynthesis in Vitro of Homarine and Pyridine Carboxylic Acids in Marine Shrimp*

* This study was aided in part by a grant from the National Institutes of Health. The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact.

Minces and homogenates of muscle obtained from the marine shrimp Penaeus duorarum are capable of synthesizing homarine from [14C]glycine. Glycine carbon atoms are incorporated into homarine but not significantly into picolinate or quinolinate. [2-14C]Acetate is readily incorporated into quinolinate in the in vitro system but only slightly into homarine and not at all into picolinate. Quinolinic acid is rapidly methylated to N-methyl quinolinate, which is not decarboxylated to form homarine. Procedures have been developed for the satisfactory separation of N-methyl quinolinate from homarine.

We have previously reported (1) that homarine (N-methyl picolinic acid) is endogenously synthesized by the salt water shrimp Penaeus duorarum. After injection of a number of 14C-labeled substances and subsequent isolation of radioactive homarine, interpretation of the results was found to be sufficiently difficult to warrant efforts to develop an in vitro system capable of incorporating 14C-labeled precursors into homarine. Homogenized shrimp tail muscle was found to be capable of converting [14C]glycine into homarine, while labeled acetate was incorporated into quinolinic acid and not into homarine. When these 14C-labeled precursors were incubated in the presence of nonlabeled potential intermediates, it was found that two separate biosynthetic pathways were evident: 1) picolinic and quinolinic acids are not intermediates in the conversion of glycine to homarine; 2) [2-14C]acetate, on the contrary, is incorporated into quinolinic acid and not into homarine. The resulting quinolinic acid is readily methylated to form N-methyl quinolinic acid.

MATERIALS AND METHODS

In Vitro System for Homarine Biosynthesis-The shrimp, P. duorarum, were collected and maintained as previously described (1). After chilling on ice, the inert shrimp were shelled, and the muscle portion was finely minced and blended for 3 to 5 s in ice-cold citrate/phosphate buffer (pH 7.4, 0.6 osmolar). Generally 20 ml of the buffer was used with 10 g of shrimp muscle. In typical experiments, 10-'' mol each of ATP, DPN, FAD, and MgCl2 were added. Prior to incubation, 14C-labeled precursors were added and, on occasion, nonradioactive potential intermediates (usually 5 mg of picolinic acid, quinolinic acid, or N-methyl quinolinic acid), which were subsequently isolated in addition to the homarine. At the end of the incubation (usually 6 h at 20 to 25°C), 5 mg more of nonradioactive carrier was added just prior to the work-up.

Fractionation and Purification-The isolation of purified homarine has been previously described (1). Cell homogenates were deproteinized either 1) by addition of 10 or more volumes of methanol followed by overnight chilling, centrifugation, and evaporation of the aqueous methanolic supernatant fluid to a small volume or 2) by precipitation with 20% trichloroacetic acid. Following the methanol procedure, repeated vigorous shaking with a small volume of chloroform (0.1 to 0.2 volume) yielded a protein-free solution. All subsequent fractionations described here were done after deproteinization.
Separation of Homarine and N-Methyl Quinolinic Acid-Following deproteinization by the methanol method (and shaking with chloroform), the concentrated aqueous preparation was evaporated in vacuo to 2 to 3 ml, adjusted with a few drops of concentrated NH4OH to pH 10.9, and chromatographed on a column (1.5 x 20 cm) of Bio-Rad AG 1-X8 resin (OH- form, 100 to 200 mesh) previously equilibrated with 0.5% NH4OH. The column was eluted with the same solvent until homarine, N-methyl quinolinate, and other UV-absorbing quaternary nitrogen compounds were removed. After evaporation of NH3, the resulting syrup was adjusted to pH 4 to 5 with dilute HCl and evaporated to a volume of 1 to 2 ml. The solution was then chromatographed on a column (1.5 x 20 cm) of Bio-Rad AG 50W-X8 resin (H+ form); the column was thoroughly washed with water and eluted with 0.01 N HCl (400 to 600 ml required). Both homarine and NMQ (the abbreviation used is: NMQ, N-methyl quinolinic acid) are simultaneously eluted. Following evaporation of the eluate to 1 to 3 ml, both substances are co-precipitated by phosphotungstic acid in 1 N H2SO4. The mixture, after removal of phosphotungstate, cannot be separated by thin layer chromatography with the solvent systems previously employed (1). Separation of the two substances can, however, be achieved by column chromatography on SP-Sephadex C-25 resin in 0.01 N HCl at pH 2. Homarine is readily eluted with 1 to 2 bed volumes of 0.01 N HCl, while NMQ can only be removed with 1 to 2 bed volumes of 0.1 M NaCl in 0.01 N HCl. Satisfactory separation of the two substances can also be obtained by thin layer chromatography (Analtech MN 300 microcrystalline cellulose) with isopropanol/water (85/15); the RF for homarine is 0.41 and for NMQ is 0.23. It should be emphasized that trace contamination of the recovered homarine and NMQ by radioactive precursors was prevented by repeated additions of nonradioactive carrier precursors at suitable stages in the isolation procedure (prior to AG-50 column chromatography, prior to phosphotungstic acid precipitation, and prior to final chromatography on SP-Sephadex columns). Highly radioactive products were checked several times by such washing-out procedures and rechromatographed for final 14C measurements.

Picolinic Acid-After incubation of [14C]glycine or [14C]acetate in the presence of carrier picolinic acid and deproteinization with trichloroacetic acid, the picolinic acid was precipitated along with homarine by phosphotungstic acid in 1 N H2SO4 and chilled several hours; after centrifugation, the precipitate was dissolved in dilute NaOH to pH 7 and decomposed with 10% Ba(OH)2 solution. Barium ions were removed with dilute H2SO4, and after evaporation of the aqueous fraction, the residue was extracted into a small volume of methanol to eliminate salts. The methanol solution was evaporated to dryness, and the residue was dissolved in 1 to 2 ml of 0.5% NH4OH (pH 10.9) and chromatographed on a column of Bio-Rad AG 1-X8 resin as previously described. The column was eluted with 0.5% NH4OH until homarine and other quaternary nitrogen compounds were completely removed. After washing with H2O, picolinic acid was eluted with 1 to 4 bed volumes of 0.05 N HCl. Fractions were monitored by UV. The picolinic acid fraction was evaporated to dryness and dissolved in minimal H2O, the pH was adjusted to 2 to 3, and the solution was chromatographed on a column (1.5 x 20 cm) of Bio-Rad AG 50W-X8 resin (H+ form, 100 to 200 mesh).
After subsequent washing with 250 ml of 0.01 N HCl, picolinic acid was eluted with 3 to 5 bed volumes of 1 N HCl. Following evaporation of the eluate, the residue was thoroughly dried and converted to the methyl ester by refluxing in methanolic HCl for 5 to 6 h. Solvent was removed by evaporation in vacuo, the free base was liberated with NaHCO3, and methyl picolinate was extracted into benzene. After drying over MgSO4, the benzene was evaporated, and the residue was dissolved in 0.5 ml of methanol for chromatography on a Hewlett-Packard model 400 gas-liquid chromatograph equipped with a flame ionization detector (6-foot column of 3% JXR on 100/120 Gas-Chrom Q; carrier flow rate, 60 ml/min; column temperature, 110°C; retention time, 162 s). The product was collected by means of an effluent splitter and trapped at -77°C, then dissolved in methanol for quantitative UV assay at 264 nm (ε = 3200 M⁻¹ cm⁻¹ in CH3OH). An aliquot was evaporated to dryness for radioassay in a Beckman LS 230 liquid scintillation counter.

Quinolinic Acid-Since quinolinic acid is not precipitated by phosphotungstic acid, this step was omitted. Chromatography on AG 1-X8 anion exchange resin in 0.5% NH4OH effectively removed homarine and other quaternary bases; the column was then successively washed with water and eluted with formic acid. After removal of the formic acid by evaporation in vacuo at 30-35°C, the residue was dissolved in 2 to 3 ml of dilute H2SO4 at pH 3 and converted to the Cu2+ salt by addition of powdered CuSO4 with stirring; a precipitate of crystalline copper quinolinate formed within a few minutes. After centrifugation, the precipitate was washed once with a small volume of dilute H2SO4 (pH 3) and decomposed with dilute NaOH. The resulting slightly soluble Cu(OH)2 was removed by centrifugation, the aqueous phase was acidified to pH 3 with dilute H2SO4 and evaporated in vacuo to a syrup, and Na2SO4 was precipitated with methanol. The methanol-soluble fraction was evaporated, dried at 1 to 2 mm Hg, and converted to the dimethyl ester as described for picolinic acid. The dimethyl quinolinate was subjected to gas-liquid chromatography as above (2) and collected by means of an effluent splitter (carrier flow rate, 60 ml/min; column temperature, 145°C; retention time, 105 s). The dimethyl quinolinate was dissolved in methanol for quantitative UV assay at 264 nm (ε = 2450 M⁻¹ cm⁻¹ in CH3OH), and a measured sample was evaporated for radioassay.

N-Methyl Quinolinic Acid-Synthetic NMQ was prepared by reaction of quinolinic acid with methyl iodide. Quinolinic acid in 50% aqueous methanol was neutralized to pH 7.5, treated with a 5- to 10-fold excess of methyl iodide in a chilled pressure tube, and heated at 100-105°C for 24 h. After evaporation to dryness, the residue was dissolved in minimal 0.5% NH4OH and passed through a column of AG-1 resin at pH 10.9 to remove unreacted quinolinic acid (which is retained by the column). After evaporation of NH3 and pH adjustment to 4 to 5 with dilute HCl, the solution was evaporated to dryness and the product recrystallized from ethanol/1-butanol (4/1). The UV spectrum showed a maximum at 264 nm (ε = 4500 M⁻¹ cm⁻¹ for the Cl⁻ salt in H2O) and a minimum at 244 to 246 nm. On thin layer chromatography only a single spot was obtained.
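The quantitative UV assays above are straightforward Beer-Lambert calculations: the concentration of the collected ester is the measured absorbance divided by the molar absorptivity, for a given path length. The sketch below is a minimal illustration under the assumption of a 1 cm cuvette; it uses the extinction coefficients quoted in the text, and the absorbance reading of 0.32 is a hypothetical example value, not a datum from the paper.

```python
# Beer-Lambert quantitation: A = epsilon * c * l  =>  c = A / (epsilon * l)
EPSILON_264 = {                # molar absorptivities at 264 nm from the text (M^-1 cm^-1)
    "methyl picolinate (CH3OH)": 3200,
    "dimethyl quinolinate (CH3OH)": 2450,
    "NMQ chloride (H2O)": 4500,
}

def concentration_molar(absorbance, epsilon, path_cm=1.0):
    """Concentration (mol/L) from absorbance, assuming a 1 cm path length."""
    return absorbance / (epsilon * path_cm)

# Hypothetical reading of A264 = 0.32 for a methyl picolinate fraction:
c = concentration_molar(0.32, EPSILON_264["methyl picolinate (CH3OH)"])
print(f"{c:.2e} M")   # 1.00e-04 M, i.e. 0.1 mM in the assay cuvette
```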
RESULTS

A series of in vitro incubations with [2-14C]glycine as well as [1-14C]glycine demonstrated that radioactive homarine is biosynthesized by such preparations. Although the radioactive yield is small, consistent and far greater radioactivity was nevertheless obtained with glycine than with any of the other amino acids listed (Table I). It should additionally be pointed out that free glycine is present in large amounts (approximately 45 mg/10 g of shrimp tissue). The radioactive glycine is, therefore, diluted to an enormous extent by endogenous free glycine. When carrier picolinate or quinolinate was added prior to incubation with [14C]glycine, little or no radioactivity was recovered in the carrier substances, while the isolated homarine remained radioactive. It is, therefore, clear that glycine carbons are incorporated into homarine without intermediate formation of either picolinate or quinolinate. In a preliminary experiment we observed that dipicolinic acid (2,6-pyridine dicarboxylic acid) is likewise not an intermediate. The exclusion of picolinate as an intermediate suggested that glycine might be methylated early to form sarcosine, which could then be incorporated into homarine. [1-14C]Sarcosine, upon incubation, was converted to homarine (Table I) but only half as efficiently as glycine. Inasmuch as free endogenous glycine is present in very high concentrations in shrimp (as noted above), labeled sarcosine was diluted to approximately the same extent with 45 mg of carrier sarcosine prior to incubation. The low radioactivity obtained in the recovered homarine suggests that sarcosine is converted to glycine prior to incorporation into homarine. In an earlier report (1), labeled acetate injected into live shrimp appeared to be incorporated into homarine; when incubated in vitro, however, little or no [2-14C]acetate was converted to homarine (Table II). Significant activity was recovered in carrier quinolinate (none in picolinate). In our earlier in vivo experiments (1), labeled quinolinate also appeared to be a good precursor of homarine. Again, upon incubation of [6-14C]quinolinate with shrimp homogenate, relatively little radioactivity was recovered in the isolated homarine (Table II). These results suggested that our earlier preparations of radioactive homarine derived from acetate or quinolinate were contaminated with traces of a highly radioactive substance. On the theory that quinolinic acid might undergo N-methylation to form N-methyl quinolinate during in vitro incubations, NMQ was chemically synthesized by methylation of quinolinate with methyl iodide. The resulting compound was found to be very similar in its properties to homarine, and the two substances are separable only with difficulty. Radioactive preparations of homarine derived from acetate or quinolinate lost most of their radioactivity upon additional chromatography and elution on SP-Sephadex in 0.01 N HCl, while the NMQ could be subsequently eluted only with 0.1 M NaCl in 0.01 N HCl. As a result of this additional final purification step, the data listed in Table II help to clarify the phenomena under study: 1) acetate is converted to quinolinate, which is readily methylated to NMQ; 2) labeled tryptophan is converted to quinolinate, which is in turn methylated; 3) labeled glycerol, glutamate, and probably aspartate are incorporated into quinolinate. None of the substances listed in Table II appears to be converted to any significant extent into homarine.

DISCUSSION

It is apparent from this work that two separate pathways have been established: 1) acetate carbon atoms are incorporated into quinolinic acid, which is subsequently methylated to NMQ.
2) Conversion of glycine to homarine occurs by a pathway which does not involve picolinate or quinolinate as intermediates. Whether endogenous synthesis of quinolinate (from acetate) by shrimp provides a significant source of nicotinic acid has not been established; it is possible that low or moderate levels of nicotinate may be related to rapid methylation of quinolinate to form NMQ, which is then decarboxylated to trigonelline. Since picolinate is not an intermediate in the biosynthesis of homarine, it may be postulated that glycine condenses with a suitable 4-carbon compound to yield a di- or tetrahydropyridine carboxylic acid which is subsequently converted to homarine. We have unsuccessfully investigated the possible condensation of glycine with succinic monoaldehyde, hoping that Schiff base formation followed by an aldol-type cyclization would yield an important precursor of homarine. Efforts will be continued to find more effective intermediates in this metabolic pathway. It is recognized that quinolinate is formed by several pathways: 1) from tryptophan in mammals, yeast, and Neurospora (4); 2) from aspartate, acetate, and formate in Clostridium butylicum (5); and 3) from aspartate and glycerol (or a closely related intermediate) in higher plants (6) and various bacteria (7, 8). It would appear that the latter mechanism is similar to what we have observed in shrimp.
2018-04-03T06:00:09.725Z
1980-10-25T00:00:00.000
{ "year": 1980, "sha1": "cfaaa7ffe4e76e5fa73e4568e23a801780310d62", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/s0021-9258(18)43426-6", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "58aa25b7fb62c9ced5c75fa87e20790b2bc6f18d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
219740677
pes2o/s2orc
v3-fos-license
Isolation, identification and antimicrobial activity of secondary metabolites of endophytic fungi from annona leaves (Annona squamosa L.) growing in dry land

Research on the isolation, identification, and antimicrobial activity of secondary metabolites of endophytic fungi from leaves of Annona squamosa growing in dryland was undertaken. The work includes the isolation of endophytic fungi, cultivation and extraction of the fungal extract, and identification of chemical metabolites, together with an antibacterial test. The pure colony of endophytic fungi was grown on solid rice media in a 1 L Erlenmeyer flask. The grown fungi were extracted with ethyl acetate, and the ethyl acetate crude extract was then further subjected to chemical analysis and tested for its antibacterial properties. The endophytic fungal species was identified as Aspergillus niger based on macroscopic and microscopic analysis. LC-MS/MS analysis revealed the presence of five metabolites, including ephedradine A, ergosine, Ia, mudanpioside H, and trichosanic acid. The extract showed strong inhibition against Staphylococcus aureus, with an inhibition zone diameter of 16.1 mm, and moderate inhibition against Escherichia coli 0175H7 and Salmonella enteritidis ATCC 6939, with observed inhibition zone diameters of 9.6 mm and 11.3 mm, respectively.

Introduction

One of the important sources of bioactive compounds is microbes, including endophytic fungi that grow inside plant tissues such as leaves, flowers, seeds, twigs, stems, and roots without causing tissue damage in the host plant [1]. These fungi are rich in secondary metabolites and have been reported to provide several lead compounds and drugs against deadly diseases such as cancer and bacterial infections [2]. Examples of endophytic fungal bioactive compounds that have been isolated from plants are equisetin, epi-equisetin, and beauvericin, produced by Fusarium equiseti isolated from Piper nigrum leaves [3]. Another example is neosartorin, a potent antibiotic compound against multi-resistant bacteria without any significant observed toxicity, together with (−)-palitantin [4, 5]. Several works regarding bioactive fungal metabolites isolated from plants growing on Timor Island have been reported. The endophytic fungus Xylaria sp. isolated from Curcuma xanthorrhiza leaves produced arugosin J, xylarugosin, and resacetophenone [6], while the endophytic fungus Diaporthe melonis isolated from Annona twigs produced diaporthemins A and B together with flavomannin dimethyl ether [7]. Aspergillus flavus isolated from the medicinal plant Catharanthus roseus growing on Timor Island was recently reported to accumulate a high content of kojic acid, which has wide application in the cosmetics and pharmaceutical industries [8]. In continuation of our work on finding fungal bioactive metabolites from Timor Island, we evaluated an antibacterial extract from the endophytic fungus Aspergillus niger associated with Annona leaves.

Material

The endophytic strains selected for the preparation of crude extracts were isolated from fresh leaves of Annona squamosa L. collected on Timor Island, East Nusa Tenggara, Indonesia. Samples of leaves were transported to the laboratory in sterile bags.

Isolation of endophytic fungi

The healthy leaf samples were washed several times with sterile distilled water, followed by immersing the tissues in 70% alcohol for 30 s and then in sterile distilled water to eliminate epiphytic microorganisms.
The leaves were cut into 0.5 cm² pieces and then transferred to Petri dishes containing potato dextrose agar supplemented with chloramphenicol (0.02 g) to suppress bacterial growth. After five days, all fungal colonies were isolated, purified, and maintained on PDA.

Cultivation and extraction of secondary metabolites

In order to obtain secondary metabolites for antimicrobial activity screening, the pure culture of each endophytic fungus was grown on 100 g of rice media and incubated for thirty days. After reaching its stationary phase, the fungus was extracted with ethyl acetate, and the ethyl acetate was removed with a rotary evaporator. The crude extract was analyzed for its chemical profile using HPLC and LC-MS/MS.

Antibacterial assay

The paper disk method was used to screen the antimicrobial activity of the EtOAc endophytic fungus extracts. The test microorganisms were one Gram-positive bacterium, Staphylococcus aureus camp., and two Gram-negative bacteria, Escherichia coli 0175H7 and Salmonella enteritidis ATCC 6939. The test microorganisms were cultivated in test tubes containing 2 g/100 ml of nutrient broth and incubated for 24 h at 37 °C. Turbidity was adjusted to that of a 0.5 McFarland barium sulfate standard. Paper disks were also inoculated with sterile distilled water (10 µl) as the negative control and tetracycline (30 µg) as the positive control. On the surface of the medium containing the bacterial test strain, 10 μl of each extract was pipetted onto a 0.66 cm sterile paper disk. All plates were incubated for roughly 24 hours at 37 °C. Zones of inhibition were measured and recorded. Antibacterial activity screening was repeated twice.

Results and Discussion

A white and black strain of endophytic fungus was isolated from the leaf of Annona squamosa. Based on the macroscopic and microscopic characteristics, and after comparison with endophytic fungus morphology according to [9], the isolated endophytic fungus was identified as Aspergillus niger. Fungus cultivation was performed for 3 weeks so that the fungus was able to grow until its stationary phase was reached. The endophytic fungus was then extracted with ethyl acetate and left for two nights before filtration. The solvent was removed under vacuum using a rotary evaporator. The crude extract was then analyzed by HPLC and LC-MS for its chemical profile (Figures 2 and 3). In addition, the extract was evaluated for its antibacterial properties. The chemical profiling of the fungal extract was performed by HPLC using a C-18 analytical column at a flow rate of 0.5 mL/minute for 15 minutes, with detection at 204 nm. Five metabolites were identified using the LC-MS/MS method, as shown in Table 1. The ethyl acetate fungal extract was further evaluated for its antibacterial properties against S. aureus, E. coli, and Salmonella enteritidis, with tetracycline as the positive control and sterile distilled water as the negative control. The results are shown in Table 2. The Gram-negative and Gram-positive bacteria used in this study had a concentration of 10⁸ CFU/mL. The fungal extract was found to inhibit bacterial growth at a load of 10 µg per disk. Of all the identified components, mudanpioside H was probably responsible for the antibacterial activity of the extract, as it was previously reported to have moderate antibacterial activity [9]. However, further fractionation and isolation of pure metabolites from the fungal extract should be undertaken.
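The qualitative labels used above (strong versus moderate inhibition) map onto the measured zone diameters. The following sketch illustrates such a classification; the cut-offs (≥15 mm strong, ≥9 mm moderate, anything larger than the 6.6 mm disk itself weak) are assumptions chosen only to be consistent with how this paper describes its own zones, not a standard taken from it.

```python
def classify_inhibition(zone_mm: float) -> str:
    """Assumed cut-offs, consistent with this paper's own labels
    (16.1 mm = strong; 11.3 mm and 9.6 mm = moderate)."""
    if zone_mm >= 15.0:
        return "strong"
    if zone_mm >= 9.0:
        return "moderate"
    if zone_mm > 6.6:          # the 0.66 cm paper disk itself shows no clearing
        return "weak"
    return "none"

zones = {
    "Staphylococcus aureus": 16.1,
    "Salmonella enteritidis ATCC 6939": 11.3,
    "Escherichia coli 0175H7": 9.6,
}
for organism, zone in zones.items():
    print(f"{organism}: {zone} mm -> {classify_inhibition(zone)} inhibition")
```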
2020-05-28T09:12:13.155Z
2020-05-27T00:00:00.000
{ "year": 2020, "sha1": "0434a4990731d1d5e358ea14731e2731ba4956cd", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1757-899x/823/1/012039", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "2544c5ad13051cee99e809a2f98b962405d3dd69", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Physics", "Biology" ] }
204707301
pes2o/s2orc
v3-fos-license
PREVALENCE OF FATIGUE AND IMPACT ON QUALITY OF LIFE IN CASTRATION-RESISTANT PROSTATE CANCER PATIENTS: the VITAL study

Background
Fatigue is one of the most prevalent symptoms among cancer patients. Specifically, in metastatic castration-resistant prostate cancer (mCRPC) patients, fatigue is the most common adverse event associated with current treatments. The purpose of this study is to describe the prevalence of fatigue and its impact on quality of life (QoL) in patients with CRPC in routine clinical practice.

Methods
This was a cross-sectional, multicentre study. Male chemo-naïve adults with high-risk non-metastatic (M0) CRPC and metastatic (M1) CRPC (mCRPC) were eligible. Fatigue was measured using the Brief Fatigue Inventory (BFI), and QoL was assessed using the Functional Assessment of Cancer Therapy questionnaire for patients with prostate cancer (FACT-P) and the FACT-General (FACT-G) questionnaire. Data were analysed using Mann-Whitney or Kruskal-Wallis tests (non-parametric distributions), a T-test or an ANOVA (parametric distributions), and the Fisher or chi-squared tests (categorical variables).

Results
A total of 235 eligible patients were included in the study (74 [31.5%] with M0 and 161 [68.5%] with M1). Fatigue was present in 74%, with 38.5% of patients reporting moderate-to-severe fatigue. Mean FACT-G and FACT-P overall scores were 77.6 ± 16.3 and 108.7 ± 21.4, respectively, with no differences between the CRPC M0 and CRPC M1 subgroups. Fatigue intensity was associated with decreased FACT-G/P scores, with no differences between groups. Among 151 mCRPC patients with available treatment data, those treated with abiraterone-prednisone for ≥3 months showed a significant reduction in fatigue intensity (p = 0.043) and interference (p = 0.04) compared to those on traditional hormone therapy (HT). Patients on abiraterone-prednisone for ≥3 months showed significantly better FACT-G/P scores than patients on HT (p = 0.046 and 0.018, respectively).

Conclusion
Our data show a high prevalence and intensity of fatigue and its impact on QoL in chemo-naïve CRPC patients. There is an association between greater fatigue and lower QoL, irrespective of the presence or absence of metastasis. Chemo-naïve mCRPC patients receiving more than 3 months of abiraterone acetate plus prednisone showed an improvement in fatigue and QoL when compared to those on traditional HT.

Trial registration
Not applicable, since this is not an interventional study.

Background
Prostate cancer is the most frequent cancer among males in Europe [1]. In 2017, approximately 160,000 men will be diagnosed with prostate cancer, adding to 3.3 million existing survivors [2]. Even though optimal disease control is achieved with androgen deprivation therapy (ADT), most patients will eventually progress and develop metastatic castration-resistant PC (mCRPC) [3], which is associated with poor prognosis. Cancer-related fatigue is one of the most prevalent, distressing and anticipated symptoms experienced by patients across all tumours. It is not proportional to recent activity, and it interferes with usual functioning [4]. In patients with mCRPC, fatigue is by far a dominant symptom of the disease and is the most common adverse event associated with treatments [5]. Manifestations include a sense of persistent physical, mental and/or emotional tiredness [6], which can cause a significant impact on quality of life (QoL) [7].
New therapeutic options for men with mCRPC have been developed over the last few years [8], including therapies targeting the androgen receptor pathway. Abiraterone acetate, a new class of anti-androgen, inhibits the synthesis of testosterone in the adrenal glands, testes and the tumour microenvironment, leading to suppression of PC growth and tumour regression [9]. In patients with mCRPC who have progressed after docetaxel chemotherapy, abiraterone acetate plus prednisone is the only treatment to have shown clinically meaningful improvements in fatigue [10]. Surprisingly, no studies have been conducted to evaluate the presence of fatigue in CRPC patients. The aim of this study was to describe the prevalence of fatigue and its impact on QoL in patients with both chemo-naïve mCRPC and high-risk non-metastatic CRPC in routine clinical practice.

Study design
The VITAL Study was a cross-sectional study, carried out in 39 specialised urological clinics across Spain between January 2015 and September 2015. The study was conducted in accordance with the Declaration of Helsinki, including all amendments, and was approved by the Ethics Committee of Hospital Universitario 12 de Octubre (Madrid, Spain) as the ethical reference committee. All patients gave written informed consent before their inclusion in the study, and their treatment followed routine clinical guidelines.

Study population
Eligible patients included adult males with a histological diagnosis of high-risk non-metastatic CRPC (defined as a prostate-specific antigen [PSA] doubling time [PSADT] ≤10 months; M0) or mCRPC (defined by visceral metastases, distant lymph nodes, or presence of bone metastases; M1). Patients who had participated in any investigational drug study or any expanded-access or named-patient program were excluded, as were those who had previously been treated with chemotherapy.

Sample size calculation
According to different published studies, fatigue is present in more than 40% of oncologic patients, increasing up to almost 90% depending on the characteristics of the study cohort, such as age, pathology, disease stage, etc. [11]. Based on these data, an incidence of fatigue of around 65% was estimated in advanced prostate cancer patients. A total of 243 patients were needed in order to detect an incidence of fatigue of 65% with 6% precision and a 95% confidence interval. Considering a loss rate of 5%, it was necessary to include a total of 256 patients in the study.

Variables
Data were collected using self-report questionnaires and supplemented with clinical data from the patients' medical records. Fatigue was measured using the Brief Fatigue Inventory (BFI), a standard and reliable instrument used to assess fatigue in patients with cancer. The BFI is a nine-item instrument, consisting of three items assessing the present, usual and worst level of fatigue and six items concerning the interference of fatigue with general activity over the previous week [12]. 'Fatigue intensity' was defined as the score for the worst level of fatigue in the last 24 h (BFI item 3), on a 0-10 scale, with 0 being 'No fatigue' and 10 being 'As bad as you can imagine'. Fatigue was classified as mild, moderate or severe based on the score for item 3 (1-4, 5-7, or 8-10, respectively). 'Fatigue interference' was defined as the average score of all interference items (items 4A-4F), on a 0-10 scale, with 0 being 'Does not interfere' and 10 being 'Completely interferes'. The global BFI score is the arithmetic mean of all nine items (score, 0-10).
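The BFI scoring rules just described are simple enough to state as code. The following sketch is an illustrative helper, not the study's analysis code; it computes the BFI summary scores and severity class from a patient's nine item responses, using exactly the definitions above.

```python
from statistics import mean

def score_bfi(items):
    """items: dict with keys 'item1'-'item3' (present/usual/worst fatigue, 0-10)
    and 'item4a'-'item4f' (interference items, 0-10)."""
    intensity = items["item3"]                     # worst fatigue in the last 24 h
    interference = mean(items[k] for k in
                        ("item4a", "item4b", "item4c", "item4d", "item4e", "item4f"))
    global_score = mean(items.values())            # arithmetic mean of all nine items
    if intensity == 0:
        severity = "no fatigue"
    elif intensity <= 4:
        severity = "mild"
    elif intensity <= 7:
        severity = "moderate"
    else:
        severity = "severe"
    return {"intensity": intensity, "interference": round(interference, 2),
            "global": round(global_score, 2), "severity": severity}

print(score_bfi({"item1": 3, "item2": 4, "item3": 6, "item4a": 5, "item4b": 4,
                 "item4c": 6, "item4d": 3, "item4e": 5, "item4f": 4}))
# -> intensity 6 is classified as 'moderate' fatigue
```

Likewise, the sample-size figures quoted above follow from the standard formula for estimating a proportion, n = z²·p·(1 − p)/d², inflated for anticipated losses. That this is the formula the authors used is an assumption, but it reproduces the paper's numbers exactly:

```python
from math import ceil

def sample_size_proportion(p, precision, z=1.96, loss_rate=0.0):
    """n = z^2 * p * (1 - p) / d^2, inflated for the expected loss rate."""
    n = (z ** 2) * p * (1 - p) / (precision ** 2)
    return ceil(n / (1.0 - loss_rate))

print(sample_size_proportion(0.65, 0.06))                  # 243 evaluable patients
print(sample_size_proportion(0.65, 0.06, loss_rate=0.05))  # 256 patients to include
```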
The correlation between the physicians' and the patients' perception of fatigue was also calculated. QoL was assessed using the Functional Assessment of Cancer Therapy questionnaire for patients with prostate cancer (FACT-P), which has been validated to estimate QoL in men with PC [13]. This tool comprises the 27-item FACT-General (FACT-G) questionnaire, which measures QoL in cancer patients, and a 12-item prostate cancer subscale, designed to measure QoL specifically in prostate cancer. The FACT-P questionnaire is scored by adding the subscales of the FACT-G plus the prostate cancer subscale to yield a comprehensive QoL score. Further data were recorded from the patients' medical records and included lifestyle habits, analytical values, comorbidities, current treatment, and other factors that could be associated with fatigue (Table 3).

Statistical considerations
Descriptive analyses were used for the study variables. When inferential analyses were required, the Mann-Whitney or Kruskal-Wallis tests were used for variables not fitting a normal (parametric) distribution, and a T-test or an ANOVA was used for variables fitting a normal (parametric) distribution. For contingency tables of categorical variables, the Fisher or chi-squared tests were used. All hypothesis tests were two-sided, with a significance level of 0.05. A logistic regression analysis was performed to evaluate the association between clinical characteristics and the presence of fatigue, based on those variables with a p-value < 0.2 in the bivariate analyses. Missing data were not imputed and were left as missing. Statistical analyses were performed using the Statistical Package for the Social Sciences (SPSS) software, version 18.0.

Patient characteristics
A total of 254 patients were included in the study. Of these, 19 subjects were excluded due to screening failures. The final evaluable population comprised 235 patients, with 74 (31.5%) in the M0 group and 161 (68.5%) in the M1 group (Table 1). At inclusion, the median age for the entire patient population was 75.1 (46.2-92.4) years, the median PSA value was 17.8 (6.8-43.3) ng/dL, and 90.7% of patients had an ECOG performance status grade of 0 or 1. The bivariate and multivariate analyses revealed that respiratory and cardiovascular disorders were the only factors significantly associated with the presence of fatigue, defined as a BFI item 3 score > 0 (odds ratios [OR] 4.7 and 3.6, respectively; Table 3).

QoL outcomes
Mean FACT-G and FACT-P overall scores were 77.6 ± 16.3 and 108.7 ± 21.4, respectively. We compared the M0 and M1 groups on their overall QoL questionnaire scores, finding that both groups showed similar levels of functional status. The mean FACT-G score was 77.5 ± 17.0 for M0 versus 77.6 ± 16.0 for M1 (p = 0.955), and the mean FACT-P score was 108.6 ± 21.7 for M0 versus 108.7 ± 21.3 for M1 (p = 0.966). The mean scores for the domains of the FACT-G and FACT-P scales per study group are displayed in Fig. 1. An association with fatigue intensity was seen across all QoL measures. Patients who reported greater fatigue intensity showed lower QoL, with worse mean FACT-G and FACT-P scores. This association was found to be independent of the absence or presence of metastases (Table 4).
Fatigue and QoL in mCRPC according to treatment
Among all 161 mCRPC patients, 151 had available treatment data: 75 (50%) patients were receiving traditional hormone therapy (HT; mostly bicalutamide and flutamide, given that during the recruitment period of this study no new anti-androgen drug such as apalutamide or enzalutamide was commercially available) and 76 (50%) were on abiraterone-prednisone. The latter were in turn classified based on treatment duration: 33 (22%) patients had been receiving treatment for < 3 months and 43 (28%) for ≥3 months. Table 5 shows the comparison of fatigue and QoL outcomes across these three cohorts. Patients receiving abiraterone acetate plus prednisone for ≥3 months showed a significant reduction in median fatigue intensity and interference compared with patients on traditional HT.

Discussion
To the best of our knowledge, this is the first observational study in the setting of routine clinical practice that specifically evaluates self-reported fatigue and its impact on QoL in chemo-naïve patients with CRPC, using well-established, validated instruments for this purpose. Besides pain, fatigue is the most distressing and predominant symptom reported by patients with mCRPC [15]. We found that almost three quarters of our study population were suffering from fatigue, regardless of the presence of metastases, and a high proportion of patients were suffering from moderate-to-severe fatigue. The prevalence of fatigue has been studied previously, with estimates ranging from 39 to 90% [11]; however, prevalence rates for cancer-related fatigue vary widely depending on how fatigue is defined and assessed. Even though cancer-related fatigue has a profound impact on daily activities and is one of the main drivers of poor QoL [16], a poor correlation has long been observed between clinician-perceived and patient-reported subjective symptoms such as fatigue [17-19]. Surprisingly, in our study we observed an improvement in the level of agreement between the clinicians' and the patients' perception of fatigue, finding an excellent concordance between the two. This highlights the need to assess fatigue symptoms on an ongoing basis and to develop management plans that increase health-care provider awareness of early fatigue symptoms, in order to help patients and their primary carers recognise fatigue symptoms early and thereby increase QoL in this group of patients. A list of possible correlates of fatigue in mCRPC was proposed recently by Colloca et al., grouping them into cancer-related (anemia, pain, etc.), patient-related (physical function, liver dysfunction, etc.) and treatment-related (hormonal therapy, chemotherapy, etc.) factors [5]. The logistic regression model developed in this study revealed that, beyond initial therapy or biological parameters, "patient-related" respiratory and cardiovascular disorders were the most important explanatory factors associated with fatigue. In the bivariate analysis, hemoglobin and the practice of regular exercise seemed to have some value but did not reach statistical significance. Interestingly, the time on treatment with analogues had no impact on fatigue in our study. In this study, we found that PC patients showed similar levels of functionality, as measured by the FACT-P questionnaire, irrespective of the absence or presence of metastases. In light of previous studies, this was a rather unexpected finding, as the prevalence of cancer-related fatigue is likely to increase as the disease progresses [10, 20].
We observed that fatigue intensity was directly related to impaired QoL across all dimensions of the FACT-G and FACT-P instruments. This is in line with previous studies, in which fatigue was the most common symptom and the most significant predictor of impaired QoL [21]. As a multidimensional symptom, fatigue can affect specific dimensions of the QoL instruments, for which measurement of intensity alone is rather inappropriate. The findings reported by Gupta et al. [22] have essential implications for clinical practice. The authors highlighted that patients with PC under close QoL monitoring who showed an improvement in fatigue, dyspnea and cognitive function within 3 months of treatment were at a significantly decreased risk of mortality. Our findings are of practical importance to mCRPC treatment and further support abiraterone as a valuable option for the treatment of mCRPC patients. Sternberg et al. [10] reported the results of the first phase III clinical trial in the setting of advanced prostate cancer to specifically evaluate patient-reported fatigue outcomes, highlighting that abiraterone-prednisone was associated with improvements not only in fatigue intensity but also in fatigue interference, and that this was perceivable and meaningful to patients. The AQUARiUS study [23] also added evidence supporting the benefits of abiraterone-prednisone treatment with regard to fatigue. In this observational study, fatigue and cognition were evaluated in mCRPC patients receiving either abiraterone-prednisone or enzalutamide. Abiraterone-prednisone showed a favourable effect on fatigue across all fatigue scales evaluated, with a significant difference at 3 months of treatment compared to enzalutamide. In keeping with these results, we have found that chemo-naïve mCRPC patients receiving more than 3 months of treatment with abiraterone-prednisone had lower levels of fatigue and better QoL compared to those on traditional hormone therapy, which could not be ascribed to differences in previous chemotherapy exposure. Despite all these findings, we cannot determine the mechanism underlying the benefits associated with a longer duration of treatment with abiraterone-prednisone, which could be the result of amelioration of disease progression. Nonetheless, these findings should guide new longitudinal studies to confirm the results. The cross-sectional design is probably the most important limitation of our study. In common with all cross-sectional studies, we can only offer a 'snapshot' of the current situation. It may have been better to follow the patients over a longer period of time, but this would have taken much longer, and we probably would have needed to increase the sample size. It should also be noted that, given the observational study design, certain biases might have been introduced when collecting the data, and these might affect the interpretation of the results. However, conducting this type of study in real life is of great relevance, as it helps us learn about the conditions of routine clinical practice.

Conclusions
Our data show high prevalence rates and high intensity of fatigue, with a significant impact on QoL, in high-risk M0 CRPC and chemo-naïve mCRPC patients. There is an association between more fatigue and lower QoL, which is independent of the presence or absence of metastases. Finally, chemo-naïve mCRPC patients receiving more than 3 months of abiraterone-prednisone showed an improvement in fatigue and QoL compared to patients on traditional HT.
Supplementary information
Supplementary information accompanies this paper at https://doi.org/10.1186/s12894-019-0527-8.

Abbreviations
AAP: abiraterone acetate-prednisone; HT: traditional hormone therapy; IQR: interquartile range; WB: well-being

All authors read and approved the final manuscript.

Funding
This study was funded by Janssen-Cilag S.A. Janssen-Cilag S.A. was involved in the design of the study, interpretation of data, and in writing the manuscript. Quality control and statistical analyses were performed by a contract research organization that was funded by Janssen-Cilag S.A.

Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Ethics approval and consent to participate
The study was conducted in accordance with the Declaration of Helsinki, including all amendments, and was approved by the Ethics Committee of Hospital Universitario 12 de Octubre (Madrid, Spain) as the ethical reference committee. All patients gave written informed consent before their inclusion in the study, and their treatment followed routine clinical guidelines. All sites involved approved the study through their ethics committees (see the list of sites in Additional file 1).

Consent for publication
Not applicable.

Competing interests
ARA, LMP, MEJ, JBG, DLB and FGV have received speaker or consultant fees from Janssen, Astellas and Bayer. They do not present a conflict of interest specifically for the realization of this article. JMT and AGG are employees of Janssen-Cilag Spain.
2019-10-16T14:47:48.244Z
2019-10-16T00:00:00.000
{ "year": 2019, "sha1": "3c488a187f7bdeab61c3ed949547954c0b97e6f8", "oa_license": "CCBY", "oa_url": "https://bmcurol.biomedcentral.com/track/pdf/10.1186/s12894-019-0527-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c488a187f7bdeab61c3ed949547954c0b97e6f8", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
4854987
pes2o/s2orc
v3-fos-license
Readministration of Nivolumab after Persistent Immune-related Colitis in a Patient with Recurrent Melanoma

Nivolumab shows promising efficacy against metastatic melanoma. However, immune-related adverse events are of great concern. We herein report a case of persistent colitis that developed during nivolumab monotherapy and nivolumab readministration. An 82-year-old Japanese woman with recurrent melanoma developed Grade 3 colitis after 6 cycles of nivolumab. She was treated with corticosteroids for 28 days. Follow-up by computed tomography and colonoscopy after corticosteroid treatment revealed persistent pancolitis. Her symptoms ameliorated spontaneously in two months. Given the amelioration, nivolumab was restarted, which resulted in the maintenance of stable disease for 21 months without recurrence of colitis. Even in cases of persistent colitis lasting several months, nivolumab readministration should be considered.

Introduction
Nivolumab, a programmed cell death protein 1 (PD-1) inhibitor, shows promising efficacy in patients with metastatic melanoma and other solid tumors. However, systemic immune-related adverse events (irAEs), such as interstitial lung disease, liver dysfunction, hypothyroidism, and colitis, are of great concern (1). Approximately 30% of patients develop nivolumab-associated colitis (Common Terminology Criteria for Adverse Events v4.0; CTCAE, any Grade), and less than 10% of patients have severe colitis (Grade 3 or 4). Nivolumab-associated colitis generally occurs one to three months after starting nivolumab therapy (2). As irAEs can occasionally be lethal, the readministration of nivolumab after a severe irAE has been controversial. We herein report a case of restarting nivolumab after recovery from nivolumab-associated colitis.

Case Report
An 82-year-old Japanese woman was admitted to our hospital with intermittent severe abdominal pain. She had a history of malignant rectal melanoma treated with transanal tumor resection and 6 subsequent cycles of adjuvant chemotherapy (dacarbazine, nimustine, and vincristine: DAV) 15 years earlier. Four years after the first episode, relapse of the primary rectal lesion was found. With transanal tumor resection and DAV therapy, periodical gallium scintigraphy confirmed no tumor residue. She had been followed up for three years after the relapse and had finished her periodical checkups seven years earlier. Four months before admission, she noticed bloody stool. Colonoscopy revealed recurrence of rectal melanoma at the primary site with pathological confirmation, and computed tomography (CT) detected multiple lesions in the para-aortic lymph nodes, lung, liver, and first lumbar vertebra. We detected no other malignancy. Given the recurrence at the rectal primary site and the simultaneous multiple lesions on CT, she was diagnosed with recurrent melanoma with multiple metastases to the liver, lung, para-aortic lymph nodes, and first lumbar vertebra. Her Eastern Cooperative Oncology Group performance status was 0. She was administered nivolumab (2 mg/kg) every 3 weeks. After completing six cycles of nivolumab, she developed severe abdominal pain and loose bloody stool twice per day and was referred to our hospital. She had a sudden onset of fever (38.1°C) the day before admission to our institution. A physical examination revealed abdominal tenderness from the left hypochondriac to the lower quadrant region.
Laboratory findings showed an increased white blood cell count (12,400/mm³, neutrophils 87%) and C-reactive protein level (CRP, 23.0 mg/dL) and decreased hemoglobin (9.7 g/dL). Nuclear antibodies and antineutrophil cytoplasmic antibodies were negative. Contrast-enhanced CT on the day of admission showed that the colonic wall from the cecum to the transverse colon was markedly edematous and thickened (Fig. 1A), which is not consistent with ischemic colitis, as that typically occurs in a localized region, such as the splenic flexure, due to a reduced blood supply from the major arteries. We excluded infectious colitis based on the following clinical findings: both blood and stool cultures were negative for pathogenic bacteria, and serum testing for cytomegalovirus (CMV) and Epstein-Barr virus (EBV) was negative. These clinical findings and the history of nivolumab therapy suggested an association between the colitis and nivolumab therapy. Colonoscopy could not be performed because of severe abdominal pain and the risk of perforation at that time; it was therefore planned after the amelioration of her symptoms. The patient discontinued nivolumab and was immediately treated with intravenous prednisolone 2 mg/kg/day combined with antibiotics according to the manufacturer's management guidelines (1). Her abdominal tenderness was relieved by the next day and completely ameliorated after one week. The blood in her stool resolved, and loose stool once a day was observed until six days after admission. Her hemoglobin level improved from 9.7 g/dL to her baseline level (12 g/dL) after 1 week without blood transfusion. Prednisolone was gradually tapered every three days. As her symptoms ameliorated, colonoscopy was performed on day 9 after admission, revealing pancolitis from the ileum to the descending colon with edematous, erythematous mucosa. Histopathology revealed interstitial edema and inflammatory infiltration of lymphocytes, plasma cells, eosinophils, and neutrophils. The typical findings of ischemic colitis, such as ghost outlines and atrophy of crypts and interstitial eosinophilic deposition, were absent. We confirmed the diagnosis of nivolumab-associated colitis. Three weeks after starting prednisolone treatment, CT revealed that the thickened cecal wall had ameliorated (Fig. 1B). She was discharged on day 22 of hospitalization with prednisolone reduced to 5 mg/day. Corticosteroid treatment was continued for 28 days. Three weeks after the discontinuation of corticosteroid treatment, she still complained of loose stool. Follow-up CT on day 50 after the first admission revealed a recurrent edematous and thickened cecal wall (Fig. 1C). Colonoscopy on day 59 after the first admission showed pancolitis from the ileum to the descending colon with edematous, erythematous mucosa and a loss of normal vascularity (3, 4) (Fig. 2). A histopathological examination of the biopsy specimen revealed mild inflammation with eosinophilic infiltration (3, 4) and reconfirmed no evidence of pathogens, including CMV and Mycobacterium tuberculosis. These findings were consistent with recurrence of irAE colitis. To relieve discomfort, the recurrent rectal lesion was endoscopically dissected, confirming the diagnosis of recurrent melanoma without a v-Raf murine sarcoma viral oncogene homolog B (BRAF) mutation. Careful observation was carried out without reintroduction of corticosteroids (Fig. 3).
Follow-up colonoscopy on day 80 after admission confirmed persistent pancolitis with edematous, erythematous mucosa. The CRP level remained slightly elevated at around 1.5 mg/dL. It gradually decreased to 0.3 mg/dL, and her loose stool improved spontaneously 2 months after the discontinuation of corticosteroids. With stable clinical symptoms and a normal CRP level, nivolumab was restarted on day 99 after admission. Her condition has remained stable for 21 months without recurrence of irAE colitis or progression of the disease (more than 20% increase in tumor volume or new metastatic lesions).

Discussion
To our knowledge, this is the first case of nivolumab-associated colitis in which nivolumab was successfully restarted after recovery from the colitis. In the present case, the colitis lasted more than three months, including the corticosteroid tapering period, which was longer than the one month stated in the guidelines (1). Our patient was followed up carefully after the development of Grade 3 irAE colitis on nivolumab monotherapy. Discontinuation of nivolumab and immunosuppressive therapy (corticosteroid) successfully relieved her symptoms. Corticosteroid administration was gradually tapered over one month. The CT and colonoscopy findings were more severe than the clinical symptoms (i.e., loose stool and abdominal pain) and CRP elevation (<1.5 mg/dL) suggested. This case suggests that periodical visits, in-depth history taking, and careful CRP monitoring combined with CT imaging can assist in the early detection and follow-up of irAE colitis in the clinical setting. The persistence of colonic inflammation in this 82-year-old patient suggested that colitis can last for over 3 months. For irAE colitis, the management guidelines recommend continuing steroids until recovery to Grade 1 and then tapering the dose over at least one month (1). Previous clinical studies have reported that irAE colitis was sustained for less than 1 month [median 0.7 weeks (5) and 4.0 weeks (6)]. While some factors have been reported to be associated with the clearance of nivolumab, further investigation is needed to ensure the safe administration of nivolumab. The readministration of nivolumab was performed in this case after recovery from severe irAE colitis. With careful observation of clinical symptoms and laboratory data, the patient has maintained a stable condition for 21 months without recurrence of colitis. The readministration of nivolumab after irAEs has been controversial. A previous study reported that 7 out of 20 patients who developed irAE pneumonitis restarted nivolumab, and 5 patients successfully continued without recurrence of irAE pneumonitis (7). With regard to ipilimumab, an earlier-approved immune-checkpoint inhibitor of cytotoxic T-lymphocyte antigen 4 (CTLA-4), accumulated clinical data indicate that ipilimumab-associated colitis is more frequently observed than nivolumab-associated colitis (8). A previous study reported the successful use of nivolumab after ipilimumab-induced colitis in 11 unresectable metastatic melanoma cases (3 Grade 2, 7 Grade 3, and 1 Grade 4) (9). Another study reported 67 advanced melanoma patients with a history of ipilimumab-induced irAEs, among them 47 cases of colitis (5 Grade 2, 37 Grade 3, and 5 Grade 4), who were treated with nivolumab and attained an average progression-free survival (PFS) of 7.2 months. It was also reported that recurrence of the same irAE was rare (3%, 2/67 cases), even in patients with severe colitis (10).
Although molecular immunological evidence was not available in previous reports or in the present case, the clinical findings suggest that recurrence of an irAE in the same organ is not common (10). Thus, readministration can be considered under careful observation. In the present case, restarting nivolumab was the only treatment option, as cytotoxic drugs had already been administered and there was no BRAF mutation. With the readministration of nivolumab, our patient successfully maintained a stable disease status for 21 months. The present case suggests that the readministration of nivolumab may be a feasible treatment option, even in cases of irAE colitis persisting for over a few months.

The authors state that they have no Conflict of Interest (COI).

Financial Support

This study was supported by a grant from the Promotion Plan for the Platform of Human Resource Development for Cancer by the Ministry of Education, Culture, Sports, Science and Technology, Japan.
Dynamics of soil penetration resistance, moisture depletion pattern and crop productivity determined by mechanized cultivation and lifesaving irrigation in zero till blackgram

Rice fallow black gram is grown as a relay crop under residual moisture in heavy-textured montmorillonite clay soil under zero-till conditions. Since the crop is raised during the post-monsoon season, it often experiences terminal stress due to limited water availability and the absence of rainfall. Surface irrigation in montmorillonite clay soil is detrimental to the pulse crop, as inundation causes wilting. Therefore, zero-tilled rice fallow black gram has to be supplemented with micro irrigation at the flowering stage (35 days after sowing) to alleviate moisture stress and to increase productivity. Hence, a micro farm pond was created in a corner of a one-hectare field to harvest rain water during the monsoon season, and the stored water was used to give the crop lifesaving irrigation through a mobile sprinkler at the flowering stage under conservation agriculture. Soil cracking is also a common phenomenon in montmorillonite clay soil, where evaporation losses are greater through crack surfaces. The present study was therefore conducted to examine the changes in soil physical properties, crop establishment and productivity in conjunction with mechanized sowing and harvest and supplemental mobile sprinkler irrigation. Sowing of black gram by broadcasting 10 days prior to the manual harvest of rice, with a manually drawn single-row seed drill after the machine harvest of rice, and by broadcasting 4 days prior to the machine harvest of rice was tested separately and in combination with lifesaving irrigation. Results indicated that the number of wheel passes and lifesaving irrigation had a very strong impact on soil penetration resistance and soil moisture. The combine harvester followed by the no-till seed drill increased the soil penetration resistance in all the layers (0-5 cm, 5-10 cm and 10-15 cm). Two passes of the wheel increased the mean soil penetration resistance from 407 kPa to 502 kPa. The soil penetration resistance at harvest showed that black gram sown by manual broadcasting 10 days prior to the manual harvest of paddy and supplemented with lifesaving irrigation on 30 DAS had reduced soil penetration resistance, from 690 kPa to 500 kPa, 740 kPa to 600 kPa and 760 kPa to 620 kPa at the 0-5 cm, 5-10 cm and 10-15 cm layers, respectively. In general, the moisture depletion rate was more rapid in the surface layer (0-5 cm) than in the 5-10 cm and 10-15 cm layers up to 30 DAS (flowering stage). Moisture content and soil penetration resistance had an inverse relationship. Soil penetration resistance also had an inverse relationship with root length: root length decreased as soil penetration resistance increased. The soil cracks measured at 60 DAS were deeper with the no-till seed drill (width of 3.94 cm and depth of 13.67 cm), which was mainly due to surface layer compaction. The relative water content, specific leaf weight and chlorophyll content were significantly improved by the supplemental irrigation given on 30 DAS, irrespective of the crop establishment method. The results further indicated that compaction of the ploughed layer in moist soil due to the combine harvester and no-till seed drill had a negative impact on yield (457 kg ha⁻¹), which was improved by 19.03 per cent through the increased soil moisture from supplemental irrigation.
The mean yield increase across different treatments due to supplemental lifesaving irrigation through the mobile sprinkler was 20.4 per cent.
Introduction

Relay cropping is a traditional cropping system in which the seeds of short-duration pulses or oilseed crops are broadcast in a standing paddy field at the time of harvest. In the Cauvery Delta Zone of Tamil Nadu, India, relay cropping of black gram (Vigna mungo (L.) Hepper) is a unique system in which sprouted seeds of black gram are broadcast in the standing rice crop 7-10 days prior to its harvest and grown under no-tillage conditions using the residual moisture and nutrients in the soil [1].

Though rice-pulse relay cropping has been practiced from time immemorial in the Cauvery Delta Zone of Tamil Nadu, the productivity of pulses is very low due to poor crop management practices. Unlike irrigated pulses, improved production and protection technologies are not adopted by farmers, as the crop is considered an incentive crop after the harvest of rice. Reference [2] reported that, for these reasons, the yield of rice fallow pulses is low due to various biotic and abiotic stresses and the non-adoption of technologies. As far as rice fallow black gram is concerned, the crop is grown entirely on residual moisture, and as a result, soil and water play a pivotal role in determining the yield. Since irrigation is withdrawn 10-15 days before the harvest of the rice crop, the moisture content in the soil declines rapidly as the cropping period advances. Especially from the second fortnight of February, due to rising temperatures, the crop faces drought during the flowering and pod formation stages, which eventually results in poor black gram yield. In order to evade terminal moisture stress in rice fallow black gram, supplemental irrigation at 30-35 DAS (pre-flowering) is useful to augment the yield [3]. In expanding montmorillonite clay, surface irrigation is detrimental to the crop, as inundation leads to wilting; hence, portable or mobile sprinkler irrigation is a better choice of supplemental lifesaving irrigation to avoid terminal drought. However, water availability during the post-monsoon period is the major concern. Lifesaving supplemental irrigation through mobile sprinklers using harvested water could be a viable strategy to increase productivity in rice fallow situations [3].

A shallow ditch 2 m deep with a 2:1 or 1:1 slope, constructed in a corner of the field to harvest rainfall runoff, can serve as a source of supplemental irrigation (4-6 cm) to avoid terminal stress in a rainfed rice crop. The rain water harvested in the on-farm reservoir can be used either for supplemental irrigation or for land preparation for the timely sowing of the rainfed rice crop. The water stored in the reservoir may also provide two or three supplemental irrigations [4].
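As a rough check on the storage such a reservoir must hold, the sketch below converts an irrigation depth to a water volume per hectare. It is a minimal illustration: the 4-6 cm depths come from the text above, while the conversion constants (1 ha = 10,000 m², 1 m³ = 1,000 L) are standard.

```python
def irrigation_volume_litres(depth_cm: float, area_ha: float) -> float:
    """Volume of water (litres) needed to apply `depth_cm` of
    irrigation over `area_ha` hectares."""
    depth_m = depth_cm / 100.0
    area_m2 = area_ha * 10_000.0          # 1 ha = 10,000 m^2
    return depth_m * area_m2 * 1_000.0    # 1 m^3 = 1,000 L

# A single 4-6 cm supplemental irrigation over one hectare:
for d in (4.0, 6.0):
    print(f"{d} cm over 1 ha -> {irrigation_volume_litres(d, 1.0):,.0f} L")
# 4 cm -> 400,000 L; 6 cm -> 600,000 L
```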
Maintenance of an adequate plant population is a prerequisite to maximize returns from rice fallow pulse cultivation. The practice of broadcasting black gram seeds 7-10 days prior to the harvest of the rice crop, after the final irrigation under waxy soil conditions, is conventionally adopted in the Cauvery delta region [5]. Sub-optimal population and uneven plant density are the common constraints associated with broadcasting black gram seeds in a standing rice crop. The trampling effect of wheels traversing the field during combine harvesting damages the establishing pulse crop; with the wider adoption of mechanized harvesting, this has emerged as a major abiotic stress in rice fallow cultivation. Though mechanized cultivation is advantageous in crop production, surface soil compaction (crusting) and subsurface soil compaction (hard pan) are emerging problems. Especially under wetland conditions, employing a combine harvester on deep clay soils results in greater vertical stress due to heavy wheel loads, and vertical stress on the subsoil layers is increasing with the growing use of farm machinery [6].

Soil compaction is commonly caused by different tillage methods and the use of power-operated or bullock-drawn machinery for puddling, transplanting, intercultural operations and surface [7]. To quantify the vertical stress in terms of degree of compaction, especially where no tillage is adopted immediately after employing a combine harvester, measurement of soil penetration resistance (SPR) is a meaningful approach for quantifying soil quality and identifying the layers with an increased degree of compaction [8]. Reference [9] also indicated that soil penetration resistance, a function of several mechanical properties of soil, provides a rapid method to characterize the variability of soil strength or compaction within different layers of the soil profile.

Studies on soil compaction have indicated that it changes soil structure, increases bulk density and penetrometer resistance, reduces soil aeration, decreases water infiltration and reduces hydraulic conductivity [10]. The degree of soil compaction is determined by soil mechanical impedance, wheel passes and soil moisture content. A soil penetrometer is generally used to estimate the mechanical resistance of soil [11], and penetration resistance studies explain the correlation between soil strength and root growth well [12]. A decrease in macropores and an increase in micropores are associated with increased bulk density, which ultimately affects the soil hydraulic properties, the ability of the soil to shrink, and soil water conductance [13]. Increased bulk density has also been reported to decrease the number of nodules [14]. Reduced root length and elongation and decreased nutrient availability and uptake under compacted soils are the reasons for low crop yield; soil compaction causes yield losses of 5 to 90 per cent depending upon the number of passes by heavy vehicles [15]. Though soil cracks are said to reduce soil erosion and enhance soil moisture reserves, the movement of water through soil cracks during the post-rainy season, especially in extended dry seasons, is crucial, as evaporation losses are greater through crack surfaces [16]. The present experiment was hence conducted to study the changes in the dynamics of soil penetration resistance, moisture depletion rate, crop establishment and productivity in conjunction with
mechanized sowing and harvest and supplemental irrigation.

Site description

This study was conducted over four years during the post-rainy seasons (Dec-March) of 2014-15, 2015-16, 2016-17 and 2017-18 at the Tamil Nadu Rice Research Institute, Aduthurai (11°01′N, 79°48′E, 19.5 m a.s.l.), Tamil Nadu, India. The annual rainfall amounts during the study years were 1237.2, 1292.4, 530.1 and 1488 mm in 2014, 2015, 2016 and 2017, respectively (Fig. 1); however, rainfall during the study period of January-March was nil in all four years. The region is characterized by a sub-tropical climate with a hot, dry summer (March-June) and an extended wet period from September to February. The mean annual rainfall is about 1176 mm, the majority of which is received during the North East Monsoon. The mean annual maximum and minimum temperatures were 33.3 °C and 23.5 °C, the mean annual relative humidity was 89 per cent, and the mean wind velocity and bright sunshine hours were 5.2 km h⁻¹ and 6.7 h day⁻¹.

Fig. 1. Rainfall (mm) during the cropping seasons.

The field experiment commenced with the sowing of black gram (variety ADT 3) in January, after the long-duration paddy crop (variety CR 1009). Sowing by manual broadcasting, as per the traditional method, was done ten days prior to the manual harvest of paddy (T1). A manually drawn single-row seed drill (IIPR prototype) was employed to place the seeds after the rice was harvested with a chain-type combine harvester, at an inter-row spacing of one foot (T2); this sowing was taken up immediately after the harvest of paddy to avoid soil moisture loss, with seeds falling continuously into a V-shaped furrow (7.5 cm deep) opened by the disc-type wheel of the seed drill. In another treatment (T3), sowing by broadcasting 3-4 days prior to the machine harvest of paddy (chain-type combine harvester) was also tested. Lifesaving irrigation with a portable mobile sprinkler was combined with T1 (giving T4), T2 (T5) and T3 (T6).

Experimental design and treatments

A pond of 15 m × 6 m × 2 m (L × B × H) was dug in a corner adjacent to the experimental field to harvest rain water during the North East Monsoon. The stored water was subsequently used to irrigate the pulse crop through a mobile sprinkler at the critical stage (flowering, 35 days after sowing). Thus, even after the cessation of the North East Monsoon in December, the water harvested and stored in the farm pond could be used for mobile sprinkler irrigation to mitigate moisture stress at the flowering stage in February. The full storage capacity of the pond is 210,000 L. The average stored water depth after the cessation of the North East Monsoon was 1.65 m (173,250 L), and the average water availability in the pond at the time of flowering was 105,000 L. The mobile sprinkler discharge rate was 80 L per minute. The water requirement per hectare for an irrigation depth of 10 mm through the mobile sprinkler is 100,000 L, so the water available at flowering was sufficient to irrigate to a depth of 10 mm in all four years of the study.

The soil of the experimental site was a montmorillonitic, isohyperthermic, Udorthentic Chromustert of heavy clay texture, with pH 7.8, low organic carbon (0.15 %), medium available nitrogen (288 kg ha⁻¹), high available phosphorus (35 kg ha⁻¹) and medium available potassium (376 kg ha⁻¹). The soil bulk density was determined on undisturbed soil samples using the cutting ring method (Li et al., 2012); the mean bulk density of the soil was 1.26 g cm⁻³.
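Using the pond and sprinkler figures given above, the water budget at flowering reduces to a short calculation; the sketch below is a minimal check (all values come from the text, with only unit conversions added).

```python
pond_capacity_l = 210_000             # full storage capacity of the pond
available_at_flowering_l = 105_000    # average water available at flowering
required_for_10mm_per_ha_l = 100_000  # 10 mm depth over 1 ha = 100 m^3
sprinkler_discharge_l_min = 80        # mobile sprinkler discharge rate

# Is the stored water sufficient for one 10 mm irrigation of 1 ha?
sufficient = available_at_flowering_l >= required_for_10mm_per_ha_l
print(f"sufficient: {sufficient}")  # True, consistent with the text

# Sprinkler running time needed to deliver 10 mm over 1 ha:
minutes = required_for_10mm_per_ha_l / sprinkler_discharge_l_min
print(f"run time: {minutes:.0f} min (~{minutes / 60:.1f} h)")  # 1250 min (~20.8 h)
```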
The black gram seeds of variety ADT 3 were used as the test crop. In a plot size of 10.0 m × 4.0 m, 33 rows of black gram with 40 plants per row were maintained by dibbling seeds at a spacing of 30 cm × 10 cm. The early post-emergence herbicide quizalofop ethyl was sprayed at 50 g a.i. ha⁻¹ on 15 DAS to keep the experimental plots weed-free and clean.

Sample measurements and data analysis

2.3.1. Soil moisture

Soil profile moisture depletion was assessed with an HH2 moisture meter (Delta-T Devices Ltd., Cambridge, UK) at the 0-5, 5-10 and 10-15 cm layers at weekly intervals throughout the cropping period.

Soil penetration resistance

Penetration resistance was measured with a hand penetrometer (Eijkelkamp, minimal design; reaching up to 1.0 m depth, with an accuracy of 1000 kPa) at six random points per treatment at three depths (0-5, 5-10 and 10-15 cm). At each sampling point, measurements were made at constant speed at the different soil depths. The soil penetration resistance reported for each treatment is the mean of the six measurements at each depth.

Soil crack

Soil cracks were measured in terms of width and depth using a flexible steel measuring ruler in five randomly marked 1 m × 1 m areas in each treatment. The depth of a crack was measured by inserting the ruler down the crack and gently wiggling it until it reached the lowest point; the width was measured perpendicular to the crack walls. Since depth measurements may not be realistic due to obstruction by small clods, crack volume was also measured by pouring river sand into the crack, with the amount of sand required to fill the crack taken as the crack volume in litres.

SPAD meter

The SPAD-502 chlorophyll meter (Minolta Camera Co., Ltd., Japan), a rapid, non-destructive, handheld spectral device, was used to estimate leaf chlorophyll content. SPAD values of the four fully expanded uppermost leaves were determined on 30 and 45 DAS, and the results are reported as SPAD units. Ten randomly selected plants from each plot were measured in the field.

Growth and yield attributes

Data on growth and yield attributes were recorded at harvest from ten randomly selected plants in each treatment. Seed yields were measured as total yield per plot and converted to kg ha⁻¹. The crop was harvested 65 days after sowing.

Number of effective nodules and nodule dry weight

Five plants per treatment were randomly uprooted on 30 and 45 DAS for counting the number of effective root nodules per plant and for the nodule dry weight study. Plants were uprooted from the soil along with the ball of earth, without disturbing the roots, and the roots were gently washed in water. After removing the soil, the nodules were separated from the roots and examined for the presence of leghemoglobin to estimate the number of effective nodules per plant. Nodule dry weight was obtained by oven-drying the nodules at 80 °C. The total number of effective nodules and the dry weight of nodules were measured in all treatments and the mean values were calculated [17].
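The per-treatment summaries described above (five uprooted plants per treatment) reduce to simple means; the sketch below shows the aggregation, with the counts and weights as hypothetical values.

```python
from statistics import mean

# Hypothetical observations for one treatment: five uprooted plants
effective_nodules = [22, 25, 23, 24, 24]              # count per plant
nodule_dry_weight_g = [0.16, 0.18, 0.17, 0.17, 0.18]  # g per plant

print(f"mean effective nodules/plant: {mean(effective_nodules):.2f}")      # 23.60
print(f"mean nodule dry weight:       {mean(nodule_dry_weight_g):.3f} g")  # 0.172
```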
Relative water content

The relative water content (RWC) expresses the water content, in per cent, at a given time relative to the water content at full turgor, and describes the degree of water saturation of plant leaves. The RWC was measured using the formula suggested by Ref. [18].

Specific leaf weight (SLW)

Specific leaf weight (SLW) is one of the few morphological characteristics of plants that shows large changes over the course of a single day. SLW was calculated using the formula of Amanullah [19] and expressed in mg/cm²:

Specific leaf weight (SLW) = leaf dry weight per plant / leaf area per plant

Statistical analyses

Analysis of variance (ANOVA) was used to detect the significance of treatment effects on the different parameters studied. Least significant difference (LSD) was used to separate means whenever the treatment means were significantly different; in general, differences are reported at the 5% probability level [20]. Combined analysis of variance was performed after testing the homogeneity of error variances with Bartlett's test (Snedecor and Cochran 1983). Microsoft Excel 2010 was used for diagrams and regression analysis [21]. Because the results were similar across the four experimental seasons, pooled mean values of the four-year data are presented in the tables and figures.

Soil penetration resistance

The different crop establishment methods had a great influence on soil penetration resistance irrespective of the soil layer (Fig. 2). In general, soil penetration resistance increased as the crop period advanced. However, it diminished markedly after the lifesaving irrigation given through the mobile sprinkler at flowering on 30 DAS, as is evident from the data observed on 40 DAS. Though the penetration resistance decreased on 40 DAS, it subsequently increased up to the harvest of black gram in all the sowing methods.

The wheel passes of the harvester and seed drill (two passes) significantly increased the soil penetration resistance at all stages of observation, irrespective of the layer. At sowing, the soil penetration resistance was almost zero at the 0-5 cm layer with the farmers' practice of sowing black gram 10 days prior to the manual harvest of rice (no pass), and only 20 and 40 kPa at the 5-10 cm and 10-15 cm layers, respectively. With the modified sowing time of 4 days before the machine harvest of rice (single pass), the soil penetration resistance at sowing was 20, 40 and 60 kPa at the 0-5, 5-10 and 10-15 cm layers, respectively. Seed-drill-sown black gram after the machine harvest of rice (two passes of the wheel) had higher soil penetration resistance at sowing (40, 60 and 80 kPa at the 0-5, 5-10 and 10-15 cm layers) than the other methods, and this held at all stages of observation. Similarly, the soil penetration resistance at harvest was also highest with the seed-drill-sown black gram at all layers (720, 760 and 770 kPa at the 0-5, 5-10 and 10-15 cm layers). Where lifesaving irrigation was given at flowering (30 DAS), the soil penetration resistance at harvest of the no-till seed-drill-sown black gram was markedly lower (680, 720 and 740 kPa at the 0-5, 5-10 and 10-15 cm layers), almost equivalent to the soil penetration resistance observed on 30 DAS.
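The moisture-SPR regressions analysed next can be reproduced with an ordinary least-squares fit; the sketch below uses hypothetical paired readings and computes slope, intercept and R² (the study reports R² values of up to about 0.96).

```python
def ols_fit(x, y):
    """Ordinary least squares: returns (slope, intercept, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = (sxy * sxy) / (sxx * syy)
    return slope, intercept, r2

# Hypothetical pairs: soil moisture (%) vs penetration resistance (kPa)
moisture = [34.0, 30.0, 26.0, 22.0, 20.0, 18.0]
spr_kpa = [40.0, 150.0, 280.0, 420.0, 510.0, 600.0]
slope, intercept, r2 = ols_fit(moisture, spr_kpa)
print(f"SPR = {intercept:.0f} {slope:+.1f} * moisture  (R^2 = {r2:.3f})")
# ~ SPR = 1204 -34.8 * moisture (R^2 ~ 0.993): the negative slope reflects
# the inverse relationship between soil moisture and SPR.
```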
Taking soil moisture as the dependent variable and SPR as the independent variable, simple regressions indicated that variation in soil moisture content was significantly related to variation in SPR, and inversely so (Fig. 3). Based on the R² values at the 5% level of significance, the fitted equations were significant. The linear relationship between soil moisture content and SPR explained 96.4 % of the variation at 0 DAS, and around 80 per cent at harvest. The regression analysis further indicated that the relationship between soil moisture content and SPR strengthened significantly at 50 DAS, owing to the supplemental lifesaving irrigation given on 30 DAS.

Fig. 3. Relationship between soil penetration resistance (kPa) and soil moisture content (%) during different crop periods.

Soil cracking (cm)

Rice fallow pulses are grown on fine, montmorillonitic, isohyperthermic, Udorthentic Chromusterts of heavy clay texture throughout the Cauvery Delta Zone of Tamil Nadu, India. These soils swell when flooded and begin to crack on drying. Crack width and depth were greatly influenced by the number of wheel passes (Fig. 4). Among the different black gram establishment systems, black gram sown with the seed drill under no-till conditions (two passes) exhibited deeper soil cracks, with a width of 3.94 cm and a depth of 13.67 cm, than the other sowing methods. Though the width and depth of the cracks under the farmers' practice of sowing black gram 10 days before the manual harvest of rice (no pass) were greater than under sowing 4 days before the harvest of rice (single pass), the volume of sand required to fill the cracks was smaller than in the other treatments.

Bulk density (g/cc)

The ratio of the mass of dry solids to the bulk volume of the soil was greatly influenced by the different crop establishment methods and by supplemental irrigation. In general, bulk density at 0 DAS ranged from 1.28 g/cc to 1.42 g/cc at the 0-5 cm layer across no pass, single pass and two passes of the wheel in no-till soil, and from 1.36 g/cc to 1.52 g/cc at the 5-10 cm layer (Table 1). The measured values indicated that black gram sown with the seed drill after the machine harvest of rice (two passes) significantly increased bulk density at the 0-5 cm and 5-10 cm layers, by 20.1 and 6.9 per cent, over manual broadcasting of black gram before the manual harvest of rice (no wheel traffic). However, bulk density at the 10-15 cm layer was not greatly influenced by the wheel passes of the seed drill and harvester.

The supplemental lifesaving irrigation given on 30 DAS had a significant influence on bulk density at the 0-5 cm layer, where it ranged from 1.28 g/cc to 1.58 g/cc. The wheel traffic of the harvester, and the furrow opened at optimum moisture by the seed drill, did not respond positively to the supplemental irrigation through the mobile sprinkler: bulk density increased under the water droplets from the sprinkler system. In the no-pass treatment, however, bulk density at the 0-5 cm layer decreased with mobile sprinkler irrigation, from 1.32 g/cc to 1.28 g/cc, though supplemental irrigation did not increase the bulk density at the 5-10 and 10-15 cm layers.

Soil moisture depletion pattern

The assessment of soil profile moisture depletion was carried out by the gravimetric method, taking soil cores at the 0-5 cm, 5-10 cm and 10-15 cm layers (Fig.
5) at 10-day intervals from the sowing of black gram to harvest. The mean moisture percentages at the time of sowing black gram 10 days before the harvest of rice were 34.78 % in the 0-5 cm layer, 33.80 % in the 5-10 cm layer and 33.67 % in the 10-15 cm layer, whereas at sowing 4 days before the harvest of rice they were 33.48 %, 32.23 % and 32.16 % in the respective layers. The moisture percentages for black gram sown with the seed drill after the harvest of rice were 32.81 % in the 0-5 cm layer, 31.02 % in the 5-10 cm layer and 31.22 % in the 10-15 cm layer. The data further revealed that, irrespective of layer, black gram sown 10 days before the manual harvest of paddy (no wheel pass) had the highest soil moisture content throughout the crop period compared with the other treatments. The moisture content observed at flowering, before lifesaving irrigation (30 DAS), ranged from 18.08 to 21.88 % in the 0-5 cm layer, 19.68 to 23.68 % in the 5-10 cm layer and 20.11 to 24.68 % in the 10-15 cm layer across the treatments. The variation in moisture content across the treatments was significant after the lifesaving irrigation. The lifesaving irrigation given at the flowering stage (30 DAS) did not have any profound influence on the moisture content of the 5-10 cm and 10-15 cm layers in any of the crop establishment methods.

A significant and strong relationship between bulk density and soil water content was observed at the different soil layers and observation times (Table 2). The bulk density of an expanding clay is largely determined by the soil water content, as is evident from the R² values: bulk density increased with increasing soil water content and vice versa. In the linear relationship, the intercept predicts the bulk density at maximum soil moisture and the slope indicates the change in bulk density with soil moisture.

Crop physiological parameters

The physiological parameters relative water content (RWC), specific leaf weight (SLW) and chlorophyll content (SPAD value) observed on 30 DAS were not greatly influenced by the different crop establishment methods, owing to the increasing duration of moisture stress (Table 3). However, the observations taken on 45 DAS, after the lifesaving irrigation given at flowering (30 DAS), indicated significant variation in these physiological parameters.

Specific leaf weight varied between 7.04 and 7.66 mg/cm² at 30 DAS across the different methods of black gram sowing, while the relative water content ranged from 81.44 % to 85.91 % and the SPAD values ranged from 32.6 to 37.07. Sowing of black gram 10 days before the manual harvest of rice (no pass) had the highest RWC (85.91 %), SLW (7.62 mg/cm²) and SPAD value (36.93) at 30 DAS. The supplemental lifesaving irrigation through the mobile sprinkler given on 30 DAS significantly increased the SLW, RWC and SPAD values over the non-irrigated treatments. Sowing of black gram 10 days before the manual harvest of rice followed by supplemental lifesaving irrigation on 30 DAS had the highest values, though it was on par with sowing of black gram 4 days before the mechanical harvest of rice (single pass) followed by supplemental irrigation.
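A minimal sketch of the two leaf indices reported above: SLW uses the definition from the methods (leaf dry weight over leaf area, mg/cm²), while RWC uses the standard fresh/turgid/dry-weight formula widely used in the literature (the paper computes RWC after Ref. [18]); the sample weights below are hypothetical.

```python
def relative_water_content(fresh_g: float, turgid_g: float, dry_g: float) -> float:
    """Standard RWC formula: (FW - DW) / (TW - DW) * 100 (per cent)."""
    return (fresh_g - dry_g) / (turgid_g - dry_g) * 100.0

def specific_leaf_weight(leaf_dry_weight_mg: float, leaf_area_cm2: float) -> float:
    """SLW as defined in the text: leaf dry weight per unit leaf area (mg/cm^2)."""
    return leaf_dry_weight_mg / leaf_area_cm2

# Hypothetical sample: FW 0.50 g, TW 0.56 g, DW 0.12 g; 310 mg dry leaf over 42 cm^2
print(f"RWC = {relative_water_content(0.50, 0.56, 0.12):.1f} %")  # ~86.4 %
print(f"SLW = {specific_leaf_weight(310, 42):.2f} mg/cm^2")       # ~7.38
```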
Nodule count and dry weight

Nodulation is considerably affected by the physical properties of heavy soil and by soil moisture. Similar to the trend in physiological parameters, nodule count and nodule dry weight were highly influenced by the different crop establishment methods at 30 DAS (Table 4). Sowing of black gram 10 days before the manual harvest of rice (no pass) registered a higher number of nodules and nodule dry weight (23.67 and 0.17 g) at 30 DAS, whereas sowing of black gram with the seed drill after the harvest of rice (two passes of the wheel) resulted in poor development of root nodules and low nodule weight. The supplemental irrigation given on 30 DAS did not have a profound influence on the number of root nodules or their dry weight on 45 DAS. Though the number of nodules and dry weight (17.44 and 0.11 g) were higher with sowing of black gram 10 days before the manual harvest of rice (no pass) followed by supplemental lifesaving irrigation on 30 DAS, they were comparable with sowing of black gram 4 days before the machine harvest of rice followed by supplemental lifesaving irrigation on 30 DAS.

Root length (cm) and root dry weight (g)

Similar to the physiological parameters, the different sowing methods of black gram had a significant influence on root length and weight at 30 DAS (Table 5). Sowing of black gram 10 days before the manual harvest of rice (no wheel pass) resulted in the highest root length (6.93 cm) and root dry weight (0.073 g) at 30 DAS, compared with two passes under the no-till seed drill (5.87 cm and 0.058 g). Sowing of black gram 10 days before the manual harvest of rice followed by supplemental irrigation through the mobile sprinkler registered the highest root length (8.76 cm) and root dry weight (0.092 g) at 40 DAS.

Yield attributes and seed yield

The growth and yield attributes were significantly influenced by the crop establishment techniques (Table 6). The data on plant density at 10 DAS indicated that the no-till seed drill (two passes) had the highest plant density (42 plants/m²), compared with sowing of black gram 10 days before the manual harvest of rice (32 plants/m²); sowing of black gram 4 days before the machine harvest of rice resulted in a reduced plant population (28 plants/m²). However, plant population did not differ significantly at harvest. Though plant density was highest with the no-till seed drill at 10 DAS, the population was drastically reduced, to two thirds, by harvest.

Black gram broadcast 10 days before the manual harvest of rice (no pass) had a greater number
of pods per plant (20), more seeds per pod (6.0), higher 100-seed weight (4.97 g) and higher grain yield (656 kg ha⁻¹) than the no-till seed drill (two passes); the yield improvement over the seed-drill-sown black gram was 30.4 %. The critical role of supplemental irrigation at the flowering stage is to bridge dry spells, especially to offset the terminal stress risk in relay cropping. Significant yield improvement was observed with supplemental irrigation given at flowering (30 DAS), irrespective of the sowing method. The mean yield increase across the different treatments due to mobile sprinkler irrigation at flowering was 16.8 per cent over the non-irrigated treatments. Black gram broadcast 10 days before the manual harvest of rice (no pass) followed by supplemental irrigation gave a 30.7 per cent higher yield than black gram sown with the seed drill after the machine harvest of rice (two passes) with supplemental irrigation.

Soil penetration resistance

Soil penetration resistance (SPR) is an important parameter of soil strength and is considered an indicator of soil compaction, which determines root growth and crop yield [22]. Mechanized crop cultivation is of growing concern, as wheel traffic from transplanters and harvesters poses a potential threat to the subsoil structure, which is irreversible and ultimately leads to harmful soil compaction [23]. In no-till soil, zero tillage can be characterized by soil penetration resistance [24], and SPR has been used by several researchers to quantify soil quality and to identify the layers with an increased degree of compaction [8].

The experiment was carried out in zero-till black gram grown as a relay crop after rice. In no-till soil, additional compaction with reduced total porosity is commonly observed due to a denser topsoil [25], and soil penetration resistance has been found to be greater under zero tillage than under conventional tillage [26]. The SPR was nil at the 0-5 cm layer on the day black gram was sown 10 days before the manual harvest of rice, owing to the optimal moisture in the rice field. However, SPR was higher when black gram was sown 4 days before the harvest of rice to facilitate machine harvest of the paddy, mainly due to the depletion of soil moisture at the surface. In the present study, soil penetration resistance was highest where black gram was sown with the seed drill after the rice was harvested with the combine harvester: the wheel passes of the combine harvester and the black gram seed drill would have created soil compaction, decelerating root growth and reducing water availability in the deeper soil layers. The results are largely in accordance with reported changes in soil strength, in that the vertical loads of wheels cause greater stress in the soil and a denser top layer.
Irrespective of treatment, a significant reduction in soil penetration resistance in the 0-5 cm layer at 45 DAS was found after the supplemental irrigation given at flowering through the mobile sprinkler. The SPR observed was 400 kPa at the 0-5 cm layer where black gram was sown 10 days before the manual harvest of rice followed by supplemental irrigation, as against 580 kPa without irrigation. The findings indicated that the sprinkler water droplets did not create any soil compaction in the surface soil (0-5 cm layer) in any of the three methods of black gram sowing. This is contrary to earlier findings that the energy of sprinkler water droplets increased surface sealing (soil compaction) and reduced aggregate stability [27,28]; those findings may hold in soils where the crop is irrigated throughout the season rather than given a single supplemental irrigation.

The passes of agricultural machinery create mechanical resistance at the soil surface and in thick layers of soil beneath the plough pan due to soil compaction [29]. In the present study, the use of the combine harvester for rice harvest and the seed drill for black gram sowing created a higher soil penetration resistance of 620 kPa at 45 DAS, even after the supplemental irrigation at flowering. With the seed drill, seeds fell continuously into a V-shaped furrow opened by its disc-type wheel; the furrow depth was about 7.5 cm, and up to 15 cm where soil moisture was excessive. Hence, the seed drill employed on heavy soils with optimum moisture content after the harvest of rice might have increased the soil mechanical impedance. Reference [30] also reported that the use of heavy agricultural machinery, often on soils with high moisture content, significantly increased the risk of soil compaction.

The vertical load of agricultural machinery creates soil compaction, which is evident in increased bulk density or decreased porosity [7]. Bulk density also varies with wheel compaction, tillage management and biological activity, and it generally has a positive relationship with soil penetration resistance and a negative relationship with soil moisture [31]. The results of the present experiment likewise indicated that bulk density increased with soil penetration resistance. Bulk density was highest where black gram was sown with the seed drill after harvest of the rice with the combine harvester, where the number of wheel passes was greatest; the heavy machinery traffic (wheel passes of the seed drill and combine harvester) would have caused mechanical compaction and ultimately increased the bulk density. Similarly, Ref. [32] observed an increase in bulk density due to soil compaction caused by heavy agricultural machinery.
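As a side note on the bulk-density figures discussed here, the cutting-ring determination named in the methods reduces to a mass-over-volume calculation; a minimal sketch with hypothetical ring dimensions and sample mass:

```python
import math

def bulk_density_g_cc(dry_soil_mass_g: float, ring_diameter_cm: float,
                      ring_height_cm: float) -> float:
    """Bulk density (g/cm^3) from the cutting-ring method:
    oven-dry soil mass divided by the ring's internal volume."""
    radius = ring_diameter_cm / 2.0
    volume_cc = math.pi * radius ** 2 * ring_height_cm
    return dry_soil_mass_g / volume_cc

# Hypothetical 5 cm x 5 cm ring holding 123.7 g of oven-dry soil:
print(f"{bulk_density_g_cc(123.7, 5.0, 5.0):.2f} g/cc")  # ~1.26, the study's mean
```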
Reference [33] also observed that wheel traffic caused an increase in soil bulk density even when the wheels were loaded under flooded conditions. In the present study, too, the soil moisture was optimum when the harvester and seed drill were employed; hence, bulk density was higher where there were wheel passes. However, the supplemental mobile sprinkler irrigation given on 30 DAS eased the soil compaction to a small extent, which was reflected in the bulk density values observed at 45 DAS. Continued compaction would have decreased the water content up to 30 DAS, and the increased moisture content from the supplemental irrigation reduced the bulk density values: the moisture entering between the clay lattices would have increased the soil volume, so the bulk density at the surface was reduced. A steady increase in bulk density with decreasing moisture content was reported by Ref. [34]; conversely, a gradual increase in bulk density with increasing soil moisture content was documented by Ref. [33]. Reference [15] noted increases in bulk density and penetration resistance within no-till systems.

Soil moisture pattern

Surface soil compaction created in the no-till soil by the puddled condition of the preceding rice crop, and subsurface soil compaction due to the wheel traffic of the harvester and seed drill, played a major role in determining the soil moisture depletion pattern. Soil crusting due to the breakdown of surface soil aggregates, and the hard pan formed by puddling and wheel passes, restricted water entry into the soil [35].

In the conventional relay cropping system, black gram was sown manually 10 days before the harvest of rice, when the soil condition was waxy. The black gram was broadcast, so there was no vertical loading on the soil during sowing; the rice was also harvested manually, and the black gram was at the 2-3 leaf stage when the rice harvest was completed. This would be the major reason for the slower soil moisture depletion under the conventional system compared with black gram sown manually 4 days before the mechanical harvest of rice and black gram sown with the seed drill on the day of harvest with the combine harvester. The 10-day difference between black gram sown manually before the harvest and black gram sown with the seed drill after the harvest created a large variation in soil moisture. The purpose of relay cropping, utilizing residual moisture, was only partially fulfilled, as the moisture content at the time of sowing after the harvest of rice was 5.6, 8.22 and 7.27 per cent lower at the 0-5 cm, 5-10 cm and 10-15 cm layers, respectively, than for black gram sown 10 days before the harvest. Further, the V-shaped furrow opened during sowing with the seed drill exposed the topsoil completely and would have disintegrated the soil aggregates; this deleterious effect might have caused rapid depletion of soil moisture in the subsequent stages compared with the other sowing methods. The surface compaction caused by the combine harvester through ground contact pressure, and the subsoil compaction due to the axle load of the seed drill, would have damaged the soil structure, and as a result of this compaction the moisture retention capacity was reduced, as seen in the soil moisture depletion rate throughout the crop period. The results are in accordance with the observation of [36] that soil compaction and wheel traffic decrease the moisture retention capacity and
hydraulic conductivity.

In order to avoid further drying of the surface soil and depletion of soil moisture, mobile sprinkler irrigation was given at the flowering stage. The soil moisture data recorded at 40 DAS revealed that, although soil moisture did not increase dramatically, the supplemental irrigation was able to maintain almost the same moisture content at the 0-5 cm layer as at 30 DAS in black gram broadcast 10 days before the manual harvest of rice and 4 days before the machine harvest of rice. However, the soil moisture content did not improve in the seed-drill-sown black gram: the wheel traffic of the harvester and seed drill would have decreased the water infiltration rate when the supplemental irrigation was given. Reference [36] likewise held that a compacted soil profile contains small pores in the upper layer, leading to higher evaporation losses and poor moisture retention. Increased soil penetration resistance and reduced water uptake due to the drying of the topsoil were also documented by Ref. [37], and Ref. [38] showed that the infiltration rate was significantly reduced by wheel traffic, which eventually reduced the soil moisture retention capacity. As the crop was raised under no-till conditions in all sowing methods, the lifesaving irrigation did not alter the soil moisture content at 5-10 cm and 10-15 cm; the single supplemental irrigation at flowering was only 10 mm, which would not have infiltrated the deeper layers of the soil. Soil compaction and reduced infiltration due to the energy of sprinkler droplets were reported earlier by Ref. [28].

Soil crack

Soil type is another factor that determines soil compaction through its influence on soil aggregates. The present study was conducted in a clayey soil in which montmorillonite is the dominant mineral. Swelling is common in clayey soils owing to the formation of the diffuse double layer and flocculation: the soil shrinks on drying, with deep, wide cracks, and swells when moistened. With continuous moisture loss in clay soil, shrinkage ends in cracking due to the generation of soil suction [39]. Tillage methods and organic amendments can alter soil cracking patterns [16].

The changes in soil strength due to the wheel traffic of the combine harvester and the seed drill in the present study increased crack width and depth by 7.9 and 25.8 per cent over manual sowing and harvest. The maximum cracking area under a no-tillage system was earlier observed by Ref. [40]. Though [40] reported that higher soil compaction results in lower shrinkage and cracking, the furrow opened to a depth of 7.5 cm at optimum soil moisture during the seed drill operation might have resulted in deeper cracks and greater crack volume. The data on crack volume further indicated that, although crack width and depth were greater in black gram sown manually 10 days before the manual harvest of rice than in black gram sown manually 4 days before the machine harvest, the volume of sand required to fill the cracks was highest in black gram sown manually 4 days before the machine harvest, owing to the soil compaction and plough pan created by the wheel passes of the combine harvester. Reference [41] also observed greater loss of water through bypass flow due to higher crack width and volume resulting from soil compaction, and soil shrinkage and hydraulic conductivity have been reported to be affected by increased bulk density [13].
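The sand-fill protocol from the methods gives crack volume directly, and it can be contrasted with a naive width x depth x length estimate; a minimal sketch with hypothetical measurements (the 3.94 cm width and 13.67 cm depth come from the text, while the crack length and sand-fill reading are hypothetical):

```python
# Crack dimensions from the text (no-till seed drill treatment); length is hypothetical
width_cm, depth_cm, length_cm = 3.94, 13.67, 100.0

# Naive rectangular-slab estimate (overstates volume: real cracks taper with depth)
naive_volume_l = (width_cm * depth_cm * length_cm) / 1000.0  # cm^3 -> litres

# Sand-fill measurement: litres of river sand poured until the crack is full
sand_fill_volume_l = 3.1  # hypothetical reading

print(f"naive estimate: {naive_volume_l:.2f} L, sand-fill: {sand_fill_volume_l:.2f} L")
# The sand-fill value is the one the study treats as the crack volume.
```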
Root growth

Penetration resistance is one of the four physical barriers that limit root development and plant growth. The pressure that must be exerted to drive the root cap through the soil in the elongation zone is largely controlled by the soil penetration resistance [42], and mechanical compaction of soil increases bulk density, which modifies root configuration and root-soil interactions [43].

Root length and root dry weight varied greatly with soil penetration resistance and bulk density. Irrespective of treatment, the supplemental irrigation given on 30 DAS significantly reduced the penetration resistance at 40 DAS, which was reflected in a corresponding increase in root length and root weight. Black gram sown 10 days before the harvest of rice under waxy soil conditions would have shown greater root elongation owing to minimal penetration resistance in the early stages. The increase in soil penetration resistance in the seed-drill-sown black gram reduced root length by 49 % and root dry weight by 25 %. In the soil of black gram sown with the seed drill after the machine harvest of rice, the lower moisture content would have resulted in poor root elongation due to increased soil penetration resistance. Soil moisture content always had a negative relationship with soil penetration resistance, and the resulting increase in penetration resistance severely reduced root and plant growth [44] through lack of oxygen and the accumulation of toxins [45]. The reduction in soil water content and the increase in bulk density would have restricted root penetration. The use of machinery increased the soil penetration resistance and bulk density and, as a result, restricted root growth and lowered soil moisture content [46]. Reference [31] also reported that higher bulk density affects root distribution and biomass due to soil compaction.
Root nodules

Nitrogen fixation efficiency in leguminous crops is affected by root nodule development. Mechanized cultivation in modern agriculture often carries a risk of soil compaction due to the use of heavy machinery for sowing and harvest. Compaction causes substantial changes in the soil environment that affect root nodulation, especially in clay soils, by virtue of their greater resistance to air movement through the formation of an impermeable layer [45]. The number of nodules per plant was significantly higher with black gram sown 10 days before the manual harvest of rice. Nodules normally initiate about ten days after sowing in black gram; compared with the other methods of sowing, black gram sown in this way had sufficient soil moisture and better microclimatic conditions, which would have helped produce a greater number of effective nodules and higher nodule dry weight at 30 DAS. The higher penetration resistance and lower moisture content in the seed-drill-sown black gram reduced the number of root nodules per plant by 22.30 per cent and the dry weight by 17.30 per cent. Reference [47] observed significant reductions in the number of nodules due to soil compaction. In the present study, however, though the number of nodules per plant was lower in soil with higher penetration resistance due to more wheel passes, the weight per nodule was greater than in soil with lower penetration resistance: the mean weight per nodule was 9.20 mg and 9.38 mg at 30 and 45 DAS, respectively, in seed-drill-sown black gram after the machine harvest of rice, compared with 8.71 mg and 9.37 mg for manually sown black gram 10 days before the manual harvest of rice. Similarly, Ref. [48] reported that although the number of nodules per plant is reduced in compacted soil, the nodules are larger than those observed in non-compacted soil.

Effective nodulation and nitrogen fixation in pulses are greatly affected by soil moisture, and optimum soil moisture is required for nodule formation [49]. The supplemental irrigation given through the mobile sprinkler on 30 DAS significantly increased the nodule number per plant and dry weight. The increase in soil moisture would have enhanced physiological and photosynthetic activity and thus improved the number of root nodules [50]. An increase in soil moisture of up to 50 per cent has been reported to increase nodule formation and nodule dry weight [51], and the cessation of nodulation in the absence of irrigation was reported earlier by Ref. [52]. In general, the data on the number of nodules at 45 DAS showed a declining trend in all methods of black gram sowing: rapid depletion of soil moisture leading to soil compaction would have restricted nodule production in the later stages. In addition, nodulation in pulses normally starts around 9 DAS, reaches its peak at the 50 per cent flowering stage and begins to decline after 45 DAS due to spontaneous degeneration [53]; the number of nodules per plant increases progressively only up to the vegetative phase and starts to decline in the flowering phase.
Physiological parameters

Soil penetration resistance due to soil compaction limits root growth and, subsequently, the physiological performance of the crop; the decrease in soil moisture associated with higher penetration resistance and bulk density in compacted soil retards physiological activity [54]. The relative water content at 30 DAS was significantly reduced, by 5.2 per cent, in the seed-drill-sown black gram, which might be due to low stomatal conductance resulting from the increased soil penetration resistance and bulk density at 30 DAS, as this crop was sown after the harvest of rice. During stress periods, stomatal closure reduces the relative water content [55].

Soil moisture content had a direct influence on the relative water content at 30 DAS, as the crop was grown under residual moisture without irrigation. The leaf RWC improved significantly by 40 DAS when the crop was given supplemental irrigation; the increase in relative water content at 40 DAS with supplemental irrigation was 3.2 per cent over the non-irrigated crop. The reduction in RWC also caused a considerable decrease in total leaf area, which was evident in the specific leaf weight: the reduced water supply from the soil would have lowered photosynthetic activity through low stomatal conductance. A similar reduction in relative water content and specific leaf weight was reported earlier by Ref. [56]. Thus, the poor soil conditions resulting from compaction inhibited various physiological activities [55].

The restricted root growth due to soil compaction in the seed-drill-sown black gram after the machine harvest of rice reduced the root demand for photosynthates, which was eventually reflected in the SPAD readings. Stomatal conductance has been reported to decrease under the high bulk density of compacted soil, which in turn alters electron transfer capacity and chlorophyll content [57]. The chlorophyll content was significantly lower in the compacted soil owing to poor N uptake by the restricted roots; the photosynthetic efficiency of a crop is limited by scant chlorophyll content [54].

Yield

Soil compaction that hinders root growth and nutrient uptake, and the resulting yield reduction, are major concerns of farm mechanization [10]. Wheel passes and poor soil management create impermeable layers, especially through the puddling operation; though an impermeable layer improves water retention capacity, it restricts water and nutrient cycling. Many authors have reported that subsoil compaction negatively influences soil physical conditions and substantially decreases crop yield. However, Ref. [58] showed that moderate compaction has no effect on crop yield or can even increase it, and Ref. [59] stated that moderate soil compaction favours seedling emergence, root growth, and moisture and nutrient uptake.
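The yield comparisons reported in the results depend on the baseline chosen for the percentage; the sketch below shows both conventions using the yields given earlier (656 kg/ha for the no-pass treatment, 457 kg/ha for the seed-drill treatment). The reported 30.4 per cent appears to correspond to the difference expressed against the higher, no-pass yield, presumably from unrounded data.

```python
def pct_of_base(new: float, base: float) -> float:
    """Percentage difference of `new` vs `base`, relative to `base`."""
    return (new - base) / base * 100.0

y_no_pass, y_seed_drill = 656.0, 457.0  # kg/ha, from the results above

print(f"relative to seed drill: {pct_of_base(y_no_pass, y_seed_drill):+.1f} %")  # +43.5 %
print(f"relative to no pass:    {pct_of_base(y_seed_drill, y_no_pass):+.1f} %")  # -30.3 %
# The second convention (difference against the no-pass yield) is close to the
# reported 30.4 % figure.
```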
The traditional practice of broadcasting black gram in the standing rice crop as a relay crop, utilizing the residual moisture and nutrients, improves the soil environment, conserves soil resources, and saves time and energy. Plant density per unit area is very important in determining the yield of any crop. In the present study, the initial plant population observed at 10 DAS, and the population at harvest, were highest with black gram sown with the seed drill after the harvest of rice. Seed placement with the seed drill was not optimal, and more seeds were dropped than required for the target population per unit area. The crowded population resulted in inter-plant competition, so the plants grew tall with fewer branches and fewer pods per plant. Reference [60] earlier reported that a higher population per unit area causes mutual shading, which reduces photosynthetic efficiency and ultimately crop yield. In addition, the larger plant population per unit area also depleted more soil moisture, which ultimately resulted in poor yield attributes and yield. Reference [22] likewise observed that soil penetration resistance in compacted soils restricts crop root growth and water uptake and finally leads to yield reduction.

Supplemental lifesaving irrigation generally increases crop yield in all crops. A mobile sprinkler irrigation system normally maximizes water use efficiency, especially under water stress; this kind of lifesaving supplemental irrigation is regarded as deficit irrigation [61]. The lifesaving irrigation through the mobile sprinkler at the flowering stage eased the soil penetration resistance to some extent, halted the soil moisture depletion, and improved physiological activity, root growth, yield attributes and yield. Supplemental irrigation given between jointing and anthesis has been shown to significantly increase grain yield, WUE and HI in wheat [62,63], and Ref. [64] obtained yields with deficit irrigation similar to those of well-irrigated crops, owing to better root growth and elongation that maintained relative water content.

Conclusion

Soil compaction under mechanized cultivation in no-till black gram had a significant effect on soil penetration resistance, bulk density and the soil moisture depletion pattern. The results of the experiment indicated increased soil penetration resistance, bulk density and soil crack volume due to the mechanized harvest of rice and seed-drill sowing of black gram. Soil penetration resistance and bulk density had an inverse relationship with soil moisture content, and as a result, poor root growth and root dry weight were observed with two passes of the wheel (machine harvest and seed drill) compared with a single pass (machine harvest) and no pass (manual harvest). Supplemental lifesaving irrigation given at the flowering stage had a profound influence on soil properties and crop growth irrespective of the wheel traffic. Two passes of the wheel on soil at optimum moisture created more compaction and had a negative impact on crop growth and yield. Though broadcasting black gram seeds 10 days before the manual harvest of rice along with lifesaving irrigation on 30 DAS significantly increased the pod yield of black gram under zero-till conditions compared with seed-drill sowing after the machine harvest of rice, broadcasting black gram seeds 4 days before the machine harvest of rice along with lifesaving irrigation may be recommended to obtain a comparable yield of black gram.
Fig. 2. Soil penetration resistance as influenced by mechanization and lifesaving irrigation.
Fig. 4. Soil crack changes due to mechanization and lifesaving irrigation.
Fig. 5. Soil moisture depletion pattern as influenced by different treatments.
Treatments: T1 - manual broadcasting of seeds 10 days prior to the manual harvest of paddy; T2 - no till seed drill after the machine harvest of paddy; T3 - manual broadcasting of seeds 4 days prior to the machine harvest of paddy; T4 - T1 + lifesaving irrigation; T5 - T2 + lifesaving irrigation; T6 - T3 + lifesaving irrigation.
The field experiments were conducted during 2014-15, 2015-16, 2016-17 and 2017-18 (Dec-March) at Tamil Nadu Rice Research Institute, Aduthurai (11°01′N, 79°48′E, 19.5 m a.s.l.), Tamil Nadu, India. The study area is characterized by a tropical climate with distinct wet and dry seasons, with an annual rainfall of 1169.4 mm. The amounts of annual rainfall during the study years were 1237.2, 1292.4, 530.1 and 1488 mm in 2014, 2015, 2016 and 2017 respectively. The total precipitation during the South
Table 1. Trends of soil bulk density (g/cc) in different layers due to mechanization and lifesaving irrigation.
Table 2. Regression between soil moisture content and bulk density at different layers.
Table 3. Crop physiological parameters of blackgram as affected by different wheel passes.
Table 4. Nodule count and nodule dry weight (g/plant) as affected by different treatments.
Table 5. Root length (cm) and root dry weight (g/plant) as affected by different treatments.
Table 6. Growth and yield attributes and seed yield (kg/ha) as influenced by mechanization and lifesaving irrigation.
2024-03-25T15:19:25.964Z
2024-03-01T00:00:00.000
{ "year": 2024, "sha1": "d13362b1be337a32e5432cd89504ddc99571f300", "oa_license": "CCBYNC", "oa_url": "http://www.cell.com/article/S2405844024046565/pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "24dab1a26a4a96f086a449f9bd6a2f6c10c2b761", "s2fieldsofstudy": [ "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
270023527
pes2o/s2orc
v3-fos-license
A Novel Surgical Technique for the Management of Large-Volume Neurogenic Heterotopic Ossification Following a Spinal Cord Injury: The Sashimi Technique

This case series investigates the efficacy of the "sashimi technique," a novel surgical approach utilizing a curved chisel for the resection of heterotopic ossification (HO). The main focus is on reducing resection margins and preventing excessive bone removal while maintaining optimal functional outcomes and preventing recurrence. Two cases illustrate successful outcomes in patients with spinal cord injuries and severe HO of the hip, emphasizing the precision of the curved chisel-based technique in improving patient mobility while still achieving the desired resection margin. The study highlights the effectiveness of using a curved chisel in protecting neurovascular structures and maintaining resection precision. Additionally, the integration of postoperative radiotherapy and pharmacological treatment is emphasized as a strategy to prevent recurrence. The goal of this procedure is to improve functional outcomes and patient quality of life.

Introduction

Heterotopic ossification (HO) is a pathological condition characterized by abnormal bone formation in soft tissues, often observed following central nervous system injuries. This case series examines the incidence, management, and outcomes of HO in patients. It underscores the complexity of diagnosing and treating HO, highlighting the need for a multidisciplinary approach. The series also explores innovative surgical techniques and postoperative care strategies aimed at reducing recurrence and improving patient mobility. This case series focuses on a novel surgical technique for managing HO, developed to enhance patient outcomes by reducing resection of unaffected bone during surgical treatment. In HO, aberrant bone formation in soft tissues significantly impairs mobility and quality of life post-injury. Our approach integrates advanced surgical methods with comprehensive postoperative care, addressing the unique challenges of HO.

Case Presentation

Case 1

The first case was a 33-year-old male with a history of spinal cord injury, suffering from severe HO of the hip. He had a history of paraplegia due to spinal cord infarction (Th8 AIS A) one year prior. This followed surgery for a thoracoabdominal aortic aneurysm caused by residual dissection after a type A aortic dissection. At age 28, he had experienced the initial type A dissection, which was repaired with an artificial graft and a stent graft. The patient had restricted mobility due to extensive hip joint ossification, making it difficult to maintain a seated posture. The ossification is clearly visible on the preoperative X-ray and shows restriction of the joint space (Figure 1).
CT: computed tomography

The patient also had a computed tomography (CT) scan and CT angiography to map the path of the vasculature surrounding the femur and the heterotopic ossification (Figure 1). The "sashimi technique" was employed for precise ossification removal in this patient through the Smith-Peterson approach. Ectopic bone (300 g) was removed from the site using an osteotome and a bone saw initially, and then a curved chisel was used to resect the bone close to the native femur (Figure 2). The curved chisel was also used in areas in close proximity to neurovascular structures. Postoperative X-ray showed good resection margins (Figure 3). Postoperatively, the patient's hip flexion improved from 20° to 80°. Operative bleeding was 1,297 mL. The postoperative treatment included radiation therapy two days after surgery to prevent recurrence (8 Gy × 1), and the patient was also given oral etidronate disodium 800 mg. The patient was discharged from the hospital six days after surgery. At the 10-month postoperative follow-up, there was no recurrence or sign of ossification. The patient reported that this significantly enhanced sitting comfort and daily activities, allowing him to be more mobile.

Case 2

The second case involved a 23-year-old male with a cervical spinal cord injury (C6 AIS A) who experienced recurrent heterotopic ossification (HO) of the hip. He presented with persistent pain and limited range of motion (ROM) in his left hip. The injury had occurred one year earlier during a company trip when he jumped into a pool, resulting in the need for C4-C6 posterior decompression and fixation surgery. Partial resection of the ectopic ossification in the left hip was performed at another hospital one year after his posterior decompression and fixation surgery. However, 10 months after this resection, he continued to have a limited range of motion, with severe restriction in hip flexion and adduction. He requested a second opinion and was referred to our hospital. The remaining ossification is clearly visible on the preoperative X-ray, which also showed a significant reduction in the joint space (Figure 4).

CT: computed tomography

The patient also had a CT scan and CT angiography to map the path of all vasculature in close proximity to the femur and the heterotopic ossification (Figure 4). A resection was planned using the sashimi technique, and a total of 212 g of ectopic ossification was removed (Figure 5).

FIGURE 5: Patient 2 intraoperative images. Pre-resection view of the femur with heterotopic ossification exposed from the surrounding tissue (A). Utilization of a bone saw for gross resection of the ossification at a safe level, avoiding neurovascular structures and excessive removal of healthy bone (B). Fine resection using a curved chisel to precisely remove the remaining ossified tissue while preserving native anatomy (C and D).

Postoperative X-ray showed good resection margins (Figure 6). His hip flexion increased from 50° to 80°, adduction increased from -10° to 20°, and abduction increased from 35° to 40°. Operative bleeding was 627 mL. The postoperative treatment included radiation therapy two days after surgery to prevent recurrence (8 Gy × 1), and the patient was also given oral etidronate disodium 800 mg. During follow-up, the patient mentioned increased joint mobility, aiding smoother wheelchair transfers and daily functions. There was no recurrence reported six months after surgery.
In both cases, joint mobilization training was started one week after surgery. The curved chisels used in our cases were of set specifications (radius, 40 or 65 mm; length, 48.5 or 65 mm; width, 15 or 20 mm) (Mizuho, Tokyo, Japan).

Discussion

These cases underscore the effectiveness of our developed "sashimi technique" in managing HO. The nomenclature behind the name "sashimi technique" is inspired by the preparation of sashimi in Japanese cuisine, where a chef uses a knife to very carefully cut fish into thin slices. Just as a skilled sushi chef meticulously slices fish with a sharp, curved blade to maintain the integrity and texture of each piece, our surgical technique utilizes a curved chisel to carefully resect the heterotopic ossification, similar to a piecemeal resection approach, while preserving the surrounding healthy bone and soft tissue. This approach, characterized by precise and minimal removal of unaffected bone, aims to enhance postoperative range of motion (ROM) and alleviate patient symptoms. In conventional resection, a straight, flat-edged chisel is used to resect the ossified tissue at perpendicular angles; approaching the edge of healthy bone, the angulation is changed to reduce the volume of tissue resected. However, this does not always preserve all the healthy tissue (Figure 7).

FIGURE 7: Technique of resection using the sashimi technique compared to the conventional resection technique. This figure describes the difference in resection strategies between the conventional resection method (depicted in the upper section of the image) and the sashimi technique (depicted in the lower section of the image). The sashimi technique allows for the preservation of more native bone as compared to the conventional resection method. The figures were constructed by the authors.

In our technique, a curved chisel is used to allow for better maintenance of margins during resection (Figure 8).

FIGURE 8: Demonstration of the sashimi technique and a use case demonstrating margins. This figure depicts an axial view of the sashimi technique and shows the curved path of the resection. The curvature of the chisel also allows for the protection of the vessel in close proximity. The figure was constructed by the authors.
This surgical technique incorporates the principles of high precision, a minimally invasive approach, and a comprehensive understanding of both anatomy and pathology during HO resection. Intraoperative challenges include the critical task of distinguishing between normal bone and ectopic ossification. Equally important is the preservation of vital vascular structures, such as the femoral artery, to ensure sufficient blood flow to the femoral head and minimize the risk of ischemic complications. The utilization of 3D models in preoperative planning emerges as a crucial tool for understanding the complex anatomy and extent of HO. The potential risks associated with HO resection include intraoperative fractures and perioperative complications. The emphasis on surgical intervention within reasonable limits echoes the sentiment of minimizing unnecessary risks. Recent research sheds light on the impact of hip HO on function and on potential management strategies [1]. Studies highlight significant pain reduction and improved function after complete surgical resection. However, when resection margins include a significant amount of native bone, issues such as bleeding, instability, fracture, and functional loss can be a concern [2,3]. Minimally invasive techniques such as computer navigation show promise for improved outcomes [4]. Reduced tissue disruption is closely associated with faster healing, diminished postoperative pain, and a lowered incidence of complications, enhancing the overall recovery experience for patients. Moreover, the preservation of healthy tissue around the site of HO is instrumental in ensuring better long-term outcomes in terms of joint mobility and function, directly contributing to improved patient quality of life. Additionally, conserving surrounding tissue not only facilitates immediate recovery but also preserves options for future surgical interventions or treatments, offering a strategic advantage in long-term patient management. To improve the safety of the procedure, it is important to use intraoperative C-arm X-ray guidance to visualize the resection margins and understand the anatomy during resection. Continuous intraoperative range-of-motion assessment is also crucial to ensure good functional outcomes postoperatively. It is important to take care to avoid resecting into the joint, the articulating surfaces of the femur with the pelvis, as this can cause instability and increase the risk of intraoperative fracture.

The need for thorough patient education to manage expectations and inform patients about potential complications is also underscored. The concept of preoperative embolization is introduced as a strategy to facilitate HO removal, requiring joint input and consultation with vascular surgery. The potential benefits of embolization, such as simplifying the identification and dissection of blood vessels, are a key factor; however, the risks need to be carefully considered and weighed [5,6]. In these cases, we opted not to perform embolization in order to preserve blood flow to the bone. This was primarily to avoid avascular necrosis of the femoral head, understanding that this choice could increase intraoperative bleeding and present additional challenges during surgery.
The advantages of the instrument used in the sashimi technique, the curved chisel, include allowing finer movements and more targeted tissue removal. The curved chisel enhances surgical precision while minimizing the risk of inadvertent damage to surrounding structures. Moreover, the curvature of the chisel facilitates better force distribution, reducing trauma and enhancing overall surgical outcomes. Previous reports of similar uses are seen in the literature, for example in osteosarcoma resection of the femur [7]. Another case was reported in the literature regarding sarcoma resection in the sternum, in a critical area [8].

It is pertinent to highlight the importance of preoperative planning in optimizing the use of the curved chisel technique for resection so as not to compromise resection margins. It has been suggested in the literature that imaging modalities such as magnetic resonance imaging (MRI) can be used to assess the resection margins postoperatively [9]. The sashimi technique, combined with postoperative radiotherapy and etidronate disodium, aims to reduce the recurrence of heterotopic ossification (HO). Although our case series of two patients is limited and does not provide definitive evidence, the existing literature suggests that pharmacological treatments and radiotherapy can effectively reduce recurrence rates [10,11]. It is crucial that postoperative management is optimized to prevent recurrence and further range-limiting masses. Different strategies, including combination therapy with radiotherapy and nonsteroidal anti-inflammatory drugs (NSAIDs), have proven effective [10,11]. Further research is needed to confirm the effectiveness of combining surgical resection with these treatments. Bisphosphonates such as etidronate were also shown in the literature to be the best choice for the pharmacological treatment of already established HO, making etidronate our agent of choice [12]. It is suggested in the literature that serum alkaline phosphatase (SAP) can be used as a marker for heterotopic ossification. Therefore, it might be utilized as a marker for predicting recurrence, with a baseline SAP measured immediately postoperatively and repeated measurements during follow-up [13]. The timing of resection is still debated; however, the general consensus seems to be that earlier resection is better to reduce pain and prevent deterioration of function [14]. Further considerations include the choice of imaging modality in the preoperative planning process. A combination of CT with orthogonal X-ray projections may help improve the quantification of the ossification [15]. In our patients, the range of motion, more than the resection margins, was prioritized to ensure good functional outcomes. It is key to evaluate the functional needs of the patient prior to determining the operative course.
Conclusions

The presented cases serve as a platform for ongoing learning and improvement in surgical techniques. The authors express optimism for future cases and the continued accumulation of knowledge to refine and optimize the approach to HO resection, ultimately improving patient outcomes in this challenging clinical scenario. Combining detailed preoperative planning with precise intraoperative techniques offers a promising path to improving surgical outcomes, reducing recurrence risks, and enhancing the overall quality of life for patients with HO. It is key to explore the patient's functional status, including but not limited to an assessment of the patient's preoperative mobility and frailty, as well as a multifactorial assessment of the patient's ability to perform activities of daily living. Following this, it is important to address their expectations to determine the best course of action.

FIGURE 1: Patient 1 preoperative images. Patient 1 preoperative X-ray (A), CT scan of the hip (B), and CT angiogram reconstruction of the vasculature (C). The yellow arrow depicts heterotopic ossification visible on X-ray. The white arrow shows the extent of heterotopic ossification on the CT scan.

FIGURE 2: Patient 1 intraoperative images. Pre-resection (A), resected ossification (B), and post-resection image of the femur (C). The white arrow shows smooth heterotopic ossification surrounding the femur. The yellow arrow shows the post-resection state of the femur.

FIGURE 3: Patient 1 postoperative X-ray. Anteroposterior (A) and lateral view (B). The yellow arrow shows the remaining heterotopic ossification post-resection. The white arrow shows the remaining heterotopic ossification post-resection on the surface of the femur.

FIGURE 4: Patient 2 preoperative images. Preoperative anteroposterior (A) and lateral X-ray (B), CT scan of the hip (C), and CT angiogram reconstruction of the vasculature (D). The white arrow depicts the wide extent of heterotopic ossification toward the medial aspect of the femur. The yellow arrow depicts the wide extent of heterotopic ossification of the femur. The green arrow depicts the wide extent of heterotopic ossification of the femur and erosion of the pelvis on the CT scan.

FIGURE 6: Patient 2 postoperative X-ray of the hip joint. Anteroposterior (A) and lateral view (B). The yellow arrow shows the post-resection remainder of the heterotopic ossification. The white arrow depicts the extent of resection on the femoral surface in the lateral view.
2024-05-26T15:08:49.325Z
2024-05-01T00:00:00.000
{ "year": 2024, "sha1": "891ba1ba1f38b7db63e1f8e5f1d25c558cb34b43", "oa_license": "CCBY", "oa_url": "https://assets.cureus.com/uploads/case_report/pdf/251081/20240524-32722-c42jjx.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "afcb628b4524537408fa9eb78435d1b758bc30f1", "s2fieldsofstudy": [ "Medicine", "Engineering" ], "extfieldsofstudy": [ "Medicine" ] }
244936528
pes2o/s2orc
v3-fos-license
Towards explainable metaheuristics: PCA for trajectory mining in evolutionary algorithms

Introduction

Non-deterministic algorithms such as population-based metaheuristics have seen an increase in use in applications that involve end-user interactions, such as transport route planning, delivery scheduling and medical applications. This increase has highlighted the need for the decision processes behind these systems to be more understandable by end-users. This in turn may help build a level of trust in the solutions generated by these systems, as seen in the conclusions and recommendations of the Public Health Genetics (PHG) Foundation [1].

Two significant metaheuristic approaches are Genetic Algorithms (GA) and Estimation of Distribution Algorithms (EDA). Both are evolutionary algorithms and explore a solution space using a population-based search metaheuristic. As a GA explores the search space, the solution populations generated represent the implicitly learned structure of the problem it is solving. An EDA similarly represents this but also generates a sequence of explicit probabilistic models of the problem structure. Problem structure refers to the graphical dependency relationship between solution variables and their joint influence on fitness value. This has been variously interpreted in the EDA literature through Bayesian, Markov or Gaussian probabilistic models [2]. For both GAs and EDAs, the populations collected over the course of a run can be considered the trajectory through the search space that the algorithm has taken as it converges on an ideal or near-ideal solution. These trajectories reflect the implicit knowledge gained.

We hypothesize that the trajectories generated in this process can be mined for valuable information regarding population changes that can aid in generating explanations for end-users. Our approach involves the projection of the high-dimension solution space to a lower-dimension space that can be used to generate more easily understood visualisations and provide a possible source of new metrics. This is accomplished through the application of Principal Components Analysis (PCA). The results can then be used to generate explanations with the aim of increasing an end-user's understanding of the problem being solved and of the process by which the algorithms have arrived at the provided set of solutions.

Previous work covering the visualisation of algorithm trajectories using PCA can be seen in [3], and more recent methods in which local optima networks are used to generate search trajectory networks for different algorithm runs in [4]. Other examples of work involving the exploration of algorithm paths via dimension reduction include [5], in which Sammon mapping is explored as a method of reduction for visualisation, and [6], in which Euclidean embedding is applied. These works focus on the visualisation of an algorithm through the search space; however, the approach taken in this paper using PCA has the potential to be used as a method of extracting features from algorithm paths. These features can then be used to help support explanations by highlighting learning steps in the algorithm run and solution variable patterns that describe the fitness function.
The rest of the paper is structured as follows. Section 2 outlines the experimental setup, covering the algorithms used and the problems they were used to solve. The section then outlines the concept of Entropic Divergence and how this is used as a measure of population diversity change. Introduced at the end of this section is the background method used to translate the algorithm trajectories into a lower-dimension space, as well as the new metrics derived from that space for comparison with the Entropic Divergence measurement. Section 3 presents and discusses the results, comparing the performance of the newly created population metrics with the Entropic Divergence; this section also highlights the findings regarding problem structure and post-PCA projection variable loadings. Section 4 sets out our conclusions based on the results of these tests.

Algorithm Runs

Two population-based solvers were selected to generate a series of population trajectories for use in this study. These were a Genetic Algorithm (GA) and a modified Population Based Incremental Learning (PBIL) algorithm [7][8] in which a negative learning rate and a mutation shift value are introduced. These algorithms were selected for the purpose of comparing the results of a univariate solver and a more traditional genetic algorithm on problems with different structure. Each algorithm was run on the set of outlined problems in order to generate the trajectories used in the analysis phase of this trial.

The Genetic Algorithm used was an adaptation of the Canonical Genetic Algorithm (CGA) [9]. Figure 1 shows the main steps involved as the GA generates new populations during an optimisation run. Table 1 outlines the values used in the running of the GA during these experiments: P is the number of solutions in each population; maxGen is the maximum number of generations before termination; mutRate is the mutation rate within the GA; Selection is the selection method used within the GA for solution comparison and reproduction; Crossover is the crossover type used in this trial, applied with a rate of 1 and so occurring each generation; n is the problem length; runs is the number of runs performed with the GA for each problem.

Population Based Incremental Learning (PBIL) is a form of Estimation of Distribution algorithm. The probability vector is updated and mutated each generation as seen in Equations 1 and 2, in which the vector of marginal probabilities PV = (p(X_1), ..., p(X_n)) is created by calculating the arithmetic mean of each variable X in a population of size N. As the solutions are comprised of bit strings, these values range from 0 to 1. Table 2 outlines the values used for the PBIL algorithm in this trial. Additional to these, the PBIL used a mutShift value that was applied to mutated probability vector values. learnRate is the learning rate of the algorithm, and nlearnRate is the negative learning rate penalty applied if the best solution matches the worst solution when updating the probability vector.
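Since Equations 1 and 2 are not reproduced in the extracted text, the following minimal Python sketch illustrates one PBIL generation using Baluja's standard update rule. The exact trigger for the paper's modified negative learning rate is an assumption here, and the names (`pbil_step`, `fitness`) are hypothetical helpers, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def pbil_step(pv, pop_size, fitness, learn_rate=0.1, neg_learn_rate=0.075,
              mut_rate=0.02, mut_shift=0.05):
    """One PBIL generation: sample a population from the marginal
    probability vector, then update and mutate the vector."""
    # Sample pop_size bit-strings from the marginals PV.
    pop = (rng.random((pop_size, pv.size)) < pv).astype(int)
    scores = np.array([fitness(ind) for ind in pop])
    best, worst = pop[scores.argmax()], pop[scores.argmin()]

    # Positive learning: shift the marginals towards the best solution.
    pv = pv * (1.0 - learn_rate) + best * learn_rate
    # Negative learning away from the worst where best and worst differ
    # (assumed reading of the paper's nlearnRate trigger).
    differ = best != worst
    pv[differ] = pv[differ] * (1.0 - neg_learn_rate) + best[differ] * neg_learn_rate
    # Mutate the probability vector itself, shifting selected positions
    # towards a random bit by mut_shift.
    mask = rng.random(pv.size) < mut_rate
    pv[mask] = pv[mask] * (1.0 - mut_shift) + rng.integers(0, 2, mask.sum()) * mut_shift
    return np.clip(pv, 0.0, 1.0), pop

# Example: 40-bit OneMax, starting from uniform marginals.
pv = np.full(40, 0.5)
for _ in range(100):
    pv, population = pbil_step(pv, pop_size=100, fitness=lambda x: int(x.sum()))
```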
Benchmark Problems

The 1D Checkerboard function scores the chromosome based on the sum of adjacent variables that do not share the same value [11]; the function is given in Equation 3. Because the function scores only adjacent variables, it is possible to have two global maxima. As an example, for a bit string of length 5 the two possible maxima would be [01010] and [10101]. The implementation of the problem used in this experiment also checks whether the first and last allele match. This allows for a total fitness value equal to the bit-string length for an ideal solution.

The Royal Road function scores chromosomes based on collections of variable values defined by a specified set of schemata that the solution must fulfil in order to score an optimal value [12]. Equation 4 specifies the fitness function for the Royal Road problem with a schema block size of 5, as used in this experiment. As noted in [12], the equation represents the fitness function such that R_1 is a sum of terms relating to partially specified schemata. The schemata are subsets of solutions that match the partial specification s_i. As an example, one partially specified schema of size 5 could be represented as [11111*****...], where unspecified members are denoted by "*". A given bit-string x is an instance of a specific schema s, x ∈ s, if x matches s in the defined positions within that schema. o(s_i) defines the order of s_i, which is the number of defined bits in s_i. The Royal Road function was designed to "capture one landscape feature of particular relevance to GAs: the presence of fit low-order building blocks that recombine to produce fitter, higher-order building blocks" [13].

The Trap-5 concatenated problem is designed to be intentionally deceptive [14][15], such that traps "deceive evolutionary algorithms into converging on a local optimum. This is particularly a problem for algorithms which do not consider interactions between variables." [16]. As with the Royal Road problem, the bit-strings are partitioned into blocks and their fitness scored separately. Equation 5a gives the function of a trap of order k, and blocks within the bit-string are scored according to the fitness function in Equation 5b. A Trap5 problem with a bit-string length of 10 would have the values n = 10, k = 5, f_high = 5 and f_low = 4. Within each trap, fitness increases the further the block is from the goal of containing five 1s, with the maximum achieved only when the whole trap is comprised of 1s; this gradient leads the algorithm away from the optimal value.

Principal Components Analysis

The process of reducing the dimensionality of the algorithm trajectory population datasets is done through the use of Principal Components Analysis (PCA). This allows us to project the higher-dimensional space of the solutions to a three-dimensional space, as "PCA produces linear combinations of the original variables to generate the axes, also known as principal components, or PCs" [17]. This involves the calculation of a series of perpendicular, non-correlated linear combinations of the variables in the population, such that each combination accounts for the maximum possible variation in the dataset, through the use of singular value decomposition (SVD). A summary of the calculation of the linear combinations and weights from [17] is given in Equation 6, in which matrix A denotes the matrix of eigenvectors. These are used to show the relationship between the original variables and the orientation of the principal components. The resulting datasets were then mined with the intent of finding features capable of explaining aspects of the optimisation problems that they were generated by.
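To make the benchmark definitions and the projection step concrete, here is a minimal Python sketch. The end-check reading of the 1D Checkerboard variant and the use of scikit-learn's PCA are assumptions; `project_trajectory` and the other names are hypothetical helpers, not code from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

def checkerboard_1d(x):
    """1D Checkerboard: adjacent differing pairs, plus a bonus when the
    first and last allele match (one reading of the end-check variant
    described above, letting [01010] score its full length)."""
    x = np.asarray(x)
    return int(np.sum(x[1:] != x[:-1])) + int(x[0] == x[-1])

def trap5(x, f_high=5, f_low=4):
    """Concatenated Trap-5: a block scores f_high only when all ones;
    otherwise fitness rises as ones are removed (f_low - u), which is
    the deceptive gradient described above."""
    u = np.asarray(x).reshape(-1, 5).sum(axis=1)
    return int(np.sum(np.where(u == 5, f_high, f_low - u)))

def project_trajectory(trajectory, n_components=3):
    """Project a whole trajectory (generations x pop_size x n bits) onto
    its first principal components; components_ plays the role of the
    eigenvector matrix A in Equation 6."""
    gens, pop, n = trajectory.shape
    pca = PCA(n_components=n_components)
    coords = pca.fit_transform(trajectory.reshape(gens * pop, n))
    return coords.reshape(gens, pop, n_components), pca.components_
```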
3 Feature Extraction

Existing Population Diversity Measures

There exist several metrics used to measure the change in population diversity over the course of an optimisation run by genetic-based algorithms. In [18] a brief review of many of these metrics can be found. The metrics covered include the Hamming Distance, the sum of pair-wise comparisons of the number of variable differences between two solutions, although this method can be considered computationally expensive. An alternative to Hamming Distance is the Moment of Inertia [19], which provides a "single method of computing population diversity that is computationally more efficient than normal pair-wise diversity measures for medium and large sized EA problems." When researching possible metrics for comparison, it was decided that the Kullback-Leibler Entropic Divergence distance measure [20] would be the best candidate, as it is suitable for both population diversity monitoring and the detection of the "phase transition" point, at which a population-based algorithm is said to move from the exploration of the search space to the exploitation of known problem structure to generate higher fitness solutions.

Entropic Divergence

The Kullback-Leibler Entropic Divergence (KLd) is a population diversity distance measure based around the concept of information gain and Shannon's entropy [22], in which "the entropy of a random variable is defined in terms of its probability distribution" [20]. It can be defined as KLd(P ∥ Q) = Σ_x P(x) log(P(x)/Q(x)), where P and Q are vectors of marginal probabilities for two different populations in the trajectory [21].

Using the above equation it is possible to track the information gain from the initial population generated by the algorithms, as Q(x) remains constant as the probability vector of the generation t = 0. This metric is called the "Global Learning" and it measures the total information gain from the initial population to the population at any given t. The expected behaviour for this metric is to increase over time until a "steady state" is arrived at.

It is also shown in [20] that it is possible to use the above KLd equation to measure the information gain between two consecutive populations, where Q^(i)(x) and P^(i+1)(x) are used. This is known as "Local Learning", with the expected behaviour of increasing until a "phase transition" point at which the algorithm moves from exploring the search space to exploiting knowledge learned. In the exploitation phase, higher fitness solutions are generated using this implicit knowledge. When this happens it is expected that the local learning rate will decrease as the diversity within the population decreases, until convergence has been completed or a local basin of attraction is escaped [23]. This is of interest as it can be used to inform end-users when maximum population diversity is reached in a trajectory.
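A sketch of the Global and Local Learning curves as defined above. Treating each bit position as an independent Bernoulli marginal and summing the per-bit divergences is an assumed reading; the helper names are hypothetical.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KLd between two vectors of marginal probabilities, summing the
    Bernoulli divergence over every bit position."""
    p = np.clip(np.asarray(p, float), eps, 1 - eps)
    q = np.clip(np.asarray(q, float), eps, 1 - eps)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def learning_curves(populations):
    """Global Learning: divergence of each generation from generation 0.
    Local Learning: divergence between consecutive generations.
    `populations` is a (generations, pop_size, n) binary array."""
    pvs = populations.mean(axis=1)  # marginal probability of a 1 per bit
    global_gain = [kl_divergence(pv, pvs[0]) for pv in pvs]
    local_gain = [kl_divergence(pvs[i + 1], pvs[i]) for i in range(len(pvs) - 1)]
    return global_gain, local_gain
```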
Sub-Space Derived Features

Population cluster centers

The dimensionality of the dataset is reduced through the projection of the data into a lower-dimension set based on the principal components calculated using Equation 6. In this paper we project into a three-dimensional sub-space to help visualise the population as a cluster, illustrated in Figure 2.a. The centroid of this cluster is found by calculating the point that minimizes the sum of squared Euclidean distances between itself and each point in the set, as seen in Equation 8. It is important to note that this method does not chart the algorithm trajectory in objective space and does not explicitly reflect the fitness landscape, but instead can be used to measure the direction and magnitude of changes in population diversity after being projected into this subspace. Seen in Figure 2.c are all 100 trajectories created by the PBIL on the 1D Checkerboard problem, projected against the first three principal components.

Angle from Origin measures the angle between the centroid of the initial starting population in the trajectory and each subsequent population that was created. Each of the two points in the space is represented by the centroid's coordinates as a vector of [PC1, PC2, PC3] coefficients in place of x, y and z coordinates. In order to calculate the acute angle α between two vectors we use the inverse cosine of the vector products, as seen in Equation 9: α = arccos((u · v) / (‖u‖ ‖v‖)).

Angle between clusters is calculated as in Equation 9 using C_i and C_{i+1}, where 0 ≤ i < maxGen. This allows the angle between consecutive populations to be calculated.

PCA Loading Values can be calculated using the resulting matrices from the principal component decomposition process outlined earlier in this paper. Loadings can be considered the weighting of each variable, as they describe the magnitude of the contribution each variable makes to the calculation of each principal component. Loading signs indicate the type of correlation between the PC and the variable in terms of negative and positive correlation, and the strength of that relationship can be seen in the values: larger values indicate a stronger relationship. These loadings are shown in Equation 6 as the matrix A and are the coefficients of the principal components (eigenvectors) with respect to the solution variables.
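The centroid and angle features above translate directly into a few lines of Python. This sketch assumes the populations have already been projected to three principal components as in the previous section; the function names are hypothetical.

```python
import numpy as np

def angle_deg(u, v, eps=1e-12):
    """Acute angle between two centroid vectors (Equation 9)."""
    cos = np.dot(u, v) / max(np.linalg.norm(u) * np.linalg.norm(v), eps)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

def angle_metrics(coords):
    """`coords` is (generations, pop_size, 3) in PC space. The centroid
    (the mean) minimises the sum of squared Euclidean distances, as in
    Equation 8."""
    c = coords.mean(axis=1)
    from_origin = [angle_deg(c[0], ci) for ci in c[1:]]   # Angle from Origin
    inter_cluster = [angle_deg(c[i], c[i + 1]) for i in range(len(c) - 1)]
    return from_origin, inter_cluster
```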
Results

We hypothesize that it is possible to derive features from algorithm trajectories that can aid in generating explanations for end-users, similar in nature to existing known metrics such as the Kullback-Leibler Entropic Divergence values. For two population-based NDAs, a genetic algorithm and a univariate population-based incremental learner, we generated a total of 100 algorithm trajectories on each of the three test functions used. These trajectories were transformed using PCA to allow the projection of the populations into a lower-dimension space for the purpose of visualisation and feature extraction.

PCA Explained Variation

The values in Table 3 show the mean percentage of variation in the population data explained by the first three principal components, broken down by algorithm and problem. These results show that for the PBIL, the total variation explained by the first three principal components was 34.4% in the 1D Checkerboard problem, 34.8% in the Royal Road problem and 34.9% in the Trap5 problem. The results also show that for the GA, explained variation was 50.9% in the 1D Checkerboard, 46.2% in the Royal Road and 56.6% in the Trap5 problem.

Table 4 displays the Spearman correlation coefficients of Local and Global information gain to the Inter-Cluster and Angle from Origin features extracted. Global Information gain shows a strong positive correlation to the Angle from Origin feature, with a range of 0.76 to 0.99 across all problems and algorithms. The PBIL coefficients were 0.99 for the 1D Checkerboard, 0.96 for the Royal Road and 0.88 for the Trap5. The GA coefficients were 0.98 for the 1D Checkerboard, 0.88 for the Royal Road and 0.76 for the Trap5 problem. Global Information Gain and Angle from Origin comparison results are shown in Figure 3, split by algorithm and problem. It can be seen in the results and the correlation coefficients in Table 4 that for all three problems and both algorithms, the Angle from Origin metric closely matches the behaviour of the Global Information Gain. Both metrics detect the increase in information gained as the algorithms solve the supplied problem. Local Information Gain and Inter-Cluster Angle comparison results are more varied and appear to show that learning behaviour differs between algorithms on the same problem, as seen in Figure 4.

The inter-cluster angles calculated for the populations generated by the PBIL do not share the same pattern of behaviour as the Local Information Gain. The results show a peak approximately 25 to 30 generations later than the Local Information Gain, so these events do not co-occur at the same point in the trajectory in all problems tested. This difference in behaviour is reflected in the wider range of correlation coefficients calculated.

The results for the GA, however, do show a similar behaviour, with a time lag of approximately 5 generations across all problems tested relative to the Local Information Gain. Both sets of data peak early in the trajectory, with the Inter-Cluster Angle peaking approximately 5 generations after the Local Information metric, showing that the Inter-Cluster metric detects the occurrence of the phase transition point only slightly later in the trajectory. The Inter-Cluster metric closely follows the profile of the Local Information Gain, as supported by the high positive correlation coefficients in Table 4.

The results show a clear difference between the two algorithms when Local Information Gain is compared to the Inter-Cluster Angle results. This may be due to the fact that the PBIL increments the probabilistic model gradually over successive populations, so local information gain accumulates before it is reflected in Inter-Cluster Angle change. As a GA can be considered a Markov process, the probability of each population depends only on the current state of the system. This can also be seen when Global Information Gain is compared to Angle from Origin. The PBIL reaches maximum Global Information later in the trajectory than the GA, with a shallower ascent. The PBIL reaches the point at which Global Information Gain stops increasing between generations 25 and 40, whereas the GA has a steeper Global Information Gain rate, reaching the maximum value between generations 10 and 20. This may be due to the GA taking a more varied path across the search space than the PBIL, which tends to have less varied performance. Together, these show that it is possible to detect differences in algorithm behaviour over the same optimisation problems through the differences in both sets of results.

Principal Component Loadings

The results of charting the mean loadings across all runs for each algorithm and problem can be seen in Figure 5.
The 1D Checkerboard results show that the loadings reflect the patterns of the coefficients. Adjacent variables in the solutions discovered have opposing values in both the PBIL (Figure 5.a) and GA (Figure 5.b) figures in the majority of cases. This closely matches the mathematical structure of the fitness function. Both algorithms, however, show instances in which the loadings did not conform to the expected pattern, showing a flip in the alternating sequence at three or more points in the bit-string. The Royal Road results for the PBIL in Figure 5.c do not show any clear pattern that would match the expected fitness function structure; however, the GA in Figure 5.d does show some partial detection, with consecutive blocks of 5 bits having similar values that do not match the next block in 4 instances. The results for the Trap5 problem for the PBIL in Figure 5.e do not show any strong relation to the expected fitness function structure; however, the GA in Figure 5.f captures this correctly. It shows all 8 blocks of 5 consecutive bits possessing similar values that are distinct from the next block. Since PBIL is univariate, it cannot detect multivariate interactions. The 1D Checkerboard results show that some bivariate interaction was detected, but this will be accidental. These results show that the algorithm trajectories reflect the simpler features of the problem structure that the algorithms have learned, but the higher-order features are less likely to be recovered.

Conclusions

In this paper, we presented the results of the application of Principal Components Analysis (PCA) to the trajectories created by two population-based Non-Deterministic Algorithms (NDA). This was done to mine features that can enrich explanations regarding how these algorithms traverse the search space and present significant solution features detected by the algorithms. We generated a collection of algorithm trajectories by solving a set of benchmark problems with a Genetic Algorithm (GA) and a modified Population Based Incremental Learning (PBIL) algorithm and projected the resulting trajectories into a lower-dimensional space through the use of PCA. This process resulted in a dataset that was mined using a novel set of angular-based metrics. Our evaluation of these metrics, when compared to the Kullback-Leibler Entropic Divergence measure of both Local Information and Global Information gain, shows that there is potential to capture a similar level of detail regarding the Global Information learned. These metrics were used to detect differing algorithm behaviour on the same problems, as seen between the PBIL and GA in the Inter-Cluster Angle values. Finally, it was shown that principal component loadings can be used to represent what the algorithms have learned in terms of variable contributions to overall fitness. This is a move towards the generation of explanations of solutions returned by the algorithm. This can be seen in the eigenvector values for the GA that implied the fitness function structure of the optimisation problem for the 1D Checkerboard and Trap5 problems. This feature in the PBIL results shows partial structure detection only in the 1D Checkerboard problem, and shows that some structure has not been captured using the features used in these tests. Being univariate, PBIL is incapable of creating probability features that capture higher-level features with interactions, as found in the remaining problems. The results of this paper have shown that the PC-derived features are associated with the algorithm's learnings regarding problem
structure. These techniques can be considered a stepping stone towards supporting explanations by relating changes in information gain to the discovery of specific interaction features.

Figure 2. Figure 2.a is an example of a single trajectory visualisation post-PCA conversion. Each point in the trajectory represents the centroid of a population of solutions. For each generation in a given trajectory the centre point of the cluster is calculated. This process results in a set of points in 3D space that represents the algorithm trajectory (Figure 2.b), from the initial population to the final population in terms of variation, as measured by the reduction in PCA coefficients over time from t = 0 to t = final.
Fig. 5: PCA Loading Values by Problem and Algorithm.
Table 1: GA Run Specifications.
Table 3: PCA Variance Explained by Three Components.
Table 4: Spearman Correlation Coefficients.
2021-12-08T16:18:19.896Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "df8642db6f9c0b5acd93ae255b586c448124e760", "oa_license": "CCBY", "oa_url": "https://rgu-repository.worktribe.com/preview/1457090/FYVIE%202021%20Towards%20explainable%20(AAM).pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "1759f118c8aacfca063d58fa42852e598ea51bcd", "s2fieldsofstudy": [ "Computer Science", "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
252288424
pes2o/s2orc
v3-fos-license
Design of Sports Training Data Monitoring System Based on Wireless Internet of Things

With the development of the times and the continuous improvement of science and technology, people's living standards are getting better and better, living conditions are getting more and more abundant, the infrastructure of cities is becoming more and more perfect, the comprehensive strength of the country is constantly increasing, and the speed of development is also increasing fast. However, under conditions of continuous development, the physical health of adolescents and children has not improved; the physical fitness of adolescents has declined, and many problems have appeared. This survey combines embedded software and Internet technology, focusing on testing the safety of the sports training system and collecting data. It is very necessary for us to investigate the safety of the sports training system, and the system also has potential for development. Only when the safety of the sports training system is determined can we understand the exercise methods most suitable for contemporary youths and apply the relevant sports training methods with confidence, ensuring that no second injury occurs to the body. The sports training system studied here consists of three basic components: data collection terminals, database stations, and web servers. We install the data collection equipment on campus, for example on open spaces next to teaching buildings, campus buildings and roads, and a chip with a wireless transmission system is placed in each student's campus card. This chip is small in size, high in transmission efficiency, and easy for students to carry. Therefore, when a student holds the campus card, the equipment in the campus card links by wireless signal with the equipment installed in the school. This mode is wireless, which is very convenient and fast. In this way, we can collect the identity information and card balance status of the campus card through the web server. When the student ID is displayed on the computer screen, it proves that the student has swiped the card successfully.

Introduction

With the development of the times and the continuous improvement of science and technology, people's living standards are getting better and better, living conditions are getting more and more abundant, the infrastructure of cities is becoming more and more perfect, the comprehensive strength of the country is constantly increasing, and the speed of development is also increasing fast. However, under conditions of continuous development, the physical health of adolescents and children has not improved; the physical fitness of adolescents has declined, and many problems have appeared [1,2]. According to the National Health Survey in recent years, we can see that the obesity rate of adolescents and adults has increased significantly, and more and more people are plagued by obesity [3]. At the same time, we can see from the data that the physical health problems of college students today are prominent; the physical fitness level of college students has dropped significantly, and the downward trend is obvious. These problems deserve our attention [4]. College students are at the center of the youth group and account for a large proportion of the number of young people. The development of a country and its future are inseparable from the development of young people.
Young people are a powerful driving force for the improvement of the country's comprehensive strength [5]. Therefore, a young person with development potential should not only focus on cultivating good qualities and achieving better results but also focus on cultivating a healthy body and a stable mental state. Only by combining these aspects will the future of young people be brighter, their development faster, and their strength stronger [6]. Through relevant data surveys, we can find that today's fitness systems and health models have not received much attention. People's level of understanding of how to use the Internet, high-tech digital systems, and intelligent equipment to exercise and improve their own health is low [7], so sometimes they do not know how to train or what kind of exercise is suitable for them. Therefore, the effect of training is often not very good, and the intended effect of physical exercise cannot be achieved. There are also people who suffer physical damage due to blind training and overtraining [8]. The sports training system is mainly composed of the central component of each piece of equipment and the main part of each fitness method. The central component and related systems are connected using Internet technology; the central system is the main component, and the other fitness systems are supplementary [9]. Through the bottom-up data collection of the real situation at each layer, the final data collection can be achieved, and it can also be compared with historical data. The fitness system can also directly affect those who want to exercise [10].

Related Work

In the current era, people's physical health is getting less and less attention. The way people use the Internet, high-tech digital systems, and smart equipment to improve their health has not been popularized, and there are also a series of problems, such as a low level of understanding. In order to solve the above problems, the literature successfully designed a green platform, ITIHP, that supports exercise through the Internet and promotes the health of the whole population through practical exploration [11]. This platform uses a Bluetooth system to connect fitness equipment and related systems through signal transmission so that, while the machine is operating, it uploads the exercise time and the calories consumed by the user to a mobile phone or computer, achieving fitness, entertainment, and informationization; dedicated personnel compare the uploaded information with records of people's previous physical conditions [12]. Finally, according to each person's different physique and situation, users are provided with more personalized and accurate fitness plans and diet plans. Through further exploration, the literature successfully designed a series of networked intelligent digital collection systems, mainly used in gyms. These include a human body and mind perception system, a network system capable of collecting data, a data collection system, and a comprehensive analysis application system [13]. The main function of the perception system is to measure the body-related condition of the fitness user and the pressure that the body can bear [14]. The network system converts the obtained customer information into useful data, transmits it to the relevant computer server, and then transmits it to the staff after sorting. The application system is mainly for data storage and extraction functions [15]. The literature also puts forward a problem that needs to be solved urgently.
Nowadays, fitness equipment is relatively uniform, and users are not interested in using it. Newly designed fitness equipment combines modern high technology: it can first verify the user's identity and then provide a personalized mode. Since the new system incorporates simulated-reality technology, it can give users a better experience and is therefore more attractive [16,17]. The system can not only guide people's fitness methods but also record the data and results of people's exercise in real time, avoiding the disadvantage of poor communication with users and making fitness equipment more personalized and digitized. Taking treadmills as a specific example, treadmills under the new technology have added embedded technology, perception technology, and automatic collection technology, which can verify people's identities, collect statistics during and after exercise, make the distribution of functions between the equipment, the fitness user, and the fitness coach more reasonable [18,19], and increase the communication between the three to maximize advantages, diversify exercises, and make the data more accurate.

Human Arm Motion Analysis. By bringing the relevant positions of the connecting rods into the formula established according to the coordinate system, the transformation law can be obtained; the specific relation is given in Equation (1). In order to study the impact of the sports training system on people, we need to introduce the D-H model to calculate the transformation law between the human arm joints. We bring the collected human arm joint change data into Equation (2). When we perform shoulder-related exercises, the most commonly used piece of equipment is the shoulder press. When people use this type of exercise equipment, their arm movements are carried out on a relatively inclined plane, in up-and-down or back-and-forth modes of movement. Combining the above conditions, we need to fix the positions of the two rods. If, in the coordinate system we have established, the person's location is the origin, then when the inclination between the arm and the ground is 30 degrees, we can bring the relevant data into the formula and observe the result: the real data obtained in the coordinate system are brought into the D-H system to obtain reduced data. Since the frame of the press equipment is composed of only one component, we can build only one transformation matrix. We still calculate on the basis of the D-H system, so the motion formula of the relevant machinery can be described accordingly. Next, we need to bring in more specific data to verify the hypothesis. First, we specify the length of the mechanical lever and ensure that the human arm joints and the mechanical lever are consistent, so that they are in the same coordinate system. At this time, we record the coordinates of the lever.
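The D-H transformations referenced in Equations (1) and (2) are not reproduced in the extracted text. For reference, a standard D-H homogeneous transform between consecutive links takes the textbook form below; this is the conventional matrix, not necessarily the exact one used by the authors:

```latex
A_i =
\begin{pmatrix}
\cos\theta_i & -\sin\theta_i\cos\alpha_i & \sin\theta_i\sin\alpha_i & a_i\cos\theta_i \\
\sin\theta_i & \cos\theta_i\cos\alpha_i & -\cos\theta_i\sin\alpha_i & a_i\sin\theta_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{pmatrix}
```

Here \(\theta_i\) is the joint angle, \(d_i\) the link offset, \(a_i\) the link length, and \(\alpha_i\) the link twist; chaining \(T = A_1 A_2 \cdots A_n\) maps coordinates from the arm-joint frame to the equipment frame.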
But it is also worth noting that when people use shoulder presses to exercise, the direction of the arm force is not parallel to the plane but in an inclined state, with an angle of inclination of about 25 degrees, so we must also include the rotation angle. We can therefore combine the data in the shoulder press coordinate system with the original data and then convert them; a more specific formula is obtained by bringing in the relevant data. We can make adjustments to the relevant data, such as changing the angle at which the exercise equipment is placed or the angle of the human body during movement, perform a more realistic measurement of the equipment, and finally obtain the end space displacement curve, as shown in Figure 1. According to the angles through which the human body and the machine can rotate when moving, we count the rotation angle and the recorded angle of each rotation of the machine lever device and input them into the relevant formula to obtain the rotation angle and the recorded angle of the arm joint during the movement. Then, we calculate the difference between the two rotation angles and finally obtain the corresponding rotation speed of the human body joint. Then, we bring the rotation angle data of the human arm into the differential calculation so that we can obtain the specific change in the rotational acceleration of the arm joint. The specific data selection and changes in the experimental results are shown in Figure 2.

Human Dynamics Analysis. If we decompose the weight of the human arm, the weight distribution of the front arm and the back arm is the same, and we can record the relevant mass data as c1 and c2. The masses of the forearm and the back arm are calculated from these data, and the corresponding force equations for the forearm and the rear arm are combined to obtain the Newton-Euler equation for the force. After we know the specific machine composition, machine state, machine rotation speed, and machine force, we combine the data with the R system. The relationship between the specific machine angle and the machine force is shown in Figure 3.
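The angle-difference procedure described above (rotation angle, then rotational speed, then rotational acceleration) amounts to numerical differentiation of the sampled joint angle. A minimal sketch, with the sampling interval and example angles assumed purely for illustration:

```python
import numpy as np

def joint_kinematics(angles_deg, dt):
    """Finite-difference estimates of joint angular velocity and
    acceleration from rotation angles sampled every dt seconds."""
    theta = np.radians(np.asarray(angles_deg, dtype=float))
    omega = np.gradient(theta, dt)   # rad/s, from angle differences
    alpha = np.gradient(omega, dt)   # rad/s^2, differentiated again
    return omega, alpha

# Example: lever angles (degrees) sampled every 0.1 s over one press.
angles = [0, 5, 12, 21, 30, 38, 44, 48, 50]
omega, alpha = joint_kinematics(angles, dt=0.1)
```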
Establishing a Basic Exercise Prescription Generation Model. The model designed in this experiment not only rests on the foundation of an ordinary model but also makes adjustments in line with the development of the times. Its main features are as follows. The model is personalized for people of different age groups and different physical conditions: according to the different exercise intensities that children, youths, adults, and the elderly can withstand, the exercise methods we recommend for each group differ, as do the achievable exercise effects. For example, for younger children or older people, we should use a lower-intensity mode to exercise their cardiorespiratory capacity, muscular endurance, or flexibility, so as not to harm their bodies; young people and adults can withstand greater intensity and exercise for longer, so we can correspondingly increase their cardiorespiratory, muscular-endurance, or flexibility training so that the exercise produces a good effect. In addition, we must pay close attention to pre-existing conditions: for people with heart disease, asthma, or other illnesses, the exercise style must differ greatly from that of ordinary people; otherwise, there may be very serious, even life-threatening, consequences. Another point worth noting is that although sport benefits the body, we still have to consider people's subjective wishes and which aspects they specifically want to train. For example, girls often pay more attention to body shaping, boys to strengthening muscle power, and some people exercise to increase lung capacity, so the exercise system must combine multiple aspects when planning the exercise mode.

The general exercise method has limitations. The exercise equipment is relatively uniform, and during exercise we can measure only how much each person does per session or per set, so the data collected are relatively incomplete. Under the new design, we add a new recording method to the original exercise mode: we track the speed of each person's every press, the highest point the press can reach, and the time between presses or between exercises; we monitor the participants' heart rate and other indicators; and we adjust the plan for people of different ages, physical conditions, and exercise goals. During exercise, a person's physical condition is not static but changes continuously with the duration and frequency of the exercise, while remaining in a relatively continuous state within a session. Therefore, amid this constant change, the new model can capture how people's bodies are changing and make timely adjustments. The specific reference data are as follows.

The Pressure on the Body during the Press. In this experiment, we use the RM (repetition maximum) system, the most accurate available, to represent the load the body bears during the press. The pressure each person's muscles can bear differs, as does each person's maximum muscular capacity, so we collect the maximum number of repetitions and the longest time each person can exercise. For example, if the maximum weight a person can press is 30 kg, then that person's maximum press strength is recorded as 30 kg; these data correspond to each other. The specific data are shown in Table 1.
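The paper does not spell out how its RM figures are computed. As a hedged illustration only, the Epley formula is one standard strength-training convention for estimating a one-repetition maximum from a submaximal set; it is not taken from this paper:

def estimate_1rm_epley(weight_kg: float, reps: int) -> float:
    """Estimate a one-repetition maximum with the Epley formula.

    A standard strength-training approximation, shown only to illustrate
    how an RM-style load figure could be derived; not this paper's protocol.
    """
    return weight_kg * (1.0 + reps / 30.0)

# A set of 6 presses at 25 kg estimates a 30 kg one-repetition maximum,
# the same maximum press strength as in the 30 kg example above.
print(estimate_1rm_epley(25.0, 6))  # 30.0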
Maximum Press Height. The measurement of the lifting height is a way of calculating how high people can lift equipment of different weights during a lifting exercise; that is, we focus on the movement process and travel distance of the equipment. We can divide the press height into two categories: the height reached on each repetition during dynamic pressing, and the maximum height a person can hold statically. Moreover, people of different heights and genders have different arm lengths, so the press height also differs. In the experiment, by analyzing the maximum press height, we can obtain its relationship with the rotation angle of the exercise machine and, finally, obtain the motion-cycle data in real time.

Pushing Speed. During exercise we must pay attention to muscle stretching and contraction and to the timing and frequency of the movement. When performing a press, we should control the force of the upward arm movement and the upward pushing speed, try to keep the speed uniform with relatively even time intervals, and keep breathing smooth. Whenever we want to speed up a press, we must first increase the speed slowly and uniformly so that the muscles have time to adapt, and then hold a fairly stable state after accelerating. Likewise, whenever we want to slow down, we must first reduce the speed slowly and uniformly so that the muscles can adapt, and then hold a relatively stable state after decelerating; this prevents muscle damage caused by abrupt acceleration or deceleration. The press speed also differs by goal: for those pursuing lung-capacity and muscular-endurance training, we can lower the press speed and increase the number of presses, whereas for those who want to build muscle power, we can raise the press speed appropriately.

Press Interval. It is best to keep the interval between presses within a few seconds.

Number of Presses. For those who want to improve muscle power, the number of presses can be increased to about 10 per set. This kind of prescription suits young people and healthier adults, but for the elderly it is too intense and such pressing is not suitable, so we can adopt a modified strength-training program instead: keep each set to about 10 repetitions and lengthen the interval between presses appropriately. If middle-aged and elderly people want to improve muscular endurance, the number of presses can be increased to no more than 20, practiced repeatedly on that basis. The number of repetitions suitable for different groups is shown in Table 2.

Interval between Sets. We can divide the rest between sets into two types: a longer rest mode, with the rest time kept within 3-4 minutes, and a shorter rest mode, with the rest time kept between 40 and 50 seconds. Choosing a suitable rest method and a flexible rest time can strengthen muscle training and improve the effect of the exercise.

Number of Sets. According to the relevant national standards, adults and young people in good physical condition can train more intensively: they can perform more than two sets of presses, though preferably no more than four, and the intensity and speed of the presses should not be excessive. For middle-aged and elderly people in relatively poor condition, and for people who have just taken up this form of exercise, the number of sets should be kept to one.

Training Frequency.
In accordance with the relevant national standards, we suggest that ordinary people set aside 1-3 days a week for physical exercise, with at least one set of presses each time. If people need to do multiple press sessions in one day, the interval between two consecutive sessions should not be less than 9 hours, and the muscles should be allowed to relax fully; otherwise, the body will be damaged. After several weeks of exercise, you can record your results and compare them with the earlier data to see whether there is a better and more suitable training method.

System Overall Design. The data-collection method used in this experiment is mainly a monitoring system that gathers exercise data in the gym. Different types of fitness equipment have different monitoring methods, which changes the uniform and relatively boring mode of previous fitness equipment. The equipment is divided more finely into two types: aerobic and anaerobic. The main function of aerobic equipment is to reduce fat, lose weight, shape the body, and train lung capacity; common examples include spinning bikes and treadmills. The main function of anaerobic equipment is to increase muscular endurance, explosive power, and so on; the main types are barbells, dumbbells, and similar equipment, which mainly exercise parts of the body such as the thighs, buttocks, and biceps. The sports training system used in this experiment combines a variety of smart devices; it is both personalized and targeted and can keep people interested. The specific components of the sports training system built on the server are shown in Figure 4.

When we apply the above training system in daily life, we use intelligent tracking modes to monitor people's movement patterns and processes. The model connects the relevant equipment to the central processor through the network transmission system and then transmits the data through the relevant website or platform. A Bluetooth link connects the fitness equipment to the system so that the machine can operate and, at the same time, upload people's exercise time, calories consumed, and other data to a mobile phone or computer; dedicated personnel then compare and review the uploaded information and finally provide users with more personalized and accurate fitness and diet programs according to each person's physique and situation. The network topology is shown in Figure 5, and the main communication methods are listed in Table 3.

Functional Analysis of the Embedded Software System and the Internet of Things. The main function of the software is to combine the relevant data of the human body with the relevant data of the sports equipment and to process and compute them quickly. This function concerns the following aspects. The human body produces data during exercise, and the data change constantly; especially after strenuous exercise, the body is actually in a relatively fragile state. When the body is relatively tired, human-body data change faster, so we pay close attention to changes in data such as heart rate and vital capacity, because these data reflect a person's physical condition. The most important datum is the change in heart rate.
If the heart rate recovers quickly after strenuous exercise, it indicates better physical health and stronger cardiopulmonary function; conversely, if the heart rate recovers slowly after strenuous exercise, it indicates poorer physical fitness and weaker cardiopulmonary function. Therefore, monitoring the data generated by the human body during exercise is very important for judging a person's physical performance.

System Hardware Design. Having completed the design of the various systems and functions, our ultimate goal is to apply them on a university campus so as to have a positive impact on students' health. We install a chip with a contactless transmission system in each student's campus card. The chip is small, transmits efficiently, and is easy for students to carry. When a student holds the campus card, the chip in the card establishes a signal link with the equipment installed in the school. This mode is wireless, which is very convenient and fast. In this way, we can collect the identity information and stored-value status of the campus card through the web server; when the student ID is displayed on the computer screen, the card swipe has succeeded.

Database Design. The lower-computer software transmits the various data using wireless technology. The main function of the upper-computer software is to integrate and analyze the transmitted data and then upload them to the relevant website, where students can check them themselves. The database includes the following information: (1) user-related information, such as name, gender, height, and weight (the user information form is shown in Table 4); (2) information generated while the user exercises, such as the data produced by raising or lowering the arm (the user exercise record is shown in Table 5); (3) the effects produced by the user during exercise, such as the force required to raise or lower the arm and the overall effect after completing the press exercise (the exercise-effect evaluation form is shown in Table 6).

Lower-Computer Installation and Performance Test. Specific hardware is designed according to the relevant data; the bottom PCB design of the data-acquisition terminal is shown in Figure 6, and the top PCB design is shown in Figure 7. We take a school in a certain place as an example to put the experimental model designed above into practice and then evaluate its effectiveness. First, after students have created their own accounts, they enter the correct user name and password to reach the corresponding interface, where they can set their own exercise mode and goals according to their situation, receive administrator notices and practical health recommendations, and so on; the system can also monitor students' exercise and prevent cheating. If students forget their account or password, they can retrieve it on the relevant website, which is simple and convenient. Second, after a teacher creates an account and enters the correct user name and password to reach the corresponding interface, the teacher can check students' exercise through the related functions. Finally, the students create their own accounts.
After entering the correct user name and password to reach the corresponding interface, they can view the schedule of each sports course and choose the courses they like.

Analysis of Test Results. Through concrete practice in a school in a certain area, we found that the data accuracy is high, the network signal is good, the data transmission speed is fast, and the transmission distance is long enough to cover the entire campus. After about 5 months of practical testing at the school, no obvious problems were found; the information-collection and network-transmission functions work well, and the students' evaluations are high.

Table 6: Exercise-effect evaluation form.
Evaluation of the effect of raising the right arm: decimal, range 10-30
Evaluation of the effect of the right-arm hold: decimal, range 10-30
Evaluation of the effect of lowering the right arm: decimal, range 10-30
Overall evaluation: decimal, range 10-30

Conclusion. In this era of rapid Internet development, intelligent technology has become part of our lives and has been applied in many fields, and more and more high technologies keep emerging and developing. This paper mainly uses embedded software to model a sports training system and studies the development of its security performance. We used the Internet to collect a large amount of data and found that the new system is more personalized, more professional, and scientifically advantageous; we also identified the shortcomings of traditional mechanical equipment so that similar problems can be avoided in the new model. We have independently built a wireless-network signal-enhancement system to ensure real-time data collection and transmission and to keep the signal relatively stable even in bad weather, without information loss or signal interruption. We have also installed and tested the system. Before installation, we planned the installation scheme and carried out a series of tests at the chosen points to determine the ideal distance between two points and the packet-loss rate. After good locations were found, the whole system was installed to simulate normal operation, and the students tested it, including a system stress test; for the upper computer, a large volume of access data was also needed to stress-test the web pages. Before the system was officially put into use, it ran in the school for five days; based on student visits and the running state of the website, the entire backend server was evaluated and the causes of any issues analyzed.

Data Availability. The data used to support the findings of this study can be obtained from the corresponding author upon request.

Conflicts of Interest. The authors declare that they have no conflicts of interest.
Implementing and Optimizing of Entire System Toolkit of VLIW DSP Processors for Embedded Sensor-Based Systems

VLIW DSPs can greatly enhance Instruction-Level Parallelism, providing the capacity to meet the performance and energy-efficiency requirements of sensor-based systems. However, exploiting VLIW DSPs in the sensor-based domain imposes a heavy challenge on software-toolkit design. In this paper, we present our methods and experiences in developing system toolkit flows for a VLIW DSP designed specifically for sensor-based systems. Our system toolkit includes a compiler, assembler, linker, debugger, and simulator. We present experimental results for the compiler framework, which incorporates several state-of-the-art optimization techniques for this VLIW DSP. The results indicate that our framework can greatly improve performance and energy consumption compared with code generated without it.

Introduction

Very Long Instruction Word (VLIW) architecture [1], which first appeared in 1972, typically has multiple functional units (FUs) and is capable of executing several instructions in parallel within a single clock cycle, granting it the ability to greatly improve Instruction-Level Parallelism (ILP). VLIW architecture is now widely used in commercial processor designs, such as NXP's TriMedia media processors, Analog Devices' SHARC DSP, Texas Instruments' C6000 DSP family, STMicroelectronics' T200 family based on the Lx architecture, Tensilica's Xtensa LX2 processor, and Intel's Itanium IA-64 EPIC, in both the embedded and non-embedded domains. While VLIW can be exploited to greatly improve ILP, it also poses a large challenge for the development of its software toolkit. As the compiler bears the main responsibility for code generation on a VLIW architecture, VLIW's advantages come largely from having an intelligent compiler that can schedule as many instructions as possible in parallel to maximize the total ILP [2]. Thus, the design of the compiler for a VLIW is most critical. A full design toolkit is also very important, both for the time-to-market of embedded systems and for the convenience of application programming and performance verification.

In this paper, we present our methods and experiences in developing system toolkit flows for a scalable VLIW DSP architecture. Our system toolkit consists of a compiler, assembler, linker, debugger, and simulator. The compiler is retargeted from the Open64 compiler, and various issues are addressed to support optimizations including software pipelining and automatic SIMD code generation. The assembler, linker, and debugger are developed based on Binutils. Finally, a cycle-approximate simulator has been developed based on Gem5. Benchmarks are evaluated on this framework, and the results, which indicate that our framework can greatly enhance performance, are presented.

The remainder of this paper is organized as follows: Section 2 describes the target VLIW DSP architecture. In Section 3, the methods and experiences concerning the development of the whole toolkit are presented. In Section 4, related works are discussed. We present experimental results in Section 5 and conclude this work in Section 6.
The Target VLIW DSP Architecture

Magnolia is designed for sensor-based embedded systems [3]. It aims at high performance and low energy consumption. Magnolia is a fully scalable VLIW DSP architecture; the scalable parameters include the types of functional units (FUs), the number of register files, the number of registers in each register file, the number of clusters, the number of FUs and register files in each cluster, the types of instructions, and the execution time of instructions in each type of FU.

An FU library has already been developed, which includes four different types of FUs: Unit A, Unit M, Unit D, and Unit F. Unit A, Unit M, and Unit D are fixed-point units, while Unit F is a floating-point unit. Unit A is dedicated to executing arithmetic, logical, and shift operations. Unit M can execute multiplication operations, as well as some arithmetic and logical operations. Unit D is in charge of memory access and process control, as well as some arithmetic and logical operations. Unit F carries out all the floating-point operations, including floating-point vector operations [3].

The Magnolia architecture supports both fixed-point and floating-point instructions. The instruction width is 32 bits. To meet the ever-increasing computational requirements of embedded applications, SIMD instructions are supported in the Magnolia architecture to greatly enhance Data-Level Parallelism (DLP). Special-purpose instructions have also been developed to accelerate certain sensor-based embedded applications.

The Magnolia architecture supports both a fixed-point register file and a floating-point register file. The registers in the fixed-point register file are 128 bits wide, while the floating-point registers are 256 bits wide. The number of registers in each type of register file is scalable, and both register files are programmer-visible.

Traditionally, during the register-allocation phase of compilation, if registers are not sufficient, additional store and load operations must be created and inserted into the original instruction queue by the compiler to spill the data of a symbolic register to memory and restore it to a register later. However, accessing memory is much slower than accessing registers and slows down execution. Since, on VLIW architectures, the compiler strives to increase ILP, register pressure potentially rises, which means spilling happens often.

Thus, a mechanism called the spill register file [3] is built into the Magnolia architecture. When spilling happens, the data is first transferred into the spill register file; only when the spill register file is full does the data have to be saved to memory. This mechanism can greatly reduce the number of memory accesses. The spill register file cannot be accessed by the programmer.

Figure 1 shows an instantiation of the Magnolia architecture. There is only one cluster, with 2 instances of each functional unit, meaning that 8 instructions can execute simultaneously in a single clock cycle. There are 64 fixed-point registers in the general register file, which can be accessed by Unit A, Unit M, and Unit D. The floating-point register file consists of 64 registers and can be accessed only by Unit D and Unit F. Unit D is responsible for data conversion between fixed-point and floating-point data.
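As a minimal sketch, the Figure 1 instantiation described above can be summarized in a machine-description-style configuration; the field names below are hypothetical illustrations, not Magnolia's actual machine-description syntax:

# Hypothetical summary of the Figure 1 Magnolia instantiation; the field
# names are illustrative, not the project's real machine-description format.
MAGNOLIA_INSTANCE = {
    "clusters": 1,
    "issue_width": 8,                       # 2 instances x 4 FU types
    "functional_units": {"A": 2, "M": 2, "D": 2, "F": 2},
    "register_files": {
        "fixed": {"count": 64, "width_bits": 128,
                  "accessed_by": ["A", "M", "D"]},
        "float": {"count": 64, "width_bits": 256,
                  "accessed_by": ["D", "F"]},
    },
    "instruction_width_bits": 32,
    "spill_register_file_visible": False,   # not programmer-accessible
}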
Development of the System Toolkit

3.1. Overview of the Software Flow. The software flow is illustrated in Figure 2 and consists of a compiler, assembler, linker, debugger, and simulator. Source files written in high-level languages are first compiled into assembly files. The assembly files are then assembled by the assembler and linked with libraries by the linker into executable files. These executable files can run on the Magnolia VLIW DSP or on the simulator.

3.2. Compiler. The compiler for the Magnolia VLIW DSP architecture is based on Open64 [4], originally derived from the SGI compiler MIPSPro, which was designed for the MIPS R10000 processor. Open64 was released under the GNU GPL in 2000 and is an open-source, optimizing compiler [5]. It includes many state-of-the-art optimization techniques for generating high-performance code.

To retarget Open64 to compile for the Magnolia architecture, three major pieces of work must be done: (1) implementing the machine description files for the Magnolia architecture; (2) constructing a code generator for the Magnolia architecture; (3) implementing optimization techniques for the Magnolia architecture.

Implementing Machine Description Files. The retargetability of the Open64 compiler comes from its machine description files. Three main categories of information about the target architecture are described in these files:

(1) Information about the Instruction Set Architecture (ISA) describes the details of the instructions in the instruction set, such as their functions, number of operands, data types, assembly-code format, and addressing modes.

(2) Information about the Application Binary Interface (ABI) describes the interface between an application program and the libraries or other parts of the application, such as data types, data sizes, data alignment, and the calling convention.

(3) Information about the processor model describes the resources of the target architecture, such as the functions and number of each kind of functional unit.

So, to support the Magnolia architecture, machine description files for Magnolia must be generated.

Constructing the Code Generator. The Open64 compiler is composed of three main parts: the Front-End, the Middle-End, and the Back-End. The Front-End translates programs written in a high-level language into Open64's intermediate representation, WHIRL (Winning Hierarchical Intermediate Representation Language); it supports C/C++/Fortran. The Middle-End is composed of several phases, each of which performs a target-machine-independent optimization on the WHIRL. The Back-End is in charge of code generation and builds assembly code from the WHIRL.

As the Front-End is entirely target-machine-free, it needs no modification to support the Magnolia architecture, and the retargeting of the Middle-End is discussed in the next section. The Back-End of Open64 is retargeted to generate code for the Magnolia architecture. Our compiler's Back-End can be roughly divided into three phases: Code Expansion, Resource Binding, and Code Emission. The details of the implementation of these three phases can be found in [3].
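A minimal sketch of how the three back-end phases named above can be organized as a pipeline; the class, opcode, and mapping names are hypothetical illustrations of the structure, not the actual Open64 or Magnolia code:

from dataclasses import dataclass

@dataclass
class Insn:                 # toy model of a Magnolia-style instruction
    opcode: str
    operands: list
    cycle: int = 0
    fu: str = ""

# Phase 1: Code Expansion - map WHIRL-level operations to target opcodes.
ISA_MAP = {"ADD": "add.a", "MUL": "mul.m", "LOAD": "ld.d"}   # hypothetical

def code_expansion(whirl_ops):
    return [Insn(ISA_MAP[kind], list(ops)) for kind, ops in whirl_ops]

# Phase 2: Resource Binding - bind instructions to cycles and FUs.
FU_OF = {"add.a": "A", "mul.m": "M", "ld.d": "D"}            # hypothetical

def resource_binding(insns):
    for cycle, insn in enumerate(insns):    # naive in-order schedule
        insn.cycle, insn.fu = cycle, FU_OF[insn.opcode]
    return insns

# Phase 3: Code Emission - render bound instructions as assembly text.
def code_emission(insns):
    return "\n".join(
        f"{i.opcode} {', '.join(i.operands)}  ; cycle {i.cycle}, unit {i.fu}"
        for i in insns)

print(code_emission(resource_binding(code_expansion(
    [("LOAD", ["r1", "0(r2)"]), ("ADD", ["r3", "r1", "r4"])]))))

The real Resource Binding phase also allocates registers and resolves inter-step conflicts, which this sketch deliberately omits.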
The Code Expansion phase analyzes the WHIRL structure and translates operations on it into instructions of the Magnolia architecture. During the implementation of this phase, two major tasks must be accomplished: (1) constructing the correspondence between operations in the WHIRL structure and instructions in the Magnolia machine description files; (2) building the correct Magnolia instruction format according to the machine description files.

The Resource Binding phase binds instructions to specific resources of the architecture, such as the execution cycle, the executing FU, and the registers for the operands. During the implementation of this phase, the order of the intermediate steps must be arranged carefully to avoid deadlock, as binding instructions to different resources may cause conflicts. The cooperation mechanism with the machine description files must also be handled in this phase.

The Code Emission phase translates the bound instructions into assembly format, so the correct assembly format of each instruction must be extracted from the Magnolia machine description files.

Accomplishing Optimization Techniques. The Middle-End of Open64 is retargeted to fit the Magnolia architecture. Our compiler's Middle-End is mainly composed of two parts: a loop optimizer and a global optimizer. The loop optimizer performs transformations on loops to optimize the compiled code. The global optimizer uses Static Single Assignment (SSA) form as the program representation and performs def-use analysis, alias classification and pointer analysis, induction-variable recognition and elimination, copy propagation, dead-code elimination, partial-redundancy elimination, and other typical optimizations. The details of the implementation of these two optimization phases can be found in [3]. In this work, we focus on the implementation of the automatic SIMD code generation technique.

Many existing approaches perform automatic SIMD code generation at a late stage of the compilation process, because more information is available then. The disadvantage is that these techniques cannot effectively exploit the data parallelism in loops, so the code quality can be suboptimal. In this work, therefore, a high-level technique is used to generate SIMD code by examining loop code. SIMD code generation happens at an early stage of compilation, just after the input source file has been transformed into the WHIRL structure. This approach needs only basic knowledge of the target machine's ISA, so it is easily retargetable. The data-packing work is done at the same time as the SIMD code is generated, which eases the work of Resource Binding, especially register allocation, in the Back-End.

Our automatic SIMD code generation technique focuses on loops. A preprocessing engine is introduced into the SIMD code generation process; it is responsible for filtering out loops that do not suit our technique, and several directive rules have been introduced to choose the right candidates for the subsequent process.
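A minimal sketch of the kind of loop-filtering rules such a preprocessing engine might apply. The paper does not list its directive rules, so the specific checks below (known trip count, divisibility by the vector width, no cross-iteration dependence, no calls) are assumptions for illustration:

# Hypothetical loop-filter rules for SIMD candidate selection; the paper's
# actual directive rules are not given, so these checks are illustrative.
def is_simd_candidate(loop, vector_width=4):
    if loop.get("trip_count") is None:          # trip count must be known
        return False
    if loop["trip_count"] % vector_width != 0:  # must fill whole vectors
        return False
    if loop.get("cross_iteration_dependence"):  # iterations must be independent
        return False
    if loop.get("has_call"):                    # calls block vectorization
        return False
    return True

loops = [
    {"trip_count": 128, "cross_iteration_dependence": False, "has_call": False},
    {"trip_count": None},                       # unknown bound: rejected
    {"trip_count": 100, "cross_iteration_dependence": True},
]
print([is_simd_candidate(l) for l in loops])    # [True, False, False]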
After a candidate loop is selected, the compiler traverses the loop and annotates operations that could be grouped into SIMD code. All candidates are then evaluated according to a set of defined rules. After the evaluation, the compiler reconstructs the WHIRL structure and replaces the candidate operations in the loop with SIMD operations according to the evaluation result; the data are also aligned and regrouped into packed form for the SIMD operations. The SIMD operations in the WHIRL structure are finally translated into SIMD instructions of the Magnolia ISA in the Code Expansion phase of our compiler, and the data for the SIMD instructions are prepared in the Resource Binding phase.

3.3. Assembler/Linker/Debugger. The assembler, linker, and debugger for the Magnolia architecture are developed based on the open-source GNU Binary Utilities. Two major issues need to be solved. (1) Maintaining correct instruction parallelism: according to the definition of the Magnolia assembly format, instructions that execute in parallel in one clock cycle must be arranged in a pattern in which their functional units appear in ascending order; otherwise, the assembler cannot identify the instruction parallelism correctly. Thus, when generating assembly code, the Magnolia compiler must check and rearrange the issue order of the instructions so that the parallelism information is delivered to the assembler correctly, and the assembler is designed to recognize this information and issue the correct instruction queue. (2) Avoiding real-time errors: as the Magnolia architecture is dedicated to embedded systems, where real-time errors must be avoided, the linker is designed to perform static linking.
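A minimal sketch of the ascending functional-unit ordering described in item (1) above; the FU names and their numeric order are illustrative, and the real tools operate on instruction encodings rather than strings:

# Toy model of Magnolia's implicit-parallelism rule: adjacent instructions
# whose functional units appear in ascending order issue in the same cycle;
# a non-ascending step starts a new dispatch packet.
FU_ORDER = {"A": 0, "M": 1, "D": 2, "F": 3}

def group_into_packets(instr_fus):
    packets, current = [], []
    for fu in instr_fus:
        if current and FU_ORDER[fu] <= FU_ORDER[current[-1]]:
            packets.append(current)   # order broke: issue in the next cycle
            current = []
        current.append(fu)
    if current:
        packets.append(current)
    return packets

# "A M D F" issues in one cycle; the repeated "A" starts a new packet.
print(group_into_packets(["A", "M", "D", "F", "A", "D"]))
# [['A', 'M', 'D', 'F'], ['A', 'D']]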
3.4. Simulator. The simulator provides a platform to validate the design of the software toolkit, evaluate the processor architecture, and accelerate the progress of hardware development. Efficient modeling of the processor architecture and fast simulation are critical for developing both the hardware and the software of a VLIW architecture. The simulator for the Magnolia architecture is built on the Gem5 simulator [6, 7], an open-source platform for computer-system architecture research. The framework of our simulator consists of two main parts, the simulation objects and the simulation core, plus some auxiliary modules. The simulator takes application programs and a configuration script as input: the application programs are ELF binaries prepared by the linker, and the configuration script, written in Python, contains the system-architecture configuration and the simulation parameters of the simulated core. The simulator produces a simulation report as a text file with configurable trace information.

Gem5 provides plenty of simulation-object models with implementation details, including memory, CPU models, bus, cache, physical memory, and so forth. However, Gem5 does not support EPIC-style processor models or VLIW ISA simulation. To construct our simulator, we created a processor model with VLIW features and added a description of the Magnolia ISA to the original Gem5 system; the original Gem5 loader and simulation core were also modified. The Magnolia ISA is implemented using the Gem5 ISA description system by generating a decoder function, which analyzes the Magnolia ISA description and produces a C++ instruction object. The C++ instruction object is then treated as a basic data type of the simulator and used by the simulation core and the other simulation objects.

The most important implementation issue in our simulator is enabling the simulation of parallel instruction execution. In the original Gem5 design, instructions are processed in sequence; when simulating VLIW architectures, this could lead to conflicts among instructions operating on the same register. To avoid such conflicts, the register file in our simulator is duplicated: the processor reads registers from the original register file, while the duplicate is used for writing. When all the instructions in a dispatch packet have been processed, a register-file update function is invoked to update all register values, maintaining the coherence of the register data and thus enabling the simulation of parallel instruction execution.

As mentioned before, the Magnolia architecture uses the order of functional units to indicate instruction parallelism. If the functional units of two adjacent instructions are in ascending order, the two instructions are issued concurrently; otherwise, the issue of the latter instruction is delayed to the next cycle. The benefit is that we save 1 bit, doubling the encoding space of the instruction set; the drawback is that this is not compatible with the RISC execution style. So, when designing our simulator, we added an instruction-parallelism judgement mechanism to the original Gem5 to support this feature.

Related Works

Chapman et al. [5] presented an interactive tool called Dragon, which provides detailed information about a C/Fortran77/Fortran90 program that may contain OpenMP/MPI constructs. It takes advantage of Open64's analysis capabilities. The basic information displayed in Dragon's graphical tool is general-purpose and could be employed in many situations, from analyzing legacy sequential code to helping users reconstruct parallel code.

Wu et al. [8] presented methods and experiences in developing software and toolkit flows for the PAC (Parallel Architecture Core) VLIW DSP, a five-way VLIW DSP processor with distributed register cluster files and multibank register architectures. The presented toolkits include compilers, assemblers, a debugger, and DSP microkernels.

Chang et al. [9] presented a software framework based on Android and multicore embedded systems. In that framework, they integrated a compiler toolkit chain for a multicore programming environment, including DSP C/C++ compilers, a streaming RPC programming model, a debugger, an ESL simulator, and power-management models.

Wittenburg et al. [10] presented the architecture and software development toolkit of a parallel VLIW RISC processor called HiPAR-DSP and discussed their approach to high-level language support on that DSP.
Steiger and Grentzinger [18] presented a platform for signal-processing experiments, together with an associated software toolchain to support the development of DSP applications on that hardware board. The software toolchain is custom-made and dedicated to a specific application.

Several simulators for VLIW architectures already exist, such as VLIWDLX [19], VLIW-sim [20], and Simple-VLIW [21]. Multithreaded and multiprocessor systems are becoming increasingly popular, requiring simulators capable of running multithreaded workloads on multiprocessors; however, all the simulators mentioned above can only perform single-processor simulation.

Gem5 [6, 7] is an open-source platform for computer-system architecture research, encompassing system-level architecture as well as processor microarchitecture. Gem5 is written in C++ and Python. It has several interchangeable CPU models, can simulate multiprocessor systems, and has event-driven memory systems. However, Gem5 does not support EPIC-style processor models or VLIW ISA simulation.

Experimental Results

Experiments were performed by running programs from the DSPstone benchmark [22]. The programs in the DSPstone suite, such as matrix, fir, lms, and fft, represent quite a broad spectrum of DSP use in embedded sensor-based systems. Figure 3 shows the performance results for DSPstone programs measured on the Magnolia simulator.

These programs are first compiled by the Magnolia compiler, then assembled and linked, and finally loaded onto the simulator to measure performance. The blue bars show the performance of code generated by the compiler without any optimization. The yellow bars show the performance with some basic instruction-scheduling optimization. The purple bars show the performance with optimizations such as EBO and WOPT. The green bars show the performance with loop optimizations. Finally, the red bars show the performance of code generated with the automatic SIMD code generation technique.

The results indicate that our compiler can achieve speedups of up to around 5 times compared with unoptimized code. In some cases, the automatic SIMD generation technique brings no further performance improvement; in those cases, either there is no loop in the benchmark or the number of loop iterations does not satisfy our preprocessing rules, so the technique does not apply. In future work, we will try to improve the applicability of our SIMD autogeneration technique, both to more levels of nested loops and to loops with more sophisticated structures.
Figure 4 shows the energy-consumption results for DSPstone programs measured on the Magnolia simulator. The energy model used in this work is based on the instruction-level energy model in [23]. In this model, the energy associated with an instruction is assumed to depend on its own properties as well as on its execution context, and different blocks may contribute differently to the energy consumption as the number of clusters varies. This energy model has been validated to have an average absolute error of 1.9% and a standard deviation of the error of 5.8%, showing a very high level of accuracy. A series of RTL simulations was carried out using the Cadence EDA tool chain to extract the parameters needed to construct the energy model. Clearly, our optimizations significantly reduce the energy consumed in executing these programs, making the processor more suitable for sensor-based applications.

Conclusion

In this paper, we have presented our methods and experiences in developing a system toolkit for a VLIW DSP architecture. Our entire system toolkit includes a compiler, assembler, linker, debugger, and simulator. We presented our methods for developing these tools, along with our experiences in dealing with the issues encountered in the process. Results evaluated using the DSPstone benchmarks indicate significant improvements in performance and energy consumption. The experiences presented in this paper may benefit architecture designers and toolkit developers interested in similar VLIW DSP architectures.

Figure 3: Performance results measured on the Magnolia simulator.
Figure 4: Energy consumption results measured on the Magnolia simulator.
Clinical efficacy of endoscopic dilation combined with bleomycin injection for benign anastomotic stricture after rectal surgery

Benign anastomotic stricture is a frequent complication after rectal surgery. This study investigated the feasibility of endoscopic dilation combined with bleomycin injection for benign anastomotic stricture after rectal surgery. Thirty-one patients diagnosed with benign anastomotic stricture after rectal surgery were included: 15 patients received simple endoscopic dilation (dilation group) and 16 patients received endoscopic dilation combined with bleomycin injection (bleomycin group). The clinical effect and adverse events were compared between the 2 groups. The strictures were managed successfully and the obstruction symptoms were relieved immediately. There were 2 minor complications in the dilation group and 3 in the bleomycin group; the difference was not significant (P > .05). During follow-up, the mean reintervention interval was 4.97 ± 1.00 months in the dilation group and 7.60 ± 1.36 months in the bleomycin group, and the median number of treatments was 4 (range 3-5) in the dilation group and 2 (range 2-3) in the bleomycin group; both differences were significant (P < .05). Compared with endoscopic dilation alone, endoscopic dilation combined with bleomycin injection may reduce the number of treatments and prolong the reintervention interval, making it a safe and effective endoscopic management for benign anastomotic stricture after rectal surgery.

Introduction

Benign anastomotic stricture is one of the common complications after colorectal surgery. Low-set anastomoses, mechanical anastomosis, and other factors can promote anastomotic stricture; influenced by these risk factors, the incidence of benign anastomotic stricture ranges from 5% to 30%. [1,2] The main clinical features include dyschezia, abdominal pain, and even ileus, from which patients suffer considerably. At present, the effective treatments are surgery and endoscopic management. Although the effect of surgery is remarkable, it may entail substantial injury, high cost, and restenosis, so surgery is usually reserved for complex situations. The most common endoscopic management is dilation; after 1-3 dilations, many patients achieve cure, but some are difficult to treat even with repeated sessions. [3] Adjuncts to endoscopic dilation, such as steroid or mitomycin injection, have been applied to improve the effect, but owing to limited sample sizes and study designs their roles remain uncertain. Bleomycin is a common chemotherapy drug used for malignant tumors; in addition, its efficacy in treating pterygium and hyperplastic scars has been confirmed. [4-6] No studies have examined the usefulness of bleomycin for benign gastrointestinal anastomotic stricture. Therefore, we performed a feasibility study comparing simple endoscopic dilation with endoscopic dilation combined with bleomycin injection, recording the clinical effect and adverse events.

Patients

Patients from Jiangsu Province Hospital, the Second Affiliated Hospital of Nanjing Medical University, and Taizhou Traditional Medical Hospital between April 2014 and April 2019 were strictly selected according to the criteria below. All patients had undergone anterior resection for rectal cancer. This research followed the principles of the Declaration of Helsinki.
The inclusion criteria were as follows: (1) difficulty with bowel movements, constipation, or abdominal distension after sphincter-preserving rectal surgery; (2) the colonoscope could not pass through the anastomotic site; (3) the anastomotic stricture was proved benign by routine biopsies. The exclusion criteria were as follows: (1) anastomotic abscess or fistula; (2) drug allergy or a low white-blood-cell count; (3) severe comorbidities, such as coagulation disorders or cardiopulmonary dysfunction. We explained the advantages and disadvantages of the 2 endoscopic treatments, and patients chose their therapy on an as-needed basis. Endoscopic dilation combined with bleomycin injection was not suggested for patients with drug allergy or a low white-blood-cell count. This study was approved by the Ethics Committee of Jiangsu Province Hospital (2018-SR-258). Written informed consent was obtained from all patients before treatment.

Operation Procedure

After preoperative examinations, colonoscopy was performed by skilled endoscopists to locate the stricture site, with patients under conscious sedation. Before treatment, the diameter of the anastomotic site was recorded with a biopsy forceps (MTN-BF-23, Micro-Tech, Nanjing, China), which is 6 millimeters (mm) wide when opened. [7] A balloon (CRE Wireguided Balloon Dilator, Boston Scientific, Minneapolis, USA) was then inserted for dilation in all patients; the balloon diameter was 16-20 mm, and the dilation was repeated 3 times, each held for 3 minutes. [8] In the bleomycin group, 10 milliliters (ml) of bleomycin (10 milligrams (mg) at 1 mg/ml; Haizheng, Zhejiang, China) was then injected into the anastomotic site (at the 3, 6, 9, and 12 o'clock positions) with an injection needle (INJ1-A1, Medwork GmbH, Höchstadt, Germany), as shown in Figure 1. [9] Finally, the diameter of the anastomotic site was measured again with the balloon.

Postoperative Care and Follow-Up

After the operation, all patients fasted for 24 hours. If there were no related complications such as fever, abdominal pain, or bloody stools, a liquid diet was permitted. Patients were discharged when the obstruction symptoms disappeared. All patients were followed up at the outpatient department for at least 2 years. Patients whose obstruction symptoms recurred were readmitted and received the same endoscopic treatment again.

Statistical Analysis

Statistical analyses were performed using SPSS 19.0 software (IBM, Armonk, NY, USA). Quantitative variables were expressed as mean ± standard deviation (SD) for normally distributed data and as median (range) otherwise; qualitative variables were described as frequencies or percentages. Student's t test or analysis of variance (ANOVA) was used to compare continuous variables, and the chi-squared or Fisher exact test was used to compare categorical variables. A difference between the 2 groups was considered statistically significant when the P value was < .05.
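As a minimal sketch, the between-group comparison of the reintervention interval reported in this study can be reproduced from the published summary statistics. This uses the paper's reported means, SDs, and group sizes; the equal-variance assumption mirrors the Student t test named above and is not a detail stated by the authors:

from scipy.stats import ttest_ind_from_stats

# Reported reintervention intervals (months): dilation group 4.97 +/- 1.00
# (n = 15) versus bleomycin group 7.60 +/- 1.36 (n = 16).
t, p = ttest_ind_from_stats(mean1=4.97, std1=1.00, nobs1=15,
                            mean2=7.60, std2=1.36, nobs2=16)
print(f"t = {t:.2f}, p = {p:.3g}")  # p falls well below .05, i.e. significant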
Results

A total of 31 patients with benign rectal anastomotic stricture were enrolled in this research, with 15 patients (10 men and 5 women) in the dilation group and 16 patients (9 men and 7 women) in the bleomycin group. All patients had a history of rectal cancer surgery, and the lower digestive tract had been rebuilt with an end-to-end anastomosis. The chosen patients satisfied all inclusion criteria and none of the exclusion criteria. The clinical data of these patients are listed in Table 1.

In the dilation group, the mean age was 59.60 ± 10.55 years, the average diameter of the anastomotic site before treatment was 0.47 ± 0.14 centimeters (cm), and the mean distance from the stricture site to the anal verge was 6.33 ± 2.47 cm. In the bleomycin group, the mean age was 60.06 ± 7.88 years, the average pre-treatment diameter of the anastomotic site was 0.52 ± 0.18 cm, and the mean distance from the stricture site to the anal verge was 5.88 ± 2.55 cm. There were no significant differences between the 2 groups in number of patients, age, gender, diameter of the anastomotic site, or distance from the anastomotic site to the anal verge (P > .05).

All patients completed endoscopic treatment successfully. After the operation, the anastomotic site could be passed smoothly with the colonoscope, and the stricture symptoms were relieved soon afterward. The mean diameter of the anastomotic site was 1.67 ± 0.15 cm in the dilation group and 1.69 ± 0.17 cm in the bleomycin group. The mean reintervention interval was 4.97 ± 1.00 months in the dilation group and 7.60 ± 1.36 months in the bleomycin group, a significant difference (P < .05). When stricture recurred, the same endoscopic treatment was repeated as before; the median number of treatments in the bleomycin group was lower than in the dilation group (2 vs 4, P < .05). There were no major complications such as perforation, massive bleeding, or leakage in either group. One patient had a low-grade fever and 1 patient had abdominal discomfort in the dilation group; 2 patients had a low-grade fever and 1 patient had hematochezia in the bleomycin group. With conventional therapy, these patients were discharged soon. During follow-up, no long-term complications were observed, and local bleomycin injections caused neither mucosal injury nor systemic side effects. These results are listed in Table 2.

Discussion

Anastomotic stricture is one of the common complications after colorectal surgery and can be benign or malignant; the latter is usually caused by tumor relapse. Benign anastomotic stricture occurs more commonly after rectal surgery than after colonic surgery because of the anastomosis method, and it has a deleterious influence on patients' quality of life. The main cause of benign anastomotic stricture is hyperplastic scar. [10]

The main therapy for benign anastomotic stricture is endoscopic dilation, but some patients respond poorly to it, and repeated dilation may injure the mucosa and increase the risk of perforation. Therefore, extending the stricture-free period and reducing the number of treatments is essential. Endoscopic stent implantation is a solution for refractory anastomotic stricture, especially for acute obstruction, as it relieves symptoms rapidly; however, its long-term effect is uncertain, being compromised by stent migration and restenosis. [11,12] Endoscopic incision was initially used for esophageal rings [13] and has since been used to treat refractory digestive-tract strictures. [14] This technique remains difficult, and most primary hospitals cannot yet perform it. In the 1970s, local corticosteroid injection was first reported to relieve gastrointestinal stricture. [15] Researchers hoped that endoscopic dilation combined with corticosteroid injection could decrease the number of treatments and the recurrence rate; however, the results are still controversial.
Some studies indicated that endoscopic dilation combined with corticosteroid injection might prolong the symptom-free period and decrease the dilation frequency, [16,17] while Hirdes et al revealed that corticosteroid injection combined with dilation did not prolong the patency period in patients with anastomotic stricture. [18] Therefore, further studies are needed.

Bleomycin can inhibit transforming growth factor expression, prohibit fibroblast proliferation, and reduce collagen generation. Since excessive fibroblast proliferation and collagen generation lead to postoperative scar formation, bleomycin has an antistenosis effect in theory. [4] As an analogue of bleomycin, mitomycin has proved valuable for treating benign esophageal stricture; [19,20] however, the efficacy of mitomycin for rectal anastomotic stricture was not satisfactory in our preliminary study, especially for refractory strictures. Our study therefore compared combination therapy with simple endoscopic dilation in a preliminary fashion.

In both the bleomycin group and the dilation group, most patients developed anastomotic stricture after mechanical anastomosis with staplers. Mechanical anastomoses have been found to be associated with higher levels of collagen deposition and more inflammation, which may lead to stricture formation. [21] Besides, surgeons can better control the caliber and shape of anastomoses when using the hand-sewn method. [22]

The bleomycin group had a longer reintervention interval and fewer treatments than the dilation group, indicating a better clinical effect of combination therapy for benign anastomotic stricture after rectal surgery. A possible explanation is that endoscopic dilation merely tears the scar tissue, and the surrounding scar tissue develops restenosis after a while, whereas bleomycin effectively inhibits restenosis through its pharmacological action. [4] Moreover, endoscopic dilation combined with bleomycin injection did not cause any severe adverse events: only 2 patients had a low-grade fever and 1 patient had hematochezia. The incidence of complications in the bleomycin group was similar to that in the dilation group and may have been caused by the endoscopic dilation itself. During follow-up, no other complications were observed, and local bleomycin injections caused neither mucosal injury nor systemic side effects.

Of course, some limitations of our research should be noted. First, this is not a randomized controlled trial; patients chose their treatment according to preference, which introduces a selection bias. Second, the best dosage of bleomycin for treating anastomotic stricture is still uncertain; the current usage follows previous literature and our experience. Last but not least, endoscopic size measurement is limited: even though we used the biopsy forceps and balloon as references, it may still be inaccurate.

In conclusion, endoscopic dilation combined with bleomycin injection is an effective and safe method for benign anastomotic stricture after rectal surgery. Compared with simple dilation, it can prolong the anastomotic patency period and reduce the number of treatments, and it is especially applicable to refractory anastomotic stricture. Although this study has a small sample and short-term follow-up, large-scale randomized controlled trials with long-term follow-up evaluation are needed in the future.
Nevertheless, the promising results of our study may open new opportunities for endoscopic treatment of anastomotic stricture.
The Immunomodulatory Functions of Butyrate

Abstract

The gastrointestinal (GI) system contains many different types of immune cells, making it a key immune organ system in the human body. In the last decade, our understanding of the gut microbiome and its complex interaction with the gut immune system has expanded substantially. Short-chain fatty acids (SCFA), and specifically butyrate, play an important role in mediating the effects of the gut microbiome on local and systemic immunity. Gut microbial alterations and depletion of luminal butyrate have been well documented in the literature for a number of systemic and GI inflammatory disorders. Although a substantial knowledge gap exists, requiring further investigations to determine cause and effect, there is heightened interest in developing immunomodulatory therapies that reprogram the gut microbiome or supplement its beneficial metabolites, such as butyrate. In the current review, we discuss the role of endogenous butyrate in the inflammatory response and in maintaining immune homeostasis within the intestine. We also present the experimental models and human studies that explore the therapeutic potential of butyrate supplementation in inflammatory conditions associated with butyrate depletion.

The Gut Microbiome - Overview

The human gastrointestinal tract houses trillions of microbes, predominantly within the colon, known as the gut microbiota (Figure 1). These microbes consist of a commensal blend of bacteria, fungi (mostly yeasts), viruses/phages, archaea, and parasites. 1 The gut microbiome is a term used to describe the genetic and functional aspects of the gut microbiota. Currently, most knowledge regarding the gut microbiome pertains to its bacterial composition, which is reported to be at a 1:1 ratio with the body's eukaryotic cellular composition, 2 and whose health is characterized by richness and diversity. At the phylum level, the gut microbiota comprises Bacteroidetes, Firmicutes, Proteobacteria, Actinobacteria, and Verrucomicrobia, with gram-negative Bacteroidetes and gram-positive Firmicutes representing roughly 90% of the gut microbiota in healthy humans. 1 However, given the high interpersonal and intrapersonal variability in the human gut microbiota, there is no standard microbial ecology that all healthy people share. 3

While gut colonization begins in utero, the first major colonization occurs with, and varies by, the mode of infant delivery (vaginal versus Cesarean section) and the method of infant feeding (breast milk versus infant formula). 4 Within the first 2-3 years of life, gut microbiome development occurs alongside the physiological and immune maturation of the intestine. As the infant diet progresses to solid foods, a sustained shift occurs in the richness and diversity of the gut microbiome, which begins to resemble that of an adult. Throughout life, multiple factors can influence the composition of the gut microbiome, such as diet, medications, physical, metabolic, and psychological stress, geography, and aging, as reviewed elsewhere 5 (Figure 2). Diet is one of the main factors driving the composition and diversity of the gut microbiota, and in response to diet the gut microbiota produces various metabolites. 6 A diet rich in complex indigestible carbohydrates (eg, fibers) supports gut microbe-derived metabolites such as short-chain fatty acids (SCFA), notably acetate, propionate, and butyrate.
Conversely, diets low in fiber and high in fat and simple carbohydrates have a low SCFA fermenting capacity and are linked with chronic health conditions such as colorectal cancer and cardio-metabolic diseases. 6 Crosstalk Gut Microbiota and Host -Mutualism for Homeostasis The gut microbiota and its host co-exist in a symbiotic relationship where both parties mutualistically benefit from the presence of the other. The host provides the gut microbiota a safe dwelling niche with a steady supply of nutrients for its survival, and the microbiota supports the host by generating beneficial metabolites, such as vitamins, enzymes and SCFA, participating in pathogen exclusion, and supporting the intestinal epithelial barrier and immune defenses. 7 The intestinal mucosal immune system is the largest immune constituent in the body that is in contact with the external environment, making it essential for host defense and maintaining homeostasis. To accomplish this, the mucosal immune system needs to be tolerant of mutualistic microbes, while at the same time it must ensure a beneficial microbial composition by limiting microbial overgrowth and being reactive to opportunistic pathogens. Studies in germ-free animals indicate the gut microbiome is essential for optimal intestinal immune development and defense. Germ-free animals demonstrate deficiencies in mucosal immune development which compromise their immune defense mechanisms. Absence of a gut microbiome leads to underdeveloped lymphoid structures (eg, Peyer's patches, mesenteric lymph nodes) and reduced immune cell populations such as IgA-producing plasma cells, CD4+ lamina propria T-cells and intraepithelial αβ T-cell receptor CD8+ cells. 8,9 Angiogenin-4, a Paneth cell-derived antimicrobial peptide important for epithelial host defense against gut microbes, demonstrated decreased gene expression in germ-free compared to conventional mice. 10 Although the gut microbiome's involvement in mucosal immune regulation expands beyond the intestinal tract, 9 here in this review we focus on intestinal innate and adaptive immunity. Short-Chain Fatty Acids -Butyrate Short-chain fatty acids are organic acids produced predominantly in the colon by gut microbial fermentation of dietary fermentable fiber and resistant starches, and to a lesser extent, dietary and endogenous proteins. 11 SCFA are monocarboxylates with a concentration ratio in a healthy colonic lumen of roughly 60:25:15 acetate (C2):propionate (C3):butyrate (C4), respectively. 11 The presence of these weak acids in the colon lowers luminal pH, which favors the growth of butyrate-producing bacteria. The use of metagenomic-targeted approaches has identified butyrate-producing bacteria as a functional group rather than a coherent phylogenic group. 12 Predominating within the Firmicutes phylum within clostridial clusters IV and XIVa, butyrate producers are gram-positive, strictly anaerobic and oxygen-sensitive, saccharolytic bacteria. 11 Numbers of clostridial clusters IV and XIVa are low in the neonatal period, slightly increase up to 2 years of age, and then dramatically rise during late childhood and adolescence, but then begin to decline again in adulthood and especially in the elderly. 13,14
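As a rough numerical illustration of the ~60:25:15 luminal ratio cited above, the sketch below splits an assumed total SCFA pool into per-acid concentrations. The 120 mM total is an assumption chosen only for illustration; the review does not state a total pool size.

```python
# Illustrative only: split an assumed total luminal SCFA pool using the
# ~60:25:15 acetate:propionate:butyrate molar ratio cited in the review.
TOTAL_SCFA_MM = 120.0  # assumed total (mM); not a figure from the review
RATIO = {"acetate": 60, "propionate": 25, "butyrate": 15}

ratio_sum = sum(RATIO.values())
for acid, part in RATIO.items():
    print(f"{acid}: ~{TOTAL_SCFA_MM * part / ratio_sum:.0f} mM")
# acetate: ~72 mM, propionate: ~30 mM, butyrate: ~18 mM
```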
Butyrate The SCFA butyrate is known to be of high biological importance. Butyrate is the primary fuel source for the colonocyte, where nearly 90% of generated butyrate is metabolized locally. [Figure 2. Key factors influencing the composition and diversity of the gut microbiome. Many factors can influence the gut microbiota composition and diversity, beginning with the birthing process and first feeding methods; diet, psychological and physiological stress, pharmaceutical exposure, and geographic residence, traveling and exposures are among several factors which influence the microbiome throughout the lifecycle. Reprinted with permission, Cleveland Clinic Center for Medical Art & Photography ©2015. All Rights Reserved. 163] SCFA absorption occurs by passive diffusion, as well as active transport by intestinal epithelial cells via sodium-coupled monocarboxylate transporter 1 (SMCT1, encoded by SLC5A8) and proton-coupled monocarboxylate transporter 1 (MCT1, encoded by SLC16A1). 15 Expression of SCFA transporters is regulated by the presence of SCFA, as demonstrated in germ-free mice and conditions of gut dysbiosis and reduced luminal SCFA. [16][17][18] SCFA not metabolized in the colon are carried into the liver via the portal vein and used as an energy substrate for hepatocytes, thus leaving very little butyrate in the systemic circulation. 19 However, SCFA can reach the brain and cross the blood-brain barrier, likely due to high expression of MCT1 on endothelial cells, with average butyrate concentrations of 17.0 pmol/mg of brain tissue in humans. 20 Butyrate supports the integrity of the intestinal epithelial barrier by regulating the expression of tight junctional proteins and supporting intestinal mucus production. 19,21 Laboratory studies suggest that butyrate assists with gut motility 22 by serving as a ligand and activator of SCFA receptors, 23 inducing the gut hormone peptide YY 24 or mediating enterochromaffin cell release of serotonin. 25 Butyrate enhances water and electrolyte absorption through upregulation of the Na+-H+ exchanger and induction of genes encoding for ATPase ion exchangers. 22 As a histone deacetylase (HDAC) inhibitor, butyrate can alter gene expression, inhibit cell proliferation, and induce cell differentiation or apoptosis, leading to butyrate's anti-tumor properties. 21 Butyrate also has anti-inflammatory properties, due in part to its role in HDAC inhibition in various cell types such as the intestinal epithelium and immune cells, as well as inhibition of the activation of the transcription factor nuclear factor-κB (NF-κB). 21 Through downregulation of the NF-κB signaling pathway, butyrate has been shown to modulate proinflammatory cytokine production. 21,26 SCFA also serve as ligands for G-protein-coupled receptors (GPCRs), including GPR41, GPR43, and GPR109A, expressed on intestinal epithelial and immune cells. Through interactions with these GPCRs, SCFA can activate anti-inflammatory signaling cascades and modulate intestinal homeostasis. Using experimental mouse models to induce intestinal inflammation and bacterial infection, GPR43, GPR41, and SCFA exposure were necessary for mounting an immune response, mitigating inflammatory insults, and clearing bacteria 36 via the induction of chemokine and cytokine release in intestinal epithelial cells and activated effector T cells. These responses were due to activated extracellular signal-regulated kinase 1/2 and p38 mitogen-activated protein kinase, which were GPR41/43 dependent. 36 GPR109A is highly expressed on innate immune cells and adipose tissue, as well as the apical membrane of intestinal epithelial cells. 34 GPR109a−/− mice have altered intestinal immune capacity with reduced frequency of Treg and IL-10-producing CD4+ T cells in the colon.
33 When subjected to chemically-induced colonic inflammation and cancer with dextran sodium sulfate (DSS) and azoxymethane (AOM), GPR109a−/− mice had exacerbated inflammation and colon carcinogenesis, suggesting the importance of GPR109a in promoting anti-inflammatory properties and colonic homeostasis. 33 Further studies showed that GPR109a-dependent signaling suppressed IL-23 production from dendritic cells and reduced colonic inflammation. 37 Presence of GPR109a was also shown to protect against enterotoxigenic Escherichia coli (ETEC) infection and to support secretory IgA responses and intestinal barrier integrity. 38 Sivaprakasam et al provided a detailed review of SCFA receptors. 39 Intestinal Immunity As the largest compartment of the immune system, the intestine has anatomically and physiologically distinct immune components. The Peyer's patches and the mesenteric lymph nodes comprise organized lymphoid tissues known as the Gut Associated Lymphoid Tissue (GALT). The effector sites of the intestine are the mucosal epithelium and the underlying lamina propria. Within the lamina propria are many different immune cells, including activated T cells, plasma cells, and numerous innate immune cells including mast cells, dendritic cells, eosinophils, and macrophages (Figure 3). A detailed review of the intestinal immune system can be found in Mowat et al. 40 Butyrate in Intestinal Immunity -Innate Mucosal Barrier The single layer of intestinal epithelial cells and its adjacent mucous layer serve as the host's first line of intestinal immune defense by providing a physical barrier against pathogen penetration. The epithelium also secretes antimicrobial peptides, produced by Paneth cells located at the bottom of the intestinal crypts, and secretory immunoglobulin A (sIgA). In response to microbes, epithelial cells secrete cytokines and chemokines that recruit immune cells for protective immunity. Within the lamina propria, macrophages and dendritic cells also facilitate the innate immune response in the mucosa. Dysregulation of these immune responses can lead to inflammatory conditions of the intestine. Tight junctional (TJ) proteins seal the paracellular space between intestinal epithelial cells, and their disassembly compromises barrier integrity. In cell culture models, increased TJ reassembly and restoration of transepithelial electrical resistance (TER) were attributed to butyrate's induction of AMP-activated protein kinase (AMPK) activity. 41,43,46 During exposure to LPS, butyrate's protection against LPS-induced TER reduction and paracellular permeability coincided with less activation of the NLRP3 inflammasome and autophagy via butyrate's HDAC-inhibitory activity. 45 Butyrate has also been shown to stabilize hypoxia-inducible factor (HIF), a transcription factor that coordinates the low-oxygen response in the colonic epithelium to regulate intestinal barrier function. 47 Positive effects of butyrate on TJ proteins are dose-dependent, with lower doses demonstrating benefit. In vitro studies with E12 mucus-producing epithelial cells demonstrated that lower (1-10 mM), but not higher (50-100 mM), butyrate dosing prevented alterations in TER, FITC-dextran permeability, and mucus production by goblet cells. 44 Higher butyrate dosing in Caco-2 monolayers (8 mM) was cytotoxic, reducing TER, increasing FITC-dextran permeability, and inducing apoptosis compared to lower dosing (2 mM). 48 In a mouse model of ethanol exposure, the amount and delivery method of tributyrin, a butyrate prodrug, paradoxically impacted the effect on intestinal TJ proteins and liver injury.
Higher doses (10 mM) provided daily in the food supply increased liver injury and steatosis compared to lower doses provided by oral gavage, despite both methods and doses protecting against ethanol-induced disassembly of intestinal TJ proteins. 41 Together, these data support the notion that butyrate has a direct beneficial effect on supporting intestinal epithelial barrier integrity in a dose-dependent manner. Mucosal Inflammation The intestinal immune system must remain tolerant of commensal microbes in order to maintain homeostasis. Pattern recognition receptors (PRRs), including toll-like receptors (TLRs) and nucleotide-binding oligomerization domain-containing protein 2 (NOD2), are expressed by intestinal epithelial cells and immune cells within the lamina propria. These evolutionarily conserved receptors recognize microbially-associated molecular patterns (MAMPs) and trigger diverse innate immune responses. Some PRRs also recognize damage-associated molecular patterns (DAMPs) released during cellular stress or tissue injury. Toll-like receptors (1, 2, 4, 5, and 6) are located primarily in the plasma membrane and interact with components of microbial pathogens. Despite having different ligands, PRRs share signaling pathways that ultimately activate pro-inflammatory transcription factors, such as NF-κB, which controls expression of genes encoding for proinflammatory cytokines, chemokines, inducible inflammatory enzymes, adhesion molecules, growth factors, acute phase proteins, and immune receptors. 49 Thus, it is essential that there is tight regulation of PRR activity to avoid excessive inflammation and dysregulated immune responses. A more detailed review of PRRs can be found in Burgueno et al. 50 Multiple human and animal studies demonstrate that, in response to butyrate within the intestine, proinflammatory cytokines such as IFN-γ, TNF-α, IL-6, and IL-8 are inhibited, and anti-inflammatory cytokines IL-10 and TGF-β are induced. 40 Butyrate has a long-standing history of being anti-inflammatory through its inhibition of NF-κB, as demonstrated in several in vitro and in vivo studies. [51][52][53][54][55] Butyrate, a ligand for GPR109A, inhibited LPS-induced activation of NF-κB in normal colon cells. 34 The nuclear transcription factor PPARγ, which antagonizes NF-κB, was reported to be upregulated by butyrate in HT-29 colonic epithelial cells. 56 Butyrate suppression of reactive oxygen species through support of the antioxidant system has been suggested as a means for butyrate inactivation of NF-κB inflammatory signaling. 57 In a mouse model of chronic-binge ethanol feeding, which induces oxidative stress, tributyrin co-supplementation mitigated losses in gene expression of superoxide dismutase 2 (SOD2) and thioredoxin (TRX1) and protected against ethanol-induced NOX1. 58 Mucosal Antimicrobial Peptides Antimicrobial peptides (AMP), including defensins, cathelicidins, and C-type lectins (eg, the regenerating [Reg] islet-derived protein family), are evolutionarily highly conserved and important in innate immunity at intestinal mucosal surfaces. Butyrate has been shown to promote production of AMPs by intestinal epithelial cells through its interaction with GPR43, 59 activation of the MEK/ERK and JNK pathways, 60 and its cell proliferation mechanisms. 60 Butyrate was also shown to increase AMPs secreted by macrophages.
Acting via its HDAC3-inhibitory function, butyrate drove monocyte-to-macrophage differentiation and induced macrophage production of AMPs (expression of the S100A8 and S100A9 genes and of calprotectin) in the absence of an increased proinflammatory cytokine response, which led to enhanced bactericidal function in vitro and in vivo. 61 Butyrate and Intestinal Innate Immune Cells Neutrophils As first responders to an inflammatory site, neutrophils respond to pathogens by producing cytokines that begin coordinating the recruitment and activation of other immune cells. Several neutrophil functions are modulated by SCFA. By regulating the production of inflammatory mediators, such as TNFα and IL-17, SCFA modify neutrophil recruitment. 62 Neutrophil chemotaxis has been shown to be regulated through SCFA activation of the GPR43 receptor in neutrophils. 63,64 SCFA may also modify neutrophil functions such as their phagocytic capacity and ability to produce and release reactive oxygen species and nitric oxide. 65 Butyrate and propionate induce apoptosis in both activated and non-activated neutrophils, which depends on activation of caspases but not Gαi/o and Gαq pathways, suggesting independence of SCFA receptors. 66 Macrophages Intestinal macrophages are the most abundant immune cells within the lamina propria, where they are important for the induction of innate immune responses. In bone marrow-derived macrophages stimulated with LPS, butyrate decreased secretion of IL-6, IL-12p40, and nitric oxide to a greater extent than acetate or propionate, in a dose-dependent manner, suggesting butyrate has anti-inflammatory effects on macrophages. 30 Macrophages isolated from the colonic lamina propria, stimulated with LPS and treated with butyrate, showed a reduced inflammatory response, exhibited by decreased IL-6 secretion and mRNA expression and decreased IL-12 and inducible nitric oxide synthase, although butyrate had no effect on TNFα or MCP-1. 30 Similar effects were noted in macrophages isolated from the colonic lamina propria of mice treated with antibiotics and butyrate, suggesting that butyrate modulates immune responses of colonic lamina propria macrophages. These responses were dependent on HDAC inhibition but not TLRs and GPCRs. 30 Contrary to these studies, SCFA alone or in combination, and/or combined with TLR agonists, led to pro-inflammatory effects by inducing IL-1β, IL-6, and CXCL8/IL-8 in human peripheral blood mononuclear cells. 67 Thus, the divergent inflammatory effects of butyrate appear to depend on the cell type studied and the conditions, environment, and type of stimulation. 68 Mast Cells Mast cells, which are abundant within the GI tract mucosa and submucosa, are known to play a role in GI diseases such as food allergy, as well as certain forms of colitis and Crohn's disease. [69][70][71][72] Provision of dietary fibers and prebiotics that are fermented into SCFA has been tested in animal models of food allergy. Benefits relating to anaphylaxis scores and IgE concentrations were noted with fiber supplementation, or the addition of acetate or butyrate to animal drinking water, suggesting the production of SCFA from fiber as the mediating effector on mast cells. 73 Similarly, prebiotics tested in mouse models of colitis which led to increased fecal SCFA levels were linked with protection of the intestinal barrier, and a reduction in inflammation and inflammatory cytokines.
74 Germinated barley, a prebiotic that is fermented into SCFA, reduced colonic mast cell recruitment when fed to rats in an experimental colitis model; 75 and ulcerative colitis patients fed germinated barley had reduced inflammation and an improved clinical activity index. 76,77 When testing the direct effects of butyrate on mast cells, the jejunal mucosa of pigs treated with butyrate had reduced mast cell degranulation and gene expression of proinflammatory cytokines. 78 These data corroborate a previous report demonstrating that the direct effect of butyrate on mast cells was due to the MAPK signaling pathway and inhibition of JNK phosphorylation. 79 Innate Lymphoid Cells Innate lymphoid cells (ILC) are regulated by multiple endogenous mammalian cell-derived factors and integrate innate and adaptive immune responses to assist in maintaining physiological homeostasis. 80 Non-cytotoxic ILCs consist of three distinct groups, ILC1, 2, and 3, defined based on their transcription factor requirements, effector cytokine expression, and other distinct effector functions. 80 ILC3s express retinoid-related orphan receptor γt (RORγt) and produce IL-17A, IL-22, lymphotoxin, and GM-CSF. 81 While ILC3s increase in population in the distal small intestinal lamina propria, they were found to have distinctive distribution in proximal versus ileal Peyer's patches in mice based on specific transcription factor expression; and the suppression of RORγt+ ILC3 in ileal Peyer's patches was linked with the presence of butyrate. 82 Butyrate levels, which were higher in the ileum than the jejunum as expected, were inversely associated with RORγt+ ILC3s and IL-22 expression, suggesting that butyrate is a regionally specialized factor suppressing ILC3s in terminal ileal Peyer's patches. 82 A study in mice found that butyrate supplementation, and the subsequently increased colonic butyrate levels, promoted IL-22 production from ILCs in the lamina propria and mesenteric lymph nodes through histone deacetylase inhibition and GPR41, by promoting the aryl hydrocarbon receptor and hypoxia-inducible factor 1α. 83 IL-22 aids in protecting the intestine against inflammatory injury by inducing AMPs and supporting the intestinal barrier. 84,85 Butyrate in Intestinal Immunity -Adaptive Butyrate has been shown to play an important role in the adaptive immune response via two distinct pathways: firstly, through the effect of butyrate on monocyte-derived dendritic cells (DC), 32,86-88 and secondly, through butyrate's direct effect on T lymphocytes. 89,90 Dendritic Cells As specialized antigen-presenting cells, dendritic cells (DC) are in direct contact with the gut microbiota and its metabolites. In the intestine, DC induce adaptive immune responses in primary T cells, bridging the gap between innate and adaptive immunity. 91,92 Immature DC help maintain a state of immune tolerance, and mature DC can activate immune responses. Butyrate treatment is reported to have a significant impact on the differentiation, maturation, and overall T lymphocyte-stimulating effects of human monocyte-derived DC. 86,90 In vitro studies found that butyrate, in the presence of inducers (eg, LPS, TNF-α), affected the differentiation of DC derived from human monocytes, induced an immunosuppressive effect, and inhibited T cell proliferation. 21,86 Butyrate treatment at low non-toxic doses reduced the expression of surface markers of mature DC (CD80, CD83, CD40, CD45, MHC class II molecules).
21,86 Multiple studies have also explored the modulatory effect of butyrate on cytokine production by DC, and reported that butyrate treatment inhibited the production of the pro-inflammatory cytokine IL-12 when DC were stimulated. 21,86,93 Liu et al reported that butyrate treatment resulted in a 3-fold decrease in IL-12 secretion, a 5-fold decrease in IFN-γ, and an 11-fold increase in IL-10 secretion from DC. 21 Butyrate-stimulated DC significantly promoted IL-10 production by priming Type-1 regulatory T cells (Tr1). 93 Through activation of GPR109a in macrophages and DC, butyrate plays a key role in the maintenance of pro- and anti-inflammatory T lymphocytes, as butyrate potentiates conversion of naïve T cells to FoxP3+ regulatory T cells while suppressing IFN-γ+ T cells. 33,87 Kaiser et al also reported that butyrate rendered DC metabolically less active by significantly antagonizing the LPS-induced extracellular acidification rate, as well as by significantly reducing the mitochondrial oxygen consumption rate in butyrate-treated DC at baseline. 93 T and B Lymphocytes Independent of its immunomodulatory effects mediated by DC and macrophages, butyrate also has dose-dependent direct effects on T lymphocytes. 89,94,95 By utilizing a combination of both in vivo and in vitro experiments, Arpaia et al concluded that butyrate can boost extrathymic Treg-cell generation by acting directly on T cells; in the absence of DC, this effect was mediated by an increase in the extrathymic CNS1 (Conserved Noncoding Sequence-1)-dependent differentiation of Treg cells. 89 Butyrate, via its HDAC-inhibitory function, caused increased Foxp3 protein acetylation, which ultimately resulted in higher Foxp3 protein levels in Treg cell culture. 89 Kespohl et al studied the effect of different butyrate concentrations (0.1 mM to 1 mM in vitro, and 50 mM to 200 mM in vivo via the oral route) on the T cell-mediated immune response utilizing CD4+ T cells. 95 They reported that at lower concentrations (0.1 to 0.5 mM) butyrate facilitated differentiation of Tregs both in vitro and in vivo, while at a higher concentration (1 mM) butyrate induced the expression of the transcription factor T-bet, which resulted in IFN-γ-producing Tregs or conventional T cells. 95 In addition to its direct effect on CD4+ T lymphocytes, butyrate also directly modulated gene expression in CD8+ cytotoxic T lymphocytes and altered gene expression of effector molecules such as IFN-γ in a dose-dependent manner. 96,97 Butyrate also improved the memory potential and enhanced the recall capacity of CD8+ memory T cells (Tmem) through reprogramming cellular mitochondrial metabolic flux. 98 Butyrate has additionally been reported to promote IL-10 production from Th1 cells via a GPR43-mediated effect, which plays an important role in the maintenance of intestinal homeostasis; Gpr43−/− CBir1 Tg Th1 cells have been reported to induce severe colitis in mice. 100 Recent studies have also investigated the effect of butyrate on regulatory B lymphocytes (B10) and found butyrate to have anti-inflammatory properties resulting from induction of IL-10-producing B cells. 101,102 However, conflicting data exist, as other studies which utilized different doses of butyrate reported a direct inhibitory effect of butyrate on B10 cells, and speculated that previous reports of B10 induction by butyrate likely resulted from indirect effects via the serotonin-derived metabolite 5-hydroxyindole-3-acetic acid. 103 A study by Daien et al showed that the SCFA acetate promoted B10 cells, which resulted in an anti-inflammatory effect. 103 Butyrate has been reported to cause B cell-intrinsic epigenetic modulation of the antibody response through its HDAC-inhibitory effect and by enhancing class-switch DNA recombination, which results in inhibition of the autoimmune response. 104 SCFA also induce antibody responses by stimulating intestinal mucosal IgA responses and systemic IgG responses. 105 Overall, a substantial level of evidence has been reported supporting the role of SCFA in general, and butyrate in particular, in various stages of adaptive immune responses.
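To keep track of the dose-dependent adaptive effects just summarized, the toy sketch below simply encodes the in vitro concentration ranges reported by Kespohl et al and the fold changes reported by Liu et al as data. The baseline cytokine values are arbitrary placeholders, not measurements, and the classifier is a restatement of the cited ranges rather than a predictive model.

```python
# Toy restatement of dose-dependent effects reported above (not a model).

def t_cell_response(butyrate_mm: float) -> str:
    """Dominant in vitro CD4+ T cell response per the ranges of Kespohl et al."""
    if 0.1 <= butyrate_mm <= 0.5:
        return "Treg differentiation favored (reported at 0.1-0.5 mM)"
    if butyrate_mm >= 1.0:
        return "T-bet induction, IFN-gamma-producing T cells (reported at 1 mM)"
    return "outside the ranges reported above"

# Fold changes in DC cytokine secretion reported by Liu et al.
FOLD_CHANGE = {"IL-12": 1 / 3, "IFN-gamma": 1 / 5, "IL-10": 11}
baseline = {"IL-12": 300.0, "IFN-gamma": 500.0, "IL-10": 50.0}  # arbitrary units

for cytokine, fold in FOLD_CHANGE.items():
    print(f"{cytokine}: {baseline[cytokine]:.0f} -> {baseline[cytokine] * fold:.0f}")
print(t_cell_response(0.3))  # Treg differentiation favored ...
print(t_cell_response(1.0))  # T-bet induction ...
```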
Clinical Significance for Select GI Diseases Inflammatory Bowel Diseases Inflammatory bowel diseases (IBD) are chronic intestinal inflammatory disorders with two main subtypes: Crohn's disease and ulcerative colitis. 106 Although the exact pathogenesis of IBD is not completely understood, IBD involves complex interactions among various influencing factors of genetics, gut microbiota and mucosal immunity, via both innate and adaptive immune responses. 106 In both subtypes of IBD, a reduction in butyrate-producing gut microbes has been reported. 107,108 As described in the detailed discussion earlier in this review, butyrate has multi-stage modulating effects on intestinal defense mechanisms, which include protection of the intestinal mucosal barrier through promotion of tight junctional proteins in the intestinal epithelium, support of innate and adaptive immune responses, as well as inhibition of oxidative stress by reducing cyclooxygenase-2 (COX-2) levels and improved detoxification of hydrogen peroxide (H2O2) by induction of catalase. 109,110 Since the late 20th century, when the implication of butyrate in colitis was first highlighted, a series of experimental and clinical studies have been conducted. 110,111 Experimental Studies with IBD Modeling In this section, we discuss the studies exploring the butyrate effect in IBD-specific processes. For a broader and more detailed overview of the butyrate effect on overall immune function, please refer to the earlier sections of this review. Although some of these previously discussed concepts do have significant application in IBD pathogenesis, they will not be discussed in the current section to avoid redundancy. Intestinal mucosal ulceration is one of the major manifestations of IBD, and butyrate's effects on intestinal epithelial cell growth and cell death processes have been well documented. Depending on the overall homeostatic condition, such as the presence or absence of an alternate energy source, butyrate has been shown to have either growth-stimulatory or apoptotic properties for human colonic epithelial cells. 112 In addition, butyrate has been shown to reduce DNA damage from oxidative stress in both human- and rat-derived colonocyte cultures. 113,114 Early life exposures such as breastfeeding have been reported to have a protective role against the development and pathogenesis of IBD. 115 Gao et al studied this mechanism further and analyzed this effect by utilizing immature human enterocytes. 116 Their team reported that breast milk induces an anti-inflammatory environment in the newborn GI tract via its metabolite butyrate, by inducing the expression of genes for both tight junctional proteins and mucus production. 116 As discussed in the earlier section on innate immunity, inflammasomes, a group of cytosolic protein complexes which regulate the balance of commensal bacteria and protect from pathogenic organisms, also have a potential role in the pathogenesis of IBD.
117,118 While inflammasomes are protective when the intestinal barrier is intact, once the barrier is disrupted by gut dysbiosis, inflammasome activation and recruitment of immune cells are associated with mucosal inflammation, which is another major pathophysiologic mechanism for ongoing inflammation in IBD. 118 Butyrate has been shown to modulate pro-inflammatory signals and inhibit several NOD-like receptor family pyrin domain-containing 3 (NLRP3) inflammasome markers in an in vitro co-culture model of intestinal inflammation. 118,119 One study reported that butyrate significantly reduced IL-8 secretion, and therefore IL-8-mediated chemotaxis, when IL-1β was inhibited by other IBD therapies (such as 5-ASA), highlighting a mechanism behind the inconsistent clinical response to butyrate alone and the potential for combining butyrate with other treatment modalities of IBD. 120,121 Geirnaert et al augmented the microbiota derived from Crohn's disease patients by adding butyrate-producing bacteria (F. prausnitzii, Butyricicoccus pullicaecorum, and a mix of six butyrate-producers), which improved epithelial barrier integrity in vitro. 122 Animal Studies In animal models, the preventative and therapeutic potential of butyrate for colitis has been studied either by modulating butyrate levels through dietary supplementation of butyrate-yielding prebiotics, sodium butyrate, or tributyrin, or through direct sodium butyrate instillation via rectal enemas. In a DSS-induced colitis model in male outbred CD-1 mice, the dietary provision of a baked corn and bean snack (20-40 g/kg body weight) resulted in the highest concentration of butyrate in the cecum and feces, and an anti-inflammatory effect by downregulating IL-1 receptor, TLR and TNF-alpha pathways. 123 Smith et al reported that 150 mM butyrate supplementation in drinking water resulted in increased colonic regulatory T cell (cTreg) frequency and number in germ-free mice. 124 Similarly, Zhang et al studied the effect of butyrate supplementation in a rat model of colitis and reported that butyrate supplementation played an important role in regulating the Treg and Th17 cell balance and exerted a protective effect against the development of IBD. 125 Impaired intestinal barrier function and increased permeability are considered one of the key mechanisms in the development of IBD, and as discussed earlier, butyrate supplementation has been shown to mitigate this impairment in multiple in vitro and animal models. 42,126 In IL-10-deficient mice, which are prone to the development of colitis, butyrate supplementation has been shown to provide protection against colitis through reducing the amount of colitogenic IgA-coated bacteria. 127 Butyrate enemas have also been shown to stimulate mucosal repair and healing, and to exert an anti-inflammatory effect on the intestinal epithelium. [128][129][130] Burrello et al reported that fecal microbiota transplantation with healthy stool in mice exposed to chronic intestinal inflammation decreased colonic inflammation, effects which are mediated through T cell modulation. 131,132 Human Studies and Therapeutic Application Human studies on the therapeutic use of butyrate in IBD have been conducted since the late 20th century.
In 1992, Scheppach et al performed a clinical trial in which 100 mmol/L butyrate via rectal enema was compared with placebo for distal ulcerative colitis, and reported that butyrate improved all parameters related to colitis, including clinical indices (decreased stool frequency and blood in stool) and endoscopic and histological inflammatory grading. 133 Other clinical trials which followed reported mixed results, ranging from no to some butyrate effectiveness, although not at the degree of therapeutic value. 134,135 Hamer et al studied the effect of butyrate on low-grade inflammation in ulcerative colitis patients in remission, and reported only minor improvements in inflammatory and oxidative stress parameters after rectal enemas of 10 mM sodium butyrate for 20 days. 136 In a systematic review and meta-analysis of eight randomized clinical trials on a total of 227 patients with UC, Jamka et al reported that the current limited evidence does not support the use of butyrate enemas in UC. 137 Due to inconsistencies in the response to butyrate therapy, perhaps to some degree due to variability in dosage, duration and standardization of the formula, the current applicability of butyrate in IBD is considered as an add-on supplementary therapy at best. 121,138 One area in which butyrate has shown more consistent effectiveness is in cases of diversion colitis, a post-surgical manifestation in which a part of the colon is out of continuity and butyrate depletion is thought to be the major factor driving inflammation. 139 Although surgical treatment with either reconnection or resection of the diverted colon is a more definitive treatment, butyrate enemas have been shown to have therapeutic value when medical management is considered. 140 It should be noted that, due to the overall discrepancy in the evidence, there remains a lack of SCFA- or butyrate-related guidelines from GI and Nutrition Societies. 141,142 Although most studies utilizing gut microbiota reprogramming by means of probiotics, prebiotics, and synbiotics, 143 or fecal microbiota transplantation in patients with UC or CD have produced positive results, 144-146 the exact mechanism behind this complex interaction between the gut microbiome and the host is not clearly understood, requiring further investigations to determine the role and implications of butyrate in the management of this complex inflammatory disorder.
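For orientation on the enema doses quoted above, the snippet below converts the molar concentrations used by Scheppach et al and Hamer et al into mass concentrations of sodium butyrate. The molar mass is standard chemistry; everything else is straight arithmetic from the doses stated in the text.

```python
# Convert butyrate enema molar concentrations into g/L of sodium butyrate.
M_SODIUM_BUTYRATE = 110.09  # g/mol (C4H7NaO2)

for label, mmol_per_l in [("Scheppach et al (1992)", 100), ("Hamer et al", 10)]:
    grams_per_l = mmol_per_l / 1000 * M_SODIUM_BUTYRATE
    print(f"{label}: {mmol_per_l} mmol/L ~ {grams_per_l:.1f} g/L")
# Scheppach et al (1992): 100 mmol/L ~ 11.0 g/L
# Hamer et al: 10 mmol/L ~ 1.1 g/L
```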
Colorectal Cancer Patients with colorectal cancer have been reported to have low levels of SCFA, including butyrate. 147 Butyrate exerts seemingly paradoxical effects on proliferation, supporting healthy cells in homeostasis but suppressing the hyperproliferation induced by cancer. 112,148,149 Sodium butyrate has been shown to induce apoptosis in human colonic cancer cell lines via a p53-independent pathway. 150 Butyrate also provides protection against oxidative stress and DNA damage. 113 Butyrate has also been reported to have cancer-protective effects via several pathways, which include suppression of Neuropilin-1 (NRP-1), 151 inhibition of the mitogen-activated protein kinase (MAPK) signaling pathway, 152 differential regulation of the Wnt/β-catenin signaling pathway, 153,154 upregulation of microRNA miR-203 and promotion of cell apoptosis, 155 and inhibition of pro-proliferative miR-92a. 156 Due to the well-established association of dietary pattern with colorectal cancer, the majority of human trials have investigated interventions by means of modifying dietary fiber intake, and reported reduced risk of colorectal cancer recurrence. 157,158 Recent extensive meta-analyses have confirmed these trends and reported a strong association of colorectal cancer risk with dietary pattern, and specifically with a low-fiber, high-fat and simple-sugar-containing diet. 159,160 Although there are several possible theories through which the anti-colon-cancer effect of the high-fiber diet is thought to be mediated, the evidence from in vitro studies has shown that butyrate plays a major role as an important intermediary metabolite in this pathway. 140,161,162 More research is needed to demonstrate the direct anti-colon-cancer effects of gut microbially generated butyrate. Conclusion and Future Perspectives In summary, butyrate is a key gut microbial metabolite which mediates the effects of the gut microbiota on the immune system; not only does it play a key role in the maintenance of intestinal immune homeostasis, but it also has potential future therapeutic implications for a spectrum of gastrointestinal and systemic disorders. Figure 4 summarizes what is currently known regarding butyrate's role within the intestinal immune system. Certain challenges remain due to its low bioavailability, short half-life and variable levels in healthy individuals, as well as the lack of consistent clinical data supporting its value as a therapeutic option. Future studies, including rigorous clinical trials, are needed to establish this value. Disclosure This work was supported by National Institutes of Health grants no. R01AA028043-01A1 (Cresci) and 2T32DK083251-11A1 (MPI). The authors report no conflicts of interest in this work.
Flexible Pathways for Modernisation of Undergraduate Engineering Programmes by Country-Adapted Implementation of the Practice-Integrated Dual Study Model in Bulgaria and Romania Abstract The paper addresses the need for more flexible routes for acquiring current industry-related skills necessary to boost and sustain innovation in the sectors identified by the national strategies of Smart Specialisation and regional innovation in Bulgaria and Romania. For this purpose, regular practical phases in enterprises were integrated into the ongoing engineering curricula to accelerate the update of knowledge traditionally provided by higher education institutions. The paper presents a summary of the feasibility study conducted to identify the transferability of a country-adapted model of dual higher education in Bulgaria and Romania. Consequently, the approaches of curriculum adaptation followed by the implementing universities in both countries are briefly described. Finally, the paper discusses the outcomes and provides an outlook for future development of the dual study model in Bulgaria and Romania. INTRODUCTION The need for flexible and responsive engineering curricula is a challenge in order to keep pace with rapid technological advancement and increasing innovation pressure. In the EU context, the need for modernising ongoing engineering curricula is particularly acute in the new Member States Bulgaria, Romania and Croatia, which were ranked as the modest innovators in the EU, scoring the last three places in the 2018 European Innovation Scoreboard. This leads to missed economic opportunities for both the states and EU investors, since industrial sectors such as manufacturing represent one of the main sectors of opportunity in the region. However, the shortage of skills has been widely recognized as a key obstacle to innovation in these sectors. Problem Definition Advanced technologies are changing manufacturing industries, transforming traditional business models and supply chains into dynamic and interconnected systems. Thus, there is an urgent need to create a flexible, adaptable and active learning workforce [Marr, B., 2019]. Education providers are challenged to regularly update engineering curricula in order to respond to the rapidly changing business and technological environment. However, the modernisation of ongoing curricula is often obstructed by the long process of design, approval and accreditation phases within a slow-moving legal framework (university perspective). On the other hand, there is a lack of methodology for involving industry stakeholders in the design and delivery of curriculum content and practical in-company training (business perspective). To tackle these problems, the project "DYNAMIC" established a knowledge alliance between academic organisations, industrial enterprises and chambers of industry and commerce to ensure better labour market intelligence and improve the innovation capacities of the academic and industrial stakeholders. The alignment of objectives can be materialised through a practice-integrated dual study education programme, which strengthens the supply-demand feedback chain between business and academia. METHODOLOGY The implementation of the dual study model in Bulgaria and Romania was examined in the scope of a feasibility study under the name "EUDURE -European Dual Research and Education".
The objective of the feasibility study was to identify the potentials and experiences of the countries in order to find adaptive elements for transfer that can best be harmonized and adapted to the regional structures and conditions. The EUDURE project examined the framework conditions and transfer options of the German dual study principles in Bulgaria and Romania and formulated specific recommendations for implementation. The subjects of the study were two of the main forms of dual study programmes in Germany -programmes with integrated vocational training and practice-integrated dual programmes, as defined by the German Council of Science. The study was conducted in cooperation with stakeholders from Bulgaria and Romania, represented by higher education institutions and social partners. The EUDURE feasibility study adopted for the country-specific investigation the transfer factors formulated by the German Academic Exchange Service (DAAD) for measuring the adaptation potential in other countries. Based on the DAAD methodology, country-specific data was collected in order to answer the following questions (encoded as a simple data-structure sketch at the end of this section): • What type and quality of binational exchange already exists? • Does the educational governance structure promote transfer initiatives? • Is there already an understanding of dual education models in the target countries? • Are the economic conditions in the target country conducive? • What legal framework and country-specific university-internal rules apply? • Are there German companies in the target country interested in cooperation? A further important condition for the transfer of the dual study form is the fundamental interest of social, economic and political decision-makers in the target country, e.g. through reforms and initiatives to promote dual training models. Results of the feasibility study are presented in the following section. SUMMARISED RESULTS OF THE FEASIBILITY STUDY Both Bulgaria and Romania offer study programmes at the level of the German Bachelor's degree programmes as part of the Bologna Process. In both countries, especially at universities, the focus is on practical and job-oriented university degrees. Both countries have comparable quality standards in teaching, a uniform higher education system with ECTS, and a similar semester schedule divided into winter and summer semesters. Furthermore, the framework conditions (both political and economic) are set up so that a fundamental transfer potential exists. In both countries, the efforts to upgrade vocational training are prominent, with the adaptation of the German dual vocational training system in the implementation stage. Extensive networks and cooperation between national and international economic and institutional partners already exist in both countries. Many companies with German participation are located in Bulgaria as well as in Romania. Based on the evaluation factors of the DAAD study on transfer potential, both Bulgaria and Romania offer very good starting points for the initiation of pilot projects on dual study as a further step within the readiness to reform and the paradigm change in the education sector. In Bulgaria and Romania, there is a significant shortage of skilled workers amid the forecasted economic upswing. This increases the demand for more practical orientation in highly qualified occupations and higher education. Therefore, the dual degree programme is an attractive model for both countries. In both countries, a strong initiative of the economy is currently restructuring and rebuilding the vocational training structures, based on the German vocational training system.
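As referenced above, the DAAD-based questionnaire can be viewed as a simple checklist. The sketch below encodes the transfer factors as data with a toy aggregation rule; the factor names paraphrase the questions in the methodology section, and the example answers are placeholders, not findings of the EUDURE study.

```python
# Illustrative encoding of the DAAD transfer factors used in the EUDURE study.
TRANSFER_FACTORS = [
    "existing binational exchange of adequate type and quality",
    "educational governance structure promotes transfer initiatives",
    "existing understanding of dual education models",
    "conducive economic conditions",
    "compatible legal framework and university-internal rules",
    "German companies interested in cooperation",
]

def transfer_readiness(answers: dict[str, bool]) -> float:
    """Share of transfer factors judged favourable (toy aggregation rule)."""
    return sum(answers.get(f, False) for f in TRANSFER_FACTORS) / len(TRANSFER_FACTORS)

example = {f: True for f in TRANSFER_FACTORS}  # placeholder: all favourable
print(f"Readiness score: {transfer_readiness(example):.0%}")  # -> 100%
```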
APPROACHES FOR CURRICULUM ADAPTATION FOR DUAL IMPLEMENTATION IN COUNTRY-SPECIFIC CONTEXT This section focuses on the adaptation of the curricula of two of the university partners in the project DYNAMIC into curricula with dual education elements. Context of curriculum adaptation Higher education across Europe is strongly characterised by the Bologna Process, whose reforms aim at more coherence of higher education systems across Europe. The implementation of education reforms based on the Bologna objectives in Bulgaria and Romania, in particular the three-cycle system as well as the use of the ECTS and Diploma Supplement tools, is fundamental for the introduction of dual studies at higher education level. The tools of the EHEA establish comparability between programmes at the same graduation level throughout Europe. In this context, similarities in the operational environment of the partner higher education institutions could be drawn upon in order to justify the transferability of the dual education model across Europe. These were used to identify common parts in the degree structures of Germany and Austria, where dual studies at undergraduate level are well established, and those in the transfer target countries Bulgaria and Romania. Constraints and limitations of the curriculum adaptation process Despite the similarities in higher education structures across the partnering countries in the project DYNAMIC, different approaches, explained by the country specifics of the individual national higher education systems, were followed by the academic partner institutions during the curricula adaptation process. Besides the constraints in curriculum adaptation imposed by the national regulations in higher education, certain domains are subject to additional control and standard implications that must be taken into account. A practical example is provided by Technical University of Varna, Bulgaria, during the curriculum evaluation and realignment of undergraduate programmes, which underlie the regulations of the Executive Agency "Maritime Administration". Practice-integrated dual higher education in Romanian context Practical example "Mechatronics" in Lucian Blaga University Sibiu The selected approach to tailor the educational process in order to comply with the requirements of the industrial partners was to adapt/change the syllabuses of specialty subjects. Certain specialty subjects were selected for this change (Computer Programming, Digital Electronics, Power Electronics, Microcontrollers, Hydraulic and Pneumatic Driving Systems, and Programmable Logic Controllers). For the dual-study Mechatronics study programme, supplementary hours of practical activities were added: 810 hours of practical activities were added to the existing 240 hours, leading to a total of 1050 hours for the dual study option (the arithmetic is spelled out in the sketch after this paragraph). Nine weeks of supplementary hours were added at the end of the 2nd, 4th and 6th semesters (a period which is otherwise allocated to the summer holidays).
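The sketch below simply checks the practical-hours arithmetic stated above; the even split across the added weeks is an assumption for illustration, as the paper does not state a weekly distribution.

```python
# Check the practical-hours arithmetic for the dual-study Mechatronics programme.
existing_hours = 240
supplementary_hours = 810
print(existing_hours + supplementary_hours)  # 1050, matching the text

# Supplementary hours spread over nine extra weeks after each of the
# 2nd, 4th and 6th semesters (even split assumed for illustration).
weeks = 9 * 3
print(supplementary_hours / weeks)  # 30.0 hours per week
```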
Another difference between the regular and dual study forms is that students from the dual study programme must attend the extracurricular courses organized by the companies (a mandatory requirement), while for students from the regular study programme attendance is optional. A new syllabus for practical activities was designed for the dual-study programme. New rules for assessing the students in the practical activities were also established by LBUS and agreed with the industrial partners. All diploma works/graduation papers of the graduates of the dual study programme must be carried out in companies (a mandatory requirement). The implementation of the practical phases for the dual study specialization was formalized by contractual agreements between the university and the industrial partner (a contract on practical work) as well as between the industrial partner and the student (a contract of internship). The differences between the regular and the dual study form were also formalized by designing a modified curriculum for the dual-study option of the Mechatronics study programme, which was approved by the Council of the Faculty of Engineering and by the University Senate of "Lucian Blaga" University of Sibiu. Practice-integrated dual higher education in Bulgarian context Practical example from Technical University Varna The Innovation Strategy for Smart Specialization of Bulgaria includes "Mechatronics and clean technology" as one of its thematic areas. At the end of 2018 the strategy was updated to include the new priority direction "Blue economy -development technologies". Currently, there is an urgent need for personnel in the shipbuilding and ship repair industry in Bulgaria. For this reason, the Bulgarian academic partner Technical University of Varna selected the programmes "Naval Architecture and Marine Technology", "Marine Engineering" and "Design of Marine Power Plants and Systems" for update of the ongoing curricula and alignment with industry needs. The selected engineering domains are characterized by strong regulation of the ongoing curricula and syllabi by the Executive Agency "Maritime Administration" and, for the specialty of "Marine Engineering" only, by the International Maritime Organization. In addition, all specialties follow the rules and legislation provided by the Law on Higher Education and the rules of activity of the Technical University of Varna. For this reason, the curricula could be only partly adapted for dual implementation, by integrating practical components into the existing plans. The approach for the curricula update and integration of practical phases can be described with the following principles: -all practical trainings (practices) included in the students' curriculum should be conducted not in the laboratories of TU-Varna but on the premises of industrial and design enterprises; -laboratory exercises in specialized subjects (where possible) are to be conducted on the premises of industrial and design enterprises; -in the specialties of "Naval Architecture and Marine Technology" and "Marine Engineering" there are planned hours (and corresponding credits) for independent work, which were also incorporated into the industrial enterprise training. Integrating practical components in the programme "Marine engineering" Students from the specialty of "Marine machinery" ("Marine engineering"), in the fourth year of their Bachelor's degree studies, take the subject "Repair of Ship Machinery" in the winter semester.
The programme was rescheduled in order to free two weeks at the beginning of December, in which students were accepted for training at the industrial partner MTG Dolphin. During this period of two weeks, the students could undergo their practical training directly involved in the repairs of marine machines and mechanisms. Classes were full-time for two weeks. All students signed a contract with MTG Dolphin and were asked to fill in a diary every day explaining the tasks assigned and solved. A contract between TU-Varna and MTG Dolphin was also signed in advance, before the training. Students who completed their practical training in the subject "Repair of Marine Machinery" and successfully passed their state exams in "Marine Engineering" and English for mariners at the end of June are currently employed on board marine ships, part of the world maritime merchant fleet. Integrating practical components in the programme "Design of marine power plants and systems" Responding to the increasing demand for marine engineers and designers of marine power plants and systems, in 2018 the Department of "Naval Architecture and Marine Engineering" accepted four students into the Master's degree programme "Design of Marine Power Plants and Systems". These four students, enrolled in the Master's degree programme, are part-time students. They currently work at Industrial Holding Bulgaria -"Ship Design". Due to the small number of students and the higher degree of flexibility, the programme was selected for testing in dual mode. During the first two semesters of their study, the students passed practical training at Industrial Holding Bulgaria -"Ship Design" in the following subjects: • "Computer systems for design of ships and marine equipment", part 1 -15 hours of lectures / 45 hours of exercises • "Design of systems and devices for ships and marine equipment" -30 hours of lectures / 15 hours of exercises • "Computer systems for design of ships and marine equipment", part 2 -60 hours of exercises The specialty of "Design of marine power plants and systems" is not under the specific regulations of the Maritime Administration Executive Agency. This means that changes in the curriculum content are allowed, in accordance with the Law on Higher Education, the regulations and rules of activity of the Technical University of Varna, and the requirements of the industry. Integrating practical components in the programme "Naval Architecture and Marine Technology" According to the current curriculum, there are two practical activities: after the second semester the so-called "Introduction Practice" (30 hours), and after the 6th semester the "Specialized Practice" (60 academic hours, 2 ECTS). There are other subjects, such as Marine Piping Systems, Electrical Equipment of Ships and Marine Structures, Technical Safety, Structural Mechanics of Ships and Marine Structures, Welding of Marine Structures, and Strength and Structure of Ships, that include more than 500 hours of extracurricular activities. Based on this, the dual-study structure is organized in two phases: during the semesters at TU-Varna, and in the partner company during the summer vacation after the 6th semester. The practical training is held in the summer months after the 6th semester in BULNAS (Bulgarian National Association of Shipbuilding and Shiprepair) companies with which TU-Varna has concluded partnership agreements. The practice starts with the student's application, followed by approval by the company. In the summer vacation after the 6th semester there are in total 640 academic hours (480 astronomical hours), equal to 60 working days of eight hours each.
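The conversion between academic and astronomical hours stated above can be verified directly; the 45-minute academic hour is the usual convention, and it is also exactly what the paper's own figures imply (480/640 of 60 minutes).

```python
# Verify the stated practice workload for "Naval Architecture and Marine
# Technology" (one academic hour = 45 minutes, consistent with the text).
ACADEMIC_HOUR_MIN = 45

academic_hours = 640
astronomical_hours = academic_hours * ACADEMIC_HOUR_MIN / 60
print(astronomical_hours)             # 480.0, matching the text

working_days = astronomical_hours / 8  # eight-hour working day
print(working_days)                    # 60.0, matching the text
```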
The practice will be paid according to the company's conditions. This and all other conditions will be described in the corresponding agreement. Special training logbooks will be developed for the needs of the pilot implementation. All the necessary documentation -contracts, logbooks, reports, etc. -will be developed taking into account local conditions, based on good practices in the partner countries involved in the project. Pilot implementation of the dual study will be based on a voluntary choice by the students of the third-year course. OUTLOOK The described activities of curricula adaptation and implementation in the form of dual practice-integrated programmes aim to demonstrate the need for closer cooperation between education providers and business actors in Bulgaria and Romania. Although the dual model itself is known from the past in both countries, the connection between the stakeholders needs to be re-established and strengthened. The developed programmes described in this paper are understood as a pilot introduction of the dual study model at higher education level. At this stage, only a flexibilisation of the ongoing curricula within the frame of the existing legislation could be achieved, by integrating practical phases in the pilot programmes. The additional efforts of the dual students in comparison with those enrolled in the regular form of study could be demonstrated by using EU-recognised tools such as the Diploma Supplement. However, for the future development of the dual study model in Bulgaria and Romania, there is a need for political action and adjustment of the higher education legislation to officially recognise the dual form of study. Within the scope of the project DYNAMIC, examples of dual higher education solutions were created and shall serve to facilitate the dialogue with policy makers. CONCLUSIONS Skills shortages and rapid workplace change create the need for an agile workforce. To achieve this goal, higher education curricula should be more flexible and adaptive to current industrial needs. Close business-academia cooperation is expected to strengthen the employability of the graduates by providing them with improved knowledge, skills and motivation. The dual higher education model provides a solution for more responsive education and talent growth for the benefit of all stakeholders. ACKNOWLEDGEMENT This paper was created with the support of the European Commission. This publication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein. We thank our colleagues from the projects DYNAMIC -Towards responsive engineering curricula through europeanisation of dual higher education (588378-EPP-1-2017-1-DE-EPPKA2-KA) and EUDURE -European Dual Research and Education (01DS15017), who provided insight and expertise that greatly assisted the research, as well as for their active contribution and documentation of project results.
Homotypic cell membrane-cloaked biomimetic nanocarrier for the accurate photothermal-chemotherapy treatment of recurrent hepatocellular carcinoma Background Tumor recurrence after surgery severely reduces the survival rate of surgical patients. Targeting and killing recurrent tumor cells and tissues is extremely important for cancer treatment. Results Herein, we designed a nano-biomimetic photothermal-controlled drug-loading platform, HepM-TSL, with good targeting ability and immunocompatibility for the treatment of recurrent hepatocellular carcinoma. HepM-TSL can accurately target the recurrent tumor area with the aid of the cloaked homotypic cell membrane and release the chemotherapy drugs in a controlled manner. In vivo results have confirmed that HepM-TSL loaded with drugs and photosensitizer achieves the synergistic treatment of recurrent hepatocellular carcinoma with good therapeutic effect and slight side effects. Conclusion Accordingly, HepM-TSL provides a sound photothermal-chemotherapy synergistic strategy for the treatment of other recurrent cancers besides recurrent hepatocellular carcinoma. Background Hepatocellular carcinoma (HCC), the third leading cause of cancer-related mortality worldwide, is a common malignant tumor that seriously endangers human health [1-3]. HCC is often not diagnosed until advanced stages, for which effective therapies are lacking [4-6]. At present, partial hepatectomy is a relatively curative treatment, used preferentially for primary HCC patients, which can effectively treat the tumor and improve the survival rate of patients [2, 7-9]. Regrettably, 70-80% of patients undergo tumor recurrence within 5 years after surgery, greatly reducing the postoperative survival rate [10]. The high recurrence rate of HCC is an important issue in the treatment of liver cancer. As a result, the treatment of recurrent HCC is an urgent problem to be solved. So far, there are no consensus guidelines for treating patients with recurrent HCC. The current treatment methods mainly include repeat hepatectomy (RH), radiofrequency ablation (RFA) and transarterial chemoembolization (TACE) [11-13]. In theory, the best ways to treat recurrent HCC are repeated hepatectomy and liver transplantation. However, due to practical obstacles including multicentric tumors, extrahepatic spread and inadequate normal liver reserve, repeated hepatectomies are available only for selected patients. TACE and RFA may lead to small survival benefits [14-17]. In general, the therapeutic efficacy of single therapy is still dismal. The combination of photothermal therapy and chemotherapy brings effective treatment due to the synergistic effect [18, 19]. Even so, problems including low targeting and low delivery efficiency of drug and photosensitizer still exist in current combination therapy [20-24]. Therefore, it is important to design a nano-drug delivery platform that has good delivery and controlled-release properties for chemotherapy drugs and photothermal agents, as well as precise targeting of the tumor area. To this end, the use of the homotypic cancer cell membrane as the cloak of a nano-drug delivery platform is one effective strategy [25-31].
Once the nano-drug delivery platform is enveloped with the homotypic cancer cell membrane, the loaded chemotherapy drugs and photothermal agents will be released controllably in the tumor area due to the homotypic targeting ability of the cancer cell membrane, so as to improve the synergistic efficacy of photothermal therapy and chemotherapy. Herein, in order to effectively treat recurrent HCC through the combination of photothermal therapy and chemotherapy, we designed a drug delivery platform using the homotypic cancer cell membrane as the cloak and realized the synergistic treatment of recurrent HCC with good therapeutic effect and negligible side effects. As shown in Scheme 1, the nano-drug delivery platform HepM-TSL was constructed from thermosensitive liposome (TSL) vesicles coated with the HCC cell membrane, and the chemotherapy drug (doxorubicin, Dox) and the photosensitizer (indocyanine green, ICG) were encapsulated into HepM-TSL, denoted ICG-Dox-HepM-TSL. With the help of the homotypic HCC cell membrane, ICG-Dox-HepM-TSL can escape the immune system and precisely target the recurrent HCC area. The encapsulated Dox and ICG could be controllably released in the tumor area when decomposition of the TSL was induced by laser irradiation, and photothermal-chemo synergistic therapy was achieved. The in vivo results proved that the recurrent HCC in mice drastically reduced with the therapy of ICG-Dox-HepM-TSL, accompanied by slight side effects on normal organs and tissues. In a word, the nano-drug delivery platform developed in this work can target the tumor area precisely and release the drug in a controlled manner, making the combination of photothermal therapy and chemotherapy effective, which is expected to provide a basis for the treatment of recurrent tumors.

Scheme 1 The design strategy of the nano-drug delivery platform ICG-Dox-HepM-TSL

Preparation of TSL nanoparticles The blank heat-sensitive liposome nanoparticles were placed in a dialysis bag (3500 Da) and dialyzed against secondary water. After 8 h, the pH of the internal and external aqueous phases was adjusted so that the pH of the external aqueous phase was 7.8. Then, the chemotherapeutic drug Dox, the photosensitizer ICG and the thermosensitive liposome nanoparticles were added to the liposome solution at a ratio of 1:1:10 and mixed evenly; the mixed solution was then incubated in a 39 °C water bath for 30 min. The mixture was subsequently placed in a dialysis bag for 24 h to dialyze off the free chemotherapeutic drug and photosensitizer. Preparation of cell membrane-cloaked TSL nanoparticles 2 mL of TSL nanoparticles (1.0 mg/mL) was mixed with 1 mL of HepG2 cell or L02 cell membrane vesicles (0.5 mg/mL). The mixture was then sonicated for 10 min (40 kW). The cell membrane-cloaked TSL nanoparticles were collected by centrifugation at 14,000 rpm for 10 min at 4 °C, and the supernatant was discarded. The cell membrane-cloaked TSL nanoparticles were resuspended in 3 mL of secondary water. The concentration of Dox in the stock liquid was 41.32 μg/mL. Flow cytometry analysis Flow cytometry was used to assess the in vitro therapeutic effect of ICG-Dox-HepM-TSL. HepG2 cells were seeded and cultured for 24 h in 2 mL of DMEM (10% FBS). After the supernatant was discarded, the HepG2 cells were incubated with ICG-Dox-HepM-TSL, ICG-Dox-TSL or free ICG-Dox for 4 h.
The incubation buffer was prepared by diluting the stock solution: 24.2 μL of ICG-Dox-HepM-TSL/ICG-Dox-TSL was mixed with 975.8 μL of DMEM containing 10% FBS, so that the final concentration of Dox in the incubation solution was 5 μg/mL. The concentration of Dox was kept the same in ICG-Dox-HepM-TSL, ICG-Dox-TSL and free ICG-Dox. The above cells were divided into two groups, one of which was irradiated with near-infrared light while the other served as a blank control. After the incubation buffer was discarded, the cells were trypsinized (EDTA-free), collected by centrifugation at 800 rpm for 5 min and washed thrice with PBS (pH 7.4). Finally, the Annexin V-FITC/PI Apoptosis Detection Kit was used to stain the cells, and an ImageStreamX multispectral imaging flow cytometer (Amnis Corporation) was used to examine their apoptosis. The flow cytometry data were analyzed using IDEAS software. In vivo tumor image BALB/c nude mice aged 4 to 6 weeks and weighing 15-20 g were used. The mice were housed in cages (5 per cage) and regularly fed rat chow and water. In order to establish a solid HCC tumor subcutaneously in nude mice, 5 × 10^6 HepG2 cells were injected subcutaneously into the flank region of the nude mice. When the tumor volume reached 100-200 mm^3, the tumors were surgically removed and 10 mm^3 of tumor tissue was retained to simulate tumor recurrence. After 1 week of recovery, the mice were randomly divided into 3 groups and injected via the tail vein with ICG-Dox-HepM-TSL or its counterparts at regular intervals. The drug dose was the same in each group (5 mg/kg mouse). The concentration of Dox in the ICG-Dox-HepM-TSL and ICG-Dox-TSL nanoparticles was kept the same. Group 1 was injected with 100 μL of PBS with or without near-infrared irradiation and served as the control group, Group 2 was injected with ICG-Dox-TSL solution with or without near-infrared irradiation, and Group 3 was injected with ICG-Dox-HepM-TSL solution with or without near-infrared irradiation. 24 h after tail vein injection, the mice were irradiated with NIR for 5 min. After the above groups of mice were treated for 13 days, the nude mice were subjected to the live imager and the photothermal imager, and fluorescence imaging of Dox in the nude mice was collected. Then, 24 h after the final injection, the nude mice were sacrificed and blood was collected by cardiac puncture. Dox fluorescence imaging was performed on ex vivo tissue from the main organs (heart, liver, spleen, lung and kidney) and tumors. H&E staining was performed on the main organs and tumor tissues. The collected blood was then centrifuged at 3000 rpm for 10 min and the serum was collected. The serum was dripped into a 96-well plate, and the absorbance of alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), blood urea nitrogen (BUN) and serum creatinine (Cre) was measured with an ELISA microplate reader according to the kit instructions. The experiment was repeated three times, and the data are shown as the mean ± SD. The body weights and tumor volumes of the mice were measured during treatment. All animal experiments were carried out according to the Principles of Laboratory Animal Care (People's Republic of China) and the Guidelines of the Animal Investigation Committee, Biology Institute of Shandong Academy of Science, China.
Hemolysis assay Blood from BALB/c mice was centrifuged at 4000 rpm for 5 min and the supernatant was discarded. The erythrocytes were washed with PBS three times until the supernatant became clear and transparent. Finally, the erythrocytes were resuspended in PBS and diluted to 2 v/v%. The erythrocytes were then mixed with different concentrations of ICG-Dox-HepM-TSL, ICG-Dox-TSL, ICG, Dox, and Tween-80 for 4 h at 37 °C. After 4 h, the mixtures were centrifuged at 1500 rpm for 15 min, and the supernatant was collected and its absorbance (A_sample) was measured at 540 nm with an ELISA microplate reader. Erythrocytes incubated in deionized water served as the positive control for complete hemolysis (A_100), and erythrocytes incubated in PBS served as the negative control (A_0). The hemolysis rate of the experimental group was calculated as follows: hemolysis (%) = (A_sample − A_0)/(A_100 − A_0) × 100. The experiment was repeated three times, and the data are shown as the mean ± SD. Statistical analysis All the statistical data were analyzed using SPSS Statistics software, deriving the standard deviation and applying the one-way ANOVA test and the Bonferroni test. A p-value of 0.05 was taken as the level of significance, and the data were labeled with (*) for P < 0.05 and (**) for P < 0.01. Each experiment was conducted in triplicate (n = 3). Results and discussion Preparation and characterization of HepM-TSL TSL nanoparticles were first synthesized, and the HepM-TSL nanoparticles were prepared with HepG2 cell membranes as the cloak. Cell membranes were obtained from the HepG2 cells according to previous literature [32, 33]. The TSL nanoparticles, HepM-TSL nanoparticles and HepG2 cell membrane were characterized with various approaches (Figure 1a, b and Additional file 1: Fig. S1). The zeta potential of HepM-TSL changed considerably compared to that of TSL (Fig. 1c). The protein composition of HepM-TSL was verified with gel electrophoresis (Fig. 1d); the membrane protein profile of HepM-TSL was similar to that of the HepG2 cell membrane vesicles, confirming that the HepG2 cell membrane was retained intact during the preparation procedure. The western blot (WB) analysis results (Fig. 1e) illustrated that the main cellular adhesion molecules, including galectin-1, galectin-3 and CD47, were enriched on the surface of HepM-TSL, while the main intracellular nuclear and mitochondrial markers, i.e., histone H3 and COXIV, were scarcely present on the HepG2 cell membrane. The results in Fig. 1d, e confirmed the selective retention of the membrane on the surface of HepM-TSL. The photothermal effects of the HepM-TSL with ICG were investigated by measuring the elevated temperatures of their suspensions (50 μg mL−1) under 808 nm NIR laser irradiation (1 W cm−2, 1.41 W cm−2, 720 s). As the power increased, the final temperature of the HepM-TSL loaded with ICG was elevated to nearly 60 °C (the highest final temperature) (Fig. 1f), reaching the temperature required for heat killing of the tumor, which means that the TSL with ICG can efficiently convert NIR laser energy into heat. The stabilities of the HepM-TSL and TSL nanoparticles were measured using dynamic light scattering (Fig. 1g). After 12 days, the particle size of HepM-TSL hardly changed.
In summary, these results proved that the HepG2 cell membrane-coated thermosensitive liposome was prepared successfully, with excellent stability and photothermal performance. Validating the homologous targeting ability of HepM-TSL The targeting of tumor cells by ICG-Dox-HepM-TSL relies on the homotypic aggregation between homologous tumor cells. The targeting of ICG-Dox-HepM-TSL to tumor cells was studied, and the results are shown in Fig. 2. Hepatoma cells (HepG2 cells) and normal hepatocytes (L02 cells) were incubated with ICG-Dox-HepM-TSL, ICG-Dox-TSL nanoparticles or PBS for 4 h and then characterized with CLSM. The fluorescence intensity of HepG2 cells incubated with ICG-Dox-HepM-TSL was significantly stronger than that of the cells treated with ICG-Dox-TSL (Fig. 2a-c). As for the L02 cells, the fluorescence intensities showed little difference between the cells incubated with ICG-Dox-HepM-TSL and ICG-Dox-TSL (Fig. 2b-d). The homotypic aggregation ability of ICG-Dox-HepM-TSL was verified by further experiments, as shown in Additional file 1: Fig. S2 and S3. HepG2, BGC-823, HeLa and MCF-7 cells were incubated with ICG-Dox-HepM-TSL for 4 h and then examined by flow cytometric assay and CLSM. It was shown that ICG-Dox-HepM-TSL precisely targeted HepG2 cells rather than the other cancer cells. The above results indicate that ICG-Dox-HepM-TSL can target HepG2 cells by virtue of the homologous aggregation ability of HCC cell membranes and can therefore target recurrent HCC tumor regions. Drug release and MTT assay As a drug carrier platform, the drug loading capacity and the cumulative drug release efficiency of ICG-Dox-HepM-TSL in the tumor area are important issues for the treatment of recurrent tumors. Herein, the Dox and ICG loading contents in ICG-Dox-HepM-TSL were determined with standard curves (Additional file 1: Fig. S4) and were 41.32 μg/mg and 34.83 μg/mg, respectively. Afterwards, the in vitro cumulative release profiles of Dox from ICG-Dox-HepM-TSL treated with laser were investigated. Under near-infrared laser irradiation, ICG converted the laser energy into a large amount of heat, breaking the thermosensitive liposome and releasing Dox. The cumulative release of Dox from ICG-Dox-HepM-TSL in the presence or absence of the near-infrared laser (808 nm) was studied (Fig. 3a, b). The results confirmed that, after three rounds of irradiation, the cumulative release of Dox reached 81%, far higher than that of the control group without laser irradiation (24%). As can be seen, the thermosensitive liposome in ICG-Dox-HepM-TSL enabled controlled release of Dox by the NIR laser. The therapeutic effect of ICG-Dox-HepM-TSL was evaluated by the MTT assay with HepG2 cells (Fig. 3c). The viability of HepG2 cells treated with ICG-Dox-HepM-TSL was almost 70% in the absence of NIR irradiation, but sharply decreased to 20% after NIR irradiation. Meanwhile, under NIR irradiation, cells treated with ICG-Dox-HepM-TSL showed much lower viability than those treated with ICG-Dox-TSL or free ICG-Dox. These results verified that ICG-Dox-HepM-TSL had excellent photothermal-chemotherapy efficiency against recurrent HCC tumor cells. In vitro therapeutic effect In order to further verify the therapeutic effect of ICG-Dox-HepM-TSL in the treatment of cancer in vitro, HepG2 cells were incubated for 4 h with ICG-Dox-HepM-TSL, ICG-Dox-TSL, free ICG-Dox or PBS in the presence or absence of near-infrared laser irradiation.
Then the HepG2 cells were stained with Annexin V-FITC/PI and subjected to flow cytometry analysis (Fig. 4). ICG-Dox-HepM-TSL strongly induced apoptosis of the HepG2 cells after exposure to the near-infrared laser, and the apoptosis rate of HepG2 cells reached 52.3% (Fig. 4a, upper right quadrant, annexin V+/PI+), while the HepG2 cells treated with ICG-Dox-HepM-TSL without laser irradiation or treated with ICG-Dox-TSL were hardly in the late apoptotic stage (Fig. 4a, b, upper right quadrant). These results indicate that ICG-Dox-HepM-TSL has an excellent targeting effect on HepG2 cells and that, under near-infrared laser illumination, both ICG and Dox could be released controllably, resulting in an excellent in vitro therapeutic effect. In vivo tumor image and antitumor effect The in vivo experiments further proved that the targeting and combination therapy of ICG-Dox-HepM-TSL were prominent (Fig. 5, Additional file 1: Fig. S5). The accumulation of ICG-Dox-HepM-TSL at the tumor sites of nude mice bearing recurrent HepG2 tumors was investigated by fluorescence imaging of Dox and photoacoustic imaging 13 days after the intravenous injection, with or without NIR irradiation (Fig. 5a, b, g, h). The visual images of the extracted tumors and the tumor weight histograms showed the accurate photothermal-chemotherapy therapeutic effect of ICG-Dox-HepM-TSL on the recurrent tumor (Fig. 5c, d, i, j). Moreover, the volume of the recurrent tumor in the nude mice treated with ICG-Dox-HepM-TSL under irradiation decreased by approximately 70%, while an obvious increase was observed in the other groups (Fig. 5e-k). A further study of organ damage is shown in Fig. 6. Fluorescence imaging and H&E staining analysis of the main internal organs and tumor tissues of the mice showed that the internal organs of the mice in the ICG-Dox-HepM-TSL group were less damaged. Hemocompatibility was examined by incubating erythrocytes with ICG-Dox-HepM-TSL, ICG-Dox-TSL, ICG, Dox and Tween 80 at gradient concentrations (Additional file 1: Fig. S6). The results indicated that, compared to the Tween 80 controls (a commercial excipient intended for injectable use), ICG-Dox-HepM-TSL exhibited minimal hemolysis across all tested concentrations and had better hemocompatibility. During the clinical use of chemotherapy drugs, the main side effects are due to their cumulative and dose-dependent hepatotoxicity and nephrotoxicity. Severe renal side effects significantly increase the levels of blood urea nitrogen (BUN) and creatinine (Cre), while increased levels of alanine aminotransferase (ALT), aspartate transaminase (AST) and alkaline phosphatase (ALP) indicate serious hepatotoxicity. The blood biochemical indexes ALT, AST, ALP, BUN and Cre in plasma taken from recurrence-tumor mice were measured 24 h after the last injection (Fig. 7). We found that there were no significant differences between the ICG-Dox-HepM-TSL groups and the PBS control group in the blood biochemistry indexes (ALT, AST, ALP, BUN, Cre). Together, ICG-Dox-HepM-TSL was characterized by low toxicity, excellent biocompatibility and satisfactory therapeutic efficiency. Conclusions In summary, exploiting the homotypic aggregation and immune escape of cancer cells, we designed a nano-biomimetic photothermal-controlled drug-loading platform, ICG-Dox-HepM-TSL, in which the HCC cell membrane-cloaked thermosensitive liposome acted as the shell and the photothermal agent (ICG) and the chemotherapy drug (Dox) were the cargoes. ICG-Dox-HepM-TSL could target the recurrent tumor area with the help of the homotypic HCC cell membrane. Once excited with the infrared laser, ICG would generate heat; meanwhile, Dox was released in a controlled manner, resulting in the synergistic effect of photothermal therapy and chemotherapy on the recurrent HCC with little damage to normal tissues. This shows that HepM-TSL serves as a robust nanoplatform for recurrent HCC and provides a new strategy for the design of drug delivery platforms for the treatment of cancer recurrence.

Fig. 5 In vivo tumor image and antitumor effect. a-f Acquired from the HepG2 tumor-bearing nude mice that were intravenously injected with ICG-Dox-HepM-TSL, ICG-Dox-TSL and PBS under NIR irradiation (808 nm, 1 W cm−2). a Fluorescence image of HepG2 tumor-bearing nude mice 13 days after the intravenous injection of ICG-Dox-HepM-TSL and its counterparts. b Photos of the tumors extracted from the nude mice in (a). c Photoacoustic imaging of tumor sites in HepG2 tumor-bearing nude mice in (a). d Tumor weights of the nude mice after therapy. e Quantitative results of the HepG2 tumor relative volumes during therapy. f Body weights of the nude mice during therapy. g-l The corresponding data of (a-f), acquired from the HepG2 tumor-bearing nude mice that were intravenously injected with ICG-Dox-HepM-TSL, ICG-Dox-TSL and PBS without NIR irradiation.

Fig. 6 In vitro fluorescence images and H&E staining analysis of the major organs and tumor tissues extracted from the nude mice bearing the recurrent HepG2 tumor 13 days after the intravenous injection of ICG-Dox-HepM-TSL and its counterparts under NIR irradiation (808 nm, 1 W cm−2) (a) and without NIR irradiation (b).

Fig. 7 Blood biochemistry data including liver-function markers: a ALP, b ALT, c AST, and kidney-function markers: d BUN, and e Cre. The levels were measured in serum collected from recurrent HepG2 tumor-bearing nude mice after therapy under NIR irradiation (808 nm, 1 W cm−2).
Posterior capsular radial sign: a novel method to confirm anterior vitreous cortex resection in phacovitrectomy Background The main purpose of this paper is to introduce a method that can accurately locate the posterior capsule of the lens to facilitate a relatively complete resection of the anterior vitreous body. Methods A total of 51 patients, in an experimental group and a control group, were enrolled in this study. Phacoemulsification combined with vitrectomy was performed in all cases. After the cataract procedure was completed in the control group, the surgeon performed a conventional anterior vitrectomy on the operative eye. In the experimental group, anterior vitrectomy was performed according to the threadiness corrugation of the posterior capsule of the lens. During the operation, with the help of triamcinolone, two surgeons confirmed the resection of the anterior vitreous cortex; the best corrected visual acuity and intraocular pressure of all patients were recorded at 1 week, 1 month and 3 months after surgery. Results Fifty patients completed phacoemulsification combined with vitrectomy and follow-up; one patient in the experimental group was lost to follow-up. There was no significant difference in preoperative visual acuity between the two groups (t = 0.83, P = 0.25). Both groups had varying degrees of improvement in best corrected visual acuity at 1 week, 1 month and 3 months after surgery. Moreover, there was no significant difference in BCVA between the two groups at the three follow-up time points (t = -1.15, -1.65, -1.09; P = 0.53, 0.21, 0.23). After surgery, no significant complications were observed in any patient except two patients in the control group with temporary increases in intraocular pressure. Incomplete resection of the anterior vitreous cortex was observed in 2 patients in each group, with no significant difference (χ2 = 7.81, P > 0.05). Conclusion In cataract surgery combined with vitrectomy, threadiness corrugation appears in the posterior capsule of the lens and is an important sign for its localization. Anterior vitrectomy can be accomplished safely and effectively with the help of the threadiness corrugation, and the surgical effect is almost the same as that of traditional surgery. The method is especially suitable for beginners in vitreous surgery.
Background Minimally invasive vitrectomy combined with phacoemulsification and intraocular lens implantation has been widely used in the treatment of patients with cataract and vitreoretinal disease [1-5]. Under normal conditions, once the intraocular lens has been implanted, vitreous surgery is the next step. Anatomically, there is no definition that divides the vitreous body into anterior or posterior segments. Clinically, for some conditions, such as posterior capsular opacification that cannot be effectively treated by Nd:YAG laser, dislocation of the lens and malignant glaucoma, only the part of the vitreous body behind the posterior capsule of the lens needs to be removed, which is called "anterior vitrectomy". In combined operations, removing as much of the anterior vitreous body as possible can effectively prevent intraocular proliferation [6]. This has important implications for some diseases, such as proliferative diabetic retinopathy (PDR), giant retinal tear detachment and proliferative vitreoretinopathy (PVR), especially after silicone oil tamponade surgery [7-9]. It is well known that the emulsification and migration of silicone oil can cause many postoperative complications, including secondary glaucoma and corneal degeneration or decompensation [7, 10, 11]; even the migration of silicone oil into the skull has been reported [12]. Many clinical studies have suggested that silicone oil should be removed approximately 3 to 6 months after vitrectomy [13-16]. Residual silicone oil droplets attached to the posterior lens capsule after silicone oil removal are one of the factors that accelerate capsular opacification and impair postoperative visual acuity [17-20]. Reducing the residual silicone oil droplets under the posterior capsule of the lens can effectively delay its opacification [21]. This requires the surgeon to remove the anterior vitreous cortex as completely as possible during the procedure. For novice surgeons, the implantation of the intraocular lens and the polishing of the posterior capsule make it difficult to locate the anterior vitreous body. Based on long-term intraoperative observation, in this article we share a method that achieves relatively complete excision of the anterior vitreous body through localization of the posterior capsule of the lens.
Patients This retrospective study included 51 patients who had undergone phacoemulsification and intraocular lens implantation combined with vitrectomy in the Department of Ophthalmology, Tongji Hospital affiliated to Tongji University. The whole study was approved by the Ethics Committee of Tongji Hospital and conformed to the Declaration of Helsinki. All patients were informed of the purpose of the study and voluntarily signed informed consent before surgery. Both groups were diagnosed with cataracts and various fundus diseases, including vitreous hemorrhage, retinal detachment, and proliferative diabetic retinopathy. Patients with a history of cataract surgery, intravitreal injection of anti-VEGF drugs, trabeculectomy or drainage valve implantation were not included in this study. Before the operation, patients were routinely given general examinations such as electrocardiogram, routine blood examination, coagulation function, liver and kidney function, fasting plasma glucose, and blood pressure. None of the patients had a systemic disease that made them unable to tolerate surgery. Relevant ophthalmic tests included best corrected visual acuity (BCVA), intraocular pressure (IOP), slit lamp biomicroscopy, funduscopy, and ocular B-ultrasound. The surgical procedures were performed by the same ophthalmologist using a 25-gauge vitrectomy system (Constellation surgical system, Alcon Surgical Inc., Fort Worth, TX). After completing the steps of cataract surgery, 26 patients in the control group underwent routine anterior vitrectomy observed directly under the microscope. The experimental group of 25 patients underwent the same procedure; unlike in the control group, the threadiness corrugation of the posterior capsule of the lens was used as a landmark for locating the anterior vitrectomy during the operation. Surgery All procedures were performed by the same experienced ophthalmologist, and the whole operation was performed under retrobulbar anesthesia. The specific steps were as follows: a corneal incision and a lateral incision were made at the 11 and 3 o'clock positions at the corneal limbus. An appropriate amount of medical hyaluronan gel (Bausch & Lomb Inc., Shandong, China) was injected into the anterior chamber. After continuous circular capsulorhexis and multidirectional divide-and-conquer, the lens nucleus and cortex were removed by phacoemulsification. After polishing the posterior capsule, medical hyaluronan gel was injected into the anterior chamber again. An intraocular lens of the corresponding diopter was implanted in the capsular bag. It is important to emphasize here that the medical hyaluronan gel was aspirated and then reinjected into the anterior chamber, and the corneal incision was closed with a 10-0 nylon suture. Then, 25-gauge minimally invasive vitrectomy was performed. The infusion pressure was 25 mmHg, the cutting rate was 5000 cpm and the vacuum power was 300 mmHg. A "threadiness corrugation" (Fig. 1) can be seen in the central region of the posterior capsule, which is an important marker for locating the posterior capsule of the lens. This allows the surgeon to carefully remove the anterior vitreous body behind the lens, avoiding accidental breakage of the posterior capsule. After the anterior vitrectomy was complete, the operation mode was changed to negative pressure suction, and the cutter head was attached to the posterior capsule to verify whether any vitreous body remained below the posterior capsule. If "radial folds" (Fig. 2)
appear at this time, the anterior vitreous body has been completely removed. Conversely, if the whole posterior capsule is in a flapping state, a remnant of the vitreous cortex is still present. After anterior vitrectomy, injecting an appropriate amount of triamcinolone into the vitreous cavity can help complete the process of posterior vitreous detachment, and the surgeon can observe whether the anterior vitreous cortex has been excised. Statistical analysis Data in this study were analyzed using SPSS 22.0 statistical software (SPSS Inc., Chicago, IL, USA). All data conformed to a normal distribution and are expressed as the mean ± standard deviation (M ± SD). The t-test was used for the comparison of measurement data between groups, and chi-square tests were used to compare count data between groups. A P value less than 0.05 indicated a statistically significant difference. Characteristics A total of 25 patients were included in the experimental group, including 14 males and 11 females; 1 male patient was lost to follow-up. The average age of the patients in the experimental group was 57.17 ± 5.16 (range 59-78) years. The control group completed follow-up with 26 patients, including 12 males and 14 females. The average age of the patients in the control group was 59.22 ± 4.48 (range 61-80) years. All operations in the two groups were successfully completed, and no serious postoperative complications were found. The intraocular lens was implanted in the lens capsule, and there was no rupture of the posterior capsule of the lens in any of the patients. In the control group, 2 patients had a transient increase in intraocular pressure (IOP) shortly after the operation, and the intraocular pressure decreased to normal levels after treatment with brinzolamide eye drops (ALCON-COUVERUR n.v., Belgium) or carteolol hydrochloride eye drops (China Otsuka Pharmaceutical Co., Ltd.). There was no significant difference in age (P = 0.765), gender (P = 0.612), plasma glucose (P = 0.422), HbA1c (P = 0.185), hypertension (P = 0.741),
best corrected visual acuity (P = 0.283), or intraocular pressure (P = 0.456) between the two groups (Table 1).

Fig. 1 Shows that the "threadiness corrugation" appears in the posterior capsule of the lens after implantation of an IOL

BCVA and IOP The best corrected visual acuity and intraocular pressure were recorded in all patients before surgery and at 1 week, 1 month and 3 months after surgery. Before surgery, the BCVA of the two groups was 0.93 ± 0.73 and 0.90 ± 0.10, with no statistically significant difference (t = 0.83, P = 0.25). All patients in the two groups achieved varying degrees of improvement in BCVA after the operation. There was no significant difference in visual acuity between the two groups at any time after surgery. Similarly, there was no significant difference in IOP between the two groups at any time point after surgery (Table 2). Verification with triamcinolone In view of the fact that there is still no effective method to quantitatively analyze the vitreous body, this study used triamcinolone to visually assess the resection of the anterior vitreous cortex. After the surgeon excises the anterior segment of the vitreous body under the posterior capsule of the lens, triamcinolone acetonide is injected into the vitreous cavity. Both the surgeon and the assistant confirm the dispersion of triamcinolone in the vitreous cavity: if it settles completely to the bottom of the vitreous cavity, the clearance of the anterior vitreous cortex is relatively complete; if the clearance of the anterior vitreous cortex is not complete, triamcinolone is still visible in the vicinity of the posterior capsule of the lens (Fig. 3). In this study, we observed that 2 patients in each group still had a small remnant of the anterior vitreous cortex (Table 3; in the table, "+" means that the anterior vitreous cortex was removed completely, and "-" means that it remained). There was no significant difference (χ2 = 7.81, P > 0.05).

Fig. 3 Shows the dispersion of triamcinolone in the vitreous cavity from the observation of two doctors

Discussion Recent improvements in vitrectomy techniques have contributed to expanding the role of pars plana vitrectomy (PPV) in the management of certain vitreoretinal
diseases. In consideration of the complexity of vitreoretinal diseases and the unexpected conditions that may arise during surgery, novice surgeons need considerable time to achieve an acceptable success rate [22]. The study by Viola and colleagues [23] showed that beginners needed more than twice the operating time compared to experienced surgeons. Resection of the anterior vitreous cortex is particularly important during combined anterior and posterior segment surgery, especially for cases requiring intraoperative silicone oil tamponade. Residual silicone oil is a common postoperative complication of silicone oil removal. Even after repeated intraoperative gas-liquid exchange, it is difficult to prevent silicone oil or other impurities from hiding behind the posterior capsule of the lens. Residual and emulsified silicone oil droplets can migrate to the anterior chamber, leading to a series of postoperative complications, including secondary glaucoma, after-cataract and corneal decompensation. Studies [24, 25] have shown that intraoperative removal of as much of the anterior vitreous cortex as possible can reduce the adhesion and even the migration of silicone oil droplets. As mentioned above, vitrectomy is a complex ophthalmologic operation with a relatively long learning curve. Even with the help of the microscope, complete removal of the anterior vitreous cortex is not easy for novice surgeons. In this study, we used the threadiness corrugation that appears in the posterior capsule of the lens as a marker to guide the anterior vitrectomy. The principle of this method is relatively clear: during the phacoemulsification process, the entire posterior capsule of the lens becomes slack and tends to drift forward after the nucleus and cortex are aspirated. It is like pulling something heavy out of a plastic bag: the bag wrinkles due to the lack of support. At this point, a "threadiness corrugation" can be observed in the posterior capsule of the lens, which the surgeon can use to locate the posterior capsule and remove the anterior vitreous cortex. It should be noted that the medical hyaluronan gel should be thoroughly aspirated; otherwise, because of its supporting effect, the posterior capsule of the lens will be distended. In addition, some surgeons have a habit of polishing the anterior capsule. These procedures make the posterior lens capsule difficult to distinguish and thus easy to cut accidentally during anterior vitrectomy. The safety of the procedure also deserves attention: the negative pressure suction of the vitreous cutter needs to be gentle. Specifically, the vitreous cutter opening faces up, and the negative pressure suction mode is initiated close to the posterior capsule of the lens. When the anterior vitreous cortex is about to be completely excised, the entire lens capsule flutters and converges toward the vitreous cutter. Subsequently, radial folds of the posterior capsule of the lens indicate complete resection of the anterior vitreous body. Using this method, the surgeon and the assistant jointly confirmed that the posterior capsule of the lens was not ruptured in any patient in the experimental group.
This study focuses on introducing a method of locating the anterior vitreous cortex based on intraoperative observations. Additionally, the vitreous body cannot be specifically quantified by current ophthalmic examination methods, including intraoperative OCT, anterior segment OCT, ophthalmic B-ultrasound, and ocular ultrasound biomicroscopy. In this study, we could only use triamcinolone to determine whether any vitreous cortex remained. The results showed that 2 patients in each group still had residual anterior vitreous cortex, with no significant difference between groups. The limitation of this study is that the assessment of the residual vitreous body depends on the subjective judgment of the surgeon. Further research will need to explore more objective and quantitative indicators.

Fig. 2 Shows the "radial folds" of the posterior capsule of the lens when the anterior segment of the vitreous body behind the posterior capsule is excised

Table 1 Clinical characteristics of patients in the two groups

Table 2 BCVA and IOP after surgery

Table 3 The number of excisions of the vitreous anterior cortex
Predicting Long-Term Mortality after Acute Coronary Syndrome Using Machine Learning Techniques and Hematological Markers Introduction Hematological indices, including red cell distribution width and neutrophil to lymphocyte ratio, are proven to be associated with outcomes of acute coronary syndrome. The usefulness of machine learning techniques in predicting mortality after acute coronary syndrome based on such features has not been studied before. Objective We aim to create an alternative risk assessment tool which is based on easily obtainable features, including hematological indices and inflammation markers. Patients and Methods We obtained the study data from the electronic medical records of 5053 patients hospitalized with acute coronary syndrome during a 5-year period. The time of follow-up ranged from 12 to 72 months. A machine learning classifier was trained to predict death during hospitalization and within 180 and 365 days from admission. Our method was compared with the Global Registry of Acute Coronary Events (GRACE) Score 2.0 on a test dataset. Results For in-hospital mortality, our model achieved a c-statistic of 0.89 while the GRACE score 2.0 achieved 0.90. For six-month mortality, the results of our model and the GRACE score on the test set were 0.77 and 0.73, respectively. Red cell distribution width (HR 1.23; 95% CI 1.16-1.30; P < 0.001) and neutrophil to lymphocyte ratio (HR 1.08; 95% CI 1.05-1.10; P < 0.001) showed independent associations with all-cause mortality in multivariable Cox regression. Conclusions Hematological markers, such as neutrophil count and red cell distribution width, have a strong association with all-cause mortality after acute coronary syndrome. A machine-learned model which uses the abovementioned parameters can provide long-term predictions of accuracy comparable or superior to well-validated risk scores. Introduction The term acute coronary syndrome (ACS) covers several conditions, including non-ST-segment elevation acute coronary syndrome (NSTE-ACS) and ST-elevation myocardial infarction (STEMI). The common cause of these conditions is inadequate blood flow to the myocardium, which can be related to acute cholesterol plaque rupture or erosion and thrombus formation. These conditions have a similar presentation, and the most frequent symptom reported by patients is chest pain, which is one of the most common causes of presentation to the emergency room, accounting for up to 6% of emergency department attendances and 27% of medical admissions [1]. Current guidelines emphasize the usefulness of established quantitative risk scores for prognosis estimation [2], which is necessary for the adequate and cost-effective provision of evidence-based therapies. Increased systemic and local inflammation plays a crucial role in the pathophysiology of ACS. Various hematological indices have been reported to be associated with poorer prognosis or the occurrence of major adverse cardiac events after ACS [3]. These indices include the neutrophil to lymphocyte ratio (NLR) [4-6], platelet to lymphocyte ratio (PLR) [7], red cell distribution width (RDW) [8], and mean platelet volume (MPV). These studies brought evidence that such nonspecific markers of the inflammatory response are associated with the GRACE score [9]. Moreover, they can improve its discriminative capabilities [10, 11].
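For reference, the two ratio-based indices discussed above are computed directly from the routine complete blood count; the standard definitions (stated here for clarity, not quoted from this paper) are

\[ \mathrm{NLR} = \frac{\text{absolute neutrophil count}}{\text{absolute lymphocyte count}}, \qquad \mathrm{PLR} = \frac{\text{platelet count}}{\text{absolute lymphocyte count}} \]

Both ratios are dimensionless, since the numerator and denominator come from the same blood sample and are reported in the same units (typically 10^9 cells per liter).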
Machine learning (ML) is a field of computer science that uses various computational algorithms to give computer systems the ability to progressively improve performance on a specific task with data, without being explicitly programmed. This term describes a vast spectrum of computational methods, many of which, like logistic regression, have been used extensively in medical sciences for many years [12]. The most state-of-the-art algorithms are currently the subject of intense research and have recently been shown to perform on par with trained ophthalmologists in detecting diabetic retinopathy in eye fundus images [13], to classify skin lesion images automatically with dermatologist-level accuracy [14], and to detect hip fractures from frontal pelvic X-rays [15]. In our previous research, we successfully used ML techniques to predict in-hospital mortality [16]. In this study, we attempt to develop a new tool for long-term risk assessment following ACS and compare its performance with the GRACE 2.0 model. In contrast to existing risk scores, our tool relies on laboratory tests (including hematological indices) and simple measurements (including blood pressure and heart rate), rather than clinical features. The rationale for such an approach is the proven association of the inflammatory response with ACS outcomes. Methods We retrospectively examined the electronic medical records of patients admitted to a cardiology department between January 2012 and December 2016 to select all patients hospitalized because of an ACS. The analyzed group comprised patients who had their diagnosis confirmed by a cardiologist according to ESC guidelines [2]. 5053 individual patients qualified (1522 with STEMI, 857 with NSTEMI, and 2674 with unstable angina). We analyzed the descriptions of the electrocardiograms in the patients' medical records to identify patients who had an ST-segment elevation (n = 1522) or any ST-segment deviation, elevation or depression (n = 4420), according to current guidelines. We obtained information on all-cause death or survival and on the exact date of death from the national death registry one year after the end of data collection. Patients who had incomplete records or had no blood sample taken during hospitalization were excluded from the study. If a patient was admitted with ACS more than once in the analyzed period, only the last hospitalization was considered. All patients were treated according to current guidelines and the physicians' therapeutic decisions. Each patient had a venous blood sample taken within 30 minutes from admission. The complete blood count and hematological parameters were analyzed using an automated blood cell counter CD-RUBY (Abbott, Lake Bluff, Illinois, USA). Biochemical parameters were measured using a COBAS 6000 (Roche, Basel, Switzerland). The results of the laboratory tests as well as the clinical information were obtained retrospectively from the electronic medical record (EMR) system at the time of follow-up. During the period of data collection, both troponin I and troponin T assays were used; therefore, we expressed troponin elevation as a ratio (actual value divided by the norm). Statistical analyses were performed using the RStudio software. The Shapiro-Wilk test was used to test the variables' distributions for normality. Most of the analyzed variables did not have a normal distribution, so the median and interquartile range were selected as measures of central tendency. The univariable two-tailed Mann-Whitney U test was used to compare numerical features.
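The univariable analyses above were run in R/RStudio; purely as an illustration, an equivalent check in Python with scipy might look like the following sketch, where the data frame df, its column names, and the file name are hypothetical and not taken from the paper.

```python
import pandas as pd
from scipy import stats

# Hypothetical data frame: one row per patient, numeric feature columns
# plus a binary outcome column 'death' (1 = died, 0 = survived).
df = pd.read_csv("acs_patients.csv")

# Shapiro-Wilk test for normality of a single feature (here: RDW).
w_stat, p_normal = stats.shapiro(df["rdw"].dropna())
print(f"Shapiro-Wilk: W={w_stat:.3f}, p={p_normal:.4f}")

# Because most features were non-normal, group comparisons use the
# two-tailed Mann-Whitney U test instead of the t-test.
died = df.loc[df["death"] == 1, "rdw"].dropna()
survived = df.loc[df["death"] == 0, "rdw"].dropna()
u_stat, p_value = stats.mannwhitneyu(died, survived, alternative="two-sided")

# Medians with interquartile ranges, the descriptive statistics used in the paper.
q1, med, q3 = died.quantile([0.25, 0.5, 0.75])
print(f"RDW in non-survivors: {med:.1f} (IQR {q1:.1f}-{q3:.1f}), "
      f"Mann-Whitney p={p_value:.4f}")
```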
We created a multivariable Cox regression model using variables with statistically significant differences (P value < 0.05) in the univariable analysis. 310 observations were excluded from the analysis because of missing values. We did not use automated stepwise backward elimination; instead, all variables which were suspected to influence the outcome were entered into the model [17]. The list of variables used in the Cox regression model is presented in Table 1. The proportional hazards assumption was verified using Schoenfeld residuals. To assess the time-varying effects of the selected variables, Aalen's additive model was used. A P value < 0.05 indicated statistical significance. The results are presented as hazard ratios with 95% confidence intervals (CI). The probability of death during hospitalization and within 6 and 12 months from admission according to the GRACE 2.0 score was calculated using the model coefficients published on the GRACE project website (https://www.outcomesumassmed.org/grace/). A Python package was developed to allow for the batch calculation of the GRACE 2.0 death probability based on the relevant clinical and laboratory features. As information about Killip class and creatinine level was available for almost all patients, the full version of the algorithm was used. In 84 cases, missing data did not allow for the calculation of the GRACE probability. Table 1 presents and compares the variables analyzed in the Cox regression model as well as the variables used by the ML model and for the calculation of the GRACE score. Machine Learning Methods Model selection, optimization, and fitting were performed using Python 3.6 and the scikit-learn software package. We used 4969 observations for training and evaluating the ML model; we excluded 84 observations where the variables necessary to calculate the GRACE score were missing, as presented in Table 1. The remaining missing values, which did not affect the calculation of the GRACE score, were imputed using the mean of all observations. The gradient-boosted tree algorithm was implemented using the xgboost [18] software package. One-fifth of the available data (n = 994) was put aside as a test set and not used for training. Observations for the test set were chosen randomly, but in a way that preserved the ratio of the positive to the negative class (death and survival). The ML classifier was optimized using the training data only (n = 3975), using 5-fold cross-validation. In this process, the training data was divided into 5 parts, and each part was in turn used to measure the performance of a classifier trained on the remaining parts. We measured the performance of the GRACE score and our model by calculating the areas under the Receiver Operating Characteristic (ROC) curves. The performance measurements during cross-validation were averaged and expressed as the mean ± standard deviation. Finally, the performance of both classifiers was compared by calculating the areas under the ROC curves on the test set, which was not used for training the ML model at all. This process was repeated in identical fashion for all analyzed endpoints: in-hospital death, 6-month death, and 12-month death.
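To make the evaluation protocol concrete, a minimal sketch of the described pipeline is given below: a stratified 80/20 split, mean imputation, an xgboost classifier, 5-fold cross-validated AUROC on the training portion, and a single final AUROC on the held-out test set. This is an illustration under stated assumptions rather than the authors' original code; the input files, the feature matrix X, the label vector y, and all hyperparameters are placeholders.

```python
import numpy as np
import xgboost as xgb
from sklearn.impute import SimpleImputer
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, train_test_split

# X: 2-D array of laboratory/measurement features (e.g. NLR, PLR, RDW, CRP,
# creatinine, age, blood pressure, ...); y: binary endpoint (1 = death).
X, y = np.load("features.npy"), np.load("labels.npy")  # hypothetical files

# Hold out one-fifth of the data, preserving the death/survival ratio.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Mean imputation of remaining missing values, fitted on the training data only.
imputer = SimpleImputer(strategy="mean").fit(X_train)
X_train, X_test = imputer.transform(X_train), imputer.transform(X_test)

# 5-fold cross-validated AUROC on the training portion.
aucs = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for train_idx, val_idx in cv.split(X_train, y_train):
    model = xgb.XGBClassifier(n_estimators=200, max_depth=3,
                              learning_rate=0.1)  # placeholder hyperparameters
    model.fit(X_train[train_idx], y_train[train_idx])
    val_pred = model.predict_proba(X_train[val_idx])[:, 1]
    aucs.append(roc_auc_score(y_train[val_idx], val_pred))
print(f"CV AUROC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")

# Refit on the full training set and evaluate once on the untouched test set.
final_model = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
final_model.fit(X_train, y_train)
test_auc = roc_auc_score(y_test, final_model.predict_proba(X_test)[:, 1])
print(f"Test AUROC: {test_auc:.3f}")
```

The same procedure would then be repeated with the label vector for each endpoint (in-hospital, 6-month, and 12-month death).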
Results The in-hospital mortality rate was 1.64% (n = 83); mortality within 6 months from admission was 5.87% (n = 297) and within a year from admission 7.85% (n = 397). 766 patients (15%) died during the period of the study (from January 2012 until the acquisition of the survival data in December 2017). The baseline clinical characteristics and laboratory test results according to survival status are presented in Tables 2 and 3. Some variables, including the presence of ST-segment elevation, troponin elevation, sodium level, and systolic blood pressure, did not meet the proportional hazards assumption. However, examination of Aalen's additive model indicated that these parameters have a high prognostic value shortly after admission that decreases over time. The results of the multivariable Cox regression analysis are visualized in the form of a forest plot in Figure 1.

Figure 1: Results of Cox regression. Hazard ratios are presented as black rectangles, and confidence level bands are presented as whiskers. The central vertical line indicates a hazard ratio of 1.

High RDW, NLR, monocyte count, creatinine level, prothrombin time, age, and heart rate, as well as low sodium and hemoglobin, were significantly associated with all-cause mortality in the multivariable model. Due to a large number of missing values for CRP and LDL levels, they were not considered for survival analysis, but we kept them in the machine-learned model because of their known association with ACS pathophysiology and outcomes [19]. Machine Learning Results The model based on gradient-boosted trees was trained using the following variables as input: troponin elevation ratio, NLR, PLR, RDW, CRP, platelet count, creatinine, hemoglobin, mean cell volume, sodium, prothrombin time, fibrinogen, age, neutrophil count, body mass index, systolic and diastolic blood pressure, heart rate, and sex. The variables were selected to maximize the model's performance, but clinical parameters, including data from the patient's medical history and physical examination, were not included in the model. The aim was to create a model that could use data that is routinely collected in the EMR system for all patients. The model's performance metrics are summarized in Table 4. Figure 2 presents the Receiver Operating Characteristic curves for our classifier and the GRACE score 2.0 for the detection of in-hospital, 6-month, and one-year mortality. Visual inspection of the Receiver Operating Characteristic (ROC) curves and analysis of the areas under these curves (AUROC) reveal that the results of our model and the GRACE score 2.0 are similar. GRACE performed slightly better for short-term outcomes (AUROC 0.90 vs. 0.89), while our model scored better for long-term outcomes (AUROC 0.77 vs. 0.73 and 0.72 vs. 0.71 for 6-month and one-year mortality, respectively). Discussion The results of the survival analysis using Cox regression confirm the findings from numerous studies regarding the association of hematological indices, including RDW, NLR, and neutrophil count, with short- and long-term prognosis after acute coronary syndrome [3]. A low-grade inflammatory process plays an important role in the formation and subsequent destabilization and rupture of the atherosclerotic plaque [20]. In the multivariable Cox regression model, RDW had a strong association with all-cause mortality (HR 1.22, 95% CI 1.17-1.28). These results are consistent with the findings from other studies that identified RDW as a prognostic marker in cardiovascular diseases and heart failure [21] and also as a predictor of all-cause mortality [22]. It has been suggested that patients with increased RDW have a lower oxygen supply at the tissue level due to decreased red blood cell deformability and impaired blood flow through the microcirculation [23]. Our results also seem to confirm the findings
from other studies [24] on the impact of admission anemia on long-term prognosis in ACS. Our model performed better than the GRACE score for medium- and long-term prognosis. However, the difference in performance was small, and the calculations of the GRACE scores in our study were made based on retrospective data and could be inaccurate in some cases. This result needs to be confirmed in prospective validation. The better long-term performance of our model might be related to the fact that it uses inflammation biomarkers. The underlying inflammatory process is known to be related to atherosclerosis, but the currently used risk scores do not take advantage of this fact. The GRACE score 2.0 has been extensively validated in various populations and proved to have superior discriminatory accuracy for predicting major adverse cardiac events when compared to other risk assessment tools [25, 26]. However, the adoption of its use in clinical settings has been reported to be unsatisfactory. One of the reasons for this situation is the necessity of using an external application which requires manual data input and consumes extra time [27]. Studies have shown that the integration of risk assessment scores into IT solutions resulted in higher compliance [28]. With all the necessary data available in the electronic medical record system, after integration into existing software, our solution can provide risk assessment without any additional input from the physician. The result could then trigger relevant alerts, helping to select the highest-risk patients. Several studies have investigated the application of machine learning techniques to risk stratification in ACS. Most of these studies used data collected retrospectively from a large number of electronic medical records, similarly to our study [29, 30]. The models they created, however, were based on numerous clinical features, and it is difficult to reproduce their results and apply their solutions in a different setting. For instance, VanHouten et al. reported that their machine-learned model could outperform the GRACE score. They used numerous sparse features, including the full blood count in most patients, and their classifier achieved an area under the receiver operating curve of 0.85. Our model yields comparable performance, but thanks to using a smaller number of interpretation-free features, it is easier to apply and validate externally. Study Limitations In our study, we retrospectively analyzed the electronic medical records of patients hospitalized over several years. This allowed for the rapid development of an ML algorithm but is also a significant limitation. Data stored in medical records are often incomplete, complex, messy, and can be biased [31]. The naive use of raw medical records as input for either inferential statistics or machine learning models can lead to false conclusions. A good example of such a situation is the study by Fine et al., in which patients who were admitted with severe community-acquired pneumonia and died in the emergency department had very little information stored in their medical records. As a result, some deceased patients appeared healthier than those who survived [32]. The most concerning limitation of our study is related to variables that were stored in the medical records as unstructured data in the form of physicians' notes (e.g., descriptions of electrocardiograms). When designing our classifier, we only intended to use features that are available in the medical records as single measurements.
Clinical features, including the results of physical examination, the patient's symptoms, and medical history, were not considered. This approach is different from those proposed by many other studies exploring the application of machine learning methods in predicting ACS outcomes [29,30], where all the features available in the EMR were used. Nevertheless, determining the presence of ST-segment deviation was necessary for calculating the GRACE score. We did not analyze the electrocardiograms directly, and the classification of some ECG descriptions was not obvious. Therefore, the calculations of the GRACE score were especially prone to bias. To make a justified statement on the performance of our classifier vs. any other existing score, it is necessary to evaluate it prospectively, and the scores should be calculated on the day of admission to the hospital. The follow-up in our study was limited to death or survival status. This is also an important limitation because it was not possible to assess the occurrence of major adverse cardiac events other than all-cause death. Many patients suffered from recurrent ACS, which we did not analyze in this study. Instead, we only took into account the last available hospitalization. Another important limitation is related to the use of the Cox regression model. Some of the variables which we used in this model did not meet the proportional hazards assumption. Nevertheless, after analyzing different regression models, we concluded that the predictive value of ST-segment elevation, troponin elevation, sodium levels, and systolic blood pressure may decrease over time and that it is worth presenting the results in this form. Finally, although the study included patients hospitalized over many years, this dataset is still modest in terms of machine learning model development. The performance of our classifier varied slightly, depending on which observations were chosen randomly for the test set. In contrast, the GRACE score has been validated on over 100,000 patients worldwide, so the evidence supporting its usefulness is strong. We do not aim to prove that our method is better than any existing well-validated risk score, but to present a new approach to long-term risk prediction in ACS based on different analytic methods and different variables than existing scores. Conclusions Hematological markers of inflammation show a strong correlation with the outcomes of ACS, and they can be successfully incorporated into numerical models designed to support clinical decisions. Our model predicted long-term mortality better than the GRACE score, but the difference might not be significant, and it requires prospective validation. The potential of such a solution lies in taking advantage of the easily available hematological biomarkers and in eliminating the need to enter the results of clinical examination or the past medical history into the model.
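Relatedly, the proportional-hazards diagnostic mentioned in the limitations can be reproduced with standard survival tooling. A minimal sketch assuming the lifelines package; the column names are hypothetical and this is not the authors' code.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("survival_data.csv")    # hypothetical follow-up extract
cph = CoxPHFitter()
cph.fit(df, duration_col="followup_days", event_col="death")
cph.print_summary()                       # hazard ratios with 95% CIs

# Schoenfeld-residual-based checks; variables violating the proportional
# hazards assumption (e.g., ST elevation, sodium) would be flagged here.
cph.check_assumptions(df, p_value_threshold=0.05)
```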
2019-03-11T17:24:37.376Z
2019-01-30T00:00:00.000
{ "year": 2019, "sha1": "2442aa499c9d8bb7eb001bfd6b50e3fc6f886097", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1155/2019/9056402", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c4d837b0f50e5abe03edf3d087e0fa1632690d93", "s2fieldsofstudy": [ "Medicine", "Computer Science" ], "extfieldsofstudy": [ "Medicine" ] }
105214957
pes2o/s2orc
v3-fos-license
Anionic polycondensation and equilibrium-driven monomer formation of cyclic aliphatic carbonates The current work explores the sodium hydride mediated polycondensation of aliphatic diols with diethyl carbonate to produce both aliphatic polycarbonates and cyclic carbonate monomers. The length of the diol dictates the outcome of the reaction; for ethylene glycol and seven other 1,3-diols with a wide array of substitution patterns, the corresponding 5-membered and 6-membered cyclic carbonates were synthesized in excellent yield (70–90%) on a 100 gram scale. Diols with longer alkyl chains, under the same conditions, yielded polycarbonates with an Mw ranging from 5000 to 16 000. In all cases, the macromolecular architecture revealed that the formed polymer consisted purely of carbonate linkages, without decarboxylation as a side reaction. The synthetic design is completely solvent-free, without any additional post-purification steps and without the necessity of reactive ring-closing reagents. The results presented within provide a green and scalable approach to synthesize both cyclic carbonate monomers and polycarbonates with possible applications within the entire field of polymer technology. Introduction The production of valuable monomers through chemical recycling is considered central for a future sustainable society, as the methodology retains the material value and closes the loop for polymeric materials. [1][2][3][4] If these processes are designed accordingly, multiple green chemistry principles including waste prevention, atom economy, and use of less hazardous chemicals can all be met at once. 5,6 A prime example of this process can be found in the industrial production of lactide from poly(lactic acid). 7,8 The ability of a polymer to reconvert back to its monomer relies on the thermodynamics of the reaction. Within this, the chemical structure of the repeating unit dictates both the feasibility and reaction conditions necessary to reconvert the polymer back to its monomeric form. [9][10][11][12][13] The equilibrium behavior is clearly seen during the ring-opening polymerization of cyclic monomers, where residual monomer will be present regardless of the reaction time and catalytic system employed. [14][15][16][17] The implications of residual monomer in the material or in the reaction mixture depend on the end-use. Obviously, for polymer synthesis the amount of residual monomer should be as low as possible; however, for monomer synthesis the opposite is advantageous. The underlying reason for a system favoring or disfavoring polymerization is related to the thermodynamic features of the reaction, such as the entropic increase in the system that drives monomer formation. 18,19 Cyclic aliphatic carbonates constitute a class of monomeric building-blocks with applications covering the entire field of polymer science and also many more. [20][21][22] Depending on the carbonate's ring-size, very different features for polymerization are obtained. For instance, most 5-membered cyclic carbonates are inert towards ring-opening polymerization under conventional conditions [23][24][25] and are instead often used together with amines during isocyanate-free polyurethane synthesis. This is in sharp contrast to most of the 6-membered and larger cyclic carbonates, which propagate quickly under standard conditions and find applications ranging from refined biomedical polymers to bulk materials. 20,26,27
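For the polycondensation pathway that yields the Mw 5000–16 000 materials, the attainable chain length is governed by the classical Carothers relation; this equation is not stated in the source and is included here only as standard step-growth background:

$$\bar{X}_n = \frac{1}{1 - p}$$

where $\bar{X}_n$ is the number-average degree of polymerization and $p$ is the extent of reaction of the end-groups. Reaching, for example, $\bar{X}_n \approx 50$ requires $p \approx 0.98$, which is why the continuous removal of the ethanol condensate described in the procedure below is essential.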
The diversity in chemical structure of the cyclic carbonates relates to the well-developed and accessible ring-closing methodologies for an abundance of diols that can carry a wide range of functionalities and substituents. Classic ring-closing methodologies include phosgene and triphosgene, 28-30 CDI (1,1′-carbonyldiimidazole), 31-33 ethyl chloroformate, 26,34-37 enzymes 38 and many more. 39,40 The common denominator for all these ring-closing systems is that the entropic increase is achieved via dilution to drive the reaction towards ring-closure. Generally, during ring-opening polymerization there is a negative change in entropy, which means that ring-closing reactions may be induced by increasing the temperature, a reaction often referred to as ring-closing depolymerization (RCD). This concept is not new and was actually one of the first synthetic methodologies employed for the synthesis of these cyclic carbonates. 41 Inherently, RCD has several advantages, such as being solvent-free, using non-toxic ring-closing reagents, and being scalable and inexpensive (obviously depending on the diol), thus aligning well with the pursuit of sustainable chemistry. Nevertheless, reports on using this methodology for the synthesis of propagating cyclic carbonate monomers are comparatively scarce, and the reaction conditions employed lack coherency, making general conclusions on the robustness and thermodynamics of RCD difficult. [41][42][43][44][45][46] To shine some light on this methodology, we intend to evaluate a wide range of different diols under the same reaction conditions to determine which system features lead either to polycondensation or to ring-closing depolymerization. We aim to highlight ring-closing depolymerization as the "go-to" method for cyclic carbonate monomer synthesis in a sustainable, large-scale and inexpensive manner. The vision is that this will highlight the importance of cyclic aliphatic carbonates in a future society based on renewable resources. General oligomerization and controlled ring-closing depolymerization procedure The depolymerization method and setup have previously been reported. 47,48 For a typical depolymerization, 80 g of DEC (0.67 mol, 1.4 eq.) was charged into a 250 mL round-bottom flask equipped with a magnetic stirring bar. 1 g of NaH (5 mol% relative to hydroxyl groups) was added under nitrogen at room temperature. After a fine dispersion of NaH in DEC was formed, 37 g of 1,3-propanediol (0.48 mol, 1 eq.) was gradually added; once the slurry mixture turned colorless and transparent, the remaining diol was added. The temperature of the reaction vessel was then elevated to 120 °C and it was equipped with a distillation setup. Subsequently, the ethanol condensate was distilled off overnight. The reaction vessel was cooled to 60 °C and vacuum was applied to further remove the ethanol and unreacted DEC. The temperature was then gradually elevated from 60 °C to 140–200 °C. Nuclear magnetic resonance (NMR) The ¹H NMR (400.13 MHz) spectra were obtained from an Avance 400 (Bruker, USA) spectrometer at room temperature using CDCl₃ or DMSO-d₆ as solvent. Size exclusion chromatography (SEC) The number-average molar mass (Mₙ) and dispersity (Đ) of the acetic acid quenched oligomers prior to RCD were analyzed with a Verotech PL-GPC 50 Plus system, equipped with a PL-RI detector and two Mixed-D (300 × 7.5 mm) columns (Varian, Santa Clara). A flow rate of 1 mL min⁻¹ at 30 °C was used with chloroform as the mobile phase.
Toluene was used as the internal standard for flow rate fluctuation corrections. Polystyrene standards with a narrow mass distribution and a molecular weight of 160–371 000 g mol⁻¹ were used for calibration. The product from the condensation reaction between the longer diols and DEC was analyzed with a TOSOH EcoSEC HLC-8320GPC system equipped with an RI detector and two PSS columns (100 and 300), using N,N-dimethylformamide (DMF) with 0.01 M LiBr as the eluent. The analysis was conducted at 50 °C with a flow rate of 0.2 mL min⁻¹. The results were plotted against linear PMMA standards. Results and discussion Aliphatic polycarbonates, whether achieved via polycondensation or via ring-opening polymerization of cyclic carbonate monomers, have the potential to be a key material class in a closed-loop, sustainable future society. We have previously shown that the one-pot, two-step reaction setup illustrated in Fig. 1 is a feasible methodology for the synthesis of a six-membered functional cyclic carbonate monomer, 2-allyloxymethyl-2-ethyltrimethylene carbonate (AOMEC). 46,49 The first step comprises a slow addition of the diol to a suspension of sodium hydride in diethyl carbonate (DEC). The mixture was heated to 120 °C overnight to facilitate the condensation reaction between DEC and the alkoxide originating from the free diols or formed oligomeric species. After the formation of oligomers (Fig. 1, (i) and (ii)), the reaction vessel was cooled to 60 °C, vacuum was applied, the vessel was subsequently reheated, and the cyclic carbonate was collected as a distillate (Fig. 1, (iii) and (iv)). This provides a very powerful methodology to synthesize a large amount of monomer in a short time frame. Inspired by this, we decided to explore the generality of this methodology both for the condensation reaction and for the synthesis of a series of cyclic carbonates from diols with structural diversity. Equilibrium between condensation and depolymerization as a function of the diol length To make aliphatic polycarbonates using diols and dialkyl carbonates as the starting materials, there are two ways to go: direct condensation, or cyclization followed by ring-opening polymerization. Depending on the actual structure of the diol and the targeted polymer architecture, the strategy can be either route. In the condensation route, as exemplified in Fig. 1, polycarbonate is obtained as the desired product after step (i). The only tuning of the polymeric structure is from the change of the diol. In the cyclic route, the cyclic monomer is obtained after going through all four steps (i-iv). The polycarbonate is then obtained after a further step of ring-opening polymerization. The advantage of undergoing cyclization followed by ring-opening polymerization is the versatility of the ring-opening polymerization, which can result in a broad range of polymeric materials of varying chemical structures and architectures. It is well known that during ring-opening polymerization the ring size of the cyclic monomer and the temperature of the system play very important roles in the equilibrium between monomer and polymer. We therefore first focused on how the size of the unsubstituted α,ω-diols (Table 1) influenced the equilibrium between the cyclic monomer and polymer. It was obvious that both ethylene glycol and 1,3-propanediol formed 5- and 6-membered cyclic carbonates at high yields, 83% and 70% respectively (Table 1, entries 1 and 2).
Fig. 1 Synthetic methodology for the one-pot, two-step oligomerization ring-closing depolymerization.
In our setup, all other diols only formed the polycondensation product of the respective polycarbonates (Table 1, entries 3 to 6). The carbonate moiety within the polymeric structure was confirmed by ¹³C NMR (Fig. S1†). The inability of this methodology to ring-close larger carbonate rings is in contrast to what has previously been shown, where larger cyclic carbonates may undergo ring-closing depolymerization at elevated temperatures. 41,44,50,51 This feature of the system is believed to be a consequence of the anionic environment produced by the sodium alkoxide. Hence, under these reaction conditions, the diol length fully controls whether the reaction leads to monomer or polymer formation. Substitution pattern and equilibrium tendencies The influence of the diol length on the formation of cyclic carbonate monomers and polycarbonates highlighted the beneficial equilibrium tendencies of ethylene glycol and 1,3-propanediol to form cycles compared to all other diols. Yet, from a polymer perspective, the six-membered carbonate ring is much more valuable than cyclic carbonates of other ring-sizes, as it is known to be highly propagating. As an example, trimethylene carbonate (TMC), an unsubstituted six-membered monomer, is utilized particularly in biomedical applications as an important comonomer to L-lactide. 47,52,53 The ring-closing depolymerization methodology was further expanded to six additional 1,3-diols with very different substitution patterns (Table 2). To our delight, all cyclic monomers were synthesized in high yields ranging from 70–90% (Table 2, entries 1-7). Most of the monomers required temperatures above 180 °C under vacuum to be distilled from the reaction mixture (Table 2, entries 1-5), except for the more heavily substituted carbonates 4,4-dimethyl-1,3-dioxan-2-one (Table 2, compound 6B) and 4-isopropyl-5,5-dimethyl-1,3-dioxan-2-one (Table 2, compound 7B), where the former had an onset distillation temperature 140 °C lower. To understand the differences in equilibrium behavior of the monomers, the reaction composition after the initial condensation step (Fig. 1, (ii)) was analyzed through ¹H NMR (Fig. S2–S9†). This provides a direct way to evaluate how the substitution pattern of the 1,3-propanediol relates to the equilibrium behavior of the monomer to oligomer (Fig. 2). It is well established that the addition of substituents leads to an increased tendency towards ring-closure, more commonly referred to as the Thorpe-Ingold effect, and our system is in no way any different. 54,55 Substitution on the 2-position of the cyclic carbonate influences the equilibrium behavior less than substitution on the 1-position. Specifically, when comparing unsubstituted TMC to the cyclic carbonate that is mono-methylated at the 2-position, only an additional 2% of monomer at equilibrium is observed (Table 2, entries 1 and 2, and Fig. 2). This is in contrast to substitution at the 1-position of the carbonate, where one methyl group shifts the equilibrium by 7% in favor of the formation of the cyclic monomer (Table 2, entries 1 and 5, and Fig. 2). Similar trends have been seen for both cyclic carbonates and cyclic esters, where both the size and the position of substitution have a large impact on the equilibrium tendencies between monomer and oligomer. 11,15,56,57 The small changes in equilibrium behavior may seem insignificant (Fig. 2);
however, from a synthetic perspective this will have large consequences on the polymerization behavior. 15,58 The equilibrium between aliphatic polycarbonates and cyclic carbonate monomers is related to the ceiling temperature (Tc) of the polymerization. The ceiling temperature is defined as the temperature at which zero conversion of the monomer to polymer occurs. To achieve polymers, it is highly advantageous to have a high-Tc system; however, this means that a high temperature is required to achieve ring-closing depolymerization, and the reaction conditions may then lead to other side-reactions.
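The relationship between Tc and the monomer concentration at equilibrium is not written out in the source; the standard thermodynamic form (the Dainton equation), added here as background, is:

$$T_c = \frac{\Delta H_p}{\Delta S_p^\circ + R \ln [M]_{eq}}$$

where $\Delta H_p$ and $\Delta S_p^\circ$ are the enthalpy and standard entropy of propagation and $[M]_{eq}$ is the equilibrium monomer concentration. Raising the temperature above Tc shifts the system towards depolymerization, which is what the distillation step in the procedure above exploits.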
The results obtained from the condensation step make it possible to formulate several generic polymer-synthetic conclusions based on the equilibrium observations (Fig. 2). The equilibrium behavior of the cyclic carbonates that are mono-substituted or di-methylated at the 2-position suggests high conversion under conventional polymerization conditions. 59 In the case of the more substituted cyclic carbonate monomers, the equilibrium resides more towards the cyclic carbonate monomer, thus suggesting weaker polymerization behavior (Fig. 2). The monomer formation mechanism is proposed to occur through an anionic ring-closing depolymerization reaction sequence. The central component of this reaction is the anionic chain-end of the pre-formed oligomers, which releases the cyclic monomer through a back-biting mechanism (Scheme 1). By constant removal of the cyclic monomer, the reaction sequence can be driven towards high conversion. This is a very important difference compared to classic cyclization with ring-closing reagents, where the reaction outcome is determined by the initial features of the system. There are examples in the literature where larger carbonate monomers have been produced under depolymerization conditions; however, our system did not yield these cycles (Table 1). 41,44,50,51 This underlines the importance of the catalytic system for producing the desired outcome. A recent report on DBU- and thiourea-catalyzed shifts in equilibrium conversion opens the possibility that the thermodynamic equilibrium is not completely independent of the catalytic system employed. 60 This may very well be the key component for eventually expanding the scope of accessible monomers through ring-closing depolymerization methodologies. Conclusion The length of the unsubstituted alkyl chain of the α,ω-diol controls the outcome of the reaction setup by leading to either polycarbonates or cyclic carbonate monomers. Ethylene glycol and 1,3-propanediol produced the cyclic carbonates in high yields, 83% and 70% respectively. Substitution on the 2-position of the cyclic carbonate has a smaller effect on the equilibrium behavior compared to substitution on the 1-position. By comparing trimethylene carbonate to the version mono-methylated on the 2-position, only an additional 2% of monomer at equilibrium is observed; however, the same substituent on the 1-position of the carbonate leads to more than 7% more cyclic monomer at equilibrium. The proposed mechanism for depolymerization is via an anionically induced ring-closing depolymerization from the chain end that releases the cyclic monomer. By constantly removing the cyclic monomer, the reaction sequence can be driven towards high conversion. In all, we have successfully produced a total of seven cyclic carbonates on a 100 g scale using a one-pot setup without solvent or toxic reagents. All chemicals involved are commercially available and inexpensive, and the experimental setup is readily available. The ease of synthesis, the scale and the low cost highlight the immense potential of this class of monomers in the entire field of polymer science. Conflicts of interest The authors declare no competing financial interest.
2019-04-10T13:12:34.434Z
2018-11-16T00:00:00.000
{ "year": 2018, "sha1": "3ac15a4284e39919a5d54dc7023ab56cd0553ca2", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ra/c8ra08219g", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e22b9fb73292819e96562c8ef5844b4bf3b73086", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry" ] }
258146777
pes2o/s2orc
v3-fos-license
Using compartmental models and Particle Swarm Optimization to assess Dengue basic reproduction number R0 for the Republic of Panama in the 1999-2022 period Nowadays, the ability to make data-driven decisions in public health is of utmost importance. To achieve this, it is necessary for modelers to comprehend the impact of models on the future state of healthcare systems. Compartmental models are a valuable tool for making informed epidemiological decisions, and the proper parameterization of these models is crucial for analyzing epidemiological events. This work evaluated the use of compartmental models in conjunction with Particle Swarm Optimization (PSO) to determine optimal solutions and understand the dynamics of Dengue epidemics. The focus was on calculating and evaluating the rate of case reproduction, R0, for the Republic of Panama. Three compartmental models were compared: Susceptible-Infected-Recovered (SIR), Susceptible-Exposed-Infected-Recovered (SEIR), and Susceptible-Infected-Recovered Human-Susceptible-Infected Vector (SIR Human-SI Vector, SIR-SI). The models were informed by demographic data and Dengue incidence in the Republic of Panama between 1999 and 2022, and the susceptible population was analyzed. The SIR, SEIR, and SIR-SI models successfully provided R0 estimates ranging from 1.09 to 1.74. This study provides, to the best of our understanding, the first calculation of R0 for Dengue outbreaks in the Republic of Panama. Introduction The current landscape of global events is characterized by its complexity and its interconnections, making it necessary to proactively anticipate and mitigate future challenges. In response to this, there has been a growing emphasis on approaching public health events, such as epidemic outbreaks, through a mathematical lens. Mathematical models provide a valuable tool for predicting the course of such events. A typical workflow for estimating R0 from epidemic data is as follows:
1. Collect data on the number of cases over time. These data may include the number of new cases, the number of accumulated cases and the number of deaths.
2. Determine the initial values of the model's parameters, such as the proportion of the population that is susceptible, infected, and recovered.
3. Use mathematical optimization techniques to find the set of parameter values that best fit the data.
4. Use the parameterized model to estimate R0.
5. Compare the results with the observed data to check the validity of the model.
The process of parameterizing epidemic models is crucial in gaining insights into the dynamics of disease transmission. This includes determining the proportion of the population that is susceptible to infection, the rate at which susceptible individuals become infected, and the time required for recovery. These parameters are typically taken from real epidemic data. Data set The quality and composition of data sets are paramount in epidemiological modeling, as they serve as the basis for accurate disease spread predictions. The utilization of robust and comprehensive data sets is crucial in ensuring the validity of the models and the subsequent effectiveness of disease control strategies derived from them. As shown in Fig. 1, Dengue incidence for the Republic of Panama was analyzed over the 1999-2022 period. There are outbreak years with prominent incidence, shaded in green. The peaks found in this period could be of three types: an outbreak spanning the whole year, an outbreak in the first semester of the year (labeled a), and an outbreak in the second semester of the year (labeled b).
In this manner, outbreaks can be found for the years 1999, 2001-2002, 2005-2006a, 2006b-2007a, 2007b-2008a, 2008b-2009a, 2009b- Dengue cases were grouped by epidemiological week and were represented as a time series of 24 years (1999-2022). It is important to note that this period has been well studied, with publications focusing on molecular aspects of the serotypes, studying the demographic groups it affects via statistical analysis, and finally the relationship between incidence and climate variables [15,24]. The data set used in this study is similar to that used in the publications mentioned above, taking into account Dengue cases confirmed by laboratory testing and with a known epidemiological link. The parameterization was carried out in the intervals corresponding to the Dengue outbreaks mentioned before, which are characterized by an abrupt increase in Dengue cases after a period with a low incidence of cases. The demographic data were obtained from the National Institute of Statistics (INEC). The estimated population of the Republic of Panama during the study period (1999-2021) was obtained from the open data portal on the INEC website, while the population for 2022 was predicted based on previous data [37][38][39][40]. Compartmental models Epidemiology plays a crucial role in understanding the spread and control of diseases, particularly in the context of pandemics. One of the most effective ways to study the spread of diseases is through mathematical models. In this field, several models have been developed, each with a specific focus on different aspects of disease transmission and control. These models provide a theoretical framework to understand the underlying mechanisms of disease spread, and to guide decision-making in the control and prevention of pandemics. The use of systems of differential equations plays a crucial role in modeling the dynamics of various situations, including the study of pandemics such as COVID-19 [41] and the impact of deforestation on wildlife [42]. Investigative efforts in vector-borne pandemics have leveraged ordinary differential equations (ODEs) as a valuable tool to guide the development of effective control strategies. The SIR and SIR-SI models, for instance, have been widely employed to determine optimal vaccination schedules and assess the efficacy of various control methods, as demonstrated in several recent studies [43][44][45]. The utilization of SIR and SIR-SI models in epidemiology has been extensively studied, with a focus on the impact of incorporating periodic components in disease transmission models. Research has compared the advantages and disadvantages of incorporating such components, which can help to accurately capture the seasonal fluctuations in disease transmission [46,47]. The consideration of periodic components in models is crucial, as it helps to better predict the spread of diseases, particularly those with seasonal patterns such as vector-borne diseases. This information can then inform the development of effective control and intervention strategies to prevent outbreaks and minimize the impact of pandemics. In this study, the focus is placed on three models that have been widely used in epidemiology: the SIR, SEIR, and SIR-SI models. These models have been utilized to provide valuable insights into the dynamics of disease transmission, and to evaluate the effectiveness of various control and prevention strategies.
The SIR model considers the flow of individuals between susceptible, infected, and recovered compartments. The SEIR model extends the SIR model to account for the incubation period between exposure to the disease and symptom onset. Finally, the SIR-SI model extends the SIR model to account for additional sources of infection, such as the mosquito vector. Model assumptions The SIR, SEIR, and SIR-SI models might seem similar, but they differ in their complexity. Each additional compartment added to a model increases the number of equations that it represents, enabling the model to better fit increasingly complex data. Hence, a model with more compartments is anticipated to provide a superior fit to the data. These compartmental models play a crucial role in capturing the dynamics of the spread of the disease in a population. It should be noted that the presented models rest on assumptions in order to simplify the number of compartments and equations, as well as the amount of data needed to inform the model. These assumptions include, but are not limited to:
• The population is assumed to be homogeneous in terms of susceptibility, transmission and recovery.
• Individuals are equally likely to mix with one another, regardless of their Dengue serotype.
• The transmission rate is assumed to be constant over the course of the outbreaks.
• Once an individual recovers from the disease, they develop immunity to it.
• Constant efforts to control Dengue spread impact the susceptible population.
Assumptions play a crucial role in epidemiology models as they provide a foundation for forecasting the spread of diseases. These assumptions, when correctly interpreted, can provide valuable insights into the behavior of the disease and inform decision-making for disease control measures. However, it is important to be aware of the limitations of assumptions and to critically evaluate their validity in the context of a specific outbreak. A thorough understanding of the assumptions used in a model and how they impact the predictions is essential for the effective interpretation and use of epidemiology models in disease control and prevention. These assumptions are important for building accurate and realistic models of Dengue transmission, but they are also limitations, and it is important to be aware of them when interpreting model results. To solve the system of differential equations, the 4th-order Runge-Kutta method (RK-4) [48][49][50] was used, with a step size (or integration time step) of h = 0.01, representing time in weeks. SIR The SIR model was originally formulated by Kermack et al. [5] and was later extended with a small addition representing demographic birth and removal processes [27]. In all the models used in this study, there are three compartments representing populations: the susceptible (S), infected (I), and recovered (R) compartments. The susceptible compartment represents the population that is susceptible to the disease. The infected compartment represents the population that becomes infected at a rate β. The recovered compartment represents the population that has recovered from the disease at a rate γ. Additionally, people are added to the susceptible compartment at a birth rate α, and people are removed from all compartments at a mortality rate μ, where N represents the population at any time. The differential equations of this model can be seen in Equation (1); a standard formulation consistent with this description is sketched below.
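The printed form of Equation (1) did not survive extraction; a standard SIR system with demography matching the rates described above (β, γ, α, and μ are assumed symbol choices) is:

$$\frac{dS}{dt} = \alpha N - \frac{\beta S I}{N} - \mu S, \qquad \frac{dI}{dt} = \frac{\beta S I}{N} - \gamma I - \mu I, \qquad \frac{dR}{dt} = \gamma I - \mu R$$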
The SIR model, whose representation is shown in Fig. 2, is a theoretical framework that is used to describe the spread of infectious diseases within a population. The model tracks the number of individuals in each compartment over time and seeks to explain the dynamics of disease transmission based on the movement of individuals between compartments. In order to solve the SIR model, initial conditions must be introduced. The main objective of the SIR model is to understand the spread of disease by accounting for the velocity at which individuals move through the different compartments. SEIR The SEIR model is a modified version of the SIR model that includes a compartment to represent individuals who have been infected with the disease but have not yet become infectious. This compartment, referred to as "exposed" (E), represents individuals who have been exposed to the virus but have not yet developed symptoms or become contagious. This model is a local adaptation of the model presented by Yang et al. in [28]. The incubation rate, denoted by σ, reflects the length of the incubation period, the time between exposure and the onset of symptoms. By incorporating the exposed compartment, the SEIR model provides a more comprehensive understanding of the dynamics of disease transmission. Equation (2) represents the mathematical formulation of the SEIR model (see the sketch after this section). The SEIR model, whose representation is shown in Fig. 3, considers the progression of individuals through different stages of a disease, beginning with exposure and ending with recovery or death. The model differentiates between individuals who have been exposed to the virus but have not yet developed symptoms (E), individuals who are infected and can spread the disease to others (I), and individuals who have recovered from the disease (R). In the SEIR model, after contact with an infected individual, individuals progress from the susceptible compartment to the exposed compartment, where they remain for an incubation period before moving on to the infected compartment. This progression represents the host's response to a new virus, and the model provides insight into the spread of infectious diseases within a population. SIR(h)-SI(v) The third model employed in this study is the two-tier host-vector model SIR(h)-SI(v), which represents the interactions between the mosquito vector and the human host in Dengue epidemics. This model considers the dynamics of the spread of the Dengue virus between the human host and the mosquito vector, allowing for a more comprehensive understanding of the disease transmission process. This model is a local adaptation of the model presented by Nishiura et al. in [29]. The differential equations of this model can be seen in Equation (3), also sketched below. The SIR-SI model, whose representation is shown in Fig. 4, captures complex interactions between host and vector and offers a more complete understanding of Dengue fever transmission dynamics. It considers both host and vector factors, providing deeper insight into the virus-vector-host interplay. This model includes the interaction between mosquitoes and human hosts, both at the host and vector levels. Key parameters include the likelihood of infection from mosquitoes to humans (p_h), the likelihood of infection from humans to mosquitoes (p_v), and the rate of mosquito bites per unit time (b). These parameters are combined to create the vector-to-host transmission rate (β_h = p_h · b) and the host-to-vector transmission rate (β_v = p_v · b), and they take into account the demographics of both mosquito vectors and human hosts.
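The printed Equations (2) and (3) likewise did not survive extraction; standard formulations consistent with the compartments described above (symbol choices assumed: σ for incubation, β_h and β_v as defined above, subscripts h and v for host and vector) are:

$$\frac{dS}{dt} = \alpha N - \frac{\beta S I}{N} - \mu S,\quad \frac{dE}{dt} = \frac{\beta S I}{N} - (\sigma + \mu) E,\quad \frac{dI}{dt} = \sigma E - (\gamma + \mu) I,\quad \frac{dR}{dt} = \gamma I - \mu R$$

and, for the host-vector system,

$$\frac{dS_h}{dt} = \alpha_h N_h - \frac{\beta_h S_h I_v}{N_h} - \mu_h S_h,\quad \frac{dI_h}{dt} = \frac{\beta_h S_h I_v}{N_h} - (\gamma + \mu_h) I_h,\quad \frac{dR_h}{dt} = \gamma I_h - \mu_h R_h,$$
$$\frac{dS_v}{dt} = \alpha_v N_v - \frac{\beta_v S_v I_h}{N_h} - \mu_v S_v,\quad \frac{dI_v}{dt} = \frac{\beta_v S_v I_h}{N_h} - \mu_v I_v.$$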
R0 and the Next-Generation-Matrix Method One of the ways that R0 is determined is via the Next-Generation-Matrix method (NGM), developed by Diekmann et al. [51][52][53]. The NGM technique involves constructing two matrices, the transmission matrix (T) and the transition matrix (Σ), where transmission refers to the positive rates that introduce new infected individuals and transition to the negative rates that move individuals between infected states (other than to susceptible or recovered). Equation (4) provides a mathematical representation of the two matrices needed to calculate the NGM. See Appendix A for the complete derivation procedure. Once the matrices T and Σ are obtained, the product −T × Σ⁻¹ is calculated, and then its eigenvalues. The predominant eigenvalue (the largest absolute value among its eigenvalues), or spectral radius, is the basic reproduction number R0. In our case, each model used has a slightly different expression for R0, but each encapsulates the same meaning: comparing the rates at which individuals are added to and removed from the infected compartments. For the SIR model, R0 is calculated via Equation (5); for the SEIR model, via Equation (6); and for the SIR-SI model, via Equation (7). The corresponding standard expressions are sketched below.
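Under the formulations sketched above (again with assumed symbol choices), the NGM yields the following standard expressions, which plausibly correspond to Equations (5)-(7) of the source:

$$R_0^{SIR} = \frac{\beta}{\gamma + \mu}, \qquad R_0^{SEIR} = \frac{\beta\,\sigma}{(\sigma + \mu)(\gamma + \mu)}, \qquad R_0^{SIR\text{-}SI} = \sqrt{\frac{\beta_h}{\gamma + \mu_h} \cdot \frac{\beta_v\, N_v}{\mu_v\, N_h}}$$

The square root in the host-vector case reflects the fact that one transmission cycle requires a human-to-vector generation followed by a vector-to-human generation.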
After introducing the models used and the relevant expressions in this study, the following explanation is provided for determining the parameters of a system of differential equations using the PSO meta-heuristic method. Particle Swarm Optimization meta-heuristics PSO is a computational method for solving optimization problems, utilizing a meta-heuristic approach inspired by the collective behavior of swarms in nature, such as hives, flocks, or schools of fish. It was first introduced by Kennedy and Eberhart in 1995 [54]. In PSO, particles or agents represent the coordinates of different points in the solution space at each iteration. Guided by their own best evaluations and the global best, they navigate the search space. This algorithm has the potential to explore the entire function surface and find either a local or global solution, depending on the configuration and computational resources available. When searching for a solution to a system of equations, it is crucial to have a structured method for solving the problem. There are various types of problems, such as scheduling problems, space allocation problems, clustering problems, and classification problems; in our case, we can define our problem as a standard weighted least-squares optimization problem, in which the algorithm searches for the best parameters that fit the in-silico simulation to real data. Each type of problem has a preferred method of solution, and multiple methods can be used to solve the same problem. For example, the standard weighted least-squares optimization problem can be solved using PSO [55] or genetic algorithms [56]. In our case, an optimal solution will be one that best fits a system of differential equations (the SIR, SEIR and SIR-SI models) to a set of demographic and Dengue data, subject to initial conditions. It is considered a solution when the parameters that best describe the spread of a disease in a population are determined, thus providing valuable information on the spread of the disease and assisting in decision making for control measures. In general, a swarm can be defined as shown in Equation (8): the particles that compose the swarm take values within the ranges defined by the limits of each search dimension and move in search of the solution parameters that minimize the objective function (OF). The initial distribution of particles is usually random over the solution space, but depending on the requirements of the model, an ordered initial distribution can sometimes yield better results. The OF is first resolved for each particle, then the solutions are compared and the best ones are identified as a guide for the swarm. The process is repeated until the convergence criteria are met. Fig. 5 provides a diagrammatic representation of the functioning of the Particle Swarm Optimization (PSO) method. Once the best solutions have been found, the particles move towards them. The velocity vector for each particle is calculated using Equation (9), which updates the speed of the swarm. The velocity vector includes three components: an inertia component ω that maintains the forward movement of the particle, a learning component c1 with a random contribution r1 that moves the particle towards its personal best position p, and a global learning component c2 with its corresponding random contribution r2 that moves the particle towards the best solution found by the entire swarm, g. When all the velocities at which the particles move have been calculated, the next step is to move the swarm. This happens by updating the position of each particle by adding its respective velocity, using Equation (10). Both update rules are written out in the sketch below. In our case, this meta-heuristic method will be used to fit and optimize the parameters for each compartmental model. It allows the exploration of the entire solution surface of the objective function (OF) within a given search interval, finally providing a local solution or even a global solution to the problem of adjusting an epidemiological model to real data. The Particle Swarm Optimization was carried out using the PySwarms package for Python [57] and accelerated using the CuPy package [58].
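Equations (9) and (10) are not legible in the source; the standard PSO update rules they describe are v_i ← ω·v_i + c1·r1·(p_i − x_i) + c2·r2·(g − x_i) and x_i ← x_i + v_i. Below is a minimal sketch of how such a fit could be set up with PySwarms; the data file, population size, bounds, and the simplified sum-of-squares cost (a stand-in for the percentage-difference OF described in the next subsection) are all assumptions, and forward Euler replaces the paper's RK-4 for brevity.

```python
import numpy as np
import pyswarms as ps

# Hypothetical inputs: observed cases per epidemiological week and population.
weekly_cases = np.loadtxt("outbreak_cases.csv")
N = 4_200_000                       # assumed effective population size
T = len(weekly_cases)

def simulate_weekly(beta, gamma, s_frac, h=0.01):
    """Integrate a basic SIR model (forward Euler here for brevity) and
    return the simulated number of new cases per week."""
    S, I = s_frac * N, float(weekly_cases[0])
    out = []
    for _ in range(T):
        new = 0.0
        for _ in range(int(1 / h)):
            inf = beta * S * I / N          # force of infection
            S += h * (-inf)
            I += h * (inf - gamma * I)
            new += h * inf
        out.append(new)
    return np.asarray(out)

def cost(swarm):
    """PySwarms cost function: each row of `swarm` is (beta, gamma, s_frac)."""
    costs = []
    for beta, gamma, s_frac in swarm:
        sim = simulate_weekly(beta, gamma, s_frac)
        costs.append(np.sum((sim - weekly_cases) ** 2))   # simplified OF
    return np.asarray(costs)

bounds = (np.array([0.1, 1 / 4, 0.001]),    # assumed search intervals
          np.array([5.0, 1 / 2, 0.2]))      # rates in units of per week
opt = ps.single.GlobalBestPSO(n_particles=200, dimensions=3,
                              options={"c1": 0.5, "c2": 0.3, "w": 0.9},
                              bounds=bounds)
best_cost, best_pos = opt.optimize(cost, iters=50)
beta, gamma, _ = best_pos
print("R0 estimate:", beta / gamma)   # demography ignored for simplicity
```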
Objective function The iterative method operated by resolving a system where the number of unknowns exceeded the number of equations, typically through a process of trial and error. Each iterative method had an adjustment equation or objective that was refined in each iteration round. This equation is commonly referred to as the objective function, which evaluated the solution at each epoch while the meta-heuristic searched across the search interval to find the best fit. The goal was to fit a model to a set of Dengue epidemic data in order to estimate the evolution of outbreaks. To achieve this, the accumulated percentage difference between the in-silico results and real data was calculated, as well as the difference between the total simulated cases and actual cases. These metrics guided the search for the optimal parameters. The objective function, shown in Equation (11), was calculated based on the percentage difference between the predicted weekly cases from the simulation and the actual cases, as well as the percentage difference between the total simulated recoveries and the actual recoveries. Search interval Determining the parameters to use was the central focus of the optimization algorithm. Its purpose was to calibrate the models to the given data, specifically to adjust epidemiological data to compartmental models in order to determine the reproductive rate of secondary cases at the onset of each annual outbreak. To facilitate the search, search intervals for each parameter were defined and listed in Table 1. The interval for the demographic parameters was derived from the average life expectancy in Panama [59], while the average duration of the disease was estimated to be 21 days [15]. The average removal rate for the vector was taken from previous studies in Panama [60]. Hardware specifications All models were calculated using a desktop PC with an 11th-generation Intel processor and 64 GB of RAM, accelerated via GPU using an NVIDIA Ampere card. Results The accumulation of Dengue cases was documented over a period of 24 years, from 1999 to 2022, and was organized according to the epidemiological week. The yearly population data and the number of Dengue cases were carefully collected and recorded. Subsequently, the epidemic outbreaks were analyzed and classified with the objective of determining the reproductive number R0 and the susceptible population. The susceptible segment of the population represented areas where the epidemic control measures failed to achieve the desired outcomes. It signified the fraction of the effective population that was vulnerable to Dengue infection. For each outbreak, the reproductive rate and the size of the effective susceptible population were calculated. The estimation of these parameters was performed using the PSO algorithm in 100 simulation runs. The results of these simulations were used to derive the lower and upper confidence intervals, which are listed in Table 2, with the first row providing the value for the SIR model, the second the value for the SEIR model, and the third the estimate for the SIR-SI model. It should be noted that the PSO algorithm was applied to minimize the OF, and 100,000 particles were utilized over a span of 50 iteration epochs in each simulation run. The outcomes of the simulation revealed that a segment of the population was prone to participating in Dengue outbreaks, in which they were exposed to the risk of infection from an infective vector. This segment of the population grew as the density of mosquitoes per capita or the number of infected individuals heightened. The vulnerability of the population to Dengue was greatly affected by education programs on the disease and the tireless efforts of the units tasked with controlling its spread. These units, responsible for maintaining the well-being of the public, regularly conducted fumigations in areas with a history of Dengue cases or a high risk of outbreaks, as a precautionary measure to mitigate the risk of further transmission. Through these proactive measures, the spread of Dengue was effectively managed and contained, reducing its impact on the vulnerable population. Having analyzed the results and estimated the parameters, it was observed that the values of the susceptible population oscillated within comparable intervals for the three models. However, during epidemic outbreaks with a significant number of cases, the SIR model tended to estimate a comparatively higher effective susceptible population, whereas the SIR-SI model estimated a comparatively lower population, and the SEIR model estimated a population positioned somewhere between the values of the SIR and SIR-SI models.
As for the estimation of the reproductive ratio of cases, the SEIR model generally tended to estimate a higher value due to its lower fraction of effective susceptible cases, compared to the SIR and SIR-SI models, which had higher fractions of effective susceptible cases and lower reproductive ratios. Nonetheless, the three models replicated the number of secondary cases with similar performance metrics. The models demonstrated the ability to estimate the case curve and the number of recoveries towards the end of the epidemic. Among the models, the SIR-SI model stood out for its ability to provide a more thorough analysis of parameters related to epidemic outbreaks, such as the density of mosquitoes per person. Outbreak analysis The process of parameterizing each individual outbreak year for Dengue was deemed highly important, as it allowed for a more accurate estimation of the reproductive rate of secondary cases during the start of each annual outbreak. By tailoring the parameters to fit the specific circumstances of each year, the models used to analyze the outbreaks could be refined, leading to a better understanding of the spread and evolution of the disease. To further understand the impact of Dengue outbreaks, each outbreak period was analyzed separately; results in terms of the susceptible population are shown in Table 3. The mosquito density for each model is also presented. In some instances, the data sets incorporated epidemic weeks from two consecutive years due to the extended duration of specific outbreaks. During less severe Dengue outbreaks, it can be assumed that high populations of mosquitoes per capita were present in a particular epidemic location. However, as the outbreak progresses, the number of mosquitoes per capita gradually decreases. The purpose of this parameterization was to optimize the estimates for each individual outbreak year, in order to achieve a more accurate representation of the spread and evolution of the disease. The magnitude of Dengue outbreaks has been found to be linked to the abundance of mosquitoes in a given area. Outbreaks of small scale often indicate the presence of a high concentration of mosquitoes in the susceptible population. However, more severe outbreaks do not necessarily imply a high density of mosquitoes, but instead point to the co-occurrence of the vector alongside the vulnerable segment of the population. In these figures, the real incidence of the epidemiological outbreaks is indicated by gray circles with sky-blue edges, while the fitted curve for each outbreak is depicted as a colored curve. These figures provide a visual representation of the ability of each model to estimate the case curve. The obtained OF value and R² are also shown. Discussion The Republic of Panama, situated within the tropical region, experiences favorable climate conditions, particularly with regard to rainfall, that facilitate the propagation of the mosquito vector responsible for the transmission of Dengue. An analysis conducted by Díaz et al. [15] has confirmed the presence of all four serotypes of Dengue in the Republic of Panama since 1993. Compartmental models are a widely recognized tool in epidemiology, employed to comprehend the dynamics of an epidemic and, more significantly, to determine the rate of transmission, R0 [61,62]. The purpose of this study was to determine the reproductive ratio in Dengue outbreaks through the use of actual data. Our findings revealed that the R0 value can be effectively estimated with this method.
Furthermore, our examination of years with documented outbreaks indicated a correlation between the number of mosquitoes per capita in the susceptible population and the likelihood of a Dengue epidemic. Using the proposed methodology, it is noted that in all analyzed outbreaks, the mosquito density per person (Table 3), starting from an average density of (2.4 ± 0.8) mosquitoes per person (as seen for the year 2022), plays an important role in the development of Dengue epidemics. To determine the role of mosquito density in R0 estimation, a model akin to the one developed by Griffin for malaria [63] should be studied. The Griffin model uses parameters such as the biting rate on humans, the expected duration of infectiousness of an infected mosquito, the probability of infection if bitten by an infected mosquito, and human infectiousness to mosquitoes (if the human host is infected). These parameters, to the best of our knowledge, have not been characterized for the Republic of Panama. Thus, the results should be interpreted as follows: if at least one mosquito capable of transmitting the disease is present near a susceptible person, there is a possibility of transmission. Basically, while there could be other factors that contribute to the development of Dengue outbreaks, for our models, having mosquitoes capable of transmitting the disease is a crucial one. Comparing the resulting reproductive number across different countries at various latitudes can greatly aid our understanding of the severity of Dengue outbreaks. By identifying similarities and differences in the ratios found, we can gain a deeper understanding of the results of the models used to estimate the reproductive number, which ultimately provides a foundation for developing effective strategies for controlling and mitigating the spread of the virus. Studies on the reproductive ratio of Dengue in the city of Kupang, Indonesia, showed a value that is comparable to the reproductive ratio found in our study, with a range of 1.30 to 2.02 [44]. A study conducted in Colombia found a reproductive ratio that oscillated between 1.01 and 1.11, which is consistent with the lower end of the range found in our study [64]. Studies conducted in Central and South America have estimated the reproductive ratio of Dengue fever between the years 1999 and 2010 in a number of countries. The results showed varying values for the reproductive ratio, with Brazil having a ratio of 2.75, Colombia a ratio of 3.075, Honduras a ratio of 2.7, Mexico a ratio of 1.975, and Puerto Rico a ratio of 2.15 [65,66]. These findings highlight the importance of examining the reproductive ratio on a country-by-country basis, as it provides valuable insights into the transmission dynamics and potential severity of the epidemic in a specific region. In this study, a SIR-SI model was introduced to address issues of mosquito disease control. This consideration can be further extended using a granular approach by analyzing vector populations at the district or corregimiento levels in order to enhance the application of control strategies [67][68][69][70]. In this matter, works by Loaiza et al. can become very useful, as they have provided first measurements of mosquito larvae (for both Aedes aegypti and Aedes albopictus), initially in the Azuero region [10] and then associated with specific Dengue cases countrywide [11].
Part of the challenge of adding this vector-host model can be sorting out a biological niche phenomenon, that is, determining which species is dominant in a specific region. For instance, it has been noted in the literature that there is ecological competition between Aedes aegypti and Aedes albopictus in parts of the country [71,72]. Moreover, the same mosquito is responsible for Zika and Chikungunya in addition to Dengue, which can alter the number of reported Dengue cases [15,31,34,73]. One of the limitations of this study was its reliance on certain assumptions, for instance the notion of lifelong immunity following a primary Dengue infection. It is well established that this is not the case for Dengue: subsequent infections can behave differently owing to the well-studied antibody-dependent enhancement (ADE) effect and cross-immunity with other flaviviruses [74]. As a result, the models may only reflect an approximation of the true dynamics of the disease. Public health is always a priority, even more so nowadays, in light of a global pandemic such as the one caused by the SARS-CoV-2 virus. Even though Dengue has had 30 years of endemic circulation in Panama, this is the first time that R0 has been calculated for this disease in the country. Because of subsequent Dengue infections, the R0 calculation will provide a better understanding of possible Dengue outbreaks, since these have been related to the introduction of a different serotype or genotype of the virus and to immunity against the serotype(s) previously in circulation [15,75]. In future work, it would be valuable to explore the potential of other differential equation-based compartmental models that take into account vector-host mitigation strategies (vaccination or Wolbachia bacterium inoculation) as interactions [43][44][45]. Such models have been used in the past to evaluate the impact of local vector eradication and control efforts carried out by health authorities [76][77][78]. These models offer the potential to gain a deeper understanding of the dynamics of vector-host interactions, thereby facilitating the development of more effective control strategies. Conclusions The outcome of these findings underscores the ongoing necessity for effective vector control within communities, proper disposal of waste that could facilitate the spread of other viruses, and educational programs for the public regarding hygiene measures to prevent proliferation and infection from diseases similar to or including Dengue. It is crucial to address these issues in order to minimize the severity of future outbreaks. Designing an effective Dengue vaccination strategy requires the use of epidemiological modeling to understand the spread of the disease and the potential impact of the intervention, and to inform resource allocation. Epidemiological models incorporate data, transmission dynamics, population demographics, and the impact of interventions such as inoculation to guide effective and efficient disease reduction strategies, highlighting the importance of applying models to assist decision makers.
Since Dengue is endemic and its circulation still causes severe cases and deaths each year, the R0 calculation could be used as a prevention tool by health authorities in the Republic of Panama. Regular collection and analysis of incidence data enables public health organizations to keep abreast of disease outbreaks and implement proactive interventions. Thus, the significance of high-quality data sets and consistent incidence reporting cannot be overstated in the pursuit of successful epidemic control and prevention. Taken as a whole, our results emphasize the significance of implementing measures to control the mosquito population, including using insecticides, mosquito nets, and other preventative methods to reduce their numbers and prevent the spread of Dengue. Moreover, they highlight the importance of continuous mosquito control efforts, proper waste management to prevent the spread of other viruses, and educational programs for the public on maintaining proper hygiene to prevent the proliferation of diseases such as Dengue. As final words, it is imperative that public health be given priority, especially in the midst of a global pandemic such as that caused by the SARS-CoV-2 virus. The significance of this study lies in the fact that it provides, to the best of our knowledge, the first ever calculation of the reproductive ratio R0 for Dengue outbreaks in the Republic of Panama during the period from 1999 to 2022. Despite being endemic, Dengue continues to cause severe cases each year, emphasizing the importance of understanding its dynamics. Ethics approval and consent to participate Not applicable. The original study from which the data in this article were generated was approved by the Institutional Research Bioethics Committee of the Gorgas Memorial Institute for Health Studies under approval code 019/CBI/ICGES/18 on January 11th, 2018. Consent for publication All authors have read and agreed to the published version of the manuscript. Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. Data availability Data regarding the incidence of Dengue cases are in the public domain, compiled yearly by MINSA and published in the Health Statistical Yearbook, http://minsa.gob.pa/informacion-salud/anuarios-estadisticos, last accessed on May 25th, 2022. Demographic data are compiled by the Instituto Nacional de Estadistica y Censo and published in the "Panama en Cifras" yearbook, https://inec.gob.pa/publicaciones/Default.aspx, last accessed on May 25th, 2022. The next step is to assemble the matrices T and Σ, incorporating the Jacobian, and then substitute in the values at the disease-free equilibrium point (DFP). Then, the product −T Σ⁻¹ is computed and the largest absolute value of its eigenvalues (the spectral radius) is found.
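As a concrete illustration of this appendix procedure, the sketch below builds T and Σ for the SEIR case and extracts the spectral radius numerically; the rate values are invented for demonstration and are not estimates from the study.

```python
import numpy as np

# Assumed SEIR rates (per week), purely for illustration.
beta, sigma, gamma, mu = 0.5, 7 / 5, 7 / 21, 1 / (76 * 52)

# Infected subsystem ordered as (E, I), linearized at the disease-free
# equilibrium: T holds new infections, Sigma holds transitions between
# and out of the infected states (negative rates, as in the text).
T = np.array([[0.0, beta],
              [0.0, 0.0]])
Sigma = np.array([[-(sigma + mu), 0.0],
                  [sigma, -(gamma + mu)]])

K = -T @ np.linalg.inv(Sigma)          # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))    # spectral radius
print(f"R0 = {R0:.2f}")                # ~1.5 with these illustrative rates
```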
2023-04-15T15:17:33.543Z
2023-04-01T00:00:00.000
{ "year": 2023, "sha1": "c1f80c464829d9ee0f3a36bb533d69699a63cec8", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.heliyon.2023.e15424", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "49f52004c2b5144633f2ad47e70563fc6fe76021", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
243525123
pes2o/s2orc
v3-fos-license
Identification method of drought resistance on maize based on qRT-PCR Background: In order to reveal the mechanism of drought resistance in maize, establish a molecular identification system for drought resistance and address the difficulty of identifying drought resistance in maize, drought stress and normal watering treatments were carried out at the seedling stage, flare stage, tasseling stage and filling stage, taking Nongdan 476, a drought-tolerant maize hybrid cultivar, and Zhongxin 978, a drought-sensitive maize hybrid cultivar, as materials, and leaf tissues were collected for transcriptome analysis. Results: P-value < 0.05 was selected as the screening standard. There were 6281, 17191, 21790 and 15475 differentially expressed genes at each stage, respectively. Only the DnaJ gene was significantly differentially expressed at all four stages. The preliminary results of qRT-PCR showed that the up-regulation of DnaJ gene expression was consistent with the results of the transcriptome analysis. DnaJ gene expression was further detected using control samples with a known yield-based drought resistance index. The results showed that the DnaJ gene expression drought resistance index (DIE) was highly correlated with the yield drought resistance index (DIP), and the 5-level criteria for the DnaJ gene expression drought resistance index (DIE) were defined by their linear equations. Conclusion: Drought resistance of maize is a trait controlled by multiple genes. There is a significant difference in DnaJ gene expression between drought-resistant and non-drought-resistant maize, so the gene has the potential to be used for molecular identification of drought resistance and can provide a more comprehensive and accurate technical means for drought-resistant cultivation and breeding of maize. Drought is an increasingly serious problem [6]. Maize is the world's highest-yielding food crop. In 2012, domestic maize production and planting area exceeded those of rice and wheat, making maize the largest food crop in China [7]. Maize needs a large amount of water throughout its growth period [8,9,10], and the shortage of water resources has a serious impact on the yield and quality of maize [11,12]. It is therefore very urgent to establish identification methods for drought resistance in maize. The drought resistance performance of maize is determined both by its own genetics and by environmental factors. Because crops differ in their growth and development periods and are influenced by biotic and abiotic factors, any single drought resistance study has certain limitations, making it difficult to evaluate the drought resistance of maize directly and accurately. There are many methods to identify the drought resistance of maize [13,14,15], including the field direct identification method, the artificial simulated environment method, the physiological index method and the molecular biological identification method. The field direct identification method evaluates the growth forms of crops in different growth and development periods, but it is time-consuming and subject to environmental factors, especially large interannual variation in precipitation, which makes the results difficult to reproduce. The artificial simulated environment method regulates the water content of the soil and air in a rain shelter, growth box or artificial climate chamber to create the drought stress environment required by experiments.
Drought resistance is then evaluated by studying the growth and development of maize, its physiological processes or changes in yield. This method requires special equipment and consumes a relatively large amount of energy, and the drought stress environment it creates differs from field production under natural conditions, so its results can differ somewhat from those of direct field identification. The physiological index method identifies the drought resistance of plants mainly through indicators such as leaf water status, plasma membrane permeability and enzyme activity. However, these physical and chemical indicators change with the growth environment and growth period, and are also prone to errors caused by the reagents used and by human operation. The molecular biological identification method is based on modern molecular biology: saturated molecular genetic maps are constructed using related molecular markers, the drought resistance genes of maize are mapped, and drought-resistant varieties are selected using molecular markers. Drought resistance is a quantitative trait controlled by multiple genes [16]. In recent years, many drought resistance QTLs have been found, but the polymorphism frequency of the markers is very low, a single QTL has little effect on the phenotypic difference, and epistatic effects are difficult to evaluate; further research is needed before molecular marker-assisted selection can be applied [17]. Therefore, it is urgent to establish new technologies and find new markers to solve these problems. In recent years, with the rapid development of omics, transcriptomics has been applied in crop drought resistance research, which provides a direction for the discovery and screening of molecular markers for drought resistance identification of maize and for improving the level of drought resistance identification. Sample Total RNA Agarose Gel Electrophoresis Detection Total RNA of 36 samples was isolated from non-stressed and stressed plants at the seedling stage, flare stage, tasseling stage and filling stage of the two maize hybrid cultivars (Nongdan 476 and Zhongxin 978) and prepared for agarose gel electrophoresis (Fig. 1). As can be seen in the figure, the total RNA electrophoresis bands of all 36 samples are clear and bright, and the RNA integrity is good, making the samples suitable for transcriptome sequencing. RNA-Sequencing (RNA-Seq) Analysis Total RNA isolated from non-stressed and stressed plants was used for the RNA-Seq transcriptome analysis. After filtering the sequencing results, a total of 2.03B of clean data was generated from the 36 samples. For all 36 samples, the proportion of clean-data bases with a quality value ≥ Q30 was higher than 95.95%, which confirms the reliability of the sequencing results (Table 1). The clean reads of each sample were mapped to the maize reference genome sequence (B73 RefGen_v3); the mapping rates ranged from 84.69% to 93.23% (Table 1). Differential Gene Expression Analysis In order to determine the response of the materials to drought, the transcriptomes of the same cultivar at the same stage under drought and control treatments were analyzed for differential expression, and in order to explore the genetic differences between the two extreme cultivars, the differentially expressed genes of the two hybrid cultivars under the same water condition were analyzed.
Therefore, four sets of differentially expressed genes were obtained in each treatment stage, namely seedling stage: TC_TD, SC_SD, TC_SC, SD_TD; flare stage: TC1_vs_TD1, SC1_vs_SD1, TC1_vs_SC1 and SD1_vs_TD1; tasseling stage: TC2_vs_TD2, SC2_vs_SD2, TC2_vs_SC2 and SD2_vs_TD2; filling stage: TC3_vs_TD3, SC3_vs_SD3, TC3_vs_SC3 and SD3_vs_TD3. With |log2(fold change)| > 1 and P-value < 0.05 as the screening criteria, 6281, 17191, 21790 and 15475 differentially expressed genes were identified at the seedling stage, flare stage, tasseling stage and filling stage, respectively (Fig. 2). Screening of Marker Genes When maize is under drought stress, genes are up-regulated and down-regulated in different development stages and in different cultivars (drought-tolerant or sensitive). Under the same moisture conditions before drought treatment, 4331, 6580, 8180 and 7477 differential genes were identified at the seedling stage (TC_SC), flare stage (TC1_vs_SC1), tasseling stage (TC2_vs_SC2) and filling stage (TC3_vs_SC3), respectively. After the drought treatment, 5398 differential genes were identified at the seedling stage (SD_TD), 6282 at the flare stage (SD1_vs_TD1), 10091 at the tasseling stage (SD2_vs_TD2), and 7442 at the filling stage (SD3_vs_TD3). In the drought-resistant cultivar (Nongdan 476), 129, 666, 2417 and 375 differential genes were identified at the seedling stage (TC_TD), flare stage (TC1_vs_TD1), tasseling stage (TC2_vs_TD2) and filling stage (TC3_vs_TD3) before and after drought stress treatment. In the sensitive cultivar (Zhongxin 978), 754, 3663, 1102 and 181 differential genes were identified at the seedling stage (SC_SD), flare stage (SC1_vs_SD1), tasseling stage (SC2_vs_SD2) and filling stage (SC3_vs_SD3) before and after drought stress treatment (Fig. 3). In response to drought stress, some gene sets play a more important role. As shown in the figure, I, II, III and IV represent the differentially expressed genes of the resistant cultivar Nongdan 476 before and after drought treatment and the differentially expressed genes in response to drought stress in the two hybrid cultivars. Through this analysis, we found that one differential gene, zm00001d02666, was identified as a co-expressed marker gene during the four stages of drought treatment, which can be used to identify the drought resistance of maize (Fig. 4). Moreover, heat maps were drawn for the expression of this differential gene in each group at the four stages. The expression level in the drought-resistant cultivar was higher than that in the sensitive cultivar after drought stress (Fig. 5). Verification of Expression Difference of the DnaJ Marker Gene The expression of the DnaJ gene at the seedling stage, flare stage, tasseling stage and filling stage was verified using the drought-resistant cultivar Nongdan 476 and the drought-sensitive cultivar Zhongxin 978 (Fig. 6). As can be seen from Fig. 6, the qRT-PCR expression level in the drought-resistant cultivar Nongdan 476 was high at all four stages, while that in the non-drought-resistant cultivar Zhongxin 978 was low at all four stages, suggesting that the RNA-Seq results for the DnaJ gene are highly consistent with the qRT-PCR results. The relationship between DIE and DIP is shown in Fig. 7; the correlation coefficient r = 0.92 indicates a significant correlation (P < 0.01). The corresponding classification criteria are given in Table 3.
According to Table 3, the criteria for the expression drought resistance index are as follows: ≥ 1.30 is extremely strong (HR); 1.11-1.29 is strong (R); 0.91-1.10 is medium (MR); 0.71-0.90 is weak (S); and ≤ 0.70 is extremely weak (HS). Discussion In this study, in order to explore the response of maize to drought, we analyzed the transcriptome differential expression of drought stress and control treatments at the seedling stage, flare stage, tasseling stage and filling stage. The results showed that the numbers of differentially expressed genes at the seedling stage, flare stage, tasseling stage and filling stage were 6281, 17191, 21790 and 15475, respectively. Furthermore, drought resistance is a very complex quantitative trait controlled by multiple genes. The DnaJ gene (zm00001d02666), with significant differences in expression at all four stages, was screened out through linkage analysis. The gene was preliminarily verified in drought-resistant and non-drought-resistant maize by qRT-PCR. Its expression was up-regulated in the drought-resistant cultivar and significantly different from that of the sensitive material. This result was consistent with the transcriptome analysis, which further suggested that the DnaJ gene has the potential to serve as a molecular marker for identifying drought-resistant material. In addition, this study collected maize cultivars whose drought resistance had already been identified and detected the expression level of DnaJ by qRT-PCR. The results showed that the expression level of DnaJ differed significantly among maize cultivars of different drought resistance, and the expression level was positively correlated with the drought resistance index of the different maize cultivars, with the difference being statistically significant, so it can be used as a marker for drought resistance identification of maize. The DnaJ protein is a member of the Hsp40 family. Its N-terminus contains a conserved J domain of about 70 amino acids, and it is also known as the J protein [18]. The DnaJ protein promotes the ATPase activity of HSP70 and acts as a chaperone of HSP70 [18,19]. In adverse environments, it ensures the correct folding of proteins, maintains the stability of peptide chains, and prevents cell damage caused by environmental stress [20]. Studies have shown that the DnaJ protein plays an important role in the life activities by which plants cope with environmental stress [21]. This provides a theoretical basis for selecting the DnaJ gene as a marker for drought resistance identification of maize. Conclusions The marker DnaJ gene has many advantages for identifying drought resistance. The method is simple: it can identify the drought resistance of maize by qRT-PCR alone, without needing to be combined with other traits and indexes. The identification is accurate: the results of this experiment show that the expression level of the DnaJ gene has a significant positive correlation with the drought resistance index (DIP) of different maize cultivars, which is consistent with the results of the field direct identification method widely used at present. It can be used to identify maize hybrids, such as Nongdan 476 and Zhongxin 978, and maize inbred lines, such as 8112, Zong31 and Mo17. The identification period is flexible: identification can be carried out at the seedling stage, flare stage, tasseling stage and filling stage. The qRT-PCR method based on the DnaJ gene will provide an advanced technology for the identification of maize drought resistance.
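To make the five-level grading above concrete, here is a minimal sketch of the lookup (Python is used for all examples here; the function name is hypothetical, and assigning values that fall between the listed bands to the lower grade is an assumption of this sketch, not something stated in Table 3):

```python
def classify_die(die: float) -> str:
    """Map an expression drought resistance index (DIE) to the
    five-level grades defined above (Table 3)."""
    if die >= 1.30:
        return "HR"  # extremely strong
    if die >= 1.11:
        return "R"   # strong (1.11-1.29)
    if die >= 0.91:
        return "MR"  # medium (0.91-1.10)
    if die >= 0.71:
        return "S"   # weak (0.71-0.90)
    return "HS"      # extremely weak (<= 0.70)

print(classify_die(1.25))  # -> R
```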
Transcriptome Sequencing Materials Two maize cultivars with contrasting drought sensitivity (the tolerant Nongdan 476 and the sensitive Zhongxin 978) were used in this experiment. Seeds of the two maize hybrid cultivars were provided by the North China Key Laboratory for Crop Germplasm Resources of the Education Ministry (Hebei Agricultural University, China). The experiment was conducted in May 2018 in a drought-resistant shed at Qingyuan, Baoding, Hebei province, China (38.79° N, 115.56° E). The area of the experimental plot was 24 m², with 60 cm row spacing and 30 cm plant spacing. Seeds of Nongdan 476 and Zhongxin 978 were sown two grains per hole at a depth of 6 cm on plots fertilized with 512 kg/hm² of compound fertilizer. In this experiment, a normal watering control group and a water stress treatment group were set up for the two experimental materials at the seedling stage, flare stage, tasseling stage and filling stage, respectively. Reference material Twelve maize cultivars, such as Xianyu 335, Nonghua 101 and Jixiang 1, were selected as control samples. Their drought resistance indexes and drought resistance levels are shown in Table 5. For the twelve maize materials, a normal watering control group and a water stress treatment group were set up. Leaves at the seedling stage were used as experimental material, RNA was extracted, qRT-PCR analyses were carried out, and the drought resistance index of gene expression was calculated using the marker gene to verify the drought resistance of the maize. Verification materials Forty identification materials, all hybrids promoted in production, were selected for stress treatment at the seedling stage. Maize seedlings were grown under normal conditions until three leaves were fully unfolded. Then, the forty identification materials were subjected to drought stress treatment for 7 days. Half of the plants grew under sufficient water conditions (control group); the remaining plants were subjected to drought stress, with the soil moisture content not exceeding 50% (treatment group). After 7 days of treatment, the drought-treated leaves at the seedling stage were taken as experimental material, RNA was extracted, and fluorescence quantification was performed. The drought resistance of the maize was determined using the marker gene. Measurement of Soil Relative Moisture Content and Sampling The water status of the maize was assessed by measuring the relative moisture content of the soil in the two experimental fields under normal irrigation and water stress. The relative soil moisture content in the normal irrigation control group and the water stress treatment group was 70-80% and 15-20%, respectively. The relative soil water content (RSWC) at a depth of one meter was monitored by a soil moisture meter (Zhejiang Top Cloud-Agri Technology Co. Ltd., Zhejiang, China). In the water stress treatment group, the maize plants were treated with drought at the seedling stage, the flare stage and during the 10 days before the flowering stage, and at the filling stage the plants were treated with drought from pollination onward. After the soil moisture content had remained below 20% for 7 days, the top leaves were collected from the control and drought-stressed plants, frozen immediately in liquid nitrogen, stored at -80 °C, and used for transcriptome analysis. Each treatment was replicated three times.
RNA Extraction, cDNA Library Construction and Transcriptome Sequencing Total RNA of the leaf samples was isolated from non-stressed and stressed leaves of the two maize hybrid cultivars using Trizol reagent (Invitrogen, Carlsbad, CA, USA) following the manufacturer's protocols. The RNA was purified on an RNeasy column (QIAGEN, Pudong, Shanghai, China) to remove genomic DNA. The concentration of the RNA was measured with a NanoDrop 1000 spectrophotometer (NanoDrop Technologies Inc., Wilmington, DE, USA), and the quality of the extracted RNA was checked by 1% agarose gel electrophoresis. A strand-specific library was constructed with a standard Illumina cDNA library kit and sequenced on an Illumina HiSeq X Ten platform (San Diego, CA, USA) by Novogene Bioinformatics Technology Co. Ltd. (Beijing, China). Processing, Mapping of Sequencing Reads and Gene Expression Quantification The raw data (raw reads) generated by the Illumina sequencing system contain low-quality sequences and adapters. To ensure the quality of the downstream analysis, clean data are required. In this step, clean data (clean reads) were obtained by removing sequences with N bases (N indicates that the base could not be determined), removing adapter sequences in the reads, removing low-quality bases (Q < 20), and removing tail bases with a quality value below 20 using a sliding-window method (window size 5 bp). These high-quality reads were used in all subsequent analyses. All clean reads were then mapped to the maize reference genome sequence (B73 RefGen_v3) with TopHat 2.0.12 [22]. Reads mapped to the known transcriptome, and partially mapped reads, were further analyzed and annotated. For functional annotation, the quality reads were aligned with BLAST (basic local alignment search tool) and annotated against the non-redundant protein sequence database (Nr) (https://www.ncbi.nlm.nih.gov/), Swiss-Prot (a manually annotated and reviewed protein sequence database) (https://web.expasy.org/docs/swiss-prot), Clusters of Orthologous Groups (COG) (https://www.ncbi.nlm.nih.gov/COG/) and the Kyoto Encyclopedia of Genes and Genomes (KEGG) (http://www.genome.jp/kegg) [23]. Gene expression levels were calculated, and standardized gene expression levels were expressed as RPKM (reads per kilobase of transcript per million mapped reads) [24]. Differentially Expressed Genes (DEGs) Library Construction and Differential Analysis The DESeq R package (1.10.1) [25] was used to analyze the differential expression of genes [26]. To obtain genes with significant differences, the screening conditions were adjusted P-value < 0.05 and fold change |log2FC| > 1. P-values for each contrast were corrected for multiplicity using the Benjamini-Hochberg method [27], and heat maps of the expression of the differential genes in each sample were drawn according to their RPKM values. Quantitative real-time PCR (qRT-PCR) Analysis To verify the expression level of the DnaJ gene detected by Illumina RNA-Seq, quantitative real-time PCR (qRT-PCR) was performed in this experiment on a C1000 Thermal Cycler (CFX96 Real-Time System, Bio-Rad).
Reverse transcription was performed using 1 μg of RNA as template in a 20 μl reaction according to the instructions of the HiFiscript cDNA Synthesis Kit (CWBIO, Beijing, China), and qRT-PCR was performed using the reverse-transcribed cDNA as template. In this experiment, Primer Premier 5 (Premier Biosoft International, Palo Alto, CA, USA) was used to design specific primers for the differentially expressed DnaJ gene, and the results of the transcriptome analysis were verified. The maize GAPDH gene (accession no. X07156), with stable expression, was selected as the internal reference gene. The qRT-PCR reaction system comprised 2 μl of template cDNA, 0.5 µl of forward primer (50 pmol), 0.5 µl of reverse primer (50 pmol), and 10 µl of SYBR Green mix (TOYOBO, Japan) in a total reaction volume of 20 µl. Each sample had three technical replicates. The relative mRNA abundance was calculated according to the 2^-ΔΔCT method [28]. Statistical Data Processing and Analysis Methods The measured data were collected and processed in Microsoft Excel 2013. Expression Drought Resistance Coefficient (DCE) formula: DCE = expression under water stress ÷ expression under control conditions. Expression Drought Resistance Index (DIE) formula: DIE = (DCE × expression under water stress) ÷ average expression under water stress of all maize cultivars. The 5-level classification standard of the expression drought resistance index was determined according to the 5-level classification standard of the yield drought resistance index: ≥ 1.20 is extremely strong (HR); 1.01-1.19 is strong (R); 0.81-1.00 is medium (MR); 0.60-0.80 is weak (S); and ≤ 0.60 is extremely weak (HS).
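As a hedged illustration of the two calculations just defined (a minimal sketch under the stated formulas; the Ct values and expression numbers are made up, and NumPy is an assumed tool here rather than the authors' Excel workflow):

```python
import numpy as np

def rel_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """Relative mRNA abundance by the 2^-ddCT method [28]:
    ddCT = (Ct_target - Ct_reference)_sample - (Ct_target - Ct_reference)_control."""
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)

# Hypothetical Ct values: DnaJ vs. GAPDH, stressed sample vs. control.
print(rel_expression(22.1, 18.0, 24.3, 18.2))  # -> 4.0, i.e. 4-fold up-regulation

# Hypothetical DnaJ expression levels for three cultivars,
# under water stress and under normal watering (control).
stress = np.array([8.2, 5.1, 6.7])
control = np.array([6.0, 5.5, 6.1])

dce = stress / control              # DCE = stress / control
die = dce * stress / stress.mean()  # DIE = DCE x stress / mean stress over cultivars
print(die.round(2))                 # e.g. [1.68 0.71 1.1]
```

The resulting DIE values would then be graded with the five-level standard given above.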
2020-10-28T18:57:29.173Z
2020-10-01T00:00:00.000
{ "year": 2020, "sha1": "6ddba20aba14c8bfa4eee61d578459d38b7394bb", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-77359/v1.pdf?c=1601587263000", "oa_status": "GREEN", "pdf_src": "ScienceParsePlus", "pdf_hash": "9bc0affa9cbfa5fab0a42795685d6ee9c96b6f84", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [] }
218674179
pes2o/s2orc
v3-fos-license
Computer simulations of a heterogeneous membrane with enhanced sampling techniques Computational determination of the equilibrium state of heterogeneous phospholipid membranes is a significant challenge. We wish to explore the rich phase diagram of these multi-component systems. However, the diffusion and mixing times in membranes are long compared to typical time scales of computer simulations. Here, we evaluate the combination of the enhanced sampling techniques molecular dynamics with alchemical steps and Monte Carlo with molecular dynamics with a coarse-grained model of membranes (Martini) to reduce the number of steps and force evaluations that are needed to reach equilibrium. We illustrate a significant gain compared to straightforward molecular dynamics of the Martini model, by factors between 3 and 10. The combination is a useful tool to enhance the study of phase separation and the formation of domains in biological membranes. I. INTRODUCTION Biological membranes are complex environments that function as a semi-permeable barrier between the cell interior and the external environment. They consist of phospholipids, cholesterol, protein molecules, and more. The membrane components assemble into microphases and nanodomains and regulate cell function. According to the raft hypothesis, lateral inhomogeneity of the lipid membranes plays a key role in cell signaling, protein aggregation, and membrane fusion. 1 A number of experimental techniques, such as X-ray and neutron scattering, 2,3 nuclear magnetic resonance (NMR), 4 and others, 5 provide structural data on lipid membranes. Despite significant progress, studying the structure of biological membranes at molecular resolution remains a challenging task due to the disordered and fluid characteristics of these systems. Scattering-based techniques can directly probe the spatial organization of lipid membranes without introducing additional probes that alter the membrane structure. 6 However, the scattering signal is a spatial and temporal average over many fluid structures, leading to a smooth and less detailed signal. The averaging and separation of signals is even more difficult in mixed membranes with multiple types of phospholipids. 7 The rapid growth in the power of computers and the development of simulation methodology have significantly increased the use of computer simulations for the study of lipid membranes. [8][9][10] Recently, a state-of-the-art realistic model of the plasma membrane that contains more than 60 types of phospholipids was developed. 11 Nevertheless, sampling the equilibrium distribution of the constituents of heterogeneous membranes remains computationally challenging. The key problem is the slow diffusion of phospholipids in the membrane plane, which prevents efficient mixing at time scales accessible to Molecular Dynamics (MD). One approach to reducing the computational cost is to use a coarse-grained description of lipids, such as the Martini model. 12 On average, the Martini model represents four heavy atoms as a single particle or bead. The reduction in the total number of particles compared to atomistic force fields leads to a significant gain in speed. Moreover, the removal of the fast degrees of freedom (e.g., vibrations of atomic bonds) enables the use of larger time steps and diminishes thermal noise and friction. The diffusion coefficient is D = kBT/γ, where kB is the Boltzmann constant, T is the temperature, and γ is the friction coefficient.
Hence, the diffusion is faster when the friction coefficient is smaller. The effective speedup of the diffusion within the Martini model (∼4-5 times faster than in atomistic models and experiments) has been documented in the literature. [12][13][14] However, even with the significant speedup compared to atomistic models, reaching equilibrium of large heterogeneous membranes with classic MD and the Martini force field is computationally expensive. Recently, a sampling approach that combines MD with Monte Carlo (MC) approaches in the grand canonical ensemble was proposed. The method is particularly suitable for the simulation of heterogeneous lipid membranes in which the different lipids are quite similar. 15 The system is sampled by alternating steps of (1) a straightforward MD step in the microcanonical or canonical ensemble and (2) an MC move that replaces a phospholipid by another phospholipid of a different type. A random lipid is selected for such an alchemical transformation. If the MC move is accepted following the usual Metropolis criterion, the lipid molecule is modified to its lipid counterpart (for example, from DPPC to DPPS). The MD/MC approach is not bound by the slow lateral diffusion of lipids, and phase separation and mixing are potentially sampled more efficiently than with straightforward MD. The challenge with the MC approach is that the lipid types must be similar for the MC move to be accepted with a reasonable probability. Because the acceptance is typically low in Monte Carlo with Molecular Dynamics (MC-MD) in membranes, many trials are needed. To increase the number of trials for a fixed number of force evaluations, only a single or a few MD steps separate two MC steps. The extraction of kinetic information (such as the diffusion constant) is not possible in this approach. The rapid transitions between MD and MC moves may also lead to hysteresis, and the sampling may deviate from the desired equilibrium. In addition, as the main goal of this approach is to enhance the sampling of mixing lipids, the procedure has to be more efficient than straightforward MD to be of any practical use. Optimizing trial MC moves to achieve higher acceptance probabilities allows longer MD trajectories between the MC moves and better relaxation to equilibrium. It also makes it possible to extract short-time kinetic information, such as a local diffusion constant. Therefore, a new algorithm was proposed: Molecular Dynamics with Alchemical Steps (MDAS). 16 Instead of performing the exchange in a single MC move, we conduct a gradual growth of the two lipids into their counterparts, relying on the Jarzynski equality 17 and the algorithm for candidate Monte Carlo moves 18 to obtain the correct statistics. This exact approach significantly increases the acceptance probability of the MC move. Instead of modifying a single phospholipid, an exchange of a pair of phospholipids of different types is considered, which ensures a fixed composition. The MDAS algorithm generates steps that are more likely to be accepted than conventional MC moves. However, if the interactions of the exchanged phospholipids with their environments are significantly different, the gradual modification of the molecules can be inefficient. An example of a challenge for atomically detailed simulation, which is discussed in Sec. IV, is the pair of PS and PC phospholipids. PC is neutral, while PS is negatively charged.
Therefore, the electrostatic interactions impact the rate of relaxation to equilibrium, leading to many rejected MC-MD and MDAS steps. This makes the efficient use of this procedure a challenge for atomistic simulations. However, this problem may not be present in the coarse-grained Martini model. The mutation of PS to PC in Martini adjusts one bead (the head group) with only short-range interactions (Fig. 1). Here, we explore the use of the MDAS and MC-MD algorithms specifically for the sampling of a lipid mixture with the Martini force field. We test the performance of the algorithm with a binary DPPC/DPPS mixture and benchmark it against the straightforward MD and MC-MD approaches. For comparison, we also carry out the atomistic equivalent. A. MDAS and MC-MD In the current paper, we use three approaches for the simulation of the system: straightforward MD, MC-MD, and MDAS. The MC-MD simulation is conducted as a series of alternating straightforward MD steps and MC lipid exchange moves. We randomly select a pair of different types of phospholipid molecules (e.g., one DPPC and one DPPS molecule) and change the chemical identity of the two lipids. Then, we either accept or reject the proposed MC move based on the Metropolis criterion, p_acc = min{1, exp[−ΔU(ri, rj)/kT]}, (1) where ΔU is the energy difference before and after the trial move, ri and rj are the coordinates of the selected pair of lipids, k is the Boltzmann constant, and T is the temperature. If the move is accepted, we continue with an MD trajectory starting from the new state. If the move is rejected, we go back to the state before the exchange attempt and run a straightforward MD step. The MDAS algorithm substitutes the single-step MC move with a gradual adjustment, which is computed using an alchemical trajectory (AT). 16 During the AT, the selected phospholipid molecules change into their counterparts (e.g., DPPC to DPPS and vice versa). An MDAS simulation consists of (1) straightforward MD trajectories for sampling and (2) "alchemical" trajectories (AT) that modify the phospholipids and generate candidate MC moves. 18,19 The AT is similar to the alchemical methods used to determine the free energy difference between two states. 20,21 As in a free energy calculation, the AT path is parameterized with λ ∈ [0, 1]. When λ = 0, we are at the beginning of the attempted exchange, and when λ = 1, at the end of it. The potential energy along the AT, U(R, λ), is a function of λ. The dependence of the potential on λ is the choice of the user. The simplest implementation is linear. For an exchange from a system A to a system B, we have U(λ) = (1 − λ)UA + λUB. We conduct the AT as follows: We run M steps of straightforward MD at a fixed value of λ and then increase λ by a small Δλ in a single step. Starting with λ = 0, the M steps and the increase in λ are repeated until λ is equal to 1. The work done on the system during the entire AT is W = Σi [U(xi, λi + Δλ) − U(xi, λi)], (2) where xi are the system coordinates after i repeats of the M steps and λi = iΔλ. The total work is used in an acceptance-rejection criterion of the AT move, 16,19 similar to Metropolis [Eq. (1)]: p_acc = min{1, exp(−W/kT)}. (3) If the move is accepted, we continue the simulation from the last configuration of the state λ = 1 (the lipids are exchanged) and proceed with another segment of a straightforward MD trajectory. If the move is rejected, we discard the AT and continue from the last step of the previous straightforward MD segment.
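As a hedged illustration of this accept/reject bookkeeping, the sketch below implements the AT loop in schematic form. It is a minimal sketch, not the production GROMACS/NAMD workflow: `U(x, lam)` and `md_steps(x, lam, M)` are hypothetical placeholders for the potential energy and the constant-λ MD propagator.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 2.785  # k_B * T in kJ/mol at 335 K

def alchemical_trajectory(x, U, md_steps, M=10, dlam=0.01):
    """One candidate MDAS move: M MD steps at fixed lambda, then a
    single-step increase of lambda by dlam, accumulating the work
    W = sum_i [U(x_i, lam_i + dlam) - U(x_i, lam_i)]  [Eq. (2)]."""
    W = 0.0
    n = round(1.0 / dlam)
    for i in range(n):
        lam = i * dlam
        x = md_steps(x, lam, M)            # relax at fixed lambda
        W += U(x, lam + dlam) - U(x, lam)  # instantaneous lambda switch
    return x, W

def mdas_move(x, U, md_steps):
    """Metropolis test on the total work [Eq. (3)]; on rejection the
    AT is discarded and sampling resumes from the pre-AT state."""
    x_new, W = alchemical_trajectory(x, U, md_steps)
    if rng.random() < min(1.0, np.exp(-W / kT)):
        return x_new, True   # lipids exchanged (lambda = 1 state)
    return x, False          # continue from the previous MD segment
```

The choice Δλ = 1 and M = 0 in this loop collapses to the single-step MC-MD exchange, as noted next.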
Thus, the entire MDAS simulation consists of a series of short conventional MD trajectories with exchange steps (AT) in between. We use only the conventional MD segments to calculate the thermodynamic properties of the system. In the current paper, we use two force fields to study the DPPC/DPPS lipid mixture: the coarse-grained Martini force field and the atomically detailed CHARMM36 force field. In the simulations with the Martini model, an exchange move consists of changing the chemical identity of a single headgroup particle (bead), corresponding to the transition of the type P5 bead of DPPC to the type Q0 bead of DPPS (Fig. 1). The change is performed by modifying the VdW interaction parameters and the charge, without any particles vanishing or new particles appearing. With the atomistic CHARMM36 force field, an exchange move requires a swap between the choline group of DPPC and the serine group of DPPS, which involves multiple atoms (Fig. 2). In contrast to the Martini case, this transition is considerably more involved. The specific MDAS choice of Δλ = 1 and M = 0 is equivalent to a single-step MC exchange. In MDAS, the role of the AT trajectory is to allow the system to relax. A gradual exchange with Δλ ∼ 0.001-0.01 yields significantly lower work compared to the direct MC exchange, as it produces less steric overlap and thus avoids the corresponding high energies. Therefore, the MDAS steps in the atomically detailed models are accepted with a much higher probability than direct MC moves. However, if the two types of phospholipids are similar (as DPPC and DPPS are in Martini), the additional cost of computing the AT vs direct MC is not a priori an improvement, and testing is required. III. METHODS The simulations were performed with the standard Martini 2.2 force field, 22 with and without polarizable water. 23 The electrostatic interactions were modeled with the reaction-field method. 24 The screening constant was 15 and 2.5 for non-polarizable and polarizable water, respectively. The cutoff distance of the vdW interactions was 1.1 nm with a potential-shift modifier. All the simulations were conducted at 335 K. The temperature was fixed with velocity rescaling 25 with a 1.0 ps coupling constant and two separate coupling groups for the membrane and the solvent. A semi-isotropic (xy and z directions) Parrinello-Rahman barostat 26 maintained a constant pressure of 1.0 bar with a coupling constant of 12.0 ps. Integration was performed with the Verlet algorithm with a 20 fs time step. Standard MD simulations were performed with GROMACS 2019.1. 27,28 MDAS and MC-MD simulations were also conducted with GROMACS 2019.1. All of the required free energy code is already available in GROMACS, and the only change in the code was a hardcoded optimization to maintain lambda at the same value for n steps instead of the default options of a constant lambda or a lambda that changes linearly every step. For this proof-of-concept study, GROMACS was called as a new process for every MD part, and a number of scripts were used to parse the necessary energy and other information. For production use, a more integrated code is being developed. A 1:1 mixture of DPPC and DPPS (200 DPPC and 200 DPPS molecules, 100 molecules of each type per monolayer) was considered. The system was solvated with 5815 Martini water beads, and 265 Na+ and 65 Cl− ions were added to the system. Because the differences between DPPC and DPPS in Martini are small (Fig. 1), a short AT of 1000 steps was sufficient. The parameter λ is modified every ten steps, hence Δλ = 0.01.
After 1000 steps, the total work is computed and the proposed move is accepted or rejected. Then, we sample another 2000 steps of straightforward MD before attempting another AT. The same approach was used for the MC-MD sampling scheme, but the AT is a single step (Δλ = 1.0). To compare the different methods on an equal footing, we consider the number of force evaluations used per number of sampled configurations. The cost of a single MDAS exchange attempt is 3000 force evaluations (2000 straightforward MD and 1000 AT steps), and it is 2001 force evaluations for a single exchange attempt in MC-MD. For comparison, we also ran an atomistic MDAS simulation for the DPPS/DPPC system using the program NAMD 29 and the CHARMM36 force field. 30 We had considerable success in the past in simulating a mixture of DOPC and DPPC, 16 which only differ in their tails. We show in Fig. 2 the required alchemical changes in the atomically detailed MDAS simulations. There are 26 atoms that require modifications, which is a considerably more complex task than the single-particle exchange of Martini or our previous DOPC/DPPC test case. As noted earlier, changes in the electrostatic interactions within the atomic models pose an additional and significant challenge to MDAS. The assigned charges of the phospholipids are those of the CHARMM36 force field 31 used in the atomistic simulations. We consider a 1:1 binary mixture of DPPC and DPPS. The bilayer consists of 200 phospholipids and is solvated with TIP3P water molecules. 32 100 potassium ions were added to neutralize the system. The entire system contains ∼50 000 atoms. The membrane was first equilibrated in the NPT ensemble for 10 ns, followed by a 10 ns NVT simulation. To examine the efficiency of MDAS, we conducted 100 AT attempts. Each AT had a total length of 100 ps with Δλ = 0.001. During an AT, the system was simulated in the NVT ensemble with a Langevin thermostat. To avoid the so-called end-point catastrophe, we used a separation-shifted soft-core potential to treat the van der Waals interactions during the exchange, 33 of the form U_vdW(r, λ) = 4ελ[(σ^2/(r^2 + δ(1 − λ)))^6 − (σ^2/(r^2 + δ(1 − λ)))^3], where δ = 5.0 nm^2 and λ changes from 0 to 1 during the alchemical step. Note that at λ = 1, the above expression turns into the 6-12 Lennard-Jones potential, and the interaction vanishes at λ = 0. The time step was 1 fs in all the atomistic simulations. In the all-atom case, we used a dual topology scheme. We select a DPPS and a DPPC randomly and replace them with a dummy molecule that has a combination of PS/PC headgroups. Then, these molecules are evolved with MDAS to make the exchange between the molecules. The velocities of the dummy atoms are selected randomly from the Maxwell distribution according to the desired temperature of the system. The bonded interactions of the dummy atoms do not change the statistics of the system and do not contribute to the work. If the move is accepted, the dummy atoms and bonds are removed, and a regular MD simulation is conducted before trying the next MDAS move. IV. RESULTS We use the 1:1 DPPC/DPPS lipid mixture as our test system for sampling efficiency. We simulate the Martini model using MDAS, mixed MC-MD, and straightforward MD. The atomically detailed calculations were attempted with MDAS. The initial state of the system was of separated DPPC and DPPS molecules (Fig. 3), which is far from the equilibrium of a uniformly mixed membrane. The radial distribution function g(r) of the PO4 beads of the DPPS (or DPPC) phospholipids monitors mixing as a function of time.
We have shown in Ref. 16 that the highest peak of the pair correlation function, max[g(r)], is a good measure of the relaxation and is comparable to the alternative measure of mixing entropy. 34 As the mixing occurs, max[g(r)] approaches the constant value corresponding to uniform mixing of the two phospholipid types. To obtain a quantitative estimate of the relaxation rate, the evolution of max[g(r)] is fitted with an exponential function. With the current choice of parameters for the MDAS moves, the acceptance probability is about 29%. Figure 4 shows the time evolution of max[g(r)] for DPPS-DPPS PO4 beads as a function of the number of force evaluations. The fit of an exponential function to the evolution of max[g(r)] gives a ∼11.3 ± 0.4 times speedup of the system mixing compared to straightforward MD. If we use the mixed MC-MD sampling scheme, which is equivalent to MDAS with an AT of a single step, the acceptance probability is about 16%. An exponential fit of max[g(r)] as a function of the number of force evaluations suggests a ∼10.1 ± 0.2 times speedup of the mixing dynamics compared to sampling by straightforward MD. Figure 5 shows snapshots of the top view of the lipid bilayer simulated with MD, MDAS, and MC-MD. After 2 × 10^6 force evaluations in a straightforward MD simulation, the bilayer is far from laterally homogeneous. At the same time, after 660 MDAS steps or 1000 MC-MD steps, which correspond to 2 × 10^6 force evaluations, the two lipid types are well mixed. To further explore the performance of the different sampling schemes with a different parameterization, we simulated the same Martini system with polarizable water. As in the previous case, we benchmark the MDAS and MC-MD methods against straightforward MD. With MDAS, we obtain a 9% acceptance probability for the DPPC/DPPS exchange move, which translates into a ∼2.5 ± 0.1 times speedup compared to straightforward MD (Fig. 6). After 10 000 attempts, we did not accept a single exchange move with MC-MD sampling. From the work values, we estimate the average acceptance probability as 8.8 × 10^−5. The distributions of work for the different setups with the Martini force field considered here are shown in Fig. 7. A breakdown of the contributions of the VdW and Coulomb interactions to the total work for a typical MC-MD or MDAS exchange step indicates that for MDAS without polarizable water, ∼67% (0.43 kcal/mol) of the total work comes from Coulomb interactions and 33% (0.20 kcal/mol) from VdW. MDAS with polarizable water gives 90% (3.49 kcal/mol) of the work from Coulomb interactions and 10% (0.36 kcal/mol) from VdW. With MC-MD, we see a similar trend of a significant increase in the weight of the Coulomb interactions in the total work during an exchange move when polarizable water is used: with the non-polarizable water model, 16% (1.07 kcal/mol) of the work comes from Coulomb interactions and 84% (5.48 kcal/mol) from VdW, while with polarizable water, the MC-MD work shows a 95% (26.91 kcal/mol) contribution from Coulomb interactions and 5% (1.38 kcal/mol) from VdW. For comparison, we also attempted to simulate the mixing of the DPPC/DPPS system using an atomically detailed MDAS model. We evaluated 100 AT steps of length 100 ps each using Δλ = 0.001. The length of the straightforward MD trajectories between AT attempts was 100 ps as well. In Fig. 8, we show a histogram of the work values obtained from the AT trajectories. The distribution is broad and includes high work values, which makes the acceptance probability less than 10^−5 and impractical for the current AT path.
For a typical exchange move, the Coulomb interactions contribute ∼79% (16.2 kcal/mol) of the total work, and the VdW contribution is 21% (4.3 kcal/mol). For a qualitative analysis, we have calculated time courses for the relaxation of the different energy terms in the MDAS and the atomically detailed simulations (Figs. 9 and 10). For the atomistic model, the electrostatic interactions are much slower to relax than the van der Waals interactions (Fig. 9). For the Martini examples, we observe the same trend: the electrostatic interactions decay much more slowly than the van der Waals interactions with the AT trajectory length, for both the non-polarizable (Fig. 10, top panel) and polarizable water models (Fig. 10, bottom panel). V. DISCUSSION The Martini model offers an efficient approach to sample membrane configurations by reducing the number of particles and using smoother energy landscapes compared to the atomistic models. It enables the study of heterogeneous membranes, assembly, and separation. However, the enormous diversity of biological membranes and their sheer sizes pose a significant challenge for converging straightforward MD simulations, even with the Martini model. MDAS enables a speedup of ∼1000 for specific atomistic systems. 16 However, there are lipid compositions that are difficult to simulate with atomically detailed MDAS models. The challenge in MDAS simulations is the design of the AT such that the amount of work is minimal and leads to a significant acceptance probability. For example, an efficient acceptance probability when using ∼100 ps trajectories of straightforward Molecular Dynamics is about 10%. This design is difficult for the exchange of phospholipids with different charges, as we illustrated in this manuscript for the DPPC/DPPS system. The charge and the membrane electric field in atomically detailed models relax slowly to the new equilibrium imposed by the exchange. Here, we have shown that the combination of Martini and MDAS is promising. The simplified description of the electrostatic interactions and the smaller differences between two different lipid topologies facilitate the design of exchange pathways and a high acceptance rate of transformation steps (Figs. 7 and 8). The design of an efficient AT for diverse pairs of phospholipids is a topic of ongoing research. We compared straightforward MD and MDAS calculations for an atomistic model and the standard and polarizable Martini models. MDAS has the potential to be significantly advantageous compared to straightforward MD (Figs. 4 and 6), in particular for the standard Martini model. The efficiency of the MDAS algorithm can be evaluated from the work distributions computed in exploratory ATs (Figs. 7 and 8). An acceptance probability of the order of, or greater than, 10% allows for straightforward MD trajectories of about 100 ps between ATs. If the acceptance probability is below this threshold, straightforward MD is likely more efficient. What types of AT generate small values of work? In our experience, modifications of the hydrocarbon chains (their length, or single vs. double bonds along the lipid chain) are good candidates for an MDAS calculation in atomic detail. A modification of the head group is more challenging for atomically detailed models, both compared to standard MD and compared to Martini (Figs. 7 and 8). For phosphate head groups of different charges, an efficient AT is hard to find.
Similarly, an inefficient MC-MD move is found in the Martini model that incorporates the electrostatics of water [Fig. 7(b), right panel]. The DPPC/DPPS Martini system with polarizable water is an interesting example in which the MC-MD algorithm (or a single-step AT) is not very efficient. In the non-polarizable case, the charged PS headgroups interact electrostatically only with the ions, as the water beads in the Martini model are not charged. However, more electrostatic interactions are present when the polarizable water model is used. The water model includes an induced dipole that interacts with the charges of the head group. When we switch from PC to PS in a single step, the electrostatic interactions of the PS groups with the water molecules contribute a significant energy difference between the exchanged states, which amounts to 95% (26.9 kcal/mol) of the total work done during an exchange (see the breakdown of the different contributions to the total work in Sec. IV). As a result, the average acceptance probability in MC-MD is ∼8.8 × 10^−5. However, if we simulate the transition between the PS/PC headgroups gradually with an AT, the work of the transition is reduced [Fig. 7(b), left panel], which translates into a higher acceptance probability of the proposed exchange move. The electrostatic interactions still contribute up to 90% of the total work in this case, but the contribution amounts to ∼3.5 kcal/mol, in contrast to 26.9 kcal/mol with MC-MD for a typical exchange move. The VdW contribution to the total work increases in absolute value in the case of polarizable water for MDAS (from 0.20 kcal/mol to 0.36 kcal/mol) and decreases for MC-MD (from 5.47 kcal/mol to 1.37 kcal/mol), but the total work is still dominated by the electrostatic interactions. As the only source of new charges in the setup with polarizable water, compared to the non-polarizable case, is the dipoles of the water beads, we attribute the difference between the work with MDAS and with MC-MD to the ability of the polarized water molecules to readjust during the AT, having sufficient time to lose their transient dipoles. An interesting application of the sampling methods for phospholipid mixtures would be the investigation of asymmetric lipid bilayers. One can propose an AT that would exchange lipids between the two leaflets. Since flip-flop movements between the leaflets are an activated process with a significant barrier, such a move can greatly increase the equilibration rate. Of course, one should keep in mind that membrane asymmetry is maintained in biological systems by non-equilibrium processes, including ATP- and gradient-powered transporters, and a true equilibrium may not be desired in such cases. However, for the investigation of synthetic systems, which may still be asymmetric, the MDAS algorithm and the Martini model are promising. VI. CONCLUSIONS In the current paper, we explored the possibility of using the exchange-based MDAS and MC-MD algorithms for the efficient sampling of mixed lipid bilayers within the Martini and atomically detailed models. The model system is a binary 1:1 DPPC/DPPS mixture that illustrates how the advanced sampling approaches within the Martini force field can significantly increase the sampling efficiency and open new possibilities for simulations of large multi-component lipid mixtures. For the relatively small system considered in this paper (400 lipid molecules), the speedup factor can be as large as 11.
In the case of the Martini model with polarizable water, the MC-MD approach yields low acceptance probabilities, making sampling with this approach inefficient. However, the MDAS method still provides up to 2.5 times speedup for the same setup. For exchange moves with small energy modifications, the use of the basic MC-MD sampling scheme might be a desirable approach. However, for more complex exchange moves that require substantial adjustments of the system, the MDAS algorithm is more efficient compared to the MC-MD approach. The importance of the proper design of an exchange move with the MDAS algorithm and the particular advantage of a coarser representation of lipid molecules in the Martini model is illustrated with the atomistic simulations of a DPPC/DPPS mixture, where the energy modification during an exchange move is significantly higher compared to the Martini model. DATA AVAILABILITY The data that support the findings of this study are available from the corresponding author upon reasonable request.
2020-05-19T01:01:07.622Z
2020-05-17T00:00:00.000
{ "year": 2020, "sha1": "182531f09711758c3395ef5a15d3746a14df6066", "oa_license": "CCBY", "oa_url": "https://aip.scitation.org/doi/pdf/10.1063/5.0014176", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "4288794b4fb651fa5727cc6a8fdb4e7ca0475988", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Materials Science", "Physics" ] }
253355337
pes2o/s2orc
v3-fos-license
Pathogens in PICU before and during the SARS-CoV-2 pandemic in China: a multicenter retrospective study Background Nonpharmacological interventions for COVID-19 could reduce the incidence of children hospitalized in pediatric intensive care units (PICU) and the incidence of children with bacterial infections. This study aimed to evaluate changes in the bacterial profile of children in the PICU before and during the COVID-19 pandemic. Methods This is a retrospective study involving clinical data of children with positive bacterial cultures admitted to the PICU in 2019 and in 2021. Results In total, 652 children were included in this study. The total number of hospitalized patients and the incidence of bacteria-positive children in 2021 were lower than those in 2019. There were no significant differences in the rates of Gram-positive bacterial infection, Gram-negative bacterial infection or fungal infection between the two years. The rate of Streptococcus pneumoniae in 2021 was higher than that in 2019 (p = 0.127). The incidence of Haemophilus influenzae in hospitalized patients decreased, with a downward trend (p = 0.002). The distributions of previous underlying diseases in children admitted to the PICU with different outcomes of bacterial infection were homogeneous between the two years (p > 0.05). Conclusion After the implementation of COVID-19 isolation, prevention and control measures, the numbers of hospitalizations and bacterial infections in the PICU decreased, which may be due to changes in the population's behavior patterns. Meanwhile, the incidence of Haemophilus influenzae in hospitalized patients decreased, with a downward trend. Introduction COVID-19, caused by SARS-CoV-2 infection, broke out in Wuhan, China at the end of 2019. Protective measures such as large-scale control of population movement, large-scale disinfection, suspension of school and work, and maximum use of masks were implemented to cut off the transmission of the coronavirus to the greatest extent. After the application of this series of prevention and control measures, the spread of the coronavirus has been controlled, though it is still prevalent within a certain range. A United States multi-center study showed that between March 2020 and April 2020, the incidence of acute respiratory illnesses, including those involving respiratory syncytial virus and influenza, continued to decrease among children in seven cities. This result may be attributed to timely and sustained isolation measures, which effectively cut off transmission routes [1]. Children with severe infection admitted to the pediatric intensive care unit (PICU) mostly suffer mixed infections of both viruses and bacteria. Notably, bacterial infection has been considered an important complication of influenza pandemics, among which Streptococcus pneumoniae infection is the most common [2].
Moreover, studies have demonstrated that influenza, as a risk factor for bacterial infection, can often lead to secondary bacterial infection with Streptococcus pneumoniae, Staphylococcus aureus or Haemophilus influenzae [3]. In the United States, the number of people vaccinated with pneumococcal vaccine and influenza vaccine decreased significantly during the COVID-19 pandemic [4]. However, taking pneumococcal vaccine as an example, vaccination is an important way to prevent infection [4,5]. Although the whole world is struggling to cope with the consequences of the global pandemic associated with severe pneumonia caused by SARS-CoV-2, the existing evidence suggests that bacterial infection is a key factor contributing to severe infection in children. This study aimed to evaluate whether the bacterial pathogens in PICU inpatients in mainland China changed during the COVID-19 epidemic. Methods This is a retrospective study. Data were collected from six representative medical centers in different regions of China, namely: Luoyang Maternal and Child Health Hospital, Henan Province (Central China); The Seventh Medical Center of Chinese PLA General Hospital (North China); Guangdong Provincial People's Hospital and The People's Hospital of Guangxi Zhuang Autonomous Region (South China); Shandong Provincial Hospital (East China); and Xi'an Children's Hospital (West China). Study subjects: data of children within 48 h of PICU admission from the 6 medical centers were evaluated. Inclusion criteria: (1) patients aged from 29 days to 18 years; (2) study period: 2019 and 2021; (3) children with positive bacterial culture results in the first 48 h of admission; (4) all children admitted to the PICU, including those admitted directly to the PICU and those transferred to the PICU later. The specimens included sputum, tracheal aspirate, nasopharyngeal aspirate, bronchoalveolar lavage fluid, urine, cerebrospinal fluid and blood. Exclusion criteria: (1) nosocomial infection (an infection that occurred more than 48 h after admission to the PICU; an infection directly related to the last hospitalization; or a new infection at another site appearing on the basis of the original infection); (2) patients admitted to the PICU after being in the ward for more than 48 h, with growth in cultures taken after 48 h of hospitalization; (3) repeated specimens from the same patient. All laboratory results came from the clinical laboratory management system; data were recorded by a physician on preprinted case report forms and then collected by another physician in an electronic form. The data directly extracted from the electronic system were validated by two physicians back-to-back. The study was approved by the Ethics Committee of Luoyang Maternal and Child Health Hospital (KY2022021401.0; Luoyang, Henan, China), and informed consent was not required. This study was registered at http://www.chictr.org.cn/index.aspx (ChiCTR2200057182).
Statistical analysis Descriptive statistics for all patients admitted to the PICUs are shown as frequency (percentage) for categorical variables or median (interquartile range) for continuous variables. The statistical analysis was performed with SPSS 24.0 (SPSS Inc., Chicago, USA). The Wilcoxon rank-sum test was used to compare age, body weight, hospital duration and hospital cost between the two years. Categorical variables, including gender, receipt of mechanical ventilation, prognosis and others, were compared by the Cochran-Mantel-Haenszel (CMH) chi-square test. The chi-square test was used to compare the rate of Streptococcus pneumoniae among Gram-positive bacteria and of Haemophilus influenzae among Gram-negative bacteria. Results In 2019 and 2021, 3,700 and 2,891 patients were admitted to the PICUs, of whom 391 (2019) and 261 (2021) had positive bacterial cultures. Both the total number of inpatients and the number of children with positive bacterial cultures were higher in 2019 than in 2021. A month-by-month comparison of the same periods showed that the total number of inpatients in January 2019 was slightly lower than that in 2021, while the monthly numbers from February to December 2019 were higher than those in 2021 (see Fig. 1). Among the children with positive bacterial cultures admitted to the PICUs in 2019 and 2021, the main source of positive results was the respiratory tract. The distribution of infection sites showed no statistically significant difference between the two years (p = 0.467). In our cohort, infection sites included the respiratory tract (78.3% vs. 82.8%), blood (11.5% vs. 10.3%), urinary system (2.3% vs. 1.9%), central nervous system (3.1% vs. 1.1%) and lung (4.9% vs. 3.8%) (see Table 2). The rates of Gram-positive bacterial infection (40.4% vs. 42.1%, p = 0.659), Gram-negative bacterial infection (49.6% vs. 52.1%, p = 0.533), and fungal infection (10.0% vs. 5.7%, p = 0.055) in 2019 and 2021 showed no significant differences. The Gram-positive bacteria included Streptococcus pneumoniae, Staphylococcus aureus, Enterococcus faecalis, et al. (see Table 3). (Table 1 note: continuous variables with skewed distributions are expressed as median (Q1, Q3) and compared by the rank-sum test; categorical variables are expressed as n (%) and compared by the chi-square test.) Stratified analysis of clinical outcomes and former medical history of children with positive bacterial cultures identified no significant difference between 2019 and 2021 in the rate of children with former medical history discharged from hospital (29.4% vs. 32.7%) (p = 0.420). The mortality of children with former medical history in 2021 was higher than that in 2019, though without a statistically significant difference (44.0% vs. 37.9%, p = 0.522). The distribution of former medical history in children admitted to the PICU with different pathogenic infections and clinical outcomes was homogeneous (p > 0.05) (see Table 4). Discussion Children with suspected severe bacterial infection in the PICU are a frequent challenge for pediatricians; without timely and appropriate treatment, the prognosis is poor [6]. Bacterial infections in the PICU include community-acquired infection (CAI) and hospital-acquired infection (HAI). This study collected data from patients with community-acquired bacterial infections admitted to PICUs in 6 representative medical centers from different regions of mainland China in 2019 and 2021.
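Referring back to the 'Statistical analysis' subsection above, the comparisons reported there (rank-sum test for skewed continuous variables, plain chi-square for single pathogen rates, and the Cochran-Mantel-Haenszel test for categorical outcomes) can be sketched as follows. The counts and measurements in this snippet are placeholders, not the study data, and stratification by medical center is our assumption about how the CMH test was applied.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.contingency_tables import StratifiedTable

# Rank-sum (Mann-Whitney U) test for a skewed continuous variable,
# e.g. age in months; values are illustrative placeholders.
age_2019 = np.array([14, 30, 7, 22, 51, 9, 18])
age_2021 = np.array([11, 40, 6, 27, 33, 8])
u_stat, p_age = stats.mannwhitneyu(age_2019, age_2021, alternative="two-sided")

# Plain chi-square test for one pathogen rate between the years.
# Rows: year; columns: pathogen-positive vs. other positive cultures.
table = np.array([[25, 366],    # 2019 (placeholder counts)
                  [28, 233]])   # 2021 (placeholder counts)
chi2, p_sp, dof, _ = stats.chi2_contingency(table)

# Cochran-Mantel-Haenszel test for a 2x2 outcome stratified by center:
# one 2x2 table per medical center (placeholder counts).
strata = [np.array([[12, 50], [8, 45]]),
          np.array([[20, 70], [15, 60]]),
          np.array([[9, 40], [11, 38]])]
cmh = StratifiedTable(strata).test_null_odds(correction=True)
print(p_age, p_sp, cmh.pvalue)
```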
This is a retrospective study based on the distribution of infections in the PICUs of six medical centers, and it revealed a decrease in the total number of children's hospitalizations in 2021 compared to 2019. Studies have reported that quarantine prevention and control measures reduce COVID-19 cases, and the timing of prevention and control strategies is strongly associated with the decline in the epidemic growth rate of COVID-19 cases [7][8][9]. While quarantine measures were implemented to contain the spread of COVID-19, the incidence of acute respiratory illnesses in children, including those involving respiratory syncytial virus and influenza, also decreased [1]. The medical centers involved in this study were under epidemic prevention and control measures in 2020; thus, the behavioral patterns of the population in 2021 had changed greatly from those in 2019. The findings of this study are consistent with previous reports: the reduction in hospitalizations in 2021 compared with 2019 may be attributable to nationwide prevention and control measures, including restriction of population movement, mass disinfection, suspension of classes and work, and maximum use of masks and other protective measures. It should also be taken into consideration that the epidemic prevention and control measures led more patients to choose local hospitals for treatment rather than being referred to higher-level hospitals across regions, which also helps explain the reduction in the number of hospitalizations. In the present study, the number of bacterial infections in 2021 was lower than that in 2019. Of note, bacterial infection is an important complication of influenza pandemics [2]. Treatment of viral infection, such as oral oseltamivir against influenza virus (types A and B), is anticipated to prevent bacterial superinfection. Although the number of bacterial infections decreased in 2021 as compared with 2019, there was no difference in the distribution of infection sites, and the respiratory system remained the main site of bacterial infection in the PICU. Previous studies have shown that Haemophilus influenzae and Streptococcus pneumoniae are the main pathogens in the PICU [10]. Our study found a decrease in the rate of Haemophilus influenzae infections. Haemophilus influenzae, as an opportunistic pathogen, can cause a variety of clinical manifestations including otitis media, epiglottitis, sinusitis, and pneumonia, especially in children, the elderly, and immunocompromised patients. Haemophilus influenzae is mainly transmitted through respiratory droplets from carriers [11]. This result is consistent with the conclusions of previous studies [12,13].
The various prevention and control measures during the epidemic could plausibly cut off pathogens whose spread depends mainly on respiratory droplet transmission [1]. However, a limitation of this study was that we were unable to determine the serotypes of Haemophilus influenzae. In addition, unlike Haemophilus influenzae, our study found that the rate of Streptococcus pneumoniae infection in 2021 was higher than that in 2019. Previous studies have shown that in patients with severe community-acquired pneumonia in the intensive care unit, influenza is a risk factor for bacterial infection, and Streptococcus pneumoniae is the most common bacterium among the major complications of influenza pandemics [2]. Therefore, theoretically, Streptococcus pneumoniae infection secondary to viral infection should decrease as influenza rates decline, which is contrary to the results of this study. We speculate that the increase in Streptococcus pneumoniae infection is possibly related to the reduced mass movement during the epidemic prevention and control measures and to decreased pneumococcal vaccination [3,4]. In addition, the CDC suggested that patients with a positive diagnosis of COVID-19 should delay routine immunization during the COVID-19 pandemic [14]. However, due to the small sample size, no statistical difference between the two years was found. Therefore, a large sample size and long-term observation are necessary for evaluating changes in Streptococcus pneumoniae infection over the years. Moreover, we found that the mortality of children with former medical history was 31.9% in 2019 and 44.0% in 2021. Although the rates of hospitalization and bacterial infection in 2021 decreased, the mortality in children with former medical history increased, suggesting that epidemic prevention and control measures may be of some significance for the prevention of infectious diseases in healthy children, but that children with a previous medical history need more protection and attention. However, due to the small sample size, there was no statistical difference between the two years, and further research is needed for verification. Limitations The study has several limitations. Firstly, this study spans three years and cannot rule out the impact of technical improvements and changing detection rates over that period. Secondly, this is a multicenter study with a diversified distribution of cases provided by different research centers using different detection methods, and climate differences between regions may also affect pathogenic microorganisms; the impact of these factors on the present study has not been explored. Thirdly, the disease spectrum and level of medical care differ among medical centers, which may have an impact on mortality. Conclusions After the implementation of COVID-19 isolation, prevention and control measures, we noticed that, with changed behavioral patterns in the population, the number of PICU admissions and bacterial infections decreased, suggesting that isolation and control measures can reduce admissions and bacterial infections in the PICU. Meanwhile, the rate of Haemophilus influenzae infection in hospitalized patients demonstrated a downward trend. The rate of Streptococcus pneumoniae increased, but without statistically significant differences due to the limited sample size, which requires further investigation.
Table 1 Clinical characteristics of the children with positive bacterial culture admitted to PICU in 2019 and 2021
Table 2 Distribution of infection sites in children admitted to PICU in 2019 and 2021, n (%)
Table 3 Changes in Streptococcus pneumoniae and Haemophilus influenzae in 2019 and 2021
Table 4 Stratified analysis of clinical outcomes and former medical history of children infected and admitted to PICU in 2019 and 2021
Fig. 1 Monthly distribution of total inpatients and infections in 2019 and 2021
2022-11-06T10:22:01.581Z
2023-10-20T00:00:00.000
{ "year": 2023, "sha1": "9c25ca615b24a3fe7f448b59d2a683e79ac15336", "oa_license": "CCBY", "oa_url": "https://bmcinfectdis.biomedcentral.com/counter/pdf/10.1186/s12879-023-08687-x", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "793aa38485508aba41d7a751a1def63851c1d01b", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
203811886
pes2o/s2orc
v3-fos-license
Total Ankylosis by Heterotopic Ossification in an Adolescent Anterior Trans-olecranon Fracture Dislocation: A Case Report The incidence of heterotopic ossification in adolescents appears to be lower than in adults. Very few reports exist of heterotopic ossification with total bony ankylosis in child or adolescent populations. We describe a case of total bony ankylosis of the elbow secondary to heterotopic ossification in a 14-year-old female. Total ankylosis of the elbow at 45 degrees of flexion was noted 6 months post-surgery, and complete surgical excision of the heterotopic mass was performed. After an additional one-time dose of radiation therapy and nonsteroidal anti-inflammatory drug medication, full range of motion was obtained without any recurrence or other complications up to the last follow-up of 30 months. Heterotopic ossification (HO) of the upper limb is an uncommon post-traumatic complication, and is even rarer among children. 1) The occurrence of HO around the elbow joint results in fixed deformities and complete limitation of motion. 2,3) Surgical excision of HO is subsequently required for patients afflicted with limited motion, and complete surgical excision can achieve a significant improvement of elbow function. 4,5) Total ankylosis of the elbow joint due to HO is not common, and very little is known about HO treatment in children or adolescents. To the best of our knowledge, surgical excision of HO around the elbow joint in an adolescent has not been reported previously. We describe a case of total ankylosis of the elbow joint by HO in a 14-year-old female with anterior trans-olecranon fracture dislocation, who was successfully treated by complete surgical excision of the HO. Case Report A 14-year-old right-handed female fell from a 10th-floor apartment, and presented to our emergency department with multiple trauma. She had no history of any previous illness or medication before the injury. She complained of pain in the abdomen, right elbow, pelvic area, and low back area. She underwent iliac artery embolization and exploratory laparotomy to repair the iliac artery and a liver laceration at another hospital, and was subsequently transferred to our emergency center for additional surgical intervention involving the pelvic bone, spine and anterior trans-olecranon fracture dislocation (Fig. 1). Fortunately, computed tomography (CT) confirmed that she had no brain injury and no systemic neurologic symptoms, although there was a Chance fracture of the T10 vertebra. She underwent closed reduction of the injured elbow joint twice, once at the first hospital and then in our emergency room. According to the Mayo classification, the olecranon fracture was diagnosed as type IIIB. Sixteen days after the initial trauma, the anterior trans-olecranon fracture dislocation of the right elbow joint was treated by open reduction and internal fixation with a pre-contoured olecranon plate (Zimmer, Warsaw, IN, USA) via a posterior approach with an incision along the triceps, followed by application of a long arm splint (Fig. 1). After surgery, the elbow was immobilized in a long arm splint at 90 degrees of flexion and neutral rotation. Starting from the 3rd postoperative day, the patient was allowed gentle exercise of the right elbow joint, but developed aggravated pain during rehabilitation at 2 weeks post-surgery. The HO was first noted on radiographs at 3 weeks.
At 10 weeks post-surgery, she had increased difficulty moving her right elbow joint and a reduced range of motion. The HO had progressed in size and was clearly revealed on radiographs. Physical examination revealed total ankylosis of the right elbow at 45 degrees of flexion at 6 months post-surgery. However, since the radioulnar joint was intact, pronation and supination of the forearm showed a full range of joint motion. Standard anteroposterior and lateral radiographs of the affected elbow revealed an unusually large mass connecting the distal humerus to the ulna (Fig. 1, 2). CT images revealed the progress of mineralization from the outer margins towards the center (Fig. 3). After 6 months of maturation of the heterotopic mass, surgical excision was carried out using the posterior approach through the previous incision scar. Skin, subcutaneous tissue, and deep fascia were incised in line with the skin incision. The ulnar nerve was identified and retracted. The extent of the mass was exposed adequately, and it was excised completely from the distal humerus and ulna (Fig. 2). Intraoperatively, an elbow range of motion of 0 to 140 degrees of flexion was confirmed. Postoperative radiographs showed no trace of HO (Fig. 4). One day after the operation, a one-time dose of radiation therapy (800 cGy) was provided to prevent recurrence. Four days after surgery, elbow range-of-motion exercises were started, as tolerated. At 30 months after surgery, the patient had no pain in the elbow; radiographs revealed no further development of HO, a residual flexion deformity of 10 degrees remained with flexion up to 145 degrees, and supination and pronation of the forearm were achieved with a full range of motion (Fig. 4, 5). The patient was permitted to return to play without any restrictions. The patient was given an opportunity to review the manuscript and consented to its publication. Discussion In the current case, the HO around the elbow was especially problematic for several reasons. First, since neurovascular structures lie close to the elbow joint, the possibility of complications is increased. Second, the tendency of the elbow joint to become ankylotic after injury further complicates the treatment of heterotopic ossification at this site. 2) Complete ankylosis of the elbow secondary to HO results in severe disability. 4) More than 20% of patients who develop HO have clinically limited motion, with the flexion-extension arc decreased to under 100 degrees. In our case, the HO was located on the posteromedial side, and the elbow range of motion was assessed as 30 to 60 degrees at 4 weeks post-surgery, which gradually progressed to total ankylosis of the right elbow at 45 degrees of flexion. Restoration of elbow motion with complete ankylosis can be difficult. 6) After resection of the HO, the elbow range of motion was found to be 0 to 140 degrees. At the final follow-up, the range of motion was 10 to 145 degrees. Surgical removal of the ectopic bone should be undertaken only for clear functional goals. Excision should be considered for patients in whom elbow motion is severely limited by extensive HO and for those with neurologic symptoms around the elbow joint. In the current case, our patient complained of ulnar nerve symptoms such as a tingling sensation and numbness. Since the timing of surgical excision of the heterotopic bone is important 7) and maturation of the HO was noted on radiographs at the 6-month follow-up, we decided to remove the mass at that point.
Risk factors for HO development include neurologic injury, delayed internal fixation, use of bone graft, and the pattern of the fracture. 2) Delay in surgery increases the risk of HO, indicating that prompt operative fixation is important. 8) In our case, internal organ damage had made her vital signs unstable, and we first had to manage life-threatening complications. The injured elbow was therefore treated 16 days after the initial trauma, which may have increased the possibility of developing HO. Radiation is effective for prophylaxis against HO and its recurrence. Radiation may be given at a dose of 700 to 800 cGy in a single fraction, administered from 24 hours preoperatively until 48 to 72 hours postoperatively. 3) In our case, the patient was given a single dose of 800 cGy 1 day after surgery to prevent recurrence. Furthermore, prophylactic management was administered by prescribing nonsteroidal anti-inflammatory drugs (NSAIDs) for 2 months. NSAIDs prevent HO by inhibiting the osteogenic differentiation of progenitor cells. 9) Radiation and NSAID therapy are both effective prophylactic treatments against the development of HO. 10) At the last follow-up, no recurrence was observed. In conclusion, surgical excision in a patient with a fully mature bridging HO led to significant improvement in elbow range of motion, ulnar nerve symptoms, and functional results, notwithstanding the subject being an adolescent patient. Complete excision with subsequent postoperative radiation and NSAID therapy helped to prevent recurrence of HO.
2019-10-08T00:07:46.623Z
2019-09-01T00:00:00.000
{ "year": 2019, "sha1": "49c41a6bcc8a2717d568a3fcea752f05b5d0440f", "oa_license": "CCBYNC", "oa_url": "https://www.cisejournal.org/upload/pdf/cise-2019-22-3-154.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "69a98705f365acd7e8cdc0e9ef8ca3eaa2c47275", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
211561410
pes2o/s2orc
v3-fos-license
Acute Simultaneous Renal and Ovarian Vein Thrombosis Mimicking Renal Colic and Associated with Factor V Leiden: Case Report and Review of Literature Severe flank pain is a frequent complaint at the emergency department (ED). It is usually associated with other clinical symptoms like fever, dysuria, vomiting, diarrhea or radiation to the groin. It may therefore raise many probable differential diagnoses that should first be ruled out on the basis of laboratory tests and imaging results. However, when flank pain is isolated and radiating to the groin without evidence of urolithiasis on abdominal CT scan, rarer diagnoses should be considered, such as renal vessel thrombosis with or without thrombosis of other pelvic vessels. Thereafter, thrombophilic studies should be performed to elucidate the underlying etiology. We present the case of a 44-year-old lady with simultaneous acute left renal and ovarian vein thrombosis, presenting to the ED with severe isolated left flank pain radiating to the left groin, and with heterozygous factor V Leiden on thrombophilic studies. We would like to stress the importance of a high index of suspicion for this diagnosis at the ED, based on medical history and primary investigations, because good management may result in salvage of organs by minimally invasive techniques, early effective treatment and appropriate anticoagulation in the future to prevent further complications. Introduction A hypercoagulable state, known as thrombophilia, is the main factor predisposing to thromboembolic events. It was described by Rudolf Virchow in the nineteenth century as one element of a triad known to underlie thromboembolic events, comprising stasis, endothelial injury, and hypercoagulability. It may be inherited or acquired and should be suspected in patients with venous or arterial thrombosis at a young age, in unusual locations, with recurrent thrombosis, pregnancy loss or a positive family history. We discuss the case of a young, previously healthy lady who presented with intractable left flank pain without any other associated symptoms and was found to have left renal and ovarian venous thrombosis, probably due to a heterozygous mutation in factor V.
Case Presentation A 44-year-old female patient, a bank officer, presented for the first time to our university hospital ED with severe left flank pain lasting a few hours. The pain was achy and dull, radiated to the left groin, increased dramatically despite common traditional analgesics like nonsteroidal anti-inflammatory and spasmolytic drugs, and was associated only with nausea. She denied other gastrointestinal or urinary symptoms. She had no fever or chills, and no paresthesia or lower limb weakness. She was a non-smoker, drank alcohol occasionally, took no chronic medication except acetylsalicylic acid, and had no known allergies. She had not taken any oral contraceptive hormones for more than 5 years. In her past medical history, she reported a post-partum left lower limb deep venous thrombosis (DVT) three years earlier that was treated with low molecular weight heparin (LMWH) for only 3 months, followed one year later by a mild spontaneous superficial right arm vein thrombosis. The latter was treated with a local heparinoid cream (Hirudoid), and she had been on low-dose acetylsalicylic acid (100 mg per day) since that time. On admission, her blood pressure (BP) was 140/85 mmHg, body temperature 38 °C, respiratory rate 22, heart rate 95 bpm, and oxygen saturation 98% on room air. Physical examination revealed a conscious, well-nourished, non-obese (BMI = 27 kg/m²) patient with normally colored skin and conjunctiva. She had left costovertebral angle (CVA) tenderness and left lower quadrant abdominal pain on deep palpation. Cardiopulmonary auscultation was normal, and she had no weight loss, no buccal ulcers, no malar rash or arthralgia, no lower limb edema, and no palpable peripheral lymph nodes. At the ED, her blood tests showed slightly elevated peripheral white blood cells (WBC = 11.5 G/liter; 68% neutrophils and 22% lymphocytes), hemoglobin of 13.6 g/dL, and a platelet count of 362,000/μL. Serum glucose, C-reactive protein (CRP), serum creatinine, liver function tests, serum albumin and serum lipase were all within normal range. Serum cholesterol, serum complement and antinuclear antibodies (ANAs) were also normal. Urinalysis showed numerous red blood cells and a few leukocytes (8-10/hpf). No crystals, proteins or casts were found in the urine. Urine culture was sterile and chest X-ray was normal. Non-contrast-enhanced abdominal CT first showed an ill-defined calcification that was initially misinterpreted as left ureterolithiasis, but the diagnosis was corrected one hour later by injecting intravenous contrast. The patient was found to have combined left renal and left ovarian vein thrombosis, with a clear-cut absence of any obstruction of the urinary tract. The left kidney was slightly enlarged, without evidence of any tumor or infarction. There was no ascites and there were no enlarged intra-abdominal lymph nodes (Figure 1). Treatment with LMWH and analgesic drugs was started immediately, and the patient improved steadily over the next few days. She left hospital on the fourth day after admission. Thrombophilic studies performed six weeks later revealed a heterozygous factor V mutation (factor V Leiden), and the patient was put on lifelong treatment with the new oral anticoagulant rivaroxaban 20 mg per day. Discussion Renal vein thrombosis (RVT) is a relatively rare clinical entity. It is commonly associated with nephrotic syndrome or direct invasion by renal cell cancer. Other less common causes include hypercoagulable states, extrinsic compression by tumors, infections, trauma, renal transplantation, Behcet syndrome or antiphospholipid antibody syndrome [1,2]. Almost two thirds of patients have bilateral renal vein involvement. In cases of unilateral thrombosis, the left renal vein is affected more commonly than the right one [3]. The severe passive congestion causes the kidney to become engorged, leading to degeneration of nephrons and causing symptoms of flank pain, hematuria and decreased urine output. The presentation of renal vein thrombosis is usually variable and depends on the rapidity of the venous occlusion; patients may be asymptomatic or have symptoms related only to the underlying etiology. Acute renal vein thrombosis usually presents with symptoms of renal infarction, including flank pain, flank tenderness, rapid deterioration of renal function and worsening proteinuria, with micro- or macroscopic hematuria. It may also present with pulmonary emboli [4]. In normal Western populations, heterozygosity for the factor V Leiden mutation is present in 2-5%, whereas in patients with venous thrombosis and a family history of thrombotic disease this figure may reach 50-60%. The presence of the mutation markedly increases the risk for renal vein thrombosis, particularly in neonates [5]. Ovarian vein thrombosis is a rare complication which arises classically in the post-partum period [6]. This condition is classically a puerperal process, but may also be caused by endometritis, pelvic inflammatory disease, malignancy, thrombophilia and inflammatory bowel disease, and may occur post-operatively [7]. It involves the right vein more commonly than the left. Ovarian vein thrombosis often has a vague and variable presentation, and a high index of suspicion is required to make the diagnosis [8]. Some patients have nonspecific symptoms, including malaise, vague diffuse abdominal pain, or dyspnea; others present with fever and lower quadrant pain [9]. Our patient most probably had an acute rather than chronic venous thrombosis, since the pain had a rapid onset and was associated only with recent microscopic hematuria, without evidence of chronic urinary or gynecologic manifestations. The renal vein thrombosis was on the left side, like most of the cases described in the literature, unlike the usual pattern for ovarian thrombosis. Our patient had both thromboses on the left side, which is concordant with many reports in the literature showing that left-sided renal vein thrombosis can lead to left gonadal vein thrombosis, with pelvic congestion in females and painful swelling of the left testis in males [10]. The only factor pointing to the possibility of thrombosis in this young, otherwise healthy lady was the history of previous post-partum DVT; otherwise, ureterolithiasis had been evoked as the primary diagnosis owing to the type and localization of the pain at the ED. Factor V Leiden thrombophilia is characterized by a poor anticoagulant response to activated protein C (APC) and an increased risk for venous thromboembolism (VTE) [11]. It also increases the VTE risk in "unusual locations" not routinely seen. Heterozygous factor V Leiden carriers have a 5- to 10-fold increased risk of thrombosis in comparison to the normal population, while homozygous carriers have 50 to 100 times this risk. Other thrombophilic conditions, such as the use of oral contraceptives, are usually present in cases of thrombosis due to the heterozygous mutation. Another interesting point to discuss in our clinical case is the causal relationship between the venous thrombosis and the presence of factor V Leiden. The imputability is hard to accept, since this heterozygous mutation is usually asymptomatic and may be present in around 3% of the normal population. Moreover, it is rarely the only causative factor in unprovoked DVT, and the previous lower limb DVT of our patient occurred in the post-partum state. Nevertheless, no other medical or surgical predisposing situation was present during this episode of thrombosis, and blood tests and imaging studies did not show any evidence of nephrotic syndrome, renal cancer or other abdominal tumors, abdominal trauma or infection, connective tissue disease or antiphospholipid syndrome. Therefore, heterozygous factor V Leiden remained the only etiology to admit. Venous thrombosis associated with factor V Leiden should be diagnosed and treated with long-term anticoagulation (i.e., heparin, warfarin). The duration of treatment is decided case by case and depends on whether the mutation is heterozygous or homozygous, whether the thrombosis is recurrent or a first episode, provoked or unprovoked, and whether it is associated with pulmonary emboli. In general, recurrent unprovoked venous thrombosis due to a homozygous mutation should be treated for life [12]. Reduction of proteinuria, by the use of angiotensin-converting enzyme inhibitors (ACEIs) or angiotensin-receptor blockers (ARBs), is essential in the treatment of renal vein thrombosis (RVT) in patients who are nephrotic. The role of thrombolysis in the treatment of renal venous thrombosis is still unclear, since no data are available comparing thrombolytic therapy with anticoagulation [13]. When anticoagulation is contraindicated, a vena cava filter must be placed at a suprarenal level to reduce the risk of pulmonary emboli. Surgical treatment is no longer indicated unless there is an underlying renal cell carcinoma [14]. Conclusion Renal vein thrombosis is a rare complication of many hereditary or acquired hypercoagulable states such as heterozygous mutation of factor V. In its acute presentation, it may be confounded with a renal colic crisis, especially when associated with ipsilateral gonadal vein thrombosis. Its occurrence is highly suspected in the presence of acute flank pain resistant to traditional analgesics, the absence of ureterolithiasis on CT scan, and a medical or familial history of a hypercoagulable state. Prompt diagnosis and convenient etiologic management are essential to avoid further invasive investigations and unexpected complications.
2020-02-29T07:22:47.430Z
2020-02-14T00:00:00.000
{ "year": 2020, "sha1": "612d362cedffc7e51858b0b3e07e78e5e0f600f1", "oa_license": "CCBY", "oa_url": "https://www.clinmedjournals.org/articles/cmrcr/clinical-medical-reviews-and-case-reports-cmrcr-7-297.pdf?jid=cmrcr", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "612d362cedffc7e51858b0b3e07e78e5e0f600f1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
5173775
pes2o/s2orc
v3-fos-license
A prospective multicenter phase II study evaluating multimodality treatment of patients with peritoneal carcinomatosis arising from appendiceal and colorectal cancer: the COMBATAC trial Background Peritoneal carcinomatosis is regarded as a common sign of advanced tumor stage, tumor progression or local recurrence of appendiceal and colorectal cancer and is generally associated with poor prognosis. Although survival of patients with advanced-stage CRC has markedly improved over the last 20 years with systemic treatment comprising combination chemotherapy with or without monoclonal antibodies, the oncological outcome, especially in the subgroup of patients with peritoneal metastases, is still unsatisfactory. In addition to systemic therapy, cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) are specific treatment options for a selected group of these patients and may provide an additional therapeutic benefit in the framework of an interdisciplinary treatment concept. Methods/design The COMBATAC trial is a prospective, multicenter, open-label, single-arm, single-stage phase II trial investigating perioperative systemic polychemotherapy including cetuximab in combination with CRS and HIPEC in patients with histologically proven wild-type KRAS colorectal or appendiceal adenocarcinoma and synchronous or metachronous peritoneal carcinomatosis. The planned total number of patients to be recruited is 60. The primary endpoint is progression-free survival (PFS). Secondary endpoints include overall survival (OS), perioperative morbidity and treatment-associated toxicity, feasibility of the combined treatment regimen, quality of life (QoL) and histopathological regression after preoperative chemotherapy. Discussion The COMBATAC trial is designed to evaluate the feasibility and efficacy of the combined multidisciplinary treatment regimen consisting of perioperative systemic combination chemotherapy plus cetuximab and CRS plus bidirectional HIPEC with intraperitoneal oxaliplatin. Trial registration ClinicalTrials.gov Identifier: NCT01540344, EudraCT number: 2009-014040-11 Disease under study Colorectal cancer (CRC) is the third most commonly diagnosed cancer in males and the second in females worldwide, and overall the fourth leading cause of cancer-related death. Whereas the mortality associated with CRC has slightly decreased over the past 20 years, the incidence is still increasing in most countries [1,2]. More than 10% of patients with CRC already show peritoneal carcinomatosis at the time of initial diagnosis [3]. In about 25% of these cases there is no evidence of further distant metastasis [4]. Moreover, up to 25% of all patients with CRC develop peritoneal carcinomatosis during the natural course of their disease as a common sign of tumor progression or recurrence. In contrast to lymphatic and hematogenous spread of metastases, intraperitoneal carcinomatosis develops by direct transcolonic tumor spread or tumor cell seeding during surgical resection of the primary tumor [5][6][7][8]. Tumor cell distribution within the abdominal cavity results in avascular tumor nodules that often cannot be efficiently addressed by systemic chemotherapy [9]. Thus, peritoneal carcinomatosis is mostly associated with poor prognosis. In the prospective European multicenter EVOCAPE 1 study, a median survival of 5.2 months during the natural course of disease was reported in 118 patients with peritoneal carcinomatosis arising from CRC [10].
Another retrospective analysis of 3,000 patients with peritoneal dissemination of colon cancer reported a comparable median survival of 7 months [11]. First-line treatment of advanced colorectal cancer Systemic chemotherapy for metastatic colorectal cancer (mCRC) is mainly based on 5-FU with folinic acid (FA), preferably given as a 24-48 h infusion, or oral prodrugs (e.g. capecitabine), in combination with either oxaliplatin or irinotecan [12]. Several studies with different chemotherapy doublets have shown median overall and progression-free survival ranging from 15 to 23 and 7 to 14 months, respectively, in patients with metastatic colorectal cancer (Table 1). Recently, Falcone et al. have shown a triple chemotherapy regimen combining 5-FU/FA, oxaliplatin and irinotecan (FOLFOXIRI) to be superior to FOLFIRI as first-line therapy [13]. In addition, triplets including targeted therapy, such as antibodies against vascular endothelial growth factor (VEGF; bevacizumab) or the epidermal growth factor receptor (EGFR; cetuximab or panitumumab), have been proven effective in terms of prolonged overall and disease-free survival in first-line mCRC treatment [14]. Thus, PFS reached up to 12 months and OS ranged from 17 to 30 months (Tables 1 and 2). Nevertheless, the efficacy of the different triplet regimens may depend on tumor biology-related factors (e.g. histology, dissemination pattern, KRAS or BRAF mutation, anticipated chemosensitivity and growth dynamics). However, triplets are recommended by the recently published ESMO Consensus Guidelines for first-line treatment or induction therapy for most patients with advanced colorectal cancer [12]. EGFR-targeted therapy for advanced colorectal cancer The addition of targeted anticancer drugs against the epidermal growth factor receptor (EGFR), the monoclonal antibodies cetuximab and panitumumab, has further improved patient outcome in advanced-stage colorectal cancer (Table 2). Two prospective trials showed a survival benefit of adding cetuximab to best supportive care in patients with chemotherapy-refractory mCRC, leading to a median OS of 6.4 and 6.1 months, respectively [38,39]. The BOND trial assigned patients with disease progression within three months after irinotecan-based chemotherapy to receive cetuximab with or without irinotecan. The median OS was 8.6 and 6.9 months, and the time to progression 4.1 and 1.5 months, respectively [40]. In the randomized phase III CRYSTAL study investigating first-line treatment of mCRC, the median PFS in the wild-type KRAS subgroup was 9.9 months in the FOLFIRI/cetuximab arm versus 8.7 months in the FOLFIRI arm. Median OS was 24.9 and 21 months, respectively. In patients with mutant KRAS status (n = 192), median PFS was reduced after additional treatment with cetuximab (7.6 vs. 8.1 months) [33,41]. Similar observations were reported by Bokemeyer et al. after subgroup analysis of the prospective randomized OPUS study. The median progression-free survival was 7.2 months in both treatment arms, with a 0.5-month benefit for additional treatment with cetuximab in the wild-type KRAS subgroup [34]. These results have been confirmed by a recently published pooled analysis of the CRYSTAL and OPUS trials [42]. Moreover, these observations are supported by the PRIME study, which showed a significant improvement in PFS in untreated patients with wild-type KRAS mCRC by adding the EGFR antibody panitumumab to FOLFOX-4. Median PFS was 9.6 months in the panitumumab group vs. 8.0 months in the control group.
There was also a nonsignificant benefit in overall survival (23.9 vs. 19.7 months) [28]. Another prospective randomized phase III study showed an increased PFS after adding panitumumab to FOLFIRI in second-line treatment of patients with mCRC (5.9 vs. 3.9 months) [43]. In contrast, the MRC COIN trial, investigating the addition of cetuximab to an oxaliplatin-based chemotherapy for first-line treatment of patients with advanced CRC, could not reproduce these findings. Although the response rate increased from 57% to 64% by adding cetuximab, there was no significant benefit in median OS (17.9 months in the control group vs. 17.0 months in the cetuximab group) or in PFS (8.6 vs. 8.6 months). Nevertheless, in the subgroup analysis the lack of benefit was only reported for oxaliplatin plus fluoropyrimidine combinations with cetuximab, in contrast to combinations with infusional 5-FU [36]. In the recently published NORDIC-VII trial, no benefit could be shown for the addition of cetuximab to an oxaliplatin-based combination with bolus 5-FU only (FLOX). In the ITT analysis, the median progression-free survival was 7.9 months in the control group vs. 8.3 months in the cetuximab group [37]. In 63 patients with colorectal PM selected from the French database who received several regimens of modern systemic chemotherapy, the median OS was 23.9 months [46]. An Asian prospective single-arm phase II study investigating FOLFOX-4 in patients with peritoneal metastases from CRC reported a median time to progression of 4.4 months and a median overall survival of 21.5 months [47]. Cytoreductive surgery and HIPEC The combined treatment concept of cytoreductive surgery (CRS) and hyperthermic intraperitoneal chemotherapy (HIPEC) was introduced by Sugarbaker et al. in the early 1990s and consists of complete macroscopic cytoreduction of all visible tumor nodules followed by local intraabdominal chemoperfusion at 41-42°C [48,49]. The aim of HIPEC in patients with peritoneal carcinomatosis is to circumvent the peritoneal barrier and to obtain a higher local concentration of the cytostatic agents [50][51][52]. However, to date the intraperitoneal or bidirectional chemotherapeutic regimen is not standardized [53][54][55][56]. The addition of hyperthermia may potentiate the effect of the cytostatic agents by thermal cytotoxicity and induction of apoptosis. Moreover, heating can improve tissue penetration of the cytostatic agents [48,57,58]. Numerous retrospective analyses have reported feasibility, safety and efficacy of the combined treatment concept of CRS and HIPEC in patients with peritoneal carcinomatosis arising from CRC (Table 3). However, data from prospective trials are still limited. Verwaal et al. reported a prospective randomized phase III trial analyzing CRS and HIPEC with MMC plus adjuvant chemotherapy with 5-FU/folinic acid, compared to systemic chemotherapy with 5-FU/folinic acid and palliative surgery, if possible. After a median follow-up of 21.6 months, the experimental treatment arm showed a median overall survival of 22.3 months compared to 12.6 months in the standard arm. In the subgroup of patients with complete macroscopic cytoreduction (CC-0/1), median survival was 42.9 months. Median progression-free survival was 12.6 and 7.7 months, respectively [59,60]. Another randomized controlled trial was launched by a French group. This study, published by Elias et al., was designed to compare CRS with early postoperative intraperitoneal chemotherapy (EPIC) to CRS alone.
After premature termination due to recruitment difficulties, a 2-year survival rate of 60% was reported in 35 patients with complete macroscopic cytoreduction [61]. In the comparative study published by Mahteme et al., the median survival in the HIPEC group was 32 months vs. 14 months in the control group; the 5-year survival rates were 28% and 5%, respectively [62]. A multicenter registry study of 506 patients treated with CRS and HIPEC for peritoneal carcinomatosis arising from colorectal cancer reported a median overall survival of 19.2 months. In patients with complete macroscopic cytoreduction (CC-0/1), the median survival was 32.4 months [63]. In numerous observational studies, the overall median survival ranged from 15 to 32 months, and from 28 to 60 months after complete macroscopic cytoreduction (CC-0/1) [64,65]. The differences in median survival of the control group between these analyses and the Dutch trial may be explained by patient selection and the introduction of more effective combined chemotherapeutic regimens, with or without targeted drugs, into the standard treatment of advanced-stage CRC. Study design The COMBATAC study is a prospective, multicenter, open-label, single-arm, single-stage phase II study. The investigator-initiated trial (IIT) is conducted by the Department of Surgery of the University Medical Center Regensburg in collaboration with the Center for Clinical Studies Regensburg, the Coordination Centre for Clinical Trials Duesseldorf and the participating national peritoneal carcinomatosis centers. The study protocol is supported by the CRC Study Group of the Arbeitsgemeinschaft Internistische Onkologie. Study objectives and endpoints The primary objective of the COMBATAC study in patients with peritoneal carcinomatosis arising from wild-type KRAS colorectal and appendiceal cancer is to estimate the progression-free survival (PFS). Based on this estimation, it will be determined whether the multimodality treatment with pre- and postoperative systemic chemotherapy plus cetuximab, cytoreductive surgery (CRS) and bidirectional hyperthermic intraperitoneal chemotherapy (HIPEC) shows sufficient evidence of efficacy for further investigation. PFS is defined as the time interval between the first day of preoperative treatment and the date of progression or death, whichever occurs first. Patients who are alive and progression-free at the time of analysis will be censored for PFS at the time of their last contact. Secondary endpoints include overall survival, morbidity and toxicity related to the locoregional approach, feasibility of the combined treatment concept, quality of life and pathohistological regression. Study population The study population of the COMBATAC study consists of patients with synchronous or metachronous peritoneal carcinomatosis arising from histologically proven wild-type KRAS colorectal or appendiceal cancer. The extent of peritoneal tumor spread (Peritoneal Cancer Index, PCI), as assessed prior to patient enrolment by diagnostics such as computed tomography and laparoscopy, should allow complete macroscopic cytoreduction (CC-0/1) at the time of surgery.
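As a minimal sketch of how the primary endpoint defined above might be derived from visit dates, the following hypothetical function returns a (duration, event) pair per patient. The field names are illustrative and do not correspond to the trial's actual eCRF variables.

```python
from datetime import date
from typing import Optional, Tuple

def pfs_months(first_preop_treatment: date,
               progression: Optional[date],
               death: Optional[date],
               last_contact: date) -> Tuple[float, int]:
    """PFS as defined above: time from the first day of preoperative
    treatment to progression or death, whichever occurs first; patients
    alive and progression-free are censored at last contact.
    Returns (duration in months, event indicator 1/0)."""
    events = [d for d in (progression, death) if d is not None]
    if events:
        end, event = min(events), 1
    else:
        end, event = last_contact, 0
    # 30.44 days per month on average (365.25 / 12).
    return ((end - first_preop_treatment).days / 30.44, event)

# Example: progression roughly 14 months after treatment start.
print(pfs_months(date(2012, 1, 2), date(2013, 3, 5), None, date(2013, 6, 1)))
```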
Moreover, patients to be included in the study must meet the following inclusion criteria: treatment-free interval of at least 6 months after the completion of prior systemic chemotherapy, age over 18 and below 71 years, good general health status (Karnofsky index more than 70%, ECOG 0-2), absence of hematogenous metastases (lung, bone, brain, >3 peripheral resectable liver metastases), absence of contraindications for systemic chemotherapy and/or extended surgery, estimated life expectancy of more than 6 months, absence of any psychological, familial, sociological or geographical condition potentially hampering compliance with the study protocol and follow-up schedule, written informed consent, creatinine clearance > 50 ml/min, serum creatinine ≤ 1.5 × ULN, serum bilirubin ≤ 1.5 × ULN, ASAT and ALAT ≤ 2.5 × ULN, platelet count > 100,000/μl, haemoglobin > 9 g/dl, neutrophil granulocytes ≥ 1,500/μl, International Normalized Ratio (INR) ≤ 2, absence of peripheral neuropathy > grade 1 (CTCAE version 4.0), no pregnancy or breast feeding, and adequate contraception in fertile patients. Patients with incomplete cytoreduction (≥CC-2), tumor debulking or palliative surgery, hematogenous metastases (except up to three resectable liver metastases), prior chemotherapy < 6 months before evaluation of study inclusion, or therapy with an EGFR antibody for metastatic disease are excluded from the present study. Further exclusion criteria are KRAS mutation, known allergy to murine or chimeric monoclonal antibodies, concurrent chronic systemic immune therapy, chemotherapy or hormone therapy not indicated in the study protocol, histology of signet-ring cell carcinoma (>20% of tumor cells), other malignancy than the disease under study or second cancer < 5 years after R0 resection, impaired liver, renal or hematologic function as mentioned above, heart failure NYHA ≥ 2 or significant coronary artery disease (CAD), alcohol and/or drug abuse, and inclusion in other clinical trials interfering with the study protocol. Patients can only be included once in the COMBATAC study. Treatment schedule The interdisciplinary combined treatment regimen consists of pre- and postoperative systemic chemotherapy with FOLFOX or FOLFIRI plus the EGFR antagonist cetuximab, and cytoreductive surgery (CRS) with complete macroscopic cytoreduction (CC-0/1) followed by bidirectional hyperthermic intraperitoneal chemotherapy (HIPEC). The treatment schedule is shown in Figure 1. Systemic chemotherapy will consist of standard-of-care chemotherapy. Preoperative intravenous chemotherapy will be applied for 3 months, and therapy will be completed by postoperative systemic chemotherapy for a further 3 months starting 4-6 weeks after surgery. Cetuximab is given intravenously once weekly for a maximum of 12 weeks. The initial dose is 400 mg/m² body surface area, followed by a weekly dose of 250 mg/m². Standard-of-care premedication will be administered as needed to patients receiving intravenous chemotherapy, including dexamethasone, acid suppressors, anti-emetics, analgesics and antipyretics. Systemic chemotherapy will be administered by the patient's medical oncologist or the department of oncology of the enrolling peritoneal carcinomatosis center. All decisions regarding the management of (serious) adverse events related to systemic chemotherapy, such as dose reduction, interruption of systemic treatment or change of treatment regimen, are at the discretion of the treating medical oncologist and are allowed within the study protocol, if documented.
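For illustration, the purely laboratory-based thresholds among the inclusion criteria above can be encoded as a simple screening check. This is a sketch, not part of the protocol; the data structure and field names are assumptions, and the clinical criteria (performance status, imaging, consent, and so on) are deliberately out of scope.

```python
from dataclasses import dataclass

@dataclass
class LabValues:
    creatinine_clearance_ml_min: float
    creatinine_x_uln: float   # serum creatinine as a multiple of ULN
    bilirubin_x_uln: float
    asat_x_uln: float
    alat_x_uln: float
    platelets_per_ul: float
    hemoglobin_g_dl: float
    neutrophils_per_ul: float
    inr: float

def meets_lab_criteria(v: LabValues) -> bool:
    """Check only the laboratory inclusion thresholds quoted in the
    protocol text above."""
    return (v.creatinine_clearance_ml_min > 50
            and v.creatinine_x_uln <= 1.5
            and v.bilirubin_x_uln <= 1.5
            and v.asat_x_uln <= 2.5
            and v.alat_x_uln <= 2.5
            and v.platelets_per_ul > 100_000
            and v.hemoglobin_g_dl > 9
            and v.neutrophils_per_ul >= 1_500
            and v.inr <= 2)

# Example: a candidate with all values inside the thresholds.
print(meets_lab_criteria(LabValues(80, 1.0, 0.8, 1.2, 1.1,
                                   250_000, 12.5, 3_000, 1.1)))  # True
```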
Preoperative systemic chemotherapy is followed by cytoreductive surgery and HIPEC. The intent of cytoreductive surgery is to obtain complete macroscopic cytoreduction (CC-0/1) as a precondition for the application of HIPEC. Residual disease is classified intraoperatively using the completeness of cytoreduction (CC) score. CC-0 indicates no visible residual tumor and CC-1 residual tumor nodules ≤ 2.5 mm. CC-2 and CC-3 indicate residual tumor nodules between 2.5 mm and 2.5 cm, and > 2.5 cm, respectively [70]. The initial extent of peritoneal tumor manifestation is determined intraoperatively using the Peritoneal Cancer Index (PCI, Washington Cancer Center), a combined numerical score of lesion size (LS-0 to LS-3) and tumor localization (regions 0-12) [70,71]. During surgery, patients are placed in a modified lithotomy position. Surgery may include parietal and visceral peritonectomy, greater omentectomy, splenectomy, cholecystectomy, resection of the liver capsule, small bowel resection, colonic and rectal resection, (subtotal) gastrectomy, lesser omentectomy, pancreatic resection, hysterectomy, ovariectomy and urinary bladder resection. In patients with infiltration of the umbilicus, omphalectomy is necessary. Further operative procedures and resections may be necessary depending on the intraoperative findings. Gastrointestinal reconstructions are performed following the individual center's standard operating procedures (SOPs). The following minimal requirements are prerequisites for CRS: complete greater omentectomy, complete adhesiolysis of the small intestine, complete mobilization of the liver to assess the right diaphragmatic space, assessment of the left diaphragmatic space (requiring splenectomy in the majority of cases), assessment of the left and right paracolic spaces, and assessment of the pelvis, often requiring anterior rectal resection. Bidirectional oxaliplatin-based hyperthermic intraperitoneal chemoperfusion (HIPEC) will only be applied intraoperatively in case of complete macroscopic cytoreduction (CC-0/1). HIPEC may be performed with an open or closed abdomen technique according to the peritoneal carcinomatosis center's SOPs. After CRS, four intraabdominal drains and two temperature probes are placed for continuous abdominal perfusion using a roller pump system with heat exchanger, as described before [72]. When the temperature in the pouch of Douglas reaches 40°C, oxaliplatin at a dose of 300 mg/m² body surface area is added and perfusion is continued for a further 30 minutes. The treatment is combined with synchronous IV administration of 400 mg/m² fluorouracil and 20 mg/m² folinic acid, considering toxicity and safety instructions. After completion of the intraperitoneal perfusion cycle, the perfusion volume is evacuated from the abdominal cavity, all drains remain in situ and the patient is transferred to postoperative care. Assessments and follow-up During the screening period, patients will be assessed for eligibility for inclusion in the COMBATAC study. Inclusion and exclusion criteria are assessed by the investigator, and initial diagnostics will be completed as necessary prior to patient enrolment. During pre- and postoperative systemic chemotherapy, clinical examination and laboratory testing will be performed within 7 days of each chemotherapy cycle. After completion of preoperative treatment and after completion of the postoperative chemotherapy (end of treatment period), a further staging computed tomography will be performed.
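Since all drug doses in the bidirectional HIPEC regimen above are specified per square meter of body surface area, a worked example may be useful. The protocol does not state which BSA formula the centers use, so the Mosteller formula is assumed here purely for illustration; this sketch is not a dosing tool.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body surface area (m^2) by the Mosteller formula; an assumption,
    as the protocol text does not specify a BSA formula."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def bidirectional_doses(height_cm: float, weight_kg: float) -> dict:
    """Doses per the protocol text above: intraperitoneal oxaliplatin
    300 mg/m^2 with synchronous IV 5-FU 400 mg/m^2 and folinic acid
    20 mg/m^2. Illustration only."""
    bsa = bsa_mosteller(height_cm, weight_kg)
    return {
        "bsa_m2": round(bsa, 2),
        "oxaliplatin_ip_mg": round(300 * bsa),
        "fluorouracil_iv_mg": round(400 * bsa),
        "folinic_acid_iv_mg": round(20 * bsa),
    }

# Example: a 170 cm, 70 kg patient -> BSA ~1.82 m^2,
# oxaliplatin ~545 mg IP, 5-FU ~727 mg IV, folinic acid ~36 mg IV.
print(bidirectional_doses(170, 70))
```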
Moreover, quality of life is assessed and tumor markers (CEA, CA19-9) are determined. The same items will be recorded within three weeks after surgery. Intraoperative data consisting of PCI, surgical procedures, number of anastomoses, operating time, blood loss and the course of the HIPEC procedure, as well as additional postoperative data such as ICU stay and hospital stay, will be documented. The follow-up time starts 30 days after the last day of drug administration during postoperative treatment, with the 'end-of-treatment' visit. Follow-up lasts 24 months, with three-monthly follow-up visits consisting of physical examination, laboratory testing including tumor markers, and protocol CT scans. Quality of life will be assessed yearly during follow-up. Radiological disease progression will be assessed according to the revised RECIST criteria, version 1.1 [73]. As mentioned above, computed tomography of the chest, abdomen and pelvis with oral, rectal and intravenous contrast will be performed prior to treatment start, within 3 weeks after cytoreductive surgery (CRS) and HIPEC, and 30 days after the last systemic drug administration. Response to treatment is defined by the following four categories: (1) complete response (CR), (2) partial response (PR, at least 30% decrease in the sum of baseline lesion diameters), (3) stable disease (SD) and (4) progressive disease (PD). Progression in the absence of lymphatic or hematogenous disease recurrence will be based on clinical signs or symptoms (e.g. malignant ascites, ureteral stenosis or bowel obstruction), radiological diagnosis (CT ± PET) and/or surgical evidence of progression during laparoscopy or laparotomy. In addition, CEA and CA19-9 will be routinely measured as mentioned above. An at least threefold increase in serum CEA or CA19-9 levels will be defined as progression. Morbidity and toxicity will be assessed as the number of medical and surgical complications occurring during the treatment period. The severity of complications (grade I-V) will be assessed, and adverse events will be categorized using the CTCAE version 4.0 [74]. Quality of life will be assessed using the EORTC QLQ-C30 questionnaire. Functional and symptom scores will be calculated according to the standard scoring procedures [75]. Comparisons will be drawn with the score means of the reference population [76]. A second round of analyses will be performed in order to identify the proportion of patients at any assessment point with pronounced deficits in QoL, as defined by score points < 50 on a 0 = very bad to 100 = very good scale [77]. The pathohistological regression after systemic chemotherapy is assessed and graded using the classification published by Dworak et al. [78]. This classification system was originally developed to evaluate regression of rectal cancer after neoadjuvant radiotherapy and comprises different types of necrosis and fibrosis with specific changes of vascular and cellular morphology. Statistical considerations The sample size was calculated using the primary endpoint, i.e. progression-free survival (PFS). Based on the literature, a median PFS of 10 months or less was considered to be of no further interest (treatment not promising). Alpha (one-sided) was set to 10% and beta was set to 20% (acceptable error rates for phase II trials [79]). Assuming exponentially distributed progression times and a target median PFS of 14 months (treatment promising), at least 39 events (progressions or deaths) have to be observed.
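This event-number calculation can be reproduced from the stated design parameters. Under an exponential model the log of the estimated median is approximately normal with variance 1/d, where d is the number of observed events (in the spirit of the Lawless approximation cited below); the back-of-the-envelope version below yields 40 events, essentially matching the protocol's figure of at least 39, with the small difference presumably due to rounding or the exact variance formula used.

```python
from math import ceil, log
from scipy.stats import norm

# Events needed to distinguish a median PFS of 14 months (promising)
# from 10 months (not promising), exponential model, one-sided
# alpha = 0.10, power = 1 - beta = 0.80.
alpha, beta = 0.10, 0.20
m0, m1 = 10.0, 14.0                            # medians under H0 / H1, months
z = norm.ppf(1 - alpha) + norm.ppf(1 - beta)   # 1.2816 + 0.8416
d = ceil((z / log(m1 / m0)) ** 2)
print(d)  # -> 40
```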
The primary analysis will be based on the intention-to-treat (ITT) analysis set, which consists of all patients who entered the study. A detailed description of the statistical analysis methods will be given in the Statistical Analysis Plan, which will be finalized prior to database lock.

Data collection and quality assurance

Patient data are collected in an electronic case report form (eCRF) at the data centre of the Center for Clinical Studies Regensburg in collaboration with the Coordination Centre for Clinical Trials Duesseldorf. Consistency checks will be performed on newly entered forms and queries issued in case of inconsistencies. Archiving of trial documents and trial data is performed according to the internal SOPs of the Center for Clinical Studies Regensburg. The originals of all essential trial documents are filed in the Trial Master File (TMF) and archived for at least 15 years. The site-specific documents in the Investigator Site File (ISF) will be archived at the site for at least 15 years. On-site monitoring will be performed by an external CRO (multiservice-monitoring, Regensburg, Germany), adapted according to the site accrual.

Ethical and legal aspects

The trial will be conducted according to the guidelines of Good Clinical Practice (GCP) and the ethical principles described in the Declaration of Helsinki. The study protocol was approved by the leading ethics committee (Ethikkommission an der Universitaet Regensburg) and the associated ethics committees, and was also subject to authorization by the national competent authority (BfArM), as mandated by federal law. The study was assigned the EudraCT number 2009-014040-11 and is registered at ClinicalTrials.gov (NCT01540344).

Discussion

The COMBATAC study is designed to evaluate the feasibility and efficacy of CRS and bidirectional oxaliplatin-based HIPEC as an additional treatment option for selected patients within an interdisciplinary combined treatment concept consisting of standard-of-care pre- and postoperative systemic chemotherapy. It is beyond question that systemic chemotherapy is the standard of care in patients with advanced-stage CRC and peritoneal carcinomatosis. Although the oncological outcome of patients with advanced-stage CRC, including the subgroup of patients with peritoneal carcinomatosis, has improved since the introduction of combined chemotherapeutic regimens and new drugs, the results of systemic therapy for patients with peritoneal carcinomatosis are still unsatisfactory [44]. Thus, additional treatment options should be evaluated.
The existing data show that CRS and HIPEC may improve the long-term survival of selected patients with peritoneal carcinomatosis of colonic origin [59,60]. Moreover, hyperthermic peritoneal perfusion with oxaliplatin in combination with synchronous intravenous application of 5-FU/folinic acid seems to improve the efficacy of HIPEC in comparison to a mitomycin C-based intraperitoneal treatment regimen, and may additionally contribute to better local disease control [46,65]. Perioperative morbidity and mortality do not seem to be increased by the intensified oxaliplatin-based HIPEC regimen [81]. Nevertheless, the timing of surgery including HIPEC, the perioperative treatment and the sequence of the therapeutic interventions are still a matter of debate. The intensified systemic treatment strategy with preoperative chemotherapy may lead to increased rates of complete macroscopic cytoreduction and, together with the postoperative treatment, to better control of distant metastasis and tumor recurrence. However, there is no prospective study available evaluating the clinical and oncological outcome after standard-of-care chemotherapy including targeted anticancer therapy in combination with CRS and HIPEC. Thus, the COMBATAC study is expected to give further information about the efficacy of this promising therapeutic option as an inherent part of a multidisciplinary treatment concept.

Conclusions

To our knowledge, the COMBATAC study is the first prospective clinical trial investigating the feasibility and efficacy of CRS and bidirectional oxaliplatin-based HIPEC within an interdisciplinary treatment regimen with pre- and postoperative systemic chemotherapy including cetuximab.
Forensic Investigation of Homicide or Bodily Injury of a Newborn by His Mother

In Romanian criminal law, the offense of homicide or bodily injury of a newborn committed by the mother is a special form of homicide, with specific sanctioning rules applicable under conditions of physical or psychological disorder. In Romania, the term homicide of a newborn has a narrower meaning than the term infanticide. Infanticide refers to the murder of a child and is treated in the laws of other states as a form of qualified murder. The homicide of the newborn does not refer to the suppression of the life of any child, but only to the killing of a newborn by his mother, immediately after birth but no later than 24 hours afterwards. The forensic investigation involves extensive and complex activity, shaped as a rule by the mother's actions from the moment of birth to the moment of mutilation, hiding or abandonment of the corpse. It is necessary to reconstruct the crime scene by identifying the place where the birth occurred, where the mother committed the offense, the area where the newborn's corpse was discovered, and parts of the body or traces indicating that it was incinerated, buried, devoured by animals and so forth. Equally important are the identification of the trail between the place of birth and the place where the corpse was abandoned or hidden, the place where the tools used to commit the crime were discovered, the objects used to transport the corpse or to suppress the life of the newborn, as well as issues related to legal medicine.

Introduction

Current Romanian penal law devotes a distinct title to offenses against the person. Through these incriminations, life, physical integrity and health are protected from the time of birth until death occurs in a natural way. The offense of murdering a newborn by the mother is a variety of the offense of murder committed in specific circumstances, which justifies distinct incrimination with an attenuated penal sanction. The act of homicide committed by the mother has a narrower meaning than the term infanticide. The latter refers to the murder of a child more than 24 hours after birth and is treated in the laws of other states as a form of qualified murder (Iftenie and Boroi 2002, 99). In Romanian law, the attenuation presupposes the existence of certain psychological-physiological states caused by birth that are not equivalent to a state of unconsciousness, but that are capable of explaining, to a certain extent, the killing of a newborn. The legislator sanctions this type of act committed by the newborn's mother more leniently than the offense of murder, recognizing the possibility that such states of disorder may occur. In order to protect the mother who has killed her own newborn under conditions of physical or mental disorder, with diminished discernment, from a more severe punishment, it was considered necessary to introduce a distinct article delimiting the homicide of the newborn from the offense of murder (Vasile 2013, 107). The victim of the offense must be a newborn; it is of no importance whether the infant is a child born within or outside marriage. The condition of being a newborn is tied to the birth process. Such a process starts with specific uterine contractions, accompanied by pain, followed by the gradual detachment of the child from the mother's body, and is completed with his expulsion.
The newborn is considered to be the child who still carries the signs of recent birth, but no more than 24 hours afterwards (Cristiean and Buzatu 2008, 310). For the offense to be committed, it is essential that the newborn be alive at the time the deed is committed. It is important that the birth process is completed, that the fetus is expelled and that it begins a life independent of the mother's. If the act was committed before the end of the birth process, the offense of bodily injury to the fetus will be retained instead. In the judicial practice of our country, it was decided that a mother who kills her newborn two weeks after birth, after discharge from the hospital, does not commit this form of homicide, regardless of her mental state at the time of the criminal act, because the condition that the deed be committed immediately after birth and no more than 24 hours thereafter is not fulfilled. The 24-hour delimitation period thus has significant consequences for the legal classification of the offense. A homicide or injury of the newborn is the killing or injury caused by the mother only up to the 24-hour limit (Iftenie and Boroi 2002, 101). The same offense is also sanctioned when the act took place within this period but the newborn's death occurred subsequently. Once the 24-hour period has passed, even if the state of postnatal psychiatric disorder persists, the act will no longer constitute the offense of homicide or injury of the newborn committed by the mother, but the offense of murder. Determination of the state of psychological disorder is based on scientific and medical data, following a forensic expertise. The mother's attitude of pursuing or accepting the result of her actions must be spontaneous, determined only by the state of psychological disorder, and must manifest itself within the time when that condition exists, premeditation being impossible. If the mother acted intentionally, she would be criminally responsible for committing the offense of homicide or bodily injury of a newborn (Iftenie and Boroi 2002, 103). The forensic investigation of the offense of homicide of a newborn is a very important matter, as it requires great attention, focus, objectivity and involvement from the entire team taking part in the investigation at the crime scene. In the case of this offense, the crime scene includes a fairly large and varied area, determined by the mother's actions from the moment of birth to the moment of mutilation, hiding or abandonment of the corpse. The victim's forensic characterization includes the following defining signs, and the research must establish the newborn condition, the duration of intrauterine and extrauterine life, and the degree of maturity and viability of the fetus. Obstetric science considers that the length of a mature newborn is 47-62 centimeters (more frequently 50-52 centimeters) and that of a premature one less than 45 centimeters (Huidu 2010, 204). The weight of a mature newborn is considered to be around 2500-3500 grams. The weight of the fetus in the 7th month of intrauterine life is between 1000-1400 grams, in the eighth month 1500-2400 grams, and in the ninth month around 2500-3500 grams (Huidu 2010, 204). Viability is to be understood as the ability of the baby to survive outside the maternal body.
From a clinical point of view, the fetus born in the second half of the 6th month of intrauterine life is considered viable, with a length of not less than 35 centimeters and a weight of at least 900-1000 grams, and may survive outside the woman's body under special conditions (Huidu 2010, 206). The absence of the newborn condition leads to a different legal classification of the offense and to the application of investigative techniques other than those characteristic of the offense of homicide of the newborn. The criminal investigation team at the crime scene must reconstruct the scene of the offense by identifying the place where the child was born, where the offense was committed, the area in which the newborn's cadaver was discovered, and parts of the body or traces indicating that it was incinerated, buried, devoured by animals and so forth (Lăpăduși and Iancu 2004, 164). The team must also establish the mother's trail from the place of birth to the place where the newborn's corpse was abandoned, and discover the instruments, objects, containers and so forth used during the birth, or for packaging and transporting the body or suppressing the life of the newborn (Vasile 2013, 110).

The criminal forensic investigation at the place of birth

A first aspect that the research should clarify is the place where the child was born, injured or killed. After determining the place of birth, the criminal investigator can establish whether the mother gave birth under conditions assisted by a nurse to a child whom she injured, killed or abandoned immediately after birth. Although the place is not relevant to the legal classification of the offense, it is particularly important for establishing the evidence of the criminal offense of homicide of the newborn. In order to achieve the objectives inherent in such an investigation, the limits of the space where the child was killed (room, apartment, dwelling, courtyard, forest, field, and so forth) must be established (Bulgaru-Iliescu, Costea, Enache, Gheorghiu, Astărăstoae 2013, 604). Considering the legal provisions and judicial practice, the place where the crime was committed differs from case to case, in relation to the nature of the deed, the multitude of methods and means used for that purpose, and the specifics of the illicit activity conducted by the perpetrator. The notion of a crime scene in the case of the homicide of a newborn child covers a fairly diversified area, generated mainly by the actions of the perpetrator from the time of birth to the moment of hiding or abandoning the corpse. For example, in the case of births unassisted by a medical nurse, owing to her physiological state the pregnant woman cannot travel a long distance. Thus, births of this kind occur in places close to the house where the mother usually lives, and the place of an unassisted birth often becomes the place where the newborn child is injured or killed by his mother (Tudor 2010, 183). Among the traces specific to recent births, the following can be mentioned: the presence of afterbirth on the mother's body, of blood, amniotic fluid, rupture of the perineum, the presence on the newborn's body of blood traces, meconium and vernix caseosa (a white, cheese-like protective material that covers the skin of a fetus), marks on the umbilical cord, and so forth.
In the place where the birth took place, bloody clothes, gauze, obstetric instruments, containers used for collecting birth-related fluids, and so forth can be found (Dermengiu, Alexandrescu 2014, 134). At the crime scene, on the corpse and on objects within the perpetrator's sphere of interest, prints of her hands can be traced. Fingerprints can also be found on objects and instruments used by the perpetrator to commit the crime: scissors, obstetric instruments, ropes, and so forth. These traces can likewise be found on the objects in which the newborn was abandoned: boxes, plastic bags, sacks, and so forth. In addition to papillary prints, traces of feet or footwear can also be found at the crime scene. By analyzing them, the expert can determine the approximate age, height and weight of the perpetrator, as well as her anatomo-pathological particularities. Knowing the typical features of the place where the homicide of a newborn happened has special significance for carrying out procedural, forensic and investigative-operative activities aimed at fully investigating and uncovering the deed (Lăpăduși and Iancu 2004, 169). The objectives of the crime scene investigation are to discover, examine, preserve and collect the traces present, and to determine the circumstances in which the offense was committed and the data that help identify the perpetrator (if her identity is unknown) (Vasile 2013, 113). The investigation at the crime scene includes a static stage and a dynamic stage. The static phase involves examining the traces found there, the state of the place where the newborn's body was found, and the biological human traces deriving from birth, menstruation, breastfeeding, bleeding of various causes, or other fragments of tissues or organs. The characteristics of wounds, when they exist, are noted: the type of lesions, their location, number, color, shape, dimensions, orientation, the appearance of the edges and angles and their contents. Objects must not be cleaned or packed in cotton wool, so that the traces are not erased (Budăi and Gavriș 2006, 44). The examination relies on close collaboration between the forensic doctor and the criminologist. Orientation photographs, photographs of the main objects and sketches should be made by the criminologists on the team. The sketches record the relationship between the child's body and the surrounding objects. To describe the location, the cardinal points are used, along with nearby buildings if the scene is in an open space, room objects and biological traces. It must be mentioned that the place where the newborn's body is discovered does not always coincide with the place where the bodily injury occurred. Most of the time, the aggressor mother attempts to hide the traces of her deed, abandoning the newborn's body in a variety of places, such as garbage, canals, stairwells, trains, sparsely populated places, and so forth. In the dynamic stage, the objects in the investigated perimeter are examined and then moved. An external body examination is performed after complete undressing. Violent injuries, biological and non-biological traces, external signs of different illnesses, and particular marks, including those resulting from therapeutic procedures, are noted (Budăi and Gavriș 2006, 46). Photographic recording of the traces and of the child's body takes place, as far as possible in color, to highlight traumatic injuries.
If the child's body was dismembered by the mother in order to conceal identity or make identification impossible, each part of the body, together with the place and position in which it was found, is photographed, and after reconstitution the body is photographed in its entirety. The probable date of death is determined, as far as possible, together with the significance of the lesions of violence. The collection, packaging and dispatch of evidence is carried out for laboratory examinations. If mechanical asphyxiation by compressing the throat with a cord is suspected, the material evidence is collected to the extent that it is identified. If drowning is suspected, water samples are collected. If the victim's body was buried, exhumation is required so that the body can be examined by autopsy. The forensic doctor who performed the autopsy will draw up an expert report containing his conclusions about the manner of death, the medical cause of death, premortem traumatic wounds, the mechanism of their production, the causal link between the traumatic injuries and the death, the results of laboratory investigations performed on the biological material taken from the corpse and on any suspected substances discovered, the biological traces found on the body of the newborn, the probable date of death, and any other elements that may contribute to the elucidation of the case.

Forensic examination of the mother

The status of the mother, which the law requires, must be established by the criminal investigation authority in each case. Jurisprudence demonstrates that the offense of homicide or injury of a newborn is most frequently committed under the conditions of a birth unassisted by a medical nurse, which excludes a birth certificate capable of contributing to the identification of the perpetrator. Clarifying this aspect through the obstetrical examination of the mother is therefore of great importance. In some situations, women suspected of committing the crime are identified at relatively short intervals, and it is possible to find the signs of recent birth. At other times, the examination is performed at longer intervals, favoring the finding of multipara signs (signs of women who have had more than one pregnancy). It has to be demonstrated whether the suspected woman gave birth or not, that is, whether she fulfills the mother's personal circumstances from a legal point of view and whether she was in the childbed period from a medical point of view (Dermengiu and Alexandrescu 2014, 136). The forensic expertise of the woman suspected of killing a newborn involves three stages. The forensic examination is made in order to establish the retrospective diagnosis of pregnancy and childbirth, how the child was born, and the period that followed. The clinical examination of the mother includes a general clinical examination, a genital examination, and an examination of the products and objects found at the place of birth. The general clinical examination highlights the changes characteristic of pregnancy (hyperpigmentation; breasts increased in volume; presence of colostrum secretion in the early days and of dairy secretion after 2-4 days; recent stretch marks with a rosy color, old stretch marks with white, pearly shades). In the genital examination, the involution of the uterus will be assessed, with an average rate of 1.5-2 centimeters a day.
There may be cases of superinvolution, at a rate of over 2 centimeters a day, or underinvolution, at a slow rate of less than 1.5 centimeters per day. In the first hours, the uterus is reduced to the size of a pregnancy corresponding to the fourth month; after 12-16 days the pelvic organ is recovered. Within 5-6 weeks, the uterus returns to its usual shape and volume, and the cervical canal is wide open for the first 3-5 days (Beliș 1995, 399). The inferior segment of the uterus returns to an isthmus at 5-6 days with the openings permeable; the external opening is wide open, edematous, with small fissures, after the first 2-3 days. The internal opening allows the penetration of one finger, and the external one, a transverse slot with somewhat lateral edges, permits the penetration of 1-2 fingers. The cervix, with the internal opening closed and the external opening extensible, allows the penetration of one finger up to 10-12 days. The cervical canal regains its normal size at 12-14 days, and the external opening is completely restored after 14-16 days (Beliș 1995, 399). The vaginal canal is wide in women with multiple births, the vaginal walls are extensible, the folds are effaced, and the mucosa is purple. An intact hymen is absolute proof of the absence of a vaginal birth. The perineum may show ruptures or scars. The scarring of perineal ruptures proceeds so that, at the end of the first week, granulation tissue appears. Colostrum occurs after birth as a mammary secretion and is replaced by dairy secretion after 5-6 days. Breast milk appears 5-6 days after birth and persists beyond the childbed period. If the woman does not breastfeed, the mammary secretion will gradually decrease, disappearing after 8-10 weeks (Beliș 1995, 400). In laboratory examinations, biological pregnancy tests remain positive up to 10 days after birth, and the smear of mammary secretion reveals muriform corpuscles. As for the examination of the products and objects found at the place of birth, the placenta, umbilical cord, amniotic fluid, meconium, blood stains, and so forth will be looked for. Maternal death after giving birth excludes the criminal action for the act of homicide of the newborn. For this reason, it is necessary to evaluate the pregnancy or birth signs of the body. These signs involve macroscopic and microscopic visceral changes, placental and ovarian remnants or the presence of a dead fetus in the uterine cavity. In this situation, the fetus may be mummified, reduced in volume in the absence of amniotic fluid, and of a yellowish-brown color (Iftenie and Dermengiu 2014, 495). In the 5th-6th months, the fetus may undergo a maceration process: the tissues are soaked with serosity and the skin is brown, covered with blisters containing a reddish serosity. It should be noted that in the 7th month the umbilical cord reaches 40 centimeters, and at birth 47-52 centimeters. The premature fetus has no hair but lanugo (hair with a downy look). The newborn at term also has secondary hair, 2-3 centimeters long (Dermengiu and Alexandrescu 2014, 342).

Psychiatric forensic examination of the mother

The psychiatric forensic examination will be carried out immediately after birth in order to determine the psychiatric disorders that could have caused the offense of homicide of the newborn or, if this is impossible, at an undetermined interval after birth, to reconstruct, on the basis of the case files and the circumstances in which the child was born, any psychiatric disorders that led to the act being committed.
The examination must be complete and detailed, covering the basic psychic functions. It is of overwhelming importance because it captures the psychological picture of the perpetrator and the psychic state she was in at the time of committing the crime. The psychiatric forensic examination is mandatory in cases of homicide or injury of the newborn child or of the fetus by the mother. The psychiatric examination investigates the etiology of the woman's mental disorders during childbirth. These can be systematized into pathological conditions related to pregnancy and its development, psychiatric disorders in the toxemia of pregnancy, confusional states caused by cerebral anemia due to hemorrhage, mental disorders accompanying the obstetrical act, and pre-existing latent psychiatric conditions triggered or aggravated by pregnancy, birth or psychosis (Dragomirescu, Hanganu and Prelipceanu 1990, 80). Starting from the premise that psychiatric expertise has as its objective the establishment of a person's mental state, it can be seen that, in investigating and judging the crime of homicide or bodily injury of the newborn, the main purpose of the expertise is to prove the state of psychological disorder the mother was in at the time of injuring or killing the newborn child. In the investigation of the offense, the psychiatric expertise should clarify whether there was any mental illness, the causal link between the psychological disorder and the criminal activity committed, and whether the mother was, at the time of birth, in a state of disorder caused by it (Dragomirescu, Hanganu and Prelipceanu 1990, 81). From a medical point of view, it is accepted that any birth causes profound changes in the woman's body. The notion of psychological disorder must take into account elements and facts that result in the exacerbation of special mental states, related in some cases to the process of birth. For the legal classification of the offense it is necessary to establish with certainty the existence of the mother's disorder at the time of committing the deed. The expertise should be complex, should contain psychological tests and should pronounce on the structure of the personality of the examined woman (Ionescu 1997, 205). In order to reach conclusions within the psychiatric expertise performed on the investigated mother, the competent judicial bodies must collect data on the social and family conditions of the subject under examination, as well as her medical history. In jurisprudence, psychiatric expertises have in many situations identified conditions related to limited intellect, an abolished maternal instinct, oligophrenia of varying degrees of severity or impulsive psychopathy, these being considered in determining the state of disorder during childbirth. The conclusions of the forensic expertise report must first specify the basic diagnosis and the diagnosis of the suspect's or defendant's current state, with the exclusion of simulation (Vlădoiu 2007, 236). The report should also include the essential features of the personality of the person being examined, reflected in the diagnosis mentioned and in the deviant behavior, as well as the disorder that characterizes her current state.
It is worth mentioning the evolutionary stage of these disorders: whether or not they are episodic, whether they were triggered at the time of committing the deed and whether they pose a risk of aggravation or of becoming chronic. Last but not least, the report should mention whether, through pathological traits and personality and behavioral disorders, favored by exogenous or endogenous factors, the perpetrator presents a social or criminal danger, which may also underpin the argumentation of the proposed preventive and recovery measures. The conclusions must also mention the causal link between the features, personality or main manifestations of the mental illness and the constituent elements of the offense, and whether these psychopathological disturbances or manifestations alter discernment (Vlădoiu 2007, 238). Most importantly, the psychiatric forensic examination report should clarify whether, at the moment of killing or injuring the newborn, the mother was in a state of psychological disorder or not, because otherwise she will be held liable not for the offense of homicide or injuring the newborn, but for the crime of murder or bodily injury. Women who have killed newborns often suffer from anxiety and depression. Anxiety usually occurs as a reaction to stress or to other complex pathological disorders. It can often diminish the individual's intellectual performance, leading to behavioral disturbances and even criminal behavior (Cartwright 2010, 167). In order to differentiate the offense of homicide or bodily injury of the newborn from the offense of murder, the investigation must make clear whether or not the mother who killed the newborn child immediately after birth, but not later than 24 hours afterwards, was in a state of birth-related disorder.

The forensic expertise of the newborn

Clarifying the cause of injury or death is of particular importance, given that it is one of the indispensable conditions for the crime of homicide of the newborn. For this offense, it must be clear from the evidence that the injury or death is violent and results from the action or inaction of the mother. The aptitude of the act of violence to suppress the life of the newborn child is deduced from its materiality, including the instrument used and considered lethal, the large number of blows applied, their intensity and their orientation toward the vital regions of the victim's body, and so forth. In the case of an offense of homicide or bodily injury of the newborn, given the child's fragility and total dependence on the person who gave birth to him, the actions used against the victim are usually of lower intensity (Dungan, Medeanu and Pașca 2010, 148). At the same time, the research needs to clarify whether the death of the newborn was accidental or a homicide. In this regard, the forensic doctor must conduct a thorough examination to determine whether the signs of a violent death are present and what triggered it, establishing whether death was accidental or provoked.

Diagnosis of children born at term

The diagnosis of children born at term is based on certain morphological elements. In terms of weight, at birth girls should weigh approximately 2800-3200 grams and boys 3000-3500 grams. Girls should measure 48-51 centimeters and boys 50-54 centimeters. If they are less than 45 centimeters, they are considered premature.
The cranial perimeter should measure about 35 centimeters and the thoracic perimeter approximately 31 centimeters (Moraru 1967, 461). The skin should be pinkish and elastic, with lanugo (very thin, soft, usually unpigmented hair) on the forehead and vernix caseosa on the body. The hair is 1-3 centimeters long, and the fingernails exceed the fingertip pulp. The external genital organs are normally formed, with the testicles in the scrotum, the vulvar slit closed and the labia majora covering the labia minora. From the third month the embryo becomes a human fetus, and in the last three months of pregnancy the external appearance undergoes major changes, the disproportion between the head and the other segments diminishing (Ioan 1967, 466). Age can be calculated from the length of the fetus, from the identification of the ossification nuclei or from the appearance of the dental alveoli. In the sixth month the alveoli of the incisors begin to form, and in the seventh and eighth months the molars are formed.

The case of homicide or bodily injury of a newborn committed by the mother

In the case of a birth assisted by a medical nurse, in terms of violence on the newborn, round or oval bruises and semilunar excoriations can occur on the scalp and in the facial and cervical regions. In contrast, injuries from criminal asphyxia by strangulation are unevenly located and irregular in shape. Other traumas may consist of rupture of the buccal walls, facial lesions and the production of semilunar ecchymoses mimicking nail marks, compression of the eyeball through digital pressure, cervical damage by hyperextension, cranial fracture produced by pressing the two parietal bones and, last but not least, a jaw fracture in the medial portion (Tudor 2010, 182). In the case of a homicide committed by the mother, a forensic autopsy of the fetus or newborn is ordered. A forensic autopsy of a fetus is ordered to establish intrauterine age, the capacity for extrauterine survival, the manner and cause of death, and parentage when required. The forensic autopsy of a newborn is ordered to determine whether a living child was born and to establish viability, the duration of extrauterine survival, the manner and cause of death, the date of death, and whether medical care was given after birth. The objectives of this operation are general and specific. The general ones concern whether the newborn is the presumed mother's child, whether he was born alive, whether death occurred immediately after birth, the duration of extrauterine life, the quality of care given to the newborn, the precise determination of the date and manner of death, as well as the traumatic lesions, their production mechanism and their causal link with death. The specific objectives are to demonstrate the existence of extrauterine life and the violent nature of death (Scripcaru, Astărăstoae and Scripcaru 2005, 112). Externally, the fetus or newborn's body will be examined, as well as the textile material used for wrapping it; the packaging will be described, as will any of the evidence. These are collected as biological proof for forensic and serological-genetic analysis. Identification and individualization take place, noting the sex, height, weight, gestational age, extrauterine life, and other anatomical features that the forensic doctor deems relevant. Then the signs of real death are noted, such as lividity and postmortem rigidity, and signs of putrefaction. The body is weighed and measured.
Weight is assessed taking into account the dehydration processes the fetus has undergone. The unviable fetus, which after the first seven months has not reached 1000 grams, is considered an abortion (Dermengiu and Alexandrescu 2014, 331). The head size is recorded: cranial diameter and circumference, circumference of the chest and abdomen, bihumeral, bitrochanteric and bicristal diameters, the distance between the xiphoid appendix and the umbilical ring, and the distance from the umbilicus to the pubis. The insertion level of the umbilical cord on the abdominal median line is lower in a fetus of low gestational age. The external exam records the appearance of the skin, with hairs or downy-looking hair on the back and shoulders in fetuses and premature babies. In premature babies, the nails of the upper limbs do not cover the fingertip pulp, the subcutaneous tissue is poorly represented, the head is small, the face is triangular, the abdomen is voluminous, and the chest is enlarged at the base. Meconium is present in the newborn. Also, in the external examination, the existence of possible deformations should be mentioned, the length of the hair, the condition of the oral cavity, the consistency of the nasal cartilage and of the ear pavilions, the presence or absence of the testicles in the scrotum, or whether the labia majora cover the labia minora. The umbilical cord should be described in detail, with an indication of its length, whether it is attached to the placenta, and what its general appearance is. The recently born child has a consistent, pearly-white cord. In cases of death by intrauterine asphyxia, the umbilical cord is greenish and impregnated with meconium. Its end may be cut or torn. The umbilical plaque scars after about 2 weeks. For the placenta, the shape, appearance of the faces, diameter, weight, color, consistency and structure should be described. It should be specified whether it has been completely removed or is attached via the cord to the fetus; in the mature newborn it should weigh 500-550 grams (Moraru 1967, 464). The external examination is performed by anatomical regions. In the face, the eyeballs are examined. In a premature newborn child, the pupillary membrane is rich in capillaries; it disappears in the eighth month of pregnancy. In cases of death within 24 hours, the cornea is opalescent. In the auricular region, secretions may appear in the external auditory canal. As for the anal region, permeability is tested with a probe. In boys, the testicles enter the inguinal canal in the 7th month and are present in the scrotum at term. Examination of the bone system targets the mobility of the skull bones in the dehydrated fetus, the dehiscence of the sutures in obstetrical trauma, and possible malformations such as hydrocephalus or meningocele. The internal exam begins with an anterior medial cranio-caudal incision, from the level of the lower lip, descending over the neck, chest and abdomen.

Determining the medical cause of injury or death of the fetus or the newborn

According to the manner of death, intrauterine deaths are divided into violent and nonviolent deaths, and the violent ones can be accidental or homicidal. They may be due to mechanical factors such as traumatic injuries to the pregnant woman or the fetus, physical factors such as irradiation or burns with hot liquids or flame, or chemical factors such as poisoning. In the seventh month, strong abdominal trauma can affect the birth.
A differential diagnosis must be made between the lesions due to obstetric maneuvers and those resulting from acts of violence. If death was non-violent, there may be general maternal causes such as pneumonia, viral infection, malaria, syphilis or cardiopathy, local maternal causes, and fetal causes such as placental stroke or a twisted umbilical cord, and so forth. During pregnancy, death may be pathological or a consequence of trauma suffered (Budăi and Gavriș 2006, 78). Depending on the maternal and fetal causes, death can be due to infections, uterine and vaginal malformations, fetal edema, macrosomia, placental detachment, the risk of massive amniotic fluid aspiration, or umbilical cord pathology. At birth, asphyxia may occur during normal labor as a result of uterine contractions with a decrease in gas exchange. Postpartum death of the newborn may be due to a pathological cause or to violence suffered. Aggression exerted on the newborn involves mechanical asphyxia by obstruction of the airways by hand or with textile material, drowning in lakes or rivers, burial in sand or earth, mechanical asphyxia by thoraco-abdominal compression, asphyxia by placing the child in plastic bags, exposure to high temperatures, or electric shock. The medical examination of the newborn's body must meet general and specific objectives. It must be established whether the newborn belongs to the presumptive mother, whether he was born alive, whether death occurred immediately after birth, the duration of extrauterine life, the quality of care, the date and cause of death, the traumatic lesions, their mechanism and their causal link with death (Beliș 1995, 412). Essential in the offense of homicide or bodily injury of a newborn by his mother is the demonstration of extrauterine life. The most important criterion is the achievement of pulmonary breathing. It does not matter whether the newborn has completely detached from the mother's body, whether the umbilical cord has been severed, or whether the placenta has been expelled.

Methods of homicide or bodily injury of the newborn

In forensic practice, the notions of active and passive homicide or injury of the newborn are used. For active homicide or injury of the newborn, statistics indicate that the most commonly used method is asphyxia in its various forms, employed in 48.5% of cases. In most cases there are no traces of violence, only the general signs of asphyxia, which require certification through histopathological examination. Asphyxia by introducing foreign bodies into the mouth and pharynx appears in 4% of cases. The foreign bodies can be cloth, cotton, paper or toys, identified in their entirety or as fragments in the oropharynx (Cartwright 2010, 177). In many cases there are also traces of violence in the form of bruising/ecchymosis, excoriations or erosions of the mucous membranes. Suffocation by burial in the ground occurs less frequently; the hypoxic process is prolonged, and grains of sand, soil or other foreign bodies are identified in the respiratory tract. Strangulation is present in 7% of cases, committed with soft objects such as a scarf or handkerchief, or with harsh and semi-rigid items: chains, a belt, wire, and so forth. Thoraco-abdominal compression occurs in 3% of cases. Drowning of the newborn occurs in 13.5% of cases. Repeated blows to the cephalic extremity with hard objects were found in 12% of cases; the hitting is usually multipolar, with multiple hematomas and fractures, and is accompanied by meningeal hemorrhage.
Homicide of the newborn by physical factors, burning or scalding, occurs in very rare cases, and homicide committed by poisoning does not appear in the statistics. Passive homicide or injury of the newborn is manifested by abandoning the newborn under cold conditions or in isolated places after birth.

Conclusions

Forensic medical and medical-social studies in cases of homicide or bodily injury of a newborn by his mother, as well as experimental research, allow conclusions to be drawn that represent the essence of all the research on this offense over the years. The place where the woman lives or the environment in which she spends her daily life has a decisive influence on the number of such acts that happen in our country. Inhuman living conditions refer both to the shortcomings inherent in any critical situation and to the affective state of the mother. Likewise, the situation of those who do not have a job is raised, this aspect generating insufficient moral balance and acting as a factor of negative influence. The statistical surveys conducted lead to the conclusion that two thirds of the perpetrators are very young women, many of them minors and without occupation. The lack of education on sexual life, both in the family and in society, also contributes to such homicidal deeds. In examining the causes that have led to the homicide or bodily injury of a newborn, forensic expertise has a particularly important role. This role is conferred by the existence of a complete dossier showing, as accurately and objectively as possible, all the phases and means used. Nevertheless, the homicide or bodily injury of a newborn by his mother is difficult to investigate, owing to the impediments to determining the cause of death, to proving the viability of the newborn, and to establishing that the death of the newborn occurred immediately after birth and no later than 24 hours afterwards.
Immunoinformatics, molecular docking and dynamics simulation approaches unveil a multi-epitope-based potent peptide vaccine candidate against avian leukosis virus

Lymphoid leukosis is a poultry neoplastic disease caused by avian leukosis virus (ALV) and is characterized by high morbidity and variable mortality rates in chicks. Currently there is no effective treatment, and vaccination is the only means to control it. This study exploited immunoinformatics approaches to construct a multi-epitope vaccine against ALV. The ABCpred and IEDB servers were used to predict B and T lymphocyte epitopes from the viral proteins, respectively. Antigenicity, allergenicity and toxicity of the epitopes were assessed, and the suitable epitopes were used to construct the vaccine with a suitable adjuvant and linkers. Secondary and tertiary structures of the vaccine were predicted, refined and validated. Structural errors, solubility, stability, immune simulation, dynamics simulation, docking and in silico cloning were also evaluated. The constructed vaccine was hydrophilic, antigenic and non-allergenic. The Ramachandran plot showed most of the residues in the favored and additional allowed regions. The ProSA server showed no errors in the vaccine structure. Immune simulation showed significant immunoglobulin and cytokine levels. Stability was enhanced by disulfide engineering and assessed by molecular dynamics simulation. Docking of the vaccine with the chicken TLR7 revealed competent binding energies. The vaccine was cloned in the pET-30a(+) vector and efficiently expressed in Escherichia coli. This study provided a potent peptide vaccine candidate that could assist in tailoring a rapid and cost-effective vaccine to help combat ALV. However, experimental validation is required to assess the vaccine's efficiency.

Materials and methods

The immunoinformatics steps for the in silico vaccine design are visualized in the flow chart presented in Fig. 1.

ALV proteins' sequence retrieval

The ALV presents three proteins: a polymerase protein, an envelope protein, and a transacting factor protein, with the accession numbers NP_040550.1, NP_040548.1 and NP_040549.1, respectively. The sequences of these three proteins were retrieved from the National Center for Biotechnology Information (NCBI) at https://www.ncbi.nlm.nih.gov/protein 26.
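As a minimal sketch of this retrieval step, the accessions named above can be fetched programmatically with Biopython's Entrez interface rather than through the web page; the e-mail address below is a placeholder that NCBI requires for identification.

```python
from Bio import Entrez, SeqIO

Entrez.email = "you@example.org"  # placeholder; NCBI requires a contact address

# Accessions from the text: polymerase, envelope, transacting factor
accessions = ["NP_040550.1", "NP_040548.1", "NP_040549.1"]
handle = Entrez.efetch(db="protein", id=",".join(accessions),
                       rettype="fasta", retmode="text")
records = list(SeqIO.parse(handle, "fasta"))
handle.close()

for rec in records:
    print(rec.id, len(rec.seq), "residues")
```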
Subcellular localization and transmembrane topologies of the virus proteins

Subcellular localization of viral proteins is considered an important clue to the function of the immune cells and to judging the potential efficacy of vaccine targets 32. In addition, surface-localized proteins are among the best candidates for a recombinant vaccine, since they are the first molecular patterns of the pathogen contacted by the host immune system 32. For detection of the viral proteins' subcellular localization, the Phobius server (https://phobius.sbc.su.se/index.html) was used 33. The server provides a combined transmembrane topology (TMH) and signal peptide predictor.

Epitope prediction and conservancy

A total of 50, 13, and 3 strain sequences were retrieved for the polymerase, envelope, and transacting factor protein, respectively. These strains were used for epitope conservancy analysis and are presented in Table 1. The BioEdit program version 7.2.5, a multiple sequence alignment (MSA) tool, was used to align the strain sequences of each protein 34. The aligned sequences were analyzed in order to identify the conserved epitopes that act effectively against B and T lymphocytes. Epitopes that had 100% conservancy (no mutations) among the strains were selected for further analysis, while non-conserved epitopes were excluded.

T cell epitope prediction

Based on the Immune Epitope Database (IEDB) analysis resources at https://www.iedb.org/, different T cell epitope prediction tools were used 37,38. The reference sequence was used as the input for each protein analysis. Data on epitopes interacting with the major histocompatibility complex classes I and II (MHC-I and MHC-II) are not yet organized in the IEDB resources for chicken alleles. Accordingly, human alleles were exploited to predict T cell epitopes from the retrieved ALV proteins, as previously described 39,40.

Cytotoxic T cell epitope prediction

The IEDB prediction resource at http://tools.iedb.org/mhci/ provides a number of MHC-I binding prediction methods. In this study, the MHC-I-interacting alleles were predicted with the Artificial Neural Network method, NetMHC (ANN) 41. The human reference allele sets (HLA-A, HLA-B, and HLA-C) were used for the prediction process. Only conserved epitopes of nine amino acids in length that bound to alleles with a percentile rank equal to or less than 1 (≤ 1) were analyzed. The conserved cytotoxic T cell epitopes were further assessed by antigenicity, allergenicity and toxicity predictions.

Helper T cell epitope prediction

The IEDB MHC-II binding prediction tool (http://tools.iedb.org/mhcii/) was used to analyze the reference sequence of the ALV proteins for epitope prediction against MHC-II 37,38. The human allele reference sets (HLA-DP, HLA-DQ, and HLA-DR) were employed to search for promising epitopes. The analysis used the Neural Network Align method, NetMHCII version 2.2 (NN-align) 37,38,41, to find potential epitopes having a percentile rank score equal to or less than 10 (≤ 10). The core sequence and peptide lengths were set to 9 and 18 amino acids, respectively. The antigenic, allergenic and toxic evaluation of the conserved helper T cell epitopes was carried out using the VaxiJen v2.0, AllerTOP and ToxinPred servers, respectively.

Assembly of the multi-epitope vaccine

The primary assembly of the vaccine sequence was accomplished by fusing the predicted B and T cell epitopes that demonstrated conservancy and an antigenicity score of more than 1 and were shown to be non-allergenic and non-toxic. The selected B cell and T helper epitopes were fused by GPGPG linkers, while the T cytotoxic epitopes were fused by YAA linkers 42. The amino (N-) terminus of the vaccine was supported by β-defensin 3 (Q5U7J2) as an adjuvant, separated by an EAAAK linker. Moreover, the sequence was provided with a 6His-tag for purification and identification of the vaccine upon expression [42-45].
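To make the assembly step concrete, below is a minimal Python sketch of how such a construct can be concatenated from the selected epitopes. The placeholder sequences and the ordering of the epitope classes after the adjuvant are illustrative assumptions only; the actual 738-residue construct is defined by the epitopes listed in Tables 3-5.

```python
def assemble_vaccine(adjuvant, ctl_epitopes, htl_epitopes, b_epitopes):
    """Concatenate epitopes with the linkers named in the text:
    EAAAK after the adjuvant, YAA between CTL epitopes, GPGPG
    between HTL and B-cell epitopes, and a C-terminal 6xHis-tag."""
    parts = [adjuvant, "EAAAK"]
    parts.append("YAA".join(ctl_epitopes))                  # cytotoxic T cell block
    parts.append("GPGPG")
    parts.append("GPGPG".join(htl_epitopes + b_epitopes))   # helper T / B cell block
    parts.append("HHHHHH")                                  # 6His-tag
    return "".join(parts)

# Hypothetical placeholder sequences, not the published adjuvant or epitopes.
demo = assemble_vaccine("BETADEFENSINSEQ",
                        ["YLNKIQNSL", "FLDGIDKAQ"],
                        ["GQMVHQAISPRTLNAW"],
                        ["TPGPGVRYPLTFGW"])
print(len(demo), demo[:40])
```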
Secondary and tertiary structure prediction of the vaccine

Secondary structure prediction for the vaccine is freely available on the RaptorX server (http://raptorx.uchicago.edu/) 46. The secondary structure (SS), disordered regions (DISO) and solvent accessibility (ACC) outputs were used to characterize the α-helix, β-pleated sheet and coil content, the disordered residues, and the residue exposure, respectively. For tertiary structure prediction, the vaccine sequence was submitted to the same RaptorX server 46. The results were received as a PDB file that was further used for refinement and adaptation of the vaccine structure.

Determination of the stability of the vaccine

Disulfide bonding between the cysteine residues of a protein plays an important role in strengthening the protein's geometric conformation and enhances its overall stability 55. Disulfide-by-Design 2.0 (DbD2) (http://cptweb.cpt.wayne.edu/DbD2/) is a web-based tool that facilitates the design of disulfide bonds in a vaccine construct by substituting cysteine for particular amino acids in high-mobility, unstable regions of the protein 55, followed by the formation of disulfide bonds between the cysteine residues. Parameters such as intra-chain and inter-chain bonds and the "build Cβ for Gly" option were selected. For proper prediction of the bonds, the χ3 angle was set to −87° or +97° with a tolerance of ±30°, and the Cα-Cβ-Sγ angle was set to 114.6° with a tolerance of ±10° 55.
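A toy screen mirroring those two geometric thresholds might look as follows. The geometry values are assumed to be precomputed from the model (DbD2 computes them internally), so this is only a sketch of the filtering logic under the thresholds quoted above, not of the server itself.

```python
def dbd2_geometry_ok(chi3_deg, ca_cb_sg_deg,
                     chi3_targets=(-87.0, 97.0), chi3_tol=30.0,
                     angle_ideal=114.6, angle_tol=10.0):
    """Accept a candidate Cys-Cys pair when its modelled chi3 dihedral
    lies within +/-30 deg of -87 or +97 deg and its Ca-Cb-Sg angle lies
    within +/-10 deg of 114.6 deg."""
    chi3_ok = any(abs(chi3_deg - t) <= chi3_tol for t in chi3_targets)
    angle_ok = abs(ca_cb_sg_deg - angle_ideal) <= angle_tol
    return chi3_ok and angle_ok

print(dbd2_geometry_ok(-75.0, 112.0))  # True: near -87 deg, angle close to ideal
print(dbd2_geometry_ok(10.0, 114.6))   # False: chi3 far from both targets
```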
Immune simulation

To mimic the immune response to, and the immunogenicity of, the ALV vaccine in the host, the C-ImmSim server (https://kraken.iac.rm.cnr.it/C-IMMSIM/) was used 56. Two injections were given with the time steps set at 1 and 90 (the server treats each time step as 8 hours, and time step 1 represents the injection at time zero). The other simulation parameters were set to default. The measure of diversity (Simpson index, D) was interpreted from the plot 56.

Molecular dynamics simulation (MD)

The iMODS server (https://chaconlab.org/multiscale-simulations/imod) was used to analyze the collective motions of the vaccine protein 57,58. The server performs a normal mode analysis (NMA) in internal coordinates to determine the stability of the vaccine protein. It models the dynamics of the protein complex and provides various output data, such as deformability, eigenvalues, B-factors, variance maps, covariances, elastic networks of the atoms, and residue indexes in terms of magnitude and direction 57,58.

Prediction of discontinuous B-cell epitopes

ElliPro in the IEDB (http://tools.iedb.org/ellipro/) was used to predict the discontinuous B cell epitopes 59. The ElliPro tool predicts discontinuous and linear antibody epitopes based on the protein 3D structure. The prediction used the default parameters of the server 59; for instance, the minimum score and the maximum distance (Angstrom) for epitope selection were set to 0.5 and 6, respectively.

Active site detection in the vaccine structure

Searching for a ligand-binding region on a protein is an essential step prior to molecular docking. The process is primarily based on multiple factors, such as the detection of hydrophobic or hydrophilic interactions, salt bridges, and electrostatic and hydrogen-bonding interactions. The Computed Atlas of Surface Topography of Proteins (CASTp 3.0) website (http://sts.bioe.uic.edu/castp/index.html) was used to determine the vaccine's active regions 60,61. The default probe radius of 1.4 Å was used.

Molecular docking of the vaccine protein with chicken TLR7

Protein-protein interaction is essential for the functioning of many biological molecules 62. Analyzing the complex structures formed between these molecules is of great importance in assessing their molecular interactions and affinities. Toll-like receptors (TLRs) are recognition receptors that play a paramount role in pathogen recognition. In birds there are ten genes encoding TLRs; among them, TLR7 was chosen for docking with the vaccine construct since it is a viral-sensing TLR 63. Thus the designed vaccine was docked against the chicken TLR7 using the HADDOCK 2.4 server (https://www.bonvinlab.org/software/haddock2.4/) 62. The refinement interface in the HADDOCK server was used to provide the most accurate cluster. The PRODIGY web server (https://wenmr.science.uu.nl/prodigy/) 64,65 was used to calculate the binding affinities of the best clusters at 25 °C. Finally, the interaction between the vaccine and the chicken TLR7 was visualized with the PDBsum server (https://www.ebi.ac.uk/thornton-srv/databases/pdbsum/Generate.html) 66.

In silico molecular cloning and codon adaptation

In silico cloning ensures that a particular host will express the vaccine protein upon cloning into a suitable vector 67. To facilitate successful cloning, an optimization process and cloning of the vaccine construct into the expression vector were performed. The optimization comprised the elimination of cleavage sites of various restriction enzymes, prokaryotic ribosomal binding sites, and rho-independent transcription terminators from the vaccine sequence 67. A reverse translation of the vaccine protein sequence into a DNA sequence was performed with the Java Codon Adaptation Tool (JCat) (http://www.prodoric.de/JCat), because cloning uses DNA rather than protein 67. The codon adaptation index and the GC content were in the ranges of 0.8-1.0 and 30-70%, respectively. The recognition sequences of the restriction enzymes XhoI (5′-CTCGAG-3′) and BamHI (5′-GGATCC-3′) were added at the 5′ and 3′ ends of the DNA, respectively. The restriction cloning module of SnapGene (https://www.snapgene.com/) 67 was used to clone the DNA sequence located between the BamHI and XhoI restriction sites into the pET-30a(+) vector.

Characteristics of the virus proteome

The polymerase, envelope, and transacting factor proteins of ALV were retrieved from the NCBI database. These three proteins were found to be stable and hydrophilic using the ProtParam server. The VaxiJen server was used to determine and confirm their antigenicity. The three proteins were used as inputs to predict B and T cell epitopes for designing the vaccine against ALV. All the physical and chemical features of the three proteins are provided in Table 2.

Multiple sequence alignment and epitope conservancy

The ClustalW program provided in the BioEdit tool was used for multiple sequence alignment (MSA) of all retrieved strains. The MSA was exploited to search for conserved epitopes among the retrieved strains of the polymerase, envelope, and transacting factor proteins. An epitope whose span is not broken by mutated amino acids in other strains is considered a conserved epitope. In the MSA, the retrieved strain sequences demonstrated high epitope conservancy.
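The conservancy criterion itself is easy to express in code. The sketch below treats an epitope as 100% conserved when it occurs unchanged in the ungapped sequence of every strain; this is a simplification of column-wise MSA analysis, and the sequences shown are hypothetical toy data.

```python
def fully_conserved(epitope, strain_seqs):
    """True if the epitope appears verbatim (no mutated positions)
    in every strain sequence; alignment gaps ('-') are removed first."""
    return all(epitope in seq.replace("-", "") for seq in strain_seqs)

strains = ["MKTIIALSYIFCLVFA", "MKTIIALSYIFCLVFA", "MKTIIALSHIFCLVFA"]  # toy data
print(fully_conserved("ALSYIFC", strains))  # False: third strain carries Y->H
print(fully_conserved("IFCLVFA", strains))  # True: identical in all strains
```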
Linear B-cell epitopes prediction The ABCpred server received the reference sequences from each protein.In the server, a trained recurrent neural network provided the predicted B-cell epitopes based on their scores.Generally, an epitope passing the threshold of 0.51 is more likely to have a higher peptide score.Based on the ABCpred server, 39, 29, and 10 epitopes were predicted from the polymerase, envelope, and transacting factor proteins, respectively.After assessing the antigenicity, allergenicity and the toxicity of the predicted epitopes from each protein, 11, 10 and 6 epitopes from polymerase, envelope and transacting proteins were chosen as B cell epitopes, respectively.These epitopes were provided in Table 3. Cytotoxic T lymphocyte epitopes prediction Based on the reference sequences of polymerase, envelope, and transacting factor, multiple epitopes were predicted against human alleles (HLA-A, HLA -B, HLA-C) using IEDB MHC-1 binding prediction tools.Antigenic, allergenic, and toxic effects were then assessed for the predicted epitopes.A total of 6, 11, and 15 epitopes were obtained from the polymerase, envelope, and transacting factor proteins, respectively, and were elected as T cytotoxic cell epitopes due to their high antigenicity scores, non-allergenicity, non-toxicity and the allelic interactions.These epitopes were provided in Table 4. Helper T lymphocyte epitopes prediction The reference sequence of each of the three proteins (polymerase, envelope, and transacting factor) was analyzed against the human alleles (HLA-DR, DQ, DP) using IEDB MHC-1I binding prediction tools with a percentile rank of (≤ 10).A vast amount of epitopes were predicted from the three proteins.The predicted epitopes were analyzed for antigenic, allergenic, and toxic outcomes.A total of 21, 6, and 7 epitopes were obtained from the polymerase, envelope, and transacting factor proteins, respectively.They were elected as T helper cell epitopes due to their high antigenicity scores, non-allergenicity, non-toxicity, and allelic interactions.These epitopes were provided in Table 5. www.nature.com/scientificreports/ Structure of the assembled vaccine The entire number of predicted B cell, T cytotoxic, and T helper epitopes from the three proteins of ALV were used in the construction of the vaccine.Adjuvant, linkers, and 6-His-tags were also embedded in the final structure of the vaccine.Thus the final vaccine structure comprised 738 amino acids.The antigenicity score of the assembled vaccine was 0.8535 when examined in the VaxiJen server.Also, the vaccine protein was non-allergic in the AllerTOP server. Physiochemical properties of the assembled vaccine ProtParam server was used to examine the physiochemical properties of the assembled vaccine.The predicted vaccine weighed 77.121 kilo Dalton (kd) and possessed a theoretical isoelectric point of 9.81, indicating the proposed vaccine had an alkaline pH.Negatively and positively charged residues were 33 and 79 respectively. The Extinction coefficient at 280 nm measured in water was shown to be 132,125 assuming all pairs of Cys residues forming cystines.The instability index score (II) was 38.24, indicating a stable vaccine protein, while the aliphatic index score was 78.73, indicating a hydrophilic vaccine.The grand average water affinity was -0.130, suggesting a hydrophilic vaccine. 
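The ProtParam-style screening reported above can be reproduced locally with Biopython's ProteinAnalysis class. The sketch below uses a short hypothetical placeholder in place of the actual 738-residue construct; the aliphatic index is not built into Biopython, so it is computed by hand from the standard Ikai formula.

```python
# A sketch of the physicochemical screening, run locally with Biopython.
# The sequence is a hypothetical placeholder for the 738-residue vaccine.
from Bio.SeqUtils.ProtParam import ProteinAnalysis

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQAPILSRVGDGTQDNLSGAEKAVQVKVKALPDAQ"
pa = ProteinAnalysis(seq)

eps_reduced, eps_cystine = pa.molar_extinction_coefficient()  # at 280 nm
print(f"Molecular weight : {pa.molecular_weight() / 1000:.2f} kDa")
print(f"Theoretical pI   : {pa.isoelectric_point():.2f}")
print(f"Instability index: {pa.instability_index():.2f} (<40 suggests stable)")
print(f"GRAVY            : {pa.gravy():.3f} (negative suggests hydrophilic)")
print(f"Ext. coefficient : {eps_reduced} (reduced Cys) / {eps_cystine} (cystines)")

# Aliphatic index (Ikai, 1980), computed by hand:
# AI = X_Ala + 2.9*X_Val + 3.9*(X_Ile + X_Leu), with X_* in mole percent.
pct = pa.get_amino_acids_percent()
ai = 100 * (pct["A"] + 2.9 * pct["V"] + 3.9 * (pct["I"] + pct["L"]))
print(f"Aliphatic index  : {ai:.2f}")
```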
Secondary and tertiary structures prediction of the assembled vaccine The SS3, ACC, and DISO for the secondary structure were predicted using the Raptor X server.The SS3 showed 23%, 15%, and 61% of the residues as α-helix, β-sheets and coiled, respectively.The ACC provided 49% as exposed residues, 21% as medium residues and 29% as buried residues.The DISO (disordered predicted regions) was 43 (5%).Figure 2 showed the primary sequence, the tertiary and the refined structures of the vaccine construct. Vaccine tertiary structure refinement and validation The vaccine's stability was assessed via the Ramachandran plot after refinementt.In the plot, 90.9% of residues were located in the most favored region.While regions of additional allowed, generously allowed, and disallowed comprised residues of 6.1%, 1.9%, and 1.0%, respectively (Fig. 3a).The ProsA server provided a Z score of -5.68 demonstrating a favorable model structure (Fig. 3b).www.nature.com/scientificreports/ Solubility of the assembled vaccine Based on the Protein-Sol server, a scaled solubility score of 0.499 was obtained for the vaccine construct, competed with 0.45 for the population solubility of E. coli (Fig. 4a).As a confirmation, SOLpro was further used to predict the solubility.The probability of the proposed vaccine upon expression on SOLpro was 0.9843, greater than 0.5, provided by the server. Stability of the assembled vaccine By engineering disulfide bonds into the structure of the proposed vaccine, the structural stability of the vaccine was improved.The improvement in stability was made possible by substituting the amino acids in the highly mobile regions in the sequence of the vaccine by cysteine residues.As per the Disulfide by Design 2.0 server, 94 amino acid pairs were identified to form disulfide bonds.However based on the Chi3 angle between + 97 and − 87 and a tolerance of 30 and a maximum Ca-Cb-S angle of 114.60 in the server, five pairs of residues (amino acids) were unstable regions and were replaced by cysteine-cysteine residues.The position and the replaced residues in the vaccine structure were A107-R127; I150-G179; P210-P280; P278-P312 and G500-L538 and were shown in Fig. 4b,c.www.nature.com/scientificreports/ Immune simulation The obtained immune simulation results were coincided with actual immune responses.This was proved by marked increase in the primary, secondary and tertiary immune responses accompanied by drop in the antigen concentration (Fig. 5a).The cytokines and interleukins (IL) levels during the injections showed that the IL-2 level was compatible with the measure of diversity (Simpson index, D) (Fig. 5b).The elevation of the measure of diversity over time is considered as danger signal together with leukocyte growth factor.Therefore, the lower the measure of diversity value, the lower the diversity.In addition the primary response, for instance, was featured by augmented IgM level, while, secondary and tertiary responses provided marked elevation in the population of B-cells and the antibodies level (Fig. 5c).This showed the development of immune memory accompanied by rapid clearance of the antigen upon subsequent exposures.Moreover the population of T-cytotoxic (TC) (Fig. 5d) and T-helper (TH) (Fig. 5e) lymphocytes showed high response level coincided with memory development.The natural killer cells maintained high levels throughout the duration of exposure (Fig. 5f). 
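The disulfide-engineering criteria applied above (χ3 near -87° or +97° within ±30°, Cα-Cβ-Sγ near 114.6° within ±10°) can also be checked locally on a model once the cysteine substitutions have been made. The Biopython sketch below is only an illustration of that geometric test; the file name, chain identifier and residue numbers are hypothetical, and it assumes the candidate pair has already been mutated to cysteine in the PDB model.

```python
# A rough local re-check of the DbD2-style geometry test on one candidate
# cysteine pair. File, chain and residue identifiers are hypothetical, and
# both residues are assumed to be cysteines (i.e. after the substitution).
import math
from Bio.PDB import PDBParser
from Bio.PDB.vectors import calc_angle, calc_dihedral

structure = PDBParser(QUIET=True).get_structure("vax", "vaccine_refined.pdb")
chain = structure[0]["A"]
res_i, res_j = chain[107], chain[127]  # e.g. the A107-R127 pair, mutated to Cys

# Chi3 dihedral across the S-S bond: Cb(i)-Sg(i)-Sg(j)-Cb(j)
chi3 = math.degrees(calc_dihedral(
    res_i["CB"].get_vector(), res_i["SG"].get_vector(),
    res_j["SG"].get_vector(), res_j["CB"].get_vector()))

# Ca-Cb-Sg angle on one side of the bond (repeat for res_j in practice)
ang_i = math.degrees(calc_angle(
    res_i["CA"].get_vector(), res_i["CB"].get_vector(),
    res_i["SG"].get_vector()))

chi3_ok = abs(chi3 - 97) <= 30 or abs(chi3 + 87) <= 30
angle_ok = abs(ang_i - 114.6) <= 10
print(f"chi3 = {chi3:.1f} (ok: {chi3_ok}); Ca-Cb-Sg = {ang_i:.1f} (ok: {angle_ok})")
```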
Molecular dynamic simulation (MD) A Normal mode analysis (NMA) was performed on the MD of the vaccine protein using the iMODS server.As shown in Fig. 6a, the arrows indicated the direction in which each vaccine protein residue moves.Deformability was also demonstrated with hinges in the chief chain, as a result of an individual distortion of the residues (Fig. 6b).Experimental B-factors were calculated on the basis of the PDB field and the NMA data (Fig. 6c).A normal mode of deformability of the vaccine structure was shown by the eigenvalue, which directly correlated to the energy required with the deformability.The obtained eigenvalue (7.182836e-07) demonstrated the stiffness of the motion (Fig. 6d).The lower eigenvalue is always associated with the easier deformation of the protein structure.The normal mode variance is inversely related to the eigenvalue.Figure 6e illustrated the cumulative variance and individual variance as green and purple bars, respectively.It was possible to determine the correlations between proteins by examining the covariance matrix (Fig. 6f).Thus, red identified correlated motions, white indicated uncorrelated motions, and blue indicated anti-correlated motions.Spring-connected or joined atom pairs were demonstrated in the elastic network model.A single-atom pair spring was represented as a dot, and colored according to its stiffness, with darker dots denoting stiffer strings, and vice versa (Fig. 6g). Discontinuous B-cell epitopes prediction Table 6 and Fig. 7 demonstrated six discontinuous B cell epitopes.The scores of these epitopes were ranged from 0.996 to 0.615 with a total of 405 predicted residues.The size of the conformational epitopes ranged from 4 to 108 residues. Molecular docking of the vaccine protein with chicken TLR7 The interaction between the vaccine construct and chicken TLR7 was assessed by HADDOCK software.HAD-DOCK clustered 13 structures in 3 cluster(s), which represents 6% of the water-refined models.Upon refinement, 20 structures were grouped into one cluster, resulting in 100% of the HADDOCK water-refined version.The binding affinity between the vaccine and the chicken TLR7 was − 263.0 ± 3.1 demonstrating the strong binding between the molecules.As shown in Fig. 9, this binding was evident by 20 hydrogen bonds, 2 salt bridge, and 184 non-bonded contacts.These bonding events between the amino acids of the molecules were provided in Table 7. Additionally PRODIGY web server showed binding affinity in terms of Gibbs free energy (ΔG) and thermodynamics (dissociation constant) between the docked molecules.Such kind of binding affinity decided the real interaction between the docked molecules under certain circumstances within the cell.The server showed ΔG values − 21.1 kcal/mol for the vaccine construct and chicken TLR7 and the dissociation constant was 3.1e−16 indicating the docked molecules were energetically viable. 
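The two PRODIGY outputs quoted above are linked by the standard relation ΔG = RT ln K_d, so the dissociation constant follows directly from the reported free energy. The quick check below reproduces the order of magnitude of the quoted 3.1e-16.

```python
# Thermodynamic link between the PRODIGY outputs: dG = RT * ln(Kd),
# hence Kd = exp(dG / RT) at the evaluation temperature of 25 C.
import math

R = 1.9872e-3   # gas constant in kcal/(mol*K)
T = 298.15      # 25 C in kelvin
dG = -21.1      # kcal/mol, as reported for the vaccine-TLR7 complex

Kd = math.exp(dG / (R * T))
print(f"Kd ~ {Kd:.1e} M")  # ~3e-16 M, matching the reported order of magnitude
```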
In silico molecular cloning The potential host expression of the target protein was performed by JCAT.The protein sequence of the vaccine was reversibly translated into DNA sequence.The index codon adaptation value of the DNA sequence was equal Discussion The most common avian retrovirus that causes a variety of neoplastic diseases in chicken is the avian leukosis virus (ALV) 2 .Globally, the ALV morbidity and mortality rates contributed to the poultry industry's economic decline 1,3 .This is accompanied by subsequent adverse effects on the food supply worldwide.Preventing and controlling viral infection in avian industry is always via mass vaccination means.Therefore vaccines designed to combat avian viral diseases will significantly alleviate selection pressure on the virus and on the field strains 68 . Concerning ALV infection in poultry, many anti-ALV vaccines were developed, but they targeted only specific strains.Also some of the vaccine trials had less immunogenicity and limited protection 69 .Currently, neither known treatment nor vaccination against ALV is available.Multiple studies used the multi-epitope vaccine prediction against ALV and evaluated their possibility as effective vaccine candidate via challenging in chickens [70][71][72] .For instance, one study provided a novel oral vaccine of recombinant gp85 protein in L. plantarum with a significant increase in antibodies post-inoculation 72 .The study demonstrated a protection against ALV-J and showed protective immune response against early ALV-J infection based on viremia analysis 73 .Another study showed the impact of polysaccharides from Ulvapertusa as anti-ALV-J.The polysaccharides demonstrated strongest suppression of the ALV-J activity as they bound with the viral particles and obstacle ALV-J adsorption by host cells accompanied by significant reduction of gp85 protein expression 74 .However these studies reported partial immune protections against ALV-J infections in chickens. In this study a vaccine with multi-epitopes was designed and showed increased immunogenicity and enhanced immune responses as a result of the existence of epitopes from various target genes.Also the designed vaccine activated the humoral and cell-mediated immunity as previously described 75 .These could solve limitations occurred during controlling the ALV infection 75,76 .Most importantly, the safety and effectiveness, allergenicity and the immunogenicity of the predicted vaccine were also taken into consideration to ensure the safety of the designed epitopes 77 .In addition, the toxic effect, the solvent accessibility of the amino acids, the identification of B cells, and MHCmolecules were also contemplated to ensure the effectiveness of the predicted epitope vaccine.All these measures give the predicted vaccine an advantage over the traditional ones for controlling the ALV infection. 
Thus the conserved predicted epitopes from ALV proteins were submitted to the ABCpred server.Based on ANN, Hidden Markov model (HMM) and support vector machine (SVM) in the ABCpred server the B cell epitopes were predicted 78 .Furthermore, the predicted epitopes were subjected to antigenic, allergenic and toxic analysis to confirm their suitability as B cell epitopes.Also T cell epitopes were predicted from their reference sequences using the IEDB server.In addition to their high binding affinity to MHC alleles, the predicted epitopes demonstrated high antigenicity score in VaxiJen server, and they revealed no allergic or toxic characteristics.Therefore they were picked to enter the vaccine protein structure.With the aid of expedient linker sequences (protein spacers), the generated B-and T-cell epitopes were fused together 49,52 .Linkers are crucial to the assembly of stable, bioactive fusion proteins.Essentially, linkers reduced the likelihood of junctional antigen formation as well as enhancing antigen processing and presentation 52 .They are also important to construct and facilitate structural flexibility and reduced rigidity 52 .A sequence with the least junctional immunogenicity was generated in this study using the linkers GPGPG and YAA.The GPGPG linkers were applied to facilitate immune processing and merge the B-cells and T-helper cell epitopes.The YAA linkers ameliorated the immunogenicity of a vaccine by impacting protein stability and epitope presentation capacity and were used to fuse the cytotoxic T-cells 24,79 .As an adjuvant, the β-defensin was added via an EAAAK linker at the N terminus of the vaccine construct to improve the immunogenicity of the vaccine.EAAAK are helical linkers used to control the distance Table 6.The number of the predicted discontinuous B cell epitopes with the number of the residues and their scores.and decrease the interference between the domains 24,79 .As a 45amino acids peptide with a relatively small size, the β-defensin was used for its immune modulation and antimicrobial features 44 .To facilitate purification and downstream testing, a small 6His tag was added to the proposed vaccine at the C-terminal to prevent protein structure from being altered 80 .The stability of the vaccine was confirmed by the ProtParam server based on its physiochemical properties.VaxiJen and AllerTOP servers were used to assess the antigenic and allergenic features of the vaccine.The results indicated that the vaccine was antigenic without causing any allergic reactions.In order to select the best score of the model generated by the 3D structure of the vaccine protein, the secondary and tertiary structures of the vaccine construct were analyzed.The Ramachandran plot showed favorable results in the distribution of the vaccine residues and provided a stable structure.The ProSA server indicating that the overall model is suitable for acceptance as a potential ALV vaccine 51,52 . Residues Number of residues Score The solubility of the designed vaccine in this study was calculated with the protein-sol and SOLpro servers.As a comparison with the solubility of E coli, Protein-sol presented the vaccine as a soluble protein and predicted a scaled solubility of 0.499, an increase over 0.45 from the average solubility of the E. 
coli population.According to the SOLpro server, the predicted solubility was 0.9843, which confirmed this result.To obtain disulfide bonds between the vaccine residues, the proximity and geometry composition of the residue pairs were evaluated for the formation of disulfide bonds.Five unstable regions in the vaccine structure were replaced by the formation of disulfide bonds.Disulfide bonding increases the stability of the vaccine protein as previously stated 51,52 . Immune simulation demonstrated results that consistent with the real immune responses.Generally there were elevated levels of the immune responses after repeated exposure to the vaccine (antigen).In addition, there was marked development in the memory cells of B and T lymphocytes.Most importantly, IL-2 and IFN-γ were elevated following the initial injection and provided peak levels after antigen repeated exposures, showing the high levels of T-helper lymphocytes and efficient immunoglobulin production.The Simpson index demonstrated a possible different immune response, indicating the vaccine structure contains multiple B and T cells epitopes 44 .A study by Landman et al., demonstrated the interaction of the NK cells during ALV infection 81 .They showed that during ALV infection in immunosuppressed chicken, the NK cells provided reduced killing activity than the NK cells of the uninfected controls.Natural killer cells play a paramount defense mechanism in host and surveillance of tumor, resulting in cell death and secretion of cytokines and chemokine.Moreover, NK cells have a significant role in immune regulation of T cells and DC functions during viral infection in mouse models 81 .In addition to that, there is scarcity in ALV vaccine researches concerning the immune system of chickens.Thus www.nature.com/scientificreports/Molecular dynamics simulation (SD) was used to assess the complex stability of the vaccine protein.In previous studies, macromolecule stability was associated and correlated with the fluctuations of atoms 82,83 .Therefore MD was performed to evaluate the essential dynamics and complex stability of the vaccine based on the protein normal modes in the iMODS server.The analysis showed that no atoms had a significant distortion in the vaccine protein structure indicating less chance of deformability with proper stiffness motion. It is noteworthy that bioinformatics and immunologic analysis tools provided that the chimeric vaccine should comprises linear and discontinuous B-cell epitopes in addition to MHC-I and MHC-II epitopes 84 .Our predicted vaccine was shown comprising all these epitopes which strongly facilitate the interaction against the humoral and adaptive immunity of the host 84 . The geometry and topology features of protein structures, such as interior cavities, pockets in the structure surface and the cross channels prior to the docking process are essential to study the function of proteins.The vaccine construct showed surface binding pocket suitable for docking with chicken TLR7.Based on the molecular docking, the constructed vaccine and the TLR7 demonstrated a good binding affinity.The vaccine strongly bound to the chicken TLR7 revealed by the negative values of the docking process 62 .Among the ten chicken TLRs, TLR7 has a propensity to recognize the viral constituents located on the extracellular surfaces 63 , thus has the advantages to be elected for docking against ALV predicted vaccine. 
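Returning to the construct layout discussed above, the assembly of epitopes, linkers, adjuvant and tag can be summarised in a toy sketch. All epitope sequences below are hypothetical placeholders, and the exact ordering of the blocks is an assumption made for illustration only; it is the linker usage (EAAAK after the adjuvant, YAA between CTL epitopes, GPGPG between HTL and B-cell epitopes, 6xHis at the C-terminus) that mirrors the design described here.

```python
# A toy assembly of a multi-epitope construct following the linker scheme
# described in the text. Epitope sequences are hypothetical placeholders,
# and the block ordering is an assumption for display purposes.

adjuvant = "GIINTLQKYYCRVRGGRCAVLSCLPKEEQIGKCSTRGRKCCRRKK"  # beta-defensin-like, illustrative
ctl = ["YMDDILLAV", "LLDTGADIT"]               # placeholder CTL epitopes
htl = ["FLGKIWPSHKGRPGN", "QMVHQAISPRTLNAW"]   # placeholder HTL epitopes
bcl = ["DPNANPNVDP", "KQIINMWQEV"]             # placeholder B-cell epitopes

construct = (
    adjuvant
    + "EAAAK"                      # rigid helical linker after the adjuvant
    + "YAA".join(ctl)              # YAA between cytotoxic T-cell epitopes
    + "GPGPG" + "GPGPG".join(htl)  # GPGPG between helper T-cell epitopes
    + "GPGPG" + "GPGPG".join(bcl)  # GPGPG between B-cell epitopes
    + "HHHHHH"                     # 6xHis tag at the C-terminus
)
print(len(construct), construct[:60] + "...")
```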
Molecular cloning is an important step in producing recombinant vaccines. Prior to cloning into the pET-30a(+) vector, reverse translation and codon adaptation of the vaccine protein sequence to DNA were performed with JCat for E. coli strain K12. The DNA sequence showed a CAI value of 1.00 and a GC ratio of 59.57%, indicating high expression in bacteria. Cloning of the vaccine construct gene into the vector was carried out at the multiple cloning site. This result supports efficient cloning and expression of the vaccine protein.

Conclusion
This study demonstrated the urgent need for an effective vaccine strategy against ALV, given the lack of treatment or approved antiviral drugs. It comprehensively exploited computational and immunoinformatics approaches to design and evaluate a multi-epitope vaccine candidate against ALV. Constructing a vaccine with antigenic characteristics, devoid of allergenicity and toxicity, is a crucial step towards combating ALV. This study provided potential vaccine epitopes with an immunogenic adjuvant and suitable linkers. The vaccine was stable and provoked strong immune response interactions. Moreover, the vaccine showed favourable interaction with the chicken immune receptor, as confirmed by molecular docking analysis. However, validation of this vaccine via experimental studies is essential to guarantee its immunogenicity and protective efficacy.

Figure 1. Schematic flowchart of the overall steps used to design the ALV multi-epitope peptide vaccine.

Figure 2. (a) The primary sequence of the proposed vaccine. (b) The tertiary structure of the vaccine predicted by the RaptorX server. (c) The refined structure of the vaccine predicted by the Galaxy web server.

Figure 3. (a) In the Ramachandran plot, the most favoured region comprised 90.9% of residues; the additional allowed, generously allowed and disallowed regions comprised 6.1%, 1.9% and 1.0% of residues, respectively. (b) ProSA server result with a Z-score of -5.68.

Figure 4. (a) The vaccine solubility in comparison to the solubility of E. coli. (b) Stability of the vaccine protein before disulfide bond engineering, in the original form (before substitution of amino acids by cysteine). (c) The mutant form (after substitution of amino acids by cysteine) with five pairs of disulfide bonds; the disulfide pairs are shown as golden sticks and indicated by white arrows.

Figure 5. The immune simulation of the predicted vaccine after the two injections of the antigen. (a) Antibody production in response to antigen injections (antibodies shown as differently coloured peaks, the antigen in black). (b) The induced cytokine secretion and the IL-2 level with the measure of diversity. (c) The memory, non-memory and isotype B-cell populations. (d) The active T-cytotoxic (TC) cell populations. (e) The active T-helper (TH) cell populations. In (d, e), the resting state denotes cells not presented with the antigen (vaccine), and the anergic state denotes tolerance of the T-cells to the antigen owing to repeated exposures. (f) Natural killer cell populations.
Figure 6. The molecular dynamics (MD) of the vaccine protein complex. (a) The direction of motion, shown in red and cyan. (b) The stability of the vaccine, analysed via the low main-chain deformability. (c) The B-factor/mobility. (d) The eigenvalue, indicating the protein's normal mode and the stiffness of the motion. (e) The normal mode variance and (f) the covariance matrix. (g) The elastic network model, showing a stiffer mode of the residues.

Figure 7. (a) The 3D structures of the six discontinuous B-cell epitopes (1-6) predicted by ElliPro. Epitopes are shown in yellow; the constructed vaccine is shown in grey. (b) Yellow denotes the discontinuous epitopes and green the continuous epitopes; the red line marks the residue-score threshold.

Figure 8. (a) The pocket panel (red) in the structure of the vaccine. (b) The sequence and annotation panels of the vaccine construct.

Figure 9. Molecular docking interaction between the vaccine construct and chicken TLR7. (a) Interacting residues between the vaccine (chain A) and TLR7 (chain B). (b) The docked complex of chicken TLR7 (red) and the vaccine construct (blue). (c) Interface statistics. (d) Key to the residue interactions across the interface between the docked molecules.

Figure 10. The vaccine DNA sequence cloned in the pET-30a(+) vector. The vector is shown in black; red represents the gene coding for the vaccine protein.

Table 1. The total number of retrieved strains of the polymerase, envelope and transacting factor proteins of ALV, with their accession numbers. *Reference sequence.

Table 3. The predicted B-cell epitopes and their antigenicity scores. *The default score of the ABCpred server was 0.51 and the length of the predicted epitopes was 12-mers.

Table 4. The predicted T cytotoxic cell epitopes and their antigenicity scores from the polymerase, envelope and transacting factor proteins. *PR: percentile rank with a score of ≤ 1. #The VaxiJen antigenicity threshold was 0.4. All the predicted epitopes were non-allergenic and non-toxic.

Table 7. List of atom-atom interactions across the vaccine-chicken TLR7 interface.
Effects of Kerr medium in coupled cavities on quantum state transfer We study the effect of Kerr type nonlinear medium in quantum state transfer. We have investigated the effect of different coupling schemes and Kerr medium parameters $p$ and $\omega_{{K}}$. We found that, the Kerr medium introduced in the connection channel can act like a controller for quantum state transfer. The numerical simulations are performed without taking the adiabatic approximation. Rotating wave approximation is used in the atom-cavity interaction only in the lower coupling regime. Introduction Quantum information processing (QIP) need an effective implementation of quantum state transferring schemes. The implementation of such systems with optical cavity has been a state of art. The simplest quantum description of cavity system are well described in the literature. [1][2][3] Modification on the Jaynes Cummings model (JCM) has been an active field of research ever since. The two level system (TLS) was then extended to multilevel, multi-cavity, multi atom TLS, 4-6 etc. Optical cavities are very good candidate for quantum information processing (QIP). 7 The nonlinear optical behavior due to χ (2) , Kerr nonlinearity (χ (3) ) etc. has also been used to modify these optical cavities and are extensively studied both theoretically as well as experimentally. 8,9 Implementation of such optical cavities and its use in QIP has been an area of active research since late 1960s. 10,11 State transfer in coupled cavities appears to be a reliable platform for data transfer in QIP. 7,[12][13][14][15][16] Here we discuss coupled cavity systems with and without Kerr type nonlinearity. In this paper we discuss the quantum state transfer (QST) in a linearly coupled cavity array with and without Kerr medium. We introduce the Kerr medium in the connection channel alone. We found that, the presence of a Kerr medium can be used as a controller in the QST. Linearly coupled cavities A quantum state carries an information, which has to be transfered from one place to another without any loss or modification. As quantum mechanics follows no cloning theorem, 17 it is not possible to send an exact copy of the information. 18 Here we consider a quantum state in the cavity 1 which has to be transfered to a cavity 3, through an intermediate cavity 2. The system is illustrated in the figure (1). Here in all the 3 cavities we have single mode photon field with a two level atom (or qubit) in 1st and 3rd cavities. The system can be described by the Hamiltonian, where,Ĥ 0 is the free Hamiltonian andĤ I is the interaction Hamiltonian and we havê Here λ i (i = 1, 3) are the atom field coupling constant, J lm are the coupling strength between the cavities l and m, a i (i = 1, 2, 3) denotes the field annihilation operator and σ i z , σ i + and σ i − (i = 1, 3) are the atomic operators for the ith cavity. A tensor product state of the system can be written as, where k i = 0 and k i = 1 correspond to ground and excited state respectively of the atom (qubit) in the ith cavity and n i represents the number of photons in the ith cavity. Thus, if we consider a state with maximum of one excitation at a time, the corresponding general state may be written as where q i (t) and f i (t) respectively are the atomic and field excitation coefficients in the ith cavity. The dynamics of the system can now be studied by solving the corresponding Schrödinger equation. 
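The explicit expressions for H0 and HI did not survive extraction above. From the surrounding definitions (atom-field couplings λ_i for cavities 1 and 3, hopping strengths J_lm between neighbouring cavities, and a rotating-wave atom-field interaction), they would plausibly take the standard Jaynes-Cummings form below. This is a reconstruction consistent with the text, not the authors' typeset equations.

```latex
% Reconstructed Hamiltonian terms, assuming the standard rotating-wave
% Jaynes-Cummings form implied by the surrounding definitions.
\begin{align}
\hat{H}_0 &= \sum_{i=1}^{3} \omega_c\, \hat{a}_i^{\dagger}\hat{a}_i
           + \sum_{i=1,3} \frac{\omega_a}{2}\, \hat{\sigma}_z^{i}, \\
\hat{H}_I &= \sum_{i=1,3} \lambda_i \left( \hat{a}_i \hat{\sigma}_+^{i}
           + \hat{a}_i^{\dagger} \hat{\sigma}_-^{i} \right)
           + J_{12}\left( \hat{a}_1^{\dagger}\hat{a}_2 + \hat{a}_1\hat{a}_2^{\dagger} \right)
           + J_{23}\left( \hat{a}_2^{\dagger}\hat{a}_3 + \hat{a}_2\hat{a}_3^{\dagger} \right).
\end{align}
```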
For convenience, we can take the atomic transition frequency, ω a and the field frequency, ω c as the same. Thus the detuning, ∆ = ω a − ω c = 0 and we denote, ω a = ω c = ω. The state vector in the interaction picture, satisfies the evolution equation,Ĥ where we haveĤ since Ĥ 0 ,Ĥ I = 0 because the detuning is set as zero. The differential equations for q i (t) and f i (t) can be obtained as, ih ih ih These equations can be solved numerically and we can investigate how the coupling parameters affects the quantum state transfer. Analytical approach The Laplace transform of the equations (9) to (13) can be written as, (h = 1) i For further simplicity we can take the value of λ 1 = λ 3 = λ and J 12 = J 23 = J. Now solving the equations (14) to (18) for an initial condition, q 1 (0) = 1 and q 2 (0) = f 1 (0) = f 2 (0) = f 3 (0) = 0, results in, Now taking the inverse Laplace transform of equations (19) to (23), we obtain, The probability can now be calculated as using equation (29) to (33), we can define the population inversion of each qubit as, 3 Linearly coupled cavities with Kerr medium Nonlinear effects in optical cavities has been studied in the literature. 9,10 This nonlinear effects can be used to construct effective transfer mechanism in quantum engineering. 11,19 Here we consider the 3rd order nonlinearity, widely known as Kerr nonlinearity and we introduce such a nonlinear effect in the second cavity of the previously described system shown in the figure (1) and we get the modified system as in figure (2). Hamiltonian with a Kerr type nonlinear medium in the second cavity is described by 20, 21 where b is the annihilation operator of the Kerr medium, ω K denotes the anharmonic Kerr field frequency, q is the anharmonicity parameter and p represents the field-Kerr medium coupling strength. In the adiabatic limit the field frequency ω and medium frequency ω K are very different. Now the Hamiltonian of the new system takes the form, where the new free and interaction Hamiltonians are respectively given as, With the Kerr medium operator b and b † , we need to extend the Hilbert space of states given in equation (4) and it get modified to to accommodate the new state, which can be defined as, |ψ = |k 1 n 1 n 2 k 3 n 3 n b . where n b is the bosonic number of the Kerr medium. Here also we can find the Hamiltonian in the interaction picture and we can show that The dynamics can be studied by obtaining the differential equations similar to equations (9) to (13) and solving it. Analytical approach If we take only a maximum of one excitation in the cavity we may write the general state as The differential equations for q i (t), f i (t) and k(t) can be obtained as, The Laplace transform of equations (43) to (48) are, (h = 1) For Kerr medium analytical solutions of can be more rigorous to handle. We do not take the adiabatic approximation even when the ω K is far away from ω and we investigated the evolution of the system numerically. 22 4 Results and discussion Quantum state transfer without Kerr medium First we consider quantum state transfer for different coupling schemes, without an intermediate Kerr medium and we can see that, there is a transfer of state from the qubit 1 to qubit 2 . This time can be controlled by controlling the coupling parameter. With J lm = 0.1λ i and J lm = 0.2λ i , the results are shown in figures (3(a)) and (3(b)). The effect of coupling scheme is evident from these simulations. The photon number, n in each cavities are also estimated for J lm = 0.2λ i . 
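The coupled amplitude equations (9)-(13) survive above only as stray "ih" fragments. Projecting the Schrödinger equation onto the single-excitation basis with amplitudes q_1, f_1, f_2, f_3 and q_3 yields the system below; this is a reconstruction consistent with the stated couplings, not a verbatim restoration of the published equations.

```latex
% Reconstructed single-excitation amplitude equations (eqs. 9-13) at zero
% detuning; a consistency-based reconstruction, not the original typesetting.
\begin{align}
i\hbar\,\dot{q}_1 &= \lambda_1 f_1, &
i\hbar\,\dot{f}_1 &= \lambda_1 q_1 + J_{12} f_2, \\
i\hbar\,\dot{f}_2 &= J_{12} f_1 + J_{23} f_3, &
i\hbar\,\dot{f}_3 &= J_{23} f_2 + \lambda_3 q_3, \\
i\hbar\,\dot{q}_3 &= \lambda_3 f_3.
\end{align}
```

The Laplace-transform solutions (19)-(33) and their inverse transforms were likewise lost in extraction and are not reconstructed here, since the text does not preserve enough of them to do so reliably.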
Figure (4) shows the corresponding results. Here we have taken ω_a = ω_c = ω = 2πf, with f set to 1 GHz for the simulations. The graphs are plotted against scaled time; throughout, the unit of scaled time is nanoseconds, and all other parameters are scaled with respect to ω. The photon number follows a pattern similar to the population inversion: whenever a qubit is in the excited state, the photon number in the respective cavity drops to zero. However, the intermediate cavity shows a rise in photon number as the coupling in the system is increased, so that the photon number in the first and last cavities is reduced. This suggests that the probability of inversion never reaches 100% when the intermediate cavity has a non-zero cavity-cavity coupling. The expense of a controlled transmission through an intermediate coupling cavity is therefore the quality of the transmission. The qubit-cavity coupling is kept at λ = 0.1ω, which allows us to use the RWA 23. The analytical solutions including detuning are very cumbersome; the effect of detuning on the state transfer is shown in figures (5) and (6), where we can clearly see that detuning affects the state transfer.

Quantum state transfer with Kerr medium
The presence of a Kerr medium in the intermediate cavity can affect the state transfer. Numerical simulations were performed for different values of the Kerr-cavity coupling p and of ω_K; the results are shown in figures (7(a)), (7(b)), (7(c)) and (7(d)). Here we do not take the adiabatic approximation for the Kerr Hamiltonian. For p ≈ 0.5ω_c, ω_K ≈ ω_c and J_lm = 0.5λ_i, the population inversion is almost equivalent to the case with no Kerr medium in the second cavity and J_lm = 0.1λ_i. The results are shown in figures (8(a)) and (8(b)). Thus, with higher coupling between the cavities, we can achieve a controlled state transfer between two qubits by means of a Kerr medium in the intermediate cavity.

Conclusion
In the present work we have numerically studied a system of three linearly coupled cavities with one qubit at either end and an intermediate cavity between them. Our study focused on the presence of a Kerr medium in the second cavity and how it affects the quantum state transfer. We found that the Kerr medium can affect the transmission and hence can be used as a quantum state transfer controller in quantum information processing. Even without taking the adiabatic approximation in the Kerr medium, a controlled state transfer is possible. All plots use a scaling corresponding to the cavity frequency, which is set at 1 GHz, and the RWA is applied only in the appropriate limit.
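A minimal numerical sketch of these simulations is given below. It integrates the single-excitation amplitude equations with an additional Kerr amplitude k(t) coupled to the middle cavity through p and detuned by ω_K - ω; because at most one excitation is present, the anharmonic q b†²b² term is inert and is omitted. The equation structure is our reconstruction from the Hamiltonian described in the text, and the parameter values follow the paper's choices (λ = 0.1ω, J_lm = 0.5λ, p ≈ 0.5ω, ω_K ≈ ω).

```python
# Sketch of the single-excitation dynamics with a Kerr amplitude k(t) on
# the middle cavity. The equation structure is a reconstruction from the
# text (the anharmonic term plays no role with one excitation); parameters
# follow the paper: lambda = 0.1*omega, J = 0.5*lambda, p = 0.5*omega.
import numpy as np
from scipy.integrate import solve_ivp

omega = 2 * np.pi * 1.0   # cavity frequency for f = 1 GHz; time in ns
lam = 0.1 * omega         # qubit-cavity coupling
J = 0.5 * lam             # cavity-cavity hopping
p = 0.5 * omega           # field-Kerr coupling
dK = 0.0                  # Kerr detuning omega_K - omega (omega_K ~ omega_c)

def rhs(t, y):
    q1, f1, f2, f3, q3, k = y
    return [-1j * lam * f1,
            -1j * (lam * q1 + J * f2),
            -1j * (J * f1 + J * f3 + p * k),
            -1j * (J * f2 + lam * q3),
            -1j * lam * f3,
            -1j * (dK * k + p * f2)]

y0 = np.array([1, 0, 0, 0, 0, 0], dtype=complex)  # excitation starts on qubit 1
sol = solve_ivp(rhs, (0, 50), y0, t_eval=np.linspace(0, 50, 2000))

P3 = np.abs(sol.y[4]) ** 2  # population transferred to qubit 3
print(f"max transfer to qubit 3: {P3.max():.3f} at t = {sol.t[P3.argmax()]:.2f} ns")
```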
Motivations, sources of influence and barriers to being a podiatrist: a national questionnaire of student views Background Podiatry is an allied health profession which has seen a substantial decline in numbers in recent years. Every effort is required to recruit more students to reverse this diminishing supply and meet national foot health needs. To increase the number of applications to podiatry courses and encourage individuals to choose podiatry careers, the aim of this study was to understand the key motivations, sources of influence and barriers to choosing a podiatry career among current podiatry students, and consider the influence of choosing podiatry before or after a first career. Methods An online questionnaire, comprising mainly Likert-scale questions, was disseminated to podiatry students in England between February and March 2021. Respondents to the questionnaire were categorised as individuals who had either decided to engage in the profession ‘before’ or ‘after’ a first career. Mann-Whitney U non-parametric difference tests were performed to compare outcome questions relating to motivations, sources of influence and barriers between groups. Results One hundred and fifteen students completed the questionnaire. Overall, the study demonstrated many similarities between the groups (before and after a first career). However, there were distinct differences when considering some of the motivations (i.e., intellectually stimulating, student bursaries), sources of influence (i.e., own patient experience) and barriers (i.e., financial, job availability) associated with engaging in the podiatry profession. Overall, altruistic reasons were the key motivations for choosing podiatry. Personal sources of influence such as conducting own research, was the most important source of influence. Similar to other studies, a lack of awareness of the podiatry profession and what it entails remains problematic. Conclusions This is the first national questionnaire investigating career choice decision-making for podiatry students in England or in any other country. The similarities suggest that marketing is applicable to both groups. However, an absolute must is a future national strategy that makes educational sources more impactful. Additionally, following the Covid-19 pandemic, the increased interest in health and care professions suggests now is the right time to market podiatry to individuals looking for a career change. Finally, the influence of personal encounters with podiatrists shows the transformational role podiatrists can have in recruiting to the profession. Supplementary Information The online version contains supplementary material available at 10.1186/s13047-022-00551-6. departments [2,3]. However, recruitment of podiatrists falls short of NHS workforce requirements [4]. With the predicted supply of podiatrists in England being insufficient to meet foot health needs in the next ten years, 'every effort' is required to recruit more students to reverse the diminishing supply of podiatrists [4]. Applications and acceptances to podiatry pre-registration courses have been in decline since 2012. This was exacerbated in 2016 with the removal of the non-repayable NHS bursary which covered podiatry students' university fees and which was replaced with tuition fees and loans in 2017, and the addition of the Learning Support Fund in 2020 [5]. The cessation of national commissioning of healthcare training led to an increase in healthcare programmes e.g. 
physiotherapy which have attracted many students who may previously have considered podiatry. Undergraduate podiatry entrants decreased by 19% in 2017-18 in comparison to 2016-17 [6] and there was a 40% reduction overall between 2017 and 2019 [4]. Furthermore, there was an 8.1% attrition rate from the Health and Care Professions Council (HCPC) podiatry register [7]. The Saks report highlighted that there had been an increase in podiatry student recruitment [4] but it is important to acknowledge the effect of the Covid-19 pandemic on fluctuations in podiatry course entrants from the past two years. Addressing the reduction in the podiatry workforce, alongside other allied health professionals (AHPs), was recognised in the NHS Long-Term plan as part of the new workforce implementation plan [8]. If universities continue to experience issues with student recruitment to AHP courses, they may become unsustainable leading to a vacuum of qualified professionals which will affect delivery of care [9] and may lead to failure of the profession. It should be acknowledged that podiatry is one of four AHPs, alongside therapeutic radiography, prosthetics and orthotics and orthoptics, seen as being vulnerable to university course recruitment [9]. The landscape of AHP numbers and recruitment is varied and there are professions with more success in recruiting students to their profession than others. In a recent study, Whitham et al., [10] generated four themes from focus group discussions with Generation Z participants about what attracted them to podiatry careers. These were lack of awareness of podiatry, accessibility of course and career, career status and breadth/ opportunity of scope of practice. To increase the applications to podiatry courses, it is important to understand the decision-making process in choosing this career. A podiatry career offers a variety of practice settings including working in the NHS, the private sector or freelance, while pre-registration training includes general medicine, pharmacology, infection control and public health [4]. A strong characteristic of podiatry student cohorts is diversity including age. In comparison to other AHP courses, there is a higher proportion of mature students (~ 48 years) choosing podiatry courses and/or individuals who have enrolled in podiatry courses following a professional career in a different area of expertise [11]. Accordingly, in 2016/2017, 45% of students beginning a podiatry undergraduate course in England were over 25 years of age [6]. Using a national questionnaire administered across England, the purpose of this study was to: i) identify key motivations, sources of influence and barriers to choosing a podiatry career among current podiatry students in England, and ii) consider the influence of choosing podiatry as a first career or after a first career, on motivations, sources of influence and barriers. This is the first study to explore these topics and draws on the results of this national questionnaire to understand students' choice to become a podiatrist and to identify practical implications for educators and those responsible for future workforce strategy and transformation. It seeks to meet a key imperative to increase recruitment to the podiatry profession. Methods This study reports the national results in the area of podiatry from an online questionnaire designed and hosted using JISC (Bristol, UK) software which was disseminated to AHP students for four weeks between February and March 2021. 
The convenience sample for this study was students currently enrolled on all undergraduate and postgraduate podiatry courses in England. Gatekeepers were in the form of Education Leads for the professional body who distributed the questionnaire to universities in England. Participants were sent a link to access the questionnaire. Additionally, the questionnaire was promoted through the Health Education England (HEE) website, blog posts on HEE connect, HEE internal newsletters, a webpage detailing the project [12], social media and newsletters of the professional body. Ethical approval for the study was obtained from the University of Winchester Research and Knowledge Exchange Ethics Committee (Reference: HWB_ REC_21_03). A participant information page explaining the study was included at the beginning of the questionnaire. Participants then confirmed consent through ticking a confirmation box. The questionnaire was anonymous and took approximately fifteen minutes to complete. The questionnaire was designed based on the findings of a scoping review [13] and focus groups conducted with members of an AHP leadership programme. This process also informed the content validity of the questionnaire. The questionnaire was piloted among physiotherapy Participants were asked about their background, and the motivations, sources of influence and barriers to choosing an AHP career. A series of questions were posed within these broad headings, with participants being provided the opportunity to respond on a Likert scale. The Likert scale included the following statements and numeric values: strongly disagree (1), disagree (2), neutral (3), agree (4), strongly agree (5). Participants could also respond to any given question with a notapplicable response, while there was also the opportunity to add additional context to the answers provided in the questions via a series of free-text boxes. Open-ended questions included asking about public perception of their profession and advice to individuals interested in the profession. Demographic questions included year of study, ethnicity, disability, gender and age. An additional file shows the questionnaire (see Additional file 1). Data analysis As a result of the relatively high levels of podiatry students over the age of 25 [6], respondents were categorised as individuals who had either engaged in the profession 'before' or 'after' their first career in employment (Pre-FC and Post-FC, respectively). Pre-FC respondents included individuals who made their decision to join the podiatry profession during their secondary school, college or during their initial University degree. Post-FC respondents included individuals who were previously employed in an alternative career before joining the podiatry profession. Prior to statistical analysis, N/A responses on Likert scales were removed from the analysis. NVivo was used for managing the open-ended question data. The data was initially filtered for Pre-FC and Post-FC. Thematic analysis using Braun and Clarke's approach [14] was utilised to analyse the open-ended questions. This involved becoming familiar with the data through reading and rereading the open-ended question responses. The next step was generating initial codes. This took place through Inductive coding and themes were then identified from groups of codes. The final step in the process was defining and naming the emergent themes. 
The main themes were an interpretation of the open-ended question data, which allowed participants to share their perspectives in their own words.

Statistical analysis
Demographic and outcome data were checked for normality using tests of skewness and kurtosis, together with a graphical assessment of the distributions. Thereafter, Mann-Whitney U non-parametric difference tests were used to compare participants' age, gender, ethnicity, year of study and disability between the Pre-FC and Post-FC groups. A series of Mann-Whitney U tests was also used to compare outcome questions relating to personal, professional-interest and day-to-day job context motivations; personal and educational, media and marketing sources of influence; and personal, professional and understanding-the-role barriers between groups. Data are presented as median and interquartile range (IQR; 25th-75th percentiles), mean ranks for Pre-FC and Post-FC, U statistic, z score and p value. Effect sizes are also reported as r, based on the formula r = z/√n, where z is the z score and n is the number of participants. Effect sizes of 0.1, 0.3 and 0.5 denote small, medium and large effects, respectively. Statistical significance was originally set at p < 0.05 but adjusted where necessary via the Bonferroni technique to minimise the risk of type I error. Statistical analysis was undertaken in SPSS (v.26).

Results
This questionnaire recruited 115 podiatry participants (Pre-FC, n = 50; Post-FC, n = 65; Table 1), a response rate of 12.8% of a population of approximately 900 students. Pre-FC participants were typically younger and of more varied ethnicity than Post-FC participants (both p < 0.01; Table 1). Post-FC participants were generally older (36+ years) and of white ethnicity (85%). There were no differences in gender, type of study, year of study or number of respondents with some form of disability between Pre-FC and Post-FC respondents (p > 0.05; Table 1). The results begin with the findings from the demographic and Likert-scale questions before turning to the open-question findings.

Motivations, sources of influence and barriers to choosing podiatry (Aim 1)
Participants identified someone in the profession I saw/met who was a really good role model for me and my own research into the podiatry profession as two important personal sources of influence (Table 2). In terms of motivations for choosing podiatry, particularly with regard to the day-to-day context of the profession, participants typically agreed/strongly agreed with all of the items reported (see Additional file 2). Where I can use my skills to improve the quality of life for a patient/service user was the most important motivation for both groups (see Additional file 2). Academic interests, interest in area of profession, intellectually stimulating and that suits my personal qualities and values were all perceived to be important motivations for engaging in the profession (Table 3). With regard to professional motivations, excluding the student bursary item, participants agreed/strongly agreed with all items (e.g., the potential for job security, the opportunity to work in the private or public sector, and good job availability and employment opportunities; Table 4). With regard to barriers to the profession, lower scores were generally reported than in the areas above, with the median score for all items being ≤4 (see Additional file 2).
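The between-group comparisons reported here and in the following section rest on Mann-Whitney U tests with the effect size r = z/√n. A minimal sketch of that pipeline is given below; the responses are simulated, and in the study each item was compared between Pre-FC (n = 50) and Post-FC (n = 65) respondents in SPSS.

```python
# Sketch of the quantitative pipeline: Likert labels mapped to 1-5,
# not-applicable responses dropped, groups compared with Mann-Whitney U,
# and the effect size computed as r = |z| / sqrt(n). Data are simulated.
import numpy as np
from scipy import stats

likert = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

raw_pre = ["agree", "neutral", "strongly agree", "agree", "not applicable",
           "disagree", "agree", "strongly agree", "agree", "neutral"]
raw_post = ["strongly agree", "agree", "strongly agree", "strongly agree",
            "neutral", "agree", "strongly agree", "agree", "strongly agree"]

def to_scores(responses):
    return np.array([likert[r] for r in responses if r in likert])

pre_fc, post_fc = to_scores(raw_pre), to_scores(raw_post)
u, p = stats.mannwhitneyu(pre_fc, post_fc, alternative="two-sided")

# z from the normal approximation to U (tie correction omitted for brevity),
# then r = |z| / sqrt(n) with n the total number of participants.
n1, n2 = len(pre_fc), len(post_fc)
z = (u - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
r = abs(z) / np.sqrt(n1 + n2)
print(f"U = {u:.1f}, p = {p:.3f}, z = {z:.2f}, r = {r:.2f}")
```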
Pre-FC and Post-FC effect on motivations, sources of influence and barriers to choosing podiatry (Aim 2) Motivations Intellectually-stimulating and challenging role were approaching statistical significance, with Post-FC participants reporting them to be more important to them than Pre-FC (Table 3). There were no other differences between groups when examining personal interests (all p > 0.05; Table 3). Student bursaries were also approaching statistical significance with the Post-FC group perceiving it to be more important than the Pre-FC group (Table 4). There were no differences in the perceived importance of the day-to-day context of the profession between groups (all p > 0.05; see Additional file 2). Sources of influence Teacher, professional visits to school/colleges, careers programme run by school/colleges and careers fairs were all more important for Pre-FC participants compared to the Post-FC participants ( Table 2). However, Post-FC recognised own patient experiences (their own experience or that of a relative receiving care from the profession) as a more important personal influence than Pre-FC, with findings approaching statistical significance ( Table 2). There were no other differences between groups in how information on podiatry was accessed (see Additional file 2). Barriers With regards to professional barriers, job availability was identified as a key barrier for the Post-FC group (p < 0.05). Financial support was approaching statistical significance, with a higher median rank reported for Post-FC participants (p = 0.08; see Additional file 2). There were no differences in any of the personal barriers between groups (all p > 0.05; see Additional file 2), although outside obligations was approaching statistical significance (p = 0.09). When trying to understand the profession, careers advisors' lack of awareness of the profession and misconception of profession were reported to be more important barriers for the Pre-FC group, with each of these analyses approaching statistical significance (p < 0.05; see Additional file 2). Open questions responses Two open-ended questions were included in the questionnaire. Table 5 illustrates a number of responses from the two groups. These have been selected as they represent the views of several participants in each group. In terms of public perception of podiatry, which was answered by 45% of the sample, themes identified included a lack of awareness and understanding about the profession and what the job entails which were highlighted by both groups. Any knowledge, and positive perception, was likely to be obtained through personal experience with a podiatrist. The perception of podiatry as 'nail cutting' was raised by a quarter of question respondents of the Post-FC group and it was suggested that this coincided with a view of podiatry not being taken seriously as a healthcare course. The second open-ended question asked what advice participants would give someone interested in the profession. This was answered by 81% of the sample. In terms of pre-application advice, a key theme was the importance of undertaking work experience or shadowing to understand the role which both groups emphasised. More participants in the Post-FC group mentioned being mindful of the academic workload and the financial commitment of the course. 
Discussion The purpose of this study was to identify the key motivations, sources of influence and barriers to choosing a podiatry career for students, and whether this differed between people who had (not) previously engaged in an alternative first career (Pre-FC and Post-FC). Altruistic reasons were the key motivations for choosing podiatry. Personal sources of influence, such as seeing a podiatrist at work, someone in the profession I saw/met who was a really good role model for me or my own research, were the most important sources of influence. Overall, educational, media and marketing sources scored low in terms of influence. On the whole, potential barriers to the profession scored low perhaps owing to the fact that the participants had overcome these barriers to enable their engagement with the podiatry profession. Nevertheless, a lack of awareness of the podiatry profession and what it entails remains problematic. Although the study demonstrated many similarities between Pre-FC and Post-FC respondents across the main themes to the study, there were distinct differences between groups when considering some of the motivations (i.e., intellectually stimulating, student bursaries), sources of influence (i.e., own patient experience) and barriers (i.e., financial, job availability) associated with engaging in the podiatry profession. However, as only small to medium effect sizes were observed between groups, these findings must be interpreted with caution. As this is the first national questionnaire to explore these topics in England, these findings may have important implications for recruitment of podiatrists both at the national level, (e.g. NHS and HEE), but also at a local level for universities advertising podiatry courses, and school and colleges providing satisfactory information on podiatry for it to be seen as a viable career option for both Pre-FC and Post-FC students. Motivations Post-FC participants reported intellectually stimulating, challenging role and student bursaries as three motivations that were of greater importance to them compared to Pre-FC. The two former motivations are likely related to Post-FC participants wanting to commit to a career change that may ultimately lead to job satisfaction [15]. In promoting podiatry, the importance of these motivations suggests that there needs to be a greater focus on the seriousness and medical emphasis of podiatry work and the level of skills and knowledge required [16]. Our study complements research exploring occupational therapy career choice motivation for mature students where financial pressure was identified as the key factor deterring students choosing this career [17]. The loss of the NHS bursary was seen as the likely reason that accelerated the decline in undergraduate applications to podiatry courses [10,16]. Podiatry was greatly affected because of the high proportion of mature students [4,16,18] who are more 'debt-adverse' than younger students and are more likely to have responsibilities which require funding [19]. Therefore, our finding relating to the importance of student bursaries among the Post-FC group is unsurprising. Minority ethnic students particularly from lower income groups are more averse to taking out loans [18] and therefore the bursary removal was likely to affect these students more. The Saks report confirmed that there had been an increase in student recruitment to podiatry courses in the current academic year and highlighted the 'welcome reinstatement of the bursaries' [4]. 
From September 2020, students starting or continuing an undergraduate or postgraduate podiatry course could apply for the NHS Learning Support Fund allowance. Our research took place in early 2021, and as such, public awareness of the fund may still be growing. That there has been an increase in student recruitment [4] suggests that awareness of the fund has improved since we undertook data collection, in addition to the vast HEE and Office for Students funded careers activity in recent years. Nevertheless, it is critical that public awareness of this fund is widespread so that different population groups (e.g., There is a synonymous link between mature students and Post-FC participants, as in our study, 94% of our Post-FC group were older than 25 years of age. Interestingly, although the Office for Students stated that older podiatry students have a greater interest in private practice and potentially earning a large salary [16], such findings were not evident in our study (Table 4). For example, 86.1% of our Post-FC respondents agreed/strongly agreed that working in the NHS was a key reason for choosing podiatry. With there being a minimum of a 19% NHS vacancy rate predicted in England for podiatry by 2025 [20] and with 60% of members of the Royal College of Podiatry working in the private sector [4], it may be prudent for NHS recruitment campaigns to either: i) increase recruitment of Post-FC podiatrists as they may want to work in the NHS, or ii) develop strategies to encourage Pre-FC podiatrists to be motivated to work in the public sector. Influences Post-FC participants recognised own patient experiences (their own experience or that of a relative receiving care from the profession) as a more important personal influence than the Pre-FC group. In the study by Byrne [21], mature students reported a proportionally higher amount of exposure to occupational therapy through personal life experiences than the rest of the cohort. More generally, in accordance with recent research [10,22], a high proportion of our study sample (46 and 50.1% for Pre-FC and Post-FC, respectively) were influenced by their own personal (or relatives) podiatry treatment. These findings highlight the opportunity for qualified podiatrists; they can take on the role of career ambassador when meeting patients. This message needs to be conveyed to all podiatrists through the NHS, private practice, HEE and the Royal College of Podiatry. Activating this extensive workforce to be the ambassadors for the profession so that they see every patient and relative as a future podiatrist and to overtly prioritise work experience and university clinics opening doors for work experience. Unsurprisingly, and similar to Craik et al. [22], we found school or college sources of influence to be approaching statistical differences: these sources were more important for the Pre-FC group. However, career advisors' lack of awareness of the profession was reported to be a substantial barrier for the Pre-FC group. In accordance with past research, careers advisors, and to a lesser extent, teachers, were reported to not have a strong influence on Pre-FC's choice to engage in podiatry [23,24,25]. However, previous employment in healthcare was perceived to be an important influence for the Post-FC group to engage with the profession. This was a similar finding to Craik et al. 
[22], who suggested that the higher number of mature students in occupational therapy may be partly owing to students only hearing about the profession through their work in health care settings, and not when they first make their career choice at school or college. Careers advisors are a vital conduit to the successful recruitment of students to AHPs [21], and therefore they need to have a good level of knowledge and understanding about the podiatry profession. With medicine and nursing still primarily promoted as the key healthcare careers in schools [9], careers advisors can use the familiarity of these careers to introduce students to AHPs, including podiatry. For Pre-FC participants, 'someone in the profession I saw/met who was a really good role model for me' and 'my own research' were considered more important than school or college sources. The importance of seeing AHP role models, and their impact on career choice, has been explored in the literature, especially the lack of role models for minority ethnic individuals in particular professions, such as physiotherapy [26,27]. Despite increases in the ethnic diversity of podiatry students in recent years [6], the Saks report [4] recommended improving efforts to recruit a more diverse (including ethnicity and gender) podiatry student population. In our study, the majority of the sample were white (85 and 62% for Post-FC and Pre-FC participants, respectively), and the importance of seeing role models in the profession suggests that more ethnic minority role models in podiatry are needed to support student recruitment.

Barriers

Job availability was perceived to be an important barrier for the Post-FC group, whereas careers advisors' lack of awareness of the profession and misconceptions of the profession were reported to be more important barriers for the Pre-FC group. Previous research has shown that podiatry is an attractive career path for mature students as it leads to 'almost certain employment' following completion of undergraduate podiatry courses [4,9,10]. The importance of employment for Post-FC participants was shown in our study, as the more mature students reported this to be of greater importance than Pre-FC participants. Job availability is not considered a pertinent reason for selecting other AHP careers [28,29]. However, although it may not be a key motivation, ensuring that job availability is viewed positively, especially given the current podiatry recruitment challenges, is important, particularly for the Post-FC group. Misconceptions around the profession, highlighted also in answers to the open questions, suggest that marketing needs to emphasise the 'extensive, diverse and interesting' scope of practice [4]. The Saks report [4] also suggested that podiatry is not portrayed sufficiently as an appealing or important career. Our study was unable to determine why individuals working as an AHP overlooked the opportunity to engage in podiatry. Although the literature notes an opportunity to maximise the territory of the foot [30] once individuals are invested in the profession [31], the perceived status of the profession may mean this is never actualised. By addressing the perceived status of the podiatry profession through contemporary research, it may be possible to target marketing to attract applicants. The Saks report [4] highlighted that the image of podiatry as a profession is perceived negatively by the public.
Tollafield [32] suggested overcoming the 'ugh factor' associated with working with feet by placing the emphasis on function rather than condition: 'podiatrists help return people to activity and occupation' as a slogan, rather than 'podiatrists work with ingrowing toenails and ulcers'. Current research is being undertaken to understand perceptions of the human foot among social media users, and this may help understand the 'ugh factor'. More broadly, research is needed to understand the relationship between professional status, the 'ugh factor' and podiatry as a career choice. It is clear that other AHPs, whilst larger in number, also have greater visibility and a better-known profile, for example physiotherapists and paramedics. Media marketing strategies for podiatry should be implemented that not only increase public awareness of the profession, but also enhance the government's and other health professionals' understanding of podiatry [4]. In addition, the increased interest in health and care professions during the COVID-19 pandemic, together with changing employment circumstances and priorities [33], suggests that now is the right time to market podiatry to individuals looking for a career change.

Strengths and limitations

It is important to contextualise our findings in light of the strengths and limitations of the study. The study was conducted prior to the publication of the Podiatry Career Framework [34] and the Standards for the Foot Health Workforce [35]. These frameworks may influence the decision to study podiatry among podiatry associate professions in future years. Although conducting the study in February and March 2021 provided a fascinating insight into the views of podiatry students during the COVID-19 pandemic, participant responses will have been influenced by the unique academic and professional conditions which participants experienced [36]. Despite the questionnaire being piloted, it had not been validated. Furthermore, recall ability is a limitation of questionnaires [37], and this is likely to have affected the participants in our study owing to new impressions formed on the course influencing perceptions of podiatry. The study sample lacked ethnic diversity, as 75% of respondents were white. However, our sample was fairly representative of qualified podiatrists in the UK registered with the HCPC [38], with minority ethnic males, for example, comprising 3.5% of the study sample. Our sample was a relatively small, self-selecting proportion of all podiatry students in England. Therefore, the findings cannot be taken to represent the views of the wider podiatry student population [39]. There are a number of existing studies exploring career choice motivations in other AHPs, but they focus on one university or geographical area (for example, Craik et al. [23]). Finally, owing to the substantial number of analyses, and the stringency in reporting the results to minimise the risk of Type I error via the Bonferroni technique, the study demonstrated a lack of statistically significant findings. A more refined questionnaire (with fewer items) and a larger sample size may enable future research studies to elicit statistically significant findings. A real strength of our study was that it was the first national questionnaire about podiatry students, the results of which provide a data set for future studies exploring podiatry as a career choice.

Conclusion

The Saks report [4] mentioned the need for research exploring why students chose podiatry as their career route.
Our study has afforded a nationwide insight into the motivations, sources of influence and barriers among people who chose podiatry as their career. This is the first study with national reach in this field, and it reveals areas for future focus in marketing for pre-registration course recruitment. This study has highlighted that individuals are choosing podiatry at all life stages, and yet there were a number of similarities between the two groups (pre and post first career).
2022-01-23T17:10:49.315Z
2022-01-21T00:00:00.000
{ "year": 2022, "sha1": "8ebbc5294e0948ddd86fa8964f8f59a3588341f0", "oa_license": "CCBY", "oa_url": "https://jfootankleres.biomedcentral.com/track/pdf/10.1186/s13047-022-00551-6", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "fb4beced23fdf41b49bbcb2f68b0b7adefe22db4", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
14764542
pes2o/s2orc
v3-fos-license
Bayesian approach to cubic natural exponential families

For a natural exponential family (NEF), one can associate in a natural way two standard families of conjugate priors, one on the natural parameter and the other on the mean parameter. These families of conjugate priors have been used to establish some remarkable properties and characterization results for the quadratic NEFs. In the present paper, we show that with a NEF we can associate a class of NEFs, and for each one of these NEFs we define a family of conjugate priors on the natural parameter and a family of conjugate priors on the mean parameter which differ from the standard ones. These families are then used to extend to the Letac-Mora class of real cubic natural exponential families the properties and characterization results related to Bayesian theory established for the quadratic natural exponential families.

Introduction and preliminaries

To make clear the motivations of the present paper, we first recall some facts concerning natural exponential families and their variance functions. Our notations are the ones used by Letac in [9]. Let $\mu$ be a positive Radon measure on $\mathbb{R}$, and denote by
$$L_\mu(\lambda)=\int_{\mathbb{R}}\exp(\lambda x)\,\mu(dx) \qquad (1.1)$$
its Laplace transform. Let $\mathcal{M}(\mathbb{R})$ be the set of measures $\mu$ such that the set
$$\Theta(\mu)=\operatorname{interior}\{\lambda\in\mathbb{R};\ L_\mu(\lambda)<+\infty\} \qquad (1.2)$$
is non-empty and $\mu$ is not a Dirac measure. The cumulant function of an element $\mu$ of $\mathcal{M}(\mathbb{R})$ is the function defined for $\lambda\in\Theta(\mu)$ by $k_\mu(\lambda)=\ln L_\mu(\lambda)$. The NEF generated by $\mu$ is the set
$$F(\mu)=\{P(\lambda,\mu)(dx)=\exp\big(\lambda x-k_\mu(\lambda)\big)\,\mu(dx);\ \lambda\in\Theta(\mu)\}, \qquad (1.3)$$
parameterized by the mean $m=k'_\mu(\lambda)$, with domain of the means $M_{F(\mu)}=k'_\mu(\Theta(\mu))$; writing $\psi_\mu=(k'_\mu)^{-1}$, the variance function of $F(\mu)$ is $V_{F(\mu)}(m)=k''_\mu(\psi_\mu(m))$. The set $\Lambda(\mu)$ of powers of convolution is stable under addition, which means that if $\lambda$ and $\lambda'$ are in $\Lambda(\mu)$, then $\lambda+\lambda'$ is in $\Lambda(\mu)$.

Several classifications of NEFs based on the form of the variance function have been realized in the last three decades. The most interesting classes of real NEFs are the class of quadratic NEFs, i.e., the class of NEFs such that the variance function is a polynomial in the mean of degree less than or equal to 2, characterized by Morris [11], and the class of cubic NEFs, i.e., the class of NEFs such that the variance function is a polynomial in the mean of degree less than or equal to 3, characterized by Letac and Mora [10]. Recall that, up to affine transformations and powers of convolution, the class of quadratic NEFs contains six families: the Gaussian, the Poisson, the gamma, the binomial, the negative binomial, and the hyperbolic family. The class of cubic NEFs contains, besides the quadratic ones, six other families, the most famous being the inverse Gaussian distribution with variance function $V(m)=m^3$. It is worth mentioning here that multivariate versions of these classes have also been defined and completely described. For instance, Casalis [1] has described the so-called class of multivariate simple quadratic NEFs, and Hassairi [6] has described the class of multivariate simple cubic NEFs, which are respectively the generalizations of the real quadratic and real cubic NEFs. The fact that the variance function of a family is quadratic or cubic is not only a question of form; it corresponds to some interesting analytical characteristic properties. Indeed, the Morris class of quadratic NEFs has some characterizations involving orthogonal polynomials due to Feinsilver [5]. These characterizations have been extended to the Letac-Mora class of real cubic NEFs by Hassairi and Zarai [8] using a notion of 2-orthogonality of a sequence of polynomials.
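Since membership in the Morris or Letac-Mora class is read off from $V(m)=k''(\psi(m))$ with $\psi=(k')^{-1}$, the classification is easy to check symbolically for concrete families. The following sketch (not part of the paper; it assumes the standard cumulant functions $k(\theta)=e^{\theta}$ for the Poisson family and $k(\theta)=-\sqrt{-2\theta}$ for the inverse Gaussian family) recovers the quadratic and cubic variance functions:

```python
import sympy as sp

theta = sp.symbols('theta')
m = sp.symbols('m', positive=True)

def variance_function(k):
    """Return V(m) = k''(psi(m)), where psi is the inverse of the mean map k'."""
    kp = sp.diff(k, theta)                 # mean map: m = k'(theta)
    kpp = sp.diff(k, theta, 2)             # variance in the theta-parametrization
    psi = sp.solve(sp.Eq(kp, m), theta)    # invert the mean map
    return [sp.simplify(kpp.subs(theta, s)) for s in psi]

# Poisson: k(theta) = exp(theta)  ->  V(m) = m (quadratic class)
print(variance_function(sp.exp(theta)))        # [m]

# Inverse Gaussian: k(theta) = -sqrt(-2*theta), theta < 0  ->  V(m) = m**3 (cubic class)
print(variance_function(-sp.sqrt(-2*theta)))   # [m**3]
```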
Other remarkable characterizations of the quadratic NEFs are related to Bayesian theory. For instance, given a NEF $F(\mu)$, Diaconis and Ylvisaker [4] have considered the standard family $\Pi$ of priors on the natural parameter $\lambda$, defined for $t_1>0$ and $m_1\in M_{F(\mu)}$ by
$$\pi_{t_1,m_1}(d\lambda)=C_{t_1,m_1}\exp\big(t_1 m_1\lambda-t_1 k_\mu(\lambda)\big)\,\mathbf{1}_{\Theta(\mu)}(\lambda)\,d\lambda, \qquad (1.5)$$
where $C_{t_1,m_1}$ is a normalizing constant. This distribution is in fact a particular case of the so-called implicit distribution on the parameter of a statistical model introduced in [7]. They have shown that if $X$ is a random variable distributed according to $P(\lambda,\mu)$ (see (1.3)), then the only conjugate family of prior distributions on $\lambda$ that gives a linear posterior expectation of $k'_\mu(\lambda)$ given $X$ is the standard one $\Pi$ (see also [2]). Consonni and Veronese [3] have considered another family $\widetilde\Pi$ of prior distributions $\widetilde\pi_{t_1,m_1}$ on the mean parameter $m$, defined also for $t_1>0$ and $m_1\in M_{F(\mu)}$ by
$$\widetilde\pi_{t_1,m_1}(dm)=\widetilde C_{t_1,m_1}\exp\big(t_1 m_1\psi_\mu(m)-t_1 k_\mu(\psi_\mu(m))\big)\,\mathbf{1}_{M_{F(\mu)}}(m)\,dm. \qquad (1.6)$$
They have shown that the fact that $\widetilde\Pi$ contains $k'_\mu(\Pi)$ characterizes the quadratic NEFs. These authors have also shown that if the prior on the mean parameter $m$ is $\widetilde\pi_{t_1,m_1}$, then under some conditions on the support of $\mu$, the NEF $F(\mu)$ is quadratic if and only if the posterior expectation of $k'_\mu(\lambda)$ is a linear function of the sample mean. We also mention that Diaconis and Ylvisaker [4] have shown that if the standard prior on $\lambda$ is given by (1.5), then for a sample $X_1,\dots,X_n$ from $P(\lambda,\mu)$,
$$E\big(k'_\mu(\lambda)\mid X_1,\dots,X_n\big)=\frac{t_1 m_1+n\overline{X}}{t_1+n}, \qquad (1.7)$$
or equivalently, in terms of the mean parameter, the posterior expectation of $m$ is an affine function of the sample mean $\overline{X}$.

A natural question within this approach is whether one can extend the properties and characterization results concerning the quadratic NEFs and related to Bayesian theory to the Letac-Mora class of real cubic NEFs. The aim of the present paper is to give an answer to this question. We first introduce, for a given NEF $F(\nu)$ and $\beta$ in some interval of $\mathbb{R}$ containing 0, a NEF $F_\beta(\nu)$ such that $F_0(\nu)=F(\nu)$. We then define a family $\Pi^\beta$ of prior distributions on the natural parameter $\theta$ and a family $\widetilde\Pi^\beta$ of prior distributions on the mean parameter $m$, which may be seen as generalizations of the families $\Pi$ and $\widetilde\Pi$ defined above, since $\Pi=\Pi^0$ and $\widetilde\Pi=\widetilde\Pi^0$. After proving that for each $\beta$ the family $\Pi^\beta$ is a conjugate family of prior distributions with respect to the NEF $F_\beta(\nu)$, we show that a cubic NEF $F(\nu)$ is characterized by the fact that there exists a $\beta$ such that the posterior expectation is linear when the prior on $\theta$ is $\pi^\beta_{t,m_0}$. We also show that a cubic NEF $F(\nu)$ is characterized by a differential equation verified by the cumulant function $k_\nu$. A third characterization of a real cubic NEF is obtained when the family of priors $\widetilde\Pi^\beta$ contains the family $k'_\nu(\Pi^\beta)$. The restriction of all these results to the subclass of quadratic NEFs leads to the results of Diaconis and Ylvisaker [4] and Consonni and Veronese [3]. The results of the paper are illustrated by an example.
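Before turning to the main results, the linear rule (1.7) can be checked concretely in the Poisson case, where the standard prior on $\lambda$ corresponds to a gamma prior on the mean $m=e^{\lambda}$ (a classical fact). The following numerical sketch is illustrative only; the hyperparameter values are hypothetical and not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

t1, m1 = 4.0, 2.5                    # hypothetical hyperparameters of pi_{t1,m1}
n = 20
x = rng.poisson(lam=3.0, size=n)     # sample from a Poisson NEF member

# The prior pi_{t1,m1}(dlam) ~ exp(t1*m1*lam - t1*exp(lam)) dlam pushes forward,
# via m = exp(lam), to a Gamma(shape=t1*m1, rate=t1) prior on the mean m.
# Conjugacy then gives the posterior Gamma(t1*m1 + sum(x), t1 + n), whose mean
# is exactly the Diaconis-Ylvisaker rule (t1*m1 + n*xbar)/(t1 + n).
dy_rule = (t1 * m1 + n * x.mean()) / (t1 + n)

# Monte Carlo check against the exact posterior.
post = rng.gamma(shape=t1 * m1 + x.sum(), scale=1.0 / (t1 + n), size=200_000)
print(dy_rule, post.mean())          # the two values agree up to MC error
```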
Main results

In this section, we state and prove our main results. Our considerations will be restricted to regular NEFs, so that the domain of the means of a NEF is equal to the interior of the convex hull of its support. This property of regularity is satisfied by all the most common NEFs. An important fact, which will be crucial in our proofs, is that, up to affine transformations and powers of convolution, a cubic natural exponential family may be obtained from a quadratic one by the so-called action of the linear group $GL(\mathbb{R}^2)$ on the real families. Originally, the action of the linear group includes the affine transformations and powers of convolution; however, since these transformations preserve the class of quadratic NEFs and the class of cubic NEFs, we will focus on the facts which we need here. For more precise statements in this connection, refer to Hassairi [6].

Let $F(\nu)$ and $F(\mu)$ be two real NEFs. Suppose that there exists a $\beta$ in $\mathbb{R}$ such that the corresponding domain condition (2.10) relating the two families holds; then we write $F(\nu)=T_\beta(F(\mu))$. This defines an action on the natural exponential families, so that in particular $T_{-\beta}(T_\beta(F(\mu)))=F(\mu)$. We also mention that $F(\nu)=T_\beta(F(\mu))$ may be expressed in terms of the cumulant functions of the generating measures by a relation which we refer to as (2.11). An important fact is that the relation (2.11) between the cumulant functions may be explicitly given in terms of the measures themselves. In fact, if the $\alpha$-power of convolution $\nu^{\alpha}$ of $\nu$ is written as $\nu^{\alpha}(dx)=h(\alpha,x)\,\sigma(dx)$, where $\sigma(dx)$ is either the Lebesgue measure or a counting measure, then one can exhibit from $h$ a measure which satisfies (2.10) and generates the family $T_{-\beta}(F(\nu))$. This measure $\mu$ will be denoted $T_{-\beta}(\nu)$, and the family $F(\mu)=T_{-\beta}(F(\nu))$ will be denoted $F_\beta$. We mention here that if $F(\nu)$ is a cubic NEF, there exist $\beta$ in $B_{F(\nu)}$ and a quadratic NEF $F(\mu)$ such that $F(\nu)=T_\beta(F(\mu))$.

Besides the restriction to half-lines for the domain of the means in the definition of $(M_{F(\nu)})_\beta=\{m\in M_{F(\nu)};\ 1+\beta m>0\}$, we also define, for $\nu\in\mathcal{M}(\mathbb{R})$ and $\beta\in\mathbb{R}$, the sets $H_\beta=\{x\in\mathbb{R};\ 1+\beta x>0\}$ and $B_{F(\nu)}=\{\beta\in\mathbb{R};\ \nu(H_\beta)>0\}$. We have the following preliminary result.

Proposition 2.1 Let $\nu\in\mathcal{M}(\mathbb{R})$ and $\beta\in\mathbb{R}$. Then $(M_{F(\nu)})_\beta$ is non-empty if and only if $\beta\in B_{F(\nu)}$.

Proof We make the reasoning for $\beta\geq 0$; the case $\beta<0$ may be done in a similar way. Suppose that there exists $m_0$ in $(M_{F(\nu)})_\beta$, that is, $m_0\in M_{F(\nu)}$ and $1+\beta m_0>0$. As $M_{F(\nu)}$ is equal to the interior of the convex hull of $\mathrm{supp}(\nu)$, there exists $x_0$ in $\mathrm{supp}(\nu)$ such that $x_0>m_0$. This, with the fact that $1+\beta m_0>0$, implies that $1+\beta x_0>0$. Thus $H_\beta$ is an open set which contains an element of $\mathrm{supp}(\nu)$. It follows that $\nu(H_\beta)>0$ and $\beta$ is in $B_{F(\nu)}$. Conversely, suppose that $\nu(H_\beta)>0$. Since $H_\beta$ is an open set, this implies that it contains an element $x_0$ of $\mathrm{supp}(\nu)$. We have on the one hand that $1+\beta x_0>0$, so that there exists $\varepsilon>0$ such that $1+\beta x_0-\beta\varepsilon>0$. On the other hand, as $M_{F(\nu)}$ is equal to the interior of the convex hull of $\mathrm{supp}(\nu)$, there exists $m_0$ in $M_{F(\nu)}$ such that $|m_0-x_0|<\varepsilon$. From this we deduce that $m_0$ is in $(M_{F(\nu)})_\beta$. ✷

We now use the natural parametrization and the parametrization by the mean of the original family $F(\nu)$ to give two parameterizations of the family $F(\mu)$. These parameterizations are, for $\beta\neq 0$, different from the usual parameterizations of $F(\mu)$: for $\theta\in\Theta(\nu)$, we write the corresponding element of $F(\mu)$ as $P(\beta,\theta,\nu)$, and similarly we parameterize by $m\in M_{F(\nu)}$. Accordingly, we define for $\beta$ in $B_{F(\nu)}$ two families of prior distributions. Let $(\Theta)_\beta=\{\theta\in\Theta(\nu);\ 1+\beta k'_\nu(\theta)>0\}$. Then we have that $(M_{F(\nu)})_\beta=k'_\nu((\Theta)_\beta)$, and we define, for $t>0$ and $m_0\in(M_{F(\nu)})_\beta$, a prior distribution $\pi^\beta_{t,m_0}(d\theta)$ on $(\Theta)_\beta$, given by an explicit density (2.12), and
$$\Pi^\beta=\{\pi^\beta_{t,m_0};\ t>0 \text{ and } m_0\in(M_{F(\nu)})_\beta\}.$$
This family comes in fact from the standard family $\Pi$ defined in (1.5) using (2.11). The normalizing constant $C^\beta_{t,m_0}$ is then well defined for $t>0$ and $m_0\in(M_{F(\nu)})_\beta$. Besides this family of priors on the parameter $\theta$, we define a family of priors on the parameter $m$. Again for $t>0$ and $m_0\in(M_{F(\nu)})_\beta$, we consider the probability distribution $\widetilde\pi^\beta_{t,m_0}(dm)$, where $\widetilde C^\beta_{t,m_0}$ is a normalizing constant; it is the image of $\widetilde\pi_{t_1,m_1}$ defined in (1.6) by the map $m'\longmapsto \dfrac{m'}{1-\beta m'}$. The family of priors on $m$ is then
$$\widetilde\Pi^\beta=\{\widetilde\pi^\beta_{t,m_0};\ t>0 \text{ and } m_0\in(M_{F(\nu)})_\beta\}.$$
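The hyperparameter changes behind $\widetilde\Pi^\beta$ rest on the Möbius-type map $m\mapsto m/(1+\beta m)$ and the map $m'\mapsto m'/(1-\beta m')$ appearing in the definition of $\widetilde\pi^\beta_{t,m_0}$. A quick symbolic sanity check (illustrative only, not from the paper) confirms that they are mutually inverse wherever $1+\beta m>0$:

```python
import sympy as sp

m, mp, beta = sp.symbols('m mp beta')

fwd = m / (1 + beta * m)      # m  -> m' = m/(1 + beta*m)
inv = mp / (1 - beta * mp)    # m' -> m'/(1 - beta*m')

print(sp.simplify(inv.subs(mp, fwd) - m))    # 0: inv(fwd(m)) == m
print(sp.simplify(fwd.subs(m, inv) - mp))    # 0: fwd(inv(m')) == m'
```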
Next, we prove that these families are conjugate families of prior distributions.

Proposition 2.2 i) The family $\Pi^\beta$ is a conjugate family of prior distributions with respect to the NEF $F_\beta$ parameterized by the natural parameter $\theta$. ii) The family $\widetilde\Pi^\beta$ is a conjugate family of prior distributions with respect to the NEF $F_\beta$ parameterized by the mean parameter $m$.

Proof It is easy to write the distribution of the random vector $(\theta,X_1,\dots,X_n)$ explicitly and to conclude. ✷

Proposition 2.3 If the prior on $\theta$ is $\pi^\beta_{t,m_0}$, then the posterior distribution of $\theta$ given a sample $X_1,\dots,X_n$ is $\pi^\beta_{\,t+n-\beta n\overline{X},\ (tm_0+n\overline{X})/(t+n-\beta n\overline{X})}$.

Proof With the same technique used in Proposition 2.2, we deduce the stated form of the posterior. ✷

Proposition 2.4 Let $F(\nu)$ be a cubic NEF. Then there exists $\beta$ in $B_{F(\nu)}$ such that, when the prior on $\theta$ is $\pi^\beta_{t,m_0}$, the posterior expectation is an affine function of the sample mean $\overline{X}$.

Proof Since $F(\nu)$ is cubic, there exist $\beta$ in $B_{F(\nu)}$ and a quadratic NEF $F(\mu)$ such that $F(\nu)=T_\beta(F(\mu))$ and $\nu=T_\beta(\mu)$. Using (2.11), it is easy to see that if the prior on $\theta$ in $(\Theta)_\beta$ is $\pi^\beta_{t,m_0}$, then the prior on $\lambda$ in $\Theta(\mu)$ is the standard $\pi_{t_1,m_1}$ with $t_1=t(1+\beta m_0)$ and $m_1=\dfrac{m_0}{1+\beta m_0}$. Moreover, we have $C^\beta_{t,m_0}=C_{t(1+\beta m_0),\,m_0/(1+\beta m_0)}$. Invoking (1.7), we get the desired linearity. ✷

Now we give a characterization of the cubic NEFs which is based on the linearity of the posterior expectation.

Theorem 2.5 Let $\nu$ be in $\mathcal{M}(\mathbb{R})$. If $F(\nu)$ is cubic, then there exists $\beta$ in $B_{F(\nu)}$ such that the posterior expectation corresponding to the prior $\pi^\beta_{t,m_0}$ is linear in $\overline{X}$. The converse is true if we assume that $\mathrm{supp}(T_{-\beta}(\nu))$ contains an open interval.

Proof The first assertion is Proposition 2.4. For the converse, suppose that the posterior expectation is linear in $\overline{X}$. As the prior on the natural parameter $\theta$ is assumed to be $\pi^\beta_{t,m_0}$, we get as prior on $\lambda\in\Theta(\mu)$ the standard prior distribution $\pi_{t_1,m_1}$, with $t_1=t(1+\beta m_0)>0$, $m_1=\dfrac{m_0}{1+\beta m_0}\in M_{F(\mu)}$ and $C_{t_1,m_1}=C^\beta_{t,m_0}$. As we have that $\mathrm{supp}(\mu)=\mathrm{supp}(T_{-\beta}(\nu))\subset \mathrm{supp}(\nu)\cap\{x;\ 1-\beta x\in\Lambda(\nu)\}$, the assumptions on $\mathrm{supp}(T_{-\beta}(\nu))$ imply that $\mathrm{supp}(\mu)$ satisfies hypotheses (H1) or (H2) of Theorem 1.1 of Consonni and Veronese. According to this and to the linearity of the conditional expectation of the mean parameter of $F(\mu)$, we deduce that this NEF is quadratic. It follows that $F(\nu)=T_\beta(F(\mu))$ is a cubic NEF. ✷

In the following theorem, we give a second characterization of the Letac-Mora class of real cubic NEFs.

Theorem 2.6 Let $\nu\in\mathcal{M}(\mathbb{R})$. Then $F(\nu)$ is cubic if and only if there exist $\beta$ in $B_{F(\nu)}$ and $(a,b,c)\in\mathbb{R}^3$ such that
$$V_{F(\nu)}(m)=(1+\beta m)^3\exp\big(a\,\psi_\nu(m)+b\,k_\nu(\psi_\nu(m))+c\big). \qquad (2.13)$$
Note that (2.13) may be expressed in terms of the cumulant function: there exist $\beta$ and $(a,b,c)\in\mathbb{R}^3$ such that, for all $\theta$ in $(\Theta)_\beta$,
$$k''_\nu(\theta)=\big(1+\beta k'_\nu(\theta)\big)^3\exp\big(a\theta+b\,k_\nu(\theta)+c\big),$$
that is, the cumulant function is a solution of some Monge-Ampère equation (see [12]).

Proof Suppose that $F(\nu)$ is cubic. Then there exist $\beta$ in $B_{F(\nu)}$ and a quadratic NEF $F(\mu)$ such that $F(\nu)=T_\beta(F(\mu))$, or equivalently $F(\mu)=T_{-\beta}(F(\nu))$. Then it follows from (2.9) that
$$V_{F(\nu)}(m)=(1+\beta m)^3\,V_{F(\mu)}\!\Big(\frac{m}{1+\beta m}\Big).$$
It is known (see [1]) that for the quadratic NEF $F(\mu)$ there exist $(a',b',c')\in\mathbb{R}^3$ such that $\ln V_{F(\mu)}(m')=a'\,\psi_\mu(m')+b'\,k_\mu(\psi_\mu(m'))+c'$. Writing (2.11) in terms of the mean parameters, we get (2.13). Conversely, if (2.13) holds, then
$$\ln V_{F(\nu)}(m)=3\ln(1+\beta m)+a\,\psi_\nu(m)+b\,k_\nu(\psi_\nu(m))+c.$$
Taking the derivative, we deduce that the variance function satisfies a first-order differential equation. Solving this equation by standard methods gives that $V_{F(\nu)}$ is a polynomial of degree less than or equal to 3. ✷

A third characterization of the cubic NEFs is based on a relation between the associated families of prior distributions $\Pi^\beta$ and $\widetilde\Pi^\beta$.

Theorem 2.7 Let $\nu\in\mathcal{M}(\mathbb{R})$. Then $F(\nu)$ is cubic if and only if there exists $\beta$ in $B_{F(\nu)}$ such that $\widetilde\Pi^\beta$ contains the family $k'_\nu(\Pi^\beta)$ (see (2.15)).

Proof Consider the set $\Omega$ of admissible hyperparameters. We will show that $k'_\nu(\Pi^\beta)=\{\widetilde\pi^\beta_{t_1,m_1};\ (t_1,m_1)\in\Omega\}$, which is a part of $\widetilde\Pi^\beta$. Let $t>0$ and $m_0$ in $(M_{F(\nu)})_\beta$, and denote by $\sigma$ the image by $k'_\nu$ of the prior $\pi^\beta_{t,m_0}$ on $\theta$ defined in (2.12). One easily computes the density of $\sigma$, which, using (2.15), takes the form of an element $\widetilde\pi^\beta_{t_1,m_1}$; we have that $t_1-b=t>0$, and in the same way one verifies the corresponding relation for $m_1$. Conversely, the image of an element $\pi^\beta_{t,m_0}$ of $\Pi^\beta$ by $k'_\nu$ is given by its very definition; since it is assumed to be in $\widetilde\Pi^\beta$, there exists $(t_1,m_1)$ in $\mathbb{R}_+^*\times(M_{F(\nu)})_\beta$ such that the two expressions of $k'_\nu(\pi^\beta_{t,m_0})$ coincide. Comparing these two expressions gives a relation of the form (2.13), where $a=tm_0-t_1m_1$, $b=t_1-t$, and $c$ is a constant. According to Theorem 2.6, this is the desired result, and the proof is complete. ✷
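The representation of quadratic variance functions used in the proof of Theorem 2.6, $\ln V_{F(\mu)}(m')=a'\psi_\mu(m')+b'k_\mu(\psi_\mu(m'))+c'$, can be verified symbolically for concrete quadratic families. The sketch below assumes the standard cumulant functions for the Poisson and gamma families and is only illustrative:

```python
import sympy as sp

m = sp.symbols('m', positive=True)

# Poisson: k(theta) = exp(theta), psi(m) = log(m), V(m) = m.
psi = sp.log(m)
k_psi = sp.exp(psi)                          # k(psi(m)) = m
# ln V = log(m) = 1*psi + 0*k(psi) + 0, i.e. (a', b', c') = (1, 0, 0)
print(sp.simplify(sp.log(m) - psi))          # 0

# Gamma: k(theta) = -log(-theta), psi(m) = -1/m, V(m) = m**2.
psi_g = -1/m
k_psi_g = -sp.log(-psi_g)                    # k(psi(m)) = log(m)
# ln V = 2*log(m) = 0*psi + 2*k(psi) + 0, i.e. (a', b', c') = (0, 2, 0)
print(sp.simplify(sp.log(m**2) - 2*k_psi_g)) # 0
```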
Example

In this section, we illustrate our results by an example involving the most famous family with variance function of degree 3, namely the inverse Gaussian natural exponential family. Consider a measure $\nu$ which is, up to an affine transformation, an inverse Gaussian distribution. The NEF generated by $\nu$ and its mean parametrization can be made explicit, and for all $m>-1$ the variance function is given by
$$V_{F(\nu)}(m)=(1+m)^3.$$
Now, for $\beta$ in $B_{F(\nu)}$, we have $F_\beta=T_{-\beta}(F(\nu))=\{P(\beta,\theta,\nu)(dx);\ \theta<0\}$. The corresponding family $\Pi^\beta$ of conjugate prior distributions is the family of distributions $\pi^\beta_{t,m_0}(d\theta)=C^\beta_{t,m_0}\,(1+\beta(\cdots))\cdots$, defined for $t>0$ and $m_0$ in $(M_{F(\nu)})_\beta$. Also, in this example, the family $\widetilde\Pi^\beta$ is the set of distributions defined for $t_1>0$ and $m_1$ in $(M_{F(\nu)})_\beta$ by
$$\widetilde\pi^\beta_{t_1,m_1}(dm)=\widetilde C^\beta_{t_1,m_1}\,(1+\beta m)^{-2}\exp\Big(-\frac{t_1(m_1+1)}{2(1+m)^2}+\cdots\Big)\,dm.$$
To see how Theorem 2.6 holds in this example, we need only take $\beta=1$. Then $\mu=T_{-1}(\nu)$ is the standard Gaussian distribution with $V_{F(\mu)}(m')=1$ for $m'\in\mathbb{R}$. We see that
$$V_{F(\nu)}(m)=(1+m)^3\exp\big(a\,\psi_\nu(m)+b\,k_\nu(\psi_\nu(m))+c\big), \quad \text{with } a=b=c=0.$$
In fact, with $a'=b'=c'=0$, and using the relations $a=a'$, $b=b'+\beta a'$, $c=c'$, we get $a=b=c=0$. Concerning Theorem 2.7, we first observe that the hypotheses of this theorem are well verified. In fact, let $\pi^1_{t,m_0}$ be the prior on the natural parameter $\theta$ and $\widetilde\pi^1_{t_1,m_1}$ be the prior on the mean parameter $m$. The density function of $\pi^1_{t,m_0}$ can be written explicitly, and for all $m>-1$ the density function of $k'_\nu(\pi^1_{t,m_0})$ is equal to $\widetilde\pi^1_{t,m_0}$.
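The computations in this example are easy to reproduce symbolically. The sketch below assumes that the affine transformation mentioned in the text amounts to the shifted inverse Gaussian cumulant $k_\nu(\theta)=-\sqrt{-2\theta}-\theta$ (an assumption consistent with $V_{F(\nu)}(m)=(1+m)^3$ on $m>-1$), and checks both the variance function and the Gaussian image under $T_{-1}$:

```python
import sympy as sp

theta = sp.symbols('theta', negative=True)
m = sp.symbols('m', positive=True)   # a representative part of the domain m > -1

# Assumed shifted inverse Gaussian cumulant: k(theta) = -sqrt(-2*theta) - theta.
k = -sp.sqrt(-2*theta) - theta
kp = sp.diff(k, theta)               # mean map: m = k'(theta) = (-2*theta)**(-1/2) - 1
kpp = sp.diff(k, theta, 2)

# Inverting the mean map gives theta = -1/(2*(1+m)**2).
theta_of_m = -1 / (2 * (1 + m)**2)
print(sp.simplify(kp.subs(theta, theta_of_m) - m))           # 0: inversion correct
print(sp.simplify(kpp.subs(theta, theta_of_m) - (1 + m)**3)) # 0: V(m) = (1+m)**3

# Standard Gaussian, the claimed image under T_{-1}: k_mu(lam) = lam**2/2, V = 1.
lam = sp.symbols('lam')
print(sp.diff(lam**2 / 2, lam, 2))                           # 1
```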
2012-11-10T07:13:38.000Z
2012-11-10T00:00:00.000
{ "year": 2012, "sha1": "9da35f18d2386d7dbcb23324d25496320f0f7b54", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1211.2299", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9da35f18d2386d7dbcb23324d25496320f0f7b54", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }